Python WebSocket Server & Client Guide with asyncio
The websockets library is the default choice for Python WebSocket
work. It handles the protocol correctly, integrates with asyncio, and
stays out of your way. The trade-off: it’s pure Python, so you’ll hit
a throughput ceiling around 10K concurrent connections per core. For
most applications, that’s fine.
Server with echo and broadcast
This server tracks connected clients, echoes messages back, and
broadcasts to all others. The finally block matters — without it,
crashed connections leak memory because the client set grows forever.
```python
import asyncio
import websockets
import json
import signal

CLIENTS = set()

async def handler(websocket):
    CLIENTS.add(websocket)
    try:
        async for message in websocket:
            await websocket.send(f"echo: {message}")
            others = CLIENTS - {websocket}
            data = json.dumps({"from": id(websocket), "msg": message})
            websockets.broadcast(others, data)
    except websockets.ConnectionClosed:
        pass
    finally:
        CLIENTS.discard(websocket)

async def main():
    loop = asyncio.get_running_loop()
    stop = loop.create_future()
    loop.add_signal_handler(signal.SIGTERM, stop.set_result, None)

    async with websockets.serve(handler, "0.0.0.0", 8765):
        print("Server running on ws://0.0.0.0:8765")
        await stop  # Run until SIGTERM

if __name__ == "__main__":
    asyncio.run(main())
```

websockets.broadcast() sends to multiple clients concurrently and
silently drops failed sends. The signal handler gives you graceful
shutdown — connections finish their current message before closing.
Bind to 0.0.0.0, not localhost, or Docker and reverse proxies
can’t reach it.
Client with reconnection
Clients disconnect. Networks fail. Mobile devices switch from Wi-Fi to cellular. Your client must handle this without losing the user’s session.
The pattern below uses exponential backoff with jitter and a cap. Without the cap, a client offline for an hour would wait over 30 minutes before retrying. Without jitter, all clients reconnect at the same instant after an outage — a thundering herd that can take down your server.
```python
import asyncio
import websockets
import random

async def connect_with_backoff(uri):
    delay = 1
    while True:
        try:
            async with websockets.connect(uri) as ws:
                delay = 1  # Reset on success
                async for message in ws:
                    print(f"Received: {message}")
        except (websockets.ConnectionClosed, OSError) as e:
            jitter = random.uniform(0, delay * 0.5)
            wait = min(delay + jitter, 30)
            print(f"Disconnected ({e}), retry in {wait:.1f}s")
            await asyncio.sleep(wait)
            delay = min(delay * 2, 30)

asyncio.run(connect_with_backoff("ws://localhost:8765"))
```

Note what this doesn’t do: it doesn’t queue messages during disconnection, track acknowledgments, or resume from where it left off. For a chat app demo, that’s fine. For a product where dropped messages mean angry users, you need a protocol layer on top (see the protocol gap).
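The backoff arithmetic is easy to get wrong, so here is the schedule in isolation. This is a pure-Python sketch of the same capped, jittered growth the client above uses; backoff_delays is a name invented for this illustration:

```python
import random

def backoff_delays(base=1.0, cap=30.0, attempts=6, seed=None):
    """Yield the capped, jittered waits the reconnecting client would use."""
    rng = random.Random(seed)
    delay = base
    for _ in range(attempts):
        jitter = rng.uniform(0, delay * 0.5)  # up to 50% jitter
        yield min(delay + jitter, cap)        # never wait longer than cap
        delay = min(delay * 2, cap)           # exponential growth, capped

print([round(w, 1) for w in backoff_delays(seed=1)])
```

The cap kicks in by the sixth attempt: delays grow 1, 2, 4, 8, 16, then pin at 30 seconds regardless of jitter.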
FastAPI integration
If you’re already using FastAPI, don’t add the websockets library
separately. FastAPI has built-in WebSocket support through Starlette.
```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
clients: list[WebSocket] = []

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    clients.append(websocket)
    try:
        while True:
            data = await websocket.receive_text()
            for client in clients:
                if client != websocket:
                    await client.send_text(data)
    except WebSocketDisconnect:
        pass
    finally:
        # Remove the client on any exit path, not just a clean disconnect,
        # so crashed connections don't leak entries in the list.
        clients.remove(websocket)
```

Run with uvicorn app:app --workers 4. Each worker is a separate
process with its own connection set, so clients on different workers
can’t see each other. You need Redis or NATS as a pub/sub bridge
to broadcast across workers. This is a fundamental limitation of
multi-process Python, not a FastAPI issue.
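The bridge pattern looks like this. The Hub class below is an in-memory stand-in for the Redis or NATS channel (a sketch to show the shape, not production code): each worker subscribes once and fans incoming messages out to the WebSocket clients it owns.

```python
import asyncio

class Hub:
    """In-memory stand-in for the Redis/NATS channel between workers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        # Each worker process would hold one subscription.
        queue = asyncio.Queue()
        self.subscribers.append(queue)
        return queue

    def publish(self, message):
        # Fan the message out to every worker's queue.
        for queue in self.subscribers:
            queue.put_nowait(message)

async def demo():
    hub = Hub()
    worker_a = hub.subscribe()
    worker_b = hub.subscribe()
    hub.publish("hello")  # any worker can publish
    # In the real setup, each worker drains its queue and forwards
    # messages to its local WebSocket clients.
    return await worker_a.get(), await worker_b.get()

print(asyncio.run(demo()))  # ('hello', 'hello')
```

With Redis, subscribe becomes a PSUBSCRIBE on a channel and publish becomes a PUBLISH; the fan-out logic in each worker stays the same.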
Python-specific gotchas
The GIL doesn’t matter for WebSockets. WebSocket workloads are I/O-bound. The GIL blocks CPU-bound threads, but asyncio doesn’t use threads for I/O. However, if you do CPU-heavy work per message (image processing, ML inference), the GIL serializes that work. Offload it to a process pool:
```python
from concurrent.futures import ProcessPoolExecutor
import asyncio

pool = ProcessPoolExecutor(max_workers=4)

async def handler(websocket):
    async for message in websocket:
        loop = asyncio.get_running_loop()
        # Run the CPU-bound function in a separate process so the
        # event loop stays free to service other connections.
        result = await loop.run_in_executor(
            pool, cpu_heavy_work, message
        )
        await websocket.send(result)
```

Don’t mix asyncio.run() with existing event loops. If you’re
inside a Jupyter notebook, Django, or any framework that already runs
an event loop, calling asyncio.run() throws RuntimeError. Use
await directly or asyncio.create_task() instead.
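A quick self-contained demonstration of both the failure and the fix (job and main are names invented for this illustration):

```python
import asyncio

async def job():
    return 42

async def main():
    # Wrong: asyncio.run() inside a running loop raises RuntimeError.
    coro = job()
    try:
        asyncio.run(coro)
    except RuntimeError as exc:
        coro.close()  # suppress the "coroutine was never awaited" warning
        print(f"Caught: {exc}")

    # Right: schedule the coroutine on the already-running loop.
    return await asyncio.create_task(job())

print(asyncio.run(main()))  # 42
```

The single asyncio.run() at module level is fine because no loop is running yet; everything inside main() must use await or create_task().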
Thread safety: asyncio objects are not thread-safe. If you call
websocket.send() from a thread (a Django view, a Celery task),
the coroutine either never runs at all or, worse, corrupts the event
loop’s internal state. Use asyncio.run_coroutine_threadsafe()
to schedule work on the event loop from another thread:
```python
# loop is the server's running event loop, captured beforehand with
# asyncio.get_running_loop(); this call is safe from any thread.
asyncio.run_coroutine_threadsafe(
    websocket.send("from thread"), loop
)
```

Throughput ceiling is around 10K concurrent connections per core
with the standard event loop. Install uvloop to roughly double that:

```python
import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
```

Beyond that, you need multiple worker processes.
Deployment
A WebSocket server on localhost is a demo. Here’s how to run one in production.
systemd — the simplest option for a single server:
```ini
[Unit]
Description=WebSocket Server
After=network.target

[Service]
User=www-data
ExecStart=/usr/bin/python3 /opt/wsserver/server.py
Restart=always
RestartSec=3
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
```

LimitNOFILE matters — the default of 1024 means you can’t hold
more than ~1000 connections. Set it to at least 65535 for any
real workload.
Docker — for containerized deployments:
```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8765
CMD ["python", "server.py"]
```

Run with docker run -p 8765:8765 --ulimit nofile=65535:65535.
The --ulimit flag is the Docker equivalent of LimitNOFILE.
Nginx reverse proxy — you need this in front of your WebSocket server to handle TLS termination:
```nginx
location /ws {
    proxy_pass http://127.0.0.1:8765;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400;
}
```

The proxy_read_timeout is critical. Nginx defaults to 60 seconds,
and it will close idle WebSocket connections after that. Set it to
86400 (24 hours) or configure application-level pings.
The protocol gap
WebSockets give you a bidirectional byte pipe. Nothing more. In production, you quickly discover you need:
- Message continuity — the client reconnects, but what about the messages it missed?
- Acknowledgment — did the server actually process this?
- Presence — who’s connected right now?
- Auth — how do you validate tokens before accepting the upgrade?
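A minimal sketch of the first two items, numbering outgoing messages and replaying unacknowledged ones after a reconnect (OutboundQueue is a name invented for this illustration):

```python
import json
from collections import deque

class OutboundQueue:
    """Number outgoing messages and replay unacked ones after a reconnect."""

    def __init__(self):
        self.seq = 0
        self.pending = deque()  # (seq, frame) pairs awaiting an ack

    def send(self, payload):
        self.seq += 1
        frame = json.dumps({"seq": self.seq, "msg": payload})
        self.pending.append((self.seq, frame))
        return frame  # in real code, pass this to websocket.send()

    def ack(self, seq):
        # The peer confirms everything up to seq; drop it from the buffer.
        while self.pending and self.pending[0][0] <= seq:
            self.pending.popleft()

    def unacked(self):
        # Frames to resend, in order, after reconnecting.
        return [frame for _, frame in self.pending]

q = OutboundQueue()
q.send("a"); q.send("b"); q.send("c")
q.ack(1)
print(len(q.unacked()))  # 2
```

Even this toy version raises real design questions: how long do you buffer, who persists the sequence counter across server restarts, and what happens when the buffer overflows.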
You can build all of this. Many teams do, and then spend months maintaining it. The open-source option is Socket.IO, which handles reconnection and rooms. For a managed approach, Ably handles reconnection with message resume, presence, and guaranteed delivery without you operating the infrastructure.
For a hackathon, raw websockets is fine. For a product with users
who notice dropped messages, you need something on top.
When not to use Python
Python works well for WebSocket servers up to moderate scale. Beyond about 50K concurrent connections per server, look at Go or Rust — both handle hundreds of thousands of connections per process with lower memory overhead (~2KB per goroutine vs ~8KB per asyncio task).
The other case: if your message processing is CPU-bound (video transcoding, heavy computation per message), Python’s per-message overhead hurts. Use Python as the coordination layer and offload heavy work to a compiled service.
Frequently asked questions
What is the best Python WebSocket library?
Use websockets for standalone async servers and clients. It has the
largest community, correct protocol handling, and clean asyncio
integration. For Django projects, use Django Channels — it plugs
into Django’s ORM and auth system. For FastAPI, use the built-in
WebSocket support (Starlette underneath). Avoid websocket-client
(the older synchronous library) for new projects — it blocks on
every operation and can’t handle concurrent connections efficiently.
How do I handle reconnection in Python?
Wrap your connection in a loop with exponential backoff, as shown in the client example above. The details most tutorials skip: cap your backoff at 30 seconds (otherwise clients wait forever), add jitter (otherwise all clients reconnect simultaneously after an outage and create a thundering herd), and decide what to do about messages missed during disconnection. If you need guaranteed delivery, you need a protocol layer like Socket.IO or Ably that tracks message history and resumes from the last received message.
Can Python handle thousands of WebSocket connections?
Yes. Asyncio multiplexes connections on a single thread — no
thread-per-connection overhead. A single process handles roughly
10K concurrent connections before you hit the event loop’s
throughput limit. Use uvloop to push that higher. Beyond that,
run multiple worker processes behind a load balancer. The
bottleneck is rarely connection count itself — it’s what you do
per message. Routing JSON is fine. Heavy computation per message
is where Python slows down.
How do I deploy a Python WebSocket server in production?
Run your server behind Nginx for TLS termination and use systemd
or Docker for process management. Key details people miss: set
LimitNOFILE to at least 65535 (the default 1024 caps you at
~1000 connections), set Nginx’s proxy_read_timeout to 86400
(the 60-second default kills idle WebSocket connections), and bind
to 0.0.0.0 not localhost (or containers and proxies can’t
reach it). See the deployment section for configs.
Related content
- WebSocket Protocol: RFC 6455 Handshake, Frames & More — The protocol underlying all WebSocket libraries
- WebSocket API: Events, Methods & Properties — Browser API reference for client-side WebSocket code
- WebSocket Libraries, Tools & Specs by Language — Curated list of libraries across all languages
- WebSockets at Scale — Architecture patterns for scaling Python WebSocket servers
- WebSocket Close Codes — Understanding close codes for error handling