Learn Semaphores for Rate Limiting | Asyncio in Practice
Python Asyncio in Depth

Semaphores for Rate Limiting


A lock allows exactly one coroutine at a time. A semaphore allows up to n coroutines at a time. This makes it the right tool for rate limiting – preventing your program from hammering an API or exhausting a connection pool.
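To see the cap in action before adding real HTTP calls, here is a minimal sketch (not part of the lesson's examples) that counts how many coroutines hold a semaphore slot at the same moment. The `worker`/`counts` names are illustrative, and `asyncio.sleep` stands in for real I/O:

```python
import asyncio

async def worker(sem, counts):
    async with sem:
        # Track concurrency while holding a slot
        counts["active"] += 1
        counts["peak"] = max(counts["peak"], counts["active"])
        await asyncio.sleep(0.01)  # stand-in for an I/O operation
        counts["active"] -= 1

async def main():
    sem = asyncio.Semaphore(3)  # at most 3 workers inside at once
    counts = {"active": 0, "peak": 0}
    await asyncio.gather(*[worker(sem, counts) for _ in range(10)])
    return counts["peak"]

peak = asyncio.run(main())
print(peak)  # prints 3: ten workers ran, but never more than 3 at once
```

Even though all ten workers are scheduled immediately, the recorded peak never exceeds the semaphore's limit.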

The Problem: Too Many Concurrent Requests

Without any limit, asyncio.gather() fires every request simultaneously. The example below opens 20 connections at once; scale that to hundreds of posts and most APIs will throttle or reject you:

```python
import asyncio
import httpx
import nest_asyncio

nest_asyncio.apply()

# Fetching 20 posts with no concurrency limit
async def fetch_post(client, post_id):
    response = await client.get(
        f"https://jsonplaceholder.typicode.com/posts/{post_id}"
    )
    return response.json()["title"]

async def main():
    async with httpx.AsyncClient() as client:
        # All 20 requests fire simultaneously
        titles = await asyncio.gather(
            *[fetch_post(client, post_id) for post_id in range(1, 21)]
        )
        print(f"Fetched {len(titles)} titles")

asyncio.run(main())
```

Using asyncio.Semaphore()

Wrap each request with a semaphore to cap the number of concurrent connections:

```python
import asyncio
import httpx
import nest_asyncio

nest_asyncio.apply()

semaphore = asyncio.Semaphore(5)  # Maximum 5 concurrent requests

# Fetching a post with a concurrency limit
async def fetch_post(client, post_id):
    async with semaphore:  # Acquiring a semaphore slot
        response = await client.get(
            f"https://jsonplaceholder.typicode.com/posts/{post_id}"
        )
        return response.json()["title"]

async def main():
    async with httpx.AsyncClient() as client:
        titles = await asyncio.gather(
            *[fetch_post(client, post_id) for post_id in range(1, 21)]
        )
        print(f"Fetched {len(titles)} titles")

asyncio.run(main())
```

All 20 tasks are created immediately, but at most 5 can hold a semaphore slot at once. The rest wait until a slot is released.
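A variant worth knowing (an assumption on our part, not the lesson's code): instead of a module-level global, the semaphore can be created inside `main()` and passed as a parameter. This keeps the limit scoped to one run and makes the coroutine easier to test. Here `asyncio.sleep` stands in for the real `client.get(...)` call:

```python
import asyncio

async def fetch(sem, item):
    async with sem:  # wait for a free slot before doing the "request"
        await asyncio.sleep(0.01)  # stand-in for the real HTTP call
        return item * 2

async def main():
    sem = asyncio.Semaphore(5)  # cap at 5 concurrent fetches
    # gather preserves order even though completion order may differ
    return await asyncio.gather(*[fetch(sem, i) for i in range(20)])

results = asyncio.run(main())
print(len(results))  # prints 20
```

Passing the semaphore explicitly also avoids subtle event-loop binding issues that global asyncio primitives could cause on older Python versions.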

Combining a Semaphore with a Timeout

Rate limiting and timeouts work well together – limit concurrency and fail fast if a request hangs:

```python
import asyncio
import httpx
import nest_asyncio

nest_asyncio.apply()

semaphore = asyncio.Semaphore(3)

# Fetching a post with both a semaphore and a per-request timeout
async def fetch_post(client, post_id):
    async with semaphore:
        try:
            response = await asyncio.wait_for(
                client.get(
                    f"https://jsonplaceholder.typicode.com/posts/{post_id}"
                ),
                timeout=4.0,
            )
            return response.json()["title"]
        except asyncio.TimeoutError:
            return f"Post {post_id} timed out"

async def main():
    async with httpx.AsyncClient() as client:
        titles = await asyncio.gather(
            *[fetch_post(client, post_id) for post_id in range(1, 11)]
        )
        for title in titles:
            print(title)

asyncio.run(main())
```

Lock vs Semaphore


What is the key difference between asyncio.Lock() and asyncio.Semaphore(n)?



Section 3. Chapter 3
