Learn Semaphores for Rate Limiting | Asyncio in Practice
Python Asyncio in Depth

Semaphores for Rate Limiting


A lock allows exactly one coroutine at a time. A semaphore allows up to n coroutines at a time. This makes it the right tool for rate limiting – preventing your program from hammering an API or exhausting a connection pool.
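To make the "up to n at a time" behavior concrete, here is a minimal, self-contained sketch (no network; each worker just sleeps to simulate work) that counts how many coroutines hold a `Semaphore(3)` simultaneously:

```python
import asyncio

async def demo():
    semaphore = asyncio.Semaphore(3)
    active = 0  # coroutines currently inside the semaphore
    peak = 0    # highest concurrency observed

    async def worker():
        nonlocal active, peak
        async with semaphore:
            active += 1
            peak = max(peak, active)
            await asyncio.sleep(0.01)  # simulate work
            active -= 1

    await asyncio.gather(*[worker() for _ in range(10)])
    return peak

peak = asyncio.run(demo())
print(f"Peak concurrency: {peak}")  # never exceeds 3
```

Even though all 10 workers start at once, the counter never exceeds 3 — exactly the guarantee a semaphore provides.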

The Problem: Too Many Concurrent Requests

Without any limit, asyncio.gather() fires all requests simultaneously. For 20 posts, that's 20 connections at once – many APIs will throttle or reject a burst like that:

```python
import asyncio
import httpx
import nest_asyncio

nest_asyncio.apply()

# Fetching 20 posts with no concurrency limit
async def fetch_post(client, post_id):
    response = await client.get(
        f"https://jsonplaceholder.typicode.com/posts/{post_id}"
    )
    return response.json()["title"]

async def main():
    async with httpx.AsyncClient() as client:
        # All 20 requests fire simultaneously
        titles = await asyncio.gather(
            *[fetch_post(client, post_id) for post_id in range(1, 21)]
        )
        print(f"Fetched {len(titles)} titles")

asyncio.run(main())
```

Using asyncio.Semaphore()

Wrap each request with a semaphore to cap the number of concurrent connections:

```python
import asyncio
import httpx
import nest_asyncio

nest_asyncio.apply()

semaphore = asyncio.Semaphore(5)  # Maximum 5 concurrent requests

# Fetching a post with a concurrency limit
async def fetch_post(client, post_id):
    async with semaphore:  # Acquiring a semaphore slot
        response = await client.get(
            f"https://jsonplaceholder.typicode.com/posts/{post_id}"
        )
        return response.json()["title"]

async def main():
    async with httpx.AsyncClient() as client:
        titles = await asyncio.gather(
            *[fetch_post(client, post_id) for post_id in range(1, 21)]
        )
        print(f"Fetched {len(titles)} titles")

asyncio.run(main())
```

All 20 tasks are created immediately, but at most 5 can hold a semaphore slot at once. The rest wait until a slot is released.
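The same pattern can be packaged as a small helper so the semaphore doesn't have to live in a global. The helper name `bounded_gather` is our own, not a standard API – a sketch:

```python
import asyncio

async def bounded_gather(coros, limit):
    """Run coroutines concurrently, at most `limit` at a time."""
    semaphore = asyncio.Semaphore(limit)

    async def bounded(coro):
        async with semaphore:  # each coroutine must hold a slot to run
            return await coro

    # gather preserves input order, so results line up with coros
    return await asyncio.gather(*(bounded(c) for c in coros))

async def job(i):
    await asyncio.sleep(0.01)  # stand-in for real I/O
    return i * 2

results = asyncio.run(bounded_gather((job(i) for i in range(8)), limit=3))
print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Because `gather` keeps input order, the results come back in the same order the coroutines were submitted, regardless of which finished first.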

Combining a Semaphore with a Timeout

Rate limiting and timeouts work well together – limit concurrency and fail fast if a request hangs:

```python
import asyncio
import httpx
import nest_asyncio

nest_asyncio.apply()

semaphore = asyncio.Semaphore(3)

# Fetching a post with both a semaphore and a per-request timeout
async def fetch_post(client, post_id):
    async with semaphore:
        try:
            response = await asyncio.wait_for(
                client.get(
                    f"https://jsonplaceholder.typicode.com/posts/{post_id}"
                ),
                timeout=4.0,
            )
            return response.json()["title"]
        except asyncio.TimeoutError:
            return f"Post {post_id} timed out"

async def main():
    async with httpx.AsyncClient() as client:
        titles = await asyncio.gather(
            *[fetch_post(client, post_id) for post_id in range(1, 11)]
        )
        for title in titles:
            print(title)

asyncio.run(main())
```

Lock vs Semaphore


What is the key difference between asyncio.Lock() and asyncio.Semaphore(n)?



Section 3, Chapter 3
