Python Asyncio in Depth

Working with httpx for HTTP Requests


Every real asyncio application eventually makes HTTP requests. httpx is the most common choice for async HTTP in Python – it mirrors the requests API you already know, adds full async support, and supports connection pooling, timeouts, and redirect handling out of the box.

Installation

pip install httpx

Making a Single Request

The async counterpart to requests.get() is await client.get(). Use httpx.AsyncClient as an async context manager to ensure connections are properly closed:

import asyncio

import httpx
import nest_asyncio

nest_asyncio.apply()

# Fetching a single post from JSONPlaceholder
async def fetch_post(post_id):
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://jsonplaceholder.typicode.com/posts/{post_id}"
        )
        data = response.json()
        print(f"Title: {data['title']}")
        print(f"Status: {response.status_code}")

asyncio.run(fetch_post(1))

Reusing a Client Across Requests

Creating a new AsyncClient for every request is wasteful – it opens a fresh TCP connection each time. Pass a shared client into your functions:

import asyncio

import httpx
import nest_asyncio

nest_asyncio.apply()

# Sharing a single client across multiple requests
async def fetch_post(client, post_id):
    response = await client.get(
        f"https://jsonplaceholder.typicode.com/posts/{post_id}"
    )
    return response.json()["title"]

async def main():
    async with httpx.AsyncClient() as client:
        titles = await asyncio.gather(
            fetch_post(client, 1),
            fetch_post(client, 2),
            fetch_post(client, 3),
        )
        for title in titles:
            print(title)

asyncio.run(main())

Setting Timeouts

Pass a timeout parameter to AsyncClient to apply it globally across all requests:

import asyncio

import httpx
import nest_asyncio

nest_asyncio.apply()

# Applying a global timeout to all requests
async def fetch_posts(client):
    response = await client.get("https://jsonplaceholder.typicode.com/posts")
    return response.json()

async def main():
    async with httpx.AsyncClient(timeout=5.0) as client:
        try:
            posts = await fetch_posts(client)
            print(f"Fetched {len(posts)} posts")
        except httpx.TimeoutException:
            print("Request timed out")

asyncio.run(main())

Handling HTTP Errors

Use response.raise_for_status() to raise an exception for 4xx and 5xx responses:

import asyncio

import httpx
import nest_asyncio

nest_asyncio.apply()

# Handling a 404 response gracefully
async def fetch_post(client, post_id):
    response = await client.get(
        f"https://jsonplaceholder.typicode.com/posts/{post_id}"
    )
    try:
        response.raise_for_status()
        return response.json()["title"]
    except httpx.HTTPStatusError as error:
        return f"Error {error.response.status_code} for post {post_id}"

async def main():
    async with httpx.AsyncClient() as client:
        results = await asyncio.gather(
            fetch_post(client, 1),
            fetch_post(client, 99999),  # Non-existent post – triggers a 404
        )
        for result in results:
            print(result)

asyncio.run(main())

requests vs httpx.AsyncClient


Why should a single httpx.AsyncClient instance be shared across multiple requests rather than creating a new one for each request?

Section 3, Chapter 1
