MLOps for Machine Learning Engineers
Model Deployment with FastAPI and Docker

Testing and Running the API

Once you have containerized your FastAPI application and started the Docker container, you need to verify that the API is running correctly and returning predictions as expected. To run your Docker container, use a command like:
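docker run -p 8000:8000 your_image_name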

Replace your_image_name with the name of your built image. This command maps port 8000 on your local machine to port 8000 inside the container, making the FastAPI app accessible at http://localhost:8000.

Testing the /predict endpoint can be done using command line tools like curl or by sending an HTTP request from Python. Always ensure your input data matches the expected format defined by your FastAPI model. For example, if your model expects a JSON payload with certain fields, your test requests should include those fields with appropriate sample values.
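For example, a quick check with curl might look like the following; the field names here simply mirror the sample values used in the Python snippet below, so substitute the fields your own model actually expects:

curl -X POST http://localhost:8000/predict \
     -H "Content-Type: application/json" \
     -d '{"feature1": 3.5, "feature2": 1.2, "feature3": 0.8}'

The same request can also be sent from Python using the requests library: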

import requests

# Replace with the actual URL if running on a different host or port
url = "http://localhost:8000/predict"

# Example input data matching the expected schema of your FastAPI model
input_data = {
    "feature1": 3.5,
    "feature2": 1.2,
    "feature3": 0.8
}

response = requests.post(url, json=input_data)

if response.status_code == 200:
    print("Prediction:", response.json())
else:
    print("Error:", response.status_code, response.text)

Note

Always validate input data and handle errors gracefully in production APIs. Never assume that clients will send well-formed or expected data. Use FastAPI's validation features and return clear error messages to help users and protect your service from unexpected input.
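For illustration, a minimal sketch of how the endpoint might declare its input schema with Pydantic is shown below. The PredictionInput model, its fields, and the placeholder scoring logic are assumptions that mirror the sample payload above, not the course's actual application code:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical input schema mirroring the sample payload used above
class PredictionInput(BaseModel):
    feature1: float
    feature2: float
    feature3: float

@app.post("/predict")
def predict(data: PredictionInput):
    # Placeholder logic; a real service would call the loaded model here
    score = 0.5 * data.feature1 + 0.3 * data.feature2 + 0.2 * data.feature3
    return {"prediction": score}

With a schema like this, FastAPI automatically rejects payloads with missing or mistyped fields and returns a 422 response describing the problem, which is exactly what the status-code check in the Python snippet above would surface.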

Question: Which HTTP method and payload format should you use to test the /predict endpoint of your FastAPI API for making prediction requests?
