Configuring the Docker Compose File
Now that all the necessary configurations are in place (for Filebeat, Logstash, and the Flask application), it is time to bring them together and run them as a single system. To do this, we will use Docker Compose.
Docker Compose is a tool that allows you to define all the required containers in a single YAML file and run them with one command.
Instead of writing a separate docker run command for each service, you write their configurations in docker-compose.yml and run everything together.
A simple docker-compose.yml file looks like this:
docker-compose.yml
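A minimal sketch might contain just one service; the service name and port numbers below are placeholders for illustration, not part of the final setup:

```yaml
services:
  app:
    build: .            # build the image from the Dockerfile in this folder
    ports:
      - "5050:5000"     # host port 5050 -> container port 5000
```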
Now, let's create a complete file step by step, explaining each container along the way.
Flask Application
This is your main application that generates logs.
docker-compose.yml
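A sketch of this service definition, assembled from the description that follows (the build context `.` and mount paths match the text; treat it as a starting point rather than a canonical file):

```yaml
services:
  flask-app:
    build: .                    # build from the Dockerfile in the current folder
    container_name: flask-app
    volumes:
      - ./logs:/logs            # share the log folder with the host machine
    ports:
      - "5050:5000"             # host 5050 -> container 5000
```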
This block tells Docker to build the container from the current folder (where your Dockerfile is located).
The container will be named flask-app.
The volumes line connects the local ./logs folder to /logs inside the container. This way, all logs created by the application are saved both inside the container and on your computer.
The application runs inside the container on port 5000, but we map it to port 5050 on your machine so you can access it in your browser at http://localhost:5050.
Filebeat
Filebeat acts as an agent that monitors log files and sends new entries to Logstash.
docker-compose.yml
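A sketch of the Filebeat service block, based on the points below (the image path and the config mount target are the usual defaults for the official Filebeat image, assumed here):

```yaml
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.15.0
    container_name: filebeat
    user: root                            # root is needed to read the log files
    volumes:
      - ./logs:/logs:ro                   # logs are mounted read-only
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    depends_on:
      - flask-app
      - logstash
```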
Filebeat runs from the official 8.15.0 image.
It runs as the root user so it has permission to read the log files.
We attach the ./logs folder in read-only mode, along with the filebeat.yml configuration file. This tells Filebeat which logs to watch and where to send them.
depends_on ensures Filebeat starts only after the application and Logstash are running.
Logstash
Logstash receives logs from Filebeat, processes them (for example, parses JSON), and forwards them to Elasticsearch.
docker-compose.yml
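A sketch of the Logstash service block, matching the description below (the pipeline mount path is the standard location for the official Logstash image, assumed here):

```yaml
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:8.15.0
    container_name: logstash
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    ports:
      - "5044:5044"               # Beats input port used by Filebeat
    depends_on:
      - elasticsearch
```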
This container uses the official Logstash image.
We mount the logstash.conf configuration file, which tells Logstash how to process incoming logs.
The container starts only after Elasticsearch is running, because it needs a destination for the processed logs.
Port 5044 is exposed so Filebeat can send data to Logstash.
Elasticsearch
Elasticsearch stores logs as documents and allows you to search through them quickly.
docker-compose.yml
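A sketch of the Elasticsearch service block, following the settings described below (the exact heap size in ES_JAVA_OPTS is an assumption; 512 MB is a common choice for local testing):

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node        # single-node cluster for local testing
      - xpack.security.enabled=false      # no usernames or passwords locally
      - ES_JAVA_OPTS=-Xms512m -Xmx512m    # cap the JVM heap (size assumed)
    ports:
      - "9200:9200"
```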
We use the official Elasticsearch image, version 8.15.0.
discovery.type=single-node specifies that we are running a single-node cluster (enough for local testing); xpack.security.enabled=false disables security so you do not have to deal with usernames and passwords; ES_JAVA_OPTS limits the Java process memory usage.
The service is accessible on port 9200 at http://localhost:9200.
Kibana
Kibana is a web interface for Elasticsearch. It helps visualize logs and create dashboards.
docker-compose.yml
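A sketch of the Kibana service block, based on the points below (the ELASTICSEARCH_HOSTS value uses the Elasticsearch service name as the hostname, which is how Compose containers reach each other):

```yaml
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:8.15.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200   # service name resolves inside the Compose network
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```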
We use the official Kibana image.
Port 5601 is exposed so you can access Kibana in your browser at http://localhost:5601.
ELASTICSEARCH_HOSTS points Kibana to our Elasticsearch container so it can fetch and display data.
Final docker-compose.yml
Here is the complete Docker Compose file with all services connected in the proper order:
docker-compose.yml
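Assembling the per-service sketches above gives a complete file along these lines (image tags, mount targets, and the JVM heap size are assumptions consistent with the rest of this chapter):

```yaml
services:
  flask-app:
    build: .                    # build from the Dockerfile in the current folder
    container_name: flask-app
    volumes:
      - ./logs:/logs            # share the log folder with the host
    ports:
      - "5050:5000"

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.15.0
    container_name: filebeat
    user: root                  # root is needed to read the log files
    volumes:
      - ./logs:/logs:ro
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    depends_on:
      - flask-app
      - logstash

  logstash:
    image: docker.elastic.co/logstash/logstash:8.15.0
    container_name: logstash
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    ports:
      - "5044:5044"             # Beats input port
    depends_on:
      - elasticsearch

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - ES_JAVA_OPTS=-Xms512m -Xmx512m   # heap size assumed
    ports:
      - "9200:9200"

  kibana:
    image: docker.elastic.co/kibana/kibana:8.15.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

You can then bring the whole stack up with a single `docker compose up -d` and stop it again with `docker compose down`.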
This docker-compose.yml starts all the services in the correct order. The Flask application generates logs, Filebeat collects them and sends them to Logstash, Logstash processes the logs and forwards them to Elasticsearch, and Kibana provides a web interface for viewing and analyzing the data.
All that is left is to run the system and test it, which will be covered in the next chapter.
1. What is the main purpose of Docker Compose in this setup?
2. Which service is responsible for collecting log files from the Flask application and sending them to Logstash?