Docker Logs
Docker logs are the records generated by containers and by the Docker Engine itself. They are essential for debugging, monitoring, and diagnosing performance issues: by inspecting the logs, you can see what is happening inside each container as well as inside Docker itself.
Table of Contents
- What is Docker Logging?
- How is Docker Logging Different?
- Docker Container Logs Command Options
- Docker Logging Tools and Software
- Why view Docker Logs?
- Getting Started with Docker Container Logs
- Using Log Shippers for Better Log Management
- Monitoring Docker Daemon Logs
- Viewing Docker Logs in Different Scenarios
- Troubleshooting Common Docker Logging Problems
- Best Practices For Managing Docker Container Logs
What is Docker Logging?
Logs are essential in managing Dockerized applications. When an application runs inside a Docker container, it generates logs that are sent to the container's standard output (stdout) and standard error (stderr). The container’s logging driver collects these logs and can forward them to different places, like a file, a log collector, or an external service, for analysis or storage.
By default, Docker uses the json-file logging driver, which stores logs in JSON format on the host system.
When you're working with Docker, accessing logs is essential for troubleshooting and understanding how your containers are performing. To view the logs of a container, use the docker logs command:
docker logs containerName
This shows the logs of the specified container. If you want to follow the logs in real time as they are being written, add the -f option:
docker logs -f containerName
Docker also provides several options to filter and customize the output: --since shows logs from a specific time onward, --tail limits the number of log lines shown, and --timestamps includes the time each log entry was made.
To learn more, refer to the article How to Get Docker Logs?
How is Docker Logging Different?
Logging in containerized environments is more complex than for traditional applications, which run on a single host and keep their logs in a few predictable files. In Docker, workloads are spread across many short-lived containers, so identifying an issue requires gathering more data from more places.
1. Containers are Ephemeral
Docker containers output logs to stdout and stderr. By default, these logs are stored in JSON format on the Docker host, one file per container, with a timestamp and the log's origin (stdout or stderr) recorded for each entry. On a Linux Docker host they live under the /var/lib/docker/containers/ directory, in a file like /var/lib/docker/containers/<container_id>/<container_id>-json.log.
Since containers are short-lived, their logs are not persistent unless you centralize them. Storing logs on the Docker host is risky because they can accumulate quickly and fill up your disk space. It’s better to use a centralized logging system and set up log rotation to avoid potential issues.
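Each line in such a json-file log is a standalone JSON object with log, stream, and time fields. A minimal sketch of reading one (the sample line below is illustrative and is simulated locally, so no Docker host is required):

```shell
# Simulate one line of a json-file container log and extract its fields.
# Real files live under /var/lib/docker/containers/<container_id>/.
line='{"log":"hello world\n","stream":"stdout","time":"2024-01-01T12:00:00.000000000Z"}'

# Pull out the "stream" field with python3 (available on most hosts).
stream=$(printf '%s' "$line" | python3 -c 'import json,sys; print(json.load(sys.stdin)["stream"])')
echo "$stream"   # prints: stdout
```

The same approach works for the log and time fields, which is handy when a centralized shipper is not yet in place.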
2. Containers are Multi-Tiered
- Another key challenge in Docker logging is managing logs across multiple layers. For example, you'll have logs from the containerized application and from the host server. The host server logs include system logs and Docker daemon logs, typically stored in /var/log or related subdirectories.
- A basic log aggregator that has access to the host can't easily pull logs from the containerized application; it needs access to the container's file system to collect them. As your infrastructure grows to more containers, you'll also need a way to link log events to specific processes, not just to individual containers.
Docker Container Logs Command Options
Here's a list of the options provided by docker logs to customize and manage log output:

| Command Option | Description |
|---|---|
| --details | Shows additional details included with the log entries. |
| --follow, -f | Streams new log entries in real time as they are created. |
| --since | Displays logs starting from a specific timestamp or relative time (e.g., --since 10m). |
| --tail, -n | Limits output to the given number of lines from the end (e.g., --tail 100). |
| --timestamps, -t | Adds timestamps to each log entry for better context. |
| --until | Limits the logs to those created before a specified timestamp. |
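These options can be combined. As a sketch, the following hypothetical invocation (it assumes a running container named web) shows the last 100 lines from the past ten minutes, each prefixed with a timestamp, and keeps following new output:

```shell
# Show the last 100 log lines written in the past 10 minutes,
# with timestamps, and keep streaming new entries.
# Assumes a running container named "web" exists on this host.
docker logs --since 10m --tail 100 --timestamps -f web
```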
Docker Logging Tools and Software
The following table compares common Docker logging tools and software:

| Tool | Best For | Pros | Cons |
|---|---|---|---|
| ELK Stack | Large-scale log analysis | Powerful search | High resource usage |
| Fluentd | Log forwarding | Lightweight | Needs setup |
| Graylog | Centralized logging | Alerts & search | Multiple components |
| AWS CloudWatch | AWS containers | Native AWS integration | Costly |
| Google Cloud Logging | GCP containers | Works with BigQuery | Complex pricing |
| Azure Monitor Logs | Azure AKS | Seamless Azure integration | Setup required |
| Loki | Lightweight storage | Low footprint | Fewer features |
| Papertrail | Simple real-time logs | Quick setup | Limited free history |
| Splunk | Enterprise security | AI-driven insights | Expensive |
| Logstash | Log filtering | Great for structured logs | Resource-heavy |
Why view Docker Logs?
When running applications inside Docker containers, logs are the best way to understand what’s happening inside. Here’s why checking Docker logs is important:
1. Fixing Errors & Debugging Issues
- If a container fails to start or crashes, logs help pinpoint the issue.
- They show error messages and help you understand why your app isn’t working as expected.
2. Monitoring App Performance
- Logs let you track how your application is running, including response times, background processes, and API calls.
- If something is slowing down, logs can help identify the cause.
3. Keeping Your System Secure
- Logs help detect unusual activity, failed login attempts, or security issues inside containers.
- Useful for audits and compliance checks.
4. Tracking Deployments & CI/CD Pipelines
- If an app fails to deploy, checking logs can show what went wrong.
- Logs help track changes in automated deployment processes.
5. Understanding How Services Communicate
- In a microservices setup, logs show how different services interact.
- If there’s a network issue or a service isn’t responding, logs can help troubleshoot it.
Getting Started with Docker Container Logs
There are two main types of logs in Docker: daemon logs and container logs.
Docker container logs are the records generated by the containers themselves. These logs capture anything a container sends to its stdout or stderr streams. Once captured, they are handed to a logging driver, which forwards them to a destination you choose, such as a file, a remote server, or a log management service.
Here are some basic commands to help you manage Docker logs and container statistics:
- View logs for a container:
docker logs containerName
- Continuously view new logs:
docker logs -f containerName
- View real-time CPU and memory usage:
docker stats
- View resource usage for specific containers:
docker stats containerName1 containerName2
- View running processes inside a container:
docker top containerName
- View Docker events:
docker events
- Check Docker storage usage:
docker system df
While viewing logs directly in the terminal is helpful during development, in production, you should centralize logs for better search, analysis, troubleshooting, and alerting.
1. Logging Driver
Docker uses logging drivers to gather logs from containers and send them to specific destinations. By default, it relies on the 'json-file' logging driver, but you can configure it to suit your needs, including integrating with external logging systems.
For example, to switch to the 'syslog' logging driver, you can use the following command:
docker run --log-driver syslog --log-opt syslog-address=udp://syslog-server:514 alpine echo hello world

2. Configure the Docker Logging Driver
There are two main options for configuring Docker logging drivers:
1. Set a default logging driver for all containers
You can set a default logging driver for all containers by editing the Docker configuration file (/etc/docker/daemon.json). For example, to use journald as the default logging driver:
{ "log-driver": "journald" }
After making changes to the configuration, restart Docker:
systemctl restart docker
2. Specify a logging driver for individual containers
You can specify a different logging driver for each container when creating it by using the --log-driver and --log-opt options.
Example:
docker run --log-driver syslog --log-opt syslog-address=udp://syslog-server:514 alpine echo hello world
By default, Docker stores logs on the local disk of the Docker host in a JSON file:
/var/lib/docker/containers/[container-id]/[container-id]-json.log
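To locate that file for a given container without copying the ID by hand, docker inspect can print the path directly (a sketch, assuming a container named web on this host):

```shell
# Print the host path of the container's json-file log.
docker inspect --format '{{.LogPath}}' web

# Then inspect it directly (root access is usually required):
# sudo tail -n 20 "$(docker inspect --format '{{.LogPath}}' web)"
```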
However, when you use a logging driver other than json-file or journald, logs are not saved locally. Instead, they are sent over the network, which can be risky if network issues occur. Depending on the delivery mode, Docker may even stop the container if the logging driver fails to ship logs.
3. Docker Log Delivery Modes
Docker supports two delivery modes for logging: blocking and non-blocking.
1. Blocking (Default Mode)
In blocking mode, Docker pauses the container's work until each log message has been delivered. While this ensures all logs reach the logging driver, it can introduce latency, especially if the driver is busy. The default json-file driver rarely blocks in practice, since it writes logs to the local disk.
2. Non-blocking Mode
In non-blocking mode, logs are first written to an in-memory ring buffer. If the logging driver is unavailable, the container continues its work while the logs wait in the buffer. This prevents logging from slowing the container down, but it can lead to lost logs if the buffer fills up. The buffer size can be adjusted with the max-buffer-size option.
To enable non-blocking mode, either modify the daemon.json file:
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "non-blocking"
  }
}
Or specify it on a per-container basis:
docker run --log-opt mode=non-blocking alpine echo hello world
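The ring-buffer size mentioned above can be set alongside the delivery mode. A daemon.json sketch (the 2m value is illustrative, not a recommendation):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "2m"
  }
}
```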

4. Logging Driver Options
Docker offers a range of logging drivers, plus third-party driver plugins, to integrate with different logging systems. Some of the available options include:
- logagent: A general-purpose log shipper that adds metadata to logs (e.g., container name, ID, and Kubernetes metadata).
- syslog: Sends logs to a syslog server, commonly used for applications.
- journald: Sends logs to the systemd journal.
- fluentd: Sends logs to the Fluentd collector as structured data.
- awslogs: Sends logs to AWS CloudWatch Logs.
- splunk: Sends logs to Splunk using the HTTP Event Collector (HEC).
- etwlogs: Writes logs as Event Tracing for Windows (ETW) events (Windows only).
Each driver serves different use cases, so choose one that fits your logging infrastructure best.
5. Using the json-file Log Driver with a Log Shipper Container
The json-file log driver, combined with a log shipper, is one of the most practical ways to manage Docker container logs. This method keeps a local copy of logs stored on your server while also giving you centralized log management for better analysis and monitoring.
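One common shape for this pattern is a shipper container that reads the host's container logs. A hypothetical Compose sketch (the shipper image and the LOGS_TOKEN value are placeholders for whatever shipper and credentials you actually use):

```yaml
# Hypothetical sketch: the app keeps the default json-file driver,
# while a shipper container reads the host's log files and Docker socket.
services:
  app:
    image: myapp:latest                 # placeholder application image

  log-shipper:
    image: sematext/logagent:latest     # example shipper; any equivalent works
    environment:
      - LOGS_TOKEN=your-token-here      # placeholder credential
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
```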
Using Log Shippers for Better Log Management
Monitoring container logs becomes more challenging as your infrastructure grows. Unlike traditional applications that log to files, containers often write logs to standard output (stdout) and standard error (stderr). This mix of log formats can make it hard to understand and process logs efficiently.
Why use a log shipper?
Log shippers like Logagent, Logstash, or rsyslog help structure and process container logs before they are sent to storage or analysis tools. They are particularly useful in combination with platforms like Elasticsearch. However, adding more tools to your logging pipeline also increases complexity and the number of possible points of failure.
Docker Logging Strategies and Best Practices
Managing logs in Docker can be tricky, but there are several strategies to help streamline the process.
- Application-based logging: Here, the application inside the container handles its own logging. For instance, using a tool like Log4j2, the logs are sent directly to a remote location, bypassing Docker and the host OS. While this provides the most control, it also puts more load on the application and can result in data loss if logs are stored within the container. To avoid this, you should either forward logs to a remote service or configure persistent storage.
- Logging with data volumes: Data volumes can be used to persist logs by linking directories inside containers to directories on the host. This ensures that logs remain available even if the container is stopped. Volumes can also be shared between containers. However, this method can complicate container migrations between different hosts, as moving containers might result in data loss.
- Docker logging driver: Docker's logging drivers can capture logs directly from a container's stdout and stderr. By default, Docker uses the 'json-file' driver, which writes logs to files on the host. However, you can configure it to send logs to external systems like syslog or journald. While this approach improves performance, it has some limitations, like limited log parsing and reliance on the availability of the log server.
- Dedicated logging container: A dedicated logging container is another option, where a separate container is set up to gather logs from other containers and send them to a central location. This is particularly beneficial in microservices environments, where containers might need to be scaled or relocated easily.
- Sidecar container: In more complex setups, you can use a sidecar container alongside each application container to handle logging. The sidecar collects, tags, and forwards logs to an external log management system. This provides flexibility and customization for each container’s logs, but it can be resource-intensive and harder to scale.
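The sidecar strategy above can be sketched in Compose, with the application writing to a shared volume and the sidecar tailing it (image names and paths are placeholders; a real sidecar would forward the logs to an external system rather than just tail them):

```yaml
# Hypothetical sidecar sketch: the app writes logs to a shared volume,
# and the sidecar container reads and forwards them.
services:
  app:
    image: myapp:latest              # placeholder; writes /var/log/myapp/app.log
    volumes:
      - app-logs:/var/log/myapp

  log-sidecar:
    image: busybox
    command: ["tail", "-F", "/var/log/myapp/app.log"]
    volumes:
      - app-logs:/var/log/myapp

volumes:
  app-logs:
```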
Each logging strategy has its benefits and trade-offs. The best approach depends on the complexity of your application and the scale at which you're operating.
Monitoring Docker Daemon Logs
Docker daemon logs provide information about the Docker service itself, such as commands sent through its Remote API and other internal events. These logs are typically stored on the host machine, either in system logs or dedicated log files, depending on your operating system.
Why Monitor Daemon Logs?
While container logs show the status of your applications, daemon logs provide insight into the Docker platform's performance. Monitoring both types of logs helps you gain a complete understanding of your system's health.
Choosing a Logging Solution
1. Open-Source Tools
- Many teams use open-source tools like the Elastic Stack (Elasticsearch, Logstash, Kibana) for managing logs. These tools are powerful but require technical expertise to set up and maintain.
2. Managed Services
- For a hassle-free logging solution, managed platforms such as hosted Elastic Stack (Elasticsearch, Logstash, and Kibana) offerings or Amazon CloudWatch Logs can handle log collection, storage, and analysis for you. These services simplify the process, letting you identify and resolve issues without maintaining complex configurations yourself.
- To manage Docker logs effectively, you can use the json-file or fluentd logging driver along with a log aggregator. For example, Fluentd is a widely used open-source tool that works seamlessly with platforms like ELK or CloudWatch to gather and forward logs for centralized management.
3. How to Set Up Managed Logging?
- Configure the logging driver: Set the fluentd driver in Docker to send logs to a centralized log collector. Example command:
docker run -d \
--log-driver fluentd \
--log-opt tag="docker.{{.Name}}" \
myapp:latest
- Install a log aggregator: Deploy Fluentd or a similar tool on your system to collect container logs and forward them to a managed service.
- Monitor and analyze logs: Use dashboards like Kibana (for ELK) or Amazon CloudWatch to view, analyze, and set up alerts based on your logs.
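If you want every container to use Fluentd by default instead of passing flags per container, the same driver can be set in /etc/docker/daemon.json (the address below is a placeholder for your collector's host and port):

```json
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "tag": "docker.{{.Name}}"
  }
}
```

As with any daemon.json change, restart Docker afterwards for the setting to take effect.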
By using managed services, you can store logs reliably, monitor your applications more effectively, and troubleshoot problems with ease, all while ensuring your system runs smoothly.
Viewing Docker Logs in Different Scenarios
1. Using Docker Desktop
Docker Desktop provides an intuitive interface to manage containers, including accessing logs. Here's how you can view logs:
- Open the "Containers" tab in Docker Desktop.
- Select the container whose logs you want to view.
- Navigate to the "Logs" section to see the output.
The interface also includes features like filtering and searching through logs, making it easier to pinpoint specific entries during debugging or monitoring.
2. Using Docker Compose
Docker Compose is a powerful tool for orchestrating multi-container applications. It simplifies log management across multiple services defined in a 'docker-compose.yml' file. To view the logs for all services, use the command:
docker-compose logs
This provides a consolidated view of logs from all containers, which is especially helpful for debugging complex setups.
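docker-compose logs also accepts filtering options and can be scoped to a single service. A sketch, assuming your Compose file defines a service named web:

```shell
# Follow only the web service's logs, starting from the last 50 lines.
docker-compose logs -f --tail=50 web

# Newer Docker versions use the plugin form:
# docker compose logs -f --tail=50 web
```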
3. Using Third-Party Tools
For large-scale environments, third-party tools can enhance log management with capabilities like aggregation, visualization, and alerts.
How to Integrate Third-Party Tools?
- Set log drivers: Update your container settings to use a specific log driver (e.g., 'syslog', 'json-file', or 'fluentd') that routes logs to the logging system.
- Deploy log collectors: Use a logging agent or sidecar container to gather and forward logs from Docker to the selected platform.
- Create dashboards and alerts: Use the tool's dashboard to customize log views and configure alerts based on application behavior.
These techniques help you manage logs effectively, ensuring better observability and smoother operation of containerized applications.
Troubleshooting Common Docker Logging Problems
1. Missing Logs
Logs may be missing due to improper logging driver configurations or a lack of necessary permissions. To fix this:
- Specify the correct logging driver either in the docker run command or in the docker-compose file.
- Ensure the container has the permissions needed to write logs.
For example, when using the json-file logging driver, you can configure it in /etc/docker/daemon.json like this:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "5m",
    "max-file": "3"
  }
}
2. Log Driver Compatibility
Incompatibility issues may arise if the logging driver selected doesn’t work with your Docker setup. To avoid this, check which logging drivers are supported on your system by running:
docker info --format '{{.LoggingDriver}}'
This prints the default logging driver currently configured, so you can confirm that the chosen driver matches your Docker version and host environment.
3. Storage Space Problems
If logs grow uncontrollably, they can consume a significant amount of storage space, especially in systems that generate extensive logs. To manage this:
Set up log rotation to limit the size and number of log files.
Here’s how you can define log rotation in a Compose file (shown here in JSON form; a YAML docker-compose.yml is more common but equivalent):
{
  "services": {
    "app": {
      "image": "myapp:latest",
      "logging": {
        "driver": "json-file",
        "options": {
          "max-size": "100m",
          "max-file": "10"
        }
      }
    }
  }
}
Alternatively, you can set it up directly when running a container:
docker run -d \
--name myapp \
--log-driver json-file \
--log-opt max-size=100m \
--log-opt max-file=10 \
myapp:latest
4. Application Logging Issues
Sometimes, an application may not send logs to stdout or stderr, which prevents Docker from capturing them. To address this, ensure your application is configured to log output to the console.
For example, in a Node.js application, you can use:
console.log('This is a log entry');
If you're using a logging library like Winston, configure it to log to the console as follows:
const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console()
  ]
});

logger.info('This is a log message!');
By ensuring logs are routed correctly to stdout, Docker can capture them and make them accessible through the docker logs command, simplifying troubleshooting and monitoring.
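Because Docker records which stream each log line came from, it helps to confirm your process writes to the right one. A quick local demo of the two streams (no Docker required), mimicking how Docker separates stdout and stderr:

```shell
# Write one line to stdout and one to stderr, then capture them separately,
# just as Docker tags each log line with its stream of origin.
sh -c 'echo "application message"; echo "error message" >&2' \
  1>out.txt 2>err.txt

cat out.txt   # prints: application message
cat err.txt   # prints: error message
```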
Best Practices For Managing Docker Container Logs
Managing Docker container logs effectively is essential for troubleshooting, monitoring, and ensuring your applications run smoothly. Below are some practical methods and recommendations for handling logs efficiently.
1. Choosing the Right Logging Driver
Docker provides built-in logging drivers to capture logs from containers. These drivers handle logs from the container’s stdout and stderr streams and decide how and where the logs are stored.
- The default logging driver is json-file, which saves logs in JSON format on the host machine.
- You can use other drivers like syslog or fluentd to forward logs to external systems for centralized management.
- If the default options don’t meet your needs, Docker also supports custom drivers, which you can add as plugins and distribute via a Docker registry.
To use a specific driver, include the --log-driver option when running a container:
docker run --log-driver=json-file myapp:latest
2. Understanding Delivery Modes
Delivery modes determine how logs are sent from a container to the configured logging driver.
- Blocking Mode (Default): Logs are sent directly to the driver. While this ensures all logs are captured, it may slow down applications if the driver takes longer to process logs.
- Non-blocking Mode: Logs are temporarily stored in an in-memory ring buffer before being passed to the driver. This reduces delays but risks data loss if the buffer becomes full.
To enable non-blocking mode, add the mode=non-blocking option when starting a container:
docker run --log-opt mode=non-blocking myapp:latest
3. Application Level Logging
You can manage logs within the application itself by using its built-in logging framework.
- This gives developers control over how logs are generated and processed.
- However, storing logs inside the container is risky because containers are temporary. When a container stops, its file system is removed.
To prevent log loss, use persistent storage or send logs to an external logging service for safekeeping.
4. Using Volumes for Log Storage
If you want to avoid losing logs when a container is stopped or removed, store logs in Docker volumes.
- Volumes provide persistent storage independent of the container’s lifecycle.
- Logs saved in volumes can be backed up, shared, or moved between environments.
Example: Mount a host directory as a volume to store logs:
docker run -v /host/logs:/container/logs myapp:latest
5. Dedicated Logging Containers
A dedicated logging container can centralize log collection and management across your Docker environment.
- These containers gather logs from other containers and forward them to a centralized system.
- They operate independently, making it easier to scale and move between environments.
6. Sidecar Containers for Logging
For more complex deployments, the sidecar pattern is a popular approach to managing logs in microservices.
- A sidecar container runs alongside the main application container, sharing its resources.
- It handles all logging responsibilities, including storing, labeling, and forwarding logs to external systems.
The sidecar simplifies identifying which application generated a log, thanks to custom tags. However, this method requires careful setup and consumes additional resources.
Conclusion
Managing logs is important for monitoring and troubleshooting Docker applications. Docker provides various logging options, such as container logs, daemon logs, and logging drivers which help you understand how your applications and systems are performing. By following best practices like centralizing logs, choosing suitable logging drivers, and integrating tools for advanced log analysis, you can simplify your workflow and address issues more efficiently. Whether you're working with a single container or a complex setup, having a strong log management strategy improves performance, reliability, and overall efficiency of your applications.