{"id":254,"date":"2018-10-22T11:13:00","date_gmt":"2018-10-22T11:13:00","guid":{"rendered":"https:\/\/opstree.com\/blog\/\/2018\/10\/22\/docker-logging-driver\/"},"modified":"2019-09-30T20:56:15","modified_gmt":"2019-09-30T15:26:15","slug":"docker-logging-driver","status":"publish","type":"post","link":"https:\/\/opstree.com\/blog\/2018\/10\/22\/docker-logging-driver\/","title":{"rendered":"Docker Logging Driver"},"content":{"rendered":"<p dir=\"ltr\" style=\"text-align:left;\">The&nbsp; <b>docker logs<\/b> command batch-retrieves logs present at the time of execution. The <b>docker logs<\/b> command shows information logged by a running container. The <b>docker service logs<\/b> command shows information logged by all containers participating in a service. The information that is logged and the format of the log depends almost entirely on the container\u2019s endpoint command.<\/p>\n<p>These logs are basically stored at <b>&#8220;\/var\/lib\/docker\/containers\/.log&#8221;<\/b>, So basically it is not easy to use this file by using <b>Filebeat<\/b> because the file will change every time when the new container is up with a new container id.<\/p>\n<p>So, <b>How to monitor these logs which are formed in different files ?<\/b> For this Docker logging driver were introduced to monitor the docker logs.<\/p>\n<p>Docker includes multiple logging mechanisms to help you get information from running containers &amp; services. These mechanisms are called <b>logging drivers<\/b>. These logging drivers are configured for the docker daemon.<\/p>\n<p>To configure the Docker daemon to default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in <b>\/etc\/docker\/<\/b> on Linux hosts or <b>C:\\ProgramData\\docker\\config\\<\/b> on Windows server hosts.<\/p>\n<p>The default logging driver is <b>json-file<\/b>. 
The following example explicitly sets the default logging driver to syslog:<\/p>\n<p><span style=\"color:#e06666;\">{<\/span><br \/>\n<span style=\"color:#e06666;\">&nbsp; &quot;log-driver&quot;: &quot;syslog&quot;<\/span><br \/>\n<span style=\"color:#e06666;\">}<\/span><\/p>\n<p>After configuring the log driver in the <b>daemon.json<\/b> file, you can define the log driver &amp; the destination where you want to send the logs, for example Logstash or fluentd. You can define it either on the run-time command, as <b>&#8220;--log-driver=syslog --log-opt syslog-address=udp:\/\/logstash:5044&#8221;<\/b>, or, if you are using a docker-compose file, as:<\/p>\n<p><span style=\"color:#e06666;\"><span style=\"background-color:white;\">logging:<br \/>\n&nbsp;&nbsp;driver: fluentd<br \/>\n&nbsp;&nbsp;options:<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;fluentd-address: &quot;192.168.1.1:24224&quot;<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;tag: &quot;{{.Name}}&quot;<\/span><\/span><\/p>\n<p>Once you have configured the log driver, it will send all the docker logs to the configured destination. Now, if you try to see the docker logs on the terminal using the <b>docker logs<\/b> command, you will get a message:<\/p>\n<p><span style=\"color:#e06666;\">Error response from daemon: configured logging driver does not support reading<\/span><\/p>\n<p>That is because all the logs are being forwarded to the configured destination.<\/p>\n<p>Let me give you an example of how I configured the <b>fluentd<\/b> logging driver,<br \/>\nparsed those logs into Elasticsearch, and viewed them on Kibana. 
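To show the compose snippet above in the context of a complete file, here is a hedged sketch of a docker-compose.yml; the service name web and the nginx image are illustrative, not from the original post:

```yaml
# Hypothetical docker-compose.yml (service and image names are examples).
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    logging:
      driver: fluentd
      options:
        # The Docker daemon on the host connects to this address,
        # so it must be reachable from the host, not from inside the container.
        fluentd-address: "192.168.1.1:24224"
        tag: "web"
```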
In this case I am configuring the logging driver at run time, by installing the logging driver plugin inside the fluentd image rather than in daemon.json. So make sure that your containers are created inside the same docker network in which you will be configuring the logging driver.<\/p>\n<p><b>Step 1:<\/b> Create a docker network.<\/p>\n<p><span style=\"color:#e06666;\">docker network create docker-net<\/span><\/p>\n<p><b>Step 2:<\/b> Create a container for elasticsearch inside the docker network.<\/p>\n<p><span style=\"color:#e06666;\">docker run -itd --name elasticsearch -p 9200:9200 --network=docker-net elasticsearch:6.4.1<\/span><\/p>\n<p><b>Step 3:<\/b> Create the fluentd configuration in <b>fluent.conf<\/b>, where the logging driver is configured; this file is then copied into the fluentd docker image.<\/p>\n<p><b>fluent.conf<\/b><\/p>\n<p><span style=\"color:#e06666;\">&lt;source&gt;<br \/>\n&nbsp;&nbsp;@type forward<br \/>\n&nbsp;&nbsp;port 24224<br \/>\n&nbsp;&nbsp;bind 0.0.0.0<br \/>\n&lt;\/source&gt;<\/span><\/p>\n<p><span style=\"color:#e06666;\">&lt;match *.*&gt;<br \/>\n&nbsp;&nbsp;@type copy<br \/>\n&nbsp;&nbsp;&lt;store&gt;<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;@type elasticsearch<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;host elasticsearch<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;port 9200<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;logstash_format true<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;logstash_prefix fluentd<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;logstash_dateformat %Y%m%d<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;include_tag_key true<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;type_name access_log<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;tag_key app<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;flush_interval 1s<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;index_name fluentd<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;type_name fluentd<br \/>\n&nbsp;&nbsp;&lt;\/store&gt;<br \/>\n&nbsp;&nbsp;&lt;store&gt;<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;@type stdout<br \/>\n&nbsp;&nbsp;&lt;\/store&gt;<br \/>\n&lt;\/match&gt;<\/span><\/p>\n<p>This will also create an index named fluentd; the host is referenced by the name given to the elasticsearch container.<\/p>\n<p><b>Step 4:<\/b> Build the fluentd image and create a docker container from it.<\/p>\n<p><b>Dockerfile.fluent<\/b><\/p>\n<p><span style=\"color:#e06666;\">FROM fluent\/fluentd:latest<br \/>\nCOPY fluent.conf \/fluentd\/etc\/<br \/>\nRUN [&quot;gem&quot;, &quot;install&quot;, &quot;fluent-plugin-elasticsearch&quot;, &quot;--no-rdoc&quot;, &quot;--no-ri&quot;, &quot;--version&quot;, &quot;1.9.5&quot;]<\/span><\/p>\n<p>Here the logging driver plugin is installed and configured inside the fluentd image.<\/p>\n<p>Now build the docker image and create a container from it.<\/p>\n<p><span style=\"color:#e06666;\">docker build -t fluent -f Dockerfile.fluent .<br \/>\ndocker run -itd --name fluentd -p 24224:24224 --network=docker-net fluent<\/span><\/p>\n<p><b>Step 5:<\/b> Now create the container whose logs you want to see on Kibana, configuring the log driver at run time. In this example, I am creating an nginx container and configuring the log driver for it.<\/p>\n<p><span style=\"color:#e06666;\">docker run -itd --name nginx -p 80:80 --network=docker-net --log-driver=fluentd --log-opt fluentd-address=localhost:24224 opstree\/nginx:server<\/span><\/p>\n<p><b>Step 6:<\/b> Finally, create a docker container for kibana inside the same network.<\/p>\n<p><span style=\"color:#e06666;\">docker run -itd --name kibana -p 5601:5601 --network=docker-net kibana<\/span><\/p>\n<p>Now you will be able to check the logs for the nginx container on Kibana by creating the index pattern <b>fluentd-*<\/b>.<\/p>\n<p>Types of logging drivers which can be used:<\/p>\n<ul style=\"text-align:left;\">\n<li><b>none:<\/b> No logs are available for the container and docker logs does not return any output.<\/li>\n<li><b>json-file:<\/b> The logs are formatted as JSON. 
The default logging driver for Docker.<\/li>\n<\/ul>\n<ul style=\"text-align:left;\">\n<li><b>syslog:<\/b> Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.<\/li>\n<li><b>journald:<\/b> Writes log messages to journald. The journald daemon must be running on the host machine.<\/li>\n<li><b>gelf:<\/b> Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.<\/li>\n<li><b>fluentd:<\/b> Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.<\/li>\n<li><b>awslogs:<\/b> Writes log messages to Amazon CloudWatch Logs.<\/li>\n<li><b>splunk:<\/b> Writes log messages to Splunk using the HTTP Event Collector.<\/li>\n<li><b>etwlogs:<\/b> Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms.<\/li>\n<li><b>gcplogs:<\/b> Writes log messages to Google Cloud Platform (GCP) Logging.<\/li>\n<li><b>logentries:<\/b> Writes log messages to Rapid7 Logentries.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>The&nbsp; docker logs command batch-retrieves logs present at the time of execution. The docker logs command shows information logged by a running container. The docker service logs command shows information logged by all containers participating in a service. 
The information that is logged and the format of the log depends almost entirely on the container\u2019s &hellip; <a href=\"https:\/\/opstree.com\/blog\/2018\/10\/22\/docker-logging-driver\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Docker Logging Driver&#8221;<\/span><\/a><\/p>\n","protected":false},"author":172651618,"featured_media":29900,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_coblocks_attr":"","_coblocks_dimensions":"","_coblocks_responsive_height":"","_coblocks_accordion_ie_support":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false},"version":2}},"categories":[28070474,4504191],"tags":[],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/opstree.com\/blog\/wp-content\/uploads\/2025\/11\/DevSecOps-1.jpg","jetpack_likes_enabled":true,"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pfDBOm-46","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/254"}],"collection":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/users\/172651618"}],"replies":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/comments?post=254"}],"version-history":[{"count":3,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/254\/revisions"}],"predecessor-version":[{"id":15
54,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/posts\/254\/revisions\/1554"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media\/29900"}],"wp:attachment":[{"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/media?parent=254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/categories?post=254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/opstree.com\/blog\/wp-json\/wp\/v2\/tags?post=254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}