Centralised and Scalable log-server implementation on Ubuntu 15.04

The goal of this article is to explain how to implement a very simple log server which can accept application logs coming from one or more servers. For simplicity, I will implement the log server and the log forwarder (explained in the Components section) on the same machine.

In order to keep the article easy to understand and the implementation as simple as possible, I have excluded securing the logs transmitted from the application servers to the log server and consider it outside the scope of this article. Therefore, if you implement this exact solution, make sure you consider the security of your log data, as some logs may contain sensitive information about your applications.

Components

  • Log-courier Sends logs from the servers that need to be monitored. In this example, the plain-text TCP transport will be used for forwarding logs.
  • Logstash Processes incoming logs from the log forwarder (log-courier in this tutorial) and sends them to Elasticsearch for storage.
  • Elasticsearch Distributed search engine & document store used for storing all the logs processed by Logstash.
  • Kibana User interface for filtering, sorting, discovering and visualising logs that are stored in Elasticsearch.

Things to consider

  • In a real-world scenario the log forwarder would run on the servers you need to monitor and send logs back to the centralised log server. For simplicity, in this article both the log server and the server being monitored are the same machine.
  • Logs can grow rapidly depending on how much log data you send from the monitored servers and how many servers are being monitored. Therefore my personal recommendation would be to transmit only what you really need to monitor; for a web application this could be access logs, error logs, application-level logs and/or database logs. I would additionally send the syslog, as it indicates OS-level activity.
  • Transmitting logs means more network bandwidth usage on the application server side; therefore, if you are running your application on a cloud-computing platform such as EC2 or DigitalOcean, make sure you take network throttling into account when your centralised log monitoring implementation is deployed.
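As a rough illustration of the bandwidth point above, here is a back-of-envelope estimate. All the figures are made-up examples; substitute measurements from your own workload:

```python
# Back-of-envelope estimate of log-shipping bandwidth.
# Every figure below is hypothetical; measure your own servers.
lines_per_second = 200   # average log lines emitted per server
avg_line_bytes = 250     # average size of one log line in bytes
servers = 10             # number of monitored servers

bytes_per_second = lines_per_second * avg_line_bytes * servers
mbits_per_second = bytes_per_second * 8 / 1_000_000
gb_per_day = bytes_per_second * 86_400 / 1_000_000_000

print(f"{mbits_per_second:.1f} Mbit/s, ~{gb_per_day:.1f} GB/day")
# prints: 4.0 Mbit/s, ~43.2 GB/day
```

Even these modest numbers add up to tens of gigabytes per day, which is why shipping only the logs you actually need matters.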

Installation steps

There is no particular order that you need to follow to get the stack working.

Elasticsearch installation

  • Install Java
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
sudo aptitude -y install oracle-java8-installer
java -version
  • Install Elasticsearch
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.deb -O /tmp/elasticsearch-1.7.1.deb
sudo dpkg -i /tmp/elasticsearch-1.7.1.deb
sudo update-rc.d elasticsearch defaults 95 10

Then start the service with sudo service elasticsearch start.

Install Logstash

echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list
sudo apt-get update
sudo apt-get install logstash

Install log-courier

Installing log-courier can be a bit tricky, as we have to install the Logstash plugins that support the data communication protocol implemented in log-courier. More information can be found here: https://github.com/driskell/log-courier/blob/master/docs/LogstashIntegration.md

sudo add-apt-repository ppa:devel-k/log-courier
sudo apt-get update
sudo apt-get install log-courier

You now need to install the log-courier plugins for Logstash. When I installed Logstash using the instructions above, it was installed in the /opt/logstash folder, so the following commands assume that Logstash also lives under /opt/ on your system; if not, adjust the paths accordingly.

cd /opt/logstash/
sudo bin/plugin install logstash-input-courier
sudo bin/plugin install logstash-output-courier

Install Kibana

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz -O /tmp/kibana-4.1.1-linux-x64.tar.gz
sudo tar xf /tmp/kibana-4.1.1-linux-x64.tar.gz -C /opt/

Install the init script to manage the Kibana service, and enable it.

cd /etc/init.d && sudo wget https://gist.githubusercontent.com/thisismitch/8b15ac909aed214ad04a/raw/bce61d85643c2dcdfbc2728c55a41dab444dca20/kibana4
sudo chmod +x /etc/init.d/kibana4
sudo update-rc.d kibana4 defaults 96 9
sudo service kibana4 start

Configuration steps

Logstash configuration

Logstash needs to be configured to accept log data transmitted by log-courier, process it, and output it to Elasticsearch. The following simple config defines a courier input, a syslog grok filter, and an Elasticsearch output.

This configuration lives on the log server, which stores and processes all the incoming logs from the servers running the log-courier agent.

sudo vim /etc/logstash/conf.d/01-simple.conf

And paste in the following config:

input {
  courier {
    port => 5043
    type => "logs"
    transport => "tcp"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
    elasticsearch { host => "localhost" }
    stdout { codec => rubydebug }
}
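The grok filter above splits each syslog line into timestamp, hostname, program, pid and message fields. As a rough sketch of what that pattern extracts, here is a hand-written Python regex approximating it (this is an illustration, not the actual grok implementation, and it covers only the common syslog line shape):

```python
import re

# Approximation of the grok pattern used in the filter above:
# %{SYSLOGTIMESTAMP} %{SYSLOGHOST} %{DATA}(?:\[%{POSINT}\])?: %{GREEDYDATA}
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[:]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

line = "Aug 12 06:25:01 webserver sshd[1234]: Accepted publickey for deploy"
fields = SYSLOG_RE.match(line).groupdict()
print(fields["syslog_program"], fields["syslog_pid"])
# prints: sshd 1234
```

Logstash then adds the received_at and received_from fields and reparses the timestamp with the date filter, so events are indexed with the time they were logged rather than the time they arrived.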

Log-Courier configuration

In a real-world scenario, this log-courier configuration would sit on the servers that need to be monitored. The following is an example config for this tutorial, which ships syslog activity.

sudo vim /etc/log-courier/log-courier.conf

And paste in the following config:
{
    "network": {
        "servers": [ "localhost:5043" ],
        "transport": "tcp"
    },
    "files": [
        {
            "paths": [ "/var/log/syslog" ],
            "fields": { "type": "syslog" }
        }
    ]
}
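Since the log-courier config is plain JSON, a malformed file is an easy way to stop the agent from starting. A quick sanity check can be sketched like this (the config is inlined as a string here; in practice you would read /etc/log-courier/log-courier.conf instead):

```python
import json

# Parse the log-courier config and check the top-level keys
# the agent expects ("network" and "files").
config_text = """
{
    "network": {
        "servers": [ "localhost:5043" ],
        "transport": "tcp"
    },
    "files": [
        { "paths": [ "/var/log/syslog" ], "fields": { "type": "syslog" } }
    ]
}
"""

config = json.loads(config_text)  # raises ValueError if the JSON is malformed
assert "network" in config and "files" in config
print("config OK:", config["network"]["servers"])
```

The "type": "syslog" field is what the Logstash filter above keys on, so a typo there means events silently skip the grok filter.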

Enable services

Now we have Elasticsearch, Logstash, Kibana and log-courier configured to process, store and view the logs. In order to use the setup, we need to start all the services using the Ubuntu service command.

sudo service elasticsearch start
sudo service logstash start
sudo service kibana4 start
sudo service log-courier start

The way I normally test whether services are listening on their particular TCP ports is with the lsof command. This will show whether Elasticsearch, Logstash and Kibana are listening on their respective ports.

sudo lsof -i:9200,5043,5601

An example output may look like

COMMAND     PID          USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
log-couri   970          root    9u  IPv4 373056      0t0  TCP localhost:38366->localhost:5043 (ESTABLISHED)
java      15513 elasticsearch  134u  IPv6 372333      0t0  TCP localhost:9200 (LISTEN)
java      15513 elasticsearch  166u  IPv6 373878      0t0  TCP localhost:9200->localhost:41345 (ESTABLISHED)
java      15612      logstash   16u  IPv6 372339      0t0  TCP *:5043 (LISTEN)
java      15612      logstash   18u  IPv6 373057      0t0  TCP localhost:5043->localhost:38366 (ESTABLISHED)
node      15723          root   10u  IPv4 372439      0t0  TCP localhost:41345->localhost:9200 (ESTABLISHED)
node      15723          root   11u  IPv4 373061      0t0  TCP *:5601 (LISTEN)
node      15723          root   13u  IPv4 389495      0t0  TCP 10.11.0.4:5601->10.11.0.2:64084 (ESTABLISHED)
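If lsof is not available, the same check can be sketched as a small TCP connect test (is_listening is a hypothetical helper written for this article; the ports are the ones used in this setup):

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 9200 = Elasticsearch, 5043 = Logstash courier input, 5601 = Kibana
for port in (9200, 5043, 5601):
    status = "listening" if is_listening("localhost", port) else "not listening"
    print(port, status)
```

A successful connect only proves something accepts TCP on that port; the lsof output above additionally tells you which process owns it.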

Testing the setup

One way to test our configuration is to log something to syslog, which can be done using the “logger” command. Once the Kibana indices are set up (refer to the Kibana documentation), incoming log messages will appear there in real time.

Example command to log a test message in syslog

logger test-log-message

Now go to the Kibana web interface to discover the message you pushed to syslog. In this setup, Kibana is listening on port 5601 on the log server (e.g. http://10.11.0.4:5601/). That's it!

Conclusion

Now you have set up a very simple centralised logging system for your application with Kibana, Logstash and Elasticsearch, with log-courier as the agent sending log messages from the application servers to the centralised Logstash server. This setup can be scaled by running a cluster of Elasticsearch and Logstash instances; a good starting point can be found at https://www.elastic.co/guide/en/elasticsearch/guide/current/scale.html

This setup can accept any type of log that comes from log-courier agents, and you can discover/filter the logs using Kibana. Logstash can be configured to support additional log types using grok patterns, which can be tested at https://grokdebug.herokuapp.com/


2 thoughts on “Centralised Scalable log-server implementation on Ubuntu 15.04”

  1. Awesome Purinda, thanks for this, I managed to get a test system up and running in no time at all.

    With Kibana is there a way to secure it with a login?
    Do you just log your PHP apps to syslog via Monolog? And then setup your log courier config or do you use the application log directly?

    Error: Under ‘Enable Services’ the Kibana command should be `sudo service kibana4 start`.

    • Glad this helped.

      1. Kibana doesn’t support authentication as far as I know. You may need to protect it behind an Apache or Nginx revproxy with a virtualhost configuration supporting BasicAuth or something.

      2. Yes, I’ve got monolog configured to output Exceptions to Syslog. But we also ship symfony app logs which capture everything else. This is an example config from a web server.

      {
          "network": {
              "servers": [ "{{ log_server_uri }}" ],
              "transport": "tcp"
          },
          "files": [
              {
                  "paths": [ "/var/log/messages" ],
                  "fields": {
                      "type": "syslog",
                      "role": "web-server",
                      "env": "{{ env }}"
                  }
              },
              {
                  "paths": [ "/var/log/nginx/frontend_access.log" ],
                  "fields": {
                      "type": "frontend-access-log",
                      "role": "web-server",
                      "env": "{{ env }}"
                  }
              },
              {
                  "paths": [ "/var/log/nginx/cms_access.log" ],
                  "fields": {
                      "type": "cms-access-log",
                      "role": "web-server",
                      "env": "{{ env }}"
                  }
              },
              {
                  "paths": [ "/var/log/nginx/recipes_access.log" ],
                  "fields": {
                      "type": "recipes-access-log",
                      "role": "web-server",
                      "env": "{{ env }}"
                  }
              },
              {
                  "paths": [ "/var/log/nginx/frontend_error.log" ],
                  "fields": {
                      "type": "frontend-error-log",
                      "role": "web-server",
                      "env": "{{ env }}"
                  }
              },
              {
                  "paths": [ "/var/log/nginx/cms_error.log" ],
                  "fields": {
                      "type": "cms-error-log",
                      "role": "web-server",
                      "env": "{{ env }}"
                  }
              },
              {
                  "paths": [ "/var/log/nginx/recipes_error.log" ],
                  "fields": {
                      "type": "recipes-error-log",
                      "role": "web-server",
                      "env": "{{ env }}"
                  }
              }
          ]
      }

      3. Thanks for pointing that out!
