Centralized logging is critical in organizations: it makes fault finding far easier, and it is often required for auditing and compliance purposes. Getting tools that do the job at a reasonable price, however, can be hard. Commercial solutions such as Splunk cost serious money and are often not feasible for smaller companies.
There are, however, alternatives. One of them is the ELK stack, which is free and easily deployed. ELK comprises three applications: Elasticsearch, which stores and indexes your log data; Logstash, which can be used both as a forwarder (which we will not be covering in this article) and as a server that receives log messages and feeds them into Elasticsearch; and Kibana, the front-end dashboard that we use to search through our logs.
Installation and configuration of Elasticsearch
For the purposes of this article, I will assume that you are working from a CentOS 7 x64 server, which is fully up-to-date. Note also that all commands below should be run as root in the terminal.
The first thing we need to do is install OpenJDK, since Java is a dependency of both Elasticsearch and Logstash.
> yum install java-1.8.0-openjdk.x86_64
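Once that completes, a quick sanity check (optional) confirms the JDK is installed and on the PATH:
> java -version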
Now that we have Java installed, we can move on to installing Elasticsearch. First, import Elastic's GPG key.
> rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Next, we will add the Elasticsearch repository.
> cat << EOF > /etc/yum.repos.d/elasticsearch.repo
> [elasticsearch-5.x]
> name=Elasticsearch repository for 5.x packages
> baseurl=https://artifacts.elastic.co/packages/5.x/yum
> gpgcheck=1
> gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
> enabled=1
> autorefresh=1
> type=rpm-md
> EOF
Now we can go ahead and install Elasticsearch.
> yum install elasticsearch
For security, it is best to restrict access to localhost before we start Elasticsearch. This is fine for our use case, as we will have everything on one box. However, if you intend to run Kibana or Logstash on separate servers, you would be better off doing this with firewall rules.
> sed -i 's/^#network.host.*/network.host: localhost/g' /etc/elasticsearch/elasticsearch.yml
We can now go ahead and start Elasticsearch, and make it start on boot.
> systemctl enable elasticsearch
> systemctl start elasticsearch
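Elasticsearch can take a few seconds to come up. Once it has, querying its REST API (it listens on port 9200 by default) should return a small JSON document with the node name and version details:
> curl http://localhost:9200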
If you are using a box with less than 2GB of RAM, you may also need to edit /etc/elasticsearch/jvm.options and lower the JVM heap settings, -Xms and -Xmx, which Elastic recommends keeping equal to each other. You could change them to, say, 1g.
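For example, the relevant two lines in jvm.options would then read:
-Xms1g
-Xmx1g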
We now have Elasticsearch configured and ready to go, so let’s move on to Kibana.
Installation and configuration of Kibana
Next, add the Kibana repository. It is signed with the same GPG key we imported earlier for Elasticsearch, so there is nothing new to import.
> cat << EOF > /etc/yum.repos.d/kibana.repo
> [kibana-5.x]
> name=Kibana repository for 5.x packages
> baseurl=https://artifacts.elastic.co/packages/5.x/yum
> gpgcheck=1
> gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
> enabled=1
> autorefresh=1
> type=rpm-md
> EOF
Now that we have added the repository, we can install Kibana.
> yum install kibana
Prior to starting Kibana, we need to configure it to listen on all interfaces so we can access it remotely.
> sed -i 's/^#server.host.*/server.host: 0.0.0.0/g' /etc/kibana/kibana.yml
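Note that CentOS 7 ships with firewalld enabled by default. If it is running on your box, you will also need to open Kibana's port (5601/tcp) before you can reach the dashboard remotely:
> firewall-cmd --permanent --add-port=5601/tcp
> firewall-cmd --reload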
We can now go ahead and start Kibana and make it start at boot.
> systemctl enable kibana
> systemctl start kibana
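Kibana itself can take a few seconds to start. A quick HEAD request against port 5601 is an easy way to confirm it is answering:
> curl -I http://localhost:5601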
We now have Kibana ready to go!
Installation and configuration of Logstash
Finally, we will install Logstash. Again, we already have the Elastic GPG key imported, so we can go straight to creating the repository and installing the package.
> cat << EOF > /etc/yum.repos.d/logstash.repo
> [logstash-5.x]
> name=Elastic repository for 5.x packages
> baseurl=https://artifacts.elastic.co/packages/5.x/yum
> gpgcheck=1
> gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
> enabled=1
> autorefresh=1
> type=rpm-md
> EOF
Now that the repository is added, go ahead and install Logstash.
> yum install logstash
We can now set up our Logstash configuration file. It will consist of an input, which will be syslog over UDP, and an output, which will be Elasticsearch. In our example, we will create /etc/logstash/conf.d/syslog.conf with the following content:
> cat /etc/logstash/conf.d/syslog.conf
input {
  udp {
    port => 10514
    type => "rsyslog"
  }
}
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}
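Before starting the service, you can optionally have Logstash validate the configuration. (The binary is not on the default PATH, and --path.settings is needed so it can find its own logstash.yml.)
> /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/syslog.conf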
We can now go ahead and start Logstash and enable it to start at boot. (Please note that it may take a few minutes for it to start up.)
> systemctl enable logstash
> systemctl start logstash
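Once it has finished starting, you should see Logstash bound to UDP port 10514:
> ss -ulnp | grep 10514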
Final Thoughts
We now have ELK up and running and ready for you to push your logs into. When you browse to Kibana at http://<your-server-ip>:5601, and logs have already been processed by Logstash and stored in Elasticsearch, Kibana will offer to create an index pattern for the Logstash indexes and ask you to pick the time field it will index on; this is usually @timestamp. Once you have clicked "Create" and have been redirected to the next page, go to the "Discover" tab on the left, and you will be able to view your logs coming in.
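If Kibana complains that no matching indices exist, you can check what Elasticsearch has actually indexed. Once syslog messages have flowed through Logstash, you should see daily logstash-* indices in the listing:
> curl 'http://localhost:9200/_cat/indices?v'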
With everything now set up, you can point your servers at your Logstash host and their syslog messages will appear in Kibana. Remember that we are using the non-standard port 10514 when configuring this; a sketch of the client side follows below.
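As an example, on a client running rsyslog, a drop-in file such as the following forwards all messages over UDP (a single @ means UDP; @@ would mean TCP). The file name 50-logstash.conf and the host name logstash.example.com are placeholders, so substitute your own:
> cat << EOF > /etc/rsyslog.d/50-logstash.conf
> *.* @logstash.example.com:10514
> EOF
> systemctl restart rsyslog
And since the clients are now remote, remember to open the Logstash port in firewalld on the ELK host as well:
> firewall-cmd --permanent --add-port=10514/udp
> firewall-cmd --reload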
If you have any questions, please leave comments on this article and I will try and get back to you.