System Log Aggregation with the Elastic Stack
Jun 08, 2023 • 5 Minute Read
The Elastic Stack is highly configurable for just about any use case that involves collecting, searching, and analyzing data. To make it easy to get up and running, we can use modules to quickly implement a preconfigured pipeline. In this brief tutorial, we are going to use the System module to collect log events from /var/log/secure (on RHEL-family systems) and /var/log/auth.log (on Debian-family systems), and then analyze the log events through module-created dashboards in Kibana. For this demonstration, I am going to be using a t2.medium EC2 instance on the A Cloud Guru Cloud Playground. If you are not a Linux Academy subscriber, feel free to follow along with your own cloud server or virtual machine; all you need is a CentOS 7 host with 1 CPU and 4 GB of memory. Otherwise, the server is pre-configured for you!

[Image: Linux Academy Cloud Playground]
Elasticsearch
First, we need to install the only prerequisite for Elasticsearch: a Java JDK. I am going to be using OpenJDK, specifically the java-1.8.0-openjdk package:
sudo yum install java-1.8.0-openjdk -y

Now we can install Elasticsearch. I am going to install via RPM, so first let's import Elastic's GPG key:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Now we can download and install the Elasticsearch RPM:
curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.2.rpm
sudo rpm --install elasticsearch-6.4.2.rpm
sudo systemctl daemon-reload

Let's enable the Elasticsearch service so it starts after a reboot and then start Elasticsearch:
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

The ingest pipeline created by the Filebeat System module uses a GeoIP processor to look up geographical information for IP addresses found in the log events. For this to work, we first need to install the GeoIP processor as a plugin for Elasticsearch:
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip

Now we need to restart Elasticsearch in order for it to recognize the new plugin:
sudo systemctl restart elasticsearch
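Before moving on, it's worth confirming that Elasticsearch came back up and picked up the plugin. These checks aren't strictly required, and Elasticsearch can take a few seconds to start accepting connections after a restart:

curl localhost:9200
curl localhost:9200/_cat/plugins

The second command should list ingest-geoip next to the node name. If you're curious what the GeoIP processor actually does, you can exercise one directly with the ingest simulate API. The pipeline below is just a throwaway example defined inline, not the one the Filebeat module will install:

curl -s -H 'Content-Type: application/json' -d '{
  "pipeline": { "processors": [ { "geoip": { "field": "ip" } } ] },
  "docs": [ { "_source": { "ip": "8.8.8.8" } } ]
}' 'localhost:9200/_ingest/pipeline/_simulate?pretty'

The response should show a geoip object added to the document with fields like the country code and location.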
Kibana
We already have the Elastic GPG key imported, so let's download and install the Kibana RPM:

curl -O https://artifacts.elastic.co/downloads/kibana/kibana-6.4.2-x86_64.rpm
sudo rpm --install kibana-6.4.2-x86_64.rpm

Now we can enable and start the Kibana service:
sudo systemctl enable kibana
sudo systemctl start kibana

Because Kibana and Elasticsearch both come with sensible defaults for a single-node deployment, we do not need to make any configuration changes to either service.
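If you'd like to verify Kibana is ready before we point a browser at it, its status API is handy. This check is optional, and Kibana can take a minute to finish starting the first time:

curl -s localhost:5601/api/status

Look for an overall state of green in the JSON response.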
Filebeat
Now we can install the client that will be collecting our logs, Filebeat. Again, because we already have the Elastic GPG key imported, we can simply download and install the Filebeat RPM:

curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-x86_64.rpm
sudo rpm --install filebeat-6.4.2-x86_64.rpm

We want to store our log events in Elasticsearch with a UTC timestamp so that Kibana can simply convert from UTC to whatever time zone our browser is in at request time. To enable this conversion, let's uncomment and set the following variable in /etc/filebeat/modules.d/system.yml.disabled for both the syslog and auth sections (the file is renamed to system.yml once we enable the module below):
var.convert_timezone: true
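For reference, here is roughly what the relevant parts of the file should look like after the edit. This is a sketch based on the 6.x module config layout, so your copy may have additional commented-out variables around these lines:

- module: system
  # Syslog
  syslog:
    enabled: true
    var.convert_timezone: true

  # Authorization logs
  auth:
    enabled: true
    var.convert_timezone: true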
Now we can enable the System module and push the module assets to Elasticsearch and Kibana:

sudo filebeat modules enable system
sudo filebeat setup

Finally, we can enable and start the Filebeat service to begin collecting our system log events:
sudo systemctl enable filebeat
sudo systemctl start filebeat
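After giving Filebeat a moment to run, we can optionally confirm that events are actually landing in Elasticsearch by listing the Filebeat indices. The exact index name varies with the date and version, and docs.count should grow as new log events arrive:

curl 'localhost:9200/_cat/indices/filebeat-*?v'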
Analyze
By default, Kibana listens on localhost:5601, so in order to browse Kibana in our local web browser, let's use SSH to log in to our host with port forwarding:
ssh username@hostname_or_ip -L 5601:localhost:5601

Now we can navigate to http://localhost:5601 in our local web browser to access our remote instance of Kibana. From Kibana's side navigation pane, select Dashboard and search for "system" to see all the System module dashboards. To take things a step further, you can create your own honeypot by exposing your host to the internet to garner even more log events to analyze.

[Image: Syslog Dashboard]
[Image: Sudo Commands Dashboard]
[Image: SSH Logins Dashboard]
[Image: New Users and Groups Dashboard]
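If you'd rather poke at the raw events than browse the dashboards, you can also query the index directly. The query below is just an illustration: the system.auth.ssh.event field name follows the 6.x System module schema, so double-check the field names against your own documents before relying on it:

curl -s 'localhost:9200/filebeat-*/_search?q=system.auth.ssh.event:Failed&size=1&pretty'

This returns one failed SSH login event (if any have been recorded) straight from Elasticsearch.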