ELK Stack Setup on CentOS

Introduction

The ELK Stack, now officially known as the Elastic Stack, consists of Elasticsearch, Logstash, and Kibana. It lets you store all your logs in one place and analyze issues by correlating events at a particular time. ELK allows you to search, analyze, and visualize logs generated from any source in any format. In this documentation, we will install the ELK Stack on CentOS 7: Elasticsearch 7.1.x, Logstash 7.1.x, and Kibana 7.1.x.

Logstash is a tool that collects and parses data or logs so they can be analyzed. Kibana is a web interface (UI) used to search and view the logs that have been indexed into Elasticsearch. Both tools are built on top of Elasticsearch. Elasticsearch, Logstash, and Kibana together are known as the ELK Stack.

The components of the ELK Stack are:

  •  Elasticsearch
  •  Logstash
  •  Beats
  •  Kibana

Beats are lightweight agents installed on the servers, known as "data shippers", which send many kinds of operational data either directly to Elasticsearch or through Logstash. They transfer data with very little delay and can modify or enrich the data on the way.
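For illustration, here is a minimal sketch of what a Beat configuration can look like, using Filebeat as an example; the log path and the Logstash output target are assumptions and should be adapted to your environment:

# filebeat.yml (sketch, Filebeat 7.x)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/messages            # example log file to ship
output.logstash:
  hosts: ["localhost:5044"]          # or use output.elasticsearch instead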

Prerequisites

  • Elastic Stack server:
    One CentOS 7 server, including a non-root user with sudo privileges. The amount of RAM, CPU, and storage your Elastic Stack server requires depends on the volume of data you intend to gather. Here we are using a CentOS instance with the following specifications for our Elastic Stack server:
  • OS: CentOS 7
  • RAM: 4 GB
  • CPU: 2 cores
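
If you want to confirm that an instance matches these specifications before continuing, the following standard commands can be used (the exact output will vary per machine):

$ cat /etc/centos-release    # confirm the OS release
$ free -h                    # check available RAM
$ nproc                      # count available CPU cores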

Elasticsearch

Elasticsearch is a distributed, full-text search engine capable of serving multiple tenants from a single instance, with an HTTP web interface and schema-free JSON documents. It is an open-source RESTful search engine built on Apache Lucene and released under the Apache license. It is written in Java and can search and index document files in different formats. When Elasticsearch indexes documents into its repository, it transforms raw data such as logs or messages into internal documents and stores them in a data structure similar to a JSON object. You can index a document simply by sending it as a JSON object in an HTTP POST request. A managed Elasticsearch service is also available on AWS and Google Cloud Platform.
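
As a quick illustration of indexing over HTTP (the index name logs-demo and the document fields below are made up for this example), a document can be posted to a running Elasticsearch node like this:

$ curl -X POST "localhost:9200/logs-demo/_doc?pretty" \
    -H 'Content-Type: application/json' \
    -d '{"host": "web-01", "message": "user login succeeded"}'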

Elasticsearch is a NoSQL database that indexes and stores information. You can also query structured data and use Elasticsearch as an analytics platform: you can run queries that aggregate data and use the results to build graphs, pie charts, line charts, and so on.
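
For example, a terms aggregation over the hypothetical host field from the document above counts how many events each host produced:

$ curl -X GET "localhost:9200/logs-demo/_search?pretty" \
    -H 'Content-Type: application/json' \
    -d '{"size": 0, "aggs": {"events_per_host": {"terms": {"field": "host.keyword"}}}}'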

Kibana

Kibana is a web interface for searching and visualizing data. It works on top of Elasticsearch and provides visualization capabilities for the indices in an Elasticsearch cluster. Kibana is the user interface of the stack: you can create index patterns, analyze data or logs, query the data with its query language, and generate visualizations, charts, and so on.

Logstash

Logstash is an open-source, server-side data processing pipeline. It is used to manage real-time logs and events by collecting a variety of data from different sources, and it serves as an ingestion pipeline for Elasticsearch. Logstash provides a variety of filters that produce meaningful data by parsing and transforming the input.
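
To make the input, filter, and output stages concrete, here is a minimal sketch of a Logstash pipeline configuration; the Beats port, grok pattern, and index name are illustrative assumptions, not part of this setup:

# e.g. /etc/logstash/conf.d/example.conf (sketch)
input {
  beats {
    port => 5044                                        # receive events from a Beat such as Filebeat
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }    # parse Apache-style access log lines
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]                         # send parsed events to Elasticsearch
    index => "weblogs-%{+YYYY.MM.dd}"                   # one index per day
  }
}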

Installation

Install Java 8, which is required by Elasticsearch and Logstash. Note that Java 9 is not supported.

Install Java with the yum command given below:

$ sudo yum install java-1.8.0-openjdk-devel
$ java -version


Install and Configure Elasticsearch

Import the public GPG key of Elasticsearch into the rpm package manager:

$ sudo rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch

Create a file named elasticsearch.repo in the /etc/yum.repos.d/ directory. Use any editor to create the file.

$ sudo vi /etc/yum.repos.d/elasticsearch.repo

Add the following lines to the file, then save and exit:

    [elasticsearch-7.x]
    name=Elasticsearch repository for 7.x packages
    baseurl=https://artifacts.elastic.co/packages/7.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md

Your repository is now ready to use. Install the Elasticsearch package with the command:

$ sudo yum install elasticsearch

Elasticsearch is now installed. Open the configuration directory and edit the elasticsearch.yml file, which contains the various configuration settings.

$ sudo vi /etc/elasticsearch/elasticsearch.yml

In the elasticsearch.yml file, find the line that specifies network.host, uncomment it, and set its value to localhost; also set the http.port value to 9200.

Note: 9200 is the default port of Elasticsearch.

network.host: localhost
http.port: 9200

Save and exit the elasticsearch.yml file.

Start the Elasticsearch service with the systemctl command, and enable it so that Elasticsearch starts up every time your server boots:

$ sudo systemctl start elasticsearch
$ sudo systemctl enable elasticsearch

Check the status of Elasticsearch and make sure it is running:

$ sudo systemctl status elasticsearch

You can also verify that your Elasticsearch service is responding by sending an HTTP request:

$ curl -X GET "localhost:9200"
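
A healthy node answers with a JSON summary of the node and cluster. The exact values (node name, version number, and so on) will differ on your machine; a trimmed-down response looks roughly like this:

{
  "name" : "elk-server",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "7.1.1"
  },
  "tagline" : "You Know, for Search"
}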

Installing and Configuring the Kibana Dashboard 

According to the installation order in the official documentation, you should install Kibana after setting up Elasticsearch. Once Kibana is set up, we will be able to use its GUI to search and visualize the data that Elasticsearch stores. Because you already added the Elastic repository in the previous step, you can install the remaining components of the Elastic Stack using yum:

$ sudo yum install kibana

Enable and start the Kibana service:

$ sudo systemctl enable kibana
$ sudo systemctl start kibana

Open the kibana.yml file and edit it. Uncomment the lines for server.port, server.host, and elasticsearch.hosts, then save and exit the file.

$ sudo vi /etc/kibana/kibana.yml

Note: Setting server.host to "0.0.0.0" makes Kibana listen on all interfaces, allowing external traffic to reach it.

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]


Restart the Kibana service:

$  sudo systemctl restart kibana

Kibana is now accessible via the public IP address of your CentOS 7 server. You can check the Kibana server's status by opening your browser and navigating to the following address:

http://your_server_ip:5601
Note: 5601 is the default port of Kibana.   
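
If the page does not load from another machine, the default firewalld service on CentOS 7 may be blocking the port. Assuming firewalld is in use (skip this otherwise), the Kibana port can be opened like this:

$ sudo firewall-cmd --permanent --add-port=5601/tcp
$ sudo firewall-cmd --reload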

Installing Logstash

Logstash processes the data before it reaches Elasticsearch. It is a tool that collects and parses data or logs for analysis: it gathers data from different sources, transforms it into a standard format, and exports it to another destination such as Elasticsearch. Install it with yum:

$ sudo yum install logstash

Start and enable the Logstash service, then check its status:

$ sudo systemctl start logstash
$ sudo systemctl enable logstash
$ sudo systemctl status logstash
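
As an optional sanity check (this inline pipeline is only an illustration and assumes the default RPM install path), Logstash can be run with a configuration that simply echoes whatever you type back to the console as a structured event:

$ sudo /usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'

Type a line of text and press Enter; Logstash should print it back with a timestamp and host field added. Press Ctrl+C to stop it.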