Logger Settings in Elasticsearch

Default loggers in Elasticsearch

The logging configuration of Elasticsearch is kept in logging.yml, and the default logger level is INFO.
The logger level can be changed in the config file as follows:

es.logger.level: DEBUG
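Per-module levels can be tuned in the same place. Below is a minimal sketch of the logger section of logging.yml, assuming we want extra detail from the discovery module; the action entry ships with the default configuration:

logger:
	# trace cluster discovery (node joins, master election) in detail
	discovery: DEBUG
	# log action execution errors for easier debugging
	action: DEBUG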

The same can also be done at runtime through the cluster settings API:

PUT /_cluster/settings
{
   "transient": {
      "logger.discovery": "DEBUG"
   }
}
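The override can be removed again by setting the same key to null, which falls back to the level from the config file; a sketch using the same transient scope:

PUT /_cluster/settings
{
   "transient": {
      "logger.discovery": null
   }
}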

 

Slow log settings

The slow log records indexing requests and search queries that exceed the threshold times set in the configuration. By default the slow log is disabled. The settings are applied per index, so we can set separate thresholds for each index in the cluster:

PUT /my_index/_settings
{
   "index.search.slowlog.threshold.query.warn": "10s",
   "index.search.slowlog.threshold.fetch.debug": "500ms",
   "index.indexing.slowlog.threshold.index.info": "5s"
}

where

"index.search.slowlog.threshold.query.warn": "10s" emits a WARN log when a query is slower than 10s.

"index.search.slowlog.threshold.fetch.debug": "500ms" emits a DEBUG log when a fetch is slower than 500ms.

"index.indexing.slowlog.threshold.index.info": "5s" emits an INFO log when indexing a document takes longer than 5s.
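The values currently in effect can be read back, and a threshold can be switched off again by setting it to -1. A sketch, reusing the hypothetical my_index from above:

GET /my_index/_settings

PUT /my_index/_settings
{
   "index.search.slowlog.threshold.query.warn": "-1"
}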

Log file storage settings

Default Log storage settings

By default, logs are written through a dailyRollingFile appender, which rotates the log file daily by appending the date to the file name:

file:
	type: dailyRollingFile
	file: ${path.logs}/${cluster.name}.log
	datePattern: "'.'yyyy-MM-dd"
	layout:
		type: pattern
		conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

For example, if the cluster name is marvel-production, the current log will be named marvel-production.log and a rolled log file will have a name like marvel-production.log.2016-08-08.

Search Slow Log storage settings

The search slow log is written to its own file using the following configuration; with the cluster name above, the file will be named marvel-production_index_search_slowlog.log:

index_search_slow_log_file:
	type: dailyRollingFile
	file: ${path.logs}/${cluster.name}_index_search_slowlog.log
	datePattern: "'.'yyyy-MM-dd"
	layout:
		type: pattern
		conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

Custom logger settings

We can customize the logger settings to roll the log files based on size instead, using the following configuration:

# Rolling file based on size
file:
	type: rollingFile
	file: ${path.logs}/${cluster.name}.log
	maxFileSize: 1000000
	maxBackupIndex: 10
	layout:
		type: pattern
		conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

In the above configuration, maxFileSize is the size in bytes at which the log file is rolled, and maxBackupIndex is the number of rolled files to keep. When the number of rolled files exceeds maxBackupIndex, the oldest files are deleted.
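An appender only takes effect once a logger references it. A minimal sketch of the rootLogger line from logging.yml that ties the console and file appenders to the configured level:

# wire the appenders to the root logger
rootLogger: ${es.logger.level}, console, file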

 

Happy coding!

Elasticsearch Index aliases

Elasticsearch allows the user to create an alias for one or more indices.

For example, suppose we maintain different groups of employee details in different indices and want to query the employee details of all groups from our application. We can create an alias over the existing indices.

That way we can query all the required indices through a single endpoint. If we frequently add new indices to our data sources, we don't need to update the application configuration every time, because the application only knows about the alias.

Whenever we add a new index to our data source, we simply add it to the alias. The endpoint stays the same, so the additional index is picked up without affecting the application (say, a website that reads its data from Elasticsearch).

Index 1: sales-people

Index 2: research-people

Index 3: management-people

Alias: our-employees (sales-people, research-people, management-people)

In this case, if we want to query Index 1 through Index 3, we can simply run our query against the "our-employees" alias.
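Queries then target the alias exactly like a single index. A sketch, assuming the alias already covers the three indices above:

GET /our-employees/_search
{
   "query": {
      "match_all": {}
   }
}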

Following are some snippets of Elasticsearch alias actions.

Adding Alias

POST /_aliases
{
   "actions": [
      {
         "add": {
            "index": "sales-people",
            "alias": "our-employees"
         }
      }
   ]
}
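A quick way to verify the result is to read the alias back; the response lists every index carrying the alias:

GET /_alias/our-employees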

Removing Alias

POST /_aliases
{
   "actions": [
      {
         "remove": {
            "index": "sales-people",
            "alias": "our-employees"
         }
      }
   ]
}

Adding and removing an index in an alias using the same call

POST /_aliases
{
   "actions": [
      {
         "remove": {
            "index": "sales-people",
            "alias": "our-employees"
         }
      },
      {
         "add": {
            "index": "research-people",
            "alias": "our-employees"
         }
      }
   ]
}
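All actions in a single _aliases call are applied atomically, so clients querying the alias never see it in a half-updated state. An alias entry can also carry a filter so that only matching documents are visible through it; a sketch, assuming a hypothetical boolean field named active in the employee documents:

POST /_aliases
{
   "actions": [
      {
         "add": {
            "index": "management-people",
            "alias": "our-employees",
            "filter": { "term": { "active": true } }
         }
      }
   ]
}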

Happy exploring data 🙂