Fun with Splunk Search Head Clustering Replication

How does replication work in a Splunk search head cluster? This tutorial will give you insight into how this is handled so you will be better equipped to troubleshoot any problems that come up in your search head cluster deployment.


  • Tom Kopchak
  • Mar 12, 2019
  • Tested on Splunk Version: 7.2

Once upon a time there was a Splunk engineer who wasn't quite sure how something worked, so he wanted to test it out. He also was a good Splunk engineer and decided to test it in the lab instead of production. The following tale chronicles his efforts to better understand the inner workings of search head cluster replication.

For whatever reason, I’ve been seeing a number of issues crop up related to search head clustering recently. I’m not going to imply that this technology is any less reliable than others, but it does introduce some management complexity to your environment - especially in terms of troubleshooting. One of these issues deals with a search head cluster where replication was out of sync and needed to be fixed.

During normal operation, replication in a search head cluster is fairly straightforward. Apps get pushed from the deployer to the search head cluster members. As part of the push, any settings in each app’s local directory on the deployer are merged into the default directory on the cluster members. This means that the app directories on your deployer will look different from what ends up on the members - and that’s expected, normal behavior for a search head cluster.
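For reference, a deployer push generally looks something like the following sketch - apps are staged under $SPLUNK_HOME/etc/shcluster/apps on the deployer, and the target can be any cluster member (the credentials here are placeholders):

# Run on the deployer; apps to be distributed are staged under /opt/splunk/etc/shcluster/apps
# The -target URI can point at any search head cluster member
/opt/splunk/bin/splunk apply shcluster-bundle \
    -target https://ccnprodshc01.customerscallnow.com:8089 \
    -auth admin:changeme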

During normal Splunk operation, users also make changes of their own through Splunk Web, the CLI, or the REST API. These changes end up in the local directory of the apps on the search head cluster members - just like they would on a standalone Splunk instance. Changes made this way, known as replicated changes, are automatically replicated across the cluster. This process is coordinated by the search head cluster’s captain - the member responsible for coordinating the operation of the cluster as a whole.
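As a quick illustration, a change made through a supported interface - say, creating a saved search via the REST API on one member - lands in that app’s local directory, and the captain then replicates it to the other members. The search name and credentials below are made up for this sketch:

# Create a saved search on one member through the REST API; the captain replicates it to the rest
curl -k -u admin:changeme \
    https://ccnprodshc01.customerscallnow.com:8089/servicesNS/admin/search/saved/searches \
    -d name=shc_replication_test \
    -d search='index=_internal | head 10'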

This works great until it doesn’t, and an admin needs to intervene to fix the cluster. What do you do then?

What gets replicated?

By default, not every config file in a search head cluster gets replicated. Runtime changes need to be made through supported configuration methods - Splunk Web, the CLI, or the REST API - in order to be replicated. Most notably, the cluster does not replicate configuration changes made manually, such as direct edits to configuration files on disk.
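The set of configuration types that do get replicated is governed by the conf_replication_include settings in the [shclustering] stanza of server.conf. If you’re curious what your members will replicate, btool will show the effective values for your version (check the server.conf spec for your release before relying on any particular default):

# List the effective replication settings on a member
/opt/splunk/bin/splunk btool server list shclustering | grep conf_replication_include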

That doesn’t mean all of your admins will make changes in a way that gets replicated properly, however, especially if they have operating-system-level access to the Splunk instances. Or, you might run into a scenario where you need to restore from a backup, or have a failed search head cluster member, and want to ensure everything is in sync. What do you do?

Fixing replication issues

Fortunately, Splunk has mechanisms available for dealing with these sorts of problems. Before trying any of these steps, be sure to understand what these commands are doing, as one mistake could result in the loss of configuration (backups are always a good idea, too). Additionally, for this specific example, I’m assuming that the search head cluster is otherwise in a healthy state.
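As cheap insurance before experimenting, a quick snapshot of the apps directory on each member only takes a moment (the path below assumes a default /opt/splunk installation):

# Snapshot the apps directory before making any changes (assumes a /opt/splunk install)
tar -czf /tmp/etc-apps-backup-$(date +%Y%m%d).tar.gz -C /opt/splunk/etc apps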

First, it’s important to determine which member of the search head cluster is the captain. The captain controls replication, and it’s the source of truth for the local files on the search head cluster members. This means that if the captain’s configuration is incorrect and you force a resync, all of the other members will end up incorrectly configured too.

To determine which member is captain, run the $SPLUNK_HOME/bin/splunk show shcluster-status command:

# /opt/splunk/bin/splunk show shcluster-status

 Captain:
		               dynamic_captain : 1
		               elected_captain : Mon Feb 25 08:33:35 2019
		                            id : ECF2E6D8-F3E6-47E3-9776-8D6D0CD8725D
		              initialized_flag : 1
		                         label : ccnprodshc03
		                      mgmt_uri : https://ccnprodshc03.customerscallnow.com:8089
		         min_peers_joined_flag : 1
		          rolling_restart_flag : 0
		            service_ready_flag : 1

 Members:
	ccnprodshc03
		                         label : ccnprodshc03
		                      mgmt_uri : https://ccnprodshc03.customerscallnow.com:8089
		                mgmt_uri_alias : https://ccnprodshc03:8089
		                        status : Up
	ccnprodshc01
		                         label : ccnprodshc01
		         last_conf_replication : Mon Feb 25 08:33:51 2019
		                      mgmt_uri : https://ccnprodshc01.customerscallnow.com:8089
		                mgmt_uri_alias : https://192.168.20.241:8089
		                        status : Up
	ccnprodshc02
		                         label : ccnprodshc02
		         last_conf_replication : Mon Feb 25 08:33:53 2019
		                      mgmt_uri : https://ccnprodshc02.customerscallnow.com:8089
		                mgmt_uri_alias : https://192.168.20.242:8089
		                        status : Up

You’ll see that in this example cluster, ccnprodshc03 is the captain.
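If you’d rather check from a script, the shcluster captain info REST endpoint on any member reports the same thing (credentials below are placeholders):

# Ask any member which instance is currently the captain
curl -k -u admin:changeme \
    https://ccnprodshc01.customerscallnow.com:8089/services/shcluster/captain/info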

Next, for the sake of example, we’ll do something we shouldn’t do during normal search head cluster operation: create a local file by hand within an app on the search head cluster captain. I picked the TA-eset app since it didn’t have any existing local configuration, which makes the change easy to spot:

root@ccnprodshc03:/opt/splunk/etc/apps/TA-eset# ls
default  metadata
root@ccnprodshc03:/opt/splunk/etc/apps/TA-eset# mkdir local
root@ccnprodshc03:/opt/splunk/etc/apps/TA-eset# cd local/
root@ccnprodshc03:/opt/splunk/etc/apps/TA-eset/local# ls
root@ccnprodshc03:/opt/splunk/etc/apps/TA-eset/local# vim props.conf
root@ccnprodshc03:/opt/splunk/etc/apps/TA-eset/local# cat props.conf
[random-testing]
FIELDALIAS-stuff = stuff as things

Next, we’ll confirm on the other search head cluster members that this local change was not replicated:

root@ccnprodshc01:/opt/splunk/etc/apps/TA-eset# ls
default  metadata

Finally, on the out-of-sync member, we’ll resync the replicated config, which pulls in our local change from the captain:

root@ccnprodshc01:/opt/splunk/etc/apps/TA-eset# splunk resync shcluster-replicated-config
The member has been synced to the latest replicated configurations on the captain.
root@ccnprodshc01:/opt/splunk/etc/apps/TA-eset# ls
default  local  metadata
root@ccnprodshc01:/opt/splunk/etc/apps/TA-eset# cd local/
root@ccnprodshc01:/opt/splunk/etc/apps/TA-eset/local# ls
props.conf
root@ccnprodshc01:/opt/splunk/etc/apps/TA-eset/local# cat props.conf
[random-testing]
FIELDALIAS-stuff = stuff as things

Wrap up

Hopefully you won’t ever need to deal with search head cluster replication issues. However, if you do need to resync the configuration, it’s good to understand exactly what will be impacted so you’re not learning on a production environment. As this demonstration shows, running the splunk resync shcluster-replicated-config command is an effective way to bring a search head cluster member back in sync with the captain in the event of replication issues or unreplicated local configuration changes.



