How to Deploy Splunk SmartStore

Splunk SmartStore is a fundamental change in how Splunk stores data, allowing the use of an S3-compliant object store for your indexed data. In this article, Tom and Brian will walk you through configuring SmartStore on an existing Splunk instance and cluster, and migrating an existing index to this new storage.



Introduction

One of the major features released in Splunk 7.2 is SmartStore - a mechanism to use AWS S3 (or other S3 API compliant object stores) as volumes for storing your indexed data in Splunk.

We’re not going to spend a lot of time going into the details of what SmartStore is and the benefits of using it, since Splunk already has a fairly comprehensive series on this. Check out the blogs Splunk SmartStore: Cut the Cord by Decoupling Compute and Storage and Splunk SmartStore: Disrupting Existing Large Scale Data Management Paradigms.

The biggest takeaway here, however, is that this represents a fundamental shift in how data can be stored in Splunk.

Traditionally, Splunk relied on dedicated storage for each indexer, with various classes of storage to help manage costs (much of which is covered in one of our other tutorials, Splunking Responsibly Part 2: Sizing Your Storage). With SmartStore, the concepts of hot, warm, and cold storage are replaced with a cache manager and an object store. This effectively allows historical data to be stored longer, and at a lower cost, than traditional on-premises storage options.

Let’s assume you have an existing Splunk environment and want to take SmartStore for a spin. Here’s what you need to do to get it deployed.

Standard Warning About Changing Things

Whenever you’re manipulating index settings, there is a risk of data loss. If you’re fortunate enough not to be using your test environment for production, a lab is absolutely the best place to experiment. All of our testing was done first on a standalone Splunk instance before trying it out in our lab environment, which closely mirrors one of our managed Splunk clients.

Prerequisite Configuration

The one prerequisite for implementing SmartStore is an S3-compatible object store. For our testing, we used Amazon’s S3, so we didn’t have to worry about compatibility. If you’re using a third-party object store, the Splunk docs have some information on what is required for SmartStore to work. You can read more in the Managing Indexers and Clusters of Indexers documentation.

Assuming you’re using S3, configure a new bucket and enable API access as follows:

1.  Navigate to S3 in the AWS Management Console.

2.  Create a bucket.

3.  Give the bucket a globally unique name. For testing, none of the other defaults need to be changed. Record the bucket name and the AWS region where it is created - you’ll need this information later.

4.  Once the bucket is created, you’ll see it in the AWS console.

5.  Create an API key by navigating to IAM in the AWS console.

6.  Navigate to Users -> Security credentials -> Create access key.

7.  Record both the Access key ID and the Secret access key shown - you’ll need these later.

8.  While not strictly necessary, we also created a folder inside the bucket for each test we did (standalone for our initial testing, CCN for our test customer).

Once these steps are completed, the AWS side of the setup is done.
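
If you prefer the command line, the same steps can be sketched with the AWS CLI. The bucket name and region here are the ones from our testing, and the IAM user name is a hypothetical placeholder:

# Create the bucket (use your own globally unique name and region)
aws s3api create-bucket --bucket splunk-smartstore-testing \
    --region us-east-2 \
    --create-bucket-configuration LocationConstraint=us-east-2

# Create an access key for an existing IAM user (user name is hypothetical);
# this returns the Access key ID and Secret access key you'll need later
aws iam create-access-key --user-name splunk-smartstore

# Confirm the bucket is reachable
aws s3 ls s3://splunk-smartstore-testing/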

Deploying SmartStore on a Standalone Indexer

For our first test, we performed a clean installation of Splunk 7.3 on an Ubuntu virtual machine, then configured a new index with SmartStore. For testing, this simply involved changes to $SPLUNK_HOME/etc/system/local/indexes.conf on our instance.

Note: Splunk’s documentation on Deploying SmartStore gives a great example of the configuration to use.

Our final test indexes.conf ended up looking like this:

dev01:~# cat /opt/splunk/etc/system/local/indexes.conf
[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
#remotePath = volume:remote_store/$_index_name

# Configure the remote volume
[volume:remote_store]
storageType = remote

# On the next line, the volume's path setting points to the remote storage location
# where indexes reside. Each SmartStore index resides directly below the location
# specified by the path setting. The <scheme> identifies a supported remote
# storage system type, such as S3. The <remote-location-specifier> is a
# string specific to the remote storage system that specifies the location
# of the indexes inside the remote system.
# This is an S3 example: "path = s3://mybucket/some/path".

# Our S3 bucket and path (comments must be on their own line in .conf files):
path = s3://splunk-smartstore-testing/standalone

# The following S3 settings are required only if you're using the access and secret
# keys. They are not needed if you are using AWS IAM roles.

remote.s3.access_key = <access key ID>
remote.s3.secret_key = <secret access key>
# The endpoint for our AWS region (us-east-2):
remote.s3.endpoint = https://s3.us-east-2.amazonaws.com

We also created a test index, cs_index, in the same indexes.conf file, and later added the _internal index as an additional test index:

# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/$_index_name/db
# SmartStore-enabled indexes do not use thawedPath or coldPath, but you must still specify them here.
coldPath = $SPLUNK_DB/$_index_name/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb
remotePath = volume:remote_store/$_index_name

[_internal]
remotePath = volume:remote_store/$_index_name
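
Before restarting, it’s worth sanity-checking that Splunk parses the merged configuration the way you expect. btool is standard Splunk tooling, and the stanza names here match our example:

$SPLUNK_HOME/bin/splunk btool indexes list volume:remote_store --debug
$SPLUNK_HOME/bin/splunk btool indexes list cs_index --debug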

Upon a restart of Splunk, we immediately noticed folders created for each index in the AWS S3 console.
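
If you’d rather check from a shell than the console, the same per-index folders are visible with a quick AWS CLI listing (same bucket and path as our configuration above):

aws s3 ls s3://splunk-smartstore-testing/standalone/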

Navigating through the various folders, you’ll notice the same type of Splunk index structure that you’re familiar with.

Testing SmartStore

Once you have SmartStore writing to buckets in S3, you’ll want to make sure that data can be retrieved successfully from a SmartStore index when that data exists only in S3. This is easiest to test when you first begin putting data in SmartStore, since it’s simple to evict everything from your cache while there are only one or two buckets.

You can view all of the buckets known to the cache manager via the REST API, at the endpoint services/admin/cacheman/.
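
For example, with curl (the host, port, and credentials here match the eviction command shown below):

curl -u <username>:<password> -k https://192.168.20.151:8089/services/admin/cacheman/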

To evict a bucket, you can POST its cache manager ID to the REST API using curl. The ID takes the form bid|<index>~<bucket number>~<GUID>|, with the pipe characters URL-encoded as %7C. For our testing, we used this command:

curl -u <username>:<password> -kX POST https://192.168.20.151:8089/services/admin/cacheman/bid%7C_internal~0~ABE42D9B-B567-46D9-8773-73D389146EDD%7C/evict

This will produce a lot of output when it runs.

The easiest way to confirm that the bucket was evicted is to go into the directory for that bucket and confirm that it is empty. The folder will still exist, but it won’t have any contents:

root@dev01:/opt/splunk/var/lib/splunk# ls -l /opt/splunk/var/lib/splunk/_internaldb/db/db_1559281745_1559281745_0_ABE42D9B-B567-46D9-8773-73D389146EDD
total 0

Running a search over this data will result in the bucket being retrieved locally and stored in the cache on disk, which you can see by checking the same folder:

root@dev01:/opt/splunk/var/lib/splunk# ls -l /opt/splunk/var/lib/splunk/_internaldb/db/db_1559281745_1559281745_0_ABE42D9B-B567-46D9-8773-73D389146EDD
total 32
-rw------- 1 root root 1073 Jun 19 11:27 1559281745-1559281745-18197050580047733540.tsidx
-rw------- 1 root root   65 Jun 19 11:27 bloomfilter
-rw------- 1 root root   67 Jun 19 11:27 bucket_info.csv
-rw------- 1 root root  123 Jun 19 11:27 cachemanager_local.json
-rw------- 1 root root  103 Jun 19 11:27 Hosts.data
drwx------ 2 root root   64 Jun 19 11:27 rawdata
-rw------- 1 root root  121 Jun 19 11:27 Sources.data
-rw------- 1 root root  109 Jun 19 11:27 SourceTypes.data
-rw------- 1 root root   41 Jun 19 11:27 Strings.data

Extending This to a Cluster with Existing Data

After our lab testing was successful, we followed a similar process to implement SmartStore on an indexer cluster. The configuration files were nearly identical, with a few differences:

  • The SmartStore configuration was put in an infra_smartstore_base app, which was deployed to the indexers via the cluster master (a sketch of this app follows the list below).
  • The remotePath definition for the existing index was placed in the same app where indexes are defined.
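
As a rough sketch, here’s what that looked like in our lab. The bucket path reuses the CCN folder created earlier; the app name is our own convention, and the index stanza below is a hypothetical stand-in for whatever existing indexes you are migrating:

# On the cluster master:
# $SPLUNK_HOME/etc/master-apps/infra_smartstore_base/local/indexes.conf

[volume:remote_store]
storageType = remote
path = s3://splunk-smartstore-testing/CCN
remote.s3.access_key = <access key ID>
remote.s3.secret_key = <secret access key>
remote.s3.endpoint = https://s3.us-east-2.amazonaws.com

# In the app where the existing index is defined
# (the stanza name here is hypothetical):
[your_existing_index]
remotePath = volume:remote_store/$_index_name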

After a rolling restart of the indexer cluster, our data began showing up in S3 - while remaining searchable the entire time. Over the course of the next several hours, we watched local disk usage shrink as more and more data was uploaded to AWS.
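
For reference, both the bundle push and the rolling restart are standard CLI commands issued on the cluster master:

$SPLUNK_HOME/bin/splunk apply cluster-bundle
$SPLUNK_HOME/bin/splunk rolling-restart cluster-peers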

Tracking SmartStore Activity

SmartStore-related internal logs are stored in the Splunk _internal index under the CacheManager component. In our lab environment, this means we can track SmartStore activity with the following search:

index=_internal host=ccn*idx* component=CacheManager

Download and upload activities are also easy to locate via the action field.
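
For example, extending the search above (the action values are what we observed in our lab, so treat them as an assumption for your environment):

index=_internal host=ccn*idx* component=CacheManager (action=download OR action=upload)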

Watching this activity over a 36-hour window, the volume varies considerably. There is a burst of uploading when the index is initially configured to use SmartStore, followed by bursts of download activity when searches require data that is only available in S3. Minimal activity is observed when data is only being ingested and not actively searched.

Observations and Conclusions

Overall, we were impressed with how simple it was to configure and get SmartStore working in our testing. The Splunk documentation for the implementation was well written - kudos to the documentation team for excellent work.

In our testing, we noticed that retrieving data from SmartStore adds some latency to searches, which was noticeable even for recent data. However, since the use case for SmartStore primarily involves reducing the cost of long-term data storage, this is less of an issue for infrequently accessed data.

We’ll continue to produce more information on this feature as we work with it more. Let us know about your experiences using SmartStore as well.



