The No-Nonsense Guide to Creating Classy Server Classes in Splunk

If you’re an administrator who takes part in managing a Splunk environment, this blog post will help you understand server class organization strategies for a more successful deployment server experience.

Introduction

Planning is a huge success factor when using Splunk, and designing server classes is no different. For Splunk administrators, taking the time to consider future client additions, the needs you’ll be addressing, and naming scheme choices will prevent a lot of stress.

A deployment server is usually born out of the necessity of deploying apps to clients–the systems you are likely eager to ingest data from. However, amid the excitement, many organizations fall short in planning for the future. An arbitrary name is chosen, clients are added to get logs coming in, and things quickly start piling up. It doesn’t take long for this inconsistent behavior to become unmanageable, especially when you have multiple admins each with their own naming preferences. With a little planning and foresight, a lot of headaches can be prevented.

In this blog, I will be reviewing some experiences and advice to help organize your server classes–whether you are at the planning stage or in the midst of trying to clean up preexisting bad habits.

Considerations for your deployment server

A common and necessary standard app contains output configuration, which defines which indexers a Splunk system sends its data to.

Already we begin to think of the deployment server beyond the mechanism of delivering our apps. If we change our indexing configuration, wouldn’t it be nice to be able to update one app and be confident all necessary deployment clients will pull it down?

When staging a deployment server, there are ultimately three components to consider:

  1. server class
  2. apps
  3. clients

Diving into the function of each is beyond the scope of this article, but each presents an organization opportunity. In some cases, you may not have control over the name of an app. For example, while you can technically rename any app you download from Splunkbase, we’d recommend keeping the original name for a number of reasons–not least to avoid issues when upgrading the app in the future.

Descriptive naming schemes for quick app identification

For apps that you author, however, establishing a meaningful naming scheme will save you and anyone co-managing your environment a lot of trouble.

We’ve found a lot of success using a business prefix to quickly identify apps that belong to you, such as “hl_na_uf_outputs”. This name is also descriptive enough for someone to reasonably guess what it is for: output configuration for Hurricane Labs North American universal forwarders.
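As a sketch of what such an app might contain (the output group name and indexer addresses below are placeholders, not real configuration), a minimal default/outputs.conf could look like:

```ini
# default/outputs.conf inside the "hl_na_uf_outputs" app
# (group name and indexer hostnames are hypothetical)
[tcpout]
defaultGroup = na_indexers

[tcpout:na_indexers]
server = idx01.example.com:9997, idx02.example.com:9997
```

Because the app name already tells you the region and the purpose, anyone reviewing the deployment server can guess what this app does without opening it.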

Remember to always be planning ahead. If you only support Splunk systems in North America right now, is it possible that you may someday include another region? If so, it’s a lot easier to account for it now than have to change many things later.

Let’s look at an example

Say you have a web server in a DMZ for public access and a web server on an internal subnet for internal access. In terms of ingesting web logs, both of these servers primarily use the same Splunk configuration–but for various reasons their data goes to different indexes.

Both web servers are based in the United States, so they’ll receive the same output configuration as discussed above. Since their web logs are destined for different indexes, they’ll need different input configuration. We will at least need two server classes and two similar, but separate, input apps.

Let’s start with the server classes. We need something more descriptive than “web_servers” or even “na_web_servers” since that applies to both. Generally, refrain from naming a server class (or app, for that matter) after a specific client. In the spirit of future planning, this limits us from using it for other systems that may apply later.

When you need more specific naming, consider the business requirement for these to be separate. In this example, only a specific team is approved to review the logs from the public web server, so those logs go to their own index for access control. Perhaps we use the names “dmz_web_server” and “internal_web_server”.

You might look at those names and ask, “but Adam, shouldn’t we consider if we will ever have web servers outside of the US?” Yes, you should! If that expansion is possible, “na_dmz_web_server” would be even better!

Our two new server classes now need apps, and we’ve discussed that the inputs will need to be different for each. We can use the work we’ve already done naming the server class here, too. An app name such as “na_dmz_web_server_inputs” is an excellent, descriptive name.
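As a sketch of what that inputs app might hold (the log path, index, and sourcetype here are assumptions for illustration), the DMZ version could contain:

```ini
# default/inputs.conf inside "na_dmz_web_server_inputs"
# (path, index, and sourcetype are placeholders)
[monitor:///var/log/nginx/access.log]
index = web_dmz
sourcetype = nginx:access
disabled = 0
```

The internal counterpart, perhaps “na_internal_web_server_inputs”, would be nearly identical, differing only in the index it writes to.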

Remember, you’ll generally want to follow this naming scheme for any internal apps you need, regardless of whether they carry inputs, outputs, props, transforms, and so on. Pick something that is right for your organization now and in the future!

The final piece–identifying the clients–can be one of the most challenging. In our example it’s easy: we can simply whitelist the appropriate web server–but we’re thinking forward, right? One of the more common problem scenarios we see occurs when the universal forwarder agent is installed on a system and the expected logs never start coming in.

Setting up consistent filters for proper execution

Whitelists/blacklists are primarily used to determine what server classes a client matches (and by extension, what apps they receive). These can use regular expressions and can be set up to match future hosts that may check in. Since we’re always planning ahead, that sounds great, right?
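As a sketch of what pattern-based matching can look like (the client names below are made up to follow the earlier example’s naming convention), consider:

```ini
[serverClass:na_dmz_web_server]
# Match current and future DMZ web servers by naming convention
whitelist.0 = usdweb*
# ...but exclude test boxes that happen to share the prefix
blacklist.0 = usdwebtest*
```

Patterns like these are powerful, but only as reliable as the naming convention they depend on.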

I’ve encountered many incidents of lost (and sometimes unwanted) data caused by great intentions but improper execution of these patterns. This approach also relies on a consistent internal naming scheme for servers and/or IP addressing, which is most likely out of your control as a Splunk administrator.

Take extra care on items that can affect the functionality of Splunk, such as outputs. One of the most common culprits of a host not indexing data has been that it isn’t receiving any output configuration because it doesn’t match a client whitelist.
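One common guard, in a single-region environment, is a broad server class dedicated to outputs so that every client that checks in receives them. A sketch (the server class name is hypothetical; the app name follows the earlier example):

```ini
[serverClass:all_forwarder_outputs]
# Match every client that phones home
whitelist.0 = *

[serverClass:all_forwarder_outputs:app:hl_na_uf_outputs]
restartSplunkd = 1
stateOnClient = enabled
```

With this in place, a newly installed forwarder can start sending data as soon as it checks in, even before it matches any more specific server class.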

Once a deployment server has been operational for a while, it is likely to have acquired some bad habits, especially if managed by multiple hands. Auditing your server classes is a useful exercise to ensure that you are standardizing apps and not deploying anything undesired, while cleaning up old or confusing configuration that may no longer be needed.

Using the serverclass.conf file to define server classes

The file serverclass.conf on your deployment server contains all the information about what server classes exist, which apps belong to them, and the clients that match. While you can get the same information by clicking through tabs in the web interface, I find it easier to get the information needed from this file directly–but be careful, and definitely take backups before you make any changes.

Reviewing the structure of this configuration file is beyond the scope of this article, but let’s take a look at the general layout. Server classes are defined in one stanza along with the client matching rules, such as:

[serverClass:na_dmz_web_server]
whitelist.0 = usdwebexample01

Apps are then defined in that server class with a separate, although very similar, stanza. Notice that the app name, “na_dmz_web_server_inputs”, is added here:

[serverClass:na_dmz_web_server:app:na_dmz_web_server_inputs]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

We can glean quite a bit from just these two stanzas. The server class “na_dmz_web_server” has a client whitelist of “usdwebexample01” and contains an app “na_dmz_web_server_inputs” that is configured to restart Splunkd if changes are detected.

If we were auditing this, we may ask ourselves if the whitelist should be a specific host, as it is now, or if we could use a pattern such as usdweb* to match future hosts. We would definitely be concerned if we knew usdwebexample02 existed, since our current whitelist doesn’t match it. If we were receiving data from usdwebexample02 into Splunk, it likely means someone built a similar app just for it and the two can be consolidated–this is a very common finding.

This file can be intimidating, especially in a large environment like the one I’m currently auditing for one of our clients. A good place to start is using grep to output just a list of server classes, track them in a spreadsheet, and work through them one by one.

An example command that will achieve this type of output would be:

grep "^\[serverClass" serverclass.conf | grep -v ":app:"
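To see what this produces, here is a self-contained sketch that builds a tiny sample serverclass.conf (the stanza names are made up, reusing the earlier example) and runs the same filter:

```shell
# Create a small sample serverclass.conf to demonstrate the filter
cat > /tmp/serverclass_sample.conf <<'EOF'
[serverClass:na_dmz_web_server]
whitelist.0 = usdwebexample01

[serverClass:na_dmz_web_server:app:na_dmz_web_server_inputs]
stateOnClient = enabled

[serverClass:internal_web_server]
whitelist.0 = usdwebexample02
EOF

# Keep only server class stanza headers, dropping the per-app stanzas
grep "^\[serverClass" /tmp/serverclass_sample.conf | grep -v ":app:"
# Prints:
# [serverClass:na_dmz_web_server]
# [serverClass:internal_web_server]
```

The first grep keeps every stanza header that begins with “[serverClass”, and the second drops the app stanzas, leaving one line per server class–exactly the list you want in your audit spreadsheet.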

Conclusion

At Hurricane Labs, we often find ourselves in one of two situations: a relatively new deployment server needing server classes or a veteran deployment server with a collection of server classes ranging from “why is this here?” to self-explanatory.

It can be difficult to look back and figure out what someone was thinking, even if that someone was actually you. The pressure to get a deployment server working has the tendency to overshadow the value of stable organization at times, but putting the effort in when you can will make all the difference while managing your Splunk environment.

I hope this article helps you understand server class best practices so you can incorporate more strategic planning for your Splunk success. Happy Splunking!


