
3 Splunk Best Practices I Learned the Hard Way

As we neared the end of last year, I was in a reflective mood and thought about the times I’ve ended up getting bitten by simple omissions I’ve made while laying the foundation for a Splunk environment. You know what I’m referring to, right? Those times when you’re in the middle of winding down your evening and the call comes in: Splunk is down. You race to troubleshoot the problem only to discover that it could have been prevented with some tweaks to your initial Splunk deployment. With that in mind, let me share three Splunk best practices that will go a long way in saving you time and face, and allow you to spend your evenings unwinding in peace.

Splunk Best Practice #1: Use Volumes to Manage Your Indexes

You’re probably already familiar with the maxTotalDataSizeMB setting in indexes.conf – it sets the maximum size per index (default: 500,000 MB, roughly 500 GB). While maxTotalDataSizeMB is your first line of defense against hitting the minimum free disk space at which indexing halts, volumes protect you from a miscalculation made when someone creates a new index. You have likely done the due diligence of sizing your indexes to account for growth, retention, and the space available, but another admin who creates an index in your absence may not be as prudent.

Enter the volume. You use volumes to bind indexes together and make sure that in their totality they do not surpass the limits set on the volume. Volumes are configured via indexes.conf and they require a very simple stanza:

Volume Stanza Example
[volume:CustomerIndexes]
path = /san/splunk
maxVolumeDataSizeMB = 120000

The stanza above tells Splunk that we want to define a volume called “CustomerIndexes”, have it use the path “/san/splunk” to store the associated indexes, and limit the total size of all of the indexes assigned to this volume to 120,000 MB. I know your mind has already conceived the next step, which is where we assign indexes to our “CustomerIndexes” volume. This is also done in indexes.conf, by prefixing your index’s homePath (hot/warm buckets) and coldPath (cold buckets) with the name of the volume:

Adding the Prefixes
[AppIndex]
homePath = volume:CustomerIndexes/AppIndex/db
coldPath = volume:CustomerIndexes/AppIndex/colddb
thawedPath = $SPLUNK_DB/AppIndex/thaweddb

[RouterIndex]
homePath = volume:CustomerIndexes/RouterIndex/db
coldPath = volume:CustomerIndexes/RouterIndex/colddb
thawedPath = $SPLUNK_DB/RouterIndex/thaweddb

*PRO TIP – use $_index_name to reference the name of your index definition

Using $_index_name
[RouterIndex]
homePath = volume:CustomerIndexes/$_index_name/db
coldPath = volume:CustomerIndexes/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

The beauty of this approach is that once it’s in place, it’s hard for the next admin to miss when creating a new index via indexes.conf or the web interface, even if they were never told about it. With a volume in play, its maxVolumeDataSizeMB setting caps the combined size of the assigned indexes, regardless of each index’s own maxTotalDataSizeMB. In our example, left to their own devices, AppIndex and RouterIndex would each have grown to their default maximum of 500,000 MB, taking up roughly 1 TB of storage between them. With volumes we no longer have to worry about this. As a bonus, there is nothing stopping you from using separate volumes for hot/warm and cold buckets if you have different tiers of storage available.
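
For example, if you keep hot/warm buckets on fast storage and cold buckets on a cheaper tier, a two-volume layout might look something like the sketch below. The paths and sizes here are purely illustrative; adjust them to your own storage.

Two-Tier Volume Sketch
[volume:HotWarmVolume]
path = /fast_san/splunk
maxVolumeDataSizeMB = 60000

[volume:ColdVolume]
path = /slow_san/splunk
maxVolumeDataSizeMB = 240000

[AppIndex]
homePath = volume:HotWarmVolume/$_index_name/db
coldPath = volume:ColdVolume/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb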


Splunk Best Practice #2: Use Apps and Add-ons Wherever Possible

The phrase is a cliché by now, but in the Splunk world it pays to be the admin who says, “there’s an app for that”. We all know you can download apps from Splunkbase to extend Splunk’s functionality, and you’ve probably already used the deployment server to manage inputs and technical add-ons (TAs) on universal forwarders. But what about using apps to manage an environment? Not only is it possible, it’s recommended.

For starters, we’re assuming all of our Splunk nodes are configured to use our deployment server. If not, you can run these handy CLI commands on each instance:

Configuring the Nodes
splunk set deploy-poll <IP_address/hostname>:<management_port>
splunk restart

Now continue by identifying stanzas that will be common across groups of nodes (search heads, indexers, forwarders, etc.) or across all nodes. For example, two useful stanzas in outputs.conf make sure every node knows which indexers it needs to forward data to:

2 Useful Stanzas
[tcpout]
defaultGroup=Indexers

[tcpout:Indexers]
server=IndexerA:9997, IndexerB:9996

Create the following directory structure on your deployment server to accommodate our new app’s config files, and place your version of the outputs.conf file with the stanzas above in the “local” subdirectory. In this example we’re naming our app “all_outputs”.
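
Assuming a default installation, deployment apps live under $SPLUNK_HOME/etc/deployment-apps, so the layout for our “all_outputs” app would look something like this:

Deployment App Layout
$SPLUNK_HOME/etc/deployment-apps/
    all_outputs/
        local/
            outputs.conf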

Great, that was the hard part! You’ll need to repeat this and create a new app on the deployment server for every config file that a group of nodes has in common. Here are a few ideas:

  • All search heads usually share the same search peers, and this can be accomplished via an app that provides distsearch.conf
  • All indexers will need the same version of props.conf and transforms.conf to consistently parse the data they ingest
  • All forwarders can use an app that configures the allowRemoteLogin setting via server.conf so they can be managed remotely (a minimal sketch follows this list)
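
As a minimal sketch of that last idea, the forwarder-facing app could ship a server.conf along these lines. Keep in mind that allowing remote login unconditionally is a convenience/security trade-off, so weigh it against your own policies:

Forwarder Management Sketch
[general]
allowRemoteLogin = always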

To tie everything together, log on to your deployment server’s GUI and go to Settings > Forwarder Management to create server classes for the different groups of nodes in your Splunk environment and assign the appropriate apps and hosts to each server class.
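
Behind the scenes, Forwarder Management records these mappings in serverclass.conf on the deployment server, so you can also manage them as configuration if you prefer. The class name and whitelist pattern below are purely illustrative:

Server Class Sketch
[serverClass:all_forwarders]
whitelist.0 = *

[serverClass:all_forwarders:app:all_outputs]
restartSplunkd = true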

Now here comes the fun part! Next time someone calls you up asking how to stand up a new heavy forwarder (or any other instance type) you can answer “there’s an app for that.”

Splunk Best Practice #3: Keep an Eye on Free Disk Space

Splunk frequently checks the free space available on any partition that contains indexes. Before executing a search, it also checks for enough free space where the search dispatch directory is mounted (usually wherever Splunk is installed). By default, the threshold is 5,000 MB, configurable via the minFreeSpace setting in server.conf. When it’s reached, you can count on getting a call from your users informing you that Splunk has stopped indexing or that searches are not working.
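
If you ever need to adjust that threshold, it lives under the [diskUsage] stanza in server.conf, shown below with its default value. Raising it makes Splunk halt sooner and leaves you more room to recover; lowering it is rarely a good idea:

minFreeSpace Setting
[diskUsage]
minFreeSpace = 5000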

It’s important to keep a close eye on this when your instance is running on a partition with less than 20 GB of free space, as Splunk will use several GB for its own processes. It’s difficult to predict exactly how an environment will grow, because several directories expand with daily use and are not governed by the limits set on indexes or volumes. The top places to look for growth are listed below, with a quick way to spot-check them after the list:

  • Dispatch directory ($SPLUNK_HOME/var/run/splunk/dispatch)
  • KV store directory ($SPLUNK_DB/kvstore)
  • Configuration bundle directory ($SPLUNK_HOME/var/run/splunk/cluster/remote-bundle)
  • Knowledge bundle directory ($SPLUNK_HOME/var/run/searchpeers)
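
A quick way to spot-check these on a given node, assuming $SPLUNK_HOME and $SPLUNK_DB are set in your shell (the last two directories only exist on cluster peers and search peers, respectively):

Spot-Checking Directory Growth
du -sh $SPLUNK_HOME/var/run/splunk/dispatch
du -sh $SPLUNK_DB/kvstore
du -sh $SPLUNK_HOME/var/run/splunk/cluster/remote-bundle
du -sh $SPLUNK_HOME/var/run/searchpeers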

To avoid any surprises, use your monitoring tool of choice to alert on low disk space. A favorite of mine is implementing NMON across the cluster, as it provides all kinds of useful metrics for troubleshooting and monitoring your environment, and it conveniently ships with a predefined low disk space alert that you can adjust to your needs.
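
If you would rather keep the alert inside Splunk itself, a scheduled search over the platform instrumentation data can do the job. The sketch below assumes the _introspection index is being populated and that the Partitions data reports sizes in MB; treat the field names and the 20,480 MB threshold as a starting point to verify against your version:

Low Disk Space Alert Sketch
index=_introspection component=Partitions
| stats latest(data.available) AS available_mb latest(data.capacity) AS capacity_mb BY host, data.mount_point
| where available_mb < 20480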

Splunk Best Practices at Your Fingertips

Aditum’s Splunk Professional Services consultants can assist your team with best practices to optimize your Splunk deployment and get more from Splunk. Our certified Splunk Architects and Splunk Consultants manage successful Splunk deployments, environment upgrades and scaling, dashboard, search, and report creation, and Splunk Health Checks. Aditum also has a team of accomplished Splunk Developers that focus on building Splunk apps and technical add-ons.

Contact us directly to learn more.
