Hello fellow Splunkerinos and welcome to another edition of “things I wish someone had explained to me.” This time around I’ll introduce you to a seldom-used but powerful command: foreach.
The foreach command is a tricky little thing to pin down. For starters, it takes some effort to wrap your mind around its purpose. If you are a programmer, you’re familiar with what the foreach looping operator does in other languages, except that here there are no built-in counters to track which iteration you’re in, and you’re asked to use unusual tokens that appear nowhere else in Splunk outside of this command.
Fear not! Once you’re past those two hurdles, you’ll find it makes your searches more compact, more automated, and resilient to changing field names.
Syntax per docs.splunk.com
The syntax definition found in the documentation makes the command seem more complicated than it is. What it’s trying to tell you is how to access the segments of the field name and the actual value of the field.
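Paraphrased, the shape of the command is roughly the following (check docs.splunk.com for the authoritative, version-specific definition):

```spl
foreach <wc-field>... [fieldstr=<string>] [matchstr=<string>]
    [matchseg1=<string>] [matchseg2=<string>] [matchseg3=<string>]
    <subsearch>
```

Inside the subsearch you get templated tokens: <<FIELD>> for each matched field name, <<MATCHSTR>> for the text matched by the wildcards, and <<MATCHSEG1>> through <<MATCHSEG3>> for the individual wildcard matches. The optional arguments simply let you rename those tokens.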
How foreach works
Basically, foreach runs your subsearch once for every field in your data that matches the wc-field argument. Let’s break down its syntax by using an example to introduce the <<FIELD>> token.
We start off with this very simple dataset:
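Something along these lines; the field names and values here are my own illustration of the format this post describes, not the original data:

```
Building1_WebServer_CPU   Building1_WebServer_Memory   Building2_DBServer_CPU   Building2_DBServer_Memory
45                        60                           70                       80
```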
As you can see, the format of the field is: Building#_DeviceName_Metric.
Now let’s say we’d like to create a new field for every one of the existing fields that contains the field name and the field value. We’ll use the same field names for the new field, but we’ll prefix it with the “info_” string.
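A search along these lines does it (my reconstruction, assuming illustrative field names that follow the Building#_DeviceName_Metric format, such as Building1_WebServer_CPU):

```spl
... | foreach Building* [eval info_<<FIELD>> = "<<FIELD>>" . "=" . <<FIELD>>]
```

The `.` operator is eval’s string concatenation, so each new field holds the old field’s name followed by its value.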
The result is:
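Assuming a field such as Building1_WebServer_CPU with a value of 45, each new field looks something like:

```
info_Building1_WebServer_CPU = Building1_WebServer_CPU=45
info_Building2_DBServer_Memory = Building2_DBServer_Memory=80
```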
Did you catch what happened after the equal sign? Take a minute to go through how we use the “<<FIELD>>” token enclosed in quotes to refer to the field name, but the <<FIELD>> token without quotes to refer to the value of the field. To define the new field names, we used info_<<FIELD>>. One caveat: we don’t put quotes around the <<FIELD>> token before the equal sign, where it references the old field name to build the new one.
Moving on to the segments, let’s say we’re only interested in keeping the device names and their metrics and so we’d like to remove the prefix that refers to the building number from the field names.
This gets us partially there:
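A first attempt might look like this (my sketch, again assuming illustrative field names such as Building1_WebServer_CPU):

```spl
... | foreach Building* [eval <<MATCHSTR>> = <<FIELD>>]
```

Here the single wildcard matches everything after the literal Building, so <<MATCHSTR>> expands to something like 1_WebServer_CPU: the Building prefix is gone, but the building number is still part of the new field name.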
Notice how we used the <<MATCHSTR>> token to refer to whatever was matched by the wildcard, and how we used it for the new field names. We still have the building numbers to deal with, though. For this we’ll need to be more granular by using the <<MATCHSEG#>> tokens:
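Sticking with illustrative field names such as Building1_WebServer_CPU, the more granular version looks like this (my reconstruction):

```spl
... | foreach *_*_* [eval <<MATCHSEG2>>_<<MATCHSEG3>> = <<FIELD>> | fields - <<FIELD>>]
```

For Building1_WebServer_CPU this creates WebServer_CPU and then drops the original field.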
There are two things worth noting here. The first is that we used <<MATCHSEG2>> to reference the string in the field name matched by the second wildcard (similarly, <<MATCHSEG1>> would have matched the first wildcard’s string and <<MATCHSEG3>> the third). The second is that we added the fields command to remove the original fields from our search. Neat, right? Now you can get very creative with this.
Now that you’re getting the hang of the foreach command, let’s do something that you’d only see elite Splunkers do. For this scenario, let’s say you have a data set that contains metric values, and you’d like to compare those metrics against thresholds defined in a lookup file. The interesting part is that you don’t know the field names in advance; you only know their format.
Assuming we already loaded our metrics and thresholds into our data set from an index and a lookup file, we start with the following data set:
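For illustration (these field names are my own invention; the point is only that metrics and thresholds share a predictable naming convention), picture a data set like:

```
CPU_metric   Memory_metric   CPU_threshold   Memory_threshold
85           60              80              75
```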
But with miniature lightning bolts emanating from our fingertips we craft:
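Something in this spirit (my reconstruction, assuming metric fields named <Name>_metric with matching <Name>_threshold fields):

```spl
... | foreach *_metric [eval <<MATCHSEG1>>_status = if(<<FIELD>> > <<MATCHSEG1>>_threshold, "above_threshold", "ok")]
```

Because the tokens are substituted before the eval runs, <<MATCHSEG1>>_threshold resolves to a real field reference such as CPU_threshold, which is what lets the comparison work without knowing the names in advance.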
And that gets us what we’re looking for:
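Assuming metric fields such as CPU_metric=85 with CPU_threshold=80, the result looks something like:

```
CPU_metric=85     CPU_threshold=80     CPU_status=above_threshold
Memory_metric=60  Memory_threshold=75  Memory_status=ok
```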
We took advantage of the <<MATCHSEG1>> token to create the new field names and put the threshold-checking logic inside the subsearch itself. Intense, right?
Final thoughts on foreach
The foreach command allows you to define a set of commands to be executed on a group of fields you select. This example came from a recent monitoring use case, but its applications are wide open. The most common use I’ve seen is saving the Splunker from writing multiple eval statements. Keep in mind, though, that its flexibility also lets you write queries where users can add or remove fields in the data (usually via a lookup) that keep getting processed without you having to lift a finger. Just make sure your users are clear on the naming convention, and they’ll quickly realize they have a Splunk ninja on their team.
Aditum’s Splunk Professional Services consultants can assist your team with best practices to optimize your Splunk deployment and get more from Splunk. Our certified Splunk Architects and Splunk Consultants manage successful Splunk deployments, environment upgrades and scaling, dashboard, search, and report creation, and Splunk Health Checks. Aditum also has a team of accomplished Splunk Developers that focus on building Splunk apps and technical add-ons.
Contact us directly to learn more.