Last Updated on July 19, 2020
This post builds on our previous post, “Splunk Archive to AWS S3.”
This route can be used in either AWS or on-prem deployments. It allows customers and/or admins to mount S3 buckets to a Linux machine. Customers can then use the mount point either as a location for cold storage or as a coldToFrozenDir for their frozen data.
In some environments, the ability to mount S3 storage allows for a tiered approach to storage. I have been to many sites where the customer only had flash storage available; when we discuss data retention requirements and the possibility of 5-10 TB of storage per dedicated indexer, that can frighten some storage teams! I can't stress enough the need to verify AWS connectivity if going the route of cold to S3.
I would highly recommend contacting your AWS sales team about the S3 route and what the cost might be for your data volume. Using S3 storage for frozen data ensures you keep your data per your retention policies while not using up precious flash storage for tape-like data.
In this post, we will not be going over installing Splunk, but we do give examples for indexes.conf.
### Important Note
This setup can be used for cold storage, but if on-prem, it depends on the reliability of your connection to AWS.
### End Note
Tested Environment – The environment used to set up this example of mounted S3 for Splunk archive/cold storage:
• CentOS VM running on a Mac
• AWS S3 bucket in AWS region us-west-1
• Splunk – standalone instance
• Splunk version 7.1
• fuse packages installed
• s3fs packages installed
Splunk S3 Configuration
Step 1: Install Fuse
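On CentOS, the FUSE packages are available from the base repositories. A minimal sketch (package names may vary by distribution and version):

```shell
# CentOS/RHEL: install FUSE, which s3fs depends on
sudo yum install -y fuse fuse-libs
```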
Step 2: Install s3fs
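s3fs-fuse is packaged in EPEL for CentOS; one way to install it is shown below (assumes the EPEL repository can be enabled; you can also build s3fs-fuse from source for a newer version):

```shell
# Enable EPEL, then install s3fs-fuse and confirm the version
sudo yum install -y epel-release
sudo yum install -y s3fs-fuse
s3fs --version
```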
Step 3: Setup AWS access and secret for use
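s3fs reads credentials from a passwd file in the format ACCESS_KEY_ID:SECRET_ACCESS_KEY. A sketch with placeholder values (substitute your own key pair):

```shell
# Store the AWS access key and secret for s3fs (placeholder values shown)
echo "AKIAXXXXXXXXXXXXXXXX:YourSecretAccessKey" > ~/.passwd-s3fs
```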
Step 4: Change file permissions
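s3fs refuses to use a credentials file that other users can read, so restrict it to the owner:

```shell
# Ensure the credentials file exists (created in the previous step), then lock it down
touch ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
```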
Step 5: Mount the S3 bucket
When the above command completes with no errors displayed, check /var/log/messages to verify that no errors were logged there either. By default, s3fs tries us-east-1 first and then works through the other regions to find where the bucket lives; it does this on its own, and the progress can be seen in /var/log/messages.
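The mount command itself might look like the following; the bucket name, mount point, and region endpoint are placeholders for your own values:

```shell
# Mount the bucket (us-west-1 in this example) at /mnt/s3-frozen
sudo mkdir -p /mnt/s3-frozen
s3fs your-bucket-name /mnt/s3-frozen \
  -o passwd_file=${HOME}/.passwd-s3fs \
  -o allow_other \
  -o url=https://s3-us-west-1.amazonaws.com
```

Note that using allow_other as a non-root user requires user_allow_other to be enabled in /etc/fuse.conf.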
Step 6: /etc/rc.local change to automatically remount after reboot
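One way to remount at boot is to append the mount command to /etc/rc.local (bucket name and paths are placeholders; on CentOS 7 the file must also be marked executable for it to run at boot):

```shell
# Append the s3fs mount command so it runs on reboot
echo 's3fs your-bucket-name /mnt/s3-frozen -o passwd_file=/root/.passwd-s3fs -o allow_other' | sudo tee -a /etc/rc.local
sudo chmod +x /etc/rc.local
```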
Step 7: Verify S3 mount
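A mounted bucket shows up with filesystem type fuse.s3fs, so either of these should confirm the mount:

```shell
# Check the mount point and filesystem type
df -hT /mnt/s3-frozen
mount | grep s3fs
```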
Step 8: Write to S3 bucket via Linux cmd
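A quick write test through the mount point (the file name is arbitrary):

```shell
# Write a test file through the mount and confirm it is listed
echo "s3fs write test" > /mnt/s3-frozen/s3fs-test.txt
ls -l /mnt/s3-frozen/
```

After a successful write, the object should also be visible in the S3 console for the bucket.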
Once you have mounted your S3 bucket, you can use the following option in your index stanzas. This route sets the default frozen time period, in seconds, for all indexes:
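A sketch of the indexes.conf settings; the retention value and archive path are illustrative, not recommendations:

```ini
# indexes.conf
[default]
# Roll buckets to frozen after one year (value is in seconds)
frozenTimePeriodInSecs = 31536000
# Archive frozen buckets to the mounted S3 bucket instead of deleting them
coldToFrozenDir = /mnt/s3-frozen/splunk-archive
```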
You can also choose indexes individually to roll to your frozen directory, and let the others simply delete their data when rolling from cold to frozen by not setting the directory.
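For example, with hypothetical index names and paths, only the index that sets coldToFrozenDir keeps its frozen data; the other deletes it at the retention boundary:

```ini
# indexes.conf -- index names and paths are illustrative
[firewall]
homePath   = $SPLUNK_DB/firewall/db
coldPath   = $SPLUNK_DB/firewall/colddb
thawedPath = $SPLUNK_DB/firewall/thaweddb
coldToFrozenDir = /mnt/s3-frozen/firewall

[scratch]
homePath   = $SPLUNK_DB/scratch/db
coldPath   = $SPLUNK_DB/scratch/colddb
thawedPath = $SPLUNK_DB/scratch/thaweddb
# no coldToFrozenDir: frozen buckets are deleted
```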
Looking at the above method of attaching the S3 bucket as a mount, you could also use the S3 mount point as the cold location for your data. You really must trust the reliability of your AWS connection to go this route: any downtime of the connection while Splunk is in use would leave only the hot/warm data searchable.
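Pointing coldPath at the mount might look like this (hypothetical index name and paths):

```ini
# indexes.conf -- cold buckets live on the S3 mount
[weblogs]
homePath   = $SPLUNK_DB/weblogs/db
coldPath   = /mnt/s3-frozen/weblogs/colddb
thawedPath = $SPLUNK_DB/weblogs/thaweddb
```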
Assistance with Archiving Splunk Data
For assistance with archiving Splunk data, or help with anything Splunk, contact Aditum’s Splunk Professional Services consultants. Our certified Splunk Architects and Splunk Consultants manage successful Splunk deployments, environment upgrades and scaling, dashboard, search, and report creation, and Splunk Health Checks. Aditum also has a team of accomplished Splunk Developers that focus on building Splunk apps and technical add-ons.
Contact us directly to learn more.