Splunk S3 Configuration: How to Mount S3 Storage to a Linux Machine

This post builds on our previous post, “Splunk Archive to AWS S3.”

This route works for either AWS or on-prem deployments. It allows customers and/or admins to mount S3 buckets on a Linux machine; customers can then use the mount point either as a location for cold storage or as a coldToFrozenDir for their frozen data.

In some environments, the ability to mount S3 storage allows for a tiered approach to storage. I have been to many sites where the customer only had flash storage available. When we discuss the need for data retention and the possibility of 5-10TB of dedicated storage per indexer, this can frighten some storage teams! I can’t stress enough the need to ensure reliable AWS connectivity if going the route of cold to S3.
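
If you want a quick sanity check of that connectivity before committing, you can probe the S3 endpoint for your region from the indexer. This is a minimal sketch assuming outbound HTTPS is allowed and, for the second command, that the AWS CLI is installed and configured; the region and bucket name are examples:

### Begin Commands ###
# Check that the regional S3 endpoint answers over HTTPS (us-west-1 is an example)
curl -sI https://s3.us-west-1.amazonaws.com | head -1

# If the AWS CLI is available, confirm the bucket itself is reachable
aws s3 ls s3://my-test-bucket

### End Commands ###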

I would highly recommend contacting your AWS sales team about the S3 route and what the cost might be for your data volume. Using S3 storage for frozen ensures you keep your data per your retention policies while not using up precious flash storage for tape-like data.

Here are some additional resources to assist you in this process:

https://calculator.s3.amazonaws.com/index.html

https://aws.amazon.com/s3/

In this post we will not be going over installing Splunk, but we do give examples for indexes.conf.

### Important Note
This setup can be used for cold storage, but for on-prem deployments only if the connection to AWS is reliable.
### End Note

Tested Environment – The environment used to set up an example of using mounted S3 for Splunk archive/cold:

• CentOS VM running on a Mac
• AWS S3 bucket in AWS region us-west-1
• Splunk – Standalone
• Splunk version 7.1
• fuse packages installed
• s3fs packages installed

Splunk S3 Configuration

Step 1: Install Fuse

Install Fuse
### Begin Commands ###
sudo yum update

sudo yum install automake fuse fuse-devel gcc-c++ git libcurl-devel libxml2-devel make openssl-devel

### End Commands ###
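
Before moving on, it can be worth confirming FUSE is actually available; a quick check, assuming the standard fuse utilities from the packages above:

### Begin Commands ###
# Print the installed FUSE userspace version
fusermount -V

# Load the fuse kernel module if it is not already loaded
sudo modprobe fuse

### End Commands ###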

Step 2: Install s3fs

Install s3fs
### Begin Commands ###
git clone https://github.com/s3fs-fuse/s3fs-fuse.git

cd s3fs-fuse
./autogen.sh
./configure --prefix=/usr --with-openssl
make
sudo make install

Use the following command to verify the s3fs binary is installed and on your PATH:

which s3fs

### End Commands ###
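
You can also confirm the build installed correctly by printing the version:

### Begin Commands ###
s3fs --version

### End Commands ###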

Step 3: Setup AWS access and secret for use

Setup AWS
### Begin Commands ###
touch /etc/passwd-s3fs
vi /etc/passwd-s3fs

### Below gets pasted into vi ###

AccessKey:SecretKey

### Above gets pasted into vi ###

AccessKey = SDFGDFGdgsgf234fg (example)
SecretKey = sdf3rsdfvsfw4rSDfGHFdh (example)

### End Commands ###
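
If you would rather not paste into vi, the same file can be created in one line (shown with the example keys above; substitute your own):

### Begin Commands ###
echo "SDFGDFGdgsgf234fg:sdf3rsdfvsfw4rSDfGHFdh" | sudo tee /etc/passwd-s3fs

### End Commands ###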

Step 4: Change file permissions

Change File Permissions
### Begin Commands ###
sudo chmod 640 /etc/passwd-s3fs

### End Commands ###
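
To confirm the permissions took effect:

### Begin Commands ###
ls -l /etc/passwd-s3fs

# Expected owner/permissions: -rw-r----- root root
### End Commands ###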

Step 5: Create a mount point and mount the S3 bucket

Mount the S3 Bucket
### Begin Commands ###
sudo mkdir /mys3bucket
s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket

your_bucketname = name of your S3 bucket – example – my-test-bucket
use_cache = directory to use for local caching
allow_other = allow other users to access the mount point
uid = user ID of the user/owner of the mount point, OR
gid = group ID of the owning group
mp_umask = umask applied to the mount point (removes permissions for others)
multireq_max = maximum number of parallel requests sent to the S3 bucket
/mys3bucket = mount point where the bucket is mounted

### End Commands ###
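
If you ever need to unmount the bucket (for example, to change mount options), FUSE provides a matching unmount command:

### Begin Commands ###
sudo fusermount -u /mys3bucket

### End Commands ###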

When the above command completes with no errors displayed, check /var/log/messages to verify that no error messages appear there either. By default, s3fs tries the us-east-1 region first and then works through the other regions to find where the bucket lives. It does this on its own, and you can watch it happen in /var/log/messages.
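
A quick way to pull just the s3fs entries out of that log:

### Begin Commands ###
grep s3fs /var/log/messages | tail -20

### End Commands ###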

Step 6: /etc/rc.local change to automatically remount after reboot

Mount on Boot
### Begin Commands ###
which s3fs
/usr/local/bin/s3fs    (example output – yours may differ, e.g. /usr/bin/s3fs)
vi /etc/rc.local
### Below gets pasted into vi ###
/usr/local/bin/s3fs yourbucket -o use_cache=/tmp -o allow_other -o uid=1001 -o mp_umask=002 -o multireq_max=5 /mys3bucket
### Above gets pasted into vi ###

### End Commands ###
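
On CentOS 7, /etc/rc.local only runs at boot if it is executable, so make sure it is. As an alternative to rc.local, s3fs also supports mounting via /etc/fstab; the entry below is a sketch, so adjust the bucket name and options to match your Step 5 command:

### Begin Commands ###
sudo chmod +x /etc/rc.local

# Optional /etc/fstab entry instead of rc.local:
# yourbucket /mys3bucket fuse.s3fs _netdev,allow_other,use_cache=/tmp,uid=1001,mp_umask=002,multireq_max=5 0 0

### End Commands ###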

Step 7: Verify S3 mount

Verify S3 Mount
### Begin Commands ###
df -Th /mys3bucket

### End Commands ###
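
The output should show a filesystem of type fuse.s3fs mounted at /mys3bucket. The size column is a virtual value reported by s3fs, not real capacity; the figures below are illustrative:

### Begin Commands ###
# Illustrative output:
# Filesystem  Type       Size  Used Avail Use% Mounted on
# s3fs        fuse.s3fs  256T     0  256T   0% /mys3bucket
### End Commands ###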

Step 8: Write to S3 bucket via Linux cmd

Write to S3 Bucket
### Begin Commands ###
cd /mys3bucket
echo "This is a test" >> test.txt
ls

### End Commands ###
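
To confirm the write actually landed in S3 and not just the local cache, check the bucket in the AWS console, or list it with the AWS CLI if installed (bucket name is an example):

### Begin Commands ###
aws s3 ls s3://my-test-bucket/

### End Commands ###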

Once you have mounted your S3 bucket, you can use the following options in your index stanzas. The following sets the default frozen time period in seconds for all indexes:

Index Stanza
[default]
maxWarmDBCount = 200
frozenTimePeriodInSecs = 432000
coldToFrozenDir = /mys3bucket/frozen

frozenTimePeriodInSecs = set this to the age at which you want data rolled off, either deleted or moved to your frozen dir. Here, 432000 seconds is 5 days.

You can also choose individual indexes to roll to your frozen dir, and let others simply discard their data when rolling from cold to frozen by not setting the directory.

Example Stanzas
[test]
homePath = volume:primary/$_index_name/db
coldPath = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
coldToFrozenDir = /mys3bucket/frozen

[test1]
homePath = volume:primary/$_index_name/db
coldPath = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

Looking at the above method of attaching the S3 bucket as a mount, you could also use the S3 mount point as the cold location for your data. You really must trust the reliability of your AWS connection to go this route: if the connection dropped while Splunk was in use, only the hot/warm data would remain searchable.

Secondary
[volume:secondary]
path = /mys3bucket/cold

[test]
homePath = volume:primary/$_index_name/db
coldPath = volume:secondary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
coldToFrozenDir = /mys3bucket/frozen
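
After editing indexes.conf, restart Splunk so the new paths take effect (on a standalone instance like the one tested here; clustered environments push a bundle instead):

### Begin Commands ###
$SPLUNK_HOME/bin/splunk restart

### End Commands ###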

Assistance with Archiving Splunk Data

For assistance with archiving Splunk data, or help with anything Splunk, contact Aditum’s Splunk Professional Services consultants. Our certified Splunk Architects and Splunk Consultants manage successful Splunk deployments, environment upgrades and scaling, dashboard, search, and report creation, and Splunk Health Checks. Aditum also has a team of accomplished Splunk Developers that focus on building Splunk apps and technical add-ons.

Contact us directly to learn more.

Bill Ouellette