- Mar 24, 2016
- Tested on Splunk Version: N/A
When organizations think about moving their Splunk implementation to the Cloud, there are a couple of pain points they encounter. The first is LDAP authentication. We're not going to address that issue in this article, as there is currently no workaround for it other than opening a firewall rule between Splunk Cloud and one of your domain controllers. The second sticking point, which will be addressed in this post, is pulling identity data into the Splunk App for Enterprise Security.
Organizations typically want an automated way to pull identity data from their LDAP infrastructure. In the past, this has involved setting up the previously mentioned firewall rule and then configuring the Splunk Supporting Add-on for Active Directory (SA-ldapsearch) in the Cloud. Fortunately, I recently came across a way to bypass that need and instead ship identity information up to the Cloud.
Of course, you can always set up a Read Only Domain Controller, and that will put some of your Windows Admins at ease. You can also use firewalls so that only connections from a single IP on a single port can reach that Domain Controller. These are great strategies, but they're usually a lot of extra work if all you need is a list of identities.
You can get the same level of identity information by using a Splunk feature called summary indexing along with a Splunk Heavy Forwarder in your environment. This involves setting up the Supporting Add-on for Active Directory locally, eliminating the need for any inbound connections to your domain controllers. All data is sent out to Splunk Cloud over the same port as the rest of your data. Let's get started!
First, create an index named summary_ldap in Splunk Cloud.
Next, you'll want to create the same summary_ldap index on your Heavy Forwarder:
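If you'd rather manage this in configuration files than through Splunk Web, a minimal indexes.conf sketch for the Heavy Forwarder looks like this (the paths shown are the standard defaults; adjust them to your environment):

[summary_ldap]
homePath = $SPLUNK_DB/summary_ldap/db
coldPath = $SPLUNK_DB/summary_ldap/colddb
thawedPath = $SPLUNK_DB/summary_ldap/thaweddb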
You'll want to schedule a search with the following settings on your Heavy Forwarder:
| ldapsearch domain=yourdomain search="(&(objectclass=user)(!(objectClass=computer)))" attrs="sAMAccountName,personalTitle,displayName,givenName,sn,mail,telephoneNumber,mobile,manager,department,whenCreated,userAccountControl"
| makemv userAccountControl
| search userAccountControl="NORMAL_ACCOUNT"
| eval suffix=""
| eval priority="medium"
| eval category="normal"
| eval watchlist="false"
| eval endDate=""
| table sAMAccountName, personalTitle, displayName, givenName, sn, suffix, mail, telephoneNumber, mobile, manager, priority, department, category, watchlist, whenCreated, endDate
| rename sAMAccountName as identity, personalTitle as prefix, displayName as nick, givenName as first, sn as last, mail as email, telephoneNumber as phone, mobile as phone2, manager as managedBy, department as bunit, whenCreated as startDate
Select a schedule that is appropriate for how often you’d like to see your identities get updated. The key here is to make sure to select “Summary Indexing” and to choose the “summary_ldap” index you recently created.
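If you prefer working in configuration files, the resulting scheduled search corresponds to a savedsearches.conf stanza along these lines (the stanza name and cron schedule here are illustrative, and the search string is abbreviated):

[LDAP Identity Extract]
enableSched = 1
cron_schedule = 0 1 * * *
action.summary_index = 1
action.summary_index._name = summary_ldap
search = | ldapsearch domain=yourdomain ...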
The base lookup table file can be created by uploading a skeleton file through SplunkWeb. You will want to make sure that the destination app is the SplunkEnterpriseSecuritySuite and that the sharing permissions are set to Global (object should appear in all Apps).
The header of the CSV should be as follows:
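identity,prefix,nick,first,last,suffix,email,phone,phone2,managedBy,priority,bunit,category,watchlist,startDate,endDate

(These are the same fields produced by the rename in the scheduled search above.)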
The lookup table definition will be used in a saved search to generate the lookup table. The destination app for this lookup table definition should again be the SplunkEnterpriseSecuritySuite. The name can be whatever you would like the lookup definition to be called (ad_identities generally works fine for Active Directory). The type should be file-based, and the lookup file should be set to the CSV you uploaded earlier.
You will want to make sure that the sharing permissions are set to Global on this as well.
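For reference, the resulting lookup definition corresponds to a transforms.conf stanza along these lines (assuming you named the definition ad_identities and uploaded the skeleton file as ad_identities.csv):

[ad_identities]
filename = ad_identities.csv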
Run the following search:
index=summary_ldap | table identity prefix nick first last suffix email phone phone2 managedBy priority bunit category watchlist startDate endDate
Then add the | outputlookup <lookup_name> command to the end of the search. This will output the results to the lookup table specified.
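Putting it together, the full lookup-generating search is:

index=summary_ldap | table identity prefix nick first last suffix email phone phone2 managedBy priority bunit category watchlist startDate endDate | outputlookup <lookup_name>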
Verify that this is functioning properly by running the following search and checking the number and format of the results:
| inputlookup <lookup_name>
This assumes the lookup table is being generated properly. You can also schedule the lookup-generating search at a later point to update the lookup table automatically.
Create a new identity input, being careful to set the type to "identity" and the source to the lookup definition you created earlier, which will be lookup://<lookup_name>. This should be enabled by default once it's created, but verify that this is the case in the web interface.
Before you create the merged identity file, wait approximately 5 minutes for Splunk to automatically detect the change in the identity configuration.
The following search will show you the status:
index=_internal source=*python_modular_input.log *identit*
This assumes everything is working as expected. Also, don't forget to disable the sample identities that are enabled by default in Enterprise Security. One last caution: make sure you set the time frame on the scheduled lookup-generating search so you don't get duplicate results. If you scheduled the search on your Heavy Forwarder to run once per day, this search should also look back only 24 hours and no more.
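For example, assuming the daily schedule mentioned above, the scheduled lookup-generating search might be bounded like this:

index=summary_ldap earliest=-24h | table identity prefix nick first last suffix email phone phone2 managedBy priority bunit category watchlist startDate endDate | outputlookup <lookup_name>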