SIEM tricks: dealing with delayed events in Splunk

So after bugging the entire IT department and interrogating as many business teams as possible to grant you (the security guy) access to their data, you are finally in the process of developing your dream use cases. Lucky you!

Most SIEM projects already fall apart before reaching that stage. Please take the time to read a nicely written article by SIEM GM Anton Chuvakin. In case you don’t have the time, just make sure you check the section on “Mired in Data Collection”.

The process of conceptualizing, developing, deploying and testing use cases is challenging and should be continuous. There are so many things to cover that you can always find something missing while reading yet another “X ways to screw up a SIEM” article.

So here’s another idea to prove it once again: how can you make sure the events are safely arriving at your DB or index? Or even beyond: how can you make sure the timestamps are being parsed or extracted appropriately? Why is it important?

Time is what keeps everything from happening at once.

First of all, I’m assuming the Splunk terminology here, so it’s easier to explain by example. Also, let’s make two definitions very clear:

Extracted time: the log generation time, taken from the log event itself. This one is usually stored in the _time field in Splunk.

Index time: the time at which the event was indexed, generated by the Splunk indexer itself upon receiving the event. This one is stored in the _indextime field in Splunk.
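If you want to eyeball both values side by side, a quick ad-hoc search does the trick. This is just a minimal sketch; swap in whatever index you actually have:

index=main
| head 5
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time, index_time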

There are countless reasons why you should make sure timestamps are properly handled at extraction or search time, but here are just a few examples:

  1. Timezones: this piece of data is not always part of the logs, so time values from different locations may differ – a lot;
  2. Realtime/batch processing: not all logs are easily collected in near realtime. Sometimes they are collected in hourly or daily chunks;
  3. Correlation Searches (rules) and forensic investigations rely heavily on the Extracted Time, mainly because that’s the default behavior, whether from the Time Picker (search GUI) or the rule editor.

Have you noticed the risk here?

Going under the radar

In case you haven’t figured out yet, apart from all other effects of not getting events’ time right, there’s a clear risk when it comes to security monitoring (alerting): delayed events may go unnoticed.

If you are another “realtime freak”, running Correlation Searches every 5 minutes, you are even more prone to this situation. Imagine the following: you deploy a rule (R1) that runs every 5 minutes, checking for a particular scenario (S1) within the last 5 minutes, and firing an alert whenever S1 is found.

For testing R1, you intentionally run a procedure or a set of commands that trigger the occurrence of S1. All fine, an alert is generated as expected.

Since correlation searches (and, in fact, any search, scheduled or not) run on Extracted Time (_time) by default, if the S1 events are delayed by 5 minutes, they will never trigger an alert from R1. Why?

Because the 5-minute window checked by the continuously scheduled R1 will never re-scan events from a previous, already checked window. By the time those delayed events are known to exist (indexed), R1 is already set to check another time window, therefore missing the opportunity to detect the S1 behavior in the delayed events.

What can be done?

There are many ways to tackle this issue, but regardless of which one is chosen, you should make sure the _time field is extracted correctly – it doesn’t matter whether the event arrives late or not.
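Getting extraction right usually comes down to explicit timestamp settings in props.conf. Here is a hypothetical stanza for illustration (the sourcetype name and timestamp format are made up, not taken from any real source):

[my:custom:sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TZ = UTC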

Clock skew monitoring dashboard

The clock skew problem here refers to the difference between the Indexed (_indextime) and Extracted (_time) values. Assuming near-realtime data collection, those values tend to be very close, so a large or persistent gap between them usually means something is wrong.
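A quick way to measure that gap is to compute the delta between the two fields. This is a minimal sketch (not the dashboard query itself), assuming a per-sourcetype view over the last hour:

index=* earliest=-1h
| eval skew = _indextime - _time
| stats count, avg(skew) AS avg_skew, median(skew) AS median_skew BY index, sourcetype
| sort - median_skew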

Folks at Telenor CERT were kind enough to allow me to share a slightly simplified version of a dashboard we’ve written to monitor for this kind of issue. We call it the “Event Flow Tracker”.

The code is available on GitHub and is basically a SimpleXML view based on default fields (metadata). It should render well once deployed to any search head.

Here’s a screenshot:

[Screenshot: Event Flow Tracker dashboard]

Since the searches rely on metadata (tstats based), the dashboard runs pretty fast, and it also tracks the event count (volume) and reporting agents (hosts) over time. Indexes are auto-discovered from a REST endpoint call, but the dashboard can also be extended or customized for specific indexes or source types.
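For reference, index auto-discovery can be done with something along these lines (a sketch; the dashboard’s actual call may differ):

| rest /services/data/indexes splunk_server=local
| table title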

When clicking the “Show charts” link under Violations (highlighted in red), the following line charts are displayed:

[Screenshot: violation line charts]

Assuming a threshold of one hour (positive or negative), the visualizations make it easier to spot scenarios where those time fields drift too far from each other.

The first chart shows how many events are actually below or above the threshold. The second chart depicts how many seconds those events are off, on average.

How to read the charts?

Basically, taking the median as the key metric: if the blue line (median) stays steadily above the green line (threshold), it is likely a recurring, constant issue that should be investigated.

Since the dashboard is based on regular queries, those can be turned into alerts in case you want to systematically investigate specific scenarios, for example, events that must follow strict time settings.
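As a sketch of what such an alert could look like (the thresholds and scope here are assumptions, not the dashboard’s defaults), a scheduled search along these lines would fire whenever the median skew for a sourcetype exceeds one hour:

index=* earliest=-65m@m latest=-5m@m
| eval skew = abs(_indextime - _time)
| stats median(skew) AS median_skew BY index, sourcetype
| where median_skew > 3600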

The dashboard is not yet using the base search feature, so perhaps it’s something you could consider in case you want to use or improve it.

Writing Rules – Best practices

Now, assuming the risk is known (some events may land on the indexers a bit later due to a transport bottleneck: network, processing queue, etc.), how do you write reliable rules?

Delayed detection?

If the data is not there yet, how can you reliably detect anything? The answer is obvious: you should always consider capturing as much signal as you can in order to trigger a high-quality alert.

If you are into “realtime detection”, I suggest you check how many events you might have missed due to this problem (delayed events). I’m more into detecting something accurately, even if a bit delayed, rather than trying to detect something almost immediately at the risk of lower accuracy or missed alerts altogether.
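A rough way to estimate that exposure is to compare each event’s index lag against your rule interval. A minimal sketch, assuming a 5-minute rule interval and whatever index your rules cover:

index=* earliest=-24h
| eval lag = _indextime - _time
| stats count AS total, count(eval(lag > 300)) AS arrived_late
| eval pct_late = round(100 * arrived_late / total, 2)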

Also, depending on your search query (density, constraints, etc), you may gain some extra resource power by increasing the interval and time boundaries from your rules.

As a side note: reports say organizations take days if not months to detect a breach, but some insist on realtime detection. Is that what Mr. Trump tried to convey here?

Time boundaries based on Index Time?

Yes, that’s also an option. You can search based on _indextime. Basically, as soon as the event is indexed, no matter how off the Extracted time (_time) is, it may be considered for an alert.

The downside, besides adding more complexity when troubleshooting Throttling/Suppression, is that you need to carefully review all your drilldown searches from another perspective, taking _indextime into account. In other words, the searches should always specify _index_earliest and _index_latest. More info here.
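To make this concrete, here is a minimal sketch of a search scoped by index time rather than event time (the 10-minute window is an assumption, meant to pair a 5-minute schedule with some slack):

index=main sourcetype=mcafee:epo (severity=critical OR severity=high) _index_earliest=-10m _index_latest=now
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S"), lag=_indextime-_time
| table _time, index_time, lag, dest, signature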

References

Event indexing delay
http://docs.splunk.com/Documentation/Splunk/6.5.1/Troubleshooting/Troubleshootingeventsindexingdelay

Splunk/ES: dynamic drilldown searches

One of the advantages of Splunk is the possibility to customize pretty much anything in terms of UI/workflow. Below is one example of how to build dynamic drilldown searches based on the output of aggregated results (post-stats).

Even though Enterprise Security (ES) comes with built-in correlation searches (rules), some mature/eager users leverage Splunk’s development appeal and write their own rules based on their use cases and ideas, especially if they are already familiar with SPL.

Likewise, customizing “drilldown searches” is also possible, enabling users to define their own triage workflows, facilitating investigation of notable events (alerts).

Workflow 101: Search > Analytics > Drilldown

Perhaps the simplest way to define a workflow in ES is to generate alerts grouped by victim or host and later be able to quickly evaluate all the details, down to the raw events related to a particular target scenario.

As expected, there are many ways to define a workflow, here’s a short summary of the stages listed above:

Search: here you define your base search, applying as many filters as possible so that only relevant data is processed down the pipe. Depending on how dense/rare your search is, enrichment and joins can also be done here.

Analytics: at this stage you should get the most out of the stats command. By using it you systematically aggregate and summarize the search results, which is desirable given that every row returned will become a new notable event.

Drilldown: upon generating a notable event, the user should be able to quickly get to the RAW events building up the alert, enabling rapid assessment without exposing too many details for analysis right from the alert itself.

You may also want to craft a landing page (dashboard) from your drilldown search string, enabling advanced workflows such as Search > Analytics > Custom Dashboard (Dataviz, Enrichment) > RAW Events > Escalation (Case Management).

Example: McAfee ePO critical/high events

Taking McAfee’s endpoint security solution as an example (fictitious data, use case), here’s how a simple workflow would be built based on a custom correlation search that looks for high-severity ePO events.

First, the base search:

index=main sourcetype=mcafee:epo (severity=critical OR severity=high)

Next, use the stats command to aggregate and summarize the data, grouping by host:

| stats values(event_description) AS desc, values(signature) AS signature, values(file_name) AS file_path, count AS result BY dest

The above command also performs some (quick) normalization to allow proper visualization within ES’ Incident Review dashboard, and provides some quick statistics to facilitate alert evaluation (event count, unique file names, etc).

Finally, it’s time to define the dynamic drilldown search string based on the output of those two commands (search + stats):

| eval dd="index=main sourcetype=mcafee:epo (severity=critical OR severity=high) dest=".dest

Basically, the eval command creates a new field/column named “dd” that stores the exact search query needed to retrieve the ePO events for a given host (dest).

In the end, putting it all together:
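Reassembled from the three snippets above, the full search body reads:

index=main sourcetype=mcafee:epo (severity=critical OR severity=high)
| stats values(event_description) AS desc, values(signature) AS signature, values(file_name) AS file_path, count AS result BY dest
| eval dd="index=main sourcetype=mcafee:epo (severity=critical OR severity=high) dest=".dest

And the screenshot below shows the aggregated output: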

[Screenshot: aggregated search results including the dd field]

Despite there being more than 150 matching events (result) from each of those hosts, the maximum number of alerts that can possibly be generated per correlation search execution is limited to the number of unique hosts affected.

And here’s how that translates into a correlation search definition:

[Screenshots: correlation search definition in ES]

Note that the “Drill-down search” value is based on a token expansion: search $dd$. This way, the value of “dd” is used to dynamically build the drilldown link.

Now, once the correlation search generates an alert, a link called “Search for raw events” should become available under “Contributing Events” after expanding the notable event details in the Incident Review dashboard.

By clicking the link, the user is directed to a new search containing all raw events for the specific host, within the same time window used by the correlation search:

[Screenshot: raw events returned by the drilldown search]

Defining a “dd” field within your search not only enables custom dashboard development with easy access to the drilldown search (index=notable), but also standardizes the value used for the drilldown search in the correlation search definition.

As always, the same drilldown search may also be triggered via a Workflow Action. Feel free to get in touch in case you are interested in this approach as well.

Happy Splunking!

 

Honing in on the Homeless – the Splunkish way

Have you noticed Splunk just released a new version, including new data visualizations? I had been eager to start playing with one of the new charts when yesterday I came across a blog post by Bob Rudis, who is co-author of the Data-Driven Security book and a former member of Verizon’s DBIR team.

In that post, @hrbrmstr is presenting readers with a dataviz challenge based on data from U.S. Department of Housing and Urban Development (HUD) related to homeless population estimates. So I’ve decided to give it a go with Splunk.

Even though we can’t compare the power of R and other stats/dataviz-focused programming languages with Splunk’s current search language (SPL), this exercise may serve to demonstrate some of the capabilities of Splunk Enterprise.

Sidenote: In case you are into Machine Learning (ML) and Splunk, it’s also worth checking the new ML stuff just released along with Splunk 6.4, including the awesome ML Toolkit showcase app.

The challenge is basically about asking insightful, relevant questions to the HUD data sets and generating visualizations that would help answering those questions.

What can the data sets tell us about the homeless population issue?

The following are the questions I try to answer, considering the one proposed in the challenge post: which “states” have the worst problem in terms of homeless people?

  1. Which states currently have the largest homeless population per capita?
  2. Which states currently have the largest absolute homeless population?
  3. Which states are succeeding or failing at lowering the figures compared to previous years?

I am far from considering myself a data scientist (I was looking up the standard deviation formula the other day), but I love playing with data like many other infosec folks in our community. So please take it easy on a newbie!

Since we are dealing with data points representing estimates and this is a sort of experiment/lab, take them with a grain of salt and consider adding “according to the data sets…here’s what that Splunk guy verified” to the statements found here.

Which states currently have the largest homeless population per capita?

For this one, it’s pretty straightforward to go with a Column chart for quick results. Another approach would be to gather map data and work on a Choropleth chart.

Basically, after calculating the normalized values (homeless per 100k population), I keep only the US states at the top of the list, limiting it to 10 values. They are then sorted by their 2015 values and displayed on the chart below:

[Column chart]

Homeless per 100k of population – Top 10 US states

The District of Columbia clearly stands out, followed by Hawaii and New York. That’s one I would never have guessed. But there seems to be some explanation for it.
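In SPL terms, the top-10 selection for the latest year can be sketched like this, assuming the prepared data produced by the query at the end of this post (one row per state and year, with Name, _time and ratio fields) feeds into it:

... (prepared data: Name, _time, ratio)
| stats latest(ratio) AS ratio_2015 BY Name
| sort - ratio_2015
| head 10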

Which states currently have the largest absolute homeless population?

In this case, only the homeless figures are considered for extracting the top 10 states. Below are the US states where most of the homeless population lives, based on the latest numbers (2015).

[Column chart]

Homeless by absolute values – Top 10 US states

As many would guess, New York and California lead here. Those two states, along with Florida and Texas, have clearly been at the top of the list since 2007.

Which states are being successful or failing on lowering the figures compared to previous years?

Here we make use of a new visualization called Horizon chart. In case you are not familiar with this one, I encourage you to check this link where everything you need to know about it is carefully explained.

Basically, it eases the challenge of visualizing multiple (time) series in less space (height) by using layered bands with different color codes to represent relative positive/negative values, and different color shades (intensity) to represent the actual measured values (data points).

After crafting the SPL query, here’s the result (3 bands, smoothed edges) for all 50 states plus DC, present in the data sets:

[Horizon chart: homeless per 100k population, all 50 states plus DC]

So how do you read this visualization? Keep in mind the chart is based on the same prepared data used in the first chart (homeless per 100k population).

The red color means the data point is higher when compared to the previous measurement (more homeless/capita), whereas the blue represents a negative difference when comparing current and last measurements (less homeless/capita). This way, the chart also conveys trending, possibly uncovering the change in direction over time.

The more intense the color is, the higher the (absolute) value. You can also picture it as a stacked area chart without needing extra height for rendering.

The numbers listed on the right-hand side represent the difference between consecutive data points in the timeline (current/previous). For instance, the latest ratio (2015) for Washington decreased by ~96 compared to the previous year (2014).
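If you want that same year-over-year delta as a table rather than a chart, a sketch in SPL (again assuming the prepared data with Name, _time and ratio fields from the query below) could look like this:

... (prepared data: Name, _time, ratio)
| sort 0 Name _time
| streamstats current=f window=1 last(ratio) AS prev_ratio BY Name
| eval yoy_change = ratio - prev_ratio
| table Name, _time, ratio, prev_ratio, yoy_change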

On a Splunk dashboard or from the search interface (web GUI), there’s also an interactive line that displays the relative values as the user hovers over a point in the timeline, which is really handy (see below).

[Screenshot: horizon chart with interactive hover line]

The original data files are provided below and also referenced from the challenge’s blog and GitHub pages. I used an xlsx2csv one-liner before handling the data in Splunk (there are many other ways to do it, though).

HUD’s homeless population figures (per State)
US Population (per State)

The Splunk query used to generate the data used as input for the Horizon chart is listed below. It seems a bit hacky, but does the job well without too much effort.

| inputlookup 2007-2015-PIT-Counts-by-State.csv
| streamstats last(eval(case(match(Total_Homeless, "Total"), Total_Homeless))) as _time_Homeless
| where NOT State_Homeless="State"
| rex mode=sed field=_time_Homeless "s|(^[^\d]+)(\d+)|\2-01-01|"
| rename *_Homeless AS *
| join max=0 type=inner _time State [
  | inputlookup uspop.csv
  | table iso_3166_2 name
  | map maxsearches=51 search="
    | inputlookup uspop.csv WHERE iso_3166_2=\"$iso_3166_2$\"
    | table X*
    | transpose column_name=\"_time\"
    | rename \"row 1\" AS \"Population\"
    | eval State=\"$iso_3166_2$\"
    | eval Name=\"$name$\"
  "
  | rex mode=sed field=_time "s|(^[^\d]+)(\d+)|\2-01-01|"
]
| eval _time=strptime(_time, "%Y-%m-%d")
| eval ratio=round((100000*Total)/Population)
| chart useother=f limit=51 values(ratio) AS ratio over _time by Name

Want to check out more of those write-ups? I did one in Portuguese related to Brazil’s Federal Budget application (also based on Splunk charts). Perhaps I will update this one soon with new charts and a short English version.

Splunkers on Twitter

Below is a list of Splunk users I am following on Twitter, including Splunkers, partners and awesome users. Most of them are also into #Infosec. The list is not sorted in any particular order.

Missing someone, maybe you?! Please feel free to contact me for adding more. In case you want to follow a list, it is also available via Twitter here.

Ryan Kovar @meansec
Staff Security Strategist @Splunk. Enjoys clicking too fast, long walks in the woods, and data visualizations.

Holger Sesterhenn @sesterhenn_splk
Sales Engineer, CISSP, Security Know-How, Machinedata, Security Intelligence, IoT, Industrie 4.0, BigData, Hadoop, NoSQL, User Behavior Analytics

The Dark Overlord @StephenGailey
Towering intellect; effortlessly charming…

Cédric @_CLX
Let me grep you. #infosec and useless stuff. Using security buzzwords since 2005. https://github.com/c-x

Brad Shoop @bradshoop
Security Onion for Splunk app developer, infosec, devops, infrastructure, cloud and homebrewer.

monzy merza @monzymerza
Chief Security Evangelist @Splunk. Thoughts are my own.

Damien Dallimore @damiendallimore
Splunk Dev Evangelist, Golfer, Rugby Player, Musician, Scuba Diver, Thai linguist, Chef.

Adam Sealey @AdamSealey
Information security, both applied and research. CSIRT, DFIR, and analytics Generalist geek. Husband & father of 3. Tweets are my own.

Hacker Hurricane @HackerHurricane
Austin TX. area Information Security Professional

Mika Borner @my2ndhead
Splunk Artisan. Because Splunking is an art.

David Shpritz @automine
I Splunk all the things. Blieve, hon. Splunk, Web App Sec, Open source, EDC

Dimitri McKay @dimitrimckay
Glazed donut connoisseur, plus size hand model, technologist, splunker, replicant, security nerd, CISSP, MMA fighter, zombie killer & lover of pitbulls.

Luke Murphey @LukeMurphey
Developer of network security solutions at #splunk. Founding member of Threatfactor (http://ThreatFactor.com ) and Converged Security (acquired by GlassHouse).

Sebastien Tricaud @tricaud
Principal Security Strategist @Splunk. Playing with data, binary-ascii-utf16-whatever. Opinions are my own, not my employers. Re-tweeting != Agreeing

Dave Herrald @daveherrald
dad | husband | splunk security architect | GIAC GSE | tweets=mine

Ryan Chapman @rj_chap
Security enthusiast. Incident response analyst. Malware hobbyist. Retro game lover. Husband and father. TnVsbGl1cyBpbiB2ZXJiYS4= http://github.com/BechtelCIRT

Michael Porath @poezn
Product Manager for Data Visualization @splunk. Bay Area based Swiss Information Scientist

skywalka @skywalka
my daughter, basketball, hip hop, film, comics, linux, puppet, splunk, nagios, and sensu keep me awake

James Bower (Hando) @jamesbower
Pentester / Threat intelligence / #OSINT / #Honeypots / #Bro_IDS / #Splunk | #Python / Follower of Christ and occasional blogger – http://jamesbower.com

georgestarcher @georgestarcher
Information Security, Log analysis and Splunk, Forensics, Podcasting. Photography and OSX Fan. GnuPGP key ID: 875A3320BD558C9E

Brian Warehime @brian_warehime
Security Analyst | Threat Researcher | #Honeypots | #Splunk | #Python | #OSINT | #DFIR

Michel Oosterhof @micheloosterhof
Splunk // My opinions are my own.

Hal Rottenberg @halr9000
I am the Lorax. I speak for the Developers! @Splunk, Author, Podcaster @powerscripting , Speaker, #PowerShell MVP, #CiscoChampion, husband, father of four!

Jason McCord @digirati82
Security analyst, software developer, #Splunk fan. Log everything. #WLS #DFIR

 

My TOP 5 Security (and techie) talks from Splunk .conf 2015

If you are into Security and didn’t have the opportunity to attend the Splunk conference in Las Vegas this year (maybe you were busy playing Blackjack instead?), here’s what you cannot miss.

The list is not sorted in any particular order and, whenever possible, entries include presenters’ Twitter handles as well as takeaways or comments that might help you choose where to start.

  1. Security Operations Use Cases at Bechtel (recording / slides)
    That’s the coolest customer talk from the ones I could watch. The presenters (@ltawfall / @rj_chap) discussed some interesting use cases and provided a lot of input for those willing to make Splunk their nerve center for security.
  2. Finding Advanced Attacks and Malware with Only 6 Windows EventIDs (recording / slides)
    This presentation is a must for those willing to monitor Windows events either via native or 3rd party endpoint solutions. @HackerHurricane really knows his stuff, which is not a surprise for someone calling himself a Malware Archaeologist.
  3. Hunting the Known Unknowns (with DNS) (recording / slides)
    If you are looking for concrete security use case ideas to build based on DNS data, that’s gold. Don’t forget to provide feedback to Ryan Kovar and Steve Brant; I’m sure they will appreciate it.
  4. Building a Cyber Security Program with Splunk App for Enterprise Security (recording / slides)
    The Enterprise Security (ES) app relies heavily on accelerated data models, so besides interesting tips on how to leverage ES, Jeff Campbell provides ways to optimize your setup, showing what goes on under the hood.
  5. Build A Sample App to Streamline Security Operations – And Put It to Use Immediately (recording)
    This talk was delivered by Splunkers @dimitrimckay and @daveherrald. They presented an example of how to build custom content on top of ES to enhance the context around an asset, which is packaged as an app available on GitHub.

Now, in case you are not into Security but also enjoy watching hardcore, techie talks, here’s my TOP 5 list:

  1. Optimizing Splunk Knowledge Objects – A Tale of Unintended Consequences (recording / slides)
    Martin gives an a-w-e-s-o-m-e presentation on Knowledge Objects, unraveling what happens under the hood when using tags and eventtypes. Want to give him feedback? Martin is often found on IRC; join #splunk and say ‘Hi’!
  2. Machine Learning and Analytics in Splunk (recording / slides)
    If you are into ML and the likes of R programming, the app presented here will definitely catch your attention. Just have a quick look on the slides to see what I mean. A lot of use cases for Security here as well.
  3. Beyond the Lookup Glass: Stepping Beyond Basic Lookups (recording)
    Wanna know about the challenges with CSV Lookups and KV store in big deployments? Stop here. Kudos to Duane Waddle and @georgestarcher!
  4. Splunk Search Pro Tips (recording / slides)
    Just do the following: browse the video recording and skip to around the 30-minute mark (magic!). Now, try not to watch the entire presentation, and thank Dan Aiello.
  5. Building Your App on an Accelerated Data Model (recording / slides)
    In this presentation, the creator of uberAgent – @HelgeKlein – describes how to make the most of data models in great detail.

Still eager for more security related Splunk .conf stuff? Simply pick one below (recordings only).

For all presentations (recordings and slides), please visit the conference website.