It’s about time to change your correlation searches’ timing settings

I wrote about the problem of delayed events in a previous post, so here the focus is on how to overcome that problem when writing a rule or a correlation search (CS).

What’s the problem?

Most, if not all, App/TA developers extract _time from the log generation time. And that’s the best practice, since we all want to track when the log was generated, a timestamp usually set by the device or system producing it.

If the timestamp extraction (regex based) goes wrong for whatever reason, _time basically falls back to _indextime. That may lead to many other issues which are out of scope here.

The thing is, Splunk’s default behavior is to use _time across the entire system, from the Time Picker to scheduled searches and rules.

When a rule is executed using relative time (modifiers), the time reference is the rule engine’s clock, that is, the search head or the Splunk instance where the Enterprise Security (ES) app is installed.

This introduces a few risks in a threat detection context, if you rely on a properly extracted _time as the time reference for your searches or rules:

  1. In case there’s a delay or too much latency between the collection (UF) and the indexing of an event, the time window checked by your CS may have already been scanned, hence the event will never be considered. More details here;
  2. In case _time is extracted with a wrong value, there’s simply no integrity in the whole process. And here just a few scenarios when this may happen:
    1. Wrong clock set on the originating device or system;
    2. Wrong timezone settings;
    3. Wrong regex (lack of precision, picking the wrong epoch from the log, etc);
    4. Attacker changing or tampering with the system clock (Eventcode 4616).

Those risks are particularly relevant for “near real time” rules or the ones running on a more aggressive interval (e.g., every minute).

Why is that important?

Most customers and users are NOT aware of such risks. And I can confirm that all customers I’ve visited so far, without exception, were not taking this into account.

Basically, that means there’s a gap in detection coverage.

How to overcome or mitigate that?

Even though there’s no way to tell Splunk to ignore _time during searches (it’s always part of the search scope/boundary), you can change this behavior by using index time as the time reference for relative time modifiers within a query.

The index time is stored as an internal field called _indextime. And the way to use it from your searches is quite simple:

  • Use index time as the time boundaries for your search. That means using _index_earliest and _index_latest within your CS code;
  • Set the standard time (_time) boundaries (earliest and latest) to a bigger window, at least bigger than the index time boundaries.

More details on time modifiers for your search can be found here.
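
For ad-hoc searching, the same idea can be expressed entirely with inline time modifiers. A minimal sketch, assuming a hypothetical index/sourcetype and the same 5-minute/5-hour windows used in the example further below:

index=foo sourcetype=bar earliest=-5h@h latest=+5h@h _index_earliest=-5min@min
| stats count BY host

The wide earliest/latest window keeps events with a skewed _time in scope, while _index_earliest narrows the actual work down to what has just been indexed.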

How does it look in practice?

Below you can find a sample correlation search that leverages this approach. It also provides a dynamic drilldown search query based on exactly the time boundaries used at the rule’s execution time.

Just assume you are stacking multiple critical NIDS signatures per target host every 5 minutes (interval) to raise an alert (notable event).

index=foo sourcetype=bar severity=1 _index_earliest=-5min@min
| stats min(_indextime) AS imin,
  max(_indextime) AS imax,
  values(signature) AS signature
  BY host
| eval dd="index=foo sourcetype=bar severity=1 host=".host
| eval dd=dd." _indextime>=".imin." _indextime<=".imax

Time settings

Earliest: -5h@h
Latest: +5h@h
Cron schedule (interval): */5 * * * *

Set your drill down search to search $dd$ and voila! (_time boundaries are automatically inherited via $info_min_time$ and $info_max_time$ tokens).
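
For illustration, here is roughly what the expanded drilldown string could look like for a single host (host name and epoch values are made up):

search index=foo sourcetype=bar severity=1 host=websrv01 _indextime>=1500000000 _indextime<=1500000299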

That would consider any matched event indexed within the last 5 minutes, allowing the event _time to be 5 hours off or “skewed” (positive/negative), as compared to the rule engine’s clock (search head).

Also, note that within the drilldown string the time boundaries are set by comparing _indextime directly, instead of using the search modifiers _index_earliest and _index_latest. The reason is that the latter are not inclusive, meaning events whose index time matches the latest boundary will not match.

Once you are OK with that approach, consider using tags/eventtypes/macros to optimize and build cleaner code.

What about performance?

And before you ask: no, there’s no noticeable impact on performance, since the search engine detects the narrowed index time settings and reduces the search scope accordingly, despite the bigger window set by the regular time boundaries (-5h, +5h).

If you want to check for yourself, log in to your test environment, set the Time Picker to “All Time” (_time boundaries) and run the following search:

index=_* _index_earliest=-2s@s | stats count

That search query counts the number of events indexed within the last 2 seconds regardless of their _time values. It should be fast despite “All Time”.

In case you want to go deeper on _time x _indextime behavior in your environment, this post introduces a tstats based dashboard for tracking that.

Feel free to reach out in case you have comments/feedback and happy Splunking!

SIEM tricks: dealing with delayed events in Splunk

So after bugging the entire IT department and interrogating as many business teams as possible to grant you (the security guy) access to their data, you are finally in the process of developing your dreamed use cases. Lucky you!

Most SIEM projects already fall apart before reaching that stage. Please take the time to read a nicely written article by SIEM GM Anton Chuvakin. In case you don’t have the time, just make sure you check the section on “Mired in Data Collection”.

The process of conceptualizing, developing, deploying and testing use cases is challenging and should be continuous. There are so many things to cover, I bet you can always find out something is missing while reading yet another “X ways to screw up a SIEM” article.

So here’s another idea to prove it once again: how can you make sure the events are safely arriving at your DB or index? Or even beyond: how can you make sure the timestamps are being parsed or extracted appropriately? Why is it important?

Time is what keeps everything from happening at once.

First of all, I’m assuming the Splunk terminology here, so it’s easier to explain by example. Also, let’s make two definitions very clear:

Extracted time: corresponds to the log generation time, coming from the log event itself. This one is usually stored as the _time field in Splunk.

Index time: corresponds to the event indexing time, generated by the Splunk indexer itself upon receiving an event. This one is stored as the _indextime field in Splunk.
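
As a quick way to see both values side by side in your own environment, here’s a minimal sketch (the internal index is used just as an example):

index=_internal earliest=-5m
| head 5
| eval logged_at=strftime(_time, "%F %T"), indexed_at=strftime(_indextime, "%F %T")
| table logged_at, indexed_at, sourcetype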

There are countless reasons why you should make sure timestamps are properly handled at extraction or search time, but here are just a few examples:

  1. Timezones: this piece of data is not always part of the logs. So time values from different locations may differ – a lot;
  2. Realtime/Batch processing: not all logs are easily collected near realtime. Sometimes they are collected in hourly or daily chunks;
  3. Correlation Searches (Rules) and forensic investigations pretty much rely on the Extracted Time, mainly because that’s the default behavior, both in the Time Picker (Search GUI) and in the Rule editor.

Have you noticed the risk here?

Going under the radar

In case you haven’t figured it out yet, apart from all the other effects of not getting events’ time right, there’s a clear risk when it comes to security monitoring (alerting): delayed events may go unnoticed.

If you are another “realtime freak”, running Correlation Searches every 5 minutes, you are even more prone to this situation. Imagine the following: you deploy a rule (R1) that runs every 5 minutes, checking for a particular scenario (S1) within the last 5 minutes, and firing an alert whenever S1 is found.

For testing R1, you intentionally run a procedure or a set of commands that trigger the occurrence of S1. All fine, an alert is generated as expected.

Since correlation searches (and, in fact, any search, scheduled or not) run based on Extracted Time (_time) by default, if S1 events are delayed by 5 minutes, those events will never trigger an alert from R1. Why?

Because the 5-minute window checked by the continuous, scheduled R1 will never re-scan the events from a previous, already checked window. The moment those delayed events are known to exist (indexed), R1 is already set to check another time window, therefore, missing the opportunity to detect S1 behavior from delayed events.

What can be done?

There are many ways to tackle this issue, but regardless of which one is chosen, you should make sure the _time field is extracted correctly, whether the event arrives late or not.

Clock skew monitoring dashboard

The clock skew problem here applies to the difference between Indexed (_indextime) and Extracted (_time) values. Assuming near-realtime data collection, those values tend to be very close, so having them significantly out of sync indicates a problem.

Folks at Telenor CERT were kind enough to allow me to share a slightly simplified version of a dashboard we’ve written to monitor for this kind of issue; we call it “Event Flow Tracker”.

The code is available on GitHub and is basically a SimpleXML view based on default fields (metadata). It should render well once deployed to any search head.

Here’s a screenshot:

[Screenshot: Event Flow Tracker overview]

Since the searches rely on metadata (tstats based), the dashboard runs pretty fast, and it also tracks the event count (volume) and reporting agents (hosts) over time. Indexes are auto-discovered via a REST endpoint call, but the dashboard can also be extended or customized for specific indexes or sourcetypes.
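
To give an idea of what such a metadata-level check looks like, here’s a hedged sketch of a tstats query tracking the approximate gap between _indextime and _time per index (time range and span are arbitrary, and this is not the dashboard’s actual code):

| tstats count avg(_indextime) AS avg_itime WHERE index=* earliest=-4h BY _time span=5m, index
| eval avg_skew_sec=round(avg_itime - _time)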

When clicking the “Show charts” link under Violations (highlighted in red), the following line charts are displayed:

[Screenshot: line charts showing event counts against the threshold and average skew over time]

Assuming a threshold of one hour (positive/negative), the visualizations make it easier to spot scenarios where those time fields drift too far apart.

The first chart shows how many events are actually below/above the threshold. The second chart depicts how many seconds those events are off on average.

How to read the charts?

Basically, taking the median as the key metric: if the blue line (median) stays consistently above the green line (threshold), it likely points to a recurring, constant issue that should be investigated.

Since the dashboard is based on regular queries, those can be turned into alerts in case you want to systematically track specific scenarios, for example, events that must follow strict time settings.

The dashboard is not yet using the base search feature, so perhaps it’s something you could consider in case you want to use or improve it.

Writing Rules – Best practices

Now, assuming the risk is known (some events may land on the indexers a bit later due to a transport bottleneck such as the network or a processing queue), how do you write reliable rules?

Delayed detection?

If the data is not there yet, how can you reliably detect anything? The decision here should be obvious: always consider capturing as much signal as you can in order to trigger a high-quality alert.

If you are into “realtime detection”, I suggest you first check how many events you might have missed due to this problem (delayed events). I’m more into detecting something accurately, even if a bit delayed, than trying to detect it almost immediately at the risk of lower accuracy or no alert at all.
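
As a rough way to estimate that exposure, the sketch below counts events whose indexing lag exceeds a 5-minute rule window; the index, sourcetype and look-back period are placeholders:

index=foo sourcetype=bar earliest=-24h
| eval lag_sec=_indextime - _time
| where lag_sec > 300
| stats count AS potentially_missed BY sourcetype

Events falling into that bucket arrived after the 5-minute window keyed on _time had already been scanned, so a rule like R1 would never have seen them.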

Also, depending on your search query (density, constraints, etc), you may free up some resources by increasing the interval and widening the time boundaries of your rules.

As a side note: reports say organizations take days if not months to detect a breach, but some insist on realtime detection. Is that what Mr. Trump tried to convey here?

Time boundaries based on Index Time?

Yes, that’s also an option. You can search based on _indextime, so basically, as soon as the event is indexed, no matter how off the Extracted time (_time) is, it may be considered for an alert.

The downside of it, besides adding more complexity when troubleshooting Throttling/Suppression, is that you need to carefully review all your drilldown searches from another perspective, taking _indextime into account. In other words, the searches should always specify _index_earliest and _index_latest. More info here.

References

Event indexing delay
http://docs.splunk.com/Documentation/Splunk/6.5.1/Troubleshooting/Troubleshootingeventsindexingdelay

Splunk/ES: dynamic drilldown searches

One of the advantages of Splunk is the possibility to customize pretty much anything in terms of UI/workflow. Below is one example of how to build dynamic drilldown searches based on the output of aggregated results (post-stats).

Even though Enterprise Security (ES) comes with built-in correlation searches (rules), some mature/eager users leverage Splunk’s development appeal and write their own rules based on their use cases and ideas, especially if they are already familiar with SPL.

Likewise, customizing “drilldown searches” is also possible, enabling users to define their own triage workflows, facilitating investigation of notable events (alerts).

Workflow 101: Search > Analytics > Drilldown

Perhaps the simplest way to define a workflow in ES is by generating alerts grouped by victim or host and later being able to quickly evaluate all the details, down to the RAW events related to a particular target scenario.

As expected, there are many ways to define a workflow, here’s a short summary of the stages listed above:

Search: here you define your base search, applying as many filters as possible so that only relevant data is processed down the pipe. Depending on how dense/rare your search is, enrichment and joins can also be done here.

Analytics: at this stage you should get the most out of the stats command. By using it you systematically aggregate and summarize the search results, which is desirable given that every row returned will turn into a new notable event.

Drilldown: upon generating a notable event, the user should be able to quickly get to the RAW events building up the alert, enabling rapid assessment without exposing too many details for analysis right from the alert itself.

You may also want to craft a landing page (dashboard) from your drilldown search string, enabling advanced workflows such as Search > Analytics > Custom Dashboard (Dataviz, Enrichment) > RAW Events > Escalation (Case Management).

Example: McAfee ePO critical/high events

Taking McAfee’s endpoint security solution as an example (fictitious data, use case), here’s how a simple workflow would be built based on a custom correlation search that looks for high-severity ePO events.

First, the base search:

index=main sourcetype=mcafee:epo (severity=critical OR severity=high)

Next, using stats command to aggregate and summarize data, grouping by host:

| stats values(event_description) AS desc, values(signature) AS signature, values(file_name) AS file_path, count AS result BY dest

The above command also performs some (quick) normalization to allow proper visualization within ES’ Incident Review dashboard, and provides some quick statistics to facilitate alert evaluation (event count, unique file names, etc).

Finally, it’s time to define the dynamic drilldown search string based on the output of those two commands (search + stats):

| eval dd="index=main sourcetype=mcafee:epo (severity=critical OR severity=high) dest=".dest

Basically, the eval command is creating a new field/column named “dd” to store the exact search query needed to search for ePO events for a given host (dest).

In the end, putting it all together:

[Screenshot: aggregated results, one row per dest, with the desc, signature, file_path, result and dd columns]
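
For reference, here are the three pieces above combined into a single search:

index=main sourcetype=mcafee:epo (severity=critical OR severity=high)
| stats values(event_description) AS desc, values(signature) AS signature, values(file_name) AS file_path, count AS result BY dest
| eval dd="index=main sourcetype=mcafee:epo (severity=critical OR severity=high) dest=".dest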

Despite having more than 150 matching events (result) from each of those hosts, the maximum number of alerts that can possibly be generated per correlation search execution is limited to the number of unique hosts affected.

And here’s how that translates into a correlation search definition:

[Screenshots: correlation search definition in ES, including the “Drill-down search” field set to search $dd$]

Note that the “Drill-down search” value is based on a token expansion: search $dd$. This way, the value of “dd” is used to dynamically build the drilldown link.

Now, once the correlation search generates an alert, a link called “Search for raw events” should become available under “Contributing Events” after expanding the notable event details at the Incident Review dashboard.

By clicking the link, the user is directed to a new search containing all raw events for the specific host, within the same time window used by the correlation search:

[Screenshot: drilldown search results showing the raw events for the selected host]

Defining a “dd” field within your code not only enables custom dashboard development with easy access to the drilldown search (index=notable) but also standardizes the drilldown search value in the correlation search definition.
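
Since the dd value travels with each notable event, it can also be pulled straight out of the notable index. A hedged sketch (field names such as rule_name may vary with your ES version and rule configuration):

index=notable dd=*
| table _time, rule_name, dest, dd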

As always, the same drilldown search may also be triggered via a Workflow Action. Feel free to get in touch in case you are interested in this approach as well.

Happy Splunking!