Sentinel Log Ingestion Threshold

Prevost, Ella 1 Reputation point
2022-04-12T20:52:28.67+00:00

I want to build the functionality to alert me when my org's Sentinel log ingestion is at or near the daily threshold. We're capped at 200 GB/day, so ideally I'd like to receive one alert when we're at 180 GB, another alert when we're at 190 GB, and then a final alert when we hit 200 GB. What's the easiest way to go about this?

Microsoft Sentinel
A scalable, cloud-native solution for security information event management and security orchestration automated response. Previously known as Azure Sentinel.

3 answers

Sort by: Most helpful
  1. Andrew Blumhardt 9,676 Reputation points Microsoft Employee
    2022-04-12T21:13:34.56+00:00

    I recommend setting budget alerts at the resource group level: https://learn.microsoft.com/en-us/azure/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending

    You can also refer to the queries used by the Usage Workbook if you want KQL samples for alert monitoring.

    1 person found this answer helpful.

  2. George Moise 2,346 Reputation points Microsoft Employee
    2022-04-13T07:38:04.843+00:00

    Hi @Prevost, Ella ,

    You could use the budget alerts Andrew suggested, or you could use the records that are generated in a Log Analytics workspace (in the _LogOperation table) when the daily cap is reached (check the procedure here). Note that the record only appears after the daily cap has been reached; its Detail column contains "OverQuota".
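    As a minimal sketch, a query along these lines should surface those cap events (column names follow the documented _LogOperation schema; adjust the time window to your needs):

    ```kusto
    // Find daily-cap events recorded by the workspace itself.
    _LogOperation
    | where TimeGenerated >= ago(24h)
    | where Category == "Ingestion"       // ingestion-related operation records
    | where Detail contains "OverQuota"   // written when the daily cap is hit
    | project TimeGenerated, Operation, Detail
    ```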

    Now, if you want to monitor this aspect with multiple thresholds, then you need a custom query for that:

    Using the Usage Table:

    Usage // this table contains hourly aggregates of the amount of data ingested into each table
    | where TimeGenerated >= ago(24h) // ideally, start from the moment the daily cap resets
    | where IsBillable == true // we exclude the free data types
    | summarize sum(Quantity) / 1024 // Quantity is in MB, so we convert to GB

    Note: as the Usage table only has hourly aggregates, it may not be precise enough over short intervals.
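    To cover the three thresholds from the question (180/190/200 GB) in a single query, one possible approach (a sketch, not the only way) is to classify the total with a case() expression and let the alert rule fire whenever a row is returned; the column names IngestedGB and ThresholdReached here are illustrative:

    ```kusto
    // Sketch: classify today's billable ingestion against the 180/190/200 GB thresholds.
    Usage
    | where TimeGenerated >= startofday(now()) // adjust to your workspace's cap-reset time
    | where IsBillable == true
    | summarize IngestedGB = sum(Quantity) / 1024
    | extend ThresholdReached = case(
        IngestedGB >= 200, "200 GB cap reached",
        IngestedGB >= 190, "190 GB threshold",
        IngestedGB >= 180, "180 GB threshold",
        "below thresholds")
    | where ThresholdReached != "below thresholds"
    ```

    Alternatively, you can create three separate alert rules, each with its own numeric threshold on the summarized value, which makes it easier to route each alert differently.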

    Using the _BilledSize function:

    search * // we get all records in the Workspace
    | where TimeGenerated >= ago(24h) // ideally, start from the moment the daily cap resets
    | extend Size_Bytes = _BilledSize // the _BilledSize function gives the billed size of each record in bytes
    | where _IsBillable == true // we exclude the free data types
    | summarize sum(Size_Bytes) / 1024 / 1024 / 1024 // we convert bytes to GB

    If you use either of these queries, you can then create an Azure Monitor alert rule that runs the query periodically and fires when the criteria are met (a threshold is reached).

    I hope it helps!
    BR,
    George Moise

    1 person found this answer helpful.

  3. Gary Bushey 176 Reputation points
    2022-04-26T17:21:55.837+00:00

    I think you will find this very hard to do, as by the time the alert gets created you may have already ingested too much data. The quickest a scheduled rule can run is every 5 minutes. There are Near Real Time (NRT) rules that run every minute, but they are limited in which KQL commands can be used: the "union" command, which is the one used to determine how much data has been ingested, cannot be used in NRT rules.

    As an aside, it is a really bad idea to cap the data coming into your Microsoft Sentinel environment, as you could end up missing some very critical information. It would be better to tune your data collectors to make sure you are only ingesting useful information, or simply accept that some days you may pay more for data ingestion.

    1 person found this answer helpful.