Logs related to digestion of files added to the input storage account. These can be used to verify that data is being successfully passed through to enrichment, or to troubleshoot issues with processing the raw data.
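For example, you can query this table from a Log Analytics workspace to confirm that a newly uploaded file was picked up by digestion. The KQL below is a minimal sketch: it assumes the table is named AOIDigestion (the table name is not shown in this schema reference), and the `raw/` path prefix is a hypothetical placeholder for your input storage layout.

```kusto
// Minimal sketch: recent digestion activity for files under a
// hypothetical "raw/" prefix in the input storage account.
// AOIDigestion is an assumed table name; adjust to match your workspace.
AOIDigestion
| where TimeGenerated > ago(1d)
| where FilePath startswith "raw/"
| project TimeGenerated, Level, Datatype, FilePath, Message
| order by TimeGenerated desc
```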
| Attribute | Value |
|---|---|
| Resource types | microsoft.networkanalytics/dataproducts |
| Categories | - |
| Solutions | LogManagement |
| Basic log | Yes |
| Ingestion-time transformation | No |
| Sample Queries | Yes |
| Column | Type | Description |
|---|---|---|
| _BilledSize | real | The record size in bytes. |
| Datatype | string | The datatype of the file that was digested. |
| FilePath | string | The path of the file that was digested. |
| _IsBillable | string | Specifies whether ingesting the data is billable. When _IsBillable is `false`, ingestion isn't billed to your Azure account. |
| Level | string | The level of the log. |
| Message | string | The log message. |
| _ResourceId | string | A unique identifier for the resource that the record is associated with. |
| SourceSystem | string | The type of agent the event was collected by. For example: OpsManager for the Windows agent (either direct connect or Operations Manager), Linux for all Linux agents, or Azure for Azure Diagnostics. |
| _SubscriptionId | string | A unique identifier for the subscription that the record is associated with. |
| TenantId | string | The Log Analytics workspace ID. |
| TimeGenerated | datetime | The time (UTC) when the log was generated. |
| Type | string | The name of the table. |
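When troubleshooting issues with processing the raw data, one common pattern is to count error-level digestion logs per datatype over time, then drill into the FilePath and Message columns for the failing feed. This sketch again assumes the table name AOIDigestion, and "Error" as a Level value is also an assumption rather than a documented enum.

```kusto
// Minimal sketch: hourly count of error-level digestion logs per datatype,
// to spot which raw data feeds are failing to process.
// AOIDigestion and the Level value "Error" are assumptions.
AOIDigestion
| where TimeGenerated > ago(7d)
| where Level == "Error"
| summarize ErrorCount = count() by Datatype, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```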