Today's organizations demand real-time responsiveness from their analytics platforms. When data processing relies on scheduled job runs, insights and actions are delayed, and decisions are based on stale data. Whether your data lands in Azure Blob Storage or Fabric OneLake, it should be processed the moment it arrives to ensure timely decisions and continuous data freshness. Fabric events and Azure events make that possible by enabling event-driven data workflows that react in real time to new data, without manual triggers or schedules.
In this article, you learn how to configure an event-driven pipeline that's triggered automatically when a new file lands in OneLake or Azure Blob Storage, and then ingests and transforms that file.
Why event-driven workflows?
Fabric jobs, like pipelines and notebooks, can be scheduled to run at fixed intervals, but data doesn't always arrive on a predictable schedule. This mismatch can lead to stale data and delayed insights. Fabric events and Azure events solve this problem by emitting events when a file is created, updated, or deleted in OneLake or Azure Blob Storage. These events can be consumed by Activator, which can trigger Fabric items (for example, pipelines or notebooks) or Power Automate workflows.
This event-driven workflow enables:
- Faster time-to-insight with real-time data processing
- Reduced cost by eliminating unnecessary job (that is, pipeline or notebook) runs
- Greater automation and responsiveness in your data workflows
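To make the mechanism concrete, here's a rough sketch of what a FileCreated event might carry, expressed as a Python dict. The field names are assumptions modeled on the Azure Blob Storage event schema; the actual OneLake payload can differ, so treat this as illustrative only.

```python
# Illustrative only: field names are assumptions modeled on the Azure Blob
# Storage event schema, not a documented OneLake payload.
sample_file_created_event = {
    "type": "Microsoft.Fabric.OneLake.FileCreated",
    "subject": "/Files/Source/sales_2024_06_01.csv",  # hypothetical file path
    "time": "2024-06-01T12:00:00Z",
    "data": {
        "contentType": "text/csv",
        "contentLength": 10240,
    },
}

# An Activator rule reacts to events like this one by matching the event type
# (and optionally attributes such as the folder in the subject) and then
# running a Fabric item, such as the pipeline in this tutorial.
if sample_file_created_event["type"] == "Microsoft.Fabric.OneLake.FileCreated":
    print("New file detected; run the pipeline.")
```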
Automatically ingest and process files with an event-driven pipeline
In this tutorial, you develop a solution that performs the following operations:
- Monitors a folder in OneLake for new CSV files
- Triggers a Fabric pipeline when a file is created
- Processes and loads the data into a lakehouse table, without any manual intervention or schedule
Create a lakehouse
First, let's create a lakehouse where you can upload the CSV files and store the resulting table.
Open another web browser tab, and sign in to Microsoft Fabric.
Select My workspace on the left navigation bar.
On the workspace page, select New item.
In the New item pane, select Lakehouse in the Store data section.
In the New lakehouse window, enter TutorialLakehouse for the name, and select Create.
Right-click the Files folder, and then select New subfolder.
Name the subfolder Source, and select Create.
Build your pipeline
Next, configure a pipeline to ingest, transform, and deliver the data in your Lakehouse.
Open another web browser tab, and sign in to Microsoft Fabric using the same account.
Select My workspace on the left navigation bar.
On the workspace page, select New item.
In the New item pane, select Pipeline in the Get data section.
Name it TutorialPipeline, and select Create.
In the pipeline editor, select Pipeline activity, and then select Copy data.
Configure the Copy data activity with these properties:
In the General tab, enter CSVtoTable for Name.
In the Source tab, follow these steps:
- For Connection, select the TutorialLakehouse you created earlier.
- For Root folder, select Files.
- For File path, select Source for the Directory.
- For File format, select DelimitedText.
In the Destination tab, follow these steps:
- For Connection, select TutorialLakehouse.
- For Root folder, select Tables.
- For Table, select + New, and enter Sales for the table name.
In the Mapping tab, review the mapping between source and destination columns.
Save the pipeline by using the Save button on the toolbar at the top.
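Before moving on, it can help to see what the Copy activity you just configured does, spelled out as code. The following PySpark snippet is a minimal sketch, not part of the tutorial; it assumes it runs in a Fabric notebook with TutorialLakehouse attached as the default lakehouse, and the header and schema options are assumptions about the CSV files.

```python
# Minimal sketch of roughly what the CSVtoTable Copy activity does.
# Assumes a Fabric notebook with TutorialLakehouse as the default lakehouse,
# so relative paths like "Files/Source" resolve to the monitored folder.

df = (
    spark.read.format("csv")
    .option("header", "true")       # assumption: the CSV files include a header row
    .option("inferSchema", "true")
    .load("Files/Source")
)

# Append the rows to the Sales Delta table in the Tables section of the lakehouse.
df.write.mode("append").format("delta").saveAsTable("Sales")
```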
Set up an alert using Fabric Activator
Open another web browser tab, and sign in to Microsoft Fabric using the same account.
On the left navigation bar, select Real-Time.
In the Real-Time hub, select Fabric events.
Hover over OneLake events and select the Set alert button, or select ... (ellipsis) and then select Set alert, to start configuring your alert.
In the Set alert window, for Source, choose Select events.
In the Configure connection settings window, for Event type, select Microsoft.Fabric.OneLake.FileCreated and Microsoft.Fabric.OneLake.FileDeleted events.
In the Select data source for events section, select Add a OneLake source.
In the OneLake catalog window, select TutorialLakehouse, and then select Next.
On the next page, expand Files, select Source, and then select Next.
On the Configure connection settings page, select Next.
On the Review + connect page, select Save.
Now, in the Set alert pane, follow these steps:
For Action, select Run a Fabric item.
For Workspace, select the workspace where you created the pipeline.
For Item, select TutorialPipeline.
In the Save location section, select the workspace where you want to create a Fabric activator item with the alert.
For Item, select Create a new item.
For New item name, enter TutorialActivator.
Select Create.
This setup ensures your pipeline runs instantly whenever a new file appears in the source folder.
Test the workflow
To test your workflow:
Upload this CSV file to the Source folder in your TutorialLakehouse. Close the Upload files pane after you upload the file.
A FileCreated event is emitted to trigger the TutorialPipeline through TutorialActivator.
After processing, you'll see the Sales table with the newly ingested and transformed data, ready for use. Because this is the first time you dropped a file, it might take a few minutes for the table and its data to appear.
No manual refresh. No waiting for the next scheduled run. Your pipeline runs in real time.
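If you'd rather generate your own test file than use the linked sample, a sketch like the following works. The column names and rows are placeholders, not the tutorial's sample data, so adjust them to whatever schema your Copy activity mapping expects.

```python
import csv

# Placeholder test data: the columns and values below are hypothetical and
# should be adjusted to match your pipeline's mapping.
rows = [
    {"OrderId": "1001", "Product": "Widget", "Quantity": "3", "Amount": "29.97"},
    {"OrderId": "1002", "Product": "Gadget", "Quantity": "1", "Amount": "14.50"},
]

with open("sales_test.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)

# Upload sales_test.csv to the Files/Source folder in TutorialLakehouse to emit
# a FileCreated event and trigger TutorialPipeline.
```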
The result is seamless automation. With just a few steps, you built a responsive, event-driven workflow. Every time data lands in your lakehouse as a file, it's automatically ingested, transformed, and ready for downstream analytics.
While this tutorial focused on OneLake events, you can achieve the same scenario by using Azure Blob Storage events.
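In the Azure Blob Storage variant, any blob upload raises a Microsoft.Storage.BlobCreated event that your alert can react to instead of a OneLake FileCreated event. The snippet below is a minimal sketch using the azure-storage-blob Python package; the connection string, container, and file names are placeholders.

```python
from azure.storage.blob import BlobServiceClient

# Placeholders: substitute your own storage account connection string,
# container name, and file name. Uploading the blob raises a
# Microsoft.Storage.BlobCreated event that an Activator alert can react to.
connection_string = "<your-storage-account-connection-string>"
service = BlobServiceClient.from_connection_string(connection_string)
blob_client = service.get_blob_client(container="source", blob="sales_test.csv")

with open("sales_test.csv", "rb") as data:
    blob_client.upload_blob(data, overwrite=True)
```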
More use cases for event-driven scenarios
Beyond the use case we explored, here are more scenarios where you can use OneLake and Azure Blob Storage events in Microsoft Fabric:
- Trigger a notebook through Activator for advanced data science preprocessing.
- Forward events to a webhook through eventstreams for custom compliance and data quality scans (see the sketch after this list).
- Get alerted through Activator's Teams and email notifications when critical datasets are modified.
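For the webhook scenario above, the receiving endpoint only needs to accept the forwarded event payloads over HTTP and apply whatever checks you care about. The following Flask app is a minimal, hypothetical sketch; the route, port, payload shape, and validation rule are all assumptions rather than a documented contract.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/events", methods=["POST"])
def handle_events():
    # The exact envelope depends on how the eventstream's webhook destination
    # is configured, so treat this parsing as a sketch.
    events = request.get_json(silent=True) or []
    if isinstance(events, dict):
        events = [events]

    for event in events:
        if not isinstance(event, dict):
            continue
        # Hypothetical data quality check: flag anything that isn't a CSV file.
        subject = event.get("subject", "")
        if subject and not subject.lower().endswith(".csv"):
            print(f"Compliance flag: unexpected file type for {subject}")

    return jsonify({"received": len(events)}), 200

if __name__ == "__main__":
    app.run(port=8080)
```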