Use FHIR data ingestion in Healthcare data solutions (preview)

[This article is prerelease documentation and is subject to change.]

Use the FHIR data ingestion pipeline to import your Fast Healthcare Interoperability Resources (FHIR) data from the Azure Health Data Services FHIR service and store the raw JSON files in the data lake. The pipeline supports all FHIR R4 resources, so integrating your FHIR data into the lake environment lets you harness a wealth of clinical, financial (including claims and explanation of benefits), and administrative data. This integration supports analytical scenarios tailored to various healthcare needs, including quality reporting, population health management, clinical research studies, operational reporting, and decision support.

To learn more about the capability and understand how to deploy and configure it, go to:

The capability includes the FHIR export service notebook, healthcare#_msft_fhir_export_service, which brings data from the Azure FHIR service to OneLake.


If you're using your own FHIR data, the FHIR data ingestion capability is required to run the other Healthcare data solutions (preview) capabilities. The capability also has a direct dependency on the Healthcare data foundations capability. Before you deploy FHIR data ingestion and run the pipeline, make sure you first successfully deploy and set up Healthcare data foundations.


Before you execute the FHIR export service notebook, ensure you meet the following requirements:

  • If you're using Azure Health Data Services as your FHIR data source, the setup steps in Use FHIR service are completed.
  • If you're not using a FHIR server in your test environment, the provided sample data is set up as explained in Deploy sample data.
  • The FHIR data ingestion capability is successfully deployed in your Fabric workspace.
  • The healthcare#_msft_fhir_export_service notebook is configured, as explained in Configure the FHIR export service.
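Once the prerequisites are in place, the notebook's configuration boils down to a small set of connection and export parameters. As a rough illustration only, the sketch below shows the kind of values you typically supply; every parameter name and placeholder value here is hypothetical, not one of the notebook's actual configuration keys. Use the real values documented in Configure the FHIR export service.

```python
# Hypothetical illustration of the kind of settings the FHIR export service
# needs. None of these keys are the notebook's real parameter names; see
# "Configure the FHIR export service" for the actual configuration.
export_config = {
    # Endpoint of the Azure Health Data Services FHIR service to export from.
    "fhir_service_url": "https://<workspace>-<fhir-service>.fhir.azurehealthcareapis.com",
    # ADLS Gen2 target where the bulk $export operation writes raw JSON files.
    "export_storage_account": "<adls-gen2-account>",
    "export_container": "<container-name>",
    # How often (in seconds) to poll the export operation for completion.
    "poll_interval_seconds": 60,
}
```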

Execute the FHIR export service

To use the FHIR data ingestion pipeline, choose one of the following three data ingestion options:

  1. Use the sample data shipped with Healthcare data solutions (preview).
  2. Bring your own data to the Fabric lakehouse.
  3. Ingest data using a FHIR service such as Azure Health Data Services.


Ingesting data using a FHIR service only works with first-party Microsoft FHIR services.

Because the notebook configuration and execution differ for each data ingestion option, review the configuration guidance in FHIR data ingestion options.

Using the bulk $export operation endpoint in the FHIR service, the pipeline exports FHIR data to a storage container in an Azure Data Lake Storage Gen2 storage account. The FHIRExportService module in the healthcare#_msft_fhir_export_service notebook handles the status tracking and monitoring of these export operations.
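The $export flow the pipeline relies on follows the standard HL7 FHIR Bulk Data Access pattern: an asynchronous kickoff request returns a Content-Location status URL, which is polled until the export manifest is ready. The sketch below illustrates that protocol in plain Python; the URLs, token, and function names are placeholders for illustration, not part of the FHIRExportService module.

```python
# Minimal sketch of the FHIR bulk $export protocol (HL7 Bulk Data Access).
# The base URL and access token are placeholders; the pipeline performs the
# equivalent steps against your configured FHIR service.
import json
import urllib.request


def kickoff_request(fhir_base_url: str, access_token: str) -> urllib.request.Request:
    """Build the system-level $export kickoff request."""
    return urllib.request.Request(
        f"{fhir_base_url.rstrip('/')}/$export",
        headers={
            "Accept": "application/fhir+json",
            "Prefer": "respond-async",  # required: the export runs asynchronously
            "Authorization": f"Bearer {access_token}",
        },
    )


def poll_status(status_url: str, access_token: str):
    """Poll the Content-Location URL: 202 = still running, 200 = complete.

    On completion, the response body is a manifest listing the exported
    NDJSON file URLs, which the pipeline lands in the ADLS Gen2 container.
    """
    req = urllib.request.Request(
        status_url, headers={"Authorization": f"Bearer {access_token}"}
    )
    with urllib.request.urlopen(req) as resp:
        if resp.status == 202:
            return None  # in progress; retry after a delay
        return json.load(resp)
```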

After confirming that you have the correct configuration set up based on your data ingestion option, follow these steps to execute the pipeline:

  1. On the FHIR data ingestion capability management page, select the healthcare#_msft_fhir_export_service notebook to open it.
  2. Review the details in the Configuration management and setup and Run the FHIRExportService sections.
  3. Select the Run cell or Run all option to execute the pipeline, and wait for the execution to complete.

We recommend scheduling this notebook job to run every four hours, or as needed based on your requirements.


By default, all new Fabric workspaces use the latest Fabric runtime version, which is currently Runtime 1.2. However, the solution currently supports only Runtime 1.1.

Therefore, after deploying Healthcare data solutions (preview) to your workspace, update the default Fabric runtime version to Runtime 1.1 (Apache Spark 3.3.1 and Delta Lake 2.2.0) before executing any of the pipelines or notebooks. Otherwise, your notebook execution fails.
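As a minimal safeguard, you could add a cell like the following at the top of a notebook to fail fast when the workspace is still on an unsupported runtime. The helper name is illustrative; the supported Spark version (3.3.x, shipped with Fabric Runtime 1.1) comes from this article.

```python
# Illustrative guard cell: verify the notebook is running on Fabric
# Runtime 1.1 (Apache Spark 3.3.x) before doing any work. The helper
# name is hypothetical, not part of the solution.
def runtime_is_supported(spark_version: str) -> bool:
    """Fabric Runtime 1.1 ships Apache Spark 3.3.1; accept any 3.3.x patch."""
    return spark_version.startswith("3.3")


# In a Fabric notebook, `spark` is the preconfigured SparkSession:
# assert runtime_is_supported(spark.version), (
#     "Switch the workspace default runtime to Fabric Runtime 1.1 "
#     "(Apache Spark 3.3.1) before running this pipeline."
# )
```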

See also