Ingest data from Microsoft Outlook

Important

This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Azure Databricks previews.

This page shows how to create a managed Outlook ingestion pipeline using Databricks Lakeflow Connect.

Requirements

  • To create an ingestion pipeline, you must first meet the following requirements:

    • Your workspace must be enabled for Unity Catalog.

    • Serverless compute must be enabled for your workspace. See Serverless compute requirements.

    • If you plan to create a new connection: You must have CREATE CONNECTION privileges on the metastore. See Manage privileges in Unity Catalog.

      If the connector supports UI-based pipeline authoring, an admin can create the connection and the pipeline at the same time by completing the steps on this page. However, if the users who create pipelines use API-based pipeline authoring or are non-admin users, an admin must first create the connection in Catalog Explorer. See Connect to managed ingestion sources.

    • If you plan to use an existing connection: You must have USE CONNECTION privileges or ALL PRIVILEGES on the connection object.

    • You must have USE CATALOG privileges on the target catalog.

    • You must have USE SCHEMA and CREATE TABLE privileges on an existing schema or CREATE SCHEMA privileges on the target catalog.

  • To ingest from Microsoft Outlook, you must first complete the steps in Configure authentication to Microsoft Outlook and create a connection using Outlook.

Create an ingestion pipeline

The connector supports a single table, email_messages, under the default schema. All mailboxes are merged into this table with a mailbox column distinguishing between them. For details about the destination schema, see Supported data.
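Because every mailbox lands in the same table, downstream consumers typically split or group rows on the `mailbox` column. The following is a minimal in-memory sketch of that layout; only the `mailbox` column is documented above, so the other field names are illustrative assumptions.

```python
from collections import defaultdict

# Illustrative rows as they might appear in email_messages. Only the
# `mailbox` column is documented; `subject` is an assumed field name.
rows = [
    {"mailbox": "user1@contoso.com", "subject": "Q1 report"},
    {"mailbox": "user2@contoso.com", "subject": "Invoice 42"},
    {"mailbox": "user1@contoso.com", "subject": "Standup notes"},
]

# All mailboxes share one table; group rows back out per mailbox.
by_mailbox = defaultdict(list)
for row in rows:
    by_mailbox[row["mailbox"]].append(row["subject"])

print(dict(by_mailbox))
```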

Databricks UI

  1. In the sidebar of the Azure Databricks workspace, click Data Ingestion.
  2. On the Add data page, under Databricks connectors, click Outlook.
  3. On the Connection page of the ingestion wizard, select the connection that stores your Microsoft Outlook access credentials. If you have the CREATE CONNECTION privilege on the metastore, you can click + Create connection to create a new connection using the authentication details in Configure authentication to Microsoft Outlook.
  4. Click Next.
  5. On the Ingestion setup page, enter a unique name for the pipeline.
  6. Select a catalog and a schema to write event logs to. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click + Create schema in the drop-down menu to create a new schema.
  7. Click Create pipeline and continue.
  8. On the Source page, select the default schema to ingest the email_messages table.
  9. Click Save and continue.
  10. On the Destination page, select a catalog and a schema to load data into. If you have USE CATALOG and CREATE SCHEMA privileges on the catalog, you can click + Create schema in the drop-down menu to create a new schema.
  11. Click Save and continue.
  12. (Optional) On the Schedules and notifications page, click + Create schedule and set the frequency for refreshing the destination tables.
  13. (Optional) Click + Add notification to set email notifications for pipeline operation success or failure, then click Save and run pipeline.

Declarative Automation Bundles

Use Declarative Automation Bundles to manage Outlook pipelines as code. Bundles can contain YAML definitions of jobs and tasks, are managed using the Databricks CLI, and can be shared and run in different target workspaces (such as development, staging, and production). For more information, see What are Declarative Automation Bundles?.

  1. Create a bundle using the Databricks CLI:

    databricks bundle init
    
  2. Add two new resource files to the bundle:

    • A pipeline definition file (for example, resources/outlook_pipeline.yml). See pipeline.ingestion_definition and Examples.
    • A job definition file that controls the frequency of data ingestion (for example, resources/outlook_job.yml).
  3. Deploy the pipeline using the Databricks CLI:

    databricks bundle deploy
    

Databricks notebook

  1. Import the following notebook into your Azure Databricks workspace:

    Get notebook

  2. Leave cell one as-is.

  3. Modify cell three with your pipeline configuration details. See pipeline.ingestion_definition and Examples.

  4. Click Run all.

Examples

Use these examples to configure your pipeline. See Connector options for the full list of available outlook_options.

Ingest all email messages (default Inbox folder)

This example ingests email messages from the Inbox folder of all accessible mailboxes in the tenant.

Declarative Automation Bundles

variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

resources:
  pipelines:
    pipeline_outlook:
      name: outlook_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <outlook-connection>
        objects:
          - schema:
              source_schema: default
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
              connector_options:
                outlook_options:
                  start_date: '2024-01-01'

Databricks notebook

pipeline_spec = """
{
  "name": "<pipeline-name>",
  "ingestion_definition": {
    "connection_name": "<outlook-connection>",
    "objects": [
      {
        "schema": {
          "source_schema": "default",
          "destination_catalog": "main",
          "destination_schema": "ingest_destination_schema",
          "connector_options": {
            "outlook_options": {
              "start_date": "2024-01-01"
            }
          }
        }
      }
    ]
  },
  "channel": "PREVIEW"
}
"""
create_pipeline(pipeline_spec)
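Because `pipeline_spec` is passed as a JSON string, a typo fails only at submission time. As an optional precaution, a sketch like the following (standard library only; the helper name is hypothetical) can parse the string and confirm the keys used above exist before calling `create_pipeline`:

```python
import json

def check_pipeline_spec(spec: str) -> dict:
    """Parse a pipeline spec string and verify the keys used above exist."""
    parsed = json.loads(spec)  # raises ValueError on malformed JSON
    for key in ("name", "ingestion_definition"):
        if key not in parsed:
            raise KeyError(f"pipeline spec is missing required key: {key}")
    if "connection_name" not in parsed["ingestion_definition"]:
        raise KeyError("ingestion_definition is missing connection_name")
    return parsed

# Validate a spec shaped like the one above before submitting it.
spec = '{"name": "demo", "ingestion_definition": {"connection_name": "c", "objects": []}}'
parsed = check_pipeline_spec(spec)
print(parsed["name"])
```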

Ingest from specific mailboxes with filters

This example ingests email messages from specific mailboxes, filtered by folder, sender, and subject.

Declarative Automation Bundles

variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

resources:
  pipelines:
    pipeline_outlook:
      name: outlook_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <outlook-connection>
        objects:
          - schema:
              source_schema: default
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
              connector_options:
                outlook_options:
                  include_mailboxes:
                    - user1@contoso.com
                    - user2@contoso.com
                  include_folders:
                    - Inbox
                    - Sent Items
                  include_senders:
                    - alerts@vendor.com
                    - noreply@system.io
                  include_subjects:
                    - SubjectExactMatch
                    - SubjectPrefixMatch*
                  start_date: '2024-01-01'
                  body_format: TEXT_PLAIN
                  attachment_mode: NON_INLINE_ONLY

Databricks notebook

pipeline_spec = """
{
  "name": "<pipeline-name>",
  "ingestion_definition": {
    "connection_name": "<outlook-connection>",
    "objects": [
      {
        "schema": {
          "source_schema": "default",
          "destination_catalog": "main",
          "destination_schema": "ingest_destination_schema",
          "connector_options": {
            "outlook_options": {
              "include_mailboxes": ["user1@contoso.com", "user2@contoso.com"],
              "include_folders": ["Inbox", "Sent Items"],
              "include_senders": ["alerts@vendor.com", "noreply@system.io"],
              "include_subjects": ["SubjectExactMatch", "SubjectPrefixMatch*"],
              "start_date": "2024-01-01",
              "body_format": "TEXT_PLAIN",
              "attachment_mode": "NON_INLINE_ONLY"
            }
          }
        }
      }
    ]
  },
  "channel": "PREVIEW"
}
"""
create_pipeline(pipeline_spec)
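The `include_subjects` values above mix an exact match (`SubjectExactMatch`) with a trailing-`*` prefix match (`SubjectPrefixMatch*`). The connector applies these filters on the source side; the sketch below only illustrates the matching semantics suggested by those example values, and the helper name is hypothetical:

```python
def subject_matches(subject: str, patterns: list[str]) -> bool:
    """Illustrative only: a pattern ending in `*` matches as a prefix;
    any other pattern must match the subject exactly."""
    for pattern in patterns:
        if pattern.endswith("*"):
            if subject.startswith(pattern[:-1]):
                return True
        elif subject == pattern:
            return True
    return False

patterns = ["SubjectExactMatch", "SubjectPrefixMatch*"]
print(subject_matches("SubjectExactMatch", patterns))        # exact match
print(subject_matches("SubjectPrefixMatch 2024", patterns))  # prefix match
print(subject_matches("Other subject", patterns))            # no match
```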

Ingest the email_messages table explicitly

This example selects the email_messages table directly instead of targeting the schema.

Declarative Automation Bundles

variables:
  dest_catalog:
    default: main
  dest_schema:
    default: ingest_destination_schema

resources:
  pipelines:
    pipeline_outlook:
      name: outlook_pipeline
      catalog: ${var.dest_catalog}
      schema: ${var.dest_schema}
      ingestion_definition:
        connection_name: <outlook-connection>
        objects:
          - table:
              source_schema: default
              source_table: email_messages
              destination_catalog: ${var.dest_catalog}
              destination_schema: ${var.dest_schema}
              destination_table: my_email_messages
              connector_options:
                outlook_options:
                  start_date: '2024-01-01'

Databricks notebook

pipeline_spec = """
{
  "name": "<pipeline-name>",
  "ingestion_definition": {
    "connection_name": "<outlook-connection>",
    "objects": [
      {
        "table": {
          "source_schema": "default",
          "source_table": "email_messages",
          "destination_catalog": "main",
          "destination_schema": "ingest_destination_schema",
          "destination_table": "my_email_messages",
          "connector_options": {
            "outlook_options": {
              "start_date": "2024-01-01"
            }
          }
        }
      }
    ]
  },
  "channel": "PREVIEW"
}
"""
create_pipeline(pipeline_spec)

Bundle job definition file

The following is an example job definition file to use with Declarative Automation Bundles. The job runs on a periodic trigger, one day after the previous run.

resources:
  jobs:
    outlook_dab_job:
      name: outlook_dab_job

      trigger:
        periodic:
          interval: 1
          unit: DAYS

      email_notifications:
        on_failure:
          - <email-address>

      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.pipeline_outlook.id}
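Outside of a scheduled job, a pipeline refresh can also be started through the Databricks Pipelines REST API. The following is a minimal sketch that only builds the request and does not send it; the workspace URL, token, and pipeline ID are placeholders, and it assumes the `POST /api/2.0/pipelines/{pipeline_id}/updates` endpoint:

```python
import urllib.request

def build_refresh_request(host: str, pipeline_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a request that starts a pipeline update."""
    url = f"{host}/api/2.0/pipelines/{pipeline_id}/updates"
    return urllib.request.Request(
        url,
        data=b"{}",
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

req = build_refresh_request(
    "https://adb-1234567890123456.7.azuredatabricks.net",
    "<pipeline-id>",
    "<token>",
)
print(req.full_url)
# To actually trigger the refresh: urllib.request.urlopen(req)
```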

Common patterns

For advanced pipeline configurations, see Common patterns for managed ingestion pipelines.

Next steps

Start, schedule, and set alerts on your pipeline. See Common pipeline maintenance tasks.

Additional resources