Apache Airflow jobs allow you to build and schedule Apache Airflow Directed Acyclic Graphs (DAGs) in Microsoft Fabric. For more details, see What is Apache Airflow job.
An Apache Airflow job is charged based on pool uptime. Each Apache Airflow job has its own isolated pool, which isn't shared with other Apache Airflow jobs. Two types of pools are available: Starter and Custom.
The following table describes CU consumption based on the size used for the Apache Airflow job. By default, both Starter and Custom pools use the Large size; Small can be selected only with a Custom pool. Each Apache Airflow job consists of an Apache Airflow cluster containing three nodes, unless you configure autoscale or add extra nodes.
| Apache Airflow job size (Base) | Consumption meter | Fabric CU consumption rate | Consumption reporting granularity |
| --- | --- | --- | --- |
| Small | DataWorkflow Small | 5 CUs | Per Apache Airflow job item |
| Large | DataWorkflow Large | 10 CUs | Per Apache Airflow job item |
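To make the meter concrete, the sketch below estimates base CU-seconds from pool uptime. It assumes the consumption rate in the table applies per second of pool uptime (the Metrics app reports totals in CU seconds); the `base_cu_seconds` helper and the uptime figure are illustrative, not part of any Fabric API.

```python
# Illustrative sketch: base CU-seconds accrued by an Apache Airflow job pool.
# Assumption: the CU rate from the table above applies per second of pool uptime.
BASE_CU_RATE = {"Small": 5, "Large": 10}  # CUs per second of uptime

def base_cu_seconds(size: str, uptime_seconds: float) -> float:
    """Estimate base CU-seconds for a pool of the given size."""
    return BASE_CU_RATE[size] * uptime_seconds

# Example: a Large pool that stays up for 2 hours (7,200 seconds).
print(base_cu_seconds("Large", 2 * 60 * 60))  # 72000
```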
Since Apache Airflow jobs support autoscaling for better performance and scalability, you can add extra nodes to your Apache Airflow job. Each extra node is charged based on the following table.
| Apache Airflow job extra node (Extra) | Consumption meter | Fabric CU consumption rate | Consumption reporting granularity |
| --- | --- | --- | --- |
| Small | DataWorkflow Small | 0.6 CUs | Per Apache Airflow job item |
| Large | DataWorkflow Large | 1.3 CUs | Per Apache Airflow job item |
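Extra nodes accrue on the same meters at the lower rates above, in addition to the base pool. The following hedged extension of the previous sketch shows how they add up; the node count and duration are invented for illustration.

```python
# Illustrative sketch: CU-seconds added by extra (autoscale) nodes.
# Assumption: each extra node accrues its rate for every second it runs.
EXTRA_NODE_CU_RATE = {"Small": 0.6, "Large": 1.3}  # CUs per node per second

def extra_cu_seconds(size: str, extra_nodes: int, uptime_seconds: float) -> float:
    """Estimate extra-node CU-seconds for the given node count and uptime."""
    return EXTRA_NODE_CU_RATE[size] * extra_nodes * uptime_seconds

# Example: 2 extra Large nodes active for 1 hour (3,600 seconds).
print(extra_cu_seconds("Large", 2, 60 * 60))  # 9360.0
```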
Consumption rates are subject to change at any time. Microsoft uses reasonable efforts to provide notice via email and in-product notification. Changes are effective on the date stated in the Release Notes and the Microsoft Fabric Blog. If any change to a Microsoft Fabric Workload Consumption Rate materially increases the Capacity Units (CU) required to use a particular workload, customers can use the cancellation options available for the chosen payment method.
The Microsoft Fabric Capacity Metrics app provides visibility into capacity usage for all Fabric workspaces tied to a capacity. It's used by capacity administrators to monitor the performance of workloads and their usage compared to purchased capacity. Using the Metrics app is the most accurate way to estimate the costs of an Apache Airflow job.
The following table can be used as a template to compute estimated costs for an Apache Airflow job using the Fabric Capacity Metrics app:
| Metric | Apache Airflow job size | Extra nodes |
| --- | --- | --- |
| Total CUs | DataWorkflow Small or DataWorkflow Large CU seconds (Base) | DataWorkflow Small Extra Node or DataWorkflow Large Extra Node CU seconds (Extra) |
| Effective CU-hours billed | Base / (60 * 60) CU-hours | Extra / (60 * 60) CU-hours |
Total Apache Airflow job cost = (Base CU-hours + Extra CU-hours) * (Fabric capacity per-unit price)
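As a worked example of this template, the sketch below converts meter totals to CU-hours and applies a per-unit price. In practice the CU-second figures come from the Metrics app; all numbers here, including the per-CU-hour price, are hypothetical, since Fabric capacity prices vary by region and SKU.

```python
# Worked example of the cost template above (all figures illustrative).
base_cu_seconds = 72_000    # Total CUs on the DataWorkflow Large meter (Base)
extra_cu_seconds = 9_360    # Total CUs on the DataWorkflow Large Extra Node meter (Extra)
price_per_cu_hour = 0.18    # hypothetical Fabric capacity per-unit price (USD)

base_cu_hours = base_cu_seconds / (60 * 60)    # 20.0 CU-hours
extra_cu_hours = extra_cu_seconds / (60 * 60)  # 2.6 CU-hours

total_cost = (base_cu_hours + extra_cu_hours) * price_per_cu_hour
print(f"Estimated Apache Airflow job cost: ${total_cost:.2f}")  # $4.07
```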