AutoML code-first preview
In Fabric Data Science, the new AutoML feature enables automation of your machine learning workflow. AutoML, or Automated Machine Learning, is a set of techniques and tools that can automatically train and optimize machine learning models for any given data and task type.
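As an illustration, here's a minimal code-first sketch using the open-source FLAML library that Fabric's AutoML builds on; the scikit-learn iris dataset and the 60-second budget are placeholders for your own data and limits.

```python
# Minimal AutoML sketch with FLAML, the library behind Fabric's AutoML.
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)  # stand-in for your own training data

automl = AutoML()
automl.fit(
    X_train=X,
    y_train=y,
    task="classification",  # other tasks include "regression" and forecasting
    time_budget=60,          # seconds to spend searching models and hyperparameters
    metric="accuracy",
)

print(automl.best_estimator)  # name of the winning learner, e.g. "lgbm"
print(automl.best_config)     # its hyperparameter configuration
```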
AutoML low code user experience in Fabric (preview)
AutoML, or Automated Machine Learning, is a process that automates the time-consuming and complex tasks of developing machine learning models. The new low code AutoML experience supports a variety of tasks, including regression, forecasting, classification, and multi-class classification. To get started, see Create models with Automated ML (preview).
Azure Data Factory item
You can now bring your existing Azure Data Factory (ADF) into your Fabric workspace. This preview capability lets you connect to your existing Azure Data Factory from Fabric: select "Create Azure Data Factory" in your Fabric Data Factory workspace, and you can then manage your Azure data factories directly from the Fabric workspace.
Capacity pools preview
Capacity administrators can now create custom pools (preview) based on their workload requirements, providing granular control over compute resources. Custom pools for Data Engineering and Data Science can be set as Spark Pool options within Workspace Spark Settings and environment items.
Code-First Hyperparameter Tuning preview
In Fabric Data Science, FLAML is now integrated for hyperparameter tuning, currently as a preview feature. Fabric's flaml.tune integration streamlines the process, offering a cost-effective and efficient approach to hyperparameter tuning.
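For illustration, here's a minimal sketch of the flaml.tune pattern: an evaluation function returns the metric, a config dictionary describes the search space, and tune.run searches within a trial and time budget. The toy objective below is a stand-in for real model training and validation.

```python
# Hyperparameter tuning sketch with flaml.tune; the objective is a toy stand-in
# for training a model and returning its validation score.
from flaml import tune

def evaluate(config):
    score = -(config["x"] - 3) ** 2 + config["y"]  # pretend validation metric
    return {"score": score}

analysis = tune.run(
    evaluate,
    config={
        "x": tune.uniform(0, 10),  # continuous search dimension
        "y": tune.randint(1, 5),   # integer search dimension
    },
    metric="score",
    mode="max",
    num_samples=50,     # maximum number of trials
    time_budget_s=30,   # stop after 30 seconds even if trials remain
)

print(analysis.best_config)
```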
Copilot in Fabric is available worldwide
Copilot in Fabric is now available to all customers, including Copilot for Power BI, Data Factory, Data Science & Data Engineering, and Real-Time Intelligence. Read more in the Copilot in Fabric overview.
Copy job
The Copy job (preview) has advantages over the legacy Copy activity. For more information, see Announcing Preview: Copy Job in Microsoft Fabric. For a tutorial, see Learn how to create a Copy job (preview) in Data Factory for Microsoft Fabric.
Data Factory Apache Airflow jobs preview
The Apache Airflow job (preview) in Data Factory, powered by Apache Airflow, offers a seamless authoring, scheduling, and monitoring experience for Python-based data processes defined as Directed Acyclic Graphs (DAGs). For more information, see Quickstart: Create a Data workflow.
Data pipeline capabilities in Copilot for Data Factory (preview)
The new Data pipeline capabilities in Copilot for Data Factory are now available in preview. These features function as an AI expert to help users build, troubleshoot, and maintain data pipelines.
Data Wrangler for Spark DataFrames preview
Data Wrangler now supports Spark DataFrames in preview: in addition to pandas DataFrames, you can now edit Spark DataFrames with Data Wrangler.
Data Science AI skill (preview)
You can now build your own generative AI experiences over your data in Fabric with the AI skill (preview)! You can build question-and-answer AI systems over your Lakehouses and Warehouses. For more information, see Introducing AI Skills in Microsoft Fabric: Now in Preview. To get started, try the AI skill example with the AdventureWorks dataset.
Dataflow Gen2 with CI/CD and Git integration
Dataflow Gen2 now supports Continuous Integration/Continuous Deployment (CI/CD) and Git integration. This preview feature allows you to create, edit, and manage dataflows in a Git repository that's connected to your Fabric workspace. Additionally, you can use the deployment pipelines feature to automate the deployment of dataflows from your workspace to other workspaces. You can also use the Fabric Create, Read, Update, Delete, and List (CRUDL) API to manage Dataflow Gen2.
Delta column mapping in the SQL analytics endpoint
SQL analytics endpoint now supports Delta tables with column mapping enabled. For more information, see Delta column mapping and Limitations of the SQL analytics endpoint. This feature is currently in preview.
Domains in OneLake (preview)
Domains in OneLake help you organize your data into a logical data mesh, allowing federated governance and optimizing for business needs. You can now create subdomains, set default domains for users, and move workspaces between domains. For more information, see Fabric domains.
High concurrency mode for Notebooks in Pipelines (preview)
High concurrency mode for Notebooks in Pipelines enables users to share Spark sessions across multiple notebooks within a pipeline. With high concurrency mode, users can trigger pipeline jobs, and these jobs are automatically packed into existing high concurrency sessions.
Fabric gateway enables OneLake shortcuts to on-premises data
Connect to on-premises data sources by installing a Fabric on-premises data gateway on a machine in your environment that has network visibility of your S3-compatible or Google Cloud Storage data source. Then create your shortcut and select that gateway. For more information, see Create shortcuts to on-premises data.
Fabric Spark connector for Fabric Data Warehouse in Spark runtime (preview)
The Fabric Spark connector for Data Warehouse (preview) enables a Spark developer or data scientist to access and work on data from a warehouse or the SQL analytics endpoint of a lakehouse (either within the same workspace or across workspaces) with a simplified Spark API.
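The calls below are a hedged sketch of that simplified API, assuming the built-in spark session of a Fabric notebook where the connector is available; the warehouse, schema, table, and column names are placeholders.

```python
# Read from a warehouse (or SQL analytics endpoint) table with the connector's
# synapsesql API; names below are placeholders for your own objects.
df = spark.read.synapsesql("MyWarehouse.dbo.DimCustomer")
df.show(5)

# Write a transformed result back to a warehouse table (hypothetical column).
df.filter("Country = 'Canada'").write.synapsesql("MyWarehouse.dbo.DimCustomerCanada")
```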
Fabric Spark Diagnostic Emitter (preview)
The Fabric Apache Spark Diagnostic Emitter (preview) allows Apache Spark users to collect logs, event logs, and metrics from their Spark applications and send them to various destinations, including Azure Event Hubs, Azure Storage, and Azure Log Analytics.
Fabric SQL database (Preview)
SQL database in Microsoft Fabric (Preview) is a developer-friendly transactional database, based on Azure SQL Database, that allows you to easily create your operational database in Fabric. A SQL database in Fabric uses the same SQL Database Engine as Azure SQL Database. Review the decision guide for SQL databases.
Folder in Workspace preview
Folders are an organizational unit in the workspace that provide a hierarchical structure for organizing and managing your items. For more information, see Create folders in workspaces (preview).
Iceberg data in OneLake using Snowflake and shortcuts (preview)
You can now consume Iceberg-formatted data across Microsoft Fabric with no data movement or duplication, and Snowflake has added the ability to write Iceberg tables directly to OneLake. For more information, see Use Iceberg tables with OneLake.
Incremental refresh for Dataflow Gen2 (preview)
Incremental refresh for Dataflows Gen2 in Fabric Data Factory is designed to optimize data ingestion and transformation, particularly as your data continues to expand. For more information, see Announcing Preview: Incremental Refresh in Dataflow Gen2.
Invoke remote pipeline (preview) in Data pipeline
You can now use the Invoke Pipeline (preview) activity to call pipelines from Azure Data Factory or Synapse Analytics pipelines. This feature allows you to use your existing ADF or Synapse pipelines inside a Fabric pipeline by calling them inline through the new Invoke Pipeline activity.
Lakehouse schemas feature
The Lakehouse schemas feature (preview) introduces data pipeline support for reading the schema info from Lakehouse tables and supports writing data into tables under specified schemas. Lakehouse schemas allow you to group your tables together for better data discovery, access control, and more.
Lakehouse support for git integration and deployment pipelines (preview)
The Lakehouse now integrates with the lifecycle management capabilities in Microsoft Fabric, providing standardized collaboration among all development team members throughout the product's life. Lifecycle management facilitates an effective product versioning and release process by continuously delivering features and bug fixes into multiple environments.
Managed virtual networks (preview)
Managed virtual networks are virtual networks that are created and managed by Microsoft Fabric for each Fabric workspace.
Microsoft 365 connector now supports ingesting data into Lakehouse (preview)
The Microsoft 365 connector now supports ingesting data into Lakehouse tables.
Microsoft Fabric Admin APIs
Fabric Admin APIs are designed to streamline administrative tasks. The initial set of Fabric Admin APIs is tailored to simplify the discovery of workspaces, Fabric items, and user access details.
Mirroring in Microsoft Fabric preview
With database mirroring in Fabric, you can easily bring your databases into OneLake in Microsoft Fabric, enabling seamless zero-ETL, near real-time insights on your data and unlocking warehousing, BI, AI, and more. For more information, see What is Mirroring in Fabric?
Native Execution Engine on Runtime 1.3 (preview)
Native Execution Engine for Fabric Runtime 1.3 is now available in preview, offering superior query performance across data processing, ETL, data science, and interactive queries. No code changes are required to speed up the execution of your Apache Spark jobs when using the Native Execution Engine.
Nested common table expressions (CTEs) (preview)
Fabric Warehouse and SQL analytics endpoint both support standard, sequential, and nested CTEs. While CTEs are generally available in Microsoft Fabric, nested common table expressions (CTEs) in Fabric Data Warehouse are currently a preview feature.
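To illustrate what distinguishes a nested CTE, here's a hedged sketch with the query expressed as a T-SQL string you might submit from a notebook or client tool; the table and column names are hypothetical.

```python
# Hypothetical nested CTE: an inner CTE (RegionSales) is defined inside the
# definition of an outer CTE (TopRegions). Table and column names are placeholders.
nested_cte_query = """
WITH TopRegions AS
(
    WITH RegionSales AS
    (
        SELECT Region, SUM(Amount) AS RegionTotal
        FROM dbo.Sales
        GROUP BY Region
    )
    SELECT Region, RegionTotal
    FROM RegionSales
    WHERE RegionTotal > 100000
)
SELECT *
FROM TopRegions
ORDER BY RegionTotal DESC;
"""
print(nested_cte_query)  # submit through your preferred client or driver
```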
Notebook debug within vscode.dev (preview)
You can now place breakpoints and debug your notebook code with the Synapse VS Code - Remote extension in vscode.dev. Support starts with Fabric Runtime 1.3.
OneLake data access roles
OneLake data access roles for lakehouse are in preview. Role permissions and user/group assignments can be easily updated through a new folder security user interface.
OneLake SAS (preview)
Support for short-lived, user-delegated OneLake SAS is now in preview. This functionality allows applications to request a User Delegation Key backed by Microsoft Entra ID, and then use this key to construct a OneLake SAS token. This token can be handed off to provide delegated access to another tool, node, or user, ensuring secure and controlled access.
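The snippet below is a hypothetical sketch of that flow using the azure-storage-blob SDK: acquire a Microsoft Entra-backed user delegation key, then build a short-lived SAS scoped to a workspace. The OneLake endpoint, account name, and workspace name shown here are assumptions used to illustrate the pattern, not verified values.

```python
# Hypothetical OneLake SAS sketch: get a user delegation key backed by Microsoft
# Entra ID, then construct a short-lived SAS. Endpoint/account/workspace names
# are illustrative assumptions.
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobServiceClient,
    ContainerSasPermissions,
    generate_container_sas,
)

service = BlobServiceClient(
    account_url="https://onelake.blob.fabric.microsoft.com",  # assumed OneLake blob endpoint
    credential=DefaultAzureCredential(),
)

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)  # keep the SAS short-lived

delegation_key = service.get_user_delegation_key(start, expiry)

sas_token = generate_container_sas(
    account_name="onelake",            # assumed account name
    container_name="MyWorkspace",      # hypothetical workspace
    user_delegation_key=delegation_key,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=expiry,
)
print(sas_token)  # hand off to the tool, node, or user that needs delegated access
```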
Open mirroring (Preview)
Open mirroring enables any application to write change data directly into a mirrored database in Fabric, based on the open mirroring public APIs and approach. Open mirroring is designed to be extensible, customizable, and open. It's a powerful feature that extends mirroring in Fabric based on the open Delta Lake table format. To get started, see Tutorial: Configure Microsoft Fabric open mirrored databases.
Prebuilt Azure AI services in Fabric preview
The preview of prebuilt AI services in Fabric is an integration with Azure AI services, formerly known as Azure Cognitive Services. Prebuilt Azure AI services allow for easy enhancement of data with prebuilt AI models without any prerequisites. The prebuilt AI services currently in preview include support for the Microsoft Azure OpenAI Service, Azure AI Language, and Azure AI Translator.
Purview Data Loss Prevention policies have been extended to Fabric lakehouses
Extending Microsoft Purview's Data Loss Prevention (DLP) policies into Fabric lakehouses is now in preview.
Purview Data Loss Prevention policies now support the restrict access action for semantic models
Restricting access based on sensitive content for semantic models, now in preview, helps you automatically detect sensitive information as it's uploaded into Fabric lakehouses and semantic models.
Real-Time Dashboards and underlying KQL databases access separation (preview)
With separate permissions for dashboards and underlying data, administrators now have the flexibility to allow users to view dashboards without giving access to the raw data.
Reserve maximum cores for jobs (preview)
A new workspace-level setting allows you to reserve maximum cores for your active jobs for Spark workloads. For more information, see High concurrency mode in Apache Spark for Fabric.
REST APIs for Fabric Data Factory pipelines preview
The REST APIs for Fabric Data Factory Pipelines are now in preview. REST APIs for Data Factory pipelines enable you to extend the built-in capability in Fabric to create, read, update, delete, and list pipelines.
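As a hedged example, the snippet below lists data pipelines in a workspace with the requests library. The workspace ID and bearer token are placeholders, and the type filter on the generic Fabric items endpoint is an assumption; check the REST API reference for the exact routes.

```python
# Hypothetical sketch: list DataPipeline items in a workspace via the Fabric REST
# API. Workspace ID and token are placeholders; verify the route and filter
# against the official API reference.
import requests

workspace_id = "<workspace-id>"
token = "<bearer-token>"  # e.g. acquired with azure-identity or MSAL

response = requests.get(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/items",
    headers={"Authorization": f"Bearer {token}"},
    params={"type": "DataPipeline"},
)
response.raise_for_status()

for item in response.json().get("value", []):
    print(item["id"], item["displayName"])
```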
Secure Data Streaming with Managed Private Endpoints in Eventstream (Preview)
By creating a Fabric Managed Private Endpoint, you can now securely connect Eventstream to your Azure services, such as Azure Event Hubs or IoT Hub, within a private network or behind a firewall. For more information, see Secure Data Streaming with Managed Private Endpoints in Eventstream (Preview).
Semantic model refresh activity (preview)
Use the Semantic model refresh activity (preview) to refresh a semantic model (formerly a Power BI dataset) from your data pipeline; it's the most effective way to refresh your Fabric semantic models.
Session Expiry Control in Workspace Settings for Notebook Interactive Runs (preview)
A new session expiry control in Data Engineering/Science workspace settings allows you to set the maximum expiration time limit for notebook interactive sessions. By default, sessions expire after 20 minutes, but you can now customize the maximum expiration duration.
Share the Fabric AI skill (preview)
The Share capability for the Fabric AI skill (preview) allows you to share the AI skill with others using a variety of permission models.
Spark Run Series Analysis preview
The Spark Monitoring Run Series Analysis features let you analyze run duration trends and compare performance across recurring run instances of a pipeline Spark activity and repetitive Spark runs from the same notebook or Spark job definition.
Splunk add-on preview
The Microsoft Fabric add-on for Splunk allows users to ingest logs from the Splunk platform into a Fabric KQL database using the Kusto Python SDK.
Tags
Tags (preview) help admins categorize and organize data, enhancing the searchability of your data and boosting success rates and efficiency for end users.
Task flows in Microsoft Fabric (preview)
The preview of task flows in Microsoft Fabric is enabled for all Microsoft Fabric users. With Fabric task flows, when designing a data project, you no longer need to use a whiteboard to sketch out the different parts of the project and their interrelationships. Instead, you can use a task flow to build and bring this key information into the project itself.
varchar(max) and varbinary(max) support in preview
Support for the varchar(max) and varbinary(max) data types in Warehouse is now in preview. For more information, see Announcing public preview of VARCHAR(MAX) and VARBINARY(MAX) types in Fabric Data Warehouse.
Terraform Provider for Fabric (preview)
The Terraform Provider for Microsoft Fabric is now in preview. It supports the creation and management of many Fabric resources. For more information, see Announcing the new Terraform Provider for Microsoft Fabric.
T-SQL support in Fabric notebooks (preview)
The T-SQL notebook feature in Microsoft Fabric (preview) lets you write and run T-SQL code within a notebook. You can use T-SQL notebooks to manage complex queries, write markdown documentation, and run T-SQL directly against a connected warehouse or SQL analytics endpoint. To learn more, see Author and run T-SQL notebooks.
Warehouse restore points and restore in place
You can now create restore points and perform an in-place restore of a warehouse to a past point in time. Restore in place is an essential part of data warehouse recovery: it allows you to restore the data warehouse to a prior known reliable state by replacing or overwriting the existing data warehouse from which the restore point was created.
Warehouse source control (preview)
Using Git integration and/or deployment pipelines with your warehouse, you can manage development and deployment of versioned warehouse objects. You can also use the SQL Database Projects extension available in Azure Data Studio and Visual Studio Code. For more information on warehouse source control, see CI/CD with Warehouses in Microsoft Fabric.