OPENROWSET (Transact-SQL)
Applies to: SQL Server Azure SQL Database Azure SQL Managed Instance
Includes all connection information that is required to access remote data from an OLE DB data source. This method is an alternative to accessing tables in a linked server and is a one-time, ad hoc method of connecting and accessing remote data by using OLE DB. For more frequent references to OLE DB data sources, use linked servers instead. For more information, see Linked Servers (Database Engine).

The OPENROWSET function can be referenced in the FROM clause of a query as if it were a table name. The OPENROWSET function can also be referenced as the target table of an INSERT, UPDATE, or DELETE statement, subject to the capabilities of the OLE DB provider. Although the query might return multiple result sets, OPENROWSET returns only the first one.

OPENROWSET also supports bulk operations through a built-in BULK provider that enables data from a file to be read and returned as a rowset.
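As an illustration, the following sketch (not part of the original examples) references OPENROWSET in a FROM clause through the MSOLEDBSQL provider; the server name Seattle1 is hypothetical:

-- Query a remote table as if it were a local table.
-- MSOLEDBSQL is the currently recommended OLE DB driver; Seattle1 is a hypothetical server.
SELECT d.GroupName, d.Name
FROM OPENROWSET(
    'MSOLEDBSQL',
    'Server=Seattle1;Trusted_Connection=yes;',
    AdventureWorks2022.HumanResources.Department
) AS d;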
Many examples in this article apply only to SQL Server. Details and links to similar examples on other platforms:
- Azure SQL Database only supports reading from Azure Blob Storage.
- For examples on Azure SQL Managed Instance, see Query data sources using OPENROWSET.
- For information and examples with serverless SQL pools in Azure Synapse, see How to use OPENROWSET using serverless SQL pool in Azure Synapse Analytics.
- Dedicated SQL pools in Azure Synapse don't support the OPENROWSET function.
Transact-SQL syntax conventions
OPENROWSET syntax is used to query external data sources:
OPENROWSET
( 'provider_name'
, { 'datasource' ; 'user_id' ; 'password' | 'provider_string' }
, { [ catalog. ] [ schema. ] object | 'query' }
)
OPENROWSET(BULK) syntax is used to read external files:
OPENROWSET( BULK 'data_file' ,
{ FORMATFILE = 'format_file_path' [ <bulk_options> ]
| SINGLE_BLOB | SINGLE_CLOB | SINGLE_NCLOB }
)
<bulk_options> ::=
[ , DATASOURCE = 'data_source_name' ]
-- bulk_options related to input file format
[ , CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ , FORMAT = { 'CSV' | 'PARQUET' | 'DELTA' } ]
[ , FIELDQUOTE = 'quote_characters' ]
[ , FORMATFILE = 'format_file_path' ]
[ , FORMATFILE_DATA_SOURCE = 'data_source_name' ]
[ , FIRSTROW = first_row ]
[ , LASTROW = last_row ]
[ , MAXERRORS = maximum_errors ]
[ , ERRORFILE = 'file_name' ]
[ , ERRORFILE_DATA_SOURCE = 'data_source_name' ]
[ , ROWS_PER_BATCH = rows_per_batch ]
[ , ORDER ( { column [ ASC | DESC ] } [ , ...n ] ) [ UNIQUE ] ]
A character string that represents the friendly name (or PROGID) of the OLE DB provider as specified in the registry. provider_name has no default value. Provider name examples are Microsoft.Jet.OLEDB.4.0, SQLNCLI, or MSDASQL.
A string constant that corresponds to a particular OLE DB data source. datasource is the DBPROP_INIT_DATASOURCE property to be passed to the IDBProperties interface of the provider to initialize the provider. Typically, this string includes the name of the database file, the name of a database server, or a name that the provider understands for locating the database or databases.
The data source can be a file path, such as 'C:\SAMPLES\Northwind.mdb' for the Microsoft.Jet.OLEDB.4.0 provider, or a connection string, such as Server=Seattle1;Trusted_Connection=yes; for the SQLNCLI provider.
A string constant that is the user name passed to the specified OLE DB provider. user_id specifies the security context for the connection and is passed in as the DBPROP_AUTH_USERID property to initialize the provider. user_id can't be a Microsoft Windows login name.

A string constant that is the user password to be passed to the OLE DB provider. password is passed in as the DBPROP_AUTH_PASSWORD property when initializing the provider. password can't be a Microsoft Windows password.
SELECT a.* FROM OPENROWSET(
'Microsoft.Jet.OLEDB.4.0',
'C:\SAMPLES\Northwind.mdb';
'admin';
'password',
Customers
) AS a;
A provider-specific connection string that is passed in as the DBPROP_INIT_PROVIDERSTRING property to initialize the OLE DB provider. provider_string typically encapsulates all the connection information required to initialize the provider. For a list of keywords that the SQL Server Native Client OLE DB provider recognizes, see Initialization and Authorization Properties (Native Client OLE DB Provider).
SELECT d.* FROM OPENROWSET(
'SQLNCLI',
'Server=Seattle1;Trusted_Connection=yes;',
Department
) AS d;
Remote table or view containing the data that OPENROWSET should read. It can be a three-part object name with the following components:
- catalog (optional) - the name of the catalog or database in which the specified object resides.
- schema (optional) - the name of the schema or object owner for the specified object.
- object - the object name that uniquely identifies the object to work with.
SELECT d.* FROM OPENROWSET(
'SQLNCLI',
'Server=Seattle1;Trusted_Connection=yes;',
AdventureWorks2022.HumanResources.Department
) AS d;
A string constant sent to and executed by the provider. The local instance of SQL Server doesn't process this query, but processes the query results returned by the provider (a pass-through query). Pass-through queries are useful with providers that don't make their tabular data available through table names, but only through a command language. Pass-through queries are supported on the remote server, as long as the query provider supports the OLE DB Command object and its mandatory interfaces. For more information, see SQL Server Native Client (OLE DB) Interfaces.
SELECT a.*
FROM OPENROWSET(
'SQLNCLI',
'Server=Seattle1;Trusted_Connection=yes;',
'SELECT TOP 10 GroupName, Name FROM AdventureWorks2022.HumanResources.Department'
) AS a;
Uses the BULK rowset provider for OPENROWSET to read data from a file. In SQL Server, OPENROWSET can read from a data file without loading the data into a target table. This lets you use OPENROWSET with a basic SELECT statement.
Important
Azure SQL Database only supports reading from Azure Blob Storage.
The arguments of the BULK option allow for significant control over where to start and end reading data, how to deal with errors, and how data is interpreted. For example, you can specify that the data file is read as a single-row, single-column rowset of type varbinary, varchar, or nvarchar. The default behavior is described in the argument descriptions that follow.

For information about how to use the BULK option, see the Remarks section later in this article. For information about the permissions that the BULK option requires, see the Permissions section later in this article.
Note
When used to import data with the full recovery model, OPENROWSET (BULK ...) doesn't optimize logging.
For information on preparing data for bulk import, see Prepare data for bulk export or import.
The full path of the data file whose data is to be copied into the target table.
SELECT * FROM OPENROWSET(
BULK 'C:\DATA\inv-2017-01-19.csv',
SINGLE_CLOB
) AS DATA;
Beginning with SQL Server 2017 (14.x), the data_file can be in Azure Blob Storage. For examples, see Examples of bulk access to data in Azure Blob Storage.
Important
Azure SQL Database only supports reading from Azure Blob Storage.
Specifies the file used to collect rows that have formatting errors and can't be converted to an OLE DB rowset. These rows are copied into this error file from the data file "as is."
The error file is created at the start of the command execution. An error is raised if the file already exists. Additionally, a control file that has the extension .ERROR.txt is created. This file references each row in the error file and provides error diagnostics. After the errors are corrected, the data can be loaded.
Beginning with SQL Server 2017 (14.x), the error_file_path can be in Azure Blob Storage.

Beginning with SQL Server 2017 (14.x), this argument is a named external data source pointing to the Azure Blob Storage location of the error file that will contain errors found during the import. The external data source must be created with TYPE = BLOB_STORAGE. For more information, see CREATE EXTERNAL DATA SOURCE.
Specifies the maximum number of syntax errors or nonconforming rows, as defined in the format file, that can occur before OPENROWSET throws an exception. Until MAXERRORS is reached, OPENROWSET ignores each bad row, not loading it, and counts the bad row as one error.

The default for maximum_errors is 10.

Note
MAXERRORS doesn't apply to CHECK constraints, or to converting money and bigint data types.
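For illustration, a hedged sketch (the data-file and format-file paths are hypothetical) that tolerates up to 50 bad rows and collects them in an error file:

SELECT *
FROM OPENROWSET(
    BULK 'C:\DATA\orders.dat',
    FORMATFILE = 'C:\DATA\orders.fmt',       -- hypothetical format file
    MAXERRORS = 50,                          -- stop only after 50 nonconforming rows
    ERRORFILE = 'C:\DATA\orders_errors.log'  -- rejected rows are copied here as is
) AS o;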
Specifies the number of the first row to load. The default is 1. This indicates the first row in the specified data file. The row numbers are determined by counting the row terminators. FIRSTROW is 1-based.
Specifies the number of the last row to load. The default is 0. This indicates the last row in the specified data file.
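For example, a sketch (hypothetical paths) that skips a header row and stops reading at row 10000:

SELECT *
FROM OPENROWSET(
    BULK 'C:\DATA\orders.dat',
    FORMATFILE = 'C:\DATA\orders.fmt',
    FIRSTROW = 2,       -- skip the header row
    LASTROW = 10000     -- last row to load
) AS o;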
Specifies the approximate number of rows of data in the data file. This value should be of the same order as the actual number of rows.

OPENROWSET always imports a data file as a single batch. However, if you specify rows_per_batch with a value > 0, the query processor uses the value of rows_per_batch as a hint for allocating resources in the query plan.

By default, ROWS_PER_BATCH is unknown. Specifying ROWS_PER_BATCH = 0 is the same as omitting ROWS_PER_BATCH.
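A minimal sketch (hypothetical paths) that passes the hint:

SELECT COUNT(*)
FROM OPENROWSET(
    BULK 'C:\DATA\orders.dat',
    FORMATFILE = 'C:\DATA\orders.fmt',
    ROWS_PER_BATCH = 500000   -- roughly the number of rows expected in the file
) AS o;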
An optional hint that specifies how the data in the data file is sorted. By default, the bulk operation assumes the data file is unordered. Performance can improve if the query optimizer can exploit the order to generate a more efficient query plan. The following list provides examples for when specifying a sort can be beneficial:
- Inserting rows into a table that has a clustered index, where the rowset data is sorted on the clustered index key.
- Joining the rowset with another table, where the sort and join columns match.
- Aggregating the rowset data by the sort columns.
- Using the rowset as a source table in the FROM clause of a query, where the sort and join columns match.
Specifies that the data file doesn't have duplicate entries.

If the actual rows in the data file aren't sorted according to the order that is specified, or if the UNIQUE hint is specified and duplicate keys are present, an error is returned.

Column aliases are required when ORDER is used. The column alias list must reference the derived table that is being accessed by the BULK clause. The column names that are specified in the ORDER clause refer to this column alias list. Large value types (varchar(max), nvarchar(max), varbinary(max), and xml) and large object (LOB) types (text, ntext, and image) columns can't be specified.
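A hedged sketch (table, paths, and columns are hypothetical) that declares the file as already sorted and de-duplicated on OrderID:

INSERT INTO dbo.OrderStaging WITH (TABLOCK)
SELECT o.OrderID, o.OrderDate
FROM OPENROWSET(
    BULK 'C:\DATA\orders.dat',
    FORMATFILE = 'C:\DATA\orders.fmt',
    ORDER (OrderID ASC) UNIQUE    -- hint: file is sorted on OrderID with no duplicates
) AS o (OrderID, OrderDate);      -- column aliases are required when ORDER is used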
Returns the contents of data_file as a single-row, single-column rowset of type varbinary(max).
Important
We recommend that you import XML data only using the SINGLE_BLOB option, rather than SINGLE_CLOB and SINGLE_NCLOB, because only SINGLE_BLOB supports all Windows encoding conversions.
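A hedged sketch along those lines (the table and file path are hypothetical); the single column returned by SINGLE_BLOB is named BulkColumn:

INSERT INTO dbo.XmlDocs (DocId, DocContent)
SELECT 1, CAST(x.BulkColumn AS XML)   -- convert the binary file contents to xml
FROM OPENROWSET(
    BULK N'C:\DATA\invoice.xml',
    SINGLE_BLOB
) AS x;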
By reading data_file as ASCII, returns the contents as a single-row, single-column rowset of type varchar(max), using the collation of the current database.
By reading data_file as Unicode, returns the contents as a single-row, single-column rowset of type nvarchar(max), using the collation of the current database.
SELECT * FROM OPENROWSET(
BULK N'C:\Text1.txt',
SINGLE_NCLOB
) AS Document;
Specifies the code page of the data in the data file. CODEPAGE is relevant only if the data contains char, varchar, or text columns with character values greater than 127 or less than 32.
Important
CODEPAGE isn't a supported option on Linux.
Note
We recommend that you specify a collation name for each column in a format file, except when you want the 65001 option to have priority over the collation/code page specification.
CODEPAGE value | Description |
---|---|
ACP | Converts columns of char, varchar, or text data type from the ANSI/Microsoft Windows code page (ISO 1252) to the SQL Server code page. |
OEM (default) | Converts columns of char, varchar, or text data type from the system OEM code page to the SQL Server code page. |
RAW | No conversion occurs from one code page to another. This is the fastest option. |
code_page | Indicates the source code page on which the character data in the data file is encoded; for example, 850. Important: Versions before SQL Server 2016 (13.x) don't support code page 65001 (UTF-8 encoding). |
Beginning with SQL Server 2017 (14.x), this argument specifies a comma-separated values file compliant with the RFC 4180 standard.
Beginning with SQL Server 2022 (16.x), both Parquet and Delta formats are supported.
SELECT *
FROM OPENROWSET(BULK N'D:\XChange\test-csv.csv',
FORMATFILE = N'D:\XChange\test-csv.fmt',
FIRSTROW=2,
FORMAT='CSV') AS cars;
Specifies the full path of a format file. SQL Server supports two types of format files: XML and non-XML.
A format file is required to define column types in the result set. The only exception is when SINGLE_CLOB, SINGLE_BLOB, or SINGLE_NCLOB is specified; in which case, the format file isn't required.
For information about format files, see Use a format file to bulk import data (SQL Server).
Beginning with SQL Server 2017 (14.x), the format_file_path can be in Azure Blob Storage. For examples, see Examples of bulk access to data in Azure Blob Storage.
Beginning with SQL Server 2017 (14.x), this argument specifies a character that is used as the quote character in the CSV file. If not specified, the double quotation mark (") is used as the quote character, as defined in the RFC 4180 standard.
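A hedged sketch (reusing the hypothetical paths from the earlier CSV example) that states the quote character explicitly:

SELECT *
FROM OPENROWSET(
    BULK N'D:\XChange\test-csv.csv',
    FORMATFILE = N'D:\XChange\test-csv.fmt',
    FORMAT = 'CSV',
    FIELDQUOTE = '"',   -- the RFC 4180 default, written out here only for illustration
    FIRSTROW = 2
) AS cars;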
OPENROWSET can be used to access remote data from OLE DB data sources only when the DisallowAdhocAccess registry option is explicitly set to 0 for the specified provider, and the Ad Hoc Distributed Queries advanced configuration option is enabled. When these options aren't set, the default behavior doesn't allow for ad hoc access.
When you access remote OLE DB data sources, the login identity of trusted connections isn't automatically delegated from the server on which the client is connected to the server that is being queried. Authentication delegation must be configured.
Catalog and schema names are required if the OLE DB provider supports multiple catalogs and schemas in the specified data source. Values for catalog and schema can be omitted when the OLE DB provider doesn't support them. If the provider supports only schema names, a two-part name of the form schema.object must be specified. If the provider supports only catalog names, a three-part name of the form catalog.schema.object must be specified. Three-part names must be specified for pass-through queries that use the SQL Server Native Client OLE DB provider. For more information, see Transact-SQL syntax conventions.
OPENROWSET doesn't accept variables for its arguments.
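A common workaround, sketched here under the assumption that the file path arrives in a variable, is to build the statement dynamically and run it with sp_executesql:

DECLARE @file NVARCHAR(260) = N'C:\DATA\inv-2017-01-19.csv';  -- hypothetical path
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT * FROM OPENROWSET(BULK ''' + @file + N''', SINGLE_CLOB) AS DataFile;';
EXEC sys.sp_executesql @sql;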
Any call to OPENDATASOURCE, OPENQUERY, or OPENROWSET in the FROM clause is evaluated separately and independently from any call to these functions used as the target of the update, even if identical arguments are supplied to the two calls. In particular, filter or join conditions applied on the result of one of those calls have no effect on the results of the other.
The following Transact-SQL enhancements support the OPENROWSET(BULK...) function:

- A FROM clause that is used with SELECT can call OPENROWSET(BULK...) instead of a table name, with full SELECT functionality. OPENROWSET with the BULK option requires a correlation name, also known as a range variable or alias, in the FROM clause. Column aliases can be specified. If a column alias list isn't specified, the format file must have column names. Specifying column aliases overrides the column names in the format file, such as:
  - FROM OPENROWSET(BULK...) AS table_alias
  - FROM OPENROWSET(BULK...) AS table_alias(column_alias,...n)

  Important
  Failure to add the AS <table_alias> will result in the error: Msg 491, Level 16, State 1, Line 20 A correlation name must be specified for the bulk rowset in the from clause.

- A SELECT...FROM OPENROWSET(BULK...) statement queries the data in a file directly, without importing the data into a table. SELECT...FROM OPENROWSET(BULK...) statements can also list bulk-column aliases by using a format file to specify column names, and also data types.

- Using OPENROWSET(BULK...) as a source table in an INSERT or MERGE statement bulk imports data from a data file into a SQL Server table. For more information, see Use BULK INSERT or OPENROWSET(BULK...) to import data to SQL Server.

- When the OPENROWSET BULK option is used with an INSERT statement, the BULK clause supports table hints. In addition to the regular table hints, such as TABLOCK, the BULK clause can accept the following specialized table hints: IGNORE_CONSTRAINTS (ignores only the CHECK and FOREIGN KEY constraints), IGNORE_TRIGGERS, KEEPDEFAULTS, and KEEPIDENTITY. A sketch of these hints appears after this list. For more information, see Table Hints (Transact-SQL).

- For information about how to use INSERT...SELECT * FROM OPENROWSET(BULK...) statements, see Bulk Import and Export of Data (SQL Server). For information about when row-insert operations that are performed by bulk import are logged in the transaction log, see Prerequisites for minimal logging in bulk import.
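A hedged sketch of those specialized hints (the table, paths, and columns are hypothetical):

-- KEEPIDENTITY preserves identity values read from the file; IGNORE_TRIGGERS skips insert triggers.
INSERT INTO dbo.OrderStaging WITH (TABLOCK, IGNORE_TRIGGERS, KEEPIDENTITY)
SELECT o.OrderID, o.OrderDate
FROM OPENROWSET(
    BULK 'C:\DATA\orders.dat',
    FORMATFILE = 'C:\DATA\orders.fmt'
) AS o;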
Note
When you use OPENROWSET, it's important to understand how SQL Server handles impersonation. For information about security considerations, see Use BULK INSERT or OPENROWSET(BULK...) to import data to SQL Server.
OPENROWSET(BULK...) assumes that, if not specified, the maximum length of SQLCHAR, SQLNCHAR, or SQLBINARY data doesn't exceed 8,000 bytes. If the data being imported is in a LOB data field that contains any varchar(max), nvarchar(max), or varbinary(max) objects that exceed 8,000 bytes, you must use an XML format file that defines the maximum length for the data field. To specify the maximum length, edit the format file and declare the MAX_LENGTH attribute.
Note
An automatically generated format file doesn't specify the length or maximum length for a LOB field. However, you can edit a format file and specify the length or maximum length manually.
To bulk export or import SQLXML data, use one of the following data types in your format file.
Data type | Effect |
---|---|
SQLCHAR or SQLVARYCHAR | The data is sent in the client code page, or in the code page implied by the collation. |
SQLNCHAR or SQLNVARCHAR | The data is sent as Unicode. |
SQLBINARY or SQLVARYBIN | The data is sent without any conversion. |
OPENROWSET permissions are determined by the permissions of the user name that is being passed to the OLE DB provider. Using the BULK option requires ADMINISTER BULK OPERATIONS or ADMINISTER DATABASE BULK OPERATIONS permission.
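For example, a sketch of granting these permissions (the login and user names are hypothetical):

-- Server-level permission, for example on SQL Server
GRANT ADMINISTER BULK OPERATIONS TO [Domain\BulkLoader];

-- Database-level permission, for example on Azure SQL Database
GRANT ADMINISTER DATABASE BULK OPERATIONS TO [BulkLoaderUser];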
This section provides general examples to demonstrate how to use OPENROWSET.
Applies to: SQL Server only.
The SQL Server Native Client (often abbreviated SNAC) has been removed from SQL Server 2022 (16.x) and SQL Server Management Studio 19 (SSMS). Both the SQL Server Native Client OLE DB provider (SQLNCLI or SQLNCLI11) and the legacy Microsoft OLE DB Provider for SQL Server (SQLOLEDB) are not recommended for new development. Switch to the new Microsoft OLE DB Driver (MSOLEDBSQL) for SQL Server going forward.
The following example uses the SQL Server Native Client OLE DB provider to access the HumanResources.Department table in the AdventureWorks2022 database on the remote server Seattle1. (Use SQLNCLI and SQL Server will redirect to the latest version of SQL Server Native Client OLE DB Provider.) A SELECT statement is used to define the row set returned. The provider string contains the Server and Trusted_Connection keywords. These keywords are recognized by the SQL Server Native Client OLE DB provider.
SELECT a.*
FROM OPENROWSET(
'SQLNCLI', 'Server=Seattle1;Trusted_Connection=yes;',
'SELECT GroupName, Name, DepartmentID
FROM AdventureWorks2022.HumanResources.Department
ORDER BY GroupName, Name'
) AS a;
Applies to: SQL Server only.
The following example accesses the Customers table in the Microsoft Access Northwind database through the Microsoft OLE DB Provider for Jet.
Note
This example assumes that Microsoft Access is installed. To run this example, you must install the Northwind database.
SELECT CustomerID, CompanyName
FROM OPENROWSET(
'Microsoft.Jet.OLEDB.4.0',
'C:\Program Files\Microsoft Office\OFFICE11\SAMPLES\Northwind.mdb';
'admin';'',
Customers
);
Important
Azure SQL Database only supports reading from Azure Blob Storage.
Applies to: SQL Server only.
The following example selects all data from the Customers table in the Northwind database on the local instance of SQL Server, and from the Orders table in the Access Northwind database stored on the same computer.
Note
This example assumes that Access is installed. To run this example, you must install the Northwind database.
USE Northwind;
GO
SELECT c.*, o.*
FROM Northwind.dbo.Customers AS c
INNER JOIN OPENROWSET(
'Microsoft.Jet.OLEDB.4.0',
'C:\Program Files\Microsoft Office\OFFICE11\SAMPLES\Northwind.mdb';'admin';'',
Orders) AS o
ON c.CustomerID = o.CustomerID;
Important
Azure SQL Database only supports reading from Azure Blob Storage.
Applies to: SQL Server only.
The following example creates a small table for demonstration purposes, and inserts file data from a file named Text1.txt located in the C: root directory into a varbinary(max) column.
CREATE TABLE myTable (
FileName NVARCHAR(60),
FileType NVARCHAR(60),
Document VARBINARY(MAX)
);
GO
INSERT INTO myTable (
FileName,
FileType,
Document
)
SELECT 'Text1.txt' AS FileName,
'.txt' AS FileType,
*
FROM OPENROWSET(
BULK N'C:\Text1.txt',
SINGLE_BLOB
) AS Document;
GO
Important
Azure SQL Database only supports reading from Azure Blob Storage.
Applies to: SQL Server only.
The following example uses a format file to retrieve rows from a tab-delimited text file, values.txt, that contains the following data:
1 Data Item 1
2 Data Item 2
3 Data Item 3
The format file, values.fmt, describes the columns in values.txt:
9.0
2
1 SQLCHAR 0 10 "\t" 1 ID SQL_Latin1_General_Cp437_BIN
2 SQLCHAR 0 40 "\r\n" 2 Description SQL_Latin1_General_Cp437_BIN
This query retrieves that data:
SELECT a.* FROM OPENROWSET(
BULK 'C:\test\values.txt',
FORMATFILE = 'C:\test\values.fmt'
) AS a;
Important
Azure SQL Database only supports reading from Azure Blob Storage.
Applies to: SQL Server only.
The following example shows how to use both the format file and code page options at the same time.
INSERT INTO MyTable
SELECT a.* FROM OPENROWSET (
BULK N'D:\data.csv',
FORMATFILE = 'D:\format_no_collation.txt',
CODEPAGE = '65001'
) AS a;
Applies to: SQL Server 2017 (14.x) and later versions only.
SELECT * FROM OPENROWSET(
BULK N'D:\XChange\test-csv.csv',
FORMATFILE = N'D:\XChange\test-csv.fmt',
FIRSTROW = 2,
FORMAT = 'CSV'
) AS cars;
Important
Azure SQL Database only supports reading from Azure Blob Storage.
Applies to: SQL Server only.
SELECT * FROM OPENROWSET(
BULK 'C:\Program Files\Microsoft SQL Server\MSSQL14.CTP1_1\MSSQL\DATA\inv-2017-01-19.csv',
SINGLE_CLOB
) AS DATA;
The following example uses the MSDASQL provider and the Microsoft Access Text Driver to read a CSV file directly, without a format file:

SELECT *
FROM OPENROWSET('MSDASQL',
'Driver={Microsoft Access Text Driver (*.txt, *.csv)}',
'SELECT * FROM E:\Tlog\TerritoryData.csv'
);
Important
The ODBC driver should be 64-bit. Open the Drivers tab of the Connect to an ODBC Data Source (SQL Server Import and Export Wizard) application in Windows to verify this. The 32-bit Microsoft Text Driver (*.txt, *.csv) won't work with a 64-bit version of sqlservr.exe.
Applies to: SQL Server 2017 (14.x) and later versions only.
In SQL Server 2017 (14.x) and later versions, the following example uses an external data source that points to a container in an Azure storage account and a database scoped credential created for a shared access signature.
SELECT * FROM OPENROWSET(
BULK 'inv-2017-01-19.csv',
DATA_SOURCE = 'MyAzureInvoices',
SINGLE_CLOB
) AS DataFile;
For complete OPENROWSET examples, including configuring the credential and external data source, see Examples of bulk access to data in Azure Blob Storage.
The following example shows how to use the OPENROWSET command to load data from a CSV file in an Azure Blob Storage location on which you created a SAS key. The Azure Blob Storage location is configured as an external data source. This requires a database scoped credential using a shared access signature that is encrypted using a master key in the user database.
-- Optional: a MASTER KEY is not required if a DATABASE SCOPED CREDENTIAL is not required because the blob is configured for public (anonymous) access!
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourStrongPassword1';
GO
-- Optional: a DATABASE SCOPED CREDENTIAL is not required because the blob is configured for public (anonymous) access!
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '******srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z***************';
-- Make sure that you don't have a leading ? in the SAS token, and that you
-- have at least read permission on the object that should be loaded srt=o&sp=r,
-- and that expiration period is valid (all dates are in UTC time)
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://****************.blob.core.windows.net/curriculum',
-- CREDENTIAL is not required if a blob is configured for public (anonymous) access!
CREDENTIAL = MyAzureBlobStorageCredential
);
INSERT INTO achievements
WITH (TABLOCK) (
id,
description
)
SELECT * FROM OPENROWSET(
BULK 'csv/achievements.csv',
DATA_SOURCE = 'MyAzureBlobStorage',
FORMAT = 'CSV',
FORMATFILE = 'csv/achievements-c.xml',
FORMATFILE_DATA_SOURCE = 'MyAzureBlobStorage'
) AS DataFile;
Important
Azure SQL Database only supports reading from Azure Blob Storage.
The following example creates a credential by using a managed identity, creates an external data source, and then loads data from a CSV file hosted on the external source.
First, create the credential and specify blob storage as the external source:
CREATE DATABASE SCOPED CREDENTIAL sampletestcred
WITH IDENTITY = 'MANAGED IDENTITY';
CREATE EXTERNAL DATA SOURCE SampleSource
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://****************.blob.core.windows.net/curriculum',
CREDENTIAL = sampletestcred
);
Next, load data from the CSV file hosted on blob storage:
SELECT * FROM OPENROWSET(
BULK 'Test - Copy.csv',
DATA_SOURCE = 'SampleSource',
SINGLE_CLOB
) as test;
Important
Azure SQL Database only supports reading from Azure Blob Storage.
Applies to: SQL Server 2022 (16.x) and later versions.
The following example accesses several Parquet files from different locations, all stored on S3-compatible object storage:
CREATE DATABASE SCOPED CREDENTIAL s3_dsc
WITH IDENTITY = 'S3 Access Key',
SECRET = 'contosoadmin:contosopwd';
GO
CREATE EXTERNAL DATA SOURCE s3_eds
WITH
(
LOCATION = 's3://10.199.40.235:9000/movies',
CREDENTIAL = s3_dsc
);
GO
SELECT * FROM OPENROWSET(
BULK (
'/decades/1950s/*.parquet',
'/decades/1960s/*.parquet',
'/decades/1970s/*.parquet'
),
FORMAT = 'PARQUET',
DATA_SOURCE = 's3_eds'
) AS data;
Applies to: SQL Server 2022 (16.x) and later versions.
In this example, the data table container is named Contoso, and is located on an Azure Data Lake Gen2 storage account.
CREATE DATABASE SCOPED CREDENTIAL delta_storage_dsc
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS Token>';
CREATE EXTERNAL DATA SOURCE Delta_ED
WITH (
LOCATION = 'adls://<container>@<storage_account>.dfs.core.windows.net',
CREDENTIAL = delta_storage_dsc
);
SELECT *
FROM OPENROWSET(
BULK '/Contoso',
FORMAT = 'DELTA',
DATA_SOURCE = 'Delta_ED'
) AS result;
For more examples that show using INSERT...SELECT * FROM OPENROWSET(BULK...), see the following articles:
- Examples of bulk import and export of XML documents (SQL Server)
- Keep identity values when bulk importing data (SQL Server)
- Keep nulls or default values during bulk import (SQL Server)
- Use a format file to bulk import data (SQL Server)
- Use character format to import or export data (SQL Server)
- Use a Format File to Skip a Table Column (SQL Server)
- Use a format file to skip a data field (SQL Server)
- Use a format file to map table columns to data-file fields (SQL Server)
- Query data sources using OPENROWSET in Azure SQL Managed Instances
- DELETE (Transact-SQL)
- FROM clause plus JOIN, APPLY, PIVOT (Transact-SQL)
- Bulk Import and Export of Data (SQL Server)
- INSERT (Transact-SQL)
- OPENDATASOURCE (Transact-SQL)
- OPENQUERY (Transact-SQL)
- SELECT (Transact-SQL)
- sp_addlinkedserver (Transact-SQL)
- sp_serveroption (Transact-SQL)
- UPDATE (Transact-SQL)
- WHERE (Transact-SQL)