Create a Data Factory Pipeline with Hive Activity


This sample creates a data factory with a data pipeline that processes data by running a Hive script on an Azure HDInsight (Hadoop) cluster.


  1. Complete the prerequisites mentioned in the Overview and prerequisites article.
  2. Update the values for the following parameters in the azuredeploy.parameters.json file:
    • storageAccountResourceGroupName: the name of the resource group that contains your Azure storage account.
    • storageAccountName: the name of your Azure Storage account.
    • storageAccountKey: the key of your Azure Storage account.
  3. For the sample to work as is, keep the following values:
    • blobContainer: the name of the blob container. For the sample, it is adfgetstarted.
    • inputBlobFolder: the name of the blob folder that contains the input files. For the sample, it is inputdata.
    • inputBlobName: the name of the input blob (file). For the sample, it is input.log.
    • outputBlobFolder: the name of the blob folder that will contain the output files. For the sample, it is partitioneddata.
    • hiveScriptFolder: the name of the folder that contains the Hive query (HQL) file. For the sample, it is script.
    • hiveScriptFile: the name of the Hive script (HQL) file. For the sample, it is partitionweblogs.hql.
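As a rough sketch of the file described above (placeholder values shown in angle brackets; check the repository for the file's exact contents), azuredeploy.parameters.json looks something like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountResourceGroupName": { "value": "<your-resource-group>" },
    "storageAccountName": { "value": "<your-storage-account>" },
    "storageAccountKey": { "value": "<your-storage-account-key>" },
    "blobContainer": { "value": "adfgetstarted" },
    "inputBlobFolder": { "value": "inputdata" },
    "inputBlobName": { "value": "input.log" },
    "outputBlobFolder": { "value": "partitioneddata" },
    "hiveScriptFolder": { "value": "script" },
    "hiveScriptFile": { "value": "partitionweblogs.hql" }
  }
}
```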

Deploy To Azure Deploy To Azure US Gov Visualize

When you deploy this Azure Resource Manager template, a data factory is created with the following entities:

  • Azure Storage linked service
  • Azure HDInsight linked service (on-demand)
  • Azure Blob input dataset
  • Azure Blob output dataset
  • Pipeline with a Hive activity
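As an illustrative sketch only (the entity names here are hypothetical; the actual definitions live in the template's azuredeploy.json), a Data Factory v1 pipeline declares a Hive activity roughly like this:

```json
{
  "name": "SamplePipeline",
  "properties": {
    "activities": [
      {
        "name": "RunHiveScript",
        "type": "HDInsightHive",
        "inputs": [ { "name": "AzureBlobInput" } ],
        "outputs": [ { "name": "AzureBlobOutput" } ],
        "linkedServiceName": "HDInsightOnDemandLinkedService",
        "typeProperties": {
          "scriptPath": "adfgetstarted/script/partitionweblogs.hql",
          "scriptLinkedService": "AzureStorageLinkedService"
        }
      }
    ]
  }
}
```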

In this sample, the inputdata folder of the adfgetstarted Azure blob container contains one file named input.log. This log file has entries from three months: January, February, and March of 2016, with sample rows for each month.


When the pipeline's HDInsight Hive activity processes the file, it runs a Hive script on the HDInsight cluster that partitions the input data by year and month. The script creates three output folders, each containing a file with that month's entries.


Of the sample rows, the one dated 2016-01-01 is written to the 000000_0 file in the month=1 folder. Similarly, the February row is written to the file in the month=2 folder, and the March row to the file in the month=3 folder.
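For illustration only, here is a minimal Python sketch of the year/month partitioning the Hive script performs. The real work happens in partitionweblogs.hql on the HDInsight cluster; the comma-delimited, leading-ISO-date row format used below is an assumption for the sketch:

```python
from collections import defaultdict

def partition_by_year_month(lines):
    """Group log lines into year=YYYY/month=M buckets, mimicking the
    Hive script's output folder layout. Assumes each line starts with
    an ISO date field, e.g. '2016-01-01,...' (hypothetical format)."""
    buckets = defaultdict(list)
    for line in lines:
        date = line.split(",", 1)[0]          # e.g. '2016-01-01'
        year, month, _ = date.split("-")
        buckets[f"year={int(year)}/month={int(month)}"].append(line)
    return dict(buckets)

# Hypothetical rows mirroring the three months in input.log
rows = [
    "2016-01-01,alpha",
    "2016-02-01,beta",
    "2016-03-01,gamma",
]
print(sorted(partition_by_year_month(rows)))
# → ['year=2016/month=1', 'year=2016/month=2', 'year=2016/month=3']
```

Each bucket key corresponds to one output folder; in the deployed sample, the entries for each month land in a single file (such as 000000_0) under the matching folder of partitioneddata.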

For more information, see the Overview and prerequisites article.

For a detailed walkthrough with step-by-step instructions, see the Tutorial: Create a pipeline using a Resource Manager template article.

Deploying the sample

You can deploy this sample directly through the Azure Portal or by using the scripts supplied in the root of the repository.

To deploy the sample using the Azure Portal, click the Deploy to Azure button at the top of the article.

To deploy the sample from the command line (using Azure PowerShell or the Azure CLI), you can use the scripts.

Simply execute the script and pass in the folder name of the sample. For example:

```powershell
.\Deploy-AzureResourceGroup.ps1 -ResourceGroupLocation 'eastus' -ArtifactStagingDirectory 101-data-factory-hive-transformation -a 101-data-factory-hive-transformation -l eastus -u
```

`Tags: Microsoft.DataFactory/datafactories, linkedservices, AzureStorage, HDInsightOnDemand, datasets, AzureBlob, TextFormat, datapipelines, HDInsightHive`