How to load data from csv to Hive database via notebook
Hi @ShambhuRai-4099, although this is a bit of an older thread, it still serves the purpose:
from pyspark import SparkContext
sc = SparkContext("local", "Simple App")
from pyspark.sql import SQLContext, Row
sqlContext = SQLContext(sc)
from pyspark.sql import HiveContext
sqlContext = HiveContext(sc)
df = sqlContext.read.format("jdbc") \
    .option("url", "jdbc:sqlserver://<server>:<port>") \
    .option("databaseName", "xxx") \
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver") \
    .option("dbtable", "xxxx") \
    .option("user", "xxxxx") \
    .option("password", "xxxxx") \
    .load()
df1 = sqlContext.sql("select * from test where xxx = 6")
df1.write.format("com.databricks.spark.csv").save("/xxxx/xxx/ami_saidulu")
df1.write.option("path", "/xxxx/xxx/ami_saidulu").saveAsTable("HIVE_DB.HIVE_TBL", format='csv', mode='Append')
I cannot understand where we connect to the CSV table, or how to map column to column.
When you run the above code snippet, do you get any error messages?
Meanwhile, you may check out the article PySpark: Dataframe Write Modes, which explains the different options available when writing a DataFrame.
Can someone explain, or show some code that elaborates the source and target steps with an example? For instance, the source is c:\test\ and the target is a Hive database, using the above code in a notebook.
Hi Expert, my question about the above code is: where do I mention the CSV path and the Hive path? Also, can I write this as one SQL command in the notebook, or do I have to split it?
df1 is the DataFrame you are writing out as CSV. You are reading from a JDBC URL and writing the result to a CSV file.