50 million records in Databricks using JSON files

Birajdar, Sujata 61 Reputation points
2021-11-10T06:21:10.953+00:00

Hi All,

I need to infer the schema of JSON files. I am reading 10 JSON files containing close to 50 million records in total.

Will Databricks support 50 million records using PySpark?

What should we consider for good performance?

Thanks & Regards,
Sujata

Azure Databricks
An Apache Spark-based analytics platform optimized for Azure.

Accepted answer
  1. PRADEEPCHEEKATLA-MSFT 76,921 Reputation points Microsoft Employee
    2021-11-10T16:12:14.547+00:00

    Hello @Birajdar, Sujata ,

    Welcome to the Microsoft Q&A platform.

    Yes, Azure Databricks supports 50 million records using PySpark.
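
    As a minimal sketch of the read itself (the path, and reading all ten files with one glob pattern, are assumptions about your layout):

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("json-ingest").getOrCreate()

    # Read all 10 files in one call; a glob pattern covers them.
    # With no schema supplied, Spark infers one by sampling the data,
    # which costs an extra scan over the files.
    df = spark.read.json("/mnt/data/input/*.json")  # path is an assumption

    df.printSchema()  # inspect the inferred schema before going further
    ```

    If each file is a single JSON array rather than one JSON object per line, add `.option("multiLine", "true")` before `.json(...)`.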

    For more details, you may check out the articles below, which describe how to optimize for good performance:
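
    One of the biggest wins at this scale is skipping schema inference by supplying an explicit schema, so Spark reads the files in a single pass. The field names below are assumptions for illustration; replace them with the actual fields of your records:

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql.types import StructType, StructField, StringType, LongType

    spark = SparkSession.builder.appName("json-ingest").getOrCreate()

    # Assumed schema -- substitute the real fields of your JSON records.
    schema = StructType([
        StructField("id", LongType(), True),
        StructField("name", StringType(), True),
        StructField("event_ts", StringType(), True),
    ])

    # Explicit schema: no inference scan, one read over the 10 files.
    df = spark.read.schema(schema).json("/mnt/data/input/*.json")

    # For repeated queries over 50M rows, converting once to a columnar
    # format (Parquet/Delta) is usually much faster than re-reading JSON.
    df.write.mode("overwrite").parquet("/mnt/data/bronze/events")
    ```

    Run the inference once on a small sample to derive the schema, then hard-code it for the full load.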

    Hope this will help. Please let us know if you have any further queries.

    ------------------------------

    • Please don't forget to click on the Accept Answer or upvote button whenever the information provided helps you. Original posters help the community find answers faster by identifying the correct answer. Here is how
    • Want a reminder to come back and check responses? Here is how to subscribe to a notification
    • If you are interested in joining the VM program and help shape the future of Q&A: Here is how you can be part of Q&A Volunteer Moderators

0 additional answers