@Anonymous Thanks for using the Microsoft Q&A forum and posting your query.
In order to split large files, you will have to use a mapping data flow. Even if you have a complex JSON structure with nested arrays, you can flatten the JSON first and then create partitioned files in your sink, based on the partition settings in the sink transformation's Optimize tab.
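If it helps to see the shape of such a flow, here is a rough data flow script sketch of source -> flatten -> partitioned sink. The schema, the column names (customerName, orders, etc.), and the partition count of 20 are placeholder assumptions for illustration, not your actual data:

```
// Hypothetical source: a large JSON file with a nested "orders" array
source(output(
        customerName as string,
        orders as (id as string, amount as double)[]
    ),
    allowSchemaDrift: true,
    validateSchema: false) ~> jsonSource

// Flatten transformation: unroll the nested array so each order becomes its own row
jsonSource foldDown(unroll(orders),
    mapColumn(
        customerName,
        orderId = orders.id,
        orderAmount = orders.amount
    ),
    skipDuplicateMapInputs: false,
    skipDuplicateMapOutputs: false) ~> flattenOrders

// Sink with round-robin partitioning (set on the Optimize tab):
// 20 partitions -> roughly 20 smaller output files of similar size
flattenOrders sink(allowSchemaDrift: true,
    validateSchema: false,
    partitionBy('roundRobin', 20)) ~> partitionedSink
```

You can tune the partition count on the sink's Optimize tab to control how many output files get written.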
For a detailed walkthrough, please refer to this video by a community contributor: Azure Data Factory - Split/Partition big file to smaller ones using Mapping data flow
Please note that the sample in the video uses CSV/TXT data, but you can follow a similar approach to achieve the same result with JSON.
Here is another resource regarding the same requirement: Split a json file based on number of records in ADF
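The approach in that post essentially numbers each row and then buckets the rows so that every bucket becomes one output file. As a rough sketch continuing from the flatten step above (the 1000 records-per-file value, the column names, and the exact key-partition script syntax are assumptions on my side, so please verify against what the UI generates):

```
// Surrogate Key transformation: assign an incrementing row number to every record
flattenOrders keyGenerate(output(rowNum as long),
    startAt: 1L) ~> addRowNumber

// Derived column: bucket rows into groups of 1000 (assumed records per output file)
addRowNumber derive(fileGroup = toString(floor((rowNum - 1) / 1000))) ~> assignFileGroup

// Key partitioning on fileGroup: one partition (and hence one file) per distinct bucket value
assignFileGroup sink(allowSchemaDrift: true,
    validateSchema: false,
    partitionBy('key', 0, fileGroup)) ~> sinkByRecordCount
```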
Hope this info helps. Please don't forget to click Accept Answer and Yes for "was this answer helpful" wherever the information provided helps you, as this can be beneficial to other community members.