@Christophe Laudou, welcome to the Microsoft Q&A platform.
There are different sources of data, and correspondingly different ways in which that data can be ingested into a Data Lake Storage Gen2 account.
Data stored in on-premises or IaaS Hadoop clusters:
Large amounts of data may be stored in existing Hadoop clusters, locally on machines using HDFS. The Hadoop clusters may be in an on-premises deployment or within an IaaS cluster on Azure. You may need to copy such data to Azure Data Lake Storage Gen2, either as a one-off migration or on a recurring basis. There are several options you can use to achieve this. Below is a list of alternatives and the associated trade-offs.
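As a rough sketch of one common option, a one-off copy from an existing HDFS cluster into Data Lake Storage Gen2 can be done with Hadoop DistCp over the ABFS driver. The storage account name, container name, access key, and paths below are all placeholders you would replace with your own values:

```shell
# Sketch: one-off copy from HDFS to ADLS Gen2 using Hadoop DistCp.
# <storage-account>, <container>, <access-key>, and the paths are placeholders.
hadoop distcp \
  -D fs.azure.account.key.<storage-account>.dfs.core.windows.net=<access-key> \
  hdfs://namenode:8020/source/path \
  abfss://<container>@<storage-account>.dfs.core.windows.net/target/path
```

For a recurring copy, the same command can be scheduled (for example via Oozie or a cron job), or you can use Azure Data Factory instead; passing the account key inline is shown only for brevity, and a more secure credential configuration is recommended in practice.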
Really large datasets:
For uploading datasets that range into several terabytes, the methods described above can sometimes be slow and costly. In such cases, you can use Azure ExpressRoute.
Azure ExpressRoute lets you create private connections between Azure data centers and infrastructure on your premises. This provides a reliable option for transferring large amounts of data. To learn more, see Azure ExpressRoute documentation.
Reference: Migrate data from on-premises Hadoop to Azure Storage and Using Azure Data Lake Storage Gen2 for big data requirements
Hope this helps. Do let us know if you have any further queries.
----------------------------------------------------------------------------------------
Do click on "Accept Answer" and upvote the post that helps you, as this can be beneficial to other community members.