Azure Table Storage is generally not designed for bulk operations; it is a better fit for fast access to small amounts of structured data. How well it suits your workload therefore varies from case to case compared with file-based storage (like CSV or Parquet) or relational databases (like PostgreSQL).
You mentioned accumulating 100 rows and then performing a batch operation. This is generally a good approach, as batch operations in Azure Table Storage can be more efficient. However, a single transaction can include at most 100 entities, the total payload must not exceed 4 MB, and all entities in one transaction must share the same PartitionKey. So it's worth double-checking your implementation against those limits.
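A minimal sketch of that batching logic, assuming the `azure-data-tables` SDK. The chunking is plain Python; the `rows` and `conn_str` names and the upsert-only transaction are assumptions for illustration, and the actual upload call is left as a commented usage note so the sketch stays self-contained:

```python
from itertools import islice

MAX_BATCH_ENTITIES = 100  # Table Storage transaction limit per batch

def chunked(rows, size=MAX_BATCH_ENTITIES):
    """Yield successive chunks of at most `size` rows."""
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Usage sketch (assumes azure-data-tables is installed and every row in a
# chunk shares the same PartitionKey, as Table Storage transactions require):
#
# from azure.data.tables import TableClient
# client = TableClient.from_connection_string(conn_str, table_name="mytable")
# for batch in chunked(rows):
#     client.submit_transaction([("upsert", entity) for entity in batch])
```

Keeping the chunking separate from the upload also makes it easy to unit-test the 100-entity limit without touching the network.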
Another detail you mentioned is the level of parallelism (n_multiproc) you use. Don't forget that overhead can sometimes kill the benefit of parallel processing, especially if the number of processes exceeds the number of available CPU cores.
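One simple guard is to cap the worker count at the core count before creating the pool. A sketch, where `upload_batch` and `batches` are hypothetical names standing in for your per-batch worker and your chunked data:

```python
import os

def effective_workers(n_multiproc: int) -> int:
    """Cap the requested worker count at the number of available cores;
    extra processes only add scheduling and pickling overhead."""
    return max(1, min(n_multiproc, os.cpu_count() or 1))

# Usage sketch:
#
# from concurrent.futures import ProcessPoolExecutor
# with ProcessPoolExecutor(max_workers=effective_workers(n_multiproc)) as pool:
#     pool.map(upload_batch, batches)
```

Note that for I/O-bound uploads like this, threads (or async) may beat processes entirely, since the bottleneck is the network rather than the CPU.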
This is my analysis; based on it, you may want to evaluate other Azure services as alternatives.