Yes, the Lookup activity has a hard limit of 5,000 rows.
I would suggest creating a Databricks notebook, mounting Azure Storage on it, and then counting the rows in the parquet file with PySpark.
Spark distributes the count across the cluster, so this will work even if the number of records is in the billions.
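Here is a minimal sketch of what that notebook could look like. The storage account name (`mystorageaccount`), container (`mycontainer`), parquet path, and secret scope are all placeholders; substitute your own values:

```python
# Mount the blob container (run once per workspace).
# "mystorageaccount", "mycontainer", "my-scope", and the parquet path
# below are placeholders -- replace them with your own names.
dbutils.fs.mount(
    source="wasbs://mycontainer@mystorageaccount.blob.core.windows.net",
    mount_point="/mnt/mycontainer",
    extra_configs={
        "fs.azure.account.key.mystorageaccount.blob.core.windows.net":
            dbutils.secrets.get(scope="my-scope", key="storage-account-key")
    },
)

# Read the parquet file and count the rows; the count runs as a
# distributed Spark job, so it scales to very large files.
df = spark.read.parquet("/mnt/mycontainer/data/output.parquet")
print(df.count())
```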
This article walks through connecting Databricks to Azure Storage: https://www.sqlshack.com/accessing-azure-blob-storage-from-azure-databricks/
Thanks!