Hi @Dinesh Prajapati ,
I'm not sure whether you're using an Azure Logic App or an App Service derivative (e.g., a WebJob or Function App), but none of these choices affects your original concern.
Having 15 files isn't an issue, because you can simply union all the datasets together and aggregate the sources. The real issue is that, as you say, the columns are all different: to remove a duplicate, the columns have to match. I think a Logic App will be the easiest platform for aggregating your data.
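As a rough illustration of the union step, here's a minimal Python sketch that builds the superset of headers across files so the rows line up before you deduplicate. The `exports/*.csv` path and the pad-with-empty-string choice are just placeholder assumptions:

```python
import csv
import glob

def union_csv_files(pattern: str) -> tuple[list[str], list[dict]]:
    """Union rows from CSVs whose headers differ, padding missing columns."""
    all_columns: list[str] = []
    all_rows: list[dict] = []
    for path in glob.glob(pattern):            # e.g. "exports/*.csv" -- hypothetical location
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            for col in reader.fieldnames or []:
                if col not in all_columns:     # build the superset of all headers seen
                    all_columns.append(col)
            all_rows.extend(reader)            # each row becomes a dict keyed by header
    # Pad every row to the full column set; absent values become "".
    padded = [{col: row.get(col) or "" for col in all_columns} for row in all_rows]
    return all_columns, padded
```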
Read all your files in and use the Parse JSON action (under Data Operations) to convert your CSV rows into objects. Composing the rows into objects gives you more flexibility to determine which rows are duplicates. You can do the same thing with custom code: read in each file, deserialize the rows into objects, and compare the objects through a custom hash to determine which are the same.
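If you go the custom-code route, the hash comparison might look something like this minimal Python sketch. The `data/*.csv` path and the `Id`/`Email` key columns are hypothetical; substitute whichever columns actually define a duplicate in your data:

```python
import csv
import glob
import hashlib
import json

def row_hash(row: dict, key_columns: list[str]) -> str:
    """Hash only the columns that identify a duplicate, so files with
    extra or missing columns can still be compared against each other."""
    subset = {col: (row.get(col) or "").strip().lower() for col in key_columns}
    return hashlib.sha256(json.dumps(subset, sort_keys=True).encode()).hexdigest()

def dedupe_csv_files(pattern: str, key_columns: list[str]) -> list[dict]:
    seen: set[str] = set()
    unique_rows: list[dict] = []
    for path in glob.glob(pattern):            # e.g. "data/*.csv" -- hypothetical location
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):      # each CSV row becomes an object (dict)
                h = row_hash(row, key_columns)
                if h not in seen:              # first time we've seen this identity
                    seen.add(h)
                    unique_rows.append(row)
    return unique_rows

if __name__ == "__main__":
    # "Id" and "Email" are placeholder names -- use your real key columns.
    rows = dedupe_csv_files("data/*.csv", key_columns=["Id", "Email"])
    print(f"{len(rows)} unique rows")
```

Hashing only a chosen subset of columns is what lets files with different schemas participate in the same duplicate check: two rows match if their identifying columns match, regardless of the extra columns each file happens to carry.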