I am getting Negsignal.SIGKILL in Airflow

Chirag Lalwani 20 Reputation points
2024-11-21T11:16:44.95+00:00

I have multiple integrations running, but one of them extracts a zip file of 2-3 GB and uploads each extracted file to our storage, and that integration fails with Negsignal.SIGKILL. It was running fine a month ago, but now it hits this error, and it always fails on the same file.

I am using Airflow for Azure Data Factory. When I rerun it, it usually succeeds. It seems to be a resource issue, but I don't know how many resources Airflow is using or how to increase them.

Is there a way to increase resources for a DAG, or would adding multiple workers solve the issue? And if I add multiple workers, do I also have to configure concurrency?

The DAG code is simple; we are using PythonOperator.
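
For reference, the task is roughly shaped like this. A simplified sketch, not our exact code: the zip path, the dag_id, and the upload_to_storage helper are placeholders for our real storage client.

```python
from datetime import datetime
from zipfile import ZipFile

from airflow import DAG
from airflow.operators.python import PythonOperator

ZIP_PATH = "/tmp/source.zip"  # placeholder for the real 2-3 GB archive


def upload_to_storage(name: str, data: bytes) -> None:
    """Placeholder for our real storage upload client."""
    ...


def extract_and_upload():
    # Walk the archive and upload each member; reading a whole member
    # into memory at once is what makes large archives expensive.
    with ZipFile(ZIP_PATH) as archive:
        for name in archive.namelist():
            with archive.open(name) as member:
                upload_to_storage(name, member.read())


with DAG(
    dag_id="zip_extract_upload",   # hypothetical id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_upload",
        python_callable=extract_and_upload,
    )
```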


Accepted answer
  1. Chandra Boorla 4,445 Reputation points Microsoft Vendor
    2024-11-27T07:58:07.7233333+00:00

    @Chirag Lalwani

    I'm glad that you were able to resolve your issue, and thank you for posting your solution so that others experiencing the same thing can easily reference it! Since the Microsoft Q&A community has a policy that "The question author cannot accept their own answer. They can only accept answers by others," I'll repost your solution in case you'd like to accept the answer.

    Issue:

    I have multiple integrations running, but one of them extracts a zip file of 2-3 GB and uploads each extracted file to our storage, and that integration fails with Negsignal.SIGKILL. It was running fine a month ago, but now it hits this error, and it always fails on the same file.

    I am using Airflow for Azure Data Factory. When I rerun it, it usually succeeds. It seems to be a resource issue, but I don't know how many resources Airflow is using or how to increase them.

    Is there a way to increase resources for a DAG, or would adding multiple workers solve the issue? And if I add multiple workers, do I also have to configure concurrency?

    The DAG code is simple; we are using PythonOperator.

    Solution:

    I have solved the issue for now by changing the time at which my integration runs, and it seems to be working fine. There may be heavier resource usage at the original run time, which caused the integration to fail.
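
    For reference, moving the run to a quieter window is just a schedule change on the DAG. A minimal sketch, assuming the hypothetical dag_id from the question and a placeholder cron expression for the off-peak slot:

    ```python
    from datetime import datetime

    from airflow import DAG

    with DAG(
        dag_id="zip_extract_upload",     # hypothetical id, as above
        start_date=datetime(2024, 1, 1),
        schedule_interval="30 2 * * *",  # placeholder: 02:30 UTC, an off-peak slot
        catchup=False,
    ) as dag:
        ...                              # same PythonOperator task as before
    ```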

    If I missed anything please let me know and I'd be happy to add it to my answer, or feel free to comment below with any additional information.

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And, if you have any further query, do let us know.

    1 person found this answer helpful.
