ADO - A scheduled pipeline is not running for a specific Bitbucket repository

Sebastian Bargas 40 Reputation points
2026-03-20T18:08:37.2233333+00:00

We have a pipeline connected to a Bitbucket repository. If we configure the continuous integration trigger, it runs successfully. However, the scheduled pipelines do not work.

I have done a lot of research and tried many solutions, but nothing has worked for me.

I set up a scheduled pipeline in the YAML file that should run every 15 minutes:

schedules:
 - cron: '*/15 * * * *'
   displayName: Test schedule
   branches:
     include:
     - main
   always: true

I have checked that there are no scheduled-trigger settings in the UI that could override the YAML schedule.


The scheduled runs are displayed correctly in the "Scheduled runs" view, but they never actually run.


The strangest thing is that this happens only in this one repository. The only difference I can see compared to our other repositories is that it has a huge number of branches, something like 10,000. Could that be affecting it in some way?

I’d really appreciate some help.

Azure DevOps

Answer accepted by question author
  1. Pravallika KV 13,305 Reputation points Microsoft External Staff Moderator
    2026-03-20T23:20:33.8866667+00:00

    Hi @Sebastian Bargas ,

    Based on how Bitbucket scheduling works under the covers, my hunch is that Azure DevOps is trying to fetch your YAML from the repo each time a run is due, but it is hitting a throttling/time-out or too-many-refs condition because you have ~10,000 branches. When that happens, you don’t get a normal build; instead you get an “informational run” that is immediately canceled because the service can’t retrieve the files it needs to decide whether there’s an update. By default those canceled runs are filtered out of your pipeline history, so it looks like nothing happened at all.

    Here’s what you can try:

    1. Look for informational runs
      • Go to your pipeline’s Runs page and turn on the “Canceled” filter.
      • You may see a bunch of tiny (<1 second) canceled runs whose message reads something like “Could not retrieve file content for azure-pipelines.yml from repository … one of the directories in the path contains too many files or subdirectories.”
      • That confirms the “too many refs”/throttling scenario.
    2. Work around or reduce the fetch scope
      • If possible, prune old branches in Bitbucket so you’re under 2,000 refs. Azure Pipelines only pulls up to 2,000 branches when it inventories a repo.
      • Alternatively, mirror the subset of branches you care about into a separate repo that only contains main.
      • Or move your code into an Azure Repos Git repo (where scheduled triggers tend to be more reliable at scale).
    3. Verify your service connection
      • CI is working, but just double-check your Bitbucket OAuth token hasn’t expired.
      • A bad/expired token on schedule runs will also lead to informational runs.
    4. Double-check your YAML schedule (just in case)
      
         schedules:
         - cron: '*/15 * * * *'
           displayName: Test schedule
           branches:
             include:
             - main
           always: true

      You’ve got that right, and you already see the preview of upcoming runs, so Azure Pipelines definitely picked it up.
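
    If you want to scan the run history programmatically instead of eyeballing the "Canceled" filter, the heuristic from step 1 can be sketched as a small helper. The field names below are illustrative sample data, not the exact Azure DevOps Runs API schema, so treat them as assumptions:

```python
from datetime import timedelta

# Heuristic from step 1: an "informational run" shows up as a canceled run
# that finished almost instantly (< 1 second). Field names are illustrative.
def looks_informational(run):
    return (
        run.get("result") == "canceled"
        and run.get("duration", timedelta(0)) < timedelta(seconds=1)
    )

# Hypothetical run records, shaped like what you might map API results into.
runs = [
    {"id": 101, "result": "succeeded", "duration": timedelta(minutes=4)},
    {"id": 102, "result": "canceled", "duration": timedelta(milliseconds=300)},
    {"id": 103, "result": "canceled", "duration": timedelta(minutes=2)},  # manually canceled, not informational
]

informational = [r["id"] for r in runs if looks_informational(r)]
print(informational)  # → [102]
```

    A long tail of sub-second canceled runs lining up with your 15-minute schedule would confirm the throttling/too-many-refs scenario.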

    Hope this helps!

    References:

    1. Build Bitbucket Cloud repos - informational runs and branch-count limits
    2. Configure scheduled triggers in YAML - FAQ on always, informational runs, etc.

Answer accepted by question author
  1. Q&A Assist
    2026-03-20T18:09:07.14+00:00
    AI-generated content. This answer was created with AI from Q&A Assist.

    The large number of branches in the Bitbucket repository is the key difference and is very likely what is affecting the scheduled runs.

    Relevant behaviors from Azure Pipelines with Bitbucket and scheduled triggers:

    1. Scheduled triggers are evaluated per branch and rely on loading branch metadata
      • For YAML schedules, “Scheduled runs for a branch are added only if the branch matches the branch filters for the scheduled triggers in the YAML file in that particular branch.” In your case, that is main.
      • Azure Pipelines has a documented limitation: it loads a maximum of 2000 branches from a repository into dropdown lists and similar UI elements. With ~10,000 branches, this limit is exceeded.
      • While the limit is documented for UI dropdowns, the same underlying branch-loading constraints can affect how schedules are resolved and maintained for very large repos.
    2. Bitbucket integrations can produce “informational runs” instead of real runs
      • For Bitbucket (Cloud or on-prem), when Azure Pipelines cannot retrieve YAML or repository content (for example, due to throttling or other provider-side issues), it creates an informational run instead of a normal run.
      • Informational runs:
        • Have Status = Canceled
        • Duration < 1s
        • Contain messages like Could not retrieve file content for {file_path} from repository {repo_name}... or similar
      • These informational runs can be created both for CI and for scheduled checks (internal triggers that check for code changes before starting a scheduled run).
      • With a very large Bitbucket repo (10,000+ branches), the probability of throttling or retrieval failures increases, which can cause Azure Pipelines to repeatedly fail to load the YAML for the scheduled evaluation and never start the real run.
    3. YAML schedule configuration is otherwise correct
      • The schedule snippet:
             schedules:
             - cron: '*/15 * * * *'
               displayName: Test schedule
               branches:
                 include:
                 - main
               always: true
        
      • This matches the documented pattern: cron, branches.include, and always: true to force runs even without code changes.
      • You already confirmed there are no UI schedules overriding YAML, which is another common cause.
    4. What to check and try in this specific scenario
      1. Confirm whether informational runs are being created
      • Look at the pipeline run history around the scheduled times.
      • If there are runs with Status = Canceled, Duration < 1s, and messages like “Could not retrieve file content…”, those are informational runs. That indicates Azure DevOps is failing to retrieve the YAML from Bitbucket when evaluating the schedule.
      2. Verify Bitbucket accessibility and throttling
      • For Bitbucket integrations, Azure Pipelines either polls (on-prem) or uses webhooks (Cloud) and then must retrieve the YAML from the repo.
      • If Bitbucket is throttling or intermittently failing requests (more likely with a very large repo and many branches), scheduled evaluations can fail and produce only informational runs.
      3. Reduce branch pressure where possible
      • Because Azure DevOps only loads up to 2,000 branches into the UI and has to scan branches for triggers, a repo with ~10,000 branches is well beyond the documented limits.
      • Practical mitigations:
        • Clean up or archive unused branches in Bitbucket to reduce the total count.
        • Move the pipeline YAML and the scheduled build to a smaller, dedicated repo (for example, a “build” repo that references artifacts from the large repo by other means) so that the scheduler is not operating against a 10k-branch repository.
      4. Force a schedule resync
      • Make a small, trivial change to the YAML in main and push it.
      • This forces Azure Pipelines to re-read the YAML and resync the schedules for that branch.
      • Then re-check the “Scheduled runs” view and the run history.
      5. Confirm the pipeline is not paused or disabled
      • In the pipeline editor, open Settings and verify the pipeline is not paused or disabled; otherwise triggers (including schedules) will not fire.
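
    As a quick sanity check on the cron expression itself, you can enumerate the next few '*/15 * * * *' fire times with plain stdlib code and compare them against the "Scheduled runs" preview. This is a minimal sketch handling only this one minute-step pattern, not a general cron parser:

```python
from datetime import datetime, timedelta

def next_quarter_hours(start, count):
    """Return the next `count` fire times strictly after `start` for the
    minute pattern '*/15' (fires at minutes 00, 15, 30, 45 of every hour).

    Minimal sketch for this one schedule only, not a general cron engine.
    """
    t = start.replace(second=0, microsecond=0) + timedelta(minutes=1)
    times = []
    while len(times) < count:
        if t.minute % 15 == 0:
            times.append(t)
        t += timedelta(minutes=1)
    return times

fires = next_quarter_hours(datetime(2026, 3, 20, 18, 8), 3)
print([f.strftime("%H:%M") for f in fires])  # → ['18:15', '18:30', '18:45']
```

    If the preview shows the same boundaries but no runs appear in history (not even canceled ones), the problem is in schedule evaluation against the repo rather than in the cron expression.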

    Summary:

    • The YAML schedule configuration is correct and CI proves the Bitbucket connection works.
    • The combination of Bitbucket + very large number of branches (~10,000) aligns with documented branch-loading limits and with the behavior that, when Azure DevOps cannot reliably retrieve YAML from the repo, it only creates informational runs instead of real scheduled runs.
    • Reducing the number of branches or moving the scheduled pipeline to a smaller repo, plus checking for informational runs and resyncing the schedule, are the concrete steps to resolve this.
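
    The "smaller, dedicated repo" mitigation can be sketched as follows: keep the azure-pipelines.yml (and its schedule) in a small repo with only a few branches, and pull the large Bitbucket repo in as a repository resource at run time. The names bigrepo, MyBitbucketConnection, and myteam/big-bitbucket-repo below are placeholders, not values from this thread:

```yaml
# azure-pipelines.yml in a small, dedicated "build" repo.
# The schedule is evaluated against this repo, not the 10k-branch one.
schedules:
- cron: '*/15 * * * *'
  displayName: Test schedule
  branches:
    include:
    - main
  always: true

resources:
  repositories:
  - repository: bigrepo                # alias used in the checkout step below
    type: bitbucket
    endpoint: MyBitbucketConnection    # placeholder Bitbucket service connection
    name: myteam/big-bitbucket-repo    # placeholder {workspace}/{repo}
    ref: refs/heads/main

steps:
- checkout: bigrepo                    # fetch the large repo's code at run time
```

    With this layout the scheduler only has to resolve branches and YAML in the small repo, while the build still operates on the large repo's code.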
