Hi @Sebastian Bargas ,
Based on how Bitbucket scheduling works under the covers, my hunch is that Azure DevOps tries to fetch your YAML from the repo each time the schedule is due, but runs into a throttling/time-out or too-many-refs condition because you have ~10,000 branches. When that happens you don’t get a normal build; you get an “informational run” that immediately cancels, because the service can’t retrieve the files it needs to decide whether there’s an update. By default those canceled runs are filtered out of your pipeline history, so it looks like nothing happened.
Here’s what you can try:
- Look for informational runs
  - Go to your pipeline’s Runs page and turn on the “Canceled” filter.
  - You may see a bunch of tiny (<1 second) runs with a message like “Could not retrieve file content for azure-pipelines.yml from repository … one of the directories in the path contains too many files or subdirectories.”
  - That confirms the “too many refs”/throttling scenario.
- Work around or reduce the fetch scope
  - If possible, prune old branches in Bitbucket so you’re under 2,000 refs. Azure Pipelines only pulls up to 2,000 branches when it inventories a repo.
  - Alternatively, mirror the subset of branches you care about into a separate repo that only contains main.
  - Or move your code into an Azure Repos Git repo, where scheduled triggers tend to be more reliable at scale.
- Verify your service connection
  - CI is working, but double-check that your Bitbucket OAuth token hasn’t expired.
  - A bad or expired token will also cause scheduled runs to end up as informational runs.
- Double-check your YAML schedule (just in case)
  - You’ve got that right, and you already see the preview of upcoming runs, so Azure Pipelines has definitely picked it up:

    ```yaml
    schedules:
      - cron: '*/15 * * * *'
        displayName: Test schedule
        branches:
          include:
            - main
        always: true
    ```
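On the pruning point above: the 2,000-ref ceiling is easy to check and enforce with plain git, by deleting remote branches whose tips are already contained in the default branch. Here’s a minimal sketch; it builds a throwaway demo repo so it’s safe to run as-is, and the `origin` remote name and `main` default branch are assumptions you’d adjust for your own repo:

```shell
#!/bin/sh
set -e
# Demo in a throwaway repo: count remote refs, then delete every branch
# already merged into main. The same loop, pointed at your real clone,
# is one way to get a Bitbucket repo under the 2,000-ref inventory limit.
work=$(mktemp -d)
git init -q --bare "$work/remote.git"
git clone -q "$work/remote.git" "$work/clone" 2>/dev/null
cd "$work/clone"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init
git branch -M main
git push -q origin main

# Two stale branches fully merged into main, plus one with real work on it.
git branch old/merged-1 main
git branch old/merged-2 main
git checkout -q -b feature/active
git commit -q --allow-empty -m wip
git push -q origin old/merged-1 old/merged-2 feature/active

echo "before: $(git ls-remote --heads origin | wc -l) remote branches"

# Delete remote branches whose tips are already reachable from origin/main.
git fetch -q --prune origin
for ref in $(git for-each-ref --merged origin/main \
               --format='%(refname:short)' refs/remotes/origin); do
  case "$ref" in origin/main|origin/HEAD) continue ;; esac  # never the default branch
  git push -q origin --delete "${ref#origin/}"
done

echo "after: $(git ls-remote --heads origin | wc -l) remote branches"
```

In a real repo you’d likely replace the `git push --delete` with an `echo` first and review the list, since “merged into main” is a heuristic, not proof a branch is disposable.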
Hope this helps!