Not an issue, but rather a PSA to help others figure out these errors. Spent all day on this problem...
Since September 2022, Azure DevOps fetches only 1 commit by default in new pipelines. With a single commit there is no parent to diff against, so the change detection effectively sees everything as new, meaning all files in the repository get marked as 'changed'.
This also means that if you merge with non-squash commits, due to the way the plugin performs the check, it will reference commit ids that the shallow clone knows nothing about, leading to `bad object` errors.
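For anyone who wants to see the failure mode outside a pipeline, here is a minimal local sketch (throwaway repo, hypothetical paths): a `--depth 1` clone is missing the parent commits, so diffing against a commit id from the full history fails, typically with `fatal: bad object <sha>`.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a throwaway repo with two commits.
git init -q origin-repo
cd origin-repo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "first"
first=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "second"
cd ..

# Shallow clone with depth 1, like the Azure DevOps default.
git clone -q --depth 1 "file://$tmp/origin-repo" shallow
cd shallow

# The first commit is outside the fetched history, so this fails.
git diff --name-only "$first" HEAD 2>err.txt || true
cat err.txt
```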
The fix for this is quite simple:
- If the pipeline is expected to run on every commit (including squash merges), set fetchDepth to 2 on checkout.
- If the pipeline is expected to run on merges, more than one commit will be required, and therefore fetchDepth needs to be set to 0 (0 = no limit).
Example:
jobs:
  - job: Check
    displayName: Verify files changed
    pool:
      vmImage: ubuntu-latest
    steps:
      - checkout: self
        fetchDepth: 2 # Set to 0 when merging with non-squash commits
      - task: ChangedFiles@1
        name: CheckChanges
        displayName: Check Changed Files
        inputs:
          rules: ...
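As a quick sanity check that `fetchDepth` behaves as described, this local sketch (assumed demo, not a pipeline step) shows that `fetchDepth: 2` corresponds to git's `--depth 2`: the clone contains exactly two commits, which is enough to diff HEAD against its parent.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Throwaway repo with three commits.
git init -q origin
cd origin
for n in 1 2 3; do
  git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "commit $n"
done
cd ..

# Depth-2 clone, equivalent to fetchDepth: 2 in the pipeline.
git clone -q --depth 2 "file://$tmp/origin" shallow
cd shallow

git rev-list --count HEAD          # → 2
git diff --name-only HEAD~1 HEAD   # works: the parent commit is present
```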