
Matrix trimming should account for new tags #1493

Open

mthalman opened this issue Nov 11, 2024 · 4 comments

@mthalman (Member) commented Nov 11, 2024

When a new tag is added to the manifest and that is the only change for a given build, matrix trimming will trim out the build job for that image, preventing the new tag from being published. The tag will only be published once a change occurs that causes the image to be rebuilt, such as a Dockerfile change, a base image update, or disabling caching.

In a scenario like this, the trimming logic should account for newly added tags and cause the build job to run so the tag(s) can be applied.
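
A minimal sketch of that idea, written in Python for illustration rather than as the ImageBuilder's actual C# trimming code: compare each platform's tags in the manifest against the tags recorded in the previously published image info (image-info.json) and keep the build job whenever new tags exist, even if nothing else changed. The `Platform` type, `should_trim` function, and the digest/tag fields are hypothetical names for this sketch.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Platform:
        """Hypothetical stand-in for one platform entry in the manifest / image-info.json."""
        dockerfile_digest: str   # digest of the Dockerfile and build context
        tags: set[str]           # tags declared for this platform

    def should_trim(current: Platform, published: Optional[Platform]) -> bool:
        """Return True if this platform's build job can be trimmed from the matrix."""
        if published is None:
            # Never built or published before: always build.
            return False
        if current.dockerfile_digest != published.dockerfile_digest:
            # Dockerfile or base image changed: build as usual.
            return False
        if current.tags - published.tags:
            # Tag-only change (the case described in this issue): keep the job
            # so the new tag(s) get applied and published.
            return False
        return True

    # Example: a new tag with no other changes should prevent trimming.
    published = Platform(dockerfile_digest="sha256:abc", tags={"8.0", "8.0.1"})
    current = Platform(dockerfile_digest="sha256:abc", tags={"8.0", "8.0.1", "8.0-noble"})
    assert should_trim(current, published) is False
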

Example build where this occurs.


@lbussell (Contributor) commented Nov 11, 2024

I wonder if there is any way we could have some shared infrastructure for testing these specific scenarios. We have a lot of code that depends on Manifest and ImageArtifactDetails (image-info.json) at different points in the build. It would be nice if we had a set of scenarios...

  • Adding images
  • Adding tags
  • Removing images
  • Removing tags
  • Cached images
  • No-cache builds
  • etc.

...that we could apply via tests to all applicable commands, such as EOL annotations. That way, the next time we implement something like this (image signing, for example), we would know right away whether it does The Right Thing in all of these known edge cases.

Perhaps just a list of those edge cases is sufficient to guide writing new tests.
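
A rough sketch of what such shared scenario infrastructure could look like, written in Python with pytest for brevity rather than in the project's own test framework; the scenario data and the `diff_tags` placeholder are assumptions for illustration, not existing code.

    import pytest

    # Each scenario is a (previous, current) pair of manifest tag sets,
    # mirroring the edge cases listed above. Structure is illustrative only.
    SCENARIOS = {
        "add_image":    ({"repo:1.0"}, {"repo:1.0", "repo:2.0"}),
        "add_tag":      ({"repo:1.0"}, {"repo:1.0", "repo:latest"}),
        "remove_image": ({"repo:1.0", "repo:2.0"}, {"repo:1.0"}),
        "remove_tag":   ({"repo:1.0", "repo:latest"}, {"repo:1.0"}),
        "no_change":    ({"repo:1.0"}, {"repo:1.0"}),
    }

    def diff_tags(previous: set[str], current: set[str]) -> tuple[set[str], set[str]]:
        """Placeholder for the command under test (trimming, EOL annotation, signing, ...)."""
        return current - previous, previous - current

    @pytest.mark.parametrize("previous,current", SCENARIOS.values(), ids=list(SCENARIOS))
    def test_command_sees_manifest_change(previous, current):
        added, removed = diff_tags(previous, current)
        # Real assertions would check command-specific behavior,
        # e.g. that a newly added tag forces a build job to run.
        assert added or removed or previous == current

The same scenario table could then be reused against each manifest-dependent command, rather than re-deriving the edge cases per feature.
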

@lbussell (Contributor) commented:

> I wonder if there is any way ...

I filed #1495 to track this.
