Is your feature request related to a problem? Please describe.
We use tag retention policies to clean up older images from projects. However, we ran into multiple issues while trying to set this up, which make the feature unusable in many situations.
Situation 1: Helm charts and container images share the same name. In this case, we can't use a policy like "retain the most recently pushed 3 artifacts with tags matching **" because it does not differentiate between images and charts. If I push an image, followed by a chart, and then later push two bugfix Helm charts, this policy would delete the image, leaving only the Helm charts.
Situation 2: We replicate images from another repository daily, matching a given pattern. If we change the pattern, new images get replicated, but those no longer being replicated remain in the project. I tried to clean them up using "last pushed" or "last pulled" policies, but those timestamps are not updated when a replication runs. Otherwise, I would be able to clean up any artifacts not replicated in the last 24 hours.
Situation 3: Similar to situation 2, but a little different. We use two Harbors: an "external" one that pulls new images from outside, and an "internal" one from which the images are consumed. We have replication between them, but there is currently no way to get rid of images on the internal Harbor when they are deleted on the "external" one.
Describe the solution you'd like
Allow users to differentiate between artifact types (charts, images, others) when creating policies.
Give users the option to create policies based on the last time an artifact was replicated.
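Until retention rules can filter on artifact type, situation 1 can be approximated with a script against Harbor's artifact listing API (GET /api/v2.0/projects/{project}/repositories/{repo}/artifacts, where each entry carries a "type" such as IMAGE or CHART and a "push_time"). This is a minimal sketch, not Harbor's own retention logic; the function name and retention count are illustrative, and the sample data is made up:

```python
from datetime import datetime

def select_artifacts_to_delete(artifacts, artifact_type, keep=3):
    """Return artifacts of the given type beyond the `keep` most recently pushed.

    `artifacts` is a list of dicts shaped like Harbor's artifact API response,
    each with a "type" (e.g. "IMAGE" or "CHART") and an ISO-8601 "push_time".
    """
    matching = [a for a in artifacts if a["type"] == artifact_type]
    # Newest first, so everything past index `keep` is a deletion candidate.
    matching.sort(key=lambda a: datetime.fromisoformat(a["push_time"]), reverse=True)
    return matching[keep:]

# Example mirroring the report: one image, then three charts pushed later.
artifacts = [
    {"type": "IMAGE", "push_time": "2021-01-01T10:00:00", "digest": "sha256:img1"},
    {"type": "CHART", "push_time": "2021-01-02T10:00:00", "digest": "sha256:cht1"},
    {"type": "CHART", "push_time": "2021-01-03T10:00:00", "digest": "sha256:cht2"},
    {"type": "CHART", "push_time": "2021-01-04T10:00:00", "digest": "sha256:cht3"},
]
# Retaining the 3 newest IMAGEs deletes nothing -- the lone image is kept.
print(select_artifacts_to_delete(artifacts, "IMAGE"))  # → []
# Retaining the 2 newest CHARTs flags only the oldest chart.
print(select_artifacts_to_delete(artifacts, "CHART", keep=2))
```

Because charts and images are filtered separately before counting, the single image survives even when newer charts arrive, which a type-blind "retain 3" rule cannot guarantee.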
Regarding situation 2: this is Harbor's existing design. We cannot update the push time when the image already exists in the target Harbor, so replication runs do not refresh those timestamps.
For situation 3, you could set up an event-based replication rule and enable the option to delete remote resources when they are deleted locally.
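Such a rule can be created through Harbor's replication-policy API (POST /api/v2.0/replication/policies). The sketch below only builds the request payload; the policy name, registry ID, and name filter are placeholders, and the field names follow the v2.0 API model, where "deletion": true mirrors local deletions to the destination:

```python
def build_event_based_policy(name, dest_registry_id, name_filter):
    """Build a Harbor v2.0 replication-policy payload that runs on artifact
    events (push/delete) rather than a schedule, and propagates deletions."""
    return {
        "name": name,
        "dest_registry": {"id": dest_registry_id},
        "filters": [{"type": "name", "value": name_filter}],
        "trigger": {"type": "event_based"},  # fire on events, not cron
        "deletion": True,  # delete remote resources when deleted locally
        "enabled": True,
    }

policy = build_event_based_policy("external-to-internal", 2, "library/**")
# POST this payload to the "external" Harbor, e.g. with requests:
# requests.post("https://external.harbor.example/api/v2.0/replication/policies",
#               json=policy, auth=("admin", "<password>"))
print(policy["trigger"]["type"])  # → event_based
```

With the event-based trigger and deletion enabled, removing an image on the "external" Harbor should remove it on the "internal" one as well, covering situation 3 without a retention policy.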