
Containerd snapshotter saves layers that can't be cleared through docker commands #5315

Open
jared-rodgers-figure opened this issue Aug 4, 2024 · 3 comments

jared-rodgers-figure commented Aug 4, 2024

Description

I have the following enabled on my machine:

  "features": {
    "containerd-snapshotter": true
  }

If I do a pull and cancel it partway through, the already-pulled layers are kept.
If I never complete the pull, the data persists under /containerd/io.containerd.snapshotter.v1.overlayfs and /containerd/io.containerd.content.v1.content.
Running commands like docker system prune -a or docker image prune -a does not remove this data, and there doesn't seem to be any way to manually trigger garbage collection of these files.
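
(For anyone reproducing: the "features" block above goes in the daemon config, typically /etc/docker/daemon.json, and the daemon needs a restart to pick it up. Rough sketch, assuming a systemd-based install:)

  sudo vi /etc/docker/daemon.json            # add the "features" block shown above
  sudo systemctl restart docker
  docker info | grep -A1 "Storage Driver"    # should report io.containerd.snapshotter.v1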

Reproduce

  1. Enable the snapshotter with the code block from the description.
  2. Note the reported image usage with docker system df.
  3. Run docker pull <any_image>.
  4. Cancel the pull before it completes, but after some layers have been pulled and extracted.
  5. Confirm with docker system df that the usage has increased. (The full sequence is scripted below.)
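
The whole sequence as a script (the image name and timeout are my own arbitrary choices; any image large enough to be interrupted mid-pull works):

  docker system df                              # baseline usage
  timeout 10 docker pull ubuntu:24.04 || true   # cancel the pull partway through
  docker system df                              # usage has increased
  docker system prune -a -f                     # does not reclaim the partial layers
  docker image prune -a -f                      # neither does this
  docker system df                              # usage is still elevated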

Expected behavior

I would expect docker system prune or docker image prune to garbage-collect layers left behind by the snapshotter, the same way they clean up dangling images.

docker version

Client:
 Version:           24.0.6
 API version:       1.43
 Go version:        go1.20.12
 Git commit:        ed223bc820
 Built:             Tue Aug 29 19:14:17 2023
 OS/Arch:           linux/amd64
 Context:           default

docker info

Client:
 Version:    24.0.6
 Context:    default
 Debug Mode: false
 Plugins:
  compose: Docker Compose (Docker Inc.)
    Version:  v2.27.0
    Path:     /usr/lib/docker/cli-plugins/docker-compose

Additional Info

No response

@thaJeztah
Member

This probably should be opened in the https://github.com/moby/moby/ issue tracker instead, as this is an issue with the daemon, not the docker CLI.

I see you're running docker 24.0, which is no longer maintained; are you still able to reproduce this on the current version (v27.x)? Can you also post the full output of docker version and docker info? So far you've only provided information about the CLI.

@thaJeztah added the status/more-info-needed and containerd-integration labels on Aug 5, 2024
@vvoland
Collaborator

vvoland commented Aug 5, 2024

Thanks for the report!

Indeed, we don't prune the unfinished ingest data. I don't think they're handled by containerd's GC either.
I think we could Abort all unfinished ingests after some time on our side, but perhaps there's a better way to handle it on the containerd side directly. Any ideas @dmcgowan?
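
In the meantime, a rough manual workaround sketch for anyone affected (assumes Docker is wired to a containerd reachable with ctr; the socket and root paths below are typical defaults and may differ, e.g. dockerd's supervised containerd keeps its state under /var/lib/docker/containerd instead):

  # List the unfinished ingests left behind by a cancelled pull
  # (Docker uses the "moby" containerd namespace):
  sudo ctr -n moby content active

  # The partial data lives in the content store's ingest directory; removing
  # it with the daemon stopped reclaims the space:
  sudo systemctl stop docker
  sudo rm -rf /var/lib/containerd/io.containerd.content.v1.content/ingest/*
  sudo systemctl start docker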

@jared-rodgers-figure
Author

Thanks, and apologies for filing this on the wrong repo.
I can move it over to the containerd or moby issue tracker if that's a better place for it.

The original issue was on a work machine, but I was able to reproduce it on 26.0.1 on my local machine, and again after upgrading to 27.1.1.
Here is the current info from my system. (This is a different machine from the one I posted earlier, but the issue is still reproducible with the same steps.)

docker version:

Client: Docker Engine - Community
 Version:           27.1.1
 API version:       1.46
 Go version:        go1.21.12
 Git commit:        6312585
 Built:             Tue Jul 23 19:57:01 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.1.1
  API version:      1.46 (minimum version 1.24)
  Go version:       go1.21.12
  Git commit:       cc13f95
  Built:            Tue Jul 23 19:57:01 2024
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.7.19
  GitCommit:        2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
 runc:
  Version:          1.1.13
  GitCommit:        v1.1.13-0-g58aa920
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info:

Client: Docker Engine - Community
 Version:    27.1.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.16.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 27.1.1
 Storage Driver: overlayfs
  driver-type: io.containerd.snapshotter.v1
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 nvidia runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41
 runc version: v1.1.13-0-g58aa920
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.5.0-44-generic
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 20
 Total Memory: 62.46GiB
 Name:
 ID: 797409c4-1056-4a1f-946d-93f3a7004068
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: true
 Insecure Registries:
  localhost:5000
 Live Restore Enabled: false
