This repository has been archived by the owner on Nov 9, 2020. It is now read-only.

Migrating Docker VM to another ESX host results in volumes which cannot be mounted. #2096

Open
darkl0rd opened this issue Jun 5, 2019 · 2 comments

Comments

darkl0rd commented Jun 5, 2019

Setup: vSphere cluster consisting of 4 ESX hosts. Several Docker hosts run on one of the ESX hosts; many of their containers use persistent volumes.

When the Docker VM is migrated to another ESX host, some (but not all) of the volumes no longer attach correctly. After restarting the Docker Engine (sometimes several times), the volumes can be attached again. Strangely enough, in some cases the containers do start, but inspecting the container shows the volume is not actually mounted; a "local" volume is used instead (even though `docker volume ls` does not list the volume as local). This can be verified with `docker exec <container> mount`.
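For reference, this is how that check can be run (a sketch; the container name `web1` and mount path `/data` are placeholders):

```sh
# The DRIVER column should show "vsphere" for the affected volume:
docker volume ls

# Inspect the container's mounts; after a failed attach, the entry for
# /data may point at a plain local directory rather than a vSphere-backed
# device:
docker inspect -f '{{ json .Mounts }}' web1

# From inside the container, confirm /data appears in the mount table;
# if it does not, the container is writing to its own filesystem:
docker exec web1 mount | grep /data
```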

Summary:

  • Docker VM on esx node1
  • Start container on this Docker VM with persistent VMware storage
  • Migrate Docker VM to esx node2
  • Frequently your container will lose access to its volume (a scripted version of these steps is sketched below)
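A minimal scripted version of those steps, assuming the vSphere Docker Volume Service plugin is installed (`myvol` and `web1` are placeholder names, and the `-o size` option follows the plugin's README):

```sh
# On the Docker VM while it is running on esx node1:
docker volume create --driver=vsphere -o size=1gb myvol
docker run -d --name web1 -v myvol:/data alpine sleep 3600

# Migrate the Docker VM to esx node2 with vMotion (done from vCenter;
# nothing to run on the Docker host itself for this step).

# After the migration, check whether /data is still a real mount:
docker exec web1 mount | grep /data || echo "volume no longer mounted"
```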
@mitziwebb

We have also seen this issue.

liqdfire commented Mar 4, 2020

It happens because the Docker plugin has a bug in its reference-counting feature: when a container is stopped, the plugin still thinks other containers are referencing the volume, so it does not detach the volume from the VM.
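If that diagnosis is right, restarting the engine (as the original report already notes) forces the plugin to rebuild its state. A sketch of that workaround, reusing the placeholder names from above:

```sh
# Stop the container that uses the volume, restart the Docker Engine so
# the plugin re-initializes (per the report this may take several tries),
# then start the container again:
docker stop web1
sudo systemctl restart docker
docker start web1

# Verify the volume is a real mount again:
docker exec web1 mount | grep /data
```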
