I am using the Packer Ansible provisioner to execute a playbook on one or more AWS instances that Packer brings up to build a custom Amazon Machine Image (AMI).
The instances are brought up in parallel, but when the Ansible provisioner runs, it errors out in cases where it tries to reinstall roles that are already installed, and asks me to use --ignore-errors:
```
    amazon-ebs.build_ami: Starting galaxy role install process
    amazon-ebs.build_ami: - extracting ansible_role_<REDACTED> to <REDACTED>/.ansible/roles/ansible_role_<REDACTED>
    amazon-ebs.build_ami: [WARNING]: - ansible_role_<REDACTED> was NOT installed successfully: the specified role ansible_role_<REDACTED> appears to already exist. Use --force to replace it.
    amazon-ebs.build_ami: ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.
==> amazon-ebs.build_ami: Provisioning step had errors: Running the cleanup provisioner, if present...
==> amazon-ebs.build_ami: Terminating the source AWS instance...
```
Is there any way to stop the provisioner from trying to install the roles multiple times, or to pass --ignore-errors to the ansible-galaxy command the provisioner runs?
I have tried setting this up with the galaxy_command option but could not figure it out.
I also tried setting a custom roles-path and collections-path, but the provisioner still installed the roles and collections to Ansible's default location.
I would like to keep building the AMIs in parallel.
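For reference, the provisioner does expose a few Galaxy-related options. A minimal sketch, assuming a Packer version recent enough to support `galaxy_force_install`, `roles_path`, and `collections_path` (the playbook and requirements file names here are placeholders):

```hcl
build {
  sources = ["source.amazon-ebs.build_ami"]

  provisioner "ansible" {
    playbook_file = "./playbook.yml"
    galaxy_file   = "./requirements.yml"

    # Passes --force to ansible-galaxy, so a role that is already present
    # is reinstalled instead of failing with "appears to already exist".
    galaxy_force_install = true

    # Install into a build-local directory instead of the default under
    # ~/.ansible, so builds are less likely to trip over a shared role tree.
    roles_path       = "./galaxy/roles"
    collections_path = "./galaxy/collections"
  }
}
```

Note that if several parallel builds run from the same working directory they would still share these paths; giving each build its own directory avoids the race entirely.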
Sorry I took some time to respond to this issue, but since you're asking a general usage question I would encourage you to migrate it over to Discuss, as that is where our community is most active and most likely to be able to help you with this.
GitHub issues are generally reserved for anything that needs maintainer attention: signalling bugs, requesting features, etc.
I'll close this issue for now, but please feel free to reopen if I missed something, thanks!
Here is my redacted build config:
And here is my redacted Ansible provisioner config:
I hope someone can suggest a way forward, whether that is passing --ignore-errors (which is not ideal) or some other way to get around this issue.
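One possible way to inject --ignore-errors would be to point `galaxy_command` at a small wrapper script. This is only a sketch, not verified against every Packer version, and the script name is arbitrary:

```sh
#!/bin/sh
# galaxy-ignore-errors.sh (hypothetical name): forward whatever arguments
# Packer passes to ansible-galaxy, appending --ignore-errors so roles that
# already exist are skipped rather than aborting the whole install.
exec ansible-galaxy "$@" --ignore-errors
```

```hcl
provisioner "ansible" {
  playbook_file  = "./playbook.yml"
  galaxy_file    = "./requirements.yml"
  galaxy_command = "./galaxy-ignore-errors.sh"
}
```

This assumes ansible-galaxy accepts the flag after the positional arguments, which holds for the `install` subcommand; the wrapper must be executable and reachable from Packer's working directory.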