It is an under-documented feature that one can specify a directory as
the Ansible inventory source, not just a single file. In that case,
Ansible merges the contents of flat files and any executable inventory
plugins found in the directory.
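For example, a Vagrantfile could point the provisioner at a directory
rather than a single file (paths here are illustrative, and I'm
assuming the inventory_path option, which accepts either):

config.vm.provision :ansible do |ansible|
  ansible.playbook = "site.yml"
  # A directory as inventory: Ansible merges every flat file and
  # executable inventory script it finds inside.
  ansible.inventory_path = "inventory/"
end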
This is useful, for instance, to put localhost in your inventory for use
with `local_action` even if your entire infrastructure is otherwise on
EC2 or some other dynamic inventory source. I also use a flat file to
create aliases for host groups automatically generated from the EC2 API,
like "staging" for `tag_Environment_staging`.
In eb70c0d6bbc8 we compared a Subprocess::Result to a Fixnum, so
Vagrant always reported failure regardless of Ansible's exit code.
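The fix is to compare the exit code carried by the Result object
rather than the object itself; roughly (a sketch, with the command
arguments made up):

result = Vagrant::Util::Subprocess.execute(
  "ansible-playbook", "-i", "inventory", "playbook.yml")
# A Subprocess::Result never equals a Fixnum, so the old check
# reported failure on every run; test the exit status instead.
raise "ansible-playbook failed" if result.exit_code != 0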
If the provisioning directory is mounted before this method is called,
and the mounted filesystem is of a type that does not support chown
(e.g. vmhgfs for VMware, or hfs), then this method fails.
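One possible guard, assuming Vagrant's communicator API and the
provisioner's provisioning_path setting (the mountpoint test is
illustrative, not taken from the diff):

comm = machine.communicate
comm.sudo("mkdir -p #{config.provisioning_path}")
# Only chown when the directory is not already a mount point;
# filesystems like vmhgfs reject ownership changes.
comm.sudo("mountpoint -q #{config.provisioning_path} || " \
          "chown -R #{machine.ssh_info[:username]} #{config.provisioning_path}")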
This mimics the equivalent feature of the chef_solo provisioner, and
mounts the Puppet manifests and modules with NFS. Doing so can greatly
shorten a Puppet run if you have many .pp files.
Enabling this is optional. VirtualBox's (or any other provider's) shared
folders method stays the default. A typical usage would look like this:
config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "puppetmaster/manifests"
  puppet.module_path = ["puppetmaster/modules"]
  puppet.manifest_file = "site.pp"
  puppet.nfs = true
end
This fixes #1308.
This commit contains two fixes:
- The Chef provisioner was incorrectly referencing config.ssh.username
  instead of machine.ssh_info[:username]. With the new change to the
  default SSH configuration, if a user had not set config.ssh.username,
  provisioning would fail.
- The shell provisioner was not properly changing permissions on the
  upload path. If a different SSH user attempted to use a shell
  provisioner, provisioning would fail. The same case applied to
  the Chef provisioner: while permissions were being reset, they
  were not applied recursively.
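Roughly, the corrected calls might look like this (a sketch; the
identifiers follow Vagrant's internals but are not verbatim from the
diff):

ssh_user = @machine.ssh_info[:username]
@machine.communicate.tap do |comm|
  comm.sudo("mkdir -p #{config.upload_path}")
  # -R so files left behind by a previous SSH user are reset too.
  comm.sudo("chown -R #{ssh_user} #{config.upload_path}")
end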