The "destroy_on_error" functionality for `vagrant up` is implemented in
the `recover()` action chain, and works by firing off a destroy action
from inside that chain.
This is all well and good, but it copies the existing `env`, which
still has `action_name` set for the up action. That was causing
action_hooks registered for up actions to attach to this destroy
action stack.
Setting the action_name explicitly in the env before firing the runner
should correct the behavior. I'm not sure if raw_action_name is used
anywhere, but it seemed better to be consistent than conservative in
what we change.
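
A minimal sketch of the shape of the fix; the destroy action lookup
and runner wiring below are assumptions, not the exact source:

```ruby
# Hedged sketch: `destroy_action` is a placeholder for however the
# builtin destroy chain is obtained in the real code.
def recover(env)
  destroy_env = env.dup
  # Re-stamp the action names so hooks registered for the up action
  # don't attach to this destroy chain.
  destroy_env[:action_name]     = :machine_action_destroy
  destroy_env[:raw_action_name] = :destroy
  env[:action_runner].run(destroy_action, destroy_env)
end
```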
This addresses the surprising behavior that the StoreBoxMetadata hook
was running many times during a machine up, including during failed
operations where a destroy_on_error deleted the machine. This was
resulting in an error that looked like:
> No such file or directory @ rb_sysopen [...] /[...]/box_meta
Plugin action hooks using prepend/append were attaching every time a
Builder was run, including the sub-Builders that show up for things
like Call actions.
To fix this, we tell Builders whether they are "primary" and only run
prepend/append on those. See inline comments for more explanation.
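
Here's a hedged sketch of the idea; the real Builder carries much more
state, and the names below are illustrative:

```ruby
class Builder
  def initialize(primary: true)
    # Sub-builders (e.g. those created to run Call actions) are built
    # with primary: false so they skip hook application entirely.
    @primary = primary
    @stack = []
  end

  def use(middleware)
    @stack << middleware
    self
  end

  def call(env)
    # Only the primary (outermost) builder applies plugin action_hooks,
    # so prepend/append hooks attach exactly once per action run.
    apply_action_hooks!(env) if @primary
    @stack.each { |middleware| middleware.call(env) }
  end

  def apply_action_hooks!(env)
    # prepend/append hook middleware would be spliced into @stack here
  end
end
```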
In legacy Vagrant, any exception raised that's a subclass of
Vagrant::Errors::VagrantError is considered user-facing and so causes
the error message to be printed to the console and the process to use
exit code 1. Anything outside of that causes the process to use exit
code 255. (See `bin/vagrant` for the code.)
Here we mirror that behavior by treating errors that have a
LocalizedMessage as user-facing and those without as unexpected. This
allows the basic virtualbox component to pass in vagrant-spec!
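
Roughly the shape of the mirrored logic, as a sketch; `run_command` and
the `localized_message` accessor are illustrative stand-ins, not the
actual API:

```ruby
begin
  run_command(argv)
  exit 0
rescue StandardError => e
  if e.respond_to?(:localized_message)
    # Has a LocalizedMessage: user-facing, print it and exit 1,
    # matching legacy Vagrant's handling of VagrantError subclasses.
    $stderr.puts(e.localized_message)
    exit 1
  else
    # Unexpected error: mirror legacy Vagrant's exit code 255.
    $stderr.puts("Unexpected error: #{e.class}: #{e.message}")
    exit 255
  end
end
```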
This is a pass through the test failures and deprecation warnings:
* Make all ambiguous `.with(..., key: val)` calls use explicit hashes
  to prevent argument-mismatch test failures under Ruby 3.0 (see the
  examples after this list)
* Scope down all unbounded `raise_error` matchers to address warnings
  (removed one test that was revealed to be referencing a nonexistent
  variable once the raise_error was scoped)
* Update all `any_instance` usage to the new syntax to address warnings
* Allow the service cache to be cleared, and do so between some tests
* Fix a small bug in with_plugin's plugin-not-found code path (revealed
  by a scoped and_raise)
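
For reference, the kinds of spec changes involved look like this
(illustrative snippets, not the actual diffs):

```ruby
# Keyword-vs-hash ambiguity under Ruby 3.0 -- pass an explicit hash so
# the matcher compares a positional hash argument:
expect(subject).to receive(:run).with("cmd", {env: env})  # was: env: env

# Unbounded raise_error matcher scoped to a specific class:
expect { subject.validate! }.to raise_error(Vagrant::Errors::VagrantError)

# Old any_instance syntax updated:
# was: Vagrant::Machine.any_instance.stub(:state)
allow_any_instance_of(Vagrant::Machine).to receive(:state)
```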
There was a hash assignment that was overwriting values when there were
multiple synced folders for a given implementation.
Includes some stub-tastic unit tests to help verify the hash munging
behavior does what it's supposed to do going forward.
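
The gist of the fix, as a self-contained sketch with made-up data:

```ruby
synced_folders = [
  [:rsync, {"/vagrant" => {hostpath: "."}}],
  [:rsync, {"/extra"   => {hostpath: "./extra"}}],
]

folders_by_impl = Hash.new { |h, k| h[k] = {} }
synced_folders.each do |impl, fs|
  # Plain assignment (`folders_by_impl[impl] = fs`) would clobber the
  # first entry; merging keeps every folder for the implementation.
  folders_by_impl[impl].merge!(fs)
end

folders_by_impl # => {rsync: {"/vagrant" => ..., "/extra" => ...}}
```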
The IsRunning action checks whether `env[:machine].state.id == :running`,
but this check was never passing: the protobuf-washed version of machine
state was yielding a state whose id was a string like `"running"` rather
than a symbol. Easy fix in the mapper!
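
Something along these lines in the mapper does it; a hedged sketch
assuming a state proto with `id`, `short_description`, and
`long_description` fields:

```ruby
def machine_state_from_proto(proto)
  Vagrant::MachineState.new(
    proto.id.to_sym, # "running" -> :running, so the symbol check passes
    proto.short_description,
    proto.long_description
  )
end
```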
Boolean types (and possibly a few others) are returned as wrapper
classes when coming out from proto mapping; these need to be unwrapped
otherwise the caller who is expecting a nice clean boolean value ends up
with an icky protobuf class.
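
The unwrapping looks roughly like this, assuming the standard
google-protobuf well-known wrapper types:

```ruby
require "google/protobuf/wrappers_pb"

def unwrap(value)
  case value
  when Google::Protobuf::BoolValue,
       Google::Protobuf::StringValue,
       Google::Protobuf::Int32Value
    value.value # hand back the plain Ruby value, not the wrapper
  else
    value
  end
end

unwrap(Google::Protobuf::BoolValue.new(value: false)) # => false
```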
This fixes the shell provisioner, which relies on a communicator
receiving a settings hash `{error_check: false}` for a command that
usually fails but is sent just in case before provisioning starts.
The local-exec push strategy was assuming it was running from a CLI,
so it wouldn't be a big deal for it to straight up `exec` and replace
its running process with the user command. That command will just do
its thing, and we want the exit code for the CLI command to match
anyways, right?
Sure that works for a shell, but in a GRPC server setting it's decidedly
Not Cool to suddenly swap out the running process!
As you can imagine, the effect of doing this was all sorts of broken
pipes and unexpected EOFs and a very confused @phinze.
Luckily we had a subprocess strategy sitting right there for Windows
compat, so it was just a matter of switching to that in the server
context as well. Long and winding debugging process; simple fix;
just another classic!
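
The switch looks roughly like this (a sketch; the real push strategy
code differs):

```ruby
# Before (CLI-only): Vagrant::Util::SafeExec.exec(command) replaces
# the current process -- fatal inside a long-running GRPC server.
# After: run the command as a child and surface its exit code.
result = Vagrant::Util::Subprocess.execute(
  command, notify: [:stdout, :stderr]
) do |type, data|
  # Stream the child's output back to the caller as it arrives.
  $stdout.write(data) if type == :stdout
  $stderr.write(data) if type == :stderr
end
exit_code = result.exit_code
```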
When testing all of the push functionality I ran into the fact that the
FTP upload code did not recognize that I had VAGRANT_CWD set, so it
wasn't finding the right files to upload.
This should make everything work properly relative to that location.
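
A hedged sketch of the path handling, with illustrative names;
`Vagrant::Environment#cwd` honors VAGRANT_CWD:

```ruby
def files_to_upload(env, patterns)
  # Expand upload globs against the environment's cwd (which respects
  # VAGRANT_CWD) instead of the process working directory.
  root = env.cwd
  patterns.flat_map { |p| Dir.glob(File.expand_path(p, root)) }
end
```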