Most likely, you have run into the well-known vagrant-aws issue #72: errors with Amazon Linux EC2 images.
Edit 3 (February 2014): Vagrant 1.4.0 (released December 2013) and later support the boolean config.ssh.pty configuration setting. Set it to true to force Vagrant to use a PTY for provisioning. Vagrant creator Mitchell Hashimoto points out that you should not set config.ssh.pty in the global configuration but directly in the node-specific configuration.
This new option should fix the problem, and you will no longer need the workarounds listed below. (Note that I have not tested it myself yet.) For more details, see the Vagrant CHANGELOG; unfortunately, the config.ssh.pty setting is not yet documented under SSH Settings in the Vagrant docs.
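For illustration, a minimal node-level sketch of that setting (the machine name and box below are placeholders):

Vagrant.configure("2") do |config|
  config.vm.box = "dummy"

  # Set ssh.pty on the node, not in the global configuration, as recommended.
  config.vm.define "my-ec2-node" do |node|
    node.ssh.pty = true  # force a PTY for provisioning; requires Vagrant >= 1.4.0
  end
end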
Edit 2: Bad news. It seems that even the boothook does not always run (to update /etc/sudoers.d/ with !requiretty) before Vagrant attempts its rsync. During today's testing I again hit sporadic "mkdir -p /vagrant" errors when running vagrant up --no-provision. So we are back to the earlier conclusion: the most reliable fix appears to be a custom AMI image that already contains the /etc/sudoers.d fix.
Edit: It looks like I found a more reliable way to apply the fix: use a boothook. I manually confirmed that a script passed as a boothook is executed before Vagrant's rsync phase starts. So far this has worked reliably for me, and it removes the need to build a custom AMI image.
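A minimal sketch of such a boothook, passed to the instance via aws.user_data (the provider block and heredoc are illustrative; the payload simply applies the same sudoers fix discussed below):

config.vm.provider :aws do |aws, override|
  # cloud-init executes a #cloud-boothook payload early in boot, which is why
  # it usually beats Vagrant's rsync phase (but see Edit 2 above for the caveat).
  aws.user_data = <<-USERDATA
#cloud-boothook
#!/bin/sh
echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
  USERDATA
end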
Extra tip: if you also rely on cloud-config, you can create a MIME multi-part archive to combine the boothook with your cloud-config. You can get the latest write-mime-multipart helper script from GitHub.
Usage sketch:
$ cd /tmp
$ wget https://raw.github.com/lovelysystems/cloud-init/master/tools/write-mime-multipart
$ chmod +x write-mime-multipart
$ cat boothook.sh
You can then pass the contents of "combined.txt" to aws.user_data, for example via:
aws.user_data = File.read("/tmp/combined.txt")
Sorry for not mentioning this earlier, but I only came up with it today. :)
Original answer (see above for a better approach)
TL;DR: The most reliable solution is to "fix" the stock Amazon Linux AMI image, save it, and then use the customized AMI image in your Vagrantfile. See below for details.
Background
A potential workaround is described (and linked from the bug report above) at https://github.com/mitchellh/vagrant-aws/pull/70/files. In short, add the following to your Vagrantfile:
aws.user_data = "#!/bin/bash\necho 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty && chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty\nyum install -y puppet\n"
The important part is that this stops the OS from requiring a tty for the user ec2-user, which appears to be the root of the problem. I think the additional Puppet installation is not required for the actual fix (although Vagrant may use Puppet to provision the machine later, depending on how you have configured Vagrant).
My experience with the workaround described
I tried this workaround, but vagrant up still failed sporadically with the same error. It appears to be a "race condition": does Vagrant start its rsync phase before cloud-init (which processes aws.user_data) has applied the workaround for #72 on the machine? If Vagrant is faster, you see the same error; if cloud-init is faster, it works.
What does work (but requires more effort on your side)
What definitely works is to apply the fix on an instance launched from the stock Amazon Linux AMI and then save the modified instance (i.e. create an image snapshot) as a custom AMI image:
# Start an EC2 instance with a stock Amazon Linux AMI image and ssh-connect to it
$ sudo su - root
$ echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
$ chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
Then use this custom AMI image in your Vagrantfile instead of the stock one. The obvious drawback is that you are no longer using Amazon's stock AMI; whether that is a problem for you depends on your requirements.
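For completeness, using the custom AMI from the Vagrantfile is just a matter of pointing aws.ami at it (the ID below is a placeholder for the AMI you created above):

config.vm.provider :aws do |aws, override|
  aws.ami = "ami-xxxxxxxx"  # the custom AMI created from the snapshot above
end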
What I tried that did not work
For the record: I also tried passing a cloud-config to aws.user_data that used bootcmd to set !requiretty in the same way as the inline shell script above. According to the cloud-init docs, bootcmd runs "very early" in the boot sequence of an EC2 instance, so the idea was that the bootcmd instructions would execute before Vagrant tries to start its rsync phase. Unfortunately, I found that bootcmd is not implemented in the legacy cloud-init version of the current Amazon Linux AMI (for example, ami-05355a6c ships cloud-init 0.5.15-69.amzn1, but bootcmd was only introduced in 0.6.1).
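For reference, a cloud-config along these lines is what that attempt looked like (a sketch, not the exact file; it requires cloud-init >= 0.6.1, which is exactly what the stock Amazon Linux AMI lacked):

config.vm.provider :aws do |aws, override|
  # Sketch only: bootcmd needs cloud-init >= 0.6.1, so this failed on the stock AMI.
  aws.user_data = <<-USERDATA
#cloud-config
bootcmd:
 - echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-vagrant-cloud-init-requiretty
 - chmod 440 /etc/sudoers.d/999-vagrant-cloud-init-requiretty
  USERDATA
end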