This works from my local terminal:
ssh -i ~/.ec2/mykey.pem ubuntu@ec2-yada-yada.amazonaws.com ls
Yes, of course it does. But when I try to do the same with Node.js's child_process.spawn, it complains that the key does not exist / is not accessible.
// child process
var childProcess = require('child_process').spawn;

// spawn the slave using slaveId as the key
slaves[slaveId] = childProcess('ssh', [
    '-i /mykey.pem',
    ' ubuntu@ec2-yada.amazonaws.com ',
    'ls'
]);
Result:
stderr: Warning: Identity file /mykey.pem not accessible: No such file or directory.
stderr: Permission denied (publickey).
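For comparison, here is a minimal sketch of an equivalent spawn call with each flag and value passed as its own array element and the key path written out in full (the path and host below are placeholders, not my real values):

// spawn does not go through a shell, so each flag/value must be its own
// array element with no embedded spaces (placeholder path and host)
var spawn = require('child_process').spawn;
var child = spawn('ssh', [
    '-i', '/home/me/.ec2/mykey.pem',
    'ubuntu@ec2-yada.amazonaws.com',
    'ls'
]);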
Things I have verified:
Variations on the path to the key:
/actual/path/to/mykey.pem
mykey.pem (with a copy of the file in the root directory of the node project)
/mykey.pem (with a copy of the file in the root of the node project)
~/.ec2/mykey.pem (where it actually lives; see the path sketch after this list)
Running the command without the ssh part, i.e. childProcess('ls'), works.
chmod 644, 600, 400, etc. on mykey.pem
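One caveat about the ~ variant above: since spawn does not invoke a shell, ~ is never expanded for it. A minimal sketch of building the absolute key path explicitly (assuming HOME is set):

// resolve ~/.ec2/mykey.pem without relying on shell expansion
var path = require('path');
var keyPath = path.join(process.env.HOME, '.ec2', 'mykey.pem');
console.log(keyPath); // e.g. /home/me/.ec2/mykey.pem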
My only theory at the moment is that the problem is with passing a path to a file, and that I need to do something with the fs module (?). And yes, I know there are libraries for SSH access from Node, but they are password-based, which won't cut it, and in any case my requirements don't justify pulling in a library.
Please tell me I'm being stupid and that this is possible.
UPDATE:
OK, so I can use exec as follows:
var childProcess = require('child_process').exec;

slaves[slaveId] = childProcess(
    'ssh -i mykey.pem ubuntu@ec2-yada.amazonaws.com ls',
    function (error, stdout, stderr) {...}
);
However, I feel like I have been downgraded from creating a true slave with fork, with all its nice messaging and convenient properties (my initial implementation, which runs fine locally), to handing a vacuum cleaner a job and telling it to do all the work by itself (now that I want to run slaves on remote hosts).
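For what it is worth, a sketch of what is still available on a spawned ssh child: there is no fork-style IPC channel to a remote process, but the ChildProcess object still exposes stdout/stderr streams and an exit event that hand-rolled messaging can be built on (path and host are placeholders):

var spawn = require('child_process').spawn;
var slave = spawn('ssh', [
    '-i', '/home/me/.ec2/mykey.pem',
    'ubuntu@ec2-yada.amazonaws.com',
    'ls'
]);
slave.stdout.on('data', function (chunk) {
    // parse whatever the remote command prints as a makeshift "message"
    console.log('remote says: ' + chunk);
});
slave.on('exit', function (code) {
    console.log('remote slave exited with code ' + code);
});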