Most votes on amazon-web-services questions 3

  • #21 What data is stored in Ephemeral Storage of Amazon EC2 instance?
  • #22 AWS ssh access 'Permission denied (publickey)' issue
  • #23 How to yum install Node.JS on Amazon Linux
  • #24 S3 Static Website Hosting Route All Paths to Index.html
  • #25 Cannot ping AWS EC2 instance
  • #26 Do you get charged for a 'stopped' instance on EC2?
  • #27 Setting up FTP on Amazon Cloud Server
  • #28 When to use Amazon Cloudfront or S3
  • #29 Listing contents of a bucket with boto3
  • #30 Add Keypair to existing EC2 instance

Read all of the top-voted questions and answers on a single page.

#21: What data is stored in Ephemeral Storage of Amazon EC2 instance? (Score: 299)

Created: 2012-07-19 Last updated: 2015-12-21

Tags: amazon-web-services, amazon-ec2, amazon-ebs

I am trying to stop an Amazon EC2 instance and get this warning message:

Warning: Please note that any data on the ephemeral storage of your instance will be lost when it is stopped.

My Question

What data is stored in ephemeral storage of an Amazon EC2 instance?

#21 Best answer 1 of What data is stored in Ephemeral Storage of Amazon EC2 instance? (Score: 268)

Created: 2013-04-11 Last updated: 2018-02-27

Basically, the root volume (your entire virtual system disk) is ephemeral, but only if you choose to create an AMI backed by the Amazon EC2 instance store.

If you choose to create an AMI backed by EBS, then your root volume is backed by EBS and everything on your root volume will be saved between reboots.

If you are not sure what type of volume you have, look under EC2->Elastic Block Store->Volumes in your AWS console; if your AMI root volume is listed there, you are safe. Alternatively, go to EC2->Instances and check the “Root device type” column for your instance: if it says “ebs”, you don’t have to worry about the data on your root device.

More details here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
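
If you’d rather check programmatically, here is a minimal boto3 sketch of the same check (it assumes your AWS credentials and default region are already configured):

import boto3

ec2 = boto3.client('ec2')
for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        # 'ebs' root volumes survive stop/start; 'instance-store' roots are ephemeral
        print(instance['InstanceId'], instance['RootDeviceType'])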

#21 Best answer 2 of What data is stored in Ephemeral Storage of Amazon EC2 instance? (Score: 152)

Created: 2012-07-19 Last updated: 2018-06-08

Anything that is not stored on an EBS volume that is mounted to the instance will be lost.

For example, if you mount your EBS volume at /mystuff, then anything not in /mystuff will be lost. If you don’t mount an EBS volume and save stuff on it, then I believe everything will be lost.

You can create an AMI from your current machine state, which will contain everything in your ephemeral storage. Then, when you launch a new instance based on that AMI it will contain everything as it is now.

Update: to clarify based on comments by mattgmg1990 and glenn bech:

Note that there is a difference between “stop” and “terminate”. If you “stop” an instance that is backed by EBS then the information on the root volume will still be in the same state when you “start” the machine again. According to the documentation, “By default, the root device volume and the other Amazon EBS volumes attached when you launch an Amazon EBS-backed instance are automatically deleted when the instance terminates” but you can modify that via configuration.
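
The “modify that via configuration” part can also be done through the API. A minimal boto3 sketch, assuming a placeholder instance ID and the common /dev/sda1 root device name (check yours first):

import boto3

ec2 = boto3.client('ec2')
instance_id = 'i-XXXXXXXX'  # placeholder instance ID

# Inspect the current DeleteOnTermination flag of each attached EBS volume
attr = ec2.describe_instance_attribute(InstanceId=instance_id,
                                       Attribute='blockDeviceMapping')
for mapping in attr['BlockDeviceMappings']:
    print(mapping['DeviceName'], mapping['Ebs']['DeleteOnTermination'])

# Keep the root volume around after termination ('/dev/sda1' is an assumption)
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    BlockDeviceMappings=[{'DeviceName': '/dev/sda1',
                          'Ebs': {'DeleteOnTermination': False}}])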

See also the original question on Stack Overflow.

#22: AWS ssh access 'Permission denied (publickey)' issue (Score: 295)

Created: 2009-09-21 Last updated: 2013-07-11

Tags: amazon-web-services, ssh-keys

How do I connect to an AWS instance through SSH?

I have:

  1. Signed up at AWS;
  2. Created a public key and a certificate on the AWS website and saved them to disk;
  3. Went to my console and created environment variables:
    $ export JAVA_HOME=/usr/lib/jvm/java-6-openjdk/
    $ export EC2_CERT=/home/default/aws/cert-EBAINCRNWHDSCWWIHSOKON2YWGJZ5LSQ.pem
    $ export EC2_PRIVATE_KEY=/home/default/aws/pk-EBAINCRNWHDSCWWIHSOKON2YWGJZ5LSQ.pem
  4. Told the AWS API to use this keypair and saved the keypair to a file:
    $ ec2-add-keypair ec2-keypair > ec2-keypair.pem
  5. Started an AWS Ubuntu 9 instance using this keypair:
    $ ec2-run-instances ami-ed46a784 -k ec2-keypair
  6. Attempted to establish an SSH connection to the instance:
 
    $ ssh -v -i ec2-keypair.pem root@ec2-174-129-185-190.compute-1.amazonaws.com
    OpenSSH_5.1p1 Debian-5ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug1: Connecting to ec2-174-129-185-190.compute-1.amazonaws.com [174.129.185.190] port 22.
    debug1: Connection established.
    debug1: identity file ec2-keypair.pem type -1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5ubuntu1
    debug1: match: OpenSSH_5.1p1 Debian-5ubuntu1 pat OpenSSH*
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-5ubuntu1
    debug1: SSH2_MSG_KEXINIT sent
    debug1: SSH2_MSG_KEXINIT received
    debug1: kex: server->client aes128-cbc hmac-md5 none
    debug1: kex: client->server aes128-cbc hmac-md5 none
    debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
    debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
    debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
    debug1: Host 'ec2-174-129-185-190.compute-1.amazonaws.com' is known and matches the RSA host key.
    debug1: Found key in /home/default/.ssh/known_hosts:11
    debug1: ssh_rsa_verify: signature correct
    debug1: SSH2_MSG_NEWKEYS sent
    debug1: expecting SSH2_MSG_NEWKEYS
    debug1: SSH2_MSG_NEWKEYS received
    debug1: SSH2_MSG_SERVICE_REQUEST sent
    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Trying private key: ec2-keypair.pem
    debug1: read PEM private key done: type RSA
    debug1: Authentications that can continue: publickey
    debug1: No more authentication methods to try.
    Permission denied (publickey).

What could be the problem, and how do I make it work?

#22 Best answer 1 of AWS ssh access 'Permission denied (publickey)' issue (Score: 523)

Created: 2009-09-21 Last updated: 2013-04-22

For Ubuntu instances:

chmod 600 ec2-keypair.pem
ssh -v -i ec2-keypair.pem ubuntu@ec2-174-129-185-190.compute-1.amazonaws.com

For other instances, you might have to use ec2-user instead of ubuntu.

Most EC2 Linux images I’ve used only have the root user created by default.
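
A quick sanity check before digging deeper: confirm the instance was actually launched with the key pair your .pem file belongs to. A minimal boto3 sketch (the instance ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')
resp = ec2.describe_instances(InstanceIds=['i-XXXXXXXX'])  # placeholder ID
instance = resp['Reservations'][0]['Instances'][0]
# KeyName must match the key pair your .pem file was downloaded for
print(instance.get('KeyName'), instance.get('PublicDnsName'))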

See also: http://www.youtube.com/watch?v=WBro0TEAd7g

#22 Best answer 2 of AWS ssh access 'Permission denied (publickey)' issue (Score: 93)

Created: 2010-12-03 Last updated: 2012-07-03

Now it’s:

ssh -v -i ec2-keypair.pem ubuntu@[yourdnsaddress]

See also the original question on Stack Overflow.

#23: How to yum install Node.JS on Amazon Linux (Score: 275)

Created: 2014-12-08 Last updated: 2020-02-05

Tags: node.js, amazon-web-services, npm, yum, amazon-linux

I’ve seen the writeup on using yum to install the dependencies, and then installing Node.JS & NPM from source. While this does work, I feel like Node.JS and NPM should both be in a public repo somewhere.

How can I install Node.JS and NPM in one command on AWS Amazon Linux?

#23 Best answer 1 of How to yum install Node.JS on Amazon Linux (Score: 403)

Created: 2014-12-08 Last updated: 2018-09-17

Stumbled onto this, was strangely hard to find again later. Putting here for posterity:

sudo yum install nodejs npm --enablerepo=epel

EDIT 3: As of July 2016, EDIT 1 no longer works for nodejs 4 (and neither does EDIT 2). This answer (https://stackoverflow.com/a/35165401/78935) gives a true one-liner.

EDIT 1: If you’re looking for nodejs 4, please try the EPEL testing repo:

sudo yum install nodejs --enablerepo=epel-testing

EDIT 2: To upgrade from nodejs 0.12 installed through the EPEL repo using the command above, to nodejs 4 from the EPEL testing repo, please follow these steps:

sudo yum remove nodejs
sudo rm -f /usr/local/bin/node
sudo yum install nodejs --enablerepo=epel-testing

The newer packages put the node binaries in /usr/bin, instead of /usr/local/bin.

And some background:

The option --enablerepo=epel causes yum to search for the packages in the EPEL repository.

EPEL (Extra Packages for Enterprise Linux) is an open-source, free, community-based repository project from the Fedora team which provides high-quality add-on software packages for Linux distributions including RHEL (Red Hat Enterprise Linux), CentOS, and Scientific Linux. The EPEL project is not part of RHEL/CentOS, but it is designed for major Linux distributions, providing lots of open-source packages for networking, system administration, programming, monitoring, and so on. Most EPEL packages are maintained by the Fedora repo.

Via http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/

#23 Best answer 2 of How to yum install Node.JS on Amazon Linux (Score: 249)

Created: 2016-02-02 Last updated: 2020-12-18

Like others, the accepted answer also gave me an outdated version.

Here is another way to do it that works very well:

$ curl --silent --location https://rpm.nodesource.com/setup_14.x | bash -
$ yum -y install nodejs

You can also replace the 14.x with another version, such as 12.x, 10.x, etc.

You can see all available versions on the NodeSource Github page, and pull from there as well if desired.

Note: you may need to run using sudo depending on your environment.

See also the original question on Stack Overflow.

#24: S3 Static Website Hosting Route All Paths to Index.html (Score: 272)

Created: 2013-04-28 Last updated: 2017-05-23

Tags: amazon-web-services, redirect, amazon-s3, routing, pushstate

I am using S3 to host a JavaScript app that will use HTML5 pushStates. The problem is that if the user bookmarks any of the URLs, they will not resolve to anything. What I need is the ability to take all URL requests and serve up the root index.html in my S3 bucket, rather than just doing a full redirect. Then my JavaScript application could parse the URL and serve the proper page.

Is there any way to tell S3 to serve the index.html for all URL requests instead of doing redirects? This would be similar to setting up Apache to handle all incoming requests by serving up a single index.html, as in this example: https://stackoverflow.com/a/10647521/1762614. I would really like to avoid running a web server just to handle these routes. Doing everything from S3 is very appealing.

#24 Best answer 1 of S3 Static Website Hosting Route All Paths to Index.html (Score: 378)

Created: 2016-02-27 Last updated: 2017-10-18

It’s very easy to solve without URL hacks, with CloudFront’s help. (A boto3 sketch of the same setup follows the steps below.)

  • Create S3 bucket, for example: react
  • Create CloudFront distributions with these settings:
    • Default Root Object: index.html
    • Origin Domain Name: S3 bucket domain, for example: react.s3.amazonaws.com
  • Go to Error Pages tab, click on Create Custom Error Response:
    • HTTP Error Code: 403: Forbidden (404: Not Found, in case of S3 Static Website)
    • Customize Error Response: Yes
    • Response Page Path: /index.html
    • HTTP Response Code: 200: OK
    • Click on Create
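
If you prefer to script this rather than click through the console, here is a minimal boto3 sketch of the same custom error response, assuming an already-created distribution (the distribution ID is a placeholder):

import boto3

cf = boto3.client('cloudfront')
dist_id = 'EXXXXXXXXXXXXX'  # placeholder distribution ID

resp = cf.get_distribution_config(Id=dist_id)
config = resp['DistributionConfig']
config['CustomErrorResponses'] = {
    'Quantity': 1,
    'Items': [{'ErrorCode': 403,  # use 404 for an S3 Static Website origin
               'ResponsePagePath': '/index.html',
               'ResponseCode': '200',
               'ErrorCachingMinTTL': 300}],
}
# Updates must carry the current ETag as IfMatch
cf.update_distribution(Id=dist_id, DistributionConfig=config,
                       IfMatch=resp['ETag'])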

#24 Best answer 2 of S3 Static Website Hosting Route All Paths to Index.html (Score: 206)

Created: 2013-06-01 Last updated: 2015-07-04

The way I was able to get this to work is as follows:

In the Edit Redirection Rules section of the S3 Console for your domain, add the following rules:

<RoutingRules>
  <RoutingRule>
    <Condition>
      <HttpErrorCodeReturnedEquals>404</HttpErrorCodeReturnedEquals>
    </Condition>
    <Redirect>
      <HostName>yourdomainname.com</HostName>
      <ReplaceKeyPrefixWith>#!/</ReplaceKeyPrefixWith>
    </Redirect>
  </RoutingRule>
</RoutingRules>

This will redirect all paths that result in a 404 not found to your root domain with a hash-bang version of the path. So http://yourdomainname.com/posts will redirect to http://yourdomainname.com/#!/posts provided there is no file at /posts.

To use HTML5 pushStates however, we need to take this request and manually establish the proper pushState based on the hash-bang path. So add this to the top of your index.html file:

<script>
  history.pushState({}, "entry page", location.hash.substring(1));
</script>

This grabs the hash and turns it into an HTML5 pushState. From this point on you can use pushStates to have non-hash-bang paths in your app.
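
The same routing rules can also be applied through the S3 API instead of the console. A minimal boto3 sketch (bucket and host names are placeholders):

import boto3

s3 = boto3.client('s3')
s3.put_bucket_website(
    Bucket='yourbucketname',  # placeholder bucket name
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'RoutingRules': [{
            'Condition': {'HttpErrorCodeReturnedEquals': '404'},
            'Redirect': {'HostName': 'yourdomainname.com',
                         'ReplaceKeyPrefixWith': '#!/'},
        }],
    })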

See also the original question on Stack Overflow.

#25: Cannot ping AWS EC2 instance (Score: 271)

Created: 2014-02-24 Last updated: 2018-02-14

Tags: amazon-web-services, amazon-ec2, aws-security-group

I have an EC2 instance running in AWS. When I try to ping it from my local box, it is not available.

How can I make the instance pingable?

#25 Best answer 1 of Cannot ping AWS EC2 instance (Score: 310)

Created: 2015-05-30 Last updated: 2017-11-01

Add a new EC2 security group inbound rule (a boto3 sketch of the same rule follows the list):

  • Type: Custom ICMP rule
  • Protocol: Echo Request
  • Port: N/A
  • Source: your choice (I would select Anywhere to be able to ping from any machine)
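
A minimal boto3 sketch of the same rule (the security group ID is a placeholder; for ICMP rules, FromPort carries the ICMP type and ToPort the ICMP code):

import boto3

ec2 = boto3.client('ec2')
ec2.authorize_security_group_ingress(
    GroupId='sg-XXXXXXXX',  # placeholder security group ID
    IpPermissions=[{'IpProtocol': 'icmp',
                    'FromPort': 8,   # ICMP type 8 = Echo Request
                    'ToPort': -1,    # any ICMP code
                    'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}])  # "Anywhere"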

#25 Best answer 2 of Cannot ping AWS EC2 instance (Score: 136)

Created: 2017-03-14 Last updated: 2017-05-20

A few years late but hopefully this will help someone else…

  1. First, make sure the EC2 instance has a public IP. If it has a Public DNS or Public IP address, you should be good. This will be the address you ping.

  2. Next, make sure the Amazon network rules allow Echo Requests. Go to the Security Group for the EC2 instance:

  • right click, select inbound rules
  • A: select Add Rule
  • B: Select Custom ICMP Rule - IPv4
  • C: Select Echo Request
  • D: Select either Anywhere or My IP
  • E: Select Save

[Screenshot: Add a Security Group ICMP Rule to allow Pings and Echos]

  3. Next, note that Windows Firewall blocks inbound Echo Requests by default. Allow Echo Requests by creating a Windows Firewall exception…
  • Go to Start and type Windows Firewall with Advanced Security
  • Select inbound rules

[Screenshot: Add a Windows Server ICMP Rule to allow Pings and Echos]

  4. Done! You should now be able to ping your server.

See also original question in stackoverflow

#26: Do you get charged for a 'stopped' instance on EC2? (Score: 266)

Created: 2010-03-30 Last updated: 2018-07-29

Tags: amazon-web-services, amazon-ec2

A bit confused here: I have an on-demand instance, but do I get charged even when I stop the instance?

#26 Best answer 1 of Do you get charged for a 'stopped' instance on EC2? (Score: 285)

Created: 2010-03-30 Last updated: 2014-04-23

No.

You get charged for:

  1. Online time
  2. Storage space (presumably you store the image on S3 [EBS])
  3. Elastic IP addresses
  4. Bandwidth

So… if you stop the EC2 instance you will only have to pay for the storage of the image on S3 (assuming you store an image, of course) and any IP addresses you’ve reserved.

#26 Best answer 2 of Do you get charged for a 'stopped' instance on EC2? (Score: 97)

Created: 2012-05-02

This may have changed since the question was asked, but there is a difference between stopping an instance and terminating an instance.

If your instance is EBS-based, it can be stopped. It will remain in your account, but you will not be charged for it (you will continue to be charged for EBS storage associated with the instance and unused Elastic IP addresses). You can re-start the instance at any time.

If the instance is terminated, it will be deleted from your account. You’ll be charged for any remaining EBS volumes, but by default the associated EBS volume will be deleted. This can be configured when you create the instance using the command-line EC2 API Tools.

See also the original question on Stack Overflow.

#27: Setting up FTP on Amazon Cloud Server (Score: 259)

Created: 2011-08-13 Last updated: 2018-01-31

Tags: linux, amazon-web-services, amazon-s3, amazon-ec2, ftp

I am trying to set up FTP on an Amazon Cloud Server, but without luck. I have searched the net and found no concrete steps for how to do it.

I found these commands to run:

$ yum install vsftpd
$ ec2-authorize default -p 20-21
$ ec2-authorize default -p 1024-1048
$ vi /etc/vsftpd/vsftpd.conf
# --- Add following lines at the end of file ---
    pasv_enable=YES
    pasv_min_port=1024
    pasv_max_port=1048
    pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart

But I don’t know where to write them.

#27 Best answer 1 of Setting up FTP on Amazon Cloud Server (Score: 573)

Created: 2012-07-09 Last updated: 2016-04-26

Jaminto did a great job of answering the question, but I recently went through the process myself and wanted to expand on Jaminto’s answer.

I’m assuming that you already have an EC2 instance created and have associated an Elastic IP Address to it.


##Step #1: Install vsftpd##

SSH to your EC2 server. Type:

> sudo yum install vsftpd

This should install vsftpd.

##Step #2: Open up the FTP ports on your EC2 instance##

Next, you’ll need to open up the FTP ports on your EC2 server. Log in to the AWS EC2 Management Console and select Security Groups from the navigation tree on the left. Select the security group assigned to your EC2 instance. Then select the Inbound tab, then click Edit:

Add two Custom TCP Rules with port ranges 20-21 and 1024-1048. For Source, you can select ‘Anywhere’. If you decide to set Source to your own IP address, be aware that your IP address might change if it is being assigned via DHCP.
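
The deprecated ec2-authorize commands from the question map onto the current API as well. A minimal boto3 sketch of the same two rules (the security group ID is a placeholder):

import boto3

ec2 = boto3.client('ec2')
ec2.authorize_security_group_ingress(
    GroupId='sg-XXXXXXXX',  # placeholder security group ID
    IpPermissions=[
        {'IpProtocol': 'tcp', 'FromPort': 20, 'ToPort': 21,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
        {'IpProtocol': 'tcp', 'FromPort': 1024, 'ToPort': 1048,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
    ])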


##Step #3: Make updates to the vsftpd.conf file##

Edit your vsftpd conf file by typing:

> sudo vi /etc/vsftpd/vsftpd.conf

Disable anonymous FTP by changing this line:

anonymous_enable=YES

to

anonymous_enable=NO

Then add the following lines to the bottom of the vsftpd.conf file:

pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance> 

Your vsftpd.conf file should now include the lines above - just make sure to replace pasv_address with your public-facing IP address.

To save changes, press escape, then type :wq, then hit enter.



##Step #4: Restart vsftpd##

Restart vsftpd by typing:

> sudo /etc/init.d/vsftpd restart

You should see a message confirming that vsftpd restarted.


If this doesn't work, try:
> sudo /sbin/service vsftpd restart



##Step #5: Create an FTP user##

If you take a peek at /etc/vsftpd/user_list, you’ll see the following:

# vsftpd userlist
# If userlist_deny=NO, only allow users in this file
# If userlist_deny=YES (default), never allow users in this file, and
# do not even prompt for a password.
# Note that the default vsftpd pam config also checks /etc/vsftpd/ftpusers
# for users that are denied.
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody

This is basically saying, “Don’t allow these users FTP access.” vsftpd will allow FTP access to any user not on this list.

So, in order to create a new FTP account, you may need to create a new user on your server. (Or, if you already have a user account that’s not listed in /etc/vsftpd/user_list, you can skip to the next step.)

Creating a new user on an EC2 instance is pretty simple. For example, to create the user ‘bret’, type:

> sudo adduser bret
> sudo passwd bret




##Step #6: Restricting users to their home directories##

At this point, your FTP users are not restricted to their home directories. That’s not very secure, but we can fix it pretty easily.

Edit your vsftpd conf file again by typing:

> sudo vi /etc/vsftpd/vsftpd.conf

Uncomment the line:

chroot_local_user=YES


Restart the vsftpd server again like so:

> sudo /etc/init.d/vsftpd restart

All done!


##Appendix A: Surviving a reboot##

vsftpd doesn’t automatically start when your server boots. If you’re like me, that means that after rebooting your EC2 instance, you’ll feel a moment of terror when FTP seems to be broken - but in reality, it’s just not running! Here’s a handy way to fix that:

> sudo chkconfig --level 345 vsftpd on

Alternatively, if you are using Red Hat, another way to manage your services is with this nifty graphical user interface to control which services should automatically start:

>  sudo ntsysv

Now vsftpd will automatically start up when your server boots up.


##Appendix B: Changing a user's FTP home directory##

*** NOTE: Iman Sedighi has posted a more elegant solution for restricting users’ access to a specific directory. Please refer to his excellent solution posted as an answer ***

You might want to create a user and restrict their FTP access to a specific folder, such as /var/www. In order to do this, you’ll need to change the user’s default home directory:

> sudo usermod -d /var/www/ username

In this specific example, it’s typical to give the user permissions to the ‘www’ group, which is often associated with the /var/www folder:

> sudo usermod -a -G www username

#27 Best answer 2 of Setting up FTP on Amazon Cloud Server (Score: 27)

Created: 2011-08-13

To enable passive ftp on an EC2 server, you need to configure the ports that your ftp server should use for inbound connections, then open a list of available ports for the ftp client data connections.

I’m not that familiar with Linux, but the commands you posted are the steps to install the FTP server, configure the EC2 firewall rules (through the AWS API), and then configure the FTP server to use the ports you allowed on the EC2 firewall.

So this step installs the FTP server (vsftpd):

> yum install vsftpd

These steps configure the FTP server:

> vi /etc/vsftpd/vsftpd.conf
--    Add following lines at the end of file --
     pasv_enable=YES
     pasv_min_port=1024
     pasv_max_port=1048
     pasv_address=<Public IP of your instance> 
> /etc/init.d/vsftpd restart

But the other two steps are more easily done through the Amazon console under EC2 Security Groups. There you need to configure the security group that is assigned to your server to allow connections on ports 20-21 and 1024-1048.

See also the original question on Stack Overflow.

#28: When to use Amazon Cloudfront or S3 (Score: 252)

Created: 2010-07-25 Last updated: 2020-08-25

Tags: amazon-web-services, amazon-s3, amazon-cloudfront

Are there use cases that lend themselves better to Amazon CloudFront over S3, or the other way around? I’m trying to understand the difference between the two through examples.

#28 Best answer 1 of When to use Amazon Cloudfront or S3 (Score: 396)

Created: 2010-07-25 Last updated: 2016-02-11

Amazon S3 is designed for large-capacity, low-cost file storage in one specific geographical region.* The storage and bandwidth costs are quite low.

Amazon CloudFront is a Content Delivery Network (CDN) which proxies and caches web data at edge locations as close to users as possible.

When end users request an object using this domain name, they are automatically routed to the nearest edge location for high performance delivery of your content. (Amazon)

The data served by CloudFront may or may not come from S3. Since it is more optimized for delivery speed, the bandwidth costs a little more.

If your user base is localized, you won’t see too much difference working with S3 or CloudFront (but you have to choose the right location for your S3 bucket: US, EU, APAC). If your user base is spread globally and speed is important, CloudFront may be a better option.

Both S3 and CloudFront allow domain aliases, however CloudFront allows multiple aliases so that d1.mystatics.com, d2.mystatics.com and d3.mystatics.com could all point to the same location increasing the capacity for parallel downloads (this used to be recommended by Google but with the introduction of SPDY and HTTP/2 is of lesser importance).

CloudFront also supports CORS as of 2014 (thanks sergiopantoja).

* Note: S3 can now automatically replicate to additional regions as of 2015.

#28 Best answer 2 of When to use Amazon Cloudfront or S3 (Score: 59)

Created: 2017-01-31 Last updated: 2018-12-03

CloudFront and an S3 bucket are not the same thing. In layman’s terms: CloudFront enables you to accelerate content delivery of your web contents via a Content Delivery Network (CDN) in edge locations, whereas S3 buckets are where you store your actual files. CloudFront origins are not necessarily S3 buckets, but S3 integrates easily with CloudFront.

See also the original question on Stack Overflow.

#29: Listing contents of a bucket with boto3 (Score: 250)

Created: 2015-05-14 Last updated: 2021-02-05

Tags: python, amazon-web-services, amazon-s3, boto3, boto

How can I see what’s inside a bucket in S3 with boto3? (i.e. do an "ls")?

Doing the following:

import boto3
s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some/path/')

returns:

s3.Bucket(name='some/path/')

How do I see its contents?

#29 Best answer 1 of Listing contents of a bucket with boto3 (Score: 303)

Created: 2015-05-15 Last updated: 2019-05-21

One way to see the contents would be:

for my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object)
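
Note that the 'some/path/' from the question cannot be a bucket name (bucket names cannot contain slashes); it is a key prefix inside a bucket. A sketch with placeholder names:

import boto3

s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some-bucket')  # the bucket name only
for obj in my_bucket.objects.filter(Prefix='some/path/'):  # the "path" part
    print(obj.key)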

#29 Best answer 2 of Listing contents of a bucket with boto3 (Score: 115)

Created: 2015-05-15 Last updated: 2017-04-05

This is similar to an ‘ls’ but it does not take into account the prefix folder convention and will list the objects in the bucket. It’s left up to the reader to filter out prefixes which are part of the Key name.

In Python 2:

from boto.s3.connection import S3Connection

conn = S3Connection() # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)

In Python 3:

from boto3 import client

conn = client('s3')  # again assumes boto.cfg setup, assume AWS S3
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])
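
One caveat: list_objects returns at most 1000 keys per call, and 'Contents' is absent when there are no results. For larger buckets, a paginator sketch:

from boto3 import client

paginator = client('s3').get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='bucket_name'):
    for obj in page.get('Contents', []):  # 'Contents' is missing on empty pages
        print(obj['Key'])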

See also the original question on Stack Overflow.

#30: Add Keypair to existing EC2 instance (Score: 250)

Created: 2010-07-15 Last updated: 2021-01-26

Tags: amazon-web-services, authentication, ssh, amazon-ec2, permissions

I was given AWS Console access to an account with 2 instances running that I cannot shut down (in production). I would, however, like to gain SSH access to these instances, is it possible to create a new Keypair and apply it to the instances so I can SSH in? Obtaining the existing pem file for the keypair the instances were created under is currently not an option.

If this isn’t possible is there some other way I can get into the instances?

#30 Best answer 1 of Add Keypair to existing EC2 instance (Score: 178)

Created: 2010-07-16 Last updated: 2013-04-17

You can’t apply a keypair to a running instance. You can only use the new keypair to launch a new instance.

For recovery, if it’s an EBS-boot AMI, you can stop the instance and make a snapshot of its volume, then create a new volume based on that snapshot. You can use that volume to start the old instance again, create a new image, or recover your data.

Data on ephemeral storage, though, will be lost.


Due to the popularity of this question and answer, I wanted to capture the information in the link that Rodney posted on his comment.

Credit goes to Eric Hammond for this information.

Fixing Files on the Root EBS Volume of an EC2 Instance

You can examine and edit files on the root EBS volume on an EC2 instance even if you are in what you considered a disastrous situation like:

  • You lost your ssh key or forgot your password
  • You made a mistake editing the /etc/sudoers file and can no longer gain root access with sudo to fix it
  • Your long running instance is hung for some reason, cannot be contacted, and fails to boot properly
  • You need to recover files off of the instance but cannot get to it

On a physical computer sitting at your desk, you could simply boot the system with a CD or USB stick, mount the hard drive, check out and fix the files, then reboot the computer to be back in business.

A remote EC2 instance, however, seems distant and inaccessible when you are in one of these situations. Fortunately, AWS provides us with the power and flexibility to be able to recover a system like this, provided that we are running EBS boot instances and not instance-store.

The approach on EC2 is somewhat similar to the physical solution, but we’re going to move and mount the faulty “hard drive” (root EBS volume) to a different instance, fix it, then move it back.

In some situations, it might simply be easier to start a new EC2 instance and throw away the bad one, but if you really want to fix your files, here is the approach that has worked for many:

Setup

Identify the original instance (A) and volume that contains the broken root EBS volume with the files you want to view and edit.

instance_a=i-XXXXXXXX

volume=$(ec2-describe-instances $instance_a |
  egrep '^BLOCKDEVICE./dev/sda1' | cut -f3)

Identify the second EC2 instance (B) that you will use to fix the files on the original EBS volume. This instance must be running in the same availability zone as instance A so that it can have the EBS volume attached to it. If you don’t have an instance already running, start a temporary one.

instance_b=i-YYYYYYYY

Stop the broken instance A (waiting for it to come to a complete stop), detach the root EBS volume from the instance (waiting for it to be detached), then attach the volume to instance B on an unused device.

ec2-stop-instances $instance_a
ec2-detach-volume $volume
ec2-attach-volume --instance $instance_b --device /dev/sdj $volume
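
The ec2-api-tools commands above have direct boto3 equivalents, with waiters standing in for the manual “waiting for it to stop/detach” steps. A minimal sketch with placeholder IDs:

import boto3

ec2 = boto3.client('ec2')
instance_a, instance_b = 'i-XXXXXXXX', 'i-YYYYYYYY'  # placeholder instance IDs
volume = 'vol-ZZZZZZZZ'                              # placeholder root volume ID

ec2.stop_instances(InstanceIds=[instance_a])
ec2.get_waiter('instance_stopped').wait(InstanceIds=[instance_a])

ec2.detach_volume(VolumeId=volume)
ec2.get_waiter('volume_available').wait(VolumeIds=[volume])

ec2.attach_volume(VolumeId=volume, InstanceId=instance_b, Device='/dev/sdj')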

ssh to instance B and mount the volume so that you can access its file system.

ssh ...instance b...

sudo mkdir -m 000 /vol-a
sudo mount /dev/sdj /vol-a

Fix It

At this point your entire root file system from instance A is available for viewing and editing under /vol-a on instance B. For example, you may want to:

  • Put the correct ssh keys in /vol-a/home/ubuntu/.ssh/authorized_keys
  • Edit and fix /vol-a/etc/sudoers
  • Look for error messages in /vol-a/var/log/syslog
  • Copy important files out of /vol-a/…

Note: The uids on the two instances may not be identical, so take care if you are creating, editing, or copying files that belong to non-root users. For example, your mysql user on instance A may have the same UID as your postfix user on instance B which could cause problems if you chown files with one name and then move the volume back to A.

Wrap Up

After you are done and you are happy with the files under /vol-a, unmount the file system (still on instance-B):

sudo umount /vol-a
sudo rmdir /vol-a

Now, back on your system with ec2-api-tools, continue moving the EBS volume back to its home on the original instance A and start the instance again:

ec2-detach-volume $volume
ec2-attach-volume --instance $instance_a --device /dev/sda1 $volume
ec2-start-instances $instance_a

Hopefully, you fixed the problem, instance A comes up just fine, and you can accomplish what you originally set out to do. If not, you may need to continue repeating these steps until you have it working.

Note: If you had an Elastic IP address assigned to instance A when you stopped it, you’ll need to reassociate it after starting it up again.

Remember! If your instance B was temporarily started just for this process, don’t forget to terminate it now.

#30 Best answer 2 of Add Keypair to existing EC2 instance (Score: 93)

Created: 2013-04-08 Last updated: 2019-04-30

Though you can’t add a key pair to a running EC2 instance directly, you can create a Linux user with their own new key pair, then use it like you would the original user’s key pair.

In your case, you can ask the instance owner (who created it) to do the following. That way, the instance owner doesn’t have to share his own keys with you, but you would still be able to ssh into these instances. These steps were originally posted by Utkarsh Sengar (aka @zengr) at http://utkarshsengar.com/2011/01/manage-multiple-accounts-on-1-amazon-ec2-instance/. I’ve made only a few small changes.

  1. Step 1: log in as the default “ubuntu” user:

     $ ssh -i my_orig_key.pem ubuntu@<your-ec2-host>
    
  2. Step 2: create a new user, we will call our new user “john”:

     [ubuntu@ec2-host ~]$ sudo adduser john
    

    Set password for “john” by:

     [ubuntu@ec2-host ~]$ sudo su -
     [root@ec2-host ubuntu]# passwd john
    

    Add “john” to sudoer’s list by:

     [root@ec2-host ubuntu]# visudo
    

    .. and add the following to the end of the file:

     john   ALL = (ALL)    ALL
    

    Alright! We have our new user created; now you need to generate the key file which will be needed to log in, like we have my_orig_key.pem in Step 1.

    Now, exit and go back to ubuntu, out of root.

     [root@ec2-host ubuntu]# exit
     [ubuntu@ec2-host ~]$
    
  3. Step 3: creating the public and private keys:

     [ubuntu@ec2-host ~]$ su john
    

    Enter the password you created for “john” in Step 2. Then create a key pair. Remember that the passphrase for the key pair should be at least 4 characters.

     [john@ec2-host ubuntu]$ cd /home/john/
     [john@ec2-host ~]$ ssh-keygen -b 1024 -f john -t dsa
     [john@ec2-host ~]$ mkdir .ssh
     [john@ec2-host ~]$ chmod 700 .ssh
     [john@ec2-host ~]$ cat john.pub > .ssh/authorized_keys
     [john@ec2-host ~]$ chmod 600 .ssh/authorized_keys
     [john@ec2-host ~]$ sudo chown john:ubuntu .ssh
    

    In the above step, john is the user we created and ubuntu is the default user group.

     [john@ec2-host ~]$ sudo chown john:ubuntu .ssh/authorized_keys
    
  4. Step 4: now you just need to download the key called “john”. I use scp to download/upload files from EC2; here is how you can do it.

    You will still need to copy the file using the ubuntu user, since you only have the key for that user name. So, you will need to move the key to the ubuntu folder and chmod it to 777.

     [john@ec2-host ~]$ sudo cp john /home/ubuntu/
     [john@ec2-host ~]$ sudo chmod 777 /home/ubuntu/john
    

    Now come to local machine’s terminal, where you have my_orig_key.pem file and do this:

     $ cd ~/.ssh
     $ scp -i my_orig_key.pem ubuntu@<your-ec2-host>:/home/ubuntu/john john
    

    The above command will copy the key “john” to the present working directory on your local machine. Once you have copied the key to your local machine, you should delete “/home/ubuntu/john”, since it’s a private key.

    Now, on your local machine, chmod john to 600.

     $ chmod 600 john
    
  5. Step 5: time to test your key:

     $ ssh -i john john@<your-ec2-host>
    

So, in this manner, you can set up multiple users to use one EC2 instance!

See also the original question on Stack Overflow.


Notes:
  1. This page uses an API to get the relevant data from the Stack Overflow community.
  2. Content license on this page is CC BY-SA 3.0.
  3. score = upvotes - downvotes.