Most votes on amazon-web-services questions 8

#71 How to Configure SSL for Amazon S3 bucket
#72 Send Test Email fails with Email address is not verified
#73 What is the difference between Amazon S3 and Amazon EC2 instance?
#74 .htaccess not working apache
#75 Configuring region in Node.js AWS SDK
#76 How to use multiple AWS accounts from the command line?
#77 How to save S3 object to a file using boto3
#78 Find region from within an EC2 instance
#79 Add EBS to Ubuntu EC2 Instance
#80 How To Set Up GUI On Amazon EC2 Ubuntu server

Read all the top-voted questions and answers on a single page.

#71: How to Configure SSL for Amazon S3 bucket (Score: 145)

Created: 2012-06-26 Last updated: 2019-03-22

Tags: ssl, amazon-s3, amazon-web-services, bucket

I am using an Amazon S3 bucket for uploading and downloading of data using my .NET application. Now my question is: I want to access my S3 bucket using SSL. Is it possible to implement SSL for an Amazon S3 bucket?

#71 Best answer 1 of How to Configure SSL for Amazon S3 bucket (Score: 152)

Created: 2012-06-26 Last updated: 2016-08-05

You can access your files via SSL by using the https:// form of the S3 endpoint, for example https://s3.amazonaws.com/your-bucket/your-file.

If you use a custom domain for your bucket, you can use S3 and CloudFront together with your own SSL certificate (or generate a free one via AWS Certificate Manager).
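As an illustration of the two HTTPS addressing styles S3 supports (path-style and virtual-hosted-style), here is a minimal sketch; the bucket and key names are made up:

```python
def s3_https_urls(bucket, key):
    """Return the two common HTTPS URL styles for an S3 object."""
    # Path-style: the bucket appears in the path.
    path_style = f"https://s3.amazonaws.com/{bucket}/{key}"
    # Virtual-hosted-style: the bucket appears in the hostname.
    virtual_hosted = f"https://{bucket}.s3.amazonaws.com/{key}"
    return path_style, virtual_hosted

print(s3_https_urls("my-bucket", "data/report.csv"))
```

Note that a dot in the bucket name breaks the match against the wildcard certificate for *.s3.amazonaws.com in the virtual-hosted style, which is one more reason to front the bucket with CloudFront and your own certificate.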

#71 Best answer 2 of How to Configure SSL for Amazon S3 bucket (Score: 28)

Created: 2013-06-12 Last updated: 2014-08-27

Custom domain SSL certs were just added today for $600/cert/month; you have to sign up for an invite.

Update: SNI customer-provided certs are now available for no additional charge. Much cheaper than $600/mo, and with Windows XP nearly killed off, it should work well for most use cases.

@skalee AWS has a mechanism for achieving what the poster asks for, “implement SSL for an Amazon S3 bucket”: it’s called CloudFront. I’m reading “implement” as “use my SSL certs,” not “just put an S on the HTTP URL,” which I’m sure the OP could have surmised.

Since CloudFront costs exactly the same as S3 ($0.12/GB), but has a ton of additional features around SSL and allows you to add your own SNI cert at no additional cost, it’s the obvious fix for “implementing SSL” on your domain.

See also the original question on Stack Overflow

#72: Send Test Email fails with Email address is not verified (Score: 144)

Created: 2016-05-30

Tags: amazon-web-services, amazon-ses

I want to use Amazon’s Simple Email Service to send emails.

I verified my domain as well as the email address I want to send from.

For both it says verified.

Now when I use the Send Test Email from the AWS Console to send a test email to [email protected], I only get the error message:

Email address is not verified. The following identities failed the check in region EU-WEST-1: [email protected] (Request ID: 9fb78de1-2673-11e6-bbbc-5f819fabe4f4)

Now this puzzles me, because it says [email protected] was not verified, but I tried to send from [email protected]. The Send Test Email dialog even forces you to use an email address which is already registered.

How can this issue be resolved? Did I miss anything?

#72 Best answer 1 of Send Test Email fails with Email address is not verified (Score: 255)

Created: 2016-05-30

When your SES account is in “sandbox” mode, you can:

  1. Only send from verified domains and email addresses, and
  2. Only send to verified domains and email addresses

In order to send to anyone else, you must move your account out of sandbox mode by contacting AWS support and requesting it.
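The same test send can be sketched with boto3; this is only a sketch, the addresses and region are placeholders, and the commented-out call requires real credentials:

```python
# Sketch only: actually sending requires AWS credentials and verified identities.
def build_send_kwargs(source, to_addresses, subject, body):
    """Build the keyword arguments for the SES send_email call."""
    return {
        "Source": source,
        "Destination": {"ToAddresses": to_addresses},
        "Message": {
            "Subject": {"Data": subject},
            "Body": {"Text": {"Data": body}},
        },
    }

kwargs = build_send_kwargs("sender@example.com", ["recipient@example.com"],
                           "SES test", "Hello from SES")

# With credentials configured, the actual call would be:
# import boto3
# ses = boto3.client("ses", region_name="eu-west-1")
# ses.send_email(**kwargs)  # in sandbox mode this raises MessageRejected
#                           # ("Email address is not verified") for an
#                           # unverified recipient
```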

#72 Best answer 2 of Send Test Email fails with Email address is not verified (Score: 9)

Created: 2017-10-09

If the email is already verified and you’re out of the SES sandbox, check that you have the correct AWS region for the SMTP server. I was trying to connect to one region’s SMTP server when my SMTP credentials were for another region’s server.


See also the original question on Stack Overflow

#73: What is the difference between Amazon S3 and Amazon EC2 instance? (Score: 144)

Created: 2013-01-18

Tags: amazon-web-services

I need to create a web application using PHP, MySQL and HTML. The number of requests and the amount of data will be very high, so I need Amazon server space.

I read the Amazon documentation and found that S3 is a storage that provides a simple web services interface. EC2 is a web service that provides resizable compute capacity in the cloud.

Can I purchase S3 and run PHP and query my database?

Please tell me the difference between Amazon S3 and an Amazon EC2 instance.

#73 Best answer 1 of What is the difference between Amazon S3 and Amazon EC2 instance? (Score: 202)

Created: 2013-01-18

An EC2 instance is like a remote computer running Windows or Linux and on which you can install whatever software you want, including a Web server running PHP code and a database server.

Amazon S3 is just a storage service, typically used to store large binary files. Amazon also has other storage and database services, like RDS for relational databases and DynamoDB for NoSQL.

#73 Best answer 2 of What is the difference between Amazon S3 and Amazon EC2 instance? (Score: 30)

Created: 2017-11-26

Amazon EC2

It’s kind of like a regular computer hosted in one of AWS’s data centers, and as part of that it has a hard drive, or local storage. That storage is not permanent: because instances are added and removed as you scale up and down (to maintain elasticity), anything you want to keep long term should not live on an EC2 instance’s hard drive, or you can potentially lose that data as instances come and go. EC2 is meant for deploying your application on a server (using its processing power), with that server serving content and data through S3 and RDS, respectively. Hence, Amazon EC2 is good for any type of processing activity.

Amazon S3

Take Netflix as an example: they store millions of physical video files, and multiple versions of each, and those files have to live somewhere. That’s where S3 comes into play. Amazon S3 is the storage platform of AWS, sometimes described as a large, effectively unlimited storage bucket (the limits are very high). S3 is the perfect place for storing docs, movies, music, apps, pictures, anything you want to store: just dump it onto S3. There are multiple redundancies and backups of the files you put there, so you are always going to have high availability for any files you decide to store on S3.

Uses of S3:

  1. Mass storage container
  2. Long-Term Storage

So, as a total failsafe, Amazon S3 is the perfect place for anything you want to keep for a long time: it has loads of redundancy and is great because it is basically unlimited storage. Amazon S3 is where Netflix stores the thousands of petabytes of video files they have to keep. In short, Amazon S3 is a massive storage bucket.

See also the original question on Stack Overflow

#74: .htaccess not working apache (Score: 143)

Created: 2012-08-30 Last updated: 2018-03-25

Tags: apache, .htaccess, amazon-ec2, amazon-web-services, apache-config

I have a server from the AWS EC2 service running Ubuntu Linux, and I have installed Apache, PHP, and MySQL.

I have added a .htaccess file in my document root /var/www/html.

I entered this code in it: ErrorDocument 404 /var/www/html/404.php and it is still not showing up.

I kept entering this command multiple times: sudo service httpd restart to restart the server, but no changes displayed…

How can I fix this… Did I do something wrong?

Thanks in advance!

#74 Best answer 1 of .htaccess not working apache (Score: 308)

Created: 2012-08-30

First, note that restarting httpd is not necessary for .htaccess files. .htaccess files are specifically for people who don’t have root, i.e. don’t have access to the httpd server config file and can’t restart the server. As you’re able to restart the server, you don’t need .htaccess files and can use the main server config directly.

Secondly, if .htaccess files are being ignored, you need to check that AllowOverride is set correctly; see the Apache documentation on AllowOverride for details. You also need to ensure that it is set in the correct scope, i.e. in the right <Directory> block in your configuration. Be sure you’re NOT editing the one in the <Directory /> block, for example.

Third, if you want to ensure that a .htaccess file is in fact being read, put garbage in it. An invalid line, such as “INVALID LINE HERE”, in your .htaccess file, will result in a 500 Server Error when you point your browser at the directory containing that file. If it doesn’t, then you don’t have AllowOverride configured correctly.
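For reference, the kind of stanza described above might look like this in the main server config; this is a sketch assuming the document root from the question, not a drop-in file:

```apache
<Directory /var/www/html>
    # Let .htaccess files in the document root override settings
    AllowOverride All
    Require all granted
</Directory>
```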

#74 Best answer 2 of .htaccess not working apache (Score: 121)

Created: 2013-10-16 Last updated: 2018-08-16

  1. Enable Apache mod_rewrite module

    a2enmod rewrite

  2. add the following code to /etc/apache2/sites-available/default

    AllowOverride All

  3. Restart apache

    /etc/init.d/apache2 restart

See also the original question on Stack Overflow

#75: Configuring region in Node.js AWS SDK (Score: 143)

Created: 2015-06-25 Last updated: 2016-09-19

Tags: javascript, node.js, amazon-web-services, aws-sdk

Can someone explain how to fix a missing config error with Node.js? I’ve followed all the examples from the aws doc page but I still get this error no matter what.

{ [ConfigError: Missing region in config]
message: 'Missing region in config',
code: 'ConfigError',
time: Wed Jun 24 2015 21:39:58 GMT-0400 (EDT) }>{ thumbnail: 
 { fieldname: 'thumbnail',
 originalname: 'testDoc.pdf',
 name: 'testDoc.pdf',
 encoding: '7bit',
 mimetype: 'application/pdf',
path: 'uploads/testDoc.pdf',
 extension: 'pdf',
 size: 24,
 truncated: false,
 buffer: null } }
 POST / 200 81.530 ms - -

Here is my code:

var express = require('express');
var router = express.Router();
var AWS = require('aws-sdk');
var dd = new AWS.DynamoDB();
var s3 = new AWS.S3();
var bucketName = 'my-bucket';



#75 Best answer 1 of Configuring region in Node.js AWS SDK (Score: 234)

Created: 2015-07-28

How about changing the order of statements? Update the AWS config before instantiating s3 and dd:

var AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-1'}); // pick your region

var dd = new AWS.DynamoDB();
var s3 = new AWS.S3();

#75 Best answer 2 of Configuring region in Node.js AWS SDK (Score: 100)

Created: 2016-09-19 Last updated: 2016-09-19

I had the same issue, “Missing region in config”, and in my case it was that, unlike the CLI or Python SDK, the Node SDK won’t read from the ~/.aws/config file.

To solve this, you have three options:

  1. Configure it programmatically (hard-coded): AWS.config.update({region:'your-region'});

  2. Use an environment variable. While the CLI uses AWS_DEFAULT_REGION, the Node SDK uses AWS_REGION.

  3. Load from a JSON file using AWS.config.loadFromPath('./config.json');

JSON format:

    {
        "accessKeyId": "akid",
        "secretAccessKey": "secret",
        "region": "us-east-1"
    }

See also the original question on Stack Overflow

#76: How to use multiple AWS accounts from the command line? (Score: 142)

Created: 2009-02-27 Last updated: 2021-01-06

Tags: amazon-web-services, amazon-ec2, aws-cli

I’ve got two different apps that I am hosting (well the second one is about to go up) on Amazon EC2.

How can I work with both accounts at the command line (Mac OS X) but keep the EC2 keys & certificates separate? Do I need to change my environment variables before each ec2-* command?

Would using an alias, and having it do the setting of the environment in-line, work? Something like:

alias ec2-describe-instances1='export EC2_PRIVATE_KEY=/path; ec2-describe-instances'

#76 Best answer 1 of How to use multiple AWS accounts from the command line? (Score: 367)

Created: 2015-12-12 Last updated: 2017-05-26

You can work with two accounts by creating two profiles on the aws command line. It will prompt you for your AWS Access Key ID, AWS Secret Access Key and desired region, so have them ready.


$ aws configure --profile account1
$ aws configure --profile account2

You can then switch between the accounts by passing the profile on the command line.

$ aws dynamodb list-tables --profile account1
$ aws s3 ls --profile account2


If you name the profile default, it will become the default profile, i.e. the one used when there is no --profile param in the command.

More on default profile

If you spend more time using account1, you can make it the default by setting the AWS_DEFAULT_PROFILE environment variable. When the default environment variable is set, you do not need to specify the profile on each command.

Linux, OS X Example:

$ export AWS_DEFAULT_PROFILE=account1
$ aws dynamodb list-tables

Windows Example:

$ set AWS_DEFAULT_PROFILE=account1
$ aws s3 ls

#76 Best answer 2 of How to use multiple AWS accounts from the command line? (Score: 90)

Created: 2015-11-28 Last updated: 2017-04-14

Maybe it still helps someone. You can set it up manually.

  1. Add the profile’s keys to the ~/.aws/credentials file:

    [{{profile_name}}]
    aws_access_key_id = {{access_key_id}}
    aws_secret_access_key = {{secret_access_key}}

  2. Add the profile’s region to the ~/.aws/config file:

    [profile {{profile_name}}]
    region = {{region}}

  3. Test it with the AWS Command Line; the command output will be JSON:

    aws ec2 describe-instances --profile {{profile_name}}
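The files above are plain INI files, so the configured profiles can be listed with Python’s standard configparser; a minimal sketch with made-up profile names and keys:

```python
import configparser

def list_profiles(credentials_text):
    """Return the profile names found in AWS credentials-style INI text."""
    parser = configparser.ConfigParser()
    parser.read_string(credentials_text)
    return list(parser.sections())

sample = """\
[account1]
aws_access_key_id = AKIDEXAMPLE1
aws_secret_access_key = secret1

[account2]
aws_access_key_id = AKIDEXAMPLE2
aws_secret_access_key = secret2
"""

print(list_profiles(sample))  # ['account1', 'account2']
```

Note that in ~/.aws/config the section headers are written as [profile name], while ~/.aws/credentials uses the bare [name] form, as shown above.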


See also the original question on Stack Overflow

#77: How to save S3 object to a file using boto3 (Score: 142)

Created: 2015-03-31 Last updated: 2015-04-02

Tags: python, amazon-web-services, boto, boto3

I’m trying to do a “hello world” with the new boto3 client for AWS.

The use-case I have is fairly simple: get an object from S3 and save it to a file.

In boto 2.X I would do it like this:

import boto
key = boto.connect_s3().get_bucket('foo').get_key('foo')

In boto3, I can’t find a clean way to do the same thing, so I’m manually iterating over the “Streaming” object:

import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
    chunk = key['Body'].read(1024*8)
    while chunk:
        f.write(chunk)
        chunk = key['Body'].read(1024*8)


import boto3
key = boto3.resource('s3').Object('fooo', 'docker/my-image.tar.gz').get()
with open('/tmp/my-image.tar.gz', 'w') as f:
    for chunk in iter(lambda: key['Body'].read(4096), b''):
        f.write(chunk)

Both work fine. I was wondering: is there any “native” boto3 function that will do the same task?

#77 Best answer 1 of How to save S3 object to a file using boto3 (Score: 227)

Created: 2015-04-14 Last updated: 2019-10-02

There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:

s3_client = boto3.client('s3')
open('hello.txt', 'w').write('Hello, world!')

# Upload the file to S3
s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')

# Download the file from S3
s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')

These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files.

Note that s3_client.download_file won’t create a missing directory; create it first, for example with pathlib.Path('/path/to/file.txt').parent.mkdir(parents=True, exist_ok=True).

#77 Best answer 2 of How to save S3 object to a file using boto3 (Score: 62)

Created: 2016-02-12 Last updated: 2016-06-23

boto3 now has a nicer interface than the client:

resource = boto3.resource('s3')
my_bucket = resource.Bucket('MyBucket')
my_bucket.download_file(key, local_filename)

This by itself isn’t tremendously better than the client in the accepted answer (although the docs say that it does a better job retrying uploads and downloads on failure) but considering that resources are generally more ergonomic (for example, the s3 bucket and object resources are nicer than the client methods) this does allow you to stay at the resource layer without having to drop down.

Resources generally can be created in the same way as clients, and they take all or most of the same arguments and just forward them to their internal clients.

See also the original question on Stack Overflow

#78: Find region from within an EC2 instance (Score: 141)

Created: 2010-11-22 Last updated: 2017-05-23

Tags: amazon-ec2, amazon-web-services

Is there a way to look up the region of an instance from within the instance?

I’m looking for something similar to the method of finding the instance id.

#78 Best answer 1 of Find region from within an EC2 instance (Score: 156)

Created: 2012-03-16 Last updated: 2018-03-22

That URL doesn’t appear to work anymore; I got a 404 when I tried to use it. I have the following code which seems to work though:

EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed 's/[a-z]$//'`"

Hope this helps.

EDIT: Improved sed based on comments
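The sed step above just strips the trailing zone letter from the availability zone. The same transformation in Python, with made-up zone names:

```python
import re

def region_from_az(availability_zone):
    """Strip the trailing zone letter, e.g. 'us-east-1a' -> 'us-east-1'."""
    return re.sub(r"[a-z]$", "", availability_zone)

print(region_from_az("us-east-1a"))       # us-east-1
print(region_from_az("ap-southeast-2b"))  # ap-southeast-2
```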

#78 Best answer 2 of Find region from within an EC2 instance (Score: 86)

Created: 2012-02-13 Last updated: 2012-08-02

There is one more way of achieving that:

REGION=`curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep region | awk -F\" '{print $4}'`

echo $REGION


See also the original question on Stack Overflow

#79: Add EBS to Ubuntu EC2 Instance (Score: 140)

Created: 2012-07-18 Last updated: 2013-04-02

Tags: amazon-web-services, amazon-ec2, amazon-ebs

I’m having a problem connecting an EBS volume to my Ubuntu EC2 instance.

Here’s what I did:

  1. From the Amazon AWS Console, I created a 150GB EBS volume and attached it to an Ubuntu 11.10 EC2 instance. Under the EBS volume properties, “Attachment” shows: “[my Ubuntu instance id]:/dev/sdf (attached)”

  2. Tried mounting the drive on the Ubuntu box, and it told me “mount: /dev/sdf is not a block device”

sudo mount /dev/sdf /vol

  3. So I checked with fdisk and tried to mount from the new location, and it told me it wasn’t the right file system.

sudo fdisk -l

sudo mount -v -t ext4 /dev/xvdf /vol

the error:

mount: wrong fs type, bad option, bad superblock on /dev/xvdf, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so

“dmesg | tail” told me it gave the following error:

EXT4-fs (sda1): VFS: Can’t find ext4 filesystem

I also tried putting the configuration into the /etc/fstab file as instructed elsewhere, but it still gave the same wrong-file-system error.


Q1: Based on point 1 (above), why was the volume mapped to ‘/dev/sdf’ when it’s really mapped to ‘/dev/xvdf’?

Q2: What else do I need to do to get the EBS volume loaded? I thought it’d just take care of everything for me when I attached it to an instance.

#79 Best answer of Add EBS to Ubuntu EC2 Instance (Score: 326)

Created: 2012-07-18 Last updated: 2014-01-07

Since this is a new volume, you need to format the EBS volume (block device) with a file system between step 1 and step 2. So the entire process with your sample mount point is:

  1. Create EBS volume.

  2. Attach EBS volume to /dev/sdf (EC2’s external name for this particular device number).

  3. Format file system /dev/xvdf (Ubuntu’s internal name for this particular device number):

     sudo mkfs.ext4 /dev/xvdf

Only format the file system if this is a new volume with no data on it. Formatting will make it difficult or impossible to retrieve any data that was on this volume previously.

  4. Mount file system (with an update to /etc/fstab so it stays mounted on reboot):

     sudo mkdir -m 000 /vol
     echo "/dev/xvdf /vol auto noatime 0 0" | sudo tee -a /etc/fstab
     sudo mount /vol

See also the original question on Stack Overflow

#80: How To Set Up GUI On Amazon EC2 Ubuntu server (Score: 140)

Created: 2014-09-04 Last updated: 2014-09-07

Tags: ubuntu, amazon-web-services, amazon-ec2, vnc

I’m using an Amazon Ubuntu EC2 instance which only has a command-line interface. I want to set up a UI on that server so I can access it using remote desktop tools. Is there any way to add a GUI to the EC2 instance?

#80 Best answer 1 of How To Set Up GUI On Amazon EC2 Ubuntu server (Score: 203)

Created: 2014-09-05 Last updated: 2018-12-13

This can be done. Following are the steps to set up the GUI.

Create new user with password login

sudo useradd -m awsgui
sudo passwd awsgui
sudo usermod -aG admin awsgui

sudo vim /etc/ssh/sshd_config # edit line "PasswordAuthentication" to yes

sudo /etc/init.d/ssh restart

Setting up a UI-based Ubuntu machine on AWS.

In the security group, open port 5901. Then SSH to the server instance. Run the following commands to install the UI and VNC server:

sudo apt-get update
sudo apt-get install ubuntu-desktop
sudo apt-get install vnc4server

Then run the following commands, and enter a login password for the VNC connection:

su - awsgui

vncserver

vncserver -kill :1

vim /home/awsgui/.vnc/xstartup

Then hit the Insert key, scroll around the text file with the keyboard arrows, and delete the pound (#) sign from the beginning of the two lines under the line that says “Uncomment the following two lines for normal desktop.” And on the second line add “sh” so the line reads

exec sh /etc/X11/xinit/xinitrc. 

When you’re done, hit Esc, then type :wq and hit Enter to save and quit.

Then start the VNC server again:

vncserver :1
On Ubuntu, you can install the xtightvncviewer package to view the desktop.

In the VNC client, give the public DNS plus “:1” (e.g. {{public-dns}}:1). Enter the VNC login password. Make sure to use a normal connection; don’t use the key files.


Port opening on console

sudo iptables -A INPUT -p tcp --dport 5901 -j ACCEPT

If the grey-window issue comes up, it is mostly because the “.vnc/xstartup” file belongs to a different user; run the VNC server as that same user rather than as the “awsgui” user.


#80 Best answer 2 of How To Set Up GUI On Amazon EC2 Ubuntu server (Score: 79)

Created: 2016-03-21 Last updated: 2017-04-13

So I followed the first answer, but my VNC viewer gave me a grey screen when I connected to it. I found an Ask Ubuntu answer that solves that.

The only difference from the previous answer is that you need to install these extra packages:

apt-get install gnome-panel gnome-settings-daemon metacity nautilus gnome-terminal

And use this ~/.vnc/xstartup file:



#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &

gnome-panel &
gnome-settings-daemon &
metacity &
nautilus &
gnome-terminal &

Everything else is the same.

Tested on EC2 Ubuntu 14.04 LTS.

See also the original question on Stack Overflow

  1. This page uses an API to get the relevant data from the Stack Overflow community.
  2. Content license on this page is CC BY-SA 3.0.
  3. score = up votes - down votes.