Most votes on amazon-web-services questions 7

#61 Growing Amazon EBS Volume sizes
#62 How do I get AWS_ACCESS_KEY_ID for Amazon?
#63 How do I install Python 3 on an AWS EC2 instance?
#64 What is the difference between Amazon ECS and Amazon EC2?
#65 How to see all running Amazon EC2 instances across all regions?
#66 How to upgrade AWS CLI to the latest version?
#67 The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256
#68 Force CloudFront distribution/file update
#69 Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk
#70 What is the difference between the AWS boto and boto3

Read all the top-voted questions and answers on a single page.

#61: Growing Amazon EBS Volume sizes (Score: 154)

Created: 2009-02-15 Last updated: 2010-05-19

Tags: amazon-web-services, amazon-ebs

I’m quite impressed with Amazon’s EC2 and EBS services. I wanted to know if it is possible to grow an EBS Volume.

For example: If I have a 50 GB volume and I start to run out of space, can I bump it up to 100 GB when required?

#61 Best answer 1 of Growing Amazon EBS Volume sizes (Score: 103)

Created: 2009-02-15 Last updated: 2010-06-16

You can grow the storage, but it can't be done on the fly. You'll need to take a snapshot of the current volume, create a new, larger volume, and restore your snapshot onto it.

There's a simple walkthrough here based on Amazon's EC2 command line tools.
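The legacy ec2-* command line tools have since been superseded by the unified aws CLI; a rough sketch of the same snapshot-and-recreate flow with modern commands (the volume, snapshot, and instance IDs, the size, and the availability zone below are all placeholders) might look like this:

# snapshot the existing volume (IDs below are placeholders)
aws ec2 create-snapshot --volume-id vol-0abc123 --description "pre-grow backup"

# wait until the snapshot is complete, then create a larger volume from it
aws ec2 wait snapshot-completed --snapshot-ids snap-0def456
aws ec2 create-volume --snapshot-id snap-0def456 --size 100 \
    --availability-zone us-east-1a

# detach the old volume and attach the new one in its place
aws ec2 detach-volume --volume-id vol-0abc123
aws ec2 attach-volume --volume-id vol-0new789 --instance-id i-0123abcd \
    --device /dev/sdf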

#61 Best answer 2 of Growing Amazon EBS Volume sizes (Score: 44)

Created: 2009-02-15

You can’t simply ‘bump in’ more space on the fly if you need it, but you can resize the partition with a snapshot.

Steps to do this:

  1. Unmount the EBS volume
  2. Create an EBS snapshot
  3. Create a new volume with more space
  4. Recreate the partition table and resize the filesystem (see the sketch below)
  5. Mount the new EBS volume
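For steps 4 and 5, assuming the new volume shows up as /dev/xvdf with an ext4 filesystem in its first partition (device names and filesystem types vary), a sketch on the instance could be:

# grow partition 1 to fill the larger device (growpart comes from cloud-utils-growpart)
sudo growpart /dev/xvdf 1

# resize the ext4 filesystem to fill the partition
sudo resize2fs /dev/xvdf1

# mount the volume again
sudo mount /dev/xvdf1 /data

For XFS, you would run xfs_growfs on the mounted filesystem instead of resize2fs.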

Look at http://aws.amazon.com/ebs/ - EBS Snapshot:

Snapshots can also be used to instantiate multiple new volumes, expand the size of a volume or move volumes across Availability Zones. When a new volume is created, there is the option to create it based on an existing Amazon S3 snapshot. In that scenario, the new volume begins as an exact replica of the original volume. By optionally specifying a different volume size or a different Availability Zone, this functionality can be used as a way to increase the size of an existing volume or to create duplicate volumes in new Availability Zones. If you choose to use snapshots to resize your volume, you need to be sure your file system or application supports resizing a device.

See also the original question on Stack Overflow

#62: How do I get AWS_ACCESS_KEY_ID for Amazon? (Score: 153)

Created: 2014-01-29 Last updated: 2016-06-29

Tags: amazon-web-services, access-keys

I’m totally new to AWS.

I downloaded some sample code from Amazon and I need to set a number of constants:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • MERCHANT_ID
  • MARKETPLACE_ID

I just created an AWS account. I want some type of sandbox account so I can try out the code samples.

What are the exact steps I have to take to:

  1. Create a sandbox account
  2. Get these credentials

#62 Best answer 1 of How do I get AWS_ACCESS_KEY_ID for Amazon? (Score: 204)

Created: 2014-01-29 Last updated: 2015-12-17

  1. Go to: http://aws.amazon.com/
  2. Sign up and create a new account (they'll offer a 1-year free trial or similar)
  3. Go to your AWS account overview
  4. Open the account menu in the upper-right (it has your name on it)
  5. Open the sub-menu: Security Credentials

#62 Best answer 2 of How do I get AWS_ACCESS_KEY_ID for Amazon? (Score: 83)

Created: 2016-06-21 Last updated: 2018-05-08

  1. Open the AWS Console
  2. Click on your username near the top right and select My Security Credentials
  3. Click on Users in the sidebar
  4. Click on your username
  5. Click on the Security Credentials tab
  6. Click Create Access Key
  7. Click Show User Security Credentials
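Once the access key pair is created, a common way to wire it up for the CLI and SDKs (the values below are placeholders) is:

# persist the key pair into ~/.aws/credentials
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# or export it just for the current shell session
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Note that MERCHANT_ID and MARKETPLACE_ID appear to come from the (separate) Amazon MWS seller side rather than the AWS console; they are specific to the sample code in the question.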

See also the original question on Stack Overflow

#63: How do I install Python 3 on an AWS EC2 instance? (Score: 152)

Created: 2014-12-27 Last updated: 2019-03-07

Tags: python, python-3.x, amazon-web-services, amazon-ec2

I’m trying to install python 3.x on an AWS EC2 instance and:

sudo yum install python3

doesn’t work:

No package python3 available.

I've googled around and can't find anyone else who has this problem, so I'm asking here. Do I have to manually download and install it?

#63 Best answer 1 of How do I install Python 3 on an AWS EC2 instance? (Score: 282)

Created: 2015-04-11 Last updated: 2019-08-01

If you do a

sudo yum list | grep python3

you will see that while they don’t have a “python3” package, they do have a “python34” package, or a more recent release, such as “python36”. Installing it is as easy as:

sudo yum install python34 python34-pip
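A quick sanity check afterwards (assuming the package installs a python3 symlink, as the python3x packages on stock Amazon Linux AMIs do):

# confirm the interpreter and its pip are on PATH
python3 --version
python3 -m pip --version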

#63 Best answer 2 of How do I install Python 3 on an AWS EC2 instance? (Score: 72)

Created: 2018-01-18 Last updated: 2019-04-03

Note: This may be obsolete for current versions of Amazon Linux 2 since late 2018 (see comments), as you can now install it directly via yum install python3.

In Amazon Linux 2, there isn't a python3[4-6] package in the default yum repos; instead, there's the Amazon Extras Library.

sudo amazon-linux-extras install python3

If you want to set up isolated virtual environments with it, note that the yum-installed virtualenv tools don't seem to work reliably.

virtualenv --python=python3 my_venv

Calling the venv module/tool is less finicky, and you can double-check that it's what you want/expect with python3 --version beforehand.

python3 -m venv my_venv
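Once created, typical usage of the environment looks like this (requests is just an example package):

# activate the environment; packages now install into my_venv only
source my_venv/bin/activate
pip install requests

# leave the environment when done
deactivate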

Other things it can install (versions as of 18 Jan 18):

$ amazon-linux-extras list
  0  ansible2   disabled  [ =2.4.2 ]
  1  emacs   disabled  [ =25.3 ]
  2  memcached1.5   disabled  [ =1.5.1 ]
  3  nginx1.12   disabled  [ =1.12.2 ]
  4  postgresql9.6   disabled  [ =9.6.6 ]
  5  python3=latest  enabled  [ =3.6.2 ]
  6  redis4.0   disabled  [ =4.0.5 ]
  7  R3.4   disabled  [ =3.4.3 ]
  8  rust1   disabled  [ =1.22.1 ]
  9  vim   disabled  [ =8.0 ]
 10  golang1.9   disabled  [ =1.9.2 ]
 11  ruby2.4   disabled  [ =2.4.2 ]
 12  nano   disabled  [ =2.9.1 ]
 13  php7.2   disabled  [ =7.2.0 ]
 14  lamp-mariadb10.2-php7.2   disabled  [ =10.2.10_7.2.0 ]

See also the original question on Stack Overflow

#64: What is the difference between Amazon ECS and Amazon EC2? (Score: 152)

Created: 2016-11-13 Last updated: 2019-07-16

Tags: amazon-web-services, amazon-ec2, amazon-ecs

I’m just getting started on AWS EC2. I understand that EC2 is like a remote computer where I can do pretty much everything I want. Then I found out about ECS. I know it uses Docker, but I’m confused about the relationship between these two.

Is ECS just a Docker install in EC2? If I already have an EC2 and I start an ECS, does it mean I have two instances?

#64 Best answer 1 of What is the difference between Amazon ECS and Amazon EC2? (Score: 202)

Created: 2016-11-14 Last updated: 2018-04-11

Your question

Is ECS just a Docker install in EC2? If I already have an EC2 and I start an ECS, does it mean I have two instances?

No. AWS ECS is just a logical grouping (a cluster) of EC2 instances, and all the EC2 instances that are part of an ECS cluster act as Docker hosts, i.e. ECS can send commands to launch containers on them. If you already have an EC2 instance and then launch ECS, you'll still have a single instance. If you add/register the EC2 instance with ECS (by installing the Amazon ECS Container Agent), it becomes part of the cluster, but it is still a single EC2 instance.

An Amazon ECS cluster without any EC2 instances registered (added to the cluster) is good for nothing.


TL;DR

An overview

  • EC2 is simply a remote (virtual) machine.
  • ECS stands for Elastic Container Service. As per the basic definition of a computer cluster, ECS is basically a logical grouping of EC2 machines/instances. Technically speaking, ECS is a mere configuration layer for efficient use and management of your EC2 instances' resources, i.e. storage, memory, CPU, etc.

To simplify further: if you have launched an Amazon ECS cluster with no EC2 instances added to it, it's good for nothing, i.e. you can't do anything with it. ECS makes sense only once one (or more) EC2 instances are added to it.

The next confusing term here is container, which is not a fully virtualized machine instance; Docker is one technology we can use to create container instances. Docker is a utility you can install on your machine, which makes it a Docker host, and on this host you can create containers (similar to virtual machines, but much more lightweight). To sum up, ECS is about clustering EC2 instances, and it uses Docker to instantiate containers/instances on these (EC2) hosts.

All you need to do is launch an ECS cluster and register/add as many EC2 instances to it as you need. To register an EC2 instance, all you need is the Amazon ECS Container Agent running on it, which can be installed manually or obtained directly by using the special AMI (Amazon Machine Image), i.e. the Amazon ECS-optimized AMI, which already includes the agent. During the launch of a new EC2 instance, the agent automatically registers it to the default ECS cluster.

The container agent running on each of the instances (EC2 instances) within an Amazon ECS cluster sends information about the instance's current running tasks and resource utilization to Amazon ECS, and starts and stops tasks whenever it receives a request from Amazon ECS. For more information, see Amazon ECS Container Agent. Once set up, each of the created container instances (on whatever EC2 machine/node) will be an instance in Amazon ECS's cluster.


For more information – read step 10 from this documentation: Launching an Amazon ECS Container Instance:

Choose an AMI for your container instance. You can choose the Amazon ECS-optimized AMI, or another operating system, such as CoreOS or Ubuntu. If you do not choose the Amazon ECS-optimized AMI, you need to follow the procedures in Installing the Amazon ECS Container Agent.

By default, your container instance launches into your default cluster. If you want to launch into your own cluster instead of the default, choose the Advanced Details list and paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.

#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config

Or, if you have an ecs.config file in Amazon S3 and have enabled Amazon S3 read-only access to your container instance role, choose the Advanced Details list and paste the following script into the User data field, replacing your_bucket_name with the name of your bucket to install the AWS CLI and write your configuration file at launch time. Note For more information about this configuration, see Storing Container Instance Configuration in Amazon S3.

#!/bin/bash
yum install -y aws-cli
aws s3 cp s3://your_bucket_name/ecs.config /etc/ecs/ecs.config

Just to clarify further: you can create containers on a single EC2 instance without ECS. Install any containerization technology, e.g. Docker, run the container-creation commands, setting your EC2 instance up as a Docker host, and have as many Docker containers as you want (or as many as your EC2 instance's resources allow).
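As a sketch of that last point: on a plain Amazon Linux instance (Amazon Linux 2 would use amazon-linux-extras install docker instead), running a container without ECS looks roughly like this:

# install and start Docker directly on the instance
sudo yum install -y docker
sudo service docker start

# run any container; nginx is just an example image
sudo docker run -d -p 80:80 nginx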

#64 Best answer 2 of What is the difference between Amazon ECS and Amazon EC2? (Score: 99)

Created: 2019-01-17

In simple words, ECS is a manager, while EC2 instances are just like employees. All the employees (EC2) under this manager (ECS) can perform Docker tasks, and the manager understands Docker pretty well too. So, whenever you need Docker resources, you go to the manager. The manager already has the status of every employee (EC2) and decides which one should perform the task.

Now, coming back to your question, a manager without an “employee” does not make sense.


See also the original question on Stack Overflow

#65: How to see all running Amazon EC2 instances across all regions? (Score: 149)

Created: 2017-02-07

Tags: amazon-web-services, amazon-ec2, ec2-ami

I switch instances between different regions frequently, and sometimes I forget to turn off a running instance in another region. I couldn't find any way to see all the running instances in the Amazon console.
Is there any way to display all the running instances regardless of region?

#65 Best answer 1 of How to see all running Amazon EC2 instances across all regions? (Score: 149)

Created: 2018-08-16 Last updated: 2021-04-18

A non-obvious GUI option is Resource Groups > Tag Editor. Here you can find all instances across all regions, even if the instances were not tagged.

#65 Best answer 2 of How to see all running Amazon EC2 instances across all regions? (Score: 71)

Created: 2017-02-07 Last updated: 2019-08-27

I don’t think you can currently do this in the AWS GUI. But here is a way to list all your instances across all regions with the AWS CLI:

for region in `aws ec2 describe-regions --region us-east-1 --output text | cut -f4`
do
     echo -e "\nListing Instances in region:'$region'..."
     aws ec2 describe-instances --region $region
done

Taken from here (if you want to see the full discussion).

Also, if you’re getting a

You must specify a region. You can also configure your region by running “aws configure”

You can do so with aws configure set region us-east-1 (thanks @Sabuncu for the comment).

Update

Now (in 2019) the cut command should be applied on the 4th field: cut -f4
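A variant of the same loop (not from the original answer) that uses --query instead of cut and shows only running instances:

# list running instances per region in a compact table
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text)
do
    echo "Region: $region"
    aws ec2 describe-instances --region "$region" \
        --filters Name=instance-state-name,Values=running \
        --query 'Reservations[].Instances[].[InstanceId,InstanceType]' \
        --output table
done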

See also the original question on Stack Overflow

#66: How to upgrade AWS CLI to the latest version? (Score: 149)

Created: 2016-05-01 Last updated: 2016-05-01

Tags: linux, ubuntu, amazon-web-services, amazon-s3, aws-cli

I recently noticed that I am running an old version of AWS CLI that is lacking some functionality I need:

$ aws --version
aws-cli/1.2.9 Python/3.4.3 Linux/3.13.0-85-generic

How can I upgrade to the latest version of the AWS CLI (1.10.24)?

Edit:

Running the following command fails to update AWS CLI:

$ pip install --upgrade awscli
Requirement already up-to-date: awscli in /usr/local/lib/python2.7/dist-packages
Cleaning up...

Checking the version:

$ aws --version
aws-cli/1.2.9 Python/3.4.3 Linux/3.13.0-85-generic

#66 Best answer 1 of How to upgrade AWS CLI to the latest version? (Score: 120)

Created: 2016-05-01 Last updated: 2020-06-15

From http://docs.aws.amazon.com/cli/latest/userguide/installing.html#install-with-pip

To upgrade an existing AWS CLI installation, use the --upgrade option:

pip install --upgrade awscli
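If, as in the question's edit, the upgrade appears to succeed but aws --version doesn't change, the CLI is probably installed under a different Python than the pip you ran. A hedged workaround is to upgrade with the matching pip (or into your user site-packages) and re-check:

# upgrade with the pip that matches the CLI's Python, then verify
pip3 install --upgrade --user awscli
hash -r            # make the shell forget the cached path to `aws`
aws --version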

#66 Best answer 2 of How to upgrade AWS CLI to the latest version? (Score: 51)

Created: 2017-02-04 Last updated: 2019-06-06

On Linux and MacOS X, here are the three commands that correspond to each step:

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

See also the original question on Stack Overflow

#67: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256 (Score: 147)

Created: 2014-10-23 Last updated: 2018-07-28

Tags: ruby, amazon-web-services, amazon-s3, aws-sdk

I get the error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.

Script:

backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
    access_key_id:     AMAZONS3['access_key_id'],
    secret_access_key: AMAZONS3['secret_access_key']
)

s3_bucket = s3.buckets['test-frankfurt']

# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"

file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)

aws-sdk (1.56.0)

How to fix it?

Thank you.

#67 Best answer 1 of The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256 (Score: 159)

Created: 2014-10-23 Last updated: 2016-09-22

AWS4-HMAC-SHA256, also known as Signature Version 4, (“V4”) is one of two authentication schemes supported by S3.

All regions support V4, but US-Standard¹, and many – but not all – other regions, also support the other, older scheme, Signature Version 2 (“V2”).

According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html … new S3 regions deployed after January, 2014 will only support V4.

Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.

I would speculate that some older versions of the SDKs might not support this option, so if the above doesn’t help, you may need a newer release of the SDK you are using.
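For the AWS CLI (as opposed to the Ruby SDK used in the question), Signature Version 4 for S3 can be forced with a single documented setting:

# make the CLI sign S3 requests with Signature Version 4
aws configure set default.s3.signature_version s3v4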


¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written, “Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it’s only a change in naming.

#67 Best answer 2 of The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256 (Score: 75)

Created: 2015-07-26

With node, try

var s3 = new AWS.S3( {
    endpoint: 's3-eu-central-1.amazonaws.com',
    signatureVersion: 'v4',
    region: 'eu-central-1'
} );

See also the original question on Stack Overflow

#68: Force CloudFront distribution/file update (Score: 147)

Created: 2009-08-12 Last updated: 2014-05-08

Tags: amazon-web-services, cloud, cdn, amazon-cloudfront

I’m using Amazon’s CloudFront to serve static files of my web apps.

Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed?

Amazon recommends that you version your files, like logo_1.gif, logo_2.gif and so on, as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?

#68 Best answer 1 of Force CloudFront distribution/file update (Score: 136)

Created: 2010-09-01

Good news. Amazon finally added an Invalidation Feature. See the API Reference.

This is a sample request from the API Reference:

POST /2010-08-01/distribution/[distribution ID]/invalidation HTTP/1.0
Host: cloudfront.amazonaws.com
Authorization: [AWS authentication string]
Content-Type: text/xml

<InvalidationBatch>
   <Path>/image1.jpg</Path>
   <Path>/image2.jpg</Path>
   <Path>/videos/movie.flv</Path>
   <CallerReference>my-batch</CallerReference>
</InvalidationBatch>
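The same invalidation can now be issued from the AWS CLI, which postdates this answer (the distribution ID below is a placeholder):

# invalidate specific paths in a distribution
aws cloudfront create-invalidation \
    --distribution-id EDFDVBD6EXAMPLE \
    --paths "/image1.jpg" "/image2.jpg" "/videos/movie.flv"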

#68 Best answer 2 of Force CloudFront distribution/file update (Score: 19)

Created: 2012-03-29

As of March 19, Amazon now allows CloudFront's cache TTL to be 0 seconds, so you (theoretically) should never see stale objects. If you have your assets in S3, you can simply go to AWS Web Panel => S3 => Edit Properties => Metadata, then set your "Cache-Control" value to "max-age=0".

This is straight from the API documentation:

To control whether CloudFront caches an object and for how long, we recommend that you use the Cache-Control header with the max-age= directive. CloudFront caches the object for the specified number of seconds. (The minimum value is 0 seconds.)
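For objects managed with the CLI, the header can also be set at upload time (the bucket and key below are placeholders):

# upload (or re-upload) an object with an explicit Cache-Control header
aws s3 cp logo.gif s3://my-bucket/logo.gif --cache-control max-age=0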

See also the original question on Stack Overflow

#69: Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk (Score: 146)

Created: 2013-09-20 Last updated: 2013-09-22

Tags: amazon-web-services, nginx, amazon-ec2, amazon-elastic-beanstalk

I’m running into “413 Request Entity Too Large” errors when posting files larger than 10MB to our API running on AWS Elastic Beanstalk.

I've done quite a bit of research and believe that I need to increase client_max_body_size for Nginx; however, I cannot find any documentation on how to do this with Elastic Beanstalk. My guess is that it needs to be modified using an .ebextensions file.

Does anyone have thoughts on how I can raise the limit? 10MB is pretty weak; there has to be a way to raise this manually.

#69 Best answer 1 of Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk (Score: 238)

Created: 2013-09-23 Last updated: 2021-02-09

There are two approaches you can take for this. Unfortunately, some work for some EB application types and some work for others.

Supported/recommended in AWS documentation

For some application types, like Java SE, Go, Node.js, and maybe Ruby (it's not documented for Ruby, but all the other Nginx platforms seem to support this), Elastic Beanstalk has a built-in understanding of how to configure Nginx.

To extend Elastic Beanstalk’s default nginx configuration, add .conf configuration files to a folder named .ebextensions/nginx/conf.d/ in your application source bundle. Elastic Beanstalk’s nginx configuration includes .conf files in this folder automatically.

~/workspace/my-app/
|-- .ebextensions
|   `-- nginx
|       `-- conf.d
|           `-- myconf.conf
`-- web.jar

Configuring the Reverse Proxy - Java SE

To increase the maximum upload size specifically, create a file at .ebextensions/nginx/conf.d/proxy.conf, setting the max body size to whatever size you prefer:

client_max_body_size 50M;

Create the Nginx config file directly

For some other application types, after much research and hours of working with the wonderful AWS support team, I created a config file inside of .ebextensions to supplement the nginx config. This change allowed for a larger post body size.

Inside of the .ebextensions directory, I created a file called 01_files.config with the following contents:

files:
    "/etc/nginx/conf.d/proxy.conf" :
        mode: "000755"
        owner: root
        group: root
        content: |
           client_max_body_size 20M;

This generates a proxy.conf file inside the /etc/nginx/conf.d directory. The proxy.conf file simply contains the one-liner client_max_body_size 20M;, which does the trick.

Note that for some platforms, this file will be created during the deploy, but then removed in a later deployment phase.

You can specify other directives, which are outlined in the Nginx documentation.

http://wiki.nginx.org/Configuration
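After deploying either variant, a quick way to confirm on the instance that Nginx picked up the directive (a sketch; the paths match the examples above):

# validate the merged Nginx configuration and check the directive is present
sudo nginx -t
sudo grep -R client_max_body_size /etc/nginx/conf.d/

# reload Nginx so the change takes effect without dropping connections
sudo service nginx reload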

Hope this helps others!

#69 Best answer 2 of Increasing client_max_body_size in Nginx conf on AWS Elastic Beanstalk (Score: 77)

Created: 2020-05-21

I tried all the .ebextensions methods of adding implementation-level configuration, and none of them helped me on the latest Amazon Linux AMI. After a lot of research and digging through the logs, I found that the deployment task runner checks for a folder called .platform every time, so I added one just like .ebextensions. Below is the setup I added in the root folder of my project.

Add the folder setup below at the root level of your project folder.

Folder structure (.platform/nginx/conf.d/proxy.conf)

.platform/
         nginx/
              conf.d/
                    proxy.conf
         00_myconf.config

Content of File 1 - proxy.conf (Inside .platform/nginx/conf.d/ folder)

client_max_body_size 50M;

Content of File 2 - 00_myconf.config (Inside .platform/ folder)

container_commands:
  01_reload_nginx:
    command: "service nginx reload"

Be careful with the extensions: the first file is .conf and the second file is .config.

Now redeploy your project to Amazon Elastic Beanstalk and you will see the magic. This configuration will be applied to all your EC2 instances, including those created as part of auto scaling.


See also the original question on Stack Overflow

#70: What is the difference between the AWS boto and boto3 (Score: 146)

Created: 2015-09-01 Last updated: 2018-06-21

Tags: python, amazon-web-services, boto, boto3

I'm new to AWS with Python and I'm trying to learn the boto API; however, I noticed that there are two major versions/packages for Python: boto and boto3.

What is the difference between the AWS boto and boto3 libraries?

#70 Best answer of What is the difference between the AWS boto and boto3 (Score: 195)

Created: 2015-09-01

The boto package is the hand-coded Python library that has been around since 2006. It is very popular and is fully supported by AWS, but because it is hand-coded and there are so many services available (with more appearing all the time), it is difficult to maintain.

So, boto3 is a new version of the boto library, based on botocore. All of the low-level interfaces to AWS are driven by JSON service descriptions that are generated automatically from the canonical descriptions of the services, so the interfaces are always correct and always up to date. There is a resource layer on top of the client layer that provides a nicer, more Pythonic interface.

The boto3 library is being actively developed by AWS and is the one I would recommend people use if they are starting new development.
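If you are starting fresh, installing boto3 is a one-liner, shown here with a quick import check:

# install boto3 and verify the import works
pip install boto3
python -c "import boto3; print(boto3.__version__)"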

See also the original question on Stack Overflow


Notes:
  1. This page uses the Stack Exchange API to get the relevant data from the Stack Overflow community.
  2. Content license on this page is CC BY-SA 3.0.
  3. Score = upvotes - downvotes.