
Most votes on amazon-web-services questions 4

  • #31 How to rename files and folder in Amazon S3?
  • #32 How to handle errors with boto3?
  • #33 S3 - Access-Control-Allow-Origin Header
  • #34 AWS VPC - Internet Gateway vs. NAT
  • #35 AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint
  • #36 Amazon SimpleDB vs Amazon DynamoDB
  • #37 Can't push image to Amazon ECR - fails with "no basic auth credentials"
  • #38 WARNING: UNPROTECTED PRIVATE KEY FILE! when trying to SSH into Amazon EC2 Instance
  • #39 Download an already uploaded Lambda function
  • #40 boto3 client NoRegionError: You must specify a region error only sometimes

Read all the top-voted questions and answers on a single page.

#31: How to rename files and folder in Amazon S3? (Score: 248)

Created: 2014-01-17 Last updated: 2018-06-05

Tags: amazon-web-services, amazon-s3

Is there any function to rename files and folders in Amazon S3? Any related suggestions are also welcome.

#31 Best answer 1 of How to rename files and folder in Amazon S3? (Score: 517)

Created: 2016-01-30

I just tested this and it works:

aws s3 mv s3://<bucket>/<folder_name_from> s3://<bucket>/<folder_name_to> --recursive

#31 Best answer 2 of How to rename files and folder in Amazon S3? (Score: 83)

Created: 2014-11-08 Last updated: 2019-03-23

There is no direct method to rename a file in S3. What you have to do is copy the existing file with a new name (just set the target key) and delete the old one.
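
For reference, here is a minimal boto3 sketch of that copy-then-delete approach (the bucket and key names are hypothetical placeholders):

import boto3

s3 = boto3.client('s3')

bucket = 'my-bucket'             # hypothetical bucket name
old_key = 'folder/old-name.txt'  # hypothetical source key
new_key = 'folder/new-name.txt'  # hypothetical target key

# A "rename" in S3 is a copy to the new key followed by a delete of the old one.
s3.copy_object(Bucket=bucket,
               CopySource={'Bucket': bucket, 'Key': old_key},
               Key=new_key)
s3.delete_object(Bucket=bucket, Key=old_key)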

See also the original question on Stack Overflow.

#32: How to handle errors with boto3? (Score: 242)

Created: 2015-10-11 Last updated: 2020-01-28

Tags: python, amazon-web-services, boto, boto3

I am trying to figure out how to do proper error handling with boto3.

I am trying to create an IAM user:

def create_user(username, iam_conn):
    try:
        user = iam_conn.create_user(UserName=username)
        return user
    except Exception as e:
        return e

When the call to create_user succeeds, I get a neat object that contains the http status code of the API call and the data of the newly created user.

Example:

{'ResponseMetadata': 
      {'HTTPStatusCode': 200, 
       'RequestId': 'omitted'
      },
 u'User': {u'Arn': 'arn:aws:iam::omitted:user/omitted',
           u'CreateDate': datetime.datetime(2015, 10, 11, 17, 13, 5, 882000, tzinfo=tzutc()),
           u'Path': '/',
           u'UserId': 'omitted',
           u'UserName': 'omitted'
          }
}

This works great. But when this fails (like if the user already exists), I just get an object of type botocore.exceptions.ClientError with only text to tell me what went wrong.

Example: ClientError('An error occurred (EntityAlreadyExists) when calling the CreateUser operation: User with name omitted already exists.',)

This (AFAIK) makes error handling very hard, because I can't just switch on the resulting HTTP status code (409 for "user already exists", according to the AWS API docs for IAM). This makes me think I must be doing something the wrong way. The optimal way would be for boto3 to never throw exceptions, but just always return an object that reflects how the API call went.

Can anyone enlighten me on this issue or point me in the right direction?

#32 Best answer 1 of How to handle errors with boto3? (Score: 474)

Created: 2015-11-12 Last updated: 2021-03-04

Use the response contained within the exception. Here is an example:

import boto3
from botocore.exceptions import ClientError

try:
    iam = boto3.client('iam')
    user = iam.create_user(UserName='fred')
    print("Created user: %s" % user)
except ClientError as e:
    if e.response['Error']['Code'] == 'EntityAlreadyExists':
        print("User already exists")
    else:
        print("Unexpected error: %s" % e)

The response dict in the exception will contain the following:

  • ['Error']['Code'] e.g. 'EntityAlreadyExists' or 'ValidationError'
  • ['ResponseMetadata']['HTTPStatusCode'] e.g. 400
  • ['ResponseMetadata']['RequestId'] e.g. 'd2b06652-88d7-11e5-99d0-812348583a35'
  • ['Error']['Message'] e.g. "An error occurred (EntityAlreadyExists) …"
  • ['Error']['Type'] e.g. 'Sender'
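
The asker's original wish, switching on the HTTP status code, is therefore possible as well. A minimal sketch using the fields above (the question notes that EntityAlreadyExists maps to HTTP 409 in the IAM docs):

import boto3
from botocore.exceptions import ClientError

iam = boto3.client('iam')
try:
    iam.create_user(UserName='fred')
except ClientError as e:
    status = e.response['ResponseMetadata']['HTTPStatusCode']
    if status == 409:
        print("User already exists")
    else:
        print("Unexpected error (HTTP %s): %s" % (status, e))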


[Updated: 2018-03-07]

The AWS Python SDK has begun to expose service exceptions on clients (though not on resources) that you can explicitly catch, so it is now possible to write that code like this:

import botocore
import boto3

try:
    iam = boto3.client('iam')
    user = iam.create_user(UserName='fred')
    print("Created user: %s" % user)
except iam.exceptions.EntityAlreadyExistsException:
    print("User already exists")
except botocore.exceptions.ParamValidationError as e:
    print("Parameter validation error: %s" % e)
except botocore.exceptions.ClientError as e:
    print("Unexpected error: %s" % e)

Unfortunately, there is currently no documentation for these errors/exceptions, but you can get a list of the core errors as follows:

import botocore
import boto3
[e for e in dir(botocore.exceptions) if e.endswith('Error')]

Note that you must import both botocore and boto3. If you only import botocore then you will find that botocore has no attribute named exceptions. This is because the exceptions are dynamically populated into botocore by boto3.

You can get a list of service-specific exceptions as follows (replace iam with the relevant service as needed):

import boto3
iam = boto3.client('iam')
[e for e in dir(iam.exceptions) if e.endswith('Exception')]

#32 Best answer 2 of How to handle errors with boto3? (Score: 29)

Created: 2019-06-20 Last updated: 2019-10-16

Since the exceptions are not documented, I found it very useful to list all of them to the screen for this package. Here is the code I used to do it:

import sys

import botocore.exceptions

def listexns(mod):
    exns = []
    for name, value in botocore.exceptions.__dict__.items():
        # Collect exception classes (and anything whose name looks like an error).
        if ((isinstance(value, type) and issubclass(value, Exception))
                or name.endswith('Error')):
            exns.append(name)
    for name in exns:
        print('%s.%s is an exception type' % (str(mod), name))

if __name__ == '__main__':
    if len(sys.argv) <= 1:
        print('Give me a module name on the $PYTHONPATH!')
        sys.exit(1)
    print('Looking for exception types in module: %s' % sys.argv[1])
    listexns(sys.argv[1])

Which results in:

Looking for exception types in module: boto3
boto3.BotoCoreError is an exception type
boto3.DataNotFoundError is an exception type
boto3.UnknownServiceError is an exception type
boto3.ApiVersionNotFoundError is an exception type
boto3.HTTPClientError is an exception type
boto3.ConnectionError is an exception type
boto3.EndpointConnectionError is an exception type
boto3.SSLError is an exception type
boto3.ConnectionClosedError is an exception type
boto3.ReadTimeoutError is an exception type
boto3.ConnectTimeoutError is an exception type
boto3.ProxyConnectionError is an exception type
boto3.NoCredentialsError is an exception type
boto3.PartialCredentialsError is an exception type
boto3.CredentialRetrievalError is an exception type
boto3.UnknownSignatureVersionError is an exception type
boto3.ServiceNotInRegionError is an exception type
boto3.BaseEndpointResolverError is an exception type
boto3.NoRegionError is an exception type
boto3.UnknownEndpointError is an exception type
boto3.ConfigParseError is an exception type
boto3.MissingParametersError is an exception type
boto3.ValidationError is an exception type
boto3.ParamValidationError is an exception type
boto3.UnknownKeyError is an exception type
boto3.RangeError is an exception type
boto3.UnknownParameterError is an exception type
boto3.AliasConflictParameterError is an exception type
boto3.PaginationError is an exception type
boto3.OperationNotPageableError is an exception type
boto3.ChecksumError is an exception type
boto3.UnseekableStreamError is an exception type
boto3.WaiterError is an exception type
boto3.IncompleteReadError is an exception type
boto3.InvalidExpressionError is an exception type
boto3.UnknownCredentialError is an exception type
boto3.WaiterConfigError is an exception type
boto3.UnknownClientMethodError is an exception type
boto3.UnsupportedSignatureVersionError is an exception type
boto3.ClientError is an exception type
boto3.EventStreamError is an exception type
boto3.InvalidDNSNameError is an exception type
boto3.InvalidS3AddressingStyleError is an exception type
boto3.InvalidRetryConfigurationError is an exception type
boto3.InvalidMaxRetryAttemptsError is an exception type
boto3.StubResponseError is an exception type
boto3.StubAssertionError is an exception type
boto3.UnStubbedResponseError is an exception type
boto3.InvalidConfigError is an exception type
boto3.InfiniteLoopConfigError is an exception type
boto3.RefreshWithMFAUnsupportedError is an exception type
boto3.MD5UnavailableError is an exception type
boto3.MetadataRetrievalError is an exception type
boto3.UndefinedModelAttributeError is an exception type
boto3.MissingServiceIdError is an exception type

See also the original question on Stack Overflow.

#33: S3 - Access-Control-Allow-Origin Header (Score: 221)

Created: 2013-07-08 Last updated: 2020-03-23

Tags: amazon-web-services, amazon-s3, cors, http-headers

Did anyone manage to add Access-Control-Allow-Origin to the response headers? What I need is something like this:

<img src="http://360assets.s3.amazonaws.com/tours/8b16734d-336c-48c7-95c4-3a93fa023a57/1_AU_COM_180212_Areitbahn_Hahnkoplift_Bergstation.tiles/l2_f_0101.jpg" />

The response to this GET request should contain the header Access-Control-Allow-Origin: *.

My CORS settings for the bucket look like this:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Despite this configuration, the response contains no Access-Control-Allow-Origin header.

#33 Best answer 1 of S3 - Access-Control-Allow-Origin Header (Score: 210)

Created: 2013-11-12 Last updated: 2020-03-23

Usually, all you need to do is to “Add CORS Configuration” in your bucket properties.

[Screenshot: the "Add CORS Configuration" button in the S3 bucket properties]

The <CORSConfiguration> comes with some default values, and that was all I needed to solve the problem. Just click "Save" and try again to see if it worked. If it doesn't, you can also try the code below (from alxrb's answer), which seems to have worked for most people.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration> 

For further info, you can read this article on Editing Bucket Permission.
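
If you would rather set this programmatically (newer S3 consoles accept JSON rather than XML), here is a sketch of the same rule using boto3's put_bucket_cors; the bucket name is a hypothetical placeholder:

import boto3

s3 = boto3.client('s3')

# The same rule as the XML above, expressed in boto3's dict form.
s3.put_bucket_cors(
    Bucket='my-bucket',  # hypothetical bucket name
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['*'],
            'AllowedMethods': ['GET', 'HEAD'],
            'AllowedHeaders': ['Authorization'],
            'MaxAgeSeconds': 3000,
        }]
    },
)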

#33 Best answer 2 of S3 - Access-Control-Allow-Origin Header (Score: 111)

Created: 2013-10-24 Last updated: 2020-03-23

I was having a similar problem with loading web fonts. When I clicked on 'add CORS configuration' in the bucket properties, this code was already there:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration> 

I just clicked save and it worked a treat: my custom web fonts were loading in IE and Firefox. I'm no expert on this; I just thought this might help you out.

See also the original question on Stack Overflow.

#34: AWS VPC - Internet Gateway vs. NAT (Score: 221)

Created: 2016-08-01 Last updated: 2016-08-01

Tags: amazon-web-services, amazon-vpc

What is an Internet Gateway? What is a NAT Instance? What services do they offer?

Reading AWS VPC documentation, I gather they both map private IP addresses to internet route-able addresses for the outgoing requests and route the incoming responses from the internet to the requester on the subnet.

So what are the differences between them? What scenarios do I use a NAT Instance instead of (or besides) an Internet Gateway? Are they essentially EC2 instances running some network applications or are they special hardware like a router?

Instead of simply pointing to AWS documentation links, can you please explain these concepts with some background on what public and private subnets are, so that a beginner with limited networking knowledge can understand them easily? Also, when should I use a NAT Gateway instead of a NAT Instance?

P.S. I am new to AWS VPC, so I might be comparing apples to oranges here.

#34 Best answer 1 of AWS VPC - Internet Gateway vs. NAT (Score: 245)

Created: 2016-08-01

Internet Gateway

An Internet Gateway is a logical connection between an Amazon VPC and the Internet. It is not a physical device. Only one can be associated with each VPC. It does not limit the bandwidth of Internet connectivity. (The only limitation on bandwidth is the size of the Amazon EC2 instance, and it applies to all traffic – internal to the VPC and out to the Internet.)

If a VPC does not have an Internet Gateway, then the resources in the VPC cannot be accessed from the Internet (unless the traffic flows via a corporate network and VPN/Direct Connect).

A subnet is deemed to be a Public Subnet if it has a Route Table that directs traffic to the Internet Gateway.

NAT Instance

A NAT Instance is an Amazon EC2 instance configured to forward traffic to the Internet. It can be launched from an existing AMI, or can be configured via User Data like this:

#!/bin/sh
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
/sbin/iptables -t nat -A POSTROUTING -o eth0 -s 0.0.0.0/0 -j MASQUERADE
/sbin/iptables-save > /etc/sysconfig/iptables
mkdir -p /etc/sysctl.d/
cat <<EOF > /etc/sysctl.d/nat.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.eth0.send_redirects = 0
EOF

Instances in a private subnet that want to access the Internet can have their Internet-bound traffic forwarded to the NAT Instance via a Route Table configuration. The NAT Instance will then make the request to the Internet (since it is in a Public Subnet) and the response will be forwarded back to the private instance.

Traffic sent to a NAT Instance will typically be sent to an IP address that is not associated with the NAT Instance itself (it will be destined for a server on the Internet). Therefore, it is important to turn off the Source/Destination Check option on the NAT Instance otherwise the traffic will be blocked.
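
For illustration, turning that check off with boto3 is a single call (a sketch; the instance ID is a hypothetical placeholder):

import boto3

ec2 = boto3.client('ec2')

# Disable the Source/Destination Check so the NAT Instance may forward
# traffic that is not addressed to the instance itself.
ec2.modify_instance_attribute(
    InstanceId='i-0123456789abcdef0',  # hypothetical instance ID
    SourceDestCheck={'Value': False},
)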

NAT Gateway

AWS introduced a NAT Gateway Service that can take the place of a NAT Instance. The benefits of using a NAT Gateway service are:

  • It is a fully-managed service – just create it and it works automatically, including fail-over
  • It can burst up to 10 Gbps (a NAT Instance is limited to the bandwidth associated with the EC2 instance type)

However:

  • Security Groups cannot be associated with a NAT Gateway
  • You’ll need one in each AZ since they only operate in a single AZ
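
For concreteness, here is a minimal boto3 sketch of standing up a NAT Gateway and routing a private subnet's traffic through it (all resource IDs are hypothetical placeholders):

import boto3

ec2 = boto3.client('ec2')

# A NAT Gateway lives in a public subnet and needs an Elastic IP allocation.
nat = ec2.create_nat_gateway(
    SubnetId='subnet-0aaa1111',        # hypothetical public subnet
    AllocationId='eipalloc-0bbb2222',  # hypothetical Elastic IP allocation
)
nat_id = nat['NatGateway']['NatGatewayId']

# Send the private subnet's Internet-bound traffic to the NAT Gateway.
ec2.create_route(
    RouteTableId='rtb-0ccc3333',       # hypothetical private route table
    DestinationCidrBlock='0.0.0.0/0',
    NatGatewayId=nat_id,
)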

#34 Best answer 2 of AWS VPC - Internet Gateway vs. NAT(Score: 132)

Created: 2016-08-01

As far as NAT gateway vs. NAT instance, either will work. A NAT instance can be a little cheaper, but the NAT gateway is fully managed by AWS, so it has the advantage of not needing to maintain an EC2 instance just for NATing.

However, for the instances that need to be available to the Internet, NAT gateway/instances aren’t what you are looking for. A NAT will allow private instances (without a public IP) to access the Internet, but not the other way around. So, for the EC2 instances that need to be available to the Internet, you need to assign a public IP. There is a workaround if you really need to keep the EC2 instances private - you can use an elastic load balancer to proxy the requests.

Internet Gateways

The Internet Gateway is how your VPC connects to the internet. You use an Internet Gateway with a route table to tell the VPC how traffic gets to the internet.

An Internet Gateway appears in the VPC as just a name. Amazon manages the gateway and there’s nothing you really have a say in (other than to use it or not; remember that you might want a completely segmented subnet that cannot access the internet at all).

A public subnet means a subnet that has internet traffic routed through AWS’s Internet Gateway. Any instance within a public subnet can have a public IP assigned to it (e.g. an EC2 instance with “associate public ip address” enabled).

A private subnet means the instances are not publicly accessible from the internet. They do NOT have a public IP address. For example, you cannot access them directly via SSH. Instances on private subnets may still access the internet themselves though (i.e. by using a NAT Gateway).
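
To make "routed through the Internet Gateway" concrete, this boto3 sketch shows the default route that turns a subnet public (all IDs are hypothetical placeholders):

import boto3

ec2 = boto3.client('ec2')

# A 0.0.0.0/0 route to the Internet Gateway is what makes a subnet public.
ec2.create_route(
    RouteTableId='rtb-0ddd4444',    # hypothetical route table
    DestinationCidrBlock='0.0.0.0/0',
    GatewayId='igw-0eee5555',       # hypothetical Internet Gateway
)
ec2.associate_route_table(
    RouteTableId='rtb-0ddd4444',
    SubnetId='subnet-0fff6666',     # hypothetical subnet
)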

See also the original question on Stack Overflow.

#35: AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint (Score: 211)

Created: 2014-07-30 Last updated: 2019-07-16

Tags: ruby-on-rails, ruby, amazon-web-services, amazon-s3

I am trying to delete uploaded image files with the AWS-SDK-Core Ruby Gem.

I have the following code:

require 'aws-sdk-core'

def pull_picture(picture)
    Aws.config = {
        :access_key_id => ENV["AWS_ACCESS_KEY_ID"],
        :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"],
        :region => 'us-west-2'
    }

    s3 = Aws::S3::Client.new

    test = s3.get_object(
        :bucket => ENV["AWS_S3_BUCKET"],
        :key => picture.image_url.split('/')[-2]
    )
end

However, I am getting the following error:

The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.

I know the region is correct because if I change it to us-east-1, the following error shows up:

The specified key does not exist.

What am I doing wrong here?

#35 Best answer 1 of AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint (Score: 346)

Created: 2014-11-04 Last updated: 2016-12-21

It seems likely that this bucket was created in a different region, i.e. not us-west-2. That’s the only time I’ve seen “The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.”

US Standard is us-east-1
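
A quick way to confirm where a bucket actually lives (sketched in Python with boto3 rather than the question's Ruby; the bucket name is a hypothetical placeholder):

import boto3

s3 = boto3.client('s3')

# get_bucket_location returns None for us-east-1 (the old "US Standard")
# and the region name for every other region.
region = s3.get_bucket_location(Bucket='my-bucket')['LocationConstraint']
print(region or 'us-east-1')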

#35 Best answer 2 of AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint (Score: 24)

Created: 2015-08-11

Check your bucket location in the console, then use this as a reference for which endpoint to use: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

See also the original question on Stack Overflow.

#36: Amazon SimpleDB vs Amazon DynamoDB (Score: 210)

Created: 2012-01-22 Last updated: 2017-10-21

Tags: nosql, amazon-web-services, amazon-simpledb, amazon-dynamodb

I have a basic understanding of what Amazon SimpleDB is, but according to the Amazon DynamoDB description it seems to be almost the same: a NoSQL key-value store service.

Can someone simply explain the main differences between them and tell me in which cases to choose one over the other?

#36 Best answer 1 of Amazon SimpleDB vs Amazon DynamoDB (Score: 182)

Created: 2012-01-22 Last updated: 2014-09-11

This is addressed, to some extent, by the respective FAQ entry Q: How does Amazon DynamoDB differ from Amazon SimpleDB? Which should I use? (the hash link no longer works, but use in-page Find to locate the question within the page), with the most compact summary at the end of the paragraph:

While SimpleDB has scaling limitations, it may be a good fit for smaller workloads that require query flexibility. Amazon SimpleDB automatically indexes all item attributes and thus supports query flexibility at the cost of performance and scale.

So it’s a trade-off between performance/scalability and simplicity/flexibility, i.e. for simpler scenarios it might still be easier getting started with SimpleDB to avoid the complexities of architecting your application for DynamoDB (see below for a different perspective).

The linked FAQ entry references Werner Vogels’ Amazon DynamoDB – a Fast and Scalable NoSQL Database Service Designed for Internet Scale Applications as well, which is indeed an elaborate and thus highly recommended read concerning the history of NoSQL at Amazon in general and Dynamo in particular; it contains many more insights addressing your question as well, e.g.

It became obvious that developers [even Amazon engineers] strongly preferred simplicity to fine-grained control as they voted “with their feet” and adopted cloud-based AWS solutions, like Amazon S3 and Amazon SimpleDB, over Dynamo. [addition mine]

Obviously DynamoDB has been introduced to address this and could thus be qualified as a successor of SimpleDB rather than ‘just’ amending their existing NoSQL offering:

We concluded that an ideal solution would combine the best parts of the original Dynamo design (incremental scalability, predictable high performance) with the best parts of SimpleDB (ease of administration of a cloud service, consistency, and a table-based data model that is richer than a pure key-value store).

Werner’s summary accordingly suggests that DynamoDB is now a good fit for applications of any size:

Amazon DynamoDB is designed to maintain predictably high performance and to be highly cost efficient for workloads of any scale, from the smallest to the largest internet-scale applications.

#36 Best answer 2 of Amazon SimpleDB vs Amazon DynamoDB (Score: 27)

Created: 2013-08-08 Last updated: 2014-04-21

Whether to use SimpleDB or DynamoDB depends on your use case. I have shared some of my experience using SimpleDB instead of DynamoDB in certain cases. In another product, I used both SimpleDB and DynamoDB to store different data.

See also the original question on Stack Overflow.

#37: Can't push image to Amazon ECR - fails with "no basic auth credentials" (Score: 207)

Created: 2016-01-09 Last updated: 2020-02-04

Tags: amazon-web-services, docker, aws-ecr

I’m trying to push a Docker image to an Amazon ECR registry, using the Docker client version 1.9.1, build a34a1d5. I use aws ecr get-login --region us-east-1 to get the Docker login creds, then successfully log in with those creds as follows:

docker login -u AWS -p XXXX -e none https://####.dkr.ecr.us-east-1.amazonaws.com
WARNING: login credentials saved in /Users/ar/.docker/config.json
Login Succeeded

But when I try to push my image I get the following error:

$ docker push ####.dkr.ecr.us-east-1.amazonaws.com/image:latest
The push refers to a repository [####.dkr.ecr.us-east-1.amazonaws.com/image] (len: 1)
bcff5e7e3c7c: Preparing 
Post https://####.dkr.ecr.us-east-1.amazonaws.com/v2/image/blobs/uploads/: no basic auth credentials

I made sure that the aws user had the correct permissions. I also made sure that the repository allowed that user to push to it. Just to make sure that wasn’t an issue I set the registry to allow all users full access. Nothing changes the "no basic auth credentials" error. I don’t know how to begin to debug this since all the traffic is encrypted.

UPDATE

So I had a bit of a Homer Simpson D’Oh moment when I realized the root cause of my problem. I have access to multiple AWS accounts. Even though I was using aws configure to set my credentials for the account where I had set up my repository, the AWS CLI was actually using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. So when I did aws ecr get-login it was returning a login for the wrong account. I failed to notice that the account numbers were different until I went back just now to try some of the proposed answers. When I remove the environment variables, everything works correctly. I guess the moral of the story is: if you hit this error, make sure that the repository you are logging into matches the tag you have applied to the image.

#37 Best answer 1 of Can't push image to Amazon ECR - fails with "no basic auth credentials" (Score: 128)

Created: 2016-01-28

If you run $(aws ecr get-login --region us-east-1), it will all be done for you.
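
Under the hood, get-login just fetches a temporary token. Here is a sketch of retrieving the same credentials with boto3, in case you want to script the login yourself:

import base64
import boto3

ecr = boto3.client('ecr', region_name='us-east-1')

# The token decodes to "AWS:<password>" and is valid for 12 hours.
auth = ecr.get_authorization_token()['authorizationData'][0]
user, password = base64.b64decode(auth['authorizationToken']).decode().split(':')
registry = auth['proxyEndpoint']

# Then: docker login -u AWS -p <password> <registry>
print(user, registry)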

#37 Best answer 2 of Can't push image to Amazon ECR - fails with "no basic auth credentials" (Score: 59)

Created: 2016-08-04 Last updated: 2016-09-06

In my case this was a bug with Docker for Windows and their support for the Windows Credential Manager.

Open your ~/.docker/config.json and remove the "credsStore": "wincred" entry.

This will cause credentials to be written to the config.json directly. You’ll have to log in again afterwards.

You can track this bug through the tickets #22910 and #24968 on GitHub.

See also the original question on Stack Overflow.

#38: WARNING: UNPROTECTED PRIVATE KEY FILE! when trying to SSH into Amazon EC2 Instance (Score: 205)

Created: 2008-10-14 Last updated: 2013-02-28

Tags: ssh, amazon-web-services, amazon-ec2, chmod

I’m working to set up Panda on an Amazon EC2 instance. I set up my account and tools last night and had no problem using SSH to interact with my own personal instance, but right now I’m not being allowed permission into Panda’s EC2 instance. Getting Started with Panda

I’m getting the following error:

@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @

Permissions 0644 for '~/.ec2/id_rsa-gsg-keypair' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.

I’ve chmoded my keypair to 600 in order to get into my personal instance last night, and experimented at length setting the permissions to 0 and even generating new key strings, but nothing seems to be working.

Any help at all would be a great help!


Hm, it seems as though unless permissions are set to 777 on the directory, the ec2-run-instances script is unable to find my keyfiles. I’m new to SSH so I might be overlooking something.

#38 Best answer 1 of WARNING: UNPROTECTED PRIVATE KEY FILE! when trying to SSH into Amazon EC2 Instance (Score: 223)

Created: 2008-10-20 Last updated: 2016-06-20

I’ve chmoded my keypair to 600 in order to get into my personal instance last night,

And this is the way it is supposed to be.

From the EC2 documentation we have "If you're using OpenSSH (or any reasonably paranoid SSH client) then you'll probably need to set the permissions of this file so that it's only readable by you." The Panda documentation you link to links to Amazon's documentation but really doesn't convey how important it all is.

The idea is that the key pair files are like passwords and need to be protected. So, the ssh client you are using requires that those files be secured and that only your account can read them.

Setting the directory to 700 really should be enough, but 777 is not going to hurt as long as the files are 600.

Any problems you are having are client side, so be sure to include local OS information with any follow up questions!

#38 Best answer 2 of WARNING: UNPROTECTED PRIVATE KEY FILE! when trying to SSH into Amazon EC2 Instance (Score: 63)

Created: 2008-10-14 Last updated: 2013-02-28

Make sure that the directory containing the private key files is set to 700

chmod 700 ~/.ec2
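
Combining both answers (directory at 700, key file at 600), a small Python sketch of the same fix; the key path matches the one from the question:

import os

# 700 on the directory, 600 on the private key file itself.
os.chmod(os.path.expanduser('~/.ec2'), 0o700)
os.chmod(os.path.expanduser('~/.ec2/id_rsa-gsg-keypair'), 0o600)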

See also the original question on Stack Overflow.

#39: Download an already uploaded Lambda function (Score: 202)

Created: 2016-12-18 Last updated: 2017-07-21

Tags: amazon-web-services, aws-lambda

I created a Lambda function in AWS (Python) using “upload .zip”. I lost those files and need to make some changes; is there any way to download that .zip?

#39 Best answer 1 of Download an already uploaded Lambda function (Score: 352)

Created: 2016-12-18 Last updated: 2021-03-19

Yes!

Navigate over to your lambda function settings and on the top right you will have a button called “Actions”. In the drop down menu select “export” and in the popup click “Download deployment package” and the function will download in a .zip file.

[Screenshot 1: the “Actions” button at the top right of the Lambda console]

[Screenshot 2: the export popup; click “Download deployment package” there]

#39 Best answer 2 of Download an already uploaded Lambda function (Score: 33)

Created: 2018-12-10 Last updated: 2019-06-25

Update: added a link to the script by sambhaji-sawant. Fixed typos and improved the answer and script based on comments!

You can use the aws-cli to download the zip of any Lambda.

First, get the URL of the Lambda's zip:

$ aws lambda get-function --function-name $functionName --query 'Code.Location'

Then use wget or curl to download the zip from that URL:

$ wget -O myfunction.zip URL_from_step_1

Additionally you can list all functions on your AWS account using

$ aws lambda list-functions

I made a simple bash script to download all the Lambda functions from your AWS account in parallel. You can see it here :)

Note: you will need to set up the aws-cli (via aws configure) before using the above commands, or any aws-cli command.

Full guide here
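
Both steps can also be done in one short boto3 sketch (the function name is a hypothetical placeholder; Code.Location is a presigned S3 URL that expires after roughly ten minutes):

import urllib.request

import boto3

lam = boto3.client('lambda')

# Fetch the presigned URL of the deployment package and download it.
url = lam.get_function(FunctionName='my-function')['Code']['Location']
urllib.request.urlretrieve(url, 'myfunction.zip')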

See also the original question on Stack Overflow.

#40: boto3 client NoRegionError: You must specify a region error only sometimes (Score: 194)

Created: 2016-11-02 Last updated: 2020-06-18

Tags: python, linux, amazon-web-services, boto3, aws-kms

I have a boto3 client:

boto3.client('kms')

It works most of the time, but the NoRegionError occurs on new machines, which open and close dynamically. The error is raised from this check in botocore:

    if endpoint is None:
        if region_name is None:
            # Raise a more specific error message that will give
            # better guidance to the user what needs to happen.
            raise NoRegionError()

Why is this happening? And why only part of the time?

#40 Best answer 1 of boto3 client NoRegionError: You must specify a region error only sometimes (Score: 420)

Created: 2016-11-02 Last updated: 2019-02-07

One way or another, you must tell boto3 in which region you want the kms client to be created. This can be done explicitly using the region_name parameter, as in:

kms = boto3.client('kms', region_name='us-west-2')

or you can have a default region associated with your profile in your ~/.aws/config file as in:

[default]
region=us-west-2

or you can use an environment variable as in:

export AWS_DEFAULT_REGION=us-west-2

but you do need to tell boto3 which region to use.
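
For completeness (this is not from the original answer), boto3 also lets you fix the region once in code via its default session:

import boto3

# Set the default session's region before any clients are created.
boto3.setup_default_session(region_name='us-west-2')
kms = boto3.client('kms')  # no NoRegionError now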

#40 Best answer 2 of boto3 client NoRegionError: You must specify a region error only sometimes (Score: 15)

Created: 2019-05-29 Last updated: 2019-05-29

import os
os.environ['AWS_DEFAULT_REGION'] = 'your_region_name'

In my case, case sensitivity mattered.

See also the original question on Stack Overflow.


Notes:
  1. This page uses the Stack Exchange API to get the relevant data from the Stack Overflow community.
  2. Content license on this page is CC BY-SA 3.0.
  3. score = upvotes - downvotes.