
Most votes on amazon-web-services questions 6

  • #51 How to rename AWS S3 Bucket name
  • #52 Amazon S3 direct file upload from client browser - private key disclosure
  • #53 Opening port 80 EC2 Amazon web services
  • #54 Font from origin has been blocked from loading by Cross-Origin Resource Sharing policy
  • #55 Unable to verify secret hash for client in Amazon Cognito Userpools
  • #56 AccessDenied for ListObjects for S3 bucket when permissions are s3:*
  • #57 EC2 Instance Cloning
  • #58 S3 Bucket action doesn't apply to any resources
  • #59 AWS Lambda Scheduled Tasks
  • #60 How to choose an AWS profile when using boto3 to connect to CloudFront

Read all the top-voted questions and answers on a single page.

#51: How to rename AWS S3 Bucket name (Score: 172)

Created: 2017-01-06 Last updated: 2017-09-13

Tags: amazon-web-services, amazon-s3, cname

After all the tough work of migration etc., I just realised that if I need to serve the content using a CNAME (e.g. media.abc.com), the bucket name needs to match the CNAME (media.abc.com) for it to work properly.

I also just realised that S3 doesn’t allow renaming a bucket directly from the console.

Is there any way to work around this?

#51 Best answer 1 of How to rename AWS S3 Bucket name (Score: 272)

Created: 2017-04-07 Last updated: 2017-07-05

Solution

aws s3 mb s3://[new-bucket]
aws s3 sync s3://[old-bucket] s3://[new-bucket]
aws s3 rb --force s3://[old-bucket]

Explanation

There’s no rename-bucket functionality for S3 because there are technically no folders in S3, so we have to handle every file within the bucket.

The code above will 1. create a new bucket, 2. copy files over and 3. delete the old bucket. That’s it.

If you have lots of files in your bucket and you’re worried about the costs, then read on. Behind the scenes what happens is that all the files within the bucket are first copied and then deleted. It should cost an insignificant amount if you have a few thousand files. Otherwise check this answer to see how this would impact you.

Example

In the following example we create and populate the old bucket and then sync the files to the new one. Check the output of the commands to see what AWS does.

> # bucket suffix so we keep it unique
> suffix="ieXiy2"  # used `pwgen -1 -6` to get this
>
> # populate old bucket
> echo "asdf" > asdf.txt
> echo "yxcv" > yxcv.txt
> aws s3 mb s3://old-bucket-$suffix
make_bucket: old-bucket-ieXiy2
> aws s3 cp asdf.txt s3://old-bucket-$suffix/asdf.txt
upload: ./asdf.txt to s3://old-bucket-ieXiy2/asdf.txt
> aws s3 cp yxcv.txt s3://old-bucket-$suffix/yxcv.txt
upload: ./yxcv.txt to s3://old-bucket-ieXiy2/yxcv.txt
>
> # "rename" to new bucket
> aws s3 mb s3://new-bucket-$suffix
make_bucket: new-bucket-ieXiy2
> aws s3 sync s3://old-bucket-$suffix s3://new-bucket-$suffix
copy: s3://old-bucket-ieXiy2/yxcv.txt to s3://new-bucket-ieXiy2/yxcv.txt
copy: s3://old-bucket-ieXiy2/asdf.txt to s3://new-bucket-ieXiy2/asdf.txt
> aws s3 rb --force s3://old-bucket-$suffix
delete: s3://old-bucket-ieXiy2/asdf.txt
delete: s3://old-bucket-ieXiy2/yxcv.txt
remove_bucket: old-bucket-ieXiy2

#51 Best answer 2 of How to rename AWS S3 Bucket name (Score: 117)

Created: 2017-01-06

I think the only way is to create a new bucket with the correct name and then copy all your objects from the old bucket to the new bucket. You can do it using the AWS CLI.
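
The same “create, copy, delete” flow can also be scripted with boto3 rather than the CLI. A minimal sketch, assuming default credentials/region and hypothetical bucket names (outside us-east-1, create_bucket also needs a CreateBucketConfiguration):

# Minimal boto3 sketch of the "rename" flow; bucket names are placeholders.
import boto3

s3 = boto3.resource('s3')
old = s3.Bucket('old-bucket-name')
new = s3.create_bucket(Bucket='new-bucket-name')  # add CreateBucketConfiguration outside us-east-1

# Copy every object into the new bucket, then delete the originals and the old bucket.
for obj in old.objects.all():
    s3.Object(new.name, obj.key).copy({'Bucket': old.name, 'Key': obj.key})
old.objects.all().delete()
old.delete()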

See also original question in stackoverflow

#52: Amazon S3 direct file upload from client browser - private key disclosure (Score: 171)

Created: 2013-07-11 Last updated: 2020-05-20

Tags: javascript, amazon-web-services, authentication, amazon-s3

I’m implementing a direct file upload from the client machine to Amazon S3 via the REST API, using only JavaScript and no server-side code. All works fine, but one thing is worrying me…

When I send a request to the Amazon S3 REST API, I need to sign the request and put the signature into the Authentication header. To create the signature, I must use my secret key. But everything happens on the client side, so the secret key can easily be revealed from the page source (even if I obfuscate/encrypt my sources).

How can I handle this? And is it a problem at all? Maybe I can limit the key’s usage to REST API calls from a specific CORS origin and to only PUT and POST methods, or maybe link the key to only S3 and a specific bucket? Maybe there are other authentication methods?

A “serverless” solution is ideal, but I can consider involving some server-side processing, excluding uploading the file to my server and then sending it to S3.

#52 Best answer 1 of Amazon S3 direct file upload from client browser - private key disclosure (Score: 225)

Created: 2013-07-15 Last updated: 2013-07-17

I think what you want is Browser-Based Uploads Using POST.

Basically, you do need server-side code, but all it does is generate signed policies. Once the client-side code has the signed policy, it can upload using POST directly to S3 without the data going through your server.

Here’s the official doc links:

Diagram: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html

Example code: http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html

The signed policy would go in your html in a form like this:

<html>
  <head>
    ...
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    ...
  </head>
  <body>
  ...
  <form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    Key to upload: <input type="input" name="key" value="user/eric/" /><br />
    <input type="hidden" name="acl" value="public-read" />
    <input type="hidden" name="success_action_redirect" value="http://johnsmith.s3.amazonaws.com/successful_upload.html" />
    Content-Type: <input type="input" name="Content-Type" value="image/jpeg" /><br />
    <input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
    Tags for File: <input type="input" name="x-amz-meta-tag" value="" /><br />
    <input type="hidden" name="AWSAccessKeyId" value="AKIAIOSFODNN7EXAMPLE" />
    <input type="hidden" name="Policy" value="POLICY" />
    <input type="hidden" name="Signature" value="SIGNATURE" />
    File: <input type="file" name="file" /> <br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>
  ...
</html>

Notice the FORM action is sending the file directly to S3 - not via your server.

Every time one of your users wants to upload a file, you would create the POLICY and SIGNATURE on your server. You return the page to the user’s browser. The user can then upload a file directly to S3 without going through your server.

When you sign the policy, you typically make the policy expire after a few minutes. This forces your users to talk to your server before uploading. This lets you monitor and limit uploads if you desire.

The only data going to or from your server is the signed URLs. Your secret keys stay secret on the server.
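
In Python, boto3 exposes this server-side step as generate_presigned_post. A minimal sketch, assuming a hypothetical bucket named my-upload-bucket:

# Server side only: build a signed POST policy the browser can submit directly to S3.
import boto3

s3 = boto3.client('s3')
post = s3.generate_presigned_post(
    Bucket='my-upload-bucket',            # hypothetical bucket name
    Key='user/eric/${filename}',          # S3 substitutes the uploaded file's name
    Fields={'acl': 'public-read'},
    Conditions=[
        {'acl': 'public-read'},
        ['content-length-range', 0, 10 * 1024 * 1024],  # cap uploads at 10 MB
    ],
    ExpiresIn=300,                        # policy expires after a few minutes
)

# post['url'] becomes the form action; post['fields'] are the hidden inputs
# (key, policy, signature, ...) to render into the HTML form above.
print(post['url'])
print(post['fields'])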

#52 Best answer 2 of Amazon S3 direct file upload from client browser - private key disclosure (Score: 42)

Created: 2015-06-25 Last updated: 2017-06-20

You can do this with AWS S3 and Cognito; try this link:

http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-examples.html#Amazon_S3

Also try this code.

Just change the Region, IdentityPoolId and your bucket name:

<!DOCTYPE html>
<html>

<head>
    <title>AWS S3 File Upload</title>
    <script src="https://sdk.amazonaws.com/js/aws-sdk-2.1.12.min.js"></script>
</head>

<body>
    <input type="file" id="file-chooser" />
    <button id="upload-button">Upload to S3</button>
    <div id="results"></div>
    <script type="text/javascript">
    AWS.config.region = 'your-region'; // 1. Enter your region

    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
        IdentityPoolId: 'your-IdentityPoolId' // 2. Enter your identity pool
    });

    AWS.config.credentials.get(function(err) {
        if (err) alert(err);
        console.log(AWS.config.credentials);
    });

    var bucketName = 'your-bucket'; // Enter your bucket name
    var bucket = new AWS.S3({
        params: {
            Bucket: bucketName
        }
    });

    var fileChooser = document.getElementById('file-chooser');
    var button = document.getElementById('upload-button');
    var results = document.getElementById('results');
    button.addEventListener('click', function() {

        var file = fileChooser.files[0];

        if (file) {

            results.innerHTML = '';
            var objKey = 'testing/' + file.name;
            var params = {
                Key: objKey,
                ContentType: file.type,
                Body: file,
                ACL: 'public-read'
            };

            bucket.putObject(params, function(err, data) {
                if (err) {
                    results.innerHTML = 'ERROR: ' + err;
                } else {
                    listObjs();
                }
            });
        } else {
            results.innerHTML = 'Nothing to upload.';
        }
    }, false);
    function listObjs() {
        var prefix = 'testing';
        bucket.listObjects({
            Prefix: prefix
        }, function(err, data) {
            if (err) {
                results.innerHTML = 'ERROR: ' + err;
            } else {
                var objKeys = "";
                data.Contents.forEach(function(obj) {
                    objKeys += obj.Key + "<br>";
                });
                results.innerHTML = objKeys;
            }
        });
    }
    </script>
</body>

</html>
For more details, please check GitHub.

See also original question in stackoverflow

#53: Opening port 80 EC2 Amazon web services (Score: 164)

Created: 2011-02-15 Last updated: 2017-01-11

Tags: amazon-web-services, amazon-ec2

I’ve opened port 80 in the web console on my EC2 instance’s security group, but I still can’t access it via the public DNS in the browser.

Any ideas?

#53 Best answer 1 of Opening port 80 EC2 Amazon web services (Score: 344)

Created: 2012-05-04 Last updated: 2016-06-29

This is actually really easy:

  • Go to the “Network & Security” -> Security Group settings in the left-hand navigation
  • Find the Security Group that your instance is a part of
  • Click on Inbound Rules
  • Use the drop-down and add HTTP (port 80)
  • Click Apply and enjoy (a scripted equivalent is sketched below)
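
If you would rather script the rule than click through the console, the same change can be made with boto3; a sketch, assuming a hypothetical security group ID:

# Open inbound TCP port 80 to the world on an existing security group.
import boto3

ec2 = boto3.client('ec2')
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',   # hypothetical security group ID
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 80,
        'ToPort': 80,
        'IpRanges': [{'CidrIp': '0.0.0.0/0', 'Description': 'HTTP from anywhere'}],
    }],
)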

#53 Best answer 2 of Opening port 80 EC2 Amazon web services (Score: 19)

Created: 2011-02-15

Some quick tips:

  1. Disable the inbuilt firewall on your Windows instances.
  2. Use the IP address rather than the DNS entry.
  3. Create a security group for TCP ports 1 to 65000 and for source 0.0.0.0/0. It’s obviously not to be used for production purposes, but it will help rule out the Security Groups as a source of problems.
  4. Check that you can actually ping your server. This may also necessitate some Security Group modification.

See also original question in stackoverflow

#54: Font from origin has been blocked from loading by Cross-Origin Resource Sharing policy (Score: 163)

Created: 2014-08-30 Last updated: 2018-07-29

Tags: amazon-web-services, amazon-s3, cors, amazon-cloudfront

I’m receiving the following error in a couple of Chrome browsers, but not all. I’m not entirely sure what the issue is at this point.

Font from origin ‘https://ABCDEFG.cloudfront.net’ has been blocked from loading by Cross-Origin Resource Sharing policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘https://sub.domain.com’ is therefore not allowed access.

I have the following CORS Configuration on S3

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedHeader>*</AllowedHeader>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

The request

Remote Address:1.2.3.4:443
Request URL:https://abcdefg.cloudfront.net/folder/path/icons-f10eba064933db447695cf85b06f7df3.woff
Request Method:GET
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate
Accept-Language:en-US,en;q=0.8
Cache-Control:no-cache
Connection:keep-alive
Host:abcdefg.cloudfront.net
Origin:https://sub.domain.com
Pragma:no-cache
Referer:https://abcdefg.cloudfront.net/folder/path/icons-e283e9c896b17f5fb5717f7c9f6b05eb.css
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.94 Safari/537.36

All other requests from Cloudfront/S3 work properly, including JS files.

#54 Best answer 1 of Font from origin has been blocked from loading by Cross-Origin Resource Sharing policy (Score: 92)

Created: 2015-01-30 Last updated: 2015-06-23

Add this rule to your .htaccess

Header add Access-Control-Allow-Origin "*" 

Even better, as suggested by @david thomas, you can use a specific domain value, e.g.

Header add Access-Control-Allow-Origin "your-domain.com"

#54 Best answer 2 of Font from origin has been blocked from loading by Cross-Origin Resource Sharing policy (Score: 59)

Created: 2014-09-06 Last updated: 2017-05-23

Since roughly Sep/Oct 2014, Chrome makes fonts subject to the same CORS checks that Firefox has applied for some time (https://code.google.com/p/chromium/issues/detail?id=286681). There is a discussion of this at https://groups.google.com/a/chromium.org/forum/?fromgroups=#!topic/blink-dev/TT9D5-Zfnzw

Given that for fonts the browser may do a preflight check, your S3 CORS policy needs to allow the relevant request headers as well. You can check your page in, say, Safari (which at present doesn’t do CORS checking for fonts) and Firefox (which does) to double-check that this is the problem described.

See the Stack Overflow answer at https://stackoverflow.com/questions/12229844/amazon-s3-cors-cross-origin-resource-sharing-and-firefox-cross-domain-font-loa for the Amazon S3 CORS details.

NB: because this used to apply to Firefox only, it may help to search for Firefox rather than Chrome when researching the issue.
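
If you manage the bucket from code rather than the console, the equivalent CORS configuration can be applied with boto3; a sketch, assuming a hypothetical bucket name and the origin from the question:

# Apply a CORS rule so font requests from the site's origin are allowed.
import boto3

s3 = boto3.client('s3')
s3.put_bucket_cors(
    Bucket='my-font-bucket',   # hypothetical bucket name
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['https://sub.domain.com'],
            'AllowedMethods': ['GET'],
            'AllowedHeaders': ['*'],
            'MaxAgeSeconds': 3000,
        }]
    },
)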

See also original question in stackoverflow

#55: Unable to verify secret hash for client in Amazon Cognito Userpools (Score: 162)

Created: 2016-05-25 Last updated: 2018-06-09

Tags: amazon-web-services, amazon-cognito

I am stuck in the “Amazon Cognito Identity user pools” process.

I have tried all possible code for authenticating a user in Cognito user pools, but I always get an error saying “Error: Unable to verify secret hash for client 4b*******fd”.

Here is the code:

AWS.config.region = 'us-east-1'; // Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:b64bb629-ec73-4569-91eb-0d950f854f4f'
});

AWSCognito.config.region = 'us-east-1';
AWSCognito.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:b6b629-er73-9969-91eb-0dfffff445d'
});

AWSCognito.config.update({accessKeyId: 'AKIAJNYLRONAKTKBXGMWA', secretAccessKey: 'PITHVAS5/UBADLU/dHITesd7ilsBCm'})

var poolData = { 
    UserPoolId : 'us-east-1_l2arPB10',
    ClientId : '4bmsrr65ah3oas5d4sd54st11k'
};
var userPool = new AWSCognito.CognitoIdentityServiceProvider.CognitoUserPool(poolData);

var userData = {
     Username : '[email protected]',
     Pool : userPool
};

var cognitoUser = new AWSCognito.CognitoIdentityServiceProvider.CognitoUser(userData);

cognitoUser.confirmRegistration('123456', true, function(err, result) {
    if (err) {
        alert(err);
        return;
    }
    console.log('call result: ' + result);
});

#55 Best answer 1 of Unable to verify secret hash for client in Amazon Cognito Userpools (Score: 210)

Created: 2016-05-26

It seems that currently AWS Cognito doesn’t handle the client secret perfectly. It will work in the near future, but as of now it is still a beta version.

For me it works fine for an app without a client secret but fails for an app with a client secret.

So in your user pool try to create a new app without generating a client secret. Then use that app to sign up a new user or to confirm registration.

#55 Best answer 2 of Unable to verify secret hash for client in Amazon Cognito Userpools (Score: 82)

Created: 2017-01-03

According to the Docs: http://docs.aws.amazon.com/cognito/latest/developerguide/setting-up-the-javascript-sdk.html

The Javascript SDK doesn’t support Apps with a Client Secret.

The instructions now state that you need to uncheck the “Generate Client Secret” when creating the app for the User Pool.
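
If you do keep a client secret (for example when calling Cognito from a backend instead of the JavaScript SDK), the API expects a SecretHash parameter: a base64-encoded HMAC-SHA256 of username + client ID, keyed with the client secret. A minimal Python sketch with hypothetical values:

# Compute the SECRET_HASH Cognito expects when the app client has a secret.
import base64
import hashlib
import hmac

def secret_hash(username, client_id, client_secret):
    message = (username + client_id).encode('utf-8')
    digest = hmac.new(client_secret.encode('utf-8'), message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Hypothetical values; pass the result as the SecretHash/SECRET_HASH parameter.
print(secret_hash('user@example.com', 'your-app-client-id', 'your-app-client-secret'))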

See also original question in stackoverflow

#56: AccessDenied for ListObjects for S3 bucket when permissions are s3:* (Score: 160)

Created: 2016-08-04 Last updated: 2020-02-12

Tags: amazon-web-services, amazon-s3

I am getting:

An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied

when I try to get a folder from my S3 bucket.

Using this command:

aws s3 cp s3://bucket-name/data/all-data/ . --recursive

The IAM permissions for the bucket look like this:

{
    "Version": "version_id",
    "Statement": [
        {
            "Sid": "some_id",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/*"
            ]
        }
    ]
}

What do I need to change to be able to copy and ls successfully?

#56 Best answer 1 of AccessDenied for ListObjects for S3 bucket when permissions are s3:* (Score: 250)

Created: 2016-08-04 Last updated: 2020-05-19

You have given permission to perform commands on objects inside the S3 bucket, but you have not given permission to perform any actions on the bucket itself.

Slightly modifying your policy would look like this:

{
  "Version": "version_id",
  "Statement": [
    {
        "Sid": "some_id",
        "Effect": "Allow",
        "Action": [
            "s3:*"
        ],
        "Resource": [
            "arn:aws:s3:::bucketname",
            "arn:aws:s3:::bucketname/*"
        ]
    }
  ] 
}

However, that probably gives more permission than is needed. Following the AWS IAM best practice of Granting Least Privilege would look something like this:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket"
          ],
          "Resource": [
              "arn:aws:s3:::bucketname"
          ]
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::bucketname/*"
          ]
      }
  ]
}
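
A quick way to sanity-check that both statements took effect is to list the bucket and then fetch an object with boto3; a sketch, using the placeholder bucket name and a hypothetical object key:

# s3:ListBucket applies to the bucket ARN; s3:GetObject applies to the object ARN.
import boto3

s3 = boto3.client('s3')

# Requires s3:ListBucket on arn:aws:s3:::bucketname
resp = s3.list_objects_v2(Bucket='bucketname', Prefix='data/all-data/')
for obj in resp.get('Contents', []):
    print(obj['Key'])

# Requires s3:GetObject on arn:aws:s3:::bucketname/* (object key is hypothetical)
s3.download_file('bucketname', 'data/all-data/example.csv', 'example.csv')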

#56 Best answer 2 of AccessDenied for ListObjects for S3 bucket when permissions are s3:* (Score: 41)

Created: 2017-08-02 Last updated: 2019-10-03

If you wanted to copy all S3 bucket objects using the command “aws s3 cp s3://bucket-name/data/all-data/ . --recursive” as you mentioned, here is a safe and minimal policy to do that:

{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Effect": "Allow",
          "Action": [
              "s3:ListBucket"
          ],
          "Resource": [
              "arn:aws:s3:::bucket-name"
          ],
          "Condition": {
              "StringLike": {
                  "s3:prefix": "data/all-data/*"
              }
          }
      },
      {
          "Effect": "Allow",
          "Action": [
              "s3:GetObject"
          ],
          "Resource": [
              "arn:aws:s3:::bucket-name/data/all-data/*"
          ]
      }
  ]
}

The first statement in this policy allows listing objects inside a specific bucket sub-directory. The resource needs to be the ARN of the S3 bucket, and to limit listing to only a sub-directory in that bucket you can edit the “s3:prefix” value.

The second statement in this policy allows getting objects inside the bucket at a specific sub-directory. This means that you will be able to copy anything inside the “s3://bucket-name/data/all-data/” path. Be aware that this doesn’t allow you to copy from parent paths such as “s3://bucket-name/data/”.

This solution is specific to limiting use of AWS CLI commands; if you need to limit S3 access through the AWS console or API, then more policies will be needed. I suggest taking a look here: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/.

A similar issue, which led me to this solution, can be found here: https://github.com/aws/aws-cli/issues/2408

Hope this helps!

See also original question in stackoverflow

#57: EC2 Instance Cloning (Score: 159)

Created: 2010-02-02 Last updated: 2012-04-03

Tags: amazon-ec2, amazon-web-services

Is it possible to clone an EC2 instance, data and all?

#57 Best answer 1 of EC2 Instance Cloning (Score: 134)

Created: 2010-02-02 Last updated: 2014-03-24

You can make an AMI of an existing instance, and then launch other instances using that AMI.

#57 Best answer 2 of EC2 Instance Cloning (Score: 134)

Created: 2012-04-13 Last updated: 2013-01-04

The easiest way is through the web management console:

  1. Go to the instance
  2. Select the instance and click on instance actions
  3. Create image

Once you have an image you can launch another cloned instance, data and all. :)
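
The console steps map onto two API calls, so cloning can also be scripted; a boto3 sketch with a hypothetical instance ID:

# Create an AMI from a running instance, then launch a clone from it.
import boto3

ec2 = boto3.client('ec2')
image = ec2.create_image(
    InstanceId='i-0123456789abcdef0',   # hypothetical instance ID
    Name='my-instance-clone',
)

# Wait until the AMI is available, then launch a copy of the instance.
ec2.get_waiter('image_available').wait(ImageIds=[image['ImageId']])
ec2.run_instances(ImageId=image['ImageId'], InstanceType='t3.micro',
                  MinCount=1, MaxCount=1)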

See also original question in stackoverflow

#58: S3 Bucket action doesn't apply to any resources (Score: 158)

Created: 2017-05-28

Tags: amazon-web-services, amazon-s3

I’m following the instructions from this answer to generate the following S3 bucket policy:

{
  "Id": "Policy1495981680273",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1495981517155",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::surplace-audio",
      "Principal": "*"
    }
  ]
}

I get back the following error:

Action does not apply to any resource(s) in statement

What am I missing from my policy?

#58 Best answer 1 of S3 Bucket action doesn't apply to any resources (Score: 281)

Created: 2017-05-28

From IAM docs, http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Action

Some services do not let you specify actions for individual resources; instead, any actions that you list in the Action or NotAction element apply to all resources in that service. In these cases, you use the wildcard * in the Resource element.

With this information, resource should have a value like below:

"Resource": "arn:aws:s3:::surplace-audio/*"

#58 Best answer 2 of S3 Bucket action doesn't apply to any resources (Score: 110)

Created: 2018-09-28 Last updated: 2018-12-02

Just removing the s3:ListBucket permission wasn’t really a good enough solution for me, and probably isn’t for many others.

If you want the s3:ListBucket permission, you need to specify just the plain ARN of the bucket (without the /* at the end), as this permission applies to the bucket itself and not to items within the bucket.

As shown below, you have to have the s3:ListBucket permission as a separate statement from the permissions pertaining to items within the bucket like s3:GetObject and s3:PutObject:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"        
      ],
      "Principal": {
        "AWS": "[IAM ARN HERE]"
      },
      "Resource": "arn:aws:s3:::my-bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject", 
        "s3:PutObject"
      ],
      "Principal": {
        "AWS": "[IAM ARN HERE]"
      },
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}

See also original question in stackoverflow

#59: AWS Lambda Scheduled Tasks (Score: 157)

Created: 2014-12-09

Tags: amazon-web-services, cron-task, aws-lambda

Amazon announced AWS Lambda (http://aws.amazon.com/lambda/).

The product description includes:

Scheduled Tasks

AWS Lambda functions can be triggered by external event timers, so functions can be run during regularly scheduled maintenance times or non-peak hours. For example, you can trigger an AWS Lambda function to perform nightly archive cleanups during non-busy hours.

When I read this, I understood I could finally have a way to consistently do “cron-like” tasks. I want to run a specific query every day at 5 PM, let’s say.

However, I cannot find this anywhere in the documentation. They only mention triggers on programmatic events, or events from other AWS services.

Did I misunderstand? Or can someone point me to the documentation?

#59 Best answer 1 of AWS Lambda Scheduled Tasks (Score: 164)

Created: 2014-12-09 Last updated: 2017-03-11

Native Support for Scheduled Events added October 8, 2015:

As announced in this AWS blog post, scheduling is now supported as an event source type (also called triggers) called “CloudWatch Events - Schedule”, and can be expressed as a rate or a cron expression.

Add Scheduled Event to a new lambda

Navigate to the ‘Configure triggers’ step of creation, and specify the ‘CloudWatch Event - Schedule’ trigger. Example configuration below:

Image that shows configuration for creating a scheduled event at 5pm UTC.

Add Scheduled Event to an existing lambda

Navigate to the ‘Triggers’ tab of your lambda, select ‘Add Trigger’, and specify the ‘CloudWatch Event - Schedule’ trigger. Example screenshot where I have an existing lambda with an SNS trigger:

Image that shows how to navigate to add trigger UI from Lambda console.

Once loaded, the UI to configure this trigger is identical to the screenshot in the “Add Scheduled Event to a new lambda” section above.

Discussion

For your example case, you’ll want to use cron() instead of rate(). Cron expressions in lambda require all fields and are expressed in UTC. So to run a function every day at 5pm (UTC), use the following cron expression:

cron(0 17 * * ? *)
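
If you would rather wire the schedule up from code than from the console, the same trigger can be created through the CloudWatch Events API; a boto3 sketch with hypothetical names and ARNs:

# Create a 5 PM UTC schedule and point it at an existing Lambda function.
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

rule = events.put_rule(
    Name='daily-5pm-utc',              # hypothetical rule name
    ScheduleExpression='cron(0 17 * * ? *)',
)

# Let CloudWatch Events invoke the function, then attach it as the rule's target.
lambda_client.add_permission(
    FunctionName='my-function',        # hypothetical function name
    StatementId='daily-5pm-utc-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)
events.put_targets(
    Rule='daily-5pm-utc',
    Targets=[{
        'Id': 'my-function-target',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function',  # hypothetical ARN
    }],
)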

Notes

  • The name of this event type has changed from “Scheduled Event” to “CloudWatch Events - Schedule” since this feature was first released.

  • Prior to the release of this feature, the recommended solution to this issue (per “Getting Started with AWS Lambda” at 42min 50secs) was to use SWF to create a timer, or to create a timer with an external application.

  • The Lambda UI has been overhauled since the scheduled event blog post came out, and the screenshots within are no longer exact. See my updated screenshots above from 3/10/2017 for latest revisions.

#59 Best answer 2 of AWS Lambda Scheduled Tasks (Score: 18)

Created: 2015-07-11 Last updated: 2015-07-12

Since the time of this post, another solution seems to have emerged: Schedule Recurring AWS Lambda Invocations With The Unreliable Town Clock (UTC), in which the author proposes subscribing to the Unreliable Town Clock SNS topic. I’ve used neither SWF nor SNS, but it seems to me that the SNS solution is simpler. Here’s an excerpt from the article:

Unreliable Town Clock (UTC)

The Unreliable Town Clock (UTC) is a new, free, public SNS Topic (Amazon Simple Notification Service) that broadcasts a “chime” message every quarter hour to all subscribers. It can send the chimes to AWS Lambda functions, SQS queues, and email addresses.

You can use the chime attributes to run your code every fifteen minutes, or only run your code once an hour (e.g., when minute == “00”) or once a day (e.g., when hour == “00” and minute == “00”) or any other series of intervals.

You can even subscribe a function you want to run only once at a specific time in the future: have the function ignore all invocations until after its target time. When the time comes, it can perform its job and then unsubscribe itself from the SNS Topic.

Connecting your code to the Unreliable Town Clock is fast and easy. No application process or account creation is required.
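
As a rough illustration of the “ignore the chimes until it’s time” idea, here is a minimal Lambda handler sketch. It assumes the chime arrives as an SNS event whose message is JSON containing hour and minute fields, as the article describes; run_nightly_job is a hypothetical placeholder for your own work:

# Act only on the 17:00 chime; ignore every other quarter-hour.
import json

def handler(event, context):
    message = json.loads(event['Records'][0]['Sns']['Message'])
    if message.get('hour') == '17' and message.get('minute') == '00':
        run_nightly_job()
    return 'ok'

def run_nightly_job():
    print('doing the scheduled work')   # hypothetical placeholder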

See also original question in stackoverflow

#60: How to choose an AWS profile when using boto3 to connect to CloudFront (Score: 156)

Created: 2015-10-27 Last updated: 2018-04-27

Tags: python, amazon-web-services, boto3

I am using the Boto 3 Python library and want to connect to AWS CloudFront. I need to specify the correct AWS profile (AWS credentials), but looking at the official documentation, I see no way to specify it.

I am initializing the client using the code: client = boto3.client('cloudfront')

However, this results in it using the default profile to connect. I couldn’t find a method where I can specify which profile to use.

#60 Best answer 1 of How to choose an AWS profile when using boto3 to connect to CloudFront (Score: 279)

Created: 2015-10-28 Last updated: 2020-07-15

I think the docs aren’t wonderful at exposing how to do this. It has been a supported feature for some time, however, and there are some details in this pull request.

So there are three different ways to do this:

Option A) Create a new session with the profile

    dev = boto3.session.Session(profile_name='dev')

Option B) Change the profile of the default session in code

    boto3.setup_default_session(profile_name='dev')

Option C) Change the profile of the default session with an environment variable

    $ AWS_PROFILE=dev ipython
    >>> import boto3
    >>> s3dev = boto3.resource('s3')
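
Since the original question was about CloudFront, note that whichever option you pick, the resulting session (or the default session) can hand out a CloudFront client the same way; a short sketch, assuming a local profile named dev exists:

# Use the 'dev' profile for a CloudFront client instead of S3.
import boto3

dev = boto3.session.Session(profile_name='dev')
cloudfront = dev.client('cloudfront')
print(cloudfront.list_distributions().get('DistributionList', {}).get('Quantity', 0))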

#60 Best answer 2 of How to choose an AWS profile when using boto3 to connect to CloudFront (Score: 47)

Created: 2017-09-04

Do this to use a profile with name ‘dev’:

session = boto3.session.Session(profile_name='dev')
s3 = session.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)

See also original question in stackoverflow


Notes:
  1. This page uses the Stack Exchange API to get the relevant data from the Stack Overflow community.
  2. Content license on this page is CC BY-SA 3.0.
  3. score = upvotes - downvotes.