
Most votes on amazon-web-services questions 5

Questions in this page:

  #41 Is there a way to list all resources in AWS
  #42 AWS Error Message: A conflicting conditional operation is currently in progress against this resource
  #43 How to test credentials for AWS Command Line Tools
  #44 DynamoDB vs MongoDB NoSQL
  #45 Amazon S3 - HTTPS/SSL - Is it possible?
  #46 Why should I use Amazon Kinesis and not SNS-SQS?
  #47 How can I tell how many objects I've stored in an S3 bucket?
  #48 How do you search an amazon s3 bucket?
  #49 How to make all Objects in AWS S3 bucket public by default?
  #50 How to load npm modules in AWS Lambda?

Read all the top-voted questions and answers on a single page.

#41: Is there a way to list all resources in AWS (Score: 194)

Created: 2017-06-06 Last updated: 2018-09-04

Tags: amazon-web-services

Is there a way to list all resources in AWS, for all regions and all resource types? For example, all EC2 instances, all VPCs, all APIs in API Gateway, and so on. I would like to list all resources for my account, since it’s hard for me to tell which resources I can relinquish now.

#41 Best answer 1 of Is there a way to list all resources in AWS (Score: 270)

Created: 2018-01-16 Last updated: 2021-03-28

Yes. Use the Tag Editor.

Set “Regions” to “All Regions”, “Resource Types” to “All supported resource types” and then click on “Search Resources”.
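
If you prefer the command line, a rough equivalent of the Tag Editor search (not part of the original answer) is the Resource Groups Tagging API; the loop below is a minimal sketch and assumes the AWS CLI is configured with permissions for tag:GetResources and ec2:DescribeRegions:

# List the ARNs of taggable resources in every region, one region at a time.
for region in $(aws ec2 describe-regions --query "Regions[].RegionName" --output text); do
    echo "== $region =="
    aws resourcegroupstaggingapi get-resources --region "$region" \
        --query "ResourceTagMappingList[].ResourceARN" --output text
done

Note that this only covers resource types that support tagging, so it is not an exhaustive inventory.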

#41 Best answer 2 of Is there a way to list all resources in AWS (Score: 85)

Created: 2019-06-16 Last updated: 2020-03-31

You can use the Tag Editor.

  1. Go to AWS Console
  2. In the TOP Navigation Pane, click Resource Groups Dropdown
  3. Click Tag Editor

Here you can either pick a particular region to search in or select all regions from the dropdown. You can then select the resource types you want to search for, or click on individual resources.


See also original question in stackoverflow

#42: AWS Error Message: A conflicting conditional operation is currently in progress against this resource (Score: 188)

Created: 2012-12-16 Last updated: 2012-12-16

Tags: java, amazon-web-services, amazon-s3

I’m getting this error intermittently.

I have a program that uses the Java AWS SDK and uploads tens of thousands of small files to S3. I see this error intermittently.

Could not find any helpful answer after doing a quick search on the internet.

Note that the calling program is single-threaded; the underlying AWS Java SDK does seem to use worker threads.

Status Code: 409, AWS Service: Amazon S3, AWS Request ID: 75E16E8DE2193CA6, AWS Error Code: OperationAborted, AWS Error Message: A conflicting conditional operation is currently in progress against this resource. Please try again., S3 Extended Request ID: 0uquw2YEoFamLldm+c/p412Lzd8jHJGFBDz3h7wN+/4I0f6hnGLkPMe+5LZazKnZ
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:552)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:289)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2648)
    at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:578)
    at com.amazonaws.services.s3.AmazonS3Client.createBucket(AmazonS3Client.java:503)

#42 Best answer 1 of AWS Error Message: A conflicting conditional operation is currently in progress against this resource (Score: 478)

Created: 2013-05-14 Last updated: 2013-11-27

I got the same error message when I did the following:

  1. created a bucket; it went by default to the US region (used the AWS CLI)

  2. realized the bucket should go to the EU region and deleted it (used the AWS console)

  3. (a few minutes later) tried to create the bucket again, specifying the EU region

At step 3, the AWS console showed me the error message from the title of your question.

So I guess the bucket in the US was deleted, but there are probably some synchronization processes that take time. I hoped that after waiting a few hours the bucket name would become available for creation again, this time in the proper (EU) region.

Fix (edit): About an hour later, my attempt to create the bucket (in the EU region) succeeded.

#42 Best answer 2 of AWS Error Message: A conflicting conditional operation is currently in progress against this resource (Score: 39)

Created: 2017-01-07

For everyone else who stumbles upon this thread from Google as the first search result for this error message:

If you deleted a bucket in order to recreate it in a new region, do not wait manually until this background sync completes; instead, run a small bash script that retries the bucket creation every 5 seconds or so.

Example:

#!/bin/bash
# Retry "create bucket" until it succeeds (exit code 0).
RESULT=2
until [ $RESULT -eq 0 ]; do
    aws s3 mb s3://your.bucket.name --region us-west-2
    RESULT=$?
    sleep 5
done
echo "Bucket created!"

It will retry the “create bucket” operation for you every few seconds (depending on the sleep value), and as soon as it is possible it will create the bucket, so no one can take your bucket name by mistake in the meantime :)

Hope it helps :)

See also original question in stackoverflow

#43: How to test credentials for AWS Command Line Tools (Score: 185)

Created: 2015-08-05 Last updated: 2017-03-16

Tags: amazon-web-services, aws-cli

Is there a command/subcommand that can be passed to the aws utility that can 1) verify that the credentials in the ~/.aws/credentials file are valid, and 2) give some indication which user the credentials belong to? I’m looking for something generic that doesn’t make any assumptions about the user having permissions to IAM or any specific service.

The use case for this is a deploy-time sanity check to make sure that the credentials are good. Ideally there would be some way to check the return value and abort the deploy if there are invalid credentials.

#43 Best answer 1 of How to test credentials for AWS Command Line Tools (Score: 284)

Created: 2017-02-15 Last updated: 2018-06-16

Use GetCallerIdentity:
aws sts get-caller-identity

Unlike other API/CLI calls it will always work, regardless of your IAM permissions.

You will get output in the following format:

{
    "Account": "123456789012", 
    "UserId": "AR#####:#####", 
    "Arn": "arn:aws:sts::123456789012:assumed-role/role-name/role-session-name"
}

The exact ARN format depends on the type of credentials, but it often includes the name of the (human) user.

It uses the standard AWS CLI exit codes, returning 0 on success and 255 if you have no credentials.
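
For the deploy-time sanity check described in the question, a minimal sketch that relies only on the exit code (the script and messages below are illustrative, not from the original answer):

#!/bin/bash
# Abort the deploy early if the configured credentials are missing or invalid.
if ! aws sts get-caller-identity > /dev/null 2>&1; then
    echo "AWS credentials are missing or invalid; aborting deploy." >&2
    exit 1
fi
echo "Deploying as: $(aws sts get-caller-identity --query Arn --output text)"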

#43 Best answer 2 of How to test credentials for AWS Command Line Tools (Score: 70)

Created: 2015-08-05 Last updated: 2015-08-05

There is a straightforward way: aws iam get-user will tell you who you are (the current IAM user), provided the user has IAM privileges.

There are a couple of CLI calls that support a --dry-run flag, like aws ec2 run-instances, which tell you whether you have the necessary configuration and credentials to perform the operation.

There is also --auth-dry-run, which checks whether you have the required permissions for the command without actually running it. If you have the required permissions, the command returns DryRunOperation; otherwise, it returns UnauthorizedOperation. [From AWS Documentation - Common Options]
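
As an illustration of the --dry-run flag (the AMI ID and instance type below are placeholders):

# Fails with DryRunOperation if you would have permission, UnauthorizedOperation if not.
aws ec2 run-instances --dry-run --image-id ami-12345678 --instance-type t2.micro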

You can also list the IAM access keys in the Management Console and cross-check them to see which key has been assigned to whom.

The best way to understand which user or role has which privileges is to use the IAM Policy Simulator.

See also original question in stackoverflow

#44: DynamoDB vs MongoDB NoSQL (Score: 181)

Created: 2013-07-29 Last updated: 2019-08-05

Tags: mongodb, amazon-web-services, nosql, amazon-dynamodb

I’m trying to figure out what I can use for a future project. We plan to store about 500k records per month in the first year, and maybe more in the following years. This is a vertical application, so there’s no need to use a relational database for it, which is why I decided to go with a NoSQL data store.

The first option that came to mind was MongoDB, since it’s a very mature product with a lot of support from the community. On the other hand, DynamoDB is a brand-new product that offers a managed service with top performance. I’ll be developing this application, but there’s no maintenance plan (at least for now), so I think the elastic way Amazon provides to scale would be a huge advantage.

My major concern is the query structure. I haven’t looked at DynamoDB’s query capabilities yet, but since it’s a key/value data store, I feel it could be more limited than MongoDB.

If someone has experience moving a project from MongoDB to DynamoDB, any advice would be totally appreciated.

#44 Best answer 1 of DynamoDB vs MongoDB NoSQL (Score: 166)

Created: 2015-03-07 Last updated: 2018-05-10

I know this is old, but it still comes up when you search for the comparison. We were using Mongo and have moved almost entirely to Dynamo, which is our first choice now. Not because it has more features; it doesn’t. Mongo has a better query language, you can index within a structure, and there are lots of little things. The superiority of Dynamo is in what the OP stated in his comment: it’s easy. You don’t have to take care of any servers. When you start to set up a Mongo sharded solution, it gets complicated. You can go to one of the hosting companies, but that’s not cheap either. With Dynamo, if you need more throughput, you just click a button. You can write scripts to scale automatically. When it’s time to upgrade Dynamo, it’s done for you. That is all a lot of precious stress and time not spent. If you don’t have dedicated ops people, Dynamo is excellent.

So we are now going with Dynamo by default. Maybe Mongo, if the data structure is complicated enough to warrant it, but then we’d probably go back to a SQL database. Dynamo is obtuse; you really need to think about how you’re going to build it, and you’ll likely use Redis in ElastiCache to make it work for complex stuff. But it sure is nice not to have to take care of it. You code. That’s it.
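
To make the “scripts to scale automatically” point above concrete, here is a hedged AWS CLI sketch (the table name and capacity values are made up):

# Raise the provisioned throughput of an existing table (values are illustrative).
aws dynamodb update-table --table-name MyTable \
    --provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=50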

#44 Best answer 2 of DynamoDB vs MongoDB NoSQL (Score: 70)

Created: 2013-08-08 Last updated: 2019-08-05

I recently migrated my MongoDB to DynamoDB and wrote 3 blog posts to share some experience and data about performance and cost.

Migrate from MongoDB to AWS DynamoDB + SimpleDB

7 Reasons You Should Use MongoDB over DynamoDB

3 Reasons You Should Use DynamoDB over MongoDB

See also original question in stackoverflow

#45: Amazon S3 - HTTPS/SSL - Is it possible? (Score: 179)

Created: 2010-06-15 Last updated: 2018-07-29

Tags: amazon-web-services, ssl, amazon-s3, https

I saw a few other questions regarding this without any real answers or information (or so it appeared).

I have an image here:
http://furniture.retailcatalog.us/products/2061/6262u9665.jpg

Which is redirecting to:
http://furniture.retailcatalog.us.s3.amazonaws.com/products/2061/6262u9665.jpg

I need it to be (https):
https://furniture.retailcatalog.us/products/2061/6262u9665.jpg

So I installed a wildcard SSL certificate on retailcatalog.us (we have other subdomains), but it wasn’t working. I went to check
https://furniture.retailcatalog.us.s3.amazonaws.com/products/2061/6262u9665.jpg

And it wasn’t working, which means HTTPS wasn’t working on the Amazon S3 hostname itself.

How do I make this work?

#45 Best answer 1 of Amazon S3 - HTTPS/SSL - Is it possible? (Score: 184)

Created: 2010-06-16 Last updated: 2020-06-20

This is a response I got from their Premium Services

Hello,

This is actually an issue with the way SSL validates names containing a period (‘.’) character. We’ve documented this behavior here:

http://docs.amazonwebservices.com/AmazonS3/latest/dev/BucketRestrictions.html

The only straightforward fix for this is to use a bucket name that does not contain that character. You might instead use a bucket named ‘furniture-retailcatalog-us’. This would allow you to use HTTPS with

https://furniture-retailcatalog-us.s3.amazonaws.com/

You could, of course, put a CNAME DNS record to make that more friendly. For example,

images-furniture.retailcatalog.us IN CNAME furniture-retailcatalog-us.s3.amazonaws.com.

Hope that helps. Let us know if you have any other questions.

Amazon Web Services

Unfortunately your “friendly” CNAME will cause a host name mismatch when the certificate is validated, so you cannot really use it for a secure connection. A big missing feature of S3 is accepting custom certificates for your domains.
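
A present-day CLI sketch of the dash-named-bucket workaround suggested above (the bucket name, file, and region are illustrative, and the AWS CLI itself postdates this answer; note that newer buckets block public ACLs by default):

# Create a bucket without dots in its name, upload an object, and verify the HTTPS URL.
aws s3 mb s3://furniture-retailcatalog-us --region us-east-1
aws s3 cp ./6262u9665.jpg s3://furniture-retailcatalog-us/products/2061/6262u9665.jpg --acl public-read
curl -I https://furniture-retailcatalog-us.s3.amazonaws.com/products/2061/6262u9665.jpg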


UPDATE 10/2/2012

From @mpoisot:

The link Amazon provided no longer says anything about https. I poked around in the S3 docs and finally found a small note about it on the Virtual Hosting page: http://docs.amazonwebservices.com/AmazonS3/latest/dev/VirtualHosting.html


UPDATE 6/17/2013

From @Joseph Lust:

Just got it! Check it out and sign up for an invite: http://aws.amazon.com/cloudfront/custom-ssl-domains

#45 Best answer 2 of Amazon S3 - HTTPS/SSL - Is it possible? (Score: 111)

Created: 2011-05-28 Last updated: 2012-06-26

I know it’s a year after the fact, but using this solves it: https://s3.amazonaws.com/furniture.retailcatalog.us/products/2061/6262u9665.jpg

I saw this on another site (http://joonhachu.blogspot.com/2010/09/helpful-tip-for-amazon-s3-urls-for-ssl.html).

See also original question in stackoverflow

#46: Why should I use Amazon Kinesis and not SNS-SQS? (Score: 178)

Created: 2014-10-29

Tags: amazon-web-services, amazon-sqs, amazon-kinesis

I have a use case where there will be a stream of data coming in, and I cannot consume it at the same pace, so I need a buffer. This can be solved using an SNS-SQS queue. I came to know that Kinesis serves the same purpose, so what is the difference? Why should I prefer (or not prefer) Kinesis?

#46 Best answer 1 of Why should I use Amazon Kinesis and not SNS-SQS? (Score: 85)

Created: 2015-06-16 Last updated: 2019-10-22

Keep in mind this answer was correct as of June 2015.

After studying the issue for a while, having the same question in mind, I found that SQS (with SNS) is preferred for most use cases unless the order of the messages is important to you (SQS doesn’t guarantee FIFO delivery of messages).

There are 2 main advantages for Kinesis:

  1. you can read the same message from several applications
  2. you can re-read messages in case you need to.

Both advantages can be achieved by using SNS as a fan-out to SQS. That means the producer sends only one message to SNS, and SNS then fans the message out to multiple SQS queues, one for each consumer application. This way you can have as many consumers as you want without thinking about sharding capacity.

Moreover, we added one more SQS queue, subscribed to the SNS topic, that holds messages for 14 days. In the normal case no one reads from this queue, but if a bug makes us want to rewind the data, we can easily read all the messages from it and re-send them to SNS, whereas Kinesis only provides 7 days of retention.

In conclusion, SNS + SQS is much easier and provides most of the capabilities. IMO you need a really strong case to choose Kinesis over it.
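
For reference, a rough CLI sketch of the SNS-to-SQS fan-out described above (all names are hypothetical; the queue additionally needs an access policy that allows SNS to send messages to it, which is omitted here):

# Create a topic and one consumer queue, then subscribe the queue to the topic.
TOPIC_ARN=$(aws sns create-topic --name orders --query TopicArn --output text)
QUEUE_URL=$(aws sqs create-queue --queue-name orders-consumer-a --query QueueUrl --output text)
QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
    --attribute-names QueueArn --query Attributes.QueueArn --output text)
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol sqs --notification-endpoint "$QUEUE_ARN"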

#46 Best answer 2 of Why should I use Amazon Kinesis and not SNS-SQS? (Score: 61)

Created: 2014-10-29 Last updated: 2015-11-19

On the surface they are vaguely similar, but your use case will determine which tool is appropriate. IMO, if you can get by with SQS then you should; if it will do what you want, it will be simpler and cheaper. Here is a better explanation from the AWS FAQ, which gives examples of appropriate use cases for both tools to help you decide:

FAQ’s

See also original question in stackoverflow

#47: How can I tell how many objects I've stored in an S3 bucket? (Score: 175)

Created: 2010-05-19 Last updated: 2020-10-23

Tags: file, count, amazon-s3, amazon-web-services

Unless I’m missing something, it seems that none of the APIs I’ve looked at will tell you how many objects are in an <S3 bucket>/<folder>. Is there any way to get a count?

#47 Best answer 1 of How can I tell how many objects I've stored in an S3 bucket? (Score: 295)

Created: 2015-10-02 Last updated: 2016-11-07

Using AWS CLI

aws s3 ls s3://mybucket/ --recursive | wc -l 

or

aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 --metric-name NumberOfObjects \
  --dimensions Name=BucketName,Value=BUCKETNAME \
              Name=StorageType,Value=AllStorageTypes \
  --start-time 2016-11-05T00:00 --end-time 2016-11-05T00:10 \
  --period 60 --statistic Average

Note: The above cloudwatch command seems to work for some while not for others. Discussed here: https://forums.aws.amazon.com/thread.jspa?threadID=217050

Using AWS Web Console

You can look at CloudWatch’s metrics section to get the approximate number of objects stored.

I have approx 50 million products and it took more than an hour to count them using aws s3 ls.

#47 Best answer 2 of How can I tell how many objects I've stored in an S3 bucket? (Score: 165)

Created: 2016-08-23 Last updated: 2019-08-12

There is a --summarize switch which includes bucket summary information (i.e. number of objects, total size).

Here’s the correct answer using AWS cli:

aws s3 ls s3://bucketName/path/ --recursive --summarize | grep "Total Objects:"

Total Objects: 194273

See the documentation

See also original question in stackoverflow

#48: How do you search an amazon s3 bucket? (Score: 174)

Created: 2011-02-12 Last updated: 2019-02-01

Tags: amazon-web-services, amazon-s3

I have a bucket with thousands of files in it. How can I search the bucket? Is there a tool you can recommend?

#48 Best answer 1 of How do you search an amazon s3 bucket? (Score: 267)

Created: 2014-02-17

Just a note to add on here: it’s now 3 years later, yet this post is top in Google when you type in “How to search an S3 Bucket.”

Perhaps you’re looking for something more complex, but if you landed here trying to figure out how to simply find an object (file) by its title, it’s crazy simple:

open the bucket, select “none” on the right hand side, and start typing in the file name.

http://docs.aws.amazon.com/AmazonS3/latest/UG/ListingObjectsinaBucket.html

#48 Best answer 2 of How do you search an amazon s3 bucket? (Score: 127)

Created: 2016-05-14

Here’s a short and ugly way to search file names using the AWS CLI:

aws s3 ls s3://your-bucket --recursive | grep your-search | cut -c 32-
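
A variant that filters with the CLI’s own --query (JMESPath) instead of grep (the bucket name and search string are placeholders):

aws s3api list-objects-v2 --bucket your-bucket \
    --query "Contents[?contains(Key, 'your-search')].Key" --output text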

See also original question in stackoverflow

#49: How to make all Objects in AWS S3 bucket public by default? (Score: 173)

Created: 2013-10-04 Last updated: 2017-10-21

Tags: amazon-web-services, amazon-s3

I am using a PHP library to upload a file to my bucket. I have set the ACL to public-read-write and it works fine but the file is still private.

I found that if I change the Grantee to Everyone it makes the file public. What I want to know is how to make the default Grantee on all objects in my bucket “Everyone”. Or is there another solution to make files public by default?

Code I am using is below:

public static function putObject($input, $bucket, $uri, $acl = self::ACL_PRIVATE, $metaHeaders = array(), $requestHeaders = array()) {
    if ($input === false) return false;
    $rest = new S3Request('PUT', $bucket, $uri);

    if (is_string($input)) $input = array(
        'data' => $input, 'size' => strlen($input),
        'md5sum' => base64_encode(md5($input, true))
    );

    // Data
    if (isset($input['fp']))
        $rest->fp =& $input['fp'];
    elseif (isset($input['file']))
        $rest->fp = @fopen($input['file'], 'rb');
    elseif (isset($input['data']))
        $rest->data = $input['data'];

    // Content-Length (required)
    if (isset($input['size']) && $input['size'] >= 0)
        $rest->size = $input['size'];
    else {
        if (isset($input['file']))
            $rest->size = filesize($input['file']);
        elseif (isset($input['data']))
            $rest->size = strlen($input['data']);
    }

    // Custom request headers (Content-Type, Content-Disposition, Content-Encoding)
    if (is_array($requestHeaders))
        foreach ($requestHeaders as $h => $v) $rest->setHeader($h, $v);
    elseif (is_string($requestHeaders)) // Support for legacy contentType parameter
        $input['type'] = $requestHeaders;

    // Content-Type
    if (!isset($input['type'])) {
        if (isset($requestHeaders['Content-Type']))
            $input['type'] =& $requestHeaders['Content-Type'];
        elseif (isset($input['file']))
            $input['type'] = self::__getMimeType($input['file']);
        else
            $input['type'] = 'application/octet-stream';
    }

    // We need to post with Content-Length and Content-Type, MD5 is optional
    if ($rest->size >= 0 && ($rest->fp !== false || $rest->data !== false)) {
        $rest->setHeader('Content-Type', $input['type']);
        if (isset($input['md5sum'])) $rest->setHeader('Content-MD5', $input['md5sum']);

        $rest->setAmzHeader('x-amz-acl', $acl);
        foreach ($metaHeaders as $h => $v) $rest->setAmzHeader('x-amz-meta-'.$h, $v);
        $rest->getResponse();
    } else
        $rest->response->error = array('code' => 0, 'message' => 'Missing input parameters');

    if ($rest->response->error === false && $rest->response->code !== 200)
        $rest->response->error = array('code' => $rest->response->code, 'message' => 'Unexpected HTTP status');
    if ($rest->response->error !== false) {
        trigger_error(sprintf("S3::putObject(): [%s] %s", $rest->response->error['code'], $rest->response->error['message']), E_USER_WARNING);
        return false;
    }
    return true;
}

#49 Best answer 1 of How to make all Objects in AWS S3 bucket public by default? (Score: 339)

Created: 2014-04-16

Go to http://awspolicygen.s3.amazonaws.com/policygen.html and fill in the details. Under Action, select “GetObject”, click “Add Statement”, and then click “Generate Policy”.

Copy the text example:

{
  "Id": "Policy1397632521960",
  "Statement": [
    {
      "Sid": "Stmt1397633323327",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketnm/*",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}

Now go to your AWS S3 console. At the bucket level, click Properties, expand Permissions, and select Add bucket policy. Paste the generated policy into the editor and hit save.

All your items in the bucket will be public by default.
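
If you prefer the CLI to the console editor, the same policy can be attached with put-bucket-policy (the bucket name and file path are placeholders):

# Save the generated policy as policy.json, then attach it to the bucket.
aws s3api put-bucket-policy --bucket bucketnm --policy file://policy.json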

#49 Best answer 2 of How to make all Objects in AWS S3 bucket public by default? (Score: 151)

Created: 2013-10-04 Last updated: 2016-09-08

If you want to make all objects public by default, the simplest way is to do it through a Bucket Policy instead of Access Control Lists (ACLs) defined on each individual object.


You can use the AWS Policy Generator to generate a bucket policy for your bucket.

For example, the following policy will allow anyone to read every object in your S3 bucket (just replace <bucket-name> with the name of your bucket):

{
  "Id": "Policy1380877762691",
  "Statement": [
    {
      "Sid": "Stmt1380877761162",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<bucket-name>/*",
      "Principal": {
        "AWS": [
          "*"
        ]
      }
    }
  ]
}

The Bucket Policy contains a list of Statements and each statement has an Effect (either Allow or Deny) for a list of Actions that are performed by Principal (the user) on the specified Resource (identified by an Amazon Resource Name or ARN).

The Id is just an optional policy id and the Sid is an optional unique statement id.

For S3 Bucket Policies, the Resource ARNs take the form:

arn:aws:s3:::<bucket_name>/<key_name>

The above example allows (Effect: Allow) anyone (Principal: *) to access (Action: s3:GetObject) any object in the bucket (Resource: arn:aws:s3:::<bucket-name>/*).

See also original question in stackoverflow

#50: How to load npm modules in AWS Lambda? (Score: 173)

Created: 2015-12-23 Last updated: 2018-11-23

Tags: amazon-web-services, npm, aws-lambda

I’ve created several Lambda functions using the web based editor. So far so good. I’d now like to start extending those with modules (such as Q for promises). I can’t figure out how to get the modules out to Lambda so they can be consumed by my functions.

I’ve read through this, but it seems to involve setting up an EC2 instance and running Lambda functions from there. There is a mechanism to upload a zip when creating a function, but that seems to involve sending up functions developed locally. Since I’m working in the web-based editor, that seems like a strange workflow.

How can I simply deploy some modules for use in my Lambda functions?

#50 Best answer 1 of How to load npm modules in AWS Lambda? (Score: 233)

Created: 2015-12-23 Last updated: 2019-07-02

You cannot load NPM modules without uploading a .zip file, but you can actually get this process down to two quick command lines.

Here’s how:

  1. Put your Lambda function file(s) in a separate directory. This is because you install npm packages locally for Lambda and you want to be able to isolate and test what you will upload to Lambda.

  2. Install your NPM packages locally with npm install packageName while you’re in your separate Lambda directory you created in step #1.

  3. Make sure your function works when running locally: node lambdaFunc.js (you can simply comment out the two exports.handler lines in your code to adapt it to run with Node locally).

  4. Go to the Lambda’s directory and compress the contents, make sure not to include the directory itself.

     zip -r lambdaFunc.zip .
    
  5. If you have the aws-cli installed, which I suggest having if you want to make your life easier, you can now enter this command:

    aws lambda update-function-code --function-name lambdaFunc \
    --zip-file fileb://~/path/to/your/lambdaFunc.zip
    

    (no quotes around the lambdaFunc part above in case you wonder as I did)

  6. Now you can click test in the Lambda console.

  7. I suggest adding a short alias for both of the above commands. Here’s what I have in mine for the much longer Lambda update command:

     alias up="aws lambda update-function-code --function-name lambdaFunc \
     --zip-file fileb://~/path/to/your/lambdaFunc.zip"
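
Condensed into a single hedged end-to-end sketch of the steps above (the function name, module, and file names are placeholders):

mkdir lambdaFunc && cd lambdaFunc
npm install q                         # install dependencies next to your handler code
# ... create lambdaFunc.js with an exports.handler function ...
zip -r lambdaFunc.zip .               # zip the contents, not the directory itself
aws lambda update-function-code --function-name lambdaFunc \
    --zip-file fileb://lambdaFunc.zip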
    

#50 Best answer 2 of How to load npm modules in AWS Lambda? (Score: 41)

Created: 2018-11-23 Last updated: 2020-01-04

A .zip file is required in order to include npm modules in Lambda. And you really shouldn’t be using the Lambda web editor for much of anything; as with any production code, you should be developing locally, committing to git, etc.

MY FLOW:

  1. My Lambda functions are usually helper utilities for a larger project, so I create an /aws/lambdas directory within it to house them.

  2. Each individual lambda directory contains an index.js file containing the function code, a package.json file defining dependencies, and a /node_modules subdirectory. (The package.json file is not used by Lambda; it’s just there so we can run the npm install command locally.)

package.json:

{
  "name": "my_lambda",
  "dependencies": {
    "svg2png": "^4.1.1"
  }
}
  3. I .gitignore all node_modules directories and .zip files so that the files generated from npm installs and zipping won’t clutter our repo.

.gitignore:

# Ignore node_modules
**/node_modules

# Ignore any zip files
*.zip
  4. I run npm install from within the directory to install modules, and develop/test the function locally.

  5. I .zip the lambda directory and upload it via the console.

(IMPORTANT: Do not use Mac’s ‘compress’ utility from Finder to zip the file! You must run zip from the CLI from within the root of the directory- see here)

zip -r ../yourfilename.zip * 

NOTE:

You might run into problems if you install the node modules locally on your Mac, as some platform-specific modules may fail when deployed to Lambda’s Linux-based environment. (See https://stackoverflow.com/a/29994851/165673)

The solution is to compile the modules on an EC2 instance launched from the AMI that corresponds with the Lambda Node.js runtime you’re using (See this list of Lambda runtimes and their respective AMIs).


See also AWS Lambda Deployment Package in Node.js - AWS Lambda

See also original question in stackoverflow


Notes:
  1. This page uses an API to get the relevant data from the Stack Overflow community.
  2. Content license on this page is CC BY-SA 3.0.
  3. score = upvotes - downvotes.