
Most votes on amazon-web-services questions

#1 Why do people use Heroku when AWS is present? What distinguishes Heroku from AWS?
#2 Trying to SSH into an Amazon Ec2 instance - permission error
#3 Downloading an entire S3 bucket?
#4 What is the difference between Amazon SNS and Amazon SQS?
#5 Change key pair for ec2 instance
#6 scp (secure copy) to ec2 instance without password
#7 What is difference between Lightsail and EC2?
#8 How to get the instance id from within an ec2 instance?
#9 How to pass a querystring or route parameter to AWS Lambda from Amazon API Gateway
#10 Benefits of EBS vs. instance-store (and vice-versa)

Read all the top-voted questions and answers on a single page.

#1: Why do people use Heroku when AWS is present? What distinguishes Heroku from AWS? (Score: 1129)

Created: 2012-03-21 Last updated: 2017-02-19

Tags: ruby-on-rails, heroku, amazon-web-services

I’m a beginner RoR programmer who’s planning to deploy my app using Heroku. Word from my advisor friends is that Heroku is really easy and good to use. The only problem is that I still have no idea what Heroku does…

I’ve looked at their website and in a nutshell, what Heroku does is help with scaling but… why does that even matter? How does Heroku help with:

  1. Speed - My research implied that deploying on AWS in the US East region would be fastest if I am targeting a US/Asia-based audience.

  2. Security - How secure are they?

  3. Scaling - How does it actually work?

  4. Cost efficiency - There’s something like a dyno that makes it easy to scale.

  5. How do they fare against their competitors? For example, Engine Yard and bluebox?

Please explain in layman’s terms… I’m a beginner programmer.

#1 Best answer 1 of Why do people use Heroku when AWS is present? What distinguishes Heroku from AWS? (Score: 2079)

Created: 2012-03-21 Last updated: 2019-08-11

First things first, AWS and Heroku are different things. AWS offers Infrastructure as a Service (IaaS), whereas Heroku offers a Platform as a Service (PaaS).

What’s the difference? Very approximately, IaaS gives you components you need in order to build things on top of it; PaaS gives you an environment where you just push code and some basic configuration and get a running application. IaaS can give you more power and flexibility, at the cost of having to build and maintain more yourself.

To get your code running on AWS and looking a bit like a Heroku deployment, you’ll want some EC2 instances - you’ll want a load balancer / caching layer installed on them (e.g. Varnish), you’ll want instances running something like Passenger and nginx to serve your code, you’ll want to deploy and configure a clustered database instance of something like PostgreSQL. You’ll want a deployment system with something like Capistrano, and something doing log aggregation.

That’s not an insignificant amount of work to set up and maintain. With Heroku, the effort required to get to that sort of stage is maybe a few lines of application code and a git push.

So you’re this far, and you want to scale up. Great. You’re using Puppet for your EC2 deployment, right? So now you configure your Capistrano files to spin up/down instances as needed; you re-jig your Puppet config so Varnish is aware of web-worker instances and will automatically pool between them. Or you heroku scale web:+5.
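As a concrete sketch of that contrast (the app name is illustrative, and the scale syntax mirrors this answer’s era of the Heroku CLI):

heroku create myapp      # provision the app on Heroku
git push heroku master   # deploy; Heroku detects the stack, builds and runs it
heroku scale web:+5      # add five web dynos - no Puppet or Capistrano re-jig required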

Hopefully that gives you an idea of the comparison between the two. Now to address your specific points:

Speed

Currently Heroku only runs on AWS instances in us-east and eu-west. For you, this sounds like what you want anyway. For others, it’s potentially more of a consideration.

Security

I’ve seen a lot of internally-maintained production servers that are way behind on security updates, or just generally poorly put together. With Heroku, you have someone else managing that sort of thing, which is either a blessing or a curse depending on how you look at it!

When you deploy, you’re effectively handing your code straight over to Heroku. This may be an issue for you. Their article on Dyno Isolation details their isolation technologies (it seems as though multiple dynos are run on individual EC2 instances). Several colleagues have expressed issues with these technologies and the strength of their isolation; I am alas not in a position of enough knowledge / experience to really comment, but my current Heroku deployments consider that “good enough”. It may be an issue for you, I don’t know.

Scaling

I touched on how one might implement this in my IaaS vs PaaS comparison above. Approximately, your application has a Procfile, which has lines of the form dyno_type: command_to_run, so for example (cribbed from http://devcenter.heroku.com/articles/process-model):

web:    bundle exec rails server
worker: bundle exec rake jobs:work

This, with a:

heroku scale web:2 worker:10

will result in you having 2 web dynos and 10 worker dynos running. Nice, simple, easy. Note that web is a special dyno type, which has access to the outside world, and is behind their nice web traffic multiplexer (probably some sort of Varnish / nginx combination) that will route traffic accordingly. Your workers probably interact with a message queue for similar routing, from which they’ll get the location via a URL in the environment.

Cost Efficiency

Lots of people have lots of different opinions about this. Currently it’s $0.05 per dyno-hour, compared to $0.025/hr for an AWS micro instance or $0.09/hr for an AWS small instance.

Heroku’s dyno documentation says you have about 512MB of RAM, so it’s probably not too unreasonable to consider a dyno as a bit like an EC2 micro instance. Is it worth double the price? How much do you value your time? The amount of time and effort required to build on top of an IaaS offering to get it to this standard is definitely not cheap. I can’t really answer this question for you, but don’t underestimate the ‘hidden costs’ of setup and maintenance.
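In concrete terms: at those rates, an always-on dyno comes to roughly $0.05 × 730 ≈ $36.50 per month, versus roughly $0.025 × 730 ≈ $18.25 for a micro instance (taking 730 as the approximate number of hours in a month).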

(A bit of an aside, but if I connect to a dyno from here (heroku run bash), a cursory look shows 4 cores in /proc/cpuinfo and 36GB of RAM - this leads me to believe that I’m on a “High-Memory Double Extra Large Instance”. The Heroku dyno documentation says each dyno receives 512MB of RAM, so I’m potentially sharing with up to 71 other dynos. (I don’t have enough data about the homogeneity of Heroku’s AWS instances, so your mileage may vary.))

How do they fare against their competitors?

This, I’m afraid I can’t really help you with. The only competitor I’ve ever really looked at was Google App Engine - at the time I was looking to deploy Java applications, and the amount of restrictions on usable frameworks and technologies was incredibly off-putting. This is more than “just a Java thing” - the amount of general restrictions and necessary considerations (the FAQ hints at several) seemed less than convenient. In contrast, deploying to Heroku has been a dream.

Conclusion

I hope this answers your questions (please comment if there are gaps / other areas you’d like addressed). I feel I should offer my personal position. I love Heroku for “quick deployments”. When I’m starting an application, and I want some cheap hosting (the Heroku free tier is awesome - essentially if you only need one web dyno and 5MB of PostgreSQL, it’s free to host an application), Heroku is my go-to position. For “Serious Production Deployment” with several paying customers, with a service-level-agreement, with dedicated time to spend on ops, et cetera, I can’t quite bring myself to offload that much control to Heroku, and then either AWS or our own servers have been the hosting platform of choice.

Ultimately, it’s about what works best for you. You say you’re “a beginner programmer” - it might just be that using Heroku will let you focus on writing Ruby, and not have to spend time getting all the other infrastructure around your code built up. I’d definitely give it a try.


Note that AWS does actually have a PaaS offering, Elastic Beanstalk, that supports Ruby, Node.js, PHP, Python, .NET and Java. I think generally most people, when they see “AWS”, jump to things like EC2, S3 and EBS, which are definitely IaaS offerings.

#1 Best answer 2 of Why do people use Heroku when AWS is present? What distinguishes Heroku from AWS? (Score: 268)

Created: 2015-10-05 Last updated: 2020-04-19

AWS / Heroku are both free for small hobby projects (to start with).

If you want to start an app right away, without much customization of the architecture, then choose Heroku.

If you want to focus on the architecture and to be able to use different web servers, then choose AWS. AWS is more time-consuming based on what service/product you choose, but can be worth it. AWS also comes with many plugin services and products.


Heroku

  • Platform as a Service (PaaS)
  • Good documentation
  • Has built-in tools and architecture.
  • Limited control over architecture while designing the app.
  • Deployment is taken care of (automatic via GitHub or manual via git commands or CLI).
  • Not time consuming.

AWS

  • Infrastructure as a Service (IaaS)
  • Versatile - has many products such as EC2, Lambda, EMR, etc.
  • Can use a Dedicated instance for more control over the architecture, such as choosing the OS, software version, etc. There is more than one backend layer.
  • Elastic Beanstalk is a feature similar to Heroku’s PaaS.
  • Can use the automated deployment, or roll your own.

See also the original question on Stack Overflow

#2: Trying to SSH into an Amazon Ec2 instance - permission error (Score: 839)

Created: 2011-11-19 Last updated: 2017-07-04

Tags: amazon-web-services, authentication, ssh, amazon-ec2, permissions

This is probably a stupidly simple question to some :)

I’ve created a new linux instance on Amazon EC2, and as part of that downloaded the .pem file to allow me to SSH in.

When I tried to ssh with:

ssh -i myfile.pem <public dns>

I got:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for 'amazonec2.pem' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: amazonec2.pem
Permission denied (publickey).

Following this post I tried to chmod +600 the pem file, but now when I ssh I just get:

Permission denied (publickey).

What school-boy error am I making here? The .pem file is in my home folder (on OS X). Its permissions look like this:

-rw-r--r--@   1 mattroberts  staff    1696 19 Nov 11:20 amazonec2.pem

#2 Best answer 1 of Trying to SSH into an Amazon Ec2 instance - permission error (Score: 1636)

Created: 2012-05-30 Last updated: 2020-05-12

The problem is the wrong mode (permissions) on the key file.

It is easily solved by executing:

chmod 400 mykey.pem

Taken from Amazon’s instructions -

Your key file must not be publicly viewable for SSH to work. Use this command if needed: chmod 400 mykey.pem

400 protects the file by making it read-only, and accessible only by the owner.

#2 Best answer 2 of Trying to SSH into an Amazon Ec2 instance - permission error (Score: 272)

Created: 2011-11-19 Last updated: 2015-09-21

You are likely using the wrong username to login:

  • most Ubuntu images have a user ubuntu
  • Amazon Linux AMIs use ec2-user
  • most Debian images have either root or admin

To login, you need to adjust your ssh command:

ssh -l USERNAME_HERE -i .ssh/yourkey.pem public-ec2-host

HTH
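Putting the two answers together, a minimal working sequence looks like this (the key file name and host are illustrative; pick the username that matches your AMI):

chmod 400 amazonec2.pem                       # key readable only by you
ssh -i amazonec2.pem ubuntu@public-ec2-host   # or ec2-user / root / admin, depending on the image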

See also the original question on Stack Overflow

#3: Downloading an entire S3 bucket? (Score: 815)

Created: 2011-12-28 Last updated: 2020-12-13

Tags: amazon-web-services, amazon-s3, aws-cli

I noticed that there doesn’t seem to be an option to download an entire S3 bucket from the AWS Management Console.

Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again but I don’t know if there’s an easier way.

#3 Best answer 1 of Downloading an entire S3 bucket? (Score: 1531)

Created: 2013-09-12 Last updated: 2020-04-22

AWS CLI

See the “AWS CLI Command Reference” for more information.

AWS recently released their Command Line Tools, which work much like boto and can be installed using

sudo easy_install awscli

or

sudo pip install awscli

Once installed, you can then simply run:

aws s3 sync s3://<source_bucket> <local_destination>

For example:

aws s3 sync s3://mybucket .

will download all the objects in mybucket to the current directory.

And will output:

download: s3://mybucket/test.txt to test.txt
download: s3://mybucket/test2.txt to test2.txt

This will download all of your files using a one-way sync. It will not delete any existing files in your current directory unless you specify --delete, and it won’t change or delete any files on S3.

You can also do S3 bucket to S3 bucket, or local to S3 bucket sync.
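For instance, the reverse direction and bucket-to-bucket copies look like this (bucket names are illustrative):

aws s3 sync . s3://mybucket                     # upload the current directory to a bucket
aws s3 sync s3://mybucket s3://mybackupbucket   # one-way copy between buckets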

Check out the documentation and other examples.

While the example above shows how to download a full bucket, you can also download a folder recursively by performing

aws s3 cp s3://BUCKETNAME/PATH/TO/FOLDER LocalFolderName --recursive

This will instruct the CLI to download all files and folder keys recursively under the PATH/TO/FOLDER prefix in the BUCKETNAME bucket.

#3 Best answer 2 of Downloading an entire S3 bucket? (Score: 176)

Created: 2012-03-10 Last updated: 2020-04-22

You can use s3cmd to download your bucket:

s3cmd --configure
s3cmd sync s3://bucketnamehere/folder /destination/folder

There is another tool you can use called rclone. This is a code sample in the Rclone documentation:

rclone sync /home/local/directory remote:bucket

See also the original question on Stack Overflow

#4: What is the difference between Amazon SNS and Amazon SQS? (Score: 531)

Created: 2012-12-03 Last updated: 2021-02-02

Tags: amazon-web-services, amazon-sqs, amazon-sns

When would I use SNS versus SQS, and why are they always coupled together?

#4 Best answer 1 of What is the difference between Amazon SNS and Amazon SQS? (Score: 766)

Created: 2012-12-03 Last updated: 2021-02-02

SNS is a distributed publish-subscribe system. Messages are pushed to subscribers as and when they are sent by publishers to SNS.

SQS is a distributed queuing system. Messages are not pushed to receivers; receivers have to poll or pull messages from SQS. Messages can’t be received by multiple receivers at the same time: any one receiver can receive a message, process it and delete it, and other receivers do not receive the same message later. Polling inherently introduces some latency in message delivery in SQS, unlike SNS, where messages are immediately pushed to subscribers. SNS supports several endpoint types, such as email, SMS, HTTP endpoints and SQS. If you want an unknown number and type of subscribers to receive messages, you need SNS.

You don’t always have to couple SNS and SQS; you can have SNS send messages to email, SMS or HTTP endpoints apart from SQS. But there are advantages to coupling SNS with SQS. You may not want an external service to make connections to your hosts (a firewall may block all incoming connections to your host from outside).

Your endpoint may simply die under a heavy volume of messages, and email and SMS may not be your choice for processing messages quickly. By coupling SNS with SQS, you can receive messages at your own pace. It allows clients to be offline and tolerant of network and host failures, and you also get guaranteed delivery. If you configure SNS to send messages to an HTTP endpoint, email or SMS, several failed deliveries may result in messages being dropped.

SQS is mainly used to decouple applications or integrate applications. Messages can be stored in SQS for a short duration of time (maximum 14 days). SNS distributes several copies of messages to several subscribers. For example, let’s say you want to replicate data generated by an application to several storage systems. You could use SNS and send this data to multiple subscribers, each replicating the messages it receives to different storage systems (S3, hard disk on your host, database, etc.).
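To make the coupling concrete, here is a minimal sketch of wiring an SQS queue to an SNS topic with the AWS CLI (names and ARNs are illustrative, and the queue also needs an access policy that allows SNS to send to it, omitted here):

aws sns create-topic --name uploads             # returns the topic ARN
aws sqs create-queue --queue-name thumbnailer   # returns the queue URL
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:uploads \
    --protocol sqs \
    --notification-endpoint arn:aws:sqs:us-east-1:123456789012:thumbnailer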

#4 Best answer 2 of What is the difference between Amazon SNS and Amazon SQS? (Score: 302)

Created: 2018-06-14 Last updated: 2021-02-02

Here’s a comparison of the two:

Entity Type

  • SQS: Queue (Similar to JMS)
  • SNS: Topic (Pub/Sub system)

Message consumption

  • SQS: Pull Mechanism - Consumers poll and pull messages from SQS
  • SNS: Push Mechanism - SNS Pushes messages to consumers

Use Case

  • SQS: Decoupling two applications and allowing parallel asynchronous processing
  • SNS: Fanout - Processing the same message in multiple ways

Persistence

  • SQS: Messages are persisted for some (configurable) duration if no consumer is available (maximum two weeks), so the consumer does not have to be up when messages are added to queue.
  • SNS: No persistence. Whichever consumer is present at the time of message arrival gets the message and the message is deleted. If no consumers are available then the message is lost after a few retries.

Consumer Type

  • SQS: All the consumers are typically identical and hence process the messages in the exact same way (each message is processed once by one consumer, though in rare cases messages may be resent)
  • SNS: The consumers might process the messages in different ways

Sample applications

  • SQS: Jobs framework: The Jobs are submitted to SQS and the consumers at the other end can process the jobs asynchronously. If the job frequency increases, the number of consumers can simply be increased to achieve better throughput.
  • SNS: Image processing. If someone uploads an image to S3 then watermark that image, create a thumbnail and also send a Thank You email. In that case S3 can publish notifications to an SNS topic with three consumers listening to it. The first one watermarks the image, the second one creates a thumbnail and the third one sends a Thank You email. All of them receive the same message (image URL) and do their processing in parallel.

See also the original question on Stack Overflow

#5: Change key pair for ec2 instance (Score: 448)

Created: 2011-10-24 Last updated: 2020-01-04

Tags: amazon-web-services, amazon-ec2, ssh, key-pair

How do I change the key pair for my ec2 instance in AWS management console? I can stop the instance, I can create new key pair, but I don’t see any link to modify the instance’s key pair.

#5 Best answer 1 of Change key pair for ec2 instance (Score: 539)

Created: 2012-08-02 Last updated: 2020-10-17

This answer is useful in the case you no longer have SSH access to the existing server (i.e. you lost your private key).

If you still have SSH access, please use one of the answers below.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#replacing-lost-key-pair

Here is what I did, thanks to Eric Hammond’s blog post:

  1. Stop the running EC2 instance
  2. Detach its /dev/xvda1 volume (let’s call it volume A)
  3. Start a new t1.micro EC2 instance, using my new key pair. Make sure you create it in the same subnet, otherwise you will have to terminate the instance and create it again.
  4. Attach volume A to the new micro instance, as /dev/xvdf (or /dev/sdf)
  5. SSH to the new micro instance and mount volume A to /mnt/tmp:
$ sudo mkdir /mnt/tmp; sudo mount /dev/xvdf1 /mnt/tmp
  6. Copy ~/.ssh/authorized_keys to /mnt/tmp/home/ubuntu/.ssh/authorized_keys
  7. Log out
  8. Terminate the micro instance
  9. Detach volume A from it
  10. Attach volume A back to the main instance as /dev/xvda
  11. Start the main instance
  12. Log in as before, using your new .pem file
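Step 6 spelled out as a command (the path assumes an Ubuntu AMI; adjust the home directory for other images, and make sure the file stays owned by that user with mode 600):

$ sudo cp ~/.ssh/authorized_keys /mnt/tmp/home/ubuntu/.ssh/authorized_keys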

That’s it.

#5 Best answer 2 of Change key pair for ec2 instance (Score: 199)

Created: 2011-10-24 Last updated: 2011-10-24

Once an instance has been started, there is no way to change the keypair associated with the instance at the metadata level, but you can change what ssh key you use to connect to the instance.

There is a startup process on most AMIs that downloads the public ssh key and installs it in a .ssh/authorized_keys file so that you can ssh in as that user using the corresponding private ssh key.

If you want to change what ssh key you use to access an instance, you will want to edit the authorized_keys file on the instance itself and swap in your new ssh public key.

The authorized_keys file is under the .ssh subdirectory under the home directory of the user you are logging in as. Depending on the AMI you are running, it might be in one of:

/home/ec2-user/.ssh/authorized_keys
/home/ubuntu/.ssh/authorized_keys
/root/.ssh/authorized_keys

After editing an authorized_keys file, always use a different terminal to confirm that you are able to ssh in to the instance before you disconnect from the session you are using to edit the file. You don’t want to make a mistake and lock yourself out of the instance entirely.
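A cautious way to make that edit, assuming your new public key has been copied to the instance as new-key.pub, is to append rather than overwrite:

cat ~/new-key.pub >> ~/.ssh/authorized_keys   # the old key keeps working until you delete its line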

While you’re thinking about ssh keypairs on EC2, I recommend uploading your own personal ssh public key to EC2 instead of having Amazon generate the keypair for you.

Here’s an article I wrote about this:

Uploading Personal ssh Keys to Amazon EC2
http://alestic.com/2010/10/ec2-ssh-keys

This would only apply to new instances you run.

See also the original question on Stack Overflow

#6: scp (secure copy) to ec2 instance without password (Score: 445)

Created: 2011-07-02 Last updated: 2018-07-28

Tags: amazon-web-services, amazon-ec2, ssh, scp, pem

I have an EC2 instance running (FreeBSD 9 AMI ami-8cce3fe5), and I can ssh into it using my amazon-created key file without password prompt, no problem.

However, when I want to copy a file to the instance using scp I am asked to enter a password:

scp somefile.txt -i mykey.pem user@ec2-hostname:/

Password:

Any ideas why this is happening/how it can be prevented?

#6 Best answer 1 of scp (secure copy) to ec2 instance without password (Score: 877)

Created: 2011-07-02 Last updated: 2016-03-22

I figured it out. I had the arguments in the wrong order. This works:

scp -i mykey.pem somefile.txt user@ec2-hostname:/

#6 Best answer 2 of scp (secure copy) to ec2 instance without password (Score: 64)

Created: 2014-04-10

scp -i /path/to/your/.pemkey -r /copy/from/path user@ec2-hostname:/copy/to/path

See also the original question on Stack Overflow

#7: What is difference between Lightsail and EC2? (Score: 419)

Created: 2016-12-02 Last updated: 2021-01-04

Tags: amazon-web-services, amazon-ec2, vps, amazon-lightsail

Recently Amazon launched Lightsail. Is there any difference between Lightsail and EC2? If yes, then what’s the difference? Are Lightsail instances more powerful than EC2 instances?

#7 Best answer 1 of What is difference between Lightsail and EC2? (Score: 603)

Created: 2016-12-02 Last updated: 2019-07-11

Testing¹ reveals that Lightsail instances in fact are EC2 instances, from the t2 class of burstable instances.

EC2, of course, has many more instance families and classes other than the t2, almost all of which are more “powerful” (or better equipped for certain tasks) than these, but also much more expensive. But for meaningful comparisons, the 512 MiB Lightsail instance appears to be completely equivalent in specifications to the similarly-priced t2.nano, the 1GiB is a t2.micro, the 2 GiB is a t2.small, etc.

Lightsail is a lightweight, simplified product offering – hard disks are fixed size EBS SSD volumes, instances are still billable when stopped, security group rules are much less flexible, and only a very limited subset of EC2 features and options are accessible.

It also has a dramatically simplified console, and even though the machines run in EC2, you can’t see them in the EC2 section of the AWS console. The instances run in a special VPC, but this aspect is also provisioned automatically, and invisible in the console. Lightsail supports optionally peering this hidden VPC with your default VPC in the same AWS region, allowing Lightsail instances to access services like EC2 and RDS in the default VPC within the same AWS account.²

Bandwidth is unlimited, but of course free bandwidth is not – however, Lightsail instances do include a significant monthly bandwidth allowance before any bandwidth-related charges apply.³ Lightsail also has a simplified interface to Route 53 with limited functionality.

But if those sound like drawbacks, they aren’t. The point of Lightsail seems to be simplicity. The flexibility of EC2 (and much of AWS) leads inevitably to complexity. The target market for Lightsail appears to be those who “just want a simple VPS” without having to navigate the myriad options available in AWS services like EC2, EBS, VPC, and Route 53. There is virtually no learning curve here. You don’t even technically need to know how to use SSH with a private key – the Lightsail console even has a built-in SSH client – but there is no requirement that you use it. You can access these instances normally, with a standard SSH client.


¹Lightsail instances, just like “regular” EC2 (VPC and Classic) instances, have access to the instance metadata service, which allows an instance to discover things about itself, such as its instance type and availability zone. Lightsail instances are identified in the instance metadata as t2 machines.
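You can verify this yourself from a Lightsail instance (a quick check; the output shown is illustrative):

curl -s http://169.254.169.254/latest/meta-data/instance-type
t2.micro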

²The Lightsail docs are not explicit about the fact that peering only works with your Default VPC, but this appears to be the case. If your AWS account was created in 2013 or before, then you may not actually have a VPC with the “Default VPC” designation. This can be resolved by submitting a support request, as I explained in Can’t establish VPC peering connection from Amazon Lightsail (at Server Fault).

³The bandwidth allowance applies to both inbound and outbound traffic; after this total amount of traffic is exceeded, inbound traffic continues to be free, but outbound traffic becomes billable. See “What does data transfer cost?” in the Lightsail FAQ.

#7 Best answer 2 of What is difference between Lightsail and EC2? (Score: 37)

Created: 2016-12-15

Lightsail VPSs are bundles of existing AWS products, offered through a significantly simplified interface. The difference is that Lightsail offers you a limited and fixed menu of options but with much greater ease of use. Other than the narrower scope of Lightsail in order to meet the requirements for simplicity and low cost, the underlying technology is the same.

The pre-defined bundles can be described:

% aws lightsail --region us-east-1 get-bundles
{
    "bundles": [
        {
            "name": "Nano",
            "power": 300,
            "price": 5.0,
            "ramSizeInGb": 0.5,
            "diskSizeInGb": 20,
            "transferPerMonthInGb": 1000,
            "cpuCount": 1,
            "instanceType": "t2.nano",
            "isActive": true,
            "bundleId": "nano_1_0"
        },
        ...
    ]
}

It’s worth reading through the Amazon EC2 T2 Instances documentation, particularly the CPU Credits section which describes the base and burst performance characteristics of the underlying instances.

Importantly, since your Lightsail instances run in a VPC, you still have access to the full spectrum of AWS services, e.g. S3, RDS, and so on, as you would from any EC2 instance.

See also the original question on Stack Overflow

#8: How to get the instance id from within an ec2 instance? (Score: 406)

Created: 2009-03-09 Last updated: 2018-02-14

Tags: amazon-ec2, amazon-web-services

How can I find out the instance id of an ec2 instance from within the ec2 instance?

#8 Best answer 1 of How to get the instance id from within an ec2 instance? (Score: 554)

Created: 2009-03-09 Last updated: 2021-01-18

See the EC2 documentation on the subject.

Run:

wget -q -O - http://169.254.169.254/latest/meta-data/instance-id

If you need programmatic access to the instance ID from within a script,

die() { status=$1; shift; echo "FATAL: $*"; exit $status; }
EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"

Here is an example of a more advanced use (retrieve instance ID as well as availability zone and region, etc.):

EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id || die \"wget instance-id has failed: $?\"`"
test -n "$EC2_INSTANCE_ID" || die 'cannot obtain instance-id'
EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone || die \"wget availability-zone has failed: $?\"`"
test -n "$EC2_AVAIL_ZONE" || die 'cannot obtain availability-zone'
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"

You may also use curl instead of wget, depending on what is installed on your platform.
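For example, the curl equivalent of the first command above:

curl -s http://169.254.169.254/latest/meta-data/instance-id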

#8 Best answer 2 of How to get the instance id from within an ec2 instance? (Score: 151)

Created: 2013-05-13 Last updated: 2018-01-29

On Amazon Linux AMIs you can do:

$ ec2-metadata -i
instance-id: i-1234567890abcdef0

Or, on Ubuntu and some other Linux flavours, ec2metadata --instance-id (this command may not be installed by default on Ubuntu, but you can add it with sudo apt-get install cloud-utils)

As its name suggests, you can use the command to get other useful metadata too.

See also the original question on Stack Overflow

#9: How to pass a querystring or route parameter to AWS Lambda from Amazon API Gateway (Score: 404)

Created: 2015-07-09 Last updated: 2015-07-27

Tags: amazon-web-services, aws-lambda, aws-api-gateway

For instance, if we want to use

GET /user?name=bob

or

GET /user/bob

How would you pass both of these examples as a parameter to the Lambda function?

I saw something about setting a “mapped from” in the documentation, but I can’t find that setting in the API Gateway console.

  • method.request.path.parameter-name for a path parameter named parameter-name as defined in the Method Request page.
  • method.request.querystring.parameter-name for a query string parameter named parameter-name as defined in the Method Request page.

I don’t see either of these options even though I defined a query string.

#9 Best answer 1 of How to pass a querystring or route parameter to AWS Lambda from Amazon API Gateway (Score: 394)

Created: 2017-09-08

As of September 2017, you no longer have to configure mappings to access the request body.

All you need to do is check “Use Lambda Proxy integration” under Integration Request, under the resource.


You’ll then be able to access query parameters, path parameters and headers like so

event['pathParameters']['param1']
event["queryStringParameters"]['queryparam1']
event['requestContext']['identity']['userAgent']
event['requestContext']['identity']['sourceIP']
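End to end, a request like the following (the API ID, region and stage are placeholders) arrives with name available under queryStringParameters:

curl "https://abc123.execute-api.us-east-1.amazonaws.com/prod/user?name=bob"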

#9 Best answer 2 of How to pass a querystring or route parameter to AWS Lambda from Amazon API Gateway (Score: 227)

Created: 2015-07-10 Last updated: 2017-04-24

The steps to get this working are:

Within the API Gateway Console …

  1. go to Resources -> Integration Request

  2. click on the plus or edit icon next to templates dropdown (odd I know since the template field is already open and the button here looks greyed out)

  3. Explicitly type application/json in the content-type field even though it shows a default (if you don’t do this it will not save and will not give you an error message)

  4. put this in the input mapping { "name": "$input.params('name')" }

  5. click on the check box next to the templates dropdown (I’m assuming this is what finally saves it)
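With that mapping template in place, a request to GET /user?name=bob results in the Lambda function receiving an event like the following sketch ($input.params('name') resolves the name parameter):

{ "name": "bob" }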

See also the original question on Stack Overflow

#10: Benefits of EBS vs. instance-store (and vice-versa) (Score: 385)

Created: 2010-09-02

Tags: amazon-ec2, amazon-web-services, amazon-ebs

I’m unclear as to what benefits I get from EBS vs. instance-store for my instances on Amazon EC2. If anything, it seems that EBS is way more useful (stop, start, persist + better speed) at relatively little difference in cost…? Also, is there any metric as to whether more people are using EBS now that it’s available, considering it is still relatively new?

#10 Best answer 1 of Benefits of EBS vs. instance-store (and vice-versa) (Score: 295)

Created: 2010-09-02 Last updated: 2016-09-26

The bottom line is you should almost always use EBS backed instances.

Here’s why

  • EBS backed instances can be set so that they cannot be (accidentally) terminated through the API.
  • EBS backed instances can be stopped when you’re not using them and resumed when you need them again (like pausing a Virtual PC); at least with my usage patterns, this saves much more money than I spend on a few dozen GB of EBS storage.
  • EBS backed instances don’t lose their instance storage when they crash (not a requirement for all users, but makes recovery much faster)
  • You can dynamically resize EBS instance storage (see the sketch after this list).
  • You can transfer the EBS instance storage to a brand new instance (useful if the hardware at Amazon you were running on gets flaky or dies, which does happen from time to time)
  • It is faster to launch an EBS backed instance because the image does not have to be fetched from S3.
  • If the hardware your EBS-backed instance runs on is scheduled for maintenance, stopping and starting the instance automatically migrates it to new hardware. I was also able to move an EBS-backed instance off failed hardware by force-stopping the instance and launching it again (your mileage may vary on failed hardware).
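As an example of the resizing point above, growing a volume is nowadays an API call plus a filesystem grow (the volume ID, target size and device names are illustrative; assumes an ext4 root volume):

aws ec2 modify-volume --volume-id vol-0abc1234567890def --size 100   # grow the EBS volume to 100 GiB
sudo growpart /dev/xvda 1     # extend the partition to fill the volume
sudo resize2fs /dev/xvda1     # extend the ext4 filesystem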

I’m a heavy user of Amazon and switched all of my instances to EBS backed storage as soon as the technology came out of beta. I’ve been very happy with the result.

EBS can still fail - not a silver bullet

Keep in mind that any piece of cloud-based infrastructure can fail at any time. Plan your infrastructure accordingly. While EBS-backed instances provide a certain level of durability compared to ephemeral-storage instances, they can and do fail. Have an AMI from which you can launch new instances as needed in any availability zone, back up your important data (e.g. databases), and if your budget allows it, run multiple instances of servers for load balancing and redundancy (ideally in multiple availability zones).

When Not To

At some points in time, it may be cheaper to achieve faster I/O on instance-store instances; there was a time when that was certainly true. Now there are many options for EBS storage, catering to many needs, and the options and their pricing evolve constantly as technology changes. If you have a significant number of instances that are truly disposable (they don’t affect your business much if they just go away), do the math on cost vs. performance. EBS-backed instances can also die at any point in time, but my practical experience is that EBS is more durable.

#10 Best answer 2 of Benefits of EBS vs. instance-store (and vice-versa) (Score: 69)

Created: 2010-10-21 Last updated: 2012-12-06

99% of our AWS setup is recyclable, so for me it doesn’t really matter if I terminate an instance – nothing is ever lost. E.g. my application is automatically deployed onto an instance from SVN, and our logs are written to a central syslog server.

The only benefit of instance storage that I see is cost savings. Otherwise EBS-backed instances win. Eric mentioned all the advantages.


[2012-07-16] I would phrase this answer a lot differently today.

I haven’t had any good experience with EBS-backed instances in the past year or so. The last downtimes on AWS pretty much wrecked EBS as well.

I am guessing that a service like RDS uses some kind of EBS as well, and that seems to work for the most part. On the instances we manage ourselves, we have gotten rid of EBS where possible.

We went so far as to move a database cluster back to iron (= real hardware). The only remaining piece in our infrastructure is a DB server where we stripe multiple EBS volumes into a software RAID and back up twice a day. Whatever would be lost between backups, we can live with.
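A sketch of that striping setup with mdadm (device names, volume count and mount point are illustrative):

sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
sudo mkfs.ext4 /dev/md0             # create a filesystem on the striped array
sudo mount /dev/md0 /var/lib/data   # mount it where the database expects its data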

EBS is a somewhat flaky technology since it’s essentially a network volume: a volume attached to your server from remote. I am not negating the work done with it – it is an amazing product, since essentially unlimited persistent storage is just an API call away. But it’s hardly fit for scenarios where I/O performance is key.

And in addition to how network storage behaves, all network capacity is shared on EC2 instances. The smaller the instance (e.g. t1.micro, m1.small), the worse it gets, because the network interfaces on the actual host system are shared among the multiple VMs (= your EC2 instances) which run on top of it.

The larger the instance you get, the better it gets, of course – though “better” here means within reason.

When persistence is required, I would always advise people to use something like S3 to centralize between instances. S3 is a very stable service. Then automate your instance setup to the point where you can boot a new server and it gets ready by itself. Then there is no need to have network storage which lives longer than the instance.

So all in all, I see no benefit to EBS-backed instances whatsoever. I’d rather add a minute to bootstrapping than run with a potential SPOF.

See also the original question on Stack Overflow


Notes:
  1. This page uses an API to get the relevant data from the Stack Overflow community.
  2. The content license on this page is CC BY-SA 3.0.
  3. score = upvotes − downvotes