Most votes on amazon-web-services questions 10

#91 AWS S3 CLI - Could not connect to the endpoint URL
#92 AWS ECS Error when running task: No Container Instances were found in your cluster
#93 Amazon products API - Looking for basic overview and information
#94 How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)?
#95 What is the recommended way to delete a large number of items from DynamoDB?
#96 What is the best way to pass AWS credentials to a Docker container?
#97 AWS S3: how do I see how much disk space is using
#98 Access denied; you need (at least one of) the SUPER privilege(s) for this operation
#99 RRSet of type CNAME with DNS name is not permitted at apex in zone
#100 How to change User Status FORCE_CHANGE_PASSWORD?

Read all the top-voted questions and answers on a single page.

#91: AWS S3 CLI - Could not connect to the endpoint URL (Score: 128)

Created: 2016-11-03 Last updated: 2017-09-15

Tags: amazon-web-services, amazon-s3

$ aws s3 ls

Could not connect to the endpoint URL: ""

What could be the problem?

#91 Best answer 1 of AWS S3 CLI - Could not connect to the endpoint URL (Score: 294)

Created: 2016-11-03

You probably have something wrong in your default profile for the default region.

Check your file at ~/.aws/config and look at the region line there. Fix the region to region=us-east-1 (or whichever region you actually use) and then the command will work correctly.
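For reference, a minimal ~/.aws/config looks like this (values illustrative); an invalid region value on that line, such as an availability-zone name, is what typically produces the endpoint error above:

```ini
[default]
region=us-east-1
output=json
```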

#91 Best answer 2 of AWS S3 CLI - Could not connect to the endpoint URL (Score: 8)

Created: 2018-04-06

First run ‘aws configure’, then input the access key, secret key, and the region. The region you input is what matters for this problem: enter a valid region name (for example us-east-1), not an invalid value such as an availability-zone name. That will solve the issue.

See also original question in stackoverflow

#92: AWS ECS Error when running task: No Container Instances were found in your cluster (Score: 128)

Created: 2016-04-09 Last updated: 2018-08-01

Tags: amazon-web-services, docker, aws-cli, amazon-ecs

I'm trying to deploy a docker container image to AWS using ECS, but the EC2 instance is not being created. I have scoured the internet looking for an explanation as to why I'm receiving the following error:

“A client error (InvalidParameterException) occurred when calling the RunTask operation: No Container Instances were found in your cluster.”

Here are my steps:

1. Pushed a docker image FROM Ubuntu to my Amazon ECS repo.

2. Registered an ECS Task Definition:

aws ecs register-task-definition --cli-input-json file://path/to/my-task.json 

3. Ran the task:

aws ecs run-task --task-definition my-task

Yet, it fails.

Here is my task:

{
  "family": "my-task",
  "containerDefinitions": [
    {
      "environment": [],
      "name": "my-container",
      "image": "my-namespace/my-image",
      "cpu": 10,
      "memory": 500,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80
        }
      ],
      "entryPoint": [],
      "essential": true
    }
  ]
}
I have also tried using the management console to configure a cluster and services, yet I get the same error. How do I configure the cluster to have EC2 instances, and what kind of container instances do I need to use? I thought this whole process was to create the EC2 instances to begin with!

#92 Best answer 1 of AWS ECS Error when running task: No Container Instances were found in your cluster (Score: 180)

Created: 2016-04-10 Last updated: 2018-11-15

I figured this out after a few more hours of investigating. Amazon, if you are listening, you should state this somewhere in your management console when creating a cluster or adding instances to the cluster:

“Before you can add ECS instances to a cluster you must first go to the EC2 Management Console and create ecs-optimized instances with an IAM role that has the AmazonEC2ContainerServiceforEC2Role policy attached”

Here is the rigmarole:

1. Go to your EC2 Dashboard, and click the Launch Instance button.

2. Under Community AMIs, Search for ecs-optimized, and select the one that best fits your project needs. Any will work. Click next.

3. When you get to Configure Instance Details, click on the create new IAM role link and create a new role called ecsInstanceRole.

4. Attach the AmazonEC2ContainerServiceforEC2Role policy to that role.

5. Then, finish configuring your ECS Instance.
NOTE: If you are creating a web server you will want to create a securityGroup to allow access to port 80.

After a few minutes, when the instance is initialized and running, you can refresh the ECS Instances tab where you are trying to add instances.

#92 Best answer 2 of AWS ECS Error when running task: No Container Instances were found in your cluster (Score: 44)

Created: 2017-01-03

Currently, the Amazon AWS web interface can automatically create instances with the correct AMI and the correct name so it’ll register to the correct cluster.

Even though all instances were created by Amazon with the correct settings, my instances wouldn’t register. On the Amazon AWS forums I found a clue. It turns out that your clusters need internet access and if your private VPC does not have an internet gateway, the clusters won’t be able to connect.

The fix

In the VPC dashboard you should create a new Internet Gateway and attach it to the VPC used by the cluster. Once attached, you must update (or create) the route table for the VPC and add a final route with destination 0.0.0.0/0 and target igw-24b16740

Where igw-24b16740 is the ID of your freshly created internet gateway.

See also original question in stackoverflow

#93: Amazon products API - Looking for basic overview and information (Score: 128)

Created: 2009-10-20 Last updated: 2009-10-21

Tags: amazon-web-services

After using the eBay API recently, I was expecting it to be as simple to request info from Amazon, but it seems not…

There does not seem to be a good webpage which explains the basics. For starters, what is the service called? The old name has been dropped I think, and the acronym AWS used everywhere (but isn’t that an umbrella term which includes their cloud computing and 20 other services too?).

There is a lack of clear information about the new ‘signature’ process. Gathering together snippets of detail from various pages I’ve stumbled upon, it seems that prior to August 2009 you just needed a developer account with Amazon to make requests and get XML back. Now you have to use some fancy encryption process to create an extra number in your querystring. Does this mean Amazon data is completely out of reach for the programmer who just wants a quick and simple solution?

There seems to be a tiny bit of information on RSS feeds, and you can get a feed of items that have been ‘tagged’ easily, but I can’t tell if there is a way to search for titles using RSS too. Some websites seem to suggest this, but I think they are out of date now?

If anyone can give a short summary to the current state of play I’d be very grateful. All I want to do is go from a book title in my database, and use Classic ASP to get a set of products that match from Amazon, listing cover images and prices.

Amazon ‘widgets’ can display keyword search results on my pages, but I have less control over these, and they are shown to the user only - my code can’t look inside them.

#93 Best answer 1 of Amazon products API - Looking for basic overview and information (Score: 124)

Created: 2009-11-07 Last updated: 2016-07-12

Your post contains several questions, so I’ll try to answer them one at a time:

  1. The API you’re interested in is the Product Advertising API (PA). It allows you programmatic access to search and retrieve product information from Amazon’s catalog. If you’re having trouble finding information on the API, that’s because the web service has undergone two name changes in recent history: it was also known as ECS and AAWS.
  2. The signature process you’re referring to is the same HMAC signature that all of the other AWS services use for authentication. All that’s required to sign your requests to the Product Advertising API is a function to compute a SHA-1 hash and an AWS developer key. For more information, see the section of the developer documentation on signing requests.
  3. As far as I know, there is no support for retrieving RSS feeds of products or tags through PA. If anyone has information suggesting otherwise, please correct me.
  4. Either the REST or SOAP APIs should make your use case very straight forward. Amazon provides a fairly basic “getting started” guide available here. As well, you can view the complete API developer documentation here.

Although the documentation is a little hard to find (likely due to all the name changes), the PA API is very well documented and rather elegant. With a modicum of elbow grease and some previous experience in calling out to web services, you shouldn’t have any trouble getting the information you need from the API.
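The signing flow mentioned in answer point 2 can be sketched generically. This is an illustrative Python sketch only: the function name, host, and path are placeholders, and the exact canonical-string format and hash algorithm for the Product Advertising API are defined in Amazon's signing documentation.

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, secret_key, host="webservices.amazon.com", path="/onca/xml"):
    """Generic HMAC request-signing sketch: sort the parameters,
    build a canonical query string, HMAC it with the secret key,
    and append the base64-encoded digest as a Signature parameter."""
    # Sort parameters and URL-encode keys/values for the canonical string.
    canonical = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()
    return canonical + "&Signature=" + urllib.parse.quote(signature, safe="")
```

Because the parameters are sorted before signing, the same request always produces the same signed query string regardless of the order the caller supplies them in.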

#93 Best answer 2 of Amazon products API - Looking for basic overview and information(Score: 29)

Created: 2009-11-01

I agree that Amazon appears to be intentionally obfuscating even how to find the API documentation, as well as use it. I’m just speculating though.

Renaming the services from “ECS” to “Product Advertising API” was probably also not the best move, it essentially invalidated all that Google mojo they had built up over time.

It took me quite a while to ‘discover’ this updated link for the Product Advertising API. I don’t remember being able to easily discover it through the typical ‘Developer’ link on the Amazon webpage. This documentation appears to be valid and is what I’ve worked from recently.

The change to authentication procedures also seems to add further complexity, but I’m sure they have a reason for it.

I use SOAP via C# to communicate with Amazon Product API.

With the REST API you have to sign the whole URL in a fairly specific way. The params have to be sorted, etc. There is just more to do. With the SOAP API, you just sign the operation+timestamp, and that’s it.

Adam O’Neil’s post here, How to get album, dvd, and blueray cover art from Amazon, walks through the SOAP with C# method. It’s not the original sample I pulled down, and contrary to his comment, it was not an official Amazon sample I stumbled on, though the code looks identical. However, Adam does a good job of presenting all the necessary steps. I wish I could credit the original author.

See also original question in stackoverflow

#94: How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)? (Score: 127)

Created: 2012-06-26 Last updated: 2013-06-07

Tags: amazon-web-services, app-config, amazon-elastic-beanstalk

The Amazon Elastic Beanstalk blurb says:

Elastic Beanstalk lets you “open the hood” and retain full control … even pass environment variables through the Elastic Beanstalk console.

How do you pass other environment variables besides the ones in the Elastic Beanstalk configuration?

#94 Best answer 1 of How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)? (Score: 147)

Created: 2013-07-26 Last updated: 2017-05-23

As a heads up to anyone who uses the .ebextensions/*.config way: nowadays you can add, edit and remove environment variables in the Elastic Beanstalk web interface.

The variables are under Configuration → Software Configuration:

Environment Properties

Creating the vars in .ebextensions like in Onema’s answer still works.

It can even be preferable, e.g. if you will deploy to another environment later and are afraid of forgetting to manually set them, or if you are ok with committing the values to source control. I use a mix of both.

#94 Best answer 2 of How do you pass custom environment variable on Amazon Elastic Beanstalk (AWS EBS)? (Score: 107)

Created: 2013-01-23 Last updated: 2013-11-19

Only 5 values is limiting, or you may want to have a custom environment variable name. You can do this by using the configuration files. Create a directory at the root of your project called .ebextensions

Then create a file called environment.config (this file can be called anything but it must have the .config extension) and add the following values

option_settings:
  - option_name: CUSTOM_ENV
    value: staging

After you deploy your application you will see this new value under Environment Details -> Edit Configuration -> Container

for more information check the documentation here:


To prevent committing to your repository values like API keys, secrets and so on, you can put a placeholder value.

option_settings:
  - option_name: SOME_API_KEY
    value: placeholder-value-change-me

Later you can go to the AWS admin panel (Environment Details -> Edit Configuration -> Container) and update the values there. In my experience these values do not change after subsequent deployments.

Update 2: As @Benjamin stated in his comment, since the new look and feel was rolled out on July 18, 2013, it is possible to define any number of environment variables directly from the console:

Configuration > Software Configuration > Environment Properties

See also original question in stackoverflow

#95: What is the recommended way to delete a large number of items from DynamoDB?

Created: 2012-02-06 Last updated: 2017-08-19

Tags: database, nosql, amazon-web-services, cloud, amazon-dynamodb

I’m writing a simple logging service in DynamoDB.

I have a logs table that is keyed by a user_id hash and a timestamp (Unix epoch int) range.

When a user of the service terminates their account, I need to delete all items in the table, regardless of the range value.

What is the recommended way of doing this sort of operation (Keeping in mind there could be millions of items to delete)?

My options, as far as I can see are:

A: Perform a Scan operation, calling delete on each returned item, until no items are left

B: Perform a BatchGet operation, again calling delete on each item until none are left

Both of these look terrible to me as they will take a long time.

What I ideally want to do is call LogTable.DeleteItem(user_id) - Without supplying the range, and have it delete everything for me.

#95 Best answer 1 of What is the recommended way to delete a large number of items from DynamoDB?

Created: 2012-02-06 Last updated: 2017-05-23

What I ideally want to do is call LogTable.DeleteItem(user_id) - Without supplying the range, and have it delete everything for me.

An understandable request indeed; I can imagine advanced operations like these might get added over time by the AWS team (they have a history of starting with a limited feature set first and evaluate extensions based on customer feedback), but here is what you should do to avoid the cost of a full scan at least:

  1. Use Query rather than Scan to retrieve all items for user_id - this works regardless of the combined hash/range primary key in use, because HashKeyValue and RangeKeyCondition are separate parameters in this API and the former only targets the attribute value of the hash component of the composite primary key.
  • Please note that you'll have to deal with the query API paging here as usual, see the ExclusiveStartKey parameter:

Primary key of the item from which to continue an earlier query. An earlier query might provide this value as the LastEvaluatedKey if that query operation was interrupted before completing the query; either because of the result set size or the Limit parameter. The LastEvaluatedKey can be passed back in a new query request to continue the operation from that point.

  2. Loop over all returned items and call DeleteItem on each one as usual.
  • Update: Most likely BatchWriteItem is more appropriate for a use case like this (see below for details).


As highlighted by ivant, the BatchWriteItem operation enables you to put or delete several items across multiple tables in a single API call [emphasis mine]:

To upload one item, you can use the PutItem API and to delete one item, you can use the DeleteItem API. However, when you want to upload or delete large amounts of data, such as uploading large amounts of data from Amazon Elastic MapReduce (EMR) or migrate data from another database in to Amazon DynamoDB, this API offers an efficient alternative.

Please note that this still has some relevant limitations, most notably:

  • Maximum operations in a single request — You can specify a total of up to 25 put or delete operations; however, the total request size cannot exceed 1 MB (the HTTP payload).

  • Not an atomic operation — Individual operations specified in a BatchWriteItem are atomic; however BatchWriteItem as a whole is a “best-effort” operation and not an atomic operation. That is, in a BatchWriteItem request, some operations might succeed and others might fail. […]

Nevertheless this obviously offers a potentially significant gain for use cases like the one at hand.
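Given the 25-operation limit quoted above, a mass delete has to be chunked into batches. A minimal sketch of that chunking (the helper name and the key shape with 'user_id'/'timestamp' are illustrative, matching the table described in the question, not part of the DynamoDB API):

```python
def batch_delete_requests(table_name, keys, batch_size=25):
    """Group delete requests into batches of at most 25 items
    (the BatchWriteItem per-request limit). Returns a list of
    RequestItems payloads, one per BatchWriteItem call."""
    batches = []
    for i in range(0, len(keys), batch_size):
        chunk = keys[i:i + batch_size]
        batches.append({
            table_name: [{"DeleteRequest": {"Key": k}} for k in chunk]
        })
    return batches
```

Each returned payload would go into one BatchWriteItem call; since BatchWriteItem is best-effort, any UnprocessedItems in the response still need to be retried.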

#95 Best answer 2 of What is the recommended way to delete a large number of items from DynamoDB?

Created: 2013-04-15

According to the DynamoDB documentation you could just delete the full table.

See below:

“Deleting an entire table is significantly more efficient than removing items one-by-one, which essentially doubles the write throughput as you do as many delete operations as put operations”

If you wish to delete only a subset of your data, then you could make separate tables for each month, year or similar. This way you could remove “last month” and keep the rest of your data intact.

This is how you delete a table in Java using the AWS SDK:

DeleteTableRequest deleteTableRequest = new DeleteTableRequest()
        .withTableName("logs"); // table name is illustrative
DeleteTableResult result = client.deleteTable(deleteTableRequest);

See also original question in stackoverflow

#96: What is the best way to pass AWS credentials to a Docker container? (Score: 126)

Created: 2016-04-01 Last updated: 2021-01-25

Tags: amazon-web-services, docker, docker-compose

I am running a docker container on Amazon EC2. Currently I have added AWS credentials to the Dockerfile. Could you please let me know the best way to do this instead?

#96 Best answer 1 of What is the best way to pass AWS credentials to a Docker container? (Score: 127)

Created: 2019-05-10 Last updated: 2020-11-20

A lot has changed in Docker since this question was asked, so here’s an attempt at an updated answer.

First, specifically with AWS credentials on containers already running inside of the cloud, using IAM roles as Vor suggests is a really good option. If you can do that, then add one more plus one to his answer and skip the rest of this.

Once you start running things outside of the cloud, or have a different type of secret, there are two key places that I recommend against storing secrets:

  1. Environment variables: when these are defined on a container, every process inside the container has access to them, they are visible via /proc, apps may dump their environment to stdout where it gets stored in the logs, and most importantly, they appear in clear text when you inspect the container.

  2. In the image itself: images often get pushed to registries where many users have pull access, sometimes without any credentials required to pull the image. Even if you delete the secret from one layer, the image can be disassembled with common Linux utilities like tar and the secret can be found from the step where it was first added to the image.

So what other options are there for secrets in Docker containers?

Option A: If you need this secret only during the build of your image, cannot use the secret before the build starts, and do not have access to BuildKit yet, then a multi-stage build is the best of the bad options. You would add the secret to the initial stages of the build, use it there, and then copy the output of that stage without the secret to your release stage, and only push that release stage to the registry servers. This secret is still in the image cache on the build server, so I tend to use this only as a last resort.

Option B: Also during build time, if you can use BuildKit which was released in 18.09, there are currently experimental features to allow the injection of secrets as a volume mount for a single RUN line. That mount does not get written to the image layers, so you can access the secret during build without worrying it will be pushed to a public registry server. The resulting Dockerfile looks like:

# syntax = docker/dockerfile:experimental
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...

And you build it with a command in 18.09 or newer like:

DOCKER_BUILDKIT=1 docker build -t your_image --secret id=aws,src=$HOME/.aws/credentials .

Option C: At runtime on a single node, without Swarm Mode or other orchestration, you can mount the credentials as a read only volume. Access to this credential requires the same access that you would have outside of docker to the same credentials file, so it’s no better or worse than the scenario without docker. Most importantly, the contents of this file should not be visible when you inspect the container, view the logs, or push the image to a registry server, since the volume is outside of that in every scenario. This does require that you copy your credentials on the docker host, separate from the deploy of the container. (Note, anyone with the ability to run containers on that host can view your credential since access to the docker API is root on the host and root can view the files of any user. If you don’t trust users with root on the host, then don’t give them docker API access.)

For a docker run, this looks like:

docker run -v $HOME/.aws/credentials:/home/app/.aws/credentials:ro your_image

Or for a compose file, you’d have:

version: '3'
services:
  app:  # service name is illustrative
    image: your_image
    volumes:
    - $HOME/.aws/credentials:/home/app/.aws/credentials:ro

Option D: With orchestration tools like Swarm Mode and Kubernetes, we now have secrets support that’s better than a volume. With Swarm Mode, the file is encrypted on the manager filesystem (though the decryption key is often there too, allowing the manager to be restarted without an admin entering a decrypt key). More importantly, the secret is only sent to the workers that need the secret (running a container with that secret), it is only stored in memory on the worker, never disk, and it is injected as a file into the container with a tmpfs mount. Users on the host outside of swarm cannot mount that secret directly into their own container, however, with open access to the docker API, they could extract the secret from a running container on the node, so again, limit who has this access to the API. From compose, this secret injection looks like:

version: '3.7'

secrets:
  aws_creds:
    external: true

services:
  app:  # service name is illustrative
    image: your_image
    secrets:
    - source: aws_creds
      target: /home/user/.aws/credentials
      uid: '1000'
      gid: '1000'
      mode: 0700

You turn on swarm mode with docker swarm init for a single node, then follow the directions for adding additional nodes. You can create the secret externally with docker secret create aws_creds $HOME/.aws/credentials. And you deploy the compose file with docker stack deploy -c docker-compose.yml stack_name.

I often version my secrets using a script from:

Option E: Other tools exist to manage secrets, and my favorite is Vault because it gives the ability to create time limited secrets that automatically expire. Every application then gets its own set of tokens to request secrets, and those tokens give them the ability to request those time limited secrets for as long as they can reach the vault server. That reduces the risk if a secret is ever taken out of your network since it will either not work or be quick to expire. The functionality specific to AWS for Vault is documented at

#96 Best answer 2 of What is the best way to pass AWS credentials to a Docker container? (Score: 122)

Created: 2016-04-01 Last updated: 2020-05-12

The best way is to use an IAM Role and not deal with credentials at all.

Credentials can be retrieved from the EC2 instance metadata service at http://169.254.169.254. Since this is a link-local private IP address, it is accessible only from EC2 instances.

All modern AWS client libraries “know” how to fetch, refresh and use credentials from there. So in most cases you don’t even need to know about it. Just run the EC2 instance with the correct IAM role and you’re good to go.

As an option you can pass them at runtime as environment variables (e.g. docker run -e AWS_ACCESS_KEY_ID=xyz -e AWS_SECRET_ACCESS_KEY=aaa myimage)

You can access these environment variables by running printenv at the terminal.

See also original question in stackoverflow

#97: AWS S3: how do I see how much disk space is using (Score: 123)

Created: 2012-01-23 Last updated: 2019-12-26

Tags: amazon-s3, amazon-web-services

I have an AWS account. I’m using S3 to store backups from different servers. Is there any information in the AWS console about how much disk space is in use in my S3 cloud?

#97 Best answer 1 of AWS S3: how do I see how much disk space is using (Score: 124)

Created: 2014-01-27 Last updated: 2014-11-13

Yippe - an update to AWS CLI allows you to recursively ls through buckets…

aws s3 ls s3://<bucketname> --recursive  | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'
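The same aggregation the awk one-liner performs can be done in a small script that parses the `aws s3 ls --recursive` output format (date, time, size in bytes, key). The function name is illustrative:

```python
def bucket_size_mb(ls_output):
    """Sum the size column from `aws s3 ls s3://bucket --recursive`
    output and return the total in megabytes, mirroring the
    awk pipeline above."""
    total = 0
    for line in ls_output.splitlines():
        fields = line.split()
        # Object rows have at least 4 fields with a numeric size in column 3;
        # skip blank/summary lines that don't match that shape.
        if len(fields) >= 4 and fields[2].isdigit():
            total += int(fields[2])
    return total / 1024 / 1024
```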

#97 Best answer 2 of AWS S3: how do I see how much disk space is using (Score: 118)

Created: 2015-08-27

I’m not sure when this was added to the AWSCLI given that the original question was 3 years ago, but the command line tool gives a nice summary by running:

aws s3 ls s3://mybucket --recursive --human-readable --summarize

See also original question in stackoverflow

#98: Access denied; you need (at least one of) the SUPER privilege(s) for this operation (Score: 121)

Created: 2017-05-17 Last updated: 2017-05-17

Tags: mysql, amazon-web-services, amazon-rds

So I'm trying to import an sql file into RDS (1G MEM, 1 CPU). The sql file is about 1.4G

mysql -h -u user -ppass --max-allowed-packet=33554432 db < db.sql

It got stuck at:

ERROR 1227 (42000) at line 374: Access denied; you need (at least one of) the SUPER privilege(s) for this operation

The actual sql content is:

/*!50003 CREATE*/ /*!50017 DEFINER=`another_user`@``*/ /*!50003 TRIGGER `change_log_BINS` BEFORE INSERT ON `change_log` FOR EACH ROW
IF (NEW.created_at IS NULL OR NEW.created_at = '00-00-00 00:00:00' OR NEW.created_at = '') THEN
        SET NEW.created_at = NOW();
END IF */;;

another_user does not exist in RDS, so I do:

GRANT ALL PRIVILEGES ON db.* TO another_user@'localhost';

Still no luck.

#98 Best answer 1 of Access denied; you need (at least one of) the SUPER privilege(s) for this operation (Score: 207)

Created: 2017-05-17 Last updated: 2018-02-27

Either remove the DEFINER=.. statement from your sqldump file, or replace the user values with CURRENT_USER.

The MySQL server provided by RDS does not allow a DEFINER syntax for another user (in my experience).

You can use a sed script to remove them from the file:

sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i oldfile.sql
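For anyone who prefers Python to sed, the equivalent substitution can be sketched like this (`strip_definers` is an illustrative helper, not part of any MySQL tooling):

```python
import re

# Matches a DEFINER=`user`@`host` clause preceded by whitespace,
# the same pattern the sed one-liner above removes.
DEFINER_RE = re.compile(r"\sDEFINER=`[^`]*`@`[^`]*`")

def strip_definers(sql):
    """Remove DEFINER clauses from a mysqldump string."""
    return DEFINER_RE.sub("", sql)
```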

#98 Best answer 2 of Access denied; you need (at least one of) the SUPER privilege(s) for this operation (Score: 79)

Created: 2019-12-23

If your dump file doesn’t have DEFINER, make sure these lines below are also removed if they’re there, or commented-out with -- :

At the start:

-- SET @@GLOBAL.GTID_PURGED=/*!80000 '+'*/ '';

At the end:


See also original question in stackoverflow

#99: RRSet of type CNAME with DNS name is not permitted at apex in zone (Score: 121)

Created: 2013-11-26

Tags: amazon-web-services, dns, amazon-route53

I own two domains and am managing both in Route53. One hosts my site, and I’d like to direct traffic from the other (at its zone apex) to it. I tried to set up a CNAME record on the apex pointing to my site, but I got the error message:

RRSet of type CNAME with DNS name is not permitted at apex in zone

Why doesn’t this work, and what can I do instead?

#99 Best answer 1 of RRSet of type CNAME with DNS name is not permitted at apex in zone (Score: 102)

Created: 2013-11-27 Last updated: 2016-04-18

As per RFC1912 section 2.4:

 A CNAME record is not allowed to coexist with any other data.  In
 other words, if suzy.podunk.xx is an alias for sue.podunk.xx, you
 can't also have an MX record for suzy.podunk.xx, or an A record, or
 even a TXT record.  Especially do not try to combine CNAMEs and NS
 records like this!:

           podunk.xx.      IN      NS      ns1
                           IN      NS      ns2
                           IN      CNAME   mary
           mary            IN      A       1.2.3.4

The RFC makes perfect sense, as the nameserver wouldn’t know whether it needs to follow the CNAME or answer with the actual record the CNAME overlaps with. Your domain is a zone, therefore it implicitly has an SOA record for the name. You can’t have both an SOA record and a CNAME with the same name.

However, given that SOA records are generally used only for zone maintenance, these situations where you want to provide a CNAME at the zone’s apex are quite common. Even though the RFC prohibits it, many engineers would like a behaviour such as: “follow the CNAME unless the query explicitly asks for the SOA record”. That’s why Route 53 provides alias records. These are a Route 53 specific feature which offer the exact functionality you require. Have a look at

#99 Best answer 2 of RRSet of type CNAME with DNS name is not permitted at apex in zone (Score: 65)

Created: 2015-11-20

  1. Create an S3 Bucket named after the domain you want to redirect from (the bucket name must be the same as that domain in order for this to work!)
  2. In the S3 Bucket go to Properties > Static Website Hosting, select Redirect all requests to another host name and enter the target host name in the text box.
  3. Back in Route 53, in your Hosted Zone for the source domain, click Create Record Set. Select A - IPv4 address for type. Click Yes for Alias. Click the text box for Alias Target; your bucket should be listed under -- S3 Website Endpoints --. Save the record. Wait a few minutes and you should have a redirect set up from the source domain to the target.

You can use this same method to redirect a naked domain to a subdomain (like www), or the reverse: if the www host has to be a CNAME, redirect the naked domain to it with this same method; if the naked domain is an A record, you can use this technique to redirect www to the naked domain.

NOTE: this method will forward with the full path, i.e. the request path on the source domain is preserved on the target domain.
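For the Route 53 step, the console's alias record corresponds to a change batch along these lines (a sketch: the domain name is a placeholder, and the S3 website endpoint and its hosted-zone ID depend on the bucket's region, here us-east-1):

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3AQBSTGFYJSTF",
        "DNSName": "s3-website-us-east-1.amazonaws.com.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```

This would be passed to aws route53 change-resource-record-sets via --change-batch.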

See also original question in stackoverflow

#100: How to change User Status FORCE_CHANGE_PASSWORD? (Score: 120)

Created: 2016-10-27 Last updated: 2019-01-28

Tags: amazon-web-services, aws-cli, amazon-cognito

Using AWS Cognito, I want to create dummy users for testing purposes.

I then use the AWS Console to create such a user, but the user has its status set to FORCE_CHANGE_PASSWORD. With that status, this user cannot be authenticated.

Is there a way to change this status?

UPDATE: Same behavior when creating the user from the CLI.

#100 Best answer 1 of How to change User Status FORCE_CHANGE_PASSWORD? (Score: 157)

Created: 2017-07-22 Last updated: 2019-09-10

I know it’s been a while but thought this might help other people who come across this post.

You can use the AWS CLI to change the user's password, however it's a multi-step process:

Step 1: Get a session token for the desired user:

aws cognito-idp admin-initiate-auth --user-pool-id %USER POOL ID% --client-id %APP CLIENT ID% --auth-flow ADMIN_NO_SRP_AUTH --auth-parameters USERNAME=%USERS USERNAME%,PASSWORD=%USERS CURRENT PASSWORD%

If this returns an error about Unable to verify secret hash for client, create another app client without a secret and use that client ID.

Step 2: If step 1 is successful, it will respond with the challenge NEW_PASSWORD_REQUIRED, the other challenge parameters, and the user's session key. Then, you can run the second command to issue the challenge response:

aws cognito-idp admin-respond-to-auth-challenge --user-pool-id %USER POOL ID% --client-id %CLIENT ID% --challenge-name NEW_PASSWORD_REQUIRED --challenge-responses NEW_PASSWORD=%DESIRED PASSWORD%,USERNAME=%USERS USERNAME% --session %SESSION KEY FROM PREVIOUS COMMAND with ""%

If you get an error like Invalid attributes given, XXX is missing, pass the missing attributes using the format userAttributes.$FIELD_NAME=$VALUE

The above command should return a valid Authentication Result and appropriate Tokens.

Important: For this to work, the Cognito User Pool MUST have an App client configured with ADMIN_NO_SRP_AUTH functionality (Step 5 in this doc).

#100 Best answer 2 of How to change User Status FORCE_CHANGE_PASSWORD? (Score: 139)

Created: 2019-07-09 Last updated: 2021-01-28

This has finally been added to the AWS CLI.

You can change a user’s password and update status using:

aws cognito-idp admin-set-user-password \
  --user-pool-id <your-user-pool-id> \
  --username <username> \
  --password <password> \
  --permanent

Before using this, you may need to update your AWS CLI using:

pip3 install awscli --upgrade

See also original question in stackoverflow

  1. This page uses the API to get the relevant data from the stackoverflow community.
  2. Content license on this page is CC BY-SA 3.0.
  3. score = up votes - down votes.