AWS CLI S3 A client error (403) occurred when calling the HeadObject operation: Forbidden

I'm trying to set up an Amazon Linux AMI (ami-f0091d91) and have a script that runs a copy command to copy from an S3 bucket.

aws --debug s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

This script works perfectly on my local machine but fails with the following error on the Amazon Image:

2016-03-22 01:07:47,110 - MainThread - botocore.auth - DEBUG - StringToSign: HEAD
Tue, 22 Mar 2016 01:07:47 GMT
x-amz-security-token:AQoDYXdzEPr//////////wEa4ANtcDKVDItVq8Z5OKms8wpQ3MS4dxLtxVq6Om1aWDhLmZhL2zdqiasNBV4nQtVqwyPsRVyxl1Urq1BBCnZzDdl4blSklm6dvu+3efjwjhudk7AKaCEHWlTd/VR3cksSNMFTcI9aIUUwzGW8lD9y8MVpKzDkpxzNB7ZJbr9HQNu8uF/st0f45+ABLm8X4FsBPCl2I3wKqvwV/s2VioP/tJf7RGQK3FC079oxw3mOid5sEi28o0Qp4h/Vy9xEHQ28YQNHXOBafHi0vt7vZpOtOfCJBzXvKbk4zRXbLMamnWVe3V0dArncbNEgL1aAi1ooSQ8+Xps8ufFnqDp7HsquAj50p459XnPedv90uFFd6YnwiVkng9nNTAF+2Jo73+eKTt955Us25Chxvk72nAQsAZlt6NpfR+fF/Qs7jjMGSF6ucjkKbm0x5aCqCw6YknsoE1Rtn8Qz9tFxTmUzyCTNd7uRaxbswm7oHOdsM/Q69otjzqSIztlwgUh2M53LzgChQYx5RjYlrjcyAolRguJjpSq3LwZ5NEacm/W17bDOdaZL3y1977rSJrCxb7lmnHCOER5W0tsF9+XUGW1LMX69EWgFYdn5QNqFk6mcJsZWrR9dkehaQwjLPcv/29QcM+b5u/0goazCtwU=
/aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm
2016-03-22 01:07:47,111 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [HEAD]>
2016-03-22 01:07:47,111 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - INFO - Starting new HTTPS connection (1): aws-codedeploy-us-west-2.s3.amazonaws.com
2016-03-22 01:07:47,151 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "HEAD /latest/codedeploy-agent.noarch.rpm HTTP/1.1" 403 0
2016-03-22 01:07:47,151 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': '0mRvGge9ugu+KKyDmROm4jcTa1hAnA5Ax8vUlkKZXoJ//HVJAKxbpFHvOGaqiECa4sgon2F1kXw=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '6204CD88E880E5DD', 'date': 'Tue, 22 Mar 2016 01:07:46 GMT', 'content-type': 'application/xml'}
2016-03-22 01:07:47,152 - MainThread - botocore.parsers - DEBUG - Response body:
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f421075bcd0>
2016-03-22 01:07:47,152 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x7f4211085758>
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <awscli.errorhandler.ErrorHandler object at 0x7f421100cc90>
2016-03-22 01:07:47,152 - MainThread - awscli.errorhandler - DEBUG - HTTP Response Code: 403
2016-03-22 01:07:47,152 - MainThread - awscli.customizations.s3.s3handler - DEBUG - Exception caught during task execution: A client error (403) occurred when calling the HeadObject operation: Forbidden
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 100, in call
    total_files, total_parts = self._enqueue_tasks(files)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 178, in _enqueue_tasks
    for filename in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/fileinfobuilder.py", line 31, in call
    for file_base in files:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 142, in call
    for src_path, extra_information in file_iterator:
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 314, in list_objects
    yield self._list_single_object(s3_path)
  File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 343, in _list_single_object
    response = self._client.head_object(**params)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 228, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 488, in _make_api_call
    model=operation_model, context=request_context
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 226, in emit
    return self._emit(event_name, kwargs)
  File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 209, in _emit
    response = handler(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/awscli/errorhandler.py", line 70, in __call__
    http_status_code=http_response.status_code)
ClientError: A client error (403) occurred when calling the HeadObject operation: Forbidden
2016-03-22 01:07:47,153 - Thread-1 - awscli.customizations.s3.executor - DEBUG - Received print task: PrintTask(message='A client error (403) occurred when calling the HeadObject operation: Forbidden', error=True, total_parts=None, warning=None)
A client error (403) occurred when calling the HeadObject operation: Forbidden

However, when I run it with the --no-sign-request option, it works perfectly:

aws --debug --no-sign-request s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .

Can someone please explain what is going on?

It looks like you're (maybe implicitly) using the instance's IAM role to make the request (that would explain x-amz-security-token -- temporary credentials from the role) and your role denies access to S3... or the bucket (not yours, I take it?) doesn't allow access with credentials -- though if it's public, that's strange. As always, make sure your system clock is correct, since with HEAD the error body is always suppressed. – Michael - sqlbot Mar 22, 2016 at 1:55

Hi, thank you for the quick response. The bucket that I'm trying to access is, indeed, public. Not sure why it is complaining about a signed request then. It fails with a similar error on my own bucket as well without the --no-sign-request option. – MojoJojo Mar 22, 2016 at 2:01

You do have an IAM role on this instance, right? It sounds as if that role may be restricting things, perhaps in unexpected ways. – Michael - sqlbot Mar 22, 2016 at 2:14

In my case the problem was the Resource statement in the user access policy.

First we had "Resource": "arn:aws:s3:::BUCKET_NAME", but in order to have access to objects within a bucket you need a /* at the end: "Resource": "arn:aws:s3:::BUCKET_NAME/*"

From the AWS documentation:

Bucket access permissions specify which users are allowed access to the objects in a bucket and which types of access they have. Object access permissions specify which users are allowed access to the object and which types of access they have. For example, one user might have only read permission, while another might have read and write permissions.
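
A minimal sketch of the corrected statement (BUCKET_NAME is a placeholder; adjust the action list to what you actually need):

{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::BUCKET_NAME/*"
}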

Note that when you edit a policy, it can take a few minutes for the changes to be effective. – Romain Sep 30, 2022 at 15:28

Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD operation requires the ListBucket permission. I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
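
To reproduce the failing HEAD call directly from the CLI (using the bucket and key from the question), you can run:

aws s3api head-object \
    --bucket aws-codedeploy-us-west-2 \
    --key latest/codedeploy-agent.noarch.rpm

A 403 here (rather than a 404) usually means the credentials lack s3:GetObject or s3:ListBucket on the bucket, as the AWS documentation quoted further down explains.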

yes, this is the algorithm... youtu.be/YQsK4MtsELU?t=808 . you need to make sure the resource policy does not conflict with the IAM policy – overexchange Jul 31, 2019 at 16:34

This should be the accepted answer - see also this AWS support article. For some reason, this permission is required for aws s3 cp of an object from a bucket. – RichVel Mar 10, 2021 at 12:02

I figured it out. I had an error in my CloudFormation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the CodeDeploy buckets above were in different regions (not us-west-2). It seems like the access policies on the buckets (owned by Amazon) only allow access from the region they belong in. When I fixed the error in my template (it was a wrong parameter map), the error disappeared.
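
You can check which region a bucket actually lives in from the CLI (the bucket name is a placeholder, and the call itself requires permission on the bucket):

aws s3api get-bucket-location --bucket mybucketname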

You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error. – LeslieK Jun 17, 2017 at 17:07

I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. in this answer. Can't say it's the same fix here, but maybe that's a hint? @LeslieK – init_js Nov 27, 2018 at 6:44

@LeslieK AWS S3 bucket names are global, yes, but buckets themselves are always regional. To get the bucket region see docs.aws.amazon.com/cli/latest/reference/s3api/… Only in Google Cloud buckets can be multi-regional (but still confined to one multi-region like US or EU). – Dzmitry Lazerka Feb 19, 2019 at 0:56

Check the object owner if you copied the file from another AWS account.

In my case, I copied the file from another AWS account without an ACL, so the file's owner was the other AWS account, which means the file still belonged to the origin account.
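
To see who actually owns an object, a quick check (bucket and key are placeholders):

aws s3api get-object-acl --bucket bucket2 --key key

The Owner field in the output shows which account the object belongs to.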

To fix it, copy or sync the S3 files with an ACL, for example:

aws s3 cp --acl bucket-owner-full-control s3://bucket1/key s3://bucket2/key
out of date because it has been replaced by --acl as per the help page (accepts values of private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control and log-delivery-write). Thanks for putting me on the right track – Tom Mar 13, 2020 at 12:34

I disagree that this is "stupid." By using 403 for both not found and not authorized, they make it impossible to determine if a given object exists or not unless you have permission to list it. – Alex Grounds Apr 28, 2022 at 14:27

I meant being stupid on my part, I used the wrong file name and spent almost a day trying to figure out what's wrong with the code. – kumarahul Apr 29, 2022 at 4:10

Would it not be better to return 403 no matter what if the user is unauthorized, but 404 if the user is authorized but the file does not exist? – Sander Sep 15, 2022 at 19:25

The minimal permissions that worked for me when running HeadObject on any object in mybucket:

"Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" "Resource": [ "arn:aws:s3:::mybucket/*", "arn:aws:s3:::mybucket" this grants all S3 permissions. It is advisable to be restrictive and grant only the level of permissions required for the role, or user, or task for that matter. – dataviews Nov 10, 2021 at 2:59

It's a terrible practice to give away access to the entire s3 (all actions, all buckets), just to unblock yourself.

The 403 error above is usually due to a missing "Read" permission on the files. The action for reading a file in S3 is s3:GetObject.

"Effect": "Allow", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::mybucketname/path/*", "arn:aws:s3:::mybucketname"

Solution 1: A new Policy in IAM (tell the Role/User about S3)

You can create a Policy (e.g. MY_S3_READER) with the following, and attach it to the user or role that's doing the job. (e.g. EC2 Instance's IAM role)

Here is the exact JSON for your Policy: (just replace mybucketname and path)

"Version": "2012-10-17", "Statement": [ "Sid": "VisualEditor0", "Effect": "Allow", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::mybucketname/path/*", "arn:aws:s3:::mybucketname"

Create this Policy. Then, go to IAM > Roles > Attach Policy and attach it.

Solution 2: Edit the Bucket Policy in S3 (tell S3 about the User/Role)

Go to your bucket in S3, then add the following example: (replace mybucketname and myip)

"Version": "2012-10-17", "Id": "SourceIP", "Statement": [ "Sid": "ValidIpAllowRead", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::mybucketname", "arn:aws:s3:::mybucketname/*" "Condition": { "IpAddress": { "aws:SourceIp": "myip/32"

If you want to grant this read permission by User or Role (instead of by IP address), remove the Condition part and change "Principal" to "Principal": { "AWS": "<IAM User/Role's ARN>" }.

Additional Notes

  • Check the permissions via aws s3 cp or aws s3 ls manually for faster debugging.

  • It sometimes takes up to 30 seconds for the permission change to be effective. Be patient.

  • Note that for doing "ls" (e.g. aws s3 ls s3://mybucket/mypath) you need s3:ListBucket access.

  • IMPORTANT Accessing files by their HTTP(S) URL via cURL or similar tools (e.g. axios in AJAX calls) requires you to either grant IP access, supply the proper headers manually, or get a signed URL from the SDK first (see the sketch below).
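
For the signed-URL route, a quick sketch with the CLI (the bucket and path are placeholders; the URL expires after the given number of seconds):

aws s3 presign s3://mybucketname/path/somefile --expires-in 3600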

    I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden for my AWS CLI copy command aws s3 cp s3://bucket/file file. I was using an IAM role which had full S3 access using an Inline Policy.

    "Version": "2012-10-17", "Statement": [ "Effect": "Allow", "Action": "s3:*", "Resource": "*"

    If I give it the full S3 access from the Managed Policies instead, then the command works. I think this must be a bug from Amazon, because the policies in both cases were exactly the same.

    Btw, I was trying to use goofys to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but with Resource: "example" set (instead of *), and that caused the inability to create files there (similar issue). I just changed it to the managed policy of AmazonS3FullAccess – Shadi Jul 2, 2017 at 13:31

    It only allows access to all S3 buckets that the account has, and may be denied by bucket policies anyhow. However you should always deny public access to buckets unless you really know what you are doing ! ;-D – MikeW Oct 26, 2020 at 12:12

    I was getting this error message due to my EC2 instance's clock being out of sync.

    I was able to fix it on Ubuntu using this:

    sudo ntpdate ntp.ubuntu.com
    sudo apt-get install ntp
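
    A quick way to check for clock skew (a sketch, not from the original answer) is to compare your system clock against the Date header S3 returns:

    date -u
    curl -sI https://s3.amazonaws.com | grep -i '^date'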
    

    I had a Lambda function doing the same thing, copying from bucket to bucket.

    The Lambda had permission to use the source bucket as a trigger.

    Configuration tab

    But it also needs permissions to OPERATE with buckets.

    Permissions tab

    If S3 is not there, then you need to edit the Role used by the Lambda and add it (see the s3FullAccess policy).
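
    If you prefer the CLI, attaching the managed policy to the Lambda's execution role would look roughly like this (the role name here is hypothetical):

    aws iam attach-role-policy \
        --role-name my-lambda-execution-role \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess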

    I got this error with a mis-configured test event. I changed the source bucket's ARN but forgot to edit the default S3 bucket name.

    I.e. make sure that in the bucket section of the test event both the ARN and bucket name are set correctly:

    "bucket": {
      "arn": "arn:aws:s3:::your_bucket_name",
      "name": "your_bucket_name",
      "ownerIdentity": {
        "principalId": "EXAMPLE"
    

    I was getting a 403 on HEAD requests while the GET requests were working. It turned out to be the CORS config in the S3 permissions. I had to add HEAD:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    </CORSConfiguration>
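
    Note: the current S3 console takes the CORS configuration as JSON rather than XML; an equivalent rule would look roughly like this:

    [
        {
            "AllowedOrigins": ["*"],
            "AllowedMethods": ["HEAD", "GET", "PUT", "POST"],
            "AllowedHeaders": ["*"]
        }
    ]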
    

    I also experienced that behaviour. In my case I've found that if the IAM policy doesn't have access to read the object (s3:GetObject), the same error is raised.

    I agree with you that the error raised by the AWS console & CLI is not well explained and may cause confusion.

    If running in an environment where the credential/role is not clear, be sure you included the --profile=yourprofile flag so the cli knows what credentials to use. For example:

    aws s3 cp s3://yourbucket destination.txt --profile=yourprofile

    will succeed, while the following yields the HeadObject error:

    aws s3 cp s3://yourbucket destination.txt

    The profile settings reference entries in your config and credentials files.
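
    For reference, a profile is defined in your ~/.aws/credentials and ~/.aws/config files, roughly like this (all values are placeholders):

    # ~/.aws/credentials
    [yourprofile]
    aws_access_key_id = AKIAEXAMPLEKEY
    aws_secret_access_key = examplesecretkey

    # ~/.aws/config
    [profile yourprofile]
    region = us-west-2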

    When it comes to cross-account S3 access

    An IAM user policy will not override the policy defined for the bucket in the foreign account.

    s3:GetObject must be allowed for accountA/user as well as on the accountB/bucket
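
    For illustration, a bucket policy on the accountB bucket that lets an accountA user read objects might look roughly like this (the account ID, user, and bucket names are placeholders):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": { "AWS": "arn:aws:iam::111122223333:user/accountA-user" },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::accountB-bucket/*"
            }
        ]
    }

    The accountA user still needs an IAM policy on their side allowing s3:GetObject on that bucket's objects.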

    I've already assigned AmazonS3FullAccess to the user I am using to communicate with AWS but I still get this error: An error occurred (403) when calling the HeadBucket operation: Forbidden when I try to do this: aws s3api head-bucket --bucket "asda". Have any idea? – Kasir Barati May 11 at 15:52

    I had the same issue. I was 99% sure that it was due to lack of permissions, so I changed the policy of the IAM role to full access, even though it is not good practice. Using:

    "Version": "2012-10-17", "Statement": [ "Effect": "Allow", "Action": "s3:*", "Resource": "*"

    as stated in above comments from others.

    However, this did not work because I had just created my AWS account on Sunday evening and I wanted to use it immediately. I was in a US East region, which was 12 hours behind my time. It took me all night trying to solve the issue, without success.

    I came back on Tuesday afternoon on the same notebook and everything worked like a charm.

    It is not true that the change of permission takes a few minutes. It can take up to 48h.

    I would simply use the following AWS IAM policy, keeping in mind AWS best practices and the principle of least privilege:

    "Version": "2012-10-17", "Statement": [ "Sid": "Sid1", "Effect": "Allow", "Action": "s3:ListBucket", "Resource": [ "arn:aws:s3:::[BUCKET_NAME]" "Sid": "Sid2", "Effect": "Allow", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::[BUCKET_NAME]/*"

    I have also experienced this scenario.

    I have a bucket with a policy that uses AWS4-HMAC-SHA256. It turned out my awscli was not updated to the latest version; mine was aws-cli/1.10.8. Upgrading it solved the problem.

    pip install awscli --upgrade --user

    https://docs.aws.amazon.com/cli/latest/userguide/installing.html

    When I faced this issue, I discovered that my problem was that the files in the 'Source Account' were copied there by a 'third party' and the Owner was not the Source Account.

    I had to recopy the objects to themselves in the same bucket with the --metadata-directive REPLACE flag.
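
    A sketch of that re-copy (bucket and key are placeholders; adding an ACL, as in the earlier answer, may also be needed):

    aws s3 cp s3://mybucket/mykey s3://mybucket/mykey \
        --metadata-directive REPLACE \
        --acl bucket-owner-full-control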

    Detailed explanation in Amazon Documentation

    Permissions

    You need the s3:GetObject permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.

    If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 ("no such key") error.
    If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 ("access denied") error.
    

    The following operation is related to HeadObject:

    GetObject
    

    Source: https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html

    Maybe this will help someone. In my case, I was running a CodeBuild job and the CodeBuild execution role had full access to S3. I was trying to list keys in an S3 bucket via the CLI. However, I was using the CLI from within the aws-cli Docker image and passing the credentials via environment variables per this article:

    https://docs.aws.amazon.com/codebuild/latest/userguide/troubleshooting.html#troubleshooting-versions

    No matter what I tried, any calls using aws s3api ... would fail with that same 403 error. The solution for me was to convert to a normal s3 CLI call (as opposed to an s3api CLI call):

    aws s3 ls s3://bucket/key --recursive

    This change worked. The call using aws s3api did not.

    The problem is in the policy permissions given to the role you are using. If you are using AWS Glue, you need to create a policy with these permissions: https://docs.aws.amazon.com/glue/latest/dg/create-sagemaker-notebook-policy.html

    This will solve "(403) occurred when calling the HeadObject operation: Forbidden"
