import requests

url = 'http://worldagnetwork.com/'
result = requests.get(url)
print(result.content.decode())

Its output:

<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>

Please tell me what the problem is.
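
As an aside, the status code and server can be read off the response directly, which makes a failure like this easier to diagnose (a minimal sketch using the same URL):

import requests

result = requests.get('http://worldagnetwork.com/')
print(result.status_code)            # 403
print(result.headers.get('Server'))  # e.g. nginx
result.raise_for_status()            # raises requests.exceptions.HTTPError for 4xx/5xx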

Adding a User-Agent header works fine for my case: `my_response = requests.get(target_url, headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"})` – Rajesh Swarnkar Jan 12, 2022 at 12:48

Maybe you need a cookie or authorization credentials in the header: `headers = { "Content-Type": "application/json", "Authorization": f"Basic {creds_enc}", "Cookie": "abcdefgh" }` – Apurva Singh Jul 22, 2022 at 14:21
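
If credentials are required, the Basic Authorization header mentioned in the comment above can be built like this (a minimal sketch; the username, password, and cookie value are hypothetical placeholders):

import base64
import requests

username = 'user'      # hypothetical credentials
password = 'secret'
creds_enc = base64.b64encode(f'{username}:{password}'.encode()).decode()

headers = {
    'Content-Type': 'application/json',
    'Authorization': f'Basic {creds_enc}',
    'Cookie': 'abcdefgh',  # placeholder; copy the real cookie from the browser
}
result = requests.get('http://worldagnetwork.com/', headers=headers)
print(result.status_code)

Note that requests can also build the same header itself via requests.get(url, auth=(username, password)).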

It seems the page rejects GET requests that do not identify a User-Agent. I visited the page with a browser (Chrome) and copied the User-Agent header of the GET request (look in the Network tab of the developer tools):

import requests
url = 'http://worldagnetwork.com/'
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
result = requests.get(url, headers=headers)
print(result.content.decode())
# <!doctype html>
# <!--[if lt IE 7 ]><html class="no-js ie ie6" lang="en"> <![endif]-->
# <!--[if IE 7 ]><html class="no-js ie ie7" lang="en"> <![endif]-->
# <!--[if IE 8 ]><html class="no-js ie ie8" lang="en"> <![endif]-->
# <!--[if (gte IE 9)|!(IE)]><!--><html class="no-js" lang="en"> <!--<![endif]-->
# ...
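
If you are making more than one request, a requests.Session can carry the header for every call (a small sketch building on the answer above):

import requests

session = requests.Session()
# headers set on the session are sent with every subsequent request
session.headers.update({'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'})
result = session.get('http://worldagnetwork.com/')
print(result.status_code)  # expected 200 once the User-Agent is accepted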
You can also just execute `navigator.userAgent` in the Chrome developer console if you are too lazy to look in the Network tab :) – snrlx May 3, 2019 at 15:05

Saved my day, thank you! I almost started to deeply investigate related problems with SSL certificates, but it was just a crude anti-robot defense. – QtRoS Jul 3, 2020 at 12:48

Just to add to Alberto's answer:

If you still get a 403 Forbidden after adding a User-Agent, you may need to add more headers, such as Referer:

headers = {
    'User-Agent': '...',
    'Referer': 'https://...',
}
The headers can be found under Network > Headers > Request Headers in the Developer Tools. (Press F12 to toggle them.)
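
Putting it together, a request with both headers might look like this (a sketch; the Referer value is a hypothetical example, copy the real one from your browser's Network tab):

import requests

url = 'http://worldagnetwork.com/'
headers = {
    # example values; copy the real ones from the browser's Network tab
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36',
    'Referer': 'http://worldagnetwork.com/',  # hypothetical referer
}
result = requests.get(url, headers=headers)
print(result.status_code)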

Go to Network, refresh the page so there are requests, and select any HTTP request (most of them are); a new box opens which has headers. Scroll down that list and you'll find the request headers. – anishtain4 Nov 15, 2020 at 3:06

I tried copying the user-agent part to the header and it didn't work. I have heard that on some sites it will never work. – Farhang Amaji Jul 19, 2021 at 20:41

If you are the server's owner/admin and the accepted solution didn't work for you, try disabling CSRF protection (link to an SO answer).

I am using Spring (Java), so the setup requires you to create a SecurityConfig.java file containing:

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // disable CSRF protection so requests without a CSRF token are not rejected
        http.csrf().disable();
        // ...
    }
}