The Complete Guide to Proxy Error Codes (Including Solutions)

HTTP status codes: Why you are receiving them and how to combat them

In daily online data management and web scraping, encountering proxy error messages is common. These error codes are crucial indicators of data delivery issues with proxies and play a vital role in diagnosing and resolving problems.

This article outlines the different types of HTTP proxy error codes, what they mean, and the conditions in which they commonly arise.

Proxy Error Codes

Proxy errors can happen for various reasons, including server downtime or misconfigured settings. Understanding what each code means helps you pinpoint what's wrong, making it easier to fix.

In the following sections, you’ll learn about various proxy codes and how to troubleshoot and resolve them effectively.

3xx Codes: Redirects

An HTTP 3xx status code is used for redirections, indicating that the user-agent needs to take additional steps to complete the request. Typically, this code implies that you’re being directed to a new URL due to editorial changes or website restructuring.

When it comes to web scraping, you must deal with these redirects to maintain accurate and effective data collection.

301: Moved Permanently

If you receive a 301 error, the resource you’re looking for has been permanently moved. This often happens when a website is undergoing updates like redesigning or reorganizing its content.

If you encounter this error, your scraper needs to update its URL references to the new location provided in the response's Location header:

import requests

# Fetch the old URL without following redirects so the 301 response is visible
response_data = requests.get('http://example.com/old-page', allow_redirects=False)
if response_data.status_code == 301:
    new_redirect_url = response_data.headers['Location']
    response_data = requests.get(new_redirect_url)

In this code, you instruct your scraper to read the redirect location from the Location header and then fetch the content at its new URL:

301 moved permanently

302: Found (Temporary Redirect)

The 302 Found status code signals that the resource you're trying to access has been moved temporarily to another URL. In this case, the change is not permanent, and the original URL is expected to become valid again at some point. This often happens when a website is undergoing maintenance.

When web scraping, it's important to configure your script to handle redirects automatically, ensuring that your stored URLs remain unchanged. While many HTTP libraries, like Python's requests, handle 302 redirects automatically, you should verify that this behavior aligns with your scraping goals, particularly when preserving the original request method is necessary.
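For example, here's a minimal sketch (using a placeholder URL) that lets requests follow the redirect and then inspects response.history to see which hops occurred:

import requests

# requests follows 302 redirects automatically; response.history records each hop
response = requests.get('http://example.com/temporary-page')

if response.history:
    for hop in response.history:
        print(f"Redirected with {hop.status_code} from {hop.url}")
    print(f"Final URL: {response.url}")

If preserving the original method matters, you can pass allow_redirects=False and handle the Location header yourself, as in the 301 example above.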

302 found error

304: Not Modified

If the content you're trying to access hasn't been updated since your last validated request, you'll receive a 304 response. This helps increase the efficiency of web scraping, preventing the download of data you already have.

If your scraper accesses a page that has already been downloaded, the request headers, such as If-Modified-Since or If-None-Match, can be used to verify that the content is not altered:

import requests

# If-Modified-Since uses the HTTP-date format: weekday, day month year time GMT
headers = {'If-Modified-Since': 'Tue, 29 Oct 2024 19:43:31 GMT'}

# Making a GET request to the server with the headers
response = requests.get('http://example.com/page', headers=headers)

# Checking if the status code returned is 304
if response.status_code == 304:
    print("Content has not changed.")

In this code, you check whether the response code is 304. If it is, the script prints the message "Content has not changed." and you don't have to do anything further:

304 not modified error

307: Temporary Redirect

A temporary redirect, status code 307, tells you that the resource you're trying to reach is temporarily located at another URL. In this case, the same HTTP method and body of the original request should be reused with the redirected URL. This differs from 302, where clients may switch the method (typically to GET) when following the redirect:

import requests

# Disable automatic redirects so the 307 response can be handled explicitly
response = requests.post('http://example.com/submit-form', data={'key': 'value'}, allow_redirects=False)
if response.status_code == 307:
    response = requests.post(response.headers['Location'], data={'key': 'value'})

When redirecting a web crawler, you must preserve the original request's method and body. This helps ensure reliable and effective data collection while respecting the target website's structure and server system. The preceding code checks whether the response status is 307; if it is, it resends the same data in the body to the new Location specified in the response header:

307 temporary redirect

4xx Codes: Client-Side Errors

Client-side errors are indicated by the 4xx range of HTTP status codes and usually result from a problem with the request made by the client. Resolving them often requires correcting request parameters or improving how your scraper authenticates.

400: Bad Request

The 400 Bad Request error indicates that the server was not able to understand the request. In web scraping, this usually happens when the request header is not written correctly or is missing parts.

For instance, if you unintentionally send data in the wrong format (eg sending text instead of JSON), the server can't process the request and rejects it. To solve this issue, validate your requests carefully and make sure their syntax matches what the server expects.

In web scraping, there are several steps you can take to verify that your requests meet the server's expectations. First, make sure you understand the structure of the target website; the browser developer tools can help you inspect elements and find out how the data is formatted. Additionally, implement testing and error handling, and make sure you use proper headers in your requests.
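As an illustration, here's a minimal sketch (the endpoint and payload are hypothetical) of sending a properly formatted JSON body with matching headers:

import requests

# json= serializes the payload and sets the Content-Type header to application/json
payload = {'query': 'laptops', 'page': 1}
headers = {'User-Agent': 'my-scraper/1.0'}

response = requests.post('http://example.com/api/search', json=payload, headers=headers)
if response.status_code == 400:
    print(f"Bad request: {response.text}")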

400 bad request

401: Unauthorized

The 401 Unauthorized error indicates a failed or missing authentication that's required to access a resource. In web scraping, this commonly happens when trying to reach authenticated content. For instance, accessing subscription-based data with incorrect credentials triggers this error. To avoid it, make sure you include the correct authentication headers in your requests.
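For instance, here's a minimal sketch (with a placeholder token and URL) that attaches an Authorization header to the request:

import requests

# Placeholder bearer token; substitute whatever credentials the target service expects
headers = {'Authorization': 'Bearer YOUR_API_TOKEN'}

response = requests.get('http://example.com/protected-data', headers=headers)
if response.status_code == 401:
    print("Authentication failed; check your credentials.")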

401 unauthorized

403: Forbidden

The 403 Forbidden error means that the server understood the request but refuses to authorize it. This is a common occurrence when web scraping a website that has strict access controls, and you'll often encounter it when you try to reach a part of a website you're not allowed to access. For example, if you're authenticated as one user and try to access another user's posts, the request is denied because you don't have permission:

403 forbidden

If you receive a 403 error, verify your authorization by checking your keys or credentials. If you don't have any valid credentials and can't obtain authorization, it's recommended that you refrain from scraping that content so you conform to the website's access policy.

404: Not Found

When the server cannot find the resource being requested, it returns the 404 Not Found error.
This often happens when URLs used in web scraping are altered or broken, such as when a product page is deleted or its URL is modified without redirection or updates.

To solve this issue, verify the URLs in your scraping script and update them as needed to align with the current website structure:

404 not found

It’s always recommended that you handle any 404 errors in your code.

If you're using Python and the server can't find the resource, you can handle the error explicitly so that your script doesn't stop when it happens:

import requests

# List of URLs to fetch
urls = [
    "http://example.com/nonexistentpage.html",  # This should result in 404
    "http://example.com"  # This should succeed
]

for url in urls:
    try:
        response = requests.get(url)
        if response.status_code == 404:
            print(f"Error 404: URL not found: {url}")
            # Continue to the next URL in the list
            continue
        print(f"Successfully retrieved data from {url}")
        print(response.text[:200])  # Print the first 200 characters of the response content
    except requests.exceptions.RequestException as e:
        print(f"An error occurred while fetching {url}: {e}")
        continue  # Continue to the next URL even if a request exception occurs

print("Finished processing all URLs.")

In the preceding code, you iterate over the list of URLs and try to fetch the content of each page. When a request fails with a 404, the code continues to the next URL in the list.

407: Proxy Authentication Required

The 407 Proxy Authentication Required error is triggered when the client needs to authenticate with the proxy server before the request can proceed. This error commonly occurs during web scraping when the proxy server requires authentication. It differs from a 401 error, where authentication is required by the target website itself rather than by the proxy.

For instance, if you encounter this error when using a private proxy to access data from a target website, it means you haven't authenticated with the proxy. To solve this issue, add valid proxy authentication details to your requests.
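Here's a minimal sketch (host, port, and credentials are placeholders) of passing proxy authentication details with Python's requests library:

import requests

# Placeholder credentials embedded in the proxy URL
proxies = {
    'http': 'http://username:password@proxy_address:port',
    'https': 'http://username:password@proxy_address:port',
}

response = requests.get('http://example.com/data', proxies=proxies)
if response.status_code == 407:
    print("Proxy authentication failed; check your proxy credentials.")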

407 Proxy Authentication Required

408: Request Timeout

The 408 Request Timeout status code indicates that the server timed out while waiting for the client to finish sending its request. This error can occur when your scraper sends requests too slowly or when the server is overloaded, especially during peak hours.

Optimizing request timing and implementing retries with exponential backoff minimizes this problem by giving the server enough time to respond.
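For instance, here's a minimal sketch (placeholder URL, arbitrary timeout values) that gives the server a generous client-side timeout and retries once if the request times out:

import requests

try:
    # Wait up to 30 seconds for the server before giving up
    response = requests.get('http://example.com/slow-page', timeout=30)
except requests.exceptions.Timeout:
    # One simple retry with a longer timeout; see "Retry the Request" below for a fuller setup
    response = requests.get('http://example.com/slow-page', timeout=60)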

408 request timeout

429: Too Many Requests

The 429 Too Many Requests error is raised when a user has sent too many requests within a short time frame. This is a common occurrence when you exceed a website's rate limits while web scraping. For instance, if you query a website too frequently, its rate limiting kicks in and you're blocked from scraping data.

Make sure you respect the target website's API rate limits and apply scraping best practices, such as delaying requests, to prevent this issue and maintain access to the resources you need.
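For example, here's a minimal sketch (placeholder URLs, arbitrary delays) that spaces out requests and, when a 429 arrives, waits before trying again:

import time
import requests

urls = ['http://example.com/page1', 'http://example.com/page2']

for url in urls:
    response = requests.get(url)
    if response.status_code == 429:
        # Assumes Retry-After holds a number of seconds (it can also be an HTTP date)
        wait = int(response.headers.get('Retry-After', 60))
        time.sleep(wait)
        response = requests.get(url)
    time.sleep(1)  # small pause between requests to stay under the rate limit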

429 too many requests

5xx Codes: Server-Side Issues

Server-side issues are indicated by the 5xx series of HTTP status codes and refer to the server's inability to fulfill requests due to internal problems. Understanding these errors is important in web scraping because they often demand a different approach than client-side errors.

500: Internal Server Error

The 500 Internal Server Error is a generic response indicating that the server encountered an unexpected condition that prevented it from completing the request. This issue doesn't come from any mistake made by the client; rather, the problem lies within the server itself.

For instance, when scraping data, this error can occur while trying to access a page on the server. To work around it, you can try again later or schedule your web scraping jobs for off-peak hours, when the server is under less load:

500 internal server error

501: Not Implemented

The 501 Not Implemented error occurs when the server either doesn't recognize the request method or doesn't support it. Because you typically test your crawler's methods beforehand, this error rarely happens in web scraping, but it can occur if you're using atypical HTTP methods.

For example, if your scraper is configured to use methods that aren't supported by the server (eg PUT or DELETE) and these methods are necessary to your web scraping functions, you'll receive a 501 error. To prevent this, make sure your scraping scripts only use HTTP methods that the target server actually supports.
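One way to check in advance, where the server supports it, is to send an OPTIONS request and inspect the Allow header:

import requests

# Ask the server which methods it allows for this resource
response = requests.options('http://example.com/api/resource')
print(response.headers.get('Allow', 'Allow header not provided'))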

501 not implemented

502: Bad Gateway

The 502 Bad Gateway error indicates that the server, while acting as a gateway or proxy, received an invalid response from the upstream server it contacted to fulfill the request. This points to a communication problem between the intermediary servers.

When web scraping, the 502 error can occur when the proxy server you're using is unable to get a valid response from the target server. To fix this, verify that your proxy server is healthy, correctly configured, and able to communicate with the target servers. You can monitor the CPU, memory, and network bandwidth usage on your proxy server, and you can check the proxy server's error logs, which can indicate whether there are problems handling your requests:

502 bad gateway

503: Service Unavailable

The 503 Service Unavailable error indicates that the server is busy and can’t serve the request. This can occur due to server maintenance or overload.

When web scraping, you'll often encounter this error when trying to access sites during maintenance or peak hours. Unlike the 500 error, which signals an internal fault, the 503 error indicates that the server is operational but temporarily unable to handle requests.

To avoid this error, implement a retry strategy that uses exponential backoff, with the backoff interval growing after each retried request. That way, your requests don't add to the server's load while it recovers.
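A minimal sketch of that strategy (placeholder URL, arbitrary starting delay) might look like this:

import time
import requests

delay = 1  # initial backoff in seconds
for attempt in range(5):
    response = requests.get('http://example.com/data')
    if response.status_code != 503:
        break
    # Server is temporarily unavailable; wait, then double the delay for the next retry
    time.sleep(delay)
    delay *= 2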

503 service unavailable

504: Gateway Timeout

The 504 Gateway Timeout error occurs when the server acting as a gateway or a proxy fails to get a response from the upstream server in time. This error is a timeout issue and a variant of the 502 error.

When it comes to web scraping, this error often happens when the target server's reply to your proxy is too slow (ie it took more than 120 seconds). To solve this, you can increase your scraper's timeout settings to allow for longer waits or verify the health and responsiveness of your proxy server:

504 gateway time-out

505: HTTP Version Not Supported

The 505 HTTP Version Not Supported error occurs when the server does not recognize the HTTP protocol version specified in the request. This is uncommon in web scraping but may happen if the target server is set to support only certain versions of the HTTP protocol. For instance, if your scraping requests arrive with a version that is either too recent or too old, the server will not accept them.

To avoid this error, make sure your client uses a protocol version the target server accepts, most likely HTTP/1.1 or HTTP/2, which are the most widely supported.
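Python's requests speaks HTTP/1.1 only, so if you need to negotiate the protocol version explicitly, you could use a client like httpx, which supports HTTP/2 when installed with its http2 extra. A rough sketch:

import httpx

# Negotiate HTTP/2 where the server supports it, falling back to HTTP/1.1
with httpx.Client(http2=True) as client:
    response = client.get('https://example.com')
    print(response.http_version)  # e.g. "HTTP/2" or "HTTP/1.1"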

505 HTTP version not supported

Quick Tips to Avoid Common Proxy Errors

Proxy errors can be frustrating, but many of them can be avoided by implementing a few specific strategies in your web scraper.

Retry the Request

Many proxy problems are caused by short-term issues, such as brief network interruptions or small server glitches. Retrying the request may succeed if the issue has resolved itself in the meantime.

Here’s how you can implement retries in your scraping script using Python’s requests library and urllib3 retry logic:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def requests_retry_session(
    retries=3,
    backoff_factor=0.3,
    status_forcelist=(500, 502, 503, 504),
    session=None,
):
    session = session or requests.Session()
    retry = Retry(
        total=retries,
        read=retries,
        connect=retries,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist,
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session

s = requests_retry_session()
try:
    response = s.get('http://example.com', proxies={"http": "http://proxy_address:port"})
    print(response.text)
except requests.exceptions.RequestException as e:
    # Raised when the connection fails or the retries are exhausted
    print('Request failed:', e)

This code sets up a retry mechanism with a backoff factor: if a request fails or returns one of the listed status codes (500, 502, 503, or 504), it's retried up to three times, waiting a little longer before each new attempt.

Verify Proxy Settings

Incorrect proxy settings can lead to numerous errors. For instance, these errors can occur if you enter an incorrect proxy port, IP address, or authentication information. Make sure you verify that your settings are correct according to your network needs so that requests can reach their destination.

Consult Documentation and Support

If you run into an issue when you’re utilizing a proxy service or library, always refer to the official documentation as your first line of defense. If you can’t find what you’re looking for from the documentation, check to see if the service or the library has a Slack or Discord channel that you can join. Lastly, you can always open a ticket on the support channel or send an email with the details and the questions you want answers to.

Conclusion

This article taught you all about various proxy error codes and their meanings, helping you identify each error and troubleshoot issues while web scraping. You also learned about some helpful tips to prevent common errors from occurring in the first place.

If you’re struggling with proxy errors, consider utilizing Bright Data’s proxy services. Our proxies can help reduce the occurrence of errors and result in a more efficient data scraping process. Whether you’re an expert or a novice, the Bright Data suite of proxy tools can help you strengthen your web scraping abilities.