@Ludovic LAURENT A 403 is a Forbidden error, which means the web server you are trying to scrape is rejecting your request.
Some sites block IP address ranges belonging to VPN providers or cloud service providers, especially when traffic from those ranges does not arrive the way a normal visitor's would, for example requests that are missing a User-Agent header.
Can you please see if adding a user agent header to mimic a browser request resolves the issue?
import requests

# Mimic a regular browser by sending a typical User-Agent header
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
response = requests.get("https://www.proclinic.es/tienda/020-195-unitwin-roth-022-5-5-s-i-gnch-3.html", headers=headers)
print(response)          # e.g. <Response [200]> if the block has cleared, <Response [403]> if not
print(response.content)  # raw HTML of the page
If adding the header works, your target website was only allowing browser-level requests.
If the header does not work, see if you are able to scrape another website. Be sure to check that website's robots.txt file to confirm it does not disallow scraping. If another website works, then it's likely the original site is blocking cloud provider IP addresses.
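As a side note, here is a minimal sketch of how you could check robots.txt programmatically with Python's built-in urllib.robotparser. The product URL is the one from your snippet, and the robots.txt location is assumed to sit at the site root, which is the standard convention.

from urllib.robotparser import RobotFileParser

# robots.txt is assumed to live at the site root (standard convention)
parser = RobotFileParser()
parser.set_url("https://www.proclinic.es/robots.txt")
parser.read()

# True means the rules do not disallow this path for a generic crawler ("*")
allowed = parser.can_fetch("*", "https://www.proclinic.es/tienda/020-195-unitwin-roth-022-5-5-s-i-gnch-3.html")
print(allowed)

Keep in mind that robots.txt is advisory only; a site can still block your requests at the network level even when the path is allowed.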
I hope this helps you with your project.