HTTP Error 403: Forbidden

I am trying to automate downloading historical stock data using Python. The URL I am trying to open responds with a CSV file, but I am unable to open it with urllib2. I have tried changing the user agent as suggested in a few earlier questions, and I even tried accepting response cookies, with no luck. Can you please help?

Note: The same method works for Yahoo Finance.

Code:

import urllib2, cookielib

site = "http://www.nseindia.com/live_market/dynaContent/live_watch/get_quote/getHistoricalData.jsp?symbol=JPASSOCIAT&fromDate=1-JAN-2012&toDate=1-AUG-2012&datePeriod=unselected&hiddDwnld=true"
hdr = {'User-Agent': 'Mozilla/5.0'}

req = urllib2.Request(site, headers=hdr)
page = urllib2.urlopen(req)
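For reference, the cookie-accepting attempt mentioned above looked roughly like this (a sketch using cookielib's HTTPCookieProcessor; it failed with the same 403):

import urllib2, cookielib

site = "http://www.nseindia.com/live_market/dynaContent/live_watch/get_quote/getHistoricalData.jsp?symbol=JPASSOCIAT&fromDate=1-JAN-2012&toDate=1-AUG-2012&datePeriod=unselected&hiddDwnld=true"
hdr = {'User-Agent': 'Mozilla/5.0'}

# A CookieJar collects cookies from responses and resends them on later requests
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

req = urllib2.Request(site, headers=hdr)
page = opener.open(req)  # still raises HTTP Error 403 here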

Error:

File "C:\Python27\lib\urllib2.py", line 527, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden

Thanks for your assistance


By adding a few more headers I was able to get the data:

import urllib2, cookielib

site = "http://www.nseindia.com/live_market/dynaContent/live_watch/get_quote/getHistoricalData.jsp?symbol=JPASSOCIAT&fromDate=1-JAN-2012&toDate=1-AUG-2012&datePeriod=unselected&hiddDwnld=true"
hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
       'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
       'Accept-Encoding': 'none',
       'Accept-Language': 'en-US,en;q=0.8',
       'Connection': 'keep-alive'}

req = urllib2.Request(site, headers=hdr)

try:
    page = urllib2.urlopen(req)
except urllib2.HTTPError, e:
    print e.fp.read()

content = page.read()
print content

Actually, it works with just this one additional header:

'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
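A minimal sketch of that reduced version (same URL as in the question; only the User-Agent and Accept headers are sent):

import urllib2

site = "http://www.nseindia.com/live_market/dynaContent/live_watch/get_quote/getHistoricalData.jsp?symbol=JPASSOCIAT&fromDate=1-JAN-2012&toDate=1-AUG-2012&datePeriod=unselected&hiddDwnld=true"
hdr = {'User-Agent': 'Mozilla/5.0',
       'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'}

req = urllib2.Request(site, headers=hdr)
print urllib2.urlopen(req).read()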

This will work in Python 3:

import urllib.request

user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'

url = "http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers"
headers = {'User-Agent': user_agent}

request = urllib.request.Request(url, None, headers)  # the assembled request
response = urllib.request.urlopen(request)
data = response.read()  # the data you need
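Note that response.read() returns bytes in Python 3. If you need text, here is a small sketch of decoding it, taking the charset from the response headers and falling back to UTF-8 (the fallback is an assumption) when none is sent:

import urllib.request

url = "http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers"
request = urllib.request.Request(url, None, {'User-Agent': 'Mozilla/5.0'})

with urllib.request.urlopen(request) as response:
    charset = response.headers.get_content_charset() or 'utf-8'  # fallback charset is an assumption
    text = response.read().decode(charset)

print(text[:200])  # first 200 characters of the decoded page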

The NSE site has changed, and the older scripts are at best semi-optimal for the current site. This snippet can gather day-wise details of a security. The details include symbol, security type, previous close, open price, high price, low price, average price, traded quantity, turnover, number of trades, deliverable quantity, and the ratio of delivered to traded quantity in percent. These are conveniently presented as a list of dictionaries.

Python 3.X version with Requests and BeautifulSoup:

from requests import get
from csv import DictReader
from bs4 import BeautifulSoup as Soup
from datetime import date
from io import StringIO

SECURITY_NAME = "3MINDIA"      # change this to get a quote for another stock
START_DATE = date(2017, 1, 1)  # start date of the stock quote data
END_DATE = date(2017, 9, 14)   # end date of the stock quote data

BASE_URL = "https://www.nseindia.com/products/dynaContent/common/productsSymbolMapping.jsp?symbol={security}&segmentLink=3&symbolCount=1&series=ALL&dateRange=+&fromDate={start_date}&toDate={end_date}&dataType=PRICEVOLUMEDELIVERABLE"


def getquote(symbol, start, end):
    start = start.strftime("%-d-%-m-%Y")
    end = end.strftime("%-d-%-m-%Y")

    hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
           'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
           'Referer': 'https://cssspritegenerator.com',
           'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
           'Accept-Encoding': 'none',
           'Accept-Language': 'en-US,en;q=0.8',
           'Connection': 'keep-alive'}

    url = BASE_URL.format(security=symbol, start_date=start, end_date=end)
    d = get(url, headers=hdr)
    soup = Soup(d.content, 'html.parser')
    # The quote data is embedded in a hidden div as ':'-separated CSV rows
    payload = soup.find('div', {'id': 'csvContentDiv'}).text.replace(':', '\n')
    csv = DictReader(StringIO(payload))
    for row in csv:
        print({k: v.strip() for k, v in row.items()})


if __name__ == '__main__':
    getquote(SECURITY_NAME, START_DATE, END_DATE)
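One caveat: the %-d and %-m strftime codes (day and month without zero padding) are platform-specific glibc extensions and do not work on Windows. A portable sketch that avoids strftime flags entirely (format_nse_date is just an illustrative helper name):

from datetime import date

def format_nse_date(d):
    # Build e.g. "1-1-2017" without platform-specific strftime flags
    return "{}-{}-{}".format(d.day, d.month, d.year)

print(format_nse_date(date(2017, 1, 1)))  # 1-1-2017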

In addition, here is a relatively modular, ready-to-use snippet:

import urllib.request

bank_pdf_list = ["https://www.hdfcbank.com/content/bbp/repositories/723fb80a-2dde-42a3-9793-7ae1be57c87f/?path=/Personal/Home/content/rates.pdf",
                 "https://www.yesbank.in/pdf/forexcardratesenglish_pdf",
                 "https://www.sbi.co.in/documents/16012/1400784/FOREX_CARD_RATES.pdf"]


def get_pdf(url):
    user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'
    headers = {'User-Agent': user_agent}

    request = urllib.request.Request(url, None, headers)  # the assembled request
    response = urllib.request.urlopen(request)
    data = response.read()

    # Derive a file name like "hdfcbank_FOREX_CARD_RATES.pdf" from the URL
    name = url.split("www.")[-1].split("//")[-1].split(".")[0] + "_FOREX_CARD_RATES.pdf"
    f = open(name, 'wb')
    f.write(data)
    f.close()


for bank_url in bank_pdf_list:
    try:
        get_pdf(bank_url)
    except:
        pass
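The bare except: pass in the loop above hides every failure; a variant of the loop that reports which URL failed and why might look like this (a sketch reusing the same get_pdf and bank_pdf_list):

import urllib.error

for bank_url in bank_pdf_list:
    try:
        get_pdf(bank_url)
    except urllib.error.HTTPError as e:
        print("HTTP error", e.code, "for", bank_url)
    except urllib.error.URLError as e:
        print("network error for", bank_url, "-", e.reason)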

This kind of error usually occurs when the server you are requesting cannot tell where the request is coming from; servers do this to avoid unwanted visits. You can bypass the error by defining a header and passing it along with urllib.request.

Here is the code:

import urllib.request

# define the header
header = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '
                        'AppleWebKit/537.11 (KHTML, like Gecko) '
                        'Chrome/23.0.1271.64 Safari/537.11',
          'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
          'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
          'Accept-Encoding': 'none',
          'Accept-Language': 'en-US,en;q=0.8',
          'Connection': 'keep-alive'}

# the URL you are requesting
req = urllib.request.Request(url=your_url, headers=header)
page = urllib.request.urlopen(req).read()

One thing worth trying is updating the Python version. A few months ago, one of my scraping scripts stopped working with a 403 on Windows 10. No user_agent helped, and I was about to give up on the script. Today I tried the same script on Ubuntu with Python 3.8.5 (64-bit) and it ran without errors. The Windows Python installation was a bit old: 3.6.2 (32-bit). After upgrading Python on Windows 10 to 3.9.5 (64-bit), I no longer see the 403. If you try this, don't forget to run 'pip freeze > requirements.txt' to export the package entries first. I forgot, of course. This post is a reminder for me too when the 403 comes back again in the future.