Extracting an attribute value with BeautifulSoup

I am trying to extract the content of a single "value" attribute of a specific "input" tag on a webpage. I am using the following code:

import urllib
f = urllib.urlopen("http://58.68.130.147")
s = f.read()
f.close()


from BeautifulSoup import BeautifulStoneSoup
soup = BeautifulStoneSoup(s)


inputTag = soup.findAll(attrs={"name" : "stainfo"})


output = inputTag['value']


print str(output)

I get a TypeError: list indices must be integers, not str

Even though, from the BeautifulSoup documentation, I understand that strings should not be a problem here... I am no specialist, though, and I may have misunderstood.

Any suggestion is greatly appreciated!


.find_all() returns a list of all found elements, so:

input_tag = soup.find_all(attrs={"name" : "stainfo"})

input_tag is a list (likely containing only one element). Depending on what you want exactly, you should either do:

output = input_tag[0]['value']

or use the .find() method, which returns only one (the first) found element:

input_tag = soup.find(attrs={"name": "stainfo"})
output = input_tag['value']
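A minimal, self-contained sketch of the difference (using an inline HTML string instead of the original URL, so it runs offline): .find() returns either a tag or None, so it is worth guarding before indexing.

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for the page in the question
html = '<form><input type="hidden" name="stainfo" value="abc123"></form>'
soup = BeautifulSoup(html, "html.parser")

# find() returns the first matching tag, or None when nothing matches
tag = soup.find("input", attrs={"name": "stainfo"})
value = tag["value"] if tag is not None else None
print(value)  # abc123
```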

I would actually suggest a time-saving way, assuming that you know which kind of tags have those attributes.

Suppose a tag xyz has an attribute named "staininfo":

full_tag = soup.findAll("xyz")

Note that full_tag is a list:

for each_tag in full_tag:
    staininfo_attrb_value = each_tag["staininfo"]
    print(staininfo_attrb_value)

Thus, you can get all the staininfo attribute values for all the xyz tags.
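A runnable sketch of the loop above, with hypothetical markup; using .get() instead of square-bracket indexing returns None rather than raising KeyError when a tag is missing the attribute.

```python
from bs4 import BeautifulSoup

# Hypothetical markup: three <xyz> tags, one of them missing the attribute
html = '<xyz staininfo="A"></xyz><xyz></xyz><xyz staininfo="B"></xyz>'
soup = BeautifulSoup(html, "html.parser")

values = []
for each_tag in soup.find_all("xyz"):
    # .get() returns None instead of raising KeyError when the attribute is absent
    value = each_tag.get("staininfo")
    if value is not None:
        values.append(value)

print(values)  # ['A', 'B']
```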

If you want to retrieve multiple attribute values from the source above, you can use findAll and a list comprehension to get everything you need:

import urllib
f = urllib.urlopen("http://58.68.130.147")
s = f.read()
f.close()


from BeautifulSoup import BeautifulStoneSoup
soup = BeautifulStoneSoup(s)


inputTags = soup.findAll(attrs={"name" : "stainfo"})
### You may be able to do findAll("input", attrs={"name" : "stainfo"})


output = [x["value"] for x in inputTags]


print(output)
### This will print a list of the values.

In Python 3.x, simply use get(attr_name) on the tag objects that you get via find_all:

from bs4 import BeautifulSoup

xmlData = None

with open('conf//test1.xml', 'r') as xmlFile:
    xmlData = xmlFile.read()

xmlSoup = BeautifulSoup(xmlData, 'html.parser')

repElemList = xmlSoup.find_all('repeatingelement')

for repElem in repElemList:
    print("Processing repElem...")
    repElemID = repElem.get('id')
    repElemName = repElem.get('name')

    print("Attribute id = %s" % repElemID)
    print("Attribute name = %s" % repElemName)

For an XML file conf//test1.xml that looks like:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<root>
<singleElement>
<subElementX>XYZ</subElementX>
</singleElement>
<repeatingElement id="11" name="Joe"/>
<repeatingElement id="12" name="Mary"/>
</root>

it prints:

Processing repElem...
Attribute id = 11
Attribute name = Joe
Processing repElem...
Attribute id = 12
Attribute name = Mary

You can also use this:

import requests
from bs4 import BeautifulSoup
import csv


url = "http://58.68.130.147/"
r = requests.get(url)
data = r.text


soup = BeautifulSoup(data, "html.parser")
get_details = soup.find_all("input", attrs={"name":"stainfo"})


for val in get_details:
    get_val = val["value"]
    print(get_val)

I am using this with BeautifulSoup 4.8.1 to get the value of the class attribute of certain elements:

from bs4 import BeautifulSoup


html = "<td class='val1'/><td col='1'/><td class='val2' />"


bsoup = BeautifulSoup(html, 'html.parser')


for td in bsoup.find_all('td'):
    if td.has_attr('class'):
        print(td['class'][0])

It is important to note that the attribute key retrieves a list even when the attribute has only a single value.
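A short illustration of that point: class is a multi-valued attribute in BeautifulSoup, so indexing it always yields a list, which you can join back into a string if needed.

```python
from bs4 import BeautifulSoup

html = "<td class='val1 highlight'></td>"
soup = BeautifulSoup(html, "html.parser")

td = soup.find("td")
classes = td["class"]  # class is multi-valued, so this is always a list
print(classes)             # ['val1', 'highlight']
print(" ".join(classes))   # val1 highlight
```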

In my case, for:

<input id="color" value="Blue"/>

the value can be fetched with the snippet below.

import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.abcd.com")
soup = BeautifulSoup(page.content, 'html.parser')
colorName = soup.find(id='color')
print(colorName['value'])

You can try the powerful new package requests_html:

from requests_html import HTMLSession
session = HTMLSession()


r = session.get("https://www.bbc.co.uk/news/technology-54448223")
date = r.html.find('time', first = True) # finding a "tag" called "time"
print(date)  # you will have: <Element 'time' datetime='2020-10-07T11:41:22.000Z'>
# To get the text inside the "datetime" attribute use:
print(date.attrs['datetime']) # you will get '2020-10-07T11:41:22.000Z'

You can try gazpacho:

Install it with pip install gazpacho

Get the HTML and make the Soup using:

from gazpacho import get, Soup


soup = Soup(get("http://ip.add.ress.here/"))  # get directly returns the html


inputs = soup.find('input', attrs={'name': 'stainfo'})  # returns a Soup, a list of Soups, or None


if inputs:
    if type(inputs) is list:
        for input in inputs:
            print(input.attrs.get('value'))
    else:
        print(inputs.attrs.get('value'))
else:
    print('No <input> tag found with the attribute name="stainfo"')

Below is an example of how to extract the href attributes of all a tags:

import requests as rq
from bs4 import BeautifulSoup as bs


url = "http://www.cde.ca.gov/ds/sp/ai/"
page = rq.get(url)
html = bs(page.text, 'lxml')


hrefs = html.find_all("a")
all_hrefs = []
for href in hrefs:
    # print(href.get("href"))
    links = href.get("href")
    all_hrefs.append(links)


print(all_hrefs)
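The loop above can be condensed into a list comprehension; a self-contained sketch with hypothetical inline markup, which also filters out a tags that have no href at all (.get() returns None for those):

```python
from bs4 import BeautifulSoup

# Hypothetical markup: two real links and one anchor without an href
html = '<a href="/ds/sp/ai/">Data</a><a name="anchor-only">No link</a><a href="/about/">About</a>'
soup = BeautifulSoup(html, "html.parser")

# .get("href") returns None for tags without the attribute, so filter those out
all_hrefs = [a.get("href") for a in soup.find_all("a") if a.get("href") is not None]
print(all_hrefs)  # ['/ds/sp/ai/', '/about/']
```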