How do I convert this XPath expression to BeautifulSoup?
In answer to a previous question, several people suggested that I use BeautifulSoup for my project. I've been struggling with their documentation and I just cannot parse it. Can somebody point me to the section where I should be able to translate this expression to a BeautifulSoup expression?
hxs.select('//td[@class="altRow"][2]/a/@href').re('/.a\w+')
The expression above is from Scrapy. I am trying to apply the regex re('\.a\w+') to td elements with class altRow to get the links from there.
I would also appreciate pointers to any other tutorials or documentation; I could not find any.
Thanks for your help.
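For reference, a rough BeautifulSoup equivalent of that XPath might look like the sketch below. It assumes the modern bs4 package (in the BeautifulSoup 3 era of this question, findAll played the role of find_all), and the sample HTML is made up for illustration:

```python
import re
from bs4 import BeautifulSoup  # assumption: modern bs4, not BeautifulSoup 3

# made-up snippet mimicking the page structure described in the question
html = """
<table><tr>
  <td class="altRow"><a href='/cabel'>Abel, Christian</a></td>
  <td class="altRow"><a href='/careers/asia'>Asia</a></td>
  <td class="altRow"><a href='/diversity'>Diversity</a></td>
</tr></table>
"""

soup = BeautifulSoup(html, "html.parser")

# roughly: //td[@class="altRow"]/a/@href filtered by the regex /.a\w+
hrefs = [a["href"]
         for td in soup.find_all("td", class_="altRow")
         for a in td.find_all("a", href=re.compile(r"/.a\w+"))]

print(hrefs)  # ['/cabel', '/careers/asia']
```

Note that BeautifulSoup applies the compiled pattern with search semantics, so any href containing a matching substring is returned.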
EDIT: I am looking at this page:
>>> soup.head.title
<title>White & Case LLP - Lawyers</title>
>>> soup.find(href=re.compile("/cabel"))
>>> soup.find(href=re.compile("/diversity"))
<a href="/diversity/committee">Committee</a>
However, if you look at the page source, "/cabel" is there:
<td class="altRow" valign="middle" width="34%">
<a href='/cabel'>Abel, Christian</a>
For some reason, search results are not visible to BeautifulSoup, but they are visible to XPath, because hxs.select('//td[@class="altRow"][2]/a/@href').re('/.a\w+') catches "/cabel".
EDIT: cobbal: It is still not working. But when I search for this:
>>> soup.findAll(href=re.compile(r'/.a\w+'))
[<link href="/FCWSite/Include/styles/main.css" rel="stylesheet" type="text/css" />, <link rel="shortcut icon" type="image/ico" href="/FCWSite/Include/main_favicon.ico" />, <a href="/careers/northamerica">North America</a>, <a href="/careers/middleeastafrica">Middle East Africa</a>, <a href="/careers/europe">Europe</a>, <a href="/careers/latinamerica">Latin America</a>, <a href="/careers/asia">Asia</a>, <a href="/diversity/manager">Diversity Director</a>]
>>>
it returns all the links whose second character is "a", but not the lawyer-name links. So for some reason those links (such as "/cabel") are not visible to BeautifulSoup. I don't understand why.
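As an aside, the matches above are consistent with what r'/.a\w+' actually asks for: a '/', then any single character, then an 'a', then one or more word characters, anywhere in the string. A quick check (illustrative values taken from the output above):

```python
import re

patt = re.compile(r'/.a\w+')

# '/careers/asia' matches at the first slash: '/c' + 'a' + 'reers'
print(bool(patt.search('/careers/asia')))       # True
# '/diversity/manager' matches at the second slash: '/m' + 'a' + 'nager'
print(bool(patt.search('/diversity/manager')))  # True
# '/cabel' also matches ('/c' + 'a' + 'bel'), so the regex itself
# is not the reason the lawyer links are missing
print(bool(patt.search('/cabel')))              # True
# '/diversity' alone does not: the character after '/d' is 'i', not 'a'
print(bool(patt.search('/diversity')))          # False
```

So the missing results point at the parsing step rather than the pattern.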
-
I know BeautifulSoup is the canonical HTML parsing module, but sometimes you just want to scrape out some substrings from some HTML, and pyparsing has some useful methods to do this. Using this code:
from pyparsing import makeHTMLTags, withAttribute, SkipTo
import urllib

# get the HTML from your URL
url = "http://www.whitecase.com/Attorneys/List.aspx?LastName=&FirstName="
page = urllib.urlopen(url)
html = page.read()
page.close()

# define opening and closing tag expressions for <td> and <a> tags
# (makeHTMLTags also comprehends tag variations, including attributes,
# upper/lower case, etc.)
tdStart, tdEnd = makeHTMLTags("td")
aStart, aEnd = makeHTMLTags("a")

# only interested in tdStarts if they have "class=altRow" attribute
tdStart.setParseAction(withAttribute(("class", "altRow")))

# compose total matching pattern (add trailing tdStart to filter out
# extraneous <td> matches)
patt = tdStart + aStart("a") + SkipTo(aEnd)("text") + aEnd + tdEnd + tdStart

# scan input HTML source for matching refs, and print out the text and
# href values
for ref, s, e in patt.scanString(html):
    print ref.text, ref.a.href
I extracted 914 references from your page, from Abel to Zupikova.
Abel, Christian /cabel
Acevedo, Linda Jeannine /jacevedo
Acuña, Jennifer /jacuna
Adeyemi, Ike /igbadegesin
Adler, Avraham /aadler
...
Zhu, Jie /jzhu
Žídek, Aleš /azidek
Ziółek, Agnieszka /aziolek
Zitter, Adam /azitter
Zupikova, Jana /jzupikova