Python code to remove HTML tags from a string

I have a text like this:

text = """<div>
<h1>Title</h1>
<p>A long text........ </p>
<a href=""> a link </a>
</div>"""

Using pure Python, with no external module, I want to get this:

>>> print remove_tags(text)
Title A long text..... a link

I know I can do it using lxml.html.fromstring(text).text_content(), but I need to achieve the same in pure Python, using only built-ins or the standard library for 2.6+.

How can I do that?


Note that this isn't perfect, since if you had something like, say, <a title=">"> it would break. However, it's about the closest you'd get in non-library Python without a really complex function:

import re


TAG_RE = re.compile(r'<[^>]+>')


def remove_tags(text):
    return TAG_RE.sub('', text)
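
A quick sanity check (a sketch; the second input is the pathological case mentioned above):

>>> remove_tags('<p>Some <b>bold</b> text</p>')
'Some bold text'
>>> remove_tags('<a title=">">a link</a>')
'">a link'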

However, as lvc mentions, xml.etree is available in the Python standard library, so you could probably just adapt it to serve like your existing lxml version:

import xml.etree.ElementTree

def remove_tags(text):
    return ''.join(xml.etree.ElementTree.fromstring(text).itertext())
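
One caveat worth knowing (a hedged note, not from the original answer): xml.etree requires well-formed markup, so unlike the regex it fails loudly on broken HTML:

>>> remove_tags('<p>ok</p>')
'ok'
>>> remove_tags('<p>unclosed')  # not well-formed XML: raises xml.etree.ElementTree.ParseError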

Python has several XML modules built in. The simplest one, for the case where you already have a string with the full HTML, is xml.etree, which works (somewhat) similarly to the lxml example you mention:

import xml.etree.ElementTree

def remove_tags(text):
    return ''.join(xml.etree.ElementTree.fromstring(text).itertext())
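
For the question's sample this gives (a minimal check, assuming `text` as defined above; itertext() keeps the whitespace between elements):

>>> ''.join(xml.etree.ElementTree.fromstring(text).itertext())
'\nTitle\nA long text........ \n a link \n'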

Using a regex

Using a regex, you can clean everything inside <>:

import re
# as per recommendation from @freylis, compile once only
CLEANR = re.compile('<.*?>')


def cleanhtml(raw_html):
    cleantext = re.sub(CLEANR, '', raw_html)
    return cleantext
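
For example (a quick sketch):

>>> cleanhtml('<p>Some <b>bold</b> text</p>')
'Some bold text'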

Some HTML texts can also contain entities that are not enclosed in angle brackets, such as '&nbsp;'. If that is the case, then you might want to write the regex as

CLEANR = re.compile('<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});')
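
Note that this deletes the entities rather than decoding them (a quick check, assuming the pattern above):

>>> re.sub(CLEANR, '', '<p>one&nbsp;two &#160;three</p>')
'onetwo three'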


Using BeautifulSoup

You could also use the BeautifulSoup additional package to pull out all the raw text.

You will need to explicitly set a parser when calling BeautifulSoup. I recommend "lxml", as mentioned in alternative answers (it is much more robust than the default one, html.parser, which is available without an additional install).

from bs4 import BeautifulSoup
cleantext = BeautifulSoup(raw_html, "lxml").text
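
If you'd rather not install lxml, the bundled parser also works (a sketch; bs4 itself is still an external package):

from bs4 import BeautifulSoup
cleantext = BeautifulSoup(raw_html, "html.parser").text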

But this still relies on external libraries, so if avoiding them is a requirement, I recommend the first solution.

EDIT: To use lxml you need to pip install lxml.

There's a simple way to do this in any C-like language. The style is not Pythonic but works with pure Python:

def remove_html_markup(s):
    tag = False
    quote = False
    out = ""

    for c in s:
        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif (c == '"' or c == "'") and tag:
            quote = not quote
        elif not tag:
            out = out + c

    return out
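
Because the quote state is tracked, this survives the <a title=">"> case that defeats the simple regex (a quick check):

>>> remove_html_markup('<a title=">">a link</a>')
'a link'
>>> remove_html_markup('<b>bold</b> text')
'bold text'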

The idea is based on a simple finite-state machine and is explained in detail here: http://youtu.be/2tu9LTDujbw

You can see it working here: http://youtu.be/HPkNPcYed9M?t=35s

PS: If you're interested in the class (about smart debugging with Python), here's a link: https://www.udacity.com/course/software-debugging--cs259. It's free!

temp = ''   # global accumulator: reset it before each fresh call
s = ' '     # separator inserted between text fragments


def remove_strings(text):
    global temp

    if text == '':
        return temp

    start = text.find('<')
    end = text.find('>')

    # No tags left at all: keep the remainder and stop.
    if start == -1 and end == -1:
        temp = temp + text
        return temp

    # Drop everything up to and including the first '>'.
    newstring = text[end + 1:]
    fresh_start = newstring.find('<')

    # No further tag: keep whatever text remains and stop.
    if fresh_start == -1:
        if newstring:
            temp += s + newstring
        return temp

    # Keep the text between this tag and the next one, then recurse.
    if newstring[:fresh_start] != '':
        temp += s + newstring[:fresh_start]

    remove_strings(newstring[fresh_start:])
    return temp
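
A quick run (remember that temp is a global accumulator, so reset it between calls; note the extra spaces the separator introduces):

>>> temp = ''
>>> remove_strings('<p>Hello <b>world</b></p>')
' Hello  world'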