How do I split a sentence and store each word in a list? For example, given a string like "these are words", how do I get a list like ["these", "are", "words"]?
To split the string text on any run of consecutive whitespace:

words = text.split()
To split the string text on a custom delimiter such as ",":

words = text.split(",")
The words variable will be a list and contain the words from text, split on the delimiter.
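A small follow-up sketch (not part of the answer above): when splitting on an explicit delimiter, any whitespace around the delimiter stays attached to each piece, so a common idiom is to strip each piece afterwards:

```python
# Splitting on "," keeps surrounding spaces in each piece,
# so strip each piece after splitting.
text = "these, are , words"
words = [piece.strip() for piece in text.split(",")]
print(words)  # ['these', 'are', 'words']
```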
Given a string sentence, this stores each word in a list called words:

words = sentence.split()
Use str.split():

Return a list of the words in the string, using sep as the delimiter string ... If sep is not specified or is None, a different splitting algorithm is applied: runs of consecutive whitespace are regarded as a single separator, and the result will contain no empty strings at the start or end if the string has leading or trailing whitespace.
>>> line = "a sentence with a few words"
>>> line.split()
['a', 'sentence', 'with', 'a', 'few', 'words']
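The difference between the no-argument form and an explicit " " separator shows up with leading, trailing, or repeated whitespace; a quick illustration of the documented behavior above:

```python
text = "  a  sentence  "
# No argument: runs of whitespace collapse, and the ends yield no empty strings.
print(text.split())     # ['a', 'sentence']
# Explicit " ": every single space is a separator, so empty strings appear.
print(text.split(" "))  # ['', '', 'a', '', 'sentence', '', '']
```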
I want my Python function to split a sentence (the input) and store each word in a list.
The str().split() method does this; it takes a string and splits it into a list:
>>> the_string = "this is a sentence"
>>> words = the_string.split(" ")
>>> print(words)
['this', 'is', 'a', 'sentence']
>>> type(words)
<type 'list'> # or <class 'list'> in Python 3
Depending on what you plan to do with your sentence-as-a-list, you may want to look at the Natural Language Toolkit (NLTK). It deals heavily with text processing and evaluation. You can also use it to solve your problem:
import nltk
words = nltk.word_tokenize(raw_sentence)
This has the added benefit of splitting out punctuation.
Example:
>>> import nltk
>>> s = "The fox's foot grazed the sleeping dog, waking it."
>>> words = nltk.word_tokenize(s)
>>> words
['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping', 'dog', ',', 'waking', 'it', '.']
This way you can filter out any punctuation you don't want and use only the words.
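One way to do that filtering (a minimal sketch, reusing the token list from the example above so it runs without NLTK's tokenizer data): keep only tokens that contain at least one letter.

```python
# Tokens as produced by nltk.word_tokenize in the example above.
tokens = ['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping',
          'dog', ',', 'waking', 'it', '.']
# Drop pure-punctuation tokens; keep anything with at least one letter.
only_words = [t for t in tokens if any(c.isalpha() for c in t)]
print(only_words)
# ['The', 'fox', "'s", 'foot', 'grazed', 'the', 'sleeping', 'dog', 'waking', 'it']
```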
Please note that the other solutions using string.split() are better if you don't plan on doing any complex manipulation of the sentence.
(Edited)
How about this algorithm? Split the text on whitespace, then trim the punctuation. This carefully removes punctuation from the edges of words, without harming apostrophes inside words such as we're.
>>> text
"'Oh, you can't help that,' said the Cat: 'we're all mad here. I'm mad. You're mad.'"
>>> text.split()
["'Oh,", 'you', "can't", 'help', "that,'", 'said', 'the', 'Cat:', "'we're", 'all', 'mad', 'here.', "I'm", 'mad.', "You're", "mad.'"]
>>> import string
>>> [word.strip(string.punctuation) for word in text.split()]
['Oh', 'you', "can't", 'help', 'that', 'said', 'the', 'Cat', "we're", 'all', 'mad', 'here', "I'm", 'mad', "You're", 'mad']
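The same split-then-trim idea can be wrapped in a small helper (a sketch; the function name words_only is mine, not from the answer):

```python
import string

def words_only(text):
    # Split on whitespace, then trim punctuation from each word's edges;
    # apostrophes inside words (we're, can't) are left untouched.
    return [word.strip(string.punctuation) for word in text.split()]

print(words_only("'Oh, you can't help that,' said the Cat."))
# ['Oh', 'you', "can't", 'help', 'that', 'said', 'the', 'Cat']
```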
shlex has a .split() function. It differs from str.split() in that it does not preserve quotes and treats a quoted phrase as a single word:
>>> import shlex
>>> shlex.split("sudo echo 'foo && bar'")
['sudo', 'echo', 'foo && bar']
Note: it works well for Unix-like command-line strings. It does not work for natural-language processing.
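A quick side-by-side sketch of the difference on the same string: str.split() breaks inside the quotes, while shlex.split() keeps the quoted phrase together as one word.

```python
import shlex

cmd = "sudo echo 'foo && bar'"
# Plain whitespace split breaks the quoted phrase apart.
print(cmd.split())       # ['sudo', 'echo', "'foo", '&&', "bar'"]
# shlex honors shell-style quoting.
print(shlex.split(cmd))  # ['sudo', 'echo', 'foo && bar']
```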
If you want all the characters of a word/sentence in a list, do this:
print(list("word"))           # ['w', 'o', 'r', 'd']
print(list("some sentence"))  # ['s', 'o', 'm', 'e', ' ', 's', 'e', 'n', 't', 'e', 'n', 'c', 'e']
To split the words apart without harming apostrophes inside words, see input_1 and input_2 below (the first of which contains "Moore's law"):
import re

def split_into_words(line):
    word_regex_improved = r"(\w[\w']*\w|\w)"
    word_matcher = re.compile(word_regex_improved)
    return word_matcher.findall(line)

# Example 1
input_1 = "computational power (see Moore's law) and "
split_into_words(input_1)
# output: ['computational', 'power', 'see', "Moore's", 'law', 'and']

# Example 2
input_2 = """Oh, you can't help that,' said the Cat: 'we're all mad here. I'm mad. You're mad."""
split_into_words(input_2)
# output: ['Oh', 'you', "can't", 'help', 'that', 'said', 'the', 'Cat', "we're", 'all', 'mad', 'here', "I'm", 'mad', "You're", 'mad']