What does the Keras Tokenizer method actually do?

On occasion, circumstances require us to do the following:

from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=my_max)

Then, invariably, we chant this mantra:

tokenizer.fit_on_texts(text)
sequences = tokenizer.texts_to_sequences(text)

While I (more or less) understand what the total effect is, I can't figure out what each one does separately, regardless of how much research I do (including, obviously, the documentation). I don't think I've ever seen one without the other.

So what does each one do? Is there any circumstance where you would use one without the other? If not, why aren't they simply combined into something like:

sequences = tokenizer.fit_on_texts_to_sequences(text)

Apologies if I'm missing something obvious, but I'm pretty new at this.


From the source code:

  1. fit_on_texts updates the internal vocabulary based on a list of texts. This method builds the vocabulary index from word frequency. So if you give it something like "The cat sat on the mat.", it creates a dictionary such that word_index["the"] = 1; word_index["cat"] = 2. It is a word -> index dictionary, so every word gets a unique integer value, and 0 is reserved for padding. A lower integer therefore means a more frequent word (often the first few are stop words, because they appear a lot).
  2. texts_to_sequences transforms each text in the list into a sequence of integers. It basically takes each word in the text and replaces it with its corresponding integer value from the word_index dictionary. Nothing more, nothing less, and certainly no magic involved (see the sketch after this list).
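
A minimal sketch of the two steps in isolation (the example sentence and variable names are mine, for illustration):

from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
# Step 1: fit_on_texts only builds the vocabulary; it returns nothing.
tokenizer.fit_on_texts(["The cat sat on the mat"])
print(tokenizer.word_index)
# {'the': 1, 'cat': 2, 'sat': 3, 'on': 4, 'mat': 5}  <- 'the' is most frequent, so it gets 1

# Step 2: texts_to_sequences only looks words up in that dictionary.
print(tokenizer.texts_to_sequences(["The cat sat on the mat"]))
# [[1, 2, 3, 4, 1, 5]]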

Why not combine them? Because you almost always fit once and convert to sequences many times. You fit on your training corpus once, then use that exact same word_index dictionary at train / eval / test / prediction time to convert actual text into sequences and feed them to the network. So it makes sense to keep those methods separate.
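
In code, the typical shape of that workflow looks something like this (a sketch; the corpora are hypothetical placeholders):

from keras.preprocessing.text import Tokenizer

# Hypothetical corpora, purely for illustration
train_corpus = ["the cat sat on the mat", "the dog ate my homework"]
test_corpus  = ["the cat ate the homework"]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_corpus)                  # fit exactly once, on training data

X_train = tokenizer.texts_to_sequences(train_corpus)  # transform as many times as needed,
X_test  = tokenizer.texts_to_sequences(test_corpus)   # always with the same word_index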

Let's see what this line of code does.

tokenizer.fit_on_texts(text)

For example, consider the sentence "The earth is an awesome place live".

tokenizer.fit_on_texts(["The earth is an awesome place live"]) builds a vocabulary in which 1 -> "the", 3 -> "is", 6 -> "place", and so on, so the fitted sentence would encode as [[1,2,3,4,5,6,7]]. (The sentence is wrapped in a list; fitting on a bare string tokenizes individual characters instead, as Example 1 below shows.)

sequences = tokenizer.texts_to_sequences(["The earth is an great place live"])

returns [[1, 2, 3, 4, 6, 7]].

You see what happened here. The word "great" was not seen during fitting, so the tokenizer does not recognize it and simply drops it. This means fit_on_texts can be used independently on training data, and the fitted vocabulary index can then be used to represent a completely new set of word sequences. These are two different processes, hence the two lines of code.
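
If you would rather keep a placeholder for unseen words than drop them silently, the Tokenizer's oov_token argument does that (a sketch reusing the sentences above; note that the OOV token claims index 1, shifting everything else):

from keras.preprocessing.text import Tokenizer

t = Tokenizer(oov_token='<OOV>')   # reserve index 1 for unknown words
t.fit_on_texts(["The earth is an awesome place live"])
print(t.texts_to_sequences(["The earth is an great place live"]))
# [[2, 3, 4, 5, 1, 7, 8]]  <- "great" maps to 1 (<OOV>) instead of vanishing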

A few more examples on top of the answers above should help with understanding:

Example 1:

from keras.preprocessing.text import Tokenizer

t = Tokenizer()
fit_text = "The earth is an awesome place live"   # a bare string, so the Tokenizer fits on characters
t.fit_on_texts(fit_text)
test_text = "The earth is an great place live"
sequences = t.texts_to_sequences(test_text)

print("sequences : ", sequences, '\n')
print("word_index : ", t.word_index)
# [] marks: 1. a space between the words in test_text    2. a letter that never occurred in fit_text


Output:


sequences :  [[3], [4], [1], [], [1], [2], [8], [3], [4], [], [5], [6], [], [2], [9], [], [], [8], [1], [2], [3], [], [13], [7], [2], [14], [1], [], [7], [5], [15], [1]]


word_index :  {'e': 1, 'a': 2, 't': 3, 'h': 4, 'i': 5, 's': 6, 'l': 7, 'r': 8, 'n': 9, 'w': 10, 'o': 11, 'm': 12, 'p': 13, 'c': 14, 'v': 15}

Example 2:

from keras.preprocessing.text import Tokenizer

t = Tokenizer()
fit_text = ["The earth is an awesome place live"]
t.fit_on_texts(fit_text)

# fit_on_texts fits on whole sentences when a list of sentences is passed,
# i.e. fit_on_texts([sent1, sent2, sent3, ..., sentN])

# Similarly, a list of sentences (or a single sentence inside a list) must be
# passed to texts_to_sequences().
test_text1 = "The earth is an great place live"
test_text2 = "The is my program"
sequences = t.texts_to_sequences([test_text1, test_text2])

print('sequences : ', sequences, '\n')
print('word_index : ', t.word_index)
# texts_to_sequences() returns a list of lists, i.e. [ [] ]


Output:


sequences :  [[1, 2, 3, 4, 6, 7], [1, 3]]


word_index :  {'the': 1, 'earth': 2, 'is': 3, 'an': 4, 'awesome': 5, 'place': 6, 'live': 7}
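
Since the two sequences above have different lengths, this is also where the index 0 that the first answer says is reserved for padding comes in. A sketch, reusing Example 2's fit_text, with pad_sequences from the same preprocessing package:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

t = Tokenizer()
t.fit_on_texts(["The earth is an awesome place live"])
seqs = t.texts_to_sequences(["The earth is an great place live", "The is my program"])

# pad_sequences left-pads the shorter sequence with the reserved index 0
print(pad_sequences(seqs))
# [[1 2 3 4 6 7]
#  [0 0 0 0 1 3]]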

nuric has already answered the question, but I would add something.

In this example, pay attention both to the frequency-based encoding of words and to OOV:

from tensorflow.keras.preprocessing.text import Tokenizer

corpus = ['The', 'cat', 'is', 'on', 'the', 'table', 'a', 'very', 'long', 'table']

tok_obj = Tokenizer(num_words=10, oov_token='<OOV>')
tok_obj.fit_on_texts(corpus)

[TL;DR] The tokenizer will include the first 10 words appearing in the corpus. Here there are 10 words, but only 8 are unique. The 10 most frequent words will be encoded; any words beyond that number will go to OOV (Out Of Vocabulary).

Built dictionary:

Please note the frequencies:

{'<OOV>': 1, 'the': 2, 'table': 3, 'cat': 4, 'is': 5, 'on': 6, 'a': 7, 'very': 8, 'long': 9}
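
One subtlety worth knowing (behavior of recent Keras versions; the num_words=4 cap below is mine, for illustration): num_words does not shrink word_index. The full dictionary is always kept, and the cap is only applied inside texts_to_sequences, where out-of-range words map to the OOV token if one was set:

from tensorflow.keras.preprocessing.text import Tokenizer

corpus = ['The', 'cat', 'is', 'on', 'the', 'table', 'a', 'very', 'long', 'table']

small = Tokenizer(num_words=4, oov_token='<OOV>')    # keep only indices 1..3
small.fit_on_texts(corpus)
print(small.word_index)                              # still the full 9-entry dictionary
print(small.texts_to_sequences(['the cat is on the table']))
# [[2, 1, 1, 1, 2, 3]]  <- 'cat', 'is', 'on' exceed the cap, so they become <OOV>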

Sentence processing:

processed_seq = tok_obj.texts_to_sequences(['The dog is on the bed'])

Which gives:

>>> processed_seq
[[2, 1, 5, 6, 2, 1]]

How can we retrieve the sentence?

Build the inverse dictionary inv_map and use it!

inv_map = {v: k for k, v in tok_obj.word_index.items()}

for seq in processed_seq:
    for tok in seq:
        print(inv_map[tok])

which gives:

>>> the
<OOV>
is
on
the
<OOV>

because dog and bed are not in the dictionary.

A list comprehension can be used to compress the code:

[inv_map[tok] for seq in processed_seq for tok in seq]

which gives:

>>> ['the', '<OOV>', 'is', 'on', 'the', '<OOV>']
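
For what it's worth, recent Keras versions also ship the inverse method sequences_to_texts, which makes the hand-built inv_map unnecessary (a sketch, reusing tok_obj and processed_seq from above):

print(tok_obj.sequences_to_texts(processed_seq))
# ['the <OOV> is on the <OOV>']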