PyTorch Learning Notes: Language Models with torchtext

Yesterday's write-up was too rough, so I found another tutorial to work through.

It mainly covers three parts:

1. Text preprocessing with torchtext (a minimal sketch follows this list)
2. Building datasets for text preprocessing with Keras and PyTorch (set aside for now)
3. Loading pretrained word vectors with gensim and implementing the language model in PyTorch
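For the first part, this is roughly what torchtext-based preprocessing looks like. The sketch below uses the legacy Field API (torchtext ≤ 0.8, or torchtext.legacy in later releases); the TEXT name and the toy input are my own, not from the tutorial:

from torchtext.data import Field   # torchtext.legacy.data in torchtext >= 0.9

# A Field describes how raw text becomes tokens and then integer ids
TEXT = Field(sequential=True, tokenize=lambda s: s.split(), lower=True)
tokens = TEXT.preprocess("When forty winters shall besiege thy brow,")
TEXT.build_vocab([tokens])            # build a vocabulary from tokenized examples
print(TEXT.vocab.stoi['winters'])     # word -> integer index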

2.0 Converting the GloVe word vectors

It took me quite a while to find the problem: the GloVe file was being used as-is, without first converting it to word2vec format.

# Convert the GloVe text file into the word2vec format that gensim expects
from gensim.scripts.glove2word2vec import glove2word2vec

glove_input_file = r'D:\DesktopBackup\right\MLHomework\AllenNLP\data\glove.6B.100d.txt'
word2vec_output_file = r'D:\DesktopBackup\right\MLHomework\AllenNLP\data\glove.6B.100d.word2vec.txt'
(count, dimensions) = glove2word2vec(glove_input_file, word2vec_output_file)
print(count, '\n', dimensions)
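As an aside, in gensim 4.0 and later glove2word2vec is deprecated; if I understand the newer API correctly, the GloVe file can be read directly by passing no_header=True, roughly like this (same file path as above, just a sketch):

from gensim.models import KeyedVectors

# gensim >= 4.0: load the raw GloVe text file without converting it first
wv = KeyedVectors.load_word2vec_format(glove_input_file, binary=False, no_header=True)
print(wv['king'].shape)   # (100,)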

2.1 Import the required libraries

import torch
from torch import nn
import torch.nn.functional as F
import torch.optim as optim
import gensim

# two context words are used to predict the following word (a trigram model)
CONTEXT_SIZE = 2
test_sentence = '''When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.'''.split()
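To make CONTEXT_SIZE concrete: with two context words, every training example pairs two consecutive tokens with the token that follows them (the pairs themselves are built in the next section). For instance:

# The first context/target pair built from the token list above would be
# context = ('When', 'forty'), target = 'winters'
print(test_sentence[:3])   # ['When', 'forty', 'winters']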

2.2 Load the pretrained word vectors

# Give every word an integer index so it can be looked up in the embedding layer
vocab = set(test_sentence)
word_to_idx = {word: i + 1 for i, word in enumerate(vocab)}
# Reserve index 0 for <unk>: any word not in the vocabulary is treated as unknown
word_to_idx['<unk>'] = 0
idx_to_word = {i + 1: word for i, word in enumerate(vocab)}
idx_to_word[0] = '<unk>'

# Build (context, target) pairs: two consecutive words predict the third
trigram = [((test_sentence[i], test_sentence[i + 1]), test_sentence[i + 2])
           for i in range(len(test_sentence) - 2)]

# Load the converted GloVe vectors with gensim
wvmodel = gensim.models.KeyedVectors.load_word2vec_format(
    r'D:\DesktopBackup\right\MLHomework\AllenNLP\data\glove.6B.100d.word2vec.txt',
    binary=False)

# Copy the pretrained vector for every vocabulary word that GloVe covers;
# words GloVe does not know keep an all-zero row
vocab_size = len(word_to_idx)
embed_size = 100
weight = torch.zeros(vocab_size, embed_size)
for i in range(len(wvmodel.index2word)):
    try:
        index = word_to_idx[wvmodel.index2word[i]]
    except KeyError:
        continue
    weight[index, :] = torch.from_numpy(
        wvmodel.get_vector(idx_to_word[word_to_idx[wvmodel.index2word[i]]]))
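A quick sanity check I find useful at this point (my own addition, assuming the converted GloVe file loaded correctly): pick any sonnet word that GloVe actually knows and confirm that its row in weight equals the GloVe vector. Capitalised or punctuated tokens such as "Thy" or "brow," will not match GloVe's lowercase vocabulary and therefore stay all-zero.

import numpy as np

# probe: any vocabulary word that is also in the GloVe vocabulary (hypothetical check)
probe = next(w for w in vocab if w in wvmodel)
assert np.allclose(weight[word_to_idx[probe]].numpy(), wvmodel.get_vector(probe))
print(probe, 'matches its pretrained GloVe vector')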

2.3 Define and train the model

# Define the n-gram language model
class NgramModel(nn.Module):
    def __init__(self, vocab_size, context_size, n_dim):
        super(NgramModel, self).__init__()
        self.n_word = vocab_size
        # initialise the embedding layer with the pretrained GloVe weights
        # and keep fine-tuning it during training
        self.embedding = nn.Embedding.from_pretrained(weight)
        self.embedding.weight.requires_grad = True
        self.linear1 = nn.Linear(context_size * n_dim, 128)
        self.linear2 = nn.Linear(128, self.n_word)

    def forward(self, x):
        emb = self.embedding(x)
        emb = emb.view(1, -1)
        out = self.linear1(emb)
        out = F.relu(out)
        out = self.linear2(out)
        log_prob = F.log_softmax(out, dim=1)
        return log_prob


ngrammodel = NgramModel(len(word_to_idx), CONTEXT_SIZE, 100)
criterion = nn.NLLLoss()
optimizer = optim.SGD(ngrammodel.parameters(), lr=1e-3)

# Train the model
for epoch in range(100):
    print('epoch {}'.format(epoch + 1))
    print('*' * 10)
    running_loss = 0.0
    for data in trigram:
        word, label = data
        word = torch.LongTensor([word_to_idx[i] for i in word])
        label = torch.LongTensor([word_to_idx[label]])
        out = ngrammodel(word)
        loss = criterion(out, label)
        running_loss += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # report the average loss per training pair
    print('loss:{:.6f}'.format(running_loss / len(trigram)))
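One design note: returning F.log_softmax from forward() and training with nn.NLLLoss is mathematically the same as returning the raw scores from linear2 and using nn.CrossEntropyLoss, which applies log-softmax internally. A hypothetical variant would therefore be:

# Equivalent alternative (sketch): if forward() returned `out` (the raw scores
# from linear2) instead of log_softmax(out), this loss gives identical gradients.
criterion = nn.CrossEntropyLoss()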

2.4 Test the model

# Check the model on one trigram from the training data
word, label = trigram[3]
word = torch.LongTensor([word_to_idx[i] for i in word])
out = ngrammodel(word)
_, predict_label = torch.max(out, 1)
predict_word = idx_to_word[predict_label.item()]
print('real word is {}, predict word is {}'.format(label, predict_word))
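To get a rough overall picture rather than a single example, I also like to count how many training trigrams the model now predicts correctly (my own addition; since this evaluates on the data we trained on, it only measures fit, not generalisation):

# Count correct next-word predictions over all training trigrams
correct = 0
with torch.no_grad():
    for context, target in trigram:
        inp = torch.LongTensor([word_to_idx[w] for w in context])
        pred = idx_to_word[ngrammodel(inp).argmax(dim=1).item()]
        correct += int(pred == target)
print('training-set accuracy: {:.2%}'.format(correct / len(trigram)))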
