Improving Sentiment Classification Accuracy: spaCy Tokenization and 300-Dimensional Word Vectors

In the previous section's sentiment classification, some negative reviews were predicted as positive. For example, "this movie was shit" is plainly a hostile judgment of the movie and belongs to the negative class, yet the model labeled it positive.

So in this section we improve prediction accuracy through two changes: better tokenization and a higher word-vector dimension.

spaCy Tokenization

Let's use the spaCy tokenizer and see whether it improves accuracy.

Using a mirror site is recommended for downloading and installing:

pip install spacy -i http://pypi.douban.com/simple/  --trusted-host pypi.douban.com
>>> import spacy
>>> spacy.__version__
'3.0.9'

Install the English model package

python -m spacy download en

This command did not work for me (spaCy 3.x removed the "en" shortcut name, so the full model name is required), so I downloaded the wheel directly instead; since the download felt slow, I used Thunder (Xunlei): https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0-py3-none-any.whl

Or:

pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0-py3-none-any.whl

The language package used here is en_core_web_sm, so you can also install the downloaded wheel through the Douban mirror (this is the method I recommend):

pip install en_core_web_sm-3.0.0-py3-none-any.whl  -i http://pypi.douban.com/simple/  --trusted-host pypi.douban.com

Once installed, load the English package through spaCy:

>>> spacy_en = spacy.load("en_core_web_sm")
>>> spacy_en._path
WindowsPath('D:/Anaconda3/envs/pygpu/lib/site-packages/en_core_web_sm/en_core_web_sm-3.0.0')

Then tokenize: modify the get_tokenized_imdb function from the previous section (bundled with d2lzh) to use spaCy's tokenizer instead:

def get_tokenized_imdb(data):
    def tokenizer(text):
        return [tok.text for tok in spacy_en.tokenizer(text)]

    return [tokenizer(review) for review, _ in data]
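
To see what spaCy buys us over a plain whitespace split, compare the two on a sentence with contractions and punctuation (a small illustrative sketch; the token lists in the comments are what spaCy's rule-based tokenizer typically produces):

sample = "This movie wasn't good, it's terrible!"

# A naive whitespace split leaves punctuation and contractions glued to words
print(sample.split())
# ['This', 'movie', "wasn't", 'good,', "it's", 'terrible!']

# spaCy separates punctuation and splits contractions into meaningful units,
# so "wasn't" contributes the negation token "n't" to the vocabulary
print([tok.text for tok in spacy_en.tokenizer(sample)])
# ['This', 'movie', 'was', "n't", 'good', ',', 'it', "'s", 'terrible', '!']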

Let's train and see how it does:

print(d2l.predict_sentiment(net, vocab, ["this", "movie", "was", "shit"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "not", "good"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "so", "bad"]))
'''
training on [gpu(0)]
epoch 1, loss 0.5781, train acc 0.692, test acc 0.781, time 66.0 sec
epoch 2, loss 0.4024, train acc 0.822, test acc 0.839, time 65.4 sec
epoch 3, loss 0.3465, train acc 0.852, test acc 0.844, time 65.6 sec
epoch 4, loss 0.3227, train acc 0.861, test acc 0.856, time 65.9 sec
epoch 5, loss 0.2814, train acc 0.880, test acc 0.859, time 66.2 sec
negative
positive
negative
'''

Accuracy has improved, and the first review, predicted positive in the previous section, is now correctly predicted negative. The second review is still misclassified: the model did not recognize that "not good" expresses a negative judgment. Next we stack another technique on top to raise accuracy further.
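
For reference, the d2lzh predict_sentiment used above is roughly the following (a sketch based on the book's implementation; it maps tokens to vocabulary indices, runs the network on a batch of one, and takes the argmax):

def predict_sentiment(net, vocab, sentence):
    # Map tokens to vocabulary indices and add a batch dimension of 1
    sentence = nd.array(vocab.to_indices(sentence), ctx=d2l.try_gpu())
    label = nd.argmax(net(sentence.reshape((1, -1))), axis=1)
    # The decoder outputs two classes: 1 means positive, 0 means negative
    return "positive" if label.asscalar() == 1 else "negative"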

300-Dimensional Word Vectors

Next we raise the pretrained word vectors from 100 to 300 dimensions and see whether accuracy goes up; that is, we replace glove.6B.100d.txt with glove.6B.300d.txt:

glove_embedding = text.embedding.create(
    "glove", pretrained_file_name="glove.6B.300d.txt", vocabulary=vocab
)
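
Before copying these vectors into the network, it is worth a quick sanity check that the loaded matrix really has 300 columns; idx_to_vec is the matrix that gets copied into the embedding layer below.

# One row per vocabulary token, one column per embedding dimension
print(glove_embedding.idx_to_vec.shape)  # expected: (len(vocab), 300)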

With the higher-dimensional vectors selected, train and test again:

print(d2l.predict_sentiment(net, vocab, ["this", "movie", "was", "shit"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "not", "good"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "so", "bad"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "so", "good"]))
'''
training on [gpu(0)]
epoch 1, loss 0.5186, train acc 0.734, test acc 0.842, time 74.7 sec
epoch 2, loss 0.3411, train acc 0.854, test acc 0.862, time 74.8 sec
epoch 3, loss 0.2851, train acc 0.884, test acc 0.863, time 75.6 sec
epoch 4, loss 0.2459, train acc 0.903, test acc 0.843, time 75.3 sec
epoch 5, loss 0.2099, train acc 0.917, test acc 0.853, time 75.8 sec
negative
negative
negative
positive
'''

Accuracy improved again, and all four reviews now receive the correct sentiment label.

Full Code

import collections
import d2lzh as d2l
from mxnet import gluon, init, nd
from mxnet.contrib import text
from mxnet.gluon import data as gdata, loss as gloss, nn, rnn
import spacy

# spacy_en = spacy.load("en")
spacy_en = spacy.load("en_core_web_sm")


def get_tokenized_imdb(data):
    def tokenizer(text):
        return [tok.text for tok in spacy_en.tokenizer(text)]

    return [tokenizer(review) for review, _ in data]


def get_vocab_imdb(data):
    """Get the vocab for the IMDB data set for sentiment analysis."""
    tokenized_data = get_tokenized_imdb(data)
    counter = collections.Counter([tk for st in tokenized_data for tk in st])
    return text.vocab.Vocabulary(counter, min_freq=5, reserved_tokens=["<pad>"])


# d2l.download_imdb(data_dir='data')
train_data, test_data = d2l.read_imdb("train"), d2l.read_imdb("test")
tokenized_data = get_tokenized_imdb(train_data)
vocab = get_vocab_imdb(train_data)
features, labels = d2l.preprocess_imdb(train_data, vocab)
batch_size = 64
# train_set = gdata.ArrayDataset(*d2l.preprocess_imdb(train_data, vocab))
train_set = gdata.ArrayDataset(*[features, labels])
test_set = gdata.ArrayDataset(*d2l.preprocess_imdb(test_data, vocab))
train_iter = gdata.DataLoader(train_set, batch_size, shuffle=True)
test_iter = gdata.DataLoader(test_set, batch_size)
"""
for X, y in train_iter:
    print(X.shape, y.shape)
    break
"""


class BiRNN(nn.Block):
    def __init__(self, vocab, embed_size, num_hiddens, num_layers, **kwargs):
        super(BiRNN, self).__init__(**kwargs)
        # Word embedding layer
        self.embedding = nn.Embedding(input_dim=len(vocab), output_dim=embed_size)
        # bidirectional=True makes this a bidirectional recurrent neural network
        self.encoder = rnn.LSTM(
            hidden_size=num_hiddens,
            num_layers=num_layers,
            bidirectional=True,
            input_size=embed_size,
        )
        self.decoder = nn.Dense(2)

    def forward(self, inputs):
        # The LSTM expects the sequence length (number of words) as the first
        # dimension, so inputs of shape (batch size, number of words) are
        # transposed; embeddings then has shape (number of words, batch size,
        # word vector dimension), e.g. (500, 64, 300)
        embeddings = self.embedding(inputs.T)
        # Bidirectional, so the output size is doubled: (number of words,
        # batch size, 2 * number of hidden units), e.g. (500, 64, 200)
        outputs = self.encoder(embeddings)
        # Concatenate the hidden states of the initial and final time steps as
        # input for the fully connected layer, e.g. shape (64, 400)
        encoding = nd.concat(outputs[0], outputs[-1])
        outs = self.decoder(encoding)
        return outs


# Create a bidirectional recurrent neural network with 2 hidden layers
embed_size, num_hiddens, num_layers, ctx = 300, 100, 2, d2l.try_all_gpus()
net = BiRNN(
    vocab=vocab, embed_size=embed_size, num_hiddens=num_hiddens, num_layers=num_layers
)
net.initialize(init.Xavier(), ctx=ctx)

glove_embedding = text.embedding.create(
    "glove", pretrained_file_name="glove.6B.300d.txt", vocabulary=vocab
)
net.embedding.weight.set_data(glove_embedding.idx_to_vec)
net.embedding.collect_params().setattr("grad_req", "null")

lr, num_epochs = 0.01, 5
trainer = gluon.Trainer(net.collect_params(), "adam", {"learning_rate": lr})
loss = gloss.SoftmaxCrossEntropyLoss()
d2l.train(train_iter, test_iter, net, loss, trainer, ctx, num_epochs)

print(d2l.predict_sentiment(net, vocab, ["this", "movie", "was", "shit"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "not", "good"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "so", "bad"]))
print(d2l.predict_sentiment(net, vocab, ["this", "movie", "is", "so", "good"]))

Note that embed_size must be set to 300 here, matching the word-vector dimension of the newly chosen file.
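
A simple guard against such a mismatch is to derive embed_size from the loaded vectors instead of hard-coding it (a small sketch, assuming glove_embedding is created before the network is built; otherwise set_data fails with a shape error):

# Derive the embedding size from the pretrained vectors so the two always match
embed_size = glove_embedding.idx_to_vec.shape[1]  # 300 for glove.6B.300d.txt
net = BiRNN(
    vocab=vocab, embed_size=embed_size, num_hiddens=num_hiddens, num_layers=num_layers
)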

Summary: judging from these experiments, tokenizing words more carefully clearly helps the model understand word meaning, and mapping words to higher-dimensional vectors raises accuracy further.

