
Check-in

Contents

Check-in

Environment Setup

Preparation

Data Loading and Preprocessing

BertTokenizer

Partial Output

Model Construction

GPT-2 Model Structure Output

Training

Partial Output

Partial Output 2 (Reduced Training Data)

Inference


Environment Setup

pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
pip install tokenizers==0.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
# This case was adapted for mindnlp 0.3.1; if it fails to run, pin the version with `!pip install mindnlp==0.3.1`
pip install mindnlp

Preparation

The NLPCC 2017 summarization dataset consists of news articles paired with their summaries, 50,000 samples in total.

Source: NLPCC 2017 summarization dataset

Data Loading and Preprocessing

  • Raw data format:
article: [CLS] article_context [SEP]
summary: [CLS] summary_context [SEP]
  • Data format after preprocessing:
[CLS] article_context [SEP] summary_context [SEP]
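As a quick illustration of the merged format, calling a BERT-style tokenizer (introduced below) with a text pair yields a single sequence with one [CLS] and two [SEP] markers. A minimal sketch; the two short strings are made up for demonstration:

from mindnlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
# Encoding article and summary as a pair produces: [CLS] article [SEP] summary [SEP]
ids = tokenizer(text='北京今日降温', text_pair='气温下降')['input_ids']
print(tokenizer.convert_ids_to_tokens(ids))
# ['[CLS]', '北', '京', '今', '日', '降', '温', '[SEP]', '气', '温', '下', '降', '[SEP]']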

BertTokenizer

Since GPT-2 has no Chinese tokenizer, BertTokenizer is used instead. The code is as follows:

from mindspore.dataset import TextFileDataset
import json
import numpy as np
from mindnlp.transformers import BertTokenizer

# preprocess dataset
def process_dataset(dataset, tokenizer, batch_size=6, max_seq_len=1024, shuffle=False):
    def read_map(text):
        data = json.loads(text.tobytes())
        return np.array(data['article']), np.array(data['summarization'])

    def merge_and_pad(article, summary):
        # tokenization
        # pad to max_seq_length, only truncate the article
        tokenized = tokenizer(text=article, text_pair=summary,
                              padding='max_length', truncation='only_first', max_length=max_seq_len)
        return tokenized['input_ids'], tokenized['input_ids']

    dataset = dataset.map(read_map, 'text', ['article', 'summary'])
    # change column names to input_ids and labels for the following training
    dataset = dataset.map(merge_and_pad, ['article', 'summary'], ['input_ids', 'labels'])
    dataset = dataset.batch(batch_size)
    if shuffle:
        dataset = dataset.shuffle(batch_size)
    return dataset

# load dataset (`path` points to the downloaded nlpcc2017 data file)
dataset = TextFileDataset(str(path), shuffle=False)
print(dataset.get_dataset_size())   ### 50000

# split into training and testing dataset
train_dataset, test_dataset = dataset.split([0.9, 0.1], randomize=False)
print(len(train_dataset))  ### 45000

# We use BertTokenizer for tokenizing chinese context.
tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
len(tokenizer)   # vocabulary size: 21128

train_dataset = process_dataset(train_dataset, tokenizer, batch_size=4)
## next(train_dataset.create_tuple_iterator())
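A quick sanity check on the processed pipeline (a sketch; the commented shapes assume batch_size=4 and max_seq_len=1024 as above):

# Both columns come from the same merged sequence, so their shapes must match.
input_ids, labels = next(train_dataset.create_tuple_iterator())
print(input_ids.shape, labels.shape)   # expected: (4, 1024) (4, 1024)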

Partial Output

Model Construction

As shown below, this is implemented with two classes (a short shift-right illustration follows this list):

  1. The GPT2ForSummarization model; note the shift-right operation.
  2. A dynamic learning rate schedule.
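Before the full implementation, here is a minimal NumPy sketch (made-up token ids, random logits) of what the shift-right slicing does:

import numpy as np

# Position i's logits are trained to predict the token at position i + 1,
# so the last logit row and the first label are dropped before the loss.
labels = np.array([[101, 45, 77, 102]])   # [CLS] t1 t2 [SEP], ids made up
logits = np.random.randn(1, 4, 10)        # (batch, seq_len, vocab)

shift_logits = logits[..., :-1, :]        # predictions at positions 0..2
shift_labels = labels[..., 1:]            # targets at positions 1..3
print(shift_logits.shape, shift_labels)   # (1, 3, 10) [[ 45  77 102]]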
from mindspore import nn, ops
from mindspore.nn.learning_rate_schedule import LearningRateSchedule
from mindnlp.transformers import GPT2Config, GPT2LMHeadModel

class GPT2ForSummarization(GPT2LMHeadModel):
    def construct(self, input_ids=None, attention_mask=None, labels=None):
        outputs = super().construct(input_ids=input_ids, attention_mask=attention_mask)
        shift_logits = outputs.logits[..., :-1, :]
        shift_labels = labels[..., 1:]
        # Flatten the tokens; padding positions are excluded from the loss
        loss = ops.cross_entropy(shift_logits.view(-1, shift_logits.shape[-1]),
                                 shift_labels.view(-1), ignore_index=tokenizer.pad_token_id)
        return loss

class LinearWithWarmUp(LearningRateSchedule):
    """Warmup-decay learning rate."""
    def __init__(self, learning_rate, num_warmup_steps, num_training_steps):
        super().__init__()
        self.learning_rate = learning_rate
        self.num_warmup_steps = num_warmup_steps
        self.num_training_steps = num_training_steps

    def construct(self, global_step):
        if global_step < self.num_warmup_steps:
            return global_step / float(max(1, self.num_warmup_steps)) * self.learning_rate
        return ops.maximum(
            0.0, (self.num_training_steps - global_step) /
            (max(1, self.num_training_steps - self.num_warmup_steps))
        ) * self.learning_rate

## Training hyperparameter settings
num_epochs = 1
warmup_steps = 2000
learning_rate = 1.5e-4

num_training_steps = num_epochs * train_dataset.get_dataset_size()

config = GPT2Config(vocab_size=len(tokenizer))
model = GPT2ForSummarization(config)

lr_scheduler = LinearWithWarmUp(learning_rate=learning_rate, num_warmup_steps=warmup_steps,
                                num_training_steps=num_training_steps)
optimizer = nn.AdamWeightDecay(model.trainable_params(), learning_rate=lr_scheduler)

# Log the number of model parameters
print('number of model parameters: {}'.format(model.num_parameters()))
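The schedule's shape can be eyeballed with a pure-Python replica of the same formula (a sketch; it assumes batch_size=4, so num_training_steps = 45000 / 4 = 11250):

def lr_at(step, peak=1.5e-4, warmup=2000, total=11250):
    # Linear warmup to the peak rate, then linear decay to zero at `total`.
    if step < warmup:
        return step / max(1, warmup) * peak
    return max(0.0, (total - step) / max(1, total - warmup)) * peak

for s in (0, 1000, 2000, 6000, 11250):
    print(s, lr_at(s))   # 0.0, 7.5e-05, 1.5e-04, ~8.5e-05, 0.0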

GPT-2 Model Structure Output

1. Top-level class: GPT2ForSummarization.

2. Second level: the GPT2Model layer, the Transformer structure and the core of the model.

3. Second level: the lm_head Dense (fully connected) layer, dim[in, out] = [768, 21128].

4. The third-level components under GPT2Model, five in all:

        • wte embedding layer: dim[in, out] = [21128, 768], i.e. a vocabulary of 21128 tokens, each mapped to a 768-dimensional vector.

        • wpe embedding layer: dim[in, out] = [1024, 768], positional embeddings for up to 1024 positions.

        • drop dropout layer.

        • h hidden layers: the body of the Transformer, containing 12 GPT2Blocks.

        • ln_f: the final LayerNorm.

5. Structure of each GPT2Block:

        • ln_1 LayerNorm layer: normalizes the input before the attention mechanism.

        • attn GPT2Attention layer: self-attention, computing attention weights across positions of the input sequence. It contains four sublayers: Conv1D, Conv1D, CustomDropout, CustomDropout.

        • ln_2 LayerNorm layer: normalization after self-attention.

        • mlp GPT2MLP layer: a multi-layer perceptron applying a further nonlinear transform to the attention output, built from Conv1D, Conv1D, GELU, CustomDropout.
 

print(model)

GPT2ForSummarization<
  (transformer): GPT2Model<
    (wte): Embedding<vocab_size=21128, embedding_size=768, use_one_hot=False, weight=Parameter(shape=[21128, 768], dtype=Float32, name=transformer.wte.weight, requires_grad=True), padding_idx=None>
    (wpe): Embedding<vocab_size=1024, embedding_size=768, use_one_hot=False, weight=Parameter(shape=[1024, 768], dtype=Float32, name=transformer.wpe.weight, requires_grad=True), padding_idx=None>
    (drop): CustomDropout<>
    (h): CellList<
      (0): GPT2Block<
        (ln_1): LayerNorm<normalized_shape=[768], begin_norm_axis=-1, begin_params_axis=-1, weight=Parameter(shape=[768], ...), bias=Parameter(shape=[768], ...)>
        (attn): GPT2Attention<
          (c_attn): Conv1D<(matmul): Matmul<>>
          (c_proj): Conv1D<(matmul): Matmul<>>
          (attn_dropout): CustomDropout<>
          (resid_dropout): CustomDropout<>
        >
        (ln_2): LayerNorm<normalized_shape=[768], ...>
        (mlp): GPT2MLP<
          (c_fc): Conv1D<(matmul): Matmul<>>
          (c_proj): Conv1D<(matmul): Matmul<>>
          (act): GELU<>
          (dropout): CustomDropout<>
        >
      >
      (1)-(11): GPT2Block<...>   # identical in structure to block (0), repeated 11 more times
    >
    (ln_f): LayerNorm<normalized_shape=[768], ...>
  >
  (lm_head): Dense<input_channels=768, output_channels=21128>
>
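The printed parameter count can be cross-checked by hand from the dims above. A back-of-the-envelope sketch; the exact total depends on bias conventions and on whether lm_head is tied to wte, which this sketch assumes it is not:

# Hedged parameter estimate from the printed dimensions.
d, vocab, pos, blocks = 768, 21128, 1024, 12

wte = vocab * d                       # token embeddings
wpe = pos * d                         # position embeddings
per_block = (2 * 2 * d                # ln_1 + ln_2 (weight + bias each)
             + d * 3 * d + 3 * d      # c_attn: fused QKV projection + bias
             + d * d + d              # attention c_proj + bias
             + d * 4 * d + 4 * d      # mlp c_fc + bias
             + 4 * d * d + d)         # mlp c_proj + bias
ln_f = 2 * d
lm_head = d * vocab                   # untied head, no bias assumed

total = wte + wpe + blocks * per_block + ln_f + lm_head
print(f'{total:,}')                   # ~118M under these assumptions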

Training

from mindnlp._legacy.engine import Trainer
from mindnlp._legacy.engine.callbacks import CheckpointCallback

ckpoint_cb = CheckpointCallback(save_path='checkpoint', ckpt_name='gpt2_summarization',
                                epochs=1, keep_checkpoint_max=2)

trainer = Trainer(network=model, train_dataset=train_dataset,
                  epochs=1, optimizer=optimizer, callbacks=ckpoint_cb)
trainer.set_amp(level='O1')  # enable mixed precision

trainer.run(tgt_columns="labels")

Partial Output

Note: higher-spec compute is recommended, as training takes a long time.

Partial Output 2 (Reduced Training Data)

The event's notebook can only run for 8 hours continuously, and the goal here is not performance tuning, so I reduced the training data to 1/10 (see the sketch below); partial output at that setting follows.
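One way to do that subsampling (a hedged sketch, not necessarily the exact code used) is to take a prefix of the dataset with MindSpore's .take():

# Keep the first tenth of the (already batched) training set; .take() counts
# batches here, which still amounts to ~1/10 of the samples.
train_dataset = train_dataset.take(train_dataset.get_dataset_size() // 10)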

Inference

## Convert token-id tensors back to Chinese text
def process_test_dataset(dataset, tokenizer, batch_size=1, max_seq_len=1024, max_summary_len=100):
    def read_map(text):
        data = json.loads(text.tobytes())
        return np.array(data['article']), np.array(data['summarization'])

    def pad(article):
        # leave room for the generated summary tokens
        tokenized = tokenizer(text=article, truncation=True, max_length=max_seq_len - max_summary_len)
        return tokenized['input_ids']

    dataset = dataset.map(read_map, 'text', ['article', 'summary'])
    dataset = dataset.map(pad, 'article', ['input_ids'])
    dataset = dataset.batch(batch_size)
    return dataset

test_dataset = process_test_dataset(test_dataset, tokenizer, batch_size=1)
print(next(test_dataset.create_tuple_iterator(output_numpy=True)))

model = GPT2LMHeadModel.from_pretrained('./checkpoint/gpt2_summarization_epoch_0.ckpt', config=config)
model.set_train(False)
model.config.eos_token_id = model.config.sep_token_id

i = 0
for (input_ids, raw_summary) in test_dataset.create_tuple_iterator():
    output_ids = model.generate(input_ids, max_new_tokens=50, num_beams=5, no_repeat_ngram_size=2)
    output_text = tokenizer.decode(output_ids[0].tolist())
    print(output_text)
    i += 1
    if i == 1:
        break
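Note that generate() on a decoder-only model returns the prompt followed by the continuation, so the decoded text above repeats the article. A hedged sketch for printing only the newly generated summary (it assumes the usual prompt-prefixed output layout):

# Slice off the prompt tokens so only the generated summary is decoded.
prompt_len = input_ids.shape[1]
summary_text = tokenizer.decode(output_ids[0][prompt_len:].tolist())
print(summary_text)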

Inference results of the model trained on the reduced dataset are shown above.
