1 Overview

  This text classification series will consist of roughly ten posts, covering text classification based on word2vec pre-trained word vectors as well as classification based on the latest pre-trained models (ELMo, BERT, etc.). The full series:

  word2vec pre-trained word vectors

  textCNN model

  charCNN model

  Bi-LSTM model

  Bi-LSTM + Attention model

  RCNN model

  Adversarial LSTM model

  Transformer model

  ELMo pre-trained model

  BERT pre-trained model

  The Jupyter notebook code is in the textClassifier repository, and the Python code is in text_classfier under NLP-Project.

2 Dataset

  The dataset is IMDB movie reviews. There are three data files in the /data/rawData directory: unlabeledTrainData.tsv, labeledTrainData.tsv, and testData.tsv. Text classification requires labeled data (labeledTrainData). The data is preprocessed in the same way as in part one of this series (word2vec pre-trained word vectors); the preprocessed file is /data/preProcess/labeledTrain.csv.
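  As a quick sanity check, something like the following can be used to peek at the preprocessed file (a minimal sketch; it only assumes the file exists at the path above and, as in the code later in this post, has "review" and "sentiment" columns):

import pandas as pd

# Load the preprocessed labeled data and inspect the two columns used later
df = pd.read_csv("../data/preProcess/labeledTrain.csv")
print(df.shape)
print(df[["review", "sentiment"]].head())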

3 Bi-LSTM Model Structure

  Bi-LSTM means bidirectional LSTM. Compared with a unidirectional LSTM, a Bi-LSTM can better capture the contextual information in a sentence, since it reads the sequence in both directions. (For an introduction to LSTMs, see the earlier post.) In this tutorial we use a two-layer Bi-LSTM structure for text classification.
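  Before the full model in section 7, here is a minimal sketch of a single Bi-LSTM layer in TensorFlow 1.x (the shapes and unit counts are illustrative; the actual model below stacks two such layers and wraps the cells in dropout):

import tensorflow as tf

# An illustrative batch of embedded sequences: [batch_size, time_steps, embedding_size]
embedded = tf.placeholder(tf.float32, [None, 200, 200])

fwCell = tf.nn.rnn_cell.LSTMCell(num_units=256)  # reads the sequence left to right
bwCell = tf.nn.rnn_cell.LSTMCell(num_units=256)  # reads the sequence right to left

# outputs is a tuple (output_fw, output_bw), each of shape [batch_size, time_steps, 256]
outputs, states = tf.nn.bidirectional_dynamic_rnn(fwCell, bwCell, embedded, dtype=tf.float32)

# Concatenating the two directions gives every time step both left and right context: [batch_size, time_steps, 512]
concatenated = tf.concat(outputs, 2)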

4 Configuration Parameters

import os
import csv
import time
import datetime
import random
import json
import warnings
from collections import Counter
from math import sqrt
import gensim
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_auc_score, accuracy_score, precision_score, recall_score
warnings.filterwarnings("ignore")

# Configuration parameters

class TrainingConfig(object):
    epoches = 10
    evaluateEvery = 100
    checkpointEvery = 100
    learningRate = 0.001


class ModelConfig(object):
    embeddingSize = 200
    hiddenSizes = [256, 256]  # number of hidden units in each LSTM layer
    dropoutKeepProb = 0.5
    l2RegLambda = 0.0


class Config(object):
    sequenceLength = 200  # roughly the mean of all sequence lengths
    batchSize = 128

    dataSource = "../data/preProcess/labeledTrain.csv"
    stopWordSource = "../data/english"

    numClasses = 1  # set to 1 for binary classification; for multi-class, set to the number of classes

    rate = 0.8  # proportion of the data used for training

    training = TrainingConfig()
    model = ModelConfig()


# Instantiate the configuration object
config = Config()

5 Generating the Training Data

  1) Load the data, split each sentence into word tokens, and remove low-frequency words and stop words.

  2) Map words to indices and build the word-to-index mapping table, saving it in JSON format so it can be used later at inference time. (Note: some words may not be present in the pre-trained word2vec vectors; such words are represented directly as UNK.)

  3) Read the word vectors from the pre-trained word-vector model and feed them into the model as initialization values.

  4) Split the dataset into a training set and a test set.

# Data preprocessing class: generates the training and evaluation sets

class Dataset(object):
    def __init__(self, config):
        self.config = config
        self._dataSource = config.dataSource
        self._stopWordSource = config.stopWordSource

        self._sequenceLength = config.sequenceLength  # each input sequence is padded/truncated to a fixed length
        self._embeddingSize = config.model.embeddingSize
        self._batchSize = config.batchSize
        self._rate = config.rate

        self.stopWordDict = {}

        self.trainReviews = []
        self.trainLabels = []

        self.evalReviews = []
        self.evalLabels = []

        self.wordEmbedding = None

        self.labelList = []

    def _readData(self, filePath):
        """
        Read the dataset from a csv file
        """
        df = pd.read_csv(filePath)

        if self.config.numClasses == 1:
            labels = df["sentiment"].tolist()
        elif self.config.numClasses > 1:
            labels = df["rate"].tolist()

        review = df["review"].tolist()
        reviews = [line.strip().split() for line in review]

        return reviews, labels

    def _labelToIndex(self, labels, label2idx):
        """
        Convert labels to index representations
        """
        labelIds = [label2idx[label] for label in labels]
        return labelIds

    def _wordToIndex(self, reviews, word2idx):
        """
        Convert words to indices
        """
        reviewIds = [[word2idx.get(item, word2idx["UNK"]) for item in review] for review in reviews]
        return reviewIds

    def _genTrainEvalData(self, x, y, word2idx, rate):
        """
        Generate the training and evaluation sets
        """
        reviews = []
        for review in x:
            if len(review) >= self._sequenceLength:
                reviews.append(review[:self._sequenceLength])
            else:
                reviews.append(review + [word2idx["PAD"]] * (self._sequenceLength - len(review)))

        trainIndex = int(len(x) * rate)

        trainReviews = np.asarray(reviews[:trainIndex], dtype="int64")
        trainLabels = np.array(y[:trainIndex], dtype="float32")

        evalReviews = np.asarray(reviews[trainIndex:], dtype="int64")
        evalLabels = np.array(y[trainIndex:], dtype="float32")

        return trainReviews, trainLabels, evalReviews, evalLabels

    def _genVocabulary(self, reviews, labels):
        """
        Generate the word vectors and the word-to-index mapping; the full dataset can be used here
        """
        allWords = [word for review in reviews for word in review]

        # Remove stop words
        subWords = [word for word in allWords if word not in self.stopWordDict]

        wordCount = Counter(subWords)  # count word frequencies
        sortWordCount = sorted(wordCount.items(), key=lambda x: x[1], reverse=True)

        # Remove low-frequency words
        words = [item[0] for item in sortWordCount if item[1] >= 5]

        vocab, wordEmbedding = self._getWordEmbedding(words)
        self.wordEmbedding = wordEmbedding

        word2idx = dict(zip(vocab, list(range(len(vocab)))))

        uniqueLabel = list(set(labels))
        label2idx = dict(zip(uniqueLabel, list(range(len(uniqueLabel)))))
        self.labelList = list(range(len(uniqueLabel)))

        # Save the word-to-index mapping as json so it can be loaded directly at inference time
        with open("../data/wordJson/word2idx.json", "w", encoding="utf-8") as f:
            json.dump(word2idx, f)

        with open("../data/wordJson/label2idx.json", "w", encoding="utf-8") as f:
            json.dump(label2idx, f)

        return word2idx, label2idx

    def _getWordEmbedding(self, words):
        """
        Take the vectors for the words in our dataset from the pre-trained word2vec model
        """
        wordVec = gensim.models.KeyedVectors.load_word2vec_format("../word2vec/word2Vec.bin", binary=True)
        vocab = []
        wordEmbedding = []

        # Add "PAD" and "UNK"
        vocab.append("PAD")
        vocab.append("UNK")
        wordEmbedding.append(np.zeros(self._embeddingSize))
        wordEmbedding.append(np.random.randn(self._embeddingSize))

        for word in words:
            try:
                vector = wordVec.wv[word]
                vocab.append(word)
                wordEmbedding.append(vector)
            except:
                print(word + " is not in the pre-trained word vectors")

        return vocab, np.array(wordEmbedding)

    def _readStopWord(self, stopWordPath):
        """
        Read the stop words
        """
        with open(stopWordPath, "r") as f:
            stopWords = f.read()
            stopWordList = stopWords.splitlines()
            # Store the stop words as a dict so that lookups are fast
            self.stopWordDict = dict(zip(stopWordList, list(range(len(stopWordList)))))

    def dataGen(self):
        """
        Initialize the training and evaluation sets
        """
        # Initialize the stop words
        self._readStopWord(self._stopWordSource)

        # Initialize the dataset
        reviews, labels = self._readData(self._dataSource)

        # Initialize the word-to-index mapping and the word-vector matrix
        word2idx, label2idx = self._genVocabulary(reviews, labels)

        # Convert labels and sentences to numeric ids
        labelIds = self._labelToIndex(labels, label2idx)
        reviewIds = self._wordToIndex(reviews, word2idx)

        # Initialize the training and test sets
        trainReviews, trainLabels, evalReviews, evalLabels = self._genTrainEvalData(reviewIds, labelIds, word2idx, self._rate)
        self.trainReviews = trainReviews
        self.trainLabels = trainLabels

        self.evalReviews = evalReviews
        self.evalLabels = evalLabels


data = Dataset(config)
data.dataGen()
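  After dataGen() runs, the splits and the embedding matrix are available as attributes. A quick illustrative check (the row counts assume the 25,000 labeled IMDB reviews and rate = 0.8):

print(data.trainReviews.shape)   # (20000, 200): padded index sequences
print(data.evalReviews.shape)    # (5000, 200)
print(data.wordEmbedding.shape)  # (vocabSize, 200): initial values for the embedding layer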

6 Generating Batch Data

  We feed batches to the model using a generator (a generator avoids loading all of the data into memory at once).

# Output batches of data
def nextBatch(x, y, batchSize):
    """
    Produce batches, yielded from a generator
    """
    perm = np.arange(len(x))
    np.random.shuffle(perm)
    x = x[perm]
    y = y[perm]

    numBatches = len(x) // batchSize

    for i in range(numBatches):
        start = i * batchSize
        end = start + batchSize
        batchX = np.array(x[start: end], dtype="int64")
        batchY = np.array(y[start: end], dtype="float32")

        yield batchX, batchY
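  For example, pulling a single batch from the generator to check the shapes (an illustrative snippet, assuming data.dataGen() has already been run):

batchX, batchY = next(nextBatch(data.trainReviews, data.trainLabels, config.batchSize))
print(batchX.shape)  # (128, 200): batchSize x sequenceLength
print(batchY.shape)  # (128,)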

7 Bi-LSTM Model

# Build the model
class BiLSTM(object):
    """
    Bi-LSTM for text classification
    """
    def __init__(self, config, wordEmbedding):

        # Define the model inputs
        self.inputX = tf.placeholder(tf.int32, [None, config.sequenceLength], name="inputX")
        self.inputY = tf.placeholder(tf.int32, [None], name="inputY")

        self.dropoutKeepProb = tf.placeholder(tf.float32, name="dropoutKeepProb")

        # Define the l2 loss
        l2Loss = tf.constant(0.0)

        # Word embedding layer
        with tf.name_scope("embedding"):

            # Initialize the embedding matrix with the pre-trained word vectors
            self.W = tf.Variable(tf.cast(wordEmbedding, dtype=tf.float32, name="word2vec"), name="W")
            # Map the input word ids to word vectors; shape [batch_size, sequence_length, embedding_size]
            self.embeddedWords = tf.nn.embedding_lookup(self.W, self.inputX)

        # Define the two-layer bidirectional LSTM structure
        with tf.name_scope("Bi-LSTM"):

            for idx, hiddenSize in enumerate(config.model.hiddenSizes):
                with tf.name_scope("Bi-LSTM" + str(idx)):
                    # Define the forward LSTM cell
                    lstmFwCell = tf.nn.rnn_cell.DropoutWrapper(
                        tf.nn.rnn_cell.LSTMCell(num_units=hiddenSize, state_is_tuple=True),
                        output_keep_prob=self.dropoutKeepProb)
                    # Define the backward LSTM cell
                    lstmBwCell = tf.nn.rnn_cell.DropoutWrapper(
                        tf.nn.rnn_cell.LSTMCell(num_units=hiddenSize, state_is_tuple=True),
                        output_keep_prob=self.dropoutKeepProb)

                    # Use a dynamic rnn, which can take variable sequence lengths; if none is given, the full length is used.
                    # outputs is a tuple (output_fw, output_bw); both have shape [batch_size, max_time, hidden_size], and fw and bw share the same hidden_size.
                    # self.current_state is the final state, a tuple (state_fw, state_bw); state_fw = [batch_size, s], where s is a tuple (h, c)
                    outputs, self.current_state = tf.nn.bidirectional_dynamic_rnn(
                        lstmFwCell, lstmBwCell, self.embeddedWords, dtype=tf.float32,
                        scope="bi-lstm" + str(idx))

                    # Concatenate the fw and bw results in outputs: [batch_size, time_step, hidden_size * 2]
                    self.embeddedWords = tf.concat(outputs, 2)

        # Take the output at the first time step as the input to the fully connected layer
        # (after concatenation, the backward half at position 0 has seen the entire sequence)
        finalOutput = self.embeddedWords[:, 0, :]

        outputSize = config.model.hiddenSizes[-1] * 2  # the final output concatenates fw and bw, so multiply by 2
        output = tf.reshape(finalOutput, [-1, outputSize])  # reshape to the input dimension of the fully connected layer

        # Fully connected output layer
        with tf.name_scope("output"):
            outputW = tf.get_variable(
                "outputW",
                shape=[outputSize, config.numClasses],
                initializer=tf.contrib.layers.xavier_initializer())

            outputB = tf.Variable(tf.constant(0.1, shape=[config.numClasses]), name="outputB")
            l2Loss += tf.nn.l2_loss(outputW)
            l2Loss += tf.nn.l2_loss(outputB)
            self.logits = tf.nn.xw_plus_b(output, outputW, outputB, name="logits")
            if config.numClasses == 1:
                self.predictions = tf.cast(tf.greater_equal(self.logits, 0.0), tf.float32, name="predictions")
            elif config.numClasses > 1:
                self.predictions = tf.argmax(self.logits, axis=-1, name="predictions")

        # Compute the cross-entropy loss
        with tf.name_scope("loss"):

            if config.numClasses == 1:
                losses = tf.nn.sigmoid_cross_entropy_with_logits(
                    logits=self.logits,
                    labels=tf.cast(tf.reshape(self.inputY, [-1, 1]), dtype=tf.float32))
            elif config.numClasses > 1:
                losses = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=self.logits, labels=self.inputY)

            self.loss = tf.reduce_mean(losses) + config.model.l2RegLambda * l2Loss

8 Defining the Metrics Functions

"""
定义各类性能指标
"""def mean(item: list) -> float:"""计算列表中元素的平均值:param item: 列表对象:return:"""res = sum(item) / len(item) if len(item) > 0 else 0return resdef accuracy(pred_y, true_y):"""计算二类和多类的准确率:param pred_y: 预测结果:param true_y: 真实结果:return:"""if isinstance(pred_y[0], list):pred_y = [item[0] for item in pred_y]corr = 0for i in range(len(pred_y)):if pred_y[i] == true_y[i]:corr += 1acc = corr / len(pred_y) if len(pred_y) > 0 else 0return accdef binary_precision(pred_y, true_y, positive=1):"""二类的精确率计算:param pred_y: 预测结果:param true_y: 真实结果:param positive: 正例的索引表示:return:"""corr = 0pred_corr = 0for i in range(len(pred_y)):if pred_y[i] == positive:pred_corr += 1if pred_y[i] == true_y[i]:corr += 1prec = corr / pred_corr if pred_corr > 0 else 0return precdef binary_recall(pred_y, true_y, positive=1):"""二类的召回率:param pred_y: 预测结果:param true_y: 真实结果:param positive: 正例的索引表示:return:"""corr = 0true_corr = 0for i in range(len(pred_y)):if true_y[i] == positive:true_corr += 1if pred_y[i] == true_y[i]:corr += 1rec = corr / true_corr if true_corr > 0 else 0return recdef binary_f_beta(pred_y, true_y, beta=1.0, positive=1):"""二类的f beta值:param pred_y: 预测结果:param true_y: 真实结果:param beta: beta值:param positive: 正例的索引表示:return:"""precision = binary_precision(pred_y, true_y, positive)recall = binary_recall(pred_y, true_y, positive)try:f_b = (1 + beta * beta) * precision * recall / (beta * beta * precision + recall)except:f_b = 0return f_bdef multi_precision(pred_y, true_y, labels):"""多类的精确率:param pred_y: 预测结果:param true_y: 真实结果:param labels: 标签列表:return:"""if isinstance(pred_y[0], list):pred_y = [item[0] for item in pred_y]precisions = [binary_precision(pred_y, true_y, label) for label in labels]prec = mean(precisions)return precdef multi_recall(pred_y, true_y, labels):"""多类的召回率:param pred_y: 预测结果:param true_y: 真实结果:param labels: 标签列表:return:"""if isinstance(pred_y[0], list):pred_y = [item[0] for item in pred_y]recalls = [binary_recall(pred_y, true_y, label) for label in labels]rec = mean(recalls)return recdef multi_f_beta(pred_y, true_y, labels, beta=1.0):"""多类的f beta值:param pred_y: 预测结果:param true_y: 真实结果:param labels: 标签列表:param beta: beta值:return:"""if isinstance(pred_y[0], list):pred_y = [item[0] for item in pred_y]f_betas = [binary_f_beta(pred_y, true_y, beta, label) for label in labels]f_beta = mean(f_betas)return f_betadef get_binary_metrics(pred_y, true_y, f_beta=1.0):"""得到二分类的性能指标:param pred_y::param true_y::param f_beta::return:"""acc = accuracy(pred_y, true_y)recall = binary_recall(pred_y, true_y)precision = binary_precision(pred_y, true_y)f_beta = binary_f_beta(pred_y, true_y, f_beta)return acc, recall, precision, f_betadef get_multi_metrics(pred_y, true_y, labels, f_beta=1.0):"""得到多分类的性能指标:param pred_y::param true_y::param labels::param f_beta::return:"""acc = accuracy(pred_y, true_y)recall = multi_recall(pred_y, true_y, labels)precision = multi_precision(pred_y, true_y, labels)f_beta = multi_f_beta(pred_y, true_y, labels, f_beta)return acc, recall, precision, f_beta

9 Training the Model

  During training we define TensorBoard summaries, as well as two ways of saving the model: checkpoint files and a SavedModel pb file.

# Train the model

# Generate the training and evaluation sets
trainReviews = data.trainReviews
trainLabels = data.trainLabels
evalReviews = data.evalReviews
evalLabels = data.evalLabels

wordEmbedding = data.wordEmbedding
labelList = data.labelList

# Define the computation graph
with tf.Graph().as_default():
    session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
    session_conf.gpu_options.allow_growth = True
    session_conf.gpu_options.per_process_gpu_memory_fraction = 0.9  # limit the gpu memory fraction

    sess = tf.Session(config=session_conf)

    # Define the session
    with sess.as_default():
        lstm = BiLSTM(config, wordEmbedding)

        globalStep = tf.Variable(0, name="globalStep", trainable=False)
        # Define the optimizer, passing in the learning rate
        optimizer = tf.train.AdamOptimizer(config.training.learningRate)
        # Compute the gradients, obtaining (gradient, variable) pairs
        gradsAndVars = optimizer.compute_gradients(lstm.loss)
        # Apply the gradients to the variables, creating the training op
        trainOp = optimizer.apply_gradients(gradsAndVars, global_step=globalStep)

        # Use summaries to draw TensorBoard plots
        gradSummaries = []
        for g, v in gradsAndVars:
            if g is not None:
                tf.summary.histogram("{}/grad/hist".format(v.name), g)
                tf.summary.scalar("{}/grad/sparsity".format(v.name), tf.nn.zero_fraction(g))

        outDir = os.path.abspath(os.path.join(os.path.curdir, "summarys"))
        print("Writing to {}\n".format(outDir))

        lossSummary = tf.summary.scalar("loss", lstm.loss)
        summaryOp = tf.summary.merge_all()

        trainSummaryDir = os.path.join(outDir, "train")
        trainSummaryWriter = tf.summary.FileWriter(trainSummaryDir, sess.graph)

        evalSummaryDir = os.path.join(outDir, "eval")
        evalSummaryWriter = tf.summary.FileWriter(evalSummaryDir, sess.graph)

        # Saver over all variables
        saver = tf.train.Saver(tf.global_variables(), max_to_keep=5)

        # One way of saving the model: as a pb file (SavedModel)
        savedModelPath = "../model/Bi-LSTM/savedModel"
        if os.path.exists(savedModelPath):
            os.rmdir(savedModelPath)
        builder = tf.saved_model.builder.SavedModelBuilder(savedModelPath)

        sess.run(tf.global_variables_initializer())

        def trainStep(batchX, batchY):
            """
            Run one training step
            """
            feed_dict = {
                lstm.inputX: batchX,
                lstm.inputY: batchY,
                lstm.dropoutKeepProb: config.model.dropoutKeepProb
            }
            _, summary, step, loss, predictions = sess.run(
                [trainOp, summaryOp, globalStep, lstm.loss, lstm.predictions],
                feed_dict)

            timeStr = datetime.datetime.now().isoformat()

            if config.numClasses == 1:
                acc, recall, prec, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
            elif config.numClasses > 1:
                acc, recall, prec, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY,
                                                              labels=labelList)

            trainSummaryWriter.add_summary(summary, step)

            return loss, acc, prec, recall, f_beta

        def devStep(batchX, batchY):
            """
            Run one evaluation step
            """
            feed_dict = {
                lstm.inputX: batchX,
                lstm.inputY: batchY,
                lstm.dropoutKeepProb: 1.0
            }
            summary, step, loss, predictions = sess.run(
                [summaryOp, globalStep, lstm.loss, lstm.predictions],
                feed_dict)

            if config.numClasses == 1:
                acc, recall, precision, f_beta = get_binary_metrics(pred_y=predictions, true_y=batchY)
            elif config.numClasses > 1:
                acc, recall, precision, f_beta = get_multi_metrics(pred_y=predictions, true_y=batchY,
                                                                   labels=labelList)

            evalSummaryWriter.add_summary(summary, step)

            return loss, acc, precision, recall, f_beta

        for i in range(config.training.epoches):
            # Train the model
            print("start training model")
            for batchTrain in nextBatch(trainReviews, trainLabels, config.batchSize):
                loss, acc, prec, recall, f_beta = trainStep(batchTrain[0], batchTrain[1])

                currentStep = tf.train.global_step(sess, globalStep)
                print("train: step: {}, loss: {}, acc: {}, recall: {}, precision: {}, f_beta: {}".format(
                    currentStep, loss, acc, recall, prec, f_beta))
                if currentStep % config.training.evaluateEvery == 0:
                    print("\nEvaluation:")

                    losses = []
                    accs = []
                    f_betas = []
                    precisions = []
                    recalls = []

                    for batchEval in nextBatch(evalReviews, evalLabels, config.batchSize):
                        loss, acc, precision, recall, f_beta = devStep(batchEval[0], batchEval[1])
                        losses.append(loss)
                        accs.append(acc)
                        f_betas.append(f_beta)
                        precisions.append(precision)
                        recalls.append(recall)

                    time_str = datetime.datetime.now().isoformat()
                    print("{}, step: {}, loss: {}, acc: {}, precision: {}, recall: {}, f_beta: {}".format(
                        time_str, currentStep, mean(losses), mean(accs), mean(precisions),
                        mean(recalls), mean(f_betas)))

                if currentStep % config.training.checkpointEvery == 0:
                    # The other way of saving the model: checkpoint files
                    path = saver.save(sess, "../model/Bi-LSTM/model/my-model", global_step=currentStep)
                    print("Saved model checkpoint to {}\n".format(path))

        inputs = {"inputX": tf.saved_model.utils.build_tensor_info(lstm.inputX),
                  "keepProb": tf.saved_model.utils.build_tensor_info(lstm.dropoutKeepProb)}

        outputs = {"predictions": tf.saved_model.utils.build_tensor_info(lstm.predictions)}

        prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
            inputs=inputs, outputs=outputs,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

        legacy_init_op = tf.group(tf.tables_initializer(), name="legacy_init_op")
        builder.add_meta_graph_and_variables(sess, [tf.saved_model.tag_constants.SERVING],
                                             signature_def_map={"predict": prediction_signature},
                                             legacy_init_op=legacy_init_op)

        builder.save()
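  For completeness, the SavedModel written by the builder above can later be restored for serving roughly as follows (a minimal sketch, assuming the savedModel directory produced above; the prediction code in section 10 uses the checkpoint files instead):

with tf.Session(graph=tf.Graph()) as sess:
    # Load the graph and variables saved under the SERVING tag
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], "../model/Bi-LSTM/savedModel")
    inputX = sess.graph.get_tensor_by_name("inputX:0")
    dropoutKeepProb = sess.graph.get_tensor_by_name("dropoutKeepProb:0")
    predictions = sess.graph.get_tensor_by_name("output/predictions:0")
    # xIds would be a padded index sequence built as in section 10:
    # pred = sess.run(predictions, feed_dict={inputX: [xIds], dropoutKeepProb: 1.0})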

10 Prediction Code

x = "this movie is full of references like mad max ii the wild one and many others the ladybug´s face it´s a clear reference or tribute to peter lorre this movie is a masterpiece we´ll talk much more about in the future"# 注:下面两个词典要保证和当前加载的模型对应的词典是一致的
with open("../data/wordJson/word2idx.json", "r", encoding="utf-8") as f:word2idx = json.load(f)with open("../data/wordJson/label2idx.json", "r", encoding="utf-8") as f:label2idx = json.load(f)
idx2label = {value: key for key, value in label2idx.items()}xIds = [word2idx.get(item, word2idx["UNK"]) for item in x.split(" ")]
if len(xIds) >= config.sequenceLength:xIds = xIds[:config.sequenceLength]
else:xIds = xIds + [word2idx["PAD"]] * (config.sequenceLength - len(xIds))graph = tf.Graph()
with graph.as_default():gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)session_conf = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False, gpu_options=gpu_options)sess = tf.Session(config=session_conf)with sess.as_default():checkpoint_file = tf.train.latest_checkpoint("../model/Bi-LSTM/model/")saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))saver.restore(sess, checkpoint_file)# 获得需要喂给模型的参数,输出的结果依赖的输入值inputX = graph.get_operation_by_name("inputX").outputs[0]dropoutKeepProb = graph.get_operation_by_name("dropoutKeepProb").outputs[0]# 获得输出的结果predictions = graph.get_tensor_by_name("output/predictions:0")pred = sess.run(predictions, feed_dict={inputX: [xIds], dropoutKeepProb: 1.0})[0]pred = [idx2label[item] for item in pred]
print(pred)
