Table of Contents

  • Introduction
  • Import the relevant torch packages
  • Access the raw dataset iterators
  • Build the vocabulary from the raw training dataset
  • Generate data batches and iterators
  • Define the model
  • Define functions to train the model and evaluate results
  • Instantiate and run the model
  • Evaluate the model with the test dataset
  • Test on a random news item
  • Complete code
  • References

Introduction

We use a shallow network to build a news topic classifier.
The input is the text of a news report, and the model predicts which topic it most likely belongs to. This is a classic text classification problem; we assume here that the classes are mutually exclusive, i.e. each text belongs to exactly one topic.

Import the relevant torch packages

import time
import torch
import torch.nn as nn
from torchtext.datasets import AG_NEWS
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torch.utils.data import DataLoader
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset
from TextClassificationModule import TextClassificationModule

Access the raw dataset iterators

The torchtext library provides several raw dataset iterators that yield raw text strings. For example, the AG_NEWS dataset iterator yields the raw data as tuples of label and text.

# Device detection: use the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Basic English tokenizer
tokenizer = get_tokenizer('basic_english')
# Training and test data iterators
train_iter = AG_NEWS(split="train")
test_iter = AG_NEWS(split="test")

Let's sanity-check the data we just loaded. The dataset is downloaded automatically and cached; train_iter and test_iter are the training and test splits, and both are iterators.

print('test:')
train_data = iter(train_iter)
test_data = iter(test_iter)
print(next(train_data))
print(next(test_data))

Output

test:
(3, "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street's dwindling\\band of ultra-cynics, are seeing green again.")
(3, "Fears for T N pension after talks Unions representing workers at Turner   Newall say they are 'disappointed' after talks with stricken parent firm Federal Mogul.")

Build the vocabulary from the raw training dataset

In the token generator below, "_" is an unused variable (the label) and text is the news text, e.g.:
_ = 3
text = Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street's dwindling\\band of ultra-cynics, are seeing green again.
In Python, yield turns a function into a generator. A function containing yield is no longer an ordinary function: the interpreter treats it as a generator, so calling it (e.g. yield_test(5) below) does not execute the function body but instead returns an iterable object.
Example

def yield_test(n):
    for i in range(n):
        yield call(i)
        print("i=", i)
    # do some other things
    print("do something.")
    print("end.")

def call(i):
    return i * 2

# consume the generator with a for loop
for i in yield_test(5):
    print(i, ",")

Output

0 ,
i= 0
2 ,
i= 1
4 ,
i= 2
6 ,
i= 3
8 ,
i= 4
do something.
end.

Build the vocabulary from the raw training dataset

# Token generator
def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)

# Build the vocabulary from the training data (torchtext.vocab)
vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
# Set the default index: when a token is out of the vocabulary (OOV), return this index
vocab.set_default_index(vocab["<unk>"])
# The vocabulary maps tokens to their indices
print(vocab(["here", "is", "an", "example"]))

# Build the processing pipelines used by the dataloader
# text_pipeline converts a text string into a list of integers,
# one entry per token, holding that token's index in vocab
text_pipeline = lambda x: vocab(tokenizer(x))
# label_pipeline converts a label into a zero-based integer
label_pipeline = lambda x: int(x) - 1
# pipeline example
print(text_pipeline("hello world! I'am happy"))
print(label_pipeline("10"))

Output

[475, 21, 30, 5297]
[12544, 50, 764, 282, 16, 1913, 2734]
9
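
Because set_default_index(vocab["<unk>"]) was set above, an out-of-vocabulary token maps to the <unk> index instead of raising an error. A small sketch (the token "qwertyzzz" is assumed not to occur in the AG_NEWS vocabulary):

unk_index = vocab["<unk>"]
print(unk_index)                             # index reserved for unknown tokens
print(vocab(["qwertyzzz"]))                  # expected: [unk_index], thanks to the default index
print(vocab(["qwertyzzz"])[0] == unk_index)  # expected: True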

Generate data batches and iterators

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for (_label, _text) in batch:
        label_list.append(label_pipeline(_label))
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    # offsets holds the starting position of each sample in the flat text tensor
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text_list = torch.cat(text_list)
    return label_list.to(device), text_list.to(device), offsets.to(device)

# Wrap the dataset in a DataLoader that converts each batch to tensors
dataloader = DataLoader(train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch)
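
To make the offsets bookkeeping concrete, here is a minimal sketch of what collate_batch returns for a toy two-sample batch (the labels and texts below are made up for illustration):

toy_batch = [("3", "wall st bears claw back"),
             ("4", "scientists discover new planet")]
labels, texts, offsets = collate_batch(toy_batch)
print(labels)       # the string labels "3" and "4" become 2 and 3 (zero-based)
print(texts.shape)  # one flat 1-D tensor holding both token sequences back to back
print(offsets)      # [0, n1], where n1 is the token count of the first sample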

Define the model

The model consists of an nn.EmbeddingBag layer plus a linear layer for classification. With its default mode "mean", nn.EmbeddingBag computes the mean of a "bag" of embeddings. Although the text entries here have different lengths, the nn.EmbeddingBag module needs no padding, because the text lengths are stored in the offsets.
nn.EmbeddingBag also improves performance and memory efficiency when processing a sequence of tensors.

import torch.nn as nn

class TextClassificationModule(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        """Text classification model.

        :param vocab_size: total number of distinct tokens in the corpus
        :param embed_dim: dimensionality of the word embeddings
        :param num_class: number of target classes
        """
        super(TextClassificationModule, self).__init__()
        # sparse=True: only the embedding rows used in a batch get gradient updates
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
        # Fully connected layer mapping embed_dim to num_class
        self.fc = nn.Linear(embed_dim, num_class)
        # Initialize the weights of both layers
        self.init_weights()

    def init_weights(self):
        """Initialize the weights."""
        # Range of the initial weights
        initrange = 0.5
        # Layer weights are initialized from a uniform distribution
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        # Biases are initialized to zero
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        """
        :param text: numericalized text
        :return: a tensor with one score per class, used to decide the category
        """
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)
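
As a sanity check on how nn.EmbeddingBag consumes a flat token tensor plus offsets, here is a minimal standalone sketch with made-up sizes, separate from the model above:

import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=3, mode="mean")
tokens = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])  # two samples concatenated, no padding
offsets = torch.tensor([0, 4])                   # sample 1 = tokens[0:4], sample 2 = tokens[4:]
out = bag(tokens, offsets)
print(out.shape)  # torch.Size([2, 3]): one mean-pooled embedding per sample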

Define functions to train the model and evaluate results

def train(dataloader):
    model.train()
    total_acc, total_count = 0, 0
    log_interval = 500
    start_time = time.time()
    for idx, (label, text, offsets) in enumerate(dataloader):
        optimizer.zero_grad()
        predicted_label = model(text, offsets)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
        optimizer.step()
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        total_count += label.size(0)
        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print('| epoch {:3d} | {:5d}/{:5d} batches '
                  '| accuracy {:8.3f}'.format(epoch, idx, len(dataloader),
                                              total_acc / total_count))
            total_acc, total_count = 0, 0
            start_time = time.time()

def evaluate(dataloader):
    model.eval()
    total_acc, total_count = 0, 0
    with torch.no_grad():
        for idx, (label, text, offsets) in enumerate(dataloader):
            predicted_label = model(text, offsets)
            loss = criterion(predicted_label, label)
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            total_count += label.size(0)
    return total_acc / total_count
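
The running accuracy in both functions comes from predicted_label.argmax(1); a tiny sketch with toy logits (not real model output) shows how it counts correct predictions in a batch:

import torch

logits = torch.tensor([[0.1, 2.0, 0.3, 0.1],   # predicted class 1
                       [1.5, 0.2, 0.1, 0.0],   # predicted class 0
                       [0.0, 0.1, 0.2, 3.0]])  # predicted class 3
labels = torch.tensor([1, 0, 2])
correct = (logits.argmax(1) == labels).sum().item()
print(correct, "/", labels.size(0))  # 2 / 3 correct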

Instantiate and run the model

# Wrap the dataset in a DataLoader that converts each batch to tensors
dataloader = DataLoader(train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch)
# Build a model with embedding dimension 64. The vocabulary size equals the length
# of the vocab instance; the number of classes equals the number of distinct labels.
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
emsize = 64
model = TextClassificationModule(vocab_size, emsize, num_class).to(device)

# Number of training epochs
EPOCHS = 10
# Learning rate
LR = 5
# Batch size
BATCH_SIZE = 64
# Cross-entropy loss
criterion = torch.nn.CrossEntropyLoss()
# Optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
# Learning-rate scheduler
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.1)
total_accu = None
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
# Hold out 5% of the training set as a validation set
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(train_dataset, [num_train, len(train_dataset) - num_train])
train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch)

for epoch in range(1, EPOCHS + 1):
    epoch_start_time = time.time()
    train(train_dataloader)
    accu_val = evaluate(valid_dataloader)
    # Decay the learning rate only when validation accuracy stops improving
    if total_accu is not None and total_accu > accu_val:
        scheduler.step()
    else:
        total_accu = accu_val
    print('-' * 59)
    print('| end of epoch {:3d} | time: {:5.2f}s | '
          'valid accuracy {:8.3f} '.format(epoch,
                                           time.time() - epoch_start_time,
                                           accu_val))
    print('-' * 59)

Output

| epoch   1 |   500/ 1782 batches | accuracy    0.689
| epoch   1 |  1000/ 1782 batches | accuracy    0.856
| epoch   1 |  1500/ 1782 batches | accuracy    0.873
-----------------------------------------------------------
| end of epoch   1 | time: 23.38s | valid accuracy    0.879
-----------------------------------------------------------
| epoch   2 |   500/ 1782 batches | accuracy    0.896
| epoch   2 |  1000/ 1782 batches | accuracy    0.904
| epoch   2 |  1500/ 1782 batches | accuracy    0.900
-----------------------------------------------------------
| end of epoch   2 | time: 32.21s | valid accuracy    0.891
-----------------------------------------------------------
| epoch   3 |   500/ 1782 batches | accuracy    0.915
| epoch   3 |  1000/ 1782 batches | accuracy    0.916
| epoch   3 |  1500/ 1782 batches | accuracy    0.915
-----------------------------------------------------------
| end of epoch   3 | time: 36.85s | valid accuracy    0.899
-----------------------------------------------------------
| epoch   4 |   500/ 1782 batches | accuracy    0.925
| epoch   4 |  1000/ 1782 batches | accuracy    0.925
| epoch   4 |  1500/ 1782 batches | accuracy    0.922
-----------------------------------------------------------
| end of epoch   4 | time: 20.15s | valid accuracy    0.897
-----------------------------------------------------------
| epoch   5 |   500/ 1782 batches | accuracy    0.937
| epoch   5 |  1000/ 1782 batches | accuracy    0.938
| epoch   5 |  1500/ 1782 batches | accuracy    0.936
-----------------------------------------------------------
| end of epoch   5 | time: 28.52s | valid accuracy    0.905
-----------------------------------------------------------
| epoch   6 |   500/ 1782 batches | accuracy    0.939
| epoch   6 |  1000/ 1782 batches | accuracy    0.938
| epoch   6 |  1500/ 1782 batches | accuracy    0.941
-----------------------------------------------------------
| end of epoch   6 | time: 33.47s | valid accuracy    0.905
-----------------------------------------------------------
| epoch   7 |   500/ 1782 batches | accuracy    0.940
| epoch   7 |  1000/ 1782 batches | accuracy    0.941
| epoch   7 |  1500/ 1782 batches | accuracy    0.939
-----------------------------------------------------------
| end of epoch   7 | time: 20.75s | valid accuracy    0.904
-----------------------------------------------------------
| epoch   8 |   500/ 1782 batches | accuracy    0.941
| epoch   8 |  1000/ 1782 batches | accuracy    0.941
| epoch   8 |  1500/ 1782 batches | accuracy    0.940
-----------------------------------------------------------
| end of epoch   8 | time: 27.11s | valid accuracy    0.906
-----------------------------------------------------------
| epoch   9 |   500/ 1782 batches | accuracy    0.942
| epoch   9 |  1000/ 1782 batches | accuracy    0.942
| epoch   9 |  1500/ 1782 batches | accuracy    0.942
-----------------------------------------------------------
| end of epoch   9 | time: 34.83s | valid accuracy    0.906
-----------------------------------------------------------
| epoch  10 |   500/ 1782 batches | accuracy    0.942
| epoch  10 |  1000/ 1782 batches | accuracy    0.942
| epoch  10 |  1500/ 1782 batches | accuracy    0.940
-----------------------------------------------------------
| end of epoch  10 | time: 22.78s | valid accuracy    0.906
-----------------------------------------------------------
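
Note that the training loop above calls scheduler.step() only when validation accuracy stops improving, and each call multiplies the learning rate by gamma=0.1. A minimal standalone sketch of that effect (dummy parameter, values for illustration only):

import torch

param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.SGD([param], lr=5)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1, gamma=0.1)
print(opt.param_groups[0]["lr"])  # 5.0
opt.step()
sched.step()                      # simulate one "no improvement" epoch
print(opt.param_groups[0]["lr"])  # 0.5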

Evaluate the model with the test dataset

print('Checking the results of test dataset.')
accu_test = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(accu_test))

Output

Checking the results of test dataset.
test accuracy    0.906

Test on a random news item

# Test on a random news item
# Use the best model so far and test it on a golf news story.
ag_news_label = {1: "World",
                 2: "Sports",
                 3: "Business",
                 4: "Sci/Tec"}

def predict(text, text_pipeline):
    with torch.no_grad():
        text = torch.tensor(text_pipeline(text))
        # A single sample, so the only offset is 0
        output = model(text, torch.tensor([0]))
        return output.argmax(1).item() + 1

ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."

model = model.to("cpu")
print("This is a %s news" % ag_news_label[predict(ex_text_str, text_pipeline)])

Output

This is a Sports news

Complete code

import time
import torch
import torch.nn as nn
from torchtext.datasets import AG_NEWS
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
from torch.utils.data import DataLoader
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset

# Device detection: use the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Basic English tokenizer
tokenizer = get_tokenizer('basic_english')
# Training and test data iterators
train_iter = AG_NEWS(split="train")
test_iter = AG_NEWS(split="test")

# print('test:')
# train_data = iter(train_iter)
# test_data = iter(test_iter)
# print(next(train_data))
# print(next(test_data))

# Token generator
def yield_tokens(data_iter):
    for _, text in data_iter:
        yield tokenizer(text)

# Build the vocabulary from the training data
vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
# Set the default index: when a token is out of the vocabulary (OOV), return this index
vocab.set_default_index(vocab["<unk>"])
# The vocabulary maps tokens to their indices
# print(vocab(["here", "is", "an", "example"]))

# Build the processing pipelines used by the dataloader
# text_pipeline converts a text string into a list of vocab indices
text_pipeline = lambda x: vocab(tokenizer(x))
# label_pipeline converts a label into a zero-based integer
label_pipeline = lambda x: int(x) - 1
# pipeline example
# print(text_pipeline("hello world! I'am happy"))
# print(label_pipeline("10"))

# Model
class TextClassificationModule(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        """Text classification model.

        :param vocab_size: total number of distinct tokens in the corpus
        :param embed_dim: dimensionality of the word embeddings
        :param num_class: number of target classes
        """
        super(TextClassificationModule, self).__init__()
        # sparse=True: only the embedding rows used in a batch get gradient updates
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
        # Fully connected layer mapping embed_dim to num_class
        self.fc = nn.Linear(embed_dim, num_class)
        # Initialize the weights of both layers
        self.init_weights()

    def init_weights(self):
        """Initialize the weights."""
        # Range of the initial weights
        initrange = 0.5
        # Layer weights are initialized from a uniform distribution
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        # Biases are initialized to zero
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        """
        :param text: numericalized text
        :return: a tensor with one score per class, used to decide the category
        """
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for (_label, _text) in batch:
        label_list.append(label_pipeline(_label))
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
    text_list = torch.cat(text_list)
    return label_list.to(device), text_list.to(device), offsets.to(device)

def train(dataloader):
    model.train()
    total_acc, total_count = 0, 0
    log_interval = 500
    start_time = time.time()
    for idx, (label, text, offsets) in enumerate(dataloader):
        optimizer.zero_grad()
        predicted_label = model(text, offsets)
        loss = criterion(predicted_label, label)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
        optimizer.step()
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        total_count += label.size(0)
        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print('| epoch {:3d} | {:5d}/{:5d} batches '
                  '| accuracy {:8.3f}'.format(epoch, idx, len(dataloader),
                                              total_acc / total_count))
            total_acc, total_count = 0, 0
            start_time = time.time()

def evaluate(dataloader):
    model.eval()
    total_acc, total_count = 0, 0
    with torch.no_grad():
        for idx, (label, text, offsets) in enumerate(dataloader):
            predicted_label = model(text, offsets)
            loss = criterion(predicted_label, label)
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            total_count += label.size(0)
    return total_acc / total_count

# Wrap the dataset in a DataLoader that converts each batch to tensors
dataloader = DataLoader(train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch)
# Build a model with embedding dimension 64. The vocabulary size equals the length
# of the vocab instance; the number of classes equals the number of distinct labels.
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
emsize = 64
model = TextClassificationModule(vocab_size, emsize, num_class).to(device)

# Number of training epochs
EPOCHS = 10
# Learning rate
LR = 5
# Batch size
BATCH_SIZE = 64
# Cross-entropy loss
criterion = torch.nn.CrossEntropyLoss()
# Optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
# Learning-rate scheduler
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.1)
total_accu = None
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
# Hold out 5% of the training set as a validation set
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(train_dataset, [num_train, len(train_dataset) - num_train])
train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True, collate_fn=collate_batch)

for epoch in range(1, EPOCHS + 1):
    epoch_start_time = time.time()
    train(train_dataloader)
    accu_val = evaluate(valid_dataloader)
    # Decay the learning rate only when validation accuracy stops improving
    if total_accu is not None and total_accu > accu_val:
        scheduler.step()
    else:
        total_accu = accu_val
    print('-' * 59)
    print('| end of epoch {:3d} | time: {:5.2f}s | '
          'valid accuracy {:8.3f} '.format(epoch,
                                           time.time() - epoch_start_time,
                                           accu_val))
    print('-' * 59)

'''Evaluate the model with the test dataset'''
print('Checking the results of test dataset.')
accu_test = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(accu_test))

# Test on a random news item
# Use the best model so far and test it on a golf news story.
ag_news_label = {1: "World",
                 2: "Sports",
                 3: "Business",
                 4: "Sci/Tec"}

def predict(text, text_pipeline):
    with torch.no_grad():
        text = torch.tensor(text_pipeline(text))
        output = model(text, torch.tensor([0]))
        return output.argmax(1).item() + 1

ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."

model = model.to("cpu")
print("This is a %s news" % ag_news_label[predict(ex_text_str, text_pipeline)])

Output

| epoch   1 |   500/ 1782 batches | accuracy    0.689
| epoch   1 |  1000/ 1782 batches | accuracy    0.856
| epoch   1 |  1500/ 1782 batches | accuracy    0.873
-----------------------------------------------------------
| end of epoch   1 | time: 23.38s | valid accuracy    0.879
-----------------------------------------------------------
| epoch   2 |   500/ 1782 batches | accuracy    0.896
| epoch   2 |  1000/ 1782 batches | accuracy    0.904
| epoch   2 |  1500/ 1782 batches | accuracy    0.900
-----------------------------------------------------------
| end of epoch   2 | time: 32.21s | valid accuracy    0.891
-----------------------------------------------------------
| epoch   3 |   500/ 1782 batches | accuracy    0.915
| epoch   3 |  1000/ 1782 batches | accuracy    0.916
| epoch   3 |  1500/ 1782 batches | accuracy    0.915
-----------------------------------------------------------
| end of epoch   3 | time: 36.85s | valid accuracy    0.899
-----------------------------------------------------------
| epoch   4 |   500/ 1782 batches | accuracy    0.925
| epoch   4 |  1000/ 1782 batches | accuracy    0.925
| epoch   4 |  1500/ 1782 batches | accuracy    0.922
-----------------------------------------------------------
| end of epoch   4 | time: 20.15s | valid accuracy    0.897
-----------------------------------------------------------
| epoch   5 |   500/ 1782 batches | accuracy    0.937
| epoch   5 |  1000/ 1782 batches | accuracy    0.938
| epoch   5 |  1500/ 1782 batches | accuracy    0.936
-----------------------------------------------------------
| end of epoch   5 | time: 28.52s | valid accuracy    0.905
-----------------------------------------------------------
| epoch   6 |   500/ 1782 batches | accuracy    0.939
| epoch   6 |  1000/ 1782 batches | accuracy    0.938
| epoch   6 |  1500/ 1782 batches | accuracy    0.941
-----------------------------------------------------------
| end of epoch   6 | time: 33.47s | valid accuracy    0.905
-----------------------------------------------------------
| epoch   7 |   500/ 1782 batches | accuracy    0.940
| epoch   7 |  1000/ 1782 batches | accuracy    0.941
| epoch   7 |  1500/ 1782 batches | accuracy    0.939
-----------------------------------------------------------
| end of epoch   7 | time: 20.75s | valid accuracy    0.904
-----------------------------------------------------------
| epoch   8 |   500/ 1782 batches | accuracy    0.941
| epoch   8 |  1000/ 1782 batches | accuracy    0.941
| epoch   8 |  1500/ 1782 batches | accuracy    0.940
-----------------------------------------------------------
| end of epoch   8 | time: 27.11s | valid accuracy    0.906
-----------------------------------------------------------
| epoch   9 |   500/ 1782 batches | accuracy    0.942
| epoch   9 |  1000/ 1782 batches | accuracy    0.942
| epoch   9 |  1500/ 1782 batches | accuracy    0.942
-----------------------------------------------------------
| end of epoch   9 | time: 34.83s | valid accuracy    0.906
-----------------------------------------------------------
| epoch  10 |   500/ 1782 batches | accuracy    0.942
| epoch  10 |  1000/ 1782 batches | accuracy    0.942
| epoch  10 |  1500/ 1782 batches | accuracy    0.940
-----------------------------------------------------------
| end of epoch  10 | time: 22.78s | valid accuracy    0.906
-----------------------------------------------------------
Checking the results of test dataset.
test accuracy    0.906
This is a Sports news

Process finished with exit code 0

References

https://pytorch.org/tutorials/beginner/text_sentiment_ngrams_tutorial.html
