用cpu就能运行的用CPU跑就好,需要用预训练模型,比如GPU,就要启动GPU

PaddleNLP简介
PaddleNLP基于飞桨深度学习框架Paddle 2.0开发,拥有覆盖多场景的模型库、简洁易用的全流程API与动静统一的高性能分布式训练能力,旨在帮助开发者提升文本处理、建模效率,提供从模型搭建到训练部署的优质体验,提供基于PaddlePaddle 2.0的NLP领域最佳实践。GitHub链接:https://github.com/PaddlePaddle/PaddleNLP丰富的模型库涵盖了NLP主流应用相关的前沿模型,包括中文词向量、预训练模型、词法分析、文本分类、文本匹配、文本生成、机器翻译、通用对话、问答系统等。简洁易用的全流程API深度兼容飞桨2.0的高层API体系,提供更多可复用的文本建模模块,可大幅度减少数据处理、组网、训练环节的代码开发,提高开发效率。高性能分布式训练通过高度优化的Transformer网络实现,结合混合精度与Fleet分布式训练API,可充分利用GPU集群资源,高效完成预训练模型的分布式训练。图1:PaddleNLP API概览PaddleNLP提供了基本组网单元和预训练模型两类组网API。图2:一行代码快速实现组网这里,我们以快递单信息抽取为例,介绍如何使用这两类API完成网络搭建。包括:方案1: PaddleNLP中的基本组网单元BiGRU、CRF、ViterbiDecoder。
方案2: 通过paddlenlp.embedding的功能,热启动加载中文词向量,提升效果
方案3:使用预训练模型ernie组网
项目介绍 - 如何从快递单中抽取关键信息
本项目将演示如何从用户提供的快递单中,抽取姓名、电话、省、市、区、详细地址等内容,形成结构化信息。辅助物流行业从业者进行有效信息的提取,从而降低客户填单的成本。快递单信息抽取任务介绍
如何从物流信息中抽取想要的关键信息呢?我们首先要定义好需要抽取哪些字段。比如现在拿到一个快递单,可以作为我们的模型输入,例如“张三18625584663广东省深圳市南山区学府路东百度国际大厦”,那么序列标注模型的目的就是识别出其中的“张三”为人名(用符号 P 表示),“18625584663”为电话名(用符号 T 表示),“广东省深圳市南山区百度国际大厦”分别是 1-4 级的地址(分别用 A1~A4 表示,可以释义为省、市、区、街道)。这是一个典型的命名实体识别(Named Entity Recognition,NER)场景,各实体类型及相应符号表示见下表:抽取实体/字段 符号  抽取结果
姓名  P   张三
电话  T   18625584663
省   A1  广东省
市   A2  深圳市
区   A3  南山区
详细地址    A4  百度国际大厦
序列标注模型介绍
我们可以用序列标注模型来解决快递单的信息抽取任务,下面具体介绍一下序列标注模型。在序列标注任务中,一般会定义一个标签集合,来表示所以可能取到的预测结果。在本案例中,针对需要被抽取的“姓名、电话、省、市、区、详细地址”等实体,标签集合可以定义为:label = {P-B, P-I, T-B, T-I, A1-B, A1-I, A2-B, A2-I, A3-B, A3-I, A4-B, A4-I, O}每个标签的定义分别为:标签   定义
P-B 姓名起始位置
P-I 姓名中间位置或结束位置
T-B 电话起始位置
T-I 电话中间位置或结束位置
A1-B    省份起始位置
A1-I    省份中间位置或结束位置
A2-B    城市起始位置
A2-I    城市中间位置或结束位置
A3-B    县区起始位置
A3-I    县区中间位置或结束位置
A4-B    详细地址起始位置
A4-I    详细地址中间位置或结束位置
O   无关字符
注意每个标签的结果只有 B、I、O 三种,这种标签的定义方式叫做 BIO 体系,也有稍麻烦一点的 BIESO 体系,这里不做展开。其中 B 表示一个标签类别的开头,比如 P-B 指的是姓名的开头;相应的,I 表示一个标签的延续。对于句子“张三18625584663广东省深圳市南山区百度国际大厦”,每个汉字及对应标签为:图3:数据集标注示例注意到“张“,”三”在这里表示成了“P-B” 和 “P-I”,“P-B”和“P-I”合并成“P” 这个标签。这样重新组合后可以得到以下信息抽取结果:张三  18625584663   广东省 深圳市 南山区 百度国际大厦
P   T   A1  A2  A3  A4
门控循环单元(GRU,Gate Recurrent Unit)
BIGRU是一种经典的循环神经网络(RNN,Recurrent Neural Network),用于对句子等序列信息进行建模。这里我们重点解释下其概念和相关原理。一个 RNN 的示意图如下所示,图4:RNN示意图左边是原始的 RNN,可以看到绿色的点代码输入 x,红色的点代表输出 y,中间的蓝色是 RNN 模型部分。橙色的箭头由自身指向自身,表示 RNN 的输入来自于上时刻的输出,这也是为什么名字中带有循环(Recurrent)这个词。右边是按照时间序列展开的示意图,注意到蓝色的 RNN 模块是同一个,只不过在不同的时刻复用了。这时候能够清晰地表示序列标注模型的输入输出。GRU为了解决长期记忆和反向传播中梯度问题而提出来的,和LSTM一样能够有效对长序列建模,且GRU训练效率更高。条件随机场(CRF,Conditional Random Fields)
长句子的问题解决了,序列标注任务的另外一个问题也亟待解决,即标签之间的依赖性。举个例子,我们预测的标签一般不会出现 P-B,T-I 并列的情况,因为这样的标签不合理,也无法解析。无论是 RNN 还是 LSTM 都只能尽量不出现,却无法从原理上避免这个问题。下面要提到的条件随机场(CRF,Conditional Random Field)却很好的解决了这个问题。条件随机场这个模型属于概率图模型中的无向图模型,这里我们不做展开,只直观解释下该模型背后考量的思想。一个经典的链式 CRF 如下图所示,图5:CRF示意图CRF 本质是一个无向图,其中绿色点表示输入,红色点表示输出。点与点之间的边可以分成两类,一类是 xxx 与 yyy 之间的连线,表示其相关性;另一类是相邻时刻的 yyy 之间的相关性。也就是说,在预测某时刻 yyy 时,同时要考虑相邻的标签解决。当 CRF 模型收敛时,就会学到类似 P-B 和 T-I 作为相邻标签的概率非常低。预训练模型
除了GRU+CRF方案外,我们也可以使用预训练模型,将序列信息抽取问题,建模成字符级分类问题。这里我们采用强大的语义模型ERNIE,完成字符级分类任务。图6:ERNIE模型示意图代码实践
本项目基于PaddleNLP NER example的代码进行修改,分别基于基本组网单元GRU、CRF、TokenEmbedding和预训练模型,实现了多个方案。方案1: PaddleNLP中的基本组网单元BiGRU、CRF、ViterbiDecoder。
方案2: 通过paddlenlp.embedding的功能,热启动加载中文词向量,提升效果
方案3:使用预训练模型ernie组网
AI Studio平台后续会默认安装PaddleNLP,在此之前可使用如下命令安装。In [1]
!pip install --upgrade paddlenlp\>=2.0.0rc0 -i https://pypi.org/simple
Requirement already satisfied: paddlenlp>=2.0.0rc0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (2.0.0rc18)
Requirement already satisfied: colorama in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp>=2.0.0rc0) (0.4.4)
Requirement already satisfied: visualdl in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp>=2.0.0rc0) (2.1.1)
Requirement already satisfied: seqeval in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp>=2.0.0rc0) (1.2.2)
Requirement already satisfied: h5py in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp>=2.0.0rc0) (2.9.0)
Requirement already satisfied: jieba in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp>=2.0.0rc0) (0.42.1)
Requirement already satisfied: colorlog in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from paddlenlp>=2.0.0rc0) (4.1.0)
Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp>=2.0.0rc0) (1.20.2)
Requirement already satisfied: six in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from h5py->paddlenlp>=2.0.0rc0) (1.15.0)
Requirement already satisfied: scikit-learn>=0.21.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from seqeval->paddlenlp>=2.0.0rc0) (0.24.1)
Requirement already satisfied: scipy>=0.19.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp>=2.0.0rc0) (1.6.2)
Requirement already satisfied: joblib>=0.11 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp>=2.0.0rc0) (0.14.1)
Requirement already satisfied: threadpoolctl>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from scikit-learn>=0.21.3->seqeval->paddlenlp>=2.0.0rc0) (2.1.0)
Requirement already satisfied: protobuf>=3.11.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (3.14.0)
Requirement already satisfied: shellcheck-py in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (0.7.1.1)
Requirement already satisfied: Flask-Babel>=1.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (1.0.0)
Requirement already satisfied: Pillow>=7.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (7.1.2)
Requirement already satisfied: flake8>=3.7.9 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (3.8.2)
Requirement already satisfied: bce-python-sdk in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (0.8.53)
Requirement already satisfied: pre-commit in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (1.21.0)
Requirement already satisfied: requests in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (2.22.0)
Requirement already satisfied: flask>=1.1.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from visualdl->paddlenlp>=2.0.0rc0) (1.1.1)
Requirement already satisfied: pycodestyle<2.7.0,>=2.6.0a1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp>=2.0.0rc0) (2.6.0)
Requirement already satisfied: pyflakes<2.3.0,>=2.2.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp>=2.0.0rc0) (2.2.0)
Requirement already satisfied: mccabe<0.7.0,>=0.6.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp>=2.0.0rc0) (0.6.1)
Requirement already satisfied: importlib-metadata in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flake8>=3.7.9->visualdl->paddlenlp>=2.0.0rc0) (0.23)
Requirement already satisfied: click>=5.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp>=2.0.0rc0) (7.0)
Requirement already satisfied: Jinja2>=2.10.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp>=2.0.0rc0) (2.10.1)
Requirement already satisfied: itsdangerous>=0.24 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp>=2.0.0rc0) (1.1.0)
Requirement already satisfied: Werkzeug>=0.15 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from flask>=1.1.1->visualdl->paddlenlp>=2.0.0rc0) (0.16.0)
Requirement already satisfied: pytz in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel>=1.0.0->visualdl->paddlenlp>=2.0.0rc0) (2019.3)
Requirement already satisfied: Babel>=2.3 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Flask-Babel>=1.0.0->visualdl->paddlenlp>=2.0.0rc0) (2.8.0)
Requirement already satisfied: MarkupSafe>=0.23 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from Jinja2>=2.10.1->flask>=1.1.1->visualdl->paddlenlp>=2.0.0rc0) (1.1.1)
Requirement already satisfied: future>=0.6.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from bce-python-sdk->visualdl->paddlenlp>=2.0.0rc0) (0.18.0)
Requirement already satisfied: pycryptodome>=3.8.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from bce-python-sdk->visualdl->paddlenlp>=2.0.0rc0) (3.9.9)
Requirement already satisfied: zipp>=0.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from importlib-metadata->flake8>=3.7.9->visualdl->paddlenlp>=2.0.0rc0) (0.6.0)
Requirement already satisfied: more-itertools in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from zipp>=0.5->importlib-metadata->flake8>=3.7.9->visualdl->paddlenlp>=2.0.0rc0) (7.2.0)
Requirement already satisfied: aspy.yaml in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp>=2.0.0rc0) (1.3.0)
Requirement already satisfied: toml in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp>=2.0.0rc0) (0.10.0)
Requirement already satisfied: virtualenv>=15.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp>=2.0.0rc0) (16.7.9)
Requirement already satisfied: nodeenv>=0.11.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp>=2.0.0rc0) (1.3.4)
Requirement already satisfied: identify>=1.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp>=2.0.0rc0) (1.4.10)
Requirement already satisfied: pyyaml in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp>=2.0.0rc0) (5.1.2)
Requirement already satisfied: cfgv>=2.0.0 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from pre-commit->visualdl->paddlenlp>=2.0.0rc0) (2.0.1)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp>=2.0.0rc0) (1.25.6)
Requirement already satisfied: idna<2.9,>=2.5 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp>=2.0.0rc0) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp>=2.0.0rc0) (2019.9.11)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (from requests->visualdl->paddlenlp>=2.0.0rc0) (3.0.4)
WARNING: You are using pip version 21.0.1; however, version 21.1 is available.
You should consider upgrading via the '/opt/conda/envs/python35-paddle120-env/bin/python -m pip install --upgrade pip' command.
PART A. 使用基本组网单元 - BiGRU、CRF、ViterbiDecoder搭建网络
In [2]
import paddle
import paddle.nn as nnimport paddlenlp
from paddlenlp.datasets import MapDataset
from paddlenlp.data import Stack, Tuple, Pad
from paddlenlp.layers import LinearChainCrf, ViterbiDecoder, LinearChainCrfLoss
from paddlenlp.metrics import ChunkEvaluator
from utils import load_dict, evaluate, predict, parse_decodes1, parse_decodes2
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:26: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecationsdef convert_to_list(value, n, name, dtype=np.int):
A.1 数据准备
依赖以下词典数据,词典数据存放在conf目录中。输入文本词典word.dic
对输入文本中特殊字符进行转换的词典q2b.dic
标记标签的词典tag.dic
这里我们提供一份已标注的快递单关键信息数据集。训练使用的数据也可以由大家自己组织数据。数据格式除了第一行是 text_a\tlabel 固定的开头,后面的每行数据都是由两列组成,以制表符分隔,第一列是 utf-8 编码的中文文本,以 \002 分割,第二列是对应每个字的标注,以 \002 分割。在训练和预测阶段,我们都需要进行原始数据的预处理,具体处理工作包括:从原始数据文件中抽取出句子和标签,构造句子序列和标签序列
将句子序列中的特殊字符进行转换
依据词典获取词对应的id索引
自定义数据集
In [3]
# 解压数据集
!unzip -o /home/aistudio/work/data/express_ner.zip -d /home/aistudio/  -x __MACOSX/*
Archive:  /home/aistudio/work/data/express_ner.zipinflating: /home/aistudio/express_ner/dev.txt  inflating: /home/aistudio/express_ner/train.txt  inflating: /home/aistudio/express_ner/test.txt
看一下数据训练集中除第一行是 text_a\tlabel,后面的每行数据都是由两列组成,以制表符分隔,第一列是 utf-8 编码的中文文本,以 \002 分割,第二列是对应序列标注的结果,以 \002 分割。In [4]
!head -2  express_ner/train.txt!cat ./conf/tag.dic
text_a  label
16620200077宣荣嗣甘肃省白银市会宁县河畔镇十字街金海超市西行50米  T-BT-IT-IT-IT-IT-IT-IT-IT-IT-IT-IP-BP-IP-IA1-BA1-IA1-IA2-BA2-IA2-IA3-BA3-IA3-IA4-BA4-IA4-IA4-IA4-IA4-IA4-IA4-IA4-IA4-IA4-IA4-IA4-IA4-IA4-I
P-B
P-I
T-B
T-I
A1-B
A1-I
A2-B
A2-I
A3-B
A3-I
A4-B
A4-I
O
推荐使用MapDataset()自定义数据集。如需自定义数据集,我们推荐基于MapDataset或IterDataset自定义。我们推荐使用yield将数据读取代码写成生成器(generator)的形式,这样可以便捷得构建 MapDataset 和 IterDataset 两种数据集。MapDataset和IterDataset均提供了map()函数,便于进行任意形式的数据处理。更详细的方法可参考自定义数据集说明文档和 数据处理说明文档。In [5]
def load_dataset(datafiles):def read(data_path):with open(data_path, 'r', encoding='utf-8') as fp:next(fp)for line in fp.readlines():words, labels = line.strip('\n').split('\t')words = words.split('\002')labels = labels.split('\002')yield words, labelsif isinstance(datafiles, str):return MapDataset(list(read(datafiles)))elif isinstance(datafiles, list) or isinstance(datafiles, tuple):return [MapDataset(list(read(datafile))) for datafile in datafiles]train_ds, dev_ds, test_ds = load_dataset(datafiles=('express_ner/train.txt', 'express_ner/dev.txt', 'express_ner/test.txt'))
In [6]
for i in range(2):print(train_ds[i])
(['1', '6', '6', '2', '0', '2', '0', '0', '0', '7', '7', '宣', '荣', '嗣', '甘', '肃', '省', '白', '银', '市', '会', '宁', '县', '河', '畔', '镇', '十', '字', '街', '金', '海', '超', '市', '西', '行', '5', '0', '米'], ['T-B', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'P-B', 'P-I', 'P-I', 'A1-B', 'A1-I', 'A1-I', 'A2-B', 'A2-I', 'A2-I', 'A3-B', 'A3-I', 'A3-I', 'A4-B', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I'])
(['1', '3', '5', '5', '2', '6', '6', '4', '3', '0', '7', '姜', '骏', '炜', '云', '南', '省', '德', '宏', '傣', '族', '景', '颇', '族', '自', '治', '州', '盈', '江', '县', '平', '原', '镇', '蜜', '回', '路', '下', '段'], ['T-B', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'T-I', 'P-B', 'P-I', 'P-I', 'A1-B', 'A1-I', 'A1-I', 'A2-B', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A2-I', 'A3-B', 'A3-I', 'A3-I', 'A4-B', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I', 'A4-I'])
构造dataloder
In [7]
label_vocab = load_dict('./conf/tag.dic')
word_vocab = load_dict('./conf/word.dic')# 将token转换为id
def convert_tokens_to_ids(tokens, vocab, oov_token=None):token_ids = []oov_id = vocab.get(oov_token) if oov_token else Nonefor token in tokens:token_id = vocab.get(token, oov_id)token_ids.append(token_id)return token_ids# 将文本和label转换为id
def convert_example(example):tokens, labels = exampletoken_ids = convert_tokens_to_ids(tokens, word_vocab, 'OOV')label_ids = convert_tokens_to_ids(labels, label_vocab, 'O')return token_ids, len(token_ids), label_ids# 调用内置的map()方法,进行数据处理:转换id
train_ds.map(convert_example)
dev_ds.map(convert_example)
test_ds.map(convert_example)
<paddlenlp.datasets.dataset.MapDataset at 0x7f95a8d58f50>
调用map()函数,将文本和标签转换为id后,我们需要组batch,通过paddle.io.DataLoader封装。通过paddle.io.DataLoader的collate_fn 参数,指定如何将样本列表组合为mini-batch数据。构建collate_fn时,我们借助PaddleNLP内置的API:paddlenlp.data.Stack,用于堆叠N个具有相同shape的输入数据来构建一个batch。paddlenlp.data.Pad,用于堆叠N个输入数据来构建一个batch,每个输入数据将会被padding到N个输入数据中最大的长度。paddlenlp.data.Tuple,用于将多个batchify函数包装在一起,返回tuple类型。In [8]
batchify_fn = lambda samples, fn=Tuple(Pad(axis=0, pad_val=word_vocab.get('OOV')),  # token_idsStack(),  # seq_lenPad(axis=0, pad_val=label_vocab.get('O'))  # label_ids): fn(samples)train_loader = paddle.io.DataLoader(dataset=train_ds,batch_size=32,shuffle=True,drop_last=True,return_list=True,collate_fn=batchify_fn)dev_loader = paddle.io.DataLoader(dataset=dev_ds,batch_size=32,drop_last=True,return_list=True,collate_fn=batchify_fn)test_loader = paddle.io.DataLoader(dataset=test_ds,batch_size=32,drop_last=True,return_list=True,collate_fn=batchify_fn)
您可通过以下小示例清晰理解Stack, Pad, Tuple的作用。In [10]
a = [1, 2, 3, 4]
b = [3, 4, 5, 6]
c = [5, 6, 7, 8]
result = Stack()([a, b, c])
print("Stacked Data: \n", result)
print()a = [1, 2, 3, 4]
b = [5, 6, 7]
c = [8, 9]
result = Pad(pad_val=0)([a, b, c])
print("Padded Data: \n", result)
print()data = [[[1, 2, 3, 4], [1]],[[5, 6, 7], [0]],[[8, 9], [1]],]
batchify_fn = Tuple(Pad(pad_val=0), Stack())
ids, labels = batchify_fn(data)
print("ids: \n", ids)
print()
print("labels: \n", labels)
print()
Stacked Data: [[1 2 3 4][3 4 5 6][5 6 7 8]]Padded Data: [[1 2 3 4][5 6 7 0][8 9 0 0]]ids: [[1 2 3 4][5 6 7 0][8 9 0 0]]labels: [[1][0][1]]A.2 网络构建图3:训练流程图序列标注任务常用的模型是RNN+CRF。GRU和LSTM都是常用的RNN单元。这里我们以Bi-GRU+CRF模型为例,介绍如何使用 PaddleNLP 定义序列化标注任务的网络结构。如下图所示,GRU的输出可以作为 CRF 的输入,最后 CRF 的输出作为模型整体的预测结果。图4:Bi-GRU+CRFIn [11]
class BiGRUWithCRF(nn.Layer):def __init__(self,emb_size,hidden_size,word_num,label_num,use_w2v_emb=False):super(BiGRUWithCRF, self).__init__()if use_w2v_emb:self.word_emb = TokenEmbedding(extended_vocab_path='./conf/word.dic', unknown_token='OOV')else:self.word_emb = nn.Embedding(word_num, emb_size)self.gru = nn.GRU(emb_size,hidden_size,num_layers=2,direction='bidirectional')self.fc = nn.Linear(hidden_size * 2, label_num + 2)  # BOS EOSself.crf = LinearChainCrf(label_num)self.decoder = ViterbiDecoder(self.crf.transitions)def forward(self, x, lens):embs = self.word_emb(x)output, _ = self.gru(embs)output = self.fc(output)_, pred = self.decoder(output, lens)return output, lens, pred# Define the model netword and its loss
network = BiGRUWithCRF(300, 300, len(word_vocab), len(label_vocab))
model = paddle.Model(network)
这里的use_w2v_emb参数,决定是否使用预训练的词向量对embedding层进行初始化。A.3 网络配置
定义网络结构后,需要配置优化器、损失函数、评价指标。评价指标
针对每条序列样本的预测结果,序列标注任务将预测结果按照语块(chunk)进行结合并进行评价。评价指标通常有 Precision、Recall 和 F1。Precision,精确率,也叫查准率,由模型预测正确的个数除以模型总的预测的个数得到,关注模型预测出来的结果准不准
Recall,召回率,又叫查全率, 由模型预测正确的个数除以真实标签的个数得到,关注模型漏了哪些东西
F1,综合评价指标,计算公式如下,F1=2∗Precision∗RecallPrecision+RecallF1 = \frac{2*Precision*Recall}{Precision+Recall}F1=
Precision+Recall
2∗Precision∗Recall
​   ,同时考虑 Precision 和 Recall ,是 Precision 和 Recall 的折中。
paddlenlp.metrics中集成了ChunkEvaluator评价指标,并逐步丰富中,In [12]
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
crf_loss = LinearChainCrfLoss(network.crf)
chunk_evaluator = ChunkEvaluator(label_list=label_vocab.keys(), suffix=True)
model.prepare(optimizer, crf_loss, chunk_evaluator)
A.4 模型训练
In [13]
model.fit(train_data=train_loader,eval_data=dev_loader,epochs=10,save_dir='./results',log_freq=1)
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/10
step  1/50 - loss: 116.7822 - precision: 0.0000e+00 - recall: 0.0000e+00 - f1: 0.0000e+00 - 235ms/step
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop workingreturn (isinstance(seq, collections.Sequence) and
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/tensor/creation.py:143: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecationsif data.dtype == np.object:
[2021-04-28 21:11:46,296] [ WARNING]- Compatibility Warning: The params of LinearChainCrfLoss.forward has been modified. The third param is `labels`, and the fourth is not necessary. Please update the usage./opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.FP32, but right dtype is VarType.INT64, the right dtype will convert to VarType.FP32format(lhs_dtype, rhs_dtype, lhs_dtype))
[2021-04-28 21:11:46,335] [ WARNING]- Compatibility Warning: The params of ChunkEvaluator.compute has been modified. The old version is `inputs`, `lengths`, `predictions`, `labels` while the current version is `lengths`, `predictions`, `labels`.  Please update the usage.step  2/50 - loss: 89.5565 - precision: 0.0000e+00 - recall: 0.0000e+00 - f1: 0.0000e+00 - 144ms/step
step  3/50 - loss: 85.9611 - precision: 0.0000e+00 - recall: 0.0000e+00 - f1: 0.0000e+00 - 113ms/step
step  4/50 - loss: 74.8487 - precision: 0.0158 - recall: 0.0209 - f1: 0.0180 - 99ms/step
step  5/50 - loss: 79.8392 - precision: 0.0153 - recall: 0.0167 - f1: 0.0160 - 89ms/step
step  6/50 - loss: 73.1430 - precision: 0.0162 - recall: 0.0157 - f1: 0.0159 - 82ms/step
step  7/50 - loss: 58.9575 - precision: 0.0152 - recall: 0.0134 - f1: 0.0143 - 78ms/step
step  8/50 - loss: 57.4230 - precision: 0.0198 - recall: 0.0164 - f1: 0.0179 - 74ms/step
step  9/50 - loss: 67.9495 - precision: 0.0193 - recall: 0.0151 - f1: 0.0169 - 72ms/step
step 10/50 - loss: 62.0777 - precision: 0.0195 - recall: 0.0147 - f1: 0.0167 - 69ms/step
step 11/50 - loss: 54.1168 - precision: 0.0210 - recall: 0.0152 - f1: 0.0176 - 68ms/step
step 12/50 - loss: 52.7695 - precision: 0.0250 - recall: 0.0174 - f1: 0.0205 - 67ms/step
step 13/50 - loss: 50.4947 - precision: 0.0299 - recall: 0.0201 - f1: 0.0241 - 65ms/step
step 14/50 - loss: 60.0225 - precision: 0.0323 - recall: 0.0209 - f1: 0.0254 - 65ms/step
step 15/50 - loss: 44.1765 - precision: 0.0361 - recall: 0.0227 - f1: 0.0279 - 64ms/step
step 16/50 - loss: 57.4164 - precision: 0.0384 - recall: 0.0236 - f1: 0.0292 - 63ms/step
step 17/50 - loss: 41.4529 - precision: 0.0391 - recall: 0.0238 - f1: 0.0296 - 63ms/step
step 18/50 - loss: 45.1909 - precision: 0.0391 - recall: 0.0239 - f1: 0.0297 - 62ms/step
step 19/50 - loss: 39.7813 - precision: 0.0389 - recall: 0.0240 - f1: 0.0297 - 61ms/step
step 20/50 - loss: 31.6452 - precision: 0.0462 - recall: 0.0291 - f1: 0.0357 - 61ms/step
step 21/50 - loss: 36.1601 - precision: 0.0532 - recall: 0.0342 - f1: 0.0416 - 61ms/step
step 22/50 - loss: 42.2555 - precision: 0.0600 - recall: 0.0391 - f1: 0.0473 - 60ms/step
step 23/50 - loss: 30.8583 - precision: 0.0656 - recall: 0.0431 - f1: 0.0520 - 60ms/step
step 24/50 - loss: 26.7765 - precision: 0.0714 - recall: 0.0472 - f1: 0.0568 - 59ms/step
step 25/50 - loss: 32.4414 - precision: 0.0753 - recall: 0.0501 - f1: 0.0602 - 59ms/step
step 26/50 - loss: 28.9526 - precision: 0.0826 - recall: 0.0552 - f1: 0.0662 - 59ms/step
step 27/50 - loss: 34.3277 - precision: 0.0840 - recall: 0.0569 - f1: 0.0678 - 58ms/step
step 28/50 - loss: 37.2988 - precision: 0.0881 - recall: 0.0608 - f1: 0.0719 - 58ms/step
step 29/50 - loss: 18.5073 - precision: 0.0929 - recall: 0.0659 - f1: 0.0772 - 58ms/step
step 30/50 - loss: 33.8244 - precision: 0.1003 - recall: 0.0723 - f1: 0.0840 - 58ms/step
step 31/50 - loss: 18.0585 - precision: 0.1061 - recall: 0.0777 - f1: 0.0897 - 58ms/step
step 32/50 - loss: 37.0378 - precision: 0.1136 - recall: 0.0845 - f1: 0.0969 - 58ms/step
step 33/50 - loss: 27.9482 - precision: 0.1235 - recall: 0.0928 - f1: 0.1060 - 57ms/step
step 34/50 - loss: 16.2767 - precision: 0.1301 - recall: 0.0989 - f1: 0.1124 - 57ms/step
step 35/50 - loss: 12.6824 - precision: 0.1361 - recall: 0.1044 - f1: 0.1182 - 57ms/step
step 36/50 - loss: 16.2246 - precision: 0.1422 - recall: 0.1099 - f1: 0.1240 - 57ms/step
step 37/50 - loss: 16.5782 - precision: 0.1544 - recall: 0.1202 - f1: 0.1352 - 57ms/step
step 38/50 - loss: 21.3546 - precision: 0.1621 - recall: 0.1274 - f1: 0.1427 - 57ms/step
step 39/50 - loss: 15.0479 - precision: 0.1734 - recall: 0.1377 - f1: 0.1535 - 57ms/step
step 40/50 - loss: 18.3840 - precision: 0.1802 - recall: 0.1449 - f1: 0.1606 - 57ms/step
step 41/50 - loss: 8.8935 - precision: 0.1863 - recall: 0.1513 - f1: 0.1670 - 57ms/step
step 42/50 - loss: 8.8197 - precision: 0.1929 - recall: 0.1580 - f1: 0.1737 - 57ms/step
step 43/50 - loss: 33.4510 - precision: 0.1990 - recall: 0.1642 - f1: 0.1799 - 57ms/step
step 44/50 - loss: 8.1927 - precision: 0.2101 - recall: 0.1745 - f1: 0.1907 - 56ms/step
step 45/50 - loss: 17.9160 - precision: 0.2211 - recall: 0.1848 - f1: 0.2013 - 56ms/step
step 46/50 - loss: 7.2073 - precision: 0.2306 - recall: 0.1940 - f1: 0.2107 - 56ms/step
step 47/50 - loss: 12.9816 - precision: 0.2424 - recall: 0.2053 - f1: 0.2223 - 56ms/step
step 48/50 - loss: 7.3841 - precision: 0.2511 - recall: 0.2139 - f1: 0.2310 - 56ms/step
step 49/50 - loss: 3.9771 - precision: 0.2604 - recall: 0.2232 - f1: 0.2404 - 56ms/step
step 50/50 - loss: 5.5663 - precision: 0.2703 - recall: 0.2330 - f1: 0.2503 - 56ms/step
save checkpoint at /home/aistudio/results/0
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 10.3361 - precision: 0.6505 - recall: 0.7128 - f1: 0.6802 - 41ms/step
step 2/6 - loss: 9.7932 - precision: 0.6408 - recall: 0.6947 - f1: 0.6667 - 37ms/step
step 3/6 - loss: 3.3392 - precision: 0.6053 - recall: 0.6795 - f1: 0.6403 - 37ms/step
step 4/6 - loss: 3.4635 - precision: 0.6187 - recall: 0.6877 - f1: 0.6513 - 36ms/step
step 5/6 - loss: 7.3338 - precision: 0.6334 - recall: 0.6992 - f1: 0.6647 - 35ms/step
step 6/6 - loss: 16.8057 - precision: 0.6241 - recall: 0.6873 - f1: 0.6542 - 35ms/step
Eval samples: 192
Epoch 2/10
step  1/50 - loss: 2.7567 - precision: 0.7065 - recall: 0.7396 - f1: 0.7226 - 55ms/step
step  2/50 - loss: 2.5663 - precision: 0.7007 - recall: 0.7520 - f1: 0.7254 - 52ms/step
step  3/50 - loss: 6.3009 - precision: 0.6973 - recall: 0.7544 - f1: 0.7247 - 53ms/step
step  4/50 - loss: 4.1870 - precision: 0.6611 - recall: 0.7268 - f1: 0.6924 - 52ms/step
step  5/50 - loss: 4.6492 - precision: 0.6597 - recall: 0.7252 - f1: 0.6909 - 51ms/step
step  6/50 - loss: 11.3722 - precision: 0.6653 - recall: 0.7308 - f1: 0.6966 - 52ms/step
step  7/50 - loss: 0.7487 - precision: 0.6721 - recall: 0.7364 - f1: 0.7028 - 52ms/step
step  8/50 - loss: 2.3578 - precision: 0.6814 - recall: 0.7448 - f1: 0.7117 - 51ms/step
step  9/50 - loss: 6.1270 - precision: 0.6760 - recall: 0.7413 - f1: 0.7072 - 51ms/step
step 10/50 - loss: 2.8077 - precision: 0.6754 - recall: 0.7436 - f1: 0.7078 - 52ms/step
step 11/50 - loss: 10.2073 - precision: 0.6825 - recall: 0.7470 - f1: 0.7133 - 52ms/step
step 12/50 - loss: 1.3965 - precision: 0.6925 - recall: 0.7564 - f1: 0.7230 - 53ms/step
step 13/50 - loss: 6.8968 - precision: 0.6960 - recall: 0.7570 - f1: 0.7252 - 52ms/step
step 14/50 - loss: 1.5732 - precision: 0.7003 - recall: 0.7602 - f1: 0.7290 - 52ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.7029 - recall: 0.7633 - f1: 0.7318 - 52ms/step
step 16/50 - loss: 1.2291 - precision: 0.7108 - recall: 0.7723 - f1: 0.7403 - 52ms/step
step 17/50 - loss: 1.0009 - precision: 0.7174 - recall: 0.7786 - f1: 0.7467 - 52ms/step
step 18/50 - loss: 0.4626 - precision: 0.7200 - recall: 0.7813 - f1: 0.7494 - 52ms/step
step 19/50 - loss: 3.4614 - precision: 0.7256 - recall: 0.7876 - f1: 0.7554 - 52ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.7309 - recall: 0.7925 - f1: 0.7605 - 52ms/step
step 21/50 - loss: 15.0830 - precision: 0.7357 - recall: 0.7964 - f1: 0.7648 - 52ms/step
step 22/50 - loss: 11.4219 - precision: 0.7397 - recall: 0.7994 - f1: 0.7684 - 52ms/step
step 23/50 - loss: 1.6313 - precision: 0.7451 - recall: 0.8040 - f1: 0.7735 - 52ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.7531 - recall: 0.8105 - f1: 0.7808 - 51ms/step
step 25/50 - loss: 7.7367 - precision: 0.7575 - recall: 0.8141 - f1: 0.7848 - 51ms/step
step 26/50 - loss: 0.7949 - precision: 0.7634 - recall: 0.8193 - f1: 0.7903 - 51ms/step
step 27/50 - loss: 0.2371 - precision: 0.7688 - recall: 0.8242 - f1: 0.7955 - 52ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.7720 - recall: 0.8279 - f1: 0.7990 - 51ms/step
step 29/50 - loss: 2.3621 - precision: 0.7754 - recall: 0.8306 - f1: 0.8021 - 51ms/step
step 30/50 - loss: 0.2213 - precision: 0.7801 - recall: 0.8345 - f1: 0.8064 - 51ms/step
step 31/50 - loss: 1.1136 - precision: 0.7841 - recall: 0.8382 - f1: 0.8102 - 51ms/step
step 32/50 - loss: 0.1253 - precision: 0.7878 - recall: 0.8407 - f1: 0.8134 - 51ms/step
step 33/50 - loss: 0.6534 - precision: 0.7915 - recall: 0.8434 - f1: 0.8166 - 51ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.7957 - recall: 0.8466 - f1: 0.8204 - 51ms/step
step 35/50 - loss: 1.3886 - precision: 0.7988 - recall: 0.8489 - f1: 0.8231 - 51ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.8024 - recall: 0.8517 - f1: 0.8263 - 51ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.8056 - recall: 0.8544 - f1: 0.8293 - 51ms/step
step 38/50 - loss: 0.3761 - precision: 0.8075 - recall: 0.8563 - f1: 0.8311 - 51ms/step
step 39/50 - loss: 15.4030 - precision: 0.8108 - recall: 0.8589 - f1: 0.8342 - 51ms/step
step 40/50 - loss: 0.1049 - precision: 0.8138 - recall: 0.8615 - f1: 0.8370 - 51ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.8167 - recall: 0.8637 - f1: 0.8395 - 51ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.8184 - recall: 0.8650 - f1: 0.8411 - 51ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.8211 - recall: 0.8675 - f1: 0.8436 - 51ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.8235 - recall: 0.8693 - f1: 0.8458 - 51ms/step
step 45/50 - loss: 1.3005 - precision: 0.8265 - recall: 0.8713 - f1: 0.8483 - 51ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.8278 - recall: 0.8720 - f1: 0.8493 - 51ms/step
step 47/50 - loss: 1.0327 - precision: 0.8291 - recall: 0.8731 - f1: 0.8505 - 51ms/step
step 48/50 - loss: 1.6541 - precision: 0.8305 - recall: 0.8745 - f1: 0.8520 - 51ms/step
step 49/50 - loss: 0.1761 - precision: 0.8325 - recall: 0.8758 - f1: 0.8536 - 51ms/step
step 50/50 - loss: 1.0046 - precision: 0.8342 - recall: 0.8772 - f1: 0.8552 - 51ms/step
save checkpoint at /home/aistudio/results/1
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.8894 - recall: 0.9415 - f1: 0.9147 - 42ms/step
step 2/6 - loss: 0.4980 - precision: 0.9186 - recall: 0.9500 - f1: 0.9340 - 37ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9155 - recall: 0.9492 - f1: 0.9321 - 38ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9198 - recall: 0.9488 - f1: 0.9341 - 36ms/step
step 5/6 - loss: 0.5216 - precision: 0.9257 - recall: 0.9539 - f1: 0.9396 - 35ms/step
step 6/6 - loss: 4.6050 - precision: 0.9228 - recall: 0.9493 - f1: 0.9359 - 35ms/step
Eval samples: 192
Epoch 3/10
step  1/50 - loss: 0.0000e+00 - precision: 0.9585 - recall: 0.9737 - f1: 0.9661 - 52ms/step
step  2/50 - loss: 0.6101 - precision: 0.9740 - recall: 0.9817 - f1: 0.9778 - 50ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9758 - recall: 0.9843 - f1: 0.9800 - 51ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9754 - recall: 0.9830 - f1: 0.9792 - 50ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9720 - recall: 0.9812 - f1: 0.9766 - 50ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9699 - recall: 0.9800 - f1: 0.9749 - 50ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9654 - recall: 0.9769 - f1: 0.9711 - 50ms/step
step  8/50 - loss: 1.6988 - precision: 0.9569 - recall: 0.9706 - f1: 0.9637 - 51ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9582 - recall: 0.9716 - f1: 0.9649 - 51ms/step
step 10/50 - loss: 1.2470 - precision: 0.9548 - recall: 0.9697 - f1: 0.9622 - 51ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9532 - recall: 0.9682 - f1: 0.9607 - 51ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9546 - recall: 0.9695 - f1: 0.9620 - 52ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9557 - recall: 0.9699 - f1: 0.9627 - 52ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9559 - recall: 0.9705 - f1: 0.9632 - 52ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9588 - recall: 0.9725 - f1: 0.9656 - 53ms/step
step 16/50 - loss: 0.1884 - precision: 0.9591 - recall: 0.9719 - f1: 0.9655 - 53ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9602 - recall: 0.9723 - f1: 0.9662 - 53ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9618 - recall: 0.9733 - f1: 0.9675 - 53ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9638 - recall: 0.9747 - f1: 0.9692 - 53ms/step
step 20/50 - loss: 0.4645 - precision: 0.9653 - recall: 0.9754 - f1: 0.9704 - 53ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9638 - recall: 0.9751 - f1: 0.9694 - 53ms/step
step 22/50 - loss: 0.5070 - precision: 0.9645 - recall: 0.9758 - f1: 0.9701 - 52ms/step
step 23/50 - loss: 9.5725 - precision: 0.9636 - recall: 0.9748 - f1: 0.9691 - 52ms/step
step 24/50 - loss: 1.3508 - precision: 0.9629 - recall: 0.9745 - f1: 0.9687 - 53ms/step
step 25/50 - loss: 0.7529 - precision: 0.9628 - recall: 0.9743 - f1: 0.9685 - 53ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9622 - recall: 0.9740 - f1: 0.9681 - 52ms/step
step 27/50 - loss: 0.3336 - precision: 0.9600 - recall: 0.9727 - f1: 0.9663 - 53ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9600 - recall: 0.9727 - f1: 0.9663 - 53ms/step
step 29/50 - loss: 1.7835 - precision: 0.9591 - recall: 0.9724 - f1: 0.9657 - 53ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9597 - recall: 0.9730 - f1: 0.9663 - 53ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9576 - recall: 0.9715 - f1: 0.9645 - 53ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9570 - recall: 0.9714 - f1: 0.9641 - 53ms/step
step 33/50 - loss: 0.0851 - precision: 0.9573 - recall: 0.9716 - f1: 0.9644 - 53ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9578 - recall: 0.9718 - f1: 0.9648 - 53ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9587 - recall: 0.9725 - f1: 0.9656 - 53ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9586 - recall: 0.9725 - f1: 0.9655 - 53ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9591 - recall: 0.9727 - f1: 0.9659 - 53ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9597 - recall: 0.9731 - f1: 0.9664 - 53ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9602 - recall: 0.9734 - f1: 0.9668 - 53ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9595 - recall: 0.9731 - f1: 0.9662 - 53ms/step
step 41/50 - loss: 1.2208 - precision: 0.9599 - recall: 0.9735 - f1: 0.9666 - 53ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9600 - recall: 0.9736 - f1: 0.9667 - 53ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9602 - recall: 0.9736 - f1: 0.9668 - 53ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9608 - recall: 0.9741 - f1: 0.9674 - 53ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9615 - recall: 0.9744 - f1: 0.9679 - 53ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9617 - recall: 0.9745 - f1: 0.9681 - 53ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9621 - recall: 0.9747 - f1: 0.9684 - 53ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9622 - recall: 0.9749 - f1: 0.9685 - 53ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9628 - recall: 0.9753 - f1: 0.9690 - 53ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9629 - recall: 0.9755 - f1: 0.9692 - 53ms/step
save checkpoint at /home/aistudio/results/2
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9385 - recall: 0.9734 - f1: 0.9556 - 42ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9557 - recall: 0.9658 - f1: 0.9607 - 38ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9585 - recall: 0.9702 - f1: 0.9643 - 38ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9584 - recall: 0.9685 - f1: 0.9634 - 36ms/step
step 5/6 - loss: 3.6313 - precision: 0.9585 - recall: 0.9686 - f1: 0.9635 - 36ms/step
step 6/6 - loss: 4.4833 - precision: 0.9551 - recall: 0.9651 - f1: 0.9600 - 36ms/step
Eval samples: 192
Epoch 4/10
step  1/50 - loss: 3.1964 - precision: 0.9485 - recall: 0.9634 - f1: 0.9558 - 53ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9739 - recall: 0.9816 - f1: 0.9777 - 53ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9756 - recall: 0.9825 - f1: 0.9790 - 56ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9727 - recall: 0.9829 - f1: 0.9778 - 55ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9741 - recall: 0.9843 - f1: 0.9791 - 54ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9741 - recall: 0.9843 - f1: 0.9791 - 54ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9734 - recall: 0.9843 - f1: 0.9788 - 54ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9722 - recall: 0.9843 - f1: 0.9782 - 54ms/step
step  9/50 - loss: 0.5547 - precision: 0.9753 - recall: 0.9861 - f1: 0.9806 - 53ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9762 - recall: 0.9864 - f1: 0.9812 - 53ms/step
step 11/50 - loss: 3.2762 - precision: 0.9755 - recall: 0.9857 - f1: 0.9806 - 53ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9758 - recall: 0.9860 - f1: 0.9809 - 53ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9761 - recall: 0.9863 - f1: 0.9812 - 53ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9759 - recall: 0.9858 - f1: 0.9808 - 53ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9755 - recall: 0.9857 - f1: 0.9805 - 53ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9751 - recall: 0.9856 - f1: 0.9803 - 53ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9750 - recall: 0.9858 - f1: 0.9804 - 53ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9747 - recall: 0.9858 - f1: 0.9802 - 53ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9744 - recall: 0.9854 - f1: 0.9799 - 53ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9751 - recall: 0.9859 - f1: 0.9805 - 53ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9756 - recall: 0.9858 - f1: 0.9807 - 53ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9753 - recall: 0.9857 - f1: 0.9805 - 53ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9754 - recall: 0.9854 - f1: 0.9804 - 53ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9765 - recall: 0.9860 - f1: 0.9812 - 53ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9766 - recall: 0.9860 - f1: 0.9813 - 53ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9771 - recall: 0.9861 - f1: 0.9816 - 53ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9779 - recall: 0.9866 - f1: 0.9822 - 53ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9776 - recall: 0.9860 - f1: 0.9818 - 53ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9773 - recall: 0.9858 - f1: 0.9815 - 54ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9751 - recall: 0.9850 - f1: 0.9800 - 54ms/step
step 31/50 - loss: 1.1032 - precision: 0.9753 - recall: 0.9851 - f1: 0.9802 - 54ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9757 - recall: 0.9853 - f1: 0.9805 - 54ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9758 - recall: 0.9853 - f1: 0.9805 - 54ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9765 - recall: 0.9857 - f1: 0.9811 - 53ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9767 - recall: 0.9857 - f1: 0.9812 - 54ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9765 - recall: 0.9855 - f1: 0.9810 - 54ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9758 - recall: 0.9852 - f1: 0.9804 - 54ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9750 - recall: 0.9847 - f1: 0.9799 - 53ms/step
step 39/50 - loss: 0.1810 - precision: 0.9757 - recall: 0.9851 - f1: 0.9804 - 53ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9751 - recall: 0.9850 - f1: 0.9800 - 53ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9755 - recall: 0.9851 - f1: 0.9803 - 53ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9758 - recall: 0.9853 - f1: 0.9805 - 53ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9756 - recall: 0.9850 - f1: 0.9803 - 53ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9755 - recall: 0.9849 - f1: 0.9802 - 53ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9751 - recall: 0.9845 - f1: 0.9798 - 53ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9751 - recall: 0.9846 - f1: 0.9799 - 53ms/step
step 47/50 - loss: 0.1888 - precision: 0.9729 - recall: 0.9830 - f1: 0.9779 - 53ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9691 - recall: 0.9809 - f1: 0.9750 - 53ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9668 - recall: 0.9796 - f1: 0.9732 - 53ms/step
step 50/50 - loss: 3.2681 - precision: 0.9644 - recall: 0.9783 - f1: 0.9713 - 53ms/step
save checkpoint at /home/aistudio/results/3
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.8068 - recall: 0.8883 - f1: 0.8456 - 42ms/step
step 2/6 - loss: 0.2324 - precision: 0.8354 - recall: 0.8947 - f1: 0.8640 - 38ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.8642 - recall: 0.9142 - f1: 0.8885 - 38ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.8632 - recall: 0.9108 - f1: 0.8863 - 36ms/step
step 5/6 - loss: 2.6837 - precision: 0.8658 - recall: 0.9130 - f1: 0.8888 - 36ms/step
step 6/6 - loss: 20.9622 - precision: 0.8533 - recall: 0.9092 - f1: 0.8803 - 36ms/step
Eval samples: 192
Epoch 5/10
step  1/50 - loss: 0.0000e+00 - precision: 0.8971 - recall: 0.9581 - f1: 0.9266 - 55ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9250 - recall: 0.9686 - f1: 0.9463 - 54ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9183 - recall: 0.9599 - f1: 0.9387 - 55ms/step
step  4/50 - loss: 1.2851 - precision: 0.9124 - recall: 0.9517 - f1: 0.9316 - 54ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9117 - recall: 0.9498 - f1: 0.9304 - 54ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9203 - recall: 0.9547 - f1: 0.9372 - 54ms/step
step  7/50 - loss: 7.7253 - precision: 0.9270 - recall: 0.9574 - f1: 0.9420 - 55ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9340 - recall: 0.9608 - f1: 0.9472 - 55ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9324 - recall: 0.9611 - f1: 0.9465 - 54ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9297 - recall: 0.9598 - f1: 0.9445 - 54ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9319 - recall: 0.9615 - f1: 0.9465 - 54ms/step
step 12/50 - loss: 7.8134 - precision: 0.9276 - recall: 0.9595 - f1: 0.9433 - 53ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9255 - recall: 0.9582 - f1: 0.9415 - 53ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9296 - recall: 0.9608 - f1: 0.9450 - 53ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9315 - recall: 0.9613 - f1: 0.9462 - 53ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9326 - recall: 0.9621 - f1: 0.9471 - 53ms/step
step 17/50 - loss: 0.3509 - precision: 0.9313 - recall: 0.9616 - f1: 0.9462 - 53ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9328 - recall: 0.9626 - f1: 0.9474 - 53ms/step
step 19/50 - loss: 0.5321 - precision: 0.9357 - recall: 0.9643 - f1: 0.9498 - 53ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9371 - recall: 0.9642 - f1: 0.9505 - 53ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9369 - recall: 0.9639 - f1: 0.9502 - 53ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9377 - recall: 0.9644 - f1: 0.9508 - 53ms/step
step 23/50 - loss: 0.8012 - precision: 0.9381 - recall: 0.9648 - f1: 0.9513 - 53ms/step
step 24/50 - loss: 0.3049 - precision: 0.9394 - recall: 0.9654 - f1: 0.9522 - 53ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9376 - recall: 0.9647 - f1: 0.9510 - 53ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9373 - recall: 0.9646 - f1: 0.9508 - 53ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9381 - recall: 0.9650 - f1: 0.9514 - 53ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9388 - recall: 0.9651 - f1: 0.9518 - 53ms/step
step 29/50 - loss: 0.1124 - precision: 0.9400 - recall: 0.9656 - f1: 0.9526 - 54ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9413 - recall: 0.9662 - f1: 0.9536 - 53ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9425 - recall: 0.9666 - f1: 0.9544 - 53ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9433 - recall: 0.9670 - f1: 0.9550 - 53ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9431 - recall: 0.9666 - f1: 0.9547 - 53ms/step
step 34/50 - loss: 4.1823 - precision: 0.9439 - recall: 0.9669 - f1: 0.9553 - 53ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9446 - recall: 0.9674 - f1: 0.9559 - 53ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9458 - recall: 0.9682 - f1: 0.9568 - 53ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9467 - recall: 0.9686 - f1: 0.9575 - 54ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9469 - recall: 0.9688 - f1: 0.9577 - 54ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9461 - recall: 0.9683 - f1: 0.9571 - 54ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9464 - recall: 0.9683 - f1: 0.9573 - 54ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9471 - recall: 0.9684 - f1: 0.9576 - 54ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9481 - recall: 0.9690 - f1: 0.9584 - 54ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9489 - recall: 0.9695 - f1: 0.9591 - 54ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9494 - recall: 0.9698 - f1: 0.9595 - 54ms/step
step 45/50 - loss: 1.3053 - precision: 0.9502 - recall: 0.9702 - f1: 0.9601 - 54ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9505 - recall: 0.9703 - f1: 0.9603 - 54ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9511 - recall: 0.9707 - f1: 0.9608 - 54ms/step
step 48/50 - loss: 0.7930 - precision: 0.9511 - recall: 0.9705 - f1: 0.9607 - 54ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9516 - recall: 0.9707 - f1: 0.9611 - 54ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9520 - recall: 0.9709 - f1: 0.9614 - 54ms/step
save checkpoint at /home/aistudio/results/4
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9381 - recall: 0.9681 - f1: 0.9529 - 42ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9506 - recall: 0.9632 - f1: 0.9569 - 39ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9433 - recall: 0.9615 - f1: 0.9523 - 41ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9471 - recall: 0.9633 - f1: 0.9551 - 39ms/step
step 5/6 - loss: 3.5535 - precision: 0.9494 - recall: 0.9644 - f1: 0.9568 - 38ms/step
step 6/6 - loss: 2.9800 - precision: 0.9475 - recall: 0.9616 - f1: 0.9545 - 38ms/step
Eval samples: 192
Epoch 6/10
step  1/50 - loss: 0.0000e+00 - precision: 0.9738 - recall: 0.9688 - f1: 0.9713 - 51ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9714 - recall: 0.9765 - f1: 0.9740 - 50ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9775 - recall: 0.9826 - f1: 0.9801 - 52ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9805 - recall: 0.9843 - f1: 0.9824 - 52ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9844 - recall: 0.9875 - f1: 0.9859 - 52ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9809 - recall: 0.9869 - f1: 0.9839 - 53ms/step
step  7/50 - loss: 0.1471 - precision: 0.9836 - recall: 0.9888 - f1: 0.9862 - 53ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9844 - recall: 0.9889 - f1: 0.9866 - 54ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9850 - recall: 0.9895 - f1: 0.9873 - 53ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9854 - recall: 0.9901 - f1: 0.9878 - 53ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9858 - recall: 0.9905 - f1: 0.9881 - 53ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9870 - recall: 0.9913 - f1: 0.9891 - 53ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9864 - recall: 0.9908 - f1: 0.9886 - 53ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9874 - recall: 0.9914 - f1: 0.9894 - 53ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9882 - recall: 0.9920 - f1: 0.9901 - 53ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9863 - recall: 0.9902 - f1: 0.9883 - 53ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9862 - recall: 0.9905 - f1: 0.9883 - 53ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9870 - recall: 0.9910 - f1: 0.9890 - 53ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9877 - recall: 0.9915 - f1: 0.9896 - 53ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9878 - recall: 0.9914 - f1: 0.9896 - 53ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9873 - recall: 0.9913 - f1: 0.9893 - 53ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9870 - recall: 0.9912 - f1: 0.9891 - 53ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9864 - recall: 0.9911 - f1: 0.9888 - 53ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9863 - recall: 0.9908 - f1: 0.9886 - 53ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9865 - recall: 0.9908 - f1: 0.9886 - 53ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9870 - recall: 0.9911 - f1: 0.9891 - 53ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9867 - recall: 0.9909 - f1: 0.9888 - 53ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9868 - recall: 0.9910 - f1: 0.9889 - 53ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9862 - recall: 0.9904 - f1: 0.9883 - 53ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9851 - recall: 0.9901 - f1: 0.9876 - 53ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9856 - recall: 0.9904 - f1: 0.9880 - 53ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9852 - recall: 0.9902 - f1: 0.9877 - 53ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9847 - recall: 0.9897 - f1: 0.9872 - 53ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9845 - recall: 0.9897 - f1: 0.9871 - 53ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9842 - recall: 0.9894 - f1: 0.9868 - 53ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9844 - recall: 0.9895 - f1: 0.9870 - 53ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9845 - recall: 0.9895 - f1: 0.9870 - 53ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9837 - recall: 0.9893 - f1: 0.9865 - 53ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9839 - recall: 0.9894 - f1: 0.9866 - 53ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9837 - recall: 0.9891 - f1: 0.9864 - 53ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9839 - recall: 0.9892 - f1: 0.9865 - 53ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9839 - recall: 0.9893 - f1: 0.9866 - 53ms/step
step 43/50 - loss: 0.9277 - precision: 0.9838 - recall: 0.9893 - f1: 0.9865 - 53ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9839 - recall: 0.9894 - f1: 0.9867 - 53ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9843 - recall: 0.9896 - f1: 0.9870 - 53ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9836 - recall: 0.9894 - f1: 0.9865 - 53ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9831 - recall: 0.9892 - f1: 0.9861 - 53ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9830 - recall: 0.9892 - f1: 0.9861 - 53ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9831 - recall: 0.9893 - f1: 0.9862 - 53ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9833 - recall: 0.9893 - f1: 0.9863 - 53ms/step
save checkpoint at /home/aistudio/results/5
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9479 - recall: 0.9681 - f1: 0.9579 - 42ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9635 - recall: 0.9737 - f1: 0.9686 - 37ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9602 - recall: 0.9720 - f1: 0.9661 - 38ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9598 - recall: 0.9724 - f1: 0.9661 - 36ms/step
step 5/6 - loss: 1.2098 - precision: 0.9617 - recall: 0.9738 - f1: 0.9677 - 36ms/step
step 6/6 - loss: 0.8674 - precision: 0.9528 - recall: 0.9694 - f1: 0.9610 - 36ms/step
Eval samples: 192
Epoch 7/10
step  1/50 - loss: 0.0000e+00 - precision: 0.9894 - recall: 0.9894 - f1: 0.9894 - 54ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9921 - f1: 0.9908 - 53ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9930 - f1: 0.9913 - 53ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9922 - recall: 0.9948 - f1: 0.9935 - 55ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9916 - recall: 0.9937 - f1: 0.9927 - 55ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9930 - f1: 0.9913 - 55ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9925 - f1: 0.9910 - 54ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9928 - f1: 0.9912 - 53ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9924 - f1: 0.9910 - 53ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9885 - recall: 0.9916 - f1: 0.9901 - 53ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9886 - recall: 0.9919 - f1: 0.9903 - 53ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9926 - f1: 0.9911 - 53ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9880 - recall: 0.9915 - f1: 0.9897 - 52ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9873 - recall: 0.9914 - f1: 0.9894 - 52ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9875 - recall: 0.9916 - f1: 0.9896 - 52ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9876 - recall: 0.9915 - f1: 0.9896 - 53ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9884 - recall: 0.9920 - f1: 0.9902 - 53ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9884 - recall: 0.9919 - f1: 0.9901 - 52ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9885 - recall: 0.9917 - f1: 0.9901 - 53ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9883 - recall: 0.9919 - f1: 0.9901 - 52ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9888 - recall: 0.9923 - f1: 0.9905 - 52ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9893 - recall: 0.9926 - f1: 0.9910 - 52ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9893 - recall: 0.9927 - f1: 0.9910 - 52ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9894 - recall: 0.9928 - f1: 0.9911 - 52ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9871 - recall: 0.9920 - f1: 0.9896 - 52ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9872 - recall: 0.9922 - f1: 0.9897 - 52ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9869 - recall: 0.9921 - f1: 0.9895 - 52ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9874 - recall: 0.9923 - f1: 0.9898 - 52ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9878 - recall: 0.9926 - f1: 0.9902 - 52ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9879 - recall: 0.9927 - f1: 0.9903 - 52ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9879 - recall: 0.9927 - f1: 0.9903 - 52ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9876 - recall: 0.9925 - f1: 0.9901 - 52ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9874 - recall: 0.9924 - f1: 0.9899 - 52ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9862 - recall: 0.9918 - f1: 0.9890 - 52ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9866 - recall: 0.9921 - f1: 0.9893 - 52ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9864 - recall: 0.9919 - f1: 0.9891 - 52ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9865 - recall: 0.9919 - f1: 0.9892 - 52ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9863 - recall: 0.9919 - f1: 0.9891 - 52ms/step
step 39/50 - loss: 0.2242 - precision: 0.9867 - recall: 0.9921 - f1: 0.9894 - 52ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9867 - recall: 0.9922 - f1: 0.9894 - 52ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9868 - recall: 0.9922 - f1: 0.9895 - 52ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9866 - recall: 0.9921 - f1: 0.9894 - 52ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9869 - recall: 0.9923 - f1: 0.9896 - 52ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9872 - recall: 0.9925 - f1: 0.9899 - 52ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9870 - recall: 0.9923 - f1: 0.9897 - 52ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9873 - recall: 0.9925 - f1: 0.9899 - 52ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9876 - recall: 0.9927 - f1: 0.9901 - 52ms/step
step 48/50 - loss: 0.0866 - precision: 0.9879 - recall: 0.9928 - f1: 0.9903 - 51ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9877 - recall: 0.9925 - f1: 0.9901 - 51ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9877 - recall: 0.9926 - f1: 0.9901 - 51ms/step
save checkpoint at /home/aistudio/results/6
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9479 - recall: 0.9681 - f1: 0.9579 - 44ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9635 - recall: 0.9737 - f1: 0.9686 - 39ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9568 - recall: 0.9702 - f1: 0.9635 - 39ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9624 - recall: 0.9738 - f1: 0.9680 - 37ms/step
step 5/6 - loss: 1.0182 - precision: 0.9637 - recall: 0.9748 - f1: 0.9693 - 36ms/step
step 6/6 - loss: 1.6475 - precision: 0.9594 - recall: 0.9712 - f1: 0.9653 - 36ms/step
Eval samples: 192
Epoch 8/10
step  1/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 57ms/step
step  2/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 54ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9965 - recall: 0.9982 - f1: 0.9974 - 53ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9948 - recall: 0.9961 - f1: 0.9954 - 51ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9958 - recall: 0.9968 - f1: 0.9963 - 52ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9965 - recall: 0.9974 - f1: 0.9969 - 52ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9955 - recall: 0.9970 - f1: 0.9963 - 52ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9941 - recall: 0.9967 - f1: 0.9954 - 52ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9936 - recall: 0.9965 - f1: 0.9951 - 51ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9932 - recall: 0.9963 - f1: 0.9948 - 52ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9919 - recall: 0.9957 - f1: 0.9938 - 51ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9900 - recall: 0.9948 - f1: 0.9924 - 52ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9888 - recall: 0.9940 - f1: 0.9914 - 51ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9888 - recall: 0.9936 - f1: 0.9912 - 51ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9889 - recall: 0.9934 - f1: 0.9911 - 51ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9938 - f1: 0.9917 - 51ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9902 - recall: 0.9942 - f1: 0.9922 - 51ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9901 - recall: 0.9942 - f1: 0.9922 - 51ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9901 - recall: 0.9942 - f1: 0.9922 - 51ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9903 - recall: 0.9940 - f1: 0.9922 - 51ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9903 - recall: 0.9938 - f1: 0.9920 - 51ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9907 - recall: 0.9941 - f1: 0.9924 - 51ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9943 - f1: 0.9927 - 51ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9915 - recall: 0.9945 - f1: 0.9930 - 51ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9910 - recall: 0.9943 - f1: 0.9927 - 51ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9902 - recall: 0.9940 - f1: 0.9921 - 51ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9900 - recall: 0.9940 - f1: 0.9920 - 51ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9894 - recall: 0.9938 - f1: 0.9916 - 51ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9898 - recall: 0.9940 - f1: 0.9919 - 51ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9901 - recall: 0.9942 - f1: 0.9922 - 51ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9891 - recall: 0.9936 - f1: 0.9913 - 51ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9894 - recall: 0.9938 - f1: 0.9916 - 51ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9897 - recall: 0.9940 - f1: 0.9918 - 51ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9900 - recall: 0.9942 - f1: 0.9921 - 51ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9900 - recall: 0.9940 - f1: 0.9920 - 51ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9903 - recall: 0.9942 - f1: 0.9922 - 51ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9903 - recall: 0.9942 - f1: 0.9922 - 51ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9904 - recall: 0.9941 - f1: 0.9922 - 51ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9934 - f1: 0.9915 - 51ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9893 - recall: 0.9932 - f1: 0.9913 - 51ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9934 - f1: 0.9915 - 51ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9898 - recall: 0.9935 - f1: 0.9917 - 52ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9933 - f1: 0.9914 - 52ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9933 - f1: 0.9915 - 52ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9898 - recall: 0.9935 - f1: 0.9916 - 52ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9898 - recall: 0.9934 - f1: 0.9916 - 52ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9932 - f1: 0.9914 - 52ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9894 - recall: 0.9931 - f1: 0.9912 - 52ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9933 - f1: 0.9914 - 52ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9932 - f1: 0.9914 - 52ms/step
save checkpoint at /home/aistudio/results/7
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9430 - recall: 0.9681 - f1: 0.9554 - 41ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9610 - recall: 0.9737 - f1: 0.9673 - 37ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9671 - recall: 0.9772 - f1: 0.9721 - 37ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9649 - recall: 0.9738 - f1: 0.9693 - 36ms/step
step 5/6 - loss: 1.7480 - precision: 0.9637 - recall: 0.9727 - f1: 0.9682 - 35ms/step
step 6/6 - loss: 2.7267 - precision: 0.9611 - recall: 0.9703 - f1: 0.9657 - 35ms/step
Eval samples: 192
Epoch 9/10
step  1/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 54ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9948 - recall: 0.9974 - f1: 0.9961 - 53ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9948 - f1: 0.9922 - 53ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9948 - f1: 0.9922 - 54ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9948 - f1: 0.9922 - 53ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9870 - recall: 0.9939 - f1: 0.9905 - 53ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9874 - recall: 0.9940 - f1: 0.9907 - 52ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9890 - recall: 0.9948 - f1: 0.9919 - 52ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9902 - recall: 0.9954 - f1: 0.9928 - 51ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9901 - recall: 0.9953 - f1: 0.9927 - 51ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9910 - recall: 0.9957 - f1: 0.9934 - 51ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9918 - recall: 0.9961 - f1: 0.9939 - 50ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9916 - recall: 0.9956 - f1: 0.9936 - 50ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9922 - recall: 0.9959 - f1: 0.9940 - 50ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9920 - recall: 0.9958 - f1: 0.9939 - 50ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9954 - f1: 0.9933 - 51ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9911 - recall: 0.9954 - f1: 0.9932 - 51ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9907 - recall: 0.9954 - f1: 0.9930 - 51ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9956 - f1: 0.9934 - 51ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9911 - recall: 0.9953 - f1: 0.9932 - 51ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9910 - recall: 0.9953 - f1: 0.9931 - 51ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9910 - recall: 0.9952 - f1: 0.9931 - 51ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9914 - recall: 0.9954 - f1: 0.9934 - 51ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9909 - recall: 0.9950 - f1: 0.9929 - 51ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9952 - f1: 0.9932 - 51ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9952 - f1: 0.9932 - 51ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9915 - recall: 0.9953 - f1: 0.9934 - 51ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9918 - recall: 0.9955 - f1: 0.9937 - 51ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9917 - recall: 0.9955 - f1: 0.9936 - 51ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9917 - recall: 0.9955 - f1: 0.9936 - 51ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9909 - recall: 0.9948 - f1: 0.9928 - 51ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9949 - f1: 0.9931 - 52ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9915 - recall: 0.9951 - f1: 0.9933 - 51ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9914 - recall: 0.9951 - f1: 0.9932 - 51ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9914 - recall: 0.9951 - f1: 0.9932 - 51ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9916 - recall: 0.9952 - f1: 0.9934 - 52ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9918 - recall: 0.9953 - f1: 0.9936 - 52ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9920 - recall: 0.9955 - f1: 0.9937 - 52ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9922 - recall: 0.9956 - f1: 0.9939 - 52ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9924 - recall: 0.9957 - f1: 0.9941 - 52ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9924 - recall: 0.9955 - f1: 0.9939 - 52ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9925 - recall: 0.9956 - f1: 0.9941 - 52ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9925 - recall: 0.9956 - f1: 0.9940 - 51ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9926 - recall: 0.9957 - f1: 0.9942 - 52ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9923 - recall: 0.9953 - f1: 0.9938 - 51ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9925 - recall: 0.9955 - f1: 0.9940 - 51ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9922 - recall: 0.9952 - f1: 0.9937 - 51ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9922 - recall: 0.9951 - f1: 0.9936 - 51ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9921 - recall: 0.9951 - f1: 0.9936 - 51ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9923 - recall: 0.9952 - f1: 0.9937 - 51ms/step
save checkpoint at /home/aistudio/results/8
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9430 - recall: 0.9681 - f1: 0.9554 - 42ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9508 - recall: 0.9658 - f1: 0.9582 - 39ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9502 - recall: 0.9685 - f1: 0.9592 - 39ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9496 - recall: 0.9646 - f1: 0.9570 - 37ms/step
step 5/6 - loss: 0.8032 - precision: 0.9535 - recall: 0.9675 - f1: 0.9605 - 36ms/step
step 6/6 - loss: 2.2845 - precision: 0.9509 - recall: 0.9642 - f1: 0.9575 - 36ms/step
Eval samples: 192
Epoch 10/10
step  1/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 64ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9948 - recall: 0.9974 - f1: 0.9961 - 59ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9965 - recall: 0.9983 - f1: 0.9974 - 58ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9974 - recall: 0.9987 - f1: 0.9980 - 56ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9958 - recall: 0.9979 - f1: 0.9969 - 62ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9965 - recall: 0.9983 - f1: 0.9974 - 61ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9955 - recall: 0.9970 - f1: 0.9963 - 72ms/step
step  8/50 - loss: 0.0973 - precision: 0.9948 - recall: 0.9967 - f1: 0.9958 - 70ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9942 - recall: 0.9965 - f1: 0.9954 - 68ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9938 - recall: 0.9963 - f1: 0.9950 - 66ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9943 - recall: 0.9967 - f1: 0.9955 - 65ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9931 - recall: 0.9961 - f1: 0.9946 - 64ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9928 - recall: 0.9960 - f1: 0.9944 - 62ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9933 - recall: 0.9963 - f1: 0.9948 - 62ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9938 - recall: 0.9965 - f1: 0.9951 - 61ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9941 - recall: 0.9967 - f1: 0.9954 - 61ms/step
step 17/50 - loss: 0.4179 - precision: 0.9930 - recall: 0.9963 - f1: 0.9946 - 60ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9933 - recall: 0.9965 - f1: 0.9949 - 60ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9937 - recall: 0.9967 - f1: 0.9952 - 60ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9935 - recall: 0.9966 - f1: 0.9950 - 59ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9933 - recall: 0.9965 - f1: 0.9949 - 59ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9936 - recall: 0.9967 - f1: 0.9951 - 59ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9939 - recall: 0.9968 - f1: 0.9953 - 59ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9941 - recall: 0.9970 - f1: 0.9955 - 58ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9944 - recall: 0.9971 - f1: 0.9957 - 58ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9946 - recall: 0.9972 - f1: 0.9959 - 58ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9946 - recall: 0.9969 - f1: 0.9957 - 58ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9944 - recall: 0.9968 - f1: 0.9956 - 58ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9946 - recall: 0.9969 - f1: 0.9958 - 58ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9948 - recall: 0.9970 - f1: 0.9959 - 57ms/step
step 31/50 - loss: 0.0315 - precision: 0.9946 - recall: 0.9968 - f1: 0.9957 - 57ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9945 - recall: 0.9967 - f1: 0.9956 - 57ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9943 - recall: 0.9965 - f1: 0.9954 - 57ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9945 - recall: 0.9966 - f1: 0.9955 - 57ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9940 - recall: 0.9963 - f1: 0.9951 - 57ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9942 - recall: 0.9964 - f1: 0.9953 - 56ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9941 - recall: 0.9962 - f1: 0.9951 - 56ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9942 - recall: 0.9963 - f1: 0.9953 - 56ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9944 - recall: 0.9964 - f1: 0.9954 - 56ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9945 - recall: 0.9965 - f1: 0.9955 - 56ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9944 - recall: 0.9964 - f1: 0.9954 - 56ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9945 - recall: 0.9965 - f1: 0.9955 - 56ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9944 - recall: 0.9965 - f1: 0.9954 - 56ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9945 - recall: 0.9966 - f1: 0.9955 - 56ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9947 - recall: 0.9966 - f1: 0.9956 - 56ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9948 - recall: 0.9967 - f1: 0.9957 - 56ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9949 - recall: 0.9968 - f1: 0.9958 - 56ms/step
step 48/50 - loss: 0.5358 - precision: 0.9948 - recall: 0.9967 - f1: 0.9958 - 56ms/step
step 49/50 - loss: 0.0126 - precision: 0.9942 - recall: 0.9965 - f1: 0.9954 - 55ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9939 - recall: 0.9962 - f1: 0.9951 - 55ms/step
save checkpoint at /home/aistudio/results/9
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9231 - recall: 0.9574 - f1: 0.9399 - 43ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9407 - recall: 0.9605 - f1: 0.9505 - 38ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9418 - recall: 0.9632 - f1: 0.9524 - 38ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9472 - recall: 0.9659 - f1: 0.9565 - 36ms/step
step 5/6 - loss: 1.6588 - precision: 0.9495 - recall: 0.9665 - f1: 0.9579 - 36ms/step
step 6/6 - loss: 3.8312 - precision: 0.9476 - recall: 0.9642 - f1: 0.9558 - 36ms/step
Eval samples: 192
save checkpoint at /home/aistudio/results/final

A.5 Model Evaluation
Call model.evaluate to check the sequence labeling model's results on the test set (test.txt).

In [14]
model.evaluate(eval_data=test_loader, log_freq=1)
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.0000e+00 - precision: 0.9792 - recall: 0.9843 - f1: 0.9817 - 46ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9844 - recall: 0.9896 - f1: 0.9870 - 42ms/step
step 3/6 - loss: 19.0475 - precision: 0.9722 - recall: 0.9790 - f1: 0.9756 - 40ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9702 - recall: 0.9791 - f1: 0.9746 - 39ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9679 - recall: 0.9780 - f1: 0.9729 - 38ms/step
step 6/6 - loss: 0.0000e+00 - precision: 0.9605 - recall: 0.9756 - f1: 0.9680 - 38ms/step
Eval samples: 192
{'loss': [0.0], 'precision': 0.9604810996563574, 'recall': 0.9755671902268761, 'f1': 0.967965367965368}
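The precision, recall, and F1 reported above are entity-level (chunk) metrics: a predicted entity only counts as correct when both its span and its label exactly match a gold entity. A minimal pure-Python sketch of that computation (a simplified stand-in for PaddleNLP's ChunkEvaluator, using hypothetical (text, label) entity tuples, and ignoring duplicate entities within one example):

```python
def chunk_prf1(gold_entities, pred_entities):
    """Entity-level precision/recall/F1.

    Each entity is a (text, label) tuple; an exact match on both
    fields counts as a true positive. Duplicates are collapsed.
    """
    gold = set(gold_entities)
    pred = set(pred_entities)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("张三", "P"), ("18625584663", "T"), ("广东省", "A1")]
pred = [("张三", "P"), ("18625584663", "T"), ("广东", "A1")]
p, r, f = chunk_prf1(gold, pred)
# Only two of three predictions match exactly ("广东" != "广东省"),
# so precision = recall = f1 = 2/3
```

Note that ChunkEvaluator operates on tag sequences and accumulates counts over batches; this sketch only illustrates the matching criterion behind the numbers in the log.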
A.6 Prediction
With the trained model, we can run prediction on a dataset without labels (here we reuse the test set test.txt) to obtain the model's predictions and the probability of each label.

In [15]
outputs, lens, decodes = model.predict(test_data=test_loader)
preds = parse_decodes1(test_ds, decodes, lens, label_vocab)
print(len(preds))
print('\n'.join(preds[:5]))
Predict begin...
step 6/6 [==============================] - 22ms/step
Predict samples: 192
192
('黑龙江省', 'A1')('双鸭山市', 'A2')('尖山区', 'A3')('八马路与东平行路交叉口北40米', 'A4')('韦业涛', 'P')('18600009172', 'T')
('广西壮族自治区', 'A1')('桂林市', 'A2')('雁山区', 'A3')('雁山镇西龙村老年活动中心', 'A4')('17610348888', 'T')('羊卓卫', 'P')
('15652864561', 'T')('河南省', 'A1')('开封市', 'A2')('顺河回族区', 'A3')('顺河区公园路32号', 'A4')('赵本山', 'P')
('河北省', 'A1')('唐山市', 'A2')('玉田县', 'A3')('无终大街159号', 'A4')('18614253058', 'T')('尚汉生', 'P')
('台湾', 'A1')('台中市', 'A2')('北区', 'A3')('北区锦新街18号', 'A4')('18511226708', 'T')('蓟丽', 'P')
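Each prediction line above is produced by merging the per-character BIO tags back into (text, label) entity tuples. A minimal sketch of that decoding step (a simplified version of what a helper like parse_decodes1 does; the notebook's actual function also handles padding via the sequence lengths):

```python
def decode_bio(chars, tags):
    """Merge per-character BIO tags into (text, label) entity tuples.

    Tags follow the notebook's convention: 'P-B', 'P-I', 'A1-B', ..., 'O'.
    A '-B' tag starts a new entity; '-I' continues the current one.
    """
    entities = []
    text, label = "", None
    for ch, tag in zip(chars, tags):
        if tag == "O":
            if text:
                entities.append((text, label))
            text, label = "", None
        elif tag.endswith("-B"):
            if text:
                entities.append((text, label))
            text, label = ch, tag[:-2]
        elif text:  # '-I' continuation of the current entity
            text += ch
        else:  # stray '-I' with no open entity: start one anyway
            text, label = ch, tag[:-2]
    if text:
        entities.append((text, label))
    return entities

chars = list("张三18625584663")
tags = ["P-B", "P-I", "T-B"] + ["T-I"] * 10
# decode_bio(chars, tags) → [('张三', 'P'), ('18625584663', 'T')]
```

This is why a well-formed tag sequence matters: the CRF layer's transition scores discourage invalid sequences such as an 'A2-I' immediately following a 'P-B'.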
PART B Advanced Optimization: Improving Model Performance with Pretrained Word Embeddings
In the baseline version, we called paddle.nn.Embedding to obtain word vector representations. Here, we instead use the built-in TokenEmbedding from paddlenlp.embeddings, which can improve performance. The use_w2v_emb parameter controls whether the embedding layer is initialized with pretrained word vectors.

In [16]
from paddlenlp.embeddings import TokenEmbedding # EMB
In [17]
network = BiGRUWithCRF(300, 300, len(word_vocab), len(label_vocab), True)
model = paddle.Model(network)
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
crf_loss = LinearChainCrfLoss(network.crf)
chunk_evaluator = ChunkEvaluator(label_list=label_vocab.keys(), suffix=True)
model.prepare(optimizer, crf_loss, chunk_evaluator)
[2021-04-28 21:13:58,679] [    INFO] - Loading token embedding...
[2021-04-28 21:14:00,259] [    INFO] - Start extending vocab.
[2021-04-28 21:14:07,988] [    INFO] - Finish extending vocab.
[2021-04-28 21:14:10,623] [    INFO] - Finish loading embedding vector.
[2021-04-28 21:14:10,626] [    INFO] - Token Embedding info:
Unknown index: 20939
Unknown token: OOV
Padding index: 643697
Padding token: [PAD]
Shape :[643698, 300]
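The "extending vocab" step in the log above aligns the task vocabulary with the pretrained embedding table: words found in the pretrained vectors keep their vectors, while out-of-vocabulary words (and special tokens like [PAD]) get freshly initialized rows. A rough pure-Python sketch of the idea (hypothetical helper and data, not PaddleNLP's actual implementation):

```python
import random

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    """Build an embedding matrix for `vocab`: copy pretrained vectors
    where available, randomly initialize rows for OOV words."""
    rng = random.Random(seed)
    matrix, oov = [], []
    for word in vocab:
        if word in pretrained:
            matrix.append(pretrained[word])
        else:
            matrix.append([rng.uniform(-0.1, 0.1) for _ in range(dim)])
            oov.append(word)
    return matrix, oov

pretrained = {"省": [0.1, 0.2, 0.3], "市": [0.4, 0.5, 0.6]}
matrix, oov = build_embedding_matrix(["省", "市", "[PAD]"], pretrained, dim=3)
# "省" and "市" reuse pretrained vectors; "[PAD]" gets a random row
```

In the real TokenEmbedding the resulting table is large (the log shows shape [643698, 300]), and only rows for words that actually appear in training receive meaningful gradient updates.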
In [18]
model.fit(train_data=train_loader, eval_data=dev_loader, epochs=10, save_dir='./results', log_freq=1)
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/10
[2021-04-28 21:14:10,735] [ WARNING] - Compatibility Warning: The params of LinearChainCrfLoss.forward has been modified. The third param is `labels`, and the fourth is not necessary. Please update the usage.
[2021-04-28 21:14:10,780] [ WARNING] - Compatibility Warning: The params of ChunkEvaluator.compute has been modified. The old version is `inputs`, `lengths`, `predictions`, `labels` while the current version is `lengths`, `predictions`, `labels`. Please update the usage.
step  1/50 - loss: 92.3452 - precision: 0.0000e+00 - recall: 0.0000e+00 - f1: 0.0000e+00 - 246ms/step
step  2/50 - loss: 60.1989 - precision: 0.0000e+00 - recall: 0.0000e+00 - f1: 0.0000e+00 - 190ms/step
step  3/50 - loss: 51.1247 - precision: 0.0032 - recall: 0.0052 - f1: 0.0040 - 148ms/step
step  4/50 - loss: 41.5885 - precision: 0.0176 - recall: 0.0235 - f1: 0.0201 - 143ms/step
step  5/50 - loss: 47.7796 - precision: 0.0247 - recall: 0.0293 - f1: 0.0268 - 127ms/step
step  6/50 - loss: 34.3312 - precision: 0.0302 - recall: 0.0339 - f1: 0.0319 - 117ms/step
step  7/50 - loss: 44.9521 - precision: 0.0409 - recall: 0.0448 - f1: 0.0428 - 109ms/step
step  8/50 - loss: 42.2197 - precision: 0.0488 - recall: 0.0516 - f1: 0.0501 - 102ms/step
step  9/50 - loss: 38.8731 - precision: 0.0651 - recall: 0.0679 - f1: 0.0665 - 97ms/step
step 10/50 - loss: 30.2932 - precision: 0.0823 - recall: 0.0851 - f1: 0.0837 - 94ms/step
step 11/50 - loss: 51.4446 - precision: 0.0955 - recall: 0.0987 - f1: 0.0970 - 91ms/step
step 12/50 - loss: 24.9323 - precision: 0.1113 - recall: 0.1149 - f1: 0.1131 - 88ms/step
step 13/50 - loss: 27.1223 - precision: 0.1257 - recall: 0.1298 - f1: 0.1277 - 86ms/step
step 14/50 - loss: 15.1520 - precision: 0.1446 - recall: 0.1497 - f1: 0.1471 - 84ms/step
step 15/50 - loss: 14.5623 - precision: 0.1656 - recall: 0.1714 - f1: 0.1684 - 83ms/step
step 16/50 - loss: 23.7025 - precision: 0.1758 - recall: 0.1829 - f1: 0.1793 - 82ms/step
step 17/50 - loss: 16.8050 - precision: 0.1927 - recall: 0.2010 - f1: 0.1968 - 80ms/step
step 18/50 - loss: 53.2833 - precision: 0.2084 - recall: 0.2180 - f1: 0.2131 - 79ms/step
step 19/50 - loss: 27.1836 - precision: 0.2190 - recall: 0.2307 - f1: 0.2247 - 78ms/step
step 20/50 - loss: 21.2989 - precision: 0.2349 - recall: 0.2488 - f1: 0.2417 - 78ms/step
step 21/50 - loss: 13.7987 - precision: 0.2480 - recall: 0.2641 - f1: 0.2558 - 77ms/step
step 22/50 - loss: 12.1725 - precision: 0.2618 - recall: 0.2802 - f1: 0.2707 - 77ms/step
step 23/50 - loss: 12.5009 - precision: 0.2770 - recall: 0.2966 - f1: 0.2864 - 76ms/step
step 24/50 - loss: 9.5706 - precision: 0.2951 - recall: 0.3156 - f1: 0.3050 - 75ms/step
step 25/50 - loss: 8.0253 - precision: 0.3068 - recall: 0.3280 - f1: 0.3170 - 74ms/step
step 26/50 - loss: 29.2462 - precision: 0.3165 - recall: 0.3386 - f1: 0.3271 - 74ms/step
step 27/50 - loss: 11.7024 - precision: 0.3295 - recall: 0.3525 - f1: 0.3406 - 74ms/step
step 28/50 - loss: 5.8344 - precision: 0.3419 - recall: 0.3665 - f1: 0.3538 - 73ms/step
step 29/50 - loss: 4.7612 - precision: 0.3565 - recall: 0.3820 - f1: 0.3688 - 73ms/step
step 30/50 - loss: 3.3230 - precision: 0.3693 - recall: 0.3968 - f1: 0.3825 - 72ms/step
step 31/50 - loss: 4.2922 - precision: 0.3832 - recall: 0.4112 - f1: 0.3967 - 72ms/step
step 32/50 - loss: 14.5057 - precision: 0.3929 - recall: 0.4229 - f1: 0.4073 - 71ms/step
step 33/50 - loss: 8.0626 - precision: 0.4040 - recall: 0.4359 - f1: 0.4193 - 71ms/step
step 34/50 - loss: 3.1661 - precision: 0.4136 - recall: 0.4469 - f1: 0.4296 - 71ms/step
step 35/50 - loss: 10.6371 - precision: 0.4234 - recall: 0.4578 - f1: 0.4399 - 71ms/step
step 36/50 - loss: 5.8705 - precision: 0.4321 - recall: 0.4675 - f1: 0.4491 - 71ms/step
step 37/50 - loss: 3.6881 - precision: 0.4389 - recall: 0.4747 - f1: 0.4561 - 70ms/step
step 38/50 - loss: 2.6782 - precision: 0.4472 - recall: 0.4832 - f1: 0.4645 - 70ms/step
step 39/50 - loss: 9.2744 - precision: 0.4535 - recall: 0.4903 - f1: 0.4712 - 70ms/step
step 40/50 - loss: 4.6129 - precision: 0.4613 - recall: 0.4985 - f1: 0.4792 - 69ms/step
step 41/50 - loss: 1.6084 - precision: 0.4712 - recall: 0.5086 - f1: 0.4891 - 69ms/step
step 42/50 - loss: 3.6486 - precision: 0.4789 - recall: 0.5169 - f1: 0.4972 - 69ms/step
step 43/50 - loss: 3.5782 - precision: 0.4892 - recall: 0.5274 - f1: 0.5076 - 69ms/step
step 44/50 - loss: 4.7885 - precision: 0.4961 - recall: 0.5350 - f1: 0.5148 - 69ms/step
step 45/50 - loss: 5.8242 - precision: 0.5043 - recall: 0.5433 - f1: 0.5231 - 69ms/step
step 46/50 - loss: 0.3803 - precision: 0.5113 - recall: 0.5510 - f1: 0.5304 - 69ms/step
step 47/50 - loss: 3.9614 - precision: 0.5181 - recall: 0.5587 - f1: 0.5377 - 68ms/step
step 48/50 - loss: 2.1106 - precision: 0.5249 - recall: 0.5657 - f1: 0.5446 - 68ms/step
step 49/50 - loss: 6.0916 - precision: 0.5323 - recall: 0.5734 - f1: 0.5521 - 68ms/step
step 50/50 - loss: 0.6645 - precision: 0.5383 - recall: 0.5796 - f1: 0.5582 - 68ms/step
save checkpoint at /home/aistudio/results/0
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 1.8655 - precision: 0.8974 - recall: 0.9309 - f1: 0.9138 - 43ms/step
step 2/6 - loss: 1.6613 - precision: 0.9103 - recall: 0.9342 - f1: 0.9221 - 38ms/step
step 3/6 - loss: 0.3961 - precision: 0.9080 - recall: 0.9335 - f1: 0.9206 - 39ms/step
step 4/6 - loss: 0.4949 - precision: 0.9129 - recall: 0.9357 - f1: 0.9242 - 37ms/step
step 5/6 - loss: 3.0126 - precision: 0.9165 - recall: 0.9434 - f1: 0.9298 - 36ms/step
step 6/6 - loss: 8.4887 - precision: 0.9100 - recall: 0.9362 - f1: 0.9229 - 36ms/step
Eval samples: 192
Epoch 2/10
step  1/50 - loss: 1.7708 - precision: 0.8883 - recall: 0.9115 - f1: 0.8997 - 64ms/step
step  2/50 - loss: 2.8373 - precision: 0.8864 - recall: 0.9164 - f1: 0.9012 - 61ms/step
step  3/50 - loss: 2.7110 - precision: 0.8894 - recall: 0.9235 - f1: 0.9061 - 61ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9043 - recall: 0.9361 - f1: 0.9199 - 59ms/step
step  5/50 - loss: 1.8746 - precision: 0.8975 - recall: 0.9322 - f1: 0.9145 - 60ms/step
step  6/50 - loss: 2.9205 - precision: 0.9018 - recall: 0.9339 - f1: 0.9176 - 60ms/step
step  7/50 - loss: 0.0989 - precision: 0.9061 - recall: 0.9352 - f1: 0.9204 - 60ms/step
step  8/50 - loss: 0.3071 - precision: 0.9077 - recall: 0.9355 - f1: 0.9213 - 60ms/step
step  9/50 - loss: 2.5129 - precision: 0.9133 - recall: 0.9403 - f1: 0.9266 - 59ms/step
step 10/50 - loss: 1.3194 - precision: 0.9158 - recall: 0.9416 - f1: 0.9285 - 59ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9152 - recall: 0.9439 - f1: 0.9293 - 59ms/step
step 12/50 - loss: 0.7375 - precision: 0.9143 - recall: 0.9434 - f1: 0.9286 - 59ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9139 - recall: 0.9437 - f1: 0.9286 - 59ms/step
step 14/50 - loss: 1.7394 - precision: 0.9129 - recall: 0.9443 - f1: 0.9284 - 59ms/step
step 15/50 - loss: 0.4427 - precision: 0.9134 - recall: 0.9449 - f1: 0.9289 - 59ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9143 - recall: 0.9460 - f1: 0.9299 - 59ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9154 - recall: 0.9458 - f1: 0.9303 - 60ms/step
step 18/50 - loss: 3.1276 - precision: 0.9139 - recall: 0.9453 - f1: 0.9293 - 60ms/step
step 19/50 - loss: 0.7602 - precision: 0.9157 - recall: 0.9463 - f1: 0.9307 - 60ms/step
step 20/50 - loss: 6.2756 - precision: 0.9171 - recall: 0.9469 - f1: 0.9317 - 60ms/step
step 21/50 - loss: 0.7506 - precision: 0.9195 - recall: 0.9486 - f1: 0.9339 - 60ms/step
step 22/50 - loss: 1.0598 - precision: 0.9217 - recall: 0.9498 - f1: 0.9356 - 60ms/step
step 23/50 - loss: 3.7312 - precision: 0.9202 - recall: 0.9493 - f1: 0.9345 - 60ms/step
step 24/50 - loss: 3.4705 - precision: 0.9211 - recall: 0.9499 - f1: 0.9353 - 61ms/step
step 25/50 - loss: 0.9551 - precision: 0.9215 - recall: 0.9506 - f1: 0.9358 - 60ms/step
step 26/50 - loss: 0.4764 - precision: 0.9219 - recall: 0.9507 - f1: 0.9361 - 60ms/step
step 27/50 - loss: 2.2702 - precision: 0.9227 - recall: 0.9508 - f1: 0.9365 - 60ms/step
step 28/50 - loss: 0.7555 - precision: 0.9245 - recall: 0.9520 - f1: 0.9380 - 60ms/step
step 29/50 - loss: 0.3508 - precision: 0.9257 - recall: 0.9531 - f1: 0.9392 - 60ms/step
step 30/50 - loss: 1.7828 - precision: 0.9255 - recall: 0.9527 - f1: 0.9389 - 60ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9268 - recall: 0.9536 - f1: 0.9400 - 60ms/step
step 32/50 - loss: 1.2518 - precision: 0.9277 - recall: 0.9542 - f1: 0.9407 - 60ms/step
step 33/50 - loss: 0.3732 - precision: 0.9268 - recall: 0.9542 - f1: 0.9403 - 61ms/step
step 34/50 - loss: 0.6528 - precision: 0.9277 - recall: 0.9551 - f1: 0.9412 - 61ms/step
step 35/50 - loss: 0.4778 - precision: 0.9287 - recall: 0.9559 - f1: 0.9421 - 61ms/step
step 36/50 - loss: 0.6225 - precision: 0.9293 - recall: 0.9565 - f1: 0.9427 - 61ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9301 - recall: 0.9570 - f1: 0.9433 - 61ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9281 - recall: 0.9557 - f1: 0.9417 - 61ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9288 - recall: 0.9563 - f1: 0.9424 - 61ms/step
step 40/50 - loss: 0.0144 - precision: 0.9292 - recall: 0.9566 - f1: 0.9427 - 61ms/step
step 41/50 - loss: 0.1513 - precision: 0.9306 - recall: 0.9575 - f1: 0.9438 - 61ms/step
step 42/50 - loss: 2.1550 - precision: 0.9302 - recall: 0.9573 - f1: 0.9435 - 61ms/step
step 43/50 - loss: 0.2583 - precision: 0.9298 - recall: 0.9570 - f1: 0.9432 - 61ms/step
step 44/50 - loss: 0.2175 - precision: 0.9304 - recall: 0.9574 - f1: 0.9437 - 61ms/step
step 45/50 - loss: 0.2811 - precision: 0.9309 - recall: 0.9572 - f1: 0.9439 - 61ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9317 - recall: 0.9577 - f1: 0.9445 - 61ms/step
step 47/50 - loss: 0.0427 - precision: 0.9318 - recall: 0.9577 - f1: 0.9445 - 61ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9327 - recall: 0.9584 - f1: 0.9454 - 61ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9326 - recall: 0.9586 - f1: 0.9454 - 61ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9330 - recall: 0.9589 - f1: 0.9458 - 61ms/step
save checkpoint at /home/aistudio/results/1
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.9361 - precision: 0.9531 - recall: 0.9734 - f1: 0.9632 - 42ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9689 - recall: 0.9842 - f1: 0.9765 - 38ms/step
step 3/6 - loss: 0.3350 - precision: 0.9605 - recall: 0.9807 - f1: 0.9705 - 38ms/step
step 4/6 - loss: 0.2365 - precision: 0.9588 - recall: 0.9777 - f1: 0.9682 - 36ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9599 - recall: 0.9790 - f1: 0.9694 - 36ms/step
step 6/6 - loss: 1.4589 - precision: 0.9631 - recall: 0.9799 - f1: 0.9714 - 36ms/step
Eval samples: 192
Epoch 3/10
step  1/50 - loss: 2.3251 - precision: 0.9196 - recall: 0.9531 - f1: 0.9361 - 64ms/step
step  2/50 - loss: 0.8770 - precision: 0.9391 - recall: 0.9635 - f1: 0.9512 - 60ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9427 - recall: 0.9705 - f1: 0.9564 - 61ms/step
step  4/50 - loss: 1.2807 - precision: 0.9419 - recall: 0.9714 - f1: 0.9564 - 61ms/step
step  5/50 - loss: 2.6353 - precision: 0.9373 - recall: 0.9676 - f1: 0.9522 - 61ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9458 - recall: 0.9721 - f1: 0.9588 - 61ms/step
step  7/50 - loss: 0.2348 - precision: 0.9519 - recall: 0.9754 - f1: 0.9635 - 60ms/step
step  8/50 - loss: 0.3705 - precision: 0.9527 - recall: 0.9745 - f1: 0.9635 - 60ms/step
step  9/50 - loss: 0.7870 - precision: 0.9478 - recall: 0.9698 - f1: 0.9587 - 60ms/step
step 10/50 - loss: 0.5211 - precision: 0.9514 - recall: 0.9708 - f1: 0.9610 - 60ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9548 - recall: 0.9729 - f1: 0.9638 - 60ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9573 - recall: 0.9748 - f1: 0.9659 - 60ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9589 - recall: 0.9759 - f1: 0.9673 - 60ms/step
step 14/50 - loss: 9.7319 - precision: 0.9593 - recall: 0.9765 - f1: 0.9678 - 59ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9523 - recall: 0.9735 - f1: 0.9628 - 60ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9518 - recall: 0.9735 - f1: 0.9625 - 60ms/step
step 17/50 - loss: 0.1848 - precision: 0.9522 - recall: 0.9742 - f1: 0.9630 - 61ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9509 - recall: 0.9733 - f1: 0.9619 - 61ms/step
step 19/50 - loss: 0.6591 - precision: 0.9494 - recall: 0.9722 - f1: 0.9607 - 60ms/step
step 20/50 - loss: 1.4647 - precision: 0.9469 - recall: 0.9704 - f1: 0.9585 - 60ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9479 - recall: 0.9706 - f1: 0.9591 - 61ms/step
step 22/50 - loss: 1.4317 - precision: 0.9480 - recall: 0.9705 - f1: 0.9591 - 61ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9493 - recall: 0.9711 - f1: 0.9601 - 61ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9497 - recall: 0.9710 - f1: 0.9602 - 61ms/step
step 25/50 - loss: 0.0553 - precision: 0.9483 - recall: 0.9703 - f1: 0.9591 - 61ms/step
step 26/50 - loss: 0.5510 - precision: 0.9492 - recall: 0.9710 - f1: 0.9600 - 61ms/step
step 27/50 - loss: 0.8598 - precision: 0.9505 - recall: 0.9719 - f1: 0.9611 - 61ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9505 - recall: 0.9718 - f1: 0.9610 - 65ms/step
step 29/50 - loss: 4.4491 - precision: 0.9495 - recall: 0.9713 - f1: 0.9603 - 69ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9509 - recall: 0.9719 - f1: 0.9613 - 69ms/step
step 31/50 - loss: 1.0609 - precision: 0.9524 - recall: 0.9728 - f1: 0.9625 - 68ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9532 - recall: 0.9732 - f1: 0.9631 - 68ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9535 - recall: 0.9731 - f1: 0.9632 - 68ms/step
step 34/50 - loss: 5.0845 - precision: 0.9537 - recall: 0.9729 - f1: 0.9632 - 68ms/step
step 35/50 - loss: 1.3518 - precision: 0.9534 - recall: 0.9725 - f1: 0.9629 - 68ms/step
step 36/50 - loss: 0.3158 - precision: 0.9535 - recall: 0.9722 - f1: 0.9628 - 68ms/step
step 37/50 - loss: 0.6842 - precision: 0.9533 - recall: 0.9721 - f1: 0.9626 - 68ms/step
step 38/50 - loss: 0.2430 - precision: 0.9543 - recall: 0.9726 - f1: 0.9634 - 68ms/step
step 39/50 - loss: 0.3968 - precision: 0.9550 - recall: 0.9729 - f1: 0.9639 - 68ms/step
step 40/50 - loss: 0.6910 - precision: 0.9557 - recall: 0.9735 - f1: 0.9645 - 68ms/step
step 41/50 - loss: 0.6064 - precision: 0.9560 - recall: 0.9735 - f1: 0.9647 - 68ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9556 - recall: 0.9735 - f1: 0.9645 - 68ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9537 - recall: 0.9725 - f1: 0.9630 - 67ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9541 - recall: 0.9729 - f1: 0.9634 - 67ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9538 - recall: 0.9729 - f1: 0.9633 - 67ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9546 - recall: 0.9733 - f1: 0.9638 - 67ms/step
step 47/50 - loss: 0.9199 - precision: 0.9553 - recall: 0.9736 - f1: 0.9644 - 67ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9559 - recall: 0.9741 - f1: 0.9649 - 67ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9564 - recall: 0.9744 - f1: 0.9653 - 67ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9571 - recall: 0.9747 - f1: 0.9658 - 66ms/step
save checkpoint at /home/aistudio/results/2
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.8094 - precision: 0.9788 - recall: 0.9840 - f1: 0.9814 - 45ms/step
step 2/6 - loss: 0.5821 - precision: 0.9895 - recall: 0.9921 - f1: 0.9908 - 39ms/step
step 3/6 - loss: 0.0931 - precision: 0.9809 - recall: 0.9877 - f1: 0.9843 - 39ms/step
step 4/6 - loss: 0.1239 - precision: 0.9804 - recall: 0.9869 - f1: 0.9836 - 37ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9802 - recall: 0.9874 - f1: 0.9838 - 36ms/step
step 6/6 - loss: 0.0000e+00 - precision: 0.9818 - recall: 0.9886 - f1: 0.9852 - 36ms/step
Eval samples: 192
Epoch 4/10
step  1/50 - loss: 0.5262 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 60ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9922 - f1: 0.9909 - 62ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9862 - recall: 0.9913 - f1: 0.9887 - 62ms/step
step  4/50 - loss: 0.8189 - precision: 0.9792 - recall: 0.9843 - f1: 0.9818 - 60ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9751 - recall: 0.9812 - f1: 0.9781 - 60ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9792 - recall: 0.9843 - f1: 0.9818 - 60ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9807 - recall: 0.9858 - f1: 0.9833 - 59ms/step
step  8/50 - loss: 1.0097 - precision: 0.9786 - recall: 0.9850 - f1: 0.9818 - 60ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9810 - recall: 0.9867 - f1: 0.9838 - 60ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9818 - recall: 0.9875 - f1: 0.9846 - 60ms/step
step 11/50 - loss: 0.8753 - precision: 0.9821 - recall: 0.9881 - f1: 0.9851 - 60ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9814 - recall: 0.9882 - f1: 0.9848 - 60ms/step
step 13/50 - loss: 0.1589 - precision: 0.9812 - recall: 0.9875 - f1: 0.9844 - 61ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9822 - recall: 0.9877 - f1: 0.9849 - 60ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9813 - recall: 0.9875 - f1: 0.9844 - 60ms/step
step 16/50 - loss: 1.3081 - precision: 0.9818 - recall: 0.9879 - f1: 0.9848 - 60ms/step
step 17/50 - loss: 0.0806 - precision: 0.9822 - recall: 0.9880 - f1: 0.9851 - 60ms/step
step 18/50 - loss: 2.1783 - precision: 0.9812 - recall: 0.9872 - f1: 0.9842 - 60ms/step
step 19/50 - loss: 0.4383 - precision: 0.9811 - recall: 0.9873 - f1: 0.9842 - 60ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9815 - recall: 0.9874 - f1: 0.9845 - 60ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9816 - recall: 0.9873 - f1: 0.9844 - 60ms/step
step 22/50 - loss: 0.4441 - precision: 0.9822 - recall: 0.9874 - f1: 0.9848 - 60ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9825 - recall: 0.9875 - f1: 0.9850 - 60ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9826 - recall: 0.9873 - f1: 0.9850 - 60ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9823 - recall: 0.9872 - f1: 0.9847 - 60ms/step
step 26/50 - loss: 0.7703 - precision: 0.9826 - recall: 0.9875 - f1: 0.9850 - 60ms/step
step 27/50 - loss: 0.8359 - precision: 0.9824 - recall: 0.9874 - f1: 0.9849 - 60ms/step
step 28/50 - loss: 3.3207 - precision: 0.9827 - recall: 0.9875 - f1: 0.9851 - 60ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9833 - recall: 0.9879 - f1: 0.9856 - 60ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9821 - recall: 0.9876 - f1: 0.9849 - 60ms/step
step 31/50 - loss: 1.7666 - precision: 0.9820 - recall: 0.9873 - f1: 0.9847 - 60ms/step
step 32/50 - loss: 0.0955 - precision: 0.9823 - recall: 0.9876 - f1: 0.9849 - 60ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9814 - recall: 0.9873 - f1: 0.9843 - 60ms/step
step 34/50 - loss: 0.4529 - precision: 0.9813 - recall: 0.9874 - f1: 0.9843 - 60ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9813 - recall: 0.9876 - f1: 0.9844 - 60ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9818 - recall: 0.9879 - f1: 0.9848 - 60ms/step
step 37/50 - loss: 0.7701 - precision: 0.9823 - recall: 0.9883 - f1: 0.9853 - 60ms/step
step 38/50 - loss: 0.4365 - precision: 0.9827 - recall: 0.9886 - f1: 0.9856 - 60ms/step
step 39/50 - loss: 0.8331 - precision: 0.9825 - recall: 0.9886 - f1: 0.9855 - 60ms/step
step 40/50 - loss: 0.2560 - precision: 0.9830 - recall: 0.9889 - f1: 0.9859 - 60ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9834 - recall: 0.9891 - f1: 0.9862 - 60ms/step
step 42/50 - loss: 1.6503 - precision: 0.9838 - recall: 0.9894 - f1: 0.9866 - 60ms/step
step 43/50 - loss: 3.3220 - precision: 0.9833 - recall: 0.9890 - f1: 0.9862 - 60ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9836 - recall: 0.9891 - f1: 0.9863 - 60ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9839 - recall: 0.9893 - f1: 0.9866 - 60ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9843 - recall: 0.9895 - f1: 0.9869 - 60ms/step
step 47/50 - loss: 0.0620 - precision: 0.9844 - recall: 0.9896 - f1: 0.9870 - 60ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9847 - recall: 0.9899 - f1: 0.9873 - 60ms/step
step 49/50 - loss: 0.2296 - precision: 0.9846 - recall: 0.9899 - f1: 0.9872 - 60ms/step
step 50/50 - loss: 0.0941 - precision: 0.9847 - recall: 0.9900 - f1: 0.9873 - 60ms/step
save checkpoint at /home/aistudio/results/3
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.7291 - precision: 0.9581 - recall: 0.9734 - f1: 0.9657 - 42ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9740 - recall: 0.9842 - f1: 0.9791 - 38ms/step
step 3/6 - loss: 1.0605 - precision: 0.9775 - recall: 0.9877 - f1: 0.9826 - 39ms/step
step 4/6 - loss: 0.0626 - precision: 0.9715 - recall: 0.9829 - f1: 0.9772 - 37ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9731 - recall: 0.9843 - f1: 0.9786 - 36ms/step
step 6/6 - loss: 0.2693 - precision: 0.9758 - recall: 0.9860 - f1: 0.9809 - 36ms/step
Eval samples: 192
Epoch 5/10
step  1/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 66ms/step
step  2/50 - loss: 0.8448 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 63ms/step
step  3/50 - loss: 0.5168 - precision: 0.9965 - recall: 0.9983 - f1: 0.9974 - 64ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9908 - recall: 0.9961 - f1: 0.9934 - 62ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9875 - recall: 0.9947 - f1: 0.9911 - 63ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9887 - recall: 0.9947 - f1: 0.9917 - 62ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9888 - recall: 0.9948 - f1: 0.9918 - 61ms/step
step  8/50 - loss: 0.0601 - precision: 0.9902 - recall: 0.9954 - f1: 0.9928 - 62ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9913 - recall: 0.9959 - f1: 0.9936 - 62ms/step
step 10/50 - loss: 1.3366 - precision: 0.9911 - recall: 0.9958 - f1: 0.9935 - 62ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9910 - recall: 0.9952 - f1: 0.9931 - 62ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9917 - recall: 0.9956 - f1: 0.9937 - 62ms/step
step 13/50 - loss: 0.0601 - precision: 0.9916 - recall: 0.9952 - f1: 0.9934 - 61ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9922 - recall: 0.9955 - f1: 0.9938 - 61ms/step
step 15/50 - loss: 0.4606 - precision: 0.9920 - recall: 0.9951 - f1: 0.9936 - 61ms/step
step 16/50 - loss: 0.4462 - precision: 0.9912 - recall: 0.9944 - f1: 0.9928 - 61ms/step
step 17/50 - loss: 0.4156 - precision: 0.9911 - recall: 0.9945 - f1: 0.9928 - 61ms/step
step 18/50 - loss: 0.0955 - precision: 0.9904 - recall: 0.9936 - f1: 0.9920 - 61ms/step
step 19/50 - loss: 0.4337 - precision: 0.9909 - recall: 0.9939 - f1: 0.9924 - 61ms/step
step 20/50 - loss: 0.0431 - precision: 0.9914 - recall: 0.9942 - f1: 0.9928 - 61ms/step
step 21/50 - loss: 0.7695 - precision: 0.9918 - recall: 0.9945 - f1: 0.9932 - 61ms/step
step 22/50 - loss: 0.4006 - precision: 0.9922 - recall: 0.9948 - f1: 0.9935 - 61ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9921 - recall: 0.9945 - f1: 0.9933 - 61ms/step
step 24/50 - loss: 0.0378 - precision: 0.9924 - recall: 0.9948 - f1: 0.9936 - 61ms/step
step 25/50 - loss: 0.0460 - precision: 0.9921 - recall: 0.9948 - f1: 0.9934 - 61ms/step
step 26/50 - loss: 0.3840 - precision: 0.9924 - recall: 0.9950 - f1: 0.9937 - 61ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9915 - recall: 0.9944 - f1: 0.9929 - 61ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9909 - recall: 0.9942 - f1: 0.9925 - 61ms/step
step 29/50 - loss: 0.3813 - precision: 0.9908 - recall: 0.9942 - f1: 0.9925 - 61ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9911 - recall: 0.9944 - f1: 0.9928 - 61ms/step
step 31/50 - loss: 0.0673 - precision: 0.9911 - recall: 0.9943 - f1: 0.9927 - 61ms/step
step 32/50 - loss: 0.0573 - precision: 0.9910 - recall: 0.9943 - f1: 0.9927 - 61ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9913 - recall: 0.9945 - f1: 0.9929 - 61ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9911 - recall: 0.9945 - f1: 0.9928 - 60ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9914 - recall: 0.9946 - f1: 0.9930 - 60ms/step
step 36/50 - loss: 0.7175 - precision: 0.9904 - recall: 0.9940 - f1: 0.9922 - 60ms/step
step 37/50 - loss: 0.7233 - precision: 0.9904 - recall: 0.9941 - f1: 0.9922 - 60ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9903 - recall: 0.9941 - f1: 0.9922 - 60ms/step
step 39/50 - loss: 0.0146 - precision: 0.9905 - recall: 0.9942 - f1: 0.9924 - 60ms/step
step 40/50 - loss: 0.6816 - precision: 0.9904 - recall: 0.9942 - f1: 0.9923 - 61ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9903 - recall: 0.9943 - f1: 0.9923 - 61ms/step
step 42/50 - loss: 0.7724 - precision: 0.9902 - recall: 0.9939 - f1: 0.9920 - 61ms/step
step 43/50 - loss: 0.6859 - precision: 0.9904 - recall: 0.9940 - f1: 0.9922 - 60ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9906 - recall: 0.9942 - f1: 0.9924 - 60ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9906 - recall: 0.9942 - f1: 0.9924 - 60ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9908 - recall: 0.9943 - f1: 0.9926 - 60ms/step
step 47/50 - loss: 0.3436 - precision: 0.9906 - recall: 0.9941 - f1: 0.9923 - 60ms/step
step 48/50 - loss: 0.0073 - precision: 0.9908 - recall: 0.9942 - f1: 0.9925 - 60ms/step
step 49/50 - loss: 0.6759 - precision: 0.9907 - recall: 0.9942 - f1: 0.9925 - 61ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9909 - recall: 0.9943 - f1: 0.9926 - 61ms/step
save checkpoint at /home/aistudio/results/4
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.6838 - precision: 0.9737 - recall: 0.9840 - f1: 0.9788 - 43ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9869 - recall: 0.9921 - f1: 0.9895 - 39ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9878 - recall: 0.9912 - f1: 0.9895 - 39ms/step
step 4/6 - loss: 0.0038 - precision: 0.9856 - recall: 0.9895 - f1: 0.9876 - 37ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9885 - recall: 0.9916 - f1: 0.9901 - 37ms/step
step 6/6 - loss: 0.0000e+00 - precision: 0.9904 - recall: 0.9930 - f1: 0.9917 - 37ms/step
Eval samples: 192
Epoch 6/10
step  1/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9896 - f1: 0.9896 - 68ms/step
step  2/50 - loss: 0.4420 - precision: 0.9948 - recall: 0.9948 - f1: 0.9948 - 66ms/step
step  3/50 - loss: 0.4224 - precision: 0.9878 - recall: 0.9913 - f1: 0.9895 - 64ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9909 - recall: 0.9935 - f1: 0.9922 - 63ms/step
step  5/50 - loss: 0.0070 - precision: 0.9896 - recall: 0.9937 - f1: 0.9917 - 62ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9913 - recall: 0.9948 - f1: 0.9930 - 64ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9911 - recall: 0.9940 - f1: 0.9925 - 64ms/step
step  8/50 - loss: 0.3505 - precision: 0.9896 - recall: 0.9935 - f1: 0.9915 - 64ms/step
step  9/50 - loss: 0.0660 - precision: 0.9907 - recall: 0.9942 - f1: 0.9925 - 64ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9906 - recall: 0.9942 - f1: 0.9924 - 65ms/step
step 11/50 - loss: 0.6838 - precision: 0.9915 - recall: 0.9948 - f1: 0.9931 - 64ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9922 - recall: 0.9952 - f1: 0.9937 - 64ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9928 - recall: 0.9956 - f1: 0.9942 - 64ms/step
step 14/50 - loss: 0.0300 - precision: 0.9933 - recall: 0.9959 - f1: 0.9946 - 64ms/step
step 15/50 - loss: 0.6474 - precision: 0.9937 - recall: 0.9962 - f1: 0.9950 - 63ms/step
step 16/50 - loss: 0.3432 - precision: 0.9941 - recall: 0.9964 - f1: 0.9953 - 63ms/step
step 17/50 - loss: 0.0056 - precision: 0.9945 - recall: 0.9966 - f1: 0.9956 - 63ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9942 - recall: 0.9962 - f1: 0.9952 - 63ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9945 - recall: 0.9964 - f1: 0.9955 - 63ms/step
step 20/50 - loss: 0.0126 - precision: 0.9948 - recall: 0.9966 - f1: 0.9957 - 63ms/step
step 21/50 - loss: 0.6590 - precision: 0.9945 - recall: 0.9963 - f1: 0.9954 - 62ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9948 - recall: 0.9964 - f1: 0.9956 - 62ms/step
step 23/50 - loss: 6.7139e-04 - precision: 0.9950 - recall: 0.9966 - f1: 0.9958 - 62ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9948 - recall: 0.9963 - f1: 0.9955 - 63ms/step
step 25/50 - loss: 0.0086 - precision: 0.9950 - recall: 0.9964 - f1: 0.9957 - 63ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9952 - recall: 0.9966 - f1: 0.9959 - 63ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9954 - recall: 0.9967 - f1: 0.9960 - 63ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9938 - recall: 0.9963 - f1: 0.9951 - 63ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9941 - recall: 0.9964 - f1: 0.9952 - 63ms/step
step 30/50 - loss: 0.3168 - precision: 0.9943 - recall: 0.9965 - f1: 0.9954 - 63ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9944 - recall: 0.9966 - f1: 0.9955 - 63ms/step
step 32/50 - loss: 0.6455 - precision: 0.9946 - recall: 0.9967 - f1: 0.9957 - 63ms/step
step 33/50 - loss: 0.3585 - precision: 0.9948 - recall: 0.9968 - f1: 0.9958 - 63ms/step
step 34/50 - loss: 0.3079 - precision: 0.9940 - recall: 0.9965 - f1: 0.9952 - 63ms/step
step 35/50 - loss: 0.7552 - precision: 0.9942 - recall: 0.9966 - f1: 0.9954 - 63ms/step
step 36/50 - loss: 0.7057 - precision: 0.9933 - recall: 0.9964 - f1: 0.9948 - 62ms/step
step 37/50 - loss: 0.0671 - precision: 0.9931 - recall: 0.9963 - f1: 0.9947 - 62ms/step
step 38/50 - loss: 10.5974 - precision: 0.9927 - recall: 0.9960 - f1: 0.9944 - 62ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9921 - recall: 0.9958 - f1: 0.9940 - 62ms/step
step 40/50 - loss: 0.6789 - precision: 0.9921 - recall: 0.9958 - f1: 0.9939 - 62ms/step
step 41/50 - loss: 0.6254 - precision: 0.9920 - recall: 0.9957 - f1: 0.9938 - 62ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9949 - f1: 0.9930 - 62ms/step
step 43/50 - loss: 0.3047 - precision: 0.9914 - recall: 0.9950 - f1: 0.9932 - 62ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9915 - recall: 0.9949 - f1: 0.9932 - 62ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9947 - f1: 0.9929 - 62ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9907 - recall: 0.9945 - f1: 0.9926 - 62ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9908 - recall: 0.9944 - f1: 0.9926 - 62ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9908 - recall: 0.9943 - f1: 0.9925 - 62ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9910 - recall: 0.9944 - f1: 0.9927 - 62ms/step
step 50/50 - loss: 0.6299 - precision: 0.9907 - recall: 0.9943 - f1: 0.9925 - 62ms/step
save checkpoint at /home/aistudio/results/5
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.6255 - precision: 0.9634 - recall: 0.9787 - f1: 0.9710 - 41ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9714 - recall: 0.9842 - f1: 0.9778 - 38ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9774 - recall: 0.9860 - f1: 0.9817 - 38ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9740 - recall: 0.9843 - f1: 0.9791 - 37ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9772 - recall: 0.9864 - f1: 0.9817 - 36ms/step
step 6/6 - loss: 0.2182 - precision: 0.9758 - recall: 0.9860 - f1: 0.9809 - 36ms/step
Eval samples: 192
Epoch 7/10
step  1/50 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9947 - f1: 0.9921 - 64ms/step
step  2/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9948 - f1: 0.9922 - 68ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9947 - f1: 0.9921 - 65ms/step
step  4/50 - loss: 0.0000e+00 - precision: 0.9921 - recall: 0.9960 - f1: 0.9941 - 63ms/step
step  5/50 - loss: 0.3063 - precision: 0.9937 - recall: 0.9968 - f1: 0.9953 - 64ms/step
step  6/50 - loss: 0.9612 - precision: 0.9895 - recall: 0.9947 - f1: 0.9921 - 63ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9947 - f1: 0.9921 - 63ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9908 - recall: 0.9954 - f1: 0.9931 - 63ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9884 - recall: 0.9942 - f1: 0.9913 - 63ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9947 - f1: 0.9921 - 63ms/step
step 11/50 - loss: 0.4619 - precision: 0.9905 - recall: 0.9952 - f1: 0.9929 - 63ms/step
step 12/50 - loss: 0.6074 - precision: 0.9904 - recall: 0.9952 - f1: 0.9928 - 63ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9912 - recall: 0.9956 - f1: 0.9934 - 62ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9907 - recall: 0.9955 - f1: 0.9931 - 62ms/step
step 15/50 - loss: 0.6124 - precision: 0.9913 - recall: 0.9958 - f1: 0.9935 - 62ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9918 - recall: 0.9961 - f1: 0.9939 - 62ms/step
step 17/50 - loss: 0.6040 - precision: 0.9923 - recall: 0.9963 - f1: 0.9943 - 62ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9927 - recall: 0.9965 - f1: 0.9946 - 61ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9926 - recall: 0.9964 - f1: 0.9945 - 61ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9911 - recall: 0.9948 - f1: 0.9929 - 61ms/step
step 21/50 - loss: 0.6769 - precision: 0.9915 - recall: 0.9950 - f1: 0.9933 - 61ms/step
step 22/50 - loss: 0.3223 - precision: 0.9919 - recall: 0.9952 - f1: 0.9936 - 61ms/step
step 23/50 - loss: 0.6046 - precision: 0.9923 - recall: 0.9954 - f1: 0.9939 - 61ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9913 - recall: 0.9952 - f1: 0.9932 - 61ms/step
step 25/50 - loss: 0.8447 - precision: 0.9916 - recall: 0.9954 - f1: 0.9935 - 61ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9920 - recall: 0.9956 - f1: 0.9938 - 61ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9923 - recall: 0.9957 - f1: 0.9940 - 61ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9925 - recall: 0.9959 - f1: 0.9942 - 61ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9928 - recall: 0.9960 - f1: 0.9944 - 60ms/step
step 30/50 - loss: 0.7634 - precision: 0.9930 - recall: 0.9962 - f1: 0.9946 - 61ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9933 - recall: 0.9963 - f1: 0.9948 - 60ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9930 - recall: 0.9962 - f1: 0.9946 - 60ms/step
step 33/50 - loss: 0.3905 - precision: 0.9932 - recall: 0.9963 - f1: 0.9948 - 60ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9934 - recall: 0.9965 - f1: 0.9949 - 60ms/step
step 35/50 - loss: 0.2703 - precision: 0.9930 - recall: 0.9960 - f1: 0.9945 - 60ms/step
step 36/50 - loss: 0.2559 - precision: 0.9932 - recall: 0.9961 - f1: 0.9946 - 60ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9934 - recall: 0.9962 - f1: 0.9948 - 60ms/step
step 38/50 - loss: 0.2856 - precision: 0.9935 - recall: 0.9963 - f1: 0.9949 - 60ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9933 - recall: 0.9961 - f1: 0.9947 - 60ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9935 - recall: 0.9962 - f1: 0.9948 - 60ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9936 - recall: 0.9963 - f1: 0.9950 - 60ms/step
step 42/50 - loss: 0.6387 - precision: 0.9935 - recall: 0.9961 - f1: 0.9948 - 60ms/step
step 43/50 - loss: 0.5856 - precision: 0.9937 - recall: 0.9962 - f1: 0.9950 - 60ms/step
step 44/50 - loss: 0.5878 - precision: 0.9935 - recall: 0.9962 - f1: 0.9948 - 60ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9936 - recall: 0.9963 - f1: 0.9949 - 60ms/step
step 46/50 - loss: 0.2696 - precision: 0.9938 - recall: 0.9964 - f1: 0.9951 - 60ms/step
step 47/50 - loss: 0.6053 - precision: 0.9939 - recall: 0.9964 - f1: 0.9952 - 60ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9940 - recall: 0.9965 - f1: 0.9953 - 60ms/step
step 49/50 - loss: 0.2930 - precision: 0.9939 - recall: 0.9965 - f1: 0.9952 - 60ms/step
step 50/50 - loss: 0.3184 - precision: 0.9937 - recall: 0.9964 - f1: 0.9951 - 60ms/step
save checkpoint at /home/aistudio/results/6
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.5760 - precision: 0.9738 - recall: 0.9894 - f1: 0.9815 - 45ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9869 - recall: 0.9947 - f1: 0.9908 - 39ms/step
step 3/6 - loss: 0.0000e+00 - precision: 0.9878 - recall: 0.9930 - f1: 0.9904 - 39ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9856 - recall: 0.9908 - f1: 0.9882 - 37ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9864 - recall: 0.9916 - f1: 0.9890 - 36ms/step
step 6/6 - loss: 0.0000e+00 - precision: 0.9887 - recall: 0.9930 - f1: 0.9908 - 36ms/step
Eval samples: 192
Epoch 8/10
step  1/50 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9948 - f1: 0.9922 - 59ms/step
step  2/50 - loss: 0.2567 - precision: 0.9948 - recall: 0.9974 - f1: 0.9961 - 59ms/step
step  3/50 - loss: 0.0000e+00 - precision: 0.9965 - recall: 0.9983 - f1: 0.9974 - 59ms/step
step  4/50 - loss: 0.2535 - precision: 0.9974 - recall: 0.9987 - f1: 0.9980 - 60ms/step
step  5/50 - loss: 0.5785 - precision: 0.9979 - recall: 0.9990 - f1: 0.9984 - 60ms/step
step  6/50 - loss: 0.5644 - precision: 0.9948 - recall: 0.9983 - f1: 0.9965 - 60ms/step
step  7/50 - loss: 0.0000e+00 - precision: 0.9955 - recall: 0.9985 - f1: 0.9970 - 59ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9961 - recall: 0.9987 - f1: 0.9974 - 59ms/step
step  9/50 - loss: 0.5706 - precision: 0.9965 - recall: 0.9988 - f1: 0.9977 - 60ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9969 - recall: 0.9990 - f1: 0.9979 - 59ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9972 - recall: 0.9990 - f1: 0.9981 - 60ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9974 - recall: 0.9991 - f1: 0.9983 - 60ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9976 - recall: 0.9992 - f1: 0.9984 - 60ms/step
step 14/50 - loss: 0.0000e+00 - precision: 0.9978 - recall: 0.9993 - f1: 0.9985 - 60ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9979 - recall: 0.9993 - f1: 0.9986 - 60ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9980 - recall: 0.9993 - f1: 0.9987 - 60ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9975 - recall: 0.9988 - f1: 0.9982 - 60ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9977 - recall: 0.9988 - f1: 0.9983 - 60ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9978 - recall: 0.9989 - f1: 0.9983 - 60ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9979 - recall: 0.9990 - f1: 0.9984 - 60ms/step
step 21/50 - loss: 0.5621 - precision: 0.9980 - recall: 0.9990 - f1: 0.9985 - 60ms/step
step 22/50 - loss: 0.0000e+00 - precision: 0.9981 - recall: 0.9990 - f1: 0.9986 - 60ms/step
step 23/50 - loss: 0.3005 - precision: 0.9982 - recall: 0.9991 - f1: 0.9986 - 60ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9983 - recall: 0.9991 - f1: 0.9987 - 60ms/step
step 25/50 - loss: 0.2283 - precision: 0.9979 - recall: 0.9990 - f1: 0.9984 - 60ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9976 - recall: 0.9988 - f1: 0.9982 - 60ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9977 - recall: 0.9988 - f1: 0.9983 - 60ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9978 - recall: 0.9989 - f1: 0.9983 - 60ms/step
step 29/50 - loss: 0.5429 - precision: 0.9978 - recall: 0.9989 - f1: 0.9984 - 60ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9979 - recall: 0.9990 - f1: 0.9984 - 60ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9980 - recall: 0.9990 - f1: 0.9985 - 60ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9980 - recall: 0.9990 - f1: 0.9985 - 60ms/step
step 33/50 - loss: 0.5405 - precision: 0.9981 - recall: 0.9990 - f1: 0.9986 - 60ms/step
step 34/50 - loss: 0.5425 - precision: 0.9982 - recall: 0.9991 - f1: 0.9986 - 60ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9982 - recall: 0.9991 - f1: 0.9987 - 60ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9983 - recall: 0.9991 - f1: 0.9987 - 60ms/step
step 37/50 - loss: 0.0167 - precision: 0.9983 - recall: 0.9992 - f1: 0.9987 - 60ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9983 - recall: 0.9992 - f1: 0.9988 - 60ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9981 - recall: 0.9989 - f1: 0.9985 - 60ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9982 - recall: 0.9990 - f1: 0.9986 - 60ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9982 - recall: 0.9990 - f1: 0.9986 - 60ms/step
step 42/50 - loss: 0.2186 - precision: 0.9983 - recall: 0.9990 - f1: 0.9986 - 60ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9983 - recall: 0.9990 - f1: 0.9987 - 60ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9981 - recall: 0.9988 - f1: 0.9985 - 60ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9981 - recall: 0.9988 - f1: 0.9985 - 60ms/step
step 46/50 - loss: 0.5504 - precision: 0.9982 - recall: 0.9989 - f1: 0.9985 - 60ms/step
step 47/50 - loss: 0.2056 - precision: 0.9982 - recall: 0.9989 - f1: 0.9986 - 60ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9980 - recall: 0.9987 - f1: 0.9984 - 60ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9975 - recall: 0.9983 - f1: 0.9979 - 60ms/step
step 50/50 - loss: 0.0000e+00 - precision: 0.9976 - recall: 0.9983 - f1: 0.9980 - 59ms/step
save checkpoint at /home/aistudio/results/7
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.5255 - precision: 0.9789 - recall: 0.9894 - f1: 0.9841 - 45ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9947 - f1: 0.9921 - 39ms/step
step 3/6 - loss: 0.5275 - precision: 0.9843 - recall: 0.9912 - f1: 0.9878 - 39ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9831 - recall: 0.9908 - f1: 0.9869 - 37ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9844 - recall: 0.9916 - f1: 0.9880 - 36ms/step
step 6/6 - loss: 0.0000e+00 - precision: 0.9870 - recall: 0.9930 - f1: 0.9900 - 36ms/step
Eval samples: 192
Epoch 9/10
step  1/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 59ms/step
step  2/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 57ms/step
step  3/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 58ms/step
step  4/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 59ms/step
step  5/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 61ms/step
step  6/50 - loss: 0.2051 - precision: 0.9974 - recall: 0.9991 - f1: 0.9983 - 61ms/step
step  7/50 - loss: 0.6132 - precision: 0.9978 - recall: 0.9993 - f1: 0.9985 - 61ms/step
step  8/50 - loss: 0.2495 - precision: 0.9980 - recall: 0.9993 - f1: 0.9987 - 61ms/step
step  9/50 - loss: 0.0000e+00 - precision: 0.9983 - recall: 0.9994 - f1: 0.9988 - 61ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9984 - recall: 0.9995 - f1: 0.9990 - 61ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9986 - recall: 0.9995 - f1: 0.9990 - 60ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9987 - recall: 0.9996 - f1: 0.9991 - 60ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9988 - recall: 0.9996 - f1: 0.9992 - 60ms/step
step 14/50 - loss: 0.2281 - precision: 0.9989 - recall: 0.9996 - f1: 0.9993 - 60ms/step
step 15/50 - loss: 0.5159 - precision: 0.9990 - recall: 0.9997 - f1: 0.9993 - 60ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9990 - recall: 0.9997 - f1: 0.9993 - 60ms/step
step 17/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9997 - f1: 0.9994 - 60ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9997 - f1: 0.9994 - 60ms/step
step 19/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9997 - f1: 0.9994 - 61ms/step
step 20/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9997 - f1: 0.9995 - 61ms/step
step 21/50 - loss: 0.0000e+00 - precision: 0.9993 - recall: 0.9998 - f1: 0.9995 - 61ms/step
step 22/50 - loss: 0.5242 - precision: 0.9993 - recall: 0.9998 - f1: 0.9995 - 61ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9993 - recall: 0.9998 - f1: 0.9995 - 60ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9993 - recall: 0.9998 - f1: 0.9996 - 60ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9994 - recall: 0.9998 - f1: 0.9996 - 60ms/step
step 26/50 - loss: 0.0000e+00 - precision: 0.9994 - recall: 0.9998 - f1: 0.9996 - 60ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9994 - recall: 0.9998 - f1: 0.9996 - 60ms/step
step 28/50 - loss: 0.2025 - precision: 0.9994 - recall: 0.9998 - f1: 0.9996 - 60ms/step
step 29/50 - loss: 0.0000e+00 - precision: 0.9995 - recall: 0.9998 - f1: 0.9996 - 60ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9995 - recall: 0.9998 - f1: 0.9997 - 60ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9995 - recall: 0.9998 - f1: 0.9997 - 60ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9995 - recall: 0.9998 - f1: 0.9997 - 60ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9995 - f1: 0.9994 - 60ms/step
step 34/50 - loss: 0.0912 - precision: 0.9992 - recall: 0.9995 - f1: 0.9994 - 60ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9993 - recall: 0.9996 - f1: 0.9994 - 60ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9990 - recall: 0.9993 - f1: 0.9991 - 60ms/step
step 37/50 - loss: 0.0000e+00 - precision: 0.9990 - recall: 0.9993 - f1: 0.9992 - 60ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9990 - recall: 0.9993 - f1: 0.9992 - 60ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9993 - f1: 0.9992 - 60ms/step
step 40/50 - loss: 0.1934 - precision: 0.9991 - recall: 0.9993 - f1: 0.9992 - 60ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9994 - f1: 0.9992 - 60ms/step
step 42/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9994 - f1: 0.9993 - 60ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9994 - f1: 0.9993 - 60ms/step
step 44/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9994 - f1: 0.9993 - 60ms/step
step 45/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9994 - f1: 0.9993 - 60ms/step
step 46/50 - loss: 0.1652 - precision: 0.9992 - recall: 0.9994 - f1: 0.9993 - 60ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9994 - f1: 0.9993 - 59ms/step
step 48/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9995 - f1: 0.9993 - 59ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9993 - recall: 0.9995 - f1: 0.9994 - 59ms/step
step 50/50 - loss: 0.4803 - precision: 0.9993 - recall: 0.9995 - f1: 0.9994 - 59ms/step
save checkpoint at /home/aistudio/results/8
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.4800 - precision: 0.9789 - recall: 0.9894 - f1: 0.9841 - 45ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9947 - f1: 0.9921 - 39ms/step
step 3/6 - loss: 0.0858 - precision: 0.9895 - recall: 0.9930 - f1: 0.9913 - 39ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9869 - recall: 0.9908 - f1: 0.9889 - 37ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9875 - recall: 0.9916 - f1: 0.9895 - 36ms/step
step 6/6 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9930 - f1: 0.9913 - 36ms/step
Eval samples: 192
Epoch 10/10
step  1/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 67ms/step
step  2/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 63ms/step
step  3/50 - loss: 0.0000e+00 - precision: 1.0000 - recall: 1.0000 - f1: 1.0000 - 64ms/step
step  4/50 - loss: 0.4913 - precision: 0.9974 - recall: 0.9974 - f1: 0.9974 - 64ms/step
step  5/50 - loss: 0.0000e+00 - precision: 0.9979 - recall: 0.9979 - f1: 0.9979 - 63ms/step
step  6/50 - loss: 0.0000e+00 - precision: 0.9983 - recall: 0.9983 - f1: 0.9983 - 61ms/step
step  7/50 - loss: 0.1570 - precision: 0.9985 - recall: 0.9985 - f1: 0.9985 - 61ms/step
step  8/50 - loss: 0.0000e+00 - precision: 0.9987 - recall: 0.9987 - f1: 0.9987 - 61ms/step
step  9/50 - loss: 0.4733 - precision: 0.9988 - recall: 0.9988 - f1: 0.9988 - 61ms/step
step 10/50 - loss: 0.0000e+00 - precision: 0.9990 - recall: 0.9990 - f1: 0.9990 - 60ms/step
step 11/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9991 - f1: 0.9991 - 60ms/step
step 12/50 - loss: 0.0000e+00 - precision: 0.9991 - recall: 0.9991 - f1: 0.9991 - 60ms/step
step 13/50 - loss: 0.0000e+00 - precision: 0.9992 - recall: 0.9992 - f1: 0.9992 - 60ms/step
step 14/50 - loss: 0.2139 - precision: 0.9993 - recall: 0.9993 - f1: 0.9993 - 60ms/step
step 15/50 - loss: 0.0000e+00 - precision: 0.9993 - recall: 0.9993 - f1: 0.9993 - 60ms/step
step 16/50 - loss: 0.0000e+00 - precision: 0.9993 - recall: 0.9993 - f1: 0.9993 - 60ms/step
step 17/50 - loss: 0.1506 - precision: 0.9994 - recall: 0.9994 - f1: 0.9994 - 60ms/step
step 18/50 - loss: 0.0000e+00 - precision: 0.9994 - recall: 0.9994 - f1: 0.9994 - 60ms/step
step 19/50 - loss: 0.4651 - precision: 0.9994 - recall: 0.9994 - f1: 0.9994 - 60ms/step
step 20/50 - loss: 0.4634 - precision: 0.9995 - recall: 0.9995 - f1: 0.9995 - 60ms/step
step 21/50 - loss: 0.1432 - precision: 0.9995 - recall: 0.9995 - f1: 0.9995 - 60ms/step
step 22/50 - loss: 0.1543 - precision: 0.9995 - recall: 0.9995 - f1: 0.9995 - 60ms/step
step 23/50 - loss: 0.0000e+00 - precision: 0.9995 - recall: 0.9995 - f1: 0.9995 - 60ms/step
step 24/50 - loss: 0.0000e+00 - precision: 0.9996 - recall: 0.9996 - f1: 0.9996 - 60ms/step
step 25/50 - loss: 0.0000e+00 - precision: 0.9996 - recall: 0.9996 - f1: 0.9996 - 60ms/step
step 26/50 - loss: 0.1355 - precision: 0.9996 - recall: 0.9996 - f1: 0.9996 - 60ms/step
step 27/50 - loss: 0.0000e+00 - precision: 0.9996 - recall: 0.9996 - f1: 0.9996 - 60ms/step
step 28/50 - loss: 0.0000e+00 - precision: 0.9996 - recall: 0.9996 - f1: 0.9996 - 60ms/step
step 29/50 - loss: 0.1309 - precision: 0.9996 - recall: 0.9996 - f1: 0.9996 - 60ms/step
step 30/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 31/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 32/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 33/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 34/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 35/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 36/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 37/50 - loss: 0.1328 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 38/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 39/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 40/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 41/50 - loss: 0.0000e+00 - precision: 0.9997 - recall: 0.9997 - f1: 0.9997 - 60ms/step
step 42/50 - loss: 0.1224 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 43/50 - loss: 0.0000e+00 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 44/50 - loss: 0.4431 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 45/50 - loss: 0.1294 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 46/50 - loss: 0.0000e+00 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 47/50 - loss: 0.0000e+00 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 48/50 - loss: 0.4390 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 49/50 - loss: 0.0000e+00 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
step 50/50 - loss: 0.4533 - precision: 0.9998 - recall: 0.9998 - f1: 0.9998 - 60ms/step
save checkpoint at /home/aistudio/results/9
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 1/6 - loss: 0.4335 - precision: 0.9789 - recall: 0.9894 - f1: 0.9841 - 42ms/step
step 2/6 - loss: 0.0000e+00 - precision: 0.9895 - recall: 0.9947 - f1: 0.9921 - 38ms/step
step 3/6 - loss: 0.0765 - precision: 0.9895 - recall: 0.9930 - f1: 0.9913 - 38ms/step
step 4/6 - loss: 0.0000e+00 - precision: 0.9869 - recall: 0.9908 - f1: 0.9889 - 36ms/step
step 5/6 - loss: 0.0000e+00 - precision: 0.9875 - recall: 0.9916 - f1: 0.9895 - 36ms/step
step 6/6 - loss: 0.0000e+00 - precision: 0.9896 - recall: 0.9930 - f1: 0.9913 - 36ms/step
Eval samples: 192
save checkpoint at /home/aistudio/results/final
In [19]
model.evaluate(eval_data=test_loader)
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
step 6/6 - loss: 0.4526 - precision: 0.9818 - recall: 0.9878 - f1: 0.9848 - 37ms/step
Eval samples: 192
{'loss': [0.45263672],'precision': 0.981786643538595,'recall': 0.987783595113438,'f1': 0.9847759895606786}
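The precision/recall/f1 values reported in these logs are entity-level (chunk) metrics: a predicted entity counts as correct only if both its span and its type match a gold entity exactly. A minimal sketch of that computation (hypothetical `chunk_prf` helper, not the paddlenlp ChunkEvaluator API):

```python
# Entity-level (chunk) metrics: an entity is correct only if both its
# character span and its label match a gold entity exactly.
def chunk_prf(true_entities, pred_entities):
    """Each argument is a set of (start, end, label) tuples."""
    correct = len(true_entities & pred_entities)
    precision = correct / len(pred_entities) if pred_entities else 0.0
    recall = correct / len(true_entities) if true_entities else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 2, "P"), (2, 13, "T"), (13, 16, "A1")}
pred = {(0, 2, "P"), (2, 13, "T"), (13, 17, "A1")}  # A1 span is off by one
p, r, f = chunk_prf(gold, pred)
# 2 of 3 predictions match exactly, so precision = recall = f1 = 2/3
```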
The model's F1 score on the validation set is clearly higher than before.
In [20]
outputs, lens, decodes = model.predict(test_data=test_loader)
preds = parse_decodes1(test_ds, decodes, lens, label_vocab)
print('\n'.join(preds[:10]))
Predict begin...
step 6/6 [==============================] - 43ms/step
Predict samples: 192
('黑龙江省', 'A1')('双鸭山市', 'A2')('尖山区', 'A3')('八马路与东平行路交叉口北40米', 'A4')('韦业涛', 'P')('18600009172', 'T')
('广西壮族自治区', 'A1')('桂林市', 'A2')('雁山区', 'A3')('雁山镇西龙村老年活动中心', 'A4')('17610348888', 'T')('羊卓卫', 'P')
('15652864561', 'T')('河南省', 'A1')('开封市', 'A2')('顺河回族区', 'A3')('顺河区公园路32号', 'A4')('赵本山', 'P')
('河北省', 'A1')('唐山市', 'A2')('玉田县', 'A3')('无终大街159号', 'A4')('18614253058', 'T')('尚汉生', 'P')
('台湾', 'A1')('台中市', 'A2')('北区', 'A3')('北区锦新街18号', 'A4')('18511226708', 'T')('蓟丽', 'P')
('廖梓琪', 'P')('18514743222', 'T')('湖北省', 'A1')('宜昌市', 'A2')('长阳土家族自治县', 'A3')('贺家坪镇贺家坪村一组临河1号', 'A4')
('江苏省', 'A1')('南通市', 'A2')('海门市', 'A3')('孝威村孝威路88号', 'A4')('18611840623', 'T')('计星仪', 'P')
('17601674746', 'T')('赵春丽', 'P')('内蒙古自治区', 'A1')('乌兰察布市', 'A2')('凉城县', 'A3')('新建街', 'A4')
('云南省', 'A1')('临沧市', 'A2')('耿马傣族佤族自治县', 'A3')('鑫源路法院对面', 'A4')('许贞爱', 'P')('18510566685', 'T')
('四川省', 'A1')('成都市', 'A2')('双流区', 'A3')('东升镇北仓路196号', 'A4')('耿丕岭', 'P')('18513466161', 'T')
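parse_decodes1 above reassembles per-character BIO tags into (text, label) chunks, as described in the sequence-labeling section. A minimal sketch of that merging step (hypothetical `merge_bio` helper, not the utils implementation):

```python
def merge_bio(chars, tags):
    """Merge per-character BIO tags into (span_text, label) chunks."""
    chunks = []
    for ch, tag in zip(chars, tags):
        if tag == "O":                  # "O" characters belong to no entity
            continue
        label, pos = tag.rsplit("-", 1)  # e.g. "A1-B" -> ("A1", "B")
        if pos == "B" or not chunks or chunks[-1][1] != label:
            chunks.append([ch, label])   # start a new chunk
        else:
            chunks[-1][0] += ch          # extend the current chunk
    return [tuple(c) for c in chunks]

chars = list("张三1862")
tags = ["P-B", "P-I", "T-B", "T-I", "T-I", "T-I"]
# merge_bio(chars, tags) → [("张三", "P"), ("1862", "T")]
```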
PART C Going Further: Using a Pretrained Model
Above, we built the model from PaddleNLP's basic network components BiGRU, CRF, and ViterbiDecoder. Next, let's switch to a pretrained model and see how it performs.
In [23]
from paddlenlp.transformers import ErnieTokenizer, ErnieForTokenClassification
from utils import convert_example
C.1 Data Preparation
Dataset definition: we reuse the datasets from the previous parts.
In [24]
# Create dataset, tokenizer and dataloader.
train_ds, dev_ds, test_ds = load_dataset(datafiles=('./express_ner/train.txt', './express_ner/dev.txt', './express_ner/test.txt'))
Data processing: load the pretrained model's tokenizer in one line
The pretrained model ERNIE processes Chinese text at the character level. PaddleNLP ships a matching tokenizer for each of its pretrained models; simply specify the model name to load it. The tokenizer converts raw input text into the input format the model accepts. For a more detailed introduction to tokenizers, see "Optimizing Sentiment Analysis with the PaddleNLP Semantic Pretrained Model ERNIE".
Here we use Python's partial functions. functools.partial takes a function as its first argument and some of that function's arguments as the remaining arguments, and returns a new function with those arguments fixed. When a function takes many parameters, partial lets you freeze part of them so that each call is simpler.
In [25]
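The functools.partial pattern used in the next cell, shown in isolation (toy `convert` function, purely illustrative):

```python
from functools import partial

def convert(text, tokenizer=None, label_vocab=None):
    # Stand-in for convert_example: just records which config it was called with.
    return (text, tokenizer, label_vocab)

# Fix the keyword arguments once; the mapped function then only needs `text`.
trans_func = partial(convert, tokenizer="ernie-tokenizer", label_vocab={"O": 0})
result = trans_func("张三18625584663")
# → ("张三18625584663", "ernie-tokenizer", {"O": 0})
```

This is why `train_ds.map(trans_func)` can apply a one-argument function even though `convert_example` takes three parameters.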
from functools import partial

label_vocab = load_dict('./conf/tag.dic')
# Name of the pretrained model to use
MODEL_NAME = "ernie-1.0"
tokenizer = ErnieTokenizer.from_pretrained(MODEL_NAME)

trans_func = partial(convert_example, tokenizer=tokenizer, label_vocab=label_vocab)
train_ds.map(trans_func)
dev_ds.map(trans_func)
test_ds.map(trans_func)
print(train_ds[0])
[2021-04-28 21:19:54,996] [    INFO] - Found /home/aistudio/.paddlenlp/models/ernie-1.0/vocab.txt
([1, 208, 515, 515, 249, 540, 249, 540, 540, 540, 589, 589, 803, 838, 2914, 1222, 1734, 244, 368, 797, 99, 32, 863, 308, 457, 2778, 484, 167, 436, 930, 192, 233, 634, 99, 213, 40, 317, 540, 256, 2], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 40, [12, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 0, 1, 1, 4, 5, 5, 6, 7, 7, 8, 9, 9, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12])
Data loading
Use the paddle.io.DataLoader API to load data asynchronously with multiple workers.
In [26]
ignore_label = -1

batchify_fn = lambda samples, fn=Tuple(
    Pad(axis=0, pad_val=tokenizer.pad_token_id),       # input_ids
    Pad(axis=0, pad_val=tokenizer.pad_token_type_id),  # token_type_ids
    Stack(),                                           # seq_len
    Pad(axis=0, pad_val=ignore_label)                  # labels
): fn(samples)

train_loader = paddle.io.DataLoader(
    dataset=train_ds, batch_size=200, return_list=True, collate_fn=batchify_fn)
dev_loader = paddle.io.DataLoader(
    dataset=dev_ds, batch_size=200, return_list=True, collate_fn=batchify_fn)
test_loader = paddle.io.DataLoader(
    dataset=test_ds, batch_size=200, return_list=True, collate_fn=batchify_fn)
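The Tuple(Pad, Pad, Stack, Pad) collate function pads each field of a batch independently. A plain-Python sketch of the idea (hypothetical `pad`/`batchify` helpers, not the paddlenlp.data implementations):

```python
def pad(seqs, pad_val):
    """Pad a batch of variable-length lists to the batch max length (like Pad)."""
    max_len = max(len(s) for s in seqs)
    return [s + [pad_val] * (max_len - len(s)) for s in seqs]

def batchify(samples, pad_token_id=0, ignore_label=-1):
    """samples: list of (input_ids, token_type_ids, seq_len, labels) tuples."""
    input_ids, token_type_ids, seq_lens, labels = zip(*samples)
    return (pad(list(input_ids), pad_token_id),
            pad(list(token_type_ids), 0),
            list(seq_lens),                   # Stack: fixed-size, no padding needed
            pad(list(labels), ignore_label))  # padded positions skipped by the loss

batch = [([1, 5, 9, 2], [0, 0, 0, 0], 4, [12, 0, 1, 12]),
         ([1, 7, 2], [0, 0, 0], 3, [12, 4, 12])]
ids, types, lens, labs = batchify(batch)
# ids → [[1, 5, 9, 2], [1, 7, 2, 0]]; labs → [[12, 0, 1, 12], [12, 4, 12, -1]]
```

Padding labels with `ignore_label = -1` is what lets the cross-entropy loss below skip the padded positions via `ignore_index`.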
C.2 Loading the Model
Calling paddlenlp.transformers.ErnieForTokenClassification.from_pretrained() with the desired model name and the number of token-classification classes is all it takes to define the network.
In [27]
# Define the model network and its loss
model = ErnieForTokenClassification.from_pretrained("ernie-1.0", num_classes=len(label_vocab))
[2021-04-28 21:19:57,135] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/ernie-1.0/ernie_v1_chn_base.pdparams
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1303: UserWarning: Skip loading for classifier.weight. classifier.weight is not found in the provided dict.
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1303: UserWarning: Skip loading for classifier.bias. classifier.bias is not found in the provided dict.
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
C.3 Model Configuration
For fine-tuning Transformer models such as ERNIE/BERT, a well-suited learning-rate strategy is a dynamic schedule with warmup (Figure 6: dynamic learning-rate schedule). For simplicity, the cell below uses a constant learning rate instead.
In [28]
metric = ChunkEvaluator(label_list=label_vocab.keys(), suffix=True)
loss_fn = paddle.nn.loss.CrossEntropyLoss(ignore_index=ignore_label)
optimizer = paddle.optimizer.AdamW(learning_rate=2e-5, parameters=model.parameters())
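The warmup-then-decay shape from Figure 6 can be sketched in plain Python (hypothetical `warmup_linear_decay` helper; PaddleNLP's actual schedule classes differ, and the cell above simply uses a constant 2e-5):

```python
def warmup_linear_decay(step, total_steps, warmup_steps, base_lr=2e-5):
    """Linearly ramp the LR up during warmup, then linearly decay it to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

total, warmup = 80, 8   # e.g. 10 epochs x 8 steps per epoch, 10% warmup
lrs = [warmup_linear_decay(s, total, warmup) for s in range(total)]
# The LR rises over the first 8 steps, peaks at 2e-5, then decays toward 0
```

Warming up avoids large, destabilizing updates to the pretrained weights in the very first steps, when the randomly initialized classifier head produces large gradients.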
C.4 Model Training and Evaluation
Model training typically repeats the following steps:
1. Fetch a batch of data from the dataloader.
2. Feed the batch to the model for the forward pass.
3. Pass the forward results to the loss function to compute the loss, and to the metric to compute the evaluation scores.
4. Backpropagate the loss and update the parameters.
After each training epoch, the program runs an evaluation to gauge the current model.
In [29]
step = 0
for epoch in range(10):
    # Switch the model to training mode
    model.train()
    for idx, (input_ids, token_type_ids, length, labels) in enumerate(train_loader):
        logits = model(input_ids, token_type_ids)
        loss = paddle.mean(loss_fn(logits, labels))
        loss.backward()
        optimizer.step()
        optimizer.clear_grad()
        step += 1
        print("epoch:%d - step:%d - loss: %f" % (epoch, step, loss))
    evaluate(model, metric, dev_loader)
    paddle.save(model.state_dict(), './ernie_result/model_%d.pdparams' % step)
# model.save_pretrained('./checkpoint')
# tokenizer.save_pretrained('./checkpoint')
epoch:0 - step:1 - loss: 2.593708
epoch:0 - step:2 - loss: 2.380349
epoch:0 - step:3 - loss: 2.173395
epoch:0 - step:4 - loss: 2.002056
epoch:0 - step:5 - loss: 1.841832
epoch:0 - step:6 - loss: 1.733456
epoch:0 - step:7 - loss: 1.655601
epoch:0 - step:8 - loss: 1.569723
[2021-04-28 21:20:04,566] [ WARNING] - Compatibility Warning: The params of ChunkEvaluator.compute has been modified. The old version is `inputs`, `lengths`, `predictions`, `labels` while the current version is `lengths`, `predictions`, `labels`.  Please update the usage.
eval precision: 0.048905 - recall: 0.056350 - f1: 0.052364
epoch:1 - step:9 - loss: 1.503557
epoch:1 - step:10 - loss: 1.424570
epoch:1 - step:11 - loss: 1.358536
epoch:1 - step:12 - loss: 1.289387
epoch:1 - step:13 - loss: 1.225305
epoch:1 - step:14 - loss: 1.173034
epoch:1 - step:15 - loss: 1.131244
epoch:1 - step:16 - loss: 1.059373
eval precision: 0.283570 - recall: 0.285955 - f1: 0.284757
epoch:2 - step:17 - loss: 1.022154
epoch:2 - step:18 - loss: 0.967207
epoch:2 - step:19 - loss: 0.920802
epoch:2 - step:20 - loss: 0.872374
epoch:2 - step:21 - loss: 0.823026
epoch:2 - step:22 - loss: 0.779837
epoch:2 - step:23 - loss: 0.748339
epoch:2 - step:24 - loss: 0.692416
eval precision: 0.713350 - recall: 0.732548 - f1: 0.722822
epoch:3 - step:25 - loss: 0.666814
epoch:3 - step:26 - loss: 0.625810
epoch:3 - step:27 - loss: 0.590307
epoch:3 - step:28 - loss: 0.545524
epoch:3 - step:29 - loss: 0.520990
epoch:3 - step:30 - loss: 0.490743
epoch:3 - step:31 - loss: 0.468064
epoch:3 - step:32 - loss: 0.424753
eval precision: 0.888889 - recall: 0.908326 - f1: 0.898502
epoch:4 - step:33 - loss: 0.404591
epoch:4 - step:34 - loss: 0.370703
epoch:4 - step:35 - loss: 0.352661
epoch:4 - step:36 - loss: 0.319894
epoch:4 - step:37 - loss: 0.307366
epoch:4 - step:38 - loss: 0.279396
epoch:4 - step:39 - loss: 0.258565
epoch:4 - step:40 - loss: 0.237150
eval precision: 0.942339 - recall: 0.962153 - f1: 0.952143
epoch:5 - step:41 - loss: 0.225628
epoch:5 - step:42 - loss: 0.202598
epoch:5 - step:43 - loss: 0.194455
epoch:5 - step:44 - loss: 0.173414
epoch:5 - step:45 - loss: 0.169743
epoch:5 - step:46 - loss: 0.156120
epoch:5 - step:47 - loss: 0.140866
epoch:5 - step:48 - loss: 0.124002
eval precision: 0.950413 - recall: 0.967199 - f1: 0.958733
epoch:6 - step:49 - loss: 0.116151
epoch:6 - step:50 - loss: 0.107012
epoch:6 - step:51 - loss: 0.106990
epoch:6 - step:52 - loss: 0.092518
epoch:6 - step:53 - loss: 0.099379
epoch:6 - step:54 - loss: 0.081129
epoch:6 - step:55 - loss: 0.077965
epoch:6 - step:56 - loss: 0.070685
eval precision: 0.972454 - recall: 0.979815 - f1: 0.976121
epoch:7 - step:57 - loss: 0.059776
epoch:7 - step:58 - loss: 0.061571
epoch:7 - step:59 - loss: 0.061044
epoch:7 - step:60 - loss: 0.058208
epoch:7 - step:61 - loss: 0.057555
epoch:7 - step:62 - loss: 0.054626
epoch:7 - step:63 - loss: 0.045650
epoch:7 - step:64 - loss: 0.045877
eval precision: 0.976589 - recall: 0.982338 - f1: 0.979455
epoch:8 - step:65 - loss: 0.040371
epoch:8 - step:66 - loss: 0.041364
epoch:8 - step:67 - loss: 0.041890
epoch:8 - step:68 - loss: 0.040191
epoch:8 - step:69 - loss: 0.039695
epoch:8 - step:70 - loss: 0.034364
epoch:8 - step:71 - loss: 0.030331
epoch:8 - step:72 - loss: 0.031266
eval precision: 0.975853 - recall: 0.985702 - f1: 0.980753
epoch:9 - step:73 - loss: 0.026888
epoch:9 - step:74 - loss: 0.025310
epoch:9 - step:75 - loss: 0.030487
epoch:9 - step:76 - loss: 0.032658
epoch:9 - step:77 - loss: 0.029696
epoch:9 - step:78 - loss: 0.028108
epoch:9 - step:79 - loss: 0.024264
epoch:9 - step:80 - loss: 0.022611
eval precision: 0.982441 - recall: 0.988225 - f1: 0.985325
C.5 Model Prediction
Load the model saved during training and it is ready for prediction. As in the example code below, define your prediction data and call the predict() function for one-line prediction.
In [30]
preds = predict(model, test_loader, test_ds, label_vocab)
file_path = "ernie_results.txt"
with open(file_path, "w", encoding="utf8") as fout:
    fout.write("\n".join(preds))
# Print some examples
print("The results have been saved in the file: %s, some examples are shown below: "% file_path)
print("\n".join(preds[:10]))
The results have been saved in the file: ernie_results.txt, some examples are shown below:
('黑龙江省', 'A1')('双鸭山市', 'A2')('尖山区', 'A3')('八马路与东平行路交叉口北40米', 'A4')('韦业涛', 'P')('18600009172', 'T')
('广西壮族自治区', 'A1')('桂林市', 'A2')('雁山区', 'A3')('雁山镇西龙村老年活动中心', 'A4')('17610348888', 'T')('羊卓卫', 'P')
('15652864561', 'T')('河南省', 'A1')('开封市', 'A2')('顺河回族区', 'A3')('顺河区公园路32号', 'A4')('赵本山', 'P')
('河北省', 'A1')('唐山市', 'A2')('玉田县', 'A3')('无终大街159号', 'A4')('18614253058', 'T')('尚汉生', 'P')
('台湾', 'A1')('台中市', 'A2')('北区', 'A3')('北区', 'A4')('锦新街18号', 'A4')('18511226708', 'T')('蓟丽', 'P')
('廖梓琪', 'P')('18514743222', 'T')('湖北省', 'A1')('宜昌市', 'A2')('长阳土家族自治县', 'A3')('贺家坪镇贺家坪村一组临河1号', 'A4')
('江苏省', 'A1')('南通市', 'A2')('海门市', 'A3')('孝威村孝威路88号', 'A4')('18611840623', 'T')('计星仪', 'P')
('17601674746', 'T')('赵春丽', 'P')('内蒙古自治区', 'A1')('乌兰察布市', 'A2')('凉城县', 'A3')('新建街', 'A4')
('云南省', 'A1')('临沧市', 'A2')('耿马傣族佤族自治县', 'A3')('鑫源路法院对面', 'A4')('许贞爱', 'P')('18510566685', 'T')
('四川省', 'A1')('成都市', 'A2')('双流区', 'A3')('东升镇北仓路196号', 'A4')('耿丕岭', 'P')('18513466161', 'T')
Join the discussion group and learn together
Join the PaddleNLP QQ tech group now and discuss NLP together!
More PaddleNLP examples
Sentence sentiment classification with the seq2vec module
Optimizing sentiment analysis with the pretrained model ERNIE
Express waybill information extraction with Bi-GRU+CRF
Optimizing express waybill information extraction with the pretrained model ERNIE
Automatic couplet generation with a Seq2Seq model
Intelligent poetry writing with the pretrained model ERNIE-GEN
COVID-19 case-count prediction with a TCN network
Reading comprehension with pretrained models
Multi-class text classification on a custom dataset
