I. The Competition Problem

Problem page: https://www.sodic.com.cn/competitions/900010

Background: Having enterprises self-report workplace safety hazards plays an important role in eliminating risks before they grow into accidents. When filling in hazard reports, however, enterprises are often careless and submit exaggerated or fabricated hazard descriptions, which makes regulatory oversight harder. Analyzing the reported hazard texts with big-data methods, identifying enterprises that do not genuinely fulfil their safety responsibilities, and pushing those findings to the regulator enables targeted enforcement, improves the effectiveness of supervision, and strengthens enterprises' sense of responsibility for safety.

Task: The competition provides hazard records filled in by enterprises; participants must use intelligent methods to identify whether a record is an exaggerated or fabricated ("false") report.

II. Solution Approach

Cast as a modeling problem, this is essentially a binary text classification task. Below I work through a baseline, classic NLP models, and deep NLP models, together with a few more recent techniques for squeezing out further gains.
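
For orientation, each record in the data used below carries an id, a four-level hazard category (level_1 to level_4), a free-text hazard description (content), and a binary label used as the target. Before modeling, it is worth checking the columns and the class balance; a minimal sketch, assuming the same ../data/train.csv path the baseline code below reads from:

import pandas as pd

# load the training data and take a quick look at the schema and class balance
df = pd.read_csv("../data/train.csv")
print(df.columns.tolist())                       # expect id, level_1..level_4, content, label
print(df['label'].value_counts(normalize=True))  # share of each class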

1. Baseline model (ALBERT)

The baseline fine-tunes a small Chinese ALBERT: the four category levels and the hazard description are concatenated into one text, the [CLS] vector feeds a softmax classifier, and the weights with the best validation accuracy are kept.

# Baseline (ALBERT)
# encoding = 'utf-8'
import random
import numpy as np
import pandas as pd
from bert4keras.backend import keras, set_gelu
from bert4keras.tokenizers import Tokenizer
from bert4keras.models import build_transformer_model
from bert4keras.optimizers import Adam, extend_with_piecewise_linear_lr
from bert4keras.snippets import sequence_padding, DataGenerator
from bert4keras.snippets import open
from keras.layers import Lambda, Dense

# hyper-parameters and pretrained model paths
set_gelu("tanh")
num_classes = 2
maxlen = 128
batch_size = 32
config_path = "../model/albert_small_zh_google/albert_config_small_google.json"
checkpoint_path = '../model/albert_small_zh_google/albert_model.ckpt'
dict_path = '../model/albert_small_zh_google/vocab.txt'

# build the tokenizer
tokenizer = Tokenizer(dict_path, do_lower_case=True)

# define the model: ALBERT encoder + [CLS] vector + softmax classifier
bert = build_transformer_model(
    config_path=config_path,
    checkpoint_path=checkpoint_path,
    model='albert',
    return_keras_model=False,
)
output = Lambda(lambda x: x[:, 0], name='CLS-token')(bert.model.output)
output = Dense(units=num_classes, activation='softmax',
               kernel_initializer=bert.initializer)(output)
model = keras.models.Model(bert.model.input, output)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=Adam(1e-5),
              metrics=['accuracy'])

# load and prepare the data: concatenate the four category levels and the
# hazard description into one text, then split train/valid at random
df_train_data = pd.read_csv("../data/train.csv")
df_test_data = pd.read_csv("../data/test.csv")
train_data, valid_data, test_data = [], [], []
valid_rate = 0.3
for row_i, data in df_train_data.iterrows():
    id, level_1, level_2, level_3, level_4, content, label = data
    text = str(level_1) + '\t' + str(level_2) + '\t' + str(level_3) + '\t' + \
           str(level_4) + '\t' + str(content)
    if random.random() > valid_rate:
        train_data.append((id, text, int(label)))
    else:
        valid_data.append((id, text, int(label)))
for row_i, data in df_test_data.iterrows():
    id, level_1, level_2, level_3, level_4, content = data
    text = str(level_1) + '\t' + str(level_2) + '\t' + str(level_3) + '\t' + \
           str(level_4) + '\t' + str(content)
    test_data.append((id, text, 0))

# batch generator
class data_generator(DataGenerator):
    def __iter__(self, random=False):
        batch_token_ids, batch_segment_ids, batch_labels = [], [], []
        for is_end, (id, text, label) in self.sample(random):
            token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)
            batch_token_ids.append(token_ids)
            batch_segment_ids.append(segment_ids)
            batch_labels.append([label])
            if len(batch_token_ids) == self.batch_size or is_end:
                batch_token_ids = sequence_padding(batch_token_ids)
                batch_segment_ids = sequence_padding(batch_segment_ids)
                batch_labels = sequence_padding(batch_labels)
                yield [batch_token_ids, batch_segment_ids], batch_labels
                batch_token_ids, batch_segment_ids, batch_labels = [], [], []

# wrap the datasets
train_generator = data_generator(train_data, batch_size)
valid_generator = data_generator(valid_data, batch_size)

# accuracy over a generator (missing from the original snippet; same helper as
# in the deep-model sections below)
def evaluate(data):
    total, right = 0., 0.
    for x_true, y_true in data:
        y_pred = model.predict(x_true).argmax(axis=1)
        y_true = y_true[:, 0]
        total += len(y_true)
        right += (y_true == y_pred).sum()
    return right / total

# evaluate at the end of each epoch and keep the best weights
class Evaluator(keras.callbacks.Callback):
    def __init__(self):
        self.best_val_acc = 0.

    def on_epoch_end(self, epoch, logs=None):
        val_acc = evaluate(valid_generator)
        if val_acc > self.best_val_acc:
            self.best_val_acc = val_acc
            model.save_weights('best_model.weights')
        test_acc = evaluate(valid_generator)
        print(u'val_acc: %.5f, best_val_acc: %.5f, test_acc: %.5f\n' %
              (val_acc, self.best_val_acc, test_acc))

# train the model
evaluator = Evaluator()
model.fit(train_generator.forfit(),
          steps_per_epoch=len(train_generator),
          epochs=10,
          callbacks=[evaluator])

# reload the best weights and report validation accuracy
model.load_weights("best_model.weights")
print(u"final valid acc: %05f\n" % (evaluate(valid_generator)))

# predict on the test set
def data_pred(test_data):
    id_ids, y_pred_ids = [], []
    for id, text, label in test_data:
        token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)
        token_ids = sequence_padding([token_ids])
        segment_ids = sequence_padding([segment_ids])
        y_pred = int(model.predict([token_ids, segment_ids]).argmax(axis=1)[0])
        id_ids.append(id)
        y_pred_ids.append(y_pred)
    return id_ids, y_pred_ids

# run prediction and collect the results
id_ids, y_pred_ids = data_pred(test_data)
df_save = pd.DataFrame()
df_save['id'] = id_ids
df_save['label'] = y_pred_ids

# preview the results
df_save.head()
#    id  label
# 0   0      0
# 1   1      0
# 2   2      1
# 3   3      0
# 4   4      0
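
The baseline above tracks only accuracy, while the classic-model sections below tune a decision threshold against F1. A quick validation-set F1 check for the ALBERT model can be done along these lines (a sketch reusing tokenizer, model, maxlen, sequence_padding, and valid_data from the baseline; it assumes scikit-learn is installed):

from sklearn.metrics import f1_score

def valid_f1(data):
    # run the trained classifier over the held-out split and score it with F1
    y_true, y_pred = [], []
    for id, text, label in data:
        token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)
        prob = model.predict([sequence_padding([token_ids]),
                              sequence_padding([segment_ids])])
        y_pred.append(int(prob.argmax(axis=1)[0]))
        y_true.append(label)
    return f1_score(y_true, y_pred)

print('valid F1: %.5f' % valid_f1(valid_data))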

2. Classic NLP models

Three feature pipelines, each feeding a LightGBM binary classifier trained with 5-fold cross-validation and an F1-based threshold search: character-level TF-IDF, character n-grams, and averaged word2vec vectors.

""" tf-idf """import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
import lightgbm as lgbbase_dir = "../"
train = pd.read_csv(base_dir + "train.csv")
test = pd.read_csv(base_dir + "test.csv")
results = pd.read_csv(base_dir + "results.csv")# 数据去重
train = train.drop_duplicates(['level_1', 'level_2', 'level_3', 'level_4', 'content', 'label'])train['text'] = (train['content']).map(lambda x:' '.join(list(str(x))))
test['text'] = (test['content']).map(lambda x:' '.join(list(str(x))))vectorizer = TfidfVectorizer(analyzer='char')
train_X = vectorizer.fit_transform(train['text']).toarray()
test_X = vectorizer.transform(test['text']).toarray()
train_y = train['label'].astype(int).values# 参数
params = {'task':'train','boosting_type':'gbdt','num_leaves': 31,'objective': 'binary', 'learning_rate': 0.05, 'bagging_freq': 2, 'max_bin':256,'num_threads': 32,
#       'metric':['binary_logloss','binary_error']} skf = StratifiedKFold(n_splits=5)for index,(train_index, test_index) in enumerate(skf.split(train_X, train_y)):X_train, X_test = train_X[train_index], train_X[test_index]y_train, y_test = train_y[train_index], train_y[test_index]lgb_train = lgb.Dataset(X_train, y_train)lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)gbm = lgb.train(params,lgb_train,num_boost_round=1000,valid_sets=lgb_eval,early_stopping_rounds=10)y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)pred = gbm.predict(test_X, num_iteration=gbm.best_iteration)if index == 0:pred_y_check, true_y_check = list(y_pred), list(y_test)pred_out=predelse:pred_y_check += list(y_pred)true_y_check += list(y_test)pred_out += pred#验证for i in range(10):pred = [int(x) for x in np.where(np.array(pred_y_check) >= i/10.0,1,0)]scores = f1_score(true_y_check,pred)print(i, scores)
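
A practical note on the TF-IDF pipeline above: .toarray() turns the sparse document-term matrix into a dense one, which gets expensive as the character vocabulary grows. LightGBM accepts SciPy sparse matrices directly, so a leaner variant (a sketch keeping the variable names used above) simply skips the densification; the cross-validation loop works unchanged because CSR matrices support row indexing:

# keep the TF-IDF features sparse; lgb.Dataset and gbm.predict accept scipy.sparse input
train_X = vectorizer.fit_transform(train['text'])
test_X = vectorizer.transform(test['text'])
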
""" n-gram模型 """# encoding='utf-8'import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
import lightgbm as lgb# 读取数据base_dir = "../"
train = pd.read_csv(base_dir + "train.csv")
test = pd.read_csv(base_dir + "test.csv")
results = pd.read_csv(base_dir + "results.csv")train = train.drop_duplicates(['level_1', 'level_2', 'level_3', 'level_4', 'content', 'label'])# 构建特征
train['text'] = (train['content']).map(lambda x:' '.join(list(str(x))))
test['text'] = (test['content']).map(lambda x:' '.join(list(str(x))))vectorizer = CountVectorizer(analyzer='char', ngram_range=(1, 3), stop_words=[])train_X = vectorizer.fit_transform(train['text']).toarray()
test_X = vectorizer.transform(test['text']).toarray()train_y = train['label'].astype(int).values# 交叉验证,训练模型params = {'task':'train','boosting_type':'gbdt','num_leaves': 31,'objective': 'binary', 'learning_rate': 0.05, 'bagging_freq': 2, 'max_bin':256,'num_threads': 32,
#         'metric':['binary_logloss','binary_error']} skf = StratifiedKFold(n_splits=5)for index,(train_index, test_index) in enumerate(skf.split(train_X, train_y)):X_train, X_test = train_X[train_index], train_X[test_index]y_train, y_test = train_y[train_index], train_y[test_index]lgb_train = lgb.Dataset(X_train, y_train)lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)gbm = lgb.train(params,lgb_train,num_boost_round=1000,valid_sets=lgb_eval,early_stopping_rounds=10)y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)pred = gbm.predict(test_X, num_iteration=gbm.best_iteration)if index == 0:pred_y_check, true_y_check = list(y_pred), list(y_test)pred_out=predelse:pred_y_check += list(y_pred)true_y_check += list(y_test)pred_out += pred# 验证
for i in range(10):pred = [int(x) for x in np.where(np.array(pred_y_check) >= i/10.0,1,0)]scores = f1_score(true_y_check,pred)print(i, scores)
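
Character 1-3 grams inflate the vocabulary quickly on free-text hazard descriptions, and the dense feature matrix then dominates memory. If that becomes a problem, CountVectorizer's min_df and max_features arguments cap the vocabulary; the values below are only illustrative:

# drop n-grams seen in fewer than 5 documents and keep at most the 50,000 most frequent ones
vectorizer = CountVectorizer(analyzer='char', ngram_range=(1, 3),
                             min_df=5, max_features=50000)
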
"""word2vec"""import pandas as pd
import numpy as np
import jieba
import lightgbm as lgbfrom sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
from gensim.models import Word2Vec# 读取数据
base_dir = "../"
train = pd.read_csv(base_dir + "train.csv")
test = pd.read_csv(base_dir + "test.csv")
results = pd.read_csv(base_dir + "results.csv")# 训练集去重
train = train.drop_duplicates(['level_1', 'level_2', 'level_3', 'level_4', 'content', 'label'])# 构建特征,使用word2vec
train['text'] = (train['content']).map(lambda x:' '.join(jieba.cut(str(x))))
test['text'] = (test['content']).map(lambda x:' '.join(jieba.cut(str(x))))model_word = Word2Vec(train['text'].values.tolist(), size=100, window=5, min_count=1, workers=4)def get_vec(word_list, model):init = np.array([0.0]*100)index = 0for word in word_list:if word in model.wv:  init += np.array(model.wv[word])index += 1if index == 0:return initreturn list(init / index)# 向量取平均值
train['vec'] = train['text'].map(lambda x: get_vec(x, model_word))
test['vec'] = test['text'].map(lambda x: get_vec(x, model_word))train_X = np.array(train['vec'].values.tolist())
test_X = np.array(test['vec'].values.tolist())
train_y = train['label'].astype(int).values# 交叉验证params = {'task':'train','boosting_type':'gbdt','num_leaves': 31,'objective': 'binary', 'learning_rate': 0.05, 'bagging_freq': 2, 'max_bin':256,'num_threads': 32,
#         'metric':['binary_logloss','binary_error']} skf = StratifiedKFold(n_splits=5)for index,(train_index, test_index) in enumerate(skf.split(train_X, train_y)):X_train, X_test = train_X[train_index], train_X[test_index]y_train, y_test = train_y[train_index], train_y[test_index]lgb_train = lgb.Dataset(X_train, y_train)lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)gbm = lgb.train(params,lgb_train,num_boost_round=1000,valid_sets=lgb_eval,early_stopping_rounds=10)y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)pred = gbm.predict(test_X, num_iteration=gbm.best_iteration)if index == 0:pred_y_check, true_y_check = list(y_pred), list(y_test)pred_out=predelse:pred_y_check += list(y_pred)true_y_check += list(y_test)pred_out += pred# 验证for i in range(10):pred = [int(x) for x in np.where(np.array(pred_y_check) >= i/10.0,1,0)]scores = f1_score(true_y_check,pred)print(i/10.0, scores)
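
All three classic pipelines accumulate pred_out, the test-set probabilities summed over the five folds, but never write a submission. A minimal sketch for doing so, assuming the submission needs the same id/label columns the ALBERT baseline produces and taking 0.5 (or whichever value maximised F1 in the threshold search) as the cut-off:

# average the fold probabilities, apply the chosen threshold, and save
test_prob = np.array(pred_out) / 5.0
best_threshold = 0.5  # replace with the best threshold found above
df_save = pd.DataFrame()
df_save['id'] = test['id'].values
df_save['label'] = (test_prob >= best_threshold).astype(int)
df_save.to_csv('result.csv', index=False)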

3. Deep NLP models

Two architectures built on top of the ALBERT token embeddings, trained with the same data pipeline as the baseline: a multi-kernel-size TextCNN and a two-layer bidirectional LSTM.

"""TextCNN"""import random
import numpy as np
import pandas as pd
from bert4keras.backend import keras, set_gelu
from bert4keras.tokenizers import Tokenizer
from bert4keras.models import build_transformer_model
from bert4keras.optimizers import Adam, extend_with_piecewise_linear_lr
from bert4keras.snippets import sequence_padding, DataGenerator
from bert4keras.snippets import open
from keras.layers import *
import tensorflow as tfset_gelu('tanh')  # 切换gelu版本num_classes = 2
maxlen = 128
batch_size = 32
config_path = '../model/albert_small_zh_google/albert_config_small_google.json'
checkpoint_path = '../model/albert_small_zh_google/albert_model.ckpt'
dict_path = '../model/albert_small_zh_google/vocab.txt'# 建立分词器
tokenizer = Tokenizer(dict_path, do_lower_case=True)# 加载bert模型# 加载预训练模型
bert = build_transformer_model(config_path=config_path,checkpoint_path=checkpoint_path,model='albert',return_keras_model=False,
)# keras辅助函数
expand_dims = Lambda(lambda X: tf.expand_dims(X,axis=-1))
max_pool = Lambda(lambda X: tf.squeeze(tf.reduce_max(X,axis=1),axis=1))
concat = Lambda(lambda X: tf.concat(X, axis=-1))# 获取bert的char embedding
cnn_input = expand_dims(bert.layers['Embedding-Token'].output)# 定义cnn网络
filters = 2
sizes = [3,5,7,9]
output = []
for size_i in sizes:X = Conv2D(filters=2,kernel_size=(size_i, 128),activation='relu',)(cnn_input)X = max_pool(X)output.append(X)cnn_output = concat(output)# 分类全连接
output = Dense(units=num_classes,activation='softmax'
)(cnn_output)# 定义模型输入输出
model = keras.models.Model(bert.model.input[0], output)# 编译模型model.compile(loss='sparse_categorical_crossentropy',optimizer=Adam(1e-5),  metrics=['accuracy'],
)# 加载数据def load_data(valid_rate=0.3):train_file = "../data/train.csv"test_file = "../data/test.csv"df_train_data = pd.read_csv("../data/train.csv").\drop_duplicates(['level_1', 'level_2', 'level_3', 'level_4', 'content', 'label'])df_test_data = pd.read_csv("../data/test.csv")train_data, valid_data, test_data = [], [], []for row_i, data in df_train_data.iterrows():id, level_1, level_2, level_3, level_4, content, label = dataid, text, label = id, str(level_1) + '\t' + str(level_2) + '\t' + \str(level_3) + '\t' + str(level_4) + '\t' + str(content), labelif random.random() > valid_rate:train_data.append( (id, text, int(label)) )else:valid_data.append( (id, text, int(label)) )for row_i, data in df_test_data.iterrows():id, level_1, level_2, level_3, level_4, content = dataid, text, label = id, str(level_1) + '\t' + str(level_2) + '\t' + \str(level_3) + '\t' + str(level_4) + '\t' + str(content), 0test_data.append( (id, text, int(label)) )return train_data, valid_data, test_datatrain_data, valid_data, test_data = load_data(valid_rate=0.3)# 迭代器生成class data_generator(DataGenerator):def __iter__(self, random=False):batch_token_ids, batch_labels = [], []for is_end, (id, text, label) in self.sample(random):token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)batch_token_ids.append(token_ids)batch_labels.append([label])if len(batch_token_ids) == self.batch_size or is_end:batch_token_ids = sequence_padding(batch_token_ids)batch_labels = sequence_padding(batch_labels)yield [batch_token_ids], batch_labelsbatch_token_ids, batch_labels = [], []train_generator = data_generator(train_data, batch_size)
valid_generator = data_generator(valid_data, batch_size)# 训练验证和预测def evaluate(data):total, right = 0., 0.for x_true, y_true in data:y_pred = model.predict(x_true).argmax(axis=1)y_true = y_true[:, 0]total += len(y_true)right += (y_true == y_pred).sum()return right / totalclass Evaluator(keras.callbacks.Callback):def __init__(self):self.best_val_acc = 0.def on_epoch_end(self, epoch, logs=None):val_acc = evaluate(valid_generator)if val_acc > self.best_val_acc:self.best_val_acc = val_accmodel.save_weights('best_model.weights')test_acc = evaluate(valid_generator)print(u'val_acc: %.5f, best_val_acc: %.5f, test_acc: %.5f\n' %(val_acc, self.best_val_acc, test_acc))def data_pred(test_data):id_ids, y_pred_ids = [], []for id, text, label in test_data:token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)token_ids = sequence_padding([token_ids])y_pred = int(model.predict([token_ids]).argmax(axis=1)[0])id_ids.append(id)y_pred_ids.append(y_pred)return id_ids, y_pred_ids# 训练和验证模型evaluator = Evaluator()
model.fit(train_generator.forfit(),steps_per_epoch=len(train_generator),epochs=1,callbacks=[evaluator])# 加载最好的模型model.load_weights('best_model.weights')# 验证集结果
print(u'final test acc: %05f\n' % (evaluate(valid_generator)))# 训练集结果
print(u'final test acc: %05f\n' % (evaluate(train_generator)))# 模型预测保存结果id_ids, y_pred_ids = data_pred(test_data)
df_save = pd.DataFrame()
df_save['id'] = id_ids
df_save['label'] = y_pred_idsdf_save.to_csv('result.csv')
"""Bi-LSTM"""import random
import numpy as np
import pandas as pd
from bert4keras.backend import keras, set_gelu
from bert4keras.tokenizers import Tokenizer
from bert4keras.models import build_transformer_model
from bert4keras.optimizers import Adam, extend_with_piecewise_linear_lr
from bert4keras.snippets import sequence_padding, DataGenerator
from bert4keras.snippets import open
from keras.layers import *
import tensorflow as tfset_gelu('tanh')  # 切换gelu版本
num_classes = 2
maxlen = 128
batch_size = 32
config_path = '../model/albert_small_zh_google/albert_config_small_google.json'
checkpoint_path = '../model/albert_small_zh_google/albert_model.ckpt'
dict_path = '../model/albert_small_zh_google/vocab.txt'# 建立分词器
tokenizer = Tokenizer(dict_path, do_lower_case=True)
# 加载预训练模型
bert = build_transformer_model(config_path=config_path,checkpoint_path=checkpoint_path,model='albert',return_keras_model=False,
)
lstm_input = bert.layers['Embedding-Token'].output
X = Bidirectional(LSTM(128, return_sequences=True))(lstm_input)
lstm_output = Bidirectional(LSTM(128))(X)output = Dense(units=num_classes,activation='softmax'
)(lstm_output)model = keras.models.Model(bert.model.input[0], output)model.compile(loss='sparse_categorical_crossentropy',optimizer=Adam(1e-5),  metrics=['accuracy'],
)def load_data(valid_rate=0.3):train_file = "../data/train.csv"test_file = "../data/test.csv"df_train_data = pd.read_csv("../data/train.csv").\drop_duplicates(['level_1', 'level_2', 'level_3', 'level_4', 'content', 'label'])df_test_data = pd.read_csv("../data/test.csv")train_data, valid_data, test_data = [], [], []for row_i, data in df_train_data.iterrows():id, level_1, level_2, level_3, level_4, content, label = dataid, text, label = id, str(level_1) + '\t' + str(level_2) + '\t' + \str(level_3) + '\t' + str(level_4) + '\t' + str(content), labelif random.random() > valid_rate:train_data.append( (id, text, int(label)) )else:valid_data.append( (id, text, int(label)) )for row_i, data in df_test_data.iterrows():id, level_1, level_2, level_3, level_4, content = dataid, text, label = id, str(level_1) + '\t' + str(level_2) + '\t' + \str(level_3) + '\t' + str(level_4) + '\t' + str(content), 0test_data.append( (id, text, int(label)) )return train_data, valid_data, test_datatrain_data, valid_data, test_data = load_data(valid_rate=0.3)class data_generator(DataGenerator):def __iter__(self, random=False):batch_token_ids, batch_labels = [], []for is_end, (id, text, label) in self.sample(random):token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)batch_token_ids.append(token_ids)batch_labels.append([label])if len(batch_token_ids) == self.batch_size or is_end:batch_token_ids = sequence_padding(batch_token_ids)batch_labels = sequence_padding(batch_labels)yield [batch_token_ids], batch_labelsbatch_token_ids, batch_labels = [], []train_generator = data_generator(train_data, batch_size)
valid_generator = data_generator(valid_data, batch_size)def evaluate(data):total, right = 0., 0.for x_true, y_true in data:y_pred = model.predict(x_true).argmax(axis=1)y_true = y_true[:, 0]total += len(y_true)right += (y_true == y_pred).sum()return right / totalclass Evaluator(keras.callbacks.Callback):def __init__(self):self.best_val_acc = 0.def on_epoch_end(self, epoch, logs=None):val_acc = evaluate(valid_generator)if val_acc > self.best_val_acc:self.best_val_acc = val_accmodel.save_weights('best_model.weights')test_acc = evaluate(valid_generator)print(u'val_acc: %.5f, best_val_acc: %.5f, test_acc: %.5f\n' %(val_acc, self.best_val_acc, test_acc))def data_pred(test_data):id_ids, y_pred_ids = [], []for id, text, label in test_data:token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)token_ids = sequence_padding([token_ids])y_pred = int(model.predict([token_ids]).argmax(axis=1)[0])id_ids.append(id)y_pred_ids.append(y_pred)return id_ids, y_pred_idsevaluator = Evaluator()model.fit(train_generator.forfit(),steps_per_epoch=len(train_generator),epochs=1,callbacks=[evaluator])model.load_weights('best_model.weights')print(u'final test acc: %05f\n' % (evaluate(valid_generator)))
print(u'final test acc: %05f\n' % (evaluate(train_generator)))id_ids, y_pred_ids = data_pred(test_data)
df_save = pd.DataFrame()
df_save['id'] = id_ids
df_save['label'] = y_pred_idsdf_save.to_csv('result.csv')
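
One small caveat on the save step shared by both deep models: DataFrame.to_csv writes the row index as an extra first column by default. If the platform expects only the id and label columns (as the baseline output suggests), drop the index explicitly:

df_save.to_csv('result.csv', index=False)  # keep only the id and label columns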

Final result: I submitted a few versions early on, but then had no time to keep optimizing, so the work was left at that.

Notes:

Related resources:

1. PKU word segmentation corpus (SIGHAN Bakeoff 2005): http://sighan.cs.uchicago.edu/bakeoff2005/

2. Tencent AI Lab word embeddings: https://ai.tencent.com/ailab/nlp/zh/embedding.html
