Table of Contents

  • Competition Background
  • Full Code (ML and DL)
    • Advanced Feature Engineering and Solution Optimization (Code)
      • Advanced Feature Engineering
      • Model Validation with LightGBM
      • Model Testing
    • Deep Learning Solution: TextCNN Modeling (Code)
      • Data Loading
      • Data Preprocessing
      • TextCNN Network Architecture
      • TextCNN Training and Prediction
      • Submitting the Results

Competition Background

As the largest cloud service provider in China, Alibaba Cloud faces a huge volume of malicious attacks on its network every day.
The competition provides a large collection of malicious file data, covering infectious viruses, trojans, mining programs, DDoS trojans, ransomware, and more, roughly 600 million records in total. Each file record contains its API call sequence, thread information, and related fields. The task is to train a model that correctly classifies the test files (i.e., predicts which kind of malware each file is), so this is a typical multi-class classification problem (a minimal sketch follows the list below).
Common classification algorithms: Naive Bayes, decision trees, support vector machines, KNN, logistic regression, and so on;
Ensemble learning: random forests, GBDT (gradient boosted decision trees), AdaBoost, XGBoost, LightGBM, CatBoost, and so on;
Neural networks: MLP (multi-layer perceptrons), deep learning models, and so on.
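To make the task concrete, here is a minimal sketch of an 8-class probability classifier in the spirit of the solutions below. The features and labels here are synthetic placeholders, not the competition data; the real features are built in the sections that follow.

import numpy as np
import lightgbm as lgb

# Synthetic stand-in: 1000 "files", 20 numeric features, 8 malware classes
rng = np.random.RandomState(0)
X = rng.rand(1000, 20)
y = rng.randint(0, 8, size=1000)

clf = lgb.LGBMClassifier(n_estimators=50)
clf.fit(X, y)
proba = clf.predict_proba(X)   # shape (1000, 8): one probability per class, each row sums to 1
print(proba.shape, proba[0].round(3))

The submission format used later in this post is exactly such a per-class probability table (prob0 … prob7) for each file_id.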

Full Code (ML and DL)

A typical machine learning solution consists of 1) data processing, 2) feature selection and optimization, and 3) model selection, validation, and tuning. As the saying goes, "data and features determine the upper bound of machine learning, while models and algorithms merely approach that bound," so most of the time spent on a machine learning problem goes into data processing and feature optimization.
It is best to run the code below cell by cell in a Jupyter notebook to deepen your understanding.
For machine learning basics, see my other articles.

Advanced Feature Engineering and Solution Optimization (Code)

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from tqdm import tqdm_notebook
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
# Memory management: downcast numeric columns to smaller dtypes to save RAM
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook

class _Data_Preprocess:
    def __init__(self):
        self.int8_max = np.iinfo(np.int8).max
        self.int8_min = np.iinfo(np.int8).min
        self.int16_max = np.iinfo(np.int16).max
        self.int16_min = np.iinfo(np.int16).min
        self.int32_max = np.iinfo(np.int32).max
        self.int32_min = np.iinfo(np.int32).min
        self.int64_max = np.iinfo(np.int64).max
        self.int64_min = np.iinfo(np.int64).min
        self.float16_max = np.finfo(np.float16).max
        self.float16_min = np.finfo(np.float16).min
        self.float32_max = np.finfo(np.float32).max
        self.float32_min = np.finfo(np.float32).min
        self.float64_max = np.finfo(np.float64).max
        self.float64_min = np.finfo(np.float64).min

    def _get_type(self, min_val, max_val, types):
        # Return the smallest numpy dtype that can hold [min_val, max_val]
        if types == 'int':
            if max_val <= self.int8_max and min_val >= self.int8_min:
                return np.int8
            elif max_val <= self.int16_max and min_val >= self.int16_min:
                return np.int16
            elif max_val <= self.int32_max and min_val >= self.int32_min:
                return np.int32
            return None
        elif types == 'float':
            if max_val <= self.float16_max and min_val >= self.float16_min:
                return np.float16
            if max_val <= self.float32_max and min_val >= self.float32_min:
                return np.float32
            if max_val <= self.float64_max and min_val >= self.float64_min:
                return np.float64
            return None

    def _memory_process(self, df):
        init_memory = df.memory_usage().sum() / 1024 ** 2 / 1024
        print('Original data occupies {} GB memory.'.format(init_memory))
        df_cols = df.columns
        for col in tqdm_notebook(df_cols):
            try:
                if 'float' in str(df[col].dtypes):
                    max_val = df[col].max()
                    min_val = df[col].min()
                    trans_types = self._get_type(min_val, max_val, 'float')
                    if trans_types is not None:
                        df[col] = df[col].astype(trans_types)
                elif 'int' in str(df[col].dtypes):
                    max_val = df[col].max()
                    min_val = df[col].min()
                    trans_types = self._get_type(min_val, max_val, 'int')
                    if trans_types is not None:
                        df[col] = df[col].astype(trans_types)
            except:
                print(' Can not do any process for column, {}.'.format(col))
        afterprocess_memory = df.memory_usage().sum() / 1024 ** 2 / 1024
        print('After processing, the data occupies {} GB memory.'.format(afterprocess_memory))
        return df
memory_process = _Data_Preprocess()
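A quick way to check that the helper behaves as expected, on a small synthetic frame rather than the competition data:

# Toy check of the downcasting helper (synthetic data)
demo = pd.DataFrame({
    'small_int': np.arange(1000, dtype=np.int64),            # fits in int16
    'small_float': np.random.rand(1000).astype(np.float64),  # fits in float16
})
demo = memory_process._memory_process(demo)
print(demo.dtypes)   # expect int16 / float16 after downcasting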
### Load the data
path  = '../security_data/'
train = pd.read_csv(path + 'security_train.csv')
test  = pd.read_csv(path + 'security_test.csv')
train.head()
def simple_sts_features(df):
    simple_fea             = pd.DataFrame()
    simple_fea['file_id']  = df['file_id'].unique()
    simple_fea             = simple_fea.sort_values('file_id')
    df_grp = df.groupby('file_id')
    simple_fea['file_id_api_count']     = df_grp['api'].count().values
    simple_fea['file_id_api_nunique']   = df_grp['api'].nunique().values
    simple_fea['file_id_tid_count']     = df_grp['tid'].count().values
    simple_fea['file_id_tid_nunique']   = df_grp['tid'].nunique().values
    simple_fea['file_id_index_count']   = df_grp['index'].count().values
    simple_fea['file_id_index_nunique'] = df_grp['index'].nunique().values
    return simple_fea
%%time
simple_train_fea1 = simple_sts_features(train)
%%time
simple_test_fea1 = simple_sts_features(test)
def simple_numerical_sts_features(df):
    simple_numerical_fea             = pd.DataFrame()
    simple_numerical_fea['file_id']  = df['file_id'].unique()
    simple_numerical_fea             = simple_numerical_fea.sort_values('file_id')
    df_grp = df.groupby('file_id')
    simple_numerical_fea['file_id_tid_mean']   = df_grp['tid'].mean().values
    simple_numerical_fea['file_id_tid_min']    = df_grp['tid'].min().values
    simple_numerical_fea['file_id_tid_std']    = df_grp['tid'].std().values
    simple_numerical_fea['file_id_tid_max']    = df_grp['tid'].max().values
    simple_numerical_fea['file_id_index_mean'] = df_grp['index'].mean().values
    simple_numerical_fea['file_id_index_min']  = df_grp['index'].min().values
    simple_numerical_fea['file_id_index_std']  = df_grp['index'].std().values
    simple_numerical_fea['file_id_index_max']  = df_grp['index'].max().values
    return simple_numerical_fea
%%time
simple_train_fea2 = simple_numerical_sts_features(train)
%%time
simple_test_fea2 = simple_numerical_sts_features(test)
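To make the aggregations above concrete, here is what simple_sts_features returns on a tiny hand-made log (illustrative rows only, with the same columns as the real files):

# Tiny illustrative API-call log: two files, a few calls each
toy = pd.DataFrame({
    'file_id': [1, 1, 1, 2, 2],
    'api':     ['LdrLoadDll', 'NtOpenFile', 'LdrLoadDll', 'NtClose', 'NtClose'],
    'tid':     [100, 100, 101, 200, 200],
    'index':   [0, 1, 2, 0, 1],
})
print(simple_sts_features(toy))
# file_id 1 -> api count 3, 2 distinct APIs, 2 distinct threads
# file_id 2 -> api count 2, 1 distinct API, 1 distinct thread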

Advanced Feature Engineering

def api_pivot_count_features(df):
    tmp = df.groupby(['file_id', 'api'])['tid'].count().to_frame('api_tid_count').reset_index()
    tmp_pivot = pd.pivot_table(data=tmp, index='file_id', columns='api',
                               values='api_tid_count', fill_value=0)
    tmp_pivot.columns = [tmp_pivot.columns.names[0] + '_pivot_' + str(col) for col in tmp_pivot.columns]
    tmp_pivot.reset_index(inplace=True)
    tmp_pivot = memory_process._memory_process(tmp_pivot)
    return tmp_pivot
%%time
simple_train_fea3 = api_pivot_count_features(train)
%%time
simple_test_fea3 = api_pivot_count_features(test)
def api_pivot_nunique_features(df):
    tmp = df.groupby(['file_id', 'api'])['tid'].nunique().to_frame('api_tid_nunique').reset_index()
    tmp_pivot = pd.pivot_table(data=tmp, index='file_id', columns='api',
                               values='api_tid_nunique', fill_value=0)
    tmp_pivot.columns = [tmp_pivot.columns.names[0] + '_pivot_' + str(col) for col in tmp_pivot.columns]
    tmp_pivot.reset_index(inplace=True)
    tmp_pivot = memory_process._memory_process(tmp_pivot)
    return tmp_pivot
%%time
simple_train_fea4 = api_pivot_nunique_features(train)
%%time
simple_test_fea4 = api_pivot_nunique_features(test)
train_label = train[['file_id','label']].drop_duplicates(subset = ['file_id','label'], keep = 'first')
test_submit = test[['file_id']].drop_duplicates(subset = ['file_id'], keep = 'first')
train_data = train_label.merge(simple_train_fea1, on ='file_id', how='left')
train_data = train_data.merge(simple_train_fea2, on ='file_id', how='left')
train_data = train_data.merge(simple_train_fea3, on ='file_id', how='left')
train_data = train_data.merge(simple_train_fea4, on ='file_id', how='left')
test_submit = test_submit.merge(simple_test_fea1, on ='file_id', how='left')
test_submit = test_submit.merge(simple_test_fea2, on ='file_id', how='left')
test_submit = test_submit.merge(simple_test_fea3, on ='file_id', how='left')
test_submit = test_submit.merge(simple_test_fea4, on ='file_id', how='left')
### Build the evaluation metric
def lgb_logloss(preds, data):
    labels_ = data.get_label()
    classes_ = np.unique(labels_)
    preds_prob = []
    # LightGBM passes multiclass predictions as one flat array, class by class
    for i in range(len(classes_)):
        preds_prob.append(preds[i * len(labels_):(i + 1) * len(labels_)])
    preds_prob_ = np.vstack(preds_prob)

    loss = []
    for i in range(preds_prob_.shape[1]):      # loop over samples
        sum_ = 0
        for j in range(preds_prob_.shape[0]):  # loop over classes
            pred = preds_prob_[j, i]
            if j == labels_[i]:
                sum_ += np.log(pred)
            else:
                sum_ += np.log(1 - pred)
        loss.append(sum_)
    return 'loss is: ', -1 * (np.sum(loss) / preds_prob_.shape[1]), False
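Note that this custom feval scores every class of every sample in a one-vs-rest fashion rather than computing the standard multi-class logloss. If you want to cross-check a validation fold against the usual metric, one possible sketch (assuming the same flat, class-major prediction layout that the function above slices) is:

from sklearn.metrics import log_loss

def standard_multiclass_logloss(preds, labels_, num_class=8):
    # Reshape the flat class-major vector to (num_samples, num_class) and score it
    prob = preds.reshape(num_class, -1).T
    return log_loss(labels_, prob, labels=list(range(num_class)))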

Model Validation with LightGBM

train_features = [col for col in train_data.columns if col not in ['label','file_id']]
train_label    = 'label'
%%time
from sklearn.model_selection import StratifiedKFold,KFold
params = {
    'task': 'train',
    'num_leaves': 255,
    'objective': 'multiclass',
    'num_class': 8,
    'min_data_in_leaf': 50,
    'learning_rate': 0.05,
    'feature_fraction': 0.85,
    'bagging_fraction': 0.85,
    'bagging_freq': 5,
    'max_bin': 128,
    'random_state': 100
}
folds = KFold(n_splits=5, shuffle=True, random_state=15)
oof = np.zeros(len(train))
predict_res = 0
models = []
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_data)):
    print("fold n°{}".format(fold_))
    trn_data = lgb.Dataset(train_data.iloc[trn_idx][train_features],
                           label=train_data.iloc[trn_idx][train_label].values)
    val_data = lgb.Dataset(train_data.iloc[val_idx][train_features],
                           label=train_data.iloc[val_idx][train_label].values)
    clf = lgb.train(params, trn_data, num_boost_round=2000,
                    valid_sets=[trn_data, val_data],
                    verbose_eval=50, early_stopping_rounds=100,
                    feval=lgb_logloss)
    models.append(clf)
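The oof array declared above is never actually filled. If you want an out-of-fold score to compare feature sets against, one possible sketch (not from the book; it assumes the deterministic folds splits and the models list produced above) is:

from sklearn.metrics import log_loss

# Collect out-of-fold class probabilities and score them once, after training
oof_prob = np.zeros((len(train_data), 8))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_data)):
    clf = models[fold_]  # booster trained on this fold in the loop above
    oof_prob[val_idx] = clf.predict(train_data.iloc[val_idx][train_features],
                                    num_iteration=clf.best_iteration)
print('OOF multiclass logloss:', log_loss(train_data[train_label], oof_prob))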
plt.figure(figsize=[10,8])
sns.heatmap(train_data.iloc[:10000, 1:21].corr())
### Feature importance analysis
feature_importance             = pd.DataFrame()
feature_importance['fea_name'] = train_features
feature_importance['fea_imp']  = clf.feature_importance()
feature_importance             = feature_importance.sort_values('fea_imp',ascending = False)
feature_importance.sort_values('fea_imp',ascending = False)
plt.figure(figsize=[20, 10])
sns.barplot(x = feature_importance.iloc[:10]['fea_name'], y = feature_importance.iloc[:10]['fea_imp'])
plt.figure(figsize=[20, 10,])
sns.barplot(x = feature_importance['fea_name'], y = feature_importance['fea_imp'])
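With several hundred pivot columns, the full bar plot above is hard to read; a small tweak (not from the book) is to plot only the top features and rotate the labels:

plt.figure(figsize=[20, 10])
top20 = feature_importance.iloc[:20]
sns.barplot(x=top20['fea_name'], y=top20['fea_imp'])
plt.xticks(rotation=90)
plt.show()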

Model Testing

pred_res = 0
fold = 5
for model in models:
    pred_res += model.predict(test_submit[train_features]) * 1.0 / fold

test_submit['prob0'] = 0
test_submit['prob1'] = 0
test_submit['prob2'] = 0
test_submit['prob3'] = 0
test_submit['prob4'] = 0
test_submit['prob5'] = 0
test_submit['prob6'] = 0
test_submit['prob7'] = 0
test_submit[['prob0','prob1','prob2','prob3','prob4','prob5','prob6','prob7']] = pred_res
test_submit[['file_id','prob0','prob1','prob2','prob3','prob4','prob5','prob6','prob7']].to_csv('baseline2.csv',index = None)
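Before submitting, it is worth a quick sanity check of the file: the eight class probabilities of each file_id should sum to (approximately) one.

prob_cols = ['prob0','prob1','prob2','prob3','prob4','prob5','prob6','prob7']
print(test_submit.shape)
print(test_submit[prob_cols].sum(axis=1).describe())  # rows should sum to ~1.0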

Deep Learning Solution: TextCNN Modeling (Code)

Data Loading

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelBinarizer, LabelEncoder
from tqdm import tqdm_notebook
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline

path  = '../security_data/'
train = pd.read_csv(path + 'security_train.csv')
test  = pd.read_csv(path + 'security_test.csv')

# The _Data_Preprocess memory-reduction helper is identical to the class defined
# in the feature-engineering section above, so its definition is not repeated here.
memory_process = _Data_Preprocess()
train.head()

Data Preprocessing

# Map API strings to integer indices
unique_api = train['api'].unique()
api2index = {item: (i + 1) for i, item in enumerate(unique_api)}
index2api = {(i + 1): item for i, item in enumerate(unique_api)}
train['api_idx'] = train['api'].map(api2index)
test['api_idx']  = test['api'].map(api2index)

# Build the API-index sequence of each file
def get_sequence(df, period_idx):
    seq_list = []
    for _id, begin in enumerate(period_idx[:-1]):
        seq_list.append(df.iloc[begin:period_idx[_id + 1]]['api_idx'].values)
    seq_list.append(df.iloc[period_idx[-1]:]['api_idx'].values)
    return seq_list

train_period_idx = train.file_id.drop_duplicates(keep='first').index.values
test_period_idx  = test.file_id.drop_duplicates(keep='first').index.values

train_df = train[['file_id','label']].drop_duplicates(keep='first')
test_df  = test[['file_id']].drop_duplicates(keep='first')

train_df['seq'] = get_sequence(train, train_period_idx)
test_df['seq']  = get_sequence(test, test_period_idx)
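get_sequence relies on the log being ordered by file_id, so the index of each file's first row marks where its API sequence starts; a toy illustration on synthetic rows:

# Toy check: two files whose rows are contiguous, as in the real log
toy = pd.DataFrame({'file_id': [1, 1, 1, 2, 2],
                    'api_idx': [5, 3, 5, 9, 9]})
toy_period_idx = toy.file_id.drop_duplicates(keep='first').index.values  # array([0, 3])
print(get_sequence(toy, toy_period_idx))
# [array([5, 3, 5]), array([9, 9])]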

TextCNN Network Architecture

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Lambda, Embedding, Dropout, Activation,GRU,Bidirectional
from keras.layers import Conv1D,Conv2D,MaxPooling2D,GlobalAveragePooling1D,GlobalMaxPooling1D, MaxPooling1D, Flatten
from keras.layers import CuDNNGRU, CuDNNLSTM, SpatialDropout1D
from keras.layers.merge import concatenate, Concatenate, Average, Dot, Maximum, Multiply, Subtract, average
from keras.models import Model
from keras.optimizers import RMSprop,Adam
from keras.layers.normalization import BatchNormalization
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import SGD
from keras import backend as K
from sklearn.decomposition import TruncatedSVD, NMF, LatentDirichletAllocation
from keras.layers import SpatialDropout1D
from keras.layers.wrappers import Bidirectional
def TextCNN(max_len, max_cnt, embed_size, num_filters, kernel_size, conv_action, mask_zero):
    _input = Input(shape=(max_len,), dtype='int32')
    _embed = Embedding(max_cnt, embed_size, input_length=max_len, mask_zero=mask_zero)(_input)
    _embed = SpatialDropout1D(0.15)(_embed)
    warppers = []
    for _kernel_size in kernel_size:
        conv1d = Conv1D(filters=num_filters, kernel_size=_kernel_size, activation=conv_action)(_embed)
        warppers.append(GlobalMaxPooling1D()(conv1d))
    fc = concatenate(warppers)
    fc = Dropout(0.5)(fc)
    # fc = BatchNormalization()(fc)
    fc = Dense(256, activation='relu')(fc)
    fc = Dropout(0.25)(fc)
    # fc = BatchNormalization()(fc)
    preds = Dense(8, activation='softmax')(fc)
    model = Model(inputs=_input, outputs=preds)
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

train_labels = pd.get_dummies(train_df.label).values
train_seq    = pad_sequences(train_df.seq.values, maxlen = 6000)
test_seq     = pad_sequences(test_df.seq.values, maxlen = 6000)
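pad_sequences pads (and truncates) every API sequence to exactly maxlen=6000 integers, which is what the fixed-size Embedding/Conv1D stack expects; a tiny illustration of its default pre-padding/pre-truncation behaviour:

demo_seqs = [[3, 7], [1, 2, 3, 4, 5, 6, 7, 8]]
print(pad_sequences(demo_seqs, maxlen=5))
# [[0 0 0 3 7]
#  [4 5 6 7 8]]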

TextCNN Training and Prediction

from sklearn.model_selection import StratifiedKFold,KFold
skf = KFold(n_splits=5, shuffle=True)

max_len     = 6000
max_cnt     = 295
embed_size  = 256
num_filters = 64
kernel_size = [2,4,6,8,10,12,14]
conv_action = 'relu'
mask_zero   = False
TRAIN       = True

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
meta_train = np.zeros(shape = (len(train_seq),8))
meta_test = np.zeros(shape = (len(test_seq),8))
FLAG = True
i = 0
for tr_ind, te_ind in skf.split(train_labels):
    i += 1
    print('FOLD: {}'.format(i))
    print(len(te_ind), len(tr_ind))
    model_name = 'benchmark_textcnn_fold_' + str(i)
    X_train, X_train_label = train_seq[tr_ind], train_labels[tr_ind]
    X_val, X_val_label     = train_seq[te_ind], train_labels[te_ind]
    model = TextCNN(max_len, max_cnt, embed_size, num_filters, kernel_size, conv_action, mask_zero)
    model_save_path = './NN/%s_%s.hdf5' % (model_name, embed_size)
    early_stopping = EarlyStopping(monitor='val_loss', patience=3)
    model_checkpoint = ModelCheckpoint(model_save_path, save_best_only=True, save_weights_only=True)
    if TRAIN and FLAG:
        model.fit(X_train, X_train_label,
                  validation_data=(X_val, X_val_label),
                  epochs=100, batch_size=64, shuffle=True,
                  callbacks=[early_stopping, model_checkpoint])
    model.load_weights(model_save_path)
    pred_val = model.predict(X_val, batch_size=128, verbose=1)
    pred_test = model.predict(test_seq, batch_size=128, verbose=1)
    meta_train[te_ind] = pred_val
    meta_test += pred_test
    K.clear_session()
meta_test /= 5.0

Submitting the Results

test_df['prob0'] = 0
test_df['prob1'] = 0
test_df['prob2'] = 0
test_df['prob3'] = 0
test_df['prob4'] = 0
test_df['prob5'] = 0
test_df['prob6'] = 0
test_df['prob7'] = 0
test_df[['prob0','prob1','prob2','prob3','prob4','prob5','prob6','prob7']] = meta_test
test_df[['file_id','prob0','prob1','prob2','prob3','prob4','prob5','prob6','prob7']].to_csv('nn_baseline_5fold.csv',index = None)

All of the content and code above comes from the excellent book 《阿里云天池大赛赛题解析(机器学习篇)》; I highly recommend reading the original!
