SuperCTR

Evaluation metrics

from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
def eval_model(y_pred, y_true):
    print(f"accuracy_score = {accuracy_score(y_true, y_pred)}")
    print(f"precision_score = {precision_score(y_true, y_pred)}")
    print(f"recall_score = {recall_score(y_true, y_pred)}")
    print(f"f1_score = {f1_score(y_true, y_pred)}")
    print(f"auc = {roc_auc_score(y_true, y_pred)}")
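
Note that roc_auc_score here receives hard 0/1 predictions, so the reported "auc" equals balanced accuracy rather than the usual threshold-free ROC AUC. A small helper (eval_auc_proba is a new name, not in the original) that scores predicted probabilities instead:

def eval_auc_proba(model, X, y_true):
    # Threshold-free ROC AUC: rank examples by predicted click probability
    y_score = model.predict_proba(X)[:, 1]
    print(f"auc (from probabilities) = {roc_auc_score(y_true, y_score)}")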

LR

1. Imports

import time
import numpy as np
import pandas as pd
import seaborn as sns
import xgboost as xgb
import matplotlib.pyplot as plt
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold

2. Load the data

# The column name lists must be defined before read_csv can use them
continuous_variable = [f"I{i}" for i in range(1, 14)]
discrete_variable = [f"C{i}" for i in range(1, 27)]
train_df_cols = ["Label"] + continuous_variable + discrete_variable
test_df_cols = continuous_variable + discrete_variable
train_df = pd.read_csv("../input/criteo-dataset/dac/train.txt", sep='\t', names=train_df_cols, nrows=1000000)
train_df.head()
test_df = pd.read_csv("../input/criteo-dataset/dac/test.txt", sep='\t', names=test_df_cols)
print(f"train_df has {train_df.shape[0]} rows and {train_df.shape[1]} columns.")
print(f"test_df has {test_df.shape[0]} rows and {test_df.shape[1]} columns.")

The data has 40 columns in total: 13 continuous features (I1-I13), 26 categorical features (C1-C26), and one label (Label).
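
Before any encoding, it is worth checking how sparse the raw columns are; a quick look (a sketch assuming train_df is loaded as above, missing_rate is a new name):

# Fraction of missing values per column, highest first
missing_rate = train_df.isnull().mean().sort_values(ascending=False)
print(missing_rate.head(10))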

3. Encode the categorical features

The categorical columns must be converted to numbers before they can be used to train the models.

Encode each categorical column with the factorize method, and fill the remaining missing values with the column mean (as the toy example below shows, the mean fill mainly matters for the continuous I columns).
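
As a toy illustration (made-up values, not from the dataset): pd.factorize assigns each distinct value an integer code and maps missing values to -1, so the categorical columns have no NaNs left after encoding.

codes, uniques = pd.factorize(pd.Series(["ad_a", "ad_b", None, "ad_a"]))
print(codes)     # [ 0  1 -1  0]
print(uniques)   # Index(['ad_a', 'ad_b'], dtype='object')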

for col in train_df.columns:
    if train_df[col].dtypes == "object":
        train_df[col], uniques = pd.factorize(train_df[col])
    # Fill remaining NaNs (in the continuous columns) with the column mean
    train_df[col].fillna(train_df[col].mean(), inplace=True)
train_df.head()

train_df.info()

The output below confirms there are no missing values left.

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000000 entries, 0 to 999999
Data columns (total 40 columns):
 #   Column  Non-Null Count    Dtype
---  ------  --------------    -----
 0   Label   1000000 non-null  int64
 1   I1      1000000 non-null  float64
 2   I2      1000000 non-null  int64
 3   I3      1000000 non-null  float64
 4   I4      1000000 non-null  float64
 5   I5      1000000 non-null  float64
 6   I6      1000000 non-null  float64
 7   I7      1000000 non-null  float64
 8   I8      1000000 non-null  float64
 9   I9      1000000 non-null  float64
 10  I10     1000000 non-null  float64
 11  I11     1000000 non-null  float64
 12  I12     1000000 non-null  float64
 13  I13     1000000 non-null  float64
 14  C1      1000000 non-null  int64
 15  C2      1000000 non-null  int64
 16  C3      1000000 non-null  int64
 17  C4      1000000 non-null  int64
 18  C5      1000000 non-null  int64
 19  C6      1000000 non-null  int64
 20  C7      1000000 non-null  int64
 21  C8      1000000 non-null  int64
 22  C9      1000000 non-null  int64
 23  C10     1000000 non-null  int64
 24  C11     1000000 non-null  int64
 25  C12     1000000 non-null  int64
 26  C13     1000000 non-null  int64
 27  C14     1000000 non-null  int64
 28  C15     1000000 non-null  int64
 29  C16     1000000 non-null  int64
 30  C17     1000000 non-null  int64
 31  C18     1000000 non-null  int64
 32  C19     1000000 non-null  int64
 33  C20     1000000 non-null  int64
 34  C21     1000000 non-null  int64
 35  C22     1000000 non-null  int64
 36  C23     1000000 non-null  int64
 37  C24     1000000 non-null  int64
 38  C25     1000000 non-null  int64
 39  C26     1000000 non-null  int64
dtypes: float64(12), int64(28)
memory usage: 305.2 MB
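
Instead of eyeballing info(), a one-line check (not in the original) confirms the same:

print(train_df.isnull().sum().sum())    # prints 0 if every missing value has been filled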

Plot a bar chart of the label frequencies (0 = no click, 1 = click).

count_label = train_df["Label"].value_counts()    # pd.value_counts() is deprecated; use the Series method
count_label.plot(kind="bar")    # bar chart
plt.title("Label Statistics")
plt.xlabel("Label")
plt.ylabel("Frequency")
plt.show()

Compute the exact ratio of negative to positive samples.

# negative : positive ≈ 2.92 : 1 (hence scale_pos_weight=2.92 if class weighting were used instead)
train_df["Label"].value_counts()
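
The ratio quoted in the comment can also be computed directly (counts is a new name, not in the original):

counts = train_df["Label"].value_counts()
print(f"negative : positive = {counts[0] / counts[1]:.2f} : 1")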

Undersampling

# Undersampling
number_of_click = len(train_df[train_df.Label == 1])    # number of clicked rows
click_indices = np.array(train_df[train_df.Label == 1].index)      # index labels of clicked rows
no_click_indices = np.array(train_df[train_df.Label == 0].index)   # index labels of non-clicked rows
# Randomly pick as many non-clicked rows as there are clicked rows
random_no_click_indices = np.array(np.random.choice(no_click_indices, number_of_click, replace=False))
# Merge the clicked indices with the equally sized non-clicked sample
under_sample_indices = np.concatenate([click_indices, random_no_click_indices])
# Slice the undersampled dataset out of the original data
# (these are index labels, so use .loc rather than .iloc)
under_sample_train_df = train_df.loc[under_sample_indices, :]
X_under_sample = under_sample_train_df.loc[:, under_sample_train_df.columns != "Label"]
Y_under_sample = under_sample_train_df.loc[:, under_sample_train_df.columns == "Label"]
# Check the size of the undersampled dataset
# (filter with the undersampled frame's own Label column, not train_df's)
print("Total number of under_sample_train_df =", len(under_sample_train_df))
print("Total number of no_click =", len(under_sample_train_df[under_sample_train_df.Label == 0]))
print("Total number of click =", len(under_sample_train_df[under_sample_train_df.Label == 1]))

As the output shows, the positive and negative classes are now balanced.

Total number of under_sample_train_df = 509898
Total number of no_click = 254949
Total number of click = 254949

X_under_sample.head()

Y_under_sample
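
For reference, the same undersampling can be written more compactly with pandas sampling (a sketch; balanced_df and n_click are new names, and the row order differs from the index-based version above):

n_click = (train_df["Label"] == 1).sum()
balanced_df = pd.concat([
    train_df[train_df["Label"] == 1],
    train_df[train_df["Label"] == 0].sample(n=n_click, replace=False, random_state=42),
])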

X_under_sample_train, X_under_sample_test, y_under_sample_train, y_under_sample_test = \
    train_test_split(X_under_sample, Y_under_sample, test_size=0.3)
# Train and evaluate on the undersampled data
lr = LogisticRegression(solver="liblinear")
# Fit on the training split only (fitting on all of X_under_sample would leak the test rows)
lr.fit(X_under_sample_train, y_under_sample_train.values.ravel())
lr.score(X_under_sample_test, y_under_sample_test)

Result: 0.6450088252598549

Check the individual evaluation metrics.

y_predict_lr = lr.predict(X_under_sample_test)
eval_model(y_predict_lr, y_under_sample_test)

The LR results are mediocre: every metric sits in the mid-60% range. Next, let's see how XGBoost does.
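
One likely reason LR underperforms is that it consumes the factorize codes as if they were ordinal numbers. A common remedy is to standardize (or one-hot encode) the inputs; a minimal sketch with a scikit-learn pipeline (lr_scaled is a new name, not in the original):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize the factorized and continuous features before logistic regression
lr_scaled = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
lr_scaled.fit(X_under_sample_train, y_under_sample_train.values.ravel())
print(lr_scaled.score(X_under_sample_test, y_under_sample_test))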

XGBoost

K-fold cross-validation

# 下采样后做交叉验证
def kfold_scores(x_train_data, y_train_data):start_time = time.time()fold = KFold(3, shuffle=True)  # 3折交叉验证#c_param_range = [0, 0.5, 1]      # 惩罚力度,正则化惩罚项的系数# 做可视化展示results_table = pd.DataFrame(index=range(2),columns=["C_parameter", "Mean recall scores", "Mean auc socres"])index = 0print('--------------------------------------------------------------------------------')# 做交叉验证recall_accs = []auc_scores = []# 可以加入scale_pos_weight=2.92参数,如果使用下采样就不加入scale_pos_weight以免互相影响xgb_model = XGBClassifier(objective="binary:logistic", n_jobs=-1, n_estimators=1000, max_depth=16,eval_metric="auc",colsample_bytree=0.8, subsample=0.8, learning_rate=0.2, min_child_weight=6)for iteration, indices in enumerate(fold.split(x_train_data)):# 拟合训练数据# lr.fit(x_train_data.iloc[indices[0], :], y_train_data.iloc[indices[0], :].values.ravel())xgb_model.fit(x_train_data.iloc[indices[0], :], y_train_data.iloc[indices[0], :].values.ravel(),eval_metric="logloss",eval_set=[(x_train_data.iloc[indices[0], :], y_train_data.iloc[indices[0], :]),(x_train_data.iloc[indices[1], :], y_train_data.iloc[indices[1], :])], verbose=True,early_stopping_rounds=10)# 使用验证集得出预测数据# 最适合的迭代次数,然后预测的时候就使用stop之前训练的树来预测。print(f"最佳迭代次数为:{xgb_model.best_iteration}")limit = xgb_model.best_iteration# y_predicted_undersample = lr.predict(x_train_data.iloc[indices[1], :])y_predicted_undersample = xgb_model.predict(x_train_data.iloc[indices[1], :], ntree_limit=limit)# 计算recallrecall_acc = recall_score(y_train_data.iloc[indices[1], :], y_predicted_undersample)recall_accs.append(recall_acc)auc_score = roc_auc_score(y_train_data.iloc[indices[1], :], y_predicted_undersample)auc_scores.append(auc_score)print('\tIteration ', iteration, ': recall score = ', recall_acc)print('\tIteration ', iteration, ': auc score = ', auc_score)index += 1# 计算recall的平均值results_table.loc[index, "Mean recall scores"] = np.mean(recall_accs)results_table.loc[index, "Mean auc scores"] = np.mean(auc_scores)print('Mean recall score = ', results_table.loc[index, "Mean recall scores"], end="\n\n")print('Mean auc score = ', results_table.loc[index, "Mean auc scores"], end="\n\n")print('--------------------------------------------------------------------------------')return xgb_model
# Run on the undersampled data
xgb_model = kfold_scores(X_under_sample, Y_under_sample)    # renamed so it does not shadow "import xgboost as xgb"
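
A compatibility note: the logs below come from a 1.x release. In newer xgboost (>= 1.6, mandatory in 2.x), fit() no longer accepts eval_metric or early_stopping_rounds; both are set on the estimator. A hedged sketch of the equivalent setup (xgb2, X_tr, etc. are new names; the single split stands in for one CV fold):

# Hypothetical single split standing in for one cross-validation fold
X_tr, X_val, y_tr, y_val = train_test_split(X_under_sample, Y_under_sample.values.ravel(), test_size=0.3)

# xgboost >= 2.0 style: eval_metric and early stopping live on the estimator
xgb2 = XGBClassifier(objective="binary:logistic", n_estimators=1000, max_depth=16,
                     learning_rate=0.2, subsample=0.8, colsample_bytree=0.8,
                     min_child_weight=6, eval_metric="logloss",
                     early_stopping_rounds=10, n_jobs=-1)
xgb2.fit(X_tr, y_tr, eval_set=[(X_val, y_val)], verbose=True)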

The training log follows.

--------------------------------------------------------------------------------
[0] validation_0-logloss:0.65029    validation_1-logloss:0.66628
[1] validation_0-logloss:0.61773    validation_1-logloss:0.64819
[2] validation_0-logloss:0.59209    validation_1-logloss:0.63575
[3] validation_0-logloss:0.57027    validation_1-logloss:0.62659
[4] validation_0-logloss:0.55167    validation_1-logloss:0.62060
[5] validation_0-logloss:0.53649    validation_1-logloss:0.61567
[6] validation_0-logloss:0.52190    validation_1-logloss:0.61203
[7] validation_0-logloss:0.50868    validation_1-logloss:0.60933
[8] validation_0-logloss:0.49658    validation_1-logloss:0.60661
[9] validation_0-logloss:0.48651    validation_1-logloss:0.60501
[10]    validation_0-logloss:0.47802    validation_1-logloss:0.60350
[11]    validation_0-logloss:0.46839    validation_1-logloss:0.60220
[12]    validation_0-logloss:0.46043    validation_1-logloss:0.60108
[13]    validation_0-logloss:0.45288    validation_1-logloss:0.60091
[14]    validation_0-logloss:0.44480    validation_1-logloss:0.60020
[15]    validation_0-logloss:0.43958    validation_1-logloss:0.60004
[16]    validation_0-logloss:0.43263    validation_1-logloss:0.59916
[17]    validation_0-logloss:0.42796    validation_1-logloss:0.59871
[18]    validation_0-logloss:0.42143    validation_1-logloss:0.59718
[19]    validation_0-logloss:0.41582    validation_1-logloss:0.59657
[20]    validation_0-logloss:0.41130    validation_1-logloss:0.59631
[21]    validation_0-logloss:0.40725    validation_1-logloss:0.59608
[22]    validation_0-logloss:0.40239    validation_1-logloss:0.59571
[23]    validation_0-logloss:0.39832    validation_1-logloss:0.59539
[24]    validation_0-logloss:0.39447    validation_1-logloss:0.59505
[25]    validation_0-logloss:0.39110    validation_1-logloss:0.59487
[26]    validation_0-logloss:0.38895    validation_1-logloss:0.59436
[27]    validation_0-logloss:0.38441    validation_1-logloss:0.59404
[28]    validation_0-logloss:0.38127    validation_1-logloss:0.59416
[29]    validation_0-logloss:0.37733    validation_1-logloss:0.59403
[30]    validation_0-logloss:0.37380    validation_1-logloss:0.59427
[31]    validation_0-logloss:0.37197    validation_1-logloss:0.59412
[32]    validation_0-logloss:0.36929    validation_1-logloss:0.59381
[33]    validation_0-logloss:0.36792    validation_1-logloss:0.59378
[34]    validation_0-logloss:0.36600    validation_1-logloss:0.59374
[35]    validation_0-logloss:0.36452    validation_1-logloss:0.59383
[36]    validation_0-logloss:0.36155    validation_1-logloss:0.59420
[37]    validation_0-logloss:0.35944    validation_1-logloss:0.59413
[38]    validation_0-logloss:0.35899    validation_1-logloss:0.59409
[39]    validation_0-logloss:0.35702    validation_1-logloss:0.59423
[40]    validation_0-logloss:0.35486    validation_1-logloss:0.59396
[41]    validation_0-logloss:0.35240    validation_1-logloss:0.59411
[42]    validation_0-logloss:0.35000    validation_1-logloss:0.59412
[43]    validation_0-logloss:0.34818    validation_1-logloss:0.59408
Best iteration: 34
/opt/conda/lib/python3.7/site-packages/xgboost/core.py:104: UserWarning: ntree_limit is deprecated, use `iteration_range` or model slicing instead.
	Iteration  0 : recall score =  0.6809424858239707
	Iteration  0 : auc score =  0.6823219983343968
/opt/conda/lib/python3.7/site-packages/xgboost/sklearn.py:1146: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
  warnings.warn(label_encoder_deprecation_msg, UserWarning)
/opt/conda/lib/python3.7/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
  return f(**kwargs)
[0] validation_0-logloss:0.65061    validation_1-logloss:0.66599
[1] validation_0-logloss:0.61815    validation_1-logloss:0.64769
[2] validation_0-logloss:0.59309    validation_1-logloss:0.63531
[3] validation_0-logloss:0.57095    validation_1-logloss:0.62661
[4] validation_0-logloss:0.55278    validation_1-logloss:0.62015
[5] validation_0-logloss:0.53753    validation_1-logloss:0.61507
[6] validation_0-logloss:0.52324    validation_1-logloss:0.61146
[7] validation_0-logloss:0.51051    validation_1-logloss:0.60895
[8] validation_0-logloss:0.49835    validation_1-logloss:0.60653
[9] validation_0-logloss:0.48860    validation_1-logloss:0.60473
[10]    validation_0-logloss:0.47981    validation_1-logloss:0.60401
[11]    validation_0-logloss:0.47103    validation_1-logloss:0.60256
[12]    validation_0-logloss:0.46264    validation_1-logloss:0.60144
[13]    validation_0-logloss:0.45640    validation_1-logloss:0.60092
[14]    validation_0-logloss:0.44830    validation_1-logloss:0.60041
[15]    validation_0-logloss:0.44047    validation_1-logloss:0.59898
[16]    validation_0-logloss:0.43539    validation_1-logloss:0.59905
[17]    validation_0-logloss:0.43033    validation_1-logloss:0.59883
[18]    validation_0-logloss:0.42509    validation_1-logloss:0.59805
[19]    validation_0-logloss:0.41953    validation_1-logloss:0.59732
[20]    validation_0-logloss:0.41452    validation_1-logloss:0.59687
[21]    validation_0-logloss:0.40779    validation_1-logloss:0.59526
[22]    validation_0-logloss:0.40325    validation_1-logloss:0.59515
[23]    validation_0-logloss:0.39959    validation_1-logloss:0.59504
[24]    validation_0-logloss:0.39437    validation_1-logloss:0.59415
[25]    validation_0-logloss:0.39129    validation_1-logloss:0.59403
[26]    validation_0-logloss:0.38813    validation_1-logloss:0.59418
[27]    validation_0-logloss:0.38452    validation_1-logloss:0.59421
[28]    validation_0-logloss:0.38126    validation_1-logloss:0.59379
[29]    validation_0-logloss:0.37753    validation_1-logloss:0.59386
[30]    validation_0-logloss:0.37584    validation_1-logloss:0.59387
[31]    validation_0-logloss:0.37175    validation_1-logloss:0.59363
[32]    validation_0-logloss:0.36911    validation_1-logloss:0.59344
[33]    validation_0-logloss:0.36628    validation_1-logloss:0.59371
[34]    validation_0-logloss:0.36368    validation_1-logloss:0.59380
[35]    validation_0-logloss:0.36148    validation_1-logloss:0.59360
[36]    validation_0-logloss:0.35812    validation_1-logloss:0.59368
[37]    validation_0-logloss:0.35587    validation_1-logloss:0.59382
[38]    validation_0-logloss:0.35382    validation_1-logloss:0.59387
[39]    validation_0-logloss:0.35260    validation_1-logloss:0.59375
[40]    validation_0-logloss:0.35016    validation_1-logloss:0.59354
[41]    validation_0-logloss:0.34815    validation_1-logloss:0.59372
Best iteration: 32
/opt/conda/lib/python3.7/site-packages/xgboost/core.py:104: UserWarning: ntree_limit is deprecated, use `iteration_range` or model slicing instead.
	Iteration  1 : recall score =  0.6808235901770202
	Iteration  1 : auc score =  0.6835137790937945
/opt/conda/lib/python3.7/site-packages/xgboost/sklearn.py:1146: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].
  warnings.warn(label_encoder_deprecation_msg, UserWarning)
/opt/conda/lib/python3.7/site-packages/sklearn/utils/validation.py:72: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
  return f(**kwargs)
[0] validation_0-logloss:0.65044    validation_1-logloss:0.66610
[1] validation_0-logloss:0.61792    validation_1-logloss:0.64771
[2] validation_0-logloss:0.59264    validation_1-logloss:0.63498
[3] validation_0-logloss:0.57097    validation_1-logloss:0.62585
[4] validation_0-logloss:0.55261    validation_1-logloss:0.61955
[5] validation_0-logloss:0.53678    validation_1-logloss:0.61493
[6] validation_0-logloss:0.52250    validation_1-logloss:0.61141
[7] validation_0-logloss:0.50931    validation_1-logloss:0.60843
[8] validation_0-logloss:0.49780    validation_1-logloss:0.60593
[9] validation_0-logloss:0.48695    validation_1-logloss:0.60404
[10]    validation_0-logloss:0.47770    validation_1-logloss:0.60325
[11]    validation_0-logloss:0.46977    validation_1-logloss:0.60226
[12]    validation_0-logloss:0.46171    validation_1-logloss:0.60099
[13]    validation_0-logloss:0.45432    validation_1-logloss:0.60052
[14]    validation_0-logloss:0.44628    validation_1-logloss:0.59974
[15]    validation_0-logloss:0.43920    validation_1-logloss:0.59892
[16]    validation_0-logloss:0.43217    validation_1-logloss:0.59798
[17]    validation_0-logloss:0.42623    validation_1-logloss:0.59718
[18]    validation_0-logloss:0.41996    validation_1-logloss:0.59606
[19]    validation_0-logloss:0.41392    validation_1-logloss:0.59542
[20]    validation_0-logloss:0.40929    validation_1-logloss:0.59504
[21]    validation_0-logloss:0.40538    validation_1-logloss:0.59497
[22]    validation_0-logloss:0.40013    validation_1-logloss:0.59454
[23]    validation_0-logloss:0.39680    validation_1-logloss:0.59428
[24]    validation_0-logloss:0.39408    validation_1-logloss:0.59417
[25]    validation_0-logloss:0.39081    validation_1-logloss:0.59376
[26]    validation_0-logloss:0.38750    validation_1-logloss:0.59376
[27]    validation_0-logloss:0.38438    validation_1-logloss:0.59282
[28]    validation_0-logloss:0.38091    validation_1-logloss:0.59292
[29]    validation_0-logloss:0.37594    validation_1-logloss:0.59296
[30]    validation_0-logloss:0.37254    validation_1-logloss:0.59232
[31]    validation_0-logloss:0.36862    validation_1-logloss:0.59233
[32]    validation_0-logloss:0.36653    validation_1-logloss:0.59235
[33]    validation_0-logloss:0.36366    validation_1-logloss:0.59235

Make predictions on the undersampled test split.

y_predict_xgb = xgb_model.predict(X_under_sample_test, ntree_limit=xgb_model.best_iteration)
eval_model(y_predict_xgb, y_under_sample_test)

accuracy_score = 0.8150748512780284
precision_score = 0.8165117163425604
recall_score = 0.8132910152423495
f1_score = 0.8148981835313827
auc = 0.8150766723050388

Well now, that is a very, very substantial improvement! Bear in mind, though, that these metrics are computed on the balanced undersampled split; on the original 2.92:1 class distribution, precision in particular would come out lower.
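
One housekeeping note: the training log warned that ntree_limit is deprecated. From xgboost 1.4 onward, the documented replacement is iteration_range; best_iteration is zero-based, hence the +1:

# Equivalent prediction without the deprecated ntree_limit argument
y_predict_xgb = xgb_model.predict(X_under_sample_test,
                                  iteration_range=(0, xgb_model.best_iteration + 1))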

