Competition Overview

Competition link

https://js.dclab.run/v2/cmptDetail.html?id=439

Task

As technology advances, banks have rolled out a rich variety of online and offline touchpoints to serve customers' day-to-day banking and channel transactions. With so many customers, a bank needs a more complete and accurate picture of what they want. In practice this means detecting customer churn and anticipating movements in customer funds, so that marketing can reach customers early enough to reduce the outflow of the bank's deposits. This competition provides customer behavior and asset data from a real business scenario as the modeling subject: it aims both to showcase contestants' hands-on data-mining skills and, in the second round, to have them turn the modeling results into a concrete marketing plan, fully demonstrating the value of data analysis.

Data Description

(1) Overview. The data is distributed as three archives: x_train.rar, y_train.rar, and x_test.rar. x_train.rar holds the training-set features and y_train.rar the training-set target variable; the training set is sampled from two quarters. x_test.rar holds the test-set features, whose schema matches the training features. The modeling goal is to train a model on the training set and predict on the test set.
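The CSVs read below assume the archives have already been unpacked. One way to do that from Python, assuming the third-party rarfile package and an unrar/bsdtar backend are installed (a sketch, not part of the original baseline):

import rarfile  # assumption: pip install rarfile, plus an unrar backend on PATH

for name in ['x_train.rar', 'y_train.rar', 'x_test.rar']:
    # extract each archive into the working directory
    rarfile.RarFile(name).extractall()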

(2) Tables and fields. The training set consists of samples from Q3 and Q4; the test set consists of samples from Q1.

a) aum_m(Y): month-end asset (AUM) data for month Y


b) behavior_m(Y): behavior data for month Y


c) big_event_Q(Z): major customer event history for quarter Z


d) cunkuan_m(Y): deposit (cunkuan) data for month Y


e) cust_avli_Q(Z): the valid customers for quarter Z (contains only the cust_no column)

f) cust_info_q(Z): customer information for quarter Z


Data Reshaping

For each customer ID in the training and test sets, concatenate all of its features horizontally into one row.

import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
train_Q3_id = pd.read_csv(r'x_train/cust_avli_Q3.csv')
train_Q3_label = pd.read_csv(r'y_train_3/y_Q3_3.csv')
train_Q3_id.shape, train_Q3_label.shape
((69126, 1), (69126, 2))

Read each month's aum features for Q3 and merge them.

## F stands for Feature; names follow feature_quarter_month: aum_Q3_m1 holds AUM features 1-8 for the first month of Q3
aum_Q3_m1 = pd.read_csv(r'x_train/aum_train/aum_m7.csv')
aum_Q3_m1.columns = ['cust_no','F_aum1','F_aum2','F_aum3','F_aum4','F_aum5','F_aum6','F_aum7','F_aum8']
aum_Q3_m2 = pd.read_csv(r'x_train/aum_train/aum_m8.csv')
aum_Q3_m2.columns = ['cust_no','F_aum9','F_aum10','F_aum11','F_aum12','F_aum13','F_aum14','F_aum15','F_aum16']
aum_Q3_m3 = pd.read_csv(r'x_train/aum_train/aum_m9.csv')
aum_Q3_m3.columns = ['cust_no','F_aum17','F_aum18','F_aum19','F_aum20','F_aum21','F_aum22','F_aum23','F_aum24']
## merge the three months of aum features onto the valid-customer list
aum_feature_Q3 = pd.merge(left = train_Q3_id,right=aum_Q3_m1,how='left')
aum_feature_Q3 = pd.merge(left = aum_feature_Q3,right=aum_Q3_m2,how='left')
aum_feature_Q3 = pd.merge(left = aum_feature_Q3,right=aum_Q3_m3,how='left')
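Since every join above is a left merge on cust_no, it is worth checking that the monthly tables are unique on that key; a duplicated cust_no would silently inflate the row count. A minimal sanity check using the frames just built:

# Each merge should keep exactly one row per valid Q3 customer
assert aum_feature_Q3['cust_no'].is_unique
assert len(aum_feature_Q3) == len(train_Q3_id)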

Merge the three months of behavior features (note that the third month carries seven fields, F_B11-F_B17, instead of five), and join the Q3 big-event table.

behavior_Q3_m1 = pd.read_csv(r'x_train/behavior_train/behavior_m7.csv')
behavior_Q3_m1.columns = ['cust_no','F_B1','F_B2','F_B3','F_B4','F_B5']
behavior_Q3_m2 = pd.read_csv(r'x_train/behavior_train/behavior_m8.csv')
behavior_Q3_m2.columns = ['cust_no','F_B6','F_B7','F_B8','F_B9','F_B10']
behavior_Q3_m3 = pd.read_csv(r'x_train/behavior_train/behavior_m9.csv')
behavior_Q3_m3.columns = ['cust_no','F_B11','F_B12','F_B13','F_B14','F_B15','F_B16','F_B17']
behavior_feature_Q3 = pd.merge(left = train_Q3_id,right=behavior_Q3_m1,how='left')
behavior_feature_Q3 = pd.merge(left = behavior_feature_Q3,right=behavior_Q3_m2,how='left')
behavior_feature_Q3 = pd.merge(left = behavior_feature_Q3,right=behavior_Q3_m3,how='left')
big_event_Q3 = pd.read_csv(r'x_train/big_event_train/big_event_Q3.csv')
big_event_Q3 = pd.merge(left = train_Q3_id,right=big_event_Q3,how='left')

Merge the three months of cunkuan (deposit) features, then join every feature block, the customer info, and the labels into a single Q3 training frame.

cunkuan_Q3_m1 = pd.read_csv(r'x_train/cunkuan_train/cunkuan_m7.csv')
cunkuan_Q3_m1.columns = ['cust_no','C1','C2']
cunkuan_Q3_m2 = pd.read_csv(r'x_train/cunkuan_train/cunkuan_m8.csv')
cunkuan_Q3_m2.columns = ['cust_no','C3','C4']
cunkuan_Q3_m3 = pd.read_csv(r'x_train/cunkuan_train/cunkuan_m9.csv')
cunkuan_Q3_m3.columns = ['cust_no','C5','C6']
cunkuan_feature_Q3 = pd.merge(left = train_Q3_id,right=cunkuan_Q3_m1,how='left')
cunkuan_feature_Q3 = pd.merge(left = cunkuan_feature_Q3,right=cunkuan_Q3_m2,how='left')
cunkuan_feature_Q3 = pd.merge(left = cunkuan_feature_Q3,right=cunkuan_Q3_m3,how='left')
train_Q3_info = pd.read_csv(r'x_train/cust_info_q3.csv')
train_Q3_info = pd.merge(left = train_Q3_id,right=train_Q3_info,how='left')
train_Q3 = train_Q3_id
for df in [aum_feature_Q3,behavior_feature_Q3,big_event_Q3,cunkuan_feature_Q3,train_Q3_info,train_Q3_label]:
    train_Q3 = pd.merge(train_Q3,df,how='left')
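The read-rename-merge pattern above repeats for every table and every quarter. A hypothetical helper such as load_months below (not in the original baseline) would remove most of that duplication; it assumes the file layout used above, i.e. one CSV per month under a common prefix:

def load_months(id_df, prefix, months, feat_prefix):
    """Read one CSV per month, rename its columns with a running index,
    and left-join everything onto the valid-customer list."""
    out = id_df.copy()
    start = 1
    for m in months:
        df = pd.read_csv(rf'{prefix}_m{m}.csv')
        n = df.shape[1] - 1  # number of feature columns after cust_no
        df.columns = ['cust_no'] + [f'{feat_prefix}{i}' for i in range(start, start + n)]
        start += n
        out = out.merge(df, how='left', on='cust_no')
    return out

# e.g. aum_feature_Q3 = load_months(train_Q3_id, r'x_train/aum_train/aum', [7, 8, 9], 'F_aum')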

Process the Q4 training data and the Q1 test data in the same way.

train_Q4_id = pd.read_csv(r'x_train/cust_avli_Q4.csv')
train_Q4_label = pd.read_csv(r'y_train_3/y_Q4_3.csv')
aum_Q4_m1 = pd.read_csv(r'x_train/aum_train/aum_m10.csv')
aum_Q4_m1.columns = ['cust_no','F_aum1','F_aum2','F_aum3','F_aum4','F_aum5','F_aum6','F_aum7','F_aum8']
aum_Q4_m2 = pd.read_csv(r'x_train/aum_train/aum_m11.csv')
aum_Q4_m2.columns = ['cust_no','F_aum9','F_aum10','F_aum11','F_aum12','F_aum13','F_aum14','F_aum15','F_aum16']
aum_Q4_m3 = pd.read_csv(r'x_train/aum_train/aum_m12.csv')
aum_Q4_m3.columns = ['cust_no','F_aum17','F_aum18','F_aum19','F_aum20','F_aum21','F_aum22','F_aum23','F_aum24']
aum_feature_Q4 = pd.merge(left = train_Q4_id,right=aum_Q4_m1,how='left')
aum_feature_Q4 = pd.merge(left = aum_feature_Q4,right=aum_Q4_m2,how='left')
aum_feature_Q4 = pd.merge(left = aum_feature_Q4,right=aum_Q4_m3,how='left')
behavior_Q4_m1 = pd.read_csv(r'x_train/behavior_train/behavior_m10.csv')
behavior_Q4_m1.columns = ['cust_no','F_B1','F_B2','F_B3','F_B4','F_B5']
behavior_Q4_m2 = pd.read_csv(r'x_train/behavior_train/behavior_m11.csv')
behavior_Q4_m2.columns = ['cust_no','F_B6','F_B7','F_B8','F_B9','F_B10']
behavior_Q4_m3 = pd.read_csv(r'x_train/behavior_train/behavior_m12.csv')
behavior_Q4_m3.columns = ['cust_no','F_B11','F_B12','F_B13','F_B14','F_B15','F_B16','F_B17']
behavior_feature_Q4 = pd.merge(left = train_Q4_id,right=behavior_Q4_m1,how='left')
behavior_feature_Q4 = pd.merge(left = behavior_feature_Q4,right=behavior_Q4_m2,how='left')
behavior_feature_Q4 = pd.merge(left = behavior_feature_Q4,right=behavior_Q4_m3,how='left')
big_event_Q4 = pd.read_csv(r'x_train/big_event_train/big_event_Q4.csv')
big_event_Q4 = pd.merge(left = train_Q4_id,right=big_event_Q4,how='left')
cunkuan_Q4_m1 = pd.read_csv(r'x_train/cunkuan_train/cunkuan_m10.csv')
cunkuan_Q4_m1.columns = ['cust_no','C1','C2']
cunkuan_Q4_m2 = pd.read_csv(r'x_train/cunkuan_train/cunkuan_m11.csv')
cunkuan_Q4_m2.columns = ['cust_no','C3','C4']
cunkuan_Q4_m3 = pd.read_csv(r'x_train/cunkuan_train/cunkuan_m12.csv')
cunkuan_Q4_m3.columns = ['cust_no','C5','C6']
cunkuan_feature_Q4 = pd.merge(left = train_Q4_id,right=cunkuan_Q4_m1,how='left')
cunkuan_feature_Q4 = pd.merge(left = cunkuan_feature_Q4,right=cunkuan_Q4_m2,how='left')
cunkuan_feature_Q4 = pd.merge(left = cunkuan_feature_Q4,right=cunkuan_Q4_m3,how='left')
train_Q4_info = pd.read_csv(r'x_train/cust_info_q4.csv')
train_Q4_info = pd.merge(left = train_Q4_id,right=train_Q4_info,how='left')
train_Q4 = train_Q4_id
for df in [aum_feature_Q4,behavior_feature_Q4,big_event_Q4,cunkuan_feature_Q4,train_Q4_info,train_Q4_label]:
    train_Q4 = pd.merge(train_Q4,df,how='left')

# Q1 test data: identical pipeline, but with no labels
test_Q1_id = pd.read_csv(r'x_test/cust_avli_Q1.csv')
## F stands for Feature
aum_Q1_m1 = pd.read_csv(r'x_test/aum_test/aum_m1.csv')
aum_Q1_m1.columns = ['cust_no','F_aum1','F_aum2','F_aum3','F_aum4','F_aum5','F_aum6','F_aum7','F_aum8']
aum_Q1_m2 = pd.read_csv(r'x_test/aum_test/aum_m2.csv')
aum_Q1_m2.columns = ['cust_no','F_aum9','F_aum10','F_aum11','F_aum12','F_aum13','F_aum14','F_aum15','F_aum16']
aum_Q1_m3 = pd.read_csv(r'x_test/aum_test/aum_m3.csv')
aum_Q1_m3.columns = ['cust_no','F_aum17','F_aum18','F_aum19','F_aum20','F_aum21','F_aum22','F_aum23','F_aum24']
aum_feature_Q1 = pd.merge(left = test_Q1_id,right=aum_Q1_m1,how='left')
aum_feature_Q1 = pd.merge(left = aum_feature_Q1,right=aum_Q1_m2,how='left')
aum_feature_Q1 = pd.merge(left = aum_feature_Q1,right=aum_Q1_m3,how='left')
behavior_Q1_m1 = pd.read_csv(r'x_test/behavior_test/behavior_m1.csv')
behavior_Q1_m1.columns = ['cust_no','F_B1','F_B2','F_B3','F_B4','F_B5']
behavior_Q1_m2 = pd.read_csv(r'x_test/behavior_test/behavior_m2.csv')
behavior_Q1_m2.columns = ['cust_no','F_B6','F_B7','F_B8','F_B9','F_B10']
behavior_Q1_m3 = pd.read_csv(r'x_test/behavior_test/behavior_m3.csv')
behavior_Q1_m3.columns = ['cust_no','F_B11','F_B12','F_B13','F_B14','F_B15','F_B16','F_B17']
behavior_feature_Q1 = pd.merge(left = test_Q1_id,right=behavior_Q1_m1,how='left')
behavior_feature_Q1 = pd.merge(left = behavior_feature_Q1,right=behavior_Q1_m2,how='left')
behavior_feature_Q1 = pd.merge(left = behavior_feature_Q1,right=behavior_Q1_m3,how='left')
big_event_Q1 = pd.read_csv(r'x_test/big_event_test/big_event_Q1.csv')
big_event_Q1 = pd.merge(left = test_Q1_id,right=big_event_Q1,how='left')
cunkuan_Q1_m1 = pd.read_csv(r'x_test/cunkuan_test/cunkuan_m1.csv')
cunkuan_Q1_m1.columns = ['cust_no','C1','C2']
cunkuan_Q1_m2 = pd.read_csv(r'x_test/cunkuan_test/cunkuan_m2.csv')
cunkuan_Q1_m2.columns = ['cust_no','C3','C4']
cunkuan_Q1_m3 = pd.read_csv(r'x_test/cunkuan_test/cunkuan_m3.csv')
cunkuan_Q1_m3.columns = ['cust_no','C5','C6']
cunkuan_feature_Q1 = pd.merge(left = test_Q1_id,right=cunkuan_Q1_m1,how='left')
cunkuan_feature_Q1 = pd.merge(left = cunkuan_feature_Q1,right=cunkuan_Q1_m2,how='left')
cunkuan_feature_Q1 = pd.merge(left = cunkuan_feature_Q1,right=cunkuan_Q1_m3,how='left')
test_Q1_info = pd.read_csv(r'x_test/cust_info_Q1.csv')
test_Q1_info = pd.merge(left = test_Q1_id,right=test_Q1_info,how='left')
test_Q1 = test_Q1_id
for df in [aum_feature_Q1,behavior_feature_Q1,big_event_Q1,cunkuan_feature_Q1,test_Q1_info]:
    test_Q1 = pd.merge(test_Q1,df,how='left')
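Before preprocessing, a quick check that the three reshaped frames agree on their feature columns (the test frame simply lacks the label):

assert set(train_Q3.columns) == set(train_Q4.columns)
assert set(test_Q1.columns) == set(train_Q3.columns) - {'label'}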

Data Preprocessing

import matplotlib.pyplot as plt
import seaborn as sns
import datetime, time
import warnings
# Plotting setup
# Use Seaborn styles
sns.set()
sns.set_style('whitegrid')
sns.set_context("notebook", font_scale=1.3)
sns.set_palette('Set2')
# Use the SimHei font so Chinese labels render correctly
plt.rcParams['font.sans-serif'] = ['SimHei']
# Keep the minus sign '-' from rendering as a box in saved figures
plt.rcParams['axes.unicode_minus'] = False
train_Q3 = train_Q3.copy()
train_Q4 = train_Q4.copy()
test = test_Q1.copy()
# Add each dataset's quarter-end timestamp
train_Q4['end_date'] = '2019-12-31 23:59:59'
train_Q3['end_date'] = '2019-09-30 23:59:59'
test['end_date'] = '2020-03-31 23:59:59'
## Inspect the fraction of missing values per feature
missing_ratio = train_Q3.isna().sum()/train_Q3.shape[0]
plt.figure(figsize=(12,5))
## Plot only the features with a missing ratio > 0
missing_ratio.loc[missing_ratio>0].plot(kind='bar')
plt.show()

(Figure: bar chart of the per-feature missing-value ratios.)

# Features with a missing ratio above 0.5
missing_ratio.loc[missing_ratio>0.5]
E4     0.578972
E7     0.980326
E8     0.878020
E9     0.999392
E11    1.000000
E12    0.840335
E13    0.874779
E14    0.621329
I9     1.000000
I10    0.886946
I13    0.989700
I14    0.895293
dtype: float64
### Step 1: handle missing values
### Tip 1: E11 and I9 can be dropped outright, since they are 100% missing
### Tip 2: fields with missing_ratio > 0.5 can be replaced by an is-null indicator (1 if missing, 0 otherwise)
def fillnan(data):
    data.drop(labels=['E11','I9'], axis=1, inplace=True)
    data['I1'].fillna('男性', inplace=True)   # fill missing gender (I1) with '男性'
    data['I5'].fillna('未知', inplace=True)   # fill missing occupation (I5) with '未知' (unknown)
    def f(x):
        # 1 if the value is missing, 0 otherwise (matching the _isna suffix)
        return 1 if pd.isnull(x) else 0
    for c in ['E4', 'E5', 'E7', 'E8', 'E9', 'E12', 'E13', 'E14', 'E16', 'E18', 'I10', 'I13', 'I14']:
        data[c + '_isna'] = data[c].map(f)
        data.drop(c, axis=1, inplace=True)
    # fill the remaining numeric NaNs with 0
    for c in data.columns:
        if data[c].dtypes != 'object':
            data[c].fillna(0, inplace=True)
    return data
train_Q3 = fillnan(train_Q3)
train_Q4 = fillnan(train_Q4)
test = fillnan(test)
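A quick way to confirm fillnan did its job, using the frames just produced (the object-typed date columns are handled in the next step):

# No numeric column should contain NaN any more
num_cols = train_Q3.select_dtypes(exclude='object').columns
assert train_Q3[num_cols].isna().sum().sum() == 0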
# Time features: replace each timestamp column with its year/month/day
# and the number of days between it and the quarter-end date
def time_to_num(data):
    for c in ['F_B16','E1','E2','E3','E6','E10']:
        data[c+'_year'] = pd.to_datetime(data[c]).dt.year
        data[c+'_month'] = pd.to_datetime(data[c]).dt.month
        data[c+'_days'] = pd.to_datetime(data[c]).dt.day
        # dividing by np.timedelta64(1, 'D') turns the timedelta into a float count of days
        data[c+'_diffdays'] = (pd.to_datetime(data['end_date']) - pd.to_datetime(data[c])) / np.timedelta64(1, 'D')
        data.drop(c, axis=1, inplace=True)
    return data
train_Q3 = time_to_num(train_Q3)
train_Q4 = time_to_num(train_Q4)
test = time_to_num(test)
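To make the diffdays computation concrete, a toy example with made-up values:

demo = pd.DataFrame({'E1': ['2019-07-15'], 'end_date': ['2019-09-30 23:59:59']})
diff = (pd.to_datetime(demo['end_date']) - pd.to_datetime(demo['E1'])) / np.timedelta64(1, 'D')
print(diff.iloc[0])  # just under 78: days between 2019-07-15 00:00 and the quarter end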
## Inspect the distinct classes of every object-typed feature
for c in train_Q3.columns:
    if train_Q3[c].dtypes == 'object' and c != 'cust_no':
        print(c, train_Q3[c].nunique(), '\n', train_Q3[c].unique(), '\n')
I1 2
 ['男性' '女性']
I3 4
 ['黄金' '普通客户' '白金' '钻石']
I5 11
 ['不便分类的其他从业人员' '办事人员和有关人员' '商业工作人员' '服务性工作人员' '未知' '专业技术人员'
 '国家机关、党群组织、企业、事业单位负责人' '生产、运输设备操作人员及有关人员' '农、林、牧、渔、水利业生产人员' '军人' '退休']
I8 12
 ['双鱼座' '狮子座' '摩羯座' '处女座' '射手座' '双子座' '白羊座' '水瓶座' '巨蟹座' '天秤座' '天蝎座' '金牛座']
I12 1
 ['个人']
end_date 1
 ['2019-09-30 23:59:59']
# Integer-encode the object-typed categorical features
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
def LabelEncode(data):
    for c in ['I1','I3','I5','I8','I12']:
        data[c] = le.fit_transform(data[c])
    return data
train_Q3 = LabelEncode(train_Q3)
train_Q4 = LabelEncode(train_Q4)
test = LabelEncode(test)
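One caveat: le.fit_transform is refit on each dataset, so the integer codes only line up across train and test if every dataset happens to contain exactly the same categories. A safer variant (a sketch, not what the baseline ran; it would replace the three LabelEncode calls above) fits each column's encoder on the union of all three frames first:

def LabelEncodeConsistent(frames, cols=['I1','I3','I5','I8','I12']):
    for c in cols:
        enc = LabelEncoder()
        enc.fit(pd.concat([f[c] for f in frames]))  # learn one shared mapping
        for f in frames:
            f[c] = enc.transform(f[c])

# LabelEncodeConsistent([train_Q3, train_Q4, test])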

LightGBM Modeling

Train a LightGBM model on the Q3 data and score the Q4 predictions offline with Cohen's kappa; then train on the Q4 data and predict Q1 for the online kappa. Kappa measures agreement corrected for chance, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance.

# Split out the training labels and drop the columns not used for training
ylabel_Q3 = train_Q3['label']
ylabel_Q4 = train_Q4['label']
train_Q3.drop(['cust_no','end_date','label'],axis=1,inplace=True)
train_Q4.drop(['cust_no','end_date','label'],axis=1,inplace=True)
test.drop(['cust_no','end_date'],axis=1,inplace=True)
import lightgbm as lgb
from sklearn.metrics import cohen_kappa_score as kappa
time0 = time.time()
model_lgb = lgb.LGBMClassifier()
model_lgb.fit(train_Q3,ylabel_Q3)
time1 = time.time()
print('LightGBM training time:', time1-time0)
ypred_Q4 = model_lgb.predict(train_Q4)
print('Offline kappa on train_Q4:', kappa(ypred_Q4, ylabel_Q4))
LightGBM training time: 1.9550013542175293
Offline kappa on train_Q4: 0.37573233561737773
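To see which of the stacked monthly features the model leans on, LGBMClassifier exposes feature_importances_; a quick look:

# Top-10 features by split importance
imp = pd.Series(model_lgb.feature_importances_, index=train_Q3.columns)
print(imp.sort_values(ascending=False).head(10))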
# Training LightGBM on the Q4 data and predicting Q1 scores about 0.36 online
test_sub = pd.read_csv(r'feature_data/test_Q1_all_init_feature.csv')
test_sub = test_sub[['cust_no']]
test_sub['label'] = model_lgb.predict(test)
# Check the share of each predicted label
test_sub['label'].value_counts(normalize=True)
 1    0.791390
-1    0.118128
 0    0.090483
Name: label, dtype: float64
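Finally, the predictions can be written out for submission; the file name below is a placeholder, and the required format follows the competition page:

test_sub.to_csv('submission.csv', index=False)  # placeholder file name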
