# Titanic: Machine Learning from Disaster

Question: build a predictive model that answers "what sorts of people were more likely to survive?" using passenger data (name, age, gender, socio-economic class, etc.).

1. Import packages and datasets

import pandas as pd

from pandas import Series, DataFrame

import numpy as np

from matplotlib import pyplot as plt

import seaborn as sns

Note: when running in a Kaggle notebook, drop the leading '.' from the path in pd.read_csv("./kaggle/input/titanic/train.csv").

Both the training set and the test set need to be read in:

train = pd.read_csv("./kaggle/input/titanic/train.csv")

test = pd.read_csv("./kaggle/input/titanic/test.csv")

allData = pd.concat([train, test], ignore_index=True)

# dataNum = train.shape[0]

# featureNum = train.shape[1]

train.info()
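As the note above mentions, in a Kaggle notebook the competition files live under /kaggle/input, so the two read calls become (a minimal variant of the lines above, assuming the standard Kaggle input path):

```python
# Kaggle-notebook variant: same reads, path without the leading '.'
train = pd.read_csv("/kaggle/input/titanic/train.csv")
test = pd.read_csv("/kaggle/input/titanic/test.csv")
```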

2. Data overview

Overview: run train.info() to see the overall information about the dataset:

RangeIndex: 891 entries, 0 to 890

Data columns (total 12 columns):

PassengerId 891 non-null int64

Survived 891 non-null int64

Pclass 891 non-null int64

Name 891 non-null object

Sex 891 non-null object

Age 714 non-null float64

SibSp 891 non-null int64

Parch 891 non-null int64

Ticket 891 non-null object

Fare 891 non-null float64

Cabin 204 non-null object

Embarked 889 non-null object

dtypes: float64(2), int64(5), object(5)

memory usage: 83.6+ KB

Run train.head() to see a few sample rows.

Features

Variable | Definition | Key
:-:|:-:|:-:
survival | Survival | 0 = No, 1 = Yes
pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd
sex | Sex |
Age | Age in years |
sibsp | # of siblings / spouses aboard the Titanic |
parch | # of parents / children aboard the Titanic |
ticket | Ticket number |
fare | Passenger fare |
cabin | Cabin number |
embarked | Port of Embarkation | C = Cherbourg, Q = Queenstown, S = Southampton

3. Visual data analysis

Sex: females survived at a much higher rate than males.

# Sex

sns.countplot(x='Sex', hue='Survived', data=train)

plt.show()

Pclass: the higher the passenger class, the higher the survival rate.

# Pclass

sns.barplot(x='Pclass', y="Survived", data=train)

plt.show()

Family size feature: FamilySize = SibSp + Parch + 1

Passengers with a moderately sized family have a higher survival rate.

# FamilySize = SibSp + Parch + 1

allData['FamilySize'] = allData['SibSp'] + allData['Parch'] + 1

sns.barplot(x='FamilySize', y='Survived', data=allData)

plt.show()

Embarked: survival rate differs by port of embarkation.

# Embarked

sns.countplot(x='Embarked', hue='Survived', data=train)

plt.show()

Age: young children and adults in their prime survived at higher rates.

# Age

sns.stripplot(x="Survived", y="Age", data=train, jitter=True)

plt.show()

- Survival density by age

facet = sns.FacetGrid(train, hue="Survived", aspect=2)

facet.map(sns.kdeplot, 'Age', shade=True)

facet.set(xlim=(0, train['Age'].max()))

facet.add_legend()

plt.xlabel('Age')

plt.ylabel('density')

plt.show()

Children survive at a distinctly different rate from the population as a whole.

The author treats ages 10 and under as children and gives them their own label (a quick check is sketched below).
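A minimal sketch of that check (not part of the original notebook; it works on a temporary copy so the training frame is left untouched):

```python
# Quick look at child vs. adult survival on the training set
tmp = train[['Age', 'Survived']].copy()
tmp['Child'] = (tmp['Age'] <= 10).astype(int)  # missing ages count as adults here, matching the later label
sns.barplot(x='Child', y='Survived', data=tmp)
plt.show()
```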

Fare: the higher the fare, the higher the survival rate.

# Fare

sns.stripplot(x="Survived", y="Fare", data=train, jitter=True)

plt.show()

Name feature

Title: a title is extracted from the salutation in each name and used to classify passengers.

# Name

allData['Title'] = allData['Name'].apply(lambda x:x.split(',')[1].split('.')[0].strip())

pd.crosstab(allData['Title'], allData['Sex'])

Count how many passengers fall into each title group:

TitleClassification = {'Officer':['Capt', 'Col', 'Major', 'Dr', 'Rev'],

'Royalty':['Don', 'Sir', 'the Countess', 'Dona', 'Lady'],

'Mrs':['Mme', 'Ms', 'Mrs'],

'Miss':['Mlle', 'Miss'],

'Mr':['Mr'],

'Master':['Master','Jonkheer']}

for title in TitleClassification.keys():

    cnt = 0

    for name in TitleClassification[title]:

        cnt += allData.groupby(['Title']).size()[name]

    print(title, ':', cnt)

Assign the labels:

TitleClassification = {'Officer':['Capt', 'Col', 'Major', 'Dr', 'Rev'],

'Royalty':['Don', 'Sir', 'the Countess', 'Dona', 'Lady'],

'Mrs':['Mme', 'Ms', 'Mrs'],

'Miss':['Mlle', 'Miss'],

'Mr':['Mr'],

'Master':['Master','Jonkheer']}

TitleMap = {}

for title in TitleClassification.keys():

    TitleMap.update(dict.fromkeys(TitleClassification[title], title))

allData['Title'] = allData['Title'].map(TitleMap)

Survival rate differs by title:

sns.barplot(x="Title", y="Survived", data=allData)

plt.show()

Ticket: passengers who share a ticket number (and thus likely adjacent berths) survive at a higher rate.

#Ticket

TicketCnt = allData.groupby(['Ticket']).size()

allData['SameTicketNum'] = allData['Ticket'].apply(lambda x:TicketCnt[x])

sns.barplot(x='SameTicketNum', y='Survived', data=allData)

plt.show()

# allData['SameTicketNum']

Bivariate/multivariate analysis: any two or more features can be analyzed together.

Bivariate analysis: Pclass & Age

# Pclass & Age

sns.violinplot("Pclass", "Age", hue="Survived", data=train, split=True)

plt.show()

Bivariate analysis: Age & Sex

# Age & Sex

sns.swarmplot(x='Age', y="Sex", data=train, hue='Survived')

plt.show()

4. Data cleaning & anomaly handling

Discrete features

With existing labels --> One-Hot encoding. Sex, Pclass, and Embarked already come with well-defined labels (int, float, string, ...), so get_dummies can be applied directly to split each of them into a multi-dimensional vector and increase the feature dimension.

Embarked has a few missing values; they are filled with an estimate based on analysis of the whole dataset.

# Sex

allData = allData.join(pd.get_dummies(allData['Sex'], prefix="Sex"))

# Pclass

allData = allData.join(pd.get_dummies(allData['Pclass'], prefix="Pclass"))

# Embarked

allData[allData['Embarked'].isnull()] # inspect the rows with missing Embarked

allData.groupby(by=['Pclass','Embarked']).Fare.median() # for Pclass=1, the Embarked=C group has a median fare of about 76

allData['Embarked'] = allData['Embarked'].fillna('C')

allData = allData.join(pd.get_dummies(allData['Embarked'], prefix="Embarked"))

Without existing labels --> design labels --> One-Hot. FamilySize, Name, and Ticket need to be processed uniformly over the whole dataset before labels are assigned.

# FamilySize

def FamilyLabel(s):

    if s == 4:

        return 4

    elif s == 2 or s == 3:

        return 3

    elif s == 1 or s == 7:

        return 2

    elif s == 5 or s == 6:

        return 1

    elif s < 1 or s > 7:

        return 0

allData['FamilyLabel'] = allData['FamilySize'].apply(FamilyLabel)

allData = allData.join(pd.get_dummies(allData['FamilyLabel'], prefix="Fam"))

# Name

TitleLabelMap = {'Mr':1.0,

'Mrs':5.0,

'Miss':4.5,

'Master':2.5,

'Royalty':3.5,

'Officer':2.0}

def TitleLabel(s):

    return TitleLabelMap[s]

# allData['TitleLabel'] = allData['Title'].apply(TitleLabel)

allData = allData.join(pd.get_dummies(allData['Title'], prefix="Title"))

# Ticket

def TicketLabel(s):

    if s == 3 or s == 4:

        return 3

    elif s == 2 or s == 8:

        return 2

    elif s == 1 or s == 5 or s == 6 or s == 7:

        return 1

    elif s < 1 or s > 8:

        return 0

allData['TicketLabel'] = allData['SameTicketNum'].apply(TicketLabel)

allData = allData.join(pd.get_dummies(allData['TicketLabel'], prefix="TicNum"))

Continuous features

Age & Fare are standardized to shrink their ranges and speed up gradient descent.
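For reference, the standardization used here is the ordinary z-score transform (this is what the pandas expressions below compute):

$$z = \frac{x - \mu}{\sigma}$$

where $\mu$ and $\sigma$ are the column's mean and standard deviation.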

# Age

allData['Child'] = allData['Age'].apply(lambda x:1 if x <= 10 else 0) # child label

allData['Age'] = (allData['Age']-allData['Age'].mean())/allData['Age'].std() # standardize

allData['Age'].fillna(value=0, inplace=True) # fill missing ages with 0 (the post-standardization mean)

# Fare

allData['Fare'] = allData['Fare'].fillna(25) # fill the missing value

allData.loc[allData['Survived'].notnull() & (allData['Fare'] > 500), 'Fare'] = 300.0 # cap extreme fares in the training rows

allData['Fare'] = allData['Fare'].apply(lambda x:(x-allData['Fare'].mean())/allData['Fare'].std())

Drop unused features to reduce the complexity of the algorithm.

# drop unused features

allData.drop(['Cabin', 'PassengerId', 'Ticket', 'Name', 'Title', 'Sex', 'SibSp', 'Parch', 'FamilySize', 'Embarked', 'Pclass', 'FamilyLabel', 'SameTicketNum', 'TicketLabel'], axis=1, inplace=True)

Re-split the training/test sets. The training and test sets were merged at the start for convenience; now they are split again according to whether Survived is missing.

# re-split the dataset

train_data = allData[allData['Survived'].notnull()]

test_data = allData[allData['Survived'].isnull()]

test_data = test_data.reset_index(drop=True)

xTrain = train_data.drop(['Survived'], axis=1)

yTrain = train_data['Survived']

xTest = test_data.drop( ['Survived'], axis=1)

Feature correlation analysis: after the features are built, this step feeds back to the programmer whether the features are informative and whether they overlap.

If problems show up, the earlier feature design can be revised (a small follow-up check is sketched after the heatmap below).

# feature correlation analysis

Correlation = pd.DataFrame(allData[allData.columns.to_list()])

colormap = plt.cm.viridis

plt.figure(figsize=(24,22))

sns.heatmap(Correlation.astype(float).corr(), linewidths=0.1, vmax=1.0, cmap=colormap, linecolor='white', annot=True, square=True)

plt.show()
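As mentioned above, a small helper like the following (an added sketch, not in the original notebook) can list the feature pairs whose absolute correlation exceeds a chosen threshold:

```python
# Hypothetical redundancy check: report highly correlated feature pairs
corr = Correlation.astype(float).corr()
threshold = 0.9  # arbitrary cutoff for "overlapping" features
pairs = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) > threshold
]
print(pairs)
```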

5. Model building & parameter tuning

Import the model packages:

from sklearn.pipeline import Pipeline

from sklearn.ensemble import RandomForestClassifier

from sklearn.model_selection import GridSearchCV

from sklearn.feature_selection import SelectKBest

The author chooses a random forest classifier.

Tune the parameters with grid search:

pipe = Pipeline([('select', SelectKBest(k=10)),

('classify', RandomForestClassifier(random_state = 10, max_features = 'sqrt'))])

param_test = {'classify__n_estimators':list(range(20,100,5)),

'classify__max_depth' :list(range(3,10,1))}

gsearch = GridSearchCV(estimator=pipe, param_grid=param_test, scoring='roc_auc', cv=10)

gsearch.fit(xTrain, yTrain)

print(gsearch.best_params_, gsearch.best_score_)

This takes a while to run; when it finishes, it prints:

{'classify__max_depth': 6, 'classify__n_estimators': 70} 0.8790924679681529

Build the model with the parameters found above.

Training:

rfc = RandomForestClassifier(n_estimators=70, max_depth=6, random_state=10, max_features='sqrt')

rfc.fit(xTrain, yTrain)

Export the results:

predictions = rfc.predict(xTest)

output = pd.DataFrame({'PassengerId':test['PassengerId'], 'Survived':predictions.astype('int64')})

output.to_csv('my_submission.csv', index=False)

6. Submit for scoring
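One way to submit my_submission.csv from code, sketched here as an assumption (it requires the `kaggle` package and an API token in ~/.kaggle/kaggle.json; uploading the file through the competition page works just as well):

```python
# Hedged sketch: submit the CSV to the Titanic competition via the Kaggle API
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads the token from ~/.kaggle/kaggle.json
api.competition_submit("my_submission.csv", "Random forest, grid-searched", "titanic")
```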

Appendix: the complete code, exported from the Jupyter Notebook as a Python script.

# To add a new cell, type '# %%'

# To add a new markdown cell, type '# %% [markdown]'

# %%

import pandas as pd

from pandas import Series, DataFrame

import numpy as np

from matplotlib import pyplot as plt

import seaborn as sns

# %% [markdown]

# # Features

# Variable | Definition | Key

# :-:|:-:|:-:

# survival | Survival | 0 = No, 1 = Yes

# pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd

# sex | Sex

# Age | Age in years

# sibsp | # of siblings / spouses aboard the Titanic

# parch | # of parents / children aboard the Titanic

# ticket | Ticket number

# fare | Passenger fare

# cabin | Cabin number

# embarked | Port of Embarkation | C = Cherbourg, Q = Queenstown, S = Southampton

# %%

train = pd.read_csv("./kaggle/input/titanic/train.csv")

test = pd.read_csv("./kaggle/input/titanic/test.csv")

allData = pd.concat([train, test], ignore_index=True)

# dataNum = train.shape[0]

# featureNum = train.shape[1]

train.head()

# %%

# Sex

sns.countplot("Sex", hue="Survived", data=train)

plt.show()

# %%

# Pclass

sns.barplot(x="Pclass", y="Survived", data=train)

plt.show()

# Pclass & Age

sns.violinplot("Pclass", "Age", hue="Survived", data=train, split=True)

plt.show()

# %%

# FamilySize = SibSp + Parch + 1

allData["FamilySize"] = allData["SibSp"] + allData["Parch"] + 1

sns.barplot(x="FamilySize", y="Survived", data=allData)

plt.show()

# %%

# Embarked

sns.countplot("Embarked", hue="Survived", data=train)

plt.show()

# %%

# Age

sns.stripplot(x="Survived", y="Age", data=train, jitter=True)

plt.show()

facet = sns.FacetGrid(train, hue="Survived", aspect=2)

facet.map(sns.kdeplot, "Age", shade=True)

facet.set(xlim=(0, train["Age"].max()))

facet.add_legend()

plt.xlabel("Age")

plt.ylabel("density")

plt.show()

# Age & Sex

sns.swarmplot(x="Age", y="Sex", data=train, hue="Survived")

plt.show()

# %%

# Fare

sns.stripplot(x="Survived", y="Fare", data=train, jitter=True)

plt.show()

# %%

# Name

# allData['Title'] = allData['Name'].str.extract('([A-Za-z]+)\.', expand=False)  # alternative via str.extract (the author was unsure what it does)

allData["Title"] = allData["Name"].apply(

lambda x: x.split(",")[1].split(".")[0].strip()

)

# pd.crosstab(allData['Title'], allData['Sex'])

TitleClassification = {

"Officer": ["Capt", "Col", "Major", "Dr", "Rev"],

"Royalty": ["Don", "Sir", "the Countess", "Dona", "Lady"],

"Mrs": ["Mme", "Ms", "Mrs"],

"Miss": ["Mlle", "Miss"],

"Mr": ["Mr"],

"Master": ["Master", "Jonkheer"],

}

TitleMap = {}

for title in TitleClassification.keys():

    TitleMap.update(dict.fromkeys(TitleClassification[title], title))

"""
# cnt = 0
for name in TitleClassification[title]:
    cnt += allData.groupby(['Title']).size()[name]
# print (title,':',cnt)
"""

allData["Title"] = allData["Title"].map(TitleMap)

sns.barplot(x="Title", y="Survived", data=allData)

plt.show()

# %%

# Ticket

TicketCnt = allData.groupby(["Ticket"]).size()

allData["SameTicketNum"] = allData["Ticket"].apply(lambda x: TicketCnt[x])

sns.barplot(x="SameTicketNum", y="Survived", data=allData)

plt.show()

# allData['SameTicketNum']

# %% [markdown]

# # Data cleaning

# - Sex & Pclass & Embarked --> One-Hot

# - Age & Fare --> Standardize

# - FamilySize & Name & Ticket --> ints --> One-Hot

# %%

# Sex

allData = allData.join(pd.get_dummies(allData["Sex"], prefix="Sex"))

# Pclass

allData = allData.join(pd.get_dummies(allData["Pclass"], prefix="Pclass"))

# Embarked

allData[allData["Embarked"].isnull()] # 查看缺失值

allData.groupby(by=["Pclass", "Embarked"]).Fare.mean() # Pclass=1, Embark=C, 中位数=76

allData["Embarked"] = allData["Embarked"].fillna("C")

allData = allData.join(pd.get_dummies(allData["Embarked"], prefix="Embarked"))

# %%

# Age

allData["Child"] = allData["Age"].apply(lambda x: 1 if x <= 10 else 0) # 儿童标签

allData["Age"] = (allData["Age"] - allData["Age"].mean()) / allData["Age"].std() # 标准化

allData["Age"].fillna(value=0, inplace=True) # 填充缺失值

# Fare

allData["Fare"] = allData["Fare"].fillna(25) # 填充缺失值

allData[allData["Survived"].notnull()]["Fare"] = allData[allData["Survived"].notnull()][

"Fare"

].apply(lambda x: 300.0 if x > 500 else x)

allData["Fare"] = allData["Fare"].apply(

lambda x: (x - allData["Fare"].mean()) / allData["Fare"].std()

)

# %%

# FamilySize

def FamilyLabel(s):

    if s == 4:

        return 4

    elif s == 2 or s == 3:

        return 3

    elif s == 1 or s == 7:

        return 2

    elif s == 5 or s == 6:

        return 1

    elif s < 1 or s > 7:

        return 0

allData["FamilyLabel"] = allData["FamilySize"].apply(FamilyLabel)

allData = allData.join(pd.get_dummies(allData["FamilyLabel"], prefix="Fam"))

# Name

TitleLabelMap = {

"Mr": 1.0,

"Mrs": 5.0,

"Miss": 4.5,

"Master": 2.5,

"Royalty": 3.5,

"Officer": 2.0,

}

def TitleLabel(s):

    return TitleLabelMap[s]

# allData['TitleLabel'] = allData['Title'].apply(TitleLabel)

allData = allData.join(pd.get_dummies(allData["Title"], prefix="Title"))

# Ticket

def TicketLabel(s):

    if s == 3 or s == 4:

        return 3

    elif s == 2 or s == 8:

        return 2

    elif s == 1 or s == 5 or s == 6 or s == 7:

        return 1

    elif s < 1 or s > 8:

        return 0

allData["TicketLabel"] = allData["SameTicketNum"].apply(TicketLabel)

allData = allData.join(pd.get_dummies(allData["TicketLabel"], prefix="TicNum"))

# %%

# drop unused features

allData.drop(

[

"Cabin",

"PassengerId",

"Ticket",

"Name",

"Title",

"Sex",

"SibSp",

"Parch",

"FamilySize",

"Embarked",

"Pclass",

"Title",

"FamilyLabel",

"SameTicketNum",

"TicketLabel",

],

axis=1,

inplace=True,

)

# re-split the dataset

train_data = allData[allData["Survived"].notnull()]

test_data = allData[allData["Survived"].isnull()]

test_data = test_data.reset_index(drop=True)

xTrain = train_data.drop(["Survived"], axis=1)

yTrain = train_data["Survived"]

xTest = test_data.drop(["Survived"], axis=1)

# allData.columns.to_list()

# %%

# feature correlation analysis

Correlation = pd.DataFrame(allData[allData.columns.to_list()])

colormap = plt.cm.viridis

plt.figure(figsize=(24, 22))

sns.heatmap(

Correlation.astype(float).corr(),

linewidths=0.1,

vmax=1.0,

cmap=colormap,

linecolor="white",

annot=True,

square=True,

)

plt.show()

# %% [markdown]

# # Grid-search the random forest parameters

# - n_estimators

# - max_depth

# %%

from sklearn.pipeline import Pipeline

from sklearn.ensemble import RandomForestClassifier

from sklearn.model_selection import GridSearchCV

from sklearn.feature_selection import SelectKBest

# %%

pipe = Pipeline(

[

("select", SelectKBest(k=10)),

("classify", RandomForestClassifier(random_state=10, max_features="sqrt")),

]

)

param_test = {

"classify__n_estimators": list(range(20, 100, 5)),

"classify__max_depth": list(range(3, 10, 1)),

}

gsearch = GridSearchCV(estimator=pipe, param_grid=param_test, scoring="roc_auc", cv=10)

gsearch.fit(xTrain, yTrain)

print(gsearch.best_params_, gsearch.best_score_)

# %%

rfc = RandomForestClassifier(

n_estimators=70, max_depth=6, random_state=10, max_features="sqrt"

)

rfc.fit(xTrain, yTrain)

predictions = rfc.predict(xTest)

output = pd.DataFrame(

{"PassengerId": test["PassengerId"], "Survived": predictions.astype("int64")}

)

output.to_csv("my_submission.csv", index=False)
