Naive Bayes 朴素贝叶斯法

It's a way to find the probability of an event using the probabilities of other, related events.

Overview

Naive Bayes is a classification method based on Bayes' theorem and the assumption of conditional independence between features.
Given a training set, it learns the joint probability of inputs and outputs under the feature conditional independence assumption; with this model, for a given input x, the output y with the largest posterior probability (MAP) is taken as the predicted class.

Let's start with the core formula of the Naive Bayes algorithm:
P(Y|X) = \frac{P(Y) \, P(X|Y)}{P(X)}

In this formula,
P(Y|X): posterior, the probability of Y being true given that X is true.
P(Y): prior, the probability of Y being true.
P(X|Y): likelihood, the probability of X being true given that Y is true.
P(X): evidence, the probability of X being true.

Finally, we pick the Y that maximizes P(Y|X) as the prediction. This decision rule is called the maximum a posteriori (MAP) hypothesis.
Note that the prediction depends only on the relative sizes of P(Y|X). As the formula shows, every candidate Y shares the same denominator P(X), so the denominator has no effect on the prediction; its role is merely to normalize the probabilities over all outcomes.
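
As a minimal sketch of this decision rule (the priors mapping and the likelihood function here are hypothetical placeholders, not part of the original text):

def map_predict(x, priors, likelihood):
    # priors: dict mapping each class y to P(Y = y)
    # likelihood(x, y): returns P(X = x | Y = y)
    # P(X) is the same for every class, so comparing the unnormalized
    # products P(Y) * P(X | Y) is enough.
    return max(priors, key=lambda y: priors[y] * likelihood(x, y))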

Explain

Let's walk through an example to see how the Naive Bayes algorithm works step by step.

Example 1:

We are given the following dataset. The input space consists of three features X1, X2, and X3, and the output is Label = {+, -},
where:
X1 = {Low, Medium, High}
X2 = {Yes, No}
X3 = {Red, Green}
Question: what Label corresponds to {Low, Yes, Green}?

X1      X2   X3     Label
Low     No   Red    +
Medium  No   Green  +
Low     No   Green  +
Low     Yes  Red    -
High    No   Green  -
Medium  Yes  Green  -
High    Yes  Green  -

Solution 1:

First, by the Naive Bayes formula, what we need to do is compare
P(Label = + | X1 = Low, X2 = Yes, X3 = Green) and P(Label = - | X1 = Low, X2 = Yes, X3 = Green).

Next, we estimate the relevant probabilities for each feature from the dataset:
P(Label = +) = 3 / 7
P(Label = -) = 4 / 7
P(X1 = Low | Label = +) = 2/3
P(X2 = Yes | Label = +) = 0/3
P(X3 = Green | Label = +) = 2/3
P(X1 = Low | Label = -) = 1/4
P(X2 = Yes | Label = -) = 3/4
P(X3 = Green | Label = -) = 3/4

With these values in hand, we can carry out the final computation:

First, compute:
P(X1 = Low, X2 = Yes, X3 = Green | Label = +) = P(X1 = Low | Label = +) \cdot P(X2 = Yes | Label = +) \cdot P(X3 = Green | Label = +) = \frac{2}{3} \cdot \frac{0}{3} \cdot \frac{2}{3} = 0
P(X1 = Low, X2 = Yes, X3 = Green | Label = -) = P(X1 = Low | Label = -) \cdot P(X2 = Yes | Label = -) \cdot P(X3 = Green | Label = -) = \frac{1}{4} \cdot \frac{3}{4} \cdot \frac{3}{4} = \frac{9}{64}
Then:
P(Label = + | X1 = Low, X2 = Yes, X3 = Green) = \frac{P(Label = +) \cdot P(X1 = Low, X2 = Yes, X3 = Green | Label = +)}{P(X1 = Low, X2 = Yes, X3 = Green)}

As explained earlier, the denominator does not affect the prediction, so:
P(Label = + | X1 = Low, X2 = Yes, X3 = Green) \propto P(Label = +) \cdot P(X1 = Low, X2 = Yes, X3 = Green | Label = +) = \frac{3}{7} \cdot 0 = 0

Similarly:
P(Label = - | X1 = Low, X2 = Yes, X3 = Green) \propto P(Label = -) \cdot P(X1 = Low, X2 = Yes, X3 = Green | Label = -) = \frac{4}{7} \cdot \frac{9}{64} = \frac{9}{112} \approx 0.0804

Comparing the two, since P(Label = - | X1 = Low, X2 = Yes, X3 = Green) is larger, the Naive Bayes method predicts Label = - for this input.

That is how the Naive Bayes algorithm is applied in practice.
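
The whole computation can be reproduced with a minimal sketch in plain Python (the data below is just a transcription of the table above):

data = [
    (("Low", "No", "Red"), "+"),
    (("Medium", "No", "Green"), "+"),
    (("Low", "No", "Green"), "+"),
    (("Low", "Yes", "Red"), "-"),
    (("High", "No", "Green"), "-"),
    (("Medium", "Yes", "Green"), "-"),
    (("High", "Yes", "Green"), "-"),
]
query = ("Low", "Yes", "Green")

scores = {}
for label in ("+", "-"):
    rows = [x for x, y in data if y == label]
    score = len(rows) / len(data)  # prior P(Label)
    for j, value in enumerate(query):
        # conditional P(Xj = value | Label), estimated by counting (no smoothing)
        score *= sum(x[j] == value for x in rows) / len(rows)
    scores[label] = score

print(scores)                       # {'+': 0.0, '-': 0.080357...}
print(max(scores, key=scores.get))  # '-'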

Follow Up

Smoothing 平滑处理

In the example above, note that when computing the Label = + case, the fact that P(X2 = Yes | Label = +) = 0 drove the probability of the entire feature combination down to 0. In practice, letting a single zero count wipe out all other evidence will distort predictions. How can this be avoided?

To avoid it, we smooth the probability estimates as follows (shown here for the prior; the conditional probabilities are smoothed the same way, with K replaced by the number of values the corresponding feature can take):
P_\lambda(Y = c_k) = \frac{\sum_{i=1}^{N} I(y_i = c_k) + \lambda}{N + K\lambda}
In this formula,
P_\lambda(Y = c_k): the smoothed estimate of the probability that Y = c_k
\sum_{i=1}^{N} I(y_i = c_k): the number of samples with Y = c_k
\lambda: the smoothing constant; when \lambda = 1 this is called Laplacian smoothing
N: the total number of samples
K: the number of classes Y can take
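
A minimal sketch of this smoothed estimate (the helper name is hypothetical; the same function also covers the conditional probabilities when given per-class counts and the number of feature values):

def smoothed(count, total, num_values, lam=1.0):
    # (count + lambda) / (total + K * lambda); lam = 1 gives Laplacian smoothing
    return (count + lam) / (total + num_values * lam)

# The smoothed prior P(Label = +) from Example 1:
print(smoothed(3, 7, 2))  # 4/9 ≈ 0.444
# The smoothed conditional P(X2 = Yes | Label = +):
print(smoothed(0, 3, 2))  # 1/5 = 0.2, no longer zero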

Accordingly, taking the smoothing constant to be 1, the estimates from Example 1 become:
P(Label = +) = \frac{3 + 1}{7 + 2 \cdot 1} = \frac{4}{9}
P(Label = -) = \frac{4 + 1}{7 + 2 \cdot 1} = \frac{5}{9}
P(X1 = Low | Label = +) = \frac{2 + 1}{3 + 3 \cdot 1} = \frac{1}{2}
P(X2 = Yes | Label = +) = \frac{0 + 1}{3 + 2 \cdot 1} = \frac{1}{5}
P(X3 = Green | Label = +) = \frac{2 + 1}{3 + 2 \cdot 1} = \frac{3}{5}
P(X1 = Low | Label = -) = \frac{1 + 1}{4 + 3 \cdot 1} = \frac{2}{7}
P(X2 = Yes | Label = -) = \frac{3 + 1}{4 + 2 \cdot 1} = \frac{2}{3}
P(X3 = Green | Label = -) = \frac{3 + 1}{4 + 2 \cdot 1} = \frac{2}{3}

Finally, compute:
P(X1 = Low, X2 = Yes, X3 = Green | Label = +) = P(X1 = Low | Label = +) \cdot P(X2 = Yes | Label = +) \cdot P(X3 = Green | Label = +) = \frac{1}{2} \cdot \frac{1}{5} \cdot \frac{3}{5} = \frac{3}{50}
P(X1 = Low, X2 = Yes, X3 = Green | Label = -) = P(X1 = Low | Label = -) \cdot P(X2 = Yes | Label = -) \cdot P(X3 = Green | Label = -) = \frac{2}{7} \cdot \frac{2}{3} \cdot \frac{2}{3} = \frac{8}{63}

P(Label = + | X1 = Low, X2 = Yes, X3 = Green) \propto P(Label = +) \cdot P(X1 = Low, X2 = Yes, X3 = Green | Label = +) = \frac{4}{9} \cdot \frac{3}{50} = \frac{2}{75} \approx 0.0267

P(Label = - | X1 = Low, X2 = Yes, X3 = Green) \propto P(Label = -) \cdot P(X1 = Low, X2 = Yes, X3 = Green | Label = -) = \frac{5}{9} \cdot \frac{8}{63} = \frac{40}{567} \approx 0.0705

Comparing the two again, the predicted Label is still -.
That is the Naive Bayes method with Laplacian smoothing applied.

Programming

Naive Bayes: A Spam Detection Model

The goal of this model is to detect whether a message is spam or a normal message (ham). The main idea: from the probability of each word appearing in spam versus normal messages, compute whether a message composed of those words is spam.
The dataset comes from the UCI Machine Learning Repository and can be downloaded here: https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection.

Step 1: Getting, understanding, and cleaning the dataset

Importing the dataset

import math
import numpy as np
import pandas as pd

df = pd.read_table('SMSSpamCollection', sep='\t', header=None, names=['label', 'sms_message'])

# Let's observe the first 5 rows of data
df.head()
label sms_message
0 ham Go until jurong point, crazy.. Available only ...
1 ham Ok lar... Joking wif u oni...
2 spam Free entry in 2 a wkly comp to win FA Cup fina...
3 ham U dun say so early hor... U c already then say...
4 ham Nah I don't think he goes to usf, he lives aro...

Data Preprocessing

For convenience, we map the spam label to 1 and the ham (normal message) label to 0.

df['label'] = df.label.map({'spam': 1, 'ham': 0})
df.head()
label sms_message
0 0 Go until jurong point, crazy.. Available only ...
1 0 Ok lar... Joking wif u oni...
2 1 Free entry in 2 a wkly comp to win FA Cup fina...
3 0 U dun say so early hor... U c already then say...
4 0 Nah I don't think he goes to usf, he lives aro...

Next, split the dataset into a training set and a test set.

from sklearn.model_selection import train_test_split

df_train_msgs, df_test_msgs, df_ytrain, df_ytest = train_test_split(df['sms_message'], df['label'], random_state=0)

Then turn each message itself into a feature vector. We need a feature vocabulary that stores the words seen so far (essentially a dictionary); for a given message, the entry for a word is 1 if that word appears in the message and 0 otherwise.

For example, suppose our dictionary is [thank, good, night, apple, banana, river] and the message at hand is "banana river"; then the message is represented as [0, 0, 0, 0, 1, 1], as in the sketch below.
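
A minimal sketch of this binary encoding (the vocabulary below is the hypothetical dictionary from the example):

vocabulary = ["thank", "good", "night", "apple", "banana", "river"]
message = "banana river"
words = set(message.split())
feature = [1 if w in words else 0 for w in vocabulary]
print(feature)  # [0, 0, 0, 0, 1, 1]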

To achieve this, we use the CountVectorizer class from the sklearn library.

from sklearn.feature_extraction.text import CountVectorizer

# stop_words: words that will not help us predict, since they occur in most documents,
# e.g. 'a', 'and', 'the', 'him', 'is' ...
# Run print(vectorizer.get_stop_words()) to see the stop words
vectorizer = CountVectorizer(binary=True, stop_words='english')

# Create the vocabulary for our feature transformation
vectorizer.fit(df_train_msgs)

# Next we create the feature vectors for both the training data and the test data
X_train = vectorizer.transform(df_train_msgs).toarray()  # turn the training messages into feature vectors
X_test = vectorizer.transform(df_test_msgs).toarray()    # turn the test messages into feature vectors

# Changing the target vectors' data type
y_train = df_ytrain.to_numpy()  # converting from a pandas Series to a numpy array
y_test = df_ytest.to_numpy()

Step 2: Implementing the algorithm and training the model

# count the ham and spam messages:
ham_count = float(np.sum(y_train == 0))
spam_count = float(np.sum(y_train == 1))

# calculate the estimated value of P(y) for each class y.
p_y0 = ham_count / y_train.size
p_y1 = spam_count / y_train.size

print("The estimated value of P(y) for y = 0 (ham): {}".format(p_y0))
print("The estimated value of P(y) for y = 1 (spam): {}".format(p_y1))
The estimated value of P(y) for y = 0 (ham): 0.8655180665230916
The estimated value of P(y) for y = 1 (spam): 0.13448193347690834
# Smoothing lambda value
m = 1

bern_matrix_y0 = np.array([0 for _ in range(X_train[0].size)])  # class 0 -- ham
bern_matrix_y1 = np.array([0 for _ in range(X_train[0].size)])  # class 1 -- spam

# count the number of occurrences of each word in each class
for i, msg in enumerate(X_train):
    if y_train[i] == 0:
        bern_matrix_y0 += msg
    else:
        bern_matrix_y1 += msg

# smoothed per-word Bernoulli parameters: (count + m) / (class count + 2m)
bern_matrix_y0 = (bern_matrix_y0 + m) / (ham_count + m * 2)
bern_matrix_y1 = (bern_matrix_y1 + m) / (spam_count + m * 2)

# predict the test dataset
y_pre = []
for i, msg in enumerate(X_test):
    p_map0 = np.log(p_y0)  # accumulates log(p(new message | ham) * p(ham))
    p_map1 = np.log(p_y1)  # accumulates log(p(new message | spam) * p(spam))
    for j, word in enumerate(msg):
        if word == 1:
            p_map0 += np.log(bern_matrix_y0[j])
            p_map1 += np.log(bern_matrix_y1[j])
        else:
            p_map0 += np.log(1 - bern_matrix_y0[j])
            p_map1 += np.log(1 - bern_matrix_y1[j])
    y_pre.append(0 if p_map0 > p_map1 else 1)

# The accuracy on the test set
correct_num = 0
for i, j in zip(y_test, y_pre):
    correct_num = correct_num + (i == j)

print("Total number of test examples classified correctly: {}".format(correct_num))
print("The accuracy on the test set: {}".format(correct_num / y_test.size))
Total number of test examples classified correctly: 1362
The accuracy on the test set: 0.9777458722182341

Step 3: Using the built-in library

In fact, the sklearn library already ships with an implementation of Naive Bayes that can be used directly, as follows:

from sklearn.naive_bayes import BernoulliNB
from sklearn import metrics

clf = BernoulliNB()
clf.fit(X_train, y_train)
BernoulliNB(alpha=1.0, binarize=0.0, class_prior=None, fit_prior=True)
y_pred = clf.predict(X_test)
print("By using the built-in library, the accuracy is: ", metrics.accuracy_score(y_test, y_pred))
By using the built-in library, the accuracy is:  0.9777458722182341

Conclusion

This article gave a brief introduction to the Naive Bayes model, how to use it, and how smoothing resolves the zero-probability problem. For the underlying theory and full derivations, see Li Hang's book 《统计学习方法》 (Statistical Learning Methods).
This article actually only used the multivariate Bernoulli event model; Naive Bayes has other variants, such as the multinomial event model (Multinomial Naive Bayes), which are worth exploring further. A minimal sketch of that variant follows.
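
For reference, a minimal sketch of the multinomial variant on the same data. Note this is an assumption on my part: MultinomialNB normally expects word counts, so the vectorizer would typically be built with binary=False rather than the binary features used above.

from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

# alpha is the Laplace smoothing constant (same role as lambda above)
clf = MultinomialNB(alpha=1.0)
clf.fit(X_train, y_train)  # X_train/y_train as prepared in Step 1
y_pred = clf.predict(X_test)
print("Multinomial NB accuracy: ", metrics.accuracy_score(y_test, y_pred))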

Thanks & Bye ~
