BPMF (Bayesian Probabilistic Matrix Factorization) is a probabilistic model that solves MF (matrix factorization) with Bayesian inference. Reference: https://gist.github.com/macks22/00a17b1d374dfc267a9a
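
For reference, the generative model from the cited BPMF paper (https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf) places Gaussian priors on the user and item factor vectors and Gaussian-Wishart hyperpriors on their means and precisions. This is a summary in the paper's notation, not of the exact pymc3 parameterization used in the code below:

R_{ij} \mid U_i, V_j \sim \mathcal{N}\!\big(U_i^{\top} V_j,\; \alpha^{-1}\big)
U_i \sim \mathcal{N}\!\big(\mu_U,\; \Lambda_U^{-1}\big), \qquad V_j \sim \mathcal{N}\!\big(\mu_V,\; \Lambda_V^{-1}\big)
p(\mu_U, \Lambda_U) = \mathcal{N}\!\big(\mu_U \mid \mu_0, (\beta_0 \Lambda_U)^{-1}\big)\,\mathcal{W}\!\big(\Lambda_U \mid W_0, \nu_0\big) \quad \text{(and likewise for } \mu_V, \Lambda_V\text{)}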

1. Using the gist's own dataset (the dense Jester subset), the code is as follows:

# -*- Encoding:UTF-8 -*-
'''
@author: Jason.F
@data: 2019.07.22
@function: Implementing BPMF
Dataset: Jester dense subset (100x20)
Evaluating: RMSE
Paper: https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf
@reference: https://gist.github.com/macks22/00a17b1d374dfc267a9a
'''
import sys
import time
import logging
import pymc3 as pm
import numpy as np
import pandas as pd
import theano
import theano.tensor as t
import scipy as sp
import math

DATA_NOT_FOUND = -1

# data from: https://gist.github.com/macks22/b40ac9c685e920ad3ca2
def read_jester_data(fname='/data/tmpexec/jester-dense-subset-100x20.csv'):
    """Read dense Jester dataset and split train/test data randomly.
    We use a 0.9:0.1 Train:Test split.
    """
    logging.info('reading data')
    try:
        data = pd.read_csv(fname)
    except IOError as err:
        print(str(err))
        url = 'https://gist.github.com/macks22/b40ac9c685e920ad3ca2'
        print('download from: %s' % url)
        sys.exit(DATA_NOT_FOUND)

    # Calculate split sizes.
    logging.info('splitting train/test sets')
    n, m = data.shape           # number of users, number of jokes
    N = n * m                   # number of cells in matrix
    test_size = int(N / 10)     # use 10% of data as test set
    train_size = N - test_size  # and remainder for training

    # Prepare train/test ndarrays.
    train = data.copy().values
    test = np.ones(data.shape) * np.nan

    # Draw random sample of training data to use for testing.
    tosample = np.where(~np.isnan(train))            # only sample non-missing values
    idx_pairs = list(zip(tosample[0], tosample[1]))  # zip row/col indices
    indices = np.arange(len(idx_pairs))              # indices of row/col index pairs
    sample = np.random.choice(indices, replace=False, size=test_size)  # draw sample

    # Transfer random sample from train set to test set.
    for idx in sample:
        idx_pair = idx_pairs[idx]         # retrieve sampled index pair
        test[idx_pair] = train[idx_pair]  # transfer to test set
        train[idx_pair] = np.nan          # remove from train set

    # Verify everything worked properly.
    assert np.isnan(train).sum() == test_size
    assert np.isnan(test).sum() == train_size

    # Return the two numpy ndarrays.
    return train, test


def build_pmf_model(train, alpha=2, dim=10, std=0.01):
    """Construct the Probabilistic Matrix Factorization model using pymc3.
    Note that the `testval` param for U and V initializes the model away from
    0 using a small amount of Gaussian noise.

    :param np.ndarray train: Training data (observed) to learn the model on.
    :param int alpha: Fixed precision to use for the rating likelihood function.
    :param int dim: Dimensionality of the model; rank of low-rank approximation.
    :param float std: Standard deviation for Gaussian noise in model initialization.
    """
    # Mean value imputation on training data.
    train = train.copy()
    nan_mask = np.isnan(train)
    train[nan_mask] = train[~nan_mask].mean()

    # Low precision reflects uncertainty; prevents overfitting.
    # We use point estimates from the data to initialize.
    # Set to mean variance across users and items.
    alpha_u = 1 / train.var(axis=1).mean()
    alpha_v = 1 / train.var(axis=0).mean()

    logging.info('building the PMF model')
    n, m = train.shape
    with pm.Model() as pmf:
        U = pm.MvNormal('U', mu=0, tau=alpha_u * np.eye(dim),
                        shape=(n, dim), testval=np.random.randn(n, dim) * std)
        V = pm.MvNormal('V', mu=0, tau=alpha_v * np.eye(dim),
                        shape=(m, dim), testval=np.random.randn(m, dim) * std)
        R = pm.Normal('R', mu=t.dot(U, V.T), tau=alpha * np.ones(train.shape),
                      observed=train)

    logging.info('done building PMF model')
    return pmf


def build_bpmf_model(train, alpha=2, dim=10, std=0.01):
    """Build the original BPMF model, which we cannot sample from due to
    current limitations in pymc3's implementation of the Wishart distribution.
    """
    n, m = train.shape
    beta_0 = 1  # scaling factor for lambdas; unclear on its use

    # Mean value imputation on training data.
    train = train.copy()
    nan_mask = np.isnan(train)
    train[nan_mask] = train[~nan_mask].mean()

    logging.info('building the BPMF model')
    with pm.Model() as bpmf:
        # Specify user feature matrix
        lambda_u = pm.Wishart('lambda_u', n=dim, V=np.eye(dim), shape=(dim, dim),
                              testval=np.random.randn(dim, dim) * std)
        mu_u = pm.Normal('mu_u', mu=0, tau=beta_0 * lambda_u, shape=dim,
                         testval=np.random.randn(dim) * std)
        U = pm.MvNormal('U', mu=mu_u, tau=lambda_u, shape=(n, dim),
                        testval=np.random.randn(n, dim) * std)

        # Specify item feature matrix
        lambda_v = pm.Wishart('lambda_v', n=dim, V=np.eye(dim), shape=(dim, dim),
                              testval=np.random.randn(dim, dim) * std)
        mu_v = pm.Normal('mu_v', mu=0, tau=beta_0 * lambda_v, shape=dim,
                         testval=np.random.randn(dim) * std)
        V = pm.MvNormal('V', mu=mu_v, tau=lambda_v, shape=(m, dim),
                        testval=np.random.randn(m, dim) * std)

        # Specify rating likelihood function
        R = pm.Normal('R', mu=t.dot(U, V.T), tau=alpha * np.ones((n, m)),
                      observed=train)

    logging.info('done building the BPMF model')
    return bpmf


def build_mod_bpmf_model(train, alpha=2, dim=10, std=0.01):
    """Build the modified BPMF model using pymc3. The original model uses
    Wishart priors on the covariance matrices. Unfortunately, the Wishart
    distribution in pymc3 is currently not suitable for sampling. This
    version decomposes the covariance matrix into:

        diag(sigma) \dot corr_matrix \dot diag(sigma).

    We use uniform priors on the standard deviations (sigma) and LKJCorr
    priors on the correlation matrices (corr_matrix):

        sigma ~ Uniform
        corr_matrix ~ LKJCorr(n=1, p=dim)
    """
    n, m = train.shape
    beta_0 = 1  # scaling factor for lambdas; unclear on its use

    # Mean value imputation on training data.
    train = train.copy()
    nan_mask = np.isnan(train)
    train[nan_mask] = train[~nan_mask].mean()

    # We will use separate priors for sigma and correlation matrix.
    # In order to convert the upper triangular correlation values to a
    # complete correlation matrix, we need to construct an index matrix:
    n_elem = int(dim * (dim - 1) / 2)
    tri_index = np.zeros([dim, dim], dtype=int)
    tri_index[np.triu_indices(dim, k=1)] = np.arange(n_elem)
    tri_index[np.triu_indices(dim, k=1)[::-1]] = np.arange(n_elem)

    logging.info('building the BPMF model')
    with pm.Model() as bpmf:
        # Specify user feature matrix
        sigma_u = pm.Uniform('sigma_u', shape=dim)
        corr_triangle_u = pm.LKJCorr('corr_u', n=1, p=dim,
                                     testval=np.random.randn(n_elem) * std)
        corr_matrix_u = corr_triangle_u[tri_index]
        corr_matrix_u = t.fill_diagonal(corr_matrix_u, 1)
        cov_matrix_u = t.diag(sigma_u).dot(corr_matrix_u.dot(t.diag(sigma_u)))
        lambda_u = t.nlinalg.matrix_inverse(cov_matrix_u)
        mu_u = pm.Normal('mu_u', mu=0, tau=beta_0 * t.diag(lambda_u), shape=dim,
                         testval=np.random.randn(dim) * std)
        U = pm.MvNormal('U', mu=mu_u, tau=lambda_u, shape=(n, dim),
                        testval=np.random.randn(n, dim) * std)

        # Specify item feature matrix
        sigma_v = pm.Uniform('sigma_v', shape=dim)
        corr_triangle_v = pm.LKJCorr('corr_v', n=1, p=dim,
                                     testval=np.random.randn(n_elem) * std)
        corr_matrix_v = corr_triangle_v[tri_index]
        corr_matrix_v = t.fill_diagonal(corr_matrix_v, 1)
        cov_matrix_v = t.diag(sigma_v).dot(corr_matrix_v.dot(t.diag(sigma_v)))
        lambda_v = t.nlinalg.matrix_inverse(cov_matrix_v)
        mu_v = pm.Normal('mu_v', mu=0, tau=beta_0 * t.diag(lambda_v), shape=dim,
                         testval=np.random.randn(dim) * std)
        V = pm.MvNormal('V', mu=mu_v, tau=lambda_v, shape=(m, dim),
                        testval=np.random.randn(m, dim) * std)

        # Specify rating likelihood function
        R = pm.Normal('R', mu=t.dot(U, V.T), tau=alpha * np.ones((n, m)),
                      observed=train)

    logging.info('done building the BPMF model')
    return bpmf


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='[%(asctime)s]: %(message)s')

    # Read data and build PMF model.
    train, test = read_jester_data()
    pmf = build_pmf_model(train)

    # Find mode of posterior using optimization.
    with pmf:
        tstart = time.time()
        logging.info('finding PMF MAP using Powell optimization')
        #start = pm.find_MAP(fmin=sp.optimize.fmin_powell)
        start = pm.find_MAP()
        elapsed = time.time() - tstart
        logging.info('found PMF MAP in %d seconds' % int(elapsed))

    # Build the modified BPMF model using same default params as PMF.
    mod_bpmf = build_mod_bpmf_model(train)

    # Use PMF MAP to initialize sampling for modified BPMF.
    for key in mod_bpmf.test_point:
        if key not in start:
            start[key] = mod_bpmf.test_point[key]

    # Attempt to sample with modified BPMF
    # (this part raises PositiveDefiniteError when using the normal BPMF model).
    with mod_bpmf:
        nsamples = 100
        njobs = 2
        logging.info('drawing %d MCMC samples using %d jobs' % (nsamples, njobs))
        step = pm.NUTS(scaling=start)
        trace = pm.sample(nsamples, step, start=start, njobs=njobs)

    with mod_bpmf:
        ppc = pm.sample_posterior_predictive(trace, progressbar=True)

    nR = np.mean(ppc['R'], 0)  # three dims; average over the first (sample) dim

    def getrmse(predictions, targets):
        return np.sqrt(((predictions - targets) ** 2).mean())

    rmses = []
    for i in range(test.shape[0]):
        for j in range(test.shape[1]):
            if math.isnan(test[i][j]) == False:
                rmse = getrmse(test[i][j], nR[i][j])  # per-element error (absolute difference)
                rmses.append(rmse)
    print(np.mean(rmses))  # 4.120942853091463
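
Before trusting the posterior-predictive RMSE above, it is worth checking that the 100 NUTS draws actually mixed. A minimal sketch, assuming the pymc3 3.x diagnostics API (pm.summary; in newer releases the keyword is var_names rather than varnames), which would be run right after the sampling block:

# Hypothetical convergence check (not in the original script): summarize the
# hyperparameter traces only, since summarizing U and V would print one row
# per scalar entry of the factor matrices.
print(pm.summary(trace, varnames=['mu_u', 'sigma_u', 'mu_v', 'sigma_v']))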

2. With the MovieLens-1m dataset, sampling fails to make progress. The cause is unknown; interested readers can investigate. The code is as follows:

# -*- Encoding:UTF-8 -*-
'''
@author: Jason.F
@data: 2019.07.22
@function: Implementing BPMF by MCMC
Dataset: MovieLens Dataset (ml-1m)
Evaluating: hitradio, ndcg
Paper: https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf
@reference: https://gist.github.com/macks22/00a17b1d374dfc267a9a
'''
import sys
import time
import logging
import pymc3 as pm
import numpy as np
import pandas as pd
import theano
import theano.tensor as t
import heapq
import math


def getTraindata():
    data = []
    filePath = '/data/fjsdata/ctKngBase/ml/ml-1m.train.rating'
    u = 0
    i = 0
    maxr = 0.0
    with open(filePath, 'r') as f:
        for line in f:
            if line:
                lines = line[:-1].split("\t")
                user = int(lines[0])
                item = int(lines[1])
                score = float(lines[2])
                data.append((user, item, score))
                if user > u: u = user
                if item > i: i = item
                if score > maxr: maxr = score
    print("Loading Success!\n"
          "Data Info:\n"
          "\tUser Num: {}\n"
          "\tItem Num: {}\n"
          "\tData Size: {}".format(u, i, len(data)))
    R = np.zeros([u+1, i+1], dtype=np.float32)
    for i in data:
        user = i[0]
        item = i[1]
        rating = i[2]
        R[user][item] = rating
    return R


def getTestdata():
    testset = []
    filePath = '/data/fjsdata/ctKngBase/ml/ml-1m.test.negative'
    with open(filePath, 'r') as fd:
        line = fd.readline()
        while line != None and line != '':
            arr = line.split('\t')
            u = eval(arr[0])[0]
            testset.append([u, eval(arr[0])[1]])  # one positive item
            for i in arr[1:]:
                testset.append([u, int(i)])       # 99 negative items
            line = fd.readline()
    return testset


def build_pmf_model(train, alpha=2, dim=8, std=0.01):
    """Construct the Probabilistic Matrix Factorization model using pymc3.
    Note that the `testval` param for U and V initializes the model away from
    0 using a small amount of Gaussian noise.

    :param np.ndarray train: Training data (observed) to learn the model on.
    :param int alpha: Fixed precision to use for the rating likelihood function.
    :param int dim: Dimensionality of the model; rank of low-rank approximation.
    :param float std: Standard deviation for Gaussian noise in model initialization.
    """
    # Mean value imputation on training data.
    train = train.copy()
    nan_mask = np.isnan(train)
    train[nan_mask] = train[~nan_mask].mean()

    # Low precision reflects uncertainty; prevents overfitting.
    # We use point estimates from the data to initialize.
    # Set to mean variance across users and items.
    alpha_u = 1 / train.var(axis=1).mean()
    alpha_v = 1 / train.var(axis=0).mean()

    logging.info('building the PMF model')
    n, m = train.shape
    with pm.Model() as pmf:
        U = pm.MvNormal('U', mu=0, tau=alpha_u * np.eye(dim),
                        shape=(n, dim), testval=np.random.randn(n, dim) * std)
        V = pm.MvNormal('V', mu=0, tau=alpha_v * np.eye(dim),
                        shape=(m, dim), testval=np.random.randn(m, dim) * std)
        R = pm.Normal('R', mu=t.dot(U, V.T), tau=alpha * np.ones(train.shape),
                      observed=train)

    logging.info('done building PMF model')
    return pmf


def build_bpmf_model(train, alpha=2, dim=8, std=0.01):
    """Build the original BPMF model, which we cannot sample from due to
    current limitations in pymc3's implementation of the Wishart distribution.
    """
    n, m = train.shape
    beta_0 = 1  # scaling factor for lambdas; unclear on its use

    # Mean value imputation on training data.
    train = train.copy()
    nan_mask = np.isnan(train)
    train[nan_mask] = train[~nan_mask].mean()

    logging.info('building the BPMF model')
    with pm.Model() as bpmf:
        # Specify user feature matrix
        lambda_u = pm.Wishart('lambda_u', n=dim, V=np.eye(dim), shape=(dim, dim),
                              testval=np.random.randn(dim, dim) * std)
        mu_u = pm.Normal('mu_u', mu=0, tau=beta_0 * lambda_u, shape=dim,
                         testval=np.random.randn(dim) * std)
        U = pm.MvNormal('U', mu=mu_u, tau=lambda_u, shape=(n, dim),
                        testval=np.random.randn(n, dim) * std)

        # Specify item feature matrix
        lambda_v = pm.Wishart('lambda_v', n=dim, V=np.eye(dim), shape=(dim, dim),
                              testval=np.random.randn(dim, dim) * std)
        mu_v = pm.Normal('mu_v', mu=0, tau=beta_0 * lambda_v, shape=dim,
                         testval=np.random.randn(dim) * std)
        V = pm.MvNormal('V', mu=mu_v, tau=lambda_v, shape=(m, dim),
                        testval=np.random.randn(m, dim) * std)

        # Specify rating likelihood function
        R = pm.Normal('R', mu=t.dot(U, V.T), tau=alpha * np.ones((n, m)),
                      observed=train)

    logging.info('done building the BPMF model')
    return bpmf


def build_mod_bpmf_model(train, alpha=2, dim=8, std=0.01):
    """Build the modified BPMF model using pymc3. The original model uses
    Wishart priors on the covariance matrices. Unfortunately, the Wishart
    distribution in pymc3 is currently not suitable for sampling. This
    version decomposes the covariance matrix into:

        diag(sigma) \dot corr_matrix \dot diag(sigma).

    We use uniform priors on the standard deviations (sigma) and LKJCorr
    priors on the correlation matrices (corr_matrix):

        sigma ~ Uniform
        corr_matrix ~ LKJCorr(n=1, p=dim)
    """
    n, m = train.shape
    beta_0 = 1  # scaling factor for lambdas; unclear on its use

    # Mean value imputation on training data.
    train = train.copy()
    nan_mask = np.isnan(train)
    train[nan_mask] = train[~nan_mask].mean()

    # We will use separate priors for sigma and correlation matrix.
    # In order to convert the upper triangular correlation values to a
    # complete correlation matrix, we need to construct an index matrix:
    n_elem = int(dim * (dim - 1) / 2)
    tri_index = np.zeros([dim, dim], dtype=int)
    tri_index[np.triu_indices(dim, k=1)] = np.arange(n_elem)
    tri_index[np.triu_indices(dim, k=1)[::-1]] = np.arange(n_elem)

    logging.info('building the BPMF model')
    with pm.Model() as bpmf:
        # Specify user feature matrix
        sigma_u = pm.Uniform('sigma_u', shape=dim)
        corr_triangle_u = pm.LKJCorr('corr_u', n=1, p=dim,
                                     testval=np.random.randn(n_elem) * std)
        corr_matrix_u = corr_triangle_u[tri_index]
        corr_matrix_u = t.fill_diagonal(corr_matrix_u, 1)
        cov_matrix_u = t.diag(sigma_u).dot(corr_matrix_u.dot(t.diag(sigma_u)))
        lambda_u = t.nlinalg.matrix_inverse(cov_matrix_u)
        mu_u = pm.Normal('mu_u', mu=0, tau=beta_0 * t.diag(lambda_u), shape=dim,
                         testval=np.random.randn(dim) * std)
        U = pm.MvNormal('U', mu=mu_u, tau=lambda_u, shape=(n, dim),
                        testval=np.random.randn(n, dim) * std)

        # Specify item feature matrix
        sigma_v = pm.Uniform('sigma_v', shape=dim)
        corr_triangle_v = pm.LKJCorr('corr_v', n=1, p=dim,
                                     testval=np.random.randn(n_elem) * std)
        corr_matrix_v = corr_triangle_v[tri_index]
        corr_matrix_v = t.fill_diagonal(corr_matrix_v, 1)
        cov_matrix_v = t.diag(sigma_v).dot(corr_matrix_v.dot(t.diag(sigma_v)))
        lambda_v = t.nlinalg.matrix_inverse(cov_matrix_v)
        mu_v = pm.Normal('mu_v', mu=0, tau=beta_0 * t.diag(lambda_v), shape=dim,
                         testval=np.random.randn(dim) * std)
        V = pm.MvNormal('V', mu=mu_v, tau=lambda_v, shape=(m, dim),
                        testval=np.random.randn(m, dim) * std)

        # Specify rating likelihood function
        R = pm.Normal('R', mu=t.dot(U, V.T), tau=alpha * np.ones((n, m)),
                      observed=train)

    logging.info('done building the BPMF model')
    return bpmf


def getHitRatio(ranklist, targetItem):
    for item in ranklist:
        if item == targetItem:
            return 1
    return 0


def getNDCG(ranklist, targetItem):
    for i in range(len(ranklist)):
        item = ranklist[i]
        if item == targetItem:
            return math.log(2) / math.log(i+2)
    return 0


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format='[%(asctime)s]: %(message)s')

    # Read data and build the modified BPMF model.
    train = getTraindata()
    bpmf = build_mod_bpmf_model(train, dim=8)  # dim is the number of latent factors

    with bpmf:
        # sample with BPMF
        tstart = time.time()
        logging.info('Starting BPMF training')
        #start = pm.find_MAP()
        step = pm.NUTS()
        #trace = pm.sample(1000, step, start=start)
        trace = pm.sample(100, step)
        elapsed = time.time() - tstart
        logging.info('Completed BPMF in %d seconds' % int(elapsed))

    with bpmf:
        # evaluation
        testset = getTestdata()
        ppc = pm.sample_posterior_predictive(trace, progressbar=True)
        nR = np.mean(ppc['R'], 0)  # three dims; average over the first (sample) dim for the posterior
        hits = []
        ndcgs = []
        prev_u = testset[0][0]
        pos_i = testset[0][1]
        scorelist = []
        for u, i in testset:
            if prev_u == u:
                scorelist.append([i, nR[u, i]])
            else:
                map_item_score = {}
                for item, rate in scorelist:  # turn into dict
                    map_item_score[item] = rate
                ranklist = heapq.nlargest(10, map_item_score, key=map_item_score.get)  # default TopN=10
                hr = getHitRatio(ranklist, pos_i)
                hits.append(hr)
                ndcg = getNDCG(ranklist, pos_i)
                ndcgs.append(ndcg)
                # next user
                scorelist = []
                prev_u = u
                pos_i = i
                scorelist.append([i, nR[u, i]])
        hitratio, ndcg = np.array(hits).mean(), np.array(ndcgs).mean()
        print("hr: {}, NDCG: {}, At K {}".format(hitratio, ndcg, 8))

Training keeps getting stuck at:

Loading Success!
Data Info:
	User Num: 6039
	Item Num: 3705
	Data Size: 994169
[2019-07-23 07:26:00,509]: building the BPMF model
[2019-07-23 07:26:21,704]: done building the BPMF model
[2019-07-23 07:26:21,709]: finding PMF MAP using Powell optimization
Only 100 samples in chain.
[2019-07-23 07:26:40,130]: Only 100 samples in chain.
Multiprocess sampling (4 chains in 4 jobs)
[2019-07-23 07:26:40,147]: Multiprocess sampling (4 chains in 4 jobs)
NUTS: [V, mu_v, corr_v, sigma_v, U, mu_u, corr_u, sigma_u]
[2019-07-23 07:26:40,153]: NUTS: [V, mu_v, corr_v, sigma_v, U, mu_u, corr_u, sigma_u]
Sampling 4 chains:   0%|          | 12/2400 [01:47<10:26:46, 15.75s/draws]
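
My guess as to why it stalls (not verified): after mean imputation the ml-1m rating matrix is treated as a dense 6040 x 3706 array, so every NUTS gradient evaluation touches roughly 22 million likelihood terms plus both factor matrices, and the log above already shows about 16 s per draw. One possible workaround, sketched below under the assumption that pymc3's ADVI interface (pm.fit / approx.sample) is acceptable here, is to replace the pm.sample call in the script above with variational inference. I have not tested whether this yields usable HR/NDCG numbers:

# Hypothetical replacement for the `trace = pm.sample(100, step)` line above (untested).
with bpmf:
    approx = pm.fit(n=10000, method='advi')  # mean-field ADVI instead of NUTS
    trace = approx.sample(100)               # draws compatible with sample_posterior_predictive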

BPMF solves the MF probabilistic model with Bayesian MCMC inference, which is the same idea as the BMF model in my next post.
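
For completeness, the MCMC scheme in the original paper is a block Gibbs sampler that exploits conjugacy: given the hyperparameters and the item factors, each user factor has a closed-form Gaussian conditional (and symmetrically for the item factors), where I_{ij} = 1 when rating R_{ij} is observed (notation follows the paper):

U_i \mid R, V, \mu_U, \Lambda_U \sim \mathcal{N}\!\big(\mu_i^{*}, [\Lambda_i^{*}]^{-1}\big)
\Lambda_i^{*} = \Lambda_U + \alpha \sum_{j} I_{ij} V_j V_j^{\top}
\mu_i^{*} = [\Lambda_i^{*}]^{-1} \Big( \Lambda_U \mu_U + \alpha \sum_{j} I_{ij} R_{ij} V_j \Big)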
