• MLPClassifier(): four ways to change model complexity
  1. Adjust the number of nodes in each hidden layer of the network
  2. Adjust the number of hidden layers in the network
  3. Change the activation function
  4. Adjust alpha to change the strength of regularization (a larger alpha means a stronger L2 penalty and lower model complexity, so the model becomes simpler); see the sketch after this list
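A minimal sketch of the four knobs in action. The toy dataset (make_moons) and the specific values are illustrative assumptions, not from the original post:

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A small toy dataset; any classification data works here.
X, y = make_moons(n_samples=200, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

models = {
    # baseline: one hidden layer with the default 100 nodes
    'default':     MLPClassifier(max_iter=1000, random_state=0),
    # 1. fewer nodes per hidden layer -> simpler model
    'fewer nodes': MLPClassifier(hidden_layer_sizes=(10,),
                                 max_iter=1000, random_state=0),
    # 2. more hidden layers -> more complex model
    'two layers':  MLPClassifier(hidden_layer_sizes=(10, 10),
                                 max_iter=1000, random_state=0),
    # 3. different activation for the hidden layers
    'tanh':        MLPClassifier(hidden_layer_sizes=(10, 10), activation='tanh',
                                 max_iter=1000, random_state=0),
    # 4. larger alpha -> stronger L2 penalty -> simpler model
    'alpha=1.0':   MLPClassifier(hidden_layer_sizes=(10, 10), alpha=1.0,
                                 max_iter=1000, random_state=0),
}

for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(f'{name:12s} train={clf.score(X_train, y_train):.2f} '
          f'test={clf.score(X_test, y_test):.2f}')

Comparing the train and test scores across these variants shows how each knob moves the model along the underfitting/overfitting axis.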

Official doc:

Init signature:
MLPClassifier(hidden_layer_sizes=(100,), activation='relu', solver='adam',
              alpha=0.0001, batch_size='auto', learning_rate='constant',
              learning_rate_init=0.001, power_t=0.5, max_iter=200,
              shuffle=True, random_state=None, tol=0.0001, verbose=False,
              warm_start=False, momentum=0.9, nesterovs_momentum=True,
              early_stopping=False, validation_fraction=0.1, beta_1=0.9,
              beta_2=0.999, epsilon=1e-08, n_iter_no_change=10)
Docstring:
Multi-layer Perceptron classifier.

This model optimizes the log-loss function using LBFGS or stochastic
gradient descent.

.. versionadded:: 0.18

Parameters
----------
hidden_layer_sizes : tuple, length = n_layers - 2, default (100,)
    The ith element represents the number of neurons in the ith
    hidden layer.

activation : {'identity', 'logistic', 'tanh', 'relu'}, default 'relu'
    Activation function for the hidden layer.
    - 'identity', no-op activation, useful to implement linear bottleneck,
      returns f(x) = x
    - 'logistic', the logistic sigmoid function,
      returns f(x) = 1 / (1 + exp(-x)).
    - 'tanh', the hyperbolic tan function,
      returns f(x) = tanh(x).
    - 'relu', the rectified linear unit function,
      returns f(x) = max(0, x)

solver : {'lbfgs', 'sgd', 'adam'}, default 'adam'
    The solver for weight optimization.
    - 'lbfgs' is an optimizer in the family of quasi-Newton methods.
    - 'sgd' refers to stochastic gradient descent.
    - 'adam' refers to a stochastic gradient-based optimizer proposed
      by Kingma, Diederik, and Jimmy Ba
    Note: The default solver 'adam' works pretty well on relatively
    large datasets (with thousands of training samples or more) in terms of
    both training time and validation score.
    For small datasets, however, 'lbfgs' can converge faster and perform
    better.

alpha : float, optional, default 0.0001
    L2 penalty (regularization term) parameter.

batch_size : int, optional, default 'auto'
    Size of minibatches for stochastic optimizers.
    If the solver is 'lbfgs', the classifier will not use minibatch.
    When set to "auto", `batch_size=min(200, n_samples)`

learning_rate : {'constant', 'invscaling', 'adaptive'}, default 'constant'
    Learning rate schedule for weight updates.
    - 'constant' is a constant learning rate given by
      'learning_rate_init'.
    - 'invscaling' gradually decreases the learning rate at each
      time step 't' using an inverse scaling exponent of 'power_t'.
      effective_learning_rate = learning_rate_init / pow(t, power_t)
    - 'adaptive' keeps the learning rate constant to
      'learning_rate_init' as long as training loss keeps decreasing.
      Each time two consecutive epochs fail to decrease training loss by at
      least tol, or fail to increase validation score by at least tol if
      'early_stopping' is on, the current learning rate is divided by 5.
    Only used when ``solver='sgd'``.

learning_rate_init : double, optional, default 0.001
    The initial learning rate used. It controls the step-size
    in updating the weights. Only used when solver='sgd' or 'adam'.

power_t : double, optional, default 0.5
    The exponent for inverse scaling learning rate.
    It is used in updating effective learning rate when the learning_rate
    is set to 'invscaling'. Only used when solver='sgd'.

max_iter : int, optional, default 200
    Maximum number of iterations. The solver iterates until convergence
    (determined by 'tol') or this number of iterations. For stochastic
    solvers ('sgd', 'adam'), note that this determines the number of epochs
    (how many times each data point will be used), not the number of
    gradient steps.

shuffle : bool, optional, default True
    Whether to shuffle samples in each iteration. Only used when
    solver='sgd' or 'adam'.

random_state : int, RandomState instance or None, optional, default None
    If int, random_state is the seed used by the random number generator;
    If RandomState instance, random_state is the random number generator;
    If None, the random number generator is the RandomState instance used
    by `np.random`.

tol : float, optional, default 1e-4
    Tolerance for the optimization. When the loss or score is not improving
    by at least ``tol`` for ``n_iter_no_change`` consecutive iterations,
    unless ``learning_rate`` is set to 'adaptive', convergence is
    considered to be reached and training stops.

verbose : bool, optional, default False
    Whether to print progress messages to stdout.

warm_start : bool, optional, default False
    When set to True, reuse the solution of the previous
    call to fit as initialization, otherwise, just erase the
    previous solution. See :term:`the Glossary <warm_start>`.

momentum : float, default 0.9
    Momentum for gradient descent update. Should be between 0 and 1. Only
    used when solver='sgd'.

nesterovs_momentum : boolean, default True
    Whether to use Nesterov's momentum. Only used when solver='sgd' and
    momentum > 0.

early_stopping : bool, default False
    Whether to use early stopping to terminate training when validation
    score is not improving. If set to true, it will automatically set
    aside 10% of training data as validation and terminate training when
    validation score is not improving by at least tol for
    ``n_iter_no_change`` consecutive epochs. The split is stratified,
    except in a multilabel setting.
    Only effective when solver='sgd' or 'adam'

validation_fraction : float, optional, default 0.1
    The proportion of training data to set aside as validation set for
    early stopping. Must be between 0 and 1.
    Only used if early_stopping is True

beta_1 : float, optional, default 0.9
    Exponential decay rate for estimates of first moment vector in adam,
    should be in [0, 1). Only used when solver='adam'

beta_2 : float, optional, default 0.999
    Exponential decay rate for estimates of second moment vector in adam,
    should be in [0, 1). Only used when solver='adam'

epsilon : float, optional, default 1e-8
    Value for numerical stability in adam. Only used when solver='adam'

n_iter_no_change : int, optional, default 10
    Maximum number of epochs to not meet ``tol`` improvement.
    Only effective when solver='sgd' or 'adam'

    .. versionadded:: 0.20

Attributes
----------
classes_ : array or list of array of shape (n_classes,)
    Class labels for each output.

loss_ : float
    The current loss computed with the loss function.

coefs_ : list, length n_layers - 1
    The ith element in the list represents the weight matrix corresponding
    to layer i.

intercepts_ : list, length n_layers - 1
    The ith element in the list represents the bias vector corresponding to
    layer i + 1.

n_iter_ : int,
    The number of iterations the solver has ran.

n_layers_ : int
    Number of layers.

n_outputs_ : int
    Number of outputs.

out_activation_ : string
    Name of the output activation function.

Notes
-----
MLPClassifier trains iteratively since at each time step
the partial derivatives of the loss function with respect to the model
parameters are computed to update the parameters.

It can also have a regularization term added to the loss function
that shrinks model parameters to prevent overfitting.

This implementation works with data represented as dense numpy arrays or
sparse scipy arrays of floating point values.

References
----------
Hinton, Geoffrey E. "Connectionist learning procedures."
    Artificial intelligence 40.1 (1989): 185-234.

Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of
    training deep feedforward neural networks." International Conference
    on Artificial Intelligence and Statistics. 2010.

He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level
    performance on imagenet classification." arXiv preprint
    arXiv:1502.01852 (2015).

Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic
    optimization." arXiv preprint arXiv:1412.6980 (2014).
File:           c:\users\huawei\appdata\local\programs\python\python36\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py
Type:           ABCMeta
Subclasses:
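
A minimal usage sketch tying the docstring together. The iris data and the specific parameter values are illustrative assumptions, not from the original post; per the note under solver, 'lbfgs' often converges faster and performs better on small datasets like this one:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small dataset: prefer 'lbfgs' over the default 'adam', per the docstring.
clf = MLPClassifier(solver='lbfgs', hidden_layer_sizes=(50,),
                    alpha=1e-4, max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))

# Inspect the fitted attributes described above.
print('layers:', clf.n_layers_)                       # input + hidden + output
print('weight shapes:', [w.shape for w in clf.coefs_])
print('output activation:', clf.out_activation_)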
