1. What is Linear Regression

Linear regression is another classical supervised machine learning algorithm. In this setting, each example is associated with a real-valued label (rather than the 0/1 labels of binary classification), and we want to predict the labels as accurately as possible from the features describing each example. MLlib supports plain linear regression as well as its L2-regularized (ridge) and L1-regularized (lasso) variants. These regression algorithms in MLlib are trained with plain gradient descent (described below) and accept the same parameters as the binary classification algorithms described above.

Available linear regression algorithms:

LinearRegressionWithSGD

RidgeRegressionWithSGD

LassoWithSGD

Notes:

(1) Because this is linear regression, the learned function is linear, i.e. a straight line;

(2) Because it is univariate, there is only one input variable x.

This gives the univariate linear regression model: h(x) = θ0 + θ1·x.

We usually call x the feature and h(x) the hypothesis.
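The next section measures how well a hypothesis fits the data via a cost function, which the text never writes down. For the univariate model, the standard squared-error cost (assumed here, as it is the quantity gradient descent minimizes below) is:

```latex
J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h\!\left(x^{(i)}\right) - y^{(i)} \right)^2
```

where m is the number of training examples and (x^{(i)}, y^{(i)}) is the i-th example.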

2. Gradient Descent

This raises a question: given a candidate function, the cost function tells us how well it fits the data, but there are infinitely many candidate functions, so we cannot try them one by one.

This is where gradient descent comes in: it can find a minimum of the cost function.

The intuition behind gradient descent: think of the function as a mountain. Standing somewhere on a slope, we look around and take a small step in whichever direction descends the fastest.

Gradient descent is of course only one way to solve the problem; another is the Normal Equation.
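For reference, the Normal Equation mentioned above solves the same least-squares problem in closed form. In the standard design-matrix formulation (assumed here, as the original text does not spell it out):

```latex
\theta = \left( X^{T} X \right)^{-1} X^{T} y
```

where X is the matrix of training features (with a leading column of ones for the intercept) and y is the vector of labels. It needs no learning rate and no iteration, but inverting X^T X becomes expensive as the number of features grows, which is one reason iterative methods like gradient descent are used at scale.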

Procedure:

(1) First choose the size of each downhill step, called the learning rate;

(2) Pick an arbitrary initial value for the parameters;

(3) Determine a downhill direction, take a step of the chosen size, and update the parameters;

(4) When the decrease per step falls below some predefined threshold, stop.

Algorithm: repeat until convergence, simultaneously updating every parameter θj := θj − α · ∂J(θ)/∂θj, where α is the learning rate.

Properties:

(1) Different starting points can converge to different minima, so gradient descent in general finds only a local minimum;

(2) The closer we get to a minimum, the slower the descent.
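The procedure above can be sketched in a few lines of plain Scala for the univariate model h(x) = θ0 + θ1·x. This is a minimal illustration, not MLlib code; the function name and the sample data are made up:

```scala
// Batch gradient descent for univariate linear regression (hypothetical helper).
// Model: h(x) = theta0 + theta1 * x
// Cost:  J = (1/2m) * sum((h(x_i) - y_i)^2)
def gradientDescent(xs: Array[Double], ys: Array[Double],
                    alpha: Double, iterations: Int): (Double, Double) = {
  val m = xs.length
  var theta0 = 0.0 // arbitrary initial values (step 2)
  var theta1 = 0.0
  for (_ <- 1 to iterations) {
    // errors h(x_i) - y_i under the current parameters
    val errs = xs.zip(ys).map { case (x, y) => theta0 + theta1 * x - y }
    // partial derivatives of J with respect to theta0 and theta1
    val grad0 = errs.sum / m
    val grad1 = xs.zip(errs).map { case (x, e) => x * e }.sum / m
    // simultaneous update; alpha is the learning rate (steps 1 and 3)
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1
  }
  (theta0, theta1)
}
```

On toy data drawn from y = 2x, with α = 0.05 and a few thousand iterations, the parameters converge close to θ0 ≈ 0, θ1 ≈ 2, illustrating property (2): progress per step shrinks as the gradient approaches zero.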

3. Linear Regression Code

The following example shows how to load training data and parse it into an RDD (Resilient Distributed Dataset) of LabeledPoint objects. It then uses LinearRegressionWithSGD to build a simple linear model that predicts label values. Finally, we compute the Mean Squared Error on the training set to evaluate the goodness of fit.
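The original code listing was not preserved in this page. The following is a sketch reconstructed from the Spark 0.9-era MLlib programming guide; the master URL, Spark home directory, jar name, and HDFS input path are taken from the log output below and would need adjusting for a different cluster:

```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}

object SimpleApp {
  def main(args: Array[String]) {
    val sc = new SparkContext("spark://192.168.159.129:7077", "SimpleApp",
      "/root/spark", List("target/scala-2.10/scala_2.10-0.1-SNAPSHOT.jar"))

    // Load and parse the data: each line is "label,feature1 feature2 ..."
    val data = sc.textFile("hdfs://master:9000/mllib/lpsa.data")
    val parsedData = data.map { line =>
      val parts = line.split(',')
      LabeledPoint(parts(0).toDouble, parts(1).split(' ').map(_.toDouble))
    }

    // Build the model with 100 iterations of stochastic gradient descent
    val numIterations = 100
    val model = LinearRegressionWithSGD.train(parsedData, numIterations)

    // Evaluate the model on the training data via Mean Squared Error
    val valuesAndPreds = parsedData.map { point =>
      val prediction = model.predict(point.features)
      (point.label, prediction)
    }
    val MSE = valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }
                            .reduce(_ + _) / valuesAndPreds.count
    println("training Mean Squared Error = " + MSE)
  }
}
```

Note that in Spark 0.9, LabeledPoint features were a plain Array[Double]; the Vector-based API arrived in later releases.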

The output of the run is as follows:

[root@master scala]# sbt/sbt package run

[info] Set current project to scala (in build file:/root/sample/scala/)

[success] Total time: 2 s, completed Feb 17, 2014 9:53:53 PM

[info] Running SimpleApp

log4j:WARN No appenders could be found for logger (akka.event.slf4j.Slf4jLogger).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

14/02/17 21:53:55 INFO SparkEnv: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties

14/02/17 21:53:55 INFO SparkEnv: Registering BlockManagerMaster

14/02/17 21:53:55 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140217215355-b441

14/02/17 21:53:55 INFO MemoryStore: MemoryStore started with capacity 580.0 MB.

14/02/17 21:53:55 INFO ConnectionManager: Bound socket to port 45162 with id = ConnectionManagerId(master,45162)

14/02/17 21:53:55 INFO BlockManagerMaster: Trying to register BlockManager

14/02/17 21:53:55 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager master:45162 with 580.0 MB RAM

14/02/17 21:53:55 INFO BlockManagerMaster: Registered BlockManager

14/02/17 21:53:55 INFO HttpServer: Starting HTTP Server

14/02/17 21:53:56 INFO HttpBroadcast: Broadcast server started at http://192.168.159.129:54817

14/02/17 21:53:56 INFO SparkEnv: Registering MapOutputTracker

14/02/17 21:53:56 INFO HttpFileServer: HTTP File server directory is /tmp/spark-b1b6ca47-4f04-4a60-8cb5-4b7151c6e9a2

14/02/17 21:53:56 INFO HttpServer: Starting HTTP Server

14/02/17 21:53:56 INFO SparkUI: Started Spark Web UI at http://master:4040

14/02/17 21:53:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

14/02/17 21:53:57 INFO SparkContext: Added JAR target/scala-2.10/scala_2.10-0.1-SNAPSHOT.jar at http://192.168.159.129:47898/jars/scala_2.10-0.1-SNAPSHOT.jar with timestamp 1392645237384

14/02/17 21:53:57 INFO AppClient$ClientActor: Connecting to master spark://192.168.159.129:7077...

14/02/17 21:53:58 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes

14/02/17 21:53:58 INFO MemoryStore: ensureFreeSpace(132636) called with curMem=0, maxMem=608187187

14/02/17 21:53:58 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 129.5 KB, free 579.9 MB)

14/02/17 21:53:59 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20140217215359-0003

14/02/17 21:53:59 INFO AppClient$ClientActor: Executor added: app-20140217215359-0003/0 on worker-20140217214342-master-54909 (master:54909) with 1 cores

14/02/17 21:53:59 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140217215359-0003/0 on hostPort master:54909 with 1 cores, 512.0 MB RAM

14/02/17 21:53:59 INFO AppClient$ClientActor: Executor added: app-20140217215359-0003/1 on worker-20140217214339-slaver02-52414 (slaver02:52414) with 1 cores

14/02/17 21:53:59 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140217215359-0003/1 on hostPort slaver02:52414 with 1 cores, 512.0 MB RAM

14/02/17 21:53:59 INFO AppClient$ClientActor: Executor added: app-20140217215359-0003/2 on worker-20140217214341-slaver01-34119 (slaver01:34119) with 1 cores

14/02/17 21:53:59 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140217215359-0003/2 on hostPort slaver01:34119 with 1 cores, 512.0 MB RAM

14/02/17 21:53:59 INFO AppClient$ClientActor: Executor updated: app-20140217215359-0003/1 is now RUNNING

14/02/17 21:53:59 INFO AppClient$ClientActor: Executor updated: app-20140217215359-0003/2 is now RUNNING

14/02/17 21:53:59 INFO AppClient$ClientActor: Executor updated: app-20140217215359-0003/0 is now RUNNING

14/02/17 21:54:02 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@slaver02:42172/user/Executor#1991133218] with ID 1

14/02/17 21:54:03 INFO FileInputFormat: Total input paths to process : 1

14/02/17 21:54:03 INFO SparkContext: Starting job: first at GeneralizedLinearAlgorithm.scala:121

14/02/17 21:54:03 INFO DAGScheduler: Got job 0 (first at GeneralizedLinearAlgorithm.scala:121) with 1 output partitions (allowLocal=true)

14/02/17 21:54:03 INFO DAGScheduler: Final stage: Stage 0 (first at GeneralizedLinearAlgorithm.scala:121)

14/02/17 21:54:03 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:03 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:03 INFO DAGScheduler: Computing the requested partition locally

14/02/17 21:54:03 INFO HadoopRDD: Input split: hdfs://master:9000/mllib/lpsa.data:0+5197

14/02/17 21:54:03 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager slaver02:33696 with 297.0 MB RAM

14/02/17 21:54:04 INFO SparkContext: Job finished: first at GeneralizedLinearAlgorithm.scala:121, took 0.703681387 s

14/02/17 21:54:04 INFO SparkContext: Starting job: count at GradientDescent.scala:137

14/02/17 21:54:04 INFO DAGScheduler: Got job 1 (count at GradientDescent.scala:137) with 2 output partitions (allowLocal=false)

14/02/17 21:54:04 INFO DAGScheduler: Final stage: Stage 1 (count at GradientDescent.scala:137)

14/02/17 21:54:04 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:04 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:04 INFO DAGScheduler: Submitting Stage 1 (MappedRDD[3] at map at GeneralizedLinearAlgorithm.scala:139), which has no missing parents

14/02/17 21:54:04 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (MappedRDD[3] at map at GeneralizedLinearAlgorithm.scala:139)

14/02/17 21:54:04 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks

14/02/17 21:54:04 INFO TaskSetManager: Starting task 1.0:0 as TID 0 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:04 INFO TaskSetManager: Serialized task 1.0:0 as 1749 bytes in 24 ms

14/02/17 21:54:32 INFO DAGScheduler: Got job 14 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:32 INFO DAGScheduler: Final stage: Stage 14 (reduce at GradientDescent.scala:150)

14/02/17 21:54:32 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 14 (MappedRDD[29] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 14 (MappedRDD[29] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 14.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 14.0:0 as TID 26 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 14.0:0 as 2419 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 14.0:1 as TID 27 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 14.0:1 as 2419 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 26 in 35 ms on slaver01 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(14, 0)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 27 in 85 ms on slaver02 (progress: 1/2)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 14.0 from pool

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(14, 1)

14/02/17 21:54:33 INFO DAGScheduler: Stage 14 (reduce at GradientDescent.scala:150) finished in 0.082 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.098827167 s

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at GradientDescent.scala:150

14/02/17 21:54:33 INFO DAGScheduler: Got job 15 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 15 (reduce at GradientDescent.scala:150)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 15 (MappedRDD[31] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 15 (MappedRDD[31] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 15.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 15.0:0 as TID 28 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 15.0:0 as 2421 bytes in 1 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 15.0:1 as TID 29 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 15.0:1 as 2421 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 28 in 68 ms on slaver01 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(15, 0)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 29 in 80 ms on slaver02 (progress: 1/2)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 15.0 from pool

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(15, 1)

14/02/17 21:54:33 INFO DAGScheduler: Stage 15 (reduce at GradientDescent.scala:150) finished in 0.082 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.317842176 s

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at GradientDescent.scala:150

14/02/17 21:54:33 INFO DAGScheduler: Got job 16 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 16 (reduce at GradientDescent.scala:150)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 16 (MappedRDD[33] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 16 (MappedRDD[33] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 16.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 16.0:0 as TID 30 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 16.0:0 as 2420 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 16.0:1 as TID 31 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 16.0:1 as 2420 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 31 in 52 ms on slaver02 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(16, 1)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 30 in 60 ms on slaver01 (progress: 1/2)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 16.0 from pool

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(16, 0)

14/02/17 21:54:33 INFO DAGScheduler: Stage 16 (reduce at GradientDescent.scala:150) finished in 0.050 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.071822529 s

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at GradientDescent.scala:150

14/02/17 21:54:33 INFO DAGScheduler: Got job 17 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 17 (reduce at GradientDescent.scala:150)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 17 (MappedRDD[35] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 17 (MappedRDD[35] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 17.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 17.0:0 as TID 32 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 17.0:0 as 2417 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 17.0:1 as TID 33 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 17.0:1 as 2417 bytes in 1 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 33 in 45 ms on slaver02 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(17, 1)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 32 in 58 ms on slaver01 (progress: 1/2)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 17.0 from pool

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(17, 0)

14/02/17 21:54:33 INFO DAGScheduler: Stage 17 (reduce at GradientDescent.scala:150) finished in 0.055 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.067749084 s

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at GradientDescent.scala:150

14/02/17 21:54:33 INFO DAGScheduler: Got job 18 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 18 (reduce at GradientDescent.scala:150)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 18 (MappedRDD[37] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 18 (MappedRDD[37] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 18.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 18.0:0 as TID 34 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 18.0:0 as 2419 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 18.0:1 as TID 35 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 18.0:1 as 2419 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 35 in 40 ms on slaver02 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(18, 1)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 34 in 97 ms on slaver01 (progress: 1/2)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 18.0 from pool

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(18, 0)

14/02/17 21:54:33 INFO DAGScheduler: Stage 18 (reduce at GradientDescent.scala:150) finished in 0.092 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.105965063 s

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at GradientDescent.scala:150

14/02/17 21:54:33 INFO DAGScheduler: Got job 19 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 19 (reduce at GradientDescent.scala:150)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 19 (MappedRDD[39] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 19 (MappedRDD[39] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 19.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 19.0:0 as TID 36 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 19.0:0 as 2418 bytes in 1 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 19.0:1 as TID 37 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 19.0:1 as 2418 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 36 in 39 ms on slaver01 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(19, 0)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 37 in 52 ms on slaver02 (progress: 1/2)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 19.0 from pool

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(19, 1)

14/02/17 21:54:33 INFO DAGScheduler: Stage 19 (reduce at GradientDescent.scala:150) finished in 0.042 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.060941515 s

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at GradientDescent.scala:150

14/02/17 21:54:33 INFO DAGScheduler: Got job 20 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 20 (reduce at GradientDescent.scala:150)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 20 (MappedRDD[41] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 20 (MappedRDD[41] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 20.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 20.0:0 as TID 38 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 20.0:0 as 2418 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 20.0:1 as TID 39 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 20.0:1 as 2418 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 38 in 33 ms on slaver01 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(20, 0)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 39 in 71 ms on slaver02 (progress: 1/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(20, 1)

14/02/17 21:54:33 INFO DAGScheduler: Stage 20 (reduce at GradientDescent.scala:150) finished in 0.064 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.080835519 s

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 20.0 from pool

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at GradientDescent.scala:150

14/02/17 21:54:33 INFO DAGScheduler: Got job 21 (reduce at GradientDescent.scala:150) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 21 (reduce at GradientDescent.scala:150)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 21 (MappedRDD[43] at map at GradientDescent.scala:145), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 21 (MappedRDD[43] at map at GradientDescent.scala:145)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 21.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 21.0:0 as TID 40 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 21.0:0 as 2422 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 21.0:1 as TID 41 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 21.0:1 as 2422 bytes in 0 ms

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 40 in 40 ms on slaver01 (progress: 0/2)

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(21, 0)

14/02/17 21:54:33 INFO TaskSetManager: Finished TID 41 in 45 ms on slaver02 (progress: 1/2)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Remove TaskSet 21.0 from pool

14/02/17 21:54:33 INFO DAGScheduler: Completed ResultTask(21, 1)

14/02/17 21:54:33 INFO DAGScheduler: Stage 21 (reduce at GradientDescent.scala:150) finished in 0.041 s

14/02/17 21:54:33 INFO SparkContext: Job finished: reduce at GradientDescent.scala:150, took 0.051875321 s

14/02/17 21:54:33 INFO GradientDescent: GradientDescent finished. Last 10 stochastic losses 0.22493248687186032, 0.2241836511724591, 0.22358630434392676, 0.22309440787976811, 0.22268441631265215, 0.22233909585390685, 0.22204555434717815, 0.2217939816090842, 0.22157679929323662, 0.22138807401560764

14/02/17 21:54:33 INFO LinearRegressionWithSGD: Final model weights 0.6226986501625317,0.26562471165823115,-0.13304380020663167,0.21671917665388107,0.3037175607477254,-0.2007533914066441,0.013953499241049204,0.20270603251011174

14/02/17 21:54:33 INFO LinearRegressionWithSGD: Final model intercept 2.46997566387807

14/02/17 21:54:33 INFO SparkContext: Starting job: reduce at SimpleApp.scala:24

14/02/17 21:54:33 INFO DAGScheduler: Got job 22 (reduce at SimpleApp.scala:24) with 2 output partitions (allowLocal=false)

14/02/17 21:54:33 INFO DAGScheduler: Final stage: Stage 22 (reduce at SimpleApp.scala:24)

14/02/17 21:54:33 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:33 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:33 INFO DAGScheduler: Submitting Stage 22 (MappedRDD[45] at map at SimpleApp.scala:24), which has no missing parents

14/02/17 21:54:33 INFO DAGScheduler: Submitting 2 missing tasks from Stage 22 (MappedRDD[45] at map at SimpleApp.scala:24)

14/02/17 21:54:33 INFO TaskSchedulerImpl: Adding task set 22.0 with 2 tasks

14/02/17 21:54:33 INFO TaskSetManager: Starting task 22.0:0 as TID 42 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 22.0:0 as 2122 bytes in 1 ms

14/02/17 21:54:33 INFO TaskSetManager: Starting task 22.0:1 as TID 43 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:33 INFO TaskSetManager: Serialized task 22.0:1 as 2122 bytes in 0 ms

14/02/17 21:54:34 INFO TaskSetManager: Finished TID 42 in 58 ms on slaver01 (progress: 0/2)

14/02/17 21:54:34 INFO DAGScheduler: Completed ResultTask(22, 0)

14/02/17 21:54:34 INFO TaskSetManager: Finished TID 43 in 60 ms on slaver02 (progress: 1/2)

14/02/17 21:54:34 INFO TaskSchedulerImpl: Remove TaskSet 22.0 from pool

14/02/17 21:54:34 INFO DAGScheduler: Completed ResultTask(22, 1)

14/02/17 21:54:34 INFO DAGScheduler: Stage 22 (reduce at SimpleApp.scala:24) finished in 0.057 s

14/02/17 21:54:34 INFO SparkContext: Job finished: reduce at SimpleApp.scala:24, took 0.081040964 s

14/02/17 21:54:34 INFO SparkContext: Starting job: count at SimpleApp.scala:24

14/02/17 21:54:34 INFO DAGScheduler: Got job 23 (count at SimpleApp.scala:24) with 2 output partitions (allowLocal=false)

14/02/17 21:54:34 INFO DAGScheduler: Final stage: Stage 23 (count at SimpleApp.scala:24)

14/02/17 21:54:34 INFO DAGScheduler: Parents of final stage: List()

14/02/17 21:54:34 INFO DAGScheduler: Missing parents: List()

14/02/17 21:54:34 INFO DAGScheduler: Submitting Stage 23 (MappedRDD[44] at map at SimpleApp.scala:20), which has no missing parents

14/02/17 21:54:34 INFO DAGScheduler: Submitting 2 missing tasks from Stage 23 (MappedRDD[44] at map at SimpleApp.scala:20)

14/02/17 21:54:34 INFO TaskSchedulerImpl: Adding task set 23.0 with 2 tasks

14/02/17 21:54:34 INFO TaskSetManager: Starting task 23.0:0 as TID 44 on executor 2: slaver01 (NODE_LOCAL)

14/02/17 21:54:34 INFO TaskSetManager: Serialized task 23.0:0 as 2011 bytes in 0 ms

14/02/17 21:54:34 INFO TaskSetManager: Starting task 23.0:1 as TID 45 on executor 1: slaver02 (NODE_LOCAL)

14/02/17 21:54:34 INFO TaskSetManager: Serialized task 23.0:1 as 2011 bytes in 0 ms

14/02/17 21:54:34 INFO TaskSetManager: Finished TID 45 in 44 ms on slaver02 (progress: 0/2)

14/02/17 21:54:34 INFO DAGScheduler: Completed ResultTask(23, 1)

14/02/17 21:54:34 INFO TaskSetManager: Finished TID 44 in 51 ms on slaver01 (progress: 1/2)

14/02/17 21:54:34 INFO TaskSchedulerImpl: Remove TaskSet 23.0 from pool

14/02/17 21:54:34 INFO DAGScheduler: Completed ResultTask(23, 0)

14/02/17 21:54:34 INFO DAGScheduler: Stage 23 (count at SimpleApp.scala:24) finished in 0.025 s

14/02/17 21:54:34 INFO SparkContext: Job finished: count at SimpleApp.scala:24, took 0.0633455 s

training Mean Squared Error = 0.4424462080486391

14/02/17 21:54:34 INFO ConnectionManager: Selector thread was interrupted!

[success] Total time: 41 s, completed Feb 17, 2014 9:54:34 PM

[root@master scala]#

