Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/13 16:17:18 INFO SparkContext: Running Spark version 2.1.1
20/04/13 16:17:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/04/13 16:17:18 INFO SecurityManager: Changing view acls to: Administrator
20/04/13 16:17:18 INFO SecurityManager: Changing modify acls to: Administrator
20/04/13 16:17:18 INFO SecurityManager: Changing view acls groups to:
20/04/13 16:17:18 INFO SecurityManager: Changing modify acls groups to:
20/04/13 16:17:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Administrator); groups with view permissions: Set(); users with modify permissions: Set(Administrator); groups with modify permissions: Set()
20/04/13 16:17:20 INFO Utils: Successfully started service 'sparkDriver' on port 49587.
20/04/13 16:17:20 INFO SparkEnv: Registering MapOutputTracker
20/04/13 16:17:20 INFO SparkEnv: Registering BlockManagerMaster
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/04/13 16:17:20 INFO DiskBlockManager: Created local directory at C:\Users\Administrator\AppData\Local\Temp\blockmgr-c2281fdc-c18b-431b-a8db-c41e93f49919
20/04/13 16:17:20 INFO MemoryStore: MemoryStore started with capacity 1992.9 MB
20/04/13 16:17:20 INFO SparkEnv: Registering OutputCommitCoordinator
20/04/13 16:17:20 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/04/13 16:17:20 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.81.99:4040
20/04/13 16:17:20 INFO Executor: Starting executor ID driver on host localhost
20/04/13 16:17:20 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49628.
20/04/13 16:17:20 INFO NettyBlockTransferService: Server created on 192.168.81.99:49628
20/04/13 16:17:20 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/04/13 16:17:20 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.81.99:49628 with 1992.9 MB RAM, BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.81.99, 49628, None)
20/04/13 16:17:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 127.1 KB, free 1992.8 MB)
20/04/13 16:17:20 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.3 KB, free 1992.8 MB)
20/04/13 16:17:20 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.81.99:49628 (size: 14.3 KB, free: 1992.9 MB)
20/04/13 16:17:21 INFO SparkContext: Created broadcast 0 from textFile at WordCount.scala:20
20/04/13 16:17:21 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:362)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:1013)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$29.apply(SparkContext.scala:1013)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:179)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:179)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:179)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:198)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
at com.atguigu.bigdata.spark.WordCount$.main(WordCount.scala:26)
at com.atguigu.bigdata.spark.WordCount.main(WordCount.scala)
Exception in thread "main" java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:404)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:567)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getPermission(RawLocalFileSystem.java:542)
at org.apache.hadoop.fs.LocatedFileStatus.<init>(LocatedFileStatus.java:42)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1815)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1797)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:233)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:250)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at org.apache.spark.Partitioner$$anonfun$defaultPartitioner$2.apply(Partitioner.scala:66)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:285)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:66)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
at com.atguigu.bigdata.spark.WordCount$.main(WordCount.scala:26)
at com.atguigu.bigdata.spark.WordCount.main(WordCount.scala)
20/04/13 16:17:21 INFO SparkContext: Invoking stop() from shutdown hook
20/04/13 16:17:21 INFO SparkUI: Stopped Spark web UI at http://192.168.81.99:4040
20/04/13 16:17:21 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/04/13 16:17:21 INFO MemoryStore: MemoryStore cleared
20/04/13 16:17:21 INFO BlockManager: BlockManager stopped
20/04/13 16:17:21 INFO BlockManagerMaster: BlockManagerMaster stopped
20/04/13 16:17:21 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/04/13 16:17:21 INFO SparkContext: Successfully stopped SparkContext
20/04/13 16:17:21 INFO ShutdownHookManager: Shutdown hook called
20/04/13 16:17:21 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator\AppData\Local\Temp\spark-b6e8db72-2692-4e04-bab3-b57c461d8454

Process finished with exit code 1
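
Both errors above share one root cause: on Windows, Hadoop's Shell class looks for winutils.exe under %HADOOP_HOME%\bin (or under the hadoop.home.dir system property). Since neither is set, the path resolves to null\bin\winutils.exe, the IOException is logged, and the later attempt to run the missing binary surfaces as the NullPointerException that kills the job. Below is a minimal sketch of the usual workaround; it assumes you have downloaded a winutils.exe matching your Hadoop build and placed it at C:\hadoop\bin\winutils.exe. The directory, input path, and local master are illustrative assumptions, not values taken from this log.

package com.atguigu.bigdata.spark

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Assumption: winutils.exe sits at C:\hadoop\bin\winutils.exe; point Hadoop at its parent
    // directory before the SparkContext is created (setting the HADOOP_HOME environment
    // variable to the same directory works as well).
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Illustrative input path; replace with your own file.
    sc.textFile("input/word.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    sc.stop()
  }
}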
When the Spark job is run from IDEA, the IP it uses is the computer's own IP rather than the virtual machine's IP. How do I deal with this?
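
A hedged note on that question: when the job is launched from IDEA on the Windows host while the Spark cluster runs inside the virtual machine, the driver process itself runs on the host, so it naturally registers with the host's IP (192.168.81.99 in the log above). The executors in the VM only need to be able to reach that address; if they cannot, one common adjustment is to set spark.driver.host to an address the VM can route to. The master URL below is a placeholder, not a value from this log.

val conf = new SparkConf()
  .setAppName("WordCount")
  // Assumption: a standalone master runs inside the VM at this placeholder address.
  .setMaster("spark://192.168.81.130:7077")
  // Address of the Windows host as seen from the VM, so executors can call back to the driver.
  .set("spark.driver.host", "192.168.81.99")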
