Setting Up a Spark Development Environment with IntelliJ IDEA and Maven (Repost)
Bookmarking this; thanks to the original author for sharing!
I. Install the JDK
Use JDK 1.7 or later and set the environment variables. Installation steps are omitted.
II. Install Maven
I used Maven 3.3.3; installation steps are omitted.
Edit conf/settings.xml under the Maven installation directory:
```xml
<!-- Change the local Maven repository location -->
<localRepository>D:\maven-repository\repository</localRepository>
```
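For orientation, here is a minimal sketch of a complete settings.xml carrying only this change (everything besides the element shown in the post is boilerplate from the standard Maven settings schema):

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <!-- Local repository moved off the default ~/.m2 location -->
    <localRepository>D:\maven-repository\repository</localRepository>
</settings>
```

After this change, all newly downloaded artifacts land under D:\maven-repository\repository instead of the user home directory.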
III. Install IntelliJ IDEA
Installation steps are omitted.
IV. Create the Spark Project
1. Create a new project.
2. Choose Maven and create the project from an archetype (template).
3. Fill in the project GroupId, ArtifactId, and so on.
4. Select the locally installed Maven and its settings file.
5. Click Next.
6. After creation, inspect the new project structure.
7. Let IDEA auto-import the Maven pom changes.
8. Build the project.
If the build fails at this point, the error is caused by the generated JUnit version. Delete the generated tests and the related JUnit entries:
that is, delete the two generated Scala test classes,
and remove the JUnit dependency from pom.xml:
```xml
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
```
9. Reimport the Maven dependencies.
10. Add the JDK and the Scala SDK to the project.
11. Add the required dependencies to pom.xml, including Hadoop, Spark, and others:
```xml
<dependency>
    <groupId>commons-logging</groupId>
    <artifactId>commons-logging</artifactId>
    <version>1.1.1</version>
    <type>jar</type>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.1</version>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.9</version>
</dependency>
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.5.1</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.5.1</version>
</dependency>
```
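These artifacts are built against Scala 2.10, so the project's Scala version must match the `_2.10` suffix. A hedged sketch of a properties block that keeps the versions in one place (the exact Scala patch version is an assumption, not from the original post):

```xml
<properties>
    <!-- Must match the _2.10 suffix of the Spark artifacts above -->
    <scala.version>2.10.6</scala.version>
    <spark.version>1.5.1</spark.version>
</properties>
```

Mismatching the Scala binary version (e.g. a 2.11 SDK with `_2.10` artifacts) typically fails at runtime with NoSuchMethodError, so pinning it explicitly is worthwhile.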
Then reimport the Maven dependencies.
12. Create a new Scala object.
The test code is:
```scala
import org.apache.spark.{SparkConf, SparkContext}

object Test {
  def main(args: Array[String]) {
    println("Hello World!")
    val sparkConf = new SparkConf().setMaster("local").setAppName("test")
    val sparkContext = new SparkContext(sparkConf)
  }
}
```
Run it. If the following error is reported:
```
java.lang.SecurityException: class "javax.servlet.FilterRegistration"'s signer information does not match signer information of other classes in the same package
	at java.lang.ClassLoader.checkCerts(ClassLoader.java:952)
	at java.lang.ClassLoader.preDefineClass(ClassLoader.java:666)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:794)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at org.spark-project.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:136)
	at org.spark-project.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:129)
	at org.spark-project.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:98)
	at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:110)
	at org.apache.spark.ui.JettyUtils$.createServletHandler(JettyUtils.scala:101)
	at org.apache.spark.ui.WebUI.attachPage(WebUI.scala:78)
	at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:62)
	at org.apache.spark.ui.WebUI$$anonfun$attachTab$1.apply(WebUI.scala:62)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.ui.WebUI.attachTab(WebUI.scala:62)
	at org.apache.spark.ui.SparkUI.initialize(SparkUI.scala:61)
	at org.apache.spark.ui.SparkUI.<init>(SparkUI.scala:74)
	at org.apache.spark.ui.SparkUI$.create(SparkUI.scala:190)
	at org.apache.spark.ui.SparkUI$.createLiveUI(SparkUI.scala:141)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:466)
	at com.test.Test$.main(Test.scala:13)
	at com.test.Test.main(Test.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
```
you can fix it by deleting the servlet-api 2.5 jar from the project libraries, but the cleaner fix is to remove the conflicting dependency from pom.xml, namely:
```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
</dependency>
```
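If hadoop-client must stay on the classpath, an alternative worth trying (a sketch, not verified in the original post) is to exclude only the offending servlet-api artifact instead of dropping the whole dependency, using Maven's standard exclusions mechanism:

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.1</version>
    <exclusions>
        <!-- servlet-api 2.5 clashes with the signed servlet classes Spark's Jetty ships -->
        <exclusion>
            <groupId>javax.servlet</groupId>
            <artifactId>servlet-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

Exclusions only affect transitive resolution, so this keeps the rest of the hadoop-client tree intact.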
The final dependency section of pom.xml is:
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>2.7.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.5.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.10</artifactId>
        <version>1.5.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-hive_2.10</artifactId>
        <version>1.5.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.10</artifactId>
        <version>1.5.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-mllib_2.10</artifactId>
        <version>1.5.2</version>
    </dependency>
    <dependency>
        <groupId>com.databricks</groupId>
        <artifactId>spark-avro_2.10</artifactId>
        <version>2.0.1</version>
    </dependency>
</dependencies>
```
If instead the following error is reported, it is harmless and the program still runs:
```
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
	at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
	at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:760)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:633)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2084)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:311)
	at com.test.Test$.main(Test.scala:13)
	at com.test.Test.main(Test.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
```
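The warning appears because on Windows, Hadoop looks for winutils.exe under %HADOOP_HOME%\bin. A hedged sketch of the usual remedy (the C:\hadoop path is an assumption, and winutils.exe must be downloaded separately for the matching Hadoop version):

```shell
# Assumption: winutils.exe has been placed at C:\hadoop\bin\winutils.exe.
# In a Unix-style shell (e.g. Git Bash) set the variable like this;
# in cmd.exe the equivalent is:  setx HADOOP_HOME C:\hadoop
export HADOOP_HOME='C:\hadoop'
echo "$HADOOP_HOME"
```

Restart IDEA after setting the variable so the run configuration picks it up.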
A normal run finally produces output like this:
```
Hello World!
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/09/19 11:21:29 INFO SparkContext: Running Spark version 1.5.1
16/09/19 11:21:29 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
	at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:356)
	at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:371)
	at org.apache.hadoop.util.Shell.<clinit>(Shell.java:364)
	at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
	at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:272)
	at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
	at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:790)
	at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:760)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:633)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
	at scala.Option.getOrElse(Option.scala:120)
	at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2084)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:311)
	at com.test.Test$.main(Test.scala:13)
	at com.test.Test.main(Test.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
16/09/19 11:21:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/19 11:21:30 INFO SecurityManager: Changing view acls to: pc
16/09/19 11:21:30 INFO SecurityManager: Changing modify acls to: pc
16/09/19 11:21:30 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(pc); users with modify permissions: Set(pc)
16/09/19 11:21:30 INFO Slf4jLogger: Slf4jLogger started
16/09/19 11:21:31 INFO Remoting: Starting remoting
16/09/19 11:21:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.51.143:52500]
16/09/19 11:21:31 INFO Utils: Successfully started service 'sparkDriver' on port 52500.
16/09/19 11:21:31 INFO SparkEnv: Registering MapOutputTracker
16/09/19 11:21:31 INFO SparkEnv: Registering BlockManagerMaster
16/09/19 11:21:31 INFO DiskBlockManager: Created local directory at C:\Users\pc\AppData\Local\Temp\blockmgr-f9ea7f8c-68f9-4f9b-a31e-b87ec2e702a4
16/09/19 11:21:31 INFO MemoryStore: MemoryStore started with capacity 966.9 MB
16/09/19 11:21:31 INFO HttpFileServer: HTTP File server directory is C:\Users\pc\AppData\Local\Temp\spark-64cccfb4-46c8-4266-92c1-14cfc6aa2cb3\httpd-5993f955-0d92-4233-b366-c9a94f7122bc
16/09/19 11:21:31 INFO HttpServer: Starting HTTP Server
16/09/19 11:21:31 INFO Utils: Successfully started service 'HTTP file server' on port 52501.
16/09/19 11:21:31 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/19 11:21:31 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/09/19 11:21:31 INFO SparkUI: Started SparkUI at http://192.168.51.143:4040
16/09/19 11:21:31 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/09/19 11:21:31 INFO Executor: Starting executor ID driver on host localhost
16/09/19 11:21:31 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52520.
16/09/19 11:21:31 INFO NettyBlockTransferService: Server created on 52520
16/09/19 11:21:31 INFO BlockManagerMaster: Trying to register BlockManager
16/09/19 11:21:31 INFO BlockManagerMasterEndpoint: Registering block manager localhost:52520 with 966.9 MB RAM, BlockManagerId(driver, localhost, 52520)
16/09/19 11:21:31 INFO BlockManagerMaster: Registered BlockManager
16/09/19 11:21:31 INFO SparkContext: Invoking stop() from shutdown hook
16/09/19 11:21:32 INFO SparkUI: Stopped Spark web UI at http://192.168.51.143:4040
16/09/19 11:21:32 INFO DAGScheduler: Stopping DAGScheduler
16/09/19 11:21:32 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/09/19 11:21:32 INFO MemoryStore: MemoryStore cleared
16/09/19 11:21:32 INFO BlockManager: BlockManager stopped
16/09/19 11:21:32 INFO BlockManagerMaster: BlockManagerMaster stopped
16/09/19 11:21:32 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/09/19 11:21:32 INFO SparkContext: Successfully stopped SparkContext
16/09/19 11:21:32 INFO ShutdownHookManager: Shutdown hook called
16/09/19 11:21:32 INFO ShutdownHookManager: Deleting directory C:\Users\pc\AppData\Local\Temp\spark-64cccfb4-46c8-4266-92c1-14cfc6aa2cb3
Process finished with exit code 0
```
That completes the development environment setup.
V. Build the Jar
1. Create a new Scala object.
The code is:
```scala
package com.test

import org.apache.spark.{SparkConf, SparkContext}

/**
 * Created by pc on 2016/9/20.
 */
object WorldCount {
  def main(args: Array[String]) {
    val dataFile = args(0)
    val output = args(1)
    val sparkConf = new SparkConf().setAppName("WorldCount")
    val sparkContext = new SparkContext(sparkConf)
    val lines = sparkContext.textFile(dataFile)
    val counts = lines.flatMap(_.split(",")).map(s => (s, 1)).reduceByKey((a, b) => a + b)
    counts.saveAsTextFile(output)
    sparkContext.stop()
  }
}
```
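Before submitting to the cluster, the split-and-count logic above can be previewed on a tiny comma-separated sample with plain coreutils (a sketch; the sample data is made up, not from the original post):

```shell
# Same idea as flatMap(_.split(",")) followed by reduceByKey(_ + _),
# expressed with coreutils: split on commas, then count per word.
printf 'spark,hadoop,spark\nhadoop,spark\n' > data.txt
tr ',' '\n' < data.txt | sort | uniq -c | sort -rn
```

This makes it easy to sanity-check the expected output format before running the real job against HDFS.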
2. Open File -> Project Structure.
3. Click OK.
You can set the jar output directory here.
4. Build the artifact.
5. Run it.
Put the test file under the /test/ directory on HDFS, then submit:
```shell
spark-submit --class com.test.WorldCount --master spark://192.168.18.151:7077 sparktest.jar /test/data.txt /test/test-01
```
6. If the following error appears:
```
Exception in thread "main" java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
	at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:240)
	at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:193)
	at java.util.jar.JarVerifier.processEntry(JarVerifier.java:305)
	at java.util.jar.JarVerifier.update(JarVerifier.java:216)
	at java.util.jar.JarFile.initializeVerifier(JarFile.java:345)
	at java.util.jar.JarFile.getInputStream(JarFile.java:412)
	at sun.misc.JarIndex.getJarIndex(JarIndex.java:137)
	at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:674)
	at sun.misc.URLClassPath$JarLoader$1.run(URLClassPath.java:666)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.misc.URLClassPath$JarLoader.ensureOpen(URLClassPath.java:665)
	at sun.misc.URLClassPath$JarLoader.<init>(URLClassPath.java:638)
	at sun.misc.URLClassPath$3.run(URLClassPath.java:366)
	at sun.misc.URLClassPath$3.run(URLClassPath.java:356)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.misc.URLClassPath.getLoader(URLClassPath.java:355)
	at sun.misc.URLClassPath.getLoader(URLClassPath.java:332)
	at sun.misc.URLClassPath.getResource(URLClassPath.java:198)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:358)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:270)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:173)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:641)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
```
open the jar with WinRAR and, under the META-INF directory, delete everything except MANIFEST.MF, the .RSA file, and the maven directory.
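On a machine with the zip utility, the same surgery can be scripted instead of done by hand. This sketch builds a dummy jar (all names made up) and strips the .SF signature files whose digests trigger the error:

```shell
# Build a dummy jar containing a manifest and a stray signature file.
mkdir -p META-INF
printf 'Manifest-Version: 1.0\n' > META-INF/MANIFEST.MF
printf 'dummy signature\n' > META-INF/DUMMY.SF
zip -q demo.jar META-INF/MANIFEST.MF META-INF/DUMMY.SF
# Deleting META-INF/*.SF (and *.DSA, if present) removes the entries
# whose digests no longer match after the jar was repacked.
zip -q -d demo.jar 'META-INF/*.SF'
unzip -l demo.jar
```

Run the same `zip -d` command against the real sparktest.jar before submitting it again.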