Please credit the author when reposting. Thank you for your support!

I. Environment Setup

The test environment is the QuickStart VM provided by CDH.

Hadoop version: 2.5.0-cdh5.2.0

Spark version: 1.1.0

II. Hello Spark

  1. Move /usr/lib/spark/examples/lib/spark-examples-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar to /usr/lib/spark/lib/spark-examples-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar
  2. Run the example: ./bin/run-example SparkPi 10 (a simplified sketch of what SparkPi computes follows this list)
  3. Log analysis:

    1. The program checks the IP, the host, and the SecurityManager.
    2. sparkDriver is started: an Akka-based TCP endpoint comes up at [akka.tcp://sparkDriver@192.168.128.131:42960].
    3. MapOutputTracker and BlockManagerMaster are started.
    4. A block manager is started, i.e. ConnectionManagerId(192.168.128.131,41898), which contains a MemoryStore.
    5. An HTTP file server is started: SocketConnector@0.0.0.0:55161.
    6. The Spark UI is started at http://192.168.128.131:4040.
    7. The local application jar is made available over HTTP.
    8. The driver connects to the HeartbeatReceiver: akka.tcp://sparkDriver@192.168.128.131:42960/user/HeartbeatReceiver.
    9. Starting job: reduce

      1. The job is analyzed into stages; here there is stage 0 (MappedRDD[1]).
      2. Tasks are added and launched: "Submitting 10 missing tasks from Stage 0".
      3. Executors fetch the application jar over HTTP and add it to their classloader.
      4. When a task finishes, its result is sent back to the driver.
      5. scheduler.DAGScheduler marks the stage complete once all of its tasks have finished.
      6. scheduler.TaskSetManager on localhost collects the completed tasks.
      7. The job finishes.
    10. The Spark web UI is stopped.
    11. The DAGScheduler is stopped.
    12. MapOutputTrackerActor is stopped.
    13. The ConnectionManager is stopped.
    14. The MemoryStore is cleared.
    15. The BlockManager is stopped.
    16. The remote daemon is shut down.
    17. The SparkContext is stopped successfully.
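
For reference, SparkPi is essentially a Monte Carlo estimate of Pi. Below is a minimal Java sketch of the same idea (the example that ships with Spark is written in Scala; the class name and sample count here are made up):

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

public class JavaSparkPiSketch {
  public static void main(String[] args) {
    int slices = args.length > 0 ? Integer.parseInt(args[0]) : 10;  // "10" in the command above
    int samples = slices * 100000;

    SparkConf conf = new SparkConf().setAppName("JavaSparkPiSketch");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // One dummy element per sample, split into `slices` partitions -> `slices` tasks in stage 0.
    List<Integer> seeds = new ArrayList<Integer>();
    for (int i = 0; i < samples; i++) {
      seeds.add(i);
    }

    int inside = sc.parallelize(seeds, slices)
        // Transformation: 1 if a random point falls inside the unit circle, else 0.
        .map(new Function<Integer, Integer>() {
          @Override
          public Integer call(Integer i) {
            double x = Math.random() * 2 - 1;
            double y = Math.random() * 2 - 1;
            return (x * x + y * y <= 1) ? 1 : 0;
          }
        })
        // Action: this is the "reduce" job that shows up in the log analysis above.
        .reduce(new Function2<Integer, Integer, Integer>() {
          @Override
          public Integer call(Integer a, Integer b) {
            return a + b;
          }
        });

    System.out.println("Pi is roughly " + 4.0 * inside / samples);
    sc.stop();
  }
}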

III. Cluster Mode

Execution flow:

  1. The SparkContext connects to a cluster manager (either Spark's own standalone cluster manager or Mesos/YARN).
  2. The Spark application asks the cluster manager for resources: executors, the processes that run computations and store data for the application.
  3. The application jar (or Python files) is shipped to the executors.
  4. The SparkContext sends tasks to the executors to run (see the sketch after this list).
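
A minimal driver-side sketch of this flow (the master URL here is a placeholder; in practice the master is usually supplied via spark-submit rather than hard-coded):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class DriverSketch {
  public static void main(String[] args) {
    // Creating the SparkContext is what connects the driver to the cluster manager
    // and triggers the allocation of executors for this application.
    SparkConf conf = new SparkConf()
        .setAppName("DriverSketch")
        .setMaster("spark://quickstart.cloudera:7077");  // placeholder master URL
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Each action is broken into tasks that the SparkContext schedules onto the executors.
    JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));
    System.out.println("count = " + numbers.count());

    sc.stop();
  }
}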

Cluster manager types:

  • Standalone: Spark's built-in cluster manager, which makes it easy to bring up a cluster quickly.
  • Apache Mesos: a general-purpose cluster manager that can also run Hadoop MapReduce and other service applications.
  • Hadoop YARN: the cluster manager in Hadoop 2.

Key concepts

  • Application: User program built on Spark. Consists of a driver program and executors on the cluster.
  • Application jar: A jar containing the user's Spark application. In some cases users will want to create an "uber jar" containing their application along with its dependencies. The user's jar should never include Hadoop or Spark libraries, however; these will be added at runtime.
  • Driver program: The process running the main() function of the application and creating the SparkContext.
  • Cluster manager: An external service for acquiring resources on the cluster (e.g. standalone manager, Mesos, YARN).
  • Deploy mode: Distinguishes where the driver process runs. In "cluster" mode, the framework launches the driver inside of the cluster. In "client" mode, the submitter launches the driver outside of the cluster.
  • Worker node: Any node that can run application code in the cluster.
  • Executor: A process launched for an application on a worker node, that runs tasks and keeps data in memory or disk storage across them. Each application has its own executors.
  • Task: A unit of work that will be sent to one executor.
  • Job: A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action (e.g. save, collect); you'll see this term used in the driver's logs.
  • Stage: Each job gets divided into smaller sets of tasks called stages that depend on each other (similar to the map and reduce stages in MapReduce); you'll see this term used in the driver's logs.

Source: <http://spark.apache.org/docs/latest/cluster-overview.html>

IV. YARN Mode

Running on a YARN cluster:

spark-submit \
  --class com.wankun.sparktest.WordCount \
  --master yarn-cluster \
  target/sparktest-1.0.0.jar /tmp/test1 2

Commands:

yarn-cluster

spark-submit --class com.wankun.sparktest.WordCount --master yarn-cluster --driver-memory 385m --executor-memory 410m target/sparktest-1.0.0.jar /tmp/test1 2

Characteristics:

  1. On success there is no console output; all output goes to the logs.
  2. The run is controlled by YARN; Spark only monitors the application state, and a final state of success means the run succeeded.

yarn-client

spark-submit --class com.wankun.sparktest.WordCount --master yarn-client --driver-memory 385m --executor-memory 410m target/sparktest-1.0.0.jar /tmp/test1 2

In yarn-cluster mode the driver program's container runs inside the cluster; in yarn-client mode the driver program is launched by Spark itself outside the cluster.
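
The commands above submit a class com.wankun.sparktest.WordCount that is not shown in these notes. A minimal version of such a class might look like the following; treating the two arguments as an input path and a partition count is an assumption based on the command line:

package com.wankun.sparktest;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

public class WordCount {
  public static void main(String[] args) {
    String input = args[0];                      // e.g. /tmp/test1
    int partitions = Integer.parseInt(args[1]);  // e.g. 2

    // No setMaster() here: the master (yarn-cluster / yarn-client) comes from spark-submit.
    SparkConf conf = new SparkConf().setAppName("WordCount");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<String> lines = sc.textFile(input, partitions);

    JavaPairRDD<String, Integer> counts = lines
        .flatMap(new FlatMapFunction<String, String>() {
          @Override
          public Iterable<String> call(String line) {
            return Arrays.asList(line.split(" "));
          }
        })
        .mapToPair(new PairFunction<String, String, Integer>() {
          @Override
          public Tuple2<String, Integer> call(String word) {
            return new Tuple2<String, Integer>(word, 1);
          }
        })
        .reduceByKey(new Function2<Integer, Integer, Integer>() {
          @Override
          public Integer call(Integer a, Integer b) {
            return a + b;
          }
        });

    // In yarn-cluster mode this output only shows up in the YARN container logs.
    for (Tuple2<String, Integer> count : counts.collect()) {
      System.out.println(count._1() + "\t" + count._2());
    }
    sc.stop();
  }
}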

How it works:

The main components involved are scheduler.DAGScheduler, scheduler.TaskSetManager, and cluster.YarnClusterScheduler.

  1. Spark asks the RM for a container to act as the scheduling container (the Spark UI started at this point listens on a random port).
  2. Executors are requested (2 by default): Container request (host: Any, priority: 1, capability: <memory:1408, vCores:1>).
  3. A new token is received: Received new token for : quickstart.cloudera:48622.
  4. Based on the resources, environment and commands, a proxy is opened.
  5. The progress reporter is started.
  6. Services such as YarnClusterSchedulerBackend, BlockManagerMasterActor and MemoryStore are started.
  7. The SparkContext starts the job ("Starting job").
  8. TaskSetManager starts tasks with TID 0 and 1.
  9. Task scheduling is driven by scheduler.DAGScheduler, which executes the job's tasks; the TaskSet is removed when it completes.
  10. When all jobs have finished: Stopped Spark web UI, Stopping DAGScheduler, Shutting down all executors.

Executors:

  1. Executors appear to be reusable.
  2. Executors receive their assigned tasks through CoarseGrainedExecutorBackend; when they are shut down the log shows "Driver commanded a shutdown".
  3. An error was hit while shutting down the HTTP file server process:

14/11/05 20:17:40 WARN thread.QueuedThreadPool: 1 threads could not be stopped

14/11/05 20:17:40 INFO thread.QueuedThreadPool: Couldn't stop Thread[qtp26737473-36 Acceptor0 SocketConnector@0.0.0.0:39213,5,main]

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at java.net.PlainSocketImpl.socketAccept(Native Method)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:398)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at java.net.ServerSocket.implAccept(ServerSocket.java:530)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at java.net.ServerSocket.accept(ServerSocket.java:498)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at org.eclipse.jetty.server.bio.SocketConnector.accept(SocketConnector.java:117)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)

14/11/05 20:17:41 INFO thread.QueuedThreadPool:  at java.lang.Thread.run(Thread.java:745)

14/11/05 20:17:41 INFO network.ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@2cc51248

14/11/05 20:17:41 INFO spark.MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!

14/11/05 20:17:41 INFO network.ConnectionManager: Removing SendingConnection to ConnectionManagerId(quickstart.cloudera,48234)

14/11/05 20:17:41 INFO network.ConnectionManager: Removing SendingConnection to ConnectionManagerId(quickstart.cloudera,52620)

14/11/05 20:17:42 INFO network.ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@2cc51248

java.nio.channels.CancelledKeyException

at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:386)

at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)

14/11/05 20:17:42 INFO network.ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@69aaccdf

14/11/05 20:17:42 INFO network.ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@69aaccdf

java.nio.channels.CancelledKeyException

at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:386)

at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)

Viewing logs

Option 1: yarn logs -applicationId application_1415100770125_0002

Option 2: go through the RM UI on port 8088, which links to the Spark History Server on port 18088.

  1. Configure the history-related settings in spark-defaults.conf:

spark.eventLog.enabled=true

spark.eventLog.dir=hdfs:///user/spark/applicationHistory

spark.yarn.historyServer.address=http://quickstart.cloudera:18088

  2. Start the spark-history-server service.
  3. Event logs of applications submitted to the YARN cluster are now uploaded to HDFS, and from the RM page on port 8088 you can jump directly to the Spark pages to view them.

Submission parameters:

  1. spark-submit --class com.wankun.sparktest.WordCount --master yarn-cluster --driver-memory 385m --executor-memory 410m target/sparktest-1.0.0.jar /tmp/test1 2

     In practice, the memory shown as used by the driver and the task executors is nowhere near this much; the reason is unclear.

  2. Supported options:

14/11/05 11:56:14 ERROR yarn.Client: Error: Executor memory size must be greater than: 384

Exception in thread "main" java.lang.IllegalArgumentException: Usage: org.apache.spark.deploy.yarn.Client [options]

Options:

--jar JAR_PATH             Path to your application's JAR file (required in yarn-cluster mode)

--class CLASS_NAME         Name of your application's main class (required)

--arg ARGS                 Argument to be passed to your application's main class.

                           Multiple invocations are possible, each will be passed in order.

--num-executors NUM        Number of executors to start (Default: 2)

--executor-cores NUM       Number of cores for the executors (Default: 1).

--driver-memory MEM        Memory for driver (e.g. 1000M, 2G) (Default: 512 Mb)

--executor-memory MEM      Memory per executor (e.g. 1000M, 2G) (Default: 1G)

--name NAME                The name of your application (Default: Spark)

--queue QUEUE              The hadoop queue to use for allocation requests (Default: 'default')

--addJars jars             Comma separated list of local jars that want SparkContext.addJar to work with.

--files files              Comma separated list of files to be distributed with the job.

--archives archives        Comma separated list of archives to be distributed with the job.

Optimization

Upload the Spark/Hadoop assembly jar to HDFS so that it does not need to be uploaded on every submission:

hdfs dfs -mkdir -p /user/spark/share/lib

hadoop dfs -put /usr/lib/spark/assembly/lib/spark-assembly-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar /user/spark/share/lib/spark-assembly.jar

hadoop dfs -chmod -R 777 /user/spark/

Configure in spark-env.sh:

export SPARK_JAR=hdfs://quickstart.cloudera:8020/user/spark/share/lib/spark-assembly.jar

V. Spark Standalone Cluster Mode

Spark services

Services to start: spark-history-server, spark-master, spark-worker

spark-master monitoring pages:

http://192.168.128.131:18080/

http://192.168.128.131:4040/ is the application detail page. If there are multiple SparkContexts, the port increments (4041, 4042, ...); the page goes away once the application ends. Its main contents are:

  1. stages and tasks
  2. RDD sizes and memory usage
  3. environment information
  4. executors

192.168.128.131:7077 is the master's communication port.

spark-worker monitoring page:

http://192.168.128.131:18081

192.168.128.131:7078 is the worker's communication port.

spark-history-server monitoring page:

http://192.168.128.131:18088

Communication between the Spark master and workers uses Akka over TCP, e.g. [akka.tcp://sparkWorker@192.168.128.131:7078].

Note: during testing the master was bound to the IP 192.168.128.131, so export SPARK_MASTER_IP=192.168.128.131 had to be set in /etc/spark/conf/spark-env.sh.

Main spark-env.sh settings:

export STANDALONE_SPARK_MASTER_HOST="192.168.128.131"

export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST

### Let's run everything with JVM runtime, instead of Scala

export SPARK_LAUNCH_WITH_SCALA=0

export SPARK_LIBRARY_PATH=${SPARK_HOME}/lib

export SCALA_LIBRARY_PATH=${SPARK_HOME}/lib

export SPARK_MASTER_WEBUI_PORT=18080

export SPARK_MASTER_IP="192.168.128.131"

export SPARK_MASTER_PORT=7077

export SPARK_WORKER_CORES=1

export SPARK_WORKER_MEMORY=100m

export SPARK_WORKER_PORT=7078

export SPARK_WORKER_INSTANCES=1

export SPARK_WORKER_WEBUI_PORT=18081

export SPARK_WORKER_DIR=/var/run/spark/work

export SPARK_LOG_DIR=/var/log/spark

export SPARK_PID_DIR='/var/run/spark/'

if [ -n "$HADOOP_HOME" ]; then

export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:${HADOOP_HOME}/lib/native

fi

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}

Note that the config files in the Cloudera-provided VM have the following problems:

First, the master's port 7077 is not bound to 0.0.0.0. Second, HADOOP_CONF_DIR is written incorrectly as etc/hadoop/conf (missing the leading slash).

Third, the hostname must be mapped to the externally reachable IP in /etc/hosts; otherwise master-worker communication fails or jobs cannot be submitted properly. Jobs should also be submitted using the hostname.

VI. spark-submit

spark-submit \
  --class com.wankun.sparktest.JavaWordCount \
  --master spark://quickstart.cloudera:7077 \
  target/sparktest-1.0.0.jar /tmp/test1 2

Other common options:

--executor-memory 20G

--total-executor-cores 100

--master yarn-cluster \  # can also be `yarn-client` for client mode

--master local[8] \  # Run application locally on 8 cores

Master URLs

The master URL passed to Spark can be in one of the following formats:

  • local: Run Spark locally with one worker thread (i.e. no parallelism at all).
  • local[K]: Run Spark locally with K worker threads (ideally, set this to the number of cores on your machine).
  • local[*]: Run Spark locally with as many worker threads as logical cores on your machine.
  • spark://HOST:PORT: Connect to the given Spark standalone cluster master. The port must be whichever one your master is configured to use, which is 7077 by default.
  • mesos://HOST:PORT: Connect to the given Mesos cluster. The port must be whichever one you have configured it to use, which is 5050 by default. Or, for a Mesos cluster using ZooKeeper, use mesos://zk://....
  • yarn-client: Connect to a YARN cluster in client mode. The cluster location will be found based on the HADOOP_CONF_DIR variable.
  • yarn-cluster: Connect to a YARN cluster in cluster mode. The cluster location will be found based on HADOOP_CONF_DIR.

Source: <http://spark.apache.org/docs/latest/submitting-applications.html>
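
For quick local experiments it can be convenient to hard-code one of these master URLs in the SparkConf instead of passing --master to spark-submit. A small sketch under that assumption (the class name is made up; hard-coding a master is not recommended for real deployments):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalMasterExample {
  public static void main(String[] args) {
    // "local[*]" runs Spark in-process with as many worker threads as logical cores;
    // for a standalone cluster this would instead be e.g. "spark://quickstart.cloudera:7077".
    SparkConf conf = new SparkConf()
        .setAppName("LocalMasterExample")
        .setMaster("local[*]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    long n = sc.parallelize(Arrays.asList("a", "b", "c")).count();
    System.out.println("elements: " + n);
    sc.stop();
  }
}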

Transformations

The following table lists some of the common transformations supported by Spark. Refer to the RDD API doc (Scala, Java, Python) and pair RDD functions doc (Scala, Java) for details.

  • map(func): Return a new distributed dataset formed by passing each element of the source through a function func. (a1 --> b1, a2 --> b2, a3 --> b3)
  • filter(func): Return a new dataset formed by selecting those elements of the source on which func returns true.
  • flatMap(func): Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).
  • mapPartitions(func): Similar to map, but runs separately on each partition (block) of the RDD, so func must be of type Iterator<T> => Iterator<U> when running on an RDD of type T.
  • mapPartitionsWithIndex(func): Similar to mapPartitions, but also provides func with an integer value representing the index of the partition, so func must be of type (Int, Iterator<T>) => Iterator<U> when running on an RDD of type T.
  • sample(withReplacement, fraction, seed): Sample a fraction fraction of the data, with or without replacement, using a given random number generator seed.
  • union(otherDataset): Return a new dataset that contains the union of the elements in the source dataset and the argument.
  • intersection(otherDataset): Return a new RDD that contains the intersection of elements in the source dataset and the argument.
  • distinct([numTasks]): Return a new dataset that contains the distinct elements of the source dataset.
  • groupByKey([numTasks]): When called on a dataset of (K, V) pairs, returns a dataset of (K, Iterable<V>) pairs. Note: If you are grouping in order to perform an aggregation (such as a sum or average) over each key, using reduceByKey or combineByKey will yield much better performance. Note: By default, the level of parallelism in the output depends on the number of partitions of the parent RDD. You can pass an optional numTasks argument to set a different number of tasks.
  • reduceByKey(func, [numTasks]): When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func, which must be of type (V, V) => V. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.
  • aggregateByKey(zeroValue)(seqOp, combOp, [numTasks]): When called on a dataset of (K, V) pairs, returns a dataset of (K, U) pairs where the values for each key are aggregated using the given combine functions and a neutral "zero" value. Allows an aggregated value type that is different than the input value type, while avoiding unnecessary allocations. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.
  • sortByKey([ascending], [numTasks]): When called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean ascending argument.
  • join(otherDataset, [numTasks]): When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are also supported through leftOuterJoin and rightOuterJoin.
  • cogroup(otherDataset, [numTasks]): When called on datasets of type (K, V) and (K, W), returns a dataset of (K, Iterable<V>, Iterable<W>) tuples. This operation is also called groupWith.
  • cartesian(otherDataset): When called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements).
  • pipe(command, [envVars]): Pipe each partition of the RDD through a shell command, e.g. a Perl or bash script. RDD elements are written to the process's stdin and lines output to its stdout are returned as an RDD of strings.
  • coalesce(numPartitions): Decrease the number of partitions in the RDD to numPartitions. Useful for running operations more efficiently after filtering down a large dataset.
  • repartition(numPartitions): Reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them. This always shuffles all data over the network.

  • mapToPair (Java API): JavaPairRDD ---> JavaPairRDD, e.g. swapping key and value:

// Requires: import org.apache.spark.api.java.JavaPairRDD;
//           import org.apache.spark.api.java.function.PairFunction;
//           import scala.Tuple2;
JavaPairRDD<Integer, Integer> tc;  // an existing pair RDD
// Swap the key and value of every pair: (a, b) becomes (b, a).
JavaPairRDD<Integer, Integer> edges = tc.mapToPair(
    new PairFunction<Tuple2<Integer, Integer>, Integer, Integer>() {
      @Override
      public Tuple2<Integer, Integer> call(Tuple2<Integer, Integer> e) {
        return new Tuple2<Integer, Integer>(e._2(), e._1());
      }
    });

Actions

The following table lists some of the common actions supported by Spark. Refer to the RDD API doc (Scala, Java, Python) and pair RDD functions doc (Scala, Java) for details.

  • reduce(func): Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel. (a1,a2 --> b1; a2,a3 --> b2; a3,a4 --> b3)
  • collect(): Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.
  • count(): Return the number of elements in the dataset.
  • first(): Return the first element of the dataset (similar to take(1)).
  • take(n): Return an array with the first n elements of the dataset. Note that this is currently not executed in parallel. Instead, the driver program computes all the elements.
  • takeSample(withReplacement, num, [seed]): Return an array with a random sample of num elements of the dataset, with or without replacement, optionally pre-specifying a random number generator seed.
  • takeOrdered(n, [ordering]): Return the first n elements of the RDD using either their natural order or a custom comparator.
  • saveAsTextFile(path): Write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system. Spark will call toString on each element to convert it to a line of text in the file.
  • saveAsSequenceFile(path) (Java and Scala): Write the elements of the dataset as a Hadoop SequenceFile in a given path in the local filesystem, HDFS or any other Hadoop-supported file system. This is available on RDDs of key-value pairs that implement Hadoop's Writable interface. In Scala, it is also available on types that are implicitly convertible to Writable (Spark includes conversions for basic types like Int, Double, String, etc).
  • saveAsObjectFile(path) (Java and Scala): Write the elements of the dataset in a simple format using Java serialization, which can then be loaded using SparkContext.objectFile().
  • countByKey(): Only available on RDDs of type (K, V). Returns a hashmap of (K, Int) pairs with the count of each key.
  • foreach(func): Run a function func on each element of the dataset. This is usually done for side effects such as updating an accumulator variable or interacting with external storage systems.

Source: <http://spark.apache.org/docs/latest/programming-guide.html>

An action produces a value: a number, an array, an object, and so on.

A transformation produces an RDD, converting one RDD into another RDD.

Common helper classes:

JavaRDD<T>

JavaPairRDD<K, V>

Tuple2<K, V>: similar to an entry in a Map; its fields are accessed with e._1() and e._2().
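
A tiny sketch tying these classes together (the data and class name are made up; local[2] is used only so the snippet can run without a cluster):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

public class RddBasics {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("RddBasics").setMaster("local[2]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<String> words = sc.parallelize(Arrays.asList("spark", "yarn", "hdfs"));

    // Transformation: produces another RDD (here a JavaPairRDD of (word, length)).
    JavaPairRDD<String, Integer> lengths = words.mapToPair(
        new PairFunction<String, String, Integer>() {
          @Override
          public Tuple2<String, Integer> call(String w) {
            return new Tuple2<String, Integer>(w, w.length());
          }
        });

    // Action: brings plain values back to the driver (a List of Tuple2 here).
    for (Tuple2<String, Integer> t : lengths.collect()) {
      System.out.println(t._1() + " -> " + t._2());  // Tuple2 fields: _1() and _2()
    }
    sc.stop();
  }
}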

Related resources

http://spark.apache.org/docs/latest/api/scala/index.html (Scala API doc)

http://mail-archives.apache.org/mod_mbox/incubator-spark-user/ (Apache Spark user mailing list archive)

http://apache-spark-user-list.1001560.n3.nabble.com/

http://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/bd-p/Spark

https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples (Spark examples)
