I. Download the packages

Download both apache-nutch-1.7-bin.zip and apache-nutch-1.7-src.zip.
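If you prefer the command line, the 1.7 release can be fetched from the Apache archive; a sketch (the archive path is my assumption — adjust it if the mirror layout has changed):

curl -O https://archive.apache.org/dist/nutch/1.7/apache-nutch-1.7-src.zip
curl -O https://archive.apache.org/dist/nutch/1.7/apache-nutch-1.7-bin.zip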

II. Basic environment setup

1. Unzip apache-nutch-1.7-src.zip into your Eclipse workspace, e.g. D:\Workspaces\MyEclipse 8.5\test\apache-nutch-1.7.

2. Unzip apache-nutch-1.7-bin.zip and copy its lib folder into D:\Workspaces\MyEclipse 8.5\test\apache-nutch-1.7 (the src package is missing some of the jars).

3. Import Nutch into Eclipse: from the menu choose "File" → "New" → "Other" → "Java Project from Existing Ant Buildfile".

Click Next, point "Ant buildfile" at Nutch's build.xml, keep the project name consistent with the nutch folder that contains build.xml, and check "Link to the buildfile in the file system".

After clicking "Finish", Eclipse reports that the lib folder is missing under Nutch's build directory. Copy the lib folder from the nutch root into the build directory and repeat the import steps.

With the Nutch sources now in Eclipse, select the conf folder and right-click "Build Path" → "Use as Source Folder".

At this point the project shows error markers ("x"). Select the project, right-click "Properties", change the text encoding to UTF-8, and click "Apply". Nutch is now fully imported into Eclipse.

III. Configuring Nutch in Eclipse

1. Find Crawl.java and right-click "Run As" → "Java Application". The console prints: "Usage: Crawl <urlDir> -solr <solrURL> [-dir d] [-threads n] [-depth i] [-topN N]".

Create a folder named "urls" under the nutch folder, create url.txt inside it, and write the address of the site you want to crawl into that file.
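For example, to match the crawl log quoted at the end of this article, url.txt holds a single seed URL:

http://www.163.com/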

Now right-click Crawl.java → "Run As" → "Run Configurations". Under "Program arguments" enter "urls -dir data1221 -threads 2 -depth 3 -topN 5" (the class name itself is not an argument — per the usage line above, the first argument is the url directory), under "VM arguments" enter "-Xms512m -Xmx800m", and click "Apply".

Click "Run". It fails with the following error:

Injector: Converting injected urls to crawl db entries.
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-\mapred\staging\1623868107\.staging to 0700at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)at java.security.AccessController.doPrivileged(Native Method)at javax.security.auth.Subject.doAs(Subject.java:396)at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)at org.apache.nutch.crawl.Injector.inject(Injector.java:281)at org.apache.nutch.crawl.Crawl.run(Crawl.java:132)at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)

This failure comes from the Hadoop jar, hadoop-core-1.2.0.jar, under Nutch's lib folder: Hadoop's FileUtil.java needs to be modified so that the body of its checkReturnValue method is commented out, as sketched below.
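The original post showed the patch as a screenshot; here is a sketch of the patched method instead (the signature follows the hadoop-core-1.x source as I remember it, so verify against the file you actually extract):

// org/apache/hadoop/fs/FileUtil.java in hadoop-core-1.2.0
private static void checkReturnValue(boolean rv, File p,
                                     FsPermission permission
                                     ) throws IOException {
  // Body commented out: on Windows, local-mode Hadoop cannot chmod the
  // staging directory to 0700, and this check would otherwise abort the job.
  /*
  if (!rv) {
    throw new IOException("Failed to set permissions of path: " + p +
                          " to " +
                          String.format("%04o", permission.toShort()));
  }
  */
}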

Then hadoop-core-1.2.0.jar has to be repackaged. Since I did not know the proper repackaging procedure, I simply opened hadoop-core-1.2.0.jar with an archive tool, deleted the old FileUtil class files, and added the recompiled ones.
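Alternatively, the JDK's jar tool can update the entry in place; a sketch, assuming the patched source sits under an org/apache/hadoop/fs/ directory next to the jar (on Windows cmd, list the generated .class files explicitly instead of using the * wildcard):

javac -cp hadoop-core-1.2.0.jar org/apache/hadoop/fs/FileUtil.java
jar uf hadoop-core-1.2.0.jar org/apache/hadoop/fs/FileUtil*.class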

The project now has to be rebuilt: select build.xml and right-click "Run As" → "Ant Build...".

Check the "jar", "job", and "runtime [default]" targets, click "Apply", then "Run". Compilation takes a while, and the first build downloads some of the files it needs.
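The same targets can also be run from a shell inside the nutch folder, assuming Ant is installed and on the PATH:

ant jar job runtime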

After the build, copy the "plugins" folder, "apache-nutch-1.7.jar", and "apache-nutch-1.7.job" from the build folder into the nutch folder, then refresh the Nutch project in Eclipse.

Run the Crawl class again. It fails with:

Fetcher: No agents listed in 'http.agent.name' property.
Exception in thread "main" java.lang.IllegalArgumentException: Fetcher: No agents listed in 'http.agent.name' property.at org.apache.nutch.fetcher.Fetcher.checkConfiguration(Fetcher.java:1397)at org.apache.nutch.fetcher.Fetcher.fetch(Fetcher.java:1282)at org.apache.nutch.crawl.Crawl.run(Crawl.java:141)at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)

The 'http.agent.name' property is not set. Open nutch-default.xml under Nutch's conf folder and search for 'http.agent.name'; its value is empty:

<property>
  <name>http.agent.name</name>
  <value></value>
  <description>HTTP 'User-Agent' request header. MUST NOT be empty -
  please set this to a single word uniquely related to your organization.
  NOTE: You should also check other related properties:
    http.robots.agents
    http.agent.description
    http.agent.url
    http.agent.email
    http.agent.version
  and set their values appropriately.
  </description>
</property>

Copy this property block into nutch-site.xml and fill in the value.
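A minimal nutch-site.xml, with a placeholder agent name of my own choosing (any single word identifying your crawler will do):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>http.agent.name</name>
    <value>MyNutchSpider</value>
  </property>
</configuration>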

Note that as a rule we leave the defaults in nutch-default.xml untouched; instead we add properties to nutch-site.xml, whose settings override those in nutch-default.xml.

Run the Crawl class again; this time the error is:

Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist:
file:/D:/Workspaces/MyEclipse 8.5/test/apache-nutch-1.7/data1221/segments/20131221154928/parse_data
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
    at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:40)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
    at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
    at org.apache.nutch.crawl.LinkDb.invert(LinkDb.java:180)
    at org.apache.nutch.crawl.LinkDb.invert(LinkDb.java:151)
    at org.apache.nutch.crawl.Crawl.run(Crawl.java:148)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)

This error means the expected parse_data directory could not be found: the earlier failed run had already written partial data under segments before it aborted.

Indeed, a data1221 folder has appeared under nutch. Delete "data1221" and rerun the Crawl class; now it runs cleanly.

A fresh "data1221" folder is generated under nutch, containing "crawldb", "linkdb", and "segments" folders, roughly as sketched below.
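Based on the console log that follows, the output layout looks like this (the per-segment subfolders are the standard Nutch segment contents, listed here from memory, so verify against your own run):

data1221/
├── crawldb/
├── linkdb/
└── segments/
    ├── 20131221160513/
    ├── 20131221160520/
    └── 20131221160532/
        (each segment holds content, crawl_generate, crawl_fetch,
         crawl_parse, parse_data, and parse_text)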

One final note: we never touched Nutch's URL-filtering configuration during this walkthrough, because "automaton-urlfilter.txt" under Nutch's conf folder accepts every URL by default:

# accept anything else
+.*
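To confine the crawl to a single site instead, replace the catch-all rule with a site-specific pattern. An illustrative, untested sketch for 163.com (the exact pattern is my assumption — check it against the automaton regex syntax the plugin expects):

# accept only 163.com and its subdomains
+http://([a-z0-9]*\.)*163\.com/.*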

This completes the Nutch crawling walkthrough.

For reference, the Eclipse console output was as follows:

solrUrl is not set, indexing will be skipped...
crawl started in: data1221
rootUrlDir = urls
threads = 2
depth = 3
solrUrl=null
topN = 5
Injector: starting at 2013-12-21 16:05:08
Injector: crawlDb: data1221/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: total number of urls rejected by filters: 0
Injector: total number of urls injected after normalization and filtering: 1
Injector: Merging injected urls into crawl db.
Injector: finished at 2013-12-21 16:05:11, elapsed: 00:00:02
Generator: starting at 2013-12-21 16:05:11
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 5
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: data1221/segments/20131221160513
Generator: finished at 2013-12-21 16:05:14, elapsed: 00:00:03
Fetcher: Your 'http.agent.name' value should be listed first in 'http.robots.agents' property.
Fetcher: starting at 2013-12-21 16:05:14
Fetcher: segment: data1221/segments/20131221160513
Using queue mode : byHost
Fetcher: threads: 2
Fetcher: time-out divisor: 2
QueueFeeder finished: total 1 records + hit by time limit :0
Using queue mode : byHost
Using queue mode : byHost
Fetcher: throughput threshold: -1
Fetcher: throughput threshold retries: 5
fetching http://www.163.com/ (queue crawl delay=5000ms)
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=0
Fetcher: finished at 2013-12-21 16:05:16, elapsed: 00:00:02
ParseSegment: starting at 2013-12-21 16:05:16
ParseSegment: segment: data1221/segments/20131221160513
Parsed (8ms):http://www.163.com/
ParseSegment: finished at 2013-12-21 16:05:17, elapsed: 00:00:01
CrawlDb update: starting at 2013-12-21 16:05:17
CrawlDb update: db: data1221/crawldb
CrawlDb update: segments: [data1221/segments/20131221160513]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2013-12-21 16:05:18, elapsed: 00:00:01
Generator: starting at 2013-12-21 16:05:18
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 5
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: data1221/segments/20131221160520
Generator: finished at 2013-12-21 16:05:21, elapsed: 00:00:03
Fetcher: Your 'http.agent.name' value should be listed first in 'http.robots.agents' property.
Fetcher: starting at 2013-12-21 16:05:21
Fetcher: segment: data1221/segments/20131221160520
Using queue mode : byHost
Fetcher: threads: 2
Fetcher: time-out divisor: 2
QueueFeeder finished: total 2 records + hit by time limit :0
Using queue mode : byHost
fetching http://m.163.com/ (queue crawl delay=5000ms)
Using queue mode : byHost
Fetcher: throughput threshold: -1
Fetcher: throughput threshold retries: 5
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=1
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613126729  now=1387613122697  0. http://m.163.com/newsapp/
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=1
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613126729  now=1387613123697  0. http://m.163.com/newsapp/
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=1
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613126729  now=1387613124697  0. http://m.163.com/newsapp/
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=1
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613126729  now=1387613125698  0. http://m.163.com/newsapp/
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=1
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613126729  now=1387613126698  0. http://m.163.com/newsapp/
fetching http://m.163.com/newsapp/ (queue crawl delay=5000ms)
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=0
Fetcher: finished at 2013-12-21 16:05:28, elapsed: 00:00:07
ParseSegment: starting at 2013-12-21 16:05:28
ParseSegment: segment: data1221/segments/20131221160520
Parsed (1ms):http://m.163.com/newsapp/
ParseSegment: finished at 2013-12-21 16:05:29, elapsed: 00:00:01
CrawlDb update: starting at 2013-12-21 16:05:29
CrawlDb update: db: data1221/crawldb
CrawlDb update: segments: [data1221/segments/20131221160520]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2013-12-21 16:05:30, elapsed: 00:00:01
Generator: starting at 2013-12-21 16:05:30
Generator: Selecting best-scoring urls due for fetch.
Generator: filtering: true
Generator: normalizing: true
Generator: topN: 5
Generator: jobtracker is 'local', generating exactly one partition.
Generator: Partitioning selected urls for politeness.
Generator: segment: data1221/segments/20131221160532
Generator: finished at 2013-12-21 16:05:33, elapsed: 00:00:03
Fetcher: Your 'http.agent.name' value should be listed first in 'http.robots.agents' property.
Fetcher: starting at 2013-12-21 16:05:33
Fetcher: segment: data1221/segments/20131221160532
Using queue mode : byHost
Fetcher: threads: 2
Fetcher: time-out divisor: 2
QueueFeeder finished: total 5 records + hit by time limit :0
Using queue mode : byHost
Using queue mode : byHost
fetching http://digi.163.com/13/0719/10/9450M2MJ00162659.html (queue crawl delay=5000ms)
Fetcher: throughput threshold: -1
Fetcher: throughput threshold retries: 5
fetching http://help.3g.163.com/13/1216/17/9G81M68M0096400O.html (queue crawl delay=5000ms)
fetching http://m.163.com/newsapp/download.html (queue crawl delay=5000ms)
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=2
* queue: http://help.3g.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139083  now=1387613135003  0. http://help.3g.163.com/13/1127/15/9EMS17RN0096400O.html
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139117  now=1387613135003  0. http://m.163.com/newsapp/zhinan.html
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=2
* queue: http://help.3g.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139083  now=1387613136012  0. http://help.3g.163.com/13/1127/15/9EMS17RN0096400O.html
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139117  now=1387613136012  0. http://m.163.com/newsapp/zhinan.html
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=2
* queue: http://help.3g.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139083  now=1387613137013  0. http://help.3g.163.com/13/1127/15/9EMS17RN0096400O.html
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139117  now=1387613137013  0. http://m.163.com/newsapp/zhinan.html
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=2
* queue: http://help.3g.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139083  now=1387613138014  0. http://help.3g.163.com/13/1127/15/9EMS17RN0096400O.html
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139117  now=1387613138014  0. http://m.163.com/newsapp/zhinan.html
-activeThreads=2, spinWaiting=2, fetchQueues.totalSize=2
* queue: http://help.3g.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139083  now=1387613139015  0. http://help.3g.163.com/13/1127/15/9EMS17RN0096400O.html
* queue: http://m.163.com  maxThreads=1  inProgress=0  crawlDelay=5000  minCrawlDelay=0  nextFetchTime=1387613139117  now=1387613139015  0. http://m.163.com/newsapp/zhinan.html
fetching http://help.3g.163.com/13/1127/15/9EMS17RN0096400O.html (queue crawl delay=5000ms)
fetching http://m.163.com/newsapp/zhinan.html (queue crawl delay=5000ms)
-finishing thread FetcherThread, activeThreads=1
-finishing thread FetcherThread, activeThreads=0
-activeThreads=0, spinWaiting=0, fetchQueues.totalSize=0
-activeThreads=0
Fetcher: finished at 2013-12-21 16:05:40, elapsed: 00:00:07
ParseSegment: starting at 2013-12-21 16:05:40
ParseSegment: segment: data1221/segments/20131221160532
Parsed (0ms):http://digi.163.com/13/0719/10/9450M2MJ00162659.html
Parsed (0ms):http://help.3g.163.com/13/1127/15/9EMS17RN0096400O.html
Parsed (0ms):http://help.3g.163.com/13/1216/17/9G81M68M0096400O.html
Parsed (0ms):http://m.163.com/newsapp/download.html
Parsed (0ms):http://m.163.com/newsapp/zhinan.html
ParseSegment: finished at 2013-12-21 16:05:42, elapsed: 00:00:01
CrawlDb update: starting at 2013-12-21 16:05:42
CrawlDb update: db: data1221/crawldb
CrawlDb update: segments: [data1221/segments/20131221160532]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: true
CrawlDb update: URL filtering: true
CrawlDb update: 404 purging: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2013-12-21 16:05:44, elapsed: 00:00:02
LinkDb: starting at 2013-12-21 16:05:44
LinkDb: linkdb: data1221/linkdb
LinkDb: URL normalize: true
LinkDb: URL filter: true
LinkDb: internal links will be ignored.
LinkDb: adding segment: file:/D:/Workspaces/MyEclipse 8.5/test/apache-nutch-1.7/data1221/segments/20131221160513
LinkDb: adding segment: file:/D:/Workspaces/MyEclipse 8.5/test/apache-nutch-1.7/data1221/segments/20131221160520
LinkDb: adding segment: file:/D:/Workspaces/MyEclipse 8.5/test/apache-nutch-1.7/data1221/segments/20131221160532
LinkDb: finished at 2013-12-21 16:05:45, elapsed: 00:00:01
crawl finished: data1221
