1. Overview

  While monitoring the cluster today, I noticed that the load on the NameNode (NN) was far too high. Adjusting the NN's resources and repackaging the applications running on it relieved the pressure for the moment, but that is clearly not a long-term solution. The incident prompted me to re-examine our previous deployment architecture, and the problem became obvious: the packaged applications were deployed directly on the Hadoop cluster. That works, but if the applications live on DataNode (DN) nodes they will, over time, compete with the DNs for resources, and if they live on the NN they interfere with the NN's own computation. After some discussion we concluded that applications should not interfere with the Hadoop cluster at all; the two should be loosely coupled, with all applications deployed on a separate AppServer cluster. Below I walk through what this looks like.

2. Application Deployment Analysis

  Because the applications used to be deployed directly on the Hadoop cluster, they inevitably affected the cluster to some degree. When we develop a Hadoop application locally, we can already run the Hadoop code directly, and the only thing we use is the address of the cluster's HDFS. So why not deploy the application on its own? A local development machine can in fact be viewed as one node of an AppServer cluster. Following that idea, we package the application and deploy it on an independent AppServer cluster; all it needs from the Hadoop cluster is the HDFS address. Note that the AppServer cluster and the Hadoop cluster should sit on the same network segment. The decoupled deployment architecture is shown in the figure below:

  As the figure shows, the AppServer cluster submits jobs to the Hadoop cluster, and the data exchange between the two requires nothing more than the Hadoop HDFS address and the Java API. Applications running on the AppServer nodes no longer affect the normal operation of the Hadoop cluster.
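  To make this concrete, here is a minimal sketch of what "only the HDFS address and the Java API" means from an AppServer node. It is not part of the project code; the NameNode address and the path are placeholders, and for an HA cluster you would add the failover settings shown in the WordCountV2 example in the next section.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; only the HDFS address is needed to reach the cluster.
        conf.set("fs.defaultFS", "hdfs://nn-host:8020");
        FileSystem fs = FileSystem.get(conf);
        // List a directory to verify that the AppServer node can talk to HDFS.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}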

3. Example

  Below is a working example, using WordCountV2 as an illustration. The code is as follows:

package cn.hadoop.hdfs.main;

import java.io.IOException;
import java.util.Random;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import cn.hadoop.hdfs.util.SystemConfig;

/**
 * @Date Apr 23, 2015
 *
 * @Author dengjie
 *
 * @Note WordCount is the classic MapReduce example, the Hadoop equivalent of "hello world".
 *       It splits the input file into words, shuffles and sorts them (map phase),
 *       aggregates the counts (reduce phase), and finally writes the result to HDFS.
 */
public class WordCountV2 {

    private static Logger logger = LoggerFactory.getLogger(WordCountV2.class);
    private static Configuration conf;

    /**
     * Configure the connection to the HA cluster.
     */
    static {
        String tag = SystemConfig.getProperty("dev.tag");
        String[] hosts = SystemConfig.getPropertyArray(tag + ".hdfs.host", ",");
        conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://cluster1");
        conf.set("dfs.nameservices", "cluster1");
        conf.set("dfs.ha.namenodes.cluster1", "nna,nns");
        conf.set("dfs.namenode.rpc-address.cluster1.nna", hosts[0]);
        conf.set("dfs.namenode.rpc-address.cluster1.nns", hosts[1]);
        conf.set("dfs.client.failover.proxy.provider.cluster1",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    }

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        /**
         * Input file:  a b b
         *
         * After map:   a 1
         *              b 1
         *              b 1
         */
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString()); // read the whole line
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken()); // split into words on whitespace
                context.write(word, one);  // emit each word with a count of 1
            }
        }
    }

    /**
     * Before reduce:  a 1
     *                 b 1
     *                 b 1
     *
     * After reduce:   a 1
     *                 b 2
     */
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException,
                InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get(); // accumulate the counts for this key
            }
            result.set(sum);
            context.write(key, result); // emit one total per key
        }
    }

    public static void main(String[] args) {
        try {
            if (args.length < 1) {
                logger.info("args length is 0");
                run("hello.txt");
            } else {
                logger.info("args length is not 0");
                run(args[0]);
            }
        } catch (Exception ex) {
            ex.printStackTrace();
            logger.error(ex.getMessage());
        }
    }

    private static void run(String name) throws Exception {
        long randName = new Random().nextLong(); // randomize the output directory name
        logger.info("output name is [" + randName + "]");
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCountV2.class);
        job.setMapperClass(TokenizerMapper.class);  // the map class
        job.setCombinerClass(IntSumReducer.class);  // the combiner class
        job.setReducerClass(IntSumReducer.class);   // the reduce class
        job.setOutputKeyClass(Text.class);          // output key type
        job.setOutputValueClass(IntWritable.class); // output value type

        String sysInPath = SystemConfig.getProperty("hdfs.input.path.v2");
        String realInPath = String.format(sysInPath, name);
        String syOutPath = SystemConfig.getProperty("hdfs.output.path.v2");
        String realOutPath = String.format(syOutPath, randName);
        FileInputFormat.addInputPath(job, new Path(realInPath));    // input path
        FileOutputFormat.setOutputPath(job, new Path(realOutPath)); // output path

        System.exit(job.waitForCompletion(true) ? 0 : 1); // exit once the MR job finishes
    }
}
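  The SystemConfig class used above is the author's own utility and is not shown in the post. As a rough idea of what such a helper might look like, here is a minimal sketch that simply reads a properties file from the classpath; the file name and keys are assumptions, not the project's actual implementation:

package cn.hadoop.hdfs.util;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class SystemConfig {

    private static final Properties PROPS = new Properties();

    static {
        // Assumption: keys such as dev.tag, <tag>.hdfs.host, hdfs.input.path.v2
        // and hdfs.output.path.v2 live in a properties file on the classpath.
        try (InputStream in = SystemConfig.class.getClassLoader()
                .getResourceAsStream("system.properties")) {
            if (in != null) {
                PROPS.load(in);
            }
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static String getProperty(String key) {
        return PROPS.getProperty(key);
    }

    public static String[] getPropertyArray(String key, String separator) {
        String value = PROPS.getProperty(key);
        return value == null ? new String[0] : value.split(separator);
    }
}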

  The application runs correctly when launched from the local IDE, as the screenshot below shows:

4. Packaging and Deploying the Application

  Next, we package the WordCountV2 application and deploy it to the AppServer1 node. Since the project uses a Maven layout, we can package it directly with a Maven command:

mvn assembly:assembly

  We then upload the packaged JAR file to the AppServer1 node with scp:

scp hadoop-ubas-1.0.0-jar-with-dependencies.jar hadoop@apps:~/

  Next, we run the packaged application on the AppServer1 node:

java -jar hadoop-ubas-1.0.0-jar-with-dependencies.jar

  Unfortunately, it fails with the following error:

java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:518)
    at cn.hadoop.hdfs.main.WordCountV2.run(WordCountV2.java:134)
    at cn.hadoop.hdfs.main.WordCountV2.main(WordCountV2.java:108)
2015-05-10 23:31:21 ERROR [WordCountV2.main] - No FileSystem for scheme: hdfs

5. Error Analysis

  First, let's pin down the cause. The same packaged JAR runs perfectly well on the Hadoop cluster itself and produces correct results, so why does it fail on a node outside the cluster? Is the architecture itself wrong? A careful look at the error message and at our Maven dependencies makes the cause clear: we package the application with the Maven assembly plugin. When the plugin merges all of the dependency JARs into a single file, the META-INF/services/org.apache.hadoop.fs.FileSystem service files overwrite one another, and only the last one added survives. In our case the FileSystem service list from hadoop-common ends up replacing the one from hadoop-hdfs, so the declaration for DistributedFileSystem can no longer be found, which produces the error above. With the cause identified, all that remains is the fix. The solution I settled on is to set the relevant FileSystem implementations explicitly when loading the Hadoop Configuration, as follows:

conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());

  Next, we repackage the application and run it again on the AppServer1 node. This time it runs correctly and produces the expected word counts. The run log is shown below:

[hadoop@apps example]$ java -jar hadoop-ubas-1.0.0-jar-with-dependencies.jar
2015-05-11 00:08:15 INFO  [SystemConfig.main] - Successfully loaded default properties.
2015-05-11 00:08:15 INFO  [WordCountV2.main] - args length is 0
2015-05-11 00:08:15 INFO  [WordCountV2.main] - output name is [6876390710620561863]
2015-05-11 00:08:16 WARN  [NativeCodeLoader.main] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-05-11 00:08:17 INFO  [deprecation.main] - session.id is deprecated. Instead, use dfs.metrics.session-id
2015-05-11 00:08:17 INFO  [JvmMetrics.main] - Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-05-11 00:08:17 WARN  [JobSubmitter.main] - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2015-05-11 00:08:17 INFO  [FileInputFormat.main] - Total input paths to process : 1
2015-05-11 00:08:18 INFO  [JobSubmitter.main] - number of splits:1
2015-05-11 00:08:18 INFO  [JobSubmitter.main] - Submitting tokens for job: job_local519626586_0001
2015-05-11 00:08:18 INFO  [Job.main] - The url to track the job: http://localhost:8080/
2015-05-11 00:08:18 INFO  [Job.main] - Running job: job_local519626586_0001
2015-05-11 00:08:18 INFO  [LocalJobRunner.Thread-14] - OutputCommitter set in config null
2015-05-11 00:08:18 INFO  [LocalJobRunner.Thread-14] - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2015-05-11 00:08:18 INFO  [LocalJobRunner.Thread-14] - Waiting for map tasks
2015-05-11 00:08:18 INFO  [LocalJobRunner.LocalJobRunner Map Task Executor #0] - Starting task: attempt_local519626586_0001_m_000000_0
2015-05-11 00:08:18 INFO  [Task.LocalJobRunner Map Task Executor #0] -  Using ResourceCalculatorProcessTree : [ ]
2015-05-11 00:08:18 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - Processing split: hdfs://cluster1/home/hdfs/test/in/hello.txt:0+24
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - (EQUATOR) 0 kvi 26214396(104857584)
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - mapreduce.task.io.sort.mb: 100
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - soft limit at 83886080
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - bufstart = 0; bufvoid = 104857600
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - kvstart = 26214396; length = 6553600
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2015-05-11 00:08:19 INFO  [LocalJobRunner.LocalJobRunner Map Task Executor #0] -
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - Starting flush of map output
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - Spilling map output
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - bufstart = 0; bufend = 72; bufvoid = 104857600
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - kvstart = 26214396(104857584); kvend = 26214352(104857408); length = 45/6553600
2015-05-11 00:08:19 INFO  [MapTask.LocalJobRunner Map Task Executor #0] - Finished spill 0
2015-05-11 00:08:19 INFO  [Task.LocalJobRunner Map Task Executor #0] - Task:attempt_local519626586_0001_m_000000_0 is done. And is in the process of committing
2015-05-11 00:08:19 INFO  [LocalJobRunner.LocalJobRunner Map Task Executor #0] - map
2015-05-11 00:08:19 INFO  [Task.LocalJobRunner Map Task Executor #0] - Task 'attempt_local519626586_0001_m_000000_0' done.
2015-05-11 00:08:19 INFO  [LocalJobRunner.LocalJobRunner Map Task Executor #0] - Finishing task: attempt_local519626586_0001_m_000000_0
2015-05-11 00:08:19 INFO  [LocalJobRunner.Thread-14] - map task executor complete.
2015-05-11 00:08:19 INFO  [LocalJobRunner.Thread-14] - Waiting for reduce tasks
2015-05-11 00:08:19 INFO  [LocalJobRunner.pool-6-thread-1] - Starting task: attempt_local519626586_0001_r_000000_0
2015-05-11 00:08:19 INFO  [Task.pool-6-thread-1] -  Using ResourceCalculatorProcessTree : [ ]
2015-05-11 00:08:19 INFO  [ReduceTask.pool-6-thread-1] - Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@16769723
2015-05-11 00:08:19 INFO  [MergeManagerImpl.pool-6-thread-1] - MergerManager: memoryLimit=177399392, maxSingleShuffleLimit=44349848, mergeThreshold=117083600, ioSortFactor=10, memToMemMergeOutputsThreshold=10
2015-05-11 00:08:19 INFO  [EventFetcher.EventFetcher for fetching Map Completion Events] - attempt_local519626586_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
2015-05-11 00:08:19 INFO  [LocalFetcher.localfetcher#1] - localfetcher#1 about to shuffle output of map attempt_local519626586_0001_m_000000_0 decomp: 50 len: 54 to MEMORY
2015-05-11 00:08:19 INFO  [InMemoryMapOutput.localfetcher#1] - Read 50 bytes from map-output for attempt_local519626586_0001_m_000000_0
2015-05-11 00:08:19 INFO  [MergeManagerImpl.localfetcher#1] - closeInMemoryFile -> map-output of size: 50, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->50
2015-05-11 00:08:19 INFO  [EventFetcher.EventFetcher for fetching Map Completion Events] - EventFetcher is interrupted.. Returning
2015-05-11 00:08:19 INFO  [LocalJobRunner.pool-6-thread-1] - 1 / 1 copied.
2015-05-11 00:08:19 INFO  [MergeManagerImpl.pool-6-thread-1] - finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
2015-05-11 00:08:19 INFO  [Merger.pool-6-thread-1] - Merging 1 sorted segments
2015-05-11 00:08:19 INFO  [Merger.pool-6-thread-1] - Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2015-05-11 00:08:19 INFO  [MergeManagerImpl.pool-6-thread-1] - Merged 1 segments, 50 bytes to disk to satisfy reduce memory limit
2015-05-11 00:08:19 INFO  [MergeManagerImpl.pool-6-thread-1] - Merging 1 files, 54 bytes from disk
2015-05-11 00:08:19 INFO  [MergeManagerImpl.pool-6-thread-1] - Merging 0 segments, 0 bytes from memory into reduce
2015-05-11 00:08:19 INFO  [Merger.pool-6-thread-1] - Merging 1 sorted segments
2015-05-11 00:08:19 INFO  [Merger.pool-6-thread-1] - Down to the last merge-pass, with 1 segments left of total size: 46 bytes
2015-05-11 00:08:19 INFO  [LocalJobRunner.pool-6-thread-1] - 1 / 1 copied.
2015-05-11 00:08:19 INFO  [deprecation.pool-6-thread-1] - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2015-05-11 00:08:19 INFO  [Job.main] - Job job_local519626586_0001 running in uber mode : false
2015-05-11 00:08:19 INFO  [Job.main] -  map 100% reduce 0%
2015-05-11 00:08:19 INFO  [Task.pool-6-thread-1] - Task:attempt_local519626586_0001_r_000000_0 is done. And is in the process of committing
2015-05-11 00:08:19 INFO  [LocalJobRunner.pool-6-thread-1] - 1 / 1 copied.
2015-05-11 00:08:19 INFO  [Task.pool-6-thread-1] - Task attempt_local519626586_0001_r_000000_0 is allowed to commit now
2015-05-11 00:08:19 INFO  [FileOutputCommitter.pool-6-thread-1] - Saved output of task 'attempt_local519626586_0001_r_000000_0' to hdfs://cluster1/home/hdfs/test/out/6876390710620561863/_temporary/0/task_local519626586_0001_r_000000
2015-05-11 00:08:19 INFO  [LocalJobRunner.pool-6-thread-1] - reduce > reduce
2015-05-11 00:08:19 INFO  [Task.pool-6-thread-1] - Task 'attempt_local519626586_0001_r_000000_0' done.
2015-05-11 00:08:19 INFO  [LocalJobRunner.pool-6-thread-1] - Finishing task: attempt_local519626586_0001_r_000000_0
2015-05-11 00:08:19 INFO  [LocalJobRunner.Thread-14] - reduce task executor complete.
2015-05-11 00:08:20 INFO  [Job.main] -  map 100% reduce 100%
2015-05-11 00:08:20 INFO  [Job.main] - Job job_local519626586_0001 completed successfully
2015-05-11 00:08:20 INFO  [Job.main] - Counters: 38
    File System Counters
        FILE: Number of bytes read=77813788
        FILE: Number of bytes written=78928898
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=48
        HDFS: Number of bytes written=24
        HDFS: Number of read operations=13
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
    Map-Reduce Framework
        Map input records=2
        Map output records=12
        Map output bytes=72
        Map output materialized bytes=54
        Input split bytes=108
        Combine input records=12
        Combine output records=6
        Reduce input groups=6
        Reduce shuffle bytes=54
        Reduce input records=6
        Reduce output records=6
        Spilled Records=12
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=53
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
        Total committed heap usage (bytes)=241442816
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=24
    File Output Format Counters
        Bytes Written=24

6. Summary

  The key point is that the deployment architecture itself is sound and the overall approach is correct; the problem lies entirely in how the application is packaged, so pay special attention at packaging time. The same caution applies when exporting the JAR from an IDE (check that the required dependencies are actually included) and when using the common third-party fat-JAR packaging tools.

7. Closing Remarks

  That's all for this post. If you run into any problems while studying this material, feel free to discuss them in the group or send me an email; I'll do my best to answer. Best wishes!

Contact:
Email: smartloli.org@gmail.com
Twitter: https://twitter.com/smartloli
QQ group (Hadoop community 1): 424769183
Note: when applying to join the group, please include your reason (name + company/school) so the admins can approve the request. Thanks!

Love life, enjoy coding, and best wishes to you all!

This post is reproduced from the cnblogs blog 哥不是小萝莉 (original link: http://www.cnblogs.com/smartloli/). Please contact the original author before reposting.
