After several days of tinkering I finally got the HDFS Java API calls working. The process was painful enough that I almost gave up, but I stuck with it, and the experience feels worth recording before I forget it. I used version 2.10.0; if you ask why that version, the honest answer is that I don't know — it simply happened to be the first entry in the download list on the official site.
1. Download

wget https://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.10.0/hadoop-2.10.0.tar.gz

2. Installation
I installed on Linux (I tried Windows but could not get it to run). "Installation" here just means unpacking the tarball:

tar -zxvf hadoop-2.10.0.tar.gz

3. Deployment
Just follow the official single-node guide: https://hadoop.apache.org/docs/r2.10.0/hadoop-project-dist/hadoop-common/SingleCluster.html
(1) Set JAVA_HOME (in etc/hadoop/hadoop-env.sh). Java is usually installed already, so this step can often be skipped.

# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest

(2) Check that the unpacked files work:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ ./bin/hadoop

(3) Try a standalone (local) run:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ mkdir input
dave@ubuntu:~/d/opt/hadoop-2.10.0$ cp etc/hadoop/*.xml input
dave@ubuntu:~/d/opt/hadoop-2.10.0$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar grep input output 'dfs[a-z.]+'
dave@ubuntu:~/d/opt/hadoop-2.10.0$ cat output/*

(4) Pseudo-distributed operation

Configuration:
etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Set up passphrase-less SSH to localhost (without this, every start and stop will prompt for a password):

dave@ubuntu:~/d/opt/hadoop-2.10.0$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
dave@ubuntu:~/d/opt/hadoop-2.10.0$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
dave@ubuntu:~/d/opt/hadoop-2.10.0$ chmod 0600 ~/.ssh/authorized_keys
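
The official guide suggests checking that you can now ssh to localhost without a passphrase before moving on:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ ssh localhost

If it drops you into a shell without asking for a password, the key setup worked.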

Format the filesystem:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ bin/hdfs namenode -format

Start the NameNode and DataNode daemons:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ sbin/start-dfs.sh
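
To make sure the daemons actually came up (a check of my own, not part of the original steps), jps should list a NameNode, a DataNode and a SecondaryNameNode process; the daemon logs go to the logs/ directory if anything is missing:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ jps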

Try opening the web UI in a browser:
http://192.168.137.162:50070 (the official guide uses localhost; I opened it from another Windows machine instead — the same machine I later wrote the Java code on — to make sure it was reachable from there, and it was.)
Everything looked fine:

Try a few operations (run on the host where HDFS is deployed):

dave@ubuntu:~/d/opt/hadoop-2.10.0$ ./bin/hdfs dfs -mkdir /user
dave@ubuntu:~/d/opt/hadoop-2.10.0$ ./bin/hdfs dfs -mkdir /user/dave
dave@ubuntu:~/d/opt/hadoop-2.10.0$ ./bin/hdfs dfs -put etc/hadoop input
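
Before switching to the browser you can also list what was uploaded from the command line (my own extra check, not in the original walkthrough):

dave@ubuntu:~/d/opt/hadoop-2.10.0$ ./bin/hdfs dfs -ls input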

Check the uploaded files through the browser:

Everything went surprisingly smoothly, so let's try the same operations through the Java API.

4. Calling the Java API
Create a new Maven project and add two dependencies:

<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.10.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.10.0</version>
</dependency>

Create a test class:

package com.luoye.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

public class Application {
    public static void main(String[] args) {
        Configuration configuration = new Configuration();
        configuration.set("fs.defaultFS", "hdfs://192.168.137.162:9000");
        try {
            FileSystem fileSystem = FileSystem.newInstance(configuration);
            // upload a file
            fileSystem.mkdirs(new Path("/dave"));
            fileSystem.copyFromLocalFile(new Path("C:\\Users\\dave\\Desktop\\suwei\\terminal.ini"),
                    new Path("/dave/terminal.ini"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Note that fs.defaultFS has to point to the IP address of the host running the NameNode, since the client is not running locally.
Everything is ready, so let's run it; surely nothing can go wrong this time:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
java.net.ConnectException: Call From DESKTOP-JE4MI3O/169.254.47.191 to 192.168.137.162:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:824)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:754)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1544)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:587)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2475)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2450)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1242)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1239)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1239)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1231)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2216)
    at com.luoye.hadoop.Application.main(Application.java:17)
Caused by: java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:805)
    at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:423)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1601)
    at org.apache.hadoop.ipc.Client.call(Client.java:1432)
    ... 24 more

But reality always deals its hardest blow just when you think everything is fine.
The program threw an exception, and checking the web UI showed the file had not been uploaded either.
Why did it fail? Look closely at the error message:
java.net.ConnectException: Call From DESKTOP-JE4MI3O/169.254.47.191 to 192.168.137.162:9000 failed on connection
It cannot connect!
But why? Everything worked fine when operating directly on the deployment host.
Could the address that port 9000 is bound to be the problem?
Back to the deployment host to check:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ netstat -anp|grep LISTEN

Sure enough:

tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      10958/java

All right — is there a setting that stops it from binding to 127.0.0.1? Time to dig through the official configuration reference:
http://hadoop.apache.org/docs/r2.10.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
Persistence paid off; I finally found this entry:

<property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value></value>
    <description>
        The actual address the RPC server will bind to. If this optional address is
        set, it overrides only the hostname portion of dfs.namenode.rpc-address.
        It can also be specified per name node or name service for HA/Federation.
        This is useful for making the name node listen on all interfaces by
        setting it to 0.0.0.0.
    </description>
</property>

So let's set that property in hdfs-site.xml on the deployment host (the exact snippet I added is shown below).
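For reference, the block I added looks like this; 0.0.0.0 is the value the description above recommends for listening on all interfaces:

<property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
</property>
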
Then restart:

dave@ubuntu:~/d/opt/hadoop-2.10.0$ ./sbin/stop-dfs.sh
dave@ubuntu:~/d/opt/hadoop-2.10.0$ ./sbin/start-dfs.sh

Good. Now let's try the Java API call again:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /dave/terminal.ini could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2591)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:880)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:517)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:994)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:922)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1893)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2833)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)

Still an error!
I had no idea what this one meant, so back to searching the web for clues.
Piecing together scattered bits of information, the cause is roughly this: when the client receives the list of DataNodes from the NameNode, it checks whether each DataNode is reachable and drops any that are not from the list. The error saying the file could only be replicated to 0 nodes therefore means the client could not reach the DataNode at all.
Strange. Why would the DataNode be unreachable?
It was maddening. I searched around for answers and came up with nothing.
If only the client printed more information, I thought.
Then I noticed the first few lines of the output, which complain that log4j is not configured. Fine, let's configure it: I put a log4j.xml under the resources directory (a minimal example follows).
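This is only a sketch — a plain log4j 1.2 configuration that raises the root logger to DEBUG and writes to the console; any equivalent log4j.xml or log4j.properties would do, and the pattern simply mirrors the output format you can see further down:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <!-- Console appender; the pattern matches the DEBUG lines shown below -->
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern"
                   value="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %c{2} %L %M - %m%n"/>
        </layout>
    </appender>
    <!-- DEBUG on the root logger so the HDFS client traces its connections -->
    <root>
        <priority value="DEBUG"/>
        <appender-ref ref="console"/>
    </root>
</log4j:configuration>

Running the program again with this in place, I saw output like this: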

2020-05-14 02:19:06,573 DEBUG ipc.ProtobufRpcEngine 253 invoke - Call: addBlock took 4ms
2020-05-14 02:19:06,580 DEBUG hdfs.DataStreamer 1686 createBlockOutputStream - pipeline = [DatanodeInfoWithStorage[127.0.0.1:50010,DS-a1ac1217-f93d-486f-bb01-3183e58bad87,DISK]]
2020-05-14 02:19:06,580 DEBUG hdfs.DataStreamer 255 createSocketForPipeline - Connecting to datanode 127.0.0.1:50010
2020-05-14 02:19:07,600 INFO  hdfs.DataStreamer 1763 createBlockOutputStream - Exception in createBlockOutputStream
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:259)
    at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1699)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2020-05-14 02:19:07,601 WARN  hdfs.DataStreamer 1658 nextBlockOutputStream - Abandoning BP-111504192-127.0.1.1-1589381622124:blk_1073741864_1040

Wait — why is it trying to connect to 127.0.0.1?
Some configuration must be missing. Back to the official configuration reference once more.
After much digging I could not find any setting that tells the client to use the deployment host's IP address for the DataNode, but one option looked promising:

<property>
    <name>dfs.client.use.datanode.hostname</name>
    <value>false</value>
    <description>Whether clients should use datanode hostnames when
        connecting to datanodes.
    </description>
</property>

So clients can connect using the DataNode's hostname instead of the address the NameNode reports. Let's try that.
First, add this to the client code:

configuration.set("dfs.client.use.datanode.hostname","true");

Then add a hostname-to-IP mapping to the hosts file of the machine running the client (on the deployment host itself the mapping is already in place):

192.168.137.162 ubuntu

Here ubuntu is the hostname of my deployment host and 192.168.137.162 is its IP address.
OK, try again:

2020-05-14 02:29:04,888 DEBUG hdfs.DataStreamer 873 waitForAckedSeqno - Waiting for ack for: 1
2020-05-14 02:29:04,900 DEBUG ipc.Client 1138 run - IPC Client (428566321) connection to /192.168.137.162:9000 from dave sending #3 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock
2020-05-14 02:29:04,903 DEBUG ipc.Client 1192 receiveRpcResponse - IPC Client (428566321) connection to /192.168.137.162:9000 from dave got value #3
2020-05-14 02:29:04,903 DEBUG ipc.ProtobufRpcEngine 253 invoke - Call: addBlock took 3ms
2020-05-14 02:29:04,913 DEBUG hdfs.DataStreamer 1686 createBlockOutputStream - pipeline = [DatanodeInfoWithStorage[127.0.0.1:50010,DS-a1ac1217-f93d-486f-bb01-3183e58bad87,DISK]]
2020-05-14 02:29:04,913 DEBUG hdfs.DataStreamer 255 createSocketForPipeline - Connecting to datanode ubuntu:50010
2020-05-14 02:29:04,915 DEBUG hdfs.DataStreamer 267 createSocketForPipeline - Send buf size 65536
2020-05-14 02:29:04,915 DEBUG sasl.SaslDataTransferClient 239 checkTrustAndSend - SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2020-05-14 02:29:04,916 DEBUG ipc.Client 1138 run - IPC Client (428566321) connection to /192.168.137.162:9000 from dave sending #4 org.apache.hadoop.hdfs.protocol.ClientProtocol.getServerDefaults
2020-05-14 02:29:04,935 DEBUG ipc.Client 1192 receiveRpcResponse - IPC Client (428566321) connection to /192.168.137.162:9000 from dave got value #4
2020-05-14 02:29:04,936 DEBUG ipc.ProtobufRpcEngine 253 invoke - Call: getServerDefaults took 21ms
2020-05-14 02:29:04,943 DEBUG sasl.SaslDataTransferClient 279 send - SASL client skipping handshake in unsecured configuration for addr = ubuntu/192.168.137.162, datanodeId = DatanodeInfoWithStorage[127.0.0.1:50010,DS-a1ac1217-f93d-486f-bb01-3183e58bad87,DISK]
2020-05-14 02:29:05,057 DEBUG hdfs.DataStreamer 617 initDataStreaming - nodes [DatanodeInfoWithStorage[127.0.0.1:50010,DS-a1ac1217-f93d-486f-bb01-3183e58bad87,DISK]] storageTypes [DISK] storageIDs [DS-a1ac1217-f93d-486f-bb01-3183e58bad87]
2020-05-14 02:29:05,058 DEBUG hdfs.DataStreamer 766 run - DataStreamer block BP-111504192-127.0.1.1-1589381622124:blk_1073741865_1041 sending packet packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 456
2020-05-14 02:29:05,118 DEBUG hdfs.DataStreamer 1095 run - DFSClient seqno: 0 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0
2020-05-14 02:29:05,119 DEBUG hdfs.DataStreamer 766 run - DataStreamer block BP-111504192-127.0.1.1-1589381622124:blk_1073741865_1041 sending packet packet seqno: 1 offsetInBlock: 456 lastPacketInBlock: true lastByteOffsetInBlock: 456
2020-05-14 02:29:05,124 DEBUG hdfs.DataStreamer 1095 run - DFSClient seqno: 1 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0

At last, no more errors, and checking in the browser, the file is there.
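
You can also verify from code instead of the browser. This fragment is just a sketch meant to sit inside the same try block as the upload above (it needs one extra import, org.apache.hadoop.fs.FileStatus):

// List what ended up under /dave; requires: import org.apache.hadoop.fs.FileStatus;
for (FileStatus status : fileSystem.listStatus(new Path("/dave"))) {
    System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
}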

One last note: it is also worth setting one more property in etc/hadoop/hdfs-site.xml on the deployment host:

<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
    <description>
        If "true", enable permission checking in HDFS.
        If "false", permission checking is turned off,
        but all other behavior is unchanged.
        Switching from one parameter value to the other does not change the mode,
        owner or group of files or directories.
    </description>
</property>

This switches permission checking off; otherwise some file operations may fail with permission errors.
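
If you would rather keep permission checking enabled, an alternative (my own suggestion, not something covered above) is to have the client identify itself as the user that owns the target directories, using the FileSystem.get overload that takes a user name:

// Sketch: connect as the remote user "dave" instead of disabling permission checks.
// Requires: import java.net.URI; this overload also throws InterruptedException.
FileSystem fileSystem = FileSystem.get(
        new URI("hdfs://192.168.137.162:9000"),   // NameNode URI
        configuration,                             // same Configuration as above
        "dave");                                   // user the client acts as (simple auth)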

Finally, most of these problems came down to my limited understanding of HDFS, so clearly there is more studying to do.
