HDFS Basic Operations: the Python Interface
HDFS Operations Manual
The hdfscli command line
# hdfscli --help
HdfsCLI: a command line interface for HDFS.
Usage:
hdfscli [interactive] [-a ALIAS] [-v...]
hdfscli download [-fsa ALIAS] [-v...] [-t THREADS] HDFS_PATH LOCAL_PATH
hdfscli upload [-sa ALIAS] [-v...] [-A | -f] [-t THREADS] LOCAL_PATH HDFS_PATH
hdfscli -L | -V | -h
Commands:
  download                      Download a file or folder from HDFS. If a
                                single file is downloaded, - can be
                                specified as LOCAL_PATH to stream it to
                                standard out.
  interactive                   Start the client and expose it via the python
                                interpreter (using IPython if available).
  upload                        Upload a file or folder to HDFS. - can be
                                specified as LOCAL_PATH to read from standard
                                in.
Arguments:
  HDFS_PATH                     Remote HDFS path.
  LOCAL_PATH                    Path to local file or directory.
Options:
  -A --append                   Append data to an existing file. Only supported
                                if uploading a single file or from standard in.
  -L --log                      Show path to current log file and exit.
  -V --version                  Show version and exit.
  -a ALIAS --alias=ALIAS        Alias of namenode to connect to.
  -f --force                    Allow overwriting any existing files.
  -s --silent                   Don't display progress status.
  -t THREADS --threads=THREADS  Number of threads to use for parallelization.
                                0 allocates a thread per file. [default: 0]
  -v --verbose                  Enable log output. Can be specified up to three
                                times (increasing verbosity each time).
Examples:
hdfscli -a prod /user/foo
hdfscli download features.avro dat/
hdfscli download logs/1987-03-23 - >>logs
hdfscli upload -f - data/weights.tsv <weights.tsv
HdfsCLI exits with return status 1 if an error occurred and 0 otherwise.
To use hdfscli, first create its default configuration file:
# cat ~/.hdfscli.cfg
[global]
default.alias = dev

[dev.alias]
url = http://hadoop:50070
user = root
Client classes available in Python:
InsecureClient (the default)
TokenClient
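Both can be constructed directly. A minimal sketch, reusing the namenode URL from the configuration above (the delegation token value is a placeholder, not a working credential):

from hdfs import InsecureClient, TokenClient

# InsecureClient authenticates with a plain user name (no real security).
client = InsecureClient("http://hadoop:50070", user="root")

# TokenClient authenticates with a Hadoop delegation token instead of a user.
token_client = TokenClient("http://hadoop:50070", token="<delegation-token>")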
Uploading and downloading files
Upload a file or folder with hdfscli (here, Hadoop's configuration folder is uploaded to /hdfs):
# hdfscli upload --alias=dev -f /hadoop-2.4.1/etc/hadoop/ /hdfs
Download the HDFS /logs directory into the local /root/test/ directory:
# hdfscli download /logs /root/test/
hdfscli interactive mode
[root@hadoop ~]# hdfscli --alias=dev
Welcome to the interactive HDFS python shell.
The HDFS client is available as `CLIENT`.
>>> CLIENT.list("/")
[u'Demo', u'hdfs', u'logs', u'logss']
>>> CLIENT.status("/Demo")
{u'group': u'supergroup', u'permission': u'755', u'blockSize': 0,
 u'accessTime': 0, u'pathSuffix': u'', u'modificationTime': 1495123035501L,
 u'replication': 0, u'length': 0, u'childrenNum': 1, u'owner': u'root',
 u'type': u'DIRECTORY', u'fileId': 16389}
>>> CLIENT.delete("logs/install.log")
False
>>> CLIENT.delete("/logs/install.log")
True
Bindings for the Python API
Initializing a client
1. Import the client class and call its constructor:
>>> from hdfs import InsecureClient
>>> client = InsecureClient("http://172.10.236.21:50070", user='ann')
>>> client.list("/")
[u'Demo', u'hdfs', u'logs', u'logss']
2. Import the Config class, load an existing configuration file, and create a client from an existing alias. By default the configuration is read from ~/.hdfscli.cfg:
>>> from hdfs import Config
>>> client = Config().get_client("dev")
>>> client.list("/")
[u'Demo', u'hdfs', u'logs', u'logss']
Reading files
The read() method reads a file from HDFS. It must be used inside a with block so the connection is always closed properly:
>>> with client.read("/logs/yarn-env.sh", encoding="utf-8") as reader:
...     features = reader.read()
...
>>> print features
Passing the chunk_size parameter returns a generator that streams the file's contents in chunks:
>>> with client.read("/logs/yarn-env.sh", chunk_size=1024) as reader:
...     for chunk in reader:
...         print chunk
...
The delimiter parameter likewise returns a generator; the file's contents are split on the given delimiter:
>>> import time
>>> with client.read("/logs/yarn-env.sh", encoding="utf-8", delimiter="\n") as reader:
...     for line in reader:
...         time.sleep(1)
...         print line
Writing files
The write() method writes a file to HDFS (here, lines of the local file kong.txt are written to /logs/kongtest.txt on HDFS):
>>> with open("/root/test/kong.txt") as reader, client.write("/logs/kongtest.txt") as writer:
...     for line in reader:
...         if line.startswith("-"):
...             writer.write(line)
HDFS Basic Operations: the Python Interface
Installing the hdfs package
Install it with pip: pip install hdfs.
Viewing the HDFS directory tree
[root@hadoop hadoop]# hdfs dfs -ls -R /
drwxr-xr-x - root supergroup 0 2017-05-18 23:57 /Demo
-rw-r--r-- 1 root supergroup 3494 2017-05-18 23:57 /Demo/hadoop-env.sh
drwxr-xr-x - root supergroup 0 2017-05-18 19:01 /logs
-rw-r--r-- 1 root supergroup 2223 2017-05-18 19:01 /logs/anaconda-ks.cfg
-rw-r--r-- 1 root supergroup 57162 2017-05-18 18:32 /logs/install.log
Creating an HDFS client instance
#!/usr/bin/env python
# -*- coding:utf-8 -*-
__Author__ = 'kongZhaGen'
import hdfs
client = hdfs.Client("http://172.10.236.21:50070")
list(): returns the names of the files contained in a remote folder; raises an error if the path does not exist.
hdfs_path: path to the remote folder
status: also return each file's status information
def list(self, hdfs_path, status=False):
    """Return names of files contained in a remote folder.

    :param hdfs_path: Remote path to a directory. If `hdfs_path` doesn't exist
      or points to a normal file, an :class:`HdfsError` will be raised.
    :param status: Also return each file's corresponding FileStatus_.
    """
Example:
print client.list("/", status=False)
Result:
[u'Demo', u'logs']
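When status=True is passed instead, each entry comes back as a (name, status) pair rather than a bare name; a short sketch of iterating over the pairs:

for name, info in client.list("/", status=True):
    # info is the FileStatus dict for each entry
    print name, info["type"], info["length"]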
status(): gets status information for a file or folder on HDFS.
hdfs_path: the remote path
strict:
False: return None if the remote path does not exist
True: raise an exception if the remote path does not exist
def status(self, hdfs_path, strict=True):
    """Get FileStatus_ for a file or folder on HDFS.

    :param hdfs_path: Remote path.
    :param strict: If `False`, return `None` rather than raise an exception if
      the path doesn't exist.

    .. _FileStatus: FS_
    .. _FS: http://hadoop.apache.org/docs/r1.0.4/webhdfs.html#FileStatus
    """
Example:
print client.status(hdfs_path="/Demoo", strict=False)
Result:
None
makedirs(): creates a directory on HDFS, recursively if necessary.
hdfs_path: remote directory path
permission: permissions to set on the newly created directory
def makedirs(self, hdfs_path, permission=None):
    """Create a remote directory, recursively if necessary.

    :param hdfs_path: Remote path. Intermediate directories will be created
      appropriately.
    :param permission: Octal permission to set on the newly created directory.
      These permissions will only be set on directories that do not already
      exist.

    This function currently has no return value as WebHDFS doesn't return a
    meaningful flag.
    """
Example:
To create HDFS directories from a remote client via a script, you need to modify hdfs-site.xml:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
Then restart HDFS:
stop-dfs.sh
start-dfs.sh
Create directories recursively:
client.makedirs("/data/rar/tmp", permission=755)
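A quick check with status(), described above, confirms the new directory exists:

print client.status("/data/rar/tmp")["type"]
Result:
DIRECTORY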
rename(): moves a file or folder.
hdfs_src_path: source path
hdfs_dst_path: destination path. If the path already exists and is a directory, the source is moved into it; if the path exists and is a file, an exception is raised.
def rename(self, hdfs_src_path, hdfs_dst_path):
    """Move a file or folder.

    :param hdfs_src_path: Source path.
    :param hdfs_dst_path: Destination path. If the path already exists and is
      a directory, the source will be moved into it. If the path exists and is
      a file, or if a parent destination directory is missing, this method will
      raise an :class:`HdfsError`.
    """
Example:
client.rename("/SRC_DATA", "/dest_data")
delete(): removes a file or directory from HDFS.
hdfs_path: path on HDFS
recursive: if the directory is non-empty, True deletes it recursively, while False raises an exception.
def delete(self, hdfs_path, recursive=False):
    """Remove a file or directory from HDFS.

    :param hdfs_path: HDFS path.
    :param recursive: Recursively delete files and directories. By default,
      this method will raise an :class:`HdfsError` if trying to delete a
      non-empty directory.

    This function returns `True` if the deletion was successful and `False` if
    no file or directory previously existed at `hdfs_path`.
    """
Example:
client.delete("/dest_data", recursive=True)
upload(): uploads a file or directory to the HDFS file system. If the target path already exists and is a directory, the file or directory is uploaded into it; otherwise the target is created.
def upload(self, hdfs_path, local_path, overwrite=False, n_threads=1,
           temp_dir=None, chunk_size=2 ** 16, progress=None, cleanup=True,
           **kwargs):
    """Upload a file or directory to HDFS.

    :param hdfs_path: Target HDFS path. If it already exists and is a
      directory, files will be uploaded inside.
    :param local_path: Local path to file or folder. If a folder, all the files
      inside of it will be uploaded (note that this implies that folders empty
      of files will not be created remotely).
    :param overwrite: Overwrite any existing file or directory.
    :param n_threads: Number of threads to use for parallelization. A value of
      `0` (or negative) uses as many threads as there are files.
    :param temp_dir: Directory under which the files will first be uploaded
      when `overwrite=True` and the final remote path already exists. Once the
      upload successfully completes, it will be swapped in.
    :param chunk_size: Interval in bytes by which the files will be uploaded.
    :param progress: Callback function to track progress, called every
      `chunk_size` bytes. It will be passed two arguments, the path to the
      file being uploaded and the number of bytes transferred so far. On
      completion, it will be called once with `-1` as second argument.
    :param cleanup: Delete any uploaded files if an error occurs during the
      upload.
    :param \*\*kwargs: Keyword arguments forwarded to :meth:`write`.

    On success, this method returns the remote upload path.
    """
Example:
>>> import hdfs
>>> client = hdfs.Client("http://172.10.236.21:50070")
>>> client.upload("/logs", "/root/training/jdk-7u75-linux-i586.tar.gz")
'/logs/jdk-7u75-linux-i586.tar.gz'
>>> client.list("/logs")
[u'anaconda-ks.cfg', u'install.log', u'jdk-7u75-linux-i586.tar.gz']
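The progress parameter documented above can be used to track large uploads; the callback receives the file path and the number of bytes transferred so far, and is called one final time with -1 when the file completes. A minimal sketch (the local path reuses kong.txt from earlier):

def report(path, nbytes):
    # nbytes is the running byte count, or -1 once the file finishes
    print path, nbytes

client.upload("/logs", "/root/test/kong.txt", overwrite=True, progress=report)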
content(): gets summary information about a file or directory on HDFS.
print client.content("/logs/install.log")
Result:
{u'spaceConsumed': 57162, u'quota': -1, u'spaceQuota': -1, u'length': 57162, u'directoryCount': 0, u'fileCount': 1}
write(): creates a file on HDFS. The data to write can be a string, a generator, or a file object.
def write(self, hdfs_path, data=None, overwrite=False, permission=None,
          blocksize=None, replication=None, buffersize=None, append=False,
          encoding=None):
    """Create a file on HDFS.

    :param hdfs_path: Path where to create file. The necessary directories will
      be created appropriately.
    :param data: Contents of file to write. Can be a string, a generator or a
      file object. The last two options will allow streaming upload (i.e.
      without having to load the entire contents into memory). If `None`, this
      method will return a file-like object and should be called using a `with`
      block (see below for examples).
    :param overwrite: Overwrite any existing file or directory.
    :param permission: Octal permission to set on the newly created file.
      Leading zeros may be omitted.
    :param blocksize: Block size of the file.
    :param replication: Number of replications of the file.
    :param buffersize: Size of upload buffer.
    :param append: Append to a file rather than create a new one.
    :param encoding: Encoding used to serialize data written.
    """
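Besides the file-like usage shown in the kongtest.txt example earlier, data can also be passed directly, and append=True adds to an existing file instead of creating a new one. A minimal sketch (the path is illustrative):

client.write("/logs/kongtest.txt", data="first line\n", overwrite=True)
client.write("/logs/kongtest.txt", data="second line\n", append=True)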
Original article: http://www.cnblogs.com/kongzhagen/p/6874111.html