1. Key strengths of HDFS

  • Hadoop, including HDFS, is well suited for distributed storage and distributed processing using commodity hardware. It is fault tolerant, scalable, and extremely simple to expand. MapReduce, well known for its simplicity and applicability for a large set of distributed applications, is an integral part of Hadoop. (distributed storage and processing)
  • HDFS is highly configurable with a default configuration well suited for many installations. Most of the time, configuration needs to be tuned only for very large clusters. (sensible default configuration)
  • Hadoop is written in Java and is supported on all major platforms. (cross-platform)
  • Hadoop supports shell-like commands to interact with HDFS directly. (shell-like commands)
  • The NameNode and Datanodes have built-in web servers that make it easy to check the current status of the cluster. (built-in web UI for checking the cluster)
  • New features and improvements are regularly implemented in HDFS. The following is a subset of useful features in HDFS:
    • File permissions and authentication.
    • Rack awareness: to take a node's physical location into account while scheduling tasks and allocating storage.
    • Safemode: an administrative mode for maintenance.
    • fsck: a utility to diagnose health of the file system, to find missing files or blocks.
    • fetchdt: a utility to fetch DelegationToken and store it in a file on the local system.
    • Balancer: tool to balance the cluster when the data is unevenly distributed among DataNodes.
    • Upgrade and rollback: after a software upgrade, it is possible to rollback to HDFS' state before the upgrade in case of unexpected problems.
    • Secondary NameNode: performs periodic checkpoints of the namespace and helps keep the size of file containing log of HDFS modifications within certain limits at the NameNode.
    • Checkpoint node: performs periodic checkpoints of the namespace and helps minimize the size of the log stored at the NameNode containing changes to the HDFS. Replaces the role previously filled by the Secondary NameNode, though is not yet battle hardened. The NameNode allows multiple Checkpoint nodes simultaneously, as long as there are no Backup nodes registered with the system.
    • Backup node: An extension to the Checkpoint node. In addition to checkpointing it also receives a stream of edits from the NameNode and maintains its own in-memory copy of the namespace, which is always in sync with the active NameNode namespace state. Only one Backup node may be registered with the NameNode at once.
      Source: http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html
2. Web UI
The NameNode web UI is served on port 50070 by default (Hadoop 2.x).
3. Basic HDFS administration commands
bin/hdfs dfsadmin -<option>
  • -report: reports basic statistics of HDFS. Some of this information is also available on the NameNode front page.
  • -safemode: though usually not required, an administrator can manually enter or leave Safemode.
  • -finalizeUpgrade: removes the previous backup of the cluster made during the last upgrade.
  • -refreshNodes: Updates the namenode with the set of datanodes allowed to connect to the namenode. Namenodes re-read datanode hostnames in the files defined by dfs.hosts and dfs.hosts.exclude. Hosts defined in dfs.hosts are the datanodes that are part of the cluster. If there are entries in dfs.hosts, only the hosts in it are allowed to register with the namenode. Entries in dfs.hosts.exclude are datanodes that need to be decommissioned. Datanodes complete decommissioning when all the replicas from them are replicated to other datanodes. Decommissioned nodes are not automatically shutdown and are not chosen for writing for new replicas.
  • -printTopology: Print the topology of the cluster. Display a tree of racks and datanodes attached to the racks as viewed by the NameNode.
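The -refreshNodes workflow depends on the include/exclude files being configured. A minimal hdfs-site.xml sketch (the file paths below are placeholders, not Hadoop defaults):

```xml
<!-- hdfs-site.xml on the NameNode; paths below are placeholders -->
<property>
  <name>dfs.hosts</name>
  <!-- datanodes listed in this file are allowed to register with the namenode -->
  <value>/etc/hadoop/conf/dfs.hosts</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <!-- datanodes listed in this file will be decommissioned -->
  <value>/etc/hadoop/conf/dfs.hosts.exclude</value>
</property>
```

After adding a hostname to the exclude file, run bin/hdfs dfsadmin -refreshNodes; the node is fully decommissioned once all of its replicas have been re-replicated elsewhere.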
4. Secondary NameNode
The NameNode records modifications to the file system by appending them to an edits log on its local filesystem. When the NameNode starts, it first reads the HDFS state from an image file (fsimage), then applies the modifications from the edits log, writes the merged state back to a new fsimage, and opens a fresh edits log for new modifications. Because the NameNode merges the image and the edits log only at startup, the log can grow very large over time, and the next restart can take a long time while all those edits are merged.
The Secondary NameNode periodically merges the edits log from the NameNode and keeps the edits log within a size limit. It usually runs on a different machine from the primary NameNode, but that machine should have hardware comparable to the NameNode's, since its memory requirements are of the same order.
The checkpoint process on the Secondary NameNode is controlled by two configuration parameters:
  • dfs.namenode.checkpoint.period, set to 1 hour by default, specifies the maximum delay between two consecutive checkpoints, and
  • dfs.namenode.checkpoint.txns, set to 1 million by default, defines the number of uncheckpointed transactions on the NameNode which will force an urgent checkpoint, even if the checkpoint period has not been reached.
dfs.namenode.checkpoint.period — the maximum interval between two consecutive checkpoints (default: 1 hour)
dfs.namenode.checkpoint.txns — the number of un-checkpointed transactions that forces a checkpoint even if the period has not elapsed (default: 1 million; e.g., if 1 million edits accumulate in 10 minutes, a checkpoint runs after those 10 minutes rather than waiting the full hour)
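The interaction of the two parameters is an either/or trigger, which can be sketched as follows (illustrative Python, not Hadoop's actual implementation; the function name is ours):

```python
# Sketch of the checkpoint trigger rule: a checkpoint starts when EITHER
# the time limit OR the transaction limit is reached, whichever comes first.

def should_checkpoint(seconds_since_last: int, uncheckpointed_txns: int,
                      period: int = 3600, txn_limit: int = 1_000_000) -> bool:
    """Mirror dfs.namenode.checkpoint.period (default 3600 s) and
    dfs.namenode.checkpoint.txns (default 1,000,000)."""
    return seconds_since_last >= period or uncheckpointed_txns >= txn_limit

# 10 minutes elapsed, but 1M edits accumulated -> checkpoint fires early
print(should_checkpoint(600, 1_000_000))   # True
# 10 minutes elapsed, few edits -> keep waiting
print(should_checkpoint(600, 42))          # False
```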
5. Checkpoint node
Very similar to the Secondary NameNode; the difference is that the Checkpoint node downloads the fsimage and edits log from the active NameNode, merges them locally, and then uploads the new image back to the active NameNode.
dfs.namenode.backup.address — the node's address
dfs.namenode.backup.http-address — the node's HTTP address (host:port)
dfs.namenode.checkpoint.period and dfs.namenode.checkpoint.txns control its checkpointing in the same way.
The Checkpoint node fills essentially the same role as the Secondary NameNode and is intended as its newer replacement; it is started with bin/hdfs namenode -checkpoint.
6. Backup node
The Backup node provides the same checkpointing function as the Checkpoint node, but it also receives a real-time stream of namespace edits from the NameNode and applies them to its own in-memory copy of the namespace (note: the NameNode itself does not merge edits into its image until restart), so the Backup node is an always-in-sync replica of the NameNode's namespace state.
Currently a cluster can have only one Backup node; support for multiple Backup nodes may be added in the future. Once a Backup node is registered, no Checkpoint node can register with the cluster. The Backup node uses the same configuration as the Checkpoint node (dfs.namenode.backup.address and dfs.namenode.backup.http-address) and is started with bin/hdfs namenode -backup.
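A sketch of the Backup/Checkpoint node address settings in hdfs-site.xml (the hostname is a placeholder; 50100 and 50105 are the documented default ports):

```xml
<!-- hdfs-site.xml on the Backup (or Checkpoint) node; hostname is a placeholder -->
<property>
  <name>dfs.namenode.backup.address</name>
  <value>backup-host.example.com:50100</value>
</property>
<property>
  <name>dfs.namenode.backup.http-address</name>
  <value>backup-host.example.com:50105</value>
</property>
```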
7. Import checkpoint
If all copies of the fsimage and edits log are lost, the latest checkpoint image can be imported from a Checkpoint node. This requires two configuration parameters and a startup flag:
dfs.namenode.name.dir — the NameNode metadata directory (must be empty before the import)
dfs.namenode.checkpoint.dir — the directory holding the checkpoint image uploaded by the Checkpoint node
Start the NameNode with the -importCheckpoint flag.
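A sketch of the import-checkpoint setup, assuming placeholder directory paths:

```xml
<!-- hdfs-site.xml; directory paths below are placeholders -->
<property>
  <!-- must be empty (or non-existent) before the import -->
  <name>dfs.namenode.name.dir</name>
  <value>/data/hdfs/name</value>
</property>
<property>
  <!-- where the Checkpoint node's uploaded image lives -->
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/data/hdfs/namesecondary</value>
</property>
```

The NameNode is then started once with bin/hdfs namenode -importCheckpoint; it reads the checkpoint from dfs.namenode.checkpoint.dir and writes it into dfs.namenode.name.dir.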
8. Balancer
HDFS data is not always placed uniformly across the DataNodes in a cluster. Block placement takes the following into account:
  • Policy to keep one of the replicas of a block on the same node as the node that is writing the block.
  • Need to spread different replicas of a block across the racks so that the cluster can survive the loss of a whole rack.
  • One of the replicas is usually placed on the same rack as the node writing to the file so that cross-rack network I/O is reduced.
  • Spread HDFS data uniformly across the DataNodes in the cluster.
    Source: http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer
9. Rack awareness (omitted)
10. Safemode
When the cluster starts up, the NameNode loads the image and edits log and then waits for the DataNodes to report their blocks, so it does not begin serving the cluster immediately. During this phase the NameNode is in Safemode and the file system is effectively read-only. Once enough block reports have arrived, the NameNode leaves Safemode automatically. Safemode can also be entered or left manually with bin/hdfs dfsadmin -safemode enter|leave|get.
11. fsck
The fsck command checks files for inconsistencies such as missing blocks. Unlike a traditional fsck, it does not repair the errors it finds, and by default it skips files that are currently open. fsck is not a Hadoop shell command; it is run as bin/hdfs fsck <path>.
12. fetchdt
HDFS provides the fetchdt command to fetch a delegation token and store it in a file on the local filesystem. The token can later be used by a client on a non-secure machine to access a secure server (such as the NameNode). (Details omitted.)
13. Recovery mode
If the only copy of the NameNode metadata is lost or corrupted, Recovery mode can salvage most of the data. Start the NameNode with namenode -recover and answer the prompts about which data to recover; with the -force option, the NameNode picks automatically instead of prompting.
14. Upgrade and rollback
(Omitted.)
15. File permissions and security
HDFS file permissions are similar to those in Linux. The user identity that starts the NameNode is treated as the HDFS superuser.
16. Scalability
HDFS supports clusters of thousands of nodes. Each cluster has a single NameNode, so the amount of memory on the NameNode is the main limit on cluster size.

Reposted from: https://www.cnblogs.com/skyrim/p/7455503.html
