1. Using distcp for HBase cold backup
--View the original data
[hdpusr01@hadoop1 bin]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016

hbase(main):001:0> list
TABLE                                                                                                                                                                  
member                                                                                                                                                                 
1 row(s) in 0.2860 seconds

=> ["member"]
hbase(main):002:0> scan 'member'
ROW                                        COLUMN+CELL                                                                                                                 
 rowkey-1                                  column=common:city, timestamp=1469089923121, value=beijing                                                                  
 rowkey-1                                  column=person:age, timestamp=1469089899438, value=20                                                                        
 rowkey-2                                  column=common:country, timestamp=1469090319844, value=china                                                                 
 rowkey-2                                  column=person:sex, timestamp=1469090247393, value=man                                                                       
2 row(s) in 0.0940 seconds

--Stop HBase
[hdpusr01@hadoop1 bin]$ stop-hbase.sh 
stopping hbase................

--View distcp help
[hdpusr01@hadoop1 bin]$ hadoop distcp
usage: distcp OPTIONS [source_path...] <target_path>
              OPTIONS
 -append                Reuse existing data in target files and append new
                        data to them if possible
 -async                 Should distcp execution be blocking
 -atomic                Commit all changes or none
 -bandwidth <arg>       Specify bandwidth per map in MB
 -delete                Delete from target, files missing in source
 -f <arg>               List of files that need to be copied
 -filelimit <arg>       (Deprecated!) Limit number of files copied to <= n
 -i                     Ignore failures during copy
 -log <arg>             Folder on DFS where distcp execution logs are
                        saved
 -m <arg>               Max number of concurrent maps to use for copy
 -mapredSslConf <arg>   Configuration for ssl config file, to use with
                        hftps://
 -overwrite             Choose to overwrite target files unconditionally,
                        even if they exist.
 -p <arg>               preserve status (rbugpcaxt)(replication,
                        block-size, user, group, permission,
                        checksum-type, ACL, XATTR, timestamps). If -p is
                        specified with no <arg>, then preserves
                        replication, block size, user, group, permission,
                        checksum type and timestamps. raw.* xattrs are
                        preserved when both the source and destination
                        paths are in the /.reserved/raw hierarchy (HDFS
                        only). raw.* xattrpreservation is independent of
                        the -p flag. Refer to the DistCp documentation for
                        more details.
 -sizelimit <arg>       (Deprecated!) Limit number of files copied to <= n
                        bytes
 -skipcrccheck          Whether to skip CRC checks between source and
                        target paths.
 -strategy <arg>        Copy strategy to use. Default is dividing work
                        based on file sizes
 -tmp <arg>             Intermediate work path to be used for atomic
                        commit
 -update                Update target, copying only missingfiles or
                        directories
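
The options above can also drive a cross-cluster cold backup. A minimal sketch, assuming a second cluster is reachable; the NameNode hostnames (nn-src, nn-dst) and the option values are placeholders, not taken from this environment:
# at most 20 maps, 10 MB/s per map, preserving user/group/permission bits
hadoop distcp -m 20 -bandwidth 10 -p ugp hdfs://nn-src:8020/hbase hdfs://nn-dst:8020/hbasebak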
                        
                        
--Back up HBase
[hdpusr01@hadoop1 bin]$ hdfs dfs -ls /
Found 3 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 02:39 /hbase
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user

[hdpusr01@hadoop1 ~]$ hadoop distcp /hbase /hbasebak
16/07/25 03:07:58 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[/hbase], targetPath=/hbasebak, targetPathExists=false, preserveRawXattrs=false}
16/07/25 03:07:58 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:08:00 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
16/07/25 03:08:00 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
16/07/25 03:08:02 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:08:03 INFO mapreduce.JobSubmitter: number of splits:13
16/07/25 03:08:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469176015126_0002
16/07/25 03:08:05 INFO impl.YarnClientImpl: Submitted application application_1469176015126_0002
16/07/25 03:08:05 INFO mapreduce.Job: The url to track the job: http://hadoop1:8088/proxy/application_1469176015126_0002/
16/07/25 03:08:05 INFO tools.DistCp: DistCp job-id: job_1469176015126_0002
16/07/25 03:08:05 INFO mapreduce.Job: Running job: job_1469176015126_0002
16/07/25 03:08:10 INFO mapreduce.Job: Job job_1469176015126_0002 running in uber mode : false
16/07/25 03:08:10 INFO mapreduce.Job:  map 0% reduce 0%
16/07/25 03:08:25 INFO mapreduce.Job:  map 4% reduce 0%
16/07/25 03:08:26 INFO mapreduce.Job:  map 10% reduce 0%
16/07/25 03:08:28 INFO mapreduce.Job:  map 14% reduce 0%
16/07/25 03:09:02 INFO mapreduce.Job:  map 15% reduce 0%
16/07/25 03:09:19 INFO mapreduce.Job:  map 16% reduce 0%
16/07/25 03:09:20 INFO mapreduce.Job:  map 38% reduce 0%
16/07/25 03:09:21 INFO mapreduce.Job:  map 46% reduce 0%
16/07/25 03:09:34 INFO mapreduce.Job:  map 54% reduce 0%
16/07/25 03:09:35 INFO mapreduce.Job:  map 92% reduce 0%
16/07/25 03:09:38 INFO mapreduce.Job:  map 100% reduce 0%
16/07/25 03:09:39 INFO mapreduce.Job: Job job_1469176015126_0002 completed successfully
16/07/25 03:09:39 INFO mapreduce.Job: Counters: 33
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=1404432
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=57852
                HDFS: Number of bytes written=41057
                HDFS: Number of read operations=378
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=99
...

[hdpusr01@hadoop1 ~]$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 02:39 /hbase
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbasebak
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user
[hdpusr01@hadoop1 ~]$ hdfs dfs -ls /hbasebak
Found 8 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/.tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/WALs
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/archive
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/corrupt
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:08 /hbasebak/data
-rw-r--r--   1 hdpusr01 supergroup         42 2016-07-25 03:09 /hbasebak/hbase.id
-rw-r--r--   1 hdpusr01 supergroup          7 2016-07-25 03:09 /hbasebak/hbase.version
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbasebak/oldWALs

--Start HBase
[hdpusr01@hadoop1 bin]$ ./start-hbase.sh 
starting master, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-master-hadoop1.out
starting regionserver, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-1-regionserver-hadoop1.out

--Drop the table in HBase

hbase(main):001:0> list 
TABLE                                                                                                                                                                  
emp                                                                                                                                                                    
1 row(s) in 0.1850 seconds

=> ["emp"]
hbase(main):002:0> drop 'emp'

ERROR: Table emp is enabled. Disable it first.

Here is some help for this command:
Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'

hbase(main):003:0> disable 'emp'
0 row(s) in 1.1880 seconds

hbase(main):004:0> drop 'emp'
0 row(s) in 4.2010 seconds

hbase(main):005:0> list
TABLE                                                                                                                                                                  
0 row(s) in 0.0060 seconds

=> []

--Stop HBase
[hdpusr01@hadoop1 bin]$ ./stop-hbase.sh 
stopping hbase..................

[hdpusr01@hadoop1 bin]$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:11 /hbase
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbasebak
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user
[hdpusr01@hadoop1 bin]$ hdfs dfs -mv /hbase /hbase.old
[hdpusr01@hadoop1 bin]$ hdfs dfs -mv /hbasebak /hbase 
[hdpusr01@hadoop1 bin]$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:09 /hbase
drwxr-xr-x   - hdpusr01 supergroup          0 2016-07-25 03:11 /hbase.old
drwx-wx-wx   - hdpusr01 supergroup          0 2016-05-23 12:25 /tmp
drwxr-xr-x   - hdpusr01 supergroup          0 2016-06-25 03:12 /user
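
Before starting HBase again, the restored tree can be compared with the original as a quick sanity check; a minimal sketch using standard HDFS commands (the totals may differ slightly if the live /hbase accumulated WALs after the backup was taken):
# show the aggregate size of each directory tree
hdfs dfs -du -s /hbase /hbase.old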

--Alternatively, the data can also be restored using distcp's -overwrite option
[hdpusr01@hadoop1 bin]$ hadoop distcp -overwrite /hbasebak /hbase
16/07/25 03:42:48 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[/hbasebak], targetPath=/hbase, targetPathExists=true, preserveRawXattrs=false}
16/07/25 03:42:48 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:42:51 INFO Configuration.deprecation: io.sort.mb is deprecated. Instead, use mapreduce.task.io.sort.mb
16/07/25 03:42:51 INFO Configuration.deprecation: io.sort.factor is deprecated. Instead, use mapreduce.task.io.sort.factor
16/07/25 03:42:52 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/25 03:43:14 INFO mapreduce.JobSubmitter: number of splits:12
16/07/25 03:43:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469176015126_0003
16/07/25 03:43:23 INFO impl.YarnClientImpl: Submitted application application_1469176015126_0003
16/07/25 03:43:23 INFO mapreduce.Job: The url to track the job: http://hadoop1:8088/proxy/application_1469176015126_0003/
16/07/25 03:43:23 INFO tools.DistCp: DistCp job-id: job_1469176015126_0003
16/07/25 03:43:23 INFO mapreduce.Job: Running job: job_1469176015126_0003
16/07/25 03:43:29 INFO mapreduce.Job: Job job_1469176015126_0003 running in uber mode : false
16/07/25 03:43:29 INFO mapreduce.Job:  map 0% reduce 0%
...
16/07/25 03:45:27 INFO mapreduce.Job:  map 82% reduce 0%
16/07/25 03:45:32 INFO mapreduce.Job:  map 92% reduce 0%
16/07/25 03:45:39 INFO mapreduce.Job:  map 100% reduce 0%
16/07/25 03:45:47 INFO mapreduce.Job: Job job_1469176015126_0003 completed successfully
16/07/25 03:45:47 INFO mapreduce.Job: Counters: 33
...

--启动hbase
[hdpusr01@hadoop1 bin]$ ./start-hbase.sh 
starting master, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-master-hadoop1.out
starting regionserver, logging to /home/hdpusr01/hbase-1.0.3/bin/../logs/hbase-hdpusr01-1-regionserver-hadoop1.out

--验证数据
[hdpusr01@hadoop1 bin]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.0.3, rf1e1312f9790a7c40f6a4b5a1bab2ea1dd559890, Tue Jan 19 19:26:53 PST 2016

hbase(main):001:0> list
TABLE                                                                                                                                                                  
emp                                                                                                                                                                    
1 row(s) in 0.2090 seconds

=> ["emp"]
hbase(main):002:0> scan 'emp'
ROW                                        COLUMN+CELL                                                                                                                 
 6379                                      column=info:salary, timestamp=1469428164465, value=10000                                                                    
 7822                                      column=info:ename, timestamp=1469428143019, value=tanlei                                                                    
 8899                                      column=info:job, timestamp=1469428186434, value=IT Engineer                                                                 
3 row(s) in 0.1220 seconds

Note: after restoring with distcp's -overwrite option, the data in HBase displays correctly, but the metadata for emp under the /hbase/table znode in ZooKeeper is gone, whereas after restoring directly with hdfs dfs -mv, the metadata in ZooKeeper is still present.
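
To see what actually survived in ZooKeeper, the znodes can be inspected with the ZooKeeper CLI bundled with HBase. A minimal sketch, assuming the default zookeeper.znode.parent of /hbase:
# list the per-table znodes; emp should appear here after the hdfs dfs -mv restore
hbase zkcli ls /hbase/table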

2. Using CopyTable for HBase hot backup
--Create a new table
hbase(main):004:0> create 'emp2','info'
0 row(s) in 0.7300 seconds

--View CopyTable help
[hdpusr01@hadoop1 ~]$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --help
Usage: CopyTable [general options] [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] <tablename>

Options:
 rs.class     hbase.regionserver.class of the peer cluster specify if different from current cluster
 rs.impl      hbase.regionserver.impl of the peer cluster
 startrow     the start row
 stoprow      the stop row
 starttime    beginning of the time range (unixtime in millis) without endtime means from starttime to forever
 endtime      end of the time range.  Ignored if no starttime specified.
 versions     number of cell versions to copy
 new.name     new table's name
 peer.adr     Address of the peer cluster given in the format hbase.zookeeer.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
 families     comma-separated list of families to copy
              To copy from cf1 to cf2, give sourceCfName:destCfName. 
              To keep the same name, just give "cfName"
 all.cells    also copy delete markers and deleted cells
 bulkload     Write input into HFiles and bulk load to the destination table

Args:
 tablename    Name of the table to copy

Examples:
 To copy 'TestTable' to a cluster that uses replication for a 1 hour window:
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=1265875194289 --endtime=1265878794289 --peer.adr=server1,server2,server3:2181:/hbase --families=myOldCf:myNewCf,cf2,cf3 TestTable 
For performance consider the following general option:
  It is recommended that you set the following to >=100. A higher value uses more memory but
  decreases the round trip time to the server and may increase performance.
    -Dhbase.client.scanner.caching=100
  The following should always be set to false, to prevent writing data twice, which may produce 
  inaccurate results.
    -Dmapreduce.map.speculative=false
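
Those general options go on the command line before the CopyTable arguments. A minimal sketch applying them to the same-cluster copy performed below; the caching value of 500 is only an illustrative choice:
hbase org.apache.hadoop.hbase.mapreduce.CopyTable -Dhbase.client.scanner.caching=500 -Dmapreduce.map.speculative=false --new.name=emp2 emp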

--Copy the data into the new table with CopyTable
[hdpusr01@hadoop1 ~]$ hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=emp2 emp
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-07-25 07:35:24,955 INFO  [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2016-07-25 07:35:25,065 INFO  [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2016-07-25 07:35:25,244 INFO  [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2016-07-25 07:35:30,765 INFO  [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x50965003 connecting to ZooKeeper ensemble=hadoop1:29181
2016-07-25 07:35:30,770 INFO  [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-07-25 07:35:30,770 INFO  [main] zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2016-07-25 07:35:30,770 INFO  [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_79
2016-07-25 07:35:33,334 INFO  [main] impl.YarnClientImpl: Submitted application application_1469176015126_0006
2016-07-25 07:35:33,360 INFO  [main] mapreduce.Job: The url to track the job: http://hadoop1:8088/proxy/application_1469176015126_0006/
2016-07-25 07:35:33,360 INFO  [main] mapreduce.Job: Running job: job_1469176015126_0006
2016-07-25 07:35:39,436 INFO  [main] mapreduce.Job: Job job_1469176015126_0006 running in uber mode : false
2016-07-25 07:35:39,437 INFO  [main] mapreduce.Job:  map 0% reduce 0%
2016-07-25 07:35:44,584 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2016-07-25 07:35:58,657 INFO  [main] mapreduce.Job: Job job_1469176015126_0006 completed successfully
...

--Verify the data
hbase(main):005:0> scan 'emp2'
ROW                                        COLUMN+CELL                                                                                                                 
 6379                                      column=info:salary, timestamp=1469428164465, value=10000                                                                    
 7822                                      column=info:ename, timestamp=1469428143019, value=tanlei                                                                    
 8899                                      column=info:job, timestamp=1469428186434, value=IT Engineer                                                                 
3 row(s) in 0.0170 seconds
Note: CopyTable can copy data within the same cluster or between different clusters.
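
For a cross-cluster copy, the peer cluster is addressed through --peer.adr in the format shown in the help above. A minimal sketch; the ZooKeeper quorum hosts and port (zk1,zk2,zk3:2181) are placeholders for the remote cluster:
hbase org.apache.hadoop.hbase.mapreduce.CopyTable --peer.adr=zk1,zk2,zk3:2181:/hbase emp
The destination table (emp here) must already exist on the peer cluster with matching column families.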

3. Using Export for HBase hot backup
--Back up the table data
[hdpusr01@hadoop1 ~]$ hbase org.apache.hadoop.hbase.mapreduce.Export emp2 /tmp/emp2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-07-25 09:32:57,375 INFO  [main] mapreduce.Export: versions=1, starttime=0, endtime=9223372036854775807, keepDeletedCells=false
2016-07-25 09:32:58,241 INFO  [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2016-07-25 09:32:59,440 INFO  [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x39e4ff0c connecting to ZooKeeper ensemble=hadoop1:29181
2016-07-25 09:32:59,446 INFO  [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-07-25 09:32:59,446 INFO  [main] zookeeper.ZooKeeper: Client environment:host.name=hadoop1
2016-07-25 09:32:59,446 INFO  [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_79
...
2016-07-25 09:33:00,795 INFO  [main] mapreduce.Job: The url to track the job: http://hadoop1:8088/proxy/application_1469176015126_0007/
2016-07-25 09:33:00,796 INFO  [main] mapreduce.Job: Running job: job_1469176015126_0007
2016-07-25 09:33:06,968 INFO  [main] mapreduce.Job: Job job_1469176015126_0007 running in uber mode : false
2016-07-25 09:33:06,970 INFO  [main] mapreduce.Job:  map 0% reduce 0%
2016-07-25 09:33:13,028 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2016-07-25 09:33:13,036 INFO  [main] mapreduce.Job: Job job_1469176015126_0007 completed successfully
...
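
Export also takes optional positional arguments for the number of cell versions and a start/end time range, as reflected by the versions=1, starttime=0 defaults in the log above. A minimal sketch of a time-bounded export; the output path and the timestamps (in milliseconds) are placeholders:
# keep up to 3 versions of cells written inside the given time window
hbase org.apache.hadoop.hbase.mapreduce.Export emp2 /tmp/emp2_range 3 1469400000000 1469500000000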

--Create a new table
hbase(main):010:0> create 'emp3','info'
0 row(s) in 0.1560 seconds

=> Hbase::Table - emp3

--Import the data into the new table
[hdpusr01@hadoop1 ~]$ hbase  org.apache.hadoop.hbase.mapreduce.Import emp3 /tmp/emp2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hdpusr01/hbase-1.0.3/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hdpusr01/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-07-25 09:46:30,373 INFO  [main] client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
2016-07-25 09:46:31,954 INFO  [main] input.FileInputFormat: Total input paths to process : 1
2016-07-25 09:46:32,019 INFO  [main] mapreduce.JobSubmitter: number of splits:1
2016-07-25 09:46:32,593 INFO  [main] mapreduce.JobSubmitter: Submitting tokens for job: job_1469176015126_0008
2016-07-25 09:46:32,821 INFO  [main] impl.YarnClientImpl: Submitted application application_1469176015126_0008
2016-07-25 09:46:32,851 INFO  [main] mapreduce.Job: The url to track the job: http://hadoop1:8088/proxy/application_1469176015126_0008/
...
2016-07-25 09:46:38,947 INFO  [main] mapreduce.Job: Job job_1469176015126_0008 running in uber mode : false
2016-07-25 09:46:38,948 INFO  [main] mapreduce.Job:  map 0% reduce 0%
2016-07-25 09:46:44,018 INFO  [main] mapreduce.Job:  map 100% reduce 0%
2016-07-25 09:46:45,030 INFO  [main] mapreduce.Job: Job job_1469176015126_0008 completed successfully
...

--Verify the data
hbase(main):012:0> scan 'emp3'
ROW                                        COLUMN+CELL                                                                                                                 
 6379                                      column=info:salary, timestamp=1469428164465, value=10000                                                                    
 7822                                      column=info:ename, timestamp=1469428143019, value=tanlei                                                                    
 8899                                      column=info:job, timestamp=1469428186434, value=IT Engineer                                                                 
3 row(s) in 0.0230 seconds

This concludes the overview of HBase cold and hot backup methods; other techniques can also provide hot backup for HBase, but they are not covered here.
