HTable is a fairly heavyweight object: creating one means loading configuration files, connecting to ZooKeeper, querying the meta table, and so on. Under high concurrency this drags down system performance, which is why the concept of a "pool" is introduced.

  The purpose of introducing a "connection pool in HBase" is:

          to improve the program's concurrency and access speed.

  You take a connection from the "pool", and when you are done you simply return it to the "pool".

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {
    private TableConnection(){
    }

    private static HConnection connection = null;

    public static HConnection getConnection(){
        if(connection == null){
            ExecutorService pool = Executors.newFixedThreadPool(10);// create a fixed-size thread pool of 10 threads
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum","HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try{
                connection = HConnectionManager.createConnection(conf, pool);// build the connection from the configuration and the thread pool
            }catch (IOException e){
                e.printStackTrace();// don't swallow the exception silently
            }
        }
        return connection;
    }
}
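  One caveat worth flagging: the lazy null check above is not thread-safe, which matters precisely because this pool exists to serve concurrent callers. A minimal hardening sketch (my addition, not part of the original class) is to synchronize the accessor:

    // Hedged variant: synchronize the accessor so that two threads cannot
    // both observe connection == null and create two HConnections.
    public static synchronized HConnection getConnection(){
        if(connection == null){
            ExecutorService pool = Executors.newFixedThreadPool(10);
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum","HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try{
                connection = HConnectionManager.createConnection(conf, pool);
            }catch (IOException e){
                e.printStackTrace();
            }
        }
        return connection;
    }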

  Now, back in our program, how do we use this "pool"?

  In short, TableConnection is a shared, ready-made "pool" that can serve as a template from here on; a minimal sketch of the usage pattern follows.
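  As a minimal sketch of the pattern (same test_table and HBase 0.98/1.x HConnection API used throughout this post; the try/finally is my addition for safety):

    HTableInterface table = TableConnection.getConnection()
            .getTable(TableName.valueOf("test_table"));
    try {
        Put put = new Put(Bytes.toBytes("row_04"));
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));
        table.put(put);
    } finally {
        table.close();// releases the table; the shared pooled HConnection stays open for the next caller
    }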

1. Using the "pool" to improve on the put approach from

HBase编程 API入门系列之put(客户端而言)(1)

  (the approach shown above in that post)

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(),TableName.valueOf("test_table"));//table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));//row key is row_04
//        put.add(Bytes.toBytes("f"),Bytes.toBytes("name"),Bytes.toBytes("Andy1"));//column family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"),Bytes.toBytes("name"),Bytes.toBytes("Andy3"));//column family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));//if no column is specified, everything is fetched by default
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));//deleteColumn removes only the newest timestamp version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));//deleteColumns removes all timestamp versions of the column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));//start row key, inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));//stop row key, exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);//iterate over the whole result set
//        System.out.println(rst.toString());
//        for (org.apache.hadoop.hbase.client.Result next = rst.next();next !=null;next = rst.next() ){
//            for(Cell cell:next.rawCells()){//loop over the cells of one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_04"));//row key is row_04
        put.add(Bytes.toBytes("f"),Bytes.toBytes("name"),Bytes.toBytes("北京"));
        table.put(put);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
//        configuration.set("hbase.rootdir","hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

hbase(main):035:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478096702098, value=Andy1
4 row(s) in 0.1190 seconds

hbase(main):036:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478097220790, value=\xE5\x8C\x97\xE4\xBA\xAC
4 row(s) in 0.5970 seconds

hbase(main):037:0>

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        // (the commented-out put/get/delete/scan experiments from the previous listing are unchanged and omitted here)
        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_05"));//row key is row_05
        put.add(Bytes.toBytes("f"),Bytes.toBytes("address"),Bytes.toBytes("beijng"));//the "beijng" spelling matches the scan output below
        table.put(put);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
//        configuration.set("hbase.rootdir","hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:22:14,784 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x19d12e87 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-app-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1
.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 14:22:14,801 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x19d12e870x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:22:14,853 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:22:14,855 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:22:14,960 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001c, negotiated timeout = 40000


hbase(main):037:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC
row_05 column=f:address, timestamp=1478097364649, value=beijng
5 row(s) in 0.2630 seconds

hbase(main):038:0>

  This is exactly the idea of the "pool": the connection is created once and then kept alive for reuse.

  A closer look:

      Here I configured the pool with a fixed size of 10 threads.

  The idea is really simple: you take one to use, someone else takes another, and when everyone is done the connections are returned (much like borrowing books from a library).

  Now someone may ask: if all 10 threads of my fixed-size pool are taken and an 11th request arrives, what then? Does it get nothing?

      Answer: it simply waits until someone returns one. This is the same principle as a queue, as the sketch below shows.
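  This waiting behavior is exactly what Executors.newFixedThreadPool gives us. A tiny self-contained illustration (class name, task count, and sleep time are made up for demonstration): submit 11 tasks to a 10-thread pool and the 11th sits in the queue until a worker frees up.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(10);// only 10 workers
        for (int i = 1; i <= 11; i++) {
            final int taskId = i;
            pool.submit(new Runnable() {
                public void run() {
                    System.out.println("task " + taskId + " running on " + Thread.currentThread().getName());
                    try {
                        TimeUnit.SECONDS.sleep(2);// hold the worker for a while
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        // task 11 waits in the pool's queue until one of the 10 workers finishes
        pool.shutdown();// accept no new tasks; queued ones still run
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}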

  

  The rationale is just as simple: with the pool in place, we no longer have to load the configuration and connect to ZooKeeper by hand on every call, because that is all done once inside TableConnection.java.

2. Using the "pool" to improve on the get approach from

HBase编程 API入门系列之get(客户端而言)(2)

  (the approach shown above in that post)

  The point is to give readers a deeper feel for what the "pool" buys you; this is also the approach that is strongly recommended in real-world company development.

hbase(main):038:0> scan 'test_table'
ROW COLUMN+CELL
row_01 column=f:col, timestamp=1478095650110, value=maizi
row_01 column=f:name, timestamp=1478095741767, value=Andy2
row_02 column=f:name, timestamp=1478095849538, value=Andy2
row_03 column=f:name, timestamp=1478095893278, value=Andy3
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC
row_05 column=f:address, timestamp=1478097364649, value=beijng
5 row(s) in 0.2280 seconds

hbase(main):039:0>

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        // (the commented-out experiments from the earlier listings are unchanged and omitted here)
        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
        hbasetest.getValue();
    }

//    public void insertValue() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_05"));//row key is row_05
//        put.add(Bytes.toBytes("f"),Bytes.toBytes("address"),Bytes.toBytes("beijng"));
//        table.put(put);
//        table.close();
//    }

    public void getValue() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Get get = new Get(Bytes.toBytes("row_03"));
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        Result rest = table.get(get);// org.apache.hadoop.hbase.client.Result (the javax.xml.transform.Result import was a mistake)
        System.out.println(rest.toString());
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
//        configuration.set("hbase.rootdir","hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:37:12,030 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x7660aac9 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 14:37:12,044 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x7660aac90x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:37:12,091 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:37:12,094 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:37:12,162 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001d, negotiated timeout = 40000
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}

  That is, the get returned exactly one cell: row row_03, column f:name, timestamp 1478095893278, written by a Put with a 5-byte value (the bytes of "Andy3").

3.1. Using the "pool" to improve on the delete approach from

HBase编程 API入门系列之delete(客户端而言)(3)

HBase编程 API入门之delete.deleteColumn和delete.deleteColumns区别(客户端而言)(4)

  (the approach shown above in those posts)

    From the oldest timestamp version to the newest, the values of f:name are Andy2 -> Andy1 -> Andy0 (Andy2 was written first, Andy0 last).

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        // (the commented-out experiments from the earlier listings are unchanged and omitted here)
        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
        hbasetest.delete();
    }

    public void delete() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));//deleteColumn removes only the newest timestamp version of the column
        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));//deleteColumns removes all timestamp versions of the column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
//        configuration.set("hbase.rootdir","hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

  

The difference between delete.deleteColumn and delete.deleteColumns:

    deleteColumn removes only the most recent timestamp version of the given column.

    deleteColumns removes all timestamp versions of the given column.
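  To make the difference concrete, here is a hedged sketch of my own (not from the original post), reusing the TableConnection pool and assuming column family f keeps multiple versions:

    public void compareDeletes() throws Exception {
        HTableInterface table = TableConnection.getConnection()
                .getTable(TableName.valueOf("test_table"));
        try {
            // Write three versions of f:name, oldest first: Andy2, then Andy1, then Andy0.
            for (String v : new String[]{ "Andy2", "Andy1", "Andy0" }) {
                Put put = new Put(Bytes.toBytes("row_01"));
                put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes(v));
                table.put(put);
            }
            Delete latestOnly = new Delete(Bytes.toBytes("row_01"));
            latestOnly.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// removes Andy0 only
            table.delete(latestOnly);
            // A get of row_01 f:name now surfaces Andy1, the previous version;
            // deleteColumns(...) instead would have wiped Andy2, Andy1 and Andy0 in one shot.
        } finally {
            table.close();
        }
    }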

    

3.2. Using the "pool" to improve on the delete approach from

HBase编程 API入门之delete(客户端而言)

HBase编程 API入门之delete.deleteColumn和delete.deleteColumns区别(客户端而言)

  (the approach shown above in those posts)

  From the oldest timestamp version to the newest, the values of f:name are Andy2 -> Andy1 -> Andy0 (Andy2 was written first, Andy0 last).

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        // (the commented-out experiments from the earlier listings are unchanged and omitted here)
        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
        hbasetest.delete();
    }

    public void delete() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));//deleteColumn removes only the newest timestamp version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));//deleteColumns removes all timestamp versions of the column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
//        configuration.set("hbase.rootdir","hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

            Again, from the oldest timestamp version to the newest: Andy2 -> Andy1 -> Andy0 (Andy2 written first, Andy0 last).

 The difference between delete.deleteColumn and delete.deleteColumns:

    deleteColumn removes only the most recent timestamp version of the given column.

    deleteColumns removes all timestamp versions of the given column.

4. Using the "pool" to improve on the scan approach from

HBase编程 API入门之scan(客户端而言)

  (the approach shown above in that post)

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
        // (the commented-out experiments from the earlier listings are unchanged and omitted here)
        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
//        hbasetest.delete();
        hbasetest.scanValue();
    }

    public void scanValue() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02"));//start row key, inclusive
        scan.setStopRow(Bytes.toBytes("row_04"));//stop row key, exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan);//iterate over the whole result set
        for (Result next = rst.next(); next != null; next = rst.next()){
            for(Cell cell : next.rawCells()){//loop over the cells of one row key
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        rst.close();//close the scanner before closing the table
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = new Configuration();
//        configuration.set("hbase.rootdir","hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
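  With the table contents shown above, this scan should print only row_02 and row_03: the start row is included in the range, while the stop row row_04 is excluded.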

2016-12-11 15:14:56,940 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x278a676 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 15:14:56,958 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x278a6760x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 15:14:57,015 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 15:14:57,018 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 15:14:57,044 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c50024, negotiated timeout = 40000
org.apache.hadoop.hbase.client.ClientScanner@4362f2fe
keyvalues={row_02/f:name/1478095849538/Put/vlen=5/seqid=0}
family:f
col:name
valueAndy2
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}
family:f
col:name
valueAndy3

  The scan returns only row_02 and row_03 because setStartRow is inclusive while setStopRow is exclusive, so row_04 is not included. I won't walk through the remaining operations (put, get, delete) here; they follow the same pooled pattern, so explore them on your own.

Finally, to summarize:

  In real-world development, be sure to master this thread-pool/connection-pool pattern: share one pooled connection instead of building a new table handle for every request.

The full code is attached below.

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// the table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// the row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// column family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// column family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// left commented out: when no column is specified, all columns are returned by default
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every version of the column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// start row is inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// stop row is exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// scanner over all matching rows
//        System.out.println(rst.toString());
//        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {// loop over the cells of one row
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
//        hbasetest.delete();
        hbasetest.scanValue();
    }

    // In production code, go through the connection pool like this
//    public void insertValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));// the row key is row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

    // In production code, go through the connection pool like this
//    public void getValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

    // In production code, go through the connection pool like this
//    public void delete() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Delete delete = new Delete(Bytes.toBytes("row_01"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every version of the column
//        table.delete(delete);
//        table.close();
//    }

    // In production code, go through the connection pool like this
    public void scanValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02"));// start row is inclusive
        scan.setStopRow(Bytes.toBytes("row_04"));// stop row is exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan);// scanner over all matching rows
        System.out.println(rst.toString());// note: this prints the scanner object itself, not the rows
        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()) {
            for (Cell cell : next.rawCells()) {// loop over the cells of one row
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        rst.close();// close the scanner to release server-side resources
        table.close();// returns the table handle; the shared pooled connection stays open
    }

    // Non-pooled configuration helper; HBaseConfiguration.create() is usually
    // preferred here, since a plain Configuration does not load hbase-default.xml
    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
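One robustness note on scanValue: HTableInterface and ResultScanner both implement Closeable, so on JDK 7+ a try-with-resources variant guarantees cleanup even if iteration throws mid-scan. The following is only a sketch, reusing the imports, TableConnection, and test_table from the listing above; the method name scanValueSafely and its structure are mine, not the original author's:

    // Hedged sketch: try-with-resources closes the table handle and the scanner
    // even when an exception is thrown while iterating.
    public void scanValueSafely() throws IOException {
        try (HTableInterface table = TableConnection.getConnection()
                .getTable(TableName.valueOf("test_table"))) {
            Scan scan = new Scan();
            scan.setStartRow(Bytes.toBytes("row_02"));// inclusive
            scan.setStopRow(Bytes.toBytes("row_04"));// exclusive
            scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
            try (ResultScanner rst = table.getScanner(scan)) {
                for (org.apache.hadoop.hbase.client.Result next : rst) {// ResultScanner is Iterable
                    for (Cell cell : next.rawCells()) {
                        System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                        System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                        System.out.println("value:" + Bytes.toString(CellUtil.cloneValue(cell)));
                    }
                }
            }
        }
    }

And here is the TableConnection class used above: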

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {
    private TableConnection() {
    }

    private static HConnection connection = null;

    public static HConnection getConnection() {
        if (connection == null) {
            ExecutorService pool = Executors.newFixedThreadPool(10);// create a fixed-size thread pool
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try {
                connection = HConnectionManager.createConnection(conf, pool);// create the connection from the configuration and the thread pool
            } catch (IOException e) {
                e.printStackTrace();// don't swallow the exception silently
            }
        }
        return connection;
    }
}
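Two caveats about TableConnection as written: the lazy `if (connection == null)` check is not thread-safe (two concurrent callers can race and open two connections), and HConnection/HConnectionManager have been deprecated since HBase 1.0 in favor of Connection/ConnectionFactory. Below is a minimal sketch of the same lazy-singleton pattern on the newer API, assuming the same ZooKeeper quorum; the class name TableConnection2 is mine, used only to keep it distinct:

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

// Hedged sketch: same lazy-singleton pool, but on the non-deprecated
// Connection/ConnectionFactory API, and synchronized so concurrent callers
// cannot create two connections.
public class TableConnection2 {
    private static Connection connection = null;

    private TableConnection2() {
    }

    public static synchronized Connection getConnection() throws IOException {
        if (connection == null) {
            ExecutorService pool = Executors.newFixedThreadPool(10);// fixed-size thread pool
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum",
                    "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            connection = ConnectionFactory.createConnection(conf, pool);// pass config and pool
        }
        return connection;
    }
}

Callers would then receive a Table from connection.getTable(TableName.valueOf("test_table")) instead of an HTableInterface; the calling code is otherwise unchanged.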

Reposted from: https://www.cnblogs.com/zlslch/p/6159427.html
