HBase Installation, Write, and Query Operations
Experiment Materials and Description
On the Ubuntu system, the directory /StudentID/salesInfo (each student uses their own student ID) contains a purchase-record file named Sales. The file records the buyer's ID, the ID of the purchased goods, and the purchase date. Sales has three fields (buyer ID, goods ID, purchase date) separated by "\t". Sample data and format:
Buyer ID	Goods ID	Purchase date
1000181 1000481 2021-04-04 16:54:31
2000001 1001597 2021-04-07 15:07:52
2000001 1001560 2021-04-07 15:08:27
2000042 1001368 2021-04-08 08:20:30
2000067 1002061 2021-04-08 16:45:33
2000056 1003289 2021-04-12 10:50:55
2000056 1003290 2021-04-12 11:57:35
2000056 1003292 2021-04-12 12:05:29
2000054 1002420 2021-04-14 15:24:12
2000055 1001679 2021-04-14 19:46:04
2000054 1010675 2021-04-14 15:23:53
2000054 1002429 2021-04-14 17:52:45
2000076 1002427 2021-04-14 19:35:39
2000054 1003326 2021-04-20 12:54:44
2000056 1002420 2021-04-15 11:24:49
2000064 1002422 2021-04-15 11:35:54
2000056 1003066 2021-04-15 11:43:01
2000056 1003055 2021-04-15 11:43:06
2000056 1010183 2021-04-15 11:45:24
2000056 1002422 2021-04-15 11:45:49
2000056 1003100 2021-04-15 11:45:54
2000056 1003094 2021-04-15 11:45:57
2000056 1003064 2021-04-15 11:46:04
2000056 1010178 2021-04-15 16:15:20
2000076 1003101 2021-04-15 16:37:27
2000076 1003103 2021-04-15 16:37:05
2000076 1003100 2021-04-15 16:37:18
2000076 1003066 2021-04-15 16:37:31
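Note that the three fields are separated by tabs, while the purchase date itself contains a space, so a record must be split on "\t" rather than on general whitespace. A minimal, self-contained sketch of this parsing step (the class and method names here are illustrative, not part of the lab code):

```java
public class SalesLineParser {
    // Split one tab-delimited Sales record into its three fields.
    // Splitting on "\t" keeps the space inside the purchase date intact.
    public static String[] parse(String line) {
        String[] fields = line.split("\t");
        if (fields.length != 3) {
            throw new IllegalArgumentException("expected 3 tab-separated fields: " + line);
        }
        return fields; // [buyerID, goodsID, purchaseDate]
    }

    public static void main(String[] args) {
        String[] r = parse("1000181\t1000481\t2021-04-04 16:54:31");
        System.out.println(r[0] + " bought " + r[1] + " at " + r[2]);
    }
}
```

Splitting on a single space instead would cut the date in half ("2021-04-04" and "16:54:31" would land in different array slots), which is a common source of ArrayIndexOutOfBoundsException in this lab.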
A lab report is required. It must cover the experiment principle, the algorithm design, the code, debugging notes, problems encountered during the experiment, and suggestions for improving the code. Report files are named HadoopLabX-StudentID-Name.doc (X = 1, 2, 3). Specifically, the report must include the following:
Experiment Objective
Master HBase installation, write, and query operations. Concretely: create a Sales table in HBase; create a PutData class that writes all records from the Sales file into the Sales table; and create a GetData class that queries the Sales table for the row with rowkey 10010.
1. Create the Sales table in HBase
package shiyan;
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.conf.Configuration;
public class CreateTable {
    public static void main(String[] args) throws IOException {
        // Instantiate the configuration class
        Configuration con = HBaseConfiguration.create();
        // Instantiate the HBaseAdmin class
        HBaseAdmin admin = new HBaseAdmin(con);
        // Instantiate the table descriptor class and set the table name
        HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf("Sales"));
        // Add column families to the table descriptor (more than one family may be added)
        tableDescriptor.addFamily(new HColumnDescriptor("buyerID"));
        tableDescriptor.addFamily(new HColumnDescriptor("goodsID"));
        tableDescriptor.addFamily(new HColumnDescriptor("time"));
        admin.createTable(tableDescriptor);
        System.out.println("created Table success!");
    }
}
2. Bulk-load the local data file into the HBase table
package shiyan;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class PutData {
    private static final String TABLE_NAME = "sales";
    public static final String FAMILY_NAME_1 = "goodsid";
    public static final String FAMILY_NAME_2 = "buytime";

    public static void main(String[] args) {
        // Configuration: point the client at the local ZooKeeper quorum
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.set("zookeeper.znode.parent", "/hbase");

        List<Put> list = new ArrayList<Put>();
        File file = new File("/1863710117/salesInfo/Sales");
        BufferedReader reader = null;
        String lineString = null;
        Connection connection = null;
        Table table = null;
        try {
            connection = ConnectionFactory.createConnection(conf);
            table = connection.getTable(TableName.valueOf(TABLE_NAME));
            reader = new BufferedReader(new FileReader(file));
            while ((lineString = reader.readLine()) != null) {
                // Fields are tab-separated; splitting on "\t" keeps the space
                // inside the purchase date intact
                String[] lines = lineString.split("\t");
                Put put = new Put(lines[0].getBytes());
                put.addColumn(FAMILY_NAME_1.getBytes(), null, lines[1].getBytes());
                put.addColumn(FAMILY_NAME_2.getBytes(), null, lines[2].getBytes());
                list.add(put);
            }
            // One batched put for all rows
            table.put(list);
            System.out.println("success!");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (reader != null) {
                try {
                    reader.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (table != null) {
                try {
                    table.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            if (connection != null) {
                try {
                    connection.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
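One caveat with the bulk-load code above: it uses the buyer ID alone as the rowkey, and buyer IDs repeat in Sales (for example 2000056 appears many times), so later puts overwrite earlier ones and only the newest cell version is returned by default. A common remedy, sketched here as a suggestion rather than a requirement of the lab, is a composite rowkey of buyer ID plus purchase time (the "_" separator is an illustrative choice):

```java
public class RowKeyBuilder {
    // Build a composite rowkey so each purchase gets its own row
    // instead of overwriting earlier purchases by the same buyer.
    public static String compose(String buyerId, String purchaseTime) {
        return buyerId + "_" + purchaseTime;
    }

    public static void main(String[] args) {
        // Two purchases by the same buyer now map to distinct rowkeys.
        System.out.println(compose("2000056", "2021-04-12 10:50:55"));
        System.out.println(compose("2000056", "2021-04-15 11:43:01"));
    }
}
```

Because the buyer ID is the rowkey prefix, all purchases by one buyer still sort together, so a prefix scan on "2000056" retrieves that buyer's full history.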
3. One program that creates the table, inserts data, and queries it
package shiyan;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.*;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import java.io.IOException;
public class AddRowData {
    private static final String TABLE_NAME = "sales";
    public static final String FAMILY_NAME_1 = "goodsid";
    public static final String FAMILY_NAME_2 = "buytime";

    // Build the client configuration
    private static Configuration getHBaseConfiguration() {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");
        conf.set("zookeeper.znode.parent", "/hbase");
        return conf;
    }

    // Create the table (if absent), insert five sample rows, then read them back
    private static void createTable(Configuration conf) throws IOException {
        Connection connection = null;
        Table table = null;
        try {
            connection = ConnectionFactory.createConnection(conf);
            Admin admin = connection.getAdmin();
            if (!admin.tableExists(TableName.valueOf(TABLE_NAME))) {
                // Create the table with its column families
                HTableDescriptor tableDescriptor = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
                HColumnDescriptor columnDescriptor_1 = new HColumnDescriptor(Bytes.toBytes(FAMILY_NAME_1));
                HColumnDescriptor columnDescriptor_2 = new HColumnDescriptor(Bytes.toBytes(FAMILY_NAME_2));
                tableDescriptor.addFamily(columnDescriptor_1);
                tableDescriptor.addFamily(columnDescriptor_2);
                admin.createTable(tableDescriptor);
            } else {
                System.err.println("table already exists!");
            }

            // Put data: one Put per rowkey
            table = connection.getTable(TableName.valueOf(TABLE_NAME));
            Put put = new Put(Bytes.toBytes("1000181"));
            put.addColumn(Bytes.toBytes(FAMILY_NAME_1), Bytes.toBytes("gid"), Bytes.toBytes("1000481"));
            put.addColumn(Bytes.toBytes(FAMILY_NAME_2), Bytes.toBytes("btime"), Bytes.toBytes("2021-04-04 16:54:31"));
            Put put2 = new Put(Bytes.toBytes("2000001"));
            put2.addColumn(Bytes.toBytes(FAMILY_NAME_1), Bytes.toBytes("gid"), Bytes.toBytes("1001597"));
            put2.addColumn(Bytes.toBytes(FAMILY_NAME_2), Bytes.toBytes("btime"), Bytes.toBytes("2021-04-07 15:07:52"));
            Put put3 = new Put(Bytes.toBytes("2000001"));
            put3.addColumn(Bytes.toBytes(FAMILY_NAME_1), Bytes.toBytes("gid"), Bytes.toBytes("1001560"));
            put3.addColumn(Bytes.toBytes(FAMILY_NAME_2), Bytes.toBytes("btime"), Bytes.toBytes("2021-04-07 15:08:27"));
            Put put4 = new Put(Bytes.toBytes("2000042"));
            put4.addColumn(Bytes.toBytes(FAMILY_NAME_1), Bytes.toBytes("gid"), Bytes.toBytes("1001368"));
            put4.addColumn(Bytes.toBytes(FAMILY_NAME_2), Bytes.toBytes("btime"), Bytes.toBytes("2021-04-08 08:20:30"));
            Put put5 = new Put(Bytes.toBytes("2000067"));
            put5.addColumn(Bytes.toBytes(FAMILY_NAME_1), Bytes.toBytes("gid"), Bytes.toBytes("1002061"));
            put5.addColumn(Bytes.toBytes(FAMILY_NAME_2), Bytes.toBytes("btime"), Bytes.toBytes("2021-04-08 16:45:33"));
            table.put(put);
            table.put(put2);
            table.put(put3);
            table.put(put4);
            table.put(put5);

            // Get: read back the row with rowkey "1000181"
            Get getByRowkey = new Get(Bytes.toBytes("1000181"));
            byte[] ss = table.get(getByRowkey).getValue(Bytes.toBytes(FAMILY_NAME_1), Bytes.toBytes("gid"));
            System.out.print("row \"1000181\" goodsid:gid: " + new String(ss) + " ");
            byte[] ss1 = table.get(getByRowkey).getValue(Bytes.toBytes(FAMILY_NAME_2), Bytes.toBytes("btime"));
            System.out.println("buytime:btime: " + new String(ss1));

            // Scan the rowkey range from "1000181" (inclusive) to "2000067" (exclusive)
            Scan scan = new Scan();
            scan.setStartRow(Bytes.toBytes("1000181"));
            scan.setStopRow(Bytes.toBytes("2000067"));
            scan.addColumn(Bytes.toBytes(FAMILY_NAME_1), Bytes.toBytes("gid"));
            scan.setCaching(100);
            ResultScanner results = table.getScanner(scan);
            for (Result result : results) {
                while (result.advance()) {
                    System.out.println("goodsid: " + Bytes.toString(CellUtil.cloneValue(result.current())));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // Close table and connection
            if (table != null) table.close();
            if (connection != null) connection.close();
        }
    }

    public static void main(String[] args) {
        Configuration conf = getHBaseConfiguration();
        try {
            createTable(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
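The scan in the program above covers rowkeys from the start row (inclusive) to the stop row (exclusive), compared as unsigned byte arrays in lexicographic order. Since all rowkeys here are 7-digit ASCII strings of equal length, plain string comparison matches the byte-level ordering; the small pure-Java check below (no cluster needed) shows which of the sample rowkeys that range covers:

```java
import java.util.Arrays;
import java.util.List;

public class ScanRangeCheck {
    // HBase compares rowkeys as unsigned bytes; for ASCII digit strings of
    // equal length this matches String.compareTo.
    static boolean inRange(String rowKey, String start, String stop) {
        return rowKey.compareTo(start) >= 0 && rowKey.compareTo(stop) < 0; // stop row is exclusive
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("1000181", "2000001", "2000042", "2000067");
        for (String k : keys) {
            System.out.println(k + " in range: " + inRange(k, "1000181", "2000067"));
        }
        // "2000067" prints false: the stop row itself is never returned by the scan.
    }
}
```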
4. Query data from HBase
package shiyan;
import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
public class GetDataTest {
    private static Configuration config = null;
    static {
        config = HBaseConfiguration.create();
        config.set("hbase.zookeeper.quorum", "*.*.*.*");
        config.set("hbase.zookeeper.property.clientPort", "2181");
    }
    // Look up one row by rowkey
    public static void getOneRecord(String tableName, String rowKey) throws IOException {
        HTable table = new HTable(config, tableName);
        Get get = new Get(rowKey.getBytes());
        Result rs = table.get(get);
        for (KeyValue kv : rs.raw()) {
            // Pattern uses "mm" (minutes), not "MM" (month)
            String timestampFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date(kv.getTimestamp()));
            System.out.println("===:" + timestampFormat + " ==timestamp: " + kv.getTimestamp());
            System.out.println("\nKeyValue: " + kv);
            System.out.println("key: " + kv.getKeyString());
            System.out.println(new String(kv.getRow()) + " " + new String(kv.getFamily()) + ":"
                    + new String(kv.getQualifier()) + " " + kv.getTimestamp() + " " + new String(kv.getValue()));
        }
        table.close();
    }
    // TODO: alternative read path using the newer Connection API (kept for reference)
    /*
    public void getDataFromTable() throws IOException {
        // 1. Build the HBase client configuration
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "hadoop00");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // 2. Create the connection
        Connection conn = ConnectionFactory.createConnection(conf);
        // 3. Get the table
        Table testTable = conn.getTable(TableName.valueOf("TestTable"));
        // 4. Read data with a Get
        Get get = new Get(Bytes.toBytes("1001"));
        // 5. Fetch the result
        Result result = testTable.get(get);
        // 6. Iterate over the cells
        Cell[] cells = result.rawCells();
        for (Cell cell : cells) {
            System.out.println("rowkey: " + Bytes.toString(CellUtil.cloneRow(cell)));
            System.out.println("family: " + Bytes.toString(CellUtil.cloneFamily(cell)));
            System.out.println("qualifier: " + Bytes.toString(CellUtil.cloneQualifier(cell)));
            System.out.println("value: " + Bytes.toString(CellUtil.cloneValue(cell)));
            System.out.println("_______________");
        }
        // 7. Close the connection
        conn.close();
    }
    */
    public static void main(String[] args) {
        String tableName = "TestTable";
        try {
            GetDataTest.getOneRecord(tableName, "1000181");
            System.out.print("success!");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
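getOneRecord formats each cell's timestamp with SimpleDateFormat. An easy mistake with these patterns is confusing uppercase MM (month) with lowercase mm (minutes), which makes the minutes field print the month instead. A quick self-contained check of the correct pattern (the fixed UTC instant below is illustrative, chosen to match the first sample record):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimestampFormatCheck {
    // Correct pattern: lowercase "mm" is minutes; uppercase "MM" would be the month.
    static String format(long epochMillis) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // fixed zone so the output is deterministic
        return fmt.format(new Date(epochMillis));
    }

    public static void main(String[] args) {
        // 1617555271000L is 2021-04-04 16:54:31 UTC
        System.out.println(format(1617555271000L)); // prints "2021-04-04 16:54:31"
    }
}
```

With "HH:MM:ss" the same instant would print as "2021-04-04 16:04:31", silently replacing the minutes with the month.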