While preparing a training session for a customer recently, I noticed that Coherence supports three ways to bulk-load data:

  • Custom application
  • InvocableMap - PreloadRequest
  • Invocation Service

The custom-application approach is straightforward: it essentially comes down to the put and putAll methods, so I won't dwell on it. The problem is that both put and putAll are serial operations; to load a large volume of data, some parallel mechanism is needed.

This article looks into the second approach, InvocableMap. PreloadRequest loads a set of entries through a CacheLoader; the core call is cache.invokeAll(keys, new PreloadRequest()). This approach has the following characteristics:

  • All keys to be loaded must be known before loading starts.
  • The actual loading is performed by a CacheLoader.
  • Loading is a parallel process: each storage node loads, from the database, the entries whose keys it owns in its own portion of the cache.
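The last point above is what delivers the parallelism: a partitioned cache assigns every key to exactly one storage member, so each member only loads its own share of the keys. The idea can be sketched in plain Java with hash-based bucketing (this is an illustration, not Coherence's actual partition-assignment algorithm; the member count and keys are made up):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyPartitionSketch {
    // Assign each key to one of nMembers buckets, mimicking how a
    // partitioned cache spreads keys across storage members.
    static Map<Integer, List<String>> partition(Collection<String> keys, int nMembers) {
        Map<Integer, List<String>> owned = new HashMap<>();
        for (String key : keys) {
            int member = Math.floorMod(key.hashCode(), nMembers);
            owned.computeIfAbsent(member, m -> new ArrayList<>()).add(key);
        }
        return owned;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 1; i <= 10; i++) keys.add(String.valueOf(i));
        // Each of the 4 hypothetical members would load only its own keys,
        // so the database reads happen concurrently across members.
        partition(keys, 4).forEach((m, ks) -> System.out.println("member " + m + " loads " + ks));
    }
}
```

Because the buckets are disjoint and together cover all keys, no entry is loaded twice and none is skipped.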

Code:

Person.java

package dataload;

import java.io.Serializable;

public class Person implements Serializable {

    private String Id;
    private String Firstname;
    private String Lastname;
    private String Address;

    public Person() {
        super();
    }

    public Person(String sId, String sFirstname, String sLastname, String sAddress) {
        Id = sId;
        Firstname = sFirstname;
        Lastname = sLastname;
        Address = sAddress;
    }

    public void setId(String Id) {
        this.Id = Id;
    }

    public String getId() {
        return Id;
    }

    public void setFirstname(String Firstname) {
        this.Firstname = Firstname;
    }

    public String getFirstname() {
        return Firstname;
    }

    public void setLastname(String Lastname) {
        this.Lastname = Lastname;
    }

    public String getLastname() {
        return Lastname;
    }

    public void setAddress(String Address) {
        this.Address = Address;
    }

    public String getAddress() {
        return Address;
    }
}

DBCacheStore.java implements the CacheLoader contract (via the CacheStore interface); the key method to look at is load:

package dataload;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.CacheStore;
import com.tangosol.util.Base;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.naming.Context;
import javax.naming.InitialContext;

/**
 * An example implementation of the CacheStore interface.
 *
 * @author erm 2003.05.01
 */
public class DBCacheStore
    extends Base
    implements CacheStore
{
    // ----- constructors ---------------------------------------------------

    /**
     * Constructs a DBCacheStore for a given database table.
     *
     * @param sTableName the db table name
     */
    public DBCacheStore(String sTableName)
    {
        m_sTableName = sTableName;
        cache = CacheFactory.getCache("SampleCache");
    }

    // ----- accessors ------------------------------------------------------

    /**
     * Obtain the name of the table this CacheStore is persisting to.
     *
     * @return the name of the table this CacheStore is persisting to
     */
    public String getTableName()
    {
        return m_sTableName;
    }

    /**
     * Obtain the connection being used to connect to the database.
     *
     * @return the connection used to connect to the database
     */
    public Connection getConnection()
    {
        try
        {
            // Look up the data source through WebLogic JNDI; this is why
            // the cache server needs weblogic.jar on its classpath.
            Hashtable<String, String> ht = new Hashtable<String, String>();
            ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            ht.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(ht);
            javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("ds");

            m_con = ds.getConnection();
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }

        return m_con;
    }

    // ----- CacheStore interface -------------------------------------------

    /**
     * Return the value associated with the specified key, or null if the
     * key does not have an associated value in the underlying store.
     *
     * @param oKey key whose associated value is to be returned
     *
     * @return the value associated with the specified key, or
     *         <tt>null</tt> if no value is available for that key
     */
    public Object load(Object oKey)
    {
        Object oValue = null;
        Connection con = getConnection();
        String sSQL = "SELECT id, firstname, lastname, address FROM " + getTableName()
                + " WHERE id = ?";
        System.out.println("Enter load= " + sSQL);

        try
        {
            PreparedStatement stmt = con.prepareStatement(sSQL);
            stmt.setString(1, String.valueOf(oKey));
            System.out.println("key=" + String.valueOf(oKey));

            ResultSet rslt = stmt.executeQuery();
            if (rslt.next())
            {
                oValue = new Person(rslt.getString("id"), rslt.getString("firstname"),
                        rslt.getString("lastname"), rslt.getString("address"));
                if (rslt.next())
                {
                    throw new SQLException("Not a unique key: " + oKey);
                }
            }
            stmt.close();
        }
        catch (SQLException e)
        {
            System.out.println("==============" + e.getMessage());
            //throw ensureRuntimeException(e, "Load failed: key=" + oKey);
        }
        return oValue;
    }

    /**
     * Store the specified value under the specific key in the underlying
     * store. This method is intended to support both key/value creation
     * and value update for a specific key.
     *
     * @param oKey   key to store the value under
     * @param oValue value to be stored
     *
     * @throws UnsupportedOperationException if this implementation or the
     *         underlying store is read-only
     */
    public void store(Object oKey, Object oValue)
    {
        /*
        Connection con = getConnection();
        String sTable = getTableName();
        String sSQL;

        if (load(oKey) != null)
        {
            sSQL = "UPDATE " + sTable + " SET value = ? where id = ?";
        }
        else
        {
            sSQL = "INSERT INTO " + sTable + " (value, id) VALUES (?,?)";
        }
        try
        {
            PreparedStatement stmt = con.prepareStatement(sSQL);
            int i = 0;
            stmt.setString(++i, String.valueOf(oValue));
            stmt.setString(++i, String.valueOf(oKey));
            stmt.executeUpdate();
            stmt.close();
        }
        catch (SQLException e)
        {
            throw ensureRuntimeException(e, "Store failed: key=" + oKey);
        }
        */
    }

    /**
     * Remove the specified key from the underlying store if present.
     *
     * @param oKey key whose mapping is to be removed from the map
     *
     * @throws UnsupportedOperationException if this implementation or the
     *         underlying store is read-only
     */
    public void erase(Object oKey)
    {
        /*
        Connection con = getConnection();
        String sSQL = "DELETE FROM " + getTableName() + " WHERE id=?";
        try
        {
            PreparedStatement stmt = con.prepareStatement(sSQL);
            stmt.setString(1, String.valueOf(oKey));
            stmt.executeUpdate();
            stmt.close();
        }
        catch (SQLException e)
        {
            throw ensureRuntimeException(e, "Erase failed: key=" + oKey);
        }
        */
    }

    /**
     * Remove the specified keys from the underlying store if present.
     *
     * @param colKeys keys whose mappings are being removed from the cache
     *
     * @throws UnsupportedOperationException if this implementation or the
     *         underlying store is read-only
     */
    public void eraseAll(Collection colKeys)
    {
        throw new UnsupportedOperationException();
    }

    /**
     * Return the values associated with each of the specified keys in the
     * passed collection. If a key does not have an associated value in
     * the underlying store, then the returned map will not have an entry
     * for that key.
     *
     * @param colKeys a collection of keys to load
     *
     * @return a Map of keys to associated values for the specified keys
     */
    public Map loadAll(Collection colKeys)
    {
        /*
        System.out.println("Enter LoadAll Map");
        Map mapResults = new HashMap();
        for (Object entry : (Set<Object>) colKeys) {
            System.out.println(entry);
            mapResults.put(entry, load(entry));
        }
        return mapResults;
        */

        // Bulk loading is disabled in this example; with an empty result
        // here, entries end up being fetched one at a time through load().
        return Collections.emptyMap();
    }

    /**
     * Store the specified values under the specified keys in the underlying
     * store. This method is intended to support both key/value creation
     * and value update for the specified keys.
     *
     * @param mapEntries a Map of any number of keys and values to store
     *
     * @throws UnsupportedOperationException if this implementation or the
     *         underlying store is read-only
     */
    public void storeAll(Map mapEntries)
    {
        throw new UnsupportedOperationException();
    }

    /**
     * Iterate all keys in the underlying store.
     *
     * @return a read-only iterator of the keys in the underlying store
     */
    public Iterator keys()
    {
        Connection con = getConnection();
        String sSQL = "SELECT id FROM " + getTableName();
        List list = new LinkedList();

        try
        {
            PreparedStatement stmt = con.prepareStatement(sSQL);
            ResultSet rslt = stmt.executeQuery();
            while (rslt.next())
            {
                Object oKey = rslt.getString(1);
                list.add(oKey);
            }
            stmt.close();
        }
        catch (SQLException e)
        {
            throw ensureRuntimeException(e, "Iterator failed");
        }

        return list.iterator();
    }

    // ----- data members ---------------------------------------------------

    /**
     * The connection.
     */
    protected Connection m_con;

    /**
     * The db table name.
     */
    protected String m_sTableName;

    protected NamedCache cache;
}
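What makes PreloadRequest effective is the read-through behavior of the read-write backing map: when an entry is missing from the cache, the backing map calls CacheStore.load and caches the result. That mechanism can be illustrated with a plain-Java stand-in (no Coherence or database involved; the class and loader here are invented purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ReadThroughSketch {
    // A minimal read-through map: on a miss, fetch the value from the
    // "store" (a function standing in for CacheStore.load) and cache it.
    static class ReadThroughMap<K, V> {
        private final Map<K, V> internal = new HashMap<>();
        private final Function<K, V> loader;
        int loadCount = 0; // how many times the backing store was hit

        ReadThroughMap(Function<K, V> loader) {
            this.loader = loader;
        }

        V get(K key) {
            V value = internal.get(key);
            if (value == null) {
                loadCount++;
                value = loader.apply(key); // the CacheStore.load(oKey) step
                if (value != null) internal.put(key, value);
            }
            return value;
        }
    }

    public static void main(String[] args) {
        ReadThroughMap<String, String> cache = new ReadThroughMap<>(k -> "person-" + k);
        System.out.println(cache.get("1")); // miss: fetched from the store
        System.out.println(cache.get("1")); // hit: served from the cache
        System.out.println("loads = " + cache.loadCount);
    }
}
```

Preloading simply forces this miss-then-load path for every key up front, so later reads are all cache hits.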

The CoherencePreLoad.java driver program:

package dataload;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.processor.PreloadRequest;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import java.util.Collection;
import java.util.HashSet;
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;

public class CoherencePreLoad
{
    public static void main(String[] args)
    {
        NamedCache cache = CacheFactory.getCache("SampleCache");

        String sql = "select id from persons order by id";
        Connection con = null;
        Statement s = null;
        ResultSet rs = null;
        int count = 0;
        Collection keys = new HashSet();

        try
        {
            // Look up the data source through WebLogic JNDI
            Hashtable<String, String> ht = new Hashtable<String, String>();
            ht.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            ht.put(Context.PROVIDER_URL, "t3://localhost:7001");
            Context ctx = new InitialContext(ht);
            javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup("ds");

            con = ds.getConnection();
            s = con.createStatement();
            rs = s.executeQuery(sql);
            System.out.println("Loading with SQL ");

            while (rs.next())
            {
                String key = rs.getString(1);
                System.out.println(key);
                keys.add(key);
                count++;

                // preload 1000 keys at a time; each invokeAll call asks the
                // storage nodes to load their share of this batch in parallel
                if ((count % 1000) == 0)
                {
                    cache.invokeAll(keys, new PreloadRequest());
                    keys.clear();
                }
            }

            // preload whatever remains in the final partial batch
            if (!keys.isEmpty())
            {
                cache.invokeAll(keys, new PreloadRequest());
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
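The flush-every-1000-keys loop above is just batching, and the chunking logic can be isolated and tested on its own (the batch size of 1000 matches the driver; everything else here is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Split keys into fixed-size batches; in the real loader each batch
    // would be handed to cache.invokeAll(batch, new PreloadRequest()).
    static <T> List<List<T>> chunk(List<T> keys, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            batches.add(new ArrayList<>(keys.subList(i, Math.min(i + batchSize, keys.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) keys.add(i);
        // 2500 keys with a batch size of 1000 -> batches of 1000, 1000 and 500
        System.out.println(chunk(keys, 1000).size() + " batches");
    }
}
```

Batching keeps each invokeAll call bounded in size instead of shipping one huge key set across the wire in a single request.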

Then the cache configuration must be set up accordingly:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>SampleCache</cache-name>
      <scheme-name>distributed-pof</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <!--
    Distributed caching scheme backed by the database via DBCacheStore.
    -->
    <distributed-scheme>
      <scheme-name>distributed-pof</scheme-name>
      <service-name>DistributedCache</service-name>

      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>

          <cachestore-scheme>
            <class-scheme>
              <class-name>dataload.DBCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>persons</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>

      <listener/>
      <autostart>true</autostart>
      <local-storage>true</local-storage>
    </distributed-scheme>
  </caching-schemes>
</cache-config>

Note that weblogic.jar and the dataload classes must be added to the classpath when starting the cache server, because DBCacheStore uses WebLogic JNDI to look up the data source.
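A cache-server launch command might look roughly like the following (all paths, the data-source setup, and the config file name are assumptions specific to your installation; only the main class com.tangosol.net.DefaultCacheServer and the weblogic.jar requirement come from the setup described above):

```shell
# Illustrative only: adjust paths to your Coherence / WebLogic install.
COHERENCE_HOME=/opt/coherence
WL_HOME=/opt/weblogic/wlserver

java -cp "$COHERENCE_HOME/lib/coherence.jar:$WL_HOME/server/lib/weblogic.jar:/path/to/dataload-classes" \
     -Dtangosol.coherence.cacheconfig=cache-config.xml \
     com.tangosol.net.DefaultCacheServer
```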

The output is as follows:

[Screenshot: the Coherence node on the JDeveloper side]

[Screenshot: the storage node on the Coherence server side]

[Screenshot: verifying via VisualVM that the data has been written into the cache]

Reposted from: https://www.cnblogs.com/ericnie/p/6125247.html
