Following the brief analysis of the Hive metastore table structure in the previous post, this time I will walk through the overall code structure based on the entity objects behind that data design. Let's start by opening the metadata directory and looking at its layout:

  As you can see, the hivemeta directory contains metastore (the client/server invocation logic), events (listener implementations for table-lifecycle checks, authorization, and so on), hooks (only the JDO connection related interfaces), parser (expression-tree parsing), spec (partition proxy classes), tools (JDO execute related methods), plus txn and model. Next we will go through the metadata code piece by piece, analyzing and annotating it:

  Haven't even expanded the packages and there are already that many classes? Feeling overwhelmed? Me too, but let's press on. At first it all looks like a tangled mess: what even is this stuff? Once we calm down, the place to start is the big Hive class, because it is the entry point for metastore metadata calls. The lifecycle we will trace is: creation and loading of the HiveMetaStoreClient, creation and loading of the HiveMetaStore server, createTable, dropTable, alterTable, createPartition, dropPartition, and alterPartition. Of course, this is only a small slice of the complete metadata layer.

  1. Creation and loading of the HiveMetaStoreClient

  Let's start working through the Hive class bit by bit:

private HiveConf conf = null;
private IMetaStoreClient metaStoreClient;
private UserGroupInformation owner;

// metastore calls timing information
private final Map<String, Long> metaCallTimeMap = new HashMap<String, Long>();

private static ThreadLocal<Hive> hiveDB = new ThreadLocal<Hive>() {
  @Override
  protected synchronized Hive initialValue() {
    return null;
  }

  @Override
  public synchronized void remove() {
    if (this.get() != null) {
      this.get().close();
    }
    super.remove();
  }
};

  Declared here are the HiveConf object, the metaStoreClient, the owning UserGroupInformation, and a call-time map that records how long each metastore action takes. The class also maintains a thread-local hiveDB; when the db is null, get() creates a fresh Hive object:

public static Hive get(HiveConf c, boolean needsRefresh) throws HiveException {
  Hive db = hiveDB.get();
  if (db == null || needsRefresh || !db.isCurrentUserOwner()) {
    if (db != null) {
      LOG.debug("Creating new db. db = " + db + ", needsRefresh = " + needsRefresh +
        ", db.isCurrentUserOwner = " + db.isCurrentUserOwner());
    }
    closeCurrent();
    c.set("fs.scheme.class", "dfs");
    Hive newdb = new Hive(c);
    hiveDB.set(newdb);
    return newdb;
  }
  db.conf = c;
  return db;
}
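The per-thread caching pattern above is easy to replicate outside Hive. Here is a minimal, self-contained sketch of the same idea; the Client class and its lifecycle methods are hypothetical stand-ins, not Hive APIs:

public class ClientHolder {
  /** Hypothetical stand-in for an expensive-to-build client such as Hive. */
  static class Client {
    void close() { /* release connections */ }
  }

  private static final ThreadLocal<Client> CACHE = new ThreadLocal<Client>();

  public static Client get(boolean needsRefresh) {
    Client c = CACHE.get();
    if (c == null || needsRefresh) {
      if (c != null) {
        c.close();          // drop the stale per-thread instance, as Hive.get() does
      }
      c = new Client();     // expensive construction happens at most once per thread
      CACHE.set(c);
    }
    return c;
  }
}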

  We soon find that functions are registered as soon as the Hive class is loaded. What is a function here? From the table-structure analysis last time, think of it as the metadata store for all UDF jars and the like. The code:

// register all permanent functions. need improvement
static {
  try {
    reloadFunctions();
  } catch (Exception e) {
    LOG.warn("Failed to access metastore. This class should not accessed in runtime.", e);
  }
}

public static void reloadFunctions() throws HiveException {
  // get a Hive object for the calls below
  Hive db = Hive.get();
  // iterate over every dbName
  for (String dbName : db.getAllDatabases()) {
    // look up every function registered under this db
    for (String functionName : db.getFunctions(dbName, "*")) {
      Function function = db.getFunction(dbName, functionName);
      try {
        // register() stores the function in a Map<String, FunctionInfo> inside the
        // Registry class, so the execution engine never has to query the database again
        FunctionRegistry.registerPermanentFunction(
            FunctionUtils.qualifyFunctionName(functionName, dbName), function.getClassName(),
            false, FunctionTask.toFunctionResource(function.getResourceUris()));
      } catch (Exception e) {
        LOG.warn("Failed to register persistent function " +
            functionName + ":" + function.getClassName() + ". Ignore and continue.");
      }
    }
  }
}
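The registry side is essentially a map keyed by the qualified function name. A minimal sketch of that idea, under my own names rather than Hive's actual Registry internals:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative cache keyed by "db.function", loosely modeled on FunctionRegistry. */
public class SimpleFunctionRegistry {
  /** Hypothetical metadata record for a registered function. */
  static class FunctionInfo {
    final String className;
    FunctionInfo(String className) { this.className = className; }
  }

  private final Map<String, FunctionInfo> functions = new ConcurrentHashMap<String, FunctionInfo>();

  /** Qualify the way FunctionUtils.qualifyFunctionName does: "db.name". */
  public void registerPermanentFunction(String dbName, String name, String className) {
    functions.put(dbName.toLowerCase() + "." + name.toLowerCase(), new FunctionInfo(className));
  }

  /** Engine-side lookup: no metastore round trip once the map is warm. */
  public FunctionInfo lookup(String dbName, String name) {
    return functions.get(dbName.toLowerCase() + "." + name.toLowerCase());
  }
}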

  The getMSC() method is then called to create the metadata client:

private IMetaStoreClient createMetaStoreClient() throws MetaException {

  // anonymous implementation of the HiveMetaHookLoader interface
  HiveMetaHookLoader hookLoader = new HiveMetaHookLoader() {
    @Override
    public HiveMetaHook getHook(
        org.apache.hadoop.hive.metastore.api.Table tbl)
        throws MetaException {

      try {
        if (tbl == null) {
          return null;
        }
        // load a storage-specific handler instance based on the table's kv
        // properties, e.g. hbase, redis or other pluggable external storage
        HiveStorageHandler storageHandler =
            HiveUtils.getStorageHandler(conf,
                tbl.getParameters().get(META_TABLE_STORAGE));
        if (storageHandler == null) {
          return null;
        }
        return storageHandler.getMetaHook();
      } catch (HiveException ex) {
        LOG.error(StringUtils.stringifyException(ex));
        throw new MetaException(
            "Failed to load storage handler:  " + ex.getMessage());
      }
    }
  };
  return RetryingMetaStoreClient.getProxy(conf, hookLoader, metaCallTimeMap,
      SessionHiveMetaStoreClient.class.getName());
}
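RetryingMetaStoreClient.getProxy wraps the real client in a JDK dynamic proxy that retries failed calls (and, as the metaCallTimeMap parameter suggests, times them). A minimal sketch of that retry-proxy idea over a generic interface; this is my own simplified version, not Hive's implementation:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

/** Generic retrying proxy, loosely modeled on RetryingMetaStoreClient. */
public class RetryingProxy implements InvocationHandler {
  private final Object delegate;
  private final int retries;

  private RetryingProxy(Object delegate, int retries) {
    this.delegate = delegate;
    this.retries = retries;
  }

  @SuppressWarnings("unchecked")
  public static <T> T wrap(Class<T> iface, T delegate, int retries) {
    return (T) Proxy.newProxyInstance(iface.getClassLoader(),
        new Class<?>[] { iface }, new RetryingProxy(delegate, retries));
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    Throwable last = null;
    for (int attempt = 0; attempt <= retries; attempt++) {
      try {
        return method.invoke(delegate, args);   // also a natural place to time the call
      } catch (InvocationTargetException e) {
        last = e.getCause();                    // remember the failure and retry
      }
    }
    throw last;
  }
}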

  2. Creation and loading of the HiveMetaStore server

  When the HiveMetaStoreClient is initialized, its constructor sets up the connection to HiveMetaStore:

public HiveMetaStoreClient(HiveConf conf, HiveMetaHookLoader hookLoader)
    throws MetaException {

  this.hookLoader = hookLoader;
  if (conf == null) {
    conf = new HiveConf(HiveMetaStoreClient.class);
  }
  this.conf = conf;
  filterHook = loadFilterHooks();

  // based on hive.metastore.uris in hive-site.xml: if the parameter is set this is
  // a remote connection, otherwise an embedded (local) one
  String msUri = conf.getVar(HiveConf.ConfVars.METASTOREURIS);
  localMetaStore = HiveConfUtil.isEmbeddedMetaStore(msUri);
  if (localMetaStore) {
    // an embedded metastore talks to HiveMetaStore directly
    client = HiveMetaStore.newRetryingHMSHandler("hive client", conf, true);
    isConnected = true;
    snapshotActiveConf();
    return;
  }

  // read the retry count and timeout from the configuration
  retries = HiveConf.getIntVar(conf, HiveConf.ConfVars.METASTORETHRIFTCONNECTIONRETRIES);
  retryDelaySeconds = conf.getTimeVar(
      ConfVars.METASTORE_CLIENT_CONNECT_RETRY_DELAY, TimeUnit.SECONDS);

  // assemble the metastore URIs
  if (conf.getVar(HiveConf.ConfVars.METASTOREURIS) != null) {
    String metastoreUrisString[] = conf.getVar(
        HiveConf.ConfVars.METASTOREURIS).split(",");
    metastoreUris = new URI[metastoreUrisString.length];
    try {
      int i = 0;
      for (String s : metastoreUrisString) {
        URI tmpUri = new URI(s);
        if (tmpUri.getScheme() == null) {
          throw new IllegalArgumentException("URI: " + s
              + " does not have a scheme");
        }
        metastoreUris[i++] = tmpUri;
      }
    } catch (IllegalArgumentException e) {
      throw (e);
    } catch (Exception e) {
      MetaStoreUtils.logAndThrowMetaException(e);
    }
  } else {
    LOG.error("NOT getting uris from conf");
    throw new MetaException("MetaStoreURIs not found in conf file");
  }
  // call open() to establish the connection
  open();
}

  As the code shows, a remote connection requires hive.metastore.uris to be set in hive-site.xml. Looks familiar? If your client and server are not on the same machine, this is the setting that turns on the remote connection; a quick illustration follows below.
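For context, the same URI can be set programmatically on a HiveConf (a minimal sketch; thrift://meta-host:9083 is a placeholder address):

import org.apache.hadoop.hive.conf.HiveConf;

public class RemoteMetastoreConfig {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // with a URI set, HiveMetaStoreClient goes remote; leave it empty for embedded mode
    conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://meta-host:9083");
    System.out.println(conf.getVar(HiveConf.ConfVars.METASTOREURIS));
  }
}

Now on to the open() method that actually creates the connection: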

private void open() throws MetaException {
  isConnected = false;
  TTransportException tte = null;
  // whether to use SASL
  boolean useSasl = conf.getBoolVar(ConfVars.METASTORE_USE_THRIFT_SASL);
  // if true, the metastore Thrift interface will use TFramedTransport;
  // when false (default) a standard TTransport is used
  boolean useFramedTransport = conf.getBoolVar(ConfVars.METASTORE_USE_THRIFT_FRAMED_TRANSPORT);
  // if true, the metastore Thrift interface will use TCompactProtocol; when false
  // (default) TBinaryProtocol is used -- the differences are a topic for another post
  boolean useCompactProtocol = conf.getBoolVar(ConfVars.METASTORE_USE_THRIFT_COMPACT_PROTOCOL);
  // read the socket timeout
  int clientSocketTimeout = (int) conf.getTimeVar(
      ConfVars.METASTORE_CLIENT_SOCKET_TIMEOUT, TimeUnit.MILLISECONDS);

  for (int attempt = 0; !isConnected && attempt < retries; ++attempt) {
    for (URI store : metastoreUris) {
      LOG.info("Trying to connect to metastore with URI " + store);
      try {
        transport = new TSocket(store.getHost(), store.getPort(), clientSocketTimeout);
        if (useSasl) {
          // Wrap thrift connection with SASL for secure connection.
          try {
            // create the HadoopThriftAuthBridge client
            HadoopThriftAuthBridge.Client authBridge =
                ShimLoader.getHadoopThriftAuthBridge().createClient();

            // authentication:
            // check if we should use delegation tokens to authenticate
            // the call below gets hold of the tokens if they are set up by hadoop
            // this should happen on the map/reduce tasks if the client added the
            // tokens into hadoop's credential store in the front end during job
            // submission.
            String tokenSig = conf.get("hive.metastore.token.signature");
            // tokenSig could be null
            tokenStrForm = Utils.getTokenStrForm(tokenSig);
            if (tokenStrForm != null) {
              // authenticate using delegation tokens via the "DIGEST" mechanism
              transport = authBridge.createClientTransport(null, store.getHost(),
                  "DIGEST", tokenStrForm, transport,
                  MetaStoreUtils.getMetaStoreSaslProperties(conf));
            } else {
              String principalConfig =
                  conf.getVar(HiveConf.ConfVars.METASTORE_KERBEROS_PRINCIPAL);
              transport = authBridge.createClientTransport(
                  principalConfig, store.getHost(), "KERBEROS", null,
                  transport, MetaStoreUtils.getMetaStoreSaslProperties(conf));
            }
          } catch (IOException ioe) {
            LOG.error("Couldn't create client transport", ioe);
            throw new MetaException(ioe.toString());
          }
        } else if (useFramedTransport) {
          transport = new TFramedTransport(transport);
        }
        final TProtocol protocol;
        // the detailed difference between the two protocols is left for later
        // (because I haven't read it yet, ha)
        if (useCompactProtocol) {
          protocol = new TCompactProtocol(transport);
        } else {
          protocol = new TBinaryProtocol(transport);
        }
        // create the ThriftHiveMetastore client
        client = new ThriftHiveMetastore.Client(protocol);
        try {
          transport.open();
          isConnected = true;
        } catch (TTransportException e) {
          tte = e;
          if (LOG.isDebugEnabled()) {
            LOG.warn("Failed to connect to the MetaStore Server...", e);
          } else {
            // Don't print full exception trace if DEBUG is not on.
            LOG.warn("Failed to connect to the MetaStore Server...");
          }
        }
        // load the user and group
        if (isConnected && !useSasl && conf.getBoolVar(ConfVars.METASTORE_EXECUTE_SET_UGI)) {
          // Call set_ugi, only in unsecure mode.
          try {
            UserGroupInformation ugi = Utils.getUGI();
            client.set_ugi(ugi.getUserName(), Arrays.asList(ugi.getGroupNames()));
          } catch (LoginException e) {
            LOG.warn("Failed to do login. set_ugi() is not successful, " +
                     "Continuing without it.", e);
          } catch (IOException e) {
            LOG.warn("Failed to find ugi of client set_ugi() is not successful, " +
                "Continuing without it.", e);
          } catch (TException e) {
            LOG.warn("set_ugi() not successful, Likely cause: new client talking to old server. "
                + "Continuing without it.", e);
          }
        }
      } catch (MetaException e) {
        LOG.error("Unable to connect to metastore with URI " + store
                  + " in attempt " + attempt, e);
      }
      if (isConnected) {
        break;
      }
    }
    // Wait before launching the next round of connection retries.
    if (!isConnected && retryDelaySeconds > 0) {
      try {
        LOG.info("Waiting " + retryDelaySeconds + " seconds before next connection attempt.");
        Thread.sleep(retryDelaySeconds * 1000);
      } catch (InterruptedException ignore) {}
    }
  }

  if (!isConnected) {
    throw new MetaException("Could not connect to meta store using any of the URIs provided." +
      " Most recent failure: " + StringUtils.stringifyException(tte));
  }

  snapshotActiveConf();

  LOG.info("Connected to metastore.");
}
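Before moving to the server side: the next paragraph notes that ThriftHiveMetastore is a class that nests its service interfaces. The generated-code layout looks roughly like this (a hand-written sketch of the pattern, not the actual generated file):

// Sketch of the Thrift-generated layout: a container class holding the
// synchronous interface plus a Client that implements it.
public class EchoService {
  public interface Iface {
    String echo(String msg);
  }

  public interface AsyncIface {
    void echo(String msg /* , AsyncMethodCallback handler */);
  }

  public static class Client implements Iface {
    @Override
    public String echo(String msg) {
      // a real Thrift client would serialize the call over a TProtocol here
      return msg;
    }
  }
}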

  This post sets the protocol internals aside for now. From the code we can see that the HiveMetaStore server side is reached through ThriftHiveMetastore; it is itself a class, but it defines the Iface and AsyncIface interfaces inside it, which makes implementing by inheritance convenient. Next, let's look at how HMSHandler is initialized. In the embedded (local) case, calling newRetryingHMSHandler initializes the HMSHandler directly:

public HMSHandler(String name, HiveConf conf, boolean init) throws MetaException {
  super(name);
  hiveConf = conf;
  if (init) {
    init();
  }
}

  Next, its init() method:

public void init() throws MetaException {
  // the className of the class that talks to the database: ObjectStore, the RawStore
  // implementation responsible for the JDO interaction with the database
  rawStoreClassName = hiveConf.getVar(HiveConf.ConfVars.METASTORE_RAW_STORE_IMPL);
  // load the listeners from hive.metastore.init.hooks; you can implement and plug in your own
  initListeners = MetaStoreUtils.getMetaStoreListeners(
      MetaStoreInitListener.class, hiveConf,
      hiveConf.getVar(HiveConf.ConfVars.METASTORE_INIT_HOOKS));
  for (MetaStoreInitListener singleInitListener : initListeners) {
    MetaStoreInitContext context = new MetaStoreInitContext();
    singleInitListener.onInit(context);
  }

  // initialize the alter implementation class
  String alterHandlerName = hiveConf.get("hive.metastore.alter.impl",
      HiveAlterHandler.class.getName());
  alterHandler = (AlterHandler) ReflectionUtils.newInstance(MetaStoreUtils.getClass(
      alterHandlerName), hiveConf);
  // initialize the warehouse
  wh = new Warehouse(hiveConf);

  // create the default db and users, and record the current connection URL
  synchronized (HMSHandler.class) {
    if (currentUrl == null || !currentUrl.equals(MetaStoreInit.getConnectionURL(hiveConf))) {
      createDefaultDB();
      createDefaultRoles();
      addAdminUsers();
      currentUrl = MetaStoreInit.getConnectionURL(hiveConf);
    }
  }

  // initialize metrics
  if (hiveConf.getBoolean("hive.metastore.metrics.enabled", false)) {
    try {
      Metrics.init();
    } catch (Exception e) {
      // log exception, but ignore inability to start
      LOG.error("error in Metrics init: " + e.getClass().getName() + " "
          + e.getMessage(), e);
    }
  }

  // initialize the pre-event listeners and event listeners
  preListeners = MetaStoreUtils.getMetaStoreListeners(MetaStorePreEventListener.class,
      hiveConf,
      hiveConf.getVar(HiveConf.ConfVars.METASTORE_PRE_EVENT_LISTENERS));
  listeners = MetaStoreUtils.getMetaStoreListeners(MetaStoreEventListener.class, hiveConf,
      hiveConf.getVar(HiveConf.ConfVars.METASTORE_EVENT_LISTENERS));
  listeners.add(new SessionPropertiesListener(hiveConf));
  endFunctionListeners = MetaStoreUtils.getMetaStoreListeners(
      MetaStoreEndFunctionListener.class, hiveConf,
      hiveConf.getVar(HiveConf.ConfVars.METASTORE_END_FUNCTION_LISTENERS));

  // regex validation of partition names, configurable via
  // hive.metastore.partition.name.whitelist.pattern
  String partitionValidationRegex =
      hiveConf.getVar(HiveConf.ConfVars.METASTORE_PARTITION_NAME_WHITELIST_PATTERN);
  if (partitionValidationRegex != null && !partitionValidationRegex.isEmpty()) {
    partitionValidationPattern = Pattern.compile(partitionValidationRegex);
  } else {
    partitionValidationPattern = null;
  }

  long cleanFreq = hiveConf.getTimeVar(ConfVars.METASTORE_EVENT_CLEAN_FREQ, TimeUnit.MILLISECONDS);
  if (cleanFreq > 0) {
    // In default config, there is no timer.
    Timer cleaner = new Timer("Metastore Events Cleaner Thread", true);
    cleaner.schedule(new EventCleanerTask(this), cleanFreq, cleanFreq);
  }
}

  So init() sets up the rawStore implementation that talks to the database, the Warehouse that performs the physical filesystem operations, and the events and listeners. The metadata lifecycle methods invoked through this interface then operate on tables.
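Since listeners are loaded purely from configuration, plugging in your own is straightforward. A minimal sketch of a custom listener (the class name and log output are mine; you would register the class via hive.metastore.event.listeners):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.metastore.MetaStoreEventListener;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.events.CreateTableEvent;

/** Hypothetical audit listener; wire it up via hive.metastore.event.listeners. */
public class AuditCreateTableListener extends MetaStoreEventListener {

  public AuditCreateTableListener(Configuration config) {
    super(config);
  }

  @Override
  public void onCreateTable(CreateTableEvent tableEvent) throws MetaException {
    // called by HMSHandler after create_table_core finishes (success flag included)
    System.out.println("table created: "
        + tableEvent.getTable().getDbName() + "."
        + tableEvent.getTable().getTableName()
        + ", status=" + tableEvent.getStatus());
  }
}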

  3. createTable

 Starting with the createTable method. The code:

public void createTable(String tableName, List<String> columns, List<String> partCols,
                        Class<? extends InputFormat> fileInputFormat,
                        Class<?> fileOutputFormat, int bucketCount, List<String> bucketCols,
                        Map<String, String> parameters) throws HiveException {
  if (columns == null) {
    throw new HiveException("columns not specified for table " + tableName);
  }

  Table tbl = new Table(tableName);
  // SD table attributes: record the table's input and output class names, which the
  // execution engine later loads by reflection when reading and writing
  tbl.setInputFormatClass(fileInputFormat.getName());
  tbl.setOutputFormatClass(fileOutputFormat.getName());
  // wrap each column's name and type in a FieldSchema and add it to the sd's column list
  for (String col : columns) {
    FieldSchema field = new FieldSchema(col, STRING_TYPE_NAME, "default");
    tbl.getCols().add(field);
  }
  // if partition columns were given at create time (say a dt column), record the
  // partition info; it ends up in the partition tables
  if (partCols != null) {
    for (String partCol : partCols) {
      FieldSchema part = new FieldSchema();
      part.setName(partCol);
      part.setType(STRING_TYPE_NAME); // default partition key
      tbl.getPartCols().add(part);
    }
  }
  // set the serialization lib
  tbl.setSerializationLib(LazySimpleSerDe.class.getName());
  // set the bucketing info
  tbl.setNumBuckets(bucketCount);
  tbl.setBucketCols(bucketCols);
  // set any extra kv parameters on the table
  if (parameters != null) {
    tbl.setParamters(parameters);
  }
  createTable(tbl);
}
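For context, a call to this helper might look like the following (a hedged sketch: the table name and columns are made up, and the text input/output format classes are the usual defaults):

import java.util.Arrays;
import java.util.HashMap;
import org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.mapred.TextInputFormat;

public class CreateTableExample {
  public static void main(String[] args) throws Exception {
    Hive db = Hive.get();   // the thread-local Hive entry point from earlier
    db.createTable("page_views",                       // created in the session's current db
        Arrays.asList("url", "referrer"),              // regular columns (typed STRING here)
        Arrays.asList("dt"),                           // partition column
        TextInputFormat.class,
        HiveIgnoreKeyTextOutputFormat.class,
        32,                                            // bucket count
        Arrays.asList("url"),                          // bucket columns
        new HashMap<String, String>());                // extra table parameters
  }
}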

  As the code shows, Hive builds a Table object, which you can treat as a model: it carries almost every attribute of the tables that hang off Tbls via the table_id foreign key (see the metastore table-structure post for details). Once assembled, it is passed to the createTable overload, which continues as follows:

public void createTable(Table tbl, boolean ifNotExists) throws HiveException {
  try {
    // fetch the current database from the SessionState again and set it (for safety)
    if (tbl.getDbName() == null || "".equals(tbl.getDbName().trim())) {
      tbl.setDbName(SessionState.get().getCurrentDatabase());
    }
    // validate every column attribute, e.g. illegal characters
    if (tbl.getCols().size() == 0 || tbl.getSd().getColsSize() == 0) {
      tbl.setFields(MetaStoreUtils.getFieldsFromDeserializer(tbl.getTableName(),
          tbl.getDeserializer()));
    }
    // validates the table's input/output formats and column attributes
    tbl.checkValidity();
    if (tbl.getParameters() != null) {
      tbl.getParameters().remove(hive_metastoreConstants.DDL_TIME);
    }
    org.apache.hadoop.hive.metastore.api.Table tTbl = tbl.getTTable();
    // authorization starts here, driven by the
    // hive.security.authorization.createtable.user.grants,
    // hive.security.authorization.createtable.group.grants and
    // hive.security.authorization.createtable.role.grants parameters -- Hive's own
    // notions of user, role and group
    PrincipalPrivilegeSet principalPrivs = new PrincipalPrivilegeSet();
    SessionState ss = SessionState.get();
    if (ss != null) {
      CreateTableAutomaticGrant grants = ss.getCreateTableGrants();
      if (grants != null) {
        principalPrivs.setUserPrivileges(grants.getUserGrants());
        principalPrivs.setGroupPrivileges(grants.getGroupGrants());
        principalPrivs.setRolePrivileges(grants.getRoleGrants());
        tTbl.setPrivileges(principalPrivs);
      }
    }
    // the client connects to the server to create the table
    getMSC().createTable(tTbl);
  } catch (AlreadyExistsException e) {
    if (!ifNotExists) {
      throw new HiveException(e);
    }
  } catch (Exception e) {
    throw new HiveException(e);
  }
}

  Next, the createTable method invoked on the HiveMetaStoreClient:

public void createTable(Table tbl, EnvironmentContext envContext) throws AlreadyExistsException,
    InvalidObjectException, MetaException, NoSuchObjectException, TException {
  // fetch the HiveMetaHook for this table, so storage-engine-specific loading and
  // validation can run before the create
  HiveMetaHook hook = getHook(tbl);
  if (hook != null) {
    hook.preCreateTable(tbl);
  }
  boolean success = false;
  try {
    // then call HiveMetaStore so the server performs the create against the database
    create_table_with_environment_context(tbl, envContext);
    if (hook != null) {
      hook.commitCreateTable(tbl);
    }
    success = true;
  } finally {
    // if the create failed, roll back
    if (!success && (hook != null)) {
      hook.rollbackCreateTable(tbl);
    }
  }
}

  A quick word on what the hook does. HiveMetaHook is an interface whose methods include preCreateTable, rollbackCreateTable, preDropTable and so on; its implementations perform the pre-create loading and validation for different storage types, plus the rollback on failure. The interface:

public interface HiveMetaHook {
  /**
   * Called before a new table definition is added to the metastore
   * during CREATE TABLE.
   *
   * @param table new table definition
   */
  public void preCreateTable(Table table)
    throws MetaException;

  /**
   * Called after failure adding a new table definition to the metastore
   * during CREATE TABLE.
   *
   * @param table new table definition
   */
  public void rollbackCreateTable(Table table)
    throws MetaException;

  public void preDropTable(Table table)
    throws MetaException;
  ...
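To make the hook's role concrete, here is a minimal sketch of a hook for a hypothetical external key-value store (the store itself is imaginary; real examples of this interface are the storage handlers' hooks, such as HBase's):

import org.apache.hadoop.hive.metastore.HiveMetaHook;
import org.apache.hadoop.hive.metastore.api.MetaException;
import org.apache.hadoop.hive.metastore.api.Table;

/** Hypothetical hook keeping an external KV store in sync with Hive DDL. */
public class KvStoreMetaHook implements HiveMetaHook {

  @Override
  public void preCreateTable(Table table) throws MetaException {
    // validate table parameters and create the backing structure in the external store
  }

  @Override
  public void rollbackCreateTable(Table table) throws MetaException {
    // the metastore insert failed: remove whatever preCreateTable created
  }

  @Override
  public void commitCreateTable(Table table) throws MetaException { }

  @Override
  public void preDropTable(Table table) throws MetaException { }

  @Override
  public void rollbackDropTable(Table table) throws MetaException { }

  @Override
  public void commitDropTable(Table table, boolean deleteData) throws MetaException {
    // drop the backing structure only once the metastore commit succeeded
  }
}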

  Then the server side: HiveMetaStore's createTable, i.e. create_table_core:

private void create_table_core(final RawStore ms, final Table tbl,
    final EnvironmentContext envContext)
    throws AlreadyExistsException, MetaException,
    InvalidObjectException, NoSuchObjectException {

  // regex check on the name for illegal characters
  if (!MetaStoreUtils.validateName(tbl.getTableName())) {
    throw new InvalidObjectException(tbl.getTableName()
        + " is not a valid object name");
  }
  // validation block: column names, column types and partition key names
  String validate = MetaStoreUtils.validateTblColumns(tbl.getSd().getCols());
  if (validate != null) {
    throw new InvalidObjectException("Invalid column " + validate);
  }
  if (tbl.getPartitionKeys() != null) {
    validate = MetaStoreUtils.validateTblColumns(tbl.getPartitionKeys());
    if (validate != null) {
      throw new InvalidObjectException("Invalid partition column " + validate);
    }
  }
  SkewedInfo skew = tbl.getSd().getSkewedInfo();
  if (skew != null) {
    validate = MetaStoreUtils.validateSkewedColNames(skew.getSkewedColNames());
    if (validate != null) {
      throw new InvalidObjectException("Invalid skew column " + validate);
    }
    validate = MetaStoreUtils.validateSkewedColNamesSubsetCol(
        skew.getSkewedColNames(), tbl.getSd().getCols());
    if (validate != null) {
      throw new InvalidObjectException("Invalid skew column " + validate);
    }
  }

  Path tblPath = null;
  boolean success = false, madeDir = false;
  try {
    // fire the pre-create events; the listeners shipped with the metastore include
    // DummyPreListener, AuthorizationPreEventListener, AlternateFailurePreListener
    // and MetaDataExportListener. What they are for deserves its own post on the
    // metastore's design patterns.
    firePreEvent(new PreCreateTableEvent(tbl, this));

    // open the transaction
    ms.openTransaction();

    // if the db does not exist, throw
    Database db = ms.getDatabase(tbl.getDbName());
    if (db == null) {
      throw new NoSuchObjectException("The database " + tbl.getDbName() + " does not exist");
    }

    // check whether the table already exists in this db
    if (is_table_exists(ms, tbl.getDbName(), tbl.getTableName())) {
      throw new AlreadyExistsException("Table " + tbl.getTableName()
          + " already exists");
    }

    // for anything that is not a virtual view, assemble the full tblPath:
    // fs.getUri().getScheme() + fs.getUri().getAuthority() + path.toUri().getPath()
    if (!TableType.VIRTUAL_VIEW.toString().equals(tbl.getTableType())) {
      if (tbl.getSd().getLocation() == null
          || tbl.getSd().getLocation().isEmpty()) {
        tblPath = wh.getTablePath(
            ms.getDatabase(tbl.getDbName()), tbl.getTableName());
      } else {
        // a location on a non-external table whose storage_handler kv is empty
        // only produces a warning
        if (!isExternal(tbl) && !MetaStoreUtils.isNonNativeTable(tbl)) {
          LOG.warn("Location: " + tbl.getSd().getLocation()
              + " specified for non-external table:" + tbl.getTableName());
        }
        tblPath = wh.getDnsPath(new Path(tbl.getSd().getLocation()));
      }
      // write the assembled tblPath into the sd's location
      tbl.getSd().setLocation(tblPath.toString());
    }

    // create the table's directory
    if (tblPath != null) {
      if (!wh.isDir(tblPath)) {
        if (!wh.mkdirs(tblPath, true)) {
          throw new MetaException(tblPath
              + " is not a directory or unable to create one");
        }
        madeDir = true;
      }
    }

    // honor the hive.stats.autogather setting
    if (HiveConf.getBoolVar(hiveConf, HiveConf.ConfVars.HIVESTATSAUTOGATHER) &&
        !MetaStoreUtils.isView(tbl)) {
      if (tbl.getPartitionKeysSize() == 0) { // Unpartitioned table
        MetaStoreUtils.updateUnpartitionedTableStatsFast(db, tbl, wh, madeDir);
      } else { // Partitioned table with no partitions.
        MetaStoreUtils.updateUnpartitionedTableStatsFast(db, tbl, wh, true);
      }
    }

    // set create time
    long time = System.currentTimeMillis() / 1000;
    tbl.setCreateTime((int) time);
    if (tbl.getParameters() == null ||
        tbl.getParameters().get(hive_metastoreConstants.DDL_TIME) == null) {
      tbl.putToParameters(hive_metastoreConstants.DDL_TIME, Long.toString(time));
    }

    // perform the createTable database operation
    ms.createTable(tbl);
    success = ms.commitTransaction();

  } finally {
    if (!success) {
      ms.rollbackTransaction();
      // if the create fell through for some reason, delete the directory we made
      if (madeDir) {
        wh.deleteDir(tblPath, true);
      }
    }
    // notify the post-create listeners, e.g. for notification events
    for (MetaStoreEventListener listener : listeners) {
      CreateTableEvent createTableEvent =
          new CreateTableEvent(tbl, success, this);
      createTableEvent.setEnvironmentContext(envContext);
      listener.onCreateTable(createTableEvent);
    }
  }
}

  More on the listeners later. Continuing straight down to ms.createTable: ms is the RawStore interface, which gathers all the lifecycle methods behind one uniform API. An excerpt:

public abstract Database getDatabase(String name)
    throws NoSuchObjectException;

public abstract boolean dropDatabase(String dbname) throws NoSuchObjectException, MetaException;

public abstract boolean alterDatabase(String dbname, Database db) throws NoSuchObjectException, MetaException;

public abstract List<String> getDatabases(String pattern) throws MetaException;

public abstract List<String> getAllDatabases() throws MetaException;

public abstract boolean createType(Type type);

public abstract Type getType(String typeName);

public abstract boolean dropType(String typeName);

public abstract void createTable(Table tbl) throws InvalidObjectException,
    MetaException;

public abstract boolean dropTable(String dbName, String tableName)
    throws MetaException, NoSuchObjectException, InvalidObjectException, InvalidInputException;

public abstract Table getTable(String dbName, String tableName)
    throws MetaException;
..................

  How is this wired up concretely? The metastore first calls getMS() to fetch the thread-local RawStore implementation:

public RawStore getMS() throws MetaException {
  // fetch the RawStore already bound to this thread
  RawStore ms = threadLocalMS.get();
  // if there is none, create the implementation and bind it to the thread
  if (ms == null) {
    ms = newRawStore();
    ms.verifySchema();
    threadLocalMS.set(ms);
    ms = threadLocalMS.get();
  }
  return ms;
}

  Curious what newRawStore actually does? Let's keep going:

public static RawStore getProxy(HiveConf hiveConf, Configuration conf, String rawStoreClassName,
    int id) throws MetaException {
  // load the baseClass by reflection, then build the proxy around an instance of it
  Class<? extends RawStore> baseClass = (Class<? extends RawStore>) MetaStoreUtils.getClass(
      rawStoreClassName);

  RawStoreProxy handler = new RawStoreProxy(hiveConf, conf, baseClass, id);

  // Look for interfaces on both the class and all base classes.
  return (RawStore) Proxy.newProxyInstance(RawStoreProxy.class.getClassLoader(),
      getAllInterfaces(baseClass), handler);
}

  So where does rawStoreClassName come from? It is loaded when HiveMetaStore initializes, from HiveConf's METASTORE_RAW_STORE_IMPL parameter, i.e. the RawStore implementation ObjectStore. Before digging into ObjectStore, a quick illustration of the proxy trick above.
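The proxy relies on collecting every interface the implementation class (and its superclasses) exposes. A minimal sketch of that helper and its use, under my own names and with a JDK class as the stand-in target:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.LinkedHashSet;
import java.util.Set;

public class InterfaceProxyDemo {
  /** Collect interfaces declared on the class and all of its superclasses. */
  static Class<?>[] getAllInterfaces(Class<?> baseClass) {
    Set<Class<?>> ifaces = new LinkedHashSet<Class<?>>();
    for (Class<?> c = baseClass; c != null; c = c.getSuperclass()) {
      for (Class<?> i : c.getInterfaces()) {
        ifaces.add(i);
      }
    }
    return ifaces.toArray(new Class<?>[0]);
  }

  public static void main(String[] args) {
    final StringBuilder target = new StringBuilder("hello");
    // proxy every interface StringBuilder implements, delegating each call through
    Object proxy = Proxy.newProxyInstance(
        InterfaceProxyDemo.class.getClassLoader(),
        getAllInterfaces(target.getClass()),
        new InvocationHandler() {
          @Override
          public Object invoke(Object p, Method m, Object[] a) throws Throwable {
            return m.invoke(target, a);   // pass straight through, like RawStoreProxy
          }
        });
    System.out.println(((CharSequence) proxy).length());   // prints 5
  }
}

With the RawStore implementation created, let's dig into ObjectStore: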

  

@Override
public void createTable(Table tbl) throws InvalidObjectException, MetaException {
  boolean commited = false;
  try {
    // open the transaction
    openTransaction();
    // the db and table are validated once more here (code omitted); why the checks
    // run a second time deserves more thought
    MTable mtbl = convertToMTable(tbl);
    // pm is the JDO PersistenceManager initialized when the ObjectStore was created;
    // this is where the Table object is submitted -- see how JDO model objects map
    // to the database for the details
    pm.makePersistent(mtbl);
    // wrap up and persist the privilege user/role/group objects
    PrincipalPrivilegeSet principalPrivs = tbl.getPrivileges();
    List<Object> toPersistPrivObjs = new ArrayList<Object>();
    if (principalPrivs != null) {
      int now = (int) (System.currentTimeMillis() / 1000);

      Map<String, List<PrivilegeGrantInfo>> userPrivs = principalPrivs.getUserPrivileges();
      putPersistentPrivObjects(mtbl, toPersistPrivObjs, now, userPrivs, PrincipalType.USER);

      Map<String, List<PrivilegeGrantInfo>> groupPrivs = principalPrivs.getGroupPrivileges();
      putPersistentPrivObjects(mtbl, toPersistPrivObjs, now, groupPrivs, PrincipalType.GROUP);

      Map<String, List<PrivilegeGrantInfo>> rolePrivs = principalPrivs.getRolePrivileges();
      putPersistentPrivObjects(mtbl, toPersistPrivObjs, now, rolePrivs, PrincipalType.ROLE);
    }
    pm.makePersistentAll(toPersistPrivObjs);
    commited = commitTransaction();
  } finally {
    // roll back on failure
    if (!commited) {
      rollbackTransaction();
    }
  }
}
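If you have not used JDO before, the persistence pattern ObjectStore leans on looks roughly like this. This is a generic javax.jdo sketch: the factory properties are placeholders and the persisted object stands in for a mapped model class such as MTable, so running it for real requires a DataNucleus setup and an actual mapped class:

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;

public class JdoSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    // connection URL, driver and mapping settings would go here (placeholders)
    props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
        "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
    PersistenceManager pm = pmf.getPersistenceManager();

    Transaction tx = pm.currentTransaction();
    try {
      tx.begin();
      Object mtbl = new Object();   // stand-in for a mapped model class like MTable
      pm.makePersistent(mtbl);      // rows are written when the transaction commits
      tx.commit();
    } finally {
      if (tx.isActive()) {
        tx.rollback();              // mirror ObjectStore's rollback-on-failure
      }
      pm.close();
    }
  }
}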

  4. dropTable

  Straight to the code, in the Hive class:

public void dropTable(String tableName, boolean ifPurge) throws HiveException {
  // Hive splits the name into a {dbName, tableName} array here
  String[] names = Utilities.getDbTableName(tableName);
  dropTable(names[0], names[1], true, true, ifPurge);
}

  Why this treatment? Because the SQL may be either drop table dbName.tableName or drop table tableName. The db and table names are assembled here; if the statement was drop table tableName, the dbName comes from the current session:

public static String[] getDbTableName(String dbtable) throws SemanticException {
  // take the dbName from the current SessionState
  return getDbTableName(SessionState.get().getCurrentDatabase(), dbtable);
}

public static String[] getDbTableName(String defaultDb, String dbtable) throws SemanticException {
  if (dbtable == null) {
    return new String[2];
  }
  String[] names = dbtable.split("\\.");
  switch (names.length) {
    case 2:
      return names;
    // a single component gets reassembled with the default db
    case 1:
      return new String[] {defaultDb, dbtable};
    default:
      throw new SemanticException(ErrorMsg.INVALID_TABLE_NAME, dbtable);
  }
}
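The behavior in both shapes, as a tiny self-contained demo (the splitting rule is reproduced locally; "default" stands for whatever the session's current database happens to be):

public class DbTableNameDemo {
  /** Same splitting rule as Utilities.getDbTableName, with the db name passed in. */
  static String[] split(String defaultDb, String dbtable) {
    String[] names = dbtable.split("\\.");
    if (names.length == 2) return names;
    if (names.length == 1) return new String[] {defaultDb, dbtable};
    throw new IllegalArgumentException("invalid table name: " + dbtable);
  }

  public static void main(String[] args) {
    System.out.println(java.util.Arrays.toString(split("default", "sales.orders"))); // [sales, orders]
    System.out.println(java.util.Arrays.toString(split("default", "orders")));       // [default, orders]
  }
}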

  Then getMSC() routes into HiveMetaStoreClient's dropTable:

public void dropTable(String dbname, String name, boolean deleteData,
    boolean ignoreUnknownTab, EnvironmentContext envContext) throws MetaException, TException,
    NoSuchObjectException, UnsupportedOperationException {
  Table tbl;
  try {
    // fetch the whole Table object by dbName and tableName, i.e. all the metadata
    // stored for this table
    tbl = getTable(dbname, name);
  } catch (NoSuchObjectException e) {
    if (!ignoreUnknownTab) {
      throw e;
    }
    return;
  }
  // judged by the table type: index tables may not be dropped directly
  if (isIndexTable(tbl)) {
    throw new UnsupportedOperationException("Cannot drop index tables");
  }
  // same getHook as at create time: fetch the hook for this table's storage
  HiveMetaHook hook = getHook(tbl);
  if (hook != null) {
    hook.preDropTable(tbl);
  }
  boolean success = false;
  try {
    // call the HiveMetaStore server's dropTable
    drop_table_with_environment_context(dbname, name, deleteData, envContext);
    if (hook != null) {
      hook.commitDropTable(tbl, deleteData);
    }
    success = true;
  } catch (NoSuchObjectException e) {
    if (!ignoreUnknownTab) {
      throw e;
    }
  } finally {
    if (!success && (hook != null)) {
      hook.rollbackDropTable(tbl);
    }
  }
}

  Now the interesting part: what the HiveMetaStore server does:

private boolean drop_table_core(final RawStore ms, final String dbname, final String name,
    final boolean deleteData, final EnvironmentContext envContext,
    final String indexName) throws NoSuchObjectException,
    MetaException, IOException, InvalidObjectException, InvalidInputException {
  boolean success = false;
  boolean isExternal = false;
  Path tblPath = null;
  List<Path> partPaths = null;
  Table tbl = null;
  boolean ifPurge = false;
  try {
    ms.openTransaction();
    // fetch the whole Table object
    tbl = get_table_core(dbname, name);
    if (tbl == null) {
      throw new NoSuchObjectException(name + " doesn't exist");
    }
    // empty sd data means the table metadata is corrupted
    if (tbl.getSd() == null) {
      throw new MetaException("Table metadata is corrupted");
    }
    ifPurge = isMustPurge(envContext, tbl);

    firePreEvent(new PreDropTableEvent(tbl, deleteData, this));

    // if the table has indexes, they must be dropped first
    boolean isIndexTable = isIndexTable(tbl);
    if (indexName == null && isIndexTable) {
      throw new RuntimeException(
          "The table " + name + " is an index table. Please do drop index instead.");
    }

    // for a non-index table, drop the index metadata
    if (!isIndexTable) {
      try {
        List<Index> indexes = ms.getIndexes(dbname, name, Short.MAX_VALUE);
        while (indexes != null && indexes.size() > 0) {
          for (Index idx : indexes) {
            this.drop_index_by_name(dbname, name, idx.getIndexName(), true);
          }
          indexes = ms.getIndexes(dbname, name, Short.MAX_VALUE);
        }
      } catch (TException e) {
        throw new MetaException(e.getMessage());
      }
    }

    // is it an external table?
    isExternal = isExternal(tbl);
    if (tbl.getSd().getLocation() != null) {
      tblPath = new Path(tbl.getSd().getLocation());
      if (!wh.isWritable(tblPath.getParent())) {
        String target = indexName == null ? "Table" : "Index table";
        throw new MetaException(target + " metadata not deleted since " +
            tblPath.getParent() + " is not writable by " +
            hiveConf.getUser());
      }
    }

    checkTrashPurgeCombination(tblPath, dbname + "." + name, ifPurge);

    // collect the location paths of all partitions. Oddly, instead of passing the
    // Table object in, this method re-fetches the table; it also re-checks write
    // permission on the parent directory
    partPaths = dropPartitionsAndGetLocations(ms, dbname, name, tblPath,
        tbl.getPartitionKeys(), deleteData && !isExternal);

    // call the ObjectStore to delete the metadata
    if (!ms.dropTable(dbname, name)) {
      String tableName = dbname + "." + name;
      throw new MetaException(indexName == null ? "Unable to drop table " + tableName :
          "Unable to drop index table " + tableName + " for index " + indexName);
    }
    success = ms.commitTransaction();
  } finally {
    if (!success) {
      ms.rollbackTransaction();
    } else if (deleteData && !isExternal) {
      // delete the physical partitions
      deletePartitionData(partPaths, ifPurge);
      // delete the table directory
      deleteTableData(tblPath, ifPurge);
      // ok even if the data is not deleted
    }
    // listener handling
    for (MetaStoreEventListener listener : listeners) {
      DropTableEvent dropTableEvent = new DropTableEvent(tbl, success, deleteData, this);
      dropTableEvent.setEnvironmentContext(envContext);
      listener.onDropTable(dropTableEvent);
    }
  }
  return success;
}

  Digging further into ObjectStore's dropTable, we find that it once again fetches the whole Table object by dbName and tableName before deleting piece by piece. Perhaps different authors, perhaps safety considerations? Many Table objects that could be passed in through the interface are re-fetched instead; doesn't that add load on the database? The ObjectStore code:

public boolean dropTable(String dbName, String tableName) throws MetaException,
    NoSuchObjectException, InvalidObjectException, InvalidInputException {
  boolean success = false;
  try {
    openTransaction();
    // fetch the Table object again
    MTable tbl = getMTable(dbName, tableName);
    pm.retrieve(tbl);
    if (tbl != null) {
      // the code below queries and deletes all the grants
      List<MTablePrivilege> tabGrants = listAllTableGrants(dbName, tableName);
      if (tabGrants != null && tabGrants.size() > 0) {
        pm.deletePersistentAll(tabGrants);
      }
      List<MTableColumnPrivilege> tblColGrants = listTableAllColumnGrants(dbName,
          tableName);
      if (tblColGrants != null && tblColGrants.size() > 0) {
        pm.deletePersistentAll(tblColGrants);
      }

      List<MPartitionPrivilege> partGrants = this.listTableAllPartitionGrants(dbName, tableName);
      if (partGrants != null && partGrants.size() > 0) {
        pm.deletePersistentAll(partGrants);
      }

      List<MPartitionColumnPrivilege> partColGrants = listTableAllPartitionColumnGrants(dbName,
          tableName);
      if (partColGrants != null && partColGrants.size() > 0) {
        pm.deletePersistentAll(partColGrants);
      }

      // delete column statistics if present
      try {
        deleteTableColumnStatistics(dbName, tableName, null);
      } catch (NoSuchObjectException e) {
        LOG.info("Found no table level column statistics associated with db " + dbName +
            " table " + tableName + " record to delete");
      }

      // clean up the storage descriptor's column-descriptor (CDS) rows
      preDropStorageDescriptor(tbl.getSd());
      // delete everything hanging off the Table object itself
      pm.deletePersistentAll(tbl);
    }
    success = commitTransaction();
  } finally {
    if (!success) {
      rollbackTransaction();
    }
  }
  return success;
}

  5. AlterTable

  Now alterTable. It involves more logic because physical storage paths may have to change, so let's take it a piece at a time, starting again in the Hive class:

public void alterTable(String tblName, Table newTbl, boolean cascade)
    throws InvalidOperationException, HiveException {
  String[] names = Utilities.getDbTableName(tblName);
  try {
    // remove DDL_TIME from the table's kv; the alter will change it anyway
    if (newTbl.getParameters() != null) {
      newTbl.getParameters().remove(hive_metastoreConstants.DDL_TIME);
    }
    // validate dbName, tableName, columns, input/output classes and so on;
    // a failed check throws HiveException
    newTbl.checkValidity();
    // call alter_table
    getMSC().alter_table(names[0], names[1], newTbl.getTTable(), cascade);
  } catch (MetaException e) {
    throw new HiveException("Unable to alter table. " + e.getMessage(), e);
  } catch (TException e) {
    throw new HiveException("Unable to alter table. " + e.getMessage(), e);
  }
}

  HiveMetaStoreClient does nothing special here, so let's go straight to what the HiveMetaStore server does:

private void alter_table_core(final String dbname, final String name, final Table newTable,
    final EnvironmentContext envContext, final boolean cascade)
    throws InvalidOperationException, MetaException {
  startFunction("alter_table", ": db=" + dbname + " tbl=" + name
      + " newtbl=" + newTable.getTableName());

  // update DDL_TIME
  if (newTable.getParameters() == null ||
      newTable.getParameters().get(hive_metastoreConstants.DDL_TIME) == null) {
    newTable.putToParameters(hive_metastoreConstants.DDL_TIME, Long.toString(System
        .currentTimeMillis() / 1000));
  }
  boolean success = false;
  Exception ex = null;
  try {
    // fetch the existing Table object
    Table oldt = get_table_core(dbname, name);
    // fire the pre-alter event
    firePreEvent(new PreAlterTableEvent(oldt, newTable, this));
    // hand off the alter work to the handler, covered in detail below
    alterHandler.alterTable(getMS(), wh, dbname, name, newTable, cascade);
    success = true;

    // notify the listeners
    for (MetaStoreEventListener listener : listeners) {
      AlterTableEvent alterTableEvent =
          new AlterTableEvent(oldt, newTable, success, this);
      alterTableEvent.setEnvironmentContext(envContext);
      listener.onAlterTable(alterTableEvent);
    }
  } catch (NoSuchObjectException e) {
    // thrown when the table to be altered does not exist
    ex = e;
    throw new InvalidOperationException(e.getMessage());
  } catch (Exception e) {
    ex = e;
    if (e instanceof MetaException) {
      throw (MetaException) e;
    } else if (e instanceof InvalidOperationException) {
      throw (InvalidOperationException) e;
    } else {
      throw newMetaException(e);
    }
  } finally {
    endFunction("alter_table", success, ex, name);
  }
}

  Now the main course: what alterHandler actually does. Briefly, on its initialization: when HiveMetaStore inits, it reads the hive.metastore.alter.impl parameter, which defaults to HiveAlterHandler's class name. Here is its alterTable implementation. High voltage ahead, handle with care :)

public void alterTable(RawStore msdb, Warehouse wh, String dbname,
    String name, Table newt, boolean cascade) throws InvalidOperationException, MetaException {
  if (newt == null) {
    throw new InvalidOperationException("New table is invalid: " + newt);
  }

  // validate the new table name
  if (!MetaStoreUtils.validateName(newt.getTableName())) {
    throw new InvalidOperationException(newt.getTableName()
        + " is not a valid object name");
  }
  // validate the new column names and types
  String validate = MetaStoreUtils.validateTblColumns(newt.getSd().getCols());
  if (validate != null) {
    throw new InvalidOperationException("Invalid column " + validate);
  }

  Path srcPath = null;
  FileSystem srcFs = null;
  Path destPath = null;
  FileSystem destFs = null;

  boolean success = false;
  boolean moveData = false;
  boolean rename = false;
  Table oldt = null;
  List<ObjectPair<Partition, String>> altps = new ArrayList<ObjectPair<Partition, String>>();

  try {
    msdb.openTransaction();
    // lower-cased inline here -- you can tell the code had several authors
    name = name.toLowerCase();
    dbname = dbname.toLowerCase();

    // check whether the new table name already exists
    if (!newt.getTableName().equalsIgnoreCase(name)
        || !newt.getDbName().equalsIgnoreCase(dbname)) {
      if (msdb.getTable(newt.getDbName(), newt.getTableName()) != null) {
        throw new InvalidOperationException("new table " + newt.getDbName()
            + "." + newt.getTableName() + " already exists");
      }
      rename = true;
    }

    // fetch the old table object
    oldt = msdb.getTable(dbname, name);
    if (oldt == null) {
      throw new InvalidOperationException("table " + newt.getDbName() + "."
          + newt.getTableName() + " doesn't exist");
    }

    // honor METASTORE_DISALLOW_INCOMPATIBLE_COL_TYPE_CHANGES (defaulting to false
    // here): when true, incompatible column type changes are rejected
    if (HiveConf.getBoolVar(hiveConf,
          HiveConf.ConfVars.METASTORE_DISALLOW_INCOMPATIBLE_COL_TYPE_CHANGES,
          false)) {
      // Throws InvalidOperationException if the new column types are not
      // compatible with the current column types.
      MetaStoreUtils.throwExceptionIfIncompatibleColTypeChange(
          oldt.getSd().getCols(), newt.getSd().getCols());
    }

    // cascade is passed through from Hive.alterTable, i.e. set by the calling
    // engine; it decides whether the partitions' metadata must be altered too
    if (cascade) {
      // if the new columns differ from the old ones, columns were added or removed
      if (MetaStoreUtils.isCascadeNeededInAlterTable(oldt, newt)) {
        // fetch all partitions for this dbName/tableName
        List<Partition> parts = msdb.getPartitions(dbname, name, -1);
        for (Partition part : parts) {
          List<FieldSchema> oldCols = part.getSd().getCols();
          part.getSd().setCols(newt.getSd().getCols());
          String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
          // the columns changed, so drop the existing column statistics
          updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
          // update the whole partition
          msdb.alterPartition(dbname, name, part.getValues(), part);
        }
      } else {
        LOG.warn("Alter table does not cascade changes to its partitions.");
      }
    }

    // check whether the partition keys changed, i.e. part names such as dt or hour
    boolean partKeysPartiallyEqual = checkPartialPartKeysEqual(oldt.getPartitionKeys(),
        newt.getPartitionKeys());
    // for anything that is not a virtual view, changed partition keys are an error
    if (!oldt.getTableType().equals(TableType.VIRTUAL_VIEW.toString())) {
      if (oldt.getPartitionKeys().size() != newt.getPartitionKeys().size()
          || !partKeysPartiallyEqual) {
        throw new InvalidOperationException(
            "partition keys can not be changed.");
      }
    }

    // if this is a rename, the table is not a view, the location is unchanged or
    // the new location is empty, and the table is not external, then the user is
    // asking the metastore to move the data to the location matching the new name:
    // the alter table rename case
    if (rename
        && !oldt.getTableType().equals(TableType.VIRTUAL_VIEW.toString())
        && (oldt.getSd().getLocation().compareTo(newt.getSd().getLocation()) == 0
          || StringUtils.isEmpty(newt.getSd().getLocation()))
        && !MetaStoreUtils.isExternalTable(oldt)) {

      // the old location
      srcPath = new Path(oldt.getSd().getLocation());
      srcFs = wh.getFs(srcPath);

      // that means user is asking metastore to move data to new location
      // corresponding to the new name
      // get new location
      Database db = msdb.getDatabase(newt.getDbName());
      Path databasePath = constructRenamedPath(wh.getDatabasePath(db), srcPath);
      destPath = new Path(databasePath, newt.getTableName());
      destFs = wh.getFs(destPath);

      // record the new table location for the update below
      newt.getSd().setLocation(destPath.toString());
      moveData = true;

      // verify the target: crossing file systems is not supported, and an existing
      // destination would mean overriding its data, which is not allowed
      if (!FileUtils.equalsFileSystem(srcFs, destFs)) {
        throw new InvalidOperationException("table new location " + destPath
            + " is on a different file system than the old location "
            + srcPath + ". This operation is not supported");
      }
      try {
        srcFs.exists(srcPath); // check that src exists and also checks
                               // permissions necessary
        if (destFs.exists(destPath)) {
          throw new InvalidOperationException("New location for this table "
              + newt.getDbName() + "." + newt.getTableName()
              + " already exists : " + destPath);
        }
      } catch (IOException e) {
        throw new InvalidOperationException("Unable to access new location "
            + destPath + " for table " + newt.getDbName() + "."
            + newt.getTableName());
      }
      String oldTblLocPath = srcPath.toUri().getPath();
      String newTblLocPath = destPath.toUri().getPath();

      // fetch all the partitions of the old table
      List<Partition> parts = msdb.getPartitions(dbname, name, -1);
      for (Partition part : parts) {
        String oldPartLoc = part.getSd().getLocation();
        // here the old table path is swapped for the new one inside each partition
        // location, and the partition metadata is updated
        if (oldPartLoc.contains(oldTblLocPath)) {
          URI oldUri = new Path(oldPartLoc).toUri();
          String newPath = oldUri.getPath().replace(oldTblLocPath, newTblLocPath);
          Path newPartLocPath = new Path(oldUri.getScheme(), oldUri.getAuthority(), newPath);
          altps.add(ObjectPair.create(part, part.getSd().getLocation()));
          part.getSd().setLocation(newPartLocPath.toString());
          String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
          try {
            // existing partition column stats is no longer valid, remove them
            msdb.deletePartitionColumnStatistics(dbname, name, oldPartName, part.getValues(), null);
          } catch (InvalidInputException iie) {
            throw new InvalidOperationException("Unable to update partition stats in table rename." + iie);
          }
          msdb.alterPartition(dbname, name, part.getValues(), part);
        }
      }
    // otherwise, update the stats where required
    } else if (MetaStoreUtils.requireCalStats(hiveConf, null, null, newt) &&
        (newt.getPartitionKeysSize() == 0)) {
      Database db = msdb.getDatabase(newt.getDbName());
      // Update table stats. For partitioned table, we update stats in
      // alterPartition()
      MetaStoreUtils.updateUnpartitionedTableStatsFast(db, newt, wh, false, true);
    }
    updateTableColumnStatsForAlterTable(msdb, oldt, newt);
    // now finally call alter table
    msdb.alterTable(dbname, name, newt);
    // commit the changes
    success = msdb.commitTransaction();
  } catch (InvalidObjectException e) {
    LOG.debug(e);
    throw new InvalidOperationException(
        "Unable to change partition or table."
            + " Check metastore logs for detailed stack." + e.getMessage());
  } catch (NoSuchObjectException e) {
    LOG.debug(e);
    throw new InvalidOperationException(
        "Unable to change partition or table. Database " + dbname + " does not exist"
            + " Check metastore logs for detailed stack." + e.getMessage());
  } finally {
    if (!success) {
      msdb.rollbackTransaction();
    }
    if (success && moveData) {
      // update the HDFS side: rename the old path to the new one via FileSystem.rename
      try {
        if (srcFs.exists(srcPath) && !srcFs.rename(srcPath, destPath)) {
          throw new IOException("Renaming " + srcPath + " to " + destPath + " failed");
        }
      } catch (IOException e) {
        LOG.error("Alter Table operation for " + dbname + "." + name + " failed.", e);
        boolean revertMetaDataTransaction = false;
        try {
          msdb.openTransaction();
          // note that alterTable runs once more here, reverting the metadata --
          // perhaps a JDO peculiarity, perhaps a safety measure?
          msdb.alterTable(newt.getDbName(), newt.getTableName(), oldt);
          for (ObjectPair<Partition, String> pair : altps) {
            Partition part = pair.getFirst();
            part.getSd().setLocation(pair.getSecond());
            msdb.alterPartition(newt.getDbName(), name, part.getValues(), part);
          }
          revertMetaDataTransaction = msdb.commitTransaction();
        } catch (Exception e1) {
          // we should log this for manual rollback by administrator
          LOG.error("Reverting metadata by HDFS operation failure failed During HDFS operation failed", e1);
          LOG.error("Table " + Warehouse.getQualifiedName(newt) +
              " should be renamed to " + Warehouse.getQualifiedName(oldt));
          LOG.error("Table " + Warehouse.getQualifiedName(newt) +
              " should have path " + srcPath);
          for (ObjectPair<Partition, String> pair : altps) {
            LOG.error("Partition " + Warehouse.getQualifiedName(pair.getFirst()) +
                " should have path " + pair.getSecond());
          }
          if (!revertMetaDataTransaction) {
            msdb.rollbackTransaction();
          }
        }
        throw new InvalidOperationException("Alter Table operation for " + dbname + "." + name +
          " failed to move data due to: '" + getSimpleMessage(e) + "' See hive log file for details.");
      }
    }
  }
  if (!success) {
    throw new MetaException("Committing the alter table transaction was not successful.");
  }
}

  6. createPartition
  Before any partition data is written, the partition's metadata is registered and (for a managed table) its physical directory is created. The Hive class code:

public Partition createPartition(Table tbl, Map<String, String> partSpec) throws HiveException {
  try {
    // build a new Partition: pass the Table object in and let the Partition
    // constructor initialize the partition's state
    return new Partition(tbl, getMSC().add_partition(
        Partition.createMetaPartitionObject(tbl, partSpec, null)));
  } catch (Exception e) {
    LOG.error(StringUtils.stringifyException(e));
    throw new HiveException(e);
  }
}
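A call site might look like this (a sketch: the table handle and the dt value are made up, and LinkedHashMap keeps the partition keys in declaration order):

import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.Partition;
import org.apache.hadoop.hive.ql.metadata.Table;

public class CreatePartitionExample {
  public static void main(String[] args) throws Exception {
    Hive db = Hive.get();
    Table tbl = db.getTable("demo_db", "page_views");   // table from the earlier example

    // one entry per partition column, keyed by the FieldSchema name
    Map<String, String> partSpec = new LinkedHashMap<String, String>();
    partSpec.put("dt", "2015-08-01");

    Partition p = db.createPartition(tbl, partSpec);
    System.out.println(p.getLocation());   // e.g. .../page_views/dt=2015-08-01
  }
}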

  The createMetaPartitionObject helper validates the incoming pieces and assembles the Partition object:

public static org.apache.hadoop.hive.metastore.api.Partition createMetaPartitionObject(
    Table tbl, Map<String, String> partSpec, Path location) throws HiveException {
  List<String> pvals = new ArrayList<String>();
  // walk the partition columns and check that each one is present in the partSpec map
  for (FieldSchema field : tbl.getPartCols()) {
    String val = partSpec.get(field.getName());
    if (val == null || val.isEmpty()) {
      throw new HiveException("partition spec is invalid; field "
          + field.getName() + " does not exist or is empty");
    }
    pvals.add(val);
  }

  // set the related attributes: DbName, TableName, the partition values, and the sd
  org.apache.hadoop.hive.metastore.api.Partition tpart =
      new org.apache.hadoop.hive.metastore.api.Partition();
  tpart.setDbName(tbl.getDbName());
  tpart.setTableName(tbl.getTableName());
  tpart.setValues(pvals);

  if (!tbl.isView()) {
    tpart.setSd(cloneSd(tbl));
    tpart.getSd().setLocation((location != null) ? location.toString() : null);
  }
  return tpart;
}

  The client then calls add_partition on the metastore service with this object (after a deep copy, which I won't detail here). Straight to what the server does:

private Partition add_partition_core(final RawStore ms,
    final Partition part, final EnvironmentContext envContext)
    throws InvalidObjectException, AlreadyExistsException, MetaException, TException {
  boolean success = false;
  Table tbl = null;
  try {
    ms.openTransaction();
    // fetch the whole Table object by DbName and TableName
    tbl = ms.getTable(part.getDbName(), part.getTableName());
    if (tbl == null) {
      throw new InvalidObjectException(
          "Unable to add partition because table or database do not exist");
    }

    // fire the pre-add event
    firePreEvent(new PreAddPartitionEvent(tbl, part, this));

    // before creating, check whether the partition already exists in the metadata
    boolean shouldAdd = startAddPartition(ms, part, false);
    assert shouldAdd; // start would throw if it already existed here

    // create the partition directory
    boolean madeDir = createLocationForAddedPartition(tbl, part);
    try {
      // load some kv info
      initializeAddedPartition(tbl, part, madeDir);
      // write the metadata
      success = ms.addPartition(part);
    } finally {
      if (!success && madeDir) {
        // on failure, remove the physical directory
        wh.deleteDir(new Path(part.getSd().getLocation()), true);
      }
    }
    // we proceed only if we'd actually succeeded anyway, otherwise,
    // we'd have thrown an exception
    success = success && ms.commitTransaction();
  } finally {
    if (!success) {
      ms.rollbackTransaction();
    }
    fireMetaStoreAddPartitionEvent(tbl, Arrays.asList(part), envContext, success);
  }
  return part;
}

  One design point worth noting: as the earlier table-structure post showed, the part name is never stored directly; the keys and values live in separate kv tables. With that in mind, createLocationForAddedPartition:

private boolean createLocationForAddedPartition(
    final Table tbl, final Partition part) throws MetaException {
  Path partLocation = null;
  String partLocationStr = null;
  // if the sd is not null, take its location as the starting point
  if (part.getSd() != null) {
    partLocationStr = part.getSd().getLocation();
  }

  // if it is null, assemble the partition location from scratch
  if (partLocationStr == null || partLocationStr.isEmpty()) {
    // set default location if not specified and this is
    // a physical table partition (not a view)
    if (tbl.getSd().getLocation() != null) {
      // append the partition path to the table root to form the full location
      partLocation = new Path(tbl.getSd().getLocation(), Warehouse
          .makePartName(tbl.getPartitionKeys(), part.getValues()));
    }
  } else {
    if (tbl.getSd().getLocation() == null) {
      throw new MetaException("Cannot specify location for a view partition");
    }
    partLocation = wh.getDnsPath(new Path(partLocationStr));
  }

  boolean result = false;
  // write the location into the sd table
  if (partLocation != null) {
    part.getSd().setLocation(partLocation.toString());

    // Check to see if the directory already exists before calling
    // mkdirs() because if the file system is read-only, mkdirs will
    // throw an exception even if the directory already exists.
    if (!wh.isDir(partLocation)) {
      if (!wh.mkdirs(partLocation, true)) {
        throw new MetaException(partLocation
            + " is not a directory or unable to create one");
      }
      result = true;
    }
  }
  return result;
}
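The directory layout comes from Warehouse.makePartName, which joins key=value pairs with slashes (and escapes unsafe characters). A hand-rolled equivalent for intuition (simplified: no escaping):

import java.util.LinkedHashMap;
import java.util.Map;

public class PartNameDemo {
  /** Simplified version of Warehouse.makePartName: key1=val1/key2=val2/... */
  static String makePartName(Map<String, String> spec) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : spec.entrySet()) {
      if (sb.length() > 0) {
        sb.append('/');
      }
      sb.append(e.getKey()).append('=').append(e.getValue());   // real code escapes both sides
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    Map<String, String> spec = new LinkedHashMap<String, String>();
    spec.put("dt", "2015-08-01");
    spec.put("hour", "00");
    System.out.println(makePartName(spec));   // dt=2015-08-01/hour=00
  }
}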


  7. dropPartition

  For dropping a partition we'll skip the Hive class and go straight to the HiveMetaStore server:

    private boolean drop_partition_common(RawStore ms, String db_name, String tbl_name,
        List<String> part_vals, final boolean deleteData, final EnvironmentContext envContext)
        throws MetaException, NoSuchObjectException, IOException, InvalidObjectException,
        InvalidInputException {
      boolean success = false;
      Path partPath = null;
      Table tbl = null;
      Partition part = null;
      boolean isArchived = false;
      Path archiveParentDir = null;
      boolean mustPurge = false;

      try {
        ms.openTransaction();
        // fetch the whole Partition by dbName, tableName and part_vals
        part = ms.getPartition(db_name, tbl_name, part_vals);
        // fetch the whole Table object
        tbl = get_table_core(db_name, tbl_name);
        firePreEvent(new PreDropPartitionEvent(tbl, part, deleteData, this));
        mustPurge = isMustPurge(envContext, tbl);

        if (part == null) {
          throw new NoSuchObjectException("Partition doesn't exist. " + part_vals);
        }
        // haven't dug into archived partitions yet
        isArchived = MetaStoreUtils.isArchived(part);
        if (isArchived) {
          archiveParentDir = MetaStoreUtils.getOriginalLocation(part);
          verifyIsWritablePath(archiveParentDir);
          checkTrashPurgeCombination(archiveParentDir, db_name + "." + tbl_name + "." + part_vals, mustPurge);
        }
        if (!ms.dropPartition(db_name, tbl_name, part_vals)) {
          throw new MetaException("Unable to drop partition");
        }
        success = ms.commitTransaction();
        if ((part.getSd() != null) && (part.getSd().getLocation() != null)) {
          partPath = new Path(part.getSd().getLocation());
          verifyIsWritablePath(partPath);
          checkTrashPurgeCombination(partPath, db_name + "." + tbl_name + "." + part_vals, mustPurge);
        }
      } finally {
        if (!success) {
          ms.rollbackTransaction();
        } else if (deleteData && ((partPath != null) || (archiveParentDir != null))) {
          if (tbl != null && !isExternal(tbl)) {
            if (mustPurge) {
              LOG.info("dropPartition() will purge " + partPath + " directly, skipping trash.");
            } else {
              LOG.info("dropPartition() will move " + partPath + " to trash-directory.");
            }
            // delete the partition data
            // Archived partitions have har:/to_har_file as their location.
            // The original directory was saved in params
            if (isArchived) {
              assert (archiveParentDir != null);
              wh.deleteDir(archiveParentDir, true, mustPurge);
            } else {
              assert (partPath != null);
              wh.deleteDir(partPath, true, mustPurge);
              deleteParentRecursive(partPath.getParent(), part_vals.size() - 1, mustPurge);
            }
            // ok even if the data is not deleted
          }
        }
        for (MetaStoreEventListener listener : listeners) {
          DropPartitionEvent dropPartitionEvent =
              new DropPartitionEvent(tbl, part, success, deleteData, this);
          dropPartitionEvent.setEnvironmentContext(envContext);
          listener.onDropPartition(dropPartitionEvent);
        }
      }
      return true;
    }
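  For completeness, this server-side path is normally reached through the Thrift client rather than called directly. Below is a minimal sketch, assuming a running metastore configured via hive-site.xml on the classpath and a hypothetical partitioned table default.t1; the table name and partition values are illustrative only:

import java.util.Arrays;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;

// Sketch: the client call that ends up in drop_partition_common on the server.
public class DropPartitionExample {
  public static void main(String[] args) throws Exception {
    HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
    try {
      // deleteData = true also removes the partition directory for managed
      // tables; as seen above, the data is deleted only after the metadata
      // transaction commits successfully
      client.dropPartition("default", "t1",
          Arrays.asList("2018-02-22", "09"), true);
    } finally {
      client.close();
    }
  }
}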

  8、alterPartition

  alterPartition involves both validation and changes to the underlying file directories, so we go straight to rename_partition in HiveMetaStore:

    private void rename_partition(final String db_name, final String tbl_name,
        final List<String> part_vals, final Partition new_part,
        final EnvironmentContext envContext)
        throws InvalidOperationException, MetaException, TException {
      // log the call
      startTableFunction("alter_partition", db_name, tbl_name);

      if (LOG.isInfoEnabled()) {
        LOG.info("New partition values:" + new_part.getValues());
        if (part_vals != null && part_vals.size() > 0) {
          LOG.info("Old Partition values:" + part_vals);
        }
      }

      Partition oldPart = null;
      Exception ex = null;
      try {
        firePreEvent(new PreAlterPartitionEvent(db_name, tbl_name, part_vals, new_part, this));
        // validate the characters of the new partition name
        if (part_vals != null && !part_vals.isEmpty()) {
          MetaStoreUtils.validatePartitionNameCharacters(new_part.getValues(),
              partitionValidationPattern);
        }
        // alterHandler.alterPartition performs the physical rename of the
        // partition as well as the metadata update
        oldPart = alterHandler.alterPartition(getMS(), wh, db_name, tbl_name, part_vals, new_part);

        // Only fetch the table if we actually have a listener
        Table table = null;
        for (MetaStoreEventListener listener : listeners) {
          if (table == null) {
            table = getMS().getTable(db_name, tbl_name);
          }
          AlterPartitionEvent alterPartitionEvent =
              new AlterPartitionEvent(oldPart, new_part, table, true, this);
          alterPartitionEvent.setEnvironmentContext(envContext);
          listener.onAlterPartition(alterPartitionEvent);
        }
      } catch (InvalidObjectException e) {
        ex = e;
        throw new InvalidOperationException(e.getMessage());
      } catch (AlreadyExistsException e) {
        ex = e;
        throw new InvalidOperationException(e.getMessage());
      } catch (Exception e) {
        ex = e;
        if (e instanceof MetaException) {
          throw (MetaException) e;
        } else if (e instanceof InvalidOperationException) {
          throw (InvalidOperationException) e;
        } else if (e instanceof TException) {
          throw (TException) e;
        } else {
          // newMetaException is a HiveMetaStore helper that wraps an
          // arbitrary exception into a MetaException
          throw newMetaException(e);
        }
      } finally {
        endFunction("alter_partition", oldPart != null, ex, tbl_name);
      }
      return;
    }
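  On the client side, this handler is typically reached through IMetaStoreClient.renamePartition, which ships the old partition values together with the modified Partition object. A hedged sketch under the same assumptions as before (running metastore, hypothetical table default.t1); the exact client method name can differ between Hive versions:

import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.Partition;

// Sketch: rename a partition by sending old values plus the modified Partition.
public class RenamePartitionExample {
  public static void main(String[] args) throws Exception {
    HiveMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
    try {
      List<String> oldVals = Arrays.asList("2018-02-22", "09");
      Partition p = client.getPartition("default", "t1", oldVals);
      p.setValues(Arrays.asList("2018-02-23", "10")); // the new key values
      // old part_vals + modified Partition => server-side rename_partition
      client.renamePartition("default", "t1", oldVals, p);
    } finally {
      client.close();
    }
  }
}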

  Here let's take a closer look at the alterHandler.alterPartition method. Brace yourself:

  public Partition alterPartition(final RawStore msdb, Warehouse wh, final String dbname,
      final String name, final List<String> part_vals, final Partition new_part)
      throws InvalidOperationException, InvalidObjectException, AlreadyExistsException,
      MetaException {
    boolean success = false;

    Path srcPath = null;
    Path destPath = null;
    FileSystem srcFs = null;
    FileSystem destFs = null;
    Partition oldPart = null;
    String oldPartLoc = null;
    String newPartLoc = null;

    // refresh the DDL time of the new partition
    if (new_part.getParameters() == null ||
        new_part.getParameters().get(hive_metastoreConstants.DDL_TIME) == null ||
        Integer.parseInt(new_part.getParameters().get(hive_metastoreConstants.DDL_TIME)) == 0) {
      new_part.putToParameters(hive_metastoreConstants.DDL_TIME, Long.toString(System
          .currentTimeMillis() / 1000));
    }
    // fetch the whole Table object by dbName and tableName
    Table tbl = msdb.getTable(dbname, name);
    // if part_vals is null or empty, only non-key partition metadata is being
    // modified, so msdb.alterPartition can update it directly
    if (part_vals == null || part_vals.size() == 0) {
      try {
        oldPart = msdb.getPartition(dbname, name, new_part.getValues());
        if (MetaStoreUtils.requireCalStats(hiveConf, oldPart, new_part, tbl)) {
          MetaStoreUtils.updatePartitionStatsFast(new_part, wh, false, true);
        }
        updatePartColumnStats(msdb, dbname, name, new_part.getValues(), new_part);
        msdb.alterPartition(dbname, name, new_part.getValues(), new_part);
      } catch (InvalidObjectException e) {
        throw new InvalidOperationException("alter is not possible");
      } catch (NoSuchObjectException e) {
        // old partition does not exist
        throw new InvalidOperationException("alter is not possible");
      }
      return oldPart;
    }
    // rename partition
    try {
      msdb.openTransaction();
      try {
        // fetch the old Partition
        oldPart = msdb.getPartition(dbname, name, part_vals);
      } catch (NoSuchObjectException e) {
        // this means there is no existing partition
        throw new InvalidObjectException(
            "Unable to rename partition because old partition does not exist");
      }
      Partition check_part = null;
      try {
        // look up a partition with the new partition values
        check_part = msdb.getPartition(dbname, name, new_part.getValues());
      } catch (NoSuchObjectException e) {
        // this means there is no existing partition
        check_part = null;
      }
      // if check_part was found, the target partition already exists
      if (check_part != null) {
        throw new AlreadyExistsException("Partition already exists:" + dbname + "." + name + "."
            + new_part.getValues());
      }
      // validate the table
      if (tbl == null) {
        throw new InvalidObjectException(
            "Unable to rename partition because table or database do not exist");
      }

      // for an external table's partition, no filesystem operation is needed;
      // just update the metadata
      if (tbl.getTableType().equals(TableType.EXTERNAL_TABLE.toString())) {
        new_part.getSd().setLocation(oldPart.getSd().getLocation());
        String oldPartName = Warehouse.makePartName(tbl.getPartitionKeys(), oldPart.getValues());
        try {
          // existing partition column stats is no longer valid, remove
          msdb.deletePartitionColumnStatistics(dbname, name, oldPartName, oldPart.getValues(), null);
        } catch (NoSuchObjectException nsoe) {
          // ignore
        } catch (InvalidInputException iie) {
          throw new InvalidOperationException("Unable to update partition stats in table rename." + iie);
        }
        msdb.alterPartition(dbname, name, part_vals, new_part);
      } else {
        try {
          // the table's path on the filesystem
          destPath = new Path(wh.getTablePath(msdb.getDatabase(dbname), name),
              Warehouse.makePartName(tbl.getPartitionKeys(), new_part.getValues()));
          // compose the new partition's path
          destPath = constructRenamedPath(destPath, new Path(new_part.getSd().getLocation()));
        } catch (NoSuchObjectException e) {
          LOG.debug(e);
          throw new InvalidOperationException(
              "Unable to change partition or table. Database " + dbname + " does not exist"
              + " Check metastore logs for detailed stack." + e.getMessage());
        }
        // a non-null destPath means the file path changes
        if (destPath != null) {
          newPartLoc = destPath.toString();
          oldPartLoc = oldPart.getSd().getLocation();
          // the old partition path from the existing storage descriptor
          srcPath = new Path(oldPartLoc);

          LOG.info("srcPath:" + oldPartLoc);
          LOG.info("descPath:" + newPartLoc);
          srcFs = wh.getFs(srcPath);
          destFs = wh.getFs(destPath);
          // check whether srcFs and destFs are the same filesystem
          if (!FileUtils.equalsFileSystem(srcFs, destFs)) {
            throw new InvalidOperationException("table new location " + destPath
                + " is on a different file system than the old location "
                + srcPath + ". This operation is not supported");
          }
          try {
            // check that the old and new partition paths differ and that the
            // new partition path does not already exist
            srcFs.exists(srcPath); // check that src exists and also checks
            if (newPartLoc.compareTo(oldPartLoc) != 0 && destFs.exists(destPath)) {
              throw new InvalidOperationException("New location for this table "
                  + tbl.getDbName() + "." + tbl.getTableName()
                  + " already exists : " + destPath);
            }
          } catch (IOException e) {
            throw new InvalidOperationException("Unable to access new location "
                + destPath + " for partition " + tbl.getDbName() + "."
                + tbl.getTableName() + " " + new_part.getValues());
          }
          new_part.getSd().setLocation(newPartLoc);
          if (MetaStoreUtils.requireCalStats(hiveConf, oldPart, new_part, tbl)) {
            MetaStoreUtils.updatePartitionStatsFast(new_part, wh, false, true);
          }
          // build oldPartName, remove the old partition's column stats,
          // then write the new partition metadata
          String oldPartName = Warehouse.makePartName(tbl.getPartitionKeys(), oldPart.getValues());
          try {
            // existing partition column stats is no longer valid, remove
            msdb.deletePartitionColumnStatistics(dbname, name, oldPartName, oldPart.getValues(), null);
          } catch (NoSuchObjectException nsoe) {
            // ignore
          } catch (InvalidInputException iie) {
            throw new InvalidOperationException("Unable to update partition stats in table rename." + iie);
          }
          msdb.alterPartition(dbname, name, part_vals, new_part);
        }
      }

      success = msdb.commitTransaction();
    } finally {
      if (!success) {
        msdb.rollbackTransaction();
      }
      if (success && newPartLoc != null && newPartLoc.compareTo(oldPartLoc) != 0) {
        // rename the data directory
        try {
          if (srcFs.exists(srcPath)) {
            // the parent path may not exist yet: the engine may call
            // alterTable first and then alterPartition, at which point the
            // partition's parent path has not been created, so create it
            Path destParentPath = destPath.getParent();
            if (!wh.mkdirs(destParentPath, true)) {
              throw new IOException("Unable to create path " + destParentPath);
            }
            // rename the source path to the destination path
            wh.renameDir(srcPath, destPath, true);
            LOG.info("rename done!");
          }
        } catch (IOException e) {
          boolean revertMetaDataTransaction = false;
          try {
            msdb.openTransaction();
            msdb.alterPartition(dbname, name, new_part.getValues(), oldPart);
            revertMetaDataTransaction = msdb.commitTransaction();
          } catch (Exception e1) {
            LOG.error("Reverting metadata operation failed during HDFS operation failure", e1);
            if (!revertMetaDataTransaction) {
              msdb.rollbackTransaction();
            }
          }
          throw new InvalidOperationException("Unable to access old location "
              + srcPath + " for partition " + tbl.getDbName() + "."
              + tbl.getTableName() + " " + part_vals);
        }
      }
    }
    return oldPart;
  }
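  Note the ordering in the finally block above: the metadata transaction is committed first, and only then is the data directory renamed; if the filesystem step fails, the old partition metadata is written back in a fresh transaction. The following simplified sketch uses hypothetical interfaces, not Hive's actual classes, to isolate that compensation pattern:

import java.io.IOException;

// Hypothetical stand-ins for RawStore and Warehouse, for illustration only.
interface MetaStoreOps {
  void updateLocation(String partition, String location) throws Exception;
}

interface FileSystemOps {
  void rename(String src, String dest) throws IOException;
}

public class CompensatingRename {
  static void renamePartition(MetaStoreOps ms, FileSystemOps fs,
      String partition, String oldLoc, String newLoc) throws Exception {
    ms.updateLocation(partition, newLoc);   // step 1: commit the metadata change
    try {
      fs.rename(oldLoc, newLoc);            // step 2: move the data directory
    } catch (IOException e) {
      ms.updateLocation(partition, oldLoc); // compensate: restore old metadata
      throw new Exception("Unable to access old location " + oldLoc, e);
    }
  }
}

  The trade-off of this ordering is a short window in which the metadata already points at the new directory while the data still sits in the old one; a reader in that window sees an empty partition rather than inconsistent data.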

  That's all for now. We'll keep exploring the rest bit by bit in later posts~

Reposted from: https://www.cnblogs.com/yangsy0915/p/8456806.html
