1. The first error

10:28:19.302 AM  WARN  FSNamesystem
Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)

Cause:
The NameNode metadata is corrupted and needs to be repaired.

Fix:
Run the command: hadoop namenode -recover
Choose 'c' at every prompt.
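A minimal sketch of that recovery step. Running it as the hdfs user is my own assumption (on CDH the NameNode role runs as hdfs, and running recovery as root can leave root-owned files behind):

# stop the NameNode role first, then run recovery as the service user
sudo -u hdfs hadoop namenode -recover
# answer 'c' (continue) at each prompt, then restart the NameNode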

However, it then throws another exception.

20/07/22 10:33:47 INFO namenode.MetaRecoveryContext: RECOVERY FAILED: caught exception
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:377)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:228)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1152)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:799)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1437)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1531)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
20/07/22 10:33:47 ERROR namenode.NameNode: Failed to start namenode.
20/07/22 10:33:47 INFO util.ExitUtil: Exiting with status 1

Let's keep looking into the cause.

2. The second error

20/07/22 10:33:47 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /dfs/nn is in an inconsistent state: storage directory does not exist or is not accessible.

Run this:

hadoop namenode -format

Answer 'Y' all the way through, and it succeeds.
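A word of caution (my own note, not from the original logs): -format wipes the NameNode metadata, so it is only acceptable here because this is a fresh dev cluster. Running it as the service user also avoids the permission problem that shows up next:

# format as the hdfs user so the files under /dfs/nn are not owned by root
sudo -u hdfs hdfs namenode -format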

Then a new error appears.

3. The third error

Encountered exception loading fsimage
java.io.FileNotFoundException: /dfs/nn/current/VERSION (Permission denied)

Fix: hand the files back to the hdfs user:

chown -R hdfs:root /dfs/nn/*

That fixes it completely. OK, we'll be happily running jobs in no time.
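To double-check the fix took effect (a hedged verification step; the path matches the error above):

# the VERSION file that failed to load should now belong to hdfs
ls -l /dfs/nn/current/VERSION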

4. The fourth error

It looks like another permissions problem.

Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/LOCK: Permission denied

chmod 777 /var/lib/hadoop-yarn

chmod 777 /var/lib/hadoop-yarn/yarn-nm-recovery

chmod 777 /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state
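chmod 777 is a dev-environment shortcut. A tighter fix, assuming the NodeManager runs as the yarn user (the CDH default, but an assumption here), would be to hand the recovery directory back to it:

# give the NodeManager's state directory back to the yarn user
chown -R yarn:yarn /var/lib/hadoop-yarn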

5. The fifth error: HBase HMaster

Let's look at the fifth error:

10:51:25.326 AM  FATAL  HMaster
Unhandled exception. Starting shutdown.
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":root:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)

Since this is only a dev environment:

hdfs dfs -mkdir /hbase

hdfs dfs -chmod 777 /hbase

Just grant the widest permissions.

A restart should do it.
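For anything beyond a dev box, a narrower alternative (my suggestion, assuming HBase's root directory is the /hbase created above) is to give the directory to the hbase user instead of opening it to everyone:

# make hbase own its root directory rather than using 777
sudo -u hdfs hdfs dfs -chown -R hbase:hbase /hbase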

6. The sixth error

Error starting JobHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://node1:8020/user/history/done]

sudo -u hdfs hdfs dfs -chmod -R 777 /

Let's try granting permissions.
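That chmod -R 777 / is a sledgehammer; it only passes here because nothing important lives in this HDFS yet. A more targeted fix, assuming the JobHistoryServer runs as the mapred user with group hadoop (the CDH defaults, an assumption on my part), would be:

# create the history directory and hand it to the history server's user
sudo -u hdfs hdfs dfs -mkdir -p /user/history
sudo -u hdfs hdfs dfs -chown -R mapred:hadoop /user/history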

Good: HBase and YARN now start up successfully. Very happy.

But Hive is up to its own tricks next.

7. The seventh error: Hive

+ exec /opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/lib/hive/bin/hive --config /run/cloudera-scm-agent/process/205-hive-HIVEMETASTORE --service metastore -p 9083
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
	at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
	at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
	at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
	at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:420)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:449)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:344)

This problem basically comes down to Hive missing a jar.

Further down, the log shows:

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "BONECP" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.

So Hive is missing the third-party MySQL JDBC connector jar; we need to sort that out.
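A sketch of placing the driver, assuming the connector jar lives at /usr/share/java (the same path the Oozie startup log quotes later) and that the active parcel is symlinked at /opt/cloudera/parcels/CDH:

# put the MySQL JDBC driver on the metastore's classpath
cp /usr/share/java/mysql-connector-java.jar /opt/cloudera/parcels/CDH/lib/hive/lib/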

But after we drop the jar under CDH Hive's lib directory, there is one more error:

OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Exception in thread "main" MetaException(message:Version information not found in metastore. )
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7387)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7363)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:103)
	at com.sun.proxy.$Proxy7.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:664)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:712)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:511)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6511)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6506)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6756)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:226)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:141)

For this one, go find the schema scripts:

/opt/cloudera/parcels/CDH/lib/hive/scripts/metastore/upgrade/mysql

From that directory, enter mysql:

mysql

source hive-schema-1.1.0.mysql.sql;
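A fuller sketch of that session, with my assumptions called out: the metastore database is named metastore (it depends on your hive-site.xml / Cloudera Manager configuration), and the account used can reach it:

cd /opt/cloudera/parcels/CDH/lib/hive/scripts/metastore/upgrade/mysql
mysql -u root -p                            # credentials are an assumption
mysql> use metastore;                       # assumes the metastore DB is called "metastore"
mysql> source hive-schema-1.1.0.mysql.sql;  # loads the schema, including the VERSION table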

8. The eighth error

[Errno 2] No such file or directory: '/var/log/oozie/oozie-cmf-oozie-OOZIE_SERVER-node1.log.out'

Let's just grant permissions and see; that should work. But then:

rm: cannot remove /var/lib/oozie/tomcat-deployment/webapps/oozie/docs: Permission denied
cp: failed to access /var/lib/oozie/tomcat-deployment/webapps/oozie/docs: Permission denied
mkdir: cannot create directory /var/lib/oozie: Permission denied
cp: failed to access /var/lib/oozie/tomcat-deployment/webapps/oozie/WEB-INF: Permission denied
chown: cannot access /var/lib/oozie/tomcat-deployment: Permission denied
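All of these point at the oozie user not owning its own directories. A hedged fix based on the paths in the errors (assuming the Oozie server runs as the oozie user, the usual CDH arrangement):

# recreate the missing log path and give oozie back its directories
mkdir -p /var/log/oozie
chown -R oozie:oozie /var/log/oozie /var/lib/oozie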

9. The ninth error

It's probably the MySQL driver missing again; don't panic.

CMF_CONF_DIR=/etc/cloudera-scm-agent
Copying JDBC jar from /usr/share/java/mysql-connector-java.jar to /var/lib/oozie
ERROR: Oozie could not be started
REASON: org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
Stacktrace:
-----------------------------------------------------------------
org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
	at org.apache.oozie.service.Services.loadServices(Services.java:309)
	at org.apache.oozie.service.Services.init(Services.java:213)
	at org.apache.oozie.servlet.ServicesLoader.contextInitialized(ServicesLoader.java:46)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
	at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
	at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:944)
	at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:779)
	at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:505)
	at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
	at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
	at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
	at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
	at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
	at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
	at org.apache.catalina.core.StandardService.start(StandardService.java:525)
	at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
	at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
	at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: <openjpa-2.4.1-r422266:1730418 fatal general error> org.apache.openjpa.persistence.PersistenceException: Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
	at org.apache.openjpa.jdbc.sql.DBDictionaryFactory.newDBDictionary(DBDictionaryFactory.java:106)
	at org.apache.openjpa.jdbc.conf.JDBCConfigurationImpl.getDBDictionaryInstance(JDBCConfigurationImpl.java:603)
	at org.apache.openjpa.jdbc.meta.MappingRepository.endConfiguration(MappingRepository.java:1520)
	at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:533)
	at org.apache.openjpa.lib.conf.Configurations.configureInstance(Configurations.java:458)
	at org.apache.openjpa.lib.conf.PluginValue.instantiate(PluginValue.java:121)
	at org.apache.openjpa.conf.MetaDataRepositoryValue.instantiate(MetaDataRepositoryValue.java:68)
	at org.apache.openjpa.lib.conf.ObjectValue.instantiate(ObjectValue.java:83)
	at org.apache.openjpa.conf.OpenJPAConfigurationImpl.newMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:967)
	at org.apache.openjpa.conf.OpenJPAConfigurationImpl.getMetaDataRepositoryInstance(OpenJPAConfigurationImpl.java:958)
	at org.apache.openjpa.kernel.AbstractBrokerFactory.makeReadOnly(AbstractBrokerFactory.java:642)
	at org.apache.openjpa.kernel.AbstractBrokerFactory.newBroker(AbstractBrokerFactory.java:202)
	at org.apache.openjpa.kernel.DelegatingBrokerFactory.newBroker(DelegatingBrokerFactory.java:154)
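The startup script clearly expects a usable jar at /usr/share/java/mysql-connector-java.jar, which it copies into /var/lib/oozie. A hedged fix: make sure a real driver jar sits at that path (not a broken symlink), or drop it into Oozie's directory yourself. The connector version below is an assumption; use whichever jar you have:

# place the MySQL connector where the Oozie startup script looks for it
cp mysql-connector-java-5.1.x-bin.jar /usr/share/java/mysql-connector-java.jar
# or copy it directly into Oozie's lib location
cp /usr/share/java/mysql-connector-java.jar /var/lib/oozie/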

With the jar in place, that sorts it out. OK, next up, let's see what other errors remain.

10. A Hue error

Basically another permissions problem, and easy to fix. With that, the services are all up.

OK, we have liftoff.

11. One more time: it's the formatting problem again

The message: format the name directories of the current NameNode; if the name directories are not empty, this operation will fail.
This happened because my earlier install attempt failed and left the nn directory under /dfs non-empty. Deleting the nn directory under /dfs on every machine fixes it.

Just delete the NameNode directory (mine is at /dfs/nn), then retry and you're done:

rm -rf /dfs/nn
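A sketch for doing this across the cluster in one go; the hostnames are hypothetical placeholders for your own nodes:

# wipe the stale NameNode dir on every machine (dev cluster only!)
for h in node1 node2 node3; do ssh $h 'rm -rf /dfs/nn'; done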

12. A strange error: the NameNode and DataNode cluster IDs don't match

IOException in offerService
java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "node1/172.16.0.56"; destination host is: "node1":8022;

This problem took a long time to crack, so let me write the error up in more detail and then explain what is going on.

The thing to check is your cluster's storage directories. Under /dfs, the clusterID recorded in the VERSION file inside nn, inside snn, and inside dn must all agree. So let's bring them in line.

I just changed them all to the same value (13, in my case).

Success. The idea: keep the clusterID in the VERSION files under /dfs/nn and /dfs/dn identical.
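A minimal sketch of the check-and-fix, assuming the default role directories under /dfs as above:

# compare the clusterID line recorded for each role
cat /dfs/nn/current/VERSION
cat /dfs/dn/current/VERSION
cat /dfs/snn/current/VERSION    # only on the SecondaryNameNode host
# then edit the dn (and snn) VERSION files so their clusterID matches the nn one,
# and restart HDFS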

13. A YARN problem, and a hard one

HDFS dependency of YARN (MR2 Included) does not have sufficient roles running.
Completed only 0/1 steps. First failure: Completed only 0/2 steps. First failure: Failed to execute command Create Job History Dir on service YARN (MR2 Included)

This one is really thorny. Judging by the "Create Job History Dir" step in the message, it is presumably tangled up with the same /user/history directory from error 6, but it's a tough one to crack.
