Build Atlas

mvn clean package -DskipTests -Dfast -Drat.skip=true -Pdist,embedded-hbase-solr
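
If the build succeeds, the server tarball should land under distro/target in the source tree (the usual layout of the Atlas build; verify the exact file name for your version):

ls distro/target/apache-atlas-*-server.tar.gz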

Atlas with embedded HBase and Solr

Start HBase and Solr first, then start Atlas:

/opt/apache-atlas-2.2.0/hbase/bin/start-hbase.sh
/opt/apache-atlas-2.2.0/solr/bin/solr start -c -z localhost:2181 -p 8983 -force
/opt/apache-atlas-2.2.0/solr/bin/solr create -c fulltext_index -force -d /opt/apache-atlas-2.2.0/conf/solr/
/opt/apache-atlas-2.2.0/solr/bin/solr create -c edge_index -force -d /opt/apache-atlas-2.2.0/conf/solr/
/opt/apache-atlas-2.2.0/solr/bin/solr create -c vertex_index -force -d /opt/apache-atlas-2.2.0/conf/solr/
/opt/apache-atlas-2.2.0/bin/atlas_start.py
[root@docker-registry-node apache-atlas-2.2.0]# netstat -anop | grep zk
[root@docker-registry-node apache-atlas-2.2.0]# netstat -anop | grep zookeeper
### No standalone zookeeper process shows up, yet port 2181 is in use: it is served by the HBase-managed ZooKeeper running inside the HMaster process (pid 24397 below)
[root@docker-registry-node apache-atlas-2.2.0]# netstat -anop | grep 2181
tcp        0      0 127.0.0.1:39984         127.0.0.1:2181          ESTABLISHED 29800/java           off (0.00/0/0)
tcp6       0      0 :::2181                 :::*                    LISTEN      24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:2181          127.0.0.1:39984         ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:2181          127.0.0.1:39820         ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:39478         127.0.0.1:2181          ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 ::1:2181                ::1:40142               ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:39474         127.0.0.1:2181          ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:2181          127.0.0.1:39474         ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:2181          127.0.0.1:39478         ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:2181          127.0.0.1:39486         ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 ::1:40142               ::1:2181                ESTABLISHED 27950/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:39486         127.0.0.1:2181          ESTABLISHED 24397/java           off (0.00/0/0)
tcp6       0      0 127.0.0.1:39820         127.0.0.1:2181          ESTABLISHED 27950/java           off (0.00/0/0)
[root@docker-registry-node apache-atlas-2.2.0]# ps -aux | grep 24397
root     24397  1.1  7.5 3094532 293596 pts/1  Sl   10:28   2:39 /opt/module/jdk1.8.0_141/bin/java -Dproc_master -XX:OnOutOfMemoryError=kill -9 %p -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/opt/module/atlas/apache-atlas-2.2.0/hbase/bin/../logs -Dhbase.log.file=hbase-nufront-master-docker-registry-node.log -Dhbase.home.dir=/opt/module/atlas/apache-atlas-2.2.0/hbase/bin/.. -Dhbase.id.str=nufront -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.master.HMaster start

Kafka

[root@docker-registry-node conf]# vim atlas-application.properties
[root@docker-registry-node conf]# netstat  -anop | grep 9027
tcp        0      0 127.0.0.1:9027          0.0.0.0:*               LISTEN      29800/java           off (0.00/0/0)
tcp        1      0 127.0.0.1:38910         127.0.0.1:9027          CLOSE_WAIT  29800/java           keepalive (4783.83/0/0)
tcp        0      0 127.0.0.1:38942         127.0.0.1:9027          ESTABLISHED 29800/java           keepalive (4800.22/0/0)
tcp        0      0 127.0.0.1:9027          127.0.0.1:38936         ESTABLISHED 29800/java           keepalive (4800.22/0/0)
tcp        0      0 127.0.0.1:38936         127.0.0.1:9027          ESTABLISHED 29800/java           keepalive (4800.22/0/0)
tcp        0      0 127.0.0.1:9027          127.0.0.1:38942         ESTABLISHED 29800/java           keepalive (4800.22/0/0)
[root@docker-registry-node version-2]# pwd
/opt/module/atlas/apache-atlas-2.2.0/data/hbase-zookeeper-data/zookeeper_0/version-2
[root@docker-registry-node version-2]# ll
total 580
-rw-r--r--. 1 root root 102416 Oct 14 10:27 log.1
-rw-r--r--. 1 root root 614416 Oct 14 13:59 log.4f
[root@docker-registry-node version-2]# cat /opt/module/atlas/apache-atlas-2.2.0/hbase/conf/hbase-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <property><name>hbase.rootdir</name><value>file:///opt/module/atlas/apache-atlas-2.2.0/data/hbase-root</value></property>
  <property><name>hbase.zookeeper.property.dataDir</name><value>/opt/module/atlas/apache-atlas-2.2.0/data/hbase-zookeeper-data</value></property>
  <property><name>hbase.master.info.port</name><value>61510</value></property>
  <property><name>hbase.regionserver.info.port</name><value>61530</value></property>
  <property><name>hbase.master.port</name><value>61500</value></property>
  <property><name>hbase.regionserver.port</name><value>61520</value></property>
  <property><name>hbase.unsafe.stream.capability.enforce</name><value>false</value></property>
</configuration>
Atlas HBase | Solr data storage paths
[root@docker-registry-node data]# pwd
/opt/module/atlas/apache-atlas-2.2.0/data
[root@docker-registry-node data]# ll
total 0
drwxr-xr-x. 12 root root 188 Oct 14 10:28 hbase-root
drwxr-xr-x.  3 root root  25 Oct 14 10:25 hbase-zookeeper-data
drwxr-xr-x.  4 root root  29 Oct 14 13:47 kafka
drwxr-xr-x.  2 root root  22 Oct 14 10:25 solr
################################################
##### Create the fulltext_index | edge_index | vertex_index collections
################################################
[root@docker-registry-node bin]# ./solr create -c fulltext_index -force -d /opt/module/atlas/apache-atlas-2.2.0/conf/solr/
Created collection 'fulltext_index' with 1 shard(s), 1 replica(s) with config-set 'fulltext_index'
[root@docker-registry-node bin]# ./solr create -c edge_index -force -d /opt/module/atlas/apache-atlas-2.2.0/conf/solr/
Created collection 'edge_index' with 1 shard(s), 1 replica(s) with config-set 'edge_index'
[root@docker-registry-node bin]# ./solr create -c vertex_index -force -d /opt/module/atlas/apache-atlas-2.2.0/conf/solr/
Created collection 'vertex_index' with 1 shard(s), 1 replica(s) with config-set 'vertex_index'

[root@docker-registry-node bin]# pwd
/opt/module/atlas/apache-atlas-2.2.0/bin
[root@docker-registry-node bin]# ./atlas_start.py
Configured for local HBase.
Starting local HBase...
Local HBase started!
Configured for local Solr.
Starting local Solr...
Local Solr started!
Creating Solr collections for Atlas using config: /opt/module/atlas/apache-atlas-2.2.0/conf/solr
Starting Atlas server on host: localhost
Starting Atlas server on port: 21000
................................................................................................................................
Apache Atlas Server started!!!
[root@docker-registry-node conf]# jps
32103 Jps
29800 Atlas
24397 HMaster
27950 jar
[root@docker-registry-node bin]# curl http://localhost:21000/login.jsp

Credentials: admin / admin
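
A quick way to verify the server and these credentials without the UI is the admin version endpoint of the Atlas REST API:

curl -u admin:admin http://localhost:21000/api/atlas/admin/version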

Issue log

Could not instantiate implementation: org.janusgraph.diskstorage.hbase2.HBaseStoreManager
2022-10-13 17:00:55,863 INFO  - [main:] ~ Logged in user root (auth:SIMPLE) (LoginProcessor:77)
2022-10-13 17:00:56,817 INFO  - [main:] ~ Not running setup per configuration atlas.server.run.setup.on.start. (SetupSteps$SetupRequired:189)
2022-10-13 17:00:58,810 INFO  - [main:] ~ Failed to obtain graph instance, retrying 3 times, error: java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.hbase2.HBaseStoreManager (AtlasGraphProvider:100)
2022-10-13 17:01:28,845 WARN  - [main:] ~ Failed to obtain graph instance on attempt 1 of 3 (AtlasGraphProvider:118)
java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.hbase2.HBaseStoreManager
	at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:64)
	at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:440)
	at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:411)
	at org.janusgraph.graphdb.configuration.builder.GraphDatabaseConfigurationBuilder.build(GraphDatabaseConfigurationBuilder.java:50)
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:161)
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:132)
	at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:112)
	at org.apache.atlas.repository.graphdb.janus.AtlasJanusGraphDatabase.initJanus

Cannot connect to cluster at localhost:2181: cluster not found/not ready | Could not instantiate implementation: org.janusgraph.diskstorage.solr.Solr6Index

vim atlas-application.properties
#######
...
atlas.graph.storage.hostname=localhost:2181
...
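
Both errors in this entry appear to trace back to the ZooKeeper addresses in atlas-application.properties; with embedded HBase and Solr, the storage and index backends both point at the HBase-managed ZooKeeper on localhost:2181 (the same values used in the ConfigMap further down):

atlas.graph.storage.hostname=localhost:2181
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=localhost:2181
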
Can not find the specified config set: vertex_index
2022-10-14 13:35:02,680 ERROR - [main:] ~ GraphBackedSearchIndexer.initialize() failed (GraphBackedSearchIndexer:376)
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://172.16.51.129:8983/solr: Can not find the specified config set: vertex_index
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
Failed to start embedded kafka

import_hive.sh

Failed to load application properties

Fix: copy atlas-application.properties to HIVE_CONF_DIR.
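
A minimal sketch, assuming the Hive layout from the Dockerfile below and an ATLAS_HOME pointing at the Atlas install; adjust both paths to your environment:

# make the hook configuration visible to import_hive.sh (paths are assumptions)
export HIVE_CONF_DIR=/opt/apache-hive-2.3.2-bin/conf
cp "$ATLAS_HOME/conf/atlas-application.properties" "$HIVE_CONF_DIR/"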

java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgum


Fix: replace the guava jar under Hive's lib directory with the guava version shipped with Hadoop.
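
A sketch of the swap, assuming the Hadoop 2.7.4 and Hive 2.3.2 paths used in the Dockerfile below; the guava version is whatever each distribution ships:

# remove Hive's bundled guava and reuse the one from Hadoop (paths/versions are assumptions)
rm /opt/apache-hive-2.3.2-bin/lib/guava-*.jar
cp /opt/hadoop-2.7.4/share/hadoop/common/lib/guava-*.jar /opt/apache-hive-2.3.2-bin/lib/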

Unexpected end of file when reading from HS2 server. The root cause might be too many concurrent connections. Please ask the administrator to check the number of active connections, and adjust hive.server2.thrift.max.worker.threads if applicable.

Cause: the client and Hive server versions do not match (a 3.1.0 client connecting to a 2.3.2 server).

Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.collect.Iterators.emptyIterator()Lcom/google/common/collect/UnmodifiableIterator; from class org.apache.hadoop.hive.ql.exec.FetchOperator


The Hive import succeeded, but the table cannot be found in the Atlas UI
create table aaa(id int, name string, age int, department string) row format delimited  fields terminated by ',' lines terminated by '\n' stored as textfile;


The table still does not show up in the UI…
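
To rule out a UI-only problem, the table can also be looked up through the REST basic-search endpoint (type and query values below match the example table above):

curl -u admin:admin 'http://localhost:21000/api/atlas/v2/search/basic?typeName=hive_table&query=aaa'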

__AtlasUserProfile with unique attribute {name=admin} does not exist

import_hive.sh imports the metadata successfully, but the navigation pane does not show a hive_table count, and lineage queries also report no lineage… possibly related to logging in as the admin user.
https://community.cloudera.com/t5/Support-Questions/Apache-atlas-data-lineage-issue/m-p/221542

##############################
### atlas application.log
##############################
2022-10-25 17:13:04,091 ERROR - [etp2008966511-516 - 6877d62d-0751-407b-abf8-6225510bd075:] ~ graph rollback due to exception AtlasBaseException:Instance __AtlasUserProfile with unique attribute {name=admin} does not exist (GraphTransactionInterceptor:202)

Atlas lineage does not cover Hive managed (internal) tables

https://zhidao.baidu.com/question/1802196529111502747.html

https://community.cloudera.com/t5/Support-Questions/Lineage-is-not-visible-for-Hive-Table-in-Atlas/m-p/237484
Atlas does not generate lineage for Hive managed (internal) tables imported via import-hive.sh; only tables created as EXTERNAL produce lineage.

### External tables
create external table outtable(id int, name string, age int, department string) row format delimited fields terminated by ',' lines terminated by '\n' stored as textfile;
create external table outtable_1(id int, name string, age int, department string) row format delimited fields terminated by ',' lines terminated by '\n' stored as textfile;
insert into outtable select * from outtable_1;


Kafka

Add atlas-application.properties into kafka-bridge-1.2.0.jar.
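
One way to do this is zip -u run from the conf directory, so the file sits at the root of the jar; the jar's exact location under the Atlas hook directory is an assumption, so locate it first:

# add/refresh atlas-application.properties inside the bridge jar (jar path is an assumption)
cd /opt/apache-atlas-1.2.0/conf
find /opt/apache-atlas-1.2.0 -name 'kafka-bridge-1.2.0.jar'
zip -u <path-to>/kafka-bridge-1.2.0.jar atlas-application.properties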


Logging in fails with: invalid user credentials

Fix: replace users-credentials.properties in the atlas-configmap (it still contained the stale Atlas 2.2.0 copy) with the users-credentials.properties from apache-atlas-1.2.0-bin.
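
A sketch of pushing the fix into the running deployment (ConfigMap name and pod label taken from the manifests below):

kubectl edit configmap atlas-configmap      # paste in users-credentials.properties from apache-atlas-1.2.0-bin/conf
kubectl delete pod -l name=atlas-node       # the ReplicationController recreates the pod, which re-copies /etc/atlas into conf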

Dockerfile

FROM docker-registry-node:5000/centos:centos7.6.1810
ADD hadoop-2.7.4.tar.gz /opt
ADD kafka_2.11-2.2.1.tgz /opt
ADD apache-hive-2.3.2-bin.tar.gz /opt
ADD apache-atlas-1.2.0-bin.tar.gz /opt
ADD jdk-8u141-linux-x64.tar.gz /opt
ADD which /usr/bin/
RUN chmod +x /usr/bin/which
RUN cp /opt/apache-hive-2.3.2-bin/lib/libfb303-0.9.3.jar \
       /opt/apache-hive-2.3.2-bin/lib/hive-exec-2.3.2.jar \
       /opt/apache-hive-2.3.2-bin/lib/hive-jdbc-2.3.2.jar \
       /opt/apache-hive-2.3.2-bin/lib/hive-metastore-2.3.2.jar \
       /opt/apache-hive-2.3.2-bin/lib/jackson-core-2.6.5.jar \
       /opt/apache-hive-2.3.2-bin/lib/jackson-databind-2.6.5.jar \
       /opt/apache-hive-2.3.2-bin/lib/jackson-annotations-2.6.0.jar \
       /opt/apache-atlas-1.2.0/hook/hive/atlas-hive-plugin-impl/
RUN rm -rf /opt/apache-atlas-1.2.0/hook/hive/atlas-hive-plugin-impl/jackson-annotations-2.9.9.jar \
           /opt/apache-atlas-1.2.0/hook/hive/atlas-hive-plugin-impl/jackson-core-2.9.9.jar \
           /opt/apache-atlas-1.2.0/hook/hive/atlas-hive-plugin-impl/jackson-databind-2.9.9.jar
ENV JAVA_HOME /opt/jdk1.8.0_141
ENV PATH ${JAVA_HOME}/bin:$PATH
EXPOSE 21000
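
The manifest below pulls docker-registry-node:5000/atlas:0.2; a build-and-push sketch (whether this Dockerfile produced that exact tag is an assumption):

docker build -t docker-registry-node:5000/atlas:0.2 .
docker push docker-registry-node:5000/atlas:0.2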

atlas-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: atlas-configmap
  labels:
    app: atlas-configmap
data:
  atlas-application.properties: |-
    atlas.graph.storage.backend=hbase2
    atlas.graph.storage.hbase.table=apache_atlas_janus
    ### embedded hbase solr
    atlas.graph.storage.hostname=localhost:2181
    atlas.graph.storage.hbase.regions-per-server=1
    atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.HBaseBasedAuditRepository
    atlas.graph.index.search.backend=solr
    atlas.graph.index.search.solr.mode=cloud
    atlas.graph.index.search.solr.zookeeper-url=localhost:2181
    atlas.graph.index.search.solr.zookeeper-connect-timeout=60000
    atlas.graph.index.search.solr.zookeeper-session-timeout=60000
    atlas.graph.index.search.solr.wait-searcher=false
    atlas.graph.index.search.max-result-set-size=150
    atlas.notification.embedded=true
    ### embedded kafka
    atlas.kafka.data=${sys:atlas.home}/data/kafka
    atlas.kafka.zookeeper.connect=localhost:9026
    atlas.kafka.bootstrap.servers=localhost:9027
    atlas.kafka.zookeeper.session.timeout.ms=400
    atlas.kafka.zookeeper.connection.timeout.ms=200
    atlas.kafka.zookeeper.sync.time.ms=20
    atlas.kafka.auto.commit.interval.ms=1000
    atlas.kafka.hook.group.id=atlas
    atlas.kafka.enable.auto.commit=false
    atlas.kafka.auto.offset.reset=earliest
    atlas.kafka.session.timeout.ms=30000
    atlas.kafka.offsets.topic.replication.factor=1
    atlas.kafka.poll.timeout.ms=1000
    atlas.notification.create.topics=true
    atlas.notification.replicas=1
    atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
    atlas.notification.log.failed.messages=true
    atlas.notification.consumer.retry.interval=500
    atlas.notification.hook.retry.interval=1000
    atlas.enableTLS=false
    atlas.authentication.method.kerberos=false
    atlas.authentication.method.file=true
    atlas.authentication.method.ldap.type=none
    atlas.authentication.method.file.filename=${sys:atlas.home}/conf/users-credentials.properties
    atlas.rest.address=http://localhost:21000
    atlas.audit.hbase.tablename=apache_atlas_entity_audit
    atlas.audit.zookeeper.session.timeout.ms=1000
    atlas.audit.hbase.zookeeper.quorum=localhost:2181
    atlas.server.ha.enabled=false
    atlas.hook.hive.synchronous=false
    atlas.hook.hive.numRetries=3
    atlas.hook.hive.queueSize=10000
    atlas.cluster.name=primary
    atlas.authorizer.impl=simple
    atlas.authorizer.simple.authz.policy.file=atlas-simple-authz-policy.json
    atlas.rest-csrf.enabled=true
    atlas.rest-csrf.browser-useragents-regex=^Mozilla.*,^Opera.*,^Chrome.*
    atlas.rest-csrf.methods-to-ignore=GET,OPTIONS,HEAD,TRACE
    atlas.rest-csrf.custom-header=X-XSRF-HEADER
    atlas.metric.query.cache.ttlInSecs=900
    atlas.search.gremlin.enable=false
    atlas.ui.default.version=v1
  atlas-env.sh: |-
    export MANAGE_LOCAL_HBASE=true
    export MANAGE_LOCAL_SOLR=true
    export MANAGE_EMBEDDED_CASSANDRA=false
    export MANAGE_LOCAL_ELASTICSEARCH=false
  atlas-log4j.xml: |-
    <?xml version="1.0" encoding="UTF-8" ?>
    <!--
    ~ Licensed to the Apache Software Foundation (ASF) under one
    ~ or more contributor license agreements.  See the NOTICE file
    ~ distributed with this work for additional information
    ~ regarding copyright ownership.  The ASF licenses this file
    ~ to you under the Apache License, Version 2.0 (the
    ~ "License"); you may not use this file except in compliance
    ~ with the License.  You may obtain a copy of the License at
    ~
    ~     http://www.apache.org/licenses/LICENSE-2.0
    ~
    ~ Unless required by applicable law or agreed to in writing, software
    ~ distributed under the License is distributed on an "AS IS" BASIS,
    ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    ~ See the License for the specific language governing permissions and
    ~ limitations under the License.
    -->
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
      <appender name="console" class="org.apache.log4j.ConsoleAppender"><param name="Target" value="System.out"/><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%C{1}:%L)%n"/></layout></appender>
      <appender name="FILE" class="org.apache.log4j.RollingFileAppender"><param name="File" value="${atlas.log.dir}/${atlas.log.file}"/><param name="Append" value="true"/><param name="maxFileSize" value="100MB" /><param name="maxBackupIndex" value="20" /><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%C{1}:%L)%n"/></layout></appender>
      <appender name="LARGE_MESSAGES" class="org.apache.log4j.RollingFileAppender"><param name="File" value="${atlas.log.dir}/large_messages.log"/><param name="Append" value="true"/><param name="MaxFileSize" value="100MB" /><param name="MaxBackupIndex" value="20" /><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%d %m%n"/></layout></appender>
      <appender name="AUDIT" class="org.apache.log4j.RollingFileAppender"><param name="File" value="${atlas.log.dir}/audit.log"/><param name="Append" value="true"/><param name="maxFileSize" value="100MB" /><param name="maxBackupIndex" value="20" /><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%d %x %m%n"/></layout></appender>
      <appender name="TASKS" class="org.apache.log4j.RollingFileAppender"><param name="File" value="${atlas.log.dir}/tasks.log"/><param name="Append" value="true"/><param name="maxFileSize" value="259MB" /><param name="maxBackupIndex" value="20" /><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%t %d %x %m%n (%C{1}:%L)"/></layout></appender>
      <appender name="METRICS" class="org.apache.log4j.RollingFileAppender"><param name="File" value="${atlas.log.dir}/metric.log"/><param name="Append" value="true"/><param name="maxFileSize" value="100MB" /><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%d %x %m%n"/></layout></appender>
      <appender name="FAILED" class="org.apache.log4j.RollingFileAppender"><param name="File" value="${atlas.log.dir}/failed.log"/><param name="Append" value="true"/><param name="maxFileSize" value="100MB" /><param name="maxBackupIndex" value="20" /><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%d %m%n"/></layout></appender>
      <!-- Uncomment the following for perf logs -->
      <!--<appender name="perf_appender" class="org.apache.log4j.DailyRollingFileAppender"><param name="file" value="${atlas.log.dir}/atlas_perf.log" /><param name="datePattern" value="'.'yyyy-MM-dd" /><param name="append" value="true" /><layout class="org.apache.log4j.PatternLayout"><param name="ConversionPattern" value="%d|%t|%m%n" /></layout></appender><logger name="org.apache.atlas.perf" additivity="false"><level value="debug" /><appender-ref ref="perf_appender" /></logger>-->
      <logger name="org.apache.atlas" additivity="false"><level value="info"/><appender-ref ref="FILE"/></logger>
      <logger name="org.janusgraph" additivity="false"><level value="warn"/><appender-ref ref="FILE"/></logger>
      <logger name="org.springframework" additivity="false"><level value="warn"/><appender-ref ref="console"/></logger>
      <logger name="org.eclipse" additivity="false"><level value="warn"/><appender-ref ref="console"/></logger>
      <logger name="com.sun.jersey" additivity="false"><level value="warn"/><appender-ref ref="console"/></logger>
      <!-- to avoid logs - The configuration log.flush.interval.messages = 1 was supplied but isn't a known config -->
      <logger name="org.apache.kafka.common.config.AbstractConfig" additivity="false"><level value="error"/><appender-ref ref="FILE"/></logger>
      <logger name="AUDIT" additivity="false"><level value="info"/><appender-ref ref="AUDIT"/></logger>
      <logger name="LARGE_MESSAGES" additivity="false"><level value="warn"/><appender-ref ref="LARGE_MESSAGES"/></logger>
      <logger name="METRICS" additivity="false"><level value="debug"/><appender-ref ref="METRICS"/></logger>
      <logger name="FAILED" additivity="false"><level value="info"/><appender-ref ref="FAILED"/></logger>
      <logger name="TASKS" additivity="false"><level value="info"/><appender-ref ref="TASKS"/></logger>
      <root><priority value="warn"/><appender-ref ref="FILE"/></root>
    </log4j:configuration>
  atlas-simple-authz-policy.json: |-
    {
      "roles": {
        "ROLE_ADMIN": {
          "adminPermissions": [{"privileges": [ ".*" ]}],
          "typePermissions": [{"privileges": [ ".*" ], "typeCategories": [ ".*" ], "typeNames": [ ".*" ]}],
          "entityPermissions": [{"privileges": [ ".*" ], "entityTypes": [ ".*" ], "entityIds": [ ".*" ], "entityClassifications": [ ".*" ], "labels": [ ".*" ], "businessMetadata": [ ".*" ], "attributes": [ ".*" ], "classifications": [ ".*" ]}],
          "relationshipPermissions": [{"privileges": [ ".*" ], "relationshipTypes": [ ".*" ], "end1EntityType": [ ".*" ], "end1EntityId": [ ".*" ], "end1EntityClassification": [ ".*" ], "end2EntityType": [ ".*" ], "end2EntityId": [ ".*" ], "end2EntityClassification": [ ".*" ]}]
        },
        "DATA_SCIENTIST": {
          "entityPermissions": [{"privileges": [ "entity-read", "entity-read-classification" ], "entityTypes": [ ".*" ], "entityIds": [ ".*" ], "entityClassifications": [ ".*" ], "labels": [ ".*" ], "businessMetadata": [ ".*" ], "attributes": [ ".*" ]}]
        },
        "DATA_STEWARD": {
          "entityPermissions": [{"privileges": [ "entity-read", "entity-create", "entity-update", "entity-read-classification", "entity-add-classification", "entity-update-classification", "entity-remove-classification" ], "entityTypes": [ ".*" ], "entityIds": [ ".*" ], "entityClassifications": [ ".*" ], "labels": [ ".*" ], "businessMetadata": [ ".*" ], "attributes": [ ".*" ], "classifications": [ ".*" ]}],
          "relationshipPermissions": [{"privileges": [ "add-relationship", "update-relationship", "remove-relationship" ], "relationshipTypes": [ ".*" ], "end1EntityType": [ ".*" ], "end1EntityId": [ ".*" ], "end1EntityClassification": [ ".*" ], "end2EntityType": [ ".*" ], "end2EntityId": [ ".*" ], "end2EntityClassification": [ ".*" ]}]
        }
      },
      "userRoles": {
        "admin": [ "ROLE_ADMIN" ],
        "rangertagsync": [ "DATA_SCIENTIST" ]
      },
      "groupRoles": {
        "ROLE_ADMIN": [ "ROLE_ADMIN" ],
        "hadoop": [ "DATA_STEWARD" ],
        "DATA_STEWARD": [ "DATA_STEWARD" ],
        "RANGER_TAG_SYNC": [ "DATA_SCIENTIST" ]
      }
    }
  cassandra.yml.template: |-
    cluster_name: 'JanusGraph'
    num_tokens: 256
    hinted_handoff_enabled: true
    hinted_handoff_throttle_in_kb: 1024
    max_hints_delivery_threads: 2
    batchlog_replay_throttle_in_kb: 1024
    authenticator: AllowAllAuthenticator
    authorizer: AllowAllAuthorizer
    permissions_validity_in_ms: 2000
    partitioner: org.apache.cassandra.dht.Murmur3Partitioner
    data_file_directories:
    - ${atlas_home}/data/cassandra/data
    commitlog_directory: ${atlas_home}/data/cassandra/commitlog
    disk_failure_policy: stop
    commit_failure_policy: stop
    key_cache_size_in_mb:
    key_cache_save_period: 14400
    row_cache_size_in_mb: 0
    row_cache_save_period: 0
    saved_caches_directory: ${atlas_home}/data/cassandra/saved_caches
    commitlog_sync: periodic
    commitlog_sync_period_in_ms: 10000
    commitlog_segment_size_in_mb: 32
    seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
      - seeds: "127.0.0.1"
    concurrent_reads: 32
    concurrent_writes: 32
    trickle_fsync: false
    trickle_fsync_interval_in_kb: 10240
    storage_port: 7000
    ssl_storage_port: 7001
    listen_address: localhost
    start_native_transport: true
    native_transport_port: 9042
    start_rpc: true
    rpc_address: localhost
    rpc_port: 9160
    rpc_keepalive: true
    rpc_server_type: sync
    thrift_framed_transport_size_in_mb: 15
    incremental_backups: false
    snapshot_before_compaction: false
    auto_snapshot: true
    tombstone_warn_threshold: 1000
    tombstone_failure_threshold: 100000
    column_index_size_in_kb: 64
    compaction_throughput_mb_per_sec: 16
    read_request_timeout_in_ms: 5000
    range_request_timeout_in_ms: 10000
    write_request_timeout_in_ms: 2000
    cas_contention_timeout_in_ms: 1000
    truncate_request_timeout_in_ms: 60000
    request_timeout_in_ms: 10000
    cross_node_timeout: false
    endpoint_snitch: SimpleSnitch
    dynamic_snitch_update_interval_in_ms: 100
    dynamic_snitch_reset_interval_in_ms: 600000
    dynamic_snitch_badness_threshold: 0.1
    request_scheduler: org.apache.cassandra.scheduler.NoScheduler
    server_encryption_options:
      internode_encryption: none
      keystore: conf/.keystore
      keystore_password: cassandra
      truststore: conf/.truststore
      truststore_password: cassandra
    client_encryption_options:
      enabled: false
      keystore: conf/.keystore
      keystore_password: cassandra
    internode_compression: all
    inter_dc_tcp_nodelay: false
  hadoop-metrics2.properties: |-
    *.period=30
    atlas-debug-metrics-context.sink.atlas-debug-metrics-context.context=atlas-debug-metrics-context
  users-credentials.properties: |-
    admin=ADMIN::a4a88c0872bf652bb9ed803ece5fd6e82354838a9bf59ab4babb1dab322154e1
    rangertagsync=RANGER_TAG_SYNC::0afe7a1968b07d4c3ff4ed8c2d809a32ffea706c66cd795ead9048e81cfaf034
atlas.yaml
apiVersion: v1
kind: Service
metadata:
  name: atlas-service
spec:
  type: NodePort
  selector:
    name: atlas-node
  ports:
  - name: atlas-port
    port: 21000
    targetPort: 21000
    nodePort: 32100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: atlas-node
  labels:
    name: atlas-node
spec:
  replicas: 1
  selector:
    name: atlas-node
  template:
    metadata:
      labels:
        name: atlas-node
    spec:
      containers:
      - name: atlas-node
        image: docker-registry-node:5000/atlas:0.2
        imagePullPolicy: IfNotPresent
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: HIVE_HOME
          value:
        - name: HIVE_CONF_DIR
          value:
        - name: HADOOP_CLASSPATH
          value:
        - name: HBASE_CONF_DIR
          value: /opt/apache-atlas-2.2.0/hbase/conf
        volumeMounts:
        - name: atlas-configmap-volume
          mountPath: /etc/atlas
        - name: hadoop-config-volume
          mountPath: /etc/hadoop
        - name: hive-config-volume
          mountPath: /etc/hive
        - name: flink-config-volume
          mountPath: /etc/flink
        #- name: hbase-config-volume
        #  mountPath: /etc/hbase
        command: ["/bin/bash", "-c", "cd  /etc/atlas; cp * /opt/apache-atlas-2.2.0/conf ; sleep infinity"]
      restartPolicy: Always
      volumes:
      - name: atlas-configmap-volume
        configMap:
          name: atlas-configmap
          items:
          - key: atlas-application.properties
            path: atlas-application.properties
          - key: atlas-env.sh
            path: atlas-env.sh
          - key: atlas-log4j.xml
            path: atlas-log4j.xml
          - key: atlas-simple-authz-policy.json
            path: atlas-simple-authz-policy.json
          - key: cassandra.yml.template
            path: cassandra.yml.template
          - key: hadoop-metrics2.properties
            path: hadoop-metrics2.properties
          - key: users-credentials.properties
            path: users-credentials.properties
      - name: hive-config-volume
        persistentVolumeClaim:
          claimName: hive-config-nfs-pvc
      - name: hadoop-config-volume
        persistentVolumeClaim:
          claimName: bde2020-hadoop-config-nfs-pvc
          #claimName: hadoop-config-nfs-pvc
      - name: flink-config-volume
        configMap:
          name: flink-config
      #- name: hbase-config-volume
      #  persistentVolumeClaim:
      #    claimName: hbase-config-nfs-pvc
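
Applying the two manifests and checking the result (file names follow the headings above):

kubectl apply -f atlas-configmap.yaml
kubectl apply -f atlas.yaml
kubectl get pods -l name=atlas-node -o wide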
