
Configuring Hue without CDH is genuinely tedious. For now only Hive and HDFS are wired up; other components will be added later.

1. Install dependencies (Hue depends on a lot of packages, and most errors during build and compilation come from missing dependencies — search for the specific error message if you hit one)
yum install rsync gcc openldap-devel python-ldap mysql-devel python-devel python-setuptools python-simplejson sqlite-devel libxml2-devel libxslt-devel cyrus-sasl-devel cyrus-sasl-plain

2. Download the CDH build of Hue
wget http://archive-primary.cloudera.com/cdh5/cdh/5/hue-3.7.0-cdh5.4.2.tar.gz
Alternatively, clone the latest source:

git clone http://github.com/cloudera/hue.git
cd hue

3. Build and start Hue

make apps && make install

# start the development server
build/env/bin/hue runserver

# start the production server
build/env/bin/supervisor

For the installation steps see https://github.com/cloudera/hue; for detailed configuration see the Hue Installation Guide. The hue.ini used here follows:

# Hue configuration file
# ===================================
#
# For complete documentation about the contents of this file, run
#   $ <hue_root>/build/env/bin/hue config_help
#
# All .ini files under the current directory are treated equally.  Their
# contents are merged to form the Hue configuration, which can
# be viewed on the Hue at
#   http://<hue_host>:<port>/dump_config

###########################################################################
# General configuration for core Desktop features (authentication, etc)
###########################################################################

[desktop]

  # Set this to a random string, the longer the better.
  # This is used for secure hashing in the session store.
  secret_key=pv7O4T4dWS9YV2cvsN32VT1lPbHe8KET

  # Webserver listens on this address and port
  http_host=0.0.0.0
  http_port=10080

  # Time zone name
  time_zone=Asia/Shanghai

  # Enable or disable Django debug mode.
  django_debug_mode=false

  # Enable or disable database debug mode.
  ## database_logging=false

  # Enable or disable backtrace for server error
  http_500_debug_mode=false

  # Enable or disable memory profiling.
  ## memory_profiler=false

  # Server email for internal error messages
  ## django_server_email='hue@localhost.localdomain'

  # Email backend
  ## django_email_backend=django.core.mail.backends.smtp.EmailBackend

  # Webserver runs as this user
  server_user=hadoop
  server_group=hadoop

  # This should be the Hue admin and proxy user
  #default_user=hadoop

  # This should be the hadoop cluster admin
  default_hdfs_superuser=hadoop

  # If set to false, runcpserver will not actually start the web server.
  # Used if Apache is being used as a WSGI container.
  ## enable_server=yes

  # Number of threads used by the CherryPy web server
  ## cherrypy_server_threads=40

  # Filename of SSL Certificate
  ## ssl_certificate=

  # Filename of SSL RSA Private Key
  ## ssl_private_key=

  # SSL certificate password
  ## ssl_password=

  # Execute this script to produce the SSL password. This will be used
  # when `ssl_password` is not set.
  ## ssl_password_script=

  # List of allowed and disallowed ciphers in cipher list format.
  # See http://www.openssl.org/docs/apps/ciphers.html for more information on cipher list format.
  ## ssl_cipher_list=DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2

  # LDAP username and password of the hue user used for LDAP authentications.
  # Set it to use LDAP Authentication with HiveServer2 and Impala.
  ## ldap_username=hue
  ## ldap_password=

  # Default encoding for site data
  ## default_site_encoding=utf-8

  # Help improve Hue with anonymous usage analytics.
  # Use Google Analytics to see how many times an application or specific section of an application is used, nothing more.
  ## collect_usage=true

  # Support for HTTPS termination at the load-balancer level with SECURE_PROXY_SSL_HEADER.
  ## secure_proxy_ssl_header=false

  # Comma-separated list of Django middleware classes to use.
  # See https://docs.djangoproject.com/en/1.4/ref/middleware/ for more details on middlewares in Django.
  ## middleware=desktop.auth.backend.LdapSynchronizationBackend

  # Comma-separated list of regular expressions, which match the redirect URL.
  # For example, to restrict to your local domain and FQDN, the following value can be used:
  # ^\/.*$,^http:\/\/www.mydomain.com\/.*$
  ## redirect_whitelist=^\/.*$

  # Comma separated list of apps to not load at server startup.
  # e.g.: pig,zookeeper
  ## app_blacklist=

  # The directory where to store the auditing logs. Auditing is disable if the value is empty.
  # e.g. /var/log/hue/audit.log
  ## audit_event_log_dir=

  # Size in KB/MB/GB for audit log to rollover.
  ## audit_log_max_file_size=100MB

  # A json file containing a list of log redaction rules for cleaning sensitive data
  # from log files. It is defined as:
  #
  # {
  #   "version": 1,
  #   "rules": [
  #     {
  #       "description": "This is the first rule",
  #       "trigger": "triggerstring 1",
  #       "search": "regex 1",
  #       "replace": "replace 1"
  #     },
  #     {
  #       "description": "This is the second rule",
  #       "trigger": "triggerstring 2",
  #       "search": "regex 2",
  #       "replace": "replace 2"
  #     }
  #   ]
  # }
  #
  # Redaction works by searching a string for the [TRIGGER] string. If found,
  # the [REGEX] is used to replace sensitive information with the
  # [REDACTION_MASK].  If specified with `log_redaction_string`, the
  # `log_redaction_string` rules will be executed after the
  # `log_redaction_file` rules.
  #
  # For example, here is a file that would redact passwords and social security numbers:
  # {
  #   "version": 1,
  #   "rules": [
  #     {
  #       "description": "Redact passwords",
  #       "trigger": "password",
  #       "search": "password=\".*\"",
  #       "replace": "password=\"???\""
  #     },
  #     {
  #       "description": "Redact social security numbers",
  #       "trigger": "",
  #       "search": "\d{3}-\d{2}-\d{4}",
  #       "replace": "XXX-XX-XXXX"
  #     }
  #   ]
  # }
  ## log_redaction_file=

  # Comma separated list of strings representing the host/domain names that the Hue server can serve.
  # e.g.: localhost,domain1,*
  ## allowed_hosts=*

  # Administrators
  # ----------------
  [[django_admins]]
    ## [[[admin1]]]
    ## name=john
    ## email=john@doe.com

  # UI customizations
  # -------------------
  [[custom]]
    # Top banner HTML code
    # e.g. <H2>Test Lab A2 Hue Services</H2>
    ## banner_top_html=

  # Configuration options for user authentication into the web application
  # ------------------------------------------------------------------------
  [[auth]]
    # Authentication backend. Common settings are:
    # - django.contrib.auth.backends.ModelBackend (entirely Django backend)
    # - desktop.auth.backend.AllowAllBackend (allows everyone)
    # - desktop.auth.backend.AllowFirstUserDjangoBackend
    #     (Default. Relies on Django and user manager, after the first login)
    # - desktop.auth.backend.LdapBackend
    # - desktop.auth.backend.PamBackend
    # - desktop.auth.backend.SpnegoDjangoBackend
    # - desktop.auth.backend.RemoteUserDjangoBackend
    # - libsaml.backend.SAML2Backend
    # - libopenid.backend.OpenIDBackend
    # - liboauth.backend.OAuthBackend
    #     (Support Twitter, Facebook, Google+ and Linkedin)
    ## backend=desktop.auth.backend.AllowFirstUserDjangoBackend

    # The service to use when querying PAM.
    ## pam_service=login

    # When using the desktop.auth.backend.RemoteUserDjangoBackend, this sets
    # the normalized name of the header that contains the remote user.
    # The HTTP header in the request is converted to a key by converting
    # all characters to uppercase, replacing any hyphens with underscores
    # and adding an HTTP_ prefix to the name. So, for example, if the header
    # is called Remote-User that would be configured as HTTP_REMOTE_USER
    #
    # Defaults to HTTP_REMOTE_USER
    ## remote_user_header=HTTP_REMOTE_USER

    # Ignore the case of usernames when searching for existing users.
    # Only supported in remoteUserDjangoBackend.
    ## ignore_username_case=true

    # Ignore the case of usernames when searching for existing users to authenticate with.
    # Only supported in remoteUserDjangoBackend.
    ## force_username_lowercase=true

    # Users will expire after they have not logged in for 'n' amount of seconds.
    # A negative number means that users will never expire.
    ## expires_after=-1

    # Apply 'expires_after' to superusers.
    ## expire_superusers=true

  # Configuration options for connecting to LDAP and Active Directory
  # -------------------------------------------------------------------
  [[ldap]]
    # The search base for finding users and groups
    ## base_dn="DC=mycompany,DC=com"

    # URL of the LDAP server
    ## ldap_url=ldap://auth.mycompany.com

    # A PEM-format file containing certificates for the CA's that
    # Hue will trust for authentication over TLS.
    # The certificate for the CA that signed the
    # LDAP server certificate must be included among these certificates.
    # See more here http://www.openldap.org/doc/admin24/tls.html.
    ## ldap_cert=
    ## use_start_tls=true

    # Distinguished name of the user to bind as -- not necessary if the LDAP server
    # supports anonymous searches
    ## bind_dn="CN=ServiceAccount,DC=mycompany,DC=com"

    # Password of the bind user -- not necessary if the LDAP server supports
    # anonymous searches
    ## bind_password=

    # Execute this script to produce the bind user password. This will be used
    # when `bind_password` is not set.
    ## bind_password_script=

    # Pattern for searching for usernames -- Use <username> for the parameter
    # For use when using LdapBackend for Hue authentication
    ## ldap_username_pattern="uid=<username>,ou=People,dc=mycompany,dc=com"

    # Create users in Hue when they try to login with their LDAP credentials
    # For use when using LdapBackend for Hue authentication
    ## create_users_on_login = true

    # Synchronize a users groups when they login
    ## sync_groups_on_login=false

    # Ignore the case of usernames when searching for existing users in Hue.
    ## ignore_username_case=false

    # Force usernames to lowercase when creating new users from LDAP.
    ## force_username_lowercase=false

    # Use search bind authentication.
    ## search_bind_authentication=true

    # Choose which kind of subgrouping to use: nested or suboordinate (deprecated).
    ## subgroups=suboordinate

    # Define the number of levels to search for nested members.
    ## nested_members_search_depth=10

    # Whether or not to follow referrals
    ## follow_referrals=false

    # Enable python-ldap debugging.
    ## debug=false

    # Sets the debug level within the underlying LDAP C lib.
    ## debug_level=255

    # Possible values for trace_level are 0 for no logging, 1 for only logging the method calls with arguments,
    # 2 for logging the method calls with arguments and the complete results and 9 for also logging the traceback of method calls.
    ## trace_level=0

    [[[users]]]
      # Base filter for searching for users
      ## user_filter="objectclass=*"

      # The username attribute in the LDAP schema
      ## user_name_attr=sAMAccountName

    [[[groups]]]
      # Base filter for searching for groups
      ## group_filter="objectclass=*"

      # The group name attribute in the LDAP schema
      ## group_name_attr=cn

      # The attribute of the group object which identifies the members of the group
      ## group_member_attr=members

    [[[ldap_servers]]]
      ## [[[[mycompany]]]]

      # The search base for finding users and groups
      ## base_dn="DC=mycompany,DC=com"

      # URL of the LDAP server
      ## ldap_url=ldap://auth.mycompany.com

      # A PEM-format file containing certificates for the CA's that
      # Hue will trust for authentication over TLS.
      # The certificate for the CA that signed the
      # LDAP server certificate must be included among these certificates.
      # See more here http://www.openldap.org/doc/admin24/tls.html.
      ## ldap_cert=
      ## use_start_tls=true

      # Distinguished name of the user to bind as -- not necessary if the LDAP server
      # supports anonymous searches
      ## bind_dn="CN=ServiceAccount,DC=mycompany,DC=com"

      # Password of the bind user -- not necessary if the LDAP server supports
      # anonymous searches
      ## bind_password=

      # Execute this script to produce the bind user password. This will be used
      # when `bind_password` is not set.
      ## bind_password_script=

      # Pattern for searching for usernames -- Use <username> for the parameter
      # For use when using LdapBackend for Hue authentication
      ## ldap_username_pattern="uid=<username>,ou=People,dc=mycompany,dc=com"

      # Use search bind authentication.
      ## search_bind_authentication=true

      # Whether or not to follow referrals
      ## follow_referrals=false

      # Enable python-ldap debugging.
      ## debug=false

      # Sets the debug level within the underlying LDAP C lib.
      ## debug_level=255

      # Possible values for trace_level are 0 for no logging, 1 for only logging the method calls with arguments,
      # 2 for logging the method calls with arguments and the complete results and 9 for also logging the traceback of method calls.
      ## trace_level=0

      ## [[[[[users]]]]]
        # Base filter for searching for users
        ## user_filter="objectclass=Person"

        # The username attribute in the LDAP schema
        ## user_name_attr=sAMAccountName

      ## [[[[[groups]]]]]
        # Base filter for searching for groups
        ## group_filter="objectclass=groupOfNames"

        # The username attribute in the LDAP schema
        ## group_name_attr=cn

  # Configuration options for specifying the Desktop Database. For more info,
  # see http://docs.djangoproject.com/en/1.4/ref/settings/#database-engine
  # ------------------------------------------------------------------------
  [[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name.
    # Note for Oracle, options={'threaded':true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "port=0" and then "name=<host>:<port>/<service_name>".
    ## engine=sqlite3
    ## host=
    ## port=
    ## user=
    ## password=
    ## name=desktop/desktop.db
    ## options={}

  # Configuration options for specifying the Desktop session.
  # For more info, see https://docs.djangoproject.com/en/1.4/topics/http/sessions/
  # ------------------------------------------------------------------------
  [[session]]
    # The cookie containing the users' session ID will expire after this amount of time in seconds.
    # Default is 2 weeks.
    ## ttl=1209600

    # The cookie containing the users' session ID will be secure.
    # Should only be enabled with HTTPS.
    ## secure=false

    # The cookie containing the users' session ID will use the HTTP only flag.
    ## http_only=false

    # Use session-length cookies. Logs out the user when she closes the browser window.
    ## expire_at_browser_close=false

  # Configuration options for connecting to an external SMTP server
  # ------------------------------------------------------------------------
  [[smtp]]
    # The SMTP server information for email notification delivery
    host=localhost
    port=25
    user=
    password=

    # Whether to use a TLS (secure) connection when talking to the SMTP server
    tls=no

    # Default email address to use for various automated notification from Hue
    ## default_from_email=hue@localhost

  # Configuration options for Kerberos integration for secured Hadoop clusters
  # ------------------------------------------------------------------------
  [[kerberos]]
    # Path to Hue's Kerberos keytab file
    ## hue_keytab=
    # Kerberos principal name for Hue
    ## hue_principal=hue/hostname.foo.com
    # Path to kinit
    ## kinit_path=/path/to/kinit

  # Configuration options for using OAuthBackend (core) login
  # ------------------------------------------------------------------------
  [[oauth]]
    # The Consumer key of the application
    ## consumer_key=XXXXXXXXXXXXXXXXXXXXX

    # The Consumer secret of the application
    ## consumer_secret=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    # The Request token URL
    ## request_token_url=https://api.twitter.com/oauth/request_token

    # The Access token URL
    ## access_token_url=https://api.twitter.com/oauth/access_token

    # The Authorize URL
    ## authenticate_url=https://api.twitter.com/oauth/authorize

###########################################################################
# Settings to configure SAML
###########################################################################

[libsaml]
  # Xmlsec1 binary path. This program should be executable by the user running Hue.
  ## xmlsec_binary=/usr/local/bin/xmlsec1

  # Entity ID for Hue acting as service provider.
  # Can also accept a pattern where '<base_url>' will be replaced with server URL base.
  ## entity_id="<base_url>/saml2/metadata/"

  # Create users from SSO on login.
  ## create_users_on_login=true

  # Required attributes to ask for from IdP.
  # This requires a comma separated list.
  ## required_attributes=uid

  # Optional attributes to ask for from IdP.
  # This requires a comma separated list.
  ## optional_attributes=

  # IdP metadata in the form of a file. This is generally an XML file containing metadata that the Identity Provider generates.
  ## metadata_file=

  # Private key to encrypt metadata with.
  ## key_file=

  # Signed certificate to send along with encrypted metadata.
  ## cert_file=

  # A mapping from attributes in the response from the IdP to django user attributes.
  ## user_attribute_mapping={'uid':'username'}

  # Have Hue initiated authn requests be signed and provide a certificate.
  ## authn_requests_signed=false

  # Have Hue initiated logout requests be signed and provide a certificate.
  ## logout_requests_signed=false

  # Username can be sourced from 'attributes' or 'nameid'.
  ## username_source=attributes

  # Performs the logout or not.
  ## logout_enabled=true

###########################################################################
# Settings to configure OPENID
###########################################################################

[libopenid]
  # (Required) OpenId SSO endpoint url.
  ## server_endpoint_url=https://www.google.com/accounts/o8/id

  # OpenId 1.1 identity url prefix to be used instead of SSO endpoint url
  # This is only supported if you are using an OpenId 1.1 endpoint
  ## identity_url_prefix=https://app.onelogin.com/openid/your_company.com/

  # Create users from OPENID on login.
  ## create_users_on_login=true

  # Use email for username
  ## use_email_for_username=true

###########################################################################
# Settings to configure OAuth
###########################################################################

[liboauth]
  # NOTE:
  # To work, each of the active (i.e. uncommented) service must have
  # applications created on the social network.
  # Then the "consumer key" and "consumer secret" must be provided here.
  #
  # The addresses where to do so are:
  # Twitter:  https://dev.twitter.com/apps
  # Google+ : https://cloud.google.com/
  # Facebook: https://developers.facebook.com/apps
  # Linkedin: https://www.linkedin.com/secure/developer
  #
  # Additionally, the following must be set in the application settings:
  # Twitter:  Callback URL (aka Redirect URL) must be set to http://YOUR_HUE_IP_OR_DOMAIN_NAME/oauth/social_login/oauth_authenticated
  # Google+ : CONSENT SCREEN must have email address
  # Facebook: Sandbox Mode must be DISABLED
  # Linkedin: "In OAuth User Agreement", r_emailaddress is REQUIRED

  # The Consumer key of the application
  ## consumer_key_twitter=
  ## consumer_key_google=
  ## consumer_key_facebook=
  ## consumer_key_linkedin=

  # The Consumer secret of the application
  ## consumer_secret_twitter=
  ## consumer_secret_google=
  ## consumer_secret_facebook=
  ## consumer_secret_linkedin=

  # The Request token URL
  ## request_token_url_twitter=https://api.twitter.com/oauth/request_token
  ## request_token_url_google=https://accounts.google.com/o/oauth2/auth
  ## request_token_url_linkedin=https://www.linkedin.com/uas/oauth2/authorization
  ## request_token_url_facebook=https://graph.facebook.com/oauth/authorize

  # The Access token URL
  ## access_token_url_twitter=https://api.twitter.com/oauth/access_token
  ## access_token_url_google=https://accounts.google.com/o/oauth2/token
  ## access_token_url_facebook=https://graph.facebook.com/oauth/access_token
  ## access_token_url_linkedin=https://api.linkedin.com/uas/oauth2/accessToken

  # The Authenticate URL
  ## authenticate_url_twitter=https://api.twitter.com/oauth/authorize
  ## authenticate_url_google=https://www.googleapis.com/oauth2/v1/userinfo?access_token=
  ## authenticate_url_facebook=https://graph.facebook.com/me?access_token=
  ## authenticate_url_linkedin=https://api.linkedin.com/v1/people/~:(email-address)?format=json&oauth2_access_token=

  # Username Map. Json Hash format.
  # Replaces username parts in order to simplify usernames obtained
  # Example: {"@sub1.domain.com":"_S1", "@sub2.domain.com":"_S2"}
  # converts 'email@sub1.domain.com' to 'email_S1'
  ## username_map={}

  # Whitelisted domains (only applies to Google OAuth). CSV format.
  ## whitelisted_domains_google=

###########################################################################
# Settings for the RDBMS application
###########################################################################

[librdbms]
  # The RDBMS app can have any number of databases configured in the databases
  # section. A database is known by its section name
  # (IE sqlite, mysql, psql, and oracle in the list below).

  [[databases]]
    # sqlite configuration.
    ## [[[sqlite]]]
      # Name to show in the UI.
      ## nice_name=SQLite

      # For SQLite, name defines the path to the database.
      ## name=/tmp/sqlite.db

      # Database backend to use.
      ## engine=sqlite

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      ## options={}

    # mysql, oracle, or postgresql configuration.
    [[[mysql]]]
      # Name to show in the UI.
      nice_name="FMCM CMS Database"

      # For MySQL and PostgreSQL, name is the name of the database.
      # For Oracle, Name is instance of the Oracle server. For express edition
      # this is 'xe' by default.
      name=fmcm_cms

      # Database backend to use. This can be:
      # 1. mysql
      # 2. postgresql
      # 3. oracle
      engine=mysql

      # IP or hostname of the database to connect to.
      host=100.99.74.222

      # Port the database server is listening to. Defaults are:
      # 1. MySQL: 3306
      # 2. PostgreSQL: 5432
      # 3. Oracle Express Edition: 1521
      port=3306

      # Username to authenticate with when connecting to the database.
      user=thecover

      # Password matching the username to authenticate with when
      # connecting to the database.
      password=Thecover_2016

      # Database options to send to the server when connecting.
      # https://docs.djangoproject.com/en/1.4/ref/databases/
      options={"init_command":"SET NAMES 'utf8'"}

###########################################################################
# Settings to configure your Hadoop cluster.
###########################################################################

[hadoop]
  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://master1:8020

      # NameNode logical name.
      logical_name=master1

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://master1:10070/webhdfs/v1

      # Change this if your HDFS cluster is Kerberos-secured
      security_enabled=false

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ## ssl_cert_ca_verify=True

      # Directory of the Hadoop configuration
      hadoop_conf_dir=/data/usr/hadoop/etc/hadoop

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=master1

      # The port where the ResourceManager IPC listens on
      resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://master1:10088

      # URL of the ProxyServer API
      proxy_api_url=http://master1:10088

      # URL of the HistoryServer API
      ## history_server_api_url=http://localhost:19888

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ## ssl_cert_ca_verify=True

    # HA support by specifying multiple clusters
    # e.g.
    # [[[ha]]]
      # Resource Manager logical name (required for HA)
      ## logical_name=my-rm-name

  # Configuration for MapReduce (MR1)
  # ------------------------------------------------------------------------
  [[mapred_clusters]]

    [[[default]]]
      # Enter the host on which you are running the Hadoop JobTracker
      ## jobtracker_host=localhost

      # The port where the JobTracker IPC listens on
      ## jobtracker_port=8021

      # JobTracker logical name for HA
      ## logical_name=

      # Thrift plug-in port for the JobTracker
      ## thrift_port=9290

      # Whether to submit jobs to this cluster
      submit_to=False

      # Change this if your MapReduce cluster is Kerberos-secured
      ## security_enabled=false

    # HA support by specifying multiple clusters
    # e.g.
    # [[[ha]]]
      # Enter the logical name of the JobTrackers
      ## logical_name=my-jt-name

###########################################################################
# Settings to configure the Filebrowser app
###########################################################################

[filebrowser]
  # Location on local filesystem where the uploaded archives are temporary stored.
  ## archive_upload_tempdir=/tmp

###########################################################################
# Settings to configure liboozie
###########################################################################

[liboozie]
  # The URL where the Oozie service runs on. This is required in order for
  # users to submit jobs. Empty value disables the config check.
  ## oozie_url=http://localhost:11000/oozie

  # Requires FQDN in oozie_url if enabled
  ## security_enabled=false

  # Location on HDFS where the workflows/coordinator are deployed when submitted.
  ## remote_deployement_dir=/user/hue/oozie/deployments

###########################################################################
# Settings to configure the Oozie app
###########################################################################

[oozie]
  # Location on local FS where the examples are stored.
  ## local_data_dir=..../examples

  # Location on local FS where the data for the examples is stored.
  ## sample_data_dir=...thirdparty/sample_data

  # Location on HDFS where the oozie examples and workflows are stored.
  ## remote_data_dir=/user/hue/oozie/workspaces

  # Maximum of Oozie workflows or coodinators to retrieve in one API call.
  ## oozie_jobs_count=100

  # Use Cron format for defining the frequency of a Coordinator instead of the old frequency number/unit.
  ## enable_cron_scheduling=true

###########################################################################
# Settings to configure Beeswax with Hive
###########################################################################

[beeswax]
  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=master1

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/data/usr/hive/conf

  # Timeout in seconds for thrift calls to Hive service
  ## server_conn_timeout=120

  # Choose whether to use the old GetLog() thrift call from before Hive 0.14 to retrieve the logs.
  # If false, use the FetchResults() thrift call from Hive 1.0 or more instead.
  ## use_get_log_api=false

  # Set a LIMIT clause when browsing a partitioned table.
  # A positive value will be set as the LIMIT. If 0 or negative, do not set any limit.
  ## browse_partitioned_table_limit=250

  # A limit to the number of rows that can be downloaded from a query.
  # A value of -1 means there will be no limit.
  # A maximum of 65,000 is applied to XLS downloads.
  ## download_row_limit=1000000

  # Hue will try to close the Hive query when the user leaves the editor page.
  # This will free all the query resources in HiveServer2, but also make its results inaccessible.
  ## close_queries=false

  # Thrift version to use when communicating with HiveServer2.
  # New column format is from version 7.
  ## thrift_version=5

  [[ssl]]
    # Path to Certificate Authority certificates.
    ## cacerts=/etc/hue/cacerts.pem

    # Choose whether Hue should validate certificates received from the server.
    ## validate=true

###########################################################################
# Settings to configure Impala
###########################################################################

[impala]
  # Host of the Impala Server (one of the Impalad)
  ## server_host=localhost

  # Port of the Impala Server
  ## server_port=21050

  # Kerberos principal
  ## impala_principal=impala/hostname.foo.com

  # Turn on/off impersonation mechanism when talking to Impala
  ## impersonation_enabled=False

  # Number of initial rows of a result set to ask Impala to cache in order
  # to support re-fetching them for downloading them.
  # Set to 0 for disabling the option and backward compatibility.
  ## querycache_rows=50000

  # Timeout in seconds for thrift calls
  ## server_conn_timeout=120

  # Hue will try to close the Impala query when the user leaves the editor page.
  # This will free all the query resources in Impala, but also make its results inaccessible.
  ## close_queries=true

  # If QUERY_TIMEOUT_S > 0, the query will be timed out (i.e. cancelled) if Impala does not do any work
  # (compute or send back results) for that query within QUERY_TIMEOUT_S seconds.
  ## query_timeout_s=600

  [[ssl]]
    # SSL communication enabled for this server.
    ## enabled=false

    # Path to Certificate Authority certificates.
    ## cacerts=/etc/hue/cacerts.pem

    # Choose whether Hue should validate certificates received from the server.
    ## validate=true

###########################################################################
# Settings to configure Pig
###########################################################################

[pig]
  # Location of piggybank.jar on local filesystem.
  ## local_sample_dir=/usr/share/hue/apps/pig/examples

  # Location piggybank.jar will be copied to in HDFS.
  ## remote_data_dir=/user/hue/pig/examples

###########################################################################
# Settings to configure Sqoop
###########################################################################

[sqoop]
  # For autocompletion, fill out the librdbms section.

  # Sqoop server URL
  ## server_url=http://localhost:12000/sqoop

###########################################################################
# Settings to configure Proxy
###########################################################################

[proxy]
  # Comma-separated list of regular expressions,
  # which match 'host:port' of requested proxy target.
  ## whitelist=(localhost|127\.0\.0\.1):(50030|50070|50060|50075)

  # Comma-separated list of regular expressions,
  # which match any prefix of 'host:port/path' of requested proxy target.
  # This does not support matching GET parameters.
  ## blacklist=

###########################################################################
# Settings to configure HBase Browser
###########################################################################

[hbase]
  # Comma-separated list of HBase Thrift servers for clusters in the format of '(name|host:port)'.
  # Use full hostname with security.
  # If using Kerberos we assume GSSAPI SASL, not PLAIN.
  ## hbase_clusters=(Cluster|localhost:9090)

  # HBase configuration directory, where hbase-site.xml is located.
  ## hbase_conf_dir=/etc/hbase/conf

  # Hard limit of rows or columns per row fetched before truncating.
  ## truncate_limit = 500

  # 'buffered' is the default of the HBase Thrift Server and supports security.
  # 'framed' can be used to chunk up responses,
  # which is useful when used in conjunction with the nonblocking server in Thrift.
  ## thrift_transport=buffered

###########################################################################
# Settings to configure Solr Search
###########################################################################

[search]
  # URL of the Solr Server
  ## solr_url=http://localhost:8983/solr/

  # Requires FQDN in solr_url if enabled
  ## security_enabled=false

  # Query sent when no term is entered
  ## empty_query=*:*

###########################################################################
# Settings to configure Solr Indexer
###########################################################################

[indexer]
  # Location of the solrctl binary.
  ## solrctl_path=/usr/bin/solrctl

  # Zookeeper ensemble.
  ## solr_zk_ensemble=localhost:2181/solr

###########################################################################
# Settings to configure Job Designer
###########################################################################

[jobsub]
  # Location on local FS where examples and template are stored.
  ## local_data_dir=..../data

  # Location on local FS where sample data is stored
  ## sample_data_dir=...thirdparty/sample_data

###########################################################################
# Settings to configure Job Browser.
###########################################################################

[jobbrowser]
  # Share submitted jobs information with all users. If set to false,
  # submitted jobs are visible only to the owner and administrators.
  ## share_jobs=true

###########################################################################
# Settings to configure the Zookeeper application.
###########################################################################

[zookeeper]

  [[clusters]]

    [[[default]]]
      # Zookeeper ensemble. Comma separated list of Host/Port.
      # e.g. localhost:2181,localhost:2182,localhost:2183
      host_ports=master1:2181,master2:2181,slave1:2181,slave2:2181,slave3:2181

      # The URL of the REST contrib service (required for znode browsing).
      ## rest_url=http://localhost:9998

      # Name of Kerberos principal when using security.
      ## principal_name=zookeeper

###########################################################################
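Since `host_ports` is a plain comma-separated list, a quick script can confirm every entry splits into a host and a numeric port before Hue tries to use it. `parse_host_ports` is a hypothetical helper, not a Hue API:

```python
def parse_host_ports(value):
    """Split a host_ports value ('host1:2181,host2:2181,...') into
    (host, port) pairs. Hypothetical validation helper, not part of Hue."""
    pairs = []
    for item in value.split(","):
        host, _, port = item.strip().partition(":")
        pairs.append((host, int(port)))  # raises ValueError on a bad port
    return pairs

# The ensemble configured above.
ensemble = "master1:2181,master2:2181,slave1:2181,slave2:2181,slave3:2181"
print(parse_host_ports(ensemble))
```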
# Settings to configure the Spark application.
###########################################################################

[spark]
  # URL of the REST Spark Job Server.
  ## server_url=http://localhost:8090/

  # List of available types of snippets
  ## languages='[{"name": "Scala", "type": "scala"},{"name": "Python", "type": "python"},{"name": "Impala SQL", "type": "impala"},{"name": "Hive SQL", "type": "hive"},{"name": "Text", "type": "text"}]'

###########################################################################
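The `languages` value must be valid JSON inside the single quotes; a malformed entry is a common source of startup errors. A quick check with `json.loads` (a sketch, not part of Hue's own validation):

```python
import json

# The exact value from the ini above, without the surrounding single quotes.
languages = ('[{"name": "Scala", "type": "scala"},'
             '{"name": "Python", "type": "python"},'
             '{"name": "Impala SQL", "type": "impala"},'
             '{"name": "Hive SQL", "type": "hive"},'
             '{"name": "Text", "type": "text"}]')

snippets = json.loads(languages)  # raises ValueError if the JSON is malformed
print([s["type"] for s in snippets])
# → ['scala', 'python', 'impala', 'hive', 'text']
```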
# Settings for the User Admin application
###########################################################################

[useradmin]
  # The name of the default user group that users will be a member of
  ## default_user_group=default

  [[password_policy]]
    # Set password policy to all users. The default policy requires password to be at least 8 characters long,
    # and contain both uppercase and lowercase letters, numbers, and special characters.
    ## is_enabled=false
    ## pwd_regex="^(?=.*?[A-Z])(?=(.*[a-z]){1,})(?=(.*[\d]){1,})(?=(.*[\W_]){1,}).{8,}$"
    ## pwd_hint="The password must be at least 8 characters long, and must contain both uppercase and lowercase letters, at least one number, and at least one special character."
    ## pwd_error_message="The password must be at least 8 characters long, and must contain both uppercase and lowercase letters, at least one number, and at least one special character."

###########################################################################
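Before enabling the policy, the default `pwd_regex` can be tried out directly in Python; the sample passwords below are made up for illustration:

```python
import re

# The default pwd_regex from the ini above: lookaheads require an uppercase
# letter, a lowercase letter, a digit, and a special character; .{8,} enforces
# the minimum length.
pwd_regex = r"^(?=.*?[A-Z])(?=(.*[a-z]){1,})(?=(.*[\d]){1,})(?=(.*[\W_]){1,}).{8,}$"

def password_ok(pwd):
    return re.match(pwd_regex, pwd) is not None

print(password_ok("Hue@2016pass"))  # True: upper, lower, digit, special, >=8 chars
print(password_ok("password"))      # False: no uppercase, digit, or special char
```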
# Settings for the Sentry lib
###########################################################################

[libsentry]
  # Hostname or IP of server.
  ## hostname=localhost

  # Port the sentry service is running on.
  ## port=8038

  # Sentry configuration directory, where sentry-site.xml is located.
  ## sentry_conf_dir=/etc/sentry/conf

Reposted from: https://my.oschina.net/aibati2008/blog/654038
