Greenplum Database Cluster Installation Guide
Note: this document was compiled, under a colleague's guidance, from the operation steps and my own understanding. ------ 阿莲
Wherever no specific server or user is stated, run the step once on every server as the root user.
Part I. Pre-installation Environment Configuration
This cluster consists of two Huawei Cloud servers, with the IP addresses:
192.168.1.124
192.168.1.123
1. Change the cluster servers' hostnames
[root@sdw1 ~]# vim /etc/hosts   # send this file to the master host, or edit it on one host and then distribute it; it is also best to comment out the ::1 line and add each node's address and hostname
#::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
#127.0.0.1    localhost localhost
192.168.1.124 sdw1 seg01
192.168.1.123 master mas01
2. Configure SSH trust for the root user
On each server in turn, append the id_dsa.pub and id_rsa.pub keys to the authorized_keys file, and finally distribute that file to all servers.
Server 1
[root@sdw1 .ssh]# mkdir -p /root/.ssh      # optional; just cd into /root/.ssh
[root@sdw1 .ssh]# chmod -R 700 /root/.ssh  # optional; this is already the default permission
[root@sdw1 .ssh]# ssh-keygen -t rsa
[root@sdw1 .ssh]# ssh-keygen -t dsa
[root@sdw1 .ssh]# cat id_dsa.pub >> authorized_keys
[root@sdw1 .ssh]# cat id_rsa.pub >> authorized_keys
[root@sdw1 .ssh]# scp authorized_keys sdw1:/root/.ssh/
Server 2
[root@master ~]# mkdir -p /root/.ssh      # optional; just cd into /root/.ssh
[root@master ~]# chmod -R 700 /root/.ssh  # optional; this is already the default permission
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-keygen -t dsa
[root@master ~]# cat id_dsa.pub >> authorized_keys
[root@master ~]# cat id_rsa.pub >> authorized_keys
[root@master ~]# scp authorized_keys mas01:/root/.ssh/
3. Configure the sysctl.conf file
[root@master ~]# vim /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Controls the default maximum size of a message queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# The defaults above can be left untouched; simply append the following:
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2

Apply the configuration:
[root@master ~]# sysctl -p
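After `sysctl -p`, it is worth spot-checking a couple of values through `/proc`, which mirrors the sysctl keys (dots become slashes). This small verification sketch is not part of the original procedure:

```shell
# Each file under /proc/sys corresponds to one sysctl key.
cat /proc/sys/kernel/shmmax                 # should show 500000000 once the new file is loaded
cat /proc/sys/net/ipv4/ip_local_port_range  # should show "1025 65535"
```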
4. Configure the limits.conf file
[root@master ~]# cat >> /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
5. Configure the /etc/security/limits.d/20-nproc.conf file
On earlier releases this file may be named 90-nproc.conf; on my cloud host it is 20-nproc.conf.
[root@master ~]# cat >> /etc/security/limits.d/20-nproc.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
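After logging out and back in, the new limits can be confirmed for the current shell. A quick check (not in the original steps):

```shell
# These report the soft limits for the current session; once the files above
# are in place and you log in again, they should show 65536 and 131072.
ulimit -n   # max open files
ulimit -u   # max user processes
```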
6. Disable the firewall, and add the read-ahead block setting to rc.local (note: this file must not be distributed). This step does not have to be written into the file; simply disabling the firewall is enough.
[root@master ~]# vim /etc/rc.local
#!/bin/sh
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off
/sbin/blockdev --setra 18384 /dev/sdb1
/sbin/blockdev --setra 18384 /dev/sdc1
# Note: on RHEL 7 and later this is usually `systemctl stop firewalld`
7. Disable SELinux (/etc/selinux/config)
In the /etc/selinux/config file, change the setting to SELINUX=disabled.
[root@master ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing  - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled   - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#   targeted - Targeted processes are protected,
#   mls      - Multi Level Security protection.
SELINUXTYPE=targeted
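The same edit can be scripted instead of done in vim. A minimal sketch (the helper name `disable_selinux_in` is made up for illustration); note that a reboot, or `setenforce 0`, is still needed before the running system stops enforcing:

```shell
# Rewrite any SELINUX=<value> line to SELINUX=disabled in the given file.
disable_selinux_in() {
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$1"
}
# e.g.: disable_selinux_in /etc/selinux/config
```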
8. Distribute the configured files to every node server
Files that can be distributed: sysctl.conf, 90-nproc.conf (or 20-nproc.conf), limits.conf, config, hosts
[root@master ~]# scp /etc/sysctl.conf seg01:/etc/sysctl.conf
[root@master ~]# scp /etc/security/limits.d/90-nproc.conf seg01:/etc/security/limits.d/90-nproc.conf
[root@master ~]# scp /etc/security/limits.conf seg01:/etc/security/limits.conf
[root@master ~]# scp /etc/selinux/config seg01:/etc/selinux/config
[root@master ~]# sysctl -p
[root@master ~]# ssh seg01 sysctl -p
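With more than one segment host, the per-file scp calls can also be generated from a host file. This is a sketch with a hypothetical helper (`gen_dist_cmds` is not part of the original procedure); it only prints the commands, so the plan can be reviewed before piping it to sh:

```shell
# Print the scp commands that would push each given file to every host
# listed in a host file (one hostname per line).
gen_dist_cmds() {
  hostfile=$1; shift
  while read -r host; do
    for f in "$@"; do
      printf 'scp %s %s:%s\n' "$f" "$host" "$f"
    done
  done < "$hostfile"
}
# usage: gen_dist_cmds host_seg /etc/sysctl.conf /etc/security/limits.conf | sh
```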
9. Create directories, create the user, and grant ownership
# Create the user and set its password
[root@master ~]# useradd gpadmin
[root@master ~]# passwd gpadmin      (enter the password ******)
# Create the installation directory
[root@master ~]# mkdir -p /opt/greenplum/
[root@master ~]# mkdir -p /data/pg_system/
# Create the database directory
[root@master ~]# mkdir -p /data/fs_csg_data/
[root@master ~]# chown -R gpadmin:gpadmin /opt/
[root@master ~]# chown -R gpadmin:gpadmin /data/
[root@master ~]# ssh seg01 useradd gpadmin
[root@master ~]# ssh seg01 passwd gpadmin    (enter the password ******)
[root@master ~]# ssh seg01 mkdir -p /data1/pg_system/primary/
[root@master ~]# ssh seg01 mkdir -p /data1/pg_system/mirror/
# One instance per CPU core, so a single data1 directory is created to match the CPU configuration; adjust to your own situation
[root@master ~]# ssh seg01 mkdir -p /data1/fs_csg_data/primary/
[root@master ~]# ssh seg01 mkdir -p /data1/fs_csg_data/mirror/
[root@master ~]# ssh seg01 chown -R gpadmin:gpadmin /opt/
[root@master ~]# ssh seg01 chown -R gpadmin:gpadmin /data1/

10. File system setup
Check: use `df -T` or `lsblk` to inspect how the disks are mounted, then record the mounts in /etc/fstab.
-- master
[root@master ~]# vim /etc/fstab
# Fill in the UUIDs for your own disks
......
UUID=7ec2cd5c-a358-4182-93ea-dc66525b65fd /master xfs rw,noatime,inode64,allocsize=16m 0 0
-- segment
[root@sdw1 ~]# vim /etc/fstab
# Fill in the UUIDs for your own disks
UUID=a762026d-73e0-4064-96f4-bdd21f20f9d5 /mirror xfs rw,nodev,noatime,inode64,allocsize=16m 0 0
UUID=7be862b4-2fc2-4a3b-a6c5-35f134c37137 /primary xfs rw,nodev,noatime,inode64,allocsize=16m 0 0
11. Disk scheduler settings (the file does not exist on CentOS 7 and later; this can be skipped)
Mainly add elevator=deadline, crashkernel=auto, and transparent_hugepage=never. Mind where they are added, and check whether crashkernel=auto already exists.
#echo deadline > /sys/block/sda/queue/scheduler
# The cloud host shows the following, and the setting above did not take effect there, so this can be skipped
[root@master ~]# cat /sys/block/vda/queue/scheduler
[mq-deadline] kyber none

Also write it into grub:
# vi /etc/grub2.cfg
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_mas01-lv_root rd_NO_LUKS rd_LVM_LV=vg_mas01/lv_root rd_NO_MD elevator=deadline crashkernel=auto LANG=zh_CN.UTF-8 rd_LVM_LV=vg_mas01/lv_swap KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet transparent_hugepage=never
initrd /initramfs-2.6.32-358.el6.x86_64.img    (rhel 6)
# grubby --update-kernel=ALL --args="elevator=deadline"    (rhel 7)
12. Disable Transparent Huge Pages (THP) (the file does not exist on CentOS 7 and later; this can be skipped)
That is, the transparent_hugepage=never already appended at the end in the previous step.
One way to disable THP on RHEL 6.x is by adding the parameter transparent_hugepage=never to the kernel command in the file /boot/grub/grub.conf:
kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/mapper/vg_mas01-lv_root rd_NO_LUKS rd_LVM_LV=vg_mas01/lv_root rd_NO_MD elevator=deadline crashkernel=auto LANG=zh_CN.UTF-8 rd_LVM_LV=vg_mas01/lv_swap KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet transparent_hugepage=never
initrd /initramfs-2.6.32-358.el6.x86_64.img
# grubby --update-kernel=ALL --args="transparent_hugepage=never"    (rhel 7)
Check:
# cat /sys/kernel/mm/*transparent_hugepage/enabled
always [never]
13. Read-ahead block setting (can be skipped)
Note: the setting is lost after a restart; re-run the command, or set it again after the system reboots.
Set the read-ahead value of every block device file to 18384, in the following format:
(If it was already added to rc.local in step 6, there is no need to set it here; it takes effect after a reboot.)
# /sbin/blockdev --setra 18384 /dev/sdb2
# /sbin/blockdev --getra /dev/sdb2
# /sbin/blockdev --setra 18384 /dev/sdb1
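`blockdev --setra` takes its value in 512-byte sectors, so 18384 corresponds to roughly 9 MB of read-ahead per device. A quick check of the arithmetic:

```shell
# Read-ahead is specified in 512-byte sectors; convert to KiB.
ra_sectors=18384
echo $(( ra_sectors * 512 / 1024 ))   # 9192 KiB, i.e. about 9 MB
```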
14. NTP service configuration
Install the NTP service and point all nodes at a single time server. # Huawei Cloud servers are usually preconfigured and the default can be kept; otherwise, fill in a server that fits your situation.
cat >> /etc/ntp.conf
restrict 127.0.0.1
server ntp.myhuaweicloud.com iburst

# Write a script that updates the time automatically, and add it to cron so it runs every day at midnight:
crontab -l
0 0 * * * sh /bin/npt_client_call.sh >/dev/null 2>&1
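The contents of /bin/npt_client_call.sh are not shown in the original; a minimal hypothetical sketch of such a sync script might look like this (assuming ntpdate is installed and the Huawei Cloud NTP server above is reachable):

```shell
#!/bin/sh
# Hypothetical sketch: one-shot sync against the NTP server, then persist
# the corrected time to the hardware clock.
ntpdate ntp.myhuaweicloud.com && hwclock -w
```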
15. Reboot the servers
[root@master ~]# reboot
Part II. Installing the Database
1. Upload and unpack the software package
After downloading the Greenplum package, upload it to the cloud server (using FTP, the lrzsz tools, or similar) and unpack it; remember to install unzip first.
Switch to the gpadmin user with `su - gpadmin` and run the unpacked .bin file; unpacking into the installation directory created earlier is fine.
[root@master ~]# su - gpadmin
[gpadmin@master ~]$ cd /opt/greenplum/
[gpadmin@master greenplum]$ unzip greenplum-db-4.3.8.0-build-1-RHEL5-x86_64.zip
2. Install the software
[gpadmin@master greenplum]$ ./greenplum-db-4.3.8.0-build-1-RHEL5-x86_64.bin
# The installer output is long, so the middle portion is omitted here; only the parts that require input are kept
********************************************************************************
You must read and accept the Pivotal Database license agreement
before installing
********************************************************************************
***  IMPORTANT INFORMATION - PLEASE READ CAREFULLY  ***
......
1. DEFINITIONS
............
13.3 Section 9.4 (Limitation Period). The entire section is deleted and replaced with: 9.4 WAIVER OF RIGHT TO BRING ACTIONS. Customer waives the right to bring any claim arising out of or in connection with this EULA more than twenty-four (24) months after the date of the cause of action giving rise to such claim.
Rev: Pivotal_GPDB_EULA_03182014.txt
I HAVE READ AND AGREE TO THE TERMS OF THE ABOVE PIVOTAL SOFTWARE LICENSE AGREEMENT.
********************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
********************************************************************************
yes
********************************************************************************
Provide the installation path for Greenplum Database or press ENTER to
accept the default installation path: /usr/local/greenplum-db-4.3.8.0
# Enter the installation directory here, i.e. the directory created earlier
********************************************************************************
/opt/greenplum/greenplum-db-4.3.8.0
********************************************************************************
Install Greenplum Database into </opt/greenplum/greenplum-db-4.3.8.0>? [yes|no]
********************************************************************************
yes
********************************************************************************
/opt/greenplum/greenplum-db-4.3.8.0 does not exist.
Create /opt/greenplum/greenplum-db-4.3.8.0 ? [yes|no]
(Selecting no will exit the installer)
********************************************************************************
yes
Extracting product to /opt/greenplum/greenplum-db-4.3.8.0
********************************************************************************
Installation complete.
Greenplum Database is installed in /opt/greenplum/greenplum-db-4.3.8.0
Pivotal Greenplum documentation is available
for download at http://docs.gopivotal.com/gpdb
********************************************************************************
3. Create GP SSH trust
First create the host_list and host_seg files (the names are up to you): host_list contains the master hostname plus all segment hostnames; host_seg contains only the segment hostnames.
[gpadmin@master greenplum]$ vim host_list
master
sdw1
[gpadmin@master greenplum]$ vim host_seg
master    # the master can also store data as a segment node
sdw1
If the gpssh-exkeys command is not available, source the runtime environment first:
[gpadmin@master greenplum]$ source ./greenplum-db-4.3.8.0/greenplum_path.sh
[gpadmin@master greenplum]$ gpssh-exkeys -f host_list
[STEP 1 of 5] create local ID and authorize on local host
[STEP 2 of 5] keyscan all hosts and update known_hosts file
[STEP 3 of 5] authorize current user on remote hosts
  ... send to seg01
  ***
  *** Enter password for seg01:    # enter the seg01 server password here
[STEP 4 of 5] determine common authentication file content
[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with seg01
[INFO] completed successfully
4. Check the time
[gpadmin@master greenplum]$ gpssh -f host_list
Note: command history unsupported on this machine ...
=> date    # the times must match, to avoid data synchronization problems
[seg01] Wed Nov 6 18:08:37 CST 2019
[mas01] Wed Nov 6 18:08:37 CST 2019
5. Cluster database installation
Install on the nodes
[gpadmin@master greenplum]$ gpseginstall -f host_list -u gpadmin -p wwljy789    # choose your own password
20191031:18:12:18:002501 gpseginstall:master:gpadmin-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /opt/greenplum/greenplum-db-4.3.8.0
binary_dir_location /opt/greenplum
binary_dir_name greenplum-db-4.3.8.0
20191031:18:12:18:002501 gpseginstall:master:gpadmin-[INFO]:-check cluster password access
20191031:18:12:18:002501 gpseginstall:master:gpadmin-[INFO]:-de-duplicate hostnames
20191031:18:12:18:002501 gpseginstall:master:gpadmin-[INFO]:-master hostname: master
20191031:18:12:18:002501 gpseginstall:master:gpadmin-[INFO]:-rm -f /opt/greenplum/greenplum-db-4.3.8.0.tar; rm -f /opt/greenplum/greenplum-db-4.3.8.0.tar.gz
20191031:18:12:18:002501 gpseginstall:master:gpadmin-[INFO]:-cd /opt/greenplum; tar cf greenplum-db-4.3.8.0.tar greenplum-db-4.3.8.0
20191031:18:12:28:002501 gpseginstall:master:gpadmin-[INFO]:-gzip /opt/greenplum/greenplum-db-4.3.8.0.tar
20191031:18:12:48:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: mkdir -p /opt/greenplum
20191031:18:12:49:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: rm -rf /opt/greenplum/greenplum-db-4.3.8.0
20191031:18:12:49:002501 gpseginstall:master:gpadmin-[INFO]:-scp software to remote location
20191031:18:12:51:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: gzip -f -d /opt/greenplum/greenplum-db-4.3.8.0.tar.gz
20191031:18:12:56:002501 gpseginstall:master:gpadmin-[INFO]:-md5 check on remote location
20191031:18:12:57:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: cd /opt/greenplum; tar xf greenplum-db-4.3.8.0.tar
20191031:18:13:02:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: rm -f /opt/greenplum/greenplum-db-4.3.8.0.tar
20191031:18:13:03:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: cd /opt/greenplum; rm -f greenplum-db; ln -fs greenplum-db-4.3.8.0 greenplum-db
20191031:18:13:03:002501 gpseginstall:master:gpadmin-[INFO]:-rm -f /opt/greenplum/greenplum-db-4.3.8.0.tar.gz
20191031:18:13:04:002501 gpseginstall:master:gpadmin-[INFO]:-version string on master: gpssh version 4.3.8.0 build 1
20191031:18:13:04:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: . /opt/greenplum/greenplum-db/./greenplum_path.sh; /opt/greenplum/greenplum-db/./bin/gpssh --version
20191031:18:13:04:002501 gpseginstall:master:gpadmin-[INFO]:-remote command: . /opt/greenplum/greenplum-db-4.3.8.0/greenplum_path.sh; /opt/greenplum/greenplum-db-4.3.8.0/bin/gpssh --version
20191031:18:13:10:002501 gpseginstall:master:gpadmin-[INFO]:-SUCCESS -- Requested commands completed
# SUCCESS means the installation completed; if there are errors, check the relevant configuration against the messages
6. Check the parameters
[gpadmin@master ~]$ source /opt/greenplum/greenplum-db/greenplum_path.sh
[gpadmin@master ~]$ gpcheck -f host_list -m master
20191031:18:39:02:003100 gpcheck:master:gpadmin-[INFO]:-dedupe hostnames
20191031:18:39:02:003100 gpcheck:master:gpadmin-[INFO]:-Detected platform: Generic Linux Cluster
20191031:18:39:02:003100 gpcheck:master:gpadmin-[INFO]:-generate data on servers
20191031:18:39:03:003100 gpcheck:master:gpadmin-[INFO]:-copy data files from servers
Warning: the ECDSA host key for 'sdw1' differs from the key for the IP address '192.188.1.123'
Offending key for IP in /home/gpadmin/.ssh/known_hosts:1
Matching host key in /home/gpadmin/.ssh/known_hosts:6
Are you sure you want to continue connecting (yes/no)? yes
20191031:18:39:07:003100 gpcheck:master:gpadmin-[INFO]:-delete remote tmp files
20191031:18:39:07:003100 gpcheck:master:gpadmin-[INFO]:-Using gpcheck config file: /opt/greenplum/greenplum-db/./etc/gpcheck.cnf
20191031:18:39:07:003100 gpcheck:master:gpadmin-[ERROR]:-GPCHECK_ERROR host(None): utility will not check all settings when run as non-root user
20191031:18:39:07:003100 gpcheck:master:gpadmin-[ERROR]:-GPCHECK_ERROR host(master): on device (vda) IO scheduler 'mq-deadline' does not match expected value 'deadline'
20191031:18:39:07:003100 gpcheck:master:gpadmin-[ERROR]:-GPCHECK_ERROR host(sdw1): on device (vda) IO scheduler 'mq-deadline' does not match expected value 'deadline'
20191031:18:39:07:003100 gpcheck:master:gpadmin-[INFO]:-gpcheck completing...
# If errors appear, check the earlier configuration against the messages. Here the cloud host's disk scheduler setting cannot be changed, but that does not affect installing or using the cluster.
[gpadmin@master greenplum]$
7. Check the network
# This step was not run on the cloud hosts; it can in fact be skipped as well
[gpadmin@master ~]$ gpcheckperf -f host_seg -r n -d /tmp > test.out
[gpadmin@master ~]$ cat test.out
/opt/greenplum-db/./bin/gpcheckperf -f host_seg -r n -d /tmp
-------------------
--  NETPERF TEST
-------------------
====================
==  RESULT
====================
Netperf bisection bandwidth test
master -> sdw1 = 1084.220000
sdw1 -> master = 1120.210000

Summary:
sum = 13411.45 MB/sec
min = 1084.22 MB/sec
max = 1121.99 MB/sec
avg = 1118.62 MB/sec
median = 1120.55 MB/sec
8. I/O check (not performed in this exercise)
[gpadmin@master ~]$ gpcheckperf -f host_seg -r ds -D -d /data/pg_system -v -S 2048000000 > io.out
[gpadmin@master ~]$ cat io.out
/opt/greenplum-db/./bin/gpcheckperf -f host_seg -r ds -D -d /data/pg_system -v -S 2048000000
--------------------
--  SETUP
--------------------
[Info] verify python interpreter exists
[Info] /opt/greenplum-db/./bin/gpssh -f host_seg 'python -c print'
[Info] making gpcheckperf directory on all hosts ...
[Info] /opt/greenplum-db/./bin/gpssh -f host_seg 'rm -rf /data/pg_system/gpcheckperf_$USER /data/pg_system/gpcheckperf_$USER'
[Info] copy local /opt/greenplum/bin/lib/multidd to remote /data/pg_system/gpcheckperf_$USER/multidd
[Info] /opt/greenplum-db/./bin/gpscp -f host_seg /opt/greenplum/bin/lib/multidd =:/data1/pg_system/gpcheckperf_$USER/multidd
[Info] /opt/greenplum-db/./bin/gpssh -f host_seg 'chmod a+rx /data/pg_system/gpcheckperf_$USER/multidd'
--------------------
--  DISK WRITE TEST
--------------------
[Info] /opt/greenplum-db/./bin/gpssh -f host_seg 'time -p /data/pg_system/gpcheckperf_$USER/multidd -i /dev/zero -o -B 32768 -S 512000000'
--------------------
--  DISK READ TEST
--------------------
[Info] /opt/greenplum-db/./bin/gpssh -f host_seg 'time -p /data/pg_system/gpcheckperf_$USER/multidd -o /dev/null -i -B 32768 -S 512000000'
--------------------
--  STREAM TEST
--------------------
......
9. Configure the gpinitsystem_config file
[gpadmin@master ~]$ mkdir /home/gpadmin/gpconfigs
[gpadmin@master ~]$ cp /opt/greenplum/greenplum-db-4.3.8.0/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/
[gpadmin@master ~]$ cd /home/gpadmin/gpconfigs/
[gpadmin@master gpconfigs]$ vi gpinitsystem_config

# FILE NAME: gpinitsystem_config
# Configuration file needed by the gpinitsystem
################################################
#### REQUIRED PARAMETERS
################################################
#### Name of this Greenplum system enclosed in quotes.
ARRAY_NAME="EMC Greenplum DW"    # usually left at the default
#### Naming convention for utility-generated data directories.
SEG_PREFIX=gpseg                 # usually left at the default
#### Base number by which primary segment port numbers
#### are calculated.
PORT_BASE=40000                  # adjust as needed; it just must not clash with other databases
#### File system location(s) where primary segment data directories
#### will be created. The number of locations in the list dictate
#### the number of primary segments that will get created per
#### physical host (if multiple addresses for a host are listed in
#### the hostfile, the number of segments will be spread evenly across
#### the specified interface addresses).
# Fill in the database directories you created; the number of instances depends on capacity. Four primaries are set here.
declare -a DATA_DIRECTORY=(/data/pg_system/primary /data/pg_system/primary /data/pg_system/primary /data/pg_system/primary)
#### OS-configured hostname or IP address of the master host. (fill in the hostname)
MASTER_HOSTNAME=master
#### File system location where the master data directory
#### will be created. (fill in the directory you created)
MASTER_DIRECTORY=/data/pg_system
#### Port number for the master instance.
MASTER_PORT=5432                 # 5432 by default; may be customized as long as it does not clash with other services
#### Shell utility used to connect to remote hosts.
TRUSTED_SHELL=ssh
#### Maximum log file segments between automatic WAL checkpoints. (per the instances actually created)
CHECK_POINT_SEGMENTS=4
#### Default server-side character set encoding.
ENCODING=UTF8
################################################
#### OPTIONAL MIRROR PARAMETERS
################################################
#### Base number by which mirror segment port
#### numbers are calculated.
MIRROR_PORT_BASE=50000           # mirror port
#### Base number by which primary file replication port
#### numbers are calculated.
REPLICATION_PORT_BASE=41000      # primary file replication port
#### Base number by which mirror file replication port
#### numbers are calculated.
MIRROR_REPLICATION_PORT_BASE=51000   # mirror file replication port
#### File system location(s) where mirror segment data directories
#### will be created. The number of mirror locations must equal the
#### number of primary locations as specified in the
#### DATA_DIRECTORY parameter.
# Fill in the database directories you created; four mirrors are set here, matching the number of primaries.
declare -a MIRROR_DATA_DIRECTORY=(/data/pg_system/mirror /data/pg_system/mirror /data/pg_system/mirror /data/pg_system/mirror)
################################################
#### OTHER OPTIONAL PARAMETERS
################################################
#### Create a database of this name after initialization.
DATABASE_NAME=mydb
#### Specify the location of the host address file here instead of
#### with the -h option of gpinitsystem.
#MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem
[gpadmin@master gpconfigs]$
10. Initialization (database installation)
With a standby:
[gpadmin@master gpconfigs]$ gpinitsystem -c gpinitsystem_config -s smdw -h /home/gpadmin/host_seg
Without a standby:
[gpadmin@master gpconfigs]$ gpinitsystem -c gpinitsystem_config -h /home/gpadmin/host_seg
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Reading Greenplum configuration file gpinitsystem_config
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Locale set to en_US.utf8
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[WARN]:-Master hostname mdw does not match hostname output
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Checking to see if mdw can be resolved on this host
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Can resolve mdw to this host
20191031:18:00:55:020311 gpinitsystem:mas36:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
20191031:18:00:56:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Checking configuration parameters, Completed
20191031:18:00:56:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
............
20191031:18:01:00:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Configuring build for standard array
20191031:18:01:00:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20191031:18:01:00:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Building primary segment instance array, please wait...
................................................
20191031:18:01:37:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Building group mirror array type , please wait...
................................................
20191031:18:02:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Checking Master host
20191031:18:02:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Checking new segment hosts, please wait...
............................................
....................................................
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Checking new segment hosts, Completed
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Greenplum Database Creation Parameters
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:---------------------------------------
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master Configuration
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:---------------------------------------
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master instance name = EMC Greenplum DW
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master hostname = mdw
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master port = 5432
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master instance dir = /data/pg_system/gpseg-1
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master LOCALE = en_US.utf8
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Greenplum segment prefix = gpseg
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master Database = csgbi
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master connections = 250
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master buffers = 128000kB
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Segment connections = 750
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Segment buffers = 128000kB
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Checkpoint segments = 8
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Encoding = UTF8
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Postgres param file = Off
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Initdb to be used = /opt/greenplum-db/./bin/initdb
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-GP_LIBRARY_PATH is = /opt/greenplum-db/./lib
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Ulimit check = Passed
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Array host connect type = Single hostname per node
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master IP address [1] = ::1
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master IP address [2] = 192.188.1.123
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Master IP address [3] = fe80::92e2:baff:fe92:f4dc
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Standby Master = Not Configured
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Primary segment # = 4
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Total Database segments = 8
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Trusted shell = ssh
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Number segment hosts = 2
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Mirror port base = 50000
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Replicaton port base = 41000
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Mirror replicaton port base= 51000
20191031:18:04:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Mirror segment # = 4
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Mirroring config = ON
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Mirroring type = Group
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:----------------------------------------
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:----------------------------------------
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data1/pg_system/primary/gpseg0 40000 2 0 41000
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data1/pg_system/primary/gpseg1 40001 3 1 41001
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data2/pg_system/primary/gpseg2 40002 4 2 41002
20191031:18:04:19:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data2/pg_system/primary/gpseg3 40003 5 3 41003
......
20191031:18:04:20:020311 gpinitsystem:mas36:gpadmin-[INFO]:---------------------------------------
20191031:18:04:20:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Greenplum Mirror Segment Configuration
20191031:18:04:20:020311 gpinitsystem:mas36:gpadmin-[INFO]:---------------------------------------
20191031:18:04:21:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data1/pg_system/mirror/gpseg44 50000 94 44 51000
20191031:18:04:22:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data1/pg_system/mirror/gpseg45 50001 95 45 51001
20191031:18:04:22:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data2/pg_system/mirror/gpseg46 50002 96 46 51002
20191031:18:04:22:020311 gpinitsystem:mas36:gpadmin-[INFO]:-seg01 /data2/pg_system/mirror/gpseg47 50003 97 47 51003
Continue with Greenplum creation Yy/Nn> y
20191031:18:04:53:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Building the Master instance database, please wait...
20191031:18:05:05:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Starting the Master in admin mode
20191031:18:06:15:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20191031:18:06:15:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
................................................
20191031:18:06:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.............................................
20191031:18:07:03:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------
20191031:18:07:03:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Parallel process exit status
20191031:18:07:03:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------
20191031:18:07:04:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Total processes marked as completed = 48
20191031:18:07:04:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Total processes marked as killed = 0
20191031:18:07:04:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Total processes marked as failed = 0
20191031:18:07:04:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------
20191031:18:07:04:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20191031:18:07:04:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
................................................
20191031:18:07:07:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
..........
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Parallel process exit status
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Total processes marked as completed = 48
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Total processes marked as killed = 0
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Total processes marked as failed = 0
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Deleting distributed backout files
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Removing back out file
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-No errors generated from parallel processes
20191031:18:07:18:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20191031:18:07:18:031959 gpstop:mas36:gpadmin-[INFO]:-Starting gpstop with args: -a -i -m -d /data/pg_system/gpseg-1
20191031:18:07:18:031959 gpstop:mas36:gpadmin-[INFO]:-Gathering information and validating the environment...
20191031:18:07:18:031959 gpstop:mas36:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20191031:18:07:18:031959 gpstop:mas36:gpadmin-[INFO]:-Obtaining Segment details from master...
20191031:18:07:19:031959 gpstop:mas36:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.8.0 build 1'
20191031:18:07:19:031959 gpstop:mas36:gpadmin-[INFO]:-There are 0 connections to the database
20191031:18:07:19:031959 gpstop:mas36:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20191031:18:07:19:031959 gpstop:mas36:gpadmin-[INFO]:-Master host=mas36
20191031:18:07:19:031959 gpstop:mas36:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=immediate
20191031:18:07:19:031959 gpstop:mas36:gpadmin-[INFO]:-Master segment instance directory=/data/pg_system/gpseg-1
20191031:18:07:20:031959 gpstop:mas36:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20191031:18:07:20:031959 gpstop:mas36:gpadmin-[INFO]:-Terminating processes for segment /data/pg_system/gpseg-1
20191031:18:07:20:031959 gpstop:mas36:gpadmin-[ERROR]:-Failed to kill processes for segment /data/pg_system/gpseg-1: ([Errno 3] No such process)
20191031:18:07:20:032046 gpstart:mas36:gpadmin-[INFO]:-Starting gpstart with args: -a -d /data/pg_system/gpseg-1
20191031:18:07:20:032046 gpstart:mas36:gpadmin-[INFO]:-Gathering information and validating the environment...
20191031:18:07:21:032046 gpstart:mas36:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.8.0 build 1'
20191031:18:07:21:032046 gpstart:mas36:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20191031:18:07:21:032046 gpstart:mas36:gpadmin-[INFO]:-Starting Master instance in admin mode
20191031:18:07:22:032046 gpstart:mas36:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20191031:18:07:22:032046 gpstart:mas36:gpadmin-[INFO]:-Obtaining Segment details from master...
20191031:18:07:23:032046 gpstart:mas36:gpadmin-[INFO]:-Setting new master era
20191031:18:07:23:032046 gpstart:mas36:gpadmin-[INFO]:-Master Started...
20191031:18:07:23:032046 gpstart:mas36:gpadmin-[INFO]:-Shutting down master 20191031:18:07:25:032046 gpstart:mas36:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait... ..... 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:-Process results... 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:----------------------------------------------------- 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:- Successful segment starts = 8 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:- Failed segment starts = 0 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:----------------------------------------------------- 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:- 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:-Successfully started 8 of 8 segment instances 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:----------------------------------------------------- 20191031:18:07:30:032046 gpstart:mas36:gpadmin-[INFO]:-Starting Master instance mas36 directory /data/pg_system/gpseg-1 20191031:18:07:31:032046 gpstart:mas36:gpadmin-[INFO]:-Command pg_ctl reports Master mas36 instance active 20191031:18:07:32:032046 gpstart:mas36:gpadmin-[INFO]:-No standby master configured. skipping... 20191031:18:07:32:032046 gpstart:mas36:gpadmin-[INFO]:-Database successfully started 20191031:18:07:32:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode 20191031:18:07:32:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Loading gp_toolkit... 
20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Scanning utility log file for any warning messages 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[WARN]:-******************************************************* 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[WARN]:-were generated during the array creation 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Please review contents of log file 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20191031.log 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[INFO]:-To determine level of criticality 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[WARN]:-******************************************************* 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Greenplum Database instance successfully created 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------------- 20191031:18:07:47:020311 gpinitsystem:mas36:gpadmin-[INFO]:-To complete the environment configuration, please 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-update gpadmin .bashrc file with the following 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-2. 
Add "export MASTER_DATA_DIRECTORY=/data/pg_system/gpseg-1" 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:- to access the Greenplum scripts for this instance: 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:- or, use -d /data/pg_system/gpseg-1 option for the Greenplum scripts 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:- Example gpstate -d /data/pg_system/gpseg-1 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20191031.log 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Review options for gpinitstandby 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------------- 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-The Master /data/pg_system/gpseg-1/pg_hba.conf post gpinitsystem 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-has been configured to allow all hosts within this new 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-new array must be explicitly added to this file 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:-located in the /opt/greenplum-db/./docs directory 20191031:18:07:48:020311 gpinitsystem:mas36:gpadmin-[INFO]:------------------------------------------------------- #如有报错根据报错信息查找原因并进行修改
11. Environment variables
[gpadmin@master ~]$ vim .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
# User specific aliases and functions
. /opt/greenplum/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/pg_system/gpseg-1
export PGPORT=5432
export PGDATABASE=mydb
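As a quick sanity check after editing .bashrc, a small helper function (our own, not part of the Greenplum tooling) can confirm that the three variables set above are present in the current shell:

```shell
#!/bin/bash
# check_gp_env: report any of the variables exported in gpadmin's .bashrc
# that are missing from the current environment. Purely illustrative;
# the variable names match the .bashrc listing above.
check_gp_env() {
    local v missing=0
    for v in MASTER_DATA_DIRECTORY PGPORT PGDATABASE; do
        if [ -z "${!v}" ]; then
            echo "missing: $v"
            missing=1
        fi
    done
    return $missing
}
```

Usage: `source ~/.bashrc && check_gp_env || echo "fix .bashrc first"`.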
12. Database tuning:
Note: some of the tuning parameters are derived from formulas rather than fixed values.
[gpadmin@master ~]$ gpconfig -c gp_fts_probe_timeout -v 60s
20191031:18:21:49:012239 gpconfig:mas36:gpadmin-[INFO]:-completed successfully
[gpadmin@master ~]$ gpconfig -c gp_resqueue_priority_cpucores_per_segment -m 96 -v 5
20191031:18:22:08:012320 gpconfig:mas36:gpadmin-[INFO]:-completed successfully
[gpadmin@master ~]$ gpconfig -c gp_vmem_protect_limit -m 365568 -v 36512
20191031:18:23:55:012437 gpconfig:mas36:gpadmin-[INFO]:-completed successfully
[gpadmin@master ~]$ gpconfig -c gp_external_max_segs -v 16
20191031:18:24:28:012534 gpconfig:mas36:gpadmin-[INFO]:-completed successfully
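As an example of the "derived from a formula" note above, the sizing rule commonly cited for `gp_vmem_protect_limit` is roughly `(RAM + swap) * vm.overcommit_ratio% * 0.9 / primary segments per host`, in MB. The sketch below only does that arithmetic; the memory figures in the usage line are placeholders, so substitute your own hosts' numbers and verify against the Greenplum documentation for your version:

```shell
#!/bin/bash
# calc_vmem_protect_limit RAM_MB SWAP_MB OVERCOMMIT_RATIO SEGS_PER_HOST
# Prints a candidate gp_vmem_protect_limit value in MB using the
# (RAM + swap) * ratio% * 0.9 / segments rule of thumb (integer math).
calc_vmem_protect_limit() {
    local ram_mb=$1 swap_mb=$2 overcommit_ratio=$3 segs_per_host=$4
    echo $(( (ram_mb + swap_mb) * overcommit_ratio / 100 * 9 / 10 / segs_per_host ))
}
```

Example (hypothetical 64 GB RAM, 8 GB swap, overcommit ratio 95, 4 primaries per host): `calc_vmem_protect_limit 65536 8192 95 4`. Current values can be inspected afterwards with `gpconfig -s gp_vmem_protect_limit`.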
13. Applying the character-set patch
# First copy the new patch file utf8_and_gb18030.so to /home/gpadmin/ on the master, back up the existing copy under /opt/greenplum/greenplum-db/lib/postgresql/, then copy the new file into that directory.
[gpadmin@master ~]$ cd /opt/greenplum/greenplum-db/lib/postgresql/
[gpadmin@master postgresql]$ cp utf8_and_gb18030.so utf8_and_gb18030.so.back
[gpadmin@master postgresql]$ cp /home/gpadmin/utf8_and_gb18030.so ./
[gpadmin@master postgresql]$ scp ./utf8_and_gb18030.so $i:/opt/greenplum/greenplum-db/lib/postgresql/   # $i stands for each segment host name
# If there are many nodes, consider pushing the file out in a batch.
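One way to do the batch copy is to loop over a host file like the host_list used elsewhere in this guide (the function name and dry-run flag below are our own; Greenplum's bundled gpscp utility is the usual alternative):

```shell
#!/bin/bash
# distribute_file HOSTS_FILE FILE DEST_DIR [DRY_RUN]
# scp FILE into DEST_DIR on every host listed (one per line) in HOSTS_FILE.
# With DRY_RUN=1 (the default) it only prints the commands it would run.
distribute_file() {
    local hosts_file=$1 file=$2 dest_dir=$3 dry_run=${4:-1} host
    while read -r host; do
        [ -z "$host" ] && continue          # skip blank lines
        if [ "$dry_run" = 1 ]; then
            echo "scp $file $host:$dest_dir/"
        else
            scp "$file" "$host:$dest_dir/"
        fi
    done < "$hosts_file"
}
```

Usage for this step might look like: `distribute_file /opt/greenplum/host_list ./utf8_and_gb18030.so /opt/greenplum/greenplum-db/lib/postgresql 0`.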
14. Installing the Oracle compatibility functions in GP:
[gpadmin@master ~]$ cd /opt/greenplum/greenplum-db-4.3.8.0/share/postgresql/contrib/
[gpadmin@master contrib]$ vim orafunc.sql
# (in the first few lines, change the schema to public and comment out the CREATE)
-- Adjust this setting to control where the objects get created.
\set ORA_SCHEMA public
--CREATE SCHEMA :ORA_SCHEMA;    # comment out this line
SET search_path = :ORA_SCHEMA;
BEGIN;

# Run the script against the GP database to create the Oracle compatibility functions:
[gpadmin@master contrib]$ psql -d csgbi -f orafunc.sql
SET
BEGIN
CREATE FUNCTION
CREATE FUNCTION
......
CREATE FUNCTION
COMMIT

# Verify that they were created:
[gpadmin@master contrib]$ psql -d mydb
psql (8.2.15)
Type "help" for help.
mydb=# select public.instr('abc','a');
 instr
-------
     1
(1 row)
mydb=# \q
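The manual vim edits above can also be scripted. This sketch (the function name is ours; the sed patterns match the two lines quoted above) sets the schema to public and comments out the CREATE SCHEMA line, keeping the original as a .bak copy:

```shell
#!/bin/bash
# fix_orafunc FILE: in orafunc.sql, set ORA_SCHEMA to public and comment
# out the CREATE SCHEMA line, leaving the original file as FILE.bak.
fix_orafunc() {
    sed -i.bak \
        -e 's/^\\set ORA_SCHEMA .*/\\set ORA_SCHEMA public/' \
        -e 's/^CREATE SCHEMA :ORA_SCHEMA;/--CREATE SCHEMA :ORA_SCHEMA;/' \
        "$1"
}
```

Usage: `fix_orafunc orafunc.sql`, then run the psql command shown above.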
III. Monitoring deployment
1. Load the environment variables
[gpadmin@master ~]$ su - root
[root@master ~]# source /opt/greenplum/greenplum-db/greenplum_path.sh

# Disable the firewall; skip this step if it was already done earlier.
[root@master ~]# gpssh -f /opt/greenplum/host_list -e 'service iptables status'
[root@master ~]# gpssh -f /opt/greenplum/host_list -e 'service iptables stop'
# The command has changed on RHEL 7:
[root@master ~]# gpssh -f /opt/greenplum/host_list -e 'systemctl stop firewalld'
2. Deploy the monitoring software
[root@master ~]# su - gpadmin
# Set the monitoring user's password and port; the gpperfmon database and its monitoring role are created by default.
[gpadmin@master ~]$ cd /opt/
[gpadmin@master opt]$ gpperfmon_install --enable --password gpmon@123 --port 5432
Restart the cluster:
[gpadmin@master opt]$ gpstop -ra
[gpadmin@master opt]$ psql gpperfmon -c 'SELECT * FROM system_now;'
Install the software:
[gpadmin@master opt]$ ./greenplum-cc-web-2.2.0-build-21-RHEL5-x86_64.bin
[gpadmin@master opt]$ source /opt/gpcc/greenplum-cc-web/gpcc_path.sh
Deploy to the cluster:
[gpadmin@master opt]$ gpccinstall -f host_list
Continue with Greenplum instance shutdown Yy|Nn (default=N):
> n
20191031:18:38:33:244222 gpstop:seg07:gpadmin-[INFO]:-User abort requested, Exiting...
Restart the cluster:
[gpadmin@master opt]$ gpstop -M fast
Load the environment variables and start the cluster:
[gpadmin@master opt]$ source /opt/gpcc/greenplum-cc-web/gpcc_path.sh
[gpadmin@master opt]$ gpstart -a
Create the monitoring instance:
[gpadmin@master opt]$ gpcmdr --setup    # with the default configuration, press Enter wherever nothing needs to be customized
An instance name is used by the Greenplum Command Center as a way to uniquely
identify a Greenplum Database that has the monitoring components installed and
configured. This name is also used to control specific instances of the
Greenplum Command Center web UI. Instance names can contain letters, digits
and underscores and are not case sensitive.
Please enter a new instance name:> gpcc    # supply your own name here

The web component of the Greenplum Command Center can connect to a monitor
database on a remote Greenplum Database.
Is the master host for the Greenplum Database remote? Yy|Nn (default=N):>

The display name is shown in the web interface and does not need to be a hostname.
What would you like to use for the display name for this instance:> gpcc

What port does the Greenplum Database use? (default=5432):>

Creating instance schema in GPDB. Please wait ...

The display name is shown in the web interface and does not need to be a hostname.
Would you like to install workload manager? Yy|Nn (default=N):>
Skipping installation of workload manager.

The Greenplum Command Center runs a small web server for the UI and web API.
This web server by default runs on port 28080, but you may specify any available port.
What port would you like the web server to use for this instance? (default=28080):>    # you can enter your own port here

The Greenplum Command Center runs a small web server for the UI and web API.
This web server by default runs on port 28080, but you may specify any available port.
What port would you like the new beta web server to use for this instance? (default=28090):>    # you can enter your own port here

Users logging in to the Command Center must provide database user credentials.
In order to protect user names and passwords, it is recommended that SSL be enabled.
Do you want to enable SSL for the Web API Yy|Nn (default=N):>
Do you want to enable ipV6 for the Web API Yy|Nn (default=N):>
Do you want to enable Cross Site Request Forgery Protection for the Web API Yy|Nn (default=N):>
Do you want to copy the instance to a standby master host Yy|Nn (default=Y):> n    # the default installs to the standby; enter n when the cluster has no standby

Done writing lighttpd configuration to /opt/209greenplum/greenplum-cc-web/./instances/gpcc1/conf/lighttpd.conf
Done writing web UI configuration to /opt/209greenplum/greenplum-cc-web/./instances/gpcc1/conf/gpperfmonui.conf
Done writing beta configuration to /opt/209greenplum/greenplum-cc-web/./instances/gpcc/beta/conf/app.conf
Done writing web UI clustrs configuration to /opt/209greenplum/greenplum-cc-web/./instances/gpcc/conf/clusters.conf

Greenplum Command Center UI configuration is now complete. If at a later date
you want to change certain parameters, you can either re-run 'gpcmdr --setup'
or edit the configuration file located at
/opt/209greenplum/greenplum-cc-web/./instances/gpcc/conf/gpperfmonui.conf.

The web UI for this instance is available at http://seg07:81/
You can now start the web UI for this instance by running: gpcmdr --start gpcc

########################### Output when a standby master exists ###########################
Do you want to copy the instance to a standby master host Yy|Nn (default=Y):>
What is the hostname of the standby master host? [smdw]: standby is
Done writing lighttpd configuration to /opt/gpcc/greenplum-cc-web/./instances/gpcc/conf/lighttpd.conf
Done writing web UI configuration to /opt/gpcc/greenplum-cc-web/./instances/gpcc/conf/gpperfmonui.conf
Done writing beta configuration to /opt/gpcc/greenplum-cc-web/./instances/gpcc/beta/conf/app.conf
Done writing web UI clustrs configuration to /opt/gpcc/greenplum-cc-web/./instances/gpcc/conf/clusters.conf
Copying instance 'gpcc' to host 'smdw'...

Greenplum Command Center UI configuration is now complete. If at a later date
you want to change certain parameters, you can either re-run 'gpcmdr --setup'
or edit the configuration file located at
/opt/gpcc/greenplum-cc-web/./instances/gpcc/conf/gpperfmonui.conf.

The web UI for this instance is available at http://mas01:28080/
You can now start the web UI for this instance by running: gpcmdr --start gpcc
Start the monitoring service:
[gpadmin@mas01 ~]$ gpcmdr --start gpcc
Starting instance gpcc...
Do you want to start the beta server? Yy|Nn (default=Y):>
Greenplum Command Center UI for instance 'gpcc' - [RUNNING on PORT: 28080]
Beta server for instance 'gpcc' - [RUNNING on PORT: 28090]
Access the monitoring page in a browser:
http://IP:28080/
Default username and password: gpmon / gpmon@123