Greenplum+Hadoop Study Notes 10: Greenplum Installation
2.1. Estimating Storage Capacity
2.1.1. Calculating Available Space
Step 1: raw storage capacity = disk size * number of disks
Step 2: with RAID10, formatted disk space = (raw storage capacity * 0.9) / 2
Step 3: usable disk space = formatted disk space * 0.7
Step 4: space available for user data
         With mirroring: (2 * user data) + user data/3 = usable disk space
         Without mirroring: user data + user data/3 = usable disk space
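The four sizing steps above can be sketched as quick shell arithmetic. The 8 × 2000GB disk layout is an assumed example, not a figure from these notes; integer GB math is used, so the 0.9 and 0.7 factors are applied as *9/10 and *7/10.

```shell
DISK_GB=2000
NUM_DISKS=8
RAW=$((DISK_GB * NUM_DISKS))          # step 1: raw storage capacity
FORMATTED=$((RAW * 9 / 10 / 2))       # step 2: RAID10 after formatting
USABLE=$((FORMATTED * 7 / 10))        # step 3: usable disk space
# step 4 (with mirroring): 2*U + U/3 = usable  =>  U = usable * 3 / 7
USER_DATA=$((USABLE * 3 / 7))
echo "usable=${USABLE}GB user_data=${USER_DATA}GB"
```

With these assumed inputs the script reports 5040GB usable and 2160GB of user data capacity.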
2.1.2. Calculating User Data Size
On average, actual disk space consumed = user data * 1.4
Page overhead: 20 bytes per 32KB page
Row overhead: 24 bytes per row (4 bytes for append-only tables)
Index overhead:
         B-tree: unique values * (data type size + 24 bytes)
         Bitmap: (unique values * rows * 1 bit * compression ratio / 8) + (unique values * 32)
Space requirements for metadata and logs:
System metadata: 20MB
Write-ahead log (WAL): the WAL is split into 64MB files, and the number of WAL files is at most
2*checkpoint_segments+1. checkpoint_segments defaults to 8, so each instance needs up to 1088MB of WAL space.
Greenplum database log files: plan for log rotation
Performance monitoring data
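The 1088MB WAL figure quoted above follows directly from the rule and can be spot-checked:

```shell
# (2 * checkpoint_segments + 1) WAL files of 64MB each, default 8 segments.
CHECKPOINT_SEGMENTS=8
WAL_FILES=$((2 * CHECKPOINT_SEGMENTS + 1))
WAL_MB=$((WAL_FILES * 64))
echo "up to ${WAL_FILES} WAL files, ${WAL_MB}MB per instance"
```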
2.2. Lab Environment
2.2.1. Hardware Environment
VMware virtual machine software 10.0
Three Linux virtual machines: Red Hat Enterprise Linux Server release 5.4
Database: greenplum-db-4.2.8.0-build-1-RHEL5-x86_64.zip
2.2.2. NIC Setup
2.2.3. Virtual Machine Configuration
2.2.3.1. Basic Information
192.168.80.200
192.168.80.201
192.168.80.202
2.2.3.2. Setting the IP Address from the Command Line
vi /etc/sysconfig/network-scripts/ifcfg-eth0
Fill in the IP address, netmask, gateway, DNS, etc.:
DEVICE=eth0
HWADDR=00:0c:29:b7:7d:69
ONBOOT=yes
BOOTPROTO=static
DNS1=192.168.80.200
IPV6INIT=no
USERCTL=no
IPADDR=192.168.80.200
NETMASK=255.255.255.0
GATEWAY=192.168.80.1
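As a quick sanity check on a static config like the one above, you can verify that IPADDR and GATEWAY fall in the same subnet under NETMASK using shell bitwise arithmetic (`ip_to_int` is a hypothetical helper for this sketch, not part of the install procedure):

```shell
# Convert a dotted-quad address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
IP=$(ip_to_int 192.168.80.200)
GW=$(ip_to_int 192.168.80.1)
MASK=$(ip_to_int 255.255.255.0)
# Both addresses masked to the network part must match.
[ $((IP & MASK)) -eq $((GW & MASK)) ] && echo "same subnet"
```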
Disable the firewall:
[ ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination
1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0            0.0.0.0/0
Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination
1    RH-Firewall-1-INPUT  all  --  0.0.0.0/0            0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
Chain RH-Firewall-1-INPUT (2 references)
num  target     prot opt source               destination
1    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
2    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           icmp type 255
3    ACCEPT     esp  --  0.0.0.0/0            0.0.0.0/0
4    ACCEPT     ah   --  0.0.0.0/0            0.0.0.0/0
5    ACCEPT     udp  --  0.0.0.0/0            224.0.0.251         udp dpt:5353
6    ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:631
7    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:631
8    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
9    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:21
10   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22
11   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:23
12   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:80
13   ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:443
14   REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited
[ ~]# service iptables stop
Flushing firewall rules:                                   [  OK  ]
Setting chains to policy ACCEPT: filter                    [  OK  ]
Unloading iptables modules:                                [  OK  ]
Modify the hostname and the IP mapping:
[ ~]# vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master
[ ~]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.80.200 master
2.2.3.3. System Parameter Configuration
        Configure shared memory, networking, user limits, etc. by modifying or adding to /etc/sysctl.conf:
# Kernel sysctl configuration file for Red Hat Linux
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
#for greenplum begin
xfs_mount_options = rw,noatime,inode64,allocsize=16m
# Controls the maximum shared segment size, in bytes
kernel.shmmax =
# Controls the maximum number of shared memory segments
kernel.shmmni = 4096
# Controls the maximum number of shared memory segments, in pages
kernel.shmall =
kernel.sem = 250
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536
# Controls the default minmimum size of a mesage queue
kernel.msgmni = 2048
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range =
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
#for greenplum end
2.2.3.4. System Settings (all nodes)
Configure the /etc/security/limits.conf file:
# for greenplum begin
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
# for greenplum end
Set the disk I/O scheduler policy
Note: the deadline scheduler balances I/O scheduling against process scheduling and avoids making waiting processes wait repeatedly.
[ ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
[ ~]# cat /sys/block/fd0/queue/scheduler
noop anticipatory deadline [cfq]
[ ~]# dmesg | grep -i scheduler
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
[ ~]# echo deadline > /sys/block/sda/queue/scheduler
[ ~]# echo deadline > /sys/block/fd0/queue/scheduler
[ ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq
[ ~]# cat /sys/block/fd0/queue/scheduler
noop anticipatory [deadline] cfq
Set the read-ahead value to 16384.
A data warehouse mainly stores historical data and sees heavy data operations (inserts, deletes, updates, and queries); the larger the read-ahead block, the higher the read performance. 16384 is the minimum value required by Greenplum.
[ ~]# /sbin/blockdev --setra 16384 /dev/sda
[ ~]# /sbin/blockdev --setra 16384 /dev/sda1
[ ~]# /sbin/blockdev --setra 16384 /dev/sda2
[ ~]# /sbin/blockdev --setra 16384 /dev/sda3
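The value passed to `blockdev --setra` is counted in 512-byte sectors, so the 16384 used above corresponds to an 8MB read-ahead window:

```shell
# blockdev --setra takes 512-byte sectors; 16384 sectors = 8MB of read-ahead.
RA_SECTORS=16384
RA_BYTES=$((RA_SECTORS * 512))
RA_MB=$((RA_BYTES / 1024 / 1024))
echo "read-ahead window: ${RA_MB}MB"
```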
[ ~]# fdisk -l
Disk /dev/sda: 21.4 GB,
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         653     5245191   83  Linux
/dev/sda2             654         914                82  Linux swap / Solaris
/dev/sda3             915        1045                83  Linux
Edit /etc/hosts and add the following:
127.0.0.1               localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.80.200 master
192.168.80.201 slave1
192.168.80.202 slave2
2.2.4. Install the Greenplum Software on the Master Node
2.2.4.1. Obtain the Media
Download from: http://gopivotal.com/products/pivotal-greenplum-database
2.2.4.2. Unzip the Archive
         # unzip greenplum-db-4.2.8.0-build-1-RHEL5-x86_64.zip
2.2.4.3. Install the Software
         # /bin/bash greenplum-db-4.2.8.0-build-1-RHEL5-x86_64.bin
I HAVE READ AND AGREE TO THE TERMS OF THE ABOVE PIVOTAL SOFTWARE
LICENSE AGREEMENT.
********************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
********************************************************************************
********************************************************************************
Provide the installation path for Greenplum Database or press ENTER to
accept the default installation path: /usr/local/greenplum-db-4.2.8.0
********************************************************************************
********************************************************************************
Install Greenplum Database into </usr/local/greenplum-db-4.2.8.0>? [yes|no]
********************************************************************************
********************************************************************************
/usr/local/greenplum-db-4.2.8.0 does not exist.
Create /usr/local/greenplum-db-4.2.8.0 ? [yes|no]
(Selecting no will exit the installer)
********************************************************************************
********************************************************************************
[Optional] Provide the path to a previous installation of Greenplum Database,
or press ENTER to skip this step. e.g. /usr/local/greenplum-db-4.1.1.3
This installation step will migrate any Greenplum Database extensions from the
provided path to the version currently being installed. This step is optional
and can be run later with:
gppkg --migrate <path_to_old_gphome> /usr/local/greenplum-db-4.2.8.0
********************************************************************************
Extracting product to /usr/local/greenplum-db-4.2.8.0
Skipping migration of Greenplum Database extensions...
********************************************************************************
Installation complete.
Greenplum Database is installed in /usr/local/greenplum-db-4.2.8.0
Pivotal Greenplum documentation is available
for download at http://docs.gopivotal.com/gpdb
********************************************************************************
2.2.4.4. Installation Result
[ greenplum-db-4.2.8.0]# pwd
/usr/local/greenplum-db-4.2.8.0
[ greenplum-db-4.2.8.0]# ls -ltr
drwxr-xr-x 3  510  510   4096 Jun 18  2014 share
drwxr-xr-x 2  510  510   4096 Jun 18  2014 demo
drwxr-xr-x 5  510  510   4096 Jun 18  2014 docs
drwxr-xr-x 7  510  510   4096 Jun 18  2014 lib
drwxr-xr-x 6  510  510   4096 Jun 18  2014 include
drwxr-xr-x 3  510  510   4096 Jun 18  2014 ext
drwxr-xr-x 2  510  510   4096 Jun 18  2014 etc
drwxr-xr-x 2  510  510   4096 Jun 18  2014 sbin
-rw-rw-r-- 1  510  510 193083 Jun 18  2014 LICENSE.thirdparty
-rw-rw-r-- 1  510  510  43025 Jun 18  2014 GPDB-LICENSE.txt
drwxr-xr-x 3  510  510   4096 Jun 18  2014 bin
-rw-r--r-- 1 root root    676 Mar 20 19:41 greenplum_path.sh
2.2.4.5. File Descriptions
greenplum_path.sh: Greenplum database environment variable file
GPDB-LICENSE.txt: Greenplum license agreement
bin: management utilities, client programs, and server programs
demo: demo programs
docs: documentation
etc: sample OpenSSL configuration
ext: bundled programs used by some Greenplum utilities
include: C header files
lib: library files
sbin: supporting/internal scripts and programs
share: shared files
2.2.4.6. Initialize the Greenplum Configuration on All Hosts
2.2.4.6.1. Load the environment variables:
# source /usr/local/greenplum-db/greenplum_path.sh
GPHOME=/usr/local/greenplum-db-4.2.8.0

# Replace with symlink path if it is present and correct
if [ -h ${GPHOME}/../greenplum-db ]; then
    GPHOME_BY_SYMLINK=`(cd ${GPHOME}/../greenplum-db/ && pwd -P)`
    if [ x"${GPHOME_BY_SYMLINK}" = x"${GPHOME}" ]; then
        GPHOME=`(cd ${GPHOME}/../greenplum-db/ && pwd -L)`/.
    fi
    unset GPHOME_BY_SYMLINK
fi
PATH=$GPHOME/bin:$GPHOME/ext/python/bin:$PATH
LD_LIBRARY_PATH=$GPHOME/lib:$GPHOME/ext/python/lib:$LD_LIBRARY_PATH
PYTHONPATH=$GPHOME/lib/python
PYTHONHOME=$GPHOME/ext/python
OPENSSL_CONF=$GPHOME/etc/openssl.cnf
export GPHOME
export PATH
export LD_LIBRARY_PATH
export PYTHONPATH
export PYTHONHOME
export OPENSSL_CONF
2.2.4.6.2. Create the Host File all_hosts
File contents:
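The transcript does not show the file itself; a plausible reconstruction (assumed, but consistent with the hosts defined in /etc/hosts and with the 21-byte all_hosts that appears in a later directory listing) is:

```shell
# Assumed contents of all_hosts: one hostname per line for every host in the
# cluster, matching /etc/hosts.
cat > all_hosts <<'EOF'
master
slave1
slave2
EOF
wc -c < all_hosts
```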
[ greenplum-db-4.2.8.0]# env
HOSTNAME=master
TERM=vt100
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=192.168.80.120 49852 22
GPHOME=/usr/local/greenplum-db/.
OLDPWD=/root/installer
SSH_TTY=/dev/pts/2
LD_LIBRARY_PATH=/usr/local/greenplum-db/./lib:/usr/local/greenplum-db/./ext/python/lib:
LS_COLORS=no=00:fi=00:di=01;34:ln=01;36:pi=40;33:so=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01;32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01;31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01;35:*.png=01;35:*.tif=01;35:
MAIL=/var/spool/mail/root
PATH=/usr/local/greenplum-db/./bin:/usr/local/greenplum-db/./ext/python/bin:/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
INPUTRC=/etc/inputrc
PWD=/usr/local/greenplum-db-4.2.8.0
LANG=en_US.UTF-8
PYTHONHOME=/usr/local/greenplum-db/./ext/python
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
HOME=/root
OPENSSL_CONF=/usr/local/greenplum-db/./etc/openssl.cnf
PYTHONPATH=/usr/local/greenplum-db/./lib/python
LOGNAME=root
CVS_RSH=ssh
SSH_CONNECTION=192.168.80.120 49852 192.168.80.200 22
LESSOPEN=|/usr/bin/lesspipe.sh %s
G_BROKEN_FILENAMES=1
_=/bin/env
[ greenplum-db-4.2.8.0]# ssh slave1
The authenticity of host 'slave1 (192.168.80.201)' can't be established.
RSA key fingerprint is 04:6d:9c:fc:68:e3:9a:24:dc:11:ff:25:14:9e:d1:5b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.80.201' (RSA) to the list of known hosts.
's password:
Last login: Fri Mar 20 17:49:51 2015 from 192.168.80.120
2.2.4.7. Run the gpseginstall Utility
# gpseginstall -f all_hosts -u gpadmin -p gpadmin
[ greenplum-db]# gpseginstall -f all_hosts -u gpadmin -p gpadmin
:58:32:003371 gpseginstall:master:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /usr/local/greenplum-db-4.2.8.0
binary_dir_location /usr/local
binary_dir_name greenplum-db-4.2.8.0
:58:32:003371 gpseginstall:master:root-[INFO]:-check cluster password access
  *** Enter password for slave1:
:58:38:003371 gpseginstall:master:root-[INFO]:-de-duplicate hostnames
:58:38:003371 gpseginstall:master:root-[INFO]:-master hostname: master
:58:38:003371 gpseginstall:master:root-[INFO]:-check for user gpadmin on cluster
:58:38:003371 gpseginstall:master:root-[INFO]:-add user gpadmin on master
:58:41:003371 gpseginstall:master:root-[INFO]:-add user gpadmin on cluster
:58:42:003371 gpseginstall:master:root-[INFO]:-chown -R gpadmin:gpadmin /usr/local/greenplum-db
:58:43:003371 gpseginstall:master:root-[INFO]:-chown -R gpadmin:gpadmin /usr/local/greenplum-db-4.2.8.0
:58:43:003371 gpseginstall:master:root-[INFO]:-rm -f /usr/local/greenplum-db-4.2.8.0.tar; rm -f /usr/local/greenplum-db-4.2.8.0.tar.gz
:58:43:003371 gpseginstall:master:root-[INFO]:-cd /usr/local; tar cf greenplum-db-4.2.8.0.tar greenplum-db-4.2.8.0
:58:50:003371 gpseginstall:master:root-[INFO]:-gzip /usr/local/greenplum-db-4.2.8.0.tar
:59:04:003371 gpseginstall:master:root-[INFO]:-remote command: mkdir -p /usr/local
:59:04:003371 gpseginstall:master:root-[INFO]:-remote command: rm -rf /usr/local/greenplum-db-4.2.8.0
:59:04:003371 gpseginstall:master:root-[INFO]:-scp software to remote location
:59:34:003371 gpseginstall:master:root-[INFO]:-remote command: gzip -f -d /usr/local/greenplum-db-4.2.8.0.tar.gz
:59:39:003371 gpseginstall:master:root-[INFO]:-md5 check on remote location
:59:41:003371 gpseginstall:master:root-[INFO]:-remote command: cd /usr/local; tar xf greenplum-db-4.2.8.0.tar
:59:59:003371 gpseginstall:master:root-[INFO]:-remote command: rm -f /usr/local/greenplum-db-4.2.8.0.tar
:00:01:003371 gpseginstall:master:root-[INFO]:-remote command: cd /usr/local; rm -f greenplum-db; ln -fs greenplum-db-4.2.8.0 greenplum-db
:00:01:003371 gpseginstall:master:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /usr/local/greenplum-db
:00:02:003371 gpseginstall:master:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /usr/local/greenplum-db-4.2.8.0
:00:02:003371 gpseginstall:master:root-[INFO]:-rm -f /usr/local/greenplum-db-4.2.8.0.tar.gz
:00:02:003371 gpseginstall:master:root-[INFO]:-Changing system passwords ...
:00:05:003371 gpseginstall:master:root-[INFO]:-exchange ssh keys for user root
:00:16:003371 gpseginstall:master:root-[INFO]:-exchange ssh keys for user gpadmin
:00:18:003371 gpseginstall:master:root-[INFO]:-/usr/local/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
:00:20:003371 gpseginstall:master:root-[INFO]:-remote command: . /usr/local/greenplum-db/./greenplum_path.sh; /usr/local/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
:00:21:003371 gpseginstall:master:root-[INFO]:-version string on master: gpssh version 4.2.8.0 build 1
:00:21:003371 gpseginstall:master:root-[INFO]:-remote command: . /usr/local/greenplum-db/./greenplum_path.sh; /usr/local/greenplum-db/./bin/gpssh --version
:00:21:003371 gpseginstall:master:root-[INFO]:-remote command: . /usr/local/greenplum-db-4.2.8.0/greenplum_path.sh; /usr/local/greenplum-db-4.2.8.0/bin/gpssh --version
:00:27:003371 gpseginstall:master:root-[INFO]:-SUCCESS -- Requested commands completed
2.2.4.8. Configure Greenplum on All Hosts
2.2.4.8.1. Switch to the gpadmin user and load the environment variables
         $ su - gpadmin
         $ source /usr/local/greenplum-db/greenplum_path.sh
2.2.4.8.2. Use the gpssh utility to test passwordless login to all hosts
         $ gpssh -f host_list -e ls -l $GPHOME
[ greenplum-db]$ gpssh -f all_hosts -e ls -l $GPHOME
[master] ls -l /usr/local/greenplum-db/.
[master] total 336
[master] -rw-r--r-- 1 gpadmin gpadmin     21 Mar 20 19:48 all_hosts
[master] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 bin
[master] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 demo
[master] drwxr-xr-x 5 gpadmin gpadmin   4096 Jun 18  2014 docs
[master] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 etc
[master] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 ext
[master] -rw-rw-r-- 1 gpadmin gpadmin  43025 Jun 18  2014 GPDB-LICENSE.txt
[master] -rw-r--r-- 1 gpadmin gpadmin    676 Mar 20 19:41 greenplum_path.sh
[master] drwxr-xr-x 6 gpadmin gpadmin   4096 Jun 18  2014 include
[master] drwxr-xr-x 7 gpadmin gpadmin   4096 Jun 18  2014 lib
[master] -rw-rw-r-- 1 gpadmin gpadmin 193083 Jun 18  2014 LICENSE.thirdparty
[master] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 sbin
[master] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 share
[slave1] ls -l /usr/local/greenplum-db/.
[slave1] total 336
[slave1] -rw-r--r-- 1 gpadmin gpadmin     21 Mar 20 19:48 all_hosts
[slave1] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 bin
[slave1] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 demo
[slave1] drwxr-xr-x 5 gpadmin gpadmin   4096 Jun 18  2014 docs
[slave1] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 etc
[slave1] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 ext
[slave1] -rw-rw-r-- 1 gpadmin gpadmin  43025 Jun 18  2014 GPDB-LICENSE.txt
[slave1] -rw-r--r-- 1 gpadmin gpadmin    676 Mar 20 19:41 greenplum_path.sh
[slave1] drwxr-xr-x 6 gpadmin gpadmin   4096 Jun 18  2014 include
[slave1] drwxr-xr-x 7 gpadmin gpadmin   4096 Jun 18  2014 lib
[slave1] -rw-rw-r-- 1 gpadmin gpadmin 193083 Jun 18  2014 LICENSE.thirdparty
[slave1] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 sbin
[slave1] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 share
[slave2] ls -l /usr/local/greenplum-db/.
[slave2] total 336
[slave2] -rw-r--r-- 1 gpadmin gpadmin     21 Mar 20 19:48 all_hosts
[slave2] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 bin
[slave2] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 demo
[slave2] drwxr-xr-x 5 gpadmin gpadmin   4096 Jun 18  2014 docs
[slave2] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 etc
[slave2] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 ext
[slave2] -rw-rw-r-- 1 gpadmin gpadmin  43025 Jun 18  2014 GPDB-LICENSE.txt
[slave2] -rw-r--r-- 1 gpadmin gpadmin    676 Mar 20 19:41 greenplum_path.sh
[slave2] drwxr-xr-x 6 gpadmin gpadmin   4096 Jun 18  2014 include
[slave2] drwxr-xr-x 7 gpadmin gpadmin   4096 Jun 18  2014 lib
[slave2] -rw-rw-r-- 1 gpadmin gpadmin 193083 Jun 18  2014 LICENSE.thirdparty
[slave2] drwxr-xr-x 2 gpadmin gpadmin   4096 Jun 18  2014 sbin
[slave2] drwxr-xr-x 3 gpadmin gpadmin   4096 Jun 18  2014 share
2.2.4.8.3. Append ". /usr/local/greenplum-db/greenplum_path.sh" to the end of .bashrc and copy it to the segment servers
[ ~]$ cat .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
. /usr/local/greenplum-db/greenplum_path.sh
# User specific aliases and functions
[ ~]$ su - root
[ ~]# su - gpadmin
HOSTNAME=master
SHELL=/bin/bash
TERM=vt100
HISTSIZE=1000
GPHOME=/usr/local/greenplum-db/.
USER=gpadmin
LD_LIBRARY_PATH=/usr/local/greenplum-db/./lib:/usr/local/greenplum-db/./ext/python/lib:
LS_COLORS=no=00:fi=00:di=01;34:ln=01;36:pi=40;33:so=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01;32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01;31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01;35:*.png=01;35:*.tif=01;35:
MAIL=/var/spool/mail/gpadmin
PATH=/usr/local/greenplum-db/./bin:/usr/local/greenplum-db/./ext/python/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/gpadmin/bin
INPUTRC=/etc/inputrc
PWD=/home/gpadmin
LANG=en_US.UTF-8
PYTHONHOME=/usr/local/greenplum-db/./ext/python
SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
HOME=/home/gpadmin
OPENSSL_CONF=/usr/local/greenplum-db/./etc/openssl.cnf
PYTHONPATH=/usr/local/greenplum-db/./lib/python
LOGNAME=gpadmin
CVS_RSH=ssh
LESSOPEN=|/usr/bin/lesspipe.sh %s
G_BROKEN_FILENAMES=1
_=/bin/env
[ ~]$ scp .bashrc slave1:`pwd`
.bashrc                                             100%  167     0.2KB/s   00:00
[ ~]$ scp .bashrc slave2:`pwd`
.bashrc                                             100%  167     0.2KB/s   00:00
2.2.5. Verify the Operating System Settings
2.2.5.1. Create the Storage Areas
a) Create the master data storage area
         # mkdir -p /data/master
b) Change ownership of the directory
         # chown gpadmin /data/master
c) Create a file seg_hosts listing all segment hosts
[ ~]# mkdir /tmp/greenplum
[ ~]# cd /tmp/greenplum/
[ greenplum]# vi seg_hosts
"seg_hosts" [New] 2L, 14C written
[ greenplum]# cat seg_hosts
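The cat output was lost from this transcript; contents consistent with vi's "2L, 14C" message (assumed: the two segment hosts only, with the master excluded) would be:

```shell
# Assumed contents of seg_hosts: segment hosts only.
cat > seg_hosts <<'EOF'
slave1
slave2
EOF
wc -c < seg_hosts
```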
     d) Use the gpssh utility to create the primary and mirror data directories on all segment hosts
[ greenplum]# . /usr/local/greenplum-db/greenplum_path.sh
[ greenplum]# gpssh -f seg_hosts -e 'mkdir -p /data/primary'
[slave1] mkdir -p /data/primary
[slave2] mkdir -p /data/primary
[ greenplum]# gpssh -f seg_hosts -e 'mkdir -p /data/mirror'
[slave1] mkdir -p /data/mirror
[slave2] mkdir -p /data/mirror
[ greenplum]# gpssh -f seg_hosts -e 'chown gpadmin /data/primary'
[slave1] chown gpadmin /data/primary
[slave2] chown gpadmin /data/primary
[ greenplum]# gpssh -f seg_hosts -e 'chown gpadmin /data/mirror'
[slave1] chown gpadmin /data/mirror
[slave2] chown gpadmin /data/mirror
2.2.5.2. Synchronize the System Time
     a) On the master host, edit /etc/ntp.conf to set the following:
[ greenplum]# cat /etc/ntp.conf
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1
# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.rhel.pool.ntp.org
server 1.rhel.pool.ntp.org
server 2.rhel.pool.ntp.org
#broadcast 192.168.1.255 key 42         # broadcast server
#broadcastclient&&&&&&&&&&&&&&&&&&&&&&& # broadcast client
#broadcast 224.0.1.1 key 42&&&&&&&&&&&& # multicast server
#multicastclient 224.0.1.1&&&&&&&&&&&&& # multicast client
#manycastserver 239.255.254.254&&&&&&&& # manycast server
#manycastclient 239.255.254.254 key 42& # manycast client
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10
# Drift file.& Put this in a directory which the daemon can write to.
# No symbolic links allowed, either, since the daemon updates the file
# by creating a temporary in the same directory and then rename()'ing
# it to the file.
driftfile /var/lib/ntp/drift
# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys
# Specify the key identifiers which are trusted.
#trustedkey 4 8 42
# Specify the key identifier to use with the ntpdc utility.
#requestkey 8
# Specify the key identifier to use with the ntpq utility.
#controlkey 8
     b) On the segment hosts, edit /etc/ntp.conf:
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
#server 127.127.1.0     # local clock
server master           # sync from the master host
fudge   127.127.1.0 stratum 10
     c) On the master host, synchronize the system clocks via the NTP daemon:
[ greenplum]# gpssh -f all_hosts -v -e 'ntpd'
[Reset ...]
[INFO] login master
[INFO] login slave1
[INFO] login slave2
[master] ntpd
[slave1] ntpd
[slave2] ntpd
[INFO] completed successfully
[Cleanup...]
2.2.5.3. Verify the Operating System Settings
[ greenplum]# gpcheck -f all_hosts -m master
:20:10:004588 gpcheck:master:root-[INFO]:-dedupe hostnames
:20:11:004588 gpcheck:master:root-[INFO]:-Detected platform: Generic Linux Cluster
:20:11:004588 gpcheck:master:root-[INFO]:-generate data on servers
:20:12:004588 gpcheck:master:root-[INFO]:-copy data files from servers
:20:17:004588 gpcheck:master:root-[INFO]:-delete remote tmp files
:20:17:004588 gpcheck:master:root-[INFO]:-Using gpcheck config file: /usr/local/greenplum-db/./etc/gpcheck.cnf
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (fd0) IO scheduler 'cfq' does not match expected value 'deadline'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): potential NTPD issue.  gpcheck end time (Fri Mar 20 21:20:11 2015) time on machine (Fri Mar 20 21:20:17 2015)
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(master): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (fd0) IO scheduler 'cfq' does not match expected value 'deadline'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
:20:17:004588 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): potential NTPD issue.  gpcheck end time (Fri Mar 20 21:20:11 2015) time on machine (Fri Mar 20 21:20:18 2015)
:20:17:004588 gpcheck:master:root-[INFO]:-gpcheck completing...
[ greenplum]# gpcheck -f all_hosts -m master
:23:48:004727 gpcheck:master:root-[INFO]:-dedupe hostnames
:23:49:004727 gpcheck:master:root-[INFO]:-Detected platform: Generic Linux Cluster
:23:49:004727 gpcheck:master:root-[INFO]:-generate data on servers
:23:49:004727 gpcheck:master:root-[INFO]:-copy data files from servers
:23:51:004727 gpcheck:master:root-[INFO]:-delete remote tmp files
:23:52:004727 gpcheck:master:root-[INFO]:-Using gpcheck config file: /usr/local/greenplum-db/./etc/gpcheck.cnf
:23:52:004727 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
:23:52:004727 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): potential NTPD issue.  gpcheck end time (Fri Mar 20 21:23:49 2015) time on machine (Fri Mar 20 21:23:55 2015)
:23:52:004727 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(master): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
:23:52:004727 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
:23:52:004727 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): potential NTPD issue.  gpcheck end time (Fri Mar 20 21:23:49 2015) time on machine (Fri Mar 20 21:23:55 2015)
:23:52:004727 gpcheck:master:root-[INFO]:-gpcheck completing...
[ greenplum]# echo deadline > /sys/block/sr0/queue/scheduler
[ greenplum]# gpcheck -f all_hosts -m master
:27:41:004866 gpcheck:master:root-[INFO]:-dedupe hostnames
:27:44:004866 gpcheck:master:root-[INFO]:-Detected platform: Generic Linux Cluster
:27:44:004866 gpcheck:master:root-[INFO]:-generate data on servers
:27:45:004866 gpcheck:master:root-[INFO]:-copy data files from servers
:27:50:004866 gpcheck:master:root-[INFO]:-delete remote tmp files
:27:50:004866 gpcheck:master:root-[INFO]:-Using gpcheck config file: /usr/local/greenplum-db/./etc/gpcheck.cnf
:27:50:004866 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave1): potential NTPD issue.  gpcheck end time (Fri Mar 20 21:27:44 2015) time on machine (Fri Mar 20 21:27:50 2015)
:27:50:004866 gpcheck:master:root-[ERROR]:-GPCHECK_ERROR host(slave2): potential NTPD issue.  gpcheck end time (Fri Mar 20 21:27:44 2015) time on machine (Fri Mar 20 21:27:50 2015)
:27:50:004866 gpcheck:master:root-[INFO]:-gpcheck completing...
[ greenplum]# gpcheck -f all_hosts -m master
:09:12:015514 gpcheck:master:root-[INFO]:-dedupe hostnames
:09:12:015514 gpcheck:master:root-[INFO]:-Detected platform: Generic Linux Cluster
:09:12:015514 gpcheck:master:root-[INFO]:-generate data on servers
:09:13:015514 gpcheck:master:root-[INFO]:-copy data files from servers
:09:13:015514 gpcheck:master:root-[INFO]:-delete remote tmp files
:09:13:015514 gpcheck:master:root-[INFO]:-Using gpcheck config file: /usr/local/greenplum-db/./etc/gpcheck.cnf
:09:13:015514 gpcheck:master:root-[INFO]:-GPCHECK_NORMAL
:09:13:015514 gpcheck:master:root-[INFO]:-gpcheck completing...
2.2.6. Initialize the Greenplum Database System
2.2.6.1. Create the Greenplum Database Configuration File
a) Log in as the gpadmin user
         # su - gpadmin
b) Copy the gpinitsystem_config template
[ ~]$ cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/
[ ~]$ chmod 775 /home/gpadmin/gpinitsystem_config
c) Set all required and any optional parameters
[ ~]$ vi gpinitsystem_config
# FILE NAME: gpinitsystem_config
# Configuration file needed by the gpinitsystem
################################################
#### REQUIRED PARAMETERS
################################################
#### Name of this Greenplum system enclosed in quotes.
ARRAY_NAME="EMC Greenplum DW"
#### Naming convention for utility-generated data directories.
SEG_PREFIX=gpseg
#### Base number by which primary segment port numbers
#### are calculated.
PORT_BASE=40000
#### File system location(s) where primary segment data directories
#### will be created. The number of locations in the list dictate
#### the number of primary segments that will get created per
#### physical host (if multiple addresses for a host are listed in
#### the hostfile, the number of segments will be spread evenly across
#### the specified interface addresses).
declare -a DATA_DIRECTORY=(/data/primary )
#### OS-configured hostname or IP address of the master host.
MASTER_HOSTNAME=master
#### File system location where the master data directory
#### will be created.
MASTER_DIRECTORY=/data/master
#### Port number for the master instance.
MASTER_PORT=5432
#### Shell utility used to connect to remote hosts.
TRUSTED_SHELL=ssh
#### Maximum log file segments between automatic WAL checkpoints.
CHECK_POINT_SEGMENTS=8
#### Default server-side character set encoding.
ENCODING=UNICODE
################################################
#### OPTIONAL MIRROR PARAMETERS
################################################
#### Base number by which mirror segment port numbers
#### are calculated.
MIRROR_PORT_BASE=50000
#### Base number by which primary file replication port
#### numbers are calculated.
REPLICATION_PORT_BASE=41000
#### Base number by which mirror file replication port
#### numbers are calculated.
MIRROR_REPLICATION_PORT_BASE=51000
#### File system location(s) where mirror segment data directories
#### will be created. The number of mirror locations must equal the
#### number of primary locations as specified in the
#### DATA_DIRECTORY parameter.
declare -a MIRROR_DATA_DIRECTORY=(/data/mirror)
################################################
#### OTHER OPTIONAL PARAMETERS
################################################
#### Create a database of this name after initialization.
#DATABASE_NAME=name_of_database
#### Specify the location of the host address file here instead of
#### with the -h option of gpinitsystem.
#MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem
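Two numbers in this file are worth sanity-checking before running gpinitsystem. First, segment ports: in the run below, each primary gets PORT_BASE plus its index on the host (gpseg0 and gpseg1 both land on port 40000 because each host carries a single primary) — this is my reading of the observed output, not a documented formula. Second, WAL space: with CHECK_POINT_SEGMENTS=8, each instance can hold up to 2*checkpoint_segments+1 WAL files of 64MB each. A small sketch of both calculations:

```shell
#!/bin/sh
# Sketch: derive per-host segment ports and worst-case WAL space
# from the gpinitsystem_config values used above.

PORT_BASE=40000
MIRROR_PORT_BASE=50000
CHECK_POINT_SEGMENTS=8

# Primaries per host = number of entries in DATA_DIRECTORY.
# Here DATA_DIRECTORY=(/data/primary), i.e. one primary per host.
PRIMARIES_PER_HOST=1

i=0
while [ "$i" -lt "$PRIMARIES_PER_HOST" ]; do
    echo "primary $i port: $((PORT_BASE + i)), mirror $i port: $((MIRROR_PORT_BASE + i))"
    i=$((i + 1))
done

# Worst-case WAL usage per instance: (2*checkpoint_segments + 1) files of 64MB.
WAL_FILES=$((2 * CHECK_POINT_SEGMENTS + 1))
WAL_MB=$((WAL_FILES * 64))
echo "max WAL files: $WAL_FILES, max WAL space: ${WAL_MB}MB"
```

With the defaults above this reports 17 WAL files, i.e. 1088MB per instance, matching the estimate in the capacity-planning section.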
2.2.6.2. Initialize the database
a) Run the initialization utility
[ ~]$ cp /tmp/greenplum/seg_hosts .
[ ~]$ ls
gpinitsystem_config  seg_hosts
[ ~]$ gpinitsystem -c gpinitsystem_config -h seg_hosts
:51:58:005103 gpinitsystem:master:gpadmin-[INFO]:-Checking configuration parameters, please wait...
:51:58:005103 gpinitsystem:master:gpadmin-[INFO]:-Reading Greenplum configuration file gpinitsystem_config
:51:58:005103 gpinitsystem:master:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
:51:59:005103 gpinitsystem:master:gpadmin-[INFO]:-Locale set to en_US.utf8
:51:59:005103 gpinitsystem:master:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
:51:59:005103 gpinitsystem:master:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
:51:59:005103 gpinitsystem:master:gpadmin-[INFO]:-Checking configuration parameters, Completed
:51:59:005103 gpinitsystem:master:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
:52:00:005103 gpinitsystem:master:gpadmin-[INFO]:-Configuring build for standard array
:52:00:005103 gpinitsystem:master:gpadmin-[INFO]:-Commencing multi-home checks, Completed
:52:00:005103 gpinitsystem:master:gpadmin-[INFO]:-Building primary segment instance array, please wait...
:52:01:005103 gpinitsystem:master:gpadmin-[INFO]:-Building group mirror array type , please wait...
:52:02:005103 gpinitsystem:master:gpadmin-[INFO]:-Checking Master host
:52:02:005103 gpinitsystem:master:gpadmin-[INFO]:-Checking new segment hosts, please wait...
:52:08:005103 gpinitsystem:master:gpadmin-[INFO]:-Checking new segment hosts, Completed
:52:08:005103 gpinitsystem:master:gpadmin-[INFO]:-Greenplum Database Creation Parameters
:52:08:005103 gpinitsystem:master:gpadmin-[INFO]:---------------------------------------
:52:08:005103 gpinitsystem:master:gpadmin-[INFO]:-Master Configuration
:52:08:005103 gpinitsystem:master:gpadmin-[INFO]:---------------------------------------
:52:08:005103 gpinitsystem:master:gpadmin-[INFO]:-Master instance name       = EMC Greenplum DW
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master hostname            = master
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master port                = 5432
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master instance dir        = /data/master/gpseg-1
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master LOCALE              = en_US.utf8
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Greenplum segment prefix   = gpseg
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master Database            =
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master connections         = 250
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master buffers             = 128000kB
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Segment connections        = 750
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Segment buffers            = 128000kB
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Checkpoint segments        = 8
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Encoding                   = UNICODE
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Postgres param file        = Off
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Initdb to be used          = /usr/local/greenplum-db/./bin/initdb
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-GP_LIBRARY_PATH is         = /usr/local/greenplum-db/./lib
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Ulimit check               = Passed
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Array host connect type    = Single hostname per node
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master IP address [1]      = ::1
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master IP address [2]      = 192.168.80.200
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Master IP address [3]      = fe80::20c:29ff:fe9d:3bd6
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Standby Master             = Not Configured
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Primary segment #          = 1
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Total Database segments    = 2
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Trusted shell              = ssh
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Number segment hosts       = 2
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Mirror port base           = 50000
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Replicaton port base       = 41000
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Mirror replicaton port base= 51000
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Mirror segment #           = 1
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Mirroring config           = ON
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Mirroring type             = Group
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:----------------------------------------
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:----------------------------------------
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-slave1    /data/primary/gpseg0    40000   2       0       41000
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-slave2    /data/primary/gpseg1    40000   3       1       41000
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:---------------------------------------
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-Greenplum Mirror Segment Configuration
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:---------------------------------------
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-slave2    /data/mirror/gpseg0     50000   4       0       51000
:52:09:005103 gpinitsystem:master:gpadmin-[INFO]:-slave1    /data/mirror/gpseg1     50000   5       1       51000
Continue with Greenplum creation Yy/Nn
:52:17:005103 gpinitsystem:master:gpadmin-[INFO]:-Building the Master instance database, please wait...
:52:30:005103 gpinitsystem:master:gpadmin-[INFO]:-Starting the Master in admin mode
:52:36:005103 gpinitsystem:master:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
:52:36:005103 gpinitsystem:master:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
:52:36:005103 gpinitsystem:master:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
.................................
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:-Parallel process exit status
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as completed           = 2
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as killed              = 0
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as failed              = 0
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
:53:10:005103 gpinitsystem:master:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
:53:14:005103 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:53:14:005103 gpinitsystem:master:gpadmin-[INFO]:-Parallel process exit status
:53:14:005103 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:53:15:005103 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as completed           = 0
:53:15:005103 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as killed              = 0
:53:15:005103 gpinitsystem:master:gpadmin-[WARN]:-Total processes marked as failed              = 2
:53:15:005103 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:53:15:005103 gpinitsystem:master:gpadmin-[FATAL]:-Errors generated from parallel processes
:53:15:005103 gpinitsystem:master:gpadmin-[INFO]:-Dumped contents of status file to the log file
:53:15:005103 gpinitsystem:master:gpadmin-[INFO]:-Building composite backout file
:53:15:gpinitsystem:master:gpadmin-[FATAL]:-Failures detected, see log file /home/gpadmin/gpAdminLogs/gpinitsystem_.log for more detail Script Exiting!
:53:15:005103 gpinitsystem:master:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
:53:15:005103 gpinitsystem:master:gpadmin-[WARN]:-Run command /bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_158 to remove these changes
:53:15:005103 gpinitsystem:master:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
:53:15:005103 gpinitsystem:master:gpadmin-[INFO]:-End Function BACKOUT_COMMAND
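The parallel-build failure above turned out to be clock skew between the hosts. Before retrying, skew can be checked by comparing epoch seconds from each segment host against the master (over the passwordless ssh set up earlier). The helper below does the comparison; the 30-second tolerance is an arbitrary choice of mine, not a documented Greenplum limit:

```shell
#!/bin/sh
# Sketch: flag hosts whose clock differs from a reference by more than
# MAX_SKEW seconds. The tolerance value is an assumption.
MAX_SKEW=30

skew_ok() {
    # $1 = reference epoch seconds, $2 = remote epoch seconds
    diff=$(( $1 - $2 ))
    [ "$diff" -lt 0 ] && diff=$(( -diff ))
    [ "$diff" -le "$MAX_SKEW" ]
}

# In a real check, $remote would come from: ssh slave1 date +%s
local_now=$(date +%s)
remote=$local_now   # placeholder; replace with the ssh call above
if skew_ok "$local_now" "$remote"; then
    echo "clock skew within ${MAX_SKEW}s"
else
    echo "clock skew too large; re-sync the hosts (e.g. with ntpd) before retrying"
fi
```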
After re-synchronizing the clocks across the servers, initialization completed successfully:
[ ~]$ gpinitsystem -c gpinitsystem_config -h seg_hosts
:21:45:021667 gpinitsystem:master:gpadmin-[INFO]:-Checking configuration parameters, please wait...
:21:45:021667 gpinitsystem:master:gpadmin-[INFO]:-Reading Greenplum configuration file gpinitsystem_config
:21:45:021667 gpinitsystem:master:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
:21:45:021667 gpinitsystem:master:gpadmin-[INFO]:-Locale set to en_US.utf8
:21:45:021667 gpinitsystem:master:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
:21:45:021667 gpinitsystem:master:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
:21:46:021667 gpinitsystem:master:gpadmin-[INFO]:-Checking configuration parameters, Completed
:21:46:021667 gpinitsystem:master:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
:21:46:021667 gpinitsystem:master:gpadmin-[INFO]:-Configuring build for standard array
:21:46:021667 gpinitsystem:master:gpadmin-[INFO]:-Commencing multi-home checks, Completed
:21:46:021667 gpinitsystem:master:gpadmin-[INFO]:-Building primary segment instance array, please wait...
:21:47:021667 gpinitsystem:master:gpadmin-[INFO]:-Checking Master host
:21:47:021667 gpinitsystem:master:gpadmin-[INFO]:-Checking new segment hosts, please wait...
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Checking new segment hosts, Completed
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Greenplum Database Creation Parameters
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:---------------------------------------
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master Configuration
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:---------------------------------------
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master instance name       = EMC Greenplum DW
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master hostname            = master
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master port                = 5432
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master instance dir        = /data/master/gpseg-1
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master LOCALE              = en_US.utf8
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Greenplum segment prefix   = gpseg
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master Database            =
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master connections         = 250
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Master buffers             = 128000kB
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Segment connections        = 750
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Segment buffers            = 128000kB
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Checkpoint segments        = 8
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Encoding                   = UNICODE
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Postgres param file        = Off
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-Initdb to be used          = /usr/local/greenplum-db/./bin/initdb
:21:50:021667 gpinitsystem:master:gpadmin-[INFO]:-GP_LIBRARY_PATH is         = /usr/local/greenplum-db/./lib
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Ulimit check               = Passed
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Array host connect type    = Single hostname per node
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Master IP address [1]      = ::1
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Master IP address [2]      = 192.168.80.200
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Master IP address [3]      = fe80::20c:29ff:fe9d:3bd6
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Standby Master             = Not Configured
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Primary segment #          = 1
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Total Database segments    = 2
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Trusted shell              = ssh
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Number segment hosts       = 2
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Mirroring config           = OFF
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:----------------------------------------
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:----------------------------------------
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-slave1    /data/primary/gpseg0   40000   2   0
:21:51:021667 gpinitsystem:master:gpadmin-[INFO]:-slave2    /data/primary/gpseg1   40000   3   1
Continue with Greenplum creation Yy/Nn
:21:55:021667 gpinitsystem:master:gpadmin-[INFO]:-Building the Master instance database, please wait...
:22:08:021667 gpinitsystem:master:gpadmin-[INFO]:-Starting the Master in admin mode
:22:13:021667 gpinitsystem:master:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
:22:13:021667 gpinitsystem:master:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
:22:13:021667 gpinitsystem:master:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
......................................
:22:52:021667 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:22:52:021667 gpinitsystem:master:gpadmin-[INFO]:-Parallel process exit status
:22:52:021667 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:22:52:021667 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as completed           = 2
:22:52:021667 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as killed              = 0
:22:52:021667 gpinitsystem:master:gpadmin-[INFO]:-Total processes marked as failed              = 0
:22:52:021667 gpinitsystem:master:gpadmin-[INFO]:------------------------------------------------
:22:53:021667 gpinitsystem:master:gpadmin-[INFO]:-Deleting distributed backout files
:22:53:021667 gpinitsystem:master:gpadmin-[INFO]:-Removing back out file
:22:53:021667 gpinitsystem:master:gpadmin-[INFO]:-No errors generated from parallel processes
:22:53:021667 gpinitsystem:master:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Starting gpstop with args: -a -i -m -d /data/master/gpseg-1
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Gathering information and validating the environment...
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Obtaining Segment details from master...
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.2.8.0 build 1'
:22:53:032430 gpstop:master:gpadmin-[INFO]:-There are 0 connections to the database
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Master host=master
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=immediate
:22:53:032430 gpstop:master:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
:22:54:032514 gpstart:master:gpadmin-[INFO]:-Starting gpstart with args: -a -d /data/master/gpseg-1
:22:54:032514 gpstart:master:gpadmin-[INFO]:-Gathering information and validating the environment...
:22:54:032514 gpstart:master:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.2.8.0 build 1'
:22:54:032514 gpstart:master:gpadmin-[INFO]:-Greenplum Catalog Version: ''
:22:54:032514 gpstart:master:gpadmin-[INFO]:-Starting Master instance in admin mode
:22:56:032514 gpstart:master:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
:22:56:032514 gpstart:master:gpadmin-[INFO]:-Obtaining Segment details from master...
:22:56:032514 gpstart:master:gpadmin-[INFO]:-Setting new master era
:22:56:032514 gpstart:master:gpadmin-[INFO]:-Master Started...
:22:56:032514 gpstart:master:gpadmin-[INFO]:-Shutting down master
:22:57:032514 gpstart:master:gpadmin-[INFO]:-No standby master configured.  skipping...
:22:57:032514 gpstart:master:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
:23:03:032514 gpstart:master:gpadmin-[INFO]:-Process results...
:23:03:032514 gpstart:master:gpadmin-[INFO]:-----------------------------------------------------
:23:03:032514 gpstart:master:gpadmin-[INFO]:-   Successful segment starts                                            = 2
:23:03:032514 gpstart:master:gpadmin-[INFO]:-   Failed segment starts                                                = 0
:23:03:032514 gpstart:master:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
:23:03:032514 gpstart:master:gpadmin-[INFO]:-----------------------------------------------------
:23:03:032514 gpstart:master:gpadmin-[INFO]:-
:23:03:032514 gpstart:master:gpadmin-[INFO]:-Successfully started 2 of 2 segment instances
:23:03:032514 gpstart:master:gpadmin-[INFO]:-----------------------------------------------------
:23:03:032514 gpstart:master:gpadmin-[INFO]:-Starting Master instance master directory /data/master/gpseg-1
:23:04:032514 gpstart:master:gpadmin-[INFO]:-Command pg_ctl reports Master master instance active
:23:05:032514 gpstart:master:gpadmin-[WARNING]:-FATAL:& DTM initialization: failure during startup recovery, retry failed, check segment status (cdbtm.c:1583)
:23:05:032514 gpstart:master:gpadmin-[INFO]:-Check status of database with gpstate utility
:23:05:021667 gpinitsystem:master:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
:23:05:021667 gpinitsystem:master:gpadmin-[INFO]:-Loading gp_toolkit...
psql: FATAL:& DTM initialization: failure during startup recovery, retry failed, check segment status (cdbtm.c:1583)
:23:05:gpinitsystem:master:gpadmin-[FATAL]:-Failed to retrieve rolname. Script Exiting!
Initialization is successful when process output like the following appears:
[ ~]$ ps -ef|grep post
gpadmin      0 14:27 pts/1    00:00:00 grep post
gpadmin  32570     1  0 14:23 ?        00:00:00 /usr/local/greenplum-db-4.2.8.0/bin/postgres -D /data/master/gpseg-1 -p 5432 -b 1 -z 2 --silent-mode=true -i -M master -C -1 -x 0 -E
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, master logger process
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, stats collector process
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, writer process
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, checkpoint process
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, seqserver process
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, WAL Send Server process
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, ftsprobe process
gpadmin      0 14:23 ?        00:00:00 postgres: port 5432, sweeper process
b) Set environment variables
Append "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1" to the end of ~/.bashrc, then copy the file to the other nodes.
[ ~]$ cat .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
. /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
# User specific aliases and functions
[ ~]$ scp .bashrc slave1:`pwd`
.bashrc                                 100%  217   0.2KB/s   00:00
[ ~]$ scp .bashrc slave2:`pwd`
.bashrc                                 100%  217   0.2KB/s   00:00
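Hand-editing .bashrc and copying it works, but repeating the step can leave duplicate export lines. A small idempotent variant — the file path is parameterized so the same helper can be run against each node's copy:

```shell
#!/bin/sh
# Sketch: append the MASTER_DATA_DIRECTORY export to a bashrc file only
# if it is not already present, so the step can be re-run safely.
add_mdd() {
    rcfile=$1
    line='export MASTER_DATA_DIRECTORY=/data/master/gpseg-1'
    # -x matches the whole line, -F disables regex interpretation
    grep -qxF "$line" "$rcfile" 2>/dev/null || echo "$line" >> "$rcfile"
}

rc=$(mktemp)
add_mdd "$rc"
add_mdd "$rc"    # second call must not add a duplicate
grep -c 'MASTER_DATA_DIRECTORY' "$rc"    # prints 1
rm -f "$rc"
```

The same function could be invoked over ssh for slave1 and slave2 instead of scp-ing the whole file.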
2.2.7. Start and stop the database
a) Start the database
$ gpstart
[ ~]$ gpstart
:49:18:007507 gpstart:master:gpadmin-[INFO]:-Starting gpstart with args:
:49:18:007507 gpstart:master:gpadmin-[INFO]:-Gathering information and validating the environment...
:49:18:007507 gpstart:master:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.2.8.0 build 1'
:49:18:007507 gpstart:master:gpadmin-[INFO]:-Greenplum Catalog Version: ''
:49:18:007507 gpstart:master:gpadmin-[INFO]:-Starting Master instance in admin mode
:49:19:007507 gpstart:master:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
:49:19:007507 gpstart:master:gpadmin-[INFO]:-Obtaining Segment details from master...
:49:19:007507 gpstart:master:gpadmin-[INFO]:-Setting new master era
:49:19:007507 gpstart:master:gpadmin-[INFO]:-Master Started...
:49:19:007507 gpstart:master:gpadmin-[INFO]:-Shutting down master
:49:21:007507 gpstart:master:gpadmin-[INFO]:---------------------------
:49:21:007507 gpstart:master:gpadmin-[INFO]:-Master instance parameters
:49:21:007507 gpstart:master:gpadmin-[INFO]:---------------------------
:49:21:007507 gpstart:master:gpadmin-[INFO]:-Database                 = template1
:49:21:007507 gpstart:master:gpadmin-[INFO]:-Master Port              = 5432
:49:21:007507 gpstart:master:gpadmin-[INFO]:-Master directory         = /data/master/gpseg-1
:49:21:007507 gpstart:master:gpadmin-[INFO]:-Timeout                  = 600 seconds
:49:21:007507 gpstart:master:gpadmin-[INFO]:-Master standby           = Off
:49:21:007507 gpstart:master:gpadmin-[INFO]:---------------------------------------
:49:21:007507 gpstart:master:gpadmin-[INFO]:-Segment instances that will be started
:49:21:007507 gpstart:master:gpadmin-[INFO]:---------------------------------------
:49:21:007507 gpstart:master:gpadmin-[INFO]:-   Host     Datadir                Port
:49:21:007507 gpstart:master:gpadmin-[INFO]:-   slave1   /data/primary/gpseg0   40000
:49:21:007507 gpstart:master:gpadmin-[INFO]:-   slave2   /data/primary/gpseg1   40000
Continue with Greenplum instance startup Yy|Nn (default=N):
:49:23:007507 gpstart:master:gpadmin-[INFO]:-No standby master configured.  skipping...
:49:23:007507 gpstart:master:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
:49:25:007507 gpstart:master:gpadmin-[INFO]:-Process results...
:49:25:007507 gpstart:master:gpadmin-[INFO]:-----------------------------------------------------
:49:25:007507 gpstart:master:gpadmin-[INFO]:-   Successful segment starts                                            = 2
:49:25:007507 gpstart:master:gpadmin-[INFO]:-   Failed segment starts                                                = 0
:49:25:007507 gpstart:master:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
:49:25:007507 gpstart:master:gpadmin-[INFO]:-----------------------------------------------------
:49:25:007507 gpstart:master:gpadmin-[INFO]:-
:49:25:007507 gpstart:master:gpadmin-[INFO]:-Successfully started 2 of 2 segment instances
:49:25:007507 gpstart:master:gpadmin-[INFO]:-----------------------------------------------------
:49:25:007507 gpstart:master:gpadmin-[INFO]:-Starting Master instance master directory /data/master/gpseg-1
:49:26:007507 gpstart:master:gpadmin-[INFO]:-Command pg_ctl reports Master master instance active
:49:26:007507 gpstart:master:gpadmin-[INFO]:-Database successfully started
[ ~]$ psql -d template1
psql (8.2.15)
Type "help" for help.
template1=#
Note: check that the firewall is disabled on all hosts; otherwise connections will fail with errors.
b) Stop the database
$ gpstop
[ ~]$ gpstop
:38:08:004265 gpstop:master:gpadmin-[INFO]:-Starting gpstop with args:
:38:08:004265 gpstop:master:gpadmin-[INFO]:-Gathering information and validating the environment...
:38:08:004265 gpstop:master:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
:38:08:004265 gpstop:master:gpadmin-[INFO]:-Obtaining Segment details from master...
:38:08:004265 gpstop:master:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.2.8.0 build 1'
:38:08:004265 gpstop:master:gpadmin-[INFO]:---------------------------------------------
:38:08:004265 gpstop:master:gpadmin-[INFO]:-Master instance parameters
:38:08:004265 gpstop:master:gpadmin-[INFO]:---------------------------------------------
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Master Greenplum instance process active PID   = 32570
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Database                                       = template1
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Master port                                    = 5432
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Master directory                               = /data/master/gpseg-1
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Shutdown mode                                  = smart
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Timeout                                        = 600
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Shutdown Master standby host                   = Off
:38:08:004265 gpstop:master:gpadmin-[INFO]:---------------------------------------------
:38:08:004265 gpstop:master:gpadmin-[INFO]:-Segment instances that will be shutdown:
:38:08:004265 gpstop:master:gpadmin-[INFO]:---------------------------------------------
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   Host     Datadir                Port    Status
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   slave1   /data/primary/gpseg0   40000   u
:38:08:004265 gpstop:master:gpadmin-[INFO]:-   slave2   /data/primary/gpseg1   40000   u
Continue with Greenplum instance shutdown Yy|Nn (default=N):
:38:11:004265 gpstop:master:gpadmin-[INFO]:-There are 0 connections to the database
:38:11:004265 gpstop:master:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
:38:11:004265 gpstop:master:gpadmin-[INFO]:-Master host=master
:38:11:004265 gpstop:master:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
:38:11:004265 gpstop:master:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
:38:12:004265 gpstop:master:gpadmin-[INFO]:-No standby master host configured
:38:12:004265 gpstop:master:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
:38:14:004265 gpstop:master:gpadmin-[INFO]:-----------------------------------------------------
:38:14:004265 gpstop:master:gpadmin-[INFO]:-   Segments stopped successfully     = 2
:38:14:004265 gpstop:master:gpadmin-[INFO]:-   Segments with errors during stop  = 0
:38:14:004265 gpstop:master:gpadmin-[INFO]:-----------------------------------------------------
:38:14:004265 gpstop:master:gpadmin-[INFO]:-Successfully shutdown 2 of 2 segment instances
:38:14:004265 gpstop:master:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
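Rather than eyeballing the gpstop transcript, the summary counters can be checked mechanically — useful when wrapping start/stop in maintenance scripts. A sketch that scans a saved gpstop log for failures; the parsing is keyed to the exact summary strings shown above, which may differ in other Greenplum versions:

```shell
#!/bin/sh
# Sketch: decide success/failure from a saved gpstop log by reading the
# "Segments with errors during stop" counter shown in the transcript above.
stop_ok() {
    logfile=$1
    errs=$(sed -n 's/.*Segments with errors during stop *= *//p' "$logfile")
    [ -n "$errs" ] && [ "$errs" -eq 0 ]
}

# Demonstration against a fabricated log fragment:
log=$(mktemp)
cat > "$log" <<'EOF'
[INFO]:-   Segments stopped successfully     = 2
[INFO]:-   Segments with errors during stop  = 0
[INFO]:-Database successfully shutdown with no errors reported
EOF
if stop_ok "$log"; then echo "shutdown clean"; else echo "shutdown had errors"; fi
rm -f "$log"
```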
Source: http://blog.csdn.net/mavs41/article/details/