ZooKeeper 3.4.6 distributed installation

Download zookeeper-3.4.6.tar.gz

wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

Extract it into the /opt directory

tar zxvf zookeeper-3.4.6.tar.gz -C /opt

Configure /opt/zookeeper-3.4.6/conf/zoo.cfg

Copy zoo_sample.cfg to zoo.cfg.
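For example, assuming the installation path used above:

cp /opt/zookeeper-3.4.6/conf/zoo_sample.cfg /opt/zookeeper-3.4.6/conf/zoo.cfg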

Change zoo.cfg to the following content:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/scott/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

dataDir=/home/scott/zookeeper is the directory where ZooKeeper stores its data; in a cluster environment this directory must be created on every machine.
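For example, on each of the three machines (assuming the scott user owns that path):

mkdir -p /home/scott/zookeeper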

server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

server.1, server.2, and server.3 above correspond to the three machines on which ZooKeeper is installed; the numeric suffix must match the content of the myid file in that machine's dataDir directory.
master, slave1, and slave2 are the hostnames of the three machines.

Copy the ZooKeeper directory configured above to the other machines (slave1, slave2); it is best to keep the same installation path as on the master machine. A possible way to do this is shown below (assuming SSH access to slave1 and slave2 and write permission on /opt).
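scp -r /opt/zookeeper-3.4.6 scott@slave1:/opt/
scp -r /opt/zookeeper-3.4.6 scott@slave2:/opt/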

Create a myid file in the dataDir directory on each of the three machines.

On the master host

/home/scott/zookeeper/myid
the file content is

1

On the slave1 host

/home/scott/zookeeper/myid
the file content is

2

On the slave2 host

/home/scott/zookeeper/myid
the file content is

3

The content of each myid file must match the number after server. in zoo.cfg.
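One way to create the three files, run on the corresponding machine:

echo 1 > /home/scott/zookeeper/myid   # on master
echo 2 > /home/scott/zookeeper/myid   # on slave1
echo 3 > /home/scott/zookeeper/myid   # on slave2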

Set the ZooKeeper environment variables on each machine

export ZOOKEEPER_HOME=/opt/zookeeper-3.4.6
export PATH=$ZOOKEEPER_HOME/bin:$PATH
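To make the variables persistent they can, for example, be appended to ~/.bashrc (this assumes a bash shell; /etc/profile works just as well):

echo 'export ZOOKEEPER_HOME=/opt/zookeeper-3.4.6' >> ~/.bashrc
echo 'export PATH=$ZOOKEEPER_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc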

Start the ZooKeeper service on each of the three machines

zkServer.sh start
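Once all three have started, the role of each node can be checked with:

zkServer.sh status

One node should report Mode: leader and the other two Mode: follower.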

Replace the ZooKeeper instance managed by HBase

Comment out the following line in /opt/hbase-0.98.1/conf/hbase-env.sh, or change it to false:

# export HBASE_MANAGES_ZK=true
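Alternatively, leave the line uncommented and set it to false, which tells HBase not to manage its own ZooKeeper:

export HBASE_MANAGES_ZK=false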

In hbase-site.xml, set hbase.zookeeper.quorum to the hostnames of the machines running ZooKeeper:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master,slave1,slave2</value>
</property>
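If clientPort in zoo.cfg were not the default 2181, it would also need to be declared here via hbase.zookeeper.property.clientPort; with 2181 as configured above the default suffices, so the following property is optional:

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>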

Note:

An odd number of ZooKeeper hosts is recommended for fault tolerance: ZooKeeper needs a majority of nodes to be available, so an ensemble of 2n+1 machines tolerates n failures, while adding one more node to make the count even gains nothing.

Start ZooKeeper, then start HBase

zkServer.sh start
start-hbase.sh
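To confirm that HBase is using the external ensemble, check the Java processes:

jps

On master this should list QuorumPeerMain (the standalone ZooKeeper) together with HMaster, and no HQuorumPeer process (the ZooKeeper HBase would otherwise manage itself).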

Start the ZooKeeper REST service

To enable the ZooKeeper REST service we need to use Ant to run the build script in the /opt/zookeeper-3.4.6/src/contrib/rest directory.

First, edit the ivysettings.xml file in /opt/zookeeper-3.4.6 and add the oschina mirror as a source; otherwise the jars that Ivy later downloads from the Maven repository will come down slowly.

Inside the <resolvers></resolvers> element, add <ibiblio name="maven-osc" root="http://maven.oschina.net/content/groups/public/" pattern="${maven2.pattern.ext}" m2compatible="true"/>

Inside the <chain name="external" dual="true"></chain> element, add <resolver ref="maven-osc"/> and place it first so this mirror is tried before the others:

<ibiblio name="maven-osc" root="http://maven.oschina.net/content/groups/public/" pattern="${maven2.pattern.ext}" m2compatible="true"/>
<resolver ref="maven-osc"/>
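After both edits, the relevant part of ivysettings.xml should look roughly like this (a sketch of the intended structure, with the file's existing resolvers omitted):

<resolvers>
  <ibiblio name="maven-osc" root="http://maven.oschina.net/content/groups/public/" pattern="${maven2.pattern.ext}" m2compatible="true"/>
  <chain name="external" dual="true">
    <resolver ref="maven-osc"/>
    <!-- existing resolvers stay below -->
  </chain>
  <!-- remaining resolvers unchanged -->
</resolvers>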

Run ant in the /opt/zookeeper-3.4.6 directory

scott@master:/opt/zookeeper-3.4.6$ ant

Run ant run in the /opt/zookeeper-3.4.6/src/contrib/rest directory to start the ZooKeeper REST service

scott@master:/opt/zookeeper-3.4.6/src/contrib/rest$ ant run

Open http://master:9998/znodes/v1/ in a browser to verify that the service is up.
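The same check can be done from the command line; if the REST contrib service is running it should answer with a description of the root znode:

curl http://master:9998/znodes/v1/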

Configure ZooKeeper in Hue

In hue.ini, change the ZooKeeper configuration to:

[zookeeper]

[[clusters]]

[[[default]]]
# Zookeeper ensemble. Comma separated list of Host/Port.
# e.g. localhost:2181,localhost:2182,localhost:2183
host_ports=master:2181,slave1:2181,slave2:2181
# host_ports=slave1:2181
# host_ports=slave2:2181

# The URL of the REST contrib service (required for znode browsing)
rest_url=http://master:9998