Offline Deployment of Highly Available ZooKeeper with Helm
This chart will do the following:
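To see exactly which Kubernetes objects the chart will create, you can render its templates locally before installing anything (the `incubator/zookeeper` chart reference below is an assumption; for an offline deployment, substitute the local chart directory or archive you are deploying from):

```shell
# Download and unpack the chart, then render its manifests without installing.
helm fetch incubator/zookeeper --untar
helm template ./zookeeper
```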
You can install the chart with the release name myzk as below.
If you do not specify a name, Helm will generate one for you.
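A typical install command might look like this (the `incubator/zookeeper` chart reference is an assumption; for an offline deployment, point Helm at a local chart directory or archive instead):

```shell
# Install the chart with the release name "myzk" (Helm 2 syntax).
helm install --name myzk incubator/zookeeper

# Offline variant: install from a local chart directory or .tgz archive.
helm install --name myzk ./zookeeper
```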
You can use `kubectl get` to view all of the installed components.
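For example, to list the components that belong to the release (the `release` label key is an assumption; check the chart's templates for the labels it actually applies):

```shell
kubectl get statefulsets,pods,services,pvc -l release=myzk
```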
You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
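For example (the parameter names `servers` and `heap` are assumptions; consult the chart's values.yaml for the authoritative names):

```shell
# Deploy a five-server ensemble with a 1 GiB JVM heap per server.
helm install --name myzk --set servers=5,heap=1G incubator/zookeeper
```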
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
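A minimal values file and install command might look like this (the parameter names are assumptions; consult the chart's values.yaml):

```shell
# Write the overrides to a file, then pass it to helm install with -f.
cat > myzk-values.yaml <<'EOF'
servers: 5
heap: 1G
EOF
helm install --name myzk -f myzk-values.yaml incubator/zookeeper
```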
The configuration parameters in this section control the resources requested and utilized by the ZooKeeper ensemble.
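For example, to request more memory and CPU for each server (the `resources` structure follows the usual Kubernetes convention, but the exact value keys in this chart are assumptions):

```shell
helm install --name myzk \
  --set resources.requests.memory=2Gi,resources.requests.cpu=500m \
  incubator/zookeeper
```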
These parameters control the network ports on which the ensemble communicates.
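For example (the values below are ZooKeeper's standard client, quorum, and leader-election ports, but the parameter spellings in this chart are assumptions):

```shell
helm install --name myzk \
  --set clientPort=2181,serverPort=2888,electionPort=3888 \
  incubator/zookeeper
```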
ZooKeeper uses the Zab protocol to replicate its state machine across the ensemble. The following parameters control the timeouts for the protocol.
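For example, loosening the timeouts for a higher-latency network (parameter names are assumptions modeled on ZooKeeper's `tickTime`, `initLimit`, and `syncLimit` settings):

```shell
# Double the base tick; followers get 10 ticks to connect and
# may lag the leader by at most 5 ticks.
helm install --name myzk \
  --set tickTimeMs=4000,initTicks=10,syncTicks=5 \
  incubator/zookeeper
```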
ZooKeeper writes its WAL (Write Ahead Log) and periodic snapshots to storage media. These parameters control the retention policy for snapshots and WAL segments. If you do not configure the ensemble to purge snapshots and logs automatically and periodically, it is important to implement such a mechanism yourself; otherwise, you will eventually exhaust all available storage media.
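For example, to purge old snapshots and WAL segments every 12 hours while retaining the 3 most recent snapshots (parameter names are assumptions modeled on ZooKeeper's `autopurge.purgeInterval` and `autopurge.snapRetainCount` settings):

```shell
helm install --name myzk \
  --set purgeHours=12,snapRetain=3 \
  incubator/zookeeper
```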
Spreading allows you to specify anti-affinity between ZooKeeper servers in the ensemble. This prevents the Pods from being scheduled on the same node.
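For example, to require that no two servers ever share a node (the `antiAffinity` parameter name and its `hard`/`soft` values are assumptions):

```shell
helm install --name myzk --set antiAffinity=hard incubator/zookeeper
```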
In order to allow for the default installation to work well with the log rolling and retention policy of Kubernetes, all logs are written to stdout. This should also be compatible with logging integrations such as Google Cloud Logging and ELK.
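Because everything is written to stdout, standard kubectl tooling is enough to follow a server's log (the pod name below assumes the StatefulSet is named after the release):

```shell
kubectl logs -f myzk-zookeeper-0
```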
The servers in the ensemble have both liveness and readiness checks specified. These parameters can be used to tune the sensitivity of the liveness and readiness checks.
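For example, to give slow-starting servers more time before the first probe and a longer per-probe timeout (parameter names are assumptions modeled on the standard Kubernetes probe fields):

```shell
helm install --name myzk \
  --set probeInitialDelaySeconds=30,probeTimeoutSeconds=10 \
  incubator/zookeeper
```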
This parameter controls when the image is pulled from the repository.
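For example, to always pull the image, which is useful when tracking a mutable tag (the parameter name is an assumption):

```shell
helm install --name myzk --set imagePullPolicy=Always incubator/zookeeper
```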
The image used for this chart is based on Ubuntu 16.04 LTS. This image is larger than Alpine or BusyBox, but it provides glibc, rather than uClibc or musl, and a JVM release that is built against it. You can easily convert this chart to run against a smaller image with a JVM that is built against that image's libc. However, as far as we know, no Hadoop vendor supports, or has verified, ZooKeeper running on such a JVM.
The Java Virtual Machine used for this chart is the OpenJDK JVM 8u111 JRE (headless).
The ZooKeeper version is the latest stable version (3.4.9). The distribution is installed into /opt/zookeeper-3.4.9, which is symbolically linked to /opt/zookeeper. Symlinks are created to simulate an RPM installation into /usr.
You can test failover by killing the leader. Insert a key:
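For example, using the zkCli.sh shipped with the distribution (the pod name and install path assume the defaults described above):

```shell
# Create a znode /hello with the value "world" on server 0.
kubectl exec myzk-zookeeper-0 -- /opt/zookeeper/bin/zkCli.sh create /hello world
```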
Watch existing members:
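One way to watch each member's role is to poll ZooKeeper's four-letter `srvr` command on every server (pod names are assumptions; `Mode` reports leader or follower):

```shell
for i in 0 1 2; do
  echo "myzk-zookeeper-$i: $(kubectl exec myzk-zookeeper-$i -- \
    sh -c 'echo srvr | nc localhost 2181' | grep Mode)"
done
```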
Delete Pods and wait for the StatefulSet controller to bring them back up:
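For example, delete the current leader (server 0 here is only an illustration; use whichever pod reported leader above) and watch the StatefulSet controller recreate it:

```shell
kubectl delete pod myzk-zookeeper-0
kubectl get pods -w -l release=myzk
```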
Check the previously inserted key:
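Reading the key back from a different server confirms that the write survived the failover (pod name is an assumption):

```shell
kubectl exec myzk-zookeeper-1 -- /opt/zookeeper/bin/zkCli.sh get /hello
```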
ZooKeeper cannot be safely scaled in versions prior to 3.5.x. There are manual procedures for scaling an ensemble, but, as noted in the ZooKeeper 3.5.2 documentation, these procedures require a rolling restart, are known to be error prone, and often result in data loss.
While ZooKeeper 3.5.x does allow for dynamic ensemble reconfiguration (including scaling membership), the current status of the release is still alpha, and it is not recommended for production use.