This article gives a complete walkthrough of building Kafka and ZooKeeper clusters on Kubernetes, and of modifying the images so that scaling Pods out automatically extends the broker IDs and the ZooKeeper ensemble configuration, with no manual intervention. Versions used: Kafka v2.13-2.6.0, Zookeeper v3.6.2, Kubernetes v1.18.4.
The ZooKeeper image is based on the official Docker Hub image (docker pull zookeeper:3.6). The modified docker-entrypoint.sh script is shown below (the original script is available at https://github.com/31z4/zookeeper-docker/tree/2373492c6f8e74d3c1167726b19babe8ac7055dd/3.6.2):
#!/bin/bash
set -e

HOST=$(hostname -s)
DOMAIN=$(hostname -d)
CLIENT_PORT=2181
SERVER_PORT=2888
ELECTION_PORT=3888

# Generate a per-Pod zoo.cfg on first start
function createConfig(){
    if [[ ! -f "$ZOO_CONF_DIR/$HOST/zoo.cfg" ]]; then
        mkdir -p "$ZOO_CONF_DIR/$HOST"
        mkdir -p "$ZOO_DATA_DIR/$HOST"
        mkdir -p "$ZOO_DATA_LOG_DIR/$HOST"
        CONFIG="$ZOO_CONF_DIR/$HOST/zoo.cfg"
        {
            echo "dataDir=$ZOO_DATA_DIR/$HOST"
            echo "dataLogDir=$ZOO_DATA_LOG_DIR/$HOST"
            echo "tickTime=$ZOO_TICK_TIME"
            echo "initLimit=$ZOO_INIT_LIMIT"
            echo "syncLimit=$ZOO_SYNC_LIMIT"
            echo "autopurge.snapRetainCount=$ZOO_AUTOPURGE_SNAPRETAINCOUNT"
            echo "autopurge.purgeInterval=$ZOO_AUTOPURGE_PURGEINTERVAL"
            echo "maxClientCnxns=$ZOO_MAX_CLIENT_CNXNS"
            echo "standaloneEnabled=$ZOO_STANDALONE_ENABLED"
            echo "admin.enableServer=$ZOO_ADMINSERVER_ENABLED"
        } >> "$CONFIG"
        if [[ -n $ZOO_4LW_COMMANDS_WHITELIST ]]; then
            echo "4lw.commands.whitelist=$ZOO_4LW_COMMANDS_WHITELIST" >> "$CONFIG"
        fi
        for cfg_extra_entry in $ZOO_CFG_EXTRA; do
            echo "$cfg_extra_entry" >> "$CONFIG"
        done
    fi
}

# Parse the StatefulSet name and Pod ordinal from the hostname (e.g. zk-2 -> NAME=zk, ORD=2)
function getHostNum(){
    if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then
        NAME=${BASH_REMATCH[1]}
        ORD=${BASH_REMATCH[2]}
    else
        echo "Failed to parse name and ordinal of Pod"
        exit 1
    fi
}

# Derive myid from the Pod ordinal (server IDs start at 1)
function createID(){
    ID_FILE="$ZOO_DATA_DIR/$HOST/myid"
    MY_ID=$((ORD+1))
    echo "$MY_ID" > "$ID_FILE"
}

# Append one server.N entry per replica, skipping entries that already exist
function addServer(){
    for (( i=1; i<=SERVERS; i++ )); do
        s="server.$i=$NAME-$((i-1)).$DOMAIN:$SERVER_PORT:$ELECTION_PORT;$CLIENT_PORT"
        grep -qF "$s" "$ZOO_CONF_DIR/$HOST/zoo.cfg" || echo "$s" >> "$ZOO_CONF_DIR/$HOST/zoo.cfg"
    done
}

# When invoked with zkServer.sh as the first argument while running as root,
# fix ownership and re-exec this script via gosu as the zookeeper user
function userPerm(){
    if [[ "$1" = 'zkServer.sh' && "$(id -u)" = '0' ]]; then
        chown -R zookeeper "$ZOO_DATA_DIR" "$ZOO_DATA_LOG_DIR" "$ZOO_LOG_DIR" "$ZOO_CONF_DIR"
        exec gosu zookeeper "$0" "$@"
    fi
}

function startZK(){
    /apache-zookeeper-3.6.2-bin/bin/zkServer.sh --config "$ZOO_CONF_DIR/$(hostname -s)" start-foreground
}

createConfig
getHostNum
createID
addServer
userPerm "$@"
startZK
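For reference, with SERVERS=3 the script above would render a configuration on zk-0 along these lines. This is a sketch only: the values come from the Dockerfile ENV defaults below, the headless Service name zk-inner-service and namespace ns-kafka are assumed from later examples, and the client port rides on the server entries (ZooKeeper 3.5+ style):

# Hypothetical rendered /conf/zk-0/zoo.cfg
dataDir=/data/zk-0
dataLogDir=/datalog/zk-0
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
server.1=zk-0.zk-inner-service.ns-kafka.svc.cluster.local:2888:3888;2181
server.2=zk-1.zk-inner-service.ns-kafka.svc.cluster.local:2888:3888;2181
server.3=zk-2.zk-inner-service.ns-kafka.svc.cluster.local:2888:3888;2181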
My changes to the Dockerfile are minimal: the original ENTRYPOINT directive is commented out, and the CMD directive is changed so that the container starts via docker-entrypoint.sh:
FROM openjdk:11-jre-slim

ENV ZOO_CONF_DIR=/conf \
    ZOO_DATA_DIR=/data \
    ZOO_DATA_LOG_DIR=/datalog \
    ZOO_LOG_DIR=/logs \
    ZOO_TICK_TIME=2000 \
    ZOO_INIT_LIMIT=5 \
    ZOO_SYNC_LIMIT=2 \
    ZOO_AUTOPURGE_PURGEINTERVAL=0 \
    ZOO_AUTOPURGE_SNAPRETAINCOUNT=3 \
    ZOO_MAX_CLIENT_CNXNS=60 \
    ZOO_STANDALONE_ENABLED=true \
    ZOO_ADMINSERVER_ENABLED=true

# Add a user with an explicit UID/GID and create necessary directories
RUN set -eux; \
    groupadd -r zookeeper --gid=1000; \
    useradd -r -g zookeeper --uid=1000 zookeeper; \
    mkdir -p "$ZOO_DATA_LOG_DIR" "$ZOO_DATA_DIR" "$ZOO_CONF_DIR" "$ZOO_LOG_DIR"; \
    chown zookeeper:zookeeper "$ZOO_DATA_LOG_DIR" "$ZOO_DATA_DIR" "$ZOO_CONF_DIR" "$ZOO_LOG_DIR"

# Install required packages
RUN set -eux; \
    apt-get update; \
    DEBIAN_FRONTEND=noninteractive \
    apt-get install -y --no-install-recommends \
        ca-certificates \
        dirmngr \
        gosu \
        gnupg \
        netcat \
        wget; \
    rm -rf /var/lib/apt/lists/*; \
# Verify that gosu binary works
    gosu nobody true

ARG GPG_KEY=BBE7232D7991050B54C8EA0ADC08637CA615D22C
ARG SHORT_DISTRO_NAME=zookeeper-3.6.2
ARG DISTRO_NAME=apache-zookeeper-3.6.2-bin

# Download Apache Zookeeper, verify its PGP signature, untar and clean up
RUN set -eux; \
    ddist() { \
        local f="$1"; shift; \
        local distFile="$1"; shift; \
        local success=; \
        local distUrl=; \
        for distUrl in \
            'https://www.apache.org/dyn/closer.cgi?action=download&filename=' \
            https://www-us.apache.org/dist/ \
            https://www.apache.org/dist/ \
            https://archive.apache.org/dist/ \
        ; do \
            if wget -q -O "$f" "$distUrl$distFile" && [ -s "$f" ]; then \
                success=1; \
                break; \
            fi; \
        done; \
        [ -n "$success" ]; \
    }; \
    ddist "$DISTRO_NAME.tar.gz" "zookeeper/$SHORT_DISTRO_NAME/$DISTRO_NAME.tar.gz"; \
    ddist "$DISTRO_NAME.tar.gz.asc" "zookeeper/$SHORT_DISTRO_NAME/$DISTRO_NAME.tar.gz.asc"; \
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" || \
    gpg --keyserver pgp.mit.edu --recv-keys "$GPG_KEY" || \
    gpg --keyserver keyserver.pgp.com --recv-keys "$GPG_KEY"; \
    gpg --batch --verify "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz"; \
    tar -zxf "$DISTRO_NAME.tar.gz"; \
    mv "$DISTRO_NAME/conf/"* "$ZOO_CONF_DIR"; \
    rm -rf "$GNUPGHOME" "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc"; \
    chown -R zookeeper:zookeeper "/$DISTRO_NAME"

WORKDIR $DISTRO_NAME
VOLUME ["$ZOO_DATA_DIR", "$ZOO_DATA_LOG_DIR", "$ZOO_LOG_DIR"]

EXPOSE 2181 2888 3888 8080

ENV PATH=$PATH:/$DISTRO_NAME/bin \
    ZOOCFGDIR=$ZOO_CONF_DIR

COPY docker-entrypoint.sh /
# The original ENTRYPOINT directive is commented out
# ENTRYPOINT ["/docker-entrypoint.sh"]
# The original CMD is commented out and replaced with the line below
# CMD ["zkServer.sh", "start-foreground"]
CMD ["/docker-entrypoint.sh"]
The original start-kafka.sh script can be found at https://github.com/wurstmeister/kafka-docker. The modified version is as follows:
#!/bin/bash -e

# Allow specific kafka versions to perform any unique bootstrap operations
OVERRIDE_FILE="/opt/overrides/${KAFKA_VERSION}.sh"
if [[ -x "$OVERRIDE_FILE" ]]; then
    echo "Executing override file $OVERRIDE_FILE"
    eval "$OVERRIDE_FILE"
fi

# Store original IFS config, so we can restore it at various stages
ORIG_IFS=$IFS

if [[ -z "$KAFKA_ZOOKEEPER_CONNECT" ]]; then
    echo "ERROR: missing mandatory config: KAFKA_ZOOKEEPER_CONNECT"
    exit 1
fi

if [[ -z "$KAFKA_PORT" ]]; then
    export KAFKA_PORT=9092
fi

create-topics.sh &
unset KAFKA_CREATE_TOPICS

if [[ -z "$KAFKA_BROKER_ID" ]]; then
    if [[ -n "$BROKER_ID_COMMAND" ]]; then
        KAFKA_BROKER_ID=$(eval "$BROKER_ID_COMMAND")
        export KAFKA_BROKER_ID
    else
        # By default auto allocate broker ID
        export KAFKA_BROKER_ID=-1
    fi
fi

if [[ -z "$KAFKA_LOG_DIRS" ]]; then
    export KAFKA_LOG_DIRS="/kafka/kafka-logs-$HOSTNAME"
fi

if [[ -n "$KAFKA_HEAP_OPTS" ]]; then
    sed -r -i 's/(export KAFKA_HEAP_OPTS)="(.*)"/\1="'"$KAFKA_HEAP_OPTS"'"/g' "$KAFKA_HOME/bin/kafka-server-start.sh"
    unset KAFKA_HEAP_OPTS
fi

if [[ -n "$HOSTNAME_COMMAND" ]]; then
    HOSTNAME_VALUE=$(eval "$HOSTNAME_COMMAND")

    # Replace any occurrences of _{HOSTNAME_COMMAND} with the value
    IFS=$'\n'
    for VAR in $(env); do
        if [[ $VAR =~ ^KAFKA_ && "$VAR" =~ "_{HOSTNAME_COMMAND}" ]]; then
            eval "export ${VAR//_\{HOSTNAME_COMMAND\}/$HOSTNAME_VALUE}"
        fi
    done
    IFS=$ORIG_IFS
fi

if [[ -n "$PORT_COMMAND" ]]; then
    PORT_VALUE=$(eval "$PORT_COMMAND")

    # Replace any occurrences of _{PORT_COMMAND} with the value
    IFS=$'\n'
    for VAR in $(env); do
        if [[ $VAR =~ ^KAFKA_ && "$VAR" =~ "_{PORT_COMMAND}" ]]; then
            eval "export ${VAR//_\{PORT_COMMAND\}/$PORT_VALUE}"
        fi
    done
    IFS=$ORIG_IFS
fi

if [[ -n "$RACK_COMMAND" && -z "$KAFKA_BROKER_RACK" ]]; then
    KAFKA_BROKER_RACK=$(eval "$RACK_COMMAND")
    export KAFKA_BROKER_RACK
fi

if [[ -z "$KAFKA_ADVERTISED_HOST_NAME$KAFKA_LISTENERS" ]]; then
    if [[ -n "$KAFKA_ADVERTISED_LISTENERS" ]]; then
        echo "ERROR: Missing environment variable KAFKA_LISTENERS. Must be specified when using KAFKA_ADVERTISED_LISTENERS"
        exit 1
    elif [[ -z "$HOSTNAME_VALUE" ]]; then
        echo "ERROR: No listener or advertised hostname configuration provided in environment."
        echo "       Please define KAFKA_LISTENERS / (deprecated) KAFKA_ADVERTISED_HOST_NAME"
        exit 1
    fi
    export KAFKA_ADVERTISED_HOST_NAME="$HOSTNAME_VALUE"
fi

echo "" >> "$KAFKA_HOME/config/server.properties"
(
    function updateConfig() {
        key=$1
        value=$2
        file=$3

        echo "[Configuring] '$key' in '$file'"

        if grep -E -q "^#?$key=" "$file"; then
            sed -r -i "s@^#?$key=.*@$key=$value@g" "$file"
        else
            echo "$key=$value" >> "$file"
        fi
    }

    # KAFKA_VERSION + KAFKA_HOME + grep -rohe KAFKA[A-Z0-9_]* /opt/kafka/bin | sort | uniq | tr '\n' '|'
    EXCLUSIONS="|KAFKA_VERSION|KAFKA_HOME|KAFKA_DEBUG|KAFKA_GC_LOG_OPTS|KAFKA_HEAP_OPTS|KAFKA_JMX_OPTS|KAFKA_JVM_PERFORMANCE_OPTS|KAFKA_LOG|KAFKA_OPTS|"

    IFS=$'\n'
    for VAR in $(env); do
        env_var=$(echo "$VAR" | cut -d= -f1)
        if [[ "$EXCLUSIONS" = *"|$env_var|"* ]]; then
            echo "Excluding $env_var from broker config"
            continue
        fi

        if [[ $env_var =~ ^KAFKA_ ]]; then
            kafka_name=$(echo "$env_var" | cut -d_ -f2- | tr '[:upper:]' '[:lower:]' | tr _ .)
            updateConfig "$kafka_name" "${!env_var}" "$KAFKA_HOME/config/server.properties"
        fi

        if [[ $env_var =~ ^LOG4J_ ]]; then
            log4j_name=$(echo "$env_var" | tr '[:upper:]' '[:lower:]' | tr _ .)
            updateConfig "$log4j_name" "${!env_var}" "$KAFKA_HOME/config/log4j.properties"
        fi
    done

    # Strip the ordinal suffix from the Pod hostname to recover the StatefulSet name
    PODNAME=$(hostname -s | awk -F'-' 'OFS="-"{$NF="";print}' | sed 's/-$//g')

    # Build the bootstrap.servers list from all replicas of the StatefulSet
    for (( i=0; i<SERVERS; i++ )); do
        BOOTSTRAP_SERVERS+="$PODNAME-$i.$(hostname -d):${KAFKA_PORT},"
    done
    BOOTSTRAP_SERVERS=${BOOTSTRAP_SERVERS%?}
    # Record the computed list for debugging
    echo "$BOOTSTRAP_SERVERS" > /opt/log.txt

    # Point the bundled consumer/producer configs at the full broker list
    sed -i "s/bootstrap.servers.*$/bootstrap.servers=$BOOTSTRAP_SERVERS/g" "$KAFKA_HOME/config/consumer.properties"
    sed -i "s/bootstrap.servers.*$/bootstrap.servers=$BOOTSTRAP_SERVERS/g" "$KAFKA_HOME/config/producer.properties"
)

if [[ -n "$CUSTOM_INIT_SCRIPT" ]]; then
    eval "$CUSTOM_INIT_SCRIPT"
fi

exec "$KAFKA_HOME/bin/kafka-server-start.sh" "$KAFKA_HOME/config/server.properties"
Once the cluster is up, check the current state of each ZooKeeper node with the following command:
[@k8s-master1 /]# for i in 0 1 2; do kubectl exec -it zk-$i -n ns-kafka -- zkServer.sh --config /opt/conf/zk-$i status; done
ZooKeeper JMX enabled by default
Using config: /opt/conf/zk-0/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
ZooKeeper JMX enabled by default
Using config: /opt/conf/zk-1/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
ZooKeeper JMX enabled by default
Using config: /opt/conf/zk-2/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
As shown, the cluster currently has one leader and two followers. Next, verify that data is synchronized across the nodes by creating a znode on zk-0:
[@k8s-master1 /]# kubectl exec -it zk-0 -n ns-kafka -- zkCli.sh
[zk: localhost:2181(CONNECTED) 0] create /testMessage Hello
Created /testMessage
Read this znode back on the other two nodes:
[@k8s-master1 /]# kubectl exec -it zk-1 -n ns-kafka -- zkCli.sh
[zk: localhost:2181(CONNECTED) 0] get /testMessage
Hello
[@k8s-master1 /]# kubectl exec -it zk-2 -n ns-kafka -- zkCli.sh
[zk: localhost:2181(CONNECTED) 0] get /testMessage
Hello
The data is visible on both nodes, which means the ensemble is running correctly. After the Kafka cluster has been deployed, connect to ZooKeeper again to confirm that the brokers have registered:
[@k8s-master1 ~]# kubectl exec -it zk-0 -n ns-kafka -- zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, seqid, topics]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[0, 1, 2]
[zk: localhost:2181(CONNECTED) 3] get /brokers/ids/0
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-0.kafka-service.ns-kafka.svc.cluster.local:9092"],"jmx_port":-1,"port":9092,"host":"kafka-0.kafka-service.ns-kafka.svc.cluster.local","version":4,"timestamp":"1604644074102"}
[zk: localhost:2181(CONNECTED) 4] get /brokers/ids/1
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-1.kafka-service.ns-kafka.svc.cluster.local:9092"],"jmx_port":-1,"port":9092,"host":"kafka-1.kafka-service.ns-kafka.svc.cluster.local","version":4,"timestamp":"1604644074079"}
[zk: localhost:2181(CONNECTED) 5] get /brokers/ids/2
{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka-2.kafka-service.ns-kafka.svc.cluster.local:9092"],"jmx_port":-1,"port":9092,"host":"kafka-2.kafka-service.ns-kafka.svc.cluster.local","version":4,"timestamp":"1604644074009"}
All 3 brokers have registered themselves in ZooKeeper.
Next, create a topic named Message on the kafka-0 node, with 3 partitions and 3 replicas:
[@k8s-master1 ~]# kubectl exec -it kafka-0 -n ns-kafka -- /bin/bash
bash-4.4# kafka-topics.sh --create --topic Message --zookeeper zk-inner-service.ns-kafka.svc.cluster.local:2181 --partitions 3 --replication-factor 3
Created topic Message.
Check on the zk-1 node whether this topic exists:
[@k8s-master1 ~]# kubectl exec -it zk-1 -n ns-kafka -- zkCli.sh
Connecting to localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, seqid, topics]
[zk: localhost:2181(CONNECTED) 3] ls /brokers/topics
[Message]
The topic is now present in ZooKeeper.
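The partition and replica layout can also be confirmed from the Kafka side with kafka-topics.sh --describe (output omitted here):

bash-4.4# kafka-topics.sh --describe --topic Message --zookeeper zk-inner-service.ns-kafka.svc.cluster.local:2181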
Now simulate a producer on kafka-1 writing messages to Message:
[@k8s-master1 ~]# kubectl exec -it kafka-1 -n ns-kafka -- /bin/bash
bash-4.4# kafka-console-producer.sh --topic Message --broker-list kafka-0.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-1.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-2.kafka-service.ns-kafka.svc.cluster.local:9092
>This is a test message
>Welcome to Kafka
Then simulate a consumer on kafka-2 consuming these messages:
[@k8s-master1 ~]# kubectl exec -it kafka-2 -n ns-kafka -- /bin/bash
bash-4.4# kafka-console-consumer.sh --topic Message --bootstrap-server kafka-0.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-1.kafka-service.ns-kafka.svc.cluster.local:9092,kafka-2.kafka-service.ns-kafka.svc.cluster.local:9092 --from-beginning
This is a test message
Welcome to Kafka
Messages are produced and consumed normally, which means the Kafka cluster is running correctly.
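To exercise the automatic expansion described at the beginning, scaling the StatefulSet is all that is needed; a sketch, assuming the Kafka StatefulSet is named kafka and that the SERVERS variable is raised to match the new replica count:

# Hypothetical scale-out: the new Pod derives its broker ID from its
# ordinal and registers itself in ZooKeeper without manual changes
kubectl scale statefulset kafka -n ns-kafka --replicas=4
kubectl get pods -n ns-kafka -w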