Our current project needs to keep costs down, so we are squeezing everything we can out of Docker; for now only a single server is available.
The plan: run three ZooKeeper instances and three Kafka brokers in Docker, and run HBase and Hadoop directly on the host, without Docker.
Reference: https://www.cnblogs.com/idea360/p/12411859.html
Docker is already installed on the server. To speed things up we will use docker-compose, so install that first:
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Change 1.24.1 to whichever release you want to use. Compatibility between the Compose file format and the Docker Engine version is as follows:
Compose file format version    Docker Engine version
3.4    17.09.0+
3.3    17.06.0+
3.2    17.04.0+
3.1    1.13.1+
3.0    1.13.0+
2.3    17.06.0+
2.2    1.13.0+
2.1    1.12.0+
2.0    1.10.0+
1.0    1.9.1+
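The table above is just a lookup from Compose file-format version to the minimum Docker Engine version. As a small sketch (the dictionary is transcribed from the table; the function name is my own):

```python
# Minimum Docker Engine version required for each Compose file-format
# version, transcribed from the compatibility table above.
COMPOSE_TO_MIN_DOCKER = {
    "3.4": "17.09.0", "3.3": "17.06.0", "3.2": "17.04.0",
    "3.1": "1.13.1", "3.0": "1.13.0",
    "2.3": "17.06.0", "2.2": "1.13.0", "2.1": "1.12.0", "2.0": "1.10.0",
    "1.0": "1.9.1",
}

def min_docker_version(compose_format: str) -> str:
    """Return the minimum Docker Engine version for a Compose format."""
    return COMPOSE_TO_MIN_DOCKER[compose_format]

print(min_docker_version("3.4"))  # 17.09.0
```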
Next, make the binary executable and verify the installation:
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version
If a version number is printed, the installation succeeded.
Now write two compose files, one for ZooKeeper and one for Kafka.
(Note: in both the ZooKeeper and Kafka services you can add volumes entries to map container directories onto the host; I did not do this for ZooKeeper here. For example:
"opt/kafka/kafka1/data/:/kafka")
version: '3.4'
services:
  zoo1:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - 2184:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo2:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - 2185:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zoo3:2888:3888
  zoo3:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - 2186:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=0.0.0.0:2888:3888
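Note the pattern in ZOO_SERVERS above: every node lists all three servers, but substitutes 0.0.0.0 for its own hostname. A small sketch of how that string could be generated (the function is my own illustration, not part of the compose setup):

```python
# Build the ZOO_SERVERS value for node `my_id`: each node advertises
# itself as 0.0.0.0 and the other ensemble members by hostname.
def zoo_servers(my_id: int, hosts=("zoo1", "zoo2", "zoo3")) -> str:
    parts = []
    for i, host in enumerate(hosts, start=1):
        addr = "0.0.0.0" if i == my_id else host
        parts.append(f"server.{i}={addr}:2888:3888")
    return " ".join(parts)

print(zoo_servers(1))
# server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
```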
version: '3.4'
services:
  kafka1:
    image: wurstmeister/kafka:2.11-0.11.0.3
    restart: unless-stopped
    container_name: kafka1
    ports:
      - "9093:9092"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 172.21.0.3
      KAFKA_ADVERTISED_PORT: 9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.21.0.3:9093
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181,zoo2:2181,zoo3:2181"
      KAFKA_delete_topic_enable: 'true'
    volumes:
      - "/home/cdata/data1/docker/kafka/kafka1/docker.sock:/var/run/docker.sock"
      - "/home/cdata/data1/docker/kafka/kafka1/data/:/kafka"
  kafka2:
    image: wurstmeister/kafka:2.11-0.11.0.3
    restart: unless-stopped
    container_name: kafka2
    ports:
      - "9094:9092"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ADVERTISED_HOST_NAME: 172.21.0.3
      KAFKA_ADVERTISED_PORT: 9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.21.0.3:9094
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181,zoo2:2181,zoo3:2181"
      KAFKA_delete_topic_enable: 'true'
    volumes:
      - "/home/cdata/data1/docker/kafka/kafka2/docker.sock:/var/run/docker.sock"
      - "/home/cdata/data1/docker/kafka/kafka2/data/:/kafka"
  kafka3:
    image: wurstmeister/kafka:2.11-0.11.0.3
    restart: unless-stopped
    container_name: kafka3
    ports:
      - "9095:9092"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ADVERTISED_HOST_NAME: 172.21.0.3
      KAFKA_ADVERTISED_PORT: 9095
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.21.0.3:9095
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181,zoo2:2181,zoo3:2181"
      KAFKA_delete_topic_enable: 'true'
    volumes:
      - "/home/cdata/data1/docker/kafka/kafka3/docker.sock:/var/run/docker.sock"
      - "/home/cdata/data1/docker/kafka/kafka3/data/:/kafka"
  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: unless-stopped
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links:
      - kafka1
      - kafka2
      - kafka3
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181,zoo2:2181,zoo3:2181
      TZ: CST-8
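Since every broker advertises the same host IP (172.21.0.3 in this setup) on its own mapped port, a client outside Docker builds its bootstrap list from that IP plus the three advertised ports. A minimal sketch (the function name is my own):

```python
# Derive the bootstrap-server string an external Kafka client would use:
# one advertised host, one advertised port per broker (9093-9095 above).
def bootstrap_servers(host: str, ports) -> str:
    return ",".join(f"{host}:{p}" for p in ports)

print(bootstrap_servers("172.21.0.3", [9093, 9094, 9095]))
# 172.21.0.3:9093,172.21.0.3:9094,172.21.0.3:9095
```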
Once both yml files are written, run docker-compose up -d against each one; that pulls the images and starts everything. You can then open port 9000 in a browser to reach kafka-manager:
Click "Add Cluster" and enter the cluster configuration (the ZooKeeper hosts above).
From there you can create and browse topics.
Next install Hadoop: edit the relevant configuration files and start the NameNode and DataNode.
Then install HBase. Once installed, the core configuration lives in two files:
hbase-site.xml
which binds HBase to ZooKeeper and to Hadoop (HDFS), and
hbase-env.sh
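The original post shows these settings only as screenshots, so here is a hypothetical hbase-site.xml fragment under the assumptions of this setup (ZooKeeper reachable on the host-mapped port 2184, HDFS NameNode on localhost:9000; adjust both to your environment):

```xml
<!-- Hypothetical sketch; concrete values are assumptions, not the
     author's actual configuration. -->
<configuration>
  <!-- Point HBase at one of the dockerized ZooKeeper nodes. -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2184</value>
  </property>
  <!-- Store HBase data in the local HDFS started above. -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>
```

In hbase-env.sh, the usual companion change for an external ensemble like this one is setting HBASE_MANAGES_ZK=false so HBase does not start its own ZooKeeper.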
Summary: with docker-compose, a single server can host a three-node ZooKeeper ensemble and a three-broker Kafka cluster (plus kafka-manager), alongside Hadoop and HBase running directly on the host.