
A Tutorial on Quickly Deploying a Nebula Graph Cluster with Docker Swarm


1. Preface

This article describes how to deploy a Nebula Graph cluster with Docker Swarm.

2. Setting up the Nebula cluster

2.1 Environment preparation

Machines:

IP              Memory (GB)   CPU (cores)
192.168.1.166   16            4
192.168.1.167   16            4
192.168.1.168   16            4

Before installing, make sure Docker is installed on all machines.
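
As a quick sanity check (a sketch assuming SSH access from one machine to all three; adjust the host list to your environment), you can query the Docker daemon version on each host:

# Verify the Docker daemon is reachable on every host
for h in 192.168.1.166 192.168.1.167 192.168.1.168; do
  ssh "$h" 'docker version --format "{{.Server.Version}}"' || echo "Docker not reachable on $h"
done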

2.2 Initialize the Swarm cluster

Run the following on 192.168.1.166:

$ docker swarm init --advertise-addr 192.168.1.166
Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.1.166:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

2.3 Join the worker nodes

Following the instructions printed by the init command, join the remaining machines as Swarm workers by running the following on 192.168.1.167 and 192.168.1.168:

docker swarm join \
  --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
  192.168.1.166:2377
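
If the join token is misplaced later, it can be reprinted at any time on the manager node:

docker swarm join-token worker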

2.4 Verify the cluster

docker node ls

ID                            HOSTNAME       STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
h0az2wzqetpwhl9ybu76yxaen *   KF2-DATA-166   Ready    Active         Reachable        18.06.1-ce
q6jripaolxsl7xqv3cmv5pxji     KF2-DATA-167   Ready    Active         Leader           18.06.1-ce
h1iql1uvm7123h3gon9so69dy     KF2-DATA-168   Ready    Active                          18.06.1-ce

2.5 Configure the Docker stack

vi docker-stack.yml

Add the following content:

version: '3.6'
services:
  metad0:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad0:/data/meta
      - logs-metad0:/logs
    networks:
      - nebula-net

  metad1:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad1:/data/meta
      - logs-metad1:/logs
    networks:
      - nebula-net

  metad2:
    image: vesoft/nebula-metad:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=45500
      - --data_path=/data/meta
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:11000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 11000
        published: 11000
        protocol: tcp
        mode: host
      - target: 11002
        published: 11002
        protocol: tcp
        mode: host
      - target: 45500
        published: 45500
        protocol: tcp
        mode: host
    volumes:
      - data-metad2:/data/meta
      - logs-metad2:/logs
    networks:
      - nebula-net

  storaged0:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.166
      - --ws_ip=192.168.1.166
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12002
        protocol: tcp
        mode: host
    volumes:
      - data-storaged0:/data/storage
      - logs-storaged0:/logs
    networks:
      - nebula-net

  storaged1:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.167
      - --ws_ip=192.168.1.167
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12004
        protocol: tcp
        mode: host
    volumes:
      - data-storaged1:/data/storage
      - logs-storaged1:/logs
    networks:
      - nebula-net

  storaged2:
    image: vesoft/nebula-storaged:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --local_ip=192.168.1.168
      - --ws_ip=192.168.1.168
      - --port=44500
      - --data_path=/data/storage
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:12000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 12000
        published: 12000
        protocol: tcp
        mode: host
      - target: 12002
        published: 12006
        protocol: tcp
        mode: host
    volumes:
      - data-storaged2:/data/storage
      - logs-storaged2:/logs
    networks:
      - nebula-net

  graphd1:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.166
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-166
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.166:13000/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3699
        protocol: tcp
        mode: host
      - target: 13000
        published: 13000
        protocol: tcp
        # mode: host
      - target: 13002
        published: 13002
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd:/logs
    networks:
      - nebula-net

  graphd2:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.167
      - --log_dir=/logs
      - --v=2
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-167
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.167:13001/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3640
        protocol: tcp
        mode: host
      - target: 13000
        published: 13001
        protocol: tcp
        mode: host
      - target: 13002
        published: 13003
        protocol: tcp
        # mode: host
    volumes:
      - logs-graphd2:/logs
    networks:
      - nebula-net

  graphd3:
    image: vesoft/nebula-graphd:nightly
    env_file:
      - ./nebula.env
    command:
      - --meta_server_addrs=192.168.1.166:45500,192.168.1.167:45500,192.168.1.168:45500
      - --port=3699
      - --ws_ip=192.168.1.168
      - --log_dir=/logs
      - --v=0
      - --minloglevel=2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == KF2-DATA-168
    depends_on:
      - metad0
      - metad1
      - metad2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://192.168.1.168:13002/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 20s
    ports:
      - target: 3699
        published: 3641
        protocol: tcp
        mode: host
      - target: 13000
        published: 13002
        protocol: tcp
        # mode: host
      - target: 13002
        published: 13004
        protocol: tcp
        mode: host
    volumes:
      - logs-graphd3:/logs
    networks:
      - nebula-net

networks:
  nebula-net:
    external: true
    attachable: true
    name: host

volumes:
  data-metad0:
  logs-metad0:
  data-metad1:
  logs-metad1:
  data-metad2:
  logs-metad2:
  data-storaged0:
  logs-storaged0:
  data-storaged1:
  logs-storaged1:
  data-storaged2:
  logs-storaged2:
  logs-graphd:
  logs-graphd2:
  logs-graphd3:
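One detail worth noting in the file above: nebula-net is declared as an external network whose name is host, so every service shares its machine's network stack rather than a Swarm overlay network; this is why the command-line flags and healthchecks use the hosts' real IPs directly. The built-in host network always exists, which you can confirm with:

docker network ls --filter name=host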

Edit nebula.env and add the following content:

TZ=UTC
USER=root

2.6 Start the Nebula cluster

docker stack deploy nebula -c docker-stack.yml
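
After deploying, give the services a few moments to converge; standard Swarm commands show replica placement and status:

docker stack ps nebula
docker service ls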

3. Cluster load balancing and high availability

As of 1.x, the Nebula Graph client provides no load balancing of its own; it simply picks a random graphd to connect to. For production use you therefore need to build load balancing and high availability yourself.

Figure 3.1: Three-layer deployment architecture

The deployment is split into three layers: a data service layer, a load balancing layer and a high availability layer, as shown in Figure 3.1.

Load balancing layer: balances client requests and forwards them to the data service layer below.

High availability layer: makes HAProxy itself highly available, keeping the load balancing layer, and thus the whole cluster, serviceable. A client connection sketch through these layers follows below.
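
For example, once the HAProxy frontend (port 3640, section 3.1) and the VIP (192.168.1.99, section 3.3) are in place, clients connect through them rather than to an individual graphd. A sketch using the 1.x console image (the user/password credentials are the defaults when authentication is off, and the exact flags may differ between console versions):

docker run --rm -ti vesoft/nebula-console:nightly \
  -u user -p password --addr=192.168.1.99 --port=3640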

3.1 Load balancing configuration

HAProxy is set up with docker-compose. Edit the following three files.

Dockerfile, with the following content:

FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EXPOSE 3640

docker-compose.yml, with the following content:

version: "3.2"
services:
  haproxy:
    container_name: haproxy
    build: .
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    ports:
      - 3640:3640
    restart: always
    networks:
      - app_net

networks:
  app_net:
    external: true

haproxy.cfg, with the following content:

global
    daemon
    maxconn 30000
    log 127.0.0.1 local0 info
    log 127.0.0.1 local1 warning

defaults
    log-format %hr\ %ST\ %B\ %Ts
    log global
    mode http
    option http-keep-alive
    timeout connect 5000ms
    timeout client 10000ms
    timeout server 50000ms
    timeout http-request 20000ms

# custom your own frontends && backends && listen conf
#CUSTOM

listen graphd-cluster
    bind *:3640
    mode tcp
    maxconn 300
    balance roundrobin
    server server1 192.168.1.166:3699 maxconn 300 check
    server server2 192.168.1.167:3699 maxconn 300 check
    server server3 192.168.1.168:3699 maxconn 300 check

listen stats
    bind *:1080
    stats refresh 30s
    stats uri /stats
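
One prerequisite: docker-compose.yml above declares app_net as an external network, so it must exist before the first start; a plain bridge network is sufficient:

docker network create app_net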

3.2 Start HAProxy

docker-compose up -d
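
Once the container is up, a quick way to confirm HAProxy is accepting connections on the published frontend port (assuming netcat is installed):

nc -zv 127.0.0.1 3640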

3.3 High availability configuration

Note: keepalived requires a virtual IP (VIP) prepared in advance; in the configuration below, 192.168.1.99 is the VIP.

Apply the following configuration on all of 192.168.1.166, 192.168.1.167 and 192.168.1.168.

Install keepalived:

apt-get update && apt-get upgrade && apt-get install keepalived -y

Edit the keepalived configuration file /etc/keepalived/keepalived.conf (all three machines use the configuration below; set a different priority on each to establish precedence).

Configuration on 192.168.1.166:

global_defs {
    router_id lb01          # identifier; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 52
    priority 999
    # interval, in seconds, between sync checks of the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # within one vrrp_instance, MASTER and BACKUP must share the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; identical on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Configuration on 192.168.1.167:

global_defs {
    router_id lb01          # identifier; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 888
    # interval, in seconds, between sync checks of the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # within one vrrp_instance, MASTER and BACKUP must share the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; identical on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Configuration on 192.168.1.168:

global_defs {
    router_id lb01          # identifier; just a name
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 52
    priority 777
    # interval, in seconds, between sync checks of the MASTER and BACKUP load balancers
    advert_int 1
    # authentication type and password
    authentication {
        # authentication type, mainly PASS or AH
        auth_type PASS
        # within one vrrp_instance, MASTER and BACKUP must share the same password to communicate
        auth_pass amber1
    }
    virtual_ipaddress {
        # virtual IP 192.168.1.99/24, bound to interface ens160 with label ens160:1; identical on master and backups
        192.168.1.99/24 dev ens160 label ens160:1
    }
    track_script {
        chk_haproxy
    }
}

Useful keepalived commands:

# Start keepalived
systemctl start keepalived
# Enable keepalived at boot
systemctl enable keepalived
# Restart keepalived
systemctl restart keepalived
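
After keepalived is running on all three machines, the VIP should be held by exactly one of them (the current MASTER), which can be verified with:

# Prints the 192.168.1.99 address on the current MASTER; prints nothing on the backups
ip addr show ens160 | grep 192.168.1.99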

4. Miscellaneous

What about offline deployment? Simply point the images at a private registry (sketched below) and you are done. Feel free to reach out with any questions.
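
A minimal sketch of that mirroring step (registry.example.com is a placeholder for your private registry; the image fields in docker-stack.yml would then be updated to match):

for img in nebula-metad nebula-storaged nebula-graphd; do
  docker pull vesoft/$img:nightly
  docker tag vesoft/$img:nightly registry.example.com/vesoft/$img:nightly
  docker push registry.example.com/vesoft/$img:nightly
done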

This concludes the tutorial on quickly deploying a Nebula Graph cluster with Docker Swarm.
