[Docker] Using docker-swarm with etcd and docker-machine

Build a Swarm cluster with docker-machine, using etcd as the backend for cluster management information.

Environment

    • docker (version 1.12.3)
    • docker-machine (version 0.7.0)
    • etcd (version 2.3.7)

The Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # If true, then any SSH connections made will enable agent forwarding.
  # Default value: false
  config.ssh.forward_agent = true

  config.vm.define "manager" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.10", virtualbox__intnet: "intnet"
  end

  config.vm.define "node1" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.20", virtualbox__intnet: "intnet"
  end

  config.vm.define "node2" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.30", virtualbox__intnet: "intnet"
  end

  config.vm.define "kvstore" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.40", virtualbox__intnet: "intnet"
  end

  config.vm.define "host" do |d|
    d.vm.box = "ubuntu/trusty64"
    d.vm.network :private_network, ip: "192.168.33.50", virtualbox__intnet: "intnet"
  end
end
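
With this Vagrantfile in place, bring up all five VMs:

$ vagrant up
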
Machine   IP              Purpose
manager   192.168.33.10   Swarm cluster master
node1     192.168.33.20   Swarm cluster node1
node2     192.168.33.30   Swarm cluster node2
kvstore   192.168.33.40   etcd for the Swarm cluster backend
host      192.168.33.50   working host (all commands are run here)

Make sure that host can SSH into manager, node1, node2, and kvstore.
First set up etcd, then build the Swarm cluster.
All of the following work is done on the host VM.
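
One way to arrange the SSH access (a sketch; ssh-copy-id will prompt for the vagrant user's password, which is vagrant on the stock trusty64 box):

$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""   # skip if ~/.ssh/id_rsa already exists
$ for ip in 192.168.33.10 192.168.33.20 192.168.33.30 192.168.33.40; do
    ssh-copy-id vagrant@$ip
  done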

etcd

Create an etcd container on kvstore.

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.40 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   kvstore
$ eval $(docker-machine env kvstore)
$ docker pull quay.io/coreos/etcd:v2.3.7
$ docker run -d --name etcd \
   -p 2379:2379 -p 4001:4001 \
   quay.io/coreos/etcd:v2.3.7 \
   --data-dir=/tmp/default.etcd \
   --advertise-client-urls 'http://192.168.33.40:2379,http://192.168.33.40:4001' \
   --listen-client-urls 'http://0.0.0.0:2379,http://0.0.0.0:4001'
$ ETCDCTL_ENDPOINT="http://192.168.33.40:2379" ./etcdctl --no-sync ls --recursive /
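
Before moving on, it is worth confirming that etcd answers from host; the HTTP API's version endpoint is a convenient probe:

$ curl -s "http://$(docker-machine ip kvstore):2379/version"
# should report the etcd server version (2.3.7 here)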

A point that tripped me up:

The value of --advertise-client-urls must be 'http://192.168.33.40:2379,http://192.168.33.40:4001', not 'http://0.0.0.0:2379,http://0.0.0.0:4001': the advertised URLs are what clients are told to connect to, so advertising 0.0.0.0 sends them to an unusable address.
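
If the container was already started with the wrong value, the simplest fix is to remove it and repeat the docker run above with the corrected flag:

$ docker rm -f etcd
# then re-run the docker run command with the corrected --advertise-client-urls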

Building the Swarm cluster

Creating the Swarm master

manager

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.10 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm --swarm-master \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   --engine-opt="cluster-store=etcd://$(docker-machine ip kvstore):2379" \
   --engine-opt="cluster-advertise=eth1:2376" \
   manager
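
Before adding nodes, you can confirm that docker-machine provisioned the Swarm containers on manager (a quick check; note the env here is taken without --swarm, so docker talks to the engine directly):

$ eval $(docker-machine env manager)
$ docker ps
# expect the swarm-agent and swarm-agent-master containers to be running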

Creating the Swarm nodes

node1

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.20 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   node1

node2

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.30 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   node2
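
node1 and node2 differ only in name and IP, so with more nodes the same command is easier to drive from a small loop (a sketch that assumes the naming/IP pattern above):

$ for n in 1 2; do
    docker-machine create \
      --driver generic --generic-ip-address=192.168.33.$((n + 1))0 \
      --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
      --swarm \
      --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
      node$n
  done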

Verifying operation

etcd

Check the state through the etcd API.
manager, node1, and node2 have all been registered.
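
A minimal query against the v2 keys API (python -m json.tool is used here only for pretty-printing):

$ curl -s "http://$(docker-machine ip kvstore):2379/v2/keys/?recursive=true" | python -m json.tool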

{
  "node": {
    "nodes": [
      {
        "createdIndex": 4,
        "modifiedIndex": 4,
        "nodes": [
          {
            "createdIndex": 4,
            "modifiedIndex": 4,
            "nodes": [
              {
                "createdIndex": 4,
                "modifiedIndex": 4,
                "nodes": [
                  {
                    "createdIndex": 4,
                    "modifiedIndex": 4,
                    "nodes": [
                      {
                        "createdIndex": 100,
                        "modifiedIndex": 100,
                        "ttl": 134,
                        "expiration": "2016-11-23T11:45:51.259602538Z",
                        "value": "192.168.33.10:2376",
                        "key": "/swarm/docker/swarm/nodes/192.168.33.10:2376"
                      },
                      {
                        "createdIndex": 102,
                        "modifiedIndex": 102,
                        "ttl": 168,
                        "expiration": "2016-11-23T11:46:25.014658247Z",
                        "value": "192.168.33.20:2376",
                        "key": "/swarm/docker/swarm/nodes/192.168.33.20:2376"
                      },
                      {
                        "createdIndex": 101,
                        "modifiedIndex": 101,
                        "ttl": 137,
                        "expiration": "2016-11-23T11:45:53.997167245Z",
                        "value": "192.168.33.30:2376",
                        "key": "/swarm/docker/swarm/nodes/192.168.33.30:2376"
                      }
                    ],
                    "dir": true,
                    "key": "/swarm/docker/swarm/nodes"
                  }
                ],
                "dir": true,
                "key": "/swarm/docker/swarm"
              }
            ],
            "dir": true,
            "key": "/swarm/docker"
          }
        ],
        "dir": true,
        "key": "/swarm"
      }
    ],
    "dir": true
  },
  "action": "get"
}

Cluster

Check the status with Docker Machine.
192.168.33.10 is the master, and every machine's state is Running.

$ docker-machine ls
NAME      ACTIVE      DRIVER    STATE     URL                        SWARM              DOCKER    ERRORS
kvstore   -           generic   Running   tcp://192.168.33.40:2376                      v1.12.3
manager   * (swarm)   generic   Running   tcp://192.168.33.10:2376   manager (master)   v1.12.3
node1     -           generic   Running   tcp://192.168.33.20:2376   manager            v1.12.3
node2     -           generic   Running   tcp://192.168.33.30:2376   manager            v1.12.3

Use the Swarm manager as the Docker host and run docker commands.
The cluster-wide information can then be inspected.

$ eval $(docker-machine env --swarm manager)
$ docker info
...
Nodes: 3
 manager: 192.168.33.10:2376
  └ ID: 7MOY:Q7KV:KRXU:ORDJ:7LNB:LGSM:GT5H:N4J3:BMC7:WC7C:RMEA:UPA6
  └ Status: Healthy
  └ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: kernelversion=3.13.0-95-generic, operatingsystem=Ubuntu 14.04.5 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-11-23T11:49:04Z
  └ ServerVersion: 1.12.3
 node1: 192.168.33.20:2376
  └ ID: C6GI:OKAT:H2LH:VCBU:OF65:YWST:QVQ7:X6RP:6JY5:2CQK:4PVM:LVCZ
  └ Status: Healthy
  └ Containers: 1 (1 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: kernelversion=3.13.0-95-generic, operatingsystem=Ubuntu 14.04.5 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-11-23T11:49:16Z
  └ ServerVersion: 1.12.3
 node2: 192.168.33.30:2376
  └ ID: IJAI:XSRL:3T5S:5XEF:AUVZ:TLYM:RVAH:MYAW:Q43J:CVCF:J5JH:BGZA
  └ Status: Healthy
  └ Containers: 1 (1 Running, 0 Paused, 0 Stopped)
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 514.5 MiB
  └ Labels: kernelversion=3.13.0-95-generic, operatingsystem=Ubuntu 14.04.5 LTS, provider=generic, storagedriver=aufs
  └ UpdatedAt: 2016-11-23T11:49:16Z
  └ ServerVersion: 1.12.3
...

Launching some containers

Pull ubuntu and start around ten containers.

$ docker pull ubuntu
$ docker run -d ubuntu tail -f /dev/null
.....
$ docker run -d ubuntu tail -f /dev/null
$ docker ps
CONTAINER ID        IMAGE               COMMAND               CREATED             STATUS              PORTS               NAMES
adb3b7d5e716        ubuntu              "tail -f /dev/null"   4 seconds ago       Up 3 seconds                            node1/cocky_stallman
dcd7f4f99864        ubuntu              "tail -f /dev/null"   22 seconds ago      Up 21 seconds                           node2/furious_payne
734878f86e59        ubuntu              "tail -f /dev/null"   23 seconds ago      Up 22 seconds                           node1/small_ardinghelli
0d6099e7cc05        ubuntu              "tail -f /dev/null"   25 seconds ago      Up 24 seconds                           node2/nauseous_hoover
f06a02774421        ubuntu              "tail -f /dev/null"   26 seconds ago      Up 25 seconds                           node1/big_kare
b24d42d433f4        ubuntu              "tail -f /dev/null"   29 seconds ago      Up 28 seconds                           node2/hungry_engelbart
7daa0be7d630        ubuntu              "tail -f /dev/null"   54 seconds ago      Up 52 seconds                           node1/stupefied_joliot
9ddaf2e88b93        ubuntu              "tail -f /dev/null"   2 seconds ago       Up 1 seconds                            manager/elated_northcutt
66b4da48e09d        ubuntu              "tail -f /dev/null"   24 seconds ago      Up 23 seconds                           manager/elated_stonebraker
bb97b9a39644        ubuntu              "tail -f /dev/null"   27 seconds ago      Up 26 seconds                           manager/determined_lichterman

Because the default scheduling strategy is spread, the containers are distributed evenly across manager, node1, and node2.
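
If spread is not the strategy you want, docker-machine can hand a different one to the master at creation time via --swarm-strategy (binpack shown as an example; the rest of the command is the same as above):

$ docker-machine create \
   --driver generic --generic-ip-address=192.168.33.10 \
   --generic-ssh-user vagrant --generic-ssh-key ~/.ssh/id_rsa \
   --swarm --swarm-master \
   --swarm-strategy "binpack" \
   --swarm-discovery="etcd://$(docker-machine ip kvstore):2379/swarm" \
   --engine-opt="cluster-store=etcd://$(docker-machine ip kvstore):2379" \
   --engine-opt="cluster-advertise=eth1:2376" \
   manager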
