Docker v1.12 brings integrated orchestration into the Docker Engine.
Starting with Docker 1.12, we have added features to the core Docker Engine to make multi-host and multi-container orchestration easy. We’ve added new API objects, like Service and Node, that will let you use the Docker API to deploy and manage apps on a group of Docker Engines called a swarm. With Docker 1.12, the best way to orchestrate Docker is Docker!
Create swarm-manager:
gcloud init
docker-machine create swarm-manager \
  --engine-install-url experimental.docker.com \
  -d google \
  --google-machine-type n1-standard-1 \
  --google-zone us-central1-f \
  --google-disk-size "500" \
  --google-tags swarm-cluster \
  --google-project k8s-dev-prj
Check what version has been installed:
$ eval $(docker-machine env swarm-manager)
$ docker version
Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 21:07:35 2016
 OS/Arch:      linux/amd64
 Experimental: true
Create worker node:
docker-machine create swarm-worker-1 \
  --engine-install-url experimental.docker.com \
  -d google \
  --google-machine-type n1-standard-1 \
  --google-zone us-central1-f \
  --google-disk-size "500" \
  --google-tags swarm-cluster \
  --google-project k8s-dev-prj
Initialize the swarm:
# init manager
eval $(docker-machine env swarm-manager)
docker swarm init
Under the hood this creates a Raft consensus group of one node. This first node has the role of manager, meaning it accepts commands and schedules tasks. As you join more nodes to the swarm, they will by default be workers, which simply execute containers dispatched by the manager. You can optionally add additional manager nodes. The manager nodes will be part of the Raft consensus group. We use an optimized Raft store in which reads are serviced directly from memory, which makes scheduling performance fast.
# join worker
eval $(docker-machine env swarm-worker-1)
manager_ip=$(gcloud compute instances list | awk '/swarm-manager/{print $4}')
docker swarm join ${manager_ip}:2377
List all nodes:
$ eval $(docker-machine env swarm-manager)
$ docker node ls
ID                           NAME            MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
0m2qy40ch1nqfpmhnsvj8jzch *  swarm-manager   Accepted    Ready   Active        Leader
4v1oo055unqiz9fy14u8wg3fn    swarm-worker-1  Accepted    Ready   Active
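As the quote above notes, additional managers can join the Raft consensus group. A minimal sketch, assuming you simply want to turn the existing worker into a second manager (note that two managers give no extra fault tolerance; three or more are needed for that):

# Promote the worker; it joins the Raft consensus group as a manager
docker node promote swarm-worker-1
# MANAGER STATUS should now show Reachable for swarm-worker-1
docker node ls
# Demote it again to return to a single-manager setup
docker node demote swarm-worker-1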
Create an nginx service with 2 replicas:

eval $(docker-machine env swarm-manager)
docker service create --replicas 2 -p 80:80/tcp --name nginx nginx
This command declares a desired state on your swarm of 2 nginx containers, reachable as a single, internally load balanced service on port 80 of any node in your swarm. Internally, we make this work using Linux IPVS, an in-kernel Layer 4 multi-protocol load balancer that’s been in the Linux kernel for more than 15 years. With IPVS routing packets inside the kernel, swarm’s routing mesh delivers high performance container-aware load-balancing.
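A quick way to see the routing mesh in action is to hit the published port from inside each VM (a rough sketch, assuming curl is available on the docker-machine images); every node answers on port 80, whether or not it happens to run an nginx task locally:

# Port 80 is published on every swarm node, regardless of task placement
for node in swarm-manager swarm-worker-1; do
  docker-machine ssh $node "curl -s -o /dev/null -w '%{http_code}\n' http://localhost/"
done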
When you create services, you can optionally create replicated or global services. Replicated services mean that any number of containers you define will be spread across the available hosts. Global services, by contrast, schedule one instance of the same container on every host in the swarm.
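For example, a hypothetical global service (the pinger name and alpine image are just placeholders) gets exactly one task on every node, and nodes that join later automatically receive one too:

# One task per node; no --replicas flag is needed in global mode
docker service create --mode global --name pinger alpine ping docker.com
docker service ls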
Let’s turn to how Docker provides resiliency. Swarm mode enabled engines are self-healing, meaning that they are aware of the application you defined and will continuously check and reconcile the environment when things go awry. For example, if you unplug one of the machines running an nginx instance, a new container will come up on another node. Unplug the network switch for half the machines in your swarm, and the other half will take over, redistributing the containers amongst themselves. For updates, you now have flexibility in how you re-deploy services once you make a change. You can set a rolling or parallel update of the containers on your swarm.
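A rolling update can be sketched roughly as follows (nginx:1.11 is only an example tag); --update-parallelism controls how many tasks are replaced at once and --update-delay the pause between batches:

# Replace tasks one at a time, waiting 10s between each
docker service update \
  --update-parallelism 1 \
  --update-delay 10s \
  --image nginx:1.11 nginx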
Scale the service to 3 replicas:

$ docker service scale nginx=3
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b51a902db8bc        nginx:latest        "nginx -g 'daemon off"   2 minutes ago       Up 2 minutes        80/tcp, 443/tcp     nginx.1.8yvwxbquvz1ptuqsc8hewwbau
# switch to worker
$ eval $(docker-machine env swarm-worker-1)
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
da6a8250bef4        nginx:latest        "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx.2.bqko7fyj1nowwj1flxva3ur0g
54d9ffd07894        nginx:latest        "nginx -g 'daemon off"   About a minute ago   Up About a minute   80/tcp, 443/tcp     nginx.3.02k4d34gjooa9f8m6yhfi5hyu
As seen above, one container runs on swarm-manager, and the others run on swarm-worker-1.
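Instead of switching engines with docker-machine, task placement can also be inspected from the manager; on the final 1.12.0 release the subcommand is docker service ps (early release candidates used docker service tasks):

eval $(docker-machine env swarm-manager)
# Shows each nginx task and the node it was scheduled on
docker service ps nginx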
Open port 80 in the GCE firewall so the service is reachable from outside:

gcloud compute firewall-rules create nginx-swarm \
  --allow tcp:80 \
  --description "nginx swarm service" \
  --target-tags swarm-cluster
Then use any node's external IP (obtained by running gcloud compute instances list) to visit the nginx service.
Alternatively, put a GCE network load balancer in front of both nodes:

gcloud compute addresses create network-lb-ip-1 --region us-central1
gcloud compute http-health-checks create basic-check
gcloud compute target-pools create www-pool --region us-central1 --health-check basic-check
gcloud compute target-pools add-instances www-pool --instances swarm-manager,swarm-worker-1 --zone us-central1-f
# Get lb addresses
STATIC_EXTERNAL_IP=$(gcloud compute addresses list | awk '/network-lb-ip-1/{print $3}')
# create forwarding rules
gcloud compute forwarding-rules create www-rule --region us-central1 --port-range 80 --address ${STATIC_EXTERNAL_IP} --target-pool www-pool
Now you can visit http://${STATIC_EXTERNAL_IP} to reach the nginx service.
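A quick sanity check from any machine with curl, assuming the forwarding rule has already propagated:

# Expect an HTTP 200 from the nginx tasks behind the load balancer
curl -sI http://${STATIC_EXTERNAL_IP} | head -n 1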
By the way, Docker promises to make this easier by wiring up platform load balancers itself:
By default, apps deployed with bundles do not have ports publicly exposed. Update port mappings for services, and Docker will automatically wire up the underlying platform loadbalancers:
docker service update -p 80:80 <example-service>
Create a local-scope network and place containers in existing VLANs:
docker network create -d macvlan \
  --subnet=192.168.0.0/16 \
  --ip-range=192.168.41.0/24 \
  --aux-address="favoriate_ip_ever=192.168.41.2" \
  --gateway=192.168.41.1 \
  -o parent=eth0.41 macnet41
docker run --net=macnet41 -it --rm alpine /bin/sh
A typical two-tier (web + db) application running on a swarm-scope network would be created like this:
docker network create -d overlay mynet
docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
docker service create --name redis --network mynet redis:latest
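On a swarm-scope overlay like mynet, services reach each other by service name, which swarm DNS resolves to a virtual IP. A rough way to verify this without the mywebapp image at hand is a throwaway probe service (probe is a hypothetical name):

# The probe task resolves "redis" via swarm DNS on mynet
docker service create --name probe --network mynet alpine sh -c "nslookup redis && sleep 3600"
# On whichever node runs the probe task, inspect its output
docker logs $(docker ps -q -f name=probe)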
Docker v1.12 indeed introduces an easy-to-use interface for orchestrating containers, but I'm concerned about whether this approach can scale to large clusters. Perhaps we will find out in Docker's future iterations.