
Docker convoy plugin

From one of my previous posts I demonstrated how to provide persistent storage for your containers by using a Convoy NFS Plugin.

I've been waiting for some time for one solid GlusterFS volume plugin, and I've stumbled upon an awesome GlusterFS Volume Plugin for Docker; please have a look at the author's repository. It works great. We will set up a 3 node replicated GlusterFS volume, show how easy it is to install the volume plugin, and then demonstrate how storage for our swarm's containers is persisted.

Have a look at this post to set up the GlusterFS volume. The servers that we will be using have the private IPs shown below:

10.22.125.101
10.22.125.102
10.22.125.103

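The GlusterFS setup itself is covered in the post referenced above; as a rough sketch, creating a 3 node replicated volume named gfs from the first server looks like this (the brick directory /gluster/brick is just an example path, not from the original setup):

# run from 10.22.125.101; /gluster/brick is an assumed brick path on each node
$ gluster peer probe 10.22.125.102
$ gluster peer probe 10.22.125.103
$ gluster volume create gfs replica 3 \
    10.22.125.101:/gluster/brick \
    10.22.125.102:/gluster/brick \
    10.22.125.103:/gluster/brick
$ gluster volume start gfs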

Install the GlusterFS Volume Plugin

Below I'm installing the plugin, setting the alias name to glusterfs, granting all permissions and keeping the plugin in a disabled state:

$ docker plugin install --alias glusterfs trajano/glusterfs-volume-plugin --grant-all-permissions --disable

Set the GlusterFS servers:

$ docker plugin set glusterfs SERVERS=10.22.125.101,10.22.125.102,10.22.125.103

Enable the GlusterFS plugin:

$ docker plugin enable glusterfs

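As a quick sanity check before deploying anything, you can list the plugins and confirm that the glusterfs alias reports true in the ENABLED column:

$ docker plugin ls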

Deploy a sample service on Docker Swarm with a volume backed by GlusterFS. Note that my GlusterFS volume is called gfs and the compose file uses version: "3.4".

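A minimal docker-compose.yml along these lines works with the plugin alias installed above; the service name foo, the alpine image running ping localhost, the vol1 volume and the /tmp mount point match the output further down, though the exact file here is a sketch rather than a verbatim listing:

version: "3.4"

services:
  foo:
    image: alpine:latest
    command: ping localhost
    volumes:
      - vol1:/tmp

volumes:
  vol1:
    driver: glusterfs      # the alias the plugin was installed under
    name: "gfs/vol1"       # gluster volume gfs, subdirectory vol1

The name field maps the Docker volume to the vol1 subdirectory of the gfs GlusterFS volume, which is why the mount shows up later as 10.22.125.101:gfs/vol1.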

Deploy the stack:

$ docker stack deploy -c docker-compose.yml test

Have a look at which node your container is running on:

$ docker service ps test_foo
ID            NAME        IMAGE          NODE            DESIRED STATE  CURRENT STATE           ERROR  PORTS
jfwzb7yxnrxx  test_foo.1  alpine:latest  swarm-worker-1  Running        Running 37 seconds ago

Now jump to the swarm-worker-1 node and verify that the container is running on that node:

$ docker ps
CONTAINER ID  IMAGE          COMMAND           CREATED         STATUS         PORTS  NAMES
d469f341d836  alpine:latest  "ping localhost"  59 seconds ago  Up 57 seconds         test_foo.1.jfwzb7yxnrxxnd0qxtcjex8lu

Since the container is running on this node, we will also see that the volume defined in our task configuration is present:

$ docker volume ls

Exec into the container and look at the disk layout:

$ docker exec -it d469f341d836 sh
Filesystem              Size   Used  Available  Use%  Mounted on
10.22.125.101:gfs/vol1  45.6G  3.3G  40.0G      8%    /tmp

While you are in the container, write the hostname's value into a file which is mapped to the GlusterFS volume:

$ echo $HOSTNAME > /tmp/data.txt

Scale the service to 3 replicas, then hop onto a new node where a replica resides and check if the data was persisted:

$ docker service scale test_foo=3
1/3: running
2/3: running
3/3: running

Check where the containers are running:

$ docker service ps test_foo
jfwzb7yxnrxx  test_foo.1  alpine:latest  swarm-worker-1  Running  Running 2 minutes ago
mdsg6c5b2nqb  test_foo.2  alpine:latest  swarm-worker-3  Running  Running 15 seconds ago
iybat57t4lha  test_foo.3  alpine:latest  swarm-worker-2  Running  Running 15 seconds ago

Hop onto the swarm-worker-2 node and check if the data is persisted from our previous write:

$ docker exec -it 4228529aba29 sh

Now let's append data to that file, then delete the stack and recreate it to test if the data is still persisted:

$ echo $HOSTNAME >> /tmp/data.txt

On the manager, delete the stack:

$ docker stack rm test

Then deploy the stack again:

$ docker stack deploy -c docker-compose.yml test

Check where the container is running:

$ docker service ps test_foo
id6z02m123jk  test_foo.1  alpine:latest  swarm-worker-1  Running  Running 2 seconds ago

Exec into the container and read the data:

$ docker exec -it 3008b1e1bba1 cat /tmp/data.txt

And as you can see, the data is persisted.

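If you want to tear the test environment down afterwards, remove the stack and then disable and remove the plugin:

$ docker stack rm test
$ docker plugin disable glusterfs
$ docker plugin rm glusterfs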

Thanks to frank mckenna for this sweet header photo!