On the host system:
1. Install VirtualBox:
sudo apt install virtualbox
2. Install docker:
sudo apt install docker.io
Add user to docker group:
sudo groupadd docker
sudo usermod -aG docker $USER
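To confirm the group change took effect, start a new session (group membership is only picked up at login, or via newgrp) and run Docker's standard smoke-test image. A quick check sketch:

```shell
# Group changes only apply to new login sessions; log out and back in,
# or start a subshell with the new group via `newgrp docker`.
IMG=hello-world            # Docker's standard smoke-test image
docker run --rm "$IMG"     # should print "Hello from Docker!" if the daemon is reachable
```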
Either manually create 3 VMs (1 manager and 2 workers) or use docker-machine.
Note: With UEFI Secure Boot enabled in the BIOS, installing VirtualBox requires generating a MOK (Machine Owner Key) protected by a passphrase. When the installation completes, we must reboot and then enroll the MOK by entering the same passphrase for the key.
Approach 1 - Manually create 3 VMs (1 manager and 2 workers) - Recommended
Networking for the 3 VMs should meet the following requirements:
1. We should be able to SSH into each of the Ubuntu VMs from the host system.
2. We should be able to access the internet from the Ubuntu VMs. This is required for Docker registry access.
3. Lastly, each VM should be able to reach the IP addresses of the other nodes in the cluster.
If you are installing the Ubuntu VMs from scratch, Docker can be selected during the OS install, but you will still need to add your user to the docker group after installation.
To setup networking for the VMs:
1. VirtualBox > Host Network Manager > create an adapter (it will be named vboxnet0 by default), say with the following config:
DHCP - Enabled (static IPs would be more robust, but in practice the IP allocated to a VM rarely changes, and DHCP saves the effort of manually configuring static IPs on each VM)
Refer -
https://www.tecmint.com/network-between-guest-vm-and-host-virtualbox/
Configure the adapter manually:
IPv4 address: 192.168.56.1 -- this becomes the gateway address
IPv4 Network Mask: 255.255.255.0
Leave the IPv6 settings to default
On DHCP tab:
Server Address: 192.168.56.100 -- the DHCP server
Server Mask: 255.255.255.0
Lower Address Bound: 192.168.56.101 -- the first VM's IP address will start from here.
Upper Address Bound: 192.168.56.254
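The same host-only network can be created from the command line instead of the GUI. A sketch, assuming VBoxManage is on the PATH (the interface name vboxnet0 is assigned automatically on first creation):

```shell
# Create the host-only interface (named vboxnet0 on first run) and give
# it the gateway address from the config above.
HOSTONLY_IF=vboxnet0
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig "$HOSTONLY_IF" --ip 192.168.56.1 --netmask 255.255.255.0

# Attach a DHCP server handing out leases in 192.168.56.101-254.
VBoxManage dhcpserver add --ifname "$HOSTONLY_IF" --ip 192.168.56.100 \
  --netmask 255.255.255.0 --lowerip 192.168.56.101 --upperip 192.168.56.254 --enable
```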
2. Select each Ubuntu VM > Settings > Network > create 2 Network Adapters.
Network Adapter 1 ->
- Check the option: “Enable Network Adapter” to turn it on.
- In the field Attached to: select Host-only Adapter
- Then select the Name of the network: vboxnet0
Network Adapter 2:
- Check the option: “Enable Network Adapter” to activate it.
- In the field Attached to: select NAT
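Equivalently, both adapters can be set per VM with VBoxManage. A sketch, assuming the VM names used below and that the VMs are powered off:

```shell
# nic1 -> host-only (vboxnet0), nic2 -> NAT, mirroring the GUI steps above.
for VM in manager worker1 worker2; do
  VBoxManage modifyvm "$VM" --nic1 hostonly --hostonlyadapter1 vboxnet0
  VBoxManage modifyvm "$VM" --nic2 nat
done
```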
For ease of being able to ssh to VMs by name:
Add the following entries to /etc/hosts (change to your preferred names) on the host system:
192.168.56.101 manager
192.168.56.102 worker1
192.168.56.103 worker2
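The entries can be appended in one go. The IPs here match the DHCP range configured above; confirm each VM's actual address with `ip addr` first:

```shell
# Append name/IP mappings for the three nodes to /etc/hosts.
HOSTS_ENTRIES='192.168.56.101 manager
192.168.56.102 worker1
192.168.56.103 worker2'
echo "$HOSTS_ENTRIES" | sudo tee -a /etc/hosts
```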
Also, to enable passwordless SSH, create a ~/.ssh directory on each VM and an authorized_keys file under it containing your host's public key (permissions matter here, as sshd refuses keys in files that are too open):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "[public key]" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
Now you can ssh to the VMs without typing the password, e.g.:
ssh [user]@manager
Once on a VM, run nslookup www.google.com to verify the internet connection is working.
Also verify you can ping the IP addresses of the other two nodes.
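The checks can be scripted from the host. A sketch using the node names added to /etc/hosts above:

```shell
# For each node: confirm it answers ping, then confirm DNS works from
# inside the VM (needed for Docker registry access).
for NODE in manager worker1 worker2; do
  ping -c 1 "$NODE" >/dev/null && echo "$NODE: reachable"
  ssh "$NODE" 'nslookup www.google.com >/dev/null' && echo "$NODE: DNS ok"
done
```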
Approach 2 - Use docker-machine
To create the VMs via docker-machine, use the command below. I prefer Approach 1, though, as it gives more flexibility to choose the OS version to install instead of having to live with the version that ships with docker-machine. Also, with the docker-machine approach you end up running one additional VM just to host Docker; on a Linux host we don't need that, since the Docker engine runs natively on the host system.
docker-machine create -d virtualbox default
The above command will create a VM named "default" that hosts the Docker engine. Repeat the command with different machine names to create the manager and the two worker nodes for the swarm cluster.
To Setup Swarm mode:
On Manager node:
$ docker swarm init --advertise-addr 192.168.56.101:2377 --listen-addr 192.168.56.101:2377
where, 192.168.56.101 is this manager node's ip address.
We have only one manager, which is also the leader. Docker Swarm supports multiple manager nodes, but only one of them is the leader at any given time. We run the swarm init command on the first manager; any additional manager nodes then join the swarm cluster using the manager token.
On 2 Workers:
Run swarm join command for worker nodes to join the swarm cluster.
$ docker swarm join --token SWMTKN-1-4azauf7ujxp711zzwb1ihfluqjy5om6pa4zlicmm7pq3l0svcp-d2bgy5mtsl0w82n99suvb1uu6 192.168.56.101:2377 --advertise-addr 192.168.56.102:2377 --listen-addr 192.168.56.102:2377
The above command is printed in the output of swarm init.
To print the join commands again at any time, run the following on the manager node:
docker swarm join-token manager
docker swarm join-token worker
Running docker info will show that swarm is active and that the cluster has 3 nodes, 1 of which is a manager. It will also show whether the current node is a manager.
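A per-node view is available with `docker node ls` on the manager, and `docker info --format` can pull out just the swarm summary. A sketch (the Go-template field names come from Docker's info structure):

```shell
# Lists every node with its status, availability, and manager (Leader) role.
docker node ls

# Extract just the swarm fields from `docker info`.
FMT='{{.Swarm.LocalNodeState}} managers={{.Swarm.Managers}} nodes={{.Swarm.Nodes}}'
docker info --format "$FMT"
```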
To deploy a test app (an example from Nigel Poulton's course on pluralsight.com):
docker service create --name psight1 -p 8080:8080 --replicas 5 nigelpoulton/pluralsight-docker-ci
docker service ps psight1
The above command shows which node each replica is running on.
$ docker service ps psight1
ID             NAME        IMAGE                                       NODE      DESIRED STATE   CURRENT STATE            ERROR   PORTS
jk55y1btc0mo   psight1.1   nigelpoulton/pluralsight-docker-ci:latest   worker2   Running         Running 40 seconds ago
3tlq4zb5s78m   psight1.2   nigelpoulton/pluralsight-docker-ci:latest   worker1   Running         Running 40 seconds ago
tonkv7bmjei5   psight1.3   nigelpoulton/pluralsight-docker-ci:latest   worker2   Running         Running 41 seconds ago
znoek7dwq6x5   psight1.4   nigelpoulton/pluralsight-docker-ci:latest   manager   Running         Running 41 seconds ago
teg2amzw2f73   psight1.5   nigelpoulton/pluralsight-docker-ci:latest   worker1   Running         Running 40 seconds ago
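Because the swarm routing mesh publishes port 8080 on every node, the service should answer on any node's address, not just the nodes where replicas happen to run. A quick check from the host (IPs as above):

```shell
# Hit the published port on each node; every request should be answered
# via the ingress routing mesh regardless of replica placement.
for HOST in 192.168.56.101 192.168.56.102 192.168.56.103; do
  curl -s -o /dev/null -w "$HOST -> %{http_code}\n" "http://$HOST:8080"
done
```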
That's it! Now we have a working swarm cluster that we can use to deploy apps on.