Docker
Michael.mast (talk | contribs)
Latest revision as of 11:08, 6 November 2019
Building Containers
- Build without using cached data.
#!/bin/bash
docker build -t localphp:`date +%m%d%Y` -f Dockerfile --no-cache .
docker build -t localphp -f Dockerfile .
Networking
IP Forwarding
This is required to allow the containers to route out of the local docker network.[1]
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
systemctl restart network
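The new sysctl.conf entry only takes effect once it is reloaded; a quick way to confirm the kernel flag (reading /proc works without root):

```shell
# Current value of the forwarding flag (1 = enabled)
cat /proc/sys/net/ipv4/ip_forward

# To apply the new /etc/sysctl.conf entry immediately (requires root):
#   sysctl -p
```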
SELinux
Quick and dirty fix for mounting local volumes[2]
chcon -Rt svirt_sandbox_file_t /directory/to/be/mounted
Or even better, just use the 'z' flag for mounting volumes.[3]
docker run --name steamcache --restart=always -d -v /raid5/archive/steamcache:/data/cache:z -v /raid5/archive/steamcache/logs:/data/logs:z --network=steam -p 192.168.11.200:80:80 steamcache/monolithic:latest
Clustering
Poor Man's Cluster
I started to use the base Docker daemon because containers were easier to work with. I was able to run many services off a single host without conflicts or complicated config files. However, I then needed the services to run with some redundancy, in case a container crashes or the host goes down. There are many good ways to handle this, but for stupid reasons I wanted to handle it with shell scripts. At the moment I do not have any health checks implemented; as long as the second host is online I should be OK. However, health checks are on the ol' to-do list.
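For that to-do item, a health check could be as simple as a cron-driven shell loop. This is only a sketch under my assumptions (one start script per container in start_scripts, named after the container it launches), not something running here yet:

```shell
#!/bin/bash
# Sketch: restart any expected container that is not currently running.
# Assumes one executable start script per container in start_scripts,
# named after the container it launches.
scripts='/mnt/docker-efs/start_scripts'

check_containers() {
    local running
    running=$(docker ps --format '{{.Names}}')
    for s in $(ls "$scripts" 2>/dev/null); do
        # Restart anything whose name is absent from the running list
        if ! echo "$running" | grep -qx "$s"; then
            echo "Container $s is down, restarting"
            "$scripts/$s"
        fi
    done
}

check_containers
```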
- The environment is AWS.
- Both hosts are running on Amazon Linux 2
- VPC in Ohio Region
- Different subnets/AZs.
- Share an EFS mount so the files are the same (these are primarily web containers, so this allows the files to be modified on the fly)
- Share an RDS MySQL Serverless database.
- All the files to build the containers are in /mnt/docker-efs/build (This uses subdirectories for each container)
- All directories mounted into containers are in /mnt/docker-efs/sitefiles
- All start scripts for the containers are in /mnt/docker-efs/start_scripts
- All saved images are stored in /mnt/docker-efs/containers
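For reference, the shared EFS mount is just an NFSv4 mount under the hood. A sketch of how a host could attach it, using the mount options AWS documents for EFS; the filesystem ID and region DNS name below are made up, and amazon-efs-utils offers a `mount -t efs` alternative:

```shell
# Hypothetical EFS DNS name; substitute your own fs-xxxxxxxx ID and region
sudo mkdir -p /mnt/docker-efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-2.amazonaws.com:/ /mnt/docker-efs
```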
Either host can build the containers if load is not too high. In this example I am building on host-1. Also note that I build a custom Apache/PHP container that is used as the base for many others; the script builds this first, then builds the others.
#!/bin/bash
##Build all containers, mark them as latest.
buildbase='/mnt/docker-efs'
containerbase="$buildbase/containers"
today=`date +%F`
list=$(ls $buildbase/build)

function _check() {
    [ "$?" -ne '0' ] && echo 'Script failed' && exit
}

function _save() {
    docker save "$1" > "$containerbase"/"$1".tar
    _check
}

function _build() {
    local buildsources="build/$1"
    docker build -t "$1":${today} -f "$buildbase"/"$buildsources"/[dD]ockerfile "$buildbase"/"$buildsources"/ --no-cache
    _check
    _save "$1":${today}
    docker tag "$1":${today} "$1":latest
}

[ -z "$1" ] && echo -e 'Options : \n\nlist\nbuild\nbackup' && exit
[ "$1" == 'list' ] && echo "$list" && exit
[ "$1" == 'build' ] && docker system prune -a -f && _build localphp && for i in $list; do [ "$i" != "localphp" ] && echo "Building $i" && _build "$i"; done && exit
[ "$1" == 'backup' ] && echo "place holder for saves" && exit
echo "You should not see this, something went wrong."
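For illustration, a downstream site container only needs to start FROM the localphp base. This is a hypothetical minimal Dockerfile (the COPY path is made up), written as a heredoc so it can be dropped into a build subdirectory:

```shell
# Write a minimal Dockerfile that layers a hypothetical site onto the base image
cat > Dockerfile <<'EOF'
FROM localphp:latest
COPY ./site/ /var/www/html/
EOF
```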
After the containers have been built, they need to be imported on the other host. On host-2, run the import script.
#!/bin/bash
source='/mnt/docker-efs/containers'
list=`ls -1t /mnt/docker-efs/containers/`
[ ! -z "$list" ] && date=`echo $list | awk '{print $1}' | grep -oE '20[0-9]{2}-[0-9]{2}-[0-9]{2}'`
[ ! -z "$date" ] && find "$source" -name "*$date*" | for i in `cat -` ; do
    name=`echo $i | grep -oE '[a-z_]+:' | sed 's/://'`
    [ ! -z "$name" ] && docker load --input "$i"
    [ "$?" == "0" ] && docker tag "$name":"$date" "$name":latest
done
/mnt/docker-efs/reload.sh
docker system prune -a -f
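The import script keys everything off the date stamp embedded in the newest tar's filename; the extraction can be exercised on its own (the filename below is just an example of what the build script produces):

```shell
# Pull the YYYY-MM-DD stamp out of a saved-image tar name
newest='localphp:2019-11-06.tar'
date=$(echo "$newest" | grep -oE '20[0-9]{2}-[0-9]{2}-[0-9]{2}')
echo "$date"   # → 2019-11-06
```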
The import script above also runs the reload.sh script, which tells the host to execute all the start scripts. The first command in each start script kills the associated running container, ensuring fresh containers are loaded.
#!/bin/bash
##Delete then run all containers.
source='/mnt/docker-efs/start_scripts'
ls "$source"/ | for i in `cat -`; do
    [ ! -z "$i" ] && "$source"/"$i"
done
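Each start script follows the same kill-then-run pattern. A hypothetical example, reusing the steamcache invocation from the SELinux section above (on the clustered hosts the paths would live under /mnt/docker-efs/sitefiles instead):

```shell
#!/bin/bash
# Remove any existing container with this name; ignore 'no such container' errors
docker rm -f steamcache 2>/dev/null

# Start a fresh one (same invocation as the SELinux example)
docker run --name steamcache --restart=always -d \
    -v /raid5/archive/steamcache:/data/cache:z \
    -v /raid5/archive/steamcache/logs:/data/logs:z \
    --network=steam -p 192.168.11.200:80:80 \
    steamcache/monolithic:latest
```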
Now that I have the same containers running on two hosts, and the duplicate containers are sharing the same database and files, I can use a standard web proxy to load balance between the hosts. Other services, such as RADIUS, can use round-robin DNS or native load balancing from the connecting client.