In a previous post I talked about how one of my Ubuntu 18.04 VMs was running out of space, and I walked through the steps to expand the disk allocated to it. However, I've been monitoring the disk usage with my Grafana-Telegraf-InfluxDB setup and came to the conclusion that just adding a few more gigabytes every so often would be tedious and would eventually leave me with a VM that was far too big.
Actually, the whole reason I'm having these disk space issues is my Grafana setup. InfluxDB is taking in a lot of metrics from several hosts in my home, most of which run 24/7. With data being gathered every 10 seconds, this builds up quickly. Currently the data is stored on the VM itself in a Docker volume, so if I wanted to save space on the VM's disk, I would have to move the data elsewhere.
In comes NFS. I have a NAS running FreeNAS that has a couple of pools. One of those pools is designed for exactly an instance like this: holding non-crucial data. I made two datasets on that pool (one for the InfluxDB data and the other for a MySQL database I had on the VM) and set up the permissions to allow the VM to access those datasets.
But now the question is: how to get the containers to access those datasets in the best way?
Well, Docker has several options for this, and I tried three different approaches. The first was to create a volume using nfs as the "type": you pass in the options of the NFS share and Docker creates the connection. I also needed to start the container doing the reads and writes with the UID of my user on the VM. (This is due to the way I have my NFS share permissions set up; I'm sure it could be done in a more secure way.) This worked, but not fully: my other containers that depended on the databases couldn't access them for some reason, and timeouts started popping up.
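For reference, that first approach looks roughly like this. The NAS address, dataset path, image tag, and UID below are placeholders standing in for my setup; yours will differ:

```shell
# Create a named volume backed by an NFS dataset on the NAS.
# 192.168.1.50 and /mnt/tank/influxdb are placeholder values.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw,nfsvers=4 \
  --opt device=:/mnt/tank/influxdb \
  influxdb-data

# Run the database container as the VM user's UID:GID (assumed 1000:1000 here)
# so writes match the permissions on the NFS share.
docker run -d --name influxdb \
  --user 1000:1000 \
  -v influxdb-data:/var/lib/influxdb \
  influxdb:1.8
```

Note that the `local` driver only mounts the share when a container actually uses the volume, which is one place connection problems like the timeouts above can surface.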
So I tried the second option, which was to have the Compose file that I start my stacks with create the volume in the same way. That didn't work either; I actually ran into more issues doing it that way.
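The Compose equivalent of that volume definition, for the curious, looks something like this (again, the address, path, image, and UID are placeholders for my setup):

```yaml
# docker-compose.yml sketch; 192.168.1.50 and the device path are placeholders
version: "3.7"
services:
  influxdb:
    image: influxdb:1.8
    user: "1000:1000"
    volumes:
      - influxdb-data:/var/lib/influxdb

volumes:
  influxdb-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.50,rw,nfsvers=4"
      device: ":/mnt/tank/influxdb"
```

Under the hood this is the same `local` driver with NFS options as the `docker volume create` approach, just declared in the stack definition.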
The third way, and one might say the brute-force way, was to mount the NFS shares on the host itself via fstab, and then bind-mount the specific paths needed in the Compose file for the stacks that used them. That did the trick, and after that I monitored for any slowdowns or errors, but everything seems to be working a-okay!
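Sketched out, the fstab entries and the matching bind mount look roughly like this (host mount points and the NAS address are placeholders for my setup):

```
# /etc/fstab on the VM: mount both datasets at boot
192.168.1.50:/mnt/tank/influxdb  /mnt/influxdb  nfs  defaults,_netdev  0  0
192.168.1.50:/mnt/tank/mysql     /mnt/mysql     nfs  defaults,_netdev  0  0
```

After a `sudo mount -a` (or a reboot), the Compose file bind-mounts the host path instead of using a named volume:

```yaml
services:
  influxdb:
    image: influxdb:1.8
    volumes:
      - /mnt/influxdb:/var/lib/influxdb
```

The advantage is that the host, not Docker, owns the NFS connection, so every container sees the same plain directory.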
Now I have quite a bit of space to store my historical metrics and I won't have to worry about filling it up anytime soon.