So I was running out of storage space on my current Synology six-bay NAS, which had 6 x 10TB drives in a RAID 5 setup. Rather than upgrade all the disks to 22TB, which would be costly, I looked at using an old PC as a NAS, with HBA cards to allow a lot more HDDs than the motherboard supports.
The equipment I am using:

- 1 x server case which can support up to 8 HDD bays
- 1 x motherboard (Intel LGA1150 socket), 32GB DDR4 RAM, Intel Core i7-4790K CPU @ 4.00GHz, 1Gb NIC (this may need to be upgraded in the future)
- 1 x 750W PSU
- 1 x HBA in IT mode with two 4-port connectors, supporting up to 8 HDDs
- 2 x SSD for the TrueNAS OS
- 8 x Seagate 18TB drives connected to the HBA
To make things a bit easier I opted for TrueNAS Scale, as I had the hardware to run Kubernetes/Docker apps alongside the NAS duties: iSCSI/SMB mounts and datastores for my vSphere infrastructure.
I created a data pool of 1 x RAIDZ2 | 8 wide | 16.37 TiB, which means I can lose up to two disks and all my data is still intact.
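As a rough sanity check on the pool layout (a sketch only; real ZFS usable space is lower once metadata, padding and the recommended free-space headroom are accounted for), a RAIDZ2 vdev gives roughly (disks − 2) × disk size, since two disks' worth of space goes to parity:

```python
# Rough RAIDZ2 capacity estimate (ignores ZFS metadata/padding overhead).
def raidz2_usable_tib(disks: int, disk_tib: float) -> float:
    # RAIDZ2 dedicates two disks' worth of space to parity,
    # which is why up to two drives can fail without data loss.
    return (disks - 2) * disk_tib

# 8 x 18TB drives, where 18TB is about 16.37 TiB per drive:
print(round(raidz2_usable_tib(8, 16.37), 2))  # ~98 TiB raw usable
```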
I am taking advantage of TrueNAS Scale's ability to run native Docker apps. The apps I am running are as follows:
- Plex
- Jellyfin
- Emby
- Nginx Proxy Manager
- Jellyseerr
- Navidrome
- Overseerr
- Tautulli
Nginx Proxy Manager
I will be utilising Nginx Proxy Manager as my main reverse proxy. However, as some of my domains are external, I will be looking to implement security features such as GeoIP2 to only allow traffic from certain countries; if Nginx Proxy Manager cannot do this, I may revert to my VM install of Nginx Proxy Manager, where I have already implemented it.
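For reference, the country allow-listing on the VM install is done at the nginx level with the `ngx_http_geoip2_module`. A sketch of that kind of config (the database path and server name here are illustrative assumptions, not my actual setup):

```nginx
# Assumes nginx built with ngx_http_geoip2_module and a MaxMind
# GeoLite2-Country.mmdb database downloaded locally (path is illustrative).
geoip2 /etc/nginx/geoip2/GeoLite2-Country.mmdb {
    $geoip2_country_code country iso_code;
}

# Default-deny, then allow-list specific countries.
map $geoip2_country_code $allowed_country {
    default no;
    GB      yes;
    IE      yes;
}

server {
    listen 80;
    server_name example.internal;  # placeholder

    if ($allowed_country = no) {
        return 403;
    }

    # ... proxy_pass to the upstream app here ...
}
```

The map-based default-deny keeps the per-server block to a single `if`, which is the usual pattern for this module.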
If you try to install this, you may run into the issue I encountered: if you run apps from the SSD it works fine with no issues, but if you run them from the data pool (HDD), the app may be stuck in the “Deploying” state.
To fix this we need to add an environment variable to the app:

Name: `S6_STAGE2_HOOK`

Value: `sed -i '$d' /etc/s6-overlay/s6-rc.d/prepare/30-ownership.sh`
Restart the pod and it should then change its state to “Running”.
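For context, the `sed` expression in that hook simply deletes the last line of the container's ownership script (`$` addresses the final line, `d` deletes it), which skips the step that hangs on HDD-backed pools. A quick demonstration on a throwaway file (the file name is just for illustration):

```shell
# Create a throwaway three-line file standing in for 30-ownership.sh.
printf 'step 1\nstep 2\nchown -R on the data dir\n' > /tmp/ownership-demo.sh

# Same sed expression as the hook: delete the final line in place.
sed -i '$d' /tmp/ownership-demo.sh

cat /tmp/ownership-demo.sh   # prints "step 1" and "step 2"
```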