Proxmox on ZFS or EXT4 as primary partition? + Requesting info (using RS 4000 G8SE 140GB SSD)

I've been looking into Proxmox and want to use it for my RS 4000 G8SE server.


Just wanted to know if there's anything I should be aware of before I install it. Should I use the entire partition for ZFS or EXT4? Any experiences? Issues? It seems I can only install Proxmox with RAID0, since I only have a single "disk", so to speak.


I've been looking into Docker as well, but I'm not sure yet whether I want to use it. Docker likes to mess with the firewall, is mostly only compatible with iptables, and is incompatible with ufw. Plus, the hassle of removing entire templates just to reinstall, while keeping the data separate, sounds like a pain to me.


As such, I've been looking into Proxmox, and I wanted to know if anyone can share their experiences and issues with it, and how they store their templates (not VMs). So far I only have one ZFS container on LXD (no Proxmox yet), while the main partition uses EXT4.


    Regards,

    Skyrider

  • Hi,

you should really think about whether you need ZFS. I reviewed my server today: compared to a normal lvm-thin setup with ext4, ZFS saves me about 25 GiB of disk space. I am running ~15 LXC containers; with ZFS they use ~90 GiB, without ZFS ~115 GiB.


Not that much... You should also know that ZFS needs a lot of RAM and causes higher CPU load because of the live compression.


So I normally recommend a plain setup with ext4 if you don't want to store much data.



My way of setting up Proxmox is a bit different: I install a plain Debian, because that way I can create my own partition scheme on the disk.
I create a small partition for the boot loader. The rest of the disk is used for LVM: I create a volume group with a 10 GiB LV with ext4 for / and a 1 GiB LV for /var/log. The rest of the VG remains unused.
Then I set up Proxmox manually, and after that I create an LV as lvm-thin from the unused space of the volume group. I register this lvm-thin pool in Proxmox and use it for my LXC containers.
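As a rough sketch of the steps above (device name /dev/sda2, sizes, and the VG/LV/storage names are assumptions, not his actual setup):

```shell
# Assumed layout: /dev/sda1 = boot loader, /dev/sda2 = LVM physical volume
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2

# 10 GiB root LV and 1 GiB log LV, both ext4; the rest of the VG stays free
lvcreate -L 10G -n root vg0 && mkfs.ext4 /dev/vg0/root
lvcreate -L 1G  -n log  vg0 && mkfs.ext4 /dev/vg0/log

# After installing Proxmox on top of Debian: thin pool from the free space
lvcreate -l 100%FREE --thinpool data vg0

# Register the thin pool as container storage in Proxmox
pvesm add lvmthin local-thin --vgname vg0 --thinpool data --content rootdir
```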

If I am using ZFS with Proxmox, then the lvm-thin LV becomes a ZFS pool instead. The root LV, as well as the log LV, stays normal ext4 in either case.


That's my way of handling it; just a suggestion.
Do you store huge amounts of data, e.g. many duplicate files? Then ZFS is your filesystem; otherwise you should choose lvm-thin with ext4.
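If you do go the ZFS route, the actual savings from compression and deduplication can be inspected directly (the pool name "tank" is just an example):

```shell
# Enable lz4 compression (and optionally dedup) on an example pool "tank"
zfs set compression=lz4 tank
zfs set dedup=on tank        # note: dedup costs a lot of RAM

# Check how much you are actually saving
zfs get compressratio tank
zpool list -o name,size,alloc,dedupratio tank
```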


I've been looking into Docker as well, but I'm not sure yet whether I want to use it. Docker likes to mess with the firewall, is mostly only compatible with iptables, and is incompatible with ufw. Plus, the hassle of removing entire templates just to reinstall, while keeping the data separate, sounds like a pain to me.

Networking is a pain in the ass with containers, but the pain with LXC isn't as bad as the pain with Docker. That's why I personally hate Docker. ^^


In my setups, the firewall included with Proxmox is completely turned off, and I have my own big iptables script with NAT and port forwarding. If you want, I can share it with you. Networking with LXC containers can be a struggle in the beginning, though, because it's a bit difficult to understand how it works. :D

But if you really want it, this won't be a showstopper. iptables is easier than it looks. :P



Hope this helps; feel free to ask if you have any other questions. :saint:

    RS Rentier 2019 | CX11-CEPH | VPS Karneval 2020


Anyone who loses decency and respect online is good for nothing in real life either! ;)

Yeah, I'm familiar with ZFS's ARC, though its memory usage can be capped in its config. But I also noticed this:


    Quote

    Do not use ZFS on top of hardware controller which has its own cache management. ZFS needs to directly communicate with disks. An HBA adapter is the way to go, or something like LSI controller flashed in “IT” mode.


Which means it's not a good idea to use ZFS with compression in the first place, since netcup uses hardware RAID.
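For reference, the ARC memory cap mentioned above is usually set via a kernel module parameter; a minimal sketch (the 4 GiB value is just an example):

```shell
# Limit the ZFS ARC to 4 GiB (value in bytes, 4 * 1024^3 = 4294967296);
# takes effect after a reboot or after reloading the zfs module
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```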


Nice how you split the volumes; I'll look into that!


What exactly does your script do, by the way? Port forwarding what, exactly?

Which means it's not a good idea to use ZFS with compression in the first place, since netcup uses hardware RAID.

Well, that's true, but I've never had any problems. ^^



What exactly does your script do, by the way? Port forwarding what, exactly?

I will send you an example later. I currently can't access my private computer.


For now, I'll try to explain it:

Each LXC container has an internal IP address. For outgoing traffic there is normal NAT, like on a router at home, so all containers access the internet using the IP address of the host system.
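As a minimal sketch of such an outgoing NAT (the internal subnet and interface name are assumptions), it boils down to a single masquerade rule:

```shell
# Masquerade outgoing traffic from the container subnet (example: 10.0.21.0/24)
# behind the host's public interface (example: ens3)
/sbin/iptables -t nat -A POSTROUTING -s 10.0.21.0/24 -o ens3 -j MASQUERADE

# Enable routing on the host so forwarded packets are passed on at all
echo 1 > /proc/sys/net/ipv4/ip_forward
```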


For incoming traffic, I use an iptables DNAT. On the host system you are effectively saying: all traffic the host system receives, e.g. on its public IP at port 80/tcp, will be forwarded to an internal IP address (a container).


I will show you a real example later. :)


So, now I'm at my private PC and can show you an example from my server:

Code
### Port forwarding for container mycontainer.rz.domain.de (116) ###
# SSH / SFTP
/sbin/ip6tables -t nat -A PREROUTING -d abcd:ef12:3456:7890::1 -i ens3 -p tcp --dport 4422 -j DNAT --to [fd00:21:1::116]:22
/sbin/ip6tables -A FORWARD -p tcp --dport 22 -d fd00:21:1::116 -m state --state NEW -j ACCEPT
/sbin/ip6tables -A FORWARD -p tcp --sport 22 -s fd00:21:1::116 -m state --state RELATED,ESTABLISHED -j ACCEPT


So imagine your RS's IP is abcd:ef12:3456:7890::1, and on this RS you are hosting a container with the ID 116 and the internal IP address fd00:21:1::116. On your container, an SSH server is listening on port 22, and you want to access the container's SSH server remotely. So you set up a port forwarding like the one above, and thanks to it, port 22 of the container is reachable globally via port 4422 on the host system's IP.


Is it clear what I'm talking about? You are forwarding an externally accessible port of the host to an internally accessible port of a container.

For IPv4 it's the same procedure; you just use other IP addresses and iptables instead of ip6tables.
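As a sketch of the IPv4 equivalent of the rules above (the public and internal addresses are made-up examples, assuming the same interface name ens3):

```shell
### Hypothetical IPv4 equivalent: host public IP 203.0.113.10,
### container internal IP 10.0.21.116
/sbin/iptables -t nat -A PREROUTING -d 203.0.113.10 -i ens3 -p tcp --dport 4422 -j DNAT --to 10.0.21.116:22
/sbin/iptables -A FORWARD -p tcp --dport 22 -d 10.0.21.116 -m state --state NEW -j ACCEPT
/sbin/iptables -A FORWARD -p tcp --sport 22 -s 10.0.21.116 -m state --state RELATED,ESTABLISHED -j ACCEPT
```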


Do you think ZFS is safe and benefits from its efficiency in combination with Mailcow?
I had the impression you were testing this combination during the last few weeks.

Well, I can imagine (basically for all mail servers) that a deduplicating filesystem could make sense. Just think about somebody sending a document of about 5 MiB to 20 recipients. The mail (including the 5 MiB document as an attachment) will be stored at least 21 times on the server: 105 MiB. If you're running a deduplicating filesystem, it will only use 5 MiB of storage.
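The savings estimate above, as a quick sanity check (copy count and attachment size taken from the example):

```python
# Storage estimate for one 5 MiB attachment delivered to 20 recipients,
# plus the sender's stored copy: 21 copies in total.
attachment_mib = 5
copies = 20 + 1  # 20 recipients + 1 sent copy

without_dedup = attachment_mib * copies  # every copy stored in full
with_dedup = attachment_mib              # identical blocks stored once

print(without_dedup)  # 105
print(with_dedup)     # 5
```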


I never tested this, and I also never wanted to test it ^^ and I don't know either whether this will work with Mailcow and Docker volumes. But if you are going to try it, please report your experiences to us. :)
