  • For JPGs, no, they will not get smaller. They may even end up a smidge bigger if you zip them, though usually not by enough to make a practical difference.

    Zip does generic lossless compression, meaning the archive can be extracted to a bit-perfect copy of the original. Very simplified, it works by finding repeating patterns, replacing each long pattern with a short key, and storing an index so the keys can be swapped back for the original patterns on extraction.

    JPGs use lossy compression, meaning some detail is lost and can never be recovered. JPG is highly optimized to only drop details that don’t matter much for human perception of the image.

    Since a JPG is already compressed, there are no repeating patterns (duplicate information) left for the zip algorithm to find.
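
    A quick way to see this is to run zlib (the same DEFLATE algorithm zip uses) over some repetitive text and over random bytes; the random bytes are only a stand-in for a real JPG here, since JPG payloads look statistically random. A minimal Python sketch:

      import os
      import zlib

      repetitive = b"the same pattern " * 1000   # lots of repeats -> compresses very well
      jpg_like = os.urandom(17_000)              # stand-in for JPG data, which looks random

      for label, data in (("repetitive text", repetitive), ("JPG-like bytes", jpg_like)):
          zipped = zlib.compress(data, level=9)  # DEFLATE, the same algorithm zip uses
          print(f"{label}: {len(data)} bytes -> {len(zipped)} bytes")

    The repetitive text shrinks to a tiny fraction of its original size, while the random bytes come out the same size or a few bytes larger, which is roughly what happens when you zip a JPG.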


  • There’s nothing wrong with Mint, it’s solid. If it works for you, don’t stress about it.

    The only thing is that it’s based on Ubuntu LTS, so its packages can be a bit old. That doesn’t really matter unless you have very new hardware and need the newer hardware support; in that case something Fedora-based like Bazzite would be better.

    For getting newer software you can use flatpak/Flathub.

    Bazzite is also “immutable”, which makes it harder to break at the system level, but also harder to tinker with at the system level. Mint is a “normal” distribution in that regard. Mint does have Timeshift for taking system-level snapshots, on the off chance that an update or your tinkering breaks something. It’s worth checking that Timeshift is set up for automatic snapshots.




  • I highly recommend you use Proxmox as the base OS. Proxmox makes it easy to spin up virtual machines, and easy to back them up and revert to those backups. So you’re free to play around and try stupid stuff: if you break something in a VM, just restore a backup.

    In addition to virtual machines, Proxmox also does “LXC containers”, which are system-level containers. They are basically very lightweight virtual machines, with some caveats, like running the same kernel as the host.

    Most self-hosting software is released as a Docker image. Docker does application-level containers, meaning only the bare minimum needed to run the application is included. You don’t enter a Docker container to update packages; instead you pull down a new version of the image from the author.
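
    As a rough sketch of that update workflow, here is what it looks like with the Docker SDK for Python (pip install docker); the nginx image and the “web” container name are just placeholders, not anything from your setup:

      import docker
      from docker.errors import NotFound

      client = docker.from_env()

      # Updating means pulling the author's new image...
      client.images.pull("nginx", tag="latest")

      # ...then recreating the container from it, instead of patching inside the old one.
      try:
          old = client.containers.get("web")
          old.stop()
          old.remove()
      except NotFound:
          pass

      client.containers.run("nginx:latest", name="web", detach=True)

    In practice you’d usually let docker compose (or a tool like Portainer or Dockge) do the pull-and-recreate for you, but that’s the whole idea.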

    There are 3 ways to run docker on Proxmox:

    • Install docker inside a virtual machine (recommended).
    • Install docker inside an LXC container (not recommended because of various edge cases).
    • Install docker directly on the Proxmox host (not recommended for various reasons).
    • (There is also ongoing work on running docker images directly in Proxmox; it has been in beta/preview since Proxmox 9.1.)

    The “overhead” of running docker inside a VM on the host is so small that you don’t need to worry about it.


  • I had never heard of Dockge before, but this sounds like the killer feature for me:

    File based structure - Dockge won’t kidnap your compose files, they are stored on your drive as usual. You can interact with them using normal docker compose commands

    Does that mean I can just point it at my existing docker compose files?
    My current layout is a folder for each service/stack, which contains docker-compose.yaml plus data folders etc. for the service. The docker-compose and related config files are versioned in git.
    I have Portainer, but I rarely use it, and I won’t let it manage the configuration, because that interfered with versioning the config in git.




  • I think Mint does this out of the box, but check that Timeshift is set up for automatic snapshots. It’s meant for system-level snapshots (basically everything outside the home folder), so you can easily revert if an update or something else breaks the system.

    Also consider some form of periodic external backup of her files and documents in the home folder, to protect against hardware failure.
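
    If you want a dead-simple starting point for that, here is a minimal Python sketch that tars a couple of home-folder directories onto an external drive; the /mnt/backup path and the folder names are assumptions, and a proper backup tool with deduplication and retention would be better long term. You could run it from cron or a systemd timer:

      import tarfile
      from datetime import date
      from pathlib import Path

      home = Path.home()
      target = Path("/mnt/backup") / f"home-{date.today().isoformat()}.tar.gz"

      with tarfile.open(target, "w:gz") as archive:
          for folder in ("Documents", "Pictures"):   # extend to whatever matters to her
              archive.add(home / folder, arcname=folder)

      print(f"wrote {target}")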




  • I run PBS (Proxmox Backup Server) as a virtual machine on Proxmox, with a dedicated physical hard drive passed through to PBS for the data.

    While this protects against software failures of my VMs, it does not protect against catastrophic hardware failure. In theory I should be able to take the dedicated hard drive out and put it in any other system running a fresh PBS, but I have not tested this.

    I tried running the same PBS with an external NFS share, but had speed and stability issues, mainly due to the hardware of the NFS host. And I wasn’t aware of autofs at the time, so the NFS share stayed disconnected.