

Huh, I think you’re right.
Before discovering ZFS, my previous backup solution was rdiff-backup. I have memories of it being problematic for me, but I may be misremembering why it caused problems.


Thanks! I was not aware of these options, nor of the --link-dest flag another poster mentioned. These do turn rsync into a backup program, which is something the root article should explain!
(Both are limited in some aspects compared to other backup software, but they might still be a simpler yet effective solution. And sometimes simple is best!)
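For reference, the pattern looks roughly like this; every path here is a made-up example:

    # hard-link snapshots with --link-dest; unchanged files in the new
    # snapshot become hard links into the previous one
    today=$(date +%F)
    rsync -a --delete --link-dest=/backups/latest /home/ "/backups/$today/"
    # point "latest" at the new snapshot for the next run
    ln -sfn "/backups/$today" /backups/latest

(On the very first run rsync will warn that the link-dest directory doesn’t exist and just do a full copy.)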


Ah, I didn’t know of this. This should be in the linked article, because it’s one of the ways to turn rsync into a real backup! (I didn’t know this flag; I thought this was the main point of rdiff-backup.)


Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.
However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every “snapshot” you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every “file” in the snapshot is either a regular file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.
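A quick way to check whether this is biting an existing backup disk (paths are examples):

    df -i /backup                       # inode usage of the whole filesystem
    du --inodes -s /backup/snapshot-*   # inodes consumed per snapshot (GNU du)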
But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but it’s not a backup.
(OTOH, rsync is still wonderful for large transfers.)


I run mbsync/isync to keep a maildir copy of my email (hosted by someone else).
You can run it periodically with cron or systemd timers; it connects to an IMAP server and downloads all emails to a directory (in maildir format) for backup. You can also use this to migrate to another IMAP server.
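If it helps, a minimal sketch of the configuration; account names, host, and paths are made up, and option names differ slightly across isync versions (TLSType used to be SSLType):

    # ~/.mbsyncrc (fragment); everything below is an example
    IMAPAccount example
    Host imap.example.com
    User me@example.com
    PassCmd "pass show mail/example"
    TLSType IMAPS

    IMAPStore example-remote
    Account example

    MaildirStore example-local
    Path ~/mail/
    Inbox ~/mail/INBOX
    SubFolders Verbatim

    Channel example
    Far :example-remote:
    Near :example-local:
    Patterns *
    Create Near
    SyncState *

A crontab entry like “0 * * * * mbsync -a” then keeps it synced hourly.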
If the webmail sucks, I wouldn’t run my own; I would consider using Thunderbird. It’s a desktop/Android application that syncs mail to your desktop/phone, so most of the time it’s working with local storage and is much faster than most webmails.


https://charity.wtf/2021/08/09/notes-on-the-perfidy-of-dashboards/
Graphs and stuff might be useful for doing capacity planning or observing some trends, but most likely you don’t need either.
If you want to know when something is down (and you might not need to know), set up alerts. (And do it well: you should only receive “actionable” alerts. And after setting up alerts, you should work on reducing how many actionable things you have to do.)
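A trivial sketch of what I mean (the URL is an example); cron mails any output, so printing only on failure is already an “actionable alert”:

    #!/bin/sh
    # run from cron every few minutes; output (and therefore mail)
    # is produced only when the check actually fails
    if ! curl -fsS --max-time 10 https://example.com/health >/dev/null; then
        echo "health check for example.com failed at $(date)"
    fi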
(I did set up Nagios to send metrics to Clickhouse, plotted with Grafana. But mostly because I wanted to learn a few things and… I was curious about network latencies and wanted to plan storage a bit longer term. But I could live perfectly well without those.)


I think having a solid/stable virtualization layer is very helpful. Whether that’s Proxmox, Incus, or something else is a matter of taste.
You can then put NixOS, Guix, Debian, Arch, whatever on top.
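For example, with Incus, a throwaway Debian guest is about this much work (names are mine):

    # hedged sketch: launch a Debian container and get a shell in it
    incus launch images:debian/12 testbox
    incus exec testbox -- bash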


That’s what I use too. Coupled with soju, it’s an easier experience for me. And they are both in Debian 13!


Not sure about how it handles video, but I’ve been meaning to take a look at https://getbananas.net/


I like Pop, but note that Gnome has a few extensions that implement tiling (I use PaperWM). I believe KDE also has some tiling support.
Certainly, many of the hardcore tiling environments are too bare and require significant effort to get to a usable state (esp. on laptops, where you want wireless network applets), and it’s unfortunate that it is no longer so easy to mix and match components (e.g. I used to run xmonad on top of Mate).
Having said that, I’ll have another go with the beta!


Is it an option? Can’t find it. (But GitHub is confusing and I’m old, so maybe there’s something?)


Hah, no worries. I think it’s just an unusual use case and… well, I recognized it because I’m obsessed with PiKVM lately and those things!
I’m not super knowledgeable on USB, but Linux has features to do this; they are called “gadgets” in this list:
https://docs.kernel.org/usb/index.html
I have used this to turn a RPI Zero into a virtual USB drive with these scripts: https://github.com/alexpdp7/rpi-zero-usb-iso/
Searching the Internet for USB gadgets will likely turn up good explanations of the requirements. I know there are unexpected difficulties; I’m using a Pi Zero instead of a nicer Pi because… nicer Pis can draw too much power over USB and bork what they’re connected to. So be careful.
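Stripped down, I believe the mass-storage trick looks something like this on a Pi Zero (assumes the dwc2 overlay is already enabled in config.txt; the image path is an example):

    # expose a disk image to the USB host as a read-only drive
    sudo modprobe dwc2
    sudo modprobe g_mass_storage file=/home/pi/disk.img removable=1 ro=1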


If this needs to be “hardware” level, I saw https://openterface.com/ recently. The PiKVM-style projects are also a bit adjacent to this.


How much storage do you want? Do you want any specific features beyond file sharing?
How much experience do you have self-hosting stuff? What is the purpose of this project? (E.g., do you want a learning experience, to avoid commercial services, or do you just need file sharing?)


To be fair, if you want to sync your work across two machines, Git is not ideal because, well, you must always remember to push. If you don’t push before switching to the other machine, you’re out of luck.
Syncthing has no such problem, because it’s real time.
However, it’s true that you cannot combine Syncthing and Git. There are solutions like https://github.com/tkellogg/dura, but I have not tested it.
There’s some lack of options in this space. For some, it might be nicer to run an online IDE.
…
To add something, I second the “just use Git over ssh without installing any additional server” advice. A variation is using something like Gitolite to add multi-user support and permissions on top of raw Git; it’s still lighter than running Forgejo.
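For anyone who hasn’t seen it, the no-server setup is just this (user, host, and paths are examples):

    # on the server: a bare repo is all the "server software" you need
    ssh user@server 'git init --bare ~/project.git'

    # on each machine
    git remote add origin user@server:project.git
    git push -u origin main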


Reminder that you can go for hybrid approaches: receive email and host IMAP/webmail yourself, and send emails through someone like AWS. I am not saying you can’t do SMTP yourself, but if you just want to dip your toes, it’s an option.
You get many of the advantages: you control your email addresses, you store all of the email and control backups, etc.
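On the Postfix side, the sending leg is just a relayhost plus SASL credentials; the SES endpoint below is an example, so check which region applies to you:

    # set in /etc/postfix/main.cf (values are examples)
    sudo postconf -e \
        'relayhost = [email-smtp.us-east-1.amazonaws.com]:587' \
        'smtp_sasl_auth_enable = yes' \
        'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd' \
        'smtp_sasl_security_options = noanonymous' \
        'smtp_tls_security_level = encrypt'
    # then store the SMTP credentials in /etc/postfix/sasl_passwd
    # and run: postmap /etc/postfix/sasl_passwd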
…
And another thing: you could also play with https://chatmail.at/relays , which is pretty cool. I had read about Delta Chat, but only decided to play with it recently, and… it’s blown my mind.


If you are going to run Jellyfin or some other media-sharing software, the key question is whether you need to transcode media (recompress it because the playback device cannot handle the original format). Likely not, nowadays, but research that. If you do need transcoding, do your homework; you might get by with an old CPU, or with hardware transcoding support, but it’s trickier. A quick way to test the hardware side is sketched at the end of this comment.
Outside transcoding, for file sharing/streaming, every simultaneous client will require additional horsepower and disk bandwidth. If you are the sole client, an old CPU is likely enough. But if you and three more people in your household are going to be using the system at the same time, it gets more complicated.
One of my home servers has 4 GB of RAM and an “Intel® Celeron® CPU G1610T @ 2.30GHz”. It’s very old and low end, but it works quite well for file sharing; then again, it rarely has more than a single simultaneous user.
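The quick hardware test, assuming an Intel/AMD GPU (device path and input file are examples; vainfo comes with libva-utils):

    # does the GPU advertise the codecs you need?
    vainfo
    # try a 30-second hardware transcode and watch CPU usage
    ffmpeg -vaapi_device /dev/dri/renderD128 -i input.mkv \
        -vf 'format=nv12,hwupload' -c:v h264_vaapi -t 30 /tmp/test.mp4

If CPU usage stays low and the output plays, hardware transcoding is an option on that box.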


> But now with the end of Windows 10 looming, I need to upgrade a family member’s computer to Linux.
Why?
Did they ask for Linux? Do you have authority over them?
> So this needs to be something that both is not going to break on its own (e.g. while doing automatic updates) and also won’t be accidentally broken by the users. … There’s no way I’m going to be able to handle long-distance tech support if things break more than once in a blue moon.
Issues will appear. I would focus more on setting up remote access than on choosing a distro.
I’d choose something LTS that has been around for a while (Debian, Ubuntu, RHEL-derivatives, SuSE if there’s a freely-available LTS, etc.).
If you are not against the use of Google products, ChromeOS devices are about the best-designed, lowest-maintenance operating systems. (Not ChromeOS Flex; an actual ChromeOS device.) But you would be sacrificing Firefox and LibreOffice, which might not be an option. (And technically, it’s running a Linux kernel, if I remember correctly.)


https://dgross.ca/blog/linux-home-server-auto-sleep did the rounds lately.
But you’ll need another system that’s always on to handle this.
In many cases, you can “fake” this by other means. For example, I had Remmina configured to run a script that sent a WOL packet and waited before connecting via remote desktop to the computer.
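The script was roughly this (MAC address and hostname are made up; wakeonlan is packaged in Debian):

    #!/bin/sh
    # send the magic packet, then wait until the box answers pings
    wakeonlan aa:bb:cc:dd:ee:ff
    until ping -c 1 -W 1 desktop.lan >/dev/null 2>&1; do
        sleep 2
    done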
You don’t need to rebuild your server from scratch to use Ansible or any other configuration management tool. It helps, though, because then you can ensure you can rebuild from scratch in a fully automatic way.
You can start putting small things under control with Ansible; the next time you want to make a change, do it through Ansible. If you stop making manual changes, you’ll already get some benefit, like being able to put your Ansible manifests in version control.
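Even a tiny playbook beats undocumented manual changes; a sketch (host group, packages, and file names are invented):

    # site.yml; run with: ansible-playbook -i inventory site.yml
    - hosts: homeserver
      become: true
      tasks:
        - name: Packages I used to install by hand
          ansible.builtin.apt:
            name: [rsync, vim, tmux]
            state: present
        - name: My sshd config, now in version control
          ansible.builtin.copy:
            src: files/sshd_config
            dest: /etc/ssh/sshd_config
            mode: "0644"
          notify: Restart ssh
      handlers:
        - name: Restart ssh
          ansible.builtin.service:
            name: ssh
            state: restarted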
(I still use Puppet for configuration files, installing packages, etc. It just does some stuff better than Ansible. Still, Puppet is harder to learn, and Ansible can be more than enough. Plus, there’s stuff that Ansible can do that Puppet can’t do.)
Dotfiles are a completely separate problem, tackle them separately. Don’t use Ansible for that, use a dotfile-specific tool.