

I used to use 2FAS, but recently switched to a self-hosted instance of Ente
If you’re actually using the waste heat from a PC, does that mean it’s basically 100% energy efficient?
It is exactly as efficient as an electric resistance heater, yes, but resistive heating is one of the least efficient ways to heat a home: you get one unit of heat per unit of electricity, while a heat pump moves three to four units of heat into the house for the same electricity.
Got a friend or family member willing to let you drop a miniPC at their place?
You could also go the offline route - buy two identical external drive setups, plug one into your machine and make regular backups to it, drop the other one in a drawer in your office at work. Then once a month or so swap them to keep the off-site one fresh.
Also there’s really nothing wrong with cloud storage as long as you encrypt before uploading so they never have access to your data.
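That can be as simple as something like this; gpg and rclone here are just stand-ins for whatever encryption and upload tools you prefer, and gpg will prompt for a passphrase:
# gpg/rclone and the "cloud-remote" name are examples, not a recommendation of specific tools
STAMP=$(date +%F)
tar -czf - ~/important-stuff \
  | gpg --symmetric --cipher-algo AES256 -o "backup-$STAMP.tar.gz.gpg"
rclone copy "backup-$STAMP.tar.gz.gpg" cloud-remote:backups/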
Personally I do both. The off-site offline drive is for full backups of everything because space is cheap, while cloud storage is used for more of a “delta” style backup, just the stuff that changes frequently, because of the price. If the worst were to happen, I’d use the off-site drive to get the bulk of the infrastructure back up and running, and then the latest cloud copy for any recently added/modified files.
The hard links aren’t between the source and backup, they’re between Friday’s backup and Saturday’s backup
If you want a “time travel” feature, your only option is to duplicate data.
Not true. Look at the --link-dest flag. Encryption, sure, rsync can’t do that, but incremental backups work fine and compression is better handled at the filesystem level anyway IMO.
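A minimal sketch of what that looks like (paths and the daily naming scheme are just examples):
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
# Unchanged files in today's snapshot are hard-linked into yesterday's, so every
# dated directory is a full, browsable snapshot but only changed files use new space
rsync -a --delete \
  --link-dest="/mnt/backup/$YESTERDAY" \
  /home/me/ "/mnt/backup/$TODAY/"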
There are two ways to maintain a persistent data store for Docker containers: bind mounts and docker-managed volumes.
A Docker managed volume looks like:
datavolume:/data
And then later on in the compose file you’ll have
volumes:
datavolume:
When you start this container, Docker will create this volume for you in /var/lib/docker/volumes/ and will manage access and permissions. They’re a little easier in that Docker handles permissions for you, but they’re also kind of a PITA because now your compose file and your data are split apart in different locations and you have to spend time tracking down where the hell Docker decided to put the volumes for your service, especially when it comes to backups/migration.
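If you do need to track one down, Docker can at least tell you where it put it; compose prefixes the volume name with the project name (the directory name by default), so for an example service it would be something like:
docker volume ls
docker volume inspect --format '{{ .Mountpoint }}' myservice_datavolume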
A bind mount looks like:
./datavolume:/data
When you start this container, if it doesn’t already exist, “datavolume” will be created in the same location as your compose file, and the data will be stored there. This is a little more manual since some containers don’t set up permissions properly and, once the volume is created, you may have to shut down the container and then chown the volume so it can use it, but once up and running it makes things much more convenient, since now all of the data needed by that service is in a directory right next to the compose file (or wherever you decide to put it, since bind mounts let you put the directory anywhere you like).
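For reference, that dance usually looks something like this; the UID/GID is whatever user the container runs as (check the image docs), 1000:1000 is just an example:
docker compose up -d      # first start creates ./datavolume, but the app may hit permission errors
docker compose down
sudo chown -R 1000:1000 ./datavolume
docker compose up -d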
Also with Docker-managed volumes, you have to be VERY careful running your docker prune commands, since if you run “docker system prune --volumes” and you have any stopped containers, Docker will wipe out all of the persistent data for them. That’s not an issue with bind mounts.
Docker is far cleaner than native installs once you get used to it. Yes native installs are nice at first, but they aren’t portable, and unless the software is built specifically for the distro you’re running you will very quickly run into dependency hell trying to set up your system to support multiple services that all want different versions of libraries. Plus what if you want or need to move a service to another system, or restore a single service from a backup? Reinstalling a service from scratch and migrating over the libraries and config files in all of their separate locations can be a PITA.
It’s pretty much a requirement to start spinning up separate VMs for each service to get them to not interfere with each other and to allow backup and migration to other hosts, and managing 50 different VMs is much more involved and resource-intensive than managing 50 different containers on one machine.
Also you said that native installs just need an apt update && apt upgrade, but that’s not true. For services that are in your package manager’s repos, sure, but most services don’t have pre-built packages for every distro. For the vast majority, you have to git clone the source, then build from scratch and install. Updating those services is not a simple apt update && apt upgrade: you have to cd into the repo, git pull, then recompile and reinstall, and pray to god that the dependencies haven’t changed.
docker compose pull/up/down is pretty much all you need; wrap it in a small shell script and you can bring up/down or update every service with a single command. Also, if you use bind mounts and place them in the directory for the service alongside the compose file, your entire service is self-contained in one directory. To back it up you just “docker compose down”, rsync the directory to the backup location, then “docker compose up”. To restore you do the exact same thing, just reverse the direction of the rsync. To move a service to a different host, you do the exact same thing, except the rsync and “docker compose up” are now run on another system.
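Concretely, backing up or moving one service looks something like this (paths and hostnames are examples):
cd /srv/myservice                     # compose.yaml and the bind-mounted data live here
docker compose down
rsync -a --delete ./ /mnt/backup/myservice/
docker compose up -d
# Moving it to another host is the same idea: down, rsync to the new box, up over there
rsync -a --delete /srv/myservice/ otherhost:/srv/myservice/
ssh otherhost 'cd /srv/myservice && docker compose up -d'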
Docker lets you pack an entire service, with all of its dependencies, databases, config files, and data, into a single directory that can be backed up and/or moved to any other system with nothing more than a “down”, “copy”, and “up”, with zero interference with other services running on your system.
I have 158 containers running on my systems at home. With some wrapper scripts, management is trivial. The thought of trying to manage native installs on over a hundred individual VMs is frightening. The thought of trying to manage this setup with native installs on one machine, if that was even possible, is even more frightening.
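The wrapper scripts really are trivial; assuming one directory per service, the “update everything” one is basically:
#!/usr/bin/env bash
# Pull new images and restart every service; the /srv/*/ layout is an example
for dir in /srv/*/; do
  if [ -f "$dir/compose.yaml" ] || [ -f "$dir/docker-compose.yml" ]; then
    echo "Updating $dir"
    docker compose --project-directory "$dir" pull
    docker compose --project-directory "$dir" up -d
  fi
done
docker image prune -f                 # clean out the superseded images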
Pretty much guaranteed you’ll spend an order of magnitude more time (or more) doing that than just auto-updating and fixing things on the rare occasion that they break. If you have a service that likes to throw out breaking changes on a regular basis, it might make sense to read the release notes and manually update that one, but not everything.
Many of these are hostile takeovers, which means they use their money to buy a majority of the company’s shares and then replace the board. I don’t know if the Toys R Us sale was one of those though.
And they’re not saying it is fraud. Just that it should count as fraud.
I haven’t tried it, but my understanding is it’s still somewhat of a beta feature
A lot of it depends on your distro. I use exclusively Mint and Debian (primarily Debian), and everything works fine on both of those. My laptop runs Debian 13 and has the iGPU and an RTX 4070, and one of my servers has both an RTX A6000 and a T400, both being passed through Proxmox into two different Debian 13 VMs. Everything works without issue. Before Debian 13 on the laptop I had Mint 22, and before that Ubuntu 23.10, and both worked without issue as well. The laptop before this one had the iGPU and a GTX 1060 I believe, and it ran Mint 18, then 19, then 20, then 21, all without any problems either.
That’s how that user types all of their posts. It’s really fucking annoying. They get called out on it a lot, downvoted for it, and just keep doing it for some reason.
It’s got dual graphics cards, with the dedicated one being Nvidia. I’ve heard that they are finicky with Linux…
Not really. I’ve been using Nvidia cards on Linux for decades, and the complaints are blown way, way out of proportion. Just install the proprietary drivers from the distro’s repos, and 99% of the time that’s all that’s needed. The people who complain usually screwed something up, like installing drivers from the wrong source or not installing the meta package for their kernel headers so the drivers can’t rebuild on kernel updates. There’s a lot of bad advice floating around on forums and blogs, so just follow the official instructions for your distro and that should be all you have to do.
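On Debian, for example, it’s roughly this (assuming contrib/non-free/non-free-firmware are enabled in your sources):
sudo apt update
sudo apt install nvidia-detect
nvidia-detect                          # tells you which driver package fits your card
# linux-headers-amd64 is the headers meta package so the module rebuilds on kernel updates
sudo apt install linux-headers-amd64 nvidia-driver firmware-misc-nonfree
sudo reboot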
It’s literally one checkbox in the settings to shut those external media sources off
Either a lifetime pass, or you actually configured local access correctly instead of botching it (or ignoring it entirely) and then coming to Lemmy to complain.
The issue is that GNOME is incredibly opinionated and makes it very difficult, if not impossible, to configure some basic functionality that every other DE has options for. GNOME will work for you if you want a DE that works exactly the way stock GNOME works, but as soon as you want to change anything, you run into a brick wall. Nearly any other Linux DE can be configured to look and work similar to GNOME, but GNOME can’t be configured to work like anything other than the vanilla GNOME the devs insist you must use. It’s the antithesis of the Linux ethos IMO (modularity, reconfigurability, config-file-driven) and acts more like a MacOS skin.
In theory it could be useful to be notified when it’s done if you’re out of earshot of the washing machine.
You can do that without a smart washer/dryer, if you want, by looking at the power draw. My washer and dryer don’t have any network connectivity, but I still get push notifications on my phone when a cycle finishes, from a Python script that monitors the power draw on each circuit in my home via an IoTaWatt and InfluxDB.
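Mine is a Python script, but the logic is simple enough to sketch in shell; the database and measurement names depend on how your IoTaWatt uploads to InfluxDB (this assumes the 1.x CLI), and ntfy.sh is just a stand-in for whatever push service you use:
#!/usr/bin/env bash
# Crude cycle detector: above the threshold means running, notify once it drops back to idle.
# The 10 W threshold, database, and measurement names are examples.
THRESHOLD=10
RUNNING=0
while sleep 60; do
  WATTS=$(influx -database iotawatt -format csv \
          -execute "SELECT last(value) FROM washer" | tail -n 1 | cut -d, -f3)
  if (( $(printf '%.0f' "$WATTS") > THRESHOLD )); then
    RUNNING=1
  elif (( RUNNING == 1 )); then
    curl -d "Washer cycle finished" https://ntfy.sh/my-laundry-topic
    RUNNING=0
  fi
done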
The time on the display rarely matches the time the machine actually takes to complete in my experience, especially for dryers.
Marketing absolutely works on nerds, what a ridiculous statement. Just because certain types of marketing will push us away doesn’t mean all marketing is pointless. Be honest, let me know what your product does, give me a proper datasheet and a price, and I’ll explore it. Try to shove some hyperbolic BS down my throat while hiding the things I actually care about and I’ll never buy from your company.
The legend is also just a handful of numbers. No description, no units. This has to be the worst heat map I’ve ever seen, I don’t even know if blue is good or bad.