  • Others have mentioned SFF desktops.

    My current server is an old Dell Optiplex SFF desktop. It idles at just under 20W and peaks at 80W. It currently has an NVMe boot drive and an 8TB 3.5" drive.

    Runs like a champ: it easily serves Jellyfin video, with transcoding, while converting videos with HandBrake (and with 2 other systems converting videos off that drive over the network).

    For cost, internal space, options, and power, it’s hard to beat an SFF. If you don’t need internal space or conversion power, then a NUC can work (the lack of sufficient cooling limits its transcoding capabilities).

  • I sync hundreds of gigs (if not terabytes at this point) using Syncthing, with errors on only one machine (it’s running on 6 devices, including a VM). And those errors were of my own doing, not random Syncthing errors.

    It’s surprisingly robust these days, especially for single-user notes.

    I have an indexing job that runs on my server every 30 minutes and saves the results into text files (it indexes my media folder, which is about 3TB of movies and TV shows).

    Those text files sync to my phone when they’ve changed (so every 30 minutes). They’re always up to date when I open them.
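    In sketch form, that job could be a short Python script fired by cron or a systemd timer. Everything below (paths, extensions, the demo files) is made up for illustration, not my actual setup; the sketch builds its own throwaway media folder so it can run anywhere:

```python
import pathlib
import tempfile

# Stand-in for the real media folder -- throwaway demo data only.
media = pathlib.Path(tempfile.mkdtemp())
for name in ("movie.mkv", "show.mp4", "notes.txt"):
    (media / name).touch()

# Index only video files, sorted for stable diffs between runs.
exts = {".mkv", ".mp4"}
index = sorted(str(p) for p in media.rglob("*") if p.suffix in exts)

# Write to a temp file, then rename into place, so Syncthing never
# picks up a half-written index.
out = pathlib.Path("media-index.txt")
tmp = out.with_suffix(".tmp")
tmp.write_text("\n".join(index) + "\n")
tmp.replace(out)
```

    The atomic rename matters: Syncthing watches the folder, and renaming is the cheap way to guarantee it only ever sees a complete file.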

    My phone also has jobs to continually sync my photos to home, an ad-hoc folder to my laptop, and about 25 other folder pairs (including NeoBackup) that sync under different conditions, without fail.

    I’m currently testing Cherrytree using Sourcherry on Android and it seems to work fine as a single-user solution with Syncthing.



  • Others have clarified, but I’d like to add that security isn’t one thing - it’s done in layers so each layer protects from potential failures in another layer.

    This is called the Swiss Cheese Model of risk mitigation.

    If you take a bunch of random slices of Swiss cheese and stack them up, how likely is it that a single hole goes through every layer?

    Using more layers reduces the risk of “hole alignment”.
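    A toy calculation shows why. Assume, purely for illustration, that each layer independently has a 10% chance of an exploitable “hole” (both the number and the independence are invented; real layers are correlated):

```python
# Invented number: 10% chance per layer of an exploitable hole.
# Real layers aren't fully independent, so this is only directional.
p = 0.10
for layers in (1, 2, 3, 5):
    # Probability that a hole lines up through *every* layer.
    print(f"{layers} layer(s): {p ** layers:.5%} chance of full alignment")
```

    Each added layer multiplies the attacker’s required luck, which is the whole point of the stacking.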

    Here’s an example model:

    Start with a router that has no open ports, and a mesh VPN (WireGuard/Tailscale) to access the different services.

    That VPN should have rules allowing connections only to specific ports on specific hosts.
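    With Tailscale, for example, that kind of rule lives in the ACL policy (HuJSON, so comments are allowed). The user and addresses below are invented placeholders:

```json
{
  // Allow only Jellyfin (8096) on the media host and SSH (22) on the
  // admin host. Once an "acls" section exists, everything not listed
  // is denied by default.
  "acls": [
    {"action": "accept", "src": ["me@example.com"], "dst": ["192.168.50.10:8096"]},
    {"action": "accept", "src": ["me@example.com"], "dst": ["192.168.50.5:22"]}
  ]
}
```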

    Hosts are on an isolated network (could be VLANS), with only specific ports permitted into the VLAN via the VPN (service dependent).
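    At the VLAN boundary, that could translate into an nftables ruleset roughly like this (a sketch only; subnets, addresses, and ports are invented):

```
# Hypothetical /etc/nftables.conf fragment on the VLAN gateway
table inet vlan_filter {
  chain forward {
    type filter hook forward priority 0; policy drop;

    # VPN subnet may reach Jellyfin on the media host only...
    ip saddr 100.64.0.0/10 ip daddr 192.168.50.10 tcp dport 8096 accept

    # ...and SSH on the admin host only
    ip saddr 100.64.0.0/10 ip daddr 192.168.50.5 tcp dport 22 accept
  }
}
```

    Note the default-drop policy: anything not explicitly permitted into the VLAN is refused, which is the layer doing the work here.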

    Each service and host should use unique names for admin/root, with complex passwords, and preferably 2FA (or in the case of SSH, certs).

    Admin/root access should be limited to local devices, and if you want to get really restrictive, specific devices.
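    For SSH specifically, those last two layers might look like this sshd_config sketch (the CA path, management subnet, and user name are examples, not a prescription):

```
# No root logins, no passwords; keys/certs only
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

# Trust only user certificates signed by our own CA
TrustedUserCAKeys /etc/ssh/user_ca.pub

# The admin account may only connect from the management subnet.
# (Careful: AllowUsers makes this the *only* permitted login.)
AllowUsers ops-admin@192.168.60.*
```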

    In the enterprise, it’s not unusual to have an admin password management system where you have to request an admin password for a specific system and a specific period of time (the password is delivered via a secure mechanism, sometimes in person). The request is logged, and when the requested time frame expires the password is changed.

    Everyone’s risk model and Swiss cheese layering will fall somewhere on this scale.


  • About 5 years ago I opened a port to run a test.

    Within hours it was getting hammered (probably by scripts) trying to figure out what that port was forwarded to, and trying to connect.

    I closed the port about a week later, but not before that poor consumer router was overwhelmed by the hits.

    Even after it was closed, I’d still get hammered with occasional scans for the next 2 years.

    There are tools out there continually scanning for open ports; the results probably get added to a database, and hackers, script kiddies, whoever, will try to get in.

    What’s interesting is that I did the same thing around 2000 with a DSL connection (which was very much a static address) and it wasn’t an issue, even though there were fewer always-on consumer connections back then.