I currently have a home server that I use a lot and that holds a few important things, so I kindly ask for help making this setup safer.

I have an OpenWrt router on my home network with the firewall active. The only open ports are 443 (for all my services) and 853 (for DoT).

I am behind NAT, but I have IPv6, so I use a domain that points to my IPv6 address, which is how I access my server when I am not on the LAN and how I share stuff with friends.

On port 443 I have nginx acting as a reverse proxy for all my services, and on port 853 I have AdGuard Home. I use a Let's Encrypt certificate with this proxy.

nginx, AdGuard Home, and almost all of my services run in containers. I use rootless Podman, with pasta as the network driver; no container has `--net host`, although the containers can reach host services because they have `--map-guest-addr` set, so I don't know if this is any safer than `--net host`.
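For reference, the launch command for one of them looks something like this (the image tag and address are illustrative, and `--map-guest-addr` needs a fairly recent passt/Podman):

```
# rootless podman with pasta networking; no --net host
# --map-guest-addr makes the host reachable at the given address inside the container
podman run -d --name baikal \
  --network pasta:--map-guest-addr,169.254.1.2 \
  -p 8080:80 \
  docker.io/ckulka/baikal:nginx
```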

I have two means of accessing the server via SSH, either password+2FA or an SSH key, but the SSH port is LAN-only, so I believe this is fine.

My main concern is this: I have a lot of personal data on this server. Some of it I access only locally, such as family photos and docs (these are literally not accessible over the WAN, and I wouldn't want them to be), and some less critical things are indeed accessible externally, such as my calendars and tasks (using CalDAV and Baikal), for example.

I run daily encrypted backups to OneDrive using restic + Backrest, so if the server were to die I believe I would be fine. But I wouldn't want anyone to actually get access to that data, although I believe an invader would more likely than not be more interested in running cryptominers or something like that.

I am not concerned about DoS attacks, because I don't think I am a worthy target, and even if one were to happen, I can wait a few hours to turn the server back on.

I have heard a lot about WireGuard, but I don't really understand how it adds security; I would basically just be changing which ports I open. Or am I missing something?

So I was hoping we could talk about ways to improve my server's security.

  • bokherif@lemmy.world · 4 days ago

    Start with the basics:

    • Harden SSH by allowing only public-key authentication, and use strong keys to authenticate instead of passwords.
    • Set up fail2ban (lots of online resources; check the Linode guides) to temporarily block malicious IPs. (A rough sketch of both of these follows this list.)
    • If the data you store is something only you should see, then it should never be connected to the internet; air-gap wherever possible.
    • And finally, keep your shit updated.
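    A minimal sketch of the first two points (paths assume a Debian-ish box with a reasonably recent OpenSSH):

    ```
    # /etc/ssh/sshd_config.d/10-hardening.conf: key-only logins
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin no

    # /etc/fail2ban/jail.local: temp-ban IPs that keep failing SSH auth
    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h
    ```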
    • Skydancer@pawb.social · 4 days ago

      To be even more explicit on the last point: that means regularly updating OpenWrt and all your containers, not just the server's base OS.

  • satanmat@lemmy.world · 4 days ago

    The single best thing you can do, security-wise, is to NOT have any personal data on a web-facing server.

    Separate the data

    Rereading, it does look like you are doing things right, so just audit what is on the public side. Your calendar and tasks: cool.

    Your photos and docs: do those need to be on there?

    You say they are not accessible on the WAN, but if they live on a server that is publicly accessible, please move them to a different location.

    Otherwise it sounds like you're doing well.

    • miau@lemmy.sdf.org (OP) · 4 days ago

      That was a great answer, thank you so much!

      Yes, I didn't even notice that the family photos and docs don't need to be on that same server. Initially I just put them there to act as a local file share. But you are absolutely right: moving them off the public server is the best thing I can do to protect them.

      I will look into setting up a second server for the private stuff that is not publicly accessible.

      • Lyricism6055@lemmy.world · 4 days ago

        If this server is publicly accessible and gets pwned, they can use it as a jump box for your internal devices.

        • miau@lemmy.sdf.org (OP) · 4 days ago

          That's a good point; I hadn't thought about it before. I like being able to share these files on my intranet, but I suppose you are right. Maybe I could use OpenWrt to split out two networks, one for public stuff only, but my knowledge of networking is quite limited.

          • Lyricism6055@lemmy.world · 4 days ago

            Yeah, what you're talking about is a DMZ. It still won't help a ton if you don't have strict firewall controls inside your network too.

            I just use WireGuard with firewall rules that restrict traffic to just my server (the one with my Docker containers on it) and my DNS.
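            On OpenWrt that restriction can look something like this (zone name and IPs are hypothetical):

            ```
            # /etc/config/firewall: give WireGuard its own zone,
            # then only allow traffic to the one server and the DNS box
            config zone
                    option name 'wg'
                    option input 'REJECT'
                    option output 'ACCEPT'
                    option forward 'REJECT'
                    list network 'wg0'

            config rule
                    option name 'wg-to-server'
                    option src 'wg'
                    option dest 'lan'
                    option dest_ip '192.168.1.10'
                    option proto 'tcpudp'
                    option target 'ACCEPT'

            config rule
                    option name 'wg-to-dns'
                    option src 'wg'
                    option dest 'lan'
                    option dest_ip '192.168.1.53'
                    option dest_port '53'
                    option proto 'tcpudp'
                    option target 'ACCEPT'
            ```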

    • sugar_in_your_tea@sh.itjust.works · 4 days ago

      Your photos and docs

      At least in my case, it’s really handy to share photos with other family members. But certainly you don’t need all of them available on the same public service.

      • miau@lemmy.sdf.org (OP) · 4 days ago

        That's a good point. Maybe I can get away with just temporary file sharing: when someone wants something, I upload it to the server and send a link. I bet even Nextcloud could do that.

        Still way less scary than having everything on the server all the time.

  • root@lemmy.world · 4 days ago

    Is keeping everything inside a local “walled garden”, then exposing the minimum number of services needed through a WireGuard VPN, not sufficient?

    There would be no attack surface from the WAN other than the port opened for WireGuard.
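    For the curious, a minimal server-side sketch (keys and subnets are placeholders):

    ```
    # /etc/wireguard/wg0.conf on the server
    [Interface]
    Address    = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # a laptop or phone, pinned to a single tunnel IP
    PublicKey  = <client-public-key>
    AllowedIPs = 10.8.0.2/32
    ```

    On the client side, scoping `AllowedIPs` to just the server's addresses keeps only that traffic in the tunnel.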

    • linearchaos@lemmy.world · 3 days ago

      Minimum open services is indeed best practice, but be careful about claiming that the attack surface is limited to open inbound ports.

      Even enterprise gear gets hit every now and then with a vulnerability that can bypass closed-port blocking from the outside. Cisco had some nasty ones where you could DDoS a firewall to the point that the rules engine would let things through. It's rare, but things like that do happen.

      You can also have vulnerabilities in clients/services inside your network. Somebody gets someone in your family to click on something, or someone slips a mickey into one of your container updates, and all of a sudden you have a RAT on the inside. Hell, even baby monitors are a liability these days.

      I wish all the home hardware were better at zero trust. Keeping crap in isolation networks and setting up firewalls between your garden and your clients can be either prudent or overkill depending on your situation. Personally, I think it's best for stuff that touches the web to be allowed only a minimum amount of network access to internal devices. Keep that Plex server isolated from your document store if you can.

  • TedZanzibar@feddit.uk · 4 days ago

    Admittedly I’m paranoid, but I’d be looking to:

    1. Isolate your personal data from any web-facing servers as much as possible. I break my own rule here with Immich, but I also…
    2. Use a Cloudflare tunnel instead of opening ports on your router directly. This keeps your IP address out of public record.
    3. Use Cloudflare's WAF features to limit ingress to trusted countries, at a minimum.
    4. If you can get your head around it, lock things down more with features like Cloudflare device authentication.
    5. Especially if you don't do step 4: integrate CrowdSec into your nginx setup to block probes, known bot IPs, and common attack vectors.

    All of the above is free, but past step 2 it can be difficult to set up. The peace of mind once it is, however, is worth it to me.
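    For step 2, the rough shape of it (hostnames are examples; assumes the cloudflared daemon and a domain managed by Cloudflare):

    ```
    cloudflared tunnel login                 # authorize this machine for your zone
    cloudflared tunnel create home           # creates the tunnel and a credentials file
    cloudflared tunnel route dns home caldav.example.com

    # ~/.cloudflared/config.yml
    # tunnel: <tunnel-id>
    # credentials-file: ~/.cloudflared/<tunnel-id>.json
    # ingress:
    #   - hostname: caldav.example.com
    #     service: https://localhost:443
    #   - service: http_status:404

    cloudflared tunnel run home              # outbound-only; no ports opened on the router
    ```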

    • miau@lemmy.sdf.org (OP) · 4 days ago

      Thanks for your reply!

      Suggestion 1 definitely makes a lot of sense, and I will be doing exactly that ASAP. It's something I didn't think through before, but it would give me much more peace of mind.

      Suggestions 2-4 sound very reasonable; I have indeed searched for a way to self-host a WAF but didn't find much info. My only concern with your points is… Cloudflare. From my understanding that would indeed add a lot of security to the whole setup, but they would then be able to see everything going through my network, is that right?

      • TedZanzibar@feddit.uk · 4 days ago

        Yes and no? It's not quite as black and white as that. Yes, they can technically decrypt anything that's been encrypted with a cert they've issued. But they can't see through any additional encryption layers applied to that traffic (e.g. encrypted password-vault blobs), or see any traffic on your LAN that isn't specifically passing through the tunnel to or from the outside.

        Cloudflare is a massive CDN provider, trusted to do exactly this sort of thing with the private data of equally massive companies, and they’re compliant with GDPR and other such regulations. Ultimately, the likelihood that they give the slightest jot about what passes through your tunnel as an individual user is minute, but whether you’re comfortable with them handling your data is something only you can decide.

        There’s a decent question and answer about the same thing here: https://community.cloudflare.com/t/what-data-does-cloudflare-actually-see/28660

        • miau@lemmy.sdf.org (OP) · 4 days ago

          Yes, absolutely. At work, most of my clients use Cloudflare's various services, so I understand they have credibility.

          For me, though, part of the reason I self-host is to get away from some big tech companies' grasp. But I understand I am a bit extreme at times.

          So thanks for opening my mind, pointing me to that very interesting discussion, and sharing your setup; it does seem very sound security-wise.

    • youmaynotknow@lemmy.ml · 4 days ago

      Sounds exactly like my setup for the last 5 years, minus nginx (don't need it with Cloudflared, since each service is its own Proxmox container and uses its own exclusive tunnel).

    • ShortN0te@lemmy.ml · 4 days ago
      1. Guess what: all IP addresses are known. There is no secret behind them, and the entire IPv4 address space can be scanned for a port within minutes.
      2. So some countries are more dangerous than others? Secure your network and services and keep them up to date; then you do not have to rely on nonsense geoblocking.
      3. Known bots are also not an issue most of the time. They are just bots; they usually target decade-old vulnerabilities and try out default passwords. If you follow my advice in point 2, this is a non-issue.
      • TedZanzibar@feddit.uk · 4 days ago
        1. Sure, but there's no reason to openly advertise that yours has open services behind it.
        2. Absolutely. There are countries that I'm never going to travel to, so why would I need to allow access to my stuff from there? If you think it's nonsense then don't use it; you do you and I'll do me.
        3. See point 2.

        We all need to decide for ourselves what we're comfortable with and what we're not, and then implement appropriate measures to suit. I'm not sure why you're arguing with me over how I set up my own services for my own use.

        • ShortN0te@lemmy.ml · 4 days ago

          Yes, I'll do me and you do you. But advertising those things as security measures, when they don't add any real security, is just snake oil and can result in neglecting real security measures.

          As I said, the whole internet can be port-scanned quickly, so your services will be discovered. What risk do you assume comes from your IP address being known, along with the fact that you host a service on it? The service has the same vulnerabilities whether it is hosted via Cloudflare tunnels or directly via port forwarding on the router. So you assume your router is not secure? Then unplug it, because the server is already connected to that router either way.

          Geoblocking is useless against any real threat actor. You can get access to VPN services or a VPS for very, very little money.

  • just_another_person@lemmy.world · 4 days ago

    WireGuard is a VPN, so that's not going to help you much here unless you're forwarding all your traffic through a remote server, in which case anyone who gets in there will still be able to reach your local machines. It's another hop in the chain, but that's about it.

    If you want to be more on guard about reacting to attacks, or just bad traffic, you probably want something like CrowdSec. You'll at least be able to detect and ban IPs probing your services. If that's too much work, leverage OpenWrt's reporting and some scripting to ban bad actors that probe your firewall and open ports. That's a good first step.
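    A rough sketch of the CrowdSec route, assuming a Debian-ish host with the CrowdSec repo set up and nginx logs in the usual place:

    ```
    sudo apt install crowdsec crowdsec-firewall-bouncer-nftables
    sudo cscli collections install crowdsecurity/nginx   # parsers + scenarios for nginx logs
    sudo systemctl restart crowdsec
    sudo cscli decisions list                            # show currently banned IPs
    ```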

    If you're concerned about the containers, consider using something more secure than dockerd. Rootless Podman with a dedicated service user is a good start. Then maybe look at something more complex: Kata, gVisor, LXC, etc. The goal is sandboxing the containers more to prevent jailbreaks.

    • miau@lemmy.sdf.org (OP) · 4 days ago

      Thanks for the amazing reply, and especially for the explanation regarding WireGuard.

      I didn't know about CrowdSec and Kata Containers; both are amazing projects, and I will definitely look into them and try to set them up.

      Just one quick follow-up question: when you mention a dedicated service user, do you mean it's best to have a separate user for each service, such as one for nginx, one for AdGuard Home, and so on? Currently all of them run under the same user, and I hadn't thought about this possibility before.

      • just_another_person@lemmy.world · 4 days ago

        Yeah. If you're running rootless containers, they aren't run by root, and for added security you don't want them run by your normal user either, because if they get broken into, they'd have access to whatever your user has access to. Just create another user that only runs containers and doesn't have access to your things or to root.
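        Sketched out, with an arbitrary user name:

        ```
        sudo useradd --create-home svc-containers
        sudo loginctl enable-linger svc-containers   # user services keep running after logout
        sudo machinectl shell svc-containers@        # proper login session, which rootless podman wants
        podman run -d --name adguardhome -p 8053:53/udp docker.io/adguard/adguardhome
        ```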

        • miau@lemmy.sdf.org (OP) · 4 days ago

          That makes a lot of sense, and it's also very easy to set up, so I will do it tonight.

          Thanks again for your amazing input!

  • Evotech@lemmy.world · 3 days ago

    Unless you are a diehard FOSS person or whatever, I'd recommend just using reverse tunneling and leveraging Cloudflare's infrastructure for access and authentication.

    It's crazy how much stuff they give away for free.

  • bear_cube@sh.itjust.works · 4 days ago

    It's great that you self-host, but security, especially for services directly exposed to the internet, is very difficult. Use some kind of direct VPN or a service like Tailscale.

  • filister@lemmy.world · 4 days ago

    Why don't you use something like Tailscale? Other than that, using non-standard ports greatly reduces the risk of getting compromised. The majority of attacks come from port scanners scanning for default ports and trying known vulnerabilities.

  • Possibly linux@lemmy.zip · 3 days ago

    They aren't going to go after your data. They will take over your machine and use it for their own purposes. This happens in an automated way, and they can build botnets made of thousands of devices.

    I would strongly suggest not opening any ports. Instead, use a mesh VPN like Tailscale or Netbird. You could even access it over the dark web via I2P or Tor.

  • slug@lemmy.world · 4 days ago

    Does anyone have an actual horror story about anything happening via an exposed web service? Let's set aside SSH.

    • Possibly linux@lemmy.zip · 4 days ago (edited)

      Counter question

      How would you know something went wrong? Do you monitor all the logs? Do you have alerting?

      What happens if one service has a serious vulnerability and is compromised? Would an adversary be able to move laterally? For that matter, are you scanning/checking for vulnerabilities? Do you monitor a security tracker?

      All of these are things to consider

    • linearchaos@lemmy.world · 4 days ago

      Yeah, a company got toasted because one of their admins was running Plex and had Tautulli installed and open to the outside, figuring it was read-only and safe.

      A zero-day bug in that exposed his Plex token. They then used another vulnerability in Plex to get remote code execution. He was self-hosting a GitHub copy of all the company's code.

      • mint_tamas@lemmy.world · 4 days ago

        This guy was running a three-year-old version of Plex with a known (and later fixed) RCE, and was working for LastPass.

    • miau@lemmy.sdf.org (OP) · 4 days ago

      I'd like to know as well. I definitely don't want to be the first person in that story, though.

      I've heard of someone who exposed the Docker management port to the internet and woke up to malware running on their server. But that's of course not the same as web services.

      • Possibly linux@lemmy.zip · 4 days ago

        Once a server is compromised there are lots of uses for it, everything from DDoS attacks to obscuring attacks against other targets. An attacker doesn't want to be discovered, so they will likely hide as much as they can.

  • Solar Bear@slrpnk.net · 4 days ago

    Something you might want to look into is mTLS, or client-certificate authentication, on any external-facing services that aren't intended for anybody but yourself or close friends/family. Basically, it means nobody can even connect to your server without a certificate that was pre-generated by you. On the server end you just create the certificate, and on the client end you install it on the device and select it when asked.

    The viability of this depends on what applications you use, as support for it must be implemented by their developers. For anything accessed only via web browser, it's perfect: all web browsers (except Firefox on mobile…) can handle mTLS certs. Lots of Android apps also support it. I use it for Nextcloud on Android (so the Files, Tasks, Notes, Photos, RSS, and DAVx5 apps all work) and support works across the board there. It also works with the Home Assistant and Gotify apps, and it looks like Immich does indeed support it too. In my configuration I only require it on external connections, by having 443 on the router forwarded to 444 on the server, so I can apply different settings easily without having to do any filtering.

    As far as security and privacy go, mTLS is virtually impenetrable so long as you protect the certificate and configure the proxy correctly, and it is similar in concept to using WireGuard. Nearly everything I publicly expose is protected via mTLS, with very rare exceptions like Navidrome (due to lack of support in Subsonic clients) and a couple of other things that I actually want to be universally reachable.
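    For the interested, a condensed sketch of the moving parts (names and paths are hypothetical):

    ```
    # one-time: a private CA plus one client cert
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
      -keyout client-ca.key -out client-ca.crt -subj "/CN=my-home-ca"
    openssl req -newkey rsa:4096 -nodes -keyout alice.key -out alice.csr -subj "/CN=alice"
    openssl x509 -req -in alice.csr -CA client-ca.crt -CAkey client-ca.key \
      -CAcreateserial -days 825 -out alice.crt
    openssl pkcs12 -export -inkey alice.key -in alice.crt -out alice.p12   # install this on devices

    # nginx: require a client cert on the external listener
    # server {
    #     listen 444 ssl;
    #     ssl_client_certificate /etc/nginx/client-ca.crt;
    #     ssl_verify_client on;
    #     ...
    # }
    ```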

    • miau@lemmy.sdf.org (OP) · 4 days ago

      Wow, that's very, very nice. I didn't know this even existed.

      But I suppose if it had widespread support it would be the perfect solution.

      Firefox mobile not supporting it might be a dealbreaker though, since it's the browser I use and the one I persuaded all my friends and family to switch to…

      But this is an incredibly interesting technology, and I will surely look into implementing it at least partially if that works.

      Thanks a lot for sharing!

  • caseyweederman@lemmy.ca · 4 days ago

    Just do what I do and consistently forget to set up DDNS and also be bad at noticing when your ISP juggles your IP address.

    • miau@lemmy.sdf.org (OP) · 4 days ago

      Been there, done that, lol. My ISP doesn't change my IP half as much as I should like, and I renew my certs half as often as they deserve.

      Seriously though, I had certs expire twice before I finally decided to set this up properly.

  • Lyricism6055@lemmy.world · 4 days ago

    Just close 443 and use a VPN, with ACME DNS challenges for your certs. That'll help make it even more secure. Nothing is foolproof, though, and a VPN is a good first step.
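    For reference, a DNS-01 issuance sketch with certbot (the domain is an example); no inbound port is ever needed for it:

    ```
    # proves domain ownership via a TXT record instead of an inbound HTTP request
    sudo certbot certonly --manual --preferred-challenges dns -d home.example.com
    # certbot prints a token to publish at _acme-challenge.home.example.com,
    # then issues the cert once the record resolves
    ```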

    • miau@lemmy.sdf.org (OP) · 4 days ago

      Thanks for replying!

      I do use DNS challenges for renewing my certs, but I use port 443 for application data, not for certs.

      Is a VPN always safer than a reverse proxy? Do you use WireGuard, or do you have any other options worth looking into?

      • sugar_in_your_tea@sh.itjust.works · 4 days ago (edited)

        Is a VPN always safer than a reverse proxy?

        Depends on what you trust, I guess.

        A reverse proxy on a standard port is a bigger target for automated scripts than a reverse proxy on a non-standard port. A VPN runs through the VPN's authentication, whereas a reverse proxy relies on whatever that app's authentication is. So whether it's secure enough depends on the VPN configuration, what you're hosting, etc.

        I’m behind CGNAT, so I have limitations you don’t, but here’s my setup:

        • a VPS at the edge for my public services - basically the same as a reverse proxy, since the application is directly exposed
        • a self-hosted VPN on the VPS to facilitate the reverse proxy - I could shut down public access at any time and just log in with the VPN
        • static DNS entries on my router so I can use my domains inside my network (TLS also works properly; sketch below)

        I like this approach because I can eat my cake (nice domain names instead of IPs and ports) and have it too (fast connections inside the LAN, and I can disable the reverse proxy if I want better security). You could get the same without the VPS, and if you require WireGuard VPN access outside the LAN, you get better security than a public-facing service.
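        The static-DNS part, as an OpenWrt sketch (domain and IP are hypothetical):

        ```
        # resolve the public domain to the LAN IP for LAN clients,
        # so local traffic skips the WAN round trip and TLS names still match
        uci add_list dhcp.@dnsmasq[0].address='/cloud.example.com/192.168.1.10'
        uci commit dhcp && /etc/init.d/dnsmasq restart
        ```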

        • miau@lemmy.sdf.org (OP) · 4 days ago

          I didn't mention it in my original post, but I do have a virtual machine on GCP, which I use to run MongoDB. I didn't mention it because I am not too concerned with it: it mostly follows the same practices, with the exception that SSH is open and it holds no private data.

          But I suppose I could do something similar to what you mentioned. The idea of having my cake and eating it too is very nice, and if something went wrong I could turn off public access and still have the VPN working.

          I will consider implementing something like that as well. Thanks a lot for sharing your thoughts!

      • Lyricism6055@lemmy.world · 4 days ago (edited)

        I still use a reverse proxy, but to get into my network you need to be on the VPN. It's more secure for me, I guess.

        I use Traefik forward auth, even inside my network on the VPN, as an extra layer of security for some apps.

        My opinion is that port 443 getting accidentally misconfigured by me is just too likely a scenario. With WireGuard on my router, I am also able to restrict my devices' traffic to ONLY my web server and DNS servers.

        So I guess that's another positive of WireGuard: you can use your own DNS servers on all your phones all the time, and always have ad blocking with Pi-hole or something similar, even on mobile.

        By using the VPN I don't have to worry about accidentally exposing a website through a copy-paste error or something in my reverse proxy. I can also easily restrict who has access to my VPN, and set routing rules on my router per device or subnet (people who aren't in my family get a separate subnet with stricter firewall rules).

  • Blue_Morpho@lemmy.world · 4 days ago

    You might want to consider that backups only protect very old data from ransomware.

    Ransomware works by getting onto a machine and sitting for several months before activating. During that time, your data is encrypted but you don't know because when you open a file, your computer decrypts it and shows you what you expect to see. So your backups are working, but they are saving files that will be lost once the ransomware activates.

    The only solution is to frequently and manually verify the backup from a known-safe computer. Years ago I looked for something to automate this but didn't find it. (Something like a Raspberry Pi with no internet access that can only see the PC it's testing, compares a known file, then touches the file so it gets backed up again.)
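    A rough sketch of that canary idea (host and paths are hypothetical), run from the isolated checker:

    ```
    #!/bin/sh
    # fetch the canary, compare it against a hash recorded when it was known-good,
    # then bump its mtime so it lands in the next backup run
    # /root/canary.sha256 holds one line: "<sha256 hash>  /tmp/canary.txt"
    scp backup@server:/data/canary.txt /tmp/canary.txt || exit 1
    sha256sum -c /root/canary.sha256 || { echo "ALERT: canary changed unexpectedly"; exit 1; }
    ssh backup@server touch /data/canary.txt
    ```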

    • miau@lemmy.sdf.org (OP) · 4 days ago

      Thanks a lot for your input. I honestly had not considered this possibility.

      Others in the post recommended removing those important files from the public-facing server, so that in the case of an attack they wouldn't be exposed. I will try to follow this recommendation ASAP.

      But your answer still applies to everything else I will be hosting, so I am concerned; I had no idea ransomware was this smart. I will research this topic more, but basically, if I access a file from two different servers and it's fine, does that mean the file is free from infection?

    • ShortN0te@lemmy.ml · 4 days ago

      During that time, your data is encrypted but you don’t know because when you open a file, your computer decrypts it and shows you what you expect to see.

      First time I hear of that. Are you sure? It would be really risky, since you would basically need to hijack the complete filesystem communication to do that. For that to work you would also need the encryption's private and public keys on the system at runtime. Really risky, and unlikely that this is the case, IMHO.

      • miau@lemmy.sdf.org (OP) · 4 days ago

        I don't know much about ransomware, but that's what got me concerned. I always assumed that if I were infected, restic would just create a new snapshot of the files and I'd be able to restore after nuking the server.

        • ShortN0te@lemmy.ml · 4 days ago (edited)

          I doubt that this is the case, whether it is encrypted or not. The complexity and risks involved in decrypting it on the fly make it really unrealistic, and it's unheard of by me (I have not heard of everything, but still).

          Also, the ransomware would need to differentiate between the user and the backup program. And when you do incremental backups (like restic does) with some monitoring, you would also notice the huge amount of new data being pushed to your repo.

          Edit: The important thing about your backup is to protect it against overwrites and deletes, and to use separate admin credentials that are not managed by the AD or LDAP of the server being backed up.

          • miau@lemmy.sdf.org (OP) · 4 days ago

            I see, I appreciate you sharing your knowledge on the matter.

            Yeah, I thought about the spike in size, which I would definitely notice, because the amount of data is pretty stable and I have limited cloud storage.

            Regarding your last point: I currently have everything under one user account; the data I am backing up, the applications, and restic itself all run on the same account. Would it be a good idea to run restic as root? Or as a different service account?

            • schizo@forum.uncomfortable.business · 3 days ago

              good idea to run restic as root

              As a general rule, run absolutely nothing as root unless there's absolutely no other way to do what you're trying to do. And, frankly, there are maybe a dozen things that must be root, at most.

              One of the biggest hardening things you can do for yourself is to always, always run everything as the lowest privilege level you can to accomplish what you need.

              If all your data is owned by a user, run the backup tool as that user.

              If it's owned by several non-privileged users, then you want to make sure the group permissions let you access it.

              As a related note, this also applies to containers and software you're running: you shouldn't run Docker containers as root unless they specifically MUST have a permission that only root has, and I personally don't run internet-facing ones as the same user as all the others. If something gets popped, they not only don't have root permissions, they're also siloed away from the other containers' data in the event of a container escape.

              My expectation is that, at some point, I'll miss a CVE and get pwnt, so the goal is to reduce how much damage someone can do when that happens, rather than assume I'm going to be able to keep it from happening at all. Everything is focused on "once this is compromised, how can I make the compromise useless to the attacker?"

            • ShortN0te@lemmy.ml · 4 days ago

              You want your backup functional even if the system is compromised, so yes, another system is required for that, or push it to the cloud. It is important that you do not allow deleting or editing of the backup even if the credentials used for backing up are compromised: basically, append-only storage.

              Most cloud storage, like Amazon S3 (or S3-compatible providers like Backblaze), offers such a setting.
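              With restic specifically, two hedged ways to get that (bucket and host names are examples):

              ```
              # option A: restic's own rest-server, run append-only on a separate box
              rest-server --path /srv/restic-repo --append-only

              # option B: S3-compatible storage; pair the bucket with object lock /
              # versioning so the backup credentials cannot destroy old snapshots
              # (assumes AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are set)
              restic -r s3:s3.us-west-000.backblazeb2.com/my-restic-bucket backup /data
              ```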

              • miau@lemmy.sdf.org (OP) · 4 days ago

                Oh, now I get what you mean. Thanks for the explanation!

                Yeah, it makes sense. I had originally gone with OneDrive for the much cheaper price, but I will take a look at S3-compatible storage and consider migrating in the future.

    • Possibly linux@lemmy.zip · 4 days ago

      Ransomware is unlikely for an individual, as there isn't a lot of payout. Not impossible, but unlikely.

      More likely, your computer will be used for other things.