Docker docs:

Docker routes container traffic in the nat table, which means that packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively bypassing your firewall configuration.
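This is visible directly on any Docker host. The listing below is an illustrative sketch (addresses and ports are example values, not from this thread): a published port shows up as a DNAT rule in the nat table's DOCKER chain, evaluated during PREROUTING, before ufw's INPUT chain ever sees the packet.

```shell
# Show Docker's NAT rules; a published port appears as a DNAT entry here.
sudo iptables -t nat -L DOCKER -n
# Chain DOCKER (2 references)
# target  prot opt source     destination
# RETURN  all  --  0.0.0.0/0  0.0.0.0/0
# DNAT    tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80 to:172.17.0.2:80
```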

  • skuzz@discuss.tchncs.de · 11 hours ago

    For all the raving about podman, it’s dumb too. I’ve seen multiple container networks stupidly route traffic across each other when they shouldn’t. Yay services kept running, but it defeats the purpose. Networking should be so hard that it doesn’t work unless it is configured correctly.

  • ohshit604@sh.itjust.works · 13 hours ago

    This post inspired me to try podman. After it pulled all the images it needed, my Proxmox VM died, and the VM won’t boot because the disk is now full. It’s currently 10pm; tonight’s going to suck.

      • ohshit604@sh.itjust.works · 12 hours ago (edited)

        Okay, so I’ve done some digging and got my VM to boot up! This is not Podman’s fault; I got lazy setting up Proxmox and never really learned LVM volume storage. While internally the VM shows 90 GB used of 325 GB, Proxmox claims 377 GB is used on the LVM-Thin partition.

        I’m backing up my files as we speak, thinking of purging it all and starting over.

        Edit: before I do the sacrificial purge, this seems promising.

          • ohshit604@sh.itjust.works · 12 hours ago

            So I happened to follow the advice from that Proxmox post, enabled the “Discard” option for the disk, and ran sudo fstrim / within the VM; now the Proxmox LVM-Thin partition is sitting at a comfortable 135 GB out of 377 GB.

            Think I’m going to use this fstrim command on my main desktop to free up space.
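            A minimal sketch of that cleanup, assuming the “Discard” option is already enabled on the virtual disk in Proxmox:

            ```shell
            # Inside the VM: release unused blocks back to the thin pool.
            sudo fstrim -v /   # trim the root filesystem and report how much was freed
            sudo fstrim -av    # or trim every mounted filesystem that supports discard
            ```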

            • sip@programming.dev · 5 hours ago (edited)

              I think linux does fstrim oob.

              edit: I meant to say linux distros are set up to do that automatically.
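              On systemd-based distros that automation is usually a weekly timer; a quick way to check (the commands are standard systemd, the output varies by distro):

              ```shell
              # Is the periodic trim already scheduled and when does it fire next?
              systemctl status fstrim.timer
              systemctl list-timers fstrim.timer
              ```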

    • qaz@lemmy.world (OP) · 12 hours ago

      I also ended up using firewalld and it mostly worked, although I first had to change some zone configs.
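      A sketch of the kind of zone changes meant here; the zone, interface, and port are placeholder values, not taken from the poster's setup:

      ```shell
      sudo firewall-cmd --get-active-zones                                # inspect current zones
      sudo firewall-cmd --permanent --zone=public --change-interface=eth0
      sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp     # allow one service
      sudo firewall-cmd --reload
      ```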

  • MangoPenguin@lemmy.blahaj.zone · 20 hours ago

    This only happens if you essentially tell docker “I want this app to listen on 0.0.0.0:80”

    If you don’t do that, then it doesn’t punch a hole through UFW either.
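    The difference in compose terms, as a minimal sketch (service name and image are placeholders):

    ```yaml
    services:
      web:
        image: nginx
        ports:
          - "80:80"             # implies 0.0.0.0:80 -- bypasses ufw
          - "127.0.0.1:8080:80" # loopback only -- no hole punched
    ```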

  • grrgyle@slrpnk.net · 23 hours ago

    If I had a nickel for every database I’ve lost because I let docker broadcast its port on 0.0.0.0 I’d have about 35¢

      • grrgyle@slrpnk.net · 20 hours ago

        I exposed them because I used the container for local development too. I just kept reseeding every time it got hacked before I figured I should actually look into security.

        • MangoPenguin@lemmy.blahaj.zone · 20 hours ago (edited)

          For local access you can use 127.0.0.1:80:80 and it won’t put a hole in your firewall.

          Or if your database is accessed by another docker container, just put them on the same docker network and access it via container name, and you don’t need any port mapping at all.
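          A compose sketch of that pattern (names and images are placeholders): “app” reaches the database at db:5432 over the shared network, and nothing is published to the host at all.

          ```yaml
          services:
            app:
              image: my-app        # hypothetical application image
              networks: [backend]
            db:
              image: postgres:16   # note: no ports: section at all
              networks: [backend]
          networks:
            backend:
          ```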

    • Appoxo@lemmy.dbzer0.com · 3 hours ago

      Another take: Why should I care about dependency hell if I can just spin up the same service on the same machine without needing an additional VM and with minimal configuration changes.

    • Domi@lemmy.secnd.me · 20 hours ago

      > My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.

      That’s only a side effect. It mainly got popular because it is very easy for developers to ship a single image that just works instead of packaging for various different operating systems with users reporting issues that cannot be reproduced.

    • null_dot@lemmy.dbzer0.com · 19 hours ago

      I don’t really understand the problem with that?

      Everyone is a script kiddie outside of their specific domain.

      I may know loads about Python but nothing about database management or proxies or Linux. If Docker can abstract a lot of the complexities away and present a unified way to configure and manage them, where’s the bad?

    • LordKitsuna@lemmy.world · 17 hours ago

      That is definitely one of the crowds, but there are also people like me who are just sick and tired of dealing with Python, Node, and Ruby dependencies. The install process for services has only become more convoluted over the years. And then you show me an option where I can literally just slap down a compose.yml and hit “docker compose up -d” and be done? Fuck yeah I’m using that.

    • MangoPenguin@lemmy.blahaj.zone · 20 hours ago

      No, it’s popular because it allows people/companies to run things without needing to deal with updates and dependencies manually.

    • greyfox@lemmy.world · 12 hours ago

      Actually I believe host networking would be the one case where this isn’t an issue. Docker isn’t adding iptables rules to do NAT masquerading because there is no IP forwarding being done.

      When you tell docker to expose a port you can tell it to bind to loopback and this isn’t an issue.
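      The two cases side by side, as a sketch (image and port are placeholders):

      ```shell
      docker run --network host nginx        # shares the host stack: no DNAT rules, ufw INPUT applies
      docker run -p 127.0.0.1:8080:80 nginx  # published, but bound to loopback only
      ```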

    • exu@feditown.com · 24 hours ago

      network_mode: host gives the container basically full access to any port it wants. But even with other network modes you need to be careful, as any -p <external port>:<container port> creates the appropriate firewall rule automatically.
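      For example (image and port are placeholders), a plain -p publishes on all interfaces and Docker writes the rule itself:

      ```shell
      docker run -d -p 8080:80 nginx             # shorthand for 0.0.0.0:8080:80
      sudo iptables -t nat -S DOCKER | grep 8080 # the DNAT rule Docker just added
      ```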

  • dohpaz42@lemmy.world · 1 day ago

    It’s my understanding that docker uses a lot of fuckery and hackery to do what it does. And IME they don’t seem to care if it breaks things.

    • marcos@lemmy.world · 24 hours ago

      To be fair, the largest problem here is that it presents itself as the kind of isolation that would respect firewall rules, not that it doesn’t respect them.

      People wouldn’t make the same mistake in NixOS, despite it doing exactly the same.

    • Guilvareux@feddit.uk · 22 hours ago

      I don’t know how much hackery and fuckery there is with docker specifically. The majority of what docker does was already present in the Linux kernel: namespaces, cgroups, etc. Docker just made it easier to build and ship the isolated environments between systems.
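      Those kernel primitives are usable directly without any container runtime; a sketch with unshare (part of util-linux):

      ```shell
      # New network + PID namespaces: the shell inside sees only a loopback
      # interface and its own processes -- isolation with no Docker involved.
      sudo unshare --net --pid --fork --mount-proc sh -c 'ip link; ps aux'
      ```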

  • jwt@programming.dev · 21 hours ago

    Somehow I think that’s on ufw not docker. A firewall shouldn’t depend on applications playing by their rules.

    • qaz@lemmy.world (OP) · 21 hours ago

      ufw just manages iptables rules; if docker overrides those, that’s on them IMO

      • null_dot@lemmy.dbzer0.com · 19 hours ago

        Not really.

        Both docker and ufw edit iptables rules.

        If you instruct docker to expose a port, it will do so.

        If you instruct ufw to block a port, it will only do so if you haven’t explicitly exposed that port in docker.

        It’s a common gotcha but it’s not really a shortcoming of docker.
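        The mechanics behind the gotcha: on a Docker host the FORWARD chain hands packets to Docker's chains before ufw's. The listing below is illustrative only; exact chain names vary by Docker version:

        ```shell
        sudo iptables -L FORWARD --line-numbers
        # 1  DOCKER-USER         all -- anywhere anywhere  <- Docker's admin hook
        # 2  DOCKER-ISOLATION... all -- anywhere anywhere
        # 3  DOCKER              all -- anywhere anywhere  <- accepts published ports
        # 4  ufw-before-forward  all -- anywhere anywhere  <- ufw only sees the rest
        ```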

      • jwt@programming.dev · 21 hours ago

        Feels weird that an application is allowed to override iptables though. I get that when it’s installed with root everything’s off the table, but still…

      • pressanykeynow@lemmy.world · 15 hours ago

        iptables has been deprecated for like a decade now; the fact that both still use it might be the source of the problem here.
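        On most current distros the iptables command is already a compatibility shim over nftables, which can be checked directly:

        ```shell
        iptables -V    # prints e.g. "iptables v1.8.9 (nf_tables)" when it's the shim
        iptables-translate -A INPUT -p tcp --dport 80 -j ACCEPT
        # e.g. nft add rule ip filter INPUT tcp dport 80 counter accept
        ```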

    • IsoKiero@sopuli.xyz · 21 hours ago

      Docker specifically creates rules for itself which are by default open to everyone. UFW (and the underlying nftables/iptables) just does as it’s told by the system root (via docker). I can’t really blame the system when it does what it’s told to do, and it’s been the administrator’s job to manage that in a reasonable way since forever.
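      Docker's own documented hook for managing this is the DOCKER-USER chain: rules inserted there run before Docker's accept rules, so an administrator can lock down published ports even though ufw can't. The interface and subnet below are example values:

      ```shell
      # Drop forwarded container traffic from anywhere except the local LAN.
      sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
      ```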

      And (not related to linux or docker in any way) there’s still big commercial software where the very first thing the highly paid consultants do after installing it is turn the firewall off…