• JATtho@sopuli.xyz

    It happened to me when I was configuring IP geoblocking: only whitelisted IP ranges were allowed, and the whitelist was fetched from a trusted URL. If the DNS provider happened not to be on that list, the whitelist would come back empty, blocking all IPs. Literally a 100%-proof firewall; not even a ping gets through.
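    For illustration, a minimal sketch of that failure mode, assuming an ipset-based setup (the URL and set name are placeholders, not the actual config). The missing guard is simply a check that the fetched list is non-empty:

    ```sh
    #!/bin/sh
    # Hypothetical allowlist updater: a default-deny INPUT policy only
    # accepts sources present in the 'geoblock' ipset.
    ALLOWLIST_URL="https://example.com/allowed-ranges.txt"  # placeholder

    ranges=$(curl -fsS "$ALLOWLIST_URL") || exit 1

    # The guard that was missing: an empty fetch means "allow nobody",
    # so refuse to apply it and keep the previous rules instead.
    if [ -z "$ranges" ]; then
        echo "empty allowlist fetched, refusing to apply" >&2
        exit 1
    fi

    ipset flush geoblock
    for cidr in $ranges; do
        ipset add geoblock "$cidr"
    done
    ```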

  • TimTamJimJam@lemmy.world

    Happened to me at work once… I was connected via SSH to one of our test machines so I could test connection-disruption handling on a product we had installed.

    I had a script that added iptables rules to block all ports for 30 seconds and then unblock them. Of course I didn’t add an exception for port 22, and I didn’t run it with nohup, so when I ran the script it blocked the ports and locked me out of SSH, and the script stopped running when the SSH session ended, so it never unblocked the ports. I just sat there in awe of my stupidity.
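    For anyone curious, a rough reconstruction of that kind of script (the original isn’t shown, so the details are guesses), with the safeguards that would have prevented the lockout:

    ```sh
    #!/bin/sh
    # disrupt.sh (hypothetical): drop all inbound traffic for 30 seconds,
    # then restore it.

    # The missing exception: keep established connections and SSH (port 22)
    # reachable so the test cannot cut off the tester.
    iptables -I INPUT 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -I INPUT 2 -p tcp --dport 22 -j ACCEPT

    # The disruption itself: drop everything else.
    iptables -A INPUT -j DROP

    sleep 30

    # Undo. Flushing assumes INPUT held no other rules; a careful script
    # would delete exactly the rules it added.
    iptables -F INPUT
    ```

    The other safeguard is running it so it survives the session dying, e.g. `nohup ./disrupt.sh &` (or inside tmux, as discussed below).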

      • Blackmist@feddit.uk

        Ah, if only it were a server room and not a customer site a 3-hour drive away. And he’d closed up and gone home for the night.

        Fortunately it just needed a reboot, and I was able to talk him through that in the morning.

        • JasonDJ@lemmy.zip

          Oof. I did a firmware upgrade on my main external firewall.

          The upgrade itself went fine, but when we had added an ISP a month or so prior, I had forgotten to redistribute that ISP’s routes. So while all my ISPs were technically working and the firewall came back up, nothing below it could reach the internet; it was as good as down.

          Cue the 1.5 hour drive into the office…

          Had that drive to think about what went wrong. Got into the main lobby, sat down, joined the wifi, and fixed it in 3 minutes.

          Didn’t even get to my desk or the datacenter.

        • SpaceCowboy@lemmy.ca

          Oof… well, you can just say “it must be some hardware problem or something… maybe a reboot will fix it.”

    • krash@lemmy.ml

      Out of curiosity, how would nohup have made your situation different? As I understand it, nohup makes it possible to keep terminal applications running even after the terminal session has ended.

      • octopus_ink@lemmy.ml

        If the script was supposed to wait 30 seconds and then unblock the ports, running it with nohup would have let it keep going and unblock them 30 seconds later. Instead, the script terminated when the SSH session died and never executed the countdown or the unblock.
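        Roughly (the script name is made up):

        ```sh
        # Without nohup: the script receives SIGHUP when the SSH session
        # dies and is killed mid-sleep, so the unblock never runs.
        ./block30.sh

        # With nohup and backgrounding: SIGHUP is ignored, the script
        # outlives the session, and the ports come back 30 seconds later.
        nohup ./block30.sh &
        ```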

      • aidan@lemmy.world

        the script stopped running when the SSH session ended so never unblocked the ports

        • JasonDJ@lemmy.zip

          Tmux essentially creates a pseudo-shell that persists between sessions.

          So you can start a process, detach the session, start something else, disconnect, come back next week, and check on it.

          It does other things too, like terminal tiling.
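          The basic flow looks like this (session name made up):

          ```sh
          tmux new -s blocker      # start a named session
          ./block30.sh             # run the long job inside it
          # press Ctrl-b then d to detach; the session keeps running
          tmux attach -t blocker   # reattach later, even from a new SSH login
          ```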

        • lukmly013@lemmy.sdf.org

          Well, the script would have kept running even after he was disconnected from that tmux session by losing the SSH connection. And since that script would unblock all ports after 30 seconds…

          (Same use case as nohup that they mentioned)

  • TurboWafflz@lemmy.world

    I accidentally put all the interfaces on my router running OpenWrt into the wrong firewall zone, so now I can’t access it via SSH or the web interface. It was already configured, though, and it still works, so I’m just ignoring the problem until something breaks.
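    For whenever it does break: once you have a local shell (serial console, or OpenWrt’s failsafe mode), the zone assignment can be fixed with uci. A sketch; the zone index and network name depend on the actual config:

    ```sh
    # See which networks each firewall zone currently covers
    uci show firewall | grep -i zone

    # Hypothetical fix: put the LAN interface back into the 'lan' zone
    uci set firewall.@zone[0].network='lan'
    uci commit firewall
    /etc/init.d/firewall restart
    ```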

  • PlexSheep@feddit.de

    What is a good firewall that can also block ports published with Docker? I’d need it to run on the same host.

      • PlexSheep@feddit.de

        I remember trying with ufw, and the Docker ports were still open. IIRC I’ve read somewhere that Docker and ufw both use the same underlying mechanism (iptables?), so ufw cannot block Docker.
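        That matches how it works: both program iptables, and Docker’s own rules are consulted first. Docker does provide the DOCKER-USER chain for user rules that apply to container traffic; a sketch with placeholder port and subnet (note that published-port traffic is already DNAT-ed when it reaches this chain, hence matching the original destination port via conntrack):

        ```sh
        # Block access to a published container port (8080 here) from
        # everywhere except a trusted subnet.
        iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 8080 \
            ! -s 192.168.1.0/24 -j DROP
        ```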

        • iiGxC@slrpnk.net

          Hmm, not sure. I know with Docker you can map ports for the container, so the port the container sees is different from the port on the host. Maybe you can do something with that?
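          That mapping is the -p/--publish flag, and it can also bind the published port to a specific host address, which is one way to keep it off the real network (image and ports here are just examples):

          ```sh
          # Publish container port 80 on host port 8080, reachable network-wide:
          docker run -d -p 8080:80 nginx

          # Bind the published port to localhost only, so other machines
          # on the network cannot reach it:
          docker run -d -p 127.0.0.1:8080:80 nginx
          ```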

          • PlexSheep@feddit.de

            I can configure the containers in ways that don’t require publishing ports on the real network, but that’s always possible. It would still be nice to have a firewall that could block even those containers that do publish their ports to the whole (real) network.

    • dan@upvote.au

      Are your Docker containers attached directly to the network (e.g. using ipvlan or macvlan)? The default bridge network driver doesn’t expose a container publicly unless you explicitly publish a port. If you don’t publish a port, the container is only accessible from the host, not from any other system on the network.
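      A quick way to see the difference (container name and image are arbitrary):

      ```sh
      # No port published: only the host can reach it, via the bridge IP.
      docker run -d --name web nginx
      curl "http://$(docker inspect -f \
          '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' web)/"

      # Port published: any machine on the network can now reach it
      # through the host's address.
      docker run -d -p 8080:80 nginx
      ```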

        • dan@upvote.au

          If you don’t want the Docker container to be accessible from other systems then just don’t publish the port.

          • PlexSheep@feddit.de

            Yeah, of course, that’s what I’m doing anyway, but the point of a firewall would be defense in depth: even if something were published, the firewall would catch it.

    • marcos@lemmy.world

      You are assuming there is a keyboard and monitor plugged into it, and that the computer is somewhere nearby.

      Neither of those is automatically true. And when it is nearby, it’s usually easier to just put the SD card into another computer and edit the configuration there.