• tal@lemmy.today · 4 days ago (edited)

    This low bandwidth scenario led to highly aggravating scenarios, such as when a web app would time out on [Paul] while downloading a 20 MB JavaScript file, simply because things were going too slow.

    Two major applications I’ve used that don’t deal well with slow cell links:

    • Lemmyverse.net maintains an index of all Threadiverse instances and of all communities on all of them, and is presently an irreplaceable resource for a user on here who wants to search for a given community. Its communities page loads an enormous amount of data and has some sort of short timeout. Whatever it's pulling down internally — I didn't look — either isn't cached or is a single file, so reloading the page restarts the download from scratch. The net result is that it won't work over a slow connection.

    • This may have been fixed since, but for a long stretch git would smash into timeouts and simply fail on slow links, at least against GitHub. That made it impossible to clone larger repositories; I remember failing to clone the Cataclysm: Dark Days Ahead repository, where one couldn't even manage a shallow clone. The problem was greatly exacerbated by the fact that git presently has no ability to resume an interrupted download. I've generally worked around this by git cloning to a machine on a fast connection, then using rsync to pull the repository over to the machine on the slow link (roughly the sequence sketched below), which, frankly, is a little embarrassing when one considers that git really is the premier distributed VCS tool out there in 2025, and really shouldn't need to rely on that sort of workaround.
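
    For reference, the workaround is roughly the following, where "fasthost" is a placeholder for some machine with a fast connection:

    # Clone on the fast machine; the Cataclysm repository as the example:
    ssh fasthost git clone https://github.com/CleverRaven/Cataclysm-DDA.git

    # Then pull it across the slow link with rsync, which, unlike git, can
    # be re-run after an interruption and resume partially-copied files:
    rsync -avz --partial fasthost:Cataclysm-DDA/ Cataclysm-DDA/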

    • mesa@piefed.socialOP · 4 days ago

      I remember there are some curl timeout settings you can use in conjunction with git…but it's been nearly a decade since I've done anything of the sort. Modern-day GitHub is fast-ish…but yeah, bigger stuff still has some big git issues.
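
      If it helps, the ones I was thinking of are probably git's curl-backed low-speed settings; something like this, with the exact values being guesses:

      # Abort only if throughput stays below 1000 bytes/s for 60 straight
      # seconds; these map to curl's low-speed-limit/low-speed-time options.
      git config --global http.lowSpeedLimit 1000
      git config --global http.lowSpeedTime 60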

      Good points! Didn’t know about Lemmyverse.net!

      • Rimu@piefed.social · 3 days ago (edited)

        Didn’t know about Lemmyverse.net!

        As a PieFed user, soon you won't need to: PieFed instances will automatically subscribe to every community in newcommunities@lemmy.world, so the local communities finder will always have everything you'll ever need.

        Coming in v1.2.

        Every third-party site hanging around the fringes of Lemmy is a crutch for missing features in Lemmy, and an opportunity for PieFed to incorporate its functionality.

    • tal@lemmy.today · 3 days ago (edited)

      A bit of banging away later — I haven’t touched Linux traffic shaping in some years — I’ve got a quick-and-dirty script to set a machine up to temporarily simulate a slow inbound interface for testing.

      slow.sh test script
      #!/bin/bash
      # Linux traffic shaping applies to outbound traffic.  This script
      # sets up a virtual interface and redirects inbound traffic onto it
      # so that the traffic may be rate-limited, simulating a network with
      # a slow inbound connection.
      # Removes the induced slowdown prior to exiting.  Needs to run as root.
      
      # Physical interface to slow; set as appropriate
      oif="wlp2s0"
      
      # Create a single IFB (Intermediate Functional Block) virtual interface
      modprobe ifb numifbs=1
      ip link set dev ifb0 up

      # Redirect all inbound IP traffic on the physical interface to ifb0,
      # where it can be shaped as though it were outbound traffic
      tc qdisc add dev $oif handle ffff: ingress
      tc filter add dev $oif parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

      # Rate-limit everything on ifb0 to 1 mbit/s using an HTB qdisc
      tc qdisc add dev ifb0 root handle 1: htb default 10
      tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit
      tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 1mbit
      
      echo "Rate-limiting active.  Hit Control-D to exit."
      cat
      
      # shut down rate-limiting
      tc qdisc delete dev $oif ingress
      tc qdisc delete dev ifb0 root
      ip link set dev ifb0 down
      rmmod ifb
      

      I'm going to see whether I can still reproduce that git failure for Cataclysm on git 2.47.2, which is what's in Debian trixie. As I recall, it got a fair bit of the way into the download before bailing out. I'm including the script here since the article makes a good point that there should probably be more slow-network testing, and maybe someone else wants to test something over a slow connection themselves.

      It'd probably be better to have something a little fancier that only slows traffic for one particular application (maybe create a "slow" Podman container and match on traffic going to it?), but this is good enough for a quick-and-dirty test. One possible approach is sketched below.
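
      One way the per-container version might look, untested: skip ifb entirely and shape the host side of the container's veth pair, since egress on that veth is traffic headed into the container. The interface name here is a placeholder (ip link show will list the real one):

      # Throttle everything flowing into the container by shaping egress on
      # the host side of its veth pair; "veth0" is a placeholder name.
      tc qdisc add dev veth0 root handle 1: htb default 10
      tc class add dev veth0 parent 1: classid 1:10 htb rate 1mbit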

        • tal@lemmy.today · 3 days ago (edited)

          Thanks. Yeah, I’m pretty sure that that was what I was hitting. Hmm. Okay, that’s actually good — so it’s not a git bug, then, but something problematic in GitHub’s infrastructure.

          EDIT: On that bug, they say that they fixed it a couple months ago:

          This seems to have been fixed at some point during the last days leading up to today (2025-03-21), thanks in part to @MarinoJurisic's tireless efforts to convince Github support to revisit this problem!!! 🎉

          So hopefully it's dead now, at least for GitHub specifically. Excellent. Man, that was obnoxious.

          • mesa@piefed.socialOP · 3 days ago (edited)

            I wonder if git has a retry option or something built in? I know you can do it with a basic bash script (something like the sketch below), but we can assume someone else has hit the same issue, right?
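
            The bash version is just a loop, something like this untested sketch (OWNER/REPO is a placeholder). Since git can't resume, every attempt starts over from scratch, so it only helps with transient failures:

            # Keep retrying the clone until it succeeds; git cleans up the
            # target directory after a failed clone, so each pass starts fresh.
            until git clone https://github.com/OWNER/REPO.git; do
                echo "clone failed; retrying in 30 seconds..." >&2
                sleep 30
            done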

            I did see some depth=1 option or something like that to get only a certain depth of git commits, but that's about it.

            I can't find the curl workaround I used a long time ago. It might have just been pulling the code down as a zip or something, like some GitHub repos let you do.
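
            If it was the zip route, it would have been grabbing GitHub's snapshot archive, something like this (OWNER/REPO and the branch name being placeholders):

            # Fetch a snapshot of one branch as a zip instead of cloning;
            # -L follows the redirect GitHub serves for archive URLs.
            curl -L -o repo.zip \
                https://github.com/OWNER/REPO/archive/refs/heads/main.zip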

            • tal@lemmy.today · 3 days ago

              I did see some depth=1 option or something like that to get only a certain depth of git commits, but that's about it.

              Yeah, that's a shallow clone. It reduces what gets pulled down, and I did try it (you most likely want a bit more, probably also asking to only pull down data from a single branch; see below), but back when I was crashing into this, even that wasn't enough for the Cataclysm repo.
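
              Spelled out, that combination looks like this; I'm assuming master is still the Cataclysm repo's default branch:

              # Shallow, single-branch clone; --depth implies --single-branch,
              # but spelling it out doesn't hurt.
              git clone --depth 1 --single-branch --branch master \
                  https://github.com/CleverRaven/Cataclysm-DDA.git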

              It looks like it’s fixed as of early this year; I updated my comment above.