With the days of dial-up and pitiful 2G data connections long behind most of us, it would seem tempting to stop caring about how much data an end-user is expected to suck down that big and wide bro…
A bit of banging away later — I haven’t touched Linux traffic shaping in some years — I’ve got a quick-and-dirty script to set a machine up to temporarily simulate a slow inbound interface for testing.
slow.sh test script
#!/bin/bash
# Linux traffic shaping occurs on outbound traffic. This script sets up
# a virtual interface and redirects inbound traffic onto that virtual
# interface so that it may be rate-limited, simulating a network with a
# slow inbound connection. Removes the induced slowdown prior to
# exiting. Needs to run as root.

# Physical interface to slow; set as appropriate
oif="wlp2s0"

# Load the Intermediate Functional Block module and bring up ifb0.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Redirect all inbound IP traffic on the physical interface to ifb0.
tc qdisc add dev "$oif" handle ffff: ingress
tc filter add dev "$oif" parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

# Rate-limit what ifb0 sends (i.e., the redirected inbound traffic) to 1 Mbit/s.
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit
tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 1mbit

echo "Rate-limiting active. Hit Control-D to exit."
cat

# Shut down rate-limiting.
tc qdisc delete dev "$oif" ingress
tc qdisc delete dev ifb0 root
ip link set dev ifb0 down
rmmod ifb
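If you want to sanity-check that the shaping is actually in effect, the statistics on the virtual interface are one quick way; while something is downloading, the "Sent ... bytes" counters should only grow at about the configured rate:

# Show qdisc and class statistics on the virtual interface.
tc -s qdisc show dev ifb0
tc -s class show dev ifb0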
I’m going to see whether I can still reproduce that git failure for Cataclysm on git 2.47.2, which is what’s in Debian trixie. As I recall, it got a fair bit of the way into the download before bailing out. I’m including the script here, since I think the article makes a good point that there should probably be more slow-network testing, and maybe someone else wants to test something themselves on a slow network.
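If anyone wants to try the same thing, the rough procedure is just the script in one terminal and the clone in another; I’m assuming the usual CleverRaven/Cataclysm-DDA repo here:

# Terminal 1: start the rate limit (runs until Control-D).
sudo ./slow.sh
# Terminal 2: attempt the clone over the now-slow link.
git clone https://github.com/CleverRaven/Cataclysm-DDA.git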
It would probably be better to have something a little fancier that only slows traffic for one particular application (maybe create a “slow Podman container” and match on traffic going to that?), but this is good enough for a quick-and-dirty test. A rough sketch of the filter side is below.
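For what it’s worth, the per-container idea might look something like this untested sketch, swapping out the script’s three shaping lines so that only traffic addressed to the container lands in the slow class. The 10.88.0.2 address is a hypothetical container IP; substitute whatever podman inspect reports for yours:

# Slow class 1:10 for the container; fast default class 1:20 for everything else.
tc qdisc add dev ifb0 root handle 1: htb default 20
tc class add dev ifb0 parent 1: classid 1:10 htb rate 1mbit
tc class add dev ifb0 parent 1: classid 1:20 htb rate 1000mbit
# Steer inbound packets destined for the container's (hypothetical) IP into the slow class.
tc filter add dev ifb0 parent 1: protocol ip u32 match ip dst 10.88.0.2/32 flowid 1:10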
Nice! Scientific data!
Also looks like it’s still an issue with GH in slower countries: https://github.com/orgs/community/discussions/135808. So yeah, never mind: it’s still a huge issue even today.
Thanks. Yeah, I’m pretty sure that that was what I was hitting. Hmm. Okay, that’s actually good: it means it’s not a git bug, then, but something problematic in GitHub’s infrastructure.
EDIT: On that bug, they say that they fixed it a couple of months ago:
This seems to have been fixed at some point during the last days leading up to today (2025-03-21), thanks in part to @MarinoJurisic’s tireless efforts to convince Github support to revisit this problem!!! 🎉
So hopefully it’s dead, even specifically for GitHub. Excellent. Man, that was obnoxious.
I wonder if there is a retry option or something in git? I know you can do it with a basic bash script, but we can assume someone else is having the same issue, right?
I did see some depth=1 or something like that to get only a certain depth of git commits, but that’s about it.
I can’t find the curl workaround I used a long time ago. It might have been just pulling the code as a zip or something, like some GH repos let you do.
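The bash-script route I mean is something like this untested sketch (the repo URL is just a placeholder):

#!/bin/bash
# Retry the clone until it succeeds; a failed attempt can leave a partial
# directory behind, so clean it up before trying again.
repo="https://github.com/example/repo.git"   # placeholder
until git clone "$repo" repo; do
    echo "Clone failed; retrying in 10 seconds..."
    rm -rf repo
    sleep 10
done

And the zip thing is presumably GitHub’s archive endpoint, something like (again with placeholder owner, repo, and branch names):

# Grab a snapshot of one branch as a zip, with no git history at all.
curl -L -o repo.zip https://github.com/example/repo/archive/refs/heads/main.zip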
I did see some depth=1 or something like that to get only a certain depth of git commits, but that’s about it.
Yeah, that’s a shallow clone. It reduces what git pulls down, and I did try that (you most likely want a bit more; probably also ask it to only pull down data from a single branch), but back when I was crashing into this, it wasn’t enough for the Cataclysm repo.
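Concretely, something like this (note that, as far as I know, --depth implies --single-branch unless you pass --no-single-branch):

# History truncated to the most recent commit, one branch only.
git clone --depth=1 --single-branch https://github.com/CleverRaven/Cataclysm-DDA.git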
It looks like it’s fixed as of early this year; I updated my comment above.