Reddit’s API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.
The key point: This doesn’t touch Reddit’s servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.
What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
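For a sense of what those dumps contain: the Reddit .zst files are zstd-compressed newline-delimited JSON, one post or comment object per line, compressed with an unusually large window. A minimal reading sketch in Python (my own illustration, not the tool’s actual ingest code; the filename is hypothetical):

```python
import io
import json
import zstandard as zstd

def read_dump(path):
    """Yield one JSON object per line of a zstd-compressed Pushshift dump."""
    with open(path, "rb") as fh:
        # Pushshift dumps use a long compression window; allow up to 2 GiB.
        dctx = zstd.ZstdDecompressor(max_window_size=2**31)
        with dctx.stream_reader(fh) as reader:
            for line in io.TextIOWrapper(reader, encoding="utf-8"):
                yield json.loads(line)

# Hypothetical filename: any per-subreddit dump from the torrent works.
for post in read_dump("wallstreetbets_submissions.zst"):
    print(post.get("title"))
```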
API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.
Self-hosting options:
- USB drive / local folder (just open the HTML files)
- Home server on your LAN
- Tor hidden service (2 commands, no port forwarding needed; a generic torrc sketch follows this list)
- VPS with HTTPS
- GitHub Pages for small archives
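For the Tor option, the generic manual setup is two lines of torrc plus a restart (a sketch that assumes the archive is already being served on localhost:8080; the project’s own two-command flow may differ):

```
# /etc/tor/torrc -- map a .onion address onto the local web server
HiddenServiceDir /var/lib/tor/redd-archive/
HiddenServicePort 80 127.0.0.1:8080
```

Restart tor and the generated .onion hostname appears in /var/lib/tor/redd-archive/hostname. Nothing is exposed on your router, which is why no port forwarding is needed.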
Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.
Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
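One common way to keep memory flat like this is a server-side cursor that streams rows in batches rather than loading the table into RAM. A generic sketch of that pattern (my illustration, with made-up table and column names; not necessarily how redd-archiver does it):

```python
import psycopg2

def render_static_page(post_id, title, body):
    # Stand-in for the real template-rendering step.
    print(f"render {post_id}: {title[:40]}")

conn = psycopg2.connect("dbname=archive")  # hypothetical DSN
# A *named* cursor is server-side: rows arrive in batches of itersize,
# so memory use stays flat however many rows the table holds.
with conn.cursor(name="posts_stream") as cur:
    cur.itersize = 5000
    cur.execute("SELECT id, title, selftext FROM posts ORDER BY created_utc")
    for post_id, title, body in cur:
        render_static_page(post_id, title, body)
conn.close()
```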
How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is “trust but verify” – it accelerates the boring parts but you still own the architecture.
Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://github.com/19-84/redd-archiver (Public Domain)
Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4
It would be neat for someone to migrate this dataset to a Lemmy instance.
It would be inviting a lawsuit for sure. I like the essence of the idea, but it’s probably more trouble than it’s worth for all but the most fanatic.
Is it though? That is (or was, and should be again) publicly accessible information that was created over the years by random internet users. I reject the notion that an American company can “own it” just because they ran the servers. Sure, they can hold copyright for their frontend and backend code, name, and whatever. But posts and comments, no way.
Of course it would be dumb for someone under US jurisdiction but we’ll see how much an international DMCA claim is worth considering the current relations anyway.
They don’t own it; the individual posters own the content of their own posts. However, from the Reddit terms of service:
When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit.
And with each of those rights granted, Reddit’s lawyers can defend those rights. So no, they don’t own it “just because they ran the servers” - they own specific rights to copy granted to them by each poster.
(I don’t like this arrangement, but ignorance of the terms of service isn’t going to help someone who uploaded a full copy of works that Reddit holds extensive rights to.) On this subject, I think there needs to be an extensive overhaul narrowing the terms that can be imposed on the general public. The problem is I straight up don’t trust anyone currently in power to make such a change with our interests in mind.
I’m not at all familiar with legalese, but wouldn’t ‘non-exclusive’ in that statement mean that you, and others permitted by you, can redistribute the content as you see fit? Meaning that copying and redistributing reddit content doesn’t necessarily violate reddit’s terms of service but does violate the user’s copyright?
Yeah so at worst you could get sued by some random reddit users that don’t want their post history hosted on your site.
Given how little traction artists and authors have had with suing AI companies for blatant copyright infringement, I kinda doubt it would go anywhere.
Might be easiest to set up an instance in a country that doesn’t give a fuck about western IP law, then others can federate to it.
So yeah, fanatic levels of effort.
Brb, setting up a Lemmy server in Red Star OS
(The machine with the only Steam account active in North Korea ~~would like to~~ already knows your location)
The chances are pretty high that it’s probably Kim’s computer, aren’t they?
I think we were all hoping that some loveable genius was quietly subverting their surveillance state and getting a view of the outside world via Team Fortress 2, but, yeah, if it’s not North Korea’s fattest man, it’s probably a high ranking military crony.
…Hey, just musing here, but that sounds like a kinda hilariously easy doxx. You don’t think they’d keep state secrets on that same machine? …Surely…? Noooo… 🤔
Post and comments are not Reddit’s IP anyway :3
They might have set up the user agreement for it. Stack Exchange did, and their whole business model was about catching businesses where some worker copy/pasted code from a Stack Exchange answer, then getting a settlement out of it.
I agree with you in principle (hell, I’d even take it further and think only trademarks should be protected, other than maybe a short period for copyright and patent protection, like a few years), but the legal system might disagree.
Edit: I’d also make trademarks non-transferrable and apply to individuals rather than corporations, so they can go back to representing quality rather than business decisions. Especially when some new entity that never had any relation to the original trademark user just throws some money at them or their estate to buy the trust associated with the trademark.
/u/Buddahriffic put it better than I could.
I agree, it shouldn’t be reddit’s intellectual property. But the law binds the poor and protects the rich.
this is one reason i support tor deployment out of the box 😋
Lemmit already existed and was annoying as hell. It was the first account I remember blocking.
Now this is a good idea.
This seems especially handy for anyone who wants a snapshot of Reddit from the pre-enshittification, pre-AI era, when content was more authentic and less driven by bots and commercial manipulation of opinion. Just choose the cutoff date you want and stick with that dataset.
What’s the time range of the dataset? Up through what date does it run?
however, the data from 2025-12 has already been released; it just needs to be split and reprocessed for 2025 by watchful1. once that happens you can host an archive up to the end of 2025. i will probably add support for importing data from the arctic shift dumps instead, so that archives can be updated monthly.
Thank you very much, very cool.
It literally says in the link. Go to the link and it’s the title.
Oh I didn’t see it. I’m sorry I asked.
What’s the size difference when you remove the porn stuff from the torrent?
Willing to bet a 90% size reduction
Thanks. This is great for mining data and urls.
so kinda like kiwix but for reddit. That is so cool
You should be very proud of this project!! Thank you for sharing.
How long does it take to download this 3TB torrent?
week(s)
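Back-of-envelope, assuming a sustained 100 Mbit/s connection (real swarm speeds vary): 3.28 TB ≈ 26.2 Tbit, and 26.2×10¹² bits ÷ 10⁸ bits/s ≈ 262,000 s, about 3 days. At a sustained 50 Mbit/s it’s closer to 6 days, hence week(s).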
Thank you for the answer. I think I’ll do this one instead: https://academictorrents.com/details/30dee5f0406da7a353aff6a8caa2d54fd01f2ca1 Looks like it’s divided by year-month.
those are not split by subreddit so they will not work with the tool
Fuck Reddit and Fuck Spez.
You know what would be a good way to do it? Take all that content and throw it on a federated service like ours. Publicly visible. No bullshit. And no reason to visit Reddit to get that content. Take their traffic away.
Where would it be hosted so that Conde Nast lawyers can’t touch it?
What would they say? It’s information that’s freely available, no payment required, no accounts needed to simply read it, no copyrights. Where’s the legal issue in hosting a duplicate of the content?
Oh I agree with you, friend. The problem is that they’ll say that they’re losing ad revenue. So they’ll try and sue, even if they’re in the wrong.
Fine, decentralize it then. And fuck your ad revenue, nobody likes you, Spez!
It might fall under the same concept that recipes do - you can’t copyright a recipe, but a collection of recipes (such as a book) is copyrightable.
In any case, they have a lot more money to pay lawyers than you or I do, I’ll bet, so even if you are right, that doesn’t mean you’ll have the money to actually win.
So distribute it in a fault-tolerant way. They can’t sue all of us.
*grabs external*
Very cool! Do you know how your project compares with arctic shift? For those more interested in research with reddit data, is there a benefit of one vs the other?
How does this compare to redarc? It seems to be similar.
redarc uses reactjs to serve the web app; redd-archiver uses a hybrid architecture that combines static page generation with postgres search via flask. it’s more like a static site generator with web app capabilities added through docker and flask. the static pages with sorted indexes can be viewed offline and served on hosts like github and codeberg pages.
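if it helps picture the hybrid, here’s a rough sketch of the pattern (illustration only, not redd-archiver’s actual code; route, table, and column names are made up): flask serves the pre-rendered html tree as-is and adds a search endpoint backed by postgres full-text search.

```python
from flask import Flask, jsonify, request, send_from_directory
import psycopg2

app = Flask(__name__)

@app.route("/search")
def search():
    # Postgres full-text search; table and column names are made up.
    q = request.args.get("q", "")
    with psycopg2.connect("dbname=archive") as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, title FROM posts "
            "WHERE to_tsvector('english', title) @@ plainto_tsquery('english', %s) "
            "LIMIT 50",
            (q,),
        )
        rows = cur.fetchall()
    return jsonify([{"id": r[0], "title": r[1]} for r in rows])

@app.route("/", defaults={"path": "index.html"})
@app.route("/<path:path>")
def pages(path):
    # Everything else comes straight from the pre-rendered HTML tree.
    return send_from_directory("output", path)

if __name__ == "__main__":
    app.run(port=5000)
```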
Is there a difference in how much storage space is needed between the two approaches?
redd-archiver will take up more disk space because the database exists along with the static html
I do not consent to this
Eww, Voat and Ruqqus.
I’d be worried about having some of the voat stuff on a hard drive I own.
I’m surprised GitHub hasn’t automatically nixed the archive.
i will always take more data sources, including lemmy!
… for building your personal Grok?
if you didn’t notice, this project was released into the public domain
The Free/Libre Torment Nexus
Can anyone figure out what the minimum process is to just use the SSG function? I’m having a really hard time trying to understand the documentation.
did you check the quickstart?
Yes, both the standalone quickstart and the quickstart section of the readme (which differ from each other).
Is it possible to get the static sites without spinning up a DB backend?
the database is required