- cross-posted to:
- technology@lemmy.world
Kagi has quickly grown into something of a household name within tech circles. From Hacker News and Lobsters to Reddit, the search provider seems to attract near-universal praise. Whenever the topic of search engines comes up, there’s an almost ritual rush to be the first to recommend Kagi, often followed by a chorus of replies echoing the endorsement.
Oh, yeah, that would be bad. Maybe something like an onion network would help, but I suspect it'd be subject to timing attacks, and it'd eliminate all potential "friend peer" configuration benefits. I suppose another mitigation would be, as you said, some caching from peers. I was thinking limited caching, but if you doubled or even tripled the cache size, such that only 1/3 of the index "belonged" to the peer and the rest came from other nodes, you'd have a sort of Freenet situation where you couldn't prove anything about the peer itself.

How big would indexes get, anyway? My buku cache is around 3.2 MB. I can easily afford to allocate 50 MB for replicating data from other peers' DBs. However, buku doesn't index full sites; it only fetches the URL, title, tags, and description. We'd want something which at least fully indexes the URL's page, and real search engines crawl entire sites.
Maybe it’d be infeasible.
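For what it's worth, here's a rough back-of-the-envelope sketch in Python of the mixed-cache idea. The per-record sizes (~1 KB for a bookmark-style entry, ~20 KB once the page body is indexed) are guesses on my part, not measurements, and all the names are just for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class IndexEntry:
    """One bookmark-style record (roughly what buku stores)."""
    url: str
    title: str
    tags: list[str] = field(default_factory=list)
    description: str = ""
    # Full-page indexing would add extracted page text, which dominates size.
    page_text: str = ""

# Assumed sizes: ~1 KB per bookmark-style record (a 3.2 MB buku DB with a
# few thousand entries) and ~20 KB per record once the page body is indexed.
BOOKMARK_RECORD_BYTES = 1_000
FULL_PAGE_RECORD_BYTES = 20_000
REPLICATION_BUDGET_BYTES = 50 * 1024 * 1024  # the 50 MB mentioned above

def replicated_capacity(record_bytes: int) -> int:
    """How many foreign records fit in the replication budget."""
    return REPLICATION_BUDGET_BYTES // record_bytes

def build_cache(local: list[IndexEntry], foreign: list[IndexEntry],
                local_fraction: float = 1 / 3) -> list[IndexEntry]:
    """Mix local and replicated entries so at most `local_fraction` of the
    cache is the peer's own data; a hit in the cache then proves nothing
    about what the peer itself bookmarked."""
    total = int(len(local) / local_fraction)
    needed_foreign = total - len(local)
    return local + foreign[:needed_foreign]

print(replicated_capacity(BOOKMARK_RECORD_BYTES))   # ~52,000 bookmark records
print(replicated_capacity(FULL_PAGE_RECORD_BYTES))  # ~2,600 full-page records
```

So under those assumptions, 50 MB buys tens of thousands of bookmark-style records but only a couple thousand fully indexed pages, which is where the feasibility question really bites.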