Can you imagine if everything deteriorates over time to be exactly like that
Harnessing the power of fusion just so that LLMs can generate millions of index pages every second
I was toying around with an idea for a VCS that uses an LLM as a compression method: during a push it asks an LLM to describe what your diff contains (what changes it makes to which files), you push that summary to the origin, and when you pull it just plops the summary into another LLM to reconstruct the files or make the file changes.
If it wasn’t such a waste of energy, I’d find that a pretty funny random side project. Would love to see the results.
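For what it's worth, the joke push/pull flow above can be sketched in a few lines. `llm_describe` and `llm_reconstruct` are hypothetical stand-ins for actual model calls (no real API is used here), and the whole point is that reconstruction is lossy:

```python
# Joke sketch of "LLM as lossy VCS compression": push stores only a
# natural-language summary of the diff; pull asks a model to make it back up.
# llm_describe / llm_reconstruct are hypothetical stubs, not a real LLM API.

def llm_describe(diff: str) -> str:
    # Stand-in for an LLM call that summarizes a diff into prose.
    return f"Change summary: {diff}"

def llm_reconstruct(summary: str) -> str:
    # Stand-in for an LLM call that hallucinates a diff back from prose.
    return f"# best-effort reconstruction of: {summary}"

origin: list[str] = []  # the "remote" stores summaries, never the diffs

def push(diff: str) -> None:
    origin.append(llm_describe(diff))

def pull() -> str:
    # Fidelity not guaranteed; that's the gag.
    return llm_reconstruct(origin[-1])

push("fix off-by-one in pagination loop in views.py")
print(pull())
```

The original diff never reaches the origin, so every pull is a fresh hallucination, which is exactly why it would be a funny (and terrible) side project.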
I now need a web framework that just makes ChatGPT calls under the hood.