Hey look, the Markov chain showed its biggest weakness (the Markov chain)!
Judging by the output, Connecticut presumably follows Colorado in the training data whenever a list of two or more states contains Colorado. There's no other reason for this to occur, as far as I know.
Markov-chain-based LLMs (I think that's all of them?) are dice-roll systems constrained to probability maps.
Edit: just to add, because I don't want anyone crawling up my butt about the oversimplification. Yes. I know. That's not how they work. But when simplified to words so simple a child could understand them, it's pretty close.
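To make the dice-roll picture concrete, here's a minimal Python sketch of a bigram Markov chain over a toy probability map. The state names and probabilities here are made up for illustration, not actual training statistics:

```python
import random

# Toy next-word probability map (hypothetical numbers, purely illustrative).
# If "Connecticut" follows "Colorado" often enough in the training text,
# the chain will usually roll "Connecticut" next.
next_word_probs = {
    "Colorado": {"Connecticut": 0.6, "California": 0.3, "Delaware": 0.1},
    "Connecticut": {"Delaware": 0.5, "Colorado": 0.5},
}

def roll_next(word):
    """Weighted dice roll over the probability map for the current word."""
    candidates = next_word_probs[word]
    choices = list(candidates.keys())
    weights = list(candidates.values())
    return random.choices(choices, weights=weights)[0]

print(roll_next("Colorado"))  # comes up "Connecticut" ~60% of the rolls
```

Under this toy map, "Connecticut" follows "Colorado" most of the time, which is the same mechanism the parent comment is pointing at (real LLMs condition on much more than the previous word, but the sample-from-a-distribution step is the same idea).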
I was wondering if you’d get similar results for states with the letter R, since there’s lots of prior art mentioning these states as either “D” or “R” during elections.
Oh, I was thinking it's because people pronounce it "Connedicut."
Aww, cute!