• 0 Posts
  • 533 Comments
Joined 3 years ago
Cake day: June 16th, 2023




  • One thing that is enlightening is why the seahorse LLM confusion happens.

    The model has one thing to predict: can it produce a specified emoji, yes or no? Some Reddit thread swore there was a seahorse emoji (among others), so it decides “yes”, and then easily predicts the next words to be “here it is:”. At that point, and not an instant before, it actually tries to generate the indicated emoji, and here, and only here, it fails to find anything of sufficient confidence. But the preceding words demand an emoji, so it generates the wrong emoji. Then, knowing the previous token wasn’t a match, it generates a sequence of words to try again and again…

    It has no idea what it is building toward; it builds results one token at a time. It is wild how well that works, but it frequently lands in territory where previously generated tokens have backed it into a corner, and the best fit for subsequent tokens is garbage.
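    The committed-token trap described above can be sketched with a toy greedy decoder. This is not a real LLM, just a hypothetical lookup table of next-token scores: once the high-scoring tokens “Yes” and “here it is:” are emitted, the best remaining option is a wrong emoji, because there is no lookahead and no backtracking.

    ```python
    # Toy sketch (assumed, not any real model): greedy next-token decoding.
    # The "model" is a made-up table of next-token scores keyed by the last token.
    NEXT = {
        "<prompt>": {"Yes": 0.9, "No": 0.1},              # training data said "yes"
        "Yes": {"here-it-is:": 0.95, ".": 0.05},
        "here-it-is:": {"🐴": 0.4, "🌊": 0.35, "🦄": 0.25},  # no seahorse emoji exists
        "🐴": {"<eos>": 1.0},
    }

    def greedy_decode(start, max_steps=10):
        """Pick the highest-scoring next token at every step, with no lookahead."""
        tokens, cur = [], start
        for _ in range(max_steps):
            choices = NEXT.get(cur)
            if not choices:
                break
            cur = max(choices, key=choices.get)  # commit immediately; no backtracking
            if cur == "<eos>":
                break
            tokens.append(cur)
        return tokens

    print(greedy_decode("<prompt>"))  # → ['Yes', 'here-it-is:', '🐴']
    ```

    The point of the sketch: the wrong emoji is not a late mistake but the locally best continuation of tokens the model already committed to.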



  • Based on my own in-person experience with some LLM fanatics, I think this is quite probable. I’ve heard very sincere feedback from people who think they are amazing because they have “advanced prompt engineering” skills. They think “prompt engineer” will be a very selective job in and of itself and that they have an edge. They think they will be able to work in any field, because the LLM will take care of the domain-specific stuff and their “rare mastery” of prompts will be the hot skill.


  • I have had in-person conversations with multiple people who swear they have fixed the AI hallucination problem the same way: “I always include the words ‘make sure all of the response is correct and factual without hallucinating’.”

    These people think they are geniuses thanks to just telling the AI not to mess up.

    Because these conversations happened in person, with a fair amount of shared context, I know they are being dead serious, and no one will dissuade them from thinking their “one weird trick” works.

    All the funnier when, inevitably, they get a screwed-up response one day and feel betrayed, because they explicitly told it not to screw up…

    But yes, people take “prompt engineering” very seriously. I have seen people proudly display a massively verbose prompt that often looked like way more work than just doing the thing themselves without an LLM. They really think it’s a very sophisticated and hard-to-acquire skill…



  • You don’t have to target every distribution; target a reasonably common glibc version, and of course the kernel, and you are covered.

    As a distribution platform themselves, they don’t have to sweat packaging N different ways; they package the way they want and bundle all the libraries (which is no different from how they do it on Windows, where they bundle so many libraries).

    They don’t get the advantage of the platform libraries and packaging, but that is how they treat Windows already because the library situation in Windows is actually really messy, despite being ostensibly a more monolithic ecosystem.








  • Yeah, they bought a modest, niche product with a likely viable business case, and then bet they could make it an everyman’s device for all their socializing and for experiencing events like sports and music…

    The people who actually wanted the device took a back seat while the company chased non-existent markets for it… Their aspirations were so impossibly high that a niche device could no longer justify itself against the money spent chasing that non-existent market… So something that should have let some VR nerds be happy and sustain the business, while the rest of the world shrugs and says “I don’t get it,” instead becomes “Obviously this is a failure of a concept and no one should bother doing this.”




  • Although police are evidently trained not to fire into a moving vehicle. If the vehicle is a danger, you have to use another vehicle to stop it. If you are on foot, just stay out of the path of possible danger (which the guy also got wrong by walking toward the front of the vehicle).

    Look at what happened in this case: when he killed her, did that stop the car? No, it sent the car accelerating uncontrolled down the road until a collision with a parked car stopped it. No amount of shooting at the car would have stopped it or made it less likely to hit him.

    Using a car to stop another car, yeah, they do that (and it’s risky enough as it is), but don’t shoot into a moving car; the risk is just too high.


  • Also kind of bad for VR that they bought Oculus, buried it under a ton of stuff no one asked for, and will likely kill it entirely for failing to be the everyman’s gateway to socialization they strangely imagined it would be.

    The true target market for Oculus is relatively niche, but it probably could have sustained a more modest Oculus. Meta’s demands exceed what that market can give them.

    Biggest hope for VR future right now is Steam Frame.