Bio field too short. Ask me about my person/beliefs/etc if you want to know. Or just look at my post history.

  • 0 Posts
  • 59 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • AFAIK, yes. But it doesn’t take effect until late this year. The synthetic THC industry is pretty damn big, though, and I’m holding out hope that we get some lobbying that supports things actual people want for once, and that it turns into a bill making it legal outright instead of a loophole in an annual farm bill.

    Call your senators and reps.


  • I keep asking, and have never seen one:

    Someone find me a recording of writing a real application – not a “vibe coding demo” – using one of these magic tools. Let’s ignore the complexities of scale or data integrity, or even error handling. Write a simple asteroids clone or calculator app. Something that doesn’t need to reach outside its tiny box.

    Build that in only an AI coding tool and see how long it takes to get a real product. (I get stuck here: so far, every demo I’ve seen halts before reaching MVP, though they do get something working faster than I could.)

    Afterwards, get a code review and fix any findings without breaking anything.

    And here’s a catch: If you want to say AI codes better than humans, then a human is never allowed to modify the code directly. They can only prompt.

    Show me that, and I might change my tune, but we’re nowhere close to it AFAIK, and I don’t see it ever happening. Until then, “AI coding” is at best a way to reduce time spent writing small atomic functions that can be easily verified by a human, or maybe starting a function that a human then fixes.

    Things that took an experienced team months to build can now be done by a single guy in a few days…

    I have yet to see this materialize outside of CEO townhalls or industry self-aggrandizement. If you’re an experienced programmer, can you go make me the demo I want and then review the code with a critical eye?

    I’m not saying that a noob can ask an AI to build Google and it will be done; but you’d better believe that an experienced programmer using AI will deliver weeks of high-quality work in a single day.

    And, I wonder, how does an intern or novice programmer become that experienced programmer if they never have to touch code‽ We’re headed for Idiocracy here: the people who knew how to get shit done, and why it worked, die off with no pipeline to replace them.

    Until then… “vibe coding” is just smoke and will result in terrible things happening in a few years as banks and companies start inflicting terrible, cheap, AI code on us all.

    Want to make tons of money? Go learn to be a security researcher. I’d be happy to be proved wrong.


  • I really like this comment. It covers a variety of use cases where an LLM/AI could help with the mundane tasks and calls out some of the issues.

    The ‘accuracy’ aspect is my second-greatest concern: an LLM agent that I asked to find a nearby Indian restaurant, and which then hallucinated one, is not going to kill me. I’ll deal, but be hungry and cranky. When that LLM (LLMs are notoriously bad at numbers) updates my spending spreadsheet with a 500 instead of a 5000, that could have a real impact on my long-term planning, especially if it’s somehow tied into my actual bank account and makes up numbers. As we/they embed AI into everything, the number of people who think they have money because the AI agent queried their bank balance, saw 15, and turned it into 1500 will be too damn high. I don’t ever foresee trusting an AI agent to do anything important for me.

    “Trust”/“privacy” is my greatest fear, though. The major players document that prompts are used to train the models. I can’t immediately find an article link because ‘chatgpt prompt train’ finds me a ton of slop about the various “super” prompts I could use, but here’s OpenAI’s own policy on how they use your input to train their models unless you specifically opt out: https://openai.com/policies/how-your-data-is-used-to-improve-model-performance/

    Note that this means when you ask for an Indian restaurant near your home address, OpenAI now has that address in its data set and may hallucinate that address as an Indian restaurant in the future. The result being that some hungry, cranky dude may show up at your doorstep asking, “where’s my tikka masala?” This could be a net gain, though; new bestie.

    The real risk, though, is that your daily life is now collected, collated, harvested, and added to the model’s data set, all without clear, explicit action on your part: using these tools requires accepting a ToS that most people will not really read or understand. Maaaaaany people will expose otherwise sensitive information to these tools without understanding that their data becomes visible as part of that action.

    To get a little political, I think there’s a huge downside on the trust aspect: these companies have your queries (prompts), and I don’t trust them to maintain my privacy. If I ask something like “where to get an abortion in Texas”, I can fully see OpenAI selling that prompt to law enforcement. That’s an egregious example for impact, but imagine someone querying those prompts (using an AI which might make shit up) and asking “who asked about anti-X topics” or “pro-Y”.


    My personal use of AI: I like the NLP paradigm of turning a verbose search query into other queries that are more likely to find results. I run a local 8B model that has, for example, helped me find a movie from my childhood that I couldn’t get Google to identify.

    There’s a use case here, but I can’t accept it as a SaaS-style offering. Any modern gaming machine can run one of these LLMs and get the value without the privacy tradeoff.

    Adding agent power just opens you up to having your tool make stupid mistakes on your behalf. These kinds of tools need oversight at all times. They may work 90% of the time, but they will eventually send an offensive email to your boss, delete your whole database, wire money to someone you didn’t intend, or otherwise make a mistake.


    I kind of fear the day that you have a crucial confrontation with your boss and the dialog goes something like:

    Why did you call me an asshole?

    I didn’t; the AI did, and I didn’t read the response as closely as I should have.

    Oh, OK.


    Edit: adding to my use case: I’ve heard LLMs described as “a blurry JPEG of the internet”, and to me this is their true value.

    We don’t need an 800B model; we need an easy 8B model that anyone can run that helps turn “I have a question” into a pile of relevant, actual searches.
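    For what it’s worth, that idea fits in a few lines. A minimal sketch, not a recommendation: it assumes a local Ollama server on its default port (localhost:11434) with some small model already pulled; the model name `llama3:8b` and the prompt wording are my own placeholders.

```python
# Sketch: use a small local model to expand one verbose question into
# several concrete web-search queries. Assumes an Ollama server at
# localhost:11434; the model name "llama3:8b" is a placeholder.
import json
import urllib.request

PROMPT = (
    "Rewrite the following question as 3 short web search queries, "
    "one per line, with no numbering or commentary:\n\n{question}"
)

def parse_queries(text: str) -> list[str]:
    """Split the model's raw reply into clean, non-empty query lines,
    dropping leading bullet markers like '-' or '*'."""
    cleaned = (line.strip().lstrip("-*").strip() for line in text.splitlines())
    return [line for line in cleaned if line]

def expand_query(question: str, model: str = "llama3:8b") -> list[str]:
    """Ask the local model to turn one question into search queries."""
    body = json.dumps({
        "model": model,
        "prompt": PROMPT.format(question=question),
        "stream": False,  # ask for one JSON reply instead of a token stream
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_queries(json.load(resp)["response"])
```

    Something like `expand_query("movie from my childhood about a talking dog")` would then hand back a few short queries you can feed to a normal search engine.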



  • Similarly, my fantasy is that if I won the lottery, or otherwise became independently wealthy, I’d do a ton of different entry-level jobs to find one that hit as a passion.

    Construction worker, stagehand (I’ve already done retail), food service, intern for anything that requires a degree I don’t have, etc.

    I like my current job but if I didn’t need the paycheck then I’m not sure I’d stay. I might stick around if I could negotiate terms and only do the parts I liked, though.

    I wish I could learn a little about everything, but our culture pushes us to commit and be deep instead, and then we get stuck in a job that used to be a fun hobby.


  • In the nicest possible way, and only judging from this post, you are part of the problem. Hear me out:

    They don’t actually need you. Either party. There’s a solid base of voters who are going to vote blue or stay home, or vote red or stay home. If you require being courted, then you’re either effectively random, staying home, or leaning toward one side over the other.

    You’re possibly upset that none of your choices are good. That’s pretty true: ‘both sides’ give you reasons not to vote for them. You need to help fix that: pick a side, whichever one you lean toward, and go make the choices better.

    Local politics (at the precinct, county, and state levels) decide how we choose our candidates in the larger races, by deciding who represents us on those larger stages internally to the party. Example: the general public was not polled for the DNC chair election; it was only people put into DNC leadership, who were voted for, several steps down, by people at the precinct level. https://en.wikipedia.org/wiki/2025_Democratic_National_Committee_chairmanship_election

    Is there corporate bullshit here? Almost certainly. Can it be overcome? Only if people are paying attention and care to get involved. Voting only in November elections and expecting the candidates to cater to you specifically will not resolve the problems.

    The candidates don’t need to work for your vote. You need to work for better candidates. Or shut up and vote for the least harm.



  • korazail@lemmy.myserv.one to 196@lemmy.blahaj.zone · Corruption rule (edited · 1 month ago)

    I don’t have a link, but I remember an exposé a few years ago where some politician sold out their constituents for like 10k-100k in campaign contributions.

    The response was along the lines of, ‘why don’t we just make a Kickstarter to buy them back’

    Obviously this results in a bidding war we probably can’t win… and it’s, in theory, what a PAC is supposed to be; but it might be useful both in establishing a given politician’s price, and in driving up the cost of corruption.

    That feels very ‘free market’ to me.

    edit: fixed autocorrect typo.



  • While you’re not exactly wrong, there are multiple types of cameras.

    The ones at the convenience store or watching the street in front of a business are probably CCTV, and the store only has so much history stored and, most importantly, it’s only accessible with a warrant.

    Speed trap cameras are maybe isolated and only deliver data to the police… I’m not aware of how they work and they predate the ‘hardware as a service’ model we have to live with today.

    Flock and similar kinds of cameras, though, are a service that your local government or businesses subscribe to. They track vehicles (maybe people/faces; who knows, it’s a black box) and other metadata across the country and collate that data centrally. They are not accountable to taxpayers; they have no ToS for the people they are tracking, and thus no way for you to request or delete your data; the data at rest is not subject to many of the regulations that would apply to data on a government server; and accessing it doesn’t require a warrant. While that data is theoretically “owned” by the local jurisdiction or business, there appear to be no safeguards preventing the federal government from querying it all at once, or any hacker with a stolen credential from doing the same.

    Notably, Flock’s privacy policy doesn’t include the actual humans and cars it is monitoring, only the ‘administrator, customers, and team creators’ that access the data. Police privacy is maintained, but not yours.

    This “infrastructure hardware” is owned by the corporations, not your government. We have corporations acting as government intelligence agencies and if that doesn’t frighten you, it should: They aren’t beholden to the same laws and restrictions that come with that scope and scale.

    Use a FOIA request to find out if a given camera is owned by your city/state. If not, show up at your townhall and demand it be accountable as if it were.


  • I wasn’t trying to be antagonistic, just defending “gross” foods. I absolutely agree that one should know what they are doing before inflicting it on others… but if cooking for yourself or others who are in on the adventure, there’s no harm (except maybe nausea) in trying things without knowing what they are.


  • I’d be unsure how to prepare it in a way that my American palate would enjoy it, but fermented fish as Asian ‘fish sauce’ is mighty tasty when used correctly, so it’d be worth a shot. My google search (I was pretty sure it was similar to lutefisk, but wasn’t sure how) had an AI overview question of ‘is it illegal to open surströmming indoors?’, which I thought was funny.

    So many things taste great after a fermentation that we don’t always notice the process: cheese, sourdough, beer/wine/liquor, kimchi, (some kinds of) pickles, etc, including meats such as salami and chorizo. Why not a fish?

    I may be misreading things, but if you’re going to pick on a regional specialty… pick on durian :P I’m assuming it’s like coriander, in that some find it pleasant and others disgusting based on their genetics. I’m in the latter category for durian. Foods for me are like pokemon: Gotta try 'em all.

    .

    Some only once.


  • Not antagonistically speaking here.

    Do you think your input is not being used to train LLMs when posting on Lemmy? It’s publicly visible without an account.

    I’d be shocked if there wasn’t either a scraper, or a whole federated instance, harvesting Lemmy comments for the big AI companies.

    The only difference is that no one is trying to make money off providing that content to them. A big part of the reddit exodus was that reddit started charging for api calls to make cash off the AI feeding frenzy, which broke tools the users liked. With lemmy, there’s no need for a rent-seeking middle man.


  • I tripped over this awesome analogy that I feel compelled to share. “[AI/LLMs are] a blurry JPEG of the web”.

    This video pointed me to this article (paywalled).

    The headline gets the major point across. LLMs are like taking the whole web as an analog image and lossily digitizing it: you can make out the general shape, but there may be missing details or compression artifacts. Asking an LLM is, in effect, googling your question in more natural language… but instead of getting source material or memes back as a result, you get a lossy version of those sources, and it’s random by design, so ‘how do I fix this bug?’ could result in ‘rm -rf’ one time and something that looks like an actual fix the next.

    Gamers Nexus just did a piece about how YouTube’s AI summaries could be manipulative. While I think that’s a real possibility and the risk is real (go look at how many times elmo has said he’ll fix grok for real this time), another big takeaway was how bad LLMs still are at numbers, or at tokens that have data encoded in them: there was a segment where Steve called out the inconsistent model names, and how the AI would mistake a 9070 for a 970, or make up its own models.

    Just like googling a question might give you a troll answer, querying an ai might give you a regurgitated, low-res troll answer. ew.



  • From later in the article:

    Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

    I think this is the big issue with ‘ai cheating’. Sure, the LLM can create a convincing appearance of understanding some topic, but if you’re doing anything of importance, like making pizza, and don’t have the critical thinking you learn in school then you might think that glue is actually a good way to keep the cheese from sliding off.

    A cheap meme example for sure, but think about how that would translate to a Senator trying to deal with more complex topics… actually, on second thought, it might not be any worse. 🤷

    Edit: adding that while critical thinking is a huge part of it, it’s more the “you don’t know what you don’t know” that tripped these students up, and that’s the danger of using an LLM in any situation where you can’t validate its output yourself, as opposed to using it as a shortcut for boilerplate prose or code.


  • korazail@lemmy.myserv.one to Microblog Memes@lemmy.world · Wise words (2 months ago)

    In addition to the word compliance, called out by nimble, there’s also the word nervous.

    When do you laugh ‘nervously’? When the joke made was from someone with power over you; when it was racist, sexist, or otherwise crass; or maybe when you just don’t want to be near the person making it.

    In those situations, the nervous laughter may be interpreted by the other person as agreement, acceptance, etc. while it is anything but.

    It will take a force of will to not just chuckle and let it slide, and instead to push the issue, but it may result in the other person actually thinking about it and realizing their ‘joke’ was unacceptable.


  • And this is why Digit wanted a clarification. Let’s make a quick split between “Tech Bro” and Technology Enthusiast.

    I’d maybe label myself a “tech guy”, and forgo the “bro”, but I could see other people calling me a “tech bro”. I like following tech trends and innovations, and I’m often an early adopter of things I’m interested in, if not bleeding edge. I like talking about tech trends and will dive into subjects I know. I’ll be quick to point out how machine learning can be used in certain circumstances, but am loudly against “AI”/LLMs being shoved into everything. I’m not the CEO or similar of a startup.

    Your specific and linked definition requires low critical-thinking skills, a big ego, and access to “too much” money. That doesn’t describe me, and probably doesn’t describe Digit’s network.

    Their whole point seemed to be that the tech-aware people in their sphere are antagonistic to the idea of “AI” being added to everything. That doesn’t deserve derision.