• 0 Posts
  • 11 Comments
Joined 3 years ago
Cake day: July 16th, 2023


  • x1gma@lemmy.world to Selfhosted@lemmy.world · Certificates...ugh · 1 month ago

    The easiest way would be to set up Caddy to use ACME on the servers, and never care about certificates again. See https://caddyserver.com/docs/automatic-https.

    If you insist on your centralized solution, which is perfectly fine imo, just place the certificates in a directory accessible to Caddy, and keep the permissions minimal so that the keys are readable only by authorized users.

    If the certificates are only for Caddy, there’s no reason to mess around in system folders. A sketch of both options is below.
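
    For illustration, a minimal Caddyfile sketch of both options; the hostnames, backend ports, and certificate paths are placeholders, not anything from the original post:

    ```
    # Option 1: automatic HTTPS. Caddy obtains and renews the certificate via ACME on its own.
    service.example.com {
        reverse_proxy localhost:8080
    }

    # Option 2: centrally managed certificates. Point Caddy at the files and keep the
    # key readable only by the caddy user (e.g. mode 0640, group caddy).
    internal.example.com {
        tls /etc/caddy/certs/internal.example.com.crt /etc/caddy/certs/internal.example.com.key
        reverse_proxy localhost:8081
    }
    ```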


  • No, I think the distinction is already made and there are words for that. Adding additional terms like “generators” or “pretend intelligence” does not help in creating clarity. In my opinion, the current definitions/classifications are enough. I get Stallman’s point, and his definition of intelligence seems to be different from how I would define intelligence, which is probably the main disagreement.

    I definitely would call an LLM intelligent. Even though it does not understand context the way a human could, it is intelligent enough to create an answer that is correct. Doing this by basically pure stochastics is pretty intelligent in my book. My car’s driving assistant, even if it’s not fully self-driving, is pretty damn intelligent and understands the situation I’m in, adapting speed, understanding signs, reacting to what other drivers do. I definitely would call that intelligent. Is it human-like intelligence? Absolutely not. But for this specific, narrow use case it works pretty damn well.

    His main point seems to be breaking the hype, but I do not think that it will or can be achieved like that. This will not convince the tech bros or investors. People who are simply uninformed will not understand an even more abstract concept.

    In my opinion, we should educate people more on where the hype is actually coming from: NVIDIA. Personally, I hate Jensen Huang, but unfortunately he’s been doing a terrific job as CEO of NVIDIA. They’ve positioned themselves as the hardware supplier and infrastructure layer for the core components of AI, and are investing in and partnering with AI providers, hyperscalers, and other component suppliers in a circle of cash flow. Any investment they make, they get back multiplied, which also boosts all other related entities. The only thing that went “10x” as promised by AI is NVIDIA stock. They are currently bringing capex to a whole new level.

    And that’s what we should be discussing more, instead of clinging to words. Every word that any company claims about AI should automatically be assumed to be a lie, especially any AI claim from a hyperscaler, AI provider, or hardware supplier, and especially-especially from NVIDIA. Every single claim they make directly relates to revenue. Every positive claim is revenue. Every negative word is loss. In this circle of money they are running, we’re talking about thousands of billions of USD. People have done way worse for way less money.



  • I disagree with this post and with Stallman.

    LLMs are AI. What people are actually confused about is what AI is and what the difference between AI and AGI is.

    There is no universal definition of AI, but there are multiple definitions that are mostly very similar: AI is the ability of a software system to perform tasks that typically involve human intelligence, like learning, problem solving, decision making, etc. Since the basic idea is that artificial intelligence imitates human intelligence, we would need a universal definition of human intelligence, which we don’t have.

    Since this definition is rather broad, there is an additional classification. ANI, artificial narrow intelligence, or weak AI, is an intelligence inferior to human intelligence, which operates purely rule-based and for specific, narrow use cases. This is what LLMs, self-driving cars, and assistants like Siri or Alexa fall into. AGI, artificial general intelligence, or strong AI, is an intelligence equal or comparable to human intelligence, which operates autonomously, based on its perception and knowledge. It can transfer past knowledge to new situations, and it can learn. It’s a theoretical construct that we have not achieved yet, no one knows when or if we ever will, and unfortunately it is also one of the first things people think of when AI is mentioned. ASI, artificial super intelligence, is basically an AGI with an intelligence superior to a human’s in all aspects. It’s the apex predator of all AI: better, smarter, and faster at anything than a human could ever be. Even more theoretical.

    Saying LLMs are not AI is plain wrong, and if our goal is a realistic, proper way of working with AI, we shouldn’t be doing the same as the tech bros.


    It is not a lie but a widely accepted and agreed-upon definition that precedes LLMs by years, created by people way smarter than you and me combined, who have spent more time in AI research than most people here.

    An LLM is an ANI (artificial narrow intelligence), and any ANI is an AI, the broader term for any artificial intelligence. An ANI does not operate on intelligence the way a human does; its intelligence is a set of rules. A search engine algorithm is a set of rules. Your phone’s keyboard is a set of rules. T9 typing on your old Nokia is a set of rules and can be classified as an ANI. An LLM has rules for how it spits out the next token.

    There is no universal definition of AI, because we would first need a universal definition of human intelligence. Since there is no single universal definition, you’re free to disagree with that definition. But calling it disinformation or a lie, or claiming that no computer program is intelligent, is simply wrong.


  • In all honesty, the constant rambling against any service provider when something goes wrong is tiring. as. fuck.

    “I’m not using anything, I’m self-hosting everything and no Cloudflare can take ME down!” - hot stuff, buddy; let’s talk again when at some point you have something interesting and get hugged to death. Or when something in your DIY self-hosted stack breaks or gets taken down by an attack.

    “I’m not using (big company name) but (small startup name), and I’m not having any issues!” - wow, great, obviously the goal of the company is to stay as small as they are and keep supplying your service. Let’s talk again too, when at some point your friendly startup gets sold or grows. Oh, btw, a smaller company usually also means fewer resources.

    “That’s all because they are using centralized services, we need to federate everything to not have a single point of failure” - federation alone won’t help if the centralized service has several orders of magnitude more resources. Any single Cloudflare exit node can probably handle several times the load of the entire fediverse. We’ve seen Lemmy instances go down all the same, and this will happen with any infrastructure.

    I’m not supporting big companies having that much market share and that amount of control over the Internet as a whole. But have at least some respect, from a technical standpoint, for the things they’ve built. I’d say way over 80% of the people here haven’t seen infrastructure, traffic, and software on a scale that’s even remotely close to the big players, but are waffling about how this or that is better and how those problems should be solved and handled. Sit the fuck down.


    No matter how well reasoned, allegedly fit for purpose, or how much something pretends to be it, we shouldn’t be trusting those promises, especially not from people we don’t know. That doesn’t end well, neither for the free candy van nor for cybersecurity. Trust like that has been responsible for a lot of attacks across various vectors and for projects going wrong.


    “On the other hand, detrimental reliance is a tort and if someone is relying on an app for a specific safety function, the app could be civilly liable if it fails its function in some way.”

    Yes, if the app were any kind of official tool.

    “Imagine if you had this attitude about an insulin use tracker/calculator that sometimes gave wildly wrong insulin dose numbers.”

    Yes, and that’s why regulations exist for those kinds of things, to prevent exactly that. There is no regulation for the ICE tracker.

    “Maybe down the road, it’s decided that aiding and abetting ICE is a crime, and providing misinformation intentionally or unintentionally is a criminal act. App developer dude could be criminally liable if he knew or ought to have known he had vulnerabilities. You know, in your New Nuremberg trials that you are going to get sometime in the next decade or so.”

    If such a regulation did happen down the road, app developer dude would be forced to either comply or stop operations.


  • x1gma@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 6 months ago

    So fucking what? He is not being paid in any way, and anything he does on that project is volunteer work. What if he was not able to do anything on that project due to regular work, vacation, personal issues, or simply because he didn’t want to?

    If you don’t pay for a service, you don’t get to decide what people do. Deal with it.


  • x1gma@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 6 months ago

    Honestly, apart from the report being potentially wrong, the researcher seems pretty entitled as well. Like, good intentions and all that, but he’s given him a week to fix the issue, while the usual practice in responsible disclosure is 90 days. We’re not talking about a company here, it’s some single random dude providing the app.

    This really sounds like some personal issue written down for public drama, while making himself look ridiculous by not knowing his own shit properly.