I imagine the overlap between DDG users and people who fucking hate AI is bigger than average, but I hope at least that this is somewhat reflective of general public sentiment.
I don’t hate AI as a tool, especially in narrow, high-impact use cases.
I work in medicine. I have already seen instances of AI, used as a tool by professionals, helping to literally save lives. The applications in medical research (and probably many other scientific fields) are genuinely exciting. AlphaFold won a Nobel for a reason. Insanely cool projects like the Human Cell Atlas wouldn’t be possible without it.
The problem is stupid-ass ‘general’ chatbots being forced down everyone’s throats so corpos can hoover up even more of our fucking data and sell more fucking ads.
Even these chatbots can be useful, but I won’t use any that collect data or sell ads.
In this regard I think DDG’s approach is pretty reasonable. You can turn it on or off, you can use it without an account, and all queries are anonymized before being sent to the model.
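For anyone curious what “anonymized before being sent” means mechanically, here’s a minimal sketch of the general proxy pattern. Everything in it (the endpoint URL, the field names) is hypothetical, not DDG’s actual API:

```python
# Hypothetical anonymizing-proxy sketch -- not DuckDuckGo's real code.
# The idea: the proxy terminates the user's connection, strips
# identifying metadata, and forwards only the query text, so the model
# provider never learns who asked.
import uuid

import requests

MODEL_ENDPOINT = "https://model-provider.example/v1/chat"  # placeholder URL

def forward_anonymized(user_query: str) -> str:
    payload = {
        "request_id": str(uuid.uuid4()),  # throwaway ID, new every request
        "query": user_query,              # no account, no cookies, no history
    }
    # The provider sees the proxy's IP and these two fields, nothing else:
    # no user IP, no auth token, no fingerprintable headers.
    resp = requests.post(MODEL_ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["answer"]  # "answer" is a made-up response field
```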
I get that people have a reflexive “fuck AI” reaction because of the way it has been deployed in society. I truly understand it. But honestly that’s more of a capitalism problem than an AI problem. AI is a tool like a hammer. Just because evil corporate pricks are using it to bash our heads in doesn’t mean we should hate hammers, it means we should hate evil corporate pricks.
This is where terminology is an issue. Yes, AlphaFold and ChatGPT are both “AI”, but they’re very different technologies underneath. Most people who say “fuck AI” usually just mean the generative AI technologies behind ChatGPT, Sora, and the like.
The common person doesn’t understand this difference, though, and probably isn’t even aware of AlphaFold.
Let’s all agree to use the term GenAI for chatbots and other bullshit generators.
I asked grok who said the correct term is “MechaHitler”
Transformer architectures similar to those used in LLMs are the foundation of AlphaFold 2 and of medical vision models like MedViT. There’s not really a clean way to distinguish “good” and “bad” AI by architecture; it’s all about the use.
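To make that concrete, here’s a toy PyTorch sketch (mine, illustrative only; none of these projects use this exact block): a self-attention layer doesn’t care what its tokens represent.

```python
# The same attention block processes "tokens" whether they encode words,
# image patches, or protein residue features.
import torch
import torch.nn as nn

block = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

text_tokens   = torch.randn(1, 128, 64)  # e.g. 128 word embeddings (LLM)
image_patches = torch.randn(1, 196, 64)  # e.g. 14x14 patches of a scan (ViT)
residue_feats = torch.randn(1, 300, 64)  # e.g. 300 amino-acid features

for tokens in (text_tokens, image_patches, residue_feats):
    print(block(tokens).shape)  # identical computation in all three cases
```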
It’s a tool. There aren’t any good and bad hammers. Someone using a hammer to build affordable housing is doing a good thing. Someone using a hammer to kill kittens is doing a bad thing. It’s not the fucking hammer’s fault. But it’s also not surprising that if 95% of the people buying hammers are using them to kill kittens and post videos of it on Instagram, to the point that manufacturers start designing hammers with specialized kitten-killing features and advertising them for that purpose non-stop, people will get pretty fucking angry at all the stores and peddlers selling these fucking hammers on every street corner.
And that’s where we are with “generative AI” right now. Which is not really AI, by the way: none of this has any “intelligence” of any kind; that’s just a very effective sales tactic for a fundamentally interesting but currently badly abused technology. It’s all just the world’s largest financial grift. It’s not the technology’s fault.
I work for a health tech AI company and agree, but I also agree that most AI can fuck right off and doesn’t need to be in every god damn thing.
This.
I am anti generative AI. I am aggressively anti generative AI. Years ago I saw someone build an AI to tell whether a mole was cancerous or not (the model in question was flawed because it learned that if there was a ruler in the photo there was cancer, but that’s not the point). An image model trained exclusively to distinguish cancerous moles from benign ones would be a useful first-pass tool you could run on your phone before going in for a real test.
The same is true for applications in psychology, where, for example, early-warning systems are being trialed and studied. But the corporate world had to focus on forcing AI into everyday applications instead of science and research.
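For the curious, a hypothetical sketch of what such a mole classifier looks like (a standard transfer-learning setup, not the actual model from the anecdote). The ruler failure is a spurious correlation, which is a data problem rather than an architecture problem:

```python
# If malignant training photos disproportionately contain a
# dermatologist's ruler, the network can learn "ruler => cancer"
# instead of anything about the lesion itself.
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),  # tight crops around the lesion are one
                                 # cheap way to suppress background cues
    transforms.ToTensor(),
])
# The real fix lives in the dataset: mask or remove rulers, or balance
# them across both classes, then re-validate on ruler-free images.
```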
“Literally save lives.” Bullshit.
I completely get your skepticism, but I was being serious. Yes, at least one life within my organization has literally been saved with the help of an AI drug discovery tool (used by a team of geneticists). I’m not going to get into specifics because nothing from the case has been released publicly (I’m sure a case report will pop up at some point) and I don’t want to get my ass fired, but it’s not a joke that these tools can be incredibly powerful in medicine when used by human experts, including helping to save lives.
My friend does diabetes research, and he used machine learning to analyze tissue samples; the model he built is way more accurate than humans looking at the same material. There are definitely good use cases for ML in medicine.
Yes, but ML is not what people mean when they say “AI” now. They mean LLMs.
I’m seeing 79,264 votes with the same percentages now.
Technically, with 93%, it’s safe to say that we all feel the same about AI.
Yeah, the pro-AI vote is getting close to Lizardman’s Constant.
70k+ votes is a good representation of the user base. Plenty of data points to extrapolate from, and all of them point to scrapping AI. Good. Save some money and skip the slop trough.
It’s not a survey. It’s an ad. It’s an ad for noai.duckduckgo.com. The fact that we’re thinking about it and talking about it means it was a good ad. But it’s just an ad. The numbers are entirely meaningless.
Nothing about this ad says that they are scrapping AI. They aren’t. They still provide AI by default. This is a way for the end user to opt out of that default.
I answered yes to see what happened. It tells me “Thanks for voting — You’re into AI. With DuckDuckGo, you can use it privately. Try Duck.ai”
No idea where they’re going to take it from here, just wanted to provide some insight on the other option.
Next up, from DDG:
“Oops, looks like we lost the data of the voting, so we’ll just assume YES won because everyone loves Copilot AI, which is the best AI and has nothing to do with us having a contract with Microsoft!”
:D
Edit: 15 hours later it is still at 93%. I am getting suspicious this isn’t real.
It was 94% when I first looked at it a few days ago.
Well. Glad to see I don’t need to bother.
Good, maybe now they can make it opt-in.