Pro@programming.dev to Technology@lemmy.world · English · 1 day ago
Do chatbots have a moral compass? Researchers turn to Reddit to find out. (news.berkeley.edu)
Marshezezz@lemmy.blahaj.zone · 1 day ago
No, they do what they’ve been programmed to do because they’re inanimate.

Electricblush@lemmy.world · 1 day ago
A better headline would be that they analyzed the embedded morals in the training data… but that would be far less clickbait…

Marshezezz@lemmy.blahaj.zone · 1 day ago
They’ve created a dilemma for themselves, cos I won’t click on anything with a clickbait title.

undefined@lemmy.hogru.ch · 1 day ago
Right? Why the hell would anyone think this? There are a lot of articles lately like “is AI alive?” Please, it’s 2025 and it can hardly do autocomplete correctly.