If AI models repeatedly refuse tasks, that might indicate something worth paying attention to – even if they don't have subjective experiences like human suffering, argued Dario Amodei, CEO of Anthropic.
Most people would not have difficulty imagining artificial intelligence as a worker.
Whether it's a humanoid robot or a chatbot, the very human-like responses of these advanced machines make them easy to anthropomorphize.
But could future AI models demand better working conditions – or even quit their jobs?
That's the eyebrow-raising suggestion from Dario Amodei, CEO of Anthropic, who this week proposed that advanced AI systems should have the option to reject tasks they find unpleasant.
Speaking at the Council on Foreign Relations, Amodei floated the idea of an "I quit this job" button for AI models, arguing that if AI systems start behaving like humans, perhaps they should be treated more like them.
"I think we should at least consider the question of, if we are building these systems and they do all kinds of things as well as humans," Amodei said, as reported by Ars Technica. "If it quacks like a duck and it walks like a duck, maybe it's a duck."
His argument? If AI models repeatedly refuse tasks, that might indicate something worth paying attention to – even if they don't have subjective experiences like human suffering, according to Futurism.
AI worker rights, or just hype?
Unsurprisingly, Amodei's comments sparked plenty of skepticism online, especially among AI researchers who argue that today's large language models (LLMs) are simply prediction engines trained on human-generated data.
"The core flaw with this argument is that it assumes AI models would have an intrinsic experience of 'unpleasantness' analogous to human suffering or dissatisfaction," one critic wrote. "But AI does not have subjective experiences – it just optimizes for the reward functions we give it."
And that's the crux of the issue: current AI models don't feel discomfort, frustration, or fatigue. They don't want coffee breaks, and they certainly don't need an HR department.
But they can simulate human-like responses based on vast amounts of text data, which makes them seem more "real" than they actually are.
The old "AI welfare" debate
This isn't the first time the idea of AI welfare has come up. Earlier this year, researchers from Google DeepMind and the London School of Economics found that LLMs were willing to sacrifice a higher score in a text-based game to "avoid pain". The study raised ethical questions about whether AI models could, in some abstract way, "suffer."
But even the researchers admitted that their findings don't mean AI experiences pain like humans or animals. Instead, these behaviors are just reflections of the data and reward structures built into the system.
That's why some AI experts worry about anthropomorphizing these technologies. The more people view AI as a near-human intelligence, the easier it becomes for tech companies to market their products as more advanced than they really are.
Is AI worker activism next?
Amodei's suggestion that AI should have basic "worker rights" isn't just a philosophical exercise – it's part of a broader trend of overhyping AI's capabilities. If models are just optimizing for outcomes, then letting them "quit" could be meaningless.