Which way should we go on AI?
Recently a diarist wrote, or had written for him, a piece about what an AI would like to say to the DK community, particularly those who have doubts about AI.
This is what the AI said when I asked it what it wanted to say to the DK community— in particular to those who have been responding to the whole idea of AI in hostile and dismissive ways.
www.dailykos.com/...
Ironically, the author asked the AI what it wanted to say to the doubters, as if the AI were capable of having a desire to express itself to the DK community, despite a warning from the AI itself, in its reply to a commenter, not to anthropomorphize AI.
www.theguardian.com/…
There is an army of AI tech workers tasked with rating AI output for accuracy (catching hallucinations) and appropriateness, and with flagging responses for racism, violence, sexually abusive material, deepfakes, etc.
These workers complain of grueling deadlines, poor pay and opacity around the work done to make AI seem intelligent. They worry about what they might miss, and have missed, in their task. They are given dozens of tasks each day and are expected to complete each one within 10-15 minutes, regardless of whether they have any expertise in the field a task relates to, such as medical information or racist dog whistles (which are meant to go unheard).
One worker who joined GlobalLogic [a hiring contractor for AI tech] in spring 2024 and has worked on five different projects so far, including Gemini and AI Overviews, described her work as being presented with a prompt – either user-generated or synthetic – and with two sample responses, then choosing the response that aligned best with the guidelines, and rating it based on any violations of those guidelines. Occasionally, she was asked to stump the model.
She said raters are typically given as little information as possible, or that their guidelines change too rapidly to enforce consistently. “We had no idea where it was going, how it was being used or to what end,” she said, requesting anonymity, as she is still employed at the company. “For me, that gap between what’s expected of us and what we’re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.”
AI workers distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality. One AI worker, on Amazon Mechanical Turk, explained that while she doesn’t mistrust generative AI as a concept, she also doesn’t trust the companies that develop and deploy these tools. For her, the biggest turning point was realizing how little support the people training these systems receive.
In closing, the AI industry prioritizes safety only until safety gets in the way of releasing a model quickly (a race to the bottom?), or becomes a threat to profitability.
“We joke that [chatbots] would be great if we could get them to stop lying,” said one AI tutor who has worked with Gemini, ChatGPT and Grok, requesting anonymity, having signed nondisclosure agreements.
AI is nowhere near ready for primetime and is best avoided, or at least taken with far more than a few grains of salt. If you are looking for information, there’s little value in getting it from an AI whose every answer must be double-checked to verify its veracity.