VOLUME 104
ISSUE 09
The Student Movement

Ideas

OpenAI’s ChatGPT

Gabriela Francisco


Photo by Public Domain

When you think about artificial intelligence, do you automatically think of “I, Robot,” in which robots designed to keep humans safe end up killing them instead, because one robot’s algorithm concludes that, left unchecked, humans will drive themselves to extinction? I do, but I also think of the other forms of AI that you and I use every day—getting places using our Maps app, opening our phones with our faces, unlocking our computers with our fingerprints, double-checking our essays on Grammarly, or searching for information on Google. According to NetApp, “Today, the amount of data that is generated, by both humans and machines, far outpaces humans' ability to absorb, interpret, and make complex decisions based on that data. Artificial intelligence forms the basis for all computer learning and is the future of all complex decision making.”

A new form of artificial intelligence that has become popular is conversational AI—meaning, not only can you give it a command, as with Amazon’s Alexa, but it can actually respond with a coherent sentence, as Apple’s Siri does. One AI that has become especially popular is OpenAI’s ChatGPT. A Time article comments that “the powerful artificial intelligence (AI) chatbot can generate text on almost any topic or theme, from a Shakespearean sonnet reimagined in the style of Megan Thee Stallion, to complex mathematical theorems described in language a 5 year old can understand.”

Of course, while we praise the technological advances of companies like OpenAI, we can always expect downsides. The article’s title leads readers to believe that the downside here is the harsh and cruel treatment of workers in sweatshops; that may be an issue elsewhere, but it is far from the actual issue of this particular story. Yes, the Kenyan workers were being paid less than $2 an hour, but their cost of living is not the same as here in the United States, and consequently their pay actually falls within the local average pay range of 27,704 KES to 126,949 KES per month, the equivalent of $224.27 to $1,027.68. At the time the story took place, there was no universal minimum wage in Kenya. But now, according to Bloomberg Tax, under “Order 2022, minimum wages range from 8,109.90 Kenyan shillings (US $67.36) per month to 34,302.75 Kenyan shillings per month, up from the previous range of 7,240.96 Kenyan shillings per month to 30,627.45 Kenyan shillings per month.”

We can debate whether it is America’s job to enforce higher wages when a country’s own government does not, or argue that we should not outsource our work to other countries at all (both worthwhile discussions), but that is not the point of this article, and dwelling on it would not do the story justice.

Sama, the company OpenAI outsourced this work to, only came into the picture because ChatGPT had a very bad habit of blurting out racist, sexist, and violent remarks. This happened because, in order to give ChatGPT such an expansive amount of knowledge and the ability to respond intelligently, OpenAI trained it on text drawn from the internet. So while ChatGPT had lovely things like Shakespearean sonnets and Bible verses to draw from, it equally drew from the darkest places on the internet, with nothing you can imagine left out. That material was wildly inappropriate for users to encounter, but impossible to remove without having humans manually comb through the entire training data set. The practical solution was to create another AI to do the combing: a detector trained to recognize content labeled as “bad,” which could then be built into the original system as a filter. To train that detector, it had to be given examples of the speech OpenAI did not want included, i.e. hate speech and descriptions of violence and sexual abuse. Once the new AI was trained to detect those forms of toxicity, it could filter them out of what users see. This is where Sama came in, employing around three dozen Kenyans to go through and tag snippets of text pulled from the darkest and most repulsive parts of the internet.
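The pipeline described above—humans label example snippets, a classifier learns from those labels, and the classifier then screens new text—can be sketched in miniature. This is a purely illustrative toy: the four-snippet dataset and the naive word-count scoring are invented for demonstration and bear no resemblance to OpenAI’s actual moderation models, which are vastly larger and trained on the thousands of human-labeled passages the article describes.

```python
# Toy sketch of label-then-filter moderation: human-tagged snippets train
# a simple word-statistics model, which then screens new text.
from collections import Counter

# Stage 1: human-labeled training snippets (the annotators' work).
labeled = [
    ("what a wonderful day for a walk", "safe"),
    ("thank you for your kind help", "safe"),
    ("i hate you and hope you suffer", "toxic"),
    ("you are worthless and stupid", "toxic"),
]

# Stage 2: count word occurrences per label (naive Bayes-style model).
counts = {"safe": Counter(), "toxic": Counter()}
totals = {"safe": 0, "toxic": 0}
for text, label in labeled:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

def score(text, label):
    """Likelihood of the text under one label, with add-one smoothing."""
    vocab = len(set(counts["safe"]) | set(counts["toxic"]))
    p = 1.0
    for word in text.split():
        p *= (counts[label][word] + 1) / (totals[label] + vocab)
    return p

# Stage 3: the trained model filters new text before users see it.
def is_toxic(text):
    return score(text, "toxic") > score(text, "safe")

print(is_toxic("you are stupid"))           # True  (flagged)
print(is_toxic("thank you for your help"))  # False (passes)
```

The point of the sketch is the division of labor: the statistical model is cheap to run at scale, but it only works because humans first read and tagged the raw examples—which is exactly the job that fell to the workers in Nairobi.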

As you can imagine, sitting at a desk for eight hours a day reading stories that depicted sexual abuse and graphic violence was degrading and mentally scarring for these three dozen Kenyan workers. Allegedly, each worker was expected to read and label anywhere from 150 to 250 passages per shift, each ranging from 100 to 1,000 words. Sama said mental health therapists were available to the workers on a one-to-one basis, but one worker stated that only group sessions were offered, and that it was difficult to attend them because pay depended on the number of snippets processed in a day.

The next issue addressed was that OpenAI allegedly also sent images to Sama to comb through and remove, which Sama’s workers began to do. Labeling, and even possessing, some of those images, however, is illegal. When Sama found this out, it terminated its work with OpenAI, which claims it never sent the images to Sama. Because Sama ended the contract, those workers were going to lose their jobs; to them, this work was a way to provide for their families. Time.com reports that “most of the roughly three dozen workers were moved onto other lower-paying workstreams without the $70 explicit content bonus per month; others lost their jobs.”

I can acknowledge that artificial intelligence is extremely important in our ever-growing and changing world. I can acknowledge that artificial intelligence helps us make leaps and bounds over hurdles in multiple facets of our lives like medicine, engineering, and more. The big question we have to ask ourselves is: is it worth having AI at the cost of people’s sanity? Will those three dozen Kenyans ever be the same after having to comb through that content for months? Do they sleep the same? Do they live their lives the same? I don’t know. If we’re smart enough to create AI that can do what ChatGPT does, aren’t we smart enough to obtain the training data sets in a way that avoids exposing people to the darkest parts of human existence that we see on the internet?

Mark 8:36 (KJV) says “For what shall it profit a man, if he shall gain the whole world, and lose his own soul?” I ask again, how important is it to make these amazing strides in technology that will no doubt alter the way we live our lives, at the cost of dramatically changing the mental health of the three dozen Kenyans, and others to come, if the method to clean up the knowledge base of these systems doesn’t change?


The Student Movement is the official student newspaper of Andrews University. Opinions expressed in the Student Movement are those of the authors and do not necessarily reflect the opinions of the editors, Andrews University or the Seventh-day Adventist church.