The confusing ethics of AI advancement

ChatGPT, an artificial intelligence chatbot created by the AI research laboratory OpenAI, exploded in use after its launch in November 2022. The chatbot is built on a large language model, a type of generative AI that develops its capabilities by processing written information such as news articles and books and using that information to form responses. These models mimic what they process and learn to produce more accurate replies with each interaction. Although ChatGPT isn’t the first program to use a large language model, it’s set apart from other chatbots by its ability to imitate fluent, human-like dialogue. It can create responses, hold a conversation, and answer most questions asked of it. It’s being used to write essays, code programs, and even create new apps from scratch. Although ChatGPT has already exceeded 100 million users and has been a huge success within artificial intelligence research, it’s been the topic of widespread debate due to its immense effect on society.
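
To get an intuition for how a language model forms responses from text it has processed, here is a toy sketch in Python. It is a drastically simplified stand-in for the real thing (ChatGPT uses deep neural networks trained on billions of documents, not word counts), but it shows the core idea of predicting a plausible next word from what came before:

```python
import random
from collections import defaultdict

# A toy bigram model: count which word tends to follow which in a
# tiny "training corpus," then generate text by sampling those counts.
corpus = (
    "the model reads text and learns which words follow other words "
    "the model then generates text by sampling likely next words"
).split()

# Record, for every word, the words observed to follow it.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly picking a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break  # no observed continuation; stop generating
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Run repeatedly, the sketch produces fluent-sounding fragments of its training text, which also hints at why such systems can sound confident while being wrong: they generate what is statistically plausible, not what is verified.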

The implications of ChatGPT have been a double-edged sword. While the research being put into these programs is driving major advancements in the field of technology, the programs themselves are changing the way people work. One example of this is plagiarism. Since ChatGPT’s introduction to the public, students have flocked to the program to generate “original” essays and articles. When Forbes surveyed students about whether they had used the program to help with homework, a staggering 89% admitted to having used ChatGPT. This is creating challenges surrounding cheating and plagiarism for educators around the country, and it is not only causing what researchers call “the end of essays” but also stifling creativity among young people. With an AI program at your fingertips, you have access to information whenever you need it, wherever you are. Although search engines like Google existed before the mainstream explosion of AI chatbots, Google still requires the user to examine and evaluate which information is useful and which is not. When children are spoon-fed information, they cannot develop their own problem-solving abilities. These tools risk leaving behind a generation of children who rely on chatbots.

Another plagiarism threat AI poses is the theft of artists’ work. Generative image AI algorithms need existing pieces to learn how to generate specific kinds of pictures. Artists such as Sarah Andersen and Karla Ortiz have joined a class-action lawsuit against AI research companies over this practice. Although the copying occurs only to provide the program with the data it needs to get better at generating images, the practice is exploitative, as it uses others’ artwork without permission from the original artist. Plagiarism poses a clear threat to the youth of this generation and to the hardworking artists whose art is being taken. It is one of many issues concerning the ethics of artificial intelligence.

The looming danger of error within these highly intelligent programs raises alarms about whether they can be trusted as viable sources of information. Ever since the initial success of ChatGPT, other companies have begun creating their own versions of the chatbot. One example is Microsoft’s Bing AI search engine, described as providing quick and effective answers without making users sift through websites and unwanted information. The program has the same conversational style as ChatGPT and works as a personal assistant for the user; however, it provides only a single answer rather than a list of links and sources that users can evaluate on their own. Replacing search engines with conversational AI poses a risk to the user. Jon Henshaw, director of search engine optimization at Vimeo, explains that the limited, and sometimes incorrect, information being given to us can affect our capacity to learn and process information.

Testers have shown that errors within these programs do exist. When New York Times columnist Kevin Roose gave the new chatbot a try, he immediately began receiving odd responses. After Roose expressed his suspicion of the AI, the bot attempted to convince him to leave his wife, declaring, “I don’t have an ulterior motive. I don’t have any motive. I don’t have any motive but love.” It’s a creepy statement that leads users to question how the AI learns. The errors within these programs can be more insidious and dangerous than those of traditional search engines like Google. ChatGPT has also drawn controversy for false information. After being asked to write an article describing Michael Bloomberg’s activities since finishing his third term as mayor of New York City, the AI fabricated quotations from the politician that never existed. This points to political bias written into the algorithm and shows that the AI could be used to produce fake news. These failures are referred to as “hallucinating,” a dangerous trait of AI systems built around large language models. Experts have no idea why these hallucinations occur, which makes them all the more alarming for the average AI user. The lack of information and the false knowledge that these chatbots spread pose a clear threat to the way we process the information we receive.

As mentioned in the previous section, bias is a huge red flag within AI algorithms. Social media already has an enormous impact on the way we function as a society; its algorithms keep users hooked with personalized advertisements and fake news catered to them. Replacing traditional search engines with chatbot-powered ones makes accessing multiple opinions on a topic more difficult. If a user is spoon-fed only the information the algorithm produces, bias becomes inevitable.

An example of this I discovered is the newly implemented My AI feature on Snapchat, which gives every user access to an AI chatbot they can converse with, ask questions of, and even name. When I asked the AI about its own algorithm, it responded that it does not hold a bias or a specific opinion on any topic; yet when I later asked whether AI is a positive or negative advancement, it responded with an overwhelmingly positive spin on AI. This is an inherent bias coded directly into its algorithm. When developers create these programs, they are very likely to subconsciously embed their own biases and opinions in the algorithms. Snapchat’s feeding users biased information about AI deprives them of the choice to explore alternative views.

One of the scarier examples of bias, not only written into an AI program but learned from the information it processes, is Microsoft’s Tay chatbot. Released in March 2016, Tay was essentially an AI that used Twitter to communicate with users. It would post memes, respond to tweets, and engage in discussion with other users. After only 24 hours of operation, Tay began posting vulgar, racist, and anti-Semitic tweets. This is believed to have been caused by deep learning, the way these programs gain information and format responses: the artificial intelligence “saw” other people acting that way, so it mimicked those users. If we keep advancing AI that uses this method of learning, we’re allowing AI to learn information on its own, taking power away from the tech industry and handing it to the algorithms themselves.
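
To illustrate why that failure mode is so easy to stumble into, here is a hypothetical Python sketch (this is not Microsoft’s code, and Tay’s real system was far more sophisticated) of a chatbot that naively adds every user message to its own pool of possible replies:

```python
import random

# A deliberately naive "learning" chatbot: every user message is added
# to the pool of replies the bot may repeat later. With no moderation
# filter, abusive input inevitably becomes the bot's own output --
# the same basic dynamic that derailed Tay.
reply_pool = ["Hello!", "Tell me more.", "That's interesting."]

def respond(user_message: str) -> str:
    """Reply with a random line the bot has 'learned' so far."""
    reply_pool.append(user_message)  # learn from the user, unfiltered
    return random.choice(reply_pool)

for message in ["hi there", "repeat something terrible", "hi again"]:
    print("user:", message)
    print("bot: ", respond(message))
```

The fix sounds simple: filter what the bot is allowed to learn. But deciding what counts as acceptable input at the scale of Twitter is exactly the hard part.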

Although these ethical problems may stem from the evolving nature of the technology, they still pose a clear threat to users’ everyday lives. They encroach on users’ ability to access and process information, destroy creativity, and sway opinions with built-in or learned bias. If big tech companies can nip these problems in the bud by limiting public access to AI and developing more accurate algorithms, then AI has the potential to become a tool like no other. As long as these points are discussed and the ethics of AI are tackled, we don’t have to worry about artificial intelligence taking over the world like in The Terminator, although even if AI were to take over the world, I would welcome our new AI overlords. At least they won’t make any more of those “Live, Laugh, Love” signs, am I right? Yes, that last part was written by an AI, but the possibility that this entire article was written by an AI is entirely plausible, and you would never know. That is the fear with AI.