Artificial intelligence (AI) has made significant progress in the field of writing in recent years. With the development of advanced language processing algorithms, AI systems are now able to generate human-like text that is difficult to distinguish from writing produced by humans.
The use of AI is no longer a work of fiction; it is becoming the future of technology. While it may be fun to see the responses that AI can write or the photos it can generate, it brings a lot of questions to light. Rather than analysing what we are reading, the question becomes who we are reading. What is actually real? Is there a grey area? For example, when reading the introduction of this article, did you question who wrote it? The introduction of this article was created through ChatGPT by giving it the prompt to write an article about AI-generated writing. The software provided a clear introduction, supporting points, and a conclusion within seconds, and allowed me to ask follow-up questions. ChatGPT is essentially a chatbot, released in November 2022, that can communicate with users based on their respective inquiries.
The online AI program was created by the San Francisco-based corporation OpenAI. The company, which focuses on “AI research and deployment,” claims that its mission is to provide “benefits to humanity” through the use of AI. What are the actual benefits of this software? Rather than being helpful, increased use of AI may hinder some fields, including those involving writing.
With the introduction of realistic AI writing, a question arises: is there still a need to hire content creators for publications? If decent-quality articles can be mass produced for free, with little effort, companies may find it beneficial to shift away from human writers rather than seeking out real individuals. If the technology is pushed and adopted more widely in the writing industry, companies could scale back their workforces as the need for writers shrinks. For example, news like natural disasters or the outcomes of sporting events could be published instantly rather than waiting for a person to write up the information for release.
On social media, there are numerous accounts that focus on computer-generated spam content. With AI writing, these can begin to look more realistic and organic, making them harder for the respective platforms to filter out. Accounts can respond to individuals online with “personalities” that make it less obvious that they are bots.
Within education, AI writing raises a lot of issues, including plagiarism and a lack of critical thinking and reasoning. For a student working on a paper, it can also become harder to come up with original ideas and to summarise information when consulting ChatGPT as a starting point. If individuals can input their topic and receive results instantaneously, it may be tempting to cut back on research. As of this moment, there is no means to automatically cite information or ideas produced by the AI writing, but this could change as the software is further developed.
Furthermore, developing resources and finding content to use within the classroom becomes easier in one sense but more difficult in another, because the sources can be harder to vouch for. Is this information coming from a peer-reviewed source? What are the origins of a written work? Is this valid information? As ChatGPT becomes more recognized and utilized, more issues can arise in the realm of education.
In an interview with Forbes, news industry veteran Cait O’Riordan stated, “human audiences want to read opinion and analysis, not just structured data processed by an algorithm.” Yet, if it is difficult to determine whether a piece is written by a human, how can audiences differentiate writing by humans from work created by AI? Is there a limit to how much we should automate? The shift to AI writing, especially with instant and realistic communication, is alarming, and we should proceed with caution.