You might have heard the term “artificial intelligence” years ago and brushed it off as a far-flung fantasy, just some science fiction nonsense invented by die-hard Terminator fans. But 40 years after that iconic Schwarzenegger film was released, AI is back, it is everywhere, and it is very, very real.
Created to “teach” computer systems to think, learn, and make decisions as if they were human, AI programs are designed to solve real-life problems, answer questions, and just generally make life easier for the actual humans who use them. Often these systems use data from previous interactions to custom fit what they think might best serve a person’s needs. For example, Netflix uses an AI program to analyze your viewing history and then suggest shows you might like to watch in the future.
Much bigger platforms, such as ChatGPT, are called large language models. They are capable of understanding and producing human language, making predictions based on previous language patterns, and generating text that sounds as if it were written by an actual person. As Copilot, Microsoft’s LLM, told me after wishing me a good evening — and yes, that is very, very scary — “Think of LLMs as very advanced autocomplete systems that can understand and generate text in a way that’s useful for many different tasks.”
When it asked me how my article was coming along and if my new 10-week-old goldendoodle puppy, Zack, was doing any better with his potty training, I seriously considered donning an aluminum foil hat to keep the aliens from reading my thoughts as I waited for War of the Worlds to commence.
Fascinated by what a free, online, and fully accessible LLM might produce, I pieced together the following three paragraphs about my all-time favorite magazine, using only the information gathered from AI platforms:
“Columbia Metropolitan Magazine, established in 2002, provides readers with a rich blend of local cultural, lifestyle, and community news. Under the ownership of Metropolitan Publishing, CMM produces twelve issues per year, each featuring a variety of articles, profiles and community highlights that resonate with its readers in Columbia, South Carolina. Jennifer McCaffery oversees the editorial content of the magazine, ensuring that it reflect the interest and needs of the Columbia community while maintaining high standards of journalism and presentation. Photography is typically handled by Lance Murphey who captures dynamic images that complement the magazine editorial content, and Bethany Robinson who is known for her high-quality work that enhances the magazine’s visual appeal.
Notable contributing writers include Catherine Rourke, best known for her engaging articles on local culture and lifestyle, John W. Smith whose background in local history and politics helps him to create insightful pieces on Columbia’s historical landmarks and political developments, and Laura S. Grooms, an experienced writer with a focus on the arts and entertainment scene.
These contributors, along with others, play a crucial role in shaping the magazine’s content, ensuring it remains informative and relevant to its readers.”
Beautifully written, informative, and 100% false.
CMM, founded in 1990 by Emily and Henry Clay, produces 10 issues per year. Metropolitan Publishing is located in Texas and has never been in any way associated with CMM. Margaret Clay is the editor and associate publisher of this prodigious periodical; Robert Clark and Bob Lancaster provide most of the stunning photography work; and as for those accomplished-sounding contributing writers? Not one of them has ever published a single article in CMM.
Herein lies the danger of relying solely on information provided by AI. It will confidently craft plausible-sounding tales, delivered with categorical certainty, when in fact it is just pulling falsehoods out of its creative little AI … brain.
Scary. I was especially unnerved when one app spontaneously spit out my name as the associate publisher, which I am not, falsely claiming that I work closely with editor Timothy H. Schmid — I do not know this person — and that “together, Schmid and Weiss contribute to the magazine’s success, blending editorial excellence with effective business strategies.”
If AI is going fictitiously to pair me with someone, could it at least be with Bradley Cooper or Hugh Jackman?
To be fair, not all AI programs are this inaccurate. While those that use a fixed database — meaning those with limited access to real-time data and an inability to do web searches — tend just to make stuff up and present it as fact, others use contemporaneous information and generate answers that are far more precise. Virtual assistants, such as Siri, Amazon Alexa, and Google Assistant, are examples of platforms that are reliably accurate. And some LLM writing assistant programs, like Copilot, also use real-time data and provide information that is consistently correct.
In fact, when asked for three paragraphs about CMM, everything Copilot generated was completely true and beautifully written. And I’m not just saying that because I’m afraid Copilot is spying on me — hey, if you are reading this, Copilot, you are my favorite AI platform, and please don’t let the AI tripods harvest my blood and tissues so they can fertilize their alien vegetation after they take over the world.
Unfortunately, the most popular writing assistant AI programs, such as the free version of ChatGPT, Frank, and AI Writer, rely on a fixed database, and when presented with a question outside of their knowledge base, they will just make things up and present them as fact.
Even more troublesome? These fixed database programs are most often the ones used by the average user.
In many instances in recent years, AI misinformation has resulted in embarrassing, and, in some cases, devastating consequences. Amazon used an AI recruiting tool for several years before discovering that the program was biased against women. Sports Illustrated was caught publishing articles that were entirely AI-generated, attributing these pieces to fake authors complete with made-up author profiles. And a lawyer in New York faced sanctions after submitting a brief filled with citations to nonexistent cases that an AI platform had generated.
Perhaps the biggest controversy currently surrounding AI is how it is being used in educational settings. Advances in technology, including those that use AI platforms, have unquestionably helped improve the quality of education in many ways. Used properly, AI technology can create a more dynamic and stimulating classroom environment than mere teacher lectures can provide.
Even young children can benefit from platforms like MagicSchool, the mission of which is to teach responsible AI usage at the elementary school level, or MIT’s Day of AI, which provides middle school students with lessons and activities designed to foster creativity through the use of hands-on AI projects.
At the high school level, students can use Grammarly for help with spelling, grammar, and writing proficiency; or Quillbot, a writing assist program designed to help students create clearer and more concisely written essays. These and other AI programs provide students with immediate feedback, give them access to lessons tailored to different learning strengths and weaknesses, and help to make educational exercises more enjoyable for the typical student.
The fear, however, is that students will become overly reliant on these tools, weakening their critical-thinking skills and preventing them from developing into confident writers with unique voices. And there is the risk that students are learning completely bogus information, as evidenced by the “facts” I gathered about CMM from online AI platforms.
But the most pervasive concern about student use of AI is the rise in cases of plagiarism. With access to LLMs like ChatGPT, students can simply type in a prompt and have the program write part or all of an assignment for them.
Studies have found that teachers typically have trouble identifying AI-generated work, sometimes failing to recognize an essay that was plagiarized and other times falsely accusing a student of using a program to complete an assignment. Plagiarism like this can be extremely difficult to detect, which leaves educators floundering for a way to combat this form of technological cheating.
That doesn’t mean that AI doesn’t have a place in the classroom. New technology requires adjustment. Search engines were also suspect when they first came out, and yet we no longer travel down to the public library to get in the good graces of the librarian just to access material needed for a research paper. We use Google and other online databases. AI is just a further improvement in research methods, not a replacement for children learning how to research, write, and master verbal expression.
The first step is making sure students know what is acceptable use of AI in their work and what is not. Using ChatGPT or Copilot to ask specific questions, and then, after verifying the accuracy of the answers, using those facts to formulate an original piece on a particular subject can be a perfectly acceptable and very efficient method of research. Having an LLM write the paper, however, is not acceptable, any more than it would be okay to ask a friend to write an essay and then pass it off as your own. Students may not actually be aware of that distinction, so a frank discussion about what is considered research and what is considered plagiarism is imperative.
Teachers should also be on the lookout for a sudden change in a student’s writing style. If in-class assignments usually contain words like “kind” or “nice” while take-home assignments come back with words like “magnanimous” and “philanthropic,” a discussion about essay ownership might be in order. AI detection tools can also help identify work generated by an LLM. Programs like GPTZero and Originality.AI can flag plagiarism and AI-generated text, though they are not foolproof and can err in either direction, missing machine-written work or falsely flagging a student’s own words.
And it may be time to rethink how class time is used. Some teachers and professors are returning to more of an oral approach in testing, or weighing participation in class discussion more heavily. Rather than relying entirely on take-home written assignments, teachers might ask students to explain their work verbally, assign more in-class essays and hands-on projects that can’t be completed using AI technology, or give prompts on current events too recent for AI to be of help. Importantly, students must be taught that AI is a useful tool for assistance in writing and learning, but the ultimate goal is for them to grow and foster their own creativity and critical-thinking skills — not for AI to replace them.
As Copilot so eloquently put it, “With thoughtful integration and proper guidance, AI has the potential to enhance educational experiences rather than detract from them.”
Thank you, Copilot. Please remember that I spoke kindly about you, and don’t let the AI tripods use me as alien fertilizer.
Where is the Class of 2024?
Cheers to the graduating class of 2024 on their first semester of college! Here is a sampling of the top 10 colleges attended by this year’s Midlands freshmen, listed by independent school and public school district.
Ben Lippen
USC Honors
Clemson Calhoun Honors
University of Michigan
Furman
The College of William & Mary
Wofford
Rutgers University
Duquesne University
Auburn
Oklahoma City University
North Carolina State
Cardinal Newman
Yale University
USC Honors
Clemson University Honors
Sewanee: The University of the South
Virginia Tech Honors College
Furman University
Davidson College
University of Georgia
Brevard College
University of Warwick (UK)
Hammond
Clemson University
Duke University
Emory University
Georgia Institute of Technology
Spelman College
UNC Chapel Hill
University of Pennsylvania
USC
Washington & Lee University
William & Mary
Heathwood
Clemson University Honors
USC Honors
Wofford College
University of North Carolina - Chapel Hill
University of Richmond
University of Wisconsin - Madison
Dartmouth College
Georgetown University
Northwestern University
Wellesley College
Northside Christian Academy
USC
Clemson University
Anderson College
North Greenville University
Wofford College
Columbia International University
Charleston Southern University
UNC Charlotte
USC Beaufort
Midlands Technical College
Richland One
Benedict College
Clemson University
College of Charleston
Midlands Technical College
S.C. State University
USC
USC - Upstate
USC - Aiken
Winthrop University
Wofford College
Sandhills
College of Charleston
Bard College (NY)
USC
Winthrop University
Anderson University
Brevard College
Clark University (MA)
S.C. State University
Savannah College of Art & Design
University of Kentucky