
Universities Making Changes Amid AI Fears

University sign on a building | Image by Johnny Habell/Shutterstock

Generative artificial intelligence, the technology behind many customer service chatbots, has grown markedly more capable in recent months. Students have begun using these tools for their coursework, prompting universities to reconsider how they assign work.

Professors such as Antony Aumann from Northern Michigan University have noticed an uptick in some students’ work quality. Aumann recently read a paper that he described as the “best paper in the class” and soon discovered that it was generated by an artificial intelligence site called ChatGPT.

The site allows users to enter a prompt, to which the system responds articulately and specifically.

Now, Aumann has considered changing how he teaches to account for artificial intelligence.

“What’s happening in class is no longer going to be, ‘Here are some questions — let’s talk about it between us human beings,’” Aumann said to The New York Times, but instead “it’s like, ‘What also does this alien robot think?’”

Some universities have been reluctant to ban the use of artificial intelligence outright since it could impede the academic freedom of students. Instead, professors are beginning to think of assignments that an artificial intelligence website could not answer accurately.

Frederick Luis Aldama, the chair of humanities at the University of Texas at Austin, told The New York Times that he plans to focus on more niche assignments. Rather than having students analyze the more famous Shakespearean sonnets, he will focus on those less well-known.

While some university officials, such as Aldama, work to avoid getting assignments from chatbots, others are attempting to educate students about the tools.

Kelly Ahuna directs the academic integrity office at the University at Buffalo in New York, and she expects to incorporate artificial intelligence into discussions about academic integrity with her students.

“We have to add a scenario about this, so students can see a concrete example,” said Ahuna. “We want to prevent things from happening instead of catch them when they happen.”

Similarly, some universities plan to update their academic codes of conduct. John Dyer, Dallas Theological Seminary’s vice president for enrollment services and educational technologies, plans to make revisions to account for these tools.

He told the NYT that they will update their definition of plagiarism to include students “using text written by a generation system as one’s own (e.g., entering a prompt into an artificial intelligence tool and using the output in a paper).”

Although some universities and professors do not support the use of these tools, students such as Lizzie Shackney from the University of Pennsylvania’s law school and design school believe it can be valuable.

Shackney told the NYT that she uses these tools to help her brainstorm ideas and debug coding problems.

“There are disciplines that want you to share and don’t want you to spin your wheels,” she said. “The place where my brain is useful is understanding what the code means.”

Artificial intelligence is expected to continue developing as companies learn more about these tools. Joe Glover, provost of the University of Florida, is preparing for the development of more A.I. tools.

“This isn’t going to be the last innovation we have to deal with,” he said to the NYT.
