As a researcher in the educational technology space, Daniel Alexander Novak, PhD, assistant clinical professor in health sciences and director of scholarly activities at UC Riverside, has heard time and time again that a new technology will change the world. He’s often been disappointed. But in late December last year, he pulled up OpenAI’s ChatGPT on a whim while watching Netflix.
“I was amazed really out of the gate with its ability to both understand and answer fairly esoteric questions related to my research,” he said of the program, a large language model that produces human-like responses to text prompts and can write everything from code to song lyrics in a variety of styles. “It could piece together really cogent responses to questions about as well as a person,” he added. “The more I worked with it, the more I understood that it was really a powerful tool.”
The many uses of ChatGPT
Novak began experimenting with ChatGPT to complete research reviews and other tasks. With research reviews, “ChatGPT can really help you get a handle on what’s already been written,” he said. “You always have to treat it with some skepticism, but I would say no more or less so than having a human do a literature review. And it does it in minutes, so you can then build from there and move forward much more deliberately.”
Novak doesn’t think the increased efficiency should be limited to instructors and researchers. He has been teaching his students to use ChatGPT, too. “I think it's a learning tool and we can use it in our classrooms from an educational perspective,” he said. “If we use it creatively, we can say things like, start asking your research question to ChatGPT and see what it says. And then students might find out that a lot is already known on a subject that they thought was pretty unexplored.”
ChatGPT may also help students evaluate their papers before submitting them for grading. Novak said students could give ChatGPT the grading rubric along with their assignment, then receive detailed feedback without waiting for an instructor to read their draft. “The problem with rubrics is you can lie to yourself and say, this is a five out of five on clarity of my research question. And then you put it into ChatGPT and it's like, this is a three out of five for the following reasons,” he explained. “So you could go, oh, okay, I guess I’d better work on that a little bit before I turn it in.”
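In practice, that self-check is just a structured prompt: paste in the rubric, paste in the draft, and ask for a score and a justification per criterion. Below is a minimal sketch of that workflow using OpenAI’s Python client; the model name, rubric text, and file path are illustrative placeholders, not anything Novak prescribes.

```python
# Minimal sketch: asking a chat model to grade a draft against a rubric.
# Assumes the `openai` Python package (v1+) with OPENAI_API_KEY set in the
# environment; the model, rubric, and file path are placeholder choices.
from openai import OpenAI

client = OpenAI()

rubric = """\
Clarity of research question: 1-5
Use of evidence: 1-5
Organization: 1-5"""

with open("draft.txt") as f:  # the student's own draft
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a writing tutor. Score the draft against each "
                    "rubric criterion (1-5) and briefly justify every score."},
        {"role": "user", "content": f"Rubric:\n{rubric}\n\nDraft:\n{draft}"},
    ],
)

print(response.choices[0].message.content)
```

As Novak’s anecdote suggests, the value lies less in the numeric scores than in the justifications, which tell the student what to revise before turning the paper in.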
Ethics and the need for guidelines
Despite ChatGPT’s potential to benefit education, Novak recognizes that it can also be used unethically, an issue that has led many schools to ban it.
These issues include plagiarism, with students submitting ChatGPT-written content as their own or using ChatGPT to reword someone else’s work. Novak also identified subtler misuses, such as running a piece of writing through ChatGPT to conceal its author’s identity.
Novak quickly realized the need for guidelines around the new tool. “We have to have something out of the gate, partially because I’m part of the problem,” he said. Understanding that students would find ChatGPT on their own, making it futile to try to block it, he chose the opposite approach: teaching students to use it within well-defined boundaries.
And who better to help determine boundaries around ChatGPT than ChatGPT itself? Novak began creating his policies by asking ChatGPT to generate a list of its own ethical uses (“not that that means anything, because it’s just math spitting out what it thinks is related to ethics,” he noted). “As I recall, it gave me a pretty good running start,” he said. “It’s impossible for it to have any agency in this matter, but it can certainly help us think about some of these ideas.”
Illustrating ChatGPT’s ability to suggest new ideas, Novak once again asked it to create a set of guiding policies while we talked. As ChatGPT-generated text appeared on the screen, Novak scanned the new list, which was similar to his policies but not the same. “Some of these are good,” he commented. “I could even add them.”
Building off ChatGPT’s original list, Novak has created a set of five guiding principles:
- Accountability: The user is responsible for the consequences of using ChatGPT, including any inaccurate information it provides.
- Beneficence: ChatGPT and similar tools should be used in a way that advances the public good without amplifying biases.
- Creativity: Use of the tool should promote new ideas and approaches.
- Devolution: Individuals, including instructors, editors, and others, should have a say in how these tools are used in their courses or other areas.
- Ethicality: The user should follow existing rules and policies when using ChatGPT and other tools.
Novak hopes that instructors and schools will adopt the guidelines, or use them as a starting point, when incorporating ChatGPT into their teaching. “It’s far from complete, but I would hope people either add to it, or eventually, we could have a generic code of conduct that other people could use for their courses, so that there’s no ambiguity about what is and isn’t allowed,” he said.
With ChatGPT likely to become a “universal issue” in education at all levels and even the corporate world, Novak hopes schools adopt policies like his rather than attempting to ban the technology.
Although ChatGPT may seem like a new threat, he noted that educators are constantly adapting to new technology, pointing to Wikipedia as an example. When he started working on ChatGPT guidelines, he explained, “I just had flashbacks to when I was an undergrad and Wikipedia started becoming a more important resource. And often instructors said, ‘don’t go to Wikipedia,’ kind of wagging their finger like it’s not a trustworthy source. And it turns out over time, it was borne out to be a very trustworthy source and a very essential educational tool.”
Novak recognizes that it will take time to adopt the guidelines, whether at UCR or elsewhere, but he believes the UCR community needs to have that discussion now, with input from both students and faculty. “It’s better to do that now than to wind up in a situation where we have students brought before professionalism boards with questions that are unresolved in terms of policy around when it’s okay to use these technologies,” he said. “And it’s also better than finding out they’re surreptitiously using it in some way that we don’t know about. I’d rather bring things into the light and have a good discussion about what is and is not in bounds than have it fester.”
Addressing other concerns around ChatGPT
Of course, ChatGPT still has downsides beyond ethical concerns around student use.
One of these involves the accuracy of its output. Novak was initially excited to use ChatGPT to find studies he might have overlooked in his research. In one search, he was intrigued to see ChatGPT surface exactly the study he was looking for, one that clearly stated several ideas that other research had only hinted at. When he went looking for the study’s source, though, he found that it didn’t exist. “I think that’s something people need to be aware of: in its zeal to present you with the response you’re looking for, ChatGPT will make stuff up sometimes,” said Novak. “So that’s why you really have to be cautious about it.”
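One concrete safeguard, not something Novak describes but a natural extension of his warning, is to check any citation the model produces against a bibliographic database such as Crossref before relying on it. A minimal sketch is below; the title string is a hypothetical placeholder.

```python
# Sanity-check a citation returned by ChatGPT: search the public Crossref
# REST API for the title and see whether any indexed work actually matches.
# Assumes the `requests` package; the title below is a hypothetical example.
import requests

title = "Hypothetical study title that ChatGPT cited"

resp = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": title, "rows": 3},
    timeout=10,
)
resp.raise_for_status()

for item in resp.json()["message"]["items"]:
    found_title = item.get("title", ["<untitled>"])[0]
    print(f"{found_title} (DOI: {item.get('DOI')})")
# If nothing resembling the citation appears, treat it as likely fabricated.
```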
Novak has also considered potential downsides of using ChatGPT for grading. “I fear a feedback loop where we have students using ChatGPT to write papers that they turn in, and then instructors use ChatGPT to grade and give feedback on the papers,” he said. “Then we wind up with nobody learning anything from the experience. But, hopefully we can find a way to avoid that slippery slope.”
There are also concerns about ChatGPT replacing human jobs.
To address this fear, Novak first emphasized what ChatGPT is not: artificial intelligence. “Artificial intelligence is a buzzword,” he said, explaining that ChatGPT is instead a large-scale language model. “I think by casting it as a kind of intelligence, that automatically positions us to misunderstand what it does. It’s just a calculation engine, and the output of that calculation is a set of words, a probability that the words are going to answer your questions.”
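Novak’s “calculation engine” description can be made concrete with a toy example: a language model maps the text so far to scores over candidate next words, a softmax turns those scores into probabilities, and generating text is just repeated sampling from such distributions. The numbers below are invented purely for illustration.

```python
# Toy illustration of the "calculation engine" point: a language model maps
# the text so far to scores over candidate next words, softmax turns those
# scores into probabilities, and generation is repeated weighted sampling.
import math
import random

# Hypothetical scores (logits) a model might assign to candidate next words
# after the prompt "The capital of France is":
logits = {"Paris": 9.1, "Lyon": 4.3, "pizza": 0.2}

# Softmax: exponentiate and normalize so the scores become probabilities.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}
print(probs)  # roughly {'Paris': 0.99, 'Lyon': 0.008, 'pizza': 0.0001}

# "Generating text" is just sampling from this distribution, over and over.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(next_word)  # almost always "Paris"
```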
As far as jobs, he said, “At the end of the day, these tools have no agency. They can't evaluate anything because fundamentally that's a human property.”
As with Wikipedia in education, he pointed out that new technology often disrupts jobs, particularly ones built on repetitive tasks. Citing typesetters and typists as an example, he said, “Once we had keyboards and word processors, why did we need those workers anymore? I think the answer is the tool changed what people were capable of doing, but it didn’t remove the need for the people to do it.” With new technology, he explained, administrative assistants could focus on more complex tasks rather than typing. “In a similar way, I really think that it’s not going to replace anybody. I think that, for people who write blogs, for example, it’ll make their work more efficient.”
Looking ahead with ChatGPT
On the whole, Novak is excited about the potential of ChatGPT, particularly if guidelines around its use are widely adopted. “I think it can take what used to take us weeks to do in qualitative and educational research and do it in hours,” he said. “And it also, I think, democratizes that process. In working with faculty, it used to take me a long time to get them up to speed on topics like, how do we do qualitative research? Now it’s able to let us do that much faster.”
He hopes that other instructors will try ChatGPT before making decisions around its use. “I'd like them to know that if you can push the limits of your own thinking around what kinds of questions you would ask and answer, then you can really get to some new and interesting places with this,” Novak said.
He noted that fear around ChatGPT distracts from its potential.
“There are major ethical issues and challenges, but ultimately, as with many other technologies in the past, this is going to be a real net positive for the way we teach and the way we learn,” he said. “And the sooner we can get people into that mindset, the sooner we can both identify dangers, of course, but also identify real benefits.”