From Our Faculty Experts

Jon Rawski, Assistant Professor
Department of Linguistics & Language Development
San Jose State University
jrawski.info

Ada Lovelace, a founder of modern computing, noted that computers "weave algebraic patterns like the Jacquard loom [a textile machine] weaves flowers and leaves". Alan Turing stated that "can machines think?" is "too meaningless to deserve discussion". ChatGPT doesn't think; it is a weaver of predictions, guessing the next item in a sequence (the Cloze task). Given an input like "I drink coffee with ____", it outputs "milk" rather than "socks" based on statistically curated data. Such tasks have little to do with actual human language; the output only looks like language because we see it as text. Similarly, the horse Clever Hans scammed many people into believing it could do arithmetic, because they saw it gesture toward numbers when shown a sum and offered a carrot. ChatGPT is a product, not an algorithm: the output of human design, human programming, and human data (gathered by underpaid, outsourced human labelers who suffered significant mental trauma from the work). Its outputs are brittle, and often so incorrect that many outlets have banned it. When it is correct, that is because the humans behind it were correct, which makes its use automated plagiarism; meanwhile, anti-GPT detectors are ripe for abuse.
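
To make the Cloze-task point concrete, here is a deliberately toy sketch of what "predicting the next item in a sequence" means. The probabilities below are invented for illustration only; the real system is a large neural network trained on curated text, not a lookup table:

```python
# A toy, purely illustrative "next-item predictor" (the Cloze task).
# All probabilities are made up for demonstration; ChatGPT's actual model
# is a neural network over vast curated data, not a lookup table.

next_word_probs = {
    ("coffee", "with"): {"milk": 0.62, "sugar": 0.30, "cream": 0.07, "socks": 0.0001},
}

def predict_next(context):
    """Return the statistically most likely continuation of the context."""
    candidates = next_word_probs.get(context, {})
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

print(predict_next(("coffee", "with")))  # prints "milk", never "socks"
```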

The real question concerns our relationship to scalable plagiarism. Universities should stop policing students so aggressively, especially after a pandemic that exposed some truly heinous academic surveillance methods and eroded student trust. If assignments are so banal that students can pass them using ChatGPT, that says more about the assignment than about the plagiarizer. Our assignments shouldn't make students feel a need to plagiarize in the first place. Resist the hype, and treat ChatGPT like Excel: a cute product, but too boring to deserve discussion.


Ryan Skinnell, Associate Professor
Department of English and Comparative Literature
San Jose State University

Most discussions about ChatGPT focus on output: the threat or potential of AI-generated text. I want to invite us also to think in terms of input—or more accurately, a potential lack of input. In his short essay, “Missing Practice,” rhetoric and writing specialist Bill Hart-Davidson acknowledges that chatbots can be used to “cheat,” but he’s primarily concerned with cheating students out of practice. For Hart-Davidson, practice is the input that supports learning. People often learn—concepts, theories, habits, disciplines, skills—through writing, and people learn to write through practice. If you offload the practice, you circumvent the learning.

Hart-Davidson points us to a related issue, which is how teachers use writing in their classes. Some teachers teach writing, which is to say they give students opportunities to practice writing using scaffolded assignments, formative feedback, and multiple drafts. In other classes, writing is principally an instrument of evaluation—a way for students to demonstrate they’ve learned particular subject matter. Neither form of classroom writing is inherently better—“better” depends on a class’s learning goals—but distinguishing teaching from evaluation is important. It turns out, chatbots are pretty bad at mimicking the practices involved in teaching and learning to write. They’re much better at producing writing to be evaluated. As we grapple with AI’s potential effects on student writing, we’d do well to keep our attention on whether our perceived problems (and preferred solutions) are about input or output, and whether our focus in any given circumstance is appropriate to our educational goals.


Jon Oakes, Library Technology Lab Coordinator
San Jose State University Library

ChatGPT and other AI tools, such as Codex, GPT-3, DALL-E, Bard, BLOOM, and Ernie, are the starting pistol of an AI race that will have a profound impact on education, as well as on every other aspect of our lives. Incorporating these new tools into the educational process, and adapting to them, will be challenging. However, AI will empower students to better understand difficult concepts and research topics, and to develop practical, effective strategies for personalized learning in ways that were previously out of reach.

To ensure true mastery is achieved, higher education must move away from simply evaluating performance and instead focus, where possible, on open-ended questions that put students and their experiences at the center of assignments and assessments. In fact, including the student experience in learning is part of emancipatory educational practice. Student-centered learning practices will help prevent misuse of AI tools, while also encouraging students and instructors to consider the transformative power of education as it relates to their own life experiences.

If used correctly, AI tools can be truly transformational in education. They can help students navigate, evaluate, and use our already overabundant sources of information more effectively, and they may enable students to craft original arguments in ways that would normally be impossible. It is up to us to teach ethical use and to hold corporations accountable for how AI is deployed and monetized, so that these advances serve universal benefit as a powerful tool for instruction rather than something to be avoided. We cannot ignore the role of AI in education, and now is the time to construct an ethical and curricular framework that enables its use.