Recently, a mother and father sued their son’s school because he was punished after being caught using AI for his class assignment.
News like this always gives me pause because it suggests that every party in the story was in the wrong. The son probably should have told his teacher that he was using AI to help complete his assignment. The teacher should have talked with students about when it is and isn't acceptable to use AI on homework. And the school's administrators have had the better part of two years to design a comprehensive response to students using AI-generated content in assignments.
The parents in this story share the blame as well. A school's lack of a clear policy should not be reason enough to sue it over perceived lost opportunities.
I am not a lawyer, but I do know that litigation this early in the development of AI EdTech is a recipe for stifling innovation and creating unnecessary backlash. I am a tenured professor and educational psychologist focused on understanding how educational technologies affect students' learning and motivation.
As the associate director of USC's Center for Generative AI and Society, I have studied how students and educators have responded to AI-driven tools. I have found many misconceptions and a general lack of foresight, sometimes even a lack of hindsight to learn from previous mistakes, as was the case with Los Angeles Unified School District's most recent EdTech debacle.
AI-driven educational technologies are not the solution to all of education's ills, as some prominent venture capitalists would have you believe. But neither are they the monsters under our metaphorical beds. Instead, they are simply another set of tools that will become integrated into the instructional landscape.
Mark my words: in about five years, we will have figured out what AI in education is good for and what it isn't. After that, we will see AI the way we see our smartphones, as mundane technologies that serve a narrow purpose, sometimes doing harm and other times doing good. In the near term, we likely cannot anticipate how much of either AI is capable of.
What I do know right now, however, is that we cannot litigate our way to an answer on AI. If AI is to be used in education, it cannot be used in an atmosphere of fear. Students should be encouraged to experiment with AI so they can better understand how it can help them learn, how it can motivate them, and how it can help them discover new ways of exploring the world around them.
Educators should similarly explore AI solutions so they can fill gaps in their own educational settings and personalize content for their students. None of this can happen when everyone is looking over their shoulder, worried about accidentally breaking a rule no one has articulated, or wondering whether litigious parents will claim their child's civil liberties were violated for using a technology that is still under development. Innovation and development cannot happen under those circumstances.
Instead, everyone should take a deep breath and accept that AI in education is here to stay, even though we are still figuring out its place in schools. In the meantime, the adults in the room should create policies that foster curiosity among students and staff so that the truly useful aspects of AI in education can emerge.
Those policies should be drafted now and should focus on documentation so that if AI is used, educators know how it is used. Educators should encourage their students to use AI when developmentally appropriate, and parents should trust educators to make that call.
Stephen J. Aguilar is a tenured professor at the USC Rossier School of Education, specializing in educational psychology. His research focuses on the impact of educational technologies, including AI, on student learning, motivation and educational outcomes. He is also the associate director of USC's Center for Generative AI and Society.