Chatbot Hype or Harm? Teens Push to Broaden A.I. Literacy
It was difficult late last year for many teenagers to know what to make of the new wave of A.I. chatbots.
Teachers were warning students not to use bots like ChatGPT, which can generate human-sounding essays, to cheat on their schoolwork. Some tech billionaires were promoting advances in A.I. as powerful forces that were sure to remake society. Other tech titans saw the same systems as powerful threats poised to destroy humanity.
School districts didn’t help much. Many reactively banned the bots, at least initially, rather than develop more measured approaches to introducing students to artificial intelligence.
Now some teenagers are asking their schools to go beyond Silicon Valley’s fears and fantasy narratives and provide broader A.I. learning experiences that are grounded firmly in the present, not in science fiction.
“We need to find some sort of balance between ‘A.I. is going to rule the world’ and ‘A.I. is going to end the world,’” said Isabella Iturrate, a 12th grader at River Dell High School in Oradell, N.J., who has encouraged her school to support students who want to learn about A.I. “But that will be impossible to find without using A.I. in the classroom and talking about it at school.”
Students are weighing in at a moment when many school districts are only beginning to define “A.I. education” and consider how it may fit into existing courses like computer science, social studies and statistics. Outside influencers have their own ideas.
Tech giants like Amazon, Microsoft and Google are encouraging schools to teach the A.I. career skills that the industry needs. Some nonprofit groups want schools to help students develop a more critical lens to focus on emerging technologies, including examining A.I. risks and societal impacts.
At a White House event last week, the National Science Foundation announced new grants for programs that prepare students for A.I. careers. And the Computer Science Teachers Association, a nonprofit group whose top donors include Microsoft and Google, said it would develop education standards to incorporate A.I. into K-12 computing education. Amazon said it was donating $1.5 million to the teachers’ group for A.I. education and related projects.
Teenagers have their own ideas about what they want to learn about A.I. But public schools rarely allow students to propel curriculum change or shape how they want to learn. That is what makes the student A.I. education campaign at River Dell High so unusual.
It started last winter when the school’s Human Rights Club, led by Ms. Iturrate and two other students, decided to research A.I. chatbots. The students said they were initially troubled by the idea that generative A.I. systems, which are trained on vast databases of digital texts or images, might displace writers, artists and other creative workers.
Then they learned more about positive use cases for A.I. — like predicting mammoth rogue waves or protein folds, which could speed the development of new medicines. That made the students concerned that their teachers might be limiting students’ exposure to A.I. by focusing only on chatbot-enabled cheating.
The club leaders consulted their adviser, Glen Coleman, a social studies teacher who encourages students to develop their own points of view. And they decided to create a survey to gauge their schoolmates’ knowledge of and interest in A.I. chatbots.
River Dell High, which serves about 1,000 students in an upper-middle-class enclave of Bergen County, is not a typical public school. When the Human Rights Club proposed fielding its A.I. survey schoolwide last spring, the principal, Brian Pepe, enthusiastically agreed.
More than half of the school — 512 9th through 12th graders — answered the anonymous questionnaire. The results were surprising.
Only 18 students reported using ChatGPT for plagiarism. Even so, the vast majority of students said that cheating was their teachers’ main focus during classroom discussions about A.I. chatbots.
More than half of the students said they were curious and excited about ChatGPT. Many also said they wanted their school to provide clear guidelines on using the A.I. tools and to teach students how to use the chatbots to advance their academic skills.
The students who developed the survey had other ideas as well. They think schools should also teach students about A.I. harms.
“A.I. is actually a huge human rights issue because it perpetuates biases,” said Tessa Klein, a 10th grader at River Dell and co-leader of the Human Rights Club. “We felt the need for our students to learn how these biases are being created by these A.I. systems and how to identify these biases.”
In June, Mr. Pepe had the club leaders present their findings to the teachers. The students used the survey data to demonstrate their schoolmates’ interest in broader opportunities to learn about and use A.I.
Mr. Pepe said he hoped high school students would eventually be able to take stand-alone courses in artificial intelligence. For now, he has floated the idea of a more informal “A.I. Lab” at the school during lunch period where students and teachers might experiment with A.I. tools.
“I don’t want A.I. or ChatGPT to become like this Ping-Pong game where we just get caught back and forth weighing the positives and negatives,” said Naomi Roth, a 12th grader who helps lead the Human Rights Club. “I think kids need to be able to critique it and assess it and use it.”