Dear friends,
I am so happy to share here (first!) that the book I have been working on with my co-author Maya Bialik is up for pre-order: Irreplaceable: How AI Changes Everything (and Nothing) About Teaching and Learning.
This is a book for teachers, school leaders, and those who support teachers and school leaders. Due out in January, it explores and offers guidance on how to use AI effectively in schools. There’s so much to say about what’s in it:
Maya and I have written this book from our various experiences and perspectives not only as teachers, school leaders, and researchers, but also as technology founders, award-winning digital innovators, and artists. The book is meant to be both principled and practical.
Perhaps most affirming to us as authors is this: we pitched the book in late 2023, and the principles that we outlined then are even more relevant today than they were two years ago. In this period of rapid change, this stability has confirmed to us that while so much is indeed changing around us, the heart of what happens in schools is timeless. And in this time of change, while we engage with new technologies by cultural necessity (and curiosity, too!), we want enduring truths to guide our work. This book aims to provide that guide.
In the weeks ahead, I will be sharing more about the book in advance of its release in January, including samples of what’s inside, posts with material that we couldn’t fit in the book, and more. We would be deeply grateful if you would pre-order a copy from Amazon.
With gratitude,
Peter
Also this week… an excellent issue:
In the features, find two posts about admissions: one from university counselors on how to use AI effectively when writing letters of recommendation, and another, from the university admissions perspective, on how easy it is to spot student essays written with AI.
In the Character section this week, Tim Dasey explores different kinds of empathy, and he reflects on what kind of empathy might be increasingly important in a digitally saturated world.
For more analog teaching, see the post in Pedagogy from the Chronicle of Higher Ed about what makes the classroom special. It has some interesting echoes of last week’s post about the tendency in some circles to want to make education more fun. This discussion touches a deeper nerve: the tension between the instinct to personalize, giving students more of what they want, and the instinct to stretch students out of their own wants and needs and into the perspectives, wants, and needs of others. How much of education is about strengthening what is already in students, and how much is about stretching them beyond it? More on this in a future post.
In the meantime, enjoy these articles and more!
Peter
PS. This week I’ll be sharing excerpts of the book at MASS CUE (Massachusetts Computer Using Educators). Drop me a line if you’ll be there — it would be great to say hello! Other events in the months ahead:
Browse and search over 15,000 curated articles from past issues online:
“As technology has evolved, I’ve begun to incorporate artificial intelligence (AI) into my workflow. However, when it comes to recommendations, that is a delicate process. Teachers and counselors know the key personal details about students that can make a real difference in the application process. While I use AI tools in some parts of my process, there are some aspects that remain (and for me will always be) human-driven. In this post, I’ll share some examples and resources that can help make writing letters of recommendation easier—with and without technology.”
“Researchers in the Cornell Ann S. Bowers College of Computing and Information Science compared 30,000 college application essays written by humans to ones written by eight popular large language models (LLMs), AI models that process and generate text, like ChatGPT. Even when they specified a person’s race, gender and geographic location in the prompt, the models spit out highly uniform text that was easy to distinguish from actual human writing.”
“What I’m saying is this: trust your audience enough to make something real. Something that reflects actual human experience, not a focus-grouped approximation of it. That can be joyful. That can be funny. That can be wildly entertaining. But it has to come from a place of genuine necessity—from artists who need to say this thing, in this way, right now.”
“The same AI might behave cautiously when discussing medical topics but exploratively when tackling creative writing. Effective users recognize these shifts and adjust their approach accordingly. That’s cognitive work, building and updating mental models of how the system operates in different contexts… Cognitive empathy manifests as systematic pattern recognition. You notice that ChatGPT tends toward sycophancy, agreeing with user statements even when they’re wrong, validating rather than challenging.”
“You might call these ‘political origin stories’—tales of forming worldviews—about which attendees could then ask follow-up questions and reflect what they had heard (I go into greater detail in my book, Learning to Depolarize). Those who shared stories felt rejuvenated and buoyed by the experience, while many of those who listened found themselves similarly refreshed. ‘Listening became a gift,’ said one attendee of the gatherings. ‘Colleagues left the space excited and invigorated by our exchanges,’ said another. The feedback was overwhelmingly positive.”
“The compact introduces a chance to establish a much-needed fresh relationship between the United States and higher education. It also offers an opportunity to pivot away from executive-branch overreach and restore legislative supremacy.”
“So much of what you’re teaching is how not to be paralyzed by difficulty. How do you deal with something that you find off-putting, either because you hate it on sight or because it’s difficult? What actually put off a lot of students about The House of Mirth is just the 19th-century diction. The analogy in the book is that it’s like running on the beach — most people find running on sand very tiring, and a few people love it. The meta-skill here is to ask the right question. Instead of, How am I outraged or bored or put off by this book? The first question should be, What kind of a reader does this book want me to be? What does it want me to care about?”
“After a month, more than three-quarters of the users who had typically spent roughly 30 minutes per day on TikTok were now spending nearly twice that much time on the app, on average. A few months later, these users’ daily watch times reached 70 minutes, on average. For some, their time spent scrolling every day more than tripled or quadrupled.”
“And, at the end of a phone-free day, when asked if life is better with no phones this year compared to last year, the students said yes. Fifteen-year-old Sonia Ngai said, ‘I think I’m living more in the moment.’ Those moments are to be savored, they now realize, because while smartphones may go on forever, their last year of high school will not.”
“The newest and most ambitious of these experiments go beyond gauging performance based primarily on test scores to consider a teacher’s classroom practice, peer collaboration, and willingness to tackle high-need subjects, students, and communities.”
As AI develops and spreads further into our technological lives, human perspectives on it continue to diversify — and grow in amplitude.
In the features, find a sweeping and increasingly evidence-based look at existential risk from AI. Also find a good procedural look at what an LLM is actually doing when it reviews data. This latter post is good functional literacy for anyone who may be using AI to synthesize large documents.
Further, see in the Education section how California is making AI available to all students and faculty in its community colleges. Meanwhile, others in schools are focused on literacy for both students and teachers. Also, multiple outlets are starting to recognize that what matters most is that students and teachers are having ongoing conversations about AI. I’m a firm supporter of this last approach. The technology continues to evolve, valuable uses of it take time to develop, societal norms shift, boundaries require tweaking and adjustment — these all happen best when we are in ongoing discussion.
These and more, enjoy!
Peter

“In the course of quantifying the risks of A.I., I was hoping that I would realize my fears were ridiculous. Instead, the opposite happened: The more I moved from apocalyptic hypotheticals to concrete real-world findings, the more concerned I became. All of the elements of Dr. Bengio’s doomsday scenario were coming into existence. A.I. was getting smarter and more capable. It was learning how to tell its overseers what they wanted to hear. It was getting good at lying. And it was getting exponentially better at complex tasks.”
“For decades, districts have used analytical tools. When you ran a report in your Student Information System or built a pivot table in Excel, the process was straightforward: You decided what to measure (attendance below 80%, GPA drop over 0.5 points), The system counted and calculated based on your rules, You got results you could verify (show me the 23 students who meet these criteria)… Here’s what’s different when an AI “analyzes” your data: It doesn’t find patterns in your data. It generates text about your data based on patterns from its training. Let me unpack that:”
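The deterministic half of that contrast is easy to illustrate. Here is a minimal Python sketch (the student records and field names are hypothetical, invented for illustration) of the rule-based reporting the excerpt describes: you set the criteria, the system counts, and every result is verifiable.

```python
# Rule-based analysis, as in a Student Information System report or an
# Excel pivot table: the rules are explicit and the output is checkable.

students = [
    {"name": "A. Rivera", "attendance": 0.76, "gpa_change": -0.6},
    {"name": "B. Chen",   "attendance": 0.92, "gpa_change": -0.1},
    {"name": "C. Okafor", "attendance": 0.78, "gpa_change": -0.7},
]

# The criteria we decided to measure: attendance below 80%
# and a GPA drop of more than 0.5 points.
flagged = [
    s for s in students
    if s["attendance"] < 0.80 and s["gpa_change"] < -0.5
]

# "Show me the students who meet these criteria": every name in this
# list can be traced back to the records and the rules above.
print(len(flagged), [s["name"] for s in flagged])
```

An LLM "analyzing" the same records offers no equivalent guarantee: it generates plausible text about the data rather than counting against stated rules, which is exactly the distinction the excerpt draws.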
“The percentage of high school students who report using GenAI tools for schoolwork is growing, increasing from 79% to 84% between January and May of 2025.”
“Miller said the debate around AI “echoes earlier moments in art history,” particularly during the Renaissance era, when the introduction of oil painting “gave artists the freedom to revise and enhance their work over time.””
“Given current AI capabilities, the public already supports automating 30% of occupations. When AI is described as outperforming humans at lower cost, support for automation nearly doubles to 58% of occupations. Yet a narrow subset (12%)—including caregiving, therapy, and spiritual leadership—remains categorically off-limits because such automation is seen as morally repugnant. This shift reveals that for most occupations, resistance to AI is rooted in performance concerns that fade as AI capabilities improve, rather than principled objections about what work must remain human.”
Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.
Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.
– Peter Nilsson