An excellent week.
In the features, find the first post from former Surgeon General Vivek Murthy’s new blog. During his two (!) tenures in the role, Murthy led efforts to combat the loneliness epidemic we are facing in our digital age. In the post, he makes the case that we need to waste time together to build community. I interpret this not quite so literally: what we need to build are shared experiences — and it’s best if those shared experiences allow space for organic interaction (i.e. wasting time). In the other feature, find a helpful HBR post about where AI leadership fits in the org chart.
Also this week: a remarkable number of curricular components that seem to have come and gone and come back again: algebra in middle school, cursive, computer science, and more.

Last, here’s an inside scoop on our book: it’s about teaching and learning first — and then about AI in the context of teaching and learning. It’s not a pro-AI book or an anti-AI book, but instead a book about learning design, feedback, student agency, research, productive struggle, and more — and AI’s interaction with all of these. If you’re looking to drive conversation about effective teaching and learning, consider it for your faculty. Discounts available for bulk orders. Reply to this email for details.
All these and more — including a robust AI update this week — enjoy!
Peter

Browse and search over 15,000 curated articles from past issues online:
“If you want to build community, you have to give it time. Not optimized time. Unstructured, unhurried, “unproductive” time. The kind of time where conversations wander. Where nothing needs to be accomplished. Where people can show up as they are, not as the most efficient version of themselves. For many people, pausing can feel impossible when you’re working hard just to make ends meet. But even small moments of connection can matter. Just 10–15 minutes of unhurried conversation or shared presence can help calm the body’s stress response and give our nervous system a chance to reset.”
“When multiple leaders have legitimate claims to the same jurisdiction, organizations typically default to one of four responses: Hand it to the most capable leader, Give it to the squeakiest wheel, Ship it off to a committee, Pass it to whoever has existing budget. However, none of these will work for agentic AI. The stakes are too high for politics, the technology is moving too fast for non-expert committees with competing interests to keep pace, and the scope is way too broad for any single function’s existing desires or capabilities.”
So many good posts this week.
In the features, find two excellent posts about practical applications of AI for teaching and learning: a history teacher designs a constitutional convention for his students, and Dr. Philippa Hardman describes effective uses of AI for providing feedback. These are strong (and creative) use cases. Also in the features, see Anthropic’s report on what people want from AI. It offers good insight into how people are finding productive uses of the technology.
Elsewhere in the AI Update, find a surprising number of surveys whose results have been published in the past few weeks: on general AI usage, on AI use in teacher evaluation, on AI shopping agents, on kid and family perspectives, and more. These are excellent sources of culture- and profession-wide data.
Also of note: the city of Boston plans for every high school graduate to have AI literacy. Not long ago, Ohio State University set the same objective for every one of its college graduates. Increasingly, it’s understood that AI is or will be pervasive in the workforce, and part of our work in schools is to prepare students for it. It’s not our only objective, but it is one we are hearing more and more about.
Last, a few posts on emerging understandings:
Or perhaps one more, for something uplifting — or at least thought-provoking: a recent study looked at what happens when AI models use reasoning mode. It turns out they’re not just “thinking longer” — they’re looking at issues from multiple perspectives, weighing how those perspectives form a fuller picture, and producing a more broadly considered output. This is the kind of liberal arts perspective-taking we hope our students will learn. There’s value to be found in the alignment of these goals and practices. What that looks like is still taking shape. See the post in Industry Development.
These and more, enjoy!
Peter

“When I first started using AI seriously, I was doing what most teachers do – asking questions, iterating on responses, but primarily generating text – converting ideas into polished plans and getting feedback and analysis on my own work. AI remains useful for all of those tasks. But agentic AI introduces something fundamentally different. I can now craft an interactive learning experience – deciding what it should do, how it should be organized, who would use it, and what problem it needs to solve. That’s a much different – and more involved – cognitive activity.”
“Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.”
“AI-only feedback and support seems most effective when tasks are low-stakes, tightly structured, and not identity-relevant: grammar correction, coding syntax, multiple-choice practice, drill exercises. In these contexts, AI is functional, welcome, and effective. The Karimova and Csapó (2024) meta-review found its strongest gains in exactly these structured, skill-based domains. Students consistently draw the line themselves. In Alkhalaf, Alkhayat and Alzahrani (2026) and across domain studies, they say: AI for drills and drafts, humans for capstones, professional work, and emotionally difficult tasks. The boundary hinges on identity relevance and emotional stakes. When the work is “just practice,” AI is a useful tool.”
“Mayor Michelle Wu is launching a new program she says will make Boston the first major city in the U.S. to ensure that all high school graduates are proficient in artificial intelligence.”
“I think a lot of teachers see AI as a bad thing because they don’t really know how to use it themselves. They only think of students plugging in a prompt and it just spitting out an answer. But I think if teachers develop this AI literacy – I think AI might see a lot more healthy use in classrooms than it’s seeing right now.”
“The most powerful use of AI isn’t producing learning materials. It’s creating environments where learners actually practise. And you can build these right now — no dev team, no custom platform, no code. Each method below includes a prompt you can paste into your preferred AI tool to generate a working interactive prototype: a self-contained practice activity with a briefing screen, a live AI interaction, and a debrief — all running in the browser, ready to share with stakeholders or deploy to learners.”
“The share of students in middle school grades and up who reported using AI for help with their homework increased from 48 percent in May 2025 to 62 percent in December 2025. Use among middle and high schoolers drove the overall increase.”
“What happens inside an ostensibly singular reasoning model? A community conversation, as it turns out. In a recent study, we demonstrated that frontier reasoning models like DeepSeek-R1 and QwQ-32B do not improve simply by “thinking longer.” Instead, they simulate complex, multi-agent-like interactions within their own chain of thought—what we term a “society of thought”. These models spontaneously generate internal debates among distinct cognitive perspectives that argue, question, verify, and reconcile. This conversational structure causally accounts for the models’ accuracy advantage on hard reasoning tasks, which we demonstrated by explicitly priming and amplifying multi-party conversation.”
“This has implications for how we teach AI literacy. If the model is a parrot, the right pedagogical move is to emphasize how mid the outputs are. They’re “just statistical regularities” and AI literacy is mostly about correcting their middling outputs. If the model is a geometric object, then learning to navigate that space is a genuine skill, one that depends on 1) domain expertise and 2) creativity. We absolutely should emphasize basic information literacy and be wary of poor or inaccurate outputs, but knowing how to navigate a model’s terrain successfully is upstream of that.”
“Researchers warn sycophancy is an urgent safety issue requiring developer and policymaker attention.”
Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.
Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.
– Peter Nilsson