A weekly collection of education-related news from around the web.

Educator’s Notebook #517 (March 29, 2026)

INTRODUCTION

  • An excellent week.

    In the features, find the first post in former Surgeon General Vivek Murthy’s new blog. During his two (!) tenures in the role, Murthy led efforts to combat the loneliness epidemic we are facing in our digital age. In the post he makes the case that we need to waste time together to build community. I interpret this not quite so literally. Instead, it is shared experiences that we need to build — and it’s best if those shared experiences allow space for organic interaction (e.g. wasting time). In the other feature, find a helpful HBR post about where AI leadership fits in the org chart.

    Also this week: a remarkable number of curricular components that seem to have come and gone and come back again: algebra in middle school, cursive, computer science, and more.

     

    The arrival of AI is an excellent reason to talk about the elements of good teaching and learning.

     

    Last, here’s an inside scoop on our book: it’s about teaching and learning first — and then about AI in the context of teaching and learning. It’s not a pro-AI book or an anti-AI book, but instead a book about learning design, feedback, student agency, research, productive struggle, and more — and AI’s interaction with all of these. If you’re looking to drive conversation about effective teaching and learning, consider it for your faculty. Discounts available for bulk orders. Reply to this email for details.

    All these and more — including a robust AI update this week — enjoy!

    Peter

     

    What do people want from AI? See the Anthropic survey in the AI Update for more.

     


     

    Browse and search over 15,000 curated articles from past issues online.

    Subscribe to the Educator’s Notebook

    • Vivek Murthy
    • 03/24/26

    “If you want to build community, you have to give it time. Not optimized time. Unstructured, unhurried, “unproductive” time. The kind of time where conversations wander. Where nothing needs to be accomplished. Where people can show up as they are, not as the most efficient version of themselves. For many people, pausing can feel impossible when you’re working hard just to make ends meet. But even small moments of connection can matter. Just 10–15 minutes of unhurried conversation or shared presence can help calm the body’s stress response and give our nervous system a chance to reset.”

    • Harvard Business Review
    • 03/12/26

    “When multiple leaders have legitimate claims to the same jurisdiction, organizations typically default to one of four responses: Hand it to the most capable leader, Give it to the squeakiest wheel, Ship it off to a committee, Pass it to whoever has existing budget. However, none of these will work for agentic AI. The stakes are too high for politics, the technology is moving too fast for non-expert committees with competing interests to keep pace, and the scope is way too broad for any single function’s existing desires or capabilities.”

ADOLESCENCE

ATHLETICS

CHARACTER

CURRICULUM

DIVERSITY/INCLUSION

GOVERNMENT

HUMANITIES

LEADERSHIP

LEARNING SCIENCE

PEDAGOGY

READING/WRITING

SAFETY

SOCIAL MEDIA

STEM

TECH

GENERAL


A.I. UPDATE

  • So many good posts this week.

    In the features, find two excellent posts about practical applications of AI for teaching and learning: a history teacher designs a constitutional convention for his students, and Dr. Philippa Hardman describes effective uses of AI for providing feedback. These are strong (and creative) use cases. Also in the features, see Anthropic’s report on what people want from AI. It offers good insight into how people are finding productive uses of the technology.

    Elsewhere in the AI Update, find a surprising number of surveys whose results have been published in the past few weeks: on general AI usage, on AI use in teacher evaluation, on AI shopping agents, on kid and family perspectives, and more. These are excellent sources of culture- and profession-wide data.

    Also of note: the city of Boston plans for every high school graduate to have AI literacy. Not long ago Ohio State University set the same objective for every one of its college graduates. Increasingly, it’s understood that AI is or will be pervasive in the workforce, and part of our work in schools is to prepare students for that workforce. It’s not our only objective, but it is one we are hearing more and more about.

    Last, a few posts on emerging understandings:

    • AI doesn’t reduce work — it intensifies it
    • AI brainstorms more than most humans, but still in consistent categories
    • AI sycophancy is increasingly understood as a problem

    Or perhaps one more, for something uplifting — or at least thought-provoking: A recent study looked at what happens when AI models use reasoning mode. It turns out they’re not just “thinking longer”; they’re actually looking at issues from multiple perspectives, weighing how those perspectives form a fuller picture, and producing a more broadly considered output. This is the kind of liberal arts perspective-taking we hope our students will learn. There’s value in aligning these goals and practices. What that looks like is still taking shape. See the post in Industry Development.

    These and more, enjoy!

    Peter

    Anthropic has the most secure models. See Ethics and Risk for more.
    • Teaching in the Age of AI
    • 03/16/26

    “When I first started using AI seriously, I was doing what most teachers do – asking questions, iterating on responses, but primarily generating text – converting ideas into polished plans and getting feedback and analysis on my own work. AI remains useful for all of those tasks. But agentic AI introduces something fundamentally different. I can now craft an interactive learning experience – deciding what it should do, how it should be organized, who would use it, and what problem it needs to solve. That’s a much different – and more involved – cognitive activity.”

    • Anthropic
    • 03/01/26

    “Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.”

    • Dr. Philippa Hardman
    • 02/19/26

    “AI-only feedback and support seems most effective when tasks are low-stakes, tightly structured, and not identity-relevant: grammar correction, coding syntax, multiple-choice practice, drill exercises. In these contexts, AI is functional, welcome, and effective. The Karimova and Csapó (2024) meta-review found its strongest gains in exactly these structured, skill-based domains. Students consistently draw the line themselves. In Alkhalaf, Alkhayat and Alzahrani (2026) and across domain studies, they say: AI for drills and drafts, humans for capstones, professional work, and emotionally difficult tasks. The boundary hinges on identity relevance and emotional stakes. When the work is “just practice,” AI is a useful tool.”

TECH/AI: EDUCATION

TECH/AI: ETHICS AND RISK

TECH/AI: INDUSTRY DEVELOPMENT

    • Ipsos
    • 03/24/26
    • Arxiv
    • 03/21/26

    “What happens inside an ostensibly singular reasoning model? A community conversation, as it turns out. In a recent study, we demonstrated that frontier reasoning models like DeepSeek-R1 and QwQ-32B do not improve simply by “thinking longer.” Instead, they simulate complex, multi-agent-like interactions within their own chain of thought—what we term a “society of thought”. These models spontaneously generate internal debates among distinct cognitive perspectives that argue, question, verify, and reconcile. This conversational structure causally accounts for the models’ accuracy advantage on hard reasoning tasks, which we demonstrated by explicitly priming and amplifying multi-party conversation.”

    • New York Times
    • 03/19/26
    • New York Times
    • 03/19/26
    • Joel Gladd
    • 03/15/26

    “This has implications for how we teach AI literacy. If the model is a parrot, the right pedagogical move is to emphasize how mid the outputs are. They’re “just statistical regularities” and AI literacy is mostly about correcting their middling outputs. If the model is a geometric object, then learning to navigate that space is a genuine skill, one that depends on 1) domain expertise and 2) creativity. We absolutely should emphasize basic information literacy and be wary of poor or inaccurate outputs, but knowing how to navigate a model’s terrain successfully is upstream of that.”

TECH/AI: SOCIAL

TECH/AI: USES AND APPLICATIONS

TECH/AI: GENERAL

Issues

Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.

Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.

Peter Nilsson

Subscribe
