A weekly collection of education-related news from around the web.

Educator’s Notebook #497 (August 31, 2025)

INTRODUCTION

  • It’s Labor Day weekend in the US and the new year is upon us.

    In this week’s features, find an excellent reflection by Michael Wagner on what AI literacy means today, one that draws on scholarly traditions from centuries past. Also in the features, find a good reminder that there is a science to learning, and that practical study strategies can be designed around it. The Middle Web post highlights an excellent set of bread-and-butter practices for students and teachers.

    Also this week, find a link to a growing Wikipedia page cataloging patterns found in AI writing, as well as another excellent post by Moira Kelly at Explo on leadership, this time a guide for setting a high cadence for leadership productivity. A must-read for those running teams who want to get things done.

    All these and more this week, enjoy!

    Peter

    What books from school have stuck with you over the years? See the NPR post in Reading/Writing for more.


    Browse and search over 15,000 curated articles from past issues online:

    Subscribe to the Educator’s Notebook


    • Augmented Educator
    • 08/26/25

    “It is not an "AI curriculum"; it is a comprehensive framework for critical thinking in a multi-modal world. AI is the catalyst that makes these skills urgent, but their scope is far broader. 1) Critical Reading: This is no longer just about analyzing a printed text. It’s about interrogating the logic of hyperlinks, understanding the persuasive architecture of a website, and detecting the subtle biases in algorithmically curated news feeds. It’s a foundational skill for navigating any information system, human or machine-made… 2) Critical Listening… 3) Critical Seeing… 4) Critical Making.”

    • Middle Web
    • 08/24/25

    “Many of the strategies students gravitate toward are among the least effective. These include rereading, highlighting, reviewing notes, and summarizing. While these approaches feel productive, research paints a different picture of how study time should be spent. Two separate, large-scale studies identified five common, high-yield study strategies for teachers and students to utilize: practice testing, distributed practice, elaborative interrogation, self-explanation, and interleaved practice (Dunlosky, 2013; Donoghue, 2021).”

ADMISSIONS

ADOLESCENCE

ASSESSMENT

CHARACTER

CURRICULUM

DIVERSITY/INCLUSION

GOVERNMENT

HUMANITIES

INTERNATIONAL

LANGUAGE

LEADERSHIP

    • Stanford
    • 08/27/25
    • Explo Elevate
    • 08/19/25

    “Just as individuals vary dramatically in their ability to process challenges and maintain energy under pressure, school leadership teams have vastly different capacities for handling complexity, making decisions, and sustaining momentum through difficult changes. Some teams can absorb multiple competing priorities, quickly convert problems into action plans, and maintain strategic focus even when facing setbacks. Others become overwhelmed by the first major challenge and need extended recovery time between initiatives.”

PEDAGOGY

    • Cult of Pedagogy
    • 08/31/25

    “Rather, this is a post about what can happen in the classroom when you go beyond tossing student work up on the walls and actually center student work in the classroom itself. As much as I love the bulletin board — especially with some diligent student aides to help make updating it manageable! — the most impactful aspect occurs when I project a sentence written by a student in front of the entire class and use it as a teaching tool for the rest of their class, an inspiration of what is possible that is all the more powerful because it emerged from our own classroom.”

    • Lauren Brown
    • 08/22/25

READING/WRITING

STEM

    • Inside Higher Ed
    • 08/25/25

    “When asked how important math skills were for the majority of the U.S. workforce, 40 percent of young adults rated having math skills as very important—the lowest rating of nine skills evaluated, including reading, language, technology and leadership, according to Gallup.”

GENERAL

A.I. UPDATE

  • Lots of late summer reports coming out.

    In the features, find Anthropic’s report on how educators are using Claude, and also a reflection from a writing professor on his discussions and experiments with his students regarding AI. Both are excellent reads that also include practical insights for teachers.

    Elsewhere in the AI Update, the Andreessen Horowitz report on the most used GenAI products shows a familiar pattern: after the top spots, held by the foundation models, the next place belongs to Character.AI. It’s a reminder that social and emotional use cases are a major source of AI traffic, and a major source of concern, because of how widely they are used without many (any?) safeguards.

    To that end, one of the more charged conversations right now is about the somewhat marginal topic of “model welfare.” It raises an extreme moral question about what AI is and whether it has consciousness: model welfare is the idea that, should people begin to consider AI a conscious entity, steps should be taken to protect the welfare of the model. No major AI company is claiming this is the case, but some are beginning to factor the small possibility into the behavior of their AI tools. The ensuing discussion is not yet vigorous, but it is pointed. See the several posts in the Ethics and Risk section. I think Mustafa Suleyman’s post hits the right mark: we need to be sure that we are designing AI for people, not to be a person. While this is all a fringe conversation, it may push design decisions that are very healthy for the human experience, decisions that lead people to work more with other people rather than becoming entangled with AI in an accidentally social way.

    These and more this week, enjoy!

    Peter

    What are the most used Gen AI web products? See the post in Industry Development for more.
    • Anthropic
    • 08/26/25

    “Some educators are automating grading; others are deeply opposed — In our Claude.ai data, faculty used AI for grading and evaluation less frequently than other uses, but when they did, 48.9% of the time they used it in an automation-heavy way (where the AI directly performs the task). That’s despite educator concerns about automating assessment tasks, as well as our surveyed faculty rating it as the area where they felt AI was least effective.”

    • Literary Hub
    • 07/28/25

    “I attempted the experiment in four sections of my class during the 2024-2025 academic year, with a total of 72 student writers. Rather than taking an “abstinence-only” approach to AI, I decided to put the central, existential question to them directly: was it still necessary or valuable to learn to write? The choice would be theirs. We would look at the evidence, and at the end of the semester, they would decide by vote whether A.I. could replace me. What could go wrong?”

TECH/AI: EDUCATION

TECH/AI: ETHICS AND RISK

    • New York Times
    • 08/26/25

    “Why Adam took his life — or what might have prevented him — is impossible to know with certainty. He was spending many hours talking about suicide with a chatbot. He was taking medication. He was reading dark literature. He was more isolated doing online schooling. He had all the pressures that accompany being a teenage boy in the modern age.”

    • New York Times
    • 08/25/25

    “When A.I. chatbots are purposely trained as digital therapists, they show more promise. One example is Therabot, designed by Dartmouth College researchers. In a randomized controlled trial completed earlier this year, adult participants who used Therabot reported significant reductions in depression, anxiety and weight concerns. They also expressed a strong sense of connection to the chatbot. But these findings don’t neatly translate to adolescents.”

    • Australian Financial Review
    • 08/22/25
    • Mustafa Suleyman
    • 08/19/25

    “The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions… those actually working on the science of consciousness tell me they are inundated with queries from people asking ‘is my AI conscious?’ What does it mean if it is? Is it ok that I love it? The trickle of emails is turning into a flood. A group of scholars have even created a supportive guide for those falling into the trap… We aren’t ready for this shift.  The work of getting prepared must begin now.”

    • Anthropic
    • 08/15/25

    “We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces. This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions. This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards. We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”

    • Anthropic
    • 04/24/25

    “Human welfare is at the heart of our work at Anthropic: our mission is to make sure that increasingly capable and sophisticated AI systems remain beneficial to humanity. But as we build those AI systems, and as they begin to approximate or surpass many human qualities, another question arises. Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too? This is an open question, and one that’s both philosophically and scientifically difficult. But now that models can communicate, relate, plan, problem-solve, and pursue goals—along with very many more characteristics we associate with people—we think it’s time to address it.”

TECH/AI: INDUSTRY DEVELOPMENT

TECH/AI: SOCIAL

TECH/AI: USES AND APPLICATIONS

Issues

Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.

Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.

Peter Nilsson

Subscribe
