It’s Labor Day weekend in the US and the new school year is upon us.
In this week’s features, find an excellent reflection by Michael Wagner on what AI literacy means today, one that draws on scholarly traditions from centuries past. Also in the features, find a good reminder that there is a science to learning, and that practical strategies for learning can be designed around this science. The MiddleWeb post highlights an excellent set of bread-and-butter practices for students and teachers.
Also this week, find a link to a growing Wikipedia page cataloging patterns found in AI writing. Find also another excellent post by Moira Kelly at Explo on leadership, this time a guide to setting a high cadence for leadership productivity. A must-read for those running teams who want to get things done.
All these and more this week, enjoy!
Peter

“It is not an ‘AI curriculum’; it is a comprehensive framework for critical thinking in a multi-modal world. AI is the catalyst that makes these skills urgent, but their scope is far broader. 1) Critical Reading: This is no longer just about analyzing a printed text. It’s about interrogating the logic of hyperlinks, understanding the persuasive architecture of a website, and detecting the subtle biases in algorithmically curated news feeds. It’s a foundational skill for navigating any information system, human or machine-made… 2) Critical Listening… 3) Critical Seeing… 4) Critical Making.”
“Many of the strategies students gravitate toward are among the least effective. These include rereading, highlighting, reviewing notes, and summarizing. While these approaches feel productive, research paints a different picture of how study time should be spent. Two separate, large-scale studies identified five common, high-yield study strategies for teachers and students to utilize: practice testing, distributed practice, elaborative interrogation, self-explanation, and interleaved practice (Dunlosky, 2013; Donoghue, 2021).”
“Just as individuals vary dramatically in their ability to process challenges and maintain energy under pressure, school leadership teams have vastly different capacities for handling complexity, making decisions, and sustaining momentum through difficult changes. Some teams can absorb multiple competing priorities, quickly convert problems into action plans, and maintain strategic focus even when facing setbacks. Others become overwhelmed by the first major challenge and need extended recovery time between initiatives.”
“Rather, this is a post about what can happen in the classroom when you go beyond tossing student work up on the walls and actually center student work in the classroom itself. As much as I love the bulletin board — especially with some diligent student aides to help make updating it manageable! — the most impactful aspect occurs when I project a sentence written by a student in front of the entire class and use it as a teaching tool for the rest of their class, an inspiration of what is possible that is all the more powerful because it emerged from our own classroom.”
“This is a list of writing and formatting conventions typical of AI chatbots such as ChatGPT, with real examples taken from Wikipedia articles and drafts. Its purpose is to act as a field guide in helping detect undisclosed AI-generated content. Note that not all text featuring the following indicators is AI-generated; large language models (LLMs), which power AI chatbots, have been trained on human writing, and some people may share a similar writing style.”
“When asked how important math skills were for the majority of the U.S. workforce, 40 percent of young adults rated having math skills as very important—the lowest rating of nine skills evaluated, including reading, language, technology and leadership, according to Gallup.”
Lots of late-summer reports coming out.
In the features, find Anthropic’s report on how educators are using Claude, and also a reflection from a writing professor on his discussions and experiments with his students regarding AI. Both are excellent reads that also include practical insights for teachers.
Elsewhere in the AI Update, the Andreessen Horowitz report on the most-used GenAI products shows a familiar pattern: after the top spots, which are held by foundation models, the next place belongs to Character.AI. It’s a reminder that social and emotional use cases are a major source of AI traffic, and a major source of concern because of how widely they are used without many (any?) safeguards.
To that end, one of the edgier conversations right now concerns the marginal topic of “model welfare,” a moral question about what AI is and whether it could be conscious. Model welfare is the idea that, should people begin to consider AI a conscious entity, then steps should be taken to protect the welfare of the model. No major AI company is advocating that this is the case, but some are beginning to factor the small possibility into the behavior of their AI tools. The ensuing discussion is not yet vigorous, but it is pointed; see the several posts in the Ethics and Risk section. I think Mustafa Suleyman’s post hits the right mark: we need to be sure that we are designing AI for people, not to be a person. Fringe as this conversation is, it may prove very healthy for the human experience if it pushes design choices that drive people to work more with other people rather than becoming accidentally social with AI.
These and more this week, enjoy!
Peter

“Some educators are automating grading; others are deeply opposed — In our Claude.ai data, faculty used AI for grading and evaluation less frequently than other uses, but when they did, 48.9% of the time they used it in an automation-heavy way (where the AI directly performs the task). That’s despite educator concerns about automating assessment tasks, as well as our surveyed faculty rating it as the area where they felt AI was least effective.”
“I attempted the experiment in four sections of my class during the 2024-2025 academic year, with a total of 72 student writers. Rather than taking an “abstinence-only” approach to AI, I decided to put the central, existential question to them directly: was it still necessary or valuable to learn to write? The choice would be theirs. We would look at the evidence, and at the end of the semester, they would decide by vote whether A.I. could replace me. What could go wrong?”
“Talking, listening and reading have been part of academic culture since the beginning, but written assignments — the five-paragraph essay, the research paper, reading responses — were not. In the earliest universities, which coalesced in a handful of European cities around a thousand years ago, books were scarce, movable type was nonexistent, and education was organized around oral instruction and examination.”
“Instructors have been looking for readings they can assign to help explain why AI is being banned in their courses. They would rather not impose a controversial, strict policy without giving a rationale, since that may foster resentment and therefore non-compliance. The following is how I explain my AI policy to my technology-ethics class, to help get their buy-in at the very start. This is the course’s first reading and captures just about all the major concerns about AI use, so it’s a long read.”
“Why Adam took his life — or what might have prevented him — is impossible to know with certainty. He was spending many hours talking about suicide with a chatbot. He was taking medication. He was reading dark literature. He was more isolated doing online schooling. He had all the pressures that accompany being a teenage boy in the modern age.”
“When A.I. chatbots are purposely trained as digital therapists, they show more promise. One example is Therabot, designed by Dartmouth College researchers. In a randomized controlled trial completed earlier this year, adult participants who used Therabot reported significant reductions in depression, anxiety and weight concerns. They also expressed a strong sense of connection to the chatbot. But these findings don’t neatly translate to adolescents.”
“The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions… those actually working on the science of consciousness tell me they are inundated with queries from people asking ‘is my AI conscious?’ What does it mean if it is? Is it ok that I love it? The trickle of emails is turning into a flood. A group of scholars have even created a supportive guide for those falling into the trap… We aren’t ready for this shift. The work of getting prepared must begin now.”
“We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces. This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions. This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards. We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”
“Human welfare is at the heart of our work at Anthropic: our mission is to make sure that increasingly capable and sophisticated AI systems remain beneficial to humanity. But as we build those AI systems, and as they begin to approximate or surpass many human qualities, another question arises. Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too? This is an open question, and one that’s both philosophically and scientifically difficult. But now that models can communicate, relate, plan, problem-solve, and pursue goals—along with very many more characteristics we associate with people—we think it’s time to address it.”
“For people under 18, AI companions are not currently safe. They can manipulate teens’ emotions, distort their sense of reality, and keep them from getting the real support they deserve at a time of significant brain growth and development. AI companion platforms are financially incentivized to build dependency, with your teen as the target audience. But teens don’t need a machine to care about them; they need and deserve people who do.”
“I don’t know whether A.I. will look, in the economic statistics of the next 10 years, more like the invention of the internet, the invention of electricity or something else entirely. I hope to see A.I. systems driving forward drug discovery and scientific research, but I am not yet certain they will. But I’m taken aback at how quickly we have begun to treat its presence in our lives as normal.”
Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.
Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.
– Peter Nilsson