What a week.
Cal Newport’s feature calls for a thinking revolution commensurate with the health revolution of the 20th century. It’s a fascinating framing of not just the AI moment but the impact of technology generally. I don’t agree with some of his conclusions, but I find his framing helpful and well put. The other feature is one of two articles this week focusing on wisdom. These posts align with one of our claims in Irreplaceable: that while technology can perform marvels, learning still moves at the speed of humanity rather than the speed of silicon. It still takes time to accrue wisdom, and students still take time to form excellent questions.
In this week’s Character section, find more on agency, the science of morals, change, high expectations, and more.
Also this week, the feeling of gloom and doom just seems to be swelling in the news. For something uplifting, consider the post in STEM on beauty in mathematics, or maybe the piece in Character on how to flourish in challenging times.
All these and more — including a robust AI update this week — enjoy!
Peter


Browse and search over 16,000 curated articles from past issues online:
“In the world of physical health, we now know we should largely avoid ultraprocessed snacks like Doritos and Oreos, which are Frankenfoods made by reconstituting stock ingredients like corn and soy with hyperpalatable ratios of salt, sugar and fat. Much of the digital content that ensnares our attention in the current moment is also ultraprocessed, in that it’s the result of vast databases of user-generated content that are sifted, broken down and recombined by algorithms into personalized streams designed to be irresistible. What is a TikTok video if not a digital Dorito? We should consider taking as strong a stance against ultraprocessed content as we already do against ultraprocessed food. Which is to say: Most people should avoid these diversions most of the time. In the same way that you’re unlikely to eat Twinkies as a regular snack or still believe that Pop-Tarts provide a balanced breakfast, stop consuming ultraprocessed content.”
“When wisdom comes naturally, it often derives from lessons learned through intense experiences or dilemmas. These experiences may be painful, like breakups or illnesses, but wisdom can also be gained from experiences that are simply challenging, like moving to a new city or having a baby, Glück says. Yet plenty of people who get cancer or become parents never gain much wisdom. Why? By reviewing wisdom research and interviewing wise and less-wise people using varied measures, Glück has identified five prerequisites for extracting wisdom from experience. These include the ability to manage uncertainty, to maintain an openness to change and new perspectives, to reflect on one’s experiences, to regulate emotional ups and downs, and to practice empathy.”
“To stick up for communities like Gray’s Ferry, researchers need to do more than responsibly represent the facts. They need, somehow, to become part of a bulwark against attacks on community activists’ moral standing. Scientists’ research alone can’t shift our standards for right and wrong to make sure that, ultimately, exposing communities to toxic pollution is met with widespread public disapproval and reparative action. Community activists need to question the moral logics that excuse the harms that scientists demonstrate. When members of groups like Philly Thrive say, “Your jobs do not make up for our lives,” we need to hear them as legitimate contributors to our collective conscience.”
“What makes kids try harder? Teachers, mostly. Strong teachers motivate students to elevate their effort as the material gets more challenging. A positive school culture – the sum of many teachers and support staff aligned to the same standard – ensures consistency across classrooms and magnifies the effect.”
“The Bechdel Test is familiar to most writers. A work passes if it has: 1) at least two women, 2) who talk to each other, 3) about something other than a man… What makes the Bechdel Test so effective? Perhaps because of the low standard it sets. A creative work can pass the test and still have weak, underwritten female characters. The bar is—quite honestly—low, by design. The Bechdel Test conversation reveals how often women exist in a story only in relation to men.”
“Traditional technology institutes treated the humanities and social sciences as supplements — breadth requirements to balance technical depth. The next-generation model begins from a different premise: The defining problems of the 21st century, such as climate change, public health, AI governance, and urban design are inherently hybrid. They require both technical expertise and humanistic understanding… The curriculum rests on a simple premise: that human questions (What makes a life meaningful? How should resources be distributed? What do we owe one another?) cannot be separated from technical ones (How do we build this? Will it scale? What could go wrong?).”
“Some pilot schools put their 8th graders directly into Algebra 1. Others gave 8th graders the option to enroll in two classes: the regular 8th grade math course, and Algebra 1 as an elective. The natural experiment gave researchers the ability to compare the different sequences, and see if one produced stronger outcomes.”
“Not all technologies improve people’s lives. Just as Berners-Lee’s now omnipresent web shapes industries and markets, it shapes its users’ thoughts, perceptions, and relationships. As we’re slowly coming to understand, human beings did not evolve to be virtual creatures in a computer-generated world. The internet operates at a scale and speed that conflict with the brain’s deliberate pace of thought, the intellect’s slow accumulation of knowledge, and the psyche’s limited capacity for stimulation and social exchange. To be able to do anything and be anywhere at any moment seems liberating for a while, but it ends in a blurred and chaotic existence, the physical world’s familiar, steadying divisions of space and time dissolving in endless torrents of data.”
“Researcher Philip Jackson (p. 149) said that elementary teachers have 200 to 300 exchanges with students every hour (between 1,200 and 1,500 a day), most of which are unplanned and unpredictable, calling for teacher decisions, if not judgments.”
Stanford Teaching Commons has an excellent four-part framework for AI Literacy, and one of those four parts focuses on effective use of AI. One of this week’s features breaks down effective AI use into four traits. It’s a helpful summary. Also in the features, find the story of the educator who vibe-coded an app for fact-checking online writing. The story isn’t really about the fact-checking — it’s really about how anyone can now tell an AI to write code to do almost anything that involves digital information. This is a real turning point, one that will fuel decades of creativity, and what we’re seeing now is surely just the beginning.
In Teach/AI: Education, find a host of compelling posts. I was particularly moved by the Division Director whom a chatbot alerted to a student’s sadness.
Also this week, see a pair of posts by Tom Millinchip on using Claude Projects. They have been a game changer for me, too.
These and more, enjoy!
Peter

“They were ambitious in how they approached AI use… They treated AI as a reasoning partner… They delegated complex tasks with clear objectives… They treated AI as a general cognitive tool rather than a narrow productivity shortcut…”
“I typed a simple prompt: “I’d like to build an app that fact-checks articles on the web.” And Claude built it.”
“What’s the role of AI in this world? Well, if you visit secondary schools that foster deeper learning, you will see that they resemble modern workplaces, with students often working together on long-term projects. In these settings, AI can be used the same way you use it in a workplace. Sometimes, it enables you to do something faster or more efficiently, sometimes, it is useful for brainstorming, and sometimes it is useless. The role of the teacher as coach, then, is to help students make judgments about whether, when, and under what conditions AI can help.”
“He was chatting with the AI bot and it alerted me he said he was depressed. Never in my years of teaching did I ever think I would type that.”
“Tech workers, it is becoming clear, have been building their A.I. replacements. The profitable business models of software companies are also threatened by A.I. Even the way companies are built is being turned inside out, as tiny shops use A.I. to build apps and software that would have taken dozens of skilled programmers just a few years ago.”
“In other words, human connection didn’t win out in this experiment just because it’s superior—it won out because participants in the human treatment were required to show up for one another. And outside of the lab, the world often lacks systems and structures designed to do just that. If we want more relationships, we need more brokers. When was the last time someone (your employer, school, or community center) randomly assigned you to chat with a stranger for two weeks? If your answer was never, you’re not alone. We don’t really do much connecting for connection’s sake.”
Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.
Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.
– Peter Nilsson