In this week’s features, find a fascinating and thought-provoking read framed by the question: does being intelligent make us happy? It explores how we define intelligence, especially in relation to poorly defined versus well-defined problems. It juxtaposes doing well in school (a well-defined problem) with living happily (a poorly defined, or complex, problem). The post prompts meaningful questions about what we teach and why.
Also in the features, find a study reported by Carl Hendrick that revisits some of the nuance in “Thinking, Fast and Slow.” It offers insight into how we think about intuition and what leads to sound intuition.
Also this week, I was heartened to see the report in the pedagogy section on how effective math teachers use more math vocabulary. The report focuses on 4th- and 5th-grade teachers, but I suspect the principle applies at all grade levels. I recall that when I was a hiring leader across departments, I would listen for whether and how candidates used the language of their discipline. Whether I was asking about their undergraduate or graduate work, what attracted them to the field, or what courses they most loved teaching and why, I wanted to hear them use the language and structure of their discipline automatically. I’m glad to know that research is bearing out that this trait makes a difference.
Also this week… the book comes out! Here’s the most important aspect of the book: it’s about teaching and learning first, and about AI in the context of teaching and learning second. It’s not a pro-AI book or an anti-AI book. It’s about lesson design, feedback, student agency, research, productive struggle, and much more, and about how AI interacts with all of these, including concrete next steps for teachers and leaders alike. It’s a book that captures many of the key principles of good practice and learning science that decades of research and experience have taught us as a field, and then sees AI through the lens of that professional knowledge.
This is why Maha Bali writes “It is so rare to find a book on AI in education that is neither uncritically enthusiastic nor completely resistant, but one that makes space for readers who have diverse value systems and philosophies of education… Here is a book that helps you on the conceptual level, gives you practical tips, and most importantly, respects the reader as a whole person and not just a role. This is a book that I plan to use with fellow teachers and students alike.”
Are you looking for a book that will help you or your faculty with principles and practices of excellent teaching and learning, and with how AI relates to them? Pre-order here. And hear a good overview and introduction to the book in our recent interview on EdTech Insiders (the guest segment begins at the 46-minute mark).
Much, much more this week, including two different but excellent features in the AI Update below.
These and more, enjoy!
Peter
PS. Where you can find me:


Browse and search over 15,000 curated articles from past issues online:
“Sound thinkers are not primarily characterised by their capacity to correct intuitive errors through deliberation. Rather, they appear to be distinguished by their ability to intuit correctly in the first place… What makes some people better intuitive thinkers than others? …Using a verbal fluency task to map participants’ semantic networks, the researchers found that individuals with more interconnected, flexible semantic memory structures displayed higher rates of correct intuitive responding. Specifically, those who performed well intuitively had semantic networks characterised by shorter path lengths between concepts, greater local clustering, and fewer distinct sub-networks. Their knowledge was not siloed but integrated… This distinction is important and worth dwelling on. The study does not show that knowing more stuff produces better intuition. It shows that having a more interconnected, flexible semantic architecture does…. The question of whether education can cultivate the kind of flexible, interconnected semantic networks that support sound intuition remains open.”
“We’ve got no problem fawning over people who are good at solving well-defined problems. They get to be called “professor” and “doctor….” People who are good at solving poorly defined problems don't get the same kind of kudos. They don’t get any special titles or clubs. There is no test they can take that will spit out a big, honking number that will make everybody respect them. And that’s a shame. My grandma does not know how to use the “input” button on her TV’s remote control, but she does know how to raise a family full of good people who love each other, how to carry on through a tragedy, and how to make the perfect pumpkin pie.”
“Remember that your attention is a scarce resource.”
“Intellectual pluralism should be more than an exercise in gathering discrete political viewpoints into a common educational frame… Instead, as Amanda Anderson has argued, it should mean cultivating the individual, internal intellectual agility essential to think beyond “positions” whose political implications are set. Neatly declarable viewpoints, no matter which ones or how many, should not be the academy’s stock-in-trade. Scholarly materials worth grappling with rarely fit within summary political containers, and powerful ideas are ideologically mobile.”
“Public education is not a consumer good, like a gym membership or a streaming service. It is a civic institution.”
“The two camps, Snow observed, might as well have lived on different planets. Along with the knowledge divide came a power divide. To the public, the literary intellectuals — Snow pointed to the poet and critic T. S. Eliot as the “archetypal figure” — constituted the cultural elite… Today, the divides remain, but the power dynamic has been turned on its head. The STEM camp, in particular its technological wing, dominates the culture. Techies take prominent seats at presidential inaugurations and White House banquets… As for old-style public intellectuals, they’ve disappeared from the scene.”
“Leaders should do four things when asked to execute a call they disagree with: 1) Stabilize themselves; 2) prioritize where they can make an impact; 3) lead the team through the transition; and 4) preserve trust.”
“I was 39 when I started teaching high school social studies. Before that, I had lectured and conducted seminar discussions in a university setting for over a decade. So, assume that I knew almost nothing about teaching.”
“It wasn’t until a writing friend with a background in publicity said to me that I’d written a “genre-bending” debut that I realised I had—or the ways in which broadening my reading changed my writing.”
“Once, scientists believed in the four humors as the key to health, in phlogiston as the essence of fire and in the ether as the carrier of light. Eggs were bad for you, then fine, then maybe bad again. Even Newtonian physics, once considered unshakable, was revised by Einstein. If so many widely accepted theories have been discarded, why should we trust the ones we have now? It’s a sobering question but also a misleading one. It implies that the only possible attitudes toward science are naïve faith and wholesale pessimism… Fortunately, there is another attitude to adopt toward science — one you might call disciplined trust — that would serve us much better. It just happens to require some actual knowledge of science and some intellectual humility.”
“The big breakthroughs often come when concepts from a different field get brought to bear on a problem for the first time.”
In one of this week’s features, the CEO of Notion meaningfully reflects on how technology has changed industry over the past centuries, and from that looks ahead to how AI may further change industry. I think he’s right for the most part, but he misses a key point. Those businesses that have identified the technology infrastructure upon which the future world will be built have indeed facilitated extraordinary transformation. But the post overlooks that while some industries scale exponentially, others remain grounded in human interactions, where relationships are built, where purpose is found, and where meaning is made. As our technology advances further, the distance between the scaled world and the lived world widens. Culturally, we need to wrestle with this, which is part of why talking with our students and faculty about AI is so important. It’s why H. G. Wells wrote in 1920 that “human history becomes more and more a race between education and catastrophe.”
Also in the features, find research from the AI company QuestionWell about what makes good multiple choice questions and how they have tuned their model to produce better multiple choice questions.
Also this week, a good post about how AI “isn’t just predicting the next word anymore.” This is a good reminder as we engage in discussion with skeptics. The technology is advancing rapidly.
These and more, enjoy!
Peter

“Writing good multiple-choice questions (MCQs) is well studied and well known to be difficult, even for human beings. Decades of research document common flaws: inadvertently giving students cues, implausible distractors, ambiguity, and questions that reward test-taking strategies and allow students to sidestep understanding. Large language models (LLMs) make many of the same errors as human question writers. At QuestionWell, we push our AI to do better. This fall, we trained our AI model specifically to reduce well-documented item writing flaws in multiple choice questions.”
“My co-founder Simon was what we call a 10× programmer, but he rarely writes code these days. Walk by his desk and you'll see him orchestrating three or four AI coding agents at once, and they don't just type faster, they think, which together makes him a 30-40× engineer. He queues tasks before lunch or bed, letting them work while he's away. He's become a manager of infinite minds.”
“Unlike the solo AI agent paradigm dominating headlines, multi-user collaborative AI represents something fundamentally different and truly exciting: AI as social infrastructure that strengthens relationships and encourages collaboration. This is why one of my predictions for 2026 is that it will be the year that “Social AI” breaks into the mainstream, especially in education.”
“Most drones require a human pilot. But some new Ukrainian drones, once locked on a target, can use A.I. to chase and strike it — with no further human involvement.”
Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.
Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.
– Peter Nilsson