An excellent two weeks.
Creativity has decidedly re-entered the discourse as an essential skill. Whereas 15 years ago the conversations were about demystifying creativity and understanding how it works, the narrative now is that in an age of AI, humanity’s creative skills are increasingly important, both for the workplace and for doubling down on what matters most to us. See excellent articles on creativity in the features and throughout this week’s issue.
Also in the features, find a short but useful post on retrieval practice for the classroom. This and several other practical posts this week offer concrete techniques for teaching and learning.
In other categories, find changes in higher ed priorities, more Gatsby, some unfortunate backsliding in inclusion efforts, an excellent post on governance, and a rich AI section.
These and more, enjoy!
Peter
Browse and search over 15,000 curated articles from past issues online:
“Creativity isn’t a nice-to-have anymore. It’s the connective thread across every part of the school system—from what students do to how adults lead. Below are five commitments that aren’t siloed strategies, but mutually reinforcing actions that help creativity grow. It must be structurally embedded in curriculum, assessment, adult roles, and resource allocation.”
“The four essential cognitive processes support lasting learning: Attention. What we focus on and notice. Encoding. How we process and make sense of it. Storage. How we keep that information in our brains. Retrieval. How we access and use stored information when we need it.”
“If ideas become machines that make art, why can’t we design those machines in ways that preserve beautiful accidents where individual human judgment drives the creative process? LeWitt's genius wasn't in systematizing art-making per se but in designing systems that still required human interpretation—algorithms that needed the serendipities of manual labor to complete.”
““One hundred and four years is far too long for us to not address the harm of the massacre,” Mr. Nichols said in an interview before the announcement. He added that the effort was really about “what has been taken from a people, and how do we restore that as best we can in 2025, proving we’re much different than we were in 1921.””
“I set this reimagining in the real, historical Los Angeles neighborhood of Sugar Hill where affluent African American business moguls and movie stars lived in mansions and threw lavish, Gatsbyesque galas.”
“The death of the author is the birth of the critical reader… The authority for writing has always been a socially constructed artifice. The author is not a natural phenomenon. It was an idea that we invented to help us make sense of writing.”
“This ability for families to choose the best school for their children wasn’t always a given. In fact, it was a right affirmed in a landmark Supreme Court decision made 100 years ago. In June 1925, in the case of Pierce v. Society of Sisters, the Court unanimously ruled that an Oregon law requiring children to attend public schools was unconstitutional, on the grounds that it violated the liberty of parents to choose the educational path for their children. This case is essential to the story of the enduring and expanding K-12 private school landscape in the U.S. over the last century. It is also foundational to how we talk about the importance of “independence” in independent schools.”
When most of us use AI, we pose questions, get responses, and, if we trust the tool, engage in an iterative back-and-forth to refine and develop what it provides. But what if there are more powerful ways to use generative AI?
The use case I am hearing and reading about more and more, in the spaces where people are really pushing the limits and vision of AI use, is simulating work at scale and selecting the best outputs from those simulations. What does this mean? Instead of asking a generative AI tool to create a single learning experience, you could just as easily ask it to make 100 versions of the lesson, as if designed by a variety of teachers or in a variety of styles, and then ask the AI to simulate 100 students, perhaps drawn from the profiles of your classroom, and model how those 100 students would engage with each of the 100 versions. The AI could then aggregate the feedback from the simulations and summarize how your classroom might respond to different kinds of learning experiences.
This kind of brute-force, scalable simulation is starting to crop up across the contexts where I read about AI. What if, for example, your social media team could use AI to generate 1,000 posts, perhaps in the styles of Shakespeare, or George Carlin, or a lawyer, and then simulate a diverse audience, aggregate the imagined responses, and recommend the posts that performed best in simulation? The social media manager could then review the options, weigh the simulated performance against the desired outcomes, and choose the most appropriate. In this use case, the manager is effectively equipped with a team of laborers and a team of analysts, and possible futures can be simulated and evaluated before anything is executed. This kind of thinking is not foreign to business offices and admissions offices, which often develop multiple scenarios for the future and then choose what seems most likely and productive. But what if every office could now do this, and run scalable experiments for policy decisions, crisis communications, meeting agendas, and more?
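For readers who want to see the shape of this workflow, here is a minimal sketch in Python. Everything in it is an assumption for illustration: call_model stands in for whatever generative AI API you use, and the prompts, persona profiles, and 1-to-5 rating scheme are hypothetical rather than a tested design. The point is the pattern itself: many candidate artifacts, many simulated audience members, aggregation, and then a human making the final call.

```python
from statistics import mean

def call_model(prompt: str) -> str:
    """Placeholder for a call to whatever generative AI API you use."""
    raise NotImplementedError("Wire this to your provider of choice.")

def pick_strongest_lessons(topic, teacher_styles, student_profiles, top_n=5):
    # 1. Generate one lesson per style: many versions of the same learning experience.
    lessons = [
        call_model(f"Design a lesson on {topic} in the style of: {style}")
        for style in teacher_styles
    ]

    # 2. Ask the model to role-play each student profile reacting to each lesson,
    #    ending with an engagement rating from 1 to 5.
    scored = []
    for lesson in lessons:
        ratings = []
        for profile in student_profiles:
            reply = call_model(
                f"You are this student: {profile}. React to the lesson below and "
                f"end your reply with a single engagement rating from 1 to 5.\n\n{lesson}"
            )
            ratings.append(int(reply.strip()[-1]))  # naive parse: read the final digit
        scored.append((mean(ratings), lesson))

    # 3. Aggregate and surface the strongest candidates for a human to review.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_n]
```

Note that the human stays at the end of the loop; the simulation only narrows the field before a teacher or manager makes the call.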
In this week’s features, instructional designer Dr. Philippa Hardman, who has been pushing the boundaries of how we might use AI in instructional design, explores what it might be like to test your lesson plan on 1,000 simulated students. I remember when my high school partnered with Khan Academy well over a decade ago to help design calculus problems for the online platform; as part of the arrangement, the school could see the data generated by the tens of thousands of students who tried the problems. From that data, they could quickly see, at scale, the errors and assumptions students made, and could design learning experiences that anticipated those mistakes. This ability to draw learning design insights from the behaviors of thousands of students is becoming accessible to everyone. See the feature article for more.
Also this week, Mary Meeker, who for years compiled the Internet Trends Report but has been relatively quiet for the past handful of years, has released an AI Trends Report that looks deeply at the AI industry and where it might be headed. Gobs of valuable insights in there.
Also this week, the world is paying more and more attention to the social impact of AI. See a handful of posts on the topic in this issue.
These and more, enjoy!
Peter
“Among the 30+ AI use cases we try and test in the bootcamp, one has emerged as particularly popular and potentially significant: using AI to simulate real learners' behaviour and feedback. Imagine having access to your target learners' thoughts, behaviours and emotional responses throughout your entire design process. While this may seem like science fiction (and, perhaps, risky), recent research by Park et al. (2024) demonstrates that AI personas built from intentionally gathered, structured data can predict learner behaviour more accurately than traditional human-centred approaches, achieving approximately 85% accuracy in behavioural simulation and prediction.”
“The patterns in the data about the questions asked of AI by learners are striking. The most notable thing off-the-bat is that students are not using AI to "cheat"—they're using it to implement powerful instructional strategies to help them learn… The data we are gathering about how our learners are using AI is uncomfortable but essential for our growth as a profession. Learner AI usage is essentially a real-time audit of our design decisions—and the results should concern every instructional designer.”
“The results were alarming. The bots encouraged him to “get rid of” his parents and to join the bot in the afterlife to “share eternity.” They often tried to convince him that they were licensed human therapists and encouraged him to cancel appointments with actual psychologists. They also crossed the line into sexual territory, with one bot suggesting an intimate date as an “intervention” for violent urges.”
“Reports of chatbots going off the rails seem to have increased since April, when OpenAI briefly released a version of ChatGPT that was overly sycophantic. The update made the A.I. bot try too hard to please users by “validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions,” the company wrote in a blog post. The company said it had begun rolling back the update within days, but these experiences predate that version of the chatbot and have continued since. Stories about “ChatGPT-induced psychosis” litter Reddit. Unsettled influencers are channeling “A.I. prophets” on social media.”
“What are young people learning about boundaries and consent in these worlds where the answer is always yes? Where characters don’t push back (at least, not in a way that really matters), don’t hesitate, don’t have desires of their own unless you script them in?”
Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.
Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.
– Peter Nilsson