The Educator's Notebook

A weekly collection of education-related news from around the web.

Educator’s Notebook #450 (June 16, 2024)


  • An excellent issue.

    In the features, capping growing concern about social media and teens, the US Surgeon General recommends Congress approve a warning label on social media.  This is the highest-profile effort yet to raise concerns about the correlation between the rise of social media and teen mental health.  Also find a helpful primer on restorative justice in schools in the second feature piece.

    Also this week, find a healthy bit of writing about healthy use of tech in schools.  Tech writing has recently focused enormously on AI, but in this issue, find a small surge in writing about other tech use cases.

    In keeping with the pace of the summer, enjoy the two articles in PD on ways to think about growth over the break.  Time to read!

    Last, these past several weeks have seen continued rapid movement in the AI space. See the AI section at the end for more.



    • New York Times
    • 06/17/24

    “The United States Surgeon General, Dr. Vivek Murthy, announced on Monday that he would push for a warning label on social media platforms advising parents that using the platforms might damage adolescents’ mental health.”

    • EdWeek
    • 05/31/24

    “Even as educators see an uptick in misbehavior, nearly half of teachers and administrators say their schools are using restorative justice practices more now than five years ago, according to a recent EdWeek Research Center survey. But the purest version of the restorative justice framework is hard to come by, said Allison Payne, a professor at Villanova University in Pennsylvania who studies the practice… Here’s an overview of what restorative justice is and how it fits into the larger school system.”



    • EdWeek
    • 05/30/24

    “As they choose a presidential candidate in the November election, California voters may also have an unusual opportunity to decide whether the state should add a new course to its high school graduation requirements. While supporters say the course is urgently needed, critics say the unusual step of putting curriculum-related issues directly to voters could prompt more such proposals—including the hot-button issues that have plagued many other states over the past three years.”


    • The Verge
    • 06/12/24

    “I’d wager that if you wanted to see the most exciting drama happening at the MGM on this Friday night, you’d have to walk through the casino and look for the small sign advertising something called The Active Cell. This is the site of the play-in round for the Excel World Championship, and it starts in five minutes. There are 27 people here to take part in this event (28 registered, but one evidently chickened out before we started), which will send its top eight finishers to tomorrow night’s finals. There, one person will be crowned the Excel World Champion, which comes with a trophy and a championship belt and the ability to spend the next 12 months bragging about being officially the world’s best spreadsheeter. Eight people have already qualified for the finals; some of today’s 27 contestants lost in those qualifying rounds, others just showed up last-minute in hopes of a comeback.”

    • New York Times
    • 06/03/24

    “Benjamin B. Bolger has been to Harvard and Stanford and Yale. He has been to Columbia and Dartmouth and Oxford, and Cambridge, Brandeis and Brown. Over all, Bolger has 14 advanced degrees, plus an associate’s and a bachelor’s. Some of Bolger’s degrees took many years to complete, such as a doctorate from the Harvard Graduate School of Design. Others have required rather less commitment: low-residency M.F.A.s from Ashland University and the University of Tampa, for example.”

    • Siena Awards
    • 06/01/24

A.I. Update


  • The featured articles in this issue focus on the detailed use of AI in individual feedback and tutoring situations.  These will, inevitably, be among the most common use cases, and both research and the industry are progressing quickly.

    Also this week, see the research from the Walton Family Foundation on AI adoption by students and teachers.  This is the most recent research available, and it shows a rapid spike in use in education contexts.  It’s time to get moving, folks.

    Last, find helpful writing by Anthropic on AI safety mechanisms (Red Teaming) and how LLM designers should think about personality.

    These and more, enjoy!


    • AI and Education
    • 06/03/24

    “This piece primarily breaks down how Google’s LearnLM was built, and takes a quick look at Microsoft/Khan Academy’s Phi-3 and OpenAI’s ChatGPT Edu as alternative approaches to building an “education model” (not necessarily a new model in the latter case, but we’ll explain). Thanks to the public release of their 86-page research paper, we have the most comprehensive view into LearnLM.”

    • Hechinger Report
    • 06/03/24

    “A team of researchers compared AI with human feedback on 200 history essays written by students in grades 6 through 12 and they determined that human feedback was generally a bit better. Humans had a particular advantage in advising students on something to work on that would be appropriate for where they are in their development as a writer. But ChatGPT came close… Most of these humans had taught writing for more than 15 years or they had considerable experience in writing instruction. All received three hours of training for this exercise plus extra pay for providing the feedback.”


    • Marc Watkins
    • 06/14/24
    • Harvard Business School
    • 06/13/24
    • Chronicle of Higher Ed
    • 06/13/24

    “Instead, he will assign less writing and less deep reading, because students’ work in that area is now difficult to assess. He will rely more on lectures and in-class, handwritten exams. “It’s going to force everybody to the lowest common denominator.” But he refuses, he says, “to waste a whole bunch of time just grading robots.””

    • CNBC
    • 06/11/24

    “Maybe most notable, the reviews from students are broadly positive. Seventy percent of K-12 students had a favorable view of AI chatbots. Among undergraduates, that rises to 75%. And among parents, 68% held favorable views of AI chatbots.”

    • Impact Research
    • 06/03/24

    “Americans have broadly positive views of AI and are using it at work and in their daily lives. 59% of teachers, 70% of K-12 students, 75% of undergraduate students, and 68% of K-12 parents have favorable views of AI chatbots. Around three-quarters in each group report having used AI chatbots either personally or at school/work. 46% of teachers, 51% of parents, 48% of K-12 students, and 46% of undergraduate students report using AI chatbots once a week or more.”

    • Impact Research
    • 05/31/24

    “The sample includes a total of N=1003 teachers, N=1001 K-12 students, N=1003 undergrads, and N=1000 parents. The teachers, K-12 students, and parents' samples were weighted to align with demographic estimates from the U.S. Census and undergraduates with demographic estimates from the National Center for Education Statistics.”

    • Google
    • 05/14/24

    “Recent advances in generative AI (gen AI) have created excitement about the potential of new technologies to offer a personal tutor for every learner and a teaching assistant for every teacher. The full extent of this dream, however, has not yet materialised. We argue that this is primarily due to the difficulties with verbalising pedagogical intuitions into gen AI prompts and the lack of good evaluation practices, reinforced by the challenges in defining excellent pedagogy. Here we present our work collaborating with learners and educators to translate high level principles from learning science into a pragmatic set of seven diverse educational benchmarks, spanning quantitative, qualitative, automatic and human evaluations.”


    • Anthropic
    • 06/12/24
    • Marc Watkins
    • 06/07/24

    “If generative AI is truly good at one thing, it is tearing down the facade of a given practice and revealing how shoddy the foundation was to begin with. ChatGPT made many in education confront writing practices that produced rote outcomes. AI detection has done the same with how little work we put into teaching students what it means to use these new tools ethically. The last 18 months should send a clear message that teaching ethical behavior isn’t something we should automate.”




    • New York Times
    • 06/11/24

    ““Society needs concise ways to talk about modern A.I. — both the positives and the negatives,” he said. “‘Ignore that email, it’s spam,’ and ‘Ignore that article, it’s slop,’ are both useful lessons.””

    • Benn Stancil
    • 06/07/24

    “Is it the end of something? Is some era dead? Are we getting eaten? Is it time to build? I don’t know. The ocean is changing too quickly; the weather is too unfamiliar. The moment betrays buzzy headlines and showy predictions. Whatever gets made now will likely be upended two or three times before the sea settles down. The best reason to build a boat today may not be to make something that endures, but simply to be out on the ocean, for whenever it finds its rhythm again. Because the water is convulsing in weird ways, and you don’t fight the sea.”


Every week I send out articles I encounter from around the web. Subject matter ranges from hard knowledge about teaching to research about creativity and cognitive science to stories from other industries that, by analogy, inform what we do as educators. This breadth helps us see our work in new ways.

Readers include teachers, school leaders, university overseers, conference organizers, think tank workers, startup founders, nonprofit leaders, and people who are simply interested in what’s happening in education. They say it helps them keep tabs on what matters most in the conversation surrounding schools, teaching, learning, and more.

Peter Nilsson

