#27 — Fostering human agency in Computing Education
Moving beyond AI fluency
Writing code by hand as a barrier against AI dependency
Writing code by hand (pen and paper) isn’t such a useless exercise, folks. It’s a natural barrier against AI dependency. An English professor I know already has his students write their exams and assignments by hand in class. This return to “the old ways” isn’t pedagogical romanticism—it’s a conscious response to a profound shift in learning dynamics.
Most professors I know aren’t trying to demonize AI but rather preserve what gets lost when students outsource their thinking, writing, and creativity. Many professors are adopting strategies like more in-person work, shared pair programming in class, or direct supervision of the creative process. Some admit they feel “on the defensive,” facing the challenge of detecting when a text was AI-generated using detection tools (which are also imperfect).
The AI Oral Exam Debate
An interesting discussion unfolded in the SIGCSE-MEMBERS community when an instructor shared their experience using AI voice agents to conduct oral exams at scale in an AI/ML Product Management course. Students disliked it, but the instructor considered it a successful assessment tool.
The case for AI oral exams centers on pragmatic/logistic benefits: eliminating examiner bias, enabling scalability, and offering adaptive questioning without the inconsistency of human graders. Geoffrey Challen (UIUC) frames this as replacing typed exams rather than human conversations, and plans to pilot the approach. Brian Harrington (University of Toronto) adds that verbal responses make it harder for students to pause and consult LLMs, revealing who actually did the work.
But the critics see something more troubling at stake. Fred Martin (UTSA) asks a foundational question: “Teaching and learning is relational—do we really want to be offloading conversations with our students to machines?” Nicholas Weaver (Berkeley) cuts to the technical problem: evaluation requires checking correctness, not plausibility.
Jérémie Lumbroso (Penn) goes deeper, pointing out that the “Fighting Fire with Fire” framing reveals an adversarial mindset incompatible with education. The original post, he notes, showed zero curiosity about students as individuals—treating them as “cargo rather than navigators.” This is Freire’s banking model of education, now using AI to double down on surveillance rather than facilitate genuine dialogue.
The debate surfaces a fundamental question: Should we use AI to scale assessment mechanics, or does this moment demand we reexamine what assessment is for?
Computing Education in 2026: Insights from Raspberry Pi Foundation’s Hello World Podcast
The latest Hello World podcast episode from the Raspberry Pi Foundation features host James Robinson discussing the state of computing education as we transition to 2026. Joining him are three colleagues from the Foundation—Bobby Whyte (Research Scientist), Laura James (Learning Manager), and Rehana Al-Soltane (Learning Manager)—along with international partners from Kenya, South Africa, and Greece. The episode reflects on 2025’s major developments in computing education, particularly highlighting the rise of data science and the growing importance of digital literacy. International colleagues share firsthand accounts of the barriers they face and the progress being made in their respective regions. Looking ahead to 2026, the discussion offers practical advice for educators and explores bold predictions about how AI, curriculum changes, and emerging technologies will shape classrooms this year. You can check out the episode here:
Learning should be a bit challenging. It should stretch your brain and your capacity. So if it’s not challenging and there’s this option where you can take a shortcut, you’re gonna take it. So how do you incentivize and direct people more towards this ‘it’s worth me struggling because it’s going to be valuable in the long run’? How do we incentivize that more invested thinking in their future?
🔍 Resources for Learning CS
→ CodeMender (Google DeepMind)
DeepMind’s CodeMender is an agent that detects and fixes vulnerabilities autonomously before they can be exploited. It uses Gemini to identify flaws, generate patches, and automatically validate solutions before final human review. It can handle large codebases (4M+ lines of code), combining static analysis, dynamic analysis, differential testing, and other methods to find issues, then using a debugger and a source-code browser to generate and validate fixes.
→ Gist of Go: Concurrency is out!
Learn Go concurrency from the ground up with 50 auto-tested exercises and tons of interactive examples. It's a full course + book in one.
→ Side Project Ideas Collection
There are quite a few interesting side project ideas here.
→ HN Software Engineering Blogs
Good examples on this HN post of high-quality software engineering blogs with real-world depth.
→ Career Story + Compensation Resource
Sometimes the best manager is the one who lets you go when they recognize you've outgrown what the organization can offer you. It's not a failure of the manager—it's a limitation of the system. Great career story. He also shared this excellent resource for comparing titles and compensation across big tech.
→ R Tutorials
Learn EDA in R with Mine Çetinkaya-Rundel using Positron IDE. The tutorial uses a real project analyzing how homework deadlines affect student performance and stress. You'll learn to clean, filter, and visualize data with ggplot2 while discovering Positron's workflow features. Great for mastering R data visualization. Andrew Gard, a professor of mathematics and computer science at Lake Forest College, also has tons of free videos with useful R tips.
→ AI-Integrated Programming Assignments
A rock-paper-scissors exercise for an intro programming class that integrates AI into the process without removing human agency. This other one using Quick, Draw! is also interesting for a machine learning and neural networks class. Excited to see more CS assignments in the coming months from The AI Pedagogy Project by metaLAB (at) Harvard.
→ Making Software (Shaders Chapter) + Nanda’s Interactive Blog Posts
I already mentioned it in a previous edition, but over the break I managed to catch up with Dan Hollick's Making Software book and read the chapter on shaders. This one is really beautiful. Fantastic stuff. Since I'm talking about past editions, I also shared one of Nanda Syahrasyad's interactive blog posts in another edition, but there are more on computer science and web development. You can read them all here.
→ Machine Learning Without Equations
Learn how machine learning actually works without drowning in equations. This book uses clear visuals to explain prediction, distance, evaluation, trees, and preprocessing, plus includes a complete end-to-end project. Free to read online.
🔍 Resources for Teaching CS
→ Netflix’s Responsible AI Guide
I'd encourage you to check out this guide Netflix published on the responsible use of AI in content production. I think it's a pretty sensible, responsible, and ethical approach to using AI within their platform and ecosystem. The application to teaching seems clear to me: it's a model for establishing when AI use is responsible and when it isn't. That said, it's important to provide concrete examples.
→ Cloudflare Database Incident
Real-world example of how a small change (database permissions) can trigger a massive failure. I’m thinking about CS classes where this example would fit: SE (error handling/fail-safes/circuit breakers/incident response), Distributed Systems (propagation/cascading failures/eventual consistency), Databases (ClickHouse/permissions/metadata), DevOps or SRE (monitoring/alerts/rollback mechanisms). I really liked Cloudflare’s transparency in sharing technical details about a problem that had real-world scale impact.
→ Every Version of Windows Explained
If you teach operating systems, this video is worth watching—but even if you don't, it's worth it just for the screenshots alone.
→ Mermaid to Animation Tool (Fanfa)
Turn your Mermaid diagrams into animations | Many parts of CS can be represented as graphs. Graphs are easier to understand, easier to digest, and help the professor focus the message when there are many technical details. I'm thinking of a DSA class, for example. This tool converts Mermaid code into animations with colors and dynamic arrows. Very nice, though somewhat limited in its free version. Little tip: you can ask AI to help with the Mermaid syntax, then paste it into Fanfa for a makeover.
→ MIT Press Algorithm Books
These books on Algorithms (optimization, decision making, and validation) from the MIT Press look quite interesting to read. Excellent for diving into algorithm theory and core ML algorithms. All of these are available as PDF downloads!
→ Stanford CS230 Deep Learning (Fall 2025)
Stanford has released its Fall 2025 CS230 Deep Learning course for free on YouTube, taught by Andrew Ng and Kian Katanforoosh. Here's the syllabus.
→ Flatlogic Open-Source Templates
Top quality premium templates for free | If you teach web development, Flatlogic just open-sourced 29 templates for React, Angular, Vue, React Native, and Bootstrap. Bold and generous move in the web development templates industry.
→ Intro CS Resources (Subgoals + AI/ML)
If you're teaching intro to CS this Spring, I suggest the subgoal exercises (Python or Java) in Runestone Academy or Learn CS Online. If you'd also like to give students some understanding of how AI systems work—what they can and can't do—and present some of the basic ideas of machine learning, large language models, and so on, check out my podcast episode with Danny Yellin, who's teaching a course on LLMs for software engineering this semester. For neural networks, check out this playlist from 3Blue1Brown. Leland Beck from San Diego State University also recommends using Alice to teach programming concepts to non-majors.
→ Software Architecture Patterns Playbook
If you teach Software Architecture or a related course, check out this free Architecture Patterns Playbook and download it if you haven't yet.
🦄 Quick bytes from the industry
→ Rong Yan's Career Journey
Rong Yan has an interesting career path. He did his CS undergrad in China (Tsinghua) and a PhD in CS at CMU. After a few years in an industry research lab at IBM, he pivoted to industry proper, holding various positions at Meta, Square, Snapchat, Verishop, HubSpot, and now HeyGen. The conversation is very focused on management, but two ideas stand out to me:
His decision to choose Meta (Facebook back then) instead of a CS faculty job or quant firms
After spending over 8 years in research (5 years PhD at CMU + 3 years at IBM Research), Rong faced a critical inflection point in 2009. He had three clear paths ahead:
Academia - CS Faculty position at a top school (he had interviews lined up)
Quantitative Finance - Offers from top quant trading firms
Software Engineering - Join a fast-growing tech company like Facebook
His decision-making framework was:
Impact at scale: As he puts it: “I want to go to places that can make engineering to be the first class citizens. I think in the financial world, engineering is always a second class... I can be a faculty, but being a faculty, I think the impact is smaller because you can only impact the scale of a school or maybe the community, but not the entire world.”
Engineering as first-class citizens: He sought environments where engineering was valued as a primary driver, not a support function. Facebook represented this perfectly - a place where engineers shaped product and company direction.
How he manages, even being at the highest management level, to get into the details of technical problems
This is perhaps the most distinctive aspect of Rong’s leadership philosophy, deeply influenced by Facebook’s culture:
The Facebook bootcamp revelation: When Rong joined Facebook, he experienced their 6-week bootcamp where everyone - including VP-level hires - sat alongside new grads finding bugs, fixing bugs, and writing pull requests. He recalls: “There was a VP-level hire sitting right next to me. And she’s just doing the same thing as what I was doing... She was doing that for six weeks. That actually shocked me because I come from IBM. IBM’s VP never coded anymore.” This experience was “deeply planted in my heart” and shaped his entire approach to technical leadership.
Current technical practice: As a CTO, Rong maintains remarkable technical involvement:
Writes 2-3 pull requests every week
Reviews code regularly
Still writes actual production code
His rationale is clear: “I’m not really good at speaking up if I don’t know the details. I want to make sure that I understand the details so that I know I’m not making things up so that I can make the best strategic decisions for the teams.”
The prioritization framework: How does he balance this with executive responsibilities? His answer is elegantly simple:
“At the beginning of every single week, I will ask myself, what are the top three things I need to achieve? Only focus on those top three things, and then everything else is less important.”
Some weeks, one of those top three things is development and getting into details. The key insight: “I’m not saying that every single week you need to do the same things. But every single week, you need to have a theme for your work.”
First principles thinking: He constantly recalibrates against his North Star goal, understanding how each weekly theme delivers maximum impact. As he emphasizes: “I think the best people, not really just spending more time, but they’re really good at allocating their time. And by understanding the priority of each direction, and spending the right time at each priority.”
Philosophy on technical depth: Rong believes “everyone who works on engineering needs to be technical” - this is non-negotiable. Being technical and detail-driven is critical for making sound strategic decisions. He wants to ensure he’s not “making things up” but rather making informed decisions grounded in deep understanding of the systems and problems his teams face.
→ Steve Yegge’s take on vibe coding
Lots of interesting insights here:
Next year the tools are going to get much better at decomposing tasks and assigning them to the right-sized model for cost optimization.
Even though you no longer have to write code, you still have to learn a massive amount to be an effective software engineer in the new world.
→ Bret Taylor's Advice for the Next Generation of CS Students
Why do I think a CS degree is stronger than ever with AI? A CS degree makes a lot of sense because it provides you with a formal, solid foundation that's key in the job market, especially with AI. Those fundamentals are what help you make decisions based on essential properties that don't change as much as tools do. What matters now isn't so much syntax, but how we ask for things and how we interact with the LLM. The difference lies in those questions and in the context we give the LLM (conditions, technical requirements, etc.).
And not just in the input but in the output—that is, how to interpret the response. AI doesn’t replace experience or human judgment; it’s a great ally ONLY when used on top of a solid knowledge base. It’s easy to become dependent or to overlook subtle errors that are hard to detect. Therefore, what you learn in your degree will help you leverage AI to review code, automate tests, write documentation, and build pipelines with a clear understanding of the environment so you don’t settle for partial or fragile solutions. With this combination of solid foundation plus real experience, you’ll be able to benefit much more from AI.
Here is what Bret Taylor has to say on the subject:
I still think it’s extremely valuable to study computer science. I say that because computer science is more than coding. If you understand things like big O notation or complexity theory or study algorithms and why a randomized algorithm works and why two algorithms with the same big O complexity—one can in practice perform better than others—and why a cache miss matters and just all these little details, there’s a lot more to computer science than writing the code.
I believe that the act of creating software is going to transform from typing into a terminal or typing into Visual Studio Code to operating a code-generating machine. I think that is the future of creating software, but operating a code-generating machine requires systems thinking. And computer science—there are other disciplines as well—but computer science is a wonderful major to learn systems thinking.
And at the end of the day, AI will facilitate creating this software. We may do a lot more in the next few years than we can even imagine, but your job as the operator of that code-generating machine is to make a product or to solve a problem, and you really need to have great systems thinking. You’re going to be managing this machine that’s doing a lot of the tedious work of making the button or connecting to the network.
But as you’re thinking about the intersection of technology and a business problem, you’re trying to affect a system that will solve that problem at scale for your customers. And that systems thinking is always the hardest part of creating products.
I think that whether AI is writing code or doing the design or doing all these other things, you need to learn how to have a system in your head. You need to understand the basics of what’s hard and what’s easy and what’s possible and what’s impossible. And AI can help you do that too, by the way. But I do think that’s a really useful skill.
I think computer science, especially the foundations, will continue to be the foundations of how we build software. And understanding that—when you’re interacting with something that’s smarter than you and producing code that you may not completely understand—how you constrain it and how you get it to produce these outcomes, I think it will require a lot of sophistication, actually.
In this episode, Bret also offered a perspective that balances the challenges educators face today with optimism about where we’re headed. I do my best to summarize it in my own words:
Bret Taylor has a thoughtful perspective on AI in education that balances optimism with awareness of current challenges. He believes we’re in an awkward transitional phase, much like when graphing calculators were first introduced to AP calculus exams. The education system hasn’t yet adapted to a world where students have superintelligence in their pockets, and many traditional evaluation methods are broken by LLMs. Teachers are struggling because technology is moving faster than educational institutions can keep up.
Despite these challenges, Taylor is fundamentally optimistic about AI’s potential as perhaps the most effective educational tool in history. He sees it as a democratizing force that gives every child access to a personalized tutor capable of teaching in whatever style works best for them, whether visual, audio, or reading-based. Students no longer need to be wealthy to afford private tutoring, and those who excel beyond their school’s curriculum can now access advanced material that might not be available otherwise. He envisions AI amplifying agency for motivated learners, giving them the equivalent of the best combination of every teacher they’ve ever had.
Taylor actively encourages his own kids to integrate AI into their learning. His daughter uses ChatGPT to explain Shakespeare, take practice quizzes before tests, and his oldest learned to code by consulting ChatGPT whenever she had questions. He’s intentional about teaching them to use these tools constructively, viewing it as an essential skill for their future.
However, he acknowledges a darker possibility: these same tools can enable students who want to avoid learning. The next few years will likely be bumpy as parents, teachers, and the education system figure out how to navigate this new landscape. Yet Taylor remains confident that just as education adapted to calculators, it will adapt to AI through redesigned homework, testing, and classroom approaches.
On the question of phones for children, Taylor draws a sharp distinction. He doesn’t think mobile phones are great for kids and advocates waiting a long time before giving them smartphones. He sees phones as addictive devices with push notifications that don’t belong in schools. But he views ChatGPT completely differently, comparing it to Google search rather than social media. It’s a utility for learning, not an addictive entertainment platform. In his view, parents wouldn’t typically ask “when should I let my kid use Google search?” because it’s simply a different category of tool. For his own kids, AI access comes through computers at desks rather than pocket devices, maintaining that boundary between learning tools and potentially problematic mobile technology.
🌎 Computing Education Community
Kristin (CS-Ed Podcast) has shared many resources on AI and teaching, and now she's finally created a podcast episode about it: some of the variability has gone down, more people seem ready to listen, and the paper the episode covers is an important topic we need to discuss more.
Bucknell University’s CS department is hiring an open-rank professional track faculty member (starting Fall 2026) to teach core undergraduate courses like Data Structures, Software Engineering, and Computing Ethics.
Utrecht University is hiring a PhD candidate in Requirements Engineering and Specification-Driven Development.
Do you know any really amazing undergraduates who would like to spend their summer in Raleigh doing CS and AI education research? Please encourage them to apply to our REU Site: Research Experiences in Innovative Computer Science and Artificial Intelligence Education.
Miranda Wei is recruiting PhD students starting Fall 2026 to work on human-centered security and privacy, with focus on sociotechnical safety, online abuse, and social media studies.
CSEE&T 2026 will be held at the University of Florence, Italy, July 20-22. The Call for Papers is now available.
Augustana College is hiring a Professional Faculty position in Computer Science (starting August 2026) - primarily teaching-focused. Application review begins January 15.
Dan (deblasio@CS.CMU.EDU) is looking for examples of coding assignments for non-CS majors (specifically Biology majors) that are designed with AI coding assistants in mind, as his current Jupyter notebook assignments have become too simple with GenAI tools.
SIGCSE Tutorial 104 on FPGAs/configurable hardware and RISC-V (Wed, Feb 18, 7-10pm) - early registrants receive free UPduino FPGA platforms. Contact Bill Siever (bsiever@gmail.com) with questions.
ITiCSE 2026 (Madrid, July 13-15) is seeking additional PC members for full papers, posters, and tips/techniques. Review periods: Jan 12-Feb 18 for full papers, Mar 18-Apr 8 for posters/tips. Sign up!
The ‘Student Participation in Team-based Software Projects’ SIGCSE Affiliated Event isn’t listed in the main SIGCSE registration—you need to register separately. It’s a half-day workshop (Wed, Feb 18, 1-5 PM, Room 105) for instructors teaching team-based software development. Focus areas include HFOSS projects and courses using real clients, large codebases, and industry tools like GitHub. The format includes short presentations and discussions to share teaching materials and approaches.
Susan Rodger is selling Notable Women in Computing playing card decks at SIGCSE TS 2026. The cards (54 different women, one per card) come in two sizes: regular ($5) and large ($10). They can only be purchased during conference registration (through Feb 7) and must be picked up in person at the exhibit hall (Thursday-Saturday, or Wednesday with Susan directly).
CCSC-Midsouth 2026 will be held April 10-11 at Lipscomb University in Nashville, TN. The submission deadline for papers, posters, workshops, and tutorials is January 12, 2026, and there's a student programming contest (teams of up to 3) with registration due March 27, 2026. Reviewers are also needed and should sign up even if they've reviewed before.
🤔 Thought(s) For You to Ponder…
It’s ironic—the parallel between AI and what we’re losing with this new way of life. The dehumanization of not cooking (kitchens are more than just a physical space: they’re spaces for socializing; they’re where we connect, where we care for each other by preparing the food we’ve bought, where we learn and pass down knowledge…). It’s culture, it’s memory…
I found this interview with Rob Riemen in Ethic very thought-provoking:
The greatness of Velázquez’s painting is very different from Trump’s “greatness”; spending a night with any sick person, in the hospital or at home, is a different kind of greatness from the “greatness” of the world’s richest man. Why is Elon Musk’s “greatness” so fascinating right now? Because he’s the richest man in the world. We’re obsessed with that kind of greatness, a false greatness because it has no substance or quality—it’s only quantity. It points to a type of power that is ephemeral, that won’t endure, unlike Bach’s music, for example.
But one can maintain their dignity, act well, not thinking of their own benefit but of the common good. That’s why the Muses are important, as important as language and literature, which allow us to know a different world and encourage us to champion dignity. Today’s utopians are tomorrow’s realists—this is already known. We shouldn’t be impressed or depressed by what happens around us. Acting correctly, each of us in our own sphere, is already a complete triumph.
We already know that screens act like drugs—we know it, but we keep using them... And the worst part: we give that drug to children! We are free, I insist, to change the world, for example, with our use of screens, but we can’t just believe it; we must act accordingly.
What use is love? What exchange value does the woman or man we love have? None. The fundamental things in life are not useful; they have nothing to do with utility. Think about what gives meaning to our life—friendship, for example. The moment it becomes instrumental, it loses its inherent value and becomes a tool.
This New York Times article got me thinking: it’s clear that technology is changing our capacity for reading, reasoning, and concentration. And it also seems clear—though we still need more studies and evidence—that maintaining intact cognitive abilities is on track to become a luxury.
I’ve been catching up on the Spanish podcast Monos estocásticos. It’s where I usually go when I want to form an opinion about AI news. I sometimes find it hard to process so much information. They explain dense topics in simple ways.
Recently they interviewed Tíscar Lara, who recently published this book on AI and education, and she said two things I’d like to share with you:
On agency:
We need to be aware of that basic scaffolding so we have the judgment not to skip the process.
On humanities and technology:
Technology itself also builds culture… the world of the humanities also gives us tools for knowledge and analysis and for pausing to think and doing this critical reflection, which I think is also good for technology development.
Going back to issue #1 over the holidays, I was thinking about the concept of ‘flow state’—how there’s something intrinsically human and satisfying about doing things yourself, even when it involves effort, frustration, and mediocre results.
Facing a challenge, testing our skills, seeking out information, learning new things, solving problems, connecting our actions with the outcome... these are activities that make us feel good, that lead us to what Mihaly Csikszentmihalyi called the ‘flow state.’
If we think only about the outcome, there’s an easier, cheaper, higher-quality external alternative for all of it. Whether it’s a specialist we can turn to, an industrial process that makes everything perfectly uniform and hyperproductive, or an AI that automates the process... it’s really hard for ‘do it yourself’ to make economic sense.
And it’s a strange feeling when what makes human sense doesn’t make economic sense, and what makes economic sense doesn’t make human sense. So where does that leave what makes us human? Is there room for resistance? At what cost?
Because reading isn’t running. Reading is stopping. Reading isn’t about speed—it’s about digestion. About engaging with what you read, chewing on it, thinking as you go. Speed-reading is like running through a museum without looking at the paintings. You can count it as an accomplishment, but what’s the point? Reading is a conversation with someone who’s probably no longer alive but still has something to tell you. And that conversation, like good dinners with friends, can’t be rushed. There’s only one real trick to reading better: read with intention. Choose what you read carefully, give it time, go back if you need to, let ideas settle. It’s not sexy, it’s not fast, it’s not scalable. But it works.
Opinionated article, but very solid and from a well-respected person in the field. I especially liked his point about Claude Code.
Technology that wants nothing from you. Single-purpose technology. That doesn’t try to keep you engaged longer or get you to do more things—it just does one thing, and does it well.
I hope and expect that models like the University of Dallas, which is making a strong commitment to classical liberal education, will become more and more common in Spain and Europe. UNAV and UFV are clear signs that something is moving in that direction. I see it in the students—they have a much broader vision. I wish there were more CS majors with liberal arts minors. I think this profile is more necessary than ever.
Great stuff from Marc Watkins on centering education around human agency rather than AI fluency, embracing struggle, and building student awareness. I want to put this into practice for myself in 2026, in a more conscious way.
Human connection remains a powerful competitive advantage. Knowing there’s a human being behind the work fundamentally changes how we experience it. Put another way, behind any creation, we don’t just see the result—we see the effort, the intentionality, a window into someone else’s internal experience.
Copilot adoption is a concern internally at Microsoft. The problem isn’t just technical, but also cultural and educational. I know Microsoft is working on training so that many clients (especially companies with more licenses) better understand all the new features.
This post argues that attention is a choice that shapes our lives, and offers 14 practical habits to be more intentional about what we focus on. It’s been a long process, but I’m glad to know I’ve been cultivating most of these habits over time, and the day feels much more productive. I especially liked point 2: choosing content with substance, that’s thoughtful, inspiring—the kind that leaves a deeper, enduring impression and changes how we think and feel. Anything that can nourish the mind rather than distract it.
You’re not competing with other startups; you’re competing with the rate of capability improvement in the foundation models. For example, Claude Code now has native LSP support, giving it IDE-level code understanding that makes many third-party code intelligence tools obsolete.
Oxide encourages LLM use but prioritizes human responsibility—employees remain accountable for all LLM-generated work. LLMs excel at reading comprehension, research (with source verification), editing polished drafts, and code review. They’re problematic as writers because LLM prose undermines authenticity and trust. For coding, they’re useful for experimental work but production code requires human judgment and careful review.
Interesting! The bottleneck has shifted from writing code to proving it works. But as Zarar Siddiqi says in his highly recommended Substack, the good news is that we already have tools popping up that make this easier.
Andy Pavlo’s annual databases year-in-review is back! PostgreSQL dominated 2025 with major acquisitions (Databricks bought Neon for $1B, Snowflake bought CrunchyData for $250M), every DBMS added Anthropic’s Model Context Protocol support, MongoDB sued FerretDB over API compatibility, five new file formats challenged Parquet’s dominance, and Larry Ellison briefly became the richest person in history thanks to Oracle’s AI datacenter deals.
I completely agree with Mattias: AI has given me back the productivity I lost in the past to so many specialized domains (the frontend had become absurdly complex). The upside is that if you have experience, you can now go from idea to execution in days, tell good code from bad by drawing on that experience, and be more productive. That mental space for creativity is what AI has changed: we no longer have to be saturated with technical details and can instead focus on building, which is what I enjoy about web development. It’s much more fun.
📌 Research Corner
→ Alejandro Piad Profile + Substack Reflection
I really like Alejandro Piad's profile (University of Havana). He embodies exactly the kind of CS educator I aspire to be: someone with a public outreach dimension democratizing knowledge through Substack, a tenured academic side conducting research and teaching, an author's portfolio spanning both popular and technical books, and a community-building role (AI Cuba). Looking at the raw statistics on my Substack, the post “Don't Turn Your Brain Off” stands out, receiving far more attention than the others. Somebody reposted it on LinkedIn, and it picked up like crazy! In the new year, I’d like to keep to my weekly post schedule. I want to keep writing about computing education, explaining why it matters and what we can do about it, and to keep exploring the intersection of learning and technology. You’ll certainly see more research, but also more on how to make CS learning and teaching experiences more effective, with or without AI.
→ PhD Exploration + Inspiring CS Academics Podcast
Although my focus is on computing education and my research centers around AI in programming and computing education, I like to explore other fields of computer science so I don’t lose touch with other academic worlds. I want to work on problems I find interesting, and I’m always very broad in what excites me, so I generally like to collaborate. I found this podcast with some very inspiring CS academics. I particularly liked this episode with Michael J. Freedman and this one with Jelani Nelson.
→ ACM Webinar on GenAI Ethics in CS Ed
After watching the ACM webinar on this long collaborative paper among several universities on the Ethical and Societal Impacts of GenAI in Computing Education, my main takeaway is that the study validates the ESI (Ethical and Societal Impacts) Framework for analyzing GenAI ethical dilemmas in CS education from multiple stakeholder perspectives. It identifies conflict archetypes (such as automation vs. human augmentation, or cognitive offloading vs. critical partnership) that help navigate ethical decisions and evaluate trade-offs. Here are the slide screenshots in case you want to dive deeper. Listening to Tony Clear, Janice Mak, and Tingting Zhu was a pleasure, and I learned a lot about the whole process. As a first-year PhD student, this helped me organize my thoughts about presenting work and considering all the elements.
→ Brain Activity Study (Nataliya Kosmyna, MIT)
Because of my research area, I read a lot of papers about how AI is impacting universities. I have some context since I work at one, but I don’t have a clear picture; I’m learning as I go. The integration of tools like ChatGPT in education is transforming the learning experience. Where students once had to grapple with complex texts and solve difficult problems as an essential part of the learning process, many of those struggles are now replaced by instant AI-generated answers, which are hard to detect and therefore hard to penalize. This trend, driven by our cultural preference for instant gratification (short videos, quick answers), poses challenges for developing critical and deep-thinking skills, and forces us to rethink how we prepare new generations for a world increasingly mediated by AI. As this paper points out, the cognitive debt is real.
And speaking of cognitive debt, this study by Nataliya Kosmyna from MIT Media Lab reveals that using chatbots like GPT-4o to write essays significantly reduces brain activity compared to working independently or using a search engine. Key findings from the paper:
- Up to 55% less neural connectivity compared to the group that wrote without assistance, and 34–48% less for those who only used search engines.
- AI-assisted students also showed lower information retention, a weaker sense of authorship, and worse performance when they later had to write without assistance, demonstrating that relying on AI from the start can undermine deep learning.
The study emphasizes that while AI offers opportunities to enhance research and thinking, its premature and unreflective use can weaken intellectual development, which is why it recommends incorporating AI support only after building a solid foundation of individual knowledge. That’s why I’m concerned about its use at early ages—that’s where I believe the greatest impact is.
→ Derek Bruff Resources
I’ve been following Derek Bruff (UVA) for a few months now, and he produces interesting learning & teaching material. He normally doesn’t speak from a technical perspective, but rather from an educational one. For example, here and here he focuses on some of the pedagogical reasons an educator might want to design a custom AI chatbot. I’ve been thinking about ideas for student engagement in the last few weeks—it is a topic I want to research more.
During Christmas, I also listened to him on the Grading Podcast talking about his history with alternative grading practices, faculty development on grading, AI-aware teaching, and the Alternative Grading Institute! You can listen to their conversation here.
🪁 Leisure Line
I had dinner here with a friend. It was so good and tasty 🤤 In case you want to add it to your list, we went to the Vintage Park location.
It was Aristotle who argued that alongside intellectual work—which elevates the person—every free man should learn a manual trade as a way to develop their different dimensions. Christmas was a perfect time to practice with more hands-on work like…. It opened me up to experiencing reality in a way different from the merely virtual world of screens—more truthful, more real. That contact with the real allowed me to develop in dimensions that our digital, virtual, screen-based world overlooks.
This year I had to open family gifts remotely, which was also nice. I like to collect sweet details throughout the year on post-its in my notebook. These were this year’s Christmas gifts.

Unwrapped these presents from under the tree on the 25th! Absolutely happy with my Williams Racing Team polo and cap – what a fantastic season it’s been for Williams finishing P5 in the constructors’ championship! The UH gear is also super cool – Go Coogs! And oh wow, a Texans tee (does this mean I’m supposed to start following pro football now? That’s a whole other sport to keep up with… noo 😂). Also got some other great stuff: a Car Phone Mount, candies, a new mouse, a desk mat, cufflinks, a streaming LED light, and some Happy Socks. Feeling very grateful! The most random gift of the day – though it wasn’t for me – goes to… a turtle!
📖📺🍿 Currently Reading, Watching, Listening
After listening to a recent episode of El hilo about data centers in Chile, I became interested in learning more about the topic, which led me to ‘Colossus,’ a two-episode special from the Search Engine podcast. Both episodes explain the genesis of these centers in various rural regions of the US, how they’ve become indispensable hubs in what’s known as the ‘AI race,’ and the reaction of residents living in these areas that are now filled with buildings created specifically to power AI tools.
Watched The Thinking Game. It’s about the early days of Google DeepMind and gives a great perspective on how AI has evolved. It already has over 200M views on YouTube. I couldn’t agree more with this tweet.
While fantasy isn’t my thing—I usually prefer essays or other types of novels—I discovered these videos from Brandon Sanderson’s writing course at BYU. I can’t recommend them enough. Not only do you hear Brandon deconstruct the craft of writing, but his genuine passion for story and helping people become writers shines through.
It’s surreal. But I loved this musical short with a pretty peculiar protagonist. Life is learning to coordinate differences without erasing them.
🌐 Cool things from around the internet
🔗 into.md — If you’re trying to pass any website to an LLM, you should 100% be using this tool.
🔗 a cute(-ish) koala — Found in Cassidy Williams’ newsletter: Alvaro Montoro live coding some CSS Art. All drawn with HTML and CSS.
🔗 VoiceInk — Great voice dictation tool for Mac.
🔗 Try X in Y minutes — It’s not a bad idea to rethink which of this family of languages makes the best “better C,” as Anton proposes here. If you decide to go for it, he has created a product that generates an interactive guide for it. See examples here.
As always, if you enjoy Computing Education Things, please like, comment, or share this post! You can also support this work through Buy Me A Coffee.