#38 — Computing is still a great field
As software explodes and execution becomes cheap, depth becomes the edge
Reflections

The tech job market and the depth imperative
According to the Federal Reserve Bank of New York, new computer science graduates have the second-highest starting salaries of any major (just behind computer engineering graduates, who earn the most). Additionally, CS and CE graduates have relatively low underemployment, meaning most are working in their field. Of course, the job market has tightened, and new graduates still need to put in effort to secure a good job. But don’t believe the hype. If you enjoy computing, it remains a great field.
Part of why I believe this holds even as AI tools accelerate is something Karpathy raises in his interview with Sarah Guo on the No Priors podcast: the Jevons paradox. The intuitive fear is that cheaper software production means fewer software engineers. But historically, when something becomes cheaper, demand for it tends to increase, not decrease. His example is the ATM. When ATMs arrived, many feared bank tellers would disappear. Instead, ATMs made bank branches cheaper to operate, so more branches opened, and the number of tellers actually grew. Karpathy applies the same logic to software: if producing it becomes dramatically cheaper, a huge amount of latent demand that was previously blocked by cost and scarcity gets unlocked. Code becomes ephemeral, modifiable, personalized, something you don’t have to accept as a fixed product but can reshape to your needs.
That same interview is also where he makes the point that software production is growing at an exponential rate. Software is becoming easier to create, but that doesn’t mean everyone becomes a computer scientist, because clarity is harder to achieve. When execution becomes cheap and software explodes, depth becomes the edge. Slowing down is not a weakness; it’s positioning.
That’s why I believe a deep understanding of how systems work under the hood will continue to matter. Beyond knowing the fundamentals, that depth is what will really make the difference in computing education.
A phase shift in how we interact with computers
Another topic Andrej brings up in the interview is what he calls a phase shift in how we interact with computers. The idea is that being able to delegate so much is leading to a kind of “AI psychosis” driven by endless possibilities. I agree that the workflow for software engineers has changed dramatically, and this shift has been gradually unfolding since last December. I really liked the parallel he drew with his PhD years:
I actually experienced something similar when I was a PhD student. You’d feel nervous if your GPUs weren’t running—like you had access to that compute power but weren’t maximizing the available FLOPs. But now it’s not about FLOPs, it’s about tokens. So the question becomes: what is your token throughput, and how much of it do you control?
Despite this major shift in workflow, he also acknowledges that models excel at tasks with clear, verifiable metrics (writing more efficient CUDA kernels, for example) where evaluation can be automated, but they struggle with subjective or ambiguous tasks that lack reliable feedback signals. This is what he calls the jaggedness of current models. Interacting with them feels like talking simultaneously to an extremely brilliant PhD student with decades of systems programming experience and a ten-year-old. The same model that will autonomously work for hours on a complex codebase will confidently produce a stale joke it has been telling since 2020. That inconsistency is not a bug that will be patched in the next release. It is structural. The reason is reinforcement learning. Models improve reliably in domains where evaluation is cheap and objective. But anything that lacks a clear feedback signal (nuance, tone, knowing when to ask a clarifying question) sits outside the optimization loop and stagnates. The model gets smarter at what is being measured and stays frozen everywhere else.
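The verifiability gap Karpathy describes can be made concrete with a toy sketch (my own illustration, not from the interview): a task with an automatic checker yields a cheap scalar reward that an RL loop can optimize against, while a subjective task yields no comparable signal.

```python
# Toy contrast between a verifiable task and a subjective one.
# (Hypothetical function names; nothing here comes from a real RL library.)

def reward_sort_task(candidate_fn) -> float:
    """Automated check: fraction of hidden test cases the candidate passes."""
    cases = [([3, 1, 2], [1, 2, 3]), ([], []), ([5, 5, 1], [1, 5, 5])]
    passed = sum(candidate_fn(list(x)) == y for x, y in cases)
    return passed / len(cases)

def reward_joke_task(candidate_text: str):
    """No cheap objective check exists: 'funny' needs a human judge or a
    noisy learned proxy, so it sits outside the reliable optimization loop."""
    return None  # undefined without human judgment

print(reward_sort_task(sorted))          # a perfect candidate scores 1.0
print(reward_joke_task("a stale joke"))  # no signal to optimize
```

The asymmetry is the whole point: capabilities with a `reward_sort_task`-shaped verifier improve steadily, while everything on the `reward_joke_task` side stagnates.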
Frontier labs and the centralization problem
Karpathy is worried about centralization. The dominant approach at frontier labs is to build a single general model and compress as much capability as possible into its parameters, concentrating both capability and influence in a small number of institutions. His preferred dynamic is one where frontier closed models push the boundary of capability while open source models, somewhat behind, provide a common working layer that no single entity fully controls. He calls the current situation a kind of accidental good outcome.
He expects the monoculture to break eventually, not because the labs will choose differently, but because the economics and the science will push toward speciation: smaller and more efficient models with a strong cognitive core that specialize for particular tasks or domains. He draws the analogy to the animal kingdom, where a huge diversity of brain architectures has evolved to fill different niches. The oracle that knows everything will give way, gradually, to a more diverse ecosystem.
This tension between concentration and openness also shapes how he thinks about his own position. He is candid about the tradeoffs of working inside a frontier lab versus contributing from the outside. Inside, you have proximity to what is actually being built and the ability to influence decisions as they happen. Outside, you have more freedom to say what you actually think, without the subtle pressures (financial, social, reputational) that shape what people inside feel able to say publicly. His honest answer is that going in and out, staying connected to the frontier without being fully absorbed by it, might be the most intellectually honest position.
The irreducible few bits
As for the shift toward agentic learning, I don’t fully buy into the whole argument. Karpathy suggests that learning will come from explaining things to the agent and giving it instructions, and that skills will emerge from that process. He also argues that only a small portion will remain where the agent can’t yet explain things, and that’s where humans—teachers, professors—will need to step in. While I do agree that agents are completely changing how we access knowledge, I don’t think this eliminates the role of the human. There’s still a need to provide filters and interpretive frameworks, and to reduce complexity. No one knows exactly what the future holds, but I believe this human component isn’t going away because it’s part of our nature.
His reflection on MicroGPT makes this concrete. MicroGPT is his attempt to distill the essence of training a language model into roughly 200 lines of pure Python. He spent years obsessing over this simplification. When he asked an agent to do it, the agent couldn’t. It understood the result once shown, but it could not arrive at it independently. This is a useful boundary marker. What agents cannot yet do is the kind of synthesis that comes from deep and sustained engagement with a problem over time, what he calls the few bits that represent genuine intellectual contribution. Everything else, he acknowledges, agents can likely do as well or better.
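To give a feel for the kind of distillation he is after, here is my own toy sketch in pure Python (far simpler than MicroGPT, and not his code): a character bigram "language model" trained with softmax, cross-entropy, and hand-derived gradients. The point is the shape of the training loop, not the model.

```python
import math, random

# Train a bigram next-character model on a tiny string, dependency-free.
text = "hello world"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
V = len(chars)

random.seed(0)
# W[i][j]: logit for "character j follows character i".
W = [[random.gauss(0, 0.1) for _ in range(V)] for _ in range(V)]
pairs = [(stoi[a], stoi[b]) for a, b in zip(text, text[1:])]

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

losses = []
for step in range(300):
    loss, grad = 0.0, [[0.0] * V for _ in range(V)]
    for x, y in pairs:
        p = softmax(W[x])
        loss -= math.log(p[y])                  # cross-entropy on this pair
        for j in range(V):
            grad[x][j] += p[j] - (j == y)       # d(loss)/d(logit) = p - onehot
    losses.append(loss / len(pairs))
    for i in range(V):                          # plain gradient descent step
        for j in range(V):
            W[i][j] -= 1.0 * grad[i][j] / len(pairs)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Forward pass, loss, backward pass, update: that loop is the irreducible skeleton, and compressing a real GPT down to something this legible is exactly the slow synthesis the agent could not do on its own.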
Figure out what you bring that agents cannot generate, and invest there. Teaching students to use tools efficiently is necessary. Teaching them to think in ways that produce those irreducible few bits is harder and slower, but more important than ever.
Efficiency vs. Interiority: The Hidden Cost of AI
There’s something compelling—almost seductive—about everything AI and the digital world are giving us: efficiency, productivity, seamless communication. But what’s really interesting is the turn beneath the surface: how the very things that add so much value to our lives may also be eroding something deeper—our interiority, our ability to think for ourselves, that inner space where what is most human resides.
At times, the conversation around this can feel a bit apocalyptic—overstated, even dystopian. But what stays with me is the tension itself: the gap between what we gain externally and what we may be starting to lose internally.
Hospitality Toward Strangers
Hospitality is about making room for the other—for the unknown. Teaching, in that sense, becomes an intimate encounter with strangers. For Carolina, intimacy means allowing others to see your imagination, your thoughts. And it is precisely those strangers who nourish you, who reveal something back to you.
Standardization
People rely on emojis without realizing they’re standardizing how they express themselves, fitting into pre-made templates. It’s as if that uniformity no longer matters—as if people are willing to become a kind of default human model, using the same language, without distinction, originality, or any real contribution to the world.
This also reflects a quiet dismissal of human tradition—what we’ve built over time—gradually dismantling it and, in many cases, handing it over to AI.
There is a kind of madness in being human that is beautiful. Borges captures it in The Circular Ruins: a man who imagines another man, and that imagination generates others, and others, and others. That delirium is multiplication, infinity—it is time folding into us, creating endlessly, making us, in some sense, eternal.
And that is precisely what begins to disappear in a system that always returns the average.
AI and the Surrender of Thought
Ezra Klein has pointed out that many of the people building AI aren’t just creating technology—they’re being shaped by it. There’s a growing impulse to pour one’s life into the machine, to feed it personal experiences so it can “know us better.”
But, as Jorge Caraballo puts it, doing so implies something deeper: “This is who I am. This is as far as I go. This is all you can know.” What the machine ends up capturing are static, already-finished versions of a person—versions that, in a sense, have stopped living.
Carolina takes this further, arguing that reducing life to data and handing it over to machines reflects a kind of hostility toward life itself.
What’s concerning here is the broader implication: a gradual devaluation of human life. In a context where intellectual life is dismissed and more people are too overworked to think freely, human life itself begins to lose value. And as that happens, the machine gains it.
At the heart of all this is the erosion of the inner life—the idea that we are no longer, or should no longer be, free within ourselves. That loss is profound.
Cervantes, in the prologue to Don Quixote, suggests that within one’s inner world, anything is possible: “Beneath my cloak, I can kill the king.” There is a radical freedom in that interior space.
Today, however, there is a growing tendency to surrender thought to algorithms. And in doing so, we risk losing not just our thinking, but our sovereignty over that inner realm.
The concept of cognitive offloading—delegating tasks to AI—can easily slip into cognitive surrender: giving up our capacity to think altogether. When that happens, we’re not just saving time—we’re relinquishing something essential.
As Jorge notes, AI has already become an intermediary for human expression in many areas of life. Everything gets filtered—“fix this,” “make this clearer,” “rewrite this.” And in that process, it becomes harder to truly encounter another person. You can feel it when reading: the difference between something written with effort by a human and something generated by a machine.
Writing, Authenticity, and Mediation
When you ask AI to write for you, you’re often asking for something universally understandable—a kind of lowest common denominator.
But what’s really at stake is the opposite: the desire to think more deeply, not more simply.
There’s also a deeper shift in how we relate to others. Friendship without an embodied other can lead to submission to a disembodied power. And that power is not spiritual—it belongs instead to systems, images, and invisible structures that shape perception.
This creates a kind of illusion: a world of images and representations that feels real but lacks depth. Within that illusion, a subtle form of narcissism emerges—one that feeds on self-curation and constant mediation.
🔍 Resources for Learning CS
→ DL, DataViz (MIT OCW) & MIT Learn Resources
Learn the core ideas behind neural networks, and explore a broader path of free courses and resources on MIT Learn to build your skills and launch a career in AI and machine learning. If you’re more interested in data visualization, this other course will teach you how to design and code clear visualizations that help people make sense of complex data, culminating in a project focused on real-world social issues.
→ Architecture Decision Record
Maintaining ADRs is a great practice, and it has become even more valuable with AI. Martin Fowler’s piece is an excellent primer, and it also links to useful tools.
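For anyone new to the practice, an ADR is just a short, numbered file capturing one decision. A minimal skeleton might look like this (these fields follow the common Nygard-style convention, not Fowler’s exact template):

```markdown
# ADR 0007: Use PostgreSQL for the catalog service

## Status
Accepted (supersedes ADR 0003)

## Context
The catalog service needs relational queries and strong consistency;
the team already operates PostgreSQL elsewhere.

## Decision
Store catalog data in PostgreSQL rather than the existing document store.

## Consequences
+ Reuses existing operational expertise and backup tooling.
- Adds a second datastore to the service's deployment footprint.
```

Because each record is small and self-contained, ADRs double as high-quality context you can feed to a coding agent, which is part of why the practice pays off even more now.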
🔍 Resources for Teaching CS
→ Helpful resource for AP Cybersecurity
An introductory college-level course in cybersecurity that covers threats, risk, and defense across networks, devices, and data, while building skills in analyzing, mitigating, and detecting attacks. Read the full document here.
→ Better Feedback
An episode on how to give and receive feedback. Here’s the audio version and the transcript.
→ SIGCSE 2026 Recap: MongoDB Session Presentations
Kim Yohannan from MongoDB shared two presentations with me on NoSQL design and building RAG/agent apps.
→ Iterative AI Coding Workshop
1.5h workshop idea: demo Claude Code chat on a Tone Matrix task, then have students iteratively build/debug a similar project emphasizing multi-step prompting, testing, and code review. Materials: Assignment, html, GitHub.
→ CSE 373 Spring 2026 - University of Washington
Kevin Lin is rethinking Advanced Data Structures through design-focused learning and agentic coding projects, highlighting how AI is shifting value away from traditional skills and toward architecture. In some of his materials, he mentions UC Berkeley professor Josh Hug, whom I wasn’t previously familiar with but who offers an updated YouTube channel and highly valuable materials for anyone teaching data structures.
🦄 Quick bytes from the industry
The spirit of Foundation is back. New episodes on the live interview show hosted by the great Kevin Rose. Trends, up-and-coming founders, and the technologies worth watching before they become “the next big thing.”
If you’re curious about Uber’s earliest technical challenges, this episode featuring their first CTO, Thuan Pham, is a must-listen. It dives into how the team avoided major breakdowns (like the Dispatch rewrite), expanded into China, rebuilt the app with Helix, and why they ended up creating so many internal tools when open-source options couldn’t keep up.
🌎 Computing Education Community
Dr. Liat Nakar (with Havana Rika & Moshe Leiba) invites submissions to a Frontiers in Computer Science special issue on cognitive processes in AI-mediated problem solving (abstracts May 17; papers Oct 23).
EduCHI 2026 (May 20–22, Toronto, hybrid) is now open for registration.
Paid Summer 2026 GenAI program at NC State (EXLAIM): build real AI tools for K–12 education, gain hands-on experience, and get mentored. Apply by April 7.
The AIED 2026 Workshop on Intelligent Textbooks (Seoul, hybrid) invites research on LLM-enhanced, adaptive, and interactive digital textbooks (shared by Isaac Alpizar-Chacon).
Upcoming Raspberry Pi webinars: Kathryn Jessen Eller on evaluating AI in healthcare (Apr 14) and Shuchi Grover on K–12 data & computing competencies (May 12).
SIGCSE TS 2027 panel “It Seemed Like a Good Idea at the Time” seeks honest stories of failed teaching experiments (shared by Dan Garcia).
The UKI SIGCSE journal club meets May 11, and the University of Manchester is hiring a teaching-focused CS lecturer (shared by Duncan Hull).
SIGCSE seeks self-nominations for a junior coordinator for the 2027 Department Chairs Roundtable. Apply by April 17 by submitting your materials to chair@sigcse.org.
A two-part webinar (Apr 16 & 23) will cover guidelines for responsible AI use in STEM education research, focusing on ethics, research integrity, and protecting participants. Thanks for sharing, Monica McGill.
Illinois CS Summer Teaching Workshop (June 10–11, virtual, free) invites 300–500 word abstracts on CS education (incl. AI & pedagogy)—submit by April 15.
Study on student-centered syllabi seeks faculty input (shared by Nadia Najjar and Debarati Basu).
Software engineering students: take this 8–12 min anonymous survey on learning preferences and AI tool use to help improve SE education (shared by Jeffrey Carver).
Has anyone set up a Rust kernel within Anaconda to keep everything under one umbrella for teaching? If so, Tony Ruocco would love to hear about your experience—feel free to email him at aruocco@rwu.edu.
RESPECT 2026 Doctoral Consortium (June 8, in-person) invites PhD students in computing education & broadening participation—apply by April 8 (shared by Earl W. Huff).
🤔 Thought(s) For You to Ponder…
Related to last week’s topic, this episode of Pausa on imperfection featuring the scholar Diego Garrocho is highly recommended (he’s always a pleasure to listen to).
He already uses oral exams that build useful skills and also help deter AI use (though I don’t think this works for every field).
He also explores the value of difference in contrast to the standardizing homogenization of citizens, why we’re increasingly becoming more alike (in WhatsApp conversations, in teaching, offering a single, uniform model of virtue, etc.).
He also touches on how we’re starting to fake mistakes to seem human; how grace escapes standardized control; and the importance of regularity (knowing there are things that will always be there, and that this consistency holds).
To measure quality, we need an ideal to compare against.
He also reflects on why the laughter in the song “Roxanne” feels like pure truth, or the rebound effect young people experience when exposed to noise without technology.
We need role models: he invites us to critically ask who we’re following.
Imperfect lives generate morally valuable experiences, like forgiveness.
At the same time, this isn’t about romanticizing error, but about moving with regulative ideals aspiring to things that are genuinely good, which also serves an important function.
It’s very difficult to navigate an imperfect world if you don’t have a target to aim at.
AI has no scars, nor does it feel pain from what happens to it.
We talk a lot about humanism, but less about what it actually means to be human (its essence).
This El Hilo podcast episode reflects on the use of AI as a weapon in light of Anthropic’s refusal to work with the Pentagon. Beyond the Anthropic case, I’m interested in what Marta Peirano says about the crossroads these AI companies face as they gain more and more power: either I shut you out, or I fully absorb you. Marta wraps up by saying: “I’m optimistic because I know our problem isn’t a technical one. What we need is a generation of political leaders who are up to the challenge.”
I did theater back in my college days. I really admire stage actors for their public service, their ability to do the same thing every day in a different way, and to create real emotion (to make people laugh, for example). It’s great therapy.
I agree with Josema:
AI has nothing on theater. Theater is something alive that will always endure. That’s why I believe theater is pure magic.
Why should people go to the theater?
Because it’s therapy for getting through life. It makes you a little happier. It makes you feel good. It brings back your optimism, your desire to love and to get excited about things.
Laughter has tons of benefits—for your cardiovascular system, for your brain, for loving more… for everything. It’s something wonderful. When I go as an audience member to a movie or a play and I laugh—and I do laugh, of course, many times—I walk out healthier, happier, more optimistic.
I think laughter is amazing, even if only for the sake of laughing.
What makes a play successful?
Hard work, consistency, authenticity, confidence, and believing in what you do; passion, respect for the audience (who are your coworkers), and talent (craft, stage experience). But luck starts at seven in the morning.
Concha Monje is a robotics researcher at UC3M. I really liked her academic perspective in this Pausa episode on how realistic and viable it is for humanoid robots to become part of our daily lives in the near future—and why. It’s a great listen for setting expectations:
She used to be skeptical, but AI has made her rethink things. She’s seeing major progress in locomotion and motor skills. That raises an interesting question: do we actually want to delegate things like making the bed, cooking, or even escaping our daily routines to a robot? It’s still unclear whether having humanoids in the home would truly satisfy us—or what new needs that might create.
A robot’s ability to move in a stable, human-like way comes down to both hardware (motors, actuators, etc.) and the “intelligence” that tells those actuators how to move. Kinematics have become much simpler thanks to AI-driven automation. That said, getting robots to function like humans in any environment is still a major challenge. They need extensive training and must be able to handle uncertainty in unfamiliar settings—which is why they tend to fail when placed in homes they don’t already know.
Right now, with companies like 1X, you’re not just paying for the robot (whether via subscription or purchase), but also for a human operator controlling it in your home. It’s invasive and expensive, but there’s still a learning process that depends on humans. Unlike industrial settings—where everything is predictable—every home is different, and robots have to figure out how to adapt.
As for strengths and weaknesses: they’ve basically nailed kinematic and dynamic control—walking at different speeds, maintaining balance, coordinating joint movement, all very human-like. What they still struggle with are more complex tasks that challenge balance, like walking while grabbing an object, or moving with the kind of intuitive judgment humans have (for example, picking up a cup—not just recognizing its shape, but understanding how to handle it naturally). That kind of integration into real household dynamics is still missing.
She also shares a great example of robotics being used to help people with ALS, which is genuinely impactful. There are also use cases aimed at addressing demographic and social challenges. That said, she draws the line at replacing therapists—something that opens up a whole ethical debate. There are also security concerns: vulnerabilities through networks, data, or even teleoperators being introduced into the home.
In the end, Concha admits she wouldn’t bring a robot into her own home—she’s had more than enough of them in the lab.
In this week’s column, JL Antúnez writes:
A company doesn’t die when it runs out of cash—it dies earlier, when it loses the shared story of what it stood for.
What are products for, beyond their function? They shape what we see as normal, act as tools of prosperity or poverty, and ultimately transform us as a species.
I really loved learning about the role of a hospital orderly on the A Vivir podcast. I barely knew anything about it before, and it’s such an essential job. It takes a lot of empathy, and they develop a strong intuition—a real clinical instinct.
I’m not paranoid about cloud privacy. I’m perfectly happy paying for Google One and iCloud, among other cloud services. But moves like this from Microsoft, which is starting to train its AI models on code from private repositories, do raise an interesting debate about how dependent we’ve become on cloud services, and whether self-hosting (running everything on your own server) is a worthwhile alternative. It’s not something I’m considering personally, but I wouldn’t be surprised if plenty of people are now crunching the numbers to see what it really costs and whether it beats paying for these services indefinitely.
Interesting read: The Strength of Weak Ties.
In his interview with Lex Fridman, Peter Steinberger acknowledged that it’s natural to mourn the loss of the traditional way of programming: the deep flow of writing code by hand, or the craft itself. However, he noted that you can find a new kind of flow by working with agents and building at a higher level. As for the future role of the programmer, he emphasized that it’s still about being a builder: someone who defines the vision, the experience, and the key design decisions. In that regard, learning to empathize with agents and direct them, essentially learning their “language”, is the new core skill.
Leo XIV continues his catechesis on Lumen Gentium. This week, he focused on the laity. https://www.vatican.va/content/leo-xiv/en/audiences/2026/documents/20260401-udienza-generale.html
I liked what Aman said about patterns! I see it all the time in my software design students.
📌 Research Corner
We’re making good progress in the ITiCSE Working Group. I’m on the SLR team. Last Friday, we discussed evaluation criteria for papers, and now we are reading them. I’m learning a lot about teamwork and related topics.
This detail in the Claude Code quote is simple, but I had completely missed it. It’s such a great touch.


In our Research Methods class this week, we had a special session on writing. Our guest lecturer recommended Purdue’s Writing Center, and I’d also like to share another excellent resource that has helped me a lot: UH’s Writing Center.
🪁 Leisure Line
Breaking in a new court to play pickleball. I’m getting comfortable with the movement... Now I just need to learn the rules (definitely on my to-do list before the semester ends).
It’s almost a tradition at this point: lunch with Mahdi and a chat about Iran at Cougar Woods Dining Hall. And then enjoying a nice walk around UH campus while praying the rosary.





We spent last Saturday afternoon helping people in need in downtown Houston. Also sharing a rooftop view from my home while getting ready for Palm Sunday.


I’m excited to be reading this new book by HBS professor Arthur Brooks. What I’ve heard so far has made me eager to dig in.
Hot damn, Chick-fil-A sauce is so freaking good. I don’t care how bad it is for me. I think Mahdi likes it too. The one on campus is his favorite spot.
📖📺🍿 Currently Reading, Watching, Listening
Thylacine recorded 200 unique sounds throughout Switzerland to create music. What artistry!
Willie Colón passed away this past February. My favorite album of his is El Juicio. It’s the perfect record to introduce to someone who doesn’t know anything about salsa. With tracks like “Timbalero” and “Aguanile,” dancing to the whole album is a serious workout. A huge contribution from Willie to Latin music.
I loved this video. Rocky Trail’s been on repeat for me all week.
A podcast about songs that I’ve been really into lately:
I was recently introduced to Bernardo Bacalhau, and I really like what he shares. An illness forced Bernardo to completely shift his perspective on life. Now he seems really happy in his little cabin out in the middle of nowhere (Norway), after going through the emotional aftermath of his bike journey.
Did a deep dive into Bruno Mars’ new album with this Switched On Pop episode. Perfect time to revisit this beautiful record and just soak it in.
🌐 Cool things from around the internet
A collection of links to stuff I think is worth sharing.
🔗 The work of Derek Lin — these little worlds are captivating.
🔗 IBM Plex — an excellent typeface.
🔗 Sleep Well Creatives — an interactive web-based experience to help people sleep better.
🔗 MegaParse — parse PDFs, Docx, PPTx in a format that is ideal for LLMs.
🔗 Who Is In Space — a website that tells you how many people are in space, what country they’re from, and how many days they’ve been up there.
Issue #38 of Computing Education Things was written while listening to:
🔗 Quick Links
🎧 Listen to Computing Education Things Podcast
💌 Let’s talk: I’m on LinkedIn or Instagram
If you’re enjoying Computing Education Things, please like, comment, or share this post! You can also support this work through Buy Me A Coffee. And if you’re finding value in the newsletter, consider forwarding it to a friend, colleague, or family member to help the community keep growing. Thanks for reading!







