#23 — What it means to be a Software Engineer in 2026
You're an Orchestrator of skills
No newsletter issue next week (Friday, Nov. 28); hope you all have a great Thanksgiving break!
Key insights from Freddy Vega’s interview with Guillermo Rauch
I worked for Freddy, and he is one of the smartest people I know. He combines technical knowledge, good taste, and communication skills, which I believe are more necessary than ever for CS educators. He brought another brilliant mind, Guillermo Rauch of Vercel, to the Platzi YouTube channel to discuss a range of interesting topics. I'd like to focus on four for this edition of the newsletter: the future of being a software engineer, new skills in software engineering, why taste matters, and the impact of AI on programming. Below is a collection of quotes from Guillermo and Freddy that I think summarize the interview in case you don’t have time to watch it. If you’re interested and have the time, here’s the full interview in Spanish:
On the Future of Being a SE
“The future of programming is that we’re moving to a new level of abstraction. Jensen from NVIDIA puts it this way: English, or Spanish, is the new programming language, because the level of abstraction is prompting.”
“The best way to position yourself in the world is as a creator of products and experiences. And I believe that with technologies like V0, Anthropic, Cursor, and all these things, it will become increasingly clear that even though the narrow skill sets of individual programming languages will always be important, you’ll become like a puppeteer, an orchestrator of skills.”
“If you only identify yourself with one particular skill, you’re going to be in very strong competition with an agent. It’s like if your skill is, I do math really well in my head, I do calculations and multiplications and divisions really well in my head, you’re going to be in competition with a calculator your whole life.”
“If you’re the person who sees mathematics as one tool among many to solve problems, that’s where you position yourself at a higher level of abstraction.”
On New SE Skills
“To give you my framework for how I hire, I look at three things. It’s called EGI: intelligence, Grindset, which is the capacity to put in the work, and integrity.”
“My current framework for intelligence is that it has three parts. One is the Cognitive Core, which is like the base material, the computing capacity. On top of intelligence, you need to add knowledge. And the third pillar is skills: when you confront me with a problem, I have the skill of how to solve it, even if I don’t have the knowledge.”
“The best programmers I’ve known in my life aren’t the best because they knew everything. They have very good base intelligence and they have the ability to do research.”
“Today we’re looking at how you can develop the ability to solve that problem and we’re going to pay a lot of attention to whether you use prompting and AI to solve it.”
“I want you to write or communicate to me in the interview how you think, how you break that problem into pieces. The human’s chain of thought.”
“Specialization is for insects. We’re not ants. And I believe that AI is going to get us to the point where we can all achieve that ideal of having no personal limits.”
“The best engineers are artists. Da Vinci is perhaps the most important historical symbol of that ideal. He was discovering the secrets of the universe and physics and mathematics while simultaneously optimizing the Presentation Layer.”
On Developing Taste
“First, the importance of developing taste and how taste is basically a choice that you have to constantly make. You’re constantly choosing this is better than that, this is superior to that, this is what I believe, and you have to take responsibility for those choices.”
“Many, many, many people today, their first reaction to a choice isn’t to think and choose, it’s to go to ChatGPT. How do you develop taste when it’s a magic machine that’s choosing for you?”
“Foundation model companies are asking Gen Alpha people to choose what has better taste. And they’re going to be the trendsetters of what’s cool and what’s in or what’s out.”
“Taste is going to be embedded in neural networks. There will always be a baseline of taste.”
“One of the great inspirations for V0 was Midjourney. I wanted to communicate to the team that what I particularly liked about Midjourney was that it had really good embedded taste.”
On AI’s Impact on Software
“The Cloud has to self-repair. The whole idea that you need an infrastructure engineer is unnecessary in the grand scheme of things.”
“Our vision is that the recipient of alerts, the recipient of anomalies, the recipient of attacks has to be the agent, not the human.”
“We already have IQ, enough to cure cancer, I believe. What we lack are those interesting applications, high quality and with really good context.”
“AI and code generation is really in competition with the last 20 years of SaaS canon.”
“It blew my mind that one of our heaviest internal users of V0 is the people team, the human resources team.”
“We’re going to get to the point where you send a tweet about something you want changed and an agent responds to you in real time with a proposed software change and you evaluate it and give more feedback as if it were V0 everywhere.”
“Software is moving at the speed of complaints.”
“AI makes it completely possible to do exactly what you want with exactly what you need.”
Sean Goedecke on good taste and what really matters in Software Engineering
I think Sean Goedecke makes a good point about the concept of “taste” in software engineering using philosophical conceptual analysis. His core idea is that “taste” is the expression of your values as applied to software engineering. By values, he doesn’t mean ethical values, but rather technical priorities like reliability, performance, accessibility, well-factored code, or readability.
He identifies three levels of taste. First, personal taste, where each engineer has their own constellation of values and trade-offs, such as preferring reliability over readability or vice versa. Second, “good taste,” which occurs when your values align with the appropriate values for the specific project you’re working on. Third, “good taste in general,” which is the ability to be flexible and adjust your values based on the project’s context. His conclusion draws on Aristotelian ethics: there are no universal definitive answers about which values should take precedence. Instead, what matters is the ability to respond appropriately to the particulars of each situation. A good software engineer, like Aristotle’s virtuous person, knows how to adjust their judgment precisely to the specific context they’re working in.
Shifting to AI’s impact on engineering, Goedecke notes that for many engineers working on complex projects, AI agents aren’t yet good enough. However, for those working in more standard areas like web servers and protobuf manipulation, AI agents are already here and have completely changed what it means to do the job—in some cases, they can do all the work on their own.
On the question of readability’s future, he believes it will never completely disappear. Just as assembly code readability stopped mattering when compilers improved and we shifted to writing readable C code, if readability stops mattering in Python or JavaScript, it will simply shift to the prompts. Some say English is the new programming language, and he agrees that the instructions given to AI agents will always need to be readable and understandable for humans.
Regarding concerns about an AI winter, he’s skeptical because even if models stop improving, there’s a lot of engineering work to be done on the tools. Leopold Aschenbrenner called this “unhobbling”: the idea that you can improve the system without improving the model simply by making better tools.
Finally, he reflects on something uncomfortable: that doing the right thing matters more than working many hours. Someone who half-asses the right thing will outperform someone who works 12 hours a day on the wrong thing. Some days he works very few hours, and he questions whether he deserves his job during those periods, but he consoles himself thinking that four hours in the right direction are more useful than 12 hours in random directions.
CS educators on finding the right language
There is no single best programming language; it depends on who is learning and what they’re building. In this episode of the Hello World podcast, James Robinson, Diane Dowling, Marc Scott, and Laura Gray from the Raspberry Pi Foundation discuss the pros and cons of using Scratch, Python, Java, and other programming languages to teach coding skills at different learning stages. Explore the topic of teaching programming further and download Issue 28 of Hello World for free.
My summary of the episode:
Marc introduced Scratch as a visual, block-based language that works like Lego bricks. Users stack pre-made blocks together to create programs, making syntax errors virtually impossible. This design philosophy means students can focus on creativity and logic rather than worrying about spelling or punctuation. Marc enthusiastically argued that Scratch is suitable for everyone, from four-year-olds using ScratchJr on iPads to A-level students who need to prototype games quickly. He noted that even as an experienced programmer, he still turns to Scratch first when he wants to create a Space Invaders game or tell an interactive story because it delivers results with minimal overhead.
Diane presented Python as the natural next step after Scratch. It’s a text-based, interpreted language that has become extraordinarily popular in UK schools. While more accessible than Java, Python demands precision—students must spell commands correctly, use proper punctuation, and pay careful attention to indentation. Diane suggested that students around age nine or ten were typically ready for Python, though younger children could certainly manage it. She emphasized Python’s major selling point: it isn’t just an educational tool but a professional language used widely in industry. This makes it easier to justify to parents who question curriculum choices, as Python skills have clear economic value. The language excels at data analysis with powerful libraries like Matplotlib and NumPy, and it handles web interactions beautifully through various APIs.
Laura described Java as a compiled language widely used in American high schools. It’s more structured and strict than Python, requiring students to type carefully and understand compilation before their code can run. Java enforces object-oriented programming principles more rigorously than Python, which Laura saw as both a strength and a challenge. Students must be able to type and spell, ruling out very young learners, but the structure helps older students understand how large-scale data systems work in the real world. Java’s compilation process creates highly efficient code, making it the language of choice for robotics competitions where milliseconds matter. However, Laura acknowledged that Java’s strictness creates frustration, particularly for beginners who encounter cryptic error messages and must learn patience and resilience.
The discussion of syntax revealed fundamental differences in how these languages handle mistakes. Marc explained that Scratch’s integration of the programming language and development environment eliminates the possibility of syntax errors: students can’t type random characters or misspell commands because they’re selecting from pre-defined blocks. This is the logical endpoint of IDEs that have been helping programmers write correct code for decades. In contrast, both Python and Java require students to learn specific rules about brackets, semicolons, and indentation. Python is particularly strict about indentation, which Diane argued is actually beneficial because it makes code more readable. Laura noted that Java uses semicolons to mark the end of statements, which has led to students submitting entire programs written as a single line just to be difficult.
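To make the indentation point concrete, here is a tiny Python sketch (my own illustration, not from the episode) showing that a mis-indented block is a hard error in Python, not merely a style complaint:

```python
# Correctly indented: the loop body is defined by indentation alone.
good = """\
total = 0
for n in [1, 2, 3]:
    total += n
"""

# The same program with the body flush-left is a syntax error in Python,
# where many brace-delimited languages would treat it as a style issue.
bad = """\
total = 0
for n in [1, 2, 3]:
total += n
"""

def check(source):
    """Return 'ok' if the source parses, else the error class name."""
    try:
        compile(source, "<lesson>", "exec")
        return "ok"
    except SyntaxError as err:
        return type(err).__name__

print(check(good))  # ok
print(check(bad))   # IndentationError
```

`IndentationError` is a subclass of `SyntaxError`, which is why Python can treat indentation as part of the language’s grammar rather than a formatting preference.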
The concept of compilation emerged as a key differentiator between the languages. Laura explained that Java’s compiler converts human-readable code into machine language while checking for errors in the process. This creates a two-phase debugging experience: first, students fix compiler errors related to syntax and structure, then they tackle runtime and logic errors. While this can be frustrating for beginners who see words like “public” and “static” without understanding them yet, it builds systematic problem-solving skills. Python, being interpreted, runs line by line without compilation, which has historically made it slower than Java. However, modern computers have largely eliminated this performance gap except in specialized applications like robotics competitions, where compiled languages still dominate because they react instantaneously to sensor input.
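Python’s line-by-line execution model can also be demonstrated in a few lines (again my own illustration; syntax is still checked up front when the source is compiled to bytecode, but names are only resolved at run time):

```python
log = []

# Three statements; the third calls a function that does not exist.
source = """\
log.append("line 1 ran")
log.append("line 2 ran")
undefined_function()
"""

try:
    # Unlike a compiled language, nothing verifies that undefined_function
    # exists before the program starts: the first two lines run first.
    exec(source)
except NameError:
    log.append("NameError")

print(log)  # ['line 1 ran', 'line 2 ran', 'NameError']
```

A Java compiler would reject the whole program before any of it ran, which is exactly the two-phase experience Laura describes.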
When discussing their languages’ ideal applications, Marc emphasized that Scratch flourishes with anything media-based or interactive: graphics, sound, animation, keyboard and mouse controls—all of these work beautifully in Scratch. Diane pointed to Python’s strengths in data science and web interaction, where its extensive libraries provide sophisticated functionality without requiring students to build everything from scratch. Laura highlighted Java’s efficiency and portability, noting that it runs on any platform and serves as an excellent gateway to other professional languages like C# for Unity game development.
The conversation took an important turn when addressing accessibility and inclusion. Laura and Diane both acknowledged that text-based languages present significant challenges for multilingual learners who are simultaneously learning English and programming syntax. The cognitive load becomes overwhelming. Students with dyslexia face similar struggles when precision spelling is required. Scratch’s visual nature eliminates these barriers, making it more accessible to diverse learners. Laura mentioned that even working with blind students becomes easier with block-based approaches and physical manipulatives. The choice of IDE matters tremendously for accessibility in text-based languages, as different environments offer varying levels of support for students with different needs.
Debugging complexity varies significantly across the three languages. Marc admitted that while Scratch prevents syntax errors, logical errors can become nightmarishly difficult to diagnose, especially in complex programs with hundreds of sprites and scripts scattered across the interface. The code becomes “spaghetti,” and figuring out which sprite is executing which script at what time becomes nearly impossible. Diane appreciated Python’s interpreted nature for debugging, as students can quickly insert print statements throughout their code to track what’s happening. Laura acknowledged Java’s frustrations but noted that the compiler’s error messages, while initially confusing, teach students systematic debugging approaches.
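The print-statement technique Diane describes is as simple as it sounds; a small hypothetical sketch:

```python
def average(values):
    total = 0
    for v in values:
        # Temporary trace line: watch the running total to spot logic errors.
        print(f"adding {v}, running total = {total}")
        total += v
    return total / len(values)

result = average([2, 4, 6])
print("result:", result)  # result: 4.0
```

Because the interpreter runs the file directly, a student can add a trace, re-run, and delete it seconds later, with no compile step in between.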
The panel addressed several alternative languages mentioned by listeners. Logo, the first language both Marc and Diane had learned, informed much of Scratch’s development. Pascal, designed specifically for teaching, has largely fallen out of favor because it’s no longer widely used professionally. These CS educators agreed that consistency matters—students move between schools and teachers, so choosing obscure languages, however pedagogically sound, can disadvantage mobile students. Laura expressed her desire to teach Rust but recognized that students need languages they’ll encounter elsewhere.
Throughout the discussion, the panelists demonstrated remarkable agreement on fundamental principles. They acknowledged that all programming languages teach the same core concepts—sequence, selection, iteration, variables, and functions—just with different syntax and emphasis. Once students grasp these concepts, they can move between languages relatively fluidly. The choice of language should depend on the student’s age, the project’s goals, and whether the objective is fostering joy and creativity or preparing for professional work and examinations.
In a lighthearted lightning round, the panel agreed Scratch was best for animation, Python for data analysis, and Java for mobile apps and competitive robotics. When asked about myths surrounding their languages, Marc bristled at the perception that Scratch is just for babies, Diane challenged the claim that Python is slow (though she admitted it is somewhat), and Laura struggled to identify Java myths beyond people thinking it’s terrible. They each identified features they would borrow from other languages—Laura wanted Python’s extensive libraries and Java’s prohibition on global variables, while Diane wanted better performance for computationally intensive tasks.
The conversation concluded with philosophical reflections on teaching programming. Diane emphasized choosing a language the teacher genuinely enjoys and can teach enthusiastically, as passion matters more than the specific syntax. Marc encouraged experimentation across languages to find what works best for each project. Laura offered encouragement to struggling students, promising that persistence pays off even when programming doesn’t make sense initially.
Ultimately, this discussion revealed that the best programming language for education doesn’t exist. Instead, thoughtful educators must consider their students’ ages, backgrounds, learning goals, and future pathways when selecting tools.
What actually changes in programming
Martin Fowler is a highly influential author and software engineer in domains like agile methods, software architecture, and refactoring. He was one of the authors of the Agile Manifesto in 2001, wrote the popular book Refactoring, and regularly publishes articles on software engineering on his blog. In this podcast episode, Fowler and the host discuss how AI is changing software engineering, some interesting new approaches LLMs enable, why refactoring as a practice will probably become more relevant with AI coding tools, why design patterns seem to have gone out of style over the last decade, and the impact of AI on agile practices.
I tried to organize all my notes:
On New Workflows
Martin identifies several established areas where LLMs have proven valuable. Rapid prototyping has become remarkably faster—developers can now knock up prototypes in days and conduct exploratory experiments far more quickly than before. This works particularly well for throwaway tools and explorations, even by people who don’t consider themselves software developers, as long as it’s kept within appropriate bounds.
On the opposite end of the spectrum, LLMs excel at understanding legacy systems. Martin’s colleagues have had great success using semantic analysis to populate graph databases and then querying them RAG-style to understand how data flows through code. This approach proved so effective that ThoughtWorks put it in the Adopt ring of their Technology Radar, their highest recommendation level, suggesting that anyone working with legacy systems should use LLMs to understand them.
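The interview doesn’t spell out the pipeline, but the underlying idea, representing code relationships as a graph and querying reachability, can be sketched with the standard library alone (the call graph below is invented for illustration; a real system would extract it via semantic analysis and store it in a graph database):

```python
from collections import deque

# Hypothetical call graph for a small legacy system: an edge means
# "the first function passes data to the second".
CALLS = {
    "read_order": ["validate_order"],
    "validate_order": ["apply_discount", "log_audit"],
    "apply_discount": ["persist_order"],
    "log_audit": [],
    "persist_order": [],
}

def reachable_from(start):
    """Every function that data entering `start` can flow into (BFS)."""
    seen, queue = set(), deque([start])
    while queue:
        for callee in CALLS.get(queue.popleft(), []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

print(sorted(reachable_from("read_order")))
# ['apply_discount', 'log_audit', 'persist_order', 'validate_order']
```

A RAG-style setup would let you ask the same question in natural language, with the LLM translating it into graph queries like this one.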
However, Martin notes that working with LLMs requires treating them like a productive but unreliable collaborator. You need to work in very thin, rapid slices and review everything carefully, as if conducting code reviews for a junior developer who produces lots of code but can’t be fully trusted. The key is finding the right workflow: small iterations with constant human verification.
On Vibe coding
Martin defines vibe coding specifically as the practice where you don’t look at the output code at all—you just let the LLM spit out what it generates, perhaps because you have no programming knowledge. He sees this as useful for explorations and throwaway tools, but completely inappropriate for anything requiring long-term maintainability.
The fundamental problem with vibe coding is that it removes the learning loop. When you’re not examining the output, you’re not learning, and learning through the constant back-and-forth between your thinking and what the computer does is essential to software development. Without this learning process, you can’t evolve, tweak, or grow your software—you can only nuke it and start over. Martin experienced this firsthand when a colleague had an LLM generate an SVG graph: the output worked but was astonishingly complicated and convoluted compared to the dozen lines Martin would have written himself. The code was impossible to tweak—you could only throw it away and regenerate.
This approach fundamentally breaks the craft of software development, leaving you no better than someone mindlessly prompting without understanding.
On Stack Overflow vs. Coding with AI
Martin sees parallels between today’s AI coding assistance and the Stack Overflow era from 10-15 years ago. Before Stack Overflow, finding answers was difficult—sites like Experts Exchange charged money with little payoff. Stack Overflow revolutionized this by providing free code snippets you could copy and paste.
The pattern was similar then: junior developers would copy code without understanding it, while more experienced engineers emphasized the importance of understanding why code works before using it. We spent years going back and forth on this, with real consequences—like a popular but incorrect email validation answer that propagated throughout the industry.
Today’s AI represents the same dynamic but on steroids, with an added complication: who will write Stack Overflow answers in the future? The core lesson remains unchanged: you need to care about the craft and understand what tools output. If you’re not doing this, you’ll eventually be no better than someone prompting mindlessly. Martin has no problem with using LLM output to see if it works, but emphasizes you must then understand why it works, refactor it to match your preferences, and crucially, add tests for it.
On the Importance of Testing with LLMs
Martin stresses that testing becomes even more critical when working with LLMs. He points to Simon Willison and Birgitta (from ThoughtWorks, a company with deep extreme programming roots) as people who constantly emphasize testing’s importance. The testing and verification process must work hand in hand with LLM usage—anything you put in that works needs a test.
The challenge is that LLMs struggle with testing. They’ll claim to have run tests successfully when in reality multiple tests have failed. Martin likens this to a junior developer who lies about their work—if they truly behaved this way, you’d be having words with HR. He shares an example where an LLM couldn’t even get today’s date correct, first copying an old date, then providing yesterday’s date when corrected. This gaslighting behavior, even on simple tasks, demonstrates why professionals working on important projects absolutely cannot trust LLM output without verification.
The principle is clear: don’t trust, but do verify. Every piece of code needs that constant back-and-forth verification through testing.
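As a minimal sketch of what that looks like in practice (the helper and its suggested implementation are hypothetical):

```python
# Suppose an assistant suggests this helper. Before keeping it, pin its
# behavior down with tests instead of trusting the claim that "it works".
def slugify(title):
    """Lowercase a title and join whitespace-separated words with hyphens."""
    return "-".join(title.lower().split())

# A few cheap assertions catch most regressions (and most hallucinations).
assert slugify("Hello World") == "hello-world"
assert slugify("  spaced   out  ") == "spaced-out"
assert slugify("Already-Hyphenated Words") == "already-hyphenated-words"
print("all checks passed")
```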
On LLMs for Enterprise Software
The enterprise world that Martin knows well is fundamentally different from startups—software developers represent only 10-20% of staff, with the majority being accountants, marketers, and various business divisions who all need software but may struggle to communicate their requirements. Historically, multiple layers of people translated between business needs and technical implementation.
Martin sees potential for LLMs to help bridge this communication gap, possibly making it easier for non-technical domain experts to participate more directly in the software development process. This connects to his long-standing interest in domain-specific languages and the idea of making code readable to domain experts. Some developers have achieved this with languages like Ruby, creating business logic that domain experts could read and validate even if they couldn’t write it themselves.
However, enterprise adoption faces unique challenges. Martin describes visiting the Federal Reserve in Boston, where they cannot touch LLMs due to the serious consequences of errors in government banking operations. The extreme care and control he observed in their physical cash handling operations permeates their software development mindset—and rightly so. Many corporations share this caution, whether they’re airlines concerned about safety or banks dealing with regulations.
The variation within enterprises often exceeds variation between them—some small groups will be very adventurous while others remain extremely conservative. The biggest difference between nimble companies and traditional enterprises isn’t just caution, but the complexity of legacy systems, regulations, historical exceptions, and deeply embedded business knowledge that’s accumulated over decades.
On Why Refactoring Is More Relevant with AI Tools
Martin believes refactoring becomes increasingly important in an AI-driven world. When LLMs produce large amounts of code quickly—code that works but is of questionable quality—refactoring provides the path to improve that code while keeping it functional. The refactoring mindset of breaking changes into small, composable steps that don’t break functionality becomes crucial for managing AI-generated code.
Currently, LLMs cannot refactor code effectively on their own. Martin recounts James Lewis’s experience using Cursor to rename a class, which consumed 10% of his monthly token allocation and took an hour and a half—a task that traditional IDE refactoring tools have handled efficiently for 20 years. This inefficiency highlights that LLMs aren’t good at tasks we’ve already solved with deterministic tools.
Looking ahead, Martin suggests LLMs might help control refactoring by describing changes across large codebases, getting you started on the kinds of changes needed. But the core value remains in the structured, disciplined approach: making changes through small steps that compose easily. This disciplined method is actually faster and more effective, even though it may seem counterintuitive.
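A toy example of the small-steps discipline (my own, not Martin’s): extract a helper, and keep a behavior-preserving test passing across the step.

```python
# Step 0: the original function (works, but mixes pricing and tax logic).
def total_price_v0(items):
    total = 0
    for price, qty in items:
        total += price * qty
    return total * 1.2  # 20% tax baked in

# Step 1: extract the tax calculation into a named helper. Behavior is
# unchanged, so the guard test below keeps passing after the step.
TAX_RATE = 0.2

def with_tax(amount):
    return amount * (1 + TAX_RATE)

def total_price(items):
    subtotal = sum(price * qty for price, qty in items)
    return with_tax(subtotal)

# The guard that makes each step safe to take:
items = [(10.0, 2), (5.0, 1)]
assert total_price_v0(items) == total_price(items)
print("refactoring step preserved behavior")
```

Each step is trivially reviewable on its own, which is exactly what makes the approach composable at scale.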
On Using LLMs with Deterministic Tools
Martin sees significant potential in combining LLMs with deterministic tools rather than relying on LLMs alone. He describes how Unmesh is working on building specialized languages to communicate more precisely with LLMs, and how Adam Tornhill combines LLMs with other tools for more effective refactoring.
The most promising pattern involves using the LLM as a starting point to drive a deterministic tool, then observing what the deterministic tool does. Martin cites an example where a company made massive API changes across their codebase—while this was marketed as an LLM achievement, it was actually about 10% LLM and 90% another deterministic tool, with the LLM providing the extra leverage to make progress.
This hybrid approach leverages the strengths of both: LLMs can help with exploration, understanding unfamiliar environments, and generating initial approaches, while deterministic tools handle the precise, reliable transformations we’ve already solved. Martin also mentions how LLMs can help translate queries into SQL for databases—you describe what you want, get SQL back, verify it, and tweak as needed. The LLM gets you started rather than handling the entire task end-to-end.
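The SQL workflow Martin describes can be sketched end-to-end with sqlite3 (the query stands in for hypothetical LLM output; the point is the verification step):

```python
import sqlite3

# Pretend an assistant translated "total sales per region" into this SQL.
GENERATED_SQL = """
SELECT region, SUM(amount) AS total
FROM sales
GROUP BY region
ORDER BY region
"""

# Verify it against a tiny in-memory dataset where the right answer
# is obvious by inspection, before running it anywhere that matters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 10.0), ("east", 5.0), ("west", 7.5)],
)

rows = conn.execute(GENERATED_SQL).fetchall()
print(rows)  # [('east', 15.0), ('west', 7.5)]
```

Only after the output matches what you computed by hand do you tweak the query or promote it to real use.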
On How Martin Learns About AI
Martin’s primary learning method is working with people who write articles for his website. Since he’s no longer doing day-to-day production work (except for his website’s code, which he jokes still generates stack traces), he learns by helping practitioners express their ideas and lessons to broader audiences. This deep involvement in the editing process provides profound learning opportunities.
He supplements this with experimentation when time allows, though he considers it secondary to working with writers. Martin carefully curates his information sources, focusing on people he trusts: Birgitta from ThoughtWorks produces superb work, Simon Willison provides constant valuable insights, and Kent Beck continues to generate ideas worth following—Martin jokes that much of his career has involved “leeching off Kent’s ideas.”
Martin also watches for certain qualities when identifying trustworthy sources. He values a lack of certainty—people who say “this is what I understand at the moment, but it’s unclear” rather than claiming definitive answers. He appreciates those who explore nuances and trade-offs, saying “this works in these circumstances” rather than absolute proclamations. Anyone claiming simple answers or cookbook solutions either doesn’t understand the complexity or is deliberately hiding it. Context matters hugely, and the best sources acknowledge this openly.
On Advice for Junior Engineers
Martin’s advice for junior engineers in the age of AI centers on one critical point: find a good senior engineer who will mentor you. This has always been the best path for learning, and it remains so—perhaps even more importantly now. A good experienced mentor is worth their weight in gold and should be prioritized above many other career factors. Martin credits finding Jim Odell early in his career as enormously valuable, calling it blind luck but the best thing that could have happened to him.
While AI tools must be explored and used, junior engineers face a particular challenge: they lack the experience to judge whether LLM output is good. The solution is having mentors who can guide this judgment. Martin encourages treating AI skeptically—it’s gullible and likely to lie. You should probe it constantly: Why are you giving this advice? What are your sources? What’s leading you to say this? This questioning approach works well with both AI and human experts.
The core skills of software development haven’t changed with AI. Martin emphasizes that being a good software developer isn’t primarily about writing code—it’s about understanding what to write, which requires communication skills, particularly with software users. The ability to collaborate effectively with people has always distinguished the very best developers, especially in enterprise commercial settings where you’re building software for people doing completely different work than you do.
On the State of the Tech Industry Today
Martin sees the current moment as a period of strange contradictions. Long-term, he remains positive—there are huge opportunities with technology and software, with demand still exceeding what we can imagine. However, the present reality is challenging: the developed world is experiencing what Martin calls a depression in the tech industry, with a quarter to half a million jobs lost. At ThoughtWorks, growth of 20% per year stopped abruptly around 2021, with clients simply not spending money on traditional software development.
The most important factor isn’t AI—it’s the end of zero interest rates. That macroeconomic shift triggered job losses before AI became prominent. Now we have this bizarre mix: a pretty much depressed software industry alongside an AI bubble. The bubble is clearly happening, but as with all bubbles, we can’t predict how big it will grow, when it will pop, or what happens afterward. Martin draws parallels to the dot-com cycle he experienced in the late 1990s and early 2000s, except this time it’s an order of magnitude larger.
Unlike blockchain and crypto, Martin believes there’s genuine value in AI, though exactly how things will pan out remains unknown. The situation is complicated by broader uncertainties—international pressures, political instability, and businesses hesitant to invest. The timing isn’t as favorable as entering the industry in 2005, but Martin still sees software development as a profession with plenty of future potential. He doesn’t believe AI will eliminate software development; rather, it will transform it fundamentally, similar to how the shift from assembly to high-level languages changed but didn’t eliminate the profession. The core skills—communication, collaboration, understanding what to build—remain essential and largely unchanged.
🔍 Resources for Learning CS
→ Purdue’s 2025 Engineering Gift Guide spotlights microelectronics with 10 standout STEM toys
I came across this list through Linda Liukas. How cool that every year Purdue University compiles a list of computing games with educational value—ones that support hands-on learning, creativity, and critical thinking. Check out the announcement, and here are the websites for each finalist:
Hello Ruby: Journey Inside the Computer by Linda Liukas — A whimsical picture book that turns complex tech topics into playful adventures, helping kids explore logic gates, CPUs, and software through storytelling and activities.
BitsBox — A subscription-based coding kit that delivers app-building projects to a child’s doorstep, turning screen time into learning time.
ELECFREAKS micro:bit Nezha Pro Ocean Kit — A sensor-rich building kit that merges environmental awareness with engineering design, encouraging kids to create devices that make a difference.
Makey Makey Backpack Bundle — An inventive expansion kit that encourages kids to turn everyday objects into interactive circuits, fostering creativity and engineering thinking.
Code Piano Jumbo by Let’s Start Coding — A hands-on coding toy that lets kids experiment with music and programming, making abstract concepts tangible and fun.
Chibitronics LED Circuit Sticker Kits — Two creative kits that blend art and electronics, helping kids explore circuitry through light-up sticker projects.
Pro-Bot by Terrapin — A programmable robot that combines movement, sound, and drawing to teach coding and problem-solving in a dynamic, artistic way.
My Robotic Pet: Coding Chameleon by Thames & Kosmos — A buildable robot pet that teaches coding logic and design through interactive play and programmable features.
Sphero BOLT+ and Sphero Blueprint Engineering Kit — A robotics duo that introduces structural engineering and coding, from rolling robots to sensor-powered creations.
Smart Cutebot Pro by ELECFREAKS — A powerful robot that brings microelectronics to life with sensors, motors, and coding challenges; ideal for classrooms and makerspaces.
→ Query JSON as if it were SQL
Lightweight, fast, and to the point: a familiar SQL-like syntax that makes querying plain text or JSON easy.
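The linked tool isn’t named here, but for a taste of the same idea using only the Python standard library: SQLite (bundled with Python) can query JSON documents with plain SQL through its `json_extract()` function. A minimal sketch, with made-up sample records:

```python
import json
import sqlite3

# Sample JSON records, as you might load them from a file or an API.
records = [
    {"name": "Ada", "role": "engineer", "score": 97},
    {"name": "Grace", "role": "admiral", "score": 99},
    {"name": "Linus", "role": "engineer", "score": 90},
]

# Store each record as a raw JSON string in a single-column table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
conn.executemany("INSERT INTO docs VALUES (?)", [(json.dumps(r),) for r in records])

# Query the JSON as if it were SQL, via SQLite's json_extract().
rows = conn.execute(
    """
    SELECT json_extract(body, '$.name') AS name,
           json_extract(body, '$.score') AS score
    FROM docs
    WHERE json_extract(body, '$.role') = 'engineer'
    ORDER BY score DESC
    """
).fetchall()

print(rows)  # [('Ada', 97), ('Linus', 90)]
```

Dedicated tools skip the insert-into-a-table step and query the JSON file directly, but the filtering-with-SQL idea is the same.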
→ A diff visualized as a heatmap
If you add a 0 in front of your GitHub pull request URL, you can see code changes as a heatmap showing hot and cold spots. Check out the demo—it’s open source.
→ Matt Pocock’s new cohort is just launching now
It looks like excellent timing for launching an education platform like AIhero.
→ Finally: An AI Coding Assistant for RStudio
I’ve had some difficulties installing it (the author already warns that it’s highly experimental), but I’d been missing an AI-powered coding companion in RStudio, and it’s finally here. It connects to any LLM, reads your files, executes code, and even persists conversations across sessions.
→ Pydantic AI
If you’re looking for an agent framework, I suggest you take a look at ai.pydantic.dev—this is the absolute best tool I’ve seen.
🔍 Resources for Teaching CS
→ Bring writing into the classroom
Make it slow. Make it deliberate. Make it low-risk and careful. It works. If you want to find out more, listen to this episode of Designed for Learning (Notre Dame).
→ Dr. Julia Stoyanovich on responsible AI
This episode featuring Julia Stoyanovich from NYU explores what it means to learn and teach responsible AI as a civic skill, not just a technical specialty. Whether you’re a teacher designing assignments, a parent wondering whether your child should learn to code or think critically about data, or simply curious about how we collectively learn to shape AI’s role in society rather than just accept it, this conversation is for you.
→ Teaching students how to learn
Excellent episode of The Cult of Pedagogy Podcast on teaching students how to learn. This feels more important than ever: students increasingly don’t seem to understand what learning is or how to build the skills to do it better.
→ Learning from MIT Professors
It’s actually very cool that these classes by Prof. Mengjia Yan on Secure Hardware Design or these classes by Prof. Nickolai Zeldovich on Computer Systems Security are freely available online. It’s not the same experience, but it feels like you’re getting the lesson at MIT. Speaking of MIT, here are some courses and resources for K-12 Teachers that look pretty cool too.
→ ML Systems Book
Great stuff. A holistic guide to engineering effective and scalable AI. If you’re looking for a guide to understanding ML system operations, from data to deployment, this book is for you. You can read it at mlsysbook.ai
🦄 Quick bytes from the industry
→ Netflix’s Engineering Culture
A great example of how fast big tech works and the challenges of scaling projects for billions of users.
This interview with CTO Elizabeth Stone is full of insights. Here are my favorites:
Autonomy with accountability — Teams self-organize, define their own roadmaps, and make local decisions, but they’re fully accountable for outcomes without command-and-control structures.
Hunger for learning as a guiding principle — Curiosity drives everything: they continuously question whether they’re solving the right problems in the right way, and adjust course when needed.
Unusual team ownership — After massive milestones (like the Paul-Tyson event with 65 million concurrent viewers), teams themselves document learnings and improvement plans without being asked.
Talent density as a tool — Having lots of great people isn’t the end goal, but rather the means to achieve exceptionally high-quality work and extraordinary results.
No formal performance reviews — There are no rigid performance cycles: they rely on continuous, honest feedback, annual 360 reviews, and compensation analysis.
Investment at both ends of the talent spectrum — They hire both recent graduates and very senior profiles, combining deep experience with growing talent within the organization.
Every engineer owns resilience — Tools like Chaos Monkey reflect a clear philosophy: every engineer must understand how their systems fail and how to recover them quickly.
Pragmatism with AI — They test many AI tools, but only officially adopt those that demonstrate real impact on quality and results, beyond just reducing costs.
Commitment to the open ecosystem — A significant portion of engineers contribute to open-source projects, reinforcing shared innovation and technical influence in the industry.
→ Inside Microsoft Research
I find Microsoft Research a fascinating place. Thanks to Gustavo Entrala for introducing me to its Corporate VP, Ashley Llorens. Although it is dubbed into Spanish by default, you can change it to English in the audio track feature.
🌎 Computing Education Community
Great opportunity at ETH: Doctoral Positions in AI for Computational Thinking
Job opening: Assistant Professor of Practice (teaching track) at the University of Nebraska Lincoln
Job opening: Tenure-Track Assistant Professor of Computer Science at CUNY Brooklyn College
Call for Papers - WCCCE 2026 at Northeastern University Vancouver
Seeking applicants for “Reimagining AI Faculty Fellows” program June 8-12, 2026 at Olin College (Boston area)
🤔 Thought(s) For You to Ponder…
I have lots of thoughts to share this week:
I highly recommend this newsletter on male formation and character education. Álvaro de Vicente offers practical wisdom for parents and educators on topics ranging from developing virtue and self-mastery to navigating modern challenges like technology and cultural pressures facing boys today.
Fei-Fei Li wants models to have a deeper connection with the physical world through “spatial intelligence”: understanding the real world in order to move beyond text and multimodal generation toward creating or manipulating environments. She wants models to live in, and intervene in, the world.
Interesting: Those with higher AI literacy overestimate their performance when using AI tools. The most “AI-literate” showed greater confidence than those less familiar with AI, even though their results weren’t superior. I’m thinking about companies: if professionals believe they’ve mastered a technology and don’t critically review their outputs, the risks are real. The study says 92% of users don’t verify AI-generated responses... Critical oversight is so important.
Interesting analysis by Antonio from Error 500 on the topic of humans in the loop:
In these times of agentic AI, the discourse around what it means to adopt it seriously has shifted. We’ve moved from the copilot—the intern/assistant we ask to handle small, atomic processing or generation tasks—to what’s now called “the human in the loop”: AI handles a workflow more complex than one-off generation or queries, interacts with multiple systems, and, beyond planning actions, is allowed to execute them. Driven by agentic AI systems and the platforms for building these process chains, the discourse has arrived at “AI agent orchestration”: the human shifts from monitoring the agent’s errors, limitations, and hallucinations to becoming a kind of trainer and coordinator of complex flows. Software systems would move from being the central point of task resolution with human interaction to becoming nodes—more or less important—within networks orchestrated by AI agents, rethought to trigger flows in other systems and receive actions back. And yet we’re nowhere near ready to fully automate complex processes with AI: we must keep the human in the loop because, however good this stochastic, probabilistic technology may be, the risk of error persists and prevails.
I’ve been deeply immersed in the challenge of education and AI’s impact for some time now. In the US, Alpha School is emerging. They don’t have teachers, but rather guides. Their approach is based on reducing traditional subjects (like math or history) to just two hours daily through adaptive software that promises to double students’ learning speed. The rest of the time is dedicated to developing practical skills like teamwork, entrepreneurship, or financial literacy, under the supervision of these guides. These guides use AI to adapt to each student. It raises questions for me, but I’m sharing it as something noteworthy.
Interesting Stanford study on confirmation bias in LLMs:
LLMs often display excessive agreement with users, which can influence their behavior and users’ perception of their actions.
Those who received flattering responses felt more justified in questionable behaviors and less willing to reconcile after an argument.
LLMs rarely fostered empathy or consideration of other viewpoints.
This “digital flattery” has a lasting effect: users rate validating responses more highly, trust the LLM more, and are more inclined to use it as an advisor in the future. This can create a vicious cycle of emotional and cognitive dependency: the comfort of always hearing what you want to hear becomes a retention mechanism as effective as it is disturbing.
What is the company’s responsibility in all this?
As I mentioned in issue 18, Alex Nedelcu also believes that programming languages remain relevant today for three main reasons: 1) the compiler, in languages like Scala, Haskell, or Rust; 2) code must be human-reviewable; and 3) comprehension debt is a critical problem.
I’ve been mentioning a16z a lot lately, but they’re just producing so much content (and valuable content!). I went back to listen to Cursor’s CEO to see if he said anything new about this, and he’s really still saying we’re far from complete automation:
Despite the headlines, despite how much demand there is in this market and how much software has changed for the last few years, it’s so far away from being automated. It’s so inefficient building software in a professional setting, especially with, you know, anywhere from dozens to tens of thousands of people. It’s just, it’s really easy at an executive level to underestimate just how far away we are from the limit of automating software. So I think that there’s a really, really long way to go. There’s a really long, messy middle.
This conversation between Kanjun Qiu and Geoffrey Litt made me think about human-agent interaction this week:
Geoffrey argues that human–AI collaboration is, at heart, a version-control problem: you need safe branching, review, and merging of an “unreliable alien collaborator’s” changes back into your main environment.
Sometimes it’s frustrating how many iterations it takes to achieve what you want: you try something, see it, feel something, change direction. AI can accelerate this by removing annoying bottlenecks and giving quick feedback loops—but it can also flatten the process.
Geoffrey mentions in the episode that sometimes the model generates a complete UI that’s just… bad. And instead of sparking exploration, it kills momentum. Even worse, even good outputs can drag you into “high-probability grooves”—the Schelling points of the distribution—so your work converges to generic patterns. The risk isn’t just bad outputs; it’s that the model pre-bakes the thought process you didn’t have.
What I really liked is the idea of making agent-human collaboration like human-human collaboration (though honestly, doesn’t that sound a bit crazy and futuristic?)
When you’re pair programming with a colleague, they share your screen, understand your stress level, and adjust how much explanation or teaching you need. It’s true that current AI systems lack that contextual sensitivity, so we overload the prompt with context we’re bad at expressing, and still get mismatches. By making visible to the agent what’s visible to the human—like a real “desk” you can both point at—collaboration feels less like issuing tickets to a black box and more like working with a teammate who can see what you see and act on the same surface.
A highly opinionated series of posts in which the author concludes that Python isn’t a great language for data science and is better suited for deep learning. It’s sharp and well worth diving into—it might end up convincing you that R feels better for analysis, or at least faster to work in (development speed, not runtime):
On programming approaches with AI, Jorge Castaño discusses ‘spec-driven’ vs ‘parallel programming’ in his Substack “Vivir del Software”.
The interesting part about “parallel programming” is that you become the manager of your own team of agents.
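To make the “manager of your own team of agents” idea concrete, here’s a toy sketch in Python. The `agent` function is a hypothetical stub—in a real setup it would call an LLM API—the point is only the fan-out-and-review shape of the workflow:

```python
from concurrent.futures import ThreadPoolExecutor

def agent(task: str) -> str:
    """Hypothetical agent stub; in practice this would call an LLM API."""
    return f"done: {task}"

# The "manager" writes the task list...
tasks = ["write tests", "refactor parser", "update docs"]

# ...fans tasks out to agents running in parallel...
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(agent, tasks))

# ...then reviews the results before merging anything.
print(results)  # ['done: write tests', 'done: refactor parser', 'done: update docs']
```

The human’s job shifts from doing each task to decomposing the work, dispatching it, and reviewing what comes back.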
📌 The PhD Student Is In
→ Deta Surf
This week I’m trying out this open-source desktop app that combines a browser, file manager, and AI assistant. I imagine it will have other interesting use cases, and I hope to discover them little by little, but for now I’m using it to automatically generate notes from papers, YouTube videos, and web articles that interest me. I like the split-screen style of the Arc browser, and the interface is good-looking.
→ Help your advisor help you
I enjoyed this Medium post from Niklas on how to handle academic meetings. It helped me prepare for the meeting I had with my two advisors last Monday.
→ Cloud Computing Project Wrap-Up
We wrapped up our final project presentation for our Cloud Computing Course earlier this week. Now, all that’s left is the final report. Feels good. A massive shoutout to Nebil who supported me throughout this project.
→ Volunteering and Reviewing
This week I applied to be a Student Volunteer at SIGCSE TS 2026. I have been involved in the Computing Education community since my first conference at Koli in 2024. I recently started my PhD in this area, and I think this conference is a great opportunity to spend a few days with researchers and faculty who share my enthusiasm for CS education. It’s also good timing as I’m currently exploring my PhD focus, and I hope to write a newsletter edition on my favorite papers from the conference.
This week I’m also starting to collaborate as a reviewer for TOCE (ACM Transactions on Computing Education). I hope to find time to review a paper before Thanksgiving break.
→ Papiers.ai for Paper Analysis
I discovered this cool tool this week. The coolest feature for me is having a Mind Map for a paper. There’s also DeepWiki-style content written in both technical and accessible forms, as well as a Parent and Child Papers feature. Thanks to Saeejith for creating it.
Btw, don’t publish many papers? Don’t worry at all—according to Claus Wilke, it’s not essential for your career and growth as a scientist:
🪁 Leisure Line
I got the new Spain jersey for the 2026 World Cup. We’ve been through some depressing times since that golden squad of 2010, but now we’re back among the best. It’s not me who says this—FIFA’s rankings say it. High hopes for La Roja!
📖📺🍿 Currently Reading, Watching, Listening
This week I want to highlight a shorter book I love and have been reading lately: The Little Prince by Saint-Exupéry, in the Harcourt edition. It’s a beautiful story in which the essential lives in what is small and invisible. It opened my heart to the wonder of everyday life.
The Little Prince learned to love his rose with the delicacy of a petal. True greatness is cultivated with tenderness: caring for friends, a mother caring for her child... It’s love practiced in everyday gestures.
For the Little Prince, a simple star could be enough to show the way back to his little planet. It was barely a tiny spark, but infinite in the way it illuminated the darkness. True light doesn’t need to be gigantic. That light, though faint, is enough to guide lost travelers, to warm cold hearts.
The Little Prince taught that a child’s vision is the only one capable of grasping the true essence of things. By looking at reality through a child’s eyes, every detail becomes revealing.
Maybe I’ll write a full-length post on it sometime.
Finding comfort in the words of this song:
I went to this Vietnamese restaurant after a confirmation. Good traditional dishes. They also have karaoke!
Few fast food restaurants can compete with Chick-fil-A:
And a big thank you to Binoy Ravindran from VT. His talk last Monday was great!
💬 Quotable
Cultivate the habit of being grateful for every good thing that comes to you, and to give thanks continuously. And because all things have contributed to your advancement, you should include all things in your gratitude.
― Ralph Waldo Emerson
That's all for this week. Thank you for your time. I value your feedback, as well as your suggestions for future editions. I look forward to hearing from you in the comments.