#20 — What the heck am I even doing? Teach good taste
That's going to be what differentiates CS students
Taste as the new moat
We say someone has taste when they are capable of discerning options that are more tasteful than others. All other things being equal, people prefer more tasteful options to less tasteful options. —Brian Kihoon Lee
Taste is becoming the moat for CS students. It’s clearly a differentiator in an AI world where anyone can generate code—design sense and judgment will matter most.
Why taste? Because yeah, anybody can create a WYSIWYG or whatever, but can you make it actually good? That’s where taste becomes the differentiating factor. And I still think there’s going to be a lot of value in deep expertise, because people assume that just because they can vibe-code a web app, they can sling it over the fence, deploy it, and they’re done. But the reality isn’t quite that simple. How many of these apps are going to be properly secured? How many are going to have critical security vulnerabilities that exfiltrate customer data because somebody vibe-coded their backend? There are a lot of risks here—you still need taste and deep expertise. I don’t think those things are going away anytime soon.
Learnings from a Spanish Pedagogue: The Human Element
One of the things my research area (Computing Education) allows me to do is learn from many other fields, like pedagogy. Today I want to share my notes from this Spanish podcast featuring a prominent Spanish pedagogue who has a lot to say about learning and teaching, as well as related topics like attention, screens, human-computer interaction, culture, reading, writing, and effort.
Balancing Emotions:
Knowing how to enjoy when it’s time to enjoy. Knowing how to suffer when it’s time to suffer. Knowing how to be calm when the time comes. You acquire this through your own experience.
On Parents’ Experience:
If your parents consider that your experience can be skipped because they already went through it, then what they are doing is: first, undervaluing you; and second, limiting your world. Every limit placed on a person’s world encloses them in a kind of prison.
Perfectionism Trap:
There’s no way to be perfect 24 hours a day, but if there were… Couldn’t we then teach our children to deal with situations that are imperfect?
The Technological Lens:
What is the technological dimension? Our culture is increasingly seen through a technological lens, and that seems risky to me. Why? Because technological culture has taught us that if there’s a problem with a machine, everything reduces to adjusting it. That, let’s say, is what technology offers us. But life is NOT a machine. We never finish understanding ourselves.
The Human Element:
There’s a part of us that escapes us. Everything we carry inside cannot be reduced to an algorithm. And that’s immense luck because we’re always giving ourselves surprises we didn’t know about.
Family:
In such a competitive, technological world where everything goes so fast, there’s a sphere where you’ll be loved unconditionally just for being who you are. That seems to me like extraordinary fortune.
The Reading-Language Loop:
The more texts you read, the more you’ll enrich your language and knowledge. The poorer your language, the more difficulty you’ll have reading texts. And the more difficulty you have, the more you’ll end up hating reading. In short, the more linguistic mastery you have, the more easily you’ll read texts. And the more knowledge you have, the better you’ll understand and interpret the world.
The ONLY effective antidote to screens:
I only know one effective antidote against screens: friends
Eliminating Effort?:
What would be the meaning of being human if everything that is potential in you is handed over to a machine? Effort consists of elevating yourself above yourself.
Vocabulary Development:
Moving from experiential vocabulary to conceptual vocabulary is, ultimately, the fundamental role of school. The main constitutive element of school is the creation of a common culture. NOT psychologizing the school.
The Art of Learning:
Learning requires integrating the new into what’s already known
Attention:
Yes, children have the capacity for attention; it shows up in daily life. But it seems it’s not worth the effort at school. It can be applied at university.
Writing:
Reading and writing are distinct neural activities. When you write, you have to choose the most adequate words for the text. Writing allows you to question yourself. Writing is not just transmitting ideas. Writing is one of the most faithful exercises of self-knowledge and critical thinking.
🔍 Resources for Learning CS
→ The Strange Math That Predicts (Almost) Anything
This week in AI class, the professor talked about Markov chains. I loved this video that shows the history of Markov chains, one of the most powerful tools in statistics with interesting applications in computer science.
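If you want to poke at the idea yourself before (or after) watching, here is a minimal sketch of my own (not from the video, and with made-up transition probabilities) of a two-state weather Markov chain in Python. The long-run visit frequencies approximate the chain’s stationary distribution, which is the kind of “predicting (almost) anything” behavior the video builds toward.

```python
import random

# Toy two-state weather Markov chain: P(next state | current state).
# These transition probabilities are invented purely for illustration.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state: str) -> str:
    """Sample the next state given the current one."""
    probs = TRANSITIONS[state]
    return random.choices(list(probs), weights=list(probs.values()))[0]

def simulate(start: str, n: int) -> dict:
    """Run the chain for n steps and count visits to each state."""
    counts = {s: 0 for s in TRANSITIONS}
    state = start
    for _ in range(n):
        state = step(state)
        counts[state] += 1
    return counts

if __name__ == "__main__":
    counts = simulate("sunny", 100_000)
    total = sum(counts.values())
    # The long-run fractions approximate the stationary distribution
    # (about 2/3 sunny and 1/3 rainy for these particular probabilities).
    print({s: round(c / total, 3) for s, c in counts.items()})
```

The key property is that the next state depends only on the current state, which is what makes the long-run behavior so predictable even though each individual step is random.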
→ Beautiful Tables
gt-extras is a collection of helper functions for creating beautiful tables with the great-tables package in Python. I’m using it for my current learning analytics/educational data mining project.
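If you haven’t tried great-tables yet, the basic workflow looks something like the sketch below. The sample data is made up, and the commented-out gt_theme_538 helper is my assumption about a gt-extras-style theme function; check the gt-extras docs for the exact helper names.

```python
import pandas as pd
from great_tables import GT

# Tiny made-up gradebook, just for illustration.
df = pd.DataFrame({
    "student": ["A", "B", "C"],
    "quiz_avg": [0.82, 0.91, 0.74],
    "project": [88, 95, 79],
})

# Core great-tables workflow: wrap a DataFrame in GT, then chain formatters.
tbl = (
    GT(df)
    .tab_header(title="Course Snapshot", subtitle="Toy data for illustration")
    .fmt_percent(columns="quiz_avg", decimals=0)
)

# gt-extras helpers take a GT object and return a styled GT object, e.g. a
# ready-made theme (helper name assumed here; see the gt-extras docs):
# from gt_extras import gt_theme_538
# tbl = gt_theme_538(tbl)

tbl  # in a notebook, the GT object renders itself as an HTML table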
→ Helpful list of tools to create diagrams from text
From sequence charts to ERDs, this comprehensive list of text-to-diagram tools is a goldmine for visualizing systems and workflows. And since it’s all text-based, it plays well with LLMs for automated docs and fast prototyping.
→ Well I guess it’s that time of the year again
The State of AI Report 2025 is out. It’s packed with resources about the current state of AI. It distills what you need to know about AI research, commercial applications, politics, safety, and predictions for the coming year. It’s dense but the format makes it easy to follow. Stanford also released the 2025 edition of their AI Index Report. You can find it here. And Matt Turck’s MAD (ML, AI & Data) Landscape also captures the pulse of data and AI.
→ Any Distance Goes Open Source
I’m not going to lie: I’m not a runner at all (I’m more into ball sports), but I love that Daniel Kuntz has made the Any Distance code open source. Being able to see the code behind it is so inspiring.
Two good bytes from the industry
→ What I Learned from Ilya Grigorik
This podcast is becoming a regular in this section. This week I was listening closely to this conversation between Ryan Peterman and Ilya Grigorik from Shopify.
I really enjoyed it and took some notes that I want to share with you:
Combining theory with the hands-on and applied part is key (referring to the University of Waterloo’s co-op program):
You actually get exposure to engineer things, not learn the hypothetical and academic of like, I know how a big O notation expresses this particular algorithm. It’s like, I actually had to build the thing, right? Like, and turns out this didn’t work or that particular solution worked or we had to solve it under these constraints. And the whole big O thing was an issue because I was like sorting 10 items in an array. Who cares? Right. It’s like pragmatic engineering. So I think that is key. That is still a superpower of Waterloo. And I wish more universities, like it’s an open secret. And I wish more people would copy it because it’s such a lever for success for people that go through that program.
This is a clear retention/motivation factor for software engineers:
Google is an open culture which is the thing that I loved about Google and I could approach anyone and everyone about any piece of infrastructure, any product. And I leveraged that. And I learned a lot.
Sometimes a professional career needs these shifts:
I wanted to shift gears and go into a mode where I have more self-direction and go pursue some interesting research or projects where I can apply my own skillset in a unique way.
Mission of the principal engineer:
My job is to up-level the team and make sure that they can deliver this thing self-sufficiently.
Now, occasionally I stumble into a problem where I’m like, either I’m like, just it’s in my bones and I feel like I need to be like, I want to own that problem and see it to a logical conclusion. Or I have some unique position or leverage in it where like, yes, I’m the right person to take on the broader responsibility and kind of execution of the team to see it through completion. And it’s a judgment call for which, you know, when you pull that off and when you say, okay, this is going to be my tour of duty.
What does “dynamic range” mean?
It is proof of the dynamic range of the problems you can solve. Dynamic range goes in a few directions. One is technical—have you shown a repeatable record of operating at all layers of the stack? If you’re engaged with the team, can you go down to the bare metal, understand the constraints, and reconstruct the problem?
At the same time, can you operate with the business requirements, with your VP and product counterparts, to understand what’s in their head, how it manifests in code, and translate that into actionable change to deliver the right product? That’s the technical aspect.
Then there’s the execution model in the middle, where you have to flex. Some projects are at the frontier of knowledge, with fog of war—you don’t know what you don’t know. You have to work in startup mode: fire, see where it lands, aim, and fire again rapidly to figure out where you are and what to build.
Other times you’re parachuted into a slower-moving layer of the company: “We have this platform API problem, and it’s critical to think through the design because it’ll have long-term repercussions for our partner ecosystem.” That requires slow, deliberate thinking to understand how changes at one layer affect the rest. It’s more academic, but you still need to converge that divergent thinking into a clear recommendation.
Ultimately, it’s about demonstrating your ability to execute across all domains. For a distinguished engineer, the expectation is similar to a VP. A VP’s job is to solve every problem that lands on their plate, and by definition, only the hard ones remain. So show the volume and versatility of your toolkit that gives confidence you can be parachuted into any oddly shaped problem and navigate it successfully.
Don’t be the best, be the only:
One thing that I share often with folks that I coach and work with is I have this belief that a better way, a more resilient way to build your career is to optimize for being the only person as opposed to the best person. “Be the only” to me is about building a talent stack or portfolio skills that you can juggle and be effective. So oftentimes when I find myself in conversations at Shopify or elsewhere, I’m the translator between all of these different parts of the company and organs of the body, where I’m able to synthesize that information and bring it together. And that gives me a lot of versatility, right? A very important part of my skill set is to figure out who are the most knowledgeable people and figure out how to leverage them in the most effective way.
You know enough about databases, you know enough about graphics, you know enough about all these other components—who is the person that is the only person that has sufficient domain expertise and connective tissue to solve this problem? Right? So being “the only” is about having a wide range of tools that you can reach into and say, hey, I can combine them in any particular way, which is what makes you versatile and makes you effective in these kind of ambiguous situations where you can, based on the situation, adapt how you work.
The principal engineer doesn’t have all the answers:
What I’ve learned is a lot of pattern matching. I’ve learned how to find answers. I know where to find answers. And I know how to navigate myself out of tricky situations. But it doesn’t mean that I know every API and every quirk.
Keep putting stuff out there because don’t let that expectation become a gate and filter for the work that you put up because that feedback that you get and exposure is like that—that is actually the quintessential ingredient that you need for growth.
The way you ask questions becomes very important because it’s very easy to come across as trying to question all the things as opposed to being curious.
We need to rethink how we think about education: he recommends the books by Sir Ken Robinson, Creative Schools and Finding Your Element.
→ Addy Osmani on vibe coding versus AI-assisted engineering and LLMs as a tool for learning
Addy Osmani has worked on the Chrome team for 13 years, and if you have ever opened the developer tools in Chrome, you’ve definitely used things he built. He is also a prolific author; his latest book, Beyond Vibe Coding, is aimed at professional software engineers. In this conversation with Gergely Orosz on The Pragmatic Engineer Podcast, they get into vibe coding versus AI-assisted engineering: why vibe coding isn’t useful for much more than quick-and-dirty prototyping, the importance of understanding what the model does, why Addy always reads through the model’s thinking log to make sure he fully understands what it did and why before approving changes, new development workflows with AI, and how things like spec development, asynchronous background coding agents, and parallel coding with several agents are still new and unexplored areas of software engineering. If you’re a software engineer who wants to work better with AI coding tools day to day and build reliable software with them, this episode is for you. Here are some quotes from the episode.
If you want to read more of what Addy writes, I highly recommend his Substack, where he writes a lot about learnings from AI-assisted development:
Knowing how to use AI and being intentional
I’ve been listening to some episodes of this Notre Dame podcast. It’s an interesting educational podcast to dive into. See all episodes here.
I particularly liked the last episode, which is about teaching students when NOT to use AI:
We still need to have confidence in our ability to do things, always questioning AI responses and their authority. Derek proposes the rubber duck effect: articulating our problems out loud often helps us find the solution ourselves.
Helping students approach these tools with some skepticism: part of our job as educators is to help our students understand that these tools are designed in particular ways to do particular things and that’s not always helpful because students are often surprised when they learn how these tools actually work and how they generate text and how they’ve been designed.
Derek Bruff on finding the right process: The skill is keeping track of inputs and circling back in thoughtful ways when I need to do brainstorming. Knowing how they themselves brainstorm, what tools they like to use, what approaches work best for them and when to use different approaches for different tasks. That’s what the students need to learn. And AI can play a role in that but there’s a lot going on in terms of metacognition (how we learn, operate on different tasks) and also a fair amount of self regulation to say I am not going to go straight to AI and maybe I will check AI later after I’ve come up with 20 good ideas and see if it can spark something different but there’s some self regulation involved in knowing to kind of not take the easy way and to kind of take the harder way because you are going to end up in a more useful place at the end of it.
Knowing how to use the tool: there is a value to students of encountering course material in both digital and analog ways. This is called the principle of variety, a core principle of teaching, giving people different and varied approaches or exposures to materials, knowledge, skill development. One of the reasons to not use AI is because we want to have very varied approaches to developing a skill. Students can find their way to the one that works best for them in the future. If you want to make good decisions, you have to understand when to use each tool, when it’s useful and when it’s not. Is this a skill you want to get better at or not? Be thoughtful about how to use the tools: washing machines, GPS and calculators are good examples.
We need to be intentional about AI use because the skills and experiences at play feel more core to who we are as humans or as a person: thinking or writing, for example.
Develop your own understanding of AI: instructors should experiment with AI as much as possible, look at what it produces, take the useful things, and reflect on what they want to accomplish in the lesson. The process itself helps you clarify your own goals and approach.
AI in Higher Education with Michael Littman
Great episode of the Oxide and Friends podcast by Adam and Bryan with Brown computer science prof Michael Littman, who also currently serves as Brown University’s inaugural Associate Provost for Artificial Intelligence. If you want to go straight to the topic of AI implications in higher education, jump directly to minute 18 onward.
After listening to it carefully, here are my main takeaways (keep in mind that some notes are exact quotes from the episode, others are modified quotes to introduce some nuance, and others are summarized comments/analysis of what they said):
This feels like it’s really changing pedagogy. It feels like this is a very broad based change.
It is definitely the case that it is extraordinarily disruptive. Things that we had that were working don’t work anymore and we don’t have great ideas about how to replace them. So this is kind of a mess. It’s impacting professors all across the university.
I think in the beginning of the post-ChatGPT world, it was very much like we need to catch people. Then it evolved into, at the moment, I think really the main thing that people are concerned about is this is making people not learn. It’s just making that interaction so effortless and the creation so effortless that there’s no gear enmeshing. There’s no friction at all. And so the brain doesn’t engage. And so the students are not learning. And that’s way scarier. I think it pisses professors off. They’re like, ‘Oh wait, we can’t do things the way we used to do things’. But it’s an existential threat to think, ‘Oh, this technology is now blocking people from learning‘. That really is my job as a professor: to try to help people learn. And to the extent that I can’t do that anymore, well, what the heck am I even doing? And I think it’s really evolved. The fear has evolved into that form now. Like what the heck are we doing to these people’s brains?
These things are actually extremely powerful and you can put them to good honest use that actually helps learning. It could actually help you learn better. The thing is that it doesn’t come with the button that you can push that says, now it’s gonna be good for learning; now it’s gonna be bad for learning. There’s this fractal boundary between those two things that we haven’t really navigated yet. And so I think that the concern is more, here are these college students, maybe their frontal lobes are not fully developed yet. And what they’re given is the option of like, here’s a thing that can help you learn, but it can also make your life really, really easy. And that’s so, so, so, so tempting.
The hope is that it doesn’t block things and also leave you in a position where it’s like too late to start learning things.
Have conversations with your class about this. Like get it out of the shadows because to the extent that this is a thing that students like to sneak off and do in their dorm room, that’ll never turn into anything positive. But if we can just shine the sunlight on it and have these conversations and say, Hey, did you notice when you did it this way? Did it actually, like, did you feel smarter afterward or did you feel dumber? And so I’ve been doing that in my classes. This is maybe not a representative class, but right now this semester, I’m teaching my first class since ChatGPT came out, and so this was very new to me and I’ve been trying to be very open with the students. It’s a 2000-level course. So it’s got PhD students and master’s students and undergrads as well, but it’s a pretty advanced undergrad. So I’m not talking to freshmen yet. I don’t know how those conversations go, but at least with my highly mixed advanced class, they’re really, really thoughtful about what it is that they’re experiencing and how they’re engaging with it. And sometimes they’ll say, look, I needed to get through this assignment. And I wasn’t like, I don’t think I’m really blocking my learning on this because I really didn’t need to know this anyway. But this other thing that I really think I do need to know, I’ve been buckling down, I’ve been avoiding using the chatbot where I use the chatbot in ways to kind of support my understanding, not to supplant it. And so at least what I’m getting from the students—the ones who show up to class, all of them—is this really interesting analysis where we’re working on this together. None of us know the answer to this. I don’t view what they’re doing as being immoral. I think they’re facing some real challenges and some choices. And to the extent that they’ve got good answers, I want the other students in the class to know them. And so that’s been really productive so far, but again, it’s early days. So I can’t really say what the long term impact has been.
Most faculty feel like their role is to educate in the sense of helping people learn to be better thinkers. I want to help guide you to the point where you can engage intellectually with things you’ve never seen before. I think that’s what most of my colleagues believe their job is. This intellectual world is so rich and so interesting. Let me help you learn how to navigate this world because you’re gonna be faced with all kinds of ideas that I can’t even tell you about yet because I don’t know about them yet, and you should be ready. It could really fundamentally change the structure of what these universities are trying to do and how they do it.
Don’t depend on it. Like don’t risk anything. Don’t risk your life. Don’t risk your education because you can’t really trust these systems to consistently deliver what it is that they promise.
Michael Littman shares that he wrote a book titled “Code to Joy: Why Everyone Should Learn a Little Programming.” Although many warned him that writing was a tough experience, he enjoyed it immensely. He eventually hired a developmental editor, who not only improved the book but also helped him become a better writer through their feedback. Littman compares this to using chatbots: while using them to write can make people worse writers, using AI for editing could have the opposite effect—improving human skills. “A lot of times people use these chatbot tools to write for them, which causes them to become less good writers”.
In Brown’s Anthropology department, a professor recounted that a student would ask a chatbot to write their entire assignment and then rewrite it sentence by sentence to “make it their own.” The result was strange: the text maintained the typical syntactic structure of AI, but with human errors and odd lexical choices. The professor noticed it seemed written by a chatbot, but with human words. Bryan Cantrill and Adam Leventhal reflect that this method probably doesn’t teach the student much, but perhaps something is learned by reviewing and reading well-written texts aloud, even if indirectly.
Littman comments that he’s been visiting different departments at Brown and has observed very different reactions to AI: A history professor deeply skeptical about artificial intelligence, who ended up talking about capitalism and Marxism and humanity’s uncertain future. In contrast, Brown’s Chief Investment Officer adopts a completely pragmatic and optimistic view: she studies AI’s value chain to identify investment opportunities and quantifies the time savings its use provides.
My job is to support the campus in, as I like to say, navigating the opportunities and the risks that have suddenly been thrust upon us. I don’t want people to feel like this is something that I’m doing to them, because I’m really not; this was not my thing. How is this impacting you?
Michael Littman explains that one of Brown University’s largest introductory computer science courses—known as CS0150, a long-standing class that now teaches Java but once used Pascal—was immediately affected when ChatGPT appeared. Students quickly realized they could simply prompt the chatbot to generate code for their assignments. Because LLMs are so well suited to turning homework prompts into working solutions, this created an obvious challenge for instructors. To respond, the course staff decided not to ban the technology outright but to integrate a controlled “chatbot TA.” The class is massive, often enrolling around 400 students and supported by roughly 40 human teaching assistants, leading to long wait times for help. The AI assistant was designed to relieve some of that pressure while being carefully prompted not to reveal complete answers. This mirrors an existing problem with human TAs, who can also be tempted to “just show the solution” when students ask cleverly. The instructors now tell students to use only the official course chatbot and not outside systems, since that would cross into clear cheating. Although Littman isn’t sure how strictly this rule can be enforced, he notes that the experiment has been valuable so far. Students appreciate both the extra help and the opportunity to understand the technology itself—learning how to work with AI responsibly while still developing their own programming skills.
Bryan Cantrill observes that chatbots might solve one long-standing issue in those large classes: limited TA availability and long help lines. With a chatbot, students can ask questions anytime — even ones they might consider “dumb” — without fear of judgment. He points out that this psychological safety can have real pedagogical value, since students often hesitate to ask simple but important questions in front of others. Littman agrees, noting humorously that even he sometimes switches to a different chatbot mid-conversation so as not to “embarrass himself” by asking a naive question. They joke about the over-polite tone of current AI systems and the idea of programming them to “stop sucking up” and act more robotic.
Littman meets with faculty to understand their experiences, while the university’s Sheridan Center for Teaching and Learning helps instructors redesign courses to make them more “chatbot-aware.” He emphasizes that playing with AI tools is essential — not every professor needs to become a power user, but it’s impossible to understand the opportunities and risks without firsthand experience.
Brown has officially approved Google’s Gemini for use with sensitive data, after confirming it met privacy standards, and Littman now encourages faculty to experiment with it safely. When Littman tested Gemini and ChatGPT on one of his own homework assignments, both initially failed — until he uploaded the course notes. Then the systems not only solved the problems correctly but offered insightful, even original comments he hadn’t considered. It was, he admits, both astonishing and unsettling. Cantrill compares this to other emerging AI tools like NotebookLM, which can generate podcast-style summaries of research papers. They discuss how this capability — to produce articulate, NPR-like conversations about technical topics — can be both impressive and strange. Littman finds the output remarkably fluent but too uncritical: the AI mimics the author’s tone rather than evaluating their claims. He notes that he’s a “mean paper reader,” someone who wants to find the weaknesses in an argument, and wishes AI systems could adopt that critical stance when asked.
The group discusses how LLMs might even assist in peer review, helping researchers interrogate dense academic papers. Cantrill argues that everyone on a review committee should use LLMs as analytical tools rather than treating them as separate “robot reviewers.” Littman notes that while some conferences explicitly prohibit reviewers from using AI, others are experimenting with dedicated “LLM reviewers.”
They also touch on a potential ethical asymmetry in education: teachers banning students from using AI while secretly using it themselves for grading. Littman says that, so far, he hasn’t seen faculty do that, but agrees that it’s an issue of perception — it “smells wrong” even if it’s not inherently unethical, since teaching and learning are fundamentally different roles.
The conversation closes with a reflection on Brown University’s distinctive open curriculum, introduced in 1969, which gives students exceptional freedom over their education. Littman leads a new committee, “GADL” (Generative AI in Teaching and Learning), tasked with producing a report for the provost about how AI should be integrated into this philosophy. He insists that Brown’s response to generative AI must align with its tradition of trusting students to take ownership of their learning. Cantrill praises this approach, seeing it as a chance for Brown to lead again — to show what it means to “trust 18-year-olds with their own education” in an era of profound technological change.
Both reflect on generational responsibility. Littman compares the current AI upheaval to the earlier disruption of social media, noting that his generation (“Xers”) may have created the problem, but it’s the younger generation who will ultimately have to solve it. They end on a warm and humorous note — acknowledging the scale of change higher education now faces, and expressing gratitude for the opportunity to navigate it together. Littman concludes that, despite the workload, he feels deeply fulfilled by helping the university confront these challenges at such a transformative moment.
🔍 Resources for Teaching in CS Education
→ AI Research Tools
This resource was shared during last Friday’s ‘I Am and Becoming’ lab session. It’s a basic list of current AI tools that may be useful for academics and researchers.
→ CMU’s Latest Database Seminar Series
The latest Seminar Series from the CMU Database Group explores the future of database systems, and includes talks from a wide variety of industry experts including Jordan Tigani (MotherDuck), Ian Cook (Columnar), Vinoth Chandar (Apache Hudi), Will Manning (Spiral), and more. All talks are live on Zoom and recordings will also be available. The series is free and open to all.
→ The best resource for casually skimming broad areas of math I’ve ever seen
Came across this absolutely amazing project by mathematician Evan Chen. A nearly 1,000-page manual, educational and constantly updated, covering the main concepts of mathematics. Evan Chen’s “Infinitely Large Napkin” bridges the gap between undergrad math and graduate-level topics with a refreshingly practical approach. This free, open-source resource gives you a bird’s-eye view of advanced mathematics without drowning in formal proofs.
→ Peer Instruction practices
Kristin Stephens-Martinez from Duke recently served on a panel about teaching practices, where she shared how she uses peer instruction. She also put together a primer with all the nitty-gritty details on how she makes her forms, the settings she uses, and the process. I hope it’s helpful to you!
→ Teaching Accessible Computing
A great book for faculty to quickly learn about accessibility, its relationship to CS, and practical ways to integrate accessibility and disability topics into your CS courses throughout the curriculum.
→ Database Internals Made Visual
Huge thanks to Nanda for engineering this great web app! It’s clean and interactive. The latest guide is a solid introduction to database internals that shows how key-value stores handle persistence, indexing, and compaction based on Martin Kleppmann’s “Designing Data-Intensive Applications.” Mad respect!
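To make the core idea concrete, here is a toy sketch of my own (not Nanda’s code, and far simpler than any real storage engine): persistence comes from an append-only log file, indexing from an in-memory hash map of key to byte offset, and compaction from a naive rewrite that keeps only the newest record per key.

```python
import os

class TinyLogKV:
    """Toy append-only key-value store: a log file plus an in-memory offset index."""

    def __init__(self, path: str):
        self.path = path
        self.index = {}               # key -> byte offset of its latest record
        open(self.path, "ab").close() # make sure the log file exists
        self._rebuild_index()

    def _rebuild_index(self):
        # Scan the whole log once on startup; later records win over earlier ones.
        with open(self.path, "rb") as f:
            offset = 0
            for line in f:
                key = line.rstrip(b"\n").split(b"\t", 1)[0].decode()
                self.index[key] = offset
                offset += len(line)

    def set(self, key: str, value: str):
        # Persistence: always append a new record, never modify the file in place.
        record = f"{key}\t{value}\n".encode()
        with open(self.path, "ab") as f:
            offset = f.tell()
            f.write(record)
        self.index[key] = offset

    def get(self, key: str):
        # Indexing: jump straight to the stored offset instead of scanning the log.
        if key not in self.index:
            return None
        with open(self.path, "rb") as f:
            f.seek(self.index[key])
            line = f.readline()
        return line.rstrip(b"\n").split(b"\t", 1)[1].decode()

    def compact(self):
        # Compaction: rewrite the log, keeping only the newest value for each key.
        tmp = self.path + ".tmp"
        new_index = {}
        with open(tmp, "wb") as f:
            for key in list(self.index):
                record = f"{key}\t{self.get(key)}\n".encode()
                new_index[key] = f.tell()
                f.write(record)
        os.replace(tmp, self.path)
        self.index = new_index

if __name__ == "__main__":
    db = TinyLogKV("toy.log")
    db.set("course", "CS0150")
    db.set("course", "Software Engineering")  # supersedes the earlier record
    print(db.get("course"))                   # -> Software Engineering
    db.compact()                              # the stale record is dropped from the log
```

Real engines layer on sorted segments, write-ahead durability guarantees, and background compaction, but this is the skeleton the guide visualizes.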
🌎 Computing Education Community Highlights
The SIGCSE Technical Symposium (18–21 February in St. Louis, Missouri) is looking for session chairs for the paper sessions. If you are able to attend and would like to volunteer, please fill out this form. Please note that you do not have to be an accepted author to be a session chair.
Job postings: Two Assistant Professor openings in Cyber Security and one in Computer Science at BSU in Massachusetts. A tenure-track Assistant or Associate Professor position is open at James Madison University. Westminster College is hiring a tenure-track Assistant Professor of Computer Science to begin August 2026. UMN Twin Cities is also hiring for a full-time (non-tenure-track) teaching position. Kaleidoscope Circles is hiring a director for a new circle on artificial intelligence, including AI safety. And finally, Georgia Tech is hiring a Chair of the School of Computing Instruction for the Atlanta campus.
alphaXiv is hiring ML Researchers and LLM Inference Engineers to build the future of AI research tooling. They’re a small team of researchers and engineers working on training and deploying state-of-the-art models. If this excites you, they’d love to hear from you. Email hiring@alphaxiv.org with your resume and a project you’re proud of.
🤔 Thoughts For You to Ponder…
Geoffrey Litt is coding like a surgeon:
Personally, I’m trying to code like a surgeon. A surgeon isn’t a manager, they do the actual work! But their skills and time are highly leveraged with a support team that handles prep, secondary tasks, admin. The surgeon focuses on the important stuff they are uniquely good at.
I’m not sure if the metaphor totally works (I have no idea about medicine), but what he says makes sense when it comes to collaboration.
It seems that the idea of using LLMs to generate fake human survey responses instead of recruiting real people is gaining traction, but this paper provides a reality check: small tweaks to your model setup can completely flip your results.
With AI, new forms or dimensions of trouble are emerging, aren’t they? I like to think about these epistemic challenges. For example, students now have to decide when to trust AI explanations, when to question them, and how to integrate the generated code into their own thinking. There’s a fine line between dependence and understanding. So maybe what’s shifting isn’t so much the threshold concepts, but the context.
I prefer the term ‘AI-Assisted Engineering’ over Simon Willison’s ‘Vibe Engineering,’ but this article gives you something to think about for a while. It also mentions my three favorite tools for AI-assisted programming, which I’ve been using a lot lately in my projects. These are, in this order: Claude Code, Gemini CLI, and OpenAI’s Codex. Nowadays, my essential stack is: VS Code connected via SSH to our local server with Claude Code installed. That’s the baseline, and then I move between tools depending on my needs. I completely agree with this sentiment:
The rise of coding agents—tools like Claude Code (released February 2025), OpenAI’s Codex CLI (April) and Gemini CLI (June) that can iterate on code, actively testing and modifying it until it achieves a specified goal—has dramatically increased the usefulness of LLMs for real-world coding problems. I’m increasingly hearing from experienced, credible software engineers who are running multiple copies of agents at once, tackling several problems in parallel and expanding the scope of what they can take on. I was skeptical of this at first but I’ve started running multiple agents myself now and it’s surprisingly effective, if mentally exhausting!
📌 The PhD Student Is In
Learned a lot in last week’s I AM and Becoming Lab session with Kali H. Trzesniewski from UC Davis on how she is using AI to conduct research more effectively and her perspective on some of the current debates about the use of AI in education and research. Thanks to Allison Master for organizing it!
Are TAs still adding value for CS students?
The TA chatbot part I mentioned above got me thinking. Let’s tell the truth—it’s harsh, but we have to face it: students rarely use us TAs anymore to actually learn, only to handle bureaucratic issues like questions about their assignment grades, problems with a course resource, or IT stuff. I don’t feel bad about it; it’s part of my job, but it’s just a fact. Only one student this semester in my Software Engineering class seriously came to ask me how he could improve and what path I’d recommend—that was cool. This does not mean that we are not necessary for the professor. I believe we take a lot of work off their hands in terms of grading, tracking attendance, and answering student questions, but I am not sure to what extent we still generate value for students. That is my point.
🪁 Leisure Line
What an evening the Green Jay football team had yesterday at Strake Jesuit! They faced the Woodlands Christian Academy team, which hadn’t lost a single game all season. The Green Jays won the contest 36-0.
📖📺🍿 Currently Reading, Watching, Listening
Great content in Casey’s latest video about Sora, OpenAI’s AI video generation tool and social network:
Been in love with Dan Mangan’s new album. I’m gonna spend the rest of my 2025 with track 1:
That's all for this week. Thank you for your time. I value your feedback, as well as your suggestions for future editions. I look forward to hearing from you in the comments.
Quick Links 🔗
🎧 Listen to Computing Education Things Podcast
📖 Read my article on Vibe Coding Among CS Students
💌 Let's talk: I'm on LinkedIn or Instagram