The Missing Semester of Your CS Education – Revised for 2026
We returned to MIT last month to teach a revised version of Missing Semester, six years after the original debut (which has been extensively discussed on HN, in https://news.ycombinator.com/item?id=22226380 and https://news.ycombinator.com/item?id=34934216). We’ve updated the course based on our personal experiences as well as major changes in the field (e.g., the proliferation of AI-powered developer tools) over the past several years. The 2026 course includes revised versions of four lectures from the previous course, and it adds five entirely new lectures:

- Development Environment and Tools
- Packaging and Shipping Code
- Agentic Coding
- Beyond the Code (soft skills)
- Code Quality

We’d love to hear any feedback from the HN community to improve the current or future iterations of the course. In particular, we’re curious to hear the community’s take on our inclusion of AI-related topics (e.g., dedicating an entire class to the topic of agentic coding; though we tried to counterbalance it with plenty of disclaimers, and a dedicated section on AI etiquette in Beyond the Code).

--Anish, Jon, and Jose
353 points by anishathalye - 106 comments
git bisect/blame/revert/rebase/… become so much less useful when version control is treated as a chore and an afterthought, and basically amounts to: “Feature is done, my work is complete, just do `git commit -am "changes"` and be done with it.” And don’t get me started on commit messages.
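To make the contrast concrete, here is a minimal sketch of the alternative: splitting work into small, self-contained commits instead of one `git commit -am "changes"`. The file names and commit messages are made up for illustration.

```shell
# Sketch: one logical change per commit, each with a message explaining why.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "init"

echo 'parse()'  > parser.txt
echo 'render()' > renderer.txt

# Instead of `git commit -am "changes"`, stage and commit each change alone:
git add parser.txt
git commit -q -m "parser: handle empty input"
git add renderer.txt
git commit -q -m "renderer: escape titles before display"

git log --oneline   # three commits, each one reviewable on its own
```

Tools like `git add -p` take this further by letting you stage individual hunks within a file.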
It is shameful that for a large part of the industry, this is the norm. It is shameful that a lot of professionals, who call themselves software architects or reliability engineers and other fancy titles, still have essentially no idea what they are doing with git, and that their response when git add/commit/push/pull doesn’t work is to shrug, delete the repo, and re-clone it.
Version control should be treated with care and attention to detail. It pays for itself 100 times over.
If your commit history is maintained and tells a story, it is a joy to review your PR. If you just `git commit -am "try fix"` 26 times over, and all that is left in the end is a ball of mud, it is horrible.
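One common way to clean up a "try fix" x 26 branch before review is to soft-reset back to the base branch and recommit the final state as a single coherent commit (`git rebase -i` achieves the same thing interactively). A runnable sketch, with made-up branch and file names:

```shell
# Build a messy feature branch, then collapse it into one real commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo base > app.txt
git add app.txt
git commit -q -m "base"

git switch -qc feature
for i in 1 2 3; do
  echo "attempt $i" > app.txt
  git add app.txt
  git commit -q -m "try fix"
done

git reset -q --soft main                     # keep the files, drop the mess
git commit -q -m "fix: handle empty input"   # one commit that tells the story
git log --oneline main..feature
```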
Git is better than what came before, and it might be the best at what it does, but that does not mean that it is good.
- The interface is unintuitive.
- Jargon is everywhere.
- Feature discoverability is bad.
- Once something goes wrong, it is often more difficult to recover. If you're not familiar enough with Git to get yourself into that situation, then you certainly aren't familiar enough to get yourself out of it.
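On the recovery point: local commits are rarely actually lost, because the reflog records every position HEAD has pointed to (kept for around 90 days by default). A self-contained sketch of undoing a bad hard reset:

```shell
# Sketch: recovering a "lost" commit via the reflog.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo v1 > notes.txt; git add notes.txt; git commit -q -m "good work"
echo v2 > notes.txt; git add notes.txt; git commit -q -m "more good work"

git reset -q --hard HEAD~1       # oops: the second commit vanishes from the log

git reflog -n 3                  # ...but the reflog still remembers it
git reset -q --hard 'HEAD@{1}'   # HEAD@{1} = where HEAD pointed one move ago
cat notes.txt                    # back to v2
```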
Many of those issues are due to git being a command line interface, but others (like no general undo and funny names) are simply due to bad design.
I think it is about time that we try again and build a better version control tool, but maybe git is just too entrenched.
I would say that is a reasonable criticism of git ... but I've seen the same thing in svn, perforce, cvs, and rcs. Different variations of the same issue of people not caring about the version history.
Since it's been a problem since the dawn of version control, either it's a flaw shared by every version control tool, carried along since the early days of check-ins, or it's simply something people don't care about.
I feel this is more akin to a lack of comments in code and poor style choices; blaming git for a messy history is like blaming the text editor for making it harder to comment code.
At the start of my career I ended up in a UI position. Old school usability on the back side of a 2 way mirror.
The tool has lots of shortcomings: images, documents that aren't text, working with parts of repositories... These aren't issues faced by the kernel (where emailing patches is the order of the day). And these shortcomings have led to other tools emerging and becoming popular, like Artifactory, journaling file systems, and various DAMs.
Technology on the whole keeps stacking turtles rather than going back to first principles and fixing core issues. Auth (DAP, LDAP, and every modern auth solution). Security (so many layers, tied back to auth). Containers and virtualization (as a means of installing software...). Versioning is just one among this number. We keep stacking turtles in the hope that another layer of abstraction will solve the problem, but we're just hiding it.
One of the few places where we (as an industry) have gone back and "ripped off the bandaid" is Systemd... It's a vast improvement but I would not call it user friendly.
Usability remains a red-headed stepchild; it's the last bastion where "won't fix: works for me" is an acceptable answer.
This is a standard that we don't apply to most other tools outside of IT. I do think git could be more usable, but most powerful tools have sharp edges and require training.
A bandsaw is a fantastic tool, but if you try to use one without reading about it first, you'll end up losing a finger. I'm not sure I'd blame the bandsaw in that instance...
There are of course power tools with obnoxious protections that make them difficult to use, but since we are dealing with software here, we are not bound by the laws of physics. I believe that we can create a better tool that is both powerful and easy to use.
https://old.reddit.com/r/todayilearned/comments/158lp0m/comm...
>My high school shop teacher, before he let any of us near the machines or power tools, told us horror stories about students who lost fingers and eyes by being careless with them. For the entirety of that semester, nobody got so much as a chipped fingernail.
which is a better match for my experience --- the best advice I ever got was from my high school shop teacher:
>Before turning on the power switch, count to ten under your breath on all your fingers while visualizing all the forces involved and all the ways the operation could go wrong, then remind yourself that you want to be able to repeat that count after turning the power off.
I don't think Sawstop would have a business model if all tablesaw injuries were tried by a jury of such shop teachers (heard him scream at the kid who removed a guard through hearing protection all the way on the other side of the shop around a corner while operating a lathe while making a heavy interrupted roughing cut w/ a chisel I really should have paused to sharpen --- the student was banned from ever entering the shop again).
Replace tool with one of piano|guitar|etc and see your logic fall apart. Software tools like any other have a manual and require effort and time to learn.
I've read a few blog posts and half a book on git, and I don't remember the last time I had issues with it.
I also don't recall a junior ever having trouble uploading files with git. Unless they're in an interactive rebase, which wouldn't happen your first time trying out git.
There are wrappers that make it much more approachable. IntelliJ’s Git frontend, for example, is pretty nice.
With the above in order, I found I could subset the git state model:
* temporary branches rather than the "stash"
* commit tentative work to HEAD; amend, discard, or set aside in a temporary branch as later discoveries require
* side-step the index/cache/staging_area for most operations -- transfer directly between work tree and HEAD commit
git switch some-branch
# edit files
git restore file2   # undo changes to file2
git stage file1
git commit
Instead of the old workflow using checkout with a bunch of different flags.
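For anyone used to the old flags, here is the same switch/restore workflow in runnable form, with the older checkout equivalents noted in comments. Paths and messages are made up for illustration.

```shell
# The newer porcelain next to its old checkout-flag equivalents.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo one > file1
echo two > file2
git add .
git commit -q -m "init"
git branch some-branch

git switch some-branch   # old: git checkout some-branch
echo scratch > file2
git restore file2        # old: git checkout -- file2
echo edited > file1
git stage file1          # `git stage` is a synonym for `git add`
git commit -q -m "edit file1"
```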
I agree, though, that git is needlessly obtuse. I advocated for Mercurial instead of git for years because Mercurial was so much more user friendly, but git won. I hear good things about jj now.
For professional work, people can and do learn complex interfaces and jargon, if it is advantageous.
If somebody can get a lot done with a tool, then it's a good tool. And a lot of tools can't both enable people to get things done and avoid being misused. They have to pick one.
Does "getting it done with pliers" make them a good wrench?
Now, until such a person exists, ridiculous counterexamples are still ridiculous.
A compromise/synthesis: everyone should absolutely learn how git works internally, but not necessarily how to use the git-specific porcelain/tooling/CLI
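For a taste of what "how git works internally" means in practice: every git object can be inspected with the `cat-file` plumbing command, which makes the commit/tree/blob model tangible. The repo contents here are made up.

```shell
# Sketch: peeking at git's object model with plumbing commands.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo hello > greeting.txt
git add greeting.txt
git commit -q -m "init"

git cat-file -p HEAD                # the commit: tree pointer, author, message
git cat-file -p 'HEAD^{tree}'       # the tree: one entry per tracked file
git cat-file -p HEAD:greeting.txt   # the blob: the file contents themselves
```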
Fast forward twelve years and my wife did the MCIT at UPenn (https://catalog.upenn.edu/graduate/programs/computer-informa...) where git and other topics were woven into the curriculum. Even then, they were perhaps a novelty because the program's focus was bringing non-CS undergrads into a CS master's program. So-called "conversion" master's degrees were the norm in the UK in 2002.
I think this is a good argument for teaching git, and being thorough in doing so, as many people are likely to never take that initiative themselves, while the benefits to being good at git are so obvious.
I'm sure there's an element of (intellectual) laziness too, but I think people tend to only learn git by necessity because git is simply unpleasant to use.
A lot of us have Stockholm syndrome because git is less bad than what came before it, but git is not good.
Like, I attempt to write good commit messages and stage my changes in such a way that the commits are small, obvious, and understandable. That's about it. But the advanced tooling around git is scary ngl.
Meanwhile enterprise teams are often like - who cares, let's auto-squash all commits into one.
This is just false. In the UK, you would learn version control in the first week, then submit all work through version control for the whole course.
I find it hard to believe that Americans just don't use version control at school. It doesn't make any sense.
There are cases where I've staged commits this way for a PR, to make it more reviewable. I'd usually rather split them off into separate PRs, but when that would create a pipeline of three MRs that are meaningless on their own, then rewriting history for a single MR makes sense. I generally consider my feature branch's commit history to be for me, not for you. Going back and rewriting history is a chore that shouldn't be necessary if I did a decent enough job with the PR description and task decomposition. Those commits are getting squashed anyway. Along with all the "fix MR comments" commits on top of it.
It wouldn't bother me to adopt your workflow if it fits your team and its tools and processes. I'd just say, consider that your way isn't the only correct way of doing things. Your preferences are valid, but so are others'. The only thing that really bothers me is absolutism. "My way or the highway."
Your writing here reminded me of a particularly unpleasant coworker I had in the past. I quickly browsed your comment history to make sure you're not him... Excessive rigidity is not an endearing quality.
All that being said, I have also been constantly annoyed by people with too many YoE who can't be bothered to spend an hour or three to learn the basics of how the Git tree is structured, and what merge vs rebase does. They rely too heavily on their GUI crutches and can't fix anything once it goes sideways. Even when you lead them to water, sending them reading material and offering to answer questions after, they refuse to drink. Willful ignorance is far more irritating than stubbornness. I don't expect them to be able to remember what bisect vs cherry-pick does. Claude will spit out the subcommands for them if they can describe what they need in English. But they can't do that if they have no understanding of the underlying data structures...
In fact, generally teaching people to select the right tool for the job is a good skill to prevent them from using golden hammers.
This time would be much better spent watching these 9h of lectures.
These would have been very useful to me back when I was in the university.
Later I got my own IBM 386 and installed Linux on it and started to program in Perl …
I am a big fan of Jon’s YouTube videos on Rust and I started to use Rust in non conventional ways.
I am going to follow this lecture series and “port” them to rustdoc and see how it goes.
Another rabbit hole to fall down, it is going to be fun.
https://ontouchstart.github.io/rabbit-holes/missing/
https://oils.pub/
Things are so much easier. No need for a massive amount of plugins and configs; it mostly works out of the box.
https://oils.pub/
I'm still using bash out of habit, though. My one nod to modern tooling is using fzf for shell history search...
I agree that these shells are better than bash, etc. But some bash knowledge is probably a must, given its ubiquity. You're not always going to have the option to install fish.
(Sadly, most of my jobs, including my current one, require tcsh).
I think this is fine, and if anything you should give it more space. It doesn't replace foundational understanding, but the course is explicitly about "practical" aspects; we can assume said foundational understanding is developed in other courses.
Something like "build your own agent" would be a great intuition pump. The model is doing the heavy lifting and a basic harness is a couple hundred lines of simple code. It could fit in a single lecture and it would be very high signal in my opinion.
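To make the "couple hundred lines of simple code" claim concrete, here is an even smaller sketch of the harness loop. Everything in it is made up for illustration: a real harness would implement `call_model` by POSTing the transcript to an LLM API of your choice; here it is a canned stub so the sketch is self-contained and runnable.

```shell
# Minimal sketch of an agent harness loop, with the model call stubbed out.
call_model() {
  # Stub "model": propose one tool call, then declare the task done.
  if [ -z "$1" ]; then
    echo "RUN: ls"            # the model asks the harness to run a command
  else
    echo "DONE: listed files" # the model has seen the tool output
  fi
}

run_agent() {
  transcript=""
  step=0
  while [ "$step" -lt 10 ]; do   # hard cap on iterations
    reply=$(call_model "$transcript")
    case "$reply" in
      RUN:*)
        cmd=${reply#RUN: }
        out=$(eval "$cmd" 2>&1)  # execute the proposed tool call
        transcript="$transcript
$reply
$out"
        ;;
      DONE:*)
        echo "${reply#DONE: }"
        return 0
        ;;
    esac
    step=$((step + 1))
  done
  echo "gave up"
}

run_agent
```

The structure is the whole point: a loop that alternates between asking the model what to do and executing its proposed tool calls, feeding the output back in. The model does the heavy lifting.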
You don't appreciate it when you're studying, because obviously it sounds a bit soft. But when you're learning how something works, often the thing that stops you isn't the fundamentals, which you at least know what they are; it's the little frustrations like not knowing how to commit or pull code, or not knowing how to navigate the terminal.
edit: already in the "beyond the code" section... cool!
- first of all, you need to know how to manage your own digital information. Even though it's taken for granted that a CS/CE freshman knows this, in my experience that's often not the case even for many PhDs... Information management isn't just a taxonomy of files and directories; it's also about evaluating, for example, what happens if the software you use for your notes is discontinued, or if your photo gallery disappears, and so on, and acting accordingly, knowing your SPOFs and how to mitigate them;
- then you need to know how to write, in the broadest sense, which includes mathematical notation, generating graphs, "freehand" drawing like simple CAD, and formatting your work for various purposes and media, whether it's emails, theses, reports, or general messages. This is where teaching LaTeX, org-mode, R/Quarto, etc. comes in. It's not "advanced"; it's the very basics, to be learned before programming. And no, office suites are not an answer: they are monsters from a past era, made to let untrained people with little computer culture use a computer for basic tasks instead of typewriters, and a student is not that;
- you need to know how to crunch numbers. Basic statistics are useful, but they're largely stuck in another era. You need to know how to do math on a computer (symbolic computation, whether it's Maxima or SymPy doesn't really matter) and the basics of statistical processing. For instance, knowing Polars/Plotly/* at a minimal level is a basic skill a freshman should have at a software/operational level, because they should be working in these environments from day one, given that these are the epistemological tools of the present, not paper anymore;
- then you also need to manage code, in the broadest sense. A dSCM is also for managing your own notes and documents, not just software, and you need to know how to share these with others; whether it's Radicle, Forgejo, or patches via mail doesn't really matter, but this family of software needs to be introduced and used at least at a basic level. A DynDNS service should also be provided so anyone can try to self-host the services they want;
- knowing how to communicate is an essential skill, and it's not about using Gmail or Zoom... it's about learning how to self-host basic communication services. It doesn't really matter if it's XMPP, Matrix, or Nostr, but the concept must be clear, and understanding the distributed and decentralized options we have today is vital. A student needs to learn how to stand on their own two feet, not on someone else's servers.
These are basic IT skills that aren't "advanced" at all, despite what many people think, or "sysadmin-level" and so on; they're simply what a freshman should have as someone who loves knowledge and wants to get their hands dirty.
https://missing.csail.mit.edu/2026/code-quality/
Although it's specifically code quality, not software quality, I feel so much is missing. Of course there is no space to explain it in detail, but they could at least list/mention things like complexity, maintainability, and modularity.
I do think your point is valid. The trend in the industry is to put the emphasis on cosmetic qualities (format, workflow, testing), producing huge amounts of metadata that consume huge amounts of human and machine energy for peace of mind.
Complexity, maintainability, modularity have more to do with thinking about the problems at proper abstraction levels.
It seems we ended up spending more time on tools, like writers spending more time sharpening pencils or playing with fonts than writing something meaningful.
Low-quality software can have beautiful code, just like a low-quality book can have beautiful fonts.
Two decades of Git: A conversation with creator Linus Torvalds
https://www.youtube.com/watch?v=sCr_gb8rdEI
I was surprised that he only spent four months on it as a maintainer. What a great piece of work by a great software developer.
Want my feedback? Delete the AI bullshit and go back to teaching programmers how to learn and understand what they are building.
You are still necessary to push the frontier forward. Though, given the way some models will catch themselves making a conceptual error and correct in real time, we should be nervous.
They are completely, 100% useless, no matter what I do. Add on another layer of abstraction like "give me a function to calculate <engineering value>" and they get even worse. I had a small amount of luck getting it to refactor some really terrible code I wrote while under the gun, but they made tons of errors I had to go back and fix. Luckily I had a pretty comprehensive test suite by that point and finding the mistakes wasn't too hard.
(I've tried all of the "just point them at the documentation" replies I'm sure are coming. It doesn't help)
Yes, and that's a problem. If the advent of coding agents leads to people that are only in it for the money staying away from higher education - good. Those people are the reason why higher education turned to shit anyway and maybe it will be a nice change when people go into higher ed out of curiosity and not because they smell money.
Not necessarily going to be true by the time current first year students graduate, given that solved problems are most exposed to AI acceleration.
AIs are pushing many things forward, but due to training sets and context windows, I think meaningfully adding to actually valuable apps, at least as we currently write them (the kind with many DBs/caches/message queues, services) will take a fair bit longer.
To be fair to the parent poster, many people do seem to aspire only to be LLM operators, who will be a dime-a-dozen commodities accorded even less respect and pay than the average developer is today.