On one particular project from 1995 where the hardware was very cost-optimised, the C program compiled to 1800 bytes, which meant we could save nearly a dollar by buying micro-controllers with 2KB of flash rather than 4KB. We manufactured 20,000 units with the cheaper chip. Two years down the line we needed a simple code change to increase the UART baud rate to the host, a change that should have produced the same-sized binary, but instead it grew to 2300 bytes because of a newer C compiler. We ended up tweaking the assembly file and running an assembler, then praying there would be no more changes!
Since then I have always over-specified the micro-controllers a little, and kept a copy of the original dev environment. Luckily all my projects are now EOL as I am retired.
Neywiny [3 hidden]5 mins ago
Could also just edit the old binary directly in a pinch?
travoc [3 hidden]5 mins ago
"luckily all my projects are now EOL as I am retired."
I doubt that everything you ever worked on is end-of-life. Some of it is still out there...
boznz [3 hidden]5 mins ago
Correct; I have thousands of tank temperature controllers still out there, still working fine, even though their End Of Life was 3 years ago. EOL just means support for spares and software updates cannot be guaranteed past that point, and it is mainly tied to the EOL of the specific micro-controller used.
7thpower [3 hidden]5 mins ago
Better have kept those environments.
direwolf20 [3 hidden]5 mins ago
Visual C++ 6 was the first C(++) compiler I used. I'm fairly certain it had auto-completion (IntelliSense).
Casey Muratori would point out that the debugger ran faster on hardware from that era than modern versions run on today's hardware, though I don't have a link to the side-by-side video comparison.
Edit: Casey Muratori showing off the speed of Visual Studio 6 on a Pentium-something after ranting about it: jump to 36:08 in https://youtu.be/GC-0tCy4P1U (the earlier section of the video is how it is today, or when the video was made).
clarity_hacker [3 hidden]5 mins ago
Build environment archaeology like this matters more than people realize. Modern CI assumes containers solve reproducibility, but compiler version differences, libc variants, and even CPU instruction sets can silently change binary output. The detail about needing to reinstall Windows NT just to add a second CPU shows how tightly coupled OS and hardware were — there was no abstraction layer pretending otherwise. Exact toolchain reproduction isn't nostalgia; it's the only way to validate that a specific binary came from specific source.
kelnos [3 hidden]5 mins ago
> The detail about needing to reinstall Windows NT just to add a second CPU shows how tightly coupled OS and hardware were — there was no abstraction layer pretending otherwise.
In this case there was: the reason you needed to reinstall to go from uniprocessor to SMP was that NT shipped with two HALs (Hardware Abstraction Layers): one supporting just a single processor, and one supporting more than one.
The SMP one had all the code for things like CPU synchronization and interrupt routing, while the UP one did not.
If they'd packed everything into one HAL, single-processor systems would have had to take the performance hit of all the synchronization code even though it wasn't necessary. Memory usage would have been higher too. I expect you probably could run the SMP HAL on a UP system (unless Microsoft added a check to prevent it), but you wouldn't really want to, as it would be slower and require more RAM.
So it wasn't that those abstraction layers didn't exist back then. It was that abstraction layers can be expensive. This is still true today, of course, but we have the cycles and memory to spare, more or less, which was very much not the case then.
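A minimal C sketch of that trade-off, not NT's actual HAL code, just the compile-time shape a UP vs. SMP build of a lock boils down to (CONFIG_SMP is only an illustrative macro name, borrowed from the Linux convention):

    /* Hypothetical kernel-style spinlock: the SMP build carries real
     * synchronization, the uniprocessor build compiles it away. */
    #include <stdatomic.h>

    typedef struct {
    #ifdef CONFIG_SMP
        atomic_int locked;     /* real lock word: costs memory and bus traffic */
    #else
        char unused;           /* nothing to store when there is only one CPU */
    #endif
    } spinlock_t;

    static inline void spin_lock(spinlock_t *l)
    {
    #ifdef CONFIG_SMP
        while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
            ;                  /* spin until the other CPU lets go */
    #else
        (void)l;               /* single CPU: nobody to race with */
    #endif
    }

    static inline void spin_unlock(spinlock_t *l)
    {
    #ifdef CONFIG_SMP
        atomic_store_explicit(&l->locked, 0, memory_order_release);
    #else
        (void)l;
    #endif
    }

    int main(void)
    {
        spinlock_t l = {0};
        spin_lock(&l);
        /* critical section */
        spin_unlock(&l);
        return 0;
    }

Built with -DCONFIG_SMP you pay for the lock word and the locked bus cycle on every acquire; built without it, the whole thing optimizes away, which is roughly the saving the separate UP HAL bought you.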
Sesse__ [3 hidden]5 mins ago
> If they'd packed everything into one HAL, single-processor systems would have to take the performance hit of all the synchronization code even though it wasn't necessary. Memory usage would be higher too.
Linux also used to be like this, but these days it has unified MP/UP kernels; on single-CPU systems (or if you pass nosmp), the extra code is patched away at boot time. It wouldn't have been an unheard-of technique at the time.
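Linux does the real thing by rewriting the instructions in place (the "alternatives" mechanism), which is more than a comment can show; but the "decide once at boot" idea can be illustrated in plain C with simple runtime dispatch. All names below are made up:

    /* Toy illustration of choosing UP or SMP lock code once at startup.
     * Real kernels patch the instructions themselves; this just swaps
     * function pointers, but the effect (UP systems never execute the
     * synchronization code) is the same. */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int lock_word;

    static void lock_smp(void)   { while (atomic_exchange(&lock_word, 1)) ; }
    static void unlock_smp(void) { atomic_store(&lock_word, 0); }
    static void lock_up(void)    { /* single CPU: nothing to do */ }
    static void unlock_up(void)  { }

    /* Default to the safe SMP versions; downgrade at boot if possible. */
    static void (*do_lock)(void)   = lock_smp;
    static void (*do_unlock)(void) = unlock_smp;

    static void init_locking(int ncpus)
    {
        if (ncpus == 1) {          /* uniprocessor: swap in the no-ops */
            do_lock   = lock_up;
            do_unlock = unlock_up;
        }
    }

    int main(void)
    {
        init_locking(1);           /* pretend we detected one CPU at boot */
        do_lock();
        puts("in the critical section");
        do_unlock();
        return 0;
    }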
kccqzy [3 hidden]5 mins ago
I actually would love this to be built into a language/compiler. A lot of the time I'm building a single-threaded program but using libraries written by other people. These libraries don't know whether they are being incorporated into a single-threaded program or not, so they either take the performance penalty of assuming multi-threaded (the approach of std::shared_ptr) or give callers the choice by providing two implementations (Rust's Arc and Rc). But the latter doesn't actually work, because this needs to be a global setting, not just a decision made at a local call site. It won't work if such a library is a transitive dependency.
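For C libraries the closest thing today is a project-wide macro, which only works if every translation unit, including transitive dependencies, agrees on it. A rough sketch of the idea, with a made-up LIBFOO_SINGLE_THREADED flag and foo_* names:

    /* A library's reference count that is atomic or plain depending on one
     * global build flag. Everything here is invented for illustration. */
    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct {
    #ifdef LIBFOO_SINGLE_THREADED
        long refs;                 /* plain increment, no atomic overhead */
    #else
        atomic_long refs;          /* pays for atomicity on every retain/release */
    #endif
        void *payload;
    } foo_handle;

    static void foo_retain(foo_handle *h)
    {
    #ifdef LIBFOO_SINGLE_THREADED
        h->refs++;
    #else
        atomic_fetch_add_explicit(&h->refs, 1, memory_order_relaxed);
    #endif
    }

    static void foo_release(foo_handle *h)
    {
    #ifdef LIBFOO_SINGLE_THREADED
        if (--h->refs == 0)
            free(h);
    #else
        if (atomic_fetch_sub_explicit(&h->refs, 1, memory_order_acq_rel) == 1)
            free(h);
    #endif
    }

    int main(void)
    {
        foo_handle *h = calloc(1, sizeof *h);
        foo_retain(h);             /* count: 0 -> 1 */
        foo_release(h);            /* count: 1 -> 0, handle freed */
        return 0;
    }

The catch is the same one: the flag has to be set identically for the entire build, including transitive dependencies, so it really wants to be a compiler-level switch rather than a per-call-site choice.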
do_not_redeem [3 hidden]5 mins ago
Zig supports this. If you compile with -fsingle-threaded, operations on mutexes turn into nops, atomics become simple loads/stores, etc.
amluto [3 hidden]5 mins ago
They could have shipped both HALs. Or made it easy to switch which one was in use without reinstalling.
CDs were around and hard drives weren’t that small at the time. (Or maybe the really early SMP versions predated widespread availability of CD-ROMs, but I remember dealing with this nonsense and reinstalling from an MSDN CD set.)
flomo [3 hidden]5 mins ago
With NT4, I'm pretty sure both HALs were on the CD-ROM (unless you had an exotic system with a custom HAL, which came with its own install media). Keep in mind the use case was approximately nobody's: you either had an SMP system or you didn't.
amluto [3 hidden]5 mins ago
It was really not that rare to want to move a disk from one system to another. Except that there was an obnoxiously high chance that Windows would refuse to boot.
vintermann [3 hidden]5 mins ago
Linux also used to have separate SMP kernels back when multi processor systems were rare.
amluto [3 hidden]5 mins ago
I’m pretty sure that the SMP kernel would boot on UP and vice versa, though.
webdevver [3 hidden]5 mins ago
There is something to be said about old Windows installation CDs being essentially the modern-day equivalent of immutable Docker layers. I don't think one could say that about modern Windows, but then I'm not super clued in on MS stuff.
bluedino [3 hidden]5 mins ago
I'd like to see someone build the Linux source code leak that came out not too far after Quake was released.
yjftsjthsd-h [3 hidden]5 mins ago
What do you mean, "leak"? Linux would have been developed in the open?
IsTom [3 hidden]5 mins ago
I think OP means the leak of the Linux Quake port source.
Maro [3 hidden]5 mins ago
Quake book incoming from Fabien?
bombcar [3 hidden]5 mins ago
Almost certainly; each of his other books has been telegraphed by articles about the work he's doing to get the original setup built and running.
torh [3 hidden]5 mins ago
I hope so. The other books have been great fun to read, with the detour of CP-SYSTEM as a nice surprise.
knorker [3 hidden]5 mins ago
> The first batches of Quake executables, quake.exe and vquake.exe were programmed on HP 712-60 running NeXT and cross-compiled with DJGPP running on a DEC Alpha server 2100A.
Is that accurate? I thought DJGPP only ran on and for PC-compatible x86. id had Alphas for things like running qbsp, light and vis (these took forever to run, so the Alpha SMP was really useful), but for building the actual DOS binaries, surely this was DJGPP on an x86 PC?
Was DJGPP able to run on Alpha for cross-compilation? I'm skeptical, but I could be wrong.
Edit: Actually it looks like you could. But did they? https://www.delorie.com/djgpp/v2faq/faq22_9.html
I thought the same thing. There wouldn't be a huge advantage to cross-compiling in this instance since the target platform can happily run the compiler?
frumplestlatz [3 hidden]5 mins ago
Running your builds on a much larger, higher-performance server, using a real, decent, stable multi-user OS with proper networking, is a huge advantage.
jeffrallen [3 hidden]5 mins ago
> (Visual Studio 6) I never used it but it must have felt like a dream at the time.
I used it in the mid-'90s and yes, it was eye-opening. On the other hand, I was an Emacs user at uni, and by studying the history of Emacs a bit (especially Lucid Emacs) I came to understand that the concepts in Visual Studio were nothing new.
On the third hand, I hated customizing Emacs, which did not have "batteries included" for things like "jump to definition", not to mention a package manager. So the only times in the late '90s I got all the power of a modern IDE were when I was doing something that needed Windows and Visual Studio.
ErroneousBosh [3 hidden]5 mins ago
Funny, I've just been (re-)playing Quake 2 recently.
bombcar [3 hidden]5 mins ago
Action Quake II is still the best I’ve ever been at FPS.
jasonb05 [3 hidden]5 mins ago
Nod. AQ2 was so damn fun!!
webdevver [3 hidden]5 mins ago
Love software archaeology like this.
There was another article where someone bootstrapped the very first version of GCC that had the i386 backend added to it, and it turned out there was a bug in the codegen. I'll try to find it...
EDIT: Found it; in fact there was an HN discussion about an article referencing the original article:
https://miyuki.github.io/2017/10/04/gcc-archaeology-1.html
https://news.ycombinator.com/item?id=39901290