
> The first batches of Quake executables, quake.exe and vquake.exe were programmed on HP 712-60 running NeXT and cross-compiled with DJGPP running on a DEC Alpha server 2100A.

Is that accurate? I thought DJGPP only ran on and targeted PC-compatible x86. id had the Alpha for things like running qbsp, light, and vis (these took for-ever to run, so the Alpha SMP box was really useful), but for building the actual DOS binaries, surely this was DJGPP on an x86 PC?

Was DJGPP able to run on Alpha for cross compilation? I'm skeptical, but I could be wrong.

Edit: Actually it looks like you could. But did they? https://www.delorie.com/djgpp/v2faq/faq22_9.html




I asked John Carmack and he told me they did.

There is also an interview with Dave Taylor explicitly mentioning compiling Quake on the Alpha in 20 seconds (source: https://www.gamers.org/dhs/usavisit/dallas.html#:~:text=comp...). I don't think he meant running qbsp or vis or light.


> he told me they did.

This is when they (or at least Carmack) were doing development on NeXT? So were those the DOS builds?


I thought the same thing. There wouldn't be a huge advantage to cross-compiling in this instance since the target platform can happily run the compiler?

Running your builds on a much larger, higher performance server — using a real, decent, stable multi-user OS with proper networking — is a huge advantage.

Yes, but the gains may be lost in the logistics of shipping the built binary back to the PC for actual execution.

An incremental build of C (not C++) code is pretty fast, and was pretty fast back then too.

The q1source.zip this article links to is only 198k lines spread across 384 files. The largest file is 3391 lines. Though the linked q1source.zip is QW and WinQuake, so not exactly the DJGPP build (quoting the README: "The original dos version of Quake should also be buildable from these sources, but we didn't bother trying").
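For what it's worth, you can reproduce counts of that order once q1source.zip is unpacked; something along these lines (exact numbers depend on which file types you include):

    # number of C source and header files, and their total line count
    find . -type f \( -name '*.c' -o -name '*.h' \) | wc -l
    find . -type f \( -name '*.c' -o -name '*.h' \) -print0 | xargs -0 cat | wc -l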

It's just not that big a codebase, even by 1990s standards. It was written by just a small team of amazing coders.

I mean, correct me if you have actual data to prove me wrong, but my memory of the time is that build times were really not a problem. C is just really fast to build. Even back in, was it 1997, when the source code was found lying around on an FTP server or something: https://www.wired.com/1997/01/hackers-hack-crack-steal-quake...


"Shipping" wouldn't be a problem, they could just run it from a network drive. Their PCs were networked, they needed to test deathmatches after all ;)

And the compilation speed difference wouldn't be small. The HP workstations they were using were "entry level" systems with (at max spec) a 100MHz CPU. Their Alpha server had four CPUs running at probably 275MHz. I know which system I would choose for compiles.


> "Shipping" wouldn't be a problem, they could just run it from a network drive.

This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow, even on the PC) and the network overhead adds up. Especially back then.

> just run it from a network drive.

It still needs to be transferred to run.

> I know which system I would choose for compiles.

All else equal, perhaps. But were you actually a developer in the 90s?


What's the problem? 1997? They were probably using a 10BASE-T network, which is 10 Mbit/s... Using Novell NetWare would let you transfer data at about 1 MB/s... quake.exe is < 0.5 MB, so the transfer would take around 1 second.
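(Back of the envelope: 10 Mbit/s is about 1.25 MB/s raw, so roughly 1 MB/s after protocol overhead; 0.5 MB at 1 MB/s is about half a second, so a second is a fair estimate.)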

Not sure what you mean by "problem". I said minuscule cancels out minuscule.

Networking in that era was not a problem. I also don’t know why you’re so steadfast in claiming that builds on local PCs were anything but painfully slow.

It’s also not just a question of local builds for development — people wanted centralized build servers to produce canonical regular builds. Given the choice between a PC and large Sun, DEC, or SGI hardware, the only rational choice was the big iron.

To think that local builds were fast, and that networking was a problem, leads me to question either your memory, whether you were there, or if you simply had an extremely non-representative developer experience in the 90s.


Again, I have no idea what you mean by networking being a "problem".

You keep claiming it somehow incurred substantial overhead relative to the potential gains from building on a large server.

Networking was a solved problem by the mid-90s, and moving the game executable and assets across the wire would have taken ~45 seconds on 10BASE-T, and ~4 seconds on 100BASE-T. Between Samba, NFS, and NetWare, supporting DOS clients was trivial.
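(Those figures work out if you assume roughly 50 MB of executable plus pak files to move: at ~1.1 MB/s effective on 10BASE-T that is about 45 seconds, and at roughly ten times that on 100BASE-T it drops to 4-5 seconds.)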

Large, multi-CPU systems — with PCI, gigabytes of RAM, and fast SCSI disks (often in striped RAID-0 configurations) — were not marginally faster than a desktop PC. The difference was night and day.

Did you actively work with big iron servers and ethernet deployments in the 90s? I ask because your recollection just does not remotely match my experience of that decade. My first job was deploying a campus-wide 10Base-T network and dual ISDN uplink in ~1993; by 1995 I was working as a software engineer at companies shipping for Solaris/IRIX/HP-UX/OpenServer/UnixWare/Digital UNIX/Windows NT/et al (and by the late 90s, Linux and FreeBSD).


Ok that's not what I said. So we'll just leave it there.

That's exactly what you said, and it was incorrect:

> This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow, even on the PC) and the network overhead adds up. Especially back then.

The network overhead was negligible. The gains were enormous.


>> I said minuscule cancels out minuscule.

> You keep claiming it somehow incurred substantial overhead

This is going nowhere. You keep putting words in my mouth. Final message.


Jesus Christ. Networking was cheap. Local builds on a PC were expensive. You are pedantic, foolish, and wrong.

Were you even a developer in the 90s? Are you trying to annoy people?


> I mean, correct me if you have actual data to prove me wrong, but my memory of the time is that build times were really not a problem.

I never had cause to build Quake, but my Linux kernel builds took something like 3-4 hours on an i486. It was a bit better on the dual-socket Pentium I had at work, but it was still painfully slow.

I specifically remember setting up gcc cross toolchains to build Linux binaries on our big-iron UltraSPARC machines because the performance difference was so huge — more CPUs, much faster disks, and lots more RAM.
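For anyone who never did it, the setup looked roughly like this. This is a from-memory sketch, not anyone's actual recipe: the i486-linux target triplet, version numbers, and /opt/cross prefix are just assumed examples, and you also needed the target's libc headers and libraries under the prefix, which was the fiddly part.

    # build binutils for the target first, then gcc on top of it
    cd binutils-2.x
    ./configure --target=i486-linux --prefix=/opt/cross
    make && make install
    cd ../gcc-2.7.x
    ./configure --target=i486-linux --prefix=/opt/cross
    make && make install

    # then cross-compile on the SPARC box and copy the result to a Linux PC
    /opt/cross/bin/i486-linux-gcc -O2 -o prog prog.c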

That gap disappeared pretty quickly as we headed into the 2000s, but in 1997 it was still very large.


I remember two huge speedups back in the day: `gcc -pipe` and `make -j`.

`gcc -pipe` worked best when you had gobs of RAM. Disk I/O was so slow, especially compared to DRAM, that being able to skip all those temp-file steps was a godsend. So you'd always opt for the pipeline if you had the memory to spare.
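Concretely it was just one extra flag on every compile (foo.c is a made-up example, nothing Quake-specific):

    # without -pipe: cpp writes a temp file for cc1, cc1 writes a temp .s for as
    # with -pipe: the stages talk over pipes, so the intermediates never touch disk
    gcc -pipe -O2 -c foo.c -o foo.o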

`make -j` was the easiest parallel-processing hack ever. As long as you had multiple CPUs or cores, `make -j` would fill them up and keep them as busy as possible. Now, you could place artificial limits such as `-j4` or `-j8` if you wanted to hold back some resources or keep the machine interactive. But the parallelism was another godsend when you had a big compile job.
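For example, combining the two (the job count is illustrative; a bare `-j` spawns as many jobs as there are ready targets, which could easily swamp a machine of that era):

    # cap at 4 parallel jobs and pass -pipe through to the compiler
    make -j4 CC="gcc -pipe"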

It was often a standard but informal benchmark to see how fast your system could rebuild a Linux kernel, or all of XFree86.
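Something along the lines of (target names varied by kernel version; zImage and -j4 are just assumed here):

    make clean && time make -j4 zImage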


> Linux kernel builds took something like 3-4 hours on an i486

From cold, or from modified config.h, sure. But also keep in mind that the Pentium came out in 1993.




