If you want to get real technical, the whole Win9x/Me line was still 16-bit, because what most people thought of as Windows was really just a GUI shell running on top of DOS. It was basically a special 32-bit stack thrown on top of a 16-bit OS. That's a bit of an oversimplification, but the legacy of DOS is what caused the never-ending problems with "system resources" and the general instability of the whole line of operating systems. The Windows shell was able to run in protected mode, but there was no enforcement of it, so a single misbehaving 16-bit program could bring the whole house of cards crashing down.
Windows NT 3.1 and on through XP were all fully 32-bit protected-mode operating systems with enforcement. There was a kind of experimental 64-bit build of XP, and technically there were 64-bit builds of at least NT4 and Win2000 for the Alpha and (supposedly) SPARC64 instruction sets. However, few people ever really knew about these, and as you might imagine it was a pretty limited market, since those were high-end workstation and server chips. Amusingly, MS initially came out saying they were dropping 64-bit platforms with XP, only to go back on that when the x86-64 instruction set came out. There was also a port to Intel's doomed IA-64 (Itanium) instruction set. You are correct that the 64-bit version of XP wasn't really intended for widespread use, which is why it was only ever distributed as OEM. Vista was taking a lot longer than expected to get out the door, MS's partners were getting impatient, and so they slapped together a quick 64-bit version of XP.
It is something of a misconception that more bits means more data being processed at a time, and thus more efficiency. It's both true and untrue at the same time. For the most part, unless a program actually makes heavy use of 64-bit values, like video editing, you just won't see much of a difference, because the number of bits processed at a time wasn't the limiting factor. There is one place it does matter, though: Unix-style systems tell time by counting the seconds since January 1, 1970, and the counter has traditionally been a signed 32-bit integer, which only goes up to about 2.1 billion. That runs out in January 2038, well within most of our lifetimes, at which point the counter wraps around and clocks think it's 1901 (or 1970, if the value gets treated as unsigned). Kind of the whole Y2K thing all over again. People come up with ugly, temporary hacks, which end up sticking around for decades, and despite it being a pretty well-known problem to a key set of people, nothing much ever gets done about it. Granted, a 64-bit counter will take billions of years to exhaust, and the human race may not even survive that long, but rather than solve the underlying problem we've just applied another band-aid fix.
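To make the wraparound concrete, here's a minimal C sketch of my own (assuming a platform with a 64-bit time_t, so both boundary values can be printed without actually overflowing anything):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* Unix time: seconds counted from 1970-01-01 00:00:00 UTC.
       A signed 32-bit counter maxes out at 2,147,483,647. */
    time_t last = (time_t)INT32_MAX;
    printf("Last 32-bit second: %s", ctime(&last));    /* lands in January 2038 */

    /* One tick later a 32-bit time_t would wrap to INT32_MIN, i.e. back to
       December 1901. Computed explicitly here rather than by incrementing,
       since signed overflow is undefined behavior in C. */
    time_t wrapped = (time_t)INT32_MIN;
    printf("One second later:   %s", ctime(&wrapped)); /* back in December 1901 */

    return 0;
}
```

Compile and run it and you'll see the clock jump from January 2038 straight back to 1901, which is the Y2038 problem in a nutshell.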
Also, since we're getting into the weeds a bit... there's no 16-bit support on ANY of Microsoft's current 64-bit offerings. A 32-bit version of Windows still has 16-bit support, but the 64-bit versions don't. I'll admit to never having looked into exactly why that is, but if I had to guess, it's a limitation of the CPU architecture: when an x86-64 chip is running in 64-bit long mode it drops the virtual 8086 mode that 16-bit support relied on, so MS would have had to build a special kind of software sandbox, which is probably just way too much trouble for a handful of apps people shouldn't still be holding onto anyway. Of course I could be completely full of it on that last point, so make sure to get some corroboration before taking it at face value.
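For what it's worth, the 32-bit-on-64-bit compatibility layer (WOW64) does exist and a program can ask whether it's running under it; the missing piece on x64 is just the 16-bit equivalent. Here's a minimal Win32 C sketch of my own, assuming a standard Windows C toolchain:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* IsWow64Process reports whether this process is a 32-bit process
       running under the WOW64 layer on a 64-bit Windows. A native 64-bit
       process (or a 32-bit process on 32-bit Windows) reports FALSE. */
    BOOL underWow64 = FALSE;
    if (IsWow64Process(GetCurrentProcess(), &underWow64))
        printf("Running under WOW64: %s\n", underWow64 ? "yes" : "no");
    else
        printf("IsWow64Process failed: %lu\n", GetLastError());
    return 0;
}
```

Build it as a 32-bit binary and run it on 64-bit Windows and it reports "yes"; there's simply no analogous answer a 16-bit app could get, because nothing will load it in the first place.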
Finally, for all intents and purposes, XP x64 is DOA at this point. It never made it to SP3, and MS dropped support for SP2 a while back, so it's no longer getting security updates and the like. Not that it matters much, since driver support was spotty at the best of times.