In a few milliseconds my brain made the connection and I had one of those “so that is where it came from!” moments.

When I was in Silicon Valley last week, I had the privilege of taking a tour of the Computer History Museum. One of the stops was in front of these hanging grids:

I had no idea what they were. My friendly tour guide, Fred Ware, told me that I was looking at “core memory.” It was an early form of memory technology built from tiny rings of magnetic material, called cores, with wires threaded through them. By controlling the polarity of each core’s magnetic field, the machine could store data in them.

When I first started programming on a Unix machine in college, I would occasionally do something stupid like try to peek at the contents of memory address zero, which would do bad things, crash the program, and leave me with a “core dump.” I had always assumed that the “core” in the error message was an adjective meaning “main,” but when I saw real “core memory,” I realized that I had misunderstood the term for years.
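
For anyone who never had the pleasure, here is a minimal sketch of the kind of mistake I mean (a made-up example, not my actual college code). On most Unix systems a program like this dies with “Segmentation fault (core dumped)” and leaves a core file behind:

    #include <stdio.h>

    int main(void)
    {
        int *p = (int *)0;   /* a pointer to memory address zero        */
        printf("%d\n", *p);  /* dereferencing it crashes the program... */
        return 0;            /* ...and Unix writes out a "core dump"    */
    }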

Our industry is full of terms that seem awkward unless you know their history. Even something as basic as the “print” function found in almost every language doesn’t make much sense until you learn that it goes back to the days when you connected to a computer via a TeleTYpe (TTY) machine that printed its output on rolled paper.

This isn’t unique to computing. Human languages have quite a bit of history in their words. For example, the word “salary” comes from the bygone days when you were paid for your labor in salt. The difference with computing history is that most of it happened recently and most of the people that created it are still alive.

Great stories are just below the surface of things you use each day. While I knew that Linux is a Unix-like operating system started by Linus Torvalds when he was a student at the University of Helsinki, I didn’t realize until last week the interesting story behind “Unix” itself. The name probably came about as the result of a silly pun: it was a “castrated” version of MULTICS. The MULTICS operating system was developed in the mid-1960s to be a better time-sharing operating system for the expensive computer time of the day. Perhaps frustrated with the complexity of MULTICS, Ken Thompson wrote a rough “simplified” version of it in a month. Thus, the story goes, it was “UNIplexed” and simple where MULTICS was “MULTiplexed” and complicated.

Unix’s history gives color to the people who created it. I’m certainly no Ken Thompson, but I can relate to the feeling that some code is overly complex and to the itch to rewrite it as something simpler. It’s neat to see that his diversion worked out so well. It’s also a testament to the people behind MULTICS that most of the central ideas in our “modern” operating systems trace their origin to it.

Programming languages tend to have a story as well. Usually the most interesting is the philosophy that drove their creation. A good representative sample is the philosophical gap between Simula and C++.

Simula is regarded as the first object-oriented programming language. It too was developed in the golden era of Computer Science, the 1960s. As stated by its creators:

“From the very outset SIMULA was regarded as a system description language”

This naturally led to the bold design objective #6:

“It should be problem-oriented and not computer-oriented, even if this implies an appreciable increase in the amount of work which has to be done by the computer.”

It was so bold that they had to tone it down a bit. They still had 1960s hardware, after all:

“Another reason for [de-emphasizing the above goal] was that we realized that the success of SIMULA would, regardless of our insistence on the importance of problem orientation, to a large extent depend upon its compile and run time efficiency as a programming language.”

Regardless, it’s telling of its philosophy. As David West writes:

“Both Parnas and the SIMULA team point to an important principle. Decomposition into subunits is necessary before we can understand, model, and build software components. If that decomposition is based on a ‘natural’ partitioning of the domain, the resultant models and software components will be significantly simpler to implement and will, almost as a side effect, promote other objectives such as operational efficiency and communication elegance. If instead, decomposition is based on ‘artificial,’ or computer-derived, abstractions such as memory structures, operations, or functions (as a package of operations), the opposite results will accrue.”

As David continues, the philosophy of Simula was:

”… to make it easier to describe natural systems and simulate them in software, even if that meant the computer had to do more work.”

Bjarne Stroustrup took a different approach with C++:

“SIMULA’s class-based type system was a huge plus, but its run-time performance was hopeless:

The poor runtime characteristics were a function of the language and its implementation. The overhead problems were fundamental to SIMULA and could not be remedied. The cost arose from several language features and their interactions: run-time type checking, guaranteed initialization of variables, concurrency support, and garbage collection…” (Emphasis added)

“C with Classes [precursor to C++] was explicitly designed to allow better organization of programs; ‘computation’ was considered a problem solved by C. I was very concerned that improved program structure was not achieved at the expense of run-time overhead compared to C.” (Emphasis added)

This simple historical account alone can probably explain much of what seems odd or annoying about C++. It also reveals a larger truth: when someone far smarter than me, like Bjarne, creates or does something that strikes me as weird, there’s probably an interesting historical reason behind it. In C++, the driving force was almost always performance. I find it amusing that many of the “new” ideas in languages and runtimes are just bringing back things from Simula that C++ took out.

Practicalities like performance often hinder the adoption of otherwise superior technology. Betamax lost in part because its tapes initially held only an hour of video compared to the two hours of VHS, and not being able to store a full-length movie is a real problem. The QWERTY keyboard layout was designed in the 1870s, when it was important to separate common letter pairs like “th” in order to prevent typebar jamming. When computer keyboards were designed, the designers simply copied the already popular QWERTY layout to ease adoption, even though other, potentially more efficient, layouts were available. Sometimes a “good enough” solution wins because the better approach isn’t worth the transition cost.

Popular history often misleads people into thinking that things came easily to the innovators. The real stories offer hope because they demonstrate that you too can make history if you’re willing to persevere.

Google’s founders, Larry Page and Sergey Brin, were turned down when they tried to sell their core PageRank algorithm to AltaVista and Yahoo. One of my favorite stories is that almost no one believed Bob Kahn, a co-inventor of TCP, when he argued that congestion control would be important on a large network. It wasn’t until the first 12 packets were sent out and congestion actually occurred that people finally agreed it could be a problem.

Probably the biggest misconception is how Isaac Newton came to explain gravity. Scott Berkun writes:

“It’s disputed whether Newton ever observed an apple fall. He certainly was never struck by one, unless there’s secret evidence of fraternity food fights while he was studying in Cambridge. Even if the apple incident took place, the telling of the story discounts Newton’s 20 years of work to explain gravity, the feat that earned him the attention of the world” (Emphasis added)

Tim Berners-Lee, the primary man behind the web, tells a similar story:

“Journalists have always asked me what the crucial idea was or what the singular event was that allowed the Web to exist one day when it hadn’t before. They are frustrated when I tell them there was no Eureka moment. It was not like the legendary apple falling on Newton’s head to demonstrate the concept of gravity… it was a process of accretion (growth by gradual addition).”

What’s the history lesson? Don’t worry if you never have an apple-falling-on-your-head moment; Newton didn’t have one either. Things take a lot of hard work and perseverance. True stories of innovators’ persistence go on and on. While you’re slogging away, don’t worry about mistakes too much:

“Anyone who has never made a mistake has never tried anything new.” - Einstein

Simply learn from them and move on.

Doing something insanely great might take a long time, and that is a reminder of how important people are in our industry. People are the most important part of any industry because it is through them and their stories that history is created, and that is especially true in computing. Unfortunately, some of their most interesting stories are practically unknown:

  • Alan Turing wrote the paper that effectively started computing as we know it.
  • William Shockley practically put the “silicon” in “Silicon Valley.”
  • Our modern Internet exists in large part because of the fascinating story of J.C.R. Licklider, who used ARPA funding to work on his dream that everyone should have a personal computer connected to an “Intergalactic Computer Network.”
  • Bob Taylor, the man who led the wildly successful Xerox Palo Alto Research Center (PARC), inspired creativity in the great people who worked with him by creating an environment that encouraged exploration and laughter.
  • Dave Cutler was in large part the reason why we have the much more reliable NT kernel running on our computers and aren’t stuck with the “toy” Windows 9x one.

Their stories are full of little gems. You sometimes get to see what drove their curiosity. Bob Barton, arguably one of the greatest hardware designers ever, once remarked, “I often thank IBM because they gave me so much motivation to do better.” I’m sure that Google’s founders could say the same thing about the AltaVista of the 1990s.

I think that one of the reasons for the plummeting enrollment in Computer Science majors is a lack of understanding of the people and their stories in our field. This leads to the perception that a career in computing will result in a “social death.” I might be biased, but I think we have great stories. If you haven’t visited it yet, check out the Computer History Museum’s videos on YouTube. They’re quite interesting.

I’ll close with one lesson from computing history: don’t be afraid to dream big. When Vint Cerf had to put an upper limit in 1977 on the number of addresses for what would become the Internet, he settled on 32 bits, probably thinking that roughly four billion addresses would never be used up; that’s the “address exhaustion” problem we’re facing now. Be careful when creating something; it just might exceed your wildest dreams.
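
As a quick back-of-the-envelope check on that number (a hypothetical snippet, not anything of Cerf’s): a 32-bit address space holds 2^32, or about 4.3 billion, distinct addresses.

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        /* IPv4 addresses are 32 bits wide, so the address space is 2^32 */
        uint64_t addresses = (uint64_t)1 << 32;
        printf("%" PRIu64 "\n", addresses);  /* prints 4294967296 */
        return 0;
    }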

Why do we as an industry largely ignore our history and rarely mine its rich stories? Do we think we’re moving too fast for it to be relevant? I think it’s important to understand our history precisely because we’re moving so fast. Learning principles that are timeless, or at least long-lived, is more valuable than keeping up with the deep intricacies of the latest technology of the day, which won’t matter in a year or two.

What’s your favorite piece of computing history? Who are the people that you remember? What are their stories?

P.S. Special thanks to Alan Kay for sharing some of his stories with me last week.