Wednesday, April 16, 2008

Towards Moore's Law Software: Part 3 of 3

(Note: This is the third and final part of a series. Part one appeared Monday and part two appeared yesterday.)

First "STEPS"

Let's say we want to build the TCP/IP stack of an operating system. A traditional implementation might take 10,000 lines of code. What if you rethought the design from the ground up? What if you could make the IP packet handling code look almost identical to the RFC 791 diagram which defines IP? That's exactly what the Viewpoints team did. This is real code in their system:

and the TCP stack is similar to the RFC 793 diagram:

That's right; the ASCII art diagram is the code. How did they do it? Well, when you have a powerful "meta-meta language-language" like their IS, you can define languages in a form that reads like a traditional Backus-Naur Form (BNF) grammar representation.
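
To give a rough flavor of what grammar-driven parsing looks like in a mainstream language, here's a small, hypothetical C# sketch of my own (not their IS code): a handful of composable "rules" that recognize a dotted-quad IP address, with the BNF-style production each piece corresponds to noted in the comments.

    using System;

    // A toy parser sketch: each "rule" consumes part of the input and reports
    // how far it got, so rules compose much like productions in a BNF grammar.
    static class TinyGrammar
    {
        // A rule takes the input and a position and returns the new position,
        // or -1 if it failed to match.
        delegate int Rule(string input, int pos);

        static Rule Digit = (s, i) => i < s.Length && char.IsDigit(s[i]) ? i + 1 : -1;

        static Rule Literal(char c)
        {
            return (s, i) => i < s.Length && s[i] == c ? i + 1 : -1;
        }

        // number <- digit digit? digit?     (one to three digits)
        static Rule Number = (s, i) =>
        {
            int j = Digit(s, i);
            if (j < 0) return -1;
            for (int k = 0; k < 2; k++) { int n = Digit(s, j); if (n < 0) break; j = n; }
            return j;
        };

        // Sequencing: run each rule in order, failing if any piece fails.
        static Rule Seq(params Rule[] rules)
        {
            return (s, i) =>
            {
                foreach (Rule r in rules) { i = r(s, i); if (i < 0) return -1; }
                return i;
            };
        }

        // ipAddress <- number "." number "." number "." number
        static Rule IpAddress =
            Seq(Number, Literal('.'), Number, Literal('.'), Number, Literal('.'), Number);

        static void Main()
        {
            Console.WriteLine(IpAddress("192.168.0.1", 0));  // prints 11: matched the whole string
            Console.WriteLine(IpAddress("not.an.ip", 0));    // prints -1: failed at the first digit
        }
    }

Their IS takes the idea much further: there, the grammar itself is the program, rather than a library buried inside one.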




With packet parsing out of the way, the implementation of their TCP algorithm is a slightly dense, but readable, piece of code that handles the SYN, ACK, and other responses.
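
To hint at what that kind of flag dispatch involves, here's a purely hypothetical C# sketch (my own illustration, not the Viewpoints code):

    using System;

    // Hypothetical sketch: decide how to answer an incoming TCP segment based
    // on its control flags. Real TCP tracks far more state; this only hints
    // at the SYN/ACK/FIN "conversation".
    static class TinyTcp
    {
        [Flags]
        enum TcpFlags : byte { Fin = 0x01, Syn = 0x02, Rst = 0x04, Ack = 0x10 }

        enum Reply { None, SynAck, FinAck, Reset }

        static Reply Respond(TcpFlags flags, bool connectionKnown)
        {
            if (!connectionKnown)                          // no record of this connection yet
                return (flags & TcpFlags.Syn) != 0
                    ? Reply.SynAck                         // a SYN opens it: answer with SYN+ACK
                    : Reply.Reset;                         // anything else gets a RST
            if ((flags & TcpFlags.Fin) != 0)
                return Reply.FinAck;                       // peer is closing: acknowledge the FIN
            if ((flags & TcpFlags.Ack) != 0)
                return Reply.None;                         // a plain ACK needs no reply
            return Reply.Reset;
        }

        static void Main()
        {
            Console.WriteLine(Respond(TcpFlags.Syn, false)); // SynAck: a new connection's SYN
            Console.WriteLine(Respond(TcpFlags.Ack, true));  // None: a plain ACK on a known connection
        }
    }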

With this as our basis, writing a service like the Daytime Service is quite simple.
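
For a sense of how little a daytime-style service (RFC 867) needs once the stack underneath exists, here's a hedged C# sketch built on ordinary sockets (again, my own illustration rather than their code):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    // Minimal daytime-style server: accept a connection, write the current
    // time as text, and close. RFC 867 uses port 13; this sketch listens on
    // a high port to avoid needing elevated rights on some systems.
    class DaytimeServer
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Loopback, 1313);
            listener.Start();
            Console.WriteLine("Daytime sketch listening on 127.0.0.1:1313");
            while (true)
            {
                using (var client = listener.AcceptTcpClient())
                {
                    byte[] reply = Encoding.ASCII.GetBytes(DateTime.Now.ToString("R") + "\r\n");
                    client.GetStream().Write(reply, 0, reply.Length);
                }
            }
        }
    }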

All told, the entire stack is comfortably under 200 lines of code, without using any tricks like code generation wizards. The graphics subsystem is similarly clever: it basically defines everything, including fonts and windows, as polygons, and it comes in well under 500 lines of code. They have to be this compact in order to meet their goal of an entire system in under 20,000 lines.

Another interesting metaphor in the project is the use of "massively parallel objects" or what they refer to as "particle fields," as a fundamental part of the system:

"Even less in the mainstream, the “particle/field” idea [PF] has been found more in specialty areas (such as finite-automata and FEM, swarm programming in biology, etc.), than as a general system building tool (although it is the center of systems such as Sketchpad and TEX, and is even found in an operating system such as MUSE). Traditional object-oriented design has tended to overbalance its attention on the objects and to give too rudimentary attention to message-passing. If the center of attention were to be shifted to messaging of all kinds, then the notion of “fields” immediately suggests itself as a more general way to think about inter-object relationships (the previous example of “ants distributing and sensing pheromones” is a good metaphor of this style)."

For example, this idea can be applied to text formatting, where you essentially treat every letter as its own object. Programming the solution then becomes much easier: each letter follows simple rules that only look at its immediately neighboring letters, and an extremely efficient message-passing system makes that simple code run efficiently. They show a sample demonstration of this in their NSF proposal.
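
As a loose, hypothetical C# sketch of the letters-as-objects idea (my own illustration, not the proposal's demo), imagine each glyph finding its position by asking only its left-hand neighbor where it ends:

    using System;
    using System.Collections.Generic;

    // Hypothetical particle-style layout: each glyph knows only its own width
    // and its immediate left neighbor. Asking "where do I start?" ripples one
    // simple message down the chain -- no central layout engine required.
    class Glyph
    {
        public char Letter;
        public int Width;       // pretend width in pixels
        public Glyph Left;      // immediate neighbor, or null at the start of the line

        public int Start()
        {
            return Left == null ? 0 : Left.Start() + Left.Width;
        }
    }

    class ParticleLayout
    {
        static void Main()
        {
            List<Glyph> line = new List<Glyph>();
            Glyph previous = null;
            foreach (char c in "Hello")
            {
                Glyph g = new Glyph { Letter = c, Width = (c == 'l' ? 4 : 8), Left = previous };
                line.Add(g);
                previous = g;
            }
            foreach (Glyph g in line)
                Console.WriteLine("{0} starts at x = {1}", g.Letter, g.Start());
        }
    }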

This reminds me of ideas from "emergence," a theory that explains how a flock of birds or an ant colony can do complex things even though each individual in the system thinks simple thoughts. An individual bird only thinks in terms of simple rules like "try to follow the guy in front of you" and "get out of the way if something dangerous is nearby." These two rules alone can lead to the fantastically complicated formations that we see in the sky.
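
Here's a hedged sketch of how little per-bird logic that takes (the rules and numbers are purely illustrative):

    using System;

    // Purely illustrative: a bird's entire "brain" is two rules -- drift toward
    // the bird ahead of it, and veer away from anything dangerous nearby. The
    // complicated flock shape emerges from every bird doing only this.
    class Bird
    {
        public double X, Y;

        public void Step(Bird leader, double dangerX, double dangerY)
        {
            // Rule 1: move a little toward the bird in front of you.
            X += (leader.X - X) * 0.1;
            Y += (leader.Y - Y) * 0.1;

            // Rule 2: if something dangerous is close, push directly away from it.
            double dx = X - dangerX, dy = Y - dangerY;
            if (Math.Sqrt(dx * dx + dy * dy) < 5.0)
            {
                X += dx;
                Y += dy;
            }
        }

        static void Main()
        {
            Bird leader = new Bird { X = 10, Y = 10 };
            Bird follower = new Bird { X = 0, Y = 0 };
            follower.Step(leader, 0.5, 0.5);
            Console.WriteLine("follower moved to ({0:F1}, {1:F1})", follower.X, follower.Y);
        }
    }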

The metaphor of massively parallel objects with efficient messaging leads to algorithms that are simpler in concept, because you can focus on the behavior of one little object rather than worry about how the whole system works.

"I want it now!"

With the Viewpoints team only in the second year of a five-year project, we can get a feel for where the future of software development is going, but we can't realistically put it into production quite yet. What are our options, then?

Ted Neward likes to talk about how the next five years will be about programming languages attempting to bridge the "huge disconnect between the practitioners, the guys who get stuff done, and the academics, who think about how we get things done." He says this disconnect exists because "the academics and the practitioners don't talk to each other." I really think that in the next five years we'll see significant strides towards improving the situation. If you want to jump ahead of the curve, I think it's worthwhile to start imagining a dream language that would help you cut along the "natural joints" of a problem you work on. Can you think of an expressive language, like the ASCII art for parsing TCP/IP headers, that is targeted at your specific problem?

Another interesting observation Ted made was that:

"... programming languages, much like human languages, are expressions not just of concepts within them, but also the environment in which they were born."

That is, no language is perfect. Each language has a culture that gave birth to it, usually focused on a particular type of problem. I often find that I put myself in too much of a language box and, unfortunately, am happy in that box. This led me to ask Alan Kay about the best way to cope with a design problem I had. The power to think in a highly tailored, though not necessarily general, programming language might end up being the best solution to the problems we face.

If you go down this path, there are many tools and some good articles available now to help you get started writing your own custom language. If you want to be a bit more conservative, you can write an internal domain-specific language inside a host language like Ruby or C#. If you do go the custom route, though, you'll benefit from doing it now rather than 20 years ago, because there are huge frameworks at your disposal, like Microsoft's Dynamic Language Runtime (DLR), that handle most of the "goo" involved in getting your own language up and running. You can leverage the many libraries built on .NET or the JVM so that you don't have to worry so much about supporting concerns like database access and XML handling if those aren't your primary focus. This is in contrast to a decade ago, when you would have had to build all these supporting libraries just to get people to even think about your language seriously. Even a popular "new" language like Ruby has taken about 10-12 years to accumulate enough libraries to make it feasible for development. By standing on the shoulders of frameworks, you can quickly build something that is production worthy.
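
To give a taste of the internal-DSL route, here's a small hypothetical C# example (every name in it is invented) where a fluent interface lets the call site read almost like the problem domain:

    using System;
    using System.Collections.Generic;

    // A tiny, made-up internal DSL for describing a daily schedule. The point
    // is the shape of the call site in Main, which reads close to how you'd
    // say it out loud, not the trivial implementation behind it.
    class Schedule
    {
        private readonly string day;
        private readonly List<string> entries = new List<string>();
        private string pendingTask;

        private Schedule(string day) { this.day = day; }

        public static Schedule Every(string day) { return new Schedule(day); }

        public Schedule Do(string task) { pendingTask = task; return this; }

        public Schedule At(string time)
        {
            entries.Add(day + ": " + pendingTask + " at " + time);
            return this;
        }

        public void Print() { entries.ForEach(Console.WriteLine); }

        static void Main()
        {
            Schedule.Every("weekday")
                    .Do("stand-up meeting").At("09:00")
                    .Do("code review").At("15:30")
                    .Print();
        }
    }

Ruby-style internal DSLs can bend the syntax even further, but even plain C# gets you surprisingly far.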

You can even use Phoenix, Microsoft's new backend optimizing compiler, for your language. That way, you don't have to worry about emitting machine code or IL: you just parse your language into a simple intermediate representation, and your generated code is produced with the same efficiency as the compiler being used for Windows 7. To get you started, the Phoenix SDK will include a LISP compiler as a sample app.

Final Thoughts

"The future is already here -- it's just not evenly distributed." -- William Gibson

The widespread adoption of virtual machines like .NET's Common Language Runtime (CLR) is increasingly making it an option to use custom languages to solve particular problems, because your code can easily be used by more mainstream languages like C#. I think this is one of the critical reasons why Microsoft decided to include a functional language, F#, in an upcoming version of Visual Studio. Similarly, I enjoyed watching all the different ideas at the lang.net symposium because they showed the viability of the CLR for many different languages that would also integrate well with mainstream code.

I have to be honest and admit that I don't envision myself going off and writing my own language right now, but at least I'll be looking for places where a custom language might be a really good fit. I think I need to apply some meta-level thinking, perhaps by writing some of my Project Euler solutions in languages that make this easier, like Scala or F#, even though C# is easy for me to use.

This journey towards a "Moore's Law" increase in expressiveness has been a bit long, but I'm starting to see some practical benefits:

  1. I'm already feeling the expressive differences between C# 2 and C# 3. I'm starting to build on LINQ's good ideas in my production code, borrowing concepts from C++'s Standard Template Library (STL) to make the simple ideas in the code clear and to get rid of some of the scaffolding "goo" (see the sketch after this list). Notable influences on my thinking in the .NET arena have been Wintellect's Power Collections and the NSTL project. Another variant on this idea is to see more of my day-to-day challenges as graph problems and use a library like Peli's QuickGraph to solve them.
  2. The "massively parallel objects" metaphor, made possible by highly efficient message passing, has changed the way I think about some of my hobby programs. Instead of focusing on rules for how a single class can orchestrate hundreds or thousands of objects, I am happily thinking of each object as a bird in a flock. The good news is that this metaphor works well and the same overall macro goal is achieved. I think this is just the start of curbing the problem Alan mentioned of "[there] has been an enormous over-focus on objects and an under-focus on messaging (most so-called object oriented languages don’t really use the looser coupling of messaging, but instead use the much tighter “gear meshing” of procedure calls – this hurts scalability and interoperability)."
  3. Reflecting on the Viewpoints team's IS meta-meta language-language and what can be built with it has really allowed me to see more of the beauty that real computer science has to offer. Even if I can't put those ideas immediately into production code, I could start down that path with something smaller, like applying the ideas of code generation. Code generation is one of the ideas that essentially made Rails so popular: it's the code that you don't have to write yourself that makes it interesting.
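
To make the first point concrete, here's a small, hypothetical before-and-after in C# showing the kind of scaffolding "goo" that LINQ folds away (the data and names are invented):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class OrderReport
    {
        class Order { public string Customer { get; set; } public decimal Total { get; set; } }

        static void Main()
        {
            var orders = new List<Order>
            {
                new Order { Customer = "Ada",   Total = 120m },
                new Order { Customer = "Ada",   Total =  80m },
                new Order { Customer = "Grace", Total = 300m },
            };

            // Before (C# 2 style): explicit loops and a temporary dictionary.
            var totals = new Dictionary<string, decimal>();
            foreach (Order o in orders)
            {
                if (!totals.ContainsKey(o.Customer)) totals[o.Customer] = 0m;
                totals[o.Customer] += o.Total;
            }

            // After (C# 3 / LINQ): the intent reads straight off the query.
            var byCustomer = from o in orders
                             group o by o.Customer into g
                             select new { Customer = g.Key, Total = g.Sum(o => o.Total) };

            foreach (var row in byCustomer)
                Console.WriteLine("{0}: {1}", row.Customer, row.Total);
        }
    }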

The bad news about going down the path of meta-level thinking and writing your own custom languages is that the tools are still relatively young, and it's a little more work than it will be ten years from now. The good news is that if you spend the time at least exploring what smart guys like Alan and Charles are doing now, you'll probably have a comfortable advantage over your competitors when these ideas become mainstream. If it took object-oriented programming ideas a good 25+ years to become popular, the ideas behind language-oriented programming and things like massively parallel objects will probably take at least 10 years to hit wide adoption.

Real "Moore's Law" software will have what Gordon Moore noticed about the effects of his own "law" on hardware: it told the industry that they "[have] to move that fast or they fall behind." I think some of the ideas mentioned here might help get us toward that philosophy. However, I'm just a secondhand observer. The best place to get started is to look at the original sources, or at the very least, the Viewpoints project overview video.

I'm interested in your thoughts. What do you think might lead to a "Moore's Law" increase in software expressiveness? Do you think it's even possible?

15 comments:

Steve Campbell said...

Yes, I think it's possible to get a one-time, multiple-order-of-magnitude increase in software. Unlike Moore's law, though, there is an easy-to-define upper limit. I think Alan Kay's group is already at that limit.

On the expressiveness of the TCP example, I have done similar things before, and they have worked *really* well. It was a risk though, because I had to trust that I understood the problem domain well enough to invest a lot of time in the parser/interpreter.

The interesting thing about this sort of code is that it is *never* legacy code. As long as the abstraction holds and the concept it supports is needed, the code is correct and can be re-used, independent of platform, language, technology etc.

Final thought - the transformation+model-based thinking that is necessary to work in this way will not come easily to everyone. Some (maybe most) programmers will never make the leap.

Ivan Tihonov said...

You should read about Chuck Moore's ideas. He has been talking about this for the last 40 years.

Peter Christensen said...

For the last 3 days, I've been reading it as "Moser's Law of Software". Oops!

Anonymous said...

Adam Dunkels' uIP, which is used in Contiki, is a very interesting implementation of the TCP/IP stack.

Jeff Moser said...

Steve Campbell: Part of me agrees that they're really approaching a limit, but another part of me hopes there is still another sizable increase out there (I can't think of one, though -- however, Alan Kay mentioned that part of their goal is to raise the bar enough to let others possibly see it).

Good job on watching out for the "Turing Tar Pit" on your own parser/interpreter.

It is harder to write, but as you noticed, it has some artistic elegance and beauty which is a rare find in code.

Thanks for the thorough comment!

Ivan Tihonov: Are you referring to the inventor of Forth? Is there a specific paper or book you can point me to that is a sample of what you're speaking about? I saw some thoughts in terms of chip design but not clear ones in terms of software.

Peter Christensen: I'm not worthy of having my own law :)

anonymous: I briefly looked at the code and it seemed interesting, but it's quite a lot of C code. Besides its use on microcontrollers, what about the implementation stuck out to you as interesting?

Ivan Tihonov said...

Moore does not write much. You could read articles by Jeff Fox on www.ultratechnology.com about Moore and his ideas.

http://www.ultratechnology.com/moore4th.htm
http://www.colorforth.com/1percent.html

Btw, Occam, Einstein, and Exupéry said the same things decades before him. The "KISS principle" is widely accepted. But what is seen as "simple" today is actually overcomplicated.

Jeff Moser said...

Ivan Tihonov: This is interesting. I had a conversation with someone at work a day earlier, and he mentioned someone who used Forth as well. The particular use was for a driver. Do you see ideas from this language entering more mainstream languages? If so, are there any examples you'd like to see?

It's interesting that both Moores are noted for their appeals for improved efficiency.

jim said...

Chuck Moore did some incredible stuff with Forth. A good book is "Thinking Forth" by Leo Brodie; I believe it's a free download.

Also, you should check out Clojure. It's a new dynamic language for the JVM that is built around immutable data structures, excellent concurrency ideas, Java interop and Lisp syntax/macros. Cool stuff

Jeff Moser said...

jim: I did a search and found the book online as you said. It seems that Forth is sort of like thinking in the CLR's IL (stack based). It also seems a bit overly terse (though not quite like J). I suppose it looks better with more experience.

I hadn't looked into Clojure before. It's sort of like what F# is on the CLR side, but with more Lisp syntax. I welcome languages that try out new ideas. Like I mentioned, it's really nice that we can leverage huge frameworks like the JRE's libraries on the JVM.

Thanks for the info!

Ivan Tihonov said...

Joe, Factor, even ANS Forth are against Moore's ideas. They all break the "simplicity" rule.

Do not use Forth as a programming language. Read "Thinking Forth", ultratechnology.com and colorforth.com and use these ideas in a language of your choice.

It's not about stacks and reverse polish notation. It's about being stupid in a clever way.

There is a reason for not using the Forth language. Your environment is C-friendly. Maybe C#-friendly. Or Java-friendly. But it was certainly designed without Forth in mind.

Programming in Forth is like driving a car built for an octopus-like alien race with your two human hands. It just does not fit.

It is pretty comfortable if you have no need to deal with all this OS/API/XML stuff that burdens our minds and hard drives. But life differs.

Btw, here is an interesting view of why Forth has been unsuccessful in the mainstream.

http://www.msmisp.com/futuretest/Forth's_Dilemma.htm

jim said...

I don't think I'd say it quite as strongly as Ivan did, but I kind of agree. Forth has some really good ideas, and if you're willing to dive into the internals, you'll see just how much you can accomplish with very little code. Chuck Moore did a complete CAD system for designing ICs in something like 8K. That included simulation, with virtual probes he could put on individual circuits to see how the signals looked.

Forth is excellent for low-level stuff, but you have to be disciplined about maintaining simplicity. What's special about "Thinking Forth" is that it lays out how to think about programming and how to keep things simple. Read it for the ideas it has that you can apply to other languages, not for the sake of learning Forth. A similar book that I've read lately is "The Little Schemer". Highly recommend it as well.

Peter Christensen said...

To give you an idea of how much I enjoyed this, here's a thought I had in the shower this morning (2 months after I first read it):

Hardware could follow Moore's Law because it was one identical solution to everyone's problem: general computing hardware. Once the variation and diversity of solutions moved above the hardware and into software, a single hardware solution could apply to every problem. All other solutions became unnecessary, and all effort got focused on increasing the speed of the accepted hardware solution.

Software is different: every problem is different, and solutions to one problem are rarely applicable to another. Therefore, optimization, energy, and time are divided widely among different projects. The bigger the mindshare of a project, the more mature the software is (like Linux or Apache).

Also, software is trying to meet an infinite and growing number of needs, while hardware just had to meet one need, and meet it extremely well: fast computation.

Hardware is like making airplanes: even competitors are more similar than different. Software is like movies: each one has different goals, needs, resources, momentum, and direction, and achieves wildly different results.

Jeff Moser said...

Ivan Tihonov and jim: Good thoughts.

Peter Christensen: Would you say that hardware design is really an identical solution? With all of the pipelining and so forth, it seems like modern chips have quite a bit of innovation and different solutions. Maybe it was that the interfaces were a bit clearer? But perhaps there was an element of making hardware more of a commodity.

Unfortunately, I don't know enough about hardware to compare the AMD and Intel designs. What is interesting, though, is that the functionality had to be identical (plus or minus some fancy multimedia instructions for parallel computing). This is something that you almost never see in software outside of spacecraft or flight control systems (two programs designed to do the exact same thing in the same way). Some might argue that IIS 7 and Apache 2 might be close enough to that (at least in theory, with a modular HTTP architecture).

Is software different in a fundamental way? What about reusable components in COM or .NET? Perhaps the arrangement of components is different, but ultimately there is a lot of reuse. Unfortunately, reuse often isn't a major priority.

I think that if I understood more about modern chip design, I'd probably conclude that it's more varied than it appears. I still think that the biggest influence of Moore's Law is that it clearly gave chip designers a feeling for when they were falling behind. We just don't have that in software.

I think that's why the future needs to have more of the software pushing the hardware design rather than the other way around.

Perhaps I didn't fully catch your thread. Maybe you can elaborate more on it?

Interesting thoughts; thanks for sharing!

Anonymous said...

Moore's Law is really about scaling and the ability to predict performance and control costs over multiple scales of manufactured size.

Jeff Moser said...

Anonymous: Sure. I agree with the classic definition of Moore's Law. What I was going for in this series was to see what it would take to have the Moore's Law metaphor become a reality in terms of software development/expressiveness.