Alan Kay Disapproves of OOP

The Inventor of Object Orientation Disapproves of OOP

Imagine there had been a couple of people, buried somewhere in the past, who basically invented the complete foundation of the technologies our modern reality is built upon (and will continue to be built upon for the foreseeable future). Sounds like a cheesy creation myth? It’s not; it’s actually what happened at Xerox PARC starting in the 1970s. The people working there in those early days are the actual people behind technologies like the graphical user interface, WYSIWYG editing, Ethernet, the laser printer and object-oriented programming.

The sheer number of innovations produced at PARC is breathtaking, and most of them formed the basis for the technological development that came afterwards. People like Steve Jobs and many Microsoft managers harvested a lifetime of business success from a couple of inspiring visits to PARC. The people at PARC deserve a closer look, because the things that were invented there are essential to the field of software.

Adele Goldberg, Alan Kay, L. Peter Deutsch, Ronald Kaplan, Warren Teitelman, Dan Ingalls and Diana Merry were among the first people to take it upon themselves to design something that comes close to what we now know as integrated development environments. What that means is that – for the first time – people actually got their hands dirty programming big systems. They produced the full stack of tools required to do that.

The building blocks of a programming system

The scientists at PARC created one of the first truly object-oriented programming languages; compilers, interpreters and even just-in-time compilers originated there, as did pioneering work on virtual machines and garbage collectors. Little known: BitBLT, the neat trick that allowed for the first high-performance graphics systems, was invented there – and not by the folks behind the Amiga.
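To give an idea of why BitBLT mattered: it moves a whole rectangular block of pixels between rasters in one operation, optionally combining source and destination pixels with a raster operation. Here is a minimal sketch of that idea – in Python for readability, which is of course not what PARC used; the function name and the list-of-rows raster representation are made up for illustration:

```python
# Minimal sketch of the BitBLT idea: copy a w*h block of pixels from a
# source raster to a destination raster, combining each pair of pixels
# with a raster operation. Real implementations work on packed words.

def bitblt(src, dst, sx, sy, dx, dy, w, h, op=lambda s, d: s):
    """Blit a w*h block from (sx, sy) in src to (dx, dy) in dst.

    `op` combines source and destination pixels; the default simply
    overwrites, while `lambda s, d: s ^ d` gives an XOR blit that can
    draw and erase a cursor with the same call.
    """
    for row in range(h):
        for col in range(w):
            s = src[sy + row][sx + col]
            d = dst[dy + row][dx + col]
            dst[dy + row][dx + col] = op(s, d)

# Stamp a 2x2 sprite onto a 4x4 screen with XOR.
screen = [[0] * 4 for _ in range(4)]
sprite = [[1, 1], [1, 0]]
bitblt(sprite, screen, 0, 0, 1, 1, 2, 2, op=lambda s, d: s ^ d)
```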

The good people at PARC went through the complete software life cycle with all of it, so their experience covered the whole spectrum of trial, error and making it better the second time.

All of this happened during the 1970s and 1980s, when we can observe a co-evolution of Smalltalk- and Lisp-based systems. Both developed their object systems with extensive class libraries. The people involved in software development during this time all experienced first-hand what it meant to prototype, maintain and extend big systems in an object-oriented fashion – as early as the 1970s and early 80s. What is kind of a surprise, though:

They were wildly misunderstood and ignored for decades to come! No one cared to learn from them.

Alan Kay disapproves of what became of object orientation

It’s not a secret that the person who coined the term OO – Alan Kay – is not very happy about the concepts that over time happened to become associated with it. Nowadays object orientation means:

  • inheritance
  • polymorphism
  • encapsulation

Whereas Alan Kay had the following principles in mind:

  • I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages (so messaging came at the very beginning – it took a while to see how to do messaging in a programming language efficiently enough to be useful).
  • I wanted to get rid of data. The B5000 almost did this via its almost unbelievable HW architecture. I realized that the cell/whole-computer metaphor would get rid of data, and that “<-” would be just another message token (it took me quite a while to think this out because I really thought of all these symbols as names for functions and procedures.
  • My math background made me realize that each object could have several algebras associated with it, and there could be families of these, and that these would be very very useful. The term “polymorphism” was imposed much later (I think by Peter Wegner) and it isn’t quite valid, since it really comes from the nomenclature of functions, and I wanted quite a bit more than functions. I made up a term “genericity” for dealing with generic behaviors in a quasi-algebraic form.
  • I didn’t like the way Simula I or Simula 67 did inheritance (though I thought Nygaard and Dahl were just tremendous thinkers and designers). So I decided to leave out inheritance as a built-in feature until I understood it better.

Alan Kay, in response to a question from Stefan Ram. Source: Re: Clarification of “object-oriented”

While some of those things might overlap, the basic direction and the mental model behind them are fundamentally different:

Designing software around decoupled subsystems that communicate via messages, are designed around behaviors, and keep repetition low through techniques that allow for the generic expression of behavior.

He was fully aware of the fact that all of this is achievable from fundamentally different directions and in fundamentally different ways. That’s why he chose this generic way of talking about it – something that we modern people seem to have completely forgotten.
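To make that direction concrete, here is a small sketch of objects in Kay’s sense: self-contained little computers that hide their state completely and react only to messages. It is written in Python, every name in it is made up for illustration, and it is one possible reading of the idea rather than the definitive one:

```python
# Objects as "little computers on a network": state is never handed
# out, the only way in is a message.

class Account:
    def __init__(self, balance):
        self._balance = balance  # private state, never exposed directly

    def receive(self, message, *args):
        # Dispatch on the message name. Unknown messages get a polite
        # rejection instead of the caller poking at the state.
        handler = getattr(self, "_on_" + message, None)
        if handler is None:
            return ("message-not-understood", message)
        return handler(*args)

    def _on_deposit(self, amount):
        self._balance += amount
        return ("ok", self._balance)

    def _on_balance(self):
        return ("ok", self._balance)

# Anything that answers the same messages can stand in for Account –
# generic behavior without a common superclass, i.e. without inheritance.
account = Account(100)
print(account.receive("deposit", 50))   # ('ok', 150)
print(account.receive("balance"))       # ('ok', 150)
print(account.receive("withdraw", 10))  # ('message-not-understood', 'withdraw')
```

Note how close this stays to the quote above: messaging comes first, no data is exposed, behavior is generic, and inheritance appears nowhere.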

What kind of thing is this “object-orientation” actually?

If one looks at what object orientation tries to achieve at the end of the day, one could say that it’s a set of ordering and design principles for program behavior that help reduce complexity and raise maintainability. If there is something we can learn from the big software systems created over the last 30 years in programming languages like C++ and Java, then it’s that they somehow fail to deliver on those goals. Alan Kay’s emphasis gets far closer to what is actually important than the concepts of inheritance, polymorphism and encapsulation ever will.

Why did the meaning of object orientation change?

There are many different opinions about the reasons behind this change. Some people argue that Alan Kay’s definition just wasn’t what the market actually required, so it got replaced by something else. I don’t agree with this and tend more towards the following explanation.

Before the 1970s, the level of computerization in businesses and corporations wasn’t very high. During the 1980s the corporate world suddenly began to grasp the potential of the technology and started to adopt it. What it quickly learned was that computing is way more expensive than anyone would like it to be. At the same time the micro-computer revolution happened and the market was flooded with cheap and underpowered hardware. While it’s true that only this price drop made broad adoption possible, it also caused a complete and utter setback in what was possible programming-wise: the necessary tools were chosen mainly for their ability to perform well on underpowered hardware. This channeled the choice of tools for the decades to come.

Instead of Lisp and Smalltalk, lower-level languages like COBOL, C and Pascal were needed to get acceptable performance out of the CPUs of low-cost systems. This established a couple of companies on the market and trained the people programming for them and running them in the aforementioned programming systems. And this is what shaped – and still shapes – the perspective on object orientation today.

Strangely, some of the seemingly big concepts developed over the years don’t seem to add anything to those ordering and design principles for program behavior anymore. The “Liskov Substitution Principle”, for instance, is a platitude on top of inheritance. It doesn’t say anything about how inheritance helps to improve the maintainability of big systems, or why it makes sense to treat classes of objects as equivalent to the notion of type.
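For readers who haven’t met it: the principle demands that code written against a base class keeps working when handed any subclass. The classic rectangle/square textbook example – sketched below in Python, with all names chosen for illustration – shows what a violation looks like, and also how little the principle says beyond “don’t break the superclass contract”:

```python
# The textbook LSP violation: mathematically a square is a rectangle,
# but as a mutable subclass it breaks code written against Rectangle.

class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def set_height(self, h):
        self.h = h

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    def set_width(self, w):
        self.w = self.h = w   # a square must keep its sides equal

    def set_height(self, h):
        self.w = self.h = h

def stretch(r):
    # Written against the Rectangle contract: setting the width
    # must not change the height.
    r.set_width(4)
    r.set_height(5)
    return r.area()

print(stretch(Rectangle(2, 3)))  # 20, as the contract promises
print(stretch(Square(2, 2)))     # 25 – the substitution silently broke
```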

The modern field of object-oriented software design seems to be quite content with this. The striving for simplicity as a core value got lost on the big software companies’ way to the top. Many of them operate so profitably (software as a business has no reproduction cost per sold unit) that there is simply no need to rethink their tooling.

I will follow up on this train of thought in a later article.