In the new world of single-user computers (personal computing), programming languages evolved with completely different purposes. User interface had never been a consideration before, but researchers soon realized they needed to put the personal into personal computing.
Smalltalk: It’s turtles all the way down
Alan Kay was lucky enough to be born in 1940. That was lucky because it meant that, when he graduated with a PhD from the University of Utah, and got his first job at Xerox, it was 1969. This was a time when US companies were spending lavishly on research and development, and researchers had few constraints.
Kay was a polymath, having learnt how to read at 3 years old and having read more than 100 books by the time he started 1st grade. His passions, which he brought with him to Xerox, were aesthetic beauty and early learning, and particularly the work of Jean Piaget. Piaget’s constructivist theories strongly influenced Kay’s approach to programming-language design, leading him to seek out primitive building blocks that could be used to create objects of incredible complexity. He wanted to create a language that would be easy for children to learn, but that would have the power to continue to be used even as the child grew to adulthood. Easy to learn and easy to use are rarely the same thing. It is exceedingly difficult to create something that does a good job at both.
Smalltalk, which Kay created over 8 days in 1972, was a masterpiece of programming-language design. He was able to accomplish it in that short amount of time for two reasons: firstly, he had been thinking about it for years, and secondly, he had already made several earlier attempts (also using the name Smalltalk) and therefore knew what he did not want to do.
It was a masterpiece because Kay had managed to reduce the concept of programming to its barest essentials. Smalltalk has only 6 reserved keywords and the language specification fits on one page. He accomplished this by creating a synthesis of all that had come before him. He took composability from Lisp, as well as objects from Simula, and added 3 critical ingredients:
- Encapsulation: An object’s internal state was its own business—no code outside of the object could mess with it.
- Messaging: The way objects communicated with each other was by sending messages.
- Everything is an object: As in, “turtles all the way down.”
He envisioned each object like a computer and message passing like networking between computers. He successfully captured the most primitive building blocks of programming. Smalltalk is the quantum mechanics of programming.
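The three ingredients can be loosely sketched in Python rather than in Smalltalk syntax (a rough analogue only; the `Counter` class and the `send` helper are inventions for this illustration, not anything from Smalltalk itself):

```python
# A loose Python analogue of Kay's three ingredients (illustrative only).

class Counter:
    """Encapsulation: _count is the object's own business."""
    def __init__(self):
        self._count = 0

    # Methods play the role of messages the object understands.
    def increment(self):
        self._count += 1
        return self

    def value(self):
        return self._count

def send(receiver, message, *args):
    """Messaging: resolve the message on the receiver at run time."""
    return getattr(receiver, message)(*args)

c = Counter()
send(c, "increment")
send(c, "increment")
print(send(c, "value"))  # 2 -- and in Smalltalk, even 2 is itself an object
```

The point of routing everything through `send` is that the caller never touches `_count` directly; it can only ask the object to do things, which is exactly the computer-and-network picture Kay had in mind.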
And just as every earlier language owed a debt to Fortran and Lisp, every language we will look at in the rest of this series, with the exceptions of C++ and Haskell, owes a debt to Smalltalk.
C++: C with class(es)
Bjarne Stroustrup, a Danish computer scientist, was working for AT&T Bell Laboratories and experimenting with updating Unix for distributed computing.
Distributed computing is exactly what it sounds like: a single program that runs across more than one computer. To do this successfully, the program has to be separated into chunks, and because networking between computers is not 100% reliable, those chunks have to be made independent of each other somehow. Keeping track of all this in a low-level language is extraordinarily difficult, so abstractions were needed.
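The problem can be sketched in miniature (in Python, purely for illustration; the flaky "network" and the retry helper are invented stand-ins, not anything Stroustrup wrote):

```python
# Sketch of the problem: pieces of one program must talk over a link
# that sometimes fails, so each piece needs to cope on its own.

class FlakyNetwork:
    """Simulated link that drops the first two deliveries."""
    def __init__(self, drops=2):
        self.drops = drops

    def send(self, message):
        if self.drops > 0:
            self.drops -= 1
            raise ConnectionError("packet lost")
        return f"delivered: {message}"

def send_with_retry(net, message, attempts=5):
    # The abstraction: callers see one reliable-looking operation
    # instead of the retry bookkeeping underneath.
    for _ in range(attempts):
        try:
            return net.send(message)
        except ConnectionError:
            continue
    raise ConnectionError("gave up")

print(send_with_retry(FlakyNetwork(), "hello"))  # delivered: hello
```

Multiply this bookkeeping across every interaction in a program and it becomes clear why higher-level abstractions, rather than hand-written C, were needed.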
As we know, Unix is written in C, so Stroustrup decided to create some enhancements to C that would make writing programs for distributed computing easier. He was familiar with Simula, which he had learnt while working on his PhD thesis, so he came up with the idea of merging Simula’s approach of object-oriented high-level abstraction with the low-level power of C.
Another key motivating factor for Stroustrup was his dissatisfaction with the available types—he wanted to create a programmer-extensible type system, where the new types were first-class citizens (something we will see again with Haskell).
In his own words:
“A class is a user-defined data type… having different rules for the creation and scope of built-in and user-defined types is inelegant… so I wanted the C notion of pointers to apply uniformly over user-defined and built-in types.”
“This is the origin of the notion that over the years grew into a ‘principle’ for C++: User-defined and built-in types should behave the same relative to the language rules and receive the same degree of support from the language and its associated tools.”
The philosophy of C++ is in its essence quite simple: Create a language capable of high-level abstractions without compromising the low-level power of C in any way.
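The principle Stroustrup describes can be illustrated with a user-defined type that obeys the same rules as a built-in one (sketched here in Python for brevity; C++ expresses the same idea through classes and operator overloading, and the `Money` type is a made-up example):

```python
# A user-defined type that behaves like a built-in one under the
# same operators -- the "principle" Stroustrup describes.

class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):      # + works just as it does for int
        return Money(self.cents + other.cents)

    def __eq__(self, other):
        return self.cents == other.cents

    def __repr__(self):
        return f"Money({self.cents})"

assert Money(150) + Money(250) == Money(400)  # same rules as 150 + 250
```

The user of `Money` writes `a + b` exactly as they would for integers; the language makes no distinction between the built-in type and the programmer's own.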
Objective-C: Objects as interchangeable components

Brad Cox was on a mission. He was passionate about software reuse and wanted to create an ecosystem that would promote it. He had worked with Smalltalk in 1981 and believed that objects could serve as interchangeable software components.
Like Stroustrup, he decided to start with C, but that is as far as the resemblance goes. C++ was released in 1985 as a formal description of the language (which was the typical way languages were introduced in those days). It had no standard libraries, a situation that was only remedied by Alexander Stepanov in 1993 with his Standard Template Library.
Cox released Objective-C in an entirely new and different way. Because of the focus on reuse from the very start, it came packaged with extensive class libraries that standardized many aspects of building an application. When Cox published the book Object-Oriented Programming: An Evolutionary Approach, it was more about how to use the language than simply a formal description of it.
Steve Jobs decided to license Objective-C and use it as the basis of the NeXTSTEP object framework for his NeXT workstation. He and his team hit a home run: the World Wide Web was created using the NeXTSTEP framework, in Objective-C, on a NeXT computer. Sir Tim Berners-Lee invented the web as a means of sharing research documents while working as a software consultant at the CERN particle-physics research facility. He says that, because of the simplicity and straightforwardness of the NeXTSTEP libraries, creating HTTP and the web browser “was remarkably easy”.
Objective-C and its associated class libraries (now named Cocoa) form a robust and very usable framework that is still used extensively today for Macs, iPhones, and iPads. It’s a tremendous productivity enhancer because it gives developers simple, coherent access to complex preprogrammed capabilities, which was Cox’s driving philosophy.
Haskell: Design by committee

Haskell was created by a committee. In 1987, at the Conference on Functional Programming Languages and Computer Architecture (FPCA ’87) in Portland, Oregon, a committee of mostly academic researchers formed to create a single, standard, purely functional programming language.
Their motivation was a strong consensus that “widespread use of this class of functional languages was being hampered by the lack of a common language.” The committee’s goals were to design a language that:
- Would be suitable for teaching, research, and applications, including building large systems.
- Would be completely described via the publication of a formal syntax and semantics.
- Would be freely available: anyone would be able to implement the language and distribute it to whomever they wanted.
- Would be based on ideas that enjoy a wide consensus.
- Would reduce unnecessary diversity in functional programming languages.
The last goal, in many respects, was the most significant. The committee “hoped that extensions or variants of the language would appear, incorporating experimental features” and actively encouraged this. The design of the language is inherently extensible, as programmer-defined types are not only allowed, but are the fundamental mechanism of creating a Haskell application.
Haskell was designed to embody the functional-programming paradigm. As such, it is “pure”: everything is expressed as a function, and no “side effects,” such as mutating variables, are permitted. It is statically typed, but as described above, the type system is programmer-extensible and is central to how Haskell programs are structured. This type system was designed to provide elegant, type-safe operator overloading, and the language’s most famous feature is lazy evaluation.
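Python is neither pure nor lazy, but a rough analogue of these two ideas can be sketched with generators (illustrative only; in Haskell, purity and laziness are properties of the whole language, not opt-in constructs):

```python
# Python sketch of two Haskell ideas (analogy only).

from itertools import count, islice

# Purity: output depends only on input; no outside state is mutated.
def double(x):
    return 2 * x

# Lazy evaluation: describe an infinite sequence; nothing is computed
# until a consumer actually demands values.
evens = (double(n) for n in count(0))    # conceptually infinite
first_five = list(islice(evens, 5))
print(first_five)  # [0, 2, 4, 6, 8]
```

Because evaluation is driven by demand, a Haskell program can define infinite structures like this one and take only what it needs.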
Perl: The duct tape of the web

Perl was the first scripting language to hit the big time. The languages it was based upon (C, shell script, AWK, sed) had been around for a long time on Unix and were used extensively by system administrators and power users to automate repetitive tasks and create small utility programs such as installers and backup programs.
When Larry Wall first created Perl in 1987, his intention was to fill the gap between those shell-scripting languages and a language like C with a new one you could write an entire application in.
Nobody (not even Wall) expected Perl to completely change the world, but Perl unexpectedly developed a superpower that led to it having a dominant position for a few years. It became the “duct tape... that holds the entire Web together” when the Apache web server became the dominant server on the Internet in the mid-1990s.
The first web servers used the Common Gateway Interface (CGI) as a standard protocol for web servers to execute programs. But Apache allowed extensions, called modules, as an alternative to CGI, and the mod_perl module was the fastest of the bunch. Writing a web-server program with mod_perl could improve performance by a few orders of magnitude.
As Wall later put it:

“I realized at that point that there was a huge ecological niche between the C language and Unix shells. C was good for manipulating complex things—you can call it ‘manipulexity.’ And the shells were good at whipping up things—what I call ‘whipupitude.’ But there was this big blank area where neither C nor shell were good, and that’s where I aimed Perl.”
This article is part of Behind the Code, the media for developers, by developers.
Illustration by Victoria Roussel