A Potted History of Computing

There are full-blown books at one end of the reading spectrum and small articles or blogs, usually on the Internet, at the other end. This potted history sits somewhere in the middle. The sole objective is to give the reader a taste for the subject. Hopefully, there is sufficient information in the bibliography section to allow the reader to delve further if he or she is interested.

There is no shortage of material available on computing and its history. Apart from books, you can find a myriad of detailed articles and essays on a wide range of topics, the majority of which are freely available on the Internet. My objective is simply to provide a (hopefully) useful overview covering the period from before the first computers appeared through to around 2010.

Comments and feedback are welcome via the contact me page.

Introduction

This potted history generally proceeds in a chronological fashion, commencing with a summary of various inventions from the tally stick in prehistory up to mechanical analogue devices of the late 19th century.

It is followed by coverage of the period from the late 1930s up to the end of the 1960s, which corresponds to what are usually known as the first three generations of the modern computer. The pace of change then increased markedly, and so each of the succeeding decades is dealt with separately. 

I tend not to say very much about the period from 2010 onwards for the simple reason that, as this is a history, it is difficult to gain any perspective on something that happened yesterday.

I should point out that the history of computing is littered with instances where advances in software technology required matching advances in hardware speed and capacity before they could be deployed successfully at the workload volumes that businesses of any reasonable size would expect. In truth, many software people either assumed the necessary hardware was already there, or would arrive soon, or perhaps did not think too hard about such issues. In any event, hardware improvements in speed and capacity did subsequently provide the base that ever-hungry software needed. Relational database technology is one obvious example.

Finally, many developments have their roots in research work that was started in the 1960s. I may not have mentioned every last one. Mea culpa.

Contents

From Tally Sticks to Analogue Devices

From the 1930s to the end of the 1960s
Early Digital Computers
Second and Third Generation Machines
Early Computer Companies
Arrival of Programming Languages
Early Operating Systems
Early Magnetic Data Storage
Early Computer Communications
Application Software

The 1970s
Large Scale Integration
Minicomputers
Data Modelling
Hierarchical and Network Database Management Systems
TP Monitors
Online Data Capture
Terminal Emulators
Microcode and the Introduction of System Emulators
Evolution of Operating Systems in the 1970s
Virtualisation
Appearance of the Unix Operating System
Commingled Environments
Fault Tolerant Systems
Supercomputers I
Early Microprocessors
Early Operating Systems on Microcomputers
The Appearance of Word Processing
The Appearance of Spreadsheets
GUIs

The 1980s
Workstations
The Introduction of RISC Chips
The Evolution of Microprocessors
Operating System Kernels
Clusters and Network File Systems
Forerunner of the Internet
Evolution of Local Area Networks in the Early 1980s
Relational Database Management Systems
The Introduction of Structured Design and Programming Techniques
Evolution of Hard Disks
Solid State Devices

The 1990s
The World Wide Web
The Appearance of Search Engines
Web Cache Servers, Firewalls and Antivirus Software
Email
Blogging
Fat Clients
Remote Desktop Services
Middleware
Evolution of Operating Systems in Microcomputers
Supercomputers II
High Availability Systems
Object Orientation
Data Warehousing
Embedded Systems
Personal Organisers and Assistants
Open Source Software

The 2000s
Processor Cores
The Advent of Larger Servers
Interconnects
SAN and NAS Storage
Virtualisation became popular again
The Internet in the 2000s
Server-side Technologies
NoSQL
Smartphones and their Operating Systems
Artificial Intelligence
Home Networks
Gaming Hardware
Cloud Computing
Start of the Internet of Things

Whatever Happened to Them?

Odds and Sods

Bibliography and Further Reading
Acknowledgements
Version History

From Tally Sticks to Analogue Devices

Computers, as we would understand them, arrived in the late 1930s and early 1940s with what are commonly known as the first-generation machines. However, let us start by briefly summarising major inventions before this period.

A tally stick was an ancient device which was used to record numbers by means of cutting notches on a piece of wood or bone. It is conjectured that some animal bones which have been discovered, dating back as far as the Upper Paleolithic era, may have been the first tally sticks.

The first counting tool was the abacus, which is thought to have been invented by the Babylonians around 2300-2200 BCE. Variants of the original design gradually appeared in most parts of the known world, from Europe, across Asia, to China and Japan. The abacus is of course still used today. For example, the Cranmer abacus is used by the blind.

The slide rule is thought to have been invented around 1622 by the Reverend William Oughtred, based on work that had been carried out on logarithms by John Napier and Edmund Gunter.

There is some debate as to whether the first calculator was invented by Wilhelm Schickard in the 1620s or Blaise Pascal in the 1640s; the latter’s invention could add or subtract 2 numbers. An improved version was produced by Gottfried Leibniz in the 1670s, which was capable of performing addition, subtraction, multiplication and division. However, it was to be around 1820 before the first successful mass-produced mechanical calculator emerged. The Arithmometer, as it was called, was patented by Thomas de Colmar.

It was also in the early 19th century that we come across a significant early example of programmability when Joseph-Marie Jacquard, a French weaver and merchant, developed a loom in which the pattern being woven was controlled by a chain of punched cards. The pattern could be changed by altering the punched cards, not the machine.

Around the same period, Charles Babbage, sometimes hailed as the father of computing, initially invented a Difference Engine in 1823 to help with navigational calculations. He followed this up with an idea for a more general purpose mechanical machine, which he called the Analytical Engine. This included an arithmetic unit, a control flow which allowed conditional branches and loops, plus an integrated memory. Program and data input was to be via punched cards, while the output could be to a printer, curve plotter, bell or punched card. Ada Lovelace produced her “notes” on this proposed computer, which are considered to be the first description of programming. Unfortunately, Babbage never completed the build of his design.

Analogue computers began to appear in the second half of the 19th century to solve specific problems, by using continuously changing values to arrive at a solution. For example, the first mechanical machine which made tidal predictions was developed by Sir William Thomson, later Lord Kelvin, in 1872.

From the 1930s to the end of the 1960s

This period covers the first three generations of the modern computer: vacuum tube technology; transistors; and integrated circuits.

Early Digital Computers

A theoretical machine, which would work on discrete rather than changing values, was first described by Alan Turing in his 1936 paper, On Computable Numbers. He termed it a universal machine, although it is more usually known nowadays as a Turing machine. John von Neumann, arguably the foremost mathematician from the 1930s up to his death in 1957, acknowledged that the central concept of the modern computer was due to this paper.

The first-generation digital computers, developed roughly between 1939 and 1956, used vacuum tubes for logic circuitry. A prototype, the Atanasoff-Berry computer, was developed in 1939. Unfortunately, the tubes were not very reliable: one could fail every couple of days, and it could take some time to isolate the faulty one. Memory was sometimes implemented using a revolving magnetic drum with fixed read / write heads, which had been invented by Gustav Tauschek in 1932.

Colossus, whose existence was not acknowledged until the 1970s, was the name of a series of machines which were designed by the British from 1943 to aid cryptanalysis of the German Lorenz cipher during the latter part of World War II. Tommy Flowers, who was responsible for the design, came to the conclusion that vacuum tubes were more reliable if they were kept running, rather than being switched off after each program run.

Another wartime development was ENIAC, a system that was designed by the Americans in 1945, and initially used in the development of the hydrogen bomb. To give an indication of the size and weight of these early computers, ENIAC took up around 1,800 square feet, used about 18,000 vacuum tubes, and weighed almost 50 tons.

These early machines had no stored program. They were typically programmed by plugboard wiring and the setting of switches.

Further computers were designed and built in the late 1940s and early 1950s, including the Ferranti Mark 1, IBM 701 and the UNIVAC 1, the latter being the first commercial system that was sold to a client. It became known for predicting the outcome of the 1952 US Presidential election.

Meanwhile, work had started on the development of magnetic core memory in the late 1940s, where each bit was represented by a ferrite ring which was connected to a number of wires. A bit could represent zero or one, depending on whether it was magnetised in a clockwise or anti-clockwise direction. Core memory was used in computers from around 1955 to 1975, and the term “core” is still sometimes used when referring to memory, although possibly just by old fogies such as myself.

Second and Third Generation Machines

Hardware technology underwent two major changes between the mid-1950s and the end of the 1960s. Second Generation Computers (roughly 1956-63) saw the use of transistors mounted on printed circuit boards rather than vacuum tubes. A transistor is a semiconductor device which amplifies or switches electronic signals and electric power. It had first been developed by Bell Labs in 1947, and the first prototype computer systems using this technology appeared around the mid-1950s. The IBM 1401, UNIVAC 1107 and the DEC PDP-1 were examples of second-generation computers.

As transistors became smaller, integrated circuits appeared which could house many transistors on a small flat piece of semiconductor material, typically silicon. The first computers to employ this technology were used by the military in the early 1960s, with commercial systems following on from the mid-1960s, although it was to be towards the end of the decade before they became mainstream offerings, including on the later IBM System/360 models and the early DEC PDP-11s.

Early Computer Companies

As I have mentioned several companies, it is time that I briefly interrupted the flow and introduced some of them more formally.

IBM became the most successful company in the computer industry in the 20th century. It had been formed as CTR (the Computing-Tabulating-Recording Company) in 1911, an amalgamation of four companies, the most notable being Herman Hollerith's Tabulating Machine Company; Hollerith had developed his punched card tabulating technology in the 1880s, and it was used in the 1890 US census. CTR changed its name to IBM (International Business Machines) in 1924. Apart from tabulating technologies, IBM introduced the 80-column punched card, and developed electric typewriters, along with clocks and other time-recording devices.

Burroughs was another long-established company, being founded in 1886, selling adding machines which were still being called arithmometers at the time. Its first computer product in 1957 was the B205 tube computer. Burroughs concentrated heavily on the banking sector, developing branch terminals in the late 1960s.

J. Presper Eckert and John Mauchly, who had built the ENIAC computer mentioned earlier, subsequently formed their own company, EMCC, where they designed the BINAC computer in 1949. The company was relatively short-lived, and it was acquired in 1950 by Remington Rand, of typewriter fame. Eckert and Mauchly had already started work on what was to become the UNIVAC I, which was delivered in 1951. Univac in fact became the name of the division within Remington Rand. A series of subsequent mergers and reorganisations led to the Sperry Corporation, and ultimately to Unisys, which was formed in 1986 when the company merged with Burroughs.

Ken Olsen and Harlan Anderson were two engineers who worked at MIT Lincoln Laboratory. They came up with the idea of time-sharing computing to meet the needs of students for adequate machine time. They founded DEC (Digital Equipment Corporation) in 1957, but their initial financial backer persuaded them to establish a solid base for the company before developing their idea, as computer companies were not popular with financial backers at that time. And so, they initially supplied a range of digital devices for use in laboratories which became popular with other computer companies. Their success allowed them to progress their original idea, resulting in the PDP-1 which was first delivered in 1960.

In the UK, the British Tabulating Machine Company (BTM) developed the HEC4, a first-generation computer, in 1951. BTM merged with Powers-Samas in 1959 to form ICT (International Computers and Tabulators), whose ICT 1201 was another first-generation machine. ICT subsequently acquired the business computer division of Ferranti in 1963, and the successful 1900 series followed in the mid-1960s, building on the acquired Ferranti assets. A further merger, this time with English Electric in 1968, led to ICL.

The Arrival of Programming Languages

In 1945 the von Neumann architecture (or model) described the main features of a modern digital computer.

In this architecture, program instructions reside in memory along with data, as opposed to being hard-wired. Von Neumann's paper was originally written as an enhancement to the design of the EDVAC computer, which was used by the US Army. The Manchester Baby and EDSAC at Cambridge (which used mercury delay line memory) were the first working computers to adopt what became known as the stored-program concept.

Programming for early computers was initially done in machine code, where the instructions were held in binary format. Needless to say, it was somewhat cumbersome and long-winded to write such programs. It was not long before assembler languages, termed the second generation, appeared. They had an instruction set with English-like mnemonics which made it easier to write programs. The source program had to be converted into machine code by an assembler before it could be executed. Kathleen Booth is credited with creating the initial assembly language and designing the assembler for the first ARC computers at Birkbeck College in London in the late 1940s.

There was obviously a demand for programming languages that were even more English-like, and hence easier and quicker to write and test. FORTRAN and FLOW-MATIC were the first to appear around the mid-1950s, followed by ALGOL and COBOL towards the end of the decade.

Early Operating Systems

An operating system (OS) may be crudely defined as system software which controls and manages both the hardware and software resources, providing services to application programs.

The first computers did not have operating systems. Programs interfaced directly with the hardware. Work on their development started at the beginning of the 1950s so that application programmers would be relieved of the complexities of dealing directly with the hardware. Early examples were quite rudimentary. Surprisingly, they were initially developed by customers, not by the hardware suppliers, one example being GM-NAA I/O which was developed in 1956 by General Motors and North American Aviation for the IBM 704.

Burroughs produced MCP (Master Control Program) for its B5000 series in 1961. This is noteworthy for the fact that it was written in ESPOL, a high-level language which was based on ALGOL 60. Operating system code was normally written in a low-level language such as assembler. EXEC I, an operating system for the UNIVAC 1107, followed in 1962.

Meanwhile, IBM was putting all its efforts into developing the ultimately successful System/360 hardware range which it introduced in the middle of the 1960s, running the OS/360 operating system on the middle and high-end machines and DOS/360 on the smaller systems. Virtual storage, which increased the effective addressing capabilities of these systems, appeared on some of the later System/360 models and across the successor System/370 range in the early 1970s.

The other point of note around this time was the appearance of operating systems which supported time-sharing work, i.e. multiple online users. Examples included the Berkeley and Dartmouth Time Sharing Systems. It was on the latter system that the BASIC programming language was first developed in 1964.

Early Magnetic Data Storage

Magnetic tape for recording sound had been invented by Fritz Pfleumer in Germany in 1928. The technology was improved, but World War II precluded the dissemination of any information on the subject until after the end of the conflict.

Open reels of magnetic tape for data storage first appeared in the UNIVAC 1 in 1951. IBM soon followed, and indeed their technology quickly became the industry standard.

IBM also invented the hard disk, shipping the 350 disk with its RAMAC 305 system in 1957. The characteristics of this disk drive were: 50 platters, each with a diameter of 24 inches; 100 recording surfaces; 100 tracks per surface; a total capacity of 5 million characters (about 3.75MB); a rotation speed of 1200 rpm; and a transfer speed of 8,800 characters per second. It was rather a large beast!

By 1961 Bryant Computer Products had managed to produce a device with a capacity of up to 205MB on its 4000 series drives which had 26 platters with an even larger diameter of 39 inches. Unfortunately, this did not prove to be a successful product.

The above were, as you may have guessed, fixed disks. IBM shipped the 1311, the first removable disk drive, in the following year. Each disk pack had a capacity of 2 million characters.

The plug-compatible market, where other manufacturers supplied peripheral devices which could replace standard IBM devices, started in 1965 when Telex offered tape drives. This was followed in 1968 by Memorex who released its 630 disk which was compatible with the IBM 2311 device. Other companies, such as CDC and Storage Technology Corporation, soon began to provide plug-compatible devices.

Early Computer Communications

Batch workloads predominated in the 1950s and 1960s, particularly in commercial organisations, with online systems slow to appear. Modems had originally been developed back in the 1920s to support teletype machines. They allowed ordinary telephone lines to support remote communication, converting digital signals to analogue for transmission over the line to a modem at the other end, which would then convert them back to digital.

Modems were being used in the late 1950s with computer systems in the US defence industry. It was 1962 before AT&T introduced them to the commercial market, the initial Bell 103 device only operating at 300 bits per second.

Application Software

The 1960s saw the widespread use of computers by commercial companies to run their business applications. The software came from various sources:

  • Many were developed in-house, mostly in assembler languages with a gradual take up of COBOL
  • There were some instances in the early 1960s of skeleton software which was provided by the main hardware supplier that the customer could then customise. For example, IBM bundled its ’62 CFO package with its 1401 computer for small and medium insurance companies
  • Some of the main hardware suppliers set up user programming services departments to design and build bespoke systems for clients.

Full-blown packaged application software for the likes of payroll and accounting which was developed and supported by computer manufacturers and other third-parties, e.g. software houses, began to appear in the late 1960s.

The 1970s

The 1970s were characterised by:

  • Significant leaps forward in hardware technology where the rapid increase in the miniaturisation of components led to a growth in the minicomputer market around the start of the decade, and on to the appearance of the first microcomputers later in the decade
  • On the application software front, the recognition of the importance of data (and of Database Management Systems)
  • Moves towards the greater use of online systems
  • More sophisticated operating systems, including the appearance of Unix
  • The establishment of fault tolerant systems and supercomputers
  • The first signs of office applications.

Large Scale Integration

MOS (metal-oxide-semiconductor) transistors, invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, were to prove the basic building block of modern electronics. Large Scale Integration saw the ability to fit thousands of transistors onto a single chip. A variety of acronyms were introduced to indicate the ever increasing degree of miniaturisation, viz. LSI, VLSI (Very Large Scale Integration) and ULSI (Ultra Large Scale Integration).

Minicomputers

DEC introduced the first models in its highly successful 16-bit PDP-11 series in 1970, followed by its 32-bit VAX range from 1977. The company’s success inevitably led to competition in this market sector. Let me briefly mention some of the main players.

Data General was founded in 1968. Three of the four founders of the company were ex-DEC employees. Their first machine was the 16-bit Nova in 1969, which competed against DEC’s PDP-8.

Wang Laboratories had been founded in 1951, producing electric typewriters and subsequently calculators. By the late 1960s it was addressing both the word processor and data processing markets. Arguably, its first dedicated general purpose minicomputer was the Wang 2200 which initially shipped in 1973.

Hewlett Packard had originally been founded back in 1939. Its first product was an audio oscillator for Disney to test sound equipment, followed by various fast frequency counters and oscilloscopes in the 1950s. It entered the world of business computing in 1972 with the HP 3000 series.

Data Modelling

Computers had been seen as little more than giant calculators in the early years of this fledgling industry. However, the importance of data, including the relationships between data entities and the storage of that data, came to be appreciated by the management of companies.

Young and Kent of NCR led the way in this field when they presented a paper in 1958 on the need for “a precise and abstract way of specifying the informational and time characteristics of a data processing problem”. Significant research work subsequently took place on data modelling, as it became known, in the 1960s and 1970s. Notable contributors in the field included Bachman, Codd, Chen and Date.

Data modelling came to consist of three parts: the identification of data items; the grouping of those data items into what are called entities; and the identification of relationships between those entities.
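
To make these three parts a little more concrete, here is a minimal sketch in Python; the Customer and Order entities, and their attributes, are invented purely for illustration and are not taken from any particular methodology.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Order:                  # an entity grouping several data items
        order_id: int
        amount: float

    @dataclass
    class Customer:               # another entity
        customer_id: int          # data item
        name: str                 # data item
        orders: List[Order] = field(default_factory=list)   # relationship: one customer to many orders

    # Populate the model: one customer related to two orders
    alice = Customer(1, "Alice")
    alice.orders.append(Order(101, 25.00))
    alice.orders.append(Order(102, 40.00))
    print(len(alice.orders))      # -> 2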

In 1975 ANSI, the standards organisation, defined a data architecture as consisting of three layers: a description of all data items as understood by the users of the computer system; a single conceptual model which contains all the data items (without ambiguity); and a physical layer (the internal schema) which describes how the data is stored in the system, e.g. type and size.

Data Modelling techniques became an integral part of the systems analysis and design frameworks which subsequently appeared in the 1980s.

Hierarchical and Network Database Management Systems

Access methods to stored data had been largely of the home grown type with the odd exception such as indexed sequential files. This changed with the arrival of database management systems, principally on mainframe systems initially.

The hierarchical model is one in which the data are organised in a tree-like structure where a parent can have many children, but a child can only have one parent. IBM is credited with developing this model in the 1960s, introducing IMS (Information Management System).

Charles Bachman had come up with the concept of a network model in the 1960s where relationships could be established between record types in what are called sets. In this arrangement, many-to-many relationships are supported; that is, a record can be both a parent in one set and a child in a different set. The Bachman diagram is a conceptual data model which describes the entities in a system, along with their relationships. His work led to IDS (Integrated Data Store) which was developed at General Electric in 1963. It was subsequently converted into a product by Cullinane, and released in 1973 as IDMS, probably the most popular network database product. Cincom’s TOTAL DBMS was another successful product in this area.
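
A rough sketch may help to contrast the two models (the record types and keys below are made up, and this is not IMS or IDMS syntax): in the hierarchical example each child hangs off exactly one parent, while in the network example a single order record is a member of two different sets.

    # Hierarchical model: a tree, where each child has exactly one parent
    hierarchy = {
        "Sales": {"employees": ["Ann", "Bob"]},
        "IT":    {"employees": ["Cat"]},
    }

    # Network model: a record can be a member of more than one set.
    # Order "O-1" is owned both by customer "C-1" and by product "P-9",
    # something a strict tree cannot express directly.
    sets = {
        ("Customer", "C-1"): ["O-1", "O-2"],   # orders placed by customer C-1
        ("Product",  "P-9"): ["O-1"],          # orders which include product P-9
    }

    # Navigation means walking the set memberships
    print("O-1" in sets[("Customer", "C-1")] and "O-1" in sets[("Product", "P-9")])   # -> True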

TP Monitors

The first commercial online transaction processing system was SABRE, developed by IBM for American Airlines in the 1960s. It led to the development of control programs for online systems in the mainframe world in the 1970s, which were typically known as TP (Transaction Processing) Monitors. Their objective was principally to remove the need for application programmers to deal with the complexities of online processing, such as handling multi-phase transactions, logging and recovery in case of failure, et cetera.

IBM developed two such TP monitors: IMS DC which sat alongside its IMS database product and CICS, while Cincom created ENVIRON/1, a rival product in the IBM world. Other mainframe vendors who provided TP Monitors included ICL with TPMS; Burroughs with GEMCOS; Univac with TIP; and Honeywell with TDS. Tuxedo was subsequently developed by AT&T in 1983 to support Unix systems.

Online Data Capture

The Punch Room was an indispensable part of most IT departments, being responsible for keying bulk data input to applications from the 1950s. The medium was either punched cards or paper tape.

From the mid-1960s key to magnetic tape systems appeared, followed by key to disk, thus gradually getting rid of the bulky card and paper tape media.

The next step was to remove the need for a central Punch Room by getting individual departments to key in their own data. While remote job entry systems provided one solution, they typically demanded punch operator(s) in each department. Arguably, the more favoured approach was to get individual clerks to enter their own data via online terminals. While a full-blown online system would solve the problem, many organisations needed a more pragmatic approach, whereby an online front-end system could be developed, and tagged onto existing legacy systems, at least in the short term. Such online data entry and enquiry systems were developed during the 1970s, including Datafeed and Datadrive from ICL, providing satisfactory stop-gap solutions for many organisations.

Terminal Emulators

VDU (Visual Display Unit) terminal emulators were produced by a wide range of third-party hardware suppliers and other computer companies, starting in the 1970s. IBM 3270 and DEC VT100 were probably the most popular forms of terminal emulation, hardly surprising as they were the most successful companies in their respective fields.

Remote Job Entry was another area that encouraged the development of emulators. IBM 2780 and ICL 7020 were examples of systems that supported remote bulk peripherals such as card readers and printers. Emulators in this area would allow equipment from other manufacturers to send bulk data to the host computer or receive bulk output back from it. I remember one subsidiary company which had its own operational systems which ran on Datapoint equipment, but it used the parent company’s general ledger system which ran on ICL equipment. Input data was transferred from the Datapoint to the ICL equipment using a 7020 emulator. 

Microcode and the Introduction of System Emulators

Originally, processor designs were completely hardwired, until it was realised that software approaches could provide a more flexible solution. Microcode, as it became known, sat on top of hardware which had a rudimentary set of instructions. Microprogramming allowed the chip designers to implement higher-level instructions, including complex multi-step instructions. One obvious advantage of this method was that the microcode could be changed quite late in the design process, and indeed any bugs in the production system could be readily fixed.

IBM’s System/360 was an early example of microcode. From a marketing perspective, microcode also helped the company to persuade clients to upgrade from their old second-generation systems to the System/360 by supplying special microcode and associated software which would emulate those old systems, thus allowing clients to run their legacy systems until such time as they were ready to replace them.

ICL provided another example. Their 2900 series was first announced in 1974, along with its VME operating system. The company introduced DME in 1977, microcode which emulated the older 1900 series on the new range, a move which helped to persuade existing clients to upgrade to the new machines while continuing to run their old 1900 systems.

This approach to design is now standard on many hardware devices, controllers, adapters et cetera. However, many vendors tend to use the term firmware, as opposed to microcode.

Evolution of Operating Systems in the 1970s

New operating system features in this period can be split into two main areas: facilities that would support a greater workload on a single computer; and the provision of functions that would help programmers to be more productive.

In the first area, memory management was a significant step forward. Memory sizes were extremely modest at this time. I remember spending much time trying to shoehorn an ever-growing assembler program into a miserable 5.8k 24-bit words on an ICL 1901 in the late 1960s. It got to the stage where, although I could still manage it, the compiler could not! Small memory capacities also limited the ability of a computer to run more than one program at a time.

In efforts to overcome memory limitations, programmers had previously been using overlay techniques, that is loading segments of code when required, overwriting what was previously in memory. This was followed by swapping techniques, particularly in time-sharing systems, where one user’s code and associated data was moved out to secondary storage (usually a drum or disk) to accommodate the next user who required processing.

Operating systems improved on these rudimentary techniques by implementing a concept called virtual memory, where a reasonably fast device such as a magnetic drum or disk could be used as an auxiliary area of memory storage. The system would move small, fixed-size blocks of instructions and data (pages) between real memory and this secondary storage area as required, to accommodate the various programs that were running. The term paging is used to describe this activity.
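
A toy simulation may make the idea clearer. In the sketch below the frame count and the first-in, first-out replacement policy are invented for illustration; real paging systems used more sophisticated algorithms.

    from collections import deque

    FRAMES = 3                    # pretend real memory can hold only three pages
    memory = deque()              # resident pages, oldest first
    page_faults = 0

    def access(page: int) -> None:
        """Touch a page, bringing it in from secondary storage if it is not resident."""
        global page_faults
        if page in memory:
            return                            # already in real memory
        page_faults += 1
        if len(memory) == FRAMES:
            evicted = memory.popleft()        # oldest page goes back out to drum or disk
            print(f"page {evicted} moved to secondary storage")
        memory.append(page)                   # requested page brought into real memory

    for p in [1, 2, 3, 1, 4, 2]:
        access(p)
    print(f"{page_faults} page faults, resident pages: {list(memory)}")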

Burroughs had actually introduced virtual memory management concepts to MCP back in 1961, as had the University of Manchester on its Atlas computer around the same period, but it was to be the early 1970s before the approach became mainstream when IBM introduced DOS/VS on its smaller systems and OS/VS1 and OS/VS2 on the larger ones.

Burroughs were also early providers of support for more than one processor in a computer. Their initial implementation was fairly straightforward, running system tasks on one processor and application programs on another. This approach, eventually called AMP (Asymmetric Multi-Processing), provided quite modest rewards, achieving roughly 1.2 to 1.4 times the throughput of a single processor on a system with two processors.

More sophisticated implementations were eventually forthcoming, where each processor could execute either system or application code, reducing the overheads and allowing 1.7 to 1.9 times the performance of a single processor on a dual processor system. SMP (Symmetric Multi-Processing) systems, as they became known, could reasonably support up to four processors. The diminishing returns from adding further processors are described by Amdahl's Law.
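
Amdahl's Law can be stated quite simply: if a fraction s of the work is inherently serial, the best possible speedup on n processors is 1 / (s + (1 - s) / n). The short calculation below, using a purely illustrative serial fraction of 10%, shows why a dual processor system lands in the 1.7 to 1.9 range quoted above and why adding further processors yields diminishing returns.

    def amdahl_speedup(serial_fraction: float, processors: int) -> float:
        """Best-case speedup predicted by Amdahl's Law."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

    # With 10% of the work serialised (an illustrative figure):
    for n in (1, 2, 4, 8):
        print(n, round(amdahl_speedup(0.10, n), 2))
    # prints: 1 1.0 / 2 1.82 / 4 3.08 / 8 4.71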

With respect to supporting programmers, facilities such as file management, accounting, the automatic control of running programs and the provision of system calls to allow access to system facilities were among the features that began to appear.

Virtualisation

The main objective of virtualisation was to make maximum use of a single physical computer system by allowing multiple logical environments to co-exist within it. Research in this area had begun in the 1960s. The most notable product was IBM’s VM which was released in 1972. In essence, a hypervisor ran the show, allowing multiple guest operating systems to run under its control.

A simple example might have two guests, one running a company’s production system, while the second supported its test system. There might be significant cost savings by being able to run both environments on a single computer. A popular alternative in the IBM world was to run multiple versions of CMS, a single user operating system, to support a time-sharing environment for developers and other users.

The Appearance of the Unix Operating System

Operating systems had so far been proprietary, that is they were developed to run on specific hardware platforms. The first signs of change appeared with the introduction of Unix in 1971. The initial implementation was on a DEC PDP-11 system, and it was soon rewritten in C, a new general-purpose, high-level programming language which had been developed in parallel with Unix at Bell Labs. In essence, if C was ported to other hardware platforms then versions of Unix could be developed to run on them. This portability of language and operating system was one of the major reasons for the subsequent popularity of both Unix and C.

Commingled Environments

Another strand of research produced what is sometimes inelegantly referred to as a commingled environment, that is operating system, programming language and data handling effectively rolled into one.

MUMPS was arguably the most popular such environment. It was developed at Massachusetts General Hospital in 1966 and 1967 on a spare DEC PDP-7. It was soon ported to a range of other PDPs, Data General systems, et cetera. It became particularly popular when MUMPS-11 appeared on the PDP-11 series in the early 1970s. Unsurprisingly, its use became widespread among the medical community, but it soon spread to other market sectors, and quickly boasted a strong user group. MUMPS still exists today in several forms, most notably as Caché from InterSystems.

Pick was another such environment. It was first developed in 1965 on an IBM/360, and was eventually commercially released in 1973 as the Reality Operating System. For a while in the 1980s some observers thought (incorrectly) that it might become a competitor to Unix.

Fault Tolerant Systems

While computer technology was making significant advances, hardware was still not hugely reliable in the 1970s. Some organisations, particularly those in the finance sector, had a mandatory requirement for their systems to work without failure.

This led to the development of fault tolerant systems. Tandem introduced its first systems in 1976, and it was followed by Stratus which was founded in 1980. In both cases all hardware components such as processors, memory, devices, controllers, cables, power supplies et cetera were duplicated. The systems had to continually check that each component and its duplicate were alive and well. Tandem achieved this in software, while Stratus did it in hardware.

Supercomputers I

Seymour Cray had led a team at CDC in the 1960s with the objective of developing significantly faster computers which would be principally aimed at the scientific community. This resulted in the CDC 6600 in 1964. He subsequently left in 1972 to set up his own company.

Cray Research shipped its first system, the Cray-1, in 1976 and came to dominate the supercomputer market through to the end of the 1980s. Various techniques were deployed to improve the speed of processing, including: a 64-bit processor design; vector processors which had instructions that operated on an entire one-dimensional array, as opposed to scalar processors which could only work on one data item at a time; various cooling techniques to prevent overheating when the processors were operating at high speeds; and pipelining techniques to improve memory access performance.
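
The difference between scalar and vector processing can be mimicked in software. The NumPy sketch below is only an analogy, not Cray hardware: the scalar version handles one element per step, while the vector version expresses the same work as a single operation over the entire one-dimensional array.

    import numpy as np

    a = np.arange(1_000_000, dtype=np.float64)
    b = np.arange(1_000_000, dtype=np.float64)

    # Scalar style: one element at a time, one operation per element
    c_scalar = np.empty_like(a)
    for i in range(len(a)):
        c_scalar[i] = a[i] + b[i]

    # Vector style: a single operation applied to the whole array at once
    c_vector = a + b

    assert np.array_equal(c_scalar, c_vector)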

Early Microprocessors

The first microprocessors appeared around 1971. They included the Intel 4004 and Texas Instruments TMS-0100, 4-bit processors which were both used in calculators.

They were followed around the middle of the decade by various 8-bit microprocessors, including the MOS Technology 6502, Motorola 6800 and Zilog Z80. These chips heralded the appearance of the first personal computers in 1977 when the Commodore PET, Apple II and Tandy TRS-80 were introduced. The Apple II and Commodore PET both used the 6502 chip from MOS Technology with a maximum of 48KB of memory (on the Apple) and a floppy disk, while the Tandy TRS-80 used a Z80 chip.

Early Operating Systems on Microcomputers

Initial operating systems on such small machines were perforce somewhat rudimentary. The Apple II used Apple DOS, while the later Macintosh system software had no distinct name initially, before becoming Mac OS.

On other platforms, CP/M from Digital Research came to dominate the market initially, until MS-DOS was developed by Microsoft for the IBM PC.

The Appearance of Word Processing

Word processing evolved from the typewriter market. Arguably, the initial breakthrough came with IBM’s MT/ST system in 1964 which stored text on magnetic tape. IBM was also responsible for inventing the floppy disk in the early 1970s, another key element in early word processor systems. It was around the same time that CRT (Cathode Ray Tube) screens began to appear in systems from the likes of Lexitron and Vydec. However, it was Wang who introduced systems with many of the fundamental features of word processing that we would recognise today. Their first system was the Wang 1200 WPS which was shipped in 1976.

The subsequent arrival of microcomputers brought a range of word processing software products to the marketplace. WordStar appeared in 1978, but it was eventually displaced by WordPerfect and then Microsoft Word as the pre-eminent packages on CP/M and MS-DOS machines.

The Appearance of Spreadsheets

Batch spreadsheet report generators had appeared in the 1960s. They were followed by the LANPAR spreadsheet compiler in the early 1970s, which was an important stepping stone to spreadsheets as we know them today. The same period also saw the popularity of financial planning tools such as Autoplan and Prosper, which might be described as pseudo-programming languages.

However, the key moment in the establishment of electronic spreadsheets came with the development of VisiCalc on the Apple II in the late 1970s. It became so popular that people bought the Apple II just to get VisiCalc.

SuperCalc, which ran on CP/M machines, was initially the main competitor to VisiCalc. However, Lotus 1-2-3 appeared on MS-DOS machines in 1983, and almost immediately took over as the leading spreadsheet. Microsoft’s Excel, which subsequently came to dominate the market, first appeared in 1985.

GUI (Graphical User Interface)

Ivan Sutherland had developed Sketchpad in 1963, in which a light pen was used to create and manipulate engineering drawings directly on the screen. And in the late 1960s, Douglas Engelbart developed NLS, which used a mouse, then a novel device, to follow text-based hyperlinks.

Xerox took on these ideas to form a basis for research work on their proposed Alto computer. They subsequently introduced the concepts of windows, icons, menus, radio buttons and check boxes, their work forming the basis of subsequent GUIs.

The Alto was never released as a commercial product, although several thousand were made. The first commercially available GUI was to appear on the PERQ workstation from the Three Rivers Corporation which was shipped in 1979. GUIs became a hot topic in the 1980s, and arguably many people first came across them on the Apple Macintosh which was released in 1984.

In 1987 IBM published a Common User Access (CUA) definition, as part of its System Application Architecture. It came to form the basis of interfaces used in the likes of Microsoft Windows, IBM’s OS/2 Presentation Manager and the Unix Motif toolkit.

The 1980s

Advances in this decade included:

  • The introduction of faster 32-bit processor chips in microcomputers and the appearance of the first 1MB memory chip in 1984
  • The appearance of RISC processor chips
  • Standards (SCSI and ATA) for peripheral connections and data transfer, particularly for disks and tape
  • The initial appearance of VAX clusters and network file systems
  • The emergence of solid state devices which eventually used flash memory
  • The relational database model which proved to be highly attractive
  • The ARPANET project, the precursor to the Internet, continued its internetworking work from the 1970s and adopted TCP/IP in this decade
  • Widespread use of the GUI (graphical user interface)
  • The mainstream adoption of systems analysis and design methodologies.

Workstations

Workstations were used primarily for scientific and other processor-intensive work, arguably the first being IBM’s 1620 back in 1960. However, it was in the early 1980s that the market really took off when the Motorola 68000 32-bit chip appeared. Sun Microsystems and Silicon Graphics Inc, both major players in this market, ran Unix on their workstations.

The Introduction of RISC Processor Chips

RISC (the Reduced Instruction Set Computer) is precisely that, a series of relatively simple instructions that can execute in a short cycle time, as opposed to the longer cycle time taken by the traditional comprehensive instruction set which retrospectively came to be named CISC (Complex Instruction Set Computer). RISC relied on compilers to generate efficient code to help realise the performance benefits, although the code would naturally have greater memory requirements than the equivalent code on CISC.

Although some work had been done in this area in the late 1960s and during the 1970s, it was projects at Stanford University and Berkeley in the early 1980s which realised the potential of RISC. They led to the MIPS and SPARC chips respectively, the latter being adopted by Sun Microsystems. A host of other RISC chips eventually followed, including the likes of POWER from IBM, PA-RISC from Hewlett Packard, Alpha from DEC, and ARM. They were 32-bit processor chips initially, 64-bit versions appearing from the early 1990s.

Evolution of Microprocessors

The speed of change in microprocessor development increased rapidly. Intel had announced its 8086 family in 1978, and its 8088 chip was selected by IBM for deployment in its first PC which was introduced in 1981.

It produced various families of microprocessor chips over the next two decades, starting with the 80286, followed by the 80386 (32-bit chip) and 486 in the 1980s, through to the Pentiums and Xeons in the 1990s (the latter mostly deployed in servers). Advanced Micro Devices (AMD) began to produce Intel-compatible chips in the 1990s.

The Apple Macintosh appeared in 1984 with a Motorola 68000 chip and 128KB of memory. It subsequently moved to PowerPC chips in 1994, and on to Intel chips in 2006.

Operating System Kernels

As memory capacities grew, so too did operating systems, offering more and more features. However, one of the downsides was that with more code came the greater possibility of bugs. Work in the 1980s and 1990s looked at ways of circumventing potential problems with what became known as the monolithic kernel. The approach was to remove sections of code from the kernel and run them in the application space. Microkernels removed as much as possible, while hybrid kernels offered a halfway house solution where selected features would remain in the kernel.

[Figure: Monolithic vs micro kernels]

Clusters and Network File Systems

DEC introduced VAX clusters in 1983. This allowed a number of VAX machines to be loosely coupled, that is they could operate largely independently but share basic peripherals such as printers and disks. The nodes were initially connected with proprietary cables via a star coupler. The cluster concept proved to be a successful venture for DEC.

Meanwhile, Novell introduced NetWare around the same time, a network file system which ran on PC-style hardware using the IPX network protocol. This allowed multiple PCs to share files on the NetWare disk(s).

In addition, Sun Microsystems developed the NFS protocol (Network File System) in 1984, providing an alternative solution to NetWare. Version 2, whose specification was published in 1989, was made available outside Sun.

Forerunner of the Internet

Ideas surrounding the concept of a network of networks, potentially a global network, had been the subject of significant research work in the late 1960s and early 1970s. Several vendors almost inevitably saw such networks using products that they would develop, including IBM with SNA and DEC with DECNET.

Meanwhile, the ARPANET project, initially funded by the Advanced Research Projects Agency (ARPA) of the United States Department of Defense, had quietly managed to connect four universities in 1969. It had also been created to support a network of networks, thus allowing disparate military and government-sponsored organisations which might well be running on different hardware / software platforms to communicate with each other.

One of the first essentials was to allow a network circuit to be shared. ARPANET (and several other networks) achieved this by implementing packet-switching. The concepts of packet-switching had been defined in the 1960s by Paul Baran at the RAND Corporation and Donald Davies at the National Physical Laboratory (NPL) in England, working independently of each other. The NPL produced the first prototype in 1969 for its own use, and ARPANET subsequently followed.

A second issue was the need for a common internetworking protocol. ARPANET initially used the Network Control Program (NCP) which had been developed by the Network Working Group. It was eventually replaced by TCP (Transmission Control Protocol). The specification for TCP had first been published in 1974. It used the term Internet as a shorthand form of internetworking. The first person(s) to actually use the term is somewhat unclear. ARPANET transferred over to TCP in 1983. TCP, originally a monolithic piece of software, was converted into a modular architecture which included IP (Internet Protocol). IP was responsible for delivering datagrams (as they are called) from source machines to destination machines across the network. The first experimental versions of IP date from 1977.

The use of this internetworking capability in the 1980s was limited to governments and government-sponsored bodies such as research and education establishments. However, around the end of the decade there were gradual signs of commercial usage, indicated by the appearance of the first ISPs (Internet Service Providers) in the USA. The ARPANET project itself was closed down in 1990.

Evolution of Local Area Networks in the Early 1980s

A number of experimental local network systems were developed in the 1970s to support multiple devices in a local environment such as a department, including Ethernet at Xerox PARC in 1973/74. It was not introduced commercially until 1980 when it operated at 10 Mbits/sec running over coaxial cable.

It was followed by IBM’s implementation of Token Ring in 1984 which used shielded twisted pair cable and ran at 4 Mbits/sec initially, before 16 Mbits/sec became available in 1988.

Relational Database Management Systems

The term “relational database” was originally coined by Ted Codd at IBM in 1970. He later came up with a list of 12 rules which databases should meet to qualify as relational. However, no databases conform to all 12 rules, so the term came to describe a broader set of databases which (a) present the data as a collection of tables with rows and columns and (b) provide relational operators to manipulate that data. IBM’s initial work saw the production of a prototype called System R and a query language that was called SQL.
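
The sketch below uses Python's built-in sqlite3 module to show both characteristics: data presented as tables of rows and columns, and a relational operator (an SQL join) used to manipulate it. The table and column names are invented for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")        # throwaway in-memory database
    conn.executescript("""
        CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders   (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
        INSERT INTO customer VALUES (1, 'Alice'), (2, 'Bob');
        INSERT INTO orders   VALUES (101, 1, 25.0), (102, 1, 40.0), (103, 2, 10.0);
    """)

    # A relational operation: join the two tables and total the orders per customer
    query = """
        SELECT c.name, SUM(o.amount)
        FROM customer c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
    """
    for name, total in conn.execute(query):
        print(name, total)                    # Alice 65.0, Bob 10.0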

ORACLE was in fact the first commercially available relational database management system in 1979, while IBM produced various offerings before it shipped DB2 in 1983, initially on its larger mainframe systems. The almost instant popularity of the relational model spawned various other Database Management products in this decade, including: Ingres (1980), Informix (1981), Sybase (1987) and Microsoft’s SQL Server (1988).

The Introduction of Structured Design and Programming Techniques

The second half of the 1970s and the early 1980s saw attempts to improve the quality of application development.

I can testify that back in the 1960s and early 1970s there were many programs, particularly those written in assembler, which were extremely difficult to understand, and even harder to maintain. They often went under the title of “spaghetti coding”.

Efforts to improve programming techniques had actually started back in the 1950s. ALGOL was a prime example of a language which came to support if/then/else selection, iterative loops, block structures and subroutines, all in an effort to improve the clarity of code and the ease of its maintenance. It was Dijkstra, the Dutch computer scientist, who coined the term “structured programming” in the late 1960s.

Assembler variants and early versions of COBOL, the most frequently used languages in the commercial world at the time, did not readily lend themselves to structured programming. Jackson Structured Programming (JSP) techniques were introduced in the 1970s to improve matters, particularly for COBOL users. JSP principally worked on the basis that there should be a strong correspondence between the structure of the input and output data streams and the structure of the program code. The techniques are still used today.
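
A small sketch of the JSP idea (the file layout and field names are hypothetical): the input is a stream of customer groups, each containing a number of transaction records, and the nesting of the program's loops mirrors that data structure one-for-one.

    # Data structure: FILE = iteration of CUSTOMER GROUP = customer + iteration of TRANSACTION
    input_file = [
        ("Alice", [25.0, 40.0]),
        ("Bob",   [10.0]),
    ]

    # Program structure mirrors the data structure
    for customer, transactions in input_file:     # one loop per iteration in the data
        total = 0.0
        for amount in transactions:               # nested loop matches the nested records
            total += amount
        print(f"{customer}: {total:.2f}")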

The early 1980s saw the introduction of methodologies which brought structured techniques to the requirements, analysis and design phases of a project lifecycle. They included SSADM (Structured Systems Analysis and Design Method) and Yourdon. A range of software tools became available in this area, which went under the banner of CASE (Computer Aided Software Engineering) tools.

The Evolution of Hard Disks

The next phase of disk technology came when IBM shipped its Winchester disks in 1973. They were removable disk modules that included the head and arm assemblies. However, the most notable part of the technology was that the heads did not have to withdraw from the disk; they could land on and take off from the media as the disk spun down and up. The late 1970s and early 1980s also saw the form factor of devices reduce to 8 inches and then to 5.25 inches.

The 1980s saw the arrival of SCSI (Small Computer System Interface). It is a set of standards for physically connecting and transferring data between computers and peripheral devices. It was used mainly in disk and tape subsystems.

ATA (Advanced Technology Attachment), also known as IDE or PATA, was subsequently announced in 1986. It was a parallel interface, as was SCSI. ATA drives were not as fast as SCSI, nor did they last as long, but they were cheaper and tended to win out when compared on a price/performance basis.

These parallel interfaces were subsequently replaced in the early 2000s by faster serial interfaces, viz. SAS (Serial Attached SCSI) and SATA (Serial Advanced Technology Attachment). SATA drives came to dominate the desktop and laptop market.

Meanwhile, disks got faster and capacities grew. For example, Seagate shipped drives with ever faster rotation speeds: 7200rpm in 1992; 10,000rpm in 1996; and 15,000 rpm in 2000. With regards to capacity, the first one gigabyte drive came from IBM in 1980 (although it was the size of a fridge!), and Hitachi introduced the first one terabyte drive in 2007.

Solid State Devices

A Solid State Device (SSD) can be loosely described as a wodge of non-volatile memory which emulates a hard disk drive. Being memory, it was orders of magnitude faster than a physical disk with moving parts, but it was significantly more expensive, certainly in the beginning.

In the mainframe world, the Storagetek STC 4305 had been announced in 1978, acting as a plug-compatible replacement for the IBM 2305 disk. It had a capacity of 45MB and was around 7 times faster than the 2305. The increased speed made it a popular candidate for use as an auxiliary storage device to improve memory performance.

SunDisk (nowadays SanDisk) saw the potential for using flash memory, invented by Fujio Masuoka at Toshiba in the early 1980s, in SSDs. They brought out their first device in 1991 which had a capacity of 20MB. Capacities increased rapidly, with an 18GB device appearing by 1999 and a 1TB device by 2011.

Flash memory also came to be used in USB memory sticks which first appeared in 2000, initially sold by Trek 2000 International, a company in Singapore. The first device had a capacity of 8MB. Rapid growth saw 4GB by 2005; 32GB by 2007; and 256GB by 2011.

The 1990s

Advances in this decade included:

  • The Internet arrived with the World Wide Web, search engines and blogging tools
  • Object Orientation gripped the attention of large sections of the IT community
  • Multi-layer systems became the fashion with (fat) client-server systems, the deployment of thin clients which helped to contain costs, particularly in large organisations, and middleware to support the transfer of data between systems
  • High Availability options began to offer cost-effective resilience features, particularly on larger systems
  • Data warehousing took its first steps in this decade
  • Supercomputers began to deploy very large numbers of connected servers which were capable of tackling scientific problems in parallel
  • Personal Digital Organisers became popular
  • The free software movement, which started in the 1980s, led to the creation of the Open Source Initiative in 1998.

The World Wide Web

A key moment in the history of the Internet came with the invention of the World Wide Web in 1989 by Tim Berners-Lee while he was working at CERN. The Web, as it is commonly known, is an information system in which documents are connected by hyperlinks, allowing the user to jump directly to related information held anywhere else on the Web. Berners-Lee developed the first web server and web browser, which were both released to other research establishments in January 1991, and then made public later in the same year.

Other web browsers were soon developed, Mosaic, whose developers went on to produce Netscape Navigator, being the most popular in the early and mid-1990s. Microsoft, who had totally misjudged the impact of the Internet, eventually remedied the situation by bringing out Internet Explorer in 1995.

The vast majority of information that was being served up on the Internet in these early years was static, that is it was pre-existing data that was already stored in filesystems. There were some facilities, notably CGI (the Common Gateway Interface), which would allow the creation of information on the fly, but they were seldom used.
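
As an illustration of content created on the fly, the sketch below is a minimal CGI-style script: a CGI-capable web server would run it for each matching request, and whatever it writes to standard output (headers, a blank line, then the body) is returned to the browser. The script name and the use of Python are my own choices here, not a period-accurate example.

    #!/usr/bin/env python3
    # hello.cgi -- hypothetical script run by the web server for each request
    import datetime
    import os

    # Everything printed to standard output goes back to the browser:
    # headers first, then a blank line, then the generated page body.
    print("Content-Type: text/html")
    print()
    print("<html><body>")
    print(f"<p>Page generated at {datetime.datetime.now()}</p>")
    print(f"<p>Query string: {os.environ.get('QUERY_STRING', '')}</p>")
    print("</body></html>")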

The Appearance of Search Engines

Search engines are an indispensable part of the Internet. In the beginning, Berners-Lee manually documented details of all existing websites; obviously there were not many. This was a situation that could not last, and did not. Archie appeared in 1990; this was a tool to index FTP archives, allowing users to find specific files.

W3Catalog, arguably the first primitive search engine, appeared in 1993. It was quickly followed by others in subsequent years. From my fading memory, I can remember WebCrawler, Lycos, Yahoo, AltaVista and Ask Jeeves. They could all be somewhat problematic, in the sense that it was extremely easy to end up with meaningless results unless your search criteria were very specific. It was not unusual (at least for me!) to spend a significant amount of time adjusting the criteria.

The situation changed with the appearance of Google in 1998. Here for the first time was a reliable search engine. Some of the existing products, such as Yahoo, gradually improved, while new products were generally superior to previous offerings, e.g. Bing (the current name of Microsoft’s search engine).

Web Cache Servers, Firewalls and Antivirus software

There were 10 web servers in 1991, a figure which had risen to 23,000, serving an estimated 40 million users, by 1995. A content cache was built into the web browser to store recently accessed data in order to improve performance. However, there was also a requirement to offload work from the web server itself. This led to the Harvest project, which developed a cache server, subsequently known as Squid, that sat in front of a web server and stored static content retrieved by users so that it could be re-used without bothering the web server.
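
To illustrate the principle, here is a toy look-aside cache in Python (entirely hypothetical, and ignoring the expiry rules, size limits and HTTP semantics that real products such as Squid have to handle): serve a copy that is already held, otherwise fetch from the origin server and remember the result.

    import urllib.request

    cache = {}  # URL -> previously fetched content

    def fetch(url):
        if url in cache:                        # cache hit: the origin server is not bothered
            return cache[url]
        with urllib.request.urlopen(url) as r:  # cache miss: go to the origin server
            content = r.read()
        cache[url] = content                    # remember it for the next requester
        return content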

The first network firewalls appeared in the late 1980s to protect private networks. The early 1990s saw IP routers with filtering rules, while the first commercial firewall became available in 1992.

The computer security industry really got going around the middle of the 1980s, when companies such as Sophos were founded. Antivirus products followed with the advent of the Internet, including the first release of Norton by Symantec in 1991.

Email

It was in the early 1990s that email, as we understand it today, eventually came into mainstream use by both businesses and individuals.

The ability to exchange messages had originated back in the 1960s with the arrival of time-sharing systems, albeit within the confines of an individual system. The first networked email was sent by Ray Tomlinson to himself in 1971 between two DEC-10 systems that sat side by side. Facilities to list emails, reply to them and forward them appeared in the following year.

The 1970s and 1980s saw the appearance of various host-based email systems using assorted proprietary protocols. As the use of ARPANET spread to other sections of academia, gateways were developed to allow mail to pass between these disparate systems and networks; examples included JANET and X.400.

The protocols that eventually became standard by the mid-1990s, viz. SMTP, POP and IMAP, had originated during the 1980s.

Blogging

The advent of the Internet encouraged the keeping of online diaries which began to appear around 1994, making use of such limited tools as were available at the time. They became known as weblogs in 1997, a term that was subsequently shortened to blogs.

Open Diary, released in 1998, was the first dedicated blogging tool to support a community. It was quickly followed by others, including Blogger, MySpace, Typepad and WordPress. The last of these, WordPress, is currently the most widely used, having since morphed into a combined blogging and content management tool.

Fat Clients

Online database systems originally worked with dumb terminals, where each user typically had his or her own process running on the server. The appearance of PCs with reasonable memory capacity encouraged vendors to develop tools in the early to mid-1990s whereby the application code could run on the PC and only database calls would be sent over the network to the server. While this offloaded work from the server, it generated more network traffic. In addition, where large user populations had to be supported, some organisations found the cost of the necessary PC hardware and software to be prohibitive, and the maintenance of the application code to be onerous.

Remote Desktop Services

These problems encouraged the development of what became known as thin client technology. The outline approach was that each PC, modestly configured and priced, would connect to a server where the application code would run, while the PC acted as a relatively dumb I/O device supporting a graphical user interface. This provided a more cost-effective solution for many organisations.

Citrix was the market leader in this field, its first version appearing in 1991, while Microsoft eventually introduced its own product, Terminal Server, with Windows NT in 1998.

Middleware

By the 1990s there were requirements for disparate applications, typically running on separate servers, to pass data between each other, either synchronously or, more commonly, asynchronously. Two scenarios were common around that time:

  • The need for new system(s) to communicate with existing legacy systems
  • The requirement to implement a range of products from different vendors to meet a business’s need, passing data between these products to produce an integrated solution.

Middleware tools were developed to meet these (and other) needs. IBM’s MQSeries was one of the first, in 1993, while Tuxedo and TIBCO provided other notable offerings. MQSeries has since been rebranded several times, first as WebSphere MQ and more recently as IBM MQ.
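
The essence of these products is asynchronous, store-and-forward message passing: the sender puts a message on a queue and carries straight on, while the receiver takes messages off the queue when it is ready. The sketch below uses Python’s standard queue module purely as an in-process illustration; real middleware adds persistence, transactions and delivery across the network, and the message fields here are invented.

    import queue
    import threading

    q = queue.Queue()   # stands in for a persistent message queue

    def order_system():
        # The sending application: put a message and carry on immediately.
        q.put({"order_id": 42, "item": "widget"})
        print("order sent")

    def warehouse_system():
        # The receiving application: process messages when it is ready.
        msg = q.get()
        print("despatching", msg["item"], "for order", msg["order_id"])
        q.task_done()

    threading.Thread(target=warehouse_system).start()
    order_system()
    q.join()   # wait until the message has been processed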

The current term for this technology appears to be “application and integration middleware” which indicates that the toolset has expanded to include features such as message brokering and event-processing engines, as well as application servers which are mentioned in the server-side technology section below. IBM’s Websphere and Oracle’s Fusion are examples of such toolsets.

The Evolution of Operating Systems on Microcomputers

The introduction of more powerful processors with greater memory capacities led to the development of more sophisticated operating systems for higher specification PCs and servers.

Apple acquired NeXT in 1996 before rolling out OS X in 2001, which was part Mac OS and part NeXTSTEP, the Unix-based operating system which ran on the NeXT workstation. OS X ran on a Unix kernel.

IBM and Microsoft had jointly produced OS/2 in the late 1980s before they fell out, and Microsoft went off to develop Windows NT, which was first released in 1993. It was not until Windows XP in 2001 that the sort of facilities offered by NT became available to home users.

On the Unix front, Minix, based on a microkernel architecture, had been created by Andrew Tanenbaum in 1987 for educational purposes. It ran on the IBM PCs of the time, although it was subsequently ported to other platforms.

Although the Minix source code was freely available, the licensing terms restricted its use to education. This encouraged Linus Torvalds to develop the Linux kernel in 1991. It quickly gained in popularity, not least because it was free. It soon came to be packaged as Linux distributions, that is, the Linux kernel bundled with other associated software; examples included Debian and Ubuntu.

Supercomputers II

By the end of the 1980s, individual Cray configurations contained no more than 8 processors. However, from the early 1990s research concentrated on partitioning a problem into many parts and splitting the overall workload across large numbers of processors which could all operate in parallel, an approach often called MPP (Massively Parallel Processing).

Linux appeared in this supercomputer world, where the requirement to support large numbers of networked servers made it an attractive option on cost grounds. And it did not stop there: before too long, Linux began to appear on large mainframe-class servers.

Fujitsu, NEC and IBM now became prominent players in the supercomputer market. IBM’s Deep Blue is arguably the most celebrated, becoming in 1997 the first computer to defeat the reigning world chess champion, Garry Kasparov, in a match.

The aggregate speed of these large systems has increased dramatically over time. Using the Linpack benchmark, which measures the number of floating-point operations executed per second (flops), the Intel Paragon XP/S 140 clocked 143.4 gigaflops in 1993, a figure which had risen to 10.51 petaflops on the Fujitsu K computer by 2011.

High Availability Systems

The requirement for highly reliable systems had grown, although many organisations could not afford to deploy fully fault tolerant hardware. This led to the development of High Availability (HA) systems from the 1990s onwards.

This was usually achieved by having a standby server which could take over if the main server failed, although in some cases the total workload might be split across both servers. Each server continually monitored the heartbeat of the other. Database servers on large systems were frequently so protected.

Other servers might be protected by deploying n+1 servers, i.e. there were one or more spares to fall back on if necessary.

Disks and disk controllers could be cross-connected to the servers, while the data itself was protected by using RAID (Redundant Array of Inexpensive Disks) technology. The two most commonly used types were: RAID 1, which mirrored data from one disk to another; and RAID 5, where parity information was distributed across a group of disks, allowing the data to be reconstructed if one of the disks in the group failed. RAID 5 obviously offered a cheaper solution than RAID 1.
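
The RAID 5 parity is simply the bitwise exclusive-or (XOR) of the corresponding blocks on the other disks, which is enough to rebuild any single lost block. A small worked example in Python, operating on byte strings rather than real disks:

    from functools import reduce

    def xor_blocks(blocks):
        # Bitwise XOR of equal-length blocks, byte by byte.
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks held on three disks
    parity = xor_blocks([d1, d2, d3])        # parity block held on a fourth disk

    # Disk 2 fails: its block is rebuilt from the survivors plus the parity.
    rebuilt = xor_blocks([d1, d3, parity])
    assert rebuilt == d2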

Object Orientation (OO)

The advent of OO in the 1990s was the next stage in the evolution of system design and programming.

Once again, research work had originally started back in the late 1950s and early 1960s, when the Simula language was developed at the Norwegian Computing Centre in Oslo. It was Alan Kay who coined the term “object-oriented” around 1967. He went on to work with others at Xerox PARC to produce the Smalltalk language in the early 1970s.

Defining OO is far from straightforward. It is a subject that produced much zealotry, certainly back in the 1990s. Perhaps this definition from Wikipedia will suffice for the purpose of this document: “Object-oriented programming (OOP) is a programming paradigm based on the concept of ‘objects’, which can contain data, in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods).”
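
As a concrete, if entirely invented, illustration of that definition, the short Python class below bundles data (fields) and code (methods) into a single object:

    class Account:
        """An object combining data (fields) with code (methods)."""

        def __init__(self, owner, balance=0):
            self.owner = owner        # field / attribute
            self.balance = balance    # field / attribute

        def deposit(self, amount):    # method operating on the object's own data
            self.balance += amount

    acct = Account("Janet")           # create an object (an instance of the class)
    acct.deposit(100)
    print(acct.owner, acct.balance)   # -> Janet 100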

A number of software engineers proposed methods for OO design, including Grady Booch with the Booch Method and James Rumbaugh with OMT (Object Modelling Techniques). Both men worked on UML (the Unified Modelling Language). Meanwhile, Bertrand Meyer wrote Object-Oriented Software Construction for programmers. 

RUP (Rational Unified Process) was introduced by Rational Software, now part of IBM, in the late 1990s to provide a framework for OO development.

Programming languages which supported OO, to a greater or lesser degree, began to proliferate from the 1990s onwards. They included C++, Java, C#, VB.NET, Python and Ruby, while existing languages such as COBOL, FORTRAN and BASIC had OO features added to them.

The almost inevitable requirement to deploy objects across multiple systems led to the introduction of the CORBA (Common Object Request Broker Architecture) standard in 1991.

Data Warehousing

IBM researchers Barry Devlin and Paul Murphy developed the concept of the “business data warehouse” in 1989. It was perhaps the increasing capacity of hard disks which helped to facilitate this initiative.

The idea caught on quickly, if only in the sense that every ITT (Invitation to Tender) for computer systems around that time seemed to request facilities to capture all operational data and store it in a repository, although little was said about what to do with that data, certainly back in the 1990s. A rash of early tools hit the market, including BusinessObjects, Cognos and Crystal Reports.

The Holy Grail, at the time, was that data warehousing would include ETL (extraction, transformation and loading), OLAP (Online Analytical Processing), plus the provision of other client analysis tools.

In the following decade the main focus was on quickly accessing and analysing data from outside the organisation, e.g. customer reaction to products plus information on what the competition was doing.

Data Warehousing typically goes under the term Business Intelligence nowadays.

Embedded Systems

An embedded system is a combination of computer hardware and software, and perhaps additional mechanical or other parts, designed to perform a dedicated function. In some cases, embedded systems form part of a larger system or product. The vast majority of today’s microprocessors are to be found in embedded systems, notably in consumer electronics.

The first systems date back to the 1960s, when they were developed for the Apollo Guidance Computer and the Minuteman missile. It was 1971 when Texas Instruments developed the first microcontroller, effectively a computer on a chip. Greater chip capacities led to the introduction of embedded operating systems, VxWorks from Wind River being one of the first in 1987. It was the late 1990s when the first embedded Linux kernels appeared.

Personal Organisers and Assistants

Psion’s Organiser had first appeared in 1984. The second version, the Organiser II, may be considered the first true electronic organiser, supporting a diary and address management.

Personal Digital Assistant devices (PDAs) began to appear around the mid-1990s, including: the IBM Simon, arguably recognised as the first smartphone; the Psion Series 5, running its EPOC32 operating system, which was subsequently to be known as Symbian; and the Palm Pilot, running Palm OS.

Research In Motion (RIM) released its first highly successful BlackBerry device in 1999. The devices ran on RIM’s own proprietary BlackBerry OS. BlackBerry Messenger was an instant messaging and telephony application which helped BlackBerry to dominate the market in the 2000s.

Open Source Software

There had long been a variety of mechanisms for freely sharing software, varying from informal relationships between educational establishments to user groups such as SHARE in the IBM world and DECUS in the DEC world.

However, hardware vendors were gradually coming to see software, which had previously been bundled in with the hardware, as a potentially more lucrative source of income. Hence, they were now not particularly keen on giving it away. This state of affairs arguably encouraged Richard Stallman to effectively launch the free software movement with the GNU project in 1983.

By the late 1990s there were two camps of thought within the movement: those who were strict believers in the concept of free software; and those who were more pragmatic, in the sense that they wanted to encourage the business community to see the benefits of sharing source code. The latter camp formed the Open Source Initiative in 1998. This idea caught the public’s imagination and also achieved a general acceptance in the software industry.

There has been, and continues to be, a somewhat uneasy relationship between the software industry and those who focus on terms such as free and collaborative, but the open source concept lives on.

The 2000s

Highlights of the 2000s included:

  • Web 2.0 appeared, mostly known for the first shoots of social media
  • The number of web applications grew as companies recognised the need for an online presence, particularly in the retail sector; the introduction of server-side technologies helped the implementation of such systems
  • On the hardware front, 64-bit chips, which had been available on larger systems in the 1990s, now appeared on personal computers
  • Chips appeared which could house two processors (termed cores) on a single piece of silicon
  • Mainframe-class servers got bigger, helped by the introduction of fast interconnects between processor / memory boards
  • Larger, more sophisticated storage systems appeared in the form of SAN and NAS
  • Faster and larger servers at the medium to low end of the spectrum brought a renewed interest in machine virtualisation software to allow multiple systems to be hosted on a reduced number of physical servers
  • Artificial Intelligence began to make ground in specific areas of interest
  • Broadband and wi-fi arrived.

Processor Cores

The continuing miniaturisation of circuitry had brought with it on-going increases in speed. Moore’s Law refers to Gordon Moore’s observation that the number of transistors on a microchip doubles roughly every two years while the cost comes down; in other words, we can expect the speed and capability of our computers to increase every couple of years, and to pay less for them. With respect to processors, we have also seen faster chips due to increased clock speeds and the ability to include modest instruction and data caches on the processor chip.

A major step forward was the ability to house multiple processors (called cores) on a single chip. IBM released the first dual-core chip in 2001 with its POWER4 microprocessor. A typical configuration gave each core its own small instruction and data caches, with a larger cache on the chip shared between the cores. Multi-core chips followed from Intel and AMD in 2005.

Work also went into increasing the throughput of a multi-core chip using a technique called Simultaneous Multithreading (SMT), or Hyper-Threading in Intel’s terminology. The idea was that a core was split into two logical processors; when one logical processor was waiting on an event, e.g. a memory access, the other could spring into action.

The Advent of Larger Servers

We have seen that, in an SMP system, Amdahl’s Law indicates that the overheads increase as additional CPUs are added, and that four was a pragmatic maximum on most systems.

Various methods were used to overcome this problem, ccNUMA being one such approach. Here a system was divided up into multiple boards, each with (say) 4 processors, several gigabytes of memory and its own system bus. Performance was good if memory accesses were satisfied on the local board. However, if the required piece of memory was on another board, it had to be requested over the interconnect linking the boards. Needless to say, this interconnect needed to be fast to minimise the delay, and caches were also configured on each board in an attempt to reduce it further. The overall effect was that large servers, with say 16, 32 or more processors, could provide effective solutions.
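
A back-of-the-envelope calculation, using invented but plausible latencies, shows why the interconnect speed mattered so much:

    # Illustrative ccNUMA arithmetic: all figures are hypothetical.
    local_ns, remote_ns = 100, 400   # access times for local and remote memory
    remote_fraction = 0.2            # say 20% of accesses land on another board

    average_ns = (1 - remote_fraction) * local_ns + remote_fraction * remote_ns
    print(average_ns)   # 160.0 ns, i.e. a 60% penalty over purely local access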

Interconnects

A range of interconnect technologies were developed from the 1990s onwards. Supercomputers had a requirement for flexible deployment with low latency. Technologies which were employed included: Quadrics, Myrinet, Infiniband and three-dimensional torus.

Outside the supercomputer sector, the needs were not quite as demanding. Gigabit Ethernet or Dolphin’s Scalable Coherent Interface (SCI) were often sufficient to connect servers, or boards within a server, e.g. on Sequent’s NUMA-Q systems.

SAN and NAS Storage

The requirement to support ever increasing amounts of storage led to storage area networks (SAN) and network attached storage (NAS).

SANs typically followed on from proprietary storage subsystems on mainframe equipment. They provided block level access to multiple servers that were typically connected via fibre channel. Facilities were provided to allocate, manage and back up storage in the SAN. Databases were prime candidates for implementation on a SAN.

NAS was arguably the successor to the various forms of network file system found on medium to small systems. Like its predecessors, NAS appeared as a file system to client machines, connecting with them via Ethernet. Image and text files were ideally suited to NAS.

Virtualisation became popular again

Around the turn of the 21st century, when many software vendors were insisting that their products run on dedicated servers (to ensure good performance), the cost implications of configuring lots of servers, each possibly lightly loaded, led to a renewed interest in techniques which would reduce the number of physical servers required.

Larger servers began to provide facilities to partition the hardware into separate systems at the board level, where each partition contained a number of CPUs, a wodge of memory and a section of the I/O subsystem. This provided the best performing option, albeit the partitioning was at a somewhat coarse level.

Partitioning at the operating system level was an alternative approach where one idea was to create containers which typically comprised an application and access to selected operating system services. The Jail subsystem of FreeBSD was an example of this technique.

However, VMware proved to be the most popular software solution for many organisations. The company was founded in 1998, although it did not announce itself publicly until the following year. VMware’s products provided a hypervisor which supported multiple guest operating systems running on Intel-compatible chips.

The Internet in the 2000s

There were a small number of companies who quickly took full advantage of the opportunities that the Internet offered. They included: Amazon and eBay (both in 1995); PayPal (1998); and the First Direct bank (2000).

Web 2.0 is a term that was coined around the close of the 20th century. It was as much a description about the arrival of social media as anything else, with Facebook being founded in 2004 and Twitter in 2006.

Advances in technology, as described in the next section, continued. It was just that their adoption elsewhere was generally not as quick, most notably in the retail sector.

Server-side Technologies

The requirement to better support the production of dynamic content led to the appearance of various server-side technologies from the late 1990s.

The development of scripting languages such as PHP, JavaScript and Python, often used in conjunction with the open source relational database MySQL, led to rapid growth in web-based applications.
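
As an indicative sketch of the pattern, a server-side script building a page from whatever is currently in the database, the example below uses Python with its built-in sqlite3 module as a stand-in for MySQL, purely to keep it self-contained; the table and its contents are invented.

    import sqlite3

    # A throwaway in-memory database standing in for MySQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (name TEXT, price REAL)")
    conn.executemany("INSERT INTO products VALUES (?, ?)",
                     [("widget", 9.99), ("gadget", 14.99)])

    def render_page():
        # Build dynamic HTML from the current database contents.
        rows = conn.execute("SELECT name, price FROM products").fetchall()
        items = "".join(f"<li>{name}: {price:.2f}</li>" for name, price in rows)
        return f"<html><body><ul>{items}</ul></body></html>"

    print(render_page())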

More sophisticated facilities also appeared. They typically supported: thin client, fat client and web services for presentation; components which provided the business logic; and connectivity to databases and other servers. There were two main camps in this area of the market: Microsoft and Java.

Microsoft introduced ASP (Active Server Pages) in 1996 with support for COM business components. Further developments eventually led to its .NET technologies in 2001. In the Java camp, JSP (Java Server Pages) and J2EE appeared in 1999. It should be noted that J2EE was a specification, not a product. Various vendors developed products which conformed to the specification, e.g. WebLogic and Websphere.

NoSQL

NoSQL databases began to appear from around the turn of the 21st century; the name is usually taken to stand for “not only SQL”. Many web-related applications had a requirement to crunch large amounts of data in a short space of time, often data that did not fit naturally with the relational database model.

This led to a range of approaches, each suited to particular needs. They included: key-value stores, where a unique key is associated with a value, e.g. Berkeley DB; document stores, e.g. MongoDB; wide column stores, e.g. Google Bigtable; and graph stores, e.g. IBM Graph.
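
To illustrate the simplest of these, the key-value store, Python’s standard dbm module gives a small on-disk taste of the idea (the file name and key below are invented):

    import dbm

    # A key-value store: each unique key maps to an opaque value.
    with dbm.open("sessions.db", "c") as db:      # 'c' creates the file if needed
        db["user:42"] = "last_login=2009-06-01"   # put
        print(db["user:42"].decode())             # get by key
        del db["user:42"]                         # delete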

Smartphones and their Operating Systems

iPhone OS (now known as iOS) was developed by Apple. It was based on Darwin, an open source Unix-like system. It was released in 2007 on the first iPhone. Applications (Apps) are written in Objective-C.

The Android OS, which is in widespread use in the non-Apple world, was based on a modified version of the Linux kernel along with other open source software. It was developed by Google and first released in 2008. Applications (Apps) are written in Java. The first smartphone which used Android was the T-Mobile G1 in October 2008.

Artificial Intelligence

The Holy Grail has always been human-level intelligence. Periods of fascination and optimism, leading to funding, have alternated with periods of loss of faith, resulting in reduced funding.

Let us pick the story up around 1981 when the Japanese government poured lots of money into what it called the fifth-generation computer project, while the UK invested in the Alvey project and DARPA founded the Strategic Computing Initiative. There were some successes in the areas of expert systems and knowledge engineering. However, enthusiasm had largely waned by 1987.

Further advances were achieved in the period from 1993 to 2011. However, they tended to occur quietly, behind the scenes, concentrating on specific problems. Successes were partly attributable to increased computer power. In essence, the field of AI began to split up into various competing subfields.

Home Networks

In the 1990s it was usual for home users to connect to the Internet by using a dial-up modem, operating at up to 56Kbits/sec. This obviously tied up the phone line for the duration of the session.

ADSL (Asymmetric Digital Subscriber Line) was launched in the UK in 2000, allowing the existing copper wires to be used for both voice and data communication at the same time. Data speeds increased gradually with the average across the UK reaching 4.4Mbits/sec by 2007.

In the home itself, cable connections between PCs and the modem were replaced by wireless routers in the 2000s. The original 802.11 wireless protocol had been introduced in 1997, supporting up to 2Mbits/sec. It was followed in 1999 by 802.11b, which allowed up to 11Mbits/sec.

Gaming Hardware

Gaming consoles started to become commonplace from the 1970s. Makes included: Atari (from the 1970s); Sega and Nintendo (from the 1980s); Sony PlayStation (from 1994); and Microsoft’s Xbox (from 2001). PCs could also be used, and indeed serious players might well custom-build them to obtain maximum performance.

Various standards for graphics, such as CGA and VGA, appeared in the 1980s. Graphics cards became fairly standard items, some with their own processor on board to allow faster graphics, usually termed graphics acceleration.

It was in the late 1990s that the first graphics processing units (GPUs) appeared to meet the need to produce ever faster and better graphics. A key development saw the introduction of multi-core GPUs which allowed parallel processing and hence much improved performance. The individual cores were much smaller than those on a CPU, as they did not need to be as complex, just very quick. Hence, it was possible to fit a lot more cores into the available space. GPUs could have their own private memory to further improve performance.

The ability of GPUs to support parallel processing subsequently made them attractive to other resource-intensive tasks which were unrelated to graphics.

Cloud Computing

The term usually refers to the provision of data storage and computer power by data centres to large numbers of users over the Internet. The word “cloud” was first coined around 1994 by General Magic to describe a platform for distributed computing.

Products which offered computing power in the Cloud began to appear during the 2000s, including Amazon’s Elastic Compute Cloud (2006), a beta version of Google’s App Engine (2008) and the open-source OpenNebula toolkit (also in 2008).

Another business strand was to simply offer data storage in the Cloud. Dropbox and Microsoft’s SkyDrive (now called OneDrive) both appeared in 2008, followed by Google Drive in 2012.

Start of the Internet of Things

The first smart device was a modified Coke vending machine at Carnegie Mellon University in 1982 which was able to report on its stock levels and say whether the drinks were cold or not.

The term “Internet of Things” was first coined in 1999 to describe a system of inter-related computing devices that are capable of transferring data over a network without human action or intervention. A necessary prerequisite was the embedded system that was mentioned in the 1990s section.

It was in the 2010s that applications started to appear in areas such as commerce, agriculture, industry, the military and the home.

Whatever Happened to Them?

A number of the hardware vendors that have been mentioned in this potted history no longer exist.

All hardware companies expanded into other areas. Minicomputer outfits tried, to a greater or lesser degree, to get into the mainframe market, just as the mainframe players invaded the territory of the minicomputer companies. And, of course, they all wanted a piece of the microcomputer arena.

As a generalisation, the original minicomputer companies fared worse in the long term, with the notable exception of Hewlett Packard who had started off with a successful, diversified portfolio of products.

  • DEC (by now known as Digital) were acquired by Compaq in 1998, who subsequently struggled and eventually merged with Hewlett Packard in 2002.
  • Data General began to struggle in the 1990s, and they were acquired by EMC Corporation (now Dell EMC) in 1999.
  • Wang also struggled around the same time, and while successful parts of the business were sold off, Wang itself ceased to exist in 1999.
  • Sun Microsystems had risen quickly from being a supplier of workstations in the early 1980s to a vendor of top-end, mainframe-class servers by the early 2000s. They were acquired by Oracle Corporation in 2010.

ICL, the British company, had a financial crisis in the early 1980s, and it was saved through its relationship with Fujitsu, who gradually acquired ICL, becoming its sole shareholder by 1998.

Unisys, the result of the merger of Sperry and Burroughs in 1986, also started to struggle in the late 1980s and early 1990s. It took a strategic decision to limit its hardware offerings to high-end servers and concentrate more on software and the provision of other IT services.

Odds and Sods

Ferranti Atlas Computer

This was the result of research work in Britain in the early 1960s. Atlas was created via a joint development effort, involving the University of Manchester, Ferranti and Plessey. Only three machines were built: for the University of Manchester, BP / University of London and the Atlas Computer Laboratory at Chilton, near Oxford.

It was a second-generation computer which used discrete germanium transistors. Its features included: fast processing (claimed to be one of the earliest supercomputers); a virtual memory system (1 million words were addressable); Supervisor (an operating system that scheduled jobs and switched between them); extracodes (an early example of microcoding techniques); support for floating point arithmetic; spooling; and support for magnetic tape and fixed disk.

A separate system was built for Cambridge University, called Titan or Atlas 2. It had a different memory organisation and ran a time-sharing operating system that was developed by the University. Two further machines were built: for the CAD Centre in Cambridge; and the Atomic Weapons Research Centre at Aldermaston.

IT Departments and People

The following notes generally refer to a medium or larger sized IT outfit.

In the 1960s, a Data Processing Department, as it was frequently called, contained systems analysts, programmers, computer operators, punch operators and various admin. staff.

As the industry expanded in the 1970s, both in hardware and software terms, so did the need for additional staff, including various specialists. The DP Department may have split into two: a development department and an operations department. The concept of the chief programmer team, responsible for all design decisions, programming and testing, was born. I am not sure to what degree this took off in medium-sized sites. Systems analysts, who had previously been responsible for both analysis and design work, tended to separate into business analyst and system design roles.

The increase in the size and complexity of systems software such as operating systems and the need for telecommunications support led to the creation of technical services teams, which were usually part of the operations department.

The arrival of minicomputers, usually in business departments, brought with it the need for additional IT staff. This often led to political wrangling between central IT who wanted to retain control and the business department who wanted to be responsible for its own destiny. Microcomputers were a different story. The centre might possibly control any purchasing agreements, but it became impossible for them to do much more than that.

By the 1990s some organisations had separate maintenance and development teams, while help desks had become fairly common. In addition, specialists were gradually required in a growing number of areas such as security, risk assessment and web services.

A significant amount of new development was now being outsourced. The typical structure of a software house’s project development team might include: a project manager, a technical design authority, business analysts, technical architects, plus other consultants (with a small “c”) who acted as programmers and testers.

Finally, general outsourcing of IT became popular.

Standards Organisations

Here is a selection of standards organisations:

  • ANSI (American National Standards Institute) oversees the development of standards for products, services, processes, systems et cetera. It is most noted in the computer world for programming language standards, e.g. FORTRAN, COBOL, C and MUMPS
  • ISO (International Organisation for Standardization) was established in Geneva, Switzerland in 1947. It currently has 163 member countries. ISO’s main products are international standards. It also publishes technical reports, technical specifications, publicly available specifications and guides
  • BSI (British Standards Institution) produces technical standards for a wide range of products and services
  • IEEE (Institute of Electrical and Electronic Engineers) is the world’s largest technical professional organisation dedicated to advancing technology for the benefit of humanity
  • IETF (Internet Engineering Task Force) develops and promotes Internet standards
  • CODASYL (Conference/Committee on Data Systems Languages) was formed in 1959 with the original objective of developing a standard programming language which could be used on many different computers. It subsequently concentrated on matters relating to databases.

x-bit Architectures

Processor chips are usually described as x-bit, where x is usually 4, 8, 16, 32 or 64. The number describes how many bits wide the integers, memory addresses, registers, address buses, data buses, et cetera are.

The number has gradually got larger over time. For example, 32-bit architectures were present in mainframes and minicomputers in the 1970s, but not until the 1980s in microcomputers, while 64-bit systems began to appear in the 1990s. The larger the number of bits, the bigger the numbers that can be handled directly, the larger the memory capacity that can be addressed, and the more data that can be moved at a time. So, broadly speaking, the bigger the number the better.
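
As a quick worked illustration of the addressing point, the few lines of Python below (illustrative only) show the maximum memory that 16-, 32- and 64-bit addresses can reach:

    # Maximum directly addressable memory for a given address width.
    for bits in (16, 32, 64):
        print(f"{bits}-bit addresses can reach {2 ** bits:,} bytes")
    # 16-bit -> 65,536 bytes (64 KB)
    # 32-bit -> 4,294,967,296 bytes (4 GB)
    # 64-bit -> 18,446,744,073,709,551,616 bytes (roughly 18 million terabytes)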

It should be noted that in some instances not every item is of the stated width; some items may be smaller, some larger. However, it is beyond the scope of this document to identify them.

Bug

The celebrated story of how this term came about, as told to us by computer pioneer Grace Hopper, is that a problem with the Harvard Mark II in 1947 was traced to a moth which was trapped in a relay.

Amusing though it is, there are earlier uses of the term, stemming from the 1870s, before digital computers, when it began to be used by engineers to describe mechanical problems.

Bibliography & Further Reading

O’Regan, G., A Brief History of Computing, Springer, 2012

Tanenbaum, A.S., Bos, H., Modern Operating Systems, 4th Edition, Pearson Education, 2014

King, B., Performance Assurance for IT Systems, Auerbach Publications, 2004

Links

I thoroughly recommend the website of the Computer History Museum, particularly the Timelines section. Tom’s Hardware penned a Computer History, largely based on visits to the museum in Mountain View, California. It is well worth a read.

My book on Performance Assurance, published in 2004, included a section on what I called technology tasters. I subsequently penned a further eight, mainly on subjects that were germane around that period. They are all pdfs and can be found on the computer performance page of this website.

The following links are mainly, but not exclusively, to entries in Wikipedia.

Hardware

Alan Turing Publishes “On Computable Numbers”
History of computing hardware
List of vacuum tube computers
Vacuum tube computer
Colossus computer
Drum memory
Magnetic-core memory
Transistor
List of transistorized computers
Technological progress
Minicomputer
History of microprocessor
History of workstations
Personal digital assistant

Microcode and Firmware

Microcode
Terminal emulator

Operating Systems

Timeline of operating systems
Comparison of operating system kernels
MUMPS
Pick operating system
History of Unix
The CP/M Microcomputer Operating System
Slides – history of operating systems
Operating Systems
Kernel (operating system)
Hybrid kernel
Microkernel
Timeline of virtualization development
Mobile operating system

Programming Languages

History of programming languages
Timeline of programming languages
Assembly language

Databases

History of data modelling
Hierarchical database model
Network database model
Relational database model
NoSQL
Brief History of Data Warehousing

Companies

History of IBM
Burroughs Corporation
UNIVAC
History of Digital Equipment Corporation
International Computers Limited
Data General
Wang Laboratories
Hewlett-Packard

Storage

Magnetic tape data storage
History of hard disk drives
The Development and History of Solid State Drives

Network

Computer network
History of networks
ARPANET
History of the Internet
Local area network

Fault Tolerant and High Availability Systems

Tandem Computers
Stratus Technologies
Brief history of high availability
High availability cluster

Supercomputers

History of supercomputing

Office and Other Apps

Word processor
Spreadsheet

Design and Programming Techniques

Structured programming
Structured systems analysis and design method
Edward Yourdon
Object-oriented programming

Internet

World Wide Web
Web search engine
History of Web Cache Server
History of Network Firewalls
History of Antivirus programs
History of email
History of blogging
Timeline of social media

Server-side Technologies

Structure of a Client-Server System
Middleware

Miscellaneous

History of free and open source software
Open Source Initiative
History of artificial intelligence
History of the graphical user interface
Transaction processing system
What is a GPU?
Who invented the term “bug”
Ferranti Atlas Computer

Acknowledgements

Mike Chiu, an ex-colleague, provided input on data modelling and databases, one of his areas of expertise. He also read and commented on my initial draft, as did Adrian Brooks, another ex-colleague, and Janet King (the other half who also worked in IT). I thank them all for their assistance. All errors in this document are mine.

Each image has a caption with a link. Clicking on the link will provide information on the source of the image. If you suspect that I have infringed any copyright, please contact me and I will remove the offending image. Please note that none of the pieces which I write are for financial gain. They are all freely available.

Version History

Version 0.1 (December 31st, 2019) – very drafty
Version 0.2 (January 7th, 2020) – Cloud Computing and the Internet of Things added
Version 0.3 (January 20th, 2020) – Odds & Sods section added with entries on: Ferranti Atlas Computer, IT Departments & People, x-bit Architectures, Standards Organisations and Origins of the term “bug”
Version 0.4 (January 24th, 2020) – added: clusters and network file systems; web cache servers, network firewalls and antivirus software; plus SAN and NAS storage.