Computer program

A computer is a tool that provides information and entertainment by means of a computer program written in a programming language.[1] A computer program in its human-readable form is called source code. Source code needs another computer program to execute it, because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter. The programming language Java compiles into an intermediate form, which is then executed by a Java interpreter.[2]

If the executable is requested for execution, then the operating system loads it into memory and starts a process.[3] The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction.[4]
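
The fetch-decode-execute cycle can be sketched as a toy virtual machine in C. This is a minimal sketch: the opcodes, the single register, and the five-word program below are invented for illustration and are far simpler than any real instruction set.

#include <stdio.h>

/* A hypothetical three-instruction machine; the opcode values are invented. */
enum { LOAD = 1, ADD = 2, HALT = 3 };

int main(void) {
    /* A tiny "machine language" program: LOAD 7; ADD 5; HALT */
    int memory[] = { LOAD, 7, ADD, 5, HALT };
    int pc = 0;          /* program counter */
    int accumulator = 0; /* single register */

    for (;;) {
        int instruction = memory[pc++];   /* fetch */
        switch (instruction) {            /* decode */
        case LOAD: accumulator = memory[pc++]; break;   /* execute */
        case ADD:  accumulator += memory[pc++]; break;
        case HALT: printf("result: %d\n", accumulator); return 0;
        }
    }
}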

If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement.[2] Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer.

A collection of computer programs, libraries, and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software and system software. The underlying method used for calculation or manipulation is known as an algorithm.

History

Analytical Engine

Lovelace's diagram from Note G, the first published computer algorithm

In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine.[5] The names of the components of the calculating device were borrowed from the textile industry, where yarn was brought from the store to be milled. The device had a "store" which was memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the "store" were transferred to the "mill" for processing. It was programmed using two sets of perforated cards: one set directed the operation and the other provided the input variables.[5][6] However, after more than £17,000 of the British government's money had been spent, the thousands of cogged wheels and gears never fully worked together.[7]

During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea. The memoir covered the Analytical Engine. The translation contained Note G which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first written computer program.[8]

Universal Turing machine

In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing complete computing machine.[9] It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state.[10]
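
A minimal sketch of these ideas, assuming a fixed-size tape and an invented two-state rule table: the following C program simulates a toy Turing machine that inverts the bits on its tape and halts when it reads the first blank cell.

#include <stdio.h>

/* A toy Turing machine that inverts a binary tape, then halts.
   The states, symbols, and transitions are invented for illustration. */
int main(void) {
    char tape[] = "1011";   /* finite approximation of the infinite tape */
    int head = 0;           /* position of the read/write head */
    enum { INVERT, HALT } state = INVERT;  /* initial state */

    while (state != HALT) {
        char symbol = tape[head];                              /* read the cell */
        if (symbol == '0')      { tape[head] = '1'; head++; }  /* write, move right */
        else if (symbol == '1') { tape[head] = '0'; head++; }
        else                    { state = HALT; }              /* blank: halt state */
    }
    printf("final tape: %s\n", tape);   /* prints 0100 */
}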

Early programmable computers

The Z3 computer, invented by Konrad Zuse in 1941 in Germany, was a digital and programmable computer.[11] The Z3 contained 2,400 relays to create the circuits. The circuits provided a binary, floating-point, nine-instruction computer. The Z3 was programmed through a specially designed keyboard and punched tape.

The Electronic Numerical Integrator And Computer (ENIAC) was built between July 1943 and Fall 1945. It was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together.[12] Its 40 units weighed 30 tons, occupied 1,800 square feet (167 m²), and consumed $650 per hour (in 1940s currency) in electricity when idle.[12] It had 20 base-10 accumulators. Programming the ENIAC took up to two months.[12] Three function tables were on wheels and needed to be rolled to fixed function panels. Function tables were connected to function panels using heavy black cables. Each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week.[12] It ran from 1947 until 1955 at Aberdeen Proving Ground, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.[13]

Later computers

Switches for manual input on a Data General Nova 3, manufactured in the mid-1970s

Computers manufactured until the 1970s had front-panel switches for programming. The computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed. This process was then repeated. Computer programs were also manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches, and the execute button was pressed.[14]

In 1961, the Burroughs B5000 was built specifically to be programmed in the Algol 60 language. The hardware featured circuits to ease the compile phase.[15]

The IBM System/360 (1964) was a line of six computers, each having the same instruction set architecture. The Model 30 was the smallest and least expensive. Customers could upgrade and retain the same application software.[16] Each System/360 model featured multiprogramming. With operating system support, multiple programs could be in memory at once; when one was waiting for input/output, another could compute. Each model also could emulate other computers. Customers could upgrade to the System/360 and retain their IBM 7094 or IBM 1401 application software.[16]

Programming languages

Main page: Programming language
"hello, world" computer program by Brian Kernighan (1978)

Computer programming (also known as software development and software engineering) is the process of writing or editing source code. In a formal environment, a systems analyst will gather information from managers about all the business processes to automate. This professional then prepares a detailed plan for the new or modified system.[17] The plan is analogous to an architect's blueprint.[17] A computer programmer is a specialist responsible for modifying or writing the source code to implement the detailed plan.[17]

A computer program written in an imperative language

A programming language is a set of keywords, symbols, identifiers, and rules by which humans can communicate instructions to the computer.[18] These elements are combined according to a set of rules called a syntax.[18]

Programming languages are based on formal languages.[19] The purpose of defining a solution in terms of a formal language is to generate an algorithm to solve the underlying problem.[19] An algorithm is a sequence of simple instructions that solve a problem.[20]
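
For example, Euclid's algorithm for the greatest common divisor is such a sequence of simple instructions; the following C sketch is one possible encoding.

#include <stdio.h>

/* Euclid's algorithm: repeatedly replace the pair (a, b) with (b, a mod b)
   until the remainder is zero; the last nonzero value is the GCD. */
int gcd(int a, int b) {
    while (b != 0) {
        int remainder = a % b;
        a = b;
        b = remainder;
    }
    return a;
}

int main(void) {
    printf("%d\n", gcd(1071, 462));  /* prints 21 */
}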

The evolution of programming languages began when the EDSAC (1949) ran the first stored computer program in its von Neumann architecture.[21] The EDSAC was programmed in the first generation of programming languages.

The second generation of programming languages is assembly language.[22] Assembly language allows the programmer to use mnemonic instructions instead of remembering instruction numbers. A computer program called an assembler translates each assembly language instruction into its machine language number. For example, on the PDP-11, the operation 24576 can be referenced as ADD in the source code.[23] The four basic arithmetic operations have assembly instructions like ADD, SUB, MUL, and DIV.[23] Assembly languages also have directives like DW or DC to reserve memory cells. The MOV instruction can then copy integers between registers and memory.

The basic structure of an assembly language statement is a label, operation, operand, and comment:[24]

  • Labels allow the programmer to work with variable names. The assembler will later translate labels into physical memory addresses.
  • Operations allow the programmer to work with mnemonics. The assembler will later translate the mnemonics into instruction numbers.
  • Operands tell the assembler which data the operation will process.
  • Comments allow the programmer to articulate a narrative because the instructions alone are vague.

The key characteristic of an assembly language program is that it forms a one-to-one mapping to its corresponding machine language target,[25] as the sketch below illustrates.
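
A toy illustration of the mnemonic-to-number translation in C; the opcode values are invented, and a real assembler also resolves labels and operands.

#include <stdio.h>
#include <string.h>

/* A toy "assembler" pass: translate each mnemonic to its instruction number.
   The opcode values below are hypothetical. */
struct opcode { const char *mnemonic; int number; };

static const struct opcode table[] = {
    { "MOV", 1 }, { "ADD", 2 }, { "SUB", 3 }, { "MUL", 4 }, { "DIV", 5 },
};

int assemble(const char *mnemonic) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].mnemonic, mnemonic) == 0)
            return table[i].number;
    return -1;  /* unknown mnemonic */
}

int main(void) {
    printf("ADD -> %d\n", assemble("ADD"));  /* prints ADD -> 2 */
}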

The third generation of programming languages uses compilers and interpreters to execute computer programs. Unlike assembly language, these languages generate many machine language instructions for each symbolic statement.[22] The distinguishing feature of third-generation languages is their independence from particular hardware.[26] They began with the languages Fortran (1958), Cobol (1959), Algol (1960), and Basic (1964).[22] In 1973, C emerged as a high-level language that produced efficient machine language instructions.[27] Today, an entire paradigm of languages fills the imperative, third-generation spectrum.

Imperative languages

Imperative languages specify a sequential algorithm using declarations, expressions, and statements:[30]

  • A declaration couples a variable name to a datatype – for example: var x: integer;
  • An expression yields a value – for example: 2 + 2 yields 4
  • A statement might assign an expression to a variable or use the value of a variable to alter the program's control flow – for example: x := 2 + 2; if x = 4 then do_something();
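
In C, for example, the three elements look like this. This is a minimal sketch; do_something() is a hypothetical placeholder.

#include <stdio.h>

void do_something(void) { printf("x is 4\n"); }  /* hypothetical placeholder */

int main(void) {
    int x;           /* declaration: couples the name x to the integer datatype */
    x = 2 + 2;       /* statement: assigns the expression 2 + 2 to x */
    if (x == 4)      /* statement: uses x to alter the control flow */
        do_something();
    return 0;
}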

Fortran

FORTRAN (1958) was unveiled as "The IBM Mathematical FORmula TRANslating system." It first compiled correctly in 1958.[31] It was designed for scientific calculations, without string handling facilities. Along with declarations, expressions, and statements, it supported arrays, subroutines, and "do" loops.

It succeeded because:

  • programming and debugging costs were below computer running costs.
  • it was supported by IBM.
  • applications at the time were scientific.[31]

However, non-IBM vendors also wrote Fortran compilers, but with a syntax that would likely fail IBM's compiler.[31] The American National Standards Institute (ANSI) developed the first Fortran standard in 1966. In 1978, Fortran 77 became the standard until 1991. Fortran 90 then added features such as modules, recursion, and dynamic memory allocation.

Cobol

COBOL (1959) stands for "COmmon Business Oriented Language." Fortran manipulated symbols; it was soon realized that symbols did not need to be numbers, so strings were introduced.[32] The US Department of Defense influenced COBOL's development, with Grace Hopper being a major contributor. The statements were English-like and verbose. The goal was to design a language so managers could read the programs. However, the lack of structured statements hindered this goal.[33]

COBOL's development was tightly controlled, so dialects did not emerge to require ANSI standards. As a consequence, the language was not changed for 15 years, until 1974. The 1990s version did make consequential changes, like object-oriented programming.[33]

Algol

ALGOL (1960) stands for "ALGOrithmic Language." It had a profound influence on programming language design.[34] Emerging from a committee of European and American programming language experts, it used standard mathematical notation and had a readable, structured design. Algol was the first language to define its syntax using the Backus–Naur form.[34] This led to syntax-directed compilers. It added features like:

  • block structure, where variables were local to their block.
  • arrays with variable bounds.
  • "for" loops.
  • functions.
  • recursion.[34]

Algol's direct descendants include Pascal, Modula-2, Ada, Delphi and Oberon on one branch. On another branch there's C, C++ and Java.[34]

Basic

BASIC (1964) stands for "Beginner's All-purpose Symbolic Instruction Code." It was developed at Dartmouth College for all of their students to learn.[35] If a student did not go on to a more powerful language, the student would still remember Basic.[35] A Basic interpreter was installed in the microcomputers manufactured in the late 1970s. As the microcomputer industry grew, so did the language.[35]

Basic pioneered the interactive session.[35] It offered operating system commands within its environment:

  • The 'new' command created an empty slate.
  • Statements were evaluated immediately.
  • Statements could be programmed by preceding them with a line number.
  • The 'list' command displayed the program.
  • The 'run' command executed the program.

However, the Basic syntax was too simple for large programs.[35] Recent dialects have added structure and object-oriented extensions. Microsoft's Visual Basic is still widely used and produces a graphical user interface.[36]

C

C (1973) got its name because the language BCPL was replaced with B, and Bell Labs called the next version "C." Its purpose was to write the UNIX operating system.[27] C is a relatively small language, making it easy to write compilers. Its growth mirrored the hardware growth in the 1980s.[27] It also grew because it has the facilities of assembly language but uses a high-level syntax. It added advanced features like:

  • arithmetic on pointers.
  • pointers to functions.
  • bit operations.
  • freely combining operators.[27]
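
A minimal C sketch of these features; the function and variable names are invented for illustration.

#include <stdio.h>

int add(int a, int b) { return a + b; }
int sub(int a, int b) { return a - b; }

int main(void) {
    int values[] = { 10, 20, 30 };
    int *p = values;
    p++;                                  /* arithmetic on pointers: p now points at 20 */
    printf("%d\n", *p);

    int (*operation)(int, int) = add;     /* pointer to a function */
    printf("%d\n", operation(2, 3));

    unsigned flags = 0x5;
    printf("%u\n", flags & 0x4);          /* bit operation: masks out one bit */

    int x = 0, y = 0;
    x = y = add(1, 1) + sub(3, 1);        /* freely combining operators */
    printf("%d %d\n", x, y);
    return 0;
}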

However, the major drawback is its potential lack of readability: abusing its compact code design makes syntax errors common.[27]

Declarative languages

Imperative languages have one major criticism: Assigning an expression to a non-local variable may produce an unintended side effect.[37] Declarative languages generally omit the assignment statement and the flow control. They describe what computation should be performed and not how to compute it. Two broad categories of declarative languages are functional languages and logical languages.

Functional languages

The principle behind a functional language is to use Lambda calculus as a guide for a well-defined semantics.[38] In mathematics, a function is a rule that maps elements from an expression to a range of values. Consider the function:

times_10(x) = 10 * x

The expression 10 * x is mapped by the function times_10() to a range of values. One value happens to be 20. This occurs when x is 2. So, the application of the function is mathematically written as:

times_10(2) = 20

A functional language compiler will not store this value in a variable. Instead, it will push the value onto the computer's stack before setting the program counter back to the calling function. The calling function will then pop the value from the stack.[39]

Imperative languages do support functions. Therefore, functional programming can be achieved in an imperative language if the programmer uses discipline. However, functional languages force this discipline onto the programmer by removing the assignment statement from the syntax. Moreover, functional languages have a simpler syntax because they omit the "how" that imperative languages must spell out.[40]

A functional program is developed with a set of primitive functions followed by a single driver function.[37] Consider the snippet:

function max(a, b) { return a > b ? a : b; }  /* one possible definition */

function min(a, b) { return a < b ? a : b; }

function difference_between_largest_and_smallest(a, b, c) {
    return max(a, max(b, c)) - min(a, min(b, c));
}

The primitives are max() and min(). The driver function is difference_between_largest_and_smallest(). Executing:

put(difference_between_largest_and_smallest(10,4,7)); will output 6.

Functional languages are used in computer science research to explore new language features.[41] Moreover, their lack of side effects has made them popular in parallel programming and concurrent programming.[42] However, application developers prefer the object-oriented features of imperative languages.[42]

Lisp

LISP (1958) stands for "LISt Processor."[43] It is tailored to process lists. A full structure of the data is formed by building lists of lists. In memory, a tree data structure is built. Internally, the tree structure lends itself nicely to recursive functions.[44] The syntax to build a tree is to enclose the space-separated elements within parentheses. The following is a list of three elements. The first two elements are themselves lists of two elements:

((A B) (HELLO WORLD) 94)

Lisp has functions to extract and reconstruct elements.[45] The function head() returns the first element of a list. The function tail() returns a list containing everything but the first element. The function cons() builds a list by prepending an element to an existing list. Therefore, the following expression will return the list x:

cons(head(x), tail(x))
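
These operations can be sketched in C with a linked cons-cell structure. The functions below are illustrative re-creations of head(), tail(), and cons(), not a real Lisp implementation.

#include <stdio.h>
#include <stdlib.h>

/* A cons cell: one element plus a pointer to the rest of the list. */
struct cell { int element; struct cell *rest; };

struct cell *cons(int element, struct cell *rest) {
    struct cell *c = malloc(sizeof *c);
    c->element = element;
    c->rest = rest;
    return c;
}

int head(struct cell *list)          { return list->element; }  /* first element */
struct cell *tail(struct cell *list) { return list->rest; }     /* everything else */

int main(void) {
    /* The list (1 2 3), built by consing onto the empty list (NULL). */
    struct cell *x = cons(1, cons(2, cons(3, NULL)));

    /* cons(head(x), tail(x)) rebuilds a list equal to x. */
    struct cell *y = cons(head(x), tail(x));
    printf("%d %d\n", head(y), head(tail(y)));  /* prints 1 2 */
    return 0;
}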

One drawback of Lisp is that when many functions are nested, the parentheses may look confusing.[40] Modern Lisp environments help ensure that parentheses match. As an aside, Lisp does support the imperative language operations of the assignment statement and goto loops.[46] Also, Lisp is not concerned with the data type of the elements at compile time; instead, it assigns the data types at run time. This may lead to programming errors not being detected early in the development process.

Writing large, reliable, and readable Lisp programs requires forethought. If properly planned, the program may be much shorter than an equivalent imperative language program.[40] Lisp is widely used in artificial intelligence. However, its usage has been accepted only because it has imperative language operations, making unintended side effects possible.[42]

ML

ML (early 1970s)[47] stands for "Meta Language." ML is fully typed, with type checking performed at compile time.[48] For example, this function has one input parameter (an integer) and returns an integer:

fun times_10(n : int) : int = 10 * n;

ML is not parenthesis-eccentric like Lisp. The following is an application of times_10():

times_10 2

It returns "20 : int". (Both the results and the data type are returned.)

Like Lisp, ML is tailored to process lists. Unlike Lisp, each element is the same data type.[49]

Prolog Logical Language

PROLOG (1972) stands for "PROgramming in LOgic." It was designed to process natural languages.[50] The building blocks of a Prolog program are objects and their relationship to other objects. Objects are built by stating true facts about them.[51]

Set theory facts are formed by assigning objects to sets. The syntax is setName(object).

  • Cat is an animal.
animal(cat).
  • Mouse is an animal.
animal(mouse).
  • Tom is a cat.
cat(tom).
  • Jerry is a mouse.
mouse(jerry).

Adjective facts are formed using adjective(object).

  • Cat is big.
big(cat).
  • Mouse is small.
small(mouse).

Relationships are formed using multiple items inside the parentheses. In our example we have verb(object,object). and verb(adjective,adjective).

  • Mouse eats cheese.
eat(mouse,cheese).
  • Big animals eat small animals.
eat(big,small).

After all the facts and relationships are entered, a question can be asked:

Will Tom eat Jerry?
?- eat(tom,jerry).

Prolog's usage has expanded; it has become a goal-oriented language.[52] In a goal-oriented application, the goal is defined by providing a list of subgoals. Each subgoal is then defined by providing a list of its subgoals, and so on. If a path of subgoals fails to find a solution, then that subgoal is backtracked and another path is systematically attempted.[51] Practical applications include solving the shortest path problem[50] and producing family trees.[53]

Functional categories

Computer programs may be categorized along functional lines. The main functional categories are application software and system software. System software includes the operating system, which couples computer hardware with application software.[54] The purpose of the operating system is to provide an environment in which application software executes in a convenient and efficient manner.[54] In addition to the operating system, system software includes embedded programs, boot programs, and micro programs. Application software designed for end users has a user interface. Application software not designed for the end user includes middleware, which couples one application with another. Both system software and application software execute utility programs.

Application software

Application software is the key to unlocking the potential of the computer system.[55] Enterprise application software bundles accounting, personnel, customer, and vendor applications. Examples include enterprise resource planning, customer relationship management, and supply chain management software.

Enterprise applications may be developed in-house as one-of-a-kind proprietary software.[55] Alternatively, they may be purchased as off-the-shelf software. Purchased software may be modified to provide custom software. If the application is customized, then either the company's resources are used or the resources are outsourced. Outsourced software development may be from the original software vendor or a third-party developer.[55]

The advantages of proprietary software are that features and reports may be exact to specification.[56] Management may also be involved in the development process and exercise a level of control. Management may decide to counteract a competitor's new initiative or implement a customer or supplier requirement. A merger or acquisition will necessitate enterprise software changes.[56]

The disadvantages of proprietary software are the time and resource costs may be extensive.[56] Furthermore, risks concerning features and performance may be looming.

The advantages of off-the-shelf software are that its upfront costs are identifiable, the basic needs should be fulfilled, and its performance and reliability have a track record.[56]

The disadvantages of off-the-shelf software are that it may have unnecessary features that confuse end users, it may lack features the enterprise needs, and the data flow may not match the enterprise's work processes.[56]

One approach to economically obtaining a customized enterprise application is through an application service provider.[57] Specialty companies provide the hardware, custom software, and end-user support. They may speed development of new applications because they possess skilled information system staff. The biggest advantage is it frees in-house resources from staffing and managing complex computer projects.[57]

Many application service providers target small, fast-growing companies with limited information system resources.[57] On the other hand, larger companies with major systems likely have their technical infrastructure in place. One key risk is having to trust an external organization with sensitive information. Another key risk is having to trust the provider's infrastructure reliability.[57]

Utility programs

Utility programs are designed to aid system administration and software execution. Operating systems execute hardware utility programs to check the status of disk drives, memory, speakers, and printers.[58] A utility program may optimize the placement of a file on a crowded disk. System utility programs monitor hardware and network performance. When a metric is outside an acceptable range, an alert is triggered.[59]

Utility programs include compression programs, so that data files are stored on less disk space.[58] Compressed files also save time when data files are transmitted over the network.[58] Utility programs can sort and merge data sets.[59] Utility programs detect computer viruses.

Operating system

An operating system is the low-level software that supports a computer's basic functions, such as scheduling tasks and controlling peripherals.[54]

In the 1950s, the programmer, who was also the operator, would write a program and run it. After the program finished executing, the output may have been printed, or it may have been punched onto paper tape or cards for later processing.[14] More often than not, the program did not work. The programmer then looked at the console lights and fiddled with the console switches. If the programmer was less fortunate, a memory printout was made for further study. In the 1960s, programmers reduced the amount of wasted time by automating the operator's job. A program called an operating system was kept in the computer at all times.[60]

The term operating system may refer to two levels of software.[61] The operating system may refer to the kernel program that manages the processes, memory, and devices. More broadly, the operating system may refer to the entire package of the central software that includes the kernel program, command-line interpreter, graphical user interface, utility programs, and editor.[61]

  • The kernel program should perform memory management.[62] The kernel ensures that a process only accesses its own memory, and not that of the kernel or other processes. To save memory, the kernel may load only blocks of execution instructions from the disk drive, not the entire executable file.
  • The kernel program should perform file system management.[62] The kernel has instructions to create, retrieve, update, and delete files.
  • The kernel program should perform device management.[62] The kernel provides programs to standardize and simplify the interface to the mouse, keyboard, disk drives, and other devices. Moreover, the kernel should arbitrate access to a device if two processes request it at the same time.
  • The kernel program should perform network management.[63] The kernel transmits and receives packets on behalf of processes. One key service is to find an efficient route to the target system.
  • The kernel program should provide system level functions for programmers to use.[64]
    • Programmers access files through a relatively simple interface that in turn executes a relatively complicated low-level I/O interface. The low-level interface includes file creation, file descriptors, file seeking, physical reading, and physical writing (see the sketch after this list).
    • Programmers create processes through a relatively simple interface that in turn executes a relatively complicated low-level interface.
    • Programmers perform date/time arithmetic through a relatively simple interface that in turn executes a relatively complicated low-level time interface.[65]
  • The kernel program should provide a communication channel between executing processes.[66] For a large software system, it may be desirable to engineer the system into smaller processes. Processes may communicate with one another by sending and receiving signals.
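
As an example of the file interface mentioned above, the following C sketch copies a file through the POSIX low-level I/O interface; the file names are invented for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Copy a file using the low-level interface: file creation, file
   descriptors, physical reading, and physical writing. */
int main(void) {
    char buffer[4096];
    ssize_t count;

    int in  = open("input.txt", O_RDONLY);   /* hypothetical file names */
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    while ((count = read(in, buffer, sizeof buffer)) > 0)  /* physical read */
        write(out, buffer, count);                         /* physical write */

    close(in);
    close(out);
    return 0;
}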

Originally, operating systems were programmed in assembly; however, modern operating systems are typically written in higher level languages like C, C++, Objective-C, and Swift.

Boot program

A stored-program computer requires an initial computer program stored in its read-only memory to boot. The boot process identifies and initializes all aspects of the system, from processor registers to device controllers to memory contents.[67] Following the initialization process, this initial computer program loads the operating system and sets the program counter to begin normal operations.

Embedded programs

The microcontroller on the right of this USB flash drive is controlled with embedded firmware.

Independent of the host computer, a hardware device might have embedded firmware to control its operation. Firmware is used when the computer program is rarely or never expected to change, or when the program must not be lost when the power is off.[60]

Microcode programs

Main page: Microcode
Logic gates: NOT, NAND, NOR, AND, and OR

The microcode program is the bottom-level interpreter that controls the data path of software-driven computers.[68] (Advances in hardware have migrated these operations to hardware execution circuits.)[68] Microcode instructions allow the programmer to more easily implement the digital logic level,[69] the computer's real hardware. The digital logic level is the boundary between computer science and computer engineering.[70]

A gate is a tiny transistor that can return one of two signals: on or off.[71]

  • Having one transistor forms the NOT gate.
  • Connecting two transistors in series forms the NAND gate.
  • Connecting two transistors in parallel forms the NOR gate.
  • Connecting a NOT gate to a NAND gate forms the AND gate.
  • Connecting a NOT gate to a NOR gate forms the OR gate.

These five gates form the building blocks of binary algebra, the digital logic functions of the computer.
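
As a sketch, these gate behaviors can be modeled in C as functions on 0/1 signals; note how AND and OR are composed exactly as described above.

#include <stdio.h>

/* Model each gate as a function on 0/1 signals. */
int NOT(int a)         { return !a; }
int NAND(int a, int b) { return !(a && b); }
int NOR(int a, int b)  { return !(a || b); }
int AND(int a, int b)  { return NOT(NAND(a, b)); }  /* a NOT gate fed by a NAND gate */
int OR(int a, int b)   { return NOT(NOR(a, b)); }   /* a NOT gate fed by a NOR gate */

int main(void) {
    /* Print the truth table for all four input combinations. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  NAND=%d NOR=%d AND=%d OR=%d\n",
                   a, b, NAND(a, b), NOR(a, b), AND(a, b), OR(a, b));
    return 0;
}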

Microcode instructions are mnemonics programmers may use to execute digital logic functions instead of forming them in binary algebra. They are stored in a central processing unit's (CPU) control store.[72] These hardware-level instructions move data throughout the data path.

Microcode instructions move data between a CPU's registers and throughout the motherboard. The micro-instruction cycle begins when the microsequencer uses its microprogram counter to fetch the next machine instruction from random access memory.[73] The next step is to decode the machine instruction by selecting the proper output line to the hardware module.[74] The final step is to execute the instruction using the hardware module's set of gates.

A symbolic representation of an ALU.

Instructions to perform arithmetic are passed through an arithmetic logic unit (ALU).[75] The ALU has circuits to perform elementary operations to add, shift, and compare integers. By combining and looping the elementary operations through the ALU, the CPU performs its complex arithmetic.

Microcode instructions move data between the CPU and the memory controller. Memory controller microcode instructions manipulate two registers. The memory address register is used to access each memory cell's address. The memory data register is used to set and read each cell's contents.[76]
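
A simplified model of this register pair, with invented sizes and names, might look like the following C sketch.

#include <stdio.h>

/* A toy memory controller; the memory size and names are invented. */
static int memory[256];         /* the memory cells */
static int address_register;    /* memory address register (MAR) */
static int data_register;       /* memory data register (MDR) */

/* Write cycle: place the address and data in the registers, then store. */
void memory_write(int address, int data) {
    address_register = address;
    data_register = data;
    memory[address_register] = data_register;
}

/* Read cycle: place the address in the MAR; the cell's contents appear in the MDR. */
int memory_read(int address) {
    address_register = address;
    data_register = memory[address_register];
    return data_register;
}

int main(void) {
    memory_write(42, 7);
    printf("%d\n", memory_read(42));  /* prints 7 */
    return 0;
}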

Microcode instructions move data between the CPU and the many computer buses. The disk controller bus writes to and reads from the hard disk drives. Data is also moved between the CPU and other functional units via the peripheral component interconnect express bus.[77]

References

  1. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 2. ISBN 0-201-71012-9. 
  2. 2.0 2.1 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 7. ISBN 0-201-71012-9. 
  3. Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 98. ISBN 978-0-201-50480-4. 
  4. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 32. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/32. 
  5. 5.0 5.1 McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 16. ISBN 978-0-8027-1348-3. https://archive.org/details/eniac00scot/page/16. 
  6. Bromley, Allan G. (1998). "Charles Babbage's Analytical Engine, 1838". IEEE Annals of the History of Computing 20 (4): 29–45. doi:10.1109/85.728228. http://profs.scienze.univr.it/~manca/storia-informatica/babbage.pdf. 
  7. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 15. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/15. 
  8. J. Fuegi; J. Francis (October–December 2003), "Lovelace & Babbage and the creation of the 1843 'notes'", Annals of the History of Computing 25 (4): 16, 19, 25, doi:10.1109/MAHC.2003.1253887 
  9. Rosen, Kenneth H. (1991). Discrete Mathematics and Its Applications. McGraw-Hill, Inc.. p. 654. ISBN 978-0-07-053744-6. https://archive.org/details/discretemathemat00rose/page/654. 
  10. Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and Company. p. 234. ISBN 978-0-669-17342-0. 
  11. "History of Computing". http://history-computer.com/ModernComputer/Relays/Zuse.html. 
  12. 12.0 12.1 12.2 12.3 McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 102. ISBN 978-0-8027-1348-3. https://archive.org/details/eniac00scot/page/102. 
  13. McCartney, Scott (1999). ENIAC – The Triumphs and Tragedies of the World's First Computer. Walker and Company. p. 107. ISBN 978-0-8027-1348-3. https://archive.org/details/eniac00scot/page/107. 
  14. 14.0 14.1 Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 6. ISBN 978-0-201-50480-4. 
  15. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 20. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/20. 
  16. 16.0 16.1 Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 21. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane. 
  17. 17.0 17.1 17.2 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 507. ISBN 0-619-06489-7. 
  18. 18.0 18.1 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 159. ISBN 0-619-06489-7. 
  19. 19.0 19.1 Linz, Peter (1990). An Introduction to Formal Languages and Automata. D. C. Heath and Company. p. 2. ISBN 978-0-669-17342-0. 
  20. Weiss, Mark Allen (1994). Data Structures and Algorithm Analysis in C++. Benjamin/Cummings Publishing Company, Inc.. p. 29. ISBN 0-8053-5443-3. 
  21. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 17. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/17. 
  22. 22.0 22.1 22.2 22.3 22.4 22.5 22.6 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 160. ISBN 0-619-06489-7. 
  23. 23.0 23.1 23.2 Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 399. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/399. 
  24. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 400. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/400. 
  25. Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 398. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/398. 
  26. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 26. ISBN 0-201-71012-9. 
  27. 27.0 27.1 27.2 27.3 27.4 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 37. ISBN 0-201-71012-9. 
  28. Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 161. ISBN 0-619-06489-7. 
  29. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 321. ISBN 0-201-71012-9. 
  30. Wilson, Leslie B. (1993). Comparative Programming Languages, Second Edition. Addison-Wesley. p. 75. ISBN 978-0-201-56885-1. 
  31. 31.0 31.1 31.2 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 16. ISBN 0-201-71012-9. 
  32. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 24. ISBN 0-201-71012-9. 
  33. 33.0 33.1 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 25. ISBN 0-201-71012-9. 
  34. 34.0 34.1 34.2 34.3 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 19. ISBN 0-201-71012-9. 
  35. 35.0 35.1 35.2 35.3 35.4 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 30. ISBN 0-201-71012-9. 
  36. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 31. ISBN 0-201-71012-9. 
  37. 37.0 37.1 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 218. ISBN 0-201-71012-9. 
  38. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 217. ISBN 0-201-71012-9. 
  39. Weiss, Mark Allen (1994). Data Structures and Algorithm Analysis in C++. Benjamin/Cummings Publishing Company, Inc.. p. 103. ISBN 0-8053-5443-3. 
  40. 40.0 40.1 40.2 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 230. ISBN 0-201-71012-9. 
  41. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 240. ISBN 0-201-71012-9. 
  42. 42.0 42.1 42.2 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 241. ISBN 0-201-71012-9. 
  43. Jones, Robin; Maynard, Clive; Stewart, Ian (December 6, 2012). The Art of Lisp Programming. Springer Science & Business Media. p. 2. ISBN 9781447117193. 
  44. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 220. ISBN 0-201-71012-9. 
  45. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 221. ISBN 0-201-71012-9. 
  46. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 229. ISBN 0-201-71012-9. 
  47. Gordon, Michael J. C. (1996). "From LCF to HOL: a short history". http://www.cl.cam.ac.uk/~mjcg/papers/HolHistory.html. 
  48. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 233. ISBN 0-201-71012-9. 
  49. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 235. ISBN 0-201-71012-9. 
  50. 50.0 50.1 "Birth of Prolog". November 1992. http://alain.colmerauer.free.fr/alcol/ArchivesPublications/PrologHistory/19november92.pdf. 
  51. 51.0 51.1 Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 246. ISBN 0-201-71012-9. 
  52. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 245. ISBN 0-201-71012-9. 
  53. Wilson, Leslie B. (2001). Comparative Programming Languages, Third Edition. Addison-Wesley. p. 247. ISBN 0-201-71012-9. 
  54. 54.0 54.1 54.2 Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 1. ISBN 978-0-201-50480-4. 
  55. 55.0 55.1 55.2 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 147. ISBN 0-619-06489-7. 
  56. 56.0 56.1 56.2 56.3 56.4 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 148. ISBN 0-619-06489-7. 
  57. 57.0 57.1 57.2 57.3 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 149. ISBN 0-619-06489-7. 
  58. 58.0 58.1 58.2 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 145. ISBN 0-619-06489-7. 
  59. 59.0 59.1 Stair, Ralph M. (2003). Principles of Information Systems, Sixth Edition. Thomson. p. 146. ISBN 0-619-06489-7. 
  60. 60.0 60.1 Tanenbaum, Andrew S. (1990). Structured Computer Organization, Third Edition. Prentice Hall. p. 11. ISBN 978-0-13-854662-5. https://archive.org/details/structuredcomput00tane/page/11. 
  61. 61.0 61.1 Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 21. ISBN 978-1-59327-220-3. 
  62. 62.0 62.1 62.2 62.3 Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 22. ISBN 978-1-59327-220-3. 
  63. Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 23. ISBN 978-1-59327-220-3. 
  64. Kernighan, Brian W. (1984). The Unix Programming Environment. Prentice Hall. p. 201. ISBN 0-13-937699-2. 
  65. Kerrisk, Michael (2010). The Linux Programming Interface. No Starch Press. p. 187. ISBN 978-1-59327-220-3. 
  66. Haviland, Keith (1987). Unix System Programming. Addison-Wesley Publishing Company. p. 121. ISBN 0-201-12919-1. 
  67. Silberschatz, Abraham (1994). Operating System Concepts, Fourth Edition. Addison-Wesley. p. 30. ISBN 978-0-201-50480-4. 
  68. 68.0 68.1 Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 6. ISBN 978-0-13-291652-3. 
  69. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 243. ISBN 978-0-13-291652-3. 
  70. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 147. ISBN 978-0-13-291652-3. 
  71. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 148. ISBN 978-0-13-291652-3. 
  72. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 253. ISBN 978-0-13-291652-3. 
  73. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 255. ISBN 978-0-13-291652-3. 
  74. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 161. ISBN 978-0-13-291652-3. 
  75. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 166. ISBN 978-0-13-291652-3. 
  76. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 249. ISBN 978-0-13-291652-3. 
  77. Tanenbaum, Andrew S. (2013). Structured Computer Organization, Sixth Edition. Pearson. p. 111. ISBN 978-0-13-291652-3. 

Further reading

  • Knuth, Donald E. (1997). The Art of Computer Programming, Volume 1, 3rd Edition. Boston: Addison-Wesley. ISBN 978-0-201-89683-1. 
  • Knuth, Donald E. (1997). The Art of Computer Programming, Volume 2, 3rd Edition. Boston: Addison-Wesley. ISBN 978-0-201-89684-8. 
  • Knuth, Donald E. (1997). The Art of Computer Programming, Volume 3, 3rd Edition. Boston: Addison-Wesley. ISBN 978-0-201-89685-5.