
Computer Science



Study of Computation

For other uses, see Computer science (disambiguation).

This is where we dissect the very essence of what it means to process information, to automate tasks, to understand the limits of the possible. It's not just about the blinking lights and humming servers; it's about the underlying principles, the abstract machinery that makes it all tick.


The Core of the Matter

Computer science, at its heart, is the rigorous examination of computation, the abstract nature of information, and the mechanisms of automation. [1] [2] [3] It's a field that straddles the line between pure sciences and applied disciplines. On one side, you have the theoretical disciplines, delving into the elegance of algorithms, the foundational theory of computation, and the quantifiable limits of information theory. On the other, you have the applied disciplines, focused on the tangible: the intricate design and meticulous implementation of hardware and the sprawling world of software. [4] [5] [6] The individuals who navigate this intricate domain are known, rather grandly, as computer scientists.

At the very core of this discipline lie algorithms and data structures. They are the building blocks, the fundamental tools that any serious practitioner must master. [7] The theory of computation, a particularly abstract corner of this field, grapples with the fundamental nature of models of computation and the inherent capabilities and limitations of these abstract machines when tackling various problems. Within this theoretical framework, fields like cryptography and computer security emerge, dedicated to the art of secure communication and the relentless battle against security vulnerabilities. Further afield, computer graphics and computational geometry explore the creation and manipulation of visual information.

The intricate world of programming language theory dissects the myriad ways we can express computational processes, while database theory focuses on the systematic management of vast repositories of data. The interface between humans and machines is the domain of human–computer interaction, a field that seeks to understand and improve this crucial relationship. Meanwhile, software engineering concerns itself with the principles and practices behind building robust and maintainable software. Systems themselves, from foundational operating systems and sprawling computer networks to embedded systems, are subject to intense study, exploring the principles behind their design and operation within the context of complex systems. And, of course, computer architecture lays bare the very construction of computer components and the equipment that makes them function.

Finally, the ambitious fields of artificial intelligence and machine learning strive to replicate or synthesize goal-oriented processes (problem-solving, decision-making, adaptation, planning, and learning) that we observe in humans and animals. Within AI, computer vision seeks to imbue machines with the ability to "see" and interpret visual data, while natural language processing aims to equip them with the capacity to understand and process human language.

Ultimately, the central preoccupation of computer science, the question that echoes through its halls, is this: what can, and more importantly, what cannot be automated? [2] [8] [3] [9] [10] The Turing Award, a distinction of the highest order, is generally reserved for those who have profoundly advanced our understanding of this fundamental query. [11] [12]

History

The roots of computer science stretch back much further than the silicon chips and glowing screens we associate with it today. The lineage is a long and winding one, tracing through mechanical marvels and theoretical breakthroughs.


The story begins not with silicon, but with logic and gears. [13] Gottfried Wilhelm Leibniz (1646–1716), a mind that seemed to grasp the universe, laid groundwork in logic and the binary number system, leading some to call him the "founder of computer science." Then there's Charles Babbage, often lauded as the "father of computing," a title he certainly earned with his visionary, though unrealized, machines. And let's not forget Ada Lovelace, who, in a time when women were rarely acknowledged in scientific circles, published the first algorithm intended for machine execution. [14] [15]

The earliest precursors to our modern digital computers were calculating devices designed for specific numerical tasks, like the ancient abacus. Even before sophisticated machines, humans devised algorithms to perform complex calculations. [16]

The 17th century saw significant mechanical innovation. Wilhelm Schickard built the first functional mechanical calculator in 1623. [17] By 1673, Gottfried Leibniz presented his "Stepped Reckoner," a digital mechanical calculator. [18] Leibniz's contributions to binary logic and information theory solidify his claim as an early pioneer. Fast forward to 1820, and Thomas de Colmar revolutionized the nascent mechanical calculator industry with his "Arithmometer," a device robust enough for daily office use. [note 1]

Charles Babbage began his ambitious work on the "Difference Engine" in 1822, a mechanical calculator designed for polynomial functions. This work eventually led him to conceive of the even more revolutionary "Analytical Engine," the first programmable mechanical calculator. [19] By 1834, he had sketched out fundamental concepts that resonate with modern computers, including the crucial adoption of punched cards, inspired by the Jacquard loom, which would enable infinite programmability. [20] In 1843, while translating an article on the Analytical Engine, Ada Lovelace penned a series of notes, one of which contained an algorithm to compute Bernoulli numbers—widely considered the first algorithm specifically designed for computer implementation. [21]

The late 19th and early 20th centuries continued this trajectory. Around 1885, Herman Hollerith developed the "tabulator," a machine that used punched cards for statistical data processing, eventually leading to the formation of IBM. [22] Independently of Babbage's work, Percy Ludgate designed a mechanical analytical engine in 1909, the second such design in history. [23] In 1914, Spanish engineer Leonardo Torres Quevedo published his work on automatics, proposing a theoretical electromechanical calculating machine controlled by a read-only program, and introducing the concept of floating-point arithmetic. [24] [25] By 1920, Torres demonstrated the feasibility of an electromechanical analytical engine with his "Electromechanical Arithmometer." [26] [27]

The pivotal 1930s and 1940s saw the transition to electronic computation. In 1937, Howard Aiken, inspired by Babbage's vision, collaborated with IBM to develop the ASCC/Harvard Mark I, a colossal programmable calculator that some hailed as Babbage's dream realized. [28] [29] The development of machines like the Atanasoff–Berry computer and ENIAC during the 1940s marked a shift, with the term "computer" now referring to machines rather than human operators. [30] This era also saw the broadening of computer science's scope beyond mere calculation, as the potential for these machines became evident. IBM's establishment of the Watson Scientific Computing Laboratory at Columbia University in 1945 underscored the growing importance of pure science in computing, paving the way for Columbia to offer one of the first academic-credit courses in computer science in 1946. [31] [32]

By the 1950s and early 1960s, computer science was solidifying its identity as a distinct academic discipline. [33] [34] The University of Cambridge Computer Laboratory launched the world's first computer science degree program, the Cambridge Diploma in Computer Science, in 1953. In the United States, Purdue University established the first computer science department in 1962. [35] With the advent of practical computing, numerous specialized areas of study began to emerge.

See also: History of computing and History of informatics

Etymology and Scope

The term "computer science" itself is a relatively recent coinage. While the idea was proposed as early as 1956, [36] its formal appearance in print, advocating for dedicated graduate programs analogous to Harvard Business School, dates to a 1959 article in Communications of the ACM. [37] Louis Fein, in this seminal piece, argued that computer science, like management science, was an applied and interdisciplinary field deserving of its own academic standing. This push, alongside efforts by figures like George Forsythe, led to the establishment of dedicated departments, with Purdue University being an early adopter in 1962. [39]

Interestingly, much of what constitutes computer science doesn't actually focus on computers themselves. This has led to proposals for alternative nomenclature. [40] Some university departments opt for "computing science" to highlight this distinction. The Danish scientist Peter Naur championed "datalogi" (or datalogy), emphasizing the study of data and its manipulation, independent of the specific hardware. [41] The University of Copenhagen established the first professorship in datalogy in 1969. Naur also proposed "data science," a term now widely adopted for a field that blends data analysis, statistics, and databases.

In the early days, the Communications of the ACM humorously offered various appellations for practitioners: "turingineer," "turologist," "flow-charts-man," "applied meta-mathematician," and "applied epistemologist." [42] Later, "comptologist" and "hypologist" were suggested. "Computics" has also seen use. [43] In Europe, terms derived from variations of "automatic information" or "information and mathematics" are common: informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages, Hungarian), and pliroforiki (πληροφορική) in Greek. The UK has also adopted variations, as seen in the School of Informatics, University of Edinburgh. [45] However, in the U.S., "informatics" tends to be associated with applied computing within specific domains. [46]

There's a widely circulated, though likely apocryphal, quote attributed to Edsger Dijkstra: "computer science is no more about computers than astronomy is about telescopes." [note 3] This highlights the field's abstract nature. The actual design and construction of computer hardware typically fall under computer engineering, while the practical deployment and management of commercial computer systems is often the domain of information technology or information systems. Nevertheless, there's a constant cross-pollination of ideas between these related fields. Computer science research also frequently intersects with disciplines like cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.

Many view computer science as being far more closely aligned with mathematics than with other scientific disciplines, some even calling it a mathematical science. [33] Early pioneers like Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter, Stephen Kleene, and Alonzo Church profoundly influenced its development, and the synergy between mathematics and computer science continues in areas such as mathematical logic, category theory, domain theory, and algebra. [36]

The relationship between computer science and software engineering is a point of contention, often clouded by differing definitions of "software engineering" itself. [47] David Parnas has argued that computer science focuses on the general properties of computation, while software engineering is concerned with the practical design of computational systems for specific goals, positioning them as distinct yet complementary. [48]

The academic and funding structures surrounding computer science often reflect whether departments lean more towards a mathematical or an engineering foundation. Those with a mathematical bent often align with computational science. Regardless of emphasis, efforts are typically made to bridge these educational and research divides.

Philosophy

Epistemology of Computer Science

Despite the "science" in its name, the classification of computer science remains a subject of debate: is it a science, [49] a branch of mathematics, [50] or an engineering discipline? [51] In 1975, Allen Newell and Herbert A. Simon posited that computer science is an empirical discipline, akin to astronomy or geology, utilizing unique forms of observation and experimentation, even when building machines. [51]

Later arguments suggest computer science is empirical because it employs testing to verify program correctness. However, defining its fundamental laws and the nature of its experiments remains a challenge. [51] Proponents of the engineering classification draw parallels to civil and aerospace engineering, where reliability and robustness are paramount. They argue that while empirical sciences observe existing phenomena, computer science explores possibilities, and rather than discovering natural laws, it focuses on creating phenomena. [51]

The mathematical classification is supported by the argument that computer programs are physical manifestations of mathematical entities, amenable to deductive reasoning through formal methods. [51] Pioneers like Edsger Dijkstra and Tony Hoare viewed program instructions as mathematical statements, interpreting formal semantics as axiomatic systems. [51]
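To give this axiomatic view a concrete shape, here is the standard notation of Hoare logic, a textbook illustration rather than anything specific to the works cited above. A Hoare triple pairs a program fragment with a precondition and a postcondition, turning a claim about program behavior into a mathematical statement:

```latex
% A Hoare triple {P} C {Q}: if precondition P holds before
% command C runs, postcondition Q holds afterwards.
% Example: the assignment axiom applied to x := x + 1.
\[
\{\, x = n \,\}\;\; x := x + 1 \;\;\{\, x = n + 1 \,\}
\]
```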

Paradigms of Computer Science

Several computer scientists have proposed distinct paradigms within the field. Peter Wegner identified science, technology, and mathematics as core paradigms, [52] while Peter Denning's group proposed theory, abstraction (modeling), and design. [33] Amnon H. Eden further elaborated on these, describing a "rationalist paradigm" (rooted in mathematics and deductive reasoning), a "technocratic paradigm" (aligned with engineering), and a "scientific paradigm" (applying empirical methods from natural sciences, particularly in artificial intelligence). [54]

At its core, computer science investigates the methods involved in the design, specification, programming, verification, implementation, and testing of human-made computing systems. [55]

Fields

This is a dynamic landscape, perpetually evolving. Think of it as a living organism, constantly growing and adapting.

As a discipline, computer science encompasses a vast spectrum, from the abstract theoretical underpinnings of algorithms and the limits of computation to the practical realities of building and deploying hardware and software systems. [56] [57]

The CSAB, formerly the Computing Sciences Accreditation Board, identifies four crucial areas: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. Beyond these, they also recognize the importance of fields such as software engineering, artificial intelligence, computer networking, database systems, parallel and distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation. [56]

Theoretical Computer Science

This is the realm of abstraction and mathematical rigor, yet its ultimate purpose is to inform and improve the practicalities of computation. Its goal is to understand the fundamental nature of computation and, from that understanding, to devise more efficient methods.

Theory of Computation

Peter Denning framed the central question of computer science as: "What can be automated?" [3] The theory of computation seeks to answer this, exploring what is computable and the resources—time and space—required for these computations. Computability theory investigates which problems are solvable by various theoretical models of computation, while computational complexity theory quantifies the costs associated with different algorithmic approaches.

The famous P = NP? problem, one of the seven Millennium Prize Problems, remains an open challenge in this field. [59]
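In symbols, the question asks whether every problem whose solutions can be verified quickly can also be solved quickly:

```latex
% P:  decision problems solvable in polynomial time.
% NP: decision problems whose yes-instances have certificates
%     checkable in polynomial time.
\[
\mathrm{P} \subseteq \mathrm{NP},
\qquad \text{open question:}\quad \mathrm{P} \overset{?}{=} \mathrm{NP}
\]
```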

Information and Coding Theory

Closely intertwined with probability and statistics, information theory quantifies information itself. Claude Shannon pioneered this field to establish fundamental limits on operations like data compression and reliable data storage and communication. [60]
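Shannon's central quantity is entropy, which measures the average information content of a source with given symbol probabilities:

```latex
% Shannon entropy of a discrete random variable X taking values
% with probabilities p_1, ..., p_n (in bits, hence log base 2).
\[
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i
\]
```

Entropy is what makes "fundamental limits" precise: no lossless compressor can use fewer than H(X) bits per symbol on average.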

Coding theory, on the other hand, examines the properties and applications of codes—systems for transforming information. These codes are crucial for data compression, cryptography, error detection and correction, and increasingly, network coding. The goal is to design efficient and robust methods for data transmission.
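As a minimal illustration of error detection, deliberately far simpler than the codes used in practice, a single parity bit lets a receiver detect any one flipped bit:

```python
def add_parity(bits: list[int]) -> list[int]:
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits: list[int]) -> bool:
    """Return True if no single-bit error is detected."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert check_parity(word)         # clean transmission
word[2] ^= 1                      # flip one bit in transit
assert not check_parity(word)     # the error is detected
```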

Data Structures and Algorithms

These are the bedrock of practical computation. This area scrutinizes commonly employed computational methods and their efficiency.
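A classic example of the efficiency concerns studied here: binary search over a sorted array inspects O(log n) elements where a linear scan needs O(n). A minimal sketch:

```python
def binary_search(sorted_items: list[int], target: int) -> int:
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the search interval, so the loop runs
    O(log n) times, the kind of cost analysis this area formalizes.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

assert binary_search([2, 3, 5, 7, 11, 13], 11) == 4
assert binary_search([2, 3, 5, 7, 11, 13], 6) == -1
```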

Programming Language Theory and Formal Methods

Programming language theory dissects the design, implementation, analysis, and classification of programming languages and their features. It's a field deeply connected to mathematics, software engineering, and linguistics, with a vibrant research community.

Formal methods provide mathematically grounded techniques for specifying, developing, and verifying software and hardware systems. The motivation is to enhance reliability and robustness through rigorous mathematical analysis, much like in other engineering disciplines. While they offer crucial theoretical underpinnings for software engineering, especially in high-stakes applications, their complexity often restricts their use to systems where safety and security are paramount. Formal methods draw heavily from theoretical computer science, including logic, formal languages, automata theory, and program semantics.
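A small taste of the formal-methods mindset, sketched with nothing but runtime assertions (real tools such as model checkers and proof assistants establish these facts statically): the loop below maintains an invariant that, combined with the exit condition, implies the postcondition.

```python
def sum_to(n: int) -> int:
    """Compute 0 + 1 + ... + n, checking a loop invariant as we go."""
    total, i = 0, 0
    while i < n:
        # Invariant: total == i * (i + 1) // 2 at the top of each iteration.
        assert total == i * (i + 1) // 2
        i += 1
        total += i
    # On exit i == n, so the invariant yields the postcondition.
    assert total == n * (n + 1) // 2
    return total

assert sum_to(10) == 55
```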

Applied Computer Science

Computer Graphics and Visualization

This field is concerned with the creation and manipulation of digital visual content. It intersects with computer vision, image processing, and computational geometry, and finds extensive application in areas like special effects and video games.

Image and Sound Processing

Information in its myriad forms, whether images, sound, or video, is the raw material. Processing these bits of information, often transmitted via signals, is central to informatics, the European perspective on computing. This field examines information processing algorithms irrespective of the carrier, be it electrical, mechanical, or biological. It plays a significant role in information theory, telecommunications, and information engineering, with applications ranging from medical image computing to speech synthesis. A persistent question in theoretical computer science remains: what is the lower bound on the complexity of fast Fourier transform algorithms? (See the List of unsolved problems in computer science.)
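For context, the radix-2 Cooley–Tukey fast Fourier transform achieves O(n log n) time via the divide-and-conquer recurrence below; whether any algorithm can do asymptotically better is precisely the open lower-bound question:

```latex
% Cost of the radix-2 Cooley--Tukey FFT on n points (n a power of 2):
% two half-size subproblems plus a linear-time combining pass.
\[
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + \Theta(n)
\;\;\Longrightarrow\;\;
T(n) = \Theta(n \log n)
\]
```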

Computational Science, Finance, and Engineering

Scientific computing, or computational science, focuses on constructing mathematical models and employing quantitative analysis techniques, powered by computers, to solve complex scientific problems. A primary application is simulation—modeling everything from fluid dynamics and electrical circuits to societal interactions and biological systems. Modern computing enables advanced design optimization, such as for complete aircraft, and is indispensable in the design of integrated circuits. [63] [64]
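A minimal sketch of the simulation idea, with hypothetical component values and the crudest possible scheme (forward Euler; production codes use far more accurate integrators): discharging an RC circuit by stepping the differential equation dV/dt = -V/(RC) forward in time.

```python
def simulate_rc_discharge(v0: float, r: float, c: float,
                          dt: float, steps: int) -> list[float]:
    """Forward-Euler integration of dV/dt = -V / (R*C)."""
    voltages = [v0]
    v = v0
    for _ in range(steps):
        v += dt * (-v / (r * c))   # one explicit Euler step
        voltages.append(v)
    return voltages

# Hypothetical values: 5 V across 1 kOhm and 1 mF, so tau = RC = 1 s.
trace = simulate_rc_discharge(v0=5.0, r=1e3, c=1e-3, dt=0.01, steps=100)
print(f"V after 1 s ~ {trace[-1]:.3f} V (exact: {5.0 / 2.718281828:.3f} V)")
```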

Human–Computer Interaction

This field investigates the design and utilization of computer systems, with a particular focus on the interaction dynamics between humans and computer interfaces. HCI delves into subfields exploring the intricate connections between emotions, social behavior, and brain activity and our engagement with computers.

Software Engineering

Software engineering is dedicated to the systematic design, implementation, and maintenance of software, prioritizing quality, affordability, and efficiency. It applies engineering principles to software development, encompassing not just creation but also the internal structure, organization, and ongoing upkeep of software. This includes critical practices like software testing, systems engineering, managing technical debt, and optimizing software development processes.

Artificial Intelligence

AI's ambition is to synthesize goal-oriented processes such as learning, problem-solving, decision-making, adaptation, and planning, as found in living organisms. Emerging from cybernetics and the foundational 1956 Dartmouth Conference, AI research is inherently interdisciplinary, drawing from applied mathematics, logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. While often associated with robotics, AI's primary practical impact lies in its integration as a component within broader software development initiatives requiring sophisticated computational understanding. The fundamental question posed by Alan Turing in the late 1940s, "Can computers think?", remains open; the Turing test he proposed offers one benchmark for assessing machine intelligence. The automation of evaluative and predictive tasks has increasingly become a powerful substitute for human oversight in complex data-driven applications.

Computer Systems

Computer Architecture and Microarchitecture

Computer architecture defines the conceptual design and fundamental operational structure of a computer system, with a focus on the internal workings of the central processing unit and memory access. [65] Computer engineers delve into computational logic and hardware design, from individual processor components and microcontrollers to entire supercomputers and embedded systems. The term "architecture" in this context was notably used by Lyle R. Johnson and Frederick P. Brooks Jr. in 1959.

Concurrent, Parallel, and Distributed Computing

Concurrency describes systems where multiple computations execute simultaneously, potentially interacting. [66] Various mathematical models, such as Petri nets, process calculi, and the parallel random access machine model, are used to understand these systems. [67] When these concurrent computations occur across multiple interconnected computers, it forms a distributed system, where individual computers exchange information to achieve shared objectives. [68]
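A minimal sketch of the coordination problem these models study: two or more threads incrementing a shared counter interleave unpredictably, so a lock is needed to make each read-modify-write atomic (standard-library threading, illustrative counts).

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        with lock:            # without this, concurrent read-modify-write
            counter += 1      # sequences can silently lose increments

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 400_000     # deterministic only because of the lock
print(counter)
```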

Computer Networks

This area of computer science focuses on the design and behavior of computer networks. It examines aspects like performance, resilience, security, scalability, cost-effectiveness, and the range of services they can provide. [69]

Computer Security and Cryptography

Computer security is dedicated to protecting information from unauthorized access, disruption, or modification, while ensuring its availability and usability for legitimate users.

Historically, cryptography was the art of secret writing. Modern cryptography, however, is a scientific discipline concerned with the challenges of secure communication in distributed environments. [70] Key areas include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.
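Cryptographic hash functions, one item on that list, are easy to demonstrate with Python's standard library (a sketch of the interface, not a secure protocol): the output has fixed length, any change to the input scrambles it completely, and inverting the function is designed to be infeasible.

```python
import hashlib

digest1 = hashlib.sha256(b"attack at dawn").hexdigest()
digest2 = hashlib.sha256(b"attack at dusk").hexdigest()

# Fixed-length, unpredictable digests; a one-word change alters everything.
print(digest1)
print(digest2)
assert digest1 != digest2
assert len(digest1) == 64      # SHA-256 -> 256 bits -> 64 hex characters
```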

Databases and Data Mining

A database is designed for the organized storage and efficient retrieval of large volumes of data. Database management systems are employed to manage these databases, using database models and query languages. Data mining is the process of uncovering patterns within these extensive datasets.
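A minimal sketch of the database-management idea using Python's built-in sqlite3 module (an in-memory database with a hypothetical table): a declarative query language states what data is wanted, while the engine decides how to retrieve it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")        # throwaway in-memory database
conn.execute("CREATE TABLE papers (title TEXT, year INTEGER)")
conn.executemany(
    "INSERT INTO papers VALUES (?, ?)",
    [("On Computable Numbers", 1936),
     ("A Mathematical Theory of Communication", 1948)],
)

# SQL states *what* we want; the query planner decides *how* to get it.
for title, year in conn.execute(
        "SELECT title, year FROM papers WHERE year < 1940"):
    print(title, year)

conn.close()
```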

Discoveries

The philosopher of computing Bill Rapaport identified three "Great Insights" that fundamentally shaped computer science: [71]

  • The insight of Gottfried Wilhelm Leibniz, George Boole, Alan Turing, Claude Shannon, and Samuel Morse: The universe of computation can be represented with just two fundamental elements.

    All information relevant to any computable problem can be encoded using only 0s and 1s, or any other pair of bistable states (like on/off, magnetized/de-magnetized). This is the foundation of digital representation.

  • See also: Digital physics

  • Alan Turing's insight: A finite set of basic actions is sufficient for any computation.

    Any algorithm can be expressed using just five fundamental instructions:

    • Move left one location.
    • Move right one location.
    • Read the symbol at the current location.
    • Print '0' at the current location.
    • Print '1' at the current location.

    This forms the basis of the Turing machine model (a minimal sketch of such a machine appears in code after this list).
  • Corrado Böhm and Giuseppe Jacopini's insight: Only three control structures are needed to combine these basic actions into complex programs.

    Any set of basic instructions can be combined into more complex constructs using just these three rules:

    • Sequence: Perform this action, then perform that action.
    • Selection: IF a condition is met, THEN do this, ELSE do that.
    • Repetition: WHILE a condition holds, DO this action repeatedly.

    While these three are sufficient, they are not minimal: the goto statement, which can itself be used to implement all three structures, shows that an even more elementary set of operations exists. Structured programming nevertheless deliberately builds on the three rules above.

  • See also: Structured program theorem
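Here is a minimal sketch of Turing's insight in code, a hypothetical toy machine rather than any canonical formulation: a finite state table driving only the five primitive actions (read, print '0', print '1', move left, move right) is enough to compute. This machine flips every bit of its input and halts.

```python
def run_turing_machine(tape: list[str]) -> list[str]:
    """Flip every bit on the tape using only the five primitive actions."""
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else None   # read
        if state == "scan":
            if symbol == "0":
                tape[head] = "1"                            # print '1'
                head += 1                                   # move right
            elif symbol == "1":
                tape[head] = "0"                            # print '0'
                head += 1                                   # move right
            else:                                           # past the input
                state = "halt"
    return tape

assert run_turing_machine(list("01101")) == list("10010")
```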

Programming Paradigms

Programming languages offer diverse approaches to achieving computational goals. Common paradigms include:

  • Functional Programming: This style treats computation as the evaluation of mathematical functions, emphasizing immutability and avoiding side effects. It's a declarative approach, built on expressions rather than sequential statements. [74]

  • Imperative Programming: This paradigm relies on statements that modify a program's state. Like commands in natural language, imperative programs instruct the computer on how to operate step-by-step. [75]

  • Object-Oriented Programming: Based on the concept of "objects," which encapsulate data (attributes) and behavior (methods). Objects interact with each other, forming the structure of object-oriented programs. [76]

  • Service-Oriented Programming: This paradigm uses "services" as the fundamental unit of work, facilitating the design and integration of business applications and mission-critical software.

Many languages support multiple paradigms, blurring the lines between them and offering flexibility in programming style. [77]
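To make the contrast concrete, here is the same small task, summing the squares of some numbers, expressed in three of the paradigms above within one multi-paradigm language (Python; purely illustrative).

```python
from functools import reduce

data = [1, 2, 3, 4]

# Imperative: statements that mutate state, step by step.
total = 0
for x in data:
    total += x * x

# Functional: a single expression built from pure functions, no mutation.
functional_total = reduce(lambda acc, x: acc + x * x, data, 0)

# Object-oriented: data (attributes) and behavior (methods) in an object.
class SquareSummer:
    def __init__(self, values):
        self.values = list(values)

    def total(self):
        return sum(v * v for v in self.values)

assert total == functional_total == SquareSummer(data).total() == 30
```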

Research

Conferences are vital hubs for computer science research, where academics and industry professionals present and discuss their latest findings. In computer science, conference papers often carry more prestige than journal articles, a trend attributed to the field's rapid evolution, necessitating swift dissemination and peer review. [78] [79] [80]


Notes

  • ^ In 1851
  • ^ "The introduction of punched cards into the new engine was important not only as a more convenient form of control than the drums, or because programs could now be of unlimited extent, and could be stored and repeated without the danger of introducing errors in setting the machine by hand; it was important also because it served to crystallize Babbage's feeling that he had invented something really new, something much more than a sophisticated calculating machine." Bruce Collier, 1970
  • ^ See the entry "Computer science" on Wikiquote for the history of this quotation.
  • ^ The word "anything" is written in quotation marks because there are things that computers cannot do. One example is: to answer the question if an arbitrary given computer program will eventually finish or run forever (the Halting problem).