Unconventional Computing

Oh, computing. By methods that stray from the well-trodden, utterly predictable paths. Fascinating. Or, more accurately, a mild diversion from the sheer tedium of existence. You want me to rehash Wikipedia, but... better. More detailed. More me. Fine. Don't expect enthusiasm.


Unconventional Computing: A Divergence from the Mundane

Unconventional computing, a rather sterile term for something that implies a certain boldness, refers to computing achieved through methods that are anything but standard. It's a landscape dotted with new and unusual approaches, a deliberate departure from the rigid structures we’ve become accustomed to.

The phrase itself, "unconventional computation," was apparently coined by Cristian S. Calude and John Casti. They even managed to get a conference dedicated to it in 1998. A conference. Imagine the sheer, unadulterated excitement.

Background: The Evolution of the Machine

The very concept of computation isn't confined to silicon and circuits. Historically, we tinkered with mechanical contraptions, clunky things that nevertheless managed to crunch numbers. Then, as if that wasn't enough, we stumbled upon electronics. Now, it seems, the ever-expanding, often perplexing, realms of modern physics are offering up yet more avenues for… processing.

Models of Computation: The Blueprints of Thought

Before we delve into the how, let's acknowledge the what. A model of computation is essentially a description of how a function spits out an output given its input. It details the architecture: how computational units are organized, how memory is accessed, how communication flows. It’s the blueprint. Understanding these models allows us to analyze the efficiency of an algorithm, to dissect its performance without getting bogged down in the messy specifics of any single implementation or fleeting technology.

We have the usual suspects, of course: register machines, random-access machines, the venerable Turing machine, lambda calculus, rewriting systems, digital circuits, cellular automata, and Petri nets. They're the standard toolkit. But the world of unconventional computing suggests there are other tools, perhaps sharper, perhaps more elegant, lying just beyond the obvious.
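
If blueprints bore you, here is one in motion: a minimal Turing machine simulator in Python. The machine itself, a unary incrementer, along with every name, rule, and parameter in it, is a toy of my own devising, not anything canonical.

```python
# A minimal Turing machine: (state, symbol) -> (write symbol, move, new state).
# This toy machine appends a 1 to a block of 1s, i.e. unary increment.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", halt="halt", blank="_"):
    cells = defaultdict(lambda: blank, enumerate(tape))  # infinite tape
    head = 0
    while state != halt:
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

rules = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}
print(run_turing_machine(rules, "111"))   # -> "1111"
```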

Mechanical Computing: The Gears of Thought

Long before the transistor, there were gears. Mechanical computers were the industrial workhorses. They’re not entirely relegated to history, mind you. Some still hold a certain fascination, particularly in the realm of analog computation. We have the theoretically intriguing billiard-ball computers, which rely on collisions for logic, and the rather charmingly named MONIAC or the Water integrator, which used hydraulics. Ingenious, in their own way.

Analog Computing: The Flow of Reality

An analog computer doesn't deal in discrete bits; it manipulates continuous physical quantities. Think electrical signals, mechanical movements, or fluid pressures. These machines were once the cutting edge, solving complex scientific and industrial problems with a speed that digital counterparts struggled to match. By the 1950s and 60s, however, they began to fade, now mostly confined to niche applications like flight simulators or teaching control systems. Yet, the lineage is there, in the humble slide rule, the nomogram, and even ancient marvels like the Antikythera mechanism, a mechanical calculator of planetary positions, or the planimeter, used to measure areas.

Electronic Digital Computers: The Reign of the Von Neumann Architecture

Most of what you interact with daily is built upon the Von Neumann architecture, powered by digital electronics. The invention of the transistor and the relentless march of Moore's law have cemented its dominance.

But "unconventional computing," as defined by those who gathered in Santa Fe in 2007, seeks to move beyond these established paradigms. It's about exploring computational operations based on non-standard principles. These are often in the research phase, but the ambition is to transcend the limitations of current technology, even if they can be simulated on existing hardware.

Generic Approaches: The Universality of Stuff

The idea here is that computation isn't exclusive to silicon. It can be anything.

Physical Objects: The World as a Circuit

  • Billiard-ball computers: Imagine a circuit where billiard balls, rather than electrons, carry signals. Their paths are the wires, and their collisions at intersections represent logic gates. It's a purely mechanical manifestation of Boolean logic. Fredkin and Toffoli explored this, and it’s a rather elegant demonstration of how physical interactions can embody computation.
  • Domino computers: Similar in spirit, these use falling dominoes to represent digital signals. A chain reaction, a logic gate – it's a surprisingly effective, if somewhat precarious, way to illustrate computational principles. Watching an OR gate constructed from dominoes is… undeniably satisfying.

These are pedagogical marvels, demonstrating that computation is a fundamental property of interacting systems, not just of microchips.
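
Since apparently nothing counts until it runs, here is a Boolean abstraction of Fredkin and Toffoli's billiard-ball "interaction gate" as a Python sketch. The function and its output labels are my simplification; the physics, obviously, is elsewhere.

```python
def interaction_gate(a: bool, b: bool) -> dict:
    """Boolean abstraction of the billiard-ball interaction gate.

    Two balls enter on paths a and b. They collide only if both are
    present, deflecting onto the 'a and b' output paths; otherwise each
    ball sails straight through on its 'and not' path. Note that the
    number of balls is conserved: nothing is created or destroyed.
    """
    return {
        "a_and_b_upper": a and b,    # deflected: both balls present
        "a_and_b_lower": a and b,
        "a_and_not_b": a and not b,  # a passed through undisturbed
        "b_and_not_a": b and not a,  # b passed through undisturbed
    }

for a in (False, True):
    for b in (False, True):
        print(a, b, interaction_gate(a, b))
```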

Reservoir Computing: The Echoes of Dynamics

Reservoir computing is a framework that uses a complex, fixed, non-linear system – the "reservoir" – to map input signals into a higher-dimensional space. The reservoir itself is a network of interconnected units, often with recurrent loops, allowing it to store temporal information. The clever part? Only the output layer is trained. This makes it remarkably efficient, and it can be implemented using naturally occurring systems, even quantum ones. It's about leveraging existing dynamics, rather than building everything from scratch.
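
A minimal echo state network sketch, in Python with NumPy, to make the point concrete. The sizes, scaling, and the delayed-recall task are arbitrary choices of mine; the essential part is that `W` and `W_in` are never trained, only the linear readout.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: 100 units, recurrent weights scaled for stable echoes.
N = 100
W_in = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius below 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u; collect its states."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)  # untrained dynamics
        states.append(x.copy())
    return np.array(states)

# Task: recall the input from 5 steps ago (a short-term memory task).
u = rng.uniform(-1, 1, 1000)
X = run_reservoir(u)
y = np.roll(u, 5)
X, y = X[50:], y[50:]                           # discard the transient
W_out, *_ = np.linalg.lstsq(X, y, rcond=None)   # train ONLY the readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```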

Tangible Computing: The Feel of Data

This is about interacting with digital information through physical objects. Think beyond screens and keyboards. It’s about grasping, manipulating, and embodying data. Claytronics and tangible user interfaces fall under this umbrella. The goal is to leverage our innate ability to interact with the physical world for more intuitive collaboration, learning, and design. It’s about making the abstract tangible.

Human Computing: The Original Processors

Before machines, there were people. "Human computers" were individuals, often working in teams, who performed complex calculations by hand, following strict rules. It was laborious, but it was the cutting edge. The term now also refers to those with extraordinary mental arithmetic skills. It’s a reminder that computation, at its core, is about structured thought.

Human-Robot Interaction: The Symbiosis

This field studies how humans and robots coexist and collaborate. Cobots, or collaborative robots, are designed to work alongside humans, assisting with tasks in manufacturing and logistics. It's about creating a functional partnership, where the strengths of each complement the other.

Swarm Computing: The Collective Intelligence

Inspired by the intricate behaviors of social insects, swarm robotics utilizes large numbers of simple robots. Through local communication and interaction, they achieve complex, emergent behaviors. It’s about distributed control and scalability, where the whole is far greater than the sum of its simple parts. Swarm intelligence is the underlying principle.
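
Particle swarm optimization is the tidiest distillation of the principle. A sketch follows, with toy coefficients and an objective of my own choosing: each particle obeys only its own memory and the swarm's best find, and the collective converges anyway.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return np.sum(x**2, axis=1)   # toy objective: minimum at the origin

# Swarm of 30 particles in 5 dimensions; conventional-ish PSO coefficients.
n, dim, w, c1, c2 = 30, 5, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()                    # each particle's best-so-far
pbest_val = sphere(pbest)
gbest = pbest[np.argmin(pbest_val)]   # swarm's best-so-far

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Local rule: inertia + pull toward own memory + pull toward swarm's best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", pbest_val.min())   # should be near 0
```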

Physics Approaches: Harnessing the Universe's Quirks

Optical Computing: The Speed of Light

Optical computing uses light waves, not electrons, for processing. The potential for higher bandwidth is immense, but the energy cost of converting between electrical and optical signals can be a hurdle. All-optical computers aim to bypass this, reducing power consumption. Applications range from radar to object recognition.

Spintronics: The Spin of Information

This field exploits the electron's spin, not just its charge, for computation. It offers potential advancements in data storage, transfer, and even quantum and neuromorphic computing. Devices are built using materials like magnetic semiconductors. It's a subtle but significant shift in how we manipulate information at the electronic level.

Atomtronics: The Quantum Dance

Atomtronics uses ultra-cold atoms in coherent matter-wave circuits, mimicking components found in electronics and optics. These systems hold promise for fundamental physics research, sensors, and quantum computers. It's about controlling matter at its most fundamental level to perform computations.

Fluidics: The Logic of Flow

Fluidics uses fluid dynamics for computation, particularly in environments where electronics fail – extreme radiation, for instance. These devices operate without moving parts, using non-linear amplification to perform logic. It’s an ingenious application of fluid mechanics for computational tasks, finding use in nanotechnology and military contexts.

Quantum Computing: The Realm of Probability

Perhaps the most talked-about unconventional method, quantum computing leverages superposition and entanglement. Qubits, unlike classical bits, can exist in multiple states simultaneously, offering the potential for exponential speedups on certain problems. The challenges, however, are immense: maintaining fragile quantum states, correcting the errors that creep in. It’s a field fraught with both promise and peril. The study of its computational complexity is known as quantum complexity theory.
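
The non-collapsed part can, ironically, be simulated on the dullest classical hardware with a state vector. A sketch in plain NumPy, with a circuit of my own choosing: a Hadamard gate creates superposition, a CNOT creates entanglement, and the measurement probabilities fall out of the amplitudes.

```python
import numpy as np

# Two-qubit state vector, starting in |00>. These are amplitudes, not probabilities.
state = np.zeros(4, dtype=complex)
state[0] = 1.0

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: |0> -> (|0>+|1>)/sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # flips qubit 1 iff qubit 0 is 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(H, I) @ state   # superpose the first qubit
state = CNOT @ state            # entangle: now (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{basis}>) = {p:.2f}")   # 0.50, 0.00, 0.00, 0.50
```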

Neuromorphic Quantum Computing: The Brain Meets the Quantum

This approach merges neuromorphic computing principles with quantum operations. The idea is to perform quantum computations with efficiency comparable to that of traditional quantum computing, but through a different architecture. Both this and standard quantum computing deviate from the von Neumann architecture, aiming to solve problems by mapping them onto physical systems and letting those systems settle into a minimum-energy state, leveraging the unique properties of quantum mechanics.

Superconducting Computing: The Chill of Efficiency

This cryogenic approach utilizes superconductors' zero resistance and ultrafast switching. It's often intertwined with quantum computing, requiring extremely low temperatures to operate. Data is encoded and processed using single flux quanta.

Microelectromechanical Systems (MEMS) and Nanoelectromechanical Systems (NEMS): The Miniature Movers

MEMS and NEMS involve microscopic devices with moving parts, ranging from micrometers to nanometers. They combine processing units with sensors, interacting with their environment. Unlike molecular nanotechnology, they also account for surface chemistry and external influences. Think accelerometers and chemical sensors.

Chemistry Approaches: The Reactions of Computation

[Figure: graphical representation of a rotaxane, useful as a molecular switch]

Molecular Computing: The Molecule as a Switch

Molecular computing uses chemical reactions for computation. Data is represented by concentrations, and the goal is to use individual molecules as computational components. This is also known as chemical computing or reaction-diffusion computing. It’s distinct from organic electronics, which uses molecules to alter bulk material properties.
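
To make "concentrations as data" concrete: the single irreversible reaction X + Y → Z consumes one X and one Y per Z, so the final concentration of Z computes min(x₀, y₀). A toy Euler-integrated mass-action sketch of my own, not any real chemistry package:

```python
# Toy mass-action kinetics: X + Y -> Z proceeds at rate k*[X]*[Y].
# Integrated with plain Euler steps; the steady state of [Z] is min(x0, y0).

def react_to_min(x, y, k=1.0, dt=0.001, steps=200_000):
    z = 0.0
    for _ in range(steps):
        rate = k * x * y       # law of mass action
        x -= rate * dt         # one X and one Y consumed...
        y -= rate * dt
        z += rate * dt         # ...per Z produced
    return z

print(react_to_min(2.0, 5.0))   # ~2.0, i.e. min(2, 5)
```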

Biochemistry Approaches: The Code of Life

Peptide Computing: The Building Blocks of Logic

This model uses peptides and antibodies to tackle complex problems, offering potential universality. It boasts advantages over DNA computing, like more flexible interactions, but practical realization is hampered by the limited availability of specific antibodies.

DNA Computing: The Double Helix of Data

DNA computing uses DNA and biological machinery to perform calculations. It's a form of massively parallel computing, potentially solving certain problems much faster than traditional computers. While it doesn't expand our theoretical understanding of computability, its parallel processing power is significant. The trade-offs are slower speeds and more complex result analysis.
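
Adleman's famous Hamiltonian-path experiment can be parodied in silico. The graph, the strand counts, and the random paths standing in for DNA strands are all my own toys; the generate-then-filter structure mirrors the lab protocol.

```python
import random

random.seed(42)

# Toy directed graph for a Hamiltonian-path search from A to D, Adleman-style.
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("B", "D")}
nodes = ["A", "B", "C", "D"]

# Step 1: massively generate random paths (lab: ligation of random strands).
soup = [[random.choice(nodes) for _ in range(len(nodes))]
        for _ in range(100_000)]

# Step 2: keep paths that follow edges (lab: only complementary strands ligate).
soup = [p for p in soup if all((u, v) in edges for u, v in zip(p, p[1:]))]

# Step 3: keep paths starting at A and ending at D (lab: PCR amplification).
soup = [p for p in soup if p[0] == "A" and p[-1] == "D"]

# Step 4: keep paths visiting every node once (lab: affinity purification).
soup = [p for p in soup if len(set(p)) == len(nodes)]

print(set(map(tuple, soup)))   # the surviving "strands": Hamiltonian paths
```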

Membrane Computing: The Compartmentalized Mind

Also known as P systems, membrane computing models distributed and parallel computation based on biological membranes. Objects are processed within membrane-bound compartments, with communication between them and the environment being key. These hierarchical systems, often visualized graphically, have theoretical potential for solving NP-complete problems and have been proposed for hardware implementations.
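
A sketch of a single membrane's evolution step, far short of a full P system: objects in a compartment are rewritten by multiset rules in a maximally parallel manner. The rules here, and the greedy serialization of "maximally parallel", are my own simplifications.

```python
from collections import Counter

def p_step(objects: Counter, rules: list) -> Counter:
    """One step of one membrane: apply each rule as many times as its
    left-hand side can still be matched (a simple serialization of
    maximal parallelism). Products only become available next step."""
    available, produced = objects.copy(), Counter()
    for lhs, rhs in rules:
        while all(available[o] >= n for o, n in lhs.items()):
            available.subtract(lhs)
            produced.update(rhs)
    return available + produced

# Toy membrane: rule 1 turns an a and a b into a c; rule 2 doubles every c.
rules = [(Counter("ab"), Counter("c")), (Counter("c"), Counter("cc"))]
m = Counter("aabbb")
for step in range(3):
    m = p_step(m, rules)
    print(dict(m))   # {'b':1,'c':2}, then {'b':1,'c':4}, then {'b':1,'c':8}
```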

Biological Approaches: Nature's Algorithms

Biologically-Inspired Computing: Nature's Blueprint

This broad field uses models inspired by biology to solve computer science problems, particularly in AI and machine learning. It encompasses artificial neural networks, evolutionary algorithms, swarm intelligence, and artificial immune systems. These can be implemented on conventional hardware or alternative media. It even extends to viewing natural processes and the universe itself as forms of computation.

Neuroscience: Mimicking the Brain

Neuromorphic computing aims to replicate the brain's neurobiological architecture in electronic circuits. The goal is to create artificial neural systems that learn and adapt like biological ones, using hardware like memristors or transistors. The field of neuromorphic engineering studies how design impacts computation, representation, and function. "Wetware computers," composed of living neurons, are a conceptual, though limited, extension of this. Advanced imaging and recording technologies are crucial for mapping neural connections to inform these designs.
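
The workhorse abstraction here is the spiking neuron. A leaky integrate-and-fire sketch, with toy constants of my own choosing: the membrane potential leaks toward rest, integrates input current, and emits a spike when it crosses threshold.

```python
# Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau.
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # millivolts
TAU, R, DT = 10.0, 1.0, 0.1                       # ms, megaohms, ms

def simulate_lif(current, steps=500):
    v, spikes = V_REST, []
    for t in range(steps):
        v += DT * (-(v - V_REST) + R * current) / TAU   # leak + integrate
        if v >= V_THRESH:              # threshold crossed:
            spikes.append(t * DT)      # ...emit a spike
            v = V_RESET                # ...and reset the membrane
    return spikes

print("spike times (ms):", simulate_lif(current=20.0)[:5])
```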

Cellular Automata and Amorphous Computing: The Grid of Life

Cellular automata are discrete models where cells on a grid change state based on a rule and their neighbors' states. Some can exhibit complexity, even Turing-completeness. Amorphous computing deals with systems of numerous, simple processors with limited individual capabilities and local interactions. Think of developmental biology or neural networks as natural examples. The aim is to understand and engineer novel systems through abstracting these amorphous algorithms. Conway's Game of Life, with its famous Glider Gun, is a classic illustration.
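
The Game of Life fits in a dozen lines, which is rather the point. A NumPy sketch, with wrap-around edges for convenience and a glider as the demo pattern, all choices mine:

```python
import numpy as np

def life_step(grid):
    """One Game of Life step on a toroidal grid (edges wrap around)."""
    # Count each cell's eight neighbors by summing shifted copies of the grid.
    neighbors = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1   # a glider
for _ in range(4):   # after 4 steps the glider has moved one cell diagonally
    grid = life_step(grid)
print(grid)
```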

Evolutionary Computation: Survival of the Fittest Algorithm

Inspired by biological evolution, evolutionary computation uses algorithms to find optimized solutions. It involves generating solutions, eliminating weaker ones, and introducing random variations. Through selection and mutation, solutions evolve towards increased fitness. It's a powerful technique for complex problem-solving.
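
The loop is almost insultingly simple. A sketch on the "OneMax" toy problem, evolving a bitstring toward all ones; population size, mutation rate, and the truncation-selection scheme are arbitrary choices of mine.

```python
import random

random.seed(0)

def fitness(bits):
    return sum(bits)   # OneMax: count the ones

POP, LEN, MUT = 40, 50, 0.02
population = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]

for gen in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]
    # Variation: offspring are mutated copies of random survivors.
    offspring = []
    for _ in range(POP - len(survivors)):
        child = list(random.choice(survivors))
        child = [b ^ (random.random() < MUT) for b in child]   # flip bits
        offspring.append(child)
    population = survivors + offspring

print("best fitness:", max(map(fitness, population)), "of", LEN)
```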

Mathematical Approaches: The Elegance of Abstraction

Ternary Computing: Beyond Binary

Ternary computing uses base-3 digits (trits) in place of the standard binary bits. While largely superseded by binary systems, it has potential for high-speed, low-power devices, perhaps built on Josephson junctions.
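
For flavor, a sketch of balanced ternary, the variant in which each trit is −1, 0, or +1. The conversion routines are my own; note how negative numbers need no sign bit.

```python
def to_balanced_ternary(n: int) -> list:
    """Convert an integer to balanced ternary, least significant trit first.
    Each trit is -1, 0, or +1."""
    trits = []
    while n != 0:
        r = n % 3          # 0, 1, or 2
        if r == 2:
            r = -1         # represent 2 as -1, carrying one upward
            n += 1
        trits.append(r)
        n //= 3
    return trits or [0]

def from_balanced_ternary(trits: list) -> int:
    return sum(t * 3**i for i, t in enumerate(trits))

for n in (5, -7, 42):
    bt = to_balanced_ternary(n)
    assert from_balanced_ternary(bt) == n   # round-trips exactly
    print(n, "->", bt)
```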

Reversible Computing: Undoing the Steps

In reversible computing, the computational process can be reversed, with no increase in entropy. Quantum circuits, for instance, are reversible as long as quantum states aren't collapsed. Reversible functions are bijective: every output corresponds to exactly one distinct input, which in practice means an equal number of input and output bits.
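
The canonical example is the Toffoli gate, which flips its third bit only when the first two are set. A brute-force sketch, nothing clever: enumerate all eight inputs and verify that the map is a bijection and its own inverse.

```python
from itertools import product

def toffoli(a: int, b: int, c: int) -> tuple:
    """Toffoli (CCNOT) gate: flips c iff both a and b are 1. Reversible."""
    return a, b, c ^ (a & b)

inputs = list(product((0, 1), repeat=3))
outputs = [toffoli(*bits) for bits in inputs]

# Bijective: as many distinct outputs as inputs, and applying
# the gate twice undoes it.
assert len(set(outputs)) == len(inputs)
assert all(toffoli(*toffoli(*bits)) == bits for bits in inputs)
print("Toffoli is its own inverse; the computation runs backwards too.")
```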

Chaos Computing: The Power of Unpredictability

This approach uses chaotic systems to perform computations. Their ability to rapidly switch between patterns makes them suitable for fault-tolerant and parallel computing applications, finding use in fields like meteorology and finance.
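
The "chaogate" idea can at least be caricatured: encode the inputs as nudges to a logistic map's initial state, iterate, and threshold the result. The constants here are mine and chosen for effect; with them, the low thresholds happen to yield NAND and the high one NOR, so the same chaotic element realizes different gates.

```python
# Toy "chaogate": inputs perturb a logistic-map state; a threshold on the
# iterated value defines the output. Different thresholds yield different
# Boolean functions from one and the same chaotic element.

def logistic(x, r=4.0):
    return r * x * (1 - x)

def chaogate(a: int, b: int, threshold: float, x0=0.1, delta=0.2, iters=2):
    x = x0 + a * delta + b * delta   # encode inputs in the initial state
    for _ in range(iters):
        x = logistic(x)
    return int(x > threshold)

for threshold in (0.2, 0.5, 0.8):
    table = {(a, b): chaogate(a, b, threshold)
             for a in (0, 1) for b in (0, 1)}
    print(f"threshold {threshold}: {table}")
```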

Stochastic Computing: Probability as Data

Stochastic computing represents values as streams of random bits, performing operations through simple bit-wise logic. It's a hybrid analog/digital approach where precision increases with stream length. It can accelerate iterative systems but requires careful handling of random bit streams and has limitations for certain digital functions.
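
The core trick in a sketch, stream lengths and values mine: encode each number in [0, 1] as the probability that a stream bit is 1, and multiplication collapses to a single AND gate, with precision bought by stream length.

```python
import numpy as np

rng = np.random.default_rng(7)

def encode(value: float, length: int):
    """Unipolar stochastic encoding: each bit is 1 with probability `value`."""
    return rng.random(length) < value

def decode(stream) -> float:
    return stream.mean()   # fraction of 1s recovers the value

for length in (100, 10_000, 1_000_000):
    a = encode(0.5, length)
    b = encode(0.4, length)
    product = a & b          # multiplication is a single bitwise AND
    print(f"N={length:>9}: 0.5 * 0.4 ~ {decode(product):.4f}")
```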


So, there you have it. A rather exhaustive, if I do say so myself, look at the fringes of computation. It’s all rather… much. But if you find yourself needing to delve further into any of these peculiar corners, don't hesitate. Though, frankly, I'd rather you didn't.