
Reasoning System


Type of software system


In information technology, a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.

By the everyday meaning of the phrase, all computer systems are reasoning systems in that they all automate some type of logic or decision. In typical use in the information technology field, however, the term is reserved for systems that perform more complex kinds of reasoning: not systems that carry out fairly straightforward types of reasoning, such as calculating a sales tax or a customer discount, but systems that make logical inferences about a medical diagnosis or a mathematical theorem.

Reasoning systems come in two modes: interactive and batch processing. Interactive systems interface with the user, asking clarifying questions or otherwise allowing the user to guide the reasoning process. Batch systems take in all the available information at once and generate the best answer they can without user feedback or guidance. [1]

Reasoning systems have a wide range of applications, including scheduling, business rule processing, problem solving, complex event processing, intrusion detection, predictive analytics, robotics, computer vision, and natural language processing.

History

The first reasoning systems were theorem provers: systems that represent axioms and statements in first-order logic and then use rules of logic, such as modus ponens, to infer new statements.

Another early type of reasoning system was the general problem solver. Systems such as the General Problem Solver designed by Newell and Simon were intended to provide a generic planning engine that could represent and solve structured problems. They worked by decomposing a problem into smaller, more manageable sub-problems, solving each sub-problem, and assembling the partial answers into a final solution. The SOAR family of systems is another example of this approach.

In practice, these early theorem provers and general problem solvers were seldom useful beyond theoretical settings; they required specialized users with knowledge of logic simply to operate them. The first practical applications of automated reasoning arrived with expert systems, which focused on much more narrowly defined domains, setting aside general problem-solving in favor of specific tasks such as medical diagnosis or the analysis of faults in an aircraft.

Expert systems also adopted a more constrained approach to logic. Rather than attempting to implement the full range of logical expressions, they typically relied on modus ponens, implemented as simple IF-THEN rules. By focusing on a specific domain and restricting the logic used, these systems achieved performance that made them practical for real-world use rather than mere research demonstrations, as most previous automated reasoning systems had been. The engines used for automated reasoning in expert systems were typically called inference engines; those used for more general logical inferencing are usually called theorem provers. [2]
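
As an illustration of the general idea (not of any particular expert system shell), the following sketch shows modus ponens applied through a single IF-THEN rule: when every condition of the rule is present in working memory, the conclusion is asserted. The facts and the rule are invented for the example.

    # Minimal sketch of modus ponens via an IF-THEN rule (hypothetical facts and rule).
    facts = {"engine_will_not_start", "battery_voltage_low"}

    # IF all conditions hold THEN assert the conclusion.
    rule = {"if": {"engine_will_not_start", "battery_voltage_low"},
            "then": "replace_battery"}

    if rule["if"].issubset(facts):      # the premises are satisfied
        facts.add(rule["then"])         # modus ponens: the conclusion is asserted

    print(facts)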

The burgeoning popularity of expert systems spurred the application of numerous new types of automated reasoning to a diverse array of problems across both government and industry. Some, like case-based reasoning, were direct offshoots of expert systems research. Others, such as constraint satisfaction algorithms, drew influence from fields like decision technology and linear programming. Furthermore, a completely different paradigm, one eschewing symbolic reasoning in favor of a connectionist model, proved exceptionally productive. This latter approach to automated reasoning excels at pattern matching and signal detection tasks, including text searching and face recognition.

Use of logic

The term "reasoning system" can, quite frankly, be applied to almost any sophisticated decision support system, as evidenced by the myriad of specific applications discussed below. However, the most common connotation of the term implies the computer's representation of logic. The variations in implementation are significant, differing in their systems of logic and their degree of formality. Most reasoning systems utilize variations of propositional and symbolic (predicate) logic. These variations might manifest as mathematically precise representations of formal logic systems, such as FOL, or as extended and hybrid versions of these systems, like Courteous logic [3]. Reasoning systems may explicitly incorporate additional logic types, such as modal, deontic, or temporal logics. Yet, it's also common to find reasoning systems that implement imprecise and semi-formal approximations of established logic systems. These systems often embrace a variety of procedural and semi-declarative techniques to model different reasoning strategies, prioritizing pragmatism over strict formality. They frequently rely on custom extensions and add-ons to tackle real-world challenges.

Many reasoning systems employ deductive reasoning to derive inferences from the available knowledge. These inference engines can operate in forward or backward reasoning modes to reach conclusions via modus ponens. The recursive reasoning methods they utilize are known as 'forward chaining' and 'backward chaining', respectively. While deductive inference is widely supported, some systems also incorporate abductive, inductive, defeasible, and other forms of reasoning. To tackle intractable problems, heuristics may be employed to identify acceptable solutions.
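
A minimal illustration of backward chaining over IF-THEN (Horn-style) rules might look like the following; the rules and facts are invented, and a production-quality inference engine would also handle variable bindings, conflict resolution, and more robust cycle handling.

    # Backward chaining: work from a goal back to known facts (illustrative only).
    rules = [
        ({"has_fever", "has_rash"}, "measles_suspected"),
        ({"temperature_high"}, "has_fever"),
    ]
    facts = {"temperature_high", "has_rash"}

    def prove(goal, seen=frozenset()):
        if goal in facts:
            return True
        if goal in seen:                      # avoid infinite recursion on cyclic rules
            return False
        return any(all(prove(p, seen | {goal}) for p in premises)
                   for premises, conclusion in rules if conclusion == goal)

    print(prove("measles_suspected"))         # True: derived by chaining two rules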

Reasoning systems may adopt either the closed world assumption (CWA) or the open world assumption (OWA). The OWA is frequently associated with ontological knowledge representation and the Semantic Web. Systems exhibit a diverse range of approaches to negation. Beyond simple logical or bitwise complement, systems might support existential forms of strong and weak negation, including negation-as-failure and 'inflationary' negation (negation of non-ground atoms). Different reasoning systems can support monotonic or non-monotonic reasoning, stratification, and a host of other logical techniques.
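
Under the closed world assumption, anything that cannot be derived from the knowledge base is treated as false, which is the essence of negation-as-failure. A toy sketch, with invented facts:

    # Closed world assumption: what cannot be proven is assumed false (toy example).
    known_facts = {("flies", "sparrow"), ("flies", "eagle")}

    def holds(predicate, subject):
        return (predicate, subject) in known_facts

    def not_(predicate, subject):
        # Negation-as-failure: succeeds exactly when the positive query fails.
        # Under the open world assumption the answer would instead be "unknown".
        return not holds(predicate, subject)

    print(holds("flies", "sparrow"))   # True: stated in the knowledge base
    print(not_("flies", "penguin"))    # True under CWA: absence is treated as falsity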

Reasoning under uncertainty

A significant number of reasoning systems are equipped to handle reasoning under uncertainty. This capability is crucial for building situated reasoning agents that must navigate a world represented by imperfect information. Several common strategies exist for managing uncertainty. These include the utilization of certainty factors, probabilistic methods such as Bayesian inference or Dempster–Shafer theory, multi-valued ('fuzzy') logic, and various connectionist approaches. [4]
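
One of the probabilistic strategies mentioned above, Bayesian inference, can be sketched in a few lines; the prior and likelihoods below are made-up numbers used only for illustration.

    # Bayesian update: P(disease | positive test), using invented probabilities.
    prior = 0.01            # P(disease)
    sensitivity = 0.95      # P(positive | disease)
    false_positive = 0.05   # P(positive | no disease)

    evidence = sensitivity * prior + false_positive * (1 - prior)   # P(positive)
    posterior = sensitivity * prior / evidence                      # Bayes' theorem

    print(round(posterior, 3))   # ~0.161: belief in the disease after a positive test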

Types of reasoning system

This is a rough categorization of common types of reasoning system; it is neither exhaustive nor absolute. The categories overlap to a significant degree and share many techniques, methods, and algorithms.

Constraint solvers

Constraint solvers are designed to tackle constraint satisfaction problems (CSPs) and are integral to constraint programming. A constraint is essentially a condition that must be satisfied by any valid solution to a problem. Constraints are defined declaratively and applied to variables within specified domains. Constraint solvers employ search, backtracking, and constraint propagation techniques to discover solutions and identify optimal ones. They may incorporate forms of linear and nonlinear programming. Their primary use is often in performing optimization within highly combinatorial problem spaces, such as calculating optimal schedules, designing efficient integrated circuits, or maximizing productivity in manufacturing processes. [5]
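
A small sketch of the search-plus-propagation idea, using map colouring as the constraint satisfaction problem; the regions, adjacencies, and colours are chosen only for illustration.

    # Tiny constraint solver: backtracking search with forward checking (illustrative).
    neighbours = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
    colours = {"red", "green", "blue"}

    def solve(assignment, domains):
        if len(assignment) == len(neighbours):
            return assignment                       # every region has a colour
        var = next(v for v in neighbours if v not in assignment)
        for value in sorted(domains[var]):
            # Constraint: adjacent regions must not share a colour.
            if all(assignment.get(n) != value for n in neighbours[var]):
                pruned = {v: d - {value}
                          if v in neighbours[var] and v not in assignment else d
                          for v, d in domains.items()}          # constraint propagation
                if all(pruned[v] for v in neighbours
                       if v not in assignment and v != var):    # no domain wiped out
                    result = solve({**assignment, var: value}, pruned)
                    if result:
                        return result
        return None                                 # dead end: backtrack

    print(solve({}, {v: set(colours) for v in neighbours}))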

Theorem provers

Theorem provers leverage automated reasoning techniques to establish proofs for mathematical theorems. They can also be employed to verify existing proofs. Beyond academic research, typical applications include verifying the correctness of integrated circuits, software programs, and engineering designs.
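
At the propositional level, the core of a resolution-based prover fits in a short sketch: to prove a goal, add its negation to the clause set and search for the empty clause. The knowledge base below is invented, and real theorem provers use far more sophisticated calculi and indexing.

    # Propositional resolution refutation (illustrative). Literals: "p" or "~p".
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        # Yield every resolvent obtained by cancelling a complementary literal pair.
        for lit in c1:
            if negate(lit) in c2:
                yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

    def proves(clauses, goal):
        clauses = set(clauses) | {frozenset({negate(goal)})}   # assume the negation
        while True:
            new = {r for a in clauses for b in clauses for r in resolve(a, b)}
            if frozenset() in new:
                return True            # empty clause: contradiction, so the goal holds
            if new <= clauses:
                return False           # nothing new derivable: goal not proved
            clauses |= new

    # Knowledge base: p, and p -> q (written as the clause {~p, q}). Prove q.
    kb = [frozenset({"p"}), frozenset({"~p", "q"})]
    print(proves(kb, "q"))   # True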

Logic programs

Logic programs (LPs) are software programs written in programming languages where the primitives and expressions directly mirror constructs from mathematical logic. A prime example of a general-purpose logic programming language is Prolog. LPs represent a direct application of logic programming principles to problem-solving. Logic programming is characterized by its highly declarative approach, rooted in formal logic, and finds broad application across numerous disciplines.
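
The classic ancestor relation can illustrate the style: in Prolog it would be written as ancestor(X,Y) :- parent(X,Y). and ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z). The sketch below evaluates the same two rules bottom-up, Datalog-style, over invented family facts.

    # Bottom-up evaluation of ancestor/2 over parent/2 facts (illustrative names).
    parent = {("tom", "bob"), ("bob", "ann"), ("ann", "liz")}

    ancestor = set(parent)             # rule 1: parent(X,Y) implies ancestor(X,Y)
    while True:
        # rule 2: parent(X,Y) and ancestor(Y,Z) imply ancestor(X,Z)
        derived = {(x, z) for (x, y) in parent for (y2, z) in ancestor if y == y2}
        if derived <= ancestor:        # fixpoint reached: nothing new to add
            break
        ancestor |= derived

    print(("tom", "liz") in ancestor)  # True: tom -> bob -> ann -> liz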

Rule engines

Rule engines encapsulate conditional logic in the form of discrete rules. These rule sets can be managed and applied independently of other functionalities, making them widely applicable across diverse domains. Many rule engines possess reasoning capabilities, often implementing production systems to support forward or backward chaining. Each rule, or 'production,' links a conjunction of predicate clauses to a sequence of executable actions.

During runtime, the rule engine matches productions against available facts and executes the associated actions for each match. If these actions modify or remove facts, or introduce new ones, the engine immediately recalculates the set of matches. Rule engines are extensively used for modeling and applying business rules, guiding decision-making in automated processes, and enforcing both business and technical policies.
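
A minimal sketch of that match-execute-recompute cycle; the business rules and facts below are invented for illustration.

    # Forward-chaining production cycle: match rules, act, re-match (illustrative rules).
    facts = {"order_total_over_100", "customer_is_member"}
    productions = [
        ({"order_total_over_100", "customer_is_member"}, "apply_10_percent_discount"),
        ({"apply_10_percent_discount"}, "recalculate_invoice"),
    ]

    changed = True
    while changed:                                   # keep cycling while actions add facts
        changed = False
        for conditions, action in productions:
            if conditions <= facts and action not in facts:
                facts.add(action)                    # the action asserts a new fact
                changed = True                       # so the match set must be recomputed

    print(sorted(facts))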

Deductive classifier

Deductive classifiers emerged somewhat later than rule-based systems, becoming a component of a novel category of artificial intelligence knowledge representation tools known as frame languages. A frame language describes a problem domain through a collection of classes, subclasses, and the relationships between them, bearing a resemblance to the object-oriented model. However, unlike object-oriented models, frame languages possess a formal semantics grounded in first-order logic.

This semantics provides the input to the deductive classifier. The classifier can analyze a given model, referred to as an ontology, and determine whether the various relationships described in it are consistent. If the ontology is inconsistent, the classifier will highlight the declarations that are in conflict. If the ontology is consistent, the classifier can then perform further reasoning and draw additional conclusions about the relationships between objects in the ontology.

For instance, it might deduce that a particular object is, in fact, a subclass or instance of additional classes beyond those explicitly stated by the user. Classifiers are a significant technology for analyzing the ontologies used to describe models within the Semantic web. [6] [7]
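
A very simplified sketch of that classification step: if every constraint defining class B is also required by class A, then A can be inferred to be a subclass of B even when that relationship was never stated. The class definitions are invented and far simpler than what a real description-logic classifier handles.

    # Toy classifier: infer subclass relations from class definitions (illustrative).
    # Each class is defined by the set of property constraints its members must satisfy.
    definitions = {
        "Vehicle":      {"has_wheels"},
        "MotorVehicle": {"has_wheels", "has_engine"},
        "Bicycle":      {"has_wheels", "pedal_powered"},
    }

    def classify(defs):
        # A is subsumed by B when A's definition includes every constraint of B's.
        return {(a, b) for a in defs for b in defs
                if a != b and defs[b] <= defs[a]}

    for sub, sup in sorted(classify(definitions)):
        print(f"{sub} is inferred to be a subclass of {sup}")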

Machine learning systems

Machine learning systems adapt their behavior over time based on experience. This process can involve reasoning over observed events or example data provided during training. For example, machine learning systems might employ inductive reasoning to generate hypotheses to explain observed facts. Learning systems search for generalized rules or functions that align with observations and then utilize these generalizations to direct future behavior.
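
As a toy example of such inductive generalization, the following sketch derives the most specific conjunction of attribute values shared by all positive training examples, in the spirit of the Find-S algorithm; the weather data are invented.

    # Inductive generalization over positive examples (Find-S-style, illustrative data).
    training = [
        ({"sky": "sunny", "wind": "strong", "humidity": "high"},   True),
        ({"sky": "sunny", "wind": "strong", "humidity": "normal"}, True),
        ({"sky": "rainy", "wind": "weak",   "humidity": "high"},   False),
    ]

    hypothesis = None
    for example, positive in training:
        if positive:
            if hypothesis is None:
                hypothesis = dict(example)                 # start maximally specific
            else:
                hypothesis = {k: v for k, v in hypothesis.items()
                              if example.get(k) == v}      # drop attributes that disagree

    print(hypothesis)   # {'sky': 'sunny', 'wind': 'strong'}: the induced generalization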

Case-based reasoning systems

Case-based reasoning (CBR) systems offer solutions to problems by analyzing their similarities to previously encountered problems with known solutions. Case-based reasoning operates on the most apparent levels of similarity: the object, feature, and value criteria. This distinguishes it from analogical reasoning, which relies solely on "deep" similarity criteria—that is, relationships or even relationships of relationships—and does not necessarily find similarity at the shallower levels. This fundamental difference makes case-based reasoning applicable only within cases belonging to the same domain, as similar objects, features, and/or values must reside within that domain. Analogical reasoning, conversely, can be applied across domains where only the relationships between cases are similar. CBR systems are frequently employed in customer/technical support and call centre environments, with applications extending to industrial manufacture, agriculture, medicine, law, and numerous other fields.
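
A sketch of the retrieval step, scoring stored cases by shallow feature overlap; the case base and features are invented.

    # Case-based retrieval: pick the stored case most similar to the new problem.
    case_base = [
        ({"symptom": "no_boot", "beeps": 3, "fan": "spinning"},  "reseat_memory"),
        ({"symptom": "no_boot", "beeps": 0, "fan": "stopped"},   "replace_power_supply"),
        ({"symptom": "overheat", "beeps": 0, "fan": "spinning"}, "clean_heatsink"),
    ]

    def similarity(a, b):
        # Shallow similarity: count features whose values match exactly.
        return sum(1 for k in a if a[k] == b.get(k))

    new_problem = {"symptom": "no_boot", "beeps": 3, "fan": "spinning"}
    best_case, solution = max(case_base, key=lambda c: similarity(new_problem, c[0]))
    print(solution)   # "reseat_memory": the closest prior case's known solution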

Procedural reasoning systems

A procedural reasoning system (PRS) employs reasoning techniques to select plans from a procedural knowledge base. Each plan outlines a course of action designed to achieve a specific goal. The PRS implements a belief–desire–intention model by reasoning over established facts ('beliefs') to choose appropriate plans ('intentions') for given goals ('desires'). Typical applications of PRS include management, monitoring, and fault detection systems.
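
A very small sketch of that selection step: given current beliefs and a goal (desire), the system adopts as its intention a plan whose goal matches and whose context conditions hold. The plan library and beliefs are invented for illustration.

    # BDI-style plan selection: pick an applicable plan for the current goal (illustrative).
    beliefs = {"door_closed", "have_key"}
    desire = "enter_room"

    plan_library = [
        {"goal": "enter_room", "context": {"door_open"},
         "body": ["walk_in"]},
        {"goal": "enter_room", "context": {"door_closed", "have_key"},
         "body": ["unlock_door", "open_door", "walk_in"]},
    ]

    # The adopted intention is the first plan whose goal matches and whose context holds.
    intention = next(plan for plan in plan_library
                     if plan["goal"] == desire and plan["context"] <= beliefs)
    print(intention["body"])   # ['unlock_door', 'open_door', 'walk_in']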