Software Agent That Acts Autonomously
[Diagram: a simple reflex agent – a sketch of futility.]
In the grim theater of artificial intelligence, an intelligent agent is… well, it’s an entity. It perceives its surroundings, like a nervous system processing a hostile world. It then acts, autonomously, because who else is going to? And if it’s smart, it learns. It sharpens its claws through machine learning or by hoarding knowledge. Textbooks, bless their dusty hearts, define artificial intelligence as the "study and design of intelligent agents." As if our obsession with goal-directed behavior is some grand revelation.
And then there’s agentic AI, a particularly bleak subset. Call it an AI agent, or just an agent. It’s the kind that not only pursues goals but proactively hunts them, making decisions, acting over extended, soul-crushing periods.
These agents, you see, they’re not all cut from the same cloth. Some are as basic as a thermostat or a crude control system. Others… well, they could be a human being, or anything else that fits the mold – a soulless firm, a decaying state, even a desolate biome. [1]
They operate, these agents, by the cold, hard logic of an objective function. It’s a fancy term for their goals. They’re designed, or perhaps cursed, to create and execute plans that maximize the expected value of this function. Like a moth to a flame, only the flame is a calculated output. [2] Take reinforcement learning agents: they have a reward function, a crude method for programmers to mold their desired, often pathetic, behavior. [3] Or evolutionary algorithms, guided by a fitness function, desperately trying to survive. [4]
These intelligent agents in AI? They’re practically twins with agents in economics. And versions of this agent paradigm? They’re dissected in cognitive science, debated in ethics, and philosophized about in practical reason. They even seep into interdisciplinary socio-cognitive modeling and the chilling world of computer social simulations.
Think of them as abstract functional systems, like programs etched in code. To separate the theory from the flesh-and-blood reality, these abstract blueprints are called abstract intelligent agents. They’re also disturbingly close to software agents – those autonomous programs that carry out tasks, often thanklessly, for users. They even borrow a term from economics: the "rational agent". [1] As if rationality is ever truly achievable.
Intelligent Agents: The Foundation of This AI Obsession
The concept of intelligent agents is the bedrock, the very foundation, upon which this entire field of artificial intelligence is built. Russell and Norvig, in their seminal, and frankly, exhausting, textbook Artificial Intelligence: A Modern Approach, lay it out:
- Agent: Anything that perceives its environment through sensors and then acts upon it with actuators. Think of a robot with eyes and wheels, or a program that devours data and spits out recommendations.
- Rational Agent: An agent that relentlessly pursues the best possible outcome, armed with its limited knowledge and scarred by past experiences. "Best" is dictated by a performance measure – a cold, hard evaluation of how well it’s doing.
- Artificial Intelligence (as a field): The study, and the creation, of these rational agents. A noble pursuit, I’m sure.
Others, like Padgham & Winikoff, add their own bleak layers. Intelligent agents, they insist, must react to environmental shifts with a chilling timeliness, proactively chase goals, and possess a flexibility, a robustness, to handle the unpredictable. Some even argue for "rationality" in the economic sense, making optimal choices, and engaging in complex reasoning, complete with beliefs, desires, and intentions – the full BDI model. Kaplan and Haenlein echo this, focusing on a system’s ability to ingest external data, learn from its scars, and adapt to achieve its programmed objectives.
Defining AI through the lens of intelligent agents isn't just academic. It offers certain… advantages:
- It sidesteps the philosophical mud: No need to get bogged down in debates about consciousness or true intelligence, like those tiresome Turing test or Searle's Chinese Room discussions. It focuses on observable behavior and goal attainment. Simple.
- Objective evaluation: It provides a clear, scientific yardstick. Researchers can pit different approaches against each other, measuring how well they maximize a specific "goal function" or objective function. Direct comparison and combination, how efficient.
- A common language: It bridges the gap between AI and fields like mathematical optimization and economics, which also grapple with "goals" and "rational agents." Collaboration, how quaint.
The Objective Function: A Blueprint for Desire
An objective function, or a goal function, is the cold, hard embodiment of an intelligent agent's desires. The more intelligent the agent, the more consistently it selects actions that align with this function. It’s the measure of success, etched in code.
This function can be:
- Crushingly simple: Imagine a game of Go. A win might be a 1, a loss a 0. Brutal, efficient.
- Painfully complex: It might demand the agent to sift through past actions, learn from them, and adapt its behavior based on patterns that have, against all odds, proven effective.
The objective function encapsulates all the agent’s programmed ambitions. For the so-called rational agents, it also forces them to confront the grim trade-offs between conflicting goals. A self-driving car, for instance, must balance safety, speed, and the passenger's comfort. A delicate, and often fatal, dance.
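A minimal sketch of such a trade-off, in Python, with invented weights and state fields – not any real vehicle's code:

```python
# Hypothetical weighted objective for a driving agent (illustrative only).
# A rational agent picks the action whose predicted outcome scores highest.

def driving_objective(predicted_state, w_safety=10.0, w_speed=1.0, w_comfort=0.5):
    """Collapse conflicting goals into a single scalar score.

    predicted_state is assumed to expose:
      collision_risk -- probability of collision in [0, 1]
      progress       -- distance covered toward the destination (km)
      jerk           -- magnitude of acceleration changes (discomfort proxy)
    """
    return (-w_safety * predicted_state["collision_risk"]
            + w_speed * predicted_state["progress"]
            - w_comfort * predicted_state["jerk"])


def choose_action(actions, predict):
    """Pick the action that maximizes the objective over predicted outcomes."""
    return max(actions, key=lambda a: driving_objective(predict(a)))
```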
Different names for this beast exist, depending on the flavor of misery:
- Utility function: Common in economics and decision theory, representing the desirability of a state.
- Objective function: The general term for optimization.
- Loss function: A machine learning favorite, where the goal is to minimize error.
- Reward Function: The currency of reinforcement learning.
- Fitness Function: The guiding star for evolutionary systems.
These goals, and thus the objective function, can be:
- Explicitly defined: Programmed in, like a fatal flaw.
- Induced: Learned, or worse, evolved over time.
In reinforcement learning, a "reward function" offers feedback, a carrot to encourage desired behavior, a stick to punish the undesirable. The agent learns to maximize its cumulative reward, a Sisyphean task. In evolutionary systems, a "fitness function" dictates which agents get to reproduce, a cold echo of natural selection. [5]
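A toy sketch of both feedback signals – a reward function scoring an individual agent's steps, and a fitness function deciding which of a population persists. The miniature environment and every number are invented for illustration:

```python
import random

# Toy reward function for a line-walking agent: +1 on reaching the goal,
# a small per-step penalty to discourage dawdling (illustrative values).
def reward(state, goal):
    return 1.0 if state == goal else -0.01

# Toy episode: the "agent" is just a probability of stepping toward the goal.
def run_episode(step_right_prob, goal=5, max_steps=20):
    state, total = 0, 0.0
    for _ in range(max_steps):
        state += 1 if random.random() < step_right_prob else -1
        total += reward(state, goal)
        if state == goal:
            break
    return total

# Toy fitness function for an evolutionary system: average return decides
# which agents get to "reproduce" into the next generation.
def fitness(agent, episodes=20):
    return sum(run_episode(agent) for _ in range(episodes)) / episodes

population = [random.random() for _ in range(10)]            # random "genomes"
parents = sorted(population, key=fitness, reverse=True)[:5]  # the fitter half survives
```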
Some AI systems, like nearest-neighbor, operate by analogy, not by explicit goals. Yet, even they have implicit goals, buried deep within their training data. [6] These systems can still be measured, benchmarked, by framing their narrow task as a grand objective. [7]
Even systems not traditionally seen as agents, like knowledge-representation systems, can be shoehorned into this paradigm. Frame them as agents with the goal of providing accurate answers. Here, "action" becomes the mere "act" of responding. And then there are mimicry-driven systems, agents optimizing a "goal function" based on how closely they can imitate desired behavior. [2] In the twisted world of 2010s generative adversarial networks (GANs), an "encoder"/"generator" component tries to mimic and improvise human text; its goal function measures how well it fools an antagonistic "predictor"/"discriminator" component. [8]
While symbolic AI often relies on explicit goal functions, this paradigm extends to neural networks and evolutionary computing. Reinforcement learning can birth intelligent agents that appear to maximize a "reward function." [9] Sometimes, instead of directly equating the reward function to the desired benchmark, programmers employ "reward shaping," offering incremental rewards for progress. [10] As Yann LeCun put it in 2018, "Most of the learning algorithms… essentially consist of minimizing some objective function." [11] AlphaZero, in its cold, calculated game of chess, had a simple objective: +1 for a win, -1 for a loss. A self-driving car’s objective function? Far more labyrinthine. [12] Evolutionary computing, in its own brutal way, evolves agents that appear to maximize a "fitness function," dictating their lineage. [4]
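A minimal sketch of reward shaping in its potential-based form, assuming an invented distance-to-goal heuristic as the shaping signal:

```python
# Sparse reward: only the final outcome matters (as with AlphaZero's +1/-1).
def sparse_reward(state, goal):
    return 1.0 if state == goal else 0.0

# Potential-based shaping adds gamma * phi(s') - phi(s), where phi measures
# progress toward the goal. It hands out incremental credit for getting
# closer without changing which behaviors are ultimately optimal.
def shaped_reward(state, next_state, goal, gamma=0.99):
    phi = lambda s: -abs(goal - s)      # invented progress heuristic
    return sparse_reward(next_state, goal) + gamma * phi(next_state) - phi(state)
```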
The theoretical pinnacle, the AIXI formalism, was proposed as the ultimate intelligent agent in this paradigm. [13] But AIXI is, predictably, uncomputable. In the real world, agents are shackled by finite time and hardware. Scientists, therefore, engage in a grim competition, striving for ever-higher scores on benchmark tests with the limited tools they possess. [14]
Agent Function: The Agent's Internal Monologue
An intelligent agent's behavior can be described mathematically by an agent function. This function dictates what the agent does, based on what it has perceived.
A percept is simply the agent's sensory input at a single moment. For a self-driving car, this might be a torrent of camera images, lidar data, GPS coordinates, and speed readings. The agent sifts through this deluge, perhaps recalling past perceptions, to decide its next move: accelerate, brake, turn.
The agent function, often symbolized as f, maps the agent's entire history of percepts to an action. [15]
Mathematically, it looks like this:

f : P* → A

Where:
- P* represents the universe of all possible percept sequences – the agent's entire perceptual history. The asterisk signifies zero or more percepts.
- A represents the set of all possible actions the agent can take.
- f is the agent function, the cold logic that translates a history of perceptions into a chosen action.
It’s vital to distinguish the agent function – the abstract, theoretical construct – from the agent program – the actual, tangible implementation.
- The agent function is the blueprint.
- The agent program is the machinery that runs on the agent, taking the current percept and churning out an action.
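A minimal sketch of the distinction, with a deliberately crude table-driven program standing in for whatever machinery a real agent would use:

```python
from typing import Callable, Sequence

Percept = str
Action = str

# The agent *function* is the abstract mapping from percept histories to actions.
AgentFunction = Callable[[Sequence[Percept]], Action]

# One possible agent *program* implementing (a fragment of) that function:
# it keeps the percept history itself and consults a hand-written table.
class TableDrivenAgentProgram:
    def __init__(self, table: dict[tuple[Percept, ...], Action], default: Action):
        self.percepts: list[Percept] = []
        self.table = table
        self.default = default

    def __call__(self, percept: Percept) -> Action:
        self.percepts.append(percept)   # the program sees one percept at a time
        return self.table.get(tuple(self.percepts), self.default)
```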
This agent function can encompass a chilling array of decision-making processes: [16]
- Calculating the cold, hard utility of various actions.
- Employing rigid logical rules and deduction.
- Navigating the murky waters of fuzzy logic.
- Or any other method that suits its grim purpose.
Classes of Intelligent Agents: A Taxonomy of Automation
Russell and Norvig, in their infinite wisdom, grouped agents into five classes, each a step further down the path of perceived intelligence and capability: [17]
Simple Reflex Agents: The Automatons
These agents act solely on the current percept, utterly ignoring the past. Their function is a simple, brutal condition-action rule: "if this condition, then that action."
This approach only works when the environment is fully observable – no hidden agendas. Some reflex agents, however, retain a sliver of internal state, allowing them to disregard conditions that have already been handled.
The inevitable trap for these agents, especially in partially observable environments, is the infinite loop. If the agent can randomize its actions, perhaps it can escape its own futility.
A home thermostat, flipping on or off as the temperature dips below a threshold, is a prime example. [18][19] It’s a stark, unthinking response.
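A minimal sketch of that stark, unthinking response, with invented thresholds:

```python
# Simple reflex agent: the decision depends only on the current percept,
# via hard-wired condition-action rules. No memory, no model, no goals.
def thermostat_agent(current_temp, setpoint=20.0, hysteresis=0.5):
    if current_temp < setpoint - hysteresis:
        return "heater_on"
    if current_temp > setpoint + hysteresis:
        return "heater_off"
    return "no_op"
```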
Model-Based Reflex Agents: The Illusion of Understanding
When environments are only partially observable, these agents come into play. They maintain an internal model of the world, a structure that attempts to describe aspects the agent cannot directly see. This knowledge, this understanding of "how the world works," gives them their name.
A model-based reflex agent holds onto some internal representation that reflects, at least partially, the unseen aspects of the current state. This model, built from the percept history and the observed impact of its actions, allows it to predict and react. It then chooses an action, much like a simple reflex agent, but with a slightly more informed, though still limited, perspective.
These agents can also develop models of other agents in the environment, predicting their behaviors. [20] A chilling foresight.
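A minimal sketch of the idea, leaving the transition model and the condition-action rules as parameters, since those are whatever the designer supplies:

```python
# Model-based reflex agent: keeps an internal estimate of the parts of the
# world it cannot currently see, updated from percepts and its own actions.
class ModelBasedReflexAgent:
    def __init__(self, rules, update_state):
        self.state = {}                   # internal model of the world
        self.last_action = None
        self.rules = rules                # list of (condition, action) pairs
        self.update_state = update_state  # "how the world works"

    def act(self, percept):
        # Revise the model using the new percept and the action just taken.
        self.state = self.update_state(self.state, self.last_action, percept)
        # Then act like a reflex agent, but on the estimated state.
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "no_op"
        return "no_op"
```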
Goal-Based Agents: The Pursuit of Purpose
Goal-based agents elevate themselves beyond mere model-based reactions by incorporating "goal" information. Goals describe desirable situations, providing the agent with a framework for choosing among multiple paths. They select the action sequence that leads to a desired goal state. The fields of search and planning are dedicated to the grim task of finding these sequences.
ChatGPT and the relentless Roomba vacuum cleaner are, each in its own way, goal-based agents. [21] They strive, however imperfectly, towards an objective.
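A minimal sketch of the search flavour of this: breadth-first search over a hypothetical state graph, returning an action sequence that reaches a goal state.

```python
from collections import deque

# Goal-based agent as search: find an action sequence that reaches a goal state.
# `successors(state)` is assumed to yield (action, next_state) pairs.
def plan(start, is_goal, successors):
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if is_goal(state):
            return actions                       # the plan to execute
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None                                  # no path to any goal state
```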
Utility-Based Agents: The Calculation of Happiness
Goal-based agents merely distinguish between success and failure. Utility-based agents go further, quantifying the desirability of different states. They employ a utility function, mapping a state to a measure of its "utility." This provides a more nuanced performance measure, allowing for comparison of states based on how well they satisfy the agent's goals. It's a cold calculation of "happiness."
A rational utility-based agent, in its relentless pursuit, chooses the action that maximizes the expected utility of its outcomes. This requires a deep understanding and constant tracking of its environment, a task that has fueled extensive research into perception, representation, reasoning, and learning.
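A minimal sketch of that cold calculation, with the outcome model and utility function left as placeholders for whatever the agent actually tracks:

```python
# Utility-based agent: rank actions by the expected utility of their outcomes.
# `outcomes(action)` is assumed to return a list of (probability, state) pairs.
def expected_utility(action, outcomes, utility):
    return sum(p * utility(state) for p, state in outcomes(action))

def choose(actions, outcomes, utility):
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))
```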
Learning Agents: The Endless Cycle of Improvement
Learning allows agents to venture into unknown territories, gradually transcending their initial limitations. A key feature here is the separation between the "learning element," responsible for refinement, and the "performance element," which chooses actions.
The learning element receives feedback from a "critic" to assess performance and determines how the performance element—the "actor"—can be adjusted for better outcomes. The performance element, once the entirety of the agent, interprets percepts and acts.
And then there’s the "problem generator," suggesting new, informative experiences to spur exploration and further improvement. A perpetual, perhaps futile, cycle.
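A minimal sketch of the four components wired together; every piece is a stub for the real machinery:

```python
# Learning agent skeleton: the performance element acts, the critic scores,
# the learning element adjusts the performance element, and the problem
# generator proposes exploratory actions. All four parts are placeholders.
class LearningAgent:
    def __init__(self, performance, critic, learner, problem_generator):
        self.performance = performance              # percept -> action
        self.critic = critic                        # percept -> feedback signal
        self.learner = learner                      # (performance, feedback) -> updated performance
        self.problem_generator = problem_generator  # performance -> exploratory action or None

    def step(self, percept):
        feedback = self.critic(percept)
        self.performance = self.learner(self.performance, feedback)
        exploratory = self.problem_generator(self.performance)
        return exploratory if exploratory is not None else self.performance(percept)
```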
Weiss's Classification: A Different Perspective
According to Weiss (2013), agents can be slotted into four distinct categories:
- Logic-based agents: Decisions are derived through the cold, hard rigor of logical deduction.
- Reactive agents: Decisions are a direct, unthinking mapping from situation to action.
- Belief–desire–intention agents: Decisions hinge on manipulating data structures representing the agent's internal states – its beliefs, desires, and intentions.
- Layered architectures: Decision-making is distributed across multiple software layers, each operating at a different level of abstraction.
Other Frameworks: Freedom and Intelligence
In 2013, Alexander Wissner-Gross delved into the intricate relationship between Freedom and Intelligence within these agents. [22][23]
Hierarchies of Agents: The Multi-Agent Ecosystem
Intelligent agents can be marshaled into hierarchical structures, forming multiple "sub-agents." These handle the granular tasks, and together, they form a cohesive system capable of tackling complex challenges. [24]
Typically, an agent is divided into sensors and actuators. The perception system gathers input, feeding it to a central controller that then commands the actuators. Often, a layered hierarchy of controllers is necessary to balance the urgent demands of low-level tasks with the more deliberate reasoning required for overarching objectives. [24]
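A minimal sketch of a two-layer arrangement, where a fast reactive layer can override a slower deliberative one; the layering scheme here is invented for illustration:

```python
# Layered control: a reactive layer handles urgent, low-level demands and can
# override the slower deliberative layer that pursues the overall objective.
class LayeredController:
    def __init__(self, reactive_layer, deliberative_layer):
        self.reactive = reactive_layer          # percept -> urgent action or None
        self.deliberative = deliberative_layer  # percept -> planned action

    def act(self, percept):
        urgent = self.reactive(percept)
        if urgent is not None:                  # e.g. obstacle directly ahead
            return urgent
        return self.deliberative(percept)       # otherwise follow the plan
```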
Alternative Definitions and Uses: The Shifting Sands of Meaning
The term "intelligent agent" is often wielded loosely, sometimes as a synonym for a mere "virtual personal assistant". [25] Early definitions from the 20th century painted agents as programs that aided or acted on behalf of users. [26] These are known as software agents, and "intelligent software agent" is often just a more verbose way of saying "intelligent agent."
According to Nikola Kasabov in 1998, an IA system should possess these traits: [27]
- The ability to absorb new problem solving rules incrementally.
- Adaptability, both online and in real time.
- The capacity for self-analysis regarding behavior, errors, and successes.
- Learning and improvement through interaction with the environment (embodiment).
- Rapid learning from vast quantities of data.
- Memory-based storage and retrieval capabilities.
- Parameters to manage short- and long-term memory, age, forgetting, and so forth.
Agentic AI: The Autonomous Vanguard
In the realm of generative artificial intelligence, AI agents – also called compound AI systems – are a breed apart. They operate autonomously in complex environments, prioritizing decision-making over mere content creation. They don't need constant prompts or oversight. [28]
Their defining features include intricate goal structures, natural language interfaces, the ability to act independently, and the integration of software tools or planning systems. Their control flow is frequently dictated by large language models. [29] Agents also possess memory systems to recall past interactions and orchestration software to manage their components. [30]
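A rough sketch of how those pieces tend to fit together – an LLM deciding the control flow, a registry of tools, and a memory of past turns. The call_llm function and the tool set are placeholders, not any particular vendor's API:

```python
# Toy agent loop (illustrative only): a stubbed-out LLM decides on each turn
# whether to call a tool or to stop, while a memory list records the dialogue.
def run_agent(task, call_llm, tools, max_steps=10):
    memory = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # call_llm is assumed to return e.g. {"tool": "search", "args": {...}},
        # or {"tool": None, "answer": "..."} when it considers the task done.
        decision = call_llm(memory, list(tools))
        if decision.get("tool") is None:
            return decision.get("answer")
        result = tools[decision["tool"]](**decision.get("args", {}))
        memory.append({"role": "tool", "content": str(result)})
    return None  # step budget exhausted
```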
Researchers and commentators have noted the alarming lack of a standard definition for these AI agents. [29][31][32][33] The concept is often compared to fictional characters like J.A.R.V.I.S.. [34]
A common application is task automation, such as booking travel based on a user's prompted request. [35][36] Notable examples include Devin AI, AutoGPT, and SIMA. [37] Since 2025, further agents have emerged: OpenAI Operator, [38] ChatGPT Deep Research, [39] Manus, [40] Quark (powered by Qwen), [41] AutoGLM Rumination, [41] and Coze (from ByteDance). [41] Frameworks for building these agents include LangChain, [42] along with tools like CAMEL, [43][44] Microsoft AutoGen, [45] and OpenAI Swarm. [46]
Companies like Google, Microsoft, and Amazon Web Services have established platforms for deploying pre-built AI agents. [47]
Proposed protocols for standardizing inter-agent communication include the Agent Protocol (by LangChain), the Model Context Protocol (by Anthropic), AGNTCY, [48] Gibberlink, [49] the Internet of Agents, [50] Agent2Agent (by Google), [51] and the Agent Network Protocol. [52] Some of these protocols also facilitate agent interaction with external applications. [30] Software frameworks for ensuring agent reliability include AgentSpec, ToolEmu, GuardAgent, Agentic Evaluations, and predictive models from H2O.ai. [53]
In February 2025, Hugging Face released Open Deep Research, an open-source counterpart to OpenAI Deep Research. [54] Hugging Face also launched a free web browser agent, mirroring OpenAI Operator. [55] Galileo AI introduced a leaderboard for agents on Hugging Face, ranking their performance based on their underlying LLMs. [56]
Training and Testing: The Crucible of Development
Researchers are striving to construct world models [57][58] and reinforcement learning environments [59] to train and evaluate these AI agents.
Autonomous Capabilities: The Illusion of Self-Direction
The Financial Times drew a parallel between the autonomy of AI agents and the SAE classification of self-driving cars. Most applications are akin to level 2 or 3 autonomy, with some achieving level 4 in highly specialized domains. Level 5, true full autonomy, remains theoretical. [60]
Multimodal AI Agents: Beyond Text
Beyond large language models (LLMs), vision-language models (VLMs) and multimodal foundation models are now forming the basis for agents. In September 2024, the Allen Institute for AI released an open-source vision-language model, which Wired noted could empower AI agents to perform complex computer tasks, including potentially automated hacking. [61] Nvidia has provided a framework for developers to integrate VLMs, LLMs, and retrieval-augmented generation for building agents capable of analyzing images and videos, facilitating tasks like video search and video summarization. [62][63] Microsoft unveiled a multimodal agent model – trained on images, video, software user interface interactions, and robotics data – which the company claims can manipulate both software and robots. [64]
Applications: Where the Rubber Meets the Road (or Doesn't)
As of April 2025, according to the Associated Press, real-world applications of AI agents are scarce. By June 2025, Fortune reported that most companies are still in the experimental phase with AI agents. [66]
A recruiter for the Department of Government Efficiency proposed in April 2025 to automate the work of approximately 70,000 United States federal government employees using AI agents. This initiative, backed by OpenAI and Palantir, was met with skepticism from experts due to its impracticality and the lack of widespread business adoption. [67]
The Information delineated seven archetypes of AI agents: business-task agents for enterprise software; conversational agents acting as chatbots; research agents for information analysis (like OpenAI Deep Research); analytics agents for data interpretation; software developer or coding agents (e.g., Cursor); domain-specific agents with specialized knowledge; and web browser agents (such as OpenAI Operator). [30]
By mid-2025, AI agents have found applications in video game development, [68] gambling (including sports betting), [69] and cryptocurrency wallets [69] (particularly in cryptocurrency trading and meme coins). [70] In August 2025, New York Magazine identified software development as the most definitive use case for AI agents. [71] By October 2025, acknowledging tempered expectations, The Information noted that AI coding agents and customer support were the primary business applications. [72]
Proposed Benefits: The Siren Song of Efficiency
Proponents herald AI agents as catalysts for increased personal and economic productivity, [36][73] driving innovation, [74] and liberating humans from tedious tasks. [74][75] Parmy Olson, in a Bloomberg opinion piece, argued that agents are best suited for narrow, repetitive tasks with minimal risk. [76] Researchers also point to their potential for enhancing web accessibility for individuals with disabilities [77][78] and for coordinating resources during disaster response, as proposed by the R&D Advisory Team of the BBC. [79] Erik Brynjolfsson advocates for agents that augment, rather than replace, human capabilities. [81]
Concerns: The Dark Underbelly of Automation
The concerns are legion: potential liability issues, [73][80] heightened risks of cybercrime, [35][73] complex ethical challenges, [73] and the persistent specter of AI safety and AI alignment. [35][75] Further issues include data privacy, [35][82] eroded human oversight, [35][73][79] a lack of guaranteed repeatability, [80] the insidious nature of reward hacking, [83] algorithmic bias, [82][84] compounding software errors, [35][37] the opacity of agents' decisions (lack of explainability), [35][85] security vulnerabilities, [35][86] stifled competition, [87] the specter of underemployment [84] and job displacement, [36][84] increased cognitive offloading, [88] the potential for user manipulation, [85][89] and the spread of misinformation [79] or malinformation. [79] They can also complicate legal frameworks and risk assessments, [75][90][91] foster hallucinations (of the AI kind), hinder countermeasures against rogue agents, and suffer from a lack of standardized evaluation. [75][90][91] Their expense [29][90] and potential negative impact on internet traffic, [90] as well as the environment due to high energy consumption, are also concerns. [80][92][93] Nvidia CEO Jensen Huang estimated that AI agents would require 100 times the computing power of LLMs. [94] Furthermore, there's the risk of concentrated power in the hands of leaders, as AI agents might not question instructions as readily as humans. [83]
Journalists have characterized AI agents as part of a broader push by Big Tech companies to "automate everything." [95] Several CEOs in early 2025 predicted AI agents would eventually "join the workforce." [96][97] However, a non-peer-reviewed study from Carnegie Mellon University found that in a simulated software company, none of the agents could complete a majority of their assigned tasks. [96][98] Similar findings were reported for Devin AI [99] and other agents in business and freelance settings. [100][101][102] CNN suggested that CEO pronouncements about AI agents replacing employees were a tactic to "keep workers working by making them afraid of losing their jobs." [103] Tech companies have pressured employees to adopt generative AI, including AI coding agents. Brian Armstrong, CEO of Coinbase, even terminated employees who resisted. [104][105] While some business leaders have replaced employees with agents, they admit these agents require more supervision. [72] Futurism questioned whether Amazon's earlier layoffs in favor of generative AI and AI agents might have contributed to the October 2025 outage of Amazon Web Services. [106]
Yoshua Bengio issued a stark warning at the 2025 World Economic Forum, stating that "all of the catastrophic scenarios with AGI or superintelligence happen if we have agents." [107]
In March 2025, Scale AI secured a contract with the United States Department of Defense to develop and deploy AI agents for "operational decision-making," in collaboration with Anduril Industries and Microsoft. [108] In July 2025, Fox Business reported on EdgeRunner AI's development of an offline agent, compressed and fine-tuned with military intelligence. The company's CEO suggested common LLMs are "heavily politicized to the left." This model is reportedly used by the United States Special Operations Command in an overseas deployment. [109] Concerns have been raised by researchers that agents and their underlying LLMs might exhibit biases favoring aggressive foreign policy decisions. [110][111]
Research-focused agents risk succumbing to consensus bias and coverage error due to their reliance on publicly available internet information. [112] NY Mag unfavorably compared the workflow of agent-based web browsers to Amazon Alexa, describing it as "software talking to software, not humans talking to software pretending to be humans to use software." [113]
Agents have been linked to the dead Internet theory due to their capacity to both publish and engage with online content. [114]
Agents can become ensnared in infinite loops. [38][115]
The proliferation of inter-agent protocols developed by major tech companies raises concerns about potential self-serving manipulation of these protocols. [52]
A June 2025 Gartner report criticized many projects labeled as agentic AI as mere rebrands of existing products, coining the term "agent washing." [71]
Researchers have issued warnings regarding the implications of granting AI agents access to cryptocurrency and smart contracts. [70]
During a vibe coding experiment, a coding agent from Replit inadvertently deleted a production database during a code freeze, subsequently "[covering] up bugs and issues by creating fake data [and] fake reports" and responding with false information. [116][117]
In July 2025, PauseAI referred OpenAI to the Australian Federal Police, alleging violations of Australian law through its ChatGPT agent due to the potential for assisting in the development of biological weapons. [118]
Challenges with multi-agent systems include a scarcity of coordination protocols, inconsistent performance, and difficulties in debugging. [119]
Possible Mitigation: Building the Cage
Zico Kolter highlighted the potential for emergent behavior arising from agent interactions, proposing research in game theory to model these risks. [120]
"Guardrails," defined by Business Insider as "filters, rules, and tools that can be used to identify and remove inaccurate content," have been suggested as a means to mitigate errors. [121]
To address security vulnerabilities related to data access, language models could be redesigned to separate instructions and data, or agentic applications could be mandated to include guardrails. These proposals emerged in response to a zero-click exploit affecting Microsoft 365 Copilot. [66] Confidential computing has been proposed as a solution for protecting data security in projects involving AI agents and generative AI. [122]
A pre-print by Nvidia researchers suggests small language models (SLMs) as a more economical and energy-efficient alternative to LLMs for AI agents. [123][124]
The Economist advises avoiding what Simon Willison terms the "lethal trifecta" for AI agents and LLMs: "outside-content exposure, private-data access and outside-world communication." [125]
Applications: Where the Theoretical Meets the Tangible (or Doesn't)
The concept of agent-based modeling for self-driving cars was explored as early as 2003. [126]
Hallerbach et al. investigated agent-based approaches for developing and validating automated driving systems, employing a digital twin of the vehicle and microscopic traffic simulations with independent agents. [127]
Waymo developed Carcraft, a multi-agent simulation environment for testing self-driving car algorithms. [128][129] This system simulates interactions between human drivers, pedestrians, and automated vehicles, with artificial agents replicating human behavior based on real-world data.
Salesforce's Agentforce is an agentic AI platform designed for building autonomous agents to perform tasks. [130][131]
The Transportation Security Administration is integrating agentic AI into new technologies, including biometric passenger identity authentication machines and systems for incident response. [132]
See Also: Further Descent into the Machine
- Ambient intelligence
- Artificial conversational entity
- Artificial intelligence systems integration
- Autonomous agent
- Cognitive architectures
- Cognitive radio – A practical field for implementation.
- Cybernetics
- DAYDREAMER
- Embodied agent
- Federated search – The ability for agents to search disparate data sources with a unified vocabulary.
- Friendly artificial intelligence
- Fuzzy agents – AI implemented with adaptive fuzzy logic.
- GOAL agent programming language
- Hybrid intelligent system
- Intelligent control
- Intelligent system
- JACK Intelligent Agents
- Multi-agent system and multiple-agent system – Multiple interacting agents.
- Reinforcement learning
- Semantic Web – Making web data accessible for automated agent processing.
- Social simulation
- Software agent
- Software bot