
Security Level

This article is about strength in cryptography. For business security policy, see security level management.

Measure of cryptographic strength

In cryptography, the security level is a measure of the strength that a cryptographic primitive (such as a cipher or a hash function) is designed to achieve against adversaries. It is usually expressed as a number of "bits of security", also called "security strength": n-bit security means that an attacker would have to perform roughly 2^n operations to break the primitive. This gives a concrete, quantifiable benchmark for the computational effort needed to defeat a cryptographic protection. The 2^n operation-count model is the prevailing convention because of its simplicity, but other methods have been proposed that model the cost to an actual attacker more precisely, accounting not just for raw operation counts but also for time, memory, and specific attack vectors.

Expressing security in bits allows a direct comparison between different algorithms, which is especially useful when building a hybrid cryptosystem that combines several primitives: knowing each component's security level helps ensure that no single primitive becomes the weakest link that undermines the whole construction. For example, AES-128 (the Advanced Encryption Standard with a 128-bit key) is designed to offer a 128-bit security level, which is considered roughly equivalent to that of RSA with a 3072-bit key, a difference that reflects the distinct computational problems underlying symmetric and asymmetric cryptography.

In this context, the "security claim" or "target security level" is the security level that a primitive was initially designed to achieve, although "security level" is also commonly used for the same idea. When attacks are found that cost less than the security claim, the primitive is considered broken; this signals a practical erosion of its intended strength and usually prompts a re-evaluation of its suitability for secure applications.

In symmetric cryptography

Symmetric algorithms usually come with a strictly defined security claim. For most symmetric ciphers it is equal to the cipher's key size, because the generic attack against them is brute force: trying every possible key until the correct one is found. The cost of this attack scales with the key space, so an n-bit key takes up to 2^n operations to exhaust. A 128-bit symmetric cipher with no other weaknesses therefore offers 128 bits of security, with a successful brute-force attack requiring about 2^127 attempts on average.
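
The following sketch makes the brute-force cost concrete on a deliberately tiny key space. The function toy_encrypt is a hypothetical stand-in for a real cipher, and 20 bits are used so the loop finishes quickly; a real 128-bit key space (2^128 candidates) is far beyond any exhaustive search.

```python
def toy_encrypt(key: int, plaintext: int) -> int:
    # Insecure placeholder "cipher"; only here to make the search loop concrete.
    return ((plaintext ^ key) * 0x9E3779B1) % (1 << 20)

def brute_force(plaintext: int, ciphertext: int, key_bits: int = 20):
    # Tries every candidate key: 2**key_bits trials in the worst case,
    # roughly half that on average.
    for candidate in range(1 << key_bits):
        if toy_encrypt(candidate, plaintext) == ciphertext:
            return candidate
    return None

secret_key = 0xBCDE1                      # unknown to the attacker
ct = toy_encrypt(secret_key, 0x12345)
print(brute_force(0x12345, ct))           # prints 773601 == 0xBCDE1
```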

Cryptographic hash functions also have well-defined security claims, split across different attack goals. For a hash function with an n-bit output, collision resistance is rated at n/2 bits. The halving is due to the birthday attack, which exploits the birthday paradox: it is far easier to find two inputs that produce the same hash output (a collision) than to find an input for a specific output, and a collision can be found in about 2^(n/2) operations rather than 2^n. Preimage resistance, by contrast, is rated at n bits; it covers both finding an input that maps to a given output (first preimage) and finding a different input that produces the same output as a given input (second preimage). SHA-256, with its 256-bit output, accordingly offers 128-bit collision resistance and 256-bit preimage resistance.
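
A minimal sketch of a birthday-style collision search, using SHA-256 truncated to 32 bits as a stand-in for an "n-bit hash" (the truncation and the 32-bit size are choices made here so the search finishes quickly). A collision on the truncated value is expected after roughly 2^16 evaluations, about 65,000, rather than 2^32.

```python
import hashlib
import itertools

def trunc_hash(data: bytes, bits: int = 32) -> int:
    # SHA-256 truncated to the top `bits` bits: a toy n-bit hash.
    full = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return full >> (256 - bits)

def find_collision(bits: int = 32):
    # Hash counter values until two distinct inputs share a truncated hash;
    # by the birthday paradox this takes on the order of 2**(bits/2) tries.
    seen = {}
    for counter in itertools.count():
        msg = counter.to_bytes(8, "big")
        h = trunc_hash(msg, bits)
        if h in seen:
            return seen[h], msg
        seen[h] = msg

m1, m2 = find_collision()
print(m1.hex(), m2.hex(), trunc_hash(m1) == trunc_hash(m2))
```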

Some algorithms deviate from these straightforward equivalences. The Phelix and Helix ciphers, for instance, are 256-bit ciphers designed to offer only a 128-bit security level, a deliberate trade-off that favors performance while still targeting a sufficiently high security level. The SHAKE variants of SHA-3 also differ: for a 256-bit output size, SHAKE-128 provides a 128-bit security level for both collision and preimage resistance, unlike fixed-output hash functions such as SHA-256. These exceptions underscore that the precise security claim of any primitive is ultimately determined by its specific design and the best known attacks against it.
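
A small illustration of the SHAKE behaviour described above, using Python's hashlib (the message is arbitrary). Both calls below produce 256 bits of output, yet SHAKE-128's claim stays at 128 bits for collision and preimage resistance, while SHA-256 claims 128-bit collision and 256-bit preimage resistance.

```python
import hashlib

msg = b"security level example"

shake_out = hashlib.shake_128(msg).hexdigest(32)   # request 32 bytes = 256 bits
sha256_out = hashlib.sha256(msg).hexdigest()       # fixed 256-bit output

print(len(shake_out) * 4, "output bits from SHAKE-128 (128-bit security claim)")
print(len(sha256_out) * 4, "output bits from SHA-256 (256-bit preimage claim)")
```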

In asymmetric cryptography

Most asymmetric algorithms (public-key cryptography) are built on mathematical problems that are efficient to compute in one direction but infeasible to reverse without a secret piece of information, the private key. A crucial difference from symmetric cryptography is that the best known attacks against current public-key systems are invariably faster, often much faster, than a brute-force search of the key space. The security level of an asymmetric algorithm is therefore not fixed at design time; it is a computational hardness estimate that is adjusted as more powerful attacks become known, reflecting an ongoing contest between cryptographers and cryptanalysts.

Over time, various authoritative bodies have published recommendations, providing estimates for the security levels offered by different asymmetric algorithms. These recommendations, while generally consistent, can exhibit slight differences due to variations in their underlying methodologies and risk assessments.

  • For the RSA cryptosystem at a 128-bit security level, NIST (National Institute of Standards and Technology) and ENISA (European Union Agency for Network and Information Security) recommend 3072-bit keys, while the IETF (Internet Engineering Task Force) recommends 3253 bits. The conversion from key length to a security level estimate is based on the complexity of the general number field sieve (GNFS), the most efficient known algorithm for factoring large integers, which is the mathematical problem RSA's security relies on; a rough numeric sketch of this conversion, together with the elliptic-curve rule below, follows this list.

  • Diffie–Hellman key exchange and the Digital Signature Algorithm (DSA) are in a similar position to RSA when translating key length into a security level estimate. Their security rests on the presumed difficulty of discrete logarithm problems in finite fields, so the same kind of attack complexity (the number field sieve variant for discrete logarithms) largely dictates the key sizes needed for equivalent security.

  • Elliptic curve cryptography (ECC) requires much shorter keys for equivalent security. For a 128-bit target, NIST suggests key sizes of 256 to 383 bits, ENISA 256 bits, and the IETF 242 bits. The security level of an elliptic curve key of size f is approximately f/2, because the most effective attack on the elliptic curve discrete logarithm problem, Pollard's rho method, completes in roughly 0.886 · √(2^f) group additions, an effective strength of about f/2 bits. This efficiency is why ECC has become a preferred choice in many modern protocols, offering strong security with a reduced computational footprint.
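
The sketch below gives back-of-the-envelope versions of both conversions mentioned above: the GNFS asymptotic cost for an RSA modulus (with the o(1) term dropped, so it lands slightly above the published table values) and the f/2 rule for elliptic curves derived from Pollard's rho running in about 0.886 · √(2^f) additions. It is a rough heuristic only, not a replacement for the NIST, ENISA, or IETF recommendations.

```python
import math

def rsa_security_bits(modulus_bits: int) -> float:
    # GNFS heuristic: exp(((64/9)^(1/3)) * (ln n)^(1/3) * (ln ln n)^(2/3)),
    # ignoring the o(1) term, converted from a natural-log work factor to bits.
    ln_n = modulus_bits * math.log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)

def ecc_security_bits(field_bits: int) -> float:
    # Pollard's rho: about 0.886 * sqrt(2**f) additions, i.e. roughly f/2 bits.
    return field_bits / 2 + math.log2(0.886)

for k in (1024, 2048, 3072, 7680, 15360):
    print(f"RSA-{k}: ~{rsa_security_bits(k):.0f} bits")
for f in (160, 224, 256, 384, 512):
    print(f"ECC with f = {f}: ~{ecc_security_bits(f):.0f} bits")
```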

Typical levels

The following table, from section 5.6.1.1 of US NIST Special Publication 800-57, "Recommendation for Key Management", gives typical security levels for common types of cryptographic algorithm. It is a standard reference for selecting key lengths that provide a consistent security level across a system.

Comparable Algorithm Strengths
Security bits | Symmetric key | Finite field / discrete logarithm (DSA, DH, MQV) | Integer factorization (RSA) | Elliptic curve (ECDSA, EdDSA, ECDH, ECMQV)
80            | 2TDEA [a]     | L = 1024, N = 160                                | k = 1024                    | 160 ≤ f ≤ 223
112           | 3TDEA [a]     | L = 2048, N = 224                                | k = 2048                    | 224 ≤ f ≤ 255
128           | AES-128       | L = 3072, N = 256                                | k = 3072                    | 256 ≤ f ≤ 383
192           | AES-192       | L = 7680, N = 384                                | k = 7680                    | 384 ≤ f ≤ 511
256           | AES-256       | L = 15360, N = 512                               | k = 15360                   | f ≥ 512
  • [a] DES and its two-key triple DES variant (2TDEA) have been deprecated by NIST and are inadequate for new applications requiring robust security. Three-key triple DES (3TDEA) still provides 112 bits of security and sees some legacy use, but modern deployments overwhelmingly favor AES.

NIST recommends that a key of a given security level should only be transported or protected under an algorithm that provides an equivalent or higher security level. Allowing a high-security key to be protected by a weaker mechanism defeats the purpose: the overall scheme is limited by the strength of the weakest algorithm in the chain.

The quoted security level also refers to the cost of breaking a single target, not the amortized cost of breaking a group of targets. Recovering one AES-128 key requires about 2^128 operations, and breaking m independent AES-128 keys by brute force still costs about 2^128 operations per key. In contrast, breaking m ECC keys with the rho method requires only about √m times the cost of breaking a single key, so the amortized per-key cost falls as the number of targets grows. The efficiency of an attack can therefore depend strongly on the scope of the attacker's objective, a nuance that is often lost in simplified security discussions.
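
A short numeric sketch of the scaling described above, under the stated assumptions (no batch speedup for AES brute force, a √m total cost for the rho method against m keys on a curve of roughly 256 bits):

```python
import math

single_aes = 2 ** 128            # single-target brute-force cost
single_ecc = 0.886 * 2 ** 128    # rho cost for one key on a ~256-bit curve

for m in (1, 2 ** 10, 2 ** 20):
    aes_per_key = single_aes                      # total m * 2**128, so 2**128 per key
    ecc_per_key = math.sqrt(m) * single_ecc / m   # total ~ sqrt(m) * base, divided by m
    print(f"m = {m}: AES ~2^{math.log2(aes_per_key):.1f} per key, "
          f"ECC ~2^{math.log2(ecc_per_key):.1f} per key")
```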

Meaning of "broken"

A cryptographic primitive is considered broken as soon as an attack is discovered that requires less computational effort than its advertised security level. Being broken does not always mean an attack is immediately practical, but most attacks that have actually been demonstrated take fewer than 2^40 operations, which translates to a few hours on an average personal computer and is well within the reach of determined adversaries.

For a more extreme example, the costliest demonstrated attack on a hash function to date is the 2^61.2 chosen-prefix collision attack against SHA-1. It took two months of continuous computation on a cluster of 900 GTX 970 GPUs and cost about US$75,000, although the researchers estimated that an ordinary collision (not chosen-prefix) could have been found for as little as US$11,000. These figures give a glimpse of the economics of large-scale cryptanalysis: expensive, but not out of reach for well-resourced entities.

Given this spectrum of "brokenness", Jean-Philippe Aumasson proposed a more nuanced terminology to distinguish the practical implications of different attacks, placing the threshold for practical attacks at roughly 2^80 operations; the sketch after the following list expresses this taxonomy as a small classifier.

  • A broken primitive admits an attack taking at most 2^80 operations. Such an attack is plausibly within the capabilities of well-funded actors, making the primitive unsafe for its intended purpose.
  • A wounded primitive admits an attack requiring between 2^80 and roughly 2^100 operations. Such an attack may not be feasible with current technology, but advances in computing power or cryptanalysis are likely to make it practical, so the primitive's long-term security is questionable.
  • An attacked primitive admits an attack that is cheaper than the original security claim but still significantly costlier than 2^100 operations. The weakness is theoretical: exploiting it remains far too expensive for the foreseeable future, though it could matter to sufficiently advanced future adversaries.
  • An analyzed primitive is one for which no known attack is cheaper than its security claim; to the best of current knowledge, it is holding up to its promises.
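
The taxonomy above can be restated as a small classifier; the 2^80 and 2^100 thresholds and the category names follow the list, while the example inputs are illustrative.

```python
def classify(claimed_bits: float, best_attack_bits: float) -> str:
    # best_attack_bits is log2 of the cost of the cheapest known attack.
    if best_attack_bits >= claimed_bits:
        return "analyzed"   # no attack cheaper than the claim
    if best_attack_bits <= 80:
        return "broken"     # plausibly within reach of well-funded actors
    if best_attack_bits <= 100:
        return "wounded"    # likely to become practical
    return "attacked"       # theoretical weakness only, for now

print(classify(128, 128))    # analyzed
print(classify(80, 61.2))    # broken (e.g. SHA-1 collision resistance)
print(classify(256, 119))    # attacked
```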

These distinctions are not merely academic; they inform critical decisions regarding algorithm selection, deprecation schedules, and the ongoing assessment of cryptographic risk in an ever-evolving threat landscape.

Quantum attacks

Post-quantum cryptography is the effort to assess, and mitigate, the security implications for cryptographic algorithms of an attacker equipped with a working quantum computer. No such machine exists today, but a sufficiently powerful one would upend the security assumptions on which much of modern digital infrastructure is built.

  • For symmetric ciphers, the quantum threat is essentially a square-root speedup of exhaustive key search via Grover's algorithm, which halves the effective security level: AES-256, offering 256 bits of classical security, would provide roughly 128 bits of security against a quantum attacker (see the sketch after this list). Since 128 bits is still considered robust for most applications, symmetric cryptography is expected to withstand quantum attacks better than asymmetric cryptography. Other theoretical quantum attacks on symmetric constructions, such as slide attacks based on Simon's algorithm, have been explored but have not proved effective against established ciphers like AES.

  • The existential quantum threat is to asymmetric cryptography. Shor's algorithm provides an exponential speedup for the integer factorization problem (on which RSA relies), the finite-field discrete logarithm problem (underlying Diffie–Hellman key exchange, DSA, and MQV), and the elliptic-curve discrete logarithm problem (underlying ECDSA, EdDSA, ECDH, and ECMQV). Should a sufficiently large and stable quantum computer, on the order of millions of qubits, become a reality, these algorithms in their current forms would be completely compromised.
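
A trivial sketch of the Grover-style halving for symmetric key search; this idealized model ignores the substantial constant overheads of running Grover's algorithm at scale.

```python
def grover_effective_bits(classical_bits: int) -> int:
    # Square-root speedup on exhaustive key search halves the exponent.
    return classical_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-192", 192), ("AES-256", 256)]:
    print(f"{cipher}: {bits}-bit classical, ~{grover_effective_bits(bits)}-bit quantum")
```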

Although quantum computers capable of such attacks remain in research and development, the "harvest now, decrypt later" threat is already a concern: sophisticated adversaries, including nation-states, may be intercepting and storing large quantities of encrypted data today, with the intention of decrypting it once sufficiently powerful quantum computers become available. This long-term threat has prompted governments and private enterprises to begin transitioning to quantum-resistant algorithms. Notable efforts include Google's and Cloudflare's tests of hybrid post-quantum TLS on the Internet, which combine classical and post-quantum algorithms, and the NSA's Commercial National Security Algorithm Suite 2.0, released in 2022, which lays out a roadmap for adopting quantum-resistant standards for US national security systems.