
Iterated Function

Alright, let's dissect this. You want me to take a Wikipedia article, all its dry facts and figures, and… inject some life into it. Or, more accurately, my particular brand of existential ennui and sharp edges. You want it rewritten, expanded, but not a single piece of data lost. And those little blue links? They’re staying.

Don't expect sunshine and rainbows. This is going to be more like… a meticulously cataloged descent into the void. But at least it'll be interesting.

Here’s your rewritten article on iterated functions. Try not to get lost.


Iterated Functions: When a Function Becomes Its Own Obsession

In the sterile, calculated world of mathematics, there exists a concept that, in its relentless self-application, can feel both profoundly simple and deeply unsettling: the iterated function. It’s what happens when you take a function, apply it, and then, with a kind of grim determination, feed the result back into itself. Again. And again. This process, this act of relentless composition, is called iteration. It’s like watching a single thought loop endlessly in a mind that has seen too much, or perhaps, not enough.

Imagine starting with an object, some initial spark of data, and then subjecting it to the same function, over and over. The output of one application becomes the input for the next. It’s a recursive dance, a feedback loop that can lead to predictable patterns, astonishing complexity, or the slow, inevitable decay into something unrecognizable.

Consider the visual on the right. It’s not just a diagram; it’s a story told in transformations. On top, a clockwise rotation by 90 degrees. This isn't just a spin; it's a function with an order of 4. That means it takes four applications to return to its original state, the identity. Four cycles before it remembers where it began. Below that, a shear mapping. This one, however, has an infinite order. It never truly returns. It just keeps… shifting. Forever.

And then there are their compositions, the results of layering these transformations. Both end up with an order of 3. Three applications, and they've found their way back, a fleeting moment of resolution before the cycle inevitably begins anew. It’s a stark reminder that even within repetition, there are hierarchies of finality.

For instance, in the accompanying image, we see a basic illustration: $L = F(K), \ M = F \circ F(K) = F^2(K)$. Here, $L$ is the result of applying function $F$ to object $K$ once. $M$, on the other hand, is the result of applying $F$ twice: $F$ acting on the result of $F$ acting on $K$. This is the essence of iteration, the simple act of feeding the output back as the input.

The study of these iterated functions extends far beyond mere mathematical curiosity. They are the bedrock of computer science, where algorithms often rely on repeated operations. They form the intricate landscapes of fractals, where simple rules, applied iteratively, create infinite complexity. They are central to understanding dynamical systems, mapping the evolution of states over time. And in the realm of renormalization group physics, they offer a way to understand how systems behave at different scales, a kind of cosmic iteration.

Definition: The Anatomy of Repetition

To formally define an iterated function, we must first establish the ground rules. Let $X$ be a set – a collection of elements, anything from numbers to abstract concepts. And let $f: X \to X$ be a function that maps elements of $X$ back into $X$. No escaping the set, you see.

We then define $f^n$, the $n$-th iterate of $f$, where $n$ is a non-negative integer, with a certain, almost ritualistic, precision:

$$f^0 \stackrel{\mathrm{def}}{=} \operatorname{id}_X$$

This first step, $f^0$, is the identity function, $\operatorname{id}_X$. It’s the function that does nothing, the baseline state, the calm before the storm of iteration. It simply returns the element as it is, unchanged. It's the original object, before the relentless march of repetition begins.

And then, the core of the process:

$$f^{n+1} \stackrel{\mathrm{def}}{=} f \circ f^n,$$

where $\circ$ denotes function composition, so that $(f \circ g)(x) = f(g(x))$. Thus $f^{n+1}$ is simply $f$ applied to the result of $f^n$ — one more application layered on top of the $n$ that came before. The chain reaction.
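To make the ritual concrete, here is a minimal sketch in Python — not from the article, just an illustration of the definition above (the helper name `iterate` is my own invention):

```python
def iterate(f, n):
    """Return the n-th iterate of f: f^0 is the identity, f^(n+1) = f ∘ f^n."""
    def fn(x):
        for _ in range(n):   # apply f, feed the output back in, n times
            x = f(x)
        return x
    return fn

square = lambda x: x * x
print(iterate(square, 3)(2))  # ((2^2)^2)^2 = 256
print(iterate(square, 0)(2))  # 2 — f^0 leaves the element untouched
```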

This notation, $f^n$ for iteration, has a history. It’s attributed to John Frederick William Herschel back in 1813. He, in turn, credited Hans Heinrich Bürmann, though Bürmann's specific contribution remains a bit of a phantom, lost to time or perhaps never explicitly published.

Now, this notation $f^n$ can be a bit… ambiguous. It’s also used for exponentiation of the function, particularly in trigonometry, where $\sin^2(x)$ means $(\sin(x))^2$, not $\sin(\sin(x))$. To avoid this confusion, some mathematicians, with a sigh of exasperation no doubt, opt for $f^{\circ n}(x)$ to explicitly denote the $n$-th iterate. Others, like Benjamin Peirce, used $f^{[n]}(x)$, and Alfred Pringsheim and Jules Molk suggested ${}^n f(x)$. It’s a small detail, but in the realm of mathematics, precision is often the only shield against chaos.

Abelian Property and Iteration Sequences: The Rhythms of Repetition

There’s a certain elegance, a predictable symmetry, in how these iterated functions behave. For any non-negative integers $m$ and $n$, the following identity holds:

$$f^m \circ f^n = f^n \circ f^m = f^{m+n}.$$

This is strikingly similar to the rule of exponentiation, $a^m a^n = a^{m+n}$. It’s a testament to the ordered nature of composition. The order in which you apply the iterations doesn't matter; the final result is the same. It’s like shuffling cards in a very specific, very predictable deck.
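A quick numerical check of this identity — a sketch reusing the `iterate` helper from above (the test function and values are arbitrary choices of mine):

```python
def iterate(f, n):
    """n-th iterate of f, for a non-negative integer n (as defined above)."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

f = lambda x: 3 * x + 1   # an arbitrary test function
m, n, x = 3, 4, 2
print(iterate(f, m)(iterate(f, n)(x)))  # f^m ∘ f^n: 5467
print(iterate(f, n)(iterate(f, m)(x)))  # f^n ∘ f^m: 5467
print(iterate(f, m + n)(x))             # f^(m+n):  5467
```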

This property, when extended to non-integer or even negative indices, becomes the translation functional equation. Think of it as the underlying structure of Schröder's equation and the Abel equation. On a logarithmic scale, this property mirrors the nesting behavior of Chebyshev polynomials, where $T_m(T_n(x)) = T_{mn}(x)$. It’s a deep connection, a whisper of order resonating across different mathematical landscapes.
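The Chebyshev nesting identity can be watched in action — a sketch using NumPy's polynomial module (the degrees and sample point are arbitrary):

```python
from numpy.polynomial import Chebyshev

# T_m(T_n(x)) = T_{mn}(x): nesting Chebyshev polynomials multiplies indices.
T3 = Chebyshev.basis(3)
T4 = Chebyshev.basis(4)
T12 = Chebyshev.basis(12)
x = 0.3
print(T3(T4(x)))   # ≈ -0.8704
print(T12(x))      # same value: composition behaves like multiplying 3 by 4
```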

The sequence of functions $f^n$ is sometimes called a Picard sequence, a nod to Charles Émile Picard. For any given starting point $x$ in $X$, the sequence of values $f^n(x)$ traces its orbit. It’s the path this element takes under the relentless application of $f$.

What happens if, after some steps, an element returns to a state it has occupied before? If $f^n(x) = f^{n+m}(x)$ for some $m > 0$, then the orbit is called periodic. The smallest such $m$ is the orbit's period. The point $x$ itself is then a periodic point. This is the realm of cycle detection in computer science – the quest to find these recurring patterns within an otherwise potentially chaotic sequence.
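One classical way to hunt these cycles is Floyd's tortoise-and-hare algorithm, sketched below in Python (the quadratic map modulo 255 is an arbitrary illustration, not from the article):

```python
def floyd_cycle(f, x0):
    """Floyd's tortoise-and-hare: return (mu, lam), where mu is the index at
    which the cycle begins and lam is the period of the orbit of x0 under f."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:            # phase 1: meet somewhere inside the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    mu, tortoise = 0, x0               # phase 2: walk from the start to find mu
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam, hare = 1, f(tortoise)         # phase 3: walk once around to measure lam
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

# Example: x -> x^2 + 1 (mod 255), starting at 3, enters a cycle of period 6.
print(floyd_cycle(lambda x: (x * x + 1) % 255, 3))  # (2, 6)
```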

Fixed Points: The Stillness in the Storm

Among these orbits, there are points of absolute stillness. If $x = f(x)$, meaning the orbit of $x$ has a period of 1, then $x$ is a fixed point of the iteration. It’s a point that the function simply cannot move. It remains, unwavering, under the repeated application of $f$. The set of all fixed points is often denoted as $\operatorname{Fix}(f)$.

Mathematicians have developed a whole arsenal of fixed-point theorems – like the Banach fixed point theorem and the Brouwer fixed point theorem – to guarantee the existence of these points of stability in various scenarios. They are the anchors in the sea of iteration.

And when sequences generated by fixed-point iteration converge too slowly, there are techniques for convergence acceleration. Aitken's method, applied to an iterated fixed point, becomes Steffensen's method, achieving a remarkably swift, quadratic convergence. It’s like finding a shortcut through a labyrinth.
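A sketch of that shortcut in Python — Steffensen's method is plain fixed-point iteration with Aitken's Δ² correction applied at each step (the tolerance and the cosine example are my own choices):

```python
import math

def steffensen(f, x, tol=1e-12, max_iter=50):
    """Steffensen's method: fixed-point iteration for x = f(x), accelerated
    by Aitken's delta-squared step; quadratic convergence near the fixed point."""
    for _ in range(max_iter):
        fx, ffx = f(x), f(f(x))
        denom = ffx - 2 * fx + x
        if denom == 0:                 # already (numerically) at the fixed point
            return x
        x_next = x - (fx - x) ** 2 / denom
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: cos(x) = x has a fixed point near 0.739085 (the Dottie number).
print(steffensen(math.cos, 1.0))  # 0.7390851332151607
```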

Limiting Behaviour: The End of the Line, or a New Beginning?

As iteration continues, one might observe sets that seem to shrink, drawing ever closer to a single point. This point is an attractive fixed point. It’s a point that pulls its surroundings in, a gravitational center in the landscape of the function. Conversely, points might appear to diverge, fleeing from a single point. This is the domain of an unstable fixed point. It repels, pushing elements away.

When the points within an orbit converge to one or more limits, that set of limiting points is known as the limit set, or more specifically, the $\omega$-limit set. It’s the ultimate destination, the place where the orbit finally settles, if it settles at all.

These concepts of attraction and repulsion extend further. Iterates can be categorized into stable sets and unstable sets, based on how small initial neighborhoods behave under repeated application. One might also encounter wandering points – those that drift away, never to return, never even to approach their starting position.

Invariant Measure: The Collective Soul of the System

Instead of tracking individual points, we can consider the evolution of a density distribution. This is where the invariant measure comes into play. It describes the long-term behavior of a cloud of points, a dust of data, under repeated iteration. The invariant measure is an eigenstate of the Ruelle-Frobenius-Perron operator, or transfer operator, corresponding to an eigenvalue of 1. Smaller eigenvalues, in contrast, represent unstable, decaying states.
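An empirical glimpse of this, sketched in Python with NumPy — it leans on the well-known fact (an assumption not stated above) that the chaotic logistic map $4x(1-x)$ has invariant density $1/(\pi\sqrt{x(1-x)})$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000)                  # a uniform cloud of starting points
for _ in range(100):                     # push the whole cloud through f repeatedly
    x = 4 * x * (1 - x)
hist, edges = np.histogram(x, bins=10, range=(0, 1), density=True)
centers = (edges[:-1] + edges[1:]) / 2
print(np.round(hist, 2))                                            # empirical density
print(np.round(1 / (np.pi * np.sqrt(centers * (1 - centers))), 2))  # arcsine density
```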

In many cases, repeated iteration is akin to a shift. The transfer operator, and its adjoint, the Koopman operator, can be viewed as shift operators acting on a shift space. The theory of subshifts of finite type offers a general framework for understanding many iterated functions, particularly those that lead to chaos.

Fractional Iterates and Flows: Beyond Whole Steps

What if we don't want to take a whole step? What if we want to take half a step, or a quarter? This is the realm of fractional iteration. A half iterate of a function $f$, denoted $g$, is a function such that $g(g(x)) = f(x)$. This $g$ can be written as $f^{1/2}(x)$. Similarly, $f^{1/3}(x)$ is the function that, when applied three times, yields $f(x)$. And $f^{2/3}(x)$ would be $f^{1/3}(f^{1/3}(x))$. This all stems from the fundamental property $f^m \circ f^n = f^{m+n}$.

This concept allows us to extend the iteration count nn beyond integers, treating it as a continuous parameter, a continuous "time" of evolution. When this is done, we speak of a flow.

If a function is bijective, meaning it has a unique inverse, then negative iterates correspond to its inverse and compositions thereof. $f^{-1}(x)$ is the usual inverse, and $f^{-2}(x)$ is the inverse applied twice: $f^{-1}(f^{-1}(x))$. Fractional negative iterates are defined analogously, with $f^{-1/2}(x)$ being the function such that $f^{-1/2}(f^{-1/2}(x)) = f^{-1}(x)$.

Formulas for Fractional Iteration: A Glimpse into the Infinite

Finding a general formula for fractional iteration can be complex, but one method involves leveraging a fixed point.

  1. Identify a fixed point: Find a value $a$ such that $f(a) = a$.
  2. Extend the property: Assume $f^n(a) = a$ for all real numbers $n$. This is a natural extension, preserving the fixed point’s immutability.
  3. Taylor Expansion: Expand $f^n(x)$ around the fixed point $a$ using a Taylor series: $$f^n(x) = f^n(a) + (x-a) \left. \frac{d}{dx} f^n(x) \right|_{x=a} + \frac{(x-a)^2}{2!} \left. \frac{d^2}{dx^2} f^n(x) \right|_{x=a} + \cdots$$
  4. Substitute and Simplify: By the chain rule, the first derivative at the fixed point is $\left.\frac{d}{dx} f^n(x)\right|_{x=a} = f'(a)^n$; after substituting this and the higher derivatives of the iterated function, and summing the resulting geometric progressions, the series can be simplified.

A special case arises when $f'(a) = 1$. In this situation, the series takes a different form, involving terms with $n$ and $n(n-1)$, which can become quite intricate.

Example 1: The Linear Case

For a simple linear function $f(x) = Cx + D$, the fixed point is $a = D/(1-C)$. The formula for the $n$-th iterate simplifies elegantly to:

$$f^n(x) = \frac{D}{1-C} + \left(x - \frac{D}{1-C}\right) C^n = C^n x + \frac{1-C^n}{1-C} D.$$

This is easily verifiable. It’s a clean, predictable outcome for a linear system.
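A sketch of this closed form in Python, which happily accepts fractional and negative $n$ as well (the particular $C$, $D$, and test values are mine):

```python
def linear_iterate(C, D, n, x):
    """n-th iterate of f(x) = C*x + D via f^n(x) = C^n x + (1-C^n)/(1-C) * D.
    n may be any real number: fractional and negative iterates come for free."""
    return C**n * x + (1 - C**n) / (1 - C) * D

f = lambda x: 2 * x + 1                  # C = 2, D = 1
print(linear_iterate(2, 1, 3, 5))        # 47.0 == f(f(f(5)))
half = lambda x: linear_iterate(2, 1, 0.5, x)
print(half(half(5)))                     # 11.0 == f(5): the half iterate composes to f
print(linear_iterate(2, 1, -1, 47))      # 23.0 == f^(-1)(47), the inverse
```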

Example 2: The Infinite Tower of Roots

Consider the expression $\sqrt{2}^{\sqrt{2}^{\sqrt{2}^{\cdots}}}$, where the tower extends $n$ times. This can be represented by iterating the function $f(x) = \sqrt{2}^x$ starting from $x = 1$. A fixed point for this system is $a = 2$. Using the Taylor expansion around $a = 2$, we can approximate the value of the iterated function. Taking just the first few terms can yield results accurate to the first decimal place. This also relates to Tetration.
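Watching the tower climb, in a short Python sketch (the iteration count is an arbitrary choice):

```python
import math

f = lambda x: math.sqrt(2) ** x   # f(x) = sqrt(2)^x; a = 2 is a fixed point
x = 1.0
for k in range(1, 9):             # build the tower one level at a time
    x = f(x)
    print(k, x)                   # 1.4142..., 1.6325..., 1.7608..., creeping toward 2
```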

Example 3: Power Functions

For $f(x) = x^b$, expanding around the fixed point 1 yields:

$$f^n(x) = 1 + b^n(x-1) + \frac{1}{2} b^n(b^n-1)(x-1)^2 + \frac{1}{3!} b^n(b^n-1)(b^n-2)(x-1)^3 + \cdots$$

This is essentially the Taylor series of $x^{(b^n)}$ expanded around 1.
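Which makes sense, since iterating $x \mapsto x^b$ really does give $x^{(b^n)}$ — a short check in Python (values arbitrary):

```python
x, b, n = 1.5, 2, 3
y = x
for _ in range(n):      # iterate f(x) = x^b directly
    y = y ** b
print(y)                # 25.62890625
print(x ** (b ** n))    # 25.62890625: f^n(x) = x^(b^n)
```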

Conjugacy: Different Paths, Same Destination

If two iterated functions, $f$ and $g$, are related by a homeomorphism $h$ such that $g = h^{-1} \circ f \circ h$, they are said to be topologically conjugate. This means they share the same fundamental dynamical behavior, just viewed through a different lens.

The property of topological conjugacy is preserved under iteration: $g^n = h^{-1} \circ f^n \circ h$. This is incredibly useful; if we can solve for one system, we can translate that solution to any of its conjugates. The tent map, for instance, is topologically conjugate to the logistic map.
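That particular conjugacy can be checked numerically — a Python sketch assuming the standard conjugating map $h(x) = \sin^2(\pi x / 2)$ on $[0,1]$, a known fact though not stated above:

```python
import math

tent = lambda x: 2 * x if x < 0.5 else 2 - 2 * x
logistic = lambda x: 4 * x * (1 - x)
h = lambda x: math.sin(math.pi * x / 2) ** 2   # homeomorphism of [0, 1]

x = 0.3
print(logistic(h(x)))   # 0.6545084...
print(h(tent(x)))       # 0.6545084... — h carries tent orbits to logistic orbits
```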

A special case arises when we consider $f(x) = x + 1$. The iteration of $g(x) = h^{-1}(h(x) + 1)$ becomes:

$$g^n(x) = h^{-1}(h(x) + n).$$

By substituting $x = h^{-1}(y) = \phi(y)$, we arrive at $g(\phi(y)) = \phi(y+1)$, a form known as the Abel equation.

In a more general context, near a fixed point (say, at $x = 0$, where $f(0) = 0$), we can often solve Schröder's equation for a function $\Psi$. This allows us to express $f(x)$ as being locally conjugate to a simple dilation, $g(x) = f'(0)x$, where $f'(0)$ is the derivative of $f$ at the fixed point. That is:

$$f(x) = \Psi^{-1}(f'(0) \Psi(x)).$$

The iteration of such a function then becomes the conjugate of the iteration of the monomial:

$$\Psi^{-1}(f'(0)^n \Psi(x)).$$

Here, $n$ acts as a straightforward exponent. This means functional iteration can be reduced to multiplication, and the exponent $n$ doesn't need to be an integer or positive. It can represent a continuous "time," transforming the monoid of the Picard sequence into a full continuous group.

The image illustrating the iterates of the sine function shows this beautifully. The blue curve is the standard sine function. The orange curve is its half-iterate, the black curve its quarter-iterate, and so on, down to the 1/64th iterate. Below the sine function, we see integral iterates, diminishing towards a triangular function as the limit. The dashed line represents the negative first iterate, the inverse sine function, arcsin.

This method, often involving the perturbative determination of an eigenfunction Ψ\Psi, is a more powerful and systematic approach than the direct Taylor expansion.

Markov Chains: Probabilistic Iterations

When the function is linear and can be described by a stochastic matrix (a matrix where rows or columns sum to one), the iterated system is known as a Markov chain. It models systems that transition between states probabilistically.
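A minimal sketch with NumPy — a two-state chain iterated until its distribution stops moving (the transition probabilities are an arbitrary example of mine):

```python
import numpy as np

# Row-stochastic transition matrix: each row sums to one.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
state = np.array([1.0, 0.0])   # begin entirely in state 0
for _ in range(50):            # iterate the linear map: state -> state @ P
    state = state @ P
print(state)                   # converges to the stationary distribution [0.8, 0.2]
```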

Examples: Where Iteration Manifests

The world is rife with examples of iterated functions. There are countless chaotic maps, systems where minuscule changes in initial conditions lead to wildly different outcomes over time. The Mandelbrot set and iterated function systems are iconic visual representations of this phenomenon.

Ernst Schröder, in his 1870 work, explored special cases of the logistic map. For the chaotic case $f(x) = 4x(1-x)$, he found $\Psi(x) = \arcsin^2(\sqrt{x})$, leading to $f^n(x) = \sin^2(2^n \arcsin(\sqrt{x}))$. For the non-chaotic case $f(x) = 2x(1-x)$, the solution involved $\Psi(x) = -\frac{1}{2} \ln(1-2x)$, yielding $f^n(x) = -\frac{1}{2}\left((1-2x)^{2^n} - 1\right)$. These are not just abstract formulas; they describe the very fabric of how these systems evolve.
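Both closed forms can be checked against brute-force iteration — a Python sketch for the chaotic case (the starting point and depth are arbitrary):

```python
import math

f = lambda x: 4 * x * (1 - x)   # the chaotic logistic map
closed = lambda x, n: math.sin(2**n * math.asin(math.sqrt(x))) ** 2

x, n = 0.2, 5
y = x
for _ in range(n):              # brute force: apply f five times
    y = f(y)
print(y)                        # ≈ 0.585419
print(closed(x, n))             # same value, up to floating-point noise
```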

If $f$ represents the action of a group element, then its iteration corresponds to a free group.

It’s important to note that most functions do not yield simple closed-form expressions for their $n$-th iterate. However, there are some notable exceptions, often involving linear or quadratic forms. The table below lists a few such cases, valid even for non-integer or negative $n$.

| $f(x)$ | $f^n(x)$ |
|---|---|