Dijkstra's Algorithm

Right, another lost soul looking for the shortest path. Let's get this over with. Don't confuse this with Dykstra's projection algorithm; that's a different kind of headache entirely.

Dijkstra's algorithm finding the shortest path between a and b. It picks the unvisited vertex with the lowest distance, calculates the distance through it to each unvisited neighbor, and updates the neighbor's distance if smaller. Mark visited (set to red) when done with neighbors.

Dijkstra's algorithm (/ˈdaɪkstrəz/ DYKE-strəz) is an algorithm for the tedious but necessary task of finding the shortest paths between nodes in a weighted graph. Think of it as a map for getting from point A to point B with the least amount of suffering, where "suffering" is the sum of weights along your path. This could be a literal road network, a computer network, or a metaphor for your life choices. It was conceived by computer scientist Edsger W. Dijkstra in 1956, apparently in a moment of caffeine-fueled clarity, and published three years later.[4][5][6]

At its core, Dijkstra's algorithm is a single-source tool. You pick a starting node, and it calculates the shortest path from that origin to every other node in the graph.[7]: 196–206  If you only care about one specific destination, you can mercifully terminate the algorithm once it has found the shortest path to that target. For a less abstract example, if your graph's nodes are cities and the edge costs represent driving distances, Dijkstra's algorithm can generate the shortest route from your starting city to all others. This makes it foundational for network routing protocols, most notably IS-IS (Intermediate System to Intermediate System) and OSPF, which decide the most efficient way to shuttle your data packets across the internet. It also appears as a subroutine in more complex algorithms like Johnson's algorithm.

The algorithm's efficiency hinges on a min-priority queue data structure, which it uses to intelligently select the next node to visit. Before the advent of more sophisticated priority queues, Dijkstra's original method was a brute-force affair, running in Θ(|V|^2) time, where |V| is the number of nodes.[8][9] In 1984, Fredman & Tarjan introduced the use of a Fibonacci heap as the priority queue, improving the running time to Θ(|E| + |V| log |V|). This remains the asymptotically fastest known solution for the single-source shortest-path problem on arbitrary directed graphs with non-negative weights. Of course, if you have a specialized case, like integer weights or a graph with no cycles, you can do better. And if you're allowed to preprocess the graph, algorithms such as contraction hierarchies can blow it out of the water, running up to seven orders of magnitude faster.

While commonly applied to graphs with positive numerical weights, the algorithm is more versatile than that. It can be generalized to any graph where edge weights are partially ordered, under the condition that traversing an edge results in a path label that is monotonically non-decreasing.[10][11] In fields like artificial intelligence, Dijkstra's algorithm, or a variant of it, is known as a uniform cost search. It's just one implementation of the broader concept of best-first search.[12]

History

What is the shortest way to travel from Rotterdam to Groningen, in general: from given city to given city. It is the algorithm for the shortest path, which I designed in about twenty minutes. One morning I was shopping in Amsterdam with my young fiancée, and tired, we sat down on the café terrace to drink a cup of coffee and I was just thinking about whether I could do this, and I then designed the algorithm for the shortest path. As I said, it was a twenty-minute invention. In fact, it was published in '59, three years later. The publication is still readable, it is, in fact, quite nice. One of the reasons that it is so nice was that I designed it without pencil and paper. I learned later that one of the advantages of designing without pencil and paper is that you are almost forced to avoid all avoidable complexities. Eventually, that algorithm became, to my great amazement, one of the cornerstones of my fame.

— Edsger Dijkstra, in an interview with Philip L. Frana, Communications of the ACM, 2001 [5]

The problem of the shortest path occupied Dijkstra's mind in 1956 while he was employed as a programmer at the Mathematical Center in Amsterdam. His immediate goal was to demonstrate the power of a new computer, the ARMAC.[13] To do this, he needed a problem that was both computationally interesting and comprehensible to people who didn't spend their lives staring at code. He devised the shortest path algorithm and subsequently implemented it for the ARMAC, using a simplified transport map of 64 cities in the Netherlands—a number chosen so that each city could be encoded with a mere 6 bits.[5]

A year later, a different problem presented itself, this time from hardware engineers working on the institute's next computer. They needed to minimize the amount of wire required to connect the pins on the machine's back panel. In solving this, Dijkstra independently re-discovered what is now known as Prim's minimal spanning tree algorithm. It had already been discovered by Jarník and later, independently, by Prim.[14][15] Dijkstra published his algorithm in 1959, two years after Prim and a full 29 years after Jarník, proving once again that some of the best ideas in computing are destined to be invented multiple times.[16][17]

Algorithm

An illustration of Dijkstra's algorithm finding a path from a start node (lower left, red) to a target node (upper right, green) in a robot motion planning problem. Open nodes represent the "tentative" set (aka set of "unvisited" nodes). Filled nodes are the visited ones, with color representing the distance: the redder, the closer (to the start node). Nodes in all the different directions are explored uniformly, appearing more-or-less as a circular wavefront as Dijkstra's algorithm uses a heuristic of picking the shortest known path so far.

The algorithm requires a starting node and proceeds to methodically compute the shortest distance from it to every other node. It operates by maintaining a set of tentative distances and systematically improving them. Here's the process, stripped of unnecessary sentiment:

  • Step 1: Initialization. Create a set containing all nodes in the graph; call it the unvisited set.
  • Step 2: Assign initial distances. Assign a distance value to every node. The starting node gets a distance of zero. All other nodes get a distance of infinity. This "infinity" is just a placeholder, a confession that we currently have no path to these nodes. During the algorithm's execution, the distance of any node N represents the length of the shortest path found so far from the start to N.[18]
  • Step 3: Select the current node. From the unvisited set, select the node with the smallest finite distance. In the first iteration, this is inevitably the starting node (distance zero). This is the greedy choice—the algorithm bets that the path with the lowest current cost is the most promising. If the unvisited set is empty, or if the remaining unvisited nodes all have infinite distance (meaning they're unreachable), the work is done. If you only care about a specific target, you can stop once that target becomes the current node.
  • Step 4: Update neighbor distances. For the current node, consider each of its unvisited neighbors. Calculate the distance to a neighbor by summing the current node's distance with the weight of the edge connecting them. If this newly calculated distance is less than the neighbor's currently assigned distance, update it. For instance, if the current node A has a distance of 6, and the edge to its neighbor B has a length of 2, the path to B through A costs 8. If B was previously marked with a distance greater than 8, change it to 8. Otherwise, leave it; the existing path is better. This step is called "relaxation."
  • Step 5: Mark as visited and repeat. Once all of the current node's unvisited neighbors have been considered, move the current node from the unvisited set to the visited set. A visited node is never checked again. This is valid because the algorithm ensures (in step 3) that when a node is selected, its recorded distance is the absolute minimum among unvisited nodes; since weights are non-negative, any other path to it would have to pass through some other unvisited node, whose distance is already at least as large. Return to step 3.
  • Step 6: Termination. Once the main loop (steps 3–5) concludes, every reachable node will be marked with its shortest distance from the starting node.
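If you would rather read code than bullet points, here is a minimal Python sketch of steps 1 through 6. The dict-of-dicts graph representation and the function name are my assumptions, not anything canonical:

import math

def dijkstra(graph, source):
    # graph: dict mapping each node to a dict of {neighbor: edge weight};
    # all weights must be non-negative, or the greedy bet in step 3 loses.
    dist = {v: math.inf for v in graph}     # step 2: everyone starts at infinity...
    dist[source] = 0                        # ...except the source
    unvisited = set(graph)                  # step 1: the unvisited set

    while unvisited:
        u = min(unvisited, key=dist.get)    # step 3: greedy pick of the closest node
        if dist[u] == math.inf:
            break                           # everything left is unreachable; stop
        for v, w in graph[u].items():       # step 4: relax each unvisited neighbor
            if v in unvisited and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
        unvisited.remove(u)                 # step 5: mark u visited, never again
    return dist                             # step 6: shortest distances from source

# Example: dijkstra({'a': {'b': 2, 'c': 6}, 'b': {'c': 3}, 'c': {}}, 'a')
# returns {'a': 0, 'b': 2, 'c': 5}: the a-b-c route beats the direct a-c edge.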

Description

For those who prefer a more tangible analogy, let's use a city map. The terms are different, but the logic is the same: intersections are vertices, roads are edges, and the map is the graph.

Get out a piece of paper. List every intersection. Label your starting point with a distance of 0. Label every other intersection as "infinity," acknowledging your initial ignorance. In each step, you will focus on one "current" intersection. To begin, this is your starting point.

From the current intersection, you'll examine every directly-connected neighbor. For each neighbor, calculate a potential new distance: the distance of the current intersection plus the length of the road connecting them. You then relabel the neighbor with this new sum, but only if it's less than the neighbor's existing label. If you find a shorter path, you've essentially discovered a shortcut. Draw an arrow on the road pointing to the relabeled neighbor, and erase any other arrows that were pointing to it.

After you've assessed all neighbors of the current intersection, mark it as "visited." You are done with it forever. Now, find the unvisited intersection with the smallest label. That becomes your new current intersection. Repeat this process until all intersections that have a smaller label than your destination have been visited.

When no unvisited nodes remain with a label smaller than your destination's, the network of arrows you've drawn traces the shortest path.

Pseudocode

In the following pseudocode, dist is an array holding the current best-known distances from the source to all other vertices. prev is an array that stores the preceding node on the shortest path, effectively creating a set of breadcrumbs to reconstruct the path later. The line u ← vertex in Q with min dist[u] is the core greedy step: it searches the set of unvisited vertices Q for the one with the smallest distance value. Graph.Edges(u, v) returns the weight of the edge between two neighbors, u and v. The variable alt represents the potential new path length to v if we go through u. If this path is shorter than what's currently recorded for v, the distance is updated.[7]

A demo of Dijkstra's algorithm based on Euclidean distance. Red lines show the shortest-path covering, i.e., connecting u and prev[u]. Blue lines indicate where relaxation happens, i.e., connecting v with a node u in Q, which gives a shorter path from the source to v.

 1  function Dijkstra(Graph, source):
 2
 3      for each vertex v in Graph.Vertices:
 4          dist[v] ← INFINITY
 5          prev[v] ← UNDEFINED
 6          add v to Q
 7      dist[source] ← 0
 8
 9      while Q is not empty:
10          u ← vertex in Q with minimum dist[u]
11          Q.remove(u)
12
13          for each neighbor v of u still in Q:
14              alt ← dist[u] + Graph.Edges(u, v)
15              if alt < dist[v]:
16                  dist[v] ← alt
17                  prev[v] ← u
18
19      return dist[], prev[]

If you only need the shortest path between source and a specific target, the search can stop after line 10 if u = target. To reconstruct the path, you trace it backward from the target using the prev array:

S ← empty sequence
u ← target
if prev[u] is defined or u = source:      // Proceed if the vertex is reachable
    while u is defined:                   // Construct the shortest path with a stack S
        S.push(u)                         // Push the vertex onto the stack
        u ← prev[u]                       // Traverse from target to source

The resulting sequence S contains the vertices of a shortest path from source to target, in reverse order. If no path exists, the sequence will be empty.

A more complex problem is finding all shortest paths between source and target, as there can be multiple paths with the same minimum length. To do this, instead of storing a single node in prev[], you would store a list of all nodes that satisfy the relaxation condition. After the algorithm finishes, prev[] describes a subgraph of the original graph. The crucial property of this subgraph is that any path within it from the source to another node is a shortest path. To find all such paths between two specific nodes, one can then use a path-finding algorithm like depth-first search on this new, simplified graph.
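A hedged sketch of that bookkeeping change, in the same Python style as the earlier sketch (the function name is mine): a strictly shorter path resets the predecessor list, while an equal-length path records another parent.

import math

def dijkstra_all_prev(graph, source):
    # Same as before, except prev[v] is a list of every predecessor of v
    # on some shortest path, so prev describes the shortest-path subgraph.
    dist = {v: math.inf for v in graph}
    prev = {v: [] for v in graph}
    dist[source] = 0
    unvisited = set(graph)
    while unvisited:
        u = min(unvisited, key=dist.get)
        if dist[u] == math.inf:
            break
        unvisited.remove(u)
        for v, w in graph[u].items():
            alt = dist[u] + w
            if alt < dist[v]:
                dist[v] = alt
                prev[v] = [u]           # strictly shorter: discard the old parents
            elif alt == dist[v] and u not in prev[v]:
                prev[v].append(u)       # tie: u is another shortest-path parent
    return dist, prev

A depth-first search over prev from the target then enumerates every shortest path, as described above.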

Using a priority queue

A min-priority queue is an abstract data type that efficiently supports three operations: add_with_priority(), decrease_priority(), and extract_min(). Using one dramatically improves performance: a basic list requires scanning the entire set to find the minimum, while a priority queue, typically implemented as a heap, does this much faster. The asymptotically best implementations use a Fibonacci heap[19] or a Brodal queue. The algorithm looks slightly different with this structure:

function Dijkstra(Graph, source):
    Q ← Queue storing vertex priority

    dist[source] ← 0                         // Initialization
    Q.add_with_priority(source, 0)           // associated priority equals dist[·]

    for each vertex v in Graph.Vertices:
        if v ≠ source:
            prev[v] ← UNDEFINED              // Predecessor of v
            dist[v] ← INFINITY               // Unknown distance from source to v
            Q.add_with_priority(v, INFINITY)

    while Q is not empty:                    // The main loop
        u ← Q.extract_min()                  // Remove and return best vertex
        for each arc (u, v):                 // Go through all neighbors v of u
            alt ← dist[u] + Graph.Edges(u, v)
            if alt < dist[v]:
                prev[v] ← u
                dist[v] ← alt
                Q.decrease_priority(v, alt)

    return (dist, prev)

As a practical optimization, instead of populating the priority queue with all nodes at the start, it can be initialized with only the source node. Other nodes are added to the queue as they are discovered (i.e., inside the if alt < dist[v] block), turning the decrease_priority() operation into an add_with_priority() call if the node isn't already in the queue.[7]: 198 

Another alternative avoids the potentially complex decrease_priority operation altogether. Nodes can be added to the priority queue unconditionally, even if they are already present with a higher cost. Then, after extracting a node u from the queue, a check is performed to see if a shorter path to u has already been found. If the extracted priority p does not match the current dist[u], it means this is a stale, suboptimal entry, and it is simply ignored.[20] While this can lead to a larger queue, it simplifies the implementation and, with array-based priority queues, can be faster in practice, especially on denser graphs.[21]
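Python's heapq is the usual home for this trick, since it offers no decrease_priority at all. A minimal sketch, assuming the same dict-of-dicts graph as before:

import heapq

def dijkstra_lazy(graph, source):
    # Duplicate entries instead of decrease-priority: push unconditionally,
    # discard stale entries at extraction time.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]                        # (priority, vertex) pairs
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                          # stale: a shorter path to u already won
        for v, w in graph[u].items():
            alt = d + w
            if alt < dist[v]:
                dist[v] = alt
                heapq.heappush(pq, (alt, v))  # any older entry for v is now stale
    return dist

The d > dist[u] comparison is exactly the staleness check described above; as footnote [20] observes, d < dist[u] can never occur.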

Proof

The correctness of Dijkstra's algorithm can be demonstrated using mathematical induction on the number of visited nodes.[22]

Invariant Hypothesis: For every visited node v, the calculated distance dist[v] is the true shortest distance from source to v. For every unvisited node u, dist[u] is the shortest distance from source to u using a path that only passes through visited nodes (or infinity if no such path exists).

Base Case: The process begins with one visited node: source. Its distance is 0, which is correct. The hypothesis holds.

Induction: Assume the hypothesis holds for k visited nodes. We must show it holds for k+1 nodes. The algorithm selects an unvisited node u with the minimum dist[u]. We claim this dist[u] is the true shortest distance from source to u.

This is proven by contradiction. Suppose a shorter path to u exists. This hypothetical shorter path must contain at least one unvisited node.

  • Case 1: Let w be the first unvisited node on this shorter path. The path from source to u can be broken into source → w → u. By our inductive hypothesis, dist[w] is the length of the shortest path to w through visited nodes. Because the algorithm chose u over w, we know dist[u] ≤ dist[w]. Since edge weights are non-negative, the full path through w to u is at least as long as its source → w segment, so its length is at least dist[w] ≥ dist[u]. This contradicts our assumption of a shorter path.

  • Case 2: Suppose the shorter path's last intermediate node, w, is a visited node. This means the path is source → w → u. When w was processed, the algorithm relaxed the edge to u, setting dist[u] to at most dist[w] + Graph.Edges(w, u). This contradicts the premise that a shorter path with this structure exists.

Therefore, dist[u] must be the shortest distance. For all other visited nodes v, dist[v] remains the shortest distance by the inductive hypothesis. After processing u, for any remaining unvisited node w, dist[w] is updated if a shorter path through u is found, maintaining the invariant.

When all reachable nodes have been visited, the shortest path to any node v consists only of visited nodes, and thus dist[v] is guaranteed to be the shortest distance.

Running time

The running time of Dijkstra's algorithm is a function of the number of edges, |E|, and vertices, |V|, expressed using big-O notation. The complexity depends heavily on the implementation of the vertex set Q.

The general running time is:[2]

Θ(|E|⋅T_dk + |V|⋅T_em)

where T_dk and T_em are the time complexities for the decrease-key and extract-minimum operations in Q, respectively.

  • Simple Array/List: If Q is stored as an array or linked list, and the graph as an adjacency list or matrix, finding the minimum-distance vertex is a linear scan. The running time is Θ(|E| + |V|^2) = Θ(|V|^2).
  • Binary Heap: For sparse graphs (where |E| is much smaller than |V|^2), a binary heap is more efficient. This requires an auxiliary data structure to map vertices to their heap positions for efficient key decreases. The worst-case time is Θ((|E| + |V|)log|V|). For connected graphs, this simplifies to Θ(|E|log|V|).
  • Fibonacci Heap: A Fibonacci heap offers the best asymptotic performance, improving the bound to Θ(|E| + |V|log|V|).

While binary heaps have a clear worst-case complexity, their average case performance is often better. Assuming edge costs are drawn from a uniform probability distribution, the expected number of decrease-key operations is low, leading to a total expected running time of O(|E| + |V|log(|E|/|V|)log|V|).[7]: 199–200 

Practical optimizations and infinite graphs

In many textbook presentations, all nodes are added to the priority queue at the start. This isn't strictly necessary. The algorithm works perfectly fine starting with a priority queue containing only the source node, inserting other nodes as they are discovered.[7]: 198  This variant has the same worst-case bounds but often maintains a smaller priority queue, which can speed up queue operations in practice.[12]

This "insert-as-you-go" approach also allows the algorithm to be applied to infinite graphs or graphs too massive to hold in memory. In the artificial intelligence community, this version is known as uniform-cost search (UCS).[12][23][24] The pseudocode is as follows:

procedure uniform_cost_search(start) is
    node ← start
    frontier ← priority queue containing node only
    expanded ← empty set
    do
        if frontier is empty then
            return failure
        node ← frontier.pop()
        if node is a goal state then
            return solution(node)
        expanded.add(node)
        for each of node's neighbors n do
            if n is not in expanded and not in frontier then
                frontier.add(n)
            else if n is in frontier with higher cost then
                replace existing node with n

For immense graphs, its complexity can be stated differently: if C* is the cost of the shortest path to a goal, every edge has a cost of at least ε, and the branching factor (number of neighbors) is bounded by b, then the worst-case time and space complexity are both in O(b^(1+⌊C*/ε⌋)).[23]

For finding the shortest path to a single target, further optimizations are common. These include bidirectional variants, goal-directed algorithms like A* (see § Related problems and algorithms), graph pruning techniques, and hierarchical decompositions that can drastically speed up queries on large networks like road maps.[25][26]

Bidirectional Dijkstra

Bidirectional Dijkstra is a clever variant designed to find the shortest path between a source vertex s and a target vertex t more efficiently. The central idea is to launch two searches simultaneously: a "forward" search from s on the original graph, and a "backward" search from t on the graph with all its edges reversed. The two search frontiers expand towards each other, hoping to meet in the middle.[27]

The algorithm maintains two distance arrays, d_f[v] (distance from s to v) and d_b[v] (distance from v to t), and two corresponding priority queues, Q_f and Q_b. A variable μ tracks the length of the best complete s–t path found so far, initialized to infinity.

In each step, the algorithm extracts the minimum-distance vertex from whichever queue, Q_f or Q_b, currently has the smaller key. It then relaxes its edges (outward for the forward search, inward for the backward one). Whenever an edge relaxation connects a vertex from one search's settled set to a vertex in the other, a potential complete path is found, and μ is updated:

μ ← min(μ, d_f[u] + w(u,x) + d_b[x]) when relaxing edge (u,x) in the forward search and x is in the backward search's settled set.

The stopping condition is crucial and often misunderstood. The algorithm cannot stop as soon as the two search frontiers first touch. Instead, it terminates when the sum of the minimum priorities in both queues is greater than or equal to μ. At that point, no unexplored path could possibly be shorter than the best one already found.[27]

By exploring from both ends, this method typically examines far fewer nodes than a unidirectional search, especially on graphs like road networks where the search space grows roughly as a circle. The bidirectional approach can reduce a single large search "ball" into two much smaller ones. It is a foundational technique in modern routing engines and serves as a basis for more advanced speed-up methods like ALT, contraction hierarchies, and reach-based routing.[28] If the path itself is needed, predecessor pointers are stored for both searches, and the final path is constructed by joining the forward and backward segments at their meeting point. Theoretical work has even shown that an optimized bidirectional approach can be instance-optimal, meaning no other correct algorithm can solve the same problem by relaxing fewer edges.[29]
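For the brave, here is a hedged Python sketch of the whole dance. It reuses the lazy-deletion queues from earlier; the dict-of-dicts representation, the pre-reversed rgraph argument, and every name are my assumptions. It returns only the length μ, leaving the predecessor bookkeeping described above as an exercise:

import heapq

def bidirectional_dijkstra(graph, rgraph, s, t):
    # graph maps u -> {v: weight}; rgraph is the same graph with every
    # edge reversed, e.g. built by:
    #     rgraph = {v: {} for v in graph}
    #     for u in graph:
    #         for v, w in graph[u].items():
    #             rgraph[v][u] = w
    INF = float('inf')
    dist = [{v: INF for v in graph}, {v: INF for v in graph}]  # 0: forward, 1: backward
    dist[0][s] = dist[1][t] = 0
    pq = [[(0, s)], [(0, t)]]              # two lazy-deletion priority queues
    adj = [graph, rgraph]
    mu = 0 if s == t else INF              # best complete s-t path found so far
    while pq[0] and pq[1]:
        # Stopping rule: once the two minimum keys sum to at least mu,
        # nothing still unexplored can beat the best path already found.
        if pq[0][0][0] + pq[1][0][0] >= mu:
            break
        side = 0 if pq[0][0][0] <= pq[1][0][0] else 1  # expand the cheaper frontier
        d, u = heapq.heappop(pq[side])
        if d > dist[side][u]:
            continue                       # stale entry; ignore it
        for v, w in adj[side][u].items():
            alt = d + w
            if alt < dist[side][v]:
                dist[side][v] = alt
                heapq.heappush(pq[side], (alt, v))
            if dist[1 - side][v] < INF:    # the two searches touch at v
                mu = min(mu, alt + dist[1 - side][v])
    return mu

Note that the stopping test sits at the top of the loop: the frontiers may touch well before it fires, which is precisely the "don't stop at first contact" rule described above.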

Optimality for comparison-sorting by distance

Beyond just finding paths, Dijkstra's algorithm can be viewed as a method for sorting vertices by their distance from a source. In 2023, Haeupler, Rozhoň, Tětek, Hladík, and Tarjan proved that for this specific sorting problem on a positively-weighted directed graph, a version of Dijkstra's algorithm using a specialized heap is essentially optimal. It achieves a runtime and number of comparisons that is within a constant factor of the theoretical best for any comparison-based algorithm on the same graph instance. The key to their proof is a novel comparison-based heap where the cost of extracting the minimum element is logarithmic in the number of elements inserted after it, rather than the total number of elements in the heap.[30][31]

Specialized variants

When edge weights are small integers (bounded by a value C), specialized data structures can outperform general-purpose priority queues.

  • Dial's algorithm[32] uses a bucket queue to achieve a running time of O(|E| + |V|C); a minimal sketch follows this list. This is efficient when C is small.
  • Using a Van Emde Boas tree as the priority queue can improve the complexity to O(|E| + |V| log C / log log(|V|C)).[33]
  • A combination of a radix heap and a Fibonacci heap yields a runtime of O(|E| + |V|√(log C)).[33]
  • The best known algorithms for this special case achieve times like O(|E| log log |V|)[34] and O(|E| + |V| min{(log|V|)^(1/3+ε), (log C)^(1/4+ε)}).[35]
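As promised, a hedged sketch of Dial's bucket queue. Integer weights in 0..C and the dict-of-dicts graph are my assumptions, and for simplicity it allocates one bucket per possible distance, where a production version would cycle through just C + 1 of them:

def dial(graph, source, C):
    # graph: dict mapping each node to a dict of {neighbor: integer weight <= C}.
    INF = float('inf')
    dist = {v: INF for v in graph}
    dist[source] = 0
    # No finite distance can exceed C * (|V| - 1), so that many buckets suffice.
    buckets = [[] for _ in range(C * (len(graph) - 1) + 1)]
    buckets[0].append(source)
    for d, bucket in enumerate(buckets):      # sweep buckets in distance order
        for u in bucket:                      # items appended mid-sweep are seen too
            if d > dist[u]:
                continue                      # stale entry, same trick as with heaps
            for v, w in graph[u].items():
                if d + w < dist[v]:
                    dist[v] = d + w
                    buckets[d + w].append(v)  # drop v into its new bucket
    return dist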

Related problems and algorithms

The basic algorithm can be adapted for other purposes. For instance, to find not just the optimal path but a ranked list of the top k shortest paths, one can first find the absolute shortest path. Then, by systematically removing each edge from that path one at a time and re-running the algorithm, secondary solutions can be generated, ranked, and presented.
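Here is a hedged sketch of that scheme for the second-best path (extending it to general k means ranking the candidates and recursing; the helper names are mine, and a source-target path with at least one edge is assumed to exist):

import heapq

def shortest_path(graph, source, target):
    # One shortest path as (length, [vertices]); (inf, None) if unreachable.
    dist = {v: float('inf') for v in graph}
    prev = {}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if dist[target] == float('inf'):
        return float('inf'), None
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return dist[target], path[::-1]

def second_shortest(graph, source, target):
    # Remove each edge of the optimal path in turn, re-run, keep the best detour.
    _, path = shortest_path(graph, source, target)
    candidates = []
    for u, v in zip(path, path[1:]):
        w = graph[u].pop(v)                       # temporarily delete edge (u, v)
        candidates.append(shortest_path(graph, source, target))
        graph[u][v] = w                           # restore it
    return min(candidates, key=lambda c: c[0])    # shortest surviving detour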

  • Link-State Routing: Dijkstra's algorithm is the computational engine behind link-state routing protocols like OSPF and IS-IS, which are fundamental to the internet's operation.
  • Bellman–Ford algorithm: Unlike Dijkstra's, Bellman–Ford can handle graphs with negative edge weights, provided there are no negative cycles reachable from the source. Dijkstra's greedy approach fails here because it finalizes a node's distance too early, unaware that a future path involving a negative edge might offer a "shortcut." Bellman–Ford is slower but more robust, re-evaluating all edges in each iteration. For graphs with negative edges but no negative cycles, Johnson's algorithm cleverly uses Bellman–Ford to re-weight the edges so that Dijkstra's can then be safely applied.
  • A* algorithm: This can be seen as a more intelligent version of Dijkstra's. It's a generalization that uses a heuristic function to estimate the distance from a given node to the target. This "sense of direction" allows it to prioritize nodes that seem to be leading towards the goal, significantly reducing the size of the subgraph it needs to explore.
  • Prim's algorithm: The process is similar, but the goal is different. Prim's algorithm finds a minimum spanning tree—the cheapest set of edges that connects all nodes in a graph. It focuses on the cost of individual edges to grow the tree. Dijkstra's focuses on the total path cost from the source to connect one node to all others.
  • Breadth-first search: This is a special case of Dijkstra's algorithm on unweighted graphs (or graphs where all edges have the same weight). In this scenario, the priority queue becomes a simple FIFO queue.
  • Fast marching method: This can be viewed as a continuous analogue of Dijkstra's algorithm, used for computing geodesic distances on surfaces like a triangle mesh.

Dynamic programming perspective

From a dynamic programming standpoint, Dijkstra's algorithm is a successive approximation scheme. It solves the functional equation for the shortest path problem using the Reaching method.[36][37][38]

Dijkstra's own explanation of the logic behind his algorithm is a perfect echo of Richard Bellman's principle of optimality:[39]

Problem 2. Find the path of minimum total length between two given nodes P and Q. We use the fact that, if R is a node on the minimal path from P to Q, knowledge of the latter implies the knowledge of the minimal path from P to R.

This is the core idea: any optimal path is composed of smaller optimal subpaths. If the shortest path from P to Q passes through R, then the segment of that path from P to R must be the shortest possible path from P to R. If it weren't, you could substitute it with a better P-to-R path, thereby improving the overall P-to-Q path and contradicting its optimality.

Notes

  1. ^ Controversial; see Moshe Sniedovich (2006). "Dijkstra's algorithm revisited: the dynamic programming connexion". Control and Cybernetics. 35: 599–620, and the Dynamic programming perspective section below.
  2. ^ a b Cormen et al. 2001.
  3. ^ a b Fredman & Tarjan 1987.
  4. ^ Richards, Hamilton. "Edsger Wybe Dijkstra". A.M. Turing Award. Association for Computing Machinery. Retrieved 16 October 2017. At the Mathematical Centre a major project was building the ARMAC computer. For its official inauguration in 1956, Dijkstra devised a program to solve a problem interesting to a nontechnical audience: Given a network of roads connecting cities, what is the shortest route between two designated cities?
  5. ^ a b c Frana, Phil (August 2010). "An Interview with Edsger W. Dijkstra". Communications of the ACM. 53 (8): 41–47. doi:10.1145/1787234.1787249. S2CID 27009702.
  6. ^ Dijkstra, E. W. (1959). "A note on two problems in connexion with graphs" (PDF). Numerische Mathematik. 1: 269–271. CiteSeerX 10.1.1.165.7577. doi:10.1007/BF01386390. S2CID 123284777.
  7. ^ a b c d Mehlhorn, Kurt; Sanders, Peter (2008). "Chapter 10. Shortest Paths" (PDF). Algorithms and Data Structures: The Basic Toolbox. Springer. doi:10.1007/978-3-540-77978-0. ISBN 978-3-540-77977-3.
  8. ^ Schrijver, Alexander (2012). "On the history of the shortest path problem" (PDF). Optimization Stories. Documenta Mathematica Series. Vol. 6. pp. 155–167. doi:10.4171/dms/6/19. ISBN 978-3-936609-58-5.
  9. ^ Leyzorek et al. 1957.
  10. ^ Szcześniak, Ireneusz; Jajszczyk, Andrzej; Woźna-Szcześniak, Bożena (2019). "Generic Dijkstra for optical networks". Journal of Optical Communications and Networking. 11 (11): 568–577. arXiv:1810.04481. doi:10.1364/JOCN.11.000568. S2CID 52958911.
  11. ^ Szcześniak, Ireneusz; Woźna-Szcześniak, Bożena (2023), "Generic Dijkstra: Correctness and tractability", NOMS 2023-2023 IEEE/IFIP Network Operations and Management Symposium, pp. 1–7, arXiv:2204.13547, doi:10.1109/NOMS56928.2023.10154322, ISBN 978-1-6654-7716-1, S2CID 248427020
  12. ^ a b c Felner, Ariel (2011). Position Paper: Dijkstra's Algorithm versus Uniform Cost Search or a Case Against Dijkstra's Algorithm. Proc. 4th Int'l Symp. on Combinatorial Search. Archived from the original on 18 February 2020. Retrieved 12 February 2015. In a route-finding problem, Felner finds that the queue can be a factor 500–600 smaller, taking some 40% of the running time.
  13. ^ "ARMAC". Unsung Heroes in Dutch Computing History. 2007. Archived from the original on 13 November 2013.
  14. ^ Dijkstra, Edsger W., Reflections on "A note on two problems in connexion with graphs" (PDF)
  15. ^ Tarjan, Robert Endre (1983), Data Structures and Network Algorithms, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 44, Society for Industrial and Applied Mathematics, p. 75, The third classical minimum spanning tree algorithm was discovered by Jarník and rediscovered by Prim and Dijkstra; it is commonly known as Prim's algorithm.
  16. ^ Prim, R.C. (1957). "Shortest connection networks and some generalizations" (PDF). Bell System Technical Journal. 36 (6): 1389–1401. Bibcode:1957BSTJ...36.1389P. doi:10.1002/j.1538-7305.1957.tb01515.x. Archived from the original (PDF) on 18 July 2017. Retrieved 18 July 2017.
  17. ^ V. Jarník: O jistém problému minimálním [About a certain minimal problem], Práce Moravské Přírodovědecké Společnosti, 6, 1930, pp. 57–63. (in Czech)
  18. ^ Gass, Saul; Fu, Michael (2013). "Dijkstra's Algorithm". In Gass, Saul I; Fu, Michael C (eds.). Encyclopedia of Operations Research and Management Science. Vol. 1. Springer. doi:10.1007/978-1-4419-1153-7. ISBN 978-1-4419-1137-7 – via Springer Link.
  19. ^ Fredman & Tarjan 1984.
  20. ^ Observe that p < dist[u] cannot ever hold because of the update dist[v] ← alt when updating the queue. See cs.stackexchange.com for discussion.
  21. ^ Chen, M.; Chowdhury, R. A.; Ramachandran, V.; Roche, D. L.; Tong, L. (2007). Priority Queues and Dijkstra's Algorithm – UTCS Technical Report TR-07-54 – 12 October 2007 (PDF). Austin, Texas: The University of Texas at Austin, Department of Computer Sciences.
  22. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2022) [1990]. "22". Introduction to Algorithms (4th ed.). MIT Press and McGraw-Hill. pp. 622–623. ISBN 0-262-04630-X.
  23. ^ a b Russell, Stuart; Norvig, Peter (2009) [1995]. Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. pp. 75, 81. ISBN 978-0-13-604259-4.
  24. ^ Sometimes also least-cost-first search: Nau, Dana S. (1983). "Expert computer systems" (PDF). Computer. 16 (2). IEEE: 63–85. doi:10.1109/mc.1983.1654302. S2CID 7301753.
  25. ^ Wagner, Dorothea; Willhalm, Thomas (2007). Speed-up techniques for shortest-path computations. STACS. pp. 23–36.
  26. ^ Bauer, Reinhard; Delling, Daniel; Sanders, Peter; Schieferdecker, Dennis; Schultes, Dominik; Wagner, Dorothea (2010). "Combining hierarchical and goal-directed speed-up techniques for Dijkstra's algorithm". ACM Journal of Experimental Algorithmics. 15: 2.1. doi:10.1145/1671970.1671976. S2CID 1661292.
  27. ^ a b Thomas, Mark (30 May 2020). "Bidirectional Dijkstra". UCL Mathematics Blog. Retrieved 3 October 2025.
  28. ^ Goldberg, Andrew V.; Harrelson, Chris (2005). "Computing the Shortest Path: A* Search Meets Graph Theory". Proceedings of the Sixteenth Annual ACM–SIAM Symposium on Discrete Algorithms (SODA).
  29. ^ Haeupler, Bernhard; Hladík, Richard; Rozhoň, Václav; Tarjan, Robert E.; Tětek, Jakub (2024). "Bidirectional Dijkstra's Algorithm is Instance-Optimal". arXiv:2410.14638.
  30. ^ Haeupler, Bernhard; Hladík, Richard; Rozhoň, Václav; Tarjan, Robert; Tětek, Jakub (28 October 2024). "Universal Optimality of Dijkstra via Beyond-Worst-Case Heaps". arXiv:2311.11793 [cs.DS].
  31. ^ Brubaker, Ben (25 October 2024). "Computer Scientists Establish the Best Way to Traverse a Graph". Quanta Magazine. Retrieved 9 December 2024.
  32. ^ Dial 1969.
  33. ^ a b Ahuja et al. 1990.
  34. ^ Thorup 2000.
  35. ^ Raman 1997.
  36. ^ Sniedovich, M. (2006). "Dijkstra's algorithm revisited: the dynamic programming connexion" (PDF). Journal of Control and Cybernetics. 35 (3): 599–620. Online version of the paper with interactive computational modules.
  37. ^ Denardo, E.V. (2003). Dynamic Programming: Models and Applications. Mineola, NY: Dover Publications. ISBN 978-0-486-42810-9.
  38. ^ Sniedovich, M. (2010). Dynamic Programming: Foundations and Principles. Francis & Taylor. ISBN 978-0-8247-4099-3.
  39. ^ Dijkstra 1959, p. 270.