203 points | by foweltschmerz | 5 days ago
Of course, if you have lots of time and space and a completely static graph, you can run all-pairs shortest paths and simply store all the results for O(1) path lookup, but there are intermediate approaches for varying types of graphs. This Stack Exchange question is a good overview: https://cstheory.stackexchange.com/questions/11855/how-do-th....
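For what it's worth, here's a minimal sketch of that precompute-everything idea (Floyd–Warshall in Python; the function and variable names are just illustrative, not from the thread):

    from itertools import product

    def all_pairs_shortest_paths(nodes, weight):
        # weight: dict mapping (u, v) -> edge cost for edges that exist
        INF = float("inf")
        dist = {(u, v): 0 if u == v else weight.get((u, v), INF)
                for u, v in product(nodes, repeat=2)}
        # Floyd-Warshall: O(V^3) once, then every query is a dict lookup
        for k, u, v in product(nodes, repeat=3):
            if dist[u, k] + dist[k, v] < dist[u, v]:
                dist[u, v] = dist[u, k] + dist[k, v]
        return dist

    dist = all_pairs_shortest_paths("abc", {("a", "b"): 1, ("b", "c"): 2})
    assert dist["a", "c"] == 3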
I've been wondering how well D* Lite would perform in practice on a somewhat varying graph. I've read suggestions that if the graph changes even occasionally, it mostly degrades to A*, since many changed paths need to be re-evaluated.
In the context of games, I've also been thinking about a technique called true distance heuristics (TDH), where you essentially precompute the distances between some fixed set of nodes and then use those as part of the heuristic for A* (or D* Lite in this case). It seems like updating these TDHs on a changing graph might introduce just as much overhead as not having them in the first place. It might be an interesting trade-off, though: if you have some "lines" (e.g., think train lines) that are much faster than roadways, you could handle each of these specially via the TDH, and in exchange you would be able to assume a lower "max speed" for the A* heuristic, allowing you to explore fewer paths (since with a lower "max speed" estimated path costs increase more rapidly), whereas if you had to assume all car-based paths could move as fast as a train, you would have to explore more paths.
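Roughly, something in the spirit of landmark (ALT) heuristics; I'm not claiming this is exactly what the TDH papers do, and dist_from_landmark is a made-up name for a table filled in by one Dijkstra run per landmark:

    def landmark_heuristic(dist_from_landmark, v, target):
        # dist_from_landmark: {landmark: {node: true distance from landmark}}
        # By the triangle inequality, |d(L, target) - d(L, v)| never
        # overestimates d(v, target), so the max over landmarks is still
        # admissible (and consistent) for A*.
        return max(abs(d[target] - d[v]) for d in dist_from_landmark.values())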
But that statement doesn't apply to the version of Dijkstra's developed in this paper, right?
> Universal optimality is a powerful beyond-worst-case performance guarantee for graph algorithms that informally states that a single algorithm performs as well as possible for every single graph topology.
It clarifies specifically what problem it is optimal for: "We prove that our working-set property is sufficient to guarantee universal optimality, specifically, for the problem of ordering vertices by their distance from the source vertex." But A* only explores a subset of vertices, guided by the heuristic, so it can be more efficient.
> But that statement doesn't apply to the version of Dijkstra's developed in this paper, right?
Simple thought experiment: I have an algorithm that has the path costs for one particular graph memorized and first checks whether the input is that graph; comparing and returning the memorized answer is O(V+E). It obviously beats this algorithm for that graph.
I can't remember exactly, but $thing was supposed to be hard to find, yet because of how they baked in pathing costs, if you followed something it'd lead you there.
Some discussion here: https://news.ycombinator.com/item?id=28230305, but half of it is complaints about twitter.
But yeah, should have just used the original title.
That last bit makes me wonder: what would a shortest path algorithm without comparisons look like? Are there also "radix sort" like approaches to shortest-path algorithms that surpass comparison-based algorithms or something?
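One way to picture the radix-sort flavour (my sketch, not from the paper): Dial's algorithm replaces the comparison-based heap with an array of buckets indexed by distance, which works when edge weights are small non-negative integers:

    def dial_shortest_paths(adj, source, max_edge_weight):
        # adj: {u: [(v, w), ...]} with integer weights in [0, max_edge_weight]
        INF = float("inf")
        dist = {u: INF for u in adj}
        dist[source] = 0
        buckets = [[] for _ in range(max_edge_weight * len(adj) + 1)]
        buckets[0].append(source)
        for d, bucket in enumerate(buckets):   # visit buckets in distance order
            for u in bucket:
                if d != dist[u]:               # stale entry, already improved
                    continue
                for v, w in adj[u]:
                    if d + w < dist[v]:
                        dist[v] = d + w
                        buckets[d + w].append(v)
        return dist

    adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
    assert dial_shortest_paths(adj, "s", 5) == {"s": 0, "a": 2, "b": 3}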
Also, if you actually know the minimum item, it's much better for it to be first, because it means you can skip almost all the work. The paper is focused on doing the best for a worst-case input. In real life, if you can guess, you can improve the average case substantially. For example, this is what A* tries to do.
It has the nice property that the amortized cost of pushing/popping an element is independent of the number of other elements in the heap.
> Abstract: This paper proves that Dijkstra's shortest-path algorithm is universally optimal in both its running time and number of comparisons when combined with a sufficiently efficient heap data structure.
Dijkstra's algorithm: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
NetworkX docs > Reference > Algorithms > Shortest Paths: https://networkx.org/documentation/stable/reference/algorith...
networkX.algorithms.shortest_path.dijkstra_path: https://networkx.org/documentation/stable/reference/algorith... https://github.com/networkx/networkx/blob/main/networkx/algo...
/? Dijkstra manim: https://www.google.com/search?q=dijkstra%20manim
That semester, I made a slight mistake on the final exam, where I asked students to create an algorithm that could find the second shortest path from s to every other vertex in a graph. I forgot to specify that the second shortest path should be simple (i.e., should not visit any vertex more than once). Having to deal with non-simple paths makes the problem much, much harder.
None of the students figured it out in the time available, and I'm sure I would also have been stumped if I had tried to solve the problem. Bob figured it out, though. And then, I remember, he graded all 150 solutions to the problem himself, having a blast as he went through students' attempts at an effectively impossible problem.
The algorithm under discussion is not that search use of Dijkstra's, but the original use of finding shortest paths from the source to all vertices, so it's not directly comparable to A* here.
This article I found really interesting at the time: https://roguebasin.com/?title=The_Incredible_Power_of_Dijkst...
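For anyone who hasn't read it, the core trick (as I understand the article; a sketch with made-up names, and with unit step costs the flood fill is just a BFS) is to flood the whole map with distances from the goal cells once, and then every actor simply steps to its lowest-valued neighbour:

    from collections import deque

    def dijkstra_map(walkable, goals):
        # walkable: set of (x, y) cells; goals: list of goal cells
        dist = {g: 0 for g in goals}
        frontier = deque(goals)
        while frontier:
            x, y = frontier.popleft()
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in walkable and nxt not in dist:
                    dist[nxt] = dist[(x, y)] + 1
                    frontier.append(nxt)
        return dist  # an actor at cell c steps to the neighbour with the smallest value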
If the heuristic is not consistent, the reduced edge weights (after the standard A*-to-Dijkstra reweighting) aren't necessarily nonnegative, but you can still use the "hybrid Bellman–Ford–Dijkstra algorithm", a generalization of Dijkstra that works for all graphs and should be asymptotically better than naive A*.
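Spelling out that reweighting (my sketch, not from the paper): A* with heuristic h explores like Dijkstra run on modified weights w'(u, v) = w(u, v) - h(u) + h(v), and consistency of h is exactly the condition that every w' is non-negative, which is what plain Dijkstra needs:

    def reweight(edges, h):
        # edges: {(u, v): w}; h: {node: heuristic estimate of distance to the target}
        return {(u, v): w - h[u] + h[v] for (u, v), w in edges.items()}

    def heuristic_is_consistent(edges, h):
        # consistent  <=>  h(u) <= w(u, v) + h(v) for every edge  <=>  all w' >= 0
        return all(w_prime >= 0 for w_prime in reweight(edges, h).values())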
There's almost certainly a paper somewhere proving that A* with a given heuristic can always be made O(large) by choosing the right adversarial inputs.
If all you need is shortest path between just one pair of points, this result doesn't necessarily apply.
As long as you have an admissible heuristic, A* won't ever perform worse than Dijkstra's.
A* finds the shortest path from a node to a single other node. Dijkstra's finds the shortest paths from a node to all other nodes. If you use it as a search algorithm to find the shortest path to a single target, then yes, it's equivalent to A* with h(x)=0, but you're terminating Dijkstra's early (once your target is found) and not running the full algorithm.
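To make that concrete (an illustrative sketch, not from the paper): the same loop covers both, since Dijkstra stopped as soon as the target is settled is just A* with h(x) = 0, and plugging in a consistent h turns it into A*:

    import heapq

    def shortest_path_to_target(adj, source, target, h=lambda v: 0):
        # adj: {u: [(v, w), ...]}. With h = 0 this is early-terminating Dijkstra;
        # with a consistent heuristic h it is A* over the same loop.
        dist = {source: 0}
        heap = [(h(source), source)]
        while heap:
            f, u = heapq.heappop(heap)
            if u == target:
                return dist[u]                 # target settled, stop early
            if f - h(u) > dist[u]:
                continue                       # stale queue entry
            for v, w in adj[u]:
                if dist[u] + w < dist.get(v, float("inf")):
                    dist[v] = dist[u] + w
                    heapq.heappush(heap, (dist[v] + h(v), v))
        return float("inf")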
>when combined with a sufficiently efficient heap data structure
So it depends on the circumstances a bit.