I got several emails in reply to my previous
post on representing Sudoku problems as
package dependencies. Several people asked about the algorithm that
aptitude uses to solve these problems. A
slightly out-of-date discussion of package management and the aptitude
algorithm can be found at
http://people.debian.org/~dburrows/model.pdf. One question was
interesting enough to deserve a follow-up blog post: a reader
referred to the page http://norvig.com/sudoku.html and asked how
long it would take aptitude to solve the hard Sudoku problems
on that page. He suspected that aptitude would do basic constraint
propagation and then be forced to exhaustively search all the possible
solutions to the problem.
In brief: the answer is that it doesn't work by default, but not for the reason he suggested, and you can make it work with a few command-line arguments. In order to try this out you'll need a new version of the Sudoku reducer that can parse the problems on that Web page: debsudoku2.py.
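debsudoku2.py itself isn't reproduced here, but for a rough idea of the input side, a minimal parser for the one-line grid format used on that page might look like the sketch below. The function name and sample grid are my own illustration, not code from debsudoku2.py:

```python
# Hypothetical sketch (not from debsudoku2.py): parse a puzzle in the
# one-line format used on http://norvig.com/sudoku.html, where the
# digits 1-9 are givens and '0' or '.' marks an empty cell.

def parse_grid(text):
    """Return a list of 81 values; 0 means an empty cell."""
    cells = [c for c in text if c in "0123456789."]
    assert len(cells) == 81, "expected exactly 81 cells"
    return [0 if c in "0." else int(c) for c in cells]

# An example grid in that format:
grid = ("003020600900305001001806400"
        "008102900700000008006708200"
        "002609500800203009005010300")
values = parse_grid(grid)
```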
If I do a straight conversion of one of those problems and then run it through aptitude, it goes spinning off and never comes back. But aptitude is coded to avoid exhaustive searches; otherwise it couldn't solve dependency problems in reasonable amounts of time. In this particular situation, though, aptitude is clearly performing an exhaustive search; here's one series of status outputs:
open: 5124; closed: 4993; defer: 0; conflict: 374
Resolving dependencies...
open: 10090; closed: 9942; defer: 0; conflict: 398
Resolving dependencies...
open: 14726; closed: 14888; defer: 0; conflict: 440
open is the number of partial solutions that aptitude
wants to examine, and
closed is the number that it's already
examined. aptitude isn't making progress here and it isn't finding
solutions; in fact, it's creating, on average, about two new partial
solutions for every one it examines.
Why is aptitude doing this? Its resolver isn't a brute-force
algorithm; in fact, it looks remarkably like the Sudoku solver
discussed at http://norvig.com/sudoku.html. On each step, aptitude
takes a partial solution, picks one dependency, and resolves it by
installing or removing a package. This is pretty much equivalent to
filling in one empty cell; in fact, in the
Sudoku reduction, it's exactly equivalent.
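As a toy illustration of that step (not aptitude's actual code), one can model a dependency as a set of alternative choices and a partial solution as the set of choices made so far; resolving a dependency then forks the partial solution once per alternative, just as a Sudoku solver tries each candidate digit for an empty cell. All names here are mine:

```python
# Illustrative model of one search step: branch on the first
# unsatisfied dependency, producing one new partial solution per
# alternative way of satisfying it.

def successors(partial, dependencies):
    """Yield new partial solutions from the first unsatisfied dependency."""
    for dep in dependencies:
        if not any(choice in partial for choice in dep):
            for choice in dep:
                yield partial | {choice}
            return  # branch on only one dependency per step
    # all dependencies satisfied: nothing left to expand

# Toy use: the first dependency has two alternatives, so one step
# turns one partial solution into two.
start = frozenset()
deps = [("a1", "a2"), ("b1",)]
children = list(successors(start, deps))
```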
There is one difference, though, and it stems from a difference in the problem domains of Sudoku and package management:
Not all dependency resolutions are created equal.
In Sudoku, any solution to the puzzle will do. Not so in package
management: solving a dependency problem by removing every
package on the system will produce a formally correct answer and a
very unhappy user. Even results that are not utterly disastrous could
be annoying to the user, so the resolver needs to somehow aim for
the best solution it can find.
The page on Sudoku solvers suggests sticking to whatever value you placed in a cell, unless that doesn't lead to any solutions at all. This is technically known as a depth-first search. The problem with taking this approach when resolving package dependencies is that an apparently safe choice can end up requiring undesirable actions later on (like removing large swathes of the system). For this reason, aptitude tries to strike a balance between sticking to its current line of reasoning, and abandoning it when it seems to be turning out badly (this makes it a form of best-first search).
On typical Debian archives, the package relationships are simple enough that it's OK if aptitude does this from time to time; it makes the search a little more thorough (thus costly), but the ability to back out of questionable branches is very valuable. But it looks like when aptitude is applied to Sudoku puzzles, it has trouble settling on a single series of choices. It keeps thinking that the solutions are getting worse, so it backs up and tries another branch, but they don't look any better over there, so it backs up and tries another branch, and so on.
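A rough way to picture that balance is a best-first search whose scoring adds a fixed bonus for every step taken down the current branch. The sketch below is illustrative (the names and scoring details are mine, not aptitude's internals): with a small per-step bonus, penalties pull the search onto other branches; with a huge one, deeper nodes always win and the search collapses into depth-first order.

```python
import heapq

STEP_SCORE = 70  # a per-step bonus; 70 happens to be aptitude's default

def best_first(start, expand, is_goal, penalty):
    """Best-first search: always expand the highest-scoring open node."""
    counter = 0  # tie-breaker so heapq never compares nodes directly
    queue = [(0, counter, start)]  # (-score, tie, node); heapq is a min-heap
    while queue:
        neg_score, _, node = heapq.heappop(queue)
        if is_goal(node):
            return node
        for child in expand(node):
            counter += 1
            score = -neg_score + STEP_SCORE - penalty(child)
            heapq.heappush(queue, (-score, counter, child))
    return None

# Toy run: search for 5, expanding n -> n+1 and n+2 with a mild penalty.
result = best_first(0,
                    expand=lambda n: [n + 1, n + 2],
                    is_goal=lambda n: n == 5,
                    penalty=lambda n: n)
```

Because the per-step bonus dwarfs the penalties here, the toy run marches straight down one branch rather than fanning out across the frontier.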
Aptitude::ProblemResolver::StepScore sets how hard
aptitude will try to stick to the same line of reasoning. Its default
value is 70, which is reasonable for package archives, but turning it
up to a large value such as
10000 will cause aptitude to use a
straight depth-first search, just like the Sudoku page recommends:
$ aptitude -s -o 'aptitude::problemresolver::stepscore=10000' -o 'aptitude::auto-install=false' install puzzle
Setting auto-install to false is a workaround for another odd
corner case/bug that happens to bite this Sudoku problem: for
technical reasons, doing this lets aptitude know that it should avoid
solutions that cancel the installation of puzzle. Otherwise you have
to sit there telling it
no for a bunch of solutions you aren't
interested in. Future versions of aptitude will probably provide a
way to explicitly give hints to the resolver.
With these options, aptitude was able to solve the
Sudoku puzzles that I gave it in anywhere from fifteen seconds to a
minute and 47 seconds. In contrast, the straight Sudoku solver does
30 problems in a second. Why is aptitude so slow? Based on skimming
debug traces and looking at the run-time behavior, I conjecture that
there are several reasons:
The aptitude search algorithm is very heavyweight. In the case where it takes over a minute, it only actually examines about 5,000 possible solutions. I don't know exactly where all this time is going, but I do know that the resolver does a lot of work on each step, for instance to try and detect dead ends that it hasn't reached yet. This is the conflict counter that appears while the resolver is running. This pays off tremendously in terms of allowing aptitude to skip huge amounts of work (it may be why aptitude can solve Sudoku puzzles at all!), but it means that the resolver takes a long time for each potential solution it looks at.
The Sudoku page suggests branching on the cell with the fewest possible entries. I distinctly remember writing code to do this in aptitude, but it looks like I threw it out at some point or never committed it. I don't see why it should be any worse than the current approach, though (which is to pick an arbitrary dependency); maybe it just got misplaced at some point. Implementing this behavior decreased the time required to solve one puzzle from a minute and 45 seconds to just under a minute; I've committed this change to the post-lenny branch.
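That heuristic (branch on the constraint with the fewest remaining choices, often called "minimum remaining values") fits in a few lines; pick_dependency and its arguments are hypothetical names for illustration, not aptitude code:

```python
# Illustrative sketch of the fewest-choices heuristic: instead of
# branching on an arbitrary unsatisfied dependency, branch on the one
# with the fewest still-legal choices, shrinking the branching factor
# near the top of the search tree.

def pick_dependency(dependencies, partial, allowed):
    """Return the unsatisfied dependency with the fewest legal choices."""
    unsatisfied = [dep for dep in dependencies
                   if not any(c in partial for c in dep)]
    return min(unsatisfied,
               key=lambda dep: sum(1 for c in dep if c in allowed))

# Toy use: the second dependency has only one legal choice left,
# so it is picked first.
deps = [("a", "b", "c"), ("d", "e")]
best = pick_dependency(deps, set(), allowed={"a", "b", "c", "d"})
```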
The Sudoku resolver has the advantage of being able to very quickly
identify illegal moves. aptitude's resolver can learn this
information, but it takes time and it can't apply Sudoku-specific
optimizations. For instance, the Sudoku solver knows exactly which
values can be placed in a given cell at any time. aptitude has to
actually consider each of the values that's available as a
possibility and prove to itself that, e.g., a 5 and a 6 can't be
written in the same cell at the same time. And although it can learn
this information, it takes time to do so and it takes time to see
whether any of the past conflicts that it remembers are present in a
given solution.
As I noted above, I've already checked in one change to the resolver's
behavior. It might also be worth seeing what happens with a massive
StepScore, so I've pegged it at
10000 on my computer
to see whether the results I get are noticeably worse.
The third item above -- the fact that aptitude needs to do more work
to understand which packages can't be installed -- is interesting.
The Sudoku solver is very proactive about this: it keeps track at all
times of which cell values are legal, and updates these sets every
time it transitions. In contrast, aptitude constantly re-checks all
moves it could make to see which ones are legal.
I think that the resolver's speed could be boosted a little by doing the following:
Cache the currently legal solutions of each dependency at the time it's inserted into the queue.
When a new search node is generated by resolving a dependency, drop the formerly legal moves that are knocked out (for instance, by installing a different version of the same package).
Come up with a quick way of re-testing new conflicts (the global list of dead-ends) so that search nodes that are already in the queue can be checked against the conflicts that were added since they were queued.
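A sketch of what the first two proposed changes might look like (a hypothetical structure, not a patch against aptitude): each queued node carries a per-dependency set of still-legal choices, and resolving a dependency copies the parent's cache and prunes only what the new action rules out, instead of re-scanning every move from scratch:

```python
# Illustrative sketch of the proposed caching scheme. `cache` maps a
# dependency id to the set of choices that are still legal for it.

def resolve(cache, dep_id, choice, knocked_out):
    """Return a child cache after taking `choice` for dependency `dep_id`."""
    child = {d: set(opts) for d, opts in cache.items()}  # copy parent
    child[dep_id] = {choice}            # this dependency is now decided
    for d, opts in child.items():       # prune moves the choice rules out
        if d != dep_id:
            opts -= knocked_out
    return child

# Toy use: choosing "1" for r1c1 knocks "1" out of the rest of the row.
cache = {"r1c1": {"1", "2"}, "r1c2": {"1", "3"}}
child = resolve(cache, "r1c1", "1", knocked_out={"1"})
```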
These changes won't make aptitude able to solve problems that it couldn't solve before, or at least not very many of them, but they should make the resolver faster on every problem it encounters. However, I don't think I want to actually implement them. It would be tricky to confirm that the resolver got faster on real dependency problems (since it's normally almost instantaneous), the changes would make the resolver code more complex and fragile (less readable, less maintainable), and there's a real chance of breaking something. Unless there are cases where the resolver is currently too slow on real package archives that don't involve an exponential blowup, I'm going to shelve the idea for now.