Currently, the resolver is pretty much untested. Some of the core code has a few basic verifications that are run by make check, but much of the interesting stuff comes in from the interplay between the apt layer and the low-level resolver layer. This means that we have no way of knowing what impact changes to resolver behavior and weights have in the real world. It would be much better to have a corpus of well-defined resolver inputs with well-defined expected outputs.
The problem is that it's not easy to generate automatically checkable test cases for the resolver. The input to the resolver is, essentially, the entire apt state directory, the dpkg state file, and all the configuration files for apt. This is way too much information to dump into tests/ in the source repository.
However, I believe this problem is solvable. I have written code that can take the apt and dpkg state, strip out all packages except the ones that impacted a particular resolver run, and write the result to a directory in a format compatible with
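As a rough illustration of that stripping step, here is a minimal sketch that filters a dpkg-status-style file (blank-line-separated stanzas starting with a Package: field) down to a chosen set of packages. The package names and the helper itself are hypothetical; the real code would also have to carry along apt's lists and configuration.

```python
# Sketch only: keep just the stanzas for packages that mattered to a
# particular resolver run.  Stanza layout mirrors dpkg's status file;
# the example data and function name are invented for illustration.

def filter_status(text, keep):
    """Return only the stanzas whose Package: field is in `keep`."""
    kept = []
    for stanza in text.split("\n\n"):
        for line in stanza.splitlines():
            if line.startswith("Package:"):
                name = line.split(":", 1)[1].strip()
                if name in keep:
                    kept.append(stanza.strip("\n"))
                break  # only the Package: field decides inclusion
    return "\n\n".join(kept) + "\n"

example = (
    "Package: foo\nStatus: install ok installed\nVersion: 1.0\n\n"
    "Package: bar\nStatus: install ok installed\nVersion: 2.0\n"
)
print(filter_status(example, {"foo"}))
```

A real bundle would apply the same filtering pass to each state file (available lists, status, extended states) so the stripped directory stays internally consistent.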
- Done: Devise and implement a format for storing the desired outcome of a resolver run on an input bundle. This needs to include:
  - The packages that are being installed, removed, etc. How should this be stored? Perhaps just include anything whose state is different from its last saved state.
  - The resolver manipulation commands issued before a solution was generated (e.g., "reject this version" and so on).
  - A step limit (maybe) within which we should produce a result.
  - The result that should be produced. It should be possible to store several successive resolver runs with user commands interspersed.
- Write code to parse this file format.
- Write a tester that will restore the resolver state and check that the expected results come out.
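To make the last three items concrete, here is one possible shape for such an outcome file together with a tiny parser and checker skeleton. None of the field names (Expect-Install, Expect-Remove, Step-Limit) come from the actual format; they are invented placeholders for whatever the "Done" item above settled on.

```python
# Hypothetical expected-outcome format: simple "Field: value" lines,
# with "#" comments.  Field names are invented for illustration.

def parse_expected(text):
    """Parse "Field: value" lines into a dict of value lists."""
    expected = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        field, _, value = line.partition(":")
        expected.setdefault(field.strip(), []).append(value.strip())
    return expected

def check_outcome(expected, actual_install, actual_remove):
    """Compare a parsed expectation against an actual resolver result."""
    errors = []
    if sorted(expected.get("Expect-Install", [])) != sorted(actual_install):
        errors.append("install set mismatch")
    if sorted(expected.get("Expect-Remove", [])) != sorted(actual_remove):
        errors.append("remove set mismatch")
    return errors

sample = """\
# One resolver run's expected outcome (hypothetical syntax).
Expect-Install: foo 1.2
Expect-Remove: bar
Step-Limit: 5000
"""
exp = parse_expected(sample)
print(check_outcome(exp, ["foo 1.2"], ["bar"]))  # prints []
```

The tester would restore the resolver state from the input bundle, replay any stored manipulation commands, run the resolver (within the step limit, if one is given), and feed the actual install/remove sets into a check like the one above; several such runs could be checked in sequence for bundles that store interspersed user commands.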