In many real-life optimization problems involving multiple
agents, the rewards are not necessarily known exactly in advance,
but rather depend on sources of exogenous uncertainty. For
instance, delivery companies might have to coordinate to choose
who should serve which foreseen customer, under uncertainty in the
locations of the customers. The framework of Distributed
Constraint Optimization under Stochastic Uncertainty was proposed
to model such problems; in this paper, we generalize this
formalism by introducing the concept of evaluation functions that model various
optimization criteria. We take the example of three such evaluation
functions, expectation, consensus, and robustness, and we adapt and
generalize two previous algorithms accordingly. Our experimental
results on a class of Vehicle Routing Problems show that incomplete
algorithms are not only cheaper than complete ones (in terms of
simulated time, Non-Concurrent Constraint Checks, and information
exchange), but they are also often able to find the optimal
solution. We also
show that exchanging more information about the dependencies of
their respective cost functions on the sources of uncertainty can
help the agents discover higher-quality solutions.
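
To give an intuition for the three criteria, here is a minimal sketch that
treats each evaluation function as a different way of aggregating a
candidate solution's cost over sampled scenarios of the random variables.
The scenario-based representation, the exact definitions, and all names and
numbers below are illustrative assumptions, not the paper's formalism.

# Minimal sketch (Python): evaluation functions as aggregations over
# sampled scenarios. Representation and definitions are assumptions.
from collections import Counter

def expectation(costs, probs):
    # Expected cost of a candidate solution: each scenario's cost
    # weighted by that scenario's probability.
    return sum(p * c for c, p in zip(costs, probs))

def robustness(costs):
    # Worst-case cost of a candidate solution across all scenarios.
    return max(costs)

def consensus(best_solution_per_scenario):
    # Candidate solution that is optimal in the largest number of scenarios.
    return Counter(best_solution_per_scenario).most_common(1)[0][0]

# Illustrative example: two candidate routes evaluated over three
# hypothetical customer-location scenarios.
probs        = [0.5, 0.3, 0.2]
costs_route_a = [10.0, 14.0, 30.0]
costs_route_b = [12.0, 13.0, 15.0]
print(expectation(costs_route_a, probs))   # ~15.2
print(expectation(costs_route_b, probs))   # ~12.9
print(robustness(costs_route_a))           # 30.0: route B is more robust
print(consensus(["A", "B", "B"]))          # "B" wins in most scenarios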
Full text in PDF
@inproceedings{Leaute11a,
  Address   = {San Francisco, USA},
  Author    = {Thomas L{\'e}aut{\'e} and Boi Faltings},
  Booktitle = {Proceedings of the Twenty-Fifth Conference on Artificial Intelligence (AAAI'11)},
  Month     = {August~7--11},
  Pages     = {68--73},
  Title     = {Distributed Constraint Optimization under Stochastic Uncertainty},
  Url       = {http://thomas.leaute.name/main/stochastic_dcop_aaai11.html},
  Year      = {2011}
}