Many applications that require distributed optimization also involve uncertainty about the problem and the optimization criteria themselves. However, current approaches to distributed optimization assume that the problem is fully known before optimization is carried out, while optimization under uncertainty has so far been investigated only for centralized algorithms. This paper introduces the framework of Distributed Constraint Optimization under Stochastic Uncertainty (StochDCOP), in which random variables with known probability distributions model the sources of uncertainty. Our main novel contribution is a distributed procedure called collaborative sampling, which we use to produce several new versions of the DPOP algorithm for StochDCOPs. We evaluate the benefits of collaborative sampling over the simple approach in which each agent samples the random variables independently. We also show that collaborative sampling can be used to implement a new, distributed version of the consensus algorithm, a well-known algorithm for centralized, online stochastic optimization in which the chosen solution is the one that is optimal in the most sampled scenarios, rather than the one that maximizes expected utility.
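To illustrate the distinction drawn above, the following sketch contrasts the two selection rules on hypothetical data (the utility numbers and variable names are illustrative, not taken from the paper): the expected-utility rule picks the candidate with the highest mean utility across sampled scenarios, while the consensus rule picks the candidate that is optimal in the largest number of scenarios.

```python
from statistics import mean

# Hypothetical utilities: utilities[s][k] = utility of candidate
# solution k in sampled scenario s (illustrative numbers only).
utilities = [
    [10, 9],   # scenario 0: solution 0 is best
    [10, 9],   # scenario 1: solution 0 is best
    [10, 9],   # scenario 2: solution 0 is best
    [0, 30],   # scenario 3: solution 1 is far better
]

n_solutions = len(utilities[0])

# Expected-utility choice: maximize the mean utility over scenarios.
expected = [mean(row[k] for row in utilities) for k in range(n_solutions)]
best_expected = max(range(n_solutions), key=lambda k: expected[k])

# Consensus choice: pick the solution optimal in the most scenarios.
wins = [0] * n_solutions
for row in utilities:
    wins[row.index(max(row))] += 1
best_consensus = max(range(n_solutions), key=lambda k: wins[k])

print(best_expected)   # 1 (mean 14.25 vs 7.5)
print(best_consensus)  # 0 (optimal in 3 of 4 scenarios)
```

The two rules disagree here because one rare scenario with a very large payoff dominates the mean, while the consensus rule favors the solution that wins most often.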