FRODO is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
FRODO is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License along with this program. If not, see http://www.gnu.org/licenses/.
FRODO includes software developed by the JDOM Project (http://www.jdom.org/).
FRODO is an open-source Java framework for distributed combinatorial optimization, initially developed at the Artificial Intelligence Laboratory (LIA) of École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. This manual describes FRODO version 2.x, which is a complete re-design and re-implementation of the initial FRODO platform developed by Adrian Petcu. For more details on this previous version, please refer to . FRODO currently supports SynchBB, MGM and MGM-2, ADOPT, DSA, DPOP, S-DPOP, MPC-Dis(W)CSP4 [31, 30], O-DPOP, AFB, MB-DPOP, Max-Sum, ASO-DPOP, P-DPOP, P2-DPOP, E[DPOP] [14, 16], Param-DPOP, and P3/2-DPOP. FRODO also comes with a suite of benchmark problem generators, described in Appendix B.
This section describes the multi-layer, modular architecture chosen for FRODO. The three layers are illustrated in Figure 1; we describe each layer in some more detail in the following subsections.
The communications layer is responsible for passing messages between agents. At the core of this layer is the Queue class, which is an elaborate implementation of a message queue. Queues can exchange messages with each other (via shared memory if the queues run in the same JVM, or through TCP otherwise), in the form of Java Message objects. Classes implementing IncomingMsgPolicyInterface can register to a queue in order to be notified whenever messages of specific types are received. Such classes can be called policies because they decide what to do upon reception of certain types of messages.
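The queue-and-policy pattern can be sketched as follows. This is a minimal illustration whose class and method names are invented for the example, not FRODO's actual API; FRODO's real classes are Queue, Message, and IncomingMsgPolicyInterface.

```java
import java.util.*;

// Minimal sketch of the queue-and-policy pattern (illustrative names only).
public class QueueSketch {

    static class Message {
        final String type;
        final Object payload;
        Message(String type, Object payload) { this.type = type; this.payload = payload; }
    }

    // A policy declares which message types it is interested in,
    // and is notified whenever such a message is received
    interface IncomingMsgPolicy {
        Collection<String> msgTypes();
        void notifyIn(Message msg);
    }

    static class MsgQueue {
        private final Map<String, List<IncomingMsgPolicy>> policies = new HashMap<>();

        // Register a policy for all message types it declares interest in
        void addPolicy(IncomingMsgPolicy p) {
            for (String type : p.msgTypes())
                policies.computeIfAbsent(type, t -> new ArrayList<>()).add(p);
        }

        // Deliver a message to every policy registered for its type
        void receive(Message msg) {
            for (IncomingMsgPolicy p : policies.getOrDefault(msg.type, Collections.emptyList()))
                p.notifyIn(msg);
        }
    }

    public static void main(String[] args) {
        MsgQueue queue = new MsgQueue();
        List<Object> seen = new ArrayList<>();
        queue.addPolicy(new IncomingMsgPolicy() {
            public Collection<String> msgTypes() { return List.of("UTIL"); }
            public void notifyIn(Message msg) { seen.add(msg.payload); }
        });
        queue.receive(new Message("UTIL", 42));   // delivered to the policy
        queue.receive(new Message("VALUE", 0));   // ignored: no policy registered
        System.out.println(seen);                 // prints [42]
    }
}
```

In FRODO itself, the policies are the algorithm modules described in Section 2.3, which is what makes algorithms composable from independent message handlers.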
Typically, each FRODO agent owns one queue, which it uses to receive and send messages. Each queue has its own thread, which makes FRODO a multi-threaded framework. Special care has been taken to avoid threads busy-waiting for messages, in order to limit the performance impact of having one thread per agent in experiments where a large number of agents run in the same JVM.
FRODO is a platform designed to solve combinatorial optimization problems; the solution spaces layer provides classes that can be used to model such problems. Given a space of possible assignments to some variables, a solution space is a representation of assignments of special interest, such as assignments that correspond to solutions of a given problem. Intuitively, one can think of a solution space as a constraint or a set of constraints that describes a subspace of solutions to a problem.
In the context of optimization problems, utility solution spaces are used to describe solution spaces in which each solution is associated with a utility. Alternatively, the utility can be seen as a cost, if the objective of the problem is to minimize cost rather than maximize utility.
In order to reason on (utility) solution spaces, FRODO implements operations on these spaces. Examples of operations are the following:
FRODO provides several implementations of utility solution spaces, the simplest one being the hypercube. A hypercube is an extensional representation of a space in which each combination of assignments to variables is associated with a given utility (or cost). Infeasible assignments can be represented using a special infinite utility/cost. Solution spaces can also be expressed intensionally, based on JaCoP constraints.
Solution spaces can be a way for agents to exchange information about their subproblems. For instance, in the UTIL propagation phase in DPOP, agents exchange UTIL messages that are hypercubes describing the highest achievable utility for a subtree, depending on the assignments to variables in the subtree’s separator.
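To make this concrete, here is a minimal, self-contained sketch (not FRODO's actual Hypercube API, which lives in frodo2.solutionSpaces) of an extensional utility space over two binary variables, together with the join and max-projection operations that produce a DPOP-style UTIL message:

```java
import java.util.*;

// Tiny extensional utility space over two binary variables, plus the join and
// max-projection operations used in DPOP's UTIL phase (illustrative only).
public class UtilSketch {

    static final double INF = Double.POSITIVE_INFINITY;

    // Join two unary utility tables over variables x and y with a binary table u(x, y)
    static double[][] join(double[] ux, double[] uy, double[][] uxy) {
        double[][] joined = new double[ux.length][uy.length];
        for (int x = 0; x < ux.length; x++)
            for (int y = 0; y < uy.length; y++)
                joined[x][y] = ux[x] + uy[y] + uxy[x][y];
        return joined;
    }

    // Project out y by maximization: for each x, keep the best utility over y
    static double[] projectOutY(double[][] u) {
        double[] out = new double[u.length];
        for (int x = 0; x < u.length; x++) {
            out[x] = -INF;
            for (int y = 0; y < u[x].length; y++)
                out[x] = Math.max(out[x], u[x][y]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] ux = {1, 0};                      // unary utility on x
        double[] uy = {0, 2};                      // unary utility on y
        double[][] neq = {{-INF, 0}, {0, -INF}};   // hard constraint x != y, as soft utilities
        double[] util = projectOutY(join(ux, uy, neq));
        // util[x] is the best utility achievable for each value of x:
        // exactly what a UTIL message reports for the separator variable x
        System.out.println(Arrays.toString(util)); // prints [3.0, 0.0]
    }
}
```

Note how the infeasible assignments (utility negative infinity) survive the join and are then discarded by the maximization, which is how hard constraints propagate through soft spaces.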
The algorithms layer builds upon the solution spaces layer and the communications layer in order to provide distributed algorithms to solve DCOPs. In FRODO, an algorithm is implemented as one or more modules, which are simply policies that describe what should be done upon the reception of a given type of message by an agent’s queue, and what messages should be sent to other agents, or to another of the agent’s modules. This modular design makes algorithms highly customizable, and facilitates code reuse and maintenance.
FRODO currently supports the following algorithms: SynchBB, MGM and MGM-2, ADOPT, DSA, DPOP, S-DPOP, MPC-Dis(W)CSP4 [31, 30], O-DPOP, AFB, MB-DPOP, Max-Sum, ASO-DPOP, P-DPOP, P2-DPOP, E[DPOP] [14, 16], Param-DPOP, and P3/2-DPOP. Param-DPOP is an extension of DPOP that supports special variables called parameters. Contrary to traditional decision variables, the agents do not choose optimal assignments to the parameters; instead, they choose optimal assignments to their decision variables and output a solution to the parametric DCOP that is a function of these parameters. FRODO also provides a convenient algorithm to count the number of optimal solutions, which can be found in the package frodo2.algorithms.dpop.count.
To illustrate FRODO’s modular philosophy, let us consider the implementation of the DPOP algorithm. A DPOP agent uses a single queue, and is based on the generic, algorithm-independent SingleQueueAgent class. The behavior of this generic agent is specialized by plugging in three modules, which correspond to DPOP’s three phases.
This modular algorithm design makes it easy to implement various versions of DPOP, either by parametrizing one or more modules to make them behave slightly differently, or by completely replacing one or more modules by new modules to implement various behaviors of the agents.
This section describes how to use FRODO to solve Distributed Constraint Optimization Problems (DCOPs).
FRODO is distributed as a compressed ZIP file, which, when expanded, contains the following elements:
In order to use FRODO’s GUI, it is also necessary to separately install Graphviz and to make sure that Graphviz’s dot executable is on the search path.
FRODO takes in two types of files: files defining optimization problems to be solved, and configuration files defining the nature and the settings of the agents (i.e. the algorithm) to be used to solve them.
The file format used to describe DCOPs is based on the XCSP 2.1 format, with small extensions necessary to describe which agent owns which variable, and whether the problem is a maximization or a minimization problem, as the XCSP format was designed for centralized CSPs and WCSPs, not distributed optimization problems. The resulting XCSP format is a superset of the XDisCSP 1.0 format used in the DisCHOCO 2 platform; this makes it possible to use DisCHOCO as a benchmark problem generator for FRODO. DisCHOCO supports multiple classes of benchmarks, including meeting scheduling, sensor networks, graph coloring, random Max-DisCSPs, and n-queens. Depending on the XCSP parser used, FRODO supports either a restricted subset of XCSP (with the XCSPparser) or an extended subset (with the JaCoPxcspParser).
Restricted XCSP Subset: Extensional Soft Constraints Only
Figure 2 shows an example FRODO XCSP file, using the restricted XCSP subset supported by the XCSPparser. The file consists of five main sections:
Among all possible types of relations defined in the XCSP format, the XCSPparser currently only supports soft relations (semantics = "soft"), which list possible utility values (or cost values, depending on whether the attribute maximize of the presentation tag is true or false) and, for each utility, the assignments to the variables that are associated with it. In the example in Figure 2, the binary relation assigns the cost value +∞ to all assignments in which the two variables are equal, and a cost value of 0 to all other assignments (as specified by the defaultCost field). This example relation is essentially a soft-relation representation of the hard inequality relation; the use of the special utilities/costs -infinity and infinity makes it possible to express hard constraints as soft constraints. Note, however, that for Max-DisCSP problems in which the goal is to minimize the number of conflicts, it is necessary to avoid the special infinite cost infinity and to use the value 1 instead, so that the cost of a particular solution corresponds to its number of constraint violations.
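For reference, a soft relation of this kind might look as follows in XCSP 2.1 syntax. This is a hedged sketch assuming a two-value domain {1, 2}; see Figure 2 for the actual file.

```xml
<relation name="neq" arity="2" nbTuples="2" semantics="soft" defaultCost="0">
  infinity: 1 1|2 2
</relation>
```

Each tuple listed after a cost prefix (here infinity:) receives that cost; all unlisted tuples receive the defaultCost of 0.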
Restricted XCSP Subset with Support for StochDCOP
FRODO also supports a variant of the XCSP format that can be used to model DCOPs under Stochastic Uncertainty (StochDCOPs), which include random variables that model sources of uncertainty in the problem. Expressing a StochDCOP involves the following two extensions of the previously described XCSP format, as illustrated in Figure 3.
Extended XCSP Subset: Adding Intensional Constraints The parser JaCoPxcspParser makes it possible to express constraints using a much richer syntax, including intensional constraints based on predicates, functions, and global constraints. Figure 4 shows the same FRODO XCSP file as in Figure 2, but this time, using this extended XCSP subset. There are two differences with respect to the extensional representation in Figure 2:
Note that the JaCoPxcspParser still supports extensional, soft relations; it also supports intensional, soft functions, described in Section A.5 (replacing nbPredicates with nbFunctions).
FRODO takes in an agent configuration file that defines the algorithm to be used, and the various settings of the algorithm’s parameters when applicable. Figure 5 presents a sample agent configuration file.
Performance Metrics FRODO supports the following performance metrics:
Note that this can be computationally expensive, as measuring message sizes involves serialization.
NOTE: The Variable Election module exchanges a number of messages that is linear in the parameter nbrSteps. For optimal performance, this parameter should be set to a value just above the diameter of the largest connected component in the constraint graph. If this diameter is unknown, a safe rule of thumb is to set the parameter to a value just above the total number of variables in the DCOP.
Each agent has an internal clock. When it sends a message, the agent appends to it a timestamp that indicates the time at which the message was sent, according to the agent’s internal clock. When it receives a message, if the message’s timestamp is later than the agent’s internal clock, the agent updates its clock to match the timestamp. When the algorithm terminates, its runtime is then defined as the latest time indicated by any agent’s clock.
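This update rule is essentially a Lamport logical clock adapted to measure simulated time; a minimal sketch (illustrative only, not FRODO's actual implementation):

```java
// Minimal sketch of the simulated-time clock-update rule (a Lamport-style
// logical clock); illustrative only, not FRODO's actual implementation.
public class SimulatedClock {

    long clock = 0;  // this agent's internal clock, in simulated time units

    // Before sending: stamp the message with the current clock value
    long stamp() { return clock; }

    // On reception: fast-forward the clock if the sender was ahead
    void onReceive(long timestamp) {
        if (timestamp > clock)
            clock = timestamp;
    }

    // Local computation simply advances the clock
    void compute(long duration) { clock += duration; }

    public static void main(String[] args) {
        SimulatedClock a1 = new SimulatedClock(), a2 = new SimulatedClock();
        a1.compute(5);                // a1 works for 5 units
        a2.onReceive(a1.stamp());     // a2 receives a1's message: clock jumps to 5
        a2.compute(3);                // a2 works for 3 more units
        // the algorithm's simulated runtime is the latest clock value
        System.out.println(Math.max(a1.clock, a2.clock));  // prints 8
    }
}
```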
This is the same mechanism as the one used by the NCCC metric, except that instead of counting constraint checks, the agent counts time. However, unlike for the NCCC metric, to simulate the situation in which each agent would be running on a dedicated processor/core, it is necessary to make sure that, at any point in time, only a single agent is active with its clock ticking, while all other agents are “sleeping” with their clocks “frozen.”
To achieve this, FRODO uses a central mailer that collects all messages sent by the agents into an outbox, and only delivers them one at a time, to each of its destinations in turn (if the message has multiple destinations). An agent’s clock is only ticking when it is busy processing a message received; when the agent is done processing the message, its clock is frozen, and the control is returned to the central mailer, which can then deliver the next message. To enforce causality, the central mailer delivers the messages by increasing order of their timestamps.
Note that this implementation slightly differs from DCOPolis’ implementation , in which messages are processed in batches: the central mailer retrieves all outgoing messages from its outbox, puts them in a temporary, timestamp-ordered queue, and delivers the messages from this temporary queue in sequence. The disadvantage of DCOPolis’ approach is that, while the ordering by timestamp is enforced inside each batch, it may be violated from one batch to the next, and therefore it is possible for an agent to receive a message from agent a1 with timestamp t1 after another message from agent a2 with timestamp t2 > t1. This should not happen if message delivery is assumed instantaneous, which is the assumption made by the simulated time metric since it only measures computation time, and excludes message delivery time.
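FRODO's one-message-at-a-time delivery discipline can be sketched with a timestamp-ordered outbox (an illustrative sketch, not the actual central mailer code):

```java
import java.util.*;

// Sketch of a central mailer that delivers messages one at a time, by
// increasing timestamp (illustrative; not FRODO's actual implementation).
public class MailerSketch {

    static class Stamped implements Comparable<Stamped> {
        final long timestamp;
        final String msg;
        Stamped(long t, String m) { timestamp = t; msg = m; }
        public int compareTo(Stamped o) { return Long.compare(timestamp, o.timestamp); }
    }

    // Outbox ordered by timestamp: poll() always returns the earliest message
    private final PriorityQueue<Stamped> outbox = new PriorityQueue<>();

    void collect(long timestamp, String msg) { outbox.add(new Stamped(timestamp, msg)); }

    // Deliver the single next message; the receiving agent would process it
    // (clock ticking) and possibly add new messages to the outbox
    String deliverNext() {
        Stamped next = outbox.poll();
        return next == null ? null : next.msg;
    }

    public static void main(String[] args) {
        MailerSketch mailer = new MailerSketch();
        mailer.collect(7, "from a2");
        mailer.collect(3, "from a1");
        // causality is enforced: the t=3 message goes out before the t=7 one,
        // even though it was collected second
        System.out.println(mailer.deliverNext());  // prints from a1
        System.out.println(mailer.deliverNext());  // prints from a2
    }
}
```

Because new messages are inserted into the same ordered outbox as soon as they are produced, the ordering holds globally, rather than only within a batch as in the DCOPolis approach.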
Other Statistics Several algorithmic modules can also report other statistical information about the problem. Whenever applicable, you can set the attribute reportStats to "true" to get access to these statistics. For instance, in the case of DPOP (Figure 5), the DFS Generation module can report the DFS tree that is computed and used by DPOP, using the DOT format, while the VALUE Propagation module can report the optimal assignments to the DCOP variables and the corresponding total utility. Setting the parser’s attribute displayGraph to true also results in displaying the constraint graph in DOT format. Wherever applicable, setting the attribute DOTrenderer to frodo2.gui.DOTrenderer (instead of the empty string) will render graphs in a GUI window instead of printing them in DOT format. This functionality requires that Graphviz be installed beforehand, as described in Section 4.1.
FRODO provides preliminary, limited support for more general Multi-Agent Systems (MAS), in which there may be multiple types of agents, performing different algorithms. To enable this feature, the agent configuration file should be modified as in Figure 6, declaring one <modules> element for each agent type.
The user should subclass MASparser and MASProblemInterface as necessary, depending on the MAS problem class considered. The problem file must include a description of each agent’s subproblem, as illustrated in Figure 7. For convenience, FRODO makes it possible to specify each agent’s subproblem in a separate file; this can be achieved as in Figure 8, where the root element of agent1.xml is the corresponding <agent> element from Figure 7.
FRODO can be run in two modes: in simple mode, and in advanced mode (Section 4.4). In simple mode, all agents run in the same Java Virtual Machine. Agents exchange messages by simply sharing pointers to objects in memory.
The simple mode with Graphical User Interface (GUI) is launched using the main method of the class SimpleGUI in the package frodo2.gui. This is defined as the default entry point of frodo2.jar; therefore, the following command should be used from within the directory containing the FRODO JAR file:
The method takes in two optional arguments, in the following order:
If the path to the agent file is omitted, FRODO uses the DPOP agent file DPOPagent.xml by default. If the path to the problem file is also omitted, FRODO generates and solves a random problem using DPOP; this requires JUnit to be on the classpath. The simple mode supports one option:
A screenshot of the GUI is presented in Figure 9. It allows the user to specify (and, optionally, edit) a problem file in XCSP format, to render the corresponding constraint graph, to select (and, optionally, edit) an agent configuration file, and to impose a timeout. During the execution of the chosen DCOP algorithm, FRODO also displays the constraint graph and the variable ordering used in separate windows, as illustrated in Figure 10. To render these graphs, FRODO uses Graphviz, which must be installed beforehand as described in Section 4.1.
The simple mode without GUI is launched using the main method of the class AgentFactory in the package frodo2.algorithms, which can be achieved using the following command, called from within the directory containing the FRODO JAR file:
The arguments are almost the same as for the simple mode with GUI, except that the path to the problem file is required, and the following option is also supported:
FRODO’s advanced mode can be used to run algorithms in truly distributed settings, with agents running on separate computers and communicating through TCP. In this mode, each computer runs a daemon, which initially waits for a centralized controller to send the descriptions of the algorithm to use and the problem(s) to solve. The controller is only used during the initial setup phase; once the algorithm is started, the agents communicate with each other directly, and the controller could even be taken offline. In the context of experiments, for the purpose of monitoring the solution process on a single computer, agents can also be set up to report statistics and the solution to the problem(s) to the controller.
Using the advanced mode, it is possible to set up batch experiments. The configuration file (see Figure 11) can contain a list of problems that will be solved sequentially by the controller. The agent to be used is defined by the field agentName in the agentDescription element, which should refer to a file that is distributed with FRODO inside frodo2.jar. It is also possible to replace the agentName field with fileName = "agent.xml", where agent.xml is the name of a file outside frodo2.jar describing the agent to be used.
FRODO’s advanced mode has two submodes:
IMPORTANT NOTE: The advanced mode does not support the simulated time metric (Section 4.2.2). Furthermore, it should only be used on a (distributed or centralized) platform on which each agent gets a dedicated processor/core.
To run the controller in local submode, the Controller class in the package frodo2.controller must be launched with the argument -local, using the following command from within the directory containing frodo2.jar:
As an optional argument, one can set the work directory by giving the argument -workdir path. The default work directory is the one from where the controller is launched.
When the controller is launched, a simple console-based UI is started. To load a particular configuration file, one uses the open command:
This command tells the controller to load the configuration file that contains all the information necessary to run the experiments. A sample configuration file can be found in Figure 11. To run the experiments, simply give the command start. When all the experiments are finished, the controller can be exited by giving the exit command.
To run the controller in distributed submode, the Controller class in the package frodo2.controller must be launched, without the -local option, using the following command from within the directory containing frodo2.jar:
To set the work directory one can again use the -workdir argument. When running in distributed mode, the controller assumes that the agents must be run on a set of daemons. These daemons can run on the same machine or on different machines. To start a daemon, open a new console, and launch the Daemon class in the package frodo2.daemon, using the following command from within the directory containing frodo2.jar:
The IP address of the controller can either be given with the command-line argument -controller ip_address, or by issuing the following command in the daemon console:
The port number used for the controller is 3000. The default port number used for the daemon is 25000, but this can be changed using the command-line argument -daemonport port_number. Each agent spawned by the daemon will be assigned an increment of this port number, the first agent getting port port_number+1. When all the daemons are running, one can check whether they are correctly registered to the controller by using the following command in the controller console:
The configuration file can be loaded with the open command and the experiments started using the start command. When the experiments are started, the agents are assigned to the different daemons in a round-robin fashion. In the future, we intend to allow more flexibility in assigning agents to particular daemons.
It is also possible to interact with FRODO directly through its Java API. This is particularly recommended for users who prefer not to write XCSP problem instance files. To this purpose, FRODO provides a special class called a solver for each DCOP algorithm, which is a subclass of the abstract class AbstractDCOPsolver. Solvers provide several solve methods, the most useful of which is the following:
The first input must be a JDOM Document object that represents the DCOP problem to solve, in XCSP format (Section 4.2.1). You can generate such a Document object from an XCSP file using one of the static parse methods of the XCSPparser class. FRODO’s benchmarking problem generators usually also provide methods that directly produce Document objects. Alternatively, if you do not want to have to deal with XCSP, the solvers also provide solve methods that take in objects implementing DCOPProblemInterface, such as:
To construct an object that implements DCOPProblemInterface, it is possible to use the Problem class. Variables can be manually added to a problem using the method Problem.addVariable(String, String, V), and constraints using the method Problem.addSolutionSpace(UtilitySolutionSpace). A simple example of a UtilitySolutionSpace is a Hypercube. The spaces supported in FRODO and how to generate them are described in detail in Appendix A.
The second input nbrElectionRounds to the solve method is the number of rounds for the VariableElection module used to choose the first variable in the variable ordering (for the DCOP algorithms that need one). It is important to set this parameter as low as possible to reduce the complexity of the VariableElection module, while keeping it higher than the diameter of the constraint graph to ensure correctness. For random, unstructured problems, this parameter can be set to the number of variables in the problem. For more structured problems, it might be possible to set it to a lower value; for instance, if the problem domain has the property that each agent’s local subproblem is a clique, then this parameter can be set to 2 times the number of agents, which is smaller than the number of variables as soon as each agent owns at least 2 variables.
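To choose a safe value for nbrElectionRounds, one can compute the constraint graph's diameter offline. The following generic sketch (unrelated to FRODO's API) does so with a breadth-first search from every node; any value strictly greater than the returned diameter is then safe:

```java
import java.util.*;

// Computes the diameter of an unweighted, connected constraint graph by
// running a BFS from every node; nbrElectionRounds can then be set to any
// value strictly greater than this diameter.
public class GraphDiameter {

    // adjacency: adj.get(v) = neighbors of node v (nodes are 0..n-1)
    static int diameter(List<List<Integer>> adj) {
        int n = adj.size(), diameter = 0;
        for (int src = 0; src < n; src++) {
            int[] dist = new int[n];
            Arrays.fill(dist, -1);
            dist[src] = 0;
            Deque<Integer> queue = new ArrayDeque<>(List.of(src));
            while (!queue.isEmpty()) {
                int v = queue.poll();
                for (int w : adj.get(v))
                    if (dist[w] < 0) {           // not visited yet
                        dist[w] = dist[v] + 1;
                        diameter = Math.max(diameter, dist[w]);
                        queue.add(w);
                    }
            }
        }
        return diameter;
    }

    public static void main(String[] args) {
        // path graph 0 - 1 - 2 - 3: diameter 3, so nbrElectionRounds >= 4 is safe
        List<List<Integer>> path = List.of(
                List.of(1), List.of(0, 2), List.of(1, 3), List.of(2));
        System.out.println(diameter(path));  // prints 3
    }
}
```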
If you intend to run experiments that involve measuring and comparing the runtimes of various algorithms (be it wall clock time or simulated time), it is recommended to destroy and create a new JVM after each run. Otherwise, the algorithm that is run first might be disadvantaged by the time it takes to initialize the JVM and load all required Java classes.
Finally, it is also possible to run FRODO in distributed mode through the API, via the DistributedSolver. Have a look at the DistributedSolverDemo (which should be run instead of the Controller from Section 4.4.2) for an example of how to do this.
The experiments folder that comes with FRODO provides examples of how to run experiments to compare the performance of various algorithms on various problem domains. These examples consist of Python scripts that make use of the frodo2 Python module included in frodo2.jar.
There are several reasons why we strongly recommend running experiments using scripts outside of Java (in this case, in Python). First, initializing the JVM takes time, and if one were to call the algorithms one after another in a loop inside Java, only the first algorithm(s) would have to pay the price of the JVM initialization, and the experiments would not be fair; restarting a new JVM for each algorithm on each problem instance addresses this undesirable experimental bias by having each algorithm equally pay the price of JVM initialization. Second, if only one JVM were used, this JVM would tend to age as the experiment progresses, and algorithms could become slower and slower. Finally, if the experiment pushes some of the algorithms to their limits (which we recommend they should), then on some problem instances some algorithms could end up timing out without properly releasing all their resources, or the JVM could even run out of memory and abruptly terminate. To summarize, restarting a fresh JVM for each algorithm on each problem instance guarantees that what has happened during one run will not influence the performance of the JVM during the following run.
In order to make use of the frodo2 Python module, you must first import it using the following code, which assumes that your Python script lives and is started inside the experiments folder.
Section 4.6.1 first describes how to format the inputs to the run function that runs the experiments. Section 4.6.2 then describes how to report the experimental results in graphs.
To start an experiment, you should call the run function, passing it 8 arguments as follows. The nature and contents of each argument are documented below.
Continuing on the example of graph coloring experiments, the following setting will create random problems of varying numbers of nodes (from 3 included to 11 excluded), with a constant density of 0.4, a constant tightness of 0.0 (initially, all colors are allowed for all nodes), and 3 colors.
For instance, to compare the performance of DPOP and SynchBB on graph coloring problems, you should set algos to the following.
Once an experiment has been started, it can be interrupted by passing CTRL+C to the Python interpreter. However, when you do so, the experiment will not be abruptly interrupted; instead the frodo2 module will wait until all algorithms have finished running on the current problem instance before stopping the experiment. This delayed-interruption mechanism is used to avoid introducing an experimental bias: if you stopped an experiment at any random time, you would be more likely to stop it during a long-lasting run than a short-lasting run (the probability of interrupting a run that would have lasted 2 min is twice that of interrupting a run that would have lasted 1 min). This would introduce a bias in the results, making the interrupted algorithm appear to perform better than it really does.
If you still want to abruptly interrupt a running experiment, you can pass CTRL+C twice to the Python interpreter; the experimental results already gathered for some algorithms on the current problem instance will be discarded. Notice also that this delayed-interruption functionality may not be available if you run your Python script from within an IDE rather than from the command line. For instance, in the Eclipse IDE, pressing the red stop button will abruptly kill the Python interpreter rather than pass it an interruption signal.
The run function of the frodo2 Python module records experimental results in a CSV file whose format is documented below. You can read the raw data in that file yourself to report the results of your experiment, or you can use the plot or plotScatter functions in frodo2 to consolidate this raw data.
Format of the Output File The output file is a tab-delimited CSV file that can be easily imported into your favorite spreadsheet program. The first line in the file contains the headers, and each subsequent line contains the results of running one algorithm on one problem instance. The columns are the following.
The plot Function In addition to the run function, the frodo2 Python module also provides a plot function that consolidates the raw data written by the run function. This function can be called as follows:
The plot function consolidates the raw data in the yCol-th column by computing its median value and the corresponding 95% confidence interval. When the matplotlib Python module is available on the Python path, the plot function will directly draw a graph such as in Figure 12. Missing data points for a given algorithm correspond to problem sizes for which the algorithm timed out on more than 50% of the problem instances (i.e. the median is infinite). Please refer to the matplotlib documentation if you want to customize the graph.
If the matplotlib module is not available, the plot function will instead write the consolidated results to another CSV file, whose format is documented in Table 1. The file can then be imported into your favorite spreadsheet program to produce a graph manually.
|y axis label:||yLabel|
|xLabel||algo 1||algo 1-||algo 1+||algo 2||...|
The plotScatter Function The plotScatter function can be used to compare the performance of two algorithms against one another on the same problem instances. This function can be called as follows:
When the matplotlib Python module is available on the Python path, the plotScatter function will directly draw a graph such as in Figure 13. Please refer to the matplotlib documentation if you want to customize the graph. If the matplotlib module is not available, the plotScatter function will instead write the results to another CSV file, whose format is documented in Table 2. The file can then be imported into your favorite spreadsheet program to produce a graph manually.
|name of the performance metric|
|name of the x algorithm||name of the y algorithm|
Important Note on Reporting Performance Notice that the plot function reports the median performance, not the average or expected performance. There is a very good rationale behind this. First, good experimental results should always include confidence intervals, which are a guarantee that the results are statistically significant. A 95% confidence interval means that there is a 95% probability that the median performance of the given algorithm is contained in the interval. In particular, when comparing two algorithms, if their confidence intervals are disjoint, you can claim with 95% confidence that the median performance of one algorithm is higher than the median performance of the other.
Without confidence intervals, the conclusions drawn from your results might be flawed, because it could be that you have run the algorithms on an insufficient number of problem instances, and that running them on more problem instances would have produced different results and different conclusions. By computing and reporting confidence intervals, and letting your experiments run until the intervals are disjoint, you can draw conclusions that are more likely to be correct.
Furthermore, while you could just report the average performance and its standard deviation, these results would be both less robust and less significant than reporting the median performance and its confidence interval. First, the results would be less robust, because the value of the average runtime performance depends on the arbitrary value you have chosen for the timeout. If the algorithm needs a virtually infinite amount of time to solve some of the problem instances, then increasing the arbitrary timeout threshold will not help, and will result in an artificially increased value for the average runtime. In contrast, as long as the algorithm times out in less than 50% of the cases, the value of the median runtime will not depend on the arbitrary value you have chosen for the timeout.
Reporting the median performance rather than the average performance is also more significant, because of a limitation of the law of large numbers. The whole philosophy behind running algorithms on sample problem instances is that the law of large numbers guarantees that, as you increase the number of problem instances, your performance results will asymptotically converge to the true performance of the algorithm on the given problem class. The issue is that the law of large numbers does not apply to heavy-tailed distributions, which is the case of your raw data distribution as soon as the algorithm times out on some instances. In that case, reporting the average performance and its standard deviation only informs about the performance of the algorithm on the problem instances you have used, and is not guaranteed to reflect the true average performance of the algorithm. In contrast, even in the case of heavy-tailed distributions, the median performance and its confidence interval does converge to the true median performance of the algorithm, because the median value is robust to the heavy tail (i.e. to the timeouts).
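The timeout-dependence argument above can be verified with a toy computation (a generic sketch, not tied to FRODO's tooling):

```java
import java.util.*;

// Toy illustration: the average runtime depends on the arbitrary timeout
// value, while the median does not (as long as fewer than 50% of the runs
// time out).
public class MedianVsMean {

    // Cap timed-out runs at the timeout value, then average
    static double mean(double[] runtimes, double timeout) {
        double sum = 0;
        for (double r : runtimes) sum += Math.min(r, timeout);
        return sum / runtimes.length;
    }

    // Cap timed-out runs at the timeout value, then take the middle element
    static double median(double[] runtimes, double timeout) {
        double[] capped = new double[runtimes.length];
        for (int i = 0; i < runtimes.length; i++) capped[i] = Math.min(runtimes[i], timeout);
        Arrays.sort(capped);
        return capped[capped.length / 2];  // odd-length arrays, for simplicity
    }

    public static void main(String[] args) {
        // 5 runs; the last one would run "forever" and always hits the timeout
        double[] runs = {1, 2, 3, 4, Double.POSITIVE_INFINITY};
        System.out.println(mean(runs, 100));    // prints 22.0  (depends on the timeout)
        System.out.println(mean(runs, 1000));   // prints 202.0 (grows with the timeout)
        System.out.println(median(runs, 100));  // prints 3.0
        System.out.println(median(runs, 1000)); // prints 3.0   (unchanged)
    }
}
```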
If you encounter errors or exceptions when using FRODO, you might want to pass the option -ea to the JVM in order to enable assertions. With this option on, FRODO performs some optional (potentially expensive) tests on its inputs, which can sometimes help resolve problems. Various helpful tools (such as a bug tracker and a support request tracker) are available on FRODO’s SourceForge website. We warmly welcome constructive feedback about FRODO in order to constantly improve the platform and make it better fit users’ needs.
This section briefly describes the recommended steps one should go through in order to implement a new DCOP algorithm inside FRODO. This procedure is illustrated using the SynchBB algorithm.
Modularity is and must remain one of the strong points of FRODO. Before implementing a new algorithm, first think carefully about its phases, each of which should ideally be implemented as a separate module. A DCOP algorithm is then defined by its agent configuration file, which lists all the modules that constitute the algorithm. The configuration file for DPOP was already given in Figure 5; we now illustrate step-by-step how to write the configuration file for SynchBB. The general structure of an agent configuration file is given in Figure 14 (in XML format).
Several modules are already available for you to reuse, in particular when it comes to generating an ordering on the variables before the core of the DCOP algorithm is started.
The VariableElection Module This module can be reused to implement any algorithm that needs to elect a variable, for instance as the first variable in the variable ordering. It works by assigning a score to each variable, and then uses a viral propagation mechanism to find the variable with the highest score. It must be parameterized by a number of steps for the viral propagation, which must be greater than the diameter of the constraint graph to ensure correctness. It can also be parameterized by a set of scoring heuristics and recursive, tie-breaking heuristics. For instance, SynchBB elects the first variable in its ordering using the VariableElection module, with the smallest domain heuristic, breaking ties by lexicographical ordering of the variable names.
The LinearOrdering Module This module constructs a total ordering of the variables, starting with the variable chosen by the VariableElection module. Currently, it uses the max width heuristic in order to produce low-width variable orders; a future version of FRODO might make this heuristic customizable. The module takes a boolean parameter reportStats whose purpose is explained in Section 5.2.4.
Other DCOP algorithms based on a pseudo-tree ordering of the variables instead of a total ordering should reuse the DFSgenerationParallel module implemented for DPOP (Figure 5).
The Main Module – SynchBB After the two modules for generating the variable ordering have been declared, it remains to declare the module(s) that constitute the core of the DCOP algorithm. Typically, if the algorithm is easily decomposable into several phases, there should be one module per phase, like in the case of DPOP (Figure 5). For SynchBB, which is a simpler, single-phase algorithm, a single module is sufficient (Figure 15).
The module may be parameterized by various attributes. The reportStats parameter has a special usage discussed in Section 5.2.4. The SynchBB module has been implemented to take in one additional parameter: convergence is a boolean attribute that specifies whether the module should keep track of the history of its variable assignments so that the experimenter can later analyze the convergence properties of the algorithm (Section 5.2.4).
Overriding the Message Types of Existing Modules In some circumstances, in order to reuse existing modules, it can be necessary to modify the types of the messages they listen to and exchange. An example of such a situation is that of P2-DPOP , which uses two different modules to elect a root variable: SecureVarElection elects an initial, temporary root, and SecureRerooting elects the true root used at each iteration of the algorithm. P2-DPOP also uses the module DFSgeneration to construct pseudo-trees, which normally listens to the output of SecureVarElection, but must be made to listen to the output of SecureRerooting instead. Another example situation would be one in which a new, custom module has to be placed between two existing modules, such that the new module intercepts the outputs of the first module and modifies them before passing them to the second. FRODO provides a simple way to achieve this, via the agent configuration file. For instance, P2-DPOP declares its module DFSgeneration as in Figure 16 (only showing the relevant XML elements).
This ensures that, before the module DFSgeneration is instantiated, its public static field DFSgeneration.ROOT_VAR_MSG_TYPE, which is used as the type of the messages containing the elected root, is reset to the value of the public static field SecureRerooting.OUTPUT, which is the type used by SecureRerooting for its output messages. The attribute ownerClass is optional; if it is not specified, then the new message type is simply the value of the attribute value.
In FRODO, the modules defined in the agent configuration file behave like message listeners (one instance per agent in the DCOP), implementing the interface IncomingMsgPolicyInterface<String>.
This interface declares the following method, which is called by FRODO whenever the agent receives a message of interest:
IncomingMsgPolicyInterface<String> is itself a sub-interface of the interface MessageListener<String>, which declares the following two methods:
The method getMsgTypes must return the types of messages that the module wants to be notified of. The type of a message is defined as the output of Message.getType(). The method setQueue is called by FRODO when the agents are initialized, and passes to the module the agent’s Queue object that the module should use to send messages to other agents.
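To make the listener pattern concrete, here is a minimal skeleton of such a module, written against simplified stand-ins for the FRODO classes (the real interfaces live in the frodo2 packages and have richer signatures; the method names below follow the text, everything else is illustrative):

```java
import java.util.Arrays;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class ModuleSkeleton {

    /** Simplified stand-in for FRODO's Message class (the real one is richer). */
    static class Message {
        private final String type;
        Message(String type) { this.type = type; }
        String getType() { return type; }
    }

    /** Simplified stand-in for FRODO's Queue; here it merely records messages. */
    static class Queue {
        final List<Message> sent = new ArrayList<Message>();
        void sendMessageToSelf(Message msg) { sent.add(msg); } // in FRODO: delivered to this agent's own modules
        void sendMessage(String toAgent, Message msg) { sent.add(msg); } // in FRODO: shared memory or TCP
    }

    /** Simplified stand-in for IncomingMsgPolicyInterface<String>. */
    interface IncomingMsgPolicy {
        Collection<String> getMsgTypes(); // message types this module wants to be notified of
        void setQueue(Queue queue);       // called by the framework when the agent is initialized
        void notifyIn(Message msg);       // called on each incoming message of a registered type
    }

    /** A trivial module that reacts to one message type by messaging itself. */
    static class EchoModule implements IncomingMsgPolicy {
        private Queue queue;
        public Collection<String> getMsgTypes() { return Arrays.asList("PING"); }
        public void setQueue(Queue queue) { this.queue = queue; }
        public void notifyIn(Message msg) {
            if ("PING".equals(msg.getType()))
                this.queue.sendMessageToSelf(new Message("PONG"));
        }
    }

    public static void main(String[] args) {
        EchoModule module = new EchoModule();
        Queue queue = new Queue();
        module.setQueue(queue);
        module.notifyIn(new Message("PING"));
        System.out.println(queue.sent.get(0).getType()); // prints: PONG
    }
}
```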
Sending messages can be achieved by calling one of the following methods of the module’s Queue object:
The method sendMessageToSelf is used by the module to send messages to another module of the same agent. This is how modules communicate with each other within the same agent; for instance, the SynchBB module listens for the output messages of the agent’s LinearOrdering module, which are of the class OrderMsg. All messages exchanged by all algorithms must be of the class Message, or a subclass thereof. Subclasses corresponding to messages with various numbers of payloads are provided for convenience: MessageWithPayload, MessageWith2Payloads, etc.
Optionally, to improve the performance of your algorithm in terms of message sizes, you can implement your own message classes by subclassing Message. This makes it possible, for instance, to exclude the type field of the message when measuring its size. This improvement is not necessary for virtual messages that are only sent by an agent to itself. Because Message implements Externalizable, you must not forget to do the following two things when you subclass Message:
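These two requirements follow from how java.io.Externalizable works: deserialization first instantiates the class through a public no-argument constructor and then calls readExternal, so (1) such a constructor must exist, and (2) writeExternal and readExternal must (de)serialize every payload field. The standalone sketch below illustrates the pattern on a hypothetical payload class (an actual subclass of FRODO's Message would additionally need to handle the superclass's state; consult the Message Javadoc):

```java
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class DemoMsg implements Externalizable {

    private String variable; // illustrative payload
    private int value;       // illustrative payload

    /** (1) Mandatory public no-argument constructor:
     *  deserialization instantiates the class through it before calling readExternal. */
    public DemoMsg() { }

    public DemoMsg(String variable, int value) {
        this.variable = variable;
        this.value = value;
    }

    /** (2) Explicitly write every payload field... */
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(this.variable);
        out.writeInt(this.value);
    }

    /** ...and read them back in exactly the same order. */
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        this.variable = (String) in.readObject();
        this.value = in.readInt();
    }

    public String getVariable() { return this.variable; }
    public int getValue() { return this.value; }

    public static void main(String[] args) throws Exception {
        // Round-trip through Java serialization to check the implementation
        java.io.ByteArrayOutputStream bytes = new java.io.ByteArrayOutputStream();
        java.io.ObjectOutputStream out = new java.io.ObjectOutputStream(bytes);
        out.writeObject(new DemoMsg("x1", 42));
        out.flush();
        DemoMsg copy = (DemoMsg) new java.io.ObjectInputStream(
                new java.io.ByteArrayInputStream(bytes.toByteArray())).readObject();
        System.out.println(copy.getVariable() + " = " + copy.getValue()); // prints: x1 = 42
    }
}
```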
Also, notice that the destinations passed to the queue’s methods sendMessage and sendMessageToMulti are the names of the agents, not the names of the variables. Finally, when the algorithm has terminated, the module should send a message of type AgentInterface.AGENT_FINISHED to itself, which will be caught by SingleQueueAgent. This does not kill the agent; it only sends a notification of termination to FRODO. If another message is later received from a neighboring agent, the method notifyIn() will be called on this message, as before. If the algorithm does not have a built-in termination detection mechanism, but should terminate when all agents are idle (i.e. all agents are waiting for messages, but there are no more messages to be delivered), then the algorithm should terminate when it receives a message of type AgentInterface.ALL_AGENTS_IDLE.
All modules declared in the agent configuration file must have a constructor with the signature in Figure 17.
The first input is used by the module to access the description of the agent’s subproblem. The interface DCOPProblemInterface declares a large number of methods that the module can call to retrieve information about neighboring agents, variables, domains, and constraints. As explained in Section 3.2, in FRODO, constraints are called solution spaces, and should be accessed using one of the DCOPProblemInterface.getSolutionSpaces methods.
Important note: for runtime measurements to be correct, none of the methods of DCOPProblemInterface should be called within the module’s constructor, because all reasoning about the problem should be delayed until the algorithm is actually started. This happens when the agent receives a message of type AgentInterface.START_AGENT.
The second input of the module’s constructor is a JDOM Element object that represents the module’s XML fragment from the agent configuration file. For instance, for the SynchBB module, the Element object contains the XML fragment in Figure 15, and the constructor can be implemented as in Figure 18.
As previously mentioned in Section 4.2.2, it can be useful for a module to report statistics about the problem, the solution process, and the solution found. In FRODO, this is done as follows: a special statistics gatherer agent is created that listens to statistics messages sent by all DCOP agents, combines them in order to get a global view of the overall solution process, and makes it available to the user. The code that takes care of aggregating statistics must be implemented inside the module that produces these statistics. To clarify how this works, let us consider the case of the SynchBB module.
The StatsReporter Interface The XML description of the SynchBB module in Figure 15 defines a parameter reportStats, set to true. FRODO automatically interprets this as the fact that the module implements the interface StatsReporter, which declares the following method (among others):
Inside this method, the module must notify the statistics gatherer’s queue of the types of the statistics messages it wants to aggregate. This can be done by calling the method Queue.addIncomingMessagePolicy. The module is then notified of statistics messages received by the statistics gatherer agent by a call to its notifyIn method, just like for normal messages.
All modules implementing StatsReporter are expected to have a constructor with the following signature:
Notice that the order of the inputs is reversed compared to the constructor of classes implementing IncomingMsgPolicyInterface, given in Figure 17. Since StatsReporter is a sub-interface of the latter, a module that reports statistics must have both constructors. The first input is the XML description of the module, as in Figure 17. The second input describes the overall DCOP problem (while in Figure 17 it described the agent’s local subproblem).
Studying Convergence Many DCOP algorithms such as SynchBB have an any-time behavior, and it can be interesting to study their convergence properties. A sub-interface of StatsReporter, called StatsReporterWithConvergence, is provided for this purpose. It declares the following two methods:
Consult the implementation of the SynchBB module for an example of how to use this functionality.
This third step is optional, as the two previous implementation steps already make it possible to use your algorithm in FRODO’s simple mode and advanced mode (Sections 4.3 and 4.4). However, it can be convenient to have a solver class to call your algorithm through the API (Section 4.5) or from a Python script (Section 4.6). The abstract class AbstractDCOPsolver can be extended to produce such a solver; it declares the following two abstract methods:
The method getSolGatherers must return instances of the modules that report statistics, which will be automatically added to the queue of the statistics gatherer agent. For SynchBB, only the SynchBB module reports relevant statistics about the solution found, and therefore the SynchBBsolver class implements this method as follows:
After the algorithm has terminated, the method buildSolution is called, which must extract statistics from the modules created in getSolGatherers and return an object of type Solution. Because SynchBB reports convergence statistics, its solver actually returns an object of class SolutionWithConvergence, which extends Solution.
An important strength of FRODO is that it is systematically, thoroughly tested using JUnit 3 tests. As soon as you have completed a first implementation of a module (or, ideally, even before you start implementing it), write JUnit tests to make sure it behaves as expected on its own. FRODO being an intrinsically multi-threaded framework, you should use repetitive, randomized tests whenever it makes sense to do so. Once all modules are assembled together and the algorithm is completed, write unit tests against other algorithms that have already been implemented, to check that the outputs of the algorithms are consistent (if your algorithm is complete and guaranteed to find the optimal solution).
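The cross-checking idea can be illustrated in simplified, centralized form: the sketch below repeatedly generates random binary-cost instances and checks that a depth-first branch-and-bound (the search scheme underlying SynchBB) agrees with exhaustive enumeration. This is plain Java with hypothetical helper names, not actual FRODO test code:

```java
import java.util.Random;

public class ConsistencyTest {

    /** Total cost of a full assignment over boolean variables;
     *  cost[i][j] (i < j) holds the 4 costs for the value pair (x_i, x_j). */
    static int total(int[][][] cost, int n, int[] asg) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                sum += cost[i][j][asg[i] * 2 + asg[j]];
        return sum;
    }

    /** Exhaustive enumeration of all 2^n assignments. */
    static int bruteForce(int[][][] cost, int n) {
        int best = Integer.MAX_VALUE;
        int[] asg = new int[n];
        for (int mask = 0; mask < (1 << n); mask++) {
            for (int i = 0; i < n; i++) asg[i] = (mask >> i) & 1;
            best = Math.min(best, total(cost, n, asg));
        }
        return best;
    }

    /** Depth-first branch-and-bound, pruning on the incumbent cost. */
    static int branchAndBound(int[][][] cost, int n) {
        return bnb(cost, n, new int[n], 0, 0, Integer.MAX_VALUE);
    }

    private static int bnb(int[][][] cost, int n, int[] asg, int depth, int partial, int best) {
        if (partial >= best) return best; // prune: cannot improve on the incumbent
        if (depth == n) return partial;   // full assignment: new incumbent
        for (int v = 0; v <= 1; v++) {
            asg[depth] = v;
            int added = 0; // cost of the constraints between x_depth and earlier variables
            for (int i = 0; i < depth; i++) added += cost[i][depth][asg[i] * 2 + v];
            best = bnb(cost, n, asg, depth + 1, partial + added, best);
        }
        return best;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed for reproducibility
        for (int run = 0; run < 100; run++) {
            int n = 2 + rnd.nextInt(5); // 2 to 6 boolean variables
            int[][][] cost = new int[n][n][];
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++) {
                    cost[i][j] = new int[4];
                    for (int t = 0; t < 4; t++) cost[i][j][t] = rnd.nextInt(10);
                }
            if (bruteForce(cost, n) != branchAndBound(cost, n))
                throw new AssertionError("inconsistent results on run " + run);
        }
        System.out.println("all 100 runs consistent");
    }
}
```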
An example of a JUnit test is the class SynchBBagentTest, which extends DPOP’s test class DPOPagentTest to favor code reuse. The use of solvers (Section 5.3) makes it straightforward to implement unit tests for an algorithm, as demonstrated in the class P_DPOPagentTest. The class AllTests in the package frodo2.algorithms.test provides various methods to create random DCOP instances to be used as inputs for the tests.
The catalogue of constraints supported by FRODO depends on the XCSP parser used, as defined in the agent configuration file (Section 4.2.2). The simplest parser, XCSPparser, only supports soft, extensional constraints based on relations. The special parser XCSPparserVRP additionally supports vehicle routing global constraints. Finally, the most advanced parser based on JaCoP , JaCoPxcspParser, supports extensional (soft or hard) constraints (called relations), intensional, hard constraints (predicates), intensional, soft constraints (functions), and some global constraints, such as weighted sum, all different and diff2.
This appendix describes in some level of detail the XCSP format for the constraints currently supported, as well as how to create such constraints without using XCSP, when applicable. Note that we also provide XML Schema files to help users write and validate XCSP problem files (see the files XCSPschema*.xsd in the frodo2.algorithms package). To check an XCSP problem file against the XML schema, its <instance> element should include two attributes as follows:
where the relative path to the appropriate XSD file should be adjusted accordingly. If you use JDOM to generate XCSP problem files, you should use the following command to set these attributes:
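The JDOM call itself is not reproduced here; as an alternative illustration, the same two attributes (the XML Schema instance namespace declaration, and xsi:noNamespaceSchemaLocation pointing at the XSD file) can be set with the JDK's built-in DOM API as follows (the schema file name below is illustrative):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import java.io.StringWriter;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class SchemaAttrs {

    /** Builds an <instance> element carrying the two validation attributes. */
    static String buildInstance() throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element instance = doc.createElement("instance");
        // Declare the XML Schema instance namespace under the xsi prefix...
        instance.setAttributeNS("http://www.w3.org/2000/xmlns/",
                "xmlns:xsi", "http://www.w3.org/2001/XMLSchema-instance");
        // ...and point validation at the XSD file (relative path; adjust as needed)
        instance.setAttributeNS("http://www.w3.org/2001/XMLSchema-instance",
                "xsi:noNamespaceSchemaLocation", "XCSPschema.xsd");
        doc.appendChild(instance);

        // Serialize the document to a string
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildInstance());
    }
}
```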
Note that the XML Schema standard version 1.0 does not support the XCSP subsets used by XCSPparserVRP and JaCoPxcspParser (in which the type of the constraint depends on the value of the attribute reference); therefore we provide XML Schema 1.1 files instead (XCSPschemaVRP.xsd and XCSPschemaJaCoP.xsd). Using these XML Schema files to validate XCSP files requires an XSD parser that supports XML Schema 1.1; we suggest the Xerces2 Java Parser, version 2.11.0-xml-schema-1.1-beta or later. To check an XCSP file problem.xml against the schema file schema.xsd, add xercesSamples.jar, xercesImpl.jar and org.eclipse.wst.xml.xpath2.processor_1.1.0.jar to your classpath, and run the following command:
XCSP Format FRODO’s format for extensional soft constraints is based on the official XCSP 2.1 format for weighted tuples , in abridged notation, with the following two modifications:
Figure 2 already provided a small example of an extensional soft constraint. More generally, such a constraint is specified as follows (for a ternary constraint):
where relationName must be the unique name of a relation, specified as follows:
where tuples are separated by a pipe character |, and each tuple has the format utilityOrCost : valueForVar1 valueForVar2 ... valueForVarN. The order of tuples does not matter. The first part of the tuple specifying the utility/cost can be omitted if it is the same as for the previous tuple, such that the following is a valid, shorter representation of the same relation:
The attribute defaultCost specifies the utility/cost assigned to tuples that are not explicitly represented; for instance, in the previous relation, all tuples in which the first variable equals 1 have utility/cost 0.
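To make the pieces concrete, a hypothetical ternary constraint and its relation, consistent with the description above, could look as follows (reconstructed for illustration; names and values are not from the original figures):

```xml
<constraint name="c0" arity="3" scope="X1 X2 X3" reference="relationName" />

<relation name="relationName" arity="3" nbTuples="4" semantics="soft" defaultCost="0">
  3 : 0 0 0 | 0 0 1 | 5 : 0 1 0 | 7 : 0 1 1
</relation>
```

Here the tuple 0 0 1 inherits the cost 3 from the preceding tuple (abridged notation), and every tuple whose first variable equals 1 is left unlisted, so it receives the defaultCost of 0.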
Java Class The class used to implement extensional soft constraints is the generic class Hypercube<V, U>, where V is the type of variable values, and U is the type of utility values (which, for most applications, can both be set to AddableInteger). A hypercube can be instantiated using one of its constructors, such as the following:
where infeasibleUtil must be set to ProblemInterface.getPlusInfUtility() (resp. getMinInfUtility) if the problem is a minimization (resp. maximization) problem, and utilities must be an array of size equal to the product of all variable domain sizes. The utility for each assignment to the variables can then be specified using the method setUtility(V assignment, U utility).
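The requirement that the utilities come in an array whose size is the product of the domain sizes suggests a row-major layout; the following self-contained sketch (not the actual Hypercube implementation, whose internals may differ) shows how an assignment maps to a flat index:

```java
public class FlatUtilities {

    /** Maps an assignment (one domain index per variable) to a position in
     *  the flat utility array, assuming row-major order over the domains. */
    static int flatIndex(int[] domainSizes, int[] assignment) {
        int index = 0;
        for (int i = 0; i < domainSizes.length; i++)
            index = index * domainSizes[i] + assignment[i];
        return index;
    }

    public static void main(String[] args) {
        int[] sizes = { 2, 3 };                  // |D1| = 2, |D2| = 3
        int[] utilities = { 4, 0, 1, 7, 2, 5 };  // 2 * 3 = 6 entries
        // Assignment (x1 = 1, x2 = 2) -> index 1*3 + 2 = 5 -> utility 5
        System.out.println(utilities[flatIndex(sizes, new int[] { 1, 2 })]); // prints: 5
    }
}
```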
Important note: if you want FRODO to count constraint checks (NCCCs), you should use the following constructor instead:
Extensional hard constraints are only supported by JaCoPxcspParser. They are specified using relations just like extensional soft constraints, except that they no longer mention costs/utilities, and the semantics are now either "supports" (all specified tuples are allowed, all others are disallowed) or "conflicts" (all specified tuples are disallowed, all others are allowed). For instance:
When the special parser XCSPparserVRP is used, FRODO also supports intensional, vehicle routing constraints, as described in . A vehicle routing constraint is specified as in Figure 19, for an example involving 3 customers and 4 vehicles. Notice that, contrary to extensional soft constraints (Section A.1) that are defined through the intermediary of <relation> elements to which they refer via the attribute reference, vehicle routing constraints are defined directly inside the <constraint> element, and the attribute reference must be set to "global:vehicle_routing". The Java class used to represent such constraints is VehicleRoutingSpace, in the package solutionSpaces.vehiclerouting.
Some of the customers may have uncertain locations, and the problem is then a StochDCOP . In this case, the xCoordinate and yCoordinate attributes actually define the center of an “uncertainty circle,” whose radius is defined by the value of the additional attribute uncertaintyRadius. A second additional attribute uncertaintyAngleVar gives the name of the (integer-valued) random variable corresponding to the uncertain position of the customer on this circle. For instance, in Figure 19, the last customer’s position is uncertain.
Intensional hard constraints (only supported by the JaCoPxcspParser) can be expressed using constraint elements, whose reference is the unique name of a predicate , which is defined in Figure 20. When referring to a predicate, a constraint must specify the values (constants or variables names) that should be assigned to the parameters of the predicate, as in Figure 21.
where a boolean expression is formally defined as follows:
Intensional soft constraints (only supported by the JaCoPxcspParser) can be expressed using constraint elements, whose reference is the unique name of a function , which is defined below. The syntax for an integer expression is the same as in Figure 20. The integer value returned by the integer expression corresponds to the cost to be minimized (or the utility to be maximized).
From the list of JaCoP global constraints, the JaCoPxcspParser currently only supports the all different, cumulative, diff2 and weighted sum constraints.
The XCSP syntax for the global all different constraint  is illustrated below on a ternary constraint.
where the list of parameters may also contain integer constants.
The format for the Cumulative global constraint is a slight variation over the XCSP format in  (the end variables are omitted, and the operator has been introduced). Below is an example of two tasks to be scheduled on a resource of capacity limit, where each task i starts at time step start_i, has a duration of duration_i, and requires height_i units of resource. All parameters limit, start_i, duration_i and height_i can be either variables or integers. The only two supported operators are <eq/> and <le/>.
The XCSP syntax used in FRODO for the global constraint diff2 is illustrated below, for three rectangles, where orig_x_i and orig_y_i are the variables for the x and y coordinates of the origin of the ith rectangle, and size_x_i and size_y_i are the variables for its sizes in the x and y dimensions.
The XCSP syntax for the global weighted sum constraint is as follows, for the example constraint X0 + 2X1 - 3X2 > 12 :
The format supports the following comparison operators: <eq/>, <ne/>, <ge/>, <gt/>, <le/>, and <lt/>.
Besides being compatible with the benchmark problem generators in DisCHOCO 2, FRODO also comes with its own rich suite of benchmark problem generators that can be used to evaluate the performance of various algorithms.
In a distributed graph coloring problem, each agent controls a single variable whose value corresponds to a color, which must be different from the respective colors of the agent’s neighbors in an underlying graph (, Section 2.2.1).
FRODO’s random graph coloring problem generator can be invoked using the following command (the optional input parameters are put in brackets):
Using the API, it is also possible to generate graph coloring problems in which the underlying graphs have a particular structure. This can be done by calling the method GraphColoring.generateProblem() whose first input is a Graph object, which can be generated using the RandGraphFactory. This factory supports acyclic, chordal, ring, and grid graphs.
In a meeting scheduling problem, each agent must take part in one or more meetings, and must agree on the time for these meetings with the respective other attendees.
FRODO’s random meeting scheduling problem generator can be invoked using the following command (the optional input parameters are put in brackets):
FRODO can also be used to produce completely random, binary-constrained, single-variable-per-agent, Max-DisCSP instances, using the following command:
Like for graph coloring (Section B.1), the method generateProblem() can also be called to produce Max-DisCSP instances based on graphs with a specific structure.
FRODO can take in random auction problem instances generated by the CATS generator  and formalize the winner determination problem as a DCOP (for auctions) or a DisCSP (for pure-satisfaction, resource allocation problems). This can be achieved using the following command (the optional input parameters are put in brackets):
In a Distributed Kidney Exchange Problem (DKEP) (, Section 4.2.4), each agent represents a patient/donor pair, where the patient is awaiting a kidney transplant, and the donor is a friend or relative who is willing to donate one kidney but is incompatible with the patient. The problem consists in finding directed cycles such as “donor A gives to a compatible patient B, whose paired donor B gives to a compatible patient C, whose paired donor C gives to donor A’s compatible paired patient in return.” Such problem instances can be generated using the following command (the optional input parameters are put in brackets):
The party game is a graphical, one-shot, strategic game in which each player must decide whether to attend a party, not knowing whether his liked or disliked acquaintances will also decide to attend. The problem of computing a Nash equilibrium to such a game can be formulated as a DisCSP (, Sections 2.2.6 and 3.7.4). FRODO can generate such random party games using the following command (the optional input parameters are put in brackets):
FRODO can also produce Distributed, Multiple-Depot Vehicle Routing Problem (DisMDVRP) instances , based on the Cordeau repository of MDVRP instances in . This can be achieved using the following command (the optional input parameters are put in brackets):
We would like to thank the following people who have contributed code to the FRODO platform (in chronological order):
 Bernabé Dorronsoro. The VRP Web. http://neo.lcc.uma.es/radi-aeb/WebVRP/, March 2007.
 Boi Faltings, Thomas Léauté, and Adrian Petcu. Privacy Guarantees through Distributed Constraint Satisfaction. In Proceedings of the 2008 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT’08), pages 350–358, Sydney, Australia, December 9–12 2008.
 Alessandro Farinelli, Alex Rogers, Adrian Petcu, and Nicholas R. Jennings. Decentralised coordination of low-power embedded devices using the max-sum algorithm. In Lin Padgham, David C. Parkes, Jörg P. Müller, and Simon Parsons, editors, Proceedings of the Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS’08), pages 639–646, Estoril, Portugal, May 12–16 2008.
 Amir Gershman, Amnon Meisels, and Roie Zivan. Asynchronous forward-bounding for distributed constraints optimization. In Gerhard Brewka, Silvia Coradeschi, Anna Perini, and Paolo Traverso, editors, Proceedings of the Seventeenth European Conference on Artificial Intelligence (ECAI’06), pages 103–107, Riva del Garda, Italy, August 29–September 1 2006. IOS Press.
 Amir Gershman, Roie Zivan, Tal Grinshpoun, Alon Grubshtein, and Amnon Meisels. Measuring distributed constraint optimization algorithms. In Proceedings of the AAMAS’08 Distributed Constraint Reasoning Workshop (DCR’08), pages 17–24, Estoril, Portugal, May 13 2008.
 Graphviz – Graph Visualization Software. http://www.graphviz.org/, 2011.
 Katsutoshi Hirayama and Makoto Yokoo. Distributed partial constraint satisfaction problem. In Gert Smolka, editor, Proceedings of the Third International Conference on Principles and Practice of Constraint Programming (CP’97), volume 1330, pages 222–236, Linz, Austria, Oct. 29–Nov. 1 1997.
 John Hunter. The matplotlib python module. http://matplotlib.org.
 JaCoP java constraint programming solver. http://jacop.osolpro.com/.
 The JDOM XML toolbox for Java. http://www.jdom.org/, 2009.
 Jean-Yves Le Boudec. Performance Evaluation of Computer and Communication Systems. EPFL Press, Lausanne, Switzerland, 2010. http://perfeval.epfl.ch.
 Thomas Léauté. Distributed Constraint Optimization: Privacy Guarantees and Stochastic Uncertainty. PhD thesis, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, November 11 2011.
 Thomas Léauté and Boi Faltings. Privacy-Preserving Multi-agent Constraint Satisfaction. In Proceedings of the 2009 IEEE International Conference on PrivAcy, Security, riSk And Trust (PASSAT’09), pages 17–25, Vancouver, British Columbia, August 29–31 2009. IEEE Computer Society Press.
 Thomas Léauté and Boi Faltings. E[DPOP]: Distributed Constraint Optimization under Stochastic Uncertainty using Collaborative Sampling. In Proceedings of the IJCAI’09 Distributed Constraint Reasoning Workshop (DCR’09), pages 87–101, Pasadena, California, USA, July 13 2009.
 Thomas Léauté and Boi Faltings. Coordinating Logistics Operations with Privacy Guarantees. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI’11), pages 2482–2487, Barcelona, Spain, July 16–22 2011. AAAI Press.
 Thomas Léauté and Boi Faltings. Distributed Constraint Optimization under Stochastic Uncertainty. In Proceedings of the Twenty-Fifth Conference on Artificial Intelligence (AAAI’11), pages 68–73, San Francisco, USA, August 7–11 2011.
 Thomas Léauté, Brammert Ottens, and Boi Faltings. Ensuring Privacy through Distributed Computation in Multiple-Depot Vehicle Routing Problems. In Proceedings of the ECAI’10 Workshop on Artificial Intelligence and Logistics (AILog’10), Lisbon, Portugal, August 17 2010.
 Kevin Leyton-Brown, Mark Pearson, and Yoav Shoham. Towards a universal test suite for combinatorial auction algorithms. In Anant Jhingran, Jeff MacKie Mason, and Doug Tygar, editors, Proceedings of the Second ACM Conference on Electronic commerce (EC’00), pages 66–76, Minneapolis, Minnesota, USA, October 17–20 2000. ACM Special Interest Group on Electronic Commerce (SIGEcom), ACM. http://www.cs.ubc.ca/~kevinlb/CATS.
 Rajiv T. Maheswaran, Jonathan P. Pearce, and Milind Tambe. Distributed algorithms for DCOP: A graphical-game-based approach. In David A. Bader and Ashfaq A. Khokhar, editors, Proceedings of the ISCA Seventeenth International Conference on Parallel and Distributed Computing Systems (ISCA PDCS’04), pages 432–439, San Francisco, California, USA, September 15–17 2004. ISCA.
 Rajiv T. Maheswaran, Milind Tambe, Emma Bowring, Jonathan P. Pearce, and Pradeep Varakantham. Taking DCOP to the real world: Efficient complete solutions for distributed multi-event scheduling. In Nicholas R. Jennings, Carles Sierra, Liz Sonenberg, and Milind Tambe, editors, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS’04), volume 1, pages 310–317, Columbia University, New York City, U.S.A., July 19–23 2004. ACM Special Interest Group on Artificial Intelligence (SIGART), IEEE Computer Society.
 Operations Research – Java Objects. http://sourceforge.net/projects/frodo2/files/3rd_party/.
 Organising Committee of the Third International Competition of CSP Solvers. XML Representation of Constraint Networks – Format XCSP 2.1, January 15 2008. http://www.cril.univ-artois.fr/~lecoutre/research/benchmarks/benchmarks.html.
 Brammert Ottens and Boi Faltings. Coordinating Agent Plans Through Distributed Constraint Optimization. In Proceedings of the ICAPS’08 Multiagent Planning Workshop (MASPLAN’08), Sydney, Australia, September 14 2008.
 Adrian Petcu and Boi Faltings. DPOP: A Scalable Method for Multiagent Constraint Optimization. In Leslie Pack Kaelbling and Alessandro Saffiotti, editors, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI’05), pages 266–271, Edinburgh, Scotland, July 31 – August 5 2005. Professional Book Center, Denver, USA.
 Adrian Petcu and Boi Faltings. S-DPOP: Superstabilizing, fault-containing multiagent combinatorial optimization. In Manuela M. Veloso and Subbarao Kambhampati, editors, Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI’05), pages 449–454, Pittsburgh, Pennsylvania, U.S.A, July 9–13 2005. AAAI Press / The MIT Press.
 Adrian Petcu and Boi Faltings. O-DPOP: An algorithm for open/distributed constraint optimization. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI’06), pages 703–708, Boston, Massachusetts, U.S.A., July 16–20 2006. AAAI Press.
 Adrian Petcu and Boi Faltings. MB-DPOP: A new memory-bounded algorithm for distributed optimization. In Manuela M. Veloso, editor, Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI’07), pages 1452–1457, Hyderabad, India, January 6–12 2007.
 Marius-Călin Silaghi. Hiding absence of solution for a distributed constraint satisfaction problem (poster). In Proceedings of the Eighteenth International Florida Artificial Intelligence Research Society Conference (FLAIRS’05), pages 854–855, Clearwater Beach, FL, USA, May 15–17 2005. AAAI Press.
 Marius-Călin Silaghi and Debasis Mitra. Distributed constraint satisfaction and optimization with privacy enforcement. In Proceedings of the 2004 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT’04), pages 531–535, Beijing, China, September 20–24 2004. IEEE Computer Society Press.
 Evan A. Sultanik, Robert N. Lass, and William C. Regli. DCOPolis: A framework for simulating and deploying distributed constraint optimization algorithms. In Jonathan P. Pearce, editor, Proceedings of the Ninth International Workshop on Distributed Constraint Reasoning (CP-DCR’07), Providence, RI, USA, September 23 2007.
 Mohamed Wahbi, Redouane Ezzahir, Christian Bessiere, and El Houssine Bouyakhf. DisChoco 2: A platform for distributed constraint reasoning. In Proceedings of the Thirteenth International Workshop on Distributed Constraint Reasoning (DCR’11), pages 112–121, Barcelona, Spain, July 17 2011. http://www.lirmm.fr/coconut/dischoco/.
 Richard J. Wallace and Eugene C. Freuder. Conjunctive width heuristics for maximal constraint satisfaction. In Proceedings of the Eleventh National Conference on Artificial Intelligence (AAAI’93), pages 762–768, Washington, DC, USA, July 11–15 1993. AAAI Press / The MIT Press.
 The Apache Xerces project. http://xerces.apache.org.
 Weixiong Zhang, Guandong Wang, Zhao Xing, and Lars Wittenburg. Distributed stochastic search and distributed breakout: properties, comparison and applications to constraint optimization problems in sensor networks. Artificial Intelligence, 161(1–2):55–87, Jan. 2005.