Removing Redundant Messages in N-ary BnB-ADOPT
Publication Type: Journal Article
Source: Journal of Artificial Intelligence Research (In Press)
Keywords: DCOP; BnB-ADOPT
Abstract:
This note considers how to modify BnB-ADOPT (Yeoh, Felner, & Koenig, 2010), a well-known algorithm for optimally solving distributed constraint optimization problems, with a double aim: (i) to avoid sending most of the redundant messages and (ii) to handle cost functions of any arity. Some of the messages exchanged by BnB-ADOPT turn out to be redundant. Removing most of these redundant messages substantially increases communication efficiency, in most cases dividing the number of messages by 3 or more, while keeping the other measures almost unchanged and preserving the termination and optimality properties. On the other hand, handling n-ary cost functions was addressed in the original work, but the presence of thresholds makes their practical usage more complex. Both issues, removing most of the redundant messages and efficiently handling n-ary cost functions, can be combined, producing the new version BnB-ADOPT+. Experimentally, we show the benefits of this version over the original one.
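To illustrate the general idea of suppressing redundant messages, here is a minimal sketch in Python: an agent remembers the last payload sent to each neighbor and skips resends of identical content. This is only an illustration under assumed names (FilteringAgent, send_if_new, a comm object with a send method); the actual redundancy conditions used by BnB-ADOPT+ are more refined.

# Minimal sketch: skip messages identical to the last one sent to a
# neighbor. Illustrative only; not BnB-ADOPT+'s exact conditions.
class FilteringAgent:
    def __init__(self, comm):
        self.comm = comm      # assumed communication layer with send()
        self.last_sent = {}   # neighbor -> last payload sent

    def send_if_new(self, neighbor, payload):
        if self.last_sent.get(neighbor) == payload:
            return  # redundant: receiver already has this information
        self.last_sent[neighbor] = payload
        self.comm.send(neighbor, payload)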
Including Soft Global Constraints in DCOPs
Publication Type: Conference Paper
Source: 18th International Conference on Principles and Practice of Constraint Programming (CP 2012), Québec, Canada (2012)
Abstract:
In the centralized context, global constraints have been essential for the advancement of constraint reasoning. In this paper we propose to include soft global constraints in distributed constraint optimization problems (DCOPs). Looking for efficiency, we study possible decompositions of global constraints, including the use of extra variables. We extend the distributed search algorithm BnB-ADOPT$^+$ to support these representations of global constraints. In addition, we explore the relation of global constraints with soft local consistency in DCOPs, in particular for the generalized soft arc consistency (GAC) level. We include specific propagators for the \emph{soft-all-different} and the \emph{soft-at-most} constraint and measure their impact in the solving process, providing empirical results on several benchmarks.
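As a concrete example of the kind of soft global constraint considered here, a common decomposition-based violation measure for soft-all-different counts the pairs of variables sharing a value. A minimal Python sketch (assuming assignments as plain dicts) follows.

from itertools import combinations

# Decomposition-based violation measure for soft-all-different:
# one unit of cost per pair of variables taking the same value.
def soft_all_different_cost(assignment):
    values = list(assignment.values())
    return sum(1 for a, b in combinations(values, 2) if a == b)

# Example: x1 = x2 = 1 and x3 = 2 gives one violating pair, cost 1.
assert soft_all_different_cost({'x1': 1, 'x2': 1, 'x3': 2}) == 1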
A Novel Way to Connect BnB-ADOPT+ with Soft AC
Publication Type: Conference Paper
Source: 20th European Conference on Artificial Intelligence (ECAI-12), Montpellier, France, p.903-904 (2012)
Abstract:
Combining BnB-ADOPT$^+$ with the AC and FDAC levels of soft arc consistency (SAC) improves efficiency for optimal DCOP solving. However, achieving the higher consistency level EDAC seems difficult in a distributed context, especially considering privacy. As an alternative, we propose DAC by token passing. Agents receiving a token ask their neighbors for cost extensions. When deletions or $C_\phi$ increments occur, the token is passed to neighbor agents. This strategy turns out to be more efficient than FDAC when combined with BnB-ADOPT$^+$, improving communication and especially computation.
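A rough sketch of the token-passing loop just described, with all agent methods (request_extension, enforce_local_dac, send_token) assumed for illustration; the actual protocol in the paper is more detailed.

# Rough sketch of DAC by token passing; agent methods are assumed.
def on_token_received(agent):
    # Ask neighbors for cost extensions toward this agent's costs.
    for nb in agent.neighbors:
        agent.request_extension(nb)
    # Enforcing local consistency may delete values or increase the
    # global lower bound C_phi.
    deletions, c_phi_increased = agent.enforce_local_dac()
    # Pass the token on only when something changed, since only then
    # can further pruning be triggered at other agents.
    if deletions or c_phi_increased:
        for nb in agent.neighbors:
            agent.send_token(nb)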
Filtering Decomposable Global Cost Functions
Publication Type: Conference Paper
Source: 26th AAAI Conference on Artificial Intelligence (AAAI 2012), Toronto, Canada (2012)
Abstract:
As Lee and Leung (2009; 2010) have shown, weighted constraint satisfaction problems can benefit from the introduction of global cost functions, leading to a new Cost Function Programming paradigm. In this paper, we explore the possibility of decomposing global cost functions in such a way that enforcing soft local consistencies on the decomposition offers guarantees on the level of consistency enforced on the original global cost function. We show that directional arc consistency and virtual arc consistency offer such guarantees. We conclude with experiments on decomposable cost functions showing that decompositions can be very useful for easily integrating efficient global cost functions into solvers.
Soft Global Constraints in Distributed Constraint Optimization
Publication Type: Conference Paper
Source: AAMAS 2012 workshop: International Workshop on Optimisation in Multi-Agent Systems, Valencia, Spain, p.43-50 (2012)
Abstract:
In the centralized context, global constraints have been essential for the advancement of constraint reasoning. In this paper we propose to include soft global constraints in distributed constraint optimization problems (DCOPs), using three representations: direct, binary and nested. We extend the distributed search algorithm BnB-ADOPT$^+$ to support these three representations of global constraints. In addition, we explore the relation of global constraints with soft local consistency in DCOPs, in particular for the generalized soft arc consistency (GAC) level. We include specific propagators for the \emph{soft-all-different} and the \emph{soft-at-most} constraint and measure their impact in the solving process, providing empirical results on several benchmarks.
Reward-based region optimal quality guarantees
Multi-Agent Coordination: DCOPs and Beyond
Publication Type: Conference Paper
Source: Twenty-Second International Joint Conference on Artificial Intelligence, AAAI Press, Volume 3, Barcelona, p.2838-2839 (2011)
ISBN: 978-1-57735-515-1
URL: http://ijcai.org/papers11/contents.php
Abstract:
Distributed constraint optimization problems (DCOPs) are a model for representing multi-agent systems in which agents cooperate to optimize a global objective. The DCOP model has two main advantages: it can represent a wide range of problem domains, and it supports the development of generic algorithms to solve them. Firstly, this paper presents some advances in both complete and approximate DCOP algorithms. Secondly, it explains that the DCOP model makes a number of unrealistic assumptions that severely limit its range of application. Finally, it offers hints on how to tackle such limitations.
Improving function filtering for computationally demanding DCOPs
Publication Type: Conference Paper
Source: Workshop on Distributed Constraint Reasoning at IJCAI 2011, Barcelona, p.99-111 (2011)
Abstract:
In this paper we focus on solving DCOPs in computationally demanding scenarios. GDL solves DCOPs optimally, but requires exponentially large cost functions, making it impractical in such settings. Function filtering is a technique that reduces the size of cost functions. We improve the effectiveness of function filtering to reduce the amount of resources required to optimally solve DCOPs. As a result, we enlarge the range of problems solvable by algorithms employing function filtering.
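For concreteness, function filtering can be sketched as follows: given a cost function g (mapping tuples to exact costs), a lower bound h on the cost of completing each tuple, and a known upper bound ub on the optimum, every tuple whose optimistic completion already exceeds ub is dropped. The dict-based representation below is an assumption for illustration.

# Sketch of function filtering over dict-based cost functions:
# drop tuples whose optimistic completion already exceeds ub.
def filter_function(g, h, ub):
    return {t: cost for t, cost in g.items() if cost + h.get(t, 0) <= ub}

# Example: with ub = 10, a tuple of cost 7 whose completion costs at
# least 4 is dropped (7 + 4 > 10), while 3 + 2 <= 10 is kept.
g = {('a',): 7, ('b',): 3}
h = {('a',): 4, ('b',): 2}
assert filter_function(g, h, 10) == {('b',): 3}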
Improving DPOP with Function Filtering
Publication Type: Conference Paper
Source: International Conference on Autonomous Agents and Multiagent Systems (AAMAS-10), Toronto, Canada (2010)
Keywords: DCOP; function filtering
Abstract:
DPOP is an algorithm for distributed constraint optimization whose main drawback is the exponential size of some of its messages.
Recently, some algorithms for distributed cluster tree elimination have been proposed. They also suffer from exponential-size messages. However, by using cost function filtering, in practice these algorithms obtain substantial reductions in maximum message size and total communication cost. In this paper, we explain the relation between DPOP and these algorithms, and show how cost function filtering can be combined with DPOP. We present experimental evidence of the benefits of this new approach.
Cluster Tree Elimination for Distributed Constraint Optimization with Quality Guarantees
Publication Type: Journal Article
Source: Fundamenta Informaticae, IOS Press, Volume 102, Issue 3-4, p.263-286 (2010)
Keywords: DCOP; function filtering
Abstract:
Some distributed constraint optimization algorithms use a number of messages that is linear in the number of agents, but the messages are of exponential size. This is often the main limitation for their practical applicability. Here we present some distributed algorithms for these problems when agents are arranged in a tree. The exact algorithm, DCTE, computes the optimal solution but requires messages of size $\exp(s)$, where $s$ is a structural parameter. Its approximate version, DMCTE$(r)$, requires smaller messages of size $\exp(r)$, $r < s$, at the cost of computing approximate solutions. It provides a cost interval that bounds the error of the approximation. Using the technique of cost function filtering, we obtain DMCTEf$(r)$. Combining cost function filtering with bound reasoning, we propose DIMCTEf, an algorithm based on repeated executions of DMCTEf$(r)$ with increasing $r$. DIMCTEf uses messages from previous iterations to decrease the size of messages in the current iteration, which helps alleviate their large size. We provide evidence of the benefits of our approach on two benchmarks.
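One way to shrink a table of size $\exp(s)$ to at most $\exp(r)$ entries, in the spirit of DMCTE$(r)$'s approximation, is to minimize out variables, which yields a lower bound on the original function. The Python sketch below (dict-based tables keyed by value tuples ordered as in scope) is illustrative, not the paper's exact procedure.

# Minimal sketch: lower-bound a cost table by minimizing out the
# variables not kept, shrinking it from exp(s) to at most exp(r)
# entries. Tables map value tuples (ordered by scope) to costs.
def project_min(f, scope, kept_vars):
    keep_idx = [i for i, v in enumerate(scope) if v in kept_vars]
    g = {}
    for full_tuple, cost in f.items():
        key = tuple(full_tuple[i] for i in keep_idx)
        g[key] = min(g.get(key, float('inf')), cost)
    return g  # for every completion, g never overestimates f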
