Current computational models for the emergence of conventions assume that there is no uncertainty in the information exchanged between agents. However, in more realistic multiagent systems (MAS) uncertainty does exist, e.g., due to lies, faulty operation, or communication over noisy channels. In such settings, conventions may therefore fail to emerge. In this work we propose the use of self-tuning capabilities to increase the robustness of an emergence mechanism by allowing agents to dynamically protect themselves against unreliable information.