Analysis of On-line Social Networks Represented as Graphs – Extraction of an Approximation of Community Structure Using Sampling
Publication Type: Conference Paper
Source: MDAI 2012, Springer-Verlag, Volume 7647, Girona, Catalunya, p. 149-160 (2012)
Abstract:
In this paper we benchmark two distinct algorithms for extracting community structure from online social networks (OSNs) represented as graphs, considering how we can representatively sample an OSN graph while maintaining its community structure. We also evaluate each extraction algorithm's optimum value (modularity) for the number of communities using five well-known benchmarking datasets, two of which represent real OSN data, and we consider the assignment of filtering and sampling criteria for each dataset. We find that the extraction algorithms work well for finding the major communities in both the original and the sampled datasets. The quality of the results is measured using an NMI (Normalized Mutual Information) type metric to identify the degree of correspondence between the communities generated from the original data and those generated from the sampled data. We find that a representative sampling is possible which preserves the key community structures of an OSN graph, significantly reducing computational cost and making the resulting graph structure easier to visualize. Finally, comparing the communities generated by each algorithm, we identify their degree of correspondence.
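As a minimal sketch of the comparison described in this abstract, the snippet below computes Normalized Mutual Information between two community assignments (original graph vs. sampled graph). The paper refers to an "NMI type" metric without the exact variant being specified here, so scikit-learn's standard NMI and the label vectors are illustrative assumptions only.

```python
# Minimal sketch: comparing community assignments of the nodes shared by the
# original and the sampled graph, using Normalized Mutual Information (NMI).
# The exact NMI variant used in the paper is not reproduced; scikit-learn's
# standard implementation is used purely for illustration.
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical community labels, one per node present in both graphs.
communities_original = [0, 0, 0, 1, 1, 2, 2, 2]
communities_sampled  = [0, 0, 1, 1, 1, 2, 2, 0]

nmi = normalized_mutual_info_score(communities_original, communities_sampled)
print(f"NMI between original and sampled communities: {nmi:.3f}")
```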
Dynamic Sanctioning for Robust and Cost-Efficient Norm Compliance
Publication Type: Conference Paper
Source: Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI/AAAI, Barcelona, p. 414-419 (2011)
ISBN: 978-1-57735-516-8
Abstract:
As explained by Axelrod in his seminal work An Evolutionary Approach to Norms, punishment is a key mechanism for achieving the necessary social control and imposing social norms in a self-regulated society. In this paper, we distinguish between two enforcement mechanisms, i.e. punishment and sanction, focusing on the specific ways in which they favor the emergence and maintenance of cooperation. The key research question is to find more stable and cheaper mechanisms for norm compliance in hybrid social environments (populated by humans and computational agents). To this end, we have developed a normative agent able to punish and sanction defectors and to dynamically choose the right amount of punishment and sanction to impose on them (Dynamic Adaptation Heuristic). The results obtained through agent-based simulation show that sanction is more effective and less costly than punishment in the achievement and maintenance of cooperation, and that it makes the population more resilient to sudden changes than mere punishment alone.
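Purely as an illustrative sketch of what a dynamic adaptation of enforcement strength could look like, the snippet below raises the sanction while defection persists and lowers it once cooperation is restored. The paper's actual Dynamic Adaptation Heuristic is not reproduced here; the class name, step size, bounds and target rate are all hypothetical.

```python
# Illustrative sketch only: one plausible way to adapt the sanction amount to
# the observed defection rate. Step sizes, bounds and the target rate are
# hypothetical and not taken from the paper.
class AdaptiveSanctioner:
    def __init__(self, initial=1.0, step=0.5, minimum=0.0, maximum=10.0):
        self.amount = initial      # current sanction imposed on defectors
        self.step = step           # adjustment speed
        self.minimum = minimum
        self.maximum = maximum

    def update(self, defection_rate, target=0.1):
        # Raise the sanction while defection stays above the target level,
        # lower it (saving enforcement cost) once cooperation is restored.
        if defection_rate > target:
            self.amount = min(self.maximum, self.amount + self.step)
        else:
            self.amount = max(self.minimum, self.amount - self.step)
        return self.amount

sanctioner = AdaptiveSanctioner()
for rate in [0.4, 0.3, 0.15, 0.05, 0.02]:   # hypothetical observed defection rates
    print(sanctioner.update(rate))
```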
A Comparison of Two Different Types of Online Social Network from a Data Privacy Perspective
Publication Type: Book Chapter
Source: Lecture Notes in Artificial Intelligence, Springer, Volume 6820, p. 223-234 (2011)
Keywords: Social network; data privacy; descriptive statistics; risk of disclosure; information loss
Abstract:
We consider two distinct types of online social network: the first is made up of a log of wall posts by users on Facebook, and the second consists of a corpus of emails sent and received in a corporate environment (Enron). We calculate the statistics that describe the topology of each network represented as a graph. We then calculate the information loss and risk of disclosure for different percentages of perturbation of each dataset, where perturbation is achieved by randomly adding links to the nodes. We find that the general tendency of information loss is similar, although Facebook is affected to a greater extent. For risk of disclosure, both datasets also follow a similar trend, except for the average path length statistic. We find that the differences are due to the different distributions of the derived factors, as well as the type of perturbation used and its parameterization. These results can be useful for choosing and tuning anonymization methods for different graph datasets.
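The following is a small sketch of the kind of perturbation the abstract describes: randomly adding links to a graph and observing the effect on a topological statistic (average path length is used here as one example). The generated graph and the 10% perturbation level are hypothetical stand-ins, not the paper's datasets or parameters.

```python
# Minimal sketch: perturb a graph by randomly adding edges and compare a
# topological statistic before and after. The graph model and perturbation
# percentage are hypothetical choices for illustration.
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=3, seed=42)   # stand-in for an OSN graph
print("original avg path length:", nx.average_shortest_path_length(G))

perturbation = 0.10                                  # add edges equal to 10% of |E|
n_new = int(perturbation * G.number_of_edges())
nodes = list(G.nodes())
added = 0
while added < n_new:
    u, v = random.sample(nodes, 2)
    if not G.has_edge(u, v):
        G.add_edge(u, v)
        added += 1

print("perturbed avg path length:", nx.average_shortest_path_length(G))
```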
Social Instruments for Convention Emergence
Publication Type: Conference Paper
Source: 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Volume 3, Taipei, Taiwan, p. 1161-1162 (2011)
ISBN: 978-0-9826571-5-7
Abstract:
In this paper we present the notion of Social Instruments as a set of mechanisms that facilitate the emergence of norms from repeated interactions between members of a society. Specifically, we focus on two social instruments: rewiring and observation. Our main goal is to provide agents with tools that allow them to leverage their social network of interactions when effectively addressing coordination and learning problems, paying special attention to dissolving metastable subconventions. Finally, we present a more sophisticated social instrument (observation + rewiring) for robust resolution of subconventions, which works by dissolving Self-Reinforcing Substructures (SRS) in the social network.
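As an illustration of the rewiring instrument mentioned above, the sketch below lets an agent drop a link to a neighbour holding a conflicting convention and reconnect to a random non-neighbour. The trigger condition, target selection and graph model are simplifying assumptions, not the paper's exact rules.

```python
# Illustrative sketch of "rewiring": cut a link to a neighbour with a
# conflicting convention and reconnect to a random non-neighbour.
import random
import networkx as nx

def rewire_if_conflicting(G, convention, agent):
    conflicting = [n for n in G.neighbors(agent) if convention[n] != convention[agent]]
    candidates = [n for n in G.nodes() if n != agent and not G.has_edge(agent, n)]
    if conflicting and candidates:
        old, new = random.choice(conflicting), random.choice(candidates)
        G.remove_edge(agent, old)
        G.add_edge(agent, new)
        return old, new
    return None

# Toy society: small-world interaction network with two competing conventions.
G = nx.watts_strogatz_graph(n=50, k=4, p=0.1, seed=1)
convention = {n: random.choice(["A", "B"]) for n in G.nodes()}
print(rewire_if_conflicting(G, convention, agent=0))
```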
Topology and memory effect on convention emergence.
Publication Type: Conference Paper
Source: IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT 2009), Milan, Italy (2009)
Abstract:
Social conventions are useful self-sustaining protocols for groups to coordinate behavior without a centralized entity enforcing coordination. We perform an in-depth study of different network structures, to compare and evaluate the effects of different network topologies on the success and rate of emergence of social conventions. While others have investigated memory for learning algorithms, the effects of memory or history of past activities on the reward received by interacting agents have not been adequately investigated. We propose a reward metric that takes into consideration the past action choices of the interacting agents. The research question to be answered is what effect the history-based reward function and the learning approach have on convergence time to conventions in different topologies. We experimentally investigate the effects of history size, agent population size and neighborhood size on the emergence of social conventions.
Effects of interaction history and network topology on rate of convention emergence.
Publication Type: Conference Paper
Source: 3rd International Workshop on Emergent Intelligence on Networked Agents (WEIN’09), p. 13-19 (2009)
Abstract:
Social conventions are useful self-sustaining protocols for groups to coordinate behavior without a centralized entity enforcing coordination. The emergence of such conventions in different multiagent network topologies has been investigated by several researchers. We perform an exhaustive study of different network structures, since different topologies can be expected to affect the emergence in different ways; hence the main research question in this work is to compare and study the effects of different topologies on the emergence of social conventions. While others have investigated memory for learning algorithms, the effects of memory on the reward have not been investigated thoroughly. We propose a reward metric that is derived directly from the interaction history of the agents. The reward metric follows a majority rule, so that the emerging convention becomes self-propagating in the society: agents are proportionally rewarded based upon their conformity to the majority action when interacting with another agent. A further research question to be answered is what effect the history-based reward function has on convergence time in different topologies. We also investigate the effects of history size, agent population size and neighborhood size through agent-based experimentation.
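As a minimal sketch of the majority-rule, history-based reward described above, the snippet below rewards an agent in proportion to how often its chosen action matches the actions in its partner's recent history. The window size, reward scale and action labels are hypothetical choices for illustration, not the paper's parameters.

```python
# Minimal sketch of a history-based, majority-rule reward: the reward is the
# fraction of the partner's remembered actions that match the agent's action.
from collections import Counter, deque

def history_reward(my_action, partner_history):
    if not partner_history:
        return 0.0
    counts = Counter(partner_history)
    return counts[my_action] / len(partner_history)

history = deque(["A", "A", "B", "A", "B"], maxlen=5)   # partner's last 5 actions
print(history_reward("A", history))   # 0.6
print(history_reward("B", history))   # 0.4
```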
Dynamics in the Normative Group Recognition Process.
Publication Type: Conference Paper
Source: Proceedings of IEEE Congress on Evolutionary Computation (IEEE CEC 2009), p. 757-764 (2009)
Abstract:
This paper examines the decentralized recognition of groups within a multiagent normative society in dynamic environments. In our case, a social group is defined based on the set of social norms used by its members. These social norms regulate interactions under certain situations, and situations are determined by the environmental conditions. Environmental conditions might change unexpectedly, and so should the notion of social group for each agent. Consequently, agents need mechanisms to dynamically adjust their notion of group and, accordingly, the set of agents with whom they are socially related.
In this work we analyze how different algorithms (whitelisting, blacklisting, labelling) that allow agents to recognize others as members of a certain social group behave in these dynamic environments. Moreover, we compare two approaches that regulate the adaptation of the relevance of norms and the notion of group: unlimited normative memory and limited memory. Simulation results are shown, confirming that the limited-memory approach reacts better to environmental changes.
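To illustrate the contrast between unlimited and limited normative memory, the sketch below keeps only a bounded window of recent observations per agent when deciding group membership, so that evidence gathered before an environmental change is eventually forgotten. The memory size and membership threshold are hypothetical, not the paper's settings.

```python
# Illustrative sketch: bounded vs. unbounded memory for deciding whether
# another agent follows the same norms (i.e. belongs to the same group).
from collections import deque

class GroupMemory:
    def __init__(self, limit=None):
        # limit=None keeps every observation (unlimited memory);
        # otherwise only the last `limit` observations per agent are kept.
        self.limit = limit
        self.observations = {}

    def observe(self, agent, followed_norm: bool):
        self.observations.setdefault(agent, deque(maxlen=self.limit)).append(followed_norm)

    def is_group_member(self, agent, threshold=0.5):
        obs = self.observations.get(agent)
        if not obs:
            return False
        return sum(obs) / len(obs) >= threshold

limited = GroupMemory(limit=5)
for norm_ok in [True] * 20 + [False] * 5:   # the other agent changes behaviour at the end
    limited.observe("agent-42", norm_ok)
print(limited.is_group_member("agent-42"))  # False: only recent behaviour counts
```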
Group Recognition through Social Norms.
Publication Type: Conference Paper
Source: 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2009), Budapest, p. 1347-1348 (2009)
Abstract:
This paper examines the decentralized recognition of groups within a multiagent normative society. In this work we explore different mechanisms that allow agents to recognize others as members of a certain social group. Taking as the baseline the basic mechanism in which agents interact with other agents without considering previous interactions and with no communication, three new algorithms have been developed and tested to improve on its efficiency. These algorithms are: (1) whitelisting, (2) blacklisting, and (3) labelling. Moreover, the definition of group is reinterpreted in order to make it more dynamic and flexible with respect to the environment where the agents are located. Analysis of simulation results confirms the effectiveness of this dynamic member-evaluation function.
Towards the Group Formation through Social Norms.
Publication Type: Conference Paper
Source: Sixth European Workshop on Multi-Agent Systems (EUMAS08) (2008)
Abstract:
This paper examines the decentralized formation of groups within a multiagent normative society. In our case, a group is defined based on the set of social norms used by its members: all the agents using the same set of norms belong to the same social group. In this paper we explore different mechanisms that allow agents to recognize others as members of a certain social group. Taking as the baseline the basic mechanism in which agents interact with other agents without considering previous interactions and with no communication, three new algorithms have been developed and tested to improve on its efficiency. These algorithms are: (1) the whitelisting algorithm, which works as a recommender of trusted neighbours; (2) the blacklisting algorithm, which is based on defaming, within a certain social group, the agents that are not related to it; and (3) the labelling algorithm, which publishes information about the interactions with different agents, allowing the rest to access that information. Simulation results are shown, confirming that these algorithms improve the efficiency of the basic one. Finally, we present and discuss some of the weak points of the algorithms presented, as well as future improvements.
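The following minimal sketch captures the three recognition aids named in this abstract in the simplest possible form: a whitelist of trusted neighbours that can be recommended, a blacklist of agents reported as outside the group, and a public label board of interaction outcomes. How entries are added, trusted or expired is a simplifying assumption, not the paper's protocol.

```python
# Minimal sketch of the whitelisting, blacklisting and labelling aids.
class GroupRecognitionAids:
    def __init__(self):
        self.whitelist = set()   # trusted neighbours, recommendable to peers
        self.blacklist = set()   # agents reported as not belonging to the group
        self.labels = {}         # public board: agent -> reported interaction outcome

    def record_interaction(self, agent, same_norms: bool):
        if same_norms:
            self.whitelist.add(agent)
        else:
            self.blacklist.add(agent)
        self.labels[agent] = same_norms      # published so other agents can read it

    def recommend(self):
        # whitelisting: share trusted neighbours with other group members
        return sorted(self.whitelist)

aids = GroupRecognitionAids()
aids.record_interaction("a1", True)
aids.record_interaction("a2", False)
print(aids.recommend(), aids.blacklist, aids.labels)
```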
A multiagent network for peer norm enforcement
Publication Type: Journal Article
Source: Autonomous Agents and Multi-Agent Systems, Volume 21, p. 397-424 (2010)
Keywords: Multiagent systems; norms; enforcement; social network; ostracism
Abstract:
In a multiagent system where norms are used to regulate the actions agents ought to execute, some agents may decide not to abide by the norms if this can benefit them. Norm enforcement mechanisms are designed to counteract these benefits and thus the motives for not abiding by the norms. In this work we propose a distributed mechanism through which agents in the multiagent system that do not abide by the norms can be ostracised by their peers. An ostracised agent cannot interact anymore and loses all benefits from future interactions. We describe a model for multiagent systems structured as networks of agents, and a behavioural model for the agents in such systems. Furthermore, we provide analytical results showing that there exists an upper bound on the number of potential norm violations when all the agents exhibit certain behaviours. We also provide experimental results showing that both stricter enforcement behaviours and a larger percentage of agents exhibiting these behaviours reduce the number of norm violations, and that the network topology influences the number of norm violations. These experiments have been executed under varying scenarios with different values for the number of agents, percentage of enforcers, percentage of violators, network topology, and agent behaviours. Finally, we give examples of applications where the enforcement techniques we provide could be used.
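As an illustrative sketch of peer ostracism on a networked society, the snippet below has enforcer agents cut their links to detected violators; a violator left with no links can no longer interact. Perfect detection and the toy cycle network are simplifying assumptions, not the behavioural model of the paper.

```python
# Illustrative sketch: enforcers remove their links to norm violators; an agent
# with no remaining links is effectively ostracised from the network.
import networkx as nx

def ostracise(G, enforcers, violators):
    for e in enforcers:
        for v in violators:
            if G.has_edge(e, v):
                G.remove_edge(e, v)
    # agents left with no links cannot interact anymore
    return [v for v in violators if G.degree(v) == 0]

G = nx.cycle_graph(6)                                # toy interaction network
ostracised = ostracise(G, enforcers={0, 2}, violators={1})
print(ostracised)                                    # [1]: node 1 lost both its links
```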
