Engineers have long dealt with massive amounts of data accumulated over decades of fundamental experiments and field measurements, distilled into cleverly organized charts, tables and heuristic laws. In the last few decades, our capability to generate data has increased even further with developments in (i) digital measurement techniques, including sensing technologies, (ii) computational power, (iii) faster, easier and cheaper data transfer and storage, and (iv) post-processing tools and algorithms. On the other hand, the problems that need to be addressed today, such as food-water-energy security, pandemics and diseases, or global warming, are massive and at a completely different scale. More drastically, we have comparatively little time to find sustainable solutions. Therefore, we need a paradigm shift in how we interpret the data we collect and solve our problems, one that can speed up our hypothesis-test cycle. In this talk, we will visit some case studies relevant to the energy problem and discuss how the expertise of AI specialists can tip the scales in our favour.
Cihan is a junior research group leader at KIT, in the Multiphase Flow & Combustion group of the Institute of Thermal Turbo Machinery. With his group, he works on the design and optimization of energy-intensive processes. He is also a PI at the Graduate School Computational and Data Science and the KIT Emerging Field of Health Technologies.
Over the last decade, research on autonomous vehicles (AVs) has made revolutionary progress, bringing hope of safer, more convenient, and more efficient means of transportation. Most significantly, the advance of artificial intelligence (AI), especially machine learning, allows a self-driving car to learn and adapt to complex road situations from millions of accumulated driving hours, far more than any experienced human driver can reach. However, autonomous vehicles on roads also introduce new challenges to traffic management, especially when we allow them to travel mixed with human-driven vehicles.
New theories for a better understanding of the new era of transportation, and new technologies for smart roadside infrastructure and intelligent traffic control, are crucial for the development and deployment of autonomous vehicles. This presentation will discuss some of these challenges, especially the social aspects of autonomous driving, including interaction between autonomous vehicles and roadside infrastructure, mechanisms of traffic management, the price of anarchy in road networks, and automated negotiation between vehicles.
Dongmo Zhang is an Associate Professor in Computer Science and Associate Dean (Graduate Studies) in the School of Computer, Data and Mathematical Sciences at Western Sydney University. He is a leading researcher in Artificial Intelligence, working in a wide range of areas, including multi-agent systems, strategic reasoning, automated negotiation, belief revision, reasoning about action, auctions, trading agent design, etc. He has published around 150 papers in international journals and conferences, including the top AI journals, such as AIJ, JAAMAS and JAIR, and the top AI conferences, such as IJCAI, AAAI and AAMAS. He has been an area chair, senior PC member or PC member for many top AI conferences, including IJCAI, AAAI, ECAI, PRICAI, AJCAI, AAMAS, KR, etc. He and his research team have also received several international awards, including championships of the Trading Agent Competition and best paper awards.
The connection between substructural logics and residuated lattices is one of the most relevant results of algebraic logic. Indeed, it establishes a framework where different systems, or equivalently, classes of structures, can be both compared and studied uniformly. Among the best-known connections in this framework surely stands Mundici’s theorem, which establishes a categorical equivalence between the algebraic category of MV-algebras and that of lattice-ordered abelian groups (abelian ℓ-groups in what follows) with strong order unit (an archimedean element with respect to the lattice order), with unit-preserving homomorphisms. This equivalence, connecting the equivalent algebraic semantics of infinite-valued Łukasiewicz logic (i.e., MV-algebras) with ordered groups, has been deeply investigated and also extended to more general structures.
Alternative algebraic approaches to Mundici’s functor have been proposed by other authors. In the present contribution we re-elaborate Rump’s work, which is inspired by Bosbach’s ideas and focuses on structures with only one implication and a constant (whereas Bosbach’s cone algebras have two implications). The key idea is to characterize which structures in this reduced signature embed in an ℓ-group. We find conditions that are different from (albeit equivalent to) the ones found by Rump, and moreover we extend some of Rump’s constructions to categorical equivalences of the algebraic categories involved.
Valeria Giustarini is a Master’s student at the Department of Information Engineering and Mathematics, University of Siena.
This is a specialized seminar organized by the Logic department. If you want to participate in this seminar, please contact Tommaso Flaminio <email@example.com>
The seminar has two parts. The first will be from 10:00 to 12:00 and the second from 14:00 to 16:00.
As artificial agents become increasingly embedded in our society, we must ensure that they align with our human values, both at the level of individual interactions and at the level of system governance. However, agents must first be able to infer our values, i.e., understand how we prioritize values in different situations, both as individuals and as a society. In this talk we explore how artificial agents can infer our values while helping us reason about them. How can artificial agents understand our deepest motivations, when we are often not even aware of them?
Enrico Liscio is a PhD candidate in the Interactive Intelligence Group at TU Delft and part of the Hybrid Intelligence Centre. His research focuses on Natural Language Processing techniques to estimate human values from text. His work is part of a project to achieve high-quality online mass deliberation, creating AI-supported tools and environments aimed at transforming online conversations into more constructive and inclusive dialogues.
Properties of coercion resistance and voter verifiability refer to the existence of an appropriate strategy for the voter, the coercer, or both. One can try to specify such properties by formulae of a suitable strategic logic. However, automated verification of strategic properties is notoriously hard, and novel techniques are needed to overcome the complexity.
I will start with an overview of the relevant properties, show how they can be specified, and present some new results for model checking of strategic properties.
Wojtek Jamroga is an associate professor at the Polish Academy of Sciences and a research scientist at the University of Luxembourg. His research focuses on modeling, specification and verification of interaction between agents. He has coauthored over 100 refereed publications, and has been a Program Committee member of the most important conferences and workshops in AI and multi-agent systems. His research track record includes the Best Paper Award at the main conference on electronic voting (E-VOTE-ID) in 2016 and a Best Paper Nomination at the main multi-agent systems conference (AAMAS) in 2018.
This workshop is part of a series called «AIHUB Research Methodology Training», which provides training opportunities in research methodologies related to AI, robotics and data science for pre- and postdoctoral researchers in training. The first workshop will explore the use of qualitative research methods in human-robot interaction research in order to improve the design and understanding of the emerging socio-technical system.
Miquel Domènech is an Associate Professor of Social Psychology at the Universitat Autònoma de Barcelona. He is a founding member and coordinator of the Barcelona Science and Technology Studies Group (STS-b), a research group recognised by the Generalitat de Catalunya. His research interests lie in the field of science and technology studies, with a special emphasis on topics related to the use of technology in care processes and to citizen participation in technoscientific affairs.
Núria Vallès Peris is a sociologist and researcher in the Barcelona Science and Technology Studies Group (STS-b) at UAB, currently a postdoctoral researcher at the Intelligent Data Science and Artificial Intelligence Research Center (IDEAI) at UPC. Her approach is framed within science and technology studies and the philosophy of technology. Her research has focused on the ethical, political and social controversies around robotics and artificial intelligence, especially in the field of health and care. She is interested in the study of imaginaries, the design of technologies, and the processes of democratisation of technoscience.
The Doctoral Consortium will take place on July 19 and 20.
Game theory is the mathematical theory of strategic interactions between self-interested agents. Game theory provides a range of models for representing strategic interactions, and associated with these, a family of solution concepts, which attempt to characterise the rational outcomes of games.
Game theory is important to computer science for several reasons. First, interaction is a fundamental topic in computer science, and if it is assumed that system components are self-interested, then the models and solution concepts of game theory seem to provide an appropriate framework with which to model such systems. Second, the problem of computing with the solution concepts proposed by game theory raises important challenges for computer science, which test the boundaries of current algorithmic techniques.
This course aims to introduce the key concepts of game theory for a computer science audience, emphasising both the applicability of game theoretic concepts in a computational setting, and the role of computation in game theoretic problems.
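As a minimal illustration of the kind of computation such a course deals with, the sketch below enumerates the pure-strategy Nash equilibria of a two-player game by brute force; the function name and the Prisoner's Dilemma payoffs are illustrative assumptions, not part of the course material.

```python
from itertools import product

def pure_nash_equilibria(payoffs_a, payoffs_b):
    """Enumerate pure-strategy Nash equilibria of a two-player game,
    given payoff matrices for the row player A and column player B."""
    rows = range(len(payoffs_a))
    cols = range(len(payoffs_a[0]))
    equilibria = []
    for i, j in product(rows, cols):
        # A cannot gain by deviating from row i, given B plays column j ...
        best_for_a = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in rows)
        # ... and B cannot gain by deviating from column j, given A plays row i.
        best_for_b = all(payoffs_b[i][j] >= payoffs_b[i][l] for l in cols)
        if best_for_a and best_for_b:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
pd_a = [[-1, -3], [0, -2]]
pd_b = [[-1, 0], [-3, -2]]
print(pure_nash_equilibria(pd_a, pd_b))  # → [(1, 1)], mutual defection
```

Brute-force enumeration is exponential in the number of players, which hints at the algorithmic challenges the course description mentions.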
It is undeniable that more and more hard and complex procedures are being automated with the aid of artificial intelligence, leading to an era where artificial intelligence can be found in practically any system. As such, it is increasingly common for people to make decisions guided by the suggestions and recommendations of some intelligent system. As these systems support decisions in everyday life, they unavoidably make people curious about their functionality. Thus, the need for humans to understand the rationale behind AI decisions becomes imperative.
Adequate explanations for decisions made by an intelligent system do not just help describe how the system works; they also earn users’ trust. In this work we focus on a general methodology for justifying why certain teams are formed and others are not by a team formation algorithm (TFA). Specifically, we introduce an algorithm that wraps any existing TFA and builds justifications regarding the teams it forms, without modifying the TFA in any way. Our algorithm offers users a collection of commonly asked questions within a team formation scenario and builds justifications as contrastive explanations. We also report on an empirical evaluation of the quality of the explanations provided by our algorithm.
Athina Georgara is currently a PhD candidate at the Autonomous University of Barcelona, in collaboration with the Artificial Intelligence Research Institute, under the supervision of professors Carles Sierra and Juan A. Rodríguez-Aguilar. Her PhD studies are funded by the consulting company Enzyme Advising Group, where she is employed during her studies. Athina completed her undergraduate studies and acquired a diploma degree at the School of Electrical and Computer Engineering of the Technical University of Crete, where she also acquired an M.Sc. in Electronic and Computer Engineering under the supervision of associate professor Georgios Chalkiadakis.
Her research focuses on team formation and task allocation. She works towards automating the process of forming efficient teams and assigning them to tasks, combining findings from organisational psychology and the social sciences. Owing to her prior engagement with these fields, Athina is also interested in Algorithmic Game Theory and Machine Learning, along with their application in Multi-agent Systems.
Dealing with the challenges of an interconnected, globalised world requires handling plurality. This is no exception when considering value-aligned intelligent systems, since the values to align with should capture this plurality. So far, most of the literature on value alignment has considered only a single value system. Thus, in this talk I will discuss a method for the aggregation of value systems. By exploiting recent results in the social choice literature, we formalise our aggregation problem as an optimisation problem, more concretely, as an ℓp-regression problem. Moreover, our aggregation method allows us to consider a range of ethical principles, from utilitarian (maximum utility) to egalitarian (maximum fairness).
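The aggregation idea can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes each value system is a vector of per-value scores and minimises the dimension-wise ℓp cost sum_i |x - v_i|^p by ternary search (the cost is convex for p ≥ 1); the function `aggregate` and these modelling choices are assumptions for the sketch.

```python
def aggregate(value_systems, p=1.0, iters=200):
    """Aggregate value systems (lists of per-value scores) into a consensus
    vector minimising sum_i |x - v_i|**p independently in each dimension."""
    n_values = len(value_systems[0])
    consensus = []
    for d in range(n_values):
        col = [vs[d] for vs in value_systems]
        lo, hi = min(col), max(col)
        cost = lambda x: sum(abs(x - v) ** p for v in col)
        # Ternary search: valid because the lp cost is convex for p >= 1.
        for _ in range(iters):
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            if cost(m1) < cost(m2):
                hi = m2
            else:
                lo = m1
        consensus.append((lo + hi) / 2)
    return consensus

# p = 1 recovers the coordinate-wise median, p = 2 the mean; as p grows,
# the consensus moves towards minimising the largest disagreement,
# mirroring the utilitarian-to-egalitarian range mentioned in the abstract.
print(aggregate([[0.0, 1.0], [0.0, 0.0], [1.0, 1.0]], p=2.0))
```

The choice of p thus acts as the knob selecting the ethical principle behind the aggregation.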
Roger Lera completed his BSc in Physics at the University of Barcelona in 2020. He is currently a Ph.D. student at the Artificial Intelligence Research Institute (IIIA-CSIC) in Bellaterra, Spain. His research interests are ethics & AI, explainable AI, and combinatorial optimisation problems for real-world applications.
In this seminar we present a class of algebras obtained by adding a normal modality to Boolean algebras for conditionals, so as to provide an algebraic setting for the logic C1 for counterfactual conditionals axiomatized by Lewis. These modal algebras, which we name “Lewis algebras”, are particular Boolean algebras with operators and, as such, admit a dual relational counterpart that we call Lewis frames. The main results show that: (1) Lewis algebras and Lewis frames provide a sound semantics for Lewis’ logic C1; (2) Lewis’ original sphere semantics for counterfactuals can actually be defined from Lewis frames, and hence from Lewis algebras. Finally, we will present a new logic for counterfactuals that, taking inspiration from the definition of Lewis algebras, is obtained as a modal expansion of the recently introduced logic LBC for reasoning about Boolean conditionals.
NOTE: This is a specialized seminar. If you want to attend, please contact Tommaso Flaminio (firstname.lastname@example.org).
I propose to peek into the possibility of using moral values as a device to harness the autonomy of artificial systems. The talk should outline the challenge of developing a theory of values that has a distinctive AI bias: its motivation, the foundational questions, the distinctive features, the potential artefacts, the methodological challenges, and the practical consequences of such a theory. Fortunately for everyone, it will not. The talk will only look into a restricted understanding of the problem of embedding values into the governance of autonomous systems. In fact, I will only pay attention to some of the obvious practical problems one needs to overcome if one intends to claim that an autonomous system is aligned with a particular set of values. Hopefully, this timid approach will reveal enough of the breadth and beauty of an artificial axiology to justify taking a closer look into it.
Pablo Noriega is a tenured scientist of the IIIA. His main research interest is in the governance of open multiagent systems. This talk reflects recent collaboration with Mark d'Inverno (Goldsmiths, U. of London), Julian Padget (U. of Bath), Enric Plaza (IIIA), Harko Verhagen (Stockholm U.) and Toni Perello-Moragues.
Initially started as a project at EPFL, Switzerland, AIcrowd is a community of ~60,000 AI researchers from all over the world who come together to solve real-world problems and win cash prizes, travel grants and co-authorship on research papers. At AIcrowd, we use competitions and benchmarks to build meaningful research communities that collaborate and compete to push the state of the art in Artificial Intelligence research. The long-term vision is to evolve into a giant distributed research lab that celebrates community-led research, by the community and for the community.
Sharada Mohanty is the CEO and Founder of AIcrowd, a platform for crowdsourcing Artificial Intelligence for real-world problems. His research focuses on using Artificial Intelligence for diagnosing plant diseases, teaching simulated skeletons how to walk, scheduling trains in simulated railway networks, and on AI agents that can perform complex tasks in Minecraft.
He is extremely passionate about benchmarks and building communities. He has led the design and execution of many large-scale machine learning competitions and benchmarks, such as NeurIPS 2017: Learning to Run Challenge, NeurIPS 2018: AI for Prosthetics Challenge, NeurIPS 2018: Adversarial Vision Challenge, NeurIPS 2019: MineRL Competition, NeurIPS 2019: Disentanglement Challenge, NeurIPS 2020: Flatland Competition, NeurIPS 2020: Procgen Competition, NeurIPS 2021: NetHack Challenge, to name a few.
During his Ph.D. at EPFL, he worked on numerous problems at the intersection of AI and health, with a strong interest in reinforcement learning. In his previous roles, he has worked at the Theoretical Physics department at CERN on crowdsourcing compute for PYTHIA powered Monte-Carlo simulations; he has had a brief stint at UNOSAT building GeoTag-X, a platform for crowdsourcing analysis of media coming out of disasters to assist in disaster relief efforts. In his current role, he focuses on building better engineering tools for AI researchers and making research in AI accessible to a larger community of engineers.
With the availability of large datasets and ever-increasing computing power, there has been a growing use of data-driven Artificial Intelligence systems, which have shown their potential for successful application in diverse domains related to social platforms. However, many of these systems are not able to provide information about the rationale behind their decisions to their users. Lack of understanding of such decisions can be a major drawback, especially in critical domains such as those related to cybersecurity, of which malicious behavior in social platforms is a clear example. This phenomenon has many faces, which for instance appear in the form of bots, sock puppets, creation and dissemination of fake news, Sybil attacks, and actors hiding behind multiple identities. In this talk, we discuss HEIST (Hybrid Explainable and Interpretable Socio-Technical systems), a framework for the implementation of intelligent socio-technical systems that are explainable by design, and study an instantiation for analysis of fake news dissemination.
Dr. Maria Vanina Martinez obtained her PhD at the University of Maryland, College Park, and pursued her postdoctoral studies at Oxford University in the Information Systems Group in Artificial Intelligence (AI) and Database Theory. Currently, she is an adjunct researcher at CONICET as a member of the Institute for Research in Computer Science (ICC, UBA-CONICET) and an assistant professor at the Department of Computer Science of the University of Buenos Aires, Argentina. In 2018 she was selected by IEEE Intelligent Systems as one of the ten prominent researchers in AI to watch. In 2021 she received the National Academy of Exact, Physical and Natural Sciences Stimulus Award in the area of Engineering Sciences in Argentina. Her research is in the area of knowledge representation and reasoning, with a focus on the formalization of knowledge dynamics, the management of inconsistency and uncertainty, and the study of the ethical and social impact of the development and use of systems based on Artificial Intelligence.
She is a member of the ethics committee of the Ministry of Science and Technology and has participated in various international events organized, among others, by UNESCO, UNIDIR, Pugwash, Sehlac (Human Security in Latin America and the Caribbean) and the Campaign to Stop Killer Robots, speaking about the benefits and challenges involved in the advancement of Artificial Intelligence.
Galaxies exhibit a wide variety of morphologies which are strongly related to their star formation histories. Having large samples of morphologically classified galaxies is fundamental to understand their formation and evolution. In this talk, I will review my research on deep learning algorithms for morphological classification of galaxies, which has resulted in the release of morphological catalogues for large international surveys such as SDSS, MaNGA or the Dark Energy Survey. I will describe the methodology, based on supervised learning and convolutional neural networks (CNNs). The main disadvantage of such an approach is the need for large labelled training samples, which we overcome by applying transfer learning or by ‘emulating’ the faint galaxy population.
Helena Domínguez Sánchez is a research fellow astrophysicist at the Institute of Space Sciences (ICE-CSIC), trying to understand how and why the properties of galaxies have changed across the history of the Universe. In recent years, she has pioneered the use of deep learning techniques in astronomy. She did her PhD in Bologna (2009-2012) and then held several postdoc positions at UCM (Madrid), the Paris Observatory and the University of Pennsylvania (USA). She is currently visiting the Instituto de Astrofísica de Canarias (IAC, Tenerife) for a semester, and she has just accepted a tenure-track position at the Centro de Estudios de Física del Cosmos de Aragón (CEFCA, Teruel), starting September 2022.
75 years ago the transistor was invented. In hindsight, that moment can be considered the big bang of the Information Society we live in today. The recent semiconductor crisis has shown how important chips are in our world. However, it is relatively unknown what chip-making entails. Moreover, chips come in many forms. Drawing on IMB-CNM activities, I would like to show that miniaturization and scalability make it possible not only to place chips inside computers and smartphones, but also to deploy microdevices in scenarios as demanding, and as far apart, as the inside of living cells and on board space missions.
Luis Fonseca has developed his scientific career at the Institute of Microelectronics of Barcelona. A physicist by training, he joined IMB-CNM in 1989 as a predoc, and he is today its director. His scientific interests have revolved around micro- and nanotechnologies for gas sensing and energy harvesting.
Reward is a foundation of behaviour: we move to attain valuable states. However, moving towards those states implies investing some effort and deploying motor strategies that are very much dependent on the person’s motivation. We performed a decision-making task in which human participants had to accumulate reward by selecting one of two reaching movements of opposite motor cost, to be performed precisely. Our results show that performance and social status were taken into consideration: participants diminished their error as a function of the partner. This was also reflected in an increased movement time between the baseline condition and any social condition. We interpret this as an adaptive trade-off between precision, reward and time. Other effects on movement amplitude became significant when the skill of the companion player was clearly unattainable, such as a reduction of the amplitude, thus escaping the traditional context of the speed-accuracy trade-off. As a context for the study of motivation and motor adaptation, we developed a model based on the optimization of movement benefits and costs. Remarkably, its predictions show that this optimization depends on the context in which the movements and choices are performed, incorporating motivation as part of its internal dynamics.
Ignasi Cos (Barcelona, 1973; MEng Electronics 1996 – Politecnico di Torino, MEng Telecommunications 1997 – Universitat Politècnica de Catalunya; PhD in Cognitive Science and Artificial Intelligence 2006 – University of Edinburgh). After PhD graduation, he trained as a postdoctoral fellow at the University of California, Berkeley, and at the University of Montreal, where he specialized in the neuroscience of motor control and decision-making. He also trained in theoretical neuroscience at the Université Pierre et Marie Curie, at the Brain and Spine Institute in Paris, and at the Universitat Pompeu Fabra. He is currently an Assistant Professor at the Faculty of Mathematics & Informatics, Universitat de Barcelona, and a member of the Institute of Mathematics (IMUB). His research focuses on developing mathematical techniques to characterize brain operation as a whole, in the context of how the brain controls movement.
Deep Neural Networks (DNNs) have achieved great success at solving numerous tasks, sometimes surpassing human performance. However, it is still not well understood how they represent data internally or what the characteristics of these representations are. In this talk we will present some research works that study the internal representations of DNNs and leverage them for controlled text generation, representation learning and bias analysis.
Xavier Suau holds a PhD in Computer Vision and Machine Learning from BarcelonaTech. Before that, he graduated from BarcelonaTech in Telecommunications Engineering and from Supaéro (Toulouse, France) in Aeronautics and Space Engineering. He is currently a research scientist at Apple's ML Research team, where he conducts research in ML representation learning and robustness. Before joining Apple, Xavier was a co-founder of the start-up Gestoos, an AI-centric company tackling human-machine interaction.
In this talk I will describe what working as a data scientist at Decathlon is like: the daily tasks we face as data scientists, the technologies used and methodologies applied, and a few interesting projects we are currently working on. If time allows, we will then jump to our latest paper, accepted with the team at King's College London, "Discovering and Interpreting Biased Concepts in Online Communities", in which I will present a data-driven method to automatically discover and help interpret biased concepts encoded in word embeddings, in the context of NLP, AI fairness and algorithmic bias.
Xavier Ferrer Aran is a Data Scientist at Decathlon UK and a Visiting Research Associate at King's College London. He obtained his PhD in Informatics in 2017 from the Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC) and the Universitat Autònoma de Barcelona (UAB). Afterwards, he worked as a Research Associate in Digital Discrimination at the Department of Informatics at King's College London, and a year ago he joined Decathlon UK as a Data Scientist. His research interests are at the intersection of applied natural language processing, machine learning and fairness.
Ethics has so far consisted of a relationship of human beings among themselves, guided by themselves, under basic conditions of presence, reciprocity, discursivity and intersubjectivity. It presupposes in each individual the capacity to guide their own action, beyond instinct and primary interests, through the social learning of patterns of moral conduct and their application via an individual decision process based on the personal use of the faculties of feeling, reasoning, willing and reflecting, shared with all other individuals.
Computerization transforms all of these faculties and conditions governing moral decision-making. We must examine to what extent, and what the main drawbacks and advantages may be in relation to what we still think of as “ethics”. Or is that very notion also under revision? Science and philosophy now face the challenge of responding and pointing in some direction favourable to the interests of humanity.
Norbert Bilbeny i García is a university professor, philosopher and writer, and Chair of Ethics at the University of Barcelona. He was Dean of the Faculty of Philosophy, elected in 2011, from which position he championed a model of internationalisation of Catalan research and of university-society transfer. He is currently director of the Master's in Citizenship and Human Rights, and previously directed the Master's in Immigration and Intercultural Education. His teaching career includes stays as visiting professor at foreign universities such as the University of Chicago, the Monterrey Institute of Technology and Higher Education, and Loyola University Chicago. He was a visiting scholar at Berkeley (School of Law), Harvard, Toronto, CNRS and Northwestern. His latest book: La enfermedad del olvido. El mal del Alzheimer y la persona. http://www.norbertbilbeny.com
The health domain has been an application area of artificial intelligence since the early years. Until very recently, the use of artificial intelligence in the health domain mostly focused on clinical data, including images, genetics, and clinical records. Only recently have data-driven solutions in the health domain started to rely on patient-generated data coming from social networks and mobile and wearable devices. These include applications for classification, health outcome prediction, conversational agents, and recommender systems. This lecture will focus on the human factors and applications of artificial intelligence for empowering people living with chronic conditions. We will discuss the main technical challenges and their human factors implications for building actionable and trustworthy solutions that support patients, caregivers, and their clinicians.
Luís Fernández-Luque: My research focus has been on the adaptation of mobile and web technologies for patient support and public health. My scientific contributions in mobile health, which include both mobile and wearable devices, are among the most cited and pioneering in the field, dating back to 2006. I have made substantial contributions to the creation and validation of Artificial Intelligence applications based on mobile and wearable technologies, including technologies such as deep learning and health recommender systems. My career has always focused on the crossroads between computer science and behavioral change, and I have ample experience in combining human factors research with artificial intelligence. My focus on human factors and data-driven applications dates back to my Ph.D. dissertation, which focused on trustworthiness aspects of information retrieval for patient education.
As Chief Scientific Officer at Adhera Health (Palo Alto, CA, USA), I oversee the implementation of the research roadmap for our digital therapeutics platform. Our evidence-based platform combines mobile technologies with artificial intelligence (recommender systems) to provide personalized patient support designed to improve the physical and mental wellbeing of people living with chronic conditions. In addition, I am a senior member of the IEEE Engineering in Medicine and Biology Society and Vice-President of the International Medical Informatics Association. I have over 100 publications cited in Google Scholar (https://scholar.google.com/citations?hl=en&user=N9Pdr2IAAAAJ).
In the last few years, blockchain technologies have fuelled the emergence of DAOs (Decentralised Autonomous Organisations) as socio-technical systems pursuing a variety of goals: decentralising finance, raising funds, creating guilds, promoting cultural and artistic initiatives, etc. Developments in this space have gone hand in hand with innovations in governance technologies (on-chain and off-chain voting systems, legal smart contracts, coordination tools, etc.). This presentation will provide a preliminary overview of how DAOs are deploying governance mechanisms aimed at progressive decentralization and autonomy, while also considering the limitations and challenges these systems are grappling with as they claim their space in Web 3.
Marta Poblet Balcell is a Professor at RMIT University’s Graduate School of Business and Law. She is one of the co-founders of the Institute of Law and Technology at the Autonomous University of Barcelona and a former researcher at ICREA (Catalonia). Marta holds a JSD in law (Stanford University 2002) and a Master in International Legal Studies (Stanford University 2000). Her research interests cut across many disciplines, including political science, law, technology and sociology. She is also interested in the connections between technology developments (AI, blockchain, human computer interaction) and different theories of democracy and citizenship. Her particular area of interest is in how technologies can provide outcomes for citizens in the areas of justice, security, privacy, disaster relief, or emergency management.
As the world population continues to expand, it is predicted that crop yields will have to increase by 50% over the next 35 years. Traditional breeding programs cannot keep pace with this population growth rate. One of the main determinants of crop yield is the capacity of the plant to harvest light and convert it into sugars through photosynthesis. Despite this, photosynthesis improvement remains underexploited as a route to increasing yield. Plants have evolved a wide variety of photosynthesis flavours, some of them more efficient than others. This provides an "evolutionary guide" for engineering some of these traits in target crops like rice. In this talk I will describe some of the strategies we use to improve photosynthesis, and some of the problems we face where interaction with Artificial Intelligence researchers could prove beneficial.
Ivan Reyna-Llorens received a Ph.D. in Plant Sciences from the University of Cambridge in 2016, using evolution as a guide to improve agricultural traits. He then worked as a postdoctoral fellow at the same university, developing methods for studying plant genomes. In 2021 he started the synthetic biology and photosynthesis group as a Junior Group Leader at CRAG, focusing on understanding how global rearrangements of gene regulatory networks have shaped the evolution of photosynthesis in plants, more specifically the adaptation of the photosynthetic machinery to different light conditions.
One of the major assets of deep neural networks is that when trained on large data sets (source data), their knowledge can be transferred to small datasets (target data). Transfer learning for deep neural networks can be performed simply by finetuning the network on the new data. In this talk, I will introduce the research field of continual learning, where the aim is not only to adapt to the target data but also to keep the performance on the original source data. In addition, during adaptation to the target, the learner no longer has access to the source data. This process can be repeated over a sequence of tasks that are learned one at a time. The aim is for the learner to perform well on all previous tasks at the end of the training process. The main challenge in continual learning is catastrophic forgetting, where the learner suffers a significant drop in performance on previous tasks. I will discuss a number of strategies to prevent catastrophic forgetting and will explain several methods developed in our group to address this problem.
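To make the forgetting phenomenon concrete, here is a minimal, self-contained sketch (not taken from the talk or the group's methods): a linear model is trained on two toy tasks in sequence, once by naive finetuning and once with a small rehearsal (replay) buffer, one of the classic strategies against catastrophic forgetting. The tasks, features and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, task):
    """Two binary tasks with shared parameters: inputs are (x1, x2, t*x1, t*x2)
    with task flag t, so one linear model can in principle solve both jointly."""
    x = rng.uniform(-1, 1, size=(n, 2))
    t = float(task == "B")
    X = np.hstack([x, t * x])
    y = (x[:, 0] + x[:, 1] > 0) if task == "A" else (x[:, 0] - x[:, 1] > 0)
    return X, y.astype(float)

def train(X, y, w, lr=0.5, epochs=500):
    """Logistic regression, full-batch gradient descent."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def acc(X, y, w):
    return float((((X @ w) > 0) == (y > 0.5)).mean())

XA, yA = make_task(400, "A")
XB, yB = make_task(400, "B")

w = train(XA, yA, np.zeros(4))      # learn task A first

# Naive finetuning on task B: gradients overwrite the weights shared with
# task A, whose data is no longer available (catastrophic forgetting).
w_ft = train(XB, yB, w.copy())

# Rehearsal: replay a small stored buffer of task A examples during task B.
w_rp = train(np.vstack([XB, XA[:100]]), np.concatenate([yB, yA[:100]]),
             w.copy())

print("task A accuracy after finetuning:", acc(XA, yA, w_ft))
print("task A accuracy with rehearsal:  ", acc(XA, yA, w_rp))
```

Rehearsal is only one of the strategies the talk covers; regularization- and architecture-based methods avoid storing raw source data.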
Joost van de Weijer is a Senior Scientist at the Computer Vision Center and leader of the Learning and Machine Perception (LAMP) group. He received his Ph.D. degree in 2005 from the University of Amsterdam. From 2005 to 2007, he was a Marie Curie Intra-European Fellow in the LEAR Team, INRIA Rhone-Alpes, France. From 2008 to 2012, he was a Ramon y Cajal Fellow at the Universidad Autonoma de Barcelona. He has served as an area chair for the main computer vision and machine learning conferences (CVPR, ICCV, ECCV, NeurIPS). His main research interests include active learning, continual learning, transfer learning, domain adaptation, and generative models.
Science is increasingly present in our society, and citizens demand truthful, accessible scientific information, far removed from the hermetic language of specialists. In this context, narrative fiction emerges as a magnificent medium for popularizing science. With that starting point, in this seminar we will describe four common ways of introducing science into fiction. Along the way, we will discuss several far-from-dated novels from the second half of the twentieth century (including the splendid Cien años de soledad) and from the twenty-first century that contain significant scientific elements. We will see the role that science, in the hands of notable writers, plays in each of these works. And we will conclude with a speculation about science in the fiction of the coming decades.
Pedro Meseguer is a CSIC researcher at the IIIA. Over his research career he has carried out numerous activities (publications in specialized conferences and journals, supervision of doctoral theses, editorial work, teaching), to which he has added a recent interest in science popularization, promoting or developing various events and materials: a comic about the Watson system for high-school students, presentations at the Festa de la Ciència of the Ajuntament de BCN, several outreach videos and contributions to scientific blogs. For some time now he has been drawn to fiction writing; he has published a novel and has contributed to several written media. He currently carries out evaluation activities for the Agencia Estatal de Investigación and teaches in the AI degree at the UAB (and also in the new course "Bojos per la IA", which the IIIA coordinates within the framework of the Fundació Catalunya La Pedrera), all combined with outreach proposals and work.
How useful would it be to obtain, in a straightforward manner, a formal, mathematical characterization of the neuro-dynamics of individual patients affected by stroke, Parkinson's Disease or other neuro-degenerative disorders? Beyond the obvious clinical application, the answer depends on attaining accurate models of the brain, on how general their predictions are, and on how adaptable they are to the clinical context and, in particular, to the medical praxis for the single patient. Tools for brain state characterization have made remarkable progress in the past ten years, yielding mathematical techniques and gradually amassing a huge amount of knowledge about brain structure and dynamics during resting state and the performance of specific tasks. Pending further research to perfect them, these techniques promise both a deeper, more formal understanding of brain function and a reliable tool for the clinical diagnosis of neuro-degenerative disorders.
Ignasi Cos (Barcelona, 1973; MEng Electronics 1996 - Politecnico di Torino; MEng Telecommunications 1997 - Universitat Politècnica de Catalunya; PhD in Cognitive Science and Artificial Intelligence 2006 - University of Edinburgh). After his PhD, he trained as a postdoctoral fellow at the University of California, Berkeley, and at the University of Montreal, where he specialized in the neuroscience of motor control and decision-making. He also trained in theoretical neuroscience at the Université Pierre et Marie Curie, at the Brain and Spine Institute of Paris, and at the Universitat Pompeu Fabra. He is currently an Assistant Professor at the Faculty of Mathematics & Informatics, Universitat de Barcelona, and a member of the Institute of Mathematics (IMUB). His research focuses on developing mathematical techniques to characterize the operation of the brain as a whole, in the context of how the brain controls movement.
We developed a model of recruitment to terrorism based on Social Structure Social Learning Theory (SSSL), Routine Activities Theory (RAT) and Situational Action Theory (SAT). Using real-world data, our experiments were enacted in a prototypical European city borough. After introducing the model and its main results, we critically discuss the decisions taken, from the points of view of calibration vs. arbitrary decisions, theory vs. mechanisms, and the cost/benefit of modeling choices.
Mario Paolucci is a Senior Researcher with the Italian National Research Council (CNR). Mario received a degree in Physics from the Sapienza University of Rome, under the supervision of Antonio Degasperis, and a PhD from the University of Florence, carrying out research with Rosaria Conte and Cristiano Castelfranchi. Mario has been co-PI, together with Giulia Andrighetto, of the Laboratory of Agent-Based Social Simulation (LABSS), located at ISTC. He has been scientific coordinator and PI of EC projects (eRep, FuturICT 2.0 (https://futurict2.eu/)). He is also the author of about 100 scientific publications, among them a monograph on reputation written with Rosaria Conte and articles in peer-reviewed journals such as Advances in Complex Systems, Scientometrics, and the International Journal of Approximate Reasoning.
In this seminar, we will describe the topic, objectives and previous work concerning an MSCA project starting in October 2021 at the IIIA. The project focuses on studying clausal-form systems for real- and rational-valued events, addressing the questions of their general definition, usage and solvable problems (SAT and optimality) from the point of view of their complexity, algorithmic design and applicability. The approach is based on the formal study of restricted classes of formulas in substructural and many-valued logics that, ideally, have good expressive power but are simpler than the full logical systems, and on the analysis of their computational behavior. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101027914.
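As a purely illustrative aside, the classical Boolean case of the SAT problem mentioned above can be stated in a few lines; the real- and rational-valued clausal systems studied in the project are, of course, far richer. The DIMACS-style clause encoding below is a common convention, not the project's formalism.

```python
from itertools import product

def sat(clauses, n_vars):
    """Brute-force SAT for a CNF formula. Clauses are lists of non-zero
    ints (DIMACS style): literal k means variable |k| is true iff k > 0.
    Returns a satisfying assignment as a dict, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if at least one of its literals is true.
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(sat([[1, 2], [-1, 3], [-2, -3]], 3))
# Unsatisfiable: x1 and not x1
print(sat([[1], [-1]], 1))  # None
```

Brute force is exponential in the number of variables; the point of studying restricted clausal classes is precisely to find fragments with better algorithmic behavior.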
Amanda Vidal graduated in both Mathematics and Computer Science in 2010 at the Autonomous University of Madrid, and obtained her Master's degree (2012) and her PhD in Pure and Applied Logics at the UB and the IIIA-CSIC (under the supervision of F. Bou, F. Esteva, and L. Godo) in 2015. Afterwards, she spent 4 years in different postdoctoral fellowships at the Institute of Computer Science of the Czech Academy of Sciences. Since October 2021, she is an MSCA fellow back at the IIIA-CSIC, under the supervision of F. Manya.
Advances in Artificial Intelligence are leaving research laboratories and institutes to connect with industry at the first event of the CONEXIÓN AIHUB, the network of CSIC centres working on Artificial Intelligence. The event, which will take place on 14 December at the Fundación Universidad-Empresa of the University of Valencia (ADEIT) in Valencia, will bring together companies and researchers specialized in different areas of AI to discuss the current challenges in various industrial sectors, such as health, mobility, education, agriculture and security. The meeting is co-organized by the IFIC-UV-CSIC and the AVI.
The EU-funded GLOTECH project comprises a study of the role of technology in processes of modernisation and globalisation, using the press, big data and computational research methods. It will explore the role of technology as a factor in time standardisation in Western industrialised societies, as a booster of cultural homogenisation and, as a consequence, as an agent of modernisation and globalisation. The analysis will focus on the press in European countries, the United Kingdom, and the United States. The methodology will include different computational research methods, contributing to significant advances in digital humanities and computational social sciences. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie (MSC) grant agreement No. 101024996.
Elena Fernandez is a Marie Curie Post-Doctoral researcher based at the Department of Computational Linguistics, University of Zurich, and the Principal Investigator of GLOTECH. From 2019 to 2021, she was a Eurotech Post-Doctoral Fellow and the Principal Investigator of PRESSTECH. She completed a PhD in Hispanic Languages and Literatures at the University of California, Berkeley (2019), an M.A. in Spanish Studies at the University of Virginia (2013), and a B.A. in English Philology at the University of Salamanca (2011). Her research profile lies at the intersection of Computational Social Science, Digital Humanities, and Media and Communication Studies.
Values are the abstract motivations that justify opinions and actions. The pursuit of values drives human behavior and promotes cooperation. Existing research focuses on general values (e.g., Schwartz values) that transcend contexts. However, the context-specific nature of values must be considered to (1) understand human decisions, and (2) engineer intelligent agents that can elicit human values and take value-aligned actions. Further, in practical applications (e.g., to conduct meaningful conversations or to identify online trends), artificial agents should be able to understand values on the fly from natural language.
We outline an approach for estimating context-specific values from text. First, the values relevant to a context must be identified. To this end, we propose Axies, a hybrid (human and AI) methodology for identifying context-specific values. Then, we examine the effectiveness of NLP models in classifying values in text. As context influences how we express values in natural language, we investigate the extent to which the learned value rhetoric can be transferred across contexts. Subsequently, we propose explainability techniques to inspect whether value classifiers have learned the context-specific connotations of values. Finally, we combine the steps above into a single method for swiftly estimating context-specific values from users.
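As a toy illustration of the last step (estimating context-specific values from text), the sketch below ranks values by lexicon overlap with a text. The contexts, value names and word lists are invented for this example and are unrelated to Axies or to the classifiers discussed in the talk.

```python
# Toy value estimator: scores a text against hand-made, per-context lexicons.
# All lexicon contents are illustrative assumptions, not learned from data.
CONTEXT_LEXICONS = {
    "energy policy": {
        "sustainability": {"renewable", "solar", "emissions", "climate"},
        "security": {"supply", "reliable", "independence", "grid"},
    },
    "covid measures": {
        "safety": {"masks", "distancing", "vaccines", "protect"},
        "freedom": {"choice", "liberty", "mandate", "rights"},
    },
}

def estimate_values(text, context):
    """Return the context's values, ranked by word overlap with the text."""
    words = set(text.lower().split())
    scores = {value: len(words & lexicon)
              for value, lexicon in CONTEXT_LEXICONS[context].items()}
    return sorted(scores, key=scores.get, reverse=True)

print(estimate_values("we need renewable solar power to cut emissions",
                      "energy policy"))
```

A real classifier would of course learn contextual rhetoric from data rather than rely on fixed keyword sets; the sketch only shows why the lexicon (and hence the estimate) must depend on the context.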
Enrico Liscio (https://enricoliscio.github.io) is a PhD candidate in the Interactive Intelligence Group at TU Delft and part of the Hybrid Intelligence Centre. He obtained an MSc in Systems and Control from TU Delft (the Netherlands, 2017) cum laude, and a BSc in Automation Engineering from the University of Bologna (Italy, 2015) cum laude. Between his MSc studies and his current position, he worked for 2.5 years as a deep learning developer and technical project lead at Fizyr (the Netherlands).
In this presentation, we discuss the use of agent and multi-agent techniques in space systems. We first identify some AI research challenges related to satellite constellations, especially concerning Earth Observation applications. These challenges range from constellation design to on-board, in-space decision-making, and raise opportunities for investigation in multi-agent based simulation and distributed problem solving, by way of machine learning and game theory. We then focus on case studies related to constellation resource allocation and scheduling. The first case study concerns the allocation of exclusive orbit slots to privileged constellation users. In this problem, the constellation operator aims at allocating the resources (orbit slots) as optimally and fairly as possible, prior to any scheduling, using only some simple requirements from clients. This problem is long-term, over horizons of several months. We explore the use of utilitarian and leximin-optimal techniques. The second case study investigates how distributed and coordinated decision techniques can be used to schedule observation tasks over such exclusive orbit portions, so that exclusive users do not disclose their own agendas. This problem is short-term, over horizons of a few hours. Here, we make use of distributed constraint optimization and sequential auctions to distribute decisions over the set of exclusive users.
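For a flavour of the first case study, here is a minimal greedy sketch of egalitarian slot allocation: each orbit slot, in order, goes to the currently worst-off user. This is only a crude approximation of the leximin-optimal techniques mentioned above, with invented utilities.

```python
def maxmin_greedy(utilities):
    """Greedy egalitarian allocation of slots to users.
    `utilities[u][s]` is user u's utility for slot s. Each slot goes to
    the user with the lowest total utility so far (ties go to the lowest
    index). Returns per-user slot lists and total utilities. A true
    leximin solution would optimize the sorted utility vector globally;
    this greedy pass is for illustration only."""
    n_users, n_slots = len(utilities), len(utilities[0])
    totals = [0.0] * n_users
    alloc = [[] for _ in range(n_users)]
    for s in range(n_slots):
        u = min(range(n_users), key=lambda i: totals[i])  # worst-off user
        alloc[u].append(s)
        totals[u] += utilities[u][s]
    return alloc, totals

utils = [[3, 1, 2, 2],    # user 0's utility per orbit slot (invented)
         [1, 3, 2, 2]]    # user 1
alloc, totals = maxmin_greedy(utils)
print(alloc, totals)      # every slot assigned, utilities balanced
```

In the real setting the operator additionally respects client requirements and orbital constraints, which this sketch ignores.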
Gauthier Picard received a Ph.D. in Computer Science from the University of Toulouse in 2004, and the Habilitation degree in Computer Science from the University of Saint-Etienne in 2014. He was an Associate Professor and then a Full Professor in Computer Science at MINES Saint-Etienne, before taking a Senior Researcher position at ONERA, the French Aerospace Lab. His research focuses on cooperation and adaptation in multi-agent systems and distributed optimization, with applications to aircraft design, ambient intelligence, intelligent transport and space operations.
Increased access to mobile phones and social media networks has changed the way people report and respond to disasters. Community-driven initiatives such as the Stand By Task Force (SBTF) or GISCorps have shown great potential by crowdsourcing the acquisition, analysis, and geolocation of social media data for disaster responders. To make social media information suitable for emergency responders, these initiatives face two main challenges: (1) most social media content, such as photos and videos, is not geolocated, which prevents the information from being used by emergency responders, and (2) they lack tools to manage volunteers' contributions and aggregate them in order to ensure high-quality and reliable results.
This seminar presents Crowd4EMS, a crowdsourcing platform developed under the EU project E2mC (Evolution of Emergency Copernicus services). Crowd4EMS combines automatic methods for gathering information from social media with crowdsourcing techniques in order to manage and aggregate volunteers' contributions and ensure reliable results for emergency responders in disaster management.
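A minimal sketch of the kind of contribution aggregation described above; the thresholds, labels and return convention are illustrative assumptions, not Crowd4EMS's actual parameters.

```python
from collections import Counter

def aggregate(votes, min_votes=3, min_agreement=0.6):
    """Aggregate volunteers' labels for one item by majority vote.
    Returns (label, agreement) once enough volunteers agree; returns
    None otherwise, so the item can be routed to more volunteers.
    Both thresholds are invented for this example."""
    if len(votes) < min_votes:
        return None                      # not enough contributions yet
    label, count = Counter(votes).most_common(1)[0]
    agreement = count / len(votes)
    return (label, agreement) if agreement >= min_agreement else None

print(aggregate(["flood", "flood", "fire", "flood"]))  # confident majority
print(aggregate(["flood", "fire"]))                    # too few votes: None
```

Requiring both a minimum number of votes and a minimum agreement level is one simple way to trade volunteer effort against result reliability.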
Dr. Jose Luis Fernandez-Marquez is a Senior Lecturer at the University of Geneva (UNIGE) and head of the Geneva-Tsinghua Initiative Accelerator. He has a computer science background, a PhD in collective artificial intelligence, and wide experience in Citizen Science. In 2011 he joined UNIGE after defending his PhD at the Artificial Intelligence Research Institute (IIIA-CSIC). In 2014, he formally joined the Citizen Cyberlab, a partnership between UNIGE, CERN, and the United Nations Institute for Training and Research (UNITAR) aimed at encouraging citizens and scientists to collaborate in new ways to solve big challenges. Since 2019, he has been the technical coordinator of the Crowd4SDG EU project, which focuses on demonstrating the potential of Citizen Science for monitoring and achieving the SDGs.
His current research focuses on citizen science data quality analysis and on methodologies to make citizen science data suitable for decision and policy makers.
In recent years, mobility has undergone a massive change. Car sharing, car pooling, and shared e-scooters or bikes are a few examples of new mobility services that have appeared lately. The typical paradigm of owning a car is also changing, especially with the soon-to-come autonomous vehicles. On top of that, the pandemic the world is living through has also changed mobility patterns and behaviours. With such an unpredictable situation, all stakeholders (from local authorities to mobility service providers and vehicle manufacturers) need tools to evaluate future scenarios and understand how best to respond to mobility demand. This has often been done with very specific and complicated tools, available only to specialised consultants. At Immense we have developed an easy-to-use simulation platform that allows non-expert users to quickly formulate "what if" questions regarding mobility scenarios. In this talk I'll give an overview of our platform, with special emphasis on how AI can help in this area.
Didac Busquets is a Computer Scientist specializing in Artificial Intelligence, more specifically in agent-based simulation, task and resource allocation, self-organization, and robotics. He has a BSc (1999) and PhD (2003) in Computer Science, both from the Technical University of Catalonia (UPC). After completing his PhD in the area of Robotics at the IIIA, he obtained a Fulbright Research Fellowship for a postdoc at Carnegie Mellon University. He then spent 5 years at the Universitat de Girona doing research on auction mechanisms, and later moved to Imperial College London with a Marie Curie Fellowship to apply social science to multi-agent resource allocation. In 2015 he decided to jump to industry and joined the Transport Systems Catapult (UK) to work on mobility simulation. In 2016 he co-founded Immense Simulations, where he has been in charge of developing the core simulation engine of their platform.
In this talk I'll walk you through my research interests since I left the IIIA in 2008: mainly, how I moved from working with a team of robots to working with robots interacting with humans. Such a shift of actors in the scene involves huge differences when it comes to developing robots. Human-Robot Interaction is a young field that is currently pushing the boundaries to achieve smooth interactions between people and robots, challenging many assumptions made so far in robotics. I'll talk about my research in the ALIZ-E project, focused on child-robot interaction, and some strategies aimed at sustaining long-lasting interactions.
Dr. Raquel Ros is a researcher at the Group on Media Technologies (GTM) working in the area of Human-Robot Interaction. She graduated in Computer Science from the Universitat Autònoma de Barcelona in 2003 and received her PhD from the Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC). In 2008 she moved to Toulouse as a Marie Curie fellow to work in the area of HRI at the Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS-CNRS), where she worked on collaborative robotics. She then moved to Imperial College London (Personal Robotics Lab) to continue her research on social robots in educational environments. After a stint in industry at Cambridge Consultants as a user-centred designer, she is now at La Salle-Universitat Ramon Llull in Barcelona, where she studies human-robot interaction and its interdisciplinary connection with cognitive sciences, psychology, sociology, health and education, with emphasis on long-term interaction.
How can we trust systems built from machine learning components? We need advances in many areas, including machine learning algorithms, software engineering, ML ops, and explanation. This talk will describe our recent work in two important directions: obtaining calibrated performance estimates and performing run-time monitoring with guarantees. I will first describe recent work with Jesse Hostetler on performance guarantees for reinforcement learning. Then I'll review our research on providing guarantees for open category detection and anomaly detection for run-time monitoring of deployed systems. I'll conclude with some speculations concerning meta-cognitive situational awareness for AI systems.
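To illustrate the flavour of such run-time guarantees (this is a generic conformal-style construction, not the specific methods of the talk), one can calibrate an anomaly-score threshold on held-out nominal data so that the false-alarm rate is controlled:

```python
import numpy as np

rng = np.random.default_rng(0)

def calibrated_threshold(cal_scores, alpha=0.05):
    """Conformal-style threshold: given n calibration scores from nominal
    data, flagging new scores above the ceil((n+1)(1-alpha))-th smallest
    calibration score keeps the false-alarm rate at most about alpha
    (assuming nominal data are exchangeable with the calibration set)."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_scores)[min(k, n) - 1]

# Toy anomaly score: distance of an observation from the nominal mean.
nominal = rng.normal(0.0, 1.0, 2000)
cal_scores = np.abs(nominal - nominal.mean())
thr = calibrated_threshold(cal_scores, alpha=0.05)

# On fresh nominal data, alarms fire at roughly the chosen rate alpha.
test_scores = np.abs(rng.normal(0.0, 1.0, 2000))
false_alarm_rate = float((test_scores > thr).mean())
print(f"threshold={thr:.2f}, false alarm rate={false_alarm_rate:.3f}")
```

The guarantee bounds false alarms only; detecting true anomalies additionally depends on how well the score separates anomalous from nominal behavior.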
Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability.
The Doctoral Consortium will take place on July 20 and 21.
Cognitive science informs us that cognition comprises a collection of constructive feedback processes between the environment and our perception of it. Sense-making is the process by which an autonomous agent brings its own original meaning upon its environment. We model the sense-making process as the conceptual blending of image schemas with a structural description of a stimulus. The case study we use is diagrams and their geometric configurations. Image schemas are mental structures abstracting the invariances of repeated sensorimotor contingencies such as SUPPORT, VERTICALITY and BALANCE. They structure our perception and reasoning by transferring their structure to our percepts according to the principles of conceptual blending. In our work we model the conceptual blend of various image schemas with the geometry of a diagram, obtaining a blend that reflects the interpreted diagram. The resulting blend has emergent structure, representing a meaningful diagram: for example, a Hasse diagram (representing a poset) as a SCALE with levels, minimum and maximum elements, etc. Our work on diagrams can provide guidelines for effective visualizations, and our general framework can be developed into a system that constructs possible conceptual meanings for various types of stimuli.
In this talk, I will explain the theories of image schemas and conceptual blending and how they approach meaning, and then I will discuss how we take advantage of them to build a computational model of the sense-making of diagrams.
Dimitra Bourou is a predoctoral researcher at the IIIA. Her field of expertise may best be summarized as computational cognitive science. The goal of her doctoral research is to develop a computational framework for the sense-making of stimuli, following theories of cognitive science related to embodiment. She graduated from the interdisciplinary master's program Brain and Mind at the University of Crete, where she was exposed to a variety of courses ranging from neuroscience, psychology and philosophy of mind, to signal processing, machine learning and artificial intelligence. During that time she undertook research in affective computing, resulting in a publication on pain level estimation from videos of subjects (Bourou et al., 2018). Dimitra is also very familiar with modal logics and multi-agent systems, as well as computational linguistics, through participation in extensive tutorials at several summer schools. Her first degree is in Biology.
Since their invention in 2017, "Transformer" models have revolutionized the field of natural language processing. Models such as BERT, GPT-3 and T5 have achieved state-of-the-art performance in many challenging NLP tasks, getting closer and closer to human performance. Moreover, Transformer-based models are also starting to make inroads into other areas such as computer vision, rivaling traditional convolutional architectures. However, despite their success, Transformers have many limitations. In this talk, I'll discuss our most recent work at Google on pushing the limits of Transformer models to address such limitations. In particular, I'll talk about our work towards solving tasks that require the models to process very long inputs (e.g., question answering over long documents), structured inputs (where the inputs are not just raw sequences of words or pixels but have some sort of graph structure), and tasks that require compositional generalization.
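For readers unfamiliar with the architecture, the core operation of a Transformer layer is scaled dot-product attention, sketched below in NumPy (a single head, no masking). The quadratic cost of the score matrix in the sequence length is precisely what makes very long inputs challenging.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q: (n_q, d) queries, K: (n_k, d) keys, V: (n_k, d_v) values."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # (n_q, n_k): quadratic in length
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
n, d = 4, 8                                # 4 tokens, model dimension 8
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out, w = attention(Q, K, V)
print(out.shape)                           # each token attends to all others
print(w.sum(axis=1))                       # attention rows sum to 1
```

Much of the long-input work mentioned in the talk amounts to replacing this dense n-by-n score matrix with sparse or low-rank approximations.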
Santiago Ontañón is a Research Scientist at Google Research. His research focus lies at the intersection of AI and machine learning, with applications to natural language processing and computer games. He is also an Associate Professor at Drexel University (on leave). He obtained his PhD at the Artificial Intelligence Research Institute (IIIA) in Barcelona, and held postdoctoral positions at IIIA, the Georgia Institute of Technology and the University of Barcelona.
The view of optimal control as probabilistic inference is being rediscovered again and again in planning and reinforcement learning. It has recently gained interest with the use of deep learning to represent policies and value functions, and the widespread use of entropy regularization in reinforcement learning. This seminar will introduce and explain the class of Kullback-Leibler control problems (also known as linearly-solvable optimal control) and define its relation to entropy-regularized reinforcement learning. I will present the discrete and continuous formulations of this framework for control and inference and describe recent advances that exploit its analytical properties for efficient policy optimization. These advances lead to a practical algorithm in the reinforcement learning setting that can be applied to high-dimensional robotics tasks, addressing the main challenge of translating this theory into practical methods for large-scale control problems.
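A minimal sketch of the discrete, first-exit formulation of Kullback-Leibler control (the chain MDP and the costs below are invented for illustration): the desirability function z satisfies the linear equation z = exp(-q) P z with z = 1 at the goal, the optimal value is v = -log z, and the optimal policy reweights the passive dynamics P by z.

```python
import numpy as np

def solve_kl_control(P, q, goal, iters=500):
    """First-exit KL control (linearly-solvable MDP): iterate
    z = exp(-q) * (P @ z) on interior states, with z fixed to 1 at the
    goal. Returns v = -log z and the optimal policy, which reweights
    the passive dynamics: u*(x'|x) proportional to P[x, x'] * z[x']."""
    z = np.ones(len(q))
    for _ in range(iters):
        z = np.exp(-q) * (P @ z)
        z[goal] = 1.0                       # boundary condition at the goal
    policy = P * z                          # bias transitions toward high z
    policy /= policy.sum(axis=1, keepdims=True)
    return -np.log(z), policy

# Toy problem: 5-state chain, passive dynamics = unbiased random walk,
# absorbing goal at state 4, uniform state cost elsewhere.
n = 5
P = np.zeros((n, n))
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 1] = 1.0
P[n - 1, n - 1] = 1.0
q = np.full(n, 0.1)
q[n - 1] = 0.0
v, policy = solve_kl_control(P, q, goal=n - 1)
print(np.round(v, 2))                       # cost-to-go shrinks toward goal
```

The point of the framework is that this is a linear fixed-point problem rather than a nonlinear Bellman recursion, which is what the analytical advances mentioned above exploit.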
Related publication: Dominik Thalmeier, Hilbert J. Kappen, Simone Totaro, Vicenç Gómez. Adaptive Smoothing for Path Integral Control. Journal of Machine Learning Research, 21(191):1-37, 2020.
Vicenç Gómez received the Computer Science engineering degree in 2002 from the Universitat Politècnica de Catalunya, and the PhD in Computer Science and Digital Communication from the Universitat Pompeu Fabra (UPF), Barcelona, in 2008. He has been a postdoctoral researcher at the Radboud university medical center (2009-2011) and at the Donders Institute for Brain, Cognition and Behaviour (2011-2014) in Nijmegen (the Netherlands). He has held visiting appointments at Los Alamos National Laboratory (USA), the IAS group at Technische Universitaet Darmstadt (Germany), and University College London (UK). In 2014 he obtained a transnational academic career grant (FP7 Marie Curie Actions) and joined the Artificial Intelligence and Machine Learning group at the Department of Information and Communications Technologies (UPF). In 2016 he was awarded a Ramon y Cajal fellowship. He is currently a tenure-track professor at UPF. His main research interests are machine learning and optimal control, with applications to areas such as complex networks, robotics, and brain-computer interfaces.
From climate change and the destruction of ecosystems and habitats to the spread of infectious diseases such as COVID-19, many contemporary societal challenges are exacerbated by collective action problems. In these situations, groups would benefit from a shared outcome, but the incentives available to individuals drive them to free ride. While laws, treaties and other formal institutions could in principle address these global issues and create cooperation, they are often unavailable, unenforceable, or insufficient, and informal institutions, such as social norms, become essential. Under the right conditions, poor and destructive norms may disappear and new norms may spontaneously emerge that motivate people to act against their self-interest and cooperate for the good of the collective. Despite their importance, evidence on the causal effect of social norms in promoting cooperation in humans is still limited. In this talk, I will present work on the formation and change of social norms and their effect in promoting human cooperation. I will discuss results from recent laboratory experiments and agent-based simulations showing that social norms are causal drivers of behavior and can explain cooperation-related regularities.
Giulia Andrighetto is a researcher at the Institute of Cognitive Sciences and Technologies (ISTC) at the National Research Council of Italy, where she coordinates the Laboratory of Agent-Based Social Simulation (LABSS). She is also a researcher at Mälardalen University, Västerås, Sweden. Her research examines the nature and dynamics of social norms: how norms may emerge and become stable, why norms may suddenly change, how inefficient or unpopular norms can survive, and what motivates people to obey norms. In 2013, she was awarded the Ricercat@mente Prize for the best Italian researcher under 35 in the field of social sciences & humanities by the National Research Council and the Accademia dei Lincei. In 2016, she was awarded a Wallenberg Academy Fellowship by the Knut and Alice Wallenberg Foundation, Sweden.
A brain-machine interface (BMI) is a system that enables users to interact with computers and robots through the voluntary modulation of their brain activity. Such a BMI is particularly relevant as an aid for patients with severe neuromuscular disabilities, although it also opens up new possibilities in human-machine interaction for able-bodied people. Real-time signal processing and decoding of brain signals are certainly at the heart of a BMI. Yet, this does not suffice for subjects to operate a brain-controlled device.
In the first part of my talk, I will review some of our recent studies, most involving participants with severe motor disabilities, that illustrate additional principles of a reliable BMI that enable users to operate different devices. In particular, I will show how an exclusive focus on machine learning is not necessarily the solution, as it may not promote subject learning. This highlights the need for a comprehensive mutual learning methodology that fosters learning at the three critical levels of the machine, subject and application. To further illustrate that BMI is more than just decoding, I will discuss how to enhance subject learning and BMI performance through appropriate feedback modalities. Finally, I will show how these principles translate to motor rehabilitation, where in a controlled trial chronic stroke patients achieved a significant functional recovery after the intervention, which was retained 6-12 months after the end of therapy.
Dr. José del R. Millán is a professor and holds the Carol Cockrell Curran Endowed Chair in the Department of Electrical and Computer Engineering at The University of Texas at Austin. He is also a professor in the Department of Neurology of the Dell Medical School.
He received a PhD in computer science from the Technical University of Catalonia, Barcelona, in 1992. Previously, he was a research scientist at the Joint Research Centre of the European Commission in Ispra (Italy) and a senior researcher at the Idiap Research Institute in Martigny (Switzerland). Most recently, he held the Defitech Foundation Chair in Brain-Machine Interface at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, where he helped establish the Center for Neuroprosthetics.
Dr. Millán has made several seminal contributions to the field of brain-machine interfaces (BMI), especially based on electroencephalogram signals. Most of his achievements revolve around the design of brain-controlled robots. He has received several recognitions for these seminal and pioneering achievements, notably the IEEE-SMC Norbert Wiener Award in 2011, elevation to IEEE Fellow in 2017, and election as a Fellow of the International Academy of Medical and Biological Engineering in 2020. In addition to his work on the fundamentals of BMI and the design of neuroprosthetics, Dr. Millán is prioritizing the translation of BMI to end-users suffering from motor and cognitive disabilities. In parallel, he is designing BMI technology to offer new interaction modalities for able-bodied people.
Agent-based simulation and social norms marked Daniel's scientific journey during his time at the IIIA-CSIC. At that time, he combined diverse research areas such as agent-based simulation, game theory, social network analysis, experimental economics and human-computer interaction in order to better understand how societies can improve their self-governance. After graduating, he moved into applied research, investigating how empirical data could be used to improve all those methods.
All this happened during the emergence of a new discipline that ended up being "the sexiest job of the 21st century". In this talk we'll review the key elements to make data-products for companies to improve their decision making, using real examples from companies.
Dr. Daniel Villatoro is Chief Data Scientist @ Openbank (Grupo Santander) and co-founder of Databeers (an NGO for the cultural dissemination of data projects, present in 27 locations in 16 countries around the world).
Recent research has demonstrated that AI can introduce new risks and vulnerabilities in a system. In particular, I will talk about two main risks: security and privacy. I will show attacks that can be performed to exploit AI models and the systems that use them, and that AI-based systems can be privacy-intrusive. I will then outline our current research and projects on making AI safer.
Dr Jose M Such is Reader in Security and Privacy at King’s College London and Director of the King’s Cybersecurity Centre, an Academic Centre of Excellence in Cyber Security Research (ACE-CSR) recognised by NCSC (part of GCHQ) and EPSRC. Dr Such was Senior Lecturer at King’s College London from 2016 to 2018, and before that, he was Lecturer at Lancaster University from 2012 to 2016. His research interests are at the intersection between Artificial Intelligence, Human-Computer Interaction, and Cyber Security. His research has been funded through a multi-million pound portfolio of projects by UKRI, EPSRC, Google, ICO, UK Government, and InnovateUK.
The mathematician and inventor Charles Babbage wrote 26 programs between 1836 and 1841 for the unfinished "Analytical Engine" (AE). The code is embedded implicitly in tables summarizing program traces. In this talk, I present the programming architecture of Babbage’s mechanical computer based on the first code written for the machine. The AE had a processor separate from memory and worked using a kind of dataflow approach. The stream of arithmetical operations was independent of the stream of memory addresses. Special "combinatorial" cards allowed the processor to execute FOR and WHILE loops. Combinatorial cards also allowed independent looping through the stream of memory addresses. Quite sophisticated computations were possible, which illustrates why Babbage talked about the possibility of doing "algebra" with his machine. The programs I will discuss predate by several years the account published by Menabrea in 1842 and translated later by Lady Lovelace with notes of her own.
Raúl Rojas González is a professor in the Department of Mathematics and Computer Science at the Free University of Berlin. He is a graduate of the IPN (Mexico), where he obtained his Bachelor's and Master's degrees in mathematics. He later completed his doctorate and habilitation in computer science at the Free University of Berlin. He has written about the history of computing and is the author of the book "The First Computers" (MIT Press, 2000). His articles on the Babbage machine have appeared in German journals and in the Annals of the History of Computing. Raúl Rojas was named Professor of the Year in Germany in 2014 and received the National Science Award of Mexico in its 2015 cohort.
Automated assessment and feedback of open-response assignments remain a challenge in Computer Science, despite recent milestones in fields such as natural language processing. Even if we could produce quality assessments entirely without human supervision, many would argue against doing so. Competence assessment is a sensitive topic, possibly impacting the issuance of a certificate asserting that a student is ready to enter the labour market or to continue through the education system.
Despite the efforts on "opening" the black box of neural networks, current neural models are rarely equipped with logical narratives of the decision chains that lead them to a final prediction or classification. Nevertheless, transparency and explainability are desirable requisites for automated assessment systems.
For all the above reasons, many researchers propose hybrid solutions, such as probabilistic models of competence assessment, that combine the benefits of automation with human judgement. In this work, two probabilistic models of peer assessment (PG1-bias and PAAS) are replicated and compared. We also present PG-bivariate, a model combining the approaches of the first two.
Alejandra López de Aberasturi is a PhD candidate at the IIIA-CSIC.
Reinforcement learning (RL) is a field of AI in which actions are taken based upon states, transitions, expected rewards and other available information. However, when the state and action spaces are not discrete or finite, RL needs to be reformulated and other methods can be applied. In this presentation, I will talk about some of these changes and the methods used for learning in continuous state-action spaces, and their application to robot motion learning.
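A minimal sketch of the shift to continuous actions: a Gaussian policy samples real-valued actions and is improved with a REINFORCE-style score-function update. The toy problem, policy form, and constants below are illustrative assumptions, not the methods of the talk.

```python
import random

random.seed(0)

# Toy continuous-action problem: in state s the best action is a* = 2*s.
# A Gaussian policy pi(a|s) = N(theta*s, sigma^2) is improved with the
# score-function (REINFORCE-style) update
#   theta <- theta + alpha * reward * d/dtheta log pi(a|s),
# where d/dtheta log pi(a|s) = (a - theta*s) * s / sigma^2.

def train(episodes=5000, alpha=0.01, sigma=0.5):
    theta = 0.0
    for _ in range(episodes):
        s = random.uniform(-1.0, 1.0)
        a = random.gauss(theta * s, sigma)           # sample a continuous action
        reward = -(a - 2.0 * s) ** 2                 # peaks at the optimal action
        grad_log = (a - theta * s) * s / sigma ** 2  # score function
        theta += alpha * reward * grad_log
    return theta

theta = train()
print(round(theta, 2))  # settles close to the optimal slope 2.0
```

Because the policy is a distribution over a continuum, no table over actions is ever built; this is the basic reformulation that continuous state-action methods elaborate on.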
Adrià Colomé is a postdoctoral researcher at the Institut de Robòtica i Informàtica Industrial. His PhD focused on robot motion learning from several perspectives, such as robot kinematics, robot dynamics, reinforcement learning in latent spaces, sample efficiency for robot learning, and learning robot motion adaptability. Currently, he is working on the challenging topic of learning to manipulate cloth.
The overarching aim of the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) is to train the first generation of AI scientists and engineers in methods of safe and trusted AI. An AI system is considered safe when we can provide assurances about the correctness of its behaviour, and it is considered trusted if the average user can have confidence in the system and its decision making. The CDT focuses particularly on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Models can be verified and solutions based on them can be guaranteed as safe and correct, and models can provide human-understandable explanations and support user collaboration and interaction with AI – key for developing trust in a system. In this talk, we will present the central vision, programme, and core research areas.
Dr Natalia Criado is a Senior Lecturer in Computer Science at King's College London and a member of the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI).
This webinar is a PhD thesis defense.
This PhD thesis contributes to the systematic study of Horn clauses of predicate fuzzy logics and their use in knowledge representation for the design of an art painting style classification algorithm. We first focus on notions relevant to logic programming, such as free models and Herbrand structures in mathematical fuzzy logic. We show the existence of free models in fuzzy universal Horn classes, and we prove that every equality-free consistent universal Horn fuzzy theory has a Herbrand model. Two notions of minimality of free models are introduced, and we show that these notions are equivalent in the case of fully named structures. Then, we use Horn clauses combined with qualitative modelling as a fuzzy knowledge representation framework for art painting style categorization. Finally, we design a painting style classifier based on evaluated Horn clauses, qualitative colour descriptors, and explanations. This algorithm, called l-SHE, provides reasons for the results it produces and achieves competitive accuracy in our experiments.
Machine learning enables new approaches to inverse problems in many fields of science. We present a novel probabilistic programming framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way. The execution of existing simulators as probabilistic programs enables highly interpretable posterior inference in the structured model defined by the simulator code base. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the tau lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. Inference efficiency is achieved via amortized inference where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of a Markov chain Monte Carlo baseline.
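The reweighting step at the heart of the sequential importance sampling scheme can be shown on a deliberately simplified conjugate-Gaussian toy (an assumed example of mine, far simpler than the tau-decay pipeline): draws from a proposal are weighted by prior times likelihood over proposal density to estimate a posterior mean.

```python
import math
import random

random.seed(7)

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Latent x ~ N(0, 1), observation y ~ N(x, 1). Estimate E[x | y] by drawing
# x from a wider proposal q = N(0, 2) and importance-weighting each draw.
def posterior_mean(y, n=50000):
    total_w, total_wx = 0.0, 0.0
    for _ in range(n):
        x = random.gauss(0.0, 2.0)
        w = normal_pdf(x, 0.0, 1.0) * normal_pdf(y, x, 1.0) / normal_pdf(x, 0.0, 2.0)
        total_w += w
        total_wx += w * x
    return total_wx / total_w

pm = posterior_mean(1.0)
print(round(pm, 2))  # close to the analytic posterior mean y/2 = 0.5
```

Amortized inference replaces the fixed proposal here with one parameterized by a trained neural network, which is what makes the approach tractable at simulator scale.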
Dr Atilim Güneş Baydin is a Departmental Lecturer in machine learning at the Department of Computer Science and a Senior Researcher in machine learning at the Department of Engineering Science, University of Oxford. He works with Philip H. S. Torr as a member of Torr Vision Group. He is also a Research Member of the Common Room at Kellogg College, a research consultant for Microsoft Research Cambridge, and a member of the European Lab for Learning and Intelligent Systems (ELLIS).
"Let's think about our presentations: besides being informative, are they engaging and motivating?" Carme will give us some hints about how to improve our scientific oral presentations and how to make them lively and motivating for the audience.
Carme Roig is in charge of educational and technological innovation at STBCO, Department of Education, Catalonia, Spain. She has a degree in English Philology (UB, 1986) and a Master’s degree in TESOL (Institute of Education, UCL, 1996). She has coordinated a team of teachers who work to introduce and promote cooperative learning practices in high schools as members of XCB (Xarxa de Competències Bàsiques). In recent years, she has been working with several research groups (Goldsmiths College, London; University of Ghent, Belgium; SONY Labs, Paris; IIIA-CSIC, Bellaterra) on issues of collaborative distance learning, self-assessment and co-assessment, and has participated in experiments to validate a number of computer tools developed within the framework of a European project and in collaboration with the Research Institute for Artificial Intelligence (IIIA-CSIC). These tools include the automatic design of lesson plans, assessment tools for assessing large numbers of students (MOOCs) and tools for student grouping. Her main interests are automated team formation, collaborative work, formative assessment and multilingualism.
Optical imaging methods using fluorescence indicators are critical for monitoring the activity of large neuronal populations in vivo. Imaging experiments typically generate a large amount of data that needs to be processed to extract the activity of the imaged neuronal sources. While deriving such processing algorithms is an active area of research, most existing methods require the processing of large amounts of data at a time, rendering them vulnerable to the volume of the recorded data, and preventing real-time experimental interrogation. In this talk I will describe CaImAn Online, a framework for the analysis of streaming calcium imaging data, including i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Our approach combines and extends previous work on online dictionary learning and calcium imaging data analysis, to deliver an automated pipeline that can discover and track the activity of hundreds of cells in real time, thereby enabling new types of closed-loop experiments.
Dr. Andrea Giovannucci is an Assistant Professor in Neural Engineering at the UNC/NCSU department of Bioengineering. Prior to this appointment, Dr. Giovannucci was a machine learning data scientist at the Flatiron Institute (Simons Foundation) and a postdoctoral fellow (experimental neuroscience) at the Princeton Neuroscience Institute. Dr. Giovannucci obtained his PhD in artificial intelligence from the Autonoma University of Barcelona and the Artificial Intelligence Research Institute of Bellaterra (IIIA-CSIC), Spain. Dr. Giovannucci is affiliated with the UNC/NCSU joint Bioengineering department, the Closed-loop Engineering for Advanced Rehabilitation (CLEAR) and the UNC Neuroscience Center.
Transition Edge Sensor (TES) detector devices, like the one that will be onboard the Athena X-ray Observatory, produce current pulses as a response to the incident X-ray photons. The reconstruction of these pulses aims at recovering the energy of the impacting photon, its arrival time and its physical position in the detector. This has traditionally been performed by means of a triggering algorithm based on the derivative signal overcoming a threshold (detection), followed by optimal filtering (to retrieve the energy of each event). However, when photons arrive very close in time, the triggering algorithm is incapable of detecting all the individual pulses. Aiming at improving the efficiency of the detection process, we use an alternative approach with Machine Learning techniques. For this purpose, we construct and train a series of Neural Networks (NNs), not only for the detection but also to recover the energy of simulated X-ray pulses. The dataset used to train the NNs consists of simulations performed with SIXTE/xifusim, the official Athena/X-IFU simulator. Although much more expensive in terms of computational cost, the performance of our classification NN clearly surpasses the detection performance of the classical triggering approach over the full range of photon energy combinations, showing excellent metrics. The reconstruction efficiency for recovering the energy of the photons cannot, however, currently compete with the optimal filtering algorithm.
Complex networks are ubiquitous representations of real systems in many contexts, such as social networks, computer networks, or biological networks, among others. Most real-world networks exhibit non-trivial topological features, and the interest in analyzing their properties has resulted in the emergence of random models to generate them. Probabilistic models are, in general, based on the probability of each edge occurring, and the topology of the network is the consequence of such a probability distribution. In deep generative approaches, a model is trained to learn the features of a training set of examples and generate new networks with similar properties. In this seminar we will review a (non-exhaustive) list of random models for complex network generation, and analyze how these models can be applied to another challenging problem: the generation of realistic random SAT instances.
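As a concrete instance of an edge-probability model, here is a minimal Erdős–Rényi G(n, p) generator (an illustrative sketch; the models covered in the seminar are more elaborate): every possible edge appears independently with probability p.

```python
import random

random.seed(42)

# G(n, p): each of the n*(n-1)/2 possible undirected edges is included
# independently with probability p.
def gnp(n, p):
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                edges.append((i, j))
    return edges

edges = gnp(100, 0.1)
# expected number of edges: p * n*(n-1)/2 = 0.1 * 4950 = 495
print(len(edges))
```

The resulting topology is a direct consequence of the per-edge probability distribution, which is exactly the property deep generative approaches try to learn from data instead of fixing by hand.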
Ant colony optimization is a metaheuristic that is mainly used for solving hard combinatorial optimization problems. Its distinctive feature is a learning mechanism based on learning from positive examples. Examples from nature, however, indicate that negative learning, in addition to positive learning, can beneficially be used for certain purposes. Several research papers have explored this topic over the last decades in the context of ant colony optimization, mostly with limited success. In this talk I present an alternative mechanism that makes use of mathematical programming for the incorporation of negative learning into ant colony optimization. The study considers two classical combinatorial optimization problems: the minimum dominating set problem and the multidimensional knapsack problem. In both cases our approach significantly improves over standard ant colony optimization and over the competing negative learning mechanisms from the literature.
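To make the positive-learning mechanism concrete, here is a minimal ant-system-style sketch on a tiny 0/1 knapsack instance. The instance, the probability rule, and the update constants are illustrative assumptions of mine; the talk's algorithms, including the negative-learning mechanism, are considerably more sophisticated.

```python
import random

random.seed(1)

values = [10, 7, 4, 8, 6]
weights = [5, 4, 3, 6, 2]
capacity = 10

# Each ant builds a feasible solution, taking each item with a probability
# that grows with its pheromone value.
def build_solution(pheromone):
    chosen, load = [], 0
    items = list(range(len(values)))
    random.shuffle(items)
    for i in items:
        if load + weights[i] <= capacity:
            if random.random() < pheromone[i] / (pheromone[i] + 1.0):
                chosen.append(i)
                load += weights[i]
    return chosen

def aco(iterations=50, n_ants=10, rho=0.1):
    pheromone = [1.0] * len(values)
    best_val = 0
    for _ in range(iterations):
        solutions = [build_solution(pheromone) for _ in range(n_ants)]
        it_best = max(solutions, key=lambda s: sum(values[i] for i in s))
        it_val = sum(values[i] for i in it_best)
        best_val = max(best_val, it_val)
        # positive learning: evaporate, then deposit pheromone on the
        # items of the iteration-best solution
        for i in range(len(pheromone)):
            pheromone[i] *= 1.0 - rho
            if i in it_best:
                pheromone[i] += rho * it_val / 10.0
    return best_val

best_val = aco()
print(best_val)  # the optimum for this instance is 20 (items 0, 2 and 4)
```

Only good solutions ever reinforce the pheromone here; a negative-learning mechanism would additionally penalize components identified as poor choices.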
Natural Language Understanding (NLU) is the broad research area in Natural Language Processing (NLP) that develops methods to analyze natural language and understand its meaning. It is a key component of any AI system that aims at truly interacting with humans. It is also a key component for automatic systems that do machine reading of the web and social media, which, given the current volumes of information, is the only practical way to access this content.
First, I will give a brief overview of Natural Language Processing tasks and the evolution of machine learning approaches in recent years. Natural language is structured, very rich, ambiguous, and offers limitless ability to say new things. Because of this, the desire is to have machine learning algorithms that learn hidden-state compositional models of language and answer questions such as: what are the units and parts of a language? what is the meaning of each part? how do we compose parts into bigger parts? what is the meaning of a composed expression? how do we use these models to solve specific needs?
Deep learning has made great progress on these questions. Today we have giant neural models like BERT or GPT-3 that are trained at worldwide scale and are found useful for virtually any empirical NLP task. However, it is largely unclear what these models are learning and what their capacity to generalize is (as opposed to memorizing data). Also, the cost of training these models is huge.
In the second part of this talk, I will focus on compositional models of language that take the form of weighted automata, which are a restricted class of recurrent neural networks. I will describe Spectral Learning algorithms, a family of learning algorithms that reduce the problem of learning a weighted automaton to some form of matrix learning. This reduction is based on theoretical connections between formal languages and distributions over the strings they generate. I will highlight several good properties of this family of techniques, and contrast them with deep learning approaches.
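The matrix at the heart of this reduction can be illustrated in a few lines (a toy of my own, not the talk's algorithms): for a function f on strings computed by a minimal weighted automaton with n states, the Hankel matrix H[u][v] = f(uv) has rank n. Below, f is a one-state stopping process, so every 2x2 minor of its Hankel matrix vanishes.

```python
# One-state "coin" process: the string stops with probability 0.2 and emits
# 'a'/'b' with probabilities 0.5/0.3 (illustrative constants).
stop, pa, pb = 0.2, 0.5, 0.3

def f(s):
    p = stop
    for c in s:
        p *= pa if c == 'a' else pb
    return p

prefixes = ['', 'a', 'b', 'aa']
suffixes = ['', 'a', 'b', 'ab']
H = [[f(u + v) for v in suffixes] for u in prefixes]

# All 2x2 minors are (numerically) zero, so H has rank 1: a one-state
# weighted automaton suffices for this distribution.
rank_one = all(
    abs(H[i][j] * H[0][0] - H[i][0] * H[0][j]) < 1e-12
    for i in range(4) for j in range(4)
)
print(rank_one)  # True
```

Spectral algorithms exploit this fact in reverse: a low-rank factorization of an empirical Hankel matrix yields the parameters of a weighted automaton.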
Finally, I will describe some research lines on unsupervised spectral learning of natural language grammars that I will pursue in the next few years.
Classical solution concepts in game theory, such as the Nash equilibrium and the subgame-perfect equilibrium, are based on the assumption that players make their choices on a purely individual basis and that they are not able to coordinate their actions through binding agreements. This sometimes yields counter-intuitive results, such as in the Prisoner's Dilemma. In most real-world situations similar to the Prisoner's Dilemma, people can negotiate and jointly agree to choose their actions in a way that prevents them from hurting each other, if necessary with the help of legally binding contracts.
In this talk I will therefore introduce a new game-theoretical solution concept that does take into account the possibility for the players to make binding agreements about their actions. I will use a classical text-book game known as the Centipede Game as an example, and show how this new solution concept prescribes a more satisfactory outcome than the classical subgame-perfect equilibrium. Furthermore, I will present experimental results obtained with a negotiation algorithm based on Monte Carlo Tree Search.
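For reference, the classical subgame-perfect analysis can be computed by backward induction on a short Centipede Game (the payoffs below are an assumed textbook-style instance): the game unravels and player 1 takes at the very first node, which is exactly the counter-intuitive outcome the new solution concept addresses.

```python
# payoffs[k] = (player-1 payoff, player-2 payoff) if "Take" happens at node k;
# the last entry is the split if everyone passes. The pot grows with k.
payoffs = [(1, 0), (0, 2), (3, 1), (2, 4), (5, 3), (4, 6), (6, 6)]

def backward_induction(k=0):
    """Return the payoff pair reached from node k under subgame-perfect play."""
    if k == len(payoffs) - 1:
        return payoffs[k]                  # terminal node: nobody took
    mover = k % 2                          # players alternate: 0, 1, 0, ...
    take = payoffs[k]
    keep_going = backward_induction(k + 1)
    # the mover compares taking now against the SPE outcome of continuing
    return take if take[mover] >= keep_going[mover] else keep_going

print(backward_induction())  # (1, 0): player 1 takes immediately
```

Both players would prefer the (6, 6) outcome at the end, yet individual-level reasoning never reaches it; a binding agreement to pass would.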
In this talk I will first give a short overview on optimization and on the related topics that have been subject of our work during the last years. In the second part of the talk I will report on an industrial project that we conducted in 2020 in cooperation with IKERLAN S. Coop. in the context of the optimization of safety-critical systems.
Computer Vision has become one of the most relevant fields of work in AI. During recent years, with the explosion of Deep Learning and access to massive data sets, Computer Vision has also become one of the main driving forces of the AI market, with multiple applications in areas of social impact such as autonomous mobility, health and well-being, intelligent media analysis, industry 4.0, etc. Tools such as Convolutional Neural Networks have become prominent and omnipresent in approaches tackling both general and specific problems, and the pace at which new solutions appear, day after day, is changing the Computer Vision research scenario dramatically. In this seminar, Prof. Fernando Vilariño (Associate Director and Group Responsible for Research Projects at the Computer Vision Centre (CVC) (http://www.cvc.uab.es/)) will provide an introduction to the main areas of impact tackled by the Computer Vision Center, introducing a number of paradigmatic examples of Computer Vision-based projects and putting emphasis on the specific techniques used. The presentation will take a very practical approach and will give those interested in delving deeper into the Computer Vision field a set of pointers to dig into, both from a purely scientific and a more implementation-oriented perspective.
Presentation of the High-Performance Cluster for Artificial Intelligence of the IIIA: Technical characteristics, rules of use and operation, available software and mini user guide.
[This is an internal webinar for people working at the IIIA-CSIC]
We will provide a brief overview of the CorporIS project, funded by Spain’s Ministerio de Ciencia e Innovación. The project aims at contributing to the conceptual and theoretical foundations for a mathematical and computational model of embodied conceptualisation, driven by its potential deployment and application in cognitive musicology and musical creativity.
Prof. Mark d'Inverno: "Using a piano (I hope), AI software and a few videos I will aim to try and answer this question from the perspectives of researcher, musician and lecturer."
Mark d'Inverno has spent the last 20 years undertaking cutting-edge research at the frontiers of AI, creativity and learning – luckily for him, much of it with colleagues at the IIIA – asking how they relate to each other and how the different academic disciplines can provide us with insights into the role we want AI to play in learning, in creative practice and in society in general. Mark's PhD from UCL investigated the concepts of agency and autonomy in artificial systems, and since then he has published around 200 peer-reviewed articles and several books (including the edited book "Computers and Creativity"). Mark was formerly Pro-Warden (Pro-Vice-Chancellor) at Goldsmiths, University of London – known for an array of alumni who have contributed to the creative and cultural industries nationally and internationally – where he led on developing the College's international profile and engagement, and before that led the research and enterprise brief. He is a critically acclaimed jazz pianist (Guardian, Observer, BBC) and for nearly 40 years has led a variety of successful bands in a range of different musical genres.
Professor Gopal Ramchurn from the University of Southampton will give us a brief overview of some of the latest research he has carried out in the area of human-agent collectives and will articulate some of the key challenges that arise when building AI needs to be trustworthy by design and trusted in practice. Then he will detail the programme of the UKRI Trustworthy Autonomous Systems Hub (www.tas.ac.uk), which is a newly funded £12m programme to coordinate a portfolio of £21m of research projects across multiple universities in the UK.
Speaker: Filippo Bistaffa - researcher at the IIIA-CSIC.
Filippo will present an approach that allows one to approximate any characteristic function game (CFG) as an induced subgraph game (ISG), a succinct game representation based on a weighted graph among the agents. The proposal outperforms existing CSG approaches for ISGs by using off-the-shelf optimisation solvers.
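The ISG representation itself is compact enough to state in a few lines (a minimal sketch with an assumed toy graph; the approximation and solver pipeline of the talk are beyond this snippet): the value of a coalition is the total weight of the edges whose endpoints both belong to it.

```python
# Induced subgraph game (ISG): agents are nodes of a weighted graph, and the
# value of a coalition C is the sum of the edge weights inside the subgraph
# induced by C. The graph below is an illustrative assumption.
edge_weights = {(0, 1): 3.0, (1, 2): -1.0, (0, 2): 2.0, (2, 3): 4.0}

def coalition_value(coalition):
    c = set(coalition)
    return sum(w for (i, j), w in edge_weights.items() if i in c and j in c)

print(coalition_value({0, 1, 2}))  # 3.0 + (-1.0) + 2.0 = 4.0
```

Succinctness is the point: the weighted graph has one number per pair of agents, while a general characteristic function assigns a value to each of the exponentially many coalitions.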
Marta R. Costa-jussà, a Ramon y Cajal Researcher at the Universitat Politècnica de Catalunya (UPC, Barcelona), will give some deep insights into (spoken) multilingual language translation pursuing similar quality for all languages. We will also discuss how we can efficiently add new languages to a highly multilingual system. Finally, we will look at the fairness challenge: why do neutral words such as “doctor” tend to be assigned the “male” gender when translated into a language that requires gender inflection for this word?
Professor Juan Antonio Rodríguez will present to us the AI4EU project. AI4EU is the European Union’s landmark Artificial Intelligence project, which seeks to develop a European AI ecosystem, bringing together the knowledge, algorithms, tools and resources available and making it a compelling solution for users. Involving 80 partners, covering 21 countries, the €20m project kicked off in January 2019 and will run for three years.
Professor Cecilio Angulo will present the IDEAI-UPC (https://ideai.upc.edu/en) research lab to us. Cecilio is the current director of IDEAI-UPC. He will present some of the projects being carried out at IDEAI that may open up new lines of collaboration between the two institutes.
The Doctoral Consortium will take place on July 21 and 22. Due to concerns regarding COVID-19 the DC2020 will be held online.
Crowdsourcing is often associated with the darker side of Artificial Intelligence...
This year's Christmas Concert coincides with the 25th anniversary of the institute.
The event will consist of a couple of round tables on the history and future of the IIIA and a lecture by Professor Luc Steels, followed by lunch.
To mark the 25th anniversary of the Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC), we are pleased to invite you to the event "25 anys fent IA: El Repte d'Innovar" ("25 Years Doing AI: The Challenge of Innovating"), which will take place on 18 December at the institute itself.
These sessions are part of the initial actions of AIHUB.CSIC and aim to complete the map of CSIC competences by bringing together representatives and members of the groups active in artificial intelligence.
Dr. Ramon López de Mántaras (an expert in Artificial Intelligence) and Dr. Jordi Isern (an expert in Space Sciences) will discuss the impact of artificial intelligence on crewed interplanetary travel, as well as its ethical limits and its implications for human beings: a dialogue at the frontier between science and philosophy, posing future challenges now that 50 years have passed since we set foot on the Moon and trips to Mars are being considered.
The Doctoral Consortium will take place on July 16 and 17.
The Doctoral Consortium will take place on July 17 and 18.
The Doctoral Consortium will take place on July 18 and 19.
The Doctoral Consortium will take place on July 21 and 22.
The Doctoral Consortium will take place on July 15.
The Doctoral Consortium will take place on July 16.
The Doctoral Consortium will take place on July 22 and 23.
The Doctoral Consortium will take place on June 19, 20 and 21.
The Doctoral Consortium will take place on June 20 and 21.