The EU-funded GLOTECH project is a study of the role of technology in processes of modernisation and globalisation using the press, big data and computational research methods. It will explore the role of technology as a driver of time standardisation in Western industrialised societies, as a booster of cultural homogenisation and, as a consequence, as an agent of modernisation and globalisation. The analysis will focus on the press in European countries, the United Kingdom, and the United States. The methodology will include different computational research methods, contributing to significant advances in digital humanities and computational social sciences. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie (MSC) grant agreement No 101024996.
Elena Fernandez is a Marie Curie Post-Doctoral researcher based at the Department of Computational Linguistics, University of Zurich, and the Principal Investigator of GLOTECH. From 2019 to 2021, she was a Eurotech Post-Doctoral Fellow and the Principal Investigator of PRESSTECH. She completed a PhD in Hispanic Languages and Literatures at the University of California, Berkeley (2019), an M.A. in Spanish Studies at the University of Virginia (2013), and a B.A. in English Philology at the University of Salamanca (2011). Her research profile lies at the intersection of Computational Social Science, Digital Humanities, and Media and Communication Studies.
Values are the abstract motivations that justify opinions and actions. The pursuit of values drives human behavior and promotes cooperation. Existing research focuses on general (e.g., Schwartz) values that transcend contexts. However, the context-specific nature of values must be considered to (1) understand human decisions, and (2) engineer intelligent agents that can elicit human values and take value-aligned actions. Further, in practical applications (e.g., to conduct meaningful conversations or to identify online trends), artificial agents should be able to understand values on the fly from natural language.
We outline an approach for estimating context-specific values from text. First, the values relevant to a context must be identified. To this end, we propose Axies, a hybrid (human and AI) methodology for identifying context-specific values. Then, we examine the effectiveness of NLP models in classifying values in text. As context influences how we express values in natural language, we investigate the extent to which the learned value rhetoric can be transferred across contexts. Subsequently, we propose explainability techniques to inspect whether value classifiers have learned the context-specific connotations of values. Finally, we combine the steps above into a single method for swiftly estimating context-specific values from users.
Enrico Liscio (https://enricoliscio.github.io) is a PhD candidate in the Interactive Intelligence Group at TU Delft and part of the Hybrid Intelligence Centre. He obtained an MSc. in Systems and Control from TU Delft (the Netherlands, 2017) cum laude, and a BSc. in Automation Engineering from the University of Bologna (Italy, 2015) cum laude. Between his MSc. studies and his current position, he worked for 2.5 years as a deep learning developer and technical project lead at Fizyr (the Netherlands).
In this presentation, we discuss the use of agent and multi-agent techniques in space systems. We first identify some AI research challenges related to satellite constellations, especially concerning Earth Observation applications. These challenges range from constellation design to on-board in-space decision-making, and open opportunities for research efforts ranging from multi-agent-based simulation to distributed problem solving, by way of machine learning and game theory. We then focus on case studies related to constellation resource allocation and scheduling. The first case study concerns the allocation of exclusive orbit slots to privileged constellation users. In this problem, the constellation operator aims at allocating the resources (orbit slots) as optimally and fairly as possible, prior to any scheduling, using only some simple requirements from clients. This problem is long-term, over horizons of several months. We explore here the use of utilitarian and leximin-optimal techniques. The second case study investigates how distributed and coordinated decision techniques can be used to schedule observation tasks over such exclusive orbit portions, so that exclusive users do not disclose their own agendas. This problem is short-term, over horizons of a few hours. Here, we make use of distributed constraint optimization and sequential auctions to distribute decisions over the set of exclusive users.
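To give a flavour of the leximin criterion mentioned above, here is a small illustrative sketch (my own toy formulation, not the speaker's actual algorithm): a leximin-optimal allocation maximizes the sorted vector of user utilities lexicographically, so the worst-off user is prioritized first. The demand-based utility function and brute-force search are assumptions for illustration only.

```python
from itertools import product

def leximin_key(utilities):
    # Sort utilities ascending: leximin compares the worst-off user first.
    return tuple(sorted(utilities))

def leximin_allocate(slots, users, utility):
    """Brute-force leximin-optimal assignment of each orbit slot to one user
    (only feasible for tiny instances; shown purely to illustrate the criterion)."""
    best, best_key = None, None
    for assignment in product(users, repeat=len(slots)):
        utils = [utility(u, [s for s, a in zip(slots, assignment) if a == u])
                 for u in users]
        key = leximin_key(utils)
        if best_key is None or key > best_key:
            best, best_key = assignment, key
    return dict(zip(slots, best)), best_key

# Toy instance: 4 orbit slots, 2 users, each requesting 2 slots.
demand = {"A": 2, "B": 2}
utility = lambda u, owned: min(len(owned), demand[u]) / demand[u]
alloc, utils = leximin_allocate(["s1", "s2", "s3", "s4"], ["A", "B"], utility)
```

With symmetric demands, the leximin-optimal allocation splits the slots evenly, giving both users full utility; a purely utilitarian objective would accept the same outcome here, but the two criteria diverge when demands are unequal.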
Gauthier Picard received a Ph.D. in Computer Science from the University of Toulouse in 2004, and the Habilitation degree in Computer Science from the University of Saint-Etienne in 2014. He was an Associate Professor and then a Full Professor in Computer Science at MINES Saint-Etienne, before taking a Senior Researcher position at ONERA, the French Aerospace Lab. His research focuses on cooperation and adaptation in multi-agent systems and distributed optimization, with applications to aircraft design, ambient intelligence, intelligent transport and space operations.
Increased access to mobile devices and social media networks has changed the way people report and respond to disasters. Community-driven initiatives such as the Stand By Task Force (SBTF) or GISCorps have shown great potential by crowdsourcing the acquisition, analysis, and geolocation of social media data for disaster responders. To make social media information suitable for emergency responders, these initiatives face two main challenges: (1) most social media content, such as photos and videos, is not geolocated, which prevents the information from being used by emergency responders, and (2) they lack tools to manage volunteers' contributions and aggregate them in order to ensure high-quality and reliable results.
This seminar illustrates Crowd4EMS, a crowdsourcing platform developed under the EU project E2mC (Evolution of Emergency Copernicus Services). Crowd4EMS combines automatic methods for gathering information from social media with crowdsourcing techniques in order to manage and aggregate volunteers' contributions and ensure reliable results for emergency responders in disaster management.
Dr. Jose Luis Fernandez-Marquez is a Senior Lecturer at the University of Geneva (UNIGE) and head of the Geneva-Tsinghua Initiative Accelerator. He has a computer science background, a PhD in collective artificial intelligence, and wide experience in Citizen Science. In 2011 he joined UNIGE after defending his PhD at the Artificial Intelligence Research Institute (IIIA-CSIC). In 2014, he formally joined the Citizen Cyberlab, a partnership between UNIGE, CERN, and the United Nations Institute for Training and Research (UNITAR) that aims to encourage citizens and scientists to collaborate in new ways to solve big challenges. Since 2019, he has been technical coordinator of the Crowd4SDG EU project, which focuses on demonstrating the potential of Citizen Science for monitoring and achieving the SDGs.
His current research focuses on citizen science data quality analysis and methodologies to make citizen science data suitable for decision and policy makers.
In recent years, mobility has undergone a massive change. Car sharing, car pooling, and shared e-scooters or bikes are a few examples of new mobility services that have appeared lately. The typical paradigm of owning a car is also changing, especially with the soon-to-come autonomous vehicles. On top of that, the pandemic the world is living through has also changed mobility patterns and behaviours. With such an unpredictable situation, all stakeholders (from local authorities to mobility service providers and vehicle manufacturers) need tools to evaluate future scenarios and understand how best to respond to mobility demand. This has often been done with very specific and complicated tools, only available to specialised consultants. At Immense we have developed an easy-to-use simulation platform that allows non-expert users to quickly formulate "what if" questions regarding mobility scenarios. In this talk I'll give an overview of our platform, with special emphasis on how AI can help in this area.
Didac Busquets is a Computer Scientist specializing in Artificial Intelligence, and more specifically in agent-based simulation, task and resource allocation, self-organization, and robotics. He has a BSc (1999) and a PhD (2003) in Computer Science, both from the Technical University of Catalonia (UPC). After completing his PhD in the area of Robotics at IIIA, he obtained a Fulbright Research Fellowship to do a postdoc at Carnegie Mellon University. He then spent 5 years at the Universitat de Girona doing research on auction mechanisms, before moving to Imperial College London with a Marie Curie Fellowship to apply social science to multi-agent resource allocation. In 2015 he decided to jump to industry and joined the Transport Systems Catapult (UK) to work on mobility simulation. In 2016 he co-founded Immense Simulations, where he has been in charge of developing the core simulation engine of their platform.
In this talk I'll walk you through my research interests since I left the IIIA in 2008, mainly how I moved from working with a team of robots to working with robots interacting with humans. Such a shift of actors in the scene involves huge differences when it comes to developing robots. Human-Robot Interaction is a young field that is currently pushing the boundaries to achieve smooth interactions between people and robots, challenging many assumptions made so far in robotics. I'll talk about my research in the ALIZ-E project, focused on child-robot interaction, and some strategies aimed at sustaining long-lasting interactions.
Dr. Raquel Ros is a researcher at the Group on Media Technologies (GTM) working in the area of Human-Robot Interaction. She graduated in Computer Science from the Universitat Autònoma de Barcelona in 2003 and received her PhD from the Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC). In 2008 she moved to Toulouse as a Marie Curie fellow to work in the area of HRI at the Laboratoire d'Analyse et d'Architecture des Systèmes (LAAS-CNRS), where she worked on collaborative robotics. She then moved to Imperial College London (Personal Robotics Lab) to continue her research on social robots in educational environments. After a stint in industry at Cambridge Consultants as a user-centred designer, she is now at La Salle-Universitat Ramon Llull in Barcelona, where she studies human-robot interaction and its interdisciplinary connection with cognitive sciences, psychology, sociology, health and education, with emphasis on long-term interaction.
How can we trust systems built from machine learning components? We need advances in many areas, including machine learning algorithms, software engineering, ML ops, and explanation. This talk will describe our recent work in two important directions: obtaining calibrated performance estimates and performing run-time monitoring with guarantees. I will first describe recent work with Jesse Hostetler on performance guarantees for reinforcement learning. Then I'll review our research on providing guarantees for open category detection and anomaly detection for run-time monitoring of deployed systems. I'll conclude with some speculations concerning meta-cognitive situational awareness for AI systems.
Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability.
The Doctoral Consortium will take place on July 20 and 21.
Cognitive science informs us that cognition comprises a collection of constructive feedback processes between the environment and our perception of it. Sense-making is the process of an autonomous agent bringing its own original meaning to its environment. We model the sense-making process as the conceptual blending of image schemas with a structural description of a stimulus. The case study we have used is diagrams and their geometric configurations. Image schemas are mental structures abstracting the invariances of repeated sensorimotor contingencies such as SUPPORT, VERTICALITY and BALANCE. They structure our perception and reasoning by transferring their structure to our percepts according to the principles of conceptual blending. In our work we model the conceptual blend of various image schemas with the geometry of a diagram, obtaining a blend that reflects the interpreted diagram. The resulting blend has emergent structure, representing a meaningful diagram; for example, a Hasse diagram (representing a poset) is understood as a SCALE with levels, minimum and maximum elements, etc. Our work on diagrams can provide guidelines for effective visualizations, and our general framework can be developed into a system that constructs possible conceptual meanings for various stimulus types.
In this talk, I will explain the theories of image schemas and conceptual blending and how they approach meaning, and then I will discuss how we take advantage of them to build a computational model of the sense-making of diagrams.
Dimitra Bourou is a predoctoral researcher at IIIA. Her field of expertise may best be summarized as computational cognitive science. The goal of her doctoral research is to develop a computational framework for the sense-making of stimuli, following theories of cognitive science related to embodiment. She graduated from the interdisciplinary master’s program Brain and Mind at the University of Crete, where she was exposed to a variety of courses ranging from neuroscience, psychology and philosophy of mind to signal processing, machine learning and artificial intelligence. During that time she undertook research in affective computing, resulting in a publication on pain level estimation from videos of subjects (Bourou et al., 2018). Dimitra is also very familiar with modal logics and multiagent systems, as well as computational linguistics, through participation in extensive tutorials in several summer schools. Her first degree is in Biology.
Since their invention in 2017, "Transformer" models have revolutionized the field of natural language processing. Models such as BERT, GPT-3 and T5 have achieved state-of-the-art performance in many challenging NLP tasks, getting closer and closer to human performance. Moreover, Transformer-based models are also starting to make inroads into other areas such as computer vision, rivaling traditional convolutional architectures. However, despite their success, Transformers have many limitations. In this talk, I'll discuss our most recent work at Google on pushing the limits of Transformer models to address such limitations. In particular, I'll talk about our work towards solving tasks that require the models to process very long inputs (e.g., question answering over long documents), structured inputs (where the inputs are not just raw sequences of words or pixels but have some sort of graph structure), and tasks that require compositional generalization.
Santiago Ontañón is a Research Scientist at Google Research. His research focus lies at the intersection of AI and machine learning, with applications to natural language processing and computer games. He is also an Associate Professor at Drexel University (on leave). He obtained his PhD at the Artificial Intelligence Research Institute (IIIA) in Barcelona, and held postdoctoral positions at IIIA, the Georgia Institute of Technology and the University of Barcelona.
The view of optimal control as probabilistic inference is being rediscovered again and again in planning and reinforcement learning. It has recently gained interest with the use of deep learning to represent policies and value functions, and the widespread use of entropy regularization in reinforcement learning. This seminar will introduce and explain the class of Kullback-Leibler control problems (also known as linearly-solvable optimal control) and define its relation with entropy-regularized reinforcement learning. I will present the discrete and continuous formulation of this framework for control and inference and present recent advances that exploit its analytical properties for efficient policy optimization. These advances lead to a practical algorithm in the reinforcement learning setting that can be applied to high-dimensional robotics tasks, addressing the main challenge of translating this theory into practical methods for large-scale control problems.
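As a toy illustration of the linearly-solvable (Kullback-Leibler control) setting described above, the following sketch computes the desirability function on a small chain of states. This is my own minimal example of the standard first-exit formulation, not code from the talk: the optimal cost-to-go satisfies a *linear* equation in the desirability z = exp(-V), which can be solved by simple iteration.

```python
import numpy as np

# First-exit linearly-solvable MDP on a 5-state chain.
# Passive dynamics P: an uncontrolled random walk; state cost q = 1
# everywhere except at the absorbing goal state.
n, goal = 5, 4
P = np.zeros((n, n))
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5
P[goal, goal] = 1.0
q = np.ones(n)
q[goal] = 0.0

# The desirability z = exp(-V) solves the linear Bellman equation
# z = exp(-q) * (P @ z); iterate to its fixed point.
z = np.ones(n)
for _ in range(1000):
    z = np.exp(-q) * (P @ z)
    z[goal] = 1.0  # boundary condition at the absorbing goal

V = -np.log(z)  # optimal cost-to-go

# Optimal controlled dynamics: p*(x'|x) is the passive dynamics
# reweighted by desirability, p*(x'|x) ∝ P(x'|x) z(x').
pi = P * z[None, :]
pi /= pi.sum(axis=1, keepdims=True)
```

The cost-to-go V decreases monotonically toward the goal, and the controlled transition probabilities tilt the random walk toward high-desirability states, which is exactly the sense in which the control problem reduces to inference.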
Adaptive Smoothing for Path Integral Control
Dominik Thalmeier, Hilbert J. Kappen, Simone Totaro, Vicenç Gómez; 21(191):1−37, 2020.
Vicenç Gómez received the Computer Science engineering degree in 2002 from the Universitat Politècnica de Catalunya, and the PhD in Computer Science and Digital Communication from the Universitat Pompeu Fabra (UPF), Barcelona, in 2008. He has been a postdoctoral researcher at the Radboud University Medical Center (2009–2011) and at the Donders Institute for Brain, Cognition and Behaviour (2011–2014) in Nijmegen (the Netherlands). He has held visiting appointments at Los Alamos National Laboratory (USA), the IAS group at Technische Universität Darmstadt (Germany), and University College London (UK). In 2014 he obtained a transnational academic career grant (FP7 Marie Curie Actions) and joined the Artificial Intelligence and Machine Learning group at the Department of Information and Communication Technologies (UPF). In 2016 he was awarded a Ramón y Cajal fellowship. He is currently a tenure-track professor at UPF. His main research interests are machine learning and optimal control, with applications to areas such as complex networks, robotics, and brain-computer interfaces.
From climate change and ecosystem and habitat destruction to the spread of infectious diseases such as COVID-19, many contemporary societal challenges are exacerbated by collective action problems. In these situations, groups would benefit from a shared outcome, but the incentives available to individuals drive them to free ride. While laws, treaties and other formal institutions could in principle address these global issues and create cooperation, they are often unavailable, unenforceable, or insufficient, and informal institutions, such as social norms, become essential. Under the right conditions, poor and destructive norms may disappear and new norms may spontaneously emerge, which motivate people to act against their self-interest and cooperate for the good of the collective. Despite their importance, evidence on the causal effect of social norms in promoting cooperation in humans is still limited. In this talk, I will present work on the formation and change of social norms and their effect in promoting human cooperation. I will discuss results from recent laboratory experiments and agent-based simulations showing that social norms are causal drivers of behavior and can explain cooperation-related regularities.
Giulia Andrighetto is a researcher at the Institute of Cognitive Sciences and Technologies (ISTC) at the National Research Council of Italy, where she coordinates the Laboratory of Agent Based Social Simulation (LABSS). She is also a researcher at Mälardalen University, Västerås, Sweden. Her research examines the nature and dynamics of social norms, namely how norms may emerge and become stable, why norms may suddenly change, how it is possible that inefficient or unpopular norms survive, and what motivates people to obey norms. In 2013, she was awarded the Ricercat@mente Prize for the best Italian researcher under 35 in the field of social sciences & humanities by the National Research Council and the Accademia dei Lincei. In 2016, she was awarded a Wallenberg Academy Fellowship by the Knut and Alice Wallenberg Foundation, Sweden.
A brain-machine interface (BMI) is a system that enables users to interact with computers and robots through the voluntary modulation of their brain activity. Such a BMI is particularly relevant as an aid for patients with severe neuromuscular disabilities, although it also opens up new possibilities in human-machine interaction for able-bodied people. Real-time signal processing and decoding of brain signals are certainly at the heart of a BMI. Yet, this does not suffice for subjects to operate a brain-controlled device.
In the first part of my talk I will review some of our recent studies, most involving participants with severe motor disabilities, that illustrate additional principles of a reliable BMI that enable users to operate different devices. In particular, I will show how an exclusive focus on machine learning is not necessarily the solution, as it may not promote subject learning. This highlights the need for a comprehensive mutual learning methodology that fosters learning at the three critical levels of machine, subject and application. To further illustrate that BMI is more than just decoding, I will discuss how to enhance subject learning and BMI performance through appropriate feedback modalities. Finally, I will show how these principles translate to motor rehabilitation, where in a controlled trial chronic stroke patients achieved a significant functional recovery after the intervention, which was retained 6-12 months after the end of therapy.
Dr. José del R. Millán is a professor and holds the Carol Cockrell Curran Endowed Chair in the Department of Electrical and Computer Engineering at The University of Texas at Austin. He is also a professor in the Department of Neurology of the Dell Medical School.
He received a PhD in computer science from the Technical University of Catalonia, Barcelona, in 1992. Previously, he was a research scientist at the Joint Research Centre of the European Commission in Ispra (Italy) and a senior researcher at the Idiap Research Institute in Martigny (Switzerland). Most recently, he held the Defitech Foundation Chair in Brain-Machine Interface at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, where he helped establish the Center for Neuroprosthetics.
Dr. Millán has made several seminal contributions to the field of brain-machine interfaces (BMI), especially based on electroencephalogram signals. Most of his achievements revolve around the design of brain-controlled robots. He has received several recognitions for these seminal and pioneering achievements, notably the IEEE-SMC Norbert Wiener Award in 2011, elevation to IEEE Fellow in 2017, and election as a fellow of the International Academy of Medical and Biological Engineering in 2020. In addition to his work on the fundamentals of BMI and the design of neuroprosthetics, Dr. Millán is prioritizing the translation of BMI to end-users suffering from motor and cognitive disabilities. In parallel, he is designing BMI technology to offer new interaction modalities for able-bodied people.
Agent-based simulation and social norms marked Daniel's scientific journey during his time at the IIIA-CSIC. At the time he combined diverse research areas such as agent-based simulation, game theory, social network analysis, experimental economics and human-computer interaction in order to better understand how societies can improve their self-governance. After graduating, he moved into applied research, investigating how empirical data could be used to improve all those methods.
All this happened during the emergence of a new discipline that ended up being called "the sexiest job of the 21st century". In this talk we'll review the key elements of building data products that help companies improve their decision making, using real examples from industry.
Dr. Daniel Villatoro is Chief Data Scientist at Openbank (Grupo Santander) and Co-founder of Databeers (an NGO for the cultural dissemination of data projects, present in 27 locations in 16 countries around the world).
Recent research has demonstrated that AI can introduce new risks and vulnerabilities into a system. In particular, I will talk about two main risks: security and privacy. I will show the attacks that can be performed to exploit AI models and compromise the systems that use them, and that AI-based systems can be privacy-intrusive. I will then outline our current research and projects on making AI safer.
Dr Jose M Such is Reader in Security and Privacy at King’s College London and Director of the King’s Cybersecurity Centre, an Academic Centre of Excellence in Cyber Security Research (ACE-CSR) recognised by NCSC (part of GCHQ) and EPSRC. Dr Such was a Senior Lecturer at King’s College London from 2016 to 2018, and before that a Lecturer at Lancaster University from 2012 to 2016. His research interests are at the intersection of Artificial Intelligence, Human-Computer Interaction, and Cyber Security. His research has been funded through a multi-million pound portfolio of projects by UKRI, EPSRC, Google, ICO, the UK Government, and InnovateUK.
The mathematician and inventor Charles Babbage wrote 26 programs between 1836 and 1841 for the unfinished "Analytical Engine" (AE). The code is embedded implicitly in tables summarizing program traces. In this talk, I present the programming architecture of Babbage’s mechanical computer based on the first code written for the machine. The AE had a processor separate from memory, and worked using a kind of dataflow approach. The stream of arithmetical operations was independent from the stream of memory addresses. Special "combinatorial" cards allowed the processor to execute FOR and WHILE loops. Combinatorial cards also allowed independent looping through the stream of memory addresses. Quite sophisticated computations were possible, which illustrates why Babbage talked about the possibility of doing "algebra" with his machine. The programs I will discuss predate by several years the account published by Menabrea in 1842 and translated later by Lady Lovelace with notes of her own.
Raúl Rojas González is a professor in the Department of Mathematics and Computing at the Free University of Berlin. He is a graduate of the IPN (Mexico), where he obtained his Bachelor's and Master's degrees in mathematics. He later completed his doctoral studies and obtained the habilitation in Computer Science at the Free University of Berlin. He has written about the history of computing and is the author of the book "The First Computers" (MIT Press, 2000). His articles on the Babbage machine have appeared in German journals and in the Annals of the History of Computing. Raúl Rojas was Professor of the Year in Germany in 2014 and received the National Science Award of Mexico in 2015.
Automated assessment and feedback of open-response assignments remain a challenge in Computer Science despite recent milestones in fields such as natural language processing. Even if we could make quality assessments completely independently of human supervision, many would argue against it. Competence assessment is a sensitive topic, possibly impacting the issuance of a certificate that asserts a student is ready to enter the labour market or to continue their progress in the education system.
Despite the efforts to "open" the black box of neural networks, current neural models are rarely equipped with logical narratives of the decision chains that lead them to a final prediction or classification. Nevertheless, transparency and explainability are desirable requirements for automated assessment systems.
For all of the above, many researchers propose hybrid solutions combining the benefits of automation with human judgement; probabilistic models of competence assessment are one such approach. In this work, two probabilistic models of peer assessment (PG1-bias and PAAS) are replicated and compared. We also present PG-bivariate, a model combining the approaches of the first two.
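To convey the flavour of a bias-aware peer-assessment model, here is a minimal sketch, my own simplification rather than the PG1-bias, PAAS, or PG-bivariate models themselves: each observed grade is modelled as a true score plus a per-grader bias, and the two are estimated by alternating averaging instead of full Bayesian inference.

```python
def fit_bias_model(observed, iters=50):
    """Fit a toy grade = score(submission) + bias(grader) model.

    observed: dict mapping (grader, submission) -> numeric grade.
    Returns estimated true scores and grader biases.
    """
    graders = sorted({g for g, _ in observed})
    subs = sorted({s for _, s in observed})
    bias = {g: 0.0 for g in graders}
    score = {s: 0.0 for s in subs}
    for _ in range(iters):
        # True-score estimate: average of bias-corrected grades.
        for s in subs:
            grades = [v - bias[g] for (g, s2), v in observed.items() if s2 == s]
            score[s] = sum(grades) / len(grades)
        # Bias estimate: average residual of each grader's grades.
        for g in graders:
            resid = [v - score[s] for (g2, s), v in observed.items() if g2 == g]
            bias[g] = sum(resid) / len(resid)
    return score, bias

# Toy data: grader X grades one point high, grader Y one point low.
observed = {("X", "a"): 9.0, ("X", "b"): 7.0, ("Y", "a"): 7.0, ("Y", "b"): 5.0}
score, bias = fit_bias_model(observed)
```

On this toy data the procedure recovers true scores of 8 and 6 and biases of +1 and -1; the probabilistic models in the talk additionally place priors over scores and biases and quantify uncertainty.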
Alejandra López de Aberasturi is a PhD candidate at the IIIA-CSIC.
Reinforcement learning (RL) is a field of AI in which actions are taken based upon states, transitions, expected rewards and other available information. However, when the state and action spaces are not discrete or finite, RL needs to be reformulated and other methods must be applied. In this presentation, I will talk about some of these changes and the methods used for learning in continuous state-action spaces, and their application to robot motion learning.
Adrià Colomé is a postdoctoral researcher at the Institut de Robòtica i Informàtica Industrial. His PhD focused on robot motion learning from several perspectives, such as robot kinematics, robot dynamics, reinforcement learning in latent spaces, sample efficiency for robot learning, and learning robot motion adaptability. Currently, he is working on the challenging topic of learning to manipulate cloth.
The overarching aim of the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI) is to train the first generation of AI scientists and engineers in methods of safe and trusted AI. An AI system is considered safe when we can provide assurances about the correctness of its behaviour, and it is considered trusted if the average user can have confidence in the system and its decision making. The CDT focuses particularly on the use of model-based AI techniques for ensuring the safety and trustworthiness of AI systems. Model-based AI techniques provide an explicit language for representing, analysing and reasoning about systems and their behaviours. Models can be verified and solutions based on them can be guaranteed as safe and correct, and models can provide human-understandable explanations and support user collaboration and interaction with AI – key for developing trust in a system. In this talk, we will present the central vision, programme, and core research areas.
Dr Natalia Criado is a Senior Lecturer in Computer Science at King's College London and a member of the UKRI Centre for Doctoral Training (CDT) in Safe and Trusted Artificial Intelligence (STAI).
This webinar is a PhD thesis defense.
This PhD thesis contributes to the systematic study of Horn clauses of predicate fuzzy logics and their use in knowledge representation for the design of an art painting style classification algorithm. We first focus the study on relevant notions in logic programming, such as free models and Herbrand structures in mathematical fuzzy logic. We show the existence of free models in fuzzy universal Horn classes, and we prove that every equality-free consistent universal Horn fuzzy theory has a Herbrand model. Two notions of minimality of free models are introduced, and we show that these notions are equivalent in the case of fully named structures. Then, we use Horn clauses combined with qualitative modelling as a fuzzy knowledge representation framework for art painting style categorization. Finally, we design a painting style classifier based on evaluated Horn clauses, qualitative colour descriptors, and explanations. This algorithm, called l-SHE, provides reasons for the obtained results and achieves competitive accuracy in our experiments.
Machine learning enables new approaches to inverse problems in many fields of science. We present a novel probabilistic programming framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol, which allows general-purpose inference engines to record and control random number draws within simulators in a language-agnostic way. The execution of existing simulators as probabilistic programs enables highly interpretable posterior inference in the structured model defined by the simulator code base. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the tau lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. Inference efficiency is achieved via amortized inference where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of a Markov chain Monte Carlo baseline.
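To give a flavour of treating a simulator as a probabilistic program, here is a minimal importance-sampling sketch over a toy stochastic "simulator". This is an illustrative stand-in of my own, not the paper's execution protocol, neural proposal, or physics simulator: the simulator's prior is used as the proposal, and each run is weighted by the likelihood of the observation.

```python
import math
import random

def simulator():
    """Toy stochastic simulator: draws a latent rate from an Exp(1) prior."""
    return random.expovariate(1.0)

def likelihood(obs, rate, sigma=1.0):
    # Gaussian observation model around the latent rate (unnormalized).
    return math.exp(-0.5 * ((obs - rate) / sigma) ** 2)

def posterior_mean(obs, n=20000, seed=0):
    """Importance sampling with the simulator's own randomness as the
    proposal: run the simulator, weight each run by the likelihood."""
    random.seed(seed)
    num = den = 0.0
    for _ in range(n):
        rate = simulator()
        w = likelihood(obs, rate)
        num += w * rate
        den += w
    return num / den

est = posterior_mean(3.0)
```

For an observation of 3.0 the weighted estimate lands near 2.0, the mode implied by combining the Exp(1) prior with the Gaussian likelihood; amortized inference replaces the prior proposal with a learned network, greatly reducing the number of simulator runs needed.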
Dr Atilim Güneş Baydin is a Departmental Lecturer in machine learning at the Department of Computer Science and a Senior Researcher in machine learning at the Department of Engineering Science, University of Oxford. He works with Philip H. S. Torr as a member of Torr Vision Group. He is also a Research Member of the Common Room at Kellogg College, a research consultant for Microsoft Research Cambridge, and a member of the European Lab for Learning and Intelligent Systems (ELLIS).
"Let's think about our presentations: besides being informative, are they engaging and motivating?" Carme will give us some hints on how to improve our scientific oral presentations and how to make them lively and motivating for the audience.
Carme Roig is in charge of educational and technological innovation at STBCO, Department of Education, Catalonia, Spain. She has a degree in English Philology (UB, 1986) and a Master’s degree in TESOL (Institute of Education, UCL, 1996). She has coordinated a team of teachers who work to introduce and promote cooperative learning practices in high schools as members of XCB (Xarxa de Competències Bàsiques). In recent years, she has been working with several research groups (Goldsmiths College, London; University of Ghent, Belgium; SONY Labs, Paris; IIIA-CSIC, Bellaterra) on issues of collaborative distance learning, self-assessment and co-assessment, and has participated in experiments to validate a number of computer tools developed within the framework of a European project and in collaboration with the Research Institute for Artificial Intelligence (IIIA-CSIC). These tools include the automatic design of lesson plans, assessment tools for large numbers of students (MOOCs) and tools for grouping students. Her main interests are automated team formation, collaborative work, formative assessment and multilingualism.
Optical imaging methods using fluorescence indicators are critical for monitoring the activity of large neuronal populations in vivo. Imaging experiments typically generate a large amount of data that needs to be processed to extract the activity of the imaged neuronal sources. While deriving such processing algorithms is an active area of research, most existing methods require the processing of large amounts of data at a time, rendering them vulnerable to the volume of the recorded data, and preventing real-time experimental interrogation. In this talk I will describe CaImAn Online, a framework for the analysis of streaming calcium imaging data, including i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Our approach combines and extends previous work on online dictionary learning and calcium imaging data analysis, to deliver an automated pipeline that can discover and track the activity of hundreds of cells in real time, thereby enabling new types of closed-loop experiments.
Dr. Andrea Giovannucci is an Assistant Professor in Neural Engineering at the UNC/NCSU department of Bioengineering. Prior to this appointment, Dr. Giovannucci was a machine learning data scientist at the Flatiron Institute (Simons Foundation) and a postdoctoral fellow (experimental neuroscience) at the Princeton Neuroscience Institute. Dr. Giovannucci obtained his PhD in artificial intelligence from the Autonoma University of Barcelona and the Artificial Intelligence Research Institute of Bellaterra (IIIA-CSIC), Spain. Dr. Giovannucci is affiliated with the UNC/NCSU joint Bioengineering department, the Closed-loop Engineering for Advanced Rehabilitation (CLEAR) and the UNC Neuroscience Center.
Transition Edge Sensor (TES) detectors, like the one that will be on board the Athena X-ray Observatory, produce current pulses in response to incident X-ray photons. The reconstruction of these pulses aims at recovering the energy of the impacting photon, its arrival time and its physical position in the detector. This has traditionally been performed by means of a triggering algorithm based on the derivative signal crossing a threshold (detection), followed by optimal filtering (to retrieve the energy of each event). However, when photons arrive very close together in time, the triggering algorithm is incapable of detecting all the individual pulses. Aiming to improve the efficiency of the detection process, we explore an alternative approach based on Machine Learning techniques. For this purpose, we construct and train a series of Neural Networks (NNs), not only for detection but also to recover the energy of simulated X-ray pulses. The dataset used to train the NNs consists of simulations performed with SIXTE/xifusim, the official Athena/X-IFU simulator. Although much more expensive in terms of computational cost, our classification NN clearly surpasses the detection performance of the classical triggering approach for the full range of photon energy combinations, showing excellent metrics. The reconstruction efficiency for recovering the photon energies, however, cannot currently compete with the optimal filtering algorithm.
Complex networks are ubiquitously used to represent real systems in many contexts, such as social networks, computer networks, or biological networks, among others. Most real-world networks exhibit non-trivial topological features, and the interest in analyzing their properties has resulted in the emergence of random models to generate them. Probabilistic models are, in general, based on the probability of each edge occurring, and the topology of the network is the consequence of that probability distribution. In deep generative approaches, a model is trained to learn the features of a training set of examples and to generate new networks with similar properties. In this seminar we will review a (non-exhaustive) list of random models for complex network generation, and analyze how these models can be applied to another challenging problem: the generation of realistic random SAT instances.
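As a minimal illustration of the edge-probability view of random network models, the following sketch uses the classical Erdős–Rényi G(n, p) model (chosen here only for illustration, not necessarily one of the models covered in the seminar): every possible edge is sampled independently with the same probability.

```python
import random

def gnp_random_graph(n, p, seed=None):
    """Erdős–Rényi G(n, p): each of the n*(n-1)/2 possible edges is
    included independently with probability p. Returns an edge list."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

# Expected number of edges: p * n * (n - 1) / 2 = 495 for n=100, p=0.1.
edges = gnp_random_graph(100, 0.1, seed=42)
```

G(n, p) produces Poisson-like degree distributions, unlike the heavy-tailed distributions observed in many real networks, which is one motivation for richer probabilistic models and for deep generative approaches.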
Ant colony optimization is a metaheuristic that is mainly used for solving hard combinatorial optimization problems. Its distinctive feature is a learning mechanism based on learning from positive examples. Examples from nature, however, indicate that negative learning, in addition to positive learning, can beneficially be used for certain purposes. Several research papers have explored this topic over the last decades in the context of ant colony optimization, mostly with limited success. In this talk I present an alternative mechanism that uses mathematical programming to incorporate negative learning into ant colony optimization. The study considers two classical combinatorial optimization problems: the minimum dominating set problem and the multi-dimensional knapsack problem. In both cases our approach significantly improves over standard ant colony optimization and over the competing negative learning mechanisms from the literature.
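To make the positive-learning mechanism concrete, here is a minimal ant colony optimization sketch for a (one-dimensional) 0/1 knapsack instance. All names and parameter choices are illustrative assumptions, not the algorithm from the talk; in particular, only the iteration-best solution deposits pheromone, which is exactly the "learning from positive examples" that the talk contrasts with negative learning:

```python
import random

def aco_knapsack(values, weights, capacity, n_ants=20, n_iter=50, rho=0.1, seed=0):
    """Minimal ant colony optimization for the 0/1 knapsack problem.
    Pheromone tau[i] encodes the learned desirability of packing item i."""
    rng = random.Random(seed)
    n = len(values)
    tau = [1.0] * n
    best_items, best_value = [], 0
    for _ in range(n_iter):
        iter_best, iter_value = [], 0
        for _ in range(n_ants):
            items, load, value = [], 0, 0
            # Each ant considers the items in a pheromone-biased random order
            # and includes each feasible item with a pheromone-driven probability.
            order = sorted(range(n), key=lambda i: -tau[i] * rng.random())
            for i in order:
                if load + weights[i] <= capacity and rng.random() < tau[i] / (1.0 + tau[i]):
                    items.append(i); load += weights[i]; value += values[i]
            if value > iter_value:
                iter_best, iter_value = items, value
        # Evaporation, then positive reinforcement of the iteration-best solution.
        tau = [(1 - rho) * t for t in tau]
        for i in iter_best:
            tau[i] += rho * iter_value
        if iter_value > best_value:
            best_items, best_value = iter_best, iter_value
    return best_items, best_value

# Tiny instance: the optimum packs items 0 and 1 (weights 3 + 2 fill the
# capacity of 5 exactly) for a total value of 11.
best_items, best_value = aco_knapsack([6, 5, 4, 3], [3, 2, 3, 1], capacity=5)
```

A negative-learning variant would, in addition, penalize the pheromone of components that appear in bad solutions; the talk's contribution is to derive that negative information from a mathematical programming subproblem instead.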
Natural Language Understanding (NLU) is the broad research area in Natural Language Processing (NLP) that develops methods to analyze natural language and understand its meaning. It is a key component of any AI system that aims at truly interacting with humans. It is also a key component for automatic systems that do machine reading of the web and social media, which, given the current volumes of information, is the only practical way to access this content.
First I will give a brief overview of Natural Language Processing tasks and the evolution of machine learning approaches in recent years. Natural language is structured, very rich, ambiguous, and offers a limitless ability to say new things. Because of this, we would like machine learning algorithms that learn hidden-state compositional models of language and answer questions such as: What are the units and parts of a language? What is the meaning of each part? How do we compose parts into bigger parts? What is the meaning of a composed expression? How do we use these models to solve specific needs?
Deep learning has made great progress on these questions. Today we have giant neural models like BERT or GPT-3 that are trained at worldwide scale and have proved useful for virtually any empirical NLP task. However, it is largely unclear what these models are learning and what their capacity to generalize is (as opposed to memorizing data). Moreover, the cost of training these models is huge.
In the second part of this talk, I will focus on compositional models of language that take the form of weighted automata, which are a restricted class of recurrent neural networks. I will describe spectral learning algorithms, a family of learning algorithms that reduces the problem of learning a weighted automaton to a form of matrix learning. This reduction is based on theoretical connections between formal languages and distributions over the strings they generate. I will highlight several good properties of this family of techniques and contrast them with deep learning approaches.
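As a rough illustration of the reduction to matrix learning, the following sketch (an illustrative assumption in the spirit of the classical spectral algorithm, not the speaker's implementation) learns a weighted automaton from a Hankel matrix of string values via a truncated SVD:

```python
import numpy as np

def spectral_learn(f, prefixes, suffixes, alphabet, rank):
    """Spectral learning of a weighted automaton. f maps a string to a
    real value; here f is queried exactly, whereas in practice the Hankel
    entries would be empirical string frequencies estimated from data."""
    # Hankel blocks: H[p, s] = f(ps) and H_a[p, s] = f(pas) for each symbol a.
    H = np.array([[f(p + s) for s in suffixes] for p in prefixes])
    Hs = {a: np.array([[f(p + a + s) for s in suffixes] for p in prefixes])
          for a in alphabet}
    # Truncated SVD gives a rank-`rank` factorization of H.
    U, s, Vt = np.linalg.svd(H)
    U, s, V = U[:, :rank], s[:rank], Vt[:rank].T
    Sinv_Ut = np.diag(1.0 / s) @ U.T           # pseudo-inverse of H @ V
    # Transition, initial and final weights of the learned automaton.
    A = {a: Sinv_Ut @ Hs[a] @ V for a in alphabet}
    alpha = H[prefixes.index("")] @ V          # empty-prefix row
    beta = Sinv_Ut @ H[:, suffixes.index("")]  # empty-suffix column

    def learned(x):
        v = alpha.copy()
        for a in x:
            v = v @ A[a]
        return float(v @ beta)
    return learned
```

For a rank-1 target such as f(x) = (1/3)^(|x|+1), i.e. a uniform stopping process over two symbols, the learned automaton reproduces f exactly on strings never used to build the Hankel matrix.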
Finally, I will describe some research lines on unsupervised spectral learning of natural language grammars that I will pursue in the next few years.
Classical solution concepts in game theory, such as the Nash equilibrium and the subgame-perfect equilibrium, are based on the assumption that players make their choices on a purely individual basis and are not able to coordinate their actions through binding agreements. This sometimes yields counter-intuitive results, such as in the Prisoner's Dilemma. In most real-world situations similar to the Prisoner's Dilemma, people can negotiate and jointly agree to choose their actions in a way that prevents them from hurting each other, if necessary with the help of legally binding contracts.
In this talk I will therefore introduce a new game-theoretic solution concept that does take into account the possibility for the players to make binding agreements about their actions. I will use a classical textbook game known as the Centipede Game as an example, and show how this new solution concept prescribes a more satisfactory outcome than the classical subgame-perfect equilibrium. Furthermore, I will present experimental results obtained with a negotiation algorithm based on Monte Carlo Tree Search.
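To see why the classical answer is considered unsatisfactory, here is a minimal backward-induction sketch for a four-node centipede game (the payoffs are made up for illustration; this computes the classical subgame-perfect equilibrium, not the new solution concept from the talk):

```python
def spe_outcome(take_payoffs, pass_payoffs):
    """Backward induction on a centipede game. Players 0 and 1 alternate;
    the mover at node i either takes (ending the game with payoffs
    take_payoffs[i]) or passes. Passing at the last node yields
    pass_payoffs. Returns the SPE payoffs and the node where play stops
    (None if everyone passes)."""
    value, stop = pass_payoffs, None
    for i in range(len(take_payoffs) - 1, -1, -1):
        mover = i % 2  # player 0 moves at even nodes, player 1 at odd nodes
        if take_payoffs[i][mover] >= value[mover]:  # taking is weakly better
            value, stop = take_payoffs[i], i
    return value, stop

# The pot grows as play continues, yet backward induction unravels to the
# very first node: the result is ((1, 0), 0), i.e. player 0 takes at once.
result = spe_outcome([(1, 0), (0, 2), (3, 1), (2, 4)], (4, 3))
```

Both players would prefer almost any later outcome over (1, 0); this is the counter-intuitive prescription that a solution concept allowing binding agreements is meant to avoid.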
In this talk I will first give a short overview of optimization and of the related topics that have been the subject of our work during the last years. In the second part of the talk I will report on an industrial project that we conducted in 2020 in cooperation with IKERLAN S. Coop., in the context of the optimization of safety-critical systems.
Computer Vision has become one of the most relevant fields of work in AI. In recent years, with the explosion of Deep Learning and access to massive data sets, Computer Vision has also become one of the main driving forces of the AI market, with multiple applications in areas of social impact such as autonomous mobility, health and well-being, intelligent media analysis, Industry 4.0, etc. Tools such as Convolutional Neural Networks have become prominent and omnipresent in approaches tackling both general and specific problems, and the pace at which new solutions appear, day after day, is changing the Computer Vision research scenario dramatically. In this seminar, Prof. Fernando Vilariño (Associate Director and Group Responsible for Research Projects at the Computer Vision Centre (CVC) (http://www.cvc.uab.es/)) will provide an introduction to the main areas of impact tackled by the Computer Vision Center, introducing a number of paradigmatic examples of Computer Vision-based projects and putting emphasis on the specific techniques used. The presentation will take a very practical approach and will give those interested in delving deeper into the Computer Vision field a set of pointers to follow up on, from both a purely scientific and a more implementation-oriented perspective.
Presentation of the High-Performance Cluster for Artificial Intelligence of the IIIA: Technical characteristics, rules of use and operation, available software and mini user guide.
[This is an internal webinar for people working at the IIIA-CSIC]
We will provide a brief overview of the CorporIS project, funded by Spain’s Ministerio de Ciencia e Innovación. The project aims to contribute to the conceptual and theoretical foundations of a mathematical and computational model of embodied conceptualisation, driven by its potential deployment and application in cognitive musicology and musical creativity.
Prof. Mark d'Inverno: "Using a piano (I hope), AI software and a few videos I will aim to try and answer this question from the perspectives of researcher, musician and lecturer."
Mark d'Inverno has spent the last 20 years undertaking cutting-edge research at the frontiers of AI, creativity and learning (luckily for him, much of it with colleagues at the IIIA), asking how they relate to each other and how the different academic disciplines can give us insight into the role we want AI to play in learning, in creative practice and in society in general. Mark's PhD from UCL investigated the concepts of agency and autonomy in artificial systems, and since then he has published around 200 peer-reviewed articles and several books (including the edited book "Computers and Creativity"). Mark was formerly Pro-Warden (Pro-Vice-Chancellor) at Goldsmiths, University of London, known for an array of alumni who have contributed to the creative and cultural industries nationally and internationally, where he led the development of the College's international profile and engagement and, before that, held the research and enterprise brief. He is a critically acclaimed jazz pianist (Guardian, Observer, BBC) and for nearly 40 years has led a variety of successful bands in a range of musical genres.
Professor Gopal Ramchurn from the University of Southampton will give us a brief overview of some of the latest research he has carried out in the area of human-agent collectives and will articulate some of the key challenges that arise when building AI that needs to be trustworthy by design and trusted in practice. He will then detail the programme of the UKRI Trustworthy Autonomous Systems Hub (www.tas.ac.uk), a newly funded £12m programme that coordinates a portfolio of £21m of research projects across multiple universities in the UK.
Speaker: Filippo Bistaffa - researcher at the IIIA-CSIC.
Filippo will present an approach that allows one to approximate any characteristic function game (CFG) as an induced subgraph game (ISG), a succinct game representation based on a weighted graph among the agents. The proposal outperforms existing coalition structure generation (CSG) approaches for ISGs by using off-the-shelf optimisation solvers.
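For intuition, an induced subgraph game assigns each coalition the total weight of the edges among its members. A minimal sketch (illustrative only, with a made-up three-agent instance):

```python
def isg_value(edge_weights, coalition):
    """Induced subgraph game: a coalition's value is the sum of the
    weights of the edges of the subgraph induced by its members."""
    members = set(coalition)
    return sum(w for (i, j), w in edge_weights.items()
               if i in members and j in members)

# Three agents with one negative (conflicting) edge: the value of any
# coalition is determined entirely by this weighted graph, which is what
# makes the representation succinct.
w = {(0, 1): 3.0, (1, 2): -1.0, (0, 2): 2.0}
```

Approximating a general CFG as an ISG amounts to fitting such edge weights so that the induced coalition values are close to the original ones, after which standard graph-based solvers can be applied.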
Marta R. Costa-jussà, a Ramon y Cajal researcher at the Universitat Politècnica de Catalunya (UPC, Barcelona), will give some deep insights into (spoken) multilingual machine translation that pursues similar quality for all languages. We will also discuss how to efficiently add new languages to a highly multilingual system. Finally, we will look at the fairness challenge: why do gender-neutral words such as “doctor” tend to be assigned the “male” gender when translated into a language that requires gender inflection for this word?
Professor Juan Antonio Rodríguez will present the AI4EU project to us. AI4EU is the European Union’s landmark Artificial Intelligence project, which seeks to develop a European AI ecosystem, bringing together the knowledge, algorithms, tools and resources available and making them a compelling solution for users. Involving 80 partners covering 21 countries, the €20m project kicked off in January 2019 and will run for three years.
Professor Cecilio Angulo will present the IDEAI-UPC (https://ideai.upc.edu/en) research lab to us. Cecilio is the current director of IDEAI-UPC. He will present some of the projects being carried out at IDEAI that may open up new lines of collaboration between the two institutes.
The Doctoral Consortium will take place on July 21 and 22. Due to concerns regarding COVID-19, the DC2020 will be held online.
Crowdsourcing is often associated with the darker side of Artificial Intelligence...
This year's Christmas Concert coincides with the 25th anniversary of the institute.
The event will consist of a couple of round tables on the history and future of the IIIA, and a lecture by Professor Luc Steels, followed by lunch.
To commemorate the 25th anniversary of the Institut d'Investigació en Intel·ligència Artificial (IIIA-CSIC), we are pleased to invite you to the event "25 anys fent IA: El Repte d'Innovar" ("25 Years Doing AI: The Challenge of Innovating"), which will take place on 18 December at the institute itself.
These workshops are part of the initial actions of AIHUB.CSIC and aim to complete the map of competences of the CSIC by bringing together representatives and members of the groups active in artificial intelligence.
Dr. Ramon López de Mántaras (an expert in Artificial Intelligence) and Dr. Jordi Isern (an expert in Space Sciences) will talk about the impact of artificial intelligence on crewed interplanetary travel, as well as about its ethical limits and its implications for human beings: a dialogue at the frontier between science and philosophy, raising future challenges now that 50 years have passed since we set foot on the Moon and trips to Mars are being considered.
The Doctoral Consortium will take place on July 16 and 17.
The Doctoral Consortium will take place on July 17 and 18.
The Doctoral Consortium will take place on July 18 and 19.
The Doctoral Consortium will take place on July 21 and 22.
The Doctoral Consortium will take place on July 15.
The Doctoral Consortium will take place on July 16.
The Doctoral Consortium will take place on July 22 and 23.
The Doctoral Consortium will take place on June 19, 20 and 21.
The Doctoral Consortium will take place on June 20 and 21.