Artificial Intelligence and Agnostic Science – Doing Science in the Age of Artificial Intelligence

This international colloquium is organized within the framework of the project “Artificial Intelligence and Agnostic Science”, funded by the CNRS MITI (Mission pour les Initiatives Transverses et Interdisciplinaires). The project is scientifically led by Marco Panza, and includes Gérard Biau and Xavier Fresquet (SCAI), Christophe Denis and Jean-Gabriel Ganascia (LIP6), Philippe Codognet and Arnaud Lieffooghe (JFLI), as well as Pierre Wagner, Marion Vorms, Henri Stephanou, Olivier Rey, Alberto Naibo, Matteo Mossio, Philippe Huneman and Solange Haas (IHPST).

 

14 December 2020 (Zoom link: https://us02web.zoom.us/j/87347741829)

15h-16h: Hiroaki Kitano (President of Sony Computer Science Laboratories and chairperson of the AI Japan R&D Network)

  • Title: Nobel Turing Challenge: Creating the Engine for Scientific Discovery
  • Abstract: One of the most exciting and disruptive research directions in AI is the development of AI systems that can make major scientific discoveries by themselves, with a high level of autonomy. In this talk, I propose the “Nobel Turing Challenge” as a grand challenge bridging AI and other scientific communities. The challenge calls for the development of AI systems that can make major scientific discoveries, some of them worthy of a Nobel Prize, such that the Nobel Committee, and the rest of the scientific community, may not be able to tell whether a discovery was made by a human scientist or by an AI (Kitano, H., AI Magazine, 37(1), 2016). This challenge is particularly significant in the biomedical domain, where the progress of systems biology (Kitano, H., Science, 295, 1662-1664, 2002; Kitano, H., Nature, 420, 206-210, 2002) has resulted in an overflow of data and knowledge far beyond human comprehension. After twenty years of my journey in systems biology research, I have concluded that the next major breakthrough in systems biology requires AI-driven scientific discovery. Initially, it will be introduced as AI-assisted science, but it will eventually result in AI scientists with a high level of autonomy. This challenge poses a series of fundamental questions on the nature of scientific discovery, the limits of human cognition, the implications of individual paths toward major discoveries, the computational meaning of serendipity or scientific intuition, and many other issues that may bring AI research to its next stage.

16h-17h: Julie Jebeile (University of Bern, Switzerland)

  • Title: Understanding with Climate Models and the Impact of Machine Learning
  • Abstract: With Vincent Lam and Tim Räz, we have published a paper entitled “Understanding Climate Change with Statistical Downscaling and Machine Learning”. In this talk, I will present our main arguments. The aim of the paper is to assess the extent to which the use of machine learning techniques in climate models affects our ability to understand with those models. Understanding plays an important role in model evaluation. Yet, while machine learning techniques are presumably highly predictive, they are non-physics-based ‘black boxes’, and may therefore hamper understanding with climate models. Our strategy is to put forward evaluative criteria for understanding, and then to compare the impact of machine learning on understanding with the impact of the standard statistical techniques used to downscale the outputs of regional climate models (a toy illustration of statistical downscaling is sketched after this abstract). We show that, with respect to understanding, there is no categorical difference between the two families of techniques. There is instead a continuum of understanding along five dimensions: intelligibility, representational accuracy, empirical accuracy, physical consistency, and domain of validity.
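
For readers unfamiliar with the technique mentioned in this abstract, here is a minimal, purely illustrative sketch of statistical downscaling: a statistical model (here a plain linear regression on invented data) is trained to map coarse-grid climate-model output to a local, station-scale variable. The variable names and numbers are assumptions made for the example; they do not come from the paper under discussion.

    # Toy sketch of statistical downscaling (illustrative only): learn a
    # statistical map from coarse-grid model output to a station-scale variable.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Hypothetical data: 500 days of three large-scale predictors (e.g. mean
    # temperature, pressure, humidity on the coarse grid) and an observed
    # local temperature that they only partly explain.
    coarse = rng.normal(size=(500, 3))
    local_obs = 1.5 * coarse[:, 0] - 0.5 * coarse[:, 1] + rng.normal(scale=0.3, size=500)

    # Fit the downscaling relation on a training period...
    model = LinearRegression().fit(coarse[:400], local_obs[:400])

    # ...then apply it to new coarse-model output to estimate the local variable.
    local_pred = model.predict(coarse[400:])
    print("learned coefficients:", model.coef_)

Machine-learning-based downscaling replaces this transparent linear map with a more flexible but less interpretable learned function, which is where the question of understanding raised in the abstract becomes pressing.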

17h-18h: Viola Schiaffonati (Politecnico di Milano, Italy)

  • Title: Computers, robots and experiments
  • Abstract: In this talk I will argue that AI and robotics are engineering disciplines and that this should play a central role in the conceptualization of their experimental method. I will then present the notion of explorative experimentation and discuss how the traditional epistemic, but also ethical, categories should be partly revised to take into account some of the peculiarities of AI.

 

15 December 2020 (Zoom link: https://us02web.zoom.us/j/89849563695)

16h-17h: Sabina Leonelli (University of Exeter, UK)

  • Title: Are findings from philosophy of big data compatible with views of AI as agnostic science?
  • Abstract: I will briefly summarise some of the serious concerns around bias, discrimination and inequity emerging from critical studies of big data, such as my own short book La Recherche Scientifique à l’Ère des Big Data : Cinq façons dont les données massives nuisent à la science, et comment la sauver (Leonelli 2019; see also Leonelli 2020). I will then start a discussion around whether such concerns are compatible with the understanding of AI and related data science as ‘agnostic’ – and suggest that this may not be the case, given the substantive conceptual, social and political commitments brought into AI technologies by the data used to train and implement those systems.

17h-18h: Daniele Struppa (Chapman University, USA)

  • Title: Agnostic Science and Mathematics
  • Abstract: I will present joint work with D. Napoletani and M. Panza. After describing the notion of agnostic science, I will discuss how the rise of such an approach has impacted, and is impacting, mathematics: both in terms of how we do mathematics and in terms of what we expect from mathematics. If time allows, I will discuss Brandt’s Principle from the theory of algorithms, and its biological homologue, the Principle of Developmental Inertia, which may be helpful in developing conjectures about the reasons behind the success of agnostic science.

 

16 December 2020: session dedicated to young researchers (Zoom link: https://us02web.zoom.us/j/81558102493)

Presentations will take place either in French or in English.

15h30-16h15: Lyu Fu (Master’s student, University Paris 1 Panthéon-Sorbonne and AIAS trainee, France)

  • Title: Is data enough? The role of models in data-intensive sciences
  • Abstract: The postulate of Big Data is the following: if we have collected enough sufficiently diverse data, then we can answer most questions about a phenomenon, even without a structural and general understanding of the underlying mechanism. Theories are therefore dead, and so are theoretical models. Science is becoming agnostic. But is this postulate, which seems to be confirmed by the success of the “fourth paradigm”, really plausible?

16h15-17h00: Edwige Cyffers (Master’s student, University Paris 1 Panthéon-Sorbonne, France)

  • Title: Differential Privacy: a new notion of privacy for a new paradigm of data?
  • Abstract: Recent technological advances, which make the storage and processing of large amounts of data affordable, have led to the increasing collection of sensitive data. A consequence of the Big Data phenomenon is that the traditional definition of personal information, used to draw the boundary of privacy, seems inadequate. Indeed, “anonymized” datasets can be massively re-identified, and each re-identification in turn fuels further de-anonymization, in a vicious circle. Differential Privacy has been proposed as a more relevant way to measure the extent to which a contribution to a database threatens the individual, and it has become the gold standard in Machine Learning research. I will briefly present the key points of the definition (recalled just below), highlight its strengths, and discuss the issues raised by this framework.
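
As background for the abstract above, and not part of the speaker’s material, the standard definition (due to Dwork and colleagues) can be recalled as follows: a randomized mechanism M is ε-differentially private if, for any two datasets D and D′ that differ in a single individual’s record, and for every set S of possible outputs,

    Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S].

Smaller values of ε give stronger guarantees, since the distribution of the mechanism’s output is then nearly insensitive to any single individual’s contribution.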

17h00-17h45: Maxime Darrin (Master’s student, University Paris 1 Panthéon-Sorbonne, France)

  • Title: What notion of robustness for AI?
  • Abstract: The wide and rapid spread of AI in society inevitably leads to a need for some form of regulation, especially for critical applications such as autonomous driving or medical diagnosis. In order to allow policymakers to design regulatory frameworks, we need notions of robustness for machine learning models. Indeed, the usual evaluation by testing has shown critical limits with the discovery of adversarial attacks, highlighting the need for better assessment methods. I propose to explore the different notions of robustness that exist in other fields and are already in use to build legal frameworks. I will discuss what properties a good notion of robustness for AI should have for policymakers, and what mathematical or formal notions could fit those requirements (one candidate is sketched below).
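
To fix ideas, and as an assumption on my part rather than a position the speaker is committed to, one formal notion often discussed in this context is local adversarial robustness: a classifier f is robust at an input x, for a radius ε measured in some norm ‖·‖, if

    ‖x′ − x‖ ≤ ε  ⟹  f(x′) = f(x)  for every perturbed input x′.

Certification methods try to prove this property for given x and ε, which is one candidate for the kind of formal guarantee a regulatory framework could reference.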

17h45-18h30: Julio Cárdenas (PhD student, Sorbonne University, France)

  • Title: Convolutional Neural Networks for Geophysical Magnetics Methods Inversion

