Institut de Recherche en Informatique Fondamentale (IRIF)

CNRS

Université Paris Cité

IRIF is a joint research unit (UMR 8243) of the CNRS and Université Paris Cité, and hosts an Inria project team.

Research at IRIF is rooted in the study and understanding of the foundations of all of computer science, in order to provide innovative answers to the current and future challenges of the digital sciences.

IRIF brings together nearly two hundred people. Seven of its members have been awarded European Research Council (ERC) grants, three are members of the Institut Universitaire de France (IUF), two are members of the Academia Europæa, and one is a member of the Académie des sciences.

Follow us on LinkedIn, Bluesky and Mastodon:

LinkedIn  Bluesky  Mastodon


4.11.2025
The next « On éteint, on réfléchit, on discute » (“switch off, think, discuss”) conference will take place on Tuesday 18 November. It will be about AI.

Reminder: these talks are aimed at the students of the computer science department (UFR d'informatique), but not only at them: anyone interested is welcome.

AI: behind the grand speeches, what realities? In recent years, AI has been making headlines: we are told of technological revolutions every six months, of the imminent disappearance of most jobs, of colossal investments (hundreds of billions of dollars), of applications for just about everything, of superhuman intelligence in most domains… Beyond the thundering rhetoric, we will try to demystify this phenomenon and take stock of it. What scientific advances in AI have been made in recent years? How should we decode the media narrative around AI? What kind of society, and what kind of planet, does AI really promise us?

With: Claire Mathieu, CNRS senior researcher at IRIF, and Thibault Prévost, journalist and author of « Les prophètes de l'IA. Pourquoi la Silicon Valley nous vend l'apocalypse » (Lux, 2024).

Demystifying the quantum: Sophie Laplante's IUF outreach chair

24.10.2025
Read the interview with Sophie Laplante, “Demystifying the quantum: Sophie Laplante's IUF outreach chair”.

Delia Kesner's Festschrift

30.9.2025
On 12 September, IRIF hosted Delia Kesner's Festschrift, an occasion to celebrate Delia Kesner's research career (and her birthday!).

https://www.tcs4f.org/

26.9.2025
TCS4F (Theoretical Computer Scientists for Future), of which Thomas Colcombet and Hugo Férée of IRIF are founding members, is launching the « low-CO2 research paper » label, which provides graphical tools and usage rules for stating on one's publications that the carbon cost of the research and of its dissemination is low.

More details are available on this page.

https://malinca.gitlabpages.inria.fr/malinca.gitlab.io/events-kickoff/

30.9.2025
MALINCA (Mathematicae Lingua Franca) aims to develop proof-assistant technologies of an entirely new nature, including a formal language and a foundational approach to mathematical meaning, with the versatility necessary to represent the dynamic linguistic structures found in the daily practice of mathematics.

The kickoff meeting, “Bridging the Linguistic Gap Between the Mathematician and the Machine”, will be held at the Institut Henri Poincaré on October 1–3, 2025.

Village des sciences de l'Université Paris-Cité

9.10.2025
On Friday 10 and Saturday 11 October, IRIF will take part in the Village des sciences of Université Paris Cité.

Come and discover IRIF, meet our scientists, and take part in our unplugged computer science workshops.


(These news items are ordered by a mix of priority and randomness.)

Enumerative and analytic combinatorics
Thursday 13 November 2025, 11:00, Room 4071
Hadrien Notarantonio (LaBRI – IRIF) Not yet announced.

Proofs, programs and systems
Thursday 13 November 2025, 10:00, Room 3052
Damien Pous (CNRS, ENS Lyon, LIP) Coinduction-free coinduction, on the board and in Rocq

Bisimilarity, and more generally coinductive predicates, are usually presented as greatest fixpoints. In this setting, Knaster-Tarski's fixpoint theorem corresponds to the well-known coinduction principle, for instance: “to prove some processes bisimilar, find a bisimulation containing them”. By moving to Kleene iteration, and following Schäfer and Smolka, I will show how to emulate such a coinduction principle using an induction principle instead: “to prove some processes bisimilar, show that they resist one more observation”. This principle makes it trivial to develop “up-to techniques” in a modular way; it is also much more convenient when it comes to formalisation: it makes it possible to freely use the host language and avoids the need for guardedness checks. I will illustrate this by showing how to formalise simple examples in Rocq, using the coinduction library I developed.
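As a concrete, if much more pedestrian, illustration of the fixpoint view: on a *finite* labelled transition system, bisimilarity can be computed by Kleene iteration from the top of the lattice, starting from the full relation and discarding pairs that fail the one-step transfer condition until the relation is stable. A minimal sketch (unrelated to the Rocq library mentioned in the talk; all names are illustrative):

```python
# Strong bisimilarity on a finite labelled transition system, computed as a
# greatest fixpoint by Kleene iteration: start from the full relation and
# remove pairs that fail the one-step transfer condition until stable.
def bisimilarity(states, trans):
    """trans maps (state, label) to the set of successor states."""
    labels = {label for (_, label) in trans}

    def succ(s, label):
        return trans.get((s, label), set())

    rel = {(p, q) for p in states for q in states}  # top of the lattice
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            transfer = all(
                all(any((p2, q2) in rel for q2 in succ(q, a)) for p2 in succ(p, a))
                and all(any((p2, q2) in rel for p2 in succ(p, a)) for q2 in succ(q, a))
                for a in labels
            )
            if not transfer:
                rel.discard((p, q))
                changed = True
    return rel
```

On a four-state system where states a, b, c can always fire an x-transition while d is deadlocked, the iteration relates a, b, c to each other and separates d from the rest.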

PhD defences
Thursday 13 November 2025, 12:30, Room 3052, Sophie Germain building
Adrienne Lancelot (IRIF) Comparing functional programs, or how to put λ-terms back in order

Program equivalence is a central topic in the study of programming languages. Certifying that compiler transformations do not change the semantics of a program is needed to formally prove compilers correct.

This thesis aims at further developing the theory of program equivalences for the untyped pure λ-calculus, from several viewpoints.

We propose switching from program equivalences to program preorders, a usual move in the study of programming languages but perhaps not one fully accepted at the level of the λ-calculus. We show that some central concepts of the λ-calculus can be reformulated as simple properties of preorders on λ-terms.

Contextual equivalences are the most natural notion of comparison for programs: contextual comparison tests programs in every execution environment and checks that they behave similarly. This equivalence is, however, mostly unusable for actually proving programs equivalent in expressive enough programming languages, because of the universal quantification over execution environments, akin to testing on all possible inputs.

We introduce a quantitative refinement of contextual equivalence, aiming at a cost-sensitive program equivalence as natural as contextual comparison. Interaction equivalence tests programs in any execution environment and checks that their behavior is similar, but it also takes into account the number of communications (or interactions) between the program and its environment.

Finally, we explore program equivalences for another evaluation paradigm, namely call-by-value. Call-by-value is more efficient and closer to the evaluation mechanism actually used by programming languages, although it is less studied than the original evaluation of the λ-calculus. It gives rise to a wide range of program equivalences, usually studied with effects added to the language. We study it here in a pure setting and show that call-by-value is a rich framework with many different notions of equivalence.

We provide a unifying view of all these program equivalences and their main properties, abstracting over the evaluation mechanisms considered.

Non-permanent members' seminar
Friday 14 November 2025, 11:00, Room 3052
Shamisa Nematollahi (IRIF) Economy of Scale in Algorithm Design: Approximation Algorithms for Submodular Optimization and Network Design

The economy of scale principle, where the cost per unit decreases as the scale of operation increases, is a foundational concept in economics. It naturally arises in diverse real-world systems, from logistics and infrastructure planning to data summarization and information propagation. In such systems, grouping resources, demands, or actions often leads to more efficient outcomes. This behavior is reflected in mathematical models through properties such as submodularity, which captures diminishing returns on the value side, and subadditive capacity cost functions, which formalize buy-at-bulk effects on the cost side. In submodular optimization problems, the economy of scale appears as diminishing marginal gains: each additional element contributes less marginal gain as the solution grows. This framework captures key phenomena, including information redundancy and influence propagation. In the buy-at-bulk facility location problem, the economy of scale manifests on the cost side: satisfying a larger demand becomes cheaper per unit due to amortized setup costs. By leveraging this shared structure, this thesis explores the design of approximation and streaming algorithms for these two classes of problems that capture the principle of economy of scale. We develop algorithmic techniques with provable performance guarantees for problems with complex constraints, across different computational models. These results contribute to the broader understanding of scalable algorithms for decision-making tasks where gains or costs exhibit diminishing-returns structures.
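The diminishing-marginal-gain structure described above is exactly what the classic greedy algorithm of Nemhauser, Wolsey and Fisher exploits: for maximizing a monotone submodular function under a cardinality constraint, repeatedly picking the element of largest marginal gain yields a (1 - 1/e)-approximation. A minimal sketch on a coverage function, a standard submodular example (this is textbook background, not the thesis's algorithms):

```python
# Greedy (1 - 1/e)-approximation for monotone submodular maximization under a
# cardinality constraint, shown on a coverage function: pick at most k sets
# so as to cover as many elements as possible.
def greedy_max_coverage(sets, k):
    """sets maps a set name to the collection of elements it covers."""
    chosen, covered = [], set()
    for _ in range(k):
        # Marginal gain of each candidate = number of newly covered elements.
        gains = {name: len(elems - covered)
                 for name, elems in sets.items() if name not in chosen}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] == 0:  # nothing new to cover: stop early
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

With sets A = {1, 2, 3}, B = {3, 4}, C = {4, 5, 6, 7} and k = 2, the greedy picks C first (gain 4), then A (gain 3), covering all seven elements.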

Verification
Monday 17 November 2025, 11:00, Room 3052
Olzhas Zhangeldinov LTL Model Checking of Concurrent Self-Modifying Code

We consider the LTL model-checking problem for concurrent self-modifying code, i.e., concurrent code that can modify its own instructions during execution. This style of code is frequently used by malware developers to make their malicious code hard to detect. To model such programs, we consider Self-Modifying Dynamic Pushdown Networks (SM-DPNs). An SM-DPN is a network of self-modifying pushdown processes, where each process can modify its current set of rules and spawn new processes during execution. We consider model checking SM-DPNs against single-indexed LTL formulas, i.e., conjunctions of separate LTL formulas, one per process. This problem is nontrivial since the number of processes spawned in a given run can be infinite. Our approach is based on computing finite automata representing the set of configurations from which the SM-DPN has a run satisfying the single-indexed LTL formula. We implemented our techniques in a tool and obtained promising results; in particular, our tool was able to detect concurrent self-modifying malware.

This is a joint work with Tayssir Touili.

Programming
Tuesday 18 November 2025, 10:30, Room 3063
Gabriel Scherer (IRIF) Omnidirectional type inference for ML: principality any way

The Damas-Hindley-Milner (ML) type system (used for example as the core of OCaml, SML and Haskell) is _principal_: every well-typed expression has a unique most general type. This makes inference predictable and efficient. Yet many extensions of ML (for example GADTs, higher-rank polymorphism, and static overloading) endanger principality by introducing _fragile_ constructs that resist principal inference. Existing approaches recover principality through _directional_ inference algorithms, which propagate _known_ type information in a fixed order (e.g. as in bidirectional typing) to disambiguate such constructs. However, the rigidity of a fixed inference order often causes otherwise well-typed programs to be rejected.

We propose _omnidirectional_ type inference, in which typing constraints may be solved in any order, suspending when progress requires known type information and resuming once it becomes available, using _suspended match constraints_. This approach is straightforward for simply-typed systems, but extending it to ML is challenging due to _let-generalization_. Existing ML inference algorithms type `let`-bindings `let x = e1 in e2` in a fixed order: type `e1`, generalize its type, and then type `e2`. To overcome this, we introduce _incremental instantiation_, allowing partially solved type schemes containing suspended constraints to be instantiated, with a mechanism to incrementally update instances as the scheme is refined.
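The fixed order described above (infer `e1`, generalize its type, then infer `e2`) is the heart of classic Algorithm-W-style inference. The toy inferencer below is a rough sketch written for this summary, not code from the talk; it shows that order in the `let` case, which is what lets `let id = fun x -> x in (id id) 0` type-check with `id` used at two different instances:

```python
# A toy Damas-Hindley-Milner inferencer for the lambda calculus with let,
# illustrating the classic fixed order: infer e1, generalize, then infer e2.
# (No occurs check, no error recovery: a sketch, not an implementation.)
import itertools

_fresh = itertools.count()
INT = ("con", "int")

def tvar():
    return ("var", next(_fresh))

def arrow(dom, cod):
    return ("arrow", dom, cod)

_subst = {}  # bindings of solved type variables

def find(t):
    while t[0] == "var" and t in _subst:
        t = _subst[t]
    return t

def unify(a, b):
    a, b = find(a), find(b)
    if a == b:
        return
    if a[0] == "var":
        _subst[a] = b
    elif b[0] == "var":
        _subst[b] = a
    elif a[0] == b[0] == "arrow":
        unify(a[1], b[1])
        unify(a[2], b[2])
    else:
        raise TypeError(f"cannot unify {a} with {b}")

def free_vars(t):
    t = find(t)
    if t[0] == "var":
        return {t}
    if t[0] == "arrow":
        return free_vars(t[1]) | free_vars(t[2])
    return set()

def instantiate(scheme):
    qvars, body = scheme
    inst = {q: tvar() for q in qvars}  # fresh copies of quantified vars
    def go(t):
        t = find(t)
        if t in inst:
            return inst[t]
        if t[0] == "arrow":
            return arrow(go(t[1]), go(t[2]))
        return t
    return go(body)

def generalize(env, t):
    env_fv = set()
    for qvars, body in env.values():
        env_fv |= free_vars(body) - qvars
    return (free_vars(t) - env_fv, t)

def infer(env, e):
    kind = e[0]
    if kind == "lit":                    # integer literal
        return INT
    if kind == "var":                    # variable: instantiate its scheme
        return instantiate(env[e[1]])
    if kind == "lam":                    # fun x -> body
        a = tvar()
        body_ty = infer({**env, e[1]: (set(), a)}, e[2])
        return arrow(a, body_ty)
    if kind == "app":                    # f x
        f_ty, arg_ty, res = infer(env, e[1]), infer(env, e[2]), tvar()
        unify(f_ty, arrow(arg_ty, res))
        return res
    if kind == "let":                    # let x = e1 in e2 -- the fixed order:
        t1 = infer(env, e[2])            # 1. infer e1
        scheme = generalize(env, t1)     # 2. generalize its type
        return infer({**env, e[1]: scheme}, e[3])  # 3. only then infer e2
    raise ValueError(kind)
```

Without the `generalize` step in the `let` case, `id` would get a monomorphic type and the self-application `id id` would fail to unify; the omnidirectional approach of the talk is precisely about relaxing this rigid sequencing.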

It is also difficult to give a good declarative specification to this inference mechanism: the natural attempts to do so would allow “out of thin air” behavior due to causality cycles between suspended constraints. If time allows, we will discuss the importance of declarative semantics for type-inference algorithms, and sketch our declarative semantics for suspended constraints.

Semantics
Tuesday 18 November 2025, 15:00, Room 3052
Sanjiv Ranchod (Cambridge) Single-Variable Substitution for Substructural Theories

In [1], Fiore, Plotkin and Turi develop category theoretic models for both single-variable substitution (as substitution algebras) and simultaneous substitution (as monoids for the substitution tensor) for single-sorted cartesian syntax. Since then, the literature accounts for the model of simultaneous substitution in the substructural settings of linear and affine syntax, extends it to account for multi-sorted variants, and more recently presents the theory of Superspecies to uniformly model simultaneous substitution in, for instance, the aforementioned settings and combinations thereof. In this talk, we consider models of single-variable substitution for cartesian, linear, affine and relevant syntax [2]. Crucially, this involves a modification to the uniform development of each setting — a shift from symmetric monoidal theories to (arity/coarity) monoidal theories — necessary in providing the structure to define each variation of a substitution algebra. As an application, we develop the abstract syntax of binding signatures in each of these settings and, using a generalised recursion principle, show that they are canonically equipped with the structure of a substitution algebra. In doing so, we develop machinery towards a uniform axiomatisation of single-variable substitution across substructural settings.

Joint work with Marcelo Fiore.

[1] M. Fiore, G. Plotkin and D. Turi, Abstract Syntax and Variable Binding, 14th Symposium on Logic in Computer Science (1999), 193–202.

[2] M. Fiore, S. Ranchod, Substructural Abstract Syntax with Variable Binding and Single-Variable Substitution, 40th Symposium on Logic in Computer Science (2025), 196–208.

PhD defences
Tuesday 18 November 2025, 14:00, Amphithéâtre Pierre-Gilles de Gennes (Condorcet building)
Shamisa Nematollahi (IRIF) Economy of Scale in Algorithm Design: Approximation Algorithms for Submodular Optimization and Network Design

The economy of scale principle, where the cost per unit decreases as the scale of operation increases, is a foundational concept in economics. It naturally arises in diverse real-world systems, from logistics and infrastructure planning to data summarization and information propagation. In such systems, grouping resources, demands, or actions often leads to more efficient outcomes. This behavior is reflected in mathematical models through properties such as submodularity, which captures diminishing returns on the value side, and subadditive capacity cost functions, which formalize buy-at-bulk effects on the cost side. In submodular optimization problems, the economy of scale appears as diminishing marginal gains: each additional element contributes less marginal gain as the solution grows. This framework captures key phenomena, including information redundancy and influence propagation. In the buy-at-bulk facility location problem, the economy of scale manifests on the cost side: satisfying a larger demand becomes cheaper per unit due to amortized setup costs. By leveraging this shared structure, this thesis explores the design of approximation and streaming algorithms for these two classes of problems that capture the principle of economy of scale. We develop algorithmic techniques with provable performance guarantees for problems with complex constraints, across different computational models. These results contribute to the broader understanding of scalable algorithms for decision making tasks where gains or costs exhibit diminishing return structures.

One world numeration seminar
Tuesday 18 November 2025, 14:00, online
Paul Glendinning (University of Manchester) Positive Hausdorff dimension for the survivor set and transitions to chaos for piecewise smooth maps

We consider two related problems. The transition to chaos in the sense of positive topological entropy for one-dimensional piecewise smooth maps, and the transition to positive Hausdorff dimension for the survivor set of associated open maps. We describe an iterative process that determines the boundaries of positive topological entropy (resp. positive Hausdorff dimension). The boundary can then be characterised via substitution sequences that generalise the Thue-Morse sequence for continuous maps of the interval. This work is joint with Clément Hege.
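For readers unfamiliar with it, the Thue-Morse sequence is the fixed point of the substitution 0 → 01, 1 → 10; equivalently, its n-th term is the parity of the number of 1 bits in the binary expansion of n. A quick sketch of the substitution view (background only, not the talk's construction):

```python
# Prefixes of the Thue-Morse sequence, generated by iterating the
# substitution 0 -> 01, 1 -> 10 starting from the single letter 0.
def thue_morse(n_iterations):
    word = [0]
    for _ in range(n_iterations):
        word = [bit for letter in word
                for bit in ((0, 1) if letter == 0 else (1, 0))]
    return word
```

Four iterations give 0110 1001 1001 0110, matching the bit-parity definition term by term.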

Algorithms and complexity
Wednesday 19 November 2025, 11:00, Room 3052
Sujoy Bhore (IIT Bombay) Geometry in Optimization: To Choose or Not to Choose

Geometry lies at the heart of optimization, where the seemingly simple decision of “to choose or not to choose an object” often uncovers profound insights into the structure and solutions of complex problems. This geometric perspective has wide-ranging applications across various fields, including machine learning, logistics, computer graphics, sensor networks, and many others. In this talk, I will focus on two fundamental aspects of geometric optimization problems: Packing and Independence.

Consider a set of D-dimensional objects, each with an associated profit; the goal is to find the maximum-profit subset that can be packed without overlap into a given D-dimensional hypercube. This problem is known as the Geometric Knapsack Problem. The packing of various kinds of objects has been studied extensively in mathematics over the centuries. Interestingly, the problem becomes computationally intractable even in rather simple settings, e.g., unit disks in 2D. In this talk, I will present a polynomial-time approximation scheme for packing D-dimensional balls.

On the other hand, the Independent Set problem for a set of objects in D-dimensional space aims to find a maximum-cardinality subset of independent (i.e., pairwise-disjoint) objects. Independent Set is one of the most fundamental problems in theoretical computer science, and unfortunately, it is known to be inapproximable in the most general cases. There has been extensive research on polynomial-time algorithms with improved approximation ratios for geometric inputs, sometimes trading off efficiency in running times. In this talk, I will present near-linear-time constant-factor approximation algorithms for various natural families of objects, e.g., balls, boxes, fat objects, etc.
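As a point of contrast with the hard higher-dimensional cases discussed in the talk: in one dimension, where the objects are intervals on a line, maximum independent set is solvable exactly in O(n log n) by the textbook greedy that scans intervals by increasing right endpoint. A minimal sketch (half-open intervals assumed, so touching endpoints count as disjoint):

```python
# Exact maximum independent set of intervals on a line (the 1-D case):
# sort by right endpoint and greedily keep each interval that does not
# overlap the last one kept.  Intervals are half-open [left, right).
def max_independent_intervals(intervals):
    chosen, last_end = [], float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left >= last_end:  # disjoint from everything kept so far
            chosen.append((left, right))
            last_end = right
    return chosen
```

The greedy is optimal here because the interval finishing first can always be exchanged into some optimal solution; it is exactly this exchange argument that breaks down for disks, boxes and fat objects in higher dimensions.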