**Conor McBride (Strathclyde) - Cubical Adventures**

Inspired by Thierry Coquand and the Cubical team, the Strathclyders have been trying to comprehend and simplify Cubical Type Theory, where "equality" becomes the notion of "path", given as continuous mappings from the interval [0,1]. Our language of points differs from Cubical's use of de Morgan algebras, in that it consists only of 0, 1, and linear rescalings i(p0,p1) running from p0 to p1 as i runs from 0 to 1. The normal forms of points are exactly the reduced ordered binary decision diagrams, exposing the relevance (or otherwise) of point variables. Our mission is to ensure that a constant path on a type yields the identity transportation of values. Writing this abstract a month in advance, I cannot tell you where in the interval [disaster,triumph] we will be found by the time of the talk, but I hope at least to put a finger on the problems.

**Brigitte Pientka (McGill) - POPLMark Reloaded: Mechanizing Logical Relations Proofs**

Mechanizing formal systems, given via axioms and inference rules, together with proofs about them plays an important role in establishing trust in formal developments. Over the past decade, the POPLMark challenge popularized the use of proof assistants in mechanizing the metatheory of programming languages. Focusing on the meta-theory of System F with subtyping, it allowed the programming languages community to survey existing techniques to represent and reason about syntactic structures with binders, and it promoted the use of proof assistants. Today, mechanizing proofs is a stable fixture in the daily life of programming languages researchers.

As a follow-up to the POPLMark Challenge, we propose a new collection of benchmarks that use proofs by logical relations. Such proofs are now used to attack problems in the theory of complex language models, with applications to issues in equivalence of programs, compiler correctness, representation independence and even more intensional properties such as non-interference, differential privacy and secure multi-language interoperability. Yet, they remain challenging to mechanize.

In this talk, we focus on one particular challenge problem, namely strong normalization of a simply-typed lambda-calculus with a proof by Kripke-style logical relations. We will advocate a modern view of this well-understood problem by formulating our logical relation on well-typed terms. Using this case study, we share some of the lessons learned tackling this challenge problem in Beluga, a proof environment that supports higher-order abstract syntax encodings, first-class contexts and first-class substitutions. We also discuss and highlight similarities, strategies, and limitations in other proof assistants when tackling this challenge problem.
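To make the object of study concrete, here is a toy normalization procedure for lambda terms, written as normalization-by-evaluation in Python (a sketch for intuition only; it is not the Kripke logical-relations argument of the talk, which is precisely what guarantees that such a procedure terminates on well-typed terms):

```python
# Toy normalization-by-evaluation for de Bruijn lambda terms.
# Terms:  ('var', i) | ('lam', body) | ('app', f, a)
# Values: ('clo', body, env) | ('nvar', level) | ('napp', fun, arg)

def evaluate(t, env):
    if t[0] == 'var':
        return env[t[1]]                    # de Bruijn index into environment
    if t[0] == 'lam':
        return ('clo', t[1], env)           # delay the body in a closure
    return apply_(evaluate(t[1], env), evaluate(t[2], env))

def apply_(f, a):
    if f[0] == 'clo':
        return evaluate(f[1], [a] + f[2])   # beta step
    return ('napp', f, a)                   # stuck on a variable: neutral value

def quote(v, depth):
    """Read a semantic value back into a normal-form term."""
    if v[0] == 'clo':
        return ('lam', quote(apply_(v, ('nvar', depth)), depth + 1))
    if v[0] == 'nvar':
        return ('var', depth - 1 - v[1])    # convert level back to index
    return ('app', quote(v[1], depth), quote(v[2], depth))

def normalize(t):
    return quote(evaluate(t, []), 0)
```

For instance, `normalize` reduces the identity applied to a term to that term's normal form; strong normalization is the theorem that this evaluation never diverges on typable input.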

We hope others will be motivated to submit solutions! The goal of this talk is to engage the community in discussions on what support in proof environments is needed to truly bring mechanized metatheory to the masses.

**Gabriel Scherer (Inria) - Proof theory for type systems**

In this talk I would like to summarize some of my work of the past few years on using techniques of proof theory, in particular focusing, to better understand the canonical structure of simply-typed lambda-terms, especially when positive types such as sums are present. I will also talk about some opportunities for future work in that area: the challenging extension to polymorphic type systems, but also more general questions on bidirectionality and other proof-theoretic ideas.

I have written about these topics for a not-so-technical audience in "Search for Program Structure" at SNAPL 2017.

**Danko Ilik (Trusted Labs) - Industrial use of proof assistants: Common Criteria evaluations**

Common Criteria [ISO/IEC 15408] is an international standard for certifying the security of computer systems. When high evaluation assurance levels (EAL6 and EAL7) are required of a system, Common Criteria (CC) requires formal modelling and mechanized proofs of security properties. In this talk, we will first have an overall look at the schema for CC evaluations, after which we will concentrate on the scientific aspects relating to proofs and programming languages, and the special role that proof assistants play during these evaluations.

**Thierry Coquand (Gothenburg) - Some contributions of Vladimir Voevodsky to dependent type theory**

After a short description of dependent type theory before Voevodsky’s contributions, I will try to describe three of them: the formulation of homotopy levels and the univalence axiom, the UniMath library, and the simplicial set models. I will then try to present some research questions motivated by these contributions.

**Nicolas Tabareau (Inria) - Homotopy Types and Resizing Rules: a fresh look at the impredicative sort of CIC**

In this talk, I will recall the notions of homotopy n-types and of the universe resizing rules proposed by Voevodsky, and show that they provide a fresh look at the impredicative universe Prop of CIC. As a consequence, I will sketch a possible extension of the Coq proof assistant with definitional proof irrelevance that is currently being implemented by Gaetan Gilbert.

**Andrej Bauer (Ljubljana) - Derivations as computations**

A derivation of a judgment in type theory, or any other formal system, is typically presented as a well-founded tree. To put it another way, the inference rules freely generate the algebra of derivations. But we can allow the inference rules to act non-freely, which amounts to giving a transformation of derivation trees, or an interpretation of the derivation rules as computational effects. These ideas can be put to good use in the design of a trusted kernel of a proof assistant. Starting from a fixed trusted kernel, we may generate other kernels -- still trusted -- that transform and augment the original one.
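The contrast between free and non-free interpretations of inference rules can be illustrated with a toy judgment "n is even" (the rule names `E0` and `E2` and the whole setup are ours, a sketch of the idea rather than the talk's construction):

```python
# Toy judgment "n is even", with rules E0 (zero is even) and
# E2 (if n is even then n+2 is even).

def derive_even(n):
    """Freely generate the derivation tree for 'n is even' (even n >= 0)."""
    return ('E0',) if n == 0 else ('E2', derive_even(n - 2))

def interpret(deriv, alg):
    """Interpret each rule non-freely, via a given algebra of computations."""
    if deriv[0] == 'E0':
        return alg['E0']()
    return alg['E2'](interpret(deriv[1], alg))

# One interpretation reconstructs the number; another merely counts
# rule applications -- the same rules, acting on different carriers.
reconstruct = {'E0': lambda: 0, 'E2': lambda m: m + 2}
count_steps = {'E0': lambda: 0, 'E2': lambda m: m + 1}
```

Replacing the tree-building interpretation by another algebra is the sense in which the rules "act non-freely": a kernel can interpret the same rules while transforming or instrumenting derivations.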

**Stefano Berardi (Torino) - A possible reformulation of polymorphism: the simply typed lambda calculus N**

We investigate the possibility of using a simply typed lambda-calculus, system N, in order to represent all well-founded trees, and all polymorphic maps on them, which are definable in system F. The reductions of N include the usual algebraic reductions and reductions for primitive recursion on trees, but also reductions adding one constructor to a recursive definition. The results we already have about system N are: normalization (with an intuitionistic proof); the fact that all trees denoted by some term in some data type of system N are well-founded; and the fact that system N defines a realization model of second-order intuitionistic arithmetic. The result we are checking is the equivalence between system N and polymorphism: system N and system F define the same well-founded trees and the same maps on them.

**Jan Bessai (Dortmund) - Combinatory Logic Beyond S and K**

We explore how the customizable basis of typed combinatory logic can encode all multi-sorted algebraic signatures, which are the mathematical essence of APIs. We make the encoding meaningful by constructing a universal (sound, complete, and unique) translation from combinator terms to any given signature interpretation. Results about the inhabitation problem in typed combinatory logic with intersection types are applied to enable language-agnostic component-oriented program synthesis controllable by additional semantic specifications inexpressible within most target programming languages. These theoretical considerations are put to use in automatic synthesis of executable user-interactive programs.

**Pawel Urzyczyn (Warsaw) - First-order answer set programming as constructive proof search**

This is joint work with Aleksy Schubert.

We propose an interpretation of first-order answer set programming (FOASP) in terms of intuitionistic proof theory. It is obtained by two polynomial translations between FOASP and the bounded-arity fragment of the Sigma_1 level of the Mints hierarchy in first-order intuitionistic logic. It follows that Sigma_1 formulas using predicates of fixed arity (in particular unary) are of the same strength as FOASP. Our construction reveals a close similarity between constructive provability and stable entailment, or equivalently, between the construction of an answer set and an intuitionistic refutation.

**Bas Spitters (Aarhus) - Internal Universes in Models of Homotopy Type Theory**

Joint work of Dan Licata, Ian Orton, Andrew Pitts, and Bas Spitters.

When constructing categorical models of type theory it is often useful to work in the category's internal language. However, we show that the universe in the Cohen-Coquand-Huber-Mörtberg (CCHM) model of homotopy type theory has an essentially global character -- it cannot be described in the internal language of the topos of cubical sets. We get around this problem using the part of Shulman's spatial type theory that extends the language of type theory with a modal operator for expressing properties of global (discrete) elements. In this setting we show how to construct a universe that is weakly generic for global CCHM fibrations, starting from the assumption that the interval is tiny, which holds for cubical sets. This leads to a completely internal development of a model of homotopy type theory based upon the CCHM notion of fibration within what we call local type theory.

**Ambrus Kaposi (Budapest) - Towards a syntax for higher inductive inductive types**

The goal of this project is to describe a general class of inductive types called higher inductive inductive types (HIITs). HIITs allow the mutual definition of multiple types where one type can be indexed over the others and they allow equality constructors as well. An example is the well-typed syntax of type theory where equality constructors represent conversion rules.

In this talk I will present a small type theory called the theory of codes. A context in the theory of codes is a signature for a HIIT. The theory of codes admits a category model in which the interpretation of a context is the category of algebras corresponding to the signature. I will show how to derive the type of the dependent eliminator and the computation rules from a signature. The "towards" in the title refers to the fact that we only considered the (wild) category model and not an (∞,1)-category model, which is needed to describe higher equalities. I will also present ongoing work on the existence of initial algebras. This is joint work with Thorsten Altenkirch and András Kovács.

**Niels van der Weide (Nijmegen) - Finite sets in HoTT**

We study different formalizations of finite sets in homotopy type theory to obtain a general definition that exhibits both the computational facilities and the proof principles expected from finite sets. We use higher inductive types to define the type K(A) of "finite sets over type A" à la Kuratowski without assuming that A has decidable equality. We show how to define basic functions and prove basic properties, after which we give two applications of our definition.

On the foundational side, we use K(-) to define the notions of "Kuratowski-finite type" and "Kuratowski-finite subobject", which we contrast with established notions, e.g. Bishop-finite types and enumerated types. We argue that Kuratowski-finiteness is the most general and flexible one of those and we define the usual operations on finite types and subobjects.

From the computational perspective, we show how to use K(A) as an abstract interface for well-known finite set implementations such as tree- and list-like data structures. This implies that a function defined on a concrete finite-set implementation can be obtained from a function defined on the abstract finite sets K(A), and that correctness properties are inherited. Hence, HoTT is the ideal setting for data refinement. Besides this, we define bounded quantification, which lifts a decidable property on A to one on K(A).
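The flavour of the interface can be sketched in Python with lists as the concrete carrier (function names are ours, not the paper's; duplicates and ordering are invisible because equality is extensional, exactly as for Kuratowski sets):

```python
# Finite sets over A, generated by empty, singleton and union,
# here carried by Python lists and compared extensionally.

empty = []

def singleton(a):
    return [a]

def union(x, y):
    return x + y

def member(a, x):
    return a in x

def forall_k(p, x):
    """Bounded quantification: lift a decidable property on A to K(A)."""
    return all(p(a) for a in x)

def exists_k(p, x):
    return any(p(a) for a in x)

def set_eq(x, y):
    """Extensional equality of finite sets: mutual inclusion."""
    return forall_k(lambda a: member(a, y), x) and \
           forall_k(lambda b: member(b, x), y)
```

Through this interface, `union(singleton(1), union(singleton(1), singleton(2)))` and `union(singleton(2), singleton(1))` are equal as sets even though the underlying lists differ.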

**Carsten Schürmann (Copenhagen) - Evidence**

In security, where we are often concerned with keeping the bad guys out, we often forget to include the good guys, who want to be sure that the bad guys really didn’t make it in. One way to solve this is to produce evidence. In my talk I will discuss different forms of evidence, logical, cryptographic and statistical, and compare them in the hope of starting a discussion of how proof theory can assist in solving some of the really hard challenges in information security.

**Chantal Keller (Paris) - Mixing type-based proof techniques for the verification of the SQL compilation chain**

SQL, the widely used language to manipulate relational data, is a declarative language whose compilation involves non-trivial transformation and optimization steps, and whose execution is based on optimized online algorithms. In an ongoing effort to deeply specify and certify the SQL compilation chain and backend using the Coq proof assistant, we combine multiple techniques that this theorem prover offers: a high degree of abstraction; "traditional" proofs of some algorithms; reflexive verification of traces; program extraction; etc. After explaining the SQL compilation chain, I will detail the panel of verification techniques that are crucial to this development.

This is joint work with Léo Andres, Véronique Benzaken, Évelyne Contejean, Raphaël Cornet and Eunice Martins.

**Wilmer Ricciotti (Edinburgh) - Mechanising the Metatheory of SQL with Nulls**

SQL is the standard query language used by relational databases, a multi-billion dollar industry. SQL’s semantics is notoriously subtle: the standard uses natural language that implementations interpret differently. Commercial databases optimise queries by applying rewriting rules to convert a request into an equivalent one that can be executed more efficiently. The lack of a well-understood formal semantics for SQL makes it very difficult to validate the soundness of candidate rewriting rules, and even widely used database systems have been known to return incorrect results. Recent research has yielded plausible formal semantics for a large fragment of SQL with multisets and nulls. In this talk, we report on our ongoing work on its mechanisation using the Coq proof assistant.

**Kenji Maillard (Paris) - F*, from practice to theory**

F* is a programming language as well as a proof assistant putting a strong emphasis on practical verification of effectful programs. In order to enable the verification of security-critical programs such as cryptographic primitives (HACL) or protocols (TLS in MiTLS), F* provides a rich logic with dependent types and universes, an effect-based weakest precondition calculus, and an SMT automation backend. In this presentation, we will show how these ingredients are used in actual program verification and then give an account of their metatheoretical aspects.

**Daniel Ahman (Paris) - Leveraging monotonic state in F***

In this talk I will survey the key role that monotonic state plays in F*, both for programming and for verification. First, I will present a monotonic variant of F*'s global state effect that captures the following idea: a property witnessed in a prior state can be soundly recalled in the current state, provided (1) state evolves according to a given preorder, and (2) the property is preserved by this preorder. Next, I will present some example uses of this notion of monotonicity, ranging from basic examples (such as monotonic counters), to advanced examples (such as heap-based memory models supporting both untyped and (monotonic) typed references), to some more involved case studies. Finally, I will discuss some ongoing work on improving the support for monadic reification for such monotonic-state programs, so as to prove relational properties of such programs, e.g., noninterference.
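The witness/recall discipline can be mimicked dynamically in Python, with runtime assertions standing in for what F* checks statically (the class and method names here are ours, a sketch of the idea rather than F*'s actual API):

```python
class MonotonicRef:
    """State that may only evolve along a given preorder."""

    def __init__(self, init, leq):
        self.state, self.leq = init, leq
        self.witnessed = []

    def update(self, new):
        assert self.leq(self.state, new), "update violates the preorder"
        self.state = new

    def witness(self, prop):
        """Record a property that holds now; it must be *stable*, i.e.
        preserved by the preorder (assumed here, enforced statically in F*)."""
        assert prop(self.state)
        self.witnessed.append(prop)

    def recall(self, prop):
        """A witnessed stable property can be soundly recalled later."""
        assert prop in self.witnessed
        return prop(self.state)

# A monotonic counter: the preorder is <=, so "state >= 1" is stable.
counter = MonotonicRef(0, lambda a, b: a <= b)
at_least_one = lambda s: s >= 1
counter.update(1)
counter.witness(at_least_one)
counter.update(5)          # further updates cannot invalidate the witness
```

After the final update, `counter.recall(at_least_one)` still succeeds: the property was witnessed in a prior state and is preserved by the preorder.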

(based on joint work with Cédric Fournet, Cătălin Hriţcu, Kenji Maillard, Aseem Rastogi, and Nikhil Swamy)

**Cyril Cohen (Sophia Antipolis) - Foundations of 3D geometry to model robot manipulators**

Joint work with Reynald Affeldt.

We are interested in the formal specification of safety properties of robot manipulators down to the mathematical physics. To this end, we have been developing a formalization of the mathematics of rigid body transformations in the Coq proof assistant. It can be used to address the forward kinematics problem, i.e., the computation of the position and orientation of the end-effector of a robot manipulator in terms of the link and joint parameters. Our formalization extends the Mathematical Components library with several theories including algebraic angles, rotation matrices and direct affine isometries. We also give a formal account of the Denavit-Hartenberg convention, and we illustrate this aspect with an application to the SCARA robot manipulator.

**Milena Vujosevic Janicic (Belgrade) - A calculus for an LLVM-based software verification tool LAV**

The role of software verification tools is extremely important, as incorrect software causes damage worth billions of dollars. However, the question of the correctness of verification tools themselves must also be raised: for instance, is a tool sound and complete, i.e., does it avoid reporting false positives and false negatives? We present a formal calculus that describes the operation of LAV, a static software verification tool based on the LLVM intermediate representation, bounded model checking, and symbolic execution. This strict calculus enables formal reasoning about the properties of this tool, i.e., proving its soundness and completeness in some cases. The calculus can serve as a basis for a formalization within a proof assistant.
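The bounded-model-checking ingredient can be sketched generically: unroll the transition relation up to a bound k and search for a state violating the property. The following toy checker (ours, unrelated to LAV's actual LLVM-level encoding) illustrates the idea:

```python
def bmc(init, step, bad, k):
    """Bounded model checking: explore states reachable from `init`
    in at most k steps of `step`, looking for one satisfying `bad`.
    Returns a counterexample trace, or None if the bound is exhausted."""
    frontier = [(init, [init])]
    seen = {init}
    for _ in range(k + 1):
        for state, trace in frontier:
            if bad(state):
                return trace                     # property violated
        nxt = []
        for state, trace in frontier:
            for succ in step(state):
                if succ not in seen:
                    seen.add(succ)
                    nxt.append((succ, trace + [succ]))
        frontier = nxt
    return None                                  # no violation within the bound

# Toy system: a counter that may increment or reset; "bad" = reaching 3.
trace = bmc(0, lambda s: [s + 1, 0], lambda s: s == 3, k=5)
```

With bound 5 the checker finds the trace 0, 1, 2, 3; with bound 2 it reports no violation, which shows why BMC alone is complete only up to the chosen bound.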

**Ralf Hinze (Nijmegen) - Combles**

*comble.* The ‘crown’ or culmination. [A gallicism.]
*french.* adj. packed, crowded.
*latin.* n. cumulus: heap, pile, heap above the full measure, crown.

**Ralph Matthes (CNRS Toulouse) - Intensional and univalent aspects of rule-based programming**

I will discuss various views of what "intensional" datatypes could be, and approaches to obtain them. In particular, this includes univalent foundations. Some other benefits of univalent foundations for programming will be mentioned, including for SQL query rewrites (triggered by the work of Chu et al. on HoTTSQL).

**Ulrich Schöpp (Munich) - Defunctionalisation as Modular Closure Conversion**

We consider the translation of call-by-value PCF to a first-order low-level language by defunctionalisation. Such defunctionalising translations are typically defined by induction on the structure of the source term: each sub-term is translated to a low-level program fragment and the translation of the whole term is a composition of these fragments. For a number of applications, such as separate compilation or compositional verification, it is desirable to go a step further and consider the translated low-level program fragments as little program modules with specified interface and behaviour. This talk presents a modular closure conversion method that translates call-by-value PCF into low-level program modules. The talk will outline how this approach can capture defunctionalisation as a two-step translation: a closure conversion from PCF to modules and a low-level interpretation of these modules.

**Andreas Nuyts (Leuven) - Degrees of Relatedness**

We consider a dependent type system with modalities for relational parametricity, irrelevance (i.e. type-checking-time erasability of an argument) and ad hoc polymorphism. The interplay of these three modalities and dependent types raises a number of questions. For example: If a term depends on a variable with a given modality, then how does its type need to depend on it? Are all modalities always applicable, e.g. should we consider parametric functions from the booleans to the naturals? Do we need any further modalities in order to properly reason about these three? We develop a framework that answers these questions. The core idea is to equip every type with a number of relations -- just equality for small types, but more for larger types -- and to describe function behaviour by saying how functions act on those relations.

**Wouter Swierstra (Utrecht) - Calculating correct programs**

The refinement calculus and type theory are both frameworks that support the specification and verification of programs. This paper presents an embedding of the refinement calculus in the interactive theorem prover Coq, clarifying the relation between the two. As a result, refinement calculations can be performed in Coq, enabling the interactive calculation of formally verified programs from their specification.

**Andreas Abel (Gothenburg) - Normalization by Evaluation for Sized Dependent Types**

Sized types have been developed to make termination checking more perspicuous, more powerful, and more modular by integrating termination into type checking. In dependently-typed proof assistants where proofs by induction are just recursive functional programs, the termination checker is an integral component of the trusted core, as the validity of proofs depends on termination. However, a rigorous integration of full-fledged sized types into dependent type theory is lacking so far. Such an integration is non-trivial, as explicit sizes in proof terms might get in the way of equality checking, making terms appear distinct that should have the same semantics.

In this talk, we integrate dependent types and sized types with higher-rank size polymorphism, which is essential for generic programming and abstraction. We introduce a size quantifier ∀ which lets us ignore sizes in terms for equality checking, along with a second quantifier Π for abstracting over sizes that do affect the semantics of types and terms. Judgmental equality is decided by an adaptation of normalization-by-evaluation for our new type theory, which features type-shape-directed reflection and reification. It follows that subtyping and type checking of normal forms are decidable as well, the latter by a bidirectional algorithm.

**Thibaut Benjamin (Paris) - Type theory for weak ω-categories**

Firstly, I will introduce the theory of weak ω-categories along with the type-theoretic prover catt, which describes this theory. I will emphasize the low-dimensional cases and the intuition we can get out of them, to give a concrete idea of this theory as well as to justify the motivations for developing catt. I will then focus on the type-theoretical aspects of this theory, and further discuss its computational content, in comparison with the classical case where we have the Curry-Howard isomorphism.

**Sophie Bernard (Inria) - An extension of libraries for polynomials**

In the context of transcendence proofs, polynomials, whether univariate or multivariate, are the building blocks of the proof. In this talk, I will present several properties of polynomials that needed to be formalized in order to prove the Lindemann-Weierstrass theorem in Coq, using the MathComp library. These range from the minimal polynomial of an algebraic number and the conjugate roots of a polynomial to symmetric multivariate polynomials and modified versions of the fundamental theorem of symmetric polynomials.

**Łukasz Czajka (Copenhagen) - CoqHammer: A General-Purpose Automated Reasoning Tool for Coq**

We present CoqHammer: the first full hammer system for the Coq proof assistant. The system translates Coq logic to untyped first-order logic and uses external automated theorem provers (ATPs) to prove the translations of user-given conjectures. Based on the output of the ATPs, the conjecture is then re-proved in the logic of Coq using our proof search algorithm. Together with machine-learning-based selection of relevant premises, this constitutes a full hammer system.

The performance of the overall procedure has been evaluated in a bootstrapping scenario emulating the development of the Coq standard library. Over 40% of the theorems in the Coq standard library can be proved in a push-button mode in about 40 seconds of real time on an 8-CPU system.

In the talk we will describe the architecture of the system, discuss common use-case scenarios as well as some limitations, and give a live presentation of the tool.

**Johannes Hölzl (Amsterdam) - Mathlib: Lean's math library**

The idea of Lean's mathematical library is to provide the basics for mathematical theories, like (linear) algebra, topology, analysis, and set theory (cardinals & ordinals). In this talk I want to give an overview of this development: what is already available, how it is structured, etc. I also want to show how Lean's implementation of DTT (e.g. quotient types, non-cumulative universes, etc.) affects this library.

**Pierre-Marie Pédrot (Saarbrücken) - Proof assistants for free[*]!**

Syntactic models are a particular kind of model of type theory that can be described as a program translation from some expanded type theory into a host type theory. When the latter is CIC, this makes it easy to create a new proof assistant for free by leveraging the Coq implementation: simply write the relatively small compiler that translates the source theory into Coq low-level terms. It is then possible to make the user believe she is living in this new theory in a relatively transparent way. We present a few instances of this technique, as well as their potential use cases.

[*] Rates may apply.

**Matthieu Sozeau (Paris) - Nested, Well-Founded and Mutual recursion in Equations**

Equations is a plug-in for Coq providing notation for defining functions by dependent pattern-matching and recursion, in the style of Agda & Epigram. In this talk I will focus on the treatment of recursive definitions in the system, starting from simple mutually recursive structural fixpoints, showing when and why the guard condition of Coq gets in the way when defining complex dependently-typed programs, and how logic can save the day, going back to Noetherian induction. I will demonstrate how Equations helps with writing complex recursive definitions, reasoning about them, and computing with them in Coq.

**Philipp Haselwarter (Ljubljana) - Algorithmic inversion principles for Extensional Type Theory**

In type theory an inversion principle says that a derivable judgement has a derivation of a specific shape depending on the judgement only (other derivations may exist). There are many uses of inversion principles. For example, in a proof assistant we usually need an algorithmic version of inversion principles that allows us to deconstruct derived judgements. Unfortunately, in extensional type theory only weak forms of inversion principles hold, and they are normally presented in a declarative way, not algorithmically. Judgements simply do not carry enough information for stronger inversion principles to hold. We present an enrichment of extensional type theory in which judgements carry additional information about the derivation. The enriched type theory has algorithmic inversion principles, and is sound and complete with respect to standard extensional type theory in a precise sense. We may thus safely implement inversion in a proof assistant for extensional type theory.
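To see what an algorithmic inversion principle looks like when judgements carry enough information, here is a toy simply-typed setting in Python where type inference produces explicit derivations and inversion is a total function on them (the encoding is ours, for intuition only; it is unrelated to the paper's enrichment of extensional type theory):

```python
# Simply-typed lambda calculus with explicit derivations.
# Types: 'b' (base) | ('->', dom, cod)
# Terms: ('var', x) | ('lam', x, dom, body) | ('app', f, a)
# A derivation is (rule, term, type, premises).

def infer(ctx, t):
    """Return the derivation of the term's typing judgement."""
    if t[0] == 'var':
        return ('Var', t, ctx[t[1]], ())
    if t[0] == 'lam':
        _, x, dom, body = t
        d = infer({**ctx, x: dom}, body)
        return ('Lam', t, ('->', dom, d[2]), (d,))
    _, f, a = t
    df, da = infer(ctx, f), infer(ctx, a)
    assert df[2][0] == '->' and df[2][1] == da[2], "ill-typed application"
    return ('App', t, df[2][2], (df, da))

def invert_app(d):
    """Algorithmic inversion: a derivation of an application judgement
    must end in rule App; deconstruct it into its premise derivations."""
    rule, _, _, premises = d
    assert rule == 'App'
    return premises

# A derivation for (\x:b. x) y in a context where y : b.
deriv = infer({'y': 'b'}, ('app', ('lam', 'x', 'b', ('var', 'x')), ('var', 'y')))
```

Here inversion never fails on a derivation of the right shape, precisely because derivations record the rule used; the talk's point is how to recover this situation for extensional type theory, where bare judgements do not determine their derivations.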

This is work in progress, done jointly with Andrej Bauer.

**Dmitriy Traytel (Zürich) - Foundational Nonuniform (Co)datatypes in Higher-Order Logic**

Nonuniform (or "nested" or "heterogeneous") datatypes are recursively defined types in which the type arguments vary recursively. They arise in the implementation of finger trees and other efficient functional data structures. We show how to reduce a large class of nonuniform datatypes and codatatypes to uniform types in higher-order logic. We programmed this reduction in the Isabelle/HOL proof assistant, thereby enriching its specification language. Moreover, we derive (co)induction and (co)recursion principles based on a weak variant of parametricity.
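A classic nonuniform datatype is the binary random-access list, in Haskell notation `data Seq a = Nil | Zero (Seq (a,a)) | One a (Seq (a,a))`: the recursive occurrence is at the pair type (a,a), so the type argument varies recursively. Python is untyped, so the nonuniformity is only visible dynamically, but the recursion pattern can be sketched faithfully (the encoding below is our toy illustration, not the talk's HOL reduction):

```python
# Nonuniform sequences: Seq a = Nil | Zero (Seq (a,a)) | One a (Seq (a,a)).
# The recursive call is at element type (a,a): elements pair up on a
# "carry", exactly like incrementing a binary number.

NIL = ('nil',)

def cons(x, s):
    if s[0] == 'nil':
        return ('one', x, NIL)
    if s[0] == 'zero':
        return ('one', x, s[1])
    # A 'one' digit overflows: pair the two head elements and carry.
    return ('zero', cons((x, s[1]), s[2]))

def to_list(s):
    if s[0] == 'nil':
        return []
    if s[0] == 'zero':
        return [z for pair in to_list(s[1]) for z in pair]
    return [s[1]] + [z for pair in to_list(s[2]) for z in pair]

seq = NIL
for x in reversed([1, 2, 3, 4, 5]):
    seq = cons(x, seq)
```

Note how `cons` and `to_list` call themselves on a structure whose elements are pairs of the original element type; it is exactly this nonuniform recursion that a uniform (co)datatype package cannot express directly and that the reduction to HOL must account for.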

This is joint work with Jasmin Blanchette, Fabian Meier, and Andrei Popescu.