Technical Presentations
Monday
-
KC Sivaramakrishnan: Securing Functional Programs with Hardware Support
Abstract
Functional programming languages such as OCaml are memory-safe. MirageOS unikernels, where many OS services are also written in OCaml, are safer than their counterparts running on Linux. However, even MirageOS unikernels have to include unsafe code written in C and C++, either because the functionality does not exist in OCaml (say, SQLite) or because tight control over memory is necessary (device drivers). Vulnerabilities in that C and C++ code may violate the safety guarantees provided by OCaml. How can we safeguard safe code from unsafe code in the same application? In this talk, I will present a hardware-assisted intra-process compartmentalisation technique for applications that mix safe (OCaml) and unsafe (C) code. The technique ensures that vulnerabilities in C code do not affect OCaml code. Notably, only the C code pays the cost of the additional safety. While compartmentalisation techniques are fairly standard, extending them to functional programming raises unique challenges, which we address in this work. We have implemented the technique by extending the open-source Shakti RISC-V processor running bare-metal MirageOS applications.
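To make the boundary concrete, here is a minimal OCaml sketch of the kind of safe/unsafe mix the talk targets; the C stub name caml_sqlite3_open and the integer handle are hypothetical.

    (* OCaml calls into unsafe C through the FFI; a bug in the C code
       can corrupt the OCaml heap unless the two are compartmentalised. *)
    external sqlite_open : string -> int = "caml_sqlite3_open"

    let with_db (path : string) (f : int -> 'a) : 'a =
      let handle = sqlite_open path in  (* unsafe C runs here *)
      f handle                          (* safe OCaml continues here *)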
-
Manuel Chakravarty: Functional Programming in Swift
Abstract
When people talk about functional programming in modern multi-paradigm languages, they usually mention Rust, Scala, or Kotlin. You rarely hear Swift being mentioned. This is odd, as one might argue that, of these languages, Swift places the strongest emphasis on functional programming.
In this talk, I will explain the core functional programming features of Swift, including its expressive type system, value types, and mutability control. Furthermore, I will discuss how Swift’s language design is influenced by the desire to create a language that addresses the whole spectrum from low-level systems programming up to high-level applications with sophisticated graphical user interfaces. Beyond the core language itself, functional programming also permeates Swift’s rich ecosystem of libraries. To support this point, I will outline some FP-inspired core libraries.
Finally, I will briefly summarise practical considerations for using Swift. This includes the cross-platform toolchain, the package manager, and interoperability with other languages.
-
Sven-Bodo Scholz: Type Patterns in SaC — on the interplay of constraints, error messages and partial evaluation
Abstract
In this talk I present SaC's Type Patterns and their implementation in the latest release of our compiler. Type patterns allow programmers to specify arbitrarily complex constraints between the domains and co-domains of functions. Since, for most programs, the correctness of these constraints cannot be fully guaranteed statically, we follow a hybrid approach to checking them: partial evaluation resolves part of the constraints statically, while dynamic checks handle those constraints that cannot be resolved at compile time. As it turns out, this approach exposes an interesting challenge regarding the interplay between the quality of error messages and the effectiveness of partial evaluation.
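To illustrate the flavour of the hybrid checking (in OCaml rather than SaC, with invented names), a domain constraint that cannot be discharged statically degenerates into a residual dynamic check:

    (* Rough analogue of a type pattern float[n], float[n] -> float[n]:
       the equal-length constraint is checked at run time whenever
       partial evaluation cannot resolve it at compile time. *)
    let vadd (xs : float array) (ys : float array) : float array =
      if Array.length xs <> Array.length ys then
        invalid_arg "vadd: argument shapes differ"  (* residual check *)
      else
        Array.init (Array.length xs) (fun i -> xs.(i) +. ys.(i))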
-
Lennart Augustsson: Combinators, revived. In memoriam David Turner
No Abstract
-
Andreas Rossberg: Engineering a Formal Language Spec (at industrial scale)
Abstract
WebAssembly (Wasm) is a portable low-level bytecode language and virtual machine. Its official specification includes a full formal semantics, and every new feature must be specified in this formal semantics, in prose, and in the official reference interpreter before it can be standardised. This manual process with its redundancies is laborious and error-prone. We present SpecTec, a DSL and toolchain that facilitates both the Wasm specification and the generation of artefacts necessary for standardisation. A definition of the Wasm semantics in SpecTec serves as a single source of truth, from which we can generate a typeset specification, including formal definitions and prose pseudocode descriptions, and a meta-level interpreter. Further backends for test generation and meta-theory in theorem provers are work in progress. Our ultimate aim is that SpecTec will be adopted by the Wasm committee to specify future versions of the standard.
-
Wouter Swierstra: A correct-by-construction conversion to combinators
Abstract
This talk shows how to define a conversion from the simply typed lambda calculus to combinatory logic that preserves both types and semantics by construction.
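For orientation, a minimal untyped OCaml sketch of the classical translation via bracket abstraction; the talk's contribution is carrying such a translation out so that typing and semantics are preserved by construction, which this sketch does not attempt.

    type term =
      | Var of string
      | App of term * term
      | Lam of string * term
      | S | K | I

    (* [abstract x t] builds a lambda-free term equivalent to \x. t,
       assuming t itself contains no Lam. *)
    let rec abstract x t =
      match t with
      | Var y when y = x -> I
      | App (f, a) -> App (App (S, abstract x f), abstract x a)
      | t -> App (K, t)

    (* Translate innermost lambdas first, so [abstract] never sees Lam. *)
    let rec compile t =
      match t with
      | App (f, a) -> App (compile f, compile a)
      | Lam (x, b) -> abstract x (compile b)
      | t -> t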
-
Wouter Swierstra: The functional essence of imperative binary search trees
Abstract
Algorithms on restructuring binary search trees are typically presented in imperative pseudocode. Understandably so, as their performance relies on in-place execution, rather than the repeated allocation of fresh nodes in memory. Unfortunately, these imperative algorithms are notoriously difficult to verify as their loop invariants must relate the unfinished tree fragments being rebalanced. This talk presents several novel functional algorithms for accessing and inserting elements in a restructuring binary search tree that are as fast as their imperative counterparts; yet the correctness of these functional algorithms is established using a simple inductive argument.
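For contrast, the textbook purely functional insertion (a minimal OCaml sketch): its correctness follows by a simple inductive argument, but it allocates a fresh node along the whole search path, which is exactly the cost the talk's algorithms avoid.

    type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

    let rec insert x t =
      match t with
      | Leaf -> Node (Leaf, x, Leaf)
      | Node (l, y, r) ->
        if x < y then Node (insert x l, y, r)
        else if y < x then Node (l, y, insert x r)
        else t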
Tuesday
-
Jesper Cockx: Agda Core: The Dream and the Reality
Abstract
Type systems are essential tools for us programmers and mathematicians to build highly trustworthy theories and systems, hence a natural question is: how can we trust them? The gold standard for trustworthiness is to have a small core language that can independently verify our developments. Unlike its cousins Coq and Lean, Agda does not have a proper core language, so three years ago I optimistically set out to change that. However, carving out a core for a language not designed with one in mind is hard, especially for a feature-rich and organically grown language such as Agda.
In this talk I will share my initial objectives for developing Agda Core as well as the difficulties I encountered working towards them. I would also like to open up a discussion around more general questions related to accidental complexity, technical debt, and the curse of novelty in research on type system engineering.
-
Didier Rémy: Avoiding Signature Avoidance
Abstract
Signature avoidance is still one of the annoying problems that arise with ML modules. OCaml solves it in often surprising ways. We will discuss the problem through examples, map out some of the design space, and give an incomplete but simple specification for which we have a complete algorithm.
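A minimal OCaml instance of the phenomenon: the inferred type of the body mentions a module that is about to go out of scope, so the type checker must avoid it and, here, fails.

    module type S = sig type t val x : t end

    let m : (module S) = (module struct type t = int let x = 0 end)

    (* Rejected: the type of [A.x] is A.t, which would escape the
       scope of the local module A. *)
    let it = let module A = (val m : S) in A.x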
-
Youyou Cong: Towards a Type Theory for Verifying Gradable Properties
Abstract
Type theories have mainly been used to verify properties that either hold or do not hold. In the real world, however, there are also gradable properties that hold to some degree. I will present my preliminary ideas on designing a type theory for verifying gradable properties in various domains, including programming, natural language, and music.
-
Niki Vazou: Design, implementation, and mechanization of Dynamic Policies
Abstract
In this talk, I present various interpretations of the security condition for dynamic policies, concretely the time-transitive and non-time-transitive ones. I present dynLIO, an enforcement mechanism that extends LIO to ensure the safety of dynamic IFC policies, and discuss the development of the metatheory in Liquid Haskell.
-
Thomas Bourgeat: Specifying and verifying the tricks of computer architects
Abstract
After a quick introduction to the semantics of modules in a rule-based language and a manual proof of refinement between two simple modules, we will use the old idea of flushing (as proposed by Burch & Dill in CAV '94) to define a refinement map inductively. This technique allows for proving refinement by induction on the derivation of the refinement map, boiling down — in simple cases — to commutation properties between the transitions of the system.
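A toy OCaml instance of the flushing idea (names and setup invented for illustration): an implementation with one pipeline register refines a spec that applies each operation immediately, and the refinement map is obtained by flushing the pending operation.

    type spec = { total : int }
    let spec_step (s : spec) (x : int) : spec = { total = s.total + x }

    type impl = { acc : int; pending : int option }
    let impl_step (i : impl) (x : int) : impl =
      match i.pending with
      | None -> { i with pending = Some x }
      | Some y -> { acc = i.acc + y; pending = Some x }

    (* The refinement map; one then proves the commutation property
       flush (impl_step i x) = spec_step (flush i) x. *)
    let flush (i : impl) : spec =
      match i.pending with
      | None -> { total = i.acc }
      | Some y -> { total = i.acc + y }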
Wednesday
-
Jacco Krijnen: A Layered Certifying Compiler Architecture
Abstract
The formal verification of an optimising compiler for a realistic programming language is no small task. Most verification efforts develop the compiler and its correctness proof hand in hand. Unfortunately, this approach is less suitable for today’s constantly evolving community-developed open-source compilers and languages. I will present an alternative approach to high-assurance compilers, where a separate certifier uses translation validation to assess and certify the correctness of each individual compiler run. I will also demonstrate that an incremental, layered architecture for the certifier improves assurance step-by-step and may be developed largely independently from the constantly changing main compiler code base. This approach to compiler verification is practical, as witnessed by the development of a certifier for the deployed, in-production compiler for the Plutus smart contract language.
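One hypothetical shape such a layered certifier could take, sketched in OCaml (all names invented): each layer applies translation validation to one aspect of a single compiler run, and layers can be added independently as assurance grows.

    type 'prog pass = { name : string; input : 'prog; output : 'prog }
    type verdict = Certified | Unknown of string
    type 'prog layer = 'prog pass -> verdict

    (* Certify one compiler run by running every layer in turn,
       stopping at the first layer that cannot validate it. *)
    let certify (layers : 'prog layer list) (p : 'prog pass) : verdict =
      List.fold_left
        (fun acc layer ->
           match acc with
           | Unknown _ -> acc
           | Certified -> layer p)
        Certified layers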
-
Ulf Norell: Constraint-based Random Generators
No Abstract
-
Xavier Leroy: Sharing is caring: a somewhat efficient convertibility test
Abstract
Type-checkers / proof-checkers for dependently-typed systems perform lots of convertibility tests ("are these two terms equal modulo reductions?"). Except for proofs by large-scale reflection, which are best served by specialized compiled evaluators, most convertibility tests in practice are small and easy-looking, yet can still take a very long time if unnecessary evaluations are performed. In particular, the textbook strategy of reducing both terms to normal form and comparing the normal forms is often unusable.
As part of her PhD thesis, Nathanaëlle Courant has developed and verified a novel convertibility check that tries hard not to compute too much before concluding. It relies on two ingredients: careful use of lazy evaluation to share reductions in normal-order weak and strong normalization; and a presentation of the convertibility check as an "and-or" decision tree, to be explored both lazily and in parallel (using fair interleaving).
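A naive OCaml skeleton of the underlying idea, without the sharing or the and-or tree: compare weak-head normal forms and recurse, so no more reduction is performed than the comparison actually demands.

    type term = Var of int | Lam of term | App of term * term

    let rec shift d c = function
      | Var k -> Var (if k >= c then k + d else k)
      | Lam b -> Lam (shift d (c + 1) b)
      | App (f, a) -> App (shift d c f, shift d c a)

    (* [subst u n t] replaces variable n by u in t (de Bruijn indices). *)
    let rec subst u n = function
      | Var k -> if k = n then u else Var (if k > n then k - 1 else k)
      | Lam b -> Lam (subst (shift 1 0 u) (n + 1) b)
      | App (f, a) -> App (subst u n f, subst u n a)

    let rec whnf = function
      | App (f, a) ->
        (match whnf f with
         | Lam b -> whnf (subst a 0 b)
         | f' -> App (f', a))
      | t -> t

    (* Convertibility: weak-head normalize only as far as the
       comparison requires, instead of fully normalizing both terms. *)
    let rec conv t u =
      match whnf t, whnf u with
      | Var i, Var j -> i = j
      | Lam a, Lam b -> conv a b
      | App (f, a), App (g, b) -> conv f g && conv a b
      | _ -> false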
-
David Walker: Relational Network Verification
Abstract
Relational network verification is a new approach to validating network changes. In contrast to traditional network verification, which analyzes specifications for a single network snapshot, relational network verification analyzes specifications concerning two network snapshots (e.g., pre- and post-change snapshots) and captures their similarities and differences. Relational change specifications are compact and precise because they specify the flows or paths that change between snapshots and then simply mandate that other behaviors of the network "stay the same", without enumerating them. To achieve similar guarantees, single-snapshot specifications need to enumerate all flow and path behaviors that are not expected to change, so we can check that nothing has accidentally changed. Thus, precise single-snapshot specifications are proportional to network size, which makes them impractical to generate for many real-world networks.
To demonstrate the value of relational reasoning, we develop a high-level relational specification language and a tool called Rela to validate network changes. Rela first compiles input specifications and network snapshot representations to finite state transducers. It then checks compliance using decision procedures for automaton equivalence. In our experiments on complex changes to a global backbone (with over 10^3 routers), 93% of the changes could be specified in fewer than 10 Rela terms, and Rela validated 80% of them within 20 minutes.
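The gist of a relational check, as a toy OCaml predicate (all names illustrative): behaviour on the changed flows must match the change specification, and every other flow must be preserved verbatim.

    let relational_ok ~changed ~spec before after flows =
      List.for_all
        (fun flow ->
           if List.mem flow changed
           then spec flow (after flow)       (* changed: obey the spec *)
           else before flow = after flow)    (* unchanged: stay the same *)
        flows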
-
Nadia Polikarpova: E-Graph Guided Lemma Discovery for Inductive Proofs
Abstract
The problem of automatically proving the equality of terms over recursive functions and inductive data types is challenging, as such proofs often require auxiliary lemmas which must themselves be proven. Previous attempts at lemma discovery compromise on either efficiency or efficacy. Goal-directed approaches are fast but limited in expressiveness, as they can only discover auxiliary lemmas which entail their goals. Theory exploration approaches are expressive but inefficient, as they exhaustively enumerate candidate lemmas.
We introduce e-graph guided lemma discovery, a new approach to finding equational proofs that makes theory exploration goal-directed. We accomplish this by using e-graphs and equality saturation to efficiently construct and compactly represent the space of all goal-oriented proofs. This allows us to explore only those auxiliary lemmas guaranteed to help make progress on some of these proofs. We implemented our method in a new prover called CCLemma and compared it with three state-of-the-art provers across a variety of benchmarks. CCLemma performs consistently well on two standard benchmarks and additionally solves 50% more problems than the next best tool on a new challenging set.
-
Simon Peyton-Jones: Verification in Verse
Abstract
This talk presents our approach to type-checking (aka verification) of Verse programs.
-
John Hughes: Towards a DSL for requirements on smart contracts
Abstract
When testing smart contracts, requirements traceability and requirements coverage are arguably more important than code coverage; negative testing of requirements (to make sure they are enforced, not just permitted) requires carefully generated test cases and MCDC-like coverage measurement. This talk describes a DSL for requirements, inspired by semi-formal requirements on real Cardano smart contracts, that supports positive and negative MCDC-like testing and reveals weaknesses in test case generation. The next steps would be to support a form of temporal logic and a mutable requirements state.
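A hypothetical miniature of such a DSL in OCaml (invented names, far simpler than the real thing): each requirement carries a guard and whether matching transactions must be accepted or rejected, and coverage demands test cases on both sides of every guard.

    type outcome = MustAccept | MustReject

    type 'tx requirement = {
      name : string;
      applies : 'tx -> bool;   (* which test cases exercise it *)
      outcome : outcome;
    }

    (* MCDC-like coverage: every requirement must be exercised both
       positively and negatively by the generated transactions. *)
    let covered (reqs : 'tx requirement list) (txs : 'tx list) : bool =
      List.for_all
        (fun r ->
           List.exists r.applies txs
           && List.exists (fun tx -> not (r.applies tx)) txs)
        reqs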
-
Dave MacQueen: How SML Started — Where it might go from here
Abstract
I discuss the state of PL research leading up to and including the design of Standard ML in the mid-1980s, followed by SML-related developments in the decades since then.
A second part (not presented) covers some possible changes to SML contemplated by the speaker.
Thursday
-
Neel Krishnaswami: The Denotational Semantics of SSA
Abstract
Static Single Assignment form is one of the most widely-used compiler intermediate representations. In this talk, I present joint work with my student Jad Ghalayini giving a type theory and categorical semantics for static single assignment form. Having a categorical semantics permits showing that (for example) the TSO weak memory semantics is sound with respect to the equational theory of SSA.
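One well-known way to see the functional reading (a sketch in OCaml, not the talk's formalism): phi-nodes disappear once basic blocks take parameters, making an SSA program a collection of mutually tail-calling first-order functions.

    type var = string
    type label = string

    type expr = Int of int | Use of var | Add of expr * expr

    type block = {
      params : var list;            (* the block's phi-variables *)
      body : (var * expr) list;     (* straight-line assignments *)
      exit : exit;
    }
    and exit =
      | Jump of label * expr list   (* arguments for the target's params *)
      | Branch of expr * (label * expr list) * (label * expr list)
      | Return of expr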
-
Amal Ahmed: Realistic Realizability: Specifying ABIs You Can Count On
Abstract
The Application Binary Interface (ABI) for a language specifies the interoperability rules for each of its target platforms, including properties such as data layout and calling conventions, such that compliance with the rules will ensure "safe" execution and perhaps provide certain guarantees about resource usage. These rules are relied upon by compilers for that language and others, libraries, and foreign-function interfaces. Unfortunately, ABIs are typically specified in prose and, while type systems for source languages have grown richer over time, ABIs have largely remained the same, lacking analogous advances in expressivity and safety guarantees.
I’ll outline a vision for richer, semantic ABIs that would facilitate safer interoperability and library integrations, supported by a methodology for formally specifying ABIs using realizability models. These semantic ABIs relate abstract, high-level types to unwieldy, but well-behaved, low-level code. We demonstrate the approach with a case study that formalizes the ABI of a functional source language in terms of a reference-counting implementation in a C-like target language. An important contribution is our reference-counting realizability model which uses a novel extension to separation logic that captures how each reference owns a share to the reference-counted resource. I’ll discuss how different practically-motivated ABI design decisions can be interpreted via a semantic ABI and present the first formalization of a Swift-style ABI with library evolution.
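As a flavour of the low-level side (a toy in OCaml, not the paper's model), the retain/release discipline of a reference-counted cell is the kind of behaviour the semantic ABI must relate to a high-level immutable value.

    type 'a rc = { mutable count : int; value : 'a }

    let retain (r : 'a rc) : unit = r.count <- r.count + 1

    (* Each holder owns a share of the count; the cell may be
       reclaimed once the count reaches zero. *)
    let release (r : 'a rc) : unit = r.count <- r.count - 1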
-
Norman Ramsey: A Declarative Compiler Driver (An exercise in function composition)
Abstract
For teaching compilers, I have developed a compiler driver that is directed declaratively, by specifying only the (source, target, or intermediate) language to be input and the language to be output. Each high-level intermediate language can be represented using a subset of the source language; for debugging, these representations can be run using a reference interpreter. Free choice of both input and output languages implies a quadratic number of translations, but the driver implements them with a linear amount of work: three functions per language, each of constant size. One function reads the language, one writes it, and one translates from the language's predecessor in the compiler pipeline. These functions illustrate the power of function composition.
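A hypothetical OCaml rendering of the scheme (names invented): each language supplies its three constant-size functions, and the driver composes translations between any two pipeline positions.

    type ast = Atom of string | Seq of ast list   (* placeholder IR *)

    type lang = {
      name : string;
      read : string -> ast;
      write : ast -> string;
      from_pred : ast -> ast;   (* identity for the source language *)
    }

    let translate (pipeline : lang list) ~src ~dst (text : string) : string =
      let rec skip = function
        | l :: ls when l.name <> src -> skip ls
        | ls -> ls                      (* pipeline from [src] onwards *)
      in
      match skip pipeline with
      | [] -> invalid_arg "unknown source language"
      | l0 :: rest ->
        let rec go ast = function
          | [] -> invalid_arg "target does not follow source"
          | l :: ls ->
            let ast = l.from_pred ast in
            if l.name = dst then l.write ast else go ast ls
        in
        let ast = l0.read text in
        if l0.name = dst then l0.write ast else go ast rest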
-
Fritz Henglein: Types versus spaces
No Abstract
-
Benjamin Pierce & John Hughes: Random Heaps
No Abstract
-
François Pottier: Will It Fit? Worst-Case Heap Space Complexity Bounds for Concurrent Programs with GC
No Abstract
Friday
-
Jesper Cockx & Ulf Norell: Opaque Definitions in Agda
No Abstract
-
Peter Thiemann: Law and Order for Typestate
Abstract
Typestate systems are notoriously complex as they require sophisticated machinery for tracking aliasing. We propose a new, transition-oriented foundation for typestate in the setting of impure functional programming. Our approach relies on ordered types for simple alias tracking, and its formalization draws on work on bunched implications. Yet we support a flexible notion of borrowing in the presence of typestate.
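(For background, the conventional phantom-type rendering of typestate in plain OCaml, which orders operations but, lacking linearity or ordered types, cannot stop a stale handle from being reused after close:)

    type opened
    type closed

    type 'state handle = { fd : int }   (* 'state is a phantom index *)

    let open_ (path : string) : opened handle = ignore path; { fd = 0 }
    let read (h : opened handle) : string = ignore h; "data"
    let close (h : opened handle) : closed handle = { fd = h.fd }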
Our core calculus comes with a notion of resource types indexed by an ordered partial monoid that models abstract state transitions. We prove syntactic type soundness with respect to a resource-instrumented semantics. We give an algorithmic version of our type system and prove its soundness. Algorithmic typing facilitates a simple surface language that does not expose tedious details of ordered types. We implemented a typechecker for the surface language along with an interpreter for the core language.