Presented at
32C3 (2015),
Dec. 28, 2015, 7 p.m.
(30 minutes).
In this presentation I will introduce the experimental language Ling. We shall build an intuitive understanding of the language through familiar concepts from imperative programming. We shall cover how Ling enables modular and precise control over memory allocation through a general optimization called fusion. Fusion is a cost-free abstraction mechanism which brings high-level programming to systems programming.
The design of <a href="https://github.com/np/ling">Ling</a> is the result of my research in collaboration with Daniel Gustafsson and Nicolas Guenot at the IT University of Copenhagen, and it also draws from the language <a href="https://lopezjuan.com/limestone/">Limestone</a> by Jean-Philippe Bernardy and Víctor López Juan at Chalmers University of Technology. These two lines of research build upon the longstanding research topics of process calculi (such as the π-calculus), term calculi (such as the λ-calculus), Linear Logic, and dependent Type Theory (as used in Coq and Agda to write proofs and programs).
The research on the λ-calculus and Type Theory gave rise to a powerful family of languages including, but not limited to, Haskell, OCaml, Coq, Idris, and Agda. The research on the π-calculus gave rise to a vast family of calculi for concurrency. However, type systems for these calculi took much longer to emerge and mature. For instance, the main concurrent programming language in use today is still dynamically typed. This is changing as we better understand how to use the formulae of Linear Logic as behavior types (or session types) for concurrent processes.
Still, the aim of this experimental language is to program systems precisely and modularly. The need for precision comes from resource constraints such as memory and file handles, and the need for modularity comes from the desire to reduce programming mistakes by solving problems at the right abstraction level. Functional programming offers a pretty good framework for modularity. This modularity comes at a cost which is rather difficult to predict. On the one hand, optimizing compilers can fuse function compositions to eliminate the need for intermediate data structures. On the other hand, when such an optimization fails to trigger, the resulting program might perform poorly. Ling controls when fusion can happen; therefore one knows statically when fusion occurs and when intermediate buffers are needed.
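To make the fusion idea concrete, here is a minimal sketch in Haskell (not Ling; Ling's syntax differs) showing a compositional pipeline and the single-pass loop that fusion would turn it into:

```haskell
-- A compositional pipeline: each stage conceptually builds an
-- intermediate list before the next stage consumes it.
pipeline :: [Int] -> Int
pipeline = sum . map (* 2) . filter even

-- What a successful fusion produces: one pass over the input,
-- no intermediate lists allocated.
pipelineFused :: [Int] -> Int
pipelineFused = go 0
  where
    go acc []       = acc
    go acc (x : xs)
      | even x      = go (acc + 2 * x) xs
      | otherwise   = go acc xs

main :: IO ()
main = print (pipeline [1 .. 10], pipelineFused [1 .. 10])  -- (60,60)
```

In a typical optimizing compiler this rewrite may or may not fire; the point of Ling is that one knows statically which of the two programs one gets.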
Today concurrent systems are built on top of shared memory. However, the shared memory model is a nightmare for programmers. Here the approach is reversed: we start from a concurrent programming language and apply it to shared memory as well. At first the goal is not necessarily to target a parallel architecture, but to program at a level of abstraction where the programmer knows precisely which resources are needed and the compiler still has plenty of opportunities to safely re-order and parallelize some instructions.
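As a rough illustration of that reversed approach (again plain Haskell, not Ling), the sketch below is written in a message-passing style, while the channel itself is implemented by the runtime on top of ordinary shared memory:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (forM_, replicateM)

main :: IO ()
main = do
  results <- newChan
  -- Worker process: sends five squares over the channel.
  _ <- forkIO (forM_ [1 .. 5 :: Int] (writeChan results . (^ 2)))
  -- Main process: receives the five messages and prints them.
  squares <- replicateM 5 (readChan results)
  print squares  -- [1,4,9,16,25]
```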
This talk is intended for an audience familiar with imperative programming. Using the language should not require understanding anything about Linear Logic, even though it is used fundamentally. Finally, this presentation is an open call for comments and contributions to the open development of the language and its infrastructure.
Presenters:
-
Nicolas Pouillard
Type Theory Hacker
My current areas of research are a mixture of Type Theory and Cryptography/Security.
I'm currently a postdoc researcher with the Demtech project at the IT University of Copenhagen.
Previously I was an engineer and then a PhD student in the Gallium research team at INRIA Paris-Rocquencourt. At that time I was mainly focused on the design of programming languages, and in particular meta-programming.
I learned programming and computer science at the EPITA engineering school in Paris.