Entries tagged correctness
There are many ways to introduce dependent type systems, depending on which side of the Curry-Howard lens you look through. But if your interest is more in programming languages than proof assistants, then length-indexed vectors are your Hello, World!, and an interpreter for the simply-typed lambda calculus is your FizzBuzz. I think the usual presentation of the latter is a bit of a cheat, for reasons which I will explain below. I'll also show a "non-cheating" version. Continue reading »
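(For readers who haven't met the Hello, World! of dependent types yet: the idea can be sketched even in Haskell with GADTs. This is my own minimal sketch, not code from the post; the names `Nat`, `Vec` and `vhead` are illustrative.)

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers, used to index vectors by their length.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Safe head: the empty vector is ruled out by the type checker,
-- so no runtime check (and no Maybe) is needed.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

main :: IO ()
main = print (vhead (VCons (1 :: Int) (VCons 2 VNil)))  -- prints 1
```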
First of all, before I start on the actual blog post, let me put this in context. I remember a couple of years ago when I developed an interest in functional programming languages, and Haskell in particular. There was a phase when I was able to use Haskell to solve problems in the small. I understood most of the basics of pure functional programming; then there were things I regarded as magic; and of course there were a lot of things I didn't even know that I didn't know about. But I didn't grok any of it.
I feel like I'm starting to get to the same level with Agda now. So this is going to be one of those "look at this cool thing I made" posts where the actual result is probably going to be trivial for actual experts of the field; but it's an important milestone for my own understanding of the subject.
I wanted to play around with simple but Turing-complete languages, and I started implementing an interpreter for a counter machine. More on that in a later post; this present post describes just the representation of register values. In the model that I implemented, values of registers are byte counters, meaning they have 256 different values, and two operations +1 and -1 that are inverses of each other. Incrementing/decrementing should roll over: 255 +1 = 0 and 0 -1 = 255. Continue reading »
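(In Haskell terms, this is exactly the behaviour of a fixed-width unsigned integer. A quick sketch using Word8, whose arithmetic wraps around; the names inc and dec are mine, not from the post.)

```haskell
import Data.Word (Word8)

-- A register value: a byte counter with 256 states (0..255).
-- Word8 arithmetic wraps around, matching the roll-over behaviour
-- described above, and inc and dec are inverses of each other.
inc, dec :: Word8 -> Word8
inc = (+ 1)
dec = subtract 1

main :: IO ()
main = do
  print (inc 255)       -- rolls over to 0
  print (dec 0)         -- rolls over to 255
  print (dec (inc 42))  -- inverses: back to 42
```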
My coworker Encsé posted a challenge: use the nine-digit number problem to demonstrate interesting programming techniques. So I sat down and wrote my first practical program using dependent types, in Agda. I had been playing around with Agda before, but this seemed like a good opportunity to try to write an actual, self-contained, provably correct program with it.
I'm not going to get into details now about the resulting Agda code, because I'm planning to present it in detail in a later post. In its current form, my program is formally proven to produce only good nine-digit numbers; the only property that still needs proving is that it finds all of them.
But the sad surprise came when I tried to actually run it. It was unbearably slow just enumerating the possible combinations for the first three digits. I'm talking about 12 minutes just to list them from 123 to 423 (at which point I killed the process). For comparison, the following Haskell program, which is an implementation of the same naïve algorithm, finds the (unique) solution in 4 milliseconds:
import Control.Monad.List

fromDigits = foldr shift 0
  where shift d s = 10 * s + d

p `divides` q = q `mod` p == 0

encse :: [Int]
encse = map fromDigits $ encse' 0 []
  where
    encse' 9 ds = return ds
    encse' n ds = do
      d <- [1..9]
      let ds' = d:ds
          n' = n + 1
      guard $ not (d `elem` ds)
      guard $ n' `divides` fromDigits ds'
      encse' n' ds'
So where's that slowdown coming from?
The first intuition would be that the code generated by Agda is slow because in parallel to the actual computation, it is also evaluating all kinds of proofs. But the proofs exist only in the world of types, so they shouldn't matter once the program is compiled.
The real answer is that calculating in unary representation is slow. Very, very slow. Even slower than you'd imagine. Continue reading »
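(To get a feel for why, here is a sketch of the standard Peano-style unary encoding in Haskell; the definitions and names are mine, but they mirror how natural numbers are typically defined in Agda.)

```haskell
-- Unary (Peano) naturals: the number n takes n+1 constructor cells.
data Nat = Zero | Suc Nat

-- Addition walks the entire first argument: O(m) work for m + n.
add :: Nat -> Nat -> Nat
add Zero    n = n
add (Suc m) n = Suc (add m n)

-- Multiplication repeats add, so it is O(m * n).
mul :: Nat -> Nat -> Nat
mul Zero    _ = Zero
mul (Suc m) n = add n (mul m n)

fromInt :: Int -> Nat
fromInt 0 = Zero
fromInt n = Suc (fromInt (n - 1))

toInt :: Nat -> Int
toInt Zero    = 0
toInt (Suc n) = 1 + toInt n

main :: IO ()
main = print (toInt (mul (fromInt 12) (fromInt 34)))  -- prints 408
```

Every arithmetic operation here takes time proportional to the *value* of its arguments, not the number of digits, so something like `fromDigits` on a multi-digit number has to build and traverse enormous chains of constructors.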
In my previous post, I introduced the B method and showed the steps of writing a simple program for finding the nth element of a sequence satisfying a given predicate p. While you may think the resulting program is correct, we can't just say so and be done with it. The whole point of the B method is that the resulting program can be formally proven correct.
The B software generates 69 so-called proof obligations for the code from the first part. These are assertions about the program actually behaving as specified. For example, let's look at PO69 which asserts that ll is correctly set. Recall first the relevant portion of the specification:
And the invariant of the implementation:
So what we have to prove is that given the preconditions, by the time the loop in the implementation terminates, the invariant makes sure ll is equal to its specified value. This is what's described (somewhat more verbosely) by the actual proof obligation below. Continue reading »
I'm going to introduce B from the perspective of a traditional programmer. You can find documentation on the web, but most of it is from the viewpoint of designing complex state machines. However, B is also a Pascal-like language that can be used for imperative programming. The big added value of using B is that the resulting program can be proven to be correct. In fact, I first encountered B a year ago while working on my B.Sc. thesis, which was about deriving new correct programs from existing ones.
As for its theoretical background, B is based on the Hoare-Dijkstra model of imperative programming. I'm not going to delve into it; instead, let's look at a concrete example, and I'll guide you through the process of first writing a formal specification, then creating the implementation (this part), and then proving its correctness (in the next part). Continue reading »