Posts tagged clash

A tiny computer for Tiny BASIC

17 November 2020 (programming haskell clash fpga retro)

Just a quick post to peel back the curtain a bit on a Clash example I posted to Twitter. The tweet in question shows the following snippet, claiming it is "a complete Intel 8080-based FPGA computer that runs Tiny BASIC":

logicBoard
    :: (HiddenClockResetEnable dom)
    => Signal dom (Maybe (Unsigned 8)) -> Signal dom Bool -> Signal dom (Maybe (Unsigned 8))
logicBoard inByte outReady = outByte
  where
    CPUOut{..} = intel8080 CPUIn{..}

    interruptRequest = pure False

    (dataIn, outByte) = memoryMap _addrOut _dataOut $ do
        matchRight $ do
            mask 0x0000 $ romFromFile (SNat @0x0800) "_build/intel8080/image.bin"
            mask 0x0800 $ ram0 (SNat @0x0800)
            mask 0x1000 $ ram0 (SNat @0x1000)
        matchLeft $ do
            mask 0x10 $ port $ acia inByte outReady

I got almost a hundred likes on this tweet, which isn't too bad for a topic as niche as hardware design with Haskell and Clash. Obviously, the above tweet was meant as a brag, not as a detailed technical description; but given the traction it got, I thought I might as well try to expand a bit on it.

Continue reading »

A "very typed" container for representing microcode

15 September 2020 (programming haskell clash)

I've been thinking a bit about describing microcode lately. My motivation was the Intel 8080-compatible CPU I've been building for my upcoming Clash book. As with everything else for that book, the challenge is not in getting it to work — rather, it is in writing the code as close as possible to the way you would want to explain it to another person.

So in the context of a microprocessor as simple as the Intel 8080 and using synchronous RAM, I think of the microcode as a sequence of steps, where each step consists of an internal CPU state transition, and a memory read or write request. For example, the machine instruction 0x34 (mnemonic INR M) increments by one the byte pointed to by the register pair HL. In my core, the micro-architecture has an 8-bit value- and a 16-bit address-register; the latter can be used for memory addressing. To use something else for addressing, you need to load it into the address buffer first. So the steps to implement INR M are:

  1. Get value of HL register pair into the address buffer
  2. Indirect read via the address buffer into the value buffer
  3. Replace value buffer's contents with its increment
  4. Update the status register (flags like "was the latest value zero")
  5. Indirect write

However, memory access happens on the transition between cycles, so the final write will not be its own step; rather, it happens as the postamble of step 4. Similarly, the correct address will have to be put on the address pins in the preamble of step 2 for the load to work out:

  1. Get HL into address buffer
  2. Set address to address buffer's contents for reading
  3. Store read value from data-in into value buffer
  4. Increment value buffer
  5. Update status register
  6. Set address to address buffer's contents for writing

What makes this tricky is that on one hand, we want to describe preambles as part of their respective step, but of course for the implementation it is too late to process them when we get to that step. So I decided to write out the microcode as a sequence of triplets, corresponding to the preamble, the state transition, and the postamble, and then transform it into a format where preambles are attached to the previous step:

[ (Nothing,       Get2 rHL,          Nothing)
, (Just Indirect, ReadMem,           Nothing)
, (Nothing,       ALU ADD Const0x01, Nothing)
, (Nothing,       UpdateFlags,       Just Indirect)
]

Here, Indirect addressing means setting the address pins from the address buffer (as opposed to, e.g. the program counter); if it is in the postamble (i.e. write) position, it also means the write-request pin should be asserted.

So this is what the microcode developer writes, but then we can transform it into a format that consists of a state transition paired with the addressing:

(Nothing,
[ (Get2 rHL,          Just (Left Indirect))
, (ReadMem,            Nothing)
, (ALU ADD Const0x01, Nothing)
, (UpdateFlags,       Just (Right Indirect))
])

So we're done, without having done anything interesting enough to warrant a blog post.

Or are we?

Disallowing memory addressing conflicts

Note that in the format we can actually execute, the addressing at each step is either a Left read address, or a Right write address (or Nothing at all). But what if we had two subsequent micro-steps, where the first one has a write request in its postamble, and the second one has a read request in its preamble? We are describing a CPU that is more than 40 years old, connected to single-port RAM, so we can't do a read and a write at the same time. This constraint is correctly captured by the Maybe (Either Read Write) type of memory requests in the normalized form, but it is not enforced by our naïve [(Maybe Read, Transition, Maybe Write)] type for what the microcode developer writes.
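To make the problem concrete, here is a fragment the naïve triplet type happily accepts, even though the write request of the first step and the read request of the second would have to hit the RAM on the same transition (constructor names as in the INR M example above):

conflicting =
    [ (Nothing,       UpdateFlags, Just Indirect) -- write request in the postamble...
    , (Just Indirect, ReadMem,     Nothing)       -- ...and a read request in the very next preamble
    ]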

So this is what I set out to solve: to give an API for writing microcode that has self-contained steps including the read addressing, but still statically disallows conflicting writes and reads from subsequent steps. We start by going full Richard Eisenberg and lifting the memory addressing directives to the type level using singletons. While we're at it, let's also turn on Haskell 98 mode:

{-# LANGUAGE DataKinds, PolyKinds, ConstraintKinds, GADTs, FlexibleContexts #-}
{-# LANGUAGE TypeOperators, TypeFamilies, TypeApplications, ScopedTypeVariables #-}
{-# LANGUAGE StandaloneDeriving, DeriveFunctor #-}

import Data.Singletons

data Step (pre :: Maybe a) (post :: Maybe b) t where
    Step :: Sing pre -> t -> Sing post -> Step pre post t
deriving instance Functor (Step pre post)

The plan, then, is to do enough type-level magic to only allow neighbouring Steps if at most one of the first step's postamble and the second step's preamble is a type-level Just index.

The operations we want to support on microcode fragments are cons-ing a new Step onto the front, and appending two fragments. For the former, we need to check that the postamble of the new Step is compatible with the first preamble of the existing fragment; for the latter, we need the same check between the last postamble of the first fragment and the first preamble of the second one. First, let's codify what "compatible" means here:

type family Combine (post :: Maybe b) (pre :: Maybe a) :: Maybe (Either a b) where
    Combine Nothing Nothing = Nothing
    Combine (Just post) Nothing = Just (Right post)
    Combine Nothing (Just pre) = Just (Left pre)

Importantly, there is no clause for Combine (Just post) (Just pre).

Getting dizzy with the thin air of the type level? Let's leave ourselves a thread leading back to the term level:

combine
    :: forall a b (post :: Maybe b) (pre :: Maybe a).
       (SingKind a, SingKind b, SingI (Combine post pre))
    => Sing post -> Sing pre -> Demote (KindOf (Combine post pre))
combine _ _ = demote @(Combine post pre)

(This post is not sponsored by Singpost BTW).

Cons- and append-able fragments

For the actual fragments, we can store them internally almost in the normalized format, i.e. as a term-level (Maybe a, [(t, Maybe (Either a b))]) pair. Almost, but not quite, because the first a and the last b need to appear in the index, to be able to give a restricted type to cons and append. So instead of storing them in the list proper, we will store them as separate singletons:

data Ends a b
    = Empty
    | NonEmpty (Maybe a) (Maybe b)

data Amble (ends :: Ends a b) t where
    End :: Amble Empty t
    More
        :: forall (a0 :: Maybe a) (bn :: Maybe b) t. ()
        => Sing a0
        -> [(t, Demote (Maybe (Either a b)))]
        -> t
        -> Sing bn
        -> Amble (NonEmpty a0 bn) t
deriving instance Functor (Amble ends)

Note that we need a special Empty index value for End instead of just NonEmpty Nothing Nothing, because starting with an empty Amble, the first cons needs to change both the front-end preamble and the back-end postamble, whereas later cons operators should only change the front-end.

type family Cons (b1 :: Maybe b) (ends :: Ends a b) where
    Cons b1 Empty = b1
    Cons b1 (NonEmpty a1 bn) = bn

We can now try writing the term-level cons. The first clause, consing onto an empty Amble, is easy, because there is no existing front-end to check compatibility with:

cons
    :: forall (a0 :: Maybe a) b1 (ends :: Ends a b) t. ()
    => Step a0 b1 t -> Amble ends t -> Amble (NonEmpty a0 (Cons b1 ends)) t
cons (Step a0 x b1) End = More a0 [] x b1
cons (Step a0 x b1) (More a1 xs xn bn) = More a0 ((x, _):xs) xn bn

We get into trouble when trying to fill in the hole in the cons to a non-empty Amble. And so we should, because nowhere in the type of cons so far have we ensured that b1 is compatible with the front-end of ends. We will have to use another type family for that, to pattern-match on Empty and NonEmpty ends:

type family CanCons (b1 :: Maybe b) (ends :: Ends a b) :: Constraint where
    CanCons b1 Empty = ()
    CanCons (b1 :: Maybe b) (NonEmpty a1 bn :: Ends a b) =
        (SingKind a, SingKind b, SingI (Combine b1 a1))

Unsurprisingly, the constraints needed to be able to cons are exactly what we need to fill the hole with the term-level value of combine b1 a1:

cons
    :: forall (a0 :: Maybe a) b1 (ends :: Ends a b) t. (CanCons b1 ends)
    => Step a0 b1 t -> Amble ends t -> Amble (NonEmpty a0 (Cons b1 ends)) t
cons (Step a0 x b1) End = More a0 [] x b1
cons (Step a0 x b1) (More a1 xs xn bn) = More a0 ((x, combine b1 a1):xs) xn bn

Now we are cooking with gas: we can re-use this idea to implement append by ensuring we CanCons the first fragment's backend to the second fragment:

type family CanAppend (ends1 :: Ends a b) (ends2 :: Ends a b) :: Constraint where
    CanAppend Empty ends2 = ()
    CanAppend (NonEmpty a1 bn) ends2 = CanCons bn ends2

type family Append (ends1 :: Ends a b) (ends2 :: Ends a b) where
    Append Empty ends2 = ends2
    Append ends1 Empty = ends1
    Append (NonEmpty a0 bn) (NonEmpty an bm) = NonEmpty a0 bm

append :: (CanAppend ends1 ends2) => Amble ends1 t -> Amble ends2 t -> Amble (Append ends1 ends2) t
append End ys = ys
append (More a0 xs xn bn) End = More a0 xs xn bn
append (More a0 xs xn bn) (More an ys ym bm) = More a0 (xs ++ [(xn, combine bn an)] ++ ys) ym bm

We finish off the implementation by writing the translation into the normalized format. Since the More constructor already contains an almost-normalized form, we just need to take care to snoc the final element onto the result:

stepsOf
    :: forall (ends :: Ends a b) t. (SingKind a, SingKind b)
    => Amble ends t
    -> (Maybe (Demote a), [(t, Maybe (Demote (Either a b)))])
stepsOf End = (Nothing, [])
stepsOf (More a0 xs xn bn) = (fromSing a0, xs ++ [(xn, Right <$> fromSing bn)])

Putting a bow on it

What we have so far works, but there are a couple of straightforward improvements that would be a shame not to implement.

Nicer way to take steps

As written, you would have to use Step like this:

Step (sing @Nothing) UpdateFlags (sing @(Just Indirect))

All this singing noise would be more annoying than the Eurovision Song Contest, so I wanted to avoid it. The idea is to turn those Sing-typed arguments into just type-level arguments; then do some horrible RankNTypes magic to keep the parameter order. Prenex? What is that?

{-# LANGUAGE RankNTypes #-}
      
step :: forall pre. (SingI pre) => forall t. t -> forall post. (SingI post) => Step pre post t
step x = Step sing x sing

So now we will be able to write code like step @Nothing UpdateFlags @(Just Indirect) and get a Step type inferred that has the preamble and the postamble appearing in the indices.
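Below, >:> is used to write microcode as infix cons; presumably it is nothing more than this (a sketch):

infixr 5 >:>
(>:>)
    :: (CanCons b1 ends)
    => Step a0 b1 t -> Amble ends t -> Amble (NonEmpty a0 (Cons b1 ends)) t
(>:>) = cons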

Custom type error message

Suppose we make a mistake in our microcode, and accidentally want to write after one step and read before the next (using >:> for infix cons):

step @Nothing         (Get2 rHL)                       @(Just IncrPC) >:>
step @(Just Indirect) ReadMem                          @Nothing >:>
step @Nothing         (Compute Const01 ADD KeepC SetA) @Nothing >:>
step @Nothing         UpdateFlags                      @(Just Indirect) >:>
End

This is rejected by the type checker, of course; however, the error message is not as informative as it could be, as it faults the missing SingI instance for a stuck type family application:

• No instance for (SingI (Combine ('Just 'IncrPC) ('Just 'Indirect)))
    arising from a use of ‘>:>’

With GHC's custom type errors feature, we can add a fourth clause to our Combine type family. Unfortunately, this requires turning on UndecidableInstances for now:

{-# LANGUAGE UndecidableInstances #-}      
import GHC.TypeLits

type Conflict post pre =
    Text "Conflict between postamble" :$$: Text "  " :<>: ShowType post :$$:
    Text "and next preamble" :$$: Text "  " :<>: ShowType pre

type family Combine (post :: Maybe b) (pre :: Maybe a) :: Maybe (Either a b) where
    Combine Nothing Nothing = Nothing
    Combine (Just post) Nothing = Just (Right post)
    Combine Nothing (Just pre) = Just (Left pre)
    Combine (Just post) (Just pre) = TypeError (Conflict post pre)

With this, the error message changes to:

• Conflict between postamble 'IncrPC and next preamble 'Indirect

Much nicer!

Tracking fragment length

The final difference between what we have described here and the code I use for real is that in the real version, Amble also tracks its length in an index. This is needed because the CPU core is used not just for emulation, but also for FPGA synthesis; and in real hardware, we can't just store lists of unbounded size in the microcode ROM. So instead, microcode is described as a length-indexed Amble n ends t, and then normalized into a Vec n instead of a list. Each instruction can be at most 10 steps long; everything is then ultimately normalized into a uniformly typed Vec 10 by padding it with "go start fetching next instruction" micro-ops.
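I won't reproduce the full length-indexed version here, but the shape of the change is roughly the following (a sketch only, assuming GHC.TypeNats arithmetic in the index and Clash's Vec in place of the list; see the real code below):

data Amble (n :: Nat) (ends :: Ends a b) t where
    End :: Amble 0 Empty t
    More
        :: forall (a0 :: Maybe a) (bn :: Maybe b) n t. ()
        => Sing a0
        -> Vec n (t, Demote (Maybe (Either a b)))
        -> t
        -> Sing bn
        -> Amble (n + 1) (NonEmpty a0 bn) t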

The full implementation

Find the full code on GitHub, next to the rest of the Intel 8080 core.

Integrating Verilator and Clash via Cabal

7 May 2020 (programming haskell clash fpga)

TL;DR: This is a detailed description of how I got Clashilator working seamlessly from Cabal. It took me three days to figure out how the pieces need to fit together, and most of it was just trawling the documentation of Cabal internals, so I better write this down while I still have any idea what I did.

Background information

We set the scene with the dramatis personæ first: Clash, the compiler that turns (a subset of) Haskell into Verilog; Verilator, an open-source simulator that compiles Verilog into fast C++; and Cabal, the build system that ultimately underlies every Haskell package workflow.

When designing some circuit, it is very useful to be able to simulate its behaviour. Getting debugging information out of a hardware FPGA is a huge hassle; iteration is slow because FPGA synthesis toolchains, by and large, suck; and driving the circuit (i.e. setting the right inputs in the right sequence and interpreting the outputs) can be almost as complicated as the circuit under testing itself. Of course, all of these problems apply doubly to someone like me who is just dabbling in FPGAs.

So during development, instead of synthesizing a circuit design and loading it onto an FPGA, we want to simulate it; and of course we want to use nice expressive languages to write the test bench that envelopes the simulated circuit. One way to do this is what I call very high-level simulation: in this approach, we take the Haskell abstractions we use in our circuit, and reinterpret them in a software context. For example, we might have a state machine described as i -> State s o: instead of lifting it into a signal function, we can just runState it in a normal Haskell program's main and do whatever we want with it.
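As a toy illustration of this approach (names hypothetical, not from the actual computer): the same i -> State s o function that would normally be lifted into a Mealy machine can be driven directly from main:

import Control.Monad.State

-- A state machine of type i -> State s o: count the True inputs seen so far.
countHighs :: Bool -> State Int Int
countHighs b = do
    modify (if b then succ else id)
    get

-- Drive it as plain software, no Signals or clocks anywhere.
main :: IO ()
main = print $ evalState (mapM countHighs [True, False, True, True]) 0
-- prints [1,1,2,3]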

However, sometimes we want to simulate whole circuits, i.e. Signal dom i -> Signal dom o functions that might have who knows what registers and memory elements inside. For example, if we have a circuit that generates video output from a frame buffer, there's a difference between a high-level simulation that renders the frame buffer's contents to the screen, and a lower level one that interprets the VGA signal output of the circuit. Timing issues in synchronizing the VGA blanking signals with the color lines will only manifest themselves in the latter. So for this kind of applications, Clash of course contains a signal simulator that can be used to feed inputs into a circuit and get outputs. For example, here's a simulation of a Brainfuck computer where only the peripheral lines are exposed: internal connections between the CPU and the RAM and ROM are all part of the Clash circuit.
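This is simulate from Clash.Prelude; a minimal sketch of its use, with a stand-in one-register circuit instead of a whole computer:

import Clash.Prelude
import qualified Data.List as L

-- Stand-in circuit: delay the input stream by one cycle through a register.
delayByOne :: HiddenClockResetEnable dom => Signal dom Int -> Signal dom Int
delayByOne = register 0

main :: IO ()
main = print $ L.take 5 $ simulate @System delayByOne [1, 2, 3, 4, 5]
-- the first sample(s) show the register's reset value, then the delayed inputs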

There is only one problem with the Clash simulator: its performance. This small benchmark shows how long it takes to simulate enough cycles to draw 10 full video frames at 640 ⨯ 480 resolution (i.e. 4,192,000 cycles). Clash does it in ~13 seconds; remember, at 60 FPS, it shouldn't take more than 166 milliseconds to draw 10 frames if we want to simulate it in real time. Of course, real-time simulation at this level of detail isn't necessarily feasible on consumer hardware; but less than one frame per second means any kind of interactive application is out.

In contrast, Verilator, an open-source Verilog simulator, can run all 4,192,000 cycles in 125 ms. This could be faster than real time, were it not for the additional overhead of drawing each frame to the screen (and the Haskell FFI of 4 million calls across to C...), and of course this is a very simple circuit that only renders a single bouncing ball; anything more complex will only be slower. But still, that same benchmark shows that 20+ FPS is possible, end-to-end, if we ditch Clash and use Verilator instead. Pong is fully playable at 13 FPS.

Clash, Verilog and Haskell

The interface between Clash and Verilator is simple: Verilator consumes Verilog code, so we can simply run Clash and point Verilator at its output. However, we still need to connect that Verilator-compiled code to the Haskell code that drives the inputs and interprets the outputs. In the Pong example, this glue consists of hand-written C++ and Haskell source files and a Makefile tying them together.

As I was preparing to write the next chapter of a Clash book I've been working on, I made a new Clash project and then, because I needed a Verilator simulation for it, I started going through the steps of making all these files. And of course, I realized this should all be automated. But what is all in that sentence?

Step one was to write generators for the C++ and Haskell source files and the Makefile. This is quite easy, actually; after all, it is the fact that these files are so regular that makes writing them by hand so infuriating. So we do a bit of text template substitution, using Clash's .manifest output as the source of input/output pin names and bus widths. This gives us a simple code generator: you run Clash yourself, point Clashilator at a .manifest file, and it outputs a bunch of files, leaving you ready to run make. Mission accomplished?

No, not really.

Clash, Verilog and... Cabal

While we've eliminated the boilerplate in the source files, one source of boilerplate remains: the Cabal package settings. Here's the relevant HPack package.yaml section from the Pong example:

extra-libraries: stdc++ 
extra-lib-dirs: verilator
include-dirs: verilator
build-tools: hsc2hs

ghc-options:
  -O3
  -fPIC -pgml g++
  -optl-Wl,--whole-archive -optl-Wl,-Bstatic
  -optl-Wl,-L_build/verilator -optl-Wl,-lVerilatorFFI
  -optl-Wl,-Bdynamic -optl-Wl,--no-whole-archive

Oof, that hurts. All those magic GHC options just to statically link to libVerilatorFFI.a, to be repeated across all Clash projects that use Verilator...

Also, while Clashilator outputs a Makefile to drive the invocation of Verilator and the subsequent compilation of the C++ bits, it doesn't give you a solution for running *that* Makefile at the right time — not to mention running Clashilator itself!

The problem here is that in order to compile a Verilator-using Haskell program, we first need to compile the other, non-simulation modules into Verilog. And this is tricky because those modules can have external dependencies: remember Clash is just a GHC backend, so you can import other Haskell libraries. And how are we going to bring those libraries in scope? I myself use Stack but your mileage may vary: you could be using Cabal directly, or some Cabal-Nix integration. In any case, you'd basically need to build your package so you can compile to Verilog so you can run Verilator so you can... build your package.

To solve this seemingly circular dependency, and to get rid of the Cabal file boilerplate, I decided to try and do everything in the Cabal workflow. Whatever your preferred method of building Haskell packages, when the rubber hits the road, they all ultimately run Cabal. If we run Clash late enough in the build process, all dependencies will be installed by then. If we run Verilator early enough in the build process, the resulting library can be linked into whatever executable or library Cabal is building.

If we do all this during cabal build, everything will happen at just the right time.

Is such a thing even possible? Yes it is.

So, what gives us at least a fighting chance is that Cabal is extensible with so-called hooks. You can write a custom Setup.hs file like this:

import Distribution.Simple

main = defaultMainWithHooks simpleUserHooks

Here, simpleUserHooks is a record with a bunch of fields for extension points; of particular interest to us here is this one:

buildHook
    :: PackageDescription -> LocalBuildInfo -> UserHooks -> BuildFlags -> IO ()

At this point, we have breached the defenses: inside buildHook, we can basically do arbitrary things as long as the effect is building the package. In particular, we can:

  1. Collect the components marked with an x-clashilator-top-is field
  2. Run Clash on each marked component's top-level circuit module, generating Verilog
  3. Generate the Verilator glue code and Makefile, and run make
  4. Extend each marked component's BuildInfo with the include paths and linker flags for the freshly built libVerilatorFFI

The result of all this is a bunch of new files under the build directory, and modified BuildInfos for all the components marked with x-clashilator-top-is. We put these back into the PackageDescription and then call the default buildHook, which then goes on to compile and link the Haskell simulation integrating the Verilator parts:

clashilatorBuildHook :: PackageDescription -> LocalBuildInfo -> UserHooks -> BuildFlags -> IO ()
clashilatorBuildHook pkg localInfo userHooks buildFlags = do
    pkg' <- clashilate pkg localInfo buildFlags
    buildHook simpleUserHooks pkg' localInfo userHooks buildFlags

All the details are in the full implementation, which is probably worth a look if you are interested in Cabal's internals. As I wrote in the beginning of this post, it took me days to wade through the API to find all the moving parts that can be put together to apply this level of violence to Cabal. The final code also exports a convenience function clashilatorMain for the common case where enabling Clashilator is the only desired Setup.hs customization; and also clashilate itself for hardcore users who want to build their own buildHooks.
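For the common case, the whole Setup.hs then shrinks to a two-liner (a sketch; the exact module to import clashilatorMain from is my assumption, check the repository):

import Clashilator.Setup (clashilatorMain)  -- module name is an assumption

main :: IO ()
main = clashilatorMain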

The implementation is almost 150 lines of not particularly nice code. It is also missing some features; most notably, it doesn't track file changes, so Clash and Verilator are always rerun, even if none of the Clash source files have changed. It is also completely, utterly untested. But it does give us what we set out to do: completely boilerplate-less integration of Clash and Verilator. A complete example package is here, and here's the money shot: the executables section of the package.yaml file.

executables:
  simulator:
    main: simulator.hs
    verbatim:
      x-clashilator-top-is: MyCircuit.Nested.Top
      x-clashilator-clock: CLK

Composable CPU descriptions in CλaSH, and wrap-up of RetroChallenge 2018/09

30 September 2018 (programming haskell fpga electronics retrochallenge retro clash)

My last post ended with some sample CλaSH code illustrating the spaghetti-ness you can get yourself into if you try to describe a CPU directly as a function (CPUState, CPUIn) -> (CPUState, CPUOut). I promised some ideas for improving that code.

To start off gently, first of all we can give names to some common internal operations to help readability. Here's the code from the previous post rewritten with a small kit of functions:

intensionalCPU (s0, CPUIn{..}) = case phase s of
    WaitKeyPress reg ->
        let s' = case cpuInKeyEvent of
                Just (True, key) -> goto Fetch1 . setReg reg key $ s
                _ -> s
            out = def{ cpuOutMemAddr = pc s' }
        in (s', out)                   
    Fetch1 ->
        let s' = goto Exec s{ opHi = cpuInMem, pc = succ $ pc s }
            out = def{ cpuOutMemAddr = pc s' }
        in (s', out)
  where
    s | cpuInVBlank = s0{ timer = fromMaybe 0 . predIdx $ timer s0 }
      | otherwise = s0
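The kit itself is mostly record-update shorthand; for example, goto and setReg could plausibly be defined like this (a sketch consistent with the CPUState type from the older post below):

goto :: Phase -> CPUState -> CPUState
goto ph s = s{ phase = ph }

setReg :: Reg -> Word8 -> CPUState -> CPUState
setReg reg val s = s{ registers = replace reg val (registers s) }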

Using a State monad

To avoid the possibility of screwing up the threading of the internal state, we can use the State CPUState monad:
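The details are behind the link; as a taste, here is what the Fetch1 case might look like in that style (a sketch, reusing the goto helper from above):

import Control.Monad.State

fetch1 :: CPUIn -> State CPUState CPUOut
fetch1 CPUIn{..} = do
    -- The state threading is now handled by the monad, not by hand.
    modify $ \s -> goto Exec s{ opHi = cpuInMem, pc = succ (pc s) }
    addr <- gets pc
    return def{ cpuOutMemAddr = addr }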

Continue reading »

CPU modeling in CλaSH

23 September 2018 (programming haskell fpga electronics retrochallenge retro clash)

My entry for RetroChallenge 2018/09 is building a CHIP-8 computer. Previously, I've talked in detail about the video signal generator and the keyboard interface; the only part still missing is the CPU.

The CHIP-8 instruction set

Since the CHIP-8 was originally designed to be an interpreted language run as a virtual machine, some of its instructions are quite high-level. For example, the framebuffer is modified via a dedicated blitting instruction; there is a built-in random number generator; and instructions to manipulate two 60 Hz timers. Other instructions are more in line with what one would expect to see in a CPU, and implement basic arithmetic such as addition or bitwise AND. There is also a generic escape hatch instruction but that doesn't really apply to hardware implementations.

The CPU has 16 general-purpose 8-bit registers V0–VF; register VF is also used to report flag results like overflow from arithmetic operations, or collision during blitting. Most instructions operate on these general registers. Since the available memory is roughly 4K, these 8-bit registers wouldn't be too useful as pointers. Instead, there is a 12-bit Index register that is used as the implicit address argument to memory-accessing instructions.

For flow control, the program counter needs 12 bits as well, since the CHIP-8 is a von Neumann machine: code lives in the same 4K address space as data. Furthermore, it has CALL / RET instructions backed by a call-only stack (there is no argument passing, and there are no local variables).

Modeling the CPU's internal state

We can collect all of the registers described above into a single Haskell datatype. I have also added two 8-bit registers for the high and low byte of the current instruction, but in retrospect it would be enough to just store the high byte, since the low byte is coming from RAM exactly when we need to dispatch on it anyway. The extra phase register is to distinguish between execution phases such as fetching the first byte of the next instruction, or for instructions that are implemented in multiple clock cycles, like clearing the frame buffer (more on that below).

type Addr = Unsigned 12
type Reg = Index 16

data CPUState = CPUState
    { opHi, opLo :: Word8
    , pc, ptr :: Addr
    , registers :: Vec 16 Word8
    , stack :: Vec 24 Addr
    , sp :: Index 24
    , phase :: Phase
    , timer :: Word8
    , randomState :: Unsigned 9
    }

I implemented the random number generator as a 9-bit linear-feedback shift register, truncated to its lower 8 bits; this is because a maximal 8-bit LFSR wouldn't generate 0xFF.

lfsr :: Unsigned 9 -> Unsigned 9
lfsr s = (s `rotateR` 1) `xor` b4
  where
    b = fromIntegral $ complement . lsb $ s
    b4 = b `shiftL` 4

Input and output "pins"

Similar to how a real chip has various pins to interface with other parts, our CPU description will also have multiple inputs and outputs. The input consists of the data lines read from main memory and the framebuffer; the events coming from the keypad, and the keypad state; and the 60 Hz VBlank signal from the video generator. This latter signal is used to implement the timer register's countdown. The keypad's signals are fed into the CPU both as events and statefully; I've decided to do it this way so that only the peripheral interface needs to be changed to accommodate devices that are naturally either parallel (like a keypad matrix scanner) or serial (like a computer keyboard on a PS/2 connector).

type Key = Index 16
type KeypadState = Vec 16 Bool

data CPUIn = CPUIn
    { cpuInMem :: Word8
    , cpuInFB :: Bit
    , cpuInKeys :: KeypadState
    , cpuInKeyEvent :: Maybe (Bool, Key)
    , cpuInVBlank :: Bool
    }

The output is even less surprising: there's an address line and a data out (write) line for main memory and the video framebuffer.

type VidX = Unsigned 6
type VidY = Unsigned 5

data CPUOut = CPUOut
    { cpuOutMemAddr :: Addr
    , cpuOutMemWrite :: Maybe Word8
    , cpuOutFBAddr :: (VidX, VidY)
    , cpuOutFBWrite :: Maybe Bit
    }

So, what is a CPU?

As far as CλaSH is concerned, the CPU is extensionally a circuit converting input signals to output signals, just like any other component:

extensionalCPU :: Signal dom CPUIn -> Signal dom CPUOut

The internal CPU state is of no concern at this level. Internally, we can implement the above as a Mealy machine with a state transition function that describes behaviour in any given single cycle:

intensionalCPU :: (CPUState, CPUIn) -> (CPUState, CPUOut)

extensionalCPU = mealy (curry intensionalCPU) initialState

As far as a circuit is concerned, a clock cycle is a clock cycle is a clock cycle. If we want to do any kind of sequencing, for example to fetch two-byte instruction opcodes from the byte-indexed main memory in two steps, we need to know in intensionalCPU which step is next. This is why we have the phase field in CPUState: so we can read out what we need to do, and store what we want to do next.

For example, in my current version the video framebuffer is bit-indexed (addressed by the 6-bit X and the 5-bit Y coordinate), and there is no DMA to take care of bulk writes; so to implement the instruction that clears the screen, we need to write low to all framebuffer addresses, one by one, from (0, 0) to (63, 31). This requires 2048 cycles, so we need to go through the Phase that clears (0, 0), to the one that clears (0, 1), all the way to (63, 31), before fetching the first byte of the next opcode to continue execution. Accordingly, one of the constructors of Phase stores the (x, y) coordinate of the next bit to clear, and we'll need to add some logic so that if phase = ClearFB (x, y), we emit (x, y) on the cpuOutFBAddr line and Just low on the cpuOutFBWrite line; a sketch of this follows the Phase definition below. Blitting proceeds similarly, with two sub-phases per phase: one to read the old value, and one to write back the new value (with the bitmap image xor'd onto it).

data Phase
    = Init
    | Fetch1
    | Exec
    | StoreReg Reg
    | LoadReg Reg
    | ClearFB (VidX, VidY)
    | Draw DrawPhase (VidX, VidY) Nybble (Index 8)
    | WaitKeyPress Reg
    | WriteBCD Word8 (Index 3)
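For concreteness, the ClearFB case described above could look something like this (a sketch, not the actual implementation; def and succIdx are assumed helpers in the style of the newer post above):

clearFBStep :: (VidX, VidY) -> CPUState -> (CPUState, CPUOut)
clearFBStep (x, y) s = (s', out)
  where
    -- Emit the current coordinate and write a low bit to it.
    out = def{ cpuOutFBAddr = (x, y), cpuOutFBWrite = Just low }

    -- Advance to the next pixel, or go fetch the next opcode
    -- once (63, 31) has been cleared.
    s' = case (succIdx x, succIdx y) of
        (_, Just y')       -> s{ phase = ClearFB (x, y') }
        (Just x', Nothing) -> s{ phase = ClearFB (x', 0) }
        (Nothing, Nothing) -> s{ phase = Fetch1 }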

So how should we write intensionalCPU? We could do it in direct style, i.e. something like the snippet below.
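Reconstructed from the rewritten version in the newer post above, with the helpers inlined back into raw record updates, the direct style went roughly like this:

intensionalCPU (s0, CPUIn{..}) = case phase s of
    WaitKeyPress reg ->
        let s' = case cpuInKeyEvent of
                Just (True, key) -> s
                    { phase = Fetch1
                    , registers = replace reg key (registers s)
                    }
                _ -> s
            out = CPUOut
                { cpuOutMemAddr = pc s'
                , cpuOutMemWrite = Nothing
                , cpuOutFBAddr = (0, 0)
                , cpuOutFBWrite = Nothing
                }
        in (s', out)
    Fetch1 ->
        let s' = s{ opHi = cpuInMem, pc = succ (pc s), phase = Exec }
            out = CPUOut
                { cpuOutMemAddr = pc s'
                , cpuOutMemWrite = Nothing
                , cpuOutFBAddr = (0, 0)
                , cpuOutFBWrite = Nothing
                }
        in (s', out)
  where
    s | cpuInVBlank = s0{ timer = fromMaybe 0 (predIdx (timer s0)) }
      | otherwise = s0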

If you think this is horrible and unreadable and unmaintainable, then yes! I agree! Which is why I've spent most of this RetroChallenge (when not fighting synthesizer crashes) thinking about nicer ways of writing this.

This post is getting long, so let's end on this note here. Next time, I am going to explain how far I've gotten in this quest for nicely readable, composable descriptions of CPUs.
