Friday, November 10, 2017

Ghcid with VS Code

Summary: New versions of Ghcid and the VS Code extension work even better together.

I've just released Ghcid v0.6.8 and the associated VS Code extension haskell-ghcid v0.2.0. Together they vastly simplify the Ghcid VS Code experience.

Ghcid reads .ghcid files

A new feature in Ghcid is that if there is a .ghcid file in the current directory it will load it as additional arguments. For example, in the Shake repo I have a .ghcid file:

-c "ghci -fno-code -ferror-spans"

This tells ghcid not to guess at the command (e.g. using stack if you have a .stack-work) but to always run ghci -fno-code -ferror-spans. The command works because I have a .ghci file which loads all the necessary files, while -fno-code speeds up compilation and -ferror-spans gives better error highlighting.
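
For reference, a .ghci file along those lines might be as simple as the following sketch (the directory names here are hypothetical, not Shake's actual setup):

:set -isrc -itest
:load Main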

Ghcid VS Code starts ghcid

A new feature in the VS Code extension is the action Start Ghcid which starts a new ghcid terminal, writes the output to a temporary file, and uses that output to populate the Problems pane. Importantly, the extension runs ghcid with no command line arguments, so having a sensible .ghcid lets you control what it does.

The effect of these changes is that starting ghcid in VS Code now takes a few keystrokes, whereas before it required special flags, opening files, running commands etc.

Saturday, November 04, 2017

Understanding HLint rules

Summary: I added a degenerate foldr to map rule in the new version of HLint; here I describe how it works.

I've just released HLint 2.0.10, which includes a rule to recognise uses of foldr that should really be map. As an example:

foldr (\curr acc -> (+1) curr : acc) []

Can be rewritten as:

map (\curr -> (+1) curr)

Which is much more readable (and then subsequently HLint will suggest map (+1), which is vastly clearer than the initial foldr). The change required to HLint was to add a rule to the hlint.yaml saying:

- warn: {lhs: "foldr (\\c a -> x : a) []", rhs: "map (\\c -> x)"}

You can read this statement as saying if you see foldr (\c a -> x : a) [], suggest map (\c -> x) as a warning. The HLint matching engine then applies that template to every subexpression in your program. In the rest of the post I'll talk through the steps HLint performs.

Step 1: Unification

The first step is to try unifying the template foldr (\c a -> x : a) [] against the user's subexpression, namely foldr (\curr acc -> (+1) curr : acc) []. HLint is trying to find assignments for the single-letter variables in the template (namely c, a and x) which cause it to match the subexpression. Unification proceeds top-down, and if it finds anything concrete that does not match (e.g. the user had written foldl) then it fails. In this case the unification succeeds with the bindings:

  • c = curr (from the first argument to the lambda)
  • a = acc (from the second argument to the lambda)
  • x = (+1) curr (from before the cons)
  • a = acc (from after the cons)

An example of a subexpression that would have failed unification is foldl (\curr acc -> (+1) curr : acc) [].
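
To make that concrete, here is a toy model of this step - a deliberately tiny AST and unifier, not HLint's real implementation (which operates on haskell-src-exts expressions):

import Data.Char (isLower)

-- A toy expression type; "cons" stands in for the real (:).
data Exp = Var String | App Exp Exp | Lam String Exp
    deriving (Eq, Show)

-- Template variables are single lowercase letters, as in the rule above.
isUnifyVar :: String -> Bool
isUnifyVar [c] = isLower c
isUnifyVar _ = False

-- Unify a template against a subexpression top-down, collecting bindings,
-- failing if anything concrete (e.g. foldr vs foldl) does not match.
unify :: Exp -> Exp -> Maybe [(String, Exp)]
unify (Var v) e | isUnifyVar v = Just [(v, e)]
unify (Var a) (Var b) | a == b = Just []
unify (App f1 x1) (App f2 x2) = (++) <$> unify f1 f2 <*> unify x1 x2
-- a template binder like c binds to the user's binder (curr)
unify (Lam v1 b1) (Lam v2 b2) = ((v1, Var v2) :) <$> unify b1 b2
unify _ _ = Nothing

Encoding the template and the subexpression above in this Exp type, unify returns exactly the four bindings listed.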

Step 2: Validity

The next step is to check that any value which has been bound more than once is equal in all bindings. In our case only a has been used twice, and it always binds to acc, so the unification is valid.

An example of a subexpression that would have failed validity is foldr (\curr acc -> (+1) curr : xs) [].
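
Continuing the toy model, validity collapses the list of bindings, failing if any variable is bound to two different expressions:

import Data.Function (on)
import Data.List (groupBy, sortOn)

-- Keep one binding per variable, provided all its bindings agree.
validSubst :: [(String, Exp)] -> Maybe [(String, Exp)]
validSubst = mapM check . groupBy ((==) `on` fst) . sortOn fst
    where
        check bs@((v, e) : _) | all ((== e) . snd) bs = Just (v, e)
        check _ = Nothing

Running it on the four bindings above collapses them to three, one each for c, a and x.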

Step 3: Substitution

Now we've got some bindings, we can substitute them into the RHS, namely map (\c -> x). We replace c and x using the bindings above. Note that a isn't mentioned on the RHS, so we don't use it. After substitution we get:

map (\curr -> (+1) curr)
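
In the toy model, substitution walks the RHS replacing template variables, and also renames binders such as c to whatever they unified with:

-- Substitute bindings into a template; lambda binders are renamed too,
-- which is how \c becomes \curr.
substitute :: [(String, Exp)] -> Exp -> Exp
substitute binds = go
    where
        go (Var v) | Just e <- lookup v binds = e
        go (Var v) = Var v
        go (App f x) = App (go f) (go x)
        go (Lam v b) = Lam (rename v) (go b)
        rename v | Just (Var v') <- lookup v binds = v'
                 | otherwise = v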

Step 4: Free variable check

Consider the expression foldr (\curr acc -> f acc : acc) []. Using the rules above we'd end up with map (\curr -> f acc), which is terrible, since we've gone from referring to a locally bound acc to whatever acc is in scope (if any). To solve that, we check that the result doesn't introduce any new free variables:

(freeVars result \\ freeVars hintRuleRHS) `isSubsetOf` freeVars original

Specifically any free variables introduced in the result, which weren't in the RHS (excluding the fake unification variables), must have been in the original subexpression.
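
In the toy model, freeVars has the standard definition, and the guard above can be written directly (mirroring the full code at the end of this post):

import qualified Data.Set as Set

freeVars :: Exp -> Set.Set String
freeVars (Var v) = Set.singleton v
freeVars (App f x) = freeVars f `Set.union` freeVars x
freeVars (Lam v b) = Set.delete v (freeVars b)

-- The result may only mention free variables that came from the original
-- subexpression, or concrete names the RHS template itself uses (like map).
checkFreeVars :: Exp -> Exp -> Exp -> Bool
checkFreeVars original rhs result =
    (freeVars result Set.\\ Set.filter (not . isUnifyVar) (freeVars rhs))
        `Set.isSubsetOf` freeVars original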

With that, for foldr, we're done. There are a handful of other steps that apply in some cases.

Step A: Dot expansion in the template

If you write a hint map f (map g x) ==> map (f . g) x then HLint notices that also implies the rule map f . map g ==> map (f . g) and adds it. As a result, you shouldn't write your HLint rules in point-free style.

Step B: Dot/dollar expansion in the subexpression

When matching a subexpression HLint will expand f $ x and (f . g) x if doing so results in a match. These operators are used commonly enough that they are often treated more like brackets than functions.
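
As a sketch of this in the toy model, treating ($) and (.) as ordinary curried variables, the matcher can simply try every expansion of a subexpression:

-- Try the expression as written, plus versions with $ and . expanded away.
expansions :: Exp -> [Exp]
expansions e@(App (App (Var "$") f) x) = [e, App f x]
expansions e@(App (App (App (Var ".") f) g) x) = [e, App f (App g x)]
expansions e = [e]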

Step C: Scope matching

When unifying qualified function names, HLint uses the active imports to guess whether they match. If you have import qualified Data.Vector as V then the subexpression V.length will unify with Data.Vector.length. Since HLint doesn't have complete import information, it uses a few heuristics to figure out matching.
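
A toy heuristic in the same spirit (far simpler than what HLint actually does):

-- An import such as: import qualified Data.Vector as V
data Import = Import {importModule :: String, importAs :: Maybe String}

-- Split a name like Data.Vector.length into (Data.Vector, length).
splitQual :: String -> (String, String)
splitQual s = case break (== '.') (reverse s) of
    (rname, '.' : rmod) -> (reverse rmod, reverse rname)
    (rname, _) -> ("", reverse rname)

-- Does a use like V.length plausibly refer to Data.Vector.length?
matchQual :: [Import] -> String -> String -> Bool
matchQual imports use full = uname == fname && (umod == fmod || aliased)
    where
        (umod, uname) = splitQual use
        (fmod, fname) = splitQual full
        aliased = any (\i -> importModule i == fmod && importAs i == Just umod) imports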

Step D: Scope moving

Similarly to scope matching on the LHS of a rule, after matching, HLint tries to requalify any necessary values on the RHS. As an example, assuming we are producing Data.Vector.null, if we know about import qualified Data.Vector as V then we suggest V.null.

Full code

To see the full code and all supporting definitions go to the HLint source, which defines matchIdea - here I show a gently simplified version. Given scope information, a rule (LHS and RHS) and a subexpression, we optionally produce a resulting expression after substitution.

matchIdea :: Scope -> HintRule -> Exp_ -> Maybe Exp_
matchIdea s HintRule{..} original = do
    u <- unifyExp hintRuleLHS original
    u <- validSubst u
    -- the free-variable check (Step 4) must happen after substitution,
    -- but before any requalification of names (Step D)
    let result = substitute u hintRuleRHS
    guard $ (freeVars result Set.\\ Set.filter (not . isUnifyVar) (freeVars hintRuleRHS))
            `Set.isSubsetOf` freeVars original
        -- check no unexpected new free variables
    return result

Wednesday, September 20, 2017

Shake 0.16 - revised rule definitions

Summary: I've just released shake v0.16. A lot has changed, but it's probably only visible if you have defined your own rules or oracles.

Shake-0.16 is now out, 8 months since the last release, and with a lot of improvements. For full details read the changelog, but in this post I'm going to go through a few of the things that might have the biggest impact on users.

Rule types redefined

Since the first version of Shake there has been a Rule key value type class defining all rule types - for instance the file rule type has key of filename and value of modification time. With version 0.16 the type class is gone, rules are harder to write, but offer higher performance and more customisation. For people using the builtin rule types, you'll see those advantages, and in the future see additional features that weren't previously possible. For people defining custom rule types, those will require rewriting - read the docs and if things get tough, ask on StackOverflow.

The one place many users might encounter the changes is that oracle rules now require a type instance relating the key and value types. For example, if defining an oracle for the CompilerVersion given the CompilerName, you would have to add:

type instance RuleResult CompilerName = CompilerVersion

As a result of this type instance the previously problematic askOracle can now infer the result type, removing possible sources of error and simplifying callers.
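
As a sketch of how the pieces fit together (the newtype definitions and the ghc --numeric-version command here are illustrative assumptions, not code from Shake):

{-# LANGUAGE GeneralizedNewtypeDeriving, TypeFamilies #-}
import Development.Shake
import Development.Shake.Classes

newtype CompilerName = CompilerName ()
    deriving (Show, Eq, Hashable, Binary, NFData)
newtype CompilerVersion = CompilerVersion String
    deriving (Show, Eq, Hashable, Binary, NFData)
type instance RuleResult CompilerName = CompilerVersion

oracleRules :: Rules ()
oracleRules = do
    -- define the oracle once; here a hypothetical compiler-version query
    _ <- addOracle $ \(CompilerName ()) -> do
        Stdout ver <- cmd "ghc --numeric-version"
        return $ CompilerVersion ver
    -- ask it from a rule; the result type is inferred via RuleResult
    "version.txt" %> \out -> do
        CompilerVersion ver <- askOracle $ CompilerName ()
        writeFileChanged out ver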

The redefining of rule types represents most of the work in this release.

Add cmd_

The cmd_ function is not much code, but I suspect will turn out to be remarkably useful. The cmd function in Shake is variadic (can take multiple arguments) and polymorphic in the return type (you can run it in multiple monads with multiple results). However, because of the overloading, if you didn't use the result of cmd it couldn't be resolved, leading to ugly code such as () <- cmd args. With cmd_ the result is constrained to be m (), so cmd_ args can be used.
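
A quick sketch of the difference (the gcc command line is purely illustrative):

import Development.Shake

compileC :: FilePath -> Action ()
compileC src = do
    -- previously the result type had to be pinned down: () <- cmd "gcc -c" [src]
    cmd_ "gcc -c" [src]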

Rework Skip/Rebuild

Since the beginning Shake has tried to mirror the make command line flags. In terms of flags to selectively control rebuilding, make is based entirely on ordered comparison of timestamps, and flags such as --assume-new don't make a lot of sense for Shake. In this release Shake stops trying to pretend to be make, removing the old flags (which never worked properly) and adding --skip (don't build something even if it is otherwise required) and --build (build something regardless). Both these flags can take file patterns, e.g. --build=**/*.o to rebuild all object files. I don't think these flags are finished with, but it's certainly less of a mess than before.

Sunday, September 17, 2017

Existential Serialisation

Summary: Using static pointers you can perform binary serialisation of existentials.

Many moons ago I asked how to write a Binary instance for a type including an existential, such as:

data Foo = forall a . (Typeable a, Binary a) => Foo a

Here we have a constructor Foo which contains a value. We don't statically know the type of the contained value, but we do know it has the type classes Typeable (so we can at runtime switch on its type) and Binary (so we can serialise it). But how can we deserialise it? We can store the relevant TypeRep when serialising, but when deserialising there is no mechanism to map from TypeRep to a Binary instance.

In Shake, I needed to serialise existentials, as described in §4.1 of the original paper. My solution was to build a global mapping table, storing pairs of TypeRep and Binary instances for the types I knew were relevant. This solution works, but cannot deserialise anything that has not already been added to the global table, which required certain functions to live in weird places to ensure that they were called before deserialisation. Effective, but ugly.

Recently Abhiroop Sarkar suggested using the relatively new static pointers extension. This extension lets you turn top-level bindings with no arguments into a StaticPtr which can then be serialised/deserialised, even between different instances of a process. To take advantage of this feature, we can redefine Foo as:

data Foo = forall a . (StaticFoo a, Binary a) => Foo a

class StaticFoo a where
    staticFoo :: a -> StaticPtr (Get Foo)

The approach is to switch from serialising the TypeRep (from which we try to look up Get Foo), to serialising the Get Foo directly. We can write a Binary Foo instance by defining put:

put :: Foo -> Put
put (Foo x) = do
    put $ staticKey $ staticFoo x
    put x

Here we simply grab a StaticPtr (Get Foo) which can deserialise the object, then use staticKey to turn it into something that can be serialised itself. Next, we write out the payload. To reverse this process we define get:

get :: Get Foo
get = do
    ptr <- get
    case unsafePerformIO (unsafeLookupStaticPtr ptr) of
        Just value -> deRefStaticPtr value :: Get Foo
        Nothing -> error "Binary Foo: unknown static pointer"

We first get the staticKey, use unsafeLookupStaticPtr to turn it into a StaticPtr (Get Foo) followed by deRefStaticPtr to turn it into a Get Foo. The unsafe prefix on these functions is justified - bugs while developing this code resulted in segfaults.

The final piece of the puzzle is defining StaticFoo instances for the various types we might want to serialise. As an example for String:

instance StaticFoo String where
    staticFoo _ = static (Foo <$> (get :: Get String))

We perform the get, wrap a Foo around it, and then turn it into a StaticPtr. All other types follow the same pattern, replacing String with Int (for example). The expression passed to static must have no free variables, including type variables, so we cannot define an instance for a, or even an instance for [a] - it must be [Char] and [Int] separately.
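
As a hedged roundtrip sketch, assuming the put and get above have been wrapped up as the instance Binary Foo:

import Data.Binary

main :: IO ()
main = case decode (encode (Foo "hello")) of
    -- re-encoding the deserialised existential gives back the same bytes
    Foo x -> print $ encode (Foo x) == encode (Foo "hello")  -- prints True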

A complete code sample and test case is available here.

This approach works, and importantly allows extra constraints on the existential. The two disadvantages are: 1) that static isn't very flexible or easy to abstract over, resulting in a lot of StaticFoo boilerplate; 2) the static pointer is not guaranteed to be valid if the program changes in any way.

Will Shake be moving over to this approach? No. The next version of Shake has undergone an extensive rewrite, and in the process, moved away from needing this feature. A problem I had for 8 years has been solved, just as I no longer need the solution!

Sunday, August 20, 2017

Ghcid and VS Code

Summary: There's now a Ghcid VS Code addin that gives you red squiggles.

I've been using Ghcid for about 3 years, and VS Code for about 6 months. Ghcid alone is able to display the errors and warnings, and update them whenever you save. In this post I'll show how VS Code users can also click on errors to jump to them, and how to get red squiggles in the text buffer for errors.

Clicking on errors

Using a recent VS Code, if you run ghcid from the terminal window, you can hold Ctrl and click the filename to jump to the right location in the erroneous file.

Red squiggles

Red squiggles are now possible using the haskell-ghcid addin. To get it working:

  • Run ghcid -o ghcid.txt, which will produce a file ghcid.txt that updates every time ghcid updates. Running the underlying ghci with -ferror-spans will significantly improve the errors reported.
  • Open ghcid.txt in VS Code as the active editor. Run the VS Code command (Ctrl+Shift+P) named "Watch Ghcid output".

These steps cause the ghcid errors to appear in the VS Code Problems pane, and to show as red squiggles in the editor. Even though the errors are in the proper Problems pane, I still prefer the output provided by the ghcid terminal, so still look at that.

The VS Code addin is not very well polished - but I'm using it on a daily basis.

Thursday, July 06, 2017

HaskellX Bytes talk next Tuesday

I'm talking about "Static Analysis in Haskell" at HaskellX Bytes next Tuesday (11th July), in London UK. Registration is free. The abstract is:

Haskell is a strongly typed programming language, which should be well suited to static analysis - specifically any insights about the program which don't require running the program. Alas, while type systems are becoming increasingly powerful, other forms of static analysis aren't advancing as fast. In this talk we'll give an overview of some of the forms of non-type-based static analysis that do exist, from the practical (GHC warnings, HLint, Weeder) to the research (LiquidHaskell, Catch).

I'm a big fan of static analysis, so this will be part summary, part sales pitch, and part call to arms. Followed by beer.

Clarification: I originally got the day wrong, and the URL preserves the original incorrect day. The talk is on Tuesday 11th July.

Tuesday, June 20, 2017

Announcing Weeder: dead export detection

Most projects accumulate code over time. To combat that, I've written Weeder, which detects unused Haskell exports, allowing dead code to be removed (pulling up the weeds). When used in conjunction with GHC's -fwarn-unused-binds -fwarn-unused-imports and HLint, it enables deleting unused definitions, imports and extensions.

Weeder piggy-backs off files generated by stack, so first obtain stack, then:

  • Install weeder by running stack install weeder --resolver=nightly.
  • Ensure your project has a stack.yaml file. If you don't normally build with stack then run stack init to generate one.
  • Run weeder . --build, which builds your project with stack and reports any weeds.

What does Weeder detect?

Weeder detects a bunch of weeds, including:

  • You export a function helper from module Foo.Bar, but nothing else in your package uses helper, and Foo.Bar is not an exposed-module. Therefore, the export of helper is a weed. Note that helper itself may or may not be a weed - once it is no longer exported, -fwarn-unused-binds will tell you if it is entirely redundant.
  • Your package depends on another package but doesn't use anything from it - the dependency should usually be deleted. This functionality is similar to packunused, but implemented quite differently.
  • Your package has entries in the other-modules field that are either unused (and thus should be deleted), or are missing (and thus should be added). The stack tool warns about the latter already.
  • A source file is used between two different sections in a .cabal file - e.g. in both the library and the executable. Usually it's better to arrange for the executable to depend on the library, but sometimes that would unnecessarily pollute the interface. Useful to be aware of, and sometimes worth fixing, but not always.
  • A file has not been compiled despite being mentioned in the .cabal file. This situation can be because the file is unused, or the stack compilation was incomplete. I recommend compiling both benchmarks and tests to avoid this warning where possible - running weeder . --build will use a suitable command line.

Beware of conditional compilation (e.g. CPP and the Cabal flag mechanism), as these may mean that something is currently a weed, but in different configurations it is not.

I recommend fixing the warnings relating to other-modules and files not being compiled first, as these may cause other warnings to disappear.

Ignoring weeds

If you want your package to be detected as "weed free", but it has some weeds you know about but don't consider important, you can add a .weeder.yaml file adjacent to the stack.yaml with a list of exclusions. To generate an initial list of exclusions run weeder . --yaml > .weeder.yaml.

You may wish to generalise/simplify the .weeder.yaml by removing anything above or below the interesting part. As an example, here is the .weeder.yaml file from ghcid:

- message: Module reused between components
- message:
  - name: Weeds exported
  - identifier: withWaiterPoll

This configuration declares that I am not interested in the message about modules being reused between components (that's the way ghcid works, and I am aware of it). It also says that I am not concerned about withWaiterPoll being a weed - it's a simplified method of file change detection I use for debugging, so even though it's dead now, I sometimes do switch to it.

Running with Continuous Integration

Before running Weeder on your continuous integration (CI) server, you should first ensure there are no existing weeds. One way to achieve that is to ignore existing hints by running weeder . --yaml > .weeder.yaml and checking in the resulting .weeder.yaml.

On the CI you should then run weeder . (or weeder . --build to compile as well). To avoid the cost of compilation you may wish to fetch the latest Weeder binary release. For certain CI environments there are helper scripts to do that.

Travis: Execute the following command:

curl -sL https://raw.github.com/ndmitchell/weeder/master/misc/travis.sh | sh -s .

The arguments after -s are passed to weeder, so modify the final . if you want other arguments.

Appveyor: Add the following statement to .appveyor.yml:

- ps: Invoke-Command ([Scriptblock]::Create((Invoke-WebRequest 'https://raw.githubusercontent.com/ndmitchell/weeder/master/misc/appveyor.ps1').Content)) -ArgumentList @('.')

The arguments inside @() are passed to weeder, so add new arguments surrounded by ', comma separated - e.g. @('.', '--build').

What about Cabal users?

Weeder requires the textual .hi file for each source file in the project. Stack generates those already, so it was easy to integrate with. There's no reason that information couldn't be extracted by either passing flags to Cabal, or converting the .hi files afterwards. I welcome patches to do that integration.