Safer C programming

TL;DR – check out elfs-clang-plugins, cool plugins for clang made at Elastifile.

Have you ever made the mistake of returning a bool instead of an enum?

enum Result do_something(void) {
    ...
    return true;
}

In C that’s valid (in C++ you can use ‘enum class’ to avoid it, but if you didn’t, you’d have the same problem).

No compiler that I know of warns about this C code. One of our newly open-sourced clang plugins flags this (and many other) enum-related mistakes:

clang -Xclang -load -Xclang ./clang_plugins.so \
      -Xclang -add-plugin -Xclang enums_conversion \
      -c /tmp/test.c
/tmp/test.c:7:12: error: enum conversion to or from enum Result
    return true;
           ^
1 error generated.

The package includes:

  • enums_conversion: Finds implicit casts to/from enums and integral types
  • include_cleaner: Finds unused #includes
  • large_assignment: Finds large copies in assignments and initializations (size is configurable)
  • private: Prevents access to fields of structs that are defined in private.h files

More information at https://github.com/sinelaw/elfs-clang-plugins

Because C is… not so safe.

 


Const when you need it

infernu uses row-type polymorphism to propagate read/write capabilities on record fields. Using row-type polymorphism to describe more than just which fields are present bears a vague resemblance to polymorphic constraints.

In C, a pointer to a field in a const struct is automatically const’ed:

struct foo { int field; };

void useInt(const int *);

int main(void) {

    const struct foo x = { 0 }; /* initialized: reading an uninitialized object is undefined */

    useInt(&x.field); // no warnings because &x.field is 'const int *'

    return 0;
}

Thus, a function that extracts a pointer to a (possibly deeply nested) field from a const struct will also return a const pointer:

const int *getField(const struct foo *x) {

    return &x->field;

}

(All the code compiles with `-Wall` and `-Wextra`)

But, what if I want to use `getField` on a non-const struct, to get an accessor to a field within it? Almost works:

struct foo y;
int *bla = getField(&y);
*bla = 2;

Uh oh. We get a warning:

warning: initialization discards ‘const’ qualifier 
from pointer target type [enabled by default]
     int *bla = getField(&y);
                ^

The compiler is angry because `int *bla` should be `const int *bla`. But we don’t want that! We just want to get an accessor – a writable accessor – to a field in our not-const struct value.

C++ (not C) does have a non-solution: const_cast. That isn’t what we want: it’s unsafe. What we want is: if a function is given a non-const struct, the non-constness should propagate to the field accessor being returned (and vice versa: if the given struct was const, the accessor should be const too).

In fancier words, we need const polymorphism, which I imagine would be written with a ‘constness type variable’ C like this made-up syntax:

const<C> int *getField(const<C> struct foo *x) {
    return &x->field;
}

And then we would expect this to compile with no problems:

    struct foo y;
    int *bla = getField(&y);

…because ‘y’ is not const, so the pointer returned from getField does not point at a const.

Unfortunately, no such thing. We could represent this in a type system in a number of ways. One simple way is to say that constness is a constraint on a type (using something like Haskell’s type classes). Another way is to have ‘write into a field’ be a kind of a capability that’s part of the type.
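
Either way, here’s how the idea might look; below is a minimal Haskell sketch (my own illustration; the names Mutability, Ref and this getField are made up, and this is neither infernu nor C). Constness is a phantom “capability” parameter, and because getField is polymorphic in it, the mutability of the argument propagates to the result, just like the imaginary const&lt;C&gt; above:

{-# LANGUAGE DataKinds, KindSignatures #-}

-- Hypothetical names, for illustration only.
data Mutability = ReadOnly | ReadWrite

-- A stand-in for a pointer, tagged with a phantom mutability parameter.
newtype Ref (m :: Mutability) a = Ref a

data Foo = Foo { fooField :: Int }

-- Polymorphic in 'm': a read-only Ref yields a read-only accessor, and a
-- read-write Ref yields a writable one -- the made-up const<C>, in Haskell.
getField :: Ref m Foo -> Ref m Int
getField (Ref foo) = Ref (fooField foo)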

The latter, write-capability approach is what I use in Infernu. Here there are no structs (it’s JavaScript) but there are polymorphic records. The type system includes two flavors for each field label: Get and Set. If a field is only being read, the record (or object or row) that contains it only needs to have the ‘Get’ accessor for that field. Here’s infernu’s output for a simple example:

//  obj :  { subObj:  { val: Number } }
var obj = { subObj: { val: 3 } };

Our object is simple, and the comment shows what infernu infers: a reasonably simple type.

In the notation I (shamelessly) invented, read-only fields have a prefix ‘get’ in the type, and read/write fields don’t have any prefix. So a read-only field ‘bla’ would be: { get bla : t }. If ‘bla’ is required to be writable, the type is written as { bla : t }. So in the above ‘obj’ example, we see that literal objects are by default inferred to be writable (type annotations would allow you to control that).

Next let’s make a function that only reads ‘subObj’:

//       readSubObj : ( { get subObj: h | g} -> h)
function readSubObj(x) { return x.subObj; }

The inferred type says: readSubObj is a function that takes an object with a readable field subObj (hence “get subObj”: it doesn’t require the ‘Set’ capability!). subObj has some type ‘h’, and the function returns that same type, ‘h’. (By the way, the ‘| g‘ means the passed object is allowed to contain other fields as well; we don’t care about them.)

Example of a nested read:

//       readVal : ( { get subObj:  { get val: d | c} | b} -> d)
function readVal(x) { return x.subObj.val; }

Now we need to ‘get subObj’ but subObj itself is an object with a readable field ‘val’ of type d. The function returns a ‘d’.

We can use readSubObj on a writable object with no problems:

//  sub :  { val: Number }
var sub = readSubObj(obj);

When infernu supports type annotations (eventually) one could take advantage of this type-system feature by marking certain fields ‘get’.

While this isn’t exactly the same as the problem we discussed with C const pointers, the same idea could be used to implement polymorphic constness.

The main idea here is that ideas from row-type polymorphism can be used to implement a certain kind of ‘capabilities’ over types, constraints that are propagated. This may be a nicer way to implement (some kind of) polymorphic constraints.

(For example, in a language that supports row extension/reduction, a function { x : Int | r } -> { | r } would retain the unspecified constraints from ‘r’. I’m sure there are more interesting examples.)

If you can refer me to something like this, please do!


Two implementations of DHM type inference

Here are two simple implementations of Damas-Hindley-Milner (DHM) type inference.

First is my Haskell version of a region-based optimized type checker, as explained by Oleg Kiselyov in his excellent review of the optimizations to generalization used in OCaml. Oleg gives an SML implementation, which I’ve Haskellized rather mechanically (using STRef instead of ML’s mutable references, etc.). The result is a bit ugly, but it does include all the optimizations Oleg explains: lambda-depth (region/level) based fast generalization and instantiation, path compression on linked variables, and avoiding expensive occurs checks by delaying them to whenever we traverse the types anyway.
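
To give a flavor of the central trick, here’s a heavily simplified sketch of level-based generalization (my own toy version with made-up names, not the actual code): every unbound variable records the lambda-depth (“level”) at which it was created, and a let-generalization quantifies exactly those variables created deeper than the current level, without scanning the environment for free variables:

import Control.Monad.ST
import Data.STRef

-- Simplified representation: mutable variables carry an id and the
-- level at which they were created.
data TV s = Unbound Int Int            -- variable id, creation level
          | Link (Ty s)                -- union-find style link

data Ty s = TVar (STRef s (TV s))
          | TArrow (Ty s) (Ty s)
          | QVar Int                   -- a generalized (quantified) variable

-- Generalize at a 'let': quantify exactly those unbound variables whose
-- level is deeper than the current one, i.e. created inside the let body.
generalize :: Int -> Ty s -> ST s (Ty s)
generalize level ty = case ty of
  TArrow a b -> TArrow <$> generalize level a <*> generalize level b
  QVar n     -> pure (QVar n)
  TVar ref   -> do
    tv <- readSTRef ref
    case tv of
      Link ty'      -> generalize level ty'      -- follow links
      Unbound n l
        | l > level -> pure (QVar n)             -- created deeper: quantify
        | otherwise -> pure ty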

Second, here’s my much shorter and more elegant implementation, using the neat unification-fd package by Wren Romano. It’s less optimized, though: currently I’m not doing regions or the other optimizations. I’m also not entirely satisfied with how it looks: I’m guessing this isn’t how the author of unification-fd intended generalization to be implemented, but it works. Generalization does the expensive lookup of free metavariables in the type environment, and the instantiate function is a bit clunky. The unification-fd package itself performs the inexpensive occurs checks mentioned above, plus path compression, but it doesn’t provide an easy way to keep track of lambda-depth, so I skipped that part. Perhaps the Variable class should include a parameterized payload per variable, which could be used for (among other things) keeping track of lambda-depth.
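
For reference, the minimal setup unification-fd asks for is a term functor plus a Unifiable instance that says how constructors line up; here’s a sketch using a toy type language (not infernu’s actual types):

{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

import Control.Unification (Unifiable(..))

-- A toy term functor; unification-fd ties the recursive knot with its
-- UTerm type and supplies the metavariables (e.g. IntVar).
data TypeF a = TInt | TArrow a a
  deriving (Functor, Foldable, Traversable)

-- zipMatch says how two terms unify at the root: equal constructors pair
-- up their children for further unification, anything else fails.
instance Unifiable TypeF where
  zipMatch TInt         TInt         = Just TInt
  zipMatch (TArrow a b) (TArrow c d) = Just (TArrow (Right (a, c)) (Right (b, d)))
  zipMatch _            _            = Nothing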


In Python, don’t initialize local variables unnecessarily

A common pattern:

def foo():
    x = None  # but do we need this? (short answer: no)
    if something:
        # ... do stuff ...
        x = 'bla'
    else:
        x = 'blo'

The variable x is being initialized before the if/else, but the intention of the programmer is that its value will actually be determined by the if/else itself. If somebody later comes around and mistakenly removes one of the assignments (inside ‘if’ or ‘else’), no runtime error will occur and x will remain initialized to a probably wrong value.

Leaving out the initialization is better – in that case, forgetting to set x in one of the branches will cause an UnboundLocalError:

>>> def foo():
...     if False:
...         x = 0
...     return x
... 
>>> foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in foo
UnboundLocalError: local variable 'x' referenced before assignment

Errors are good! (when they flag buggy code)

Now, what if we also have an x declared in the global scope? Because assigning to a name anywhere inside a function makes that name local to the whole function, the error will still happen (which is good).

>>> x = 1
>>> def foo():
...     if False:
...         x = 0
...     return x
... 
>>> foo()
Traceback (most recent call last):
..
UnboundLocalError: local variable 'x' referenced before assignment

Summary: In Python, don’t initialize variables until you know what value to assign to them.


Beware of ‘var’ in for loops, JS programmers

Time and time again, I see people using ‘var’ in the initialization part of a for loop. Example from MDN (Mozilla Developer Network):

for (var i = 0; i < 9; i++) {
   console.log(i);
   // more statements
}

What’s wrong with var i = 0 above? The problem is that variables declared in a for initialization have function scope, just like any other var declaration. In other words, they are visible throughout the entire enclosing function, not just inside the loop. Consider the following:

function outer() {
    var x = 'outer';
    function inner() {
        x = 'inner';
        //
        // ... lots o' code
        //
        for (var x = 0; x < 1; x++) {
            // in for
        }
    }
    inner();
}

In the inner function, the var declaration in the for loop makes x local to the whole of ‘inner’, so it shadows the outer variable throughout, not just inside the loop. Even the statement x = 'inner' at the head of ‘inner’ affects only the locally scoped variable.

This is a classic example of var hoisting, which should qualify as one of JavaScript’s awful parts.

Don’t do it! Move all your ‘var’ statements to the head of each function, please.


infernu news

In the past month, most of the work on infernu was to stabilize the type system, making decisions about some trade-offs. Here’s a short summary:

  • Colors! I refactored all the pretty printing code to use the ansi-wl-pprint package, which fixes indentation and formatting issues and also adds ANSI colors to the output. One neat thing is that identical type variables get the same color:
    [screenshot: colorized infernu type output]
  • Changed the way constructor functions are treated. Until now, constructor functions were treated at definition site as regular functions. The difference between constructors and non-constructors only happened at “new” calls, where the expected “this” value was forced to be a closed row type. Unfortunately this breaks if you call “new Foo()” inside the definition of “Foo”.
    To avoid this issue, functions with uppercase names are now treated specially and the “this” row-type is forced to be closed when the function name is added to the environment. This allows maximum flexibility while defining Foo, while ensuring Foo’s type is closed outside the constructor to prevent junk code like var x = new Foo(); x.myWrongProperty = 2;
  • Explored the idea of “Maybe” (or “optional”) types, including having common JS APIs use them for stronger safety. For example, array access should return a Maybe value.
    Unfortunately, since there is no syntax to construct a Maybe-typed value (no “Just”), all values can be implicitly “upgraded” to Maybe. In other words, there is an ambiguity that breaks type inference. So for now I’m not implementing Maybe types (perhaps they can come back with explicit annotations).
  • Decided to disable generalization of row fields (object properties). This decision means that user-defined objects will by default not have polymorphic methods, although the object itself could be polymorphic (and thus different occurrences of the object in the program syntax will allow instantiation to different types). The reason for this decision is that overly-polymorphic fields cause unexpected type errors, such as when passing objects that contain them as parameters to functions (contravariance can surprise you).
  • Started a document sketching out denotational semantics of JS, as infernu decides they should be, which helped clarify a few issues in the JS -> LC translator. The next step is to change all translations to preserve semantics; currently they only preserve types.
  • Bug fixes: polymorphic subsumption checking, unification of recursive types.
  • Increased compatibility: now using base-compat and a custom prelude to increase compatibility with different GHC versions (thanks to RyanGlScott for submitting a fix to use base-orphans which prompted me to do this).

A simple problem with recursive types and subtyping

Here’s a simple example of recursive types interacting badly with subtyping:

T = { foo: T -> A }
U = { foo: U -> B }

Suppose T <: U. For the record types to be related, the types of the ‘foo’ fields must be related:

(T -> A) <: (U -> B)

Function subtyping is contravariant in the argument and covariant in the result, so this requires A <: B and:

U <: T

By the symmetric argument, U <: T requires B <: A and T <: U. So T <: U holds only if U <: T also holds, which is true iff A <: B and B <: A.

In my case, the subtyping relation is polymorphic subsumption: T is subsumed by U iff U is “more polymorphic”; intuitively, U can be instantiated to all the types that T can.

This situation arises in rather simple code, involving polymorphic vs. non-polymorphic row fields. For example, A is a row with a polymorphic method, whereas B is a row with a monomorphic (but compatible) method, such as:

A = { method: forall a. a -> () }
B = { method: String -> () }

In this case subsumption (the form of subtyping in play) fails.
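
The direction of subsumption is easy to demonstrate in plain Haskell (a standalone sketch, not infernu code): the polymorphic method can stand in for the monomorphic one, but not the other way around.

{-# LANGUAGE RankNTypes #-}

newtype A = A { methodA :: forall a. a -> () }
newtype B = B { methodB :: String -> () }

-- Fine: the polymorphic method is simply instantiated at String.
aToB :: A -> B
aToB (A f) = B f

-- Rejected by the type checker: a String-only method is not polymorphic.
-- bToA :: B -> A
-- bToA (B f) = A f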

One way around this is to avoid subsumption issues altogether by keeping things rank-1, and not using higher-rank row fields. Unfortunately, throwing away polymorphic methods is very bad: consider a non-polymorphic array.map (in JS).

A slightly better workaround is to push the foralls all the way out, keeping all types (including row types) rank-1. Every time an object method is accessed via the property syntax obj.method, we end up instantiating the object’s row type, and get a “fresh” type for the method. We get practically polymorphic methods. That’s the approach I’m investigating for infernu.
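
Here’s a tiny Haskell illustration of that workaround (again a sketch, not infernu’s implementation): with the forall pushed outside the record, the field itself is rank-1, yet each access instantiates the row afresh, so the method behaves polymorphically in practice.

-- The field is monomorphic in 'a'; the record type as a whole is polymorphic.
data Obj a = Obj { method :: a -> () }

obj :: Obj a
obj = Obj (\_ -> ())

-- Each access instantiates 'a' freshly, so both uses type-check:
usedAtTwoTypes :: ((), ())
usedAtTwoTypes = (method obj (1 :: Int), method obj "str")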
