Cocoa for Scientists (Part XXVII): Getting Closure with Objective-C


Last week, Chris Lattner — who manages the Clang, LLVM, and GCC groups at Apple — announced that work was well underway to bring ‘blocks’ to the GCC and Clang compilers. ‘So what?’, I hear you ask, ‘My kid has been using blocks since he was 9 months old.’ Fair point, but maybe not these blocks.

A Demonstration of ‘Blocks’

Blocks, or closures as they are often called, have existed in other languages for quite some time. Ruby, for instance, is famous for them. They also exist in Python, which I’ll use here to demonstrate the principle.

Take this Python code:

def EvalFuncOnGrid(f, forceConst):
    for i in range(5):
        x = i*0.1
        print(x, f(forceConst, x))

def QuadraticFunc(forceConst, x): 
    return 0.5 * forceConst * x * x 

def Caller():
    forceConst = 3.445
    EvalFuncOnGrid(QuadraticFunc, forceConst)

Caller()

This simple program begins with a call to the Caller function. Caller then invokes EvalFuncOnGrid to evaluate the function passed to it, in this case QuadraticFunc, which represents a simple quadratic. The result is the value of the quadratic function on a grid of points:

0.0 0.0
0.1 0.017225
0.2 0.0689
0.3 0.155025
0.4 0.2756

Unquestionably exciting stuff, but what I want to draw attention to is the extra data that was passed along with the function itself. The QuadraticFunc function takes two arguments: the coordinate (x) and a force constant. This force constant needs to be passed along with the function, because the function itself has no way to store it. This may not seem like a big deal, but suppose we now want to reuse EvalFuncOnGrid to print values of a different type of function, one that has no force constant and instead takes a wave number parameter. Hopefully you can see that passing ‘state’ for the function, in the form of data and parameters, limits the flexibility of our code.
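To make the problem concrete, here is a sketch of what reusing EvalFuncOnGrid for such a wave-number function might look like. (SineFunc and waveNumber are hypothetical names of my own, not from the original example.)

```python
import math

# Hypothetical second function: it needs a wave number rather than
# a force constant as its extra piece of 'state'.
def SineFunc(waveNumber, x):
    return math.sin(waveNumber * x)

def EvalFuncOnGrid(f, param):
    for i in range(5):
        x = i * 0.1
        print(x, f(param, x))

# This only works because SineFunc happens to take exactly one extra
# parameter, just like QuadraticFunc. A function with two extra
# parameters, or none at all, would force us to change EvalFuncOnGrid.
EvalFuncOnGrid(SineFunc, 2.0)
```

Every new shape of ‘state’ threatens to change EvalFuncOnGrid’s argument list, which is exactly the inflexibility we want to avoid.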

One viable solution would be to make QuadraticFunc a class, but that is a bit heavy-handed. Besides, this solution would work for our own functions, but not for built-in functions, or functions from libraries. We need some way to pass state to EvalFuncOnGrid, so that it can use that state when evaluating the function. This is exactly what ‘blocks’ allow us to do.

Here is the Python code rewritten to use a block:

def EvalFuncOnGrid(f):
    for i in range(5):
        x = i*0.1
        print(x, f(x))


def Caller():
    const = 3.445

    def QuadraticFunc(x):
        return 0.5 * const * x * x 

    EvalFuncOnGrid(QuadraticFunc)

Caller()

If you run it, you will find this code produces the same output as before.

So what’s changed? You’ll note that the force constant has been removed from all argument lists, and no reference is made to it at all in EvalFuncOnGrid. This was a key objective: to have EvalFuncOnGrid be completely general, and work with any function. But the force constant must still be there, otherwise how does the quadratic function get evaluated?

You will have noticed that the QuadraticFunc function has been moved inside the Caller function. The effect of this is that QuadraticFunc captures the surrounding scope of Caller; that is, it ‘inherits’ any variables and constants that are set in Caller. Because const is set when QuadraticFunc is defined, QuadraticFunc can still access it later, when it is called from inside EvalFuncOnGrid. This is the essence of blocks: a block is similar to a function argument, with the difference that it carries with it the variables from the scope where it was defined. (In C, blocks achieve this by copying the captured variables; a Python closure instead keeps a reference to its enclosing scope.)
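To see the payoff, here is a sketch that reuses the new one-argument EvalFuncOnGrid, completely unchanged, with a different kind of function. (SineCaller, SineFunc, and waveNumber are hypothetical names of my own.)

```python
import math

def EvalFuncOnGrid(f):
    for i in range(5):
        x = i * 0.1
        print(x, f(x))

def SineCaller():
    waveNumber = 2.0

    # SineFunc closes over waveNumber, so EvalFuncOnGrid never
    # needs to know that this extra parameter even exists.
    def SineFunc(x):
        return math.sin(waveNumber * x)

    EvalFuncOnGrid(SineFunc)

SineCaller()
```

The function being evaluated now carries its own state, and EvalFuncOnGrid stays completely general.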

Blocks in Objective-C

Chris Lattner’s announcement details how blocks will be used in C and Objective-C, and — in essence — it is similar to the Python example above. Here is that example rewritten in the new C syntax:

#include <stdio.h>

void EvalFuncOnGrid( float(^block)(float) ) {
    int i;
    for ( i = 0; i < 5; ++i ) {
        float x = i * 0.1f;
        printf("%f %f\n", x, block(x));
    }
}

void Caller(void) {
    float forceConst = 3.445f;
    EvalFuncOnGrid(^(float x){ return 0.5f * forceConst * x * x; });
}

int main(void) {
    Caller();
    return 0;
}

(I’m not sure if this is 100% correct, because I haven’t tried to compile it yet, but it should at least give you the idea.)

The block syntax in C is very similar to the standard syntax for function pointers, but you use a caret (^) in place of the standard asterisk pointer (*). The block itself looks like a function definition, but is anonymous, and is embedded directly in the argument list. (Note that we named our ‘block’ in Python, but Python does also support anonymous functions.)
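For comparison, Python’s lambda gives a closer analogue of the anonymous, inline C block above. A sketch:

```python
def EvalFuncOnGrid(f):
    for i in range(5):
        x = i * 0.1
        print(x, f(x))

def Caller():
    forceConst = 3.445
    # The lambda is anonymous and appears directly in the argument
    # list, just like the C block; it still captures forceConst.
    EvalFuncOnGrid(lambda x: 0.5 * forceConst * x * x)

Caller()
```

Named or anonymous, the capturing behavior is the same; the lambda form just mirrors the C syntax more closely.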

Inside-Out Programming

Another way to think about closures/blocks is that they allow you to rewrite the inside of functions, such as EvalFuncOnGrid in the example. I like to think of this as ‘inside-out programming’: Traditionally, you call functions from outside, and pass them what they need to get the job done. With blocks, you get to pass in the guts of a function, effectively rewriting it on the fly.
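Python’s built-in sorted function is an everyday example of this style: you pass in the heart of the comparison, and sorted supplies the loop around it. A small sketch:

```python
words = ["banana", "Apple", "cherry"]

# The key function is the 'guts' we inject into sorted; it closes
# over the local flag ignoreCase from this scope.
ignoreCase = True
result = sorted(words, key=lambda w: w.lower() if ignoreCase else w)
print(result)  # ['Apple', 'banana', 'cherry']
```

The sorting loop was written once, by someone else; the closure lets us rewrite its insides on the fly.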

Why Blocks?

Why is all of this important, and why now? Well, as you are undoubtedly aware, there has been a vicious war raging the last few years, and it is only going to get worse before it gets better. That’s right — it’s the War on Multicore.

Our chips no longer get faster, they just get more abundant, like the broomsticks in Disney’s Fantasia. Chipmakers just take existing designs, and chop them in half, and then in half again, and software developers are expected to do something useful with that extra ‘power’.

It turns out that blocks could be a very useful weapon in the War on Multicore, because they allow you to create units of work, each of which has its own copy of the stack, and doesn’t step on any other’s toes as a result. What’s more, you can pass these units around as if they were values, when in actual fact they contain a whole stack of values (pun intended), as well as executable code to perform some operation.

In fact, blocks could be seen as a low-level form of NSOperation. For example, if you are parallelizing a loop, you could easily generate blocks for each of the iterations in the loop, and schedule them to run in parallel, in the same way that NSOperationQueue does this with instances of NSOperation. The advantage of blocks is that they are at a lower level, built into the language, and require much less overhead. Stay tuned, because Apple undoubtedly has some big things planned along these lines in Snow Leopard.
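In Python terms, the loop-parallelizing idea might be sketched with a thread pool from the standard library, with each work item being a closure that carries its own captured state. (concurrent.futures stands in here for what NSOperationQueue does in Cocoa; this is an illustration of the idea, not Apple’s API.)

```python
from concurrent.futures import ThreadPoolExecutor

def Caller():
    forceConst = 3.445

    # Quadratic closes over forceConst, so each scheduled call is a
    # self-contained unit of work, much like a block.
    def Quadratic(x):
        return 0.5 * forceConst * x * x

    xs = [i * 0.1 for i in range(5)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(Quadratic, xs))
    return list(zip(xs, results))

for x, y in Caller():
    print(x, y)
```

Each iteration of the original loop becomes an independent unit of work that can be scheduled wherever a core is free.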

