A Google Doodle Game published on Dec 4, 2017 in celebration of 'kids coding' looks like this:
The object of this game is to arrange the instruction blocks (
turn-left, etc.) in a sequence such that the bunny following the instructions
can eat all the carrots.
Unsurprisingly, you only find out whether your program works after you press the play button. Why is that? Why is there no immediate feedback about what the bunny will do as you build the sequence? Is it not feasible? Of course it is! The real reason there is no feedback is that this is not how we think about programming.
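To see just how feasible it is, here is a minimal sketch of the game's mechanics. The block names, grid coordinates, and starting direction are my own assumptions, not the Doodle's actual format:

```python
# Hypothetical sketch of the Doodle's mechanics: block names, grid
# coordinates, and the starting direction are assumptions of mine.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W as (row, col) deltas

def simulate(program, start, carrots):
    """Run the blocks one by one, recording the bunny's state after each.

    Returning every intermediate state is the point: a UI could show
    the bunny's path live, while the sequence is still being built.
    """
    (r, c), facing = start, 1          # start facing east
    remaining = set(carrots)
    states = []
    for block in program:
        if block == "move-forward":
            dr, dc = DIRS[facing]
            r, c = r + dr, c + dc
            remaining.discard((r, c))  # eat a carrot if we land on one
        elif block == "turn-left":
            facing = (facing - 1) % 4
        states.append(((r, c), len(remaining)))
    return states
```

Feedback after each block costs a dictionary lookup and a set operation; nothing about the game itself forces the play-button workflow.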
The 'usual business' of programming
Correctly predicting what the computer will do, given some source code, is considered the usual business of programming. This involves thinking about things like:
- how a function, line by line, will transform specific values passing through it
- what a generic class will become when fused with a concrete data type
- how specific modifications will change some runtime behavior
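To feel the weight of the first item, consider a tiny made-up example. Answering "what does this return for `[1, 2, 3, 4]`?" means replaying the loop in your head, value by value:

```python
def keep_even_squares(xs):
    """Collect the square of every even element, in order."""
    out = []
    for x in xs:
        if x % 2 == 0:
            out.append(x * x)
    return out

# What does keep_even_squares([1, 2, 3, 4]) return? To answer, you
# trace the loop mentally: skip 1, square 2, skip 3, square 4.
# → [4, 16]
```

The computer could trivially show this trace for any input as you type, yet by convention it is the reader's job to compute it.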
Let's look at this a little deeper: we write code as text, but we think hard about all the possible effects it will have in the running system, after a series of transformations.
In other words, we simulate the computer in our head, while sitting in front of a computer - a powerful simulation machine. Who was supposed to simulate whom, again?
The irony is deep here, but I completely missed it for the majority of my programming lifetime. The joy of programming was largely the joy of puzzle solving - the correct solution being the accurate prediction of the behavior of some code. This fun kept me from seeing the real purpose of programming - it's not making and solving puzzles, however elegant or clever - it's something else.
The idea that a program is a description to be mentally simulated is deeply entrenched -- but is it a good idea?
Batch oriented or interactive?
Contrast the 'batch oriented' programming workflow used by the Google game above with the interactive experience in Joy JS.
Here, there is immediate visual feedback as you modify your program. If you've ever used the 'developer tools' tab in a web browser to modify some code and see the effect live, you're doing this live interactive programming as well. Taking this idea further, we can manipulate the effect directly, and have the program update at the appropriate places.
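One way to picture that last step - manipulating the effect and having the program update - is to treat the program's tweakable literals as shared state that both the source editor and the rendered output can write to. A minimal sketch, with all names hypothetical:

```python
# Hypothetical sketch: a 'program' reduced to its tweakable literals.
program = {"radius": 10, "color": "green"}

def render(p):
    # the effect: a textual stand-in for an actual drawing
    return f"circle(r={p['radius']}, color={p['color']})"

def drag_resize_handle(p, new_radius):
    # direct manipulation: editing the drawing writes the new value
    # back into the program, so source and effect never diverge
    p["radius"] = new_radius
    return render(p)
```

Dragging a handle on the circle and editing the literal `10` in the source become two views of the same write.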
The interesting question is how far we can take this idea. Will it work only for little toy programs? Is it only applicable where the final effect is a drawing?
I believe what's holding us back is the choice of core constructs in the design of programming languages and systems (the 'systems' is important here because you want the effect simulation to span all parts of the system, across all 'processes', not just within one 'program'). Our systems and languages are not designed to be interactively evaluated and explored, but rather to be batch processed by a compiler. We're very far from designing or thinking about systems that can interactively show us the effects of any program modification. Even the notion of a 'programming language' that needs to be fed through a compiler is rooted in batch style programming:
1. Write a large piece of text - the 'program' - while mentally simulating the code
2. Submit it to the computer
3. Evaluate the effect
Is this a good model? Of particular interest is the complexity added in step 3 above ('evaluate the effect') when you consider the combined effect in a system containing two or more OS processes, each with its own programming language and toolchain.
Interestingly, Smalltalk environments from the 70s explored a far more interactive and live model than the batch workflow described above, yet the batch model still predominates today, in spite of the enormous increase in computing power since then.
How much of our mental simulation could be offloaded to the computer, if we choose suitable abstractions to represent systems and programs? Shouldn't computers simulate and visualize, to a very large degree, the interesting effects and behaviors of our programs, as we create them?