Note: This post assumes a little bit (byte) of familiarity with computer science, specifically the binary search tree data structure.
As an undergraduate I collaborated with another student (now graduated) to prototype a few computer science-related sketches in Chalktalk. The binary search tree (BST) was our first target, so we made the following sketch:
There are many operations to implement, and my question was how to activate visualizations for, say, the various traversals (breadth-first search, plus pre-, in-, and post-order).
What I didn’t want was to introduce buttons or sliders that would clutter the interface, or to rely on mouse swipes that had no intuitive meaning.
Instead, I tried creating a sort of visual mnemonic for the traversals. I often find it easier to remember something if I can draw it or trace it in the air:
Glyphs are already recognized as sketches in Chalktalk, so what if I reused that recognition functionality to let glyphs drawn atop the tree be interpreted as commands? I made the following patterns:
The red squiggles / arrows mimic the order in which nodes are visited during a recursive procedure, something like the following pre-order recursive traversal rooted at the provided node (see the in-line comments on lines beginning with //):
#include &lt;stdio.h&gt;

// Minimal node definition, assumed for this example: a string value
// plus pointers to the left and right children.
typedef struct BST_Node {
    char* value;
    struct BST_Node* left;
    struct BST_Node* right;
} BST_Node;

void BST_traversal_preorder(BST_Node* root)
{
    // back-track up the tree if there is no child node
    if (root == NULL) {
        return;
    }
    // print the current node first (arrow starts at the root)
    printf("%s\n", root->value);
    // visit the left sub-tree first (arrow passes through the left child)
    BST_traversal_preorder(root->left);
    // visit the right sub-tree (arrow ends at the right child)
    BST_traversal_preorder(root->right);
}
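The in-order and post-order variants differ only in where the node is printed relative to the two recursive calls. A minimal sketch of both, following the same pattern as above:

// In-order: visit the left sub-tree, print the node, then visit the right sub-tree.
void BST_traversal_inorder(BST_Node* root)
{
    if (root == NULL) {
        return;
    }
    BST_traversal_inorder(root->left);
    printf("%s\n", root->value);
    BST_traversal_inorder(root->right);
}

// Post-order: visit both sub-trees before printing the node itself.
void BST_traversal_postorder(BST_Node* root)
{
    if (root == NULL) {
        return;
    }
    BST_traversal_postorder(root->left);
    BST_traversal_postorder(root->right);
    printf("%s\n", root->value);
}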
For breadth-first search, a zig-zag represents the layer-by-layer sequence through the tree:
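In code, that layer-by-layer order comes from a queue rather than from recursion. Here is a minimal sketch, assuming the same BST_Node struct as above and a fixed-capacity array standing in for a proper queue:

#define MAX_QUEUE 256   // assumed upper bound on the number of nodes

void BST_traversal_levelorder(BST_Node* root)
{
    BST_Node* queue[MAX_QUEUE];
    int head = 0;
    int tail = 0;

    if (root == NULL) {
        return;
    }

    // start at the root (the top of the zig-zag)
    queue[tail++] = root;

    while (head < tail) {
        // dequeue the next node in the current layer and print it
        BST_Node* node = queue[head++];
        printf("%s\n", node->value);

        // enqueue the children so the next layer is visited left-to-right
        if (node->left != NULL) {
            queue[tail++] = node->left;
        }
        if (node->right != NULL) {
            queue[tail++] = node->right;
        }
    }
}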
Here is a video showing the “glyph commands” for the tree traversals:
I’d like to look into these sorts of visual mnemonics more to see what effects they may have on one’s learning ability.
Also, I think it makes sense for strokes to be recognized not only as sketches, but also as other entities (such as sketch-specific commands or sub-worlds within the Chalktalk environment) depending on the context. That idea seems worth exploring further.
Love the visuals for this, but you should also include videos of Chalktalk in action so people can see what it is like. I think it is a revolutionary idea but it needs better marketing! This video of Ken Perlin explaining it on Vimeo is the best I’ve found: https://vimeo.com/232230096 . Though also this one: https://vimeo.com/224597212 and this one: https://www.youtube.com/watch?v=4YnVhTyrYbo . If there is any way I can be of assistance from down in UNC Chapel Hill just let me know. You’re doing great work. 🙂