In the NYU Future Reality Lab we spend much of our time thinking about the near future, because that’s what we can actually prototype and then study. But what about the far future — not five years from now but perhaps fifty years or more from now?
One thing I’ve been thinking about recently that falls into the second category is a direct brain interface for vision. The occipital lobe of our brain’s neocortex contains far more neurons than we actually use in vision.
Human vision is currently limited not by the brain itself, but by a bottleneck at input: we can only see whatever light impinges on our retina and then gets carried back to the occipital lobe through the optic nerve.
So, for example, we only see three bands of color. But that’s not a limitation of the brain, only of the retina.
To give another example, our brain reconstructs 3D from a stereo pair of 2D images. Yet from the perspective of the brain itself, it might be perfectly reasonable to directly receive volumetric data, or even four-dimensional data.
We really don’t know the limitations of what sorts of input our brain is capable of interpreting. But we do know that the brain has a seemingly limitless ability, given enough time, to learn and grow in response to enormously different kinds of stimuli.
So in the far future, given the ability to directly stimulate the brain, humans might end up having a kind of “super vision” — an ability to see and comprehend things far beyond what we can now even imagine. Children will get implants at a very early age, and their consequent ability to see (and potentially to hear) in profoundly enhanced ways will simply be considered normal.
People in that far future will look back on this time and wonder how we primitive humans ever got along with such impoverished sensory ability — only three primary colors and the paltry ability to see two flat images in front of our heads.