December 27, 2010

Paper of the Week (Computational Physics of Film)


The first official paper of the week is "Computational Physics of Film," published in this week's Science.

This is a really good overview of the latest and greatest computer graphics research as applied to virtual worlds and film. Have you ever wondered how a glass can unbreak itself? Or how Neo was able to dodge bullets in "The Matrix"? While some may attribute it to movie "magic" (e.g. hocus pocus), now you can learn the technical details in one easy package. Virtual world physics is an area with a lot of potential, and this review provides only a glimpse into current research.

P.S. If you are interested in the generation of virtual sound, take a look at the work of Douglas James at Cornell.

December 23, 2010

Recursive algorithmism: a religion for the other .005 of us

In light of the recently released Tron sequel, I decided to start a new religion called Recursive Algorithmism.

Just check out these testimonials:

"One of the great things about our religion is that repentance takes the form of a for loop." Anonymous.

One of our many planned churches (image to scale).

"In the year 1954, brother Turing traversed a strict hierarchy for 40 days and 40 nights. After he reached the tip of the tree branch, he looked out over a sparse array and saw a lone pixel with a value of '1'. This is the rock of our church." Anonymous.

"And on the eighth clock cycle, the program halted. Holy is that instance." Anonymous.

"We give the phrase 'Ghost in the Machine' a whole new meaning." Anonymous.

Potential Pixellated Practitioner/Proselytizer

December 22, 2010

Surrealism of the Month II

The definition of surrealism is... a "Butthole Surfers" album for the masses. Really, take your pick (the VH1 airtime [1] or the suggested Lady Gaga video viewing [2]) on this one...

December 11, 2010

Jeff Hawkins, HTM, and "intelligence"

Jeff Hawkins (the theoretical neuroscientist and mobile computing pioneer) recently gave a lecture at the Beckman Institute on his work at Numenta on Hierarchical Temporal Memory (HTM). The actual title: "Advances in Modeling Neocortex and Its Impact on Machine Intelligence". A video of the lecture can be viewed here.

Basically, Jeff is proposing a new paradigm for thinking about brains and technology. With the advent of "soft" computing techniques (e.g. evolutionary algorithms, neural networks), bio-inspired software, and new techniques to peer into the brain (e.g. fMRI, EEG, and fNIR), we need a new way both to produce machine intelligence and to theoretically understand what is going on in the brain. The fact that he makes this link, and has been interested in it for most of his career, automatically makes me a fan.

Yet while I like Jeff Hawkins (I basically bought into the argument he laid out in "On Intelligence"), I do not agree with some of the details featured in this talk (although the work is technically impressive and correct). Mainly, I question the idea that the neocortex (the six-layered tissue responsible for much of mammalian higher cognition, more properly called isocortex) is computationally powerful simply because it has a repetitive structure.

I have encountered this idea in a number of computational neuroscience papers. My objection is to the idea that repetitive structures are limited to the neocortex, and that the neocortex alone defines intelligence. This is incorrect on two counts:

1) There are other structures (the cerebellum, parts of the medial temporal lobe) that also exhibit repetition, and these structures do produce intelligent behavior: the cerebellum is known for movement and other behavioral regulation, while the medial temporal lobe is involved in memory consolidation and spatial navigation. The problem is that Hawkins all too often equates repetition of structure with pattern recognition and predictive capacity. That may work when running HTM simulations, but is it biologically accurate and ultimately robust? While the equation certainly holds for visual cortex, it does not hold for all neocortical regions. Other attributes, such as convergence and higher-order feedback, exploit this repetitive, hierarchical structure yet neither require nor preclude pattern recognition.

2) Birds use a pallium-derived structure to generate intelligent behavior. While one could argue that this structure is also hierarchical (it is certainly layered), it does not share many of the design principles found in mammalian neocortex. The neural substrate of insects, which can likewise generate complex behaviors, is also not equivalent to the mammalian neocortex. While hierarchical processing may exist in avian pallium and in insect neuropil/ganglia networks, it may or may not be consistent with Hawkins' HTM.

The other problem I have with current artificial intelligence research (and machine learning in general) is the focus on pattern recognition. While pattern recognition may be a necessary condition for intelligence, it is not the only hallmark of intelligence. To his credit, Hawkins argues that prediction is actually the hallmark of intelligent behavior. This is much more powerful than blind pattern recognition, which can produce a lot of false positives (e.g. seeing an image of the Virgin Mary on the side of a barn). The ability to predict upcoming events in the environment may define intelligence not only in the brain (neuronal populations), but among cell and organismal populations as well.


Yet there may be ways to define intelligent behavior outside the realm of prediction. For several years now (since the early years of my PhD studies), I have been fascinated by sensory integration and signal convergence in the brain. For example, perception of a coffee mug being lifted, brought to the mouth, and set down again involves visual, auditory, and tactile cues -- all of which need to be integrated in the course of producing the seamless conscious experience we all take for granted. There are centers in the brain (e.g. the superior colliculus) in which single neurons will integrate inputs of different sensory types and, depending on how they are weighted, will produce either an additive, suppressive, or superadditive response. The superadditive response is the outcome that has intrigued me the most, as, taken across populations of cells, it could produce a very complex (and fascinating) emergent phenomenon. And, like it or not, this may produce intelligent behavior with no direct connection to prediction or pattern recognition.
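The additive/suppressive/superadditive distinction can be made concrete with a toy classifier. This is a minimal sketch with hypothetical spike counts I made up for illustration; the thresholds follow the spirit of the Meredith and Stein work cited below (combined response compared against the unisensory sum and the best unisensory response), not any particular published criterion.

```python
def classify_integration(visual, auditory, combined):
    """Classify a neuron's multisensory response.

    Compares the response to combined stimulation against the sum of
    the unisensory responses: superadditive responses exceed the sum,
    suppressive responses fall below the strongest single modality,
    and everything in between counts as additive.
    """
    unisensory_sum = visual + auditory
    best_single = max(visual, auditory)
    if combined > unisensory_sum:
        return "superadditive"
    if combined < best_single:
        return "suppressive"
    return "additive"

# Hypothetical spike counts for one superior colliculus neuron:
print(classify_integration(4, 3, 12))  # superadditive
print(classify_integration(4, 3, 7))   # additive
print(classify_integration(4, 3, 2))   # suppressive
```

The interesting regime is the superadditive one: the combined response carries more signal than the parts would predict, which is what makes the population-level emergent behavior plausible.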

Further Reading:
Anastasio, T.J., Patton, P.E., and Belkacem-Boussaid, K. (2000). Using Bayes' rule to model multisensory enhancement in the superior colliculus. Neural Computation, 12, 1165-1187.

Ernst, M.O. and Banks, M.S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429-433.

Floreano, D. and Mattiussi, C. (2008). Bio-inspired Artificial Intelligence. MIT Press, Cambridge, MA.

Hawkins, J. and Blakeslee, S. (2004). On Intelligence. Times Books, New York.

Jehee, J.F.M. and Murre, J.M.J. (2008). The scalable mammalian brain: emergent distributions of glia and neurons. Biological Cybernetics, 98(5), 439-445.

Jarvis, E.D. et al. (2005). Avian brains and a new understanding of vertebrate brain evolution. Nature Reviews Neuroscience, 6(2), 151-159.

Meredith, M.A. and Stein, B.E. (1983). Interactions among converging sensory inputs in the superior colliculus. Science, 221(4608), 389-391.

Richards, W. (1988). Natural Computation. MIT Press, Cambridge, MA.

Shadmehr, R. and Wise, S.P. (2005). Computational Neurobiology of Reaching and Pointing. MIT Press, Cambridge, MA.

Shasha, D.E. and Lazere, C. (2010). Natural computing: DNA, quantum bits, and the future of smart machines. W.W. Norton, New York.

Stein, B.E. and Meredith, M.A. (1993). The merging of the senses. MIT Press, Cambridge, MA.

Stein, B.E. (1998). Neuronal mechanisms for synthesizing sensory information and producing adaptive behaviors. Experimental Brain Research, 123, 124-125.

Strausfeld, N.J. et al. (1998). Evolution, Discovery, and Interpretations of Arthropod Mushroom Bodies. Learning and Memory, 5, 11-37.

December 7, 2010

Aliasing vs. higher-dimensionality: a general question

I just got into the show "Time Warp" (a Discovery Channel creation) on DVD. The basic idea is that a scientist and a high-speed camera expert get together and film various processes, such as a Mentos and Coke explosion or putting things in a blender. The interesting part is when they film it at 1000 frames per second, and then play it back in super slow motion.

Profile of "Time Warp" on Wikipedia

This got me thinking about our understanding of everyday processes. For example, in "Time Warp", the extra fast video allows us to see "hidden" aspects of a process. Cracking an egg and capturing it at 2000 Hz reveals some interesting dynamics indeed.

Egg cracking at 2000 fps

There is a rich history in the biomechanics community of recording motion (either with motion sensors or video) at high sampling rates. These high sampling rates have become possible with advances in technology; the ability to record at 1,000 (or even 10,000) frames per second is becoming increasingly cheap and portable.

Also keep in mind that there exists a concept called aliasing, which places some constraints on how we sample a given process.

Definition of aliasing from Wikipedia

The most relevant aspect of aliasing to this discussion is that undersampling a process can distort it. On the Wikipedia page above, the author has provided some examples of aliased images. A more intuitive version of aliasing: if you were to put a marker on a bicycle wheel and spin it at high speed, the marker would appear to first hover in place, and then drift in the reverse direction of the spin.
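The bicycle-wheel illusion can be worked out with a few lines of arithmetic. A periodic process at f Hz, sampled at fs Hz, is indistinguishable from its alias folded into the band from 0 to fs/2 (the Nyquist band). A minimal sketch:

```python
def apparent_frequency(f_signal, f_sample):
    """Frequency an observer infers from regular samples of a pure tone.

    A periodic process at f_signal Hz, sampled at f_sample Hz, is
    indistinguishable from its alias folded into [0, f_sample / 2].
    """
    folded = f_signal % f_sample
    return min(folded, f_sample - folded)

# A wheel marker passing the camera 9 times per second, filmed at
# 10 frames per second, appears to creep along at only 1 Hz.
print(apparent_frequency(9.0, 10.0))       # 1.0
# A 2000 Hz vibration filmed at 1000 fps appears frozen in place.
print(apparent_frequency(2000.0, 1000.0))  # 0.0
# Sampling far above the Nyquist rate captures the true frequency.
print(apparent_frequency(3.0, 1000.0))     # 3.0
```

This is why the high-speed footage works: at 1000 or 2000 frames per second, the interesting dynamics of an egg cracking sit comfortably below the Nyquist limit.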

So I wonder: does observation at ultra-high speeds (an experimental camera exists that can capture motion at 1,000,000 frames per second) reveal new, higher-order modes of the process, or does it lead to aliasing at some point? For example, in arm motion, there are higher-order derivatives of position called jerk, snap, crackle, and pop. Can we capture higher-order motion such as this just by implementing higher-resolution measurement devices, or is there an upper limit to our observational ability?
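One reason to suspect an upper limit: each numerical differentiation of a sampled signal amplifies measurement noise by roughly a factor of 1/dt, so estimates of jerk and beyond can drown in noise long before the camera runs out of frames. A sketch with a hypothetical 1000 Hz position recording (the motion, noise level, and sampling rate here are made up for illustration):

```python
import random
import statistics

def derivative(samples, dt):
    # Forward-difference estimate of the derivative of a sampled signal.
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

random.seed(42)
dt = 0.001                     # 1000 frames per second
sigma = 1e-6                   # one micron of measurement noise
t = [i * dt for i in range(2000)]
pos = [x * x + random.gauss(0.0, sigma) for x in t]  # true x(t) = t^2

vel = derivative(pos, dt)      # true velocity: 2t (recovered well)
acc = derivative(vel, dt)      # true acceleration: 2 (noise comparable)
jerk = derivative(acc, dt)     # true jerk: 0 (estimate is pure noise)

# Noise grows by ~1/dt per differentiation: by the third derivative
# it is thousands of times larger than at the second.
print(statistics.stdev(acc), statistics.stdev(jerk))
```

So even before aliasing enters the picture, sensor noise sets a practical ceiling on how many derivatives of a motion we can observe, regardless of frame rate.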
