Computing beyond a million processors

Andrew Brown
Seminar

The human brain remains one of the great frontiers of science: how does this organ, upon which we all depend so critically, actually do its job? A great deal is known about the underlying technology – the neuron – and we can observe large-scale brain activity through techniques such as magnetic resonance imaging, but this knowledge barely starts to tell us how the brain works. Something is happening at the intermediate levels of processing that we have yet to begin to understand, yet the essence of the brain's information-processing function probably lies in these intermediate levels. Getting at these middle layers requires building models of very large systems of spiking neurons, with structures inspired by the increasingly detailed findings of neuroscience, in order to investigate the emergent behaviours, adaptability and fault tolerance of those systems.
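
To give a flavour of what a "spiking neuron" model looks like in code, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest abstractions used in large-scale simulations of this kind. This is an illustration on our part, not a description of the models used in the talk, and the parameter values are arbitrary placeholders:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameter values are illustrative placeholders, not tuned to any real model.
TAU_M = 20.0      # membrane time constant (ms)
V_REST = -65.0    # resting potential (mV)
V_THRESH = -50.0  # spike threshold (mV)
V_RESET = -65.0   # potential immediately after a spike (mV)
DT = 0.1          # integration time step (ms)

def simulate_lif(input_current, n_steps):
    """Integrate one neuron's membrane potential and record its spike times (ms)."""
    v = V_REST
    spike_times = []
    for step in range(n_steps):
        # Leak toward the resting potential, driven by a constant input current.
        dv = (-(v - V_REST) + input_current) / TAU_M
        v += dv * DT
        if v >= V_THRESH:
            spike_times.append(step * DT)  # the neuron fires
            v = V_RESET                    # and resets
    return spike_times

print(simulate_lif(input_current=20.0, n_steps=10_000))
```

A large-scale model of the kind described above consists of very many such elements, coupled by spikes travelling along synaptic connections; it is this communication pattern that a million-core machine is designed to support.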

What has changed, and why could we not do this ten years ago? Simply put, it is now possible to build an ensemble of a million cores in a university research environment, something that was impossible a decade ago.

Biological inspiration draws us to two parallel, synergistic directions of enquiry:

•    How can massively parallel computing resources accelerate our understanding of brain function?
•    How can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computation?

We start from the following question: what will happen when processors become so cheap that there is, in effect, an unlimited supply of them? The goal then becomes to get the job done as quickly and/or as energy-efficiently as possible, bringing as many processors into play as is useful; this may well mean that a significant number of processors perform identical calculations, or indeed nothing at all, because they are a free resource.

One significant outcome of the research to date is the recognition that a machine with the architecture outlined above can do much, much more than we had originally anticipated. Any problem that can be represented as a set, or grid, of interacting elements, be they biological neurons or abstract mathematical entities, can benefit enormously from the computational power that a million processors can bring to bear. Complex system models and molecular dynamics, application domains traditionally the territory of (extremely expensive) high-performance computer systems, are proving amenable to the capabilities of this machine. We call this new domain atomic computing, a name indicative of the small scale of the individual processing elements that combine to produce such powerful effects.
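
To make "a set, or grid, of interacting elements" concrete, the sketch below shows a simple grid in which every element is repeatedly updated from the states of its nearest neighbours (a diffusion-style relaxation). This is an illustration under our own assumptions, not the machine's actual programming model; on a million-core machine each core might own a small patch of such a grid and exchange only edge values with neighbouring cores:

```python
import numpy as np

def step(grid, coupling=0.2):
    """One synchronous update: each element relaxes toward the mean of its neighbours."""
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    return grid + coupling * (up + down + left + right - 4 * grid)

grid = np.zeros((64, 64))
grid[32, 32] = 1.0           # a single perturbation in the middle
for _ in range(100):
    grid = step(grid)
print(grid.sum())            # the update conserves the total
```

Whether the elements are neurons exchanging spikes, particles exchanging forces, or cells of an abstract model exchanging state, the computational pattern is the same: many small, local updates proceeding in parallel, which is exactly what a very large number of modest processors can supply.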