Atomic computing - a different perspective on massively parallel problems
Reeve, Jeffrey, Brown, Andrew, Mills, Rob, Dugan, Kier and Furber, Steve (2013) Atomic computing - a different perspective on massively parallel problems. International Conference on Parallel Computing - ParCo 13, Munich, Germany, 10 - 13 Sep 2013. pp. 334-343. (doi:10.3233/978-1-61499-381-0-334).
Record type: Conference or Workshop Item (Paper)
Abstract
As the size of parallel computing systems inexorably increases, the proportion of resource consumption (design effort, operating power, communication and calculation latency) absorbed by ‘non-computing’ tasks (communication and housekeeping) increases disproportionally. The SpiNNaker (Spiking neural net architecture) engine [1,2] sidesteps many of these issues with a novel architectural model: it is an isotropic ‘mesh’ of (ARM9) cores, connected via a hardware communication network. The topology allows uniform scalability up to a hard limit of just over a million cores, and the communications network – hardware handling packets of 72 bits – achieves a bisection bandwidth of 5 billion packets/s. The state of the machine is maintained in over 8TB of 32-bit memory, physically distributed throughout the system. There is no central processing ‘overseer’ or synchronised clock. This paper discusses opportunities and challenges in applying the SpiNNaker architecture, within neural simulation and beyond.
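The event-driven, packet-based model described in the abstract can be illustrated with a small, purely hypothetical sketch in C. The split of the 72-bit packet into an 8-bit control byte, a 32-bit routing key and an optional 32-bit payload is an assumption drawn from published SpiNNaker documentation; the abstract itself only states the 72-bit total, and the handler below is an illustration, not the SpiNNaker runtime API.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative layout of one 72-bit SpiNNaker-style packet.
 * The 8/32/32 split is an assumption based on published SpiNNaker
 * documentation; the abstract only states the 72-bit total. */
typedef struct {
    uint8_t  control;   /* packet type and routing flags                   */
    uint32_t key;       /* routing key identifying the source event/neuron */
    uint32_t payload;   /* optional data word, present when flagged        */
} spin_packet_t;        /* 8 + 32 + 32 = 72 bits of information            */

/* Hypothetical handler: with no central overseer or synchronised clock,
 * each core simply reacts when the hardware router delivers a packet. */
static void on_packet(const spin_packet_t *p)
{
    printf("key=0x%08" PRIx32 " payload=0x%08" PRIx32 " control=0x%02x\n",
           p->key, p->payload, (unsigned)p->control);
}

int main(void)
{
    spin_packet_t p = { .control = 0x01, .key = 0x1234u, .payload = 42u };
    on_packet(&p);      /* stand-in for a router-triggered interrupt */
    return 0;
}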
Text: paperV1.doc (Accepted Manuscript)
More information
e-pub ahead of print date: 10 September 2013
Venue - Dates: International Conference on Parallel Computing - ParCo 13, Munich, Germany, 2013-09-10 - 2013-09-13
Organisations: EEE
Identifiers
Local EPrints ID: 401053
URI: http://eprints.soton.ac.uk/id/eprint/401053
PURE UUID: 2184ef43-c177-4c43-a09d-22eb2a7030cb
Catalogue record
Date deposited: 04 Oct 2016 14:18
Last modified: 15 Mar 2024 02:37
Contributors
Author: Jeffrey Reeve
Author: Andrew Brown
Author: Rob Mills
Author: Kier Dugan
Author: Steve Furber