University of Southampton Institutional Repository

Neuromorphic silicon photonics and hardware-aware deep learning for high-speed inference


Moralis-Pegios, Miltiadis, Mourgias-Alexandris, George, Tsakyridis, Apostolos, Giamougiannis, George, Totovic, Angelina R., Dabos, George, Passalis, Nikolaos, Kirtas, Manos, Rutirawut, Teerapat, Gardes, Frederic, Tefas, Anastasios and Pleros, Nikos (2022) Neuromorphic silicon photonics and hardware-aware deep learning for high-speed inference. IEEE Journal of Lightwave Technology, 40 (10), 3243-3254. (doi:10.1109/JLT.2022.3171831).

Record type: Article

Abstract

The relentless growth of Artificial Intelligence (AI) workloads has fueled the drive towards non-von Neumann architectures and custom computing hardware. Neuromorphic photonic engines aspire to synergize the low-power and high-bandwidth credentials of light-based deployments with novel architectures, towards surpassing the computing performance of their electronic counterparts. In this paper, we review recent progress in integrated photonic neuromorphic architectures and analyze the architectural and photonic hardware-based factors that limit their performance. Subsequently, we present our approach towards transforming silicon coherent neuromorphic layouts into high-speed and high-accuracy Deep Learning (DL) engines by combining robust architectures with hardware-aware DL training. Circuit robustness is ensured through a crossbar layout that circumvents the insertion loss and fidelity constraints of state-of-the-art linear optical designs. Concurrently, we employ DL training models adapted to the underlying photonic hardware, incorporating noise and bandwidth limitations together with the supported activation function directly into Neural Network (NN) training. We validate experimentally the high-speed and high-accuracy advantages of hardware-aware DL models when combined with robust architectures through a silicon photonic (SiPho) prototype implementing a single column of a 4:4 photonic crossbar. This was utilized as the penultimate hidden layer of an NN, revealing up to 5.93% accuracy improvement at 5GMAC/sec/axon when noise-aware training is enforced and allowing accuracies of 99.15% and 79.8% for the MNIST and CIFAR-10 classification tasks. Channel-aware training was then demonstrated by integrating the frequency response of the photonic hardware in NN training, with its experimental validation on the MNIST dataset revealing an accuracy increase of 12.93% at a record-high rate of 25GMAC/sec/axon.
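The noise-aware training idea summarized above, injecting the analog hardware's noise into the forward pass during training so that the learned weights tolerate it at inference time, can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: the layer size, the noise model (additive Gaussian), and the `sigma` value are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_linear(x, W, sigma=0.05, training=True):
    """One noise-aware layer. The matrix-vector product stands in for the
    photonic crossbar's analog MAC operations. During training, additive
    Gaussian noise (sigma is a hypothetical noise level, not a value from
    the paper) models the limited SNR of the optical hardware; at
    inference the clean product is returned."""
    y = x @ W
    if training:
        y = y + rng.normal(0.0, sigma, size=y.shape)
    return y

# Toy usage: a 4-input layer, loosely mirroring one column of a 4:4 crossbar.
x = np.array([0.2, 0.5, 0.1, 0.7])
W = rng.normal(0.0, 1.0, size=(4, 4))
y_train = noisy_linear(x, W)                  # noisy forward pass (training)
y_infer = noisy_linear(x, W, training=False)  # clean forward pass (inference)
print(y_train.shape, y_infer.shape)
```

In a full training loop, gradients computed through such noisy forward passes push the network towards weight configurations whose accuracy degrades gracefully under the hardware's actual noise floor; channel-aware training extends the same idea by filtering activations with the hardware's measured frequency response.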

Text: JLT_Neuromorphic Silicon Photonics and Hardware-aware Deep Learning for High-Speed Inference (Accepted Manuscript). Restricted to registered users only. Download (3MB) or request a copy.
Text: Published version (Version of Record). Restricted to repository staff only. Request a copy.

More information

Accepted/In Press date: 8 April 2022
e-pub ahead of print date: 3 May 2022
Published date: 15 May 2022
Additional Information: Funding Information: This work was supported in part by the EC through the H2020 project PLASMONIAC under Grant 871391 and in part by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the First Call for H.F.R.I. Research Projects to support Faculty Members and Researchers and the Procurement of High-cost Research Equipment under Project 4233, project acronym DeepLight. Publisher Copyright: © 2022 IEEE.
Keywords: Neural networks, neuromorphic computing, neuromorphic photonics, optical neural network accelerators

Identifiers

Local EPrints ID: 457195
URI: http://eprints.soton.ac.uk/id/eprint/457195
ISSN: 0733-8724
PURE UUID: 371b1a0b-33e5-47e6-9b0b-4fa87493f44e
ORCID for Frederic Gardes: orcid.org/0000-0003-1400-3272

Catalogue record

Date deposited: 26 May 2022 16:36
Last modified: 17 Mar 2024 03:26

Contributors

Author: Miltiadis Moralis-Pegios
Author: George Mourgias-Alexandris
Author: Apostolos Tsakyridis
Author: George Giamougiannis
Author: Angelina R. Totovic
Author: George Dabos
Author: Nikolaos Passalis
Author: Manos Kirtas
Author: Teerapat Rutirawut
Author: Frederic Gardes (ORCID: orcid.org/0000-0003-1400-3272)
Author: Anastasios Tefas
Author: Nikos Pleros
