READ ME File For 'Enabling ImageNet-Scale Deep Learning on MCUs for Accurate and Efficient Inference'

ReadMe Author: SULAIMAN SADIQ, University of Southampton

This dataset supports the publication:
AUTHORS: Sulaiman Sadiq, Jonathon Hare, Partha Maji, Simon Craske, Geoff Merrett
TITLE: Enabling ImageNet-Scale Deep Learning on MCUs for Accurate and Efficient Inference
JOURNAL: IEEE Internet of Things Journal
DOI: https://doi.org/10.5258/SOTON/D2850
CODE: https://github.com/sulaimansadiq/TinyOps

This dataset contains the following files (a short example of loading one of them is sketched at the end of this file):

---- Fig-1 ----
Data to make the bar chart at the top of Figure 1. The figure shows how models from the TinyOps design space achieve higher accuracy, lower latency and better energy efficiency, despite higher power consumption, than models from the internal memory design space.
Filename: 'fig1.csv'
Data: Accuracy, latency, power and energy of models from the internal and TinyOps design spaces.

---- Fig-3 ----
Data to make the graph in Figure 3, which plots the Accuracy-MACs and Accuracy-Latency Pareto frontiers.
Filename: 'fig3.csv'
Data: The accuracy and MACs of models used to generate the Accuracy-MACs Pareto frontier. Additionally, the accuracy and latency of models when deployed with internal memory, external memory or TinyOps, used to generate the Accuracy-Latency Pareto frontier.

---- Fig-5b ----
Data to create Figure 5b, which plots the latency of operations of varying complexity, including 3x3, 5x5 and 7x7 DepConv and Conv operations.
Filename: 'fig5b.csv'
Data: The latency, MACs and parameters of different operations, including input/output resolution, input/output channels, stride and kernel size.

---- Fig-6 ----
Data to create Figure 6, which plots a bar chart of model MACs and latencies when complexity is reduced with either width or resolution.
Filename: 'fig6.csv'
Data: The latency, MACs and parameters of the different operations that comprise the models, including input/output resolution, input/output channels, stride and kernel size.

---- Fig-9 ----
Data to create Figure 9, which plots a bar chart demonstrating how latency can be reduced on different platforms with varying memory constraints.
Filename: 'fig6.csv'
Data: The latency and memory usage for deployment of a model on the F469 and F746.

---- Fig-10 ----
Data for Figure 10, a bar chart showing how overlaying different data of the model reduces latency and how much memory is consumed.
Filename: 'fig10.csv'
Data: The latency reduction and memory used by overlaying tiny tensors, filters, biases and quantisation parameters.

---- Fig-11 ----
Data for Figure 11, bar charts showing the MAC, parameter and latency distributions of different models.
Filename: 'fig11.csv'
Data: The latency, parameters and MACs of the different operations that comprise the models, compared against each other.

---- Table-1 ----
Data in Table 1, showing the parameters of the different devices used in our experimental setup.
Filename: 'table1.csv'
Data: The parameters of each device, including chip name, architecture, clock, D-cache and internal and external memory configuration.

---- Table-2 ----
Data in Table 2, comparing the accuracy, latency, power and energy efficiency of models derived from the internal and TinyOps design spaces.
Filename: 'table2.csv'
Data: The model names, parameters and the design space they were derived from, along with the device they were deployed to. Additionally, deployment performance including accuracy, latency, power and energy per inference.
---- Table-3 ----
Data in Table 3, measuring the power consumption of different memory configurations.
Filename: 'table3.csv'
Data: The power consumption and latency of deploying the same model under different memory configurations.

Date of data collection: 2023

Licence: CC BY

Related projects:
This work was supported by the UK Research and Innovation (UKRI) Centre for Doctoral Training in Machine Intelligence for Nano-electronic Devices and Systems [EP/S024298/1], the Engineering and Physical Sciences Research Council (EPSRC) International Centre for Spatial Computational Learning [EP/S030069/1] and ARM Limited. The authors also acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services at the University of Southampton, in the completion of this work. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.

Date that the file was created: Nov, 2023
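
---- Example usage ----
The CSV files can be read with any standard tooling. Below is a minimal sketch in Python using pandas and matplotlib for 'fig1.csv'; the column names 'Model' and 'Accuracy' are illustrative assumptions, not a specification of the file, so check the printed header against the actual file before plotting.

    # Minimal sketch for loading one CSV from this dataset.
    # Column names ('Model', 'Accuracy') are assumed for illustration;
    # inspect df.columns to see the real header of the file.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv('fig1.csv')
    print(df.columns.tolist())  # check the actual column names first

    # Example: bar chart of accuracy per model, in the spirit of Figure 1.
    df.plot(x='Model', y='Accuracy', kind='bar', legend=False)
    plt.ylabel('Accuracy (%)')
    plt.tight_layout()
    plt.show()

The same pattern applies to the other fig*.csv and table*.csv files; only the column selection and chart type change per figure or table.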