READ ME File for 'Similarity-aware CNN for Efficient Video Recognition at the Edge'

ReadMe Author: Amin Sabet, University of Southampton

This dataset supports the publication:
AUTHORS: Amin Sabet, Jonathon Hare, Bashir Al-Hashimi, Geoff V. Merrett
TITLE: Similarity-aware CNN for Efficient Video Recognition at the Edge
JOURNAL: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
PAPER DOI IF KNOWN: https://doi.org/10.5258/SOTON/D2067

This dataset contains:
Data for Figure 2, Figure 7, Figure 9, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17.

The figures are as follows:

Figure 2: Average fraction of unchanged pixels across all convolutional layers between consecutive frames when processing Video 1, Video 2 and Video 3.
Figure 7: Energy breakdown of the RS dataflow.
Figure 9: Comparing the mean average precision of video object detection for floating-point precision (FP32), INT8, CBinfer and DeepCache with the SQS implementation of YOLOv3.
Figure 11: Comparing the average required number of MAC operations for video object detection for the floating-point precision (FP32), CBinfer, DeepCache and SRS implementations of YOLOv3.
Figure 12: Energy breakdown of Conv2 for the RS and SRS dataflows.
Figure 13: Energy consumption breakdown of the convolutional layers in Table 3 processed by the row-stationary dataflow (RS) and the similarity-aware dataflow (SRS).
Figure 14: Energy consumption and energy breakdown of a convolutional layer for different levels of similarity between ifmaps. For each convolutional layer, the bars from left to right show the energy breakdown for the RS dataflow.
Figure 15: Trade-off between energy consumption and quantization error of a convolutional layer.
Figure 16: Histogram of similarity between the features of consecutive frames from stationary.
Figure 17: Comparing the energy consumption of similarity-aware YOLO and ResNet to that of conventional YOLO and ResNet.

Licence: CC BY

Related projects: International Centre for Spatial Computation

Date that the file was created: December 2021
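
Note on the similarity metric (illustrative only): Figures 2 and 16 report how similar the inputs/features of consecutive frames are. The exact measurement scripts used for the paper are not included in this dataset; the short Python sketch below only illustrates how a per-layer "unchanged fraction" could be computed from two consecutive feature maps. The function name, the threshold parameter and the random stand-in data are assumptions made for this example, not part of the published data.

import numpy as np

def unchanged_fraction(prev_fmap, curr_fmap, threshold=0.0):
    # Fraction of values whose change between consecutive frames is at most
    # `threshold`; with threshold=0 this counts exactly-unchanged pixels.
    diff = np.abs(curr_fmap - prev_fmap)
    return float(np.mean(diff <= threshold))

# Toy example: random data standing in for the ifmaps of one layer on two
# consecutive frames, where only a small region of the frame changes.
rng = np.random.default_rng(0)
prev = rng.integers(0, 4, size=(64, 56, 56)).astype(np.float32)
curr = prev.copy()
curr[:, :8, :] = rng.integers(0, 4, size=(64, 8, 56)).astype(np.float32)
print("Unchanged fraction: {:.1%}".format(unchanged_fraction(prev, curr)))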