READ ME File For 'Dataset for Exploring sequence transformation in magnetic resonance imaging via deep learning using data from a single asymptomatic patient'

Dataset DOI: 10.5258/SOTON/D1634

ReadMe Author: James A Grant-Jacob, University of Southampton, https://orcid.org/0000-0002-4270-4247

This dataset supports the publication:
AUTHORS: James A. Grant-Jacob, Chris Everitt, Robert W. Eason, Leonard J. King and Ben Mills
TITLE: Exploring sequence transformation in magnetic resonance imaging via deep learning using data from a single asymptomatic patient
JOURNAL: IOP Journal of Physics Communications
PAPER DOI IF KNOWN:

This dataset contains:

Figure_1.png
Figure_2.png
Figure_3.png
Figure_4.png
Figure_5.png
Figure_6.png
Figure_7.png
Figure_8.png
Figure_9.png
Figure_10.png
Figure_11.png
Figure_12.png
Figure_13.png
Figure_14.png

Figure_11.txt
Data for the slices through the actual and generated images (solid and dotted lines, respectively) at the positions indicated in red, yellow and green in Figure 11.

Figure_13.txt
Data for the pixel intensity at 3 different x positions in the generated images (positions shown in Figure 13(e)) as a function of window width in pixels in x. The actual T2 signal intensity at these positions is indicated by the dashed line. (A sketch for loading these text files is given at the end of this README.)

Table1.txt
Data for the NRMSE, PSNR and SSIM of the images generated by the three trained neural networks. (A sketch for computing these metrics is given at the end of this README.)

Table2.txt
NRMSE, PSNR and SSIM for all the Pix2Pix-generated images in each plane.

The figures are as follows:

Figure 1. Schematic of the concept of using deep learning to transform T1 VIBE sequence images into T2 SPACE sequence images, via training on images from the left hand and testing on images from the right hand to generate T2 SPACE sequence images.

Figure 2. Diagram illustrating the Pix2Pix network architecture.

Figure 3. Diagram illustrating the process for training the Pix2Pix neural network.

Figure 4. Diagram illustrating the CycleGAN network architecture.

Figure 5. Diagram illustrating the process for training the CycleGAN neural network.

Figure 6. Diagram illustrating the UNIT network architecture.

Figure 7. Diagram illustrating the shared latent space used for training the UNIT neural network.

Figure 8. Capability of the trained neural networks for generating a T2 image of the right hand of a single asymptomatic patient, showing the input T1 image (1st column), actual T2 image (2nd column), Pix2Pix-generated T2 image (3rd column), CycleGAN-generated T2 image (4th column) and UNIT-generated T2 image (5th column), for the same coronal view of the centre of the hand. The absolute differences between the generated and actual T2 images are displayed in the 2nd row.

Figure 9. Capability of the trained neural network for generating a T2 image of the right hand, showing T1 images (1st column), actual T2 images (2nd column), generated T2 images (3rd column) and the absolute difference between actual and generated images (4th column), for (a) coronal, (b) sagittal (along the middle finger) and (c) axial planes.

Figure 10. Actual T2 image (left), generated T2 image (middle) and magnified generated T2 image (right), for the metacarpal bone of the thumb. An artefactual grid pattern can be seen on the generated image.

Figure 11. (a) Actual T2 image (left) and generated T2 image (right), and (b) slices through the actual and generated images (solid and dotted lines, respectively) at the positions indicated in red, yellow and green.
Figure 12. Signal intensity dependence analysis of the trained neural network for generating a T2 image of the wrist, showing (a) the T1 input image fed into the network, (b) the actual T2 image, (c) the generated T2 image, (d) a colour-transformed image that uses the most probable data transformation, (e) the transformation histogram of pixel intensity values (0-255) for the actual T1 and actual T2 images and (f) the transformation histogram of pixel intensity values (0-255) for the actual T1 and generated T2 images.

Figure 13. Adjacent pixel dependence analysis of the trained neural network for generating a T2 image of the right hand, showing the input T1 image (top row) and generated T2 image (middle row) for ((a) and (e)) 5-pixel, ((b) and (f)) 25-pixel, ((c) and (g)) 50-pixel and ((d) and (h)) 75-pixel width windows of a sagittal image slice, where the rest of the image outside the window has been set to zero. (i) Pixel intensity at 3 different x positions in the generated images (positions shown in (e)) as a function of window width in pixels in x. The actual T2 signal intensity at these positions is indicated by the dashed line.

Figure 14. Generation of a 3D T2 MRI volume directly from T1 images by the trained neural network, showing (a) eight generated axial T2 image slices along the hand and (b) a combination of all generated axial T2 image slices with a false colour map (blue indicates low signal intensity, yellow indicates high signal intensity) to enhance visual clarity.

Date of data collection: 28/1/2020

Information about geographic location of data collection: University Hospital Southampton NHS Foundation Trust, Tremona Road, Southampton, Hampshire, SO16 6YD, UK

Licence: CC-BY

Related projects:
EPSRC grant EP/N03368X/1
EPSRC grant EP/T026197/1

Date that the file was created: September 2021
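
Example code (Python):

The following is a minimal sketch for loading the plain-text data files (Figure_11.txt and Figure_13.txt). It assumes the files contain whitespace-delimited numeric columns; the exact column layout is not documented here, so inspect the files before relying on any particular column ordering.

import numpy as np

# Slice profiles through the actual and generated images (Figure 11(b)).
# Assumption: whitespace-delimited numeric columns.
fig11_data = np.loadtxt("Figure_11.txt")

# Pixel intensity at 3 x positions vs. window width in x (Figure 13(i)).
fig13_data = np.loadtxt("Figure_13.txt")

print(fig11_data.shape, fig13_data.shape)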
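The NRMSE, PSNR and SSIM values in Table1.txt and Table2.txt compare generated T2 images against the actual T2 images. The sketch below shows one common way to compute these three metrics with scikit-image for a single image pair; the file names are hypothetical and this is an illustration, not the authors' exact evaluation code.

import numpy as np
from skimage.io import imread
from skimage.metrics import (normalized_root_mse,
                             peak_signal_noise_ratio,
                             structural_similarity)

# Hypothetical file names for one actual/generated T2 image pair.
actual = imread("actual_T2.png", as_gray=True).astype(np.float64)
generated = imread("generated_T2.png", as_gray=True).astype(np.float64)

# Use the intensity range of the actual image as the data range.
data_range = actual.max() - actual.min()

nrmse = normalized_root_mse(actual, generated)
psnr = peak_signal_noise_ratio(actual, generated, data_range=data_range)
ssim = structural_similarity(actual, generated, data_range=data_range)

print(f"NRMSE: {nrmse:.4f}, PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")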