Data for “Modeling adult skeletal stem cell response to laser-machined topographies through deep learning”, published in Tissue & Cell

Authors: Benita S. Mackay,1,* Matthew Praeger,1 James A. Grant-Jacob,1 Janos Kanczler,2 Robert W. Eason,1 Richard O.C. Oreffo2 and Ben Mills1

1 Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom, SO17 1BJ
2 Bone and Joint Research Group, Centre for Human Development, Stem Cells and Regeneration, Institute of Developmental Sciences, Faculty of Medicine, University of Southampton, Southampton, United Kingdom, SO16 6HW

* b.mackay@soton.ac.uk

This dataset and the accompanying article are gold open access under the CC BY 4.0 licence. BM was supported by an EPSRC Early Career Fellowship (EP/N03368X/1). Research funding to RO from the Biotechnology and Biological Sciences Research Council (BB/P017711/1) and the UK Regenerative Medicine Platform Acellular / Smart Materials – 3D Architecture (MR/R015651/1) is gratefully acknowledged.

## List of files and descriptions ##

# Images at original resolution #

Fig 1.png: Adult skeletal stem cell alignment and adhesion parallel to microscale laser-machined lines, imaged with brightfield (a) and fluorescence (b) microscopy, in response to (c) the surface topography, shown as an array depicting laser-machined areas (white) on an otherwise smooth topography (black). This is compared with (d, e) adult skeletal stem cell positioning on (f) the smooth surface without altered topography. Scale bar in (c) applies to (a-f).

Fig 2.png: Method for generating a model of cell response. Step 1: design the topography; step 2: laser machine the topography onto a glass sample; step 3: image cells grown on the machined topography to determine cell response; step 4: process and align the images to the corresponding input topography from step 1. Deep learning can be used to predict the image in step 4 directly from the design in step 1, without the need for steps 2 and 3.

Fig 3.png: Three different laser-machined patterns on a glass coverslip, imaged with a scanning electron microscope (SEM). Pattern 1 (1a-c) has lines machined with a separation of 5 µm: close enough together that the deeper ablated lines, seen as darker grey and black in 2a-c and 3a-c, are not produced; instead, disordered nanoscale features are present within uniform microscale lines. A larger separation of 10 µm (2a-c) or greater (3a-c) creates larger variation in depth, with clearly ablated lines surrounded by disordered nanoscale variation. Areas that were not laser machined are smooth, with virtually no microscale features.

Fig 4.png: The deep neural network W-Net architecture consists of multiple convolutional layers with skip connections between the encoder and decoder sections. The input contains three data channels: topography (a), time (b) and density (c). The output (d) shows the neural network prediction of cell growth (as it would appear under a fluorescence microscope) for the topography, timepoint and randomized cell density seed shown in (a), (b) and (c) respectively. (A minimal sketch of a comparable architecture is given after this file list.)
Fig 5.png: Testing the independence of the time and density channels: an input without laser-machined topography (a) is used to compare network outputs while the other parameters vary, including variation of density at a fixed time point (b-d and g-h) and variation of time point at a fixed density (c, e-g and d, h). The Low, Medium and High labels at the bottom of images (b-h) indicate the cell density input to the network, relative to confluency, while the time points are labelled at the top left. Images (b-d) show how increasing density while time is held static results in different outputs. Images (c, e-g) show how increasing time while density remains unchanged results in outputs that differ both from each other and from (b-d). Image (h) shows the result of both a high time point and a high density. Scale bar in (a) applies to (a-h).

Fig 6.png: Testing the connection between the input topography and density channels: two topographical inputs are used, one of parallel lines (a) and one of crossed parallel lines at right angles (e), to compare network outputs while the time input channel remains unchanged (b-d, f-h). The Low, Medium and High labels at the bottom of images (b-h) indicate the cell density input to the network, relative to confluency, while the unchanged timepoint of Day 0 is labelled at the top left. Scale bar in (a) applies to (a-h).

Fig 7.png: Testing the effect of the input topography alone, with the time and density channels held constant. The input topographies are parallel lines of uniform 25 µm width at varying separation (a, c, e, g) and the outputs are the predicted cell positioning (b, d, f, h). Scale bar in (a) applies to (a-h).

Fig 8.png: Extending the range of topographical parameters input to the network. The network was examined with concentric circles in pair 1, curves in pair 2, filled circles in pair 3 and alphanumeric characters in pair 4. Each pair consists of the topographical input (a) and the network-predicted output (b). Scale bar in 1a applies to all images in this figure.

Fig 9.png: Successful testing of the neural network using cells acquired from an unseen patient, adhered to an unseen topography, for validation. A randomly selected topography (a) is input to the neural network and the predicted cell positioning (b) is output. This is compared with the real cell positioning (c) to produce a comparison figure (d), where blue is the network output, red is the real cell positioning and green marks areas of agreement (a sketch of this overlay composition is given after this file list). Images (e-h) are comparison images of cell positioning, with transparent grey lines showing the laser-machined areas of the input topography. (e-g) are statistically significant, where (e, f) P < 0.01, shown with **, and (g) P < 0.001, shown with ***. Scale bar in (a) applies to (a-h).

Fig 10.png: A series of network-predicted (a) and experimentally imaged (b) pairs. The top row, pairs 1 and 2, is for a cell density of virtually zero, and the bottom row, pairs 3 and 4, is for an area with no topographical patterning. The central row is a copy of pairs 1 and 2 with enhanced contrast and brightness for visual clarity. Scale bar in 1a applies to all images in this figure.

Fig 11.png: A graph of the minimum line separation required for cell alignment, obtained using model predictions from network-generated images. The red linear trend line shows the (lack of) relationship between line separation and line width, the blue dots are the average minimum line separation for a given line width that results in cell alignment, and the grey shaded area is the error in the red trend line (a sketch of such a fit is given after this file list). The surrounding figures are experimentally obtained fluorescence images for varying line separations where there is, and is not, cell alignment.
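## Illustrative code sketches ##

To accompany the Fig 4 description, the sketch below shows a minimal encoder-decoder with skip connections in PyTorch. It is an illustration only, not the exact W-Net from the article: the layer depths, channel counts and the `SkipEncoderDecoder` name are assumptions chosen for brevity; only the three-channel input (topography, time, density) and the single-channel predicted fluorescence image follow the figure description.

```python
# Minimal encoder-decoder sketch with skip connections (illustrative only,
# not the exact W-Net from the article). Input: 3 channels (topography,
# time, density); output: 1-channel predicted fluorescence image.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, a standard encoder/decoder building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class SkipEncoderDecoder(nn.Module):
    def __init__(self, in_channels=3, out_channels=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)  # concatenation doubles channels
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))  # predicted fluorescence image

# Example: topography, time and density stacked as three 256x256 channels.
x = torch.rand(1, 3, 256, 256)
y = SkipEncoderDecoder()(x)
print(y.shape)  # torch.Size([1, 1, 256, 256])
```

Channel tests such as those in Figs 5-7 can then be approximated by holding two of the input channels fixed, sweeping the third, and comparing the output images.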
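The comparison figure in Fig 9 (blue for network output, red for real cell positioning, green for agreement) can be composed as in the sketch below. The 0.5 binarisation threshold and the `predicted` / `measured` names are assumptions; only the colour assignment follows the caption.

```python
# Illustrative composition of a Fig 9-style comparison overlay.
# Assumes two greyscale images of identical shape with values in [0, 1]:
# `predicted` (network output) and `measured` (real cell positioning).
import numpy as np

def comparison_overlay(predicted, measured, threshold=0.5):
    # Binarise both images; the 0.5 threshold is an assumption.
    pred_mask = predicted > threshold
    meas_mask = measured > threshold

    overlay = np.zeros(predicted.shape + (3,), dtype=np.uint8)
    overlay[pred_mask & ~meas_mask] = (0, 0, 255)  # blue: network output only
    overlay[meas_mask & ~pred_mask] = (255, 0, 0)  # red: real cells only
    overlay[pred_mask & meas_mask] = (0, 255, 0)   # green: agreement
    return overlay

# Example with random placeholder images.
rng = np.random.default_rng(0)
img = comparison_overlay(rng.random((256, 256)), rng.random((256, 256)))
print(img.shape)  # (256, 256, 3)
```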
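Finally, the Fig 11 trend line and grey error band have the form of a least-squares linear fit with a propagated 1-sigma uncertainty band, sketched below. The numerical values are placeholders, not measurements from the study.

```python
# Illustrative Fig 11-style analysis (placeholder data, not study results):
# fit a linear trend of minimum line separation against line width and
# shade the uncertainty of the fit, as in the figure's grey error band.
import numpy as np
import matplotlib.pyplot as plt

line_width = np.array([10.0, 15.0, 20.0, 25.0, 30.0])      # µm (placeholders)
min_separation = np.array([42.0, 40.5, 43.0, 41.0, 42.5])  # µm (placeholders)

# Least-squares linear fit, with covariance for the error band.
coeffs, cov = np.polyfit(line_width, min_separation, deg=1, cov=True)
fit = np.poly1d(coeffs)
xs = np.linspace(line_width.min(), line_width.max(), 100)

# 1-sigma uncertainty of the fitted line, propagated from the covariance.
design = np.vstack([xs, np.ones_like(xs)]).T
sigma = np.sqrt(np.einsum("ij,jk,ik->i", design, cov, design))

plt.scatter(line_width, min_separation, color="blue", label="average minimum separation")
plt.plot(xs, fit(xs), color="red", label="linear trend")
plt.fill_between(xs, fit(xs) - sigma, fit(xs) + sigma, color="grey", alpha=0.3, label="fit error")
plt.xlabel("Line width (µm)")
plt.ylabel("Minimum separation for alignment (µm)")
plt.legend()
plt.show()
```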