Supporting data for the submission of Matthew Anderson's thesis:
Category and Depth Discrimination in Real-World Scenes (2021)

Contact Matt Anderson (matt.anderson@soton.ac.uk) for further information or if there are any issues.

---------------------- General Info ----------------------

Thesis DOI: https://doi.org/10.5258/SOTON/D1990
ORCID ID (Matt Anderson): 0000-0002-7498-2719
Research funded by a University of Southampton Jubilee Scholarship

Chapters 3-5 are empirical papers.

---------------------- Chapter 3 ----------------------

The data for Chapter 3, Category Systems for Real-World Scenes, are already available at https://doi.org/10.1167/jov.21.2.8

---------------------- Chapter 4 ----------------------

The data and code for Chapter 4 are located in the folder Chapt4_Data.

The files AggData/SEM_categorization.txt and AggData/STR_categorization.txt contain trial-by-trial response data from the semantic and spatial structure tasks respectively. The columns should be intuitively named, but here is a brief description of each (a short R loading example is given at the end of this section):

- Ppt_No: unique participant identifier
- Scene: scene identifier in the SYNS database (see https://syns.soton.ac.uk/)
- View: view identifier in the SYNS database
- cat_agreement: proportion of participants who selected the ground-truth category under unlimited viewing durations (see Anderson et al., 2021)
- GT_Category: ground-truth category. In the semantic file, the numbers 1-6 correspond to Nature, Road, Residence, Farm, Beach, and Car Park / Commercial respectively. In the spatial structure file, the numbers 1-4 correspond to Cluttered, Closed Off, Flat, and Tunnel respectively.
- TrialNumber: order in which the image was presented for a given participant
- imageID: unique identifier for each image
- distance_*: six variables containing statistics of the coregistered ground-truth LiDAR distance data: the mean, standard deviation, and range across the whole image, plus the same three statistics for a small patch around the fixation location
- distance_bin: median split on the column distance_mean
- Colour_GrayScale: 1 = Grayscale, 2 = Colour
- Stereo_Cond: 1 = Mono, 2 = Stereo, 3 = Stereo-Reversed
- Pres_Time: presentation duration in frames; each frame is 13.33 msecs at the 75 Hz refresh rate
- SceneCat: participant-selected category
- DepthCat: participant-selected depth
- ResponsePeriod: response time (despite the column name, this is a time, not a period)
- Elapsed: empirical measure of the monitor flip interval, as reported by Psychtoolbox
- Cat_correct: 1 = correct, 0 = incorrect
- Depth_Correct: 1 = correct, 0 = incorrect

The folder AnalysisScripts contains:

mafc_dprime.m - computes d-prime for a task with more than two categories. For tasks with two categories, it produces d-prime estimates identical to the standard method.
RapidSceneCatLMMs_CategoryAnalyses.R - R script that runs the linear mixed models on the category data. It works on data from both tasks; simply change which txt file is loaded.
RapidSceneCatLMMs_DepthAnalyses.R - same as above, but for the depth responses.
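As a convenience, here is a minimal R sketch of how the Chapter 4 files might be loaded and summarized. It assumes the files are whitespace- or tab-delimited with a header row and that the working directory is Chapt4_Data; adjust the read.table arguments otherwise.

    # Load the trial-by-trial semantic-task data (swap in STR_categorization.txt
    # for the spatial structure task)
    dat <- read.table("AggData/SEM_categorization.txt", header = TRUE)

    # Convert presentation time from frames to milliseconds (75 Hz refresh,
    # 13.33 ms per frame)
    dat$Pres_ms <- dat$Pres_Time * 1000 / 75

    # Proportion correct by presentation time and stereo condition
    aggregate(Cat_correct ~ Pres_ms + Stereo_Cond, data = dat, FUN = mean)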
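mafc_dprime.m is a MATLAB implementation. For reference, the textbook relationship between proportion correct and d-prime in an m-alternative forced-choice task, assuming an unbiased observer with equal-variance Gaussian noise, can be sketched in R as below. This is the standard formulation only; it is not guaranteed to match the script's exact implementation.

    # Expected proportion correct for a given d-prime in an m-alternative
    # forced-choice task (unbiased observer, equal-variance Gaussian noise)
    pc_from_dprime <- function(dprime, m) {
      integrate(function(x) dnorm(x - dprime) * pnorm(x)^(m - 1),
                lower = -Inf, upper = Inf)$value
    }

    # Numerically invert to obtain d-prime from an observed proportion correct
    # (valid for above-chance performance, 1/m < pc < 1)
    dprime_from_pc <- function(pc, m) {
      uniroot(function(d) pc_from_dprime(d, m) - pc, interval = c(0, 10))$root
    }

    dprime_from_pc(0.70, m = 6)   # e.g. 70% correct with 6 semantic categories

For m = 2 this reduces to d-prime = sqrt(2) * qnorm(pc), i.e. the standard two-category formula, consistent with the note above.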
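The exact model specifications live in the R scripts themselves. Purely as an illustration of the general approach, a logistic mixed model on the trial-level accuracy data might look like the following (lme4 syntax); the fixed effects and random-effects structure shown here are placeholders, not the models reported in the thesis.

    library(lme4)

    dat <- read.table("AggData/SEM_categorization.txt", header = TRUE)
    dat$Pres_ms <- dat$Pres_Time * 1000 / 75

    # Illustrative only: accuracy as a function of presentation time and stereo
    # condition, with random intercepts for participants and images
    fit <- glmer(Cat_correct ~ log(Pres_ms) * factor(Stereo_Cond) +
                   (1 | Ppt_No) + (1 | imageID),
                 data = dat, family = binomial)
    summary(fit)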
---------------------- Chapter 5 ----------------------

The data and code for Chapter 5 are located in the folder Chapt5_Data.

The file AggData/alldata.txt contains trial-by-trial response data from the online and in-lab experiments. The columns should be intuitively named, but here is a brief description of each (a short R loading example is given at the end of this section):

- Ppt: unique participant identifier
- Expt: 1 = online experiment with Prolific participants, 2 = in-lab experiment
- Prestime: presentation duration in msecs
- gt: ground-truth depth judgement. 0 = left, 1 = right
- Human_resp: human ordinal response. 0 = left, 1 = right
- ID: unique identifier for each image (1800 spanning both experiments)
- ProbeLoc_L_1: row-location of the first probe in an 800x1200 resized version of the (uncropped) SYNS stereo images
- ProbeLoc_L_2: column-location of the first probe
- ProbeLoc_R_1: row-location of the second probe
- ProbeLoc_R_2: column-location of the second probe
- Scene: scene identifier in the SYNS database (see https://syns.soton.ac.uk/)
- View: view identifier in the SYNS database
- meandepth: mean depth condition
- elev: elevation type (variable or same)
- depthcontrast: depth contrast, binned from 0-200% of the mean depth; bins have uniform width in log space
- ddiff: absolute depth difference between the two probes
- depthmu: empirical measurement of the mean depth of the two probe locations
- stereo: stereo condition, 1 = mono, 2 = stereo
- correct: whether the ordinal response was correct
- human_respratio: human depth-ratio response
- gtratio: ground-truth depth ratio
- RT: response time in msecs

The folder AnalysisScripts contains R scripts that run the linear mixed models on all of the above data. The names are self-explanatory.
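As with Chapter 4, a minimal R sketch for loading and summarizing alldata.txt. It assumes a whitespace- or tab-delimited file with a header row and that the correct column is coded 0/1; the actual model specifications are in the AnalysisScripts folder.

    dat <- read.table("AggData/alldata.txt", header = TRUE)

    # Proportion of correct ordinal (left/right) responses by experiment,
    # stereo condition, and presentation duration
    aggregate(correct ~ Expt + stereo + Prestime, data = dat, FUN = mean)

    # How the reported depth ratios relate to the ground-truth ratios
    cor(dat$human_respratio, dat$gtratio, use = "complete.obs")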