Supporting data and code from 'The time-course of real-world spatial and semantic processing'

----------------------		General Info		----------------------

If there are any bugs/issues, contact Matt Anderson: Matt.Anderson@soton.ac.uk

doi: https://doi.org/10.5258/SOTON/D2036

ORCID ID (Matt Anderson): 0000-0002-7498-2719

Research funded by a University of Southampton Jubilee Scholarship, EPSRC grant EP/K005952/1, EPSRC grant EP/S016368/1, and a York University VISTA Visiting Trainee Award

----------------------		File Info		----------------------

The files AggData/SEM_categorization.txt and AggData/STR_categorization.txt contain trial-by-trial response data from the semantic and spatial structure tasks respectively. The columns should be intuitively named, but here is a brief description of each:

- Ppt_No: unique participant identifier
- Scene: scene identifier in the SYNS database (see https://syns.soton.ac.uk/)
- View: view identifier in the SYNS database
- cat_agreement: proportion of participants who selected the ground-truth category, using unlimited viewing durations (see Anderson et al., 2021)
- GT_Category: Ground-truth category. In the semantic file, the numbers 1->6 correspond to Nature, Road, Residence, Farm, Beach, and Car Park / Commercial respectively. In the spatial structure file, numbers 1->4 correspond to Cluttered, Closed Off, Flat, and Tunnel respectively. 
- TrialNumber: order in which the image was presented for a given participant
- imageID: unique identifier for each image
- distance_*: the next 6 variables give summary statistics of the image computed from coregistered ground-truth LiDAR data: the mean, standard deviation, and range of distance, each computed both across the whole image and within a small patch around the fixation location.
- distance_bin: median split by the column distance_mean
- Colour_GrayScale: 1 = Grayscale, 2 = Colour
- Stereo_Cond: 1 = Mono, 2 = Stereo, 3 = Stereo-Reversed
- Pres_Time: number of frames, as a multiple of 13.33 msecs (75 Hz refresh rate)
- SceneCat: participant-selected category
- DepthCat: participant-selected depth
- ResponsePeriod: response time (the name is a misnomer; it is a time, not a period)
- Elapsed: empirically measured screen-flip interval, as reported by Psychtoolbox
- Cat_correct: 1 = correct, 0 = incorrect
- Depth_Correct: 1 = correct, 0 = incorrect
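
As a quick sanity check on the column layout, here is a minimal Python sketch that computes categorisation accuracy per stereo condition. The tab delimiter and the two-trial sample are assumptions for illustration; adjust the delimiter to match the actual files.

```python
import csv
from io import StringIO

def accuracy_by_condition(rows, cond_col="Stereo_Cond", correct_col="Cat_correct"):
    """Proportion of correct categorisations for each level of a condition column."""
    totals, hits = {}, {}
    for r in rows:
        c = r[cond_col]
        totals[c] = totals.get(c, 0) + 1
        hits[c] = hits.get(c, 0) + int(r[correct_col])
    return {c: hits[c] / totals[c] for c in totals}

# Hypothetical two-trial sample in the file's column layout (tab-delimited assumed);
# with the real data, replace StringIO(sample) with open("AggData/SEM_categorization.txt").
sample = "Ppt_No\tStereo_Cond\tCat_correct\n1\t1\t1\n1\t1\t0\n"
rows = list(csv.DictReader(StringIO(sample), delimiter="\t"))
print(accuracy_by_condition(rows))  # {'1': 0.5}
```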

The subfolder CategorizationData_UnlimViewTime/ contains one .mat data file per participant. In this task, participants judged either the
semantic or spatial category of every image from the brief-presentation experiment, with unlimited viewing time; these data are used to
estimate the noise ceiling. Files named '*_task_1.mat' correspond to the spatial task, and '*_task_2.mat' to the semantic task. The variable
'ImOrder' gives the order in which the stimuli appeared: each row is one trial, and the two columns give the scene and view of the
stereo-pair in the SYNS database, so ImOrder(1,1) and ImOrder(1,2) give the scene and view of the first trial respectively. The variable
'Labelsclicked' gives the category the observer selected for each image.
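
To illustrate the ImOrder indexing, here is a minimal Python sketch. The participant filename and the helper function are hypothetical; the real files would be read with scipy.io.loadmat.

```python
def first_trial(im_order):
    """Return (scene, view) of the first trial: rows are trials,
    column 1 is the SYNS scene, column 2 is the view."""
    return im_order[0][0], im_order[0][1]

# With the real data (illustrative path):
#   from scipy.io import loadmat
#   d = loadmat("CategorizationData_UnlimViewTime/ppt01_task_1.mat")
#   scene, view = first_trial(d["ImOrder"])

# Synthetic stand-in with the same trials-by-2 layout:
demo = [[12, 3], [5, 7]]
print(first_trial(demo))  # (12, 3)
```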



The folder AnalysisScripts contains:

mafc_dprime.m
- Computes d-prime for tasks with more than 2 categories. For tasks with 2 categories, it produces d-prime estimates identical to the standard method. 
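
One standard formulation of d-prime for an m-alternative task (Hacker & Ratcliff, 1979) defines proportion correct as the integral of phi(x - d') * Phi(x)^(m-1) over x; the MATLAB implementation in mafc_dprime.m may differ in detail, but a Python sketch of that formulation, recovering d' by bisection, looks like this:

```python
import math

def _phi(x):   # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pc_mafc(dprime, m, lo=-8.0, hi=12.0, n=4000):
    """P(correct) in an m-alternative task for a given d':
    trapezoidal approximation of  integral phi(x - d') * Phi(x)**(m-1) dx."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * _phi(x - dprime) * _Phi(x) ** (m - 1)
    return total * h

def mafc_dprime(pc, m):
    """Invert pc_mafc by bisection to recover d' from proportion correct."""
    lo, hi = -5.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if pc_mafc(mid, m) < pc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(pc_mafc(0.0, 4), 3))  # chance performance with 4 categories: 0.25
```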

category_sim.m
- Simulates categorisation data for tasks with an arbitrary number of categories, and measures the variability of d-prime estimates across different levels of task difficulty and inter-observer agreement. It outputs a lookup table that can be used to adjust the d-prime values given by mafc_dprime.m, so that performance can be compared across tasks with different category systems. 

dprime_lookuptable.mat
- An example lookup table for 2-6 categories. For details, see category_sim.m script

adj_dprime.m
- A method for adjusting d-prime estimates using the lookup table described above. 
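
The adjustment presumably amounts to a table lookup with interpolation. Here is a minimal Python sketch; the grids table_d and table_adj are hypothetical stand-ins for whatever dprime_lookuptable.mat actually stores (see category_sim.m for the real format).

```python
from bisect import bisect_left

def adj_dprime(d, table_d, table_adj):
    """Map a raw d' onto its adjusted value by linear interpolation in a
    lookup table (table_d: ascending raw-d' grid, table_adj: adjusted values).
    Values outside the grid are clamped to the table's endpoints."""
    if d <= table_d[0]:
        return table_adj[0]
    if d >= table_d[-1]:
        return table_adj[-1]
    i = bisect_left(table_d, d)
    t = (d - table_d[i - 1]) / (table_d[i] - table_d[i - 1])
    return table_adj[i - 1] + t * (table_adj[i] - table_adj[i - 1])

# Toy table for illustration only:
print(adj_dprime(1.5, [0.0, 1.0, 2.0], [0.0, 2.0, 4.0]))  # 3.0
```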

RapidSceneCatLMMs_CategoryAnalyses.R
- R script that runs the linear mixed models on the category data. The same script handles data from both tasks; simply change which .txt file is loaded. 

RapidSceneCatLMMs_DepthAnalyses.R
- Same as above, but for the depth responses



The folder EthicsDocs contains the general ethics documentation for the study.