READ ME File For the University of Southampton Doctoral Thesis "Exploring social anxiety in digital platforms within novel experimental laboratory paradigms"

Dataset DOI: 10.5258/SOTON/D2972

ReadMe Author: Neslihan Ozhan, 0000-0001-8648-2458

This dataset supports the thesis entitled "Exploring social anxiety in digital platforms within novel experimental laboratory paradigms"
AWARDED BY: University of Southampton
DATE OF AWARD: 2024

DESCRIPTION OF THE DATA

The datasets include all data that were used in the thesis, for Chapters 2, 3, and 4.

Chapter2_HR_imputedforCCC_copy.csv: Data were gathered using the Fitbit manufacturer's Application Programming Interface (API). The Fitbit heart rate data (timestamps provided based on Network Time Protocol, GMT+1) were matched with the experimental task start and end times and dates, which were manually recorded in seconds for each participant. Because the sampling frequency is determined by the Fitbit, we inserted missing values (NAs) into the dataset at the one-second level and then smoothed the data with a rolling average (window size of two) that takes four true observations into account (two on each side), using the 'imputeTS' package in R (Moritz & Bartz-Beielstein, 2017), per participant and key event. We then aggregated the data over time periods that corresponded to five key events: Baseline, Inhalation, Preparation, Speech, and Recovery.

Chapter2_co2labdata_sum1.csv: The dynamic virtual scenario was developed using Visual Studio (version 15.9.38), and relevant data were stored in the Unity game engine software, version 5.6.6. Survey questionnaires were administered through Qualtrics. The VR equipment was the Oculus Rift consumer version headset (Facebook Inc.) with an integrated audio system, offering a 110-degree field of view with 640×800 resolution per eye, running on a Dell desktop computer with an Intel i7 processor (Windows 10 operating system).
Participant response input was recorded using the Oculus Rift Bluetooth touch controller. Participants completed the GAD-7 (Modified) and SUDS throughout the VR task during key events. The following data were recorded at pre-test baseline and immediately after the experimental task: positive and negative affect using the Positive and Negative Affect Scale (PANAS) (Watson et al., 1988); panic-like symptomology and the associated symptoms of autonomic arousal (e.g., shaking) using the Panic Symptom Inventory (PSI) (Clark & Hemsley, 1982); and the tendency to fear the symptoms of experienced anxiety using the Anxiety Sensitivity Index (ASI) (Reiss et al., 1986). The following data were recorded only immediately after the experimental session (all via Qualtrics): (a) the Speech Performance Scale (SPS) (Rapee & Lim, 1992); (b) the Anticipatory Processing Questionnaire (APQ); and (c) the Presence Questionnaire (PQ) (Witmer & Singer, 1998).

Chapter2_screening_data.csv: These data were collected via telephone. We telephone-screened 162 potential participants (88 females, Mage = 22.78, SDage = 5.43) for any of the following current or past psychiatric diagnoses using a truncated diagnostic interview based on DSM-IV criteria via the Mini International Neuropsychiatric Interview (MINI) (Sheehan et al., 1998): depression, mania, generalised anxiety, panic disorder, agoraphobia, social anxiety disorder (SAD), obsessive-compulsive disorder (OCD), post-traumatic stress disorder (PTSD), and alcohol and/or substance abuse or dependence. To recruit healthy volunteers, further exclusion criteria included being physically unfit (e.g., cardiovascular conditions, migraines), smoking, prescribed or recent medication use, pregnancy, hypertension (>140/90 mm Hg), body mass index <18 or ⩾28 kg/m², recent use of recreational drugs, a long or recent COVID-19 diagnosis, and acute illness prior to the testing session.
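The heart-rate pre-processing described for the Chapter 2 (and, below, Chapter 3) Fitbit data was carried out with the 'imputeTS' package in R. As a minimal illustration only, the same idea (one-second NA insertion, a centred rolling average over two true observations on each side, then a mean per key event) can be sketched in Python/pandas. The column names (participant, event, timestamp, hr) are hypothetical placeholders, not the variable names in the CSV files, and the rolling mean only approximates imputeTS's moving-average imputation:

```python
import pandas as pd

def preprocess_hr(df: pd.DataFrame) -> pd.DataFrame:
    """Reindex each participant/event trace to one-second resolution,
    smooth with a centred rolling mean, then average per key event.
    Hypothetical columns: participant, event, timestamp (datetime), hr."""
    out = []
    for (pid, event), grp in df.groupby(["participant", "event"]):
        grp = grp.set_index("timestamp").sort_index()
        # Insert NAs at the one-second level between first and last sample.
        full_index = pd.date_range(grp.index.min(), grp.index.max(), freq="1s")
        hr = grp["hr"].reindex(full_index)
        # Centred rolling average; a window of 5 covers up to two true
        # observations on each side of the (possibly missing) centre value.
        hr = hr.rolling(window=5, center=True, min_periods=1).mean()
        out.append({"participant": pid, "event": event, "mean_hr": hr.mean()})
    return pd.DataFrame(out)
```

This yields one mean heart-rate value per participant and key event, matching the aggregated structure of the imputed CSV files.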
Chapter3_HR_updated_VR1_rolled.csv: Data were gathered using the Fitbit manufacturer's Application Programming Interface (API). The Fitbit heart rate data (timestamps provided based on Network Time Protocol, GMT+1) were matched with the experimental task start and end times and dates, which were manually recorded in seconds for each participant. Because the sampling frequency is determined by the Fitbit, we inserted missing values (NAs) into the dataset at the one-second level and then smoothed the data with a rolling average (window size of two) that takes four true observations into account (two on each side), using the 'imputeTS' package in R (Moritz & Bartz-Beielstein, 2017), per participant and key event. We then aggregated over time periods corresponding to key events in the experiment. The key events in the VR task were Baseline, Anticipation, Talk, and Recovery. We averaged three-minute periods from each phase (for Baseline and Recovery, we took an average of the last three minutes of that phase). The key events for the face-to-face task were Baseline and Task. Baseline corresponds to the final minute of the participants' wait period before the panel members entered the room. Because the duration of the Task phase varied per participant (minimum time spent = 1 minute 20 seconds, Mtime spent = 1 minute 51 seconds, SDtime spent = 39 seconds), we averaged the heart rate over the first minute of the Task, when the instructions were revealed and the basic calculations were performed.

Chapter3_pro_data_sum.csv: We pre-recorded a lecture theatre at the University of Southampton's Psychology department twice with an Insta360 Pro 2 (https://www.insta360.com), an 8K 360° video camera: (i) when the lecture room was non-populated; and (ii) when it was populated with a real audience.
Recorded raw videos were stitched using Insta360 STITCHER (content type: stereo, left eye on top; stitching mode: new optical flow; Zenith optimisation on). The footage was then encoded using the Handbrake application (https://handbrake.fr/) before being transferred into the Unity software. The source code was written in C# using Visual Studio version 15.9.38. The videos were trimmed to six-minute periods, corresponding to a three-minute preparation phase and a three-minute speech phase. The VR equipment was the Oculus Rift consumer version headset (Facebook Inc.) with an integrated audio system, offering a 110-degree field of view with 640×800 resolution per eye, running on a Dell desktop computer with an Intel i7 processor (Windows 10 operating system). Participant response input was recorded using the Oculus Rift Bluetooth touch controller. To characterise our sample (via Qualtrics), we measured: trait anxiety via the Generalised Anxiety Disorder Assessment (GAD-7) (Spitzer et al., 2006); trait social anxiety via the Social Phobia Inventory (SPIN) (Connor et al., 2000); trait social anxiety in relation to interacting with others via the Social Interaction Anxiety Scale (SIAS) (Mattick & Clarke, 1998); trait social anxiety in relation to experienced avoidance and fear during interaction- and performance-related social events via the Liebowitz Social Anxiety Scale (LSAS) (Liebowitz, 1987); communication apprehension (given that participants performed a public speaking task) via the Personal Report of Communication Apprehension, public speaking subscale (PRCA-24) (McCroskey, 2015); depression severity via the Patient Health Questionnaire (PHQ-9) (Kroenke & Spitzer, 2002); paranoia via the Revised Green et al. Paranoid Thoughts Scale, Persecutory Paranoia subscale (R-GPTS) (Freeman et al., 2019); and math anxiety via the Abbreviated Math Anxiety Scale (AMAS) (Hopko et al., 2003).
Participants completed the GAD-7 (Modified), SUDS, and SASCI throughout the VR task during key events. The following data were recorded at pre-test baseline and immediately after the experimental task: positive and negative affect using the Positive and Negative Affect Scale (PANAS) (Watson et al., 1988); panic-like symptomology and the associated symptoms of autonomic arousal (e.g., shaking) using the Panic Symptom Inventory (PSI) (Clark & Hemsley, 1982); and the tendency to fear the symptoms of experienced anxiety using the Anxiety Sensitivity Index (ASI) (Reiss et al., 1986). The following data were recorded only immediately after the experimental session (all via Qualtrics): (a) the Speech Performance Scale (SPS) (Rapee & Lim, 1992); (b) the Thoughts Questionnaire (TQ); and (c) the Presence Questionnaire (PQ) (Witmer & Singer, 1998).

Chapter4_OnlineTrierdata_eligible_121.csv: The videoconferencing session took place on Blackboard Collaborate (https://ca.bbcollab.com/). Participants who scored higher than 15 on the SPIN (Connor et al., 2000) and had a properly functioning web camera on a desktop computer were invited to an online videoconferencing session. We randomly assigned participants to the combinations of our two between-subjects variables: (a) Audience: the dummy audience profile images were displayed (camera on/off); and (b) Speaker: the participant's web camera was turned on/off. This file contains eligible participant data. Participants completed the GAD-7 (Modified), SUDS, and SASCI throughout the online task during key events.
The following data were recorded at pre-test baseline and immediately after the experimental task: positive and negative affect using the Positive and Negative Affect Scale (PANAS) (Watson et al., 1988); panic-like symptomology and the associated symptoms of autonomic arousal (e.g., shaking) using the Panic Symptom Inventory (PSI) (Clark & Hemsley, 1982); and the tendency to fear the symptoms of experienced anxiety using the Anxiety Sensitivity Index (ASI) (Reiss et al., 1986). The following data were recorded only immediately after the experimental session (all via Qualtrics): (a) the Speech Performance Scale (SPS) (Rapee & Lim, 1992); and (b) the Thoughts Questionnaire (TQ), observer/field perspective.

Chapter4_OnlineTrierdata_full.csv: Data were collected via Qualtrics. Three hundred and ninety-two participants were screened. The exclusion criteria were: (a) aged <18 or >55 years; (b) not having access to a properly functioning video camera or a microphone; (c) participation via mobile phones; and (d) scoring less than 15 on the Social Phobia Inventory (SPIN) (Connor et al., 2000), used to characterise a sample with subclinical to clinical social anxiety. A threshold score of 15 is used to distinguish between those with varying severity of social anxiety and those who do not exhibit any signs of social anxiety.

Date of data collection:
Chapter 2: between November 2019 and August 2022 (inclusive). Note that due to the COVID-19 pandemic, recruitment had to be paused from March 2020 to September 2021.
Chapter 3: between March 2022 and June 2022 (inclusive).
Chapter 4: between March 2021 and February 2022 (inclusive).

Information about geographic location of data collection: United Kingdom

Licence: CC-BY

Related projects/Funders: NA

Related publication: NA

Date that the file was created: February 2024
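For users reconstructing the eligible sample from the full screening file, the Chapter 4 exclusion criteria above amount to a simple row filter. The sketch below is illustrative only: the column names (age, has_camera, has_microphone, device, spin_total) are hypothetical placeholders, not the variable names in Chapter4_OnlineTrierdata_full.csv, and the SPIN cut-off follows the "scored higher than 15" wording used for the eligible file:

```python
import pandas as pd

def eligible(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the stated Chapter 4 screening criteria: aged 18-55, working
    camera and microphone, not participating via a mobile phone, and a
    SPIN total above 15. Column names are hypothetical."""
    mask = (
        df["age"].between(18, 55)          # excludes <18 or >55
        & df["has_camera"]                 # properly functioning camera
        & df["has_microphone"]             # properly functioning microphone
        & (df["device"] != "mobile")       # no mobile-phone participation
        & (df["spin_total"] > 15)          # subclinical-to-clinical SPIN
    )
    return df[mask]
```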