READ ME File For 'Data and analysis: Efficient operator method for modeling mode mixing in misaligned optical cavities'

Dataset DOI: 10.5258/SOTON/D2938

Date that the file was created: January, 2024

-------------------
GENERAL INFORMATION
-------------------

ReadMe Author: William James Hughes, University of Southampton

Date of data collection: 2022-2024

Information about geographic location of data collection: Data generated by computers located in Oxford, UK and Southampton, UK

Related projects:
EPSRC Hub in Quantum Computing and Simulation (EP/T001062/1)
European Union Quantum Technology Flagship project AQTION (Project No. 820495)

--------------------------
SHARING/ACCESS INFORMATION
--------------------------

Licenses/restrictions placed on the data, or limitations of reuse: CC BY 4.0 license

Recommended citation for the data:

This dataset supports the publication:
AUTHORS: W. J. Hughes, T. H. Doherty, J. A. Blackmore, P. Horak, J. F. Goodwin
TITLE: Efficient operator method for modeling mode mixing in misaligned optical cavities
JOURNAL: Physical Review A
PAPER DOI IF KNOWN:

Links to other publicly accessible locations of the data:

Links/relationships to ancillary or related data sets:

--------------------
DATA & FILE OVERVIEW
--------------------

This dataset contains:

The data and code files required to reproduce the figures in the publication.

The data and code for each figure are located in a separate directory labelled by the figure number, with the exception of fig_3_4, which houses the code and data for Figures 3 and 4 together.

Within each folder, there is a separate readme file ('readme.txt') explaining the contents of that folder. Each folder also contains a subfolder 'data', which houses the data, and a Python script plot.py, which may be run to generate the figure (the exception is fig_3_4, which has three Python files: fig_3_plot.py to plot Figure 3, fig_4_plot.py to plot Figure 4, and common.py, which is used by both plotting scripts but never needs to be executed itself).

The code may be run in a Python environment with the packages numpy, scipy, and matplotlib. The details of the environment that was used to run the code are given in requirements.txt.

The data is encoded in JSON format for text and single-field data, and in .npy arrays for arrays of numbers (chosen for storage efficiency). The code to read the data is contained within the plot.py scripts. If one wishes to view the numerical data, the data folders can be read using the same loading code as the plot.py scripts, and any subsequent processing can then be done in Python. This loader reads an entire data folder into a single Python dictionary, with the exception of the folder fig_7, whose data is split across a handful of subfolders. (A sketch of one possible loader is given at the end of this file.)

The data was generated using the methods described in the paper.

Relationship between files, if important for context:

Each folder contains the code and data for one figure. The figure number can be inferred from the name of the folder (e.g. the folder 'fig_1' pertains to Figure 1). No files outside the folder of a figure are required to generate that figure.

Additional related data collected that was not included in the current data package:

If data was derived from another source, list source:

Part of the data for Fig. 7 was taken from the dataset of a previously published paper. This data was used to compare our methods with literature values.
This data can be found at doi:10.5258/SOTON/403718.

If there are multiple versions of the dataset, list the file updated, when and why the update was made:

--------------------------
METHODOLOGICAL INFORMATION
--------------------------

Description of methods used for collection/generation of data:

The data was generated using the methods described in the paper, using an author-written set of Python scripts. The exception is that part of the data for Fig. 7 was taken from the dataset of a previously published paper in order to compare our methods with literature values. This data can be found at doi:10.5258/SOTON/403718. The data was read and resaved in .npy format for consistency with the rest of our data, but the numerical data was not processed in any way.

Methods for processing the data:

The submitted data (with the exception of the literature data for Fig. 7) is the data generated by the author-written code. The presentation of this data results from the processing detailed in the Python scripts contained within the dataset.

Software- or Instrument-specific information needed to interpret the data, including software and hardware version numbers:

The code may be run in a Python environment with the packages numpy, scipy, and matplotlib. The details of the environment that was used to run the code are given in requirements.txt.

Standards and calibration information, if appropriate:

Environmental/experimental conditions:

Describe any quality-assurance procedures performed on the data:

People involved with sample collection, processing, analysis and/or submission:

William J. Hughes (affiliated with the University of Oxford for the work, currently University of Southampton)
Thomas H. Doherty (University of Oxford)
Jacob A. Blackmore (University of Oxford)
Peter Horak (University of Southampton)
Joseph F. Goodwin (University of Oxford)

--------------------------
DATA-SPECIFIC INFORMATION
--------------------------

Descriptions of each set of data are given in the readme.txt for that figure.
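
--------------------------
EXAMPLE: LOADING THE DATA
--------------------------

The sketch below illustrates one way a figure's 'data' folder could be read into a single Python dictionary, with JSON files providing text and single-field values and .npy files providing numerical arrays, as described above. It is an illustration only: the function name load_data_folder, the example path 'fig_1/data', and the assumption of a flat (non-nested) folder layout are choices made here for demonstration. The authoritative loading code is the code inside each figure's plot.py script, and fig_7 additionally contains subfolders that this sketch does not descend into.

# Illustrative sketch only; the actual loading code lives in each figure's plot.py.
# Assumes a flat data folder containing .json files (text / single-field values)
# and .npy files (numerical arrays).

import json
from pathlib import Path

import numpy as np


def load_data_folder(folder):
    """Read every .json and .npy file in `folder` into one dictionary,
    keyed by file name without the extension."""
    data = {}
    for path in Path(folder).iterdir():
        if path.suffix == ".json":
            with open(path, "r") as f:
                data[path.stem] = json.load(f)
        elif path.suffix == ".npy":
            data[path.stem] = np.load(path)
    return data


if __name__ == "__main__":
    # Hypothetical usage: inspect the contents of the Figure 1 data folder.
    data = load_data_folder("fig_1/data")
    for key, value in data.items():
        print(key, type(value))

Once loaded in this way, the arrays and fields can be processed or re-plotted as the reader wishes, independently of the supplied plotting scripts.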