Decoding mental images: combining pareidolia, genetic algorithms, and deep learning for objectively quantifying mental imagery
Villani, Saivydas, Tajari, Ahmadreza, Storrs, Katherine R., Rideaux, Reuben, Wallis, Thomas S.A., Harrison, William J. and Fleming, Roland W. (2024) Decoding mental images: combining pareidolia, genetic algorithms, and deep learning for objectively quantifying mental imagery. Maiello, Guido (ed.) Applied Vision Association Christmas meeting 2024, Cardiff University, Cardiff, United Kingdom. 1 pp.
Record type: Conference or Workshop Item (Poster)
Abstract
Mental imagery—our ability to create or recreate perceptual experiences in our mind’s eye—is traditionally assessed using subjective self-report measures. To objectively measure mental images, we explored a novel approach using pareidolia, our tendency to perceive familiar patterns in noise. Specifically, we conducted a mind-reading experiment. Eighteen participants were first asked to choose a digit from 1-9, and then performed a pareidolia task. Participants were shown images of pure pixel noise and selected those most resembling their mental image of their chosen digit. Using a genetic search algorithm, we guided this selection process, progressively growing the digit image out of the noise. This created a ‘classification image’, a visual representation of the participant’s mental image of their chosen digit. We fed the generated classification images into deep neural network (DNN) image classifiers trained to recognise digits. The classifiers correctly identified participants’ imagined digits 46% of the time, significantly above the 11% chance level (p<.001). Additionally, a new set of 20 participants correctly identified the digits from the classification images 66% of the time (p<.001). These results demonstrate that both humans and DNNs could decode the digits participants were thinking about. Our work thus provides a proof-of-concept demonstration of how we can visualize, decode, and objectively quantify mental images, and opens new avenues for exploring visual imagination. Future applications include fine-tuning DNNs and using them as simulated observers to design stimulus sets to induce and further test pareidolia, enabling experiments measuring individual differences in mental imagery activation and frequency.
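The genetic-search procedure described in the abstract can be sketched in a few lines of code. This is a toy illustration, not the authors' implementation: the human participant is replaced by a hypothetical simulated observer that scores noise images against a fixed digit template, and all parameters (image size, population size, number of generations, mutation rate) are invented for the sketch. Noise images are repeatedly scored, the best are kept, and new candidates are bred by crossover and mutation, so the "imagined" digit gradually emerges from the noise as a classification image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a participant's mental image: a crude "1".
# In the real experiment there is no template; a human selects the noise
# images that most resemble the digit they are imagining.
SIZE = 28
template = np.zeros((SIZE, SIZE))
template[4:24, 12:16] = 1.0

def observer_score(img):
    """Simulated observer: rates how much a noise image resembles the template."""
    return float(np.sum(img * template))

def evolve_classification_image(pop_size=100, n_generations=50, keep=10,
                                mutation_sd=0.1):
    """Genetic search: score a population of noise images, keep the best,
    and breed replacements by crossover and mutation."""
    population = rng.normal(size=(pop_size, SIZE, SIZE))
    for _ in range(n_generations):
        scores = np.array([observer_score(img) for img in population])
        elite = population[np.argsort(scores)[-keep:]]          # selection
        children = []
        for _ in range(pop_size - keep):
            a, b = elite[rng.integers(keep, size=2)]            # two parents
            mask = rng.random((SIZE, SIZE)) < 0.5               # crossover
            child = np.where(mask, a, b)
            child += rng.normal(scale=mutation_sd,
                                size=(SIZE, SIZE))              # mutation
            children.append(child)
        population = np.concatenate([elite, np.stack(children)])
    # The classification image is the average of the final best survivors.
    scores = np.array([observer_score(img) for img in population])
    return population[np.argsort(scores)[-keep:]].mean(axis=0)

ci = evolve_classification_image()
```

Over the generations, selection pressure pushes pixel values up wherever the observer's template has energy, so the average of the surviving images comes to resemble the imagined digit — the same logic that lets a DNN digit classifier (or a second group of human observers) decode it afterwards.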
Text: Villani_et_al_poster_2024_AVA - Author's Original
More information
Published date: 16 December 2024
Venue - Dates:
Applied Vision Association Christmas meeting 2024, Cardiff University, Cardiff, United Kingdom, 2024-12-16
Keywords:
Poster Presentation, Visual Mental Imagery, Reverse Correlation, Genetic Algorithms, Machine Learning, CNN - convolutional neural network
Identifiers
Local EPrints ID: 506648
URI: http://eprints.soton.ac.uk/id/eprint/506648
PURE UUID: 4a48345e-1c8d-4b09-8e61-3814ef30eaa3
Catalogue record
Date deposited: 13 Nov 2025 17:31
Last modified: 14 Nov 2025 03:10
Contributors
Author: Saivydas Villani
Author: Ahmadreza Tajari
Author: Katherine R. Storrs
Author: Reuben Rideaux
Author: Thomas S.A. Wallis
Author: William J. Harrison
Author: Roland W. Fleming
Editor: Guido Maiello