Advancing spatial-temporal rock fracture prediction with virtual camera-based data augmentation
Predicting rock fractures in unexcavated areas is a critical yet challenging aspect of geotechnical projects. This task involves forecasting the fracture mapping sequences for unexcavated rock faces from the sequences of excavated ones, making it well-suited to spatial–temporal deep learning techniques. Fracture mapping sequences for deep learning model training can be obtained from field photography; however, the main obstacle lies in the insufficient availability of high-quality photos. Existing data augmentation techniques rely on slices taken from Discrete Fracture Network (DFN) models, but such slices differ significantly from actual photos taken in the field. To overcome this limitation, this study introduces a new framework that uses Virtual Camera Technology (VCT) to generate “virtual photos” from DFN models. Both the external parameters (e.g., camera location and direction) and the internal parameters (e.g., focal length, resolution, sensor size) of cameras can be considered in this method. “Virtual photos” generated by VCT and by the conventional slicing method are extensively compared. The framework is designed to adapt to any distribution of field fractures and any camera settings, serving as a universal tool for practical applications. The whole framework has been packaged as an open-source tool for generating rock “photos”, and an open-source benchmark database has been established with this tool. To validate the framework's feasibility, the Predictive Recurrent Neural Network (PredRNN) method is applied to the generated database. A high degree of similarity is observed between the predicted mapping sequences and the ground truth, and the model successfully captures the dynamic changes in fracture patterns across different sections, confirming the framework's practical utility. The source code and dataset can be freely downloaded from the GitHub repository (https://github.com/GEO-ATLAS/Rock-Camera).
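The abstract's distinction between external parameters (camera location, direction) and internal parameters (focal length, resolution, sensor size) corresponds to the standard pinhole camera model. The sketch below is a minimal, hypothetical illustration of that projection step — not the authors' implementation — assuming a camera facing a vertical rock face, with fracture-trace endpoints given as 3D world points:

```python
import numpy as np

def look_at(cam_pos, target, up=(0.0, 0.0, 1.0)):
    """Build a world-to-camera rotation from position and view direction
    (the 'external' parameters: camera location and direction)."""
    fwd = np.asarray(target, float) - np.asarray(cam_pos, float)
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, np.asarray(up, float))
    right /= np.linalg.norm(right)
    down = np.cross(fwd, right)
    # Rows are the camera's x (right), y (down), z (forward) axes in world coords.
    return np.stack([right, down, fwd])

def project(points, cam_pos, R, focal_mm, sensor_mm, resolution):
    """Pinhole projection of 3D world points to pixel coordinates
    (the 'internal' parameters: focal length, sensor size, resolution)."""
    pts_cam = (np.asarray(points, float) - np.asarray(cam_pos, float)) @ R.T
    z = pts_cam[:, 2]                      # depth along the optical axis
    x_mm = focal_mm * pts_cam[:, 0] / z    # image-plane coordinates in mm
    y_mm = focal_mm * pts_cam[:, 1] / z
    w, h = resolution
    sw, sh = sensor_mm
    u = (x_mm / sw + 0.5) * w              # mm -> pixels, origin at top-left
    v = (y_mm / sh + 0.5) * h
    return np.stack([u, v], axis=1), z

cam = (0.0, -10.0, 0.0)                    # camera 10 m in front of the face
R = look_at(cam, target=(0.0, 0.0, 0.0))   # looking at the rock-face origin
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5)]   # two fracture-trace endpoints
uv, depth = project(pts, cam, R, focal_mm=35.0,
                    sensor_mm=(36.0, 24.0), resolution=(1920, 1080))
print(uv.round(1))   # the face origin lands at the image centre (960, 540)
```

Projecting all fracture-trace segments this way, instead of intersecting the DFN with a plane, is what lets a "virtual photo" inherit camera-dependent effects such as perspective distortion that a flat slice cannot reproduce.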
Data Augmentation, Discrete Fracture Network, Open-Access Database, Rock Fracture, Spatial-temporal Prediction
Xie, Jiawei (8f5bdf89-fcac-4336-a371-9f138872a28b)
Chen, Baolin (963fd5b4-058f-46e2-aadd-4015ab1c45b9)
Huang, Jinsong (da153fad-3446-47fc-8b4a-5799e42fb59e)
Zhang, Yuting (821b7687-fe98-4525-b641-2ea503797319)
Zeng, Cheng (bb12ebfb-4c58-46c6-93fe-dc4b101cf5e9)
Xie, Jiawei, Chen, Baolin, Huang, Jinsong, Zhang, Yuting and Zeng, Cheng (2025) Advancing spatial-temporal rock fracture prediction with virtual camera-based data augmentation. Tunnelling and Underground Space Technology, 158, [106400]. (doi:10.1016/j.tust.2025.106400).
More information
Accepted/In Press date: 10 January 2025
e-pub ahead of print date: 17 January 2025
Additional Information:
Publisher Copyright:
© 2025 Elsevier Ltd
Identifiers
Local EPrints ID: 497770
URI: http://eprints.soton.ac.uk/id/eprint/497770
ISSN: 0886-7798
PURE UUID: 3fe35991-1daa-449c-9e9a-ec1ddceb794a
Catalogue record
Date deposited: 30 Jan 2025 18:00
Last modified: 14 May 2025 02:15