Deep optimisation: learning and searching in deep representations of combinatorial optimisation problems
Caldwell, Jamie, Robert (2022) Deep optimisation: learning and searching in deep representations of combinatorial optimisation problems. University of Southampton, Doctoral Thesis, 186pp.
Record type: Thesis (Doctoral)
Abstract
Evolutionary algorithms are a class of optimisation techniques that solve problems by emulating evolutionary processes (variation and selection) to search the solution space. In this thesis, we focus on the evolutionary process of Evolutionary Transitions in Individuality (ETI), in which evolutionary processes are scaled up via a multi-scale process whereby individuality (and hence variation and selection) is continually revised by forming associations between formerly independent entities. This thesis develops a novel Model-Building Optimisation Algorithm (MBOA) called Deep Optimisation (DO) that exploits deep learning methods to enable multi-scale optimisation. DO uses an autoencoder model to induce a multi-level representation of solutions. Variation and selection are then performed within the induced representations, allowing search to continue in a new, reorganised space. Using a class of configurable problems, we identify more precisely the distinct problem characteristics that DO can solve but other MBOAs cannot. Specifically, we observe a polynomial versus exponential scaling distinction, with DO being the only algorithm to exhibit polynomial scaling on all problems. We also demonstrate that some problem characteristics require a deep network in DO. Further, for the first time, we show that overlap differentiates the performance of current state-of-the-art MBOAs. DO is then applied to different optimisation problem domains to demonstrate its potential for exploiting unknown problem structure and for overcoming infeasible solution spaces. Here, DO shows impressive performance without using a domain-specific operator. This thesis establishes a connection between deep learning models and MBOAs, showing that results which outperform existing algorithms can be achieved by utilising the tools available in deep learning. This suggests numerous avenues for further investigation into transferring deep learning methods into the domain of MBOAs.
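As a rough illustration of the loop described in the abstract, the sketch below implements a minimal DO-style cycle in Python: a population of candidate solutions is selected on a toy modular fitness function, a small autoencoder is trained on the better solutions, and new candidates are generated by perturbing solutions in the learned (latent) representation rather than at the level of individual solution variables. The toy block fitness, the single-hidden-layer tied-weight autoencoder, and the latent-perturbation variation operator are all assumptions made for illustration; this is not the thesis's implementation, which, among other things, uses a deep multi-level representation rather than the single hidden layer used here.

```python
# Minimal sketch of a Deep Optimisation-style loop (illustrative only).
# Assumptions: a toy block-structured fitness function, a single-hidden-layer
# tied-weight autoencoder trained by gradient descent, and variation applied
# as a perturbation in the learned latent space.
import numpy as np

rng = np.random.default_rng(0)

def fitness(x, k=4):
    """Toy modular problem: each k-bit block scores best when all-ones or all-zeros."""
    ones = x.reshape(-1, k).sum(axis=1)
    return float(np.maximum(ones, k - ones).sum())

class TinyAutoencoder:
    """One hidden layer, tied weights, sigmoid activations, squared-error loss."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0.0, 0.1, (n_vis, n_hid))
        self.b_h = np.zeros(n_hid)
        self.b_v = np.zeros(n_vis)
        self.lr = lr

    def encode(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.W + self.b_h)))

    def decode(self, h):
        return 1.0 / (1.0 + np.exp(-(h @ self.W.T + self.b_v)))

    def train(self, X, epochs=300):
        for _ in range(epochs):
            H = self.encode(X)                       # hidden representation
            R = self.decode(H)                       # reconstruction
            dR = (R - X) * R * (1 - R)               # gradient at decoder pre-activation
            dH = (dR @ self.W) * H * (1 - H)         # gradient at encoder pre-activation
            self.W -= self.lr * (X.T @ dH + dR.T @ H) / len(X)
            self.b_h -= self.lr * dH.mean(axis=0)
            self.b_v -= self.lr * dR.mean(axis=0)

def latent_variation(ae, x):
    """Encode a solution, perturb one latent variable, decode and re-binarise."""
    h = ae.encode(x[None, :])[0]
    h[rng.integers(len(h))] = rng.random()
    return (ae.decode(h[None, :])[0] > 0.5).astype(float)

def deep_optimise(n=32, pop_size=60, generations=20):
    pop = rng.integers(0, 2, (pop_size, n)).astype(float)
    for _ in range(generations):
        # Selection: keep the better half of the population.
        fits = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(fits)[-pop_size // 2:]]
        # Model building: learn a compressed representation of good solutions.
        ae = TinyAutoencoder(n_vis=n, n_hid=n // 4)
        ae.train(elite)
        # Model-informed variation: search in the induced representation.
        children = np.array([latent_variation(ae, x) for x in elite])
        pop = np.vstack([elite, children])
    best = max(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, f = deep_optimise()
    print("best fitness:", f)
```

The point mirrored from the abstract is that variation acts on the learned representation (latent_variation) rather than directly on solution bits, so search is reorganised according to the structure the model has induced from previously selected solutions.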
Text: Final Thesis - Version of Record
Text: Permission to deposit thesis - Caldwell (1) - Version of Record (Restricted to Repository staff only)
More information
Published date: June 2022
Identifiers
Local EPrints ID: 458168
URI: http://eprints.soton.ac.uk/id/eprint/458168
PURE UUID: 7430621c-5cb5-4bb9-b026-f68bf6c71878
Catalogue record
Date deposited: 30 Jun 2022 16:35
Last modified: 17 Mar 2024 03:00
Contributors
Author: Jamie, Robert Caldwell
Thesis advisor: Richard Watson