Rethinking Deep Thinking: stable learning of algorithms using Lipschitz Constraints
Bear, Jay, Prugel-Bennett, Adam and Hare, Jonathon (2024) Rethinking Deep Thinking: stable learning of algorithms using Lipschitz Constraints. Neural Information Processing Systems, Vancouver Convention Center, Vancouver, Canada, 10-15 Dec 2024. 26 pp. (In Press)
Record type: Conference or Workshop Item (Paper)
Abstract
Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different-sized problems at inference time using recurrent computation and convolutions. However, they are often unstable during training, and have no guarantees of convergence/termination at the solution. This paper addresses the problem of instability by analyzing the growth in intermediate representations, allowing us to build models (referred to as Deep Thinking with Lipschitz Constraints (DT-L)) with far fewer parameters that provide more reliable solutions. Additionally, our DT-L formulation provides guarantees of convergence of the learned iterative procedure to a unique solution at inference time. We demonstrate DT-L is capable of robustly learning algorithms which extrapolate to harder problems than in the training set. We benchmark on the traveling salesperson problem to evaluate the capabilities of the modified system on an NP-hard problem where DT fails to learn.
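The convergence guarantee described in the abstract rests on the contraction-mapping (Banach fixed-point) principle: if each recurrent update is Lipschitz with constant below 1, repeated application converges to a unique fixed point from any starting state. A minimal NumPy sketch of that principle (not the authors' DT-L implementation; the layer size, the 0.9 spectral-norm bound, and the tanh nonlinearity are illustrative assumptions):

```python
import numpy as np

# A recurrent update h <- f(h) whose Lipschitz constant is forced below 1,
# so by the Banach fixed-point theorem iteration converges to a unique
# solution regardless of the initial state.

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))

# Rescale W so its spectral norm (the Lipschitz constant of the linear map)
# is 0.9 < 1; tanh is 1-Lipschitz, so the whole step is a contraction.
W *= 0.9 / np.linalg.norm(W, ord=2)
b = rng.standard_normal(8)

def step(h):
    return np.tanh(W @ h + b)

# Two arbitrary starting states are driven to the same fixed point:
# after n steps their distance shrinks by at least a factor of 0.9**n.
h1, h2 = rng.standard_normal(8), np.zeros(8)
for _ in range(200):
    h1, h2 = step(h1), step(h2)

assert np.allclose(h1, h2, atol=1e-6)  # unique fixed point reached
```

Without the rescaling step the iteration can diverge or oscillate, which mirrors the training instability of unconstrained DT networks that the paper sets out to fix.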
Text: RethinkingDeepThinking - Accepted Manuscript
More information
Accepted/In Press date: 25 September 2024
Venue - Dates: Neural Information Processing Systems, Vancouver Convention Center, Vancouver, Canada, 2024-12-10 - 2024-12-15
Keywords: machine learning, artificial intelligence, iterative algorithms, deep learning, deep thinking, Lipschitz continuity, traveling salesman problem, contraction mapping, dynamical systems
Identifiers
Local EPrints ID: 496130
URI: http://eprints.soton.ac.uk/id/eprint/496130
PURE UUID: 2b5fa5ba-9cd5-4699-a2cf-f99b7db1ba04
Catalogue record
Date deposited: 05 Dec 2024 17:30
Last modified: 06 Dec 2024 03:09
Contributors
Author: Jay Bear
Author: Adam Prugel-Bennett
Author: Jonathon Hare
Download statistics
Downloads from ePrints over the past year. Other digital versions may also be available to download e.g. from the publisher's website.