A stochastic gradient method with biased estimation for faster nonconvex optimization
Pages: 337-349
Bi, Jia (e07a78d1-62dd-4b1d-b223-4107aa3627c7)
Gunn, Steve R. (306af9b3-a7fa-4381-baf9-5d6a6ec89868)
Bi, Jia and Gunn, Steve R. (2019) A stochastic gradient method with biased estimation for faster nonconvex optimization. Nayak, A. and Sharma, A. (eds.) In PRICAI 2019: Trends in Artificial Intelligence. vol. 11671, Springer Cham. pp. 337-349. (doi:10.1007/978-3-030-29911-8_26).
Record type: Conference or Workshop Item (Paper)
Abstract
A number of optimization approaches have been proposed for optimizing nonconvex objectives (e.g. deep learning models), such as batch gradient descent, stochastic gradient descent and stochastic variance reduced gradient descent. Theory shows that these optimization methods converge when an unbiased gradient estimator is used. In practice, however, biased gradient estimation can allow more efficient convergence to the vicinity of a solution, since an unbiased approach is computationally more expensive. To produce fast convergence, these optimization strategies involve two trade-offs: between stochastic and batch estimation, and between biased and unbiased estimation. This paper proposes an integrated approach which can control the nature of the stochastic element in the optimizer and can balance the trade-off between biased and unbiased estimation by using a hyper-parameter. It is shown theoretically and experimentally that this hyper-parameter can be configured to provide an effective balance that improves the convergence rate.
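
The abstract describes the method only at a high level, so the sketch below is an illustrative assumption rather than the paper's algorithm: it shows an SVRG-style update in which a single hyper-parameter theta scales the snapshot full-gradient term, recovering the standard unbiased SVRG estimator at theta = 1 and giving a biased estimator for theta < 1. The toy least-squares objective and the names biased_svrg, grad_i and full_grad are hypothetical, chosen only to keep the example short and runnable.

```python
# Illustrative sketch only (see assumptions above): an SVRG-style optimizer in
# which the hyper-parameter `theta` scales the snapshot full-gradient term.
# theta = 1.0 gives the standard unbiased SVRG estimator; theta < 1.0 gives a
# biased estimator whose expectation is grad f(w) - (1 - theta) * grad f(w_snap).
import numpy as np


def grad_i(w, X, y, i):
    # Gradient of the i-th squared-error term of a toy least-squares objective.
    xi = X[i]
    return (xi @ w - y[i]) * xi


def full_grad(w, X, y):
    # Full-batch gradient of the toy objective.
    return X.T @ (X @ w - y) / len(y)


def biased_svrg(X, y, theta=0.9, lr=0.05, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()                # snapshot point for this epoch
        mu = full_grad(w_snap, X, y)     # full gradient at the snapshot
        for _ in range(n):               # inner stochastic loop
            i = rng.integers(n)
            # Gradient estimator: unbiased SVRG when theta == 1, biased otherwise.
            g = grad_i(w, X, y, i) - grad_i(w_snap, X, y, i) + theta * mu
            w -= lr * g
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.1 * rng.normal(size=200)
    print("theta=1.0 error:", np.linalg.norm(biased_svrg(X, y, theta=1.0) - w_true))
    print("theta=0.9 error:", np.linalg.norm(biased_svrg(X, y, theta=0.9) - w_true))
```

With theta = 1.0 the run behaves as standard SVRG, so comparing the two printed errors gives a quick feel for how a single scalar can move the estimator between the unbiased and biased regimes on this toy problem.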
Text: IJCAI19 - Accepted Manuscript
More information
Submitted date: 8 December 2018
e-pub ahead of print date: 23 August 2019
Published date: 2019
Venue - Dates:
2019 International Joint Conference on Artificial Intelligence, Macao, China, 2019-08-10 - 2019-08-16
Keywords:
Deep learning, Optimisation
Identifiers
Local EPrints ID: 430899
URI: http://eprints.soton.ac.uk/id/eprint/430899
PURE UUID: 9e79ca99-ec59-4ff3-aced-2458b9f8d701
Catalogue record
Date deposited: 17 May 2019 16:30
Last modified: 16 Mar 2024 08:20
Contributors
Author: Jia Bi
Author: Steve R. Gunn
Editor: A. Nayak
Editor: A. Sharma