New Douglas-Rachford splitting algorithms for generalized DC programming with applications in machine learning
Yao, Yonghong, Jolaoso, Lateef O., Shehu, Yekini and Yao, Jen Chih
(2025)
New Douglas-Rachford splitting algorithms for generalized DC programming with applications in machine learning.
Journal of Scientific Computing, 103 (3), [88].
(doi:10.1007/s10915-025-02900-6).
Abstract
In this work, we propose new Douglas-Rachford splitting algorithms for solving a class of generalized DC (difference of convex functions) programming problems in real Hilbert spaces. The proposed methods leverage the proximal properties of the nonsmooth component together with an accelerating control parameter that improves their convergence rate. We prove convergence of the methods to critical points of the underlying nonconvex problem under reasonable conditions. We evaluate their performance and effectiveness on three practical examples in machine learning. The results demonstrate that our methods solve these problems efficiently and outperform state-of-the-art techniques such as the DCA (DC Algorithm) and ADMM.
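This record does not reproduce the paper's algorithms, so as orientation only, below is a minimal sketch of the classical Douglas-Rachford splitting iteration on a standard convex toy problem (least squares with an ℓ1 penalty). It illustrates the proximal-operator structure the abstract refers to; the step size gamma and all function names are illustrative assumptions, and the paper's generalized-DC variants and accelerating control parameter are not shown here.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t*||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_least_squares(v, t, AtA, Atb):
    # Proximal operator of (t/2)*||Ax - b||^2: solve (I + t*A^T A) x = v + t*A^T b.
    n = AtA.shape[0]
    return np.linalg.solve(np.eye(n) + t * AtA, v + t * Atb)

def douglas_rachford(A, b, lam=0.1, gamma=1.0, iters=500):
    # Classical Douglas-Rachford splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    # z is the governing sequence; the x iterates converge to a minimizer.
    AtA, Atb = A.T @ A, A.T @ b
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_least_squares(z, gamma, AtA, Atb)  # prox of the smooth term
        y = prox_l1(2.0 * x - z, gamma * lam)       # prox of the l1 term at the reflection
        z = z + (y - x)                             # DR update with relaxation parameter 1
    return x

# Toy usage: recover a sparse vector from noiseless random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
x_hat = douglas_rachford(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```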
Text
2404.14800v1 - Accepted Manuscript
Restricted to Repository staff only until 30 April 2026.
More information
Accepted/In Press date: 23 April 2024
Published date: 30 April 2025
Additional Information:
Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
Keywords:
DC programming, Douglas-Rachford splitting algorithm, Machine learning, Nonconvex optimization
Identifiers
Local EPrints ID: 502654
URI: http://eprints.soton.ac.uk/id/eprint/502654
ISSN: 0885-7474
PURE UUID: b52ac305-8cc4-485c-9788-6a873022f2ad
Catalogue record
Date deposited: 03 Jul 2025 16:38
Last modified: 04 Jul 2025 02:16
Contributors
Author:
Yonghong Yao
Author:
Lateef O. Jolaoso
Author:
Yekini Shehu
Author:
Jen Chih Yao