Building deep networks on Grassmann manifolds
Huang, Zhiwu, Wu, Jiqing and Van Gool, Luc (2018) Building deep networks on Grassmann manifolds. In McIlraith, Sheila A. and Weinberger, Kilian Q. (eds.) AAAI'18/IAAI'18/EAAI'18: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence. AAAI Press, pp. 3279-3286. (doi:10.5555/3504035.3504436)
Record type: Conference or Workshop Item (Paper)
Abstract
Learning representations on Grassmann manifolds is popular in many visual recognition tasks. To enable deep learning on Grassmann manifolds, this paper proposes a deep network architecture that generalizes the Euclidean network paradigm to Grassmann manifolds. In particular, we design full rank mapping layers to transform input Grassmannian data into more desirable representations, exploit re-orthonormalization layers to normalize the resulting matrices, study projection pooling layers to reduce model complexity in the Grassmannian context, and devise projection mapping layers that respect Grassmannian geometry while yielding Euclidean forms for regular output layers. To train the Grassmann networks, we exploit a stochastic gradient descent setting on the manifolds of the connection weights, and study a matrix generalization of backpropagation to update the structured data. Evaluations on three visual recognition tasks show that our Grassmann networks have clear advantages over existing Grassmann learning methods, and achieve results comparable with state-of-the-art approaches.
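The abstract names four layer types: full rank mapping, re-orthonormalization, projection pooling, and projection mapping. As a rough illustration only, the NumPy sketch below shows how such Grassmannian layers could compose on a single input subspace; the function names, the chosen dimensions, and the 2x2 mean pooling are assumptions made for this sketch, not the authors' reference implementation.

```python
import numpy as np

def frmap(X, W):
    """FRMap-style layer (sketch): multiply an orthonormal basis X (d x q) of a
    subspace by a full-rank weight W (d' x d). The result is a d' x q matrix
    that is generally no longer orthonormal."""
    return W @ X

def reorth(Y):
    """ReOrth-style layer (sketch): re-orthonormalize via the thin QR
    decomposition, keeping only the Q factor so the output again represents a
    point on a Grassmann manifold."""
    Q, _ = np.linalg.qr(Y)
    return Q

def projmap(Q):
    """ProjMap-style layer (sketch): map the subspace basis Q to its projection
    matrix Q Q^T, a symmetric, Euclidean-friendly representation."""
    return Q @ Q.T

def projpool(P, k=2):
    """Projection pooling (sketch): k x k mean pooling over the projection
    matrix to shrink its size; one plausible pooling, shown for illustration."""
    d = P.shape[0] - P.shape[0] % k          # drop a ragged border if any
    P = P[:d, :d].reshape(d // k, k, d // k, k)
    return P.mean(axis=(1, 3))

# Toy forward pass on a random 10-dimensional, 3-frame input subspace.
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((10, 3)))   # input Grassmann point
W = rng.standard_normal((8, 10))                    # hypothetical FRMap weight
P = projpool(projmap(reorth(frmap(X, W))))
print(P.shape)   # (4, 4) Euclidean feature for regular output layers
```

The QR-based re-orthonormalization is what keeps intermediate outputs on the manifold after each full rank mapping; in the paper the connection weights themselves are additionally constrained and updated with stochastic gradient descent on a matrix manifold, which this toy forward pass does not attempt to reproduce.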
More information
Published date: 2 February 2018
Identifiers
Local EPrints ID: 501233
URI: http://eprints.soton.ac.uk/id/eprint/501233
PURE UUID: 597e1fc4-cb24-48ed-b340-0063ee8d087e
Catalogue record
Date deposited: 27 May 2025 18:03
Last modified: 28 May 2025 02:12
Contributors
Author: Zhiwu Huang
Author: Jiqing Wu
Author: Luc Van Gool
Editor: Sheila A. McIlraith
Editor: Kilian Q. Weinberger