Two of our papers were accepted to NeurIPS as spotlights. Congrats to Bobak, Jason, and Lukas!

B Kiani, J Wang, M Weber: Hardness of Learning Neural Networks under the Manifold Hypothesis 

We study the complexity of learning neural networks under the manifold hypothesis and show that learning over input manifolds of bounded curvature is hard in general, while additional volume assumptions restore learnability.

B Kiani, L Fesser, M Weber: Unitary Convolutions for Learning on Graphs and Groups 

Group-convolutional networks have shown great success on symmetric domains, but can suffer from instabilities as their depth increases, e.g., over-smoothing in GNNs. We describe unitary convolutions for learning on graphs and groups that allow for deeper networks with less instability; a minimal sketch of the underlying idea follows below.
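To give a flavor of why unitarity helps with depth: a unitary operator preserves norms, so stacking many such layers cannot collapse node features the way repeated smoothing does. Below is a minimal NumPy sketch of that norm-preservation property, using the fact that for a symmetric adjacency matrix A, exp(iA) is unitary. This parameterization is an illustrative simplification, not the paper's exact (learnable) construction.

```python
import numpy as np
from scipy.linalg import expm

def unitary_graph_conv(A, X):
    # For an undirected graph, A is symmetric, so 1j * A is
    # skew-Hermitian and U = expm(1j * A) is unitary.
    U = expm(1j * A)          # unitary: U @ U.conj().T == I
    return U @ X              # norm-preserving feature propagation

# Toy example: 4-node cycle graph with random node features.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
H = np.random.randn(4, 8).astype(complex)
norm_in = np.linalg.norm(H)
for _ in range(100):          # a very deep stack of identical layers
    H = unitary_graph_conv(A, H)
print(norm_in, np.linalg.norm(H))  # equal up to floating-point error
```

After 100 layers the feature norm is unchanged, whereas repeated averaging with a stochastic propagation matrix would have driven all node features toward a common value.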

Andrew will present work on Riemannian Optimization at the OPT2024 Workshop, co-located with NeurIPS. Congrats, Andrew!

A Cheng, M Weber: Structured Regularization for Constrained Optimization on the SPD Manifold 

We introduce structured regularizers for matrix-valued optimization (specifically, over SPD matrices), leveraging symmetric gauge functions. The resulting unconstrained reformulation preserves desirable properties such as convexity and admits fast solvers with global optimality certificates.
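For readers unfamiliar with the key ingredient: a symmetric gauge function is a norm on R^n that is invariant under permutations and sign flips of its arguments, and by von Neumann's theorem, applying one to the spectrum of a symmetric matrix yields a unitarily invariant matrix regularizer. Here is a minimal sketch with a hypothetical l1 gauge (the nuclear norm, which equals the trace on SPD matrices); the paper's actual regularizers and reformulation are more structured.

```python
import numpy as np

def spectral_regularizer(X, phi=lambda lam: np.sum(np.abs(lam))):
    # phi is a symmetric gauge function applied to the spectrum of the
    # symmetric matrix X. The default l1 gauge gives the nuclear norm,
    # which on SPD matrices equals the trace. This is an illustrative
    # choice, not necessarily the regularizer used in the paper.
    lam = np.linalg.eigvalsh(X)   # real eigenvalues of symmetric X
    return phi(lam)

# Sketch of the unconstrained reformulation: replace a constrained
# problem  min_{X in SPD cone} f(X)  with the penalized problem
#          min_X f(X) + mu * spectral_regularizer(X).
X = np.array([[2.0, 0.3],
              [0.3, 1.5]])
print(spectral_regularizer(X))    # == trace(X) = 3.5 for the l1 gauge
```

Because the regularizer is a norm of the spectrum, convexity of the penalized objective is preserved whenever f is convex, which is what makes global optimality certificates possible.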