INCHER & Zell Research Colloquium: "The inductive bias of deep neural networks towards feature learning"

Lecture by Dr. Jakob Heiss

Abstract: Machine Learning (ML) models rely on the assumption that similar inputs ($x$) correspond to similar labels ($y$) to make predictions on unseen data. Crucially, the definition of "similar" is dependent on the feature representation. The hidden layers of Deep Neural Networks (DNNs) often learn feature transformations, $h(x)$, that yield a practically meaningful notion of distance between inputs. For example, Large Language Models (LLMs) map the same sentence in different languages to similar representations in their embedding space. This ability to learn useful features often enables transfer learning and multi-task learning. My talk will provide a theory-inspired intuition for why this feature learning happens and, critically, when it provably does not.
Furthermore, an application of deep learning to market design will be presented, which inspired the Machine Learning-powered Course Match (MLCM): an ML-based mechanism for eliciting student preferences and assigning university courses.
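The abstract's point that "similar" only becomes meaningful in a learned feature space can be sketched with a toy example. In the raw one-hot input space every token is equidistant from every other, while a feature map $h(x)$ can place translations of the same word close together. The embedding vectors below are hand-picked stand-ins for a trained hidden layer, not output of an actual model:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Raw one-hot inputs: every distinct token is orthogonal to every other,
# so raw-space similarity carries no semantic information.
vocab = ["hello", "hallo", "tree"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

# Hypothetical learned feature map h(x): the English "hello" and the German
# "hallo" land close together, while "tree" lands elsewhere.
embed = {
    "hello": np.array([0.90, 0.10]),
    "hallo": np.array([0.85, 0.15]),
    "tree":  np.array([-0.20, 0.95]),
}

print(cosine(one_hot["hello"], one_hot["hallo"]))  # 0.0: raw space is uninformative
print(cosine(embed["hello"], embed["hallo"]))      # near 1: synonyms cluster
print(cosine(embed["hello"], embed["tree"]))       # low: unrelated words separate
```

In this toy picture, a nearest-neighbour rule in the embedding space generalizes across languages even though it would fail completely on the one-hot inputs, which is the sense in which learned features enable transfer learning.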

-------------------------------------

The INCHER lectures are hybrid events.
If you wish to participate via Zoom, please register at koch[at]incher.uni-kassel[dot]de

Venue: International House, Mönchebergstrasse 11a, University of Kassel
34125 Kassel