--

In principle, yes (a shallow net is a universal approximator), but the question is why one would do it: learning invariance from data is hard. If you know a priori that a certain invariance makes sense, it is better to bake it into the architecture.
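As a hypothetical toy illustration of what "baking invariance in" means, the sketch below builds a DeepSets-style network whose sum pooling makes it permutation invariant by construction; the weights and shapes are made up for the example, not taken from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 8))   # per-element encoder weights (toy sizes)
W2 = rng.normal(size=(8, 1))   # readout weights

def invariant_net(X):
    """X: (n_elements, 3) array, treated as a set of feature vectors."""
    h = np.tanh(X @ W1)        # encode each element independently
    pooled = h.sum(axis=0)     # sum pooling: invariant to element order
    return float(pooled @ W2)  # readout on the pooled representation

X = rng.normal(size=(5, 3))
perm = rng.permutation(5)
# Because invariance is built into the architecture, no training is
# needed for the output to be identical under any reordering of X.
assert np.isclose(invariant_net(X), invariant_net(X[perm]))
```

A generic shallow net fed the flattened input would have to see many permuted copies of the data to approximate the same behavior, which is exactly the learning burden the architectural prior removes.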

--

Written by Michael Bronstein

DeepMind Professor of AI @Oxford. Serial startupper. ML for graphs, biochemistry, drug design, and animal communication.
