May 19, 2021
In principle, yes (a shallow net being a universal approximator), but the question is why do it? Learning invariance is hard; if you know a priori that a certain invariance makes sense, it's better to bake it into the architecture.
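A minimal sketch of the "bake it in" idea (my illustration, not from the thread): a Deep Sets-style encoder that is permutation-invariant by construction, because a symmetric sum over the set axis makes the output independent of element order. The class name, dimensions, and helper structure are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    """Permutation-invariant set encoder: the invariance is baked in
    by the symmetric sum pooling, not learned from data."""
    def __init__(self, in_dim=3, hidden=64, out_dim=1):
        super().__init__()
        # phi encodes each set element independently
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        # rho maps the pooled representation to the output
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x):          # x: (batch, set_size, in_dim)
        h = self.phi(x)            # per-element encoding
        pooled = h.sum(dim=1)      # symmetric pooling => permutation invariance
        return self.rho(pooled)

# Any permutation of the set elements yields the same output.
model = DeepSets()
x = torch.randn(2, 5, 3)
perm = x[:, torch.randperm(5), :]
assert torch.allclose(model(x), model(perm), atol=1e-5)
```

A shallow MLP on the flattened set could in principle approximate the same function, but it would have to learn the invariance from data, which is exactly the hard part the reply points to.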