2019-06-21
Not really a paper, but more like an opinion piece?
Talks about past attempts at accelerating neural networks (as in, before they became wildly popular), analyzes the reasons behind their popularization (improved methods, larger datasets, cheap compute in the form of GPGPUs, open-source libraries), and predicts the future of hardware acceleration for deep learning based on current trends.
These are all things I’m not that familiar with, and the author claims these elements will shape the architecture of future DL systems, and therefore the hardware. I should look into the references and do some research. Saving power by exploiting sparse activations sounds cool.
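A minimal NumPy sketch of the sparse-activation idea as I understand it (my own illustration, not from the paper; the shapes and numbers are made up): with ReLU activations, a large fraction of values are exactly zero, so sparsity-aware hardware can skip the corresponding multiply-accumulates entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy post-ReLU activation matrix: roughly half the entries are zero.
acts = np.maximum(rng.standard_normal((64, 128)), 0.0)
weights = rng.standard_normal((128, 32))

# A dense matmul issues 64 * 128 * 32 multiply-accumulates regardless of content.
dense_macs = acts.shape[0] * acts.shape[1] * weights.shape[1]

# A sparsity-aware scheme only issues MACs for nonzero activations.
nonzero = np.count_nonzero(acts)
sparse_macs = nonzero * weights.shape[1]

print(f"activation sparsity: {1 - nonzero / acts.size:.0%}")
print(f"MACs skipped:        {1 - sparse_macs / dense_macs:.0%}")
```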
The usual talk about how supervised and reinforcement learning are inefficient at extracting information from data, and about how “self-supervised learning” is the future. I remember hearing something similar from the author in a talk from several years ago (here’s a quick summary).
However, chances are that the bulk of the computation in future DL systems will still be convolutions.
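To make that concrete for myself, a back-of-the-envelope MAC count for a single conv layer (my own numbers and helper, not the paper’s):

```python
# Multiply-accumulates for one 2D convolution layer (valid padding):
# out_h * out_w * out_channels * (kernel_h * kernel_w * in_channels)
def conv2d_macs(h, w, c_in, c_out, k, stride=1):
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    return out_h * out_w * c_out * (k * k * c_in)

# e.g. one mid-network layer of a typical CNN: 56x56x64 input, 3x3 kernel, 64 output channels
print(f"{conv2d_macs(56, 56, 64, 64, 3):,} MACs")  # ~107 million, for just one layer
```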
2019-06-19 - Malfunctional Programming index