Aim of the Workshop
To reach top-tier performance, deep learning models typically rely on a large number of parameters and operations, which require considerable power and memory. Several works have proposed to tackle this problem through parameter quantization, pruning, parameter clustering, decomposition of convolutions, or distillation. However, most of these works aim at accelerating only the inference process and disregard the training phase, even though in practice it is training that is by far the most computationally demanding. Despite recent efforts, making the training process efficient remains challenging.
In this workshop, we propose to focus on reducing the complexity of the training process. Our aim is to gather researchers interested in reducing the energy, time, or memory usage of deep learning models, enabling faster, cheaper, and greener prototyping and deployment. Because deep learning depends on large computational capacities, the outcomes of the workshop could benefit all who deploy these solutions, including those who are not hardware specialists. Moreover, it would help make deep learning more accessible to small businesses and small laboratories.
Check out the previous workshop edition at ICLR 2021.