Objectives:
The objective of this course is to understand the different architectural approaches to efficiently implementing training and inference of deep learning algorithms.
Artificial intelligence (AI), machine learning (ML) and deep neural networks (DNNs) are compute-intensive algorithms that demand high computing power. Deploying efficient deep learning algorithms in the cloud, in the fog, or at the edge requires different time/power trade-offs, which in turn affect the choice of computing architecture.