
From CNN to DNN Hardware Accelerators

About From CNN to DNN Hardware Accelerators

The past decade has witnessed the consolidation of Artificial Intelligence technology, thanks to the popularization of Machine Learning (ML) models. The technological boom of ML models began in 2012, when the world was stunned by the record-breaking classification performance achieved by combining an ML model with a high-performance graphics processing unit (GPU). Since then, ML models have received ever-increasing attention and have been applied in areas such as computer vision, virtual reality, voice assistants, chatbots, and self-driving vehicles. The most popular ML models are brain-inspired models such as Neural Networks (NNs), including Convolutional Neural Networks (CNNs) and, more recently, Deep Neural Networks (DNNs). These models loosely resemble the human brain: they process data through thousands of interconnected neurons whose weighted connections mimic synapses. In this growing environment, GPUs have become the de facto reference platform for both the training and inference phases of CNNs and DNNs, thanks to their high processing parallelism and memory bandwidth. However, GPUs are power-hungry architectures. To enable the deployment of CNN and DNN applications on energy-constrained devices (e.g., IoT devices), industry and academic research have moved towards hardware accelerators. Following the evolution of neural networks from CNNs to DNNs, this monograph sheds light on the impact of this architectural shift and discusses hardware accelerator trends in terms of design, exploration, simulation, and frameworks developed in both academia and industry.
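The neuron-and-synapse analogy in the blurb can be sketched as a minimal artificial neuron: a weighted sum of inputs (the "synapses") followed by a non-linear activation. This is an illustrative sketch only, not taken from the monograph; all values and names here are hypothetical.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: the building block repeated thousands
    of times per layer in a CNN or DNN."""
    # Weighted sum models synaptic strengths; tanh is one common
    # non-linear activation (others include ReLU and sigmoid).
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(z)

# Hypothetical inputs and weights, for illustration only.
activation = neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], 0.1)
```

A hardware accelerator's core job is to evaluate enormous numbers of these multiply-accumulate operations in parallel, which is why GPUs, and later dedicated accelerators, dominate this workload.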

  • Language: English
  • ISBN: 9781638281627
  • Binding: Paperback
  • Pages: 88
  • Published: March 6, 2023
  • Dimensions: 156x5x234 mm
  • Weight: 149 g


