
Last Update: 2023-12-17

Ali Azarpeyvand

Mohsen NourAzar, Vahid Rashtchi, Ali Azarpeyvand, and Farshad Merrikh-Bayat
Code Acceleration Using Memristor-Based Approximate Matrix Multiplier: Application to Convolutional Neural Networks  
Abstract


In this paper, we demonstrate the feasibility of building a memristor-based approximate accelerator that operates in cooperation with general-purpose x86 processors. First, an integrated full-system simulator is developed for simultaneous simulation of any multi-crossbar architecture as an accelerator for x86 processors; this is achieved by coupling the cycle-accurate MARSSx86 processor simulator with the Ngspice mixed-level/mixed-signal circuit simulator. Then, a novel mixed-signal memristor-based architecture is presented for multiplying floating-point signed complex numbers. The presented multiplier is extended to accelerate convolutional neural networks and, finally, is tightly integrated with the pipeline of a generic x86 processor. To validate the accelerator, it is first used to multiply matrices of varying size and distribution. It is then used to accelerate tiny-dnn, an open-source C++ implementation of deep learning neural networks. The memristor-based accelerator provides more than 100x speedup and energy saving for a 64x64 matrix-matrix multiplication, with an accuracy of 90%. Using the accelerated tiny-dnn for MNIST database classification, more than 10x speedup and energy saving are achieved, along with 95.51% pattern recognition accuracy.
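The core idea of the accelerator is that a memristor crossbar performs matrix multiplication in the analog domain, where matrix entries are stored as device conductances with limited precision, so the result is approximate. The following is a minimal sketch of that trade-off, not the authors' implementation: it models the crossbar's finite conductance states by quantizing the weight matrix (the `levels` and `noise_std` parameters are illustrative assumptions), adds analog read noise, and measures the relative error of a 64x64 product against the exact result.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_matmul(A, B, levels=64, noise_std=0.01):
    """Approximate A @ B as a quantized-conductance crossbar might.

    levels    -- assumed number of distinct conductance states per device
    noise_std -- assumed relative analog readout noise
    """
    scale = np.abs(A).max() or 1.0
    # Map each weight onto a finite grid of conductance levels.
    A_q = np.round(A / scale * (levels - 1)) / (levels - 1) * scale
    out = A_q @ B
    # Add readout noise proportional to the largest output magnitude.
    return out + noise_std * np.abs(out).max() * rng.standard_normal(out.shape)

A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))
exact = A @ B
approx = crossbar_matmul(A, B)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.3f}")
```

Under this toy model the approximation error stays at the few-percent level, which is the regime in which trading exactness for the speed and energy gains reported in the abstract becomes attractive.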


Copyright © 2024, University of Zanjan, Zanjan, Iran
master[at]znu.ac.ir