Preprint / Version 1

Hardware-Efficient Neural Network Implementation: A Power-Accuracy Trade-off Analysis for Quantized Classification Neural Network


DOI:

https://doi.org/10.31224/5974

Keywords:

neural networks, ASIC, power optimization

Abstract

This paper presents a comprehensive analysis of power-accuracy trade-offs in quantized neural network implementations for Application-Specific Integrated Circuit (ASIC) design. A three-layer feedforward neural network trained on the Wisconsin Breast Cancer dataset is implemented using a complete design flow, from PyTorch model training to ASIC synthesis. The study evaluates 14-bit, 16-bit, and 18-bit uniform post-training quantization schemes and their impact on classification accuracy, power consumption, and area utilization. A lookup-table (LUT) based sigmoid activation function is employed to reduce computational complexity in the hardware implementation. The design is synthesized with the Cadence Stratus High-Level Synthesis (HLS) tool, targeting a 500 MHz operating frequency in GPDK 45 nm technology. Results demonstrate that 18-bit quantization achieves 95.6% accuracy at 2.44 mW power consumption and 183,963 gate equivalents (GE) of area, representing an optimal balance between computational precision and hardware efficiency. The 16-bit implementation offers a reasonable compromise at 89.4% accuracy, 1.819 mW, and 162,379 GE, while the 14-bit version suffers significant accuracy degradation to 64.9% at 1.924 mW.
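The two techniques named in the abstract, uniform post-training quantization and a LUT-based sigmoid, can be sketched as follows. This is a minimal illustrative model, not the paper's implementation: the integer/fraction bit split, table size, and input clamp range are assumptions, since the abstract only specifies total widths of 14, 16, and 18 bits.

```python
import numpy as np

def quantize_uniform(x, total_bits, frac_bits):
    """Uniform fixed-point quantization (signed two's complement).
    The split between integer and fractional bits is an assumption;
    the paper states only the total widths (14/16/18 bits)."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(np.asarray(x) * scale), qmin, qmax)
    return q / scale

def make_sigmoid_lut(n_entries=256, x_min=-8.0, x_max=8.0):
    """Precompute sigmoid values over a clamped input range.
    Table size and range are illustrative assumptions."""
    xs = np.linspace(x_min, x_max, n_entries)
    return 1.0 / (1.0 + np.exp(-xs))

def sigmoid_lut(x, lut, x_min=-8.0, x_max=8.0):
    """Approximate sigmoid via nearest-entry table lookup,
    replacing the exponential with a memory read, as in hardware."""
    pos = (np.asarray(x) - x_min) / (x_max - x_min) * (len(lut) - 1)
    idx = np.clip(pos.astype(int), 0, len(lut) - 1)
    return lut[idx]
```

In this sketch, quantization error shrinks with `frac_bits` (step size 2^-frac_bits), mirroring the accuracy gap the abstract reports between the 14-bit and 18-bit designs; the LUT trades a small approximation error for the removal of the exponential from the datapath.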


Posted

2025-12-11