arxiv:2510.17392

ReLACE: A Resource-Efficient Low-Latency Cortical Acceleration Engine

Published on Oct 20, 2025

Abstract

A Cortical Neural Pool architecture with a high-speed, resource-efficient CORDIC-based Hodgkin–Huxley neuron model demonstrates improved performance and efficiency for edge AI applications.

AI-generated summary

We present a Cortical Neural Pool (CNP) architecture featuring a high-speed, resource-efficient CORDIC-based Hodgkin–Huxley (RCHH) neuron model. Unlike shared CORDIC-based DNN approaches, the proposed neuron leverages modular and performance-optimised CORDIC stages with a latency-area trade-off. The FPGA implementation of the RCHH neuron shows a 24.5% LUT reduction and 35.2% higher speed than state-of-the-art (SoTA) designs, with 70% better normalised root mean square error (NRMSE). Furthermore, the CNP exhibits 2.85× higher throughput (12.69 GOPS) than a functionally equivalent CORDIC-based DNN engine, with only a 0.35% accuracy drop relative to the DNN counterpart on the MNIST dataset. Overall, the results indicate that the design enables biologically accurate, low-resource spiking neural network implementations for resource-constrained edge AI applications.
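
For readers outside the neuromorphic-hardware niche, the standard textbook Hodgkin–Huxley membrane equation that such a neuron model reproduces, and one common range-normalised form of the NRMSE metric quoted above, are shown below. These definitions are general background, not taken from the paper; its exact normalisation may differ.

```latex
% Classical Hodgkin-Huxley membrane and gating dynamics (textbook form)
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
  - \bar{g}_{\mathrm{Na}}\, m^{3} h\, (V - E_{\mathrm{Na}})
  - \bar{g}_{\mathrm{K}}\,  n^{4}\,   (V - E_{\mathrm{K}})
  - g_{L}\, (V - E_{L}),
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x, \quad x \in \{m, h, n\}

% Range-normalised RMSE between a hardware trace \hat{V} and a reference V
\mathrm{NRMSE} = \frac{\sqrt{\tfrac{1}{N}\sum_{i=1}^{N}\bigl(\hat{V}_i - V_i\bigr)^{2}}}{V_{\max} - V_{\min}}
```

The gating rates α_x and β_x are built from exponentials in V, which is exactly the kind of term a CORDIC unit can evaluate with shift-and-add hardware instead of multipliers.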

Community

We present a Cortical Neural Pool (CNP) architecture featuring a high-speed, resource-efficient CORDIC-based Hodgkin–Huxley (RCHH) neuron model. Unlike shared CORDIC-based DNN approaches, the proposed neuron leverages modular and performance-optimised CORDIC stages with a latency-area trade-off. We introduce a novel Constraint-Aware Modular Parallelism (CAMP) scheme with Precision & Stability handling to maximise speedup and hardware utilisation through hardware-software co-design. The FPGA implementation of the RCHH neuron shows a 24.5% LUT reduction and 35.2% higher speed than SoTA designs, with 70% better normalised root mean square error (NRMSE). Furthermore, the CNP exhibits 2.85× higher throughput (12.69 GOPS) than a functionally equivalent CORDIC-based DNN engine, with only a 0.35% accuracy drop relative to the DNN counterpart on the MNIST dataset. Overall, the results indicate that the design enables biologically accurate, low-resource spiking neural network implementations for resource-constrained edge AI applications. Code for reproducing the results is publicly available at https://github.com/mukullokhande99/CNP RCHH, facilitating rapid integration and further development by researchers.
https://arxiv.org/abs/2510.17392
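
For readers unfamiliar with CORDIC itself, the sketch below shows the standard hyperbolic CORDIC recurrence in rotation mode, which evaluates cosh, sinh and hence exp using only shifts, adds and a small constant table. This is a generic floating-point illustration of the CORDIC principle behind such neuron pipelines, not the paper's fixed-point RCHH design; the function name `cordic_exp` and its parameters are placeholders chosen for the example.

```python
import math

def cordic_exp(t, n_iter=16):
    """Approximate exp(t) via hyperbolic CORDIC in rotation mode.

    Only shifts, adds and a table of atanh(2**-i) constants are needed,
    which is why CORDIC suits FPGA evaluation of exponentials such as
    those in Hodgkin-Huxley gating-rate functions.
    Converges roughly for |t| < 1.11; argument reduction is omitted.
    """
    # Hyperbolic CORDIC must repeat iterations 4, 13, 40, ... to converge.
    indices, i, repeat = [], 1, 4
    while len(indices) < n_iter:
        indices.append(i)
        if i == repeat and len(indices) < n_iter:
            indices.append(i)               # repeated iteration
            repeat = 3 * repeat + 1
        i += 1

    angles = [math.atanh(2.0 ** -k) for k in indices]   # elementary angles
    gain = math.prod(math.sqrt(1.0 - 2.0 ** (-2 * k)) for k in indices)

    x, y, z = 1.0 / gain, 0.0, t            # pre-scale to cancel the CORDIC gain
    for k, a in zip(indices, angles):
        d = 1.0 if z >= 0.0 else -1.0       # steer the residual angle toward zero
        x, y, z = (x + d * y * 2.0 ** -k,
                   y + d * x * 2.0 ** -k,
                   z - d * a)

    return x + y                            # x ~ cosh(t), y ~ sinh(t)

if __name__ == "__main__":
    for t in (-1.0, -0.5, 0.0, 0.5, 1.0):
        print(f"t={t:+.1f}  cordic={cordic_exp(t):.6f}  exp={math.exp(t):.6f}")
```

In a fixed-point FPGA implementation the same recurrence reduces to pure shift-and-add logic plus a small ROM of angles, which is the basic resource argument behind CORDIC-based neuron hardware.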
