Volume 17, Issue 2, pp. 426-435, 2025 | Full Length Article
Ravi Shankar P.¹*, S. Balaji², Gokul C.³, K. Nagarajan⁴, A. Arulkumar⁵, S. Venkatesh⁶
DOI: https://doi.org/10.54216/JISIoT.170228
The rapid proliferation of edge-AI systems in IoT, autonomous robotics, and biomedical monitoring demands ultra-low-power, latency-aware intelligence that conventional deep neural networks struggle to provide due to heavy computation and memory overheads. Neuromorphic computing offers a promising biologically inspired alternative by processing information through sparse spiking events, enabling energy-efficient on-device learning and inference. This paper presents a neuromorphic VLSI accelerator based on a hybrid spiking neural architecture that combines Leaky Integrate-and-Fire (LIF) neurons, adaptive-threshold spiking units, and synaptic plasticity circuits to support both supervised and unsupervised learning modes at the edge. A hierarchical crossbar-memory topology integrated with non-volatile memristive synapses provides dense weight storage and real-time synaptic updates, reducing off-chip memory access by 78%. A pipelined event-driven computation engine and a clock-gated spike scheduler minimize dynamic switching, achieving a 61% reduction in power and a 2.4× throughput improvement over conventional CMOS DNN accelerators. The proposed system performs dynamic visual-feature encoding, spike-based temporal fusion, and on-chip learning for anomaly- and object-detection tasks in low-power sensor nodes. Fabricated in 28-nm CMOS, the prototype achieves 0.29 mW power consumption, 0.42 pJ/spike energy, and 94.3% inference accuracy, outperforming state-of-the-art neuromorphic platforms. These results demonstrate that hybrid spiking architectures integrated with VLSI-efficient plasticity circuits can deliver high-accuracy, self-adaptive AI within stringent edge constraints, enabling next-generation smart sensing and autonomous micro-robotic intelligence.
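The core building block described above is the LIF neuron with an adaptive firing threshold, which keeps spike activity sparse and thus limits dynamic switching. The Python sketch below is a minimal, illustrative model of that dynamic, assuming simple discretized leak-and-integrate updates; the function name, parameters, and values are hypothetical and are not taken from the fabricated 28-nm design.

```python
import numpy as np

def lif_adaptive_threshold(input_current, dt=1e-3, tau_m=20e-3,
                           v_rest=0.0, v_reset=0.0,
                           theta0=1.0, theta_inc=0.2, tau_theta=100e-3):
    """Leaky integrate-and-fire neuron with an adaptive threshold.

    Illustrative only: parameter names and values are hypothetical and
    not taken from the paper's hardware implementation.
    """
    v = v_rest          # membrane potential
    theta = theta0      # firing threshold (adapts upward with activity)
    spikes = np.zeros(len(input_current), dtype=np.int8)
    for t, i_in in enumerate(input_current):
        # Discretized leaky integration of the input current
        v += (dt / tau_m) * (-(v - v_rest) + i_in)
        # Threshold relaxes back toward its baseline between spikes
        theta += (dt / tau_theta) * (theta0 - theta)
        if v >= theta:
            spikes[t] = 1
            v = v_reset            # reset membrane after a spike
            theta += theta_inc     # raise threshold: sparser, adaptive firing
    return spikes

# A constant drive yields a decelerating spike train because the threshold
# climbs with activity; this sparsity is what an event-driven accelerator
# exploits to cut dynamic power.
drive = np.full(500, 1.5)
print("spikes emitted:", int(lif_adaptive_threshold(drive).sum()))
```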
Neuromorphic VLSI, Edge AI, Hybrid Spiking Neural Networks, LIF Neurons, Memristive Synapses, On-Chip Learning, Event-Driven Processing, Low-Power Accelerator, Spike-Based Computation, Edge-Aware Intelligence, Adaptive Threshold Neurons, Crossbar Memory Architecture, IoT Sensing, Bio-Inspired Computing, Spiking Plasticity Circuits