
"Energy-Efficient, Edge-Native, Neuromorphic Sensory Processing Units (NSPU) with Online Continuous Learning Capability"

The Neuromorphic Computer Architecture Lab (NCAL) is a new research group in the Electrical and Computer Engineering Department at Carnegie Mellon University, led by Prof. John Paul Shen and Prof. James E. Smith.

RESEARCH GOAL:  A new processor architecture and design that captures the capabilities and efficiencies of the brain's neocortex for energy-efficient, edge-native, on-line sensory processing in mobile and edge devices.

  • Capabilities: strong adherence to biological plausibility and Spike Timing Dependent Plasticity (STDP) to enable continuous, unsupervised, and emergent learning.

  • Efficiencies: can achieve several orders of magnitude improvement in system complexity and energy efficiency compared to existing DNN computation infrastructures for edge-native sensory processing.



  1. Targeted Applications:  Edge-Native Sensory Processing 

  2. Computational Model:  Space-Time Algebra (STA)

  3. Processor Architecture:  Temporal Neural Networks (TNN)

  4. Processor Design Style:  Space-Time Logic Design

  5. Hardware Implementation:  Off-the-Shelf Digital CMOS


1. Targeted Applications:  Edge-Native Sensory Processing 
The targeted application domain is edge-native, on-line sensory processing that mimics the human neocortex. The focus of this research is on temporal neural networks that can achieve brain-like capabilities with brain-like efficiency and can be implemented using standard CMOS technology. This effort can enable a whole new family of accelerators, or sensory processing units, that can be deployed in mobile and edge devices to perform edge-native, on-line, always-on sensory processing with real-time inference and continuous learning, while consuming only a few milliwatts.


2. Computational Model:  Space-Time Algebra (STA)
A new Space-Time Computing (STC) model has been developed for computation that communicates and processes information encoded as transient events in time -- action potentials, or voltage spikes, in the case of neurons. Consequently, the flow of time becomes a freely available, no-cost computational resource. The theoretical basis for the STC model is the "Space-Time Algebra" (STA), with primitives that model points in time and functional operations that are consistent with the flow of Newtonian time. [STC/STA was developed by Jim Smith]
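The flavor of the STA primitives can be sketched in a few lines. This is a minimal illustrative model, assuming a discrete-time encoding where a value is the time step at which a spike occurs and the absence of a spike is represented by infinity; the function names (`earliest`, `latest`, `delay`) are descriptive choices, not the lab's notation.

```python
# Illustrative sketch of space-time algebra primitives.
# A value is a point in time (the arrival time of a spike);
# math.inf encodes "no spike". Names are illustrative.
import math

INF = math.inf  # no spike ever arrives

def earliest(*times):
    """Earliest-arriving spike: the min-like STA operation."""
    return min(times)

def latest(*times):
    """Latest-arriving spike: the max-like STA operation."""
    return max(times)

def delay(t, d):
    """Delay a spike by d >= 0 time units -- consistent with the
    forward-only flow of Newtonian time (outputs never precede inputs)."""
    return t + d

# Example: two spikes arriving at t=3 and t=7
a, b = 3, 7
print(earliest(a, b))    # 3
print(latest(a, b))      # 7
print(delay(a, 2))       # 5
print(earliest(a, INF))  # 3 -- an absent spike never wins the race
```

Note that all three operations are causal: an output time is never earlier than the inputs that produce it, which is what makes them realizable as races between signal edges in hardware.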


3. Processor Architecture:  Temporal Neural Networks (TNN)
Temporal Neural Networks (TNNs) are a special class of spiking neural networks that implement a class of functions based on the space-time algebra. By exploiting time as a computing resource, TNNs can perform sensory processing with very low system complexity and very high energy efficiency compared to conventional ANNs and DNNs. Furthermore, a key feature of TNNs is the use of spike timing dependent plasticity (STDP) to achieve a form of machine learning that is unsupervised, continuous, and emergent.
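The core idea of STDP-based learning can be sketched as a local weight update driven only by relative spike timing. The sketch below is a hedged illustration, not the NCAL/TNN update rule: the clamping range, increment/decrement amounts, and treatment of missing spikes are assumptions chosen for clarity.

```python
# Illustrative STDP-style update for one synapse. Spike times are
# integers; None means "no spike". Small saturating integer weights
# are assumed, as is common in hardware-oriented TNN models.

W_MAX = 8  # assumed weight ceiling (illustrative)

def stdp_update(w, t_in, t_out, inc=1, dec=1):
    """Update a synaptic weight from relative input/output spike timing.

    Potentiate when the input spike arrives at or before the output
    spike (it plausibly contributed to firing); depress otherwise.
    The rule is local and unsupervised: no labels, no global error.
    """
    if t_out is None:                      # neuron did not fire
        if t_in is not None:
            w -= dec                       # active input, no output: depress
    elif t_in is not None and t_in <= t_out:
        w += inc                           # causal pairing: potentiate
    else:
        w -= dec                           # non-causal or silent input: depress
    return max(0, min(W_MAX, w))           # clamp to [0, W_MAX]

print(stdp_update(4, t_in=2, t_out=5))  # 5 (potentiation)
print(stdp_update(4, t_in=6, t_out=5))  # 3 (depression)
```

Because each update depends only on the two spike times at one synapse, learning is continuous (applied on every event) and emergent (useful features arise without supervision), which is the property the paragraph above highlights.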


4. Processor Design Style:  Space-Time Logic Design
Conventional CMOS logic gates based on Boolean algebra can be re-purposed to implement STA-based temporal operations and functions, with temporal values encoded as voltage edges or pulses. We have developed a TNN architecture based on two key building blocks: neurons and columns of neurons. Using the space-time logic design approach and standard digital CMOS design tools, we have implemented the excitatory neuron model with its input synaptic weights, as well as a column of such neurons with winner-take-all (WTA) lateral inhibition.
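The behavior of these two building blocks can be sketched at a functional level. The model below is an assumed, simplified behavioral sketch (a non-leaky, step-accumulating neuron and 1-winner WTA with index tie-breaking); it illustrates the race semantics of space-time logic, not the gate-level CMOS implementation.

```python
# Behavioral sketch of a temporal neuron and a WTA column.
# Unit time steps and integer weights are assumed (illustrative).
import math

def neuron_fire_time(spike_times, weights, threshold):
    """Return the time step at which the accumulated input weight
    crosses threshold, or math.inf if the neuron never fires.
    None in spike_times means "no spike" on that input line."""
    horizon = max((t for t in spike_times if t is not None), default=0)
    potential = 0
    for t in range(horizon + 1):
        for ti, w in zip(spike_times, weights):
            if ti == t:
                potential += w       # step up when an input spike arrives
        if potential >= threshold:
            return t                 # first threshold crossing = output spike
    return math.inf

def wta_column(all_inputs, all_weights, threshold):
    """1-WTA lateral inhibition: only the earliest-firing neuron in the
    column emits its spike; ties broken by neuron index."""
    times = [neuron_fire_time(s, w, threshold)
             for s, w in zip(all_inputs, all_weights)]
    winner = min(range(len(times)), key=lambda i: times[i])
    return [times[i] if i == winner and times[i] < math.inf else math.inf
            for i in range(len(times))]

spikes = [0, 1, None]                # input spike times on 3 lines
col = wta_column([spikes, spikes], [[3, 2, 1], [1, 1, 1]], threshold=4)
print(col)                           # neuron 0 crosses threshold first and wins
```

In hardware, this "earliest spike wins" competition maps naturally onto signal races between edges, which is why temporal WTA columns can be built from re-purposed Boolean gates rather than analog circuitry.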


5. Hardware Implementation:  Standard Digital CMOS Technology

Based on the STA theoretical foundation and the space-time logic design approach, we can design a new type of special-purpose, TNN-based "Neuromorphic Sensory Processing Unit" (NSPU) for incorporation in SoCs targeting mobile and edge devices. NSPUs can become a new core type in SoCs that already integrate heterogeneous cores. Beyond using off-the-shelf CMOS design and synthesis tools, there is the potential to create a custom standard cell library and design optimizations to support the design of TNN-based NSPUs for sensory processing.

FCRC 2019 Keynote by James E. Smith

"A Roadmap for Reverse Architecting

the Brain's Neocortex"

New CMU-ECE Course Syllabus (Spring 2024)
18-743 "Neuromorphic Computer Architecture and Processor Design"

New Book Chapter in "Neuromorphic Computing" (March 2023)
"Cortical Columns Computing Systems: Microarchitecture Model, Functional Building Blocks, and Design Tools"

