Research on AI accelerator design has attracted great interest in recent
years, and accelerating Deep Neural Networks (DNNs) on Processing-in-Memory
(PIM) platforms is an actively explored direction with great potential. PIM
platforms, which simultaneously address the power-wall and memory-wall
bottlenecks, have shown orders-of-magnitude performance improvements over
conventional computing platforms built on the von Neumann architecture. One
direction for accelerating DNNs in PIM is the resistive memory array (a.k.a.
crossbar), which has drawn great research interest owing to its analog
current-mode weighted-summation operation: it intrinsically matches the
Multiply-and-Accumulate (MAC) operation that dominates DNN workloads, making
the crossbar one of the most promising candidates.
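As a concrete illustration, the sketch below models an idealized crossbar
read (assuming ideal devices: no wire resistance, device variation, or ADC
quantization; all sizes and values are illustrative). Weights are stored as
cell conductances, inputs drive the word lines as voltages, and the bit-line
currents realize the weighted summation in a single step:

import numpy as np

# Idealized crossbar MAC. DNN weights are mapped to cell conductances G
# (siemens); input activations drive the word lines as voltages V. By
# Ohm's law each cell contributes I = V * G, and Kirchhoff's current law
# sums the currents along each bit line, so the column currents equal a
# weighted summation (MAC).

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances: 4 rows x 3 columns
V = rng.uniform(0.0, 0.2, size=4)         # word-line input voltages

I_cell = V[:, None] * G        # per-cell currents
I_column = I_cell.sum(axis=0)  # analog summation along each bit line

# Matches a digital matrix-vector product: one crossbar read performs
# the whole MAC at once.
assert np.allclose(I_column, V @ G)
print(I_column)

Because every column sums its currents concurrently, a single read evaluates
an entire matrix-vector product.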
An alternative direction for PIM-based DNN acceleration is bulk bit-wise
logic operations performed directly on the contents of digital memories.
Thanks to the high fault tolerance of DNNs, recent algorithmic advances have
successfully quantized DNN parameters to low-bit-width representations while
maintaining competitive accuracy. Such DNN quantization techniques
essentially convert MAC operations into much simpler addition/subtraction or
comparison operations, which can be performed by bulk bit-wise logic
operations in a highly parallel fashion.
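For instance, in the fully binarized case (one common form of such
quantization; the encoding below is illustrative), a dot product over
{-1, +1} values reduces to an XNOR followed by a popcount, exactly the kind
of bulk bit-wise operation a digital PIM array can apply to whole memory
rows in parallel:

# XNOR/popcount trick used in binarized neural networks. With weights
# and activations constrained to {-1, +1} and encoded as bits
# (+1 -> 1, -1 -> 0), the dot product over n elements becomes
# 2 * popcount(XNOR(a, b)) - n.

n = 8
a_bits = 0b10110010  # encodes a vector of +1/-1 values
b_bits = 0b11010110

mask = (1 << n) - 1
xnor = ~(a_bits ^ b_bits) & mask   # 1 where the signs agree
matches = bin(xnor).count("1")     # popcount
dot_bitwise = 2 * matches - n

# Reference computation on the decoded +1/-1 vectors.
decode = lambda bits: [1 if (bits >> i) & 1 else -1 for i in range(n)]
dot_ref = sum(x * y for x, y in zip(decode(a_bits), decode(b_bits)))
assert dot_bitwise == dot_ref
print(dot_bitwise)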
The main goal of this seasonal school is to dive deep into the rapidly
developing field of PIM, with a focus on intelligent memory circuits and
systems at the host and the edge, and to cover its cross-layer design
challenges from devices to algorithms. The IEEE Seasonal School in Circuits
and Systems on In-Memory Computing offers talks and tutorials by leading
researchers from multiple disciplines and prominent universities, and it
promotes short student presentations to demonstrate new research results,
discuss the potential and challenges of PIM accelerators as well as future
research needs and directions, and shape collaborations.
Program:
https://events.vtools.ieee.org/m/477163