Unlike other chip companies that pursue short-term gains through specialization at the cost of programmability and accuracy, MemryX attacks the data-throughput and energy-efficiency problems by addressing the von Neumann bottleneck at the architectural level. Through its proprietary immersed in-memory computing technology and dataflow architecture, the entire AI model resides on chip. The bottlenecks that limit throughput and energy efficiency, including external DRAM, off-chip and on-chip memory buses, and centralized control, are eliminated entirely.
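As a rough illustration of what eliminating that data movement means (a conceptual sketch only, not MemryX SDK code; the layer shapes and byte counts are arbitrary), the following Python snippet contrasts a von Neumann-style accelerator that re-fetches weights from external DRAM on every inference with a dataflow design whose weights stay resident next to the compute units, so only activations stream between stages:

```python
# Conceptual sketch (not MemryX SDK code): contrasts per-inference data movement
# in a von Neumann-style accelerator with a dataflow design where weights stay
# resident next to the compute fabric. Shapes and numbers are illustrative.

import numpy as np

LAYER_SHAPES = [(3072, 1024), (1024, 1024), (1024, 10)]

def von_neumann_inference(x, weights_in_dram):
    """Each layer's weights are fetched from external DRAM before use,
    so weight traffic is paid again on every inference."""
    bytes_moved = 0
    for w in weights_in_dram:
        w_on_chip = w.copy()          # models the DRAM -> on-chip transfer
        bytes_moved += w.nbytes
        x = np.maximum(x @ w_on_chip, 0.0)
    return x, bytes_moved

def dataflow_inference(x, resident_weights):
    """Weights were loaded once at configuration time and stay in memory
    beside the compute units; only the streaming activations move."""
    bytes_moved = 0
    for w in resident_weights:
        bytes_moved += x.nbytes       # only the activations travel between stages
        x = np.maximum(x @ w, 0.0)
    return x, bytes_moved

weights = [np.random.randn(i, o).astype(np.float32) / np.sqrt(i)
           for i, o in LAYER_SHAPES]
x = np.random.randn(1, LAYER_SHAPES[0][0]).astype(np.float32)

_, vn_bytes = von_neumann_inference(x, weights)
_, df_bytes = dataflow_inference(x, weights)
print(f"weight traffic per inference (von Neumann): {vn_bytes / 1e6:.1f} MB")
print(f"activation traffic per inference (dataflow): {df_bytes / 1e6:.3f} MB")
```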

This revolutionary architecture, termed the Memory Processing Unit (MPU), allows MemryX's AI chips to deliver best-in-class performance and energy efficiency (FPS and FPS/W) with seamless software integration, to support a broad range of AI models without re-training, and to achieve higher accuracy and reliability through native floating-point operations than approaches that rely on Int8 or lower precision.
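To see why native floating-point execution matters for accuracy, the hedged sketch below quantizes a weight matrix to Int8 with a symmetric per-tensor scale (a common generic scheme, not MemryX's method) and measures how far the layer output drifts from the floating-point result; a single outlier weight is enough to stretch the quantization range and degrade the rest of the tensor:

```python
# Illustrative only: shows how per-tensor Int8 quantization of weights can
# shift a layer's output relative to native floating point. Not MemryX SDK code;
# the weight statistics and quantization scheme are generic assumptions.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(512, 512)).astype(np.float32)  # typical small weights
w[0, 0] = 1.5                                                  # one outlier stretches the range
x = rng.normal(size=(1, 512)).astype(np.float32)

# Symmetric per-tensor Int8 quantization: scale set by the absolute maximum weight.
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

y_fp = x @ w                 # native floating-point result
y_q = x @ w_dequant          # result after the Int8 round-trip

rel_err = np.linalg.norm(y_fp - y_q) / np.linalg.norm(y_fp)
print(f"relative output error introduced by Int8 quantization: {rel_err:.2%}")
```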

MemryX’s IMC fabric and dataflow architecture fundamentally address the von Neumann bottleneck, allowing edge devices to achieve server-class performance and accuracy without model re-training.
