pLUTo

Enabling Massively Parallel Computation in DRAM via Lookup Tables

Conference Paper (2022)
Author(s)

Joao Dinis Ferreira (ETH Zürich)

Gabriel Falcao (Universidade de Coimbra)

Juan Gomez-Luna (ETH Zürich)

Mohammed Alser (ETH Zürich)

Lois Orosa (ETH Zürich)

Mohammad Sadrosadati (ETH Zürich)

Jeremie S. Kim (ETH Zürich)

Geraldo F. Oliveira (ETH Zürich)

Taha Shahroodi (TU Delft - Computer Engineering)

More authors (External organisation)

Research Group
Computer Engineering
Copyright
© 2022 Joao Dinis Ferreira, Gabriel Falcao, Juan Gomez-Luna, Mohammed Alser, Lois Orosa, Mohammad Sadrosadati, Jeremie S. Kim, Geraldo F. Oliveira, T. Shahroodi, More Authors
DOI related publication
https://doi.org/10.1109/MICRO56248.2022.00067
Publication Year
2022
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne project ("You share, we take care!", https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Pages (from-to)
900-919
ISBN (electronic)
978-1-6654-6272-3
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Data movement between the main memory and the processor is a key contributor to execution time and energy consumption in memory-intensive applications. This data movement bottleneck can be alleviated using Processing-in-Memory (PiM). One category of PiM is Processing-using-Memory (PuM), in which computation takes place inside the memory array by exploiting intrinsic analog properties of the memory device. PuM yields high performance and energy efficiency, but existing PuM techniques support a limited range of operations. As a result, current PuM architectures cannot efficiently perform some complex operations (e.g., multiplication, division, exponentiation) without large increases in chip area and design complexity. To overcome these limitations of existing PuM architectures, we introduce pLUTo (processing-using-memory with lookup table (LUT) operations), a DRAM-based PuM architecture that leverages the high storage density of DRAM to enable the massively parallel storing and querying of lookup tables (LUTs). The key idea of pLUTo is to replace complex operations with low-cost, bulk memory reads (i.e., LUT queries) instead of relying on complex extra logic. We evaluate pLUTo across 11 real-world workloads that showcase the limitations of prior PuM approaches and show that our solution outperforms optimized CPU and GPU baselines by an average of 713× and 1.2×, respectively, while simultaneously reducing energy consumption by an average of 1855× and 39.5×. Across these workloads, pLUTo outperforms state-of-the-art PiM architectures by an average of 18.3×. We also show that different versions of pLUTo provide different levels of flexibility and performance at different additional DRAM area overheads (between 10.2% and 23.1%). pLUTo's source code and all scripts required to reproduce the results of this paper are openly and fully available at https://github.com/CMU-SAFARI/pLUTo.
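The abstract's key idea — replacing a complex per-element operation with bulk reads from a precomputed lookup table — can be illustrated in software. The following is a minimal Python sketch of the LUT-query concept only, not pLUTo's in-DRAM mechanism; the function names (`build_lut`, `bulk_query`) and the 8-bit squaring example are illustrative assumptions, not from the paper.

```python
# Illustrative sketch of LUT-based computation (software analogue of
# the idea in the pLUTo abstract, NOT its in-DRAM implementation):
# precompute op(x) for every possible input once, then answer each
# element with a cheap table read instead of recomputing the operation.

def build_lut(op, width=8):
    """Precompute op(x), truncated to `width` bits, for all 2^width inputs."""
    mask = (1 << width) - 1
    return [op(x) & mask for x in range(1 << width)]

def bulk_query(lut, inputs):
    """Replace per-element computation with bulk LUT reads (table lookups)."""
    return [lut[x] for x in inputs]

# Example: 8-bit squaring (a "complex" op) served entirely by LUT queries.
square_lut = build_lut(lambda x: x * x)
data = [0, 3, 15, 255]
print(bulk_query(square_lut, data))  # squares modulo 256: [0, 9, 225, 1]
```

In hardware, the table build cost is paid once, and each query maps to a DRAM row read, which is why the approach scales to operations (multiplication, division, exponentiation) that bitwise PuM techniques handle poorly.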

Files

PLUTo_Enabling_Massively_Paral... (pdf)
(pdf | 1.4 Mb)
- Embargo expired in 01-07-2023
License info not available