Extending the NOEL-V Platform with a RISC-V Vector Processor for Space Applications

Journal Article (2023)
Authors

Stefano Di Mascio (TU Delft - Space Systems Engineering)

A. Menicucci (TU Delft - Space Systems Engineering)

E.K.A. Gill (TU Delft - Space Systems Engineering)

Claudio Monteleone (European Space Agency (ESA))

Research Group
Space Systems Engineering
Copyright
© 2023 S. Di Mascio, A. Menicucci, E.K.A. Gill, Claudio Monteleone
To reference this document use:
https://doi.org/10.2514/1.I011097
Publication Year
2023
Language
English
Issue number
9
Volume number
20
Pages (from-to)
565-574
DOI:
https://doi.org/10.2514/1.I011097
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper describes the work carried out to extend the NOEL-V platform to include data-level parallelism (DLP) by implementing an integer subset of the RISC-V Vector Extension. The performance and resource utilization efficiency of the resulting vector processor for different levels of DLP (i.e., number of lanes) have been compared to the baseline scalar processor on a Xilinx Kintex UltraScale field-programmable gate array, employing typical kernels for compute-intensive applications. The role of the memory subsystem has also been investigated, comparing the results obtained with a low-latency and a high-latency main memory. The results show that the speed-up due to the use of the vector pipeline increases with the number of lanes in the vector processor, achieving up to 23.0× the performance of the baseline scalar processor with only 4.3× its resources. Using an implementation with 32 lanes increases performance even for problem sizes larger than the number of lanes, achieving more than 11.7× the performance of the scalar processor with just 1.9× its resource utilization for 128 × 128 matrix multiplications. This work proves that implementations of the selected subset are easily scalable and fit for small-processor implementations in highly constrained space embedded systems.

Files

1.i011097.pdf
(pdf | 1.47 Mb)
- Embargo expired on 19-09-2023
License info not available