[JSEN'23] F. Spagnolo, et al.

Design of a Low-Power Super-Resolution Architecture for Virtual Reality Wearable Devices

F. Spagnolo, et al., April 15, 2023

Abstract

Head-mounted displays (HMDs) have made virtual reality (VR) accessible to a widespread consumer market, introducing a revolution in many applications. Among the limitations of current HMD technology, the need to generate high-resolution images and stream them at adequate frame rates is one of the most critical. Super-resolution (SR) convolutional neural networks (CNNs) can alleviate the timing and bandwidth bottlenecks of video streaming by reconstructing high-resolution images locally (i.e., near the display). However, such techniques involve a significant amount of computation, which often makes their deployment within area- and power-constrained wearable devices unfeasible. This work originates from the observation that the human eye captures details with high acuity only within a certain region, called the fovea. The authors therefore designed a custom hardware architecture that reconstructs high-resolution images by treating the foveal region (FR) with accurate operations and the peripheral region (PR) with inaccurate ones. Hardware experiments demonstrate the effectiveness of the proposal: a customized fast SR CNN (FSRCNN) accelerator, realized as described and implemented in a 28-nm process technology, processes up to 214 ultrahigh-definition frames/s while consuming just 0.51 pJ/pixel, without compromising perceptual visual quality, thus achieving a 55% energy reduction and a $14\times$ higher throughput rate with respect to state-of-the-art competitors.
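To make the FR/PR split concrete, here is a minimal Python sketch of the foveated-SR dispatch idea, under loose assumptions: the paper's accurate path (a full-precision FSRCNN datapath) is stood in by bilinear interpolation, and its inaccurate path (approximate arithmetic) by nearest-neighbour replication; the names `foveal_mask`, `accurate_sr`, `approximate_sr`, and `foveated_sr` are hypothetical, and a circular fovea centred on the gaze point is assumed rather than the paper's exact FR geometry.

```python
import numpy as np

def foveal_mask(h, w, cx, cy, radius):
    """Boolean mask: True inside the foveal region (FR), False in the
    peripheral region (PR). A circular fovea is assumed here."""
    ys, xs = np.mgrid[0:h, 0:w]
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2

def accurate_sr(lr, scale):
    """Stand-in for the accurate FR path (full-precision FSRCNN in the
    paper). Here: plain bilinear interpolation in float32."""
    h, w = lr.shape
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = lr[np.ix_(y0, x0)] * (1 - wx) + lr[np.ix_(y0, x1)] * wx
    bot = lr[np.ix_(y1, x0)] * (1 - wx) + lr[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def approximate_sr(lr, scale):
    """Stand-in for the inaccurate PR path (approximate operators in the
    paper). Here: nearest-neighbour replication, the cheapest upscaler."""
    return np.repeat(np.repeat(lr, scale, axis=0), scale, axis=1)

def foveated_sr(lr, scale, gaze_xy, fovea_radius):
    """Reconstruct the HR frame: accurate SR inside the FR, cheap SR in
    the PR, selected per pixel by the foveal mask."""
    hr_acc = accurate_sr(lr.astype(np.float32), scale)
    hr_apx = approximate_sr(lr.astype(np.float32), scale)
    h, w = hr_acc.shape
    mask = foveal_mask(h, w, *gaze_xy, fovea_radius)
    return np.where(mask, hr_acc, hr_apx)

lr = np.random.rand(270, 480).astype(np.float32)  # low-res luma frame
hr = foveated_sr(lr, scale=4, gaze_xy=(960, 540), fovea_radius=300)
print(hr.shape)  # (1080, 1920)
```

As a sanity check on the headline numbers: assuming "ultrahigh definition" means $3840 \times 2160$ pixels, 214 frames/s at 0.51 pJ/pixel corresponds to roughly $214 \times 3840 \times 2160 \times 0.51\,\text{pJ} \approx 0.9\,\text{mW}$ for the reconstruction datapath, which is consistent with the wearable-device power budget the paper targets.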
