Meta will showcase two new prototype headsets at SIGGRAPH 2023 next week. SIGGRAPH is an annual conference where researchers present breakthroughs in computer graphics hardware and software.
Last year Meta showed Starburst, a high dynamic range (HDR) display research prototype with a brightness of 20,000 nits, the highest of any known head-mounted display. This year, Meta will showcase two prototype headsets: Butterscotch Varifocal will demonstrate near-retinal resolution and varifocal optics, while Flamera will demonstrate a novel approach to real-world perspective without reprojection. Attendees will be able to try out the two prototype headsets, though to be clear, both Butterscotch Varifocal and Flamera are research prototypes meant to explore far-future headset technologies that Meta specifically warns “may never make it into consumer-facing products”.
Butterscotch Varifocal: Varifocal Retina Resolution
Angular resolution is the true measure of HMD resolution because it takes into account the difference in field of view between headsets, describing how many pixels you see in each degree of viewing angle, known as pixels per degree (PPD). For example, if two headsets use the same display but one has twice the field of view of the other, they will have the same resolution, but the wider headset will have half the PPD of the narrower one. “Retinal resolution” is a term used to describe angular resolution beyond the threshold the human eye can discern, which in the world of headsets is considered to be 60 PPD. No consumer VR headset on the market today comes close to that, with the Quest Pro at around 22 PPD, the Bigscreen Beyond at 32 PPD, and the $2,000 Varjo Aero hitting 35 PPD. Varjo’s $5,500 commercial headset does achieve retinal resolution, but only in a small rectangular area in the center of the field of view.
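The relationship above is simple division: the same panel spread over a wider field of view yields fewer pixels per degree. A minimal sketch, using hypothetical display numbers chosen purely for illustration:

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_degrees: float) -> float:
    """Average angular resolution: pixels seen per degree of viewing angle."""
    return horizontal_pixels / horizontal_fov_degrees

# Two hypothetical headsets sharing a 2000-pixel-wide display:
narrow = pixels_per_degree(2000, 50)    # 50° field of view → 40.0 PPD
wide = pixels_per_degree(2000, 100)     # 100° field of view → 20.0 PPD
```

Doubling the field of view halves the PPD, which is why field of view must always be considered alongside raw display resolution.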
Last year, Meta showed off a 55 PPD prototype headset called Butterscotch, designed to demonstrate and study what retinal resolution feels like, though its field of view was only about half that of Quest 2. Butterscotch Varifocal is the next-generation evolution of Butterscotch, combining its retinal angular resolution with the variable focus optics of the Half-Dome prototype.
Butterscotch Varifocal’s Motor Adjusts Display Focus
Originally unveiled in 2018, Half-Dome incorporates eye-tracking technology that rapidly and mechanically moves the display back and forth to dynamically adjust focus. All headsets on the market today use fixed-focus lenses: each eye has a separate perspective, but the image is focused at a fixed focal distance, usually a few meters away. Your eyes will point (converge or diverge) at the virtual object you’re looking at, but can’t actually focus (accommodate) to the virtual distance of the object. This is known as vergence-accommodation conflict, and it can cause eyestrain and make close virtual objects appear blurry. Solving this problem is important to making VR feel more realistic and suitable for prolonged use. Butterscotch Varifocal’s focus can be adjusted from 25cm to infinity, Meta says, so you can focus on objects that are close or far away. Meta claims that the combination of retinal resolution and variable-focus optics means the headset offers “clear, bright vision comparable to what you see with the naked eye.”
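A stated focus range of 25cm to infinity can be expressed in optical terms: accommodation demand in diopters is the reciprocal of the focal distance in meters, so this range corresponds to roughly 0 to 4 diopters. A minimal sketch of that conversion (the distances are just the endpoints Meta quotes, not implementation details of the headset):

```python
import math

def accommodation_diopters(distance_m: float) -> float:
    """Optical power needed to focus at a given distance.

    Diopters are defined as 1 / distance-in-meters; focusing at
    infinity requires 0 diopters.
    """
    return 0.0 if math.isinf(distance_m) else 1.0 / distance_m

near = accommodation_diopters(0.25)      # 25 cm → 4.0 diopters
far = accommodation_diopters(math.inf)   # infinity → 0.0 diopters
```

So the varifocal mechanism has to sweep about a 4-diopter range, with near distances demanding the largest and fastest focus changes.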
Flamera: Lightfield Perspective Without Reprojection
Flamera is a prototype headset that delivers real-world perspective without reprojection, and which Meta describes as a “computational camera using light field technology.” Headsets like the Quest Pro, Apple Vision Pro, and the upcoming Quest 3 use cameras on the front to show the real world, but because those cameras are not in the same position as the user’s eyes, they must use image processing algorithms to reproject the camera view to show what your eyes would see. This process adds latency and introduces image processing artifacts. Flamera aims to bypass reprojection entirely with a completely new hardware design built with perspective in mind.
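The geometry behind the reprojection problem is straightforward: a camera mounted a few centimeters from the eye sees nearby objects at a noticeably different angle, and that difference shrinks with distance. A minimal sketch of the small-scene parallax involved, with illustrative numbers only (the actual camera placement on any given headset varies):

```python
import math

def angular_offset_deg(camera_offset_m: float, object_distance_m: float) -> float:
    """Angle (degrees) by which a camera's view of a point differs from the
    eye's view, given a lateral offset between camera and eye."""
    return math.degrees(math.atan2(camera_offset_m, object_distance_m))

# Assume a hypothetical 6 cm camera-to-eye offset:
close = angular_offset_deg(0.06, 0.5)   # object 0.5 m away → ~6.8°
far = angular_offset_deg(0.06, 5.0)     # object 5 m away → ~0.7°
```

Because the error is large for close objects and depth-dependent, conventional passthrough must estimate scene depth and warp the image per-pixel; Flamera instead tries to capture the correct rays in the first place.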
Here’s how Meta describes how the headset works:
Unlike traditional light field cameras (which have an array of lenses), Flamera (think of it as a “flat camera”) intentionally places an aperture behind each lens in the array. These apertures physically block unwanted light so that only the desired light reaches the sensor, whereas traditional light field cameras would capture far more light than this, resulting in unacceptably low image resolution. This architecture also devotes the limited sensor pixels to the relevant parts of the light field, resulting in higher resolution images. The raw sensor data ends up looking like an array of small dots, each containing only a portion of the headset’s desired view of the physical world. Flamera rearranges the pixels and estimates a rough depth map for depth-dependent reconstruction.
Meta claims this results in correct real-world perspective with lower latency and fewer artifacts than reprojection-based passthrough.