June 3, 2016
On a high desert plain north of Joshua Tree National Park, Hidden River Road becomes a graded dirt track, flanked on either side by single-family homes, unadorned, prefab steel buildings, and trailers. As you continue east, at the base of a low hill covered in reddish granite boulders, two strange structures appear. One is black; the other is gold. Designed and built by the Los Angeles-based architect Robert Stone, Rosa Muerta and Acido Dorado are nominally habitable residences. Both have refrigerators and bathrooms, for instance. But in philosophy and execution, the homes are as austere as the desert surrounding them. In fact, in a sense, the homes are the desert surrounding them. The roof of Rosa Muerta, except directly above the bed, is open to the sky. When it rains, it rains indoors as well. The walls of Acido Dorado's common space—great, reflective golden windows hung on tracks—slide back to let in the breeze, or whatever else is coursing through the desert. One morning Stone woke to find a rattlesnake coiled in the middle of his living room.
Stone's houses are defined, therefore, as much by what they're missing as what they feature. And this architecture of subtraction, rather than inviting the inhabitant to marvel at the structure, asks her to ponder the border between the artificial and the natural. It may seem axiomatic to refer to a building as immersive (after all, a building literally surrounds the subject), but Acido Dorado demands self-awareness. In addition to its reflective, sliding walls, the ceiling is tiled in mirrors. And a reflecting pool extends the width of the home's bow. So wherever you look, you see not just a surface, but a second—or third, or fourth, or fifth. At dusk, with the northern wall shuttered, you see the surface of the reflective glass, the reflection in the reflective glass (you, the furniture around you, and the mirrored ceiling that contains another version of your reflection and the furniture), and through the reflective glass, the pool beyond it (which reflects the ceiling, which in turn reflects the pool), and past all that, the desert. To stand alone in that space at twilight, with a gentle wind rustling the Joshua trees and yuccas, is to find yourself in a metaphysical limbo where the sense of self feels as fragmented as light through a prism.
I'm beginning this series of essays on virtual reality with a discussion of Robert Stone's Acido Dorado for two reasons. The first is that we shot our first VR experience—"The Visitor"—there, and our experience filming in the house continues to inform our practical approach to conceiving and producing VR. The second is that the house provides a metaphorical touchstone for what we think VR is: namely, an artificial environment wherein the viewer retains attentional agency, and in so doing earns presence in space. But more on that later.
Before we dive into the philosophical and ethical considerations every VR creator should explore—both in the medium's present and future states—let's talk about WHAT THE HELL VIRTUAL REALITY IS.
The general term virtual reality (VR), for our purposes, refers to a 360-degree, fully immersive experience viewed through a head-mounted display (HMD). This shouldn't be confused with augmented reality (AR), which, using any variety of transparent screens (a visor, eyeglasses, contact lenses), projects a layer of visual information atop the natural world. Google Glass is a proto-example of AR viewing hardware. Google Cardboard is a proto-example of VR viewing hardware.
For the duration of this essay series, we're going to concern ourselves with the conception and production of live-action VR because we're approaching this new medium as filmmakers who tell stories with moving images, captured with cameras. Some of the greatest early experiments in VR tell stories in animated, or otherwise fully computer-generated, worlds. But for those of us most capable of crafting narratives using people, movement, and dialogue, live-action VR is our playground.
So let's get down to the very basics.
The cameras used to capture VR experiences are, in these early days, highly non-standard. Until the Jaunt Neo, the Nokia Ozo, the Lytro Immerge, and most recently, the GoPro Odyssey, most VR cameras were (and still are) actually arrays of individual cameras (often GoPros), held together by myriad, often open-source, 3-D-printed plastic housings. Our partner, Wevr, just acquired an Odyssey. But over the last two years, they've achieved incredible results from simple, 4-GoPro arrays. Both "The Visitor" and Janicza Bravo's Sundance-premiering "Hard World for Small Things" were shot with such a quad-camera setup.
Because I have the most experience with a 4-GoPro array (and also because its mechanics are essentially identical to more complex arrays, with a few key differences that I'll highlight), I'll use it as my touchstone for how—in the simplest terms—a 360-degree video experience gets filmed.
The key to a 360-degree camera rig is that it can see in, well, 360 degrees. On the GoPro-based array we've used, there are four cameras, one positioned at each of the cardinal directions. Affixed to each camera is an 8mm fisheye lens with a 190-degree field of view. When you have four cameras arranged in such a configuration, with overlapping fields of view, you're capturing a fully spherical image. Front, back, left, right, up, down. Everything.
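For the technically inclined, the geometry here is simple enough to check on the back of an envelope. Here's a quick Python sketch (my own arithmetic, not part of any camera toolchain) of how much image each adjacent pair of lenses shares:

```python
# Back-of-the-envelope check (not from any vendor's specs): in a ring of
# cameras, each seam's overlap is the lens field of view minus the
# angular spacing between adjacent cameras.

def seam_overlap_degrees(num_cameras: int, lens_fov: float) -> float:
    """Horizontal overlap (in degrees) shared by each adjacent camera pair."""
    spacing = 360.0 / num_cameras  # cameras at the cardinal directions -> 90 degrees apart
    return lens_fov - spacing

# The 4-GoPro array described above: four 190-degree fisheyes, 90 degrees apart.
print(seam_overlap_degrees(4, 190.0))  # 100.0 degrees of overlap per seam
```

That generous band of shared image along every seam is what makes stitching the views into a single sphere possible in the first place.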
In future installments, I'll talk about the paradigm shift required to think about filming VR from a script/directorial perspective. But for now, it's important that you start thinking, not in terms of frame, but in terms of space. VR is not a new genre; it's a new medium.
What comes out of the camera
When you're shooting VR, you have a 360-degree field of view. But since only a few extremely high-end, still-in-development VR cameras are capturing all 360 degrees of image onto a single sensor (or series of interlinked sensors), most VR is shot as a series of separate slices of that 360-degree sphere. A good way to picture the disparate images captured by a VR array is as the sections of an orange. Each camera captures one section. Then in post-production, the sections are stitched together into one piece of seamless fruit, surrounded by a shiny, durable peel.
A number of companies, including Oculus, Google, and Nokia—to name three behemoths—are working on software that will automate the stitching process. For the time being, though, the most complicated and expensive aspect of the VR post-production workflow is taking the individual slices of the orange, and fusing them into a single piece of fruit. There are an increasing number of vendors capable of doing this work. But the cost and quality vary greatly. Luckily for you, by the end of 2016, the power and accessibility of auto-stitching software will have increased dramatically.
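To make the orange analogy slightly more concrete, here's a deliberately naive sketch (my own; real stitching software warps the overlapping fisheye images and blends across the seams, rather than picking a single camera) of how each slice of the sphere might be assigned to a camera in a four-camera ring:

```python
# A toy illustration, not a real stitching pipeline: the simplest
# possible "stitch" assigns each output direction to whichever camera
# in the ring faces it most directly.

def nearest_camera(longitude_deg: float, num_cameras: int = 4) -> int:
    """Index of the camera whose optical axis is closest to this longitude."""
    spacing = 360.0 / num_cameras
    return round(longitude_deg / spacing) % num_cameras

print(nearest_camera(0))    # 0  (the front camera)
print(nearest_camera(85))   # 1  (closer to the 90-degree camera)
print(nearest_camera(350))  # 0  (wraps back around to the front)
```

The hard, expensive part the vendors are paid for is everything this toy skips: aligning the lenses, correcting the fisheye distortion, and hiding the seams.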
After you've recorded your 360-degree video in disparate segments, and then had them stitched together by someone who knows what they're doing (or you're reading this in late 2016, when brilliant, cheap auto-stitching software is readily available), the output for each stitched shot—called a lat long—looks like this:
To conceptualize what a lat long is, picture an inside-out globe—where the oceans and continents are on the interior, rather than exterior surface, and you, the viewer of the world, are positioned inside the globe, where the earth's core is, looking out toward the surface. In this analogy, the lat long would be that interior globe surface sliced open and laid flat—like a world map you'd hang on a classroom wall.
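If you'd like to see the globe analogy as math, here's a short sketch (my own convention; actual VR players may orient their axes differently) of how a point on the flat lat long maps back onto the viewing sphere:

```python
import math

# Hypothetical helper, not from any VR toolchain: map a pixel in a
# lat-long (equirectangular) frame to a direction on the viewing sphere.
# u and v are normalized coordinates in [0, 1): u sweeps longitude (the
# full 360 degrees, left-right), v sweeps latitude (180 degrees, from
# straight up to straight down).

def latlong_to_direction(u: float, v: float):
    """Return a unit (x, y, z) vector for normalized lat-long coordinates."""
    lon = (u - 0.5) * 2.0 * math.pi  # -pi..pi, 0 = straight ahead
    lat = (0.5 - v) * math.pi        # +pi/2 (up) .. -pi/2 (down)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center of the flat frame looks straight ahead:
print(latlong_to_direction(0.5, 0.5))  # (0.0, 0.0, 1.0)
```

In other words, the viewer sits at the origin (the earth's core, in the analogy), and every pixel of the flat map corresponds to one direction she could look.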
This flat image contains all 360 degrees of image in one 16:9 frame. You're literally seeing everything at once: front, back, left, right, up, down. This is, admittedly, not the ideal way to view VR; but it's a necessary intermediary step (and Mettle has even created a pair of affordable Premiere Pro plugins that allow you to scroll around the lat long as if you were watching the experience in an HMD). By unfolding the VR sphere into a flat image, you can bring it into your Premiere, Avid, or FCPX timeline, and edit just like you would a 2-D film.
Just kidding! Technically, editing a VR film in your preferred NLE software is the same as editing a normal, flat image. Conceptually, though, depending on your aesthetic approach, it's somewhere between slightly and astronomically different. Roy Andersson and Ruben Östlund might find enormous parallels between cutting in VR and 2-D. Danny Boyle and Paul Greengrass would have to completely reimagine their approach to visual storytelling.
But more on the philosophy of editing—or lack thereof—in part II.
Watching what comes out of the camera
You can watch 360-degree video on a flat screen (YouTube supports 360-degree video that you can control with your mouse; Facebook supports 360-degree video you can control by moving your phone left and right). But when you can see the edges of the screen, you get no sense of being immersed in a new environment.
When we talk about virtual reality, we're referring to a 360-degree video or CGI motion-picture experience watched through an HMD. What's an HMD? It's a headset that the viewer slips on like a pair of unwieldy goggles. Instead of transparent lenses, there's a screen on which the experience plays. In the simpler HMDs (Google Cardboard; Samsung Gear), the screen is actually just a smartphone whose built-in gyroscopic sensitivity (the phone knows when it's being tilted or turned) allows a viewer to explore a 360-degree environment simply by moving her head left and right, or up and down. The more powerful HMDs, like Oculus Rift and HTC Vive, work on the same principle. They're just tethered to powerful computers, whose processing power can support dedicated, higher-resolution displays, and faster refresh rates, than the current crop of smartphones. In the case of the HTC Vive, you also have hand controllers and laser-sweeping base stations that track your movement—enabling interactive experiences in which the viewer has locomotive agency. But that is way, way more complicated. Plus, at least for now, the only way to create an interactive environment is to build one digitally in a game engine. And that doesn't fit into our necessarily narrow definition of live-action VR cinema.
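Under the hood, the playback principle is refreshingly simple. Here's a hypothetical sketch (my own convention, not any actual player's API) of how a head orientation reported by the gyroscope picks out the spot in the lat-long frame the viewer is currently facing:

```python
# Hypothetical illustration of gyroscope-driven playback: the headset
# reports a yaw (left-right turn) and pitch (up-down tilt), and the
# player samples the lat-long frame at the center of the viewer's gaze.
# Yaw 0 / pitch 0 is the center of the frame, straight ahead.

def head_pose_to_latlong(yaw_deg: float, pitch_deg: float):
    """Normalized (u, v) coordinates in the lat-long frame for the gaze center."""
    u = ((yaw_deg / 360.0) + 0.5) % 1.0  # wraps: keep turning and you come back around
    v = 0.5 - (pitch_deg / 180.0)
    return (u, v)

print(head_pose_to_latlong(0, 0))   # (0.5, 0.5): dead center of the frame
print(head_pose_to_latlong(90, 0))  # (0.75, 0.5): a quarter turn to the right
```

Every HMD, from a folded cardboard viewer to a Vive, is doing some version of this mapping dozens of times per second.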
As demand for VR increases over the coming months and years, all platforms publishing VR content (Wevr Transport, Vrse, Oculus, etc.) will produce native apps for all operating systems, allowing viewers to watch VR experiences on any and all devices—much as you can stream Netflix through a browser on your iMac, or through an app on your iPhone, or cast House of Cards to your TV via Chromecast using your Samsung S7, or watch Orange Is the New Black on Apple TV, etc. etc. etc. etc.
In the case of Transport, Wevr is making a bet that there will be an explosion of independent VR content in the coming years, so they're building a platform that, eventually, will allow you to publish a VR experience as easily as you can upload a video to Vimeo. So if you're learning about the rudiments of VR production and philosophy from this series, by the time you've made your first immersive film, there will be a super-simple way to make it available to your fans.
So, now, let's talk about making virtual reality!!
Ready for more? Part 2 here.