THIS IS WHY KEANU REEVES THREW UP RIGHT AFTER HE GOT OUT OF THE MATRIX

This publication contains two parts:

  1. An immersion in a virtual world, and
  2. Equipment for living there upon arrival.

The first can be found at a link somewhere in this text, which comprises the second.

The world I coded is an attempt to explore how the digital sphere calibrates its users. It treats attention in several different ways, explored later in this text. For now, think of the world as attending to you just as much as you attend to it. It is a virtual experience that tests the line between the virtual and the physical. Keanu Reeves’s character vomited the moment he got out of the Matrix. I wanted to see if I could produce similar behavior in users.

Before continuing into the digital sphere, let me direct your attention. The bones and brain of the app are HTML and JavaScript, just like any website’s. Because A-Frame (the technology behind the WebVR app) is completely web-based, you can theoretically run it on anything with a web browser. Note that it is in fact VR and will likely make your device heat up from the heavy processing required to render everything properly. The best way to experience this world is with a VR headset of some sort. I tested it using a $25 headset from Amazon that is essentially a plastic case for a smartphone. The headset comes with straps to fasten it to the user’s head, which is important for this project: if you can stay hands-free while using it, you will decouple the virtual world from the physical one more easily. If you don’t have access to a headset, you can use a cardboard VR setup like Google’s or Childish Gambino’s. When the project loads in your browser, tap the goggles icon in the lower right corner to put it into goggle mode. Phones made in the last few years fare pretty well in rendering and interacting with it.
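
For a sense of what those bones look like, here is a minimal A-Frame scene. It is a generic sketch rather than the project’s actual markup, and the version number is just one plausible release:

    <!DOCTYPE html>
    <html>
      <head>
        <!-- A-Frame ships as a single script; the rest is plain HTML. -->
        <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
      </head>
      <body>
        <!-- a-scene sets up the renderer, a default camera, and the
             goggles icon (the enter-VR button) automatically. -->
        <a-scene>
          <a-sphere position="0 1.6 -3" radius="0.5" color="#FFFFFF"></a-sphere>
          <a-sky color="#000000"></a-sky>
        </a-scene>
      </body>
    </html>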

Otherwise, view it in your phone’s or computer’s browser as a regular web page. It’s just way less cool that way.

Stand up when you are goggled into the Metaverse. Ambient music with a lot of bass adds to the effect as well. I prefer Tycho (I mean, just look at his album artwork and you’ll understand).

You will find yourself in a world that is mostly dark. All interactions you can perform are based on the application’s construction of your gaze. The reticle in the center of the screen shrinks when it hovers over any object, signaling that you are about to select something. Some objects are interactable, but most are not.
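
Gaze interaction like this is typically built from A-Frame’s stock cursor component attached to the camera. A minimal sketch, with an illustrative fuse timeout rather than the project’s real one:

    <!-- The cursor ray follows the user's gaze. With fuse enabled, staring
         at an object for fuse-timeout milliseconds fires a click; hover
         events can drive the shrinking reticle animation. -->
    <a-entity camera look-controls>
      <a-cursor fuse="true" fuse-timeout="1500"></a-cursor>
    </a-entity>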

If you ever feel like you do not know what to do, remember that you are in an entirely new world. Look around. The experience guides you.

Calibration

The site is https://digital-calibration.glitch.me. Go now, but be careful.

Continue to the next section after you return from the digital calibration scene.

Equipment

By this point, you have experienced the calibration scene. What did you see? How do you feel? What captured or directed your attention?

Did you vomit?

Head to the Contact section and let me know. I would love to hear.

When I began learning A-Frame for this project, I planned out how I would create the virtual world. The role of the front-end developer is often to figure out how to deliver the best user experience. After watching the first person use the application, I realized this perspective was wrong. Through various mechanisms and effects, what I really created was a digital sphere (literally—the VR body sits in a giant black sphere with a radius of 3000 virtual meters) in which the user creates his own world. The user is the ultimate controller of the virtual reality. As an active agent and cognizer, the user is just as much part of the reality as the platform he stands on. He calibrates it while it calibrates him.
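
That enclosing sphere is a single entity. A sketch, where only the radius comes from the project; the material renders its back faces so the inside is visible from within:

    <!-- The world-container: the user sits inside this sphere, so the
         material draws the inner (back) faces. -->
    <a-sphere radius="3000" color="#000000" material="side: back"></a-sphere>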

During the experience, different mechanisms worked to direct and manipulate your attention. The first of these appeared in this text, where I listed instructions for how you might attend to the world. I picked up on several things to advise while watching my friends use the app. Some hardly rotated their heads or bodies and looked straight ahead throughout the whole experience, holding onto the goggles the entire time. While this is not the movement I intended to produce, it is nonetheless a noteworthy result. I tried to abstract the cell phone from its original use as much as possible, but they treated the world as a single screen rather than a sphere. That flat world is far less exciting than the virtual one, with its 360 degrees of content. In this case, the disposition of the user is less active and dynamic. It is structured toward separating the realities. It lacks an active form. As hypothesized, those who adopted this strategy were not as susceptible to digital calibration as those who looked around hands-free.

I am not blaming the users for their static dispositions. Perhaps I should have developed the application in a way that directed their attention more fluidly. Nevertheless, their experiences are still relevant to this project. I hypothesize that they did not embody the virtual world and participate in its network; they merely watched. Embodied attention has spatial implications. Watching does not.

I found that certain instructions worked better for immersing users in the virtual world. Asking them to look around often led them to drop their hands from the sides of the headset. Ambient techno music also helped place the user in the digital space. From there, the users were capable of going out into the world. I included instructions in the text above in an attempt to educate properly. In these terms, the education is accomplished through Tim Ingold’s model of exduction, in which instructors show students how to go out into the world rather than bringing them into a specific perspective of it. I aimed to provide an exduction with which users could understand how to remain as open as possible to the digital sphere. To be malleable, users need to practice close reading in the style of Navneet Alang. This performance requires attention to objects as they are, leaving aside any presuppositions or seasoned interpretations about them. For instance, if a user assumes physics and time work the same in the virtual world as they do in the physical one, he will miss the point. He will vomit.

Several of my friends/test subjects experienced nausea (none vomited). This feeling could have been a result of having a glowing screen inches away from their eyes, but I shy away from that explanation because the experience lasts no more than a few minutes and is mostly dark. Kids these days use their phones for much longer than that, not much farther from their faces, and with much brighter screens, without feeling nauseated. The sickness could instead have been a direct effect of my intentional manipulations of the world.

I designed the falling box experience to instill panic in the user. The first box that falls should surprise users with its size. Then a sign directs their attention upward, toward the second falling box. From where they stand, users should feel as though it might strike them. Two of my test subjects visibly flinched when the second box came close. When they acted this way, it became clear that Annemarie Mol’s distinction between perspectives and realities is an important one for virtual realities. In her piece, Mol identifies three different approaches to identifying and treating anaemia. These processes are not just three ways of looking at anaemia; they create three different diseases. In the same way, looking through the VR lens passively instead of using it hands-free does not just present a different way to look at the world; it creates a different world. The existence of the immersive reality is evidenced in the embodied reaction. The world in which VR users expose themselves to their environment is a much more threatening one, and it can induce protective instincts.
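
A falling box like the first one can be expressed with A-Frame’s animation component. A sketch with illustrative sizes and timings, not the project’s actual values:

    <!-- A large box drops from overhead to just in front of the user;
         easeInQuad makes it accelerate like a falling object. -->
    <a-box width="8" height="8" depth="8" color="#333333" position="0 60 -6"
           animation="property: position; to: 0 4 -6; dur: 1500; easing: easeInQuad">
    </a-box>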

The falling box experience has another, subtler implication. The countdown panel that appears on the user’s screen does not obey the laws of time it appears to profess. The second second is 10% shorter than a real second, the third is 20% longer, the fourth 10% longer, and the fifth 40% longer. A user who notices this manipulation will likely think it is a glitch. However, none of my subjects reported anything about the timing. Their brains must have adapted to the new time scale, and their attention to time was embodied in their feelings of discomfort. The distortion may have triggered a neurological reaction that contributed to their nausea.
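
A warped countdown like that takes only a few lines of JavaScript. This sketch reproduces the percentages above; the entity id and function names are mine, not the project’s:

    // Each entry is how long one displayed "second" actually lasts, in ms:
    // real, -10%, +20%, +10%, +40%.
    var tickDurations = [1000, 900, 1200, 1100, 1400];
    var panel = document.querySelector('#countdown'); // hypothetical text entity

    function tick(i) {
      if (i >= tickDurations.length) return;
      panel.setAttribute('text', 'value', String(tickDurations.length - i));
      setTimeout(function () { tick(i + 1); }, tickDurations[i]);
    }
    tick(0);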

The experience of the shapes flying at the user was also intended to instill panic, and in some cases that panic was embodied in physical reactions. I designed the animation to further corrupt any trust the user might have that the digital sphere plays by the rules of the physical world. When the button for the experience is clicked, shapes appear strewn about. Because they are so far away, they initially look like colored versions of the spheres hovering around the user. Text on the screen instructs the user to look around and attend to the shapes. At that point, all users turned their bodies and glanced around. Some returned to their starting point without giving the new stars much thought. Others searched intently for the reason I would demand such close attention. Eventually, they realized the shapes were approaching. Many asked, “Wait, sh*t, are they coming at me?” This moment is important because it exposes the users as vulnerable and in search of protection. They were reading what was in front of them as a potential threat, if only for a moment. Almost everyone let out a “Whoa” when the shapes got so close that they pummeled their vision, and the tone of that exclamation was often slightly unnerved. The animation overwhelmed them with color and random shapes. One of my subjects, a fan of techno concerts, looked quite satisfied at that point.
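
Each inbound shape is just an entity placed far away and animated toward the camera. A sketch of one, with an illustrative position and timing:

    <!-- From a few hundred meters out, this reads as a colored star; over
         eight seconds it accelerates straight toward the user's head height. -->
    <a-tetrahedron color="#E65100" position="200 5 -300"
                   animation="property: position; to: 0 1.6 0; dur: 8000; easing: easeInCubic">
    </a-tetrahedron>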

The falling experience, which was often executed last, had the most visible and overt effects on the bodily and mental attention of the users. I used a physics library for A-Frame written by Don McCurdy, a seasoned A-Frame developer. It implements real physics concepts, but the system is still a simulation, and its components are editable. I kept gravity at the default value for a while, but the distance between the spheres acting as stars made it seem like the user was not falling very fast. The initial results of those trials were interesting; many users reported feeling like they were floating in space. Increasing the number of stars made the fall seem faster, but then they crowded the user’s vision a little too much. As it stands, gravity works at a rate of -25.0 m/s^2 in the digital sphere. It feels like a reasonable fall, though not quite the same as on Earth. Many users shifted their weight during the fall in what I believe was an attempt to ground themselves in the physics of the natural world. They did not look around or spin; they only looked up and down. Spinning around while falling feels disorienting in the digital sphere. One of my friends said the fall itself made him dizzy, so he focused on the “red dot” floating in the air because it appeared to be the only stationary object as the stars whizzed past him. I put planets like the one my friend fixated on in the digital sphere to act as such reference points. After watching the shapes fly at them, the users knew they could look around. Their choice to remain focused only on what was in front of them suggested they were uncomfortable. They tried to protect themselves from the uncertainty and strangeness of the calibration scene.
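
With that library, gravity is a scene-level setting. A sketch of one plausible wiring, where only the gravity number comes from the text:

    <!-- Scene-wide gravity of -25 m/s^2. The camera rig is a dynamic body
         so it falls; the floor is a static body so it can be landed on. -->
    <a-scene physics="gravity: -25">
      <a-entity dynamic-body="shape: sphere; sphereRadius: 0.5">
        <a-entity camera look-controls></a-entity>
      </a-entity>
      <a-plane static-body rotation="-90 0 0" width="1000" height="1000"></a-plane>
    </a-scene>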

When users hit the ground the first time, they bounce fairly high. Some people described the floor as a trampoline. It disoriented almost everyone who tried it. Once people came to rest on the ground, they started to look around and tried to figure out where they were. Some said they felt like they were in Tron because of the pattern on the floor. One asked how to get out. He expected an exit button; I expected him to remove himself from the world by taking off the goggles.
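
That trampoline feel is what a high restitution (bounciness) setting produces. A sketch, assuming the same physics system; the value here is illustrative:

    <!-- Restitution near 1 returns most of the impact energy, so the first
         landing launches the user back up before they settle. -->
    <a-scene physics="gravity: -25; restitution: 0.9">
      <!-- ...rest of the scene... -->
    </a-scene>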

Perhaps the most interesting part comes at the end of the simulation. The program prompts the user to stick out his leg in order to exit the digital sphere. Two people out of the half dozen testers followed the instructions. If they were treating the VR sphere as just a phone screen, they would have known that it would be impossible for the phone to detect any movement by the lower body because the phone is completely encapsulated in the plastic headset. Right after they submitted to the (mis)direction, the users realized their silliness. The obedient decision indicates an outright trust in the digital sphere’s rules, and that is amusing.

With the work of Katherine Hayles, we can understand every component of the digital sphere as a cognizer. That is especially true in the calibration scene. Through all of the examples above, each piece of code acts on the user in a manipulative fashion. The code, a stream of binary digits, comprises a network of cognizers and attenders, of which the user himself becomes a part.

Every time a person subjects himself to the rules of the digital sphere and exhibits a physical or mental response, he acknowledges his place in the digital space. One of my friends wrote that it took him a moment to remember where he was and what he was wearing when he stepped out of the calibration scene. As previously mentioned, we can use Mol’s discussion of multiple realities to illustrate that entire worlds exist outside the one in which we reside. To understand the sense of place in the digital locale, we turn to Andy Clark. In “Where Are We?” Clark argues that the location of a person is grounded in the brain’s experiences of control, communication, and feedback: we are where we interact. The users’ brains exercised control and registered feedback from the digital sphere, while communication with the physical world created a conflict. As the project developed, I limited my verbal involvement with the users. They then attended mostly to the action-space of the digital world, and they proceeded with caution.

With all this equipment for grasping the virtual, it seems that attention’s loyalty (or lack thereof) lies in Kathleen Stewart’s conception of weak theory. One’s attention is dynamic, adaptive, and responsive. It makes no attempt to reduce its intricate and fragile networks. Its connections educate via processes of exduction, but they are also what make it vulnerable. Other cognizers can manipulate it. Structures and functions sometimes hold forms of attention, which is the case with attention to the physical environment and its supposed laws. A screen and some animations can hack into those entities and put a different spin on them. That is what makes us vomit.