Audiences know that you can’t make a film without a camera. For a long time, Hollywood-style editing and cinematography developed to minimize the audience’s awareness of the camera. Visual acknowledgement of the camera’s presence (lens flares, water or fog on the lens) would have been unacceptable in these films, but it was permissible in documentary films, and eventually it became part of the cinematic language of narrative films. In a weird way, acknowledging that the camera is “there” capturing an image became a way to draw people into that image, rather than pushing them away from it.
When you move the “camera” into the sunlight in The Last of Us, light reveals a dirty lens. During a cut-scene the “camera” shakes a bit. These artificial cinematic “imperfections” are animated in the service of “realism”. But visual cues that say “hey, there’s a camera here!” when there is a camera become a lie when there is no camera, no lens, no visual “imperfection”.
And maybe a certain quest for “realism” dooms these games to be copies of films. They can’t invent their own imagery because they make meaning out of the familiar. Other games, like those made in Twine or other text-focused styles, or more abstract games not seeking to replicate the audiovisual cinematic experience, are freer for it. Their author-creators are not bound by some attempt to create an “objective” reality within their electronic world. They don’t have to recycle cinematic images to make their spaces legible.
Realism isn’t about the real world; it’s about how much something reminds us of the real world, how much it DOESN’T challenge our understanding of the world, how well its digital and our mental models mesh.
Brian Taylor, “The Last of Us and Pittsburgh”