Portrait: Patrick Münnich

The project is both a video game and a dance performance. The main narrative that the audience hears in the work revolves around an urban legend about an encounter with giants, but the narrative that guides the player follows the mutation of stories when they are used for disinformation. In the game world, the player hears voices in different locations that project various attitudes towards the giants’ narratives—from disbelief to exaggeration, to fake authority supposedly confirmed by “researchers,” and so on. 


I initially conceptualised and demoed the project in Linz at the Ars Electronica Founding Lab in autumn 2023 with Julie-Michèle Morin (dramaturg) and Junjian Wang (dancer). After the initial demo was shown to the public, I continued exploring the choreography and the calibration/uncalibration of the MoCap suits at iMAL, focusing on how these aspects depended on the dancer’s movements and the theatrical, audience, and dramaturgical elements of the work. Working with CREW was particularly beneficial, as their expertise in XR performances and audience engagement validated some of my ideas about the dramaturgy and helped me develop alterations for future audience experiences.

However, in this particular work there are no procedurally generated objects, because it is performed live in Edit mode from my laptop, which outputs video at over-4K resolution. Procedurally generated objects can destabilise the entire process by, well, crashing it, adding yet another layer of error and unpredictability.

After the world is built, I move on to the characters. If the work is interactive, I consider who Player 1 and Player 2 are—what they look like, what clothes they wear. For this, I use Metahuman and ZBrush for additional feature sculpting. If the clothes need to be customised beyond standard options, I design and sew them in Marvelous Designer, then export to ZBrush for mesh fixing, and finally to Maya for remeshing. For texturing, I use Marmoset and Substance Painter. Once the avatars are created, I calibrate the rig to match the motion capture suit I’m using.

When all these elements are in place, the dramaturgy and dance can begin. This marks the point of liveness and performance. The motion capture data is streamed live into the Unreal Engine world, which is then displayed on a multi-projector setup. From that point, there’s a lot of back and forth—the concept, dramaturgy, and dance evolve to fit the vision, and vice versa.

Letta Shtohryn: So many layers of anxiety too.

The first Machinima work I created was in 2019, titled Algorithmic Oracle. In this piece, I used The Sims 3 and its somewhat random fire algorithm to recreate the event of my own house catching fire (based on true events). I would set the initial actions for the avatars and then film the outcome generated by the game. After that, without saving, I would start again. I became the camerawoman of my own house catching fire in The Sims, observing what the algorithm decided for me. I think I captured more than 100 different scenarios, but only 10 made it into the final work.

It’s fascinating to see what people decide to do with innovative technological devices. There’s a certain weirdness that emerges as people experiment with everything, and eventually, these innovations become standardised. I’m not claiming technological novelty in my work; I’m simply combining relevant tools at my disposal, though I am intrigued by the unconventional uses of technology.

Since I work with speculation, I’m particularly inspired by gaps in knowledge, whether current or historical—these gaps are where speculation thrives. In “Чули? Чули / Chuly? Chuly”, I collaborated with Heritage Malta to adapt a 3D-scanned model of a roughly 5,000-year-old Maltese temple into my work. It’s an unusual subject: archaeological, yet tinged with science fiction due to its age and the lack of data surrounding it, which leaves space for a myriad of urban legends.

I’m also fascinated by speculative futures. I once worked on a project with geologists to identify Martian lava tubes suitable for the first human habitats. My role was to visualise these concepts, and the project provided enough inspiration for my installation Life on Mars Might Not Want to Be Found (2022). That experience has now led to a project I’m currently working on—a CGI documentary about the largest meteorite in Europe, which fell in Ukraine, questioning space heritage and its ownership.

As for immersion, I believe it can be found in many things. For me, VR alone isn’t quite enough because my body eventually realises it’s not actually in the setting it perceives. AR adds to what is already present, and MR introduces interaction. But immersion can exist both within XR and outside of it—it can be found in a seventeenth-century panorama, a podcast, a cave painting, or a story.

Letta Shtohryn: I want to move away from defining XR solely as VR experiences, as it’s often used in contexts where it’s not necessary. Extended reality is also a form of immersion. Not all immersions are extended reality, and not all extended realities are immersive, but I believe we should broaden the tools we use for extended reality and combine them with those used in theatre, stagecraft, visual arts, and storytelling—even those that are quite analogue. That’s the direction I’d like to see it take, and I think it’s already beginning to expand in that way. There are initiatives like this residency, as well as numerous programmes, grants, and projects aimed at making XR something more than just a single type of technology—something broader and more inclusive of other disciplines. Perhaps it’s just wishful thinking, or maybe it’s the bubble I’m in, but I’d like to see it delay standardisation and evolve into something weirder.

Interview with Letta Shtohryn by Céline Delatte, Communications Officer for RIT’s partner Dark Euphoria, as part of the iMAL artistic residency in Brussels.