Synthetic Resemblance


An immersive performance exploring self-actualisation through emerging technologies.

Date: March 2023
Category: Generative Design, Interaction Design, 3D Modelling
Format: 1-minute virtual production
Tools: RunwayML, Blender, Unreal Engine, Xsens MVN motion capture system, DaVinci Resolve


For the process ➔ click HERE to view the PDF deck.




Synthetic Resemblance utilises cutting-edge technologies such as motion capture, generative AI, and real-time rendering. The performance unfolds in three layers.

First layer: AI-generated selfies, and a world of objects mapped with those selfies, illustrate synthetic perception in the digital realm. The selfies are generated by a StyleGAN model trained on a dataset of 500 of my own selfies, and they reveal the perceptual differences between humans and machines.
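As a rough sketch of this step (not the project's actual pipeline), the snippet below samples new faces from a generator trained this way, following the documented usage of NVIDIA's stylegan2-ada-pytorch repository; the checkpoint name selfies.pkl is a placeholder.

```python
# Minimal sketch: sampling AI selfies from a fine-tuned StyleGAN2 generator,
# per NVIDIA's stylegan2-ada-pytorch usage. The repo's dnnlib/torch_utils
# packages must be importable for the pickle to load; "selfies.pkl" is a
# placeholder for the checkpoint trained on the 500-selfie dataset.
import pickle
import torch

with open("selfies.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()  # generator (EMA weights)

z = torch.randn([1, G.z_dim]).cuda()    # random latent code
c = None                                # face model: no class labels
img = G(z, c)                           # NCHW float32, range [-1, +1]

# Scale to an 8-bit HWC image for saving or inspection.
img = ((img.clamp(-1, 1) + 1) * 127.5)[0].permute(1, 2, 0)
img = img.to(torch.uint8).cpu().numpy()
```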

Second layer: choreography and improvised performance translate reality into the digital realm. Sensors on the dancer's motion-capture suit are randomly assigned to virtual objects, so neither the dancer nor the audience can predict which movement drives which object. This absence of familiar cues adds to the mystery of the objects' motion.
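As a concrete illustration, here is a minimal Python sketch of that random assignment, assuming the 23 body segments of the Xsens MVN biomechanical model and a hypothetical engine callback; the performance itself presumably handled this wiring inside Unreal Engine.

```python
import random

# Minimal sketch of the random sensor-to-object assignment described above.
# Xsens MVN streams transforms for 23 body segments; all names below are
# hypothetical, not the project's actual identifiers.
SEGMENTS = [f"segment_{i:02d}" for i in range(23)]
OBJECTS = [f"virtual_object_{i:02d}" for i in range(23)]

random.shuffle(OBJECTS)                 # break the familiar 1:1 body mapping
mapping = dict(zip(SEGMENTS, OBJECTS))  # each sensor drives a random object

def apply_frame(frame, set_transform):
    """Drive each virtual object with its randomly paired sensor's transform.

    frame: {segment_name: transform} for one frame of the mocap stream.
    set_transform: engine glue, e.g. a thin wrapper around Unreal's
    AActor::SetActorTransform (a hypothetical binding, not project code).
    """
    for segment, transform in frame.items():
        set_transform(mapping[segment], transform)
```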

Last layer: the audience in the studio witnesses the process of interaction and amalgamation between the digital and physical realities, observing the fusion of the two realms.






↑ A process video showing how StyleGAN (an AI/ML model specialised in generating human faces) was trained to generate my selfies

↑ Diagrams of the process behind the AI-generated objects


↑ How the AI-generated selfies were translated into the animated objects used in the virtual world of the virtual production



↑ Storyboard for the choreography and performance


↑ Behind the scenes at the virtual production studio © WPP


