A virtual sculpture of a ghostly person-thing that comes to life in augmented reality.
Full description of the proposed project
A ghostly, human-scale 3D figure is placed on the ground in augmented reality. The figure stands idly but turns to face the user as they approach.
This figure is a 3D model made of synthetic, exaggerated body parts, assembled by combining 3D models of robots and anime figures readily downloadable from the internet.
This is piece #2 of a three-piece series I am creating for my thesis, called Person Thing #1-3. It is a series of experiments in which I attempt to create representations of the yellow woman using synthetic and immaterial mediums and examine my relationship with my creations. Will I see my creations as just another object? As a clone of myself? As a new, separate being?
For some background on the topic of East Asian femininity in Western visual culture, Anne Anlin Cheng writes, “We have roughly marshalled this vast and tenacious history under a broad heuristic that we might roughly label Oriental female objectification, refracted through the lenses of commodity and sexual fetishism. Yet we barely know how to process the political, racial, and ontic complications of confronting a human figure that emerges as and through ornament.” This project aims to explore that elusive gray area in between thing and person by specifically playing with the visual language used in science fiction media to depict both futuristic and atavistic concepts.
I want the viewer to experience an eerie feeling from seeing a breathing, human-sized figure that is grotesque and inorganic in appearance at an intimate distance. I also want the viewer to be confused by the combination of humanlike movements and a mechanical, synthetic avatar.
Why is this concept a meaningful exploration of video sculpture?
I’m creating a virtual sculpture that appears alive.
Plan for installation
The virtual sculpture will be available as an app on the Google Play Store. In the future, I would like to host it on the web using the WebXR framework so that more people can access it across all devices; it’s also easier for people to go to a web address than to download a separate app.
Jillian Zhong is a multimedia creator who appropriates and deconstructs the visual languages of popular and corporate cultures throughout her work. Working in mediums ranging from fashion to web to XR, she focuses on themes such as identity and memory.
I’m working on this piece as part of my thesis. The concept is to create a ghostly figure that has presence in the room through AR. I want to see if I can use synthetic and artificial objects (3D models of cartoonish robots and anime girls found online) to give the perception of life and personhood. The ‘ghost’ is supposed to be human scale and stand idly in the space. When you walk close enough to it, it should turn to face you.
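The proximity interaction above can be sketched as a small piece of logic. The actual project is built in Unity with Vuforia, so the function below is only an illustrative, language-agnostic version of the math; the names, the reaction radius, and the coordinate convention are my assumptions, not the project’s real code.

```python
import math

# Hypothetical reaction radius in meters: the figure only turns
# toward the user once they come closer than this.
TURN_RADIUS = 1.5

def face_user_yaw(figure_pos, user_pos):
    """Given (x, z) ground-plane positions for the figure and the user,
    return the yaw angle (radians) that turns the figure toward the user,
    or None if the user is still outside the reaction radius."""
    dx = user_pos[0] - figure_pos[0]
    dz = user_pos[1] - figure_pos[1]
    distance = math.hypot(dx, dz)
    if distance > TURN_RADIUS:
        return None  # out of range: keep playing the idle animation
    # atan2(dx, dz) so that yaw 0 means "facing +z", as in Unity.
    return math.atan2(dx, dz)
```

In a Unity implementation this would live in an `Update` loop, smoothly interpolating the figure’s rotation toward the returned yaw rather than snapping to it.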
Much of my time so far has been spent learning how to use Blender. I originally wanted to create some sort of Frankenstein-looking humanoid by combining multiple anime-style models, but my 3D modeling skills weren’t advanced enough. Instead, I created a hybrid between a human and a robot. It was always intended to not have a face. It currently does not have any materials on it, but I do like the transparent, ghostly look.
Rigging and animations were done in Mixamo:
This is still a work in progress. I need to add the user interaction as well as find a better workflow for developing the rest of this project. Vuforia ground plane detection is not available for my phone (a OnePlus 6), making it really hard for me to test this. I ended up uploading the build to my old phone, a Nexus 6P. This phone is super laggy and detected the ground plane really slowly, and it was really hard for me to even take a screenshot for documentation. The battery also kept running out, as the phone is probably at the end of its lifespan; I kept having to switch between charging it with a wall charger and plugging it into the computer to install the build. I need to find another device I can test on and take screen recordings for documentation. In the meantime, I will print out the Vuforia ground plane tracker and use it with the webcam.
Because of the inconvenience of testing on the old Nexus 6P, the current model scale is way too large and needs to be tweaked.
I worked on this installation with Defne. The assignment was to create a video portrait that “give[s] us an idea of the person or persons”. We were inspired by the personal and individualistic nature of people’s notebooks.
For the filming, I asked people at ITP if they were willing to have their notebooks filmed. They were instructed to flip through their own notebook however they liked.
Defne edited the footage together into one video. It was then projected with a PICO projector onto a blank notebook that was taped open. We used MadMapper to map the projection onto the notebook.
I worked with Jenny and Defne to create this light sculpture. The concept was to make a sculpture that changes how users view sirens. The light pattern of the siren slows down when users touch the surface in front, giving them control over a chaotic light that typically symbolizes panic.
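The touch interaction could be sketched as a simple mapping from a touch reading to the siren’s blink period. The sculpture’s actual code isn’t shown here, so everything below is a hypothetical illustration: the constants and function names are my assumptions about how “touch slows the light down” might be implemented.

```python
# Hypothetical timing constants (milliseconds).
BASE_PERIOD_MS = 200    # fast, chaotic strobe when no one is touching
SLOW_PERIOD_MS = 1200   # calm, slow pulse at full contact

def blink_period(touch_amount):
    """Linearly interpolate the siren's blink period from a normalized
    touch reading (0.0 = no touch, 1.0 = full contact)."""
    t = min(max(touch_amount, 0.0), 1.0)  # clamp noisy sensor values
    return BASE_PERIOD_MS + t * (SLOW_PERIOD_MS - BASE_PERIOD_MS)
```

On a microcontroller this value would feed the delay between light-pattern steps, so firmer or fuller touch reads as the siren calming down.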