@ the intersection of creative storytelling and technology.

I’ve been doing my best to social distance by going down digital rabbit holes…

I tried streaming my full-body avatar into an asymmetric VR experience alongside my daughter. There was a little latency, mostly a result of multiple NDI streams being recorded simultaneously on the VR host machine. I enabled basic physics on the skeletal mesh to allow for collision between the VR player/environment and the streaming avatar. The big takeaway was that my daughter spent most of the session climbing up and down on me and sticking her head into my mesh to see the backside of my eyeballs! 😂

I thought I would explore live-streaming full-body and facial performance in Unreal. I used an Xsens suit configured to stream over my home WiFi, and I created an OSC interface on an iPad to control cameras and trigger OBS records, with NDI streaming a live camera feed into the game environment. I was also testing how accurately my one-to-one virtual space matches the real world.
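
The control layer is easy to mock up in Python with python-osc; a minimal sketch below shows the kind of messages the iPad surface sends (the addresses, host, and port are placeholders, not my actual layout):

```python
# pip install python-osc
# Sketch of the control messages an OSC surface might send to the Unreal host.
# Addresses, host, and port below are placeholders.
from pythonosc.udp_client import SimpleUDPClient

UE4_HOST = "192.168.1.50"   # machine running the Unreal editor/game
OSC_PORT = 8000             # port the OSC receiver is listening on

client = SimpleUDPClient(UE4_HOST, OSC_PORT)

client.send_message("/camera/select", 2)     # cut to virtual camera #2
client.send_message("/camera/focal", 35.0)   # set lens focal length
client.send_message("/obs/record", 1)        # 1 = start OBS record, 0 = stop
```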

This year, for obvious reasons, the Girl Scouts are doing virtual cookie sales… so my daughter and I created her pitch in VR! I used Unreal Engine and leveraged the open-source VRE plugin for most of the heavy lifting to create a multiplayer version of our home running off a listen server over LAN. I packaged a build that runs on Quest 2, replicating OVRLipSync for lip-flap, with NDI to source a live video feed in-game and OBS to collect and record all the NDI streams.

A little VR house tour WIP:

I’m experimenting with optimizing my assets in the glTF binary (.glb) format while considering a WebXR implementation. This model is ~50k tris and clocks in at around 8 MB.
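
A quick way to keep an eye on those budgets is to inspect the .glb directly; here's a minimal sketch using trimesh (the filename is a placeholder):

```python
# pip install trimesh
# Report total triangle count and file size for a .glb asset.
import os
import trimesh

path = "house_tour.glb"  # placeholder asset name

scene = trimesh.load(path)  # a .glb loads as a Scene of named meshes
tris = sum(len(geom.faces) for geom in scene.geometry.values())
size_mb = os.path.getsize(path) / (1024 * 1024)

print(f"{path}: {tris:,} tris, {size_mb:.1f} MB")
```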

I set up an Unreal Engine LiveLink OSC controller to record real-time facial performance, using the OSC messaging protocol to communicate between the UE4 editor and the Live Link Face app. I can sync slates and start/stop takes simultaneously in Sequencer and on the device. I added some simple multi-cam camera switching and lens control for illustration purposes, plus a couple of extra calls to hide/show meshes and swap textures. I’m using a Python script to grab OSC messages in OBS to record video previews. There is a little lag in the OBS records, but the frame rate in the UE4 editor is ridiculously fast.
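
Roughly, the OBS side of that script could look something like the sketch below, assuming python-osc is importable from OBS's Python environment (the /take/record address and port are placeholders, not my actual layout):

```python
# Minimal sketch of an OBS script that toggles recording on an OSC message.
# Assumes python-osc is importable from OBS's Python environment;
# the "/take/record" address and port are placeholders.
import threading
import obspython as obs
from pythonosc import dispatcher, osc_server

OSC_PORT = 9000
_server = None

def _on_record(address, value):
    # value: 1 = start recording, 0 = stop recording
    if value:
        obs.obs_frontend_recording_start()
    else:
        obs.obs_frontend_recording_stop()

def script_description():
    return "Start/stop the OBS record from incoming OSC messages."

def script_load(settings):
    global _server
    disp = dispatcher.Dispatcher()
    disp.map("/take/record", _on_record)
    _server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", OSC_PORT), disp)
    threading.Thread(target=_server.serve_forever, daemon=True).start()

def script_unload():
    if _server:
        _server.shutdown()
```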

On a machine learning quest to build custom training models for tracking and detection… I’m really blown away by how robust MediaPipe is out of the box. The ready-to-use, open-source solutions Google has provided for cross-platform ML inference on mobile/desktop GPUs have inspired me to ponder some cool use cases…
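
As a taste of how little code it takes, here's a minimal hand-tracking sketch on a webcam feed using MediaPipe's out-of-the-box solution:

```python
# pip install mediapipe opencv-python
# Minimal MediaPipe example: hand landmark tracking on a live webcam feed.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for lm in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, lm, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("mediapipe", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```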

I took my daughter out for a bike ride to teach her about GPU-accelerated realtime object detection. 🙂

First attempt at creating a DeepFake. The source data consisted of 1,500 images extracted from a 3-minute video I shot of myself talking. I trained the model for approximately 12 hours (100k iterations) against the destination video of an audition that my wife had recorded. Some obvious improvements would be adding more source data covering a wider range of poses and eye directions, more training time/iterations, and a post-comp pass for better color matching.
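
The frame-extraction step is straightforward; here's a rough OpenCV sketch that samples every Nth frame to land near a target image count (paths and counts are placeholders):

```python
# pip install opencv-python
# Pull a fixed number of stills out of a source clip by sampling every Nth frame.
import os
import cv2

SRC = "me_talking.mp4"     # placeholder source clip
OUT_DIR = "source_faces"
TARGET_FRAMES = 1500

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(SRC)
total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
stride = max(1, total // TARGET_FRAMES)

saved = idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % stride == 0:
        cv2.imwrite(os.path.join(OUT_DIR, f"frame_{saved:05d}.jpg"), frame)
        saved += 1
    idx += 1
cap.release()
print(f"saved {saved} frames from {total}")
```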

Driving my DIY photogrammetry head with an iPhone in UE4 in real-time:

My wife shaved my head with the dog clippers, my 10-year-old captured me on the iPhone at 4K/60p, Metashape solved the dense cloud/mesh/texture, and Wrap3D retopologized the result for good UVs and blendshape creation in Maya.
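
The Metashape portion of that pipeline can also be scripted; here's a rough sketch against the Metashape 1.x Python API (parameter values and paths are placeholders, not the settings I actually used):

```python
# Rough shape of the Metashape solve via its 1.x Python API.
# Paths and parameter values are placeholders.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(sorted(glob.glob("head_capture/*.jpg")))

chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()

chunk.buildDepthMaps(downscale=2, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()
chunk.buildModel(surface_type=Metashape.Arbitrary,
                 source_data=Metashape.DenseCloudData)
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending, texture_size=8192)

doc.save("head_scan.psx")
```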

Testing collision with physics-based grips using the VRE plugin for Unreal Engine:

Experimenting with house-scale VR: