@ the intersection of creative storytelling and technology.

Occasionally I post things here to help me remember all the stuff I was thinking about…

A quick NeRF test in Unreal Engine using nerfstudio and Volinga, trained from an iPhone Polycam capture. ~15 minutes from start to finish.

Trying to crack the code on TikTok. This pipeline is set up with a couple of backstops that allow for animation fixes (if desired, in either Maya or Unreal), but for this TikTok experiment I’m using a rapid workflow specifically to record live improvisational performances that can be immediately edited as video in Premiere and then conformed using an EDL with Unreal’s timecoded animation in Sequencer. This lets me quickly subsequence an environment, add lighting and props, and make minor animation adjustments using the MetaHuman body and face rigs right in the timeline. It’s a great way to test some artificially un-intelligent humor without overthinking the process.

One of the cool things about the immersive web is the ability to switch (relatively seamlessly!) between different XR modes. I authored a three.js scene to interact with a 3D asset across browser, headset (VR or AR), and Apple’s Quick Look (converting a GLB file to USDZ on the fly). Author once, deploy everywhere!
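
The mode switching is less magical than it sounds. Here’s a stripped-down sketch (not the production scene; the asset path and scene setup are placeholders) of how a single three.js renderer can offer VR and AR entry points on whatever the device supports:

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { VRButton } from 'three/addons/webxr/VRButton.js';
import { ARButton } from 'three/addons/webxr/ARButton.js';

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true; // the same renderer serves desktop, VR, and AR sessions
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.01, 50);
camera.position.set(0, 1.6, 2);
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

// 'asset.glb' stands in for whatever model the scene embeds.
new GLTFLoader().load('asset.glb', (gltf) => scene.add(gltf.scene));

// Offer an entry button for each session type the device supports
// (on a headset that supports both, you'd offset one button with CSS).
if (navigator.xr) {
  navigator.xr.isSessionSupported('immersive-vr').then((supported) => {
    if (supported) document.body.appendChild(VRButton.createButton(renderer));
  });
  navigator.xr.isSessionSupported('immersive-ar').then((supported) => {
    if (supported) document.body.appendChild(ARButton.createButton(renderer));
  });
}

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```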

Here is a Quest Web Launch link to the scene: Do Not Sit Here!

*The iOS 16 update recently broke some aspects of WebXR rendering, which will make this appear black on mobile. There is a fix for this in the next update… so leaving it for now.

Testing out a three.js/USDZ scene with both WebXR *and* Apple Quick Look support. This should load in AR on iOS devices by clicking the Apple Reality icon in the embedded three.js scene… if you have a connected WebXR headset, you should be able to spawn the character in AR/passthrough. I can’t wait until Apple starts supporting morph targets within glTF files.
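
The GLB-to-USDZ conversion is the part people ask about most. Roughly, it looks like this; a minimal sketch using three.js’s USDZExporter, with the asset and link handling simplified:

```js
import { USDZExporter } from 'three/addons/exporters/USDZExporter.js';

// Build a .usdz in the browser from the already-loaded glTF scene and expose it
// to iOS via an <a rel="ar"> link, which is what hands the file off to Quick Look.
async function addQuickLookLink(gltfScene) {
  const exporter = new USDZExporter();
  // Older three.js builds return the bytes from parse(); newer ones also offer parseAsync().
  const arraybuffer = await exporter.parse(gltfScene);
  const blob = new Blob([arraybuffer], { type: 'model/vnd.usdz+zip' });

  const link = document.createElement('a');
  link.rel = 'ar';                         // rel="ar" triggers AR Quick Look on iOS Safari
  link.href = URL.createObjectURL(blob);
  link.download = 'asset.usdz';
  link.appendChild(document.createElement('img')); // Quick Look links need an <img> child
  document.body.appendChild(link);
}
```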

Extending my WebXR knowledge with some basic VR teleportation mechanics.

Link > VR version

Link > Quest Web Launch
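
The core of the teleport mechanic is just a controller raycast against the floor plus a move of the camera rig. A rough sketch, assuming a “dolly” group that parents the camera and a walkable floor mesh (my names here, not the project’s):

```js
import * as THREE from 'three';

// Minimal teleport: on trigger release, raycast from the controller to the floor
// and move the player rig (a group that parents the camera) to the hit point.
function enableTeleport(renderer, dolly, floor) {
  const raycaster = new THREE.Raycaster();
  const tempMatrix = new THREE.Matrix4();
  const controller = renderer.xr.getController(0);
  dolly.add(controller);

  controller.addEventListener('selectend', () => {
    // Aim the ray along the controller's -Z axis.
    tempMatrix.identity().extractRotation(controller.matrixWorld);
    raycaster.ray.origin.setFromMatrixPosition(controller.matrixWorld);
    raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);

    const hit = raycaster.intersectObject(floor)[0];
    if (hit) {
      // Keep the rig's current height; just translate it across the floor.
      dolly.position.set(hit.point.x, dolly.position.y, hit.point.z);
    }
  });
}
```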

Testing WebXR functionality. Using three.js to learn how glTF importers work. Added camera controls that dynamically follow the head bone on an imported skinned mesh. The AR version includes passthrough and should work on both Quest and Quest Pro. I haven’t tested extensively, but I’m excited about the potential of this platform.

Link > AR Version

Link > Quest Web Launch
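
The head-follow camera boils down to grabbing the head bone off the imported skinned mesh and aiming at it every frame. A minimal sketch (the file and bone names are placeholders for the real rig):

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';

const headTarget = new THREE.Vector3();
let headBone = null;

// Load the character and pull the head bone off the first skinned mesh found.
function loadCharacter(scene) {
  new GLTFLoader().load('character.glb', (gltf) => {
    gltf.scene.traverse((node) => {
      if (node.isSkinnedMesh && !headBone) {
        headBone = node.skeleton.getBoneByName('head'); // depends on the rig's naming
      }
    });
    scene.add(gltf.scene);
  });
}

// Call once per frame from the render loop.
function followHead(camera) {
  if (!headBone) return;
  headBone.getWorldPosition(headTarget);
  camera.lookAt(headTarget); // or hand headTarget to OrbitControls.target instead
}
```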

Using the Quest browser to pixel stream the UE5 editor. Live Link mocap data streaming into the level while using Unreal’s Remote Control. Custom OSC/WebSocket hooks to trigger OBS recording.
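
My actual trigger path was custom, but to make the idea concrete, here’s a hedged Node sketch of the same pattern using the node-osc and obs-websocket-js packages (with obs-websocket enabled in OBS); the /obs/record address and credentials are made up for the example:

```js
import { Server } from 'node-osc';
import OBSWebSocket from 'obs-websocket-js';

const obs = new OBSWebSocket();
await obs.connect('ws://127.0.0.1:4455', 'obs-websocket-password'); // placeholder credentials

// Listen for OSC on port 9000; '/obs/record' is an address invented for this sketch.
const osc = new Server(9000, '0.0.0.0');
osc.on('message', async ([address, ...args]) => {
  if (address !== '/obs/record') return;
  const start = args[0] === 1;
  await obs.call(start ? 'StartRecord' : 'StopRecord'); // obs-websocket v5 request names
  console.log(start ? 'OBS recording started' : 'OBS recording stopped');
});
```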

Testing the ability to take real-time Live Link motion capture data from Unreal into a packaged Android build on the Quest Pro. I wanted to try combining the face and eye tracking components Meta supplies as part of their Movement SDK. I had hoped to use their finger tracking, but it’s tied to the body tracking component, and unfortunately there didn’t seem to be an easy way to set this up for my use case: the real-time Live Link motion data requires the use of animation Blueprints, while the Movement SDK body tracking is exclusively skeletal mesh based. I’m sure this could be extended in the engine source in the future. I added a simple input on the motion controller to swap skeletal meshes at runtime so I could see myself as different characters on the fly. The kind of visual feedback you get as a performer far surpasses the “magic mirror” that I built for the actors on Meet The Voxels.

Testing out body, face, and eye tracking on the Quest Pro in Unreal using my head/hand meshes from an ARKit rig I had set up a few years ago. Most of the blendshapes required are also available in ARKit (with different naming), so this worked pretty much out of the box. The Unreal examples Meta provides aren’t that inspired, so I ripped the assets from their Unity GitHub example to better understand how they set up their rigs.

Taking the XR approach to digital twins a little further. I tested more Quest Pro color passthrough interactions to expand on the idea of blending the real and the virtual.

Going down the VR passthrough rabbit hole with the Quest Pro, I wanted to see what it would be like to overlay the digital twin of my house and then reveal it by wiping away the virtual space. I was surprised at how well this worked, and at the relative accuracy of the modeling skills I’d built up over the pandemic. 😉

The first thing I wanted to do with the Quest Pro was to test color passthrough with the idea of streaming animation live in my living room. This is a little test I packaged for Android out of Unreal using some motion data I had captured a few months back.

Experimenting with a way to record camera and Live Link mocap in VR with the help of my wife, daughter, and Unreal Engine!

Since I don’t have access to an optical volume to do proper virtual camera work, I set up a system in Unreal Engine to record my camera data while streaming Live Link performance capture to my Quest. I added a couple of input bindings to my motion controllers so I can quickly calibrate performers and easily start and stop Sequencer. Surprisingly, I experienced very little latency, which is pretty amazing considering how saturated the network was with Xsens, Live Link, multiple NDI senders and receivers, and the onboard recording of the HMD output.

The render is UE5, and obviously there’s no cleanup on the mocap.

Auditioning MetaHumans is as easy as CTRL-C and CTRL-V!

There are so many things that I’m blown away by when it comes to working with MetaHumans… too many to list here. I’ve done a lot of retargeting of body animation between rigs over the years, but copying and pasting facial performances across Epic’s face control rig feels like I’m working in a word processing program from the ’80s… and it’s next-level cool.

Leveraging UE5 to test Google’s Immersive Stream for XR platform (AR and web browser), utilizing a little homebrew full-body and facial performance capture. Looking forward to an OpenXR version of this tech for HMDs. 😉

Been working on a real-time previz pipeline in Unreal using Live Link Face and Xsens. This motion and face data has not been cleaned up. The body is recorded directly in MVN, HD-processed, and retargeted with skeletal dimensions that match the character. The face is processed from the calibrated Live Link CSV curves recorded on the iPhone. Switchboard, running through a Multi-User session, triggers takes in Sequencer, MVN, and the Live Link Face app.

The Google Cloud stream is running off a Linux build of UE 4.27.2 using the Vulkan API.

Looking at ways to take MetaHuman performance capture from Unreal into a traditional Maya-based animation pipeline, specifically for body and face cleanup. This video shows raw motion that has been retargeted to several MetaHuman assets. The face and body controls from Unreal have been imported and hooked up to the source assets in Maya. There have been no modifications to the data in Maya; I’m just illustrating how a round trip would theoretically work in a production environment. An Advanced Skeleton control rig drives the body via motion capture retargeting, and the MetaHuman face controls are available for tweaking the facial performance in Maya.

The new Mesh to MetaHuman tool in UE5 is insane! If you’ve spent any time pushing a scan through a Wrap, Maya, UE workflow to create custom MetaHumans… then we can agree this plugin is magic!

Mesh to MetaHuman

Immersive virtual memories are already a thing! My daughter traveled back in time to revisit her bedroom from when she was 8 years old. 🙂

tl;dr: Several years ago I took photogrammetry reference of my daughter’s room. I never really considered that I was capturing the details and preserving a memory for her for years to come.

It’s interesting to contemplate how today’s photogrammetry is likely the equivalent of those grainy 2D photos from our childhoods. It’s not hard to imagine that, as technology advances, the quality and resolution of these spatial memories will greatly improve, in much the same way that today’s 12MP smartphone cameras dwarf our parents’ Instamatics.

When experienced in virtual reality, these immersive moments can be “relived” to some extent. Being able to go back in time and visit my childhood bedroom to check out my C64 setup would almost feel like time travel. I’m excited for this digital generation of kids and all the cool things they are already getting to experience.

I tried streaming my full-body avatar into an asymmetric VR experience alongside my daughter. There was a little latency, mostly as a result of multiple NDI streams being recorded simultaneously on the VR host machine. I enabled basic physics on the skeletal mesh to allow for collisions between the VR player/environment and the streaming avatar. The big takeaway was that my daughter spent most of the session climbing up and down on me and sticking her head into my mesh to see the backside of my eyeballs! 😂

Thought I would explore live-streaming full-body and facial performance in Unreal. I used an Xsens suit configured to stream over my home WiFi, created an OSC interface on an iPad to control cameras and trigger OBS records, and streamed a live camera feed into the game environment via NDI. Additionally, I was testing how accurately my one-to-one virtual space matches the real world.

This year, for obvious reasons, the Girl Scouts are doing virtual cookie sales… so my daughter and I created her pitch in VR! I used Unreal Engine and leveraged the open-source VRE plugin for most of the heavy lifting to create a multiplayer version of our home running off a listen server over LAN. I packaged a build that runs on the Quest 2, replicating OVRLipSync for lip-flap; NDI sources the live video feed in game; OBS collects and records all the NDI streams.

A little VR house tour WIP:

I set up an Unreal Engine Live Link OSC controller to record real-time facial performance, using the OSC messaging protocol to communicate between the UE4 editor and the Live Link Face app. I can sync slates and start/stop takes simultaneously in Sequencer and on the device. I added some simple multi-cam camera switching and lens control for illustration purposes, plus a couple of extra calls to hide/show meshes and swap textures. I’m using a Python script to grab OSC messages in OBS to record video previews. There is a little lag in the OBS records, but the frame rate in the UE4 editor is ridiculously fast.
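
The controller itself lives in Unreal, but the message flow is simple. Here’s a hedged Node sketch (node-osc, with placeholder addresses and ports; the Live Link Face app’s real OSC command set is in Epic’s docs) of the slate/take sync idea:

```js
import { Client } from 'node-osc';

// Placeholder endpoints: the UE editor's OSC server and the iPhone running Live Link Face.
const editor = new Client('127.0.0.1', 8000);
const phone = new Client('10.0.0.42', 8001);

// Send the same slate/take to both sides, then a record-start, so the Sequencer
// take and the device recording stay in sync. Addresses here are illustrative only.
function startTake(slate, take) {
  for (const target of [editor, phone]) {
    target.send('/Slate', slate);
    target.send('/Take', take);
    target.send('/RecordStart', slate, take);
  }
}

function stopTake() {
  for (const target of [editor, phone]) {
    target.send('/RecordStop');
  }
}

startTake('FacialTest', 3);
```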

On a machine learning quest to build custom training models for tracking and detection… I’m really blown away by how robust MediaPipe is out of the box. These ready-to-use, open-source solutions Google has provided to do cross-platform ML inference on mobile/desktop GPUs have inspired me to ponder some cool use cases…
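
As a taste of how little code these solutions need, here’s a minimal browser sketch using MediaPipe’s legacy JavaScript Solutions API to run GPU-accelerated pose tracking on a webcam (not the detection model from the bike ride below):

```js
import { Pose } from '@mediapipe/pose';
import { Camera } from '@mediapipe/camera_utils';

const video = document.querySelector('video');

// MediaPipe fetches its model/wasm files itself; the CDN path is the usual default.
const pose = new Pose({
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/pose/${file}`,
});
pose.setOptions({
  modelComplexity: 1,
  smoothLandmarks: true,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5,
});
pose.onResults((results) => {
  // results.poseLandmarks: 33 normalized landmarks ({x, y, z, visibility}) per frame.
  if (results.poseLandmarks) console.log('nose', results.poseLandmarks[0]);
});

// Pump webcam frames into the solution.
new Camera(video, {
  onFrame: async () => { await pose.send({ image: video }); },
  width: 640,
  height: 480,
}).start();
```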

I took my daughter out for a bike ride to teach her about GPU-accelerated real-time object detection. 🙂

First attempt at creating a deepfake. The source data consisted of 1,500 images extracted from a 3-minute video I shot of myself talking. I trained the model for approximately 12 hours (100k iterations) against the destination video of an audition that my wife had recorded. Some obvious improvements would be adding more source data covering a wider range of poses and eye directions, more training time/iterations, and a post comp pass for better color matching.

Driving my DIY photogrammetry head with an iPhone in UE4 in real time:

My wife shaved my head with the dog clippers, my 10-year-old captured me on the iPhone @ 4K/60p, Metashape solved the dense cloud/mesh/texture, and Wrap3D retopologized it for clean UVs and blendshape creation in Maya.

Testing collision with physics-based grips using the VRE plugin for Unreal Engine:

A couple of weeks into the pandemic, two things emerged: a messy house and my attempt at creating a room-scale version of it: