Crash FX Group, the 3D scanning studio behind the incredible face scans in the multi-award-winning Marvel's Spider-Man game for PS4, gets hands-on with ESPER's state-of-the-art LightCage 3D head scanning rig.
We interviewed experienced artists who have captured scans of humans and assets for large projects with clients including EA, Sony, Netflix, and DICE, amongst others.
Crash CEO Tony Stranges and 3D Scan Manager Ed McDonough share their experience of using ESPER's demo scanning rig to capture photorealistic scans.
Why did you decide to test the ESPER LightCage?
We do a lot of photogrammetry scanning. Projects we can talk about include Spider-Man for PS4, for which we scanned all the heads, and we also did scans for Long Shot Homecoming. This is our first experience with the ESPER light stage. We've used a lot of ESPER products in the past, such as ESPER Triggers and ESPER Powers, which have worked fantastically with our rigs. We're very excited to see exactly how we can incorporate this into our workflow.
What makes this 3D scanning rig so special?
It's a 20-camera set-up: sixteen 2000Ds to basically capture the rough face, plus three 7D projection cameras for all the different lighting patterns projected onto the geometry. What we're traditionally used to is a series of strobes, which is just a pop flash. Whereas in this light stage rig we get very, very fast flashes and a lot more information, a lot of image data that we normally wouldn't have been able to get: specular maps, straight diffuse maps, reflection maps, and normal maps.
So, what are your thoughts?
We were doing a few different things: testing heads, testing hands, testing the speed of the whole system and the speed of the cameras as well, and making sure the sync of the lights and cameras is consistent across a multi-camera set-up. Which has all been great.
Everything is fully manageable, and you can keep your eyes open the whole time. The software is actually very easy to use: I'm not a programmer, and I was able to pick it up in a few seconds.
We'll go away and process it and get all the images aligned. Once we do that, we'll be able to take all the images, re-project them onto the geometry, and then kick out a normal map. Having a multi-directional lighting set-up is definitely a change from what we're used to with just strobes. This is going to take our data to the next level.
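The idea of deriving a normal map from shots taken under multiple known lighting directions is the classic photometric stereo technique. As an illustration only (not ESPER's or Crash's actual pipeline, whose details aren't public here), a minimal Lambertian sketch with synthetic data might look like this:

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate per-pixel surface normals from images lit by known directions.

    images:     (k, h, w) grayscale shots, one per light direction
    light_dirs: (k, 3) unit vectors pointing toward each light
    returns:    (h, w, 3) unit surface normals
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                      # (k, h*w) pixel intensities
    # Lambertian model: I = L @ (albedo * n); solve per pixel by least squares
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)
    albedo = np.linalg.norm(G, axis=0)             # (h*w,) recovered albedo
    n = G / np.maximum(albedo, 1e-8)               # normalize to unit normals
    return n.T.reshape(h, w, 3)

# Synthetic demo: a flat patch facing +z, imaged under three unit-length lights
true_n = np.array([0.0, 0.0, 1.0])
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
imgs = np.stack([np.full((4, 4), L[i] @ true_n) for i in range(3)])
normals = photometric_stereo(imgs, L)              # every pixel recovers [0, 0, 1]
```

Real capture adds complications this sketch ignores (specular highlights, shadows, camera calibration, re-projection onto scanned geometry), but the core of turning multi-directional lighting into normals is this per-pixel solve.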