
OMOTE Real-time Face Tracking 3D Projection Mapping

Forget the faces of historic monuments: the new frontier of 3D projection mapping is the human face.

Created by Nobumichi Asai and friends, the project has released few technical details so far, but from what can be found in this Tumblr post, it’s clear that step one is a 3D scan of the model’s face.

Here is a cleaned-up translation of the text from that post:

Let me continue by explaining how this face mapping was made.
The title OMOTE (meaning “face” or “surface”) comes from Noh theater, and the approach borrows from the way a Noh mask is made: the idea is to create a “surface” that covers the face. Pursuing accuracy was an important theme, since the output had to represent very delicate make-up art. The first step was a 3D laser scan of the model’s face.

I suspect that a structured light scanner was used to capture the geometry of the model’s face rather than a 3D laser scanner. Nonetheless, this is a very cool application of 3D projection mapping.
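For illustration only, here is a minimal sketch of the triangulation step behind a structured-light scanner, assuming a calibrated camera at the origin and a projector casting coded stripes; the names and geometry are illustrative, not anything published by the OMOTE team:

```python
import numpy as np

def stripe_depth(cam_ray, proj_pos, stripe_normal):
    """Classic structured-light triangulation: intersect the camera
    ray for a pixel with the plane of light cast by the projector
    stripe decoded at that pixel. The camera sits at the origin.

    cam_ray       -- unit direction through the pixel
    proj_pos      -- projector position in camera coordinates
    stripe_normal -- normal of the decoded stripe's light plane
    """
    t = np.dot(stripe_normal, proj_pos) / np.dot(stripe_normal, cam_ray)
    return t * cam_ray  # 3D point on the scanned surface
```

Repeating this for every pixel whose stripe code can be decoded yields the dense face geometry that the projection is later warped onto.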


Video: “OMOTE / Real-Time Face Tracking & Projection Mapping,” from something wonderful on Vimeo.

Google's Project Tango 3D Capture Device

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, driving user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be-released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer-level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful and diverse range of products in the future.

Learn more about the project here


Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store, meaning there will probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine, tools whose strengths and weaknesses developers already know, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer Jasper Brekelmans has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site, http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.


Leap Motion Controller Update to Offer Affordable Individual Joint MoCap

Leap Motion has announced that the software for its self-titled PC gesture-control device will soon track the movement of individual finger joints, as well as the overall motion of a user’s hands.

Since its launch in 2012, the $80 Leap Motion controller has attracted a lot of interest in the CG community, with Autodesk releasing Maya and MotionBuilder plugins last year.

Individual joint tracking, more parameters captured
In a post on the company’s blog, Leap Motion CEO Michael Buckwald revealed that version 2 of its software will track the individual joints of a user’s fingers, compensating automatically where individual fingers are occluded.

The software will also expose “much more granular data” via its SDK, including 27 dimensions per hand.
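For developers, reading that joint data through the v2 skeletal tracking API’s Python bindings might look like the sketch below; the structure follows the v2 SDK’s documented four-bones-per-finger model, but treat the details as illustrative:

```python
import Leap  # Leap Motion SDK v2 Python bindings

controller = Leap.Controller()

def print_joints():
    # data flows only once the controller has connected to the service
    frame = controller.frame()  # most recent tracking frame
    for hand in frame.hands:
        for finger in hand.fingers:
            # v2 models four bones per finger (metacarpal to distal);
            # joint positions are the bone endpoints, and the SDK fills
            # them in even when a finger is occluded
            for b in range(4):
                bone = finger.bone(b)
                print(finger.id, b, bone.prev_joint, bone.next_joint)
```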

Affordable Individual MoCap tools coming soon
The update, which will be free and does not require new hardware, is now in public beta for developers, although there’s no news of a consumer release date yet.

Jasper Brekelmans, creator of upcoming hand-tracking tool Brekel Pro Hands, has already announced that he is using the SDK.

Read more about the Leap Motion V2 update on the developer’s blog

Massive Software announces Massive 6.0 crowd simulation software

Massive 6.0

New look, new GUI

Massive has a completely new graphical user interface. With graphic design by Lost in Space, the new interface not only looks stylish and modern but provides a much smoother interactive user experience. Dialog windows and editors now open in the new side panel, keeping the workspace clear and tidy. The main window now hosts multiple panels that can be configured to suit the user’s needs, and the configurations can be recalled for later use. Since any panel can be a viewport, it’s now possible to have up to five viewports open at once, each using a different camera.


 

3D placement

The existing placement tools in Massive have been extended to work in three dimensions, independently of the terrain. The point generator can be placed anywhere in space, the circle generator becomes a sphere, the polygon generator gains depth, and the spline generator becomes tubular. There’s also a new geometry generator, which takes a Wavefront .obj file and fills the polygonal volume with agents.
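As a rough illustration of what a geometry generator does (not Massive’s actual implementation), agent positions can be rejection-sampled inside a closed mesh; the file name here is hypothetical and the mesh is assumed watertight:

```python
import numpy as np
import trimesh

# hypothetical file; assumes a single watertight mesh
mesh = trimesh.load('crowd_volume.obj')

n_agents = 500
lo, hi = mesh.bounds  # axis-aligned bounding box corners
placements = []
while len(placements) < n_agents:
    # rejection sampling: propose points in the box, keep those inside
    candidates = np.random.uniform(lo, hi, size=(4 * n_agents, 3))
    placements.extend(candidates[mesh.contains(candidates)].tolist())
placements = np.array(placements[:n_agents])
```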

 

Auto action import

Building complex agents with hundreds of actions can be a time-consuming process, but it doesn’t have to be anymore. In Massive 6.0 the action-importing process can be completely automated, reducing what could be months of work to a few minutes. All of the import settings for all of the actions can also be saved to a file, so revisions of source motion can be imported in seconds using the same settings as earlier revisions.


Bullet dynamics

To effortlessly build a mountain of zombies, it helps to have extremely stable rigid-body dynamics. Massive 6.0 supports Bullet dynamics, significantly increasing dynamics stability. Just for fun, we had 1,000 mayhem agents throw themselves off a cliff into a pile on the ground below. Without tweaking any parameters, we easily created an impressive zombie shot, demonstrating the stability and ease of use of Bullet dynamics.
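Massive’s Bullet integration is internal, but the same physics engine is freely available, so a toy version of that cliff pile-up can be sketched with the pybullet bindings, with capsules standing in for agents:

```python
import random
import pybullet as p

p.connect(p.DIRECT)          # headless physics server
p.setGravity(0, 0, -9.8)

# the ground below the cliff
plane = p.createCollisionShape(p.GEOM_PLANE)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane)

# 1000 capsules standing in for falling agents
capsule = p.createCollisionShape(p.GEOM_CAPSULE, radius=0.2, height=1.0)
for i in range(1000):
    p.createMultiBody(baseMass=70.0,
                      baseCollisionShapeIndex=capsule,
                      basePosition=[random.uniform(-2.0, 2.0),
                                    random.uniform(-2.0, 2.0),
                                    20.0 + 0.05 * i])

# let the pile settle (~10 seconds at the default 240 Hz timestep)
for _ in range(2400):
    p.stepSimulation()
```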

No typing required

While it is possible to create almost any kind of behaviour using the brain nodes in Massive, it has always required a little typing to specify inputs and outputs of the brain. This is no longer necessary with the new channel menu which allows the user to very quickly construct any possible input or output channel string with a few mouse clicks.

These are just some of the new features of Massive 6.0, which is scheduled for release in September.

 

Massive for Maya

 

Massive has always been a standalone system, and now there’s the choice to use Massive standalone as Massive Prime and Massive Jet, or in Maya as Massive for Maya.

 

Set up and run simulations in Maya

Massive for Maya facilitates the creation of Massive simulations directly in Maya. All of the Massive scene setup tools, such as the flow field, lanes, paint and placement editors, have been seamlessly reconstructed inside Maya. The simulation workflow has been integrated into Maya to allow for intuitive running, recording and playback of simulations. To achieve this, a record button has been added next to the transport controls and a special status indicator has been included in the Massive shelf. Scrubbing simulations of thousands of agents in Maya is now as simple and efficient as scrubbing the animation of a single character.


Set up lighting in Maya

The Massive agents automatically appear in preview renders as well as batch renders alongside any other objects in the scene. Rendering in Maya works for Pixar’s RenderMan, Air, 3Delight, Mental Ray and V-Ray. This allows for lighting scenes using the familiar Maya lighting tools, without requiring any special effort to integrate Massive elements into the scene. Furthermore, all of this has been achieved without losing any of the efficiency and scalability of Massive.

 

Edit simulations in Maya graph editor

Any of the agents in a simulation can be made editable in the Maya graph editor. This allows for immediate editing of simulations without leaving the Maya environment. Any changes made to the animation in the graph editor automatically feed back to the Massive agents, so the tweaked agents will appear in the render even though the user sees a Maya character for editing purposes in the viewport. The editing process can even be used with complex animation control rigs, allowing animators and motion editors complete freedom to work however they want to.


Directable characters

A major advantage of Massive for Maya is the ability to bring Massive’s famous brains to character animation, providing another vital tool for creating the illusion of life. While animation studios have integrated Massive into their pipelines to do exactly this for years, the ability to create directable characters has not been within easy reach of those using off-the-shelf solutions. With Massive for Maya it’s now possible to create characters using a handful of base cycles, takes and expressions that can handle such tasks as keeping alive, responding to the focus of the shot, responding to simple direction, or simply walking along a path, reducing the amount of work required to fill out a scene with characters that are not currently the focus of the shot.

For example, in a scene in which two characters are talking with each other and a third character, say a mouse, is reacting, the mouse could be driven by its Massive counterpart. The talking characters would drive their Massive counterparts, thereby being visible to the mouse. Using attributes in the talking characters, their Massive counterparts could change colour to convey their emotional states to the mouse agent. The mouse agent then performs appropriately, using its animation cycles, blend-shape animations and so on in response to the performance of the talking characters, looking at whichever character is talking. Once the agents for a project have been created, setting up a shot for this technique requires only a few mouse clicks and the results happen in real time. Any edits to the timing of the shot simply flow through to the mouse performance. A toy version of this logic is sketched below.
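As a conceptual sketch only (Massive brains are node-based fuzzy logic, not Python), the mouse’s decision rule described above might reduce to something like:

```python
def mouse_brain(speakers):
    """Toy stand-in for the directable-character setup described above:
    the mouse looks at whoever is talking and picks a reaction cycle
    keyed to the speaker's colour-coded emotional state."""
    talking = [s for s in speakers if s['is_talking']]
    if not talking:
        return {'look_at': None, 'cycle': 'keep_alive'}
    target = talking[0]
    reactions = {'red': 'cower', 'blue': 'listen', 'green': 'nod'}
    return {'look_at': target['position'],
            'cycle': reactions.get(target['colour'], 'keep_alive')}

# e.g. speakers = [{'is_talking': True, 'colour': 'red',
#                   'position': (1.0, 0.0, 2.0)}]
```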

SCANable offers on-site 3D imaging of real-world people/characters to populate your 3D crowd asset library in Massive’s crowd placement and simulation software. Contact us today for a free quote.


Introducing R3DS Wrap – Topology Transfer Tool

Wrap is a topology transfer tool. It lets you utilize the topology you already have and transfer your new 3D-scanned data onto it. The resulting models will not only share the same topology and UV coordinates but will also naturally become blendshapes of each other. Here’s a short video showing how it works:

And here are a couple of examples based on 3D scans kindly provided by Lee Perry-Smith:
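That last property is worth unpacking: because every wrapped scan shares the same vertex order, any two of them differ only by per-vertex offsets, which is exactly what a blendshape is. A minimal numpy sketch, with hypothetical file names:

```python
import numpy as np

# two scans wrapped onto the same base topology: vertex arrays align 1:1
neutral = np.load('neutral_vertices.npy')  # shape (N, 3)
smile = np.load('smile_vertices.npy')      # shape (N, 3)

delta = smile - neutral  # the blendshape is just per-vertex offsets

def blend(weight):
    """weight 0.0 gives the neutral scan, 1.0 the smile scan."""
    return neutral + weight * delta
```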


You can download a demo-version from their website http://www.russian3dscanner.com

As with any new technology in its final beta stages, Wrap is not perfect yet. R3DS would be grateful to everyone who offers the support and feedback needed to finalize things in the best possible way. This software has the potential to be a great tool. Check it out!


Krakatoa Creates CG Visual Effects from LIDAR Scans for Short Film “Rebirth”

Film director and cinematographer Patryk Kizny – along with his talented team at LookyCreative – put together the 2010 short film “The Chapel” using motion controlled HDR time-lapse to achieve an interesting, hyper-real aesthetic. Enthusiastically received when released online, the three-minute piece pays tribute to a beautifully decaying church in a small Polish village built in the late 1700s. Though widely lauded, “The Chapel” felt incomplete to Kizny, so in fall of 2011, he began production on “Rebirth” to refine and add dimension to his initial story.


Exploring the same church, “Rebirth” comprises three separate scenes created using different visual techniques. Contemplative, philosophical narration and a custom orchestral soundtrack composed by Kizny’s collaborator, Mateusz Zdziebko, help guide the flow and overall aspirational tone of the film, which runs approximately 12 minutes. The first scene features a point cloud representation of the chapel with various pieces and cross-sections of the building appearing, changing and shifting to the music. Based on LIDAR scans taken of the chapel for this project, Kizny generated the point clouds with Thinkbox Software’s volumetric particle renderer, Krakatoa, in Autodesk 3ds Max.


“About a year after I shot ‘The Chapel,’ I returned to the location and happened to get involved in heritage preservation efforts,” Kizny explained. “At the time, laser scanning was used for things like archiving, set modeling and support for integrating VFX in post production, but I hadn’t seen any films visualizing point clouds themselves, so that’s what I decided to do.”

EKG Baukultur, an Austrian/German company that specializes in digital heritage documentation and laser scanning, scanned the entire building in about a day from 25 different scanning positions. The collected data was then registered and processed, creating a dataset of about 500 million points. Roughly half of the collected data was used to create the visualizations.


Data processing was done in multiple stages using various software packages. Initially, the EKG Baukultur team registered the separate scans together in a common coordinate space using FARO Scene software. The data was then exported in .PTS format and re-imported into Alice Labs Studio Clouds (acquired by Autodesk in 2011) for clean-up. Kizny manually removed any tripods with cameras, people, checkerboards and balls that had been used to reference the scans. The data was then processed in Geomagic Studio to reduce noise, fill holes and uniformly downsample selected areas of the dataset. Later, the data was exported back to the .PTS ASCII format with the help of MeshLab and processed using custom Python scripting so that it could be ingested by the Krakatoa importer. Lacking a visual effects background, Kizny initially tested a number of tools to find the best way to visualize point cloud data in a cinematic way, with varying and largely disappointing results. Six months of extensive R&D led Kizny to Krakatoa, a tool that was astonishingly fast and a fraction of the price of similar software specifically designed for CAD/CAM applications.
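Kizny’s scripts themselves aren’t public, but the kind of .PTS preprocessing described, streaming the ASCII file and thinning it uniformly before ingest, can be sketched as follows (assuming the common layout: a point-count header line, then X Y Z intensity R G B per line):

```python
import numpy as np

def load_pts(path, keep_every=10):
    """Stream a .PTS ASCII scan, keeping every Nth point."""
    pts, cols = [], []
    with open(path) as f:
        f.readline()  # header line: total point count
        for i, line in enumerate(f):
            if i % keep_every:
                continue
            parts = line.split()
            if len(parts) < 7:  # skip malformed lines
                continue
            x, y, z = map(float, parts[:3])
            r, g, b = map(int, parts[4:7])
            pts.append((x, y, z))
            cols.append((r, g, b))
    return np.array(pts), np.array(cols, dtype=np.uint8)

# e.g. thin a 500M-point dataset to ~50M points before ingest:
# xyz, rgb = load_pts('chapel_registered.pts', keep_every=10)
```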

“I had a very basic understanding of 3ds Max, and the Krakatoa environment was new to me. Once I began to figure out Krakatoa, it all clicked and the software proved amazing throughout each step of the process,” he said.

Even when mixing Krakatoa’s depth-of-field and motion-blur functions, Kizny was able to keep render times to roughly five to ten minutes per frame, even while rendering 200 million points at 2K, by using smaller apertures and camera passes from a greater distance.

“Krakatoa is an amazing manipulation toolkit for processing point cloud data, not only for what I’m doing here but also for recoloring, increasing density, projecting textures and relighting point clouds. I have tried virtually all major point cloud processing software, but Krakatoa saved my life on this project,” Kizny noted.

In addition to using Krakatoa to visualize all the CG components of “Rebirth” as well as render point clouds, Kizny also employed the software for advanced color manipulation. With two subsets of data – a master with good color representation and a target that lacked color information – Kizny used a Magma flow modifier and a comprehensive set of nodes to cast and spatially interpolate the color data from the master subset onto the target subset so that they blended seamlessly in the final dataset. Magma modifiers were also used for the color correction of the entire dataset prior to rendering, which allowed Kizny greater flexibility compared to trying to color correct the rendering itself. Using Krakatoa with Magma modifiers also provided Kizny with a comprehensive set of built-in nodes and scripting access.
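Magma is a node-based toolset inside 3ds Max rather than a scripting API, but the operation described, casting color from the master subset onto the colorless target by spatial interpolation, is classic scattered-data interpolation. A standalone sketch of the same idea using a k-d tree (reusing the hypothetical load_pts above):

```python
import numpy as np
from scipy.spatial import cKDTree

master_xyz, master_rgb = load_pts('master_colored.pts')
target_xyz, _ = load_pts('target_colorless.pts')

# for each target point, find its nearest colored neighbours
tree = cKDTree(master_xyz)
dists, idx = tree.query(target_xyz, k=8)

# inverse-distance weighting spatially interpolates the colors
w = 1.0 / np.maximum(dists, 1e-6)
w /= w.sum(axis=1, keepdims=True)
target_rgb = (w[..., None] * master_rgb[idx]).sum(axis=1).astype(np.uint8)
```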


The second scene of “Rebirth” is a time-lapse reminiscent of “The Chapel,” while the final scene shows live action footage of a dancer. Footage for each scene was captured using Canon DSLR cameras, a RED ONE camera and DitoGear motion control equipment. Between the second and third scene, a short transition visualizes the church collapsing, which was created using 3ds Max Particle Flow with help of Thinkbox Ember, a field manipulation toolkit, and Thinkbox Stoke, a particle reflow tool.

“In the transition, I’m trying to collapse a 200 million-point data cloud into smoke, then create the silhouette of a dancer as a light point from the ashes,” shared Kizny. “Even though it’s a short scene, I’m making use of a lot of technology. It’s not only rendering this point cloud data set again; it’s also collapsing it. I’m using the software in an atypical way, and Thinkbox has been incredibly helpful in troubleshooting the workflow so I could establish a solid pipeline.”

Collapsing the church proved to be a challenge for Kizny. Traditionally, when creating digital explosions, VFX artists blow up a solid, rigid object. Not only did Kizny need to collapse a point cloud, a daunting task in and of itself, but he also had to do so in the hyper-realistic aesthetic he’d established, and in a way that would be both ethereal and physically believable. Using 3ds Max Particle Flow as a simulation environment, Kizny was able to generate a comprehensive high-resolution vector field that was more efficient and precisely controlled with Ember. Ember was also used to animate two angels appearing from the dust and smoke along with the dancer silhouette. The initial dataset for each of the angels was pushed through a specific vector noise field that produced a smoke-like dissolve, then reversed thanks to retiming features in Krakatoa, Ember and Stoke, the last of which was also used to add density.
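The actual Ember and Stoke setup isn’t public, but pushing a point cloud through a vector noise field and retiming the cached result, as the dissolve described above does, can be approximated in a few lines; the field here is an arbitrary stand-in:

```python
import numpy as np

def noise_field(p, t):
    """A cheap procedural vector field standing in for the Ember field."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([np.sin(0.5 * y + t),
                     np.cos(0.5 * z - t),
                     np.sin(0.5 * x + t)], axis=1)

def advect(points, steps=240, dt=1.0 / 24):
    """Push the cloud through the field, caching one frame per step.
    Playing the cached frames backwards gives the reversed, retimed
    'smoke reassembling into a figure' effect described above."""
    frames = [points.copy()]
    for s in range(steps):
        points = points + dt * noise_field(points, s * dt)
        frames.append(points.copy())
    return frames
```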


“To create the smoke on the floor, I decided to go all the way with Thinkbox tools,” Kizny said. “All the smoke you see was created using Ember vector fields and simulated with Stoke. It was good and damn fast.”

Another obstacle was figuring out how to animate the dancer in the point clouds. Six cameras recorded a live performer, with markerless motion capture tracking done using the iPi Motion Capture Studio package. The data obtained from the dancer was then ported onto a virtual, rigged model in 3ds Max and used to emit particles for a Particle Flow simulation. Ember vector fields were used for all the smoke-like circulations, and then everything was integrated and rendered using Krakatoa and Thinkbox’s render management system, Deadline: almost 900 frames and 3 TB of data caches for the particles alone. Deadline was also used to distribute high-volume renders and allocate resources across Kizny’s render farm.

Though an innovative display of digital artistry, “Rebirth” is also a preservation tool. Interest generated by “The Chapel” and continued with “Rebirth” has enticed a Polish foundation to begin restoration efforts on the run-down building. Additionally, the LIDAR scans of the chapel will be donated to CyArk, a non-profit dedicated to the digital preservation of cultural heritage sites, and made widely available online.

The film is currently securing funding to complete postproduction. Support the campaign and learn more about the project at the IndieGoGo campaign homepage at http://bit.ly/support-rebirth. For updates on the film’s progress, visit http://rebirth-film.com/.

About Thinkbox Software
Thinkbox Software provides creative solutions for visual artists in entertainment, engineering and design. Developer of high-volume particle renderer Krakatoa and render farm management software Deadline, the team of Thinkbox Software solves difficult production problems with intuitive, well-designed solutions and remarkable support. We create tools that help artists manage their jobs and empower them to create worlds and imagine new realities. Thinkbox was founded in 2010 by Chris Bond, founder of Frantic Films. http://www.thinkboxsoftware.com


3D Systems buys company behind Star Wars, Hobbit and Harry Potter models

3D Systems Acquires Gentle Giant Studios

  • Accesses decades of licensed content from industry’s greatest brands
  • Expands leadership capabilities and know-how in retail merchandising
Release Date:
Friday, January 3, 2014 – 08:36

ROCK HILL, South Carolina – January 3, 2014 – 3D Systems  (NYSE:DDD) announced today the acquisition of Gentle Giant Studios, the leading provider of 3D modeling for the entertainment and toy industry. For over two decades, Gentle Giant Studios has led the development of state-of-the-art content using 3D scanning and modeling to develop and manufacture licensed 3D printed characters, toys and collectibles from a variety of franchise properties with global brand recognition, including Marvel, Disney, AMC’s The Walking Dead, Avatar, Harry Potter and Star Wars.

3DS plans to immediately leverage Gentle Giant Studios’ technology and vast library of digital content across its consumer platform, and to extend its existing brand relationships to further the reach of 3D scanning, modeling and printing for entertainment, toys, collectibles and action figures in conjunction with numerous blockbuster films and evergreen licensed properties.

“Gentle Giant Studios catapults 3DS’s consumer platform forward with highly curated, licensed characters, content publishing know-how and first-mover experience for the benefit of leading toy companies, movie studios and their merchandising divisions,” said Avi Reichental, President and CEO, 3D Systems.

Learn more about how 3DS is manufacturing the future today at www.3dsystems.com.

About 3D Systems Corporation

3D Systems is a leading provider of 3D printing centric design-to-manufacturing solutions including 3D printers, print materials and cloud sourced on-demand custom parts for professionals and consumers alike in materials including plastics, metals, ceramics and edibles. The company also provides integrated 3D scan-based design, freeform modeling and inspection tools. Its products and services replace and complement traditional methods and reduce the time and cost of designing new products by printing real parts directly from digital input. These solutions are used to rapidly design, create, communicate, prototype or produce real parts, empowering customers to manufacture the future.

 

Leadership Through Innovation and Technology

  • 3DS invented 3D printing with its Stereolithography (SLA) printer and was the first to commercialize it in 1989.
  • 3DS invented Selective Laser Sintering (SLS) printing and was the first to commercialize it in 1992.
  • 3DS invented the Color-Jet-Printing (CJP) class of 3D printers and was the first to commercialize 3D powder-based systems in 1994.
  • 3DS invented Multi-Jet-Printing (MJP) printers and was the first to commercialize it in 1996.

Today its comprehensive range of 3D printers is the industry’s benchmark for production-grade manufacturing in aerospace, automotive, patient-specific medical devices and a variety of consumer, electronic and fashion accessories.

More information on the company is available at www.3DSystems.com.

About Gentle Giant Studios

Gentle Giant Studios is the leading provider of 3D digital data and the first company to utilize digital data and 3D printing technology for the consumer products and entertainment industries, creating beloved 3D characters from a variety of franchise properties with worldwide name recognition, including Star Wars, Marvel, Avatar, Harry Potter, AMC’s The Walking Dead, and The Hobbit. Gentle Giant produces a wide range of high-quality products using the most advanced 3D scan-to-print techniques and a team of incredibly talented artisans who digitally capture the likenesses of actors, props and scenery to accurately model and recreate these images for fans and collectors everywhere. Gentle Giant Studios also provides prototyping and product development services for consumer products, fine art and theme parks, and provides on-set digitizing services for major motion pictures.

More information on the company is available on www.gentlegiantltd.com, and www.gentlegiantstudios.com.

Mark Ruffalo as the Hulk in “The Avengers,” courtesy of Marvel Films.

How ILM Used Laser Scanning to Give Life to the Hulk in Marvel’s The Avengers

Industrial Light & Magic, a division of Lucasfilm Ltd. and now owned by The Walt Disney Company, forever changed the way movies are made and how we as viewers experience them. The movie-making geniuses have continually raised the bar in computer-generated imagery (CGI) and visual effects (VFX) year after year since the company’s founding by George Lucas in May of 1975. In a video released today, ILM takes us behind the scenes to show how laser scanning and other tools transformed Mark Ruffalo into the lovable Hulk character that almost stole the show in Marvel’s third-highest-grossing film of all time, The Avengers.

Check out more ILM movie magic on their YouTube channel.