Google's Project Tango 3D Capture Device

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.
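
Structured-light systems recover depth by triangulation: a pattern feature projected from a known baseline shifts across the camera image in proportion to the distance of the surface it lands on. As a loose, back-of-the-envelope illustration only (the pinhole model and every number below are invented for the example, not Mantis Vision’s proprietary MV4D algorithms):

```python
# Toy projector-camera triangulation: z = f * b / d.
# All numbers are invented for illustration; real structured-light
# pipelines decode a coded pattern and calibrate both devices first.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a surface point from the observed pattern shift."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A camera with a 600 px focal length, a projector 8 cm away, and a
# pattern feature displaced 12.5 px from its reference position:
print(depth_from_disparity(600.0, 0.08, 12.5))  # -> 3.84 meters
```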

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer-level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful and diverse range of products in the future.

Learn more about the project here.

Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store, meaning there will probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine — tools developers already know the strengths and weaknesses of, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer Jasper Brekelmans has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site, http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.

Leap Motion Controller Update to Offer Affordable Individual Joint MoCap

Leap Motion has announced that the software for its self-titled PC gesture-control device will soon track the movement of individual finger joints, as well as the overall motion of a user’s hands.

Since its launch in 2012, the $80 Leap Motion controller has attracted a lot of interest in the CG community, with Autodesk releasing Maya and MotionBuilder plugins last year.

Individual joint tracking, more parameters captured
In a post on the company’s blog, Leap Motion CEO Michael Buckwald revealed that version 2 of its software will track the individual joints of a user’s fingers, compensating automatically where individual fingers are occluded.

The software will also expose “much more granular data” via its SDK, including 27 dimensions per hand.
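
For developers curious what that per-joint data looks like in practice, the v2 SDK’s bundled Python bindings expose hands, fingers and four named bones per finger. A minimal polling sketch is below; it assumes the Leap Motion v2 SDK is installed (the `Leap` module ships with the SDK, not pip), and a real application would register a Listener rather than grabbing a single frame:

```python
# Minimal sketch against the Leap Motion v2 skeletal tracking API.
import Leap  # bundled with the Leap Motion SDK

controller = Leap.Controller()

frame = controller.frame()  # latest tracking frame (empty until connected)
for hand in frame.hands:
    side = "left" if hand.is_left else "right"
    print("%s hand, confidence %.2f" % (side, hand.confidence))
    for finger in hand.fingers:
        # Four bones per finger: metacarpal, proximal, intermediate,
        # distal (the thumb's metacarpal has zero length).
        for bone_type in range(4):
            joint = finger.bone(bone_type).next_joint  # Leap.Vector, mm
            print("  joint at (%.1f, %.1f, %.1f)" % (joint.x, joint.y, joint.z))
```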

Affordable Individual MoCap tools coming soon
The update, which is free and does not require a change of hardware, is now in public beta for developers, although there’s no news of a consumer release date yet.

Jasper Brekelmans, creator of upcoming hand-tracking tool Brekel Pro Hands, has already announced that he is using the SDK.

Read more about the Leap Motion V2 update on the developer’s blog

Autodesk Announces ReCap Connect Partnership Program

With its new ReCap Connect Partnership Program, Autodesk will open up Autodesk ReCap – its reality capture platform – to third party developers and partners, allowing them to extend ReCap’s functionality.

“Autodesk has a long history of opening our platforms to support innovation and extension,” said Robert Shear, senior director, Reality Solutions, Autodesk. “With the ReCap Connect Partnership Program, we’ll be allowing a talented pool of partners to expand what our reality capture software can do. As a result, customers will have even more ways to start their designs with accurate dimensions and full photo-quality context rather than a blank screen.”

There are many ways for partners to connect to the ReCap pipeline, which encompasses both laser-based and photo-based workflows.  Partners can write their own import plug-in to bring structured point cloud data into ReCap and ReCap Pro using the Capture Codec Kit that is available as part of the new ReCap desktop version. DotProduct – a maker of handheld, self-contained 3D scanners – is the first partner to take advantage of this capability.

“Autodesk’s ReCap Connect program will enable a 50x data transfer performance boost for DotProduct customers — real time 3D workflows on tablets just got a whole lot faster. Our lean color point clouds will feed reality capture pipelines without eating precious schedule and bandwidth.” – Tom Greaves, Vice President, Sales and Marketing, DotProduct LLC

Alternately, partners can take advantage of the new Embedded ReCap OEM program to send Reality Capture Scan (RCS) data exports from their point cloud processing software directly to Autodesk design products, which all support this new point cloud engine, or to ReCap and ReCap Pro. The first signed partners in the Embedded ReCap OEM program are: FARO, for their FARO Scene software; Z+F, for their LaserControl software; CSA, for their PanoMap software; LFM, for their LFM software products; and kubit, for their VirtuSurv software. All these partners’ software will feature this RCS export in their coming releases.

“Partnering with Autodesk and participating in the ReCap Connect program helps FARO to ensure a fluent workflow for customers who work with Autodesk products. Making 3D documentation and the use of the captured reality as easy as possible is one of FARO’s foremost goals when developing our products. Therefore, integrating with Autodesk products suits very well to our overall product strategy.” – Oliver Bürkler, Senior Product Manager, 3D Documentation Software & Innovation, FARO

As a third option, partners can build their own application on top of the Autodesk photo-to-3D cloud service by using the ReCap Photo Web API. More than 10 companies – serving markets ranging from medical and civil engineering, to video games and Unmanned Aerial Vehicles (UAVs) – have started developing specific applications that leverage this capability, or have started integrating this capability right into their existing apps. Some of the first partners to use the ReCap Photo Web API include Soundfit, SkyCatch and Twnkls.
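
To give a feel for the shape of a photo-to-3D web workflow (upload photos, start processing, poll for a result), here is a deliberately generic sketch. The host, endpoint paths, field names and auth scheme are placeholders invented for illustration, not the documented ReCap Photo Web API; the real interface is specified in Autodesk’s developer documentation:

```python
# Hypothetical photo-to-3D service round trip. Every URL and field
# name below is a placeholder, NOT Autodesk's documented API.
import time
import requests

BASE = "https://example.com/photo-to-3d/api"  # placeholder host
AUTH = {"Authorization": "your-api-token"}    # placeholder auth

def photos_to_mesh(photo_paths):
    scene = requests.post(f"{BASE}/scene", headers=AUTH).json()
    for path in photo_paths:                  # upload each source photo
        with open(path, "rb") as fh:
            requests.post(f"{BASE}/scene/{scene['id']}/file",
                          headers=AUTH, files={"photo": fh})
    requests.post(f"{BASE}/scene/{scene['id']}/process", headers=AUTH)
    while True:                               # poll until reconstruction ends
        status = requests.get(f"{BASE}/scene/{scene['id']}",
                              headers=AUTH).json()
        if status["state"] == "done":
            return status["mesh_url"]
        time.sleep(30)
```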

“Autodesk’s cloud-based ReCap is an important part of SoundFit’s 3D SugarCube Scanning Service. Autodesk’s ReCap service has enabled SoundFit to keep the per-scan cost of its service very low, opening new markets, such as scans for hearing aids, custom-fit communications headsets, musicians’ monitors and industrial hearing protection. ReCap allows SoundFit to export 3D models in a wide variety of popular 3D formats, so SoundFit customers and manufacturers can import them into Autodesk CAD packages from AutoCAD to 123D Design, or send them directly to any 3D printer or 3D printing service bureau.” – Ben Simon-Thomas, CEO & Co-Founder, SoundFit

For more information about the ReCap Connect Partnership Program, contact Dominique Pouliquen at Email Contact.

Additional Partner Supporting Quotes

“ReCap Connect gives our PointSense and PhoToPlan users smart and fully integrated access to powerful ReCap utilities directly within their familiar AutoCAD design environments. The result is a more simple and efficient overall workflow. ReCap Photo 360 image calibration eliminates the slowest part of a kubit user’s design process resulting in significant time savings per project.” – Matthias Koksch, CEO, kubit

“ReCap, integrated with CSA’s PanoMap Server, provides a powerful functionality to transfer laser scan point cloud data from large-scale 3D laser scan databases to Autodesk products.  Using the interface, the user can select any plant area by a variety of selection criteria and transfer the laser scan points to the design environment in which they are working. The laser scan 3D database of the plant can have thousands of laser scans.” – Amadeus Burger, President, CSA Laser Scanning

“Autodesk’s industry leading Recap photogrammetry technology will be instrumental in introducing BuildIT’s 3D Metrology solution to a broader audience by significantly reducing data capture complexity and cost.” – Vito Marone, Director Sales & Marketing, BuildIT Software & Solutions

“I am very pleased with the ReCap Photo API performance and its usefulness in fulfilling our 3D personalization needs. I believe the ReCap Photo API is the only product that is available in the market today that meets our needs.” – Dr. Masuma, PhD., Founder of iCrea8

 


Massive Software announces Massive 6.0 crowd simulation software

Massive 6.0

New look, new GUI

Massive has a completely new graphical user interface. With graphic design by Lost in Space, the new interface not only looks stylish and modern but also provides a much smoother interactive user experience. Dialog windows and editors now appear in the new side panel, keeping the workspace clear and tidy. The main window now hosts multiple panels that can be configured to suit the user’s needs, and those configurations can be recalled for later use. Since any panel can be a viewport, it’s now possible to have up to five viewports open at once, each using a different camera.

3D placement

The existing placement tools in Massive have been extended to work in three dimensions, independently of the terrain. The point generator can be placed anywhere in space, the circle generator becomes a sphere, the polygon generator gains depth, and the spline generator becomes tubular. There’s also a new generator, the geometry generator, which takes a Wavefront .obj file and fills the polygonal volume with agents.
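
Conceptually, a geometry generator amounts to scattering points and keeping the ones that fall inside a closed mesh. The sketch below illustrates the idea with the open-source trimesh library and an invented file name; it is not Massive’s implementation, and it requires the .obj volume to be watertight:

```python
# Rejection-sample agent positions inside a closed Wavefront .obj.
# Illustrative only; Massive's geometry generator is internal to
# Massive and certainly implemented differently.
import numpy as np
import trimesh

mesh = trimesh.load("crowd_volume.obj")  # hypothetical watertight mesh
lo, hi = mesh.bounds                     # axis-aligned bounding box

agents = []
while len(agents) < 1000:
    # draw candidates in the bounding box, keep those inside the mesh
    pts = np.random.uniform(lo, hi, size=(2048, 3))
    agents.extend(pts[mesh.contains(pts)])

agents = np.array(agents[:1000])         # (1000, 3) placement positions
print(agents.shape)
```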

 

Auto action import

Building complex agents with hundreds of actions can be a time-consuming process, but it doesn’t have to be anymore. In Massive 6.0 the action importing process can be completely automated, reducing what could be months of work to a few minutes. Also, all of the import settings for all of the actions can be saved to a file, so that revisions of source motion can be imported in seconds using the same settings as earlier revisions.

Bullet dynamics

To effortlessly build a mountain of zombies, it helps to have extremely stable rigid-body dynamics. Massive 6.0 supports Bullet dynamics, significantly increasing dynamics stability. Just for fun, we had 1,000 mayhem agents throw themselves off a cliff into a pile on the ground below. Without tweaking any parameters, we easily created an impressive zombie shot, demonstrating the stability and ease of use of Bullet dynamics.

No typing required

While it is possible to create almost any kind of behaviour using the brain nodes in Massive, it has always required a little typing to specify the inputs and outputs of the brain. This is no longer necessary with the new channel menu, which allows the user to very quickly construct any possible input or output channel string with a few mouse clicks.

These are just some of the new features of Massive 6.0, which is scheduled for release in September.

 

Massive for Maya

 

Massive has always been a standalone system; now there’s the choice to use Massive standalone, as Massive Prime and Massive Jet, or inside Maya, as Massive for Maya.

 

Set up and run simulations in Maya

Massive for Maya facilitates the creation of Massive simulations directly in Maya. All of the Massive scene setup tools, such as the flow field, lanes, paint and placement editors, have been seamlessly reconstructed inside Maya. The simulation workflow has been integrated into Maya to allow for intuitive running, recording and playback of simulations. To achieve this, a record button has been added next to the transport controls and a special status indicator has been included in the Massive shelf. Scrubbing simulations of thousands of agents in Maya is now as simple and efficient as scrubbing the animation of a single character.

Set up lighting in Maya

The Massive agents automatically appear in preview renders as well as batch renders, alongside any other objects in the scene. Rendering in Maya works with Pixar’s RenderMan, Air, 3Delight, Mental Ray and V-Ray. This allows for lighting scenes using the familiar Maya lighting tools, without requiring any special effort to integrate Massive elements into the scene. Furthermore, all of this has been achieved without losing any of the efficiency and scalability of Massive.

 

Edit simulations in Maya graph editor

Any of the agents in a simulation can be made editable in the Maya graph editor. This allows for immediate editing of simulations without leaving the Maya environment. Any changes made to the animation in the graph editor automatically feed back to the Massive agents, so the tweaked agents will appear in the render even though the user sees a Maya character for editing purposes in the viewport. The editing process can even be used with complex animation control rigs, allowing animators and motion editors complete freedom to work however they want to.

 

 

Directable characters

A major advantage of Massive for Maya is the ability to bring Massive’s famous brains to character animation, providing another vital tool for creating the illusion of life. While animation studios have integrated Massive into their pipelines to do exactly this for years, the ability to create directable characters has not been within easy reach for those using off-the-shelf solutions. With Massive for Maya it’s now possible to create characters using a handful of base cycles, takes and expressions that can handle such tasks as keeping alive, responding to the focus of the shot, responding to simple direction, or simply walking along a path, thus reducing the amount of work required to fill out a scene with characters that are not currently the focus of the shot.

For example, in a scene in which two characters are talking with each other and a third character, say a mouse, is reacting, the mouse could be driven by its Massive counterpart. The talking characters would drive their Massive counterparts, thereby being visible to the mouse. Using attributes in the talking characters, their Massive counterparts could change colour to convey their emotional states to the mouse agent. The mouse agent then performs appropriately, using its animation cycles, blend-shape animations and so on in response to the performance of the talking characters, looking at whichever character is talking. Once the agents for a project have been created, setting up a shot for this technique requires only a few mouse clicks, and the results happen in real time. Any edits to the timing of the shot will simply flow through to the mouse performance.

SCANable offers on-site 3D imaging of real-world people/characters to populate your 3D crowd asset library in Massive’s crowd placement and simulation software. Contact us today for a free quote.

Introducing R3DS Wrap – Topology Transfer Tool

Wrap is a topology transfer tool. It allows you to utilize the topology you already have and transfer your new 3D-scanned data onto it. The resulting models will not only share the same topology and UV coordinates but will also naturally become blendshapes of each other. Here’s a short video showing how it works:

Here are a couple of examples based on 3D scans kindly provided by Lee Perry-Smith:

You can download a demo version from their website, http://www.russian3dscanner.com.

As with all new technology in its final beta stages, Wrap is not perfect yet. R3DS would be grateful to everyone who offers support and feedback to help finalize things in the best possible way. This software has the potential to be a great tool. Check it out!
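
For readers who want to experiment with the underlying idea, the crudest possible form of topology transfer is snapping every vertex of a clean template mesh onto the scanned surface. The trimesh-based sketch below (file names invented) does exactly that; Wrap itself performs a guided non-rigid registration that is far more robust than this naive projection:

```python
# Naive "wrap": keep the template's topology and UVs, move each
# vertex to the closest point on the raw scan's surface.
import trimesh
from trimesh.proximity import closest_point

template = trimesh.load("basemesh.obj")  # clean topology, good UVs
scan = trimesh.load("raw_scan.obj")      # messy 3D-scanned geometry

closest, distances, triangle_ids = closest_point(scan, template.vertices)

wrapped = template.copy()
wrapped.vertices = closest               # template topology, scan shape
wrapped.export("wrapped.obj")

# Because every wrapped result shares the template's vertex order,
# two wrapped scans can be used directly as blendshapes of each other.
```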

Introducing the World’s First App for LiDAR data visualization on the iPad: RiALITY

RIEGL proudly announces its new iPad point cloud viewer: RiALITY, now available for free in the iTunes App Store.

This new, innovative app, the first of its kind, allows users to experience LiDAR data in a completely new environment. It also makes LiDAR data demonstrations easier through the use of an iPad.

RIEGL’s RiALITY App enables users to visualize and navigate through point clouds acquired with RIEGL laser scanners. As an example, users can explore a dataset of the beautiful Rosenburg Castle in Austria. Scans can also be imported into the app from RIEGL’s RiSCAN PRO software.

“We’re pleased to present a new way of visualizing point clouds. RiALITY delivers this new technology by providing Augmented Reality technology in an easy-to-use app. Now you can easily send your client a 3D point cloud that they can visualize on their iPad, for free.” said Ananda Fowler, RIEGL’s manager of terrestrial laser scanning software.

RiALITY features true-color point clouds and 3D navigation. In a breakthrough technological development, the app also offers an Augmented Reality Mode, which allows point clouds to be virtually projected into the real world.

Dive into the point cloud!

Find out more at www.riegl.com/app.

3D-Scanned Olympians Wear Uniforms Suited for Superheroes

Olympic athletes will wear state-of-the-art, 3D-scanned, custom-fitted uniforms

[Source: Mashable]

As if we needed another reason to worship athletes, select Olympic hockey players will wear state-of-the-art, 3D-scanned uniforms custom-fitted to their body parts. That’s right, like superheroes.

Hockey equipment manufacturer Bauer officially unveiled the new line of high-tech hockey equipment, called “OD1N,” in December. CEO Kevin Davis has touted the gear as the “concept car” of hockey equipment. Pouring a cool million dollars into outfitting six elite hockey players, Bauer used a tech-friendly combination of composite materials, compression-molded foam and 3D optical scanning to personalize the equipment, while lightening the skates, protective gear and goalie pads by one-third.

Hockey enthusiasts will see the line in action on the ice when it debuts at the Sochi Winter Games in February. The equipment will be worn by the NHL’s Jonathan Toews (Chicago Blackhawks/Team Canada), Patrick Kane (Chicago Blackhawks/Team USA), Nicklas Backstrom (Washington Capitals/Team Sweden) and goaltender Henrik Lundqvist (New York Rangers/Team Sweden). Claude Giroux (Philadelphia Flyers/Team Canada) and Alex Ovechkin (Washington Capitals/Team Russia) round out the group of six players who worked with Bauer to test the equipment.

Lundqvist has been practicing and playing with the OD1N goal pads since November, while Toews, Kane and Backstrom are sporting elements of the protective body suit.

Bauer’s new line of hockey equipment comprises skates, goal pads and protective base-layer suits molded to each player’s form.

The equipment’s weight reduction should provide a significant on-ice advantage. The skates alone, with their lighter, carbon-composite blade holders, amount to roughly 1,000 fewer pounds of lifted weight during a regulation game, according to Bauer. Lundqvist will lift 180 fewer pounds with the OD1N goalie pads, which replace traditional layers of synthetic leather with compression-molded foam that can be modified depending on the goaltender’s style of play.

“The benefit is not only in the quickness to the puck but in their ultimate endurance and stamina going into the third period,” says Craig Desjardins, Bauer’s general manager of player equipment and project leader for Od1n. “For [Lundqvist], that was the difference between getting a block or getting scored on.”

Like many a concept car, Bauer’s designs also drew on new technologies. Using 3D optical scanning borrowed from the automotive industry, the company manufactured protective base-layer suits molded to each player’s physique. The scans generated computerized models, from which Bauer designed custom equipment.

“Being able to customize, for example, a shin guard or elbow pad based on the individual geometry of a player, we’ve taken the guesswork out completely,” says Desjardins. “It’s going to better protect you if it stays in place.”

 

 

It’s all very spiffy, but in automotive terms, a concept car showcases radical new developments in technology and design that make it prohibitively expensive for consumers. The cars don’t often make it to mass production. The cost of Bauer’s own “concept car” design, with its attendant technological advancements, places the equipment well out of reach of all but the most elite hockey players.

Much like BMW’s shape-shifting sedan, the idea is to ogle OD1N, not to own it — although Bauer will likely outfit a few more NHL bodies in the future.

Its creators are optimistic that certain elements will make their way to mass production, however.

“We’re trying to invent the future of hockey equipment, to show the industry and consumers where it could go, where it will go,” says Desjardins. “In the next few years, we’ll be able to take that technology down into multiple price points.”

So if, a few years from Sochi, your neighborhood is teeming with hockey prodigies, you’ll know why.

Krakatoa Creates CG Visual Effects from LIDAR Scans for Short Film “Rebirth”

Film director and cinematographer Patryk Kizny – along with his talented team at LookyCreative – put together the 2010 short film “The Chapel” using motion-controlled HDR time-lapse to achieve an interesting, hyper-real aesthetic. Enthusiastically received when released online, the three-minute piece pays tribute to a beautifully decaying church, built in the late 1700s, in a small Polish village. Though widely lauded, “The Chapel” felt incomplete to Kizny, so in fall of 2011 he began production on “Rebirth” to refine and add dimension to his initial story.

Exploring the same church, “Rebirth” comprises three separate scenes created using different visual techniques. Contemplative, philosophical narration and a custom orchestral soundtrack composed by Kizny’s collaborator, Mateusz Zdziebko, help guide the flow and overall aspirational tone of the film, which runs approximately 12 minutes. The first scene features a point cloud representation of the chapel with various pieces and cross-sections of the building appearing, changing and shifting to the music. Based on LIDAR scans taken of the chapel for this project, Kizny generated the point clouds with Thinkbox Software’s volumetric particle renderer, Krakatoa, in Autodesk 3ds Max.

“About a year after I shot ‘The Chapel,’ I returned to the location and happened to get involved in heritage preservation efforts,” Kizny explained. “At the time, laser scanning was used for things like archiving, set modeling and support for integrating VFX in post production, but I hadn’t seen any films visualizing point clouds themselves, so that’s what I decided to do.”

EKG Baukultur, an Austrian/German company that specializes in digital heritage documentation and laser scanning, scanned the entire building in about a day from 25 different scanning positions. The collected data was then registered and processed, creating a dataset of about 500 million points. Roughly half of the collected data was used to create the visualizations.

Data processing was done in multiple stages using various software packages. Initially, the EKG Baukultur team registered the separate scans together in a common coordinate space using FARO Scene software. Using the .PTS format, the data was then re-imported into Alice Labs Studio Clouds (acquired by Autodesk in 2011) for cleanup. Kizny manually removed any tripods with cameras, people, checkerboards and balls that had been used to reference the scans. Then the data was processed in Geomagic Studio to reduce noise, fill holes and uniformly downsample selected areas of the dataset. Later, the data was exported back to the .PTS ASCII format with the help of MeshLab and processed using custom Python scripting so that it could be ingested by the Krakatoa importer. Lacking a visual effects background, Kizny initially tested a number of tools to find the best way to visualize point cloud data in a cinematic way, with varying and largely disappointing results. Six months of extensive R&D led Kizny to Krakatoa, a tool that was astonishingly fast and a fraction of the price of similar software specifically designed for CAD/CAM applications.
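
The article doesn’t say exactly what the custom Python stage did, but the glue it describes is easy to picture: walk an ASCII .PTS file, thin the points uniformly, and rewrite them in a form a particle importer can read. A minimal sketch, assuming the common “x y z intensity r g b” PTS column layout (the actual “Rebirth” scripts were not published):

```python
# Thin an ASCII .PTS point cloud and rewrite it as x,y,z,r,g,b CSV.
import sys

def pts_to_csv(src, dst, keep_every=10):
    kept = 0
    with open(src) as fin, open(dst, "w") as fout:
        fout.write("x,y,z,r,g,b\n")
        for i, line in enumerate(fin):
            parts = line.split()
            if len(parts) < 7:
                continue  # skip the point-count header lines between scans
            if i % keep_every:
                continue  # uniform downsampling: keep every Nth point
            x, y, z, _intensity, r, g, b = parts[:7]
            fout.write(f"{x},{y},{z},{r},{g},{b}\n")
            kept += 1
    return kept

if __name__ == "__main__":
    print(pts_to_csv(sys.argv[1], sys.argv[2]), "points written")
```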

“I had a very basic understanding of 3ds Max, and the Krakatoa environment was new to me. Once I began to figure out Krakatoa, it all clicked and the software proved amazing throughout each step of the process,” he said.

Even when mixing the depth of field and motion blur functions in Krakatoa, Kizny was able to keep his render time to roughly five to ten minutes per frame, even while rendering 200 million points in 2K, by using smaller apertures and camera passes from a greater distance.

“Krakatoa is an amazing manipulation toolkit for processing point cloud data, not only for what I’m doing here but also for recoloring, increasing density, projecting textures and relighting point clouds. I have tried virtually all major point cloud processing software, but Krakatoa saved my life on this project,” Kizny noted.

In addition to using Krakatoa to visualize all the CG components of “Rebirth” as well as render point clouds, Kizny also employed the software for advanced color manipulation. With two subsets of data – a master with good color representation and a target that lacked color information – Kizny used a Magma flow modifier and a comprehensive set of nodes to cast and spatially interpolate the color data from the master subset onto the target subset so that they blended seamlessly in the final dataset. Magma modifiers were also used for the color correction of the entire dataset prior to rendering, which allowed Kizny greater flexibility compared to trying to color correct the rendering itself. Using Krakatoa with Magma modifiers also provided Kizny with a comprehensive set of built-in nodes and scripting access.
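
Outside of Magma, the core of that color cast can be sketched as a nearest-neighbor lookup: every uncolored target point borrows the color of the closest point in the colored master set. Kizny’s Magma graph spatially interpolated rather than simply copying, but this minimal NumPy/SciPy version (array shapes assumed) shows the basic operation:

```python
# Cast colors from a colored "master" point set onto an uncolored
# "target" set via nearest-neighbor lookup.
import numpy as np
from scipy.spatial import cKDTree

def cast_colors(master_xyz, master_rgb, target_xyz):
    """master_xyz: (M, 3), master_rgb: (M, 3), target_xyz: (N, 3)."""
    tree = cKDTree(master_xyz)           # spatial index over the master set
    _, nearest = tree.query(target_xyz)  # closest master point per target
    return master_rgb[nearest]           # (N, 3) borrowed colors
```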

The second scene of “Rebirth” is a time-lapse reminiscent of “The Chapel,” while the final scene shows live action footage of a dancer. Footage for each scene was captured using Canon DSLR cameras, a RED ONE camera and DitoGear motion control equipment. Between the second and third scene, a short transition visualizes the church collapsing, which was created using 3ds Max Particle Flow with help of Thinkbox Ember, a field manipulation toolkit, and Thinkbox Stoke, a particle reflow tool.

“In the transition, I’m trying to collapse a 200 million-point data cloud into smoke, then create the silhouette of a dancer as a light point from the ashes,” shared Kizny. “Even though it’s a short scene, I’m making use of a lot of technology. It’s not only rendering this point cloud data set again; it’s also collapsing it. I’m using the software in an atypical way, and Thinkbox has been incredibly helpful in troubleshooting the workflow so I could establish a solid pipeline.”

Collapsing the church proved to be a challenge for Kizny. Traditionally, when creating digital explosions, VFX artists are blowing up a solid, rigid object. Not only did Kizny need to collapse a point cloud – a daunting task in and of itself – but he also had to do so in the hyper-realistic aesthetic he’d established, and in a way that would be both ethereal and physically believable. Using 3ds Max Particle Flow as a simulation environment, Kizny was able to generate a comprehensive, high-resolution vector field that was more efficient and precisely controlled with Ember. Ember was also used to animate two angels appearing from the dust and smoke along with the dancer silhouette. The initial dataset for each of the angels was pushed through a specific vector noise field that produced a smoke-like dissolve, then reversed thanks to retiming features in Krakatoa, Ember and Stoke; Stoke was also used to add density.
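
The heart of pushing a point set through a vector noise field is a simple advection loop: evaluate a field at every point, step the points along it, and keep each frame so the sequence can be retimed or played in reverse. A toy NumPy version follows; the sine-based field is invented for the example, and Ember/Stoke evaluate far richer fields while also handling density:

```python
# Euler-advect points through a procedural vector field so a rigid
# cloud dissolves like smoke; reverse the frames to "re-form" it.
import numpy as np

def noise_field(points, t):
    """Cheap stand-in field: offset sine waves per axis."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([np.sin(3.1 * y + t),
                     np.sin(2.7 * z + t),
                     np.sin(3.7 * x + t)], axis=1)

def advect(points, steps=24, dt=0.04):
    frames = [points.copy()]
    for s in range(steps):
        points = points + dt * noise_field(points, s * dt)
        frames.append(points.copy())
    return frames  # play frames[::-1] for the reversed, re-forming look
```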

“To create the smoke on the floor, I decided to go all the way with Thinkbox tools,” Kizny said. “All the smoke you see was created using Ember vector fields and simulated with Stoke. It was good and damn fast.”

Another obstacle was figuring out how to animate the dancer in the point clouds. Six cameras recorded a live performer, with markerless motion capture tracking done using the iPi Motion Capture Studio package. The data obtained from the dancer was then ported onto a virtual, rigged model in 3ds Max and used to emit particles for a Particle Flow simulation. Ember vector fields were used for all the smoke-like circulations, and then everything was integrated and rendered using Krakatoa and Thinkbox’s Deadline, a render management system – almost 900 frames and 3 TB of data caches for the particles alone. Deadline was also used to distribute high-volume renders and allocate resources across Kizny’s render farm.

Though an innovative display of digital artistry, “Rebirth” is also a preservation tool. Interest generated by “The Chapel” and continued by “Rebirth” has enticed a Polish foundation to begin restoration efforts on the run-down building. Additionally, the LIDAR scans of the chapel will be donated to CyArk, a non-profit dedicated to the digital preservation of cultural heritage sites, and made widely available online.

The film is currently securing funding to complete postproduction. Support the campaign and learn more about the project at the IndieGoGo campaign homepage at http://bit.ly/support-rebirth. For updates on the film’s progress, visit http://rebirth-film.com/.

About Thinkbox Software
Thinkbox Software provides creative solutions for visual artists in entertainment, engineering and design. Developer of high-volume particle renderer Krakatoa and render farm management software Deadline, the team of Thinkbox Software solves difficult production problems with intuitive, well-designed solutions and remarkable support. We create tools that help artists manage their jobs and empower them to create worlds and imagine new realities. Thinkbox was founded in 2010 by Chris Bond, founder of Frantic Films. http://www.thinkboxsoftware.com

2014 Sochi Winter Olympics to Feature Giant 3D Pinscreen of Your Face

Visitors to this year’s Sochi Winter Olympics will have the opportunity to see their faces rendered on the side of a building in giant 3D mechanical polygons. The work of British architect Asif Khan, Megaface is like a cross between Mount Rushmore’s sculpted facade and the pinscreens that adorned executive offices of the ’90s.

Khan designed the 2,000 sq.m pavilion and landscape for MegaFon, one of the largest Russian telecoms companies and a general partner of the Sochi Winter Olympics.

3D photo booths within the pavilion, and in MegaFon retail stores across Russia, will scan visitors’ portraits to be recreated by the pavilion. Its facade is designed to function like a huge pinscreen: it is made up of over 10,000 actuators which transform the building’s skin into a three-dimensional portrait of each visitor’s face.

The concept is to give everyone the opportunity to be the face of the Olympics.

The structure is sited at the entrance to the Olympic Park and incorporates an exhibition hall, hospitality areas, a rooftop viewing deck and two broadcasting suites.

The installation consists of 10,000 actuators fitted with LEDs and arranged into triangles that can extend up to six feet out from the side of the building to form 3D shapes. Visitors will be invited to have their faces scanned at on-site “3D photo booths” before Khan’s actuators move to form giant 500-square-foot representations of the scans. Three faces will be shown at any given time, each for 20 seconds, and it’s estimated that 170,000 faces will be rendered during the games. Visitors will also be given a link where they can watch a 20-second video showing the exact moment their face was on the side of the building.
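
Mechanically, the mapping is straightforward to picture: resample a scanned face’s depth map down to one value per actuator, then scale that value to the pin’s physical travel. A toy NumPy sketch follows; the 100×100 grid (10,000 pins), the input depth-map format and the 1.8 m (six-foot) travel are assumptions based on the figures above, not iart’s control software:

```python
# Map a face depth image onto a pinscreen-style actuator grid.
import numpy as np

def depth_to_actuators(depth, grid=(100, 100), travel_m=1.8):
    """depth: 2D array scaled 0..1 (0 = background, 1 = nose tip)."""
    h, w = depth.shape
    gy, gx = grid
    # average-pool the depth map down to one value per actuator
    pooled = depth[:h - h % gy, :w - w % gx] \
        .reshape(gy, h // gy, gx, w // gx).mean(axis=(1, 3))
    return pooled * travel_m  # per-pin extension in meters

face = np.random.rand(400, 400)            # stand-in for a scanned face
extensions = depth_to_actuators(face)
print(extensions.shape, extensions.max())  # (100, 100) pins, <= 1.8 m
```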

Megaface will comprise one side of Russian carrier MegaFon’s pavilion — the installation’s name itself part of the massive branding exercise that is the Olympics. It’s some way from completion, but Khan and Swiss firm iart, which is realizing Khan’s vision, have successfully demonstrated a prototype (shown below) that uses just 1,000 actuators to render a small-scale image.

[Asif Khan via Verge]

The Kinetic Facade of the MegaFaces Pavilion: Initial Batch Test from iart on Vimeo.