
Endeavour: The Last Space Shuttle as she’s never been seen before.

[source by Mark Gibbs]

Endeavour, NASA’s fifth and final space shuttle, is now on display at the California Science Center in Los Angeles. If you’re at all a fan of space stuff, it’s one of the most iconic and remarkable flying machines ever built.

David Knight, a trustee and board member of the foundation, recently sent me a link to an amazing video of the shuttle as well as some excellent still shots.

David commented that these images were:

 “…captured by Chuck Null on the overhead crane while we were doing full-motion VR and HD/2D filming … the Payload Bay has been closed for [a] few years … one door will be opened once she’s mounted upright in simulated launch position in the new Air & Space Center.

Note that all of this is part of the Endeavour VR Project by which we are utilizing leading-edge imaging technology to film, photograph and LIDAR-scan the entire Orbiter, resulting in the most comprehensive captures of a Space Shuttle interior ever assembled – the goal is to render ultra-res VR experiences by which individuals will be able to don eyewear such as the Oculus Rift (the COO of Oculus himself came down during the capture sessions), and walk or ‘fly’ through the Orbiter, able to ‘look’ anywhere, even touch surfaces and turn switches, via eventual haptic feedback gloves etc.

The project is being Executive Produced by me, with the Producer being Ted Schilowitz (inventor of the RED camera and more), Director is Ben Grossman, who led the special effects for the most recent Star Trek movie. Truly Exciting!”

Here are the pictures …

Endeavour - the last Space Shuttle (photos: Charles Null / David Knight, on behalf of the California Science Center)

 

zLense real-time 3D tracking

zLense Announces World’s First Real-Time 3D Depth Mapping Technology for Broadcast Cameras

New virtual production platform dramatically lowers the cost of visual effects (VFX) for live and recorded TV, enabling visual environments previously unattainable in a live studio without any special studio set-up…

27 October 2014, London, UK – zLense, a specialist provider of virtual production platforms to the film, production, broadcast and gaming industries, today announced the launch of the world’s first depth-mapping camera solution that captures 3D data and scenery in real time and adds a 3D layer, optimized for broadcasters and film productions, to the footage. The groundbreaking, industry-first technology processes spatial information, making new, truly three-dimensional compositing methods possible and enabling production teams to create stunning 3D effects and utilise state-of-the-art CGI in live TV or pre-recorded transmissions – with no special studio set-up.

Utilising the solution, directors can produce unique simulated and augmented reality worlds, generating and combining dynamic virtual reality (VR) and augmented reality (AR) effects in live studio or outside broadcast transmissions. The unique depth-sensing technology allows full 360-degree freedom of camera movement and gives presenters and anchormen greater liberty of performance. Directors can combine dolly, jib arm and handheld shots as presenters move within, interact with and control the virtual environment – and, in the near future, they will be able to do so using only natural gestures and motions.

“We’re poised to shake up the Virtual Studio world by putting affordable high-quality real-time CGI into the hands of broadcasters,” said Bruno Gyorgy, President of zLense. “This unique world-leading technology changes the face of TV broadcasting as we know it, giving producers and programme directors access to CGI tools and techniques that transform the audience viewing experience.”

Doing away with the need for expensive match-moving work, the zLense Virtual Production platform dramatically speeds up the 3D compositing process, making it possible for directors to mix CGI and live action shots in real-time pre-visualization and take the production values of their studio and OB live transmissions to a new level. The solution is quick to install, requires just a single operator, and is operable in almost any studio lighting.

“With minimal expense and no special studio modifications, local and regional TV channels can use this technology to enhance their news and weather graphics programmes – unleashing live augmented reality, interactive simulations and visualisations that make the delivery of infographics exciting, enticing and totally immersive for viewers,” he continued.

The zLense Virtual Production platform combines depth-sensing technology and image-processing in a standalone camera rig that captures the 3D scene and camera movement. The ‘matte box’ sensor unit, which can be mounted on almost any camera rig, removes the need for external tracking devices or markers, while the platform’s built-in rendering engine cuts the cost and complexity of using visual effects in live and pre-recorded TV productions. The zLense Virtual Production platform can be used alongside other, pre-existing, rendering engines, VR systems and tracking technologies.
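
The release doesn’t describe the internals, but the core idea behind a per-pixel Z-map is depth-keyed compositing: when every pixel of the live image carries a depth value, a CG layer with its own depth can be merged point by point, so virtual objects correctly pass in front of or behind the presenter without a chroma key or manual rotoscoping. A minimal sketch of that merge (illustrative only, not zLense’s implementation):

```python
import numpy as np

def depth_composite(live_rgb, live_depth, cg_rgb, cg_depth, cg_alpha):
    """Merge a CG layer into a live-action frame using per-pixel depth.

    live_rgb, cg_rgb:     (H, W, 3) float images
    live_depth, cg_depth: (H, W) distances from the camera, in metres
    cg_alpha:             (H, W) coverage of the CG layer, 0..1
    """
    # The CG layer wins only where it is present and closer to the camera.
    cg_in_front = (cg_depth < live_depth) & (cg_alpha > 0.0)
    weight = np.where(cg_in_front, cg_alpha, 0.0)[..., None]
    return weight * cg_rgb + (1.0 - weight) * live_rgb
```

A production system also has to deal with lens distortion, depth noise and soft edges, but the per-pixel depth test above is what a real-time Z-map makes possible.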

The VFX real-time capabilities enabled by the zLense Virtual Production platform include:

  • Volumetric effects
  • Additional motion and depth blur
  • Shadows and reflections to create convincing state-of-the-art visual appearances
  • Dynamic relighting
  • Realistic 3D distortions
  • Creation of a fully interactive virtual environment with interactive physical particle simulation
  • Wide shot and in-depth compositions with full body figures
  • Real-time Z-map and 3D models of the picture

For more information on the zLense features and functionalities, please visit: zlense.com/features

About Zinemath
Zinemath, a leader in reinventing how professional moving images will be processed in the future, is the producer of zLense, a revolutionary real-time depth-sensing and modelling platform that adds three-dimensional information to the filming process. zLense is the first depth-mapping camera accessory optimized for broadcasters and cinema previsualization. With an R&D center in Budapest, Zinemath, part of the Luxembourg-based Docler Group, is spreading this new vision across the film, television and mobile technology sectors.

For more information please visit: www.zlense.com

Rent or Buy Leica Geosystems Cyclone 9

Leica Geosystems HDS Introduces Patent-Pending Innovations for Laser Scanning Project Efficiency

With Leica Cyclone 9.0, the industry-leading point cloud solution for processing laser scan data, Leica Geosystems HDS introduces major, patent-pending innovations for greater project efficiency. The innovations benefit both field and office via significantly faster, easier scan registration, plus quicker deliverable creation thanks to better 2D and 3D drafting tools and steel modelling. Cyclone 9.0 allows users to scale easily to larger, more complex projects while consistently ensuring high-quality deliverables.

Greatest advancement in office scan registration since cloud-to-cloud registration
When Leica Geosystems pioneered cloud-to-cloud registration, it enabled users – for the first time – to accurately execute laser scanning projects without having to physically place special targets around the scene, scan them, and model them in the office. With cloud-to-cloud registration software, users take advantage of overlaps among scans to register them together.

“The cloud-to-cloud registration approach has delivered significant logistical benefits onsite and time savings for many projects. We’ve constantly improved it, but the new Automatic Scan Alignment and Visual Registration capabilities in Cyclone 9.0 represent the biggest advancement in cloud-to-cloud registration since we introduced it,” explained Dr. Chris Thewalt, VP Laser Scanning Software. “Cyclone 9.0 lets users benefit from targetless scanning more often by performing the critical scan registration step far more efficiently in the office for many projects. As users increase the size and scope of their scanning projects, Cyclone 9.0 pays even bigger dividends. Any user who registers laser scan data will find great value in these capabilities.”

With the push of a button, Cyclone 9.0 automatically processes scans, and digital images if available, to create groups of overlapping scans that are initially aligned to each other. Once scan alignment is completed, algorithmic registration is applied for final registration. This new workflow option can be used in conjunction with target registration methods as well. These combined capabilities not only make the most challenging registration scenarios feasible, but also dramatically faster. Even novice users will appreciate their ease of use and ready scalability beyond small projects.
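
Leica’s new registration pipeline is proprietary and patent-pending, but cloud-to-cloud registration in general is a relative of iterative closest point (ICP) alignment: repeatedly pair points in the overlap between two scans and solve for the rigid transform that brings the pairs together. A minimal single-iteration sketch with NumPy/SciPy, for intuition only:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One iterative-closest-point step: the rigid (R, t) moving `source` toward `target`."""
    # Pair each source point with its nearest neighbour in the target scan.
    nearest = cKDTree(target).query(source)[1]
    paired = target[nearest]

    # Best-fit rigid transform between the paired sets (Kabsch / SVD).
    src_c, tgt_c = source.mean(axis=0), paired.mean(axis=0)
    H = (source - src_c).T @ (paired - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# In practice the step is iterated until the alignment converges:
# for _ in range(50):
#     R, t = icp_step(source, target)
#     source = source @ R.T + t
```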

Power user Marta Wren, technical specialist at Plowman Craven Associates (PCA – a leading UK chartered surveying firm), found that Cyclone 9.0’s Visual Registration tools alone sped up registration processing of scans by up to four times (4X) compared with previous methods. PCA uses laser scanning for civil infrastructure, commercial property, forensics, entertainment, and Building Information Modelling (BIM) applications.

New intuitive 2D and 3D drafting from laser scans
For civil applications, new roadway alignment drafting tools let users import LandXML-based roadway alignments or use simple polylines imported or created in Cyclone. These tools allow users to easily create cross section templates using feature codes, as well as copy them to the next station and visually adjust them to fit roadway conditions at the new location. A new vertical exaggeration tool in Cyclone 9.0 allows users to clearly see subtle changes in elevation; linework created between cross sections along the roadway can be used as breaklines for surface meshing or for 2D maps and drawings in other applications.

For 2D drafting of forensic scenes, building and BIM workflows, a new Quick Slice tool streamlines the process of creating a 2D sketch plane for drafting items, such as building footprints and sections, into just one step. A user only needs to pick one or two points on the face of a building to get started. This tool can also be used to quickly analyse the quality of registrations by visually checking where point clouds overlap.
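
The Quick Slice implementation isn’t documented, but the underlying operation – cutting a thin slice out of a cloud along a plane defined by a pick or two – is easy to picture. A rough sketch of a vertical slice through a cloud, assuming Z is up (not Cyclone’s actual code):

```python
import numpy as np

def vertical_slice(points, pick_a, pick_b, thickness=0.02):
    """Return the points within `thickness` metres of the vertical plane
    through the two picked points (XYZ coordinates, Z up)."""
    points = np.asarray(points, float)
    a, b = np.asarray(pick_a, float), np.asarray(pick_b, float)
    direction = b[:2] - a[:2]
    direction /= np.linalg.norm(direction)
    normal = np.array([-direction[1], direction[0]])   # horizontal plane normal
    signed_dist = (points[:, :2] - a[:2]) @ normal     # signed distance to the plane
    return points[np.abs(signed_dist) <= thickness]
```

The points that survive the cut can then be projected onto the plane and traced as 2D linework.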

Also included in Cyclone 9.0 are powerful, automatic point extraction features first introduced in Cyclone II TOPO and Leica CloudWorx. These include efficient SmartPicks for automatically finding bottom, top, and tie point locations and Points-on-a-Grid for automatically placing up to a thousand scan survey points on a grid for ground surfaces or building faces.

Simplified steel fitting of laser scan data
For plant, civil, building and BIM applications, Cyclone 9.0 also introduces a patent-pending innovation for modelling steel from point cloud data more quickly and easily. Unlike time-consuming methods that require either processing an entire available cloud to fit a steel shape or isolating a cloud section before fitting, this new tool lets users quickly and accurately model specific steel elements directly within congested point clouds. Users only need to make two picks along a steel member to model it. Shapes include wide flange, channel, angle, tee, and rectangular tube shapes.
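
The patent-pending fit itself isn’t described, but the two-pick idea can be sketched in general terms: crop the cloud to a corridor between the picks, refine the member’s axis (for instance with a principal-component fit), then fit the chosen catalogue profile around that axis. A rough illustration of the first two steps, not Leica’s method:

```python
import numpy as np

def member_axis(points, pick_a, pick_b, radius=0.15):
    """Estimate the axis of a steel member from two picks along it.

    points:         (N, 3) point cloud
    pick_a, pick_b: rough picks near each end of the member
    radius:         corridor radius in metres used to crop the cloud
    """
    points = np.asarray(points, float)
    a, b = np.asarray(pick_a, float), np.asarray(pick_b, float)

    # Keep only points inside a cylinder around the segment a-b.
    ab = b - a
    t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
    dist = np.linalg.norm(points - (a + t[:, None] * ab), axis=1)
    corridor = points[dist <= radius]

    # First principal component of the cropped points = refined axis direction.
    centred = corridor - corridor.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return corridor.mean(axis=0), vt[0]   # a point on the axis and its direction
```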

Faster path to deliverables
Leica Cyclone 9.0 also provides users with valuable, new capabilities for faster creation of deliverables for civil, architectural, BIM, plant, and forensic scene documentation from laser scans and High-Definition Surveying™ (HDS™).

Availability
Leica Cyclone 9.0 is available today. Further information about the Leica Cyclone family of products can be found at http://hds.leica-geosystems.com, and users may download new product versions online from this website or purchase or rent licenses from SCANable, your trusted Leica Geosystems representative. Contact us today for pricing on software and training.

Capturing Real-World Environments for Virtual Cinematography


[source] written by Matt Workman

Virtual Reality Cinematography

As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task well suited to a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting using 360-degree HDR light probes captured with DSLRs and nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and records the physical dimensions and color of an environment/space. It captures millions of points, saving their position and color so the space can later be reconstructed digitally.
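
The output of such a scan is conceptually very simple: a long list of points, each with a position and a colour. A minimal sketch of loading an ASCII XYZRGB export (a common interchange format; production workflows would more likely use E57, PTS or a vendor format, and the file name here is hypothetical):

```python
import numpy as np

# Each row: X Y Z R G B  (metres plus 0-255 colour values).
scan = np.loadtxt("scan_001.xyz")        # hypothetical export from the scanner
positions = scan[:, :3]                  # (N, 3) point positions
colors = scan[:, 3:6] / 255.0            # (N, 3) colours normalised to 0..1

print(f"{len(scan):,} points, bounding box "
      f"{positions.min(axis=0)} to {positions.max(axis=0)}")
```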

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32-bit float HDR) 360-degree probes of the location to record the lighting. This process essentially captures the lighting in the space at a VERY high dynamic range, which would later be reprojected onto the geometry constructed from the LIDAR data.
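
Workman doesn’t spell out the merge step, but the usual approach is to shoot a bracket of exposures at each nodal position and combine them into one 32-bit radiance image, trusting each pixel most where it is neither near black nor clipped. A simplified sketch, assuming the frames have already been linearised (RAW or de-gamma’d):

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Merge bracketed, linearised frames into a single HDR radiance image.

    frames:         list of (H, W, 3) float arrays in 0..1, linear light
    exposure_times: shutter time in seconds for each frame
    """
    radiance = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(radiance)
    for img, t in zip(frames, exposure_times):
        # Hat-shaped weight: favour mid-tones, discount shadows and clipped highlights.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        radiance += w * (img / t)        # scale each frame by its exposure
        weight_sum += w
    return radiance / np.maximum(weight_sum, 1e-6)
```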

Real-time 3D asset lit by a baked HDR environment

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions – lights outside windows, track lighting on walls, practicals, etc. – and then capturing that space with the techniques outlined above as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, like he/she would do on a real set. And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera, and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.
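
The reprojection step boils down to a lookup: for each scanned surface point, take the direction from the probe position to that point and convert it into UV coordinates in the lat-long (equirectangular) panorama. A minimal sketch of that mapping, assuming a Y-up convention:

```python
import numpy as np

def latlong_uv(direction):
    """Map a 3D direction (Y up) to UV coordinates in an equirectangular probe."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    u = 0.5 + np.arctan2(d[0], -d[2]) / (2.0 * np.pi)   # longitude around the probe
    v = np.arccos(np.clip(d[1], -1.0, 1.0)) / np.pi     # latitude, 0 at the zenith
    return u, v

# To texture a scanned vertex, sample the HDR probe at
# latlong_uv(vertex_position - probe_position).
```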

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia Quadro K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment that was captured in 32-bit float with 8K textures being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8k textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader and supported by Amplify Creations beta Unity Plug-in.
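
The UDIM convention mentioned here is just a numbering scheme that gives each 0-1 UV tile its own texture file, which is how a single asset can carry many 8K maps at once. The tile number comes straight from the integer part of the UV coordinates:

```python
def udim_tile(u, v):
    """UDIM tile number for a UV coordinate (standard 10-tiles-wide layout)."""
    return 1001 + int(u) + 10 * int(v)

assert udim_tile(0.3, 0.7) == 1001   # first tile
assert udim_tile(1.5, 0.2) == 1002   # one tile to the right
assert udim_tile(0.5, 1.5) == 1011   # one row up
```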

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8K texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse of the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. And a traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, environments with their textures, real world lighting, and viewing them in real time in virtual reality.

The industry is evolving at an incredibly rapid pace, and so must the creatives working in it – especially the person responsible for the camera and the lighting: the director of photography.


OMOTE Real-time Face Tracking 3D Projection Mapping

Forget the faces of historic monuments, the new frontier of 3D projection mapping is the faces of humans.

The piece was created by Nobumichi Asai and friends. Technical details behind the process are scant at the moment, but from what can be found in this Tumblr post, it’s clear that step one is a 3D scan of the model’s face.

Here is the translated text from that post:

I will continue by explaining how this face mapping was made.
The title OMOTE (meaning “face” or “surface”) comes from Noh, and the making of the piece also draws on the idea of the Noh mask: covering the face by creating a “surface”. Because the output had to represent a very delicate make-up art, it was important to pursue accuracy, so I started by 3D laser scanning the face of the model.

I suspect that a structured light scanner was used to capture the geometry of the model’s face rather than a 3D laser scanner. Nonetheless, this is a very cool application of 3D projection mapping.

3D face scanning and projection mapping

OMOTE / REAL-TIME FACE TRACKING & PROJECTION MAPPING. from something wonderful on Vimeo.

Google's Project Tango 3D Capture Device

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.
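
Mantis Vision’s algorithms are proprietary, but structured-light depth sensing in general works by projecting a known pattern, decoding where each pattern feature lands in the camera image, and triangulating depth from the shift (disparity) between projector and camera. For a rectified projector-camera pair the relationship is the classic one shown below (a simplified sketch, not MV4D’s actual pipeline):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth for a rectified projector-camera pair.

    disparity_px: shift between where a pattern feature was projected and where
                  the camera sees it, in pixels
    focal_px:     focal length in pixels
    baseline_m:   projector-to-camera distance in metres
    """
    disparity_px = np.asarray(disparity_px, float)
    with np.errstate(divide="ignore"):
        # Z = f * B / d; undecoded pixels (disparity 0) become NaN.
        return np.where(disparity_px > 0, focal_px * baseline_m / disparity_px, np.nan)
```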

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer-level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful and diverse range of products in the future.

Learn more about the project here


Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store; meaning, there’ll probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine — tools developers already know the strengths and weaknesses of, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, this could mean that cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.
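
Part of what makes a depth camera like this usable for scanning and mocap is that every depth pixel back-projects to a 3D point through the standard pinhole model. A minimal sketch of that back-projection (the intrinsics below are placeholder values, not an official Kinect calibration):

```python
import numpy as np

def depth_to_points(depth_m, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Back-project an (H, W) depth image in metres to an (H*W, 3) point cloud.

    fx, fy, cx, cy are pinhole intrinsics in pixels -- placeholder numbers here;
    a real pipeline would read the sensor's factory calibration instead.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```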

Kinect motion capture guru and programmer Jasper Brekelmans has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site, http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.

Leap Motion Controller Update to Offer Affordable Individual Joint MoCap


Leap Motion has announced that the software for its self-titled PC gesture-control device will soon track the movement of individual finger joints, as well as the overall motion of a user’s hands.

Since its launch in 2012, the $80 Leap Motion controller has attracted a lot of interest in the CG community, with Autodesk releasing Maya and MotionBuilder plugins last year.

Individual joint tracking, more parameters captured
In a post on the company’s blog, Leap Motion CEO Michael Buckwald revealed that version 2 of its software will track the individual joints of a user’s fingers, compensating automatically where individual fingers are occluded.

The software will also expose “much more granular data” via its SDK, including 27 dimensions per hand.
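
To give a sense of what per-joint tracking buys you: once an SDK reports the 3D positions of successive joints along a finger, per-joint flexion angles fall out of simple vector math. A generic sketch (deliberately not tied to Leap Motion’s actual API names):

```python
import numpy as np

def joint_angle(prev_joint, joint, next_joint):
    """Flexion angle in degrees at `joint`, from the 3D positions of its neighbours."""
    a = np.asarray(prev_joint, float) - np.asarray(joint, float)
    b = np.asarray(next_joint, float) - np.asarray(joint, float)
    cos_theta = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(joint_angle([0, 0, 0], [0, 3, 0], [0, 6, 0]))  # straight segment: ~180 degrees
print(joint_angle([0, 0, 0], [0, 3, 0], [3, 3, 0]))  # right-angle bend: ~90 degrees
```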

Affordable Individual MoCap tools coming soon
The update, which will be free and does not require a change of hardware, is now in public beta for developers, although there’s no news of a consumer release date yet.

Jasper Brekelmans, creator of upcoming hand-tracking tool Brekel Pro Hands, has already announced that he is using the SDK.

Read more about the Leap Motion V2 update on the developer’s blog

Massive Software announces Massive 6.0 crowd simulation software

Massive 6.0

New look, new GUI

Massive has a completely new graphic user interface. With graphic design by Lost in Space, the new interface not only looks stylish and modern but provides a much smoother interactive user experience. Dialog windows and editors now turn up in the new side panel, keeping the workspace clear and tidy. The main window now hosts multiple panels that can be configured to suit the user’s needs, and the configurations can be recalled for later use. Since any panel can be a viewport, it’s now possible to have up to 5 viewports open at once, each using a different camera.


 

3D placement

The existing placement tools in Massive have been extended to work in three dimensions, independently of the terrain. The point generator can be placed anywhere in space, the circle generator becomes a sphere, the polygon generator gains depth, and the spline generator becomes tubular. There’s also a new generator called the geometry generator, which takes a Wavefront .obj file and fills the polygonal volume with agents.
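
Massive’s generator code isn’t public, but filling a closed mesh volume with agent positions can be done with straightforward rejection sampling: scatter candidates in the mesh’s bounding box and keep the ones that land inside. A sketch using the open-source trimesh library (illustrative only, and it assumes the .obj describes a watertight volume):

```python
import numpy as np
import trimesh

def fill_volume(obj_path, count, seed=0):
    """Sample `count` agent positions inside the closed volume of a Wavefront .obj."""
    mesh = trimesh.load(obj_path, force="mesh")
    rng = np.random.default_rng(seed)
    lo, hi = mesh.bounds
    agents = []
    while len(agents) < count:
        candidates = rng.uniform(lo, hi, size=(4 * count, 3))
        inside = candidates[mesh.contains(candidates)]   # needs a watertight mesh
        if len(inside) == 0:
            raise ValueError("mesh does not enclose a volume (not watertight?)")
        agents.extend(inside.tolist())
    return np.array(agents[:count])
```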

 

Auto action import

Building complex agents with hundreds of actions can be a time consuming process, but it doesn’t have to be anymore. In Massive 6.0 the action importing process can be completely automated, reducing what could be months of work to a few minutes. Also, all of the import settings for all of the actions can be saved to a file so that revisions of source motion can be imported in seconds using the same settings as earlier revisions.


Bullet dynamics

To effortlessly build a mountain of zombies it would be useful to have extremely stable rigid body dynamics. Massive 6.0 supports Bullet dynamics, significantly increasing dynamics stability. Just for fun we had 1000 mayhem agents throw themselves off a cliff into a pile on the ground below. Without tweaking any parameters we easily created an impressive zombie shot, demonstrating the stability and ease of use of Bullet dynamics.

No typing required

While it is possible to create almost any kind of behaviour using the brain nodes in Massive, it has always required a little typing to specify inputs and outputs of the brain. This is no longer necessary with the new channel menu which allows the user to very quickly construct any possible input or output channel string with a few mouse clicks.

These are just some of the new features of Massive 6.0, which is scheduled for release in September.

 

Massive for Maya

 

Massive has always been a standalone system, and now there’s the choice to use Massive standalone as Massive Prime and Massive Jet, or in Maya as Massive for Maya.

 

Set up and run simulations in Maya

Massive for Maya facilitates the creation of Massive simulations directly in Maya. All of the Massive scene setup tools, such as the flow field, lanes, paint and placement editors, have been seamlessly reconstructed inside Maya. The simulation workflow has been integrated into Maya to allow for intuitive running, recording and playback of simulations. To achieve this, a record button has been added next to the transport controls and a special status indicator has been included in the Massive shelf. Scrubbing simulations of thousands of agents in Maya is now as simple and efficient as scrubbing the animation of a single character.


Set up lighting in Maya

The Massive agents automatically appear in preview renders as well as batch renders alongside any other objects in the scene. Rendering in Maya works for Pixar’s RenderMan, Air, 3Delight, Mental Ray and V-Ray. This allows for lighting scenes using the familiar Maya lighting tools, without requiring any special effort to integrate Massive elements into the scene. Furthermore, all of this has been achieved without losing any of the efficiency and scalability of Massive.

 

Edit simulations in Maya graph editor

Any of the agents in a simulation can be made editable in the Maya graph editor. This allows for immediate editing of simulations without leaving the Maya environment. Any changes made to the animation in the graph editor automatically feed back to the Massive agents, so the tweaked agents will appear in the render even though the user sees a Maya character for editing purposes in the viewport. The editing process can even be used with complex animation control rigs, allowing animators and motion editors complete freedom to work however they want to.

 

 

Directable characters

A major advantage of Massive for Maya is the ability to bring Massive’s famous brains to character animation, providing another vital tool for creating the illusion of life. While animation studios have integrated Massive into their pipelines to do exactly this for years, the ability to create directable characters has not been within easy reach for those using off-the-shelf solutions. With Massive for Maya it’s now possible to create characters using a handful of base cycles, takes and expressions that can handle such tasks as keeping alive, responding to the focus of the shot, responding to simple direction, or simply walking along a path, thus reducing the amount of work required to fill out a scene with characters which are not currently the focus of the shot.

For example, in a scene in which two characters are talking with each other and a third character, say a mouse, is reacting, the mouse could be driven by its Massive counterpart. The talking characters would drive their Massive counterparts, thereby being visible to the mouse. Using attributes in the talking characters, their Massive counterparts could change colour to convey their emotional states to the mouse agent. The mouse agent then performs appropriately, using its animation cycles, blend shape animations etc. in response to the performance of the talking characters, and looking at whichever character is talking. Once the agents for a project have been created, setting up a shot for this technique requires only a few mouse clicks and the results happen in real time. Any edits to the timing of the shot will simply flow through to the mouse performance.

SCANable offers on-site 3D imaging of real-world people/characters to populate your 3D crowd asset library in Massive’s crowd placement and simulation software. Contact us today for a free quote.

R3dS Wrap Topology Transfer Software

Introducing R3DS Wrap – Topology Transfer Tool

Wrap is a topology transfer tool. It allows you to utilize the topology you already have and transfer your new 3D-scanned data onto it. The resulting models will not only share the same topology and UV coordinates but will also naturally become blendshapes of each other. Here’s a short video showing how it works:

And here are a couple of examples based on 3D scans kindly provided by Lee Perry-Smith:


You can download a demo version from their website: http://www.russian3dscanner.com
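
R3DS hasn’t published how Wrap works internally, but the core idea of topology transfer can be illustrated: deform a clean template mesh you already have until its vertices lie on the scan’s surface, so every processed scan ends up with the same vertex order and UVs (and therefore works as a blendshape of the others). A deliberately crude, purely geometric sketch using the open-source trimesh library – real tools add non-rigid registration, landmark constraints and smoothing on top of this:

```python
import trimesh

def naive_wrap(template_path, scan_path, output_path):
    """Snap each template vertex to the closest point on the scan surface.

    The result keeps the template's topology and UVs while taking on the scan's
    shape -- the essence of topology transfer, minus the regularisation a real
    tool needs to avoid folds and stretching.
    """
    template = trimesh.load(template_path, force="mesh")
    scan = trimesh.load(scan_path, force="mesh")

    closest, _, _ = scan.nearest.on_surface(template.vertices)
    template.vertices = closest
    template.export(output_path)

# naive_wrap("generic_head.obj", "scanned_head.obj", "wrapped_head.obj")
```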

As with all new technology in its final beta stages, Wrap is not perfect yet. R3DS would be highly appreciative and grateful to everyone who gives them the support and feedback needed to finalize things in the best possible way. This software has the potential to be a great tool. Check it out!