faro freestyle 3d handheld scanner

FARO® Launches Innovative, User-Friendly Handheld 3D Scanner to Meet Growing Demand for Portable Scanning

LAKE MARY, Fla., Jan. 7, 2015 /PRNewswire/ — FARO Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announces the release of the new FARO Freestyle3D Handheld Laser Scanner, an easy, intuitive device for use in Architecture, Engineering and Construction (AEC), Law Enforcement, and other industries.

The FARO Freestyle3D is equipped with a Microsoft Surface™ tablet and offers unprecedented real-time visualization by allowing the user to view point cloud data as it is captured. The Freestyle3D scans to a distance of up to three (3) meters and captures up to 88K points per second with accuracy better than 1.5mm.  The patent-pending, self-compensating optical system also allows users to start scanning immediately with no warm up time required.

“The Freestyle3D is the latest addition to the FARO 3D laser scanning portfolio and represents another step on our journey to democratize 3D scanning,” stated Jay Freeland, FARO’s President and CEO.  “Following the successful adoption of our Focus scanners for long-range scanning, we’ve developed a scanner that provides customers with the same intuitive feel and ease-of-use in a handheld device.”

The portability of the Freestyle3D enables users to maneuver and scan in tight and hard-to-reach areas such as car interiors, under tables and behind objects, making it ideal for crime scene data collection or architectural preservation and restoration activities.  Memory-scan technology enables Freestyle3D users to pause scanning at any time and then resume data collection where they left off without the use of artificial targets.

Mr. Freeland added, “FARO’s customers continue to stress the importance of work-flow simplicity, portability, and affordability as key drivers to their continued use and adoption of 3D laser scanning.  We have responded by developing an easy-to-use, industrial grade, handheld laser scanning device that weighs less than 2 lbs.”

The Freestyle3D can be employed as a standalone device to scan areas of interest, or used in concert with FARO’s Focus3D X 130 / X 330 scanners.  Point cloud data from all of these devices can be seamlessly integrated and shared with all of FARO’s software visualization tools, including FARO SCENE, WebShare Cloud, and FARO CAD Zone packages.

For more information about FARO’s 3D scanning solutions visit: www.faro.com

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 that are subject to risks and uncertainties, such as statements about demand for and customer acceptance of FARO’s products, and FARO’s product development and product launches. Statements that are not historical facts or that describe the Company’s plans, objectives, projections, expectations, assumptions, strategies, or goals are forward-looking statements. In addition, words such as “is,” “will,” and similar expressions or discussions of FARO’s plans or other intentions identify forward-looking statements. Forward-looking statements are not guarantees of future performance and are subject to various known and unknown risks, uncertainties, and other factors that may cause actual results, performances, or achievements to differ materially from future results, performances, or achievements expressed or implied by such forward-looking statements. Consequently, undue reliance should not be placed on these forward-looking statements.

Factors that could cause actual results to differ materially from what is expressed or forecasted in such forward-looking statements include, but are not limited to:

  • development by others of new or improved products, processes or technologies that make the Company’s products less competitive or obsolete;
  • the Company’s inability to maintain its technological advantage by developing new products and enhancing its existing products;
  • declines or other adverse changes, or lack of improvement, in industries that the Company serves or the domestic and international economies in the regions of the world where the Company operates and other general economic, business, and financial conditions; and
  • other risks detailed in Part I, Item 1A. Risk Factors in the Company’s Annual Report on Form 10-K for the year ended December 31, 2013 and Part II, Item 1A. Risk Factors in the Company’s Quarterly Report on Form 10-Q for the quarter ended June 28, 2014.

Forward-looking statements in this release represent the Company’s judgment as of the date of this release. The Company undertakes no obligation to update publicly any forward-looking statements, whether as a result of new information, future events, or otherwise, unless otherwise required by law.

About FARO

FARO is the world’s most trusted source for 3D measurement technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, rapid prototyping, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems, worldwide. The Company’s global headquarters is located in Lake Mary, FL; its European regional headquarters in Stuttgart, Germany; and its Asia/Pacific regional headquarters in Singapore. FARO has other offices in the United States, Canada, Mexico, Brazil, Germany, the United Kingdom, France, Spain, Italy, Poland, Turkey, the Netherlands, Switzerland, Portugal, India, China, Malaysia, Vietnam, Thailand, South Korea, and Japan.

More information is available at http://www.faro.com

SOURCE FARO Technologies, Inc.

Mattepainting Toolkit Camera Projection

The Mattepainting Toolkit

Photogrammetry and camera projection mapping in Maya made easy.

What’s included?

The Mattepainting Toolkit (gs_mptk) is a plugin suite for Autodesk Maya that helps artists build photorealistic 3D environments with minimal rendering overhead. It offers an extensive toolset for working with digital paintings as well as datasets sourced from photographs.

Version 3.0 is now released!

For Maya versions 2014 and 2015, version 3.0 of the toolkit adds support for Viewport 2.0 and a number of new features. Version 2.0 is still available for Maya versions 2012-2014. A lite version of the toolkit, the Camera Projection Toolkit (gs_cptk), is available for purchase from the Autodesk Exchange. To see a complete feature comparison list between these versions, click here.

How does it work?

The Mattepainting Toolkit uses an OpenGL implementation for shader feedback within Maya’s viewport. This allows users to work directly with paintings, photos, and image sequences that are mapped onto geometry in an immediate and intuitive way.

Overview

The User Interface

Textures are organized in a UI that manages the shaders used for viewport display and rendering.

  • Clicking on an image thumbnail will load the texture in your preferred image editor.
  • Texture layer order is determined by a drag-and-drop list.
  • Geometry shading assignments can be quickly added and removed.

Point Cloud Data

Import Bundler and PLY point cloud data from Agisoft Photoscan, Photosynth, or other Structure From Motion (SFM) software; a minimal reader sketch follows the list below.

  • Point clouds can be used as a modeling guide to quickly reconstruct a physical space.
  • Cameras are automatically positioned in the scene for projection mapping.
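
The toolkit's importer handles these formats natively; as a rough illustration of the data involved, a minimal Python reader for an ASCII PLY point cloud might look like the following. The x y z r g b vertex layout is an assumption here; real PLY headers declare the actual properties.

```python
def read_ply_points(path):
    """Minimal ASCII PLY point-cloud reader (illustration only, not the
    toolkit's importer). Assumes vertex properties x y z r g b, a common
    layout for SFM exports such as Agisoft Photoscan's."""
    with open(path) as f:
        # Walk the header to find the vertex count and the data start.
        vertex_count = 0
        line = f.readline()
        while line and line.strip() != "end_header":
            if line.startswith("element vertex"):
                vertex_count = int(line.split()[-1])
            line = f.readline()
        # Read one vertex per line: x y z followed by 8-bit r g b.
        points = []
        for _ in range(vertex_count):
            vals = f.readline().split()
            x, y, z = (float(v) for v in vals[:3])
            r, g, b = (int(v) for v in vals[3:6])
            points.append((x, y, z, r, g, b))
    return points

cloud = read_ply_points("scan.ply")  # hypothetical file
print(len(cloud), "points loaded")
```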

The Viewport

A custom OpenGL shader allows textures to be displayed in high quality and manipulated interactively within the viewport.

  • Up to 16 texture layers can be displayed per shader.
  • HDR equirectangular images can be projected spherically (see the direction-to-UV sketch after this list).
  • Texture mattes can be painted directly onto geometry within the viewport.
  • Image sequences are supported so that film plates can be mapped to geometry.
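
The spherical projection in the list above reduces to mapping a view direction to UV coordinates on an equirectangular (latlong) image. A minimal sketch, assuming a Y-up axis and one common latlong convention; packages can differ by an axis flip or a horizontal offset:

```python
import math

def latlong_uv(dx, dy, dz):
    """Map a normalized direction to equirectangular UVs in [0, 1].
    The axis convention here is an assumption; Maya's may differ."""
    u = 0.5 + math.atan2(dx, -dz) / (2.0 * math.pi)          # longitude
    v = 0.5 + math.asin(max(-1.0, min(1.0, dy))) / math.pi   # latitude
    return u, v

print(latlong_uv(1.0, 0.0, 0.0))   # (0.75, 0.5) under this convention
print(latlong_uv(0.0, 0.0, -1.0))  # (0.5, 0.5): -Z (forward) hits the image center
```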

Rendering

The layered textures can be rendered with any renderer available to Maya. Custom Mental Ray and V-Ray shaders included with the toolkit extend the texture blending capabilities for those renderers.

  • The texture layers can be baked down to object UVs.
  • A coverage map can be rendered to isolate which areas of the geometry are most visible to the camera.
  • For Mental Ray and V-Ray, textures can be blended based on object occlusion, distance from the projection camera, and object facing ratio (a weighting sketch follows below).
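
The bundled Mental Ray and V-Ray shaders implement this blending natively; as a rough sketch of the weighting idea in the last bullet, a per-layer weight can be built from the facing ratio and a camera-distance falloff and then normalized across layers. The linear falloff below is an assumption, not the shaders' documented curve.

```python
def projection_weight(normal, to_camera, distance, near, far):
    """Illustrative projection blend weight from facing ratio plus a
    linear distance falloff. `normal` and `to_camera` are unit vectors;
    the toolkit's actual shaders may combine these terms differently."""
    facing = max(0.0, sum(n * c for n, c in zip(normal, to_camera)))
    t = (distance - near) / max(1e-6, far - near)
    falloff = 1.0 - min(1.0, max(0.0, t))   # 1 inside `near`, 0 past `far`
    return facing * falloff

# Two projections covering the same point; normalize to blend them.
w1 = projection_weight((0, 0, 1), (0.0, 0.0, 1.0), 2.0, 1.0, 10.0)
w2 = projection_weight((0, 0, 1), (0.6, 0.0, 0.8), 5.0, 1.0, 10.0)
total = w1 + w2
print(w1 / total, w2 / total)  # relative contribution of each layer
```
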
Capturing Real-World Environments for Virtual Cinematography

[source] written by Matt Workman

Virtual Reality Cinematography

As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task that is well suited for a DP and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting using 360 degree HDR light probes captured with DSLRs/nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be later used to construct the space digitally.
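
Each LIDAR return is essentially a range plus two angles, which the scanner converts into a 3D point before attaching color. A minimal sketch of that basic geometry; real instruments add per-unit calibration and scan registration on top:

```python
import math

def polar_to_xyz(range_m, azimuth, elevation):
    """Convert one LIDAR return (range in meters, angles in radians)
    to a Cartesian point. Only the textbook geometry; actual scanners
    apply calibration corrections as well."""
    x = range_m * math.cos(elevation) * math.cos(azimuth)
    y = range_m * math.cos(elevation) * math.sin(azimuth)
    z = range_m * math.sin(elevation)
    return x, y, z

# A 10 m return straight ahead at the horizon.
print(polar_to_xyz(10.0, 0.0, 0.0))  # (10.0, 0.0, 0.0)
```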

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32-bit float HDR) 360 degree probes of the location to record the lighting.  This process essentially captures the lighting in the space at a VERY high dynamic range, which would later be reprojected onto the geometry constructed from the LIDAR data.

Real-time 3D asset being lit by an HDR environment in real time (baked)

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space using the above outlined techniques as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, like he/she would do on a real set.  And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360 degree environments. He shows a demo of the environment that was captured in 32-bit float with 8K textures being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8K textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader, supported by Amplify Creations’ beta Unity plug-in.
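
The UDIM model mentioned above is a tile-numbering convention: each integer UV tile maps to a four-digit texture number, which is what lets a virtual texture engine page many 8K tiles independently. The numbering itself is standard:

```python
def udim_tile(u, v):
    """Return the UDIM number for a UV coordinate: tiles start at 1001
    for u, v in [0, 1) and advance by 1 per tile in U (ten per row)
    and by 10 per tile in V."""
    return 1001 + int(u) + 10 * int(v)

print(udim_tile(0.5, 0.5))   # 1001: the first tile
print(udim_tile(1.5, 0.25))  # 1002: second tile across in U
print(udim_tile(0.2, 2.7))   # 1021: third row up in V
```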

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8K texture buffering.  The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse of the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. A traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, environments with their textures, real world lighting, and viewing them in real time in virtual reality.

The industry is evolving at an incredibly rapid pace and so must the creatives working in it, especially the person responsible for the camera and the lighting: the director of photography.

Eyesmap 3D Scanning Tablet

3D Sensing Tablet Aims To Replace Multiple Surveyor Tools

 

Source: Tech Crunch

As we reported earlier this year, Google is building a mobile device with 3D sensing capabilities — under the Project Tango moniker. But it’s not the only company looking to combine 3D sensing with mobility.

Spanish startup E-Capture R&D is building a tablet with 3D sensing capabilities that’s aiming to target the enterprise space — for example as a portable tool for surveyors, civil engineers, architects and the like — which is due to go on sale at the beginning of 2015.

The tablet, called EyesMap, will have two rear 13-megapixel cameras, along with a depth sensor and GPS, to enable it to measure coordinates, surfaces and volumes of objects at a distance of up to 70 to 80 meters in real time.

Eyesmap 3D Scanning Tablet

 

So, for instance, it could be used to capture measurements of, or create a 3D model of, a bridge or a building from a distance. It can also model objects as small as insects, so civil engineers could use it to 3D scan individual components.

Its makers claim it can build high-resolution models with HD realistic textures.

EyesMap uses photogrammetry to ensure accurate measurements and to build outdoor 3D models, but also has an RGBD sensor for indoor scanning.

The tablet will apparently be able to scan an “advanced photogrammetric picture” with up to 4 million dots in around 2 minutes. It will also be able to capture 3D objects in motion. It’s using a blend of computer vision techniques, photogrammetry, visual odometry, “precision sensor fine tuning” and other image measuring techniques, say its makers.

E-Capture was founded back in April 2012 by a group of experienced surveyors and Pedro Ortiz-Coder, a researcher in the laser scanning and photogrammetry field. The business has been founder funded thus far, but has also received a public grant of €800,000 to help with development.

In terms of where EyesMap fits into the existing enterprise device market, Ortiz-Coder says it’s competing with multiple standalone instruments in the survey field — such as 3D scanners, telemeters, photogrammetry software and so on — but is bundling multiple functions into a single portable device.

“To [survey small objects], a short range laser scanner is required but, a short-range LS cannot capture big or far away objects. That’s why we thought to create a definitive instrument, which permits the user to scan small objects, indoors, buildings, big objects and do professional works with a portable device,” he tells TechCrunch.

“Moreover, there wasn’t in the market any instrument which can measure objects in motion accurately more than 3-4 meters. EyesMap can measure people, animals, objects in motion in real time with a high range distance.”

The tablet will run Windows and, on the hardware front, will have Intel’s 4th generation i7 processor and 16 GB of RAM. Pricing for the EyesMap slate has not yet been announced.

 

Another 3D mobility project we previously covered, called LazeeEye, was aiming to bring 3D sensing smarts to any smartphone via an add on device (using just RGBD sensing) — albeit that project fell a little short of its funding goal on Kickstarter.

Also in the news recently, Mantis Vision raised $12.5 million in funding from Qualcomm Ventures, Samsung and others for its mobile 3D capture engine, which is designed to work on handheld devices.

There’s no denying mobile 3D as a space is heating up for device makers, although it remains to be seen how slick the end-user applications end up being — and whether they can capture the imagination of mainstream mobile users or, as with E-Capture’s positioning, carve out an initial user base within niche industries.

Shapify Booth Full Body 3D Scanner

Artec Announces the World’s First 3D Full Body Scanner – Shapify Booth

A twelve second body scan and shoppers pick up their 3D printed figurine next time they visit the supermarket

This week Asda and Artec Group are happy to announce their partnership as Asda becomes the first supermarket to bring new cutting-edge 3D printing technology to shoppers in the UK with the installation of the Artec Shapify Booth, the world’s first high-speed 3D full-body scanner, in its Trafford Park store. The scanning booth will allow thousands of customers to create a 3D miniature replica of themselves.

Artec Shapify Booth

The Artec scanning booth, equipped with wide-view, high-resolution 3D scanners and a rotation rig, takes just 12 seconds to scan a person. The Artec algorithms automatically fuse 700 captured surfaces into a detailed printable file. This digital model is then sent to the Asda 3D printing centre to be made into an 8″ mini-statue in full colour, which can be collected from the store just one week later. Asda’s unique 3D printing technology allows the processing of a huge volume of high-quality figurines at a time, while each print costs just £60.

Asda first introduced 3D scanning and 3D printing of customers’ figurines six months ago, using Artec handheld scanners. Driven by the immediate success of the venture and Asda’s vision to offer 3D technology to the public, Artec Group tailored its professional scanning equipment to spec and created the Shapify Booth, a high-speed full-body scanner that Asda is now making available to all.

Making 3D prints of all the family, customers can also come along to be scanned in their sports kit, wedding outfits, graduation robes or fancy dress, taking something totally new and personalised back home with them after their weekly shop.

If the trial of the Shapify technology at Trafford Park is successful, the new booths will be rolled out to more stores in the autumn.

Phil Stout, Asda Innovation Manager – Asda is fast becoming not just a retailer but a technology company, and this innovation is another example of how we’re leading the way on in-store, consumer-facing technology. We’ve been working with Artec technology for a while now and we’re delighted to be the first company in the world able to offer our customers this unique service.

Artyom Yukhin, Artec Group President and CEO – Over the last 5 years Artec has been providing 3D technologies to professionals in industries from space and automotive to medical and movie special effects, but we have always been looking for the chance to do something for the public. Asda’s backing and second-to-none customer understanding allowed us to create high-speed scanners which are fun and easy for people to use.

About Asda Stores Ltd.

Founded in the 1960s in Yorkshire, Asda is one of Britain’s leading retailers. It has more than 180,000 dedicated Asda colleagues serving customers from 551 stores, including 32 Supercentres, 311 Superstores, 29 Asda Living stores, 179 Supermarkets, 25 depots and seven recycling centres across the UK. Its main office is in Leeds, Yorkshire and its George clothing division is in Lutterworth, Leicestershire. More than 18 million people shop at Asda stores every week and 98 per cent of UK homes are served by www.asda.com. Asda joined Walmart, the world’s number one retailer, in 1999.

About Artec Group

Artec Group is a manufacturer and developer of professional 3D hardware and software, headquartered in Luxembourg. Artec Group is a global market leader in 3D scanning solutions used by thousands of people all over the world.
Shapify, the technology for creating 3D printed figurines, was conceived and launched by Artec Group in 2013: www.shapify.me
For more information about Artec Group, visit www.artec-group.com.

Contacts:
Artec Group : press@artec-group.com

FARO SCENE Cloud to Cloud Registration

FARO SCENE 5.3 Laser Scanning Software Provides Scan Registration without Targets

[source]

FARO® Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announced the release of their newest version of laser scanning software, SCENE 5.3, and scan data hosting-service, SCENE WebShare Cloud 1.5.

FARO’s SCENE 5.3 software, for use with the Laser Scanner Focus3D X Series, delivers scan registration by eliminating artificial targets, such as spheres and checkerboards. Users can choose from two available registration methods: Top View Based or Cloud to Cloud. Top View Based registration allows for targetless positioning of scans. In interiors and in built-up areas without reliable GPS positioning of the individual scans, targetless positioning represents a highly efficient and largely automated method of scanning. The second method, Cloud to Cloud registration, opens up new opportunities for the user to position scans quickly and accurately, even under difficult conditions. In exterior locations with good positioning of the scans by means of the integrated GPS receiver of the Laser Scanner Focus3D X Series, Cloud to Cloud is the method of choice for targetless registration.
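
FARO has not published the internals of these registration methods, but targetless cloud-to-cloud alignment is generally built on some variant of the iterative closest point (ICP) algorithm. A minimal sketch of one point-to-point ICP iteration in Python with NumPy/SciPy, offered to illustrate the idea rather than SCENE's actual implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration on (N, 3) point arrays.
    Production registration adds outlier rejection, weighting, and
    multi-resolution search; this is only the core alignment step."""
    # 1. Pair each source point with its nearest neighbour in the target.
    _, idx = cKDTree(target).query(source)
    matched = target[idx]
    # 2. Solve for the rigid transform best aligning the pairs (Kabsch).
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    # 3. Apply and iterate until the update becomes negligible.
    return source @ R.T + t
```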

In addition, the software also offers various new processes that enable the user to flexibly respond to a wide variety of project requirements. For instance, Correspondence Split View matches similar areas in neighbouring scans to resolve any missing positioning information, and Layout Image Overlay allows users to place scan data in a geographical context using image files, CAD drawings, or maps.

Oliver Bürkler, Senior Product Manager for 3D Documentation Software, remarked, “SCENE 5.3 is the ideal tool for processing laser scanning projects. FARO’s cloud-based hosting solution, SCENE WebShare Cloud, allows scan projects to be published and shared worldwide via the Internet. The collective upgrades to FARO’s laser scanning software solution, SCENE 5.3 and WebShare Cloud 1.5, make even complex 3D documentation projects faster, more efficient, and more effective.”

About FARO
FARO is the world’s most trusted source for 3D measurement, imaging and realization technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, production planning, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Worldwide, approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems. The Company’s global headquarters is located in Lake Mary, FL., its European head office in Stuttgart, Germany and its Asia/Pacific head office in Singapore. FARO has branches in Brazil, Mexico, Germany, United Kingdom, France, Spain, Italy, Poland, Netherlands, Turkey, India, China, Singapore, Malaysia, Vietnam, Thailand, South Korea and Japan.

Click here for more information or to download a 30-day evaluation version.

Google's Project Tango 3D Capture Device

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful, diverse range of products in the future.

Learn more about the project here

kinect for windows v2 mocap 3d scanning

Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store; meaning, there’ll probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine, tools whose strengths and weaknesses developers already know, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, this could mean that cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer, Jasper Brekelmans, has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.

LiDAR for Visual Effects - Rebirth

Krakatoa Creates CG Visual Effects from LIDAR Scans for Short Film “Rebirth”

Film director and cinematographer Patryk Kizny – along with his talented team at LookyCreative – put together the 2010 short film “The Chapel” using motion controlled HDR time-lapse to achieve an interesting, hyper-real aesthetic. Enthusiastically received when released online, the three-minute piece pays tribute to a beautifully decaying church in a small Polish village built in the late 1700s. Though widely lauded, “The Chapel” felt incomplete to Kizny, so in fall of 2011, he began production on “Rebirth” to refine and add dimension to his initial story.

LiDAR for Visual Effects - Rebirth

Exploring the same church, “Rebirth” comprises three separate scenes created using different visual techniques. Contemplative, philosophical narration and a custom orchestral soundtrack composed by Kizny’s collaborator, Mateusz Zdziebko, help guide the flow and overall aspirational tone of the film, which runs approximately 12 minutes. The first scene features a point cloud representation of the chapel with various pieces and cross-sections of the building appearing, changing and shifting to the music. Based on LIDAR scans taken of the chapel for this project, Kizny generated the point clouds with Thinkbox Software’s volumetric particle renderer, Krakatoa, in Autodesk 3ds Max.

LiDAR for VFX - Rebirth

“About a year after I shot “The Chapel,” I returned to the location and happened to get involved in heritage preservation efforts,” Kizny explained. “At the time, laser scanning was used for things like archiving, set modeling and support for integrating VFX in post production, but I hadn’t seen any films visualizing point clouds themselves, so that’s what I decided to do.”

EKG Baukultur, an Austrian/German company that specializes in digital heritage documentation and laser scanning, scanned the entire building in about a day from 25 different scanning positions. The collected data was then registered and processed – creating a dataset of about 500 million points. Roughly half of the collected data was used to create the visualizations.

3D Laser Scanning for Visual Effects - Rebirth

Data processing was done in multiple stages using various software packages. Initially, the EKG Baukultur team registered the separate scans together in a common coordinates space using FARO Scene software. Using the .PTS format, the data was then re-imported into Alice Labs Studio Clouds (acquired by Autodesk in 2011) for clean up. Kizny manually removed any tripods with cameras, people, checkerboards and balls that had been used to reference scans. Then, the data was processed in Geomagic Studio to reduce noise, fill holes and uniformly downsample selected areas of the dataset. Later, the data was exported back to the .PTS ASCII format with the help of MeshLab and processed using custom Python scripting so that it could be ingested by the Krakatoa importer.

Lacking a visual effects background, Kizny initially tested a number of tools to find the best way to visualize point cloud data in a cinematic way, with varying and largely disappointing results. Six months of extensive R&D led Kizny to Krakatoa, a tool that was astonishingly fast and a fraction of the price of similar software specifically designed for CAD/CAM applications.
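
That custom scripting step was Kizny's own and is not public; as a rough idea of what such a pass involves, an ASCII .PTS file starts with a point count followed by one "x y z intensity r g b" row per point, which a short Python filter can decimate and re-emit for a downstream importer. The output column order below is an assumption:

```python
def decimate_pts(src_path, dst_path, keep_every=2):
    """Sketch of a .PTS decimation/conversion pass. A production run
    over 500 million points would stream instead of holding all rows
    in memory; this only shows the shape of the job."""
    with open(src_path) as src:
        rows = src.readlines()[1:]        # skip the point-count header
    kept = rows[::keep_every]             # uniform decimation
    with open(dst_path, "w") as dst:
        dst.write("%d\n" % len(kept))
        for row in kept:
            x, y, z, intensity, r, g, b = row.split()
            # Drop intensity, keep position and color; the order the
            # Krakatoa importer expected here is assumed.
            dst.write("%s %s %s %s %s %s\n" % (x, y, z, r, g, b))

decimate_pts("chapel_full.pts", "chapel_half.pts")  # hypothetical paths
```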

“I had a very basic understanding of 3ds Max, and the Krakatoa environment was new to me. Once I began to figure out Krakatoa, it all clicked and the software proved amazing throughout each step of the process,” he said.

Even when mixing the depth of field and motion blur functions in Krakatoa, Kizny was able to keep his render time to roughly five to ten minutes per frame, even while rendering 200 million points at 2K, by using smaller apertures and camera passes from a greater distance.

“Krakatoa is an amazing manipulation toolkit for processing point cloud data, not only for what I’m doing here but also for recoloring, increasing density, projecting textures and relighting point clouds. I have tried virtually all major point cloud processing software, but Krakatoa saved my life on this project,” Kizny noted.

In addition to using Krakatoa to visualize all the CG components of “Rebirth” as well as render point clouds, Kizny also employed the software for advanced color manipulation. With two subsets of data – a master with good color representation and a target that lacked color information – Kizny used a Magma flow modifier and a comprehensive set of nodes to cast and spatially interpolate the color data from the master subset onto the target subset so that they blended seamlessly in the final dataset. Magma modifiers were also used for the color correction of the entire dataset prior to rendering, which allowed Kizny greater flexibility compared to trying to color correct the rendering itself. Using Krakatoa with Magma modifiers also provided Kizny with a comprehensive set of built-in nodes and scripting access.
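
The Magma graph itself is node-based and specific to Krakatoa, but the operation described, casting color from a colored master subset onto a target that lacks it by spatial lookup, can be sketched as a nearest-neighbour transfer. The k-neighbour inverse-distance weighting below is an assumption, not Kizny's exact node graph:

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_color(master_xyz, master_rgb, target_xyz, k=4):
    """Spatially interpolate master colors onto a target point set via
    inverse-distance weighting over the k nearest master points."""
    dist, idx = cKDTree(master_xyz).query(target_xyz, k=k)
    w = 1.0 / np.maximum(dist, 1e-9)        # closer points weigh more
    w /= w.sum(axis=1, keepdims=True)       # normalize per target point
    return (master_rgb[idx] * w[..., None]).sum(axis=1)

# Example: color a bare target cloud from a colored master cloud.
master_xyz = np.random.rand(10000, 3)
master_rgb = np.random.rand(10000, 3)
target_xyz = np.random.rand(5000, 3)
target_rgb = transfer_color(master_xyz, master_rgb, target_xyz)
```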

3D Laser Scanning for Visual Effects - Rebirth

The second scene of “Rebirth” is a time-lapse reminiscent of “The Chapel,” while the final scene shows live action footage of a dancer. Footage for each scene was captured using Canon DSLR cameras, a RED ONE camera and DitoGear motion control equipment. Between the second and third scene, a short transition visualizes the church collapsing, which was created using 3ds Max Particle Flow with help of Thinkbox Ember, a field manipulation toolkit, and Thinkbox Stoke, a particle reflow tool.

“In the transition, I’m trying to collapse a 200 million-point data cloud into smoke, then create the silhouette of a dancer as a light point from the ashes,” shared Kizny. “Even though it’s a short scene, I’m making use of a lot of technology. It’s not only rendering this point cloud data set again; it’s also collapsing it. I’m using the software in an atypical way, and Thinkbox has been incredibly helpful in troubleshooting the workflow so I could establish a solid pipeline.”

Collapsing the church proved to be a challenge for Kizny. Traditionally, when creating digital explosions, VFX artists are blowing up a solid, rigid object. Not only did Kizny need to collapse a point cloud – a daunting task in and of itself – but he also had to do so in the hyper-realistic aesthetic he’d established, and in a way that would be both ethereal and physically believable. Using 3ds Max Particle Flow as a simulation environment, Kizny generated a comprehensive, high-resolution vector field that could be controlled efficiently and precisely with Ember. Ember was also used to animate two angels appearing from the dust and smoke along with the dancer silhouette. The initial dataset of each of the angels was pushed through a specific vector noise field that produced a smoke-like dissolve and then reversed, thanks to retiming features in Krakatoa, Ember and Stoke, which was also used to add density.
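
The production fields were authored in Ember, but the underlying mechanic, advecting every point of the cloud through a velocity field until it reads as smoke, can be sketched with a forward-Euler loop. The toy swirl field below is a stand-in for the actual noise field:

```python
import numpy as np

def advect(points, field, dt=0.04, steps=25):
    """Advect points through a velocity field with forward Euler steps.
    `field` maps (N, 3) positions to (N, 3) velocities; the real setup
    sampled a high-resolution Ember vector field instead."""
    p = points.copy()
    for _ in range(steps):
        p += dt * field(p)
    return p

def toy_field(p):
    """Stand-in velocity field: swirl around Y plus a gentle rise."""
    v = np.zeros_like(p)
    v[:, 0] = -p[:, 2]   # swirl component
    v[:, 2] = p[:, 0]
    v[:, 1] = 0.5        # upward drift
    return v

cloud = np.random.rand(1000, 3)      # stand-in for the chapel points
dissolved = advect(cloud, toy_field)
```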

3D Laser Scanning for Visual Effects - Rebirth

“To create the smoke on the floor, I decided to go all the way with Thinkbox tools,” Kizny said. “All the smoke you see was created using Ember vector fields and simulated with Stoke. It was good and damn fast.”

Another obstacle was figuring out how to animate the dancer in the point clouds. Six cameras recorded a live performer, with markerless motion capture tracking done using the iPi Motion Capture Studio package. The data obtained from the dancer was then ported onto a virtual, rigged model in 3ds Max and used to emit particles for a Particle Flow simulation. Ember vector fields were used for all the smoke-like circulations, and then everything was integrated and rendered using Thinkbox’s Deadline, a render management system, and Krakatoa – almost 900 frames and 3 TB of data caches for the particles alone. Deadline was also used to distribute high volume renders and allocate resources across Kizny’s render farm.

Though an innovative display of digital artistry, “Rebirth” is also a preservation tool. Interest generated from “The Chapel” and continued with “Rebirth” has enticed a Polish foundation to begin restoration efforts on the run-down building. Additionally, the LIDAR scans of the chapel will be donated to CyArk, a non-profit dedicated to the digital preservation of cultural heritage sites, and made widely available online.

The film is currently securing funding to complete postproduction. Support the campaign and learn more about the project at the IndieGoGo campaign homepage at http://bit.ly/support-rebirth. For updates on the film’s progress, visit http://rebirth-film.com/.

About Thinkbox Software
Thinkbox Software provides creative solutions for visual artists in entertainment, engineering and design. Developer of high-volume particle renderer Krakatoa and render farm management software Deadline, the team of Thinkbox Software solves difficult production problems with intuitive, well-designed solutions and remarkable support. We create tools that help artists manage their jobs and empower them to create worlds and imagine new realities. Thinkbox was founded in 2010 by Chris Bond, founder of Frantic Films. http://www.thinkboxsoftware.com

Autodesk ReCap: Making Reality Capture Easy and Affordable

Autodesk Aims to Streamline Use of Point Cloud Data

A key addition to the complete 2014 portfolio of Suites is the Autodesk® ReCap™ product family: powerful and easy-to-use software and services, on the desktop and in the cloud, for creating intelligent 3D data from captured photos and laser scans in a streamlined workflow.  Autodesk ReCap is the first industry solution to bring together laser scanning and photogrammetry into one streamlined process. In addition, no other solution on the market provides the visualization quality and scalability to handle extremely large data sets.

The Autodesk ReCap product line comprises two main offerings – Autodesk ReCap Studio and Autodesk ReCap Photo. Autodesk ReCap Studio makes it easy to clean, organize and visualize massive datasets captured from reality. Autodesk ReCap Photo helps users create high-resolution textured 3D models from photos using the power of cloud computing. Rather than beginning with a blank screen, Autodesk ReCap now enables any designer, architect or engineer to add, modify, validate and document their design process in context from existing environments.

For example, a civil engineer can bypass an existing bridge or expand the road underneath digitally and test feasibility. At construction phase, builders can run clash detection to understand if utilities will be in the way. Urban planners can get answers to specific design questions about large areas, such as how much building roof surface is covered by shadow or vegetation.

ReCap Studio is a data preparation environment that runs on the desktop.  Users can import captured data directly into Autodesk design solutions, such as AutoCAD®, Autodesk® Revit®, Autodesk Inventor®, etc., to conduct QA and verification of data. The data can range from non-intelligent, black-and-white sparse point clouds to intelligent, visually appealing content. ReCap Studio will ship in Autodesk product and suite installers or be available for free on the Autodesk Exchange Apps store.

ReCap Studio 2

ReCap Photo is an Autodesk 360 service designed to create high resolution 3D data from photos to enable users to visualize and share 3D data. By leveraging the power of the cloud to process and store massive data files, users can upload images on Autodesk 360 and instantly create a 3D mesh model. ReCap Photo is available with Standard Suites entitlement and higher.

ReCap Photo 2

Key features of Autodesk ReCap include:

  • Visualize and edit massive datasets:  On the desktop, ReCap users can view and edit billions of points to prepare them for use in Autodesk portfolio products to enable realistic in context design work
  • Professional-Grade Photo to 3D Features: ReCap unlocks the power of ubiquitous cameras to capture high-quality 3D models, bringing reality capture within reach of anyone with a camera.  ReCap supports objects of any size and range, full resolution for high-density meshes, survey points and multiple file exports.
  • Photo and Laser: ReCap incorporates the best of both photo and laser data capture so that customers can use photos to fill in holes or augment laser scan data. Users can both increase photos scene accuracy with laser points and add photo-realistic detail to laser scans. Create point clouds from photos, align scans and photos and convert professional grade photo to 3D models.

Autodesk continues to invest in developing sophisticated, easy-to-use reality capture technologies. The company has made several key acquisitions including Alice Labs and Allpoint Systems as well as applied its own research and development resources to accelerate the mainstream adoption of these technologies. As customers are looking for ways to easily and accurately capture the world around them, Autodesk ReCap streamlines Reality Capture workflows, making working with Reality Capture data easy, quick and cost effective.

Autodesk is the only company that has combined laser scanning data and photogrammetry into one product family to address and streamline the entire workflow.  Whereas traditional point clouds appear as dots, Autodesk technology can now visualize truly massive point clouds as realistic surfaces. Unique to Autodesk is that users can interact with these huge data sets, doing CAD-like operations such as selection, tagging, moving, measuring, clash detection, and object extraction, all with native points. Laser scanning and photogrammetry are historically very expensive and data intensive. Autodesk’s goal is to democratize the process of reality capture so that anyone can capture the world around them to create high quality 3D models.