
FARO® Launches Innovative, User-Friendly Handheld 3D Scanner to Meet Growing Demand for Portable Scanning

LAKE MARY, Fla., Jan. 7, 2015 /PRNewswire/ — FARO Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announces the release of the new FARO Freestyle3D Handheld Laser Scanner, an easy, intuitive device for use in Architecture, Engineering and Construction (AEC), Law Enforcement, and other industries.

The FARO Freestyle3D is equipped with a Microsoft Surface™ tablet and offers unprecedented real-time visualization by allowing the user to view point cloud data as it is captured. The Freestyle3D scans to a distance of up to three (3) meters and captures up to 88K points per second with accuracy better than 1.5mm. The patent-pending, self-compensating optical system also allows users to start scanning immediately, with no warm-up time required.

“The Freestyle3D is the latest addition to the FARO 3D laser scanning portfolio and represents another step on our journey to democratize 3D scanning,” stated Jay Freeland, FARO’s President and CEO.  “Following the successful adoption of our Focus scanners for long-range scanning, we’ve developed a scanner that provides customers with the same intuitive feel and ease-of-use in a handheld device.”
The portability of the Freestyle3D enables users to maneuver and scan in tight and hard-to-reach areas such as car interiors, under tables, and behind objects, making it ideal for crime scene data collection or architectural preservation and restoration activities. Memory-scan technology enables Freestyle3D users to pause scanning at any time and then resume data collection where they left off, without the use of artificial targets.

Mr. Freeland added, “FARO’s customers continue to stress the importance of work-flow simplicity, portability, and affordability as key drivers to their continued use and adoption of 3D laser scanning.  We have responded by developing an easy-to-use, industrial grade, handheld laser scanning device that weighs less than 2 lbs.”

The Freestyle3D can be employed as a standalone device to scan areas of interest, or used in concert with FARO’s Focus X 130 / X 330 scanners.  Point cloud data from all of these devices can be seamlessly integrated and shared with all of FARO’s software visualization tools including FARO SCENE, WebShare Cloud, and FARO CAD Zone packages.

For more information about FARO’s 3D scanning solutions visit: www.faro.com

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 that are subject to risks and uncertainties, such as statements about demand for and customer acceptance of FARO’s products, and FARO’s product development and product launches. Statements that are not historical facts or that describe the Company’s plans, objectives, projections, expectations, assumptions, strategies, or goals are forward-looking statements. In addition, words such as “is,” “will,” and similar expressions or discussions of FARO’s plans or other intentions identify forward-looking statements. Forward-looking statements are not guarantees of future performance and are subject to various known and unknown risks, uncertainties, and other factors that may cause actual results, performances, or achievements to differ materially from future results, performances, or achievements expressed or implied by such forward-looking statements. Consequently, undue reliance should not be placed on these forward-looking statements.

Factors that could cause actual results to differ materially from what is expressed or forecasted in such forward-looking statements include, but are not limited to:

  • development by others of new or improved products, processes or technologies that make the Company’s products less competitive or obsolete;
  • the Company’s inability to maintain its technological advantage by developing new products and enhancing its existing products;
  • declines or other adverse changes, or lack of improvement, in industries that the Company serves or the domestic and international economies in the regions of the world where the Company operates and other general economic, business, and financial conditions; and
  • other risks detailed in Part I, Item 1A. Risk Factors in the Company’s Annual Report on Form 10-K for the year ended December 31, 2013 and Part II, Item 1A. Risk Factors in the Company’s Quarterly Report on Form 10-Q for the quarter ended June 28, 2014.

Forward-looking statements in this release represent the Company’s judgment as of the date of this release. The Company undertakes no obligation to update publicly any forward-looking statements, whether as a result of new information, future events, or otherwise, unless otherwise required by law.

About FARO

FARO is the world’s most trusted source for 3D measurement technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, rapid prototyping, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems, worldwide. The Company’s global headquarters is located in Lake Mary, FL; its European regional headquarters in Stuttgart, Germany; and its Asia/Pacific regional headquarters in Singapore. FARO has other offices in the United States, Canada, Mexico, Brazil, Germany, the United Kingdom, France, Spain, Italy, Poland, Turkey, the Netherlands, Switzerland, Portugal, India, China, Malaysia, Vietnam, Thailand, South Korea, and Japan.

More information is available at http://www.faro.com

SOURCE FARO Technologies, Inc.

The Mattepainting Toolkit

Photogrammetry and camera projection mapping in Maya made easy.

What’s included?

The Mattepainting Toolkit (gs_mptk) is a plugin suite for Autodesk Maya that helps artists build photorealistic 3D environments with minimal rendering overhead. It offers an extensive toolset for working with digital paintings as well as datasets sourced from photographs.

Version 3.0 is now released!

For Maya 2014 and 2015, version 3.0 of the toolkit adds support for Viewport 2.0 and a number of new features. Version 2.0 is still available for Maya 2012-2014. A lite version of the toolkit, the Camera Projection Toolkit (gs_cptk), is available for purchase from the Autodesk Exchange. To see a complete feature comparison between these versions, click here.

How does it work?

The Mattepainting Toolkit uses an OpenGL implementation for shader feedback within Maya’s viewport. This allows users to work directly with paintings, photos, and image sequences that are mapped onto geometry in an immediate and intuitive way.

Overview

The User Interface

Textures are organized in a UI that manages the shaders used for viewport display and rendering.


  • Clicking on an image thumbnail will load the texture in your preferred image editor.
  • Texture layer order is determined by a drag-and-drop list.
  • Geometry shading assignments can be quickly added and removed.

Point Cloud Data

Import Bundler and PLY point cloud data from Agisoft PhotoScan, Photosynth, or other Structure from Motion (SfM) software.


  • Point clouds can be used as a modeling guide to quickly reconstruct a physical space.
  • Cameras are automatically positioned in the scene for projection mapping.
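SfM packages such as PhotoScan and Bundler commonly export clouds as ASCII PLY files, which are simple enough to read by hand. As a rough illustration of what a point cloud importer consumes (the toolkit's actual importer is not documented here), a minimal reader for a vertices-only ASCII PLY might look like this:

```python
# Hypothetical sketch of an ASCII PLY reader for SfM point clouds.
# Assumes a vertices-only file with x, y, z followed by red, green, blue,
# as PhotoScan exports by default.
def read_ply_ascii(path):
    with open(path) as f:
        assert f.readline().strip() == "ply"
        n_verts = 0
        props = []
        for line in f:                      # parse the header
            line = line.strip()
            if line.startswith("element vertex"):
                n_verts = int(line.split()[-1])
            elif line.startswith("property"):
                props.append(line.split()[-1])
            elif line == "end_header":
                break
        points = []
        for _ in range(n_verts):            # parse the body
            rec = dict(zip(props, f.readline().split()))
            xyz = tuple(float(rec[k]) for k in ("x", "y", "z"))
            rgb = tuple(int(rec[k]) for k in ("red", "green", "blue"))
            points.append((xyz, rgb))
        return points
```

Each entry pairs a 3D position with a color, which is all that is needed to draw the cloud as a modeling guide in the viewport.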

The Viewport

A custom OpenGL shader allows textures to be displayed in high quality and manipulated interactively within the viewport.


  • Up to 16 texture layers can be displayed per shader.
  • HDR equirectangular images can be projected spherically.
  • Texture mattes can be painted directly onto geometry within the viewport.
  • Image sequences are supported so that film plates can be mapped to geometry.

Rendering

The layered textures can be rendered with any renderer available to Maya. Custom Mental Ray and V-Ray shaders included with the toolkit extend the texture blending capabilities for those renderers.


  • The texture layers can be baked down to object UVs.
  • A coverage map can be rendered to isolate which areas of the geometry are most visible to the camera.
  • For Mental Ray and V-Ray, textures can be blended based on object occlusion, distance from the projection camera, and object facing ratio.
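The blend criteria in that last bullet can be pictured with a small weight function. The following is an illustrative sketch only (not the toolkit's shader code): each projected texture is weighted by how directly the surface faces the projection camera and by how close it is to that camera.

```python
import math

def blend_weight(normal, point, cam_pos, max_dist, falloff=2.0):
    # Illustrative only: combines a facing-ratio term with a linear
    # distance falloff, two of the blend criteria named above.
    view = [c - p for c, p in zip(cam_pos, point)]
    dist = math.sqrt(sum(v * v for v in view))
    view = [v / dist for v in view]
    facing = max(0.0, sum(n * v for n, v in zip(normal, view)))  # cos(angle)
    dist_term = max(0.0, 1.0 - dist / max_dist)  # 1 at camera, 0 at max_dist
    return (facing ** falloff) * dist_term
```

A patch facing the camera head-on at half the cutoff distance gets weight 0.5; a patch seen edge-on gets 0, so a texture projected from a better-placed camera wins there.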

Smithsonian Displays 3D Portrait of President Obama

The first presidential portraits created from 3-D scan data are now on display in the Smithsonian Castle. The portraits of President Barack Obama were created based on data collected by a Smithsonian-led team of 3-D digital imaging specialists and include a digital and 3-D printed bust and life mask. A new video released today by the White House details the behind-the-scenes process of scanning, creating and printing the historic portraits. The portraits will be on view in the Commons gallery of the Castle starting today, Dec. 2, through Dec. 31. The portraits were previously displayed at the White House Maker Faire on June 18.

3D Print of President Obama

The Smithsonian-led team scanned the President earlier this year using two distinct 3-D documentation processes. Experts from the University of Southern California’s Institute for Creative Technologies used their Light Stage face scanner to document the President’s face from ear to ear in high resolution. Next, a Smithsonian team used handheld 3-D scanners and traditional single-lens reflex cameras to record peripheral 3-D data to create an accurate bust.

The data captured was post-processed by 3-D graphics experts at the software company Autodesk to create final high-resolution models. The life mask and bust were then printed using 3D Systems’ Selective Laser Sintering printers.

The data and the printed models are part of the collection of the Smithsonian’s National Portrait Gallery. The Portrait Gallery’s collection includes multiple images of every U.S. president, and these portraits will join the museum’s current and future works representing Obama.

The life-mask scan of Obama joins only three other presidential life masks in the Portrait Gallery’s collection: one of George Washington created by Jean-Antoine Houdon and two of Abraham Lincoln created by Leonard Wells Volk (1860) and Clark Mills (1865). The Washington and Lincoln life masks were created using traditional plaster-casting methods. The Lincoln life masks are currently available to explore and download on the Smithsonian’s X 3D website.

The video below shows an Artec Eva being used to capture a 3D portrait of President Barack Obama along with Mobile Light Stage – in essence, eight high-end DSLRs and 50 light sources mounted in a futuristic-looking quarter-circle of aluminum scaffolding. During a facial scan, the cameras capture 10 photographs each under different lighting conditions for a total of 80 photographs. All of this happened in a single second. Afterwards, sophisticated algorithms processed this data into high-resolution 3D models. The Light Stage captured the President’s facial features from ear to ear, similar to the 1860 Lincoln life mask.

About Smithsonian X 3D

The Smithsonian publicly launched its 3-D scanning and imaging program Smithsonian X 3D in 2013 to make museum collections and scientific specimens more widely available for use and study. The Smithsonian X 3D Collection features objects from the Smithsonian that highlight different applications of 3-D capture and printing, as well as digital delivery methods for 3-D data in research, education and conservation. Objects include the Wright Flyer, a model of the remnants of supernova Cassiopeia A, a fossil whale and a sixth-century Buddha statue. The public can explore all these objects online through a free, custom-built browser-based viewer and download the data for their own use in modeling programs or to print using a 3-D printer.


Endeavour: The Last Space Shuttle as she’s never been seen before.

[source by Mark Gibbs]

Endeavour, NASA’s fifth and final space shuttle, is now on display at the California Science Center in Los Angeles and, if you’re at all a fan of space stuff, it’s one of the most iconic and remarkable flying machines ever built.

David Knight, a trustee and board member of the foundation, recently sent me a link to an amazing video of the shuttle, as well as some excellent still shots.

David commented that these images were:

 “…captured by Chuck Null on the overhead crane while we were doing full-motion VR and HD/2D filming … the Payload Bay has been closed for [a] few years … one door will be opened once she’s mounted upright in simulated launch position in the new Air & Space Center.

Note that all of this is part of the Endeavour VR Project by which we are utilizing leading-edge imaging technology to film, photograph and LIDAR-scan the entire Orbiter, resulting in the most comprehensive captures of a Space Shuttle interior ever assembled – the goal is to render ultra-res VR experiences by which individuals will be able to don eyewear such as the Oculus Rift (the COO of Oculus himself came down during the capture sessions), and walk or ‘fly’ through the Orbiter, able to ‘look’ anywhere, even touch surfaces and turn switches, via eventual haptic feedback gloves etc.

The project is being Executive Produced by me, with the Producer being Ted Schilowitz (inventor of the RED camera and more), Director is Ben Grossman, who led the special effects for the most recent Star Trek movie. Truly Exciting!”

Here are the pictures …

Endeavour - the last Space Shuttle
Photos: Charles Null / David Knight on behalf of the California Science Center

 


zLense Announces World’s First Real-Time 3D Depth Mapping Technology for Broadcast Cameras

New virtual production platform dramatically lowers the cost of visual effects (VFX) for live and recorded TV, enabling visual environments previously unattainable in a live studio without any special studio set-up…

27 October 2014, London, UK – zLense, a specialist provider of virtual production platforms to the film, production, broadcast and gaming industries, today announced the launch of the world’s first depth-mapping camera solution that captures 3D data and scenery in real-time and adds a 3D layer, optimized for broadcasters and film productions, to the footage. The groundbreaking, industry-first technology processes space information, making new, real three-dimensional compositing methods possible and enabling production teams to create stunning 3D effects and utilise state-of-the-art CGI in live TV or pre-recorded transmissions – with no special studio set-up.

Utilising the solution, directors can produce unique simulated and augmented reality worlds, generating and combining dynamic virtual reality (VR) and augmented reality (AR) effects in live studio or outside broadcast transmissions. The unique depth-sensing technology allows for full 360-degree freedom of camera movement and gives presenters and anchors greater liberty of performance. Directors can combine dolly, jib arm and handheld shots as presenters move within, interact with and control the virtual environment – in the near future using only natural gestures and motions.

“We’re poised to shake up the Virtual Studio world by putting affordable high-quality real-time CGI into the hands of broadcasters,” said Bruno Gyorgy, President of zLense. “This unique world-leading technology changes the face of TV broadcasting as we know it, giving producers and programme directors access to CGI tools and techniques that transform the audience viewing experience.”

Doing away with the need for expensive match-moving work, the zLense Virtual Production platform dramatically speeds up the 3D compositing process, making it possible for directors to mix CGI and live action shots in real-time pre-visualization and take the production values of their studio and OB live transmissions to a new level. The solution is quick to install, requires just a single operator, and is operable in almost any studio lighting.

“With minimal expense and no special studio modifications, local and regional TV channels can use this technology to enhance their news and weather graphics programmes – unleashing live augmented reality, interactive simulations and visualisations that make the delivery of infographics exciting, enticing and totally immersive for viewers,” he continued.

The zLense Virtual Production platform combines depth-sensing technology and image-processing in a standalone camera rig that captures the 3D scene and camera movement. The ‘matte box’ sensor unit, which can be mounted on almost any camera rig, removes the need for external tracking devices or markers, while the platform’s built-in rendering engine cuts the cost and complexity of using visual effects in live and pre-recorded TV productions. The zLense Virtual Production platform can be used alongside other, pre-existing, rendering engines, VR systems and tracking technologies.

The VFX real-time capabilities enabled by the zLense Virtual Production platform include:

  • Volumetric effects
  • Additional motion and depth blur
  • Shadows and reflections to create convincing state-of-the-art visual appearances
  • Dynamic relighting
  • Realistic 3D distortions
  • Creation of a fully interactive virtual environment with interactive physical particle simulation
  • Wide shot and in-depth compositions with full body figures
  • Real-time Z-map and 3D models of the picture

For more information on the zLense features and functionalities, please visit: zlense.com/features

About Zinemath
Zinemath, a leader in re-inventing how professional moving images are processed, is the producer of zLense, a revolutionary real-time depth-sensing and modelling platform that adds three-dimensional information to the filming process. zLense is the first depth-mapping camera accessory optimized for broadcasters and cinema previsualization. With an R&D center in Budapest, Zinemath, part of the Luxembourg-based Docler Group, is bringing this new vision to the film, television and mobile technology sectors.

For more information please visit: www.zlense.com


Make a 3D Printed Kit with Meshmixer 2.7

[source]

Meshmixer 2.7 was released today, full of new tools for 3D printing. Here I use the new version of the app to create a 3D printed kit of parts that can be printed in one job and assembled together with pin connectors.

To do this I used several of the new features to make it a fast and painless process. I dug up a 123D Catch capture I took of a bronze sculpture of John Muir. I found it in my dentist's office; it turns out my dentist sculpted it. I thought I'd make my own take on it by slicing it up and connecting it back together so it can be interactive, swiveling the pieces around the pin connectors.

I made use of the new pin-connector solid parts that are included in the release (in the Misc. parts bin). I also used the powerful Layout/Packing tool to lay out parts on the print bed as a kit to print in one job. The addition of the Orthographic view is also incredibly helpful when creating the kit and laying it out within the print volume of my Replicator 2X. An Instructable with a how-to for a 3D printed kit such as this is in progress.

 

This new release has some other nice updates. Check them out below:

– New Layout/Packing Tool under Analysis for 3D print bed layout

– New Deviation Tool for visualizing max distance between two objects (i.e. original & reduced version)

– New Clearance Tool for visualizing min distance between two objects (i.e. to verify tolerances)

– (Both tools are under the Analysis menu and require selecting two objects)

– Reduce Tool now supports reducing to a target triangle count or an (approximate) maximum deviation

– Support Generation improvements

– Better DLP/SLA preset

– Can now draw horizontal bars in support generator

– Ctrl-click now deletes all support segments above click point

– Shift-ctrl-click to only delete clicked segment

– Solid Part dropping now has built-in option to boolean add/subtract

– Can set operation-type preference during Convert To Solid Part

– Can set option to preserve physical dimensions during Convert To Solid Part

– New Snapping options in Measure tool

– Can now turn on Print Bed rendering in Modeling view (under View menu)

– Must enter Print View to change/configure printer

– Improved support for low-end graphics cards

For your kit of parts, try out the new pin connectors included in the Misc. parts library. One is a negative (boolean-subtract it when dropping the part). The other you can drop on the print bed for printing by itself; it fits into the negative hole. You can also author your own parts, and they will drop at a fixed scale (so they fit!).

Let us know what kind of kits you create; maybe we can add your connectors in a future release. (There's a free 3D print and t-shirt involved.) Let us know at meshmixer@autodesk.com.

Have fun!!


Leica Geosystems HDS Introduces Patent-Pending Innovations for Laser Scanning Project Efficiency

With Leica Cyclone 9.0, the industry-leading point cloud solution for processing laser scan data, Leica Geosystems HDS introduces major, patent-pending innovations for greater project efficiency. The innovations benefit both field and office via significantly faster, easier scan registration, plus quicker deliverable creation thanks to better 2D and 3D drafting tools and steel modelling. Cyclone 9.0 allows users to scale easily to larger, more complex projects while consistently ensuring high quality deliverables.

Greatest advancement in office scan registration since cloud-to-cloud registration
When Leica Geosystems pioneered cloud-to-cloud registration, it enabled users – for the first time – to accurately execute laser scanning projects without having to physically place special targets around the scene, scan them, and model them in the office. With cloud-to-cloud registration software, users take advantage of overlaps among scans to register them together.

“The cloud-to-cloud registration approach has delivered significant logistical benefits onsite and time savings for many projects. We’ve constantly improved it, but the new Automatic Scan Alignment and Visual Registration capabilities in Cyclone 9.0 represent the biggest advancement in cloud-to-cloud registration since we introduced it,” explained Dr. Chris Thewalt, VP Laser Scanning Software. “Cyclone 9.0 lets users benefit from targetless scanning more often by performing the critical scan registration step far more efficiently in the office for many projects. As users increase the size and scope of their scanning projects, Cyclone 9.0 pays even bigger dividends. Any user who registers laser scan data will find great value in these capabilities.”

With the push of a button, Cyclone 9.0 automatically processes scans, and digital images if available, to create groups of overlapping scans that are initially aligned to each other. Once scan alignment is completed, algorithmic registration is applied for final registration. This new workflow option can be used in conjunction with target registration methods as well. These combined capabilities not only make the most challenging registration scenarios feasible, but also exponentially faster. Even novice users will appreciate their ease-of-use and ready scalability beyond small projects.

Power user Marta Wren, technical specialist at Plowman Craven Associates (PCA, a leading UK chartered surveying firm), found that Cyclone 9.0’s Visual Registration tools alone sped up registration processing of scans by up to four times (4X) compared with previous methods. PCA uses laser scanning for civil infrastructure, commercial property, forensics, entertainment, and Building Information Modelling (BIM) applications.

New intuitive 2D and 3D drafting from laser scans
For civil applications, new roadway alignment drafting tools let users import LandXML-based roadway alignments or use simple polylines imported or created in Cyclone. These tools allow users to easily create cross section templates using feature codes, as well as copy them to the next station and visually adjust them to fit roadway conditions at the new location. A new vertical exaggeration tool in Cyclone 9.0 allows users to clearly see subtle changes in elevation; linework created between cross sections along the roadway can be used as breaklines for surface meshing or for 2D maps and drawings in other applications.

For 2D drafting of forensic scenes, building and BIM workflows, a new Quick Slice tool streamlines the process of creating a 2D sketch plane for drafting items, such as building footprints and sections, into just one step. A user only needs to pick one or two points on the face of a building to get started. This tool can also be used to quickly analyse the quality of registrations by visually checking where point clouds overlap.

Also included in Cyclone 9.0 are powerful, automatic point extraction features first introduced in Cyclone II TOPO and Leica CloudWorx. These include efficient SmartPicks for automatically finding bottom, top, and tie point locations and Points-on-a-Grid for automatically placing up to a thousand scan survey points on a grid for ground surfaces or building faces.

Simplified steel fitting of laser scan data
For plant, civil, building and BIM applications, Cyclone 9.0 also introduces a patent-pending innovation for modelling steel from point cloud data more quickly and easily. Unlike time-consuming methods that require either processing an entire available cloud to fit a steel shape or isolating a cloud section before fitting, this new tool lets users quickly and accurately model specific steel elements directly within congested point clouds. Users only need to make two picks along a steel member to model it. Supported shapes include wide flange, channel, angle, tee, and rectangular tube.
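Leica has not published how the two-pick fitting works, but the general idea can be sketched: use the two picks to isolate cloud points near the member, then take the dominant direction of those points as the member's axis. The sketch below is purely hypothetical (the patent-pending method certainly does far more, e.g. catalog shape matching and robust clutter rejection); it refines the axis with power iteration on the point covariance.

```python
import math

def fit_member_axis(cloud, pick_a, pick_b, radius):
    """Hypothetical two-pick axis fit; NOT Leica's patented algorithm."""
    # 1. Rough axis from the two user picks
    ax = [b - a for a, b in zip(pick_a, pick_b)]
    n = math.sqrt(sum(c * c for c in ax))
    ax = [c / n for c in ax]

    # 2. Keep only cloud points close to the picked line
    def dist_to_line(p):
        d = [pi - ai for pi, ai in zip(p, pick_a)]
        t = sum(di * xi for di, xi in zip(d, ax))
        foot = [ai + t * xi for ai, xi in zip(pick_a, ax)]
        return math.sqrt(sum((pi - fi) ** 2 for pi, fi in zip(p, foot)))

    near = [p for p in cloud if dist_to_line(p) <= radius]

    # 3. Refine: dominant eigenvector of the covariance via power iteration
    m = len(near)
    mean = [sum(p[i] for p in near) / m for i in range(3)]
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in near) / m
            for j in range(3)] for i in range(3)]
    v = ax[:]
    for _ in range(50):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        s = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / s for c in w]
    return mean, v
```

With the axis and a centroid in hand, a cross-section profile (wide flange, channel, etc.) could then be fit in the plane perpendicular to the axis.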

Faster path to deliverables
Leica Cyclone 9.0 also provides users with valuable, new capabilities for faster creation of deliverables for civil, architectural, BIM, plant, and forensic scene documentation from laser scans and High-Definition Surveying™ (HDS™).

Availability
Leica Cyclone 9.0 is available today. Further information about the Leica Cyclone family of products can be found at http://hds.leica-geosystems.com, and users may download new product versions online from this website or purchase or rent licenses from SCANable, your trusted Leica Geosystems representative. Contact us today for pricing on software and training.

Capturing Real-World Environments for Virtual Cinematography

[source] written by Matt Workman

Virtual Reality Cinematography

As virtual reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task that is well suited to a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting, using 360-degree HDR light probes captured with DSLRs and nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod, and it records the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color, to be used later to construct the space digitally.

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32-bit float HDR) 360-degree probes of the location to record the lighting. This process essentially captures the lighting in the space at a VERY high dynamic range, which is later reprojected onto the geometry constructed from the LIDAR data.
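Reprojecting a latlong probe onto geometry boils down to converting each surface point's direction (relative to where the probe was shot) into equirectangular UVs. A minimal sketch of that lookup, assuming a common Y-up latlong convention (actual axis conventions vary between Mari, renderers, and engines):

```python
import math

def latlong_uv(direction):
    # Unit direction vector -> equirectangular (latlong) UV in [0, 1].
    # Assumes Y-up, -Z forward; conventions differ per package.
    x, y, z = direction
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)          # longitude
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi  # latitude (0 = up)
    return u, v
```

Sampling the HDR probe image at (u * width, v * height) then gives the captured radiance arriving from that direction, which is what gets baked onto the LIDAR-derived geometry.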

Real-time 3D asset being lit by an HDR environment in real time (baked)

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space, using the techniques outlined above, as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, as he/she would do on a real set. And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera, and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment, captured in 32-bit float with 8K textures, being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were virtual texture engines that allow objects to have MANY 8K textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader, supported by Amplify Creations’ beta Unity plug-in.
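UDIM itself is just a numbering convention for laying many texture tiles across UV space: tile 1001 covers UVs [0,1) x [0,1), the number increases by 1 per unit step in U (ten tiles per row) and by 10 per unit step in V. The mapping is simple enough to show directly:

```python
import math

def udim_tile(u, v):
    # Standard UDIM convention (as used by Mari):
    # 1001 + U column + 10 * V row; valid for 0 <= u < 10 and v >= 0.
    return 1001 + int(math.floor(u)) + 10 * int(math.floor(v))
```

So a vertex at UV (1.5, 0.5) looks up its 8K texture in tile 1002, and one at (0.2, 1.7) in tile 1011; a virtual texture engine only needs to stream the tiles actually visible in the current view.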

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, which allows fast 8K texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse at the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free-roam view where the action happens around the viewer, who is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. And a traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, and environments with their textures and real-world lighting, and viewing them all in real time in virtual reality.

The industry is evolving at an incredibly rapid pace, and so must the creatives working in it, especially the person responsible for the camera and the lighting: the director of photography.

face 3d projection mapping

OMOTE Real-time Face Tracking 3D Projection Mapping

Forget the faces of historic monuments, the new frontier of 3D projection mapping is the faces of humans.

The project was created by Nobumichi Asai and friends. Technical details behind the process are scant at the moment, but from what can be found in this Tumblr post, it’s clear that step one is a 3D scan of the model’s face.

Here is a rough translation of the text from that post:

Let me continue by explaining how this face mapping was made.
The title OMOTE (面, meaning “face” or “surface”) comes from Noh theatre, and the Noh mask also informed the approach: the idea of covering the face by creating a “surface.” It was important that the output pursue accuracy and represent a very delicate make-up art. The first step was a 3D laser scan of the model’s face.

I suspect that a structured-light scanner, rather than a 3D laser scanner, was used to capture the geometry of the model’s face. Nonetheless, this is a very cool application of 3D projection mapping.
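Structured-light scanners typically project a sequence of stripe patterns (often Gray codes) onto the subject and recover, for every camera pixel, which projector column lit it; depth then follows by triangulation. A minimal sketch of the Gray-code decoding step, as an illustration of the general technique rather than Asai’s actual pipeline:

```python
import numpy as np

def decode_gray(bit_images):
    """Decode binarized Gray-code stripe images into per-pixel
    projector column indices.

    bit_images: list of H x W boolean arrays, most significant
    bit first (one array per projected stripe pattern).
    """
    # Pack the per-pattern bits into one Gray-code integer per pixel.
    gray = np.zeros(bit_images[0].shape, dtype=np.int64)
    for bits in bit_images:
        gray = (gray << 1) | bits.astype(np.int64)
    # Gray -> binary: XOR in successively shifted copies.
    binary = gray.copy()
    shift = gray >> 1
    while shift.any():
        binary ^= shift
        shift >>= 1
    return binary

# 4 projector columns, Gray codes 00, 01, 11, 10 (MSB image first):
msb = np.array([[False, False, True, True]])
lsb = np.array([[False, True, True, False]])
print(decode_gray([msb, lsb]))  # [[0 1 2 3]]
```

Gray codes are preferred over plain binary stripes because adjacent columns differ by a single bit, so a thresholding error at a stripe boundary shifts the decoded column by at most one.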

3D face scanning for projection mapping

OMOTE / REAL-TIME FACE TRACKING & PROJECTION MAPPING. from something wonderful on Vimeo.

Eyesmap 3D Scanning Tablet

3D Sensing Tablet Aims To Replace Multiple Surveyor Tools


Source: Tech Crunch

As we reported earlier this year, Google is building a mobile device with 3D sensing capabilities — under the Project Tango moniker. But it’s not the only company looking to combine 3D sensing with mobility.

Spanish startup E-Capture R&D is building a tablet with 3D sensing capabilities targeted at the enterprise space (for example, as a portable tool for surveyors, civil engineers, architects and the like). It is due to go on sale at the beginning of 2015.

The tablet, called EyesMap, will have two rear 13-megapixel cameras, along with a depth sensor and GPS, enabling it to measure coordinates, surfaces and volumes of objects at distances of up to 70 to 80 meters in real time.

Eyesmap 3D Scanning Tablet


So, for instance, it could be used to capture measurements of, or create a 3D model of, a bridge or a building from a distance. It can also model objects as small as insects, so it could be used by civil engineers to 3D scan individual components, for instance.

Its makers claim it can build high-resolution models with HD realistic textures.

EyesMap uses photogrammetry to ensure accurate measurements and to build outdoor 3D models, but also has an RGBD sensor for indoor scanning.

Eyesmap 3D Scanning Tablet

The tablet will apparently be able to scan an “advanced photogrammetric picture” with up to 4 million dots in around 2 minutes. It will also be able to capture 3D objects in motion. It’s using a blend of computer vision techniques, photogrammetry, visual odometry, “precision sensor fine tuning” and other image-measuring techniques, say its makers.
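E-Capture hasn’t published its algorithms, but the basic geometry behind measuring real-world sizes from a depth image is the pinhole camera model: an object spanning x pixels at depth Z is roughly x·Z/f across, where f is the focal length expressed in pixels. A hypothetical sketch of that relationship (the function name and numbers are illustrative, not EyesMap’s API):

```python
def metric_width(pixel_span: float, depth_m: float, focal_px: float) -> float:
    """Estimate the real-world width in meters of an object spanning
    `pixel_span` pixels at `depth_m` meters, via the pinhole model
    X = x * Z / f, with `focal_px` the focal length in pixels.
    """
    return pixel_span * depth_m / focal_px

# An object 500 px wide, seen 4 m away by a camera with a
# 1000 px focal length, is about 2 m wide:
print(metric_width(500, 4.0, 1000.0))  # 2.0
```

The same proportionality is why a depth sensor helps so much: photogrammetry alone recovers shape only up to scale, while a per-pixel Z reading pins the measurement to metric units.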

E-Capture was founded back in April 2012 by a group of experienced surveyors and Pedro Ortiz-Coder, a researcher in the laser scanning and photogrammetry field. The business has been founder-funded thus far, but has also received a public grant of €800,000 to help with development.

In terms of where EyesMap fits into the existing enterprise device market, Ortiz-Coder says it’s competing with multiple standalone instruments in the survey field — such as 3D scanners, telemeters, photogrammetry software and so on — but is bundling multiple functions into a single portable device.

“To [survey small objects], a short range laser scanner is required but, a short-range LS cannot capture big or far away objects. That’s why we thought to create a definitive instrument, which permits the user to scan small objects, indoors, buildings, big objects and do professional works with a portable device,” he tells TechCrunch.

“Moreover, there wasn’t in the market any instrument which can measure objects in motion accurately more than 3-4 meters. EyesMap can measure people, animals, objects in motion in real time with a high range distance.”

The tablet will run Windows and, on the hardware front, will have Intel’s 4th generation i7 processor and 16 GB of RAM. Pricing for the EyesMap slate has not yet been announced.


Another 3D mobility project we previously covered, called LazeeEye, aimed to bring 3D sensing smarts to any smartphone via an add-on device (using just RGBD sensing), though that project fell a little short of its funding goal on Kickstarter.

Also in the news recently: Mantis Vision raised $12.5 million in funding from Qualcomm Ventures, Samsung and others for its mobile 3D capture engine, which is designed to work on handheld devices.

There’s no denying mobile 3D as a space is heating up for device makers, although it remains to be seen how slick the end-user applications end up being — and whether they can capture the imagination of mainstream mobile users or, as with E-Capture’s positioning, carve out an initial user base within niche industries.