Capturing Real-World Environments for Virtual Cinematography

[source] written by Matt Workman

Virtual Reality Cinematography

As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task well suited to a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting, using 360-degree HDR light probes shot with DSLRs on nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be later used to construct the space digitally.
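To make that concrete, a point cloud is little more than a long list of positions with a color attached to each. A minimal sketch in Python (the four points and their colors are made up; real scans hold hundreds of millions of records):

```python
import numpy as np

# Hypothetical miniature "scan": four points with positions (meters) and
# 8-bit RGB colors. Real LIDAR scans hold hundreds of millions of records.
points = np.array([
    [0.0, 0.0, 0.0],
    [4.0, 0.0, 0.0],
    [4.0, 3.0, 0.0],
    [0.0, 3.0, 2.5],
])
colors = np.array([
    [200, 180, 160],
    [190, 185, 170],
    [120, 110, 100],
    [250, 240, 230],
], dtype=np.uint8)

# The physical dimensions of the space fall straight out of the positions:
dims = points.max(axis=0) - points.min(axis=0)
print(dims)  # extents along x, y, z in meters
```

Everything downstream, from mesh reconstruction to texture projection, starts from records of exactly this shape.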

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture high dynamic range (32-bit float HDR) 360-degree probes of the location to record the lighting. This process essentially captures the lighting in the space at a VERY high dynamic range, and that lighting is later reprojected onto the geometry constructed from the LIDAR data.
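The probe itself is built by merging bracketed exposures into one 32-bit float image. A toy sketch of that merge, assuming linear sensor values and three hypothetical shutter speeds (production pipelines weight pixels more carefully, but the principle is the same):

```python
import numpy as np

# Toy bracketed exposures of the same nodal viewpoint: linear sensor
# values in [0, 1] at three hypothetical shutter speeds (made-up numbers).
exposure_times = [1 / 1000, 1 / 60, 1 / 4]  # seconds
brackets = [
    np.array([[0.90, 0.02]]),  # fast shutter: only the window reads properly
    np.array([[1.00, 0.30]]),  # window clipped, room interior well exposed
    np.array([[1.00, 1.00]]),  # everything clipped except deep shadows
]

def merge_hdr(brackets, times, lo=0.05, hi=0.95):
    """Estimate radiance per pixel by averaging value/exposure_time over
    brackets where the pixel is neither clipped nor lost in noise."""
    num = np.zeros(brackets[0].shape, dtype=np.float32)
    den = np.zeros(brackets[0].shape, dtype=np.float32)
    for img, t in zip(brackets, times):
        usable = (img > lo) & (img < hi)
        num += np.where(usable, img / t, 0.0)
        den += usable
    return num / np.maximum(den, 1)  # 32-bit float radiance map

hdr = merge_hdr(brackets, exposure_times)
# The window pixel ends up ~50x brighter than the room pixel, a range
# no single 8-bit exposure could hold.
```

That recovered radiance range is exactly what makes the probe usable as a light source when it is reprojected onto the scanned geometry.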

Realtime 3D Asset being lit by an HDR environment real time (baked)

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space using the techniques outlined above as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, as he or she would on a real set. And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera, and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for the Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment, captured in 32-bit float with 8K textures, being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8K textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader, supported by Amplify Creations’ beta Unity plug-in.
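For reference, the UDIM model itself is just a numbering convention: UV space is divided into a grid ten tiles wide, each tile gets its own texture, and tiles are numbered from 1001. A quick sketch:

```python
def udim_tile(u, v):
    """Map a UV coordinate to its UDIM tile number: tiles are numbered
    1001, 1002, ... left to right in rows of ten."""
    return 1001 + int(u) + 10 * int(v)

print(udim_tile(0.5, 0.5))  # 1001: the first tile
print(udim_tile(3.2, 0.1))  # 1004: fourth column, first row
print(udim_tile(0.7, 1.5))  # 1011: first column, second row
```

A virtual texture engine streams in only the tiles a frame actually touches, which is what makes dozens of 8K textures per object feasible.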

The xxArray 3D photogrammetry scanner

The actors were scanned in using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8K texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse of the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. And a traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, and environments with their textures and real-world lighting, and to viewing them all in real time in virtual reality.

The industry is evolving at an incredibly rapid pace and so must the creatives working in it. Especially the persons responsible for the camera and the lighting, the director of photography.

FARO SCENE Cloud to Cloud Registration

FARO SCENE 5.3 Laser Scanning Software Provides Scan Registration without Targets

[source]

FARO® Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announced the release of their newest version of laser scanning software, SCENE 5.3, and scan data hosting-service, SCENE WebShare Cloud 1.5.

FARO’s SCENE 5.3 software, for use with the Laser Scanner Focus3D X Series, delivers scan registration by eliminating artificial targets, such as spheres and checkerboards. Users can choose from two available registration methods: Top View Based or Cloud to Cloud. Top View Based registration allows for targetless positioning of scans. In interiors and in built-up areas without reliable GPS positioning of the individual scans, targetless positioning represents a highly efficient and largely automated method of scanning. The second method, Cloud to Cloud registration, opens up new opportunities for the user to position scans quickly and accurately, even under difficult conditions. In exterior locations with good positioning of the scans by means of the integrated GPS receiver of the Laser Scanner Focus3D X Series, Cloud to Cloud is the method of choice for targetless registration.
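Targetless cloud-to-cloud registration generally works by iteratively matching points between overlapping scans and solving for the rigid motion that best aligns them. This is the classic ICP approach; FARO does not publish SCENE's exact algorithm, so the following is only a minimal numpy sketch of the idea:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (the Kabsch/SVD solution for paired points)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Align one cloud to another: pair each point with its nearest
    neighbour, solve for the rigid motion, apply it, repeat."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours; fine for a sketch, not 10^8 points
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```

The same logic explains why a rough initial position (a top view or a GPS fix) matters: nearest-neighbour matching only converges when the scans start reasonably close to aligned.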

In addition, the software also offers various new processes that enable the user to flexibly respond to a wide variety of project requirements. For instance, Correspondence Split View matches similar areas in neighbouring scans to resolve any missing positioning information, and Layout Image Overlay allows users to place scan data in a geographical context using image files, CAD drawings, or maps.

Oliver Bürkler, Senior Product Manager for 3D Documentation Software, remarked, “SCENE 5.3 is the ideal tool for processing laser scanning projects. FARO’s cloud-based hosting solution, SCENE WebShare Cloud, allows scan projects to be published and shared worldwide via the Internet. The collective upgrades to FARO’s laser scanning software solution, SCENE 5.3 and WebShare Cloud 1.5, make even complex 3D documentation projects faster, more efficient, and more effective.”

About FARO
FARO is the world’s most trusted source for 3D measurement, imaging and realization technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, production planning, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Worldwide, approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems. The Company’s global headquarters is located in Lake Mary, FL., its European head office in Stuttgart, Germany and its Asia/Pacific head office in Singapore. FARO has branches in Brazil, Mexico, Germany, United Kingdom, France, Spain, Italy, Poland, Netherlands, Turkey, India, China, Singapore, Malaysia, Vietnam, Thailand, South Korea and Japan.

Click here for more information or to download a 30-day evaluation version.

Google's Project Tango 3D Capture Device

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer-level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful, diverse range of products in the future.

Learn more about the project here

Autodesk Announces ReCap Connect Partnership Program

With its new ReCap Connect Partnership Program, Autodesk will open up Autodesk ReCap – its reality capture platform – to third party developers and partners, allowing them to extend ReCap’s functionality.

“Autodesk has a long history of opening our platforms to support innovation and extension,” said Robert Shear, senior director, Reality Solutions, Autodesk. “With the ReCap Connect Partnership Program, we’ll be allowing a talented pool of partners to expand what our reality capture software can do. As a result, customers will have even more ways to start their designs with accurate dimensions and full photo-quality context rather than a blank screen.”

There are many ways for partners to connect to the ReCap pipeline, which encompasses both laser-based and photo-based workflows.  Partners can write their own import plug-in to bring structured point cloud data into ReCap and ReCap Pro using the Capture Codec Kit that is available as part of the new ReCap desktop version. DotProduct – a maker of handheld, self-contained 3D scanners – is the first partner to take advantage of this capability.

“Autodesk’s ReCap Connect program will enable a 50x data transfer performance boost for DotProduct customers — real time 3D workflows on tablets just got a whole lot faster. Our lean color point clouds will feed reality capture pipelines without eating precious schedule and bandwidth.” – Tom Greaves, Vice President, Sales and Marketing, DotProduct LLC

Alternately, partners can take advantage of the new Embedded ReCap OEM program to send Reality Capture Scan (RCS) data exports from their point cloud processing software directly to Autodesk design products, which all support this new point cloud engine, or to ReCap and ReCap Pro. The first signed partners in the Embedded ReCap OEM program are: FARO, for their FARO SCENE software; Z+F, for their LaserControl software; CSA, for their PanoMap software; LFM, for their LFM software products; and Kubit, for their VirtuSurv software. All of these partners’ software will feature the RCS export in their coming releases.

“Partnering with Autodesk and participating in the ReCap Connect program helps FARO to ensure a fluent workflow for customers who work with Autodesk products. Making 3D documentation and the use of the captured reality as easy as possible is one of FARO’s foremost goals when developing our products. Therefore, integrating with Autodesk products suits very well to our overall product strategy.” – Oliver Bürkler, Senior Product Manager, 3D Documentation Software & Innovation, FARO

As a third option, partners can build their own application on top of the Autodesk photo-to-3D cloud service by using the ReCap Photo Web API. More than 10 companies – serving markets ranging from medical and civil engineering, to video games and Unmanned Aerial Vehicles (UAVs) – have started developing specific applications that leverage this capability, or have started integrating this capability right into their existing apps. Some of the first partners to use the ReCap Photo Web API include Soundfit, SkyCatch and Twnkls.

“Autodesk’s cloud-based ReCap is an important part of SoundFit’s 3D SugarCube Scanning Service. Autodesk’s ReCap service has enabled SoundFit to keep the per-scan cost of its service very low, opening new markets such as scans for hearing aids, custom-fit communications headsets, musicians’ monitors and industrial hearing protection. ReCap allows SoundFit to export 3D models in a wide variety of popular 3D formats, so SoundFit customers and manufacturers can import them into Autodesk CAD packages from AutoCAD to 123D Design, or send them directly to any 3D printer or 3D printing service bureau.” – Ben Simon-Thomas, CEO & Co-Founder

For more information about the ReCap Connect Partnership Program, contact Dominique Pouliquen at Email Contact.

Additional Partner Supporting Quotes

“ReCap Connect gives our PointSense and PhoToPlan users smart and fully integrated access to powerful ReCap utilities directly within their familiar AutoCAD design environments. The result is a more simple and efficient overall workflow. ReCap Photo 360 image calibration eliminates the slowest part of a kubit user’s design process resulting in significant time savings per project.” – Matthias Koksch, CEO, kubit

“ReCap, integrated with CSA’s PanoMap Server, provides a powerful functionality to transfer laser scan point cloud data from large-scale 3D laser scan databases to Autodesk products.  Using the interface, the user can select any plant area by a variety of selection criteria and transfer the laser scan points to the design environment in which they are working. The laser scan 3D database of the plant can have thousands of laser scans.” – Amadeus Burger, President, CSA Laser Scanning

“Autodesk’s industry leading Recap photogrammetry technology will be instrumental in introducing BuildIT’s 3D Metrology solution to a broader audience by significantly reducing data capture complexity and cost.” – Vito Marone, Director Sales & Marketing, BuildIT Software & Solutions

“I am very pleased with the ReCap Photo API performance and its usefulness in fulfilling our 3D personalization needs. I believe the ReCap Photo API is the only product that is available in the market today that meets our needs.” – Dr. Masuma, PhD., Founder of iCrea8

 


Introducing the World’s First App for LiDAR data visualization on the iPad: RiALITY

RIEGL proudly announces its new iPad point cloud viewer: RiALITY, now available for free in the iTunes App Store.

This innovative app, the first of its kind, allows users to experience LiDAR data in a completely new environment. It also makes LiDAR data demonstrations easier through the use of an iPad.

RIEGL’s RiALITY App enables users to visualize and navigate through point clouds acquired with RIEGL laser scanners. As an example, users are able to explore a dataset of the beautiful Rosenburg Castle in Austria. RIEGL scans can also be imported into the App from RIEGL’s RiSCAN PRO software.

“We’re pleased to present a new way of visualizing point clouds. RiALITY delivers this new technology by providing Augmented Reality technology in an easy-to-use app. Now you can easily send your client a 3D point cloud that they can visualize on their iPad, for free.” said Ananda Fowler, RIEGL’s manager of terrestrial laser scanning software.

RiALITY features true color point clouds and 3D navigation. In a breakthrough technological development, the app features an Augmented Reality Mode. The Augmented Reality Mode allows point clouds to be virtually projected into the real world.

Dive into the point cloud!

Find out more at www.riegl.com/app.

LiDAR for Visual Effects - Rebirth

Krakatoa Creates CG Visual Effects from LIDAR Scans for Short Film “Rebirth”

Film director and cinematographer Patryk Kizny – along with his talented team at LookyCreative – put together the 2010 short film “The Chapel” using motion controlled HDR time-lapse to achieve an interesting, hyper-real aesthetic. Enthusiastically received when released online, the three-minute piece pays tribute to a beautifully decaying church in a small Polish village built in the late 1700s. Though widely lauded, “The Chapel” felt incomplete to Kizny, so in fall of 2011, he began production on “Rebirth” to refine and add dimension to his initial story.

LiDAR for Visual Effects - Rebirth

Exploring the same church, “Rebirth” comprises three separate scenes created using different visual techniques. Contemplative, philosophical narration and a custom orchestral soundtrack composed by Kizny’s collaborator, Mateusz Zdziebko, help guide the flow and overall aspirational tone of the film, which runs approximately 12 minutes. The first scene features a point cloud representation of the chapel with various pieces and cross-sections of the building appearing, changing and shifting to the music. Based on LIDAR scans taken of the chapel for this project, Kizny generated the point clouds with Thinkbox Software’s volumetric particle renderer, Krakatoa, in Autodesk 3ds Max.

LiDAR for VFX - Rebirth

“About a year after I shot ‘The Chapel,’ I returned to the location and happened to get involved in heritage preservation efforts,” Kizny explained. “At the time, laser scanning was used for things like archiving, set modeling and support for integrating VFX in post production, but I hadn’t seen any films visualizing point clouds themselves, so that’s what I decided to do.”

EKG Baukultur, an Austrian/German company that specializes in digital heritage documentation and laser scanning, scanned the entire building in about a day from 25 different scanning positions. The collected data was then registered and processed, creating a dataset of about 500 million points. Roughly half of the collected data was used to create the visualizations.

3D Laser Scanning for Visual Effects - Rebirth

Data processing was done in multiple stages using various software packages. Initially, the EKG Baukultur team registered the separate scans together in a common coordinate space using FARO SCENE software. Using the .PTS format, the data was then re-imported into Alice Labs Studio Clouds (acquired by Autodesk in 2011) for cleanup. Kizny manually removed any tripods with cameras, people, checkerboards and balls that had been used to reference the scans. Then the data was processed in Geomagic Studio to reduce noise, fill holes and uniformly downsample selected areas of the dataset. Later, the data was exported back to the .PTS ASCII format with the help of MeshLab and processed using custom Python scripting so that it could be ingested by the Krakatoa importer.

Lacking a visual effects background, Kizny initially tested a number of tools to find the best way to visualize point cloud data in a cinematic way, with varying and largely disappointing results. Six months of extensive R&D led Kizny to Krakatoa, a tool that was astonishingly fast and a fraction of the price of similar software specifically designed for CAD/CAM applications.
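The .PTS stage of a pipeline like this is easy to picture: it is a plain ASCII format, commonly a point count followed by one "x y z intensity r g b" record per line (layouts vary by exporter). A hypothetical sketch of the kind of custom Python scripting involved, here parsing a tiny four-point file and decimating it:

```python
import io
import numpy as np

def read_pts(stream):
    """Read a .PTS-style ASCII block: a point count, then one
    'x y z intensity r g b' record per line."""
    n = int(stream.readline())
    data = np.loadtxt(stream, max_rows=n)
    return data[:, 0:3], data[:, 4:7].astype(np.uint8)

def decimate(xyz, rgb, keep_every=2):
    """Uniform downsampling by simple decimation: keep every k-th point."""
    return xyz[::keep_every], rgb[::keep_every]

# Made-up four-point sample in the layout described above
sample = io.StringIO(
    "4\n"
    "0.0 0.0 0.0 -500 200 180 160\n"
    "1.0 0.0 0.0 -400 190 185 170\n"
    "1.0 2.0 0.0 -300 120 110 100\n"
    "0.0 2.0 3.0 -200 250 240 230\n"
)
xyz, rgb = read_pts(sample)
xyz_small, rgb_small = decimate(xyz, rgb, keep_every=2)  # 4 points -> 2
```

At half a billion points, the real scripts stream the file in chunks rather than loading it whole, but the record format is the same.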

“I had a very basic understanding of 3ds Max, and the Krakatoa environment was new to me. Once I began to figure out Krakatoa, it all clicked and the software proved amazing throughout each step of the process,” he said.

Even while mixing the depth of field and motion blur functions in Krakatoa and rendering 200 million points in 2K, Kizny was able to keep his render time to roughly five to ten minutes per frame by using smaller apertures and camera passes from a greater distance.

“Krakatoa is an amazing manipulation toolkit for processing point cloud data, not only for what I’m doing here but also for recoloring, increasing density, projecting textures and relighting point clouds. I have tried virtually all major point cloud processing software, but Krakatoa saved my life on this project,” Kizny noted.

In addition to using Krakatoa to visualize all the CG components of “Rebirth” as well as render point clouds, Kizny also employed the software for advanced color manipulation. With two subsets of data – a master with good color representation and a target that lacked color information – Kizny used a Magma flow modifier and a comprehensive set of nodes to cast and spatially interpolate the color data from the master subset onto the target subset so that they blended seamlessly in the final dataset. Magma modifiers were also used for the color correction of the entire dataset prior to rendering, which allowed Kizny greater flexibility compared to trying to color correct the rendering itself. Using Krakatoa with Magma modifiers also provided Kizny with a comprehensive set of built-in nodes and scripting access.
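The idea behind that color cast is straightforward even outside Magma: each colorless target point borrows color from the spatially nearest point of the colored master subset. Magma's node graph can interpolate more smoothly, so plain nearest-neighbour lookup here is a simplified stand-in:

```python
import numpy as np

def transfer_colors(master_xyz, master_rgb, target_xyz):
    """Give each colorless target point the color of its spatially
    nearest master point."""
    d2 = ((target_xyz[:, None, :] - master_xyz[None, :, :]) ** 2).sum(axis=-1)
    return master_rgb[d2.argmin(axis=1)]

# Made-up data: a red point and a blue point in the master subset.
master_xyz = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
master_rgb = np.array([[255, 0, 0], [0, 0, 255]], dtype=np.uint8)
target_xyz = np.array([[1.0, 0.5, 0.0], [9.0, -0.5, 0.0]])

cast = transfer_colors(master_xyz, master_rgb, target_xyz)
# each target point picks up the color of the master point nearest to it
```

Because the lookup is purely spatial, the two subsets blend seamlessly wherever they overlap, which is exactly the effect described above.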

3D Laser Scanning for Visual Effects - Rebirth

The second scene of “Rebirth” is a time-lapse reminiscent of “The Chapel,” while the final scene shows live action footage of a dancer. Footage for each scene was captured using Canon DSLR cameras, a RED ONE camera and DitoGear motion control equipment. Between the second and third scenes, a short transition visualizes the church collapsing, which was created using 3ds Max Particle Flow with the help of Thinkbox Ember, a field manipulation toolkit, and Thinkbox Stoke, a particle reflow tool.

“In the transition, I’m trying to collapse a 200 million-point data cloud into smoke, then create the silhouette of a dancer as a light point from the ashes,” shared Kizny. “Even though it’s a short scene, I’m making use of a lot of technology. It’s not only rendering this point cloud data set again; it’s also collapsing it. I’m using the software in an atypical way, and Thinkbox has been incredibly helpful in troubleshooting the workflow so I could establish a solid pipeline.”

Collapsing the church proved to be a challenge for Kizny. Traditionally, when creating digital explosions, VFX artists blow up a solid, rigid object. Not only did Kizny need to collapse a point cloud – a daunting task in and of itself – but he also had to do so in the hyper-realistic aesthetic he’d established, and in a way that would be both ethereal and physically believable. Using 3ds Max Particle Flow as a simulation environment, Kizny was able to generate a comprehensive high-resolution vector field that was more efficient and more precisely controlled with Ember. Ember was also used to animate two angels appearing from the dust and smoke along with the dancer silhouette. The initial dataset of each of the angels was pushed through a specific vector noise field that produced a smoke-like dissolve, then reversed thanks to retiming features in Krakatoa, Ember and Stoke, which was also used to add density.

3D Laser Scanning for Visual Effects - Rebirth

“To create the smoke on the floor, I decided to go all the way with Thinkbox tools,” Kizny said. “All the smoke you see was created using Ember vector fields and simulated with Stoke. It was good and damn fast.”

Another obstacle was figuring out how to animate the dancer in the point clouds. Six cameras recorded a live performer, with markerless motion capture tracking done using the iPi Motion Capture Studio package. The data obtained from the dancer was then ported onto a virtual, rigged model in 3ds Max and used to emit particles for a Particle Flow simulation. Ember vector fields were used for all the smoke-like circulations, and then everything was integrated and rendered using Thinkbox’s Deadline, a render management system, and Krakatoa – almost 900 frames and 3 TB of data caches for particles alone. Deadline was also used to distribute high-volume renders and allocate resources across Kizny’s render farm.

Though an innovative display of digital artistry, “Rebirth” is also a preservation tool. Interest generated by “The Chapel” and continued with “Rebirth” has enticed a Polish foundation to begin restoration efforts on the run-down building. Additionally, the LIDAR scans of the chapel will be donated to CyArk, a non-profit dedicated to the digital preservation of cultural heritage sites, and made widely available online.

The film is currently securing funding to complete postproduction. Support the campaign and learn more about the project at the IndieGoGo campaign homepage at http://bit.ly/support-rebirth. For updates on the film’s progress, visit http://rebirth-film.com/.

About Thinkbox Software
Thinkbox Software provides creative solutions for visual artists in entertainment, engineering and design. Developer of high-volume particle renderer Krakatoa and render farm management software Deadline, the team of Thinkbox Software solves difficult production problems with intuitive, well-designed solutions and remarkable support. We create tools that help artists manage their jobs and empower them to create worlds and imagine new realities. Thinkbox was founded in 2010 by Chris Bond, founder of Frantic Films. http://www.thinkboxsoftware.com

Mobile LiDAR Systems Dynascan Mobile Mapper

Mobile Scanners Enter the Rental Market

The Only Portable Mobile Mapping & GIS Data Collection Solution is now available for daily, weekly and monthly rental.

Measurement Devices Ltd (MDL): Mobile Surveying Systems, or Mobile Mapping Systems as they are becoming known, are not new. Several laser manufacturers have introduced and sold mobile mapping systems over the last few years. The difference is that these systems (with one exception) are adaptations of tripod-mounted (static) laser scanners. They are all large, very heavy, complicated and very, very expensive.

Measurement Devices Ltd. announces the worldwide availability of Dynascan equipment hire/rental, giving even the smallest survey organizations the ability to add mobile mapping to their service offering and enabling them to undertake previously undreamed-of detailed 3D, large-scale mapping projects, formerly the domain of large companies or government organizations. Productivity of laser surveying systems will no longer be measured in terms of “points per second” but in terms of “square miles per hour,” and the system is available for the same cost-effective daily hire rate as static terrestrial 3D laser scanning systems (e.g. Leica Geosystems ScanStation C10, HDS6200, HDS7000).

Furthermore, MDL has assembled a global team of LiDAR industry experts to provide field and office support services and custom-tailored training on “real-world” workflows and procedures. Travis Reinke, Chief Operating Officer of Measurement Devices US, LLC stated, “By providing easy access to the latest technology and expertise, we are able to change the decision-making process. Project managers and the like now have the ability to integrate new technologies into their workflows without the huge risk of taking on large capital expenditures. We’re disrupting the market by introducing mobile mapping to the masses.”

MDL’s new business model of providing ‘complete measurement solutions’ gives greater support to customers with irregular or infrequent resource needs or project specific requirements. Reinke went on to add, “Our highly-skilled team of professionals are available to ensure that first-time users are successful on their first project. We accomplish this by offering a discounted bundled rate for an experienced technician and the mobile system when needed.”

MDL’s mission is:

  • To assist and enable more end-users in the real-world application of various rapid 3D data capture technology by providing easy access to a comprehensive range of equipment rental, consultancy and education.
  • To help end-users close the information gap between the Hardware/Software Vendors and the real world.
  • To empower and educate our client base in order to enhance industry growth and spawn further advancements.
  • To support customers with irregular or infrequent resource needs or project specific requirements.

With decades of combined experience gained from a real world application of laser measurement systems, MDL’s team of industry professionals has obtained a highly qualified and proven skill set focused on Quarry and Mining Surveying, Dynamic Land Surveying, High Definition Surveying, and Hydrographic Surveying.

Eliminate the Investment Risk

  • Rent equipment until you’re ready to buy – expense on per project basis.
  • Take advantage of knowledgeable experts with real world experience to support your project team.
  • Ramp up or down as needed to keep your capital expenditures minimal.
  • Rely upon our experience to provide you with practical user information and get more than just a demo.

The Dynascan ‘Plug and Play’ LIDAR system is a fully integrated high-speed laser scanner, high-accuracy GPS positioning system and Inertial Measurement Unit. The system is lightweight, highly portable and may be used on land vehicles or marine vessels to acquire 3D survey-quality data of topography, urban developments, industrial plants (including overhead utility cables), bridges, dams, harbors, beaches, rivers and canal banks, and much more.

By fully integrating all the sensors in one package, MDL has eliminated the need for field calibrations as all sensor offset measurements are fixed during the factory calibration process and pre-configured in the data acquisition software. The LIDAR system is shipped ready for operation and can be mobilized in a matter of minutes.

The Dynascan system comes complete with a data acquisition and post-processing software suite that is compatible with all major multi-beam echo sounders, swathe sonar systems, positioning systems and Inertial Measurement Units. Data from all available sensors is synchronized, “time tagged” and recorded. Post-processed and raw data may be exported to most 3D database and CAD software packages. Dynascan is highly affordable and represents an unrivalled price-performance advantage, opening up the benefits of 3D LIDAR to numerous applications and market opportunities.
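
The “time tagged” synchronization described above amounts to stamping every sensor record against a common clock and interpolating between fixes when streams arrive at different rates. A minimal sketch in Python, assuming a simple hypothetical (time, x, y, z) GPS track rather than Dynascan’s actual data formats:

```python
from bisect import bisect_left

def interpolate_position(gps_fixes, t):
    """Linearly interpolate a (time, x, y, z) GPS track at time t."""
    times = [f[0] for f in gps_fixes]
    i = bisect_left(times, t)
    if i == 0:                      # before the first fix: clamp
        return gps_fixes[0][1:]
    if i >= len(gps_fixes):         # after the last fix: clamp
        return gps_fixes[-1][1:]
    (t0, *p0), (t1, *p1) = gps_fixes[i - 1], gps_fixes[i]
    w = (t - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

# Each time-tagged laser return can then be paired with the platform
# position recorded by the slower GPS/IMU stream:
gps = [(0.0, 0.0, 0.0, 10.0), (1.0, 2.0, 0.0, 10.0), (2.0, 4.0, 0.0, 10.0)]
print(interpolate_position(gps, 0.5))  # midpoint between the first two fixes
```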

 

Google Maps Street View 2.0 [LiDAR]

Brian Ussery is reporting that Google is back in Atlanta, GA making Street View images for Google Maps, but this time they brought in the big guns. Beu Blog reported on April 28, 2010, “The cars here today are equipped with GPS, high resolution panoramic cameras and multiple SICK sensors. These sensors collect LiDAR data that can be used for 3D imaging and visualizations like that seen in Radiohead’s recent “House of Cards” music video. Google Earth and SketchUp, Google’s 3D virtual building maker for Maps, also use this type of data.

Last week Google announced the release of a plugin which allows users access to Google Earth imagery via Maps. As a result it’s now possible to view 3d images in Google Maps. The problem here is fairly obvious, Google Earth’s aerial imagery is taken from above and as a result not from the same perspective as users interacting with the data. Not to worry though, the StreetView team has been working on these kinds of problems for some time. When it comes to Navigation, Maps or StreetView, earthbound LiDAR enhanced imagery processed via Sketchup seems like a perfect complement to Google’s existing view from above. Combining high resolution imagery taken from the user’s perspective with advanced 3D image technology, presents some new possibilities to say the least. Factor in new releases like business ads in Maps, now being available in 3D on your mobile device and it’s pretty clear how Sketchup will be monetized.”

It is expected that Google’s incorporation of LiDAR into their mapping efforts will lead to some significant changes to our industry. If you have not previously seen the “House of Cards” video, be sure to check out the interactive music video code to see how Google made the point cloud data readily available for manipulation in a standard web browser. Point clouds are finally becoming more natively accepted in most CAD platforms and with Google getting involved in the industry, who knows where we will be in the near future.
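
Point cloud data of the kind released for “House of Cards” is often distributed as plain ASCII records, one point per line, which is part of what made it so easy to manipulate in a browser. A minimal loader sketch in Python, assuming hypothetical `x y z intensity` records rather than the exact format Google published:

```python
def load_ascii_points(lines):
    """Parse whitespace- or comma-separated x, y, z [, intensity] records."""
    points = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        fields = line.replace(",", " ").split()
        points.append(tuple(float(v) for v in fields[:4]))
    return points

sample = [
    "# x y z intensity",
    "1.0, 2.0, 0.5, 90",
    "1.1 2.0 0.6 85",
]
print(load_ascii_points(sample))
```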

Tiltan TLiD Transform LiDAR Point Clouds to 3D Models in One Keystroke

From Tiltan’s website: TLiD is Tiltan’s innovative solution for fast, automated creation of 3D maps and GIS information from LiDAR point clouds.

TLiD Main Features:

– Automatic extraction of DTM (bare earth) and DSM
– Automatic feature extraction (houses, trees, power lines)
– Automatic full scene 3D reconstruction
– LAS or free ASCII txt input
– LAS, SHP, DTM and other output file formats
– Multiple input/output coordinate systems
– Integrated with a 3D Viewer

TLiD Advantages:

– Fast parallel processing for cost reduction
– No limitation on input file size
– Standalone product

Special Applications:

– Tree counting – height and size
– Power line mapping and clearance
– Line of sight
– Other applications – available on request
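
The bare-earth (DTM) extraction that TLiD automates can be illustrated, very crudely, by keeping only the lowest return in each grid cell, so canopy and rooftop returns are discarded in favor of ground hits. A sketch in Python with made-up points; production tools use far more robust classification than this:

```python
import math

def grid_minimum_dtm(points, cell=1.0):
    """Crude bare-earth estimate: keep the lowest z per (x, y) grid cell."""
    cells = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        if key not in cells or z < cells[key]:
            cells[key] = z
    return cells

# A ground return at z=0.2 and a tree-canopy return at z=6.0 share a cell;
# only the ground return survives:
points = [(0.3, 0.4, 0.2), (0.5, 0.5, 6.0), (1.2, 0.1, 0.1)]
print(grid_minimum_dtm(points))
```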

National Center for Airborne Laser Mapping Comes to Houston [LiDAR]

April 12, 2010 – Houston – Increasing its cadre of laser mapping researchers, the University of Houston will expand its pioneering work in such areas as homeland security, disaster recovery, oil and gas exploration, wind farm site planning and environmental studies.

The NSF National Center for Airborne Laser Mapping (NCALM) and the groundbreaking researcher leading it recently moved operations to the University of Houston.  Based upon historical information, revenues generated by the center’s operation are anticipated to be $1 million per year and will be reinvested in the program.

NCALM is UH’s first and only NSF-supported center, established and sustained by funding from the National Science Foundation.  This differs from the way the university typically sets up centers, using university funds or grants from multiple sources for multiple projects.  These types of centers support NSF’s focus on interdisciplinary research, spanning several institutions and departments.

Ramesh Shrestha, Hugh Roy and Lillie Cranz Cullen Distinguished Professor of Civil and Environmental Engineering, brought NCALM to UH from the University of Florida.  He has been director of the center, focused on ground-based scanning laser technology and airborne laser swath mapping research, since it was established in 2003.  Shrestha brought much of his Florida team with him to Houston, where they now operate NCALM jointly with the University of California, Berkeley.

“With the center, we have brought laser mapping’s uses to the forefront and expect to continue to have this impact in our new Houston home,” Shrestha said.  “We plan to establish curriculum catered to this specialty and eventually add a graduate degree in geosensing systems engineering.  This is in addition to carrying out research far surpassing what has been possible in laser mapping to date.”

Shrestha’s work with laser mapping goes back to the 1990s, when this once niche research area was just making its debut.  Bill Carter, now a research professor at UH, worked with him early on and helped establish NCALM.

“Together, we saw its potential to far exceed what was possible with many traditional methods, such as airborne photogrammetric mapping that uses cameras to detail terrain,” Carter said.  “Laser mapping has the ability to work day or night, as well as generally map areas even though they were covered by forests and other vegetation where photogrammetric methods couldn’t.”

It wasn’t long before other scientists would see its same benefits, especially as the two developed techniques to remove and minimize some of the errors seen in the early years.  Their equipment became fine-tuned to collect even more data, now mapping as many as 167,000 points per second compared to the 3,000 they were able to achieve when they first started.

Their work has changed the way the state of Florida monitors erosion on its coastline, produced the highest resolution 3-D images in existence of the San Andreas Fault and taken them across the globe to map Mayan Ruins in Belize and volcanoes in Hawaii.  While the impact of their work is already far reaching, their plan for the coming years indicates they are not close to completion.  The value of this work is evident in evaluating the before and after of hurricanes and earthquakes in terms of improving building design and other mitigation efforts, as well as offering predictive tools for subsequent powerful events.

Aided by NSF, future NCALM efforts will explore the possibility of using Light Detection and Ranging (LiDAR) to map everything from glacial movements to the migration of penguin colonies in Antarctica.  Using LiDAR, researchers take measurements of the ground’s surface from their Cessna 337 Skymaster airplane.

From roughly 2,000 feet, this remote technology measures properties of scattered light through the use of laser pulses.  Thousands of small cone-shaped pulses travel through a hole in the bottom of the plane to the ground below, and a unique detector picks up rays reflected from the ground.  Then, each point’s distance is determined by measuring the time delay between the transmission of a pulse and the detection of reflected signals.  The plane’s location and movement in the air are tracked by an inertial measurement unit fixed inside the laser system with a GPS receiver mounted to the plane and others on the ground.  Both are used, along with the laser data, to produce detailed 3-D topographical images of the terrain.
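
The time-delay measurement described above reduces to simple arithmetic: the pulse travels to the ground and back, so the one-way range is the speed of light times half the delay. A sketch in Python (the 4.07 µs delay is an illustrative value for a return from roughly 2,000 ft / ~610 m, not a quoted specification):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def pulse_range(delay_s):
    """One-way range from a round-trip pulse delay: d = c * t / 2."""
    return C * delay_s / 2.0

# A return from ~610 m comes back after about 4.07 microseconds:
print(round(pulse_range(4.07e-6), 1))  # ≈ 610.1 m
```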

“In coming years, our group plans to develop a next-generation LiDAR system.  The unit would be less expensive than commercially available systems and allow for some of the most accurate, highest-resolution observations possible in laser mapping,” Shrestha said.  “We want to develop a system like no one else has developed.  It would really change what could be done with this technology.  It would have new features, be faster, smaller and capture more during each flight than we can today.”

According to Shrestha, this system would use a much shorter pulse-length laser, increasing the number of points that could be mapped per second to 800,000.  This would add to data accuracy and reduce the amount of time needed in the air to collect the information.  Additionally, it would be able, for the first time, to penetrate shallow water depths.

###

NOTE TO JOURNALISTS: High-resolution photos of Ramesh Shrestha and the Cessna 337 Skymaster airplane are available to media by contacting Lisa Merkl.

About the University of Houston
The University of Houston, Texas’ premier metropolitan research and teaching institution, is home to more than 40 research centers and institutes and sponsors more than 300 partnerships with corporate, civic and governmental entities.  UH, the most diverse research university in the country, stands at the forefront of education, research and service with more than 37,000 students.

About the Cullen College of Engineering
The Cullen College of Engineering at UH has played a vitally important role in educating engineers in Texas.  Taught by innovative faculty, eight of whom are in the National Academy of Engineering, the college offers degree programs in biomedical, chemical, civil, computer, electrical, environmental, industrial, mechanical and petroleum engineering, as well as specialty programs in materials, aerospace, and computer and systems engineering.

For more information about UH, visit the university’s Newsroom at http://www.uh.edu/news-events/.

To receive UH science news via e-mail, visit http://www.uh.edu/news-events/mailing-lists/sciencelistserv.php.

For additional news alerts about UH, follow us on Facebook and Twitter.