European LiDAR Mapping Forum

Title: European LiDAR Mapping Forum
Location: The Hague, Netherlands
Description: The European event for airborne, bathymetric and terrestrial LiDAR, with a particular focus on mobile mapping systems.

A major two-day technical conference focusing on the use of LiDAR to support transport, urban modelling, coastal zone mapping, asset management and GIS applications.

Alongside there is an associated international exhibition for system and component manufacturers, operators and service companies.

Building on 10 years of experience with the annual International LiDAR Mapping Forum (ILMF) in the USA, the organizers are bringing this unique event to Europe. With its focus on LiDAR technology and applications, ELMF10 recognizes the technology advances spearheaded in Europe, and the particular challenges and opportunities which face operators in applying LiDAR to new developments in the European market.

The ELMF10 theme is “LiDAR Across The Market Spectrum”, and the conference programme will feature technical presentations by the industry leaders and opinion formers.

Start Date: 2010-11-30
End Date: 2010-12-01

Tiltan TLiD Transform LiDAR Point Clouds to 3D Models in One Keystroke

From Tiltan’s website: TLiD is Tiltan’s innovative solution for fast, automated creation of 3D maps and GIS information from LiDAR point clouds.

TLiD Main Features:

– Automatic extraction of DTM (bare earth) and DSM
– Automatic feature extraction (houses, trees, power lines)
– Automatic full scene 3D reconstruction
– LAS or free ASCII txt input
– LAS, SHP, DTM and other output file formats
– Multiple input/output coordinate systems
– Integrated with a 3D Viewer
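As a rough illustration of the "free ASCII txt input" listed above: a plain-text point cloud is typically just one x y z (and optionally intensity) record per line. A minimal, hypothetical reader, not TLiD's actual parser, might look like this:

```python
def read_ascii_points(lines):
    """Parse whitespace-separated x y z [intensity] records,
    skipping comments and blank lines. A toy stand-in for an
    ASCII point-cloud importer."""
    points = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        # keep only the x, y, z coordinates
        points.append(tuple(float(f) for f in fields[:3]))
    return points

sample = ["# x y z intensity", "1.0 2.0 3.5 120", "", "4.2 0.1 9.9"]
print(read_ascii_points(sample))  # → [(1.0, 2.0, 3.5), (4.2, 0.1, 9.9)]
```

Real importers also handle coordinate-system metadata and binary LAS records, which this sketch deliberately leaves out.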

TLiD Advantages:

– Fast parallel processing for cost reduction
– No limitation on input file size
– Standalone product

Special Applications:

– Tree counting – height and size
– Power line mapping and clearance
– Line of sight
– Other applications – available on request

Exploring Point-Based Rendering in Pixar’s RenderMan [Point Clouds]

Creating animations of large point cloud datasets generated from terrestrial laser scanners and LiDAR has been an issue for a number of years. While it has been possible using tools such as Leica Geosystems Cyclone or Pointools, just to name a couple, it is still a very cumbersome task. It is exciting to see the CGI industry beginning to adopt the use of this data and developing applications that make it easier to visualize.

Over the last few years, a new technology has emerged for creating CGI effects that has already made a big impact on feature film production. It is called point-based rendering, and this powerful technique makes creating global illumination effects for feature film far more practical and efficient than before. In fact, this year the Academy of Motion Picture Arts and Sciences awarded the creators of this innovation, Per Christensen, Michael Bunnell and Christophe Hery, a Scientific and Engineering Academy Award.

In this great article, written by Nils O Sandys at CGSociety, we look into the development of this important new technology, how point-based rendering works, and what it all means for the future of feature film production as we know it. Read the full article here.

National Center for Airborne Laser Mapping Comes to Houston [LiDAR]

April 12, 2010 – Houston – Increasing its cadre of laser mapping researchers, the University of Houston will expand its pioneering work in such areas as homeland security, disaster recovery, oil and gas exploration, wind farm site planning and environmental studies.

The NSF National Center for Airborne Laser Mapping (NCALM) and the groundbreaking researcher leading it recently moved operations to the University of Houston.  Based upon historical information, revenues generated by the center’s operation are anticipated to be $1 million per year and will be reinvested in the program.

NCALM is UH’s first and only NSF-supported center, established and sustained by funding from the National Science Foundation.  This differs from the way the university typically sets up centers, using university funds or grants from multiple sources for multiple projects.  These types of centers support NSF’s focus on interdisciplinary research, spanning several institutions and departments.

Ramesh Shrestha, Hugh Roy and Lillie Cranz Cullen Distinguished Professor of Civil and Environmental Engineering, brought NCALM to UH from the University of Florida. He has been director of the center, focused on ground-based scanning laser technology and airborne laser swath mapping research, since it was established in 2003. Shrestha brought much of his Florida team with him to Houston, where they now operate NCALM jointly with the University of California, Berkeley.

“With the center, we have brought laser mapping’s uses to the forefront and expect to continue to have this impact in our new Houston home,” Shrestha said. “We plan to establish curriculum catered to this specialty and eventually add a graduate degree in geosensing systems engineering. This is in addition to carrying out research far surpassing what has been possible in laser mapping to date.”

Shrestha’s work with laser mapping goes back to the 1990s, when this once niche research area was just making its debut.  Bill Carter, now a research professor at UH, worked with him early on and helped establish NCALM.

“Together, we saw its potential to far exceed what was possible with many traditional methods, such as airborne photogrammetric mapping that uses cameras to detail terrain,” Carter said. “Laser mapping has the ability to work day or night, and to map areas covered by forests and other vegetation where photogrammetric methods couldn’t.”

It wasn’t long before other scientists saw the same benefits, especially as the two developed techniques to remove or minimize some of the errors seen in the early years. Their equipment became fine-tuned to collect far more data, now mapping as many as 167,000 points per second compared to the 3,000 they were able to achieve when they first started.

Their work has changed the way the state of Florida monitors erosion on its coastline, produced the highest resolution 3-D images in existence of the San Andreas Fault and taken them across the globe to map Mayan Ruins in Belize and volcanoes in Hawaii.  While the impact of their work is already far reaching, their plan for the coming years indicates they are not close to completion.  The value of this work is evident in evaluating the before and after of hurricanes and earthquakes in terms of improving building design and other mitigation efforts, as well as offering predictive tools for subsequent powerful events.

Aided by NSF, future NCALM efforts will explore the possibility of using Light Detection and Ranging (LiDAR) to map everything from glacial movements to the migration of penguin colonies in Antarctica. Using LiDAR, researchers take measurements of the ground’s surface from their Cessna 337 Skymaster airplane.

From roughly 2,000 feet, this remote technology measures properties of scattered light through the use of laser pulses.  Thousands of small cone-shaped pulses travel through a hole in the bottom of the plane to the ground below, and a unique detector picks up rays reflected from the ground.  Then, each point’s distance is determined by measuring the time delay between the transmission of a pulse and the detection of reflected signals.  The plane’s location and movement in the air are tracked by an inertial measurement unit fixed inside the laser system with a GPS receiver mounted to the plane and others on the ground.  Both are used, along with the laser data, to produce detailed 3-D topographical images of the terrain.
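The time-of-flight principle described above can be sketched in a few lines: the measured delay is the round trip of the pulse at the speed of light, so the one-way range is half the distance travelled. This is an illustrative calculation, not NCALM's actual processing code:

```python
# Speed of light in a vacuum, metres per second
C = 299_792_458.0

def pulse_range(time_delay_s: float) -> float:
    """Return one-way distance in metres for a measured
    round-trip pulse delay in seconds."""
    return C * time_delay_s / 2.0

# A pulse returning after ~4.07 microseconds corresponds to roughly
# 610 m (about 2,000 ft) of range.
print(round(pulse_range(4.07e-6), 1))  # → 610.1
```

In practice the GPS and inertial measurements described above are then combined with each range to place the reflected point in ground coordinates.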

“In coming years, our group plans to develop a next-generation LiDAR system.  The unit would be less expensive than commercially available systems and allow for some of the most accurate, highest-resolution observations possible in laser mapping,” Shrestha said.  “We want to develop a system like no one else has developed.  It would really change what could be done with this technology.  It would have new features, be faster, smaller and capture more during each flight than we can today.”

According to Shrestha, this system would use a much shorter pulse-length laser, increasing the number of points that could be mapped per second to 800,000.  This would add to data accuracy and reduce the amount of time needed in the air to collect the information.  Additionally, it would be able, for the first time, to penetrate shallow water depths.


NOTE TO JOURNALISTS: High-resolution photos of Ramesh Shrestha and the Cessna 337 Skymaster airplane are available to media by contacting Lisa Merkl.

About the University of Houston
The University of Houston, Texas’ premier metropolitan research and teaching institution, is home to more than 40 research centers and institutes and sponsors more than 300 partnerships with corporate, civic and governmental entities.  UH, the most diverse research university in the country, stands at the forefront of education, research and service with more than 37,000 students.

About the Cullen College of Engineering
The Cullen College of Engineering at UH has played a vitally important role in educating engineers in Texas.  Taught by innovative faculty, eight of whom are in the National Academy of Engineering, the college offers degree programs in biomedical, chemical, civil, computer, electrical, environmental, industrial, mechanical and petroleum engineering, as well as specialty programs in materials, aerospace, and computer and systems engineering.


Create Floor Plans Automatically from 3D Point Clouds [Point-Cab]

SCANable is evaluating this promising new software and will follow up with an in-depth review in the coming days. In the meantime, please feel free to check it out for yourself and leave your input here.

Automatic analysis and interpretation of laser scanning data

  • Floor plans, views and sectional views are created at the push of a button.
  • Program offers an intuitive user interface (no training required).
  • For the first time, results can be processed in almost all CAD programs.
  • Visual representations allow a direct use and will dramatically reduce time needed for analysis and modeling.

Point-Cab is the first application that automatically creates floor plans and sectional views at the push of a button. The goal of the development team has been to create an extremely user-friendly interface design.

The program creates a visual representation of laser scanning data. The result is far more expressive and readable than the individual points of a point cloud. Images and representations of sections are a prerequisite for easy modeling in CAD programs.

The program supports most of the common CAD programs and construction tools through standard interfaces. You may use the results in ArchiCAD, Google SketchUp, Revit and other tools.

Point-Cab product videos:

  • Creation of a floor plan at the push of a button
  • Sectional views created from laser scanning data

Point-Cab supports several CAD programs, such as:

    • AutoCAD
    • ArchiCAD
    • Microstation
    • SketchUp

Point-Cab Layout supports the following laser scanner formats:

    • PTX exchange format
    • FARO laser scanner
    • Riegl laser scanner (available soon)

Examples of use

Point-Cab Layout transfers laser scanning data into AutoCAD (available soon)

Point-Cab transfers laser scanning data into ArchiCAD (available soon)

Point-Cab transfers laser scanning data into Microstation (available soon)

Point-Cab transfers laser scanning data into SketchUp

Point-Cab Layout trial version

Find out for yourself why this revolutionary program can make such a difference for you. Click here to download the Point-Cab trial version. The trial version can be used for 15 hours; time is consumed only while the application is in use. Feel free to also download our laser scanning data examples. Please find all data examples here.

Here you can download the processed layout results of the full version of Point-Cab.

Buy Point-Cab Layout

Click here or send email to

Digitally Preserving American History

Erosion from recent storms uncovered the wreck (photo: Harry R. Feldman Inc.)

It is great to finally see people accepting the benefits of laser scanning as a means of digital preservation. Below is an excellent article posted by Stefanie Geisler, Boston Globe Correspondent. Source

The wreck of the British warship that Paul Revere slipped by on his legendary journey to Lexington in 1775 has resurfaced in the shifting sands of Cape Cod, and federal park officials are seizing the moment by having the wreck “digitally preserved,” using three-dimensional imaging technology.

“We know the wreck is going to disappear again under the sand, and it may not resurface again in our lifetimes,” said William P. Burke, the historian at the Cape Cod National Seashore, noting that the last time any part of the HMS Somerset III had been sighted was 37 years ago.

“Somewhere down the road, if someone’s researching the Somerset, or the effects of ocean currents on shipwrecks, or anything like that, they will have this record,” he said. “We’re in the forever business. We’re looking at tomorrow, but we’re also looking ahead indefinitely.”

The Somerset fought in the American Revolution and had a crew of more than 400. In 1775, Paul Revere slipped through Boston Harbor past the ship before beginning his ride to warn the colonials the British were on the move. In his poem “Paul Revere’s Ride,” Henry Wadsworth Longfellow called it “a phantom ship, with each mast and spar/Across the moon like a prison bar.” The ship sank on Nov. 2, 1778, off the Cape.

After erosion from recent storms, about a dozen of the Somerset’s timbers were found poking through the wet sand at low tide in the national seashore in Provincetown. Park officials called on Harry R. Feldman Inc., a land surveying company from Boston, to make the three-dimensional rendering.

On Thursday, crews set up survey markers and a laser scanning instrument, said Michael Feldman, the company’s president.

The instrument was placed near the timbers, Feldman said. Using the scanner, the surveyors collect millions of data points that are used to create the three-dimensional rendering.

“The great thing about this technology is it not only shows a three-dimensional picture or video of what’s there, it also obtains data down to quarter-inch accuracy,” Feldman said.

It could take several visits to the site to complete the imaging. But when it’s done, the national seashore will have an animated fly-through of the wreck site — and anyone interested in seeing it won’t have to wait for the timbers to reappear.

The imaging will only capture the timbers that are showing. The rest of the wreck, which is buried in sand, might deteriorate if the site were excavated, Burke said.

Most of the crew survived when the ship sank, but they didn’t get a warm welcome when they reached the shore, he said.

“They were pretty upset with them, because the British had been blockading Provincetown for a long time during the war,” Burke said. “They marched all the survivors off the Cape, and eventually exchanged them for American prisoners.”

Laser Scanning the Hold of the Charles W. Morgan

In an effort to precisely document the measurements of the Charles W. Morgan, Feldman Land Surveyors of Boston, MA, used a Leica Geosystems ScanStation 2 3D laser scanner to create a photo-realistic 3-D model of the ship’s hold.

Charles W. Morgan was a U.S. whaleship during the 19th and early 20th century. Ships of this type usually harvested the blubber of whales for whale oil, which was commonly used in lamps during the time period. The ship is currently an exhibit at the Mystic Seaport museum in Mystic, Connecticut.

The ship is named for its original owner, a Quaker whaling merchant who ordered its construction from the shipbuilders Jethro and Zachariah Hillman of New Bedford, Massachusetts. The ship’s maiden voyage began on September 6, 1841, rounding Cape Horn and cruising across the Pacific Ocean. Following the Morgan’s initial voyage of three years and four months, she came home with 1,600 barrels of sperm oil, 800 barrels of whale oil and 10,000 lbs of whalebone, known as baleen, worth around USD$56,000.

The hull and deck of Morgan reflected the industry which she was built to serve. A typical whaleship has three functions:

  • to serve as a mother ship to a fleet of small whaleboats, which are stored on the davits when not in use,
  • to serve as a factory and a refinery ship with tryworks for extracting oil from the whale blubber,
  • to serve as an oil tanker.

In her 80 years of service, she made 37 voyages ranging in length from nine months to five years. Charles W. Morgan, in total, brought home 54,483 barrels of sperm whale oil and 152,934 pounds of whalebone. She sailed in the Indian and South Atlantic Oceans, surviving ice and snow storms. Her crew survived a cannibal attack in the South Pacific. Between 1888 and 1904 she was based in San Francisco.

Morgan had more than 1,000 whalemen of all races and nationalities in her lifetime. Her crew included not only Americans, but sailors from Cape Verde, New Zealand, the Seychelles, Guadeloupe, and Norfolk Island. The ship’s crew averaged around 33 men per voyage. As with other whaleships in the 1800s, Morgan often was home to the captain’s family. The Morgan was owned and managed by the J. & W. R. Wing Company of New Bedford.[3]

During its years of service, Charles W. Morgan was used in several movies, including Miss Petticoats (1916), Java Head (1921) and Down to the Sea in Ships (1922).

On the night of June 30, 1924, the Charles W. Morgan caught fire when the flaming wreck of the steamer Sankaty, which had drifted across the Acushnet River from New Bedford harbor in flames, collided with it. Badly charred, Morgan narrowly escaped destruction.[4][5][6]

One of the Charles W. Morgan’s whaleboats, featuring models of crew members with oars and harpoons.

The whaling days came to an end as petroleum refining was perfected. Morgan was under the care of Whaling Enshrined, Inc. until 1941, when she was transferred to Mystic Seaport, where she remains to this day. The ship is the only surviving wooden whaleship from the 1800s American fleet.[7]

The 1971 U.S. commemorative stamp honoring the Charles W. Morgan

The Charles W. Morgan arrived at Mystic Seaport in December 1941, narrowly avoiding destruction during World War II. The ship was declared a National Historic Landmark and listed on the National Register of Historic Places in 1966.[2][7][8]

An initial restoration and preservation project was undertaken in 1968, and a new restoration effort took place in 2009. Mystic Seaport has recently completed a multi-million dollar shipyard upgrade to accommodate the next phase of Morgan’s restoration, which is currently underway.

In 1971, the United States Postal Service issued a commemorative stamp honoring the Charles W. Morgan.

Laser scanning blends highly accurate measurements with three-dimensional expression of the space and the relationships among the various structural elements. It not only gives us a record of the vessel but also has tremendous value as an educational tool, describing the structure to those not familiar with traditional naval architectural drawings.

Welcome to SCANable, The Ultimate Resource for 3D Laser Scanning and LiDAR

SCANable is an organization committed to providing valuable information and resources related to the 3D Laser Scanning, LiDAR and Mobile Scanning industries in order to increase the awareness of the benefits and capabilities of these technologies. We strive to provide the necessary tools and support to further enhance the state of these industries including on-line laser scanner equipment rental through our Preferred Provider program (Leica, Z+F, FARO, Riegl, Trimble and Topcon).

We are working with our partners across the nation to provide price consolidation and standardization of laser scanning equipment rentals with a commitment to provide the highest quality products and absolute excellence in service. Our goal is to build lasting customer relationships and ultimately, to be the most valuable resource for the entire industry.

SCANable provides:

– The Latest News Related to 3D Laser Scanning, LiDAR and Mobile Scanning
– On-line Equipment Rental Reservations – Nationwide (Leica Geosystems, FARO, Z+F and more)
– Hardware and Software Reviews
– Classified Ads
– Industry Job Openings
– Upcoming Industry Events
– Service Provider Business Directory
– Zebra Imaging Hologram Processing

Our organization will prove to be a valuable resource. However, your input and contributions will help to further enhance the awareness of our industry. If you are interested in increasing the exposure of your company, please fill out the Business Directory form.

If you are interested in contributing articles relative to this site, please register for a free account here. Upon approval, you will have access to this site. If you don’t have time to commit to writing articles for this site, you can also send news tips or article ideas to

Intelisum receives U.S. Patent for GPS-Enhanced Laser Scanning System

On March 30, 2010, Intelisum Inc. received a U.S. patent for a “GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site”. According to the United States Patent and Trademark Office website, the application was originally filed on June 30, 2006. Details on the filing are listed below and the filing can be found here.


1. Field of the Invention

The present invention relates generally to three-dimensional modeling. More specifically, the present invention relates to a system and method for capturing three-dimensional virtual models of a site that can be co-registered and visualized within a computer system.

2. Description of Related Background Art

Lidar (light detection and ranging) uses laser technology to make precise distance measurements over long or short distances. One application of lidar is the range scanner, or scanning lidar. In a typical range scanner, a lidar is mounted on a tripod equipped with a servo mechanism that continuously pans and tilts the lidar to scan a three-dimensional area. During the scanning process, the lidar makes repeated range measurements to objects in its path. The resulting range data may be collected and serve as a rough model of the scanned area.
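The pan/tilt scanning described above maps naturally to spherical coordinates: each measurement (pan angle, tilt angle, range) becomes one Cartesian point in the scanner's local frame. Angle conventions vary by instrument; this sketch assumes pan about the vertical axis and tilt above the horizon, and is purely illustrative:

```python
import math

def to_cartesian(pan_deg, tilt_deg, range_m):
    """Convert one (pan, tilt, range) measurement to a point
    in the scanner's local Cartesian frame."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)   # forward
    y = range_m * math.cos(tilt) * math.sin(pan)   # left
    z = range_m * math.sin(tilt)                   # up
    return (x, y, z)

# A point 10 m away, straight ahead and level, lies on the x-axis:
print(to_cartesian(0.0, 0.0, 10.0))  # → (10.0, 0.0, 0.0)
```

Sweeping pan and tilt through their full ranges while recording range at each step is what produces the "rough model of the scanned area" the patent refers to.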

Physical limitations of the range scanner constrain the maximum resolution of the range data, which decreases with distance from the range scanner. At large distances, the range scanner may not be able to discern surface details of an object. A lack of continuous spatial data (gaps between points) and a lack of color attributes are significant limitations of conventional range scanners. Furthermore, a range scanner only scans objects within the lidar’s line-of-sight. As a result, no data is collected for the side of an object opposite to the lidar or for objects obscured by other objects (“occlusions”).

To obtain a more complete and accurate model, the range scanner can be moved to other scanning locations in order to scan the same area from different perspectives and thereby obtain range data for obscured objects. Thereafter, the resulting sets of range data can be merged into a single model.

Unfortunately, the merging of sets of range data is not automatic. Human decision-making is generally required at several steps in the merging process. For instance, a human surveyor is typically needed to determine the relative distances between the range scanning locations and the scanned area. Furthermore, a human operator must manually identify points in common (“fiducials”) between multiple sets of range data in order to align and merge the sets into a single model. Such identification is by no means easy, particularly in the case of curved surfaces. The need for human decision-making increases the cost of modeling and the likelihood of error in the process.
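The fiducial-matching step described above can be illustrated with a toy 2D example (the real problem is 3D and is typically solved with a least-squares fit such as the Kabsch algorithm): two matched fiducials are enough to fix the rotation and translation that map one scan's coordinates onto the other's. All names here are hypothetical:

```python
import math

def align_from_fiducials(a1, a2, b1, b2):
    """Rigid 2D transform (theta, tx, ty) mapping scan A onto scan B,
    given two fiducials seen as a1, a2 in A and b1, b2 in B."""
    # rotation: difference between the fiducial baseline directions
    theta = (math.atan2(b2[1] - b1[1], b2[0] - b1[0])
             - math.atan2(a2[1] - a1[1], a2[0] - a1[0]))
    c, s = math.cos(theta), math.sin(theta)
    # translation: whatever is left after rotating the first fiducial
    tx = b1[0] - (c * a1[0] - s * a1[1])
    ty = b1[1] - (s * a1[0] + c * a1[1])
    return theta, tx, ty

def apply(t, p):
    theta, tx, ty = t
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

# Scan B is scan A rotated 90 degrees and shifted by (5, 0):
t = align_from_fiducials((0, 0), (1, 0), (5, 0), (5, 1))
print([round(v, 6) for v in apply(t, (2, 0))])  # → [5.0, 2.0]
```

The hard part in practice is exactly what the patent highlights: identifying which points in the two clouds correspond in the first place, which is why a human operator is usually required.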


A system for capturing a virtual model of a site includes a range scanner for scanning the site to generate range data indicating distances from the range scanner to real-world objects. The system also includes a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at a scanning location. In addition, the system includes a communication interface for outputting a virtual model comprising the range data and the GPS data.

The system may further include a transformation module for using the GPS data with orientation information, such as bearing, for the range scanner to automatically transform the range data from a scanning coordinate system to a modeling coordinate system, where the modeling coordinate system is independent of the scanning location. A co-registration module may then combine the transformed range data with a second set of transformed range data for the same site generated at a second scanning location.
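The transformation the patent describes can be sketched as follows: given the scanner's GPS-derived position and its bearing, local scan coordinates are rotated and translated into a site-wide modeling frame that is independent of where the scanner stood. The function names and the flat east/north approximation below are illustrative assumptions, not the patented method itself:

```python
import math

def local_to_model(east, north, bearing_deg, points):
    """Rotate local (x = right, y = forward) points by the scanner's
    bearing (clockwise from north) and translate by its GPS-derived
    position to obtain model-frame east/north coordinates."""
    b = math.radians(bearing_deg)
    out = []
    for x, y in points:
        e = east + x * math.cos(b) + y * math.sin(b)
        n = north - x * math.sin(b) + y * math.cos(b)
        out.append((e, n))
    return out

# A point 10 m in front of a scanner at (100, 200) facing due east:
e, n = local_to_model(100.0, 200.0, 90.0, [(0.0, 10.0)])[0]
print(round(e, 6), round(n, 6))  # → 110.0 200.0
```

Once two scans are expressed in this common frame, co-registration reduces to concatenating the transformed point sets, which is why the GPS data removes the need for a human surveyor to relate the scanning locations.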

The system also includes a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner. The system may associate the digital images of the real-world objects with the corresponding range data in the virtual model.

A system for building a virtual model of a site includes a communication interface for receiving a first set of range data indicating distances from a range scanner at a first location to real-world objects. The communication interface also receives a first set of GPS data for the range scanner at the first location. The system further includes a transformation module for using the first set of GPS data with orientation information for the range scanner to automatically transform the first set of range data from a first local coordinate system to a modeling coordinate system.

A system for modeling an object includes a range scanner for scanning an object from a first vantage point to generate a first range image. The system further includes a GPS receiver for obtaining GPS readings for the first vantage point, as well as a storage medium for associating the first range image and the GPS readings within a first virtual model.

The range scanner may re-scan the object from a second vantage point to generate a second range image. Likewise, the GPS receiver may acquire updated GPS readings for the second vantage point, after which the storage medium associates the second range image and the updated GPS readings within a second virtual model. A transformation module then employs the GPS readings of the virtual models with orientation information for the range scanner at each location to automatically transform the associated range images from local coordinate systems referenced to the vantage points to a single coordinate system independent of the vantage points.