Brian Ussery is reporting that Google is back in Atlanta, GA, making Street View images for Google Maps, but this time they brought in the big guns. Beu Blog reported on April 28, 2010: “The cars here today are equipped with GPS, high resolution panoramic cameras and multiple SICK sensors. These sensors collect LiDAR data that can be used for 3D imaging and visualizations like those seen in Radiohead’s recent “House of Cards” music video. Google Earth and SketchUp, Google’s 3D virtual building maker for Maps, also use this type of data.
Last week Google announced the release of a plugin which allows users access to Google Earth imagery via Maps. As a result it’s now possible to view 3D images in Google Maps. The problem here is fairly obvious: Google Earth’s aerial imagery is taken from above and, as a result, not from the same perspective as users interacting with the data. Not to worry though, the StreetView team has been working on these kinds of problems for some time. When it comes to Navigation, Maps or StreetView, earthbound LiDAR-enhanced imagery processed via SketchUp seems like a perfect complement to Google’s existing view from above. Combining high resolution imagery taken from the user’s perspective with advanced 3D image technology presents some new possibilities, to say the least. Factor in new releases like business ads in Maps, now available in 3D on your mobile device, and it’s pretty clear how SketchUp will be monetized.”
It is expected that Google’s incorporation of LiDAR into their mapping efforts will lead to some significant changes to our industry. If you have not previously seen the “House of Cards” video, be sure to check out the interactive music video code to see how Google made the point cloud data readily available for manipulation in a standard web browser. Point clouds are finally becoming more natively accepted in most CAD platforms and with Google getting involved in the industry, who knows where we will be in the near future.
April 12, 2010 – Houston – Increasing its cadre of laser mapping researchers, the University of Houston will expand its pioneering work in such areas as homeland security, disaster recovery, oil and gas exploration, wind farm site planning and environmental studies.
The NSF National Center for Airborne Laser Mapping (NCALM) and the groundbreaking researcher leading it recently moved operations to the University of Houston. Based upon historical information, revenues generated by the center’s operation are anticipated to be $1 million per year and will be reinvested in the program.
NCALM is UH’s first and only NSF-supported center, established and sustained by funding from the National Science Foundation. This differs from the way the university typically sets up centers, using university funds or grants from multiple sources for multiple projects. These types of centers support NSF’s focus on interdisciplinary research, spanning several institutions and departments.
Ramesh Shrestha, Hugh Roy and Lillie Cranz Cullen Distinguished Professor of Civil and Environmental Engineering, brought NCALM to UH from the University of Florida. He has been director of the center, focused on ground-based scanning laser technology and airborne laser swath mapping research, since it was established in 2003. Shrestha brought much of his Florida team with him to Houston, where they now operate NCALM jointly with the University of California, Berkeley.
“With the center, we have brought laser mapping’s uses to the forefront and expect to continue to have this impact in our new Houston home,” Shrestha said. “We plan to establish a curriculum catering to this specialty and eventually add a graduate degree in geosensing systems engineering. This is in addition to carrying out research far surpassing what has been possible in laser mapping to date.”
Shrestha’s work with laser mapping goes back to the 1990s, when this once niche research area was just making its debut. Bill Carter, now a research professor at UH, worked with him early on and helped establish NCALM.
“Together, we saw its potential to far exceed what was possible with many traditional methods, such as airborne photogrammetric mapping that uses cameras to detail terrain,” Carter said. “Laser mapping has the ability to work day or night, and can generally map areas covered by forests and other vegetation where photogrammetric methods couldn’t.”
It wasn’t long before other scientists saw the same benefits, especially as the two developed techniques to remove or minimize some of the errors seen in the early years. Their equipment has been fine-tuned to collect ever more data, now mapping as many as 167,000 points per second compared to the 3,000 they were able to achieve when they first started.
Their work has changed the way the state of Florida monitors erosion on its coastline, produced the highest-resolution 3-D images in existence of the San Andreas Fault and taken them across the globe to map Mayan ruins in Belize and volcanoes in Hawaii. While the impact of their work is already far reaching, their plans for the coming years indicate they are far from finished. The value of this work is evident in evaluating the before and after of hurricanes and earthquakes in terms of improving building design and other mitigation efforts, as well as offering predictive tools for subsequent powerful events.
Aided by NSF, future NCALM efforts will explore the possibility of using Light Detection and Ranging (LiDAR) to map everything from glacial movements to the migration of penguin colonies in Antarctica. Using LiDAR, researchers take measurements of the ground’s surface from their Cessna 337 Skymaster airplane.
From roughly 2,000 feet, this remote technology measures properties of scattered light through the use of laser pulses. Thousands of small cone-shaped pulses travel through a hole in the bottom of the plane to the ground below, and a unique detector picks up rays reflected from the ground. Then, each point’s distance is determined by measuring the time delay between the transmission of a pulse and the detection of reflected signals. The plane’s location and movement in the air are tracked by an inertial measurement unit fixed inside the laser system with a GPS receiver mounted to the plane and others on the ground. Both are used, along with the laser data, to produce detailed 3-D topographical images of the terrain.
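The time-of-flight calculation described above is simple enough to sketch: the pulse travels to the ground and back, so the one-way distance is half the round-trip delay multiplied by the speed of light. A minimal Python illustration, with a hypothetical delay chosen to roughly match the 2,000-foot flying height mentioned in the article:

```python
# Sketch of the basic LiDAR time-of-flight range calculation.
C = 299_792_458.0  # speed of light in m/s

def pulse_range(time_delay_s: float) -> float:
    """Return the one-way distance (m) for a measured round-trip delay."""
    return C * time_delay_s / 2.0

# A return from about 2,000 ft (~610 m) arrives roughly 4 microseconds
# after the pulse is fired (hypothetical delay for illustration):
print(round(pulse_range(4.07e-6)))  # prints 610
```

In a real airborne system this per-pulse range is then combined with the GPS/inertial trajectory of the aircraft to place each return in ground coordinates, as the paragraph above describes.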
“In coming years, our group plans to develop a next-generation LiDAR system. The unit would be less expensive than commercially available systems and allow for some of the most accurate, highest-resolution observations possible in laser mapping,” Shrestha said. “We want to develop a system like no one else has developed. It would really change what could be done with this technology. It would have new features, be faster, smaller and capture more during each flight than we can today.”
According to Shrestha, this system would use a much shorter pulse-length laser, increasing the number of points that could be mapped per second to 800,000. This would add to data accuracy and reduce the amount of time needed in the air to collect the information. Additionally, it would be able, for the first time, to penetrate shallow water depths.
NOTE TO JOURNALISTS: High-resolution photos of Ramesh Shrestha and the Cessna 337 Skymaster airplane are available to media by contacting Lisa Merkl.
About the University of Houston
The University of Houston, Texas’ premier metropolitan research and teaching institution, is home to more than 40 research centers and institutes and sponsors more than 300 partnerships with corporate, civic and governmental entities. UH, the most diverse research university in the country, stands at the forefront of education, research and service with more than 37,000 students.
About the Cullen College of Engineering
The Cullen College of Engineering at UH has played a vitally important role in educating engineers in Texas. Taught by innovative faculty, eight of whom are in the National Academy of Engineering, the college offers degree programs in biomedical, chemical, civil, computer, electrical, environmental, industrial, mechanical and petroleum engineering, as well as specialty programs in materials, aerospace, and computer and systems engineering.
For more information about UH, visit the university’s Newsroom at http://www.uh.edu/news-events/.
To receive UH science news via e-mail, visit http://www.uh.edu/news-events/mailing-lists/sciencelistserv.php.
SCANable is evaluating this promising new software and will follow up with an in-depth review in the coming days. In the meantime, please feel free to check it out for yourself and leave your input here.
Automatic analysis and interpretation of laser scanning data
- Floor plans, views and sectional views are created at the push of a button.
- Program offers an intuitive user interface (no training required).
- For the first time, results can be processed in almost all CAD programs.
- Visual representations allow direct use and dramatically reduce the time needed for analysis and modeling.
Point-Cab is the first application that automatically creates floor plans and sectional views at the push of a button. The goal of the development team has been to create an extremely user-friendly interface design.
The program creates visual representations of laser scanning data. The results are far more expressive and useful than the individual points of a point cloud. Images and sectional representations are a prerequisite for easy modeling in CAD programs.
The program supports most of the common CAD programs and construction tools through standard interfaces. You may use the results in ArchiCAD, Google SketchUp, Revit and other tools.
Point-Cab product videos
Point-Cab supports several CAD programs, such as:
Point-Cab Layout supports the following laser scanner formats:
- PTX exchange format
- FARO laser scanner
- Riegl laser scanner (available soon)
Examples of use
Point-Cab Layout transfers laser scanning data into AutoCAD (available soon)
Point-Cab transfers laser scanning data into ArchiCAD (available soon)
Point-Cab transfers laser scanning data into Microstation (available soon)
Point-Cab Layout trial version
Find out for yourself why this revolutionary program can make such a difference for you. Click here to download the Point-Cab trial version. The trial version can be used for 15 hours; time is consumed only while the application is in use. Feel free to also download our laser scanning data examples. Please find all data examples here.
Here you can download the processed layout results of the full version of Point-Cab.
Buy Point-Cab Layout
It is great to finally see people accepting the benefits of laser scanning as a means of digital preservation. Below is an excellent article by Stefanie Geisler, Boston Globe Correspondent. Source
The wreck of the British warship that Paul Revere slipped by on his legendary journey to Lexington in 1775 has resurfaced in the shifting sands of Cape Cod, and federal park officials are seizing the moment by having the wreck “digitally preserved,” using three-dimensional imaging technology.
“We know the wreck is going to disappear again under the sand, and it may not resurface again in our lifetimes,” said William P. Burke, the historian at the Cape Cod National Seashore, noting that the last time any part of the HMS Somerset III had been sighted was 37 years ago.
“Somewhere down the road, if someone’s researching the Somerset, or the effects of ocean currents on shipwrecks, or anything like that, they will have this record,” he said. “We’re in the forever business. We’re looking at tomorrow, but we’re also looking ahead indefinitely.”
The Somerset fought in the American Revolution and had a crew of more than 400. In 1775, Paul Revere slipped through Boston Harbor past the ship before beginning his ride to warn the colonials the British were on the move. In his poem “Paul Revere’s Ride,” Henry Wadsworth Longfellow called it “a phantom ship, with each mast and spar/Across the moon like a prison bar.” The ship sank on Nov. 2, 1778, off the Cape.
After erosion from recent storms, about a dozen of the Somerset’s timbers were found poking through the wet sand at low tide in the national seashore in Provincetown. Park officials called on Harry R. Feldman Inc., a land surveying company from Boston, to make the three-dimensional rendering.
On Thursday, crews set up survey markers and a laser scanning instrument, said Michael Feldman, the company’s president.
The instrument was placed near the timbers, Feldman said. Using the scanner, the surveyors collect millions of data points that are used to create the three-dimensional rendering.
“The great thing about this technology is it not only shows a three-dimensional picture or video of what’s there, it also obtains data down to quarter-inch accuracy,” Feldman said.
It could take several visits to the site to complete the imaging. But when it’s done, the national seashore will have an animated fly-through of the wreck site — and anyone interested in seeing it won’t have to wait for the timbers to reappear.
The imaging will only capture the timbers that are showing. The rest of the wreck, which is buried in sand, might deteriorate if the site were excavated, Burke said.
Most of the crew survived when the ship sank, but they didn’t get a warm welcome when they reached the shore, he said.
“They were pretty upset with them, because the British had been blockading Provincetown for a long time during the war,” Burke said. “They marched all the survivors off the Cape, and eventually exchanged them for American prisoners.”
On March 30, 2010, Intelisum Inc. received a U.S. patent for a “GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site.” According to the United States Patent and Trademark Office website, the application was originally filed on June 30, 2006. Details on the filing are listed below and the filing can be found here.
1. Field of the Invention
The present invention relates generally to three-dimensional modeling. More specifically, the present invention relates to a system and method for capturing three-dimensional virtual models of a site that can be co-registered and visualized within a computer system.
2. Description of Related Background Art
Lidar (light detection and ranging) uses laser technology to make precise distance measurements over long or short distances. One application of lidar is the range scanner, or scanning lidar. In a typical range scanner, a lidar is mounted on a tripod equipped with a servo mechanism that continuously pans and tilts the lidar to scan a three-dimensional area. During the scanning process, the lidar makes repeated range measurements to objects in its path. The resulting range data may be collected and serve as a rough model of the scanned area.
Physical limitations of the range scanner constrain the maximum resolution of the range data, which decreases with distance from the range scanner. At large distances, the range scanner may not be able to discern surface details of an object. A lack of continuous spatial data (gaps between points) and a lack of color attributes are significant limitations of conventional range scanners. Furthermore, a range scanner only scans objects within the lidar’s line-of-sight. As a result, no data is collected for the side of an object opposite to the lidar or for objects obscured by other objects (“occlusions”).
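The falloff in resolution with distance described above follows directly from the scanner's fixed angular step: the ground spacing between adjacent samples grows linearly with range. A minimal sketch (the 0.05° step is an illustrative figure, not taken from the patent):

```python
import math

def point_spacing(range_m: float, angular_step_deg: float) -> float:
    """Approximate spacing between adjacent samples at a given range,
    using the small-angle approximation (arc length = range * angle)."""
    return range_m * math.radians(angular_step_deg)

# The same angular step yields much coarser sampling farther away:
print(point_spacing(10.0, 0.05))   # ~0.0087 m at 10 m
print(point_spacing(200.0, 0.05))  # ~0.17 m at 200 m
```

At 200 m the samples land many centimeters apart, which is why a conventional range scanner cannot discern fine surface detail on distant objects.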
To obtain a more complete and accurate model, the range scanner can be moved to other scanning locations in order to scan the same area from different perspectives and thereby obtain range data for obscured objects. Thereafter, the resulting sets of range data can be merged into a single model.
Unfortunately, the merging of sets of range data is not automatic. Human decision-making is generally required at several steps in the merging process. For instance, a human surveyor is typically needed to determine the relative distances between the range scanning locations and the scanned area. Furthermore, a human operator must manually identify points in common (“fiducials”) between multiple sets of range data in order to align and merge the sets into a single model. Such identification is by no means easy, particularly in the case of curved surfaces. The need for human decision-making increases the cost of modeling and the likelihood of error in the process.
A system for capturing a virtual model of a site includes a range scanner for scanning the site to generate range data indicating distances from the range scanner to real-world objects. The system also includes a global positioning system (GPS) receiver coupled to the range scanner for acquiring GPS data for the range scanner at a scanning location. In addition, the system includes a communication interface for outputting a virtual model comprising the range data and the GPS data.
The system may further include a transformation module for using the GPS data with orientation information, such as bearing, for the range scanner to automatically transform the range data from a scanning coordinate system to a modeling coordinate system, where the modeling coordinate system is independent of the scanning location. A co-registration module may then combine the transformed range data with a second set of transformed range data for the same site generated at a second scanning location.
The system also includes a digital camera coupled to the range scanner for obtaining digital images of the real-world objects scanned by the range scanner. The system may associate the digital images of the real-world objects with the corresponding range data in the virtual model.
A system for building a virtual model of a site includes a communication interface for receiving a first set of range data indicating distances from a range scanner at a first location to real-world objects. The communication interface also receives a first set of GPS data for the range scanner at the first location. The system further includes a transformation module for using the first set of GPS data with orientation information for the range scanner to automatically transform the first set of range data from a first local coordinate system to a modeling coordinate system.
A system for modeling an object includes a range scanner for scanning an object from a first vantage point to generate a first range image. The system further includes a GPS receiver for obtaining GPS readings for the first vantage point, as well as a storage medium for associating the first range image and the GPS readings within a first virtual model.
The range scanner may re-scan the object from a second vantage point to generate a second range image. Likewise, the GPS receiver may acquire updated GPS readings for the second vantage point, after which the storage medium associates the second range image and the updated GPS readings within a second virtual model. A transformation module then employs the GPS readings of the virtual models with orientation information for the range scanner at each location to automatically transform the associated range images from local coordinate systems referenced to the vantage points to a single coordinate system independent of the vantage points.
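The transformation the patent describes, rotating each scan's local coordinates by the scanner's bearing and translating by its GPS-derived position, can be sketched in two dimensions. The coordinates and bearings below are hypothetical; the point of the example is that two scans of the same feature, taken from different vantage points, coincide after transformation into the shared modeling frame:

```python
import math

def to_model_frame(local_pts, bearing_deg, origin_xy):
    """Rotate local (x, y) scan points by the scanner's bearing and
    translate by its surveyed (GPS-derived) position, yielding points
    in one site-wide modeling coordinate system."""
    theta = math.radians(bearing_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    ox, oy = origin_xy
    return [(ox + x * cos_t - y * sin_t,
             oy + x * sin_t + y * cos_t) for x, y in local_pts]

# The same corner point seen from two vantage points with different
# bearings lands on the same model coordinate after transformation:
scan_a = to_model_frame([(10.0, 0.0)], 90.0, (100.0, 200.0))
scan_b = to_model_frame([(10.0, 0.0)], 0.0, (90.0, 210.0))
print(scan_a, scan_b)  # both ~ (100.0, 210.0)
```

This is the step that removes the need for a human surveyor: because GPS and orientation fix each scan's pose directly, the sets of range data can be merged without manually identifying fiducials.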
SAN RAFAEL, Calif.–(BUSINESS WIRE)–Autodesk, Inc. (NASDAQ:ADSK) announced the availability of the 2011 AutoCAD software products, including AutoCAD 2011 software, a leading 2D and 3D design and documentation platform, and AutoCAD LT 2011 software for professional 2D drafting and detailing. The latest releases of AutoCAD deliver powerful new features — such as new tools for surface modeling and transparency for objects and layers — that can help designers explore their ideas and maximize productivity. The 2011 AutoCAD products are Microsoft Windows 7 certified and are compatible with and supported on Windows 7 Home Premium, Professional, Enterprise, and Ultimate as well as Windows Vista and Windows XP operating systems.
“In the 2011 releases we have continued to invest in increasing drafting productivity and have added a strong set of new 3D modeling features for conceptual design that will help millions of AutoCAD users worldwide take their designs further,” said Guri Stark, vice president, AutoCAD and Platform Products. “We have also implemented many of the top features requested by Autodesk User Group International (AUGI) members and focused on providing new tools that are quick to learn but can have a big impact in everyday work.”
AutoCAD 2011 gives designers more advanced conceptual design tools as well as increased flexibility and control when designing in 3D:
- New surface modeling tools enable users to easily create smooth surfaces and surface transitions, while associativity maintains relationships between all of the objects.
- Point cloud support for up to two billion points enables users to quickly visualize scanned objects directly within the modeling workspace.
- Inferred constraints enable designers to define constraints as they draw.
- Hatch command enhancements bring improved drafting efficiency, while new gradient hatch patterns enable users to add more colors and shading to drawings.
- TimeSaver tools, previously available only to customers on Autodesk Subscription, are now available to all AutoCAD users.
AutoCAD LT 2011 builds on its reputation for productivity with new commands that make everyday tasks more efficient. In addition to the hatch command enhancements and TimeSaver Tools found in AutoCAD 2011, AutoCAD LT 2011 adds new tools that give users additional options for controlling the appearance of drawings:
- Transparency for objects and layers provides new options for managing the appearance of drawings and communicating design intent.
- New multifunctional polyline grips make editing polylines significantly faster and easier.
- The ability to create or select similar objects based on the properties of existing objects helps users save time when drawing and editing geometry.
Industry Solutions for the AutoCAD 2011 Products
The updated AutoCAD 2011 software portfolio includes the following industry-specific applications built on the AutoCAD platform:
- AutoCAD Architecture 2011 software for efficient architectural drafting and documentation has new geometric and dimensional constraints and renovation tools to help accelerate design.
- AutoCAD Electrical 2011 software helps electrical controls designers to quickly create control system designs and easily access extensive catalog information for large electrical controls projects.
- AutoCAD Mechanical 2011 software’s streamlined design environment gives users vastly improved access to power dimensioning functionality, which automatically aligns part dimensions with the rest of the drawing properties, without ever opening a dialog box.
- AutoCAD MEP 2011 software provides greater drafting productivity for mechanical, electrical and plumbing (MEP) designers and drafters and has new features for creating and storing AutoCAD block names, sloped piping and parallel conduit routing.
Product availability varies by country. Details and purchasing options are available at www.autodesk.com/purchaseoptions.
Autodesk, Inc., is a world leader in 2D and 3D design, engineering and entertainment software for the manufacturing, building and construction, and media and entertainment markets. Since its introduction of AutoCAD software in 1982, Autodesk continues to develop the broadest portfolio of state-of-the-art software to help customers experience their ideas digitally before they are built. Fortune 100 companies — as well as the last 15 Academy Award winners for Best Visual Effects — use Autodesk software tools to design, visualize and simulate their ideas to save time and money, enhance quality and foster innovation for competitive advantage. For additional information about Autodesk, visit www.autodesk.com.
Autodesk, AutoCAD and AutoCAD LT are registered trademarks or trademarks of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries. Academy Award is a registered trademark of the Academy of Motion Picture Arts and Sciences. All other brand names, product names or trademarks belong to their respective holders. Autodesk reserves the right to alter product and services offerings, and specifications and pricing at any time without notice, and is not responsible for typographical or graphical errors that may appear in this document.
© 2010 Autodesk, Inc. All rights reserved.
HOUSTON, TX, March 18, 2010 — Coign Asset Metrics & Technologies (CoignAMT), at the direction of Zoo Film Productions of Hollywood, CA, has helped produce IBM’s first globally released television commercial created entirely from Light Detection and Ranging (LiDAR) 3D point cloud data.
CoignAMT used the HDS6100 phase-based laser scanner from Leica Geosystems to create scenes of cars on a freeway, patients in a hospital, electricity grids and much more. The 30-second LiDAR-based commercial is a key part of IBM’s Smarter Planet initiative to portray that data is all around; and that by changing the way the world thinks, companies can maximize the use of data to lower their costs and reduce environmental impact.
Travis Reinke, business sector manager for CoignAMT, says, “Coincidentally, IBM’s perspective is a core part of CoignAMT’s business practice. We help our clients see the long-term value of the data they currently have by using the latest technology, such as 3D laser scanning, to quickly gather an immense amount of data to support their existing ‘intelligent’ systems.”
The Zoo Film Productions crew spent a week in Houston, TX with CoignAMT personnel capturing laser scan data of transmission lines and over 16 blocks of downtown Houston streetscape and surrounding buildings. CoignAMT then merged the point cloud data gathered downtown with 6 square miles of low-altitude helicopter-based LiDAR provided by Aerotec LLC out of Birmingham, AL. Zoo Film Productions crews also created numerous scenes of hospital activities as well as automobiles with and without drivers that CoignAMT scanned individually for use in the commercial.
Reinke continues, “We were honored to be part of this innovative project given the direct correlation between the services CoignAMT provides and IBM’s Smarter Planet initiatives. Using the latest laser scanning technology to visually portray the importance of the data surrounding us was an unforeseen irony. I would never have imagined that we would be using this technology to scan people and cars, objects that are often considered ‘noise’ on a typical inventory project.”
View IBM “Data Anthem” at 848×480: http://www.glossyinc.com/zoo/ibmdataanthem.html
Full credits and a selection of stills: http://www.glossyinc.com/ibmdacred.html
Follow CoignAMT on Twitter: http://www.twitter.com/CoignAMT
Follow Travis Reinke on Twitter: http://www.twitter.com/HDLS
# # #
About Coign Asset Metrics & Technologies LLC:
Coign Asset Metrics & Technologies, LLC (CoignAMT) is a HUBZone qualified, SBA certified, small business that provides a full range of asset management life cycle services and specialized technologies, including high definition laser scanning (HDLS). Its mission is to strengthen public and private sector organizations by aligning their assets and operational practices with their strategic initiatives. CoignAMT is headquartered in the Pittsburgh, PA area, with regional offices located in Colorado Springs, CO, and Houston, TX. Clients include federal, state, and local governments, as well as private sector customers in the construction, energy, transportation, manufacturing, and security industries.
ALBUQUERQUE, N.M., March 8, 2010 — The Boeing Company [NYSE: BA] today announced it has begun offering a new, compact, energy-efficient camera that provides three-dimensional images for military and commercial applications.
Boeing Directed Energy Systems and wholly owned Boeing subsidiary Spectrolab have jointly developed the camera using their own research and development funding, and successfully tested it over the past two years by attaching it to mobile ground platforms and a Boeing AH-6 Little Bird helicopter. Equipped with advanced sensors that were developed by the Massachusetts Institute of Technology’s Lincoln Laboratory and transferred to Boeing under a teaming arrangement, the cube-shaped camera is one-third the size and uses one-tenth the power of most comparable 3-D imaging cameras.
“Our three-dimensional camera fits a lot of capability into a small package,” said Nasser Karam, vice president of Advanced Technology Products at Spectrolab. “Its compact design and modest power needs will allow it to be deployed on a wide range of platforms, including unmanned aerial and ground vehicles that don’t have much room or power to spare.”
The camera, which Boeing can customize for each customer, has many potential uses, including mapping terrain, tracking targets and seeing through foliage. To create a 3-D image, the camera fires a short pulse of laser light, then measures the pulse’s flight time to determine how far away each part of the camera’s field of view is.
“The camera combines cutting-edge sensor technology with Boeing’s advanced pointing and tracking solutions and real-time processing to provide our customers with highly integrated 3-D imaging payloads for ground, airborne or space-based applications,” said Joseph Paranto, Growth lead for Directed Energy Systems in Albuquerque.
Boeing is currently integrating the camera into compact 3-D imaging payloads on unmanned aerial vehicles and will be testing that capability this spring. The team will also add 3-D video capability to the camera soon to complement its existing still-image capability.
A unit of The Boeing Company, Boeing Defense, Space & Security is one of the world’s largest defense, space and security businesses specializing in innovative and capabilities-driven customer solutions, and the world’s largest and most versatile manufacturer of military aircraft. Headquartered in St. Louis, Boeing Defense, Space & Security is a $34 billion business with 68,000 employees worldwide.
Presented by: Executives from Spar Point research, Pointools, and Bentley
Summary: On October 14, 2009, Bentley announced that it had signed a “Continuous Technology Transfer Agreement” to incorporate Pointools’ Vortex Engine into the Bentley Technology Platform to enable reuse of 3D laser scanned data. This webinar will include three different perspectives on this agreement. To see the future of 3D data integration and platform interoperability, you have to attend this webinar.