Velodyne LiDAR Announces Puck Hi-Res™ LiDAR Sensor, Offering Higher Resolution to Identify Objects at Greater Distances

Industry-leading, real-time LiDAR sensor impacts autonomous vehicle, 3D mapping and surveillance industries with significantly higher resolution of 3D images

MORGAN HILL, Calif.–(BUSINESS WIRE)–Velodyne LiDAR Inc., the recognized global leader in Light Detection and Ranging (LiDAR) technology, today unveiled its new Puck Hi-Res™ sensor, a version of the company’s groundbreaking LiDAR Puck that provides higher resolution in captured 3D images, allowing objects to be identified at greater distances. Puck Hi-Res is the third new LiDAR sensor released by the company this year, joining the standard VLP-16 Puck™ and the Puck LITE™.

“Introducing a high-resolution LiDAR solution is essential to advancing any industry that leverages the capture of 3D images, from autonomous navigation to mapping to surveillance,” said Mike Jellen, President and COO, Velodyne LiDAR. “The Puck Hi-Res sensor will provide the most detailed 3D views possible from LiDAR, enabling widespread adoption of this technology while increasing safety and reliability.”

Expanding on Velodyne LiDAR’s groundbreaking VLP-16 Puck, a 16-channel, real-time 3D LiDAR sensor that weighs just 830 grams, Puck Hi-Res is designed for applications that require greater resolution in the captured 3D image. Puck Hi-Res retains the VLP-16 Puck’s 360° horizontal field-of-view (FoV) and 100-meter range, but concentrates its 16 channels into a 20° vertical FoV for a tighter channel distribution – 1.33° between channels instead of 2.00° – yielding greater detail in the 3D image at longer ranges. This enables the host system to not only detect, but also better discern, objects at these greater distances.
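
The channel-spacing figures follow directly from the geometry: 16 channels spread over a 20° vertical FoV give 20°/15 ≈ 1.33° steps, versus 30°/15 = 2.00° on the standard Puck. A quick back-of-envelope sketch in Python (ignoring beam divergence) shows what that tighter spacing buys at the sensor's 100-meter range:

```python
import math

def channel_spacing(v_fov_deg: float, channels: int, range_m: float) -> float:
    """Approximate vertical gap between adjacent laser channels at a given range."""
    step_deg = v_fov_deg / (channels - 1)          # angular spacing between beams
    return 2 * range_m * math.tan(math.radians(step_deg) / 2)

# Standard VLP-16 Puck: 30 deg vertical FoV over 16 channels -> 2.00 deg steps
# Puck Hi-Res:          20 deg vertical FoV over 16 channels -> 1.33 deg steps
for name, fov in [("VLP-16 Puck", 30.0), ("Puck Hi-Res", 20.0)]:
    print(f"{name}: {fov / 15:.2f} deg steps, "
          f"{channel_spacing(fov, 16, 100.0):.2f} m between beams at 100 m")
```

At 100 meters the vertical gap between beams shrinks from roughly 3.5 m to about 2.3 m, which is what lets the host system better discern distant objects.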

“Building on the VLP-16 Puck and the Puck LITE, the Puck Hi-Res was an intuitive next step for us, as the evolution of the various industries that rely on LiDAR showed the need for higher resolution 3D imaging,” said Wayne Seto, product line manager, Velodyne LiDAR. “Not only does the Puck Hi-Res provide greater detail in longer ranges, but it retains all the functions of the original VLP-16 Puck that shook up these industries when it was introduced in September 2014.”

“The 3D imaging market is expected to grow from $5.71B in 2015 to $15.15B in 2020, led by the development of autonomous shuttles for large campuses, airports, and basically anywhere there’s a need to safely move people and cargo,” said Dr. Rajender Thusu, Industry Principal for Sensors & Instruments, Frost & Sullivan. “We expect Velodyne LiDAR’s line of sensors to play a key role in this surge in autonomous vehicle development, as the company leads the way in partnerships with key industry drivers, along with the fact that sensors like the new Puck Hi-Res are substantially more sophisticated than competitive offerings and increasingly accessible to all industry players.”

Velodyne LiDAR is now accepting orders for Puck Hi-Res, with a lead-time of approximately eight weeks.

About Velodyne LiDAR

Founded in 1983 by David S. Hall, Velodyne Acoustics Inc. first disrupted the premium audio market through Hall’s patented invention of virtually distortion-less, servo-driven subwoofers. Hall subsequently leveraged his knowledge of robotics and 3D visualization systems to invent ground breaking sensor technology for self-driving cars and 3D mapping, introducing the HDL-64 Solid-State Hybrid LiDAR sensor in 2005. Since then, Velodyne LiDAR has emerged as the leading supplier of solid-state hybrid LiDAR sensor technology used in a variety of commercial applications including advanced automotive safety systems, autonomous driving, 3D mobile mapping, 3D aerial mapping and security. The compact, lightweight HDL-32E sensor is available for applications including UAVs, while the VLP-16 LiDAR Puck is a 16-channel LiDAR sensor that is both substantially smaller and dramatically less expensive than previous generation sensors. To read more about the technology, including white papers, visit http://www.velodynelidar.com.

Contacts

Velodyne LiDAR
Laurel Nissen
lnissen@velodyne.com
or
Porter Novelli/Voce
Andrew Hussey
Andrew.hussey@porternovelli.com

Hennessy Launches “Harmony. Mastered from Chaos.” Interactive Campaign using LiDAR Scans

NEW YORK, June 30, 2016 /PRNewswire/ — Hennessy, the world’s #1 Cognac, today announced “Harmony. Mastered from Chaos.” – a dynamic new campaign that brings to life the multitude of complex variables that are artfully and expertly mastered by human touch to create the brand’s most harmonious blend, V.S.O.P Privilège. Set to launch June 30th, the campaign showcases the absolute mastery exuded at every stage of crafting this blend. This first V.S.O.P Privilège campaign in over ten years also offers a glimpse into the inner workings of Hennessy’s mysterious Comité de Dégustation (Tasting Committee) – perhaps the ideal example of Hennessy’s mastery – that crafts the same rich, high-quality liquid year over year. Narrated by Leslie Odom, Jr., the campaign features 60-, 30- and 15-second digital spots and an interactive digital experience, adding another vivid chapter to the brand’s “Never stop. Never settle.” platform.

“Sharing the intriguing story of the Hennessy Tasting Committee, its exacting practices and long standing rituals, illustrates the crucial role that over 250 years of tradition and excellence play in mastering this well-structured spirit,” said Giles Woodyer, Senior Vice President, Hennessy US. “With more and more people discovering Cognac and seeking out the heritage behind brands, we knew it was the right time to launch the first significant marketing campaign for V.S.O.P Privilège.”

Hennessy’s Comité de Dégustation is a group of seven masters, including seventh generation Master Blender, Yann Fillioux, unparalleled in the world of Cognac. These architects of time oversee the eaux-de-vie to ensure that every bottle of V.S.O.P Privilège is perfectly balanced despite the many intricate variables present during creation of the Cognac. From daily tastings at exactly 11am in the Grand Bureau (whose doors never open to the public) to annual tastings of the entire library of Hennessy eaux-de-vie (one of the largest and oldest in the world), this august body meticulously safeguards the future of Hennessy, its continuity and legacy.

Through a perfectly orchestrated phalanx marked by an abundance of tradition, caring and human touch, V.S.O.P Privilège is created as a complete and harmonious blend: the definitive expression of a perfectly balanced Cognac. Based on a selection of firmly structured eaux-de-vie, aged largely in partially used barrels in order to take on subtle levels of oak tannins, this highly characterful Cognac reveals balanced aromas of fresh vanilla, cinnamon and toasty notes, all coming together with a seamless perfection.

“Harmony. Mastered from Chaos.”
In partnership with Droga5, the film and interactive experience were directed by Ben Tricklebank of Tool of North America, and Active Theory, a Los Angeles-based interactive studio. From the vineyards in Cognac, France, to the distillery and Cognac cellars, viewers are taken on a powerful and modern cinematic journey to experience the scrupulous process of crafting Hennessy V.S.O.P Privilège. The multidimensional campaign uses a combination of live-action footage and technology, including 3D lidar scanning, depth capture provided by SCANable, and binaural recording to visualize the juxtaposition of complexity versus mastery that is critical to the Hennessy V.S.O.P Privilège Cognac-making process.

“Harmony. Mastered from Chaos.” will be supported by a fully integrated marketing campaign including consumer events, retail tastings, social and PR initiatives. Consumers will be able to further engage with the brand through the first annual “Cognac Classics Week” hosted by Liquor.com, taking place July 11-18 to demonstrate the harmony that V.S.O.P Privilège adds to classic cocktails. Kicking off on Bastille Day in a nod to Hennessy’s French heritage, mixologists across New York City, Chicago, and Los Angeles will offer new twists on classics such as the French 75, Sidecar, and Sazerac, all crafted with the perfectly balanced V.S.O.P Privilège.

For more information on Cognac Classics Week, including a list of participating bars and upcoming events, visit www.Liquor.com/TBD and follow the hashtag #CognacClassicsWeek.

To learn more about “Harmony. Mastered from Chaos.” visit Hennessy.com or Facebook.com/Hennessy.

ABOUT HENNESSY
In 2015, the Maison Hennessy celebrated 250 years of an exceptional adventure that has lasted for seven generations and spanned five continents.

It began in the French region of Cognac, the seat from which the Maison has constantly passed down the best the land has to give, from one generation to the next. In particular, such longevity is thanks to those people, past and present, who have ensured Hennessy’s success both locally and around the world. Hennessy’s success and longevity are also the result of the values the Maison has upheld since its creation: unique savoir-faire, a constant quest for innovation, and an unwavering commitment to Creation, Excellence, Legacy, and Sustainable Development. Today, these qualities are the hallmark of a House – a crown jewel in the LVMH Group – that crafts the most iconic, prestigious Cognacs in the world.

Hennessy is imported and distributed in the U.S. by Moët Hennessy USA. Hennessy distills, ages and blends a full range of Cognacs: Hennessy V.S, Hennessy Black, V.S.O.P Privilège, X.O, Paradis, Paradis Impérial and Richard Hennessy. For more information and where to purchase/engrave, please visit Hennessy.com.

Video – https://youtu.be/vp5e8YV0pjc
Photo – http://photos.prnewswire.com/prnh/20160629/385105
Photo – http://photos.prnewswire.com/prnh/20160629/385106

SOURCE Hennessy

Leica Pegasus Backpack Wearable Reality Capture – Indoors, Outdoors, Anywhere

Ultra mobile reality capture sensor platform – authoritative professional documentation indoors or outdoors

The Leica Pegasus:Backpack is a unique wearable reality-capture sensor platform, combining cameras and LiDAR profilers with the lightness of a carbon-fiber chassis and a highly ergonomic design. The Pegasus:Backpack enables extensive and efficient indoor or outdoor documentation at a level of accuracy that is authoritative and professional. It is designed for rapid and regular reality capture – separate scan registration is no longer needed for progressive scanning. The Pegasus:Backpack is also completely portable, small enough to be checked in as luggage on a flight: simply fly in, wear, collect, then fly out. As part of the Pegasus platform, the Pegasus:Backpack is designed to act as a sensor platform with our standard external trigger and sync port outputs.

Map indoors, outdoors, underground, anywhere
The Leica Pegasus:Backpack makes progressive, professional BIM documentation a reality by synchronising imagery and point-cloud data together, assuring complete documentation of a building for full life-cycle management. By using SLAM (Simultaneous Localisation and Mapping) technology and a high-precision IMU, the system maintains accurate positioning even during GNSS outages – ensuring the best known position independent of how it is used.
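
Leica does not publish its fusion algorithm; as a deliberately simplified illustration of why an IMU can bridge GNSS outages (and why drift grows with outage length, hence the hardware spec below quoting accuracy "after 10 seconds of outage"), here is a toy dead-reckoning update step in Python. All names and constants are illustrative:

```python
import numpy as np

def fuse_step(position, velocity, accel, gnss_fix, dt, alpha=0.98):
    """One toy fusion update: dead-reckon on the IMU, blend in GNSS when it exists.

    position/velocity/accel are 3-vectors; gnss_fix is None during an outage.
    alpha controls how much we trust dead reckoning versus a fresh GNSS fix.
    """
    velocity = velocity + accel * dt                  # integrate acceleration
    position = position + velocity * dt               # integrate velocity
    if gnss_fix is not None:                          # GNSS available: correct drift
        position = alpha * position + (1 - alpha) * np.asarray(gnss_fix)
    return position, velocity

# During an outage the correction branch never runs, so IMU noise accumulates
# as drift; SLAM constraints (matching the scan against itself) bound that drift.
```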

With the Leica Pegasus:Backpack, professional data collection is no longer limited in outdoor areas or underground infrastructures with restricted access. Capturing a full 360° spherical view and LiDAR together means you never miss an object or need to return to a project site – no matter where you are. A hardware light sensor assures the operator that all images are usable, while other functions are verifiable and adjustable from the operator’s tablet device.


Main features

  • Indoor and outdoor mapping in one single solution – position agnostic
  • Marries imagery and point cloud data into a single calibrated, user-intuitive platform
  • Full calibrated spherical view
  • External trigger output and external time stamping for additional sensors
  • Light sensor for auto brightness and balance control for image capture
  • Software enables access to Esri® ArcGIS for Desktop
  • Capture and edit 3D spatial objects from images and / or within the point cloud
  • Economical with data – balances data quantity and quality, with project logistics and post-processing
  • Ultra light weight carbon fiber core frame with an extensive ergonomic support for prolonged use
  • Real time view of the captured data through the tablet device
  • Up to 6 hours operational time with optional battery pack

Hardware features

  • Two profilers with 600,000 pts/sec, 50 m usable range and 16 channels
  • Largest sensor pixel size on the market – 5.5 µm x 5.5 µm
  • Five 4 MP cameras positioned to capture 360° x 200° view
  • User adjustable acquisition intervals based on the distance travelled
  • NovAtel ProPak6™ provides the latest and most sophisticated precise GNSS receiver with a robust field proven IMU for position accuracy of 20 mm RMS after 10 seconds of outage
  • Marrying a triple band GNSS system with the latest multiple beam enabled SLAM algorithms
  • INS determination of location, velocity and orientation at a rate of 200 Hz
  • Ultra portable system fitting into one carrying case (system weight 13 kg)
  • Battery based – using four batteries in a hot swappable configuration
  • Multi-core industrial PC, 1 TB SSD, USB3 interface, ethernet, and wireless connection from the system to the tablet device

Leica Pegasus:Backpack Indoor mapping solution

Leica Pegasus:Backpack enables previously unimaginable applications for indoor and outdoor mapping, combining visual images with the accuracy of a point cloud for professional documentation – in a wearable, ergonomic, and ultra-light carbon-fiber construction.

Software features

  • User capable of adding acquisition point objects in a Shapefile format during data acquisition
  • Advanced export capability for CAD-systems and others (DWG, DXF, SHP, GDB, DGN, E57, HPC, LAS, PTS, NMEA, KMZ)
  • Semi-automatic extraction tools
  • Sequenced images and videos for rapid navigation and object recognition
  • Software pointer “snaps” automatically and continuously onto the point cloud data from within an image
  • Immediate access to point clouds for accurate measurement
  • 3D stereoscopic view to decrease errors and increase throughput
  • Shadowed or missing 3D points can be acquired via photogrammetric processes
  • Data capture module displays the current location based on a GIS user interface
  • Data capture module displays all cameras and Lidar scans live, simultaneously
  • Data capture module enables laser scanner management and GNSS Operation
  • Live status monitoring of system during data acquisition

Software benefits

  • Lidar accuracy with image-based usability
  • Digitise spatial objects through mobile mapping
  • A more natural approach for non-professional users while offering technical interface for advanced users
  • Scalable to your applications, including simpler, less accuracy-critical GIS needs
  • Short data acquisition time
  • High acquisition throughput
  • High post-processing throughput
  • Manageable license options – compatible with thin-client viewer
  • Esri® ArcGIS for Desktop compatible
  • Leverages Esri® relational platform for advanced features

Andersson Technologies releases SynthEyes 1502 3D Tracking Software

Andersson Technologies has released SynthEyes 1502, the latest version of its 3D tracking software, improving compatibility with Blackmagic Design’s Fusion compositing software.

Reflecting the renewed interest in Fusion
According to the official announcement: “Blackmagic Design’s recent decision to make Fusion 7 free of charge has led to increased interest in that package. While SynthEyes has exported to Fusion for many years now — for projects such as Battlestar Galactica — Andersson Technologies LLC upgraded SynthEyes’s Fusion export.”

Accordingly, the legacy Fusion exporter now supports 3D planar trackers; primitive, imported, or tracker-built meshes; imported or extracted textures; multiple cameras; and lens distortion via image maps.

The new lens distortion feature should make it possible to reproduce the distortion patterns of any real-world lens without its properties having been coded explicitly in the software or a custom plugin.
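
SynthEyes' exact map format isn't spelled out in the announcement, so purely as an illustration: the common compositing convention is an STMap-style image whose red and green channels encode normalized source coordinates, which a compositor (or a few lines of OpenCV) can apply as a warp:

```python
import cv2
import numpy as np

def apply_distortion_map(image: np.ndarray, stmap: np.ndarray) -> np.ndarray:
    """Warp `image` through an STMap: R/G channels hold normalized source coords."""
    h, w = image.shape[:2]
    map_x = (stmap[..., 0] * (w - 1)).astype(np.float32)  # red   -> source x
    map_y = (stmap[..., 1] * (h - 1)).astype(np.float32)  # green -> source y
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# An identity STMap leaves the image unchanged; a lens exporter would bake the
# measured distortion model into these two channels instead.
img = np.random.rand(480, 640, 3).astype(np.float32)
gx, gy = np.meshgrid(np.linspace(0, 1, 640), np.linspace(0, 1, 480))
identity = np.dstack([gx, gy]).astype(np.float32)
assert np.allclose(apply_distortion_map(img, identity), img, atol=1e-3)
```

Because the distortion lives entirely in the map image, any lens can be reproduced downstream without the compositor knowing the lens model itself.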

A new second exporter creates corner pin nodes in Fusion from 2D or 3D planar trackers in SynthEyes.

Other new features in SynthEyes 1502 include an error curve mini-view, a DNG/CinemaDNG file reader, and a refresh of the user interface, including the option to turn toolbar icons on or off.

Pricing and availability
SynthEyes 1502 is available now for Windows, Linux and Mac OS X. New licences cost from $249 to $999, depending on which edition you buy. The new version is free to registered users.

New features in SynthEyes 1502 include:

  • Toolbar icons are back! Some love ’em, some hate ’em. Have it your way: set the preference. Shows both text and icon by default to make it easiest, especially for new users with older tutorials. Some new and improved icons.
  • Refresh of user interface color preferences to a somewhat darker and trendier look. Other minor appearance tweaks.
  • New error curve mini-view.
  • Updated Fusion 3D exporter now exports all cameras, 3D planars, all meshes (including imported), lens distortion via image maps, etc.
  • New Fusion 2D corner pinning exporter.
  • Lens distortion export via color maps, currently for Fusion (Nuke for testing).
  • During offset tracking, a tracker can be (repeatedly) shift-dragged to different reference patterns on any frame, and SynthEyes will automatically adjust the offset channel keying.
  • Rotopanel’s Import tracker to CP (control point) now asks whether you want to import the relative motion or absolute position.
  • DNG/CinemaDNG reading. Marginal utility: DNG requires much proprietary postprocessing to get usable images, despite new luma and chroma blur settings in the image preprocessor.
  • New script to “Reparent meshes to active host” (without moving them)
  • New section in the user manual on “Realistic Compositing for 3-D”
  • New tutorials on offset tracking and Fusion.
  • Upgraded to RED 5.3 SDK (includes REDcolor4, DRAGONcolor2).
  • Faster camera and perspective drawing with large meshes and lidar scan data.
  • Windows: Installing license data no longer requires “right click/Start as Administrator”—the UAC dialog will appear instead.
  • Windows: Automatically keeps the last 3 crash dumps. Even one crash is one too many.
  • Windows: Installers, SynthEyes, and Synthia are now code-signed for “Andersson Technologies LLC” instead of showing “Unknown publisher”.
  • Mac OS X: Yosemite required that we change to the latest XCode 6—this eliminated support for OS X 10.7. Apple made 10.8 more difficult as well.

About SynthEyes

SynthEyes is a program for 3-D camera-tracking, also known as match-moving. SynthEyes can look at the image sequence from your live-action shoot and determine how the real camera moved during the shoot, what the camera’s field of view (~focal length) was, and where various locations were in 3-D, so that you can create computer-generated imagery that exactly fits into the shot. SynthEyes is widely used in film, television, commercial, and music video post-production.

What can SynthEyes do for me? You can use SynthEyes to help insert animated creatures or vehicles; fix shaky shots; extend or fix a set; add virtual sets to green-screen shoots; replace signs or insert monitor images; produce 3D stereoscopic films; create architectural previews; reconstruct accidents; do product placements after the shoot; add 3D cybernetic implants, cosmetic effects, or injuries to actors; produce panoramic backdrops or clean plates; build textured 3-D meshes from images; add 3-D particle effects; or capture body motion to drive computer-generated characters. And those are just the more common uses; we’re sure you can think of more.

What are its features? Take a deep breath! SynthEyes offers 3-D tracking, set reconstruction, stabilization, and motion capture. It handles camera tracking, 2- and 3-D planar tracking, object tracking, object tracking from reference meshes, camera+object tracking, survey shots, multiple-shot tracking, tripod (nodal, 2.5-D) tracking, mixed tripod and translating shots, stereoscopic shots, nodal stereoscopic shots, zooming shots, lens distortion, light solving. It can handle shots of any resolution (Intro version limited to 1920×1080)—HD, film, IMAX, with 8-bit, 16-bit, or 32-bit float data, and can be used on shots with thousands of frames. A keyer simplifies and speeds tracking for green-screen shots. The image preprocessor helps remove grain, compression artifacts, off-centering, or varying lighting and improve low-contrast shots. Textures can be extracted for a mesh from the image sequence, producing higher resolution and lower noise than any individual image. A revolutionary Instructible Assistant, Synthia™, helps you work faster and better, from spoken or typed natural language directions.

SynthEyes offers complete control over the tracking process for challenging shots, including an efficient workflow for supervised trackers, combined automated/supervised tracking, offset tracking, incremental solving, rolling-shutter compensation, a hard and soft path locking system, distance constraints for low-perspective shots, and cross-camera constraints for stereo. A solver phase system lets you set up complex solving strategies with a visual node-based approach (not in Intro version). You can set up a coordinate system with tracker constraints, camera constraints, an automated ground-plane-finding tool, by aligning to a mesh, a line-based single-frame alignment system, manually, or with some cool phase techniques.

Eyes starting to glaze over at all the features? Don’t worry, there’s a big green AUTO button too. Download the free demo and see for yourself.

What can SynthEyes talk to? SynthEyes is a tracking app; you’ll use the other apps you already know to generate the pretty pictures. SynthEyes exports to about 25 different 2-D and 3-D programs. The Sizzle scripting language lets you customize the standard exports, or add your own imports, exports, or tools. You can customize toolbars, color scheme, keyboard mapping, and viewport configurations too. Advanced customers can use the SyPy Python API/SDK.

Endeavour: The Last Space Shuttle as she’s never been seen before.

[source by Mark Gibbs]

Endeavour, NASA’s fifth and final space shuttle, is now on display at the California Science Center in Los Angeles and, if you’re at all a fan of space stuff, it’s one of the most iconic and remarkable flying machines ever built.

David Knight, a trustee and board member of the foundation, recently sent me a link to an amazing video of the shuttle as well as some excellent still shots.

David commented that these images were:

 “…captured by Chuck Null on the overhead crane while we were doing full-motion VR and HD/2D filming … the Payload Bay has been closed for [a] few years … one door will be opened once she’s mounted upright in simulated launch position in the new Air & Space Center.

Note that all of this is part of the Endeavour VR Project by which we are utilizing leading-edge imaging technology to film, photograph and LIDAR-scan the entire Orbiter, resulting in the most comprehensive captures of a Space Shuttle interior ever assembled – the goal is to render ultra-res VR experiences by which individuals will be able to don eyewear such as the Oculus Rift (the COO of Oculus himself came down during the capture sessions), and walk or ‘fly’ through the Orbiter, able to ‘look’ anywhere, even touch surfaces and turn switches, via eventual haptic feedback gloves etc.

The project is being Executive Produced by me, with the Producer being Ted Schilowitz (inventor of the RED camera and more), Director is Ben Grossman, who led the special effects for the most recent Star Trek movie. Truly Exciting!”

Here are the pictures …

Endeavour, the last Space Shuttle. Photos: Charles Null / David Knight on behalf of the California Science Center.

Rent or Buy Leica Geosystems Cyclone 9

Leica Geosystems HDS Introduces Patent-Pending Innovations for Laser Scanning Project Efficiency

With Leica Cyclone 9.0, the industry leading point cloud solution for processing laser scan data, Leica Geosystems HDS introduces major, patent-pending innovations for greater project efficiency. Innovations benefit both field and office via significantly faster, easier scan registration, plus quicker deliverable creation thanks to better 2D and 3D drafting tools and steel modelling. Cyclone 9.0 allows users to scale easily for larger, more complex projects while ensuring high quality deliverables consistently.

Greatest advancement in office scan registration since cloud-to-cloud registration
When Leica Geosystems pioneered cloud-to-cloud registration, it enabled users – for the first time – to accurately execute laser scanning projects without having to physically place special targets around the scene, scan them, and model them in the office. With cloud-to-cloud registration software, users take advantage of overlaps among scans to register them together.
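
Leica's registration algorithms are proprietary, but the classic technique behind targetless, overlap-based alignment is iterative closest point (ICP): repeatedly pair up nearby points in the overlap and solve for the rigid transform that best aligns them. A minimal sketch with the open-source Open3D library (file names are placeholders) shows the idea:

```python
import numpy as np
import open3d as o3d  # pip install open3d

# Two overlapping scan stations; file names are placeholders.
source = o3d.io.read_point_cloud("scan_station_1.pts")
target = o3d.io.read_point_cloud("scan_station_2.pts")

# Refine an initial guess (here: identity) by iteratively matching the overlap.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,   # 5 cm search radius for point pairs
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("Fitness:", result.fitness)        # fraction of matched points in overlap
source.transform(result.transformation)  # align source into the target frame
```

ICP needs a reasonable starting alignment, which is exactly the gap Cyclone 9.0's new Automatic Scan Alignment step fills before algorithmic registration runs.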

“The cloud-to-cloud registration approach has delivered significant logistical benefits onsite and time savings for many projects. We’ve constantly improved it, but the new Automatic Scan Alignment and Visual Registration capabilities in Cyclone 9.0 represent the biggest advancement in cloud-to-cloud registration since we introduced it,” explained Dr. Chris Thewalt, VP Laser Scanning Software. “Cyclone 9.0 lets users benefit from targetless scanning more often by performing the critical scan registration step far more efficiently in the office for many projects. As users increase the size and scope of their scanning projects, Cyclone 9.0 pays even bigger dividends. Any user who registers laser scan data will find great value in these capabilities.”

With the push of a button, Cyclone 9.0 automatically processes scans, and digital images if available, to create groups of overlapping scans that are initially aligned to each other. Once scan alignment is completed, algorithmic registration is applied for final registration. This new workflow option can be used in conjunction with target registration methods as well. These combined capabilities not only make the most challenging registration scenarios feasible, but also exponentially faster. Even novice users will appreciate their ease-of-use and ready scalability beyond small projects.

Power user Marta Wren, technical specialist at Plowman Craven Associates (PCA – leading UK chartered surveying firm), found that Cyclone 9.0’s Visual Registration tools alone sped up registration processing of scans by up to four times (4X) compared with previous methods. PCA uses laser scanning for civil infrastructure, commercial property, forensics, entertainment, and Building Information Modelling (BIM) applications.

New intuitive 2D and 3D drafting from laser scans
For civil applications, new roadway alignment drafting tools let users import LandXML-based roadway alignments or use simple polylines imported or created in Cyclone. These tools allow users to easily create cross section templates using feature codes, as well as copy them to the next station and visually adjust them to fit roadway conditions at the new location. A new vertical exaggeration tool in Cyclone 9.0 allows users to clearly see subtle changes in elevation; linework created between cross sections along the roadway can be used as breaklines for surface meshing or for 2D maps and drawings in other applications.

For 2D drafting of forensic scenes, building and BIM workflows, a new Quick Slice tool streamlines the process of creating a 2D sketch plane for drafting items, such as building footprints and sections, into just one step. A user only needs to pick one or two points on the face of a building to get started. This tool can also be used to quickly analyse the quality of registrations by visually checking where point clouds overlap.
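
Cyclone's Quick Slice implementation is not public; purely to illustrate the geometry of turning two picks on a facade into a section plane, here is a hypothetical sketch (the function name and the vertical-axis assumption are mine):

```python
import numpy as np

def quick_slice_plane(p1, p2, up=(0.0, 0.0, 1.0)):
    """Plane through two picked points that also contains the vertical axis.

    Returns (normal, d) with the plane defined by normal . x + d = 0.
    Degenerate if the two picks are vertically stacked (direction parallel to up).
    """
    p1, p2, up = (np.asarray(v, dtype=float) for v in (p1, p2, up))
    direction = p2 - p1                       # in-plane direction along the facade
    normal = np.cross(direction, up)          # horizontal normal to the slice plane
    norm = np.linalg.norm(normal)
    if norm == 0:
        raise ValueError("picks are vertically aligned; plane is undefined")
    normal /= norm
    return normal, -normal.dot(p1)

# Two picks on a building face, coordinates in metres (illustrative values).
n, d = quick_slice_plane((12.1, 4.3, 1.5), (18.7, 4.4, 1.6))
print(n, d)
```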

Also included in Cyclone 9.0 are powerful, automatic point extraction features first introduced in Cyclone II TOPO and Leica CloudWorx. These include efficient SmartPicks for automatically finding bottom, top, and tie point locations and Points-on-a-Grid for automatically placing up to a thousand scan survey points on a grid for ground surfaces or building faces.

Simplified steel fitting of laser scan data
For plant, civil, building and BIM applications, Cyclone 9.0 also introduces a patent-pending innovation for modelling steel from point cloud data more quickly and easily. Unlike time-consuming methods that require either processing an entire available cloud to fit a steel shape or isolating a cloud section before fitting, this new tool lets users quickly and accurately model specific steel elements directly within congested point clouds. Users only need to make two picks along a steel member to model it. Shapes include wide flange, channel, angle, tee, and rectangular tube shapes.

Faster path to deliverables
Leica Cyclone 9.0 also provides users with valuable, new capabilities for faster creation of deliverables for civil, architectural, BIM, plant, and forensic scene documentation from laser scans and High-Definition Surveying™ (HDS™).

Availability
Leica Cyclone 9.0 is available today. Further information about the Leica Cyclone family of products can be found at http://hds.leica-geosystems.com, and users may download new product versions online from this website or purchase or rent licenses from SCANable, your trusted Leica Geosystems representative. Contact us today for pricing on software and training.

Capturing Real-World Environments for Virtual Cinematography

[source] written by Matt Workman

Virtual Reality Cinematography

As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360° media and 360° environments. Capturing a location for virtual reality or virtual production is a task that is well suited to a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting, using 360-degree HDR light probes captured with DSLRs on nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be later used to construct the space digitally.

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32-bit float HDR) 360-degree probes of the location to record the lighting. This process essentially captures the lighting in the space at a very high dynamic range; the result is later reprojected onto the geometry constructed from the LIDAR data.
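
The standard way to build such a probe is to shoot a bracketed exposure series at each nodal position and merge it into a 32-bit float radiance map. A minimal sketch using OpenCV's Debevec calibration (file names and exposure times are illustrative):

```python
import cv2
import numpy as np

# Bracketed exposures of one nodal position; names/times are illustrative.
files = ["probe_1_1000.jpg", "probe_1_250.jpg", "probe_1_60.jpg", "probe_1_15.jpg"]
times = np.array([1/1000, 1/250, 1/60, 1/15], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge into a float radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)
cv2.imwrite("probe_1.hdr", hdr)  # Radiance .hdr keeps the full dynamic range
```

Stitching the merged brackets from each nodal rotation then yields the latlong probe shown above.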

Real-time 3D asset lit by a baked HDR environment

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space, using the techniques outlined above, as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, like he/she would do on a real set. And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera, and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment that was captured in 32-bit float with 8k textures being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8k textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader, supported by Amplify Creations’ beta Unity plug-in.
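
For reference, UDIM is simply a numbering convention over integer UV tiles: tile (u, v) maps to texture number 1001 + u + 10*v, so each row of UV space holds ten tiles. A tiny sketch:

```python
def udim_tile(u: int, v: int) -> int:
    """UDIM tile number for zero-based integer tile coordinates (u, v)."""
    return 1001 + u + 10 * v

# The first row of tiles runs 1001..1010, the second row 1011..1020, and so on.
assert udim_tile(0, 0) == 1001
assert udim_tile(3, 0) == 1004
assert udim_tile(0, 1) == 1011
```

A virtual texture engine can therefore stream only the 8k tiles a given view actually touches, which is what makes "many 8k textures at once" tractable on a single GPU.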

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8k texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse of the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. And a traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, environments with their textures, real world lighting, and viewing them in real time in virtual reality.

The industry is evolving at an incredibly rapid pace, and so must the creatives working in it, especially the person responsible for the camera and the lighting: the director of photography.

FARO SCENE 5.3 Laser Scanning Software Provides Scan Registration without Targets

[source]

FARO® Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announced the release of their newest version of laser scanning software, SCENE 5.3, and scan data hosting-service, SCENE WebShare Cloud 1.5.

FARO’s SCENE 5.3 software, for use with the Laser Scanner Focus3D X Series, delivers scan registration by eliminating artificial targets, such as spheres and checkerboards. Users can choose from two available registration methods: Top View Based or Cloud to Cloud. Top View Based registration allows for targetless positioning of scans. In interiors and in built-up areas without reliable GPS positioning of the individual scans, targetless positioning represents a highly efficient and largely automated method of scanning. The second method, Cloud to Cloud registration, opens up new opportunities for the user to position scans quickly and accurately, even under difficult conditions. In exterior locations with good positioning of the scans by means of the integrated GPS receiver of the Laser Scanner Focus3D X Series, Cloud to Cloud is the method of choice for targetless registration.

In addition, the software also offers various new processes that enable the user to flexibly respond to a wide variety of project requirements. For instance, Correspondence Split View matches similar areas in neighbouring scans to resolve any missing positioning information, and Layout Image Overlay allows users to place scan data in a geographical context using image files, CAD drawings, or maps.

Oliver Bürkler, Senior Product Manager for 3D Documentation Software, remarked, “SCENE 5.3 is the ideal tool for processing laser scanning projects. FARO’s cloud-based hosting solution, SCENE WebShare Cloud, allows scan projects to be published and shared worldwide via the Internet. The collective upgrades to FARO’s laser scanning software solution, SCENE 5.3 and WebShare Cloud 1.5, make even complex 3D documentation projects faster, more efficient, and more effective.”

About FARO
FARO is the world’s most trusted source for 3D measurement, imaging and realization technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, production planning, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Worldwide, approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems. The Company’s global headquarters is located in Lake Mary, FL., its European head office in Stuttgart, Germany and its Asia/Pacific head office in Singapore. FARO has branches in Brazil, Mexico, Germany, United Kingdom, France, Spain, Italy, Poland, Netherlands, Turkey, India, China, Singapore, Malaysia, Vietnam, Thailand, South Korea and Japan.

Click here for more information or to download a 30-day evaluation version.

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.
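
Mantis Vision's depth-sensing algorithms are proprietary; the underlying structured-light principle, though, is stereo triangulation between a pattern projector and a camera: the observed shift (disparity) of the known pattern encodes depth. A toy sketch with illustrative numbers only:

```python
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Triangulate depth from the pixel shift of a projected pattern.

    depth = focal_length * baseline / disparity, the classic relation where
    the projector and camera form the stereo pair.
    """
    disparity_px = np.where(disparity_px == 0, np.nan, disparity_px)  # avoid /0
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 7.5 cm projector-camera baseline.
print(depth_from_disparity(np.array([40.0, 20.0, 10.0]), 800.0, 0.075))
# -> [1.5, 3.0, 6.0] metres: smaller pattern shifts mean farther surfaces
```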

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful, diverse range of products in the future.

Learn more about the project here

Autodesk Announces ReCap Connect Partnership Program

With its new ReCap Connect Partnership Program, Autodesk will open up Autodesk ReCap – its reality capture platform – to third party developers and partners, allowing them to extend ReCap’s functionality.

“Autodesk has a long history of opening our platforms to support innovation and extension,” said Robert Shear, senior director, Reality Solutions, Autodesk. “With the ReCap Connect Partnership Program, we’ll be allowing a talented pool of partners to expand what our reality capture software can do. As a result, customers will have even more ways to start their designs with accurate dimensions and full photo-quality context rather than a blank screen.”

There are many ways for partners to connect to the ReCap pipeline, which encompasses both laser-based and photo-based workflows.  Partners can write their own import plug-in to bring structured point cloud data into ReCap and ReCap Pro using the Capture Codec Kit that is available as part of the new ReCap desktop version. DotProduct – a maker of handheld, self-contained 3D scanners – is the first partner to take advantage of this capability.

“Autodesk’s ReCap Connect program will enable a 50x data transfer performance boost for DotProduct customers — real time 3D workflows on tablets just got a whole lot faster. Our lean color point clouds will feed reality capture pipelines without eating precious schedule and bandwidth.” – Tom Greaves, Vice President, Sales and Marketing, DotProduct LLC.

Alternatively, partners can take advantage of the new Embedded ReCap OEM program to send Reality Capture Scan (RCS) data exports from their point cloud processing software directly to Autodesk design products, which all support this new point cloud engine, or to ReCap and ReCap Pro. The first signed partners in the Embedded ReCap OEM program are: FARO, for their SCENE software; Z+F, for their LaserControl software; CSA, for their PanoMap software; LFM, for their LFM software products; and kubit, for their VirtuSurv software. All these partners’ software will feature this RCS export in their coming releases.

“Partnering with Autodesk and participating in the ReCap Connect program helps FARO to ensure a fluent workflow for customers who work with Autodesk products. Making 3D documentation and the use of the captured reality as easy as possible is one of FARO’s foremost goals when developing our products. Therefore, integrating with Autodesk products suits very well to our overall product strategy.” – Oliver Bürkler, Senior Product Manager, 3D Documentation Software & Innovation, FARO

As a third option, partners can build their own application on top of the Autodesk photo-to-3D cloud service by using the ReCap Photo Web API. More than 10 companies – serving markets ranging from medical and civil engineering, to video games and Unmanned Aerial Vehicles (UAVs) – have started developing specific applications that leverage this capability, or have started integrating this capability right into their existing apps. Some of the first partners to use the ReCap Photo Web API include Soundfit, SkyCatch and Twnkls.

“Autodesk’s cloud based ReCap is an important part of the SoundFit’s 3D SugarCube Scanning Service.  Autodesk’s ReCap service has enabled SoundFit to keep the per scan cost of its service very low, opening new markets, such as scans for hearing aids, custom fit communications headsets, musicians monitors and industrial hearing protection. ReCap allows SoundFit to export 3D models in a wide variety of popular 3D formats, so SoundFit customers and manufacturers can import them into Autodesk CAD packages from AutoCAD to 123D Design, or send them directly to any 3D printer or 3D printing service bureau.” – Ben Simon-Thomas, CEO & Co-Founder

For more information about the ReCap Connect Partnership Program, contact Dominique Pouliquen at Email Contact.

Additional Partner Supporting Quotes

“ReCap Connect gives our PointSense and PhoToPlan users smart and fully integrated access to powerful ReCap utilities directly within their familiar AutoCAD design environments. The result is a more simple and efficient overall workflow. ReCap Photo 360 image calibration eliminates the slowest part of a kubit user’s design process resulting in significant time savings per project.” – Matthias Koksch, CEO, kubit

“ReCap, integrated with CSA’s PanoMap Server, provides a powerful functionality to transfer laser scan point cloud data from large-scale 3D laser scan databases to Autodesk products.  Using the interface, the user can select any plant area by a variety of selection criteria and transfer the laser scan points to the design environment in which they are working. The laser scan 3D database of the plant can have thousands of laser scans.” – Amadeus Burger, President, CSA Laser Scanning

“Autodesk’s industry leading Recap photogrammetry technology will be instrumental in introducing BuildIT’s 3D Metrology solution to a broader audience by significantly reducing data capture complexity and cost.” – Vito Marone, Director Sales & Marketing, BuildIT Software & Solutions

“I am very pleased with the ReCap Photo API performance and its usefulness in fulfilling our 3D personalization needs. I believe the ReCap Photo API is the only product that is available in the market today that meets our needs.” – Dr. Masuma, PhD., Founder of iCrea8

Angela Costa Simoes

Senior PR Manager

DIRECT  +1 415 547 2388

MOBILE  +1 415 302 2934

@ASimoes76

Autodesk, Inc.

The Landmark @ One Market, 5th Floor

San Francisco, CA 94105

www.autodesk.com