
Hennessy Launches “Harmony. Mastered from Chaos.” Interactive Campaign using LiDAR Scans

NEW YORK, June 30, 2016 /PRNewswire/ — Hennessy, the world’s #1 Cognac, today announced “Harmony. Mastered from Chaos.” – a dynamic new campaign that brings to life the multitude of complex variables that are artfully and expertly mastered by human touch to create the brand’s most harmonious blend, V.S.O.P Privilège. Set to launch June 30th, the campaign showcases the absolute mastery exuded at every stage of crafting this blend. The first V.S.O.P Privilège campaign in over ten years, it also offers a glimpse into the inner workings of Hennessy’s mysterious Comité de Dégustation (Tasting Committee)—perhaps the ideal example of Hennessy’s mastery—which crafts the same rich, high-quality liquid year over year. Narrated by Leslie Odom, Jr., the campaign features 60-, 30- and 15-second digital spots and an interactive digital experience, adding another vivid chapter to the brand’s “Never stop. Never settle.” platform.

“Sharing the intriguing story of the Hennessy Tasting Committee, its exacting practices and long-standing rituals, illustrates the crucial role that over 250 years of tradition and excellence play in mastering this well-structured spirit,” said Giles Woodyer, Senior Vice President, Hennessy US. “With more and more people discovering Cognac and seeking out the heritage behind brands, we knew it was the right time to launch the first significant marketing campaign for V.S.O.P Privilège.”

Hennessy’s Comité de Dégustation is a group of seven masters, unparalleled in the world of Cognac, including seventh-generation Master Blender Yann Fillioux. These architects of time oversee the eaux-de-vie to ensure that every bottle of V.S.O.P Privilège is perfectly balanced despite the many intricate variables present during creation of the Cognac. From daily tastings at exactly 11am in the Grand Bureau (whose doors never open to the public) to annual tastings of the entire library of Hennessy eaux-de-vie (one of the largest and oldest in the world), this august body meticulously safeguards the future of Hennessy, its continuity and legacy.

Through a perfectly orchestrated phalanx marked by an abundance of tradition, caring and human touch, V.S.O.P Privilège is created as a complete and harmonious blend: the definitive expression of a perfectly balanced Cognac. Based on a selection of firmly structured eaux-de-vie, aged largely in partially used barrels in order to take on subtle levels of oak tannins, this highly characterful Cognac reveals balanced aromas of fresh vanilla, cinnamon and toasty notes, all coming together with a seamless perfection.

“Harmony. Mastered from Chaos.”
In partnership with Droga5, the film and interactive experience were directed by Ben Tricklebank of Tool of North America, and Active Theory, a Los Angeles-based interactive studio. From the vineyards in Cognac, France, to the distillery and Cognac cellars, viewers are taken on a powerful and modern cinematic journey to experience the scrupulous process of crafting Hennessy V.S.O.P Privilège. The multidimensional campaign uses a combination of live-action footage and technology, including 3D LiDAR scanning, depth capture provided by SCANable, and binaural recording to visualize the juxtaposition of complexity versus mastery that is critical to the Hennessy V.S.O.P Privilège Cognac-making process.

“Harmony. Mastered from Chaos.” will be supported by a fully integrated marketing campaign including consumer events, retail tastings, and social and PR initiatives. Consumers will be able to further engage with the brand through the first annual “Cognac Classics Week,” hosted by Liquor.com and taking place July 11-18, to demonstrate the harmony that V.S.O.P Privilège adds to classic cocktails. Kicking off on Bastille Day in a nod to Hennessy’s French heritage, mixologists across New York City, Chicago, and Los Angeles will offer new twists on classics such as the French 75, Sidecar, and Sazerac, all crafted with the perfectly balanced V.S.O.P Privilège.

For more information on Cognac Classics Week, including a list of participating bars and upcoming events, visit www.Liquor.com/TBD and follow the hashtag #CognacClassicsWeek.

To learn more about “Harmony. Mastered from Chaos.” visit Hennessy.com or Facebook.com/Hennessy.

ABOUT HENNESSY
In 2015, the Maison Hennessy celebrated 250 years of an exceptional adventure that has lasted for seven generations and spanned five continents.

It began in the French region of Cognac, the seat from which the Maison has constantly passed down the best the land has to give, from one generation to the next. In particular, such longevity is thanks to those people, past and present, who have ensured Hennessy’s success both locally and around the world. Hennessy’s success and longevity are also the result of the values the Maison has upheld since its creation: unique savoir-faire, a constant quest for innovation, and an unwavering commitment to Creation, Excellence, Legacy, and Sustainable Development. Today, these qualities are the hallmark of a House – a crown jewel in the LVMH Group – that crafts the most iconic, prestigious Cognacs in the world.

Hennessy is imported and distributed in the U.S. by Moët Hennessy USA. Hennessy distills, ages and blends a full range of Cognacs: Hennessy V.S, Hennessy Black, V.S.O.P Privilège, X.O, Paradis, Paradis Impérial and Richard Hennessy. For more information and where to purchase/engrave, please visit Hennessy.com.


Video – https://youtu.be/vp5e8YV0pjc
Photo – http://photos.prnewswire.com/prnh/20160629/385105
Photo – http://photos.prnewswire.com/prnh/20160629/385106

SOURCE Hennessy


Andersson Technologies releases SynthEyes 1502 3D Tracking Software

Andersson Technologies has released SynthEyes 1502, the latest version of its 3D tracking software, improving compatibility with Blackmagic Design’s Fusion compositing software.

Reflecting the renewed interest in Fusion
According to the official announcement: “Blackmagic Design’s recent decision to make Fusion 7 free of charge has led to increased interest in that package. While SynthEyes has exported to Fusion for many years now — for projects such as Battlestar Galactica — Andersson Technologies LLC upgraded SynthEyes’s Fusion export.”

Accordingly, the legacy Fusion exporter now supports 3D planar trackers; primitive, imported, or tracker-built meshes; imported or extracted textures; multiple cameras; and lens distortion via image maps.

The new lens distortion feature should make it possible to reproduce the distortion patterns of any real-world lens without its properties having been coded explicitly in the software or a custom plugin.
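The exact data SynthEyes writes for Fusion isn’t documented here, but the mechanics of an image-map (ST-map) warp are easy to sketch. Below is a minimal illustration, not SynthEyes or Fusion code; the normalized two-channel layout of the map is an assumption for the example.

```python
import numpy as np

def apply_st_map(image, st_map):
    """Warp an image through an ST (UV) distortion map.

    image:  H x W x C float array (e.g. the undistorted CG render).
    st_map: H x W x 2 float array; each pixel holds the normalized
            (s, t) coordinate in `image` to sample, encoding the lens
            distortion measured from the real plate.
    """
    h, w = st_map.shape[:2]
    # Convert normalized (s, t) to pixel coordinates.
    xs = np.clip(st_map[..., 0] * (w - 1), 0, w - 1)
    ys = np.clip(st_map[..., 1] * (h - 1), 0, h - 1)
    # Nearest-neighbour lookup keeps the sketch short; production
    # tools would use bilinear or higher-order filtering.
    return image[ys.round().astype(int), xs.round().astype(int)]
```

Because the map is just an image, any distortion pattern a solver can measure can be baked into it, which is why no per-lens plugin code is needed.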

A new second exporter creates corner pin nodes in Fusion from 2D or 3D planar trackers in SynthEyes.
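A corner pin is, mathematically, a homography from the four source corners (a unit square is assumed here as the convention) to the four tracked corner positions. A hedged numpy sketch of the per-frame solve such an exporter encodes:

```python
import numpy as np

def corner_pin_homography(dst_corners):
    """Homography mapping the unit square to four tracked corners.

    dst_corners: four (x, y) pairs ordered to match the unit-square
    corners below. This is the transform a corner pin node applies.
    """
    src = [(0, 0), (1, 0), (1, 1), (0, 1)]
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst_corners):
        # Standard direct linear transform rows with h33 fixed to 1.
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(a, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```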

Other new features in SynthEyes 1502 include an error curve mini-view, a DNG/CinemaDNG file reader, and a refresh of the user interface, including the option to turn toolbar icons on or off.

Pricing and availability
SynthEyes 1502 is available now for Windows, Linux and Mac OS X. New licenses cost from $249 to $999, depending on which edition you buy. The new version is free to registered users.

New features in SynthEyes 1502 include:

  • Toolbar icons are back! Some love ’em, some hate ’em. Have it your way: set the preference. Text and icons are shown together by default to make things easiest, especially for new users following older tutorials. Some new and improved icons.
  • Refresh of user interface color preferences to a somewhat darker and trendier look. Other minor appearance tweaks.
  • New error curve mini-view.
  • Updated Fusion 3D exporter now exports all cameras, 3D planars, all meshes (including imported), lens distortion via image maps, etc.
  • New Fusion 2D corner pinning exporter.
  • Lens distortion export via color maps, currently for Fusion (Nuke for testing).
  • During offset tracking, a tracker can be (repeatedly) shift-dragged to different reference patterns on any frame, and SynthEyes will automatically adjust the offset channel keying.
  • Rotopanel’s Import tracker to CP (control point) now asks whether you want to import the relative motion or absolute position.
  • DNG/CinemaDNG reading. Marginal utility: DNG requires much proprietary postprocessing to get usable images, despite new luma and chroma blur settings in the image preprocessor.
  • New script to “Reparent meshes to active host” (without moving them).
  • New section in the user manual on “Realistic Compositing for 3-D”.
  • New tutorials on offset tracking and Fusion.
  • Upgraded to RED 5.3 SDK (includes REDcolor4, DRAGONcolor2).
  • Faster camera and perspective drawing with large meshes and lidar scan data.
  • Windows: Installing license data no longer requires “right click/Start as Administrator”—the UAC dialog will appear instead.
  • Windows: Automatically keeps the last 3 crash dumps. Even one crash is one too many.
  • Windows: Installers, SynthEyes, and Synthia are now code-signed for “Andersson Technologies LLC” instead of showing “Unknown publisher”.
  • Mac OS X: Yosemite required that we change to the latest Xcode 6—this eliminated support for OS X 10.7. Apple made 10.8 more difficult as well.

About SynthEyes

SynthEyes is a program for 3-D camera-tracking, also known as match-moving. SynthEyes can look at the image sequence from your live-action shoot and determine how the real camera moved during the shoot, what the camera’s field of view (~focal length) was, and where various locations were in 3-D, so that you can create computer-generated imagery that exactly fits into the shot. SynthEyes is widely used in film, television, commercial, and music video post-production.
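In other words, the solve recovers a camera pose and field of view such that tracked 3-D points reproject onto the right pixels. A minimal pinhole-projection sketch of that relationship (axis and pixel conventions are assumptions for the example, since every package differs):

```python
import numpy as np

def project(point_world, cam_rotation, cam_position, fov_deg, width, height):
    """Project a 3-D world point into pixel coordinates for a solved camera.

    cam_rotation: 3x3 world-to-camera rotation; cam_position: camera origin.
    Assumes the camera looks down -Z with square pixels and horizontal FOV.
    """
    p = cam_rotation @ (np.asarray(point_world, float) - cam_position)
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal in pixels
    x = width / 2.0 + f * p[0] / -p[2]
    y = height / 2.0 - f * p[1] / -p[2]
    return x, y
```

When the solved pose, FOV and 3-D locations make this reprojection error small on every frame, inserted CG elements stay locked to the plate.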

What can SynthEyes do for me? You can use SynthEyes to help insert animated creatures or vehicles; fix shaky shots; extend or fix a set; add virtual sets to green-screen shoots; replace signs or insert monitor images; produce 3D stereoscopic films; create architectural previews; reconstruct accidents; do product placements after the shoot; add 3D cybernetic implants, cosmetic effects, or injuries to actors; produce panoramic backdrops or clean plates; build textured 3-D meshes from images; add 3-D particle effects; or capture body motion to drive computer-generated characters. And those are just the more common uses; we’re sure you can think of more.

What are its features? Take a deep breath! SynthEyes offers 3-D tracking, set reconstruction, stabilization, and motion capture. It handles camera tracking, 2- and 3-D planar tracking, object tracking, object tracking from reference meshes, camera+object tracking, survey shots, multiple-shot tracking, tripod (nodal, 2.5-D) tracking, mixed tripod and translating shots, stereoscopic shots, nodal stereoscopic shots, zooming shots, lens distortion, light solving. It can handle shots of any resolution (Intro version limited to 1920×1080)—HD, film, IMAX, with 8-bit, 16-bit, or 32-bit float data, and can be used on shots with thousands of frames. A keyer simplifies and speeds tracking for green-screen shots. The image preprocessor helps remove grain, compression artifacts, off-centering, or varying lighting and improve low-contrast shots. Textures can be extracted for a mesh from the image sequence, producing higher resolution and lower noise than any individual image. A revolutionary Instructible Assistant, Synthia™, helps you work faster and better, from spoken or typed natural language directions.

SynthEyes offers complete control over the tracking process for challenging shots, including an efficient workflow for supervised trackers, combined automated/supervised tracking, offset tracking, incremental solving, rolling-shutter compensation, a hard and soft path locking system, distance constraints for low-perspective shots, and cross-camera constraints for stereo. A solver phase system lets you set up complex solving strategies with a visual node-based approach (not in Intro version). You can set up a coordinate system with tracker constraints, camera constraints, an automated ground-plane-finding tool, by aligning to a mesh, a line-based single-frame alignment system, manually, or with some cool phase techniques.

Eyes starting to glaze over at all the features? Don’t worry, there’s a big green AUTO button too. Download the free demo and see for yourself.

What can SynthEyes talk to? SynthEyes is a tracking app; you’ll use the other apps you already know to generate the pretty pictures. SynthEyes exports to about 25 different 2-D and 3-D programs. The Sizzle scripting language lets you customize the standard exports, or add your own imports, exports, or tools. You can customize toolbars, color scheme, keyboard mapping, and viewport configurations too. Advanced customers can use the SyPy Python API/SDK.


OMOTE Real-time Face Tracking 3D Projection Mapping

Forget the faces of historic monuments, the new frontier of 3D projection mapping is the faces of humans.

The piece was created by Nobumichi Asai and friends. Technical details behind the process are scant at the moment, but from what can be found in this Tumblr post, it’s clear that step one was a 3D scan of the model’s face.

Here is the translated text from that post:

Let me continue explaining how this face mapping was made.
The title OMOTE comes from Noh theatre: omote is the word for the face of a Noh mask (literally, the “surface”), and the idea of the Noh mask also shaped the approach, which is to cover the face by creating a “surface.” Because the output had to represent a very delicate make-up art, pursuing accuracy was an important theme. The first step was a 3D laser scan of the model’s face.

I suspect that a structured light scanner was used to capture the geometry of the model’s face rather than a 3D laser scanner. Nonetheless, this is a very cool application of 3D projection mapping.


OMOTE / REAL-TIME FACE TRACKING & PROJECTION MAPPING. from something wonderful on Vimeo.


Leap Motion Controller Update to Offer Affordable Individual Joint MoCap

Leap Motion has announced that the software for its self-titled PC gesture-control device will soon track the movement of individual finger joints, as well as the overall motion of a user’s hands.

Since its launch in 2012, the $80 Leap Motion controller has attracted a lot of interest in the CG community, with Autodesk releasing Maya and MotionBuilder plugins last year.

Individual joint tracking, more parameters captured
In a post on the company’s blog, Leap Motion CEO Michael Buckwald revealed that version 2 of its software will track the individual joints of a user’s fingers, compensating automatically where individual fingers are occluded.

The software will also expose “much more granular data” via its SDK, including 27 dimensions per hand.
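For orientation, here is roughly how the V2 skeletal data is read from the Python bindings of the beta SDK; treat the calls as a sketch based on the published samples rather than a definitive reference.

```python
import Leap  # Python bindings shipped with the Leap Motion V2 beta SDK

controller = Leap.Controller()

def dump_joints():
    """Print the joint endpoints of every bone of every tracked finger."""
    frame = controller.frame()
    for hand in frame.hands:
        for finger in hand.fingers:
            # V2 models four bones per finger: metacarpal, proximal,
            # intermediate and distal (indices 0-3), with positions
            # inferred even when a finger is occluded.
            for bone_index in range(4):
                bone = finger.bone(bone_index)
                print("%d %s %s" % (bone_index,
                                    bone.prev_joint, bone.next_joint))
```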

Affordable Individual MoCap tools coming soon
The update, which will be free and does not require new hardware, is now in public beta for developers, although there’s no news of a consumer release date yet.

Jasper Brekelmans, creator of upcoming hand-tracking tool Brekel Pro Hands, has already announced that he is using the SDK.

Read more about the Leap Motion V2 update on the developer’s blog

Massive Software announces Massive 6.0 crowd simulation software


New look, new GUI

Massive has a completely new graphical user interface. With graphic design by Lost in Space, the new interface not only looks stylish and modern but provides a much smoother interactive user experience. Dialog windows and editors now appear in the new side panel, keeping the workspace clear and tidy. The main window now hosts multiple panels that can be configured to suit the user’s needs, and the configurations can be recalled for later use. Since any panel can be a viewport, it’s now possible to have up to five viewports open at once, each using a different camera.


3D placement

The existing placement tools in Massive have been extended to work in three dimensions, independently of the terrain. The point generator can be placed anywhere in space, the circle generator becomes a sphere, the polygon generator gains depth, and the spline generator becomes tubular. There’s also a new geometry generator, which takes a Wavefront .obj file and fills the polygonal volume with agents.
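The announcement doesn’t describe the algorithm, but filling a closed mesh with points is commonly done by rejection sampling. A hedged sketch of the idea using the open-source trimesh library (not Massive’s implementation):

```python
import numpy as np
import trimesh

def fill_volume(obj_path, count, seed=0):
    """Rejection-sample `count` positions inside a closed .obj volume.

    Requires a watertight mesh; candidate points are drawn in the
    bounding box and kept only if they fall inside the mesh.
    """
    mesh = trimesh.load(obj_path, force='mesh')
    rng = np.random.default_rng(seed)
    lo, hi = mesh.bounds
    points = np.empty((0, 3))
    while len(points) < count:
        batch = rng.uniform(lo, hi, size=(count * 4, 3))
        inside = batch[mesh.contains(batch)]
        points = np.vstack([points, inside])
    return points[:count]
```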


Auto action import

Building complex agents with hundreds of actions can be a time consuming process, but it doesn’t have to be anymore. In Massive 6.0 the action importing process can be completely automated, reducing what could be months of work to a few minutes. Also, all of the import settings for all of the actions can be saved to a file so that revisions of source motion can be imported in seconds using the same settings as earlier revisions.


Bullet dynamics

To effortlessly build a mountain of zombies, it would be useful to have extremely stable rigid body dynamics. Massive 6.0 supports Bullet dynamics, significantly increasing dynamics stability. Just for fun, we had 1,000 mayhem agents throw themselves off a cliff into a pile on the ground below. Without tweaking any parameters we easily created an impressive zombie shot, demonstrating the stability and ease of use of Bullet dynamics.
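Massive’s integration is internal, but the underlying engine is the open-source Bullet library, so the cliff test above is easy to approximate. A small pybullet sketch of the same kind of pile-up, with boxes standing in for agents:

```python
import pybullet as p

# Headless Bullet session: drop 1,000 boxes into a pile, the same
# kind of stacking stress test described above.
p.connect(p.DIRECT)
p.setGravity(0, 0, -9.81)
p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))  # static ground

box = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.2, 0.2, 0.5])
bodies = [p.createMultiBody(baseMass=1.0,
                            baseCollisionShapeIndex=box,
                            basePosition=[(i % 10) * 0.5,
                                          (i // 10 % 10) * 0.5,
                                          2.0 + (i // 100) * 1.5])
          for i in range(1000)]

for _ in range(240 * 10):      # simulate 10 seconds at 240 Hz
    p.stepSimulation()

print(p.getBasePositionAndOrientation(bodies[0]))  # settled, not exploded
```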

No typing required

While it is possible to create almost any kind of behaviour using the brain nodes in Massive, it has always required a little typing to specify inputs and outputs of the brain. This is no longer necessary with the new channel menu which allows the user to very quickly construct any possible input or output channel string with a few mouse clicks.

These are just some of the new features of Massive 6.0, which is scheduled for release in September.


Massive for Maya


Massive has always been a standalone system, and now there’s the choice to use Massive standalone as Massive Prime and Massive Jet, or in Maya as Massive for Maya.


Set up and run simulations in Maya

Massive for Maya facilitates the creation of Massive simulations directly in Maya. All of the Massive scene setup tools, such as the flow field, lanes, paint and placement editors, have been seamlessly reconstructed inside Maya. The simulation workflow has been integrated into Maya to allow for intuitive running, recording and playback of simulations. To achieve this, a record button has been added next to the transport controls and a special status indicator has been included in the Massive shelf. Scrubbing simulations of thousands of agents in Maya is now as simple and efficient as scrubbing the animation of a single character.


Set up lighting in Maya

The Massive agents automatically appear in preview renders as well as batch renders alongside any other objects in the scene. Rendering in Maya works for Pixar’s RenderMan, Air, 3Delight, Mental Ray and V-Ray. This allows for lighting scenes using the familiar Maya lighting tools, without requiring any special effort to integrate Massive elements into the scene. Furthermore, all of this has been achieved without losing any of the efficiency and scalability of Massive.


Edit simulations in Maya graph editor

Any of the agents in a simulation can be made editable in the Maya graph editor. This allows for immediate editing of simulations without leaving the Maya environment. Any changes made to the animation in the graph editor automatically feed back to the Massive agents, so the tweaked agents will appear in the render even though the user sees a Maya character for editing purposes in the viewport. The editing process can even be used with complex animation control rigs, allowing animators and motion editors complete freedom to work however they want to.


Directable characters

A major advantage of Massive for Maya is the ability to bring Massive’s famous brains to character animation, providing another vital tool for creating the illusion of life. While animation studios have integrated Massive into their pipelines to do exactly this for years, the ability to create directable characters has not been within easy reach of those using off-the-shelf solutions. With Massive for Maya it’s now possible to create characters using a handful of base cycles, takes and expressions that can handle such tasks as keeping alive, responding to the focus of the shot, responding to simple direction, or simply walking along a path, thus reducing the amount of work required to fill out a scene with characters that are not currently the focus of the shot.

For example, in a scene in which two characters are talking with each other and a third character, say a mouse, is reacting, the mouse could be driven by its Massive counterpart. The talking characters would drive their Massive counterparts, thereby being visible to the mouse. Using attributes in the talking characters, their Massive counterparts could change colour to convey their emotional states to the mouse agent. The mouse agent then performs appropriately, using its animation cycles, blend-shape animations and so on in response to the performance of the talking characters, and looking at whichever character is talking. Once the agents for a project have been created, setting up a shot for this technique requires only a few mouse clicks and the results happen in real time. Any edits to the timing of the shot simply flow through to the mouse performance.

SCANable offers on-site 3D imaging of real-world people/characters to populate your 3D crowd asset library in Massive’s crowd placement and simulation software. Contact us today for a free quote.


Krakatoa Creates CG Visual Effects from LIDAR Scans for Short Film “Rebirth”

Film director and cinematographer Patryk Kizny – along with his talented team at LookyCreative – put together the 2010 short film “The Chapel” using motion-controlled HDR time-lapse to achieve an interesting, hyper-real aesthetic. Enthusiastically received when released online, the three-minute piece pays tribute to a beautifully decaying church, built in the late 1700s in a small Polish village. Though widely lauded, “The Chapel” felt incomplete to Kizny, so in fall of 2011, he began production on “Rebirth” to refine and add dimension to his initial story.


Exploring the same church, “Rebirth” comprises three separate scenes created using different visual techniques. Contemplative, philosophical narration and a custom orchestral soundtrack composed by Kizny’s collaborator, Mateusz Zdziebko, help guide the flow and overall aspirational tone of the film, which runs approximately 12 minutes. The first scene features a point cloud representation of the chapel with various pieces and cross-sections of the building appearing, changing and shifting to the music. Based on LIDAR scans taken of the chapel for this project, Kizny generated the point clouds with Thinkbox Software’s volumetric particle renderer, Krakatoa, in Autodesk 3ds Max.


“About a year after I shot ‘The Chapel,’ I returned to the location and happened to get involved in heritage preservation efforts,” Kizny explained. “At the time, laser scanning was used for things like archiving, set modeling and support for integrating VFX in post production, but I hadn’t seen any films visualizing point clouds themselves, so that’s what I decided to do.”

EKG Baukultur, an Austrian/German company that specializes in digital heritage documentation and laser scanning, scanned the entire building in about a day from 25 different scanning positions. The collected data was then registered and processed, creating a dataset of about 500 million points. Roughly half of the collected data was used to create the visualizations.


Data processing was done in multiple stages using various software packages. Initially, the EKG Baukultur team registered the separate scans together in a common coordinates space using FARO Scene software. Using .PTS format, the data was then re-imported into Alice Labs Studio Clouds (acquired by Autodesk in 2011) for clean up. Kizny manually removed any tripods with cameras, people, checkerboards and balls that had been used to reference scans. Then, the data was processed in Geomagic Studio to reduce noise, fill holes and uniformly downsample selected areas of the dataset. Later, the data was exported back to the .PTS ASCII format with the help of MeshLab and processed using custom Python scripting so that it could be ingested using the Krakatoa importer. Lacking a visual effects background, Kizny initially tested a number of tools to find the best way to visualize point cloud data in a cinematic way with varying and largely disappointing results. Six months of extensive R&D led Kizny to Krakatoa, a tool that was astonishingly fast and a fraction of the price of similar software specifically designed for CAD/CAM applications.
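Kizny’s exact scripts aren’t published, but the kind of .PTS massaging described above, thinning the cloud and culling noisy returns before import, is straightforward. A hedged sketch, assuming the common 7-column PTS layout:

```python
def downsample_pts(src, dst, keep_every=4, min_intensity=-1500):
    """Uniformly thin a .PTS ASCII cloud and drop low-return points.

    Assumes the common layout: a leading point-count line, then
    "x y z intensity r g b" per line. Column meanings and intensity
    ranges vary by scanner and exporter, so adjust accordingly.
    """
    kept = []
    with open(src) as f:
        next(f)                      # skip the original point count
        for i, line in enumerate(f):
            cols = line.split()
            if len(cols) < 7:
                continue             # malformed line
            if i % keep_every:
                continue             # uniform downsampling
            if int(cols[3]) < min_intensity:
                continue             # likely a noisy, low-return point
            kept.append(line)
    with open(dst, 'w') as f:
        f.write('%d\n' % len(kept))
        f.writelines(kept)
```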

“I had a very basic understanding of 3ds Max, and the Krakatoa environment was new to me. Once I began to figure out Krakatoa, it all clicked and the software proved amazing throughout each step of the process,” he said.

Even when mixing Krakatoa’s depth-of-field and motion-blur functions, Kizny kept render times to roughly five to ten minutes per frame while rendering 200 million points at 2K, by using smaller apertures and camera passes from a greater distance.

“Krakatoa is an amazing manipulation toolkit for processing point cloud data, not only for what I’m doing here but also for recoloring, increasing density, projecting textures and relighting point clouds. I have tried virtually all major point cloud processing software, but Krakatoa saved my life on this project,” Kizny noted.

In addition to using Krakatoa to visualize all the CG components of “Rebirth” as well as render point clouds, Kizny also employed the software for advanced color manipulation. With two subsets of data – a master with good color representation and a target that lacked color information – Kizny used a Magma flow modifier and a comprehensive set of nodes to cast and spatially interpolate the color data from the master subset onto the target subset so that they blended seamlessly in the final dataset. Magma modifiers were also used for the color correction of the entire dataset prior to rendering, which allowed Kizny greater flexibility compared to trying to color correct the rendering itself. Using Krakatoa with Magma modifiers also provided Kizny with a comprehensive set of built-in nodes and scripting access.
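Conceptually, that Magma network performs a spatial color lookup: for each uncolored target point, gather nearby master points and blend their colors. A minimal stand-in using a k-d tree (an illustration of the idea, not Magma itself):

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_color(master_xyz, master_rgb, target_xyz, k=8):
    """Spatially interpolate colors from a well-colored master cloud
    onto an uncolored target cloud, using inverse-distance weighting
    over the k nearest master points."""
    tree = cKDTree(master_xyz)
    dist, idx = tree.query(target_xyz, k=k)
    w = 1.0 / np.maximum(dist, 1e-9)          # closer points weigh more
    w /= w.sum(axis=1, keepdims=True)
    return (master_rgb[idx] * w[..., None]).sum(axis=1)
```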


The second scene of “Rebirth” is a time-lapse reminiscent of “The Chapel,” while the final scene shows live action footage of a dancer. Footage for each scene was captured using Canon DSLR cameras, a RED ONE camera and DitoGear motion control equipment. Between the second and third scene, a short transition visualizes the church collapsing, which was created using 3ds Max Particle Flow with help of Thinkbox Ember, a field manipulation toolkit, and Thinkbox Stoke, a particle reflow tool.

“In the transition, I’m trying to collapse a 200 million-point data cloud into smoke, then create the silhouette of a dancer as a light point from the ashes,” shared Kizny. “Even though it’s a short scene, I’m making use of a lot of technology. It’s not only rendering this point cloud data set again; it’s also collapsing it. I’m using the software in an atypical way, and Thinkbox has been incredibly helpful in troubleshooting the workflow so I could establish a solid pipeline.”

Collapsing the church proved to be a challenge for Kizny. Traditionally, when creating digital explosions, VFX artists blow up a solid, rigid object. Not only did Kizny need to collapse a point cloud – a daunting task in and of itself – but he also had to do so in the hyper-realistic aesthetic he’d established, and in a way that would be both ethereal and physically believable. Using 3ds Max Particle Flow as a simulation environment, Kizny was able to generate a comprehensive, high-resolution vector field that was more efficient and precisely controlled with Ember. Ember was also used to animate two angels appearing from the dust and smoke along with the dancer silhouette. The initial dataset of each of the angels was pushed through a specific vector noise field that produced a smoke-like dissolve, then reversed thanks to retiming features in Krakatoa, Ember and Stoke, which was also used to add density.
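The ingredients of such a transition can be sketched in a few lines: advect the points through a vector noise field, cache every frame, and retime or reverse the cache. An illustrative (not production) example with an analytic swirl field:

```python
import numpy as np

def advect(points, steps=100, dt=0.04):
    """Push an (N, 3) point cloud through a swirl-plus-lift vector
    field, keeping every step so playback can be retimed or reversed
    (reversed, the dissolve reads as matter assembling from smoke)."""
    frames = [points.copy()]
    for _ in range(steps):
        x, y, z = points.T
        vel = np.stack([-y + 0.3 * np.sin(3 * z),   # swirl about Z
                        x + 0.3 * np.cos(3 * z),
                        0.5 + 0.2 * np.sin(2 * x)], # gentle upward lift
                       axis=1)
        points = points + dt * vel                  # explicit Euler step
        frames.append(points.copy())
    return frames[::-1]   # reversed cache: the collapse played backwards
```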


“To create the smoke on the floor, I decided to go all the way with Thinkbox tools,” Kizny said. “All the smoke you see was created using Ember vector fields and simulated with Stoke. It was good and damn fast.”

Another obstacle was figuring out how to animate the dancer in the point clouds. Six cameras recorded a live performer, with markerless motion capture tracking done using the iPi Motion Capture Studio package. The data obtained from the dancer was then ported onto a virtual, rigged model in 3ds Max and used to emit particles for a Particle Flow simulation. Ember vector fields were used for all the smoke-like circulations, and then everything was integrated and rendered using Krakatoa and Thinkbox’s render management system, Deadline – almost 900 frames and 3 TB of data caches for the particles alone. Deadline was also used to distribute high-volume renders and allocate resources across Kizny’s render farm.

Though an innovative display of digital artistry, “Rebirth” is also a preservation tool. Interest generated by “The Chapel” and continued by “Rebirth” has enticed a Polish foundation to begin restoration efforts on the run-down building. Additionally, the LIDAR scans of the chapel will be donated to CyArk, a non-profit dedicated to the digital preservation of cultural heritage sites, and made widely available online.

The film is currently securing funding to complete postproduction. Support the campaign and learn more about the project at the IndieGoGo campaign homepage at http://bit.ly/support-rebirth. For updates on the film’s progress, visit http://rebirth-film.com/.

About Thinkbox Software
Thinkbox Software provides creative solutions for visual artists in entertainment, engineering and design. Developer of high-volume particle renderer Krakatoa and render farm management software Deadline, the team of Thinkbox Software solves difficult production problems with intuitive, well-designed solutions and remarkable support. We create tools that help artists manage their jobs and empower them to create worlds and imagine new realities. Thinkbox was founded in 2010 by Chris Bond, founder of Frantic Films. http://www.thinkboxsoftware.com


Leica Geosystems announces updates for its point cloud software applications

Leica Geosystems announces a major set of updates for the point cloud software applications within its flagship Leica Cyclone and Leica CloudWorx families. These updates save significant office time and make it more convenient to work with rich, as-built point cloud data. This is the company’s largest set of point cloud software releases to date.

“What we’re seeing in the market is that our customers are using laser scanning in an increasing variety of scenarios and under more demanding circumstances, so they need more options for working with point cloud data and they need to do their work even faster,” states Chris Thewalt, VP of Scanning Software. “Overall, we continue to see strong growth of 3D laser scanning/High-Definition Surveying (HDS) with a corresponding expansion and diversification of our user community’s needs. In response, we’ve been investing heavily in a number of our standalone Cyclone and our plug-in CloudWorx point cloud software applications. This large set of releases reflects that ongoing investment.”

Leica Cyclone and Leica CloudWorx families

• More flexible licensing lets users easily move licenses between the field and office and on or off a network.
• Users on customer support can implement license upgrades on their own at any time.
• Rentals are now available for as short as one week for most products; discounts are available for extended rental periods.

Leica CloudWorx for AutoCAD 5.0

• Plug-in for AutoCAD saves hours in the office for working with 3D point clouds in AutoCAD for both experienced users and users new to working in 3D
• Easier X,Y,Z coordinate system setup and faster navigation to desired views; faster creation of 2D drawings; faster ground surface and TIN creation; and faster selection of high, low and ground points (a minimal sketch of the ground-point idea follows)
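Ground-point selection can be as simple as keeping the lowest return in each grid cell; CloudWorx’s implementation is surely more sophisticated, but a toy sketch conveys the idea:

```python
import numpy as np

def ground_points(xyz, cell=0.5):
    """Crude ground classification for an (N, 3) cloud: keep the
    lowest point in each XY grid cell of size `cell` (meters)."""
    ij = np.floor(xyz[:, :2] / cell).astype(int)
    keys, order = np.unique(ij, axis=0, return_inverse=True)
    lowest = np.full(len(keys), np.inf)
    pick = np.zeros(len(keys), dtype=int)
    for i, (cell_id, zval) in enumerate(zip(order, xyz[:, 2])):
        if zval < lowest[cell_id]:    # new lowest point for this cell
            lowest[cell_id] = zval
            pick[cell_id] = i
    return xyz[pick]
```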

Leica CloudWorx for 3ds Max 2.0

• New Leica CloudWorx plug-in family member (replaces Leica CloudWorx-VR)
• Eliminates prior need to export from Cyclone and import to Leica CloudWorx-VR; users now enjoy direct data access to Cyclone files
• Adds rich set of standard CloudWorx plug-in tools for working more efficiently with point clouds in 3ds Max

Leica CloudWorx for PDMS 1.3

• Plug-in for PDMS adds valuable option of importing plant models from PDMS directly into Leica Cyclone and exporting models created from point clouds in Cyclone directly into PDMS
• Avoids prior need to import/export models into/from PDMS and Cyclone via AutoCAD or MicroStation
• Supports direct import of PDMS models into popular Leica TruView software


Point Cloud Tools for 3D Studio [Project Helix]

Bring your visualizations into context with Project Helix, a powerful technology prototype enabling display and rendering of 3D laser scanning/LiDAR data sets with Autodesk® 3ds Max® and Autodesk® 3ds Max® Design software. With the 3ds Max Point Cloud Tools you can more quickly import as-built site references to help evaluate and visualize your designs in the context of their surrounding elements. Point cloud data sets are often created by 3D scanners and represent a set of measured vertices in a three-dimensional coordinate system. Using an automated process, these devices measure a large number of points on the surface of an object and output a point cloud as a data file. Download Now

The Point Cloud Tool for 3ds Max and 3ds Max Design allows you to:

  • Import .PTS format point cloud data into 3ds Max or 3ds Max Design scenes (releases 2010 & 2011)
  • Display the point cloud data in the 3ds Max viewport with a variety of rendering options and levels of detail
  • Render point clouds using the mental ray® renderer*
  • Slice point clouds into pieces using geometric display volumes (see the slice-and-export sketch after this list)
  • Export multiple clouds or parts of clouds to new .PTS files
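Slicing with a display volume amounts to a containment test plus re-export. An illustrative sketch for an axis-aligned box (the exact .PTS column layout varies by exporter, so the 6-column assumption here is for the example only):

```python
import numpy as np

def slice_to_pts(xyz_rgb, box_min, box_max, path):
    """Keep points inside an axis-aligned display volume and write
    them as a new .PTS file: a count line, then one point per line.

    xyz_rgb: (N, 6) array of x y z r g b rows (assumed layout).
    """
    xyz = xyz_rgb[:, :3]
    inside = np.all((xyz >= box_min) & (xyz <= box_max), axis=1)
    sliced = xyz_rgb[inside]
    with open(path, 'w') as f:
        f.write('%d\n' % len(sliced))
        np.savetxt(f, sliced, fmt='%.4f')
```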

* mental ray is a registered trademark of mental images GmbH licensed for use by Autodesk, Inc.

The Project Helix Technology Preview will be made available only for a limited time, so download Project Helix before June 20, 2011 and place your designs in context today!

If you would like to try the Point Cloud Tool for 3ds Max with a sample data set:


FEATURED VIDEOS

If you do not have access to YouTube videos, you can download the video as 3ds Max Point Cloud Tools.mp4.

Exploring Point-Based Rendering in Pixar’s RenderMan [Point Clouds]

Creating animations of large point cloud datasets generated from terrestrial laser scanners and LiDAR has been an issue for a number of years. While it has been possible using tools such as Leica Geosystems Cyclone or Pointools, just to name a couple, it is still a very cumbersome task. It is exciting to see the CGI industry beginning to adopt the use of this data and developing applications that make it easier to visualize.

Over the last few years, a brand new technology has emerged for creating CGI effects that has already made a big impact on feature film production. It is called point-based rendering, and this powerful technique makes creating global illumination effects for feature film far more practical and efficient than before. In fact, this year the Academy of Motion Picture Arts and Sciences awarded the creators of this innovation, Per Christensen, Michael Bunnell and Christophe Hery, a Scientific and Engineering Award.
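The core trick is to replace ray-traced visibility with sums over “surfels”: oriented disks baked from the scene’s geometry. A toy occlusion gather using one common disk form-factor approximation (a sketch of the technique, not Pixar’s implementation):

```python
import numpy as np

def occlusion(receiver_p, receiver_n, surfel_p, surfel_n, surfel_area):
    """Approximate ambient occlusion at a receiver point from (M, 3)
    surfel positions/normals and (M,) surfel areas, summing a disk
    form-factor term instead of tracing rays."""
    v = surfel_p - receiver_p                     # receiver -> surfel
    d2 = np.einsum('ij,ij->i', v, v)
    inv_d = 1.0 / np.sqrt(np.maximum(d2, 1e-12))
    # Cosine at the receiver and at the emitting disk, clamped to the
    # front-facing hemisphere.
    cos_r = np.clip(np.einsum('ij,j->i', v, receiver_n) * inv_d, 0, None)
    cos_e = np.clip(-np.einsum('ij,ij->i', v, surfel_n) * inv_d, 0, None)
    form = surfel_area * cos_r * cos_e / (np.pi * d2 + surfel_area)
    return np.clip(form.sum(), 0.0, 1.0)
```

Because the gather touches precomputed points rather than full geometry, it scales to film scenes far more gracefully than brute-force ray tracing, which is what made the approach so influential.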

In this great article written by Nils O Sandys at CGSociety, we will look into the development of this important new technology, how point-based rendering works, and what this all means to the future of feature film production as we know it. Read the full article here.

Too-cool technologies: Game Engine-quality Point Clouds and Digital Holography

By Lieca N. Hohner, Chief Editor SparLLC

Our industry never comes short in the innovation department. HKS Inc., headquartered in Dallas, Texas, proves this—it’s turned “regular” point clouds into game-engine quality. Here’s the story. And then read on for some amazing display solutions.
HKS, Inc.’s Pat Carmichael, manager of the Advanced Technology Group, began investigating point cloud scans as a way to achieve high-quality as-built information for the company’s architectural geometry applications used for schematic design (most often Revit). The team realized many benefits from laser scan data, including the ability to obtain data not manually possible, draw while acquiring field data, gain highly accurate data comparable to total station data, and collect immense amounts of data rapidly. Point clouds are the bread and butter of rapid model acquisition, Carmichael said in his presentation at SPAR 2009.
Scan data captured by subcontractors’ scanners feeds HKS’ home-grown product BIMMIT, an evolving spin-off of ARCHengine, the company’s real-time game-engine product that has been in development for more than 10 years, and it is used to enhance Revit models. BIMMIT is usually coupled with HKS’ proprietary ARCHengine for real-time display of the resulting 3D BIMMIT/Revit models, which can run between 8 million and 30 million polygons depending on whether they are used on a laptop or desktop.
To illustrate the awesomeness of this melding, consider the W hotel in Dallas. The final model of the pre-constructed hotel designed by HKS was used to show city officials how the hotel’s sight lines would affect the downtown skyline so valued by the city. It was also used to sell out the associated condos prior to construction, as developers could take prospective buyers virtually up to their windows to show the views from their units. This same concept was used for the Ritz-Carlton twin towers in downtown Dallas; the presentation helped to pre-sell approximately 85% of the Phase One units in about six months—even in this down market, Carmichael said.
HKS used aerial lidar from the city to set elevations, some of which are photographically textured. HKS also flies with a RED ONE digital camera, which shoots in 4K resolution, whereby they extract high-resolution textures rapidly for application with the aerial lidar geometry. Most of the building models come out of Revit.
With these incredible design tools, HKS also performed a design review on the seating in the new American Airlines sports arena. HKS showed staff, team owners and other investors how seats would articulate and rise for a hockey or basketball arena and specifically how they would affect viewlines. On the new Dallas Cowboys Stadium in Arlington, Texas, HKS took the collected field scan data, structural data and drawing data—and all site views from all 89,000 seats, scoreboards, etc., into ARCHengine. To check the models during construction, HKS used a total station to get information from point to point. In the desktop models of the ARCHengine tools, everything is georeferenced with lat/long/elev, which gives the team dimensional data.
“It’s a serious design tool,” Carmichael said. “It’s a serious communication tool to the clients/users/vendors, all the other suppliers, and a bunch of other people participating in the design process.”
Carmichael says ARCHengine version 3 will tie individual objects to a reporting structure, in line with 4D business strategies that tie in time, space calculations and scheduling. He said the HKS Advanced Technology Group is also working with Intel on high-end multi-processors to react more quickly to a cluster of cores for simulations.
Those involved in sports stadium, government, military or GSA work will be interested in HKS’ Advanced Technology Group solutions.
To see an interactive map of the seats in the new Dallas Stadium with panoramic images generated from ARCHengine, go to http://www.dallascowboys.com/tickets/newstadiumInteractiveMaps.cfm

Digital Imaging, Holographic Style

Zebra Imaging, Inc., provider of holographic display technologies, has taken visualization of LiDAR and laser scan data sets to a new level. Users in the geospatial, AEC, automotive, medical, oil & gas, military and other arenas can view a topographical data set in full parallax, full color and without any glasses or goggles. “Seeing LiDAR and laser scanned data volumetrically expands its utility and value,” said Michael Klug, Zebra’s CTO, at this year’s SPAR 2009. Government and commercial uses seem endless.
Zebra’s solution graduates a physical display to digital holography by reconstructing a 3D image in space using film-based displays and illumination. The 12-year-old company founded by graduates of the Massachusetts Institute of Technology’s Media Lab has cut its teeth by aiding the military and law enforcement with displays that assist planning and after-action efforts, situational awareness and training.
It’s pretty cool stuff—a far, FAR cry from the hologram stickers I collected as a little girl. Klug describes the process as being more like burning data to a disc than a printing process. From a pair of GeoTiffs (one being a DEM, the other a geotextured map), Zebra’s proprietary Imager burns the pattern into photopolymer film with intersecting laser beams, producing an A1-size (594 × 841 mm) monochrome hologram; source data can come from CAD, GIS, medical imaging, oil & gas and other formats, at a 1-millimeter hogel size—about a pixel. Process time is about three hours. An average A1-size monochrome (green) hologram costs about $2,500. Full color and replication are available, and Klug says high-speed development will be available by Q4. Klug claims Zebra’s solution is similar or lower in cost compared to other market alternatives today, and that it is more transportable and usable, with full solid-parallax 3D.
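A quick back-of-the-envelope check on those numbers:

```python
# An A1 sheet (594 x 841 mm) tiled at the stated 1 mm hogel pitch
# gives roughly half a million hogels -- the same order of magnitude
# as the 64,000 to quarter-million rendered views per job quoted
# later in this piece.
width_mm, height_mm, hogel_mm = 594, 841, 1
print((width_mm // hogel_mm) * (height_mm // hogel_mm))  # -> 499554
```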
Zebra has produced more than 6,000 LiDAR-based holographic displays for military use in Iraq and Afghanistan since 2006. The 2 × 2½-foot maps provide warfighters with a common communication tool to get a common operating picture of an area of interest without language or cultural obstacles. Klug said they’re easily transportable, durable and, later, shreddable.
Focus on AEC Market
In the last two years, the company has developed a new product line for the AEC realm. Attention focuses on geospatial context and all phases of design, BIM documentation, and communications and marketing.
Currently, Zebra is defining a style guide and a CAD tool API plug-in-based interface available from a drop-down menu in Revit, 3ds Max and Google SketchUp (at first, then others). Klug says the creation of a wizard is a bit complicated for Zebra, since they render with in-house tools to manage 64,000 to a quarter-million views of a scene within two hours. So they’ve created a render-quality selection where the user can select a point cloud, a simple-shaded rendition of a data set, a textured data set or a photo-real selection (which customizes the job). Orders are returned in A-frame and horizontal formats (each of which delivers different results) and include a lighting component.
The Creation of Dynamic Displays
In 2004, Zebra was sponsored by DARPA to create a program for dynamic 3D displays for interactive, graphics-intensive applications. The dynamic displays would be easy to view, have 360-degree visibility, be electronically updated in real time, be modular and scalable to 6×6 feet, and offer horizontal, vertical and inclined orientations. To date, the company has established a 1-meter-diagonal prototype modular display of 8-inch square tiles with an image volume that occupies about 1 foot of space. It plugs directly into OpenGL-based applications and updates at 10 Hz. Pilot production and the beta phase of this display are expected next year. Klug said any rendering feature a user can see on a 2D screen can be produced in the hologram, including translucency, transparency, reflection, etc.

Uses for these displays include, but don’t appear to be limited to, spatial, project and industrial process planning, land development, event security logistics, emergency management, heritage preservation, forensics presentation and construction progress monitoring.