Velodyne LiDAR Announces Puck Hi-Res™ LiDAR Sensor

Velodyne LiDAR Announces Puck Hi-Res™ LiDAR Sensor, Offering Higher Resolution to Identify Objects at Greater Distances

Industry-leading, real-time LiDAR sensor impacts the autonomous vehicle, 3D mapping and surveillance industries with significantly higher-resolution 3D images


MORGAN HILL, Calif.–(BUSINESS WIRE)–Velodyne LiDAR Inc., the recognized global leader in Light Detection and Ranging (LiDAR) technology, today unveiled its new Puck Hi-Res™ sensor, a version of the company’s groundbreaking LiDAR Puck that provides higher resolution in captured 3D images, allowing objects to be identified at greater distances. Puck Hi-Res is the third new LiDAR sensor released by the company this year, joining the standard VLP-16 Puck™ and the Puck LITE™.


“Introducing a high-resolution LiDAR solution is essential to advancing any industry that leverages the capture of 3D images, from autonomous navigation to mapping to surveillance,” said Mike Jellen, President and COO, Velodyne LiDAR. “The Puck Hi-Res sensor will provide the most detailed 3D views possible from LiDAR, enabling widespread adoption of this technology while increasing safety and reliability.”

Expanding on Velodyne LiDAR’s groundbreaking VLP-16 Puck, a 16-channel, real-time 3D LiDAR sensor that weighs just 830 grams, Puck Hi-Res is used in applications that require greater resolution in the captured 3D image. Puck Hi-Res retains the VLP-16 Puck’s 360° horizontal field-of-view (FoV) and 100-meter range, but narrows the vertical FoV to 20° for a tighter channel distribution – 1.33° between channels instead of 2.00° – delivering greater detail in the 3D image at longer ranges. This enables the host system not only to detect, but also to better discern, objects at these greater distances.
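The arithmetic behind that tighter spacing is straightforward: 16 channels spread evenly across a 20° vertical FoV leave 15 gaps of roughly 1.33° each, versus 2.00° for the standard Puck’s 30° FoV. A minimal Python sketch (a hypothetical illustration, not Velodyne code) of the resulting elevation angles:

```python
def channel_angles(num_channels: int, vertical_fov_deg: float) -> list[float]:
    """Evenly spaced laser elevation angles, centered on the horizon."""
    spacing = vertical_fov_deg / (num_channels - 1)
    start = -vertical_fov_deg / 2
    return [round(start + i * spacing, 2) for i in range(num_channels)]

vlp16 = channel_angles(16, 30.0)  # standard Puck: 2.00° between channels
hires = channel_angles(16, 20.0)  # Puck Hi-Res: ~1.33° between channels

print(vlp16[1] - vlp16[0])                 # 2.0
print(round(hires[1] - hires[0], 2))       # 1.33
```

At 100 meters, that difference in angular spacing translates directly into denser vertical sampling of a target, which is what lets the host system discern object shape, not just presence, at range.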

“Building on the VLP-16 Puck and the Puck LITE, the Puck Hi-Res was an intuitive next step for us, as the evolution of the various industries that rely on LiDAR showed the need for higher resolution 3D imaging,” said Wayne Seto, product line manager, Velodyne LiDAR. “Not only does the Puck Hi-Res provide greater detail in longer ranges, but it retains all the functions of the original VLP-16 Puck that shook up these industries when it was introduced in September 2014.”

“The 3D imaging market is expected to grow from $5.71B in 2015 to $15.15B in 2020, led by the development of autonomous shuttles for large campuses, airports, and basically anywhere there’s a need to safely move people and cargo,” said Dr. Rajender Thusu, Industry Principal for Sensors & Instruments, Frost & Sullivan. “We expect Velodyne LiDAR’s line of sensors to play a key role in this surge in autonomous vehicle development, as the company leads the way in partnerships with key industry drivers, along with the fact that sensors like the new Puck Hi-Res are substantially more sophisticated than competitive offerings and increasingly accessible to all industry players.”

Velodyne LiDAR is now accepting orders for Puck Hi-Res, with a lead-time of approximately eight weeks.

About Velodyne LiDAR

Founded in 1983 by David S. Hall, Velodyne Acoustics Inc. first disrupted the premium audio market through Hall’s patented invention of virtually distortion-less, servo-driven subwoofers. Hall subsequently leveraged his knowledge of robotics and 3D visualization systems to invent groundbreaking sensor technology for self-driving cars and 3D mapping, introducing the HDL-64 Solid-State Hybrid LiDAR sensor in 2005. Since then, Velodyne LiDAR has emerged as the leading supplier of solid-state hybrid LiDAR sensor technology used in a variety of commercial applications including advanced automotive safety systems, autonomous driving, 3D mobile mapping, 3D aerial mapping and security. The compact, lightweight HDL-32E sensor is available for applications including UAVs, while the VLP-16 LiDAR Puck is a 16-channel LiDAR sensor that is both substantially smaller and dramatically less expensive than previous generation sensors. To read more about the technology, including white papers, visit http://www.velodynelidar.com.

Contacts

Velodyne LiDAR
Laurel Nissen
lnissen@velodyne.com
or
Porter Novelli/Voce
Andrew Hussey
Andrew.hussey@porternovelli.com

Hennessy Launches “Harmony. Mastered from Chaos.” Interactive Campaign using LiDAR Scans

NEW YORK, June 30, 2016 /PRNewswire/ — Hennessy, the world’s #1 Cognac, today announced “Harmony. Mastered from Chaos.” – a dynamic new campaign that brings to life the multitude of complex variables that are artfully and expertly mastered by human touch to create the brand’s most harmonious blend, V.S.O.P Privilège. Set to launch June 30th, the campaign showcases the absolute mastery exuded at every stage of crafting this blend. This first campaign in over ten years also offers a glimpse into the inner workings of Hennessy’s mysterious Comité de Dégustation (Tasting Committee)—perhaps the ideal example of Hennessy’s mastery—that crafts the same rich, high quality liquid year over year. Narrated by Leslie Odom, Jr., the campaign features 60-, 30- and 15-second digital spots and an interactive digital experience, adding another vivid chapter to the brand’s “Never stop. Never settle.” platform.

“Sharing the intriguing story of the Hennessy Tasting Committee, its exacting practices and long standing rituals, illustrates the crucial role that over 250 years of tradition and excellence play in mastering this well-structured spirit,” said Giles Woodyer, Senior Vice President, Hennessy US. “With more and more people discovering Cognac and seeking out the heritage behind brands, we knew it was the right time to launch the first significant marketing campaign for V.S.O.P Privilège.”

Hennessy’s Comité de Dégustation is a group of seven masters, including seventh generation Master Blender, Yann Fillioux, unparalleled in the world of Cognac. These architects of time oversee the eaux-de-vie to ensure that every bottle of V.S.O.P Privilège is perfectly balanced despite the many intricate variables present during creation of the Cognac. From daily tastings at exactly 11am in the Grand Bureau (whose doors never open to the public) to annual tastings of the entire library of Hennessy eaux-de-vie (one of the largest and oldest in the world), this august body meticulously safeguards the future of Hennessy, its continuity and legacy.

Through a perfectly orchestrated phalanx marked by an abundance of tradition, caring and human touch, V.S.O.P Privilège is created as a complete and harmonious blend: the definitive expression of a perfectly balanced Cognac. Based on a selection of firmly structured eaux-de-vie, aged largely in partially used barrels in order to take on subtle levels of oak tannins, this highly characterful Cognac reveals balanced aromas of fresh vanilla, cinnamon and toasty notes, all coming together with a seamless perfection.

“Harmony. Mastered from Chaos.”
In partnership with Droga5, the film and interactive experience were directed by Ben Tricklebank of Tool of North America, and Active Theory, a Los Angeles-based interactive studio. From the vineyards in Cognac, France, to the distillery and Cognac cellars, viewers are taken on a powerful and modern cinematic journey to experience the scrupulous process of crafting Hennessy V.S.O.P Privilège. The multidimensional campaign uses a combination of live-action footage and technology, including 3D lidar scanning, depth capture provided by SCANable, and binaural recording to visualize the juxtaposition of complexity versus mastery that is critical to the Hennessy V.S.O.P Privilège Cognac-making process.

“Harmony. Mastered from Chaos.” will be supported by a fully integrated marketing campaign including consumer events, retail tastings, social and PR initiatives. Consumers will be able to further engage with the brand through the first annual “Cognac Classics Week” hosted by Liquor.com, taking place July 11-18 to demonstrate the harmony that V.S.O.P Privilège adds to classic cocktails. Kicking off on Bastille Day in a nod to Hennessy’s French heritage, mixologists across New York City, Chicago, and Los Angeles will offer new twists on classics such as the French 75, Sidecar, and Sazerac, all crafted with the perfectly balanced V.S.O.P Privilège.

For more information on Cognac Classics Week, including a list of participating bars and upcoming events, visit www.Liquor.com/TBD and follow the hashtag #CognacClassicsWeek.

To learn more about “Harmony. Mastered from Chaos.” visit Hennessy.com or Facebook.com/Hennessy.

ABOUT HENNESSY
In 2015, the Maison Hennessy celebrated 250 years of an exceptional adventure that has lasted for seven generations and spanned five continents.

It began in the French region of Cognac, the seat from which the Maison has constantly passed down the best the land has to give, from one generation to the next. In particular, such longevity is thanks to those people, past and present, who have ensured Hennessy’s success both locally and around the world. Hennessy’s success and longevity are also the result of the values the Maison has upheld since its creation: unique savoir-faire, a constant quest for innovation, and an unwavering commitment to Creation, Excellence, Legacy, and Sustainable Development. Today, these qualities are the hallmark of a House – a crown jewel in the LVMH Group – that crafts the most iconic, prestigious Cognacs in the world.

Hennessy is imported and distributed in the U.S. by Moët Hennessy USA. Hennessy distills, ages and blends a full range of Cognacs: Hennessy V.S, Hennessy Black, V.S.O.P Privilège, X.O, Paradis, Paradis Impérial and Richard Hennessy. For more information and where to purchase/engrave, please visit Hennessy.com.

Video – https://youtu.be/vp5e8YV0pjc
Photo – http://photos.prnewswire.com/prnh/20160629/385105
Photo – http://photos.prnewswire.com/prnh/20160629/385106

SOURCE Hennessy

Chaos Group Patents Scanning Technology For World’s Most Accurate 3D Materials

Today, Chaos Group announces a new scanning technology that can produce an exact digital replica of a physical material with sub-mm precision. Design and manufacturing firms can instantly take advantage of this patented technology through VRscans, Chaos Group’s new full-service material scanning business.

“We’ve dedicated the last 19 years to advancing rendering technology, so designers can trust what they see on their screen. Now they can match the material sample in their hand with the one in their 3D model,” said Peter Mitev, CEO at Chaos Group.

While using digital graphics to prototype projects has been the standard for design firms and Fortune 500 companies for years, the ability to present products with 100% accuracy has not. In the past, approximations were used, forcing artists to spend countless hours getting 90% of the way there. With VRscans, companies can now send in their material samples, trusting that what comes back will be the most accurate representation of a digital material yet. Once processed, the materials are returned with a .vrscan file that arrives ready to render in V-Ray – Chaos Group’s widely-used photorealistic lighting and rendering software.

VRscans combines precise optical hardware with proprietary software; the result of several years of research and prototyping. VRscans captures each material’s full appearance, including surface texture, reflectivity, dimensionality, and its unique response to light. This complex data is read by the new VRscans plugin, creating a physically accurate material that can be rendered from any angle and in any lighting condition.

“Several pilot projects are already underway with top automotive, aerospace, and furniture companies,” said Dinko Dimitrov, VRscans Product Manager. “Digital prototypes are the best way to design quickly. Now designers can do it with confidence.”

Pricing and Availability

VRscans is available now and is compatible with the latest versions of V-Ray for 3ds Max and Maya. Support for additional applications is in development.

For information and pricing, email vrscans(at)chaosgroup(dot)com or visit http://www.vrscans.com.

About Chaos Group

Chaos Group is a worldwide leader in computer graphics technology that helps artists and designers create photoreal imagery and animation for design, television, and feature films. Chaos Group’s physically based lighting and rendering software V-Ray is used daily by top design studios, architectural firms, advertising agencies, and visual effects companies around the globe. Today, the company’s research and development in cloud rendering, material scanning, and virtual reality is shaping the future of creative storytelling and digital design. Founded in 1997, Chaos Group is privately owned with offices in Sofia, Los Angeles, Baltimore, Seoul, and Tokyo. For more information, visit: chaosgroup.com.

Web-based Point Cloud Viewing

Introducing PointBox, a powerful web-based point cloud viewer for sharing and publishing your own point clouds in LAS, LAZ, PLY, PTS and Potree Zip file formats.

No limits

Publish and share your own point clouds with no point limit. The web browser loads just the points needed for the current camera position.

Tools

You can measure distances, profiles and volumes within your point clouds, and navigate the clouds using different kinds of controls.

Free

You can share point clouds up to 2 GB during the beta testing period. Premium accounts will be available with the stable version.

For more information or to try it out for yourself, visit www.pointbox.xyz.

Google Brings Its 360-Degree Movies App, Spotlight Stories, To iOS

SCANable helped bring the Google Spotlight Stories fourth animated film to life by capturing all of the sets, locations, actors and vehicles in 3D for recreation of each item as photorealistic 3D digital assets. These assets were used by the VFX teams to create the final version of the film.

Source: TechCrunch.com

Google Spotlight Stories, a mobile app featuring immersive, 360-degree animated films originally developed by Motorola, has now made its way to iOS devices.

When viewers watch the movie, entitled “HELP,” they can look anywhere, set the pace and frame the shot by moving their mobile device. Previously, Spotlight Stories was only available on Android, but it is now open to users of iOS 8.0 or higher.

The app itself is intended for entertainment purposes, as it offers stories built using 3D and 2D animations, 360-degree spherical “cinema-quality” video, sound sphere audio and “sensor fusion techniques,” explains Google. In short, what that means is that viewers can look around inside the animated content by moving their body and the phone to see different parts or angles of the story taking place.

Basically, the app can take advantage of the device’s sensors like its gyroscope and accelerometer in order to offer an immersive viewing experience. However, it doesn’t let end users create these sorts of movies for themselves.

One of the original animations featured in Spotlight Stories when it debuted was a film called “Windy Days” by ex-Pixar moviemakers, which appeared on Moto X phones when the Android app rolled out. This, as well as the other content previously available on Android, is also available in the new iOS app.

The app includes films like “Duet” from Glen Keane, “Buggy Night” from ATAP, and “Help” by “The Fast and the Furious” director Justin Lin. What’s interesting is that this latter movie, unlike the others, is noted as being “free for a limited time,” which indicates that Google may be planning to sell movies through this service in the future.

The technology for making these artistic mini-movies was first developed by Motorola Mobility’s Advanced Technology And Products (ATAP) moonshot division, but Google continued to fund its development in the years that followed. However, because the app was originally intended for Motorola devices (like the Moto X), it didn’t immediately support a wide range of Android devices when it launched. Some limitations on Android continue today, as the Google Play version still indicates that Spotlight Stories is “not yet compatible with all smartphones.”

However, the new iOS release will work on any device running iOS 8.0 or higher, notes Google.

The app is a free download, here on iTunes.

Emmy Nomination for Nissan “With Dad” Commercial

Do you recall the emotional Nissan television ad that was introduced during Super Bowl XLIX earlier this year? It made the list of 2015 Emmy nominations for Outstanding Commercial, and we are not surprised. Refresh your memory and watch it again.

This sentimental commercial, titled “With Dad,” certainly caught the public’s attention. After it premiered during the Super Bowl, the ad was voted the Favorite Super Bowl Commercial of 2015 and received more than 22 million views on YouTube. Nissan definitely made a splash, having not advertised during the Super Bowl in nearly twenty years!

As you can see, the commercial takes viewers through the lives of a loving family working its way through tough times and struggling to stay close despite the circumstances. Nissan took the opportunity to introduce two new Nissan models, the GT-R LM NISMO sports car and 2016 Nissan Maxima sports sedan.

Nissan #withdad campaign

The popular commercial ran as part of Nissan’s #withdad campaign, aimed at reminding the public how much more fun and exciting life can be with dad. In addition to the Super Bowl commercial and other YouTube videos, Nissan donated $1,000,000 to two organizations that help individuals build better lives for themselves and their families: Habitat for Humanity and the Wounded Warrior Project.

Source: Sorg Nissan

Shade VFX Nominated for an Emmy Award for Daredevil

Daredevil, The Netflix Original Series, has been nominated for three Emmy awards in 2015, including one for Shade VFX in the category of Best Supporting Visual Effects. SCANable partnered with Shade VFX on this project by providing 3D scans of sets and actors used for camera tracking and matchmoving.

Source: Shade VFX

At the 67th Primetime Emmy Awards the Best Supporting Visual Effects category will award seamless, invisible effects in a television series and we’re utterly thrilled to be in the running alongside American Horror Story: Freak Show, Boardwalk Empire, Gotham, and The Walking Dead.

Daredevil, as we’ve discussed previously, was a unique opportunity for Shade VFX to expand both into New York and into the world of exceptional television working alongside Netflix and Marvel.

We created invisible effects that were a showpiece of the series, propelling the gritty and dark crime story forward and making your palms sweaty with action along the way.

Congratulations to the whole team that worked on Daredevil: Visual Effects Producer David Van Dyke; Visual Effects Supervisors Bryan Godwin and Karl Coyner; Senior Compositing Lead Steve J. Sanchez; Visual Effects Coordinator Julie Long; Visual Effects Editor Pedro Tarrago; Associate Compositing Lead Neiko Nagy; CG Artist Moshe Swed; and FX Technical Director Kjell Strode.

Best of luck also to the Shade VFX team and all the other vendors for their nomination for Best Visual Effects for Black Sails! Particular congratulations of course to Chip Baden, who will be representing the Team Shade in this category.

Sincere thanks to Academy of Television Arts and Sciences, Marvel and Netflix.

Leica Pegasus Backpack Wearable Reality Capture – Indoors, Outdoors, Anywhere

Ultra mobile reality capture sensor platform – authoritative professional documentation indoors or outdoors

The Leica Pegasus:Backpack is a unique wearable reality-capture sensor platform combining cameras and Lidar profilers with the lightness of a carbon fiber chassis and a highly ergonomic design. The Pegasus:Backpack enables extensive and efficient indoor or outdoor documentation at a level of accuracy that is authoritative and professional. It is designed for rapid and regular reality capture – scan registration is no longer needed for progressive scanning. The Pegasus:Backpack is also completely portable – it can be checked in as luggage on a flight – simply fly in, wear, collect, then fly out. As part of the Pegasus platform, the Pegasus:Backpack is designed to act as a sensor platform with standard external trigger and sync port outputs.

Leica Pegasus:Backpack

Map indoors, outdoors, underground, anywhere
Making progressive professional BIM documentation a reality, the Leica Pegasus:Backpack solution synchronises imagery and point cloud data together, assuring complete documentation of a building for full life-cycle management. By using SLAM (Simultaneous Localization and Mapping) technology and a high-precision IMU, it ensures accurate positioning even during GNSS outages – delivering the best known position regardless of how it is used.

With the Leica Pegasus:Backpack, professional data collection is no longer limited in outdoor areas or underground infrastructures with restricted access. Capturing a full 360° spherical view and Lidar together means you never miss an object or need to return to a project site – no matter where you are. A hardware light sensor assures the operator that all images are usable, while other functions can be verified and adjusted from the operator’s tablet device.


Main features

  • Indoor and outdoor mapping in one single solution – position agnostic
  • Marries imagery and point cloud data into a single calibrated, user-intuitive platform
  • Full calibrated spherical view
  • External trigger output and external time stamping for additional sensors
  • Light sensor for auto brightness and balance control for image capture
  • Software enables access to Esri® ArcGIS for Desktop
  • Capture and edit 3D spatial objects from images and / or within the point cloud
  • Economical with data – balances data quantity and quality, with project logistics and post-processing
  • Ultra-lightweight carbon fiber core frame with extensive ergonomic support for prolonged use
  • Real time view of the captured data through the tablet device
  • Up to 6 hours operational time with optional battery pack

Hardware features

  • Two profilers with 600,000 pts/sec, 50 m usable range and 16 channels
  • Largest sensor-to-pixel ratio on the market – 5.5 µm x 5.5 µm pixels
  • Five 4 MP cameras positioned to capture a 360° x 200° view
  • User adjustable acquisition intervals based on the distance travelled
  • NovAtel ProPak6™ provides the latest and most sophisticated precision GNSS receiver, paired with a robust, field-proven IMU, for position accuracy of 20 mm RMS after 10 seconds of outage
  • Marrying a triple band GNSS system with the latest multiple beam enabled SLAM algorithms
  • INS determination of position, velocity and orientation at a rate of 200 Hz
  • Ultra portable system fitting into one carrying case (system weight 13 kg)
  • Battery based – using four batteries in a hot swappable configuration
  • Multi-core industrial PC, 1 TB SSD, USB3 interface, ethernet, and wireless connection from the system to the tablet device

Leica Pegasus:Backpack Indoor mapping solution

Leica Pegasus:Backpack enables unimaginable applications for indoor and outdoor mapping, combining visual images with the accuracy of a point cloud for professional documentation – in a wearable, ergonomic, and ultra-light carbon fiber construction.

Software features

  • User capable of adding acquisition point objects in a Shapefile format during data acquisition
  • Advanced export capability for CAD-systems and others (DWG, DXF, SHP, GDB, DGN, E57, HPC, LAS, PTS, NMEA, KMZ)
  • Semi-automatic extraction tools
  • Sequenced images and videos for rapid navigation and object recognition
  • Software pointer “snaps” automatically and continuously onto the point cloud data from within an image
  • Immediate access to point clouds for accurate measurement
  • 3D stereoscopic view to decrease errors and increase throughput
  • Shadowed or missing 3D points can be acquired via photogrammetric processes
  • Data capture module displays the current location based on a GIS user interface
  • Data capture module displays all cameras and Lidar scans live, simultaneously
  • Data capture module enables laser scanner management and GNSS Operation
  • Live status monitoring of system during data acquisition

Software benefits

  • Lidar accuracy with image-based usability
  • Digitise spatial objects through mobile mapping
  • A more natural approach for non-professional users while offering technical interface for advanced users
  • Scalable to your applications including less accurate simple GIS needs
  • Short data acquisition time
  • High acquisition throughput
  • High post-processing throughput
  • Manageable license options – compatible with thin-client viewer
  • Esri® ArcGIS for Desktop compatible
  • Leverages Esri® relational platform for advanced features

Andersson Technologies releases SynthEyes 1502 3D Tracking Software

Andersson Technologies has released SynthEyes 1502, the latest version of its 3D tracking software, improving compatibility with Blackmagic Design’s Fusion compositing software.

Reflecting the renewed interest in Fusion
According to the official announcement: “Blackmagic Design’s recent decision to make Fusion 7 free of charge has led to increased interest in that package. While SynthEyes has exported to Fusion for many years now — for projects such as Battlestar Galactica — Andersson Technologies LLC upgraded SynthEyes’s Fusion export.”

Accordingly, the legacy Fusion exporter now supports 3D planar trackers; primitive, imported, or tracker-built meshes; imported or extracted textures; multiple cameras; and lens distortion via image maps.

The new lens distortion feature should make it possible to reproduce the distortion patterns of any real-world lens without its properties having been coded explicitly in the software or a custom plugin.

A new second exporter creates corner pin nodes in Fusion from 2D or 3D planar trackers in SynthEyes.

Other new features in SynthEyes 1502 include an error curve mini-view, a DNG/CinemaDNG file reader, and a refresh of the user interface, including the option to turn toolbar icons on or off.

Pricing and availability
SynthEyes 1502 is available now for Windows, Linux and Mac OS X. New licences cost from $249 to $999, depending on which edition you buy. The new version is free to registered users.

New features in SynthEyes 1502 include:

  • Toolbar icons are back! Some love ’em, some hate ’em. Have it your way: set the preference. Shows both text and icon by default to make it easiest, especially for new users with older tutorials. Some new and improved icons.
  • Refresh of user interface color preferences to a somewhat darker and trendier look. Other minor appearance tweaks.
  • New error curve mini-view.
  • Updated Fusion 3D exporter now exports all cameras, 3D planars, all meshes (including imported), lens distortion via image maps, etc.
  • New Fusion 2D corner pinning exporter.
  • Lens distortion export via color maps, currently for Fusion (Nuke for testing).
  • During offset tracking, a tracker can be (repeatedly) shift-dragged to different reference patterns on any frame, and SynthEyes will automatically adjust the offset channel keying.
  • Rotopanel’s Import tracker to CP (control point) now asks whether you want to import the relative motion or absolute position.
  • DNG/CinemaDNG reading. Marginal utility: DNG requires much proprietary postprocessing to get usable images, despite new luma and chroma blur settings in the image preprocessor.
  • New script to “Reparent meshes to active host” (without moving them)
  • New section in the user manual on “Realistic Compositing for 3-D”
  • New tutorials on offset tracking and Fusion.
  • Upgraded to RED 5.3 SDK (includes REDcolor4, DRAGONcolor2).
  • Faster camera and perspective drawing with large meshes and lidar scan data.
  • Windows: Installing license data no longer requires “right click/Start as Administrator”—the UAC dialog will appear instead.
  • Windows: Automatically keeps the last 3 crash dumps. Even one crash is one too many.
  • Windows: Installers, SynthEyes, and Synthia are now code-signed for “Andersson Technologies LLC” instead of showing “Unknown publisher”.
  • Mac OS X: Yosemite required that we change to the latest XCode 6—this eliminated support for OS X 10.7. Apple made 10.8 more difficult as well.

About SynthEyes

SynthEyes is a program for 3-D camera-tracking, also known as match-moving. SynthEyes can look at the image sequence from your live-action shoot and determine how the real camera moved during the shoot, what the camera’s field of view (~focal length) was, and where various locations were in 3-D, so that you can create computer-generated imagery that exactly fits into the shot. SynthEyes is widely used in film, television, commercial, and music video post-production.

What can SynthEyes do for me? You can use SynthEyes to help insert animated creatures or vehicles; fix shaky shots; extend or fix a set; add virtual sets to green-screen shoots; replace signs or insert monitor images; produce 3D stereoscopic films; create architectural previews; reconstruct accidents; do product placements after the shoot; add 3D cybernetic implants, cosmetic effects, or injuries to actors; produce panoramic backdrops or clean plates; build textured 3-D meshes from images; add 3-D particle effects; or capture body motion to drive computer-generated characters. And those are just the more common uses; we’re sure you can think of more.

What are its features? Take a deep breath! SynthEyes offers 3-D tracking, set reconstruction, stabilization, and motion capture. It handles camera tracking, 2- and 3-D planar tracking, object tracking, object tracking from reference meshes, camera+object tracking, survey shots, multiple-shot tracking, tripod (nodal, 2.5-D) tracking, mixed tripod and translating shots, stereoscopic shots, nodal stereoscopic shots, zooming shots, lens distortion, light solving. It can handle shots of any resolution (Intro version limited to 1920×1080)—HD, film, IMAX, with 8-bit, 16-bit, or 32-bit float data, and can be used on shots with thousands of frames. A keyer simplifies and speeds tracking for green-screen shots. The image preprocessor helps remove grain, compression artifacts, off-centering, or varying lighting and improve low-contrast shots. Textures can be extracted for a mesh from the image sequence, producing higher resolution and lower noise than any individual image. A revolutionary Instructible Assistant, Synthia™, helps you work faster and better, from spoken or typed natural language directions.

SynthEyes offers complete control over the tracking process for challenging shots, including an efficient workflow for supervised trackers, combined automated/supervised tracking, offset tracking, incremental solving, rolling-shutter compensation, a hard and soft path locking system, distance constraints for low-perspective shots, and cross-camera constraints for stereo. A solver phase system lets you set up complex solving strategies with a visual node-based approach (not in Intro version). You can set up a coordinate system with tracker constraints, camera constraints, an automated ground-plane-finding tool, by aligning to a mesh, a line-based single-frame alignment system, manually, or with some cool phase techniques.

Eyes starting to glaze over at all the features? Don’t worry, there’s a big green AUTO button too. Download the free demo and see for yourself.

What can SynthEyes talk to? SynthEyes is a tracking app; you’ll use the other apps you already know to generate the pretty pictures. SynthEyes exports to about 25 different 2-D and 3-D programs. The Sizzle scripting language lets you customize the standard exports, or add your own imports, exports, or tools. You can customize toolbars, color scheme, keyboard mapping, and viewport configurations too. Advanced customers can use the SyPy Python API/SDK.

[TC]2 Announces Availability of Its Most Advanced 3D/4D Body Scanner

New TC2-19 Offers Fastest and Most Accurate Measurements on the Market

Cary, NC – April 30, 2015 – [TC]², the innovation leader for the fashion industry and 3D body scanning technology, announces general availability of the TC2-19, the most advanced 3D/4D body scanning and measurement technology available on the market.

The TC2-19 provides the option of using a touch screen inside the scanner booth which allows users to “self-scan” by following on-screen instructions. The “self-scan mode” is ideal for use with the iStyling™ Full Retail Solution that provides for greater fit accuracy, styling advice, and garment customization. The body scanner interfaces with multiple CAD technologies and avatar engines.

While capturing the accurate body measurements that [TC]² 3D body scanners are well-known for today, the TC2-19 offers a “quick scan” option (2 seconds), 360° 3D body scanning and rapid processing speeds (17 seconds). The newly developed 4D mode enables 3D movement visualization inside the scanner.

“The combination of speed, accuracy, and stability enabled by software and hardware advancements make this the most advanced 3D/4D body scanner ever built,” said Dr. Mike Fralix, CEO of [TC]². “We are excited that our existing retail customers and major brands now have a scalable solution that can easily and cost effectively roll out to multiple locations.”

The [TC]² body scanner captures thousands of body measurements used by the fashion, medical, and fitness industries to make custom garments, predict customer sizing, benchmark fitness goals, and augment surgical processes. The scanner fits in a space the size of the average retail dressing room and features a booth for privacy. The TC2-19 comes with a lifetime scanner software license and PC.

The TC2-19 can be viewed at the [TC]² National Apparel Technology Center in Cary, N.C.

About [TC]²:

[TC]², Textile / Clothing and Technology Corporation, is a leader in innovation and dedicated to the advancement of the fashion and sewn products industry. Its research, consulting services, and products help brands, retailers, and manufacturers provide increased value for their customers while improving their bottom line. [TC]²’s mission focuses on the development, promotion and implementation of new technologies and ideas that significantly impact the industry.