
Behind the Work: Garage Magazine Augmented Reality App

March 10, 2015/0 Comments/in Featured, Visual Effects (VFX)/by Travis Reinke

With New York Fashion Week FW15 in high gear, SCANable got in on the action with a dream assignment collaborating with The Mill, Garage Magazine, renowned makeup artist Pat McGrath, photographer Phil Poynter and Beats by Dre to bring February’s cover models to life through Garage’s smartphone app.

Source: Mill Blog
Garage Magazine Nº8 features cover models Cara Delevingne, Kendall Jenner, Lara Stone, Binx Walton and Joan Smalls. Each model was rendered in a way that blends the magic touch of Pat McGrath with the technical skill of The Mill team. When the covers are viewed through the GARAGE Mag app, each cover star jumps off the page as an animated 3D rendition, using augmented reality to explore the intersection of print and digital.

The Covers

Led by The Mill creative director Andreas Berner, the brief was to create five different covers, each featuring a supermodel wearing a colorful set of Beats by Dre headphones. Each model was treated with a pure CG interpretation of various elements inspired by Pat McGrath’s original make-up designs: an android mask, graphite scribbles, shrink wrap, crystals, and smoke.

GARAGE Nº8

Cara Delevingne: Android Mask

Cara’s look was inspired by Pat McGrath’s make-up for the Spring/Summer Alexander McQueen show. Once the app is activated, segments of blue armor animate from behind her head and create an android effect.

Kendall Jenner: Graphite Scribbles

Kendall is taken over by a mesh of 3D graphite scribbles that swoops over her whole body and face until it engulfs her in delicate body armor. The animation follows the shape of her silhouette, creating the effect that both she and the headphones are immersed in a cage of lines.

Lara Stone: Shrink Wrap

Lara appears in airtight shrink-wrapped plastic, creating a smooth, almost liquid-looking cover. As Lara’s head emerges from the page, the rich color tone of the headphones begins to take over and envelop her completely back into the cover.

Binx Walton: Crystals

The animation appears in midair, organically building from crystals and facets to create a 3D crystal bust. The tiny, delicate, sharp crystals first appear in a mask shape, gradually taking over until she is fully covered. The look is inspired by McGrath’s make-up for the Givenchy Spring/Summer 2014 show.

Joan Smalls: Smoke

Joan’s look is inspired by the movement of smoke, an electrical storm and the northern lights. Joan emerges from the page to the sound of her taking a deep breath, as purple smoke FX continues to build and swirl around her three-dimensional head.

The Process

The idea was that each model would be a breathing, living organism consumed by the VFX. After Phil shot the models in bare make-up, SCANable captured a 3D scan on set to generate full-body CG models for post-production. This allowed The Mill’s 3D team, led by Raymond Leung, to retouch and create geometry for the app.

The Mill design team then outlined the designs on top of the retouched photographs with style frames. After a final cropping & editing session with Phil, the high res prints were sent to press.

Binx Walton: Crystals

The next step was to convert the ideas from the print component into the AR app. Garage Magazine had previously teamed up with artist Jeff Koons, creating a cover where, when using the Garage app, viewers were able to walk around a virtual sculpture. The Mill team pushed the technology further for Issue Nº8 with animation, sound and additional elements.

For the app execution, in-house 2D and 3D tools were used to create FX simulations, animations and final composites. Many of the initial ideas were limited by the technology used in real-time applications and a lack of processing power, which meant the team needed to get creative.

Lara Stone: Shrink Wrap

Kendall’s effect was particularly challenging, as her execution drew on disciplines from across the CG department, and we were up against the resolution restrictions of the technology. Each strand grown was a culmination of particle effects, animation, texturing, modeling and lighting. But with some ingenuity, and an iterative process of constant testing, updating, and retesting, her incredible look was achieved. The same approach was taken for the look of each of the models.

Kendall Jenner: Graphite Scribbles

Unique music scores by Alex Da Kid and sound FX by Finger Music were incorporated to accompany each execution, building an even stronger interactive experience. The last step was integrating all finished animations into the actual app, done by Meiré & Meiré.


This issue of Garage is currently on stands and available for purchase online. You can download the app here and watch these beauties come to life.


FARO® Launches Innovative, User-Friendly Handheld 3D Scanner to Meet Growing Demand for Portable Scanning

January 20, 2015/0 Comments/in 3D Laser Scanning, In the News, New Technology, Point Cloud/by Travis Reinke

LAKE MARY, Fla., Jan. 7, 2015 /PRNewswire/ — FARO Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announces the release of the new FARO Freestyle3D Handheld Laser Scanner, an easy, intuitive device for use in Architecture, Engineering and Construction (AEC), Law Enforcement, and other industries.

The FARO Freestyle3D is equipped with a Microsoft Surface™ tablet and offers unprecedented real-time visualization by allowing the user to view point cloud data as it is captured. The Freestyle3D scans to a distance of up to three (3) meters and captures up to 88K points per second with accuracy better than 1.5mm.  The patent-pending, self-compensating optical system also allows users to start scanning immediately with no warm up time required.

“The Freestyle3D is the latest addition to the FARO 3D laser scanning portfolio and represents another step on our journey to democratize 3D scanning,” stated Jay Freeland, FARO’s President and CEO.  “Following the successful adoption of our Focus scanners for long-range scanning, we’ve developed a scanner that provides customers with the same intuitive feel and ease-of-use in a handheld device.”

The portability of the Freestyle3D enables users to maneuver and scan in tight and hard-to-reach areas such as car interiors, under tables and behind objects, making it ideal for crime scene data collection or architectural preservation and restoration activities.  Memory-scan technology enables Freestyle3D users to pause scanning at any time and then resume data collection where they left off, without the use of artificial targets.

Mr. Freeland added, “FARO’s customers continue to stress the importance of work-flow simplicity, portability, and affordability as key drivers to their continued use and adoption of 3D laser scanning.  We have responded by developing an easy-to-use, industrial grade, handheld laser scanning device that weighs less than 2 lbs.”

The Freestyle3D can be employed as a standalone device to scan areas of interest, or used in concert with FARO’s Focus X 130 / X 330 scanners.  Point cloud data from all of these devices can be seamlessly integrated and shared with all of FARO’s software visualization tools including FARO SCENE, WebShare Cloud, and FARO CAD Zone packages.

For more information about FARO’s 3D scanning solutions visit: www.faro.com

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 that are subject to risks and uncertainties, such as statements about demand for and customer acceptance of FARO’s products, and FARO’s product development and product launches. Statements that are not historical facts or that describe the Company’s plans, objectives, projections, expectations, assumptions, strategies, or goals are forward-looking statements. In addition, words such as “is,” “will,” and similar expressions or discussions of FARO’s plans or other intentions identify forward-looking statements. Forward-looking statements are not guarantees of future performance and are subject to various known and unknown risks, uncertainties, and other factors that may cause actual results, performances, or achievements to differ materially from future results, performances, or achievements expressed or implied by such forward-looking statements. Consequently, undue reliance should not be placed on these forward-looking statements.

Factors that could cause actual results to differ materially from what is expressed or forecasted in such forward-looking statements include, but are not limited to:

  • development by others of new or improved products, processes or technologies that make the Company’s products less competitive or obsolete;
  • the Company’s inability to maintain its technological advantage by developing new products and enhancing its existing products;
  • declines or other adverse changes, or lack of improvement, in industries that the Company serves or the domestic and international economies in the regions of the world where the Company operates and other general economic, business, and financial conditions; and
  • other risks detailed in Part I, Item 1A. Risk Factors in the Company’s Annual Report on Form 10-K for the year ended December 31, 2013 and Part II, Item 1A. Risk Factors in the Company’s Quarterly Report on Form 10-Q for the quarter ended June 28, 2014.

Forward-looking statements in this release represent the Company’s judgment as of the date of this release. The Company undertakes no obligation to update publicly any forward-looking statements, whether as a result of new information, future events, or otherwise, unless otherwise required by law.

About FARO

FARO is the world’s most trusted source for 3D measurement technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, rapid prototyping, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems, worldwide. The Company’s global headquarters is located in Lake Mary, FL; its European regional headquarters in Stuttgart, Germany; and its Asia/Pacific regional headquarters in Singapore. FARO has other offices in the United States, Canada, Mexico, Brazil, Germany, the United Kingdom, France, Spain, Italy, Poland, Turkey, the Netherlands, Switzerland, Portugal, India, China, Malaysia, Vietnam, Thailand, South Korea, and Japan.

More information is available at http://www.faro.com

SOURCE FARO Technologies, Inc.


Photogrammetry and camera projection mapping in Maya made easy

January 20, 2015/0 Comments/in 3D Laser Scanning, Software, Visual Effects (VFX)/by Travis Reinke

The Mattepainting Toolkit

Photogrammetry and camera projection mapping in Maya made easy.

What’s included?

The Mattepainting Toolkit (gs_mptk) is a plugin suite for Autodesk Maya that helps artists build photorealistic 3D environments with minimal rendering overhead. It offers an extensive toolset for working with digital paintings as well as datasets sourced from photographs.

Version 3.0 is now released!

For Maya versions 2014 and 2015, version 3.0 of the toolkit adds support for Viewport 2.0 along with a number of new features. Version 2.0 is still available for Maya versions 2012-2014. A lite version of the toolkit, The Camera Projection Toolkit (gs_cptk), is available for purchase from the Autodesk Exchange. To see a complete feature comparison list between these versions, click here.

How does it work?

The Mattepainting Toolkit uses an OpenGL implementation for shader feedback within Maya’s viewport. This allows users to work directly with paintings, photos, and image sequences that are mapped onto geometry in an immediate and intuitive way.
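At its core, camera projection mapping resolves each point on the geometry back through the projection camera to find which pixel of the image it should display. A minimal sketch of that reprojection (standard pinhole-camera math, not the toolkit's actual shader code; the function name and parameters are our own):

```python
import numpy as np

def project_to_uv(points_world, view_matrix, fov_y_deg, aspect):
    """Project world-space points through a pinhole camera and return
    normalized [0, 1] UVs for sampling the projected image."""
    # World -> camera space (view_matrix is a 4x4 world-to-camera transform).
    homo = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = homo @ view_matrix.T
    # Perspective projection; the camera looks down -Z.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    x = (f / aspect) * cam[:, 0] / -cam[:, 2]
    y = f * cam[:, 1] / -cam[:, 2]
    # NDC [-1, 1] -> UV [0, 1].
    return np.stack([(x + 1.0) / 2.0, (y + 1.0) / 2.0], axis=1)

# A point straight ahead of the camera lands at the image centre (0.5, 0.5).
uv = project_to_uv(np.array([[0.0, 0.0, -2.0]]), np.eye(4), 45.0, 1.0)
```

The viewport shader effectively evaluates this per fragment, which is why textures can be repositioned interactively without re-rendering.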

Overview

The User Interface

Textures are organized in a UI that manages the shaders used for viewport display and rendering.


  • Clicking on an image thumbnail will load the texture in your preferred image editor.
  • Texture layer order is determined by a drag-and-drop list.
  • Geometry shading assignments can be quickly added and removed.

Point Cloud Data

Import Bundler and PLY point cloud data from Agisoft PhotoScan, Photosynth, or other Structure from Motion (SfM) software.


  • Point clouds can be used as a modeling guide to quickly reconstruct a physical space.
  • Cameras are automatically positioned in the scene for projection mapping.
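The PLY files exported by most SfM packages are easy to inspect. As an illustration only (a real pipeline should use a dedicated library, and this sketch handles just the ASCII variant), a minimal reader for the vertex positions might look like:

```python
def read_ply_points(path):
    """Read vertex positions from an ASCII PLY file (the point cloud
    format exported by Agisoft PhotoScan and most other SfM tools)."""
    with open(path) as f:
        assert f.readline().strip() == "ply", "not a PLY file"
        vertex_count, props, element = 0, [], None
        while True:
            line = f.readline().strip()
            if line.startswith("element"):
                element = line.split()[1]              # "vertex", "face", ...
                if element == "vertex":
                    vertex_count = int(line.split()[2])
            elif line.startswith("property") and element == "vertex":
                props.append(line.split()[-1])         # x, y, z, red, ...
            elif line == "end_header":
                break
        points = []
        for _ in range(vertex_count):
            rec = dict(zip(props, f.readline().split()))
            points.append(tuple(float(rec[a]) for a in ("x", "y", "z")))
        return points
```

Each returned (x, y, z) tuple is a candidate snapping point for rebuilding geometry over the scanned scene.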

The Viewport

A custom OpenGL shader allows textures to be displayed in high quality and manipulated interactively within the viewport.


  • Up to 16 texture layers can be displayed per shader.
  • HDR equirectangular images can be projected spherically.
  • Texture mattes can be painted directly onto geometry within the viewport.
  • Image sequences are supported so that film plates can be mapped to geometry.

Rendering

The layered textures can be rendered with any renderer available to Maya. Custom Mental Ray and V-Ray shaders included with the toolkit extend the texture blending capabilities for those renderers.


  • The texture layers can be baked down to object UVs.
  • A coverage map can be rendered to isolate which areas of the geometry are most visible to the camera.
  • For Mental Ray and V-Ray, textures can be blended based on object occlusion, distance from the projection camera, and object facing ratio.
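The last blending mode is easy to picture: a per-point weight is just the product of a facing-ratio term and a distance-falloff term. A hedged sketch (our own helper, not the toolkit's shader code; `max_dist` is an assumed cutoff parameter):

```python
import numpy as np

def projection_blend_weight(normal, point, cam_pos, max_dist):
    """Weight a projected texture by how directly the surface faces the
    projection camera (facing ratio) and how close it is (distance falloff).
    Both terms lie in [0, 1], so the product does too."""
    to_cam = np.asarray(cam_pos, float) - np.asarray(point, float)
    dist = np.linalg.norm(to_cam)
    facing = max(0.0, float(np.dot(np.asarray(normal, float), to_cam / dist)))
    falloff = max(0.0, 1.0 - dist / max_dist)
    return facing * falloff

# A surface facing the camera head-on at half the cutoff distance:
w = projection_blend_weight([0, 0, 1], [0, 0, 0], [0, 0, 5], 10.0)  # 0.5
```

Surfaces that face away from the projector or sit beyond the cutoff get zero weight, which is what lets several projections blend cleanly across one model.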

Smithsonian Displays 3D Portrait of President Obama

December 9, 2014/0 Comments/in 3D Printing, In the News, Photogrammetry, Uncategorized/by Travis Reinke

The first presidential portraits created from 3-D scan data are now on display in the Smithsonian Castle. The portraits of President Barack Obama were created based on data collected by a Smithsonian-led team of 3-D digital imaging specialists and include a digital and 3-D printed bust and life mask. A new video released today by the White House details the behind-the-scenes process of scanning, creating and printing the historic portraits. The portraits will be on view in the Commons gallery of the Castle starting today, Dec. 2, through Dec. 31. The portraits were previously displayed at the White House Maker Faire June 18.

3D Print of President Obama

The Smithsonian-led team scanned the President earlier this year using two distinct 3-D documentation processes. Experts from the University of Southern California’s Institute for Creative Technologies used their Light Stage face scanner to document the President’s face from ear to ear in high resolution. Next, a Smithsonian team used handheld 3-D scanners and traditional single-lens reflex cameras to record peripheral 3-D data to create an accurate bust.

The data captured was post-processed by 3-D graphics experts at the software company Autodesk to create final high-resolution models. The life mask and bust were then printed using 3D Systems’ Selective Laser Sintering printers.

The data and the printed models are part of the collection of the Smithsonian’s National Portrait Gallery. The Portrait Gallery’s collection includes multiple images of every U.S. president, and these portraits will add to the works the museum holds to represent Obama.

The life-mask scan of Obama joins only three other presidential life masks in the Portrait Gallery’s collection: one of George Washington created by Jean-Antoine Houdon and two of Abraham Lincoln created by Leonard Wells Volk (1860) and Clark Mills (1865). The Washington and Lincoln life masks were created using traditional plaster-casting methods. The Lincoln life masks are currently available to explore and download on the Smithsonian’s X 3D website.

The video below shows an Artec Eva being used to capture a 3D portrait of President Barack Obama along with Mobile Light Stage – in essence, eight high-end DSLRs and 50 light sources mounted in a futuristic-looking quarter-circle of aluminum scaffolding. During a facial scan, the cameras capture 10 photographs each under different lighting conditions for a total of 80 photographs. All of this happened in a single second. Afterwards, sophisticated algorithms processed this data into high-resolution 3D models. The Light Stage captured the President’s facial features from ear to ear, similar to the 1860 Lincoln life mask.

About Smithsonian X 3D

The Smithsonian publicly launched its 3-D scanning and imaging program Smithsonian X 3D in 2013 to make museum collections and scientific specimens more widely available for use and study. The Smithsonian X 3D Collection features objects from the Smithsonian that highlight different applications of 3-D capture and printing, as well as digital delivery methods for 3-D data in research, education and conservation. Objects include the Wright Flyer, a model of the remnants of supernova Cassiopeia A, a fossil whale and a sixth-century Buddha statue. The public can explore all these objects online through a free custom-built, plug-in browser and download the data for their own use in modeling programs or to print using a 3-D printer.


Endeavour: The Last Space Shuttle as she’s never been seen before.

December 2, 2014/0 Comments/in LiDAR, Visual Effects (VFX)/by Travis Reinke

[source by Mark Gibbs]

Endeavour, NASA’s fifth and final space shuttle, is now on display at the California Science Center in Los Angeles and, if you’re at all a fan of space stuff, it’s one of the most iconic and remarkable flying machines ever built.

David Knight, a trustee and board member of the foundation recently sent me a link to an amazing video of the shuttle as well as some excellent still shots.

David commented that these images were:

 “…captured by Chuck Null on the overhead crane while we were doing full-motion VR and HD/2D filming … the Payload Bay has been closed for [a] few years … one door will be opened once she’s mounted upright in simulated launch position in the new Air & Space Center.

Note that all of this is part of the Endeavour VR Project by which we are utilizing leading-edge imaging technology to film, photograph and LIDAR-scan the entire Orbiter, resulting in the most comprehensive captures of a Space Shuttle interior ever assembled – the goal is to render ultra-res VR experiences by which individuals will be able to don eyewear such as the Oculus Rift (the COO of Oculus himself came down during the capture sessions), and walk or ‘fly’ through the Orbiter, able to ‘look’ anywhere, even touch surfaces and turn switches, via eventual haptic feedback gloves etc.

The project is being Executive Produced by me, with the Producer being Ted Schilowitz (inventor of the RED camera and more), Director is Ben Grossman, who led the special effects for the most recent Star Trek movie. Truly Exciting!”

Here are the pictures …

Endeavour – the last Space Shuttle (photos: Charles Null / David Knight on behalf of the California Science Center)

 


zLense Announces World’s First Real-Time 3D Depth Mapping Technology for Broadcast Cameras

December 2, 2014/0 Comments/in In the News, Visual Effects (VFX)/by Travis Reinke

New virtual production platform dramatically lowers the cost of visual effects (VFX) for live and recorded TV, enabling visual environments previously unattainable in a live studio without any special studio set-up…

27 October 2014, London, UK – zLense, a specialist provider of virtual production platforms to the film, production, broadcast and gaming industries, today announced the launch of the world’s first depth-mapping camera solution that captures 3D data and scenery in real-time and adds a 3D layer, optimized for broadcasters and film productions, to the footage. The groundbreaking, industry-first technology processes spatial information, making new, truly three-dimensional compositing methods possible and enabling production teams to create stunning 3D effects and utilise state-of-the-art CGI in live TV or pre-recorded transmissions – with no special studio set up.

Utilising the solution, directors can produce unique simulated and augmented reality worlds, generating and combining dynamic virtual reality (VR) and augmented reality (AR) effects in live studio or outside broadcast transmissions. The unique depth-sensing technology allows for a full 360-degree freedom of camera movement and gives presenters and anchors greater liberty of performance. Directors can combine dolly, jib arm and handheld shots as presenters move within, interact with and control the virtual environment – and, in the near future, they will do so using only natural gestures and motions.

“We’re poised to shake up the Virtual Studio world by putting affordable high-quality real-time CGI into the hands of broadcasters,” said Bruno Gyorgy, President of zLense. “This unique world-leading technology changes the face of TV broadcasting as we know it, giving producers and programme directors access to CGI tools and techniques that transform the audience viewing experience.”

Doing away with the need for expensive match-moving work, the zLense Virtual Production platform dramatically speeds up the 3D compositing process, making it possible for directors to mix CGI and live action shots in real-time pre-visualization and take the production values of their studio and OB live transmissions to a new level. The solution is quick to install, requires just a single operator, and is operable in almost any studio lighting.

“With minimal expense and no special studio modifications, local and regional TV channels can use this technology to enhance their news and weather graphics programmes – unleashing live augmented reality, interactive simulations and visualisations that make the delivery of infographics exciting, enticing and totally immersive for viewers,” he continued.

The zLense Virtual Production platform combines depth-sensing technology and image-processing in a standalone camera rig that captures the 3D scene and camera movement. The ‘matte box’ sensor unit, which can be mounted on almost any camera rig, removes the need for external tracking devices or markers, while the platform’s built-in rendering engine cuts the cost and complexity of using visual effects in live and pre-recorded TV productions. The zLense Virtual Production platform can be used alongside other, pre-existing, rendering engines, VR systems and tracking technologies.

The VFX real-time capabilities enabled by the zLense Virtual Production platform include:

  • Volumetric effects
  • Additional motion and depth blur
  • Shadows and reflections to create convincing state-of-the-art visual appearances
  • Dynamic relighting
  • Realistic 3D distortions
  • Creation of a fully interactive virtual environment with interactive physical particle simulation
  • Wide shot and in-depth compositions with full body figures
  • Real-time Z-map and 3D models of the picture

For more information on the zLense features and functionalities, please visit: zlense.com/features

About Zinemath
Zinemath, a leader in re-inventing how professional moving images will be processed, is the producer of zLense, a revolutionary real-time depth-sensing and modelling platform that adds third-dimensional information to the filming process. zLense is the first depth-mapping camera accessory optimized for broadcasters and cinema previsualization. With an R&D center in Budapest, Zinemath, part of the Luxembourg-based Docler Group, is spreading this new vision to all industries in the film, television and mobile technology sectors.

For more information please visit: www.zlense.com


Make a 3D Printed Kit with Meshmixer 2.7

November 26, 2014/0 Comments/in 3D Printing, Software/by Travis Reinke

[source]

Meshmixer 2.7 was released today, full of new tools for 3D printing. Here I use the new version of the app to create a 3D printed kit of parts that can be printed in one job and assembled together with pin connectors.

To do this I used several of the new features to make this a fast and painless process. I dug up a 123D Catch capture I took of a bronze sculpture of John Muir. I found it in my dentist’s office – it turns out my dentist sculpted it. I thought I’d make my own take on it by slicing it up and connecting it back together so it can be interactive, with the pieces swiveling around the pin connectors.

I made use of the new pin-connector solid parts included in the release (in the Miscellaneous bin). I also used the powerful Layout/Packing tool to lay out parts on the print bed as a kit of parts to print in one print job. The addition of the Orthographic view is also incredibly helpful when creating the kit and laying it out within the print volume of my Replicator 2X. An Instructable is in progress with a how-to for a 3D printed kit such as this.

 

This new release has some other nice updates. Check em out below:

– New Layout/Packing Tool under Analysis for 3D print bed layout

– New Deviation Tool for visualizing max distance between two objects (i.e. original & reduced version)

– New Clearance Tool for visualizing min distance between two objects (i.e. to verify tolerances; both tools are under the Analysis menu and require a selection of two objects)

– Reduce Tool now supports reducing to a target triangle count or an (approximate) maximum deviation

– Support Generation improvements

– Better DLP/SLA preset

– Can now draw horizontal bars in support generator

– Ctrl-click now deletes all support segments above click point

– Shift-ctrl-click to only delete clicked segment

– Solid Part dropping now has built-in option to boolean add/subtract

– Can set operation-type preference during Convert To Solid Part

– Can set option to preserve physical dimensions during Convert To Solid Part

– New Snapping options in Measure tool

– Can now turn on Print Bed rendering in Modeling view (under View menu)

– Must enter Print View to change/configure printer

– Improved support for low-end graphics cards

For your kit of parts, try out the new pin connectors included in the Misc. parts library. One is a negative (boolean subtract it when dropping the part). The other you can drop on the print bed for printing by itself. It fits into the negative hole. You can also author your own parts and they will drop at a fixed scale (so they fit!).

Let us know what kind of kits you create…maybe we can add in your connectors in a future release. (There’s a free 3d print and t-shirt involved). Let us know at meshmixer@autodesk.com.

Have fun!!


Leica Geosystems HDS Introduces Patent-Pending Innovations for Laser Scanning Project Efficiency

November 10, 2014/0 Comments/in LiDAR, New Technology, News, Point Cloud, Software, Visual Effects (VFX)/by Travis Reinke

With Leica Cyclone 9.0, the industry-leading point cloud solution for processing laser scan data, Leica Geosystems HDS introduces major, patent-pending innovations for greater project efficiency. Innovations benefit both field and office via significantly faster, easier scan registration, plus quicker deliverable creation thanks to better 2D and 3D drafting tools and steel modelling. Cyclone 9.0 allows users to scale easily to larger, more complex projects while consistently ensuring high-quality deliverables.

Greatest advancement in office scan registration since cloud-to-cloud registration
When Leica Geosystems pioneered cloud-to-cloud registration, it enabled users – for the first time – to accurately execute laser scanning projects without having to physically place special targets around the scene, scan them, and model them in the office. With cloud-to-cloud registration software, users take advantage of overlaps among scans to register them together.

“The cloud-to-cloud registration approach has delivered significant logistical benefits onsite and time savings for many projects. We’ve constantly improved it, but the new Automatic Scan Alignment and Visual Registration capabilities in Cyclone 9.0 represent the biggest advancement in cloud-to-cloud registration since we introduced it,” explained Dr. Chris Thewalt, VP Laser Scanning Software. “Cyclone 9.0 lets users benefit from targetless scanning more often by performing the critical scan registration step far more efficiently in the office for many projects. As users increase the size and scope of their scanning projects, Cyclone 9.0 pays even bigger dividends. Any user who registers laser scan data will find great value in these capabilities.”

With the push of a button, Cyclone 9.0 automatically processes scans, and digital images if available, to create groups of overlapping scans that are initially aligned to each other. Once scan alignment is completed, algorithmic registration is applied for final registration. This new workflow option can be used in conjunction with target registration methods as well. These combined capabilities not only make the most challenging registration scenarios feasible, but also exponentially faster. Even novice users will appreciate their ease-of-use and ready scalability beyond small projects.
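Cyclone's registration algorithms are proprietary and patent-pending, so the sketch below is only a conceptual illustration of what cloud-to-cloud alignment does: once corresponding points in two overlapping scans have been matched, the rigid transform that aligns them has a closed-form solution (the Kabsch algorithm, the same building block used inside ICP-style registration). The function name and the use of NumPy are my own assumptions, not Cyclone internals.

```python
import numpy as np

def kabsch(P, Q):
    """Given corresponding points P and Q (N x 3 arrays), return the
    rotation R and translation t minimizing ||(P @ R.T + t) - Q||."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In practice, cloud-to-cloud registration must also find the correspondences in the first place (e.g., by iterating nearest-neighbor matching, as ICP does); the closed-form step above is only the alignment core.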

Power user Marta Wren, technical specialist at Plowman Craven Associates (PCA, a leading UK chartered surveying firm), found that Cyclone 9.0’s Visual Registration tools alone sped up registration processing of scans by up to four times (4X) compared with previous methods. PCA uses laser scanning for civil infrastructure, commercial property, forensics, entertainment, and Building Information Modelling (BIM) applications.

New intuitive 2D and 3D drafting from laser scans
For civil applications, new roadway alignment drafting tools let users import LandXML-based roadway alignments or use simple polylines imported or created in Cyclone. These tools allow users to easily create cross section templates using feature codes, as well as copy them to the next station and visually adjust them to fit roadway conditions at the new location. A new vertical exaggeration tool in Cyclone 9.0 allows users to clearly see subtle changes in elevation; linework created between cross sections along the roadway can be used as breaklines for surface meshing or for 2D maps and drawings in other applications.
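Cyclone's drafting tools are proprietary, but the vertical exaggeration idea itself is simple: scale only the elevation component at display time so that subtle grade changes become visible. A minimal sketch (the function name is mine, not Cyclone's):

```python
def exaggerate_z(points, factor):
    """Scale only the elevation of (x, y, z) points for display,
    so subtle changes in grade become easier to see."""
    return [(x, y, z * factor) for (x, y, z) in points]
```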

For 2D drafting of forensic scenes, building and BIM workflows, a new Quick Slice tool streamlines the process of creating a 2D sketch plane for drafting items, such as building footprints and sections, into just one step. A user only needs to pick one or two points on the face of a building to get started. This tool can also be used to quickly analyse the quality of registrations by visually checking where point clouds overlap.

Also included in Cyclone 9.0 are powerful, automatic point extraction features first introduced in Cyclone II TOPO and Leica CloudWorx. These include efficient SmartPicks for automatically finding bottom, top, and tie point locations and Points-on-a-Grid for automatically placing up to a thousand scan survey points on a grid for ground surfaces or building faces.

Simplified steel fitting of laser scan data
For plant, civil, building and BIM applications, Cyclone 9.0 also introduces a patent-pending innovation for modelling steel from point cloud data more quickly and easily. Unlike time-consuming methods that require either processing an entire available cloud to fit a steel shape or isolating a cloud section before fitting, this new tool lets users quickly and accurately model specific steel elements directly within congested point clouds. Users only need to make two picks along a steel member to model it. Shapes include wide flange, channel, angle, tee, and rectangular tube shapes.

Faster path to deliverables
Leica Cyclone 9.0 also provides users with valuable, new capabilities for faster creation of deliverables for civil, architectural, BIM, plant, and forensic scene documentation from laser scans and High-Definition Surveying™ (HDS™).

Availability
Leica Cyclone 9.0 is available today. Further information about the Leica Cyclone family of products can be found at http://hds.leica-geosystems.com, and users may download new product versions online from this website or purchase or rent licenses from SCANable, your trusted Leica Geosystems representative. Contact us today for pricing on software and training.

Capturing Real-World Environments for Virtual Cinematography

August 28, 2014/0 Comments/in 3D Laser Scanning, LiDAR, Photogrammetry, Visual Effects (VFX)/by Travis Reinke

[source] written by Matt Workman

Virtual Reality Cinematography

As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task well suited to a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting, using 360-degree HDR light probes captured with DSLRs on nodal tripod heads.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be later used to construct the space digitally.
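Point cloud file formats vary by vendor, but the record a scanner produces is essentially "position plus color" per point. As a minimal illustration, here is a sketch that parses a simple ASCII `x y z r g b` format (the format choice and all names are assumptions for illustration, not tied to any particular scanner):

```python
from dataclasses import dataclass

@dataclass
class ScanPoint:
    x: float
    y: float
    z: float
    r: int
    g: int
    b: int

def parse_xyzrgb(lines):
    """Parse 'x y z r g b' ASCII lines into ScanPoint records,
    skipping malformed lines (e.g. headers)."""
    points = []
    for line in lines:
        parts = line.split()
        if len(parts) != 6:
            continue
        try:
            x, y, z = (float(p) for p in parts[:3])
            r, g, b = (int(p) for p in parts[3:])
        except ValueError:
            continue
        points.append(ScanPoint(x, y, z, r, g, b))
    return points
```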

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32-bit float HDR) 360-degree probes of the location to record the lighting. This process captures the lighting in the space at a very high dynamic range, and the result is later reprojected onto the geometry constructed from the LIDAR data.
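Reprojection pipelines differ between packages, but latlong (equirectangular) probes like these all rely on the same mapping between a 3D direction and a 2D texture coordinate. One common convention is sketched below; axis conventions vary between renderers, so treat this as illustrative rather than any particular tool's formula:

```python
import math

def dir_to_latlong(x, y, z):
    """Map a unit direction vector to equirectangular (latlong) UV in [0, 1].

    One common convention: +y is up, and v runs from 0 at the zenith
    to 1 at the nadir.
    """
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v
```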

Real-time 3D asset being lit by a baked HDR environment

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space using the techniques outlined above as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, as he/she would on a real set, and the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera, and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment, captured in 32-bit float with 8K textures and played back in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were virtual texture engines that allow objects to carry MANY 8K textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 into a custom UDIM Unity shader, supported by Amplify Creations’ beta Unity plug-in.
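The UDIM model referenced above assigns each integer UV tile a four-digit number, starting at 1001 and advancing by one per unit of U (ten columns) and by ten per unit of V. A minimal sketch of the standard numbering (the function name is my own):

```python
def udim_tile(u, v):
    """Return the UDIM tile number for a UV coordinate.

    Tiles run 1001..1010 along U, then step by 10 per unit of V.
    """
    iu, iv = int(u), int(v)
    if not (0 <= iu <= 9) or iv < 0:
        raise ValueError("u must lie in [0, 10) and v must be >= 0")
    return 1001 + iu + 10 * iv
```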

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8K texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse of the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view, where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. A traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, environments with their textures, real world lighting, and viewing them in real time in virtual reality.

The industry is evolving at an incredibly rapid pace, and so must the creatives working in it, especially the person responsible for the camera and the lighting: the director of photography.


OMOTE Real-time Face Tracking 3D Projection Mapping

August 19, 2014/0 Comments/in Animation, New Technology, Visual Effects (VFX)/by Travis Reinke

Forget the faces of historic monuments, the new frontier of 3D projection mapping is the faces of humans.

The piece was created by Nobumichi Asai and friends. Technical details behind the process are scant at the moment, but from what can be found in this Tumblr post, it’s clear that step one is a 3D scan of the model’s face.

Here is the translated text from that post:

I will now explain how this face mapping was made.
The title OMOTE, meaning “face” or “mask,” comes from Noh theatre, and the approach is modelled on the Noh mask: the face is covered by creating a “surface.” Because the output had to represent very delicate make-up art, pursuing accuracy was an essential theme. The first step was a 3D laser scan of the model’s face.

I suspect that a structured light scanner was used to capture the geometry of the model’s face rather than a 3D laser scanner. Nonetheless, this is a very cool application of 3D projection mapping.

3D face scanning for projection mapping

OMOTE / REAL-TIME FACE TRACKING & PROJECTION MAPPING. from something wonderful on Vimeo.
