SCANable

Behind the Work: Garage Magazine Augmented Reality App

March 10, 2015 / 0 Comments / in Featured, Visual Effects (VFX) / by Travis Reinke

With New York Fashion Week FW15 in high gear, SCANable got in on the action with a dream assignment collaborating with The Mill, Garage Magazine, renowned makeup artist Pat McGrath, photographer Phil Poynter and Beats by Dre to bring February’s cover models to life through Garage’s smartphone app.

Source: Mill Blog
Garage Magazine Nº8 features cover models Cara Delevingne, Kendall Jenner, Lara Stone, Binx Walton and Joan Smalls. Each model was rendered in a way that blends the magic touch of Pat McGrath and the technical skills of The Mill team. When the covers are viewed using the GARAGE Mag app, each of the cover stars literally jumps out of the page as a 3D rendition and animation, using augmented reality to explore the intersection of print and digital.

The Covers

Led by The Mill creative director Andreas Berner, the brief was to create five different covers, each featuring a supermodel wearing a colorful set of Beats by Dre headphones. Each model was treated with a pure CG interpretation of various elements inspired by Pat McGrath’s original make-up designs: android mask, graphite scribbles, shrink wrap, crystals, and smoke elements.

GARAGE Nº8

Cara Delevingne: Android Mask

Cara’s look was inspired by Pat McGrath’s make-up for the Spring/Summer Alexander McQueen show. Once the app is activated, segments of blue armor animate from behind her head and create an android effect.

Kendall Jenner: Graphite Scribbles

Kendall is taken over by a mesh of 3D graphite scribbles that swoops over her whole body and face until it engulfs her in delicate body armor. The animation follows the shape of her silhouette, creating the effect that both she and the headphones are immersed in a cage of lines.

Lara Stone: Shrink Wrap

Lara appears in airtight shrink-wrap plastic, creating a smooth and almost liquid-looking cover. As Lara’s head emerges from the page, the rich color tone of the headphones begins to take over and envelop her completely back into the cover.

Binx Walton: Crystals

The animation appears in midair, organically building from crystals and facets to create a 3D crystal bust. The tiny, delicate, sharp crystals appear in a mask shape, gradually taking over so she is fully covered. The look is inspired by McGrath’s makeup from the Givenchy Spring/Summer 2014 show.

Joan Smalls: Smoke

Joan’s look is inspired by the movement of smoke, an electric storm and the northern lights. Joan emerges from the page to the sound of her taking a deep breath. Purple smoke FX continue to build and swirl around her three-dimensional head.

The Process

The idea was that each model would be a breathing, living organism consumed by the nature of the VFX. After Phil shot the models with bare makeup, SCANable captured a 3D scan on-set to generate full body CG models for post production. This allowed The Mill’s 3D team, led by Raymond Leung, to retouch and create geometry for the app.

The Mill design team then outlined the designs on top of the retouched photographs with style frames. After a final cropping & editing session with Phil, the high res prints were sent to press.

Binx Walton: Crystals

The next step was to convert the ideas from the print component into the AR app. Garage Magazine had previously teamed up with artist Jeff Koons, creating a cover where, when using the Garage app, viewers were able to walk around a virtual sculpture. The Mill team pushed the technology further for Issue Nº8 with animation, sound and additional elements.

For the app execution, in-house 2D and 3D tools were used to create FX simulations, animations and final composites. Many of the initial ideas were limited by the technology used in real-time applications and a lack of processing power, which meant the team needed to get creative.

Lara Stone: Shrink Wrap

Kendall’s effect was particularly challenging, as her execution utilized disciplines across the CG department. We were up against resolution restrictions within the technology. Each strand grown was a culmination of particle effects, animation, texturing, modeling and lighting. But with some clever ingenuity, and an iterative process of constant testing, updating, and retesting, her incredible look was achieved. This was the approach taken for the look of each of the models.

Kendall Jenner: Graphite Scribbles

Unique music scores by Alex Da Kid and sound FX by Finger Music were incorporated to accompany each execution, building an even stronger interactive experience. The last step was integrating all finished animations into the actual app, done by Meiré & Meiré.


This issue of Garage is currently on stands and available for purchase online. You can download the app here and watch these beauties come to life.


Photogrammetry and camera projection mapping in Maya made easy

January 20, 2015 / 0 Comments / in 3D Laser Scanning, Software, Visual Effects (VFX) / by Travis Reinke

The Mattepainting Toolkit

Photogrammetry and camera projection mapping in Maya made easy.

What’s included?

The Mattepainting Toolkit (gs_mptk) is a plugin suite for Autodesk Maya that helps artists build photorealistic 3D environments with minimal rendering overhead. It offers an extensive toolset for working with digital paintings as well as datasets sourced from photographs.

Version 3.0 is now released!

For Maya versions 2014 and 2015, version 3.0 of the toolkit adds support for Viewport 2.0 and a number of new features. Version 2.0 is still available for Maya versions 2012-2014. A lite version of the toolkit, the Camera Projection Toolkit (gs_cptk), is available for purchase from the Autodesk Exchange. To see a complete feature comparison list between these versions, click here.

How does it work?

The Mattepainting Toolkit uses an OpenGL implementation for shader feedback within Maya’s viewport. This allows users to work directly with paintings, photos, and image sequences that are mapped onto geometry in an immediate and intuitive way.
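The toolkit’s own viewport shader is proprietary, but the basic camera-projection setup it builds on can be sketched with stock Maya nodes from Python. This is a minimal sketch, not the gs_mptk API; the node and attribute names are standard Maya ones, and the texture path and geometry name are examples.

import maya.cmds as cmds

# A camera to project from, and a file texture holding the painting/photo.
cam_transform, cam_shape = cmds.camera(name='projCam')
file_node = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(file_node + '.fileTextureName', '/path/to/matte.exr', type='string')

# A projection node maps the texture through the camera (8 = perspective).
proj = cmds.shadingNode('projection', asTexture=True)
cmds.setAttr(proj + '.projType', 8)
cmds.connectAttr(file_node + '.outColor', proj + '.image')
cmds.connectAttr(cam_shape + '.message', proj + '.linkedCamera')

# Drive a simple surface shader with the projected color and assign it.
shader = cmds.shadingNode('surfaceShader', asShader=True)
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name=shader + 'SG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
cmds.connectAttr(proj + '.outColor', shader + '.outColor')
cmds.sets('pPlane1', edit=True, forceElement=sg)  # assign to example geometry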

Overview

The User Interface

Textures are organized in a UI that manages the shaders used for viewport display and rendering.


  • Clicking on an image thumbnail will load the texture in your preferred image editor.
  • Texture layer order is determined by a drag-and-drop list.
  • Geometry shading assignments can be quickly added and removed.

Point Cloud Data

Import Bundler and PLY point cloud data from Agisoft PhotoScan, Photosynth, or other Structure from Motion (SfM) software.


  • Point clouds can be used as a modeling guide to quickly reconstruct a physical space.
  • Cameras are automatically positioned in the scene for projection mapping (a minimal camera-import sketch follows this list).
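As a concrete illustration of the camera import, here is a minimal, hypothetical reader for the camera block of a Bundler v0.3 .out file (one of the formats SfM tools export); the field layout follows the published Bundler format, and the file name is an example.

import numpy as np

def read_bundler_cameras(path):
    """Yield focal length, rotation and world position per camera."""
    with open(path) as f:
        f.readline()  # header comment: "# Bundle file v0.3"
        num_cams, _num_points = map(int, f.readline().split())
        for _ in range(num_cams):
            focal, k1, k2 = map(float, f.readline().split())
            R = np.array([list(map(float, f.readline().split())) for _ in range(3)])
            t = np.array(list(map(float, f.readline().split())))
            yield focal, R, -R.T @ t  # camera center in world space

for i, (focal, R, center) in enumerate(read_bundler_cameras('bundle.out')):
    print('camera %d: focal %.1f at %s' % (i, focal, center))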

The Viewport

A custom OpenGL shader allows textures to be displayed in high quality and manipulated interactively within the viewport.


  • Up to 16 texture layers can be displayed per shader.
  • HDR equirectangular images can be projected spherically (see the sketch after this list).
  • Texture mattes can be painted directly onto geometry within the viewport.
  • Image sequences are supported so that film plates can be mapped to geometry.
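For reference, the math behind spherical projection of an equirectangular (lat-long) image is compact. The sketch below maps a world-space direction to a lat-long UV; the axis convention is one common choice and varies between packages.

import math

def direction_to_latlong_uv(dx, dy, dz):
    """Map a direction vector to (u, v) in a lat-long image, Y up."""
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / length, dy / length, dz / length
    u = 0.5 + math.atan2(dx, -dz) / (2.0 * math.pi)  # longitude wraps in U
    v = 0.5 + math.asin(dy) / math.pi                # latitude spans V
    return u, v

print(direction_to_latlong_uv(0.0, 0.0, -1.0))  # forward maps to (0.5, 0.5)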

Rendering

The layered textures can be rendered with any renderer available to Maya. Custom Mental Ray and V-Ray shaders included with the toolkit extend the texture blending capabilities for those renderers.


  • The texture layers can be baked down to object UVs.
  • A coverage map can be rendered to isolate which areas of the geometry are most visible to the camera.
  • For Mental Ray and V-Ray, textures can be blended based on object occlusion, distance from the projection camera, and object facing ratio, as sketched below.
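The toolkit’s blending shaders themselves are not public; the sketch below only illustrates the general idea of weighting a projection by facing ratio and camera distance (parameter names and falloff choices are illustrative).

import numpy as np

def projection_weight(normal, to_camera, distance, falloff=2.0, max_dist=50.0):
    """Weight a projected texture by how directly the surface faces the
    projection camera and how close it is; 0 means the layer is ignored."""
    n = normal / np.linalg.norm(normal)
    v = to_camera / np.linalg.norm(to_camera)
    facing = max(0.0, float(n @ v)) ** falloff      # facing-ratio term
    nearness = max(0.0, 1.0 - distance / max_dist)  # distance falloff term
    return facing * nearness

# A point squarely facing a camera 10 units away gets a high weight (0.8):
print(projection_weight(np.array([0., 1., 0.]), np.array([0., 1., 0.]), 10.0))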

Endeavour: The Last Space Shuttle as she’s never been seen before.

December 2, 2014 / 0 Comments / in LiDAR, Visual Effects (VFX) / by Travis Reinke

[source by Mark Gibbs]

Endeavour, NASA’s fifth and final space shuttle, is now on display at the California Science Center in Los Angeles and, if you’re at all a fan of space stuff, it’s one of the most iconic and remarkable flying machines ever built.

David Knight, a trustee and board member of the foundation, recently sent me a link to an amazing video of the shuttle as well as some excellent still shots.

David commented that these images were:

 “…captured by Chuck Null on the overhead crane while we were doing full-motion VR and HD/2D filming … the Payload Bay has been closed for [a] few years … one door will be opened once she’s mounted upright in simulated launch position in the new Air & Space Center.

Note that all of this is part of the Endeavour VR Project by which we are utilizing leading-edge imaging technology to film, photograph and LIDAR-scan the entire Orbiter, resulting in the most comprehensive captures of a Space Shuttle interior ever assembled – the goal is to render ultra-res VR experiences by which individuals will be able to don eyewear such as the Oculus Rift (the COO of Oculus himself came down during the capture sessions), and walk or ‘fly’ through the Orbiter, able to ‘look’ anywhere, even touch surfaces and turn switches, via eventual haptic feedback gloves etc.

The project is being Executive Produced by me, with the Producer being Ted Schilowitz (inventor of the RED camera and more), Director is Ben Grossman, who led the special effects for the most recent Star Trek movie. Truly Exciting!”

Here are the pictures …

Endeavour, the last Space Shuttle (photos: Charles Null / David Knight, on behalf of the California Science Center)


zLense Announces World’s First Real-Time 3D Depth Mapping Technology for Broadcast Cameras

December 2, 2014 / 0 Comments / in In the News, Visual Effects (VFX) / by Travis Reinke

New virtual production platform dramatically lowers the cost of visual effects (VFX) for live and recorded TV, enabling visual environments previously unattainable in a live studio without any special studio set-up…

27 October 2014, London, UK – zLense, a specialist provider of virtual production platforms to the film, production, broadcast and gaming industries, today announced the launch of the world’s first depth-mapping camera solution that captures 3D data and scenery in real-time and adds a 3D layer, optimized for broadcasters and film productions, to the footage. The groundbreaking, industry-first technology processes space information, making new, truly three-dimensional compositing methods possible and enabling production teams to create stunning 3D effects and utilise state-of-the-art CGI in live TV or pre-recorded transmissions – with no special studio set-up.

Utilising the solution, directors can produce unique simulated and augmented reality worlds, generating and combining dynamic virtual reality (VR) and augmented reality (AR) effects in live studio or outside broadcast transmissions. The unique depth-sensing technology allows for full 360-degree freedom of camera movement and gives presenters and anchors greater liberty of performance. Directors can combine dolly, jib arm and handheld shots as presenters move within, interact with and control the virtual environment, in the near future using only natural gestures and motions.

“We’re poised to shake up the Virtual Studio world by putting affordable high-quality real-time CGI into the hands of broadcasters,” said Bruno Gyorgy, President of zLense. “This unique world-leading technology changes the face of TV broadcasting as we know it, giving producers and programme directors access to CGI tools and techniques that transform the audience viewing experience.”

Doing away with the need for expensive match-moving work, the zLense Virtual Production platform dramatically speeds up the 3D compositing process, making it possible for directors to mix CGI and live action shots in real-time pre-visualization and take the production values of their studio and OB live transmissions to a new level. The solution is quick to install, requires just a single operator, and is operable in almost any studio lighting.

“With minimal expense and no special studio modifications, local and regional TV channels can use this technology to enhance their news and weather graphics programmes – unleashing live augmented reality, interactive simulations and visualisations that make the delivery of infographics exciting, enticing and totally immersive for viewers,” he continued.

The zLense Virtual Production platform combines depth-sensing technology and image-processing in a standalone camera rig that captures the 3D scene and camera movement. The ‘matte box’ sensor unit, which can be mounted on almost any camera rig, removes the need for external tracking devices or markers, while the platform’s built-in rendering engine cuts the cost and complexity of using visual effects in live and pre-recorded TV productions. The zLense Virtual Production platform can be used alongside other, pre-existing, rendering engines, VR systems and tracking technologies.

The VFX real-time capabilities enabled by the zLense Virtual Production platform include:

  • Volumetric effects
  • Additional motion and depth blur
  • Shadows and reflections to create convincing state-of-the-art visual appearances
  • Dynamic relighting
  • Realistic 3D distortions
  • Creation of a fully interactive virtual environment with interactive physical particle simulation
  • Wide shot and in-depth compositions with full body figures
  • Real-time Z-map and 3D models of the picture

For more information on the zLense features and functionalities, please visit: zlense.com/features

About Zinemath
Zinemath, a leader in reinventing how professional moving images will be processed in the future, is the producer of zLense, a revolutionary real-time depth-sensing and modelling platform that adds third-dimensional information to the filming process. zLense is the first depth-mapping camera accessory optimized for broadcasters and cinema previsualization. With an R&D center in Budapest, Zinemath, part of the Luxembourg-based Docler Group, is spreading this new vision to all industries in the film, television and mobile technology sectors.

For more information please visit: www.zlense.com


Leica Geosystems HDS Introduces Patent-Pending Innovations for Laser Scanning Project Efficiency

November 10, 2014 / 0 Comments / in LiDAR, New Technology, News, Point Cloud, Software, Visual Effects (VFX) / by Travis Reinke

With Leica Cyclone 9.0, the industry-leading point cloud solution for processing laser scan data, Leica Geosystems HDS introduces major, patent-pending innovations for greater project efficiency. These innovations benefit both field and office via significantly faster, easier scan registration, plus quicker deliverable creation thanks to better 2D and 3D drafting tools and steel modelling. Cyclone 9.0 allows users to scale easily for larger, more complex projects while consistently ensuring high-quality deliverables.

Greatest advancement in office scan registration since cloud-to-cloud registration
When Leica Geosystems pioneered cloud-to-cloud registration, it enabled users – for the first time – to accurately execute laser scanning projects without having to physically place special targets around the scene, scan them, and model them in the office. With cloud-to-cloud registration software, users take advantage of overlaps among scans to register them together.

“The cloud-to-cloud registration approach has delivered significant logistical benefits onsite and time savings for many projects. We’ve constantly improved it, but the new Automatic Scan Alignment and Visual Registration capabilities in Cyclone 9.0 represent the biggest advancement in cloud-to-cloud registration since we introduced it,” explained Dr. Chris Thewalt, VP Laser Scanning Software. “Cyclone 9.0 lets users benefit from targetless scanning more often by performing the critical scan registration step far more efficiently in the office for many projects. As users increase the size and scope of their scanning projects, Cyclone 9.0 pays even bigger dividends. Any user who registers laser scan data will find great value in these capabilities.”

With the push of a button, Cyclone 9.0 automatically processes scans, and digital images if available, to create groups of overlapping scans that are initially aligned to each other. Once scan alignment is completed, algorithmic registration is applied for final registration. This new workflow option can be used in conjunction with target registration methods as well. These combined capabilities not only make the most challenging registration scenarios feasible, but also make them exponentially faster. Even novice users will appreciate the ease of use and ready scalability beyond small projects.
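Cyclone’s registration engine is proprietary, but the basic cloud-to-cloud idea of iteratively minimizing distances between overlapping scans can be illustrated with the open-source Open3D library. A minimal sketch, assuming two roughly pre-aligned scans; the file names and correspondence threshold are examples.

import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud('scan_station_1.ply')  # hypothetical scans
target = o3d.io.read_point_cloud('scan_station_2.ply')

threshold = 0.05   # max correspondence distance, in scene units
init = np.eye(4)   # rough initial alignment (identity if pre-aligned)
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness)                     # fraction of points matched
source.transform(result.transformation)   # register source onto target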

Power user Marta Wren, technical specialist at Plowman Craven Associates (PCA, a leading UK chartered surveying firm), found that Cyclone 9.0’s Visual Registration tools alone sped up registration processing of scans by up to four times (4X) compared with previous methods. PCA uses laser scanning for civil infrastructure, commercial property, forensics, entertainment, and Building Information Modelling (BIM) applications.

New intuitive 2D and 3D drafting from laser scans
For civil applications, new roadway alignment drafting tools let users import LandXML-based roadway alignments or use simple polylines imported or created in Cyclone. These tools allow users to easily create cross section templates using feature codes, as well as copy them to the next station and visually adjust them to fit roadway conditions at the new location. A new vertical exaggeration tool in Cyclone 9.0 allows users to clearly see subtle changes in elevation; linework created between cross sections along the roadway can be used as breaklines for surface meshing or for 2D maps and drawings in other applications.

For 2D drafting of forensic scenes, building and BIM workflows, a new Quick Slice tool streamlines the process of creating a 2D sketch plane for drafting items, such as building footprints and sections, into just one step. A user only needs to pick one or two points on the face of a building to get started. This tool can also be used to quickly analyse the quality of registrations by visually checking where point clouds overlap.

Also included in Cyclone 9.0 are powerful, automatic point extraction features first introduced in Cyclone II TOPO and Leica CloudWorx. These include efficient SmartPicks for automatically finding bottom, top, and tie point locations and Points-on-a-Grid for automatically placing up to a thousand scan survey points on a grid for ground surfaces or building faces.

Simplified steel fitting of laser scan data
For plant, civil, building and BIM applications, Cyclone 9.0 also introduces a patent-pending innovation for modelling steel from point cloud data more quickly and easily. Unlike time-consuming methods that require either processing an entire available cloud to fit a steel shape or isolating a cloud section before fitting, this new tool lets users quickly and accurately model specific steel elements directly within congested point clouds. Users only need to make two picks along a steel member to model it. Shapes include wide flange, channel, angle, tee, and rectangular tube shapes.

Faster path to deliverables
Leica Cyclone 9.0 also provides users with valuable, new capabilities for faster creation of deliverables for civil, architectural, BIM, plant, and forensic scene documentation from laser scans and High-Definition Surveying™ (HDS™).

Availability
Leica Cyclone 9.0 is available today. Further information about the Leica Cyclone family of products can be found at http://hds.leica-geosystems.com, and users may download new product versions online from this website or purchase or rent licenses from SCANable, your trusted Leica Geosystems representative. Contact us today for pricing on software and training.


Capturing Real-World Environments for Virtual Cinematography

August 28, 2014 / 0 Comments / in 3D Laser Scanning, LiDAR, Photogrammetry, Visual Effects (VFX) / by Travis Reinke

[source] written by Matt Workman

Virtual Reality Cinematography

As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task that is well suited for a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting using 360-degree HDR light probes captured with DSLRs/nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod, and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be used later to construct the space digitally.
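To make that concrete: an ASCII PLY export is just a short header followed by one record per point. A minimal reader, assuming a simple “x y z r g b” vertex layout, might look like this.

def read_ascii_ply(path):
    """Return a list of (x, y, z, r, g, b) tuples from an ASCII .ply file."""
    with open(path) as f:
        assert f.readline().strip() == 'ply'
        count = 0
        line = ''
        while line != 'end_header':            # scan the header block
            line = f.readline().strip()
            if line.startswith('element vertex'):
                count = int(line.split()[-1])  # number of point records
        points = []
        for _ in range(count):
            x, y, z, r, g, b = f.readline().split()[:6]
            points.append((float(x), float(y), float(z), int(r), int(g), int(b)))
    return points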

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32-bit float HDR) 360-degree probes of the location to record the lighting. This process essentially captures the lighting in the space at a VERY high dynamic range; that lighting is later reprojected onto the geometry constructed from the LIDAR data.
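As a rough sketch of that capture step, bracketed DSLR exposures can be merged into a 32-bit float HDR image with OpenCV’s Debevec method; the file names and shutter times below are examples.

import cv2
import numpy as np

files = ['probe_-4ev.jpg', 'probe_0ev.jpg', 'probe_+4ev.jpg']
times = np.array([1 / 1000.0, 1 / 60.0, 1 / 4.0], dtype=np.float32)  # seconds
images = [cv2.imread(f) for f in files]

response = cv2.createCalibrateDebevec().process(images, times)   # camera curve
hdr = cv2.createMergeDebevec().process(images, times, response)  # linear float32

cv2.imwrite('probe_latlong.hdr', hdr)  # Radiance HDR, ready for reprojection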

Realtime 3D Asset being lit by an HDR environment real time (baked)

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space using the above outlined techniques as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, like he/she would do on a real set. And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera, and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment that was captured in 32-bit float with 8K textures being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8K textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader, supported by Amplify Creations’ beta Unity plug-in.
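For reference, the UDIM convention mentioned here maps each integer UV tile to a four-digit texture index, ten tiles across in U. A minimal sketch:

def udim_tile(u, v):
    """Return the UDIM index for a UV coordinate, e.g. (1.2, 2.7) -> 1022."""
    return 1001 + int(u) % 10 + 10 * int(v)

print(udim_tile(0.5, 0.5))  # 1001: the first tile
print(udim_tile(1.2, 0.3))  # 1002: one tile over in U
print(udim_tile(0.4, 1.6))  # 1011: one row up in V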

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8K texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse of the future distribution model for VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. And a traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, environments with their textures, real world lighting, and viewing them in real time in virtual reality.

The industry is evolving at an incredibly rapid pace, and so must the creatives working in it, especially the person responsible for the camera and the lighting: the director of photography.


OMOTE Real-time Face Tracking 3D Projection Mapping

August 19, 2014 / 0 Comments / in Animation, New Technology, Visual Effects (VFX) / by Travis Reinke

Forget the faces of historic monuments, the new frontier of 3D projection mapping is the faces of humans.

The piece was created by Nobumichi Asai and friends. Technical details behind the process are scant at the moment, but from what can be found in this Tumblr post, it’s clear that step one is a 3D scan of the model’s face.

Here is the translated text from that post:

I will continue by explaining how this face mapping was made.
The title OMOTE (表, “face” or “surface”) comes from Noh theater, and the making of it follows the idea of the Noh mask: covering the face by creating a “surface”. Being able to pursue accuracy was important, since the theme was to represent a very delicate make-up art as the output. The first step was to 3D laser scan the face of the model.

I suspect that a structured light scanner was used to capture the geometry of the model’s face rather than a 3D laser scanner. Nonetheless, this is a very cool application of 3D projection mapping.

3D face scanning and projection mapping

OMOTE / REAL-TIME FACE TRACKING & PROJECTION MAPPING. from something wonderful on Vimeo.


Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

June 6, 2014 / 0 Comments / in 3D Laser Scanning, In the News, LiDAR, Mobile Scanning, New Hardware, New Technology, Point Cloud, Software, Visual Effects (VFX) / by Travis Reinke

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer-level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful, diverse range of products in the future.

Learn more about the project here


Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

June 5, 2014 / 0 Comments / in 3D Laser Scanning, New Hardware, New Technology, Point Cloud, Visual Effects (VFX) / by Travis Reinke

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store; meaning there’ll probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine, tools developers already know the strengths and weaknesses of, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer, Jasper Brekelmans, has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.


Leap Motion Controller Update to Offer Affordable Individual Joint MoCap

June 5, 2014 / 0 Comments / in Animation, New Technology, Visual Effects (VFX) / by Travis Reinke

Leap Motion has announced that the software for its self-titled PC gesture-control device will soon track the movement of individual finger joints, as well as the overall motion of a user’s hands.

Since its launch in 2012, the $80 Leap Motion controller has attracted a lot of interest in the CG community, with Autodesk releasing Maya and MotionBuilder plugins last year.

Individual joint tracking, more parameters captured
In a post on the company’s blog, Leap Motion CEO Michael Buckwald revealed that version 2 of its software will track the individual joints of a user’s fingers, compensating automatically where individual fingers are occluded.

The software will also expose “much more granular data” via its SDK, including 27 dimensions per hand.

Affordable Individual MoCap tools coming soon
The update, which will be free, and does not require a change of hardware, is now in public beta for developers, although there’s no news of a consumer release date yet.

Jasper Brekelmans, creator of upcoming hand-tracking tool Brekel Pro Hands, has already announced that he is using the SDK.

Read more about the Leap Motion V2 update on the developer’s blog
