Artec Announces the World’s First 3D Full Body Scanner – Shapify Booth

A twelve-second body scan, and shoppers can pick up their 3D-printed figurine on their next visit to the supermarket


This week Asda and Artec Group announced their partnership: Asda becomes the first supermarket in the UK to bring cutting-edge 3D printing technology to shoppers with the installation of the Artec Shapify Booth — the world’s first high-speed 3D full-body scanner — in its Trafford Park store. The scanning booth will allow thousands of customers to create 3D miniature replicas of themselves.

Artec Shapify Booth

The Artec scanning booth, equipped with wide-view, high-resolution 3D scanners and a rotation rig, takes just 12 seconds to scan a person. Artec algorithms automatically fuse 700 captured surfaces into a detailed printable file. This digital model is then sent to the Asda 3D printing centre to be made into an 8″ mini-statue in full colour, which can be collected from the store just one week later. Asda’s unique 3D printing technology allows a huge volume of high-quality figurines to be processed at a time, while each print costs just £60.

Asda first introduced 3D scanning and 3D printing of customers’ figurines six months ago, using Artec handheld scanners. Driven by the immediate success of the venture and Asda’s vision to offer 3D technology to the public, Artec Group tailored its professional scanning equipment to spec and created the Shapify Booth, a high-speed full-body scanner that Asda is now making available to all.
Customers can have 3D prints made of all the family, and can come along to be scanned in their sports kit, wedding outfits, graduation robes or fancy dress, taking something totally new and personalised home with them after their weekly shop.

If the trial of the Shapify technology at Trafford Park is successful, the new booths will be rolled out to more stores in the autumn.

Phil Stout, Asda Innovation Manager: “Asda is fast becoming not just a retailer but a technology company, and this innovation is another example of how we’re leading the way on in-store, consumer-facing technology. We’ve been working with Artec technology for a while now and we’re delighted to be the first company in the world able to offer our customers this unique service.”

Artyom Yukhin, Artec Group President and CEO: “Over the last five years Artec has been providing 3D technologies to professionals in industries from space and automotive to medical and movie special effects, but we have always been looking for the chance to do something for the public. Asda’s backing and second-to-none customer understanding allowed us to create high-speed scanners which are fun and easy for people to use.”

About Asda Stores Ltd.

Founded in the 1960s in Yorkshire, Asda is one of Britain’s leading retailers. It has more than 180,000 dedicated Asda colleagues serving customers from 551 stores, including 32 Supercentres, 311 Superstores, 29 Asda Living stores, 179 Supermarkets, 25 depots and seven recycling centres across the UK. Its main office is in Leeds, Yorkshire, and its George clothing division is in Lutterworth, Leicestershire. More than 18 million people shop at Asda stores every week and 98 per cent of UK homes are served by www.asda.com. Asda joined Walmart, the world’s number one retailer, in 1999.

About Artec Group

Artec Group is a manufacturer and developer of professional 3D hardware and software, headquartered in Luxembourg. Artec Group is a global market leader in 3D scanning solutions used by thousands of people all over the world.
Shapify, the technology for creating 3D printed figurines, was conceived and launched by Artec Group in 2013: www.shapify.me
For more information about Artec Group, visit www.artec-group.com.

Contacts:
Artec Group: press@artec-group.com

FARO SCENE 5.3 Laser Scanning Software Provides Scan Registration without Targets

[source]

FARO® Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announced the release of the newest version of its laser scanning software, SCENE 5.3, and of its scan data hosting service, SCENE WebShare Cloud 1.5.

FARO’s SCENE 5.3 software, for use with the Laser Scanner Focus3D X Series, delivers scan registration without artificial targets such as spheres and checkerboards. Users can choose from two available registration methods: Top View Based or Cloud to Cloud. Top View Based registration positions scans without targets; in interiors and built-up areas without reliable GPS positioning of the individual scans, it is a highly efficient and largely automated method. The second method, Cloud to Cloud registration, opens up new opportunities to position scans quickly and accurately, even under difficult conditions. In exterior locations where the integrated GPS receiver of the Laser Scanner Focus3D X Series provides good positioning of the scans, Cloud to Cloud is the method of choice for targetless registration.
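FARO has not published SCENE’s registration internals, but Cloud to Cloud registration is generally built on iterative closest point (ICP) alignment: repeatedly match each point to its nearest neighbour in the other cloud, then solve for the rigid transform that minimizes the residual. A minimal, illustrative sketch in Python with NumPy (brute-force matching; production implementations use spatial indices and robust outlier filtering):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=50):
    """Align point cloud src to dst by iterating matching + rigid fit."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Given a reasonable initial position (which is what the scanner’s inclinometer, compass and GPS provide), the loop converges in a handful of iterations.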

The software also offers various new processes that let the user respond flexibly to a wide variety of project requirements. For instance, Correspondence Split View matches similar areas in neighbouring scans to resolve any missing positioning information, and Layout Image Overlay allows users to place scan data in a geographical context using image files, CAD drawings, or maps.

Oliver Bürkler, Senior Product Manager for 3D Documentation Software, remarked, “SCENE 5.3 is the ideal tool for processing laser scanning projects. FARO’s cloud-based hosting solution, SCENE WebShare Cloud, allows scan projects to be published and shared worldwide via the Internet. The collective upgrades to FARO’s laser scanning software solution, SCENE 5.3 and WebShare Cloud 1.5, make even complex 3D documentation projects faster, more efficient, and more effective.”

About FARO
FARO is the world’s most trusted source for 3D measurement, imaging and realization technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, production planning, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Worldwide, approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems. The Company’s global headquarters is located in Lake Mary, Florida; its European head office in Stuttgart, Germany; and its Asia/Pacific head office in Singapore. FARO has branches in Brazil, Mexico, Germany, United Kingdom, France, Spain, Italy, Poland, Netherlands, Turkey, India, China, Singapore, Malaysia, Vietnam, Thailand, South Korea and Japan.

Click here for more information or to download a 30-day evaluation version.

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.
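Mantis Vision has not disclosed MV4D’s algorithms, but structured-light depth sensing in general recovers distance by triangulation: the flash projector casts a known pattern, the offset camera observes where the pattern lands, and the pixel shift (disparity) between expected and observed positions yields depth. For a rectified projector–camera pair the relation is Z = f·B/d; a minimal sketch:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified triangulation: Z = f * B / d.
    disparity_px: observed pattern shift in pixels (scalar or array)."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        # zero disparity means the feature is at infinity
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)
```

For example, with a 500 px focal length and an 8 cm projector–camera baseline, a 10 px shift corresponds to a surface 4 m away; larger shifts mean closer surfaces.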

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful and diverse range of products in the future.

Learn more about the project here

Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store, meaning there will probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits, the Unity engine, to Kinect for Windows. Developers already know Unity’s strengths and weaknesses, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer, Jasper Brekelmans, has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.

Leap Motion Controller Update to Offer Affordable Individual Joint MoCap

Leap Motion has announced that the software for its self-titled PC gesture-control device will soon track the movement of individual finger joints, as well as the overall motion of a user’s hands.

Since its launch in 2012, the $80 Leap Motion controller has attracted a lot of interest in the CG community, with Autodesk releasing Maya and MotionBuilder plugins last year.

Individual joint tracking, more parameters captured
In a post on the company’s blog, Leap Motion CEO Michael Buckwald revealed that version 2 of its software will track the individual joints of a user’s fingers, compensating automatically where individual fingers are occluded.

The software will also expose “much more granular data” via its SDK, including 27 dimensions per hand.

Affordable Individual MoCap tools coming soon
The update, which will be free and does not require new hardware, is now in public beta for developers, although there’s no news of a consumer release date yet.

Jasper Brekelmans, creator of upcoming hand-tracking tool Brekel Pro Hands, has already announced that he is using the SDK.

Read more about the Leap Motion V2 update on the developer’s blog

Autodesk Announces ReCap Connect Partnership Program

With its new ReCap Connect Partnership Program, Autodesk will open up Autodesk ReCap – its reality capture platform – to third party developers and partners, allowing them to extend ReCap’s functionality.

“Autodesk has a long history of opening our platforms to support innovation and extension,” said Robert Shear, senior director, Reality Solutions, Autodesk. “With the ReCap Connect Partnership Program, we’ll be allowing a talented pool of partners to expand what our reality capture software can do. As a result, customers will have even more ways to start their designs with accurate dimensions and full photo-quality context rather than a blank screen.”

There are many ways for partners to connect to the ReCap pipeline, which encompasses both laser-based and photo-based workflows.  Partners can write their own import plug-in to bring structured point cloud data into ReCap and ReCap Pro using the Capture Codec Kit that is available as part of the new ReCap desktop version. DotProduct – a maker of handheld, self-contained 3D scanners – is the first partner to take advantage of this capability.

“Autodesk’s ReCap Connect program will enable a 50x data transfer performance boost for DotProduct customers — real-time 3D workflows on tablets just got a whole lot faster. Our lean color point clouds will feed reality capture pipelines without eating precious schedule and bandwidth,” said Tom Greaves, Vice President, Sales and Marketing, DotProduct LLC.

Alternately, partners can take advantage of the new Embedded ReCap OEM program to send Reality Capture Scan (RCS) data exports from their point cloud processing software directly to Autodesk design products, which all support this new point cloud engine, or to ReCap and ReCap Pro. The first signed partners in the Embedded ReCap OEM program are: FARO, for their SCENE software; Z+F, for their LaserControl software; CSA, for their PanoMap software; LFM, for their LFM software products; and kubit, for their VirtuSurv software. All these partners’ software will feature RCS export in upcoming releases.

“Partnering with Autodesk and participating in the ReCap Connect program helps FARO to ensure a fluent workflow for customers who work with Autodesk products. Making 3D documentation and the use of captured reality as easy as possible is one of FARO’s foremost goals when developing our products. Therefore, integrating with Autodesk products fits very well with our overall product strategy.” – Oliver Bürkler, Senior Product Manager, 3D Documentation Software & Innovation, FARO

As a third option, partners can build their own application on top of the Autodesk photo-to-3D cloud service by using the ReCap Photo Web API. More than 10 companies – serving markets ranging from medical and civil engineering, to video games and Unmanned Aerial Vehicles (UAVs) – have started developing specific applications that leverage this capability, or have started integrating this capability right into their existing apps. Some of the first partners to use the ReCap Photo Web API include SoundFit, SkyCatch and Twnkls.

“Autodesk’s cloud-based ReCap is an important part of SoundFit’s 3D SugarCube Scanning Service. Autodesk’s ReCap service has enabled SoundFit to keep the per-scan cost of its service very low, opening new markets such as scans for hearing aids, custom-fit communications headsets, musicians’ monitors and industrial hearing protection. ReCap allows SoundFit to export 3D models in a wide variety of popular 3D formats, so SoundFit customers and manufacturers can import them into Autodesk CAD packages from AutoCAD to 123D Design, or send them directly to any 3D printer or 3D printing service bureau.” – Ben Simon-Thomas, CEO & Co-Founder

For more information about the ReCap Connect Partnership Program, contact Dominique Pouliquen at Email Contact.

Additional Partner Supporting Quotes

“ReCap Connect gives our PointSense and PhoToPlan users smart and fully integrated access to powerful ReCap utilities directly within their familiar AutoCAD design environments. The result is a simpler and more efficient overall workflow. ReCap Photo 360 image calibration eliminates the slowest part of a kubit user’s design process, resulting in significant time savings per project.” – Matthias Koksch, CEO, kubit

“ReCap, integrated with CSA’s PanoMap Server, provides a powerful functionality to transfer laser scan point cloud data from large-scale 3D laser scan databases to Autodesk products.  Using the interface, the user can select any plant area by a variety of selection criteria and transfer the laser scan points to the design environment in which they are working. The laser scan 3D database of the plant can have thousands of laser scans.” – Amadeus Burger, President, CSA Laser Scanning

“Autodesk’s industry-leading ReCap photogrammetry technology will be instrumental in introducing BuildIT’s 3D Metrology solution to a broader audience by significantly reducing data capture complexity and cost.” – Vito Marone, Director Sales & Marketing, BuildIT Software & Solutions

“I am very pleased with the ReCap Photo API performance and its usefulness in fulfilling our 3D personalization needs. I believe the ReCap Photo API is the only product available in the market today that meets our needs.” – Dr. Masuma, PhD, Founder of iCrea8

 

Angela Costa Simoes

Senior PR Manager

DIRECT  +1 415 547 2388

MOBILE  +1 415 302 2934

@ASimoes76

Autodesk, Inc.

The Landmark @ One Market, 5th Floor

San Francisco, CA 94105

www.autodesk.com

Massive Software announces Massive 6.0 crowd simulation software

Massive 6.0

New look, new GUI

Massive has a completely new graphic user interface. With graphic design by Lost in Space, the new interface not only looks stylish and modern but also provides a much smoother interactive user experience. Dialog windows and editors now turn up in the new side panel, keeping the workspace clear and tidy. The main window now hosts multiple panels that can be configured to suit the user’s needs, and the configurations can be recalled for later use. Since any panel can be a viewport, it’s now possible to have up to five viewports open at once, each using a different camera.


 

3D placement

The existing placement tools in Massive have been extended to work in three dimensions, independently of the terrain. The point generator can be placed anywhere in space, the circle generator becomes a sphere, the polygon generator gains depth, and the spline generator becomes tubular. There’s also a new geometry generator, which takes a Wavefront .obj file and fills the polygonal volume with agents.
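Massive doesn’t document how the geometry generator samples a mesh’s interior, but a common approach is rejection sampling with a point-in-mesh parity test: cast a ray from each candidate point and count triangle crossings (an odd count means inside). An illustrative sketch — function names are ours, not Massive’s, and edge/vertex grazes are ignored for brevity:

```python
import numpy as np

def ray_hits_tri(orig, d, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle test; True if the ray hits in front of orig."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:                # ray parallel to triangle plane
        return False
    inv = 1.0 / det
    s = orig - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    return (e2 @ q) * inv > eps       # hit must lie in front of the origin

def inside(point, tris):
    """Parity test: an odd number of ray crossings means the point is inside."""
    ray = np.array([1.0, 0.0, 0.0])
    return sum(ray_hits_tri(point, ray, *t) for t in tris) % 2 == 1

def fill_volume(tris, n, rng=None):
    """Rejection-sample n agent positions inside a closed triangle mesh."""
    if rng is None:
        rng = np.random.default_rng(0)
    verts = np.concatenate([np.stack(t) for t in tris])
    lo, hi = verts.min(axis=0), verts.max(axis=0)
    pts = []
    while len(pts) < n:
        p = rng.uniform(lo, hi)       # candidate inside the bounding box
        if inside(p, tris):
            pts.append(p)
    return np.array(pts)
```

Rejection sampling is simple and uniform but wasteful for thin meshes; a production tool would more likely voxelize the volume or sample per-slab.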

 

Auto action import

Building complex agents with hundreds of actions can be a time consuming process, but it doesn’t have to be anymore. In Massive 6.0 the action importing process can be completely automated, reducing what could be months of work to a few minutes. Also, all of the import settings for all of the actions can be saved to a file so that revisions of source motion can be imported in seconds using the same settings as earlier revisions.


Bullet dynamics

To effortlessly build a mountain of zombies, it helps to have extremely stable rigid body dynamics. Massive 6.0 supports Bullet dynamics, significantly increasing dynamics stability. Just for fun, we had 1,000 mayhem agents throw themselves off a cliff into a pile on the ground below. Without tweaking any parameters, we easily created an impressive zombie shot, demonstrating the stability and ease of use of Bullet dynamics.

No typing required

While it is possible to create almost any kind of behaviour using the brain nodes in Massive, it has always required a little typing to specify inputs and outputs of the brain. This is no longer necessary with the new channel menu which allows the user to very quickly construct any possible input or output channel string with a few mouse clicks.

These are just some of the new features of Massive 6.0, which is scheduled for release in September.

 

Massive for Maya

 

Massive has always been a standalone system, and now there’s the choice to use Massive standalone as Massive Prime and Massive Jet, or in Maya as Massive for Maya.

 

Set up and run simulations in Maya

Massive for Maya facilitates the creation of Massive simulations directly in Maya. All of the Massive scene setup tools, such as the flow field, lanes, paint and placement editors, have been seamlessly reconstructed inside Maya. The simulation workflow has been integrated into Maya to allow for intuitive running, recording and playback of simulations. To achieve this, a record button has been added next to the transport controls and a special status indicator has been included in the Massive shelf. Scrubbing simulations of thousands of agents in Maya is now as simple and efficient as scrubbing the animation of a single character.


Set up lighting in Maya

The Massive agents automatically appear in preview renders as well as batch renders alongside any other objects in the scene. Rendering in Maya works for Pixar’s RenderMan, Air, 3Delight, Mental Ray and V-Ray. This allows for lighting scenes using the familiar Maya lighting tools, without requiring any special effort to integrate Massive elements into the scene. Furthermore, all of this has been achieved without losing any of the efficiency and scalability of Massive.

 

Edit simulations in Maya graph editor

Any of the agents in a simulation can be made editable in the Maya graph editor. This allows for immediate editing of simulations without leaving the Maya environment. Any changes made to the animation in the graph editor automatically feed back to the Massive agents, so the tweaked agents will appear in the render even though the user sees a Maya character for editing purposes in the viewport. The editing process can even be used with complex animation control rigs, allowing animators and motion editors complete freedom to work however they want to.

 

 

Directable characters

A major advantage of Massive for Maya is the ability to bring Massive’s famous brains to character animation, providing another vital tool for creating the illusion of life. While animation studios have integrated Massive into their pipelines to do exactly this for years, the ability to create directable characters has not been within easy reach for those using off-the-shelf solutions. With Massive for Maya it’s now possible to create characters using a handful of base cycles, takes and expressions that can handle such tasks as keeping alive, responding to the focus of the shot, responding to simple direction, or simply walking along a path, thus reducing the amount of work required to fill out a scene with characters that are not currently the focus of the shot.

For example, in a scene in which two characters are talking with each other and a third character, say a mouse, is reacting, the mouse could be driven by its Massive counterpart. The talking characters would drive their Massive counterparts, thereby being visible to the mouse. Using attributes in the talking characters, their Massive counterparts could change colour to convey their emotional states to the mouse agent. The mouse agent then performs appropriately, using its animation cycles, blend shape animations and so on in response to the performance of the talking characters, and looking at whichever character is talking. Once the agents for a project have been created, setting up a shot for this technique requires only a few mouse clicks and the results happen in real time. Any edits to the timing of the shot will simply flow through to the mouse performance.

SCANable offers on-site 3D imaging of real-world people/characters to populate your 3D crowd asset library in Massive’s crowd placement and simulation software. Contact us today for a free quote.

Introducing R3DS Wrap – Topology Transfer Tool

Wrap is a topology transfer tool. It lets you take the topology you already have and transfer your new 3D-scanned data onto it. The resulting models will not only share the same topology and UV coordinates but will also naturally become blendshapes of each other. Here’s a short video showing how it works:

And here are a couple of examples based on 3D scans kindly provided by Lee Perry-Smith:

R3dS Wrap Topology Transfer Software

You can download a demo version from their website: http://www.russian3dscanner.com

As with all new technology in its final beta stages, Wrap is not perfect yet. R3DS would be grateful to everyone who gives them the support and feedback to finalize things in the best possible way. This software has the potential to be a great tool. Check it out!

Introducing the World’s First App for LiDAR data visualization on the iPad: RiALITY

RIEGL proudly announces its new iPad point cloud viewer: RiALITY, now available for free in the iTunes App Store.

This innovative app, the first of its kind, allows users to experience LiDAR data in a completely new environment, and it makes LiDAR data demonstrations easier through the use of an iPad.

RIEGL’s RiALITY App enables users to visualize and navigate through point clouds acquired with RIEGL laser scanners. As an example, users are able to explore a dataset of the beautiful Rosenburg Castle in Austria. Scans can also be imported into the app from RIEGL’s RiSCAN PRO software.

“We’re pleased to present a new way of visualizing point clouds. RiALITY delivers this new technology by providing Augmented Reality technology in an easy-to-use app. Now you can easily send your client a 3D point cloud that they can visualize on their iPad, for free,” said Ananda Fowler, RIEGL’s manager of terrestrial laser scanning software.

RiALITY features true color point clouds and 3D navigation. In a breakthrough technological development, the app features an Augmented Reality Mode. The Augmented Reality Mode allows point clouds to be virtually projected into the real world.
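RIEGL hasn’t described how RiALITY implements its Augmented Reality Mode, but at the heart of any AR overlay is projecting 3D points through the device camera’s estimated pose onto the screen. A minimal pinhole-camera sketch (zero skew assumed; names are illustrative):

```python
import numpy as np

def project_points(pts_world, K, R, t):
    """Project world-space points into pixel coordinates (pinhole camera).
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    cam = pts_world @ R.T + t             # world -> camera frame
    cam = cam[cam[:, 2] > 0]              # keep only points in front of the lens
    norm = cam[:, :2] / cam[:, 2:3]       # perspective divide
    return norm @ K[:2, :2].T + K[:2, 2]  # scale by focal lengths, shift to centre
```

In an AR app, R and t are updated every frame from the device’s motion sensors and image tracking, so the projected point cloud appears anchored in the real world.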

Dive into the point cloud!

Find out more at www.riegl.com/app.

Olympic athletes will wear state-of-the-art, 3D-scanned, custom-fitted uniforms

[Source: Mashable]

As if we needed another reason to worship athletes, select Olympic hockey players will wear state-of-the-art, 3D-scanned uniforms custom-fitted to their body parts. That’s right, like superheroes.

Hockey equipment manufacturer Bauer officially unveiled the new line of high-tech hockey equipment, called “OD1N,” in December. CEO Kevin Davis has touted the gear as the “concept car” of hockey equipment. Pouring a cool million dollars into outfitting six elite hockey players, Bauer used a tech-friendly combination of composite materials, compression-molded foam and 3D optical scanning to personalize the equipment, while lightening the skates, protective gear and goalie pads by one-third.

Hockey enthusiasts will see the line in action on the ice when it debuts at the Sochi Winter Games in February. The equipment will be worn by the NHL’s Jonathan Toews (Chicago Blackhawks/Team Canada), Patrick Kane (Chicago Blackhawks/Team USA), Nicklas Backstrom (Washington Capitals/Team Sweden) and goaltender Henrik Lundqvist (New York Rangers/Team Sweden). Claude Giroux (Philadelphia Flyers/Team Canada) and Alex Ovechkin (Washington Capitals/Team Russia) round out the group of six players who worked with Bauer to test the equipment.

Lundqvist has been practicing and playing with the Od1n goal pads since November, while Toews, Kane and Backstrom are sporting elements of the protective body suit.

Bauer OD1N Equipment

 

Bauer’s new line of hockey equipment comprises skates, goal pads and protective base-layer suits molded to each player’s form.

IMAGE: BAUER

The equipment’s weight reduction should provide a significant on-ice advantage. The skates alone, with their lighter, carbon-composite blade holders, amount to roughly 1,000 fewer pounds of lifted weight during a regulation game, according to Bauer. Lundqvist will lift 180 fewer pounds with the Od1n goalie pads, which replace traditional layers of synthetic leather with compression-molded foam that can be modified depending on the goaltender’s style of play.
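Bauer’s cumulative figures are simple arithmetic: a small per-repetition saving multiplied by the number of repetitions in a game. With hypothetical numbers chosen only to show how such a total could arise (the article gives no per-stride figures):

```python
def pounds_saved_per_game(saving_per_rep_lb, reps_per_game):
    """Cumulative weight a player no longer lifts across one game."""
    return saving_per_rep_lb * reps_per_game

# Hypothetical: ~0.5 lb saved per skating stride over ~2,000 strides a game
print(pounds_saved_per_game(0.5, 2000))  # 1000.0
```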

“The benefit is not only in the quickness to the puck but in their ultimate endurance and stamina going into the third period,” says Craig Desjardins, Bauer’s general manager of player equipment and project leader for Od1n. “For [Lundqvist], that was the difference between getting a block or getting scored on.”

Like many a concept car, Bauer’s design drew on new technologies. Using 3D optical scanning borrowed from the automotive industry, Bauer manufactured protective base-layer suits molded to each player’s physique. The scans generated computerized models, from which Bauer designed custom equipment.

“Being able to customize, for example, a shin guard or elbow pad based on the individual geometry of a player, we’ve taken the guesswork out completely,” says Desjardins. “It’s going to better protect you if it stays in place.”

 

 

It’s all very spiffy, but in automotive terms, a concept car showcases radical new developments in technology and design that make it prohibitively expensive for consumers. The cars don’t often make it to mass production. The cost of Bauer’s own “concept car” design, with its attendant technological advancements, places the equipment well out of reach of all but the most elite hockey players.

Much like BMW’s shape-shifting sedan, the idea is to ogle Od1n, not to own it, although Bauer will likely outfit a few more NHL bodies in the future.

Its creators are optimistic that certain elements will make their way to mass production, however.

“We’re trying to invent the future of hockey equipment, to show the industry and consumers where it could go, where it will go,” says Desjardins. “In the next few years, we’ll be able to take that technology down into multiple price points.”

So if, a few years from Sochi, your neighborhood is teeming with hockey prodigies, you’ll know why.