SynthEyes 3D Tracking Software

Andersson Technologies releases SynthEyes 1502 3D Tracking Software

Andersson Technologies has released SynthEyes 1502, the latest version of its 3D tracking software, improving compatibility with Blackmagic Design’s Fusion compositing software.

Reflecting the renewed interest in Fusion
According to the official announcement: “Blackmagic Design’s recent decision to make Fusion 7 free of charge has led to increased interest in that package. While SynthEyes has exported to Fusion for many years now — for projects such as Battlestar Galactica — Andersson Technologies LLC upgraded SynthEyes’s Fusion export.”

Accordingly, the legacy Fusion exporter now supports 3D planar trackers; primitive, imported, or tracker-built meshes; imported or extracted textures; multiple cameras; and lens distortion via image maps.

The new lens distortion feature should make it possible to reproduce the distortion patterns of any real-world lens without its properties having been coded explicitly in the software or a custom plugin.
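To illustrate the idea (this is a generic sketch, not SynthEyes' or Fusion's internals): a lens-distortion image map is essentially an STMap-style lookup table that stores, for each output pixel, the normalized source coordinate to sample, so any lens's distortion pattern can be baked into an image. A minimal Python/OpenCV version, with all names illustrative:

```python
import cv2
import numpy as np

def apply_distortion_map(frame, uv_map):
    """frame: HxWx3 image; uv_map: HxWx2 float32 with (u, v) in [0, 1]."""
    h, w = uv_map.shape[:2]
    map_x = (uv_map[..., 0] * (w - 1)).astype(np.float32)  # u -> source pixel x
    map_y = (uv_map[..., 1] * (h - 1)).astype(np.float32)  # v -> source pixel y
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```

Because the map is just an image, it can be passed between packages that know nothing about the original lens model.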

A new second exporter creates corner pin nodes in Fusion from 2D or 3D planar trackers in SynthEyes.
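For context, a corner pin is simply a homography that maps the four corners of the source plate onto the four tracked corner positions. A small sketch of that relationship with OpenCV; the corner values are made up for illustration:

```python
import cv2
import numpy as np

# Four corners of a 1920x1080 source plate, and four tracked corner
# positions from a planar tracker (example values).
src = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
dst = np.float32([[412, 310], [1500, 280], [1530, 900], [390, 940]])

H = cv2.getPerspectiveTransform(src, dst)        # the 3x3 corner-pin matrix

insert = np.zeros((1080, 1920, 3), np.uint8)     # stand-in insert image
pinned = cv2.warpPerspective(insert, H, (1920, 1080))
```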

Other new features in SynthEyes 1502 include an error curve mini-view, a DNG/CinemaDNG file reader, and a refresh of the user interface, including the option to turn toolbar icons on or off.

Pricing and availability
SynthEyes 1502 is available now for Windows, Linux and Mac OS X. New licences cost from $249 to $999, depending on which edition you buy. The new version is free to registered users.

New features in SynthEyes 1502 include:

  • Toolbar icons are back! Some love ’em, some hate ’em. Have it your way: set the preference. Shows both text and icon by default to make it easiest, especially for new users with older tutorials. Some new and improved icons.
  • Refresh of user interface color preferences to a somewhat darker and trendier look. Other minor appearance tweaks.
  • New error curve mini-view.
  • Updated Fusion 3D exporter now exports all cameras, 3D planars, all meshes (including imported), lens distortion via image maps, etc.
  • New Fusion 2D corner pinning exporter.
  • Lens distortion export via color maps, currently for Fusion (Nuke for testing).
  • During offset tracking, a tracker can be (repeatedly) shift-dragged to different reference patterns on any frame, and SynthEyes will automatically adjust the offset channel keying.
  • Rotopanel’s Import tracker to CP (control point) now asks whether you want to import the relative motion or absolute position.
  • DNG/CinemaDNG reading. Marginal utility: DNG requires much proprietary postprocessing to get usable images, despite new luma and chroma blur settings in the image preprocessor.
  • New script to “Reparent meshes to active host” (without moving them).
  • New section in the user manual on “Realistic Compositing for 3-D”.
  • New tutorials on offset tracking and Fusion.
  • Upgraded to RED 5.3 SDK (includes REDcolor4, DRAGONcolor2).
  • Faster camera and perspective drawing with large meshes and lidar scan data.
  • Windows: Installing license data no longer requires “right click/Start as Administrator”—the UAC dialog will appear instead.
  • Windows: Automatically keeps the last 3 crash dumps. Even one crash is one too many.
  • Windows: Installers, SynthEyes, and Synthia are now code-signed for “Andersson Technologies LLC” instead of showing “Unknown publisher”.
  • Mac OS X: Yosemite required that we change to the latest Xcode 6; this eliminated support for OS X 10.7. Apple made 10.8 more difficult as well.

About SynthEyes

SynthEyes is a program for 3-D camera-tracking, also known as match-moving. SynthEyes can look at the image sequence from your live-action shoot and determine how the real camera moved during the shoot, what the camera’s field of view (~focal length) was, and where various locations were in 3-D, so that you can create computer-generated imagery that exactly fits into the shot. SynthEyes is widely used in film, television, commercial, and music video post-production.
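To make the idea concrete: once 3D positions for some trackers are known, the camera's pose on a frame can be recovered from their 2D projections. A hedged sketch using OpenCV's generic PnP solver, not SynthEyes' actual solver (which also estimates the 3D points and the focal length); all values are illustrative:

```python
import cv2
import numpy as np

# Known 3D tracker positions and their tracked 2D positions on one frame.
points_3d = np.float32([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1]])
points_2d = np.float32([[640, 360], [900, 380], [880, 620], [620, 600], [760, 480]])

f = 1500.0  # assumed focal length in pixels, 1920x1080 plate
K = np.float32([[f, 0, 960], [0, f, 540], [0, 0, 1]])  # camera intrinsics

ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, None)
R, _ = cv2.Rodrigues(rvec)               # 3x3 rotation matrix
camera_position = (-R.T @ tvec).ravel()  # camera center in world space
```

Repeating this per frame (plus estimating the points and focal length in the first place) is the heart of match-moving.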

What can SynthEyes do for me? You can use SynthEyes to help insert animated creatures or vehicles; fix shaky shots; extend or fix a set; add virtual sets to green-screen shoots; replace signs or insert monitor images; produce 3D stereoscopic films; create architectural previews; reconstruct accidents; do product placements after the shoot; add 3D cybernetic implants, cosmetic effects, or injuries to actors; produce panoramic backdrops or clean plates; build textured 3-D meshes from images; add 3-D particle effects; or capture body motion to drive computer-generated characters. And those are just the more common uses; we’re sure you can think of more.

What are its features? Take a deep breath! SynthEyes offers 3-D tracking, set reconstruction, stabilization, and motion capture. It handles camera tracking, 2- and 3-D planar tracking, object tracking, object tracking from reference meshes, camera+object tracking, survey shots, multiple-shot tracking, tripod (nodal, 2.5-D) tracking, mixed tripod and translating shots, stereoscopic shots, nodal stereoscopic shots, zooming shots, lens distortion, and light solving. It can handle shots of any resolution (Intro version limited to 1920×1080)—HD, film, IMAX, with 8-bit, 16-bit, or 32-bit float data, and can be used on shots with thousands of frames. A keyer simplifies and speeds tracking for green-screen shots. The image preprocessor helps remove grain, compression artifacts, off-centering, or varying lighting, and improves low-contrast shots. Textures can be extracted for a mesh from the image sequence, producing higher resolution and lower noise than any individual image. A revolutionary Instructible Assistant, Synthia™, helps you work faster and better, from spoken or typed natural language directions.

SynthEyes offers complete control over the tracking process for challenging shots, including an efficient workflow for supervised trackers, combined automated/supervised tracking, offset tracking, incremental solving, rolling-shutter compensation, a hard and soft path locking system, distance constraints for low-perspective shots, and cross-camera constraints for stereo. A solver phase system lets you set up complex solving strategies with a visual node-based approach (not in Intro version). You can set up a coordinate system with tracker constraints, camera constraints, an automated ground-plane-finding tool, by aligning to a mesh, a line-based single-frame alignment system, manually, or with some cool phase techniques.

Eyes starting to glaze over at all the features? Don’t worry, there’s a big green AUTO button too. Download the free demo and see for yourself.

What can SynthEyes talk to? SynthEyes is a tracking app; you’ll use the other apps you already know to generate the pretty pictures. SynthEyes exports to about 25 different 2-D and 3-D programs. The Sizzle scripting language lets you customize the standard exports, or add your own imports, exports, or tools. You can customize toolbars, color scheme, keyboard mapping, and viewport configurations too. Advanced customers can use the SyPy Python API/SDK.


[TC]2 Announces Availability of Its Most Advanced 3D/4D Body Scanner

New TC2-19 Offers Fastest and Most Accurate Measurements on the Market [source]

Cary, NC – April 30, 2015 – [TC]², the innovation leader for the fashion industry and 3D body scanning technology, announces general availability of the TC2-19, the most advanced 3D/4D body scanning and measurement technology available on the market.

The TC2-19 provides the option of using a touch screen inside the scanner booth which allows users to “self-scan” by following on-screen instructions. The “self-scan mode” is ideal for use with the iStyling™ Full Retail Solution that provides for greater fit accuracy, styling advice, and garment customization. The body scanner interfaces with multiple CAD technologies and avatar engines.

While capturing the accurate body measurements that [TC]² 3D body scanners are well-known for today, the TC2-19 offers a “quick scan” option (2 seconds), 360° 3D body scanning and rapid processing speeds (17 seconds). The newly developed 4D mode enables 3D movement visualization inside the scanner.

“The combination of speed, accuracy, and stability enabled by software and hardware advancements makes this the most advanced 3D/4D body scanner ever built,” said Dr. Mike Fralix, CEO of [TC]². “We are excited that our existing retail customers and major brands now have a scalable solution that can easily and cost-effectively roll out to multiple locations.”

The [TC]² body scanner captures thousands of body measurements used by the fashion, medical, and fitness industries to make custom garments, predict customer sizing, benchmark fitness goals, and augment surgical processes. The scanner fits in a space the size of the average retail dressing room and features a booth for privacy. The TC2-19 comes with a lifetime scanner software license and PC.

The TC2-19 can be viewed at the [TC]² National Apparel Technology Center in Cary, N.C.

About [TC]²:

[TC]², Textile / Clothing and Technology Corporation, is a leader in innovation and dedicated to the advancement of the fashion and sewn products industry. Its research, consulting services, and products help brands, retailers, and manufacturers provide increased value for their customers while improving their bottom line. [TC]²’s mission focuses on the development, promotion and implementation of new technologies and ideas that significantly impact the industry.

faro freestyle 3d handheld scanner

FARO® Launches Innovative, User-Friendly Handheld 3D Scanner to Meet Growing Demand for Portable Scanning

LAKE MARY, Fla., Jan. 7, 2015 /PRNewswire/ — FARO Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announces the release of the new FARO Freestyle3D Handheld Laser Scanner, an easy, intuitive device for use in Architecture, Engineering and Construction (AEC), Law Enforcement, and other industries.

The FARO Freestyle3D is equipped with a Microsoft Surface™ tablet and offers unprecedented real-time visualization by allowing the user to view point cloud data as it is captured. The Freestyle3D scans to a distance of up to three (3) meters and captures up to 88K points per second with accuracy better than 1.5mm.  The patent-pending, self-compensating optical system also allows users to start scanning immediately with no warm up time required.

“The Freestyle3D is the latest addition to the FARO 3D laser scanning portfolio and represents another step on our journey to democratize 3D scanning,” stated Jay Freeland, FARO’s President and CEO.  “Following the successful adoption of our Focus scanners for long-range scanning, we’ve developed a scanner that provides customers with the same intuitive feel and ease-of-use in a handheld device.”

The portability of Freestyle3D enables users to maneuver and scan in tight and hard-to-reach areas such as car interiors, under tables, and behind objects, making it ideal for crime scene data collection or architectural preservation and restoration activities. Memory-scan technology enables Freestyle3D users to pause scanning at any time and then resume data collection where they left off without the use of artificial targets.

Mr. Freeland added, “FARO’s customers continue to stress the importance of work-flow simplicity, portability, and affordability as key drivers to their continued use and adoption of 3D laser scanning.  We have responded by developing an easy-to-use, industrial grade, handheld laser scanning device that weighs less than 2 lbs.”

The Freestyle3D can be employed as a standalone device to scan areas of interest, or used in concert with FARO’s Focus X 130 / X 330 scanners.  Point cloud data from all of these devices can be seamlessly integrated and shared with all of FARO’s software visualization tools including FARO SCENE, WebShare Cloud, and FARO CAD Zone packages.

For more information about FARO’s 3D scanning solutions visit: www.faro.com

This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995 that are subject to risks and uncertainties, such as statements about demand for and customer acceptance of FARO’s products, and FARO’s product development and product launches. Statements that are not historical facts or that describe the Company’s plans, objectives, projections, expectations, assumptions, strategies, or goals are forward-looking statements. In addition, words such as “is,” “will,” and similar expressions or discussions of FARO’s plans or other intentions identify forward-looking statements. Forward-looking statements are not guarantees of future performance and are subject to various known and unknown risks, uncertainties, and other factors that may cause actual results, performances, or achievements to differ materially from future results, performances, or achievements expressed or implied by such forward-looking statements. Consequently, undue reliance should not be placed on these forward-looking statements.

Factors that could cause actual results to differ materially from what is expressed or forecasted in such forward-looking statements include, but are not limited to:

  • development by others of new or improved products, processes or technologies that make the Company’s products less competitive or obsolete;
  • the Company’s inability to maintain its technological advantage by developing new products and enhancing its existing products;
  • declines or other adverse changes, or lack of improvement, in industries that the Company serves or the domestic and international economies in the regions of the world where the Company operates and other general economic, business, and financial conditions; and
  • other risks detailed in Part I, Item 1A. Risk Factors in the Company’s Annual Report on Form 10-K for the year ended December 31, 2013 and Part II, Item 1A. Risk Factors in the Company’s Quarterly Report on Form 10-Q for the quarter ended June 28, 2014.

Forward-looking statements in this release represent the Company’s judgment as of the date of this release. The Company undertakes no obligation to update publicly any forward-looking statements, whether as a result of new information, future events, or otherwise, unless otherwise required by law.

About FARO

FARO is the world’s most trusted source for 3D measurement technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, rapid prototyping, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems, worldwide. The Company’s global headquarters is located in Lake Mary, FL; its European regional headquarters in Stuttgart, Germany; and its Asia/Pacific regional headquarters in Singapore. FARO has other offices in the United States, Canada, Mexico, Brazil, Germany, the United Kingdom, France, Spain, Italy, Poland, Turkey, the Netherlands, Switzerland, Portugal, India, China, Malaysia, Vietnam, Thailand, South Korea, and Japan.

More information is available at http://www.faro.com

SOURCE FARO Technologies, Inc.

Mattepainting Toolkit Camera Projection

Photogrammetry and camera projection mapping in Maya made easy

The Mattepainting Toolkit


What’s included?

The Mattepainting Toolkit (gs_mptk) is a plugin suite for Autodesk Maya that helps artists build photorealistic 3D environments with minimal rendering overhead. It offers an extensive toolset for working with digital paintings as well as datasets sourced from photographs.

Version 3.0 is now released!

For Maya versions 2014 and 2015, version 3.0 of the toolkit adds support for Viewport 2.0, and a number of new features. Version 2.0 is still available for Maya versions 2012-2014. A lite version of the toolkit, The Camera Projection Toolkit (gs_cptk) is available for purchase from the Autodesk Exchange. To see a complete feature comparison list between these versions, click here.

How does it work?

The Mattepainting Toolkit uses an OpenGL implementation for shader feedback within Maya’s viewport. This allows users to work directly with paintings, photos, and image sequences that are mapped onto geometry in an immediate and intuitive way.
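The idea behind camera projection mapping is compact enough to sketch outside the toolkit: each vertex is projected through the projection camera, and its resulting screen position becomes its texture coordinate. A minimal numpy illustration (matrix conventions are assumptions; the plugin's own shader does the equivalent per fragment on the GPU):

```python
import numpy as np

def projection_uvs(vertices, view, proj):
    """vertices: Nx3 world-space points; view, proj: 4x4 camera matrices.
    Returns Nx2 texture coordinates in [0, 1] (V may need flipping
    depending on the image convention)."""
    v = np.hstack([vertices, np.ones((len(vertices), 1))])  # to homogeneous
    clip = v @ view.T @ proj.T                              # world -> clip space
    ndc = clip[:, :2] / clip[:, 3:4]                        # perspective divide
    return ndc * 0.5 + 0.5                                  # [-1, 1] -> [0, 1]
```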

Overview

The User Interface

Textures are organized in a UI that manages the shaders used for viewport display and rendering.

  • Clicking on an image thumbnail will load the texture in your preferred image editor.
  • Texture layer order is determined by a drag-and-drop list.
  • Geometry shading assignments can be quickly added and removed.

Point Cloud Data

Import Bundler and PLY point cloud data from Agisoft Photoscan, Photosynth, or other Structure From Motion (SFM) software.

  • Point clouds can be used as a modeling guide to quickly reconstruct a physical space.
  • Cameras are automatically positioned in the scene for projection mapping.
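For readers curious what such data looks like on disk: ASCII PLY files of the kind PhotoScan exports are simple enough to read by hand. A minimal sketch, assuming an "x y z r g b" vertex layout (real files vary, and binary PLY is not handled here):

```python
def read_ply_points(path):
    """Reads an ASCII PLY point cloud; returns (x, y, z, r, g, b) tuples."""
    points = []
    with open(path) as f:
        count = 0
        line = f.readline()
        while line and not line.startswith("end_header"):
            if line.startswith("element vertex"):
                count = int(line.split()[-1])    # number of points declared
            line = f.readline()
        for _ in range(count):
            vals = f.readline().split()
            x, y, z = (float(v) for v in vals[:3])
            r, g, b = (int(v) for v in vals[3:6])  # assumes x y z r g b rows
            points.append((x, y, z, r, g, b))
    return points
```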

The Viewport

A custom OpenGL shader allows textures to be displayed in high quality and manipulated interactively within the viewport.

  • Up to 16 texture layers can be displayed per shader.
  • HDR equirectangular images can be projected spherically (see the latlong mapping sketch after this list).
  • Texture mattes can be painted directly onto geometry within the viewport.
  • Image sequences are supported so that film plates can be mapped to geometry.
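The mapping behind that spherical projection is compact: a world-space direction becomes a (u, v) lookup into the equirectangular (latlong) image. A minimal sketch using one common convention (axis orientation varies between packages):

```python
import math

def equirect_uv(dx, dy, dz):
    """Normalized direction vector -> (u, v) in a latlong image."""
    u = 0.5 + math.atan2(dx, -dz) / (2.0 * math.pi)  # longitude -> u
    v = 0.5 - math.asin(dy) / math.pi                # latitude  -> v
    return u, v
```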

Rendering

The layered textures can be rendered with any renderer available to Maya. Custom Mental Ray and V-Ray shaders included with the toolkit extend the texture blending capabilities for those renderers.

  • The texture layers can be baked down to object UVs.
  • A coverage map can be rendered to isolate which areas of the geometry are most visible to the camera.
  • For Mental Ray and V-Ray, textures can be blended based on object occlusion, distance from the projection camera, and object facing ratio.

Capturing Real-World Environments for Virtual Cinematography

[source] written by Matt Workman

Virtual Reality Cinematography

As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task that is well suited for a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting using 360 degree HDR light probes captured with DSLRs/nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be later used to construct the space digitally.

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32-bit float HDR) 360 degree probes of the location to record the lighting. This process would essentially capture the lighting in the space at a VERY high dynamic range, which would later be reprojected onto the geometry constructed from the LIDAR data.
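As a concrete example of the HDR half of that workflow, bracketed exposures can be merged into a 32-bit float radiance image. A sketch using OpenCV's Debevec implementation; file names and shutter times are placeholders:

```python
import cv2
import numpy as np

files = ["probe_000.jpg", "probe_001.jpg", "probe_002.jpg"]  # bracketed shots
times = np.float32([1 / 250.0, 1 / 60.0, 1 / 15.0])          # shutter times (s)
images = [cv2.imread(f) for f in files]

response = cv2.createCalibrateDebevec().process(images, times)  # camera curve
hdr = cv2.createMergeDebevec().process(images, times, response)  # float32 HDR
cv2.imwrite("probe_latlong.hdr", hdr)                            # Radiance .hdr
```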

Realtime 3D Asset being lit by an HDR environment real time (baked)

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions. Lights outside windows, track lighting on walls, practicals, etc. Then capturing that space using the above outlined techniques as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, like he/she would do on a real set. And the virtual cinematographer would line up the shots, camera moves, and real time lighting.

I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, which can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo real 360 degree environments. He shows a demo of the environment that was captured in 32bit float with 8k textures being played in real time on an Oculus Rift and the results speak for themselves. (The real time asset was down sampled to 16bit EXR)

UDIM Texture Illustration

Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8k textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader and supported by Amplify Creations beta Unity Plug-in.
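For readers unfamiliar with UDIM: it is a simple tiling convention in which each 1x1 region of UV space gets a four-digit ID starting at 1001, counting ten tiles per row. In code:

```python
def udim_tile(u, v):
    """UV coordinate -> UDIM tile number (standard convention)."""
    return 1001 + int(u) + 10 * int(v)

udim_tile(0.5, 0.5)   # 1001, the first tile
udim_tile(1.2, 0.0)   # 1002, one tile to the right
udim_tile(0.3, 1.7)   # 1011, one row up
```

A virtual texture engine of the kind described streams in only the tiles (and mip levels) the camera can actually see, which is what makes "many 8k textures at once" feasible.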

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8k texture buffering. The actors were later imported into the real time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse at the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. And a traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form and texture), their motions, and environments with their textures and real-world lighting, and to viewing it all in real time in virtual reality.

The industry is evolving at an incredibly rapid pace, and so must the creatives working in it, especially the person responsible for the camera and the lighting: the director of photography.

Eyesmap 3D Scanning Tablet

3D Sensing Tablet Aims To Replace Multiple Surveyor Tools

 

Source: Tech Crunch

As we reported earlier this year, Google is building a mobile device with 3D sensing capabilities — under the Project Tango moniker. But it’s not the only company looking to combine 3D sensing with mobility.

Spanish startup E-Capture R&D is building a tablet with 3D sensing capabilities that’s aiming to target the enterprise space — for example as a portable tool for surveyors, civil engineers, architects and the like — which is due to go on sale at the beginning of 2015.

The tablet, called EyesMap, will have two rear 13 megapixel cameras, along with a depth sensor and GPS, enabling it to measure coordinates, surfaces, and volumes of objects at distances of up to 70 to 80 meters in real time.
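As a rough sanity check on those numbers: with two rear cameras, depth falls out of stereo disparity as Z = f × B / d. The focal length and baseline below are assumptions for illustration, not EyesMap specifications:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# e.g. a 3500 px focal length, 10 cm baseline, and 5 px of disparity:
z = depth_from_disparity(3500.0, 0.10, 5.0)   # 70 m, near the quoted range
```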

Eyesmap 3D Scanning Tablet

 

So, for instance, it could be used to capture measurements of, or create a 3D model of, a bridge or a building from a distance. It can also model objects as small as insects, so it could be used by civil engineers, for instance, to 3D scan individual components.

Its makers claim it can build high-resolution models with HD realistic textures.

EyesMap uses photogrammetry to ensure accurate measurements and to build outdoor 3D models, but also has an RGBD sensor for indoor scanning.

The tablet will apparently be able to scan an “advanced photogrammetric picture” with up to 4 million dots in around 2 minutes. It will also be able to capture 3D objects in motion. It’s using a blend of computer vision techniques, photogrammetry, visual odometry, “precision sensor fine tuning” and other image measuring techniques, say its makers.

E-Capture was founded back in April 2012 by a group of experienced surveyors and Pedro Ortiz-Coder, a researcher in the laser scanning and photogrammetry field. The business has been founder funded thus far, but has also received a public grant of €800,000 to help with development.

In terms of where EyesMap fits into the existing enterprise device market, Ortiz-Coder says it’s competing with multiple standalone instruments in the survey field — such as 3D scanners, telemeters, photogrammetry software and so on — but is bundling multiple functions into a single portable device.

“To [survey small objects], a short range laser scanner is required but, a short-range LS cannot capture big or far away objects. That’s why we thought to create a definitive instrument, which permits the user to scan small objects, indoors, buildings, big objects and do professional works with a portable device,” he tells TechCrunch.

“Moreover, there wasn’t in the market any instrument which can measure objects in motion accurately more than 3-4 meters. EyesMap can measure people, animals, objects in motion in real time with a high range distance.”

The tablet will run Windows and, on the hardware front, will have Intel’s 4th generation i7 processor and 16 GB of RAM. Pricing for the EyesMap slate has not yet been announced.

 

Another 3D mobility project we previously covered, called LazeeEye, was aiming to bring 3D sensing smarts to any smartphone via an add on device (using just RGBD sensing) — albeit that project fell a little short of its funding goal on Kickstarter.

Also in the news recently: Mantis Vision raised $12.5 million in funding from Qualcomm Ventures, Samsung, and others for its mobile 3D capture engine, which is designed to work on handheld devices.

There’s no denying mobile 3D as a space is heating up for device makers, although it remains to be seen how slick the end-user applications end up being — and whether they can capture the imagination of mainstream mobile users or, as with E-Capture’s positioning, carve out an initial user base within niche industries.

Shapify Booth Full Body 3D Scanner

Artec Announces the World’s First 3D Full Body Scanner – Shapify Booth

A twelve second body scan and shoppers pick up their 3D printed figurine next time they visit the supermarket


This week Asda and Artec Group are happy to announce their partnership as Asda becomes the first supermarket to bring a new cutting edge 3D printing technology to shoppers in the UK with the installation of Artec Shapify Booth — the world’s first high speed 3D full body scanner in its Trafford Park store. The scanning booth will allow thousands of customers to create a 3D miniature replica of themselves.

Artec Shapify Booth

The Artec scanning booth, equipped with wide-view, high-resolution 3D scanners and a rotation rig, takes just 12 seconds to scan a person. The Artec algorithms automatically fuse 700 captured surfaces into a detailed printable file. This digital model is then sent to the Asda 3D printing centre to be made into an 8″ mini-statue in full colour, which can be collected from the store just one week later. Asda’s unique 3D printing technology allows the processing of a huge volume of high quality figurines at a time, while each print costs just £60.

Asda first introduced 3D scanning and 3D printing of customers’ figurines six months ago, using Artec handheld scanners. Driven by the immediate success of the venture and Asda’s vision to offer 3D technology to the public, Artec Group tailored its professional scanning equipment to spec and created the Shapify Booth, a high speed full body scanner that Asda is now making available to all.

Making 3D prints of all the family, customers can also come along to be scanned in their sports kit, wedding outfits, graduation robes or fancy dress, taking something totally new and personalised back home with them after their weekly shop.

If the trial of the Shapify technology at Trafford Park is successful, the new booths will be rolled out to more stores in the autumn.

Phil Stout, Asda Innovation Manager – Asda is fast becoming not just a retailer but a technology company, and this innovation is another example of how we’re leading the way in in-store, consumer-facing technology. We’ve been working with Artec technology for a while now and we’re delighted to be the first company in the world able to offer our customers this unique service.

Artyom Yukhin, Artec Group President and CEO – Over the last 5 years Artec has been providing 3D technologies to professionals in industries from space and automotive to medical and movie special effects, but we have always been looking for the chance to do something for the public. Asda’s backing and second-to-none customer understanding allowed us to create high speed scanners which are fun and easy for people to use.

About Asda Stores Ltd.

Founded in the 1960s in Yorkshire, Asda is one of Britain’s leading retailers. It has more than 180,000 dedicated Asda colleagues serving customers from 551 stores, including 32 Supercentres, 311 Superstores, 29 Asda Living stores, 179 Supermarkets, 25 depots and seven recycling centres across the UK. Its main office is in Leeds, Yorkshire and its George clothing division is in Lutterworth, Leicestershire. More than 18 million people shop at Asda stores every week and 98 per cent of UK homes are served by www.asda.com. Asda joined Walmart, the world’s number one retailer, in 1999.

About Artec Group

Artec Group is a manufacturer and developer of professional 3D hardware and software, headquartered in Luxembourg. Artec Group is a global market leader in 3D scanning solutions used by thousands of people all over the world.
Shapify, the technology for creating 3D printed figurines, was conceived and launched by Artec Group in 2013: www.shapify.me
For more information about Artec Group, visit www.artec-group.com.

Contacts:
Artec Group : press@artec-group.com

FARO SCENE Cloud to Cloud Registration

FARO SCENE 5.3 Laser Scanning Software Provides Scan Registration without Targets

[source]

FARO® Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announced the release of their newest version of laser scanning software, SCENE 5.3, and scan data hosting-service, SCENE WebShare Cloud 1.5.

FARO’s SCENE 5.3 software, for use with the Laser Scanner Focus3D X Series, delivers scan registration by eliminating artificial targets, such as spheres and checkerboards. Users can choose from two available registration methods: Top View Based or Cloud to Cloud. Top View Based registration allows for targetless positioning of scans. In interiors and in built-up areas without reliable GPS positioning of the individual scans, targetless positioning represents a highly efficient and largely automated method of scanning. The second method, Cloud to Cloud registration, opens up new opportunities for the user to position scans quickly and accurately, even under difficult conditions. In exterior locations with good positioning of the scans by means of the integrated GPS receiver of the Laser Scanner Focus3D X Series, Cloud to Cloud is the method of choice for targetless registration.
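For the curious, the core step underlying cloud-to-cloud registration in general (a generic illustration, not FARO's implementation) is iterative closest point: pair each source point with its nearest target point, then solve for the rigid transform that best aligns the pairs. A single ICP step in Python, assuming numpy and scipy are available:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """source, target: Nx3 point arrays; returns R (3x3), t (3,)."""
    pairs = target[cKDTree(target).query(source)[1]]   # nearest target points
    src_c, tgt_c = source.mean(0), pairs.mean(0)       # centroids
    H = (source - src_c).T @ (pairs - tgt_c)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)                        # Kabsch solution
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                           # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

A production pipeline iterates this with outlier rejection and a coarse pre-alignment (which is what the Top View Based method and GPS positioning provide).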

In addition, the software also offers various new processes that enable the user to flexibly respond to a wide variety of project requirements. For instance, Correspondence Split View matches similar areas in neighbouring scans to resolve any missing positioning information, and Layout Image Overlay allows users to place scan data in a geographical context using image files, CAD drawings, or maps.

Oliver Bürkler, Senior Product Manager for 3D Documentation Software, remarked, “SCENE 5.3 is the ideal tool for processing laser scanning projects. FARO’s cloud-based hosting solution, SCENE WebShare Cloud, allows scan projects to be published and shared worldwide via the Internet. The collective upgrades to FARO’s laser scanning software solution, SCENE 5.3 and WebShare Cloud 1.5, make even complex 3D documentation projects faster, more efficient, and more effective.”

About FARO
FARO is the world’s most trusted source for 3D measurement, imaging and realization technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, production planning, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Worldwide, approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems. The Company’s global headquarters is located in Lake Mary, FL., its European head office in Stuttgart, Germany and its Asia/Pacific head office in Singapore. FARO has branches in Brazil, Mexico, Germany, United Kingdom, France, Spain, Italy, Poland, Netherlands, Turkey, India, China, Singapore, Malaysia, Vietnam, Thailand, South Korea and Japan.

Click here for more information or to download a 30-day evaluation version.

Google's Project Tango 3D Capture Device

Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful, diverse range of products in the future.

Learn more about the project here

Kinect for Windows v2 mocap and 3D scanning

Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference, is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store; meaning, there’ll probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine — tools developers already know the strengths and weaknesses of, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, this could mean that cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer, Jasper Brekelmans, has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.