
3D Sensing Tablet Aims To Replace Multiple Surveyor Tools

 

Source: TechCrunch

As we reported earlier this year, Google is building a mobile device with 3D sensing capabilities — under the Project Tango moniker. But it’s not the only company looking to combine 3D sensing with mobility.

Spanish startup E-Capture R&D is building a tablet with 3D sensing capabilities aimed at the enterprise space, for example as a portable tool for surveyors, civil engineers, architects and the like. It is due to go on sale at the beginning of 2015.

The tablet, called EyesMap, will have two rear 13-megapixel cameras, along with a depth sensor and GPS, enabling it to measure the coordinates, surfaces and volumes of objects at distances of up to 70 to 80 meters in real time.


So, for instance, it could be used to capture measurements of a bridge or a building from a distance, or to create a 3D model of one. It can also model objects as small as insects, so civil engineers could use it to 3D scan individual components.

Its makers claim it can build high-resolution models with HD realistic textures.

EyesMap uses photogrammetry to ensure accurate measurements and to build outdoor 3D models, but also has an RGBD sensor for indoor scanning.

The tablet will apparently be able to capture an “advanced photogrammetric picture” with up to 4 million dots in around 2 minutes. It will also be able to capture 3D objects in motion. It uses a blend of computer vision techniques, photogrammetry, visual odometry, “precision sensor fine tuning” and other image measuring techniques, say its makers.
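E-Capture hasn't published details of its pipeline, but the step common to all RGBD scanning, turning a depth image into measurable 3D points, is easy to illustrate. Below is a minimal back-projection sketch using the standard pinhole camera model; the intrinsics are placeholder values that would normally come from sensor calibration:

```python
import numpy as np

# Placeholder pinhole intrinsics -- real values come from sensor calibration.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point for a 640x480 depth image (assumed)

def depth_to_point_cloud(depth_m):
    """Back-project a depth image (metres, shape HxW) to an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example: a synthetic flat wall 2 m from the sensor.
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
print(cloud.shape)  # (307200, 3)
```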

E-Capture was founded back in April 2012 by a group of experienced surveyors and Pedro Ortiz-Coder, a researcher in the laser scanning and photogrammetry field. The business has been founder-funded thus far, but has also received a public grant of €800,000 to help with development.

In terms of where EyesMap fits into the existing enterprise device market, Ortiz-Coder says it’s competing with multiple standalone instruments in the survey field — such as 3D scanners, telemeters, photogrammetry software and so on — but is bundling multiple functions into a single portable device.

“To [survey small objects], a short range laser scanner is required but, a short-range LS cannot capture big or far away objects. That’s why we thought to create a definitive instrument, which permits the user to scan small objects, indoors, buildings, big objects and do professional works with a portable device,” he tells TechCrunch.

“Moreover, there wasn’t in the market any instrument which can measure objects in motion accurately more than 3-4 meters. EyesMap can measure people, animals, objects in motion in real time with a high range distance.”

The tablet will run Windows and, on the hardware front, will have a fourth-generation Intel Core i7 processor and 16 GB of RAM. Pricing for the EyesMap slate has not yet been announced.

 

Another 3D mobility project we previously covered, called LazeeEye, aimed to bring 3D sensing smarts to any smartphone via an add-on device (using just RGBD sensing), though that project fell a little short of its funding goal on Kickstarter.

Also in the news recently: Mantis Vision raised $12.5 million in funding from Qualcomm Ventures, Samsung and others for its mobile 3D capture engine, which is designed to work on handheld devices.

There's no denying that mobile 3D is a space heating up for device makers, although it remains to be seen how slick the end-user applications end up being, and whether they can capture the imagination of mainstream mobile users or, as with E-Capture's positioning, carve out an initial user base within niche industries.


Artec Announces the World’s First 3D Full Body Scanner – Shapify Booth

A twelve-second body scan, and shoppers pick up their 3D-printed figurine the next time they visit the supermarket

Video: “P-3D SELFIE” from Granada Reports on Vimeo.

This week Asda and Artec Group are happy to announce their partnership: Asda becomes the first supermarket to bring cutting-edge 3D printing technology to shoppers in the UK with the installation of the Artec Shapify Booth, the world's first high-speed 3D full-body scanner, in its Trafford Park store. The scanning booth will allow thousands of customers to create 3D miniature replicas of themselves.


The Artec scanning booth, equipped with wide-view, high-resolution 3D scanners and a rotation rig, takes just 12 seconds to scan a person. The Artec algorithms automatically fuse 700 captured surfaces into a detailed printable file. This digital model is then sent to the Asda 3D printing centre to be made into an 8″ mini-statue in full colour, which can be collected from the store just one week later. Asda's unique 3D printing technology allows a huge volume of high-quality figurines to be processed at a time, while each print costs just £60.
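Artec hasn't published how its fusion works, but pipelines like this typically align overlapping scan fragments with some variant of iterative closest point (ICP) registration before merging. Here's a minimal sketch of that generic step using the open-source Open3D library (not Artec's software); the file names and the 5 mm correspondence threshold are placeholder assumptions:

```python
import open3d as o3d

# Hypothetical file names; any two overlapping scan fragments would do.
source = o3d.io.read_point_cloud("scan_fragment_a.ply")
target = o3d.io.read_point_cloud("scan_fragment_b.ply")

# Refine alignment with point-to-point ICP (Open3D >= 0.10 API),
# using a 5 mm correspondence distance threshold.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.005,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation)   # bring fragment A into B's frame
merged = source + target                  # naive merge of the aligned fragments
o3d.io.write_point_cloud("merged.ply", merged)
```

A real booth pipeline would register hundreds of fragments globally and reconstruct a watertight mesh, but pairwise registration like this is the basic building block.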

Asda first introduced 3D scanning and 3D printing of customers' figurines six months ago, using Artec handheld scanners. Driven by the immediate success of the venture and Asda's vision to offer 3D technology to the public, Artec Group tailored its professional scanning equipment to spec and created the Shapify Booth, a high-speed full-body scanner that Asda is now making available to all.
As well as making 3D prints of the whole family, customers can come along to be scanned in their sports kit, wedding outfits, graduation robes or fancy dress, taking something totally new and personalised back home with them after their weekly shop.

If the trial of the Shapify technology at Trafford Park is successful, the new booths will be rolled out to more stores in the autumn.

Phil Stout, Asda Innovation Manager: “Asda is fast becoming not just a retailer but a technology company, and this innovation is another example of how we're leading the way on in-store, consumer-facing technology. We've been working with Artec technology for a while now and we're delighted to be the first company in the world able to offer our customers this unique service.”

Artyom Yukhin, Artec Group President and CEO: “Over the last 5 years Artec has been providing 3D technologies to professionals in industries from space and automotive to medical and movie special effects, but we have always been looking for the chance to do something for the public. Asda's backing and second-to-none customer understanding allowed us to create high-speed scanners which are fun and easy for people to use.”

About Asda Stores Ltd.

Founded in the 1960s in Yorkshire, Asda is one of Britain’s leading retailers. It has more than 180,000 dedicated Asda colleagues serving customers from 551 stores, including 32 Supercentres, 311 Superstores, 29 Asda Living stores, 179 Supermarkets, 25 depots and seven recycling centres across the UK. Its main office is in Leeds, Yorkshire and its George clothing division is in Lutterworth, Leicestershire. More than 18 million people shop at Asda stores every week and 98 per cent of UK homes are served by www.asda.com. Asda joined Walmart, the world’s number one retailer, in 1999.

About Artec Group

Artec Group is a manufacturer and developer of professional 3D hardware and software, headquartered in Luxembourg. Artec Group is a global market leader in 3D scanning solutions used by thousands of people all over the world.
Shapify, the technology for creating 3D printed figurines, was conceived and launched by Artec Group in 2013: www.shapify.me
For more information about Artec Group, visit www.artec-group.com.

Contacts:
Artec Group: press@artec-group.com


Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.
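Structured-light depth sensing in general (Mantis Vision's own coding scheme is proprietary) works by projecting a sequence of known patterns and decoding, for each camera pixel, which projector column illuminated it; triangulating the camera and projector rays then yields depth. Here's a toy sketch of decoding binary Gray-code patterns, the textbook version of the idea:

```python
import numpy as np

def decode_gray_code(images, threshold=0.5):
    """Decode a stack of binary Gray-code pattern images into projector columns.

    images: list of HxW arrays in [0, 1], most significant bit first.
    Returns an HxW array of decoded projector column indices.
    """
    bits = [(img > threshold).astype(np.uint32) for img in images]
    # Gray -> binary: b[0] = g[0]; b[i] = b[i-1] XOR g[i]
    binary = bits[0]
    code = bits[0]
    for g in bits[1:]:
        binary = binary ^ g
        code = (code << 1) | binary
    return code

# Toy example: 4 patterns encode 16 projector columns.
h, w = 4, 16
cols = np.tile(np.arange(w, dtype=np.uint32), (h, 1))
gray = cols ^ (cols >> 1)                       # Gray-code each column index
imgs = [((gray >> (3 - i)) & 1).astype(float) for i in range(4)]
assert np.array_equal(decode_gray_code(imgs), cols)
```

Production systems like MV4D use far more robust codes and capture at video rates, but the decode-then-triangulate structure is the same.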

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer-level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful and diverse range of products in the future.

Learn more about the project here


Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store, meaning there’ll probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine, tools developers already know the strengths and weaknesses of, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, this could mean that cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer Jasper Brekelmans has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site, http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.


Leap Motion Controller Update to Offer Affordable Individual Joint MoCap

Leap Motion has announced that the software for its self-titled PC gesture-control device will soon track the movement of individual finger joints, as well as the overall motion of a user’s hands.

Since its launch in 2012, the $80 Leap Motion controller has attracted a lot of interest in the CG community, with Autodesk releasing Maya and MotionBuilder plugins last year.

Individual joint tracking, more parameters captured
In a post on the company’s blog, Leap Motion CEO Michael Buckwald revealed that version 2 of its software will track the individual joints of a user’s fingers, compensating automatically where individual fingers are occluded.

The software will also expose “much more granular data” via its SDK, including 27 dimensions per hand.
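For a feel of what that more granular data looks like in practice, here's a rough sketch of polling per-bone joint positions with the Python bindings from the v2 SDK beta, following the documented Listener/Controller pattern; treat the details as illustrative rather than as official sample code:

```python
import time
import Leap  # Python bindings shipped with the Leap Motion v2 SDK

BONE_NAMES = ["metacarpal", "proximal", "intermediate", "distal"]

class JointListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()
        for hand in frame.hands:
            for finger in hand.fingers:
                for b in range(4):  # the four bones of each digit
                    bone = finger.bone(b)
                    # prev_joint/next_joint are the bone ends;
                    # next_joint is the end nearer the fingertip.
                    print("finger %d, %s bone, joint at %s" %
                          (finger.id, BONE_NAMES[b], bone.next_joint))

listener = JointListener()
controller = Leap.Controller()
controller.add_listener(listener)
try:
    while True:
        time.sleep(1)  # keep the process alive while frames stream in
except KeyboardInterrupt:
    controller.remove_listener(listener)
```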

Affordable Individual MoCap tools coming soon
The update, which will be free and does not require a change of hardware, is now in public beta for developers, although there’s no news of a consumer release date yet.

Jasper Brekelmans, creator of upcoming hand-tracking tool Brekel Pro Hands, has already announced that he is using the SDK.

Read more about the Leap Motion V2 update on the developer’s blog


Introducing R3DS Wrap – Topology Transfer Tool

Wrap is a topology transfer tool. It lets you take the topology you already have and transfer your new 3D-scanned data onto it. The resulting models will not only share the same topology and UV coordinates but will also naturally become blendshapes of each other (there's a small code sketch of this property at the end of this section). Here's a short video showing how it works:

And here are a couple of examples based on 3D scans kindly provided by Lee Perry-Smith:


You can download a demo version from their website: http://www.russian3dscanner.com

As with all new technology in its final beta stages, Wrap is not perfect yet. R3DS would be grateful to everyone who gives support and feedback to help finalize things in the best possible way. This software has the potential to be a great tool. Check it out!
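As a footnote on why shared topology matters: once two wrapped scans have the same vertex count and ordering, blending them is nothing more than per-vertex interpolation. A minimal numpy sketch (the mesh arrays here are random placeholders standing in for real wrapped scans):

```python
import numpy as np

# Two wrapped scans sharing topology: same vertex count, same order, same faces.
# Placeholder data; in practice these come from the wrapped OBJ/PLY files.
neutral = np.random.rand(5000, 3)   # vertex positions of the base scan
smile   = np.random.rand(5000, 3)   # vertex positions of a second expression

def blendshape(base, target, weight):
    """Linear blendshape: offset the base by weight * (target - base)."""
    return base + weight * (target - base)

half_smile = blendshape(neutral, smile, 0.5)  # same faces/UVs still apply
```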


2014 Sochi Winter Olympics to Feature Giant 3D Pinscreen of Your Face

Visitors to this year’s Sochi Winter Olympics will have the opportunity to see their face rendered on the side of a building in giant 3D mechanical polygons. The work of British architect Asif Khan, Megaface is like a cross between Mount Rushmore’s sculpted facade and the pinscreens that adorned executive offices in the ’90s.

Megaface is part of Khan’s design of a 2,000 sq m pavilion and landscape for MegaFon, one of the largest Russian telecoms companies and a general partner of the Sochi Winter Olympics.

3D photo booths within the pavilion, and in MegaFon retail stores across Russia, will scan visitors’ portraits to be recreated by the pavilion. Its facade is designed to function like a huge pinscreen: it is made up of over 10,000 actuators which transform the building’s skin into a three-dimensional portrait of each visitor’s face.

The concept is to give everyone the opportunity to be the face of the Olympics.

The structure is sited at the entrance to the Olympic Park, and incorporates an exhibition hall, hospitality areas, a rooftop viewing deck and two broadcasting suites.

The installation consists of 10,000 actuators fitted with LEDs and arranged into triangles that can extend up to six feet out from the side of the building to form 3D shapes. Visitors will be invited to have their face scanned at on-site “3D photo booths” before Khan’s actuators move to form giant 500-square-foot representations of the scans. Three faces will be shown at any given time for 20 seconds each, and it’s estimated 170,000 faces will be rendered during the games. Visitors will also be given a link where they can watch a 20-second video showing the exact moment when their face was on the side of the building.
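A quick back-of-the-envelope check on those figures: 170,000 faces shown for 20 seconds each, three at a time, works out to 170,000 × 20 / 3 ≈ 1.13 million seconds, or roughly 13 days of continuous operation, which fits within the roughly two-week run of the games.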


Megaface will comprise one side of Russian carrier MegaFon’s pavilion, the installation’s name itself part of the massive branding exercise that is the Olympics. It’s some way from completion, but Khan and Swiss firm iart, which is realizing Khan’s vision, have successfully demonstrated a prototype (shown below) that uses just 1,000 actuators to render a small-scale image.

[Asif Khan via The Verge]

The Kinetic Facade of the MegaFaces Pavilion: Initial Batch Test from iart on Vimeo.


Staples to offer full-color ‘Easy 3D’ printing service

FRANKFURT, Germany, November 29, 2012 /PRNewswire/ —

Full color and low cost make 3D printing accessible to everyone

In a giant step toward the reality of 3D printing for all, Mcor Technologies Ltd has struck a deal with Staples Printing Systems Division to launch a new 3D printing service called “Staples Easy 3D,” online via the Staples Office Centre.

Staples Easy 3D will offer consumers, product designers, architects, healthcare professionals, educators, students and others low-cost, brilliantly coloured, photo-realistic 3D printed products from Staples stores. Customers will simply upload electronic files to the Staples Office Centre and pick up the models in their nearby Staples stores, or have them shipped to their address. Staples will produce the models with the Mcor IRIS, a 3D printer with the highest colour capability in the industry and the lowest operating cost of any commercial-class 3D printer.

Read more

Kinect: Point Clouds and Face Tracking in AutoCAD

Originally posted by Greg Duncan at http://channel9.msdn.com/coding4fun/kinect/Kinect-to-AutoCAD-v15-and-some-AutoCAD-Face-Tracking-too

Today’s projects take us back to AutoCAD, with an update to Kean’s work last mentioned here, AutoCAD and the Kinect for v1, as well as using Face Tracking inside AutoCAD…

Integrating Kinect with AutoCAD 2013

As promised in the last post, today we’re going to see the adjusted point cloud import workflow applied to the previously posted Kinect integration samples. This was also an opportunity to look at the improvements in version 1.5 of the Kinect for Windows SDK.

Here’s the updated set of samples for AutoCAD 2013 and the Microsoft Kinect SDK v1.5.

The main changes were related to the updated point cloud import workflow, but I also updated the code to allow the user to choose to enter “near mode” (by setting the KINNEAR system variable to 1), and to make sure the reduced set of 10 joints get displayed properly when jigging using either the KINSKEL or KINBOTH commands.

I also tested the British English language pack, and sure enough it did a much better job of understanding my commands. I’ve left the samples defaulting to US English; just search for and replace “en-US” with “en-GB” (having installed the language pack, of course) to give it a try.

Project Information URL: http://through-the-interface.typepad.com/through_the_interface/2012/06/integrating-kinect-with-autocad-2013.html

Project Download URL: http://through-the-interface.typepad.com/files/KinectSamples-v1.5.zip

Face tracking inside AutoCAD using Kinect

After discovering, earlier in the week, that version 1.5 of the Kinect SDK provides the capability to get a 3D mesh of a tracked face, I couldn’t resist seeing how to bring that into AutoCAD (both inside a jig and as database-resident geometry).

I started by checking out the pretty cool FaceTracking3D sample, which gives you a feel for how Kinect tracks faces, superimposing an exaggerated, textured mesh on top of your usual face in a WPF dialog:

I decided to go for a more minimalist approach for the AutoCAD integration (which also happens to mean it’s simpler, with less scary results): just generating a wireframe view during a jig by drawing a series of polygons…


… and then create some database-resident 3D faces once we’re done:


Project Information URL: http://through-the-interface.typepad.com/through_the_interface/2012/06/face-tracking-inside-autocad-using-kinect.html

Project Download URL: http://through-the-interface.typepad.com/files/KinectSamples-v1.5.1.zip

Project Source URL: http://through-the-interface.typepad.com/files/KinectSamples-v1.5.1.zip



Microsoft Unknowingly Revolutionizes the 3D Imaging Industry [Kinect]

Being the bleeding-edge technology geeks that we are here at SCANable, we have been closely following Microsoft’s adoption of Israeli developer PrimeSense’s controller-less motion capture technology, which interprets 3D scene information from continuously projected infrared structured light. The device has now been released as Kinect for Xbox 360, or simply Kinect (originally known by the code name Project Natal), defined by Microsoft as a “controller-free gaming and entertainment experience” for the Xbox 360 video game platform; it may later be officially supported on PCs. Based around a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360 without the need to touch a game controller, through a natural user interface using gestures, spoken commands, or presented objects and images. The project is aimed at broadening the Xbox 360’s audience beyond its typical gamer base. Kinect competes with the Wii Remote with Wii MotionPlus and the PlayStation Move motion control systems for the Wii and PlayStation 3 home consoles, respectively.

Weeks before the Kinect was officially released, the hacking community was hard at work digging into this revolutionary hardware in order to test the true limits of its capabilities. There was even a bounty of $3,000 offered by development company Adafruit for an open-source driver. A mere two days after the bounty was announced, the goal had already been reached, according to an email Adafruit’s Phillip Torrone sent Gizmodo. Drivers have been available for Mac and Linux for a couple of weeks, and there are now working drivers for Windows, which we have successfully tested here at SCANable. Our early assessment indicates that this inexpensive device is capable of much more than acting as a game controller. To our amazement, we discovered that it continuously captures 3D point cloud data of everything in your living room/game room. By tapping into the Kinect with a PC (Mac, Linux or Windows), we were able to gain full access to this multi-purpose 4-dimensional data, with the ability to freely move around the feed in real time. Using the OpenKinect drivers and basic viewing software, we were even able to set cut-planes, which gave us the ability to isolate a moving object in the scene and view the data as colored depth ranges or as true RGB color captured by the unit’s embedded camera.
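If you want to poke at that data stream yourself, here is a minimal sketch using the open-source OpenKinect (libfreenect) Python bindings. The raw-to-metres conversion is Nicolas Burrus’s widely circulated approximation, and the cut-plane limits are arbitrary values for isolating a person standing a metre or so from the sensor:

```python
import numpy as np
import freenect  # Python wrapper for the OpenKinect/libfreenect driver

def get_depth_metres():
    """Grab one 640x480 depth frame and convert raw 11-bit values to metres."""
    raw, _ = freenect.sync_get_depth()  # raw sensor values, 0..2047
    raw = raw.astype(np.float64)
    # Widely used empirical approximation (Nicolas Burrus calibration).
    with np.errstate(divide="ignore"):
        depth = 1.0 / (raw * -0.0030711016 + 3.3309495161)
    depth[raw >= 2047] = 0.0  # 2047 marks "no reading"
    return depth

# Crude "cut planes": keep only geometry between 0.5 m and 2.0 m,
# isolating a person from the room behind them.
depth = get_depth_metres()
mask = (depth > 0.5) & (depth < 2.0)
print("foreground pixels:", mask.sum())
```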

Drivers:
Kinect drivers for Windows can be found here.
Drivers for Mac and Linux can be found here.

The possibilities of this technology are tremendous. We see a near future where we can navigate through a point cloud dataset or virtual 3D model using simple hand gestures (see Evoluce’s example below). Imagine being able to digitally record “true” 3D video and having the ability to easily remove data at certain depths instead of by color, eliminating typical green-screen procedures. Even better, what if you strapped one of these bad boys onto a robotic vacuum and used it to remotely capture 3D data of interior spaces? Think we are crazy? Keep reading…

How does it work?
Wired has a great article about it!


Examples:

We have compiled several of the best videos of the Kinect in use. Check them out and be sure to post comments below. We are all masters of manipulating point cloud data; let’s pull together our resources and expertise and come up with some great applications for this affordable technology!

Evoluce, one of the leading manufacturers of high-quality multi-touch and gesture computing displays, demonstrates the future of how we interact with our computers.

MIT’s early experiments with a Microsoft Kinect depth camera on a mobile robot base. Say hello to KinectBot. Is this the indoor mobile mapping solution we have been waiting for?

Kinect-style device used to map the interior of a building:

For the launch of Xbox Kinect in Germany, seeper created an interactive projection mapping. Set at the highly visible Stachus in central Munich, this project attracted hordes of participants. Immersed in the experience, users took part in epic particle ball games, sending fluids shooting three stories high. Together with guests, including Sylvie van der Vaart, we explored the limits of controller-free gaming!

Kinect used for a real-time lightsaber:

What are your thoughts about this revolutionary device? Be sure to leave your comments and feedback below. Also be sure to check back here over the coming weeks for new updates!