
zLense Announces World’s First Real-Time 3D Depth Mapping Technology for Broadcast Cameras

New virtual production platform dramatically lowers the cost of visual effects (VFX) for live and recorded TV, enabling visual environments previously unattainable in a live studio – no special studio set-up required.

27 October 2014, London, UK – zLense, a specialist provider of virtual production platforms to the film, production, broadcast and gaming industries, today announced the launch of the world’s first depth-mapping camera solution that captures 3D data and scenery in real time and adds a 3D layer, optimized for broadcasters and film productions, to the footage. The groundbreaking, industry-first technology processes spatial information, making new, truly three-dimensional compositing methods possible and enabling production teams to create stunning 3D effects and utilise state-of-the-art CGI in live TV or pre-recorded transmissions – with no special studio set-up.

Utilising the solution, directors can produce unique simulated and augmented reality worlds, generating and combining dynamic virtual reality (VR) and augmented reality (AR) effects in live studio or outside broadcast transmissions. The unique depth-sensing technology allows full 360-degree freedom of camera movement and gives presenters and anchors greater liberty of performance. Directors can combine dolly, jib-arm and handheld shots as presenters move within, interact with and control the virtual environment – in the near future using only natural gestures and motions.

“We’re poised to shake up the Virtual Studio world by putting affordable high-quality real-time CGI into the hands of broadcasters,” said Bruno Gyorgy, President of zLense. “This unique world-leading technology changes the face of TV broadcasting as we know it, giving producers and programme directors access to CGI tools and techniques that transform the audience viewing experience.”

Doing away with the need for expensive match-moving work, the zLense Virtual Production platform dramatically speeds up the 3D compositing process, making it possible for directors to mix CGI and live action shots in real-time pre-visualization and take the production values of their studio and OB live transmissions to a new level. The solution is quick to install, requires just a single operator, and is operable in almost any studio lighting.

“With minimal expense and no special studio modifications, local and regional TV channels can use this technology to enhance their news and weather graphics programmes – unleashing live augmented reality, interactive simulations and visualisations that make the delivery of infographics exciting, enticing and totally immersive for viewers,” Gyorgy continued.

The zLense Virtual Production platform combines depth-sensing technology and image-processing in a standalone camera rig that captures the 3D scene and camera movement. The ‘matte box’ sensor unit, which can be mounted on almost any camera rig, removes the need for external tracking devices or markers, while the platform’s built-in rendering engine cuts the cost and complexity of using visual effects in live and pre-recorded TV productions. The zLense Virtual Production platform can be used alongside other, pre-existing, rendering engines, VR systems and tracking technologies.

The VFX real-time capabilities enabled by the zLense Virtual Production platform include:

  • Volumetric effects
  • Additional motion and depth blur
  • Shadows and reflections to create convincing state-of-the-art visual appearances
  • Dynamic relighting
  • Realistic 3D distortions
  • Creation of a fully interactive virtual environment with interactive physical particle simulation
  • Wide shot and in-depth compositions with full body figures
  • Real-time Z-map and 3D models of the picture
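To make the last item in the list above concrete: a real-time Z-map is what unlocks true depth compositing. With a per-pixel depth value for both the live footage and the CGI layer, the mix can be decided pixel by pixel rather than with masks or match-moved geometry. Below is a minimal sketch of that idea in Python/NumPy; the array names, resolution and depth values are illustrative only, and zLense’s actual pipeline is proprietary.

```python
import numpy as np

def z_composite(live_rgb, live_depth, cgi_rgb, cgi_depth):
    """Depth compositing: keep whichever source is closer to the camera."""
    cgi_wins = cgi_depth < live_depth              # True where CGI is nearer
    return np.where(cgi_wins[..., None], cgi_rgb, live_rgb)

# Illustrative 1080p frames with depth in metres (hypothetical values).
h, w = 1080, 1920
live_rgb   = np.zeros((h, w, 3), dtype=np.uint8)   # camera image
live_depth = np.full((h, w), 3.0)                  # presenter ~3 m away
cgi_rgb    = np.full((h, w, 3), 255, dtype=np.uint8)
cgi_depth  = np.full((h, w), 5.0)                  # virtual set behind them
frame = z_composite(live_rgb, live_depth, cgi_rgb, cgi_depth)
```

Because the decision is made per pixel, a presenter can walk in front of or behind virtual objects without any manual rotoscoping.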

For more information on the zLense features and functionalities, please visit: zlense.com/features

About Zinemath
Zinemath, a leader in reinventing how professional moving images will be processed, is the producer of zLense, a revolutionary real-time depth-sensing and modelling platform that adds three-dimensional information to the filming process. zLense is the first depth-mapping camera accessory optimized for broadcasters and cinema previsualization. With an R&D center in Budapest, Zinemath, part of the Luxembourg-based Docler Group, is spreading this new vision across the film, television and mobile technology sectors.

For more information please visit: www.zlense.com


Make a 3D Printed Kit with Meshmixer 2.7

[source]

Meshmixer 2.7 was released today, full of new tools for 3D printing. Here I use the new version of the app to create a 3D printed kit of parts that can be printed in one job and assembled together with pin connectors.

To do this I used several of the new features to make it a fast and painless process. I dug up a 123D Catch capture I took of a bronze sculpture of John Muir. I found it in my dentist’s office – it turns out my dentist sculpted it. I thought I’d make my own take on it by slicing it up and connecting it back together so it can be interactive, swiveling the pieces around the pin connectors.

I made use of the new pin-connector solid parts that are included in the release (in the miscellaneous bin). I also used the powerful Layout/Packing tool to lay out parts on the print bed as a kit of parts to print in one print job. The addition of the Orthographic view is also incredibly helpful when creating the kit and laying it out within the print volume of my Replicator 2X. An Instructable is in progress with a how-to for a 3D printed kit such as this.

 

This new release has some other nice updates. Check em out below:

– New Layout/Packing Tool under Analysis for 3D print bed layout

– New Deviation Tool for visualizing max distance between two objects (ie original & reduced version)

– New Clearance Tool for visualizing min distance between two objects (ie to verify tolerances)

– (Both tools are under the Analysis menu and require selection of two objects)

– Reduce Tool now supports reducing to a target triangle count or an (approximate) maximum deviation

– Support Generation improvements

– Better DLP/SLA preset

– Can now draw horizontal bars in support generator

– Ctrl-click now deletes all support segments above click point

– Shift-ctrl-click to only delete clicked segment

– Solid Part dropping now has built-in option to boolean add/subtract

– Can set operation-type preference during Convert To Solid Part

– Can set option to preserve physical dimensions during Convert To Solid Part

– New Snapping options in Measure tool

– Can now turn on Print Bed rendering in Modeling view (under View menu)

– Must enter Print View to change/configure printer

– Improved support for low-end graphics cards

For your kit of parts, try out the new pin connectors included in the Misc. parts library. One is a negative (boolean subtract it when dropping the part). The other you can drop on the print bed for printing by itself. It fits into the negative hole. You can also author your own parts and they will drop at a fixed scale (so they fit!).
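For readers who prefer scripting to the Meshmixer UI, the boolean-subtract idea behind the negative pin connector can be sketched with the open-source trimesh library. This is not Meshmixer’s API (Meshmixer is GUI-only); the file name, pin dimensions and placement below are all hypothetical, and a boolean backend (e.g. manifold3d) must be installed.

```python
import trimesh

# Hypothetical sliced part from the John Muir capture.
part = trimesh.load("john_muir_slice.stl")

# A simple cylinder standing in for the pin-connector "negative" solid.
pin_negative = trimesh.creation.cylinder(radius=2.0, height=12.0)
pin_negative.apply_translation([0.0, 0.0, 6.0])   # position at the joint

# Boolean-subtract the negative so the separately printed pin will fit.
part_with_socket = part.difference(pin_negative)
part_with_socket.export("part_with_socket.stl")
```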

Let us know what kind of kits you create…maybe we can add in your connectors in a future release. (There’s a free 3d print and t-shirt involved). Let us know at meshmixer@autodesk.com.

Have fun!!


Leica Geosystems HDS Introduces Patent-Pending Innovations for Laser Scanning Project Efficiency

With Leica Cyclone 9.0, the industry-leading point cloud solution for processing laser scan data, Leica Geosystems HDS introduces major, patent-pending innovations for greater project efficiency. Innovations benefit both field and office via significantly faster, easier scan registration, plus quicker deliverable creation thanks to better 2D and 3D drafting tools and steel modelling. Cyclone 9.0 allows users to scale easily for larger, more complex projects while consistently ensuring high-quality deliverables.

Greatest advancement in office scan registration since cloud-to-cloud registration
When Leica Geosystems pioneered cloud-to-cloud registration, it enabled users – for the first time – to accurately execute laser scanning projects without having to physically place special targets around the scene, scan them, and model them in the office. With cloud-to-cloud registration software, users take advantage of overlaps among scans to register them together.

“The cloud-to-cloud registration approach has delivered significant logistical benefits onsite and time savings for many projects. We’ve constantly improved it, but the new Automatic Scan Alignment and Visual Registration capabilities in Cyclone 9.0 represent the biggest advancement in cloud-to-cloud registration since we introduced it,” explained Dr. Chris Thewalt, VP Laser Scanning Software. “Cyclone 9.0 lets users benefit from targetless scanning more often by performing the critical scan registration step far more efficiently in the office for many projects. As users increase the size and scope of their scanning projects, Cyclone 9.0 pays even bigger dividends. Any user who registers laser scan data will find great value in these capabilities.”

With the push of a button, Cyclone 9.0 automatically processes scans, and digital images if available, to create groups of overlapping scans that are initially aligned to each other. Once scan alignment is completed, algorithmic registration is applied for final registration. This new workflow option can be used in conjunction with target registration methods as well. These combined capabilities make the most challenging registration scenarios not only feasible but dramatically faster, and even novice users will appreciate their ease of use and ready scalability beyond small projects.
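Leica’s Automatic Scan Alignment and Visual Registration algorithms are patent-pending and unpublished, but the underlying idea of cloud-to-cloud registration – aligning two scans by their overlapping geometry – can be illustrated with a generic ICP pass in the open-source Open3D library. This is a sketch of the concept, not Leica’s method; the file names and correspondence distance are assumptions.

```python
import numpy as np
import open3d as o3d

# Hypothetical scan files from two stations with overlapping coverage.
source = o3d.io.read_point_cloud("scan_station_1.pts")
target = o3d.io.read_point_cloud("scan_station_2.pts")

# A coarse initial alignment would normally come from targets, GPS, or an
# automatic global-registration step; identity is used here for brevity.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,   # 5 cm search radius (assumption)
    init=np.identity(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("fitness (overlap ratio):", result.fitness)
source.transform(result.transformation)   # bring scan 1 into scan 2's frame
```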

Power user Marta Wren, technical specialist at Plowman Craven Associates (PCA, a leading UK chartered surveying firm), found that Cyclone 9.0’s Visual Registration tools alone sped up registration processing of scans by up to four times (4X) compared with previous methods. PCA uses laser scanning for civil infrastructure, commercial property, forensics, entertainment, and Building Information Modelling (BIM) applications.

New intuitive 2D and 3D drafting from laser scans
For civil applications, new roadway alignment drafting tools let users import LandXML-based roadway alignments or use simple polylines imported or created in Cyclone. These tools allow users to easily create cross section templates using feature codes, as well as copy them to the next station and visually adjust them to fit roadway conditions at the new location. A new vertical exaggeration tool in Cyclone 9.0 allows users to clearly see subtle changes in elevation; linework created between cross sections along the roadway can be used as breaklines for surface meshing or for 2D maps and drawings in other applications.

For 2D drafting of forensic scenes, building and BIM workflows, a new Quick Slice tool streamlines the process of creating a 2D sketch plane for drafting items, such as building footprints and sections, into just one step. A user only needs to pick one or two points on the face of a building to get started. This tool can also be used to quickly analyse the quality of registrations by visually checking where point clouds overlap.

Also included in Cyclone 9.0 are powerful, automatic point extraction features first introduced in Cyclone II TOPO and Leica CloudWorx. These include efficient SmartPicks for automatically finding bottom, top, and tie point locations and Points-on-a-Grid for automatically placing up to a thousand scan survey points on a grid for ground surfaces or building faces.

Simplified steel fitting of laser scan data
For plant, civil, building and BIM applications, Cyclone 9.0 also introduces a patent-pending innovation for modelling steel from point cloud data more quickly and easily. Unlike time-consuming methods that require either processing an entire available cloud to fit a steel shape or isolating a cloud section before fitting, this new tool lets users quickly and accurately model specific steel elements directly within congested point clouds. Users only need to make two picks along a steel member to model it. Shapes include wide flange, channel, angle, tee, and rectangular tube.

Faster path to deliverables
Leica Cyclone 9.0 also provides users with valuable, new capabilities for faster creation of deliverables for civil, architectural, BIM, plant, and forensic scene documentation from laser scans and High-Definition Surveying™ (HDS™).

Availability
Leica Cyclone 9.0 is available today. Further information about the Leica Cyclone family of products can be found at http://hds.leica-geosystems.com, and users may download new product versions online from this website or purchase or rent licenses from SCANable, your trusted Leica Geosystems representative. Contact us today for pricing on software and training.


Capturing Real-World Environments for Virtual Cinematography

[source] written by Matt Workman

Virtual Reality Cinematography

As virtual reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task that is well suited to a DP, and maybe a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting using 360-degree HDR light probes captured with DSLR/nodal tripod systems.

A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be later used to construct the space digitally.
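In data terms, the output of such a scanner is conceptually simple: a large array of points, each with a position and a colour. A minimal sketch of that structure (field names and point count are illustrative, not any vendor’s format):

```python
import numpy as np

# Five million points, each with a position (metres) and an 8-bit colour.
scan = np.zeros(5_000_000, dtype=[
    ("x", "f4"), ("y", "f4"), ("z", "f4"),
    ("r", "u1"), ("g", "u1"), ("b", "u1"),
])
# Meshing/reconstruction tools later turn these samples into geometry.
```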

An HDR Latlong Probe in Mari

Using a DSLR camera and a nodal tripod head, the DP would capture high dynamic range (32-bit float HDR) 360-degree probes of the location to record the lighting. This process captures the light in the space at a very high dynamic range, which is later reprojected onto the geometry constructed from the LIDAR data.
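The usual way to build such a 32-bit float probe is to merge a bracketed exposure series. As an illustration (not necessarily Workman’s exact workflow), OpenCV’s Debevec method can recover the camera response curve and fuse the brackets into an HDR image; the file names and exposure times below are assumptions.

```python
import cv2
import numpy as np

# Hypothetical five-stop bracket shot on the nodal head.
exposure_times = np.array([1/1000, 1/250, 1/60, 1/15, 1/4], dtype=np.float32)
images = [cv2.imread(f"probe_bracket_{i}.jpg") for i in range(5)]

# Recover the camera response curve, then merge to 32-bit float HDR.
response = cv2.createCalibrateDebevec().process(images, exposure_times)
hdr = cv2.createMergeDebevec().process(images, exposure_times, response)

cv2.imwrite("probe.hdr", hdr)   # stitching into a latlong happens separately
```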

Real-time 3D asset being lit by an HDR environment in real time (baked)

The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space using the techniques outlined above as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, as he/she would do on a real set. And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.

I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera, and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.

360 Virtual Reality Bacardi Commercial

 

http://www.mattworkman.com/2012/03/18/bacardi-360-virtual-reality/

Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.

Final Product: VR Environment Capture

 

In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.

After all is said and done, we have captured a location, its textures, and its lighting, and it can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for Oculus, etc.

SIGGRAPH 2014 and NVIDIA

SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
http://www.ustream.tv/recorded/51331701

In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment that was captured in 32-bit float with 8K textures being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)

UDIM Texture Illustration

Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8K textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader, supported by Amplify Creations’ beta Unity plug-in.
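The UDIM model referenced here is just a naming convention: UV space is split into a grid of tiles, ten per row, numbered from 1001, so one object can carry many 8K textures at once. A one-liner captures the arithmetic:

```python
def udim_tile(u: float, v: float) -> int:
    """UDIM tile number for a UV coordinate: 1001 + column + 10 * row."""
    return 1001 + int(u) + 10 * int(v)

assert udim_tile(0.5, 0.5) == 1001   # first tile
assert udim_tile(1.5, 0.5) == 1002   # one tile to the right
assert udim_tile(0.5, 1.5) == 1011   # one row up
```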

The xxArray 3D photogrammetry scanner

The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, allowing fast 8K texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.

Real time environment for Oculus

Virtual Reality Filmmaking

Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.

  • How do you tell a story in Virtual Reality?
  • How do you direct the viewer to face a certain direction?
  • How do you create a passive experience on the Oculus?

He even gives a glimpse at the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:

  1. A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
  2. A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
  3. And a traditional 2D post-rendered version – like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.

A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, environments with their textures, real world lighting, and viewing them in real time in virtual reality.

The industry is evolving at an incredibly rapid pace, and so must the creatives working in it – especially the person responsible for the camera and the lighting, the director of photography.


OMOTE Real-time Face Tracking 3D Projection Mapping

Forget the faces of historic monuments, the new frontier of 3D projection mapping is the faces of humans.

Created by Nobumichi Asai and friends, the piece is light on technical details at the moment, but from what can be found in this Tumblr post, it’s clear that step one is a 3D scan of the model’s face.

Here is the translated text from that post:

I will continue by explaining how this face mapping was made.
The title OMOTE (meaning face, or surface) comes from Noh theatre, and the method follows the idea of the Noh mask: covering the face by creating a “surface”. Being able to pursue accuracy was important, as the output had to represent a very delicate make-up art. I started with a 3D laser scan of the model’s face.

I suspect that a structured light scanner was used to capture the geometry of the model’s face rather than a 3D laser scanner. Nonetheless, this is a very cool application of 3D projection mapping.


OMOTE / REAL-TIME FACE TRACKING & PROJECTION MAPPING. from something wonderful on Vimeo.


3D Sensing Tablet Aims To Replace Multiple Surveyor Tools

 

Source: Tech Crunch

As we reported earlier this year, Google is building a mobile device with 3D sensing capabilities — under the Project Tango moniker. But it’s not the only company looking to combine 3D sensing with mobility.

Spanish startup E-Capture R&D is building a tablet with 3D sensing capabilities that’s targeting the enterprise space – for example as a portable tool for surveyors, civil engineers, architects and the like – which is due to go on sale at the beginning of 2015.

The tablet, called EyesMap, will have two rear 13-megapixel cameras, along with a depth sensor and GPS, to enable it to measure co-ordinates, surfaces and volumes of objects up to a distance of 70 to 80 meters in real time.


 

So, for instance, it could be used to capture measurements of – or create a 3D model of – a bridge or a building from a distance. It can also model objects as small as insects, so it could be used by civil engineers, for instance, to 3D scan individual components.

Its makers claim it can build high-resolution models with HD realistic textures.

EyesMap uses photogrammetry to ensure accurate measurements and to build outdoor 3D models, but also has an RGBD sensor for indoor scanning.

The tablet will apparently be able to scan an “advanced photogrammetric picture” with up to 4 million dots in around 2 minutes. It will also be able to capture 3D objects in motion. It’s using a blend of computer vision techniques, photogrammetry, visual odometry, “precision sensor fine tuning” and other image measuring techniques, say its makers.

E-Capture was founded back in April 2012 by a group of experienced surveyors and Pedro Ortiz-Coder, a researcher in the laser scanning and photogrammetry field. The business has been founder-funded thus far, but has also received a public grant of €800,000 to help with development.

In terms of where EyesMap fits into the existing enterprise device market, Ortiz-Coder says it’s competing with multiple standalone instruments in the survey field — such as 3D scanners, telemeters, photogrammetry software and so on — but is bundling multiple functions into a single portable device.

“To [survey small objects], a short range laser scanner is required but, a short-range LS cannot capture big or far away objects. That’s why we thought to create a definitive instrument, which permits the user to scan small objects, indoors, buildings, big objects and do professional works with a portable device,” he tells TechCrunch.

“Moreover, there wasn’t in the market any instrument which can measure objects in motion accurately more than 3-4 meters. EyesMap can measure people, animals, objects in motion in real time with a high range distance.”

The tablet will run Windows and, on the hardware front, will have Intel’s 4th generation i7 processor and 16 GB of RAM. Pricing for the EyesMap slate has not yet been announced.

 

Another 3D mobility project we previously covered, called LazeeEye, was aiming to bring 3D sensing smarts to any smartphone via an add on device (using just RGBD sensing) — albeit that project fell a little short of its funding goal on Kickstarter.

Also in the news recently: Mantis Vision raised $12.5 million in funding from Qualcomm Ventures, Samsung and others for its mobile 3D capture engine that’s designed to work on handheld devices.

There’s no denying mobile 3D as a space is heating up for device makers, although it remains to be seen how slick the end-user applications end up being — and whether they can capture the imagination of mainstream mobile users or, as with E-Capture’s positioning, carve out an initial user base within niche industries.


Artec Announces the World’s First 3D Full Body Scanner – Shapify Booth

A twelve-second body scan, and shoppers pick up their 3D printed figurine next time they visit the supermarket

P-3D SELFIE_ITV2000_Vimeo from Granada Reports on Vimeo.

This week Asda and Artec Group are happy to announce their partnership, as Asda becomes the first supermarket to bring cutting-edge 3D printing technology to shoppers in the UK with the installation of the Artec Shapify Booth – the world’s first high-speed 3D full-body scanner – in its Trafford Park store. The scanning booth will allow thousands of customers to create a 3D miniature replica of themselves.

Artec Shapify Booth

The Artec scanning booth, equipped with wide-view, high-resolution 3D scanners and a rotation rig, takes just 12 seconds to scan a person. The Artec algorithms automatically fuse 700 captured surfaces into a detailed printable file. This digital model is then sent to the Asda 3D printing centre to be made into an 8″ mini-statue in full colour, which can be collected from the store just one week later. Asda’s unique 3D printing technology allows the processing of a huge volume of high-quality figurines at a time, while each print costs just £60.

Asda first introduced 3D scanning and 3D printing of customers’ figurines six months ago, using Artec handheld scanners. Driven by the immediate success of the venture and Asda’s vision to offer 3D technology to the public, Artec Group tailored its professional scanning equipment to spec and created the Shapify Booth, a high-speed full-body scanner that Asda is now making available to all.
Making 3D prints of all the family, customers can also come along to be scanned in their sports kit, wedding outfits, graduation robes or fancy dress, taking something totally new and personalised back home with them after their weekly shop.

If the trial of the Shapify technology at Trafford Park is successful, the new booths will be rolled out to more stores in the autumn.

Phil Stout, Asda Innovation Manager – Asda is fast becoming not just a retailer but a technology company, and this innovation is another example of how we’re leading the way on in-store, consumer-facing technology. We’ve been working with Artec technology for a while now and we’re delighted to be the first company in the world able to offer our customers this unique service.

Artyom Yukhin, Artec Group President and CEO – Over the last five years Artec has been providing 3D technologies to professionals in industries from space and automotive to medical and movie special effects, but we have always been looking for the chance to do something for the public. Asda’s backing and second-to-none customer understanding allowed us to create high-speed scanners which are fun and easy for people to use.

About Asda Stores Ltd.

Founded in the 1960s in Yorkshire, Asda is one of Britain’s leading retailers. It has more than 180,000 dedicated Asda colleagues serving customers from 551 stores, including 32 Supercentres, 311 Superstores, 29 Asda Living stores, 179 Supermarkets, 25 depots and seven recycling centres across the UK. Its main office is in Leeds, Yorkshire and its George clothing division is in Lutterworth, Leicestershire. More than 18 million people shop at Asda stores every week and 98 per cent of UK homes are served by www.asda.com. Asda joined Walmart, the world’s number one retailer, in 1999.

About Artec Group

Artec Group is a manufacturer and developer of professional 3D hardware and software, headquartered in Luxembourg. Artec Group is a global market leader in 3D scanning solutions used by thousands of people all over the world.
Shapify, the technology for creating 3D printed figurines, was conceived and launched by Artec Group in 2013: www.shapify.me
For more information about Artec Group, visit www.artec-group.com.

Contacts:
Artec Group : press@artec-group.com


FARO SCENE 5.3 Laser Scanning Software Provides Scan Registration without Targets

[source]

FARO® Technologies, Inc. (NASDAQ: FARO), the world’s most trusted source for 3D measurement, imaging, and realization technology, announced the release of their newest version of laser scanning software, SCENE 5.3, and scan data hosting-service, SCENE WebShare Cloud 1.5.

FARO’s SCENE 5.3 software, for use with the Laser Scanner Focus3D X Series, delivers scan registration without the need for artificial targets, such as spheres and checkerboards. Users can choose from two available registration methods: Top View Based or Cloud to Cloud. Top View Based registration allows for targetless positioning of scans. In interiors and in built-up areas without reliable GPS positioning of the individual scans, targetless positioning represents a highly efficient and largely automated method of scanning. The second method, Cloud to Cloud registration, opens up new opportunities for the user to position scans quickly and accurately, even under difficult conditions. In exterior locations with good positioning of the scans by means of the integrated GPS receiver of the Laser Scanner Focus3D X Series, Cloud to Cloud is the method of choice for targetless registration.
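FARO’s implementation is proprietary, but the intuition behind Top View Based registration is easy to sketch: project each scan’s points down to a 2D top-view occupancy image, then align those images against each other. A minimal NumPy illustration of the projection step only (function name and cell size are assumptions, not FARO’s algorithm):

```python
import numpy as np

def top_view(points_xyz: np.ndarray, cell: float = 0.05) -> np.ndarray:
    """Rasterise a scan's XY footprint into a binary occupancy grid."""
    xy = points_xyz[:, :2]
    idx = ((xy - xy.min(axis=0)) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 1
    return grid

# Two such grids (one per scan) can then be matched, e.g. by 2D correlation,
# to recover the horizontal offset and rotation between the scans.
```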

In addition, the software also offers various new processes that enable the user to flexibly respond to a wide variety of project requirements. For instance, Correspondence Split View matches similar areas in neighbouring scans to resolve any missing positioning information, and Layout Image Overlay allows users to place scan data in a geographical context using image files, CAD drawings, or maps.

Oliver Bürkler, Senior Product Manager for 3D Documentation Software, remarked, “SCENE 5.3 is the ideal tool for processing laser scanning projects. FARO’s cloud-based hosting solution, SCENE WebShare Cloud, allows scan projects to be published and shared worldwide via the Internet. The collective upgrades to FARO’s laser scanning software solution, SCENE 5.3 and WebShare Cloud 1.5, make even complex 3D documentation projects faster, more efficient, and more effective.”

About FARO
FARO is the world’s most trusted source for 3D measurement, imaging and realization technology. The Company develops and markets computer-aided measurement and imaging devices and software. Technology from FARO permits high-precision 3D measurement, imaging and comparison of parts and complex structures within production and quality assurance processes. The devices are used for inspecting components and assemblies, production planning, documenting large volume spaces or structures in 3D, surveying and construction, as well as for investigation and reconstruction of accident sites or crime scenes.

Worldwide, approximately 15,000 customers are operating more than 30,000 installations of FARO’s systems. The Company’s global headquarters is located in Lake Mary, FL., its European head office in Stuttgart, Germany and its Asia/Pacific head office in Singapore. FARO has branches in Brazil, Mexico, Germany, United Kingdom, France, Spain, Italy, Poland, Netherlands, Turkey, India, China, Singapore, Malaysia, Vietnam, Thailand, South Korea and Japan.

Click here for more information or to download a 30-day evaluation version.


Mantis Vision’s MV4D Tapped As Core 3D Capture Tech Behind Google’s Project Tango Tablets

Mantis Vision, a developer of some of the world’s most advanced 3D enabling technologies, today confirmed that its MV4D technology platform will serve as the core 3D engine behind Google’s Project Tango. Mantis Vision provides the 3D sensing platform, consisting of flash projector hardware components and Mantis Vision’s core MV4D technology which includes structured light-based depth sensing algorithms.
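Mantis Vision’s depth-sensing algorithms are proprietary, but structured-light systems in general recover depth by triangulating between a pattern projector and a camera: once a projected feature has been decoded in the camera image, its disparity gives depth. A textbook sketch of that final step (the numbers are illustrative, not MV4D specifications):

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic triangulation: z = f * b / d."""
    return f_px * baseline_m / disparity_px

# e.g. an 800 px focal length, a 7.5 cm projector-camera baseline and a
# 40 px measured disparity put the surface at 1.5 m.
print(depth_from_disparity(800.0, 0.075, 40.0))   # -> 1.5
```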

Google’s new seven-inch tablet is the first mobile device released that will access the MV4D platform to easily capture, enrich and deliver quality 3D data at scale, allowing Google developers to quickly build consumer and professional applications on top of the MV4D platform.

“3D represents a major paradigm shift for mobile. We haven’t seen a change this significant since the introduction of the camera-phone. MV4D allows developers to deliver 3D-enabled mobile devices and capabilities to the world,” said Amihai Loven, CEO, Mantis Vision. “This partnership with Google offers Mantis Vision the flexibility to expand quickly and strategically. It will fuel adoption and engagement directly with consumer audiences worldwide. Together, we are bringing 3D to the masses.”

MV4D is Mantis Vision’s highly-scalable 3D capture and processing platform that allows developers to integrate Mantis’ technology into new and existing applications with ease, to drive user-generated 3D content creation throughout the mobile ecosystem. MV4D’s combination of field-proven 3D imaging hardware and software and a soon-to-be released software development kit (SDK) will ultimately serve as the backbone of 3D-enabled mobile and tablet devices.

“We are excited about working with partners, such as Mantis Vision, as we push forward the hardware and software technologies for 3D sensing and motion tracking on mobile devices,” said Johnny Lee, Technical Product Lead at Google.

Since its inception, Mantis Vision has been dedicated to bringing professional-grade 3D technology to the masses. The company’s technology will be a key component of both professional and consumer level devices and applications across a wide customer base of leading mobile technology companies, application developers and device manufacturers. Because the MV4D platform and SDK are fully scalable, they are already being planned for use in a more powerful, diverse range of products in the future.

Learn more about the project here


Microsoft Kinect for Windows v2: Affordable MoCap and 3D Scanning Solution

Amid the volley of announcements from Microsoft’s Build conference is word that the new Kinect for Windows has a near-future release timeframe for both the hardware and its SDK. The desktop version of Microsoft’s do-all sensor will be available to the public this summer, as will its development framework. Perhaps more importantly, once they’re done, developers can publish their creations to the Windows Store; meaning, there’ll probably be more Kinect applications for Windows in one place than ever before. As Redmond tells it, this self-publishing will happen “later this summer.” Next summer, Microsoft is adding support for one of gaming’s most pervasive dev toolkits to Kinect for Windows: the Unity engine – tools developers already know the strengths and weaknesses of, which should bolster the app selection even further. Given that the Xbox One will see Unity support this year, this could mean that cross-platform apps and games are a distinct possibility.

With the specs of Kinect for Windows V2, the 3D scanning and imaging industries may be in for a game-changer. Indie film and game developers will hopefully be able to take advantage of its features as an affordable motion capture (mocap) solution.

Kinect motion capture guru and programmer Jasper Brekelmans has been playing with the second release of the Kinect for quite some time and has been posting some impressive results. You can stay on top of everything he is doing on his personal site http://www.brekel.com/.

You can pre-order your Kinect for Windows V2 today for $199 from the Microsoft Store.