The first presidential portraits created from 3-D scan data are now on display in the Smithsonian Castle. The portraits of President Barack Obama were created based on data collected by a Smithsonian-led team of 3-D digital imaging specialists and include a digital and 3-D printed bust and life mask. A new video released today by the White House details the behind-the-scenes process of scanning, creating and printing the historic portraits. The portraits will be on view in the Commons gallery of the Castle starting today, Dec. 2, through Dec. 31. The portraits were previously displayed at the White House Maker Faire June 18.
The Smithsonian-led team scanned the President earlier this year using two distinct 3-D documentation processes. Experts from the University of Southern California’s Institute for Creative Technologies used their Light Stage face scanner to document the President’s face from ear to ear in high resolution. Next, a Smithsonian team used handheld 3-D scanners and traditional single-lens reflex cameras to record peripheral 3-D data to create an accurate bust.
The data captured was post-processed by 3-D graphics experts at the software company Autodesk to create final high-resolution models. The life mask and bust were then printed using 3D Systems’ Selective Laser Sintering printers.
The data and the printed models are part of the collection of the Smithsonian’s National Portrait Gallery. The Portrait Gallery’s collection has multiple images of every U.S. president, and these portraits will support the current and future collection of works the museum has to represent Obama.
The life-mask scan of Obama joins only three other presidential life masks in the Portrait Gallery’s collection: one of George Washington created by Jean-Antoine Houdon and two of Abraham Lincoln created by Leonard Wells Volk (1860) and Clark Mills (1865). The Washington and Lincoln life masks were created using traditional plaster-casting methods. The Lincoln life masks are currently available to explore and download on the Smithsonian’s X 3D website.
The video below shows an Artec Eva being used to capture a 3D portrait of President Barack Obama, alongside the Mobile Light Stage – in essence, eight high-end DSLRs and 50 light sources mounted in a futuristic-looking quarter-circle of aluminum scaffolding. During a facial scan, each camera captures 10 photographs under different lighting conditions, for a total of 80 photographs, all in a single second. Sophisticated algorithms then process this data into high-resolution 3D models. The Light Stage captured the President’s facial features from ear to ear, much as the 1860 Lincoln life mask did.
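The capture arithmetic described above is straightforward; this small sketch just makes the numbers explicit (the constants come from the article, the variable names are illustrative):

```python
# Light Stage capture budget, per the figures in the article.
CAMERAS = 8               # high-end DSLRs on the quarter-circle rig
LIGHTING_CONDITIONS = 10  # each camera fires once per lighting state
CAPTURE_TIME_S = 1.0      # the whole burst happens in a single second

total_photos = CAMERAS * LIGHTING_CONDITIONS
per_camera_fps = LIGHTING_CONDITIONS / CAPTURE_TIME_S

print(total_photos)    # 80 photographs per facial scan
print(per_camera_fps)  # each DSLR must sustain 10 frames per second
```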
About Smithsonian X 3D
The Smithsonian publicly launched its 3-D scanning and imaging program Smithsonian X 3D in 2013 to make museum collections and scientific specimens more widely available for use and study. The Smithsonian X 3D Collection features objects from the Smithsonian that highlight different applications of 3-D capture and printing, as well as digital delivery methods for 3-D data in research, education and conservation. Objects include the Wright Flyer, a model of the remnants of supernova Cassiopeia A, a fossil whale and a sixth-century Buddha statue. The public can explore all these objects online through a free custom-built, plug-in browser and download the data for their own use in modeling programs or to print using a 3-D printer.
Virtual Reality Cinematography
As Virtual Reality HMDs (Oculus) come speeding towards consumers, there is an emerging need to capture 360 media and 360 environments. Capturing a location for virtual reality or virtual production is a task well suited to a DP, and perhaps a new niche of cinematography/photography. Not only are we capturing the physical dimensions of the environment using LIDAR, but we are also capturing the lighting using 360-degree HDR light probes shot with DSLRs on nodal tripod systems.
A LIDAR scanner is essentially a camera that shoots in all directions. It lives on a tripod and it can record the physical dimensions and color of an environment/space. It captures millions of points and saves their position and color to be later used to construct the space digitally.
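The data a LIDAR scan produces is conceptually simple: millions of points, each carrying a position and a color. A minimal sketch of that structure, with a bounding-box query of the kind used to establish the extents of a scanned space (the class and field names here are illustrative, not from any particular scanner SDK):

```python
from dataclasses import dataclass

@dataclass
class Point:
    """One LIDAR sample: a position in meters plus the captured color."""
    x: float
    y: float
    z: float
    r: int  # color channels, 0-255
    g: int
    b: int

def bounding_box(points):
    """Axis-aligned extents of the scanned space, per axis."""
    xs = [p.x for p in points]
    ys = [p.y for p in points]
    zs = [p.z for p in points]
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

# A toy two-point "cloud"; a real scan holds millions of these.
cloud = [Point(0.0, 0.0, 0.0, 120, 110, 100),
         Point(4.5, 2.0, 3.0, 200, 190, 180)]
print(bounding_box(cloud))  # ((0.0, 4.5), (0.0, 2.0), (0.0, 3.0))
```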
Using a DSLR camera and a nodal tripod head, the DP would capture High Dynamic Range (32bit float HDR) 360 degree probes of the location, to record the lighting. This process would essentially capture the lighting in the space at a VERY high dynamic range and that would be later reprojected onto the geometry constructed using the LIDAR data.
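The 32-bit float HDR values come from merging bracketed exposures of the same view. A hedged sketch of that merge, in the spirit of Debevec-style HDR assembly: real pipelines also recover the camera response curve, whereas here we assume linear pixel values and a simple hat weighting for clarity.

```python
def hat_weight(z, z_min=0.05, z_max=0.95):
    """Trust mid-range pixels; distrust near-black and near-clipped ones."""
    return max(0.0, min(z - z_min, z_max - z))

def merge_hdr(samples):
    """Merge one pixel's bracketed exposures into a linear radiance value.

    samples: list of (pixel_value_0_to_1, exposure_time_seconds).
    """
    num = sum(hat_weight(z) * (z / t) for z, t in samples)
    den = sum(hat_weight(z) for z, t in samples)
    return num / den if den > 0 else 0.0

# The same scene point at three shutter speeds: the short exposure keeps
# the highlight unclipped, the long exposure resolves the shadows.
brackets = [(0.9, 1 / 1000), (0.5, 1 / 250), (0.12, 1 / 60)]
radiance = merge_hdr(brackets)
print(radiance)  # a single linear radiance estimate, unbounded above 1.0
```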
The DP is essentially lighting the entire space in 360 degrees and then capturing it. Imagine an entire day of lighting a space in all directions: lights outside windows, track lighting on walls, practicals, etc. Then capturing that space, using the techniques outlined above, as an asset to be used later. Once the set is constructed virtually, the director can add actors/props and start filmmaking, like he/she would do on a real set. And the virtual cinematographer would line up the shots, camera moves, and real-time lighting.
I’ve already encountered a similar paradigm as a DP, when I shot a 360 VR commercial. A few years ago I shot a commercial for Bacardi with a 360 VR camera and we had to light and block talent in all directions within a loft space. The end user was then able to control which way the camera looked in the web player, but the director/DP controlled its travel path.
360 Virtual Reality Bacardi Commercial
Capturing a set for VR cinematography would allow the user to control their position in the space as well as which way they were facing. And the talent and interactive elements would be added later.
Final Product: VR Environment Capture
In this video you can see the final product of a location captured for VR. The geometry for the set was created using the LIDAR as a reference. The textures and lighting data are baked in from a combination of the LIDAR color data and the reprojected HDR probes.
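The reprojection step mentioned above boils down to: for each surface point, take the direction from the probe position toward that point and look up the matching pixel in the equirectangular (lat-long) HDR panorama. A sketch of that direction-to-UV mapping, assuming the common lat-long convention with y up (actual tools may use different axis conventions):

```python
import math

def direction_to_latlong_uv(dx, dy, dz):
    """Map a direction vector to (u, v) in [0, 1) of a lat-long panorama."""
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / length, dy / length, dz / length
    u = (math.atan2(dx, -dz) / (2 * math.pi)) + 0.5  # longitude
    v = math.acos(dy) / math.pi                      # latitude (y is up)
    return u, v

# Looking straight ahead (down -z) lands at the center of the panorama.
print(direction_to_latlong_uv(0.0, 0.0, -1.0))  # (0.5, 0.5)
```

With this mapping, the baking step can sample the HDR probe for every texel on the LIDAR-derived geometry and store the result in its textures.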
After all is said and done, we have captured a location, its textures, and its lighting, and the result can be used as a digital location however we need: for previs, virtual production, background VFX plates, a real-time asset for the Oculus, etc.
SIGGRAPH 2014 and NVIDIA
SG4141: Building Photo-Real Virtual Reality from Real Reality, Byte by Byte
In this presentation Scott Metzger speaks about his new virtual reality company Nurulize and his work with the Nvidia K5200 GPU and The Foundry’s Mari to create photo-real 360-degree environments. He shows a demo of the environment that was captured in 32-bit float with 8K textures being played in real time on an Oculus Rift, and the results speak for themselves. (The real-time asset was downsampled to 16-bit EXR.)
Some key technologies mentioned were the development of virtual texture engines that allow objects to have MANY 8k textures at once using the UDIM model. The environment’s lighting was baked from V-Ray 3 to a custom UDIM Unity shader and supported by Amplify Creations beta Unity Plug-in.
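The UDIM model mentioned above is just a tile-numbering convention: UV space is divided into a grid ten tiles wide, numbered upward from 1001, so a single object can carry many separate 8K textures, one per tile. A brief sketch of the numbering:

```python
def udim_tile(u, v):
    """Return the UDIM tile number for a UV coordinate (valid for 0 <= u < 10)."""
    return 1001 + int(u) + 10 * int(v)

print(udim_tile(0.5, 0.5))  # 1001 -- the first tile
print(udim_tile(1.2, 0.5))  # 1002 -- one tile to the right
print(udim_tile(0.5, 1.7))  # 1011 -- one tile row up
```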
The actors were scanned using the xxArray photogrammetry system, and Mari was used to project the high-resolution textures. All of this technology was enabled by Nvidia’s Quadro GPU line, to allow fast 8K texture buffering. The actors were later imported into the real-time environment that had been captured and were viewable from all angles through an Oculus Rift HMD.
Virtual Reality Filmmaking
Scott brings up some incredibly relevant and important questions about virtual reality for filmmakers (directors/DPs) who plan to work in virtual reality.
- How do you tell a story in Virtual Reality?
- How do you direct the viewer to face a certain direction?
- How do you create a passive experience on the Oculus?
He even gives a glimpse at the future distribution model of VR content. His demo for the film Rise will be released for Oculus/VR in the following formats:
- A free roam view where the action happens and the viewer is allowed to completely control the camera and point of view.
- A directed view where the viewer can look around but the positioning is dictated by the script/director. This model very much interests me and sounds like a video game.
- And a traditional 2D post-rendered version, like a traditional cinematic or film, best suited for Vimeo/YouTube/DVD/TV.
A year ago this technology seemed like science fiction, but every year we come closer to completely capturing humans (form/texture), their motions, and environments with their textures and real-world lighting, and to viewing it all in real time in virtual reality.
The industry is evolving at an incredibly rapid pace, and so must the creatives working in it, especially the person responsible for the camera and the lighting: the director of photography.
Source: TechCrunch
As we reported earlier this year, Google is building a mobile device with 3D sensing capabilities — under the Project Tango moniker. But it’s not the only company looking to combine 3D sensing with mobility.
Spanish startup E-Capture R&D is building a tablet with 3D sensing capabilities that’s aiming to target the enterprise space — for example as a portable tool for surveyors, civil engineers, architects and the like — which is due to go on sale at the beginning of 2015.
The tablet, called EyesMap, will have two rear 13-megapixel cameras, along with a depth sensor and GPS, to enable it to measure coordinates, surfaces and volumes of objects at distances of up to 70 to 80 meters in real time.
So, for instance, it could be used to capture measurements of a bridge or a building from a distance, or to create a 3D model of it. It can also model objects as small as insects, so civil engineers could use it to 3D scan individual components, for instance.
Its makers claim it can build high-resolution models with HD realistic textures.
EyesMap uses photogrammetry to ensure accurate measurements and to build outdoor 3D models, but also has an RGBD sensor for indoor scanning.
The tablet will apparently be able to scan an “advanced photogrammetric picture” with up to 4 million dots in around 2 minutes. It will also be able to capture 3D objects in motion. It’s using a blend of computer vision techniques, photogrammetry, visual odometry, “precision sensor fine tuning” and other image measuring techniques, say its makers.
E-Capture was founded back in April 2012 by a group of experienced surveyors and Pedro Ortiz-Coder, a researcher in the laser scanning and photogrammetry field. The business has been founder funded thus far, but has also received a public grant of €800,000 to help with development.
In terms of where EyesMap fits into the existing enterprise device market, Ortiz-Coder says it’s competing with multiple standalone instruments in the survey field — such as 3D scanners, telemeters, photogrammetry software and so on — but is bundling multiple functions into a single portable device.
“To [survey small objects], a short range laser scanner is required but, a short-range LS cannot capture big or far away objects. That’s why we thought to create a definitive instrument, which permits the user to scan small objects, indoors, buildings, big objects and do professional works with a portable device,” he tells TechCrunch.
“Moreover, there wasn’t in the market any instrument which can measure objects in motion accurately more than 3-4 meters. EyesMap can measure people, animals, objects in motion in real time with a high range distance.”
The tablet will run Windows and, on the hardware front, will have Intel’s 4th generation i7 processor and 16 GB of RAM. Pricing for the EyesMap slate has not yet been announced.
Another 3D mobility project we previously covered, called LazeeEye, was aiming to bring 3D sensing smarts to any smartphone via an add-on device (using just RGBD sensing), although that project fell a little short of its funding goal on Kickstarter.
Also in the news recently: Mantis Vision raised $12.5 million in funding from Qualcomm Ventures, Samsung and others for its mobile 3D capture engine, which is designed to work on handheld devices.
There’s no denying mobile 3D as a space is heating up for device makers, although it remains to be seen how slick the end-user applications end up being — and whether they can capture the imagination of mainstream mobile users or, as with E-Capture’s positioning, carve out an initial user base within niche industries.
A twelve second body scan and shoppers pick up their 3D printed figurine next time they visit the supermarket
This week Asda and Artec Group are happy to announce their partnership as Asda becomes the first supermarket to bring a new cutting-edge 3D printing technology to shoppers in the UK with the installation of the Artec Shapify Booth, the world’s first high-speed 3D full-body scanner, in its Trafford Park store. The scanning booth will allow thousands of customers to create a 3D miniature replica of themselves.
The Artec scanning booth, equipped with wide-view, high-resolution 3D scanners and a rotation rig, takes just 12 seconds to scan a person. The Artec algorithms automatically fuse 700 captured surfaces into a detailed printable file. This digital model is then sent to the Asda 3D printing centre to be made into an 8″ mini-statue in full colour, which can be collected from the store just one week later. Asda’s unique 3D printing technology allows a huge volume of high-quality figurines to be processed at a time, while each print costs just £60.
Asda first introduced 3D scanning and 3D printing of customers’ figurines six months ago, using Artec handheld scanners. Driven by the immediate success of the venture and Asda’s vision to offer 3D technology to the public, Artec Group tailored its professional scanning equipment to spec and created the Shapify Booth, a high-speed full-body scanner that Asda is now making available to all.
As well as making 3D prints of all the family, customers can come along to be scanned in their sports kit, wedding outfits, graduation robes or fancy dress, taking something totally new and personalised back home with them after their weekly shop.
If the trial of the Shapify technology at Trafford Park is successful the new booths will be rolled out to more stores in the Autumn.
Phil Stout, Asda Innovation Manager – Asda is fast becoming not just a retailer but a technology company, and this innovation is another example of how we’re leading the way on in-store consumer-facing technology. We’ve been working with Artec technology for a while now and we’re delighted to be the first company in the world able to offer our customers this unique service.
Artyom Yukhin, Artec Group President and CEO – Over the last 5 years Artec has been providing 3D technologies to professionals in industries from space and automotive to medical and movie special effects, but we have always been looking for the chance to do something for the public. Asda’s backing and second to none customer understanding allowed us to create high speed scanners which are fun and easy for people to use.
About Asda Stores Ltd.
Founded in the 1960s in Yorkshire, Asda is one of Britain’s leading retailers. It has more than 180,000 dedicated Asda colleagues serving customers from 551 stores, including 32 Supercentres, 311 Superstores, 29 Asda Living stores, 179 Supermarkets, 25 depots and seven recycling centres across the UK. Its main office is in Leeds, Yorkshire and its George clothing division is in Lutterworth, Leicestershire. More than 18 million people shop at Asda stores every week and 98 per cent of UK homes are served by www.asda.com. Asda joined Walmart, the world’s number one retailer, in 1999.
About Artec Group
Artec Group is a manufacturer and developer of professional 3D hardware and software, headquartered in Luxembourg. Artec Group is a global market leader in 3D scanning solutions used by thousands of people all over the world.
Shapify, the technology for creating 3D printed figurines, was conceived and launched by Artec Group in 2013: www.shapify.me
For more information about Artec Group, visit www.artec-group.com.
Artec Group: firstname.lastname@example.org
A key addition to the complete 2014 portfolio of Suites is Autodesk® ReCap™ product, a family of powerful and easy-to-use software and services on the desktop and in the cloud to create intelligent 3D data from captured photos and laser scans in a streamlined workflow. Autodesk ReCap is the first industry solution to bring together laser scanning and photogrammetry into one streamlined process. In addition, no other solution on the market provides the visualization quality and scalability to handle extremely large data sets.
The Autodesk ReCap product line comprises two main offerings – Autodesk ReCap Studio and Autodesk ReCap Photo. Autodesk ReCap Studio makes it easy to clean, organize and visualize massive datasets captured from reality. Autodesk ReCap Photo helps users create high-resolution textured 3D models from photos using the power of cloud computing. Rather than beginning with a blank screen, Autodesk ReCap now enables any designer, architect or engineer to add, modify, validate and document their design process in context from existing environments.
For example, a civil engineer can bypass an existing bridge or expand the road underneath digitally and test feasibility. At construction phase, builders can run clash detection to understand if utilities will be in the way. Urban planners can get answers to specific design questions about large areas, such as how much building roof surface is covered by shadow or vegetation.
ReCap Studio is a data preparation environment that runs on the desktop. Users can import captured data directly into Autodesk design solutions, such as AutoCAD®, Autodesk® Revit®, Autodesk Inventor®, etc., to conduct QA and verification of data. The data can range from non-intelligent, black-and-white sparse point clouds to intelligent, visually appealing content. ReCap Studio will ship in Autodesk product and suite installers or be available for free on the Autodesk Exchange Apps store.
ReCap Photo is an Autodesk 360 service designed to create high resolution 3D data from photos to enable users to visualize and share 3D data. By leveraging the power of the cloud to process and store massive data files, users can upload images on Autodesk 360 and instantly create a 3D mesh model. ReCap Photo is available with Standard Suites entitlement and higher.
Key features of Autodesk ReCap include:
- Visualize and edit massive datasets: On the desktop, ReCap users can view and edit billions of points to prepare them for use in Autodesk portfolio products, enabling realistic in-context design work.
- Professional-Grade Photo to 3D Features: ReCap unlocks the power of ubiquitous cameras to capture high-quality 3D models, bringing reality capture within reach of anyone with a camera. ReCap supports objects of any size and range, full resolution for high-density meshes, survey points and multiple file exports.
- Photo and Laser: ReCap incorporates the best of both photo and laser data capture, so that customers can use photos to fill in holes or augment laser scan data. Users can both increase the accuracy of photo-based scenes with laser points and add photo-realistic detail to laser scans. They can create point clouds from photos, align scans and photos, and convert professional-grade photos to 3D models.
Autodesk continues to invest in developing sophisticated, easy-to-use reality capture technologies. The company has made several key acquisitions including Alice Labs and Allpoint Systems as well as applied its own research and development resources to accelerate the mainstream adoption of these technologies. As customers are looking for ways to easily and accurately capture the world around them, Autodesk ReCap streamlines Reality Capture workflows, making working with Reality Capture data easy, quick and cost effective.
Autodesk is the only company that has combined laser scanning data and photogrammetry into one product family to address and streamline the entire workflow. Whereas traditional point clouds appear as dots, Autodesk technology can now visualize truly massive point clouds as realistic surfaces. Unique to Autodesk is that users can interact with these huge data sets through CAD-like operations such as selection, tagging, moving, measuring, clash detection and object extraction, all with native points. Laser scanning and photogrammetry have historically been very expensive and data intensive. Autodesk’s goal is to democratize the process of reality capture so that anyone can capture the world around them to create high-quality 3D models.