Creating a 3D experience with current tools and processes can seem daunting, time-consuming and costly. But the latest graphics hardware and software give creators more power and opportunity than ever, and a constantly evolving set of tools and approaches now allows users to seamlessly capture the world and translate it directly into 3D.
Let's take a quick look at some of these tools.
Motion Capture (or Mo-Cap)
One of the first challenges posed by 3D virtualization of real objects was capturing complex body motions for characters in CG films and video games. Motion capture methods have evolved considerably, from the full body suits covered in heavy light markers worn by the fighters in Sega's 1994 Virtua Fighter, to more recent setups that use inexpensive LED and infrared projectors and sensors to detect small marker tags, which can be applied to any surface or person.
Now, new software, together with tiny IR sensor cameras and consumer devices like the Kinect, can intelligently and accurately track body and even hand gestures in real time, making it easy to map, copy and reuse real-life movement.
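As a rough illustration of how accessible markerless body tracking has become, the sketch below pulls live skeletal landmarks from an ordinary webcam. It assumes the open-source MediaPipe and OpenCV Python packages purely as an example; the Kinect and similar devices ship their own SDKs.

```python
# Minimal sketch: markerless body tracking from a webcam.
# Assumes: pip install mediapipe opencv-python (an illustrative choice,
# not a tool named in this article).
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(model_complexity=1)
cap = cv2.VideoCapture(0)  # default camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB frames; OpenCV captures BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 body landmarks with normalized x, y and relative depth z;
        # landmark 0 is the nose.
        nose = results.pose_landmarks.landmark[0]
        print(f"nose: x={nose.x:.2f} y={nose.y:.2f} z={nose.z:.2f}")

cap.release()
pose.close()
```

A stream of landmark coordinates like this is exactly the kind of data that gets retargeted onto a rigged character to map, copy and reuse real-life movement.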

Facial Capture
The complex movements of an individual have always been hard to capture accurately, and real-life facial expression even harder. The technology, which once required expensive digital camera rigs, has been used in CG films for years, but has now evolved and been streamlined enough to work reliably in video games and even with real-time feedback.
A 3D avatar can now mimic a person's facial features and expressions in real time, as in digital puppetry.
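As a hint of how digital puppetry works in practice, here is a minimal sketch that turns live facial landmarks into a single "mouth open" value an avatar rig could consume. It assumes MediaPipe Face Mesh as an illustrative library; the landmark indices used (13/14 for the inner lips, 10/152 for forehead and chin) are specific to that library, not to any tool named above.

```python
# Minimal sketch: derive a puppet control value from live face landmarks.
# Assumes: pip install mediapipe opencv-python (illustrative choice only).
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # Gap between the inner lips (13, 14), normalized by face height
        # (forehead 10 to chin 152), gives a rough mouth-openness ratio
        # that could drive an avatar's jaw blendshape.
        mouth_open = abs(lm[13].y - lm[14].y) / abs(lm[10].y - lm[152].y)
        print(f"mouth_open ~ {mouth_open:.2f}")

cap.release()
face_mesh.close()
```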

3D Scanning
One of the simplest and fastest ways of capturing real-life environments or objects in 3D is with a 3D scanner. 3D scanners, once unwieldy and built around costly laser and optical components, are now as small and affordable as a compact digital camera.
One of the most widespread and accessible methods is LiDAR (Light Detection and Ranging), which measures distance with pulses of laser light and is used for a wide range of applications, from drones and gaming to geo-mapping and architectural pre-visualization.
And new software technologies now on the market allow any digital camera or smartphone to translate the images it captures into a 3D point cloud that conveys the shape of whatever the device is pointed at.
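To make the idea of a point cloud concrete, here is a minimal sketch that back-projects a depth image into 3D points with the pinhole camera model. The intrinsic values (fx, fy, cx, cy) are made-up examples; a real scanner or phone reports its own calibration.

```python
# Minimal sketch: depth image -> 3D point cloud via the pinhole camera model.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: HxW distances in meters; returns an Nx3 array of (X, Y, Z)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # back-project pixel columns
    y = (v - cy) * z / fy  # back-project pixel rows
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example: a synthetic 480x640 depth map, 1.5 m everywhere (placeholder values).
cloud = depth_to_point_cloud(np.full((480, 640), 1.5),
                             fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```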

Photogrammetry
The most advanced, precise and yet technically accessible method of all is photogrammetry. Cameras, imaging and software algorithms have evolved to the point that they can not only reconstruct a real object or place in 3D, but also accurately translate high-definition photos into detailed texture maps, enabling highly realistic 3D model reconstructions.
A combination of technologies such as HDRI, light fields, VSLAM and gyro sensors now allows any smartphone to capture objects, environments and people on the fly and convert them into highly lifelike 3D models.
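Under the hood, these pipelines repeatedly solve a multi-view geometry problem. The sketch below shows the core two-view step using OpenCV, assuming two overlapping photos and a known intrinsic matrix K; the file names and intrinsic values are placeholders, and real photogrammetry tools add many more views, bundle adjustment and dense reconstruction on top of this.

```python
# Minimal sketch: two-view reconstruction, the geometric core of photogrammetry.
# Assumes: pip install opencv-python numpy; photo_a.jpg / photo_b.jpg and K
# are placeholders for your own images and camera calibration.
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],   # example intrinsics for a ~1920x1080 photo
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match ORB features between the two overlapping views.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Estimate the essential matrix and recover the relative camera pose.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the matched points into a sparse 3D cloud (up to scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(points3d.shape)  # one 3D point per feature match
```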

The Future: Videogrammetry?
While photogrammetry is now accessible to most people, its processes and software are still evolving and being streamlined. Converting a fixed object or landscape is relatively easy with a single camera, but capturing moving models and actors is far more challenging.
One promising solution lies in a direct offshoot of photogrammetry: videogrammetry. Although still in its infancy, with sufficiently advanced software and a multi-camera configuration this technology could enable video, game and virtual-reality directors to capture ultra-realistic 3D scenes and sequences. That would usher in a truly new era at the crossroads of film, video games and every moving visual medium currently confined to a flat frame.

These are just some of the most popular and up-to-date methods of capturing reality and translating it into 3D, and many more inexpensive, well-optimized and easily accessible tools are already available to help with the transition to a full-featured, cross-platform 3D experience.