There was a time not long ago when digital image editing was something left to the experts. Then tools like iPhoto and Instagram came along and brought some of those capabilities to the masses. There’s still room for experts today, but for most of us, quick fixes and prepackaged effects are enough. These tools effectively democratized image editing. Tao Chen hopes that 3-Sweep, an incredible piece of software he created with a team of researchers from Tel Aviv University, can do the same for 3-D objects.
The video demo is genuinely jaw-dropping stuff. It takes us through a series of photographs, showing how with just a few clicks of the mouse, 3-Sweep can turn the objects inside them into resizable, rotatable 3-D models. The first two clicks establish the object’s profile; the third traces its main axis, with the software snapping intelligently to the object along the way, like a more sophisticated version of Photoshop’s magic lasso. The background of the image is filled in with something like Photoshop’s content-aware fill, allowing the object to be turned and repositioned anywhere in its environment. Basic shapes like beer bottles and jars are easy to pull out of their static surroundings, but we see how more complex objects, like water taps and telescopes, can be similarly mapped with just a little more legwork.
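The geometric idea behind that three-click gesture is sweeping a cross-section along an axis to build a so-called generalized cylinder. As a rough illustration only (this is a toy sketch, not the team's actual code; the function name, the fixed circular cross-section, and the straight vertical axis are all simplifying assumptions), here's what sweeping a profile into a 3-D point set might look like:

```python
import math

def sweep_profile(radius_fn, axis_samples, n_around=16):
    """Sweep a circular cross-section along a straight vertical axis.

    radius_fn:    maps axis parameter t in [0, 1] to the profile radius there
                  (this stands in for the outline the user's clicks establish).
    axis_samples: number of steps along the axis.
    Returns a list of (x, y, z) vertices approximating the swept surface.
    """
    verts = []
    for i in range(axis_samples):
        t = i / (axis_samples - 1)       # position along the axis, 0..1
        r = radius_fn(t)                 # profile radius at this height
        for j in range(n_around):
            a = 2 * math.pi * j / n_around
            verts.append((r * math.cos(a), r * math.sin(a), t))
    return verts

# A bottle-like profile: wide body below, narrow neck above.
bottle = sweep_profile(lambda t: 1.0 if t < 0.7 else 0.35, axis_samples=20)
```

The real system is far more involved: it bends the axis to follow curved objects and fits the cross-section to the photo's edges, but the sweep is the core primitive.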
The video’s title, “Extracting Editable Objects from a Single Photo,” is technically correct, but it doesn’t really capture the wow-factor of seeing 3-Sweep in action. YouTube commenters do a better job: “Witchcraft,” “Magic,” and “Mind blown” are common refrains. Another: “RIP 3-D designers.”
But the experts don’t need to worry about losing their jobs just yet. 3-Sweep is very much a work in progress, and we see how irregular objects, like tubes of toothpaste, can outsmart the system. It’s not fast enough for prime time, either; the system can’t quite understand complex volumes in real time. And while the 3-D models the software produces are surprisingly good, they’re far from perfect: the difference between a plastic deck chair from Target and a handmade Adirondack.
Still, it’s exciting stuff. And as Chen notes, making the 3-D experts irrelevant was never the point. “Our biggest goal is still to help novice users to do this,” he explains. For the team, led by Daniel Cohen-Or, the biggest challenge wasn’t necessarily working out all the algorithmic magic behind the model-making, but rather giving lay users an easy way to harness it. “It took us some time to figure out how to make a very convenient user interface to generate this stuff,” Chen says.
The approach is a smart one. As the accompanying paper explains, 3-Sweep leverages “the strengths of both humans and computers.” Our perceptual abilities are tapped to recognize, position, and partition shapes, while the computer handles the heavier lifting: texturing, computation, and edge detection.
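To give a flavor of the computer’s half of that bargain, edge detection is commonly done with something like a Sobel gradient filter. This is a generic textbook sketch, not the paper’s specific method, and the function name and tiny test image are made up for illustration:

```python
def sobel_magnitude(img):
    """Approximate the gradient magnitude of a grayscale image
    (a list of rows of pixel values) with 3x3 Sobel kernels.
    Strong responses mark edges the sweep can snap to."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(gy_k[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: dark half, bright half.
img = [[0] * 4 + [255] * 4 for _ in range(6)]
edges = sobel_magnitude(img)  # strong response along the boundary, zero elsewhere
```

The point of the split is that a person supplies the three strokes while filters like this let the software lock the swept shape onto the photo’s actual contours.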
Where 3-D printers will make it easy for us to transform our digital creations into physical ones, Chen hopes tools like 3-Sweep will facilitate just the opposite exchange. He imagines future versions of games like The Sims or Second Life that will allow you to effortlessly populate the game world with objects from your own. It’s wild stuff, breaking down the barriers between our physical and digital lives. And even though many of the core team members have moved on to other projects, they’ve taken note of the overwhelming response to the video. Chen says they’re trying to figure out how to get a demo version out to the public as soon as possible.
Learn more here: http://3dprintingindustry.com/2013/09/16/3-sweep-sweeps-the-net-with-smart-2d-to-3d-conversion/