I'm researching 3D quite a bit at the moment. I'm fascinated by the idea that you can extrapolate new camera angles from multiple existing ones. If you can do this, then, ultimately, "camera angle" will just be another setting you can manipulate in post production.
I think technology is reaching the point where this should be possible at high quality, although real-time viewpoint interpolation may still be a few years off.
One side benefit of generating new viewpoints is that, in order to do it at all, you must first have built some sort of 3D model of the space in front of the cameras. That's a huge advantage for several reasons.
It takes ages to create 3D models of environments using conventional tools like Maya. Just creating a realistic street scene could take months, and cost tens or hundreds of thousands of dollars. With viewpoint extrapolation, you could do it in real time.
And then, once you have your 3D model in place, you can start to apply environmental phenomena to it, such as fog and depth of field, as well as integrate it effectively with totally synthesised elements and characters.
There are surprisingly few papers on this technique, mainly, I suspect, because research is going on behind closed doors.
We're starting to see a few products, though: Microsoft Photosynth goes a long way towards it, and could surely be adapted for video, given enough processing power. Photosynth creates a 3D point cloud, not a vector model, so you'd need the equivalent of a 3D "autotrace" process to turn it into a genuine 3D model that could form the basis of a proper 3D environment; but I don't see why that shouldn't be possible - especially because video carries so much more information than still images.
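To make that "autotrace" idea concrete, here's a minimal sketch of turning a point cloud into a surface mesh using Poisson reconstruction with the open-source Open3D library. The library choice, the input file name and the parameter values are my own assumptions for illustration; this isn't how Photosynth itself works internally.

```python
# Sketch: point cloud -> mesh ("3D autotrace"), assuming an Open3D-style workflow.
import open3d as o3d

# Load a point cloud exported from a Photosynth-style reconstruction
# (the file name is hypothetical).
pcd = o3d.io.read_point_cloud("street_scene_points.ply")

# Poisson reconstruction needs per-point normals; estimate them from
# each point's local neighbourhood.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Fit a triangle mesh to the points. Higher depth = finer detail
# but more memory; 9 is a reasonable starting value.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

# Save the mesh so it can be lit, fogged and composited in a 3D package.
o3d.io.write_triangle_mesh("street_scene_mesh.ply", mesh)
```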
And with video, you could probably generate a 3D space with just one camera, as long as it was moving; but only for static objects, of course.
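As a rough illustration of that single-moving-camera case, here's a sketch of two-view structure from motion with OpenCV: match features between two frames, estimate how the camera moved between them, and triangulate a sparse 3D point cloud. The frame file names, the camera intrinsics and the parameter values are all assumptions made for the sake of the example.

```python
# Sketch: recovering sparse 3D structure from two frames of one moving camera.
import cv2
import numpy as np

# Two frames taken some distance apart along the camera's path
# (file names are hypothetical).
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_030.png", cv2.IMREAD_GRAYSCALE)

# Approximate camera intrinsics: focal length and principal point are assumed.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Detect and match features between the two frames.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the camera motion between the frames. This is why the camera has to
# move, and why objects that move on their own violate the assumption.
E, mask = cv2.findEssentialMat(pts1, pts2, K,
                               method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask_pose = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate the matched points into a sparse point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # convert from homogeneous coordinates

print(f"Recovered {pts3d.shape[0]} 3D points from one moving camera")
```

A full pipeline would repeat this over many frames and bundle-adjust the result, but even the two-frame version shows how much 3D information a single moving camera gives you for the static parts of a scene.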