I've never been a fan of pixels.
I mean, they're square, whereas virtually nothing in the real world is.
But I like them even less now that just about everyone has an opinion on them, which is usually that the more of them you have, the better.
You can hardly blame them (people, that is, not pixels): digital cameras and screens keep getting visibly better, and at the same time they seem to have more and more pixels. So it's an easy causal connection to spot.
Except that it's not necessarily true.
And I'm not even referring here to the increasingly accepted truth that if you cram too many pixels into a restricted space, the resulting electronic noise will outweigh the resolution benefits. Nor am I talking about the fact that if you have gazillions of perfectly nice pixels but a horrible lens, you'll get an accurate capture of a terrible optical image.
What I mean is, we don't see in pixels.
The way we see is at least an order of magnitude more complicated than any electronic/digital imaging device we've ever built. Software emulations of our brain mechanisms have at least twelve simultaneous processes going on to identify and track an image. And none of them use pixels.
Ultimately, the closer we get to the actual point of perception (is there such a point?), the more what we are seeing is objects, not pixels.
We perceive a face, a zebra, a pair of glasses, and a banana (although, typically, not in the same scene). When we remember an image of a face on the television, we don't remember the pattern of pixels: we remember the features.
And we have, in our heads, a database of features. That's how we recognise things: from a kit of parts.
And this, I think, is the future of video.
Demonstrations of Ultra High Definition, with as many as sixteen megapixels per frame (as opposed to around 2.5 megapixels for "standard" high definition), are incredibly impressive. But unless they show more "objects", they are ultimately pointless.
What this means is that when we compress video, we shouldn't be looking at patterns of pixels, but identifying objects and storing them in a hierarchical set of characteristics.
Playing them back would be a case of redrawing the scene with vectors rather than pixels.
And, as we all know, vector graphics always look sharp. Whatever the resolution of the playback device.
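To make the idea a little more concrete, here's a minimal sketch, in Python, of what an object-based frame description might look like. Everything in it (the names, the structure, the "face" example) is hypothetical and purely illustrative, not any real codec: the point is simply that the frame is stored as objects with hierarchical characteristics, and the vector outlines are scaled to whatever resolution the playback device happens to have.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    """One recognised object in the frame, described by features, not pixels."""
    kind: str                                      # e.g. "face", "zebra", "teapot"
    outline: list                                  # vector path: (x, y) points in 0..1 scene space
    detail: dict = field(default_factory=dict)     # hierarchical characteristics
    children: list = field(default_factory=list)   # sub-objects, e.g. eyes on a face

def render(obj: SceneObject, width: int, height: int) -> list:
    """Scale the vector outline to the target resolution; no pixel grid is stored."""
    path = [(x * width, y * height) for x, y in obj.outline]
    return path + [pt for child in obj.children for pt in render(child, width, height)]

# The same description draws equally sharply on a phone or a cinema screen.
face = SceneObject(
    kind="face",
    outline=[(0.4, 0.3), (0.6, 0.3), (0.6, 0.6), (0.4, 0.6)],
    detail={"skin_tone": "light", "expression": "neutral"},
    children=[SceneObject(kind="eye", outline=[(0.45, 0.4), (0.47, 0.4)])],
)
print(render(face, 1920, 1080))
print(render(face, 7680, 4320))   # same data, higher resolution, no loss of sharpness
```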
Interestingly, what we now know as "artifacts" would come in two varieties with vector video:
As the bandwidth drops, faces would become less recognisable and more generic.
And if the data connection fails completely, instead of the "multicoloured chess board" artifacts you get with MPEG, you might see a teapot where a face should be.
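Again purely as a sketch of the idea, building on the hypothetical structure above: graceful degradation could mean transmitting only the upper levels of the hierarchy when bandwidth drops, so the description gets more generic rather than blockier.

```python
def degrade(obj: SceneObject, max_depth: int, depth: int = 0) -> SceneObject:
    """Keep only the top `max_depth` levels of the hierarchical description.
    With fewer levels, a specific face falls back to a generic one."""
    if depth >= max_depth:
        # Out of bandwidth: keep the generic category and outline, drop the specifics.
        return SceneObject(kind=obj.kind, outline=obj.outline)
    return SceneObject(
        kind=obj.kind,
        outline=obj.outline,
        detail=obj.detail,
        children=[degrade(c, max_depth, depth + 1) for c in obj.children],
    )

print(degrade(face, max_depth=0).detail)   # {} -- just "a face", with no distinguishing features
```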