Saturday, 26 September 2009

Vintage Column: my predictions for the London Olympics from 4 years ago

The following article appeared in Newsreel magazine, which was the sister publication to the successful Showreel mag. I was asked to write about the sort of technology we might see at the London Olympics in 2012. It was written about four and a half years ago, which is a long time in the technology business.

I'm not sure about my guess that there would be 1Gb/s broadband by 2012 - but the rest of it seems solid. The thing is, there's still nearly three years to go. The rate of change is so great that I would actually be less confident about making predictions now than I was four years ago.

Bluesky predictions for the 2012 Olympics - from May 2004

Freelance technical journalists develop some extraordinary skills, probably the most important of which is staring out of the window for hours on end instead of writing a chronically overdue article. Inevitably, as the deadline approaches, miraculous things happen. The ironing gets done, and the house looks spotless. Any task – the more tedious or obnoxious the better – is preferable to the prospect of sitting in front of a blank screen and bashing out a couple of thousand words on the designated subject.

This time, though, staring out of the window at the (infrequently) blue sky was a necessary antecedent to the production of this article, because what you’re about to read is almost entirely speculative, and looks much further ahead into the future than anyone should reasonably feel comfortable with. Seven years, to be specific.

I’m referring, of course, to the London Olympics (or, to be completely politically correct, the “British” Olympics).

From a UK perspective, this is going to be a big event. I’ve heard that it’s going to involve the biggest civil engineering project ever. When you consider that it may even make it feasible to get from the Elephant to Lewisham at more than six miles an hour, you begin to realise the magnitude of this endeavour. And it’s probably reasonable to expect that it’s going to be the biggest broadcasting event ever, too.

So, now that we know that the Olympics is coming to London, it seems like a good time to speculate about the shape and form that such a broadcast event might take. I don’t have any insider information on this (I’m not sure if anyone does at the moment) but we can at least have a stab at it. And probably the best place to start is to look at the trends already hinted at by the phenomenon called “convergence”.

If you’ve blinked at any time recently you’ve probably missed it. Convergence has happened already. IT and telecommunications are now integral to acquisition, production and broadcasting, and it’s a very long time since anyone watched a television programme that hadn’t been in and out of the digital domain at some point in its lifecycle. Computers and digital communication are an inescapable part of most people’s daily routine. I’m probably not completely typical, but, today, for example, I’ve watched the live output from BBC News 24 on my laptop via a 3G datacard, and talked to colleagues in Dubai and America using Skype, the popular (and free) Voice Over Internet Protocol phone service.

While convergence trends have been discernible for several years, it’s only recently that the benefits have been available to the masses, and the rate of progress is accelerating.

There are several reasons for this. First, virtually every aspect of computing technology is getting better, faster. Processors are effectively a thousand times quicker than they were around twelve years ago. Both RAM and hard disk storage are at least three orders of magnitude cheaper as well. Harder to quantify, but just as important, is that our knowledge of how to work with digital media has grown, as well as the sophistication of software tools and our ability to compress and process video and audio.

Even if all of these factors were (merely) developing linearly, the compound effect would be an acceleration of capability. In reality, the rate of change is even faster. So much so, that it’s getting harder and harder to look even a couple of years into the future. Ironically, having to stick with existing technological standards is helping us to look ahead with a little more clarity because they have a damping influence on our ability to innovate (DAB radio in the UK is a good example of this. It uses a technology devised in the 80s that is agonisingly out of date in comparison with the compression and digital transmission techniques available today).

There isn’t the space here to invent the entire future of broadcasting, although I’m sure we could all speculate endlessly. Instead, I’m going to do a rapid “flypast” through the areas where I think we’ll see the biggest change in the way sporting events will be broadcast.

Resolution

This is an easy one. The London Olympics will be broadcast in high definition. So will the Beijing games, as well as next year’s World Cup. What will be different, though, with the London event is that virtually everyone will be watching in high definition. It’s probably going to take seven years from now for the majority of televisions to be updated.

But, although it’s virtually certain that the games will have “better than PAL” coverage, it’s definitely not clear what the characteristics of the acquisition format will be.

For a start, there’s no reason to expect that video cameras will be restricted to any of the current HD raster sizes (1280 by 720 and 1920 by 1080). What’s more likely is that acquisition and production resolutions will be completely decoupled from delivery resolutions. It’s easy to derive virtually any raster size from a higher resolution (especially if progressive formats are used throughout), so there’s every possibility that we’ll see camcorders with 4K resolution or above. This might seem unrealistic, but you can already buy digital still cameras (such as the Canon EOS-1Ds Mark II) with a sixteen-megapixel resolution: arguably better than film. Of course you then have to deal with the torrents of data: a single frame at this size is 50 megabytes!
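That 50-megabyte figure is easy to sanity-check with some back-of-envelope arithmetic, assuming three 8-bit channels (uncompressed RGB) per pixel:

```python
# Back-of-envelope check of the uncompressed size of one ~16.7-megapixel frame,
# assuming 8 bits per channel, RGB, and no compression.
pixels = 16_700_000
bytes_per_pixel = 3
frame_mb = pixels * bytes_per_pixel / 1_000_000
print(f"{frame_mb:.0f} MB per frame")  # ≈ 50 MB
```

At 25 frames per second that would be well over a gigabyte of data every second, which is why compression matters so much.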

What would be the point of shooting in Ultra HD resolutions? At the risk of stating the obvious, the pictures are better. What’s not so obvious is that the pictures are better whatever resolution you’re viewing them at, because they started out with so much information. The better the original images, the better job compression algorithms can do with the material. So, although this is probably more an academic point than anything else, shooting in ultra high definition could even improve the picture on your mobile phone.

For sport, there are more potential benefits when the acquisition format exceeds the resolution of the delivery format.

If the pictures from a video camera are, say, four times the resolution of the pictures that are broadcast, then it’s possible to zoom in by a factor of four without any apparent loss of quality. This gives enormous flexibility to “post produce” the event. It’s effectively like having more cameras and more operators.
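In code terms, this kind of "post-produced" zoom is nothing more than a crop: if the capture raster is big enough, you cut a delivery-sized window out of it and never have to upscale. A minimal sketch (the raster sizes and function name here are just illustrative assumptions):

```python
import numpy as np

# Hypothetical "virtual zoom": crop a delivery-sized window out of a
# higher-resolution frame, so no upscaling (and no softness) ever occurs.
def virtual_zoom(frame, top, left, out_h, out_w):
    """Return an out_h x out_w crop, valid as long as the window fits."""
    return frame[top:top + out_h, left:left + out_w]

acquisition = np.zeros((4320, 7680, 3), dtype=np.uint8)   # an 8K-ish capture
delivery = virtual_zoom(acquisition, 1000, 2000, 1080, 1920)
print(delivery.shape)  # (1080, 1920, 3)
```

Panning the "camera" is then just a matter of moving the crop window between frames.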

Once you start down this path, it takes you a very long way indeed.

In still photography, you can use software applications to “stitch” together several overlapping images. It’s a process that works surprisingly well, and can yield impressive panoramic shots or whole patchwork quilts of landscapes that have a quite astonishing resolution. (Google Earth is perhaps the ultimate embodiment of this technique).

There’s really no difference between video and still photography, except that video effectively takes a lot of photographs in a given time. So there’s no reason why the output from several suitably arranged cameras shouldn’t be stitched together, in real time, to form a giant, contiguous, moving vista, which a sports editor could then pan and zoom around, effectively framing his or her own shots.
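A deliberately naive version of the stitching idea can be sketched in a few lines. This assumes two perfectly pre-aligned cameras with a known overlap; real stitching software estimates the alignment from image features, so this is an illustration of the principle, not a production technique:

```python
import numpy as np

# Naive stitch of two pre-aligned greyscale frames that overlap by a known
# number of columns: average the overlap, then butt the remainders together.
def stitch_pair(left, right, overlap):
    blended = (left[:, -overlap:].astype(np.uint16) +
               right[:, :overlap].astype(np.uint16)) // 2
    return np.hstack([left[:, :-overlap],
                      blended.astype(np.uint8),
                      right[:, overlap:]])

a = np.full((720, 1280), 100, dtype=np.uint8)
b = np.full((720, 1280), 200, dtype=np.uint8)
pano = stitch_pair(a, b, overlap=128)
print(pano.shape)  # (720, 2432)
```

Run per frame across an array of cameras, the same idea yields the moving vista described above.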

As the software improves, even the output from cameras that haven’t been deliberately aligned can be incorporated.

(This, incidentally, has great potential for security applications, because it would give the police, for example, the ability to amalgamate the output from all the cameras at a venue, and then perform searches based on timecode and the possible location within a stadium of the suspect).

Perhaps the ultimate manifestation of this technique is the ability to invoke “virtual” camera positions.

Here’s how it would work.

It’s already possible to create 3D landscapes using the output from two cameras. Most of us have seen 3D flypasts of the surface of Mars, generated from two cameras shooting the same scene from a slightly different viewpoint.

A sports stadium is a much more controlled environment, and it may just be possible to arrange a complete array of cameras, suitably spaced, so that by analysing and concatenating the combined output it would be possible to generate a speculative “virtual” viewpoint.

This is hard enough to do with still images, never mind with moving video, in real-time. But the last ten years have shown us that if you can think about something, without coming across contradictions or sheer impossibilities, then it’s probably going to happen, given merely enough engineering skill and processing power.

Analysis

Do you remember the Channel 4 Snickometer? In case you don’t, it was a gadget invoked during live cricket matches to give viewers a visual display of the sounds from an LBW incident, synchronised to slow motion video playback of the event. This quasi-scientific approach has now been joined by Hawk-Eye, which uses multiple camera positions to predict the trajectory of balls.

There’s almost no limit to the amount of analysis and resultant viewer aids. It’s all a question of computing power, and, of course, there being a sensible balance between gimmickry and genuine usefulness.

Ultimately, we’re going to be able to create real-time, detailed dynamic models of sporting action of virtually any complexity. We’ll be able to apply highly evolved heuristics (common-sense or experience-derived rules), based on years of sporting analysis, to live action. We’ll be able to create entire teams of virtual footballers and athletes. A lot of the technology is here today.

You only have to look at next-generation games consoles to see how we might get there sooner than you’d expect. The clue is that these devices have not only superb graphics capabilities, but astonishing real-time “physics” engines. So, what might look like a graphical facsimile of a boxer, will actually behave like a boxer, right down to the way the virtual bones, muscles and skin react on receipt of the winning punch.

So I think we can expect to see action replays that superimpose real-time anatomical analyses. And of course masses of data to accompany the pictures.

Metadata

On the face of it, metadata is not a sexy subject. But just wait, because metadata and video are ultimately going to become the same thing.

If we are in a position to create virtual models of reality, using advanced motion-capture techniques that don’t rely on athletes wearing ping pong balls on their elbows, then, as the technology improves, the models will ultimately be indistinguishable from the real thing. At that point, we leave pixels behind and transmit video using metadata alone. (If you’ve ever used Adobe Illustrator or Corel Draw, you’ll understand what I mean by this). We’ll be able to broadcast great-looking video to any device, which will be displayed at the maximum resolution of the screen. There’ll be perfect slow-motion as well, because this technique is temporally resolution-independent as well as spatially. This may be something that doesn’t happen until way beyond the London Olympics, but, even so, metadata’s going to have a huge role in future sporting action.

The more metadata you can generate automatically, the more value your video has to anyone searching for it. An automatically generated metadata tagging schema, based on a standard taxonomy (a taxonomy is a hierarchy of categories, like Animals/Mammals/Primates/Chimpanzees etc) will allow productive searches by anyone who’s looking for highlights in their favourite sport. Comprehensive automatically generated metadata will help create automatically formatted, video-rich websites, which can be completely tailored to an individual’s preferences. You can think of this in an abstract sense as giving the viewer control over the playout server’s schedules!
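As a toy illustration of how a taxonomy-based schema makes searches productive (the clip names and hierarchy here are invented for the example): each clip carries a path through the taxonomy, and a search simply matches any prefix of that path.

```python
# Hypothetical clips tagged with paths through a standard taxonomy.
clips = {
    "clip_001": "Sport/Athletics/Track/100m",
    "clip_002": "Sport/Athletics/Field/Javelin",
    "clip_003": "Sport/Swimming/Freestyle/400m",
}

def search(prefix):
    """Return all clips whose taxonomy path starts with the given prefix."""
    return sorted(cid for cid, path in clips.items() if path.startswith(prefix))

print(search("Sport/Athletics"))  # ['clip_001', 'clip_002']
```

A query at any level of the hierarchy ("Sport", "Sport/Athletics", or a specific event) narrows the results accordingly.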

Bandwidth

By 2012, bandwidth will be as abundant as mains electricity. We’ll no more worry about bandwidth constraints than we would about plugging a mobile phone charger into a mains socket. And that includes mobile devices. The first 24 megabit/second DSL service has just been announced in London. In seven years time, we could have gigabit into the home, and, at last, mobile phones will show decent video.

I think it’s likely that download-store-play will replace standard, linear broadcast viewing habits. With rich metadata, video podcasting and semantically enabled TiVos, changing channels will be a meaningless activity, to be replaced by merely expressing the nature and strength of your viewing preferences.

Of course, there will always be live broadcasts. And, although there will always be the option to be “spoon fed” with the director’s take on the event, viewers will be able to select camera angles, action replays and even move a “virtual” camera to any position imaginable.

DRM

Every technology has a “damping factor” that slows down its adoption. And, as we’ve seen above with legacy broadcasting standards, such factors can act as both an enabling and a disabling influence. With major sporting events, this factor may well be Digital Rights Management.

No one imagines that organising the greatest broadcasting event ever seen will be entirely without cost. The political and economic wranglings that surround sport are legendary and are likely to be exceeded by orders of magnitude where the London Olympics are concerned.

The combination of advanced interactive acquisition technology with the internet is a DRM nightmare. Have you ever tried to listen to Radio 5’s coverage of football matches on the internet when you’re abroad? More often than not, they’re blanked out; presumably because of rights issues.

The only way to walk this particular tightrope is to make use of detailed metadata to derive permissions for who can watch what from where. It’s a rather dry subject, superficially devoid of interest, but could scupper whole swathes of new-generation media coverage.

So, is all of the above prescient, visionary reporting, or is it third rate science fiction?

I’ll let you know in seven years.


Friday, 25 September 2009

Convergence: Time to see the wood for the trees

In between broadcasting and digital signage, there's a wide area that's completely unpopulated by business ideas and revenues, and yet, the technology is there to create a media platform that's potentially massive. I'm not going to go into too much detail here because that's likely to give away too much information that I normally charge people for; but think about it for a minute: broadcasting and digital signage are converging.

Up to now, I've always said that there are (at least) two models for digital signage: Adaptive Ultra Narrowcasting and Digital Poster Replacement. I still think this is true, but I think that the former will subsume the latter. In other words, I think that the "TV Channel" idea is even applicable, in an extreme way, to digital poster replacement, as well as everything else.

Now, what I don't mean here is that conventional TV content is always suitable for digital signage. I don't even think that people regard digital signage playlists as the same as a TV channel. It's more to do with scale and scalability.

When you have a small digital signage network, it's a bit like a coconut. Just like a planet, it has a gravitational pull, but it's so small, you wouldn't notice it - to the extent that you might as well ignore it.

But when an object is big enough to have its own gravitational field, then you really do have to take notice of it.

And that's what I think the model for digital signage will be, ultimately. I think there will be huge networks, with aggregated and syndicated content, but (with technology help from the broadcast industry) there will be localised insertions of ads and data that will give the best of both worlds: relevant content, but delivered on an industrial scale, to millions of players.

At that point, the "gravitational pull" on advertisers and other contributors becomes too big to ignore.

Wednesday, 12 August 2009

Apple Cocktail is nine years late

I don't like to boast about it, but if you're going to claim to have thought of something first, it's always best if you've either got a patent on it (if you want to make money from it) or if you publish it, because at least that means you can prove it was your idea.

Anyway, nine years ago, in 2000, I wrote an article in the UK magazine, Sound on Sound, suggesting, essentially, what Apple is now calling "Cocktail". It's the idea that you can add value to digital downloads by packaging the audio track with additional videos, biogs, images etc: stuff that you wouldn't get from Limewire.

But my idea was a bit different, and possibly better. It was that you let people download things freely and legally and let friends share their files without restriction. But at some point, if you own a digital file, you have to buy a licence for it.

Now, on the face of it, that's a pretty lame idea. It would never work. No-one - including me - would fork out real money for a certificate just to say that something I have already is legal.

But what if the "licence" was in the form of something that had a value in itself?

What if it came in the form of a linear PCM encoded (ie not compressed) version of the track? And what if it came with sleevenotes, password-protected video downloads and entitlement to other paraphernalia like T-shirts?

The beauty of this idea is that you don't have to change anything. All you have to do is call the CDs you buy in shops "Licences".

Oh, and you have to bring the price down, as well.

Here's the article in Sound on Sound. Look at the last paragraph.

Sunday, 26 July 2009

Media Accountability

This is just a short note about something very important. If you get it right, it's completely hidden. Get it wrong and it will make your whole digital signage project fail. It's media management.

On the face of it, organising media is about as interesting as tidying up your bedroom.

But, done well, it opens up new possibilities in digital signage, and it safeguards your projects from bad media, lost media and all sorts of confusion that can completely scupper a digital signage installation.

If you want to see it done properly, look at digital signage products that are derived from broadcast products. And be a little bit wary of products that are thrown together using media players and file systems that come free with operating systems.

Sorry if this all sounds a bit abstract. I'll write about this in more detail later. It's important stuff.

Monday, 13 July 2009

Pixels. The more the better. Really.

I hate to be seen as overly critical, but, in the same way you'd want to get a doctor who didn't know the difference between a suppository and a syringe struck off the medical register, you have to wonder about anyone who doesn't understand even the basics about pixels.

I read a review this morning about a new netbook computer. Unlike the hundreds of identikit devices that have exploded onto the scene like a rash, this one had a feature that made it stand out: a 1366 x 768 screen, instead of the usual, meagre, 1024 x 600; this latter resolution being just wide enough for the majority of web pages to be displayed without sideways scrolling, but still annoyingly short of a full screen for most people.

The reviewer's take on this improved resolution was that it would make the text harder to read.

That's a bit like winning the lottery and complaining you've got to spend some of the money on a safe.

And anyway, does it make the text harder to read? Possibly, if you've got marginal eyesight, because it will be slightly smaller. But you can use that resolution to either fit more text on the screen, or, if your astigmatism completely defeats you below a certain distance and text size, you can make the text bigger.

And when you do, it will look better as well, because you've got more pixels to accurately describe each character in a given font.
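The arithmetic behind this is straightforward. Assuming, purely for illustration, an 80-character line of text scaled to fill the panel's width, each character simply gets more horizontal pixels at the higher resolution:

```python
# Rough, illustrative arithmetic: horizontal pixels available per character
# when the same 80-character line of text fills screens of different widths.
chars_per_line = 80
for width in (1024, 1366):
    print(f"{width} px wide -> {width // chars_per_line} px per character")
```

More pixels per glyph means smoother curves and crisper strokes at the same physical size.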

This type of misunderstanding is endemic in the digital signage industry. It's the biggest reason why some displays lose impact: because they're fuzzy. Even with digital interfaces almost universally available now, if you don't prepare your graphics at the native resolution of the screen, and - even if you do - if your screen isn't set to its native resolution, then the results will be dismal.

On the other hand, when you get it right, graphics look crisp and vibrant, even on cheaper displays.

Understanding resolutions in digital signage should be as fundamental as understanding hygiene in surgery.

Friday, 10 July 2009

Amscreen. Time will tell.

I'm not saying anything.

But it's good to have Amscreen in the industry. If people think they can do better, then let them do better. There's nothing wrong with that.

There are no real quality standards or expectations in this industry and if it takes Amscreen to make people decide what's good and bad, then that's probably a good thing.

(Video courtesy of DailyDOOH).

Thursday, 9 July 2009

Details of my new network company

Here's the scoop on my new network company. It's a sophisticated MPLS core network designed to give the kind of service you need if you're setting up or running a digital signage network. It's run by me, and our other supremely talented people, with the backing of some much larger organisations.

Content Networks Limited
Security and simplicity for
digital signage


Content Networks is a completely new kind of network service specifically for digital signage, designed to take the complexity and uncertainty out of setting up content-oriented data connections.

At the heart of Content Networks' technology is the ability to set up private networks over a wide area. Imagine having a hundred screen locations throughout the UK, all behaving as if they were on a Local Area Network. That's how simple it is. It's like having your own private internet.

With Content Networks, you don't have to be a small number in the big, impersonal world of telecommunications. Instead, you get a personal and efficient service.

We handle the installation of the network and we set up your private wide area network to work the way you want it to. We have short install times - as little as a few days for 3G connections.


Scared of viruses and hackers? There's no need to be: the network is not internet-facing, with advanced firewalls and security policies. If you need the internet, it's via a single, secure, centrally managed gateway.


How does it work?

1) The Simple View:
We have built an advanced Core Network (the physical infrastructure consisting of switches and routers) that is able to deliver secure, private connectivity to pretty well any location. The connectivity can be provided via ADSL, Leased Lines, Ethernet, 3G and even Radio if required. Content Networks will design and implement the most appropriate network for your needs and ensure it is scalable for future growth.

2) The Technical View: We have a Cisco based "MPLS Core Network". The core network is the central part of a communications network and it is connected to an Access Network (the copper, fibre or mobile connections that cover the UK), which we use to set up virtual private networks (VPNs) to customer premises using the most effective connectivity for the requirement. We have interconnects with Tier 1 carriers and our own APN for mobile connectivity which gives us the reach required to offer a fully scalable solution.

To a customer, it's just like having a secure Local Area Network, with permanent, private IP addresses.

How secure is it?

What we are delivering here is an MPLS based solution which means that traffic management is more efficient, providing greater reliability and increased performance. Our service is not a vanilla solution like a traditional ADSL connection. Whilst traditional ADSL connectivity can deliver content, there are security risks associated with 'local internet breakout' based services. Our service is provided as standard with no internet access (as digital signage does not need to 'surf the web') and is therefore not visible to the outside world. If you need internet access for RSS feeds etc, it's through a centralised breakout in our core network which allows internet access, controlled by a central security policy.

How do I set it up?

You tell us how you want it to work and we'll set it up for you. We will do a full requirements capture, draw up the network, discuss with you the benefits of the way we have designed it and once signed off, we will deliver it.

Is it reliable?

Our core network technology is fully redundant. For extra resilience, you can specify wired/wireless failover. What this means is that if your wired connection fails for any reason, the system will seamlessly switch to a wireless (3G) connection.

What does it cost?

It's hard to compare our solution to standard offerings in the market. We are not delivering a standard 'vanilla' ADSL solution that can be bought from any ISP. We provide an MPLS based service, capable of being delivered over multiple types of access technology - not just ADSL. Our service is non-internet facing. It is secure and private and we offer the added benefit of 3G failover technology should the service be mission critical. For our 3G solutions, we have a next day, pre-9am fully-configured replacement router service. This provides continuity of service as it just slots back in and works.

Our MPLS based ADSL connection may cost slightly more than a standard ADSL connection but in return we can set up the VPN tunnels far more cost effectively and quickly than would be possible with a standard connection. Management visibility and control of those connections is also greatly enhanced. We supply Cisco routers at the customer premises rather than cheaper consumer oriented routers, because Cisco routers are business grade, fully supportable and - above all - reliable.

For our 3G solutions we use enterprise class 3G routers with built in SIM and wireless interfaces, rather than cheaper routers with USB dongle attachments. This ensures we can offer a quality of service that far outstrips our nearest competition. These routers cost more to buy, but support over the lifetime of the project costs considerably less. The routers can be remotely configured and set up to monitor and alert users should there be an issue with a particular location.

The bottom line is that the Total Cost of Ownership with Content Networks can be significantly lower than using vanilla data connections and routers.

Can you really do digital signage over (wireless) 3G?

Absolutely, and we are doing so at present. Our understanding of 3G/HSDPA technology is second to none and we have been working with this medium for a number of years. We have our own private APN (interconnect into the mobile network) and can provide Fixed IP solutions that enable the delivery of content, management and control to individual sites. We can also use the 3G technology to provide automatic failover from an ADSL connection for business continuity.

3G technology has developed over the last few years, with ever increasing speeds and coverage and huge advancements in the router capabilities. We would be happy to provide more details around this.


How are we different to the BTs and Virgin Medias of this world?

A standard BT or Virgin Media product will be broadband ADSL, which may look like it has a great price, with wireless routers included, free wi-fi minutes (which you don't need) and so on, but are these the best options for digital signage?

They are good value if you just want to set it up for Internet access and email, but these are not the priorities for digital signage. Digital signage networks need reliability, the ability to manage the connection, traffic prioritisation, remote access and management, potentially centralised content distribution, virtual private networks that can be centrally configured, and more advanced router technology that can deal with advanced protocols.


In effect, digital signage needs a network that has been specifically designed to deal with digital signage content and delivery. Whilst there is nothing wrong with the BT ADSL products, they have been packaged to deliver web and email services to consumers, not private, secure content services that are part of a revenue generating business.

Content Networks can deliver MPLS ADSL connectivity for not that much more than standard BT products, but as we have highlighted before, the Total Cost of Ownership of using a network specifically designed for this type of data is considerably less than that of using a generic "free for all" service.

We also offer flexibility. Our network services can be installed in short timescales and both our DSL and 3G solutions can be re-located at short notice, enabling digital signage companies to vary their programmes, sites and campaigns.

We don't 'traffic shape' our network. Most ISPs offer packages with what they describe as "Unlimited Downloads", but what this often means is unlimited... up to a point. If you go over what is deemed to be an acceptable limit, they will throttle your connection back to Kbps speeds, rather than Mbps speeds. This can have a dramatic impact on the ability to deliver content. We control our own bandwidth and do not traffic shape. We work with you to identify how much bandwidth you will require and we work with that. We won't throttle your bandwidth, and so won't cause problems just when you can least afford them.

Companies should see Content Networks as their network partner rather than a network supplier as we will work with you to design the best solution for the job. Our technical engineers have designed bespoke systems for Police Authorities, Formula 1 Racing teams and Cinema chains; systems that have had to deliver critical network services.

www.contentnetworks.co.uk