Linked data and public broadcasting
Lately, I've been talking up linked data and the semantic web to some of my colleagues in US-based public broadcasting, which is heavily fragmented (by design) and operates on a number of levels (producers, distributors, and broadcasters at both local and national levels) with many competing interests, funding models, and missions. Linked data seems to offer a common framework to disseminate, describe, and aggregate information, beyond one-way APIs, custom solutions, and one-size-fits-all software. It seems elegant to pair these organizational models with a data model that already deals with issues of authority, distributed information, and relationships between objects. Further, the BBC has done or enabled some exciting linked-data-based projects that expose the programme catalog, mash up BBC content with user-generated content, and contextualize BBC content within the wider web in a way that makes it useful and discoverable outside of a walled garden.
Getting started seems easy enough, and at least a few of us on the inside are making some quiet progress. Glenn Clatworthy at PBS has done some very early RDF experiments with the PBS catalog, which could unlock a valuable resource with the potential to tie together program assets, extra production material, and all manner of external resources.
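To make that concrete, here's a minimal sketch of what a single catalog record might look like as RDF, written in Python with rdflib and borrowing the BBC's Programmes Ontology. The program URI, the choice of properties, and the NOVA record itself are illustrative assumptions on my part, not anything PBS has actually published.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# The BBC's Programmes Ontology, reused here to describe a PBS program.
PO = Namespace("http://purl.org/ontology/po/")

g = Graph()
g.bind("po", PO)
g.bind("dcterms", DCTERMS)

# Hypothetical URI minted for one record in the PBS catalog.
program = URIRef("http://example.org/pbs/programs/nova")

g.add((program, RDF.type, PO.Brand))
g.add((program, DCTERMS.title, Literal("NOVA")))
g.add((program, DCTERMS.description,
       Literal("Science documentary series produced by WGBH Boston.")))

# Link out to the wider web so the record is discoverable in context.
g.add((program, DCTERMS.relation, URIRef("http://www.pbs.org/wgbh/nova/")))

print(g.serialize(format="turtle"))
```

Reusing an existing vocabulary like the Programmes Ontology, rather than inventing a new schema, is part of the appeal: anyone already consuming BBC data could consume this record with no extra work.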
So, why should public broadcasting begin this process now?
- it frees and decentralizes information, making it available for new applications and better resource discovery -- especially within news and public affairs programming, where many different outlets gather different pieces and angles on the same story (see the query sketch after this list)
- legacy content is already being moved into new content management and asset management systems, so the additional overhead of publishing linked data is minimal.
- it can begin at any level of effort and still produce valuable results -- and it can begin as unilateral collaboration, without the need for extensive oversight, project planning, or finalized use-cases.
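As a rough illustration of the resource-discovery point above, the sketch below pulls program descriptions from two stations and runs a single SPARQL query across them. The station endpoints and the subject URI are hypothetical; the point is that once outlets publish RDF, cross-outlet queries need no custom API integration at all.

```python
from rdflib import Graph

# Load program descriptions published by several (hypothetical) stations.
g = Graph()
for url in [
    "http://example.org/wgbh/catalog.ttl",  # hypothetical endpoints
    "http://example.org/kqed/catalog.ttl",
]:
    g.parse(url, format="turtle")

# Find every story, across all outlets, about a given subject.
results = g.query("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?story ?title ?outlet WHERE {
        ?story dcterms:subject <http://example.org/subjects/gulf-oil-spill> ;
               dcterms:title ?title ;
               dcterms:publisher ?outlet .
    }
""")

for story, title, outlet in results:
    print(outlet, title, story)
```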