It's About Time: Link Streams as
Continuous Metadata
Kevin Page,
Don Cruickshank and David De Roure
Intelligence, Agents, Multimedia
Department of Electronics and Computer Science
We're talking in terms of distributed
multimedia applications, and this is a nice checklist
of what is required to support such applications:
How can we categorise metadata in relation to its
associated media content, which we refer to as
mediadata?
Should we stream the metadata? Can
we just pre-load it?
- Stored Mediadata: Little justification for streaming
except for sheer volume
Stored multimedia is a persistent
entity. It can be described by associated metadata
in the same way as any document, but with a
temporal element. Metadata might be used to find,
deliver and navigate the material, but is likely
to be small in comparison to the original material.
- Live Mediadata: Metadata is not available for
pre-fetching
It will become available as the
multimedia stream is generated. Metadata might
be about camera angles, or from real-time
processing of the stream. A delay might acceptably be
introduced to allow for a pipeline of processing.
Similarly, if a first user starts to view a long
presentation, their annotations might not be
available to a second user who starts to view the
same presentation before the first user has
finished.
In some cases it might be necessary to stream
pre-existing metadata because a receive-only device
might join a live broadcast at an arbitrary point
in time.
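This late-join case can be sketched in Python. The item structure and function names below are illustrative assumptions, not from the paper: the sender re-streams any pre-existing metadata still valid at the receiver's join time, and skips items whose validity has already expired.

```python
# Sketch: streaming pre-existing metadata to a late-joining receiver.
# MetadataItem and stream_from are invented names for illustration.
from dataclasses import dataclass

@dataclass
class MetadataItem:
    start: float   # seconds from broadcast start when the item becomes valid
    end: float     # seconds from broadcast start when validity expires
    body: str

def stream_from(items, join_time):
    """Yield the items a receiver joining at join_time still needs:
    those currently valid, then future ones, in start order."""
    for item in sorted(items, key=lambda i: i.start):
        if item.end > join_time:   # expired items need not be re-sent
            yield item

items = [
    MetadataItem(0, 10, "title card"),
    MetadataItem(5, 60, "speaker name"),
    MetadataItem(20, 30, "slide 2 link"),
]
print([i.body for i in stream_from(items, join_time=12)])
# → ['speaker name', 'slide 2 link']
```

The still-valid "speaker name" item is re-sent to the late joiner, while the expired "title card" is not.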
- Live Mediadata with multiple users: content metadata
is created on-the-fly with little time for
pre-processing
Video-conferencing imposes tight time
constraints on this style of synchronous interaction.
Although session and party metadata might be
available in advance, the large quantities of
possible content metadata will not be.
The
presentation point is the node where a user
views a combination of media and metadata flows.
There is no reason why a presentation
point should only be the convergence of a single mediadata
and metadata flow; it should pull together and synchronise
as many metadata flows as the user requests.
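The merging behaviour of a presentation point can be sketched as follows; the flow names and payloads are invented for illustration. Each metadata flow is assumed to arrive ordered by timestamp, and the presentation point interleaves them against the single mediadata clock.

```python
# Sketch: a presentation point pulling together several metadata flows
# and releasing items in mediadata time order. Flow contents are invented.
import heapq

annotations = [(2.0, "annotations", "note from user A"),
               (9.0, "annotations", "note from user B")]
camera = [(0.0, "camera", "wide shot"),
          (6.5, "camera", "close-up")]

# Each flow is already ordered by timestamp; merging preserves that order
# across flows, synchronising them to the one mediadata timeline.
for timestamp, flow_id, payload in heapq.merge(annotations, camera):
    print(f"{timestamp:5.1f}s [{flow_id}] {payload}")
```

Adding a third flow only requires passing it as another argument to the merge; nothing about the presentation point limits it to one metadata source.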
The presentation point places requirements on the
metadata flow, and on the information encoded in it:
- an identifying code
When multiple flows are combined
at a presentation point an identifier is needed to
deal with packets from a particular flow in a
consistent manner; the identifier should also allow
derivation of the mediadata flow with which it must
be synchronised.
- validity timestamps
To bound when the metadata is true,
so that it can be synchronised to the mediadata.
- display hints
Should the metadata be displayed for
a period outside its validity?
- standardised content type identifier
The content could be of any type
(RDF etc.), as long as the relevant nodes
understand it.
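The four requirements above suggest a packet structure along these lines; the field names and types are illustrative assumptions rather than a wire format from the paper.

```python
# Sketch of a metadata packet carrying the four fields listed above.
# Field names, types, and the example values are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetadataPacket:
    flow_id: str                    # identifies the flow, and allows derivation
                                    # of the mediadata flow to synchronise with
    valid_from: float               # validity timestamps bounding when the
    valid_until: float              # metadata is true, in mediadata time
    display_after: Optional[float]  # display hint: seconds the item may remain
                                    # shown past its validity (None = remove)
    content_type: str               # standardised content type identifier
    payload: bytes                  # opaque to nodes that lack the type

pkt = MetadataPacket("annotations/1", 5.0, 60.0, 2.0,
                     "application/rdf+xml", b"...")
```

A node that does not understand the content type can still route and synchronise the packet using the other fields, which is what lets the payload be of any type.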