DANAE Project

DANAE = Dynamic and distributed Adaptation of scalable multimedia coNtent in a context Aware Environment
IST-FP6-1-507113 - STREP Project

Research Overview: 

DANAE proposes to address the dynamic and distributed adaptation of scalable multimedia content in a context-aware environment. Its objectives are to specify, develop, integrate and validate in a testbed a complete framework able to provide end-to-end quality of (multimedia) service at minimal cost to the end user. A dedicated application will be developed and implemented on a demonstrator to illustrate the new service concepts pioneered by the project.

Our contributions: the research project started in January 2004, and first major results are expected by the end of 2004 (see the Abstract below). The project runs until July 2006.

Team Member:
Michael Zufferey (PhD student).

Official DANAE Homepage

Abstract:
The increasing diversity of devices and heterogeneity of networks nowadays pose a challenge for the delivery and consumption of multimedia content. In this context, Part 7 of the MPEG-21 standard, formally named Digital Item Adaptation (DIA), targets the adaptation of multimedia content based on the usage environment, such as network characteristics, terminal capabilities and user characteristics. However, MPEG-21 DIA does not take into account MPEG-7 semantic description tools, which provide means for a conceptual (semantic) description that is close to the human understanding of multimedia content. To fill this gap, we propose an interactive and user-centric framework called the Semantic Adaptation Framework (SAF). The SAF provides facilities for the generation of all the required semantic metadata and enables an MPEG-21 adaptation engine to semantically adapt multimedia content in order to provide the user with the best possible experience.
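
As a purely illustrative sketch (not project code, and with hypothetical names throughout), the following Python fragment shows the kind of decision an MPEG-21 DIA-style adaptation engine makes from a usage environment description: it selects the richest variant of a scalable resource that still fits the network and terminal characteristics.

# Hypothetical sketch of a usage-environment-driven adaptation decision,
# in the spirit of MPEG-21 DIA; not the DANAE implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class UsageEnvironment:
    # Simplified stand-in for a DIA usage environment description.
    available_bandwidth_kbps: int   # network characteristics
    display_width: int              # terminal capabilities
    display_height: int


@dataclass
class Variant:
    # One spatial/quality layer combination of a scalable resource.
    width: int
    height: int
    bitrate_kbps: int


def select_variant(env: UsageEnvironment, variants: List[Variant]) -> Variant:
    """Pick the richest variant that still fits the usage environment."""
    feasible = [v for v in variants
                if v.bitrate_kbps <= env.available_bandwidth_kbps
                and v.width <= env.display_width
                and v.height <= env.display_height]
    if not feasible:
        # Nothing fits: fall back to the lightest variant available.
        return min(variants, key=lambda v: v.bitrate_kbps)
    return max(feasible, key=lambda v: v.bitrate_kbps)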

The SAF specifies the following semantic metadata:
•    MPEG-7 compliant ontologies for content annotation and context representation.
•    MPEG-7 semantic descriptions of the resource.
•    Semantic generic bitstream syntax descriptions (semantic gBSD) of the resource.
•    Steering descriptions, which define all the possible semantic adaptations for the resource.
•    Semantic user preferences.
•    Content Digital Items, which provide a flexible link between the semantic gBSD and the steering description.
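
For illustration only, the following Python sketch models the roles of these metadata with plain data structures; the actual descriptions are XML documents following the MPEG-7 and MPEG-21 schemas, and all class and field names here are hypothetical.

# Hypothetical, simplified models of the SAF semantic metadata (not the XML schemas).
from dataclasses import dataclass, field
from typing import List


@dataclass
class SemanticGBSDUnit:
    # One addressable bitstream segment together with its semantic label.
    start_byte: int
    length: int
    semantic_label: str           # e.g. "goal", "interview", "replay"


@dataclass
class SteeringRule:
    # One possible semantic adaptation: segments with this label may be removed.
    droppable_label: str


@dataclass
class SemanticUserPreferences:
    # Topics / semantic entities the user wants to keep.
    preferred_labels: List[str] = field(default_factory=list)


@dataclass
class ContentDigitalItem:
    # Links the semantic gBSD with the steering description for one resource.
    gbsd_units: List[SemanticGBSDUnit]
    steering: List[SteeringRule]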

The modular architecture of the SAF (see the figure below) allows for easy extensibility and scalability of both the software modules and the semantic metadata.

The SAF consists of the following modules:
•    SemanticGenerator: Provides means for the semantic annotation of the resource and the generation of the MPEG-7 semantic description; it also generates the semantic gBSD and the steering description.
•    Content Digital Item Tool (CDITool): Builds a content Digital Item (contentDI) by aggregating the semantically indexed gBSD, the steering description and other metadata retrieved for the content.
•    Semantic User Preferences Tool: Allows a user to select her/his preferred topics, content structure and semantic entities with their relations from the available ontologies.
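
Under the same assumptions (hypothetical names and plain Python types instead of the XML metadata), the sketch below shows one way the modules' outputs could be combined for semantic adaptation: a bitstream unit is kept if the user asked for its semantic label, and removed only if the steering description marks that label as droppable. This is an illustrative reading of the SAF workflow, not the project's actual adaptation engine.

# Hypothetical sketch of the semantic adaptation step; plain Python types
# stand in for the XML metadata produced by the SAF modules.
from typing import Dict, List, Set


def semantic_adapt(gbsd_units: List[Dict],          # semantic gBSD units
                   droppable_labels: Set[str],      # from the steering description
                   preferred_labels: Set[str]) -> List[Dict]:  # semantic user preferences
    """Return the bitstream units to keep after semantic adaptation."""
    kept = []
    for unit in gbsd_units:
        wanted = unit["label"] in preferred_labels
        # A unit is dropped only if the user did not ask for it AND the
        # steering description allows removing units with that label.
        if wanted or unit["label"] not in droppable_labels:
            kept.append(unit)
    return kept


# Example: a sports clip where replays and interviews may be dropped and
# the user only cares about goals.
units = [{"start": 0,    "length": 1000, "label": "goal"},
         {"start": 1000, "length": 500,  "label": "replay"},
         {"start": 1500, "length": 800,  "label": "interview"}]
print([u["label"] for u in semantic_adapt(units, {"replay", "interview"}, {"goal"})])
# -> ['goal']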

 
Figure: Architecture of the SAF.


Maintained by: harald.kosch@itec.uni-klu.ac.at
Last updated 06/12/2004.