[14] | Markus Waltl, Christian Raffelsberger, Christian Timmerer, Hermann Hellwagner, Metadata-Based Content Management and Sharing System for Improved User Experience, Chapter in User Centric Media (Federico Alvarez, Cristina Costa, eds.), Springer Verlag, vol. 60, Berlin, Heidelberg, New York, pp. 132-140, 2012.
[bib] [url] [doi]
[13] | Markus Waltl, Christian Timmerer, Benjamin Rainer, Hermann Hellwagner, Sensory Effects for Ambient Experiences in the World Wide Web, Alpen-Adria Universität Klagenfurt, no. TR/ITEC/11/1.13, Klagenfurt, Austria, pp. 12, 2011.
[bib] [pdf] [abstract]
Abstract: More and more content in various formats becomes available via the World Wide Web (WWW). Currently available Web browsers are able to access and interpret these contents (i.e., Web videos, text, images, and audio), which stimulate only senses such as audition or vision. Recently, it has been proposed to stimulate other senses as well while consuming multimedia content, through so-called sensory effects. These sensory effects aim to enhance the ambient experience by providing effects such as light, wind, and vibration. The effects are represented as Sensory Effect Metadata (SEM), which is associated with multimedia content and is rendered on devices like fans, vibration chairs, or lamps. In this paper we present a plug-in for the Mozilla Firefox browser which is able to render such sensory effects provided via the WWW. Furthermore, the paper describes two user studies conducted with the plug-in and presents the results achieved.
|
[12] | Markus Waltl, Christian Timmerer, Hermann Hellwagner, Increasing the User Experience of Multimedia Presentations with Sensory Effects, In Proceedings of the 11th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS'10) (Riccardo Leonardi, Pierangelo Migliorati, Andrea Cavallaro, eds.), IEEE, Los Alamitos, CA, USA, pp. 1-4, 2010.
[bib] [pdf] [abstract]
Abstract: The term Universal Multimedia Experience (UME) has gained momentum and is well recognized within the research community. As this approach puts the user at center stage, additional complexity is added to the overall quality assessment problem, which calls for a scientific framework to capture, measure, quantify, judge, and explain the user experience. In previous work we have proposed the annotation of multimedia content with sensory effect metadata that can be used to stimulate senses other than vision or audition. In this paper we report first results obtained from subjective tests on sensory effects attached to traditional multimedia presentations, such as movies, that shall lead to an enhanced, unique, and worthwhile user experience.
|
[11] | Markus Waltl, Christian Timmerer, Hermann Hellwagner, Improving the Quality of Multimedia Experience through Sensory Effects, In Proceedings of the 2nd International Workshop on Quality of Multimedia Experience (QoMEX'10) (Andrew Perkis, Sebastian Möller, Peter Svensson, Amy Reibman, eds.), IEEE, Los Alamitos, CA, USA, pp. 124-129, 2010.
[bib] [url] [doi] [pdf] [abstract]
Abstract: In previous and related work, sensory effects are presented as a tool for increasing the user experience of multimedia presentations by stimulating senses other than vision or audition. In this paper we primarily investigated how the Quality of Experience (QoE) varies with the video bit-rate of multimedia content annotated with sensory effects (e.g., wind, vibration, light). To this end, we defined a subjective quality assessment methodology based on standardized methods. The paper describes the test environment, its setup, and its conditions in detail. Furthermore, we experimented with a novel voting device that allows for continuous voting feedback during a sequence in addition to the overall quality voting at the end of each sequence. The results obtained from the subjective quality assessment are presented and discussed thoroughly. In anticipation of the results, we can report an improvement of the quality of the multimedia experience thanks to the sensory effects.
|
[10] | Markus Waltl, Christian Raffelsberger, Christian Timmerer, Hermann Hellwagner, Metadata-based Content Management and Sharing System for Improved User Experience, In Proceedings CD of the 2nd International ICST Conference on User Centric Future Media Internet (Federico Alvarez, Cristina Costa, eds.), Springer Verlag GmbH, Berlin, Heidelberg, New York, pp. 1-9, 2010.
[bib] [pdf] [abstract]
Abstract: In the past years the amount of multimedia content on the Internet or in home networks has been increasing drastically. Instead of buying traditional media (such as CDs or DVDs), users tend to buy online media. This leads to the difficulty of managing the content (e.g., movies, images). A vast number of tools for content management exist, but they mainly focus on one type of content (e.g., only images). Furthermore, most of the available tools are not configurable to the user’s preferences and cannot be accessed by different devices (e.g., TV, computer, mobile phone) in the home network. In this paper we present a UPnP A/V-based system for managing and sharing audio/visual content in home environments which is configurable to the user’s preferences. Furthermore, the paper depicts how this system can be used to improve the user experience by using MPEG-V.
|
[9] | Christian Timmerer, Markus Waltl, Hermann Hellwagner, Are Sensory Effects Ready for the World Wide Web?, In Proceedings of the Workshop on Interoperable Social Multimedia Applications (WISMA 2010) (Anna Carreras, Jaime Delgado, Xavier Maroñas, Víctor Rodríguez, eds.), CEUR Workshop Proceedings (CEUR-WS.org), Aachen, Germany, pp. 57-60, 2010.
[bib] [pdf] [abstract]
Abstract: The World Wide Web (WWW) is one of the main entry points to access and consume Internet content in various forms. In particular, the Web browser is used to access different types of media (i.e., text, image, audio, and video) and on some platforms is the only way to access the vast amount of information on the Web. Recently, it has been proposed to stimulate senses other than vision or audition while consuming multimedia content, through so-called sensory effects, with the aim to increase the user’s Quality of Experience (QoE). The effects are represented as Sensory Effect Metadata (SEM), which is associated with traditional multimedia content and is rendered (synchronized with the media) on sensory devices like fans, vibration chairs, lamps, etc. In this paper we provide a principal investigation of whether sensory effects are ready for the WWW and, in anticipation of the result, we propose how to embed sensory effect metadata within Web content and how to achieve its synchronized rendering.
|
[8] | Markus Waltl, Christian Timmerer, Hermann Hellwagner, A Test-Bed for Quality of Multimedia Experience Evaluation of Sensory Effects, In Proceedings of the First International Workshop on Quality of Multimedia Experience (QoMEX 2009) (Touradj Ebrahim, Khaled El-Maleh, Gokce Dane, Lina Karam, eds.), IEEE, Los Alamitos, CA, USA, pp. 145-150, 2009.
[bib] [url] [doi] [pdf] [abstract]
Abstract: This paper introduces a prototype test-bed for triggering sensory effects like light, wind, or vibration when presenting audiovisual resources, e.g., a video, to users. ISO/IEC MPEG is currently standardizing the Sensory Effect Description Language (SEDL) for describing such effects. This language is briefly described in the paper, and the test-bed, which is intended to evaluate the quality of users' multimedia experience, is presented. It consists of a video annotation tool for sensory effects, a corresponding simulation tool, and a real test system. Initial experiments and results on determining the color of light effects from the video content are reported.
|
[7] | Bernhard Reiterer, Cyril Concolato, Hermann Hellwagner, Natural-Language-based Conversion of Images to Mobile Multimedia Experiences, In Proceedings of 1st International ICST Conference on User Centric Media - UCMedia 2009 (Petros Daras, Imrich Chlamtac, eds.), Springer, Berlin, Heidelberg, New York, pp. 4 - CD, 2009.
[bib] [url] [abstract]
Abstract: We describe an approach for viewing any large, detail-rich picture on a small display by generating a video from the image, as taken by a virtual camera moving across it at varying distance. Our main innovation is the ability to build the virtual camera's motion from a textual description of a picture, e.g., a museum caption, so that relevance and ordering of image regions are determined by co-analyzing image annotations and natural language text. Furthermore, our system arranges the resulting presentation such that it is synchronized with an audio track generated from the text by use of a text-to-speech system.
|
[6] | Bernhard Reiterer, Hermann Hellwagner, Animated Picture Presentation Steered by Natural Language, In Proceedings International InterMedia Summer School 2009 (Magnenat-Thalmann Nadia, Han Seunghyun, Potopsaltou Dimitris, eds.), MIRALab at University of Geneva, Geneva, pp. 24-32, 2009.
[bib] [url] [abstract]
Abstract: In this paper, we present an approach for presenting large, feature-rich pictures on small displays by generating an animation, and subsequently a video, from the image, as it could be taken by a virtual camera moving across the image. Our main innovation is the ability to build the virtual camera's motion upon a textual description of a picture, as from a museum caption, so that relevance and ordering of image regions are determined by co-analyzing image annotations and text. Furthermore, our system can arrange the resulting presentation such that it is synchronized with an audio track generated from the text by use of a text-to-speech system.
|
[5] | Bernhard Reiterer, Janine Lachner, Andreas Lorenz, Andreas Zimmermann, Hermann Hellwagner, Research Directions Toward User-centric Multimedia, In Advances in Semantic Media Adaptation and Personalization (Marios C Angelides, Phivos Mylonas, Manolis Wallace, eds.), Auerbach Publications, Boca Raton (Florida), pp. 21-42, 2009.
[bib] [url] [doi] [abstract]
Abstract: Currently, much research aims at coping with the shortcomings in multimedia consumption that may exist in a user's current context, e.g., due to the absence of appropriate devices at many locations, a lack of capabilities of mobile devices, restricted access to content, or non-personalized user interfaces. Recently, solutions to specific problems have been emerging, e.g., wireless access to multimedia repositories over standardized interfaces; however, due to usability restrictions the user has to spend much effort to fulfill his/her demands, or is even incapable of doing so. The vision of user-centric multimedia places the user at the center of multimedia services to support his/her multimedia consumption intelligently, dealing with the aforementioned issues while minimizing the required effort. Essential features of such a vision are comprehensive context awareness, personalized user interfaces, and multimedia content adaptation. These aspects are addressed in this paper as major challenges toward a user-centric multimedia framework.
|
[4] | Bernhard Reiterer, Cyril Concolato, Janine Lachner, Jean Le Feuvre, Jean-Claude Moissinac, Stefano Lenzi, Stefano Chessa, Enrique Fernández Ferrá, Juan José González Menaya, Hermann Hellwagner, User-centric universal multimedia access in home networks, In The Visual Computer, International Journal of Computer Graphics, Springer, vol. 24, no. 7-9, Berlin, Heidelberg, New York, pp. 837-845, 2008.
[bib] [url] [doi] [pdf] [abstract]
Abstract: Much research is currently being conducted towards Universal Multimedia Access, aiming at removing barriers that arise when multimedia content is to be consumed with more and more heterogeneous devices and over diverse networks. We argue that users should be put at the center of the research work to enable user-centric multimedia access. In this paper we present the requirements for a user-centric multimedia access system in a networked home environment: easy access to available content repositories, context awareness, content adaptation, and session migration. After showing the limits of state-of-the-art technologies, we present the architecture of a system which allows unified access to the home network content, automatically delivered to rendering devices close to the user, adapted according to the rendering device constraints, and which is also capable of session mobility.
|
[3] | Raffaele Bolla, Matteo Repetto, Stefano Chessa, Francesco Furfari, Saar De Zutter, Rik Van de Walle, Bernhard Reiterer, Hermann Hellwagner, Mark Asbach, Mathias Wien, A Context-Aware Architecture for QoS and Transcoding Management of Multimedia Streams in Smart Homes, In 13th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA08) (IEEE Industrial Electronics Society, ed.), IEEE, Los Alamitos, CA, USA, pp. 1354-1361, 2008.
[bib] [pdf] [abstract]
Abstract: Current trends in smart homes suggest that several multimedia services will soon converge towards common standards and platforms. However, this rapid evolution gives rise to several issues related to the management of a large number of multimedia streams in the home communication infrastructure. An issue of particular relevance is how a context acquisition system can be used to support the management of such a large number of streams with respect to Quality of Service (QoS), to their adaptation to the available bandwidth or to the capacity of the involved devices, and to their migration and adaptation driven by the users' needs that are implicitly or explicitly notified to the system. In this scenario, the paper describes the experience of the INTERMEDIA project in exploiting context information to support QoS, migration, and adaptation of multimedia streams.
|
[2] | Davy Van Deursen, Sarah De Bruyne, Wim Van Lancker, Wesley De Neve, Davy De Schrijver, Hermann Hellwagner, Rik Van de Walle, MuMiVA: A Multimedia Delivery Platform using Format-agnostic, XML-driven Content Adaptation, In IEEE International Symposium on Multimedia 2007 (ISM2007) (Dick Bulterman, Kinji Mori, Jeffrey J P Tsai, eds.), IEEE, Los Alamitos, CA, USA, pp. 131-138, 2007.
[bib] [url] [pdf] [abstract]
Abstract: Due to the increasing heterogeneity in the current multimedia landscape, the delivery of multimedia content has become an important issue today. This heterogeneity is not only reflected by a plethora of different usage environments, but also by the presence of multiple (scalable) coding formats. Therefore, format-independent adaptation engines have to be used within a multimedia delivery platform; these engines are able to adapt the multimedia content according to a certain usage environment, independent of the underlying coding format of the content. By relying on automatically created textual descriptions of the high-level syntax of binary media resources, a format-independent adaptation engine can be built. The MPEG-21 generic Bitstream Syntax Schema (gBS Schema) is a tool that is part of the MPEG-21 Multimedia Framework. It enables the use of generic Bitstream Syntax Descriptions (gBSDs), i.e., textual descriptions in XML, to steer the adaptation of a binary media resource using format-independent adaptation logic. In this paper, we address the design and performance evaluation of a multimedia delivery platform that relies on gBS Schema-driven adaptation engines. This platform is called MuMiVA; it is a fully integrated, extensible platform for multimedia delivery in heterogeneous usage environments, using streaming technologies. To demonstrate the flexibility of our multimedia delivery platform, we discuss the functioning of two different applications (i.e., exploitation of temporal scalability and shot selection) applied to two different coding formats (i.e., MPEG-4 Visual and H.264/AVC). Keywords: Content adaptation, Content delivery, MPEG-21 gBS Schema, XML transformations.
|
[1] | Janine Lachner, Andreas Lorenz, Bernhard Reiterer, Andreas Zimmermann, Hermann Hellwagner, Challenges toward User-centric Multimedia, In Second International Workshop on Semantic Media Adaptation and Personalization (SMAP 2007) (Phivos Mylonas, Manolis Wallace, Marios C Angelides, eds.), IEEE, Los Alamitos, CA, USA, pp. 159-164, 2007.
[bib] [url] [pdf] [abstract]
Abstract: Currently, much research aims at coping with the shortcomings in multimedia consumption that may exist in a user's current context, e.g., due to the absence of appropriate devices at many locations, a lack of capabilities of mobile devices, restricted access to content, or non-personalized user interfaces. Recently, solutions to specific problems have been emerging, e.g., wireless access to multimedia repositories over standardized interfaces; however, due to usability restrictions the user has to spend much effort to fulfill his/her demands, or is even incapable of doing so. The vision of user-centric multimedia places the user at the center of multimedia services to support his/her multimedia consumption intelligently, dealing with the aforementioned issues while minimizing the required effort. Essential features of such a vision are comprehensive context awareness, personalized user interfaces, and multimedia content adaptation. These aspects are addressed in this paper as major challenges toward a user-centric multimedia framework.
|