[552] | Torsten Andre, Karin Anna Hummel, Angela Schoellig, Evsen Yanmaz, Mahdi Asadpour, Christian Bettstetter, Pasquale Grippa, Hermann Hellwagner, Stephan Sand, Siwei Zhang, Application-Driven Design of Aerial Communication Networks, In IEEE Communications Magazine, IEEE, vol. 52, no. 5, IEEE Communications Society, pp. 129-137, 2014.
[bib] [abstract]
Abstract: Networks of micro aerial vehicles (MAVs) equipped with various sensors are increasingly used for civil applications, such as monitoring, surveillance, and disaster management. In this article, we discuss the communication requirements raised by applications in MAV networks. We propose a novel system representation that can be used to specify different application demands. To this end, we extract key functionalities expected in an MAV network. We map these functionalities into building blocks to characterize the expected communication needs. Based on insights from our own and related real-world experiments, we discuss the capabilities of existing communications technologies and their limitations to implement the proposed building blocks. Our findings indicate that while certain requirements of MAV applications are met with available technologies, further research and development is needed to address the scalability, heterogeneity, safety, quality of service, and security aspects of multi-MAV systems.
|
[551] | Hermann Hellwagner, The BRIDGE Project - Bridging Resources and Agencies in Large-Scale Emergency Management, In E-Letter on Social Media Analysis for Crisis Management, IEEE Computer Society Special Technical Community on Social Networking (STCSN), vol. 2, no. 1, http://stcsn.ieee.net/e-letter/vol-2-no-1, pp. 1-10, 2014, IEEE Computer Society Special Technical Community on Social Networking E-Letter.
[bib][url] [abstract]
Abstract: BRIDGE is a European collaborative project established within the Security Research sector of the European Commission. The basic goal of BRIDGE is to contribute to the safety of citizens by developing technical and organisational solutions that improve crisis and emergency management in EU member states. A (middleware) platform is being developed that is to provide technical support for multi-agency collaboration in large-scale emergency relief efforts. Several tools and software systems are being implemented and tested to support first responders in their efforts. Beyond technical considerations, organisational measures are being explored to ensure interoperability and cooperation among involved parties; social, ethical and legal issues are being investigated as well. A focus of the project is to demonstrate and validate its results in the course of real-world emergency response exercises. Since most of the BRIDGE work is beyond the scope of this e-letter on social networking, only a brief overview of the BRIDGE goals and work will be given. However, one thread of work is relevant in the context of social networking and deserves to be covered more closely: automatic detection of notable sub-events of a crisis from social networks. This activity makes use of crisis-related information coming from citizens via social networks and thus contributes to building an improved operational picture in a crisis situation and to better planning and performing crisis response tasks.
|
[550] | Gheorghita Ghinea, Christian Timmerer, Weisi Lin, Stephen Gulliver, Mulsemedia: State of the Art, Perspectives, and Challenges, In ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), ACM, vol. 11, no. 1s, New York, NY, USA, pp. 17:1-17:23, 2014.
[bib] [pdf] [abstract]
Abstract: Mulsemedia—multiple sensorial media—captures a wide variety of research efforts and applications. This article presents a historic perspective on mulsemedia work and reviews current developments in the area. These take place across the traditional multimedia spectrum—from virtual reality applications to computer games—as well as efforts in the arts, gastronomy, and therapy, to mention a few. We also describe standardization efforts, via the MPEG-V standard, and identify future developments and exciting challenges the community needs to overcome.
|
[549] | Gheorghita Ghinea, Christian Timmerer, Weisi Lin, Stephen Gulliver, Guest Editorial: Special Issue on Multiple Sensorial (MulSeMedia) Multimodal Media: Advances and Applications, In ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), ACM, vol. 11, no. 1s, New York, NY, USA, pp. 9:1-9:2, 2014.
[bib] [pdf] |
[548] | Claudiu Cobarzan, Marco Andrea Hudelist, Manfred Del Fabro, Content-Based Video Browsing with Collaborating Mobile Clients, In MultiMedia Modeling, 20th Anniversary International Conference (C Gurrin, F Hopfgartner, W Hurst, H Johansen, H Lee, N O'Connor, eds.), Springer, Berlin, Germany, pp. 402-406, 2014.
[bib] |
[547] | Claudiu Cobarzan, Klaus Schoeffmann, How do Users Search with Basic HTML5 Video Players?, In Proceedings of the 20th International Conference on MultiMedia Modeling (MMM2014) (Noel O'Connor, Wolfgang Hurst, Hyowon Lee, Cathal Gurrin, eds.), Springer, Berlin Heidelberg, pp. 12, 2014.
[bib] |
[546] | Florian Stegmaier, Harald Kosch, Ralf Klamma, Mathias Lux, Ernesto Damiani, Multimedia on the web - editorial, In Multimedia Tools and Applications, Springer US, New York, USA, pp. 1-6, 2013.
[bib][url] [doi] |
[545] | Mathias Lux, Oge Marques, Visual Information Retrieval Using Java and LIRE, In Synthesis Lectures on Information Concepts, Retrieval, and Services, Morgan & Claypool Publishers, vol. 5, no. 1, USA, pp. 1-112, 2013.
[bib] [doi] [abstract]
Abstract: Visual information retrieval (VIR) is an active and vibrant research area, which attempts at providing means for organizing, indexing, annotating, and retrieving visual information (images and videos) from large, unstructured repositories. The goal of VIR is to retrieve matches ranked by their relevance to a given query, which is often expressed as an example image and/or a series of keywords. During its early years (1995-2000), the research efforts were dominated by content-based approaches contributed primarily by the image and video processing community. During the past decade, it was widely recognized that the challenges imposed by the lack of coincidence between an image's visual contents and its semantic interpretation, also known as semantic gap, required a clever use of textual metadata (in addition to information extracted from the image's pixel contents) to make image and video retrieval solutions efficient and effective. The need to bridge (or at least narrow) the semantic gap has been one of the driving forces behind current VIR research. Additionally, other related research problems and market opportunities have started to emerge, offering a broad range of exciting problems for computer scientists and engineers to work on. In this introductory book, we focus on a subset of VIR problems where the media consists of images, and the indexing and retrieval methods are based on the pixel contents of those images -- an approach known as content-based image retrieval (CBIR). We present an implementation-oriented overview of CBIR concepts, techniques, algorithms, and figures of merit. Most chapters are supported by examples written in Java, using Lucene (an open-source Java-based indexing and search implementation) and LIRE (Lucene Image REtrieval), an open-source Java-based library for CBIR.
|
[544] | Hermann Hellwagner, The Interplay of Technology Development and Media Convergence: Examples, Chapter in Media and Convergence Management (Sandra Diehl, Matthias Karmasin, eds.), Springer, Berlin, Heidelberg, New York, pp. 205-220, 2013.
[bib] [pdf] |
[543] | Manfred Del Fabro, Laszlo Böszörmenyi, State-of-the-art and future challenges in video scene detection: a survey, In Multimedia Systems, Springer-Verlag, vol. 19, no. 5, Berlin, Heidelberg, New York, pp. 427-454, 2013.
[bib] |
[542] | Matthias Zeppelzauer, Maia Zaharieva, Manfred Del Fabro, Unsupervised Clustering of Social Events, In MediaEval 2013 - Multimedia Benchmark Workshop (Martha Larson, Xavier Anguera, Timo Reuter, Gareth Jones, Bogdan Ionescu, Markus Schedl, Tomas Piatrik, Claudia Hauff, Mohammad Soleymani, eds.), CEUR-WS.org/Vol-1043, Aachen, Germany, pp. 1-2, 2013.
[bib] [pdf] |
[541] | Markus Waltl, Benjamin Rainer, Stefan Lederer, Christian Timmerer, Katharina Gassner, Ralf Terlutter, A 4D Multimedia Player enabling Sensory Experience, In Proceedings of the 5th International Workshop on Quality of Multimedia Experience (QoMEX'13) (Christian Timmerer, Patrick Le Callet, Martin Varela, Stefan Winkler, Tiago H Falk, eds.), IEEE, Los Alamitos, CA, USA, pp. 126-127, 2013.
[bib][url] [pdf] [abstract]
Abstract: Lately, 3D is gaining momentum in cinemas and home environments. However, 2D and 3D video content only stimulates senses like hearing and seeing. In this paper we focus on a more enhanced level of entertainment by presenting a 4D multimedia player and a corresponding demonstration setup, which stimulates further senses such as haptics using the MPEG-V: Media Context and Control standard. The presented demonstration setup uses stereoscopic 3D and sensory devices, i.e., fans, vibration panels and lights. The combination of conventional 3D content with tailored sensory effects allows us to further enhance the viewing experience of the users.
|
[540] | Markus Waltl, The Impact of Sensory Effects on the Quality of Multimedia Experience, PhD thesis, Alpen-Adria-Universität Klagenfurt, pp. 234, 2013.
[bib] [pdf] [abstract]
Abstract: Multimedia content is omnipresent in our life. Thus, one can consume content through various distribution channels such as a DVD, Blu-Ray, or the Internet. Recently, 3D video gained more and more importance and a lot of movies presented in cinemas are 3D. Currently, research on additional constituents such as light and scent effects for further enhancing the viewing experience is conducted. As this research is taken up by more and more researchers and companies, the Moving Picture Experts Group (MPEG) ratified the MPEG-V standard, referred to as Media Context and Control, which allows the annotation of multimedia content with additional effects (e.g., light, wind, vibration) and render these effects synchronized to the multimedia content. Due to this fairly new research area, there are only a few subjective quality assessments evaluating such effects. Moreover, standardized assessment methods cannot be used as originally developed since they are optimized for audio-visual quality evaluations. Thus, this work lists and describes existing subjective quality assessment methods suitable for conducting assessments comprising multimedia content, especially videos, enriched by sensory effects (i.e., light, wind, and vibration). As there is a lack of suitable software for rendering sensory effects, this work introduces a multimedia player for playing multimedia content accompanied by sensory effects. Moreover, in this work, we performed four subjective quality assessments answering the following questions: (1) Do sensory effects enhance the viewing experience for different genres? (2) Do sensory effects have an influence on the perceived video quality? (3) Do light effects enhance the viewing experience for Web videos? (4) Do sensory effects have an impact on the perceived emotions while watching a video? Therefore, this work presents these subjective quality assessments including a detailed description of the assessments and their results. Moreover, this work introduces a dataset consisting of video sequences annotated with sensory effects for conducting subjective quality assessments. Finally, some recommendations for performing assessments comprising sensory effects which have been extracted from the conducted subjective quality assessments are given.
|
[539] | Markus Waltl, Benjamin Rainer, Christian Timmerer, Hermann Hellwagner, An End-to-End tool chain for sensory experience based on MPEG-V, In Signal Processing: Image Communication, Elsevier, vol. 28, no. 2, Amsterdam, Netherlands, pp. 136-150, 2013.
[bib][url] [doi] [abstract]
Abstract: This paper provides an overview of our research conducted in the area of Sensory Experience including our implementations using MPEG-V Part 3 entitled "Sensory Information". MPEG-V Part 3 introduces Sensory Experience as a tool to increase the Quality of Experience by annotating traditional multimedia data with sensory effects. These sensory effects are rendered on special devices like fans, vibration chairs, ambient lights, scent disposers, water sprayers, or heating/cooling devices stimulating senses beyond the traditional ones. The paper's main focus is on the end-to-end aspects including the generation, transmission, and synchronized rendering of sensory effects with the traditional multimedia data taking movie clips as an example. Therefore, we present in this paper an open source tool chain that provides a complete end-to-end sensory effect generation and consumption framework. Furthermore, we summarize results from various subjective quality assessments conducted in this area. Finally, we point out research challenges that may encourage further research within this emerging domain.
|
[538] | Christian Timmerer, Benjamin Rainer, Markus Waltl, A Utility Model for Sensory Experience, In Proceedings of the 5th International Workshop on Quality of Multimedia Experience (QoMEX'13) (Christian Timmerer, Patrick Le Callet, Martin Varela, Stefan Winkler, Tiago H Falk, eds.), IEEE, Los Alamitos, CA, USA, pp. 224-229, 2013.
[bib][url] [pdf] [abstract]
Abstract: Enriching multimedia with additional effects such as olfaction, light, wind, or vibration is gaining more and more momentum in both research and industry. Hence, there is the need to determine the influence of individual effects on the Quality of Experience (QoE). In this paper, we present a subjective quality assessment using the MPEG-V standard to annotate video sequences with individual sensory effects (i.e., wind, light, and vibration) and all combinations thereof. Based on the results we derive a utility model for sensory experience that accounts for the assessed sensory effects. Finally, we provide an example instantiation of the utility model and validate it against current and past results of our subjective quality assessments conducted so far.
|
[537] | Christian Timmerer, MPEG Column: 105th MPEG Meeting, In ACM SIGMultimedia Records, ACM, vol. 5, no. 3, New York, NY, USA, pp. 1-2, 2013.
[bib][url] |
[536] | Christian Timmerer, MPEG column: 103rd MPEG meeting, In ACM SIGMultimedia Records, ACM, vol. 5, no. 1, New York, NY, USA, pp. 1-3, 2013.
[bib][url] |
[535] | Christian Timmerer, MPEG Column: 106th MPEG Meeting, In ACM SIGMultimedia Records, ACM, vol. 5, no. 4, New York, NY, USA, pp. 1-2, 2013.
[bib][url] |
[534] | Christian Timmerer, Anthony Vetro, Recent MPEG Standards for Future Media Ecosystems, In Computing Now, IEEE Computer Society [online], vol. 6, no. 10, Los Alamitos, CA, USA, pp. 1, 2013.
[bib][url] |
[533] | Mario Taschwer, Text-Based Medical Case Retrieval Using MeSH Ontology, In CLEF 2013 Evaluation Labs and Workshop, Online Working Notes (Pamela Forner, Roberto Navigli, Dan Tufis, eds.), CLEF Initiative, Padua, Italy, pp. 5, 2013.
[bib][url] [pdf] [slides] [abstract]
Abstract: Our approach to the ImageCLEF medical case retrieval task consists of text-only retrieval combined with utilizing the Medical Subject Headings (MeSH) ontology. MeSH terms extracted from the query are used for query expansion or query term weighting. MeSH annotations of documents available from PubMed Central are added to the corpus. Retrieval results improve slightly upon full-text retrieval.
|
[532] | Tibor Szkaliczki, Michael Eberhard, Hermann Hellwagner, László Szobonya, Piece selection algorithms for layered video streaming in P2P networks, In Discrete Applied Mathematics, Elsevier, Amsterdam, The Netherlands, pp. 11, 2013.
[bib][url] |
[531] | Christian Sieber, Tobias Hoßfeld, Thomas Zinner, Phuoc Tran-Gia, Christian Timmerer, Implementation and User-centric Comparison of a Novel Adaptation Logic for DASH with SVC, In Integrated Network Management (IM 2013), 2013 IFIP/IEEE International Symposium on (Filip De Turck, Yixin Diao, Choong Seon Hong, Deep Medhi, Ramin Sadre, eds.), IEEE Communications Society, New York, NY, USA, pp. 1318-1323, 2013.
[bib] [pdf] [abstract]
Abstract: The MPEG-DASH standard allows the client-centric access to different representations of video content via the HTTP protocol. The client can flexibly switch between different qualities, i.e., different bit rates, and thus avoid waiting times during the video playback due to empty playback buffers. However, quality switches and the playback of lower qualities are perceived by the user, which may reduce the Quality of Experience (QoE). Therefore, novel algorithms are required which manage the streaming behavior with respect to the user's requirements and which do not waste network resources. As indicated by recent studies, scalable video coding (SVC) may use the current network and content distribution infrastructure in a more efficient way than with single layer codecs. The contribution of this paper is the design and the implementation of a novel DASH/SVC streaming algorithm. By means of measurements in a test-bed, its performance and benefits are evaluated and compared to existing algorithms from a user-centric viewpoint with objective performance metrics. Our findings show that the proposed algorithm outperforms other DASH mechanisms in terms of video quality, low switching frequency and usage of the available resources in a realistic mobile network scenario. This is a first step towards true QoE management of video streaming in the Internet with DASH and SVC.
|
[530] | Klaus Schoeffmann, David Ahlström, Werner Bailer, Claudiu Cobarzan, Frank Hopfgartner, Kevin McGuinness, Cathal Gurrin, Christian Frisson, Duy-Dinh Le, Manfred Del Fabro, Hongliang Bai, Wolfgang Weiss, The Video Browser Showdown: a live evaluation of interactive video search tools, In International Journal of Multimedia Information Retrieval, Springer, Berlin, Germany, pp. 1-15, 2013.
[bib] |
[529] | Klaus Schoeffmann, David Ahlström, Laszlo Böszörmenyi, A User Study of Visual Search Performance of Interactive 2D and 3D Storyboards, In Proceedings of the International Workshop on Adaptive Multimedia Retrieval (AMR2011), LNCS 7836 (M Detyniecki, A Garcia-Serrano, A Nürnberger, S Stober, eds.), Springer, Barcelona, Spain, pp. 18-32, 2013.
[bib] |
[528] | Klaus Schoeffmann, Claudiu Cobarzan, An evaluation of interactive search with modern video players, In Proceedings of 2013 IEEE International Conference on Multimedia and Expo Workshops (ICME) (Xian-Sheng Hua, Irene Cheng, Anup Basu, Nam Ling, Sethuraman Panchanathan, eds.), IEEE, Los Alamitos, CA, USA, pp. 1-4, 2013.
[bib] [doi] [abstract]
Abstract: The navigation features of video players are often used for interactive search in videos, when users want to find a specific segment. Especially non-experts make use of these navigation facilities because they typically do not have any video retrieval tool at hand and - maybe more important - the navigation features of video players are very easy to use. However, in order to design professional video browsing tools that allow for better search performance but still provide ease of use, we need to know how users search with common video players. Therefore, we analyze logging data from a user study with 17 participants that performed Known Item Search tasks with an HTML5 video player. We classify search behavior by type of interaction and speed of interactive search and discuss what we can learn for the design and development of professional video search tools.
|