[677] | Stefan Petscharnig, Klaus Schoeffmann, Learning laparoscopic video shot classification for gynecological surgery, In Multimedia Tools and Applications, Springer, Berlin, Heidelberg, New York, pp. 1-19, 2017.
[bib][url] [doi] [abstract]
Abstract: Videos of endoscopic surgery are used for education of medical experts, analysis in medical research, and documentation for everyday clinical life. Hand-crafted image descriptors lack the capabilities of a semantic classification of surgical actions and video shots of anatomical structures. In this work, we investigate how well single-frame convolutional neural networks (CNNs) perform for semantic shot classification in gynecologic surgery. Together with medical experts, we manually annotate hours of raw endoscopic gynecologic surgery videos showing endometriosis treatment and myoma resection of over 100 patients. The cleaned ground truth dataset comprises 9 hours of annotated video material (from 111 different recordings). We use the well-known CNN architectures AlexNet and GoogLeNet and train these architectures from scratch for both surgical actions and anatomy. Furthermore, we extract high-level features from AlexNet with weights from a pre-trained model from the Caffe model zoo and feed them to an SVM classifier. Our evaluation shows that we reach an average recall of .697 and .515 for classification of anatomical structures and surgical actions, respectively, using off-the-shelf CNN features. Using GoogLeNet, we achieve a mean recall of .782 and .617 for classification of anatomical structures and surgical actions, respectively. With AlexNet, the achieved recall is .615 for anatomical structures and .469 for surgical action classification. The main conclusion of our work is that advances in general image classification methods transfer to the domain of endoscopic surgery videos in gynecology. This is relevant as this domain differs from natural images, e.g., it is distinguished by smoke, reflections, or a limited amount of colors.
|
[676] | Bernd Münzer, Manfred Jürgen Primus, Sabrina Kletz, Stefan Petscharnig, Klaus Schoeffmann, Static vs. Dynamic Content Descriptors for Video Retrieval in Laparoscopy, In IEEE International Symposium on Multimedia (ISM2017) (Kang-Ming Chang, Wen-Thong Chang, eds.), IEEE, Taichung, Taiwan, pp. 8, 2017.
[bib] [abstract]
Abstract: The domain of minimally invasive surgery has recently attracted attention from the Multimedia community due to the fact that systematic video documentation is on the rise in this medical field. The vastly growing volumes of video archives call for effective and efficient techniques to retrieve specific information from large video collections with visually very homogeneous content. One specific challenge in this context is to retrieve scenes showing similar surgical actions, i.e., similarity search. Although this task has a high and constantly growing relevance for surgeons and other health professionals, it has rarely been investigated in the literature so far for this particular domain. In this paper, we propose and evaluate a number of both static and dynamic content descriptors for this purpose. The former only take into account individual images, while the latter consider the motion within a scene. Our experimental results show that although static descriptors achieve the highest overall performance, dynamic descriptors are much more discriminative for certain classes of surgical actions. We conclude that the two approaches have complementary strengths and further research should investigate methods to combine them.
|
[675] | Bernd Münzer, Klaus Schoeffmann, Laszlo Böszörmenyi, EndoXplore: A Web-based Video Explorer for Endoscopic Videos, In IEEE International Symposium on Multimedia (ISM2017) (Kang-Ming Chang, Wen-Thong Chang, eds.), IEEE, Taichung, Taiwan, pp. 2, 2017.
[bib] [abstract]
Abstract: The rapidly increasing volume of videos recorded in the course of endoscopic screenings and surgeries poses demanding challenges to video retrieval and browsing systems. Surgeons typically have to use standard video players to retrospectively review their procedures, which is an extremely cumbersome and time-consuming process. We present an HTML5-based video explorer that is specially tailored to this purpose and enables a time-efficient post-operative review of procedures. It incorporates various interactive browsing mechanisms as well as domain-specific content-based features based on previous research results. Preliminary interviews with surgeons indicate that this tool can considerably improve retrieval and browsing efficiency for users in the medical domain and allows surgeons to more easily and quickly revisit specific moments in recordings of their endoscopic surgeries.
|
[674] | Bernd Münzer, Manfred Jürgen Primus, Marco Hudelist, Christian Beecks, Wolfgang Hürst, Klaus Schoeffmann, When content-based video retrieval and human computation unite: Towards effective collaborative video search, In 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (Yui-Lam Chan, Susanto Rahardja, eds.), IEEE, Hongkong, China, pp. 214-219, 2017.
[bib] [doi] [abstract]
Abstract: Although content-based retrieval methods achieved very good results for large-scale video collections in recent years, they still suffer from various deficiencies. On the other hand, plain human perception is a very powerful ability that still outperforms automatic methods in appropriate settings, but is very limited when it comes to large-scale data collections. In this paper, we propose to take the best from both worlds by combining an advanced content-based retrieval system featuring various query modalities with a straightforward mobile tool that is optimized for fast human perception in a sequential manner. In this collaborative system with multiple users, both subsystems benefit from each other: The results of issued queries are used to re-rank the video list on the tablet tool, which in turn notifies the retrieval tool about parts of the dataset that have already been inspected in detail and can be omitted in subsequent queries. The preliminary experiments show promising results in terms of search performance.
|
[673] | Bernd Münzer, Klaus Schoeffmann, Laszlo Böszörmenyi, Content-based processing and analysis of endoscopic images and videos: A survey, In Multimedia Tools and Applications, Springer, Berlin, Heidelberg, New York, pp. 1-40, 2017.
[bib][url] [doi] [abstract]
Abstract: In recent years, digital endoscopy has established itself as a key technology for medical screenings and minimally invasive surgery. Since then, various research communities with manifold backgrounds have picked up on the idea of processing and automatically analyzing the inherently available video signal that is produced by the endoscopic camera. Proposed works mainly include image processing techniques, pattern recognition, machine learning methods and Computer Vision algorithms. While most contributions deal with real-time assistance at procedure time, the post-procedural processing of recorded videos is still in its infancy. Many post-processing problems are based on typical Multimedia methods like indexing, retrieval, summarization and video interaction, but have only been sparsely addressed so far for this domain. The goals of this survey are (1) to introduce this research field to a broader audience in the Multimedia community to stimulate further research, (2) to describe domain-specific characteristics of endoscopic videos that need to be addressed in a pre-processing step, and (3) to systematically bring together the very diverse research results for the first time to provide a broader overview of related research that is currently not perceived as belonging together.
|
[672] | Christopher Mueller, Stefan Lederer, Christian Timmerer, Adaptation logic for varying a bitrate, Patent, 2017, US 15365886.
[bib][url] |
[671] | Philipp Moll, Daniel Posch, Hermann Hellwagner, Investigation of push-based traffic for conversational services in Named Data Networking, In Proceedings of the IEEE International Conference on Multimedia and Expo Workshops (ICMEW) 2017 (Beatrice Pesquet-Popescu, Chong-Wah Ngo, eds.), IEEE, Hong Kong, pp. 315-320, 2017.
[bib][url] [doi] [pdf] [abstract]
Abstract: Conversational services (e.g., Internet telephony) exhibit hard Quality of Service (QoS) requirements, such as low delay and jitter. Current IP-based solutions for conversational services use push-based data transfer only, since pull-based communication as envisaged in Named Data Networking (NDN) suffers from the two-way delay. Unfortunately, IP's addressing scheme requires additional services for contacting communication partners. NDN provides an inherent solution for this issue by using a location-independent naming scheme. Nevertheless, it currently does not provide a mechanism for push-based data transfer. In this paper, we investigate Persistent Interests as a solution for push-based communication. We improve and implement the idea of Persistent Interests, and study their applicability for conversational services in NDN. This is done by comparing different push- and pull-based approaches for Internet telephony.
|
[670] | Philipp Moll, Julian Janda, Hermann Hellwagner, Adaptive Forwarding of Persistent Interests in Named Data Networking, In Proceedings of the 4th ACM Conference on Information-Centric Networking (Thomas C Schmidt, Jan Seedorf, eds.), ACM, New York, NY, USA, pp. 180-181, 2017.
[bib][url] [doi] [pdf] [abstract]
Abstract: Persistent Interests (PIs) are a promising approach to introduce push-type traffic in Named Data Networking (NDN), in particular for conversational services such as voice and video calls. Forwarding decisions for PIs are crucial in NDN because they establish a long-lived path for the data flowing back toward the PI issuer. In the course of studying the use of PIs in NDN, we investigate adaptive PI forwarding and present a strategy combining regular NDN forwarding information and results from probing potential alternative paths through the network. Simulation results indicate that our adaptive PI forwarding approach is superior to the PI-adapted Best Route strategy when network conditions change due to link failures.
|
[669] | Andreas Leibetseder, Bernd Münzer, Klaus Schoeffmann, A Tool for Endometriosis Annotation in Endoscopic Videos, In IEEE International Symposium on Multimedia (ISM2017) (Kang-Ming Chang, Wen-Thong Chang, eds.), IEEE, Taichung, Taiwan, pp. 2, 2017.
[bib] [abstract]
Abstract: Considering physicians’ tremendously packed timetables, it comes as no surprise that they start managing even critical situations hastily in order to cope with the high demands placed on them. Apart from treating patients’ conditions, they are also required to perform time-consuming administrative tasks, including post-surgery video analyses. Concerning documentation of minimally invasive surgeries (MIS), specifically endoscopy, such processes usually involve repeatedly perusing lengthy, in the worst case uncut, recordings – a redundant task that nowadays can be optimized by using readily available technology: we present a tool for annotating endoscopic video frames targeting a specific use case – endometriosis, i.e., the dislocation of uterine-like tissue.
|
[668] | Andreas Leibetseder, Manfred Jürgen Primus, Stefan Petscharnig, Klaus Schoeffmann, Real-Time Image-based Smoke Detection in Endoscopic Videos, In Proceedings of the on Thematic Workshops of ACM Multimedia 2017 (Wanmin Wu, Jiancho Yag, Qi Tian, Roger Zimmermann, eds.), ACM, New York, NY, USA, pp. 296-304, 2017.
[bib][url] [doi] [abstract]
Abstract: The nature of endoscopy as a type of minimally invasive surgery (MIS) requires surgeons to perform complex operations by merely inspecting a live camera feed. Inherently, a successful intervention depends upon ensuring proper working conditions, such as skillful camera handling, adequate lighting and removal of confounding factors, such as fluids or smoke. The latter is an undesirable byproduct of cauterizing tissue and not only constitutes a health hazard for the medical staff as well as the treated patients, it can also considerably obstruct the operating physician's field of view. Therefore, as a standard procedure the gaseous matter is evacuated by using specialized smoke suction systems that typically are activated manually whenever considered appropriate. We argue that image-based smoke detection can be employed to automate this decision, while also being a useful indicator for relevant scenes in post-procedure analyses. This work continues previously conducted studies utilizing pre-trained convolutional neural networks (CNNs) and threshold-based saturation analysis. Specifically, we explore further methodologies for comparison and provide and evaluate a public dataset comprising over 100K smoke/non-smoke images extracted from the Cholec80 dataset, which is composed of 80 different cholecystectomy procedures. Having applied deep learning to merely 20K images of a custom dataset, we achieve Receiver Operating Characteristic (ROC) curves enclosing areas of over 0.98 for custom datasets and over 0.77 for the public dataset. Surprisingly, a fixed threshold for saturation-based histogram analysis still yields areas of over 0.78 and 0.75.
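A minimal sketch of such a threshold-based saturation analysis might look as follows; the pixel representation, thresholds, and scoring rule are illustrative assumptions, not the parameters used in the paper:

```python
def saturation(r, g, b):
    """HSV saturation of an 8-bit RGB pixel, as a value in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    return 0.0 if mx == 0 else 1.0 - mn / mx

def smoke_score(pixels, sat_threshold=0.25):
    """Fraction of pixels whose saturation falls below the threshold.

    Smoke desaturates the endoscopic image, so a high fraction of
    low-saturation pixels is taken here as an indicator of smoke.
    """
    low = sum(1 for (r, g, b) in pixels if saturation(r, g, b) < sat_threshold)
    return low / len(pixels)

# A grayish (desaturated) frame scores high, a colorful one low.
smoky = [(200, 198, 195)] * 100   # near-gray pixels
clear = [(180, 40, 30)] * 100     # reddish, tissue-like pixels
assert smoke_score(smoky) > 0.9
assert smoke_score(clear) < 0.1
```

A deployed detector would of course operate on the full saturation histogram of decoded video frames rather than on a hand-built pixel list.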
|
[667] | Andreas Leibetseder, Manfred Jürgen Primus, Stefan Petscharnig, Klaus Schoeffmann, Image-Based Smoke Detection in Laparoscopic Videos, In Computer Assisted and Robotic Endoscopy and Clinical Image-Based Procedures: 4th International Workshop, CARE 2017, and 6th International Workshop, CLIP 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, 2017, Proceedings (M Jorge Cardoso, Tal Arbel, Xiongbiao Luo, Stefan Wesarg, Tobias Reichl, Miguel Angel Gonzalez Ballester, Jonathan McLeod, Klaus Drechsler, Terry Peters, Marius Erdt, Kensaku Mori, Marius George Linguraru, Andreas Uhl, Cristina Oyarzun Laura, Raj Shekhar, eds.), Springer International Publishing, Cham, Schweiz, pp. 70-87, 2017.
[bib][url] [doi] [abstract]
Abstract: The development and improper removal of smoke during minimally invasive surgery (MIS) can considerably impede a patient's treatment, while additionally entailing serious deleterious health effects. Hence, state-of-the-art surgical procedures employ smoke evacuation systems, which often still are activated manually by the medical staff or less commonly operate automatically utilizing industrial, highly-specialized and operating room (OR) approved sensors. As an alternate approach, video analysis can be used to take on said detection process -- a topic not yet much researched in the aforementioned context. In order to advance in this sector, we propose utilizing an image-based smoke classification task on a pre-trained convolutional neural network (CNN). We provide a custom data set of over 30 000 laparoscopic smoke/non-smoke images, part of which served as training data for GoogLeNet-based CNN models [41]. To be able to compare our research for evaluation, we separately developed a non-CNN classifier based on observing the saturation channel of a sample picture in the HSV color space. While the deep learning approaches yield excellent results with Receiver Operating Characteristic (ROC) curves enclosing areas of over 0.98, the computationally much less costly analysis of an image's saturation histogram under certain circumstances can, surprisingly, also be a good indicator for smoke with areas under the curves (AUCs) of around 0.92--0.97.
|
[666] | Sabrina Kletz, Klaus Schoeffmann, Bernd Münzer, Manfred J Primus, Heinrich Husslein, Surgical Action Retrieval for Assisting Video Review of Laparoscopic Skills, In Proceedings of the First ACM Workshop on Educational and Knowledge Technologies (MultiEdTech 2017) (Qiong Li, Rainer Lienhart, Hao Hong Wang, eds.), ACM, Mountain View, California, USA, pp. 9, 2017.
[bib][url] [doi] [abstract]
Abstract: An increasing number of surgeons promote video review of laparoscopic surgeries for detection of technical errors at an early stage as well as for training purposes. The reason is that laparoscopic surgeries require specific psychomotor skills, which are difficult to learn and teach. The manual inspection of surgery video recordings is extremely cumbersome and time-consuming. Hence, there is a strong demand for automated video content analysis methods. In this work, we focus on retrieving surgical actions from video collections of gynecologic surgeries. We propose two novel dynamic content descriptors for similarity search and investigate a query-by-example approach to evaluate the descriptors on a manually annotated dataset consisting of 18 hours of video content. We compare several content descriptors, including dynamic descriptors capturing motion within the segments as well as descriptors containing only spatial information of keyframes of the segments. The evaluation shows that our proposed dynamic content descriptors, which consider motion and spatial information from the segment, achieve a better retrieval performance than static content descriptors that ignore temporal information altogether. The proposed content descriptors enable content-based video search for similar laparoscopic actions, which can be used to assist surgeons in evaluating laparoscopic surgical skills.
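To illustrate what separates a dynamic descriptor from a static one, the following toy descriptor pools per-pixel intensity changes over a whole segment; it is not the descriptor proposed in the paper, and the frame format and bin count are assumptions made for this sketch:

```python
def motion_energy_descriptor(frames, bins=4):
    """Toy dynamic descriptor: a normalized histogram of per-pixel
    intensity change between consecutive frames, pooled over the segment.

    frames: list of equally sized 2-D lists of grayscale values in [0, 255].
    A static keyframe descriptor, by contrast, would look at one frame only.
    """
    hist = [0] * bins
    for prev, curr in zip(frames, frames[1:]):
        for row_p, row_c in zip(prev, curr):
            for p, c in zip(row_p, row_c):
                delta = abs(c - p)                      # magnitude of change
                hist[min(delta * bins // 256, bins - 1)] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]                    # sums to 1

# A static segment puts all mass in the lowest bin ...
static = [[[10, 10], [10, 10]]] * 3
assert motion_energy_descriptor(static)[0] == 1.0
# ... while strong motion shifts the mass to the highest bin.
moving = [[[0, 0], [0, 0]], [[255, 255], [255, 255]]]
assert motion_energy_descriptor(moving)[3] == 1.0
```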
|
[665] | Matthias Janetschek, Radu Prodan, Shajulin Benedict, A Compiler Transformation-based Approach to Scientific Workflow Enactment, In Proceedings of the 12th Workshop on Workflows in Support of Large-Scale Science, ACM, pp. 1-12, 2017.
[bib][url] [doi] |
[664] | Wolfgang Hürst, Algernon Ip Vai Ching, Klaus Schoeffmann, Manfred Jürgen Primus, Storyboard-Based Video Browsing Using Color and Concept Indices, In MultiMedia Modeling: 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017, Proceedings, Part II (Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin’ichi Satoh, eds.), Springer International Publishing, Cham, pp. 480-485, 2017.
[bib] [abstract]
Abstract: We present an interface for interactive video browsing where users visually skim storyboard representations of the files in search for known items (known-item search tasks) and textually described subjects, objects, or events (ad-hoc search tasks). Individual segments of the video are represented as a color-sorted storyboard that can be addressed via a color index. Our storyboard representation is optimized for quick visual inspections considering results from our ongoing research. In addition, a concept-based search is used to filter out parts of the storyboard containing the related concept(s), thus complementing the human-based visual inspection with a semantic, content-based annotation.
|
[663] | Marco Hudelist, Klaus Schoeffmann, An Evaluation of Video Browsing on Tablets with the ThumbBrowser, In International Conference on Multimedia Modeling (Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin’ichi Satoh, eds.), Springer, Cham, pp. 89-100, 2017.
[bib] [doi] [abstract]
Abstract: We present an extension and evaluation of a novel interaction concept for video browsing on tablets. It can be argued that the best user experience for watching video on tablets is achieved when the device is held in landscape orientation. Most mobile video players ignore this fact and make the interaction unnecessarily hard when the tablet is held with both hands. Naturally, in this hand posture only the thumbs are available for interaction. Our ThumbBrowser interface takes this into account and combines it in its latest iteration with content analysis information as well as two different interaction methods. The interface was already introduced in a basic form in earlier work. In this paper, we report on extensions that we applied and show first evaluation results in comparison to standard video players. We are able to show that our video browser is superior in terms of search accuracy and user satisfaction.
|
[662] | Marco A Hudelist, Heinrich Husslein, Bernd Münzer, Klaus Schoeffmann, A Tool to Support Surgical Quality Assessment, In Proceedings of the Third IEEE International Conference on Multimedia Big Data (BigMM 2017) (Shu-Ching Chen, Philip Chen-Yu Sheu, eds.), IEEE, Laguna Hills, California, USA, pp. 2, 2017.
[bib][url] [doi] [abstract]
Abstract: In the domain of medical endoscopy an increasing number of surgeons nowadays store video recordings of their interventions in a huge video archive. Among some other purposes, the videos are used for post-hoc surgical quality assessment, since objective assessment of surgical procedures has been identified as essential component for improvement of surgical quality. Currently, such assessment is performed manually and for selected procedures only, since the amount of data and cumbersome interaction is very time-consuming. In the future, quality assessment should be carried out comprehensively and systematically by means of automated assessment algorithms. In this demo paper, we present a tool that supports human assessors in collecting manual annotations and therefore should help them to deal with the huge amount of visual data more efficiently. These annotations will be analyzed and used as training data in the future.
|
[661] | X Zhu, S Mao, M Hassan Hassan, Hermann Hellwagner, Guest Editorial: Video Over Future Networks, In IEEE Transactions on Multimedia, IEEE, vol. 19, no. 10, Piscataway, NJ, pp. 2133 - 2135, 2017.
[bib][url] [doi] [abstract]
Abstract: The papers in this special issue focus on the deployment of video over future networks. The past decade has seen how major improvements in broadband and mobile networks have led to widespread popularity of video streaming applications, and how the latter has now become the major driving force behind exponentially growing Internet traffic. This special issue seeks to investigate these future Internet technologies through the prism of their most prevalent application, that of video communications.
|
[660] | Mario Graf, Christian Timmerer, Christopher Mueller, Towards Bandwidth Efficient Adaptive Streaming of Omnidirectional Video over HTTP: Design, Implementation, and Evaluation, In Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys'17) (Kuan-Ta Chen, ed.), ACM, New York, NY, USA, pp. 11, 2017.
[bib] [pdf] [abstract]
Abstract: Real-time entertainment services such as streaming audio-visual content deployed over the open, unmanaged Internet now account for more than 70% of the traffic during peak periods. More and more bandwidth-hungry applications and services are proposed, like immersive media services such as virtual reality and, specifically, omnidirectional/360-degree videos. The adaptive streaming of omnidirectional video over HTTP imposes an important challenge on today’s video delivery infrastructures, which calls for dedicated, thoroughly designed techniques for content generation, delivery, and consumption. This paper describes the usage of tiles — as specified within modern video codecs such as HEVC/H.265 and VP9 — enabling bandwidth-efficient adaptive streaming of omnidirectional video over HTTP, and we define various streaming strategies. Furthermore, the parameters and characteristics of a dataset for omnidirectional video are proposed and exemplarily instantiated to evaluate various aspects of such an ecosystem, namely bitrate overhead, bandwidth requirements, and quality aspects in terms of viewport PSNR. The results indicate bitrate savings from 40% (in a realistic scenario with recorded head movements from real users) up to 65% (in an ideal scenario with a centered/fixed viewport) and serve as a baseline and guidelines for advanced techniques, including the outline of a research roadmap for the near future.
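The core of any tile-based streaming strategy is deciding which tiles a viewport overlaps, so that only those are fetched at high quality. A sketch of that selection step is given below; the grid layout, resolutions, and the ignored wrap-around at the 360-degree seam are simplifying assumptions of this example, not details from the paper:

```python
def tiles_in_viewport(vp_x, vp_y, vp_w, vp_h,
                      grid_cols, grid_rows, frame_w, frame_h):
    """Indices (col, row) of grid tiles overlapped by a rectangular viewport.

    The frame is split into grid_cols x grid_rows equally sized tiles; the
    viewport is given in pixel coordinates. Wrap-around at the 360° seam
    is ignored in this sketch.
    """
    tile_w = frame_w / grid_cols
    tile_h = frame_h / grid_rows
    c0 = int(vp_x // tile_w)                 # leftmost overlapped column
    c1 = int((vp_x + vp_w - 1) // tile_w)    # rightmost overlapped column
    r0 = int(vp_y // tile_h)                 # topmost overlapped row
    r1 = int((vp_y + vp_h - 1) // tile_h)    # bottommost overlapped row
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# 3840x1920 equirectangular frame in a 6x4 tile grid: a 960x960 viewport
# at (640, 480) touches columns 1-2 and rows 1-2.
tiles = tiles_in_viewport(640, 480, 960, 960, 6, 4, 3840, 1920)
# → [(1, 1), (2, 1), (1, 2), (2, 2)]
```

A streaming client would then request these tiles at a high bitrate and all remaining tiles at the lowest available representation.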
|
[659] | Darragh Egan, Conor Keighrey, John Barrett, Yuansong Qiao, Sean Brennan, Christian Timmerer, Niall Murray, Subjective Evaluation of an Olfaction Enhanced Immersive Virtual Reality Environment, In Proceedings of the 2nd International Workshop on Multimedia Alternate Realities (Teresa Chambel, Rene Kaiser, Omar Aziz Niamur, Wei Tsang Ooi, eds.), ACM, New York, NY, USA, pp. 15-18, 2017.
[bib][url] [doi] [pdf] [abstract]
Abstract: Recent research efforts have reported findings on user Quality of Experience (QoE) of immersive virtual reality (VR) experiences. Truly immersive multimedia experiences also include multisensory components such as olfactory, tactile, etc., in addition to audiovisual stimuli. In this context, this paper reports the results of a user QoE study of an olfaction-enhanced immersive VR environment. The results presented compare the user QoE between two groups (VR vs VR + Olfaction) and consider how the addition of olfaction affected user QoE levels (considering sense of enjoyment, immersion and discomfort). Self-reported measures via post-test questionnaire (10 questions) only revealed one statistically significant difference between the groups, in terms of how users felt with respect to their senses being stimulated. The presence of olfaction in the VR environment did not have a statistically significant effect in terms of user levels of enjoyment, immersion and discomfort.
|
[658] | Kirill Borodulin, Gleb Radchenko, Aleksandr Shestakov, Leonid Sokolinsky, Andrey Tchernykh, Radu Prodan, Towards Digital Twins Cloud Platform: Microservices and Computational Workflows to Rule a Smart Factory, In 2017 IEEE/ACM 10th International Conference on Utility and Cloud Computing, ACM, pp. 209-210, 2017.
[bib] |
[657] | Christian Beecks, Sabrina Kletz, Klaus Schoeffmann, Large-Scale Endoscopic Image and Video Linking with Gradient-Based Signatures, In Proceedings of the Third IEEE International Conference on Multimedia Big Data (BigMM 2017) (Shu-Ching Chen, Philip Chen-Yu Sheu, eds.), IEEE, Laguna Hills, California, USA, pp. 5, 2017.
[bib][url] [doi] [abstract]
Abstract: Given a large-scale video archive of surgical interventions and a medical image showing a specific moment of an operation, how can the most image-related videos be found efficiently without utilizing additional semantic characteristics? In this paper, we investigate a novel content-based approach of linking medical images with relevant video segments arising from endoscopic procedures. We propose to approximate the video segments' content-based features by gradient-based signatures and to index these signatures with the Minkowski distance in order to determine the most query-like video segments efficiently. We benchmark our approach on a large endoscopic image and video archive and show that our approach achieves a significant improvement in efficiency in comparison to the state-of-the-art while maintaining high accuracy.
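The ranking step, comparing a query signature against segment signatures under the Minkowski distance, can be sketched as follows; the two-dimensional signatures are illustrative only, as real gradient-based signatures are much higher-dimensional and would be served by an index rather than a linear scan:

```python
def minkowski(x, y, p=2.0):
    """Minkowski distance of order p between two equal-length signatures.

    p=1 gives the Manhattan distance, p=2 the Euclidean distance.
    """
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def rank_segments(query_sig, segment_sigs, p=2.0, k=3):
    """Indices of the k video segments whose signatures are closest
    to the query signature under the Minkowski distance."""
    order = sorted(range(len(segment_sigs)),
                   key=lambda i: minkowski(query_sig, segment_sigs[i], p))
    return order[:k]

# Toy archive of four segment signatures; the query is most similar
# to segments 0 and 2.
sigs = [[0.1, 0.9], [0.8, 0.2], [0.15, 0.85], [0.5, 0.5]]
assert rank_segments([0.12, 0.88], sigs, k=2) == [0, 2]
```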
|
[656] | Konstantin Pogorelov, Kristin Ranheim Randel, Thomas de Lange, Sigrun L. Eskeland, Carsten Griwodz, Concetto Spampinato, Mario Taschwer, Mathias Lux, Peter T. Schmidt, Michael Riegler, Pal Halvorsen, Nerthus: A Bowel Preparation Quality Video Dataset, In Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys 2017) (Kuan-Ta Chen, Pablo Cesar, Cheng-Hsin Hsu, eds.), Association for Computing Machinery (ACM), pp. 170-174, 2017.
[bib][url] [doi] [abstract]
Abstract: Bowel preparation (cleansing) is considered to be a key precondition for successful colonoscopy (endoscopic examination of the bowel). The degree of bowel cleansing directly affects the possibility to detect diseases and may influence decisions on screening and follow-up examination intervals. An accurate assessment of bowel preparation quality is therefore important. Despite the use of reliable and validated bowel preparation scales, the grading may vary from one doctor to another. An objective and automated assessment of bowel cleansing would contribute to reducing such inequalities and optimize use of medical resources. This would also be a valuable feature for automatic endoscopy reporting in the future. In this paper, we present Nerthus, a dataset containing videos from inside the gastrointestinal (GI) tract, showing different degrees of bowel cleansing. By providing this dataset, we invite multimedia researchers to contribute to the medical field by making systems automatically evaluate the quality of bowel cleansing for colonoscopy. Such innovations would likely contribute to improving the medical field of GI endoscopy.
|
[655] | Christian Timmerer, The Future of Multimedia on the Internet, In Computing Now, IEEE Computer Society [online], Los Alamitos, CA, USA, pp. 1, 2016.
[bib][url] |
[654] | Christian Timmerer, Daniel Weinberger, Martin Smole, Reinhard Grandl, Christopher Mueller, Stefan Lederer, Live Transcoding and Streaming-as-a-Service with Low Delay and High QoE, In 2016 NAB Broadcast Engineering Conference Proceedings & CD (not available, ed.), National Association of Broadcasters (NAB), Washington DC, USA, pp. 4, 2016.
[bib] [pdf] |
[653] | Marco A Hudelist, Claudiu Cobârzan, Christian Beecks, Rob van de Werken, Sabrina Kletz, Wolfgang Hürst, Klaus Schoeffmann, Collaborative Video Search Combining Video Retrieval with Human-Based Visual Inspection, In International Conference on Multimedia Modeling (Qi Tian, Nicu Sebe, Guo-Jun Qi, Benoit Huet, Richang Hong, Xueliang Liu, eds.), Springer International Publishing, Cham, Switzerland, pp. 400-405, 2016.
[bib] |