[19] | Stefan Petscharnig, Klaus Schöffmann, ActionVis: An Explorative Tool to Visualize Surgical Actions in Gynecologic Laparoscopy, In International Conference on Multimedia Modeling (editors not yet available), Springer, Cham, Switzerland, pp. 1-5, 2018.
[bib][url] [doi] [abstract]
Abstract: Appropriate visualization of endoscopic surgery recordings has huge potential to benefit surgical work life. For example, it enables surgeons to quickly browse medical interventions for purposes of documentation, medical research, discussion with colleagues, and training of young surgeons. Current literature on automatic action recognition for endoscopic surgery covers domains where surgeries follow a standardized pattern, such as cholecystectomy. However, there is a lack of support in domains where such standardization is not possible, such as gynecologic laparoscopy. We present ActionVis, an interactive tool enabling surgeons to quickly browse endoscopic recordings. Our tool analyzes the results of post-processing the recorded surgery. Information on individual frames is aggregated temporally into a set of scenes representing frequent surgical actions in gynecologic laparoscopy, which helps surgeons navigate within endoscopic recordings in this domain.
|
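The temporal aggregation described in the abstract above can be pictured as grouping consecutive frame-level action labels into scenes. A minimal sketch of such run-length grouping, assuming per-frame labels are already available; the label names, frame rate, and minimum scene length below are illustrative assumptions, not taken from the paper:

```python
from itertools import groupby

def aggregate_frames_to_scenes(frame_labels, fps=25, min_len_s=2.0):
    """Group consecutive identical frame-level action labels into scenes.

    frame_labels: list of per-frame action labels (one entry per frame).
    Returns a list of (label, start_s, end_s) tuples, dropping scenes
    shorter than min_len_s seconds.
    """
    scenes, frame_idx = [], 0
    for label, run in groupby(frame_labels):
        length = sum(1 for _ in run)
        start_s, end_s = frame_idx / fps, (frame_idx + length) / fps
        if end_s - start_s >= min_len_s:
            scenes.append((label, start_s, end_s))
        frame_idx += length
    return scenes

# Toy example: the short 'idle' run is filtered out by the length threshold.
labels = ["cutting"] * 120 + ["idle"] * 10 + ["suturing"] * 200
print(aggregate_frames_to_scenes(labels))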
[18] | Klaus Schoeffmann, Heinrich Husslein, Sabrina Kletz, Stefan Petscharnig, Bernd Münzer, Christian Beecks, Video Retrieval in Laparoscopic Video Recordings with Dynamic Content Descriptors, In Multimedia Tools and Applications, Springer US, USA, pp. 18, 2017.
[bib] |
[17] | Klaus Schoeffmann, Manfred Jürgen Primus, Bernd Muenzer, Stefan Petscharnig, Christoph Karisch, Qing Xu, Wolfgang Huerst, Collaborative Feature Maps for Interactive Video Search, In MultiMedia Modeling: 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017, Proceedings, Part II (Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin’ichi Satoh, eds.), Springer International Publishing, Cham, pp. 457-462, 2017.
[bib][url] [doi] [abstract]
Abstract: This extended demo paper summarizes our interface used for the Video Browser Showdown (VBS) 2017 competition, where visual and textual known-item search (KIS) tasks, as well as ad-hoc video search (AVS) tasks, must be solved interactively in a 600-h video archive. To this end, we propose a very flexible distributed video search system that combines many ideas of related work in a novel and collaborative way, such that several users can work together and explore the video archive in a complementary manner. The main interface is a perspective Feature Map, which shows keyframes of shots arranged according to a selected content similarity feature (e.g., color, motion, semantic concepts, etc.). This Feature Map is accompanied by additional views, which allow users to search and filter according to a particular content feature. For collaboration among several users, we provide a cooperative heatmap that shows a synchronized view of the inspection actions of all users. Moreover, we use collaborative re-ranking of shots (in specific views) based on the retrieved results of other users.
|
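The arrangement of keyframes by content similarity can be approximated by projecting feature vectors to 2D and snapping them to a grid. A rough sketch under that assumption; the feature vectors, PCA projection, and grid size are stand-ins, not the paper's actual layout method:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.normal(size=(64, 96))   # 64 keyframes, 96-dim color features (toy data)

# Project the high-dimensional features to 2D so similar frames land close together.
xy = PCA(n_components=2).fit_transform(features)

# Normalize to [0, 1] and snap to an 8x8 grid of map cells.
xy = (xy - xy.min(axis=0)) / (xy.max(axis=0) - xy.min(axis=0))
cells = np.floor(xy * 7).astype(int)   # cell coordinates in 0..7
for frame_id, (cx, cy) in enumerate(cells[:5]):
    print(f"keyframe {frame_id} -> map cell ({cx}, {cy})")
```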
[16] | Manfred Jürgen Primus, Bernd Münzer, Klaus Schoeffmann, ITEC-UNIKLU Ad-Hoc Video Search Submission 2017, In Proceedings of TRECVID 2017 (George Awad, Asad Butt, Jonathan Fiscus, David Joy, Andrew Delgado, Martial Michel, Alan Smeaton, Yvette Graham, Wessel Kraaij, Georges Quénot, Maria Eskevich, Roeland Ordelman, Gareth Jones, Benoit Huet, eds.), NIST, USA, NIST, Gaithersburg, MD, USA, pp. 10, 2017.
[bib] [abstract]
Abstract: This paper describes our approach used for the fully automatic and manually assisted Ad-hoc Video Search (AVS) task of TRECVID 2017. We focus on the combination of different convolutional neural network models and query optimization. Each of these models focuses on a specific query part, such as locations, objects, or the wide-ranging ImageNet classes. All classification results are collected, in different combinations, in Lucene indexes. For the manually assisted run we use a junk filter and different query optimization methods.
|
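The general pattern of storing per-shot classification results and answering textual queries can be sketched with a plain inverted index standing in for Lucene. Everything below (shot IDs, concept names, the additive scoring) is illustrative, not the submission's actual ranking function:

```python
from collections import defaultdict

# Inverted index: concept label -> {shot_id: classifier confidence}.
index = defaultdict(dict)

def add_shot(shot_id, concept_scores):
    """Store the CNN classification results of one shot."""
    for concept, score in concept_scores.items():
        index[concept][shot_id] = score

def query(concepts, top_k=5):
    """Rank shots by the summed confidence over all query concepts."""
    totals = defaultdict(float)
    for concept in concepts:
        for shot_id, score in index.get(concept, {}).items():
            totals[shot_id] += score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

add_shot("v1_s3", {"kitchen": 0.9, "person": 0.7})
add_shot("v2_s8", {"kitchen": 0.4, "dog": 0.8})
print(query(["kitchen", "person"]))   # v1_s3 ranks first
```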
[15] | Stefan Petscharnig, Semi-Automatic Retrieval of Relevant Segments from Laparoscopic Surgery Videos, In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval (Bogdan Ionescu, Nicu Sebe, eds.), ACM, New York, NY, USA, pp. 484-488, 2017.
[bib][url] [doi] [abstract]
Abstract: Over the last decades, progress in medical and imaging technology has enabled the technique of minimally invasive surgery. In addition, multimedia technologies allow for retrospective analyses of surgeries. The accumulated videos and images speed up documentation, ease medical case assessment across surgeons, support the training of young surgeons, and find use in medical research. For a surgery comprising hours of routine work, surgeons only need to see short video segments of interest to assess a case. However, surgeons have neither the time nor the resources to manually extract video sequences of their surgeries from their large multimedia databases. This thesis deals with the question of how to semantically classify video frames into different concepts of surgical actions and anatomical structures using convolutional neural networks (CNNs). To achieve this goal, the capabilities of predefined CNN architectures and transfer learning in the laparoscopic video domain are investigated. The results are expected to improve with domain-specific adaptation of the CNN input layers, i.e., by fusing the image with motion and relevance information. Finally, the thesis investigates to what extent surgeons' needs are covered by the proposed extraction of relevant scenes.
|
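Transfer learning with a predefined architecture, as investigated in the thesis, typically means replacing the classifier head of an ImageNet-pretrained model and fine-tuning it on the new domain. A minimal PyTorch sketch of that idea; the class count, frozen layers, and hyperparameters are assumptions, not the thesis setup:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 14  # e.g., surgical actions plus anatomical structures (assumed)

# Load AlexNet pre-trained on ImageNet (torchvision >= 0.13 weights API).
model = models.alexnet(weights="IMAGENET1K_V1")

# Replace the last fully connected layer with a head for our classes.
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Optionally freeze the convolutional features and train only the classifier.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(float(loss))
```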
[14] | Stefan Petscharnig, Mathias Lux, Savvas Chatzichristofis, Dimensionality Reduction for Image Features using Deep Learning and Autoencoders, In 15th International Workshop on Content-Based Multimedia Indexing (Marco Bertini, ed.), ACM, New York, USA, 2017.
[bib][url] [doi] [abstract]
Abstract: The field of similarity-based image retrieval has experienced a game changer lately. Hand-crafted image features have been vastly outperformed by machine learning based approaches. Deep learning methods are very good at finding optimal features for a domain, given that enough data is available to learn from. However, hand-crafted features are still a means to an end in domains where the data is not freely available, e.g., because it violates privacy or raises commercial concerns, or where it cannot be transmitted, e.g., due to bandwidth limitations. Moreover, we have to rely on hand-crafted methods whenever neural networks cannot be trained effectively, e.g., if there is not enough training data. In this paper, we investigate a particular approach that combines hand-crafted features and deep learning to (i) achieve early fusion of off-the-shelf hand-crafted global image features and (ii) reduce the overall number of dimensions, thereby combining both worlds. This method allows for fast image retrieval in domains where training data is sparse.
|
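The core idea of compressing fused hand-crafted global features with an autoencoder can be sketched as follows. The dimensions, architecture, and training loop are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

IN_DIM, CODE_DIM = 576, 64   # fused global descriptors -> 64-dim code (assumed sizes)

class FeatureAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(IN_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, CODE_DIM))
        self.decoder = nn.Sequential(nn.Linear(CODE_DIM, 256), nn.ReLU(),
                                     nn.Linear(256, IN_DIM))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FeatureAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train the network to reconstruct the hand-crafted feature vectors.
batch = torch.randn(32, IN_DIM)          # stand-in for real descriptors
loss = nn.functional.mse_loss(model(batch), batch)
loss.backward()
optimizer.step()

# At retrieval time, only the low-dimensional codes are compared.
codes = model.encoder(batch)
print(codes.shape)   # torch.Size([32, 64])
```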
[13] | Stefan Petscharnig, Klaus Schoeffmann, Deep Learning of Shot Classification in Gynecologic Surgery Videos, In International Conference on Multimedia Modeling (Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin’ichi Satoh, eds.), Springer, Cham, pp. 702-713, 2017.
[bib][url] [abstract]
Abstract: In the last decade, advances in endoscopic surgery have resulted in vast amounts of video data, which is used for documentation, analysis, and education purposes. In order to find video scenes relevant for the aforementioned purposes, physicians manually search and annotate hours of endoscopic surgery videos. This process is tedious and time-consuming, thus motivating the (semi-)automatic annotation of such surgery videos. In this work, we investigate whether the single-frame model for semantic surgery shot classification is feasible and useful in practice. We approach this problem by further training AlexNet, an already pre-trained CNN architecture. Thus, we are able to transfer knowledge gathered from the ImageNet database to the medical use case of shot classification in endoscopic surgery videos. We annotate hours of endoscopic surgery videos for training and testing data. Our results imply that the CNN-based single-frame classification approach is able to provide useful suggestions to medical experts while they annotate video scenes, and the annotation process is consequently improved. Future work shall consider the evaluation of more sophisticated classification methods incorporating the temporal video dimension, which is expected to improve on the baseline evaluation done in this work.
|
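Under the single-frame model above, a label for a whole shot can be obtained by classifying sampled frames independently and aggregating the predictions. A small sketch using a majority vote, which is a common aggregation choice but not necessarily the paper's:

```python
from collections import Counter

def classify_shot(frame_predictions):
    """Majority vote over per-frame class predictions of one shot."""
    votes = Counter(frame_predictions)
    label, count = votes.most_common(1)[0]
    return label, count / len(frame_predictions)   # label plus its vote share

preds = ["suturing", "suturing", "cutting", "suturing", "irrigation"]
print(classify_shot(preds))   # ('suturing', 0.6)
```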
[12] | Stefan Petscharnig, Klaus Schoeffmann, Mathias Lux, An Inception-like CNN Architecture for GI Disease and Anatomical Landmark Classification, In Working Notes Proceedings of the MediaEval 2017 Workshop (Guillaume Gravier, Benjamin Bischke, Claire-Hélène Demarty, Maia Zaharieva, Michael Riegler, Emmanuel Dellandrea, Dmitry Bogdanov, Richard Sutcliffe, Gareth Jones, Martha Larson, eds.), CEUR-WS, Vol-1984, pp. 1-3, 2017.
[bib][url] [abstract]
Abstract: In this working note, we describe our approach to gastrointestinal disease and anatomical landmark classification for the Medico task at MediaEval 2017. We propose an inception-like CNN architecture and a fixed-crop data augmentation scheme for training and testing. The architecture is based on GoogLeNet and designed to keep the number of trainable parameters and the computational overhead small. Preliminary experiments show that the architecture is able to learn the classification problem from scratch using only a tiny fraction of the provided training data.
|
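A fixed-crop augmentation scheme, in contrast to random cropping, draws crops from predetermined positions such as the four corners and the center. An illustrative NumPy sketch; the crop size and positions are assumptions, not the scheme from the working note:

```python
import numpy as np

def fixed_crops(image, size=224):
    """Return five fixed crops: four corners and the center."""
    h, w = image.shape[:2]
    positions = [(0, 0), (0, w - size), (h - size, 0),
                 (h - size, w - size), ((h - size) // 2, (w - size) // 2)]
    return [image[y:y + size, x:x + size] for y, x in positions]

image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in endoscopic frame
crops = fixed_crops(image)
print([c.shape for c in crops])   # five (224, 224, 3) crops
```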
[11] | Stefan Petscharnig, Klaus Schoeffmann, Learning laparoscopic video shot classification for gynecological surgery, In Multimedia Tools and Applications, Springer, Berlin, Heidelberg, New York, pp. 1-19, 2017.
[bib][url] [doi] [abstract]
Abstract: Videos of endoscopic surgery are used for education of medical experts, analysis in medical research, and documentation for everyday clinical life. Hand-crafted image descriptors lack the capabilities of a semantic classification of surgical actions and video shots of anatomical structures. In this work, we investigate how well single-frame convolutional neural networks (CNNs) perform semantic shot classification in gynecologic surgery. Together with medical experts, we manually annotate hours of raw endoscopic gynecologic surgery videos showing endometriosis treatment and myoma resection of over 100 patients. The cleaned ground truth dataset comprises 9 h of annotated video material (from 111 different recordings). We use the well-known CNN architectures AlexNet and GoogLeNet and train these architectures from scratch for both surgical actions and anatomy. Furthermore, we extract high-level features from AlexNet with weights from a pre-trained model from the Caffe model zoo and feed them to an SVM classifier. Our evaluation shows that we reach an average recall of .697 and .515 for classification of anatomical structures and surgical actions, respectively, using off-the-shelf CNN features. Using GoogLeNet, we achieve a mean recall of .782 and .617, and with AlexNet .615 and .469, for anatomical structures and surgical actions, respectively. The main conclusion of our work is that advances in general image classification methods transfer to the domain of endoscopic surgery videos in gynecology. This is relevant as this domain is different from natural images, e.g., it is distinguished by smoke, reflections, or a limited range of colors.
|
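The off-the-shelf baseline above pairs pre-trained CNN activations with an SVM, and the reported numbers are mean (macro-averaged) recall. A compact scikit-learn sketch with synthetic features standing in for the AlexNet activations; dimensions and class count are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4096))     # stand-in for fc7 activations
y = rng.integers(0, 8, size=600)     # 8 anatomy classes (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="linear").fit(X_tr, y_tr)

# "Mean recall" here means recall per class, averaged over the classes.
mean_recall = recall_score(y_te, clf.predict(X_te), average="macro")
print(f"mean recall: {mean_recall:.3f}")
```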
[10] | Bernd Münzer, Manfred Jürgen Primus, Sabrina Kletz, Stefan Petscharnig, Klaus Schoeffmann, Static vs. Dynamic Content Descriptors for Video Retrieval in Laparoscopy, In IEEE International Symposium on Multimedia (ISM2017) (Kang-Ming Chang, Wen-Thong Chang, eds.), IEEE, Taichung, Taiwan, pp. 8, 2017.
[bib] [abstract]
Abstract: The domain of minimally invasive surgery has recently attracted attention from the multimedia community due to the fact that systematic video documentation is on the rise in this medical field. The vastly growing volumes of video archives demand effective and efficient techniques to retrieve specific information from large video collections with visually very homogeneous content. One specific challenge in this context is to retrieve scenes showing similar surgical actions, i.e., similarity search. Although this task has a high and constantly growing relevance for surgeons and other health professionals, it has rarely been investigated in the literature so far for this particular domain. In this paper, we propose and evaluate a number of both static and dynamic content descriptors for this purpose. The former only take into account individual images, while the latter consider the motion within a scene. Our experimental results show that although static descriptors achieve the highest overall performance, dynamic descriptors are much more discriminative for certain classes of surgical actions. We conclude that the two approaches have complementary strengths and further research should investigate methods to combine them.
|
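A dynamic content descriptor in the sense of the abstract summarizes the motion within a scene, for example as a histogram of optical-flow directions weighted by flow magnitude. An OpenCV sketch of that idea; the bin count and Farneback parameters are assumptions, not the paper's descriptors:

```python
import cv2
import numpy as np

def motion_histogram(frames, bins=8):
    """Accumulate a direction histogram of dense optical flow over a scene."""
    hist = np.zeros(bins)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
        hist += h
        prev = cur
    return hist / (hist.sum() + 1e-9)    # normalize so scenes are comparable

frames = [np.random.randint(0, 255, (120, 160, 3), np.uint8) for _ in range(5)]
print(motion_histogram(frames))
```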
[9] | Andreas Leibetseder, Bernd Münzer, Klaus Schoeffmann, A Tool for Endometriosis Annotation in Endoscopic Videos, In IEEE International Symposium on Multimedia (ISM2017) (Kang-Ming Chang, Wen-Thong Chang, eds.), IEEE, Taichung, Taiwan, pp. 2, 2017.
[bib] [abstract]
Abstract: Considering physicians' tremendously packed timetables, it comes as no surprise that they manage even critical situations hastily in order to cope with the high demands placed on them. Apart from treating patients' conditions, they are also required to perform time-consuming administrative tasks, including post-surgery video analyses. Concerning documentation of minimally invasive surgeries (MIS), specifically endoscopy, such processes usually involve repeatedly perusing lengthy, in the worst case uncut, recordings – a redundant task that nowadays can be optimized with readily available technology. We present a tool for annotating endoscopic video frames targeting a specific use case: endometriosis, i.e., the dislocation of uterine-like tissue.
|
[8] | Andreas Leibetseder, Manfred Jürgen Primus, Stefan Petscharnig, Klaus Schoeffmann, Real-Time Image-based Smoke Detection in Endoscopic Videos, In Proceedings of the Thematic Workshops of ACM Multimedia 2017 (Wanmin Wu, Jianchao Yang, Qi Tian, Roger Zimmermann, eds.), ACM, New York, NY, USA, pp. 296-304, 2017.
[bib][url] [doi] [abstract]
Abstract: The nature of endoscopy as a type of minimally invasive surgery (MIS) requires surgeons to perform complex operations by merely inspecting a live camera feed. Inherently, a successful intervention depends upon ensuring proper working conditions, such as skillful camera handling, adequate lighting, and removal of confounding factors such as fluids or smoke. The latter is an undesirable byproduct of cauterizing tissue and not only constitutes a health hazard for the medical staff as well as the treated patients, but can also considerably obstruct the operating physician's field of view. Therefore, as a standard procedure the gaseous matter is evacuated using specialized smoke suction systems that typically are activated manually whenever considered appropriate. We argue that image-based smoke detection can be employed to make such a decision, while also serving as a useful indicator for relevant scenes in post-procedure analyses. This work continues previously conducted studies utilizing pre-trained convolutional neural networks (CNNs) and threshold-based saturation analysis. Specifically, we explore further methodologies for comparison, and we provide and evaluate a public dataset comprising over 100K smoke/non-smoke images extracted from the Cholec80 dataset, which is composed of 80 different cholecystectomy procedures. Having applied deep learning to merely 20K images of a custom dataset, we achieve Receiver Operating Characteristic (ROC) curves enclosing areas of over 0.98 for custom datasets and over 0.77 for the public dataset. Surprisingly, a fixed threshold for saturation-based histogram analysis still yields areas of over 0.78 and 0.75.
|
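The ROC areas quoted in the abstract come from scoring each image with a smoke probability and sweeping a decision threshold. Evaluating any such scorer is straightforward with scikit-learn; the labels and scores below are synthetic stand-ins:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)            # 1 = smoke, 0 = non-smoke
# Synthetic classifier scores that weakly correlate with the labels.
y_score = y_true * 0.4 + rng.random(1000) * 0.6

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points on the ROC curve
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```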
[7] | Andreas Leibetseder, Manfred Jürgen Primus, Stefan Petscharnig, Klaus Schoeffmann, Image-Based Smoke Detection in Laparoscopic Videos, In Computer Assisted and Robotic Endoscopy and Clinical Image-Based Procedures: 4th International Workshop, CARE 2017, and 6th International Workshop, CLIP 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, 2017, Proceedings (M Jorge Cardoso, Tal Arbel, Xiongbiao Luo, Stefan Wesarg, Tobias Reichl, Miguel Angel Gonzalez Ballester, Jonathan McLeod, Klaus Drechsler, Terry Peters, Marius Erdt, Kensaku Mori, Marius George Linguraru, Andreas Uhl, Cristina Oyarzun Laura, Raj Shekhar, eds.), Springer International Publishing, Cham, Switzerland, pp. 70-87, 2017.
[bib][url] [doi] [abstract]
Abstract: The development and improper removal of smoke during minimally invasive surgery (MIS) can considerably impede a patient's treatment, while additionally entailing serious deleterious health effects. Hence, state-of-the-art surgical procedures employ smoke evacuation systems, which often are still activated manually by the medical staff or, less commonly, operate automatically utilizing industrial, highly specialized, and operating room (OR) approved sensors. As an alternative approach, video analysis can be used to take on said detection process – a topic not yet much researched in the aforementioned context. In order to advance in this sector, we propose utilizing an image-based smoke classification task on a pre-trained convolutional neural network (CNN). We provide a custom dataset of over 30,000 laparoscopic smoke/non-smoke images, part of which served as training data for GoogLeNet-based [41] CNN models. For comparison, we separately developed a non-CNN classifier based on observing the saturation channel of a sample picture in the HSV color space. While the deep learning approaches yield excellent results with Receiver Operating Characteristic (ROC) curves enclosing areas of over 0.98, the computationally much less costly analysis of an image's saturation histogram can, under certain circumstances, surprisingly also be a good indicator for smoke, with areas under the curves (AUCs) of around 0.92–0.97.
|
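The saturation-based baseline rests on the observation that smoke desaturates the image. A minimal sketch of that idea; the paper analyzes the saturation histogram, whereas the mean-saturation statistic and the threshold below are simplified, illustrative assumptions:

```python
import cv2
import numpy as np

def smoke_score(bgr_image):
    """Low mean saturation suggests smoke; returns a score in [0, 1]."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    saturation = hsv[..., 1].astype(np.float32) / 255.0
    return 1.0 - float(saturation.mean())   # higher = more smoke-like

def is_smoke(bgr_image, threshold=0.7):     # threshold is an assumption
    return smoke_score(bgr_image) >= threshold

frame = np.full((120, 160, 3), 200, np.uint8)   # pale gray, desaturated frame
print(smoke_score(frame), is_smoke(frame))
```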
[6] | Sabrina Kletz, Klaus Schoeffmann, Bernd Münzer, Manfred J Primus, Heinrich Husslein, Surgical Action Retrieval for Assisting Video Review of Laparoscopic Skills, In Proceedings of the First ACM Workshop on Educational and Knowledge Technologies (MultiEdTech 2017) (Qiong Li, Rainer Lienhart, Hao Hong Wang, eds.), ACM, Mountain View, California, USA, pp. 9, 2017.
[bib][url] [doi] [abstract]
Abstract: An increasing number of surgeons promote video review of laparoscopic surgeries for detection of technical errors at an early stage as well as for training purposes. The reason is that laparoscopic surgeries require specific psychomotor skills, which are difficult to learn and teach. The manual inspection of surgery video recordings is extremely cumbersome and time-consuming; hence, there is a strong demand for automated video content analysis methods. In this work, we focus on retrieving surgical actions from video collections of gynecologic surgeries. We propose two novel dynamic content descriptors for similarity search and investigate a query-by-example approach to evaluate the descriptors on a manually annotated dataset consisting of 18 hours of video content. We compare several content descriptors, including dynamic descriptors capturing motion within the segments as well as descriptors containing only spatial information of the segments' keyframes. The evaluation shows that our proposed dynamic content descriptors, which consider both motion and spatial information of a segment, achieve a better retrieval performance than static content descriptors that ignore temporal information entirely. The content descriptors proposed in this work enable content-based video search for similar laparoscopic actions, which can be used to assist surgeons in evaluating laparoscopic surgical skills.
|
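Query-by-example retrieval over such descriptors reduces to ranking all stored segments by their similarity to the query segment's descriptor. A sketch using cosine similarity and toy vectors; the similarity measure is a common choice, not necessarily the one evaluated in the paper:

```python
import numpy as np

def rank_by_example(query_desc, segment_descs, top_k=3):
    """Rank segment descriptors by cosine similarity to the query descriptor."""
    q = query_desc / np.linalg.norm(query_desc)
    ranked = []
    for seg_id, desc in segment_descs.items():
        sim = float(np.dot(q, desc / np.linalg.norm(desc)))
        ranked.append((seg_id, sim))
    return sorted(ranked, key=lambda kv: kv[1], reverse=True)[:top_k]

rng = np.random.default_rng(0)
segments = {f"seg_{i}": rng.random(32) for i in range(10)}   # toy descriptors
print(rank_by_example(segments["seg_0"], segments))          # seg_0 ranks first
```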
[5] | Wolfgang Hürst, Algernon Ip Vai Ching, Klaus Schoeffmann, Manfred Juergen Primus, Storyboard-Based Video Browsing Using Color and Concept Indices, In MultiMedia Modeling: 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017, Proceedings, Part II (Laurent Amsaleg, Gylfi Þór Guðmundsson, Cathal Gurrin, Björn Þór Jónsson, Shin’ichi Satoh, eds.), Springer International Publishing, Cham, pp. 480-485, 2017.
[bib] [abstract]
Abstract: We present an interface for interactive video browsing where users visually skim storyboard representations of the files in search of known items (known-item search tasks) and textually described subjects, objects, or events (ad-hoc search tasks). Individual segments of the video are represented as a color-sorted storyboard that can be addressed via a color index. Our storyboard representation is optimized for quick visual inspection, building on results from our ongoing research. In addition, a concept-based search is used to filter out parts of the storyboard containing the related concept(s), thus complementing the human visual inspection with a semantic, content-based annotation.
|
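Color-sorting a storyboard can be done by ordering keyframes by a color statistic, which then makes them addressable through a color index. A small OpenCV sketch; using the mean hue as the sort key is an assumption, not the paper's method:

```python
import cv2
import numpy as np

def sort_keyframes_by_hue(keyframes):
    """Return keyframe indices ordered by mean hue (OpenCV hue range: 0-179)."""
    hues = []
    for img in keyframes:
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hues.append(float(hsv[..., 0].mean()))
    return sorted(range(len(keyframes)), key=lambda i: hues[i])

rng = np.random.default_rng(0)
keyframes = [rng.integers(0, 255, (90, 160, 3)).astype(np.uint8) for _ in range(6)]
print(sort_keyframes_by_hue(keyframes))   # storyboard display order
```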
[4] | Marco A Hudelist, Heinrich Husslein, Bernd Münzer, Klaus Schoeffmann, A Tool to Support Surgical Quality Assessment, In Proceedings of the Third IEEE International Conference on Multimedia Big Data (BigMM 2017) (Shu-Ching Chen, Philip Chen-Yu Sheu, eds.), IEEE, Laguna Hills, California, USA, pp. 2, 2017.
[bib][url] [doi] [abstract]
Abstract: In the domain of medical endoscopy, an increasing number of surgeons nowadays store video recordings of their interventions in a huge video archive. Among other purposes, the videos are used for post-hoc surgical quality assessment, since objective assessment of surgical procedures has been identified as an essential component for the improvement of surgical quality. Currently, such assessment is performed manually and for selected procedures only, since the amount of data and the cumbersome interaction make it very time-consuming. In the future, quality assessment should be carried out comprehensively and systematically by means of automated assessment algorithms. In this demo paper, we present a tool that supports human assessors in collecting manual annotations and therefore should help them deal with the huge amount of visual data more efficiently. These annotations will be analyzed and used as training data in the future.
|
[3] | Christian Beecks, Sabrina Kletz, Klaus Schoeffmann, Large-Scale Endoscopic Image and Video Linking with Gradient-Based Signatures, In Proceedings of the Third IEEE International Conference on Multimedia Big Data (BigMM 2017) (Shu-Ching Chen, Philip Chen-Yu Sheu, eds.), IEEE, Laguna Hills, California, USA, pp. 5, 2017.
[bib][url] [doi] [abstract]
Abstract: Given a large-scale video archive of surgical interventions and a medical image showing a specific moment of an operation, how can the most image-related videos be found efficiently without utilizing additional semantic characteristics? In this paper, we investigate a novel content-based approach of linking medical images with relevant video segments arising from endoscopic procedures. We propose to approximate the video segments' content-based features by gradient-based signatures and to index these signatures with the Minkowski distance in order to determine the most query-like video segments efficiently. We benchmark our approach on a large endoscopic image and video archive and show that it achieves a significant improvement in efficiency in comparison to the state of the art while maintaining high accuracy.
|
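Indexing signatures under a Minkowski distance for efficient lookup can be sketched with a k-d tree, whose query supports the Minkowski order p directly. The signature extraction below (a normalized gradient-magnitude histogram) is a simplified stand-in for the paper's gradient-based signatures:

```python
import numpy as np
from scipy.spatial import cKDTree

def gradient_signature(gray_image, bins=32):
    """Simplified stand-in: histogram of gradient magnitudes as a signature."""
    gy, gx = np.gradient(gray_image.astype(np.float32))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-6))
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(0)
archive = [rng.random((120, 160)) for _ in range(200)]    # toy video keyframes
signatures = np.stack([gradient_signature(f) for f in archive])

tree = cKDTree(signatures)                                 # build the index once
query = gradient_signature(archive[42])
dists, idx = tree.query(query, k=5, p=2)                   # Minkowski distance, p=2
print(idx)   # the most query-like segments; archive[42] comes first
```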
[2] | Andreas Leibetseder, Mathias Lux, Gamifying Fitness or Fitnessifying Games: a Comparative Study, In Proceedings of the Third International Workshop on Gamification for Information Retrieval - co-located with 39th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016) (F Hopfgartner, G Kazai, U Kruschwitz, M Meder, eds.), CEUR Workshop Proceedings, vol. 1642, Pisa, Italy, pp. 37-44, 2016.
[bib][url] [pdf] [abstract]
Abstract: Fitness games, or exergames, are ubiquitously available, but often lack the main ingredient of successfully gamified systems: fun. This can be attributed to the typical way of designing such games – a strong focus on specific physical activities, thus gamifying fitness. Instead, we propose a novel alternative approach to improve motivation for exergaming, which we call fitnessification: integrating physical exercise into very popular games that have been developed with fun in mind and are frequently played for long periods of time – so-called AAA games. In order to evaluate this concept, we conducted a comparative study examining voluntary participants' reactions to testing an ergometer-controlled casual game as well as a modified AAA game. Results indicate a strong tendency of players to prefer the newly introduced AAA approach over the casual fitness game.
|
[1] | Manfred Jürgen Primus, Bernd Münzer, Stefan Petscharnig, Klaus Schoeffmann, ITEC-UNIKLU Ad-Hoc Video Search Submission 2016, In Proceedings of TRECVID 2016 (George Awad, Jonathan Fiscus, Martial Michel, David Joy, Wessel Kraaij, Alan F Smeaton, Georges Quénot, Maria Eskevich, Robin Aly, Gareth J F Jones, Roeland Ordelman, Benoit Huet, Martha Larson, eds.), NIST, USA, NIST, Gaithersburg, MD, USA, pp. 10, 2016.
[bib] [abstract]
Abstract: In this report we describe our approach to the fully automatic Ad-hoc Video Search task for TRECVID 2016. We describe how we obtain training data from the web, create corresponding CNN models for the provided queries, and use them to classify keyframes obtained from a custom sub-shot detection method. The resulting classifications are fed into a Lucene index in order to obtain the shots that match the query. We also discuss our results and point out potential for further improvements.
|
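Sub-shot detection of the kind mentioned in the abstract above can be approximated by thresholding color-histogram differences between consecutive frames; indices where the difference spikes start a new sub-shot. An illustrative OpenCV sketch, where the Bhattacharyya metric and the threshold are assumptions, not the submission's custom method:

```python
import cv2
import numpy as np

def subshot_boundaries(frames, threshold=0.5):
    """Frame indices whose HSV histogram differs strongly from the predecessor's."""
    def hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
        return cv2.normalize(h, h).flatten()

    boundaries, prev = [], hist(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = hist(frame)
        # Bhattacharyya distance in [0, 1]; a large value suggests a boundary.
        if cv2.compareHist(prev, cur, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries

frames = [np.random.randint(0, 255, (120, 160, 3), np.uint8) for _ in range(10)]
print(subshot_boundaries(frames))
```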