[12] | Luca Rossetto, Fabian Berns, Klaus Schöffmann, George M. Awad, Christian Beecks, The V3C1 Dataset: Advancing the State of the Art in Video Retrieval, In ACM SIGMM Records, vol. 11, no. 2, 2019.
[bib] [url] [abstract]
Abstract: Standardized datasets are of vital importance in multimedia research, as they form the basis for reproducible experiments and evaluations. In the area of video retrieval, widely used datasets such as the IACC [5], which has formed the basis for the TRECVID Ad-Hoc Video Search Task and other retrieval-related challenges, have started to show their age. For example, IACC is no longer representative of video content as it is found in the wild [7]. This is illustrated by the figures below, showing the distribution of video age and duration across various datasets in comparison with a sample drawn from Vimeo and Youtube.
|
[11] | Fabian Berns, Luca Rossetto, Klaus Schöffmann, Christian Beecks, George M. Awad, V3C1 Dataset: An Evaluation of Content Characteristics, In Proceedings of the ACM International Conference on Multimedia Retrieval, ACM, New York, NY, USA, pp. 334-338, 2019.
[bib] [url] [doi] |
[10] | Andreas Leibetseder, Bernd Münzer, Manfred Jürgen Primus, Sabrina Kletz, Klaus Schöffmann, Fabian Berns, Christian Beecks, lifeXplore at the Lifelog Search Challenge 2019, In Proceedings of the ACM Workshop on Lifelog Search Challenge (LSC 19), ACM, New York, NY, USA, pp. 13-17, 2019.
[bib] [url] [doi] |
[9] | Klaus Schoeffmann, Heinrich Husslein, Sabrina Kletz, Stefan Petscharnig, Bernd Münzer, Christian Beecks, Video Retrieval in Laparoscopic Video Recordings with Dynamic Content Descriptors, In Multimedia Tools and Applications, Springer US, USA, pp. 18, 2017.
[bib] |
[8] | Bernd Münzer, Manfred Jürgen Primus, Marco Hudelist, Christian Beecks, Wolfgang Hürst, Klaus Schoeffmann, When content-based video retrieval and human computation unite: Towards effective collaborative video search, In 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (Yui-Lam Chan, Susanto Rahardja, eds.), IEEE, Hong Kong, China, pp. 214-219, 2017.
[bib] [doi] [abstract]
Abstract: Although content-based retrieval methods have achieved very good results for large-scale video collections in recent years, they still suffer from various deficiencies. On the other hand, plain human perception is a very powerful ability that still outperforms automatic methods in appropriate settings, but is very limited when it comes to large-scale data collections. In this paper, we propose to take the best from both worlds by combining an advanced content-based retrieval system featuring various query modalities with a straightforward mobile tool that is optimized for fast human perception in a sequential manner. In this collaborative system with multiple users, both subsystems benefit from each other: The results of issued queries are used to re-rank the video list on the tablet tool, which in turn notifies the retrieval tool about parts of the dataset that have already been inspected in detail and can be omitted in subsequent queries. The preliminary experiments show promising results in terms of search performance.
|
[7] | Christian Beecks, Sabrina Kletz, Klaus Schoeffmann, Large-Scale Endoscopic Image and Video Linking with Gradient-Based Signatures, In Proceedings of the Third IEEE International Conference on Multimedia Big Data (BigMM 2017) (Shu-Ching Chen, Philip Chen-Yu Sheu, eds.), IEEE, Laguna Hills, California, USA, pp. 5, 2017.
[bib] [url] [doi] [abstract]
Abstract: Given a large-scale video archive of surgical interventions and a medical image showing a specific moment of an operation, how to find the most image-related videos efficiently without the utilization of additional semantic characteristics? In this paper, we investigate a novel content-based approach of linking medical images with relevant video segments arising from endoscopic procedures. We propose to approximate the video segments' content-based features by gradient-based signatures and to index these signatures with the Minkowski distance in order to determine the most query-like video segments efficiently. We benchmark our approach on a large endoscopic image and video archive and show that our approach achieves a significant improvement in efficiency in comparison to the state-of-the-art while maintaining high accuracy.
|
[6] | Marco A. Hudelist, Claudiu Cobârzan, Christian Beecks, Rob van de Werken, Sabrina Kletz, Wolfgang Hürst, Klaus Schoeffmann, Collaborative Video Search Combining Video Retrieval with Human-Based Visual Inspection, In International Conference on Multimedia Modeling (Qi Tian, Nicu Sebe, Guo-Jun Qi, Benoit Huet, Richang Hong, Xueliang Liu, eds.), Springer International Publishing, Cham, Switzerland, pp. 400-405, 2016.
[bib] |
[5] | Klaus Schoeffmann, Christian Beecks, Mathias Lux, Merih Seran Uysal, Thomas Seidl, Content-based Retrieval in Videos from Laparoscopic Surgery, In Proceedings of SPIE 9786, Medical Imaging 2016: Image-Guided Procedures, Robotic Interventions, and Modeling (Robert Webster, Ziv Yaniv, eds.), SPIE, Bellingham, WA, USA, pp. 97861V-97861V10, 2016.
[bib] |
[4] | Wolfgang Hürst, Algernon Ip Vai Ching, Marco Hudelist, Manfred Primus, Klaus Schoeffmann, Christian Beecks, A New Tool for Collaborative Video Search via Content-based Retrieval and Visual Inspection, In Proceedings of the 2016 ACM on Multimedia Conference (Alan Hanjalic, Cees Snoek, Marcel Worring, eds.), ACM, New York, NY, USA, pp. 731-732, 2016.
[bib] [url] [doi] |
[3] | Marco Andrea Hudelist, Claudiu Cobârzan, Christian Beecks, Rob van de Werken, Sabrina Kletz, Wolfgang Hürst, Klaus Schoeffmann, Collaborative Video Search Combining Video Retrieval with Human-Based Visual Inspection, In Multimedia Modeling (Qi Tian, Nicu Sebe, Guo-Jun Qi, Benoit Huet, Richang Hong, Xueliang Liu, eds.), Springer International Publishing, Cham, Switzerland, pp. 400-405, 2016.
[bib] [url] [abstract]
Abstract: We propose a novel video browsing approach that aims at optimally integrating traditional, machine-based retrieval methods with an interface design optimized for human browsing performance. Advanced video retrieval and filtering (e.g., via color and motion signatures, and visual concepts) on a desktop is combined with a storyboard-based interface design on a tablet optimized for quick, brute-force visual inspection. Both modules run independently but exchange information to significantly minimize the data for visual inspection and compensate mistakes made by the search algorithms.
|
[2] | Christian Beecks, Klaus Schoeffmann, Mathias Lux, Merih Seran Uysal, Thomas Seidl, Endoscopic Video Retrieval: A Signature-based Approach for Linking Endoscopic Images with Video Segments, In Proceedings of the IEEE International Symposium on Multimedia 2015 (ISM 2015) (Alberto Del Bimbo, Shu-Ching Chen, Haohong Wang, Heather Yu, Roger Zimmermann, eds.), IEEE, Los Alamitos, CA, pp. 1-6, 2015.
[bib] [abstract]
Abstract: In the field of medical endoscopy more and more surgeons are changing over to record and store videos of their endoscopic procedures, such as surgeries and examinations, in long-term video archives. In order to support surgeons in accessing these endoscopic video archives in a content-based way, we propose a simple yet effective signature-based approach: the Signature Matching Distance based on adaptive-binning feature signatures. The proposed distance-based similarity model facilitates an adaptive representation of the visual properties of endoscopic images and allows for matching these properties efficiently. We conduct an extensive performance analysis with respect to the task of linking specific endoscopic images with video segments and show the high efficacy of our approach. We are able to link more than 88% of the endoscopic images to their corresponding correct video segments, which improves the current state of the art by one order of magnitude.
|
[1] | Christian Beecks, Thomas Skopal, Klaus Schoeffmann, Thomas Seidl, Towards Large-Scale Multimedia Exploration, In Proceedings of the 5th International Workshop on Ranking in Databases (DBRank 2011) (Gautam Das, Vagelis Hristidis, Ihab Ilyas, eds.), VLDB, Seattle, WA, USA, pp. 31-33, 2011.
[bib] [pdf] [abstract]
Abstract: With the advent of the information age and the increasing size and complexity of multimedia databases, the question of how to support users in getting access and insight into those large databases has become immensely important. While traditional content-based retrieval approaches provide query-driven access under the assumption that the users' information needs are clearly specified, modern content-based exploration approaches support users in browsing and navigating through multimedia databases in the case of imprecise or even unknown information needs. By means of interactive graphical user interfaces, exploration approaches offer a convenient and intuitive access to unknown multimedia databases which becomes even more important with the arrival of powerful mobile devices. In this paper, we formulate challenges of user-centric multimedia exploration with a particular focus on large-scale multimedia databases. We claim that adaptability and scalability should be researched on both conceptual as well as technical level in order to model multimedia exploration approaches which are able to cope with millions of multimedia objects in near-realtime.
|