[230] | Pawan Kumar Verma, Prateek Agrawal, Ivone Amorim, Radu Prodan, WELFake: Word Embedding Over Linguistic Features for Fake News Detection, In IEEE Transactions on Computational Social Systems, Institute of Electrical and Electronics Engineers (IEEE), vol. 8, no. 4, pp. 881-893, 2021.
Abstract: Social media is a popular medium for the dissemination of real-time news all over the world. Easy and quick information proliferation is one of the reasons for its popularity. An extensive number of users of different age groups, genders, and societal beliefs are engaged in social media websites. Despite these favorable aspects, a significant disadvantage comes in the form of fake news, as people usually read and share information without caring about its genuineness. Therefore, it is imperative to research methods for the authentication of news. To address this issue, this article proposes a two-phase benchmark model named WELFake based on word embedding (WE) over linguistic features for fake news detection using machine learning classification. The first phase preprocesses the data set and validates the veracity of news content by using linguistic features. The second phase merges the linguistic feature sets with WE and applies voting classification. To validate its approach, this article also carefully designs a novel WELFake data set with approximately 72,000 articles, which incorporates different data sets to generate an unbiased classification output. Experimental results show that the WELFake model categorizes news as real or fake with 96.73% accuracy, which improves the overall accuracy by 1.31% compared to bidirectional encoder representations from transformers (BERT) and by 4.25% compared to convolutional neural network (CNN) models. Our frequency-based model, which focuses on analyzing writing patterns, outperforms predictive-based related works implemented using the Word2vec WE method by up to 1.73%.
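As a rough illustration of the voting-classification phase described in this abstract, the following minimal scikit-learn sketch combines two text classifiers by majority vote. The estimators, TF-IDF features, and toy samples are placeholders, not the paper's merged linguistic-feature and WE inputs.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus; 1 = fake, 0 = real.
texts = ["shocking miracle cure doctors hate", "city council approves new budget"]
labels = [1, 0]

# Hard (majority) voting over two text-classification pipelines.
model = VotingClassifier(estimators=[
    ("lr", make_pipeline(TfidfVectorizer(), LogisticRegression())),
    ("svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
], voting="hard")
model.fit(texts, labels)
print(model.predict(["you will not believe this one weird trick"]))
```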
[229] | Christian Timmerer, Mathias Wien, Lu Yu, Amy Reibman, Special issue on Open Media Compression: Overview, Design Criteria, and Outlook on Emerging Standards, In Proceedings of the IEEE, Institute of Electrical and Electronics Engineers (IEEE), vol. 109, no. 9, pp. 1423-1434, 2021.
Abstract: Universal access to and provisioning of multimedia content is now a reality. It is easy to generate, distribute, share, and consume any multimedia content, anywhere, anytime, on any device. Open media standards have played a crucial role in enabling all these use cases, leading to a plethora of applications and services that have now become a commodity in our daily life. Interestingly, most of these services adopt a streaming paradigm, are typically deployed over the open, unmanaged Internet, and account for most of today’s Internet traffic. Currently, global video traffic amounts to more than 60% of all Internet traffic [1], and it is expected that this share will grow to more than 80% in the near future [2]. In addition, Nielsen’s law of Internet bandwidth states that the users’ bandwidth grows by 50% per year, which roughly fits data from 1983 to 2019 [3]. Thus, the users’ bandwidth can be expected to reach approximately 1 Gb/s by 2022. At the same time, network applications will grow and utilize the bandwidth provided, just like programs and their data expand to fill the memory available in a computer system. Most of the available bandwidth today is consumed by video applications, and the amount of data is further increasing due to already established and emerging applications, e.g., ultrahigh definition, high dynamic range, or virtual, augmented, and mixed realities, or immersive media applications in general.
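The 1 Gb/s projection follows directly from compounding 50% annual growth; the 2019 baseline of roughly 300 Mb/s in this sketch is an illustrative assumption, not a figure from the article:

```python
def nielsen_bandwidth_mbps(year, base_year=2019, base_mbps=300.0, growth=1.5):
    """Nielsen's law: user bandwidth grows ~50% per year."""
    return base_mbps * growth ** (year - base_year)

print(nielsen_bandwidth_mbps(2022))  # 1012.5 Mb/s, i.e., roughly 1 Gb/s
```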
[228] | Babak Taraghi, Minh Nguyen, Hadi Amirpour, Christian Timmerer, Intense: In-Depth Studies on Stall Events and Quality Switches and Their Impact on the Quality of Experience in HTTP Adaptive Streaming, In IEEE Access, Institute of Electrical and Electronics Engineers (IEEE), vol. 9, pp. 118087-118098, 2021.
Abstract: With the recent growth of multimedia traffic over the Internet and emerging multimedia streaming service providers, improving Quality of Experience (QoE) for HTTP Adaptive Streaming (HAS) becomes more important. Alongside other factors, such as the media quality, HAS relies on the performance of the media player’s Adaptive Bitrate (ABR) algorithm to optimize QoE in multimedia streaming sessions. QoE in HAS suffers from weak or unstable Internet connections and suboptimal ABR decisions. As a result of imperfect adaptation to the characteristics and conditions of the Internet connection, stall events and quality level switches of varying durations can occur and negatively affect the QoE. In this paper, we address various identified open issues related to QoE for HAS, notably (i) the minimum noticeable duration of stall events in HAS; (ii) the correlation between the media quality and the impact of stall events on QoE; (iii) the end-user preference regarding multiple shorter stall events versus a single longer stall event; and (iv) the end-user preference for media quality switches over stall events. We have studied these open issues from both objective and subjective evaluation perspectives and present the correlation between the two types of evaluations. The findings documented in this paper can be used as a baseline for improving ABR algorithms and policies in HAS.
[227] | Natalia Sokolova, Klaus Schoeffmann, Mario Taschwer, Stephanie Sarny, Doris Putzgruber-Adamitsch, Yosuf El-Shabrawi, Automatic detection of pupil reactions in cataract surgery videos, In PLOS ONE (Andreas Wedrich, ed.), Public Library of Science (PLoS), vol. 16, no. 10, pp. e0258390, 2021.
Abstract: In light of the increased use of premium intraocular lenses (IOLs), such as EDOF IOLs, multifocal IOLs, or toric IOLs, even minor intraoperative complications, such as decentrations or an IOL tilt, will hamper the visual performance of these IOLs. Thus, the post-operative analysis of cataract surgeries to detect even minor intraoperative deviations that might explain a lack of post-operative success becomes more and more important. Up to now, surgical videos have been evaluated by looking at only a very limited number of intraoperative data sets or, as done in studies evaluating the pupil changes that occur during surgeries, at a small number of intraoperative pictures only. A continuous measurement of pupil changes over the whole surgery, which would yield clinically more relevant data, has not yet been described. Therefore, the automatic retrieval of such events may be a great support for post-operative analysis, especially if large data files could be evaluated automatically. In this work, we automatically detect pupil reactions in cataract surgery videos. We employ a Mask R-CNN architecture as a segmentation algorithm to segment the pupil and iris with pixel-based accuracy and then track their sizes across the entire video. We can detect pupil reactions with a harmonic mean (H) of Recall, Precision, and Ground Truth Coverage Rate (GTCR) of 60.9% and an average prediction length (PL) of 18.93 seconds. However, we consider the best configuration for practical use to be the one with an H value of 59.4% and a much shorter PL of 10.2 seconds. We further investigate the generalization ability of this method on a slightly different dataset without retraining the model. In this evaluation, we achieve an H value of 49.3% with a PL of 18.15 seconds.
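The reported H score is the harmonic mean of the three listed rates; a minimal check with illustrative (not reported) values:

```python
from statistics import harmonic_mean

recall, precision, gtcr = 0.65, 0.58, 0.60   # illustrative values only
H = harmonic_mean([recall, precision, gtcr])
print(f"H = {H:.3f}")                        # ~0.609, i.e., about 60.9%
```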
[226] | Nishant Saurabh, Carlos Rubia, Anandakumar Palanisamy, Spiros Koulouzis, Mirsat Sefidanoski, Antorweep Chakravorty, Zhiming Zhao, Aleksandar Karadimce, Radu Prodan, The ARTICONF Approach to Decentralized Car-Sharing, In Blockchain: Research and Applications, Elsevier BV, pp. 1-37, 2021.
Abstract: Social media applications are essential for next-generation connectivity. Today, social media are centralized platforms with a single proprietary organization controlling the network, posing critical trust and governance issues over the created and propagated content. The ARTICONF project [1], funded by the European Union’s Horizon 2020 program, researches a decentralized social media platform based on a novel set of trustworthy, resilient, and globally sustainable tools that address the privacy, robustness, and autonomy-related promises that proprietary social media platforms have failed to deliver so far. This paper presents the ARTICONF approach to a car-sharing decentralized application (DApp) use case, a new collaborative peer-to-peer model providing an alternative solution to private car ownership. We describe a prototype implementation of the car-sharing social media DApp and illustrate through real snapshots how the different ARTICONF tools support it in a simulated scenario.
[225] | Luca Rossetto, Ralph Gasser, Jakub Lokoc, Werner Bailer, Klaus Schoeffmann, Bernd Muenzer, Tomas Soucek, Phuong Anh Nguyen, Paolo Bolettieri, Andreas Leibetseder, Stefanos Vrochidis, Interactive Video Retrieval in the Age of Deep Learning - Detailed Evaluation of VBS 2019, In IEEE Transactions on Multimedia, Institute of Electrical and Electronics Engineers (IEEE), vol. 23, pp. 243-256, 2021.
Abstract: Despite the fact that automatic content analysis has made remarkable progress over the last decade - mainly due to significant advances in machine learning - interactive video retrieval is still a very challenging problem, with an increasing relevance in practical applications. The Video Browser Showdown (VBS) is an annual evaluation competition that pushes the limits of interactive video retrieval with state-of-the-art tools, tasks, data, and evaluation metrics. In this paper, we analyse the results and outcome of the 8th iteration of the VBS in detail. We first give an overview of the novel and considerably larger V3C1 dataset and the tasks that were performed during VBS 2019. We then describe the search systems of the six international teams in terms of features and performance. Finally, we perform an in-depth analysis of the per-team success ratio and relate it to the search strategies that were applied, the most popular features, and the problems that were experienced. A large part of this analysis was conducted based on logs that were collected during the competition itself. This analysis gives further insights into typical search behavior and the differences between expert and novice users. Our evaluation shows that textual search and content browsing are the most important aspects in terms of logged user interactions. Furthermore, we observe a trend towards deep-learning-based features, especially in the form of labels generated by artificial neural networks. Nevertheless, for some tasks, very specific content-based search features are still being used. We expect these findings to contribute to future improvements of interactive video search systems.
[224] | Tobias Ross, Annika Reinke, Peter M. Full, Martin Wagner, Hannes Kenngott, Martin Apitz, Hellena Hempe, Diana Mindroc-Filimon, Patrick Scholz, Thuy Nuong Tran, Pierangela Bruno, Pablo Arbeláez, Gui-Bin Bian, Sebastian Bodenstedt, Jon Lindström Bolmgren, Laura Bravo-Sánchez, Hua-Bin Chen, Cristina González, Dong Guo, Paal Halvorsen, Pheng-Ann Heng, Enes Hosgor, Zeng-Guang Hou, Fabian Isensee, Debesh Jha, Tingting Jiang, Yueming Jin, Kadir Kirtac, Sabrina Kletz, Stefan Leger, Zhixuan Li, Klaus H. Maier-Hein, Zhen-Liang Ni, Michael A. Riegler, Klaus Schoeffmann, Ruohua Shi, Stefanie Speidel, Michael Stenzel, Isabell Twick, Gutai Wang, Jiacheng Wang, Liansheng Wang, Lu Wang, Yujie Zhang, Yan-Jie Zhou, Lei Zhu, Manuel Wiesenfarth, Annette Kopp-Schneider, Beat P. Müller-Stich, Lena Maier-Hein, Comparative validation of multi-instance instrument segmentation in endoscopy: Results of the ROBUST-MIS 2019 challenge, In Medical Image Analysis, Elsevier BV, vol. 70, no. 66, pp. 1-62, 2021.
Abstract: Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer- and robotic-assisted interventions. While numerous methods for detecting, segmenting, and tracking medical instruments based on endoscopic video images have been proposed in the literature, key limitations remain to be addressed: firstly, robustness, that is, the reliable performance of state-of-the-art methods when run on challenging images (e.g., in the presence of blood, smoke, or motion artifacts); secondly, generalization: algorithms trained for a specific intervention in a specific hospital should generalize to other interventions or institutions. In an effort to promote solutions for these limitations, we organized the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an international benchmarking competition with a specific focus on the robustness and generalization capabilities of algorithms. For the first time in the field of endoscopic image processing, our challenge included a task on binary segmentation and also addressed multi-instance detection and segmentation. The challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures from three different types of surgery. The validation of the competing methods for the three tasks (binary segmentation, multi-instance detection, and multi-instance segmentation) was performed in three different stages with an increasing domain gap between the training and the test data. The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap. While the average detection and segmentation quality of the best-performing algorithms is high, future research should concentrate on the detection and segmentation of small, crossing, moving, and transparent instruments (and instrument parts).
[223] | Sasko Ristov, Thomas Fahringer, Radu Prodan, Magdalena Kostoska, Marjan Gusev, Schahram Dustdar, Inter-host Orchestration Platform Architecture for Ultra-scale Cloud Applications, In IEEE Internet Computing, Institute of Electrical and Electronics Engineers (IEEE), pp. 1-1, 2021.
Abstract: Cloud data centers exploit many memory page management techniques that reduce the total memory utilization and access time. These techniques are mainly applied within a hypervisor on a single host (intra-hypervisor), without the possibility to exploit the knowledge obtained by a group of hosts (clusters). We introduce a novel inter-hypervisor orchestration platform to provide intelligent memory page management for horizontal scaling. It will use the performance behavior of faster virtual machines to activate pre-fetching mechanisms that reduce the number of page faults. The overall platform consists of five modules: profiler, collector, classifier, predictor, and pre-fetcher. We developed and deployed a prototype of the platform comprising the first three modules. The evaluation shows that data collection is feasible in real time, which means that if our approach is used on top of existing memory page management techniques, it can significantly lower the miss rate that initiates page faults.
[222] | Bernhard Rinner, Christian Bettstetter, Hermann Hellwagner, Stephan Weiss, Multidrone Systems: More Than the Sum of the Parts, In Computer, Institute of Electrical and Electronics Engineers (IEEE), vol. 54, no. 5, pp. 34-43, 2021.
Abstract: Now that drones have evolved from bulky platforms to agile devices, a challenge is to combine multiple drones into an integrated autonomous system, offering functionality that individual drones cannot achieve. Such multidrone systems require connectivity, communication, and coordination. We discuss these building blocks along with case studies and lessons learned.
[221] | Philipp Moll, Selina Isak, Hermann Hellwagner, Jeff Burke, A Quadtree-based synchronization protocol for inter-server game state synchronization, In Computer Networks, Elsevier BV, vol. 185, pp. 107723, 2021.
Abstract: Online games are a fundamental part of the entertainment industry, but the current IP infrastructure does not satisfactorily fulfill the needs of these services. The novel networking architecture Named Data Networking (NDN) inherently supports network-level multicast and packet-level security and thereby introduces promising features for online games. In this paper, we propose an NDN-based approach to synchronize game state in a server cluster, a task necessary to allow large numbers of players to play in the same game world. The proposed Quadtree Synchronization Protocol applies NDN’s data-centric nature to decouple the game world from the game servers hosting it. This means that requesting changes of a specific game world region becomes possible without knowing which game server is responsible for the requested region. We use a hierarchical game world structure when requesting data, which allows the network to forward requests to the responsible game server without directly addressing it. This region-based naming scheme decouples world regions from servers, which eases the management of the game server cluster and allows easier recovery after server failures. In addition, this decoupling allows exchanging information about a geographical region, such as a game world, without knowledge of the other participants changing the world. Such a region-based synchronization mode is not possible to implement with existing protocols, and it allows building distributed systems that do not require a central server to work. Besides the architectural benefits, network emulations show that our protocol increases the efficiency of data transport by utilizing network-level multicast. Our proposed approach can keep up with current protocols used for inter-server game state synchronization.
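As an illustration of region-based hierarchical naming (the exact name components of the Quadtree Synchronization Protocol are not reproduced here), a sketch that derives a server-agnostic quadtree name for the region containing a point:

```python
def quadtree_name(x, y, world_size=1024.0, depth=4, prefix="/game/world"):
    """Name the quadtree region containing (x, y) without naming any server."""
    components, half = [], world_size / 2
    cx, cy = half, half                      # center of the current region
    for _ in range(depth):
        quadrant = (0 if y < cy else 2) + (0 if x < cx else 1)
        components.append(str(quadrant))
        half /= 2
        cx += half if x >= cx else -half
        cy += half if y >= cy else -half
    return prefix + "/" + "/".join(components)

print(quadtree_name(100, 900))  # /game/world/2/2/2/1
```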
[220] | Vishu Madaan, Aditya Roy, Charu Gupta, Prateek Agrawal, Anand Sharma, Cristian Bologa, Radu Prodan, XCOVNet: Chest X-ray Image Classification for COVID-19 Early Detection Using Convolutional Neural Networks, In New Generation Computing, Springer Science and Business Media LLC, pp. 1-15, 2021.
Abstract: The COVID-19 pandemic (caused by the SARS-CoV-2 virus) has spread across the entire world. It is a contagious disease that easily spreads from one person to another through direct contact, classified by experts into five categories: asymptomatic, mild, moderate, severe, and critical. More than 66 million people had been infected worldwide, with more than 22 million active patients, as of 5 December 2020, and the rate is accelerating. More than 1.5 million patients (approximately 2.5% of total reported cases) across the world have lost their lives. In many places, COVID-19 detection takes place through reverse transcription polymerase chain reaction (RT-PCR) tests, which may take longer than 48 h. This is one major reason for its severity and rapid spread. We propose in this paper a two-phase X-ray image classification method called XCOVNet for early COVID-19 detection using a convolutional neural network model. XCOVNet detects COVID-19 infections in chest X-ray patient images in two phases. The first phase pre-processes a dataset of 392 chest X-ray images, of which half are COVID-19 positive and half are negative. The second phase trains and tunes the neural network model to achieve a 98.44% accuracy in patient classification.
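For illustration only, a tiny PyTorch binary classifier in the spirit of the described approach; the layer sizes and input shape are assumptions and do not reproduce the actual XCOVNet architecture:

```python
import torch
import torch.nn as nn

class TinyXRayNet(nn.Module):
    """Hypothetical stand-in, not the published XCOVNet layout."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64), nn.ReLU(),
            nn.Linear(64, 1),            # single logit: COVID-19 positive vs. negative
        )

    def forward(self, x):                # x: (batch, 1, 224, 224) grayscale X-rays
        return self.classifier(self.features(x))

logits = TinyXRayNet()(torch.randn(2, 1, 224, 224))
```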
[219] | Jakub Lokoc, Patrik Vesely, Frantisek Mejzlik, Gregor Kovalcik, Tomas Soucek, Luca Rossetto, Klaus Schoeffmann, Werner Bailer, Cathal Gurrin, Loris Sauter, Jaeyub Song, Stefanos Vrochidis, Jiaxin Wu, Björn Thor Jonsson, Is the Reign of Interactive Search Eternal? Findings from the Video Browser Showdown 2020, In ACM Transactions on Multimedia Computing, Communications, and Applications, Association for Computing Machinery (ACM), vol. 17, no. 3, pp. 1-26, 2021.
Abstract: Comprehensive and fair performance evaluation of information retrieval systems represents an essential task for the current information age. Whereas Cranfield-based evaluations with benchmark datasets support the development of retrieval models, significant evaluation efforts are also required for user-oriented systems that try to boost performance with an interactive search approach. This article presents findings from the 9th Video Browser Showdown, a competition that focuses on a legitimate comparison of interactive search systems designed for challenging known-item search tasks over a large video collection. During previous installments of the competition, the interactive nature of participating systems was a key feature in satisfying known-item search needs, and this article continues to support this hypothesis. Despite the fact that top-performing systems integrate the most recent deep learning models into their retrieval process, interactive searching remains a necessary component of successful strategies for known-item search tasks. Alongside the description of competition settings, evaluated tasks, participating teams, and overall results, this article presents a detailed analysis of query logs collected by the top three performing systems, SOMHunter, VIRET, and vitrivr. The analysis provides quantitative insight into the observed performance of the systems and constitutes a new baseline methodology for future events. The results reveal that the top two systems mostly relied on temporal queries before a correct frame was identified. An interaction log analysis complements the result log findings and points to the importance of result set and video browsing approaches. Finally, various outlooks are discussed in order to improve the Video Browser Showdown challenge in the future.
[218] | Dragi Kimovski, Roland Matha, Josef Hammer, Narges Mehran, Hermann Hellwagner, Radu Prodan, Cloud, Fog, or Edge: Where to Compute?, In IEEE Internet Computing, Institute of Electrical and Electronics Engineers (IEEE), vol. 25, no. 4, pp. 30-36, 2021.
Abstract: The computing continuum extends the high-performance cloud data centers with energy-efficient and low-latency devices close to the data sources located at the edge of the network. However, the heterogeneity of the computing continuum raises multiple challenges related to application management. These include where to offload an application – from the cloud to the edge – to meet its computation and communication requirements. To support these decisions, we provide in this article a detailed performance and carbon footprint analysis of a selection of use case applications with complementary resource requirements across the computing continuum over a real-life evaluation testbed.
[217] | Dragi Kimovski, Narges Mehran, Christopher Emanuel Kerth, Radu Prodan, Mobility-Aware IoT Applications Placement in the Cloud Edge Continuum, In IEEE Transactions on Services Computing, Institute of Electrical and Electronics Engineers (IEEE), pp. 1-14, 2021.
Abstract: The extension of Cloud services towards the network boundaries through Edge computing raises important placement challenges for IoT applications running in a heterogeneous environment with limited computing capacities. Unfortunately, existing works only partially address this challenge by optimizing a single or aggregate objective (e.g., response time) and by not considering the edge devices' mobility and resource constraints. To address this gap, we propose a novel mobility-aware multi-objective IoT application placement (mMAPO) method in the Cloud-Edge Continuum that optimizes completion time, energy consumption, and economic cost as conflicting objectives. mMAPO utilizes a Markov model for predictive analysis of Edge device mobility and constrains the optimization to devices that do not frequently move through the network. We evaluate the quality of the mMAPO placements using simulation and real-world experimentation on two IoT applications. Compared to related work, mMAPO reduces the economic cost by 28% and decreases the completion time by 80% while maintaining stable energy consumption.
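A minimal sketch of the Markov-model idea, assuming a hypothetical three-cell network and an illustrative transition matrix (not the paper's model parameters):

```python
import numpy as np

# Hypothetical 3-cell mobility model: P[i, j] is the probability that a device
# attached to cell i is attached to cell j one time step later.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.1, 0.6]])

def in_cell_probability(P, cell, steps):
    """Probability of finding the device in its starting cell after `steps` steps."""
    return np.linalg.matrix_power(P, steps)[cell, cell]

# Constrain placement to devices unlikely to have moved away.
stable = [c for c in range(P.shape[0]) if in_cell_probability(P, c, 5) > 0.4]
```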
[216] | Dragi Kimovski, Roland Matha, Gabriel Iuhasz, Fabrizio Marozzo, Dana Petcu, Radu Prodan, Autotuning of Exascale Applications With Anomalies Detection, In Frontiers in Big Data, Frontiers Media SA, vol. 4, pp. 1-14, 2021.
Abstract: The execution of complex distributed applications in exascale systems faces many challenges, as it involves empirical evaluation of countless code variations and application runtime parameters over a heterogeneous set of resources. To mitigate these challenges, the research field of autotuning has gained momentum. Autotuning automates the identification of the most desirable application implementation in terms of code variations and runtime parameters. However, the complexity and size of exascale systems make the autotuning process very difficult, especially considering the number of parameter variations that have to be identified. Therefore, we introduce a novel approach for autotuning exascale applications based on a genetic multi-objective optimization algorithm integrated within the ASPIDE exascale computing framework. The approach considers a multi-dimensional search space with support for pluggable objective functions, including execution time and energy requirements. Furthermore, the autotuner employs a machine-learning-based event detection approach to detect events and anomalies during application execution, such as hardware failures or communication bottlenecks.
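The paper uses a genetic multi-objective algorithm; as a simpler stand-in, the sketch below shows only the multi-objective selection step, filtering a set of hypothetical configurations to the Pareto front over execution time and energy:

```python
def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one."""
    return (a["time"] <= b["time"] and a["energy"] <= b["energy"]
            and (a["time"] < b["time"] or a["energy"] < b["energy"]))

def pareto_front(configs):
    return [c for c in configs if not any(dominates(o, c) for o in configs)]

candidates = [  # invented (code variant, runtime parameter) evaluations
    {"variant": "tiled", "threads": 8,  "time": 12.0, "energy": 40.0},
    {"variant": "fused", "threads": 16, "time": 10.0, "energy": 55.0},
    {"variant": "naive", "threads": 4,  "time": 15.0, "energy": 35.0},
]
print(pareto_front(candidates))  # all three are non-dominated trade-offs here
```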
[215] | Yasir Noman Khalid, Muhammad Aleem, Usman Ahmed, Radu Prodan, Muhammad Arshad Islam, Muhammad Azhar Iqbal, FusionCL: a machine-learning based approach for OpenCL kernel fusion to increase system performance, In Computing, Springer Science and Business Media LLC, pp. 1-32, 2021.
Abstract: Employing general-purpose graphics processing units (GPGPU) with the help of OpenCL has resulted in greatly reducing the execution time of data-parallel applications by taking advantage of the massive available parallelism. However, when a small-data-size application is executed on a GPU, GPU resources are wasted as the application cannot fully utilize the GPU compute cores. There is no mechanism to share a GPU between two kernels due to the lack of operating system support on the GPU. In this paper, we propose a GPU sharing mechanism between two kernels that leads to increased GPU occupancy and, as a result, reduces the execution time of a job pool. However, if a pair of kernels competes for the same set of resources (i.e., both applications are compute-intensive or memory-intensive), kernel fusion may also result in a significant increase in the execution time of the fused kernels. Therefore, it is pertinent to select an optimal pair of kernels for fusion that will result in a significant speedup over their serial execution. This research presents FusionCL, a machine-learning-based GPU sharing mechanism between a pair of OpenCL kernels. FusionCL identifies each pair of kernels (from the job pool) that is a suitable candidate for fusion using a machine-learning-based fusion suitability classifier. Thereafter, from all the candidates, it selects the pair of kernels that will produce the maximum speedup after fusion over their serial execution, using a fusion speedup predictor. The experimental evaluation shows that the proposed kernel fusion mechanism reduces execution time by 2.83× compared to a baseline scheduling scheme. When compared to the state-of-the-art, the reduction in execution time is up to 8%.
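A sketch of the two-model selection logic, with toy callables standing in for the trained fusion suitability classifier and speedup predictor; the kernel features and thresholds are invented for illustration:

```python
from itertools import combinations

def select_fusion_pair(kernels, is_suitable, predict_speedup):
    """Pick the kernel pair with the highest predicted fusion speedup
    among the pairs the suitability classifier accepts."""
    candidates = [(a, b) for a, b in combinations(kernels, 2) if is_suitable(a, b)]
    return max(candidates, key=lambda p: predict_speedup(*p), default=None)

# Toy stand-ins for the trained models: prefer fusing a compute-intensive
# kernel with a memory-intensive one, ranked by profile dissimilarity.
kernels = [{"name": "matmul", "mem_ratio": 0.2},
           {"name": "stencil", "mem_ratio": 0.8},
           {"name": "reduce", "mem_ratio": 0.7}]
pair = select_fusion_pair(
    kernels,
    is_suitable=lambda a, b: abs(a["mem_ratio"] - b["mem_ratio"]) > 0.3,
    predict_speedup=lambda a, b: abs(a["mem_ratio"] - b["mem_ratio"]))
print(pair)  # (matmul, stencil)
```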
[214] | Nikita Karandikar, Rockey Abhishek, Nishant Saurabh, Zhiming Zhao, Alexander Lercher, Ninoslav Marina, Radu Prodan, Chunming Rong, Antorweep Chakravorty, Blockchain-based prosumer incentivization for peak mitigation through temporal aggregation and contextual clustering, In Blockchain: Research and Applications, Elsevier BV, pp. 1-35, 2021.
Abstract: Peak mitigation is of interest to power companies, as peak periods may require the operator to over-provision supply in order to meet the peak demand. Flattening the usage curve can result in cost savings, both for the power companies and the end users. The integration of renewable energy into the energy infrastructure presents an opportunity to use excess renewable generation to supplement supply and alleviate peaks. In addition, demand-side management can shift usage from peak to off-peak times and reduce the magnitude of peaks. In this work, we present a data-driven approach for incentive-based peak mitigation. Understanding user energy profiles is an essential step in this process. We begin by analysing a popular energy research dataset published by the Ausgrid corporation. Extracting aggregated user energy behavior in temporal contexts, together with semantic linking and contextual clustering, gives us insight into consumption and rooftop solar generation patterns. We implement and performance-test a blockchain-based prosumer incentivization system. The smart contract logic is based on our analysis of the Ausgrid dataset. Our implementation is capable of supporting 792,540 customers with a reasonably low infrastructure footprint.
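As a hedged illustration of contextual clustering over temporally aggregated load profiles (random data standing in for the Ausgrid measurements, and k-means standing in for whatever clustering the paper applies):

```python
import numpy as np
from sklearn.cluster import KMeans

# Random rows standing in for temporally aggregated half-hourly load
# profiles (48 readings per customer) of Ausgrid-style data.
rng = np.random.default_rng(0)
profiles = rng.random((100, 48))

# Group customers by usage shape; cluster labels could then drive
# which incentives a smart contract offers to each group.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
```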
[213] | Debesh Jha, Sharib Ali, Steven Hicks, Vajira Thambawita, Hanna Borgli, Pia H. Smedsrud, Thomas de Lange, Konstantin Pogorelov, Xiaowei Wang, Philipp Harzig, Minh-Triet Tran, Wenhua Meng, Trung-Hieu Hoang, Danielle Dias, Tobey H. Ko, Taruna Agrawal, Olga Ostroukhova, Zeshan Khan, Muhammad Atif Tahir, Yang Liu, Yuan Chang, Mathias Kirkerod, Dag Johansen, Mathias Lux, Haavard D. Johansen, Michael A. Riegler, Paal Halvorsen, A comprehensive analysis of classification methods in gastrointestinal endoscopy imaging, In Medical Image Analysis, Elsevier BV, vol. 70, pp. 102007, 2021.
Abstract: Gastrointestinal (GI) endoscopy has been an active field of research motivated by the large number of highly lethal GI cancers. Early GI cancer precursors are often missed during endoscopic surveillance. The high miss rate of such abnormalities during endoscopy is thus a critical bottleneck. Lack of attentiveness due to tiring procedures and the requirement of training are a few contributing factors. An automatic GI disease classification system can help reduce such risks by flagging suspicious frames and lesions. GI endoscopy involves surveillance of several organs; therefore, there is a need to develop methods that can generalize to various endoscopic findings. In this realm, we present a comprehensive analysis of the Medico GI challenges: the Medical Multimedia Task at MediaEval 2017, the Medico Multimedia Task at MediaEval 2018, and the BioMedia ACM MM Grand Challenge 2019. These challenges are an initiative to set up a benchmark for different computer vision methods applied to multi-class endoscopic images and to promote building new approaches that could reliably be used in clinics. We report the performance of 21 participating teams over a period of three consecutive years and provide a detailed analysis of the methods used by the participants, highlighting the challenges and shortcomings of the current approaches and dissecting their credibility for use in clinical settings. Our analysis revealed that the participants achieved an improvement in the maximum Matthews correlation coefficient (MCC) from 82.68% in 2017 to 93.98% in 2018 and 95.20% in 2019, and a significant increase in computational speed over the consecutive years.
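The MCC cited above is a standard metric; for reference, its binary form on an illustrative confusion matrix (the challenges score multi-class variants):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(mcc(tp=90, tn=85, fp=15, fn=10))  # ~0.75 on these invented counts
```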
[212] | Samira Hayat, Roland Jung, Hermann Hellwagner, Christian Bettstetter, Driton Emini, Dominik Schnieders, Edge Computing in 5G for Drone Navigation: What to Offload?, In IEEE Robotics and Automation Letters, Institute of Electrical and Electronics Engineers (IEEE), vol. 6, no. 2, pp. 2571-2578, 2021.
Abstract: Small drones that navigate using cameras may be limited in their speed and agility by low onboard computing power. We evaluate the role of edge computing in 5G for such autonomous navigation. The offloading of image processing tasks to an edge server is studied with a vision-based navigation algorithm. Three computation modes are compared: onboard, fully offloaded to the edge, and partially offloaded. Partial offloading is expected to pose lower demands on the communication network in terms of transfer rate than full offloading but requires some onboard processing. Our results on the computation time help select the most suitable mode for image processing, i.e., whether and what to offload, based on the network conditions.
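The mode comparison reduces to simple latency arithmetic: transfer time (payload over uplink rate) plus compute time. The numbers below are invented for illustration, not measurements from the paper:

```python
def completion_time(compute_s, payload_bits=0.0, uplink_bps=None):
    """Per-frame latency: transfer time (if offloading) plus compute time."""
    transfer = payload_bits / uplink_bps if uplink_bps else 0.0
    return transfer + compute_s

modes = {
    "onboard": completion_time(0.20),               # slow onboard processing
    "full":    completion_time(0.02, 8e6, 100e6),   # ship the raw image
    "partial": completion_time(0.05, 5e5, 100e6),   # ship extracted features only
}
print(min(modes, key=modes.get))  # 'partial' under these invented numbers
```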
[211] | Alireza Erfanian, Hadi Amirpour, Farzad Tashtarian, Christian Timmerer, Hermann Hellwagner, LwTE: Light-Weight Transcoding at the Edge, In IEEE Access, Institute of Electrical and Electronics Engineers (IEEE), vol. 9, pp. 112276-112289, 2021.
Abstract: Due to the growing demand for video streaming services, providers have to deal with increasing resource requirements for increasingly heterogeneous environments. To mitigate this problem, many works have been proposed which aim to (i) improve cloud/edge caching efficiency, (ii) use computation power available in the cloud/edge for on-the-fly transcoding, and (iii) optimize the trade-off among various cost parameters, e.g., storage, computation, and bandwidth. In this paper, we propose LwTE, a novel Light-weight Transcoding approach at the Edge, in the context of HTTP Adaptive Streaming (HAS). During the encoding process of a video segment at the origin side, computationally intense search processes take place. The main idea of LwTE is to store the optimal results of these search processes as metadata for each video bitrate and reuse them at the edge servers to reduce the required time and computational resources for on-the-fly transcoding. LwTE enables us to store only the highest bitrate plus the corresponding metadata (of very small size) for unpopular video segments/bitrates, while popular video segments/bitrates remain stored in full. In this way, in addition to the significant reduction in bandwidth and storage consumption, the required time for on-the-fly transcoding of a requested segment is remarkably decreased by utilizing its corresponding metadata, as unnecessary search processes are avoided. We investigate our approach for Video-on-Demand (VoD) streaming services by optimizing storage and computation (transcoding) costs at the edge servers and then compare it to conventional methods (store all bitrates, partial transcoding). The results indicate that our approach reduces the transcoding time by at least 80% and decreases the aforementioned costs by 12% to 70% compared to the state-of-the-art approaches.
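A minimal sketch of the edge-side serving decision as described, with stand-in data structures and a dummy transcode callable; all names are hypothetical:

```python
def serve(segment, bitrate, stored, metadata, transcode):
    """Popular segments are cached in every bitrate; unpopular ones keep only
    the highest bitrate plus tiny per-bitrate metadata (the stored search
    results), so on-the-fly transcoding can skip its expensive search step."""
    if (segment, bitrate) in stored:
        return stored[(segment, bitrate)]               # direct cache hit
    return transcode(stored[(segment, "highest")],      # transcode down...
                     metadata[(segment, bitrate)])      # ...reusing metadata

stored = {("seg1", "highest"): b"...", ("seg2", "1080p"): b"..."}
metadata = {("seg1", "720p"): {"search_results": "..."}}
out = serve("seg1", "720p", stored, metadata,
            transcode=lambda video, meta: b"transcoded")
```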
[210] | Alireza Erfanian, Farzad Tashtarian, Anatoliy Zabrovskiy, Christian Timmerer, Hermann Hellwagner, OSCAR: On Optimizing Resource Utilization in Live Video Streaming, In IEEE Transactions on Network and Service Management, Institute of Electrical and Electronics Engineers (IEEE), vol. 18, no. 1, pp. 552-569, 2021.
Abstract: Live video streaming traffic and related applications have experienced significant growth in recent years. However, this has been accompanied by some challenging issues, especially in terms of resource utilization. Although IP multicasting can be recognized as an efficient mechanism to cope with these challenges, it suffers from many problems. Applying software-defined networking (SDN) and network function virtualization (NFV) technologies enables researchers to cope with IP multicasting issues in novel ways. In this article, by leveraging the SDN concept, we introduce OSCAR (Optimizing reSourCe utilizAtion in live video stReaming) as a new cost-aware video streaming approach to provide advanced video coding (AVC)-based live streaming services in the network. We use two types of virtualized network functions (VNFs): the virtual reverse proxy (VRP) and the virtual transcoder function (VTF). At the edge of the network, VRPs are responsible for collecting clients’ requests and sending them to an SDN controller. Then, by executing a mixed-integer linear program (MILP), the SDN controller determines a group of optimal multicast trees for streaming the requested videos from an appropriate origin server to the VRPs. Moreover, to elevate the efficiency of resource allocation and meet the given end-to-end latency threshold, OSCAR delivers only the highest requested quality from the origin server to an optimal group of VTFs over a multicast tree. The selected VTFs then transcode the received video segments and transmit them to the requesting VRPs in a multicast fashion. To mitigate the time complexity of the proposed MILP model, we present a simple and efficient heuristic algorithm that determines a near-optimal solution in polynomial time. Using the MiniNet emulator, we evaluate the performance of OSCAR in various scenarios. The results show that OSCAR surpasses other SVC- and AVC-based multicast and unicast approaches in terms of cost and resource utilization.
[209] | Ekrem Cetinkaya, Hadi Amirpour, Christian Timmerer, Mohammad Ghanbari, Fast Multi-Resolution and Multi-Rate Encoding for HTTP Adaptive Streaming Using Machine Learning, In IEEE Open Journal of Signal Processing, Institute of Electrical and Electronics Engineers (IEEE), pp. 1-12, 2021.
Abstract: Video streaming applications have received increasing attention over the years, and HTTP Adaptive Streaming (HAS) has become the de facto solution for video delivery over the Internet. In HAS, each video is encoded at multiple quality levels and resolutions (i.e., representations) to enable adaptation of the streaming session to the viewing and network conditions of the client. This requirement brings encoding challenges along with it, e.g., a video source should be encoded efficiently at multiple bitrates and resolutions. Fast multi-rate encoding approaches aim to address this challenge of encoding multiple representations from a single video by re-using information from already encoded representations. In this paper, a convolutional neural network is used to speed up both multi-rate and multi-resolution encoding for HAS. For multi-rate encoding, the lowest bitrate representation is chosen as the reference. For multi-resolution encoding, the highest bitrate from the lowest resolution representation is chosen as the reference. Pixel values from the target resolution and encoding information from the reference representation are used to predict Coding Tree Unit (CTU) split decisions in High-Efficiency Video Coding (HEVC) for the dependent representations. Experimental results show that the proposed method for multi-rate encoding can reduce the overall encoding time by 15.08% and the parallel encoding time by 41.26%, with a 0.89% bitrate increase compared to the HEVC reference software. Simultaneously, the proposed method for multi-resolution encoding can reduce the encoding time by 46.27% for overall encoding and by 27.71% for parallel encoding on average, with a 2.05% bitrate increase.
[208] | Ekrem Cetinkaya, Hadi Amirpour, Mohammad Ghanbari, Christian Timmerer, CTU depth decision algorithms for HEVC: A survey, In Signal Processing: Image Communication, Elsevier BV, vol. 99, pp. 116442, 2021.
Abstract: High Efficiency Video Coding (HEVC) surpasses its predecessors in encoding efficiency by introducing new coding tools at the cost of increased encoding time complexity. The Coding Tree Unit (CTU) is the main building block used in HEVC. In the HEVC standard, frames are divided into CTUs with a predetermined size of up to 64 × 64 pixels. Each CTU is then divided recursively into a number of equally sized square areas, known as Coding Units (CUs). Although this diversity of frame partitioning increases encoding efficiency, it also causes an increase in time complexity due to the increased number of ways to find the optimal partitioning. To address this complexity, numerous algorithms have been proposed to eliminate unnecessary searches during the partitioning of CTUs by exploiting the correlation in the video. In this paper, existing CTU depth decision algorithms for HEVC are surveyed. These algorithms are categorized into two groups, namely statistics and machine learning approaches. Statistics approaches are further subdivided into neighboring and inherent approaches. Neighboring approaches exploit the similarity between adjacent CTUs to limit the depth range of the current CTU, while inherent approaches use only the information available within the current CTU. Machine learning approaches try to extract and exploit similarities implicitly. Traditional methods like support vector machines or random forests use manually selected features, while recently proposed deep learning methods extract features during training. Finally, this paper discusses extending these methods to more recent video coding formats such as Versatile Video Coding (VVC) and AOMedia Video 1 (AV1).
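A compact sketch of the recursive CTU-to-CU quadtree partitioning that depth-decision algorithms try to prune; the should_split callable stands in for the RD-cost search or a learned predictor, and the example policy is invented:

```python
def split_ctu(size, depth, should_split, max_depth=3):
    """Recursively partition a CTU into equally sized square CUs."""
    if depth == max_depth or not should_split(size, depth):
        return {"size": size, "depth": depth}
    return {"size": size, "depth": depth,
            "children": [split_ctu(size // 2, depth + 1, should_split, max_depth)
                         for _ in range(4)]}

# Stand-in policy: keep splitting blocks larger than 16x16.
tree = split_ctu(64, 0, lambda size, depth: size > 16)
```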
[207] | Michal Barcis, Agata Barcis, Nikolaos Tsiogkas, Hermann Hellwagner, Information Distribution in Multi-Robot Systems: Generic, Utility-Aware Optimization Middleware, In Frontiers in Robotics and AI, Frontiers Media SA, vol. 8, pp. 1-11, 2021.
Abstract: This work addresses the problem of what information is worth sending in a multi-robot system under generic constraints, e.g., limited throughput or energy. Our decision method is based on Monte Carlo Tree Search. It is designed as a transparent middleware that can be integrated into existing systems to optimize communication among robots. Furthermore, we introduce techniques to reduce the decision space of this problem to further improve the performance. We evaluate our approach using a simulation study and demonstrate its feasibility in a real-world environment by realizing a proof of concept in ROS 2 on mobile robots.
[206] | Fatima Abdullah, Dragi Kimovski, Radu Prodan, Kashif Munir, Handover authentication latency reduction using mobile edge computing and mobility patterns, In Computing, Springer Science and Business Media LLC, pp. 1-20, 2021.
Abstract: With the advancement of technology and the exponential growth of mobile devices, network traffic has increased manifold in cellular networks. For this reason, latency reduction has become a challenging issue for mobile devices. In order to achieve seamless connectivity and minimal disruption during movement, latency reduction is crucial in the handover authentication process. Handover authentication is a process in which the legitimacy of a mobile node is checked when it crosses the boundary of an access network. This paper proposes an efficient technique that utilizes the mobility patterns of mobile nodes and a mobile Edge computing framework to reduce handover authentication latency. The key idea of the proposed technique is to categorize mobile nodes on the basis of their mobility patterns. We perform simulations to measure the networking latency and use a queuing model to measure the processing time of an authentication query at an Edge server. The results show that the proposed approach reduces the handover authentication latency by up to 54% in comparison with the existing approach.
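The abstract names a queuing model without specifying it; assuming the common M/M/1 case for illustration, the mean time an authentication query spends at an edge server is W = 1/(μ − λ):

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean sojourn time W = 1 / (mu - lambda) of an M/M/1 queue;
    valid only while the server is not saturated (lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Illustrative rates in queries/second, not values from the paper:
print(mm1_response_time(arrival_rate=40, service_rate=50))  # 0.1 s on average
```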