[576] | Reza Farahani, Farzad Tashtarian, Alireza Erfanian, Christian Timmerer, Mohammad Ghanbari, Hermann Hellwagner, ES-HAS: an edge- and SDN-assisted framework for HTTP adaptive video streaming, In Proceedings of the 31st ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, ACM, pp. 50-57, 2021.
Abstract: Recently, HTTP Adaptive Streaming (HAS) has become the dominant video delivery technology over the Internet. In HAS, clients have full control over the media streaming and adaptation processes. Lack of coordination among the clients and lack of awareness of the network conditions may lead to sub-optimal user experience and resource utilization in a pure client-based HAS adaptation scheme. Software Defined Networking (SDN) has recently been considered to enhance the video streaming process. In this paper, we leverage the capabilities of SDN and Network Function Virtualization (NFV) to introduce an edge- and SDN-assisted video streaming framework called ES-HAS. We employ virtualized edge components to collect HAS clients' requests and retrieve networking information in a time-slotted manner. These components then run an optimization model to efficiently serve clients' requests by selecting an optimal cache server (with the shortest fetch time). In case of a cache miss, a client's request is served (i) by an optimal replacement quality (only better quality levels with minimum deviation) from a cache server, or (ii) by the originally requested quality level from the origin server. This approach is validated through experiments on a large-scale testbed, and the performance of our framework is compared to pure client-based strategies and the SABR system [12]. Although SABR and ES-HAS show (almost) identical performance in the number of quality switches, ES-HAS outperforms SABR in terms of playback bitrate and the number of stalls by at least 70% and 40%, respectively.
|
[575] | Alireza Erfanian, Optimizing QoE and Latency of Live Video Streaming Using Edge Computing and In-Network Intelligence, In Proceedings of the 12th ACM Multimedia Systems Conference, ACM, pp. 373-377, 2021.
Abstract: Live video streaming traffic and related applications have experienced significant growth in recent years. More users have started generating and delivering live streams with high quality (e.g., 4K resolution) through popular online streaming platforms such as YouTube, Twitch, and Facebook. Typically, the video contents are generated by streamers and watched by large audiences that are geographically distributed in various locations far away from the streamers' locations. Resource limitations in the network (e.g., bandwidth) are a challenging issue for network and video providers in meeting the users' requested quality. In this thesis, we will investigate optimizing QoE and end-to-end (E2E) latency of live video streaming by leveraging edge computing capabilities and in-network intelligence. We present four main research questions aiming to address the various challenges in optimizing live streaming QoE and E2E latency by employing edge computing and in-network intelligence.
|
[574] | Alireza Erfanian, Hadi Amirpour, Farzad Tashtarian, Christian Timmerer, Hermann Hellwagner, LwTE-Live: Light-weight Transcoding at the Edge for Live Streaming, In Proceedings of the Workshop on Design, Deployment, and Evaluation of Network-assisted Video Streaming, ACM, pp. 22-28, 2021.
Abstract: Live video streaming is widely embraced in video services, and its applications have attracted much attention in recent years. The increasing number of users demanding high-quality (e.g., 4K resolution) live videos increases the bandwidth utilization in the backhaul network. To decrease bandwidth utilization in HTTP Adaptive Streaming (HAS), on-the-fly transcoding approaches deliver only the highest bitrate representation to the edge and generate the other representations by transcoding at the edge. However, this approach is inefficient due to the high transcoding cost. In this paper, we propose LwTE-Live, a light-weight transcoding method at the edge for live applications, to decrease the bandwidth utilization and the overall live streaming cost. During the encoding processes at the origin server, the optimal encoding decisions are saved as metadata, and the metadata replaces the corresponding representation in the bitrate ladder. The significantly reduced size of the metadata compared to its corresponding representation decreases the bandwidth utilization. The extracted metadata is then utilized at the edge to decrease the transcoding time. We formulate the problem as a Mixed-Binary Linear Programming (MBLP) model to optimize the live streaming cost, including the bandwidth and computation costs. We compare the proposed model with state-of-the-art approaches, and the experimental results show that our proposed method reduces the cost and backhaul bandwidth utilization by up to 34% and 45%, respectively.
|
[573] | Ekrem Cetinkaya, Machine Learning Based Video Coding Enhancements for HTTP Adaptive Streaming, In Proceedings of the 12th ACM Multimedia Systems Conference, ACM, pp. 418-422, 2021.
Abstract: Video traffic comprises the majority of today's Internet traffic, and HTTP Adaptive Streaming (HAS) is the preferred method to deliver video content over the Internet. Increasing demand for video and the improvements in the video display conditions over the years caused an increase in the video coding complexity. This increased complexity brought the need for more efficient video streaming and coding solutions. The latest standard video codecs can reduce the size of the videos by using more efficient tools with higher time-complexities. The plans for integrating machine learning into upcoming video codecs raised the interest in applied machine learning for video coding. In this doctoral study, we aim to propose applied machine learning methods to video coding, focusing on HTTP adaptive streaming. We present four primary research questions to target different challenges in video coding for HTTP adaptive streaming.
|
[572] | Michal Barcis, Hermann Hellwagner, Information Distribution in Multi-Robot Systems: Adapting to Varying Communication Conditions, In 2021 Wireless Days (WD), IEEE, pp. 1-8, 2021.
Abstract: This work addresses the problem of application-layer congestion control in multi-robot systems (MRS). It is motivated by the fact that many MRS constrain the amount of transmitted data in order to avoid congestion in the network and ensure that critical messages get delivered. However, such constraints often need to be manually tuned and assume constant network capabilities. We introduce the adaptive goodput constraint, which smoothly adapts to varying communication conditions. It is suitable for long-term communication planning, where rapid changes are undesirable. We analyze the introduced method in a simulation-based study and show its practical applicability using mobile robots.
|
[571] | Hadi Amirpourazarian, Christian Timmerer, Mohammad Ghanbari, SLFC: Scalable Light Field Coding, In 2021 Data Compression Conference (DCC), IEEE, pp. 43-52, 2021.
Abstract: Light field imaging enables post-processing capabilities like refocusing, changing the view perspective, and depth estimation. As light field images are represented by multiple views, they contain a huge amount of data, which makes compression inevitable. Although there are some proposals to efficiently compress light field images, their main focus is on encoding efficiency. However, some important functionalities such as viewpoint and quality scalability, random access, and uniform quality distribution have not been addressed adequately. In this paper, an efficient light field image compression method based on a deep neural network is proposed, which classifies multiple views into various layers. In each layer, the target view is synthesized from the available views of previously encoded/decoded layers using a deep neural network. This synthesized view is then used as a virtual reference for the inter-coding of the target view. In this way, random access to an arbitrary view is provided. Moreover, uniform quality distribution among multiple views is addressed. At higher bitrates, where random access to an arbitrary view is more crucial, the required bitrate to access the requested view is minimized.
|
[570] | Hadi Amirpourazarian, Christian Timmerer, Mohammad Ghanbari, PSTR: Per-Title Encoding Using Spatio-Temporal Resolutions, In 2021 IEEE International Conference on Multimedia and Expo (ICME), IEEE, pp. 1-6, 2021.
Abstract: Current per-title encoding schemes encode the same video content (or snippets/subsets thereof) at various bitrates and spatial resolutions to find an optimal bitrate ladder for each video content. Compared to traditional approaches, in which a predefined, content-agnostic ("fit-to-all") encoding ladder is applied to all video contents, per-title encoding can result in (i) a significant decrease of storage and delivery costs and (ii) an increase in the Quality of Experience (QoE). In the current per-title encoding schemes, the bitrate ladder is optimized using only spatial resolutions, while we argue that with the emergence of high framerate videos, this principle can be extended to temporal resolutions as well. In this paper, we improve the per-title encoding for each content using spatio-temporal resolutions. Experimental results show that our proposed approach doubles the performance of bitrate saving by considering both temporal and spatial resolutions compared to considering only spatial resolutions.
|
[569] | Hadi Amirpour, Raimund Schatz, Christian Timmerer, Mohammad Ghanbari, On the Impact of Viewing Distance on Perceived Video Quality, In 2021 International Conference on Visual Communications and Image Processing (VCIP), IEEE, pp. 1-5, 2021.
Abstract: Due to the growing importance of optimizing the quality and efficiency of video streaming delivery, accurate assessment of user-perceived video quality becomes increasingly important. However, due to the wide range of viewing distances encountered in real-world viewing settings, the perceived video quality can vary significantly in everyday viewing situations. In this paper, we investigate and quantify the influence of viewing distance on perceived video quality. A subjective experiment was conducted with full HD sequences at three different fixed viewing distances, with each video sequence being encoded at three different quality levels. Our study results confirm that the viewing distance has a significant influence on the quality assessment. In particular, they show that an increased viewing distance generally leads to increased perceived video quality, especially at low media encoding quality levels. In this context, we also provide an estimation of potential bitrate savings that knowledge of actual viewing distance would enable in practice. Since current objective video quality metrics do not systematically take into account viewing distance, we also analyze and quantify the influence of viewing distance on the correlation between objective and subjective metrics. Our results confirm the need for distance-aware objective metrics when the accurate prediction of perceived video quality in real-world environments is required.
|
[568] | Hadi Amirpour, Hannaneh Barahouei Pasandi, Christian Timmerer, Mohammad Ghanbari, Improving Per-title Encoding for HTTP Adaptive Streaming by Utilizing Video Super-resolution, In 2021 International Conference on Visual Communications and Image Processing (VCIP), IEEE, pp. 1-5, 2021.
Abstract: In per-title encoding, to optimize a bitrate ladder over spatial resolution, each video segment is downscaled to a set of spatial resolutions, and they are all encoded at a given set of bitrates. To find the highest quality resolution for each bitrate, the low-resolution encoded videos are upscaled to the original resolution, and a convex hull is formed based on the scaled qualities. Deep learning-based video super-resolution (VSR) approaches show a significant gain over traditional upscaling approaches, and they are becoming more and more efficient over time. This paper improves the per-title encoding over the upscaling methods by using deep neural network-based VSR algorithms. Utilizing a VSR algorithm by improving the quality of low-resolution encodings can improve the convex hull. As a result, it will lead to an improved bitrate ladder. To avoid bandwidth wastage at perceptually lossless bitrates, a maximum threshold for the quality is set, and encodings beyond it are eliminated from the bitrate ladder. Similarly, a minimum threshold is set to avoid low-quality video delivery. The encodings between the maximum and minimum thresholds are selected based on one Just Noticeable Difference. Our experimental results show that the proposed per-title encoding results in a 24% bitrate reduction and 53% storage reduction compared to the state-of-the-art method.
|
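The convex-hull ladder construction described in the entry above can be sketched roughly as follows; the rate-quality measurements, the quality thresholds, and the function name are illustrative assumptions for this sketch, not the paper's actual method.

```python
# Hypothetical sketch of per-title ladder selection: from measured
# (bitrate, quality, resolution) points, keep only the Pareto-front
# (convex-hull-style) points, then clip the ladder between a minimum
# quality and a perceptually lossless maximum to avoid bandwidth waste.

def build_ladder(points, q_min=30.0, q_max=45.0):
    """points: list of (bitrate_kbps, quality, resolution) tuples."""
    # Sort by bitrate (highest quality first on ties) and keep only
    # points that strictly improve quality over cheaper encodings.
    front = []
    best_q = float("-inf")
    for br, q, res in sorted(points, key=lambda p: (p[0], -p[1])):
        if q > best_q:
            front.append((br, q, res))
            best_q = q
    # Discard encodings below the minimum and above the lossless threshold.
    return [(br, q, res) for br, q, res in front if q_min <= q <= q_max]

if __name__ == "__main__":
    measured = [
        (500, 28.0, "540p"), (1000, 33.0, "720p"), (1000, 31.0, "540p"),
        (3000, 41.0, "1080p"), (3000, 38.0, "720p"), (8000, 46.5, "1080p"),
    ]
    for rung in build_ladder(measured):
        print(rung)
```

Swapping a better upscaler (e.g., a VSR network) into the measurement step shifts the per-resolution quality values upward, which changes which points survive the Pareto filter and thus the resulting ladder.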
[567] | Jesus Aguilar-Armijo, Multi-access Edge Computing for Adaptive Bitrate Video Streaming, In Proceedings of the 12th ACM Multimedia Systems Conference, ACM, pp. 378-382, 2021.
Abstract: Video streaming is the most used service in mobile networks, and its usage will continue growing in the upcoming years. Due to this increase, content delivery, a key aspect of the video streaming service, should be improved to support the higher bandwidth demand while assuring a high quality of experience (QoE) for all users. Multi-access edge computing (MEC) is an emerging paradigm that brings computational power and storage closer to the user. It is seen in the industry as a key technology for 5G mobile networks, with the goals of reducing latency, ensuring highly efficient network operation, improving service delivery and offering an improved user experience, among others. In this doctoral study, we aim to leverage the possibilities of MEC to improve the content delivery of video streaming services. We present four main research questions to target the different challenges in content delivery for HTTP Adaptive Streaming.
|
[566] | Jesus Aguilar-Armijo, Christian Timmerer, Hermann Hellwagner, EADAS: Edge Assisted Adaptation Scheme for HTTP Adaptive Streaming, In 2021 IEEE 46th Conference on Local Computer Networks (LCN), IEEE, pp. 487-494, 2021.
Abstract: Mobile networks equipped with edge computing nodes enable access to information that can be leveraged to assist client-based adaptive bitrate (ABR) algorithms in making better adaptation decisions to improve both Quality of Experience (QoE) and fairness. For this purpose, we propose a novel on-the-fly edge mechanism, named EADAS (Edge Assisted Adaptation Scheme for HTTP Adaptive Streaming), located at the edge node that assists and improves the ABR decisions on-the-fly. EADAS proposes (i) an edge ABR algorithm to improve QoE and fairness for clients and (ii) a segment prefetching scheme. The results show a QoE increase of 4.6%, 23.5%, and 24.4% and a fairness increase of 11%, 3.4%, and 5.8% when using a buffer-based, a throughput-based, and a hybrid ABR algorithm, respectively, at the client compared with client-based algorithms without EADAS. Moreover, QoE and fairness among clients can be prioritized using parameters of the EADAS algorithm according to service providers’ requirements.
|
[565] | Anatoliy Zabrovskiy, Prateek Agrawal, Roland Matha, Christian Timmerer, Radu Prodan, ComplexCTTP: Complexity Class Based Transcoding Time Prediction for Video Sequences Using Artificial Neural Network, In 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), IEEE, pp. 316-325, 2020.
Abstract: HTTP Adaptive Streaming of video content is becoming an integral part of the Internet and accounts for the majority of today's traffic. Although Internet bandwidth is constantly increasing, video compression technology plays an important role, and the major challenge is to select and set up multiple video codecs, each with hundreds of transcoding parameters. Additionally, the transcoding speed depends directly on the selected transcoding parameters and the infrastructure used. Predicting transcoding time for multiple transcoding parameters with different codecs and processing units is a challenging task, as it depends on many factors. This paper provides a novel and fast method for transcoding time prediction using video content classification and neural network prediction. Our artificial neural network (ANN) model predicts the transcoding times of video segments for state-of-the-art video codecs based on transcoding parameters and content complexity. We evaluated our method for two video codecs/implementations (AVC/x264 and HEVC/x265) as part of large-scale HTTP Adaptive Streaming services. The ANN model of our method predicts the transcoding time with a mean absolute error (MAE) of 1.37 and 2.67 for the x264 and x265 codecs, respectively. For x264, this is an improvement of 22% compared to the state of the art.
|
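As a rough illustration of the idea in the entry above, the following toy stand-in predicts per-segment transcoding time from content-complexity features. A hand-rolled linear model trained by stochastic gradient descent replaces the paper's ANN, and all feature names, data, and function names are invented for this sketch.

```python
# Toy stand-in for complexity-based transcoding time prediction:
# fit a linear model on (bitrate, spatial complexity, temporal
# complexity) features and report the mean absolute error (MAE).

def train(X, y, lr=0.1, epochs=2000):
    """Per-sample SGD on squared error; returns weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = pred - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def mae(w, b, X, y):
    """Mean absolute prediction error, the metric quoted in the entry."""
    return sum(abs(sum(wj * xj for wj, xj in zip(w, xi)) + b - yi)
               for xi, yi in zip(X, y)) / len(y)

if __name__ == "__main__":
    # Invented features: (normalized bitrate, spatial, temporal complexity).
    X = [(0.1, 0.2, 0.3), (0.5, 0.4, 0.2), (0.9, 0.8, 0.7), (0.3, 0.6, 0.1)]
    y = [1.0, 2.0, 4.1, 1.7]  # invented per-segment transcoding times
    w, b = train(X, y)
    print(round(mae(w, b, X, y), 3))
```

A real deployment would replace the linear model with the paper's ANN and train per codec (x264, x265) on measured transcoding times.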
[564] | Venkata Phani Kumar Malladi, Christian Timmerer, Hermann Hellwagner, Mipso: Multi-Period Per-Scene Optimization For HTTP Adaptive Streaming, In 2020 IEEE International Conference on Multimedia and Expo (ICME), IEEE, pp. 1-6, 2020.
Abstract: Video delivery over the Internet has become more and more established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for Multi-Period per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, beneficial to both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexities of the video content, with the aim of achieving streams at either higher qualities or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10% or increases the visual quality of streams by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR) at the same bitrate compared to conventional approaches to video content delivery.
|
[563] | Christian Timmerer, Hermann Hellwagner, HTTP Adaptive Streaming: Where Is It Heading?, In Proceedings of the Brazilian Symposium on Multimedia and the Web, ACM, pp. 349-350, 2020.
Abstract: In this contribution, we present selected novel approaches and results of our research work in the ATHENA Christian Doppler Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services), a major research project at our department jointly funded by public sources and industry. By putting this work also into the context of related ongoing research activities, we aim at working out where HTTP Adaptive Streaming is currently heading.
|
[562] | Babak Taraghi, Anatoliy Zabrovskiy, Christian Timmerer, Hermann Hellwagner, CAdViSE: Cloud-based Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players, In Proceedings of the 11th ACM Multimedia Systems Conference, ACM, pp. 349-352, 2020.
Abstract: Attempting to cope with fluctuations of network conditions in terms of available bandwidth, latency and packet loss, and to deliver the highest quality of video (and audio) content to users, research on adaptive video streaming has attracted intense efforts from the research community and huge investments from technology giants. How successful these efforts and investments are, is a question that needs precise measurements of the results of those technological advancements. HTTP-based Adaptive Streaming (HAS) algorithms, which seek to improve video streaming over the Internet, introduce video bitrate adaptivity in a way that is scalable and efficient. However, how each HAS implementation takes into account the wide spectrum of variables and configuration options, brings a high complexity to the task of measuring the results and visualizing the statistics of the performance and quality of experience. In this paper, we introduce CAdViSE, our Cloud-based Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. The paper aims to demonstrate a test environment which can be instantiated in a cloud infrastructure, examines multiple media players with different network attributes at defined points of the experiment time, and finally concludes the evaluation with visualized statistics and insights into the results.
|
[561] | Natalia Sokolova, Mario Taschwer, Stephanie Sarny, Doris Putzgruber-Adamitsch, Klaus Schoeffmann, Pixel-Based Iris and Pupil Segmentation in Cataract Surgery Videos Using Mask R-CNN, In 2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops), IEEE, 2020.
Abstract: Automatically detecting clinically relevant events in surgery video recordings is becoming increasingly important for documentary, educational, and scientific purposes in the medical domain. From a medical image analysis perspective, such events need to be treated individually and associated with specific visible objects or regions. In the field of cataract surgery (lens replacement in the human eye), pupil reaction (dilation or restriction) during surgery may lead to complications and hence represents a clinically relevant event. Its detection requires automatic segmentation and measurement of pupil and iris in recorded video frames. In this work, we contribute to research on pupil and iris segmentation methods by (1) providing a dataset of 82 annotated images for training and evaluating suitable machine learning algorithms, and (2) applying the Mask R-CNN algorithm to this problem, which – in contrast to existing techniques for pupil segmentation – predicts free-form pixel-accurate segmentation masks for iris and pupil. The proposed approach achieves consistent high segmentation accuracies on several metrics while delivering an acceptable prediction efficiency, establishing a promising basis for further segmentation and event detection approaches on eye surgery videos.
|
[560] | Anandhakumar Palanisamy, Mirsat Sefidanoski, Spiros Koulouzis, Carlos Rubia, Nishant Saurabh, Radu Prodan, Decentralized Social Media Applications as a Service: a Car-Sharing Perspective, In 2020 IEEE Symposium on Computers and Communications (ISCC), IEEE, pp. 1-7, 2020.
Abstract: Social media applications are essential for next generation connectivity. Today, social media are centralized platforms with a single proprietary organization controlling the network and posing critical trust and governance issues over the created and propagated content. The ARTICONF project funded by the European Union’s Horizon 2020 program researches a decentralized social media platform based on a novel set of trustworthy, resilient and globally sustainable tools to fulfil the privacy, robustness and autonomy-related promises that proprietary social media platforms have failed to deliver so far. This paper presents the ARTICONF approach to a car-sharing use case application, as a new collaborative peer-to-peer model providing an alternative solution to private car ownership. We describe a prototype implementation of the car-sharing social media application and illustrate through real snapshots how the different ARTICONF tools support it in a simulated scenario.
|
[559] | Minh Nguyen, Christian Timmerer, Hermann Hellwagner, H2BR: An HTTP/2-based Retransmission Technique to Improve the QoE of Adaptive Video Streaming, In Proceedings of the 25th ACM Workshop on Packet Video, ACM, pp. 1-7, 2020.
Abstract: HTTP-based Adaptive Streaming (HAS) plays a key role in over-the-top video streaming. It contributes towards reducing the rebuffering duration of video playout by adapting the video quality to the current network conditions. However, it incurs variations of video quality in a streaming session because of throughput fluctuations, which impacts the user's Quality of Experience (QoE). Besides, many adaptive bitrate (ABR) algorithms choose the lowest-quality segments at the beginning of the streaming session to ramp up the playout buffer as soon as possible. Although this strategy decreases the startup time, users can be annoyed as they have to watch a low-quality video initially. In this paper, we propose an efficient retransmission technique, namely H2BR, to replace low-quality segments stored in the playout buffer with higher-quality versions by using features of HTTP/2, including (i) stream priority, (ii) server push, and (iii) stream termination. The experimental results show that H2BR helps users avoid watching low video quality during playback and improves the user's QoE. H2BR can reduce the time during which users suffer the lowest video quality by more than 70% and improve the QoE by up to 13%.
|
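The retransmission decision sketched in the entry above might look roughly like this; the buffer representation, bitrate ladder, and feasibility check are our own illustrative assumptions, not H2BR's actual algorithm.

```python
# Illustrative H2BR-style decision: upgrade a low-quality segment that is
# already in the playout buffer only if the higher-quality version can be
# downloaded before that segment's playback deadline (so no stall is risked).

def pick_upgrade(buffer, bitrates_kbps, throughput_kbps, seg_dur=4.0):
    """buffer: list of (playback_deadline_s, quality_index), earliest first.

    Returns (deadline, target_quality) for the first safe upgrade, else None.
    """
    top = len(bitrates_kbps) - 1
    for deadline, q in buffer:
        if q == top:
            continue  # already at the highest quality
        # Download time of the top-quality segment at current throughput.
        download_s = bitrates_kbps[top] * seg_dur / throughput_kbps
        if download_s < deadline:  # fits before this segment plays out
            return (deadline, top)
    return None  # no upgrade can be fetched safely

if __name__ == "__main__":
    buf = [(2.0, 0), (6.0, 0), (10.0, 2)]
    print(pick_upgrade(buf, bitrates_kbps=[500, 1500, 3000],
                       throughput_kbps=4000))
```

In the paper's setting, the client would then use HTTP/2 server push and stream priority to fetch the replacement and stream termination to abort it if the deadline approaches.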
[558] | Minh Nguyen, Hadi Amirpour, Christian Timmerer, Hermann Hellwagner, Scalable High Efficiency Video Coding based HTTP Adaptive Streaming over QUIC, In Proceedings of the Workshop on the Evolution, Performance, and Interoperability of QUIC, ACM, pp. 28-34, 2020.
Abstract: HTTP/2 has been explored widely for adaptive video streaming but still suffers from Head-of-Line blocking and three-way handshake delay due to TCP. Meanwhile, QUIC, running on top of UDP, can tackle these issues. In addition, although many adaptive bitrate (ABR) algorithms have been proposed for scalable and non-scalable video streaming, the literature lacks an algorithm designed for both types of video streaming approaches. In this paper, we investigate the impact of QUIC and HTTP/2 on the performance of ABR algorithms. Moreover, we propose an efficient approach for utilizing scalable video coding formats for adaptive video streaming that combines a traditional video streaming approach (based on non-scalable video coding formats) and a retransmission technique. The experimental results show that QUIC benefits significantly from our proposed method in the context of packet loss and retransmission. Compared to HTTP/2, it improves the average video quality and provides a smoother adaptation behavior. Finally, we demonstrate that our proposed method, originally designed for non-scalable video codecs, also works efficiently for scalable videos such as Scalable High Efficiency Video Coding (SHVC).
|
[557] | Philipp Moll, Veit Frick, Natascha Rauscher, Mathias Lux, How players play games, In Proceedings of the 12th ACM International Workshop on Immersive Mixed and Virtual Environment Systems, ACM, 2020.
Abstract: The popularity of computer games is remarkably high and is still growing every year. Despite this popularity and the economic importance of gaming, research in game design, or to be more precise, on game mechanics that can be used to improve the enjoyment of a game, is still scarce. In this paper, we analyze Fortnite, one of the currently most successful games, and observe how players play the game. We investigate what makes playing the game enjoyable by analyzing video streams of experienced players from game streaming platforms and by conducting a user study with players who are new to the game. We formulate four hypotheses about how game mechanics influence the way players interact with the game and how they influence player enjoyment. We present differences in player behavior between experienced players and beginners and discuss how game mechanics could be used to improve the enjoyment for beginners. In addition, we describe our approach to analyzing games without access to game-internal data by using a toolchain which automatically extracts game information from video streams.
|
[556] | Mohamed Ayoub Messous, Hermann Hellwagner, Sidi-Mohammed Senouci, Driton Emini, Dominik Schnieders, Edge Computing for Visual Navigation and Mapping in a UAV Network, In ICC 2020 - 2020 IEEE International Conference on Communications (ICC), IEEE, pp. 1-6, 2020.
Abstract: This research work presents conceptual considerations and quantitative evaluations of how integrating computation offloading to edge computing servers would offer a paradigm shift for the effective deployment of autonomous drones. The specific mission considered is the collaborative autonomous navigation and mapping of a 3D environment by a small drone network. To achieve this mission, each drone is required to complete a latency-critical, highly compute-intensive task in a timely manner. The proposed model decides for each task, while considering the impact on performance and mission requirements, whether to (i) compute it locally, (ii) offload it to the edge server, or (iii) offload it to the ground station. Extensive simulation work was performed to assess the effectiveness of the proposed scheme compared to other models.
|
[555] | Petra Mazdin, Michal Barcis, Hermann Hellwagner, Bernhard Rinner, Distributed Task Assignment in Multi-Robot Systems based on Information Utility, In 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), IEEE, pp. 734-740, 2020.
Abstract: Most multi-robot systems (MRS) need to coordinate the assignment of tasks to individual robots for efficient missions. Due to the dynamics, incomplete knowledge and changing requirements, the robots need to distribute their local state information within the MRS continuously during the mission. Since communication resources are limited and message transfers may be erroneous, the global state estimated by each robot may become inconsistent. This inconsistency may lead to degraded task assignment and mission performance. In this paper, we explore the effect and cost of communication and exploit information utility for online distributed task assignment. In particular, we model the usefulness of the transferred state information by its information utility and use it for controlling the distribution of local state information and for updating the global state. We compare our distributed, utility-based online task assignment with well-known centralized and auction-based methods and show how a substantial reduction of the communication effort still leads to successful mission completion. We demonstrate our approach in a wireless communication testbed using ROS2.
|
[554] | Andreas Leibetseder, Klaus Schoeffmann, surgXplore: Interactive Video Exploration for Endoscopy, In Proceedings of the 2020 International Conference on Multimedia Retrieval, ACM, pp. 397-401, 2020.
Abstract: Accumulating recordings of daily conducted surgical interventions such as endoscopic procedures over the long term generates very large video archives that are both difficult to search and explore. Since physicians utilize this kind of media routinely for documentation, treatment planning or education and training, it can be considered a crucial task to make said archives manageable with regard to discovering or retrieving relevant content. We present an interactive tool including a multitude of modalities for browsing, searching and filtering medical content, demonstrating its usefulness on over 140 hours of pre-processed laparoscopic surgery videos.
|
[553] | Andreas Leibetseder, Klaus Schoeffmann, lifeXplore at the Lifelog Search Challenge 2020, In Proceedings of the Third Annual Workshop on Lifelog Search Challenge, ACM, pp. 37-42, 2020.
Abstract: Since its first iteration in 2018, the Lifelog Search Challenge (LSC) -- an interactive competition for retrieving lifelogging moments -- has been co-located at the annual ACM International Conference on Multimedia Retrieval (ICMR) and has drawn international attention. With the goal of making an ever-growing public lifelogging dataset searchable, several teams develop systems for quickly solving time-limited queries during the challenge. Having participated in both previous LSC iterations, i.e. LSC2018 and LSC2019, we present our lifeXplore system -- a video exploration and retrieval tool combining feature map browsing, concept search and filtering as well as hand-drawn sketching. The system is improved by including the additional deep concept detector YOLO9000 and optical character recognition (OCR), as well as by adding uniform sampling as an alternative to the system's traditional underlying shot segmentation.
|
[552] | Dragi Kimovski, Dijana C. Bogatinoska, Narges Mehran, Aleksandar Karadimce, Natasa Paunkoska, Radu Prodan, Ninoslav Marina, Cloud-Edge Offloading Model for Vehicular Traffic Analysis, In 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (BDCloud), IEEE, pp. 746-753, 2020.
Abstract: The proliferation of smart sensing and computing devices, capable of collecting a vast amount of data, has made the gathering of the necessary vehicular traffic data relatively easy. However, the analysis of these big data sets requires computational resources, which are currently provided by Cloud Data Centers. Nevertheless, Cloud Data Centers can have unacceptably high latency for vehicular analysis applications with strict time requirements. The recent introduction of the Edge computing paradigm, as an extension of Cloud services, has partially moved the processing of big data closer to the data sources, thus addressing this issue. Unfortunately, this introduced multiple challenges related to resource management. Therefore, we present a model for the scheduling of vehicular traffic analysis applications with partial task offloading across the Cloud-Edge continuum. The approach represents the traffic applications as a set of interconnected tasks composed into a workflow that can be partially offloaded to the Edge. We evaluated the approach through a simulated Cloud-Edge environment that considers two representative vehicular traffic applications with a focus on video stream analysis. Our results show that the presented approach reduces the application response time by up to eight times while improving energy efficiency by a factor of four.
|