[927] | Vignesh Menon, Hadi Amirpourazarian, Christian Timmerer, Mohammad Ghanbari, Efficient Multi-Encoding Algorithms for HTTP Adaptive Bitrate Streaming, In 2021 Picture Coding Symposium (PCS), IEEE, pp. 1-5, 2021.
[bib][url] [doi] [abstract]
Abstract: Since video accounts for the majority of today’s internet traffic, the popularity of HTTP Adaptive Streaming (HAS) is increasing steadily. In HAS, each video is encoded at multiple bitrates and spatial resolutions (i.e., representations) to adapt to a heterogeneity of network conditions, device characteristics, and end-user preferences. Most of the streaming services utilize cloud-based encoding techniques which enable a fully parallel encoding process to speed up the encoding and consequently to reduce the overall time complexity. State-of-the-art approaches further improve the encoding process by utilizing encoder analysis information from already encoded representation(s) to improve the encoding time complexity of the remaining representations. In this paper, we investigate various multi-encoding algorithms (i.e., multi-rate and multi-resolution) and propose novel multi-encoding algorithms for large-scale HTTP Adaptive Streaming deployments. Experimental results demonstrate that the proposed multi-encoding algorithm optimized for the highest compression efficiency reduces the overall encoding time by 39% with a 1.5% bitrate increase compared to stand-alone encodings. Its optimized version for the highest time savings reduces the overall encoding time by 50% with a 2.6% bitrate increase compared to stand-alone encodings.
|
[926] | Narges Mehran, Dragi Kimovski, Radu Prodan, A Two-Sided Matching Model for Data Stream Processing in the Cloud–Fog Continuum, In 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing (CCGrid), IEEE, pp. 514-524, 2021.
[bib][url] [doi] [abstract]
Abstract: Latency-sensitive and bandwidth-intensive stream processing applications are dominant traffic generators over the Internet network. A stream consists of a continuous sequence of data elements, which require processing in nearly real-time. To improve communication latency and reduce the network congestion, Fog computing complements the Cloud services by moving the computation towards the edge of the network. Unfortunately, the heterogeneity of the new Cloud – Fog continuum raises important challenges related to deploying and executing data stream applications. We explore in this work a two-sided stable matching model called Cloud – Fog to data stream application matching (CODA) for deploying a distributed application represented as a workflow of stream processing microservices on heterogeneous computing continuum resources. In CODA, the application microservices rank the continuum resources based on their microservice stream processing time, while resources rank the stream processing microservices based on their residual bandwidth. A stable many-to-one matching algorithm assigns microservices to resources based on their mutual preferences, aiming to optimize the complete stream processing time on the application side, and the total streaming traffic on the resource side. We evaluate the CODA algorithm using simulated and real-world Cloud – Fog experimental scenarios. We achieved 11-45% lower stream processing time and 1.3-20% lower streaming traffic compared to related state-of-the-art approaches.
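The stable many-to-one matching described in this abstract belongs to the deferred-acceptance (Gale–Shapley) family. The following is a minimal, hypothetical sketch of a microservice-proposing variant; the preference lists, capacities, and function names are illustrative assumptions, not CODA's actual implementation.

```python
def match_microservices(ms_prefs, res_prefs, capacity):
    """Microservice-proposing deferred acceptance, many-to-one:
    each resource accepts up to capacity[r] microservices,
    keeping those it ranks best (lower rank index = preferred)."""
    rank = {r: {m: i for i, m in enumerate(pref)} for r, pref in res_prefs.items()}
    next_prop = {m: 0 for m in ms_prefs}   # index of the next resource m will try
    assigned = {r: [] for r in res_prefs}
    free = list(ms_prefs)
    while free:
        m = free.pop()
        if next_prop[m] >= len(ms_prefs[m]):
            continue                        # m exhausted its list, stays unmatched
        r = ms_prefs[m][next_prop[m]]
        next_prop[m] += 1
        assigned[r].append(m)
        assigned[r].sort(key=lambda x: rank[r][x])
        if len(assigned[r]) > capacity[r]:
            worst = assigned[r].pop()       # evict the least-preferred proposer
            free.append(worst)
    return assigned
```

In the paper's setting, a microservice's preference order would come from predicted stream processing times and a resource's order from residual bandwidth; here both are supplied as plain ranked lists.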
|
[925] | Roland Matha, Dragi Kimovski, Anatoliy Zabrovskiy, Christian Timmerer, Radu Prodan, Where to Encode: A Performance Analysis of x86 and Arm-based Amazon EC2 Instances, In 2021 IEEE 17th International Conference on eScience (eScience), IEEE, pp. 118-127, 2021.
[bib][url] [doi] [abstract]
Abstract: Video streaming has become an integral part of the Internet. To efficiently utilise the limited network bandwidth, it is essential to encode the video content. However, encoding is a computationally intensive task, involving high-performance resources provided by private infrastructures or public clouds. Public clouds, such as Amazon EC2, provide a large portfolio of services and instances optimized for specific purposes and budgets. The majority of Amazon’s instances use x86 processors, such as Intel Xeon or AMD EPYC. However, following recent trends in computer architecture, Amazon introduced Arm-based instances that promise up to 40% better cost-performance ratio than comparable x86 instances for specific workloads. We evaluate in this paper the video encoding performance of x86 and Arm instances of four instance families using the latest FFmpeg version and two video codecs. We examine the impact of encoding parameters, such as different presets and bitrates, on the time and cost of encoding. Our experiments reveal that Arm instances show high time- and cost-saving potential of up to 33.63% for specific bitrates and presets, especially for the x264 codec. However, the x86 instances are more general and achieve low encoding times regardless of the codec.
|
[924] | Vishu Madaan, Aditya Roy, Charu Gupta, Prateek Agrawal, Anand Sharma, Cristian Bologa, Radu Prodan, XCOVNet: Chest X-ray Image Classification for COVID-19 Early Detection Using Convolutional Neural Networks, In New Generation Computing, Springer Science and Business Media LLC, pp. 1-15, 2021.
[bib][url] [doi] [abstract]
Abstract: The COVID-19 (also known as SARS-COV-2) pandemic has spread across the entire world. It is a contagious disease that spreads easily from one person to another through direct contact, and is classified by experts into five categories: asymptomatic, mild, moderate, severe, and critical. Already more than 66 million people were infected worldwide, with more than 22 million active patients as of 5 December 2020, and the rate is accelerating. More than 1.5 million patients (approximately 2.5% of total reported cases) across the world lost their lives. In many places, COVID-19 detection takes place through reverse transcription polymerase chain reaction (RT-PCR) tests, which may take longer than 48 h. This is one major reason for its severity and rapid spread. We propose in this paper a two-phase X-ray image classification approach called XCOVNet for early COVID-19 detection using a convolutional neural network model. XCOVNet detects COVID-19 infections in chest X-ray patient images in two phases. The first phase pre-processes a dataset of 392 chest X-ray images of which half are COVID-19 positive and half are negative. The second phase trains and tunes the neural network model to achieve a 98.44% accuracy in patient classification.
|
[923] | Zezhong Lv, Qing Xu, Klaus Schoeffmann, Simon Parkinson, A Jensen-Shannon Divergence Driven Metric of Visual Scanning Efficiency Indicates Performance of Virtual Driving, In 2021 IEEE International Conference on Multimedia and Expo (ICME), IEEE, pp. 1-6, 2021.
[bib][url] [doi] [abstract]
Abstract: Visual scanning plays an important role in sampling visual information from the surrounding environments for a lot of everyday sensorimotor tasks, such as driving. In this paper, we consider the problem of visual scanning mechanism underpinning sensorimotor tasks in 3D dynamic environments. We exploit the use of eye tracking data as a behaviometric, for indicating the visuo-motor behavioral measure in the context of virtual driving. A new metric of visual scanning efficiency (VSE), which is defined as a mathematical divergence between a fixation distribution and a distribution of optical flows induced by fixations, is proposed by making use of a widely-known information theoretic tool, namely the square root of Jensen-Shannon divergence. Psychophysical eye tracking studies, in virtual reality based driving, are conducted to reveal that the new metric of visual scanning efficiency can be employed very well as a proxy evaluation for driving performance. These results suggest that the exploitation of eye tracking data provides an effective behaviometric for sensorimotor activities.
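The square root of the Jensen-Shannon divergence named in this abstract can be sketched as follows. This is an illustrative stand-alone implementation, not the paper's code; the discrete distribution inputs and base-2 logarithm (which bounds the metric in [0, 1]) are assumptions.

```python
import math

def jensen_shannon_distance(p, q):
    """Square root of the Jensen-Shannon divergence between two discrete
    probability distributions (base-2 logs); a bounded metric in [0, 1]."""
    def kl(a, b):
        # Kullback-Leibler divergence; terms with a_i == 0 contribute 0,
        # and b_i > 0 wherever a_i > 0 because b is the mixture below.
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return math.sqrt(max(jsd, 0.0))  # clamp tiny negative rounding error
```

In the paper's setting, the two inputs would be a fixation distribution and a distribution of fixation-induced optical flows; here any two equal-length distributions work.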
|
[922] | Ines Krajger, Mathias Lux, Erich J. Schwarz, Digitalization of an Educational Business Model Game, Chapter in Educating Engineers for Future Industrial Revolutions, Springer International Publishing, vol. 1329, pp. 241-252, 2021.
[bib] [doi] [abstract]
Abstract: Entrepreneurship Education is an important field of entrepreneurship research and has become a part of many programs of business and engineering schools. Educational games are a powerful tool to create a motivating learning environment. With the goal of investigating digitalization of business games, which are typically played in large groups and face to face, we particularly focus on the use case of the business model game called “inspire! build your business”.
|
[921] | Daniele Lorenzi, Minh Nguyen, Farzad Tashtarian, Simone Milani, Hermann Hellwagner, Christian Timmerer, Days of Future Past, In Proceedings of the 2021 Workshop on Evolution, Performance and Interoperability of QUIC, ACM, pp. 8-14, 2021.
[bib][url] [doi] [abstract]
Abstract: HTTP Adaptive Streaming (HAS) has become a predominant technique for delivering videos on the Internet. Due to its adaptive behavior according to changing network conditions, it may result in video quality variations that negatively impact the Quality of Experience (QoE) of the user. In this paper, we propose Days of Future Past, an optimization-based Adaptive Bitrate (ABR) algorithm over HTTP/3. Days of Future Past takes advantage of an optimization model and HTTP/3 features, including (i) stream multiplexing and (ii) request cancellation. We design a Mixed Integer Linear Programming (MILP) model that determines the optimal video qualities of both the next segment to be requested and the segments currently located in the buffer. If better qualities for buffered segments are found, the client will send corresponding HTTP GET requests to retrieve them. Multiple segments (i.e., retransmitted segments) might be downloaded simultaneously to upgrade some buffered but not yet played segments to avoid quality decreases, using the stream multiplexing feature of QUIC. HTTP/3's request cancellation is used in case retransmitted segments would arrive at the client after their playout time. The experimental results show that our proposed method is able to improve the QoE by up to 33.9%.
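The joint decision the abstract describes (quality for the next segment plus possible upgrades of buffered segments under a download budget) can be illustrated with a tiny brute-force stand-in for the MILP. This sketch is not the paper's model: the bitrate ladder, budget, segment length, and the simple "maximize total bitrate" objective are all assumptions for illustration.

```python
from itertools import product

def plan_qualities(bitrates, buffered, budget, seg_seconds=4):
    """Brute-force stand-in for the MILP: pick a quality level for the next
    segment and optional upgrades for buffered segments, maximizing total
    selected bitrate subject to a download-budget constraint.
    bitrates: ladder in Mbps; buffered: current level per buffered segment;
    budget: available download volume in Mbit."""
    levels = range(len(bitrates))
    best, best_value = None, -1
    for choice in product(levels, repeat=1 + len(buffered)):
        nxt, upgrades = choice[0], choice[1:]
        # upgrades may only keep or increase a buffered segment's level
        if any(u < cur for u, cur in zip(upgrades, buffered)):
            continue
        # cost: full download of next segment + re-download of upgraded ones
        cost = bitrates[nxt] * seg_seconds + sum(
            bitrates[u] * seg_seconds
            for u, cur in zip(upgrades, buffered) if u > cur)
        if cost > budget:
            continue
        value = bitrates[nxt] + sum(bitrates[u] for u in upgrades)
        if value > best_value:
            best, best_value = (nxt, list(upgrades)), value
    return best
```

A real MILP solver replaces this exhaustive search, and the paper's objective additionally accounts for QoE factors such as quality switches, but the feasibility structure is the same.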
|
[920] | Jakub Lokoc, Patrik Vesely, Frantisek Mejzlik, Gregor Kovalcik, Tomas Soucek, Luca Rossetto, Klaus Schoeffmann, Werner Bailer, Cathal Gurrin, Loris Sauter, Jaeyub Song, Stefanos Vrochidis, Jiaxin Wu, Björn Thor Jonsson, Is the Reign of Interactive Search Eternal? Findings from the Video Browser Showdown 2020, In ACM Transactions on Multimedia Computing, Communications, and Applications, Association for Computing Machinery (ACM), vol. 17, no. 3, pp. 1-26, 2021.
[bib][url] [doi] [abstract]
Abstract: Comprehensive and fair performance evaluation of information retrieval systems represents an essential task for the current information age. Whereas Cranfield-based evaluations with benchmark datasets support development of retrieval models, significant evaluation efforts are required also for user-oriented systems that try to boost performance with an interactive search approach. This article presents findings from the 9th Video Browser Showdown, a competition that focuses on a legitimate comparison of interactive search systems designed for challenging known-item search tasks over a large video collection. During previous installments of the competition, the interactive nature of participating systems was a key feature to satisfy known-item search needs, and this article continues to support this hypothesis. Despite the fact that top-performing systems integrate the most recent deep learning models into their retrieval process, interactive searching remains a necessary component of successful strategies for known-item search tasks. Alongside the description of competition settings, evaluated tasks, participating teams, and overall results, this article presents a detailed analysis of query logs collected by the top three performing systems, SOMHunter, VIRET, and vitrivr. The analysis provides a quantitative insight into the observed performance of the systems and constitutes a new baseline methodology for future events. The results reveal that the top two systems mostly relied on temporal queries before a correct frame was identified. An interaction log analysis complements the result log findings and points to the importance of result set and video browsing approaches. Finally, various outlooks are discussed in order to improve the Video Browser Showdown challenge in the future.
|
[919] | Andreas Leibetseder, Klaus Schoeffmann, lifeXplore at the Lifelog Search Challenge 2021, In Proceedings of the 4th Annual on Lifelog Search Challenge, ACM, pp. 23-28, 2021.
[bib][url] [doi] [abstract]
Abstract: Since its first iteration in 2018, the Lifelog Search Challenge (LSC) continues to rise in popularity as an interactive lifelog data retrieval competition, co-located at the ACM International Conference on Multimedia Retrieval (ICMR). The goal of this annual live event is to search a large corpus of lifelogging data for specifically announced memories using a purposefully developed tool within a limited amount of time. As long-standing participants, we present our improved lifeXplore -- a retrieval system combining chronologic day summary browsing with interactive combinable concept filtering. Compared to previous versions, the tool is improved by incorporating temporal queries, advanced day summary features as well as usability improvements.
|
[918] | Andreas Leibetseder, Klaus Schoeffmann, Less is More - diveXplore 5.0 at VBS 2021, Chapter in MultiMedia Modeling, Springer International Publishing, no. 12573, pp. 455-460, 2021.
[bib][url] [doi] [abstract]
Abstract: As a longstanding participating system in the annual Video Browser Showdown (VBS2017-VBS2020) as well as in two iterations of the more recently established Lifelog Search Challenge (LSC2018-LSC2019), diveXplore is developed as a feature-rich Deep Interactive Video Exploration system. After its initial successful employment as a competitive tool at the challenges, its performance, however, declined as new features were introduced, increasing its overall complexity. We mainly attribute this to the fact that many additions to the system needed to revolve around the system’s core element – an interactive self-organizing browsable feature map, which, as an integral component, did not accommodate the addition of new features well. Therefore, counteracting said performance decline, the VBS 2021 version constitutes a completely rebuilt version 5.0, implemented from scratch with the aim of greatly reducing the system’s complexity as well as keeping proven useful features in a modular manner.
|
[917] | Andreas Leibetseder, Klaus Schoeffmann, Joerg Keckstein, Simon Keckstein, Post-surgical Endometriosis Segmentation in Laparoscopic Videos, In 2021 International Conference on Content-Based Multimedia Indexing (CBMI), IEEE, pp. 1-4, 2021.
[bib][url] [doi] [abstract]
Abstract: Endometriosis is a common women's condition exhibiting a manifold visual appearance in various body-internal locations. Having such properties makes its identification very difficult and error-prone, at least for laymen and non-specialized medical practitioners. In an attempt to provide assistance to gynecologic physicians treating endometriosis, this demo paper describes a system that is trained to segment one frequently occurring visual appearance of endometriosis, namely dark endometrial implants. The system is capable of analyzing laparoscopic surgery videos, annotating identified implant regions with multi-colored overlays and displaying a detection summary for improved video browsing.
|
[916] | Dragi Kimovski, Roland Matha, Josef Hammer, Narges Mehran, Hermann Hellwagner, Radu Prodan, Cloud, Fog, or Edge: Where to Compute?, In IEEE Internet Computing, Institute of Electrical and Electronics Engineers (IEEE), vol. 25, no. 4, pp. 30-36, 2021.
[bib][url] [doi] [abstract]
Abstract: The computing continuum extends the high-performance cloud data centers with energy-efficient and low-latency devices close to the data sources located at the edge of the network. However, the heterogeneity of the computing continuum raises multiple challenges related to application management. These include where to offload an application – from the cloud to the edge – to meet its computation and communication requirements. To support these decisions, we provide in this article a detailed performance and carbon footprint analysis of a selection of use case applications with complementary resource requirements across the computing continuum over a real-life evaluation testbed.
|
[915] | Dragi Kimovski, Narges Mehran, Christopher Emanuel Kerth, Radu Prodan, Mobility-Aware IoT Applications Placement in the Cloud Edge Continuum, In IEEE Transactions on Services Computing, Institute of Electrical and Electronics Engineers (IEEE), pp. 1-14, 2021.
[bib][url] [doi] [abstract]
Abstract: The Edge computing extension of the Cloud services towards the network boundaries raises important placement challenges for IoT applications running in a heterogeneous environment with limited computing capacities. Unfortunately, existing works only partially address this challenge by optimizing a single or aggregate objective (e.g., response time) and not considering the edge devices' mobility and resource constraints. To address this gap, we propose a novel mobility-aware multi-objective IoT application placement (mMAPO) method in the Cloud–Edge Continuum that optimizes completion time, energy consumption, and economic cost as conflicting objectives. mMAPO utilizes a Markov model for predictive analysis of the Edge device mobility and constrains the optimization to devices that do not frequently move through the network. We evaluate the quality of the mMAPO placements using simulation and real-world experimentation on two IoT applications. Compared to related work, mMAPO reduces the economic cost by 28% and decreases the completion time by 80% while maintaining a stable energy consumption.
|
[914] | Dragi Kimovski, Roland Matha, Gabriel Iuhasz, Fabrizio Marozzo, Dana Petcu, Radu Prodan, Autotuning of Exascale Applications With Anomalies Detection, In Frontiers in Big Data, Frontiers Media (SA), vol. 4, pp. 1-14, 2021.
[bib][url] [doi] [abstract]
Abstract: The execution of complex distributed applications in exascale systems faces many challenges, as it involves empirical evaluation of countless code variations and application runtime parameters over a heterogeneous set of resources. To mitigate these challenges, the research field of autotuning has gained momentum. The autotuning automates identifying the most desirable application implementation in terms of code variations and runtime parameters. However, the complexity and size of the exascale systems make the autotuning process very difficult, especially considering the number of parameter variations that have to be identified. Therefore, we introduce a novel approach for autotuning exascale applications based on a genetic multi-objective optimization algorithm integrated within the ASPIDE exascale computing framework. The approach considers multi-dimensional search space with support for pluggable objective functions, including execution time and energy requirements. Furthermore, the autotuner employs a machine learning-based event detection approach to detect events and anomalies during application execution, such as hardware failures or communication bottlenecks.
|
[913] | Roman Dumitru, Nikolay Nikolov, Brian Elvesater, Ahmet Soylu, Radu Prodan, Dragi Kimovski, Andrea Marrella, Francesco Leotta, Dario Benvenuti, Mihhail Matskin, Giannis Ledakis, Anthony Simonet-Boulogne, Fernando Perales, Evgeny Kharlamov, Alexandre Ulisses, Arnor Solberg, Raffaele Ceccarelli, DataCloud: Enabling the Big Data Pipelines on the Computing Continuum, RCIS '21 Proceedings of the 15th International Conference on Research Challenges in Information Science, 2021.
[bib][url] [doi] [abstract]
Abstract: With the recent developments of Internet of Things (IoT) and cloud-based technologies, massive amounts of data are generated by heterogeneous sources and stored through dedicated cloud solutions. Often organizations generate much more data than they are able to interpret, and current Cloud Computing technologies cannot fully meet the requirements of Big Data processing applications and their data transfer overheads. Much of these data are stored for compliance purposes only but not used and turned into value, thus becoming Dark Data, which are not only an untapped value but also pose a risk for organizations. To guarantee a better exploitation of Dark Data, the DataCloud project aims to realize novel methods and tools for effective and efficient management of the Big Data Pipeline lifecycle encompassing the Computing Continuum. Big Data pipelines are composite pipelines for processing data with nontrivial properties, commonly referred to as the Vs of Big Data (e.g., volume, velocity, value, etc.). Tapping their potential is a key aspect to leverage Dark Data, although it requires going beyond the current approaches and frameworks for Big Data processing. In this respect, the concept of Computing Continuum extends the traditional centralised Cloud Computing with Edge and Fog computing in order to ensure low latency pre-processing and filtering close to the data sources. This prevents overwhelming the centralised cloud data centres and enables new opportunities for supporting Big Data pipelines.
|
[912] | Yasir Noman Khalid, Muhammad Aleem, Usman Ahmed, Radu Prodan, Muhammad Arshad Islam, Muhammad Azhar Iqbal, FusionCL: a machine-learning based approach for OpenCL kernel fusion to increase system performance, In Computing, Springer Science and Business Media LLC, pp. 1-32, 2021.
[bib][url] [doi] [abstract]
Abstract: Employing general-purpose graphics processing units (GPGPU) with the help of OpenCL has resulted in greatly reducing the execution time of data-parallel applications by taking advantage of the massive available parallelism. However, when a small data size application is executed on a GPU, there is a wastage of GPU resources as the application cannot fully utilize GPU compute-cores. There is no mechanism to share a GPU between two kernels due to the lack of operating system support on GPUs. In this paper, we propose a GPU sharing mechanism between two kernels that leads to increased GPU occupancy and, as a result, reduced execution time of a job pool. However, if a pair of kernels competes for the same set of resources (i.e., both applications are compute-intensive or memory-intensive), kernel fusion may also result in a significant increase in the execution time of the fused kernels. Therefore, it is pertinent to select an optimal pair of kernels for fusion that will result in significant speedup over their serial execution. This research presents FusionCL, a machine learning-based GPU sharing mechanism between a pair of OpenCL kernels. FusionCL identifies each pair of kernels (from the job pool) which are suitable candidates for fusion using a machine learning-based fusion suitability classifier. Thereafter, from all the candidates, it selects the pair of candidate kernels that will produce maximum speedup after fusion over their serial execution, using a fusion speedup predictor. The experimental evaluation shows that the proposed kernel fusion mechanism reduces execution time by 2.83× when compared to a baseline scheduling scheme. When compared to the state-of-the-art, the reduction in execution time is up to 8%.
|
[911] | Vladislav Kashansky, Radu Prodan, Gleb Radchenko, Some aspects of the workflow scheduling in the computing continuum systems, In 9th International Conference "Distributed Computing and Grid Technologies in Science and Education", Crossref, pp. 106-110, 2021.
[bib][url] [doi] [abstract]
Abstract: Contemporary computing systems are commonly characterized in terms of data-intensive workflows that are managed by utilizing a large number of heterogeneous computing and storage elements interconnected through complex communication topologies. As the scale of the system grows and workloads become more heterogeneous in both inner structure and arrival patterns, the scheduling problem becomes exponentially harder, requiring problem-specific heuristics. Despite several decades of active research, one issue that still requires effort is enabling efficient workflow scheduling in such complex environments while preserving the robustness of the results. Moreover, a recent research trend coined under the term "computing continuum" prescribes convergence of multi-scale computational systems with complex spatio-temporal dynamics and diverse sets of management policies. This paper contributes a set of recommendations and a brief analysis of the existing scheduling algorithms.
|
[910] | Vladislav Kashansky, Nishant Saurabh, Radu Prodan, Aso Validi, Cristina Olaverri-Monreal, Renate Burian, Gerhard Burian, Dimo Hirsch, Yisheng Lv, Fei-Yue Wang, Hai Zhuge, The ADAPT Project: Adaptive and Autonomous Data Performance Connectivity and Decentralized Transport Network, In Proceedings of the Conference on Information Technology for Social Good (GoodIT 2021), ACM, pp. 115-120, 2021.
[bib][url] [doi] [abstract]
Abstract: The ADAPT project started during the most critical phase of the COVID-19 outbreak in Europe, when the demand for Personal Protective Equipment (PPE) from each country's healthcare system surpassed national stock amounts. Due to national shutdowns, reduced transport logistics, and containment measures at the federal and provincial levels, the authorities could not meet the healthcare system's rising demand for PPE. Fortunately, PPE production capacities in China have recovered (and expanded), through which Austria can now obtain the PPE it needs to protect its citizens. ADAPT develops an adaptive and autonomous decision-making network to support the stakeholders involved along the PPE supply chain to save and protect human lives. The ADAPT decentralized blockchain platform optimizes supply, demand, and transport capacities between China and Austria with transparent, real-time certification checks on equipment, production documentation, and intelligent decision-making capabilities at all levels of this multidimensional logistic problem.
|
[909] | Vladislav Kashansky, Gleb Radchenko, Radu Prodan, Monte Carlo Approach to the Computational Capacities Analysis of the Computing Continuum, Chapter in Computational Science (ICCS 2021), Springer International Publishing, pp. 779-793, 2021.
[bib][url] [doi] [abstract]
Abstract: This article proposes an approach to the problem of computational capacities analysis of the computing continuum via the theoretical framework of equilibrium phase transitions and numerical simulations. We introduce the concept of phase transitions in the computing continuum and show how this phenomenon can be explored in the context of workflow makespan, which we treat as an order parameter. We simulate the behavior of the computational network in the equilibrium regime within the framework of the XY-model defined over a complex agent network with Barabasi-Albert topology. More specifically, we define a Hamiltonian over the complex network topology and sample the resulting spin-orientation distribution with the Metropolis-Hastings technique. The key aspect of the paper is the derivation of the bandwidth matrix as the emergent effect of the “low-level” collective spin interaction. This allows us to study the first order approximation to the makespan of the “high-level” system-wide workflow model in the presence of data-flow anisotropy and phase transitions of the bandwidth matrix controlled by means of a “noise regime” parameter η. For this purpose, we have built a simulation engine in Python 3.6. Simulation results confirm the existence of the phase transition, revealing complex transformations in the computational abilities of the agents. A notable feature is that the bandwidth distribution undergoes a critical transition from the single- to the multi-mode case. Our simulations generally open new perspectives for reproducible comparative performance analysis of novel and classic scheduling algorithms.
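The simulation setup named in this abstract (XY-model spins on a Barabasi-Albert network sampled with Metropolis-Hastings) can be sketched minimally as follows. This is not the paper's simulation engine: the graph size, temperature, Hamiltonian normalization, and pure-Python structure are illustrative assumptions.

```python
import math
import random

def ba_graph(n, m, rng):
    """Barabasi-Albert preferential attachment: each new node links to m
    distinct existing nodes, chosen proportionally to current degree
    (sampled via a list holding each node once per incident edge)."""
    edges = []
    repeated = list(range(m))  # seed nodes, then one entry per edge endpoint
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for t in chosen:
            edges.append((new, t))
            repeated.extend([new, t])
    return edges

def metropolis_xy(edges, n, temperature, sweeps, rng):
    """Metropolis-Hastings sampling of XY spins theta_i on a graph with
    Hamiltonian H = -sum_{(i,j)} cos(theta_i - theta_j)."""
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(sweeps * n):
        i = rng.randrange(n)
        proposal = rng.uniform(0, 2 * math.pi)
        # energy change of flipping spin i to the proposed angle
        dE = sum(math.cos(theta[i] - theta[j]) - math.cos(proposal - theta[j])
                 for j in nbrs[i])
        if dE <= 0 or rng.random() < math.exp(-dE / temperature):
            theta[i] = proposal
    return theta
```

At low temperature the sampled spins tend to align (the ordered phase); raising the temperature disorders them, which is the kind of transition the paper exploits when deriving its bandwidth matrix.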
|
[908] | Vladislav Kashanskii, Gleb Radchenko, Radu Prodan, Anatoliy Zabrovskiy, Prateek Agrawal, Automated Workflows Scheduling via Two-Phase Event-based MILP Heuristic for MRCPSP Problem, Online Publication (Abstract), 2021.
[bib][url] [abstract]
Abstract: In today’s reality, massive amounts of data-intensive tasks are managed by utilizing a large number of heterogeneous computing and storage elements interconnected through high-speed communication networks. However, one issue that still requires research effort is to enable efficient workflow scheduling in such complex environments. As the scale of the system grows and the workloads become more heterogeneous in inner structure and arrival patterns, the scheduling problem becomes exponentially harder, requiring problem-specific heuristics. Many techniques evolved to tackle this problem, including but not limited to Heterogeneous Earliest Finish Time (HEFT), Dynamic Scaling Consolidation Scheduling (DSCS), Partitioned Balanced Time Scheduling (PBTS), Deadline Constrained Critical Path (DCCP), and Partition Problem-based Dynamic Provisioning Scheduling (PPDPS). In this talk, we will discuss a two-phase heuristic for makespan-optimized assignment of tasks and computing machines on large-scale computing systems, consisting of a matching phase with a subsequent event-based MILP method for schedule generation. We evaluated the scalability of the heuristic using the Constraint Integer Programming (SCIP) solver with various configurations based on data sets provided by the MACS framework. Preliminary results show that the model provides near-optimal assignments and schedules for workflows composed of up to 100 tasks with complex task I/O interactions and demonstrates variable sensitivity with respect to the scale of workflows and resource limitation policies imposed.
|
[907] | Christof Karisch, Andreas Leibetseder, Klaus Schoeffmann, NoShot Video Browser at VBS2021, Chapter in MultiMedia Modeling, Springer International Publishing, no. 12573, pp. 405-409, 2021.
[bib][url] [doi] [abstract]
Abstract: We present our NoShot Video Browser, which has been successfully used at the last Video Browser Showdown competition, VBS2020, at MMM2020. NoShot is given its name due to the fact that it neither makes use of any kind of shot detection nor utilizes the VBS master shots. Instead, videos are split into frames with a time distance of one second. The biggest strength of the system lies in its “time cache” feature, which shows results with the best confidence in a range of seconds.
|
[906] | Nikita Karandikar, Rockey Abhishek, Nishant Saurabh, Zhiming Zhao, Alexander Lercher, Ninoslav Marina, Radu Prodan, Chunming Rong, Antorweep Chakravorty, Blockchain-based prosumer incentivization for peak mitigation through temporal aggregation and contextual clustering, In Blockchain: Research and Applications, Elsevier (BV), pp. 1-35, 2021.
[bib][url] [doi] [abstract]
Abstract: Peak mitigation is of interest to power companies as peak periods may require the operator to over-provision supply in order to meet the peak demand. Flattening the usage curve can result in cost savings, both for the power companies and the end users. Integration of renewable energy into the energy infrastructure presents an opportunity to use excess renewable generation to supplement supply and alleviate peaks. In addition, demand side management can shift the usage from peak to off-peak times and reduce the magnitude of peaks. In this work, we present a data-driven approach for incentive-based peak mitigation. Understanding user energy profiles is an essential step in this process. We begin by analysing a popular energy research dataset published by the Ausgrid corporation. Extracting aggregated user energy behavior in temporal contexts, together with semantic linking and contextual clustering, gives us insight into consumption and rooftop solar generation patterns. We implement and performance-test a blockchain-based prosumer incentivization system. The smart contract logic is based on our analysis of the Ausgrid dataset. Our implementation is capable of supporting 792,540 customers with a reasonably low infrastructure footprint.
|
[905] | Debesh Jha, Sharib Ali, Steven Hicks, Vajira Thambawita, Hanna Borgli, Pia H. Smedsrud, Thomas de Lange, Konstantin Pogorelov, Xiaowei Wang, Philipp Harzig, Minh-Triet Tran, Wenhua Meng, Trung-Hieu Hoang, Danielle Dias, Tobey H. Ko, Taruna Agrawal, Olga Ostroukhova, Zeshan Khan, Muhammad Atif Tahir, Yang Liu, Yuan Chang, Mathias Kirkerod, Dag Johansen, Mathias Lux, Haavard D. Johansen, Michael A. Riegler, Paal Halvorsen, A comprehensive analysis of classification methods in gastrointestinal endoscopy imaging, In Medical Image Analysis, Elsevier (BV), vol. 70, pp. 102007, 2021.
[bib][url] [doi] [abstract]
Abstract: Gastrointestinal (GI) endoscopy has been an active field of research, motivated by the large number of highly lethal GI cancers. Early GI cancer precursors are often missed during endoscopic surveillance. The high miss rate of such abnormalities during endoscopy is thus a critical bottleneck. Lack of attentiveness due to tiring procedures and the requirement of training are a few contributing factors. An automatic GI disease classification system can help reduce such risks by flagging suspicious frames and lesions. GI endoscopy involves the surveillance of several organs; therefore, there is a need to develop methods that can generalize to various endoscopic findings. In this realm, we present a comprehensive analysis of the Medico GI challenges: the Medical Multimedia Task at MediaEval 2017, the Medico Multimedia Task at MediaEval 2018, and the BioMedia ACM MM Grand Challenge 2019. These challenges are initiatives to set up a benchmark for different computer vision methods applied to multi-class endoscopic images and to promote new approaches that could reliably be used in clinics. We report the performance of 21 participating teams over a period of three consecutive years, provide a detailed analysis of the methods used by the participants, highlight the challenges and shortcomings of the current approaches, and dissect their credibility for use in clinical settings. Our analysis revealed that the participants improved the maximum Matthews correlation coefficient (MCC) from 82.68% in the 2017 challenge to 93.98% in 2018 and 95.20% in 2019, and achieved a significant increase in computational speed over consecutive years.
|
[904] | Antonia Stornig, Aymen Fakhreddine, Hermann Hellwagner, Petar Popovski, Christian Bettstetter, Video Quality and Latency for UAV Teleoperation over LTE: A Study with ns3, In 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring), IEEE, pp. 1-7, 2021.
[bib][url] [doi] [abstract]
Abstract: Teleoperation of an unmanned aerial vehicle (UAV) is a challenging mobile application with real-time control from a first-person view. It poses stringent latency requirements for both video and control traffic. This paper studies the video quality and latencies for UAV teleoperation over LTE using ns3 simulations. A key ingredient is the latency budget model. We observe that the latency of the video traffic is higher and more sensitive to mobility than that of the control traffic. The latency is influenced by the traffic variation caused by the variable bit rate of the streaming application. High mobility tends to increase latency and lead to more outliers, which is problematic for real-time control.
|
[903] | Samira Hayat, Roland Jung, Hermann Hellwagner, Christian Bettstetter, Driton Emini, Dominik Schnieders, Edge Computing in 5G for Drone Navigation: What to Offload?, In IEEE Robotics and Automation Letters, Institute of Electrical and Electronics Engineers (IEEE), vol. 6, no. 2, pp. 2571-2578, 2021.
[bib][url] [doi] [abstract]
Abstract: Small drones that navigate using cameras may be limited in their speed and agility by low onboard computing power. We evaluate the role of edge computing in 5G for such autonomous navigation. The offloading of image processing tasks to an edge server is studied with a vision-based navigation algorithm. Three computation modes are compared: onboard, fully offloaded to the edge, and partially offloaded. Partial offloading is expected to pose lower demands on the communication network in terms of transfer rate than full offloading but requires some onboard processing. Our results on the computation time help select the most suitable mode for image processing, i.e., whether and what to offload, based on the network conditions.
|