[877] | Anatoliy Zabrovskiy, Prateek Agrawal, Roland Matha, Christian Timmerer, Radu Prodan, ComplexCTTP: Complexity Class Based Transcoding Time Prediction for Video Sequences Using Artificial Neural Network, In 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), IEEE, pp. 316-325, 2020.
[bib][url] [doi] [abstract]
Abstract: HTTP Adaptive Streaming of video content is becoming an integral part of the Internet and accounts for the majority of today’s traffic. Although Internet bandwidth is constantly increasing, video compression technology plays an important role and the major challenge is to select and set up multiple video codecs, each with hundreds of transcoding parameters. Additionally, the transcoding speed depends directly on the selected transcoding parameters and the infrastructure used. Predicting transcoding time for multiple transcoding parameters with different codecs and processing units is a challenging task, as it depends on many factors. This paper provides a novel and considerably fast method for transcoding time prediction using video content classification and neural network prediction. Our artificial neural network (ANN) model predicts the transcoding times of video segments for state-of-the-art video codecs based on transcoding parameters and content complexity. We evaluated our method for two video codecs/implementations (AVC/x264 and HEVC/x265) as part of large-scale HTTP Adaptive Streaming services. The ANN model of our method is able to predict the transcoding time by minimizing the mean absolute error (MAE) to 1.37 and 2.67 for the x264 and x265 codecs, respectively. For x264, this is an improvement of 22% compared to the state of the art.
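The MAE metric reported above is straightforward to reproduce from raw predictions; a minimal sketch in Python, where the function name and the sample transcoding times are illustrative and not taken from the paper:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation between measured and predicted
    transcoding times (e.g. in seconds)."""
    assert len(y_true) == len(y_pred) and y_true
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative values only: measured vs. ANN-predicted transcoding times.
measured  = [10.0, 12.5, 8.0, 20.0]
predicted = [11.0, 11.5, 9.0, 19.0]
print(mean_absolute_error(measured, predicted))  # → 1.0
```

A lower MAE on held-out segments is what the paper uses to compare its ANN against prior predictors.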
|
[876] | Radu Prodan, Vladislav Kashanskii, Dragi Kimovski, Prateek Agrawal, ASPIDE Project: Perspectives on the Scalable Monitoring and Auto-tuning, Online Publication (Abstract), 2020.
[bib][url] [abstract]
Abstract: Extreme Data is an incarnation of the Big Data concept distinguished by the massive amounts of data that must be queried, communicated and analyzed in (near) real-time by using a very large number of memory/storage elements of both the converging Cloud and Pre-Exascale computing systems. Notable examples are the raw high energy physics data produced at a rate of hundreds of gigabits-per-second that must be filtered, stored and analyzed in a fault-tolerant fashion, multi-scale brain imaging data analysis and simulations, and complex networks data analyses driven by social media systems. To handle such amounts of data, multi-tier architectures are introduced, including scheduling systems and distributed storage systems, ranging from in-memory databases to tape libraries. The ASPIDE project is contributing with the definition of a new programming paradigm, APIs, runtime tools and methodologies for expressing data intensive tasks on the converging large-scale systems, which can pave the way for the exploitation of parallelism policies over the various models of the system architectures, promoting high performance and efficiency, and offering powerful operations and mechanisms for processing extreme data sources at high speed and/or in real time.
|
[875] | Laurens Versluis, Roland Matha, Sacheendra Talluri, Tim Hegeman, Radu Prodan, Ewa Deelman, Alexandru Iosup, The Workflow Trace Archive: Open-Access Data From Public and Private Computing Infrastructures, In IEEE Transactions on Parallel and Distributed Systems, Institute of Electrical and Electronics Engineers (IEEE), vol. 31, no. 9, pp. 2170-2184, 2020.
[bib][url] [doi] [abstract]
Abstract: Realistic, relevant, and reproducible experiments often need input traces collected from real-world environments. In this work, we focus on traces of workflows, common in datacenters, clouds, and HPC infrastructures. We show that the state-of-the-art in using workflow traces raises important issues: (1) the use of realistic traces is infrequent and (2) the use of realistic, open-access traces even more so. Alleviating these issues, we introduce the Workflow Trace Archive (WTA), an open-access archive of workflow traces from diverse computing infrastructures and tooling to parse, validate, and analyze traces. The WTA includes >48 million workflows captured from >10 computing infrastructures, representing a broad diversity of trace domains and characteristics. To emphasize the importance of trace diversity, we characterize the WTA contents and analyze in simulation the impact of trace diversity on experiment results. Our results indicate significant differences in characteristics, properties, and workflow structures between workload sources, domains, and fields.
|
[874] | Pawan Kumar Verma, Prateek Agrawal, Study and Detection of Fake News: P2C2-Based Machine Learning Approach, Chapter in Data Management, Analytics and Innovation, Springer Singapore, pp. 261-278, 2020.
[bib][url] [doi] [abstract]
Abstract: News is the most important and sensitive piece of information which affects society nowadays. In the current scenario, there are two ways to propagate news all over the world; the first is the traditional way, i.e., newspapers, and the second is electronic media like social media websites. Electronic media is the most popular medium these days because it helps to propagate news to a huge audience in a few seconds. Besides these benefits of electronic media, it also has one disadvantage, i.e., “spreading the Fake News”. Fake news is the most common problem these days. Even big companies like Twitter, Facebook, etc. are facing fake news problems. Several researchers are working in these big companies to solve this problem. Fake news can be defined as a news story that is not true. In more specific words, we can say that news is fake if a news agency declares a piece of news deliberately written as false and it is also verifiably false. This paper focuses on some key characteristics of fake news and how it is affecting society nowadays. It also includes various key viewpoints which are useful to categorize whether news is fake or not. At last, this paper discusses some key challenges and future directions that help in increasing the accuracy of fake news detection on the basis of the P2C2 (Propagation, Pattern, Comprehension & Credibility) approach having two phases: Detection and Verification. This paper helps readers in two ways: (i) newcomers can easily get the basic knowledge and impact of fake news; (ii) they can get knowledge of different perspectives of fake news which are helpful in the detection process.
|
[873] | Venkata Phani Kumar Malladi, Christian Timmerer, Hermann Hellwagner, Mipso: Multi-Period Per-Scene Optimization For HTTP Adaptive Streaming, In 2020 IEEE International Conference on Multimedia and Expo (ICME), IEEE, pp. 1-6, 2020.
[bib][url] [doi] [abstract]
Abstract: Video delivery over the Internet has become more and more established in recent years due to the widespread use of Dynamic Adaptive Streaming over HTTP (DASH). The current DASH specification defines a hierarchical data model for Media Presentation Descriptions (MPDs) in terms of periods, adaptation sets, representations and segments. Although multi-period MPDs are widely used in live streaming scenarios, they are not fully utilized in Video-on-Demand (VoD) HTTP adaptive streaming (HAS) scenarios. In this paper, we introduce MiPSO, a framework for Multi-Period Per-Scene Optimization, to examine multiple periods in VoD HAS scenarios. MiPSO provides different encoded representations of a video at either (i) maximum possible quality or (ii) minimum possible bitrate, beneficial to both service providers and subscribers. In each period, the proposed framework adjusts the video representations (resolution-bitrate pairs) by taking into account the complexities of the video content, with the aim of achieving streams at either higher qualities or lower bitrates. The experimental evaluation with a test video data set shows that MiPSO reduces the average bitrate of streams with the same visual quality by approximately 10% or increases the visual quality of streams by at least 1 dB in terms of Peak Signal-to-Noise Ratio (PSNR) at the same bitrate compared to conventional approaches to video content delivery.
|
[872] | Ennio Torre, Juan J. Durillo, Vincenzo de Maio, Prateek Agrawal, Shajulin Benedict, Nishant Saurabh, Radu Prodan, A dynamic evolutionary multi-objective virtual machine placement heuristic for cloud data centers, In Information and Software Technology, Elsevier BV, vol. 128, pp. 106390, 2020.
[bib][url] [doi] [abstract]
Abstract: Minimizing the resource wastage reduces the energy cost of operating a data center, but may also lead to a considerably high resource overcommitment affecting the Quality of Service (QoS) of the running applications. The effective tradeoff between resource wastage and overcommitment is a challenging task in virtualized Clouds and depends on the allocation of virtual machines (VMs) to physical resources. We propose in this paper a multi-objective method for dynamic VM placement, which exploits live migration mechanisms to simultaneously optimize the resource wastage, overcommitment ratio and migration energy. Our optimization algorithm uses a novel evolutionary meta-heuristic based on an island population model to approximate the Pareto optimal set of VM placements with good accuracy and diversity. Simulation results using traces collected from a real Google cluster demonstrate that our method outperforms related approaches by reducing the migration energy by up to 57% with a QoS increase below 6%.
|
[871] | Christian Timmerer, Hermann Hellwagner, HTTP Adaptive Streaming: Where Is It Heading?, In Proceedings of the Brazilian Symposium on Multimedia and the Web, ACM, pp. 349-350, 2020.
[bib][url] [doi] [abstract]
Abstract: In this contribution, we present selected novel approaches and results of our research work in the ATHENA Christian Doppler Laboratory (Adaptive Streaming over HTTP and Emerging Networked Multimedia Services), a major research project at our department jointly funded by public sources and industry. By putting this work also into the context of related ongoing research activities, we aim at working out where HTTP Adaptive Streaming is currently heading.
|
[870] | Babak Taraghi, Anatoliy Zabrovskiy, Christian Timmerer, Hermann Hellwagner, Cloud-based Adaptive Video Streaming Evaluation Framework for the Automated Testing of Media Players CAdViSE, In Proceedings of the 11th ACM Multimedia Systems Conference, ACM, pp. 349-352, 2020.
[bib][url] [doi] [abstract]
Abstract: Attempting to cope with fluctuations of network conditions in terms of available bandwidth, latency and packet loss, and to deliver the highest quality of video (and audio) content to users, research on adaptive video streaming has attracted intense efforts from the research community and huge investments from technology giants. How successful these efforts and investments are, is a question that needs precise measurements of the results of those technological advancements. HTTP-based Adaptive Streaming (HAS) algorithms, which seek to improve video streaming over the Internet, introduce video bitrate adaptivity in a way that is scalable and efficient. However, how each HAS implementation takes into account the wide spectrum of variables and configuration options, brings a high complexity to the task of measuring the results and visualizing the statistics of the performance and quality of experience. In this paper, we introduce CAdViSE, our Cloud-based Adaptive Video Streaming Evaluation framework for the automated testing of adaptive media players. The paper aims to demonstrate a test environment which can be instantiated in a cloud infrastructure, examines multiple media players with different network attributes at defined points of the experiment time, and finally concludes the evaluation with visualized statistics and insights into the results.
|
[869] | Natalia Sokolova, Mario Taschwer, Stephanie Sarny, Doris Putzgruber-Adamitsch, Klaus Schoeffmann, Pixel-Based Iris and Pupil Segmentation in Cataract Surgery Videos Using Mask R-CNN, In 2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops), IEEE, 2020.
[bib][url] [doi] [abstract]
Abstract: Automatically detecting clinically relevant events in surgery video recordings is becoming increasingly important for documentary, educational, and scientific purposes in the medical domain. From a medical image analysis perspective, such events need to be treated individually and associated with specific visible objects or regions. In the field of cataract surgery (lens replacement in the human eye), pupil reaction (dilation or restriction) during surgery may lead to complications and hence represents a clinically relevant event. Its detection requires automatic segmentation and measurement of pupil and iris in recorded video frames. In this work, we contribute to research on pupil and iris segmentation methods by (1) providing a dataset of 82 annotated images for training and evaluating suitable machine learning algorithms, and (2) applying the Mask R-CNN algorithm to this problem, which – in contrast to existing techniques for pupil segmentation – predicts free-form pixel-accurate segmentation masks for iris and pupil. The proposed approach achieves consistent high segmentation accuracies on several metrics while delivering an acceptable prediction efficiency, establishing a promising basis for further segmentation and event detection approaches on eye surgery videos.
|
[868] | Nishant Saurabh, Shajulin Benedict, Jorge G. Barbosa, Radu Prodan, Expelliarmus: Semantic-centric virtual machine image management in IaaS Clouds, In Journal of Parallel and Distributed Computing, Elsevier BV, vol. 146, pp. 107-121, 2020.
[bib][url] [doi] [abstract]
Abstract: Infrastructure-as-a-service (IaaS) Clouds concurrently accommodate diverse sets of user requests, requiring an efficient strategy for storing and retrieving virtual machine images (VMIs) at a large scale. The VMI storage management requires dealing with multiple VMIs, typically in the magnitude of gigabytes, which entails VMI sprawl issues hindering the elastic resource management and provisioning. Unfortunately, existing techniques to facilitate VMI management overlook VMI semantics (i.e., at the level of base images and software packages), with either restricted possibility to identify and extract reusable functionalities or with higher VMI publishing and retrieval overheads. In this paper, we propose Expelliarmus, a novel VMI management system that helps to minimize VMI storage, publishing and retrieval overheads. To achieve this goal, Expelliarmus incorporates three complementary features. First, it models VMIs as semantic graphs to facilitate their similarity computation. Second, it provides a semantically-aware VMI decomposition and base image selection to extract and store non-redundant base images and software packages. Third, it assembles VMIs based on the required software packages upon user request. We evaluate Expelliarmus through a representative set of synthetic Cloud VMIs on a real test-bed. Experimental results show that our semantic-centric approach is able to optimize the repository size by 2.3 to 22 times compared to state-of-the-art systems (e.g. IBM’s Mirage and Hemera) with a significant VMI publishing and a slight retrieval performance improvement.
|
[867] | Radu Prodan, Nishant Saurabh, Zhiming Zhao, Kate Orton-Johnson, Antorweep Chakravorty, Aleksandar Karadimce, Alexandre Ulisses, ARTICONF: Towards a Smart Social Media Ecosystem in a Blockchain Federated Environment, Chapter in Euro-Par 2019: Parallel Processing Workshops, Springer International Publishing, no. 11997, pp. 417-428, 2020.
[bib][url] [doi] [abstract]
Abstract: The ARTICONF project funded by the European Horizon 2020 program addresses issues of trust, time-criticality and democratisation for a new generation of federated infrastructure, to fulfil the privacy, robustness, and autonomy related promises critical in proprietary social media platforms. It aims to: (1) simplify the creation of an open and agile social media ecosystem with trusted participation using a two-stage permissioned blockchain; (2) automatically detect interest groups and communities using graph anonymization techniques for decentralised and tokenized decision-making and reasoning; (3) elastically autoscale time-critical social media applications through an adaptive orchestrated Cloud edge-based infrastructure meeting application runtime requirements; and (4) enhance monetary inclusion in collaborative models through cognition and knowledge supply chains. We summarize the initial envisaged architecture of the ARTICONF ecosystem, the industrial pilot use cases for validating it, and the planned innovations compared to other related European research projects.
|
[866] | Andrew Perkis, Christian Timmerer, Sabina Baraković, Jasmina Baraković Husić, Søren Bech, Sebastian Bosse, Jean Botev, Kjell Brunnström, Luis Cruz, Katrien De Moor, Andrea de Polo Saibanti, Wouter Durnez, Sebastian Egger-Lampl, Ulrich Engelke, Tiago H. Falk, Jesús Gutiérrez, Asim Hameed, Andrew Hines, Tanja Kojic, Dragan Kukolj, Eirini Liotou, Dragorad Milovanovic, Sebastian Möller, Niall Murray, Babak Naderi, Manuela Pereira, Stuart Perry, Antonio Pinheiro, Andres Pinilla, Alexander Raake, Sarvesh Rajesh Agrawal, Ulrich Reiter, Rafael Rodrigues, Raimund Schatz, Peter Schelkens, Steven Schmidt, Saeed Shafiee Sabet, Ashutosh Singla, Lea Skorin-Kapov, Mirko Suznjevic, Stefan Uhrig, Sara Vlahović, Jan-Niklas Voigt-Antons, Saman Zadtootaghaj, QUALINET White Paper on Definitions of Immersive Media Experience (IMEx), 2020.
[bib] [abstract]
Abstract: With the coming of age of virtual/augmented reality and interactive media, numerous definitions, frameworks, and models of immersion have emerged across different fields ranging from computer graphics to literary works. Immersion is oftentimes used interchangeably with presence as both concepts are closely related. However, there are noticeable interdisciplinary differences regarding definitions, scope, and constituents that are required to be addressed so that a coherent understanding of the concepts can be achieved. Such consensus is vital for paving the directionality of the future of immersive media experiences (IMEx) and all related matters. The aim of this white paper is to provide a survey of definitions of immersion and presence which leads to a definition of immersive media experience (IMEx). The Quality of Experience (QoE) for immersive media is described by establishing a relationship between the concepts of QoE and IMEx followed by application areas of immersive media experience. Influencing factors on immersive media experience are elaborated as well as the assessment of immersive media experience. Finally, standardization activities related to IMEx are highlighted and the white paper is concluded with an outlook related to future developments.
|
[865] | Anandhakumar Palanisamy, Mirsat Sefidanoski, Spiros Koulouzis, Carlos Rubia, Nishant Saurabh, Radu Prodan, Decentralized Social Media Applications as a Service: a Car-Sharing Perspective, In 2020 IEEE Symposium on Computers and Communications (ISCC), IEEE, pp. 1-7, 2020.
[bib][url] [doi] [abstract]
Abstract: Social media applications are essential for next generation connectivity. Today, social media are centralized platforms with a single proprietary organization controlling the network and posing critical trust and governance issues over the created and propagated content. The ARTICONF project funded by the European Union’s Horizon 2020 program researches a decentralized social media platform based on a novel set of trustworthy, resilient and globally sustainable tools to fulfil the privacy, robustness and autonomy-related promises that proprietary social media platforms have failed to deliver so far. This paper presents the ARTICONF approach to a car-sharing use case application, as a new collaborative peer-to-peer model providing an alternative solution to private car ownership. We describe a prototype implementation of the car-sharing social media application and illustrate through real snapshots how the different ARTICONF tools support it in a simulated scenario.
|
[864] | Minh Nguyen, Christian Timmerer, Hermann Hellwagner, H2BR: An HTTP/2-based Retransmission Technique to Improve the QoE of Adaptive Video Streaming, In Proceedings of the 25th ACM Workshop on Packet Video, ACM, pp. 1-7, 2020.
[bib][url] [doi] [abstract]
Abstract: HTTP-based Adaptive Streaming (HAS) plays a key role in over-the-top video streaming. It contributes towards reducing the rebuffering duration of video playout by adapting the video quality to the current network conditions. However, it incurs variations of video quality in a streaming session because of throughput fluctuation, which impacts the user’s Quality of Experience (QoE). Besides, many adaptive bitrate (ABR) algorithms choose the lowest-quality segments at the beginning of the streaming session to ramp up the playout buffer as soon as possible. Although this strategy decreases the startup time, users can be annoyed as they have to watch a low-quality video initially. In this paper, we propose an efficient retransmission technique, namely H2BR, to replace low-quality segments stored in the playout buffer with higher-quality versions by using features of HTTP/2, including (i) stream priority, (ii) server push, and (iii) stream termination. The experimental results show that H2BR helps users avoid watching low video quality during video playback and improves the user’s QoE. H2BR can decrease the time during which users suffer the lowest video quality by more than 70% and improves the QoE by up to 13%.
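The buffered-segment upgrade decision behind such a retransmission technique can be illustrated with a toy heuristic; this sketch is an assumption-laden simplification (segment duration, sizes, and the selection rule are invented for illustration), not the paper's actual H2BR algorithm:

```python
def pick_segment_to_upgrade(buffer, throughput, playhead=0.0, seg_dur=4.0):
    """Pick the lowest-quality buffered segment whose higher-quality version
    can still be fetched before its playback deadline (illustrative heuristic).

    buffer: list of (quality_level, upgrade_size_bits) in playback order.
    throughput: estimated download rate in bits/s.
    Returns the buffer index to retransmit, or None."""
    best = None
    for i, (quality, size) in enumerate(buffer):
        deadline = (i + 1) * seg_dur - playhead  # seconds until playout reaches it
        if size / throughput < deadline:         # retransmission would finish in time
            if best is None or quality < buffer[best][0]:
                best = i
    return best

# Three buffered segments; segment 0 is lowest quality but plays too soon to upgrade.
buf = [(0, 8_000_000), (1, 6_000_000), (0, 8_000_000)]
print(pick_segment_to_upgrade(buf, throughput=2_000_000))  # → 2
```

In H2BR itself the upgraded segment is requested with HTTP/2 stream priority and the request is cancelled via stream termination if the deadline is at risk.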
|
[863] | Minh Nguyen, Hadi Amirpour, Christian Timmerer, Hermann Hellwagner, Scalable High Efficiency Video Coding based HTTP Adaptive Streaming over QUIC, In Proceedings of the Workshop on the Evolution, Performance, and Interoperability of QUIC, ACM, pp. 28-34, 2020.
[bib][url] [doi] [abstract]
Abstract: HTTP/2 has been explored widely for adaptive video streaming, but still suffers from Head-of-Line blocking and three-way handshake delay due to TCP. Meanwhile, QUIC, running on top of UDP, can tackle these issues. In addition, although many adaptive bitrate (ABR) algorithms have been proposed for scalable and non-scalable video streaming, the literature lacks an algorithm designed for both types of video streaming approaches. In this paper, we investigate the impact of QUIC and HTTP/2 on the performance of ABR algorithms. Moreover, we propose an efficient approach for utilizing scalable video coding formats for adaptive video streaming that combines a traditional video streaming approach (based on non-scalable video coding formats) and a retransmission technique. The experimental results show that QUIC benefits significantly from our proposed method in the context of packet loss and retransmission. Compared to HTTP/2, it improves the average video quality and provides a smoother adaptation behavior. Finally, we demonstrate that our proposed method, originally designed for non-scalable video codecs, also works efficiently for scalable videos such as Scalable High Efficiency Video Coding (SHVC).
|
[862] | Zahra Najafabadi Samani, Alexander Lercher, Nishant Saurabh, Radu Prodan, A Semantic Model with Self-adaptive and Autonomous Relevant Technology for Social Media Applications, Chapter in Euro-Par 2019: Parallel Processing Workshops, Springer International Publishing, no. 11997, pp. 442-451, 2020.
[bib][url] [doi] [abstract]
Abstract: With the rapidly increasing popularity of social media applications, decentralized control and ownership is receiving more attention to preserve users' privacy. However, the lack of central control in decentralized social networks poses new issues of collaborative decision making and trust in this permission-less environment. To tackle these problems and fulfill the requirements of social media services, there is a need for intelligent mechanisms integrated into the decentralized social media that consider trust in various aspects according to the requirements of the services. In this paper, we describe an adaptive microservice-based design capable of finding relevant communities and accurate decision making by extracting semantic information and applying a role-stage model while preserving anonymity. We apply this information along with exploiting Pareto solutions to estimate trust in accordance with the quality of service and various conflicting parameters, such as accuracy, timeliness, and latency.
|
[861] | Philipp Moll, Veit Frick, Natascha Rauscher, Mathias Lux, How players play games, In Proceedings of the 12th ACM International Workshop on Immersive Mixed and Virtual Environment Systems, ACM, 2020.
[bib][url] [doi] [abstract]
Abstract: The popularity of computer games is remarkably high and is still growing every year. Despite this popularity and the economic importance of gaming, research in game design, or to be more precise, on game mechanics that can be used to improve the enjoyment of a game, is still scarce. In this paper, we analyze Fortnite, one of the currently most successful games, and observe how players play the game. We investigate what makes playing the game enjoyable by analyzing video streams of experienced players from game streaming platforms and by conducting a user study with players who are new to the game. We formulate four hypotheses about how game mechanics influence the way players interact with the game and how they influence player enjoyment. We present differences in player behavior between experienced players and beginners and discuss how game mechanics could be used to improve the enjoyment for beginners. In addition, we describe our approach to analyze games without access to game-internal data by using a toolchain which automatically extracts game information from video streams.
|
[860] | Mohamed Ayoub Messous, Hermann Hellwagner, Sidi-Mohammed Senouci, Driton Emini, Dominik Schnieders, Edge Computing for Visual Navigation and Mapping in a UAV Network, In ICC 2020 - 2020 IEEE International Conference on Communications (ICC), IEEE, pp. 1-6, 2020.
[bib][url] [doi] [abstract]
Abstract: This research work presents conceptual considerations and quantitative evaluations into how integrating computation offloading to edge computing servers would offer a paradigm shift for an effective deployment of autonomous drones. The specific mission that has been considered is collaborative autonomous navigation and mapping in a 3D environment of a small drone network. Specifically, in order to achieve this mission, each drone is required to compute a low latency, highly compute intensive task in a timely manner. The proposed model decides for each task, while considering the impact on performance and mission requirements, whether to (i) compute locally, (ii) offload to the edge server, or (iii) to the ground station. Extensive simulation work was performed to assess the effectiveness of the proposed scheme compared to other models.
|
[859] | Petra Mazdin, Michal Barcis, Hermann Hellwagner, Bernhard Rinner, Distributed Task Assignment in Multi-Robot Systems based on Information Utility, In 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), IEEE, pp. 734-740, 2020.
[bib][url] [doi] [abstract]
Abstract: Most multi-robot systems (MRS) require to coordinate the assignment of tasks to individual robots for efficient missions. Due to the dynamics, incomplete knowledge and changing requirements, the robots need to distribute their local state information within the MRS continuously during the mission. Since communication resources are limited and message transfers may be erroneous, the global state estimated by each robot may become inconsistent. This inconsistency may lead to degraded task assignment and mission performance. In this paper, we explore the effect and cost of communication and exploit information utility for online distributed task assignment. In particular, we model the usefulness of the transferred state information by its information utility and use it for controlling the distribution of local state information and for updating the global state. We compare our distributed, utility-based online task assignment with well-known centralized and auction-based methods and show how substantial reduction of communication effort still leads to successful mission completion. We demonstrate our approach in a wireless communication testbed using ROS2.
|
[858] | Roland Matha, Sasko Ristov, Thomas Fahringer, Radu Prodan, Simplified Workflow Simulation on Clouds based on Computation and Communication Noisiness, In IEEE Transactions on Parallel and Distributed Systems, Institute of Electrical and Electronics Engineers (IEEE), vol. 31, no. 7, pp. 1559-1574, 2020.
[bib][url] [doi] [abstract]
Abstract: Many researchers rely on simulations to analyze and validate their researched methods on Cloud infrastructures. However, determining relevant simulation parameters and correctly instantiating them to match the real Cloud performance is a difficult and costly operation, as minor configuration changes can easily generate an unreliable, inaccurate simulation result. Using legacy values experimentally determined by other researchers can reduce the configuration costs, but is still inaccurate as the underlying public Clouds and the number of active tenants are highly different and dynamic in time. To overcome these deficiencies, we propose a novel model that simulates the dynamic Cloud performance by introducing noise in the computation and communication tasks, determined by a small set of runtime execution data. Although the estimation method appears costly, a comprehensive sensitivity analysis shows that the configuration parameters determined for a certain simulation setup can be used for other simulations too, thereby reducing the tuning cost by up to 82.46%, while reducing the simulation accuracy by only 1.98% on average. Extensive evaluation also shows that our novel model outperforms other state-of-the-art dynamic Cloud simulation models, leading to up to 22% lower makespan inaccuracy.
|
[857] | Vincenzo De Maio, Dragi Kimovski, Multi-objective scheduling of extreme data scientific workflows in Fog, In Future Generation Computer Systems, Elsevier BV, vol. 106, pp. 171-184, 2020.
[bib][url] [doi] [abstract]
Abstract: The concept of “extreme data” is a recent re-incarnation of the “big data” problem, which is distinguished by the massive amounts of information that must be analyzed with strict time requirements. In the past decade, Cloud data centers have been envisioned as the essential computing architectures for enabling extreme data workflows. However, Cloud data centers are often geographically distributed. Such geographical distribution increases offloading latency, making it unsuitable for the processing of workflows with strict latency requirements, as the data transfer times could be very high. Fog computing emerged as a promising solution to this issue, as it allows partial workflow processing in lower network layers. Performing data processing on the Fog significantly reduces data transfer latency, allowing to meet the workflows’ strict latency requirements. However, the Fog layer is highly heterogeneous and loosely connected, which affects the reliability and response time of task offloading. In this work, we investigate the potential of Fog for the scheduling of extreme data workflows with strict response time requirements. Moreover, we propose a novel Pareto-based approach for task offloading in Fog, called Multi-objective Workflow Offloading (MOWO). MOWO considers three optimization objectives, namely response time, reliability, and financial cost. We evaluate the MOWO workflow scheduler on a set of real-world biomedical, meteorological and astronomy workflows representing examples of extreme data applications with strict latency requirements.
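The Pareto-based selection that underlies such multi-objective offloading can be sketched generically; a minimal non-dominated filter assuming all objectives are to be minimized (the tuples and helper names are illustrative, not from the paper):

```python
def dominates(a, b):
    """True if candidate a is no worse than b in every objective and strictly
    better in at least one (all objectives minimized, e.g. response time,
    1 - reliability, financial cost)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated offloading candidates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Illustrative (time_s, 1 - reliability, cost) tuples for three placements;
# the third is dominated by the first and is filtered out.
placements = [(1.0, 0.05, 3.0), (2.0, 0.01, 1.0), (2.5, 0.06, 3.5)]
print(pareto_front(placements))  # → [(1.0, 0.05, 3.0), (2.0, 0.01, 1.0)]
```

A scheduler can then pick from the remaining front according to which trade-off (latency vs. reliability vs. cost) matters most for the workflow at hand.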
|
[856] | Nivid Limbasiya, Prateek Agrawal, Bidirectional Long Short-Term Memory-Based Spatio-Temporal in Community Question Answering, Chapter in Algorithms for Intelligent Systems, Springer Singapore, pp. 291-310, 2020.
[bib][url] [doi] [abstract]
Abstract: Community-based question answering (CQA) is an online crowdsourcing service that enables users to share and exchange information in the field of natural language processing. A major challenge of CQA services is to determine the high-quality answer with respect to a given question. Existing methods perform semantic matches between a single pair of a question and its relevant answer. In this paper, a Spatio-Temporal bidirectional Long Short-Term Memory (ST-BiLSTM) method is proposed to predict the semantic representation between question–answer and answer–answer pairs. ST-BiLSTM has two LSTM networks instead of one (i.e., a forward and a backward LSTM). The forward LSTM captures the spatial relationships, while the backward LSTM examines the temporal interactions for accurate answer prediction. Hence, it captures both the past and future context by using two networks for accurate answer prediction based on the user query. Initially, preprocessing is carried out by named-entity recognition (NER), dependency parsing, tokenization, part-of-speech (POS) tagging, lemmatization, stemming, syntactic parsing, and stop word removal techniques to filter out useless information. Then, par2vec is applied to transform the distributed representation of questions and answers into a fixed vector representation. Next, the ST-BiLSTM cell learns the semantic relationship between question–answer and answer–answer pairs to determine the relevant answer set for the given user question. Experiments performed on the SemEval 2016 and Baidu Zhidao datasets show that our proposed method outperforms other state-of-the-art approaches.
|
[855] | Andreas Leibetseder, Klaus Schoeffmann, surgXplore: Interactive Video Exploration for Endoscopy, In Proceedings of the 2020 International Conference on Multimedia Retrieval, ACM, pp. 397-401, 2020.
[bib][url] [doi] [abstract]
Abstract: Accumulating recordings of daily conducted surgical interventions, such as endoscopic procedures, over the long term generates very large video archives that are difficult to search and explore. Since physicians routinely utilize this kind of media for documentation, treatment planning, or education and training, making said archives manageable with regard to discovering and retrieving relevant content can be considered a crucial task. We present an interactive tool that includes a multitude of modalities for browsing, searching and filtering medical content, demonstrating its usefulness on over 140 hours of pre-processed laparoscopic surgery videos.
|
[854] | Andreas Leibetseder, Klaus Schoeffmann, lifeXplore at the Lifelog Search Challenge 2020, In Proceedings of the Third Annual Workshop on Lifelog Search Challenge, ACM, pp. 37-42, 2020.
[bib][url] [doi] [abstract]
Abstract: Since its first iteration in 2018, the Lifelog Search Challenge (LSC) -- an interactive competition for retrieving lifelogging moments -- has been co-located with the annual ACM International Conference on Multimedia Retrieval (ICMR) and has drawn international attention. With the goal of making an ever-growing public lifelogging dataset searchable, several teams develop systems for quickly solving time-limited queries during the challenge. Having participated in both previous LSC iterations, i.e. LSC2018 and LSC2019, we present our lifeXplore system -- a video exploration and retrieval tool combining feature map browsing, concept search and filtering, as well as hand-drawn sketching. The system is improved by including the additional deep concept detector YOLO9000 and optical character recognition (OCR), as well as by adding uniform sampling as an alternative to the system's traditional underlying shot segmentation.
|
[853] | Dragi Kimovski, Dijana C. Bogatinoska, Narges Mehran, Aleksandar Karadimce, Natasa Paunkoska, Radu Prodan, Ninoslav Marina, Cloud-Edge Offloading Model for Vehicular Traffic Analysis, In 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (BDCloud), IEEE, pp. 746-753, 2020.
[bib][url] [doi] [abstract]
Abstract: The proliferation of smart sensing and computing devices, capable of collecting vast amounts of data, has made the gathering of the necessary vehicular traffic data relatively easy. However, the analysis of these big data sets requires computational resources, which are currently provided by Cloud data centers. Nevertheless, Cloud data centers can have unacceptably high latency for vehicular analysis applications with strict time requirements. The recent introduction of the Edge computing paradigm, as an extension of Cloud services, has partially moved the processing of big data closer to the data sources, thus addressing this issue. Unfortunately, this has introduced multiple challenges related to resource management. Therefore, we present a model for scheduling vehicular traffic analysis applications with partial task offloading across the Cloud-Edge continuum. The approach represents the traffic applications as a set of interconnected tasks composed into a workflow that can be partially offloaded to the Edge. We evaluated the approach through a simulated Cloud-Edge environment that considers two representative vehicular traffic applications with a focus on video stream analysis. Our results show that the presented approach reduces the application response time by up to eight times while improving energy efficiency by a factor of four.
|