[180] | Muhammad Aleem, Radu Aurel Prodan, Muhammad Arshad Islam, Muhammad Azhar Iqbal, On the Parallel Programmability of JavaSymphony for Multi-cores and Clusters, In International Journal of Ad Hoc and Ubiquitous Computing, vol. 30, no. 4, pp. 247-264, 2019.
[bib][url] [doi] [abstract]
Abstract: This paper explains the programming aspects of a promising Java-based programming and execution framework called JavaSymphony. JavaSymphony provides unified high-level programming constructs for applications targeting shared, distributed, and hybrid memory parallel computers as well as co-processor accelerators. JavaSymphony applications can be executed on a variety of multi-/many-core conventional and data-parallel architectures. JavaSymphony is based on the concept of dynamic virtual architectures, which allows programmers to define a hierarchical structure of the underlying computing resources and to control load-balancing and task-locality. In addition to GPU support, JavaSymphony provides a multi-core aware scheduling mechanism capable of mapping parallel applications onto large multi-core machines and heterogeneous clusters. Several real applications and benchmarks (on modern multi-core computers, heterogeneous clusters, and machines consisting of a combination of different multi-core CPU and GPU devices) have been used to evaluate the performance. The results demonstrate that JavaSymphony outperforms the Java implementations as well as other modern alternative solutions.
|
[178] | Daniela Pohl, Abdelhamid Bouchachia, Hermann Hellwagner, Active Online Learning for Social Media Analysis to Support Crisis Management, In IEEE Transactions on Knowledge and Data Engineering, pp. 1-14, 2019.
[bib][url] [doi] [abstract]
Abstract: People use social media (SM) to describe and discuss different situations they are involved in, like crises. It is therefore worthwhile to exploit SM contents to support crisis management, in particular by revealing useful and unknown information about the crises in real-time. Hence, we propose a novel active online multiple-prototype classifier, called AOMPC. It identifies relevant data related to a crisis. AOMPC is an online learning algorithm that operates on data streams and which is equipped with active learning mechanisms to actively query the label of ambiguous unlabeled data. The number of queries is controlled by a fixed budget strategy. Typically, AOMPC accommodates partly labeled data streams. AOMPC was evaluated using two types of data: (1) synthetic data and (2) SM data from Twitter related to two crises, Colorado Floods and Australia Bushfires. To provide a thorough evaluation, a whole set of known metrics was used to study the quality of the results. Moreover, a sensitivity analysis was conducted to show the effect of AOMPC's parameters on the accuracy of the results. A comparative study of AOMPC against other available online learning algorithms was performed. The experiments showed very good behavior of AOMPC for dealing with evolving, partly-labeled data streams.
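To make the idea of a budget-limited active query step concrete, the following is a minimal, illustrative Python sketch of an online prototype-based learner that queries an oracle only for ambiguous stream items while a fixed labelling budget lasts; it is not the authors' AOMPC algorithm, and the margin, learning rate, and toy oracle are invented for the example.

# Illustrative sketch (not the published AOMPC implementation): an online
# prototype-based learner that queries labels for ambiguous items only
# while a fixed labelling budget lasts.
import numpy as np

class OnlinePrototypeLearner:
    def __init__(self, budget=50, ambiguity_margin=0.1, lr=0.1):
        self.prototypes = {}          # class label -> prototype vector
        self.budget = budget          # maximum number of label queries
        self.margin = ambiguity_margin
        self.lr = lr

    def _distances(self, x):
        return {c: np.linalg.norm(x - p) for c, p in self.prototypes.items()}

    def process(self, x, oracle):
        """Handle one stream item; query the oracle only when ambiguous."""
        if len(self.prototypes) < 2:
            label = oracle(x)                      # bootstrap phase
            self.prototypes.setdefault(label, x.copy())
            return label
        d = self._distances(x)
        ranked = sorted(d, key=d.get)
        best, second = ranked[0], ranked[1]
        ambiguous = (d[second] - d[best]) < self.margin
        if ambiguous and self.budget > 0:
            self.budget -= 1
            label = oracle(x)                      # active label query
        else:
            label = best                           # trust own prediction
        # move the prototype of the assigned class towards the item
        if label in self.prototypes:
            self.prototypes[label] += self.lr * (x - self.prototypes[label])
        else:
            self.prototypes[label] = x.copy()      # previously unseen class
        return label

# usage on a toy stream: the oracle labels items by the sign of the first feature
rng = np.random.default_rng(0)
learner = OnlinePrototypeLearner(budget=20)
for _ in range(200):
    x = rng.normal(size=2)
    learner.process(x, oracle=lambda v: int(v[0] > 0))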
|
[177] | Jakub Lokoc, Gregor Kovalcik, Bernd Münzer, Klaus Schöffmann, Werner Bailer, Ralph Gasser, Stefanos Vrochidis, Phuong Anh Nguyen, Sitapa Rujikietgumjorn, Kai Uwe Barthel, Interactive Search or Sequential Browsing? A Detailed Analysis of the Video Browser Showdown 2018, In ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), vol. 15, pp. 1-26, 2019.
[bib][url] [doi] [abstract]
Abstract: This work summarizes the findings of the 7th iteration of the Video Browser Showdown (VBS) competition organized as a workshop at the 24th International Conference on Multimedia Modeling in Bangkok. The competition focuses on video retrieval scenarios in which the searched scenes were either previously observed or described by another person (i.e., an example shot is not available). During the event, nine teams competed with their video retrieval tools in providing access to a shared video collection with 600 hours of video content. Evaluation objectives, rules, scoring, tasks, and all participating tools are described in the article. In addition, we provide some insights into how the different teams interacted with their video browsers, which was made possible by a novel interaction logging mechanism introduced for this iteration of the VBS. The results collected at the VBS evaluation server confirm that searching for one particular scene in the collection when given a limited time is still a challenging task for many of the approaches that were showcased during the event. Given only a short textual description, finding the correct scene is even harder. In ad hoc search with multiple relevant scenes, the tools were mostly able to find at least one scene, whereas recall was the issue for many teams. The logs also reveal that even though recent exciting advances in machine learning narrow the classical semantic gap problem, user-centric interfaces are still required to mediate access to specific content. Finally, open challenges and lessons learned are presented for future VBS events.
|
[176] | Wilfried Elmenreich, Philipp Moll, Sebastian Theuermann, Mathias Lux, Making simulation results reproducible - Survey, guidelines, and examples based on Gradle and Docker, In PeerJ Computer Science, vol. 5, no. e240, pp. 1-27, 2019.
[bib][url] [doi] [abstract]
Abstract: This article addresses two research questions related to reproducibility within the context of research related to computer science. First, a survey on reproducibility addressed to researchers in the academic and private sectors is described and evaluated. The survey indicates a strong need for open and easily accessible results, in particular, reproducing an experiment should not require too much effort. The results of the survey are then used to formulate guidelines for making research results reproducible. In addition, this article explores four approaches based on software tools that could bring forward reproducibility in research results. After a general analysis of tools, three examples are further investigated based on actual research projects which are used to evaluate previously introduced tools. Results indicate that the evaluated tools contribute well to making simulation results reproducible but due to conflicting requirements, none of the presented solutions fulfills all intended goals perfectly.
|
[175] | Chryssanthi Iakovidou, Nektarios Anagnostopoulos, Mathias Lux, Klitos Christodoulou, Yiannis Boutalis, Savvas Chatzichristofis, Composite Description Based on Salient Contours and Color Information for CBIR Tasks, In IEEE Transactions on Image Processing, vol. 28, no. 6, pp. 3115-3129, 2019.
[bib][url] [doi] [abstract]
Abstract: This paper introduces a novel image descriptor for content-based image retrieval tasks that integrates contour and color information into a compact vector. Loosely inspired by the human visual system and its mechanisms for efficiently identifying visual saliency, operations are performed on a fixed lattice of discrete positions by a set of edge-detecting kernels that calculate region derivatives at different scales and orientations. The description method utilizes a weighted edge histogram where bins are populated on the premise of whether the regions contain edges belonging to the salient contours, while the discriminative power is further enhanced by integrating regional quantized color information. The proposed technique is both efficient and adaptive to the specifics of each depiction, while it does not need any training data to adjust parameters. An experimental evaluation conducted on seven benchmarking datasets against 13 well-known global descriptors, along with SIFT and SURF implementations (in both VLAD and BOVW), highlights the effectiveness and efficiency of the proposed descriptor.
|
[174] | Sabrina Kletz, Klaus Schöffmann, Heinrich Husslein, Learning the representation of instrument images in laparoscopy videos, In IET Healthcare Technology Letters, vol. 6, no. 6, pp. 197-203, 2019.
[bib][url] [doi] [abstract]
Abstract: Automatic recognition of instruments in laparoscopy videos poses many challenges that need to be addressed, like identifying multiple instruments appearing in various representations and in different lighting conditions, which in turn may be occluded by other instruments, tissue, blood, or smoke. Considering these challenges, it may be beneficial for recognition approaches to first detect instrument frames in a sequence of video frames and then investigate only these frames further. This pre-recognition step is also relevant for many other classification tasks in laparoscopy videos, such as action recognition or adverse event analysis. In this work, the authors address the task of binary classification to recognise video frames as either instrument or non-instrument images. They examine convolutional neural network models to learn the representation of instrument frames in videos and take a closer look at learned activation patterns. For this task, GoogLeNet together with batch normalisation is trained and validated using a publicly available dataset for instrument count classification. They compare transfer learning with learning from scratch and evaluate both on datasets from cholecystectomy and gynaecology. The evaluation shows that fine-tuning a pre-trained model on the instrument and non-instrument images is much faster and more stable in learning than training a model from scratch.
|
[173] | Sandi Gec, Dragi Kimovski, Uros Pascinski, Radu Aurel Prodan, Vlado Stankovski, Semantic approach for multi-objective optimisation of the ENTICE distributed Virtual Machine and container images repository, In Concurrency and Computation: Practice and Experience, vol. 31, no. 3, 2019.
[bib][url] [doi] [abstract]
Abstract: New software engineering technologies facilitate development of applications from reusable software components, such as Virtual Machine and container images (VMI/CIs). Key requirements for the storage of VMI/CIs in public or private repositories are their fast delivery and cloud deployment times. ENTICE is a federated storage facility for VMI/CIs that provides optimisation mechanisms through the use of fragmentation and replication of images and a Pareto Multi‐Objective Optimisation (MO) solver. The operation of the MO solver is, however, time‐consuming due to the size and complexity of the metadata, specifying various non‐functional requirements for the management of VMI/CIs, such as geolocation, operational cost, and delivery time. In this work, we address this problem with a new semantic approach, which uses an ontology of the federated ENTICE repository, knowledge base, and constraint‐based reasoning mechanism. Open Source technologies such as Protégé, Jena Fuseki, and Pellet were used to develop a solution. Two specific use cases, (1) repository optimisation with offline and (2) online redistribution of VMI/CIs, are presented in detail. In both use cases, data from the knowledge base are provided to the MO solver. It is shown that Pellet‐based reasoning can be used to reduce the input metadata size used in the optimisation process by taking into consideration the geographic location of the VMI/CIs and the provenance of the VMI fragments. It is shown that this process leads to reduction of the input metadata size for the MO solver by up to 60% and reduction of the total optimisation time of the MO solver by up to 68%, while fully preserving the quality of the solution, which is significant.
|
[172] | Christian Timmerer, MPEG column: 121st MPEG meeting in Gwangju, Korea, In SIGMultimedia Records, ACM, vol. 10, no. 1, New York, NY, USA, pp. 6:6-6:6, 2018.
[bib][url] [doi] |
[171] | Christian Timmerer, MPEG Column: 120th MPEG Meeting in Macau, China, In SIGMultimedia Records, ACM, vol. 9, no. 3, New York, NY, USA, pp. 4:4-4:4, 2018.
[bib][url] [doi] |
[170] | Mario Taschwer, Oge Marques, Automatic separation of compound figures in scientific articles, In Multimedia Tools and Applications, vol. 77, pp. 519-548, 2018.
[bib][url] [doi] [abstract]
Abstract: Content-based analysis and retrieval of digital images found in scientific articles is often hindered by images consisting of multiple subfigures (compound figures). We address this problem by proposing a method (ComFig) to automatically classify and separate compound figures, which consists of two main steps: (i) a supervised compound figure classifier (ComFig classifier) discriminates between compound and non-compound figures using task-specific image features; and (ii) an image processing algorithm is applied to predicted compound images to perform compound figure separation (ComFig separation). The proposed ComFig classifier is shown to achieve state-of-the-art classification performance on a published dataset. Our ComFig separation algorithm shows superior separation accuracy on two different datasets compared to other known automatic approaches. Finally, we propose a method to evaluate the effectiveness of the ComFig chain combining classifier and separation algorithm, and use it to optimize the misclassification loss of the ComFig classifier for maximal effectiveness in the chain.
|
[169] | Vlado Stankovski, Radu Prodan, Guest Editors’ Introduction: Special Issue on Storage for the Big Data Era, In Journal of Grid Computing, 2018.
[bib][url] [doi] |
[168] | Laura Ricci, Alexander Iosup, Radu Prodan, Large Scale Cooperative Virtual Environments, In Concurrency and Computation: Practice and Experience, 2018.
[bib][url] [doi] |
[167] | Florin Pop, Radu Prodan, Gabriel Antoniu, RM-BDP: Resource management for Big Data platforms, In Future Generation Computer Systems, vol. 86, pp. 961-963, 2018.
[bib][url] [doi] [abstract]
Abstract: Nowadays, when we are faced with vast amounts of data, when data cannot be classified into regular relational databases and new solutions are required, and when data are generated and processed rapidly, we need powerful platforms and infrastructure as support. Extracting valuable information from raw data is especially difficult considering the velocity of growing data from year to year and the fact that 80% of data is unstructured. In addition, data sources are heterogeneous (various sensors, users with different profiles, etc.) and are located in different situations or contexts. Cloud computing, which concerns large-scale interconnected systems with the main purpose of aggregating and efficiently exploiting the power of widely distributed resources, represents one viable solution. Resource management and task scheduling play an essential role in cases where one is concerned with optimized use of resources (Negru et al., 2017) [1]. The goal of this special issue is to explore new directions and approaches for reasoning about advanced resource management and task scheduling methods and algorithms for Big Data platforms. The accepted papers present new results in the domain of resource management and task scheduling, Cloud platforms supporting Big Data processing, data handling, and Big Data applications.
|
[166] | Florin Pop, Alexander Iosup, Radu Prodan, HPS-HDS: High Performance Scheduling for Heterogeneous Distributed Systems, In Future Generation Computer Systems, Elsevier, vol. 78, pp. 242-244, 2018.
[bib][url] [doi] |
[165] | Stefan Petscharnig, Klaus Schöffmann, Binary convolutional neural network features off-the-shelf for image to video linking in endoscopic multimedia databases, In Multimedia Tools and Applications, 2018.
[bib][url] [doi] [abstract]
Abstract: With a rigorous long-term archival of endoscopic surgeries, vast amounts of video and image data accumulate. Surgeons are not able to spend their valuable time to manually search within endoscopic multimedia databases (EMDBs) or manually maintain links to interesting sections in order to quickly retrieve relevant surgery sections. To enable surgeons to quickly access the relevant surgery scenes, we utilize the fact that surgeons record external images in addition to the surgery video and aim to link them to the appropriate video sequence in the EMDB using a query-by-example approach. We propose binary Convolutional Neural Network (CNN) features off-the-shelf and compare them to several baselines: pixel-based comparison (PSNR), image structure comparison (SSIM), hand-crafted global features (CEDD and feature signatures), as well as the CNN baselines Histograms of Class Confidences (HoCC) and Neural Codes (NC). For evaluation, we use 5.5 h of endoscopic video material and 69 query images selected by medical experts and compare the performance of the aforementioned image matching methods in terms of video hit rate and distance to the true playback time stamp (PTS) for correct video predictions. Our evaluation shows that binary CNN features are compact, yet powerful image descriptors for retrieval in the endoscopic imaging domain. They are able to maintain state-of-the-art performance, while providing the benefit of low storage space requirements, and hence provide the best compromise.
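As a schematic of how compact binary codes can be matched in a query-by-example setting, the sketch below binarises feature vectors and ranks database frames by Hamming distance; the median-threshold binarisation and the random toy data are assumptions for illustration and do not reproduce the paper's feature extraction pipeline.

# Minimal sketch of query-by-example retrieval with binarised feature
# vectors and Hamming distance; thresholding scheme and vector sizes are
# illustrative assumptions, not the paper's exact pipeline.
import numpy as np

def binarise(features):
    """Turn real-valued activations into binary codes by thresholding
    each dimension at its median over the database."""
    thresholds = np.median(features, axis=0)
    return (features > thresholds).astype(np.uint8), thresholds

def hamming_rank(query_code, db_codes):
    """Rank database frames by Hamming distance to the query code."""
    distances = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(distances), distances

# toy example: 1000 "video frames" with 256-d activations, one query image
rng = np.random.default_rng(1)
db_features = rng.random((1000, 256))
db_codes, thresholds = binarise(db_features)

query_features = rng.random(256)
query_code = (query_features > thresholds).astype(np.uint8)

order, distances = hamming_rank(query_code, db_codes)
print("best matching frame:", order[0], "distance:", distances[order[0]])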
|
[164] | Roland Mathá, Dragi Kimovski, Radu Prodan, Marjan Gusev, A new model for cloud elastic services efficiency, In International Journal of Parallel, Emergent and Distributed Systems, 2018.
[bib][url] [doi] [abstract]
Abstract: The speedup measures the improvement in performance when the computational resources are being scaled. The efficiency, on the other hand, provides the ratio between the achieved speedup and the number of scaled computational resources (processors). Both parameters (speedup and efficiency), which are defined according to Amdahl’s Law, provide very important information about the performance of a computer system with scaled resources compared with a computer system with a single processor. However, as the load of cloud elastic services is variable, it is vital to analyse the load, apart from the scaled resources, in order to determine which system is more effective and efficient. Unfortunately, the speedup and efficiency alone are not sufficient for proper modeling of cloud elastic services, as both assume that the system’s resources are scaled while the load is constant. In this paper, we extend the scaling of resources and define two additional scaled systems by (i) scaling the load and (ii) scaling both the load and resources. We introduce a model to determine the efficiency for each scaled system, which can be used to compare the efficiencies of all scaled systems, regardless of whether they are scaled in terms of load or resources. We have evaluated the model by using Windows Azure, and the experimental results confirm the theoretical analysis. Although one can argue that web services are scalable and comply with Gustafson’s Law only, we provide a taxonomy that classifies scaled systems based on their compliance with both Amdahl’s and Gustafson’s laws. For three different scaled systems (scaled resources R, scaled load L, and the combination RL), we introduce a model to determine the scaling efficiency. Our model extends the current definition of efficiency according to Amdahl’s Law, which assumes scaling the resources, and not the load.
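For reference, the classical definitions that the abstract extends (speedup, efficiency, and Amdahl’s speedup bound for a parallelisable fraction f of the work on p processors) are, in standard notation:

\[
S(p) = \frac{T(1)}{T(p)}, \qquad
E(p) = \frac{S(p)}{p}, \qquad
S_{\mathrm{Amdahl}}(p) = \frac{1}{(1-f) + f/p},
\]

where T(1) and T(p) denote the execution times on one and on p processors; the paper's additional scaled-load and scaled-load-plus-resources variants are not reproduced here.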
|
[163] | Jakub Lokoč, Werner Bailer, Klaus Schöffmann, Bernd Münzer, George M. Awad, On influential trends in interactive video retrieval: Video Browser Showdown 2015-2017, In IEEE Transactions on Multimedia, 2018.
[bib][url] [doi] [abstract]
Abstract: The last decade has seen innovations that make video recording, manipulation, storage and sharing easier than ever before, thus impacting many areas of life. New video retrieval scenarios emerged as well, which challenge the state-of-the-art video retrieval approaches. Despite recent advances in content analysis, video retrieval can still benefit from involving the human user in the loop. We present our experience with a class of interactive video retrieval scenarios and our methodology to stimulate the evolution of new interactive video retrieval approaches. More specifically, the Video Browser Showdown evaluation campaign is thoroughly analyzed, focusing on the years 2015-2017. Evaluation scenarios, objectives and metrics are presented, complemented by the results of the annual evaluations. The results reveal promising interactive video retrieval techniques adopted by the most successful tools and confirm assumptions about the different complexity of various types of interactive retrieval scenarios. A comparison of the interactive retrieval tools with automatic approaches (including fully automatic and manual query formulation) participating in the TRECVID 2016 Ad-hoc Video Search (AVS) task is discussed. Finally, based on the results of data analysis, a substantial revision of the evaluation methodology for the following years of the Video Browser Showdown is provided.
|
[162] | Yasir Noman Khalid, Muhammad Aleem, Radu Prodan, Muhammad Azhar Iqbal, Muhammad Arshad Islam, E-OSched: a load balancing scheduler for heterogeneous multicores, In Journal of Supercomputing, 2018.
[bib][url] [doi] [abstract]
Abstract: The contemporary multicore era has embraced heterogeneous computing devices as proficient platforms to execute compute-intensive applications. These heterogeneous devices are based on CPUs and GPUs. OpenCL is deemed one of the industry standards to program heterogeneous machines. The conventional application scheduling mechanisms allocate most of the applications to GPUs while leaving the CPU device underutilized. This underutilization of slower devices (such as the CPU) often causes sub-optimal performance of data-parallel applications in terms of load balance, execution time, and throughput. Moreover, multiple scheduled applications on a heterogeneous system further aggravate the problem of performance inefficiency. This paper is an attempt to address the aforementioned deficiencies via a novel scheduling strategy named OSched. An enhancement to OSched named E-OSched is also part of this study. OSched performs the resource-aware assignment of jobs to both CPUs and GPUs while ensuring a balanced load. The load balancing is achieved via contemplation of the computational requirements of jobs and the computing potential of a device. The load-balanced execution is beneficial in terms of lower execution time, higher throughput, and improved utilization. E-OSched reduces the magnitude of main memory contention during the concurrent job execution phase. The mathematical model of the proposed algorithms is evaluated by comparing simulation results with different state-of-the-art scheduling heuristics. The results reveal that the proposed E-OSched performs significantly better than the state-of-the-art scheduling heuristics, obtaining up to 8.09% improved execution time and up to 7.07% better throughput.
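As a rough illustration of resource-aware, load-balanced assignment of jobs across a CPU and a GPU, the following Python sketch greedily places each job where the projected completion time stays smallest; the device capabilities, job sizes, and greedy rule are invented for the example and do not reproduce the published OSched/E-OSched heuristics.

# Illustrative resource-aware assignment (not the published OSched/E-OSched
# algorithm): each job goes to the device whose projected completion time,
# relative to its compute capability, stays lowest after the assignment.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    gflops: float                              # assumed relative compute capability
    assigned: list = field(default_factory=list)
    load: float = 0.0                          # sum of assigned job sizes (flop count)

    def projected_time(self, job_flops):
        return (self.load + job_flops) / self.gflops

def schedule(jobs, devices):
    """Greedy balance: place each job (largest first) where the projected
    completion time stays smallest."""
    for name, flops in sorted(jobs, key=lambda j: j[1], reverse=True):
        best = min(devices, key=lambda d: d.projected_time(flops))
        best.assigned.append(name)
        best.load += flops
    return devices

# toy example with one CPU and one GPU of different capabilities
devices = [Device("cpu", gflops=200.0), Device("gpu", gflops=800.0)]
jobs = [("matmul", 4000.0), ("fft", 1500.0), ("stencil", 2500.0), ("reduce", 500.0)]
for d in schedule(jobs, devices):
    print(d.name, d.assigned, "load:", d.load)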
|
[161] | Evsen Yanmaz, Saeed Yahyanejad, Bernhard Rinner, Hermann Hellwagner, Christian Bettstetter, Drone networks: Communications, coordination, and sensing, In Ad Hoc Networks, Elsevier, vol. 68, Amsterdam, pp. 1-15, 2018.
[bib][url] [doi] [abstract]
Abstract: Small drones are being utilized in monitoring, transport, safety and disaster management, and other domains. Envisioning that drones form autonomous networks incorporated into the air traffic, we describe a high-level architecture for the design of a collaborative aerial system consisting of drones with on-board sensors and embedded processing, coordination, and networking capabilities. We implement a multi-drone system consisting of quadcopters and demonstrate its potential in disaster assistance, search and rescue, and aerial monitoring. Furthermore, we illustrate design challenges and present potential solutions based on the lessons learned so far.
|
[160] | Daniela Pohl, Abdelhamid Bouchachia, Hermann Hellwagner, Batch-based active learning: Application to social media data for crisis management, In Expert Systems with Applications, Elsevier Ltd., vol. 93, Amsterdam, pp. 232-244, 2018.
[bib][url] [doi] [abstract]
Abstract: Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly requiring the learning machinery to (self-)adapt by adjusting its model. However for high velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user will be interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL’s performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power.
|
[159] | Chang Ge, Ning Wang, Wei Koong Chai, Hermann Hellwagner, QoE-Assured 4K HTTP Live Streaming via Transient Segment Holding at Mobile Edge, In IEEE Journal on Selected Areas in Communications, vol. 36, no. 8, pp. 1816-1830, 2018.
[bib][url] [doi] [pdf] [abstract]
Abstract: HTTP-based live streaming has become increasingly popular in recent years, and more users have started generating 4K live streams from their devices (e.g., mobile phones) through social-media service providers like Facebook or YouTube. If the audience is located far from a live stream source across the global Internet, TCP throughput becomes substantially suboptimal due to slow start and congestion control mechanisms. This is especially the case when the end-to-end content delivery path involves radio access network at the last mile. As a result, the data rate perceived by a mobile receiver may not meet the high requirement of 4K video streams, which causes deteriorated quality-of-experience (QoE). In this paper, we propose a scheme named edge-based transient holding of live segment (ETHLE), which addresses the above-mentioned issue by performing context-aware transient holding of video segments at the mobile edge with virtualized content caching capability. Through holding the minimum number of live video segments at the mobile edge cache in a context-aware manner, the ETHLE scheme is able to achieve seamless 4K live streaming experiences across the global Internet by eliminating buffering and substantially reducing initial startup delay and live stream latency. It has been deployed as a virtual network function at an LTE-A network, and its performance has been evaluated using real live stream sources that are distributed around the world. The significance of this paper is that by leveraging virtualized caching resources at the mobile edge, we address the conventional transport-layer bottleneck and enable QoE-assured Internet-wide live streaming services with high data rate requirements.
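A toy sketch of the "transient holding" idea, where an edge cache keeps only the few most recent live segments so that a new viewer can be served from the edge instead of across the long path to the origin, might look as follows; the hold count, segment numbering, and fetch stub are assumptions for illustration, not the published ETHLE policy.

# Toy sketch of transient segment holding at an edge cache: keep the most
# recent N live segments fetched from a distant origin so that a new viewer
# can start playback from the edge. Not the published ETHLE policy.
from collections import OrderedDict

class EdgeSegmentHolder:
    def __init__(self, hold_count=3):
        self.hold_count = hold_count          # how many live segments to hold
        self.held = OrderedDict()             # segment index -> payload

    def on_segment_published(self, index, payload):
        """A new live segment was announced: hold it, drop the oldest."""
        self.held[index] = payload
        while len(self.held) > self.hold_count:
            self.held.popitem(last=False)

    def serve(self, index, fetch_from_origin):
        """Serve from the edge if held, otherwise fall back to the origin."""
        if index in self.held:
            return self.held[index], "edge"
        return fetch_from_origin(index), "origin"

# usage: hold the three freshest segments of a live stream
edge = EdgeSegmentHolder(hold_count=3)
for i in range(1, 8):
    edge.on_segment_published(i, payload=f"segment-{i}")

print(edge.serve(7, fetch_from_origin=lambda i: f"origin-{i}"))  # served from the edge
print(edge.serve(2, fetch_from_origin=lambda i: f"origin-{i}"))  # falls back to the origin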
|
[158] | Abdelhak Bentaleb, Bayan Taani, Ali Cengiz Begen, Christian Timmerer, Roger Zimmermann, A Survey on Bitrate Adaptation Schemes for Streaming Media over HTTP, In IEEE Communications Surveys & Tutorials, 2018.
[bib] [doi] |
[157] | Muhammad Aleem, Radu Prodan, On the Parallel Programmability of JavaSymphony for Multi-cores and Clusters, In International Journal of Ad Hoc and Ubiquitous Computing, 2018.
[bib][url] [doi] [abstract]
Abstract: This paper explains the programming aspects of a promising Java-based programming and execution framework called JavaSymphony. JavaSymphony provides unified high-level programming constructs for applications targeting shared, distributed, and hybrid memory parallel computers as well as co-processor accelerators. JavaSymphony applications can be executed on a variety of multi-/many-core conventional and data-parallel architectures. JavaSymphony is based on the concept of dynamic virtual architectures, which allows programmers to define a hierarchical structure of the underlying computing resources and to control load-balancing and task-locality. In addition to GPU support, JavaSymphony provides a multi-core aware scheduling mechanism capable of mapping parallel applications onto large multi-core machines and heterogeneous clusters. Several real applications and benchmarks (on modern multi-core computers, heterogeneous clusters, and machines consisting of a combination of different multi-core CPU and GPU devices) have been used to evaluate the performance. The results demonstrate that JavaSymphony outperforms the Java implementations as well as other modern alternative solutions.
|
[156] | Anatoliy Zabrovskiy, Evgeny Petrov, Evgeny Kuzmin, Christian Timmerer, Evaluation of the Performance of Adaptive HTTP Streaming Systems, In arXiv.org [cs.MM], vol. abs/1710.02459, pp. 7, 2017.
[bib][url] [pdf] [abstract]
Abstract: Adaptive video streaming over HTTP is becoming omnipresent in our daily life. In the past, dozens of research papers have proposed novel approaches to address different aspects of adaptive streaming, and a decent amount of player implementations (commercial and open source) are available. However, state-of-the-art evaluations are sometimes superficial, as many proposals only investigate a certain aspect of the problem or focus on a specific platform – player implementations used in actual services are rarely considered. HTML5 is now available on many platforms and fosters the deployment of adaptive media streaming applications. We propose a common evaluation framework for adaptive HTML5 players and demonstrate its applicability by evaluating eight different players which are actually deployed in real-world services.
|