[13] | Yasir Noman Khalid, Muhammad Aleem, Radu Prodan, Azhar Iqbal Muhammad, Muhammad Arshad Islam, E-OSched: a load balancing scheduler for heterogeneous multicores, In The Journal of Supercomputing, 2018.
[bib][url] [doi] [abstract]
Abstract: The contemporary multicore era has embraced heterogeneous computing devices as one of the most capable platforms for executing compute-intensive applications. These heterogeneous devices combine CPUs and GPUs, and OpenCL is deemed one of the industry standards for programming such machines. Conventional application scheduling mechanisms allocate most applications to GPUs while leaving the CPU underutilized. This underutilization of slower devices (such as CPUs) often causes sub-optimal performance of data-parallel applications in terms of load balance, execution time, and throughput. Moreover, multiple applications scheduled on a heterogeneous system further aggravate the performance inefficiency. This paper attempts to remedy these deficiencies with a novel scheduling strategy named OSched; an enhancement to OSched, named E-OSched, is also part of this study. OSched performs resource-aware assignment of jobs to both CPUs and GPUs while ensuring a balanced load. Load balancing is achieved by considering the computational requirements of jobs and the computing potential of each device, and the load-balanced execution yields lower execution time, higher throughput, and improved utilization. E-OSched additionally reduces main-memory contention during the concurrent job execution phase. The mathematical model of the proposed algorithms is evaluated by comparing simulation results with different state-of-the-art scheduling heuristics. The results reveal that E-OSched performs significantly better than the state-of-the-art heuristics, obtaining up to 8.09% improved execution time and up to 7.07% better throughput.
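The resource-aware, load-balanced assignment idea described in the abstract can be sketched as a simple greedy heuristic (an illustrative simplification, not the authors' E-OSched implementation; device rates, job sizes, and all names below are hypothetical):

```python
def balanced_assignment(jobs, devices):
    """Assign each job to the device that minimizes its projected finish time.

    jobs:    list of (name, flops_required) tuples
    devices: dict mapping device name -> compute rate (FLOPS)
    Returns (schedule, load): device -> job names, and device -> projected time.
    """
    load = {d: 0.0 for d in devices}      # accumulated execution time per device
    schedule = {d: [] for d in devices}
    # Placing the largest jobs first (LPT-style greedy) tightens the balance.
    for name, flops in sorted(jobs, key=lambda j: -j[1]):
        best = min(devices, key=lambda d: load[d] + flops / devices[d])
        schedule[best].append(name)
        load[best] += flops / devices[best]
    return schedule, load
```

With a 4x faster GPU, small jobs still land on the CPU once the GPU's queue grows, which is the balanced-load behavior the paper targets.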
|
[12] | Bogdan Ionescu, Henning Müller, Mauricio Villegas, Alba García Seco de Herrera, Carsten Eickhoff, Vincent Andrearczyk, Yashin Dicente Cid, Vitali Liauchuk, Vassili Kovalev, Sadid H. Hasan, Yuan Ling, Oladimeji Farri, Joey Liu, Matthew Lungren, Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Liting Zhou, Mathias Lux, Cathal Gurrin, Overview of ImageCLEF 2018: Challenges, Datasets and Evaluation, In Experimental IR Meets Multilinguality, Multimodality, and Interaction, Springer, vol. 11018, Berlin, 2018.
[bib][url] [doi] [abstract]
Abstract: This paper presents an overview of the ImageCLEF 2018 evaluation campaign, an event that was organized as part of the CLEF (Conference and Labs of the Evaluation Forum) Labs 2018. ImageCLEF is an ongoing initiative (it started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval with the aim of providing information access to collections of images in various usage scenarios and domains. In 2018, the 16th edition of ImageCLEF ran three main tasks and a pilot task: (1) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based only on the figure image; (2) a tuberculosis task that aims at detecting the tuberculosis type, severity and drug resistance from CT (Computed Tomography) volumes of the lung; (3) a LifeLog task (videos, images and other sources) about daily activities understanding and moment retrieval, and (4) a pilot task on visual question answering where systems are tasked with answering medical questions. The strong participation, with over 100 research groups registering and 31 submitting results for the tasks, shows an increasing interest in this benchmarking campaign.
|
[11] | Tobias Hossfeld, Christian Timmerer, Quality of experience column: an introduction, In ACM SIGMultimedia Records, ACM Press, vol. 10, New York (NY), 2018.
[bib][url] [doi] [abstract]
Abstract: Research on Quality of Experience (QoE) has advanced significantly in recent years and attracts attention from various stakeholders. The research community has addressed different facets: subjective user studies to identify QoE influence factors for particular applications such as video streaming; QoE models to capture the effects of those influence factors on concrete applications; and QoE monitoring approaches, at the end-user site but also within the network, to assess QoE during service consumption and to provide means for QoE management and improved QoE. However, in order to progress in the area of QoE, new research directions have to be taken. The application of QoE in practice needs to consider the entire QoE ecosystem and the stakeholders along the service delivery chain to the end user.
|
[10] | Mohammad Hosseini, Christian Timmerer, Dynamic Adaptive Point Cloud Streaming, In PV '18 Proceedings of the 23rd Packet Video Workshop, ACM Press, New York (NY), pp. 25-30, 2018.
[bib][url] [doi] [abstract]
Abstract: High-quality point clouds have recently gained interest as an emerging form of representing immersive 3D graphics. Unfortunately, these 3D media are bulky and severely bandwidth-intensive, which makes streaming to resource-limited and mobile devices difficult. This has led researchers to propose efficient and adaptive approaches for streaming high-quality point clouds. In this paper, we run a pilot study towards dynamic adaptive point cloud streaming and extend the concept of dynamic adaptive streaming over HTTP (DASH) towards DASH-PC, a dynamic adaptive bandwidth-efficient and view-aware point cloud streaming system. DASH-PC can tackle the huge bandwidth demands of dense point cloud streaming while at the same time semantically linking to human visual acuity to maintain high visual quality when needed. In order to describe the various quality representations, we propose multiple thinning approaches to spatially sub-sample point clouds in 3D space, and we design a DASH Media Presentation Description manifest specific for point cloud streaming. Our initial evaluations show that we can achieve significant bandwidth and performance improvements on dense point cloud streaming with minor negative quality impacts compared to the baseline scenario when no adaptation is applied.
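The thinning idea, spatially sub-sampling a point cloud into multiple quality representations analogous to DASH bitrate levels, can be sketched as follows (a minimal uniform-random sketch under assumed names; the paper's thinning approaches and view-awareness are more elaborate):

```python
import random

def thin_point_cloud(points, ratio, seed=0):
    """Sub-sample a point cloud to a given density ratio.

    points: list of (x, y, z) tuples
    ratio:  fraction of points to keep (1.0 = full quality)
    A seeded uniform random sample stands in for spatial thinning here;
    a real system could thin view-aware or on a spatial grid.
    """
    k = max(1, round(len(points) * ratio))
    return random.Random(seed).sample(points, k)

def build_representations(points, ratios=(1.0, 0.5, 0.25)):
    """One representation per density ratio, like DASH quality levels."""
    return {r: thin_point_cloud(points, r) for r in ratios}
```

A client would then request the densest representation its bandwidth and viewing distance justify, segment by segment.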
|
[9] | Steven Alexander Hicks, Konstantin Pogorelov, Thomas de Lange, Mathias Lux, Mattis Jeppsson, Kristin Ranheim Randel, Sigrun L. Eskeland, Pål Halvorsen, Michael Riegler, Comprehensible reasoning and automated reporting of medical examinations based on deep learning analysis, In MMSys '18 Proceedings of the 9th ACM Multimedia Systems Conference, ACM Press, New York (NY), pp. 490-493, 2018.
[bib][url] [doi] [abstract]
Abstract: In the future, medical doctors will to an increasing degree be assisted by deep learning neural networks for disease detection during examinations of patients. In order to make qualified decisions, the black box of deep learning must be opened to increase the understanding of the reasoning behind the decision of the machine learning system. Furthermore, preparing reports after the examinations is a significant part of a doctor's workday, but if we already have a system dissecting the neural network for understanding, the same tool can be used for automatic report generation. In this demo, we describe a system that analyses medical videos from the gastrointestinal tract. Our system dissects the Tensorflow-based neural network to provide insights into the analysis and uses the resulting classification, and the rationale behind it, to automatically generate an examination report for the patient's medical journal.
|
[8] | Steven Alexander Hicks, Sigrun L. Eskeland, Mathias Lux, Thomas de Lange, Kristin Ranheim Randel, Mattis Jeppsson, Konstantin Pogorelov, Pål Halvorsen, Michael Riegler, Mimir: an automatic reporting and reasoning system for deep learning based analysis in the medical domain, In MMSys '18 Proceedings of the 9th ACM Multimedia Systems Conference, ACM Press, New York (NY), pp. 369-374, 2018.
[bib][url] [doi] [abstract]
Abstract: Automatic detection of diseases is a growing field of interest, and machine learning in the form of deep neural networks is frequently explored as a potential tool for medical video analysis. To both improve the "black box" understanding and assist in the administrative duty of writing examination reports, we release an automated multimedia reporting software that dissects the neural network to learn the intermediate analysis steps, i.e., we add a new level of understanding and explainability by looking into the deep learning algorithm's decision processes. The presented open-source software can be used for easy retrieval and reuse of data for automatic report generation, comparisons, teaching, and research. As an example use case, we use live colonoscopy, the gold-standard examination of the large bowel, commonly performed for clinical and screening purposes. The added information is potentially of large value, and reuse of the data for automatic reporting may save doctors large amounts of time.
|
[7] | Evsen Yanmaz, Saeed Yahyanejad, Bernhard Rinner, Hermann Hellwagner, Christian Bettstetter, Drone networks: Communications, coordination, and sensing, In Ad Hoc Networks, Elsevier, vol. 68, Amsterdam, pp. 1-15, 2018.
[bib][url] [doi] [abstract]
Abstract: Small drones are being utilized in monitoring, transport, safety and disaster management, and other domains. Envisioning that drones form autonomous networks incorporated into the air traffic, we describe a high-level architecture for the design of a collaborative aerial system consisting of drones with on-board sensors and embedded processing, coordination, and networking capabilities. We implement a multi-drone system consisting of quadcopters and demonstrate its potential in disaster assistance, search and rescue, and aerial monitoring. Furthermore, we illustrate design challenges and present potential solutions based on the lessons learned so far.
|
[6] | Daniela Pohl, Abdelhamid Bouchachia, Hermann Hellwagner, Batch-based active learning: Application to social media data for crisis management, In Expert Systems with Applications, Elsevier Ltd., vol. 93, Amsterdam, pp. 232-244, 2018.
[bib][url] [doi] [abstract]
Abstract: Classification of evolving data streams is a challenging task, which is suitably tackled with online learning approaches. Data is processed instantly, requiring the learning machinery to (self-)adapt by adjusting its model. However, for high-velocity streams, it is usually difficult to obtain labeled samples to train the classification model. Hence, we propose a novel online batch-based active learning algorithm (OBAL) to perform the labeling. OBAL is developed for crisis management applications where data streams are generated by the social media community. OBAL is applied to discriminate relevant from irrelevant social media items. An emergency management user is interactively queried to label chosen items. OBAL exploits the boundary items for which it is highly uncertain about their class and makes use of two classifiers: k-Nearest Neighbors (kNN) and Support Vector Machine (SVM). OBAL is equipped with a labeling budget and a set of uncertainty strategies to identify the items for labeling. An extensive analysis is carried out to show OBAL’s performance, the sensitivity of its parameters, and the contribution of the individual uncertainty strategies. Two types of datasets are used: synthetic and social media datasets related to crises. The empirical results illustrate that OBAL has a very good discrimination power.
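The boundary-uncertainty selection at the heart of such active learning can be illustrated with a minimal sketch (the scores here stand in for a classifier's signed decision values, e.g. an SVM margin; the actual OBAL combines kNN and SVM with several uncertainty strategies and is considerably richer):

```python
def select_for_labeling(scores, budget):
    """Pick the `budget` most uncertain items for the human oracle to label.

    scores: dict item_id -> signed decision score (sign = predicted class,
            magnitude = confidence); items near 0 lie near the boundary.
    Returns item ids ordered by ascending |score|, truncated to the budget.
    """
    ranked = sorted(scores, key=lambda item: abs(scores[item]))
    return ranked[:budget]
```

The labeling budget caps how often the emergency-management user is interrupted, which is the practical constraint the paper's strategies are designed around.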
|
[5] | Chang Ge, Ning Wang, Wei Koong Chai, Hermann Hellwagner, QoE-Assured 4K HTTP Live Streaming via Transient Segment Holding at Mobile Edge, In IEEE Journal on Selected Areas in Communications, vol. 36, no. 8, pp. 1816-1830, 2018.
[bib][url] [doi] [pdf] [abstract]
Abstract: HTTP-based live streaming has become increasingly popular in recent years, and more users have started generating 4K live streams from their devices (e.g., mobile phones) through social-media service providers like Facebook or YouTube. If the audience is located far from a live stream source across the global Internet, TCP throughput becomes substantially suboptimal due to slow start and congestion control mechanisms. This is especially the case when the end-to-end content delivery path involves a radio access network at the last mile. As a result, the data rate perceived by a mobile receiver may not meet the high requirement of 4K video streams, which causes deteriorated quality of experience (QoE). In this paper, we propose a scheme named edge-based transient holding of live segment (ETHLE), which addresses the above-mentioned issue by performing context-aware transient holding of video segments at the mobile edge with virtualized content caching capability. By holding the minimum number of live video segments at the mobile edge cache in a context-aware manner, the ETHLE scheme is able to achieve seamless 4K live streaming experiences across the global Internet, eliminating buffering and substantially reducing initial startup delay and live stream latency. It has been deployed as a virtual network function in an LTE-A network, and its performance has been evaluated using real live stream sources that are distributed around the world. The significance of this paper is that by leveraging virtualized caching resources at the mobile edge, we address the conventional transport-layer bottleneck and enable QoE-assured Internet-wide live streaming services with high data rate requirements.
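The stall-free holding condition can be illustrated with a simplified model (this is not the paper's context-aware ETHLE policy, merely a sketch under assumed inputs): given per-segment backhaul delivery times, the edge must pre-hold just enough segments that every later segment arrives before its playback deadline.

```python
import math

def min_segments_to_hold(delivery_times, segment_sec):
    """Smallest number of live segments to hold at the edge so playback,
    consuming one segment per segment_sec seconds, never stalls.

    delivery_times: seconds the backhaul needs for each upcoming segment;
    early values may exceed segment_sec (e.g. during TCP slow start).
    Segment j (0-based) arrives at sum(delivery_times[:j+1]) and must be
    ready by its playback deadline (hold + j) * segment_sec.
    """
    need, cum = 0.0, 0.0
    for j, dt in enumerate(delivery_times):
        cum += dt
        need = max(need, cum / segment_sec - j)  # deficit, in segments
    return max(0, math.ceil(need))
```

With 2-second segments and delivery times of [4, 3, 2, 2] seconds, holding three segments absorbs the slow-start transient; once the backhaul catches up, no further holding is needed.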
|
[4] | Duc-Tien Dang-Nguyen, Klaus Schöffmann, Wolfgang Hürst, LSE2018 Panel - Challenges of Lifelog Search and Access, In LSC '18 Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge, ACM Digital Library, New York, NY, 2018.
[bib][url] [doi] [abstract]
Abstract: Lifelogging is becoming an increasingly important topic of research and this paper highlights the thoughts of the three panelists at the LSC - Lifelog Search Challenge at ICMR 2018 in Yokohama, Japan on June 11, 2018. The thoughts cover important topics such as the need for challenges in multimedia access, the need for a better user interface and the challenges in building datasets and organising benchmarking activities such as the LSC.
|
[3] | Duc-Tien Dang-Nguyen, Luca Piras, Michael Riegler, Liting Zhou, Mathias Lux, Cathal Gurrin, Overview of ImageCLEFlifelog 2018: Daily Living Understanding and Lifelog Moment Retrieval, In CLEF 2018 Working Notes, CEUR Workshop Proceedings, vol. 2125, 2018.
[bib][url] [abstract]
Abstract: Benchmarking in multimedia- and retrieval-related research fields has a long tradition and an important position within the community. Benchmarks such as the MediaEval Multimedia Benchmark or CLEF are well established and well served by the community. One major goal of these competitions, besides comparing different methods and approaches, is to create or promote new interesting research directions within multimedia, for example the Medico task at MediaEval with its goal of medical multimedia analysis. Lifelogging attracts a lot of attention in the community, as shown by the several workshops and special sessions hosted on the topic, and there also exist some lifelogging-related benchmarks, for example the previous edition of the lifelogging task at ImageCLEF. Last year's ImageCLEFlifelog task was well received but had some barriers that made it difficult for some researchers to participate (data size, multi-modal features, etc.). ImageCLEFlifelog 2018 tries to overcome these problems and make the task accessible to an even broader audience (e.g., pre-extracted features are provided). Furthermore, the task is divided into two subtasks (challenges): lifelog moment retrieval (LMRT) and Activities of Daily Living understanding (ADLT). All in all, seven teams participated with a total of 41 runs, a significant increase compared to the previous year.
|
[2] | Abdelhak Bentaleb, Bayan Taani, Ali Cengiz Begen, Christian Timmerer, Roger Zimmermann, A Survey on Bitrate Adaptation Schemes for Streaming Media over HTTP, In IEEE Communications Surveys & Tutorials, 2018.
[bib] [doi] |
[1] | Muhammad Aleem, Radu Prodan, On the Parallel Programmability of JavaSymphony for Multi-cores and Clusters, In International Journal of Ad Hoc and Ubiquitous Computing, 2018.
[bib][url] [doi] [abstract]
Abstract: This paper explains the programming aspects of a promising Java-based programming and execution framework called JavaSymphony. JavaSymphony provides unified high-level programming constructs for applications targeting shared-, distributed-, and hybrid-memory parallel computers and co-processor accelerators. JavaSymphony applications can be executed on a variety of multi-/many-core conventional and data-parallel architectures. JavaSymphony is based on the concept of dynamic virtual architectures, which allows programmers to define a hierarchical structure of the underlying computing resources and to control load balancing and task locality. In addition to GPU support, JavaSymphony provides a multi-core-aware scheduling mechanism capable of mapping parallel applications onto large multi-core machines and heterogeneous clusters. Several real applications and benchmarks (on modern multi-core computers, heterogeneous clusters, and machines consisting of a combination of different multi-core CPU and GPU devices) have been used to evaluate the performance. The results demonstrate that JavaSymphony outperforms the corresponding Java implementations, as well as other modern alternative solutions.
|