[27] | Laszlo Böszörmenyi, Informatik in der Schule, In Erziehungskunst - Monatsschrift zur Pädagogik Rudolf Steiners, Erziehungskunst, Stuttgart, Germany, pp. 113-121, 1997.
[bib] [url] [pdf] |
[26] | Laszlo Böszörmenyi, Karl-Heinz Eder, Carsten Weich, A Very Fast Parallel Object Store for Very Fast Applications, In Simulation Practice and Theory, Elsevier, vol. 5, no. 7-8, Oxford, United Kingdom, pp. 605-622, 1997.
[bib] [pdf] [abstract]
Abstract: An architecture for a memory-resident, Parallel and Persistent ObjectSTore (PPOST) is suggested. Different object-oriented databases might be built on top of PPOST. The term memory-resident (or main-memory-based) means that the primary storage device is main memory. Persistence is guaranteed automatically by managing secondary and stable storage devices (such as main memory with uninterrupted power supply, discs and tapes). The architecture is able to take advantage of the main memory available in a parallel or distributed environment. Thus, transactions can actually be performed at memory speed, without being limited by the size of the memory of a given computer. Such an architecture is especially advantageous for applications requiring very fast answers, such as CAD or high-performance simulation.
|
[25] | Laszlo Böszörmenyi, Karl-Heinz Eder, M3Set - A Language for Handling of Distributed and Persistent Sets of Objects, In Parallel Computing, Elsevier, vol. 22, no. 13, Oxford, United Kingdom, pp. 1897-1912, 1997.
[bib] [url] [abstract]
Abstract: We claim that distributed object-oriented systems must provide a higher level of abstraction to their users than is usually provided. In particular, it is necessary to provide application-oriented, intelligent aggregates of objects with transparent distribution of their elements. Besides that, it seems not only reasonable but also relatively easy to connect persistence with distribution. A system offering distributed and persistent polymorphic sets of objects at the level of a clean, type-safe programming language is introduced. The user of such a system gets distribution and persistence in the same "natural" way as users of traditional systems get volatile arrays of numbers or classes of objects.
|
[24] | Georg Acher, Hermann Hellwagner, Wolfgang Karl, Markus Leberecht, Eine PCI-SCI-Adapterkarte für ein PC-Cluster mit verteiltem gemeinsamen Speicher, In Arbeitsplatz-Rechensysteme: Anwendungen, Architekturen, Betriebssysteme und Netzwerke, 1997.
[bib] |
[23] | Hermann Hellwagner, Ivan Zoraja, Vaidy Sunderam, PVM Data Transfers on SCI Workstation Clusters, In Proceedings PVM User Group Meeting (Arndt Bode, Jack Dongarra, Thomas Ludwig, Vaidy Sunderam, eds.), Springer, 1996.
[bib] |
[22] | Karl-Heinz Eder, Laszlo Böszörmenyi, Optimized Parallel Sets for Data Intensive Applications, In DEXA '96 Proceedings of the 7th International Workshop on Database and Expert Systems Applications (Roland Wagner, Helmut Thoma, eds.), Springer Verlag, Heidelberg, pp. 185, 1996.
[bib] [doi] [abstract]
Abstract: An extension of a general-purpose programming language (gpPL) is presented. It enables parallelism, persistence and query optimization based on sets. The authors demonstrate that in gpPLs the primitive "set" can be generalised for the needs of database and expert system applications. Side-effect-free declarative queries, based on set expressions, can be optimized and executed in parallel. Individual optimization and parallelization are integral parts of the language system and compiler. Very different combinations of persistent or volatile, parallel or sequential, and optimized or non-optimized implementations are possible. This is eased by the fact that a great part of the implementation is located outside the compiler, behind predefined interfaces. Different algebras, optimizers or algorithms can be considered. The same program can be executed without modification on various systems or platforms.
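The core point — that a side-effect-free set query can be partitioned and evaluated in parallel without changing the program — can be sketched as follows (hypothetical names; the paper's system does this inside the language and compiler, not as a library call):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_select(elements, predicate, partitions=4):
    """Evaluate a side-effect-free selection over a set in parallel:
    partition the elements, filter each partition independently,
    and take the union of the partial results."""
    items = list(elements)
    # Round-robin partitioning of the input set.
    chunks = [items[i::partitions] for i in range(partitions)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        parts = pool.map(lambda c: {x for x in c if predicate(x)}, chunks)
    return set().union(*parts)
```

Because the predicate is free of side effects, the result is independent of how the set is partitioned — e.g. `parallel_select(range(10), lambda x: x % 2 == 0)` yields the even numbers regardless of the number of partitions.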
|
[21] | Laszlo Böszörmenyi, ed., Parallel Computation, Springer Verlag, Berlin, Heidelberg, New York, pp. 123, 1996.
[bib] [abstract]
Abstract: The Austrian Center for Parallel Computation (ACPC) is a co-operative research organization founded in 1989 to promote research and education in the field of software for parallel computer systems. The areas in which the ACPC is active include algorithms, languages, compilers, programming environments, parallel databases, parallel I/O, and applications for parallel and high-performance computing systems. The partner institutions of the ACPC are the University of Vienna, the Technical University of Vienna, and the Universities of Linz, Salzburg, and Klagenfurt. They carry out joint research projects, share a pool of hardware resources, and offer a curriculum in parallel computation for graduate and postgraduate students. In addition, an international conference is organized every other year. The Third International Conference of the ACPC took place in Klagenfurt, Austria, from September 23 to September 25, 1996. The conference attracted many participants from around the world. Authors from 13 countries submitted 31 papers, from which 15 were selected and presented at the conference. Six contributions were accepted for a poster session. In addition, two distinguished researchers presented invited papers. The papers from these presentations are contained in this proceedings volume.
|
[20] | Laszlo Böszörmenyi, Andreas Stopper, Acceleration of Distributed, Object-Oriented Simulations Using a Graph-Optimizing Approach, In Directory of Simulation Software, 1996 (Agostino Bruzzone, Eugene Kerckhoffs, eds.), Society for Computer Simulation International, Genoa, Italy, pp. 56, 1996.
[bib] [pdf] [abstract]
Abstract: An approach to accelerate distributed, object-oriented simulations is presented in this paper. It is based on the assumption that a higher acceleration can be achieved in an easier way if the problem is already tackled early at the modeling stage [STOP 95]. The user adds hints about the communication behavior and frequencies of object classes to the simulation model. Based on this information, an object graph is generated and distributed over a selected number of partitions. The distribution phase is fully automatic. As a result, a distribution of the problem near the communication optimum is generated. In the next phase the distributed simulation program (code) is generated. In a final step the user only has to code the methods of the object classes and run the simulation. The major advantage of this approach is that the user is freed from the difficult task of finding a good distribution for the problem to be simulated, which is an important factor for the overall performance of the simulation. Another advantage is the possibility to vary the model information (hints) about the communication and get a new (quasi-optimal) version of the simulation generated automatically.
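The automatic distribution phase can be illustrated with a toy greedy partitioner over a hint-weighted object graph (all names are hypothetical sketches, not the paper's actual graph-optimizing algorithm): objects that communicate heavily, according to the hints, tend to be placed in the same partition.

```python
def greedy_partition(edges, num_parts):
    """Greedily assign graph nodes to partitions so that heavily
    communicating nodes (high edge weight, from user hints) tend to
    land together.  edges: {(a, b): weight}."""
    assignment, load = {}, [0] * num_parts
    # Visit nodes in decreasing order of total communication volume.
    volume = {}
    for (a, b), w in edges.items():
        volume[a] = volume.get(a, 0) + w
        volume[b] = volume.get(b, 0) + w
    for node in sorted(volume, key=volume.get, reverse=True):
        # Score each partition by communication to already placed peers.
        scores = [0] * num_parts
        for (a, b), w in edges.items():
            peer = b if a == node else a if b == node else None
            if peer in assignment:
                scores[assignment[peer]] += w
        # Prefer the best-communicating partition, break ties by load.
        best = max(range(num_parts), key=lambda p: (scores[p], -load[p]))
        assignment[node] = best
        load[best] += 1
    return assignment
```

A real partitioner would also bound partition sizes and weigh node processing costs; this sketch only captures the "minimize cross-partition communication" objective.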
|
[19] | Laszlo Böszörmenyi, Woher kommt die Information?, Chapter in 25 Jahre Universität Klagenfurt (Universitaet Klagenfurt, ed.), Carinthia GmbH, Klagenfurt, Austria, pp. 278, 1996.
[bib] |
[18] | Laszlo Böszörmenyi, Carsten Weich, eds., Programming in Modula-3, Springer Verlag, Heidelberg, pp. 571, 1996.
[bib] [abstract]
Abstract: The difficulty of programming lies in the need to bring our ideas into a form that can be processed by a machine. This book shows how to write and understand even complex programs by applying proper structures and good style. It uses the programming language Modula-3, which relies on and extends the well-known concepts of Pascal and Modula-2. The steps needed to become an expert programmer are based first of all on the elegant type concept of Modula-3. The programming style supported by this concept leads the reader step by step toward coping with complex data structures and algorithms. Such new and exciting subjects as object-oriented and parallel programming are touched upon. The book requires no prior programming experience.
|
[17] | Günter Böckle, Hermann Hellwagner, Roland Lepold, Gerd Sandweg, Burghardt Schallenberger, Raimar Thudt, Stefan Wallstab, Structured Evaluation of Computer Systems, In IEEE Computer, vol. 29, no. 6, pp. 45-51, 1996.
[bib] [doi] [pdf] [abstract]
Abstract: Evaluating computers and other systems is difficult for a couple of reasons. First, the goal of evaluation is typically ill-defined: customers, sometimes even designers, either don't know or can't specify exactly what result they expect. Often, they don't specify the architectural variants to consider, and often the metrics and workload they expect you to use are ill-defined. Second, they rarely clarify which kind of model and evaluation method best suit the evaluation problem. These problems have consequences. For one thing, the decision-maker may not trust the evaluation. For another, poor planning means the evaluation cannot be reproduced if any of the parameters are changed slightly. Finally, the evaluation documentation is usually inadequate, and so some time after the evaluation you might ask yourself: how did I come to that conclusion? An approach developed at Siemens makes decisions explicit and the process reproducible.
|
[16] | Arndt Bode, Michael Gerndt, R Hackenberg, Hermann Hellwagner, High-Level Programming Models and Supportive Environments (HIPS '96), In Proceedings of IPPS '96, The 10th International Parallel Processing Symposium, IEEE Computer Society, 1996.
[bib] |
[15] | Arndt Bode, Michael Gerndt, R G Hackenberg, Hermann Hellwagner, Proceedings First International Workshop on High-Level Parallel Programming Models and Supportive Environments, IEEE Computer Society Press, pp. 128, 1996.
[bib] |
[14] | Laszlo Böszörmenyi, Andreas Stopper, A Distributed, Object Oriented Simulation System based on Hints, In Eurosim '95 (Felix Breitenecker, Irmgard Husinsky, eds.), Elsevier, Vienna, pp. 1356, 1995.
[bib] [abstract]
Abstract: A hint-based, distributed, discrete, object-oriented simulation system is described. In the course of the design phase of the simulation model, explicit hints can be provided concerning dependencies and information flow inside the model. The process of parallelization consists of two major steps. In the first step, the abstract model, enriched with user-supplied hints, is mapped onto an arbitrary number of active units. In the second step, the active units are mapped onto a given number of physical nodes, characterized by their processing capacity and by the communication latency between them. The distribution scheme may be dynamic, i.e. simulation objects can change their location in the course of the simulation in order to achieve better performance.
|
[13] | Laszlo Böszörmenyi, Carsten Weich, eds., Programmieren mit Modula-3, Springer Verlag, Heidelberg, pp. 577, 1995.
[bib] [abstract]
Abstract: The main task of programming is to bring solution ideas for a problem into a form that can be processed by a machine. This translation of ideas into mechanical form is often difficult and can be discouraging, especially for beginners. This book shows how even complex programs can be written and understood through proper structuring and the development of a good "style". To this end it uses the language Modula-3, which, as a successor to Pascal and Modula-2, builds on and extends the concepts already proven in those languages. The programming style this makes possible leads step by step from handling complex data types and algorithms to modern and demanding topics such as object-oriented and parallel programming.
|
[12] | Laszlo Böszörmenyi, Karl-Heinz Eder, Carsten Weich, PPOST - A Persistent Parallel Object Store, In Massively Parallel Processing Applications and Development, Proceedings of the 1994 EUROSIM Conference on Massively Parallel Processing (Lan Dekker, Wim Smit, Jan C Zuidervaart, eds.), Elsevier, Oxford, United Kingdom, pp. 163-170, 1994.
[bib] [pdf] |
[11] | Laszlo Böszörmenyi, Karl-Heinz Eder, Adding Parallel and Persistent Sets to Modula-3, In Proceedings of the Joint Modular Languages Conference (Peter Schulthess, ed.), Universitätsverlag Ulm, Ulm, pp. 201-216, 1994.
[bib] [pdf] [abstract]
Abstract: It is proposed that parallel and persistent object sets be incorporated into general-purpose programming languages. Two alternative implementations are presented. The actual form of the proposal is an extension of Modula-3.
|
[10] | Laszlo Böszörmenyi, Phantasie und TV-Gewalt, In Erziehungskunst - Monatsschrift zur Pädagogik Rudolf Steiners, Erziehungskunst, vol. 58, no. 3, Stuttgart, Germany, pp. 210-212, 1994.
[bib] [url] |
[9] | Günter Böckle, Hermann Hellwagner, Systematic Assessment of Computer Systems Architectures, In Innovationen bei Rechen- und Kommunikationssystemen, Eine Herausforderung für die Informatik (Bernd E Wolfinger, ed.), Springer Verlag, pp. 310-317, 1994.
[bib] |
[8] | Hermann Hellwagner, Randomized Shared Memory - Concept and Efficiency of a Scalable Shared Memory Scheme, In Parallel Computer Architectures: Theory, Hardware, Software, Applications (Bode Arndt, Mario Dal Cin, eds.), Springer Verlag, London, UK, pp. 102-117, 1993.
[bib] [abstract]
Abstract: Our work explores the practical relevance of Randomized Shared Memory (RSM), a theoretical concept that has been proven to enable an (asymptotically) optimally efficient implementation of scalable and universal shared memory in a distributed-memory parallel system. RSM (address hashing) pseudo-randomly distributes global memory addresses throughout the nodes' local memories. High memory access latencies are masked through massive parallelism. This paper introduces the basic principles and properties of RSM and analyzes its practical efficiency in terms of constant factors through simulation studies, assuming a state-of-the-art parallel architecture. Bottlenecks in the architecture are pointed out, and improvements are made and their effects assessed quantitatively. The results show that RSM efficiency is encouragingly high, even in a non-optimized architecture. We propose architectural features to support RSM and conclude that RSM may indeed be a feasible shared-memory implementation in future massively parallel computers.
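The address-hashing idea behind RSM can be sketched in a few lines (a hypothetical illustration, not the actual hash function studied in the paper): every global address is pseudo-randomly mapped to a node and a local slot, so even a worst-case sequential access pattern spreads evenly over all local memories.

```python
import hashlib

def rsm_location(addr, num_nodes):
    """Pseudo-randomly map a global address to (node, local slot),
    so that any access pattern spreads evenly over the machine."""
    digest = hashlib.sha256(addr.to_bytes(8, "little")).digest()
    h = int.from_bytes(digest[:8], "little")
    return h % num_nodes, h // num_nodes

# Even a strictly sequential address stream is scattered across nodes:
counts = [0] * 8
for addr in range(8000):
    node, _ = rsm_location(addr, 8)
    counts[node] += 1
```

The mapping is deterministic (every processor computes the same location for a given address) yet statistically balanced, which is what lets massive parallelism mask the remote-access latency.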
|
[7] | Hermann Hellwagner, Design Considerations for Scalable Parallel File Systems, In The Computer Journal - Parallel Processing, vol. 36, no. 8, pp. 741-755, 1993.
[bib] [pdf] [abstract]
Abstract: This paper addresses the problem of providing high-performance disk I/O in massively parallel computers. Resolving the fundamental I/O bottleneck in parallel architectures involves both hardware and software issues. We review previous work on disk arrays and I/O architectures aimed at providing highly parallel disk I/O subsystems. We then focus on the requirements and design of parallel file systems (PFSs), which are responsible for making the parallelism offered by the hardware and a declustered file organization available to application programs. We present the design strategy and key concepts of a general-purpose file system for a parallel computer with scalable distributed shared memory. The principal objectives of the PFS are to fully exploit the parallelism inherent among and within file accesses, and to provide scalable I/O performance. The machine model underlying the design is described, with an emphasis on the innovative architectural features supporting scalability of the shared memory. Starting from a classification of various scenarios of concurrent I/O requests, the features of the PFS design essential for achieving the goals are described and justified. It is argued that the inter- and intra-request parallelism of the I/O load can indeed be effectively exploited and supported by the parallel system resources. Scalability of I/O performance and of the PFS software can be ensured by avoiding serial bottlenecks through the use of the powerful architectural features.
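The declustered file organization a PFS builds on can be illustrated by a simple round-robin striping function (an assumed illustration, not the paper's actual layout policy): consecutive logical file blocks land on different disks, so a single large request engages all disks in parallel.

```python
def stripe(block_no, num_disks):
    """Round-robin declustering: logical file block -> (disk, block-on-disk)."""
    return block_no % num_disks, block_no // num_disks

# A request for logical blocks 0..7 on a 4-disk array touches
# every disk exactly twice, so the transfers can proceed in parallel:
placement = [stripe(b, 4) for b in range(8)]
```

Intra-request parallelism falls out of the layout: any run of `num_disks` consecutive blocks maps to `num_disks` distinct disks.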
|
[6] | Laszlo Böszörmenyi, Informatik in der Grundschule, In Didaktische Zeitschrift des IST-Zentrums Linz, Interdisziplinäre Zentrum für Soziale Kompetenz, Linz, Austria, pp. 15-17, 1993.
[bib] |
[5] | Laszlo Böszörmenyi, A Comparison of Modula-3 and Oberon-2: extended version, In Structured Programming, Springer, vol. 14, no. 1, Berlin, Heidelberg, New York, pp. 15-22, 1993.
[bib] |
[4] | Hermann Hellwagner, On the Practical Efficiency of Randomized Shared Memory, In Parallel Processing: CONPAR 92 - VAPP V, Second Joint International Conference on Vector and Parallel Processing (Luc Bougé, Michel Cosnard, Yves Robert, Denis Trystram, eds.), Springer, Berlin-Heidelberg, pp. 429-440, 1992.
[bib] [abstract]
Abstract: This paper analyzes the efficiency of Randomized Shared Memory (RSM) in terms of constant factors. RSM or memory hashing, that is, pseudorandom distribution of global memory addresses throughout local memories in a distributed-memory parallel system, has been proven to enable an (asymptotically) optimally efficient implementation of scalable and universal shared memory. High memory access latencies are hidden through massive parallelism. Our work examines the practical relevance and feasibility of this potentially significant theoretical result. After an introduction of the background, principles, and desirable properties of RSM and an outline of the approach to determine RSM efficiency, the major results of our simulations are presented. The results show that RSM efficiency is encouragingly high (up to 20% efficiency of idealized shared memory), even in an architecture modelled on the basis of state-of-the-art technology. Performance-limiting factors are identified from the results and architectural features to increase efficiency are proposed, most notably extremely fast process switching and a combining network. Several novel machine designs document the increased interest in RSM and hardware support.
|
[3] | Laszlo Böszörmenyi, Informatik und Wissenschaftsgeschichte, In Informatik in der Schule - Informatik für die Schule, Böhlau, vol. 10, Vienna, Austria, pp. not available, 1992.
[bib] |