146 results for distributed conditional computation
Abstract:
The real-time parallel computation of histograms using an array of pipelined cells is proposed and prototyped in this paper, with application to consumer imaging products. The array operates in two modes: histogram computation and histogram reading. The proposed parallel computation method does not use any memory blocks. The resulting histogram bins can be stored in an external memory block in a pipelined fashion for subsequent reading or streaming of the results. The array of cells can be tuned to accommodate the required data-path width in a VLSI image processing engine, as present in many consumer imaging devices. FPGA syntheses of the architectures presented in this paper are shown to compute the real-time histogram of images streamed at over 36 megapixels at 30 frames/s by processing 1, 2 or 4 pixels in parallel per clock cycle.
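A minimal software model of the idea described above, assuming nothing about the paper's actual pipelined-cell design: the hypothetical function below consumes a fixed number of pixels per simulated clock cycle (1, 2 or 4) and keeps one counter per bin cell; all names and parameters are illustrative only.

import numpy as np

def histogram_parallel(pixels, num_bins=256, pixels_per_cycle=4):
    """Accumulate a histogram while consuming a fixed number of pixels per cycle."""
    bins = np.zeros(num_bins, dtype=np.uint32)               # one counter per bin cell
    for start in range(0, len(pixels), pixels_per_cycle):    # one iteration ~ one clock cycle
        for p in pixels[start:start + pixels_per_cycle]:     # updated concurrently in hardware
            bins[p] += 1
    return bins

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=640 * 480, dtype=np.uint8)
    assert histogram_parallel(frame, pixels_per_cycle=4).sum() == frame.size

In hardware the per-cycle updates occur concurrently in the cell array; the inner loop only models that behaviour sequentially.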
Abstract:
Edited book to support level 3 undergraduate understanding.
Abstract:
This article presents the results of a study that explored the human side of the multimedia experience. We propose a model that assesses quality variation at three distinct levels: the network, media and content levels; and from two views: the technical and the user perspective. By varying parameters at each quality level and from each perspective, we were able to examine their impact on user quality perception. Results show that a significant reduction in frame rate does not proportionally reduce the user's understanding of the presentation, independent of technical parameters; that multimedia content type significantly impacts user information assimilation, level of enjoyment and perception of quality; and that the display device type impacts user information assimilation and perception of quality. Finally, to ensure the transfer of information, low-level abstraction (network-level) parameters, such as delay and jitter, should be adapted; to maintain the user's level of enjoyment, high-level abstraction (content-level) quality parameters, such as the appropriate use of display screens, should be adapted.
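The model is not specified in code form in the article; the sketch below is only a hypothetical rendering of its three quality levels, two perspectives and the adaptation rule stated in the final sentence (adapt network-level parameters for information transfer, content-level parameters for enjoyment). All class and field names are ours.

from dataclasses import dataclass

@dataclass
class QualitySetting:
    delay_ms: float        # network level (technical perspective)
    jitter_ms: float       # network level (technical perspective)
    frame_rate_fps: int    # media level
    display_type: str      # content level (user perspective), e.g. "desktop", "handheld"
    content_type: str      # content level (user perspective), e.g. "news", "sport"

def parameters_to_adapt(goal: str) -> list:
    """Mirror the stated conclusion: low-level parameters protect information
    transfer, high-level (content-level) parameters protect enjoyment."""
    if goal == "information_transfer":
        return ["delay_ms", "jitter_ms"]
    if goal == "enjoyment":
        return ["display_type"]
    raise ValueError(goal)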
Abstract:
Distributed multimedia supports a symbiotic infotainment duality, i.e. the ability to transfer information to the user while also providing the user with a level of satisfaction. As multimedia is ultimately produced for the education and/or enjoyment of viewers, the user's perspective on presentation quality is surely as important as objective Quality of Service (QoS) technical parameters in defining distributed multimedia quality. In order to measure the user perspective of multimedia video quality extensively, we introduce an extended model of distributed multimedia quality that segregates quality into three discrete levels: the network level, the media level and the content level, using two distinct quality perspectives: the user perspective and the technical perspective. Since experimental questionnaires do not provide continuous monitoring of user attention, eye tracking was used in our study in order to provide a better understanding of the role that the human element plays in the reception, analysis and synthesis of multimedia data. Results showed that video content adaptation results in disparity in users' eye-paths when: i) no single or obvious point of focus exists; or ii) the point of attention changes dramatically. Accordingly, appropriate technical- and user-perspective parameter adaptation is implemented for all quality abstractions of our model, i.e. the network level (via simulated delay and jitter), the media level (via a technical- and user-perspective manipulated region-of-interest attentive display) and the content level (via display type and video clip type). Our work has shown that user perception of distributed multimedia quality cannot be achieved by means of purely technical-perspective QoS parameter adaptation.
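The abstract reports disparity between users' eye-paths under content adaptation, but the metric itself is not given here; the following is a generic illustration using the mean point-wise distance between two time-aligned gaze paths, not the authors' measure.

import numpy as np

def eye_path_disparity(path_a, path_b):
    """Mean Euclidean distance between two time-aligned (x, y) gaze sequences."""
    a, b = np.asarray(path_a, dtype=float), np.asarray(path_b, dtype=float)
    n = min(len(a), len(b))                  # compare over the common length
    return float(np.linalg.norm(a[:n] - b[:n], axis=1).mean())

focused   = [(100, 120), (102, 119), (101, 121)]   # stable point of focus
wandering = [(100, 120), (300, 40), (500, 400)]    # attention shifts dramatically
print(eye_path_disparity(focused, focused))        # 0.0
print(eye_path_disparity(focused, wandering))      # large disparity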
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
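The preface gives no algorithmic detail for the feature-selection contribution; purely as an illustration of the general pattern it names, the sketch below scores random feature subsets in parallel worker processes and keeps the best one. The scoring function and all names are placeholders, not the workshop paper's method.

import random
from concurrent.futures import ProcessPoolExecutor

def score_subset(args):
    subset, feature_scores = args
    # Placeholder objective: favour small subsets whose (toy) scores sum high.
    return sum(feature_scores[f] for f in subset) / (1 + len(subset)), subset

def parallel_subset_selection(feature_scores, n_subsets=32, workers=4, seed=0):
    rng = random.Random(seed)
    features = list(feature_scores)
    subsets = [rng.sample(features, k=rng.randint(1, len(features)))
               for _ in range(n_subsets)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        best = max(pool.map(score_subset, [(s, feature_scores) for s in subsets]))
    return best[1]

if __name__ == "__main__":
    toy = {"f1": 0.9, "f2": 0.1, "f3": 0.7, "f4": 0.2}
    print(parallel_subset_selection(toy))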
Abstract:
One goal in the development of distributed virtual environments (DVEs) is to create a system such that users are unaware of the distribution; that is, the distribution should be transparent. The paper begins by discussing the general issues in DVEs that might make this possible, and a system that allows some level of distribution transparency is described. The system described suffers from the effects of inconsistency, which in turn cause undesirable visual effects. The causal surface is introduced as a solution that removes these visual effects. The paper then introduces two determining factors of distribution transparency, relating to user perception and performance. With regard to these factors, two hypotheses are stated relating to the causal surface. A user trial with forty-five subjects is used to validate the hypotheses. A discussion of the results of the trial concludes that the causal surface solution significantly improves distribution transparency in a DVE.
Abstract:
Research to date has tended to concentrate on bandwidth considerations to increase scalability in distributed interactive simulation and virtual reality systems. This paper proposes that the major concern for latency in user interaction is the fundamental limit on communication rate imposed by the speed of light. Causal volumes and surfaces are introduced as a model of the limitations on causality caused by this fundamental delay. The concept of a virtual-world critical speed is introduced, which can be determined from the causal surface. The implications of the critical speed are discussed, and relativistic dynamics are used to constrain object speed, in the same way that speeds are bounded in the real world.
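The paper's derivation of the critical speed is not reproduced here; as a hedged illustration of the final sentence only, the sketch below applies the one-dimensional relativistic velocity-addition rule with an assumed critical speed c_crit, so that repeated accelerations approach but never exceed that bound.

def add_velocity(v, dv, c_crit):
    """1-D relativistic velocity addition: the result never exceeds c_crit."""
    return (v + dv) / (1.0 + (v * dv) / (c_crit ** 2))

c_crit = 10.0          # assumed virtual-world critical speed (units per second)
v = 0.0
for _ in range(100):
    v = add_velocity(v, 5.0, c_crit)   # keep applying the same boost
print(v)               # asymptotically approaches, but never reaches, c_crit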
Abstract:
The development of large-scale virtual reality and simulation systems has been mostly driven by the DIS and HLA standards community. A number of issues are coming to light about the applicability of these standards, in their present state, to the support of general multi-user VR systems. This paper pinpoints four issues that must be readdressed before large-scale virtual reality systems become accessible to a larger commercial and public domain: a reduction in the effects of network delays; scalable causal event delivery; update control; and scalable reliable communication. Each of these issues is tackled through a common theme of combining wall-clock and causal time-related entity behaviour, knowledge of network delays, and prediction of entity behaviour, which together overcome many of the effects of network delay.
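The abstract does not detail its prediction scheme; the sketch below only illustrates the standard dead-reckoning idea suggested by "knowledge of network delays and prediction of entity behaviour": extrapolate a remote entity's last reported state across the measured delay. All names are illustrative.

from dataclasses import dataclass

@dataclass
class EntityState:
    position: tuple    # (x, y, z) at the time the update was sent
    velocity: tuple    # units per second

def predict(state, measured_delay_s):
    """Extrapolate the entity's position to 'now' across the measured one-way delay."""
    return tuple(p + v * measured_delay_s
                 for p, v in zip(state.position, state.velocity))

remote = EntityState(position=(0.0, 0.0, 0.0), velocity=(2.0, 0.0, 0.0))
print(predict(remote, measured_delay_s=0.15))   # (0.3, 0.0, 0.0)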
Abstract:
The evaluation of investment fund performance has been one of the main developments of modern portfolio theory. Most studies employ the technique developed by Jensen (1968), which compares a particular fund's returns to a benchmark portfolio of equal risk. However, the standard measures of fund manager performance are known to suffer from a number of problems in practice. In particular, previous studies implicitly assume that the risk level of the portfolio is stationary through the evaluation period; that is, unconditional measures of performance do not account for the fact that risk and expected returns may vary with the state of the economy. Many of the problems encountered in previous performance studies therefore reflect the inability of traditional measures to handle the dynamic behaviour of returns. As a consequence, Ferson and Schadt (1996) suggest an approach called conditional performance evaluation, which is designed to address this problem. This paper applies such a conditional measure of performance to a sample of 27 UK property funds over the period 1987-1998. The results suggest that once the time-varying nature of the funds' betas is corrected for, by the addition of the market indicators, average fund performance shows an improvement over that obtained with traditional methods of analysis.
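As a sketch of the conditional approach the abstract refers to (Ferson and Schadt, 1996), the regression below lets beta vary with lagged information variables by interacting them with the market excess return; the data are synthetic and the variable names are ours, not the paper's.

import numpy as np

rng = np.random.default_rng(0)
T = 120
z = rng.standard_normal((T, 2))                    # lagged, demeaned instruments (e.g. yield, term spread)
rm = 0.005 + 0.04 * rng.standard_normal(T)         # market excess return
beta_t = 0.8 + z @ np.array([0.2, -0.1])           # time-varying beta (data-generating process)
rp = 0.001 + beta_t * rm + 0.01 * rng.standard_normal(T)   # fund excess return

# Regressors: constant, market, and market interacted with each instrument.
X = np.column_stack([np.ones(T), rm, z * rm[:, None]])
coef, *_ = np.linalg.lstsq(X, rp, rcond=None)
print("conditional alpha:", coef[0])               # Jensen-style intercept with time-varying beta
print("beta coefficients:", coef[1:])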
Abstract:
This paper proposes a solution to the problems associated with network latency within distributed virtual environments. It begins by discussing the advantages and disadvantages of synchronous and asynchronous distributed models, in the areas of user and object representation and user-to-user interaction. By introducing a hybrid solution, which utilises the concept of a causal surface, the advantages of both synchronous and asynchronous models are combined. Object distortion is a characteristic feature of the hybrid system, and this is proposed as a solution which facilitates dynamic real-time user collaboration. The final section covers implementation details, with reference to a prototype system available from the Internet.
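The causal-surface construction itself is not reproduced here. As a rough illustration of the family of techniques it belongs to, the sketch below renders each object at a time offset that is near zero close to the local user and approaches the network delay close to the remote user, which is one way such object distortion can arise; the blending formula is ours, not the paper's.

import math

def render_time_offset(obj_pos, local_pos, remote_pos, network_delay_s):
    """Blend the rendering-time offset by the object's relative distance to the two users."""
    d_local = math.dist(obj_pos, local_pos)
    d_remote = math.dist(obj_pos, remote_pos)
    w = d_local / (d_local + d_remote + 1e-9)   # 0 next to the local user, 1 next to the remote user
    return w * network_delay_s

print(render_time_offset((0, 0), (0, 0), (10, 0), 0.2))    # ~0.0 s: immediate local response
print(render_time_offset((10, 0), (0, 0), (10, 0), 0.2))   # ~0.2 s: fully delayed near the remote user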