964 results for Computer software Reusability
Abstract:
A simulation program has been developed to calculate the power-spectral density of thin avalanche photodiodes, which are used in optical networks. The program extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory space and is very time-consuming. We describe our experiences in parallelizing the code using both MPI and OpenMP. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the OpenMP code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
Abstract:
An important factor for high-speed optical communication is the availability of ultrafast and low-noise photodetectors. Among the semiconductor photodetectors commonly used in today's long-haul and metro-area fiber-optic systems, avalanche photodiodes (APDs) are often preferred over p-i-n photodiodes due to their internal gain, which significantly improves the receiver sensitivity and alleviates the need for optical pre-amplification. Unfortunately, the very process of carrier impact ionization that generates the gain is inherently noisy, resulting in fluctuations not only in the gain but also in the time response. We have recently developed a theory characterizing the autocorrelation function of APDs that incorporates the dead-space effect, an effect that is very significant in thin, high-performance APDs. The research extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory space and is very time-consuming. In this research, we describe our experiences in parallelizing the code in MPI and OpenMP using CAPTools. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
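To make the array-partitioning idea concrete, the following minimal Python sketch (illustrative only, not the authors' CAPTools-generated MPI/OpenMP code) splits the lag range of an impulse-response autocorrelation into contiguous blocks and evaluates the blocks with a pool of worker processes; the function names and the block partitioning scheme are assumptions made purely for illustration.

    # Illustrative sketch (not the authors' code): block partitioning of the
    # autocorrelation of a sampled impulse response over lag indices.
    import numpy as np
    from multiprocessing import Pool

    def autocorr_block(args):
        """Compute autocorrelation values for one contiguous block of lags."""
        signal, lags = args
        n = len(signal)
        return [float(np.dot(signal[:n - k], signal[k:])) for k in lags]

    def parallel_autocorr(signal, max_lag, workers=4):
        """Split the lag range into equal blocks and evaluate them in parallel."""
        lags = np.arange(max_lag)
        blocks = np.array_split(lags, workers)          # block partitioning
        with Pool(workers) as pool:
            parts = pool.map(autocorr_block, [(signal, b) for b in blocks])
        return np.concatenate([np.asarray(p) for p in parts])

    if __name__ == "__main__":
        h = np.exp(-np.linspace(0, 5, 10_000))          # toy impulse response
        r = parallel_autocorr(h, max_lag=1_000)
        print(r[:3])

A cyclic partitioning or a dynamic scheduling policy, as compared in the abstracts above, could be substituted simply by changing how the lag indices are grouped and dispatched.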
Abstract:
Johnson's SB distribution is a four-parameter distribution that is transformed into a normal distribution by a logit transformation. By replacing the normal distribution of Johnson's SB with the logistic distribution, we obtain a new distributional model that approximates the SB. It is analytically tractable, and we name it the "logitlogistic" (LL) distribution. A generalized four-parameter Weibull model and the Burr XII model are also introduced for comparison purposes. Using the distributional "shape plane" (with skewness and kurtosis as its axes), we compare the "coverage" properties of the LL, the generalized Weibull, and the Burr XII with those of Johnson's SB, the beta, and the three-parameter Weibull, the main distributions used in forest modelling. The LL is found to have the largest range of shapes. An empirical case study of the distributional models is conducted on 107 sample plots of Chinese fir. The LL performs best among the four-parameter models.
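As a sketch of the transformation just described, and assuming the usual Johnson SB parameterisation (location xi, range lam, and shape parameters gamma and delta, which are assumptions here rather than the paper's notation), the logit-logistic CDF is obtained by pushing the logit-transformed variable through a standard logistic instead of a standard normal CDF. The Python below is illustrative only.

    # Illustrative sketch of the logit-logistic (LL) distribution.
    import numpy as np

    def ll_cdf(x, xi, lam, gamma, delta):
        """CDF of the logit-logistic distribution on (xi, xi + lam)."""
        u = (x - xi) / lam                      # map support to (0, 1)
        z = gamma + delta * np.log(u / (1 - u)) # logit transform, as in Johnson's SB
        return 1.0 / (1.0 + np.exp(-z))         # logistic CDF instead of normal

    def ll_sample(n, xi, lam, gamma, delta, rng=None):
        """Draw samples by inverting the CDF (inverse-transform sampling)."""
        rng = np.random.default_rng(rng)
        p = rng.uniform(size=n)
        z = np.log(p / (1 - p))                 # logistic quantile
        u = 1.0 / (1.0 + np.exp(-(z - gamma) / delta))
        return xi + lam * u

The closed-form quantile function is what makes the model analytically tractable: sampling and percentile calculations need no numerical root finding.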
Abstract:
A comprehensive solution of solidification/melting processes requires the simultaneous representation of free surface fluid flow, heat transfer, phase change, nonlinear solid mechanics and, possibly, electromagnetics together with their interactions, in what is now known as multiphysics simulation. Such simulations are computationally intensive and the implementation of solution strategies for multiphysics calculations must embed their effective parallelization. For some years, together with our collaborators, we have been involved in the development of numerical software tools for multiphysics modeling on parallel cluster systems. This research has involved a combination of algorithmic procedures, parallel strategies and tools, plus the design of a computational modeling software environment and its deployment in a range of real world applications. One output from this research is the three-dimensional parallel multiphysics code, PHYSICA. In this paper we report on an assessment of its parallel scalability on a range of increasingly complex models drawn from actual industrial problems, on three contemporary parallel cluster systems.
Abstract:
The pseudo-spectral solution method offers a flexible and fast alternative to the more usual finite element/volume/difference methods, particularly when the long-time transient behaviour of a system is of interest. Since the exact solution is obtained at the grid collocation points, superior accuracy can be achieved at modest grid resolution. Furthermore, the grid can be freely adapted in time and in space to particular flow conditions or geometric variations. This is especially advantageous where strongly coupled, time-dependent, multi-physics solutions are investigated. Examples include metallurgical applications involving the interaction of electromagnetic fields and conducting liquids with a free surface. The electromagnetic field then determines the instantaneous liquid volume shape, and the liquid shape in turn affects the electromagnetic field. In AC applications a thin "skin effect" region results on the free surface that dominates grid requirements. Infinitesimally thin boundary cells can be introduced using Chebyshev polynomial expansions without detriment to the numerical accuracy. This paper presents a general methodology of the pseudo-spectral approach and outlines the solution procedures used. Several instructive example applications are given: the aluminium electrolysis MHD problem, induction melting and stirring, and the dynamics of magnetically levitated droplets in AC and DC fields. Comparisons to available analytical solutions and to experimental measurements will be discussed.
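The clustering of collocation points near the domain boundaries, which is what makes thin skin-effect layers affordable to resolve, is visible in the standard Chebyshev collocation construction. The sketch below builds the usual Chebyshev-Gauss-Lobatto points and first-derivative matrix (the classical Trefethen recipe); it is illustrative and not the solver described in the paper.

    # Standard Chebyshev collocation setup: points cluster near the boundaries.
    import numpy as np

    def cheb(n):
        """Chebyshev-Gauss-Lobatto points and first-derivative matrix on [-1, 1]."""
        if n == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(n + 1) / n)          # collocation points
        c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
        X = np.tile(x, (n + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
        D -= np.diag(D.sum(axis=1))                       # diagonal via row-sum trick
        return D, x

    D, x = cheb(16)
    u = np.exp(x)                                         # du/dx = u, so D @ u ≈ u
    print(np.max(np.abs(D @ u - u)))                      # spectral accuracy on 17 points

Because the points are cosines of equally spaced angles, the outermost cells become extremely thin as the resolution grows, which is precisely the behaviour exploited for the skin-effect region.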
Abstract:
The evacuation of the World Trade Center (WTC) complex represents one of the largest full-scale evacuations of people in modern times.
Abstract:
Parallel processing techniques have been used in the past to provide high-performance computing resources for activities such as fire-field modelling. This has traditionally been achieved using specialized hardware and software, the expense of which would be difficult to justify for many fire engineering practices. In this article we demonstrate how typical office-based PCs attached to a Local Area Network have the potential to offer the benefits of parallel processing with minimal costs associated with the purchase of additional hardware or software. It was found that good speedups could be achieved on homogeneous networks of PCs: for example, a problem composed of ~100,000 cells would run 9.3 times faster on a network of twelve 800 MHz PCs than on a single 800 MHz PC. It was also found that a network of eight 3.2 GHz Pentium 4 PCs would run 7.04 times faster than a single 3.2 GHz Pentium computer. A dynamic load balancing scheme was also devised to allow the effective use of the software on heterogeneous PC networks. This scheme also ensured that the impact of the parallel processing task on other computer users on the network was minimized.
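A simple way to picture a dynamic load balancing scheme of this kind (the sketch below is illustrative, not the scheme actually devised in the article) is to repartition the computational cells in proportion to each PC's measured speed, so that faster or less busy machines receive proportionally more cells.

    # Illustrative speed-weighted partition of the computational cells.
    def weighted_partition(n_cells, speeds):
        """Split n_cells contiguous cells among hosts in proportion to their speed."""
        total = sum(speeds)
        shares = [int(n_cells * s / total) for s in speeds]
        shares[-1] += n_cells - sum(shares)        # absorb rounding remainder
        bounds, start = [], 0
        for size in shares:
            bounds.append((start, start + size))   # half-open [start, end) cell range
            start += size
        return bounds

    # e.g. 100,000 cells over three PCs whose measured speeds differ by 2x and 4x
    print(weighted_partition(100_000, [1.0, 2.0, 4.0]))

Re-measuring the per-host speeds periodically and repartitioning accordingly is what keeps the load balanced when other users compete for the same machines.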
Abstract:
This paper reports on research work undertaken for the European Commission-funded study GMA2/2000/32039 Very Large Transport Aircraft (VLTA) Emergency Requirements Research Evacuation Study (VERRES). A particular focus of VERRES was on evacuation issues, and several large-scale evacuation trials were conducted in the Cranfield simulator. This paper addresses part of the research undertaken for Work Package 3 by the University of Greenwich, with a focus on the analysis of data concerning passenger use of stairs and passenger exit hesitation times for the upper-deck slides.
Abstract:
In this paper we briefly describe new modelling capabilities within the airEXODUS evacuation model. These new capabilities involve the explicit ability to simulate the interaction of crew with passengers in managing evacuation situations.
Abstract:
This paper describes the AASK database. The AASK database is unique in that it is a record of human behaviour during survivable aviation accidents. It is compiled from interview data collected by agencies such as the NTSB and the AAIB. The database can be found at http://fseg.gre.ac.uk
Abstract:
The scalability of a computer system is its response to growth; it also depends on the system's hardware, its operating system and the applications it is running. Most distributed-systems technology today still relies on bus-based shared memory, which does not scale well, whereas systems based on a grid or hypercube scheme require significantly fewer connections than a full interconnection, whose number of links grows quadratically. The rapid convergence of mobile communication, digital broadcasting and network infrastructures calls for rich multimedia content that is adaptive and responsive to the needs of individuals, businesses and public organisations. This paper discusses the emergence of mobile multimedia systems and provides an overview of the issues regarding the design and delivery of multimedia content to mobile devices.
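To illustrate the growth rates mentioned above: a full interconnection of n nodes needs n(n-1)/2 links, which grows quadratically, whereas a hypercube of n = 2^d nodes needs only (n/2) log2(n) links. A small comparison in Python:

    # Interconnect growth: full interconnection vs hypercube link counts.
    import math

    for n in (8, 64, 1024):
        full = n * (n - 1) // 2                       # quadratic growth
        hypercube = (n // 2) * int(math.log2(n))      # near-linear growth
        print(f"n={n:5d}  full={full:8d}  hypercube={hypercube:6d}")

At 1024 nodes the full interconnection needs over half a million links while the hypercube needs about five thousand, which is why such topologies scale so much better.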
Abstract:
The UK government started the UK eUniversities project in order to create a virtual campus for online education provisions, competing in a global market. The UKeU (WWW.ukeu.com) claims to "have created a new approach to e-learning" which "opens up a range of exciting opportunities for students, business and industry worldwide" to obtain both postgraduate and undergraduate qualifications. Although there have been many promises about the e-learning revolution using state-of-the-art multimedia technology, closer scrutiny of what is being delivered reveals that many of the e-learning models currently in use are little more than the old text-based computer-aided learning running on a global network. As part of the UKeU project, a consortium of universities has been involved in developing a two-year foundation degree from 2004. We look at the approach taken by the consortium in developing global e-learning provisions and the problems and pitfalls that lie ahead.
Abstract:
Participation in European Union research projects now requires the setting up of a project website. This paper discusses the creation of the "Matrix" to facilitate the information visualisation of a project: experiments, data, results, etc., i.e. information far beyond the promotional details of the website. The paper describes the theory behind such an endeavour before proceeding to discuss the practical realities for this case-study project. Finally, we consider the lessons that can be learnt from this real-world application.
Abstract:
The notion of time plays a vital and ubiquitous role as a common universal reference. In knowledge-based systems, temporal information is usually represented as a collection of statements together with the corresponding temporal reference. This paper introduces a visualized consistency checker for temporal references. It allows the expression of both absolute and relative temporal knowledge, and provides a visual representation of temporal references in terms of directed and partially weighted graphs. Based on the temporal reference of a given scenario, the visualized checker can deliver a verdict to the user as to whether the scenario is temporally consistent or not, and provide the corresponding analysis and diagnosis.
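One common way to implement such a consistency check, offered here only as an illustrative assumption since the abstract does not spell out the paper's algorithm, is to treat each temporal statement as a weighted edge asserting time(v) - time(u) <= w and declare the scenario inconsistent exactly when the directed graph contains a negative-weight cycle, which Bellman-Ford relaxation detects.

    # Illustrative sketch: temporal consistency as negative-cycle detection.
    def temporally_consistent(nodes, edges):
        """edges: list of (u, v, w) meaning time(v) - time(u) <= w."""
        dist = {n: 0.0 for n in nodes}             # virtual source at distance 0
        for _ in range(len(nodes) - 1):
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
        # one more relaxation pass: any further improvement means a negative cycle
        return all(dist[u] + w >= dist[v] for u, v, w in edges)

    # "B at most 5 after A", "C at most 3 after B", "A at most 10 before C is violated":
    # the last constraint says time(A) - time(C) <= -10, i.e. C is at least 10 after A.
    edges = [("A", "B", 5), ("B", "C", 3), ("C", "A", -10)]   # jointly inconsistent
    print(temporally_consistent(["A", "B", "C"], edges))       # -> False

The same graph that drives the check is also a natural object to visualize, which matches the directed, partially weighted graphs described in the abstract.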