198 results for Parallel computing
Abstract:
Background: Recent advances in immunology have highlighted the importance of local properties in the overall progression of HIV infection. In particular, the gastrointestinal tract is seen as a key area during early infection, and the massive cell depletion associated with it may influence subsequent disease progression. This motivated the development of a large-scale agent-based model. Results: Lymph nodes are explicitly implemented, and considerations on parallel computing permit large simulations and the inclusion of local features. The results obtained show that including the GI tract in the model leads to accelerated disease progression, during both the early stages and the long-term evolution, compared to a theoretical, uniform model. Conclusions: These results confirm the potential of treatment policies currently under investigation that focus on this region. They also highlight the potential of this modelling framework, which incorporates both agent-based and network-based components, in the context of complex systems where scaling up alone does not yield models providing additional insights.
Abstract:
Biomedical systems involve a large number of entities and intricate interactions between them. Their direct analysis is therefore difficult, and it is often necessary to rely on computational models. These models require significant resources, calling for parallel computing solutions, which are particularly well suited given the inherently parallel nature of biomedical systems. Model hybridisation also permits the integration and simultaneous study of multiple aspects and scales of these systems, thus providing an efficient platform for multidisciplinary research.
Abstract:
Understanding the dynamics of disease spread is essential in contexts such as estimating the load on medical services, as well as risk assessment and intervention policies against large-scale epidemic outbreaks. However, most of the information is available only after the outbreak itself, and preemptive assessment is far from trivial. Here, we report on an agent-based model developed to investigate such epidemic events in a stylised urban environment. For most diseases, infection of a new individual may occur from casual contact in crowds as well as from repeated interactions with social partners such as work colleagues or family members. Our model therefore accounts for these two phenomena. Given the scale of the system, efficient parallel computing is required. In this presentation, we focus on aspects related to parallelisation for large network generation and massively multi-agent simulations.
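As an illustration of these two contact channels, here is a minimal sketch of one simulation step (hypothetical agent states, rates, and names; not the authors' parallel implementation):

```python
import random

# Minimal sketch of the two infection channels described above: a susceptible
# agent can be infected through casual contact with random members of the
# crowd, or through repeated interaction with fixed social partners.

def step(states, partners, n_casual=5, p_casual=0.01, p_partner=0.05, p_recover=0.1):
    """One SIR time step. states: {agent: 'S'|'I'|'R'}; partners: {agent: [agents]}."""
    agents = list(states)
    new_states = dict(states)
    for a in agents:
        if states[a] == 'S':
            # Channel 1: casual contacts drawn uniformly from the population.
            crowd = random.sample(agents, k=min(n_casual, len(agents)))
            # Channel 2: the agent's persistent social partners (work, family).
            risk = sum(p_casual for c in crowd if states[c] == 'I') + \
                   sum(p_partner for c in partners.get(a, []) if states[c] == 'I')
            if random.random() < min(risk, 1.0):
                new_states[a] = 'I'
        elif states[a] == 'I' and random.random() < p_recover:
            new_states[a] = 'R'
    return new_states
```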
Abstract:
Uninhabited aerial vehicles (UAVs) are a cutting-edge technology at the forefront of aviation/aerospace research and development worldwide. Many consider their current military and defence applications as just a token of their enormous potential. Unlocking and fully exploiting this potential will see UAVs in a multitude of civilian applications and routinely operating alongside piloted aircraft. The key to realising the full potential of UAVs lies in addressing a host of regulatory, public relations, and technological challenges never encountered before. Aircraft collision avoidance is considered to be one of the most important issues to be addressed, given its safety critical nature. The collision avoidance problem can be roughly organised into three areas: 1) Sense; 2) Detect; and 3) Avoid. Sensing is concerned with obtaining accurate and reliable information about other aircraft in the air; detection involves identifying potential collision threats based on available information; avoidance deals with the formulation and execution of appropriate manoeuvres to maintain safe separation. This thesis tackles the detection aspect of collision avoidance, via the development of a target detection algorithm that is capable of real-time operation onboard a UAV platform. One of the key challenges of the detection problem is the need to provide early warning. This translates to detecting potential threats whilst they are still far away, when their presence is likely to be obscured and hidden by noise. Another important consideration is the choice of sensors to capture target information, which has implications for the design and practical implementation of the detection algorithm. The main contributions of the thesis are: 1) the proposal of a dim target detection algorithm combining image morphology and hidden Markov model (HMM) filtering approaches; 2) the novel use of relative entropy rate (RER) concepts for HMM filter design; 3) the characterisation of algorithm detection performance based on simulated data as well as real in-flight target image data; and 4) the demonstration of the proposed algorithm's capacity for real-time target detection. We also consider the extension of HMM filtering techniques and the application of RER concepts to target heading angle estimation. In this thesis we propose a computer-vision based detection solution, due to the commercial-off-the-shelf (COTS) availability of camera hardware and the hardware's relatively low cost, power, and size requirements. The proposed target detection algorithm adopts a two-stage processing paradigm that begins with an image enhancement pre-processing stage, followed by a track-before-detect (TBD) temporal processing stage that has been shown to be effective in dim target detection. We compare the performance of two candidate morphological filters for the image pre-processing stage, and propose a multiple hidden Markov model (MHMM) filter for the TBD temporal processing stage. The role of the morphological pre-processing stage is to exploit the spatial features of potential collision threats, while the MHMM filter serves to exploit their temporal characteristics or dynamics. The problem of optimising our proposed MHMM filter has been examined in detail. Our investigation has produced a novel design process for the MHMM filter that exploits information theory and entropy related concepts. The filter design process is posed as a minimax optimisation problem based on a joint RER cost criterion.
We prove that this joint RER cost criterion provides a bound on the conditional mean estimate (CME) performance of our MHMM filter, which in turn establishes a strong theoretical basis connecting our filter design process to filter performance. Through this connection we can intelligently compare and optimise candidate filter models at the design stage, rather than having to resort to time-consuming Monte Carlo simulations to gauge the relative performance of candidate designs. Moreover, the underlying entropy concepts are not constrained to any particular model type. This suggests that the RER concepts established here may be generalised to provide a useful design criterion for multiple model filtering approaches outside the class of HMM filters. In this thesis we also evaluate the performance of our proposed target detection algorithm under realistic operating conditions, and give consideration to the practical deployment of the detection algorithm onboard a UAV platform. Two fixed-wing UAVs were engaged to recreate various collision-course scenarios to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. Based on this collected data, our proposed detection approach was able to detect targets out to distances ranging from about 400 m to 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning ahead of impact that approaches the 12.5 second response time recommended for human pilots. Furthermore, readily available graphics processing unit (GPU) based hardware is exploited for its parallel computing capabilities to demonstrate the practical feasibility of the proposed target detection algorithm. A prototype hardware-in-the-loop system has been found to be capable of achieving data processing rates sufficient for real-time operation. There is also scope for further improvement in performance through code optimisations. Overall, our proposed image-based target detection algorithm offers UAVs a cost-effective real-time target detection capability that is a step forward in addressing the collision avoidance issue that is currently one of the most significant obstacles preventing widespread civilian applications of uninhabited aircraft. We also highlight that the algorithm development process has led to the discovery of a powerful multiple HMM filtering approach and a novel RER-based multiple filter design process. The utility of our multiple HMM filtering approach and RER concepts, however, extends beyond the target detection problem. This is demonstrated by our application of HMM filters and RER concepts to a heading angle estimation problem.
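For intuition about the two-stage paradigm, here is a minimal sketch of a grey-scale top-hat enhancement of the kind used for morphological pre-processing (an illustrative, generic filter; the thesis compares two specific candidate filters, and this is not necessarily either of them):

```python
import numpy as np
from scipy import ndimage

# Illustrative top-hat enhancement: grey-scale opening suppresses small
# bright features, so the difference retains exactly the dim, point-like
# target candidates that the temporal (HMM) stage then tracks before detect.
def tophat_enhance(frame: np.ndarray, size: int = 5) -> np.ndarray:
    """frame: 2-D float image; returns the residual of a grey opening."""
    return frame - ndimage.grey_opening(frame, size=(size, size))
```

The advance-warning figures follow from simple kinematics: with detection range d and closing speed v, the warning time is t = d/v; for instance, 900 m at an assumed closing speed of around 72 m/s gives roughly the 12.5 s recommended for human pilots (the closing speed here is an illustrative assumption, not a figure reported in the thesis).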
Abstract:
With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems to allow programmers to more easily reason about data dependencies and inherent parallelism in general purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is both to facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework which helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data, and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# version 3.0 language to produce a language extension called Zal. I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system, and have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as proof sketches for the correctness of the proposed sufficient conditions for parallelization.
Abstract:
The use of adaptive wing/aerofoil designs is being considered, as they are promising techniques in aeronautics/aerospace, since they can reduce aircraft emissions and improve the aerodynamic performance of manned or unmanned aircraft. This paper investigates the robust design and optimization of one type of adaptive technique: an active flow control bump at transonic flow conditions on a natural laminar flow aerofoil. The concept of using a shock control bump is to control supersonic flow on the suction/pressure side of the natural laminar flow aerofoil, which leads to delaying shock occurrence (weakening its strength) or boundary-layer separation. Such an active flow control technique reduces total drag at transonic speeds due to the reduction of wave drag. The location of boundary-layer transition can influence the position and structure of the supersonic shock on the suction/pressure side of the aerofoil. The boundary-layer transition position is considered an uncertain design parameter in aerodynamic design due to many factors, such as surface contamination or surface erosion. This paper studies shock-control-bump shape design optimization using robust evolutionary algorithms with uncertainty in boundary-layer transition locations. The optimization method is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing, and asynchronous evaluation. Two test cases are conducted: the first assumes the boundary-layer transition position is at 45% of chord from the leading edge, and the second considers robust design optimization for the shock control bump under variability of boundary-layer transition positions. The numerical results show that the optimization method, coupled with uncertainty design techniques, produces Pareto optimal shock-control-bump shapes that have low sensitivity and high aerodynamic performance while achieving significant total drag reduction.
Abstract:
This study investigates the application of two advanced optimization methods to the active flow control (AFC) device shape design problem and compares their optimization efficiency in terms of computational cost and design quality. The first optimization method uses a hierarchical asynchronous parallel multi-objective evolutionary algorithm, and the second uses an evolutionary algorithm hybridized with Nash-game strategies (Hybrid-Game). Both optimization methods are based on a canonical evolution strategy and incorporate the concepts of parallel computing and asynchronous evaluation. One type of AFC device, named the shock control bump (SCB), is considered and applied to a natural laminar flow (NLF) aerofoil. The concept of the SCB is to decelerate supersonic flow on the suction/pressure side of a transonic aerofoil, which delays shock occurrence. Such an active flow control technique reduces total drag at transonic speeds, which is of special interest for commercial aircraft. Numerical results show that the Hybrid-Game helps an EA accelerate the optimization process. From a practical point of view, applying an SCB on the suction and pressure sides significantly reduces transonic total drag and improves the lift-to-drag (L/D) value compared to the baseline design.
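The Nash-game hybridisation can be pictured with a schematic best-response loop (a sketch with assumed interfaces, not the authors' implementation):

```python
# Schematic Nash-game loop (hypothetical callables): each "player" owns a
# subset of the design variables and optimises its own objective while the
# other player's variables are held fixed; the loop iterates toward a Nash
# equilibrium whose solution can seed and accelerate the global EA.
def nash_game(x1, x2, best_response1, best_response2, n_rounds=10):
    for _ in range(n_rounds):
        x1 = best_response1(x1, x2)  # player 1 optimises its share, x2 fixed
        x2 = best_response2(x1, x2)  # player 2 optimises its share, x1 fixed
    return x1, x2
```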
Abstract:
There are many applications in aeronautical/aerospace engineering where some values of the design parameters or states cannot be provided or determined accurately. These values can be related to the geometry (wingspan, length, angles) and/or to operational flight conditions that vary due to the presence of uncertain parameters (Mach, angle of attack, air density and temperature, etc.). These uncertain design parameters cannot be ignored in engineering design and must be taken into account in the optimisation task to produce more realistic and reliable solutions. In this paper, a robust/uncertainty design method with statistical constraints is introduced to produce a set of reliable solutions which have high performance and low sensitivity. The robust design concept, coupled with Multi-Objective Evolutionary Algorithms (MOEAs), is defined by applying two statistical sampling formulas, the mean and the variance/standard deviation, to the optimisation fitness/objective functions. The methodology is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. It is implemented for two practical Unmanned Aerial System (UAS) design problems: the first case considers robust multi-objective (single-disciplinary: aerodynamics) design optimisation, and the second considers a robust multidisciplinary (aero-structures) design optimisation. Numerical results show that the solutions obtained by the robust design method with statistical constraints have more reliable performance and sensitivity in both aerodynamics and structures when compared to the baseline design.
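In essence, the two statistical sampling formulas score each candidate design by the mean and the spread of its objective over sampled uncertain conditions; a minimal sketch of such a robust fitness evaluation (hypothetical names, not the authors' framework):

```python
import statistics

# Robust-design fitness sketch: a candidate design is evaluated at several
# sampled values of the uncertain parameters (e.g. Mach, angle of attack),
# and the MOEA minimises both the mean objective and its spread.
def robust_fitness(design, objective, uncertainty_samples):
    """Return (mean, standard deviation) of `objective` over the samples.

    objective: callable(design, uncertain_params) -> scalar (e.g. drag)
    uncertainty_samples: at least two sampled uncertain-parameter sets
    """
    values = [objective(design, u) for u in uncertainty_samples]
    return statistics.mean(values), statistics.stdev(values)
```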
Abstract:
The use of adaptive wing/aerofoil designs is being considered as a promising technique in aeronautics/aerospace, since they can reduce aircraft emissions and improve the aerodynamic performance of manned or unmanned aircraft. The paper investigates the robust design and optimisation of one type of adaptive technique: an Active Flow Control (AFC) bump at transonic flow conditions on a Natural Laminar Flow (NLF) aerofoil designed to increase aerodynamic efficiency (especially a high lift-to-drag ratio). The concept of using a Shock Control Bump (SCB) is to control supersonic flow on the suction/pressure side of the NLF aerofoil RAE 5243, which leads to delaying shock occurrence or weakening its strength. Such an AFC technique reduces total drag at transonic speeds due to the reduction of wave drag. The location of Boundary Layer Transition (BLT) can influence the position of the supersonic shock occurrence. The BLT position is an uncertainty in aerodynamic design due to many factors, such as surface contamination or surface erosion. The paper studies SCB shape design optimisation using robust Evolutionary Algorithms (EAs) with uncertainty in BLT positions. The optimisation method is based on a canonical evolution strategy and incorporates the concepts of hierarchical topology, parallel computing and asynchronous evaluation. Two test cases are conducted: the first assumes the BLT is at 45% of chord from the leading edge, and the second considers robust design optimisation for the SCB under variability of BLT positions and lift coefficient. Numerical results show that the optimisation method coupled with uncertainty design techniques produces Pareto optimal SCB shapes which have low sensitivity and high aerodynamic performance while achieving significant total drag reduction.
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results, however, often require orders of magnitude more calculation time to attain high precision, reducing their utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high performance computing environments and of simpler alternative, yet equivalent, representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n, with n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing for up to 3 orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to those used in computer aided design, the above-mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling motion augmentation for time-dependent dose calculation, for example. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like the ones made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow for manipulations to be performed on otherwise static and rigid geometry.
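The 1/n scaling mentioned above reflects the embarrassingly parallel nature of Monte Carlo transport: independent histories can be divided among workers and their partial scores combined. A toy sketch of this pattern (illustrative Python, not GEANT4 code):

```python
from multiprocessing import Pool
import random

# Split N independent histories evenly across n workers; ideal wall-clock
# time then falls as 1/n, since the partial scores are simply summed.
def run_histories(n_histories, seed):
    rng = random.Random(seed)
    # Stand-in for transporting n_histories particles and scoring dose.
    return sum(rng.random() for _ in range(n_histories))

def parallel_simulation(total_histories, n_workers):
    per_worker = total_histories // n_workers
    with Pool(n_workers) as pool:
        partial = pool.starmap(run_histories,
                               [(per_worker, seed) for seed in range(n_workers)])
    return sum(partial)

if __name__ == "__main__":
    print(parallel_simulation(1_000_000, 4))
```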
Abstract:
In this paper, shape design optimisation using morphing aerofoil/wing techniques, namely leading and/or trailing edge deformation of a natural laminar flow RAE 5243 aerofoil, is investigated to reduce transonic drag, without taking into account the piezo actuator mechanism. Two applications using a Multi-Objective Genetic Algorithm (MOGA) coupled with the Euler and boundary-layer analyser MSES are considered: the first example minimises the total drag with a lift constraint by optimising both the trailing edge actuator position and the trailing edge deformation angle at a constant transonic Mach number (M∞ = 0.75) and boundary layer transition position (xtr = 45%c). The second example consists of finding reliable designs that produce lower mean total drag (μCd) and drag sensitivity (σCd) at different uncertain flight conditions based on statistical information. Numerical results illustrate how the solution quality in terms of mean drag and its sensitivity can be improved using the MOGA software coupled with a robust design approach taking account of uncertainties (lift and boundary-layer transition positions), and also how transonic flow over an aerofoil/wing can be controlled to best advantage using morphing techniques.
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables. Thus pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely. This can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. Here the likelihood is estimated independently on the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the multiple CPUs. The estimate remains unbiased, but the variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this idea over the standard approach is demonstrated on simulated data from a stochastic volatility model.
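The averaging idea is simple to sketch (the likelihood estimator below is a dummy stand-in, not the paper's model): each CPU returns an independent unbiased estimate, and the average remains unbiased while its variance shrinks by a factor equal to the number of CPUs:

```python
from multiprocessing import Pool
import numpy as np

def estimate_likelihood(args):
    """Stand-in for an unbiased importance-sampling estimate of the
    likelihood p(y | theta) with the latent variables integrated out."""
    theta, seed = args
    rng = np.random.default_rng(seed)
    draws = rng.normal(theta, 1.0, size=1000)   # dummy latent-variable draws
    return np.exp(-0.5 * draws ** 2).mean()     # dummy unbiased estimate

def pooled_likelihood(theta, n_cpus=4):
    # Average of independent unbiased estimates: still unbiased, with
    # variance reduced by a factor of n_cpus.
    with Pool(n_cpus) as pool:
        estimates = pool.map(estimate_likelihood,
                             [(theta, seed) for seed in range(n_cpus)])
    return float(np.mean(estimates))
```

Within a pseudo-marginal MCMC chain, an estimate pooled in this way would simply replace the single-CPU estimate in the acceptance ratio.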
Abstract:
A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to unbiasedly estimate the likelihood, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimates are proposed: one using quasi-Monte Carlo methods and one using the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
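The second estimator, the Laplace approximation with importance sampling, can be sketched as follows (generic names; an illustration of the idea rather than the paper's implementation). The random effects are integrated out by importance sampling from a Gaussian centred at the Laplace mode:

```python
import numpy as np
from scipy.stats import multivariate_normal

def laplace_is_likelihood(log_joint, b_mode, b_cov, n_samples, rng):
    """Unbiased importance-sampling estimate of the observed-data likelihood,
    i.e. the integral of exp(log_joint(b)) db over the random effects b,
    using a Gaussian proposal N(b_mode, b_cov) from a Laplace approximation.

    log_joint: callable(b) -> log p(y | b, theta) + log p(b | theta)
    b_mode, b_cov: mode and curvature-based covariance of the random effects
    """
    proposal = multivariate_normal(mean=b_mode, cov=b_cov)
    b = proposal.rvs(size=n_samples, random_state=rng).reshape(n_samples, -1)
    # Importance weights: joint density over proposal density, in log space.
    log_w = np.array([log_joint(bi) for bi in b]) - proposal.logpdf(b)
    return float(np.exp(log_w).mean())
```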