942 results for computational model


Relevance: 30.00%

Abstract:

Mixture models implemented via the expectation-maximization (EM) algorithm are increasingly used in a wide range of pattern recognition problems such as image segmentation. However, the EM algorithm requires considerable computational time when applied to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be sped up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
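
The cost of the E-step is what the kd-tree structure attacks: every voxel must otherwise be weighted against every mixture component. For reference, below is a minimal sketch of plain EM for a one-dimensional Gaussian mixture in Python (illustrative only; the paper's sparse incremental updates and multiresolution kd-tree E-step are not reproduced here):

    import numpy as np

    def em_gmm(x, k, n_iter=100, seed=0):
        """Baseline EM for a 1-D Gaussian mixture (e.g. voxel intensities).
        Each iteration costs O(n * k) in the E-step; the kd-tree variant
        prunes this by treating groups of nearby data points together."""
        rng = np.random.default_rng(seed)
        n = x.size
        mu = rng.choice(x, k)       # initial means from random data points
        var = np.full(k, x.var())   # shared initial variance
        w = np.full(k, 1.0 / k)     # equal initial mixing weights
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component per point
            dens = (w / np.sqrt(2 * np.pi * var)
                    * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: closed-form weighted updates of the mixture parameters
            nk = resp.sum(axis=0)
            w = nk / n
            mu = resp.T @ x / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return w, mu, var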

Relevance: 30.00%

Abstract:

The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that, in order to take the next step towards more realistic experiments, it will be necessary to use models containing a significantly larger number of particles than current models, and thus a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
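
The quoted parallel efficiency follows the standard definition (stated here for completeness; the notation is ours, not the paper's):

    S(p) = \frac{T_1}{T_p}, \qquad
    E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p}

where T_1 is the runtime on a single processor and T_p the runtime on p processors; an efficiency E(p) of about 0.8 means the p-processor run delivers roughly 80% of ideal linear speedup.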

Relevance: 30.00%

Abstract:

Simplicity in design and minimal floor space requirements render the hydrocyclone the preferred classifier in mineral processing plants. Empirical models have been developed for design and process optimisation, but, due to the complexity of the flow behaviour in the hydrocyclone, these do not provide information on the internal separation mechanisms. To study the interaction of design variables, the flow behaviour needs to be considered, especially when modelling the new three-product cyclone. Computational fluid dynamics (CFD) was used to model the three-product cyclone, in particular the influence of the dual vortex finder arrangement on flow behaviour. From experimental work performed on the UG2 platinum ore, significant differences in the classification performance of the three-product cyclone were noticed with variations in the inner vortex finder length. Because of this, simulations were performed for a range of inner vortex finder lengths. Simulations were also conducted on a conventional hydrocyclone of the same size to enable a direct comparison of the flow behaviour between the two cyclone designs. Significantly higher velocities were observed for the three-product cyclone with an inner vortex finder extending deep into the conical section of the cyclone. CFD studies revealed that the three-product cyclone exhibits a cylindrically shaped air-core similar to that of conventional hydrocyclones. A constant-diameter air-core was observed throughout the inner vortex finder length, while no air-core was present in the annulus. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward, as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication (for example, time-course experiments, by using time as a covariate) and to cross-sectional experiments, by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
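
Schematically, and in our notation rather than the authors', the model described here is a g-component normal mixture whose component means are regressions on the covariates (e.g. time), with the random-effects and error terms absorbed into structured component covariance matrices:

    f(\mathbf{y}_j) = \sum_{h=1}^{g} \pi_h \,
        \phi(\mathbf{y}_j;\, X \boldsymbol{\beta}_h,\, \Sigma_h)

Because each component density stays multivariate normal, the E- and M-steps of the EM algorithm remain available in closed form, which is what makes the deterministic fit possible.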

Relevance: 30.00%

Abstract:

Lipoamino acids (LAAs) are promoieties able to enhance the amphiphilicity of drugs, facilitating their interaction with cell membranes. Experimental and computational studies were carried out on two series of lipophilic amide conjugates between a model drug (tranylcypromine, TCP) and LAAs or alkanoic acids containing a short, medium or long alkyl side chain (C-4 to C-16). The effects of these compounds were evaluated by monolayer surface tension analysis and differential scanning calorimetry, using dimyristoylphosphatidylcholine monolayers and liposomes as biomembrane models. The experimental results were related to independent calculations of the partition coefficient and blood-brain partitioning. The comparison of TCP-LAA conjugates with the related series of TCP alkanoyl amides confirmed that the ability to interact with the biomembrane models is not due to the mere increase of lipophilicity, but mainly to the amphipathic nature and the kind of LAA residue. (C) 2005 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

We provide here a detailed theoretical explanation of the floating molecule, or levitation, effect for molecules diffusing through nanopores, using the oscillator model theory (Phys. Rev. Lett. 2003, 91, 126102) recently developed in this laboratory. It is shown that, on reduction of pore size, the effect occurs because of a decrease in the frequency of wall collisions of the diffusing particles at a critical pore size. This effect is, however, absent at high temperatures, where the ratio of kinetic energy to the solid-fluid interaction strength is sufficiently large. It is shown that the transport diffusivities scale with this ratio. Scaling of transport diffusivities with respect to mass is also observed, even in the presence of interactions.
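
As an illustrative aside (by analogy with Knudsen-like kinetic arguments, not the paper's derivation), a wall-collision-dominated transport diffusivity in a pore of radius r_p scales as

    D \;\propto\; \bar{v}\, r_p \;=\; r_p \sqrt{\frac{8 k_B T}{\pi m}}

which is consistent with the mass scaling stated above; the controlling parameter in the oscillator model is the ratio k_B T / \varepsilon_{sf} of kinetic energy to solid-fluid interaction strength.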

Relevance: 30.00%

Abstract:

We explore several models for the ground-state proton chain transfer pathway between the green fluorescent protein chromophore and its surrounding protein matrix, with a view to elucidating mechanistic aspects of this process. We have computed quantum chemically the minimum energy pathways (MEPs) in the ground electronic state for one-, two- and three-proton models of the chain transfer. There are no stable intermediates for our models, indicating that the proton chain transfer is likely to be a single, concerted kinetic step. However, despite the concerted nature of the overall energy profile, a more detailed analysis of the MEPs reveals clear evidence of sequential movement of protons in the chain. The ground-state proton chain transfer does not appear to be driven by the movement of the phenolic proton off the chromophore onto the neutral water bridge. Rather, this proton is the last of the three protons in the chain to move. We find that the first proton movement is from the bridging Ser205 moiety to the accepting Glu222 group. This is followed by the second proton moving from the bridging water to Ser205; for our model, this is where the barrier occurs. The phenolic proton on the chromophore is hence the last in the chain to move, transferring to a bridging “water” that already has substantial negative charge.

Relevance: 30.00%

Abstract:

Computational fluid dynamics was used to search for the links between the observed pattern of attack seen in a bauxite refinery's heat exchanger headers and the hydrodynamics inside the header. The computational fluid dynamics results were validated by comparing them with flow parameters measured in a 1:5 scale model of the first pass header in the laboratory. Computational fluid dynamics simulations were used to establish hydrodynamic similarity between the 1:5 scale and full-scale models of the first pass header. It was found that the erosion-corrosion damage seen at the tubesheet of the first pass header was a consequence of increased levels of turbulence at the tubesheet caused by a rapidly turning flow. A prismatic flow correction device introduced in the past helped in rectifying the problem at the tubesheet but aggravated the erosion-corrosion problem at the first pass header shell. A number of alternative flow correction devices were tested using computational fluid dynamics. Axial ribbing in the first pass header and an inlet flow diffuser showed the best performance and were recommended for implementation. Computational fluid dynamics simulations also revealed a smooth, orderly, low-turbulence flow pattern in the second, third and fourth pass headers, as well as the exit headers, where no erosion-corrosion was seen in practice. This study has confirmed that near-wall turbulence intensity, which can be successfully predicted using computational fluid dynamics, is a good hydrodynamic predictor of erosion-corrosion damage in complex geometries. (c) 2006 Published by Elsevier Ltd.
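
Hydrodynamic similarity between the 1:5 scale laboratory model and the full-scale header rests on matching the governing dimensionless groups, chiefly the Reynolds number. A minimal sketch of that check (all numbers and names are illustrative, not values from the study):

    def reynolds(velocity, length, kinematic_viscosity):
        """Re = U * L / nu, the group governing dynamic similarity."""
        return velocity * length / kinematic_viscosity

    # A smaller model run at higher velocity can match the full-scale Re,
    # giving comparable turbulence structure near the tubesheet.
    re_model = reynolds(velocity=2.5, length=0.1, kinematic_viscosity=1.0e-6)
    re_full = reynolds(velocity=0.5, length=0.5, kinematic_viscosity=1.0e-6)
    assert abs(re_model - re_full) / re_full < 0.01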

Relevance: 30.00%

Abstract:

To foster ongoing international cooperation beyond ACES (APEC Cooperation for Earthquake Simulation) on the simulation of solid earth phenomena, agreement was reached to work towards the establishment of a frontier international research institute for simulating the solid earth: iSERVO, the International Solid Earth Research Virtual Observatory institute (http://www.iservo.edu.au). This paper outlines a key Australian contribution towards the iSERVO institute seed project: the construction of (1) a typical intraplate fault system model using practical fault system data of South Australia (i.e., the SA interacting fault model), which includes data management and editing, geometrical modelling and mesh generation; and (2) a finite-element based software tool, which builds on our long-term and ongoing effort to develop the R-minimum-strategy-based finite-element computational algorithm and software tool for modelling three-dimensional nonlinear frictional contact behavior between multiple deformable bodies with the arbitrarily shaped contact element strategy. A numerical simulation of the SA fault system is carried out using this software tool to demonstrate its capability and our efforts towards seeding the iSERVO Institute.

Relevance: 30.00%

Abstract:

Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains: how do the 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of the effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power-law time-to-failure functions to the cumulative energy release are obtained.
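
Accelerating energy release of this kind is typically quantified by fitting a power-law time-to-failure function to the cumulative energy release, commonly of the form E(t) = A + B(t_f - t)^m. A sketch of such a fit on synthetic data (the functional form and all names are assumptions for illustration, not the paper's exact procedure):

    import numpy as np
    from scipy.optimize import curve_fit

    def time_to_failure(t, a, b, tf, m):
        """Cumulative energy release E(t) = a + b * (tf - t)**m;
        accelerating release corresponds to b < 0 and 0 < m < 1."""
        dt = np.maximum(tf - t, 1e-9)  # guard: keep the base positive
        return a + b * dt ** m

    # synthetic stand-in for cumulative energy release from a model run
    t = np.linspace(0.0, 9.0, 50)
    e_cum = 10.0 - 2.0 * (10.0 - t) ** 0.3
    params, _ = curve_fit(time_to_failure, t, e_cum,
                          p0=(9.0, -1.5, 10.5, 0.4), maxfev=10000)
    a, b, tf, m = params  # tf estimates the time of the large failure event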

Relevance: 30.00%

Abstract:

The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem, given that a discontinuity must be accurately simulated by the numerical approach (i.e., the transition from static to dynamic frictional behaviour). This is achieved using a half-time-step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to a high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and improved understanding of earthquake precursory phenomena and dynamics. (C) 1999 Academic Press.
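
The underlying integrator is the standard velocity-Verlet scheme, which naturally splits the velocity update into two half steps; the refinement described above inserts a nonlinear solve for the static frictional forces at the half step. A sketch of the unmodified scheme for orientation (the frictional solve itself is not reproduced here):

    import numpy as np

    def velocity_verlet_step(x, v, a, force, mass, dt):
        """One velocity-Verlet step: half-kick, drift, recompute, half-kick.
        The lattice solid model would additionally solve a nonlinear system
        for static friction between touching particle pairs at the half step."""
        v_half = v + 0.5 * dt * a           # first half-step velocity update
        x_new = x + dt * v_half             # full-step position update
        a_new = force(x_new) / mass         # accelerations at new positions
        v_new = v_half + 0.5 * dt * a_new   # second half-step velocity update
        return x_new, v_new, a_new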

Relevance: 30.00%

Abstract:

This work presents a new approach to the problem of simultaneous localization and mapping (SLAM) inspired by computational models of the hippocampus of rodents. The rodent hippocampus has been extensively studied with respect to navigation tasks, and displays many of the properties of a desirable SLAM solution. RatSLAM is an implementation of a hippocampal model that can perform SLAM in real time on a real robot. It uses a competitive attractor network to integrate odometric information with landmark sensing to form a consistent representation of the environment. Experimental results show that RatSLAM can operate with ambiguous landmark information and recover from both minor and major path integration errors.
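
A competitive attractor network maintains a localized bump of activity that behaves like the robot's pose estimate: local excitation reinforces nearby cells, global inhibition suppresses the rest, and sensory cues inject energy that can shift or re-anchor the bump. A much-reduced one-dimensional sketch of that competition (all constants are illustrative; RatSLAM's actual network and update rules are more elaborate):

    import numpy as np

    def attractor_step(activity, kernel, inhibition=0.02):
        """One competitive update: local excitation via circular convolution,
        uniform global inhibition, clipping at zero, and renormalisation,
        so that a single activity bump wins the competition."""
        excited = np.real(np.fft.ifft(np.fft.fft(activity) * np.fft.fft(kernel)))
        activity = np.maximum(activity + excited - inhibition, 0.0)
        return activity / activity.sum()

    n = 60                                            # cells around the ring
    dist = np.minimum(np.arange(n), n - np.arange(n))
    kernel = 0.1 * np.exp(-0.5 * (dist / 2.0) ** 2)   # local excitatory weights
    pose = np.full(n, 1.0 / n)                        # start from uniform activity
    pose[10] += 0.05                                  # weak landmark cue at cell 10
    for _ in range(50):
        pose = attractor_step(pose, kernel)           # bump settles at the cue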