966 results for fast method
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU decomposition and is suitable for simulations of any size. It can also facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The approach is termed conditional simulation by successive residuals because, at each step, a small group of random variables is simulated with an LU decomposition of an updated conditional covariance matrix of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
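For reference, the classical LU simulation that this abstract generalizes draws a Gaussian realization as y = Lz, where C = LL^T is a covariance matrix and z is standard normal noise. A minimal numpy sketch on a 1-D grid with an assumed exponential covariance (grid size and correlation range are illustrative, not taken from the paper):

```python
import numpy as np

# Classical LU (Cholesky) simulation of a stationary Gaussian random field
# on a 1-D grid with an assumed exponential covariance. The paper's method
# replaces this single full decomposition with successive small-group steps.
n = 200
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 10.0)  # exponential covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))        # small jitter for stability
rng = np.random.default_rng(0)
z = rng.standard_normal(n)
y = L @ z  # one unconditional realization with covariance ~ C
```

The full decomposition costs O(n^3), which is exactly the size limitation the successive-residuals formulation is designed to avoid.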
Abstract:
Background: In vivo methods to evaluate the size and composition of atherosclerotic lesions in animal models of atherosclerosis would assist in the testing of antiatherosclerotic drugs. We have developed an MRI method of detecting atherosclerotic plaque in the major vessels at the base of the heart in low-density lipoprotein (LDL) receptor-knockout (LDLR-/-) mice on a high-fat diet. Methods and Results: Three-dimensional fast spin-echo magnetic resonance images were acquired at 7 T by use of cardiac and respiratory triggering, with approximately 140-μm isotropic resolution, over 30 minutes. Comparison of normal and fat-suppressed images from female LDLR-/- mice 1 week before and 8 and 12 weeks after the transfer to a high-fat diet allowed visualization and quantification of plaque development in the innominate artery in vivo. Plaque mean cross-sectional area was significantly greater at week 12 in the LDLR-/- mice (0.14±0.086 mm² [mean±SD]) than in wild-type control mice on a normal diet (0.017±0.031 mm², p
Abstract:
A detailed analysis procedure is described for evaluating rates of volumetric change in brain structures based on structural magnetic resonance (MR) images. In this procedure, a series of image processing tools is employed to address the problems encountered in measuring rates of change from structural MR images. These tools include an algorithm for intensity non-uniformity correction, a robust algorithm for three-dimensional image registration with sub-voxel precision, and an algorithm for brain tissue segmentation. A unique feature of the procedure is the use of a fractional volume model that has been developed to provide a quantitative measure of the partial volume effect. With this model, the fractional constituent tissue volumes are evaluated for voxels at the tissue boundary that manifest the partial volume effect, thus allowing tissue boundaries to be defined at a sub-voxel level and in an automated fashion. Validation studies are presented for key algorithms, including segmentation and registration. An overall assessment of the method is provided through the evaluation of the rates of brain atrophy in a group of normal elderly subjects, for whom the rate of brain atrophy due to normal aging is predictably small. An application of the method is given in Part II, where the rates of brain atrophy in various brain regions are studied in relation to normal aging and Alzheimer's disease. (C) 2002 Elsevier Science Inc. All rights reserved.
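The fractional volume idea can be illustrated with a simple two-tissue linear mixing model: a boundary voxel's intensity is treated as a mix of two pure-tissue means, and the tissue fraction is recovered by inverting the mix. The tissue means and voxel intensities below are invented for illustration; the paper's actual model is not reproduced here.

```python
import numpy as np

# Two-tissue partial-volume sketch: intensity I = f*mu_wm + (1-f)*mu_gm,
# so the white-matter fraction f is (I - mu_gm) / (mu_wm - mu_gm).
mu_gm, mu_wm = 80.0, 120.0                     # assumed pure-tissue mean intensities
voxels = np.array([80.0, 90.0, 110.0, 120.0])  # assumed boundary-voxel intensities
frac_wm = np.clip((voxels - mu_gm) / (mu_wm - mu_gm), 0.0, 1.0)
# frac_wm is each voxel's white-matter fraction; 1 - frac_wm is gray matter.
```

Summing fractions over boundary voxels is what lets such a model measure volume at sub-voxel resolution.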
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high-spatial-resolution analyses at a relatively low input frequency. In this article, a new technique is described that allows the finite difference time domain (FDTD) method to be applied efficiently over a very large frequency range, including low frequencies; this is not the case for conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, along with comparative analyses showing that correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent, and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
Abstract:
In this paper the diffusion and flow of carbon tetrachloride, benzene, and n-hexane through a commercial activated carbon are studied by a differential permeation method. The pressure range covered extends from very low pressure to a range where significant capillary condensation occurs. Helium, as a non-adsorbing gas, is used to determine the characteristics of the porous medium. For adsorbing gases and vapors, the motion of adsorbed molecules in small pores gives rise to a sharp increase in permeability at very low pressures. The interplay between decreasing permeability, due to the saturation of small pores with adsorbed molecules, and increasing permeability, due to viscous flow in larger pores with pressure, can lead to a minimum in the plot of total permeability versus pressure. This phenomenon is observed for n-hexane at 30 °C. At relative pressures of 0.1-0.8, where gaseous viscous flow dominates, the permeability is a linear function of pressure. Since activated carbon has a wide pore size distribution, the mobility mechanism of the adsorbed molecules differs from pore to pore. In very small pores, where adsorbate molecules fill the pore, the permeability decreases with an increase in pressure, while in intermediate pores the permeability increases with pressure due to the increasing build-up of layers of adsorbed molecules. For even larger pores, the transport is mostly due to diffusion and flow of free molecules, which gives rise to linear permeability with respect to pressure. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
A new thermodynamic approach has been developed in this paper to analyze adsorption in slitlike pores. The equilibrium is described by two thermodynamic conditions: the Helmholtz free energy must be minimal, and the grand potential functional at that minimum must be negative. This approach leads to local isotherms that describe adsorption in the form of a single layer or two layers near the pore walls. In narrow pores, local isotherms have one step, which can be either very sharp but continuous or discontinuous and bench-like for a definite range of pore widths; the latter reflects a so-called 0 → 1 monolayer transition. In relatively wide pores, local isotherms have two steps: the first corresponds to the appearance of two layers near the pore walls, while the second corresponds to the filling of the space between these layers. All features of the local isotherms are in agreement with results obtained from density functional theory and Monte Carlo simulations. The approach is used for determining pore size distributions of carbon materials. We illustrate this with benzene adsorption data on activated carbon at 20, 50, and 80 °C, argon adsorption on activated carbon Norit ROX at 87.3 K, and nitrogen adsorption on activated carbon Norit R1 at 77.3 K.
Abstract:
We describe for the first time the application of fast neutron mutagenesis to the genetic dissection of root nodulation in legumes. We demonstrate the utility of chromosomal deletion mutations through production of a soybean supernodulation mutant, FN37, that lacks the internal autoregulation of nodulation mechanism. After inoculation with the microsymbiont Bradyrhizobium japonicum, FN37 forms at least 10 times more nodules than the wild-type G. soja parent and has a phenotype identical to that of the chemically induced allelic mutants nts382 and nts1007 (NTS-1 locus). Reciprocal grafting of shoots and roots confirmed systemic shoot control of the FN37 nodulation phenotype. The RFLP/PCR marker pUTG132a and the AFLP marker UQC-IS1, which are tightly linked to NTS-1, allowed the isolation of BAC contigs delineating both ends of the deletion. The genetic/physical distance ratio in the NTS-1 region is 279 kb/cM. The deletion is estimated to be about 460 kb, based on the absence of markers and bacterial artificial chromosome (BAC) ends as well as genetic and physical mapping. Deletion break points were determined physically and placed within flanking BAC contigs.
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
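For context, the equilibrium behaviour that such approximations target is, for a chain small enough to enumerate, directly computable from the generator matrix Q by solving πQ = 0 with Σπ = 1. A hedged numpy sketch for an illustrative M/M/1/3 queue (the arrival and service rates are assumptions, not taken from the paper):

```python
import numpy as np

# Equilibrium distribution of a small continuous-time Markov chain:
# an M/M/1/3 queue with assumed arrival rate lam and service rate mu.
lam, mu, K = 1.0, 2.0, 3
Q = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        Q[i, i + 1] = lam   # arrival moves the queue from i to i+1
    if i > 0:
        Q[i, i - 1] = mu    # service completion moves it from i to i-1
    Q[i, i] = -Q[i].sum()   # diagonal makes each row sum to zero
# Solve pi Q = 0 subject to sum(pi) = 1 by appending the normalization row.
A = np.vstack([Q.T, np.ones(K + 1)])
b = np.zeros(K + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
```

It is precisely when the state space is too large for this direct solve that a replacement transition structure of the kind the paper proposes becomes attractive.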
Abstract:
A supersweet sweet corn hybrid, Pacific H5, was planted at weekly intervals (P-1 to P-5) in spring in south-eastern Queensland. All plantings were harvested at the same time, resulting in immature seed for the last planting (P-5). The seed was handled by three methods: manual harvest and processing (M-1), manual harvest and mechanical processing (M-2), and mechanical harvest and processing (M-3), and later graded into three sizes (small, medium and large). After eight months of storage at 12-14 °C, seed was maintained at 30 °C with bimonthly monitoring of germination for fourteen months, and seed damage was assessed at the end of this period. Seed quality was greatest for M-1 and was reduced by mechanical processing but not by mechanical harvesting. Large and medium seed had higher germination due to greater storage reserves, but also suffered more seed damage during mechanical processing. Immature seed from the premature harvest (P-5) had poor quality, especially when processed mechanically, reinforcing the need for harvested seed to be physiologically mature.
Abstract:
Trials conducted in Queensland, Australia between 1997 and 2002 demonstrated that fungicides belonging to the triazole group were the most effective in minimising the severity of infection of sorghum by Claviceps africana, the causal agent of sorghum ergot. Triadimenol (as Bayfidan 250EC) at 0.125 kg a.i./ha was the most effective fungicide. A combination of the systemic activated resistance compound acibenzolar-S-methyl (as Bion 50WG) at 0.05 kg a.i./ha and mancozeb (as Penncozeb 750DF) at 1.5 kg a.i./ha has the potential to provide protection against the pathogen, should triazole-resistant isolates be detected. Timing and method of fungicide application are important. Our results suggest that the triazole fungicides have no systemic activity in sorghum panicles, necessitating multiple applications from first anthesis to the end of flowering, whereas acibenzolar-S-methyl is most effective when applied 4 days before flowering. The flat fan nozzles tested in the trials provided higher levels of protection against C. africana and greater droplet deposition on panicles than the tested hollow cone nozzles. Application of triadimenol by a fixed-wing aircraft was as efficacious as application through a tractor-mounted boom spray.
Abstract:
A high-definition finite difference time domain (HD-FDTD) method is presented in this paper. This new method allows the FDTD method to be applied efficiently over a very large frequency range, including low frequencies, which are problematic for conventional FDTD methods. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and has been verified against analytical solutions within the frequency range 50 Hz to 1 GHz. As an example of the lower frequency range, the method has been applied to the problem of induced eddy currents in the human body resulting from the pulsed magnetic field gradients of an MRI system. The new method requires only approximately 0.3% of the source period to obtain an accurate solution. (C) 2003 Elsevier Science Inc. All rights reserved.
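The flavour of FDTD source injection can be sketched with a textbook 1-D vacuum leapfrog loop driven by a differentiated-Gaussian soft source. This is a generic illustration of the family of methods the two FDTD abstracts above belong to, not the HD-FDTD scheme itself; grid size, pulse timing, and the Courant number of 0.5 are all assumptions.

```python
import numpy as np

# Minimal 1-D vacuum FDTD loop (normalized units, Courant number 0.5)
# with a derivative-of-Gaussian pulse injected as a soft source.
nz, nt = 400, 300
ez = np.zeros(nz)  # electric field
hy = np.zeros(nz)  # magnetic field
t0, spread = 40.0, 12.0
for n in range(nt):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])   # update H from curl of E
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])    # update E from curl of H
    # soft source: derivative of a Gaussian, added at one grid point
    ez[100] += -((n - t0) / spread) * np.exp(-((n - t0) ** 2) / (2 * spread ** 2))
```

A derivative-type pulse has no DC content, which is one reason differentiated source waveforms are attractive when low-frequency accuracy matters.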
Abstract:
Blasting has been the most frequently used method for rock breakage since black powder was first used to fragment rocks more than two hundred years ago. This paper is an attempt to reassess standard design techniques used in blasting by providing an alternative approach to blast design, termed asymmetric blasting. Based on real-time rock recognition provided by measurement-while-drilling (MWD) techniques, asymmetric blasting is an approach that deals with rock properties as they occur in nature, i.e., randomly and asymmetrically spatially distributed. It is well accepted that the performance of basic mining operations, such as excavation and crushing, relies on a broken rock mass that has been pre-conditioned by the blast. By pre-conditioned we mean well fragmented, sufficiently loose and with an adequate muckpile profile. These muckpile characteristics affect loading and hauling [1]. The influence of blasting does not end there: under the Mine to Mill paradigm, blasting has significant leverage on downstream operations such as crushing and milling, and there is a body of evidence that blasting affects mineral liberation [2]. Thus, the importance of blasting has grown from simply fragmenting and loosening the rock mass to a broader role that encompasses many aspects of mining and affects the cost of the end product. The approach proposed in this paper facilitates this trend: to treat non-homogeneous media (the rock mass) in a non-homogeneous manner (an asymmetrical pattern) in order to achieve an optimal result (in terms of muckpile size distribution). It is postulated that there are no logical reasons, besides the current lack of means to infer rock mass properties in the blind zones of the bench and on-site precedents, for drilling a regular blast pattern over a rock mass that is inherently heterogeneous. Real and theoretical examples of the method are presented.
Abstract:
Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high-frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. These algorithms optimize high-frequency dissipation effectively, and despite recent work on algorithms that possess momentum-conserving/energy-dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
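The Newmark baseline these alpha-type algorithms modify can be sketched for a single undamped oscillator using the average-acceleration parameters (beta = 1/4, gamma = 1/2), which give no numerical dissipation. The mass, stiffness, and timestep below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Newmark average-acceleration integration of an undamped SDOF oscillator:
# m*a + k*u = 0, with beta = 1/4, gamma = 1/2 (unconditionally stable,
# energy-conserving for linear problems; no high-frequency dissipation).
m, k = 1.0, 4.0 * np.pi ** 2    # unit mass, 1 Hz natural frequency (assumed)
beta, gamma, dt, nt = 0.25, 0.5, 0.01, 100
u, v = 1.0, 0.0                 # initial displacement and velocity
a = -(k / m) * u                # initial acceleration from equilibrium
for _ in range(nt):
    # predictors built from the known state at the start of the step
    u_pred = u + dt * v + dt ** 2 * (0.5 - beta) * a
    v_pred = v + dt * (1 - gamma) * a
    # solve m*a_new + k*(u_pred + beta*dt^2*a_new) = 0 for the new acceleration
    a = -(k / m) * u_pred / (1 + beta * dt ** 2 * k / m)
    u = u_pred + beta * dt ** 2 * a
    v = v_pred + gamma * dt * a
```

Generalized alpha and HHT-alpha keep this predictor-corrector shape but evaluate inertia and internal forces at weighted points between steps, which is what introduces the controllable high-frequency dissipation the abstract discusses.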
Abstract:
An equivalent unit cell waveguide approach (WGA) to designing a multilayer microstrip reflectarray of variable-size patches is presented. In this approach, normal incidence of a plane wave on an infinite periodic array of radiating elements is considered to obtain reflection coefficient phase curves for the reflectarray's elements. It is shown that this problem is equivalent to the problem of reflection of the dominant TEM mode in a waveguide with patches interleaved by layers of dielectric. This waveguide problem is solved using a field-matching technique and a method of moments (MoM). Based on this solution, a fast computer algorithm is developed to generate reflection coefficient phase curves for a multilayer microstrip patch reflectarray. The validity of the developed algorithm is tested against alternative approaches and the Agilent High Frequency Structure Simulator (HFSS). Having confirmed the validity of the WGA, a small offset-feed two-layer microstrip patch array is designed and developed. This reflectarray is tested experimentally and shows good performance.