929 results for Matrix Transform Method


Relevance:

30.00%

Publisher:

Abstract:

Most magnetic resonance imaging (MRI) spatial encoding techniques employ low-frequency pulsed magnetic field gradients that undesirably induce multiexponentially decaying eddy currents in nearby conducting structures of the MRI system. The eddy currents degrade the switching performance of the gradient system, distort the MRI image, and introduce thermal loads in the cryostat vessel and superconducting MRI components. Heating of superconducting magnets due to induced eddy currents is particularly problematic as it offsets the superconducting operating point, which can cause a system quench. A numerical characterization of transient eddy current effects is vital for their compensation/control and further advancement of the MRI technology as a whole. However, transient eddy current calculations are particularly computationally intensive. In large-scale problems, such as gradient switching in MRI, conventional finite-element method (FEM)-based routines impose very large computational loads during generation/solving of the system equations. Therefore, other computational alternatives need to be explored. This paper outlines a three-dimensional finite-difference time-domain (FDTD) method in cylindrical coordinates for the modeling of low-frequency transient eddy currents in MRI, as an extension to the recently proposed time-harmonic scheme. The weakly coupled Maxwell's equations are adapted to the low-frequency regime by downscaling the speed of light constant, which permits the use of larger FDTD time steps while maintaining the validity of the Courant-Friedrichs-Lewy stability condition. The principal hypothesis of this work is that the modified FDTD routine can be employed to analyze pulsed-gradient-induced, transient eddy currents in superconducting MRI system models. The hypothesis is supported through a verification of the numerical scheme on a canonical problem and by analyzing undesired temporal eddy current effects such as the B0 shift caused by actively shielded symmetric/asymmetric transverse x-gradient head and unshielded z-gradient whole-body coils operating in proximity to a superconducting MRI magnet.
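
As a rough illustration of why downscaling the speed of light relaxes the time-step limit, the Python sketch below compares the maximum stable FDTD time step before and after scaling. It uses the Cartesian form of the Courant-Friedrichs-Lewy bound for simplicity rather than the paper's cylindrical grid, and the cell size and scaling factor are hypothetical.

```python
import math

# CFL-limited time step for a uniform 3-D FDTD grid (Cartesian form shown
# for simplicity; the cylindrical-grid limit has the same 1/c scaling).
def cfl_dt(dx, dy, dz, c):
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

c0 = 2.998e8          # physical speed of light, m/s
scale = 1.0e-4        # hypothetical downscaling factor for the low-frequency regime
dx = dy = dz = 5e-3   # 5 mm cells (illustrative)

dt_full = cfl_dt(dx, dy, dz, c0)
dt_scaled = cfl_dt(dx, dy, dz, scale * c0)
print(f"dt with physical c: {dt_full:.3e} s")
print(f"dt with scaled c  : {dt_scaled:.3e} s  ({1/scale:.0f}x larger)")
```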

Relevance:

30.00%

Publisher:

Abstract:

Determining the dimensionality of G provides an important perspective on the genetic basis of a multivariate suite of traits. Since the introduction of Fisher's geometric model, the number of genetically independent traits underlying a set of functionally related phenotypic traits has been recognized as an important factor influencing the response to selection. Here, we show how the effective dimensionality of G can be established, using a method for the determination of the dimensionality of the effect space from a multivariate general linear model introduced by Amemiya (1985). We compare this approach with two other available methods, factor-analytic modeling and bootstrapping, using a half-sib experiment that estimated G for eight cuticular hydrocarbons of Drosophila serrata. In our example, eight pheromone traits were shown to be adequately represented by only two underlying genetic dimensions by Amemiya's approach and factor-analytic modeling of the covariance structure at the sire level. In contrast, bootstrapping identified four dimensions with significant genetic variance. A simulation study indicated that while the performance of Amemiya's method was more sensitive to power constraints, it performed as well or better than factor-analytic modeling in correctly identifying the original genetic dimensions at moderate to high levels of heritability. The bootstrap approach consistently overestimated the number of dimensions in all cases and performed less well than Amemiya's method at subspace recovery.
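
The abstract does not give the details of Amemiya's procedure, so the sketch below only illustrates the simpler bootstrap idea it is compared against: counting eigenvalues of G whose lower bootstrap percentile exceeds zero. The helper name and the replicate matrices are fabricated for illustration; in practice the replicates would come from resampling sire families.

```python
import numpy as np

def significant_dimensions(G_boot, alpha=0.05):
    """Count eigenvalues of G whose lower bootstrap percentile exceeds zero.
    G_boot: array of shape (n_boot, p, p) of bootstrap replicates of G."""
    eigs = np.sort(np.linalg.eigvalsh(G_boot), axis=1)[:, ::-1]  # descending per replicate
    lower = np.percentile(eigs, 100 * alpha, axis=0)
    return int(np.sum(lower > 0.0))

# Fabricated replicates of a rank-2, eight-trait G, purely for illustration
rng = np.random.default_rng(0)
B = rng.normal(size=(8, 2))
G_true = B @ B.T
noise = 0.05 * rng.normal(size=(500, 8, 8))
G_boot = G_true + 0.5 * (noise + noise.transpose(0, 2, 1))   # symmetric perturbations
print(significant_dimensions(G_boot))
```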

Relevance:

30.00%

Publisher:

Abstract:

Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
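
For readers unfamiliar with the underlying quantities, the sketch below computes the standard sensitivity and elasticity matrices of a projection matrix and a naive cost-weighted ranking (sensitivity per unit cost). It is only a minimal illustration with a hypothetical two-stage matrix and made-up costs, not the analytic allocation expressions derived in the paper.

```python
import numpy as np

def dominant_pair(A):
    """Dominant eigenvalue and its (real) eigenvector."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    return vals[k].real, vecs[:, k].real

# Hypothetical two-stage projection matrix: row 0 holds fecundities,
# row 1 holds juvenile and adult survival (illustrative numbers only).
A = np.array([[0.0, 1.2],
              [0.4, 0.8]])
lam, w = dominant_pair(A)        # growth rate, stable stage structure
_, v = dominant_pair(A.T)        # reproductive values
S = np.outer(v, w) / (v @ w)     # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                # elasticities

# Naive cost weighting: hypothetical cost per unit increase of each vital rate.
# The largest S/C entry is where a marginal unit of budget buys the most lambda.
C = np.array([[np.inf, 2.0],     # no management acts on a_00 here
              [1.0,    5.0]])
print(lam)
print(E)
print(S / C)
```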

Relevance:

30.00%

Publisher:

Abstract:

We analyse the matrix momentum algorithm, which provides an efficient approximation to on-line Newton's method, by extending a recent statistical mechanics framework to include second order algorithms. We study the efficacy of this method when the Hessian is available and also consider a practical implementation which uses a single example estimate of the Hessian. The method is shown to provide excellent asymptotic performance, although the single example implementation is sensitive to the choice of training parameters. We conjecture that matrix momentum could provide efficient matrix inversion for other second order algorithms.
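
A minimal sketch of a matrix-momentum-style update on a quadratic test problem, assuming the generic form dw <- -eta*g + (I - eta*H) dw with the exact Hessian supplied; the paper's precise parameterization, and its single-example estimate of the Hessian, may differ from this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
# Quadratic test problem: loss = 0.5 * (w - w*)^T H (w - w*)
Q = rng.normal(size=(d, d))
H = Q @ Q.T + np.eye(d)          # positive-definite Hessian
w_star = rng.normal(size=d)

def grad(w):
    return H @ (w - w_star)

eta = 0.01
w, dw = np.zeros(d), np.zeros(d)
for t in range(2000):
    g = grad(w)
    # Matrix momentum: the scalar momentum coefficient is replaced by the
    # matrix (I - eta*H); the exact Hessian is used here for clarity, whereas
    # the practical variant in the paper estimates H from a single example.
    dw = -eta * g + (np.eye(d) - eta * H) @ dw
    w = w + dw
print(np.linalg.norm(w - w_star))   # small residual after convergence
```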

Relevance:

30.00%

Publisher:

Abstract:

Natural gradient learning is an efficient and principled method for improving on-line learning. In practical applications, however, estimating and inverting the Fisher information matrix incurs a significant additional cost. We propose to use the matrix momentum algorithm in order to carry out efficient inversion and study the efficacy of a single-step estimation of the Fisher information matrix. We analyse the proposed algorithm in a two-layer network, using a statistical mechanics framework which allows us to describe analytically the learning dynamics, and compare performance with true natural gradient learning and standard gradient descent.
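
As background, the sketch below shows plain natural gradient learning for logistic regression, with the Fisher matrix estimated from per-example gradient outer products and inverted by a direct solve. The point of the paper is precisely to replace that explicit solve with matrix momentum, which is not reproduced here; the data, step size and damping are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(d)
for t in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    G = (p - y)[:, None] * X             # per-example gradients of the log-loss
    g = G.mean(axis=0)                   # full-batch gradient
    F = G.T @ G / n                      # empirical Fisher information estimate
    # Natural gradient step; the paper avoids this explicit solve by using
    # matrix momentum to approximate F^{-1} g on-line.
    w -= 0.5 * np.linalg.solve(F + 1e-8 * np.eye(d), g)

print(w, w_true)                         # estimate should land near w_true
```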

Relevance:

30.00%

Publisher:

Abstract:

The replica method, developed in statistical physics, is employed in conjunction with Gallager's methodology to accurately evaluate zero error noise thresholds for Gallager code ensembles. Our approach generally provides more optimistic evaluations than those reported in the information theory literature for sparse matrices; the difference vanishes as the parity check matrix becomes dense.

Relevance:

30.00%

Publisher:

Abstract:

Poor water solubility leads to a low dissolution rate and, consequently, can limit bioavailability. Solid dispersions, where the drug is dispersed into an inert, hydrophilic polymer matrix, can enhance drug dissolution. Solid dispersions were prepared using phenacetin and phenylbutazone as model drugs with polyethylene glycol (PEG) 8000 as the carrier, by the melt fusion method. Phenacetin and phenylbutazone displayed an increase in the dissolution rate when formulated as solid dispersions as compared with their physical mixture and drug-alone counterparts. Characterisation of the solid dispersions was performed using differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM). DSC studies revealed that the drugs were present in the amorphous form within the solid dispersions. FTIR spectra for the solid dispersions suggested that there was a lack of interaction between PEG 8000 and the drug. However, the physical mixture of phenacetin with PEG 8000 indicated the formation of a hydrogen bond between phenacetin and the carrier. The permeability of phenacetin and phenylbutazone across Caco-2 cell monolayers was higher for solid dispersions as compared with the drug alone. Permeability studies have shown that both phenacetin and phenylbutazone, and their solid dispersions, can be categorised as well-absorbed compounds.

Relevance:

30.00%

Publisher:

Abstract:

Soft contact lens wear has become common in recent times. A contact lens placed in the eye rapidly undergoes change: a film of biological material builds up on and in the lens matrix, and the long-term wear characteristics of the lens ultimately depend on this process. With time, distinct structures made up of biological material have been found to build up on the lens. A fuller understanding of this process, and how it relates to the lens chemistry, could lead to contact lenses that are better tolerated by the eye. The tear film is a complex biological fluid, and it is this fluid that bathes the lens during wear; it is reasonable to suppose that the material accumulating on the lens is derived from this source. To understand this phenomenon, the makeup and conformation of the protein species found on and in the lens were investigated. As inter-individual variations in tear fluid composition have been found, it is important to be able to study the proteins on a single lens. Many of the analytical techniques used in bioresearch are unsuitable for this study because they lack sensitivity. Work with polyacrylamide electrophoresis showed the possibility of analysing the proteins extracted from a single lens. The development of a biotin-avidin electro-blot and an enzyme-linked antibody electro-blot led to the high-sensitivity detection and identification of the proteins present. The extraction of proteins from a lens is always incomplete, so a method that analyses the proteins in situ would be a great advancement. Fourier transform infrared microscopy was developed to the point where a thin section of a contact lens could yield information about the proteins present and their conformation. The three-dimensional structure of the gross macroscopic structures termed white spots was investigated using confocal laser microscopy.

Relevance:

30.00%

Publisher:

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
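
The relative-entropy projection itself is not spelled out in the abstract, so the sketch below only conveys the general flavour of basis-vector sparsification: a subset-of-regressors GP predictive mean built from m basis points. The kernel, data and the random choice of basis points are all illustrative assumptions; the paper selects the basis vectors sequentially so as to best preserve the posterior.

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel between 1-D input arrays A and B."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ell**2)

rng = np.random.default_rng(3)
n, m, noise = 400, 20, 0.1
x = np.sort(rng.uniform(0, 4, n))
y = np.sin(2 * x) + noise * rng.normal(size=n)

# Basis vectors: a random subset here; the paper retains the subset that best
# represents the true posterior Gaussian random field in the relative-entropy sense.
xb = x[rng.choice(n, m, replace=False)]

Kmn = rbf(xb, x)
Kmm = rbf(xb, xb)
xs = np.linspace(0, 4, 200)
Ksm = rbf(xs, xb)

# Subset-of-regressors predictive mean using only the m basis vectors
A = Kmn @ Kmn.T + noise**2 * Kmm + 1e-8 * np.eye(m)
mean = Ksm @ np.linalg.solve(A, Kmn @ y)
print(np.max(np.abs(mean - np.sin(2 * xs))))   # rough fit check against the true signal
```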

Relevance:

30.00%

Publisher:

Abstract:

To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz transformed signals and one original signal from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation using both symmetric and asymmetric filters to account for some perceptual phase distortions observed in image signals - notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by its virtue of representing phase, orientation and energy as orthogonal signal quantities.
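
Below is a minimal sketch of the Riesz-transform (monogenic) signal computed via the FFT, returning local energy, orientation and phase for a 2-D image. It does not reproduce the paper's SVD-based higher-order representation or the Bayesian weighting across orientation; the test grating is illustrative.

```python
import numpy as np

def monogenic(im):
    """Riesz-transform (monogenic) signal: local energy, orientation and
    phase of a 2-D image (ideally bandpass-filtered beforehand)."""
    rows, cols = im.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0                                  # avoid division by zero at DC
    R1, R2 = -1j * u / radius, -1j * v / radius         # Riesz frequency responses
    F = np.fft.fft2(im)
    r1 = np.real(np.fft.ifft2(F * R1))
    r2 = np.real(np.fft.ifft2(F * R2))
    energy = np.sqrt(im**2 + r1**2 + r2**2)             # local energy (scalar)
    orientation = np.arctan2(r2, r1)                    # local orientation
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), im)      # local phase
    return energy, orientation, phase

# Illustrative check on an oriented grating (30 degrees)
yy, xx = np.mgrid[0:128, 0:128]
theta = np.deg2rad(30)
im = np.cos(2 * np.pi * 0.1 * (xx * np.cos(theta) + yy * np.sin(theta)))
_, orientation, _ = monogenic(im)
print(np.rad2deg(np.median(np.mod(orientation, np.pi))))   # approximately 30
```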

Relevance:

30.00%

Publisher:

Abstract:

We present a parallel genetic algorithm for finding matrix multiplication algorithms. For 3 x 3 matrices our genetic algorithm successfully discovered algorithms requiring 23 multiplications, which are equivalent to the currently best known human-developed algorithms. We also studied the cases with fewer multiplications and evaluated the suitability of the methods discovered. Although our evolutionary method did not reach the theoretical lower bound, it led to an approximate solution for 22 multiplications.
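
The core of such a search is evaluating candidate bilinear algorithms. The hypothetical fitness sketch below scores a coefficient triple (U, V, W) on random matrices and verifies the trivial 27-multiplication 3 x 3 algorithm; the encoding and genetic operators actually used in the paper are not reproduced.

```python
import numpy as np

def bilinear_product(A, B, U, V, W):
    """Evaluate a candidate bilinear algorithm with r multiplications:
    m_k = (U[k] . vec(A)) * (V[k] . vec(B)),  vec(C) = W @ m."""
    m = (U @ A.ravel()) * (V @ B.ravel())
    return (W @ m).reshape(A.shape)

def fitness(U, V, W, trials=20, n=3):
    """Negative mean error on random matrices -- a simple GA fitness;
    zero means the candidate is an exact n x n multiplication algorithm."""
    rng = np.random.default_rng(0)
    err = 0.0
    for _ in range(trials):
        A = rng.normal(size=(n, n))
        B = rng.normal(size=(n, n))
        err += np.abs(bilinear_product(A, B, U, V, W) - A @ B).max()
    return -err / trials

# Sanity check: the trivial 27-multiplication algorithm for 3 x 3 matrices
n, r = 3, 27
U = np.zeros((r, n * n))
V = np.zeros((r, n * n))
W = np.zeros((n * n, r))
k = 0
for i in range(n):
    for j in range(n):
        for l in range(n):
            U[k, i * n + l] = 1.0     # picks A[i, l]
            V[k, l * n + j] = 1.0     # picks B[l, j]
            W[i * n + j, k] = 1.0     # adds the product into C[i, j]
            k += 1
print(fitness(U, V, W))               # 0.0 up to rounding for the exact algorithm
```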

Relevance:

30.00%

Publisher:

Abstract:

Pack aluminide coating is a useful method for conferring oxidation resistance on nickel-base superalloys. Nominally, these coatings have a matrix composed of a Ni-Al based B2-type phase (commonly denoted as β). However, following high-temperature exposure in oxidative environments, aluminum is depleted from the coating. Aluminum depletion, in turn, leads to destabilization of the β phase, resulting in the formation of a characteristic lathlike β-derivative microstructure. This article presents a transmission electron microscopy study of the formation of the lathlike β-derivative microstructure using bulk nickel aluminides as model alloys. In the bulk nickel aluminides, the lathlike microstructure has been found to correspond to two distinct components: L10-type martensite and a new β derivative. The new β derivative is characterized, and the conditions associated with the presence of this feature are identified and compared with those leading to the formation of the L10 martensitic phase. © 1995 The Minerals, Metals & Materials Society.

Relevance:

30.00%

Publisher:

Abstract:

The basic matrices method is proposed for analysis of the Leontief model (LM) when some of its components are imprecisely given. The LM can be construed as a forecasting task for product expenses and output on the basis of known statistical information, with several elements of the technological matrix, the constraint vector, and the variable bounds given imprecisely. Elements of the technological matrix and the right-hand sides of the LM constraint vector may also appear as functions of some arguments; in this case a dynamic analogue of the task arises. An essential complication of the LM lies in the inclusion of variable constraints and a criterion function in it.
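
As a reminder of the crisp core of the model, the sketch below solves x = Ax + d via the Leontief inverse for a hypothetical three-sector technology matrix, and shows how interval (imprecisely given) final demand maps to an output range, since the Leontief inverse of a productive economy is nonnegative. The basic-matrices machinery for fuzzy elements is not reproduced here.

```python
import numpy as np

# Illustrative 3-sector technology matrix (hypothetical coefficients)
A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.3],
              [0.2, 0.2, 0.1]])
L = np.linalg.inv(np.eye(3) - A)  # Leontief inverse; nonnegative for a productive A

# Crisp final demand
d = np.array([100.0, 50.0, 80.0])
print(L @ d)                      # gross output x solving x = A x + d

# Interval final demand: because L >= 0, the output range is obtained
# by solving at the interval endpoints.
d_lo, d_hi = 0.9 * d, 1.1 * d
print(L @ d_lo, L @ d_hi)
```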

Relevance:

30.00%

Publisher:

Abstract:

Partially supported by the Bulgarian Science Fund contract with TU Varna, No 487.

Relevance:

30.00%

Publisher:

Abstract:

We consider a model eigenvalue problem (EVP) in 1D, with periodic or semi-periodic boundary conditions (BCs). The discretization of this type of EVP by consistent-mass finite element methods (FEMs) leads to the generalized matrix EVP Kc = λMc, where K and M are real, symmetric matrices with a certain (skew-)circulant structure. In this paper we restrict our attention to the use of a quadratic FE mesh. Explicit expressions for the eigenvalues of the resulting algebraic EVP are established. This leads to an explicit form for the approximation error in terms of the mesh parameter, which confirms the theoretical error estimates obtained in [2].
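
A small numerical check of the setting described (not the paper's explicit eigenvalue expressions): assemble the quadratic-element stiffness and consistent mass matrices for -u'' = λu on (0, 1) with periodic BCs and solve the generalized EVP Kc = λMc, comparing with the exact eigenvalues (2πk)². The mesh size is an arbitrary choice for illustration.

```python
import numpy as np
from scipy.linalg import eigh

# Model problem -u'' = lambda * u on (0, 1), periodic BCs,
# discretised with N quadratic finite elements (consistent mass matrix).
N = 16                       # number of elements -> 2N periodic DOFs
h = 1.0 / N
Ke = (1.0 / (3 * h)) * np.array([[7, -8, 1], [-8, 16, -8], [1, -8, 7]], float)
Me = (h / 30.0) * np.array([[4, 2, -1], [2, 16, 2], [-1, 2, 4]], float)

ndof = 2 * N
K = np.zeros((ndof, ndof))
M = np.zeros((ndof, ndof))
for e in range(N):
    dofs = [2 * e, 2 * e + 1, (2 * e + 2) % ndof]   # periodic wrap-around
    for a in range(3):
        for b in range(3):
            K[dofs[a], dofs[b]] += Ke[a, b]
            M[dofs[a], dofs[b]] += Me[a, b]

lam = eigh(K, M, eigvals_only=True)       # generalized EVP  K c = lambda M c
exact = (2 * np.pi * np.arange(1, 4)) ** 2
print(np.sort(lam)[1:7])                  # doubly degenerate FE approximations
print(np.repeat(exact, 2))                # exact eigenvalues (2*pi*k)^2, k = 1, 2, 3
```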