871 resultados para Anisotropic Analytical Algorithm
Resumo:
Laser-heating Ar-40/Ar-39 geochronology provides high analytical precision and accuracy, μm-scale spatial resolution, and statistically significant data sets for the study of geological and planetary processes. A newly commissioned Ar-40/Ar-39 laboratory at CPGeo/USP, Sao Paulo, Brazil, equips the Brazilian scientific community with a powerful new tool applicable to the study of geological and cosmochemical processes. Detailed information about laboratory layout, environmental conditions, and instrumentation provides the necessary parameters for evaluating the suitability of the CPGeo/USP Ar-40/Ar-39 laboratory for a diverse range of applications. Details of the analytical procedures, including mineral separation, irradiation at the IPEN/CNEN reactor at USP, and mass-spectrometric analysis, enable potential researchers to design the sampling and sample-preparation program suited to the objectives of their study. Finally, the results of calibration tests using Ca and K salts and glasses, international mineral standards, and in-house mineral standards show that the accuracy and precision obtained at the Ar-40/Ar-39 laboratory at CPGeo/USP are comparable to results obtained in the most respected laboratories internationally. The extensive calibration and standardization procedures undertaken ensure that the results of analytical studies carried out in our laboratories will gain immediate international credibility, enabling Brazilian students and scientists to conduct forefront research in earth and planetary sciences.
Resumo:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
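The moving least-squares construction can be illustrated in one dimension. The sketch below is generic MLS with a Gaussian weight, not the paper's contact implementation; the node layout, weight radius `h`, and function names are illustrative assumptions. It shows the polynomial-reproduction property the abstract relies on: with a linear basis, nodal data sampled from a linear function is reproduced exactly.

```python
import numpy as np

def mls_eval(x_eval, nodes, values, h=0.5):
    """Evaluate a 1D moving least-squares fit at x_eval.

    A linear basis [1, x] is fitted to the nodal values by weighted
    least squares, with Gaussian weights of radius h centred on the
    evaluation point (an illustrative weight choice).
    """
    w = np.exp(-((nodes - x_eval) / h) ** 2)           # weights peak at x_eval
    P = np.column_stack([np.ones_like(nodes), nodes])  # linear basis
    A = P.T @ (w[:, None] * P)                         # weighted normal equations
    b = P.T @ (w * values)
    c = np.linalg.solve(A, b)
    return c[0] + c[1] * x_eval

# Linear nodal data is reproduced exactly by the linear basis,
# regardless of the weight function:
nodes = np.linspace(0.0, 1.0, 11)
print(mls_eval(0.37, nodes, 2.0 * nodes + 1.0))  # approximately 1.74 (= 2*0.37 + 1)
```

For the quadratic basis mentioned in the abstract, the basis row would become `[1, x, x**2]`; the same weighted normal equations apply.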
Resumo:
The emphasis of this work is on the optimal design of MRI magnets with both superconducting coils and ferromagnetic rings. The work is directed to the automated design of MRI magnet systems containing superconducting wire and both `cold' and `warm' iron. Details of the optimization procedure are given and the results show combined superconducting and iron material MRI magnets with excellent field characteristics. Strong, homogeneous central magnetic fields are produced with little stray or external field leakage. The field calculations are performed using a semi-analytical method for both current coil and iron material sources. Design examples for symmetric, open and asymmetric clinical MRI magnets containing both superconducting coils and ferromagnetic material are presented.
Resumo:
Libraries of cyclic peptides are being synthesized using combinatorial chemistry for high-throughput screening in the drug discovery process. This paper describes the min_syn_steps.cpp program (available at http://www.imb.uq.edu.au/groups/smythe/tran), which, after inputting a list of cyclic peptides to be synthesized, removes cyclically redundant sequences and calculates synthetic strategies which minimize the synthetic steps as well as the reagent requirements. The synthetic steps and reagent requirements can be minimized by finding common subsets within the sequences for block synthesis. Since the search space of a brute-force search for optimum synthetic strategies is impractically large, a subset-orientated approach is utilized here to limit the size of the search. (C) 2002 Elsevier Science Ltd. All rights reserved.
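The cyclic-redundancy step can be sketched as follows. This is a minimal illustration, not the algorithm of min_syn_steps.cpp itself: it assumes one-letter residue codes and ignores reading direction, and the function names are hypothetical.

```python
def canonical_rotation(seq):
    """Return the lexicographically smallest rotation of seq.

    All rotations of a cyclic peptide describe the same molecule, so
    the smallest rotation serves as a canonical form for comparison.
    """
    return min(seq[i:] + seq[:i] for i in range(len(seq)))

def remove_cyclic_redundancy(peptides):
    """Keep one representative per cyclic equivalence class."""
    seen, unique = set(), []
    for p in peptides:
        key = canonical_rotation(p)
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique

# 'BCA' and 'CAB' are rotations of 'ABC', so only two peptides remain:
print(remove_cyclic_redundancy(["ABC", "BCA", "CAB", "ABD"]))  # ['ABC', 'ABD']
```

The subsequent block-synthesis optimization would then search for subsets of sequences sharing common substrings, which is where the subset-orientated pruning described in the abstract comes in.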
Resumo:
Quantification of calcium in the cuticle of the fly larva Exeretonevra angustifrons was undertaken at the micron scale using wavelength-dispersive X-ray microanalysis, analytical standards, and a full matrix correction. Calcium and phosphorus were found to be present in the exoskeleton in a ratio that indicates amorphous calcium phosphate. This was confirmed through electron diffraction of the calcium-containing tissue. Owing to the practical difficulties of measuring light elements, it is not uncommon in the field of entomology to neglect matrix corrections when performing microanalysis of bulk insect specimens. To determine, firstly, whether such a strategy affects the outcome and, secondly, which matrix correction is preferable, φ(ρz) and ZAF matrix corrections were compared with each other and with the uncorrected data. The best estimate of the mineral phase was found to be given by the φ(ρz) correction. When no correction was made, the ratio of Ca to P fell outside the range for amorphous calcium phosphate, possibly leading to flawed interpretation of the mineral form when used on its own.
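Whichever matrix correction is applied, the final comparison rests on converting measured weight fractions to an atomic Ca/P ratio. The arithmetic is shown below with hypothetical weight-percent values; the 1.5 reference is the stoichiometric Ca/P ratio of tricalcium phosphate, commonly quoted for amorphous calcium phosphate.

```python
# Standard atomic weights (g/mol)
M_CA, M_P = 40.078, 30.974

def atomic_ca_p_ratio(wt_ca, wt_p):
    """Convert weight-percent Ca and P to an atomic Ca/P ratio."""
    return (wt_ca / M_CA) / (wt_p / M_P)

# Hypothetical measured values, for illustration only:
ratio = atomic_ca_p_ratio(20.0, 10.3)
print(round(ratio, 3))  # 1.501 -- close to the ~1.5 expected for ACP
```

An uncorrected analysis biases the weight fractions themselves, which is why the ratio can drift outside the ACP range when no matrix correction is applied.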
Resumo:
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
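In the standard notation of the anisotropy-based control literature (a sketch of the usual definitions, not quoted from this paper), the anisotropy of a random m-dimensional vector w with finite second moment and the associated a-anisotropic norm of a matrix F can be written as:

```latex
\mathbf{A}(w) \;=\; \min_{\lambda > 0} D\!\left(P_w \,\middle\|\, \mathcal{N}(0, \lambda I_m)\right)
\;=\; \frac{m}{2}\,\ln\!\left(\frac{2\pi e}{m}\,\mathbf{E}\|w\|^2\right) - h(w),
\qquad
{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert F \right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}_a
\;=\; \sup\left\{ \frac{\|Fw\|_{\mathbf{P}}}{\|w\|_{\mathbf{P}}} \;:\; \mathbf{A}(w) \le a \right\},
```

where D is the Kullback-Leibler divergence, h(w) the differential entropy of w, and the supremum runs over finite power inputs. At a = 0 the norm reduces to a scaled Frobenius-type (average energy) gain, and as a grows it approaches the maximum singular value, which is the worst-case root mean square gain mentioned above.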
Resumo:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector. Either the basis vectors need to be kept, or the Lanczos process needs to be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product relative to a function of a large sparse symmetric matrix, without keeping the basis vectors.
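The idea can be sketched as a single-pass estimate of u^T f(A) v: only the two vectors of the three-term recurrence are stored, and the scalars u^T v_j are accumulated as each basis vector is produced and then discarded. This is a simplified sketch in the spirit of the abstract, not the paper's exact augmented algorithm; the choice f = exp, the iteration count k, and the absence of reorthogonalization are illustrative assumptions.

```python
import numpy as np

def sym_expm(T):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(T)
    return (vecs * np.exp(vals)) @ vecs.T

def lanczos_bilinear(A, u, v, k, f=sym_expm):
    """Estimate u^T f(A) v for symmetric A in one Lanczos pass.

    Basis vectors are not stored: only the current and previous vectors
    of the recurrence are kept, and the projections u^T v_j are
    accumulated on the fly (no reorthogonalization in this sketch).
    """
    beta0 = np.linalg.norm(v)
    q_prev, q = np.zeros_like(v), v / beta0
    alphas, betas, proj = [], [], []
    beta = 0.0
    for j in range(k):
        proj.append(u @ q)            # scalar kept instead of the vector v_j
        w = A @ q - beta * q_prev
        alpha = q @ w
        alphas.append(alpha)
        if j < k - 1:
            w -= alpha * q
            beta = np.linalg.norm(w)
            betas.append(beta)
            q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    # u^T f(A) v  ~  beta0 * (u^T V_k) f(T_k) e_1
    return beta0 * (np.array(proj) @ f(T)[:, 0])
```

With k equal to the dimension of A the estimate is exact up to rounding; for large sparse A, a modest k usually suffices for smooth functions such as the exponential.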
Resumo:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
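For contrast with the schemes above, the baseline crude Monte Carlo estimator, which the evolution-model and hybrid techniques improve upon for highly reliable networks, can be sketched as follows. The two-terminal setting, a common edge reliability p, and the tiny example network are illustrative assumptions, not details from the article.

```python
import random

def crude_mc_reliability(nodes, edges, s, t, p, n_samples, seed=0):
    """Crude Monte Carlo estimate of two-terminal network reliability.

    Each edge in `edges` is independently operational with probability p;
    the estimate is the fraction of samples in which s and t are joined
    by operational edges (union-find connectivity test).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        parent = {x: x for x in nodes}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for a, b in edges:
            if rng.random() < p:               # edge is up in this sample
                parent[find(a)] = find(b)
        if find(s) == find(t):
            hits += 1
    return hits / n_samples

# Two parallel s-t edges, each up with probability 0.9:
# exact reliability = 1 - 0.1**2 = 0.99
est = crude_mc_reliability([0, 1], [(0, 1), (0, 1)], 0, 1, 0.9, 20000)
print(est)  # close to 0.99
```

The weakness this exposes is exactly the one the article addresses: when the true reliability is very close to 1, failures are rarely sampled and the relative error of the crude estimator blows up, motivating graph-evolution and hybrid partitioning schemes.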
Resumo:
This review summarizes the development of exclusion chromatography, also termed gel filtration, molecular-sieve chromatography and gel permeation chromatography, for the quantitative characterization of solutes and solute interactions. As well as affording a means of determining molecular mass and molecular mass distribution, the technique offers a convenient way of characterizing solute selfassociation and solute-ligand interactions in terms of reaction stoichiometry and equilibrium constant. The availability of molecular-sieve media with different selective porosities ensures that very little restriction is imposed on the size of solute amenable to study. Furthermore, access to a diverse array of assay procedures for monitoring the column eluate endows analytical exclusion chromatography with far greater flexibility than other techniques from the viewpoint of solute concentration range that can be examined. In addition to its widely recognized prowess as a means of solute separation and purification, exclusion chromatography thus also possesses considerable potential for investigating the functional roles of the purified solutes. (C) 2003 Elsevier Science B.V. All rights reserved.
Resumo:
A combined Genetic Algorithm and Method of Moments design method is presented for the design of unusual near-field antennas for use in Magnetic Resonance Imaging systems. The method is successfully applied to the design of an asymmetric coil structure for use at 190 MHz and demonstrates excellent radiofrequency field homogeneity.
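The abstract gives no algorithmic details, so the sketch below is only a generic genetic-algorithm loop (tournament selection, blend crossover, Gaussian mutation, elitism) minimizing a toy stand-in objective. In the actual design method the fitness would come from a Method of Moments field solution, which is not reproduced here; every name and parameter below is hypothetical.

```python
import random

def toy_fitness(x):
    """Stand-in objective: sum of squares to minimize (hypothetical;
    the real method would score RF field homogeneity via MoM)."""
    return sum(xi * xi for xi in x)

def genetic_minimize(dim=4, pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=toy_fitness)
    for _ in range(gens):
        new_pop = [best[:]]                    # elitism: keep the incumbent
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=toy_fitness)
            p2 = min(rng.sample(pop, 3), key=toy_fitness)
            # blend crossover plus Gaussian mutation
            child = [a + rng.random() * (b - a) + rng.gauss(0.0, 0.05)
                     for a, b in zip(p1, p2)]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=toy_fitness)
    return best
```

Because of elitism, the best fitness is non-increasing across generations; the expensive part in the real design is that each fitness evaluation requires a full electromagnetic solve.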
Resumo:
Multiphase flow models are widely used in several areas of environmental research, such as fluidized beds, gas dispersion in liquids, and various other processes involving more than one physico-chemical property of the medium. Accordingly, a multiphase model was developed and adapted to study bed-sediment transport driven by gravity waves. In this work, a multiphase coupling was built between a nonlinear Eulerian Boussinesq-type wave model, based on the numerical formulation of Wei et al. (1995), and a Lagrangian particle model founded on the Newtonian principle of motion with a hard-sphere collision scheme. The wave model was tested with respect to its generating source, represented by a Gaussian function, a piston paddle, and a flap paddle, and with respect to its interaction with depth, through nonlinearity and dispersive properties. In the source tests, the Gaussian source, following Wei et al. (1999), showed the best consistency and stability in wave generation when compared with linear theory for a given kh. The second-order nonlinearity of the wave model for dispersion gave satisfactory results when compared with an experiment of waves over a trapezoidal obstacle, where the deformation of the wave over the submerged structure agrees with experimental data from the literature. The granular model was then tested in two experiments. The first simulates a dam break in a tank of water; in the second, the dam break is simulated with a rigid obstacle added at the center of the tank. In these experiments, the collision algorithm was effective in handling particle-particle and particle-wall interactions, revealing physical processes that are difficult to simulate with regular-mesh models.
For the coupling of the wave and sediment models, the algorithm was tested against literature data on bed morphology. The results were compared with analytical data and with numerical models, and proved satisfactory with respect to the erosion and deposition points and to the change in shape of the sand bar.
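The hard-sphere collision rule at the heart of such particle models can be illustrated in one dimension. This is a generic impulse-based sketch with a restitution coefficient e, not the thesis's implementation; the actual model resolves collisions along the line of centres in more dimensions, and the values below are illustrative.

```python
def hard_sphere_collision(m1, v1, m2, v2, e=1.0):
    """Post-collision velocities of two rigid spheres colliding head-on.

    An impulse j reverses the relative velocity, scaled by the
    restitution coefficient e (e = 1: elastic, e = 0: perfectly plastic).
    """
    j = -(1.0 + e) * (v1 - v2) * m1 * m2 / (m1 + m2)  # impulse on sphere 1
    return v1 + j / m1, v2 - j / m2

# Equal masses, elastic collision: the spheres exchange velocities.
print(hard_sphere_collision(1.0, 1.0, 1.0, 0.0))  # (0.0, 1.0)
```

Momentum is conserved for any e, while kinetic energy is conserved only in the elastic case, which is what lets hard-sphere schemes capture dissipative particle-particle and particle-wall interactions cheaply.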