953 results for SPHERICAL HARMONICS
Abstract:
Purpose: All currently considered parametric models used for decomposing videokeratoscopy height data are viewer-centered and hence describe what the operator sees rather than what the surface is. The purpose of this study was to ascertain the applicability of an object-centered representation to the modeling of corneal surfaces. Methods: A three-dimensional surface decomposition into a series of spherical harmonics is considered and compared with the traditional Zernike polynomial expansion for a range of videokeratoscopic height data. Results: Spherical harmonic decomposition led to significantly better fits to corneal surfaces (in terms of root mean square error) than the corresponding Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters, and model orders. Conclusions: Spherical harmonic decomposition is a viable alternative to Zernike polynomial decomposition. It achieves better fits to videokeratoscopic height data and has the advantage of an object-centered representation that could be particularly suited to the analysis of multiple corneal measurements.
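As a rough illustration of the kind of fit being compared (not the study's implementation), the sketch below fits real spherical harmonics to height samples by linear least squares and reports the RMS fit error; the synthetic "corneal cap" data, the degree lmax, and all function names are placeholders of my own.

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, n, azimuth, polar angle)

def real_sph_harm(m, n, az, pol):
    """Real-valued spherical harmonic of degree n, order m."""
    Y = sph_harm(abs(m), n, az, pol)
    if m > 0:
        return np.sqrt(2) * Y.real
    if m < 0:
        return np.sqrt(2) * Y.imag
    return Y.real

def fit_spherical_harmonics(az, pol, height, lmax):
    """Least-squares fit of height samples with SH up to degree lmax.
    Returns the coefficients and the RMS fit error."""
    cols = [real_sph_harm(m, n, az, pol)
            for n in range(lmax + 1) for m in range(-n, n + 1)]
    A = np.column_stack(cols)                        # design matrix
    coeffs, *_ = np.linalg.lstsq(A, height, rcond=None)
    rms = np.sqrt(np.mean((A @ coeffs - height) ** 2))
    return coeffs, rms

# Placeholder "corneal cap" data: polar angle limited to ~30 degrees.
rng = np.random.default_rng(0)
az = rng.uniform(0, 2 * np.pi, 2000)
pol = rng.uniform(0, np.pi / 6, 2000)
height = 7.8 * np.cos(pol) + 0.01 * rng.standard_normal(2000)  # crude sphere-like cap
coeffs, rms = fit_spherical_harmonics(az, pol, height, lmax=6)
print(f"{coeffs.size} coefficients, RMS fit error = {rms:.4g}")
```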
Abstract:
The series expansion of the plasma fields and currents in vector spherical harmonics has been demonstrated to be an efficient technique for the solution of nonlinear problems in spherically bounded plasmas. Using this technique, it is possible to describe the nonlinear plasma response to the rotating high-frequency magnetic field applied to the magnetically confined plasma sphere. The effect of the external magnetic field on the current drive and field configuration is studied. The results obtained are important for continuous current drive experiments in compact tori. © 2000 American Institute of Physics.
Abstract:
A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In this novel approach, the adaptive tree multigrid technique is used in conjunction with the simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since its application to BNCT dose-planning problems, testing and development of MultiTrans has continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been demonstrated by verifying and validating its performance for different types of neutral and charged particle transport problems, as reported in separate publications.
Abstract:
EVENT has been used to examine the effects of 3D cloud structure, distribution, and inhomogeneity on the scattering of visible solar radiation and the resulting 3D radiation field. Large eddy simulation and aircraft measurements are used to create realistic cloud fields which are continuous or broken with smooth or uneven tops. The values, patterns and variance in the resulting downwelling and upwelling radiation from incident visible solar radiation at different angles are then examined and compared to measurements. The results from EVENT confirm that 3D cloud structure is important in determining the visible radiation field, and that these results are strongly influenced by the solar zenith angle. The results match those from other models using visible solar radiation, and are supported by aircraft measurements of visible radiation, providing confidence in the new model.
Abstract:
The finite difference time domain (FDTD) method has direct applications in musical instrument modeling, simulation of environmental acoustics, room acoustics, and sound reproduction paradigms, all of which benefit from auralization. However, rendering binaural impulse responses from simulated data is not straightforward to accomplish, as the calculated pressure at FDTD grid nodes does not contain any directional information. This paper addresses this issue by introducing a spherical array to capture sound pressure on a finite difference grid and decomposing it into a plane-wave density function. Binaural impulse responses are then constructed in the spherical harmonics domain by combining the decomposed grid data with free-field head-related transfer functions. The effects of designing a spherical array in a Cartesian grid are studied, and emphasis is given to the relationships between array sampling and the spatial and spectral design parameters of several finite-difference schemes.
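The array step described above can be sketched as follows, under assumptions of my own: pressure stored on a Cartesian grid is sampled at spherical-array positions by nearest-node lookup and projected onto spherical harmonics by least squares. Grid size, array layout, spacing, and SH order are arbitrary placeholders, not the paper's design.

```python
import numpy as np
from scipy.special import sph_harm

def nearest_grid_sample(p_grid, dx, centre, points):
    """Sample a pressure field stored on a Cartesian FDTD grid at arbitrary
    points by nearest-node lookup (interpolation would reduce staircase error)."""
    idx = np.rint((points - centre) / dx).astype(int) + np.array(p_grid.shape) // 2
    return p_grid[idx[:, 0], idx[:, 1], idx[:, 2]]

def sh_coefficients(pressure, az, pol, nmax):
    """Discrete spherical-harmonic transform of pressure samples on a sphere,
    computed by least squares (robust to non-uniform sampling)."""
    cols = [sph_harm(m, n, az, pol)
            for n in range(nmax + 1) for m in range(-n, n + 1)]
    A = np.column_stack(cols)
    p_nm, *_ = np.linalg.lstsq(A, pressure.astype(complex), rcond=None)
    return p_nm

# Placeholder array geometry: 64 points on a sphere of radius 5 grid cells (dx = 1 cm).
rng = np.random.default_rng(1)
az = rng.uniform(0, 2 * np.pi, 64)
pol = np.arccos(rng.uniform(-1, 1, 64))
r = 5 * 0.01
points = r * np.c_[np.sin(pol) * np.cos(az), np.sin(pol) * np.sin(az), np.cos(pol)]

p_grid = rng.standard_normal((64, 64, 64))           # stand-in for one FDTD time step
pressure = nearest_grid_sample(p_grid, 0.01, np.zeros(3), points)
p_nm = sh_coefficients(pressure, az, pol, nmax=3)
print(p_nm.shape)                                    # (nmax + 1)**2 coefficients
```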
Abstract:
Due to its efficiency and simplicity, the finite-difference time-domain method is becoming a popular choice for solving wideband, transient problems in various fields of acoustics. So far, the issue of extracting a binaural response from finite difference simulations has only been discussed in the context of embedding a listener geometry in the grid. In this paper, we propose and study a method for binaural response rendering based on a spatial decomposition of the sound field. The finite difference grid is locally sampled using a volumetric array of receivers, from which a plane-wave density function is computed and integrated with free-field head-related transfer functions in the spherical harmonics domain. The volumetric array is studied in terms of numerical robustness and spatial aliasing. Analytic formulas that predict the performance of the array are developed, facilitating spatial resolution analysis and numerical binaural response analysis for a number of finite difference schemes. Particular emphasis is placed on the effects of numerical dispersion on array processing and on the resulting binaural responses. Our method is compared to a binaural simulation based on the image method. Results indicate good spatial and temporal agreement between the two methods.
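Once both the plane-wave density and the HRTFs are expressed by spherical-harmonic coefficients, the rendering reduces, per frequency bin, to a sum over (n, m). The toy sketch below shows only that combination step; the arrays are random placeholders, and the conjugation convention is one common choice rather than necessarily the paper's.

```python
import numpy as np

def binaural_from_sh(a_nm, h_nm):
    """Combine plane-wave density SH coefficients a_nm(k) with HRTF SH
    coefficients h_nm(k): a sum over (n, m) per frequency bin.
    Shapes: (num_bins, (N + 1)**2). Illustrative only."""
    return np.sum(a_nm * np.conj(h_nm), axis=1)      # one ear, per frequency bin

# Placeholder data: 257 frequency bins, SH order 3 -> 16 coefficients.
rng = np.random.default_rng(2)
a_nm = rng.standard_normal((257, 16)) + 1j * rng.standard_normal((257, 16))
h_nm_left = rng.standard_normal((257, 16)) + 1j * rng.standard_normal((257, 16))
ear_left = binaural_from_sh(a_nm, h_nm_left)         # inverse FFT would give the impulse response
print(ear_left.shape)
```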
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the deep Earth's interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities, and sharp spikes. Wavelets are essentially used in two ways when applied to geophysical processes or signals: (1) as a basis for the representation or characterization of a process; (2) as an integration kernel for analysis, to extract information about the process. These two types of application are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then apply the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
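A minimal illustration of the two wavelet roles distinguished above, using the PyWavelets package on a synthetic signal; the wavelet family ('db4', Morlet), decomposition level, and signal are arbitrary choices of mine, not those of this work.

```python
import numpy as np
import pywt  # PyWavelets

# A synthetic signal with multi-scale features and a sharp discontinuity,
# standing in for a seismic trace or velocity profile (purely illustrative).
x = np.linspace(0, 1, 1024)
signal = np.sin(8 * np.pi * x) + 0.5 * np.sin(40 * np.pi * x)
signal[512:] += 1.0                                  # sharp step

# 1) Wavelets as a representation basis: multilevel discrete wavelet transform.
coeffs = pywt.wavedec(signal, 'db4', level=5)        # [cA5, cD5, ..., cD1]
reconstructed = pywt.waverec(coeffs, 'db4')
print(np.allclose(signal, reconstructed[:signal.size]))

# 2) Wavelets as an analysis kernel: continuous wavelet transform,
#    giving a time-scale picture that a Fourier expansion cannot provide.
scales = np.arange(1, 128)
cwt_coeffs, freqs = pywt.cwt(signal, scales, 'morl')
print(cwt_coeffs.shape)                              # (len(scales), len(signal))
```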
Abstract:
The time-variable gravity field, reflecting variations of mass distribution in the Earth system, is one of the key parameters for understanding the changing Earth. Mass variations are caused either by redistribution of mass in, on, or above the Earth's surface or by geophysical processes in the Earth's interior. The first set of observations of monthly variations of the Earth's gravity field was provided by the US/German GRACE satellite mission beginning in 2002. This mission is still providing valuable information to the science community. However, as GRACE has outlived its expected lifetime, the geoscience community is currently seeking successor missions in order to maintain the long time series of climate change observations begun by GRACE. Several studies on science requirements and technical feasibility have been conducted in recent years. These studies required a realistic model of the time-variable gravity field in order to perform simulation studies on the sensitivity of satellites and their instrumentation. This was the primary reason for the European Space Agency (ESA) to initiate a study on ''Monitoring and Modelling individual Sources of Mass Distribution and Transport in the Earth System by Means of Satellites''. The goal of this interdisciplinary study was to create simulated time-variable gravity fields, as realistic as possible, based on coupled geophysical models, which could be used in the simulation processes in a controlled environment. For this purpose, global atmosphere, ocean, continental hydrology, and ice models were used. The coupling was performed by using consistent forcing throughout the models and by including water flow between the different domains of the Earth system. In addition, gravity field changes due to solid Earth processes, such as continuous glacial isostatic adjustment (GIA) and a sudden earthquake with co-seismic and post-seismic signals, were modelled. All individual model results were combined and converted to a gravity field spherical harmonic series, which is the quantity commonly used to describe the Earth's global gravity field. The result of this study is a twelve-year time series of 6-hourly time-variable gravity field spherical harmonics up to degree and order 180, corresponding to a global spatial resolution of 1 degree in latitude and longitude. In this paper, we outline the input data sets and the process of combining them into a coherent model of temporal gravity field changes. The resulting time series was used in some follow-on studies and is available to anybody interested.
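For orientation, the sketch below evaluates a field on a 1-degree grid from a spherical-harmonic series, the representation the abstract refers to; the coefficients are random placeholders truncated at a low degree for speed, so this shows only the synthesis step, not the study's processing chain (a field resolved at degree and order 180 would set lmax=180).

```python
import numpy as np
from scipy.special import sph_harm

def synthesize_field(coeffs, lmax, nlat=180, nlon=360):
    """Evaluate a real field from complex SH coefficients coeffs[(n, m)]
    on a 1-degree latitude/longitude grid."""
    lat = np.linspace(89.5, -89.5, nlat)
    lon = np.linspace(0.5, 359.5, nlon)
    pol = np.deg2rad(90.0 - lat)[:, None]            # colatitude
    az = np.deg2rad(lon)[None, :]
    field = np.zeros((nlat, nlon))
    for n in range(lmax + 1):
        for m in range(-n, n + 1):
            field += np.real(coeffs[(n, m)] * sph_harm(m, n, az, pol))
    return field

# Placeholder coefficients (random, decaying with degree) up to lmax = 10.
rng = np.random.default_rng(3)
lmax = 10
coeffs = {(n, m): (rng.standard_normal() + 1j * rng.standard_normal()) / (n + 1) ** 2
          for n in range(lmax + 1) for m in range(-n, n + 1)}
grid = synthesize_field(coeffs, lmax)
print(grid.shape)                                    # (180, 360): 1-degree grid
```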
Abstract:
Purpose: To ascertain the effectiveness of object-centered three-dimensional representations for the modeling of corneal surfaces. Methods: Three-dimensional (3D) surface decompositions into series of basis functions including (i) spherical harmonics, (ii) hemispherical harmonics, and (iii) 3D Zernike polynomials were considered and compared to the traditional viewer-centered representation of a two-dimensional (2D) Zernike polynomial expansion for a range of retrospective videokeratoscopic height data from three clinical groups. The data were collected using the Medmont E300 videokeratoscope. The groups included 10 normal corneas with corneal astigmatism less than −0.75 D, 10 astigmatic corneas with corneal astigmatism between −1.07 D and −3.34 D (mean = −1.83 D, SD = ±0.75 D), and 10 keratoconic corneas. Only data from the right eyes of the subjects were considered. Results: All object-centered decompositions led to significantly better fits to corneal surfaces (in terms of the RMS error values) than the corresponding 2D Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters (2, 4, 6, and 8 mm), and model orders (4th to 10th radial orders). The best results (smallest RMS fit error) were obtained with the spherical harmonic decomposition, which led to about a 22% reduction in the RMS fit error compared to the traditional 2D Zernike polynomials. Hemispherical harmonics and the 3D Zernike polynomials reduced the RMS fit error by about 15% and 12%, respectively. Larger reductions in RMS fit error were achieved for smaller corneal diameters and lower-order fits. Conclusions: Object-centered 3D decompositions provide viable alternatives to the traditional viewer-centered 2D Zernike polynomial expansion of a corneal surface. They achieve better fits to videokeratoscopic height data and could be particularly suited to the analysis of multiple corneal measurements, where there can be slight variations in the position of the cornea from one map acquisition to the next.
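As a side note on the "same number of coefficients" condition, a quick count of terms in the two main expansions; how the study matched coefficient counts across all four bases is not reproduced here.

```python
def zernike_2d_count(radial_order):
    """Number of 2D Zernike terms up to a given radial order."""
    return (radial_order + 1) * (radial_order + 2) // 2

def spherical_harmonic_count(degree):
    """Number of (real) spherical-harmonic terms up to a given degree."""
    return (degree + 1) ** 2

# Term counts for the radial orders quoted in the abstract (4th to 10th),
# illustrating why a like-for-like comparison must equalize coefficient numbers.
for n in range(4, 11):
    print(n, zernike_2d_count(n), spherical_harmonic_count(n))
```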
Abstract:
We propose in this paper a new method for the mapping of hippocampal (HC) surfaces to establish correspondences between points on HC surfaces and enable localized HC shape analysis. A novel geometric feature, the intrinsic shape context, is defined to capture the global characteristics of HC shapes. Based on this intrinsic feature, an automatic algorithm is developed to detect a set of landmark curves that are stable across the population. The direct map between a source and target HC surface is then solved as the minimizer of a harmonic energy function defined on the source surface with landmark constraints. For numerical solutions, we compute the map with the approach of solving partial differential equations on implicit surfaces. The direct mapping method has the following properties: (1) it has the advantage of being automatic; (2) it is invariant to the pose of HC shapes. In our experiments, we apply the direct mapping method to study temporal changes of HC asymmetry in Alzheimer's disease (AD) using HC surfaces from 12 AD patients and 14 normal controls. Our results show that the AD group has a different trend in temporal changes of HC asymmetry than the group of normal controls. We also demonstrate the flexibility of the direct mapping method by applying it to construct spherical maps of HC surfaces. Spherical harmonics (SPHARM) analysis is then applied and confirms our results on temporal changes of HC asymmetry in AD.
Abstract:
As connectivity analyses become more popular, claims are often made about how the brain's anatomical networks depend on age, sex, or disease. It is unclear how results depend on the tractography methods used to compute fiber networks. We applied 11 tractography methods to high angular resolution diffusion images of the brain (4-Tesla 105-gradient HARDI) from 536 healthy young adults. We parcellated 70 cortical regions, yielding 70×70 connectivity matrices encoding fiber density. We computed popular graph theory metrics, including network efficiency and characteristic path length. Both metrics were robust to the number of spherical harmonics used to model diffusion (4th-8th order). Age effects were detected only for networks computed with the probabilistic Hough transform method, which excludes smaller fibers. Sex and total brain volume affected networks measured with deterministic, tensor-based fiber tracking but not with the Hough method. Each tractography method includes different fibers, which affects inferences made about the reconstructed networks.
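A sketch of how such metrics can be computed from a fiber-density matrix with NetworkX; the thresholding, the reciprocal-weight distance, and the random matrix are illustrative assumptions, not the pipeline used in the study.

```python
import numpy as np
import networkx as nx

def network_metrics(fiber_density, threshold=0.0):
    """Global efficiency and characteristic path length from a 70x70
    fiber-density matrix. Path lengths use the reciprocal of density as a
    distance, one common (but not unique) choice."""
    A = np.where(fiber_density > threshold, fiber_density, 0.0)
    G = nx.from_numpy_array(A)
    for _, _, d in G.edges(data=True):
        d['distance'] = 1.0 / d['weight']
    efficiency = nx.global_efficiency(G)             # treats edges as unweighted
    path_length = nx.average_shortest_path_length(G, weight='distance')
    return efficiency, path_length

# Placeholder connectivity matrix (symmetric, nonnegative), not real HARDI data.
rng = np.random.default_rng(4)
M = rng.random((70, 70))
M = (M + M.T) / 2
np.fill_diagonal(M, 0.0)
print(network_metrics(M, threshold=0.5))
```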
Abstract:
A technique for computing the spectral and angular (both zenith and azimuthal) distribution of the solar energy reaching the surface of the Earth, or any other plane in the atmosphere, has been developed. Here the computer code LOWTRAN is used to obtain the atmospheric transmittances, in conjunction with two approximate procedures for solving the equation of radiative transfer and obtaining the diffuse radiation in the cloud-free situation: one based on the Eddington method and the other on van de Hulst's adding method. The aerosol scattering phase functions are approximated by Henyey-Greenstein functions. When the equation of radiative transfer is solved using the adding method, the azimuthal and zenith-angle dependence of the scattered radiation is evaluated, whereas when the Eddington technique is utilized only the total downward flux of scattered solar radiation is obtained. Results for the diffuse and beam components of solar radiation received at the surface of the Earth compare very well with those computed by other methods, such as more exact calculations using spherical harmonics, and when atmospheric conditions corresponding to those prevailing locally at a tropical location (as in India) are used as inputs, the computed values agree closely with measured values.
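The Henyey-Greenstein phase function named above has a simple closed form; the snippet below defines it and checks its normalisation over the sphere, with the asymmetry parameter g chosen arbitrarily.

```python
import numpy as np

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function, normalised so that its integral
    over all solid angles equals one."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

# Sanity check of the normalisation: 2*pi times the integral over cos(theta) in [-1, 1].
mu = np.linspace(-1.0, 1.0, 100001)
p = henyey_greenstein(mu, g=0.7)                     # g = 0.7: forward-scattering, aerosol-like
integral = 2.0 * np.pi * np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(mu))
print(integral)                                      # ~1.0
```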
Abstract:
This paper is devoted to a consideration of the following problem: a spherical mass of fluid of density ρ1, viscosity μ1, and external radius R is surrounded by a fluid of density ρ2 and viscosity μ2. The fluids are immiscible and incompressible, and the interface is accelerated radially by g1; the aim is to study the effect of viscosity and surface tension on the stability of the interface. By analyzing the problem in spherical harmonics, the mathematical problem is reduced to the solution of a characteristic determinant equation. The particular case of a cavity bubble, where the viscosity μ1 of the fluid inside the bubble is negligible in comparison with the viscosity μ2 of the fluid outside the bubble, is considered in some detail. It is shown that viscosity has a stabilizing role on the interface, and when g1 > T(n − 1)(n + 2)/[R²(ρ2 − ρ1)] the stabilizing role of both viscosity and surface tension is more pronounced than would result when either of them is taken individually.
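To make the quoted inequality concrete, a small helper that evaluates it for a given harmonic degree n; T denotes the surface tension, and the sample values are placeholders rather than numbers from the paper.

```python
def criterion_satisfied(g1, T, n, R, rho1, rho2):
    """Evaluate the inequality quoted in the abstract for mode number n:
    g1 > T (n - 1)(n + 2) / (R**2 (rho2 - rho1)). Consistent (SI) units assumed."""
    threshold = T * (n - 1) * (n + 2) / (R**2 * (rho2 - rho1))
    return g1 > threshold, threshold

# Placeholder example: gas cavity (rho1 small) in a water-like fluid, mode n = 2.
print(criterion_satisfied(g1=50.0, T=0.072, n=2, R=0.01, rho1=1.2, rho2=1000.0))
```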
Abstract:
A molecular theory of collective orientational relaxation of dipolar molecules in a dense liquid is presented. Our work is based on a generalized, nonlinear Smoluchowski equation (GSE) that includes the effects of intermolecular interactions through a mean-field force term. The effects of translational motion of the liquid molecules on the orientational relaxation are also included self-consistently in the GSE. Analytic expressions for the wave-vector-dependent orientational correlation functions are obtained for a one-component, pure liquid and also for binary mixtures. We find that for a dipolar liquid of spherical molecules, the correlation function ϕ(k,t) for l=1, where l is the rank of the spherical harmonics, is biexponential. At zero wave vector, one time constant becomes identical with the dielectric relaxation time of the polar liquid. The second time constant is the longitudinal relaxation time, but the contribution of this second component is small. We find that polar forces do not affect the higher-order correlation functions (l>1) of spherical dipolar molecules in a linearized theory. The expression for ϕ(k,t) for a binary liquid is a sum of four exponential terms. We also find that the wave-vector-dependent relaxation times depend strongly on the microscopic structure of the dense liquid. At intermediate wave vectors, translational diffusion greatly accelerates the rate of orientational relaxation. The present study indicates that one must pay proper attention to the microscopic structure of the liquid while treating translational effects. An analysis of the nonlinear terms of the GSE is also presented. An interesting coupling between the number density fluctuation and the orientational fluctuation is uncovered.
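A toy sketch of the biexponential l=1 correlation described above in the k→0 limit: a dominant component with the dielectric relaxation time plus a small longitudinal component. The continuum estimate τ_L ≈ (ε∞/εs) τ_D, the amplitude split, and the water-like numbers are placeholders of mine, not results of this work.

```python
import numpy as np

def phi_l1(t, tau_D, tau_L, a_long=0.05):
    """Biexponential l=1 orientational correlation at k -> 0: a dominant
    dielectric-relaxation term plus a small longitudinal term (illustrative)."""
    return (1.0 - a_long) * np.exp(-t / tau_D) + a_long * np.exp(-t / tau_L)

# Water-like placeholder parameters: tau_D ~ 8.3 ps, eps_s ~ 78, eps_inf ~ 4.9.
tau_D, eps_s, eps_inf = 8.3e-12, 78.0, 4.9
tau_L = (eps_inf / eps_s) * tau_D                    # continuum estimate of the longitudinal time
t = np.linspace(0.0, 50e-12, 500)                    # 0-50 ps, arbitrary window
print(phi_l1(t[:3], tau_D, tau_L))
```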