982 results for Generalized Lommel-Wright Functions


Relevance:

30.00%

Publisher:

Abstract:

We report the first 3D maps of genetic effects on brain fiber complexity. We analyzed HARDI brain imaging data from 90 young adult twins using an information-theoretic measure, the Jensen-Shannon divergence (JSD), to gauge the regional complexity of the white matter fiber orientation distribution functions (ODFs). HARDI data were fluidly registered using Karcher means and ODF square-roots for interpolation; each subject's JSD map was computed from the spatial coherence of the ODFs in each voxel's neighborhood. We evaluated the genetic influences on generalized fiber anisotropy (GFA) and complexity (JSD) using structural equation models (SEM). At each voxel, genetic and environmental components of data variation were estimated, and their goodness of fit was tested by permutation. Color-coded maps revealed that the optimal models varied for different brain regions. Fiber complexity was predominantly under genetic control, and was higher in more highly anisotropic regions. These methods show promise for discovering factors affecting fiber connectivity in the brain.
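The JSD used here compares probability distributions: for two discrete distributions p and q it is the average KL divergence of each to their midpoint m = (p + q)/2. Below is a minimal sketch of that definition; the study applies it to ODF samples in a voxel neighborhood, so the function name and the discrete setting are illustrative only.

```python
import numpy as np

def jensen_shannon_divergence(p, q):
    """JSD(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m), with m = (p + q)/2 (natural log)."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) is taken as 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

The divergence is 0 for identical distributions and attains its maximum, log 2, for distributions with disjoint support.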

Relevance:

30.00%

Publisher:

Abstract:

We propose a new information-theoretic metric, the symmetric Kullback-Leibler divergence (sKL-divergence), to measure the difference between two water diffusivity profiles in high angular resolution diffusion imaging (HARDI). Water diffusivity profiles are modeled as probability density functions on the unit sphere, and the sKL-divergence is computed from a spherical harmonic series, which greatly reduces computational complexity. Adjusting the orientation of the diffusivity functions is essential when the image is warped, so we propose a fast algorithm to determine the principal direction of diffusivity functions using principal component analysis (PCA). We compare the sKL-divergence with other inner-product-based cost functions on synthetic samples and real HARDI data, and show that the sKL-divergence is highly sensitive in detecting small differences between two diffusivity profiles; it therefore shows promise for applications in the nonlinear registration and multisubject statistical analysis of HARDI data.
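For discrete densities the sKL-divergence is simply KL(p||q) + KL(q||p), which collapses into a single sum. The paper evaluates it from spherical harmonic coefficients instead, so the sampled-density version below is only a sketch of the underlying quantity, with illustrative names.

```python
import numpy as np

def symmetric_kl(p, q):
    """sKL(p, q) = KL(p||q) + KL(q||p) = sum (p - q) * log(p / q), for p, q > 0."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return float(np.sum((p - q) * np.log(p / q)))
```

Unlike the plain KL divergence, this form is symmetric in its arguments, which is what makes it usable as a registration cost.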

Relevance:

30.00%

Publisher:

Abstract:

A key question in diffusion imaging is how many diffusion-weighted images suffice to provide adequate signal-to-noise ratio (SNR) for studies of fiber integrity. Motion, physiological effects, and scan duration all affect the achievable SNR in real brain images, making theoretical studies and simulations only partially useful. We therefore scanned 50 healthy adults with 105-gradient high-angular resolution diffusion imaging (HARDI) at 4T. From gradient image subsets of varying size (6 ≤ N ≤ 94) that optimized a spherical angular distribution energy, we created SNR plots (versus gradient numbers) for seven common diffusion anisotropy indices: fractional and relative anisotropy (FA, RA), mean diffusivity (MD), volume ratio (VR), geodesic anisotropy (GA), its hyperbolic tangent (tGA), and generalized fractional anisotropy (GFA). SNR, defined in a region of interest in the corpus callosum, was near-maximal with 58, 66, and 62 gradients for MD, FA, and RA, respectively, and with about 55 gradients for GA and tGA. For VR and GFA, SNR increased rapidly with more gradients. SNR was optimized when the ratio of diffusion-sensitized to non-sensitized images was 9.13 for GA and tGA, 10.57 for FA, 9.17 for RA, and 26 for MD and VR. In orientation density functions modeling the HARDI signal as a continuous mixture of tensors, the diffusion profile reconstruction accuracy rose rapidly with additional gradients. These plots may help in making trade-off decisions when designing diffusion imaging protocols.
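Several of the indices compared above have simple closed forms in the eigenvalues of the diffusion tensor. A sketch of three of them follows; GA, tGA and GFA, which need the tensor logarithm or the ODF, are omitted.

```python
import numpy as np

def mean_diffusivity(ev):
    """MD: average of the three tensor eigenvalues."""
    return float(np.mean(ev))

def fractional_anisotropy(ev):
    """FA: normalized standard deviation of the eigenvalues, in [0, 1]."""
    ev = np.asarray(ev, float)
    md = ev.mean()
    return float(np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2)))

def volume_ratio(ev):
    """VR: diffusion-ellipsoid volume relative to a sphere with the same MD."""
    ev = np.asarray(ev, float)
    return float(np.prod(ev) / ev.mean() ** 3)
```

An isotropic tensor (equal eigenvalues) gives FA = 0 and VR = 1; a purely linear one (a single nonzero eigenvalue) gives FA = 1.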

Relevance:

30.00%

Publisher:

Abstract:

Classifying the stages of a progressive disease such as Alzheimer's disease is a key issue for its prevention and treatment. In this study, we derived structural brain networks from diffusion-weighted MRI using whole-brain tractography, since there is growing interest in relating connectivity measures to clinical, cognitive, and genetic data. Relatively little work has used machine learning to make inferences about variations in brain networks during the progression of Alzheimer's disease. Here we developed a framework that uses generalized low rank approximations of matrices (GLRAM) and a modified linear discriminant analysis for unsupervised feature learning and classification of connectivity matrices. We apply the methods to brain networks derived from DWI scans of 41 people with Alzheimer's disease, 73 people with early mild cognitive impairment (EMCI), 38 people with late mild cognitive impairment (LMCI), 47 elderly healthy controls, and 221 young healthy controls. Our results show that this new framework can significantly improve classification accuracy when combining multiple datasets; this suggests the value of using data beyond the classification task at hand to model variations in brain connectivity.
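GLRAM represents each connectivity matrix A_i by a small core matrix L^T A_i R under shared left/right projections L and R. The standard alternating solution (after Ye, 2005) is sketched below as an assumption about the feature-learning stage; the classification stage is omitted.

```python
import numpy as np

def glram(As, r1, r2, iters=20):
    """Alternately set L and R to the top eigenvectors of
    sum_i A_i R R^T A_i^T and sum_i A_i^T L L^T A_i, which
    (locally) maximizes sum_i ||L^T A_i R||_F^2."""
    n1, n2 = As[0].shape
    R = np.eye(n2)[:, :r2]  # initial guess: leading coordinate axes
    for _ in range(iters):
        ML = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(ML)[1][:, -r1:]   # top-r1 eigenvectors
        MR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(MR)[1][:, -r2:]   # top-r2 eigenvectors
    features = [L.T @ A @ R for A in As]     # compressed r1 x r2 descriptors
    return L, R, features
```

The compressed r1 x r2 feature matrices would then feed the discriminant-analysis stage.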

Relevance:

30.00%

Publisher:

Abstract:

We present a generalization of the finite volume evolution Galerkin scheme [M. Lukacova-Medvid'ova, J. Saibertova, G. Warnecke, Finite volume evolution Galerkin methods for nonlinear hyperbolic systems, J. Comp. Phys. 183 (2002) 533-562; M. Lukacova-Medvid'ova, K.W. Morton, G. Warnecke, Finite volume evolution Galerkin (FVEG) methods for hyperbolic problems, SIAM J. Sci. Comput. 26 (2004) 1-30] for hyperbolic systems with spatially varying flux functions. Our goal is to develop a genuinely multi-dimensional numerical scheme for wave propagation problems in heterogeneous media. We illustrate our methodology for acoustic waves in a heterogeneous medium, but the results can be generalized to more complex systems. The finite volume evolution Galerkin (FVEG) method is a predictor-corrector method combining a finite volume corrector step with an evolutionary predictor step. In order to evolve fluxes along the cell interfaces, we use a multi-dimensional approximate evolution operator. The latter is constructed using the theory of bicharacteristics under the assumption of spatially dependent wave speeds. To approximate the heterogeneous medium, a staggered grid approach is used. Several numerical experiments for wave propagation with continuous as well as discontinuous wave speeds confirm the robustness and reliability of the new FVEG scheme.

Relevance:

30.00%

Publisher:

Abstract:

We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice.
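The Gaussian pseudolikelihood criterion scores a working covariance model by plugging the GEE residuals into a multivariate normal log-likelihood, cluster by cluster. A minimal sketch of that criterion follows; the geodesic-distance criterion and the GEE fitting itself are not shown.

```python
import numpy as np

def gaussian_pseudolikelihood(residuals, covariances):
    """Sum over clusters of -0.5 * (log det V_i + r_i^T V_i^{-1} r_i);
    larger values favor the working covariance model."""
    total = 0.0
    for r, V in zip(residuals, covariances):
        r = np.asarray(r, float)
        _, logdet = np.linalg.slogdet(V)
        total += -0.5 * (logdet + r @ np.linalg.solve(V, r))
    return total
```

For perfectly positively correlated residuals, an exchangeable working covariance scores higher than working independence, as one would hope.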

Relevance:

30.00%

Publisher:

Abstract:

Robust methods are useful for making reliable statistical inferences when there are small deviations from the model assumptions. The widely used method of generalized estimating equations can be "robustified" by replacing the standardized residuals with M-residuals. While the Pearson residuals are unbiased from zero, parameter estimators from the robust approach are asymptotically biased when the error distributions are not symmetric. We propose a distribution-free method for correcting this bias. Our extensive numerical studies show that the proposed method can reduce the bias substantially. Examples are given for illustration.
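As a concrete illustration of the bias being corrected: Huberized (M-) residuals clip large values, so under a skewed error distribution their mean drifts away from zero. Centering them by their empirical mean is a distribution-free fix in the spirit of the proposal; this is a sketch of the idea, not the paper's exact estimator.

```python
import numpy as np

def huber_psi(r, c=1.345):
    """Huber score function: identity inside [-c, c], clipped outside."""
    return np.clip(r, -c, c)

def centered_m_residuals(residuals, c=1.345):
    """M-residuals recentered by their empirical mean, so that skewed
    errors no longer shift the estimating equations."""
    psi = huber_psi(np.asarray(residuals, float), c)
    return psi - psi.mean()
```

On right-skewed residuals the raw Huberized values have a nonzero mean, while the recentered ones average to zero by construction.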

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies the interest-rate policy of the ECB by estimating monetary policy rules using real-time data and central bank forecasts. The aim of the estimations is to characterize a decade of common monetary policy and to examine how different models perform at this task. The estimated rules include contemporary Taylor rules, forward-looking Taylor rules, nonlinear rules and forecast-based rules. The nonlinear models allow for the possibility of zone-like preferences and an asymmetric response to key variables. The models therefore encompass the most popular sub-group of simple models used for policy analysis as well as the more unusual nonlinear approach. In addition to the empirical work, this thesis also contains a more general discussion of monetary policy rules, mostly from a New Keynesian perspective. This discussion includes an overview of some notable related studies, optimal policy, policy gradualism and several other related subjects. The regression estimations are performed with either least squares or the generalized method of moments, depending on the requirements of the estimation. The estimations use data from both the Euro Area Real-Time Database and the central bank forecasts published in ECB Monthly Bulletins. These data sources represent some of the best data available for this kind of analysis. The main results of this thesis are that forward-looking behavior appears highly prevalent, but that standard forward-looking Taylor rules offer only ambivalent results with regard to inflation. Nonlinear models are shown to work, but do not have a strong rationale over a simpler linear formulation. However, the forecasts appear to be highly useful in characterizing policy and may offer the most accurate depiction of a predominantly forward-looking central bank. In particular, the inflation response appears much stronger, while the output response becomes highly forward-looking as well.
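The simplest of the estimated families, a contemporaneous Taylor rule i_t = a + b*pi_t + c*y_t, can be estimated by ordinary least squares. The sketch below uses hypothetical data; the thesis also uses GMM for the forward-looking and nonlinear variants, which is not shown.

```python
import numpy as np

def estimate_taylor_rule(rate, inflation, output_gap):
    """OLS fit of i_t = a + b*pi_t + c*y_t; returns (a, b, c)."""
    X = np.column_stack([np.ones_like(rate), inflation, output_gap])
    coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
    return coef

# Synthetic check: data generated with a=1, b=1.5, c=0.5 are recovered exactly.
pi = np.array([1.0, 2.0, 3.0, 4.0])
gap = np.array([0.0, 1.0, 0.0, 1.0])
rate = 1.0 + 1.5 * pi + 0.5 * gap
a, b, c = estimate_taylor_rule(rate, pi, gap)
```

An inflation coefficient b > 1 corresponds to the Taylor principle: the nominal rate moves more than one-for-one with inflation, so the real rate rises when inflation rises.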

Relevance:

30.00%

Publisher:

Abstract:

We present a method for measuring the local velocities and first-order variations in velocities in a time-varying image. The scheme is an extension of the generalized gradient model that encompasses the local variation of velocity within a local patch of the image. Motion within a patch is analyzed in parallel by 42 different spatiotemporal filters derived from 6 linearly independent spatiotemporal kernels. No constraints are imposed on the image structure, and there is no need for smoothness constraints on the velocity field. The aperture problem does not arise so long as there is some two-dimensional structure in the patch being analyzed. Among the advantages of the scheme is that there is no requirement to calculate second or higher derivatives of the image function. This makes the scheme robust in the presence of noise. The spatiotemporal kernels are of simple form, involving Gaussian functions, and are biologically plausible receptive fields. The validity of the scheme is demonstrated by application to both synthetic and real video image sequences and by direct comparison with another recently published scheme [Biol. Cybern. 63, 185 (1990)] for the measurement of complex optical flow.
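The gradient model underlying this scheme solves the constraint Ix*u + Iy*v + It = 0 in least squares over a patch; the paper extends it with 42 spatiotemporal filters and first-order velocity variation, which are not reproduced here. A sketch of the basic constraint:

```python
import numpy as np

def patch_velocity(Ix, Iy, It):
    """Least-squares (u, v) from the gradient constraint Ix*u + Iy*v = -It
    over all pixels of a patch. The aperture problem shows up as a
    rank-deficient system when the patch lacks two-dimensional structure."""
    A = np.column_stack([np.ravel(Ix), np.ravel(Iy)])
    b = -np.ravel(It)
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

With gradients along only one direction the normal matrix is singular and only the normal component of velocity is recoverable, which is exactly the aperture problem the abstract refers to.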


Relevance:

30.00%

Publisher:

Abstract:

The enthalpy method is primarily developed for studying phase change in a multicomponent material, characterized by a continuous liquid volume fraction (phi_l) vs temperature (T) relationship. Using the Galerkin finite element method we obtain solutions to the enthalpy formulation for phase change in 1D slabs of pure material, by assuming a superficial phase change region (linear phi_l vs T) around the discontinuity at the melting point. Errors between the computed and analytical solutions are evaluated for the fluxes at, and positions of, the freezing front, for different widths of the superficial phase change region and spatial discretizations with linear and quadratic basis functions. For Stefan numbers (St) varying between 0.1 and 10 the method is relatively insensitive to spatial discretization and to the width of the superficial phase change region. Greater sensitivity is observed at St = 0.01, where the variation in the enthalpy is large. In general the width of the superficial phase change region should span at least 2-3 Gauss quadrature points for the enthalpy to be computed accurately. The method is applied to study conventional melting of slabs of frozen brine and ice. Regardless of the form of the phi_l vs T relationship, the thawing times were found to scale as the square of the slab thickness. The ability of the method to efficiently capture multiple thawing fronts, which may originate at any spatial location within the sample, is illustrated with the microwave thawing of slabs and 2D cylinders. (C) 2002 Elsevier Science Ltd. All rights reserved.
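The essence of the enthalpy method is that temperature is recovered from enthalpy through a piecewise relation (solid branch, latent-heat plateau, liquid branch) and only a conduction update needs solving. Below is an explicit finite-difference sketch for a pure material with melting point T = 0 and unit heat capacities; the paper itself uses a Galerkin FEM with a smeared, linear phi_l vs T region, which is not reproduced.

```python
import numpy as np

def enthalpy_step(H, dt, dx, latent=1.0, k=1.0):
    """One explicit conduction step of the 1D enthalpy method.
    T(H): T = H in the solid (H < 0), T = 0 across the latent-heat
    plateau (0 <= H <= latent), T = H - latent in the liquid."""
    T = np.where(H < 0, H, np.where(H > latent, H - latent, 0.0))
    Hn = H.copy()
    Hn[1:-1] += dt * k / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return Hn  # boundary nodes held fixed (Dirichlet data via H)
```

Because the update is written in H, a node sitting on the plateau absorbs latent heat at constant temperature, so the thawing front is captured without explicitly tracking it.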

Relevance:

30.00%

Publisher:

Abstract:

We present a spin model, namely, the Kitaev model augmented by a loop term and perturbed by an Ising Hamiltonian, and show that it exhibits both confinement-deconfinement transitions from spin liquid to antiferromagnetic/spin-chain/ferromagnetic phases and topological quantum phase transitions between gapped and gapless spin-liquid phases. We develop a fermionic resonating-valence-bond (RVB) mean-field theory to chart out the phase diagram of the model and estimate the stability of its spin-liquid phases, which might be relevant for attempts to realize the model in optical lattices and other spin systems. We present an analytical mean-field theory to study the confinement-deconfinement transition for large coefficient of the loop term and show that this transition is first order within such mean-field analysis in this limit. We also conjecture that in some other regimes, the confinement-deconfinement transitions in the model, predicted to be first order within the mean-field theory, may become second order via a defect condensation mechanism. Finally, we present a general classification of the perturbations to the Kitaev model on the basis of their effect on its spin correlation functions and derive a necessary and sufficient condition, within the regime of validity of perturbation theory, for the spin correlators to exhibit a long-ranged power-law behavior in the presence of such perturbations. Our results reproduce those of Tikhonov et al. [Phys. Rev. Lett. 106, 067203 (2011)] as a special case.

Relevance:

30.00%

Publisher:

Abstract:

In this article, we obtain explicit solutions of a linear PDE subject to a class of radial square-integrable functions with a monotonically increasing weight function |x|^(n-1) e^(beta|x|^2/2), beta >= 0, x in R^n. This linear PDE is obtained from a system of forced Burgers equations via the Cole-Hopf transformation. For any spatial dimension n > 1, the solution is expressed in terms of a family of weighted generalized Laguerre polynomials. We also discuss the large-time behaviour of the solution of the system of forced Burgers equations.
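The generalized Laguerre polynomials that appear in such solutions can be evaluated stably by their standard three-term recurrence; a sketch follows (the weight function and the PDE solution itself are not reproduced).

```python
import numpy as np

def generalized_laguerre(k, alpha, x):
    """Evaluate L_k^(alpha)(x) via the recurrence
    (j+1) L_{j+1} = (2j + 1 + alpha - x) L_j - (j + alpha) L_{j-1},
    starting from L_0 = 1 and L_1 = 1 + alpha - x."""
    x = np.asarray(x, float)
    prev, cur = np.ones_like(x), 1.0 + alpha - x
    if k == 0:
        return prev
    for j in range(1, k):
        prev, cur = cur, ((2*j + 1 + alpha - x) * cur - (j + alpha) * prev) / (j + 1)
    return cur
```

For alpha = 0 this reduces to the ordinary Laguerre polynomials, e.g. L_2(x) = x^2/2 - 2x + 1.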

Relevance:

30.00%

Publisher:

Abstract:

Among the human factors that influence safe driving, visual skills of the driver can be considered fundamental. This study mainly focuses on investigating the effect of visual functions of drivers in India on their road crash involvement. Experiments were conducted to assess vision functions of Indian licensed drivers belonging to various organizations, age groups and driving experience. The test results were further related to the crash involvement histories of drivers through statistical tools. A generalized linear model was developed to ascertain the influence of these traits on propensity of crash involvement. Among the sampled drivers, colour vision, vertical field of vision, depth perception, contrast sensitivity, acuity and phoria were found to influence their crash involvement rates. In India, there are no efficient standards and testing methods to assess the visual capabilities of drivers during their licensing process and this study highlights the need for the same.
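A generalized linear model of the kind described can be fit by iteratively reweighted least squares. The sketch below assumes a Poisson log-linear specification with hypothetical predictors; the study's actual link function and covariate coding are not stated in the abstract.

```python
import numpy as np

def poisson_glm(X, y, iters=25):
    """Fit E[y] = exp(X @ beta) by iteratively reweighted least squares
    (Newton scoring for the Poisson log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu   # working response
        XtW = X.T * mu                 # IRLS weights are mu for Poisson
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta
```

In this setting the columns of X would hold an intercept plus the vision measures (acuity, contrast sensitivity, field of vision, and so on), with crash counts as y; those variable choices are illustrative, not taken from the paper.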

Relevance:

30.00%

Publisher:

Abstract:

In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix, or more components at once, respectively. While the initial formulations involve nonconvex functions, and are therefore computationally intractable, we rewrite them into the form of an optimization program involving maximization of a convex function on a compact set. The dimension of the search space is decreased enormously if the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. It appears that our algorithm has the best convergence properties when either the objective function or the feasible set is strongly convex, which is the case with our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene expression test problems that our approach outperforms existing algorithms both in quality of the obtained solution and in computational speed. © 2010 Michel Journée, Yurii Nesterov, Peter Richtárik and Rodolphe Sepulchre.
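The flavor of the single-unit formulation can be conveyed by a thresholded power iteration on the data covariance: each step multiplies by S = A^T A, soft-thresholds the loadings (the effect of an l1 penalty), and renormalizes. This is a sketch in the spirit of the approach, not the authors' GPower algorithm.

```python
import numpy as np

def sparse_pc(A, gamma, iters=100):
    """Sparse leading loading vector of data matrix A via thresholded
    power iteration; gamma controls sparsity (gamma = 0 gives ordinary PCA)."""
    S = A.T @ A
    x = np.ones(S.shape[1]) / np.sqrt(S.shape[1])
    for _ in range(iters):
        y = S @ x
        y = np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)  # soft-threshold
        norm = np.linalg.norm(y)
        if norm == 0:
            break  # gamma too large: every loading thresholded away
        x = y / norm
    return x
```

On data with one dominant variable, a sufficiently large gamma drives all the other loadings exactly to zero, which is the qualitative behavior the sparse formulations are designed to produce.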