954 results for Convergence Analysis


Relevance:

30.00%

Publisher:

Abstract:

China is a large country characterized by remarkable growth and distinct regional diversity. Spatial disparity has long been a pressing issue, as China strives to follow a balanced growth path while still confronting unprecedented pressures and challenges. To better understand the level of inequality across the spatial distribution of Chinese provinces and municipalities, and to estimate the dynamic trajectory of sustainable development in China, I constructed the Composite Index of Regional Development (CIRD) with five sub-dimensions: the Macroeconomic Index (MEI), Science and Innovation Index (SCI), Environmental Sustainability Index (ESI), Human Capital Index (HCI) and Public Facilities Index (PFI), aiming to cover the main fields of regional socioeconomic development. Ranking reports on the five sub-dimensions and on the aggregated CIRD are provided to measure the developmental levels of 30 or 31 Chinese provinces and municipalities over the 13 years from 1998 to 2010, a span covering three "Five-Year Plans". Further empirical applications of the CIRD focus on clustering and convergence estimation, attempting to fill the gap in quantifying comprehensive regional socioeconomic development and in estimating the long-run convergence trajectory of regional sustainable development. Cluster analysis identifies four geographically oriented clusters on the map, and stochastic kernel density estimation reveals club convergence among the Chinese provinces and municipalities.
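
A minimal sketch of how a composite index like the CIRD can be aggregated from sub-indices. The abstract does not specify the normalization or weighting scheme, so this assumes min-max normalization and equal weights, and the province names and values are purely illustrative.

import numpy as np
import pandas as pd

def composite_index(df, dimensions):
    """Min-max normalize each sub-index to [0, 1], then average them.

    df         : DataFrame indexed by province, one column per sub-index.
    dimensions : list of the sub-index column names to aggregate.
    """
    normalized = (df[dimensions] - df[dimensions].min()) / (
        df[dimensions].max() - df[dimensions].min())
    return normalized.mean(axis=1)  # equal weights across the five pillars (assumption)

# Toy data for three provinces; the numbers are made up for illustration only.
data = pd.DataFrame(
    {"MEI": [0.8, 0.4, 0.6], "SCI": [0.7, 0.3, 0.5], "ESI": [0.5, 0.6, 0.4],
     "HCI": [0.9, 0.5, 0.6], "PFI": [0.8, 0.4, 0.7]},
    index=["Beijing", "Gansu", "Hubei"])
data["CIRD"] = composite_index(data, ["MEI", "SCI", "ESI", "HCI", "PFI"])
print(data.sort_values("CIRD", ascending=False))

Ranking the resulting column per year would then give the kind of ranking reports described above, with clustering and kernel density estimation applied to the yearly CIRD values.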

Relevance:

30.00%

Publisher:

Abstract:

Computing the weighted geometric mean of large sparse matrices is an operation that tends to become rapidly intractable as the size of the matrices involved grows. However, if we are not interested in the computation of the matrix function itself, but only in that of its product with a vector, the problem becomes simpler and there is a chance to solve it even when the matrix mean itself would be impossible to compute. Our interest is motivated by the fact that this calculation has practical applications related to the preconditioning of some operators arising in the domain decomposition of elliptic problems. In this thesis, we explore how such a computation can be performed efficiently. First, we exploit the properties of the weighted geometric mean and find several equivalent ways to express it through real powers of a matrix. Hence, we focus our attention on matrix powers and examine how well-known techniques can be adapted to the solution of the problem at hand. In particular, we consider two broad families of approaches for the computation of f(A) v, namely quadrature formulae and Krylov subspace methods, and generalize them to the pencil case f(A\B) v. Finally, we provide an extensive experimental evaluation of the proposed algorithms and try to assess how convergence speed and execution time are influenced by some characteristics of the input matrices. Our results suggest that a few such characteristics have a bearing on performance and that, although there is no best choice in general, knowing the conditioning and the sparsity of the arguments beforehand can considerably help in choosing the best strategy to tackle the problem.
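
A small dense baseline, as a hedged sketch only: it uses the identity A #_t B = A (A\B)^t for SPD matrices, which is one of the "real power" reformulations the abstract alludes to, and applies it to a vector. This explicit construction is exactly what the thesis avoids for large sparse pencils (there, quadrature or Krylov methods approximate f(A\B) v directly); the matrices below are random and purely illustrative.

import numpy as np
from scipy.linalg import fractional_matrix_power

def geometric_mean_action(A, B, t, v):
    """Dense baseline for (A #_t B) v via A (A^{-1} B)^t v.

    Only practical for small SPD matrices; for large sparse pencils one would
    approximate f(A\\B) v with f(x) = x^t by quadrature or Krylov methods
    instead of forming the matrix power explicitly.
    """
    X = np.linalg.solve(A, B)                   # A \ B
    return A @ (fractional_matrix_power(X, t) @ v)

# Tiny SPD example (matrices are illustrative, not taken from the thesis).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)
N = rng.standard_normal((5, 5)); B = N @ N.T + 5 * np.eye(5)
v = rng.standard_normal(5)
print(geometric_mean_action(A, B, 0.5, v))      # t = 1/2: the ordinary geometric mean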

Relevance:

30.00%

Publisher:

Abstract:

Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Due to discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we elaborate a meaningful approximation of the structure and use it to derive a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent and fast. Further, the singular value analysis of the structure and its approximation allows us to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
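
A hedged sketch of the two ingredients named above on a toy problem: a singular value analysis of the Jacobian to flag weakly determined parameters, followed by a Levenberg-Marquardt fit. The forward model here is an arbitrary stand-in (the real map in the thesis goes from potential parameters to radial distribution functions), and the regularized LM variant developed in the work is replaced by SciPy's standard MINPACK implementation.

import numpy as np
from scipy.optimize import least_squares

x = np.linspace(0.1, 3.0, 80)
def model(p):
    eps, sigma, weak = p                       # 'weak' barely affects the output
    return np.exp(-eps * (sigma / x) ** 2) + 1e-3 * weak * x

p_true = np.array([1.0, 0.9, 0.5])
data = model(p_true) + 1e-3 * np.random.default_rng(1).standard_normal(x.size)
residual = lambda p: model(p) - data

# Singular value analysis of the Jacobian around an initial guess: directions
# with tiny singular values correspond to weakly determined parameter combinations.
p0 = np.array([0.8, 1.0, 0.0])
J = np.empty((x.size, p0.size)); h = 1e-6
for k in range(p0.size):
    dp = np.zeros_like(p0); dp[k] = h
    J[:, k] = (residual(p0 + dp) - residual(p0 - dp)) / (2 * h)
U, s, Vt = np.linalg.svd(J, full_matrices=False)
print("singular values:", s)                   # the smallest one is near zero here
print("weakest direction:", Vt[-1])            # dominated by the 'weak' parameter

# Plain Levenberg-Marquardt fit (not the regularized variant from the thesis).
fit = least_squares(residual, p0, method="lm")
print("estimated parameters:", fit.x)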

Relevance:

30.00%

Publisher:

Abstract:

The Default Mode Network (DMN) is a higher-order functional neural network that displays activation during passive rest and deactivation during many types of cognitive tasks. Accordingly, the DMN is viewed as the neural correlate of internally generated, self-referential cognition. This hypothesis implies that the DMN involves cognitive processes such as declarative memory. The present study thus examines the spatial and functional convergence of the DMN and the semantic memory system. Using an active block-design functional Magnetic Resonance Imaging (fMRI) paradigm and Independent Component Analysis (ICA), we trace the DMN and the fMRI signal changes evoked by semantic, phonological and perceptual decision tasks on visually presented words. Our findings show less deactivation during the semantic task than during the two non-semantic tasks for the entire DMN unit and within left-hemispheric DMN regions, i.e., the dorsal medial prefrontal cortex, the anterior cingulate cortex, the retrosplenial cortex, the angular gyrus, the middle temporal gyrus and the anterior temporal region, as well as the right cerebellum. These results demonstrate that well-known semantic regions are spatially and functionally involved in the DMN. The present study further supports the hypothesis of the DMN as an internal mentation system that involves declarative memory functions.
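
For readers unfamiliar with ICA as used above, the following is a toy sketch only: it mixes a few synthetic latent time courses into many "voxel" signals and lets FastICA separate them again. The actual study applies ICA to real fMRI data to isolate the DMN component; none of the signals, sizes or parameters below come from the paper.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 300, 600)                       # 600 synthetic scans
sources = np.column_stack([
    np.sin(2 * np.pi * 0.01 * t),                  # slow "network" fluctuation
    np.sign(np.sin(2 * np.pi * 0.03 * t)),         # task-locked block signal
    rng.standard_normal(t.size),                   # noise component
])
mixing = rng.standard_normal((3, 50))              # 50 synthetic voxels
voxels = sources @ mixing + 0.2 * rng.standard_normal((t.size, 50))

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(voxels)             # recovered component time courses
spatial_maps = ica.mixing_                         # voxel weights of each component
print(components.shape, spatial_maps.shape)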

Relevance:

30.00%

Publisher:

Abstract:

The goal of this paper is to establish exponential convergence of $hp$-version interior penalty (IP) discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems with homogeneous Dirichlet boundary conditions and piecewise analytic data in three-dimensional polyhedral domains. More precisely, we shall analyze the convergence of the $hp$-IP dG methods considered in [D. Schötzau, C. Schwab, T. P. Wihler, SIAM J. Numer. Anal., 51 (2013), pp. 1610--1633] based on axiparallel $\sigma$-geometric anisotropic meshes and $\bm{s}$-linear anisotropic polynomial degree distributions.
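
As a hedged illustration of what "exponential convergence" means in this setting (the precise norm, mesh and polynomial-degree assumptions and constants are those of the paper and the cited reference, not reproduced here), bounds in this line of work for three-dimensional polyhedral domains typically take the form

\[
  \| u - u_{hp} \|_{\mathrm{dG}} \;\le\; C \exp\!\bigl( -b \, N^{1/5} \bigr),
\]

where N is the number of degrees of freedom of the hp-discretization and C, b > 0 are constants independent of N.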

Relevance:

30.00%

Publisher:

Abstract:

In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Applying traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates. However, such estimators are often inefficient compared to methods which incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).1 Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher-level (group) as well as lower-level (individual) variation. Designing a study in which the unit of observation is nested within higher-level groupings requires the determination of sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design, and their effect on the parameter estimates, when the outcome variable of interest follows a Poisson distribution.

Results of the study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher-level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher-level error variance is large, the estimates may be significantly biased; in this case, bias-correction techniques such as bootstrapping should be considered as an alternative procedure. For larger sample sizes, structures with 20 or more units sampled at the levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; this criterion is no longer important when sample sizes are large.

1. Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data." Biometrics, 49, 989–996.
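
A minimal sketch of the data structure such a design produces. All sizes, effects and variance values below are hypothetical, chosen only to illustrate a 3-level Poisson outcome with random intercepts at levels 2 and 3; the estimation itself (PQL/MQL) is not implemented here.

import numpy as np

rng = np.random.default_rng(42)

# level 3 = sites (treatment randomized at this cluster level),
# level 2 = subjects within sites, level 1 = repeated Poisson measurements.
n_sites, n_subjects, n_measures = 20, 10, 5
sigma2_site, sigma2_subject = 0.10, 0.10        # higher-level random-intercept variances
beta0, beta1 = 0.5, 0.3                         # fixed intercept and treatment effect

rows = []
for s in range(n_sites):
    treated = s % 2                             # half the sites receive treatment
    u_site = rng.normal(0.0, np.sqrt(sigma2_site))
    for j in range(n_subjects):
        u_subj = rng.normal(0.0, np.sqrt(sigma2_subject))
        for t in range(n_measures):
            mu = np.exp(beta0 + beta1 * treated + u_site + u_subj)   # log link
            rows.append((s, j, t, treated, rng.poisson(mu)))

counts = np.array([r[4] for r in rows])
print("observations:", len(rows), " mean count:", round(float(counts.mean()), 2))
# A multilevel Poisson model (e.g., PQL/MQL in MLwiN, or glmer in R's lme4) fit to
# data of this shape would then be used to recover beta0, beta1 and the two variances.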

Relevance:

30.00%

Publisher:

Abstract:

Electrophysiological experiments were performed on 96 male New Zealand white rabbits anesthetized with urethane. Glass electrodes filled with 2M NaCl were used for microstimulation of three fiber pathways projecting from "limbic" centers to the ventromedial nucleus of the hypothalamus (VMH). Unitary and field potential recordings were made in the VMH after stimulation. Stimulation of the lateral portion of the fimbria, which carries fibers from the ventral subiculum of the hippocampal formation, evokes predominantly an inhibition of neurons medially in the VMH and excitation of neurons located laterally. Stimulation of the dorsal portion of the stria terminalis, which carries fibers from the cortical nucleus of the amygdala, also produces predominantly an inhibition of cells medially and excitation laterally. Stimulation of the ventral component of the stria terminalis, which carries fibers from the medial nucleus of the amygdala, evokes excitation of cells medially, with little or no response seen laterally. Cells recorded medially in the VMH received convergent inputs from each of the three fiber systems: inhibition from fimbria and dorsal stria stimulation, and excitation from ventral stria stimulation. The excitatory unitary responses recorded medially to ventral stria stimulation and laterally to fimbria and dorsal stria stimulation were subjected to a series of threshold stimulus intensities. From these tests it was determined that each of these three projections terminates monosynaptically on VMH neurons. The evidence for convergence of projections from the amygdala and the hippocampal formation upon single VMH neurons suggests that this area of the brain is important for the integration of information from these two limbic centers. The VMH has been implicated in a number of behavioral states: eating, reproduction, defense and aggression; it has further been linked to control of the anterior pituitary. These data provide a functional circuit through which the amygdaloid complex and the hippocampal formation can channel information from higher cortical centers into a hypothalamic area capable of coordinating behavioral and hormonal responses.

Relevance:

30.00%

Publisher:

Abstract:

The vestibular system contributes to the control of posture and eye movements and is also involved in various cognitive functions including spatial navigation and memory. These functions are subserved by projections to a vestibular cortex, whose exact location in the human brain is still a matter of debate (Lopez and Blanke, 2011). The vestibular cortex can be defined as the network of all cortical areas receiving inputs from the vestibular system, including areas where vestibular signals influence the processing of other sensory (e.g. somatosensory and visual) and motor signals. Previous neuroimaging studies used caloric vestibular stimulation (CVS), galvanic vestibular stimulation (GVS), and auditory stimulation (clicks and short-tone bursts) to activate the vestibular receptors and localize the vestibular cortex. However, these three methods differ regarding the receptors stimulated (otoliths, semicircular canals) and the concurrent activation of the tactile, thermal, nociceptive and auditory systems. To evaluate the convergence between these methods and provide a statistical analysis of the localization of the human vestibular cortex, we performed an activation likelihood estimation (ALE) meta-analysis of neuroimaging studies using CVS, GVS, and auditory stimuli. We analyzed a total of 352 activation foci reported in 16 studies carried out in a total of 192 healthy participants. The results reveal that the main regions activated by CVS, GVS, or auditory stimuli were located in the Sylvian fissure, insula, retroinsular cortex, fronto-parietal operculum, superior temporal gyrus, and cingulate cortex. Conjunction analysis indicated that regions showing convergence between two stimulation methods were located in the median (short gyrus III) and posterior (long gyrus IV) insula, parietal operculum and retroinsular cortex (Ri). The only area of convergence between all three methods of stimulation was located in Ri. The data indicate that Ri, the parietal operculum and the posterior insula are vestibular regions where afferents from the otoliths and semicircular canals converge, and may thus be involved in the processing of signals that inform about body rotations, translations and tilts. Results from the meta-analysis are in agreement with electrophysiological recordings in monkeys showing main vestibular projections in the transitional zone between Ri, the insular granular field (Ig), and SII.

Relevance:

30.00%

Publisher:

Abstract:

Over 100 samples of recent surface sediments from the bottom of the Atlantic Ocean offshore NW Africa between 34° and 6° N have been analysed palynologically. The objective of this study was to reveal the relation between source areas, transport systems, and the resulting distribution patterns of pollen and spores in marine sediments off NW Africa, in order to lay a sound foundation for the interpretation of pollen records from marine cores in this area. The clear zonation of the NW-African vegetation (due to the distinct climatic gradient) is helpful in determining the main source areas, and the presence of some major wind belts facilitates the reconstruction of the average course of wind trajectories. The present circulation pattern is driven by the intertropical convergence zone (ITCZ), which shifts over the continent between c. 22° N (summer position) and c. 4° N (winter position) in the course of the year. Determination of the period of main pollen release, and of the average atmospheric circulation pattern effective at that time of the year, is of prime importance. The distribution patterns in recent marine sediments of pollen of a series of genera and families appear to record climatological/ecological variables, such as the trajectories of the NE trade winds, the January trade winds, and the African Easterly Jet (Saharan Air Layer), the northernmost and southernmost positions of the intertropical convergence zone, and the extent and latitudinal position of the NW-African vegetation belts. Pollen analysis of a series of dated deep-sea cores taken between c. 35° N and the equator off NW Africa enables the construction of paleo-distribution maps for time slices of the past, forming a register of paleoclimatological/paleoecological information.

Relevance:

30.00%

Publisher:

Abstract:

The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and the Takeda et al. [Prog. Theor. Phys. 92 (1994), 939] boundary integrals. The analysis has been carried out by studying the convergence of the first- and second-order differential operators as the smoothing length (that is, the characteristic length on which the SPH interpolation relies) decreases. These differential operators are of fundamental importance for the computation of the viscous drag and of the viscous/diffusive terms in the momentum and energy equations. It has been proved that, close to the boundaries, some of the mirroring techniques lead to intrinsic inaccuracies in the convergence of the differential operators. A consistent formulation has been derived starting from the Takeda et al. boundary integrals (see the above reference). This formulation allows implementing no-slip boundary conditions consistently in many practical applications, such as viscous flows and diffusion problems.
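
A hedged sketch of the kind of convergence study described above, reduced to the simplest possible setting: the standard SPH first-derivative operator with a 1D cubic spline kernel, evaluated on interior particles as the smoothing length is refined. It does not reproduce the boundary treatments (dummy/ghost particles, boundary integrals) that are the subject of the paper; the kernel choice and the h-to-spacing ratio are assumptions.

import numpy as np

def cubic_spline_grad(r, h):
    """Derivative dW/dr of the 1D cubic spline kernel (normalization 2/(3h))."""
    q = np.abs(r) / h
    dw = np.where(q < 1.0, -3.0 * q + 2.25 * q**2,
         np.where(q < 2.0, -0.75 * (2.0 - q)**2, 0.0))
    return (2.0 / (3.0 * h)) * dw / h * np.sign(r)

def sph_derivative(x, f, h):
    """Standard difference-form SPH gradient: sum_j (f_j - f_i) dW_ij V_j."""
    dx = x[1] - x[0]                            # uniform particle spacing = volume
    r = x[:, None] - x[None, :]                 # pairwise separations x_i - x_j
    return ((f[None, :] - f[:, None]) * cubic_spline_grad(r, h)).sum(axis=1) * dx

# Convergence of the operator on a smooth field, away from the domain ends.
for n in (50, 100, 200, 400):
    x = np.linspace(0.0, 2.0 * np.pi, n)
    h = 1.3 * (x[1] - x[0])                     # smoothing length tied to spacing (assumption)
    approx = sph_derivative(x, np.sin(x), h)
    interior = slice(n // 4, 3 * n // 4)        # ignore the untreated boundaries
    err = np.max(np.abs(approx[interior] - np.cos(x[interior])))
    print(f"n={n:4d}  h={h:.4f}  max interior error={err:.3e}")

The printed interior errors shrink as h decreases; repeating the same test on particles near a wall, with each mirroring technique in turn, is the essence of the analysis carried out in the paper.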

Relevance:

30.00%

Publisher:

Abstract:

In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, leading to a reduction in the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arc-length-type continuation methods [Carrera, 1994], which make it possible to capture the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need to use small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
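
A minimal sketch of the IMPL-EX idea on a 1D scalar damage model: the internal variable is extrapolated explicitly from its two previous implicitly computed values, so the damage (and hence the secant-like stiffness) is frozen within each step. The exponential softening law, the material parameters and the strain history below are illustrative assumptions, not values from the paper.

import numpy as np

E, r0, Hs = 200.0, 0.01, 50.0                   # modulus, damage threshold, softening (illustrative)

def damage(r):
    """Exponential softening damage law d(r), with d = 0 for r <= r0."""
    if r <= r0:
        return 0.0
    return 1.0 - (r0 / r) * np.exp(-Hs * (r - r0))

# Monotonically increasing strain history, equal pseudo-time steps.
strain = np.linspace(0.0, 0.05, 51)
r_impl = [r0, r0]                               # last two *implicit* values of the internal variable

for n in range(1, strain.size):
    eps = strain[n]
    # Implicit update of the internal variable: r_n = max(r_{n-1}, current strain measure).
    r_new = max(r_impl[-1], eps)
    # IMPL-EX: explicit extrapolation r~_n = r_{n-1} + (dt_n/dt_{n-1}) (r_{n-1} - r_{n-2});
    # equal steps here, so the time-step ratio is 1.
    r_tilde = max(r0, r_impl[-1] + (r_impl[-1] - r_impl[-2]))
    sigma_implicit = (1.0 - damage(r_new)) * E * eps
    sigma_implex = (1.0 - damage(r_tilde)) * E * eps   # stiffness fixed within the step
    r_impl.append(r_new)
    if n % 10 == 0:
        print(f"eps={eps:.3f}  sigma_implicit={sigma_implicit:7.3f}  sigma_implex={sigma_implex:7.3f}")

The gap between the two stress columns is the extrapolation error that motivates the small-step requirement and the convergence study mentioned in the abstract.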

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the Expectation-Maximization (EM) algorithm applied to the operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state-space models. As is well known, the MLE enjoys some optimal properties from a statistical point of view, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two different strategies for choosing initial values for the EM algorithm when used for operational modal analysis: starting from the parameters estimated by the Stochastic Subspace Identification (SSI) method, and starting from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. Modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm starting from the solution given by SSI is very useful for identifying the vibration modes of a structure, discarding the spurious modes that appear in high-order models and discovering other hidden modes. Similar results are obtained using random starting values, although this strategy allows us to analyze the solutions from several starting points, which overcomes the dependence on the initial values used.
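
A hedged sketch of the EM mechanics for state-space estimation, reduced to a scalar toy model (the paper works with multivariate state-space models identified from vibration data, with SSI providing the informed starting point). The E-step is a Kalman filter plus RTS smoother, the M-step uses the closed-form updates; all parameter values below are made up.

import numpy as np

def em_scalar_state_space(y, a, q, r, n_iter=30, m0=0.0, P0=1.0):
    """EM for the scalar model x_t = a x_{t-1} + w_t, y_t = x_t + v_t.

    E-step: Kalman filter + RTS smoother. M-step: closed-form updates of a, q, r.
    The initial state (m0, P0) is held fixed for simplicity.
    """
    T = y.size
    for _ in range(n_iter):
        # ----- E-step: forward Kalman filter -----
        mf, Pf, mp, Pp = np.zeros(T), np.zeros(T), np.zeros(T), np.zeros(T)
        for t in range(T):
            mp[t] = a * mf[t - 1] if t else m0
            Pp[t] = a * a * Pf[t - 1] + q if t else P0
            K = Pp[t] / (Pp[t] + r)
            mf[t] = mp[t] + K * (y[t] - mp[t])
            Pf[t] = (1.0 - K) * Pp[t]
        # ----- E-step: RTS smoother (means, variances, lag-one covariances) -----
        ms, Ps, C = mf.copy(), Pf.copy(), np.zeros(T)
        for t in range(T - 2, -1, -1):
            J = a * Pf[t] / Pp[t + 1]
            ms[t] = mf[t] + J * (ms[t + 1] - mp[t + 1])
            Ps[t] = Pf[t] + J * J * (Ps[t + 1] - Pp[t + 1])
            C[t + 1] = J * Ps[t + 1]             # Cov(x_{t+1}, x_t | all data)
        Exx = Ps + ms**2                          # E[x_t^2]
        Exx1 = C[1:] + ms[1:] * ms[:-1]           # E[x_t x_{t-1}]
        # ----- M-step -----
        a = Exx1.sum() / Exx[:-1].sum()
        q = np.mean(Exx[1:] - 2.0 * a * Exx1 + a * a * Exx[:-1])
        r = np.mean(y**2 - 2.0 * y * ms + Exx)
    return a, q, r

# Simulated data (true values are arbitrary, for illustration only).
rng = np.random.default_rng(0)
a_true, q_true, r_true, T = 0.95, 0.1, 0.5, 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0, np.sqrt(q_true))
yobs = x + rng.normal(0, np.sqrt(r_true), T)

print("informed start:", em_scalar_state_space(yobs, 0.9, 0.2, 0.4))   # SSI-like guess
print("random start  :", em_scalar_state_space(yobs, 0.1, 1.0, 1.0))

Running EM from an informed guess versus a poor one illustrates the initialization issue discussed above: both can reach similar estimates, but the informed start typically does so in far fewer iterations.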

Relevance:

30.00%

Publisher:

Abstract:

We consider non-negative solutions of a chemotaxis system with a non-constant chemotaxis sensitivity function χ. This system appears as a limit case of a model for morphogenesis proposed by Bollenbach et al. (Phys. Rev. E 75, 2007). Under suitable boundary conditions, modeling the presence of a morphogen source at x = 0, we prove the existence of a global and bounded weak solution using an approximation by problems in which diffusion is introduced into the ordinary differential equation. Moreover, we prove the convergence of the solution to the unique steady state provided that certain model parameters are, respectively, small and large enough. Numerical simulations both illustrate these results and give rise to further conjectures on the solution behavior that go beyond the rigorously proved statements.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a fully automatic goal-oriented hp-adaptive finite element strategy for open-region electromagnetic problems (radiation and scattering) is presented. The methodology leads to exponential rates of convergence in terms of an upper bound on a user-prescribed quantity of interest. Thus, the adaptivity may be guided to provide an optimal error not for the field globally over the whole finite element domain, but for specific parameters of engineering interest. For instance, the error in the numerical computation of the S-parameters of an antenna array, of the field radiated by an antenna, or of the Radar Cross Section in given directions, can be minimized. The efficiency of the approach is illustrated with several numerical simulations on two-dimensional problem domains. Results include a comparison with the previously developed energy-norm based hp-adaptivity.
