948 results for Random matrix theory


Relevance: 30.00%

Abstract:

In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion (AIC) have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that leads to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.
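
The corrected conditional AIC and its R implementation are the paper's own contribution and are not reproduced here. Purely as a hypothetical illustration of the marginal-AIC comparison the abstract criticises, a Python sketch with simulated grouped data and statsmodels might look as follows (all variable names and parameter values are invented):

```python
# Minimal sketch: compare the marginal AIC of a linear mixed model with a
# random intercept against a fixed-effects-only model on simulated grouped
# data. Hypothetical example; the paper's corrected conditional AIC is NOT
# computed here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 30, 10
group = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 0.5, n_groups)                 # true random intercepts
x = rng.normal(size=n_groups * n_per)
y = 1.0 + 2.0 * x + u[group] + rng.normal(0.0, 1.0, group.size)
data = pd.DataFrame({"y": y, "x": x, "g": group})

# Mixed model (random intercept per group), fitted by ML so the marginal
# log-likelihoods are comparable across models.
mixed = smf.mixedlm("y ~ x", data, groups=data["g"]).fit(reml=False)
fixed = smf.ols("y ~ x", data).fit()               # no random effect

def marginal_aic(llf, n_params):
    return -2.0 * llf + 2.0 * n_params

print("mixed model marginal AIC:", marginal_aic(mixed.llf, 4))  # 2 fixed + var_u + var_e
print("fixed model marginal AIC:", marginal_aic(fixed.llf, 3))  # 2 fixed + var_e
```

In line with the abstract, selections based on this marginal AIC tend to be biased towards the smaller model without the random effect; the corrected conditional AIC derived in the paper addresses the conditional perspective instead.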

Relevance: 30.00%

Abstract:

Amorphous carbon has been investigated for a long time. Because its carbon atoms are randomly oriented, its density depends on the position of each atom. Knowing the density of amorphous carbon is important for modelling advanced carbon materials in the future. Two methods were used to create the initial structures of amorphous carbon. One is a random placement method, in which 100 carbon atoms are located randomly in a cubic lattice. The other is a liquid-quench method, in which a reactive force field (ReaxFF) is used to rapidly cool a system of 100 carbon atoms from the melting temperature. Density functional theory (DFT) was used to refine the position of each carbon atom and the dimensions of the boundaries so as to minimize the ground-state energy of the structure. The average densities of the amorphous carbon structures created by the random placement method and the liquid-quench method are 2.59 and 2.44 g/cm3, respectively. Both densities are in good agreement with previous work. In addition, the final structure of amorphous carbon generated by the liquid-quench method has a lower energy.
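
As a hedged illustration of the first (random placement) step only, the sketch below chooses a cubic box edge from an assumed target density and scatters 100 carbon atoms uniformly in it; the ReaxFF quench and the DFT relaxation are not attempted, and the 2.5 g/cm3 starting density is an assumption rather than a value taken from the abstract.

```python
# Minimal sketch of the "random placement" initialisation: 100 carbon atoms
# placed uniformly in a cubic box whose edge length follows from an assumed
# target density. The ReaxFF liquid quench and the DFT relaxation described
# in the abstract are not reproduced here.
import numpy as np

N_ATOMS = 100
M_C = 12.011          # molar mass of carbon (g/mol)
N_A = 6.022e23        # Avogadro constant (1/mol)
rho_target = 2.5      # assumed starting density (g/cm^3)

# Box volume and edge length implied by the target density.
volume_cm3 = N_ATOMS * M_C / (N_A * rho_target)
edge_angstrom = volume_cm3 ** (1.0 / 3.0) * 1e8   # cm -> Angstrom

rng = np.random.default_rng(42)
positions = rng.uniform(0.0, edge_angstrom, size=(N_ATOMS, 3))

print(f"box edge: {edge_angstrom:.2f} A for {rho_target} g/cm^3")
print("first atom position (A):", positions[0])
```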

Relevance: 30.00%

Abstract:

Many methodologies dealing with the prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data in order to produce a different shape representation that complies with standard methodologies, such as mass–spring networks or the finite element method (FEM). On the other hand, methodologies working directly in the image space normally do not take the mechanical behavior of tissues into account and tend to lack the physical foundations that drive soft tissue deformation. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, together with concepts from continuum mechanics and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF), which is implemented on the graphics processing unit (GPU).
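
The chapter's hierarchical MRF coupled to continuum mechanics and FEM on the GPU cannot be condensed into a few lines; purely as an illustration of the underlying MRF machinery, the following CPU sketch runs iterated conditional modes on a two-label Potts-style random field for a toy image (all parameters are invented).

```python
# Minimal sketch: iterated conditional modes (ICM) on a two-label Potts-style
# Markov random field for image labelling. This only illustrates generic MRF
# optimisation; it is not the chapter's hierarchical MRF/FEM coupling and it
# runs on the CPU, not the GPU.
import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros((40, 40), dtype=int)
truth[10:30, 10:30] = 1                              # square "object"
image = truth + rng.normal(0.0, 0.6, truth.shape)    # noisy observation

means = np.array([0.0, 1.0])                         # per-label intensity means
beta = 1.5                                           # smoothness weight
labels = (image > 0.5).astype(int)                   # initial guess

for _ in range(5):                                   # a few ICM sweeps
    for i in range(1, labels.shape[0] - 1):
        for j in range(1, labels.shape[1] - 1):
            neigh = [labels[i-1, j], labels[i+1, j], labels[i, j-1], labels[i, j+1]]
            costs = []
            for k in (0, 1):
                data_term = (image[i, j] - means[k]) ** 2
                smooth_term = beta * sum(k != n for n in neigh)
                costs.append(data_term + smooth_term)
            labels[i, j] = int(np.argmin(costs))

print("pixel agreement with ground truth:", (labels == truth).mean())
```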

Relevance: 30.00%

Abstract:

Stochastic models for three-dimensional particles have many applications in the applied sciences. Lévy-based particle models are a flexible approach to particle modelling: the structure of the random particles is given by a kernel smoothing of a Lévy basis. The models are easy to simulate, but statistical inference procedures have not yet received much attention in the literature. The kernel is not always identifiable, and we suggest one approach to remedy this problem. We propose a method to draw inference about the kernel from data often used in local stereology and study the performance of our approach in a simulation study.
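
A minimal two-dimensional sketch of the idea is given below, under assumed kernel and Lévy-basis choices (a compound Poisson basis with exponential jumps and a wrapped Gaussian-type kernel, none of which are taken from the paper).

```python
# Minimal 2D sketch of a Levy-based particle: the radial function R(theta) is a
# kernel smoothing of a compound Poisson basis (jump locations theta_i, jump
# sizes j_i) added to a mean radius. The paper's models are three-dimensional
# and the inference from local stereology data is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
mean_radius = 1.0
n_jumps = rng.poisson(15)                       # number of basis "atoms"
theta_i = rng.uniform(0.0, 2 * np.pi, n_jumps)  # their angular locations
jumps = rng.exponential(0.1, n_jumps)           # their sizes

def kernel(d, bandwidth=0.4):
    """Wrapped Gaussian-type smoothing kernel on angular distance d."""
    d = np.minimum(np.abs(d), 2 * np.pi - np.abs(d))   # wrap around the circle
    return np.exp(-0.5 * (d / bandwidth) ** 2)

theta = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
R = mean_radius + sum(j * kernel(theta - t) for j, t in zip(jumps, theta_i))

x, y = R * np.cos(theta), R * np.sin(theta)     # particle boundary in the plane
print("radius range:", R.min(), R.max())
print("x-extent of the particle:", x.min(), x.max())
```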

Relevance: 30.00%

Abstract:

Assessing the ecological requirements of species coexisting within a community is an essential requisite for developing sound conservation action. A particularly interesting question is what mechanisms govern the stable coexistence of cryptic species within a community, i.e. species that are almost impossible to distinguish. Resource partitioning theory predicts that cryptic species, like other sympatric taxa, will occupy distinct ecological niches. This prediction is widely inferred from eco-morphological studies. A new cryptic long-eared bat species, Plecotus macrobullaris, has recently been discovered in the complex of two other species present in the European Alps, with evidence even for a few mixed colonies. This discovery poses challenges to bat ecologists concerned with planning conservation measures beyond roost protection. We therefore tested whether foraging habitat segregation occurred among the three cryptic Plecotus bat species in Switzerland by radiotracking 24 breeding female bats (8 of each species). We compared habitat features at locations visited by a bat versus random locations within individual home ranges, applying mixed effects logistic regression. Distinct, species-specific habitat preferences were revealed. P. auritus foraged mostly within traditional orchards in the vicinity of roosts, with a marked preference for habitat heterogeneity. P. austriacus foraged up to 4.7 km from the roost, selecting mostly fruit tree plantations, hedges and tree lines. P. macrobullaris preferred patchy deciduous and mixed forests with high vertical heterogeneity in a grassland-dominated matrix. These species-specific habitat preferences should inform future conservation programmes; they highlight the possible need for distinct conservation measures for species that look very much alike.
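
As a simplified, hypothetical illustration of the use-versus-availability comparison, the sketch below fits an ordinary logistic regression to simulated visited and random locations; the study's analysis additionally includes a random effect per individual bat (mixed effects logistic regression), which is omitted here, and the covariates are invented.

```python
# Minimal sketch of a use-versus-availability habitat analysis: locations
# visited by an animal (1) are contrasted with random locations from its home
# range (0) via logistic regression. The paper used mixed effects logistic
# regression with bat identity as a random effect; that random effect is
# omitted in this simplified, simulated example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
used = np.repeat([1, 0], n // 2)                 # visited vs random points
# Hypothetical habitat covariates (e.g. orchard cover, habitat heterogeneity),
# made slightly richer at the visited points.
orchard = rng.uniform(0, 1, n) + 0.3 * used
heterogeneity = rng.normal(0, 1, n) + 0.5 * used

data = pd.DataFrame({"used": used, "orchard": orchard, "het": heterogeneity})
model = smf.logit("used ~ orchard + het", data).fit(disp=False)
print(model.params)      # positive coefficients indicate preferred habitat
```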

Relevance: 30.00%

Abstract:

We present three methods for the distortion-free enhancement of THz signals measured by electro-optic sampling in zinc-blende-type detector crystals, e.g., ZnTe or GaP. A technique commonly used in optically heterodyne-detected optical Kerr effect spectroscopy is introduced, which is based on two measurements at opposite optical biases near the zero-transmission point in a crossed-polarizer detection geometry. In contrast to other techniques for undistorted THz signal enhancement, it also works in a balanced detection scheme and does not require an elaborate procedure for the reconstruction of the true signal, as the two measured waveforms are simply subtracted to remove distortions. We study three different approaches for setting an optical bias using the Jones matrix formalism and also discuss them in the framework of optical heterodyne detection. We show that there is an optimal bias point in realistic situations where a small fraction of the probe light is scattered by optical components. The experimental demonstration will be given in the second part of this two-paper series [J. Opt. Soc. Am. B, doc. ID 204877 (2014, posted online)].
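
The essence of the opposite-bias subtraction can be reproduced with a few Jones matrices. In the sketch below (illustrative retardance values; a single retarder at 45 degrees stands in for the electro-optic crystal plus bias element), the difference of the two measurements equals sin(bias)*sin(signal), i.e. it is odd in the THz-induced retardance and free of the even-order distortion present in each individual measurement.

```python
# Minimal Jones-matrix sketch of the opposite-bias trick: the probe passes a
# retarder at 45 deg whose retardance is (optical bias b) + (THz-induced
# retardance G), then a crossed analyser. Subtracting the intensities measured
# at +b and -b leaves sin(b)*sin(G). Values are illustrative only.
import numpy as np

def retarder_45(delta):
    """Jones matrix of a linear retarder with retardance delta, fast axis at 45 deg."""
    c, s = np.cos(delta / 2), 1j * np.sin(delta / 2)
    return np.array([[c, s], [s, c]])

E_in = np.array([1.0, 0.0])                     # horizontal input polarisation
analyser = np.array([[0.0, 0.0], [0.0, 1.0]])   # crossed (vertical) analyser

def detected_intensity(delta):
    E_out = analyser @ retarder_45(delta) @ E_in
    return float(np.vdot(E_out, E_out).real)

bias = 0.2                                       # optical bias (rad)
for signal in (0.05, 0.10, 0.20):                # THz-induced retardance (rad)
    I_plus = detected_intensity(bias + signal)   # measurement at +bias
    I_minus = detected_intensity(-bias + signal) # measurement at -bias
    print(signal, I_plus - I_minus, np.sin(bias) * np.sin(signal))
```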

Relevance: 30.00%

Abstract:

Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. The approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be used by any algorithmic procedure based on random sampling, such as Markov chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
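
A minimal sketch of the sampling ingredient, with invented parameter values: an EIF neuron driven by a noisy suprathreshold current is integrated with the Euler method and its ISIs are collected as the "samples" the abstract refers to.

```python
# Minimal sketch: Euler simulation of an exponential integrate-and-fire (EIF)
# neuron driven by a noisy current, collecting the interspike intervals (ISIs)
# that the abstract interprets as random samples. Parameter values are
# illustrative, not those of the paper.
import numpy as np

rng = np.random.default_rng(5)
dt, T = 1e-4, 20.0                 # time step and duration (s)
tau, E_L = 0.02, -65e-3            # membrane time constant (s), resting potential (V)
V_T, Delta_T = -50e-3, 2e-3        # soft threshold and slope factor (V)
V_spike, V_reset = -30e-3, -60e-3  # spike detection and reset voltages (V)
mu, sigma = 16e-3, 4e-3            # mean drive and noise amplitude (V)

V = E_L
last_spike, isis = 0.0, []
for step in range(int(T / dt)):
    t = step * dt
    dV = (-(V - E_L) + Delta_T * np.exp((V - V_T) / Delta_T) + mu) * dt / tau
    V += dV + sigma * np.sqrt(dt / tau) * rng.normal()
    if V >= V_spike:               # spike: record the ISI and reset
        isis.append(t - last_spike)
        last_spike, V = t, V_reset

isis = np.array(isis)
print(f"{isis.size} spikes, mean ISI = {isis.mean():.4f} s" if isis.size else "no spikes")
```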

Relevance: 30.00%

Abstract:

Theory on plant succession predicts a temporal increase in the complexity of spatial community structure and of competitive interactions: initially random occurrences of early-colonising species shift towards spatially and competitively structured plant associations in later successional stages. Here we use long-term data on early plant succession in a German post-mining area to disentangle the importance of random colonisation, habitat filtering, and competition for the temporal and spatial development of plant community structure. We used species co-occurrence analysis and a recently developed method for assessing competitive strength and hierarchies (transitive versus intransitive competitive orders) in multispecies communities. We found that species turnover decreased through time within interaction neighbourhoods, but increased through time outside interaction neighbourhoods. Successional change did not lead to a modular community structure. After accounting for species richness effects, the strength of competitive interactions and the proportion of transitive competitive hierarchies increased through time. Although the effects of habitat filtering were weak, random colonisation and subsequent competitive interactions had strong effects on community structure. Because competitive strength and transitivity were poorly correlated with soil characteristics, there was little evidence for context-dependent competitive strength associated with intransitive competitive hierarchies.
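
As a generic illustration of the co-occurrence ingredient (not the specific null models or the competition-transitivity method of the study), the sketch below computes the checkerboard C-score of a toy site-by-species matrix and compares it against a simple equiprobable randomisation.

```python
# Minimal sketch of a species co-occurrence analysis: the checkerboard C-score
# of a binary site-by-species matrix is compared with scores from randomised
# matrices. The simple equiprobable shuffle is only an illustration; published
# analyses typically use constrained null models and further metrics.
import numpy as np
from itertools import combinations

def c_score(m):
    """Average number of checkerboard units over all species pairs (species = columns)."""
    scores = []
    for a, b in combinations(range(m.shape[1]), 2):
        shared = np.sum(m[:, a] & m[:, b])
        ra, rb = m[:, a].sum(), m[:, b].sum()
        scores.append((ra - shared) * (rb - shared))
    return float(np.mean(scores))

rng = np.random.default_rng(11)
obs = (rng.random((20, 8)) < 0.4).astype(int)       # toy site-by-species matrix

observed = c_score(obs)
null = [c_score(rng.permutation(obs.reshape(-1)).reshape(obs.shape)) for _ in range(200)]
p = np.mean([n >= observed for n in null])
print(f"C-score = {observed:.2f}, P(null >= observed) = {p:.2f}")
```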

Relevance: 30.00%

Abstract:

We study the effects of a finite cubic volume with twisted boundary conditions on pseudoscalar mesons. We apply Chiral Perturbation Theory in the p-regime and introduce the twist by means of a constant vector field. The corrections to masses, decay constants, pseudoscalar coupling constants and form factors are calculated at next-to-leading order. We detail the derivations and compare with results available in the literature. In some cases there is disagreement due to a different treatment of new extra terms generated by the breaking of cubic invariance. We advocate treating such terms as renormalization terms of the twisting angles and reabsorbing them in the on-shell conditions. We confirm that the corrections to masses, decay constants and pseudoscalar coupling constants are related by chiral Ward identities. Furthermore, we show that the matrix elements of the scalar (resp. vector) form factor satisfy the Feynman–Hellmann theorem (resp. the Ward–Takahashi identity). To show the Ward–Takahashi identity we construct an effective field theory for charged pions which is invariant under electromagnetic gauge transformations and which reproduces the results obtained with Chiral Perturbation Theory at vanishing momentum transfer. This generalizes considerations previously published for periodic boundary conditions to twisted boundary conditions. Asymptotic formulae provide another way to estimate the corrections in finite volume. They were introduced by Lüscher and relate the corrections of a given physical quantity to an integral of a specific amplitude evaluated in infinite volume. Here, we revise the original derivation of Lüscher and generalize it to finite volume with twisted boundary conditions. In some cases the derivation involves complications due to extra terms generated by the breaking of cubic invariance. We isolate such terms and treat them as renormalization terms, just as before. In that way, we derive asymptotic formulae for masses, decay constants, pseudoscalar coupling constants and scalar form factors, together with asymptotic formulae for the renormalization terms. We apply all these formulae in combination with Chiral Perturbation Theory and estimate the corrections beyond next-to-leading order. We show that the asymptotic formulae for masses, decay constants and pseudoscalar coupling constants are related by chiral Ward identities; a similar relation independently connects the asymptotic formulae for the renormalization terms. We check these relations for charged pions through a direct calculation. To conclude, a numerical analysis quantifies the importance of finite-volume corrections at next-to-leading order and beyond. We perform a generic analysis and illustrate two possible applications to real simulations.
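
For orientation, the standard way a twist enters (conventions may differ from those of the thesis) is summarised below: the twisted boundary condition shifts the allowed momenta, which is equivalent to coupling the field to a constant vector field, as stated in the abstract.

```latex
% Twisted boundary conditions and the resulting momentum quantisation
% (standard relations; conventions may differ from those used in the thesis).
\begin{align}
  \psi(x + L\,\hat{e}_k) &= e^{i\theta_k}\,\psi(x), \qquad k = 1,2,3, \\
  p_k &= \frac{2\pi n_k + \theta_k}{L}, \qquad n_k \in \mathbb{Z},
\end{align}
% so the twist acts like a constant background vector field B_k = \theta_k / L
% coupled to the field, which is how it is introduced in finite-volume
% Chiral Perturbation Theory.
```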

Relevance: 30.00%

Abstract:

We present a framework for fitting multiple random walks to animal movement paths consisting of ordered sets of step lengths and turning angles. Each step and turn is assigned to one of a number of random walks, each characteristic of a different behavioral state. Behavioral state assignments may be inferred purely from movement data or may include the habitat type in which the animals are located. Switching between different behavioral states may be modeled explicitly using a state transition matrix estimated directly from data, or switching probabilities may take into account the proximity of animals to landscape features. Model fitting is undertaken within a Bayesian framework using the WinBUGS software. These methods allow for identification of different movement states using several properties of observed paths and lead naturally to the formulation of movement models. Analysis of relocation data from elk released in east-central Ontario, Canada, suggests a biphasic movement behavior: elk are either in an "encamped" state, in which step lengths are small and turning angles are high, or in an "exploratory" state, in which daily step lengths are several kilometers and turning angles are small. Animals encamp in open habitat (agricultural fields and open forest), but the exploratory state is not associated with any particular habitat type.
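
A minimal generative sketch of such a biphasic model, with invented parameters: a two-state Markov chain switches between "encamped" and "exploratory" behaviour, and each state draws its own step lengths and turning angles. The Bayesian fitting in WinBUGS is not reproduced.

```python
# Minimal sketch: simulate a two-state ("encamped"/"exploratory") movement path
# in which a Markov transition matrix switches the behavioural state and each
# state has its own step-length and turning-angle distribution. All numbers
# are illustrative.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.9, 0.1],          # state transition matrix:
              [0.2, 0.8]])         # 0 = encamped, 1 = exploratory
step_scale = [0.1, 3.0]            # mean step length (km) per state
angle_kappa = [0.5, 5.0]           # turning-angle concentration per state

n_steps, state = 500, 0
heading, pos = 0.0, np.zeros(2)
track, states = [pos.copy()], [state]
for _ in range(n_steps):
    state = rng.choice(2, p=P[state])                  # switch behavioural state
    step = rng.exponential(step_scale[state])          # draw a step length
    heading += rng.vonmises(0.0, angle_kappa[state])   # draw a turning angle
    pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
    track.append(pos.copy())
    states.append(state)

track = np.array(track)
print("fraction of time exploratory:", np.mean(states))
print("net displacement (km):", np.linalg.norm(track[-1] - track[0]))
```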

Relevance: 30.00%

Abstract:

The aim of this work is to solve a question raised for average sampling in shift-invariant spaces by using well-known matrix pencil theory. In many common situations in sampling theory, the available data are samples of some convolution operator acting on the function itself: this leads to the problem of average sampling, also known as generalized sampling. In this paper we deal with the existence of a sampling formula involving these samples and having reconstruction functions with compact support. Thus, low computational complexity is involved and truncation errors are avoided. In practice, this is accomplished by means of an FIR filter bank. An answer is given in the light of generalized sampling theory by using the oversampling technique: more samples than strictly necessary are used. The original problem reduces to finding a polynomial left inverse of a polynomial matrix intimately related to the sampling problem which, for a suitable choice of the sampling period, becomes a matrix pencil. This matrix pencil approach allows us to obtain a practical method for computing the compactly supported reconstruction functions for the important case in which the oversampling rate is minimum. Moreover, the optimality of the obtained solution is established.
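
For orientation, the kind of sampling formula sought has the generic form below (s averaging channels L_j, sampling period r, reconstruction functions S_j in the shift-invariant space; the notation is illustrative rather than necessarily the paper's).

```latex
% Generic form of an average (generalized) sampling expansion in a
% shift-invariant space V_phi with s convolution channels L_j and sampling
% period r (notation illustrative, not necessarily that of the paper):
\begin{equation}
  f(t) \;=\; \sum_{j=1}^{s} \sum_{n \in \mathbb{Z}} \bigl(L_j f\bigr)(rn)\,
             S_j(t - rn), \qquad f \in V_\varphi ,
\end{equation}
% where the reconstruction functions S_j in V_phi can be taken compactly
% supported whenever a polynomial left inverse of the associated polynomial
% matrix (a matrix pencil for a suitable sampling period) exists.
```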

Relevance: 30.00%

Abstract:

Interface discontinuity factors based on Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion average values approximate heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented in the framework of the Analytic Coarse Mesh Finite Difference Method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Physical fluxes are then transformed into modal fluxes in the eigenspace of the diffusion matrix. It is thus possible to introduce interface flux discontinuity jumps as the difference between heterogeneous and homogeneous modal fluxes, instead of introducing interface discontinuity factors as the ratio of heterogeneous to homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
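
A two-group toy sketch of the modal change of basis described above (the matrix entries are invented and none of the ACMFD/COBAYA3 machinery is reproduced):

```python
# Minimal two-group sketch: diagonalise a (homogenised) multigroup diffusion
# matrix A and map physical fluxes to modal fluxes in its eigenspace. This
# only illustrates the change of basis used in the modal formulation.
import numpy as np

# Hypothetical 2x2 multigroup matrix coupling fast and thermal fluxes.
A = np.array([[0.12, -0.30],
              [-0.02, 0.10]])

eigvals, V = np.linalg.eig(A)          # columns of V span the modal eigenspace
phi_physical = np.array([1.0, 0.25])   # example fast/thermal flux pair

phi_modal = np.linalg.solve(V, phi_physical)   # physical -> modal
phi_back = V @ phi_modal                       # modal -> physical (round trip)

print("eigenvalues:", eigvals)
print("modal fluxes:", phi_modal)
print("round-trip error:", np.max(np.abs(phi_back - phi_physical)))
```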

Relevance: 30.00%

Abstract:

We introduce in this paper a method to calculate the Hessenberg matrix of a sum of measures from the Hessenberg matrices of the component measures. Our method extends the spectral techniques used by G. Mantica to calculate the Jacobi matrix associated with a sum of measures from the Jacobi matrices of each of the measures. We apply this method to approximate the Hessenberg matrix associated with a self-similar measure and compare it with the result obtained by an earlier method for self-similar measures that uses a fixed point theorem for moment matrices. Results are given for a series of classical examples of self-similar measures. Finally, we also apply the method introduced in this paper to some examples of sums of (not self-similar) measures, obtaining the exact value of the sections of the Hessenberg matrix.
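
For readers unfamiliar with the terminology, the standard definitions behind the abstract are summarised below (stated for orientation; conventions may differ from the paper's).

```latex
% With {p_n} the orthonormal polynomials of a measure mu, the Hessenberg
% matrix D = (d_{k,n}) represents multiplication by the variable:
\begin{equation}
  z\, p_n(z) \;=\; \sum_{k=0}^{n+1} d_{k,n}\, p_k(z), \qquad n = 0, 1, 2, \dots
\end{equation}
% D is upper Hessenberg. When mu is supported on the real line, the three-term
% recurrence makes D tridiagonal, recovering the Jacobi matrix addressed by
% Mantica's spectral techniques.
```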

Relevance: 30.00%

Abstract:

This work presents a method for the analysis of timber composite beams that takes into account slip in the connection system, based on assembling the flexibility matrix of the whole structure. The method builds on one proposed by Tommola and Jutila (2001) and extends it to the case of a gap between the two pieces with an arbitrary location at the first connector, which notably broadens its practical application. The addition of the gap makes it possible to model a cracked zone in the concrete topping, as well as the case in which forming produces the gap. The consideration of stresses induced by changes in temperature and moisture content is also described, and the concept of equivalent eccentricity is generalized. The method has important advantages compared with the current European standard EN 1995-1-1:2004, as it can deal with any type of load, variable sections, discrete and non-regular connection systems, a gap between the two pieces, and variations in temperature and moisture content. Although it could be applied to any structural system, it is especially suited to simply supported and continuous beams. Worked examples are presented at the end, showing that the arrangement of the connection notably modifies the shear force distribution. A first interpretation of the results is made on the basis of strut-and-tie theory. The examples show that the use of EC-5 is unsafe when, as a rule of thumb, the strut or compression field between the support and the first connector is at an angle of less than 60° with the axis of the beam.
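
For context on the EC-5 comparison, the sketch below evaluates the standard Eurocode 5 (EN 1995-1-1, Annex B) gamma-method for a two-piece beam with a flexible connection; this is the simplified code procedure the examples are checked against, not the flexibility-matrix method proposed in the paper, and all input values are hypothetical.

```python
# For orientation only: the simplified Eurocode 5 (EN 1995-1-1, Annex B)
# gamma-method for a two-piece mechanically jointed beam with a flexible
# connection. This is NOT the flexibility-matrix method of the paper.
# All input values are hypothetical.
import math

L = 5.0                 # span (m), simply supported
E1, E2 = 12e9, 30e9     # moduli of elasticity: timber joist, concrete topping (Pa)
b1, h1 = 0.10, 0.20     # timber section (m)
b2, h2 = 0.60, 0.06     # concrete flange (m)
K = 8e6                 # slip modulus of one connector (N/m)
s = 0.25                # connector spacing (m)

A1, A2 = b1 * h1, b2 * h2
I1, I2 = b1 * h1**3 / 12, b2 * h2**3 / 12

gamma2 = 1.0
gamma1 = 1.0 / (1.0 + math.pi**2 * E1 * A1 * s / (K * L**2))

# Distances of the part centroids to the neutral axis of the composite section.
a2 = gamma1 * E1 * A1 * (h1 + h2) / 2 / (gamma1 * E1 * A1 + gamma2 * E2 * A2)
a1 = (h1 + h2) / 2 - a2

EI_eff = E1 * I1 + gamma1 * E1 * A1 * a1**2 + E2 * I2 + gamma2 * E2 * A2 * a2**2
print(f"gamma1 = {gamma1:.3f}, effective EI = {EI_eff / 1e6:.1f} MN*m^2")
```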

Relevance: 30.00%

Abstract:

The vast majority of flows of engineering relevance remain unexplored in the context of global instability theory, for two main reasons: the difficulties associated with the analysis of turbulent flows, and the formidable computational resources required to solve the eigenvalue problem associated with the instability analysis of three-dimensional base flows, also known as the TriGlobal problem. This thesis addresses the problem associated with three-dimensionality. A general methodology has been developed to solve modal analyses of global linear instabilities by coupling time-stepping methods, developed in this work, with the second-order computational fluid dynamics codes widely used in industry. The methodology solves the eigenvalue problem associated with the instability analysis using Krylov subspace projection methods, with the particular feature that the Krylov subspaces are generated through the time integration of an initial vector using any computational fluid dynamics code. Three challenging flows, in terms of both the required computational resources and the physical complexity, have been chosen to demonstrate the methodology: (i) the flow inside a wall-bounded three-dimensional lid-driven cavity, (ii) the flow past a cylinder fitted with helical strakes along its span, and (iii) the flow over a three-dimensional open cavity in the absence of spatial homogeneities. For validation, the TriGlobal problem associated with the three-dimensional lid-driven cavity has been solved using the time-stepping method developed here coupled with the incompressible solvers of the open-source CFD code OpenFOAM; the results are in excellent agreement with the literature. The application of this methodology to the global instability analysis of three-dimensional open flows has provided, for the first time, information on the three-dimensional transition of these flows. In addition, the methodology has been adapted to solve adjoint TriGlobal problems, enabling flow control based on modifications of the global instabilities. Finally, it has been demonstrated that the moderate amount of computational resources required to solve the TriGlobal eigenvalue problem with this numerical method, together with its versatility in coupling to any aerodynamic code, enables global instability analysis and control of complex flows of industrial relevance.
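
A minimal sketch of the time-stepping Krylov idea, with a toy linear "time stepper" standing in for the CFD code (matrix size, horizon and all values are invented): the propagator over a short horizon is exposed as a matrix-free operator, its leading Ritz values are computed with an Arnoldi-based solver, and they are converted back into growth rates.

```python
# Minimal sketch of the time-stepping approach to global stability analysis:
# the Jacobian is never formed; a "time stepper" that advances a perturbation
# over a short horizon is wrapped as a matrix-free operator and handed to a
# Krylov (Arnoldi) eigenvalue solver. The toy Euler integrator below stands in
# for the CFD code (e.g. OpenFOAM) used in the thesis.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 200
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)  # hidden "Jacobian"

dt, n_sub = 0.05, 20          # horizon T = dt * n_sub for one "flow evaluation"

def time_stepper(v):
    """Advance the perturbation v over the horizon with explicit Euler steps,
    approximating the action of the matrix exponential exp(A*T) on v."""
    for _ in range(n_sub):
        v = v + dt * (A @ v)
    return v

propagator = LinearOperator((n, n), matvec=time_stepper)
mu, vecs = eigs(propagator, k=5, which="LM")      # leading Ritz values of exp(A*T)
growth_rates = np.log(mu) / (dt * n_sub)          # map back to eigenvalues of A
print("estimated leading growth rates:", np.sort(growth_rates.real)[::-1])
```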