983 results for Inverse Gaussian Distribution
Abstract:
In the past century, the debate over whether or not density-dependent factors regulate populations has generally focused on changes in mean population density, ignoring the spatial variance around the mean as unimportant noise. In an attempt to provide a different framework for understanding population dynamics based on individual fitness, this paper discusses the crucial role of spatial variability itself in the stability of insect populations. The advantages of this method are the following: (1) it is founded on evolutionary principles rather than post hoc assumptions; (2) it erects hypotheses that can be tested; and (3) it links disparate ecological schools, including spatial dynamics, behavioral ecology, preference-performance, and plant apparency, into an overall framework. At the core of this framework, habitat complexity governs insect spatial variance, which in turn determines population stability. First, the minimum risk distribution (MRD) is defined as the spatial distribution of individuals that results in the minimum number of premature deaths in a population given the distribution of mortality risk in the habitat (and, therefore, leading to maximized population growth). The greater the divergence of actual spatial patterns of individuals from the MRD, the greater the reduction of population growth and size from high, unstable levels. Then, based on extensive data from 29 populations of the processionary caterpillar, Ochrogaster lunifer, four steps are used to test the effect of habitat interference on population growth rates. (1) The costs (increasing the risk of scramble competition) and benefits (decreasing the risk of inverse density-dependent predation) of egg and larval aggregation are quantified. (2) These costs and benefits, along with the distribution of resources, are used to construct the MRD for each habitat. (3) The MRD is used as a benchmark against which the actual spatial pattern of individuals is compared. The degree of divergence of the actual spatial pattern from the MRD is quantified for each of the 29 habitats. (4) Finally, indices of habitat complexity are used to provide highly accurate predictions of spatial divergence from the MRD, showing that habitat interference reduces population growth rates from high, unstable levels. The reason for the divergence appears to be that high levels of background vegetation (vegetation other than host plants) interfere with female host-searching behavior. This leads to a spatial distribution of egg batches with high mortality risk, and therefore lower population growth. Knowledge of the MRD in other species should be a highly effective means of predicting trends in population dynamics. Species with high divergence between their actual spatial distribution and their MRD may display relatively stable dynamics at low population levels. In contrast, species with low divergence should experience high levels of intragenerational population growth leading to frequent habitat-wide outbreaks and unstable dynamics in the long term. Six hypotheses, erected under the framework of spatial interference, are discussed, and future tests are suggested.
Abstract:
Sensitivity of the output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a≥0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on a multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
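As a hedged illustration of the constructions summarized above (the notation below is ours and may differ from the paper's), the anisotropy of a finite-power random vector w in R^m with density f and differential entropy h(w), and the associated a-anisotropic norm of a matrix F, can be written as

\[
\mathbf{A}(w) \;=\; \min_{\lambda>0} D\bigl(f \,\|\, \mathcal{N}(0,\lambda I_m)\bigr)
\;=\; \frac{m}{2}\,\ln\!\Bigl(\frac{2\pi e}{m}\,\mathbf{E}|w|^{2}\Bigr) - h(w),
\qquad
\|F\|_{a} \;=\; \sup_{\mathbf{A}(w)\le a}\frac{\sqrt{\mathbf{E}|Fw|^{2}}}{\sqrt{\mathbf{E}|w|^{2}}},
\]

where the minimum over the scalar-covariance Gaussians is attained at \lambda = \mathbf{E}|w|^{2}/m, so that \mathbf{A}(w) \ge 0 with equality exactly for zero-mean Gaussian vectors with scalar covariance matrix.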
Abstract:
Radiotherapy (RT) is one of the most important approaches in the treatment of cancer, and its performance can be improved in three different ways: through the optimization of the dose distribution, through the use of different irradiation techniques, or through the study of radiobiological initiatives. The first is purely physical, because it is related to the physical dose distribution. The others are purely radiobiological, because they increase the differential effect between the tumour and the healthy tissues. Treatment Planning Systems (TPS) are used in RT to create dose distributions with the purpose of maximizing tumour control and minimizing complications in the healthy tissues. Inverse planning uses dose optimization techniques that satisfy the criteria specified by the user regarding the target and the organs at risk (OARs). Dose optimization is made possible through the analysis of dose-volume histograms (DVH) and with the use of computed tomography, magnetic resonance and other digital imaging techniques.
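As a purely illustrative sketch of the optimization step behind inverse planning (the dose-influence matrix, prescription values, penalty weights and solver below are hypothetical and are not taken from the thesis), the delivered dose can be modeled as a linear map of non-negative beamlet weights and fitted to the prescription by weighted non-negative least squares:

# Hypothetical sketch: inverse planning as weighted non-negative least squares.
# dose = D @ w, with D a (voxels x beamlets) dose-influence matrix and w >= 0.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 30
D = rng.random((n_voxels, n_beamlets))        # hypothetical dose-influence matrix
target = np.zeros(n_voxels, dtype=bool)
target[:60] = True                            # first 60 voxels play the role of the target volume

prescription = np.where(target, 60.0, 0.0)    # prescribe 60 Gy to the target, as little as possible elsewhere
weights = np.where(target, 1.0, 0.3)          # penalize target misdose more than OAR dose

# Solve  min_w || sqrt(W) (D w - prescription) ||^2  subject to  w >= 0
A = D * np.sqrt(weights)[:, None]
b = prescription * np.sqrt(weights)
beamlet_weights, _ = nnls(A, b)

dose = D @ beamlet_weights
print("mean target dose:", dose[target].mean())
print("mean OAR dose:   ", dose[~target].mean())

In practice the objective is evaluated against DVH-type criteria rather than a single quadratic penalty; the sketch only shows the structure of the inverse problem.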
Abstract:
Schistosomiasis prevalence and egg counts remained low one year after chemotherapy in most households in a hyperendemic rural area in northern Minas Gerais, but several distinct spatial patterns could be observed in relation to IgE levels and, to a lesser extent, to exposure risk (TBM) and type of water supply. An inverse relationship between pre-treatment household prevalence and egg counts on the one hand and post-treatment IgE levels on the other was noted in two of the five communities. Low exposure risk was associated with the low pre-treatment infection rates in the central village but did not contribute to the decline of infection rates after chemotherapy in the study area, as indicated by the significant increase in water contact during the post-treatment period (p < 0.0001). Distance between households and the streams and socioeconomic factors were also unimportant in predicting the spatial distribution of infection. These results are consistent with the production and antiparasitic effect of high levels of IgE in Schistosoma mansoni infection.
Abstract:
A numerical study is presented of the three-dimensional Gaussian random-field Ising model at T=0 driven by an external field. Standard synchronous relaxation dynamics is employed to obtain the magnetization versus field hysteresis loops. The focus is on the analysis of the number and size distribution of the magnetization avalanches. They are classified as being nonspanning, one-dimensional-spanning, two-dimensional-spanning, or three-dimensional-spanning depending on whether or not they span the whole lattice in different space directions. Moreover, finite-size scaling analysis enables identification of two different types of nonspanning avalanches (critical and noncritical) and two different types of three-dimensional-spanning avalanches (critical and subcritical), whose numbers increase with L as a power law with different exponents. We conclude by giving a scenario for avalanche behavior in the thermodynamic limit.
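A minimal sketch of the classification criterion described above (our own illustration, not the authors' code, and assuming free boundaries): an avalanche, given as the set of flipped sites of an L x L x L lattice, is labelled by the number of space directions in which it reaches both lattice boundaries.

# Classify an avalanche on an L x L x L lattice by how many directions it spans.
def classify_avalanche(sites, L):
    """sites: iterable of (x, y, z) integer coordinates of the flipped spins."""
    sites = list(sites)
    spanned = 0
    for axis in range(3):
        coords = {s[axis] for s in sites}
        # the avalanche spans this direction if it touches both opposite boundaries
        if 0 in coords and (L - 1) in coords:
            spanned += 1
    labels = ("nonspanning", "one-dimensional-spanning",
              "two-dimensional-spanning", "three-dimensional-spanning")
    return labels[spanned]

# Example: a straight chain of sites along the x axis spans exactly one direction.
L = 8
chain = [(x, 0, 0) for x in range(L)]
print(classify_avalanche(chain, L))   # -> one-dimensional-spanning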
Abstract:
The extended Gaussian ensemble (EGE) is introduced as a generalization of the canonical ensemble. This ensemble is a further extension of the Gaussian ensemble introduced by Hetherington [J. Low Temp. Phys. 66, 145 (1987)]. The statistical mechanical formalism is derived both from the analysis of the system attached to a finite reservoir and from the maximum statistical entropy principle. The probability of each microstate depends on two parameters β and γ which allow one to fix, independently, the mean energy of the system and the energy fluctuations, respectively. We establish the Legendre transform structure for the generalized thermodynamic potential and propose a stability criterion. We also compare the EGE probability distribution with the q-exponential distribution. As an example, an application to a system with few independent spins is presented.
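As a hedged illustration of how two parameters can fix the mean energy and its fluctuations independently (this is the generic maximum-entropy construction with constraints on the first two moments of the energy, written in our notation, and not necessarily the exact parametrization used in the paper):

\[
p_i \;=\; \frac{1}{Z(\beta,\gamma)}\exp\!\bigl(-\beta E_i - \gamma E_i^{2}\bigr),
\qquad
Z(\beta,\gamma) \;=\; \sum_i \exp\!\bigl(-\beta E_i - \gamma E_i^{2}\bigr),
\]

where \beta and \gamma are the Lagrange multipliers conjugate to \langle E\rangle and \langle E^{2}\rangle; setting \gamma = 0 recovers the canonical ensemble.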
Abstract:
Résumé: Classical cryptography is based on mathematical concepts whose security rests on the computational difficulty of inverting functions. This type of encryption is at the mercy of the computing power of computers and of the discovery of algorithms capable of computing the inverses of certain mathematical functions in a "reasonable" time. The use of a scheme whose security is scientifically proven is therefore indispensable, especially for critical exchanges (banking systems, governments, etc.). Quantum cryptography answers this need: its security is based on the laws of quantum physics, which guarantee unconditionally secure operation. However, the application and integration of quantum cryptography are a concern for the developers of this kind of solution. This thesis justifies the need for quantum cryptography and shows that the cost incurred by deploying this solution is justified. It proposes a simple and practicable mechanism for integrating quantum cryptography into widely used communication protocols such as PPP, IPSec and the 802.11i protocol. Application scenarios illustrate the feasibility of these solutions. An evaluation methodology, according to the Common Criteria, for solutions based on quantum cryptography is also proposed in this document. Abstract: Classical cryptography is based on mathematical functions. The robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. There is no mathematical proof that establishes whether it is impossible to find the inverse of a given one-way function. Therefore, it is mandatory to use a cryptosystem whose security is scientifically proven (especially for banking, governments, etc.). On the other hand, the security of quantum cryptography can be formally demonstrated. In fact, its security is based on the laws of physics that assure unconditional security. How is it possible to use and integrate quantum cryptography into existing solutions? This thesis proposes a method to integrate quantum cryptography into existing communication protocols like PPP, IPSec and the 802.11i protocol. It sketches out some possible scenarios in order to prove the feasibility and to estimate the cost of such scenarios. Directives and checkpoints are given to help in certifying quantum cryptography solutions according to the Common Criteria.
Abstract:
The metamorphism of the carbonate rocks of the SE Zanskar Tibetan zone has been studied by "illite crystallinity" and calcite-dolomite thermometry. The epizonal Zangla unit overlies the anchizonal Chumik unit. This discontinuous inverse zonation demonstrates a late to post-metamorphic thrust of the first unit over the second. The studied area underwent a complex tectonic history: - The tectonic units were stacked from the NE to the SW, generating recumbent folds, NE-dipping thrusts and the regional metamorphism. The compressive movements were active under lower temperature conditions, resulting in late thrusts that disturbed the metamorphic zonation. The discontinuous inverse metamorphic zonation dates from this phase. - A NE-vergent backfolding phase occurred under lower temperature conditions. It caused the uplift of more metamorphic levels. - A late extensional phase is revealed by the presence of NE-dipping low-angle normal faults and a major high-angle fault, the Sarchu fault. The low-angle normal faults locally run along earlier thrusts (composite tectonic contacts). Their throw has been sufficient to re-establish a normal stratigraphic superposition (young layers overlying old ones), but insufficient to erase the inverse metamorphic relationship. However, the combined action of backfolding and normal faulting can locally lessen, or even cancel, the inverse metamorphic superposition. After deduction of the normal fault translation, the vertical component of the original thrust displacement through the stratigraphy is 400 m, a value far too low to explain the temperature difference between the two units. The horizontal component of displacement is therefore far more important than the vertical one. The regional distribution of metamorphism within the Zangla unit points to an anchizonal front and an epizonal inner part. This is in agreement with nappe tectonics.
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q=1/2 case. We show that, when the residual principle is considered as a constraint, the q=1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution so devised is endowed with a component which corresponds to the well-known regularized solution of Tikhonov (1977).
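For reference (standard definitions in our notation, not quoted from the paper), the Tsallis q-entropy that underlies the non-extensive maximum entropy principle and the Tikhonov-regularized solution to which the result is compared are

\[
S_q[p] \;=\; \frac{1-\sum_i p_i^{\,q}}{q-1},
\qquad
x_\lambda \;=\; \arg\min_x\bigl\{\|Ax-b\|^{2} + \lambda\|x\|^{2}\bigr\}
\;=\; (A^{\mathsf T}A + \lambda I)^{-1}A^{\mathsf T}b,
\]

with the residual (discrepancy) principle choosing \lambda so that the residual norm \|Ax_\lambda - b\| matches the estimated noise level.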
Abstract:
Résumé: In the context of an increasingly warm climate, locating permafrost in steep sedimentary terrain and assessing the terrain movements occurring there are of prime importance. Within this framework, this PhD thesis is organized around two different research axes. From a static point of view, this research presents a study of the distribution and characteristics of permafrost in the talus slopes of the alpine periglacial belt. From a dynamic point of view, an analysis of the influence of permafrost characteristics (ice content, permafrost temperature, etc.) and of air and ground temperature variations on the creep velocities of frozen sedimentary bodies is carried out. To meet this double objective, the field approach was favoured. To determine the distribution and characteristics of permafrost, the traditional methods of permafrost prospecting were used, namely the measurement of the ground temperature at the base of the snow cover (BTS), continuous ground temperature measurements and the geoelectrical method. Terrain movements were measured using a differential GPS. The permafrost distribution study was carried out on some fifteen talus slopes located mainly in the Mont Gelé (Verbier-Nendaz) and Arolla regions. In most cases, permafrost could be detected in the lower part of the sedimentary accumulations, whereas the middle part of the talus slopes is usually not frozen. While this absence of permafrost sometimes extends into the uppermost portions of the slopes, the measurements show that in other cases frozen sediments are again present there. The electrical resistivities measured in the frozen portions of the studied talus slopes are in most cases clearly lower than those measured on rock glaciers. Previous studies have shown that internal air circulation is responsible for the negative thermal anomaly and, where it exists, for the permafrost found in the lower part of talus slopes located more than 1000 m below the regional lower limit of discontinuous permafrost. The study of four low-altitude sites (1400-1900 m), and in particular the instrumentation of the Dreveneuse site (Valais Prealps) with two boreholes, surface temperature sensors and an anemometer, made it possible to verify and refine the ventilation mechanism active within cold low-altitude talus slopes. This mechanism works as follows: in winter, the air contained in the talus, warmer and lighter than the outside air, rises inside the sedimentary accumulation and is expelled in its upper parts. This chimney effect draws cold air into the lower part of the talus, thus causing a marked overcooling of the ground. In summer, the mechanism reverses, the talus being colder than the surrounding air, and cold air is then expelled at the bottom of the slope. Ascending winter ventilation could be demonstrated in some of the studied high-altitude talus slopes and is probably largely responsible for the particular configuration of the observed frozen areas.
Even if the existence of a chimney effect could not be demonstrated in every case, notably because of interstitial ice that obstructs the air circulation, indications of its possible operation exist in nearly all the studied talus slopes. The absence of permafrost at altitudes favourable to it could in any case be explained by a warming of the ground linked to expulsions of relatively warm air. The study of terrain movements was carried out at about ten sites, mainly on rock glaciers, but also on a push moraine and a few talus slopes. Several rock glaciers display recent destabilization features (scars, tilted blocks, fine-grained matrix appearing at the surface, etc.), which indicates a recent acceleration of displacement velocities. This phenomenon, which seems widespread at the alpine scale, is probably attributable to the warming of permafrost over the past twenty years or so. The velocities measured on these landforms are often higher than the values usually reported in the literature. A strong inter-annual variability of the velocities is also noted, which seems to depend on variations in the mean annual surface temperature. Abstract: In the context of a warmer climate, the localisation of permafrost in steep sedimentary terrain and the measurement of terrain movements that occur in these areas are of great importance. With respect to these problems, this PhD thesis follows two different research axes. From a static point of view, the research presents a study of the permafrost distribution and characteristics in the talus slopes of the alpine periglacial belt. From a dynamic point of view, an analysis of the influence of the permafrost characteristics (ice content, permafrost temperature, etc.) and air and soil temperature variations on the creep velocities of frozen sedimentary bodies is carried out. In order to attain this double objective, the "field" approach was favoured. To determine the distribution and the characteristics of permafrost, the traditional methods of permafrost prospecting were used, i.e. ground surface temperature measurements at the base of the snow cover (BTS), year-round ground temperature measurements and DC-resistivity prospecting. The terrain movements were measured using a differential GPS. The permafrost distribution study was carried out on 15 talus slopes located mainly in the Mont Gelé (Verbier-Nendaz) and Arolla areas (Swiss Alps). In most cases, permafrost was found in the lower part of the talus slope, whereas the middle part was free of ice. In some cases, the upper part of the talus is also free of permafrost, whereas in other cases permafrost is present. Electrical resistivities measured in the frozen parts of the studied talus are in most cases clearly lower than those measured on rock glaciers. Previous studies have shown that internal air circulation is responsible for the negative thermal anomaly and, where it exists, the permafrost present in the lower part of talus slopes located more than 1000 m below the regional lower limit of discontinuous permafrost. The study of four low-altitude talus slopes (1400-1900 m), and notably the instrumentation of the Dreveneuse field site (Valais Prealps) with two boreholes, surface temperature sensors and an anemometer, made it possible to verify and detail the ventilation mechanism active in low-altitude talus slopes.
This mechanism works in the following way: in winter, the air contained in the block accumulation is warmer and lighter than the surrounding air and therefore moves upward in the talus and is expelled in its upper part. This chimney effect induces an aspiration of cold air into the lower part of the talus, which causes a strong overcooling of the ground. In summer, the mechanism is reversed because the talus slope is colder than the surrounding air. Cold air is then expelled in the lower part of the slope. Evidence of ascending ventilation in wintertime could also be found in some of the studied high-altitude talus slopes. It is probably mainly responsible for the particular configuration of the observed frozen areas. Even if the existence of a chimney effect could not be demonstrated in all cases, notably because of interstitial ice that obstructs the air circulation, indications of its presence exist in nearly all the studied talus slopes. The absence of permafrost at altitudes favourable to its presence could be explained, for example, by the terrain warming caused by the expulsion of relatively warm air. Terrain movements were measured at about ten sites, mainly on rock glaciers, but also on a push moraine and some talus slopes. Field observations reveal that many rock glaciers display recent destabilization features (landslide scars, tilted blocks, presence of fine-grained sediments at the surface, etc.) that indicate a probable recent acceleration of the creep velocities. This phenomenon, which seems to be widespread at the alpine scale, is probably linked to the permafrost warming during the last decades. The measured velocities are often higher than values usually proposed in the literature. In addition, strong inter-annual variations of the velocities were observed, which seem to depend on the mean annual ground temperature variations.
Abstract:
The microquasar LS 5039 has recently been detected as a source of very high energy (VHE) $\gamma$-rays. This detection, which confirms the previously proposed association of LS 5039 with the EGRET source 3EG~J1824$-$1514, makes LS 5039 a special system with observational data covering nearly the whole electromagnetic spectrum. In order to reproduce the observed spectrum of LS 5039, from radio to VHE $\gamma$-rays, we have applied a cold matter dominated jet model that takes into account accretion variability, the jet magnetic field, particle acceleration, adiabatic and radiative losses, microscopic energy conservation in the jet, and pair creation and absorption due to the external photon fields, as well as the emission from the first generation of secondaries. The radiative processes taken into account are synchrotron, relativistic Bremsstrahlung and inverse Compton (IC). The model is based on a scenario that has been characterized with recent observational results concerning the orbital parameters, the orbital variability at X-rays and the nature of the compact object. The computed spectral energy distribution (SED) shows good agreement with the available observational data.
Abstract:
In this thesis X-ray tomography is discussed from the Bayesian statistical viewpoint. The unknown parameters are treated as random variables and, in contrast to traditional methods, the solution is obtained as a large sample from the distribution of all possible solutions. As an introduction to tomography, an inversion formula for the Radon transform in the plane is presented, and the widely used filtered backprojection algorithm is derived. The traditional regularization methods are presented to the extent needed to ground the Bayesian approach. The measurements are photon counts at the detector pixels, so the assumption of a Poisson-distributed measurement error is justified. Often the error is assumed to be Gaussian, although the electronic noise caused by the measurement device can change the error structure; the assumption of a Gaussian measurement error is discussed. The thesis also discusses the use of different prior distributions in X-ray tomography. Especially in severely ill-posed problems, the choice of a suitable prior is the main part of the whole solution process. In the empirical part, the presented prior distributions are tested using simulated measurements, and the effects that different prior distributions produce are shown. The use of a prior is shown to be indispensable in the case of a severely ill-posed problem.
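Schematically (our notation; the thesis' exact modelling choices may differ), the Bayesian formulation combines a Poisson likelihood for the photon counts m observed at the detector pixels with a prior \pi_{\mathrm{pr}} on the discretized image x:

\[
\pi(x \mid m) \;\propto\; \pi_{\mathrm{pr}}(x)\,\prod_i \frac{\lambda_i(x)^{m_i}\,e^{-\lambda_i(x)}}{m_i!},
\qquad
\lambda_i(x) \;=\; N_0\, e^{-[Ax]_i},
\]

where A is the discretized projection (Radon) operator and N_0 the expected count on an unattenuated ray; the Gaussian-noise approximation replaces each Poisson factor by \exp\bigl(-\tfrac{1}{2}(m_i-\lambda_i(x))^{2}/\sigma_i^{2}\bigr).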
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow for a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free
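As a hedged sketch of the truncated expansion referred to above (our notation, with an assumed choice of the three lowest even Legendre terms, as appropriate for a second-rank tensor interaction):

\[
R(\theta) \;\approx\; r_0\,P_0(\cos\theta) + r_2\,P_2(\cos\theta) + r_4\,P_4(\cos\theta),
\qquad
P_2(x) = \tfrac{1}{2}(3x^{2}-1),\quad P_4(x) = \tfrac{1}{8}(35x^{4}-30x^{2}+3),
\]

so that decay curves measured across orientations map linearly onto the coefficients (r_0, r_2, r_4).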
Abstract:
The first two articles build procedures to simulate vectors of univariate states and to estimate parameters in nonlinear and non-Gaussian state space models. We propose state space specifications that offer more flexibility in modeling dynamic relationships with latent variables. Our procedures are extensions of the HESSIAN method of McCausland [2012]. They use approximations of the posterior density of the vector of states that make it possible to simulate directly from the posterior distribution of the state vector, to simulate the state vector in one block and jointly with the vector of parameters, and to avoid data augmentation. These properties allow the construction of posterior simulators with very high relative numerical efficiency. Being generic, they open a new path in nonlinear and non-Gaussian state space analysis with limited effort from the modeler. The third article is an essay in commodity market analysis. Private firms coexist with farmers' cooperatives in commodity markets in sub-Saharan African countries. The private firms hold the largest market share, even though some theoretical models predict their disappearance once confronted with farmers' cooperatives. Moreover, some empirical studies and observations link cooperative incidence in a region with interpersonal trust, and thus with farmers' trust toward cooperatives. We propose a model that supports these empirical facts: a model in which the cooperative's reputation is a leading factor determining the market equilibrium of price competition between a cooperative and a private firm.
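A minimal, purely illustrative sketch of the kind of nonlinear, non-Gaussian state space model such procedures target (a basic stochastic volatility specification, chosen here only as an example and not taken from the articles):

# Simulate a simple nonlinear, non-Gaussian state space model:
# a latent AR(1) log-volatility alpha_t and observations y_t drawn given alpha_t.
import numpy as np

def simulate_sv(T=500, phi=0.95, sigma_eta=0.2, mu=-1.0, seed=1):
    rng = np.random.default_rng(seed)
    alpha = np.empty(T)
    # initialize from the stationary distribution of the AR(1) state
    alpha[0] = mu + rng.normal(0.0, sigma_eta / np.sqrt(1.0 - phi**2))
    for t in range(1, T):
        alpha[t] = mu + phi * (alpha[t - 1] - mu) + rng.normal(0.0, sigma_eta)
    # observation equation: y_t | alpha_t ~ N(0, exp(alpha_t));
    # Gaussian given the state, but nonlinear in it, so the marginal model is non-Gaussian.
    y = np.exp(alpha / 2.0) * rng.normal(size=T)
    return alpha, y

states, observations = simulate_sv()
print(states[:3])
print(observations[:3])

Posterior simulation methods of the kind described above sample the latent vector (alpha_1, ..., alpha_T) and the parameters (phi, sigma_eta, mu) given the observations y.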