810 results for Extreme values


Relevance: 60.00%

Abstract:

In random matrix theory, the Tracy-Widom (TW) distribution describes the behavior of the largest eigenvalue. We consider here two models in which TW undergoes transformations. In the first one, disorder is introduced in the Gaussian ensembles by superimposing an external source of randomness. A competition between TW and a normal (Gaussian) distribution results, depending on the spreading of the disorder. The second model consists of removing at random a fraction of (correlated) eigenvalues of a random matrix. The usual formalism of Fredholm determinants extends naturally. A continuous transition from TW to the Weibull distribution, characteristic of extreme values of an uncorrelated sequence, is obtained.
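
A minimal Monte Carlo sketch of the second model (not the paper's Fredholm-determinant formalism): it samples GUE spectra and then thins them at random, hinting at how the maximum moves away from the Tracy-Widom regime. The matrix size, thinning fraction and number of samples are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_eigenvalues(n):
    """Eigenvalues of an n x n GUE matrix."""
    a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    h = (a + a.conj().T) / 2
    return np.linalg.eigvalsh(h)

n, samples, f = 200, 500, 0.5          # matrix size, repetitions, thinning fraction
full_max, thinned_max = [], []
for _ in range(samples):
    ev = gue_eigenvalues(n)
    full_max.append(ev.max())                   # Tracy-Widom-governed maximum
    keep = ev[rng.random(ev.size) > f]          # remove a fraction f of the eigenvalues
    thinned_max.append(keep.max())              # maximum of the thinned spectrum

print("mean largest eigenvalue (full)   :", np.mean(full_max))
print("mean largest eigenvalue (thinned):", np.mean(thinned_max))
```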

Relevance: 60.00%

Abstract:

For the first time, we introduce and study some mathematical properties of the Kumaraswamy Weibull distribution, a quite flexible model for analyzing positive data. It contains as special sub-models the exponentiated Weibull, exponentiated Rayleigh, exponentiated exponential, Weibull and also the new Kumaraswamy exponential distribution. We provide explicit expressions for the moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are derived for the mean deviations, Bonferroni and Lorenz curves, reliability and Rényi entropy. The moments of the order statistics are calculated. We also discuss the estimation of the parameters by maximum likelihood. We obtain the expected information matrix. We provide applications involving two real data sets on failure times. Finally, some multivariate generalizations of the Kumaraswamy Weibull distribution are discussed. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
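
A sketch of the Kumaraswamy Weibull cdf and inverse-transform sampling, assuming the usual Kw-G construction F(x) = 1 - [1 - G(x)^a]^b with a Weibull baseline G(x) = 1 - exp(-(lam*x)^c); this parameterization is an assumption for illustration, not necessarily the one adopted in the paper.

```python
import numpy as np

def kw_weibull_cdf(x, a, b, c, lam):
    g = 1.0 - np.exp(-(lam * x) ** c)          # Weibull baseline cdf
    return 1.0 - (1.0 - g ** a) ** b

def kw_weibull_rvs(size, a, b, c, lam, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.random(size)
    g = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)   # invert the Kumaraswamy layer
    return (-np.log(1.0 - g)) ** (1.0 / c) / lam      # invert the Weibull baseline

x = kw_weibull_rvs(10_000, a=2.0, b=1.5, c=1.2, lam=0.5, seed=1)
print("empirical P(X <= 1):", (x <= 1.0).mean())
print("model cdf at 1     :", kw_weibull_cdf(1.0, 2.0, 1.5, 1.2, 0.5))
```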

Relevance: 60.00%

Abstract:

We study in detail the so-called beta-modified Weibull distribution, motivated by the wide use of the Weibull distribution in practice and by the fact that the generalization provides a continuous crossover towards cases with different shapes. The new distribution is important since it contains as special sub-models some widely known distributions, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull and Weibull distributions, among several others. It also provides more flexibility to analyse complex real data. Various mathematical properties of this distribution are derived, including its moments and moment generating function. We examine the asymptotic distributions of the extreme values. Explicit expressions are also derived for the chf, mean deviations, Bonferroni and Lorenz curves, reliability and entropies. The estimation of the parameters is approached by two methods: moments and maximum likelihood. We compare by simulation the performances of the estimates obtained from these methods. We obtain the expected information matrix. Two applications are presented to illustrate the proposed distribution.
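
A sketch of the beta-modified-Weibull cdf, assuming the standard beta-G construction F(x) = I_{G(x)}(a, b) (regularized incomplete beta function) with a modified-Weibull baseline G(x) = 1 - exp(-alpha * x^gamma * exp(lam*x)); the baseline parameterization is an assumption for illustration.

```python
import numpy as np
from scipy.stats import beta

def modified_weibull_cdf(x, alpha, gamma, lam):
    return 1.0 - np.exp(-alpha * x ** gamma * np.exp(lam * x))

def beta_modified_weibull_cdf(x, a, b, alpha, gamma, lam):
    g = modified_weibull_cdf(x, alpha, gamma, lam)
    return beta.cdf(g, a, b)          # regularized incomplete beta I_{G(x)}(a, b)

x = np.linspace(0.1, 5.0, 5)
print(beta_modified_weibull_cdf(x, a=2.0, b=0.5, alpha=0.8, gamma=1.5, lam=0.1))
```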

Relevance: 60.00%

Abstract:

A model for finely layered visco-elastic rock proposed by us in previous papers is revisited and generalized to include couple stresses. We begin with an outline of the governing equations for the standard continuum case and apply a computational simulation scheme suitable for problems involving very large deformations. We then consider buckling instabilities in a finite, rectangular domain. Embedded within this domain, parallel to the longer dimension, we consider a stiff, layered beam under compression. We analyse folding up to 40% shortening. The standard continuum solution becomes unstable for extreme values of the shear/normal viscosity ratio. The instability is a consequence of the neglect of the bending stiffness/viscosity in the standard continuum model. We suggest considering these effects within the framework of a couple stress theory. Couple stress theories involve second-order spatial derivatives of the velocities/displacements in the virtual work principle. To avoid the requirement of C1 continuity in the finite element formulation we introduce the spin of the cross sections of the individual layers as an independent variable and enforce equality to the spin of the unit normal vector to the layers (the director of the layer system) by means of a penalty method. We illustrate the convergence of the penalty method by means of numerical solutions of simple shears of an infinite layer for increasing values of the penalty parameter. For the shear problem we present solutions assuming that the internal layering is initially oriented orthogonal to the surfaces of the shear layer. For high values of the ratio of the normal to the shear viscosity, the deformation concentrates in thin bands close to the layer surfaces. The effect of couple stresses on the evolution of folds in layered structures is also investigated. (C) 2002 Elsevier Science Ltd. All rights reserved.
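
A toy illustration of the penalty idea used above (a constraint between two kinematic variables enforced weakly rather than exactly): a simple quadratic energy in two variables is minimized subject to w1 = w2, and the penalized minimizer approaches the constrained solution as the penalty parameter grows. This is only an analogy under assumed stiffness and load values, not the paper's finite-element formulation.

```python
import numpy as np

def penalized_minimizer(k1, k2, f1, f2, penalty):
    """Minimize 0.5*k1*w1^2 + 0.5*k2*w2^2 - f1*w1 - f2*w2 + 0.5*penalty*(w1 - w2)^2."""
    K = np.array([[k1 + penalty, -penalty],
                  [-penalty, k2 + penalty]])
    return np.linalg.solve(K, np.array([f1, f2]))

# Exact constrained solution (w1 = w2 = w): (k1 + k2) * w = f1 + f2
k1, k2, f1, f2 = 1.0, 3.0, 1.0, 2.0
w_exact = (f1 + f2) / (k1 + k2)
for penalty in (1e0, 1e2, 1e4, 1e6):
    w = penalized_minimizer(k1, k2, f1, f2, penalty)
    print(f"penalty={penalty:>8.0e}  w={w}  constrained value={w_exact:.4f}")
```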

Relevance: 60.00%

Abstract:

Master's dissertation in Biodiversity and Plant Biotechnology.

Relevance: 60.00%

Abstract:

OBJECTIVE: To analyze the incremental cost-utility ratio for the surgical treatment of hip fracture in older patients. METHODS: This was a retrospective cohort study of a systematic sample of patients who underwent surgery for hip fracture at a central hospital of a macro-region in the state of Minas Gerais, Southeastern Brazil, between January 1, 2009 and December 31, 2011. A decision tree was created and analyzed considering the direct medical costs. The study followed the healthcare provider's perspective and had a one-year time horizon. Effectiveness was measured by the time elapsed between trauma and surgery after dividing the patients into early and late surgery groups. The utility was obtained in a cross-sectional and indirect manner using the EuroQOL 5 Dimensions generic questionnaire, transformed into cardinal numbers using the national regulations established by the Center for the Development and Regional Planning of the State of Minas Gerais. The sample included 110 patients, 27 of whom were allocated to the early surgery group and 83 to the late surgery group. The groups were stratified by age, gender, type of fracture, type of surgery, and anesthetic risk. RESULTS: The direct medical cost presented a statistically significant increase among patients in the late surgery group (p < 0.005), mainly because of ward costs (p < 0.001). In-hospital mortality was higher in the late surgery group (7.4% versus 16.9%). The decision tree demonstrated the dominance of the early surgery strategy over the late surgery strategy: R$9,854.34 (USD 4,387.17) versus R$26,754.56 (USD 11,911.03) per quality-adjusted life year. The sensitivity test with extreme values proved the robustness of the results. CONCLUSIONS: After controlling for confounding variables, the strategy of early surgery for hip fracture in older adults was shown to be dominant, because it presented a lower cost and better results than late surgery.
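
A minimal sketch of the cost-utility comparison logic behind such a decision tree: a strategy "dominates" when it is both cheaper and at least as effective; otherwise an incremental cost-utility ratio (ICER = delta cost / delta QALY) is reported. The cost and QALY inputs below are hypothetical placeholders, not study data.

```python
def compare_strategies(cost_a, qaly_a, cost_b, qaly_b):
    """Compare strategy A against comparator strategy B."""
    d_cost, d_qaly = cost_a - cost_b, qaly_a - qaly_b
    if d_cost <= 0 and d_qaly >= 0 and (d_cost < 0 or d_qaly > 0):
        return "A dominates B"
    if d_cost >= 0 and d_qaly <= 0 and (d_cost > 0 or d_qaly < 0):
        return "B dominates A"
    return f"ICER = {d_cost / d_qaly:.2f} per QALY gained"

# Hypothetical example: early surgery cheaper and yielding more QALYs than late surgery.
print(compare_strategies(cost_a=9_000.0, qaly_a=0.60, cost_b=15_000.0, qaly_b=0.45))
```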

Relevance: 60.00%

Abstract:

Doctoral thesis in Psychology (specialty in Experimental Psychology and Cognitive Sciences).

Relevance: 60.00%

Abstract:

The main object of the present paper consists in giving formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee to some extent the success of an experiment. The theoretical basis of all processes consists essentially in the following. Knowing the frequency of the desired events p and of the non-desired events q, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p+q)^n. Determining which of these combinations we want to avoid, we calculate their total frequency, selecting the value of the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

n! p^n [ (1/n!)(q/p)^n + (1/(1!(n-1)!))(q/p)^(n-1) + (1/(2!(n-2)!))(q/p)^(n-2) + (1/(3!(n-3)!))(q/p)^(n-3) + ... ] <= P_lim    (1b)

There does not exist an absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1,56) two relative values, one equal to 1/5n as the lowest value of probability and the other equal to 1/10n as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since this number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits of P_lim equal to 0.05 (precision 5%), 0.01 (precision 1%) and 0.001 (precision 0.1%).

The binomial formula as explained above (cf. formula 1b), however, is of rather limited applicability owing to the excessive calculation necessary, and we thus have to use approximations as substitutes. We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least superior to ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the following problems, stated in the introduction, can now be given:

A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may be accidentally always the best, or the best and second best, or the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial we have the following equation for n:

n = log P_lim / log(m/a) = log P_lim / (log m - log a)    (5)

B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency p, may appear in at least one, two, three or a = m+1 individuals?

1) For p between 0.1 and 0.9, and using the Gaussian approximation, we require np - delta*sqrt(np(1-p)) = a - 1 = m, which gives:

b = delta*sqrt((1-p)/p),  c = m/p,  n = [(b + sqrt(b^2 + 4c))/2]^2,  n' = 1/p,  n(cor) = n + n'    (7-8)

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter delta represents in the present case the unilateral limit of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09 respectively. When we are only interested in having at least one individual, m becomes equal to zero and the formula reduces, for a = 1, to:

n = b^2 = delta^2 (1-p)/p,  n' = 1/p,  n(cor) = n + n'    (9)

2) If p is smaller than 0.1 we may use Table 1 in order to find the mean m of a Poisson distribution and determine n = m/p.

C) Which is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?

1) When p1 and p2 are values between 0.1 and 0.9 we have:

n = { delta*[sqrt(p1(1-p1)) + sqrt(p2(1-p2))] / (p1 - p2) }^2,  n' = 1/(p1 - p2),  n(cor) = n + n'    (13)

We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision, requiring n(p1 - p2) - delta*sqrt(n[p1(1-p1) + p2(1-p2)]) = m:

b = delta*sqrt(p1(1-p1) + p2(1-p2)) / (p1 - p2),  c = m/(p1 - p2),  n = [(b + sqrt(b^2 + 4c))/2]^2,  n' = 1/(p1 - p2)    (14)

2) When both p1 and p2 are smaller than 0.1 we determine the quotient p1 : p2 and procure the corresponding number m2 of a Poisson distribution in Table 2. The value of n is found by the equation:

n = m2/p2    (15)

D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3?

1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9 we have to solve the individual equations and use the highest value of n thus determined:

n(1.2) = { delta*[sqrt(p1(1-p1)) + sqrt(p2(1-p2))] / (p1 - p2) }^2,  etc.    (16)

Delta now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58, 3.29.

2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of informatory nature: a) if a special type appears among n individuals with a frequency p(obs), what may be the corresponding ideal value of p(exp); or b) if we study samples of n individuals and expect a certain type with a frequency p(exp), what may be the extreme limits of p(obs) in individual families?

1) If we are dealing with values between 0.1 and 0.9 we may use Table 3. To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(exp) by interpolating between columns. In order to solve the second problem we start with the respective column for p(exp) and find the horizontal line for the given value of n, either directly or by approximation and interpolation.

2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions p(exp) and p(obs) into numbers of the Poisson series by multiplication with n. In order to solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(exp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(exp) into a value of m and procure, in the horizontal line corresponding to m(exp), the extreme values of m, which then must be transformed, by dividing by n, into values of p(obs).

F) Partial and progressive tests may be recommended in all cases where there is lack of material or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavorable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition we know that a frequency of p means that we expect one individual in every 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favorable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above and increase the total until the desired result is obtained, and this may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetical experiments with maize.
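
A small numerical sketch of problem B above: the minimum number of individuals n needed so that a type of frequency p appears in at least a = m+1 of them with probability 1 - P_lim. It compares an exact binomial search with the Gaussian-approximation formulas (7)-(8), including the correction n' = 1/p. The values of p, a and the precision are illustrative.

```python
import math
from scipy.stats import binom

def n_min_exact(p, a, p_lim):
    """Smallest n with P(X >= a) >= 1 - p_lim for X ~ Binomial(n, p)."""
    n = a
    while binom.cdf(a - 1, n, p) > p_lim:
        n += 1
    return n

def n_min_gauss(p, a, delta):
    """Gaussian approximation (formulas 7-8), delta being the one-sided normal quantile."""
    m = a - 1
    b = delta * math.sqrt((1 - p) / p)
    c = m / p
    n = ((b + math.sqrt(b * b + 4 * c)) / 2) ** 2
    return n + 1 / p                    # correction n' = 1/p

p, a = 0.25, 3                          # frequency of the desired type, required count
print("exact binomial :", n_min_exact(p, a, 0.05))
print("Gaussian approx:", round(n_min_gauss(p, a, 1.64), 1))   # delta = 1.64 for precision 5%
```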

Relevance: 60.00%

Abstract:

One of the main implications of the efficient market hypothesis (EMH) is that expected future returns on financial assets are not predictable if investors are risk neutral. In this paper we argue that financial time series offer more information than this hypothesis seems to suggest. In particular we postulate that runs of very large returns can be predictable for small time periods. In order to prove this we propose a TAR(3,1)-GARCH(1,1) model that is able to describe two different types of extreme events: a first type generated by large uncertainty regimes where runs of extremes are not predictable, and a second type where extremes come from isolated dread/joy events. This model is new in the literature on nonlinear processes. Its novelty resides in two features that make it different from previous TAR methodologies: the regimes are motivated by the occurrence of extreme values, and the threshold variable is defined by the shock affecting the process in the preceding period. In this way the model is able to uncover dependence and clustering of extremes in high as well as in low volatility periods. The model is tested with data on General Motors stock prices corresponding to two crises that had a substantial impact on financial markets worldwide: the Black Monday of October 1987 and September 11th, 2001. By analyzing the periods around these crises we find evidence of statistical significance of our model, and thereby of predictability of extremes, for September 11th but not for Black Monday. These findings support the hypotheses of a big negative event producing runs of negative returns in the first case, and of the burst of a worldwide stock market bubble in the second example.
JEL classification: C12; C15; C22; C51.
Keywords and phrases: asymmetries, crises, extreme values, hypothesis testing, leverage effect, nonlinearities, threshold models.
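
A hedged simulation sketch of a three-regime threshold AR(1) with GARCH(1,1) errors, where (as in the model described above) the regime is selected by the shock of the preceding period. The thresholds r1 < r2 and all coefficients are illustrative assumptions, not the specification or estimates of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000
r1, r2 = -1.5, 1.5                       # thresholds on the lagged standardized shock
phi = {0: -0.4, 1: 0.05, 2: -0.4}        # AR(1) coefficient in each regime
omega, alpha, beta = 0.05, 0.08, 0.90    # GARCH(1,1) parameters

y = np.zeros(T)
eps = np.zeros(T)
h = np.full(T, omega / (1 - alpha - beta))   # conditional variance
regimes = np.zeros(T, dtype=int)
for t in range(1, T):
    z_prev = eps[t - 1] / np.sqrt(h[t - 1])              # standardized lagged shock
    regimes[t] = 0 if z_prev < r1 else (2 if z_prev > r2 else 1)
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    y[t] = phi[int(regimes[t])] * y[t - 1] + eps[t]

print("observations per regime (low shock / middle / high shock):",
      np.bincount(regimes[1:], minlength=3))
```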

Relevance: 60.00%

Abstract:

Tropical cyclones are affected by a large number of climatic factors, which translates into complex patterns of occurrence. The variability of annual metrics of tropical-cyclone activity has been intensively studied, in particular since the sudden activation of the North Atlantic in the mid-1990s. We first provide a swift overview of previous work by diverse authors about these annual metrics for the North-Atlantic basin, where the natural variability of the phenomenon, the existence of trends, the drawbacks of the records, and the influence of global warming have been the subject of interesting debates. Next, we present an alternative approach that does not focus on seasonal features but on the characteristics of single events [Corral et al., Nature Phys. 6, 693 (2010)]. It is argued that the individual-storm power dissipation index (PDI) constitutes a natural way to describe each event and, further, that the PDI statistics yields a robust law for the occurrence of tropical cyclones in terms of a power law. In this context, methods of fitting these distributions are discussed. As an important extension to this work we introduce a distribution function that models the whole range of the PDI density (excluding incompleteness effects at the smallest values): the gamma distribution, consisting of a power law with an exponential decay at the tail. The characteristic scale of this decay, represented by the cutoff parameter, provides very valuable information on the finite size of the basin, via the largest values of the PDI that the basin can sustain. We use the gamma fit to evaluate the influence of sea surface temperature (SST) on the occurrence of extreme PDI values, for which we find an increase of around 50% in the values of these basin-wide events for a 0.49 °C average SST difference. Similar findings are observed for the effects of the positive phase of the Atlantic multidecadal oscillation and the number of hurricanes in a season on the PDI distribution. In the case of the El Niño-Southern Oscillation (ENSO), positive and negative values of the multivariate ENSO index do not have a significant effect on the PDI distribution; however, when only extreme values of the index are used, it is found that the presence of El Niño decreases the PDI of the most extreme hurricanes.
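
A sketch of the gamma fit described above, i.e. a power law with an exponential tail cutoff fitted by maximum likelihood. The data here are synthetic stand-ins for PDI values; scipy's gamma parameterization (shape a, scale) is used, with the scale parameter playing the role of the cutoff.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(7)
pdi_like = gamma.rvs(a=0.5, scale=3e10, size=2000, random_state=rng)   # synthetic "PDI" sample

# With loc fixed at 0 the fitted density is proportional to x**(shape-1) * exp(-x/scale).
shape, loc, scale = gamma.fit(pdi_like, floc=0)
print(f"fitted power-law part: x^({shape - 1:.2f}), exponential cutoff at ~{scale:.2e}")
```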

Relevance: 60.00%

Abstract:

Theory of compositional data analysis is often focused on the composition only. However, in practical applications we often treat a composition together with covariables of some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a colored set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts, or if it takes extreme values. On the other hand, colored scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections, which on one side show the compositional geometry and on the other side are still comprehensible by a non-expert analyst, readable for all locations and scales of the data. This is done, for example, by defining special balance displays with carefully selected axes. Following this idea, we need to systematically ask how to display, explore, describe, and test the relation to complementary or explanatory data of categorical, real, ratio or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has some fundamental implications for their software implementation, and for how they might be taught to analysts who are not already experts in multivariate analysis.
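
A minimal sketch of the balance coordinates mentioned above: for a chosen partition of the parts into a "numerator" group R and a "denominator" group S, the balance is sqrt(r*s/(r+s)) * ln(g(x_R)/g(x_S)), with g the geometric mean and r, s the group sizes. The example composition and partition are illustrative.

```python
import numpy as np

def balance(x, num_idx, den_idx):
    """Isometric log-ratio balance between two groups of parts of composition x."""
    x = np.asarray(x, dtype=float)
    r, s = len(num_idx), len(den_idx)
    g_num = np.exp(np.mean(np.log(x[num_idx])))     # geometric mean of group R
    g_den = np.exp(np.mean(np.log(x[den_idx])))     # geometric mean of group S
    return np.sqrt(r * s / (r + s)) * np.log(g_num / g_den)

composition = np.array([0.5, 0.3, 0.15, 0.05])      # parts sum to 1 (closed composition)
print("balance (parts 0,1 vs 2,3):", balance(composition, [0, 1], [2, 3]))
```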

Relevance: 60.00%

Abstract:

Understanding the distribution and composition of species assemblages and being able to predict them in space and time are highly important tasks to investigate the fate of biodiversity in the current context of global change. Species distribution models are tools that have proven useful to predict the potential distribution of species by relating their occurrences to environmental variables. Species assemblages can then be predicted by combining the predictions of individual species models. In the first part of my thesis, I tested the importance of new environmental predictors to improve the prediction of species distributions. I showed that edaphic variables, above all soil pH and nitrogen content, could be important in species distribution models. In a second chapter, I tested the influence of different resolutions of predictors on the predictive ability of species distribution models. I showed that fine-resolution predictors could improve the models for some species by giving a better estimation of the micro-topographic conditions that species tolerate, but that fine-resolution predictors for climatic factors still need to be improved. The second goal of my thesis was to test the ability of empirical models to predict characteristics of species assemblages such as species richness or functional attributes. I showed that species richness could be modelled efficiently and that the resulting prediction gave a more realistic estimate of the number of species than the one obtained by stacking outputs of single-species distribution models. Regarding the prediction of functional characteristics (plant height, leaf surface, seed mass) of plant assemblages, mean and extreme values of functional traits were better predictable than indices reflecting the diversity of traits in the community. This approach proved interesting for understanding which environmental conditions influence particular aspects of vegetation functioning. It could also be useful to predict climate change impacts on the vegetation. In the last part of my thesis, I studied the capacity of stacked species distribution models to predict plant assemblages. I showed that this method tended to over-predict the number of species and that the composition of the community was not predicted exactly either. Finally, I combined the results of the macro-ecological models obtained in the preceding chapters with stacked species distribution models and showed that this approach significantly reduced the number of species predicted and that the prediction of the composition was also improved in some cases. These results showed that this method is promising; it now needs to be tested on further data sets.

Understanding how plants are distributed in the environment and organized into communities is an essential question in the current context of global change. This knowledge can help us safeguard species diversity and ecosystems. Statistical methods allow us to predict the distribution of plant species in geographic space and in time. These species distribution models relate the occurrences of a species to environmental variables in order to describe its potential distribution. This method has proven itself for the prediction of individual species. More recently, several attempts to stack individual species models have been made in order to predict the composition of plant communities. The first objective of my work is to improve distribution models by testing the importance of new predictive variables. Among different edaphic variables, soil pH and nitrogen content proved to be non-negligible factors for predicting plant distributions. I also show in a second chapter that fine-resolution environmental predictors make it possible to reflect the micro-topographic conditions experienced by plants, but that they still need to be improved before they can be used effectively in the models. The second objective of this work was to study the development of predictive models for attributes of plant communities such as, for example, the species richness found at each point. I show that it is possible in this way to predict species richness values that are more realistic than those obtained by summing the predictions previously obtained for individual species. I also predicted, in space and time, characteristics of the vegetation such as its mean, minimum and maximum height. This approach can be useful for understanding which environmental factors promote different types of vegetation, as well as for assessing the changes to be expected in the vegetation in the future under different climate change regimes. In the third part of my thesis, I explored the possibility of predicting plant assemblages, first by stacking the predictions obtained from individual models for each species. This method has the drawback of predicting too many species compared with what is actually observed. I finally used the species richness model developed previously to constrain the results of the plant assemblage model. This allowed the models to be improved by reducing over-prediction and improving the prediction of species composition. This method seems promising, but further tests are needed to properly evaluate its capabilities.
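
A simplified sketch of the final step described above: constraining a stacked species-distribution prediction with a separately modelled species richness, by keeping only the R species with the highest predicted probabilities at a site. The per-species probabilities and the richness value below are made-up illustrations.

```python
import numpy as np

species = np.array(["sp_a", "sp_b", "sp_c", "sp_d", "sp_e"])
p_occurrence = np.array([0.9, 0.7, 0.6, 0.3, 0.1])   # predicted probabilities at one site

naive_assemblage = species[p_occurrence >= 0.5]       # plain stacking with a 0.5 threshold
print("stacked prediction      :", list(naive_assemblage))

predicted_richness = 2                                 # from a separate richness model
top = np.argsort(p_occurrence)[::-1][:predicted_richness]
print("richness-constrained set:", list(species[np.sort(top)]))
```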

Relevance: 60.00%

Abstract:

The consequences of variable rates of clonal reproduction on the population genetics of neutral markers are explored in diploid organisms within a subdivided population (island model). We use both analytical and stochastic simulation approaches. High rates of clonal reproduction will positively affect heterozygosity. As a consequence, nearly twice as many alleles per locus can be maintained, and population differentiation estimated as the F(ST) value is strongly decreased in purely clonal populations as compared to purely sexual ones. With increasing clonal reproduction, effective population size first slowly increases and then points toward extreme values when the reproductive system tends toward strict clonality. This reflects the fact that polymorphism is protected within individuals due to fixed heterozygosity. In contrast, genotypic diversity smoothly decreases with increasing rates of clonal reproduction. Asexual populations thus maintain higher genetic diversity at each single locus but a lower number of different genotypes. For all quantities investigated, except for genotypic diversities (both at individual loci and over multiple loci), mixed clonal/sexual reproduction is nearly indistinguishable from strict sexual reproduction as long as the proportion of clonal reproduction is not strongly predominant.

Relevance: 60.00%

Abstract:

The present research deals with an important public health threat, namely the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformations and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.

In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definitions. Simulations should be the basis, while other methods can provide complementary information to accomplish efficient indoor radon decision making.
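
A small sketch of one of the exploratory tools mentioned above, K Nearest Neighbors interpolation of a spatially distributed variable, applied to synthetic coordinates and values standing in for indoor radon measurements; the value of k and the distance weighting are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(500, 2))                            # synthetic station coordinates (km)
values = 50 + 10 * np.sin(coords[:, 0] / 15) + rng.normal(0, 5, 500)   # synthetic "radon" levels

knn = KNeighborsRegressor(n_neighbors=10, weights="distance")
knn.fit(coords, values)

grid = np.array([[25.0, 40.0], [75.0, 60.0]])                          # two prediction locations
print("interpolated values:", knn.predict(grid))
```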

Relevance: 60.00%

Abstract:

Community-level patterns of functional traits relate to community assembly and ecosystem functioning. By modelling the changes of different indices describing such patterns - trait means, extremes and diversity in communities - as a function of abiotic gradients, we could understand their drivers and build projections of the impact of global change on the functional components of biodiversity. We used five plant functional traits (vegetative height, specific leaf area, leaf dry matter content, leaf nitrogen content and seed mass) and non-woody vegetation plots to model several indices depicting community-level patterns of functional traits from a set of abiotic environmental variables (topographic, climatic and edaphic) over contrasting environmental conditions in a mountainous landscape. We performed a variation partitioning analysis to assess the relative importance of these variables for predicting patterns of functional traits in communities, and projected the best models under several climate change scenarios to examine future potential changes in vegetation functional properties. Not all indices of trait patterns within communities could be modelled with the same level of accuracy: the models for mean and extreme values of functional traits provided substantially better predictive accuracy than the models calibrated for diversity indices. Topographic and climatic factors were more important predictors of functional trait patterns within communities than edaphic predictors. Overall, model projections forecast an increase in mean vegetation height and in mean specific leaf area following climate warming. This trend was important at mid elevation, particularly between 1000 and 2000 m asl. With this study we showed that topographic, climatic and edaphic variables can successfully model descriptors of community-level patterns of plant functional traits such as mean and extreme trait values. However, which factors determine the diversity of functional traits in plant communities remains unclear and requires further investigation.
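
A minimal sketch of the community-level trait indices modelled above: for one plot, the community mean and extremes of a trait are computed from the species present (here abundance-weighted for the mean), and such plot-level indices can then be regressed on abiotic predictors. Trait values, abundances and the elevation gradient below are made up for illustration.

```python
import numpy as np

trait = np.array([12.0, 35.0, 60.0, 150.0])     # e.g. vegetative height (cm) of 4 species
abundance = np.array([0.4, 0.3, 0.2, 0.1])      # relative abundances in the plot

cwm = np.average(trait, weights=abundance)      # community (weighted) mean trait
print("community mean trait :", cwm)
print("community extremes   :", trait.min(), trait.max())

# Regressing plot-level indices on an abiotic gradient (toy data):
elevation = np.array([600.0, 1000.0, 1400.0, 1800.0, 2200.0])
cwm_per_plot = np.array([80.0, 65.0, 50.0, 38.0, 25.0])
slope, intercept = np.polyfit(elevation, cwm_per_plot, 1)
print(f"CWM ~ {intercept:.1f} + {slope:.3f} * elevation")
```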