969 results for Log-normal distribution
Abstract:
In Alzheimer's disease (AD) and Down's syndrome (DS), the size frequency distribution of the beta-amyloid (Abeta) deposits can be described by a log-normal model and may indicate the growth of the deposits. This study determined the size frequency distribution of the Abeta deposits in the temporal lobe in 8 cases of dementia with Lewy bodies (DLB) with associated AD pathology (DLB/AD). The size distributions of Abeta deposits were unimodal and positively skewed; the mean deposit size and the degree of skew varied with deposit type and brain region. Size distributions of the primitive deposits had lower means and were less skewed compared with the diffuse and classic deposits. In addition, size distributions in the hippocampus and parahippocampal gyrus (PHG) had larger means and a greater degree of skew compared with other cortical gyri. All size distributions deviated significantly from a log-normal model. There were more Abeta deposits than expected in the smaller size classes and fewer than expected near the mean and in the larger size classes. The data suggest that the pattern of growth of the Abeta deposits in DLB/AD depends both on deposit morphology and brain area. In addition, Abeta deposits in DLB appear to grow to within a more restricted size range than predicted and hence, to have less potential for growth compared with cases of 'pure' AD and DS.
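A minimal sketch of the kind of test this abstract describes: fitting a log-normal model to deposit sizes and testing for deviation from it. The sizes below are synthetic placeholders, not the study's measurements.

```python
# Sketch: testing whether a size-frequency distribution deviates from a
# log-normal model. The data here are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=3.0, sigma=0.5, size=500)  # hypothetical deposit diameters (um)

# Fit a log-normal with the location fixed at zero (sizes are strictly positive).
shape, loc, scale = stats.lognorm.fit(sizes, floc=0)

# Kolmogorov-Smirnov test against the fitted model (note: p-values are only
# approximate when parameters are estimated from the same data).
ks_stat, p_value = stats.kstest(sizes, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```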
Abstract:
Two years of harmonized aerosol number size distribution data from 24 European field monitoring sites have been analysed. The results give a comprehensive overview of the European near surface aerosol particle number concentrations and number size distributions between 30 and 500 nm of dry particle diameter. Spatial and temporal distribution of aerosols in the particle sizes most important for climate applications are presented. We also analyse the annual, weekly and diurnal cycles of the aerosol number concentrations, provide log-normal fitting parameters for median number size distributions, and give guidance notes for data users. Emphasis is placed on the usability of results within the aerosol modelling community. We also show that the aerosol number concentrations of Aitken and accumulation mode particles (with 100 nm dry diameter as a cut-off between modes) are related, although there is significant variation in the ratios of the modal number concentrations. Different aerosol and station types are distinguished from these data, and this methodology has potential for further categorization of stations by aerosol number size distribution type. The European submicron aerosol was divided into characteristic types: Central European aerosol, characterized by single-mode median size distributions, unimodal number concentration histograms and low variability in CCN-sized aerosol number concentrations; Nordic aerosol with low number concentrations, although showing pronounced seasonal variation, especially of Aitken mode particles; Mountain sites (altitude over 1000 m a.s.l.) with a strong seasonal cycle in aerosol number concentrations, high variability, and very low median number concentrations. Southern and Western European regions had fewer stations, which decreases the regional coverage of these results. Aerosol number concentrations over Britain and Ireland had very high variance, and there are indications of mixed air masses from several source regions; the Mediterranean aerosol exhibits high seasonality and a strong accumulation mode in the summer. The greatest concentrations were observed at the Ispra station in Northern Italy, with high accumulation mode number concentrations in the winter. The aerosol number concentrations at the Arctic station Zeppelin in Ny-Ålesund in Svalbard also have a strong seasonal cycle, with greater concentrations of accumulation mode particles in winter and a dominant Aitken mode in summer, indicating more recently formed particles. The number concentrations studied did not show any statistically significant work-week or weekday-related variation in any region. Analysis products are made openly available to the research community on a freely accessible internet site. The results give the modelling community a reliable, easy-to-use and freely available comparison dataset of aerosol size distributions.
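As a rough illustration of the log-normal fitting the paper provides parameters for, the sketch below fits a two-mode (Aitken + accumulation) log-normal model to a number size distribution over the 30-500 nm range. All numbers are made up for illustration; they are not the published fitting parameters.

```python
# Sketch: fitting a sum of log-normal modes to dN/dlogDp, with a synthetic
# "median" spectrum standing in for the real data.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(dp, n_tot, dp_g, sigma_g):
    """One log-normal mode of a number size distribution, dN/dlogDp."""
    return (n_tot / (np.sqrt(2 * np.pi) * np.log10(sigma_g))
            * np.exp(-(np.log10(dp / dp_g))**2 / (2 * np.log10(sigma_g)**2)))

def bimodal(dp, n1, d1, s1, n2, d2, s2):
    """Aitken + accumulation mode (100 nm dry diameter as the cut-off)."""
    return lognormal_mode(dp, n1, d1, s1) + lognormal_mode(dp, n2, d2, s2)

dp = np.logspace(np.log10(30), np.log10(500), 40)     # nm, as in the dataset
measured = bimodal(dp, 1500, 60, 1.7, 800, 180, 1.5)  # synthetic spectrum
p0 = [1000, 50, 1.6, 500, 200, 1.6]                   # initial guesses
params, _ = curve_fit(bimodal, dp, measured, p0=p0)
print(params)  # fitted N_tot, geometric mean diameter, geometric std per mode
```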
Abstract:
In this paper, we present various diagnostic methods for polyhazard models. Polyhazard models are a flexible family for fitting lifetime data. Their main advantage over single hazard models, such as the Weibull and the log-logistic models, is that they accommodate a large range of nonmonotone hazard shapes, such as bathtub and multimodal curves. Some influence methods, such as the local influence and total local influence of an individual, are derived, analyzed and discussed. The computation of the likelihood displacement, as well as of the normal curvature in the local influence method, is also discussed. Finally, an example with real data is given for illustration.
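A small sketch of the core idea: summing a decreasing and an increasing Weibull hazard yields a bathtub-shaped polyhazard that no single Weibull or log-logistic model can produce. Parameter values are illustrative only.

```python
# Sketch: a two-component poly-Weibull hazard producing a bathtub curve.
import numpy as np

def weibull_hazard(t, shape, scale):
    """h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

t = np.linspace(0.01, 5, 100)
# shape < 1 gives a decreasing hazard, shape > 1 an increasing one;
# their sum is bathtub-shaped.
h = weibull_hazard(t, 0.5, 1.0) + weibull_hazard(t, 3.0, 4.0)
print(h[:3], h[-3:])  # high early (infant mortality), rising late (wear-out)
```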
Abstract:
Over the years, crop insurance programs have become the focus of agricultural policy in the USA, Spain, Mexico, and more recently in Brazil. Given the increasing interest in insurance, accurate calculation of the premium rate is of great importance. We address the crop-yield distribution issue and its implications for pricing an insurance contract, considering the dynamic structure of the data and incorporating the spatial correlation in a Hierarchical Bayesian framework. Results show that empirical (insurers') rates are higher in low-risk areas and lower in high-risk areas. Such methodological improvement is particularly important in situations of limited data.
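For context, a toy version of the premium-rate calculation: the actuarially fair rate is the expected indemnity divided by the liability. The yield distribution below is an arbitrary placeholder, whereas the paper estimates it hierarchically with spatial correlation.

```python
# Sketch: Monte Carlo fair premium rate for a yield-guarantee contract,
# under an assumed (hypothetical) yield distribution.
import numpy as np

rng = np.random.default_rng(1)
yields = rng.gamma(shape=20, scale=150, size=100_000)  # hypothetical yields (kg/ha)
guarantee = 0.7 * yields.mean()                        # 70% coverage level

indemnity = np.maximum(guarantee - yields, 0.0)        # payout when yield falls short
premium_rate = indemnity.mean() / guarantee            # fair rate as share of liability
print(f"fair premium rate = {premium_rate:.4f}")
```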
Abstract:
In this paper we study the possible microscopic origin of heavy-tailed probability density distributions for the price variation of financial instruments. We extend the standard log-normal process to include another random component in the so-called stochastic volatility models. We study these models under an assumption, akin to the Born-Oppenheimer approximation, in which the volatility has already relaxed to its equilibrium distribution and acts as a background to the evolution of the price process. In this approximation, we show that all models of stochastic volatility should exhibit a scaling relation in the time lag of zero-drift modified log-returns. We verify that the Dow-Jones Industrial Average index indeed follows this scaling. We then focus on two popular stochastic volatility models, the Heston and Hull-White models. In particular, we show that in the Hull-White model the resulting probability distribution of log-returns in this approximation corresponds to the Tsallis (Student's t) distribution. The Tsallis parameters are given in terms of the microscopic stochastic volatility model. Finally, we show that the log-returns for 30 years of Dow Jones index data are well fitted by a Tsallis distribution, and we obtain the relevant parameters.
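A sketch of the final fitting step: the Tsallis distribution is the Student's t family up to reparametrization (q = 1 + 2/(ν + 1) for ν degrees of freedom), so fitting a t distribution to log-returns yields the entropic index q. The returns below are synthetic, not Dow Jones data.

```python
# Sketch: recovering a Tsallis entropic index from heavy-tailed log-returns
# via a Student's t fit. Data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
log_returns = stats.t.rvs(df=3, scale=0.01, size=5000, random_state=rng)

nu, loc, scale = stats.t.fit(log_returns)
q = 1 + 2 / (nu + 1)   # Tsallis entropic index implied by the fitted tails
print(f"nu = {nu:.2f}, Tsallis q = {q:.2f}")
```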
Abstract:
To determine self-consistently the time evolution of particle size and number density, in situ multi-angle polarization-sensitive laser light scattering was used. Cross-polarization intensities (incident and scattered light intensities with opposite polarization) measured at 135 degrees and ex situ transmission electron microscopy analysis demonstrate the existence of nonspherical agglomerates during the early phase of agglomeration. Later in the particle time development, both techniques reveal spherical particles again. The presence of strong cross-polarization intensities is accompanied by low-frequency instabilities detected in the scattered light intensities and plasma emission. It is found that the particle radius and particle number density during the agglomeration phase can be well described by the Brownian free molecule coagulation model. Application of this neutral particle coagulation model is justified by calculation of the particle charge, whereby it is shown that particles of a few tens of nanometers can be considered as neutral under our experimental conditions. The measured particle dispersion can be well described by a Brownian free molecule coagulation model including a log-normal particle size distribution.
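A toy sketch of the Brownian free molecule coagulation picture: with total particle volume conserved, the number density decays and the mean radius grows. It uses a monodisperse approximation rather than the paper's log-normal size distribution, and all constants are assumptions.

```python
# Sketch: monodisperse free-molecule coagulation, dn/dt = -(1/2) K n^2, with
# the radius recovered from volume conservation. Constants are illustrative.
import numpy as np

kB, T = 1.38e-23, 300.0            # Boltzmann constant (J/K), gas temperature (assumed)
rho = 2.0e3                        # particle mass density, kg/m^3 (assumed)
n, r = 1.0e16, 5.0e-9              # initial number density (m^-3) and radius (m)
v_total = n * (4/3) * np.pi * r**3 # conserved total particle volume per m^3
dt = 1e-4                          # time step (s)
for _ in range(10000):             # integrate 1 s of growth
    m = rho * (4/3) * np.pi * r**3
    # free-molecule Brownian kernel for equal-size particles:
    # K = pi d^2 <v_rel> with d = 2r and <v_rel> = sqrt(16 kB T / (pi m))
    K = 4 * (2 * r)**2 * np.sqrt(np.pi * kB * T / m)
    n -= 0.5 * K * n**2 * dt                    # coagulation removes particles
    r = (v_total / n / ((4/3) * np.pi))**(1/3)  # volume conservation grows radius
print(f"final n = {n:.2e} m^-3, r = {r*1e9:.1f} nm")
```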
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
The specimen distribution pattern of a species can be used to characterise a population of interest and also to provide area-specific guidance for pest management and control. In the municipality of Dracena, in the state of São Paulo, we analysed 5,889 Lutzomyia longipalpis specimens collected from the peridomiciles of 14 houses in a sector where American visceral leishmaniasis (AVL) is transmitted to humans and dogs. The goal was to analyse the dispersion pattern and to fit a theoretical distribution to the species' occurrence probabilities. From January-December 2005, samples were collected once per week using CDC light traps that operated for 12-h periods. Each collection was considered a sub-sample and was evaluated monthly. The standardised Morisita index was used as a measure of dispersion. Adherence tests were performed for the log-series distribution. The number of traps was used to adjust the octave plots. The quantity of Lu. longipalpis in the sector was highly aggregated for each month of the year, adhering to a log-series distribution for 11 of the 12 months analysed. A sex-stratified analysis demonstrated a pattern of aggregated dispersion adjusted for each month of the year. The classes and frequencies of the traps in octaves can be employed as indicators for entomological surveillance and AVL control.
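A sketch of the standardized Morisita index computation (the Smith-Gill variant, with chi-square critical boundaries) used to classify dispersion as aggregated. The 14 trap counts are hypothetical, not the study's data.

```python
# Sketch: standardized Morisita index I_p; values above 0.5 indicate
# significant aggregation at the 95% level.
import numpy as np
from scipy import stats

counts = np.array([120, 3, 45, 8, 310, 12, 5, 60, 2, 95, 7, 30, 4, 1])  # 14 traps
n, total = len(counts), counts.sum()

I_d = n * np.sum(counts * (counts - 1)) / (total * (total - 1))  # Morisita index

# Critical boundaries from the chi-square distribution (n - 1 df)
M_c = (stats.chi2.ppf(0.975, n - 1) - n + total) / (total - 1)   # clumped, > 1
M_u = (stats.chi2.ppf(0.025, n - 1) - n + total) / (total - 1)   # uniform, < 1

if I_d >= M_c:
    I_p = 0.5 + 0.5 * (I_d - M_c) / (n - M_c)
elif I_d >= 1:
    I_p = 0.5 * (I_d - 1) / (M_c - 1)
elif I_d > M_u:
    I_p = -0.5 * (1 - I_d) / (1 - M_u)
else:
    I_p = -0.5 - 0.5 * (M_u - I_d) / M_u
print(f"I_d = {I_d:.2f}, standardized I_p = {I_p:.2f}  (> 0.5 => aggregated)")
```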
Abstract:
The study of organism movement is essential for understanding how ecosystems function. In the case of exploited marine ecosystems, this leads to an interest in the spatial strategies of fishers. One of the most widely used approaches for modelling the movement of top predators is the Lévy random walk. A random walk is a mathematical model made up of random displacements. In the Lévy case, the displacement lengths follow a Lévy stable law. Also in this case, the lengths, as they tend to infinity (in practice, when they are large, e.g. relative to the median or the third quartile), follow a power law characteristic of the type of Lévy random walk (Cauchy, Brownian or strictly Lévy). In practice, not only is this property used in the converse direction without theoretical foundation, but the distribution tails, an imprecise notion in itself, are modelled by power laws without discussion of either the sensitivity of the results to the definition of the tail or the relevance of the goodness-of-fit tests and model selection criteria. In this work, which deals with the observed movements of three Peruvian anchovy fishing vessels, several models for the distribution tails (log-normal, exponential, truncated exponential, power and truncated power) were compared, as well as two possible definitions of the tail (from the median to infinity or from the third quartile to infinity). In terms of the statistical criteria and tests used, the truncated laws (exponential and power) turned out to be the best. They also incorporate the fact that, in practice, vessels do not exceed a certain displacement length. The choice of model proved sensitive to the choice of the start of the distribution tail: for the same vessel, the choice of one truncated model or the other depends on the interval of values of the variable over which the model is fitted. Finally, we discuss the ecological implications of the results of this work.
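A sketch of the model-comparison step described here: fit competing tail models above a tail threshold by maximum likelihood and compare them with AIC. The step lengths are synthetic, and only two of the five candidate models are shown.

```python
# Sketch: power law vs. shifted exponential above a tail threshold, compared
# by AIC. The tail is defined here as values above the third quartile.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
steps = stats.pareto.rvs(b=1.8, size=2000, random_state=rng)  # synthetic step lengths
x_min = np.quantile(steps, 0.75)
tail = steps[steps > x_min]

def aic(logl, k):
    return 2 * k - 2 * logl

# Power law (Pareto) above x_min: the exponent MLE is closed-form (Hill estimator).
alpha = len(tail) / np.sum(np.log(tail / x_min))
ll_pow = np.sum(np.log(alpha / x_min) - (alpha + 1) * np.log(tail / x_min))

# Shifted exponential above x_min: the rate MLE is 1 / mean excess.
lam = 1.0 / np.mean(tail - x_min)
ll_exp = np.sum(np.log(lam) - lam * (tail - x_min))

print(f"AIC power = {aic(ll_pow, 1):.1f}, AIC exponential = {aic(ll_exp, 1):.1f}")
```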
Abstract:
In this paper we describe the results of a simulation study performed to elucidate the robustness of the Lindstrom and Bates (1990) approximation method under non-normality of the residuals, in different situations. Concerning the fixed effects, the observed coverage probabilities and the true bias and mean square error values show that some aspects of this inferential approach are not completely reliable. When the true distribution of the residuals is asymmetrical, the true coverage is markedly lower than the nominal one. The best results are obtained for the skew-normal distribution, not for the normal distribution. On the other hand, the results are partially reversed concerning the random effects. Soybean genotype data are used to illustrate the methods and to motivate the simulation scenarios.
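A toy coverage check in the spirit of the study: how often a nominal 95% interval covers the truth when residuals are skewed. This is a plain one-sample setting, not the Lindstrom-Bates nonlinear mixed-effects approximation, so it only illustrates the notion of observed versus nominal coverage.

```python
# Sketch: observed coverage of a nominal 95% t-interval under skewed residuals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_mean, n, reps, hits = 1.0, 20, 2000, 0
a = 8                              # skewness parameter of the residuals
offset = stats.skewnorm.mean(a)    # subtract so residuals have zero mean
for _ in range(reps):
    y = true_mean + stats.skewnorm.rvs(a, size=n, random_state=rng) - offset
    half = stats.t.ppf(0.975, n - 1) * y.std(ddof=1) / np.sqrt(n)
    hits += abs(y.mean() - true_mean) <= half
print(f"observed coverage = {hits / reps:.3f} (nominal 0.95)")
```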
Abstract:
The modeling and estimation of the parameters that define the spatial dependence structure of a regionalized variable by geostatistical methods are fundamental, since these parameters, underlying the kriging of unsampled points, allow the construction of thematic maps. One or more atypical observations in the sample data can affect the estimation of these parameters, so assessing the combined influence of these observations through local influence analysis is essential. The purpose of this paper was to propose local influence analysis methods for a regionalized variable with an n-variate Student's t-distribution, and to compare them with the local influence analysis obtained when the same regionalized variable has an n-variate normal distribution. These methods were applied to soil physical properties and soybean yield data from an experiment carried out in a 56.68 ha commercial field in western Paraná, Brazil. Results showed that influential values are efficiently determined with the n-variate Student's t-distribution.
Abstract:
Guava response to liming and fertilization can be monitored by tissue testing. Tissue nutrient signature is often diagnosed against nutrient concentration standards. However, this approach has been criticized for not considering nutrient interactions and for generating numerical biases as a result of data redundancy, scale dependency and non-normal distribution. Techniques of compositional data analysis can control those biases by balancing groups of nutrients, such as those involved in liming and fertilization. The sequentially arranged and orthonormal isometric log ratios (ilr), or balances, avoid the numerical bias inherent to compositional data. The objectives were to relate tissue nutrient balances to the production of "Paluma" guava orchards differentially limed and fertilized, and to adjust the current patterns of nutrient balance to the range of the more productive guava trees. One 7-yr liming experiment and three 3-yr experiments with N, P and K trials were conducted in 'Paluma' orchards on an Oxisol. Plant N, P, K, Ca and Mg were monitored yearly. The [N, P, K | Ca, Mg], [N, P | K], [N | P] and [Ca | Mg] balances were selected to set apart the effects of liming (Ca-Mg) and fertilizers (N-K) on macronutrient balances. Liming largely influenced nutrient balances of guava in the Oxisol, while fertilization was less influential. The large range of guava yields and nutrient balances allowed defining balance ranges and comparing them with the critical ranges of nutrient concentration values currently used in Brazil and combined into ilr coordinates.
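A sketch of the sequentially arranged ilr balances named here. Each balance contrasts the geometric means of two groups of nutrients, scaled by the orthonormal coefficient sqrt(rs/(r+s)); the leaf concentrations below are hypothetical.

```python
# Sketch: isometric log-ratio (ilr) balances for the [N,P,K | Ca,Mg],
# [N,P | K], [N | P] and [Ca | Mg] partition, with made-up concentrations.
import numpy as np

def balance(num, den):
    """ilr balance between two groups of parts: sqrt(rs/(r+s)) * ln(g(num)/g(den))."""
    r, s = len(num), len(den)
    gmean = lambda v: np.exp(np.mean(np.log(v)))
    return np.sqrt(r * s / (r + s)) * np.log(gmean(num) / gmean(den))

leaf = {"N": 20.1, "P": 1.4, "K": 17.3, "Ca": 7.0, "Mg": 2.6}  # hypothetical g/kg
b1 = balance([leaf["N"], leaf["P"], leaf["K"]], [leaf["Ca"], leaf["Mg"]])
b2 = balance([leaf["N"], leaf["P"]], [leaf["K"]])
b3 = balance([leaf["N"]], [leaf["P"]])
b4 = balance([leaf["Ca"]], [leaf["Mg"]])
print([round(b, 3) for b in (b1, b2, b3, b4)])  # ilr coordinates of the sample
```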
Abstract:
The most suitable method for estimating size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, which assume that the data come from a particular family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, arises as the most reliable and generalizable method of size diversity evaluation.
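A sketch of the recommended procedure: standardize sizes by the sample geometric mean, estimate the pdf with a Gaussian kernel, and evaluate the Shannon integral H = -∫ f ln f via its sample average. The sizes are synthetic.

```python
# Sketch: kernel-based size diversity after geometric-mean standardization.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sizes = rng.lognormal(mean=0.0, sigma=0.8, size=400)  # hypothetical sizes

std_sizes = sizes / stats.gmean(sizes)                # geometric-mean standardization
kde = stats.gaussian_kde(std_sizes)
# Resubstitution estimate of H = -integral f ln f, i.e. the mean of -ln f(X).
diversity = -np.mean(np.log(kde(std_sizes)))
print(f"size diversity H = {diversity:.3f}")
# Up to estimator error, the same H is obtained from log-transformed data,
# since after this standardization the mean log size is zero.
```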
Abstract:
This paper sets out to identify the initial positions of the different decision makers who intervene in a group decision making process with a reduced number of actors, and to establish possible consensus paths between these actors. As a methodological support, it employs one of the most widely-known multicriteria decision techniques, namely, the Analytic Hierarchy Process (AHP). Assuming that the judgements elicited by the decision makers follow the so-called multiplicative model (Crawford and Williams, 1985; Altuzarra et al., 1997; Laininen and Hämäläinen, 2003) with log-normal errors and unknown variance, a Bayesian approach is used in the estimation of the relative priorities of the alternatives being compared. These priorities, estimated by way of the median of the posterior distribution and normalised in a distributive manner (priorities add up to one), are a clear example of compositional data that will be used in the search for consensus between the actors involved in the resolution of the problem through the use of Multidimensional Scaling tools.
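For intuition, a sketch of the classical point estimate that the multiplicative model with log-normal errors yields: the row geometric mean of the pairwise-comparison matrix, normalized distributively. The paper's Bayesian posterior-median estimate refines this; the matrix below is a made-up three-alternative example.

```python
# Sketch: AHP priorities via row geometric means (the log-normal-error MLE),
# normalized so the priorities add up to one.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # hypothetical pairwise judgements

log_gm = np.mean(np.log(A), axis=1)   # row geometric means, in log space
priorities = np.exp(log_gm)
priorities /= priorities.sum()        # distributive normalization
print(priorities)                     # relative priorities, sum to one
```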