995 results for gradually truncated log-normal
Abstract:
Non-linear mathematical models used in the performance analysis of irrigation systems were compared in order to identify the one that best fits observed distribution profiles of the water applied in irrigation. Four probability models (normal, log-normal, gamma and beta) and two power models (the Silva and Karmeli models) were considered, applied to 91 irrigation performance evaluations. Comparison of the cumulative frequency curves of the sum of squared errors, obtained from fitting each model to the data, showed that the Silva model is statistically the best among the models tested.
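The model comparison described above can be sketched as follows: fit each candidate distribution to the observed depths and rank the fits by the sum of squared errors between the fitted and empirical CDFs. The data below are synthetic (an assumption), and the Silva and Karmeli power models are not available in SciPy, so only the four probability models are shown.

```python
# Sketch: rank candidate distributions by SSE between fitted and empirical
# CDFs of irrigation application depths (synthetic data; an assumption).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
depths = rng.gamma(shape=8.0, scale=2.5, size=91)  # synthetic depths (mm)

x = np.sort(depths)
ecdf = (np.arange(1, x.size + 1) - 0.5) / x.size   # empirical CDF (Hazen)

candidates = {
    "normal":     stats.norm,
    "log-normal": stats.lognorm,
    "gamma":      stats.gamma,
    "beta":       stats.beta,
}

sse = {}
for name, dist in candidates.items():
    if name == "beta":
        # Beta needs data on (0, 1): rescale to slightly inside the range.
        lo, hi = x.min() * 0.99, x.max() * 1.01
        z = (x - lo) / (hi - lo)
        params = stats.beta.fit(z, floc=0, fscale=1)
        fitted = stats.beta.cdf(z, *params)
    else:
        params = dist.fit(x)
        fitted = dist.cdf(x, *params)
    sse[name] = float(np.sum((fitted - ecdf) ** 2))

best = min(sse, key=sse.get)
print({k: round(v, 4) for k, v in sse.items()}, "best:", best)
```

The paper compares cumulative frequency curves of the SSE over 91 cases; the sketch shows a single case only.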
Abstract:
The objective of this work was to present alternative univariate and bivariate models for evaluating the feed conversion (FC) of Piau pigs using Bayesian inference. The effects of sex and genotype on the animals' FC were assessed through Markov chain Monte Carlo (MCMC) simulation and integrated nested Laplace approximation (INLA). The univariate model was evaluated with different error distributions - normal (Gaussian), Student's t, gamma, log-normal and skew-normal - while the bivariate model assumed normal errors. The skew-normal distribution gave the most parsimonious model for inference on the direct (univariate) response of FC to the effects of sex and genotype, which were not significant. The bivariate model was able to identify significant differences in weight gain and feed intake at significance levels not detected by the univariate model. It also detected differences between sexes when grouped by genotype NN (males, 2.73±0.04; females, 2.68±0.04) and Nn (males, 2.70±0.07; females, 2.64±0.07), and showed greater accuracy and precision in the nutritional inferences. In both approaches, the Bayesian method proves flexible and efficient for evaluating the nutritional performance of the animals.
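A minimal sketch of the univariate Bayesian analysis, with synthetic FC data and a plain random-walk Metropolis sampler (assumptions: the paper's data, priors, and MCMC implementation are not reproduced here; only the normal-error case is shown, not the Student-t, gamma, log-normal or skew-normal variants).

```python
# Random-walk Metropolis for FC ~ Normal(mu_sex, sigma), synthetic data.
import numpy as np

rng = np.random.default_rng(0)
fc_male   = rng.normal(2.73, 0.15, 40)   # synthetic FC, males (assumption)
fc_female = rng.normal(2.66, 0.15, 40)   # synthetic FC, females (assumption)
data = [fc_male, fc_female]

def log_post(theta):
    mu_m, mu_f, log_sig = theta
    sig = np.exp(log_sig)
    # Normal log-likelihood (constants dropped) summed over both sexes.
    ll = sum(np.sum(-0.5 * ((d - mu) / sig) ** 2 - np.log(sig))
             for d, mu in zip(data, (mu_m, mu_f)))
    # Weak N(2.7, 1) priors on the means, flat prior on log sigma.
    lp = -0.5 * ((mu_m - 2.7) ** 2 + (mu_f - 2.7) ** 2)
    return ll + lp

theta, samples = np.array([2.7, 2.7, np.log(0.2)]), []
for i in range(20000):
    prop = theta + rng.normal(0, 0.02, 3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    if i >= 5000:                        # discard burn-in
        samples.append(theta.copy())
samples = np.array(samples)
post_mean = samples[:, :2].mean(axis=0)
print("posterior means (male, female):", post_mean.round(3))
```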
Abstract:
The most suitable method for estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, assuming that the data come from a determinate family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using estimated parameters from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original size or log-transformed data, and size measurements with different dimensionality (longitudes, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after standardization by division by the sample geometric mean, arises as the most reliable and generalizable method of size diversity evaluation.
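The recommended recipe can be sketched directly: standardize sizes by the sample geometric mean, estimate the pdf with a Gaussian kernel, and evaluate the Shannon integral H = -∫ f(x) ln f(x) dx numerically. The "sizes" below are synthetic log-normal draws (an assumption); the grid and bandwidth choices are likewise illustrative.

```python
# Kernel estimate of continuous Shannon size diversity after
# geometric-mean standardization (synthetic sizes; an assumption).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=1.0, sigma=0.6, size=500)

# Standardization: divide by the geometric mean of the sample.
gmean = np.exp(np.mean(np.log(sizes)))
z = sizes / gmean

kde = gaussian_kde(z)
grid = np.linspace(z.min() * 0.5, z.max() * 1.5, 2000)
f = np.clip(kde(grid), 1e-300, None)     # avoid log(0) in the tails
y = f * np.log(f)
H = -float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid)))  # trapezoid rule
print("size diversity H ≈", round(H, 3))
```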
Abstract:
This paper sets out to identify the initial positions of the different decision makers who intervene in a group decision making process with a reduced number of actors, and to establish possible consensus paths between these actors. As a methodological support, it employs one of the most widely-known multicriteria decision techniques, namely, the Analytic Hierarchy Process (AHP). Assuming that the judgements elicited by the decision makers follow the so-called multiplicative model (Crawford and Williams, 1985; Altuzarra et al., 1997; Laininen and Hämäläinen, 2003) with log-normal errors and unknown variance, a Bayesian approach is used in the estimation of the relative priorities of the alternatives being compared. These priorities, estimated by way of the median of the posterior distribution and normalised in a distributive manner (priorities add up to one), are a clear example of compositional data that will be used in the search for consensus between the actors involved in the resolution of the problem through the use of Multidimensional Scaling tools.
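Under the multiplicative model a_ij = (w_i / w_j) · e_ij with log-normal errors, the classical point estimate of the priorities is the row geometric mean of the pairwise-comparison matrix, normalised distributively to sum to one. The paper instead uses the median of a Bayesian posterior, but the geometric-mean estimate (the MLE under this model) illustrates the setup. The 3×3 judgement matrix below is made up (an assumption).

```python
# Row-geometric-mean priorities for a multiplicative AHP model with
# log-normal errors (illustrative judgement matrix; an assumption).
import numpy as np

A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])

gm = np.exp(np.mean(np.log(A), axis=1))  # row geometric means
w = gm / gm.sum()                        # distributive normalisation
print("priorities:", w.round(3))
```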
Abstract:
It is a well-known phenomenon that the constant-amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit, defined as the maximum stress at the notch, is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of empirical formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to form a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It is shown that the size of a sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated, based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be defined. This makes it possible to calculate an estimate of the largest expected crack for any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components, another source of size effect has to be taken into account. Consider two specimens of similar shape but different size: the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher.
The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on linear elastic fracture mechanics. It is possible to calculate an accurate value of the stress intensity factor in a non-linear stress field using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral works. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the log-normal distribution. Both of them can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient must be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give satisfactory results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches, so the use of linear elastic fracture mechanics becomes questionable.
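The statistical size effect described above can be sketched in a few lines: if initiated crack depths follow a parent distribution F, the largest crack among n initiated cracks (n scaling with the stressed surface area) has CDF F(a)^n, and the fatigue limit follows from linear elastic fracture mechanics. All parameter values below (the log-normal crack-depth distribution, the threshold dK_th, the geometry factor Y) are illustrative assumptions, not values from the thesis.

```python
# Median largest crack in a sample of n, and the resulting LEFM-based
# fatigue limit (all parameters are illustrative assumptions).
import numpy as np
from scipy import stats

parent = stats.lognorm(s=0.5, scale=50e-6)   # crack depth a, median 50 µm

def median_largest(n):
    # F(a)**n = 0.5  =>  a = F^{-1}(0.5**(1/n))
    return parent.ppf(0.5 ** (1.0 / n))

dK_th, Y = 6.0e6, 0.73                       # Pa*sqrt(m), geometry factor
limits = []
for n in (10, 100, 1000):                    # small vs large stressed area
    a = median_largest(n)
    s_lim = dK_th / (Y * np.sqrt(np.pi * a)) # fatigue limit estimate
    limits.append(s_lim)
    print(f"n={n:5d}  median largest crack {a * 1e6:6.1f} µm  "
          f"fatigue limit {s_lim / 1e6:6.1f} MPa")
```

The printout shows the effect directly: a larger stressed area (larger n) yields a larger expected crack and hence a lower fatigue limit.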
Abstract:
A prospective study of IgG and IgM isotypes of anticardiolipin antibodies (aCL) in a series of 100 patients with systemic lupus erythematosus was carried out. To determine the normal range of both isotype titres, a group of 100 normal control serum samples was studied, and a log-normal distribution of IgG and IgM isotypes was found. A serum sample was regarded as positive for IgG anticardiolipin antibody if a binding index greater than 2.85 (SD 3.77) was detected, and a binding index greater than 4.07 (3.90) was defined as positive for IgM anticardiolipin antibody. Twenty four patients were positive for IgG aCL, 20 for IgM aCL, and 36 for IgG or IgM aCL, or both. IgG aCL were found to have a significant association with thrombosis and thrombocytopenia, and IgM aCL with haemolytic anaemia and neutropenia. Specificity and predictive value for these clinical manifestations increased at moderate and high anticardiolipin antibody titres. In addition, a significant association was found between aCL and the presence of lupus anticoagulant. Identification of these differences in the anticardiolipin antibody isotype associations may improve the clinical usefulness of these tests, and this study confirms the good specificity and predictive value of the anticardiolipin antibody titre for these clinical manifestations.
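A common way to build such a cut-off from log-normally distributed control titres is to fit on the log scale and take mean + 2 SD; this sketch illustrates that construction only (the paper's exact rule and data are not restated, and the control binding indices below are synthetic assumptions).

```python
# Log-normal cut-off sketch for aCL binding indices (synthetic controls).
import numpy as np

rng = np.random.default_rng(7)
controls = rng.lognormal(mean=0.3, sigma=0.45, size=100)  # synthetic data

log_bi = np.log(controls)
cutoff = np.exp(log_bi.mean() + 2.0 * log_bi.std(ddof=1))  # mean + 2 SD, log scale
positive = controls > cutoff
print(f"cut-off ≈ {cutoff:.2f}, positives among controls: {positive.sum()}/100")
```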
Abstract:
The plant composition, structure, diversity, successional stage and species distribution of the 6-ha cerrado area on the UEG campus were inventoried using 30 plots of 100 m² each. Sampling individuals with dbs (stem diameter at soil level) of 5 cm or more yielded 515 individuals, representing 20 families, 28 genera and 46 species. The richest families were Leguminosae, Vochysiaceae and Malpighiaceae. The species Qualea grandiflora, Byrsonima crassa, Erythroxylum tortuosum, Qualea parviflora and Miconia ferruginata had the highest importance values. The diversity of the area (H' = 1.353) is lower than values reported for the Cerrado, which may be a consequence both of the abundance of species such as Q. grandiflora and B. crassa and of anthropogenic interference at the site, including fires. The successional index (IS = 2.3) indicates a community at an intermediate stage of succession, since this index ranges from 1 to 3. The community fitted only the log-normal model, which, according to the literature, may reflect the relative proportions of dominant, intermediate and rare species.
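A common shortcut for checking a log-normal species-abundance model is to log-transform the abundances and test them for normality. The abundance vector below is synthetic (an assumption); only the species count of 46 comes from the survey.

```python
# Log-normal species-abundance check via normality of log abundances
# (synthetic abundances for 46 species; an assumption).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
abund = np.sort(rng.lognormal(mean=1.5, sigma=1.0, size=46))[::-1]

log_n = np.log(abund)
stat, p = stats.shapiro(log_n)   # Shapiro-Wilk on log abundances
print(f"Shapiro-Wilk on log abundances: W={stat:.3f}, p={p:.3f}")
print("log-normal model not rejected" if p > 0.05 else "log-normal model rejected")
```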
Abstract:
Microscopic visualization, especially in transparent micromodels, can provide valuable information for understanding transport phenomena at the pore scale in different processes occurring in porous materials (food, timber, soils, etc.). Micromodel studies focus mainly on the observation of multi-phase flow, which is closer to reality. The aim of this work was to study the flexography process and its application in the manufacture of transparent polyester-resin micromodels, applied here to carrots. The materials used to implement a flexographic station for micromodel construction were a thermoregulated water bath, a UV-light exposure chamber, a photosensitive substance (photopolymer), RTV silicone, polyester resin, and glass plates. Pore-size distribution data were obtained for the particular kind of carrot used, and a transparent micromodel with a square cross-section and a log-normal pore-size distribution was built, with pore radii ranging from 10 to 110 µm (average of 22 µm) and a micromodel size of 10 × 10 cm. Finally, the protocol for producing 2-D transparent polyester-resin micromodels was successfully implemented.
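A pore-radius field matching the reported statistics can be sketched as a truncated log-normal draw: radii restricted to the reported 10–110 µm range with a mean near 22 µm. Only the range and the mean are given in the abstract; the median and sigma below are assumptions chosen to reproduce them roughly.

```python
# Truncated log-normal pore radii (median and sigma are assumptions).
import numpy as np

rng = np.random.default_rng(5)

def pore_radii(n, median_um=20.0, sigma=0.35, lo=10.0, hi=110.0):
    # Rejection-sample the truncation: draw extra, keep the first n in range.
    r = rng.lognormal(mean=np.log(median_um), sigma=sigma, size=2 * n)
    return r[(r >= lo) & (r <= hi)][:n]

r = pore_radii(10000)
print(f"mean radius ≈ {r.mean():.1f} µm, range {r.min():.1f}–{r.max():.1f} µm")
```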
Abstract:
In this paper, we provide both qualitative and quantitative measures of the cost of measuring the integrated volatility by the realized volatility when the frequency of observation is fixed. We start by characterizing for a general diffusion the difference between the realized and the integrated volatilities for a given frequency of observations. Then, we compute the mean and variance of this noise and the correlation between the noise and the integrated volatility in the Eigenfunction Stochastic Volatility model of Meddahi (2001a). This model has, as special examples, log-normal, affine, and GARCH diffusion models. Using some previous empirical works, we show that the standard deviation of the noise is not negligible with respect to the mean and the standard deviation of the integrated volatility, even if one considers returns at five minutes. We also propose a simple approach to capture the information about the integrated volatility contained in the returns through the leverage effect.
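The realized-vs-integrated volatility discrepancy can be illustrated by simulation for a simple log-normal stochastic volatility model (an OU process in log-variance), one of the special cases of the Eigenfunction Stochastic Volatility class mentioned above. All parameter values and the Euler discretization are illustrative assumptions, not the paper's calibration.

```python
# Realized (78 five-minute returns/day) vs integrated variance for a
# log-normal SV model, Euler-discretized (parameters are assumptions).
import numpy as np

rng = np.random.default_rng(11)
n_days, m_fine, m_coarse = 250, 780, 78    # 78 five-minute returns per day

dt = 1.0 / m_fine                          # one trading day = 1
kappa, theta, xi = 0.05, np.log(1e-4), 0.3 # OU parameters for log-variance
logv = theta
step = m_fine // m_coarse
iv, rv = np.zeros(n_days), np.zeros(n_days)
for d in range(n_days):
    acc, r5 = 0.0, []
    for j in range(m_fine):
        v = np.exp(logv)
        iv[d] += v * dt                    # integrated variance (fine grid)
        acc += np.sqrt(v * dt) * rng.normal()
        logv += kappa * (theta - logv) * dt + xi * np.sqrt(dt) * rng.normal()
        if (j + 1) % step == 0:            # close a five-minute return
            r5.append(acc)
            acc = 0.0
    rv[d] = float(np.sum(np.square(r5)))

noise = rv - iv
print(f"mean IV {iv.mean():.3e}, mean noise {noise.mean():+.1e}, "
      f"sd(noise)/mean(IV) {noise.std() / iv.mean():.2f}")
```

Even at the five-minute frequency, the standard deviation of the measurement noise is a sizeable fraction of the mean integrated variance, which is the qualitative point of the paper.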
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials, respectively. The eigenfunction approach has at least six advantages: (i) it is general, since any square integrable function may be written as a linear combination of the eigenfunctions; (ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal component analysis; (iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; (iv) more importantly, this generates fat tails for the variance and return processes; (v) in contrast to popular models, the variance of the variance is a flexible function of the variance; (vi) these models are closed under temporal aggregation.
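The Hermite case can be checked numerically: for a Gaussian AR(1) state x' | x ~ N(ρx, 1 − ρ²), the probabilists' Hermite polynomials He_k satisfy E[He_k(x') | x] = ρ^k He_k(x), i.e. they are eigenfunctions of the conditional expectation operator with eigenvalues ρ^k. This sketch verifies the identity by Gauss-Hermite quadrature (ρ and x are arbitrary illustrative values).

```python
# Numerical check: He_k are eigenfunctions of the AR(1) conditional
# expectation operator, with eigenvalues rho**k.
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss

rho, x = 0.7, 1.3
nodes, weights = hermegauss(60)          # weight function exp(-t**2 / 2)
weights = weights / weights.sum()        # normalise to N(0, 1) probabilities

for k in range(1, 5):
    coef = [0.0] * k + [1.0]             # coefficients selecting He_k
    xp = rho * x + np.sqrt(1 - rho**2) * nodes   # values of x' given x
    lhs = float(np.sum(weights * hermeval(xp, coef)))  # E[He_k(x') | x]
    rhs = rho**k * float(hermeval(x, coef))
    assert abs(lhs - rhs) < 1e-8, (k, lhs, rhs)
print("He_k are eigenfunctions with eigenvalues rho**k for k = 1..4")
```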
Abstract:
The main objective of this work is to study in depth certain advanced biostatistical techniques in evaluative research in adult cardiac surgery. The studies were designed to integrate the concepts of survival analysis, propensity-score regression analysis, and cost analysis. The first manuscript evaluates survival after surgical repair of acute dissection of the ascending aorta. The statistical analyses used include: survival analyses with parametric hazard-phase regression and other parametric (exponential, Weibull), semi-parametric (Cox) or non-parametric (Kaplan-Meier) methods; survival compared with a cohort matched for age, sex and race using government life tables; and regression models with bootstrapping and multinomial logit models. The study showed that survival improved over 25 years in connection with changes in surgical techniques and diagnostic imaging. The second manuscript focuses on the outcomes of isolated coronary artery bypass grafting in patients with a history of percutaneous coronary intervention. The statistical analyses used include: propensity-score regression models; a complex (1:3) matching algorithm; and statistical analyses appropriate for matched groups (standardized differences, generalized estimating equations, stratified Cox models). The study showed that percutaneous coronary intervention performed 14 days or more before coronary bypass surgery is not associated with adverse short- or long-term outcomes. The third manuscript evaluates the financial consequences and demographic changes arising for a university hospital centre following the establishment of a satellite cardiac surgery program.
The statistical analyses used include: multivariate two-way ANOVA regression models (logistic, linear or ordinal); propensity scores; and cost analyses with parametric log-normal models. "Survival" analysis models were also explored, using cost instead of time as the dependent variable, and led to similar conclusions. The study showed that, after the establishment of the satellite program, fewer low-complexity patients were referred from the satellite program's region to the university hospital centre, with an increase in nursing workload and costs.
Abstract:
The classical methods of analysing time series by the Box-Jenkins approach assume that the observed series fluctuates around changing levels with constant variance. That is, the time series is assumed to be of a homoscedastic nature. However, financial time series exhibit heteroscedasticity in the sense that they possess non-constant conditional variance given the past observations. The analysis of financial time series therefore requires the modelling of such variances, which may depend on some time-dependent factors or on their own past values. This led to the introduction of several classes of models to study the behaviour of financial time series. See Taylor (1986), Tsay (2005), Rachev et al. (2007). The class of models used to describe the evolution of conditional variances is referred to as stochastic volatility models. The stochastic models available for analysing the conditional variances are based on either normal or log-normal distributions. One of the objectives of the present study is to explore the possibility of employing some non-Gaussian distributions to model the volatility sequences and then study the behaviour of the resulting return series. This led us to work on the related problem of statistical inference, which is the main contribution of the thesis.
Abstract:
It is well known that the density of a dissolved substance can decisively determine the direction and strength of its movement in the subsurface. Numerous studies have shown that the permeability distribution of a porous medium can amplify or attenuate these density effects. How this coupled effect influences the mixing of two fluids was investigated in this work, coupling an experimental model with both a numerical and an analytical model. The stochastic theory of macrodispersion based on perturbation theory was extended in this work for the case of transverse macrodispersion. For the case of stable stratification, a series of carefully controlled two-dimensional experiments on a stochastically heterogeneous model aquifer was carried out in a model tank (10 m x 1.2 m x 0.1 m) at the University of Kassel. Test series were performed with varying concentration differences (250 ppm to 100,000 ppm) and flow velocities (u = 1 m/d to 8 m/d) on three differently anisotropically packed porous media with varying variances and correlations of the log-normally distributed permeabilities. The stationary spatial concentration spread of the salt-water plume was measured via electrical conductivity, and the dispersion was calculated from the height difference between the 84% and 16% relative concentration breakthroughs. In parallel, a numerical model was set up with the density-dependent finite-element flow and transport code SUTRA. With the calibrated numerical model, predictions for possible transport scenarios, sensitivity analyses and stochastic Monte Carlo simulations were carried out. The flow velocity was set, in both the experimental and the numerical model, via constant pressure boundaries at the inlet and outlet tanks.
This revealed a strong sensitivity of the spatial concentration spread to local pressure variations. The investigations showed that, with increasing distance from the inflow edge, the concentration plume approaches an effective value in a wave-like manner, from which the macrodispersivity can be determined. Visible non-ergodic effects appeared, i.e. strong deviations of the second spatial moments of the concentration distribution in the deterministic experiments from the expected values of the stochastic theory. The transverse macrodispersivity increased proportionally to the variance and correlation of the log-normal permeability distribution, and inversely proportionally to the flow velocity and the density difference of the two fluids. Starting from the density-dependent macrodispersion tensor developed by Welty et al. [2003] by means of perturbation theory, the stochastic formula for transverse macrodispersion was further developed in this work and verified both experimentally and numerically.
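A stochastically heterogeneous aquifer of the kind used in these experiments can be sketched as a 2-D log-normally distributed permeability field with anisotropic correlation. The sketch below generates such a field by filtering white noise with an anisotropic Gaussian kernel and rescaling to a target ln-K variance; the variance, correlation lengths, and geometric-mean permeability are illustrative assumptions (and the resulting covariance is Gaussian rather than exponential).

```python
# Anisotropic log-normal permeability field for a ~10 m x 1.2 m tank
# (all statistical parameters are illustrative assumptions).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(8)
nx, nz, dx = 400, 48, 0.025            # grid and cell size (m)
var_lnk = 1.0                          # target variance of ln K
lam_x, lam_z = 0.20, 0.05              # anisotropic correlation lengths (m)

white = rng.normal(size=(nx, nz))
lnk = gaussian_filter(white, sigma=(lam_x / dx, lam_z / dx), mode="wrap")
lnk = (lnk - lnk.mean()) / lnk.std() * np.sqrt(var_lnk)  # mean 0, variance 1
K = 1e-10 * np.exp(lnk)                # permeability field (m^2)
print(f"ln K variance ≈ {lnk.var():.2f}, anisotropy λx/λz = {lam_x / lam_z:.0f}")
```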