972 results for sinh-normal distribution
Abstract:
Chocolate is considered a complex emulsion and a luxury food which, during consumption, triggers stimuli that activate the pleasure centres of the human brain. Given the importance of this food, it becomes necessary to study and evaluate the best way to improve chocolate quality. This work aimed to verify and analyse the quality of the chocolate-mass manufacturing process with respect to (i) the traceability of the raw materials and of the finished product and (ii) determining and studying the effect of some process parameters on the characteristics of the mass, through the variables viscosity, shear stress, critical shear stress ("yield value") and particle size. These variables were measured in milk chocolate masses with the formulation name CAI, produced at the company's two manufacturing units (UF1 and UF2). The parameters studied at UF1 were the influence of the conches and of the ingredients. At UF2, the influence of manufacturing rework was studied, as well as the influence of manufacturing rework together with the effect of one ingredient, sugar. The results for viscosity, shear stress and critical shear stress ("yield value") were analysed statistically by analysis of variance (ANOVA), using the Kolmogorov-Smirnov, Shapiro-Wilk and Levene tests to verify the conditions of applicability of this analysis. Since the particle-size results did not follow a normal distribution, they were analysed by the non-parametric Kruskal-Wallis method. These analyses were performed in the "Statistical Package for the Social Sciences" (SPSS). From the results obtained, it is concluded that, for UF1, the conche affects the shear stress, viscosity and critical shear stress of the chocolate produced, insofar as there are differences between the conches studied. For this unit it is also concluded that the ingredients influence the particle size of the mass. In the case of UF2, it is concluded that shear stress is affected only by the sugar batch, viscosity is affected both by the sugar batch and by the presence of manufacturing rework, and critical shear stress is affected by neither of these effects. Particle size at this unit is affected by the sugar batches studied.
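A minimal sketch of the assumption-checking workflow this abstract describes, in Python with scipy.stats (the original analysis was run in SPSS; the group names and measurements below are hypothetical placeholders, not the study's data):

```python
# Sketch of the ANOVA applicability checks described in the abstract;
# all measurements here are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
viscosity_conche_a = rng.normal(2.1, 0.2, 30)  # hypothetical viscosity, conche A
viscosity_conche_b = rng.normal(2.4, 0.2, 30)  # hypothetical viscosity, conche B
groups = [viscosity_conche_a, viscosity_conche_b]

# Conditions of applicability: normality of each group (Shapiro-Wilk,
# Kolmogorov-Smirnov) and homogeneity of variances (Levene)
normal = all(stats.shapiro(g).pvalue > 0.05 and
             stats.kstest(stats.zscore(g), "norm").pvalue > 0.05
             for g in groups)
equal_var = stats.levene(*groups).pvalue > 0.05

if normal and equal_var:
    stat, p = stats.f_oneway(*groups)   # one-way ANOVA
else:
    stat, p = stats.kruskal(*groups)    # non-parametric Kruskal-Wallis fallback
print(f"p-value for conche effect: {p:.4f}")
```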
Abstract:
Master's degree in Accounting and Financial Analysis
Abstract:
Beyond the classical statistical approaches (determination of basic statistics, regression analysis, ANOVA, etc.), a new set of applications of different statistical techniques has increasingly gained relevance in the analysis, processing and interpretation of data concerning the characteristics of forest soils. This can be seen in some recent publications in the context of Multivariate Statistics. These new methods require additional care that is not always included or referred to in some approaches. In the particular case of geostatistical applications, it is necessary, in addition to geo-referencing all data acquisition, to collect the samples in regular grids and in sufficient quantity so that the variograms can reflect the spatial distribution of soil properties in a representative manner. As for the great majority of Multivariate Statistics techniques (Principal Component Analysis, Correspondence Analysis, Cluster Analysis, etc.), although in most cases they do not require the assumption of a normal distribution, they nevertheless need a proper and rigorous strategy for their use. In this work, some reflections about these methodologies will be presented, in particular about the main constraints that often occur during the information-collecting process and about the various possibilities of linking these different techniques. At the end, illustrations of some particular cases of the application of these statistical methods will also be presented.
Abstract:
Dissertation submitted to obtain the Master's degree in Industrial Engineering and Management
Abstract:
Background: Coronary artery bypass graft (CABG) is a standard surgical option for patients with diffuse and significant arterial plaque. This procedure, however, is not free of postoperative complications, especially pulmonary and cognitive disorders. Objective: This study aimed at comparing the impact of two different physiotherapy treatment approaches on the pulmonary and cognitive function of patients undergoing CABG. Methods: Neuropsychological and pulmonary function tests were applied, prior to and following CABG, to 39 patients randomized into two groups as follows: Group 1 (control) - 20 patients underwent one physiotherapy session daily; and Group 2 (intensive physiotherapy) - 19 patients underwent three physiotherapy sessions daily during the recovery phase at the hospital. Unpaired and paired Student t tests were used to compare continuous variables. Variables without a normal distribution were compared between groups by using the Mann-Whitney test and, within the same group at different times, by using the Wilcoxon test. The chi-square test assessed differences in categorical variables. Statistical tests with a p value ≤ 0.05 were considered significant. Results: Changes in pulmonary function were not significantly different between the groups. However, while Group 2 patients showed no decline in their neurocognitive function, Group 1 patients showed a decline in their cognitive functions (p ≤ 0.01). Conclusion: These results highlight the importance of physiotherapy after CABG and support the implementation of multiple sessions per day, providing patients with better psychosocial conditions and less morbidity.
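A hedged sketch of the test-selection logic described in the Methods, using scipy.stats with invented pre/post cognitive scores (not the study's data):

```python
# Hypothetical pre/post scores for the two groups; illustrates the choice
# between parametric and non-parametric tests described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1_pre, g1_post = rng.normal(100, 10, 20), rng.normal(95, 10, 20)   # control
g2_pre, g2_post = rng.normal(100, 10, 19), rng.normal(100, 10, 19)  # intensive

# Within a group: paired t test if the differences look normal, else Wilcoxon
diff = g1_post - g1_pre
if stats.shapiro(diff).pvalue > 0.05:
    within = stats.ttest_rel(g1_pre, g1_post)
else:
    within = stats.wilcoxon(g1_pre, g1_post)

# Between groups without normality: Mann-Whitney on the changes
between = stats.mannwhitneyu(g1_post - g1_pre, g2_post - g2_pre)

# Categorical outcomes (e.g. cognitive decline yes/no): chi-square test
counts = np.array([[12, 8], [4, 15]])  # hypothetical 2x2 contingency table
chi2, p, dof, expected = stats.chi2_contingency(counts)
print(within, between, p)
```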
Abstract:
The classical central limit theorem states the uniform convergence of the distribution functions of the standardized sums of independent and identically distributed square-integrable real-valued random variables to the standard normal distribution function. While first versions of the central limit theorem are already due to de Moivre (1730) and Laplace (1812), a systematic study of this topic started at the beginning of the last century with the fundamental work of Lyapunov (1900, 1901). Meanwhile, extensions of the central limit theorem are available for a multitude of settings, including, e.g., Banach-space-valued random variables as well as substantial relaxations of the assumptions of independence and identical distributions. Furthermore, explicit error bounds have been established, and asymptotic expansions are employed to obtain better approximations. Classical error estimates like the famous bound of Berry and Esseen are stated in terms of absolute moments of the random summands and therefore do not reflect a potential closeness of the distributions of the single random summands to a normal distribution. Non-classical approaches take this issue into account by providing error estimates based on, e.g., pseudomoments. The latter field of investigation was initiated by work of Zolotarev in the 1960s and is still in its infancy compared to the development of the classical theory. For example, non-classical error bounds for asymptotic expansions seem not to be available up to now ...
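For reference, the Berry-Esseen bound mentioned here has the following standard form for i.i.d. summands, with C an absolute constant:

```latex
% Berry-Esseen bound: for i.i.d. X_1, ..., X_n with E[X_i] = 0,
% E[X_i^2] = \sigma^2 > 0 and \rho = E[|X_i|^3] < \infty,
\sup_{x \in \mathbb{R}}
\left|\, P\!\left( \frac{X_1 + \cdots + X_n}{\sigma\sqrt{n}} \le x \right) - \Phi(x) \right|
\le \frac{C\,\rho}{\sigma^{3}\sqrt{n}}
```

Because the right-hand side depends only on the absolute third moment ρ, it cannot shrink when the individual summands are already close to normal, which is precisely the shortcoming that the pseudomoment-based, non-classical estimates address.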
Abstract:
1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error.
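A minimal sketch of the error treatment described in point 2, assuming coordinates in projected kilometres (the occurrence values below are hypothetical):

```python
# Shift each occurrence coordinate by a normal deviate with mean 0 and
# standard deviation 5 km, as in the error treatment described above.
import numpy as np

rng = np.random.default_rng(42)
occurrences = np.array([[532.1, 4179.6],
                        [548.3, 4185.2]])  # hypothetical x, y in km

noise = rng.normal(loc=0.0, scale=5.0, size=occurrences.shape)  # sd = 5 km
degraded = occurrences + noise  # spatially degraded data for the error models
```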
Abstract:
One feature of the modern nutrition transition is the growing consumption of animal proteins. The most common approach in the quantitative analysis of this change used to be the study of averages of food consumption, but this kind of analysis seems incomplete without knowledge of the number of consumers. Data about consumers are not usually published in historical statistics. This article introduces a methodological approach for reconstructing consumer populations, based on some assumptions about the diffusion process of foodstuffs and the modeling of consumption patterns with a log-normal distribution. The estimating process is illustrated with the specific case of milk consumption in Spain between 1925 and 1981. The results fit quite well with other available data and indirect sources, showing that this dietary change was a slow and late process. The reconstruction of consumer populations could shed new light on the study of nutritional transitions.
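A hypothetical illustration of how a log-normal consumption model yields a consumer count: the share of consumers is the probability that individual intake exceeds a minimum threshold. All parameter values below are invented, not the article's estimates for Spain:

```python
# If individual intake is log-normal, the consumer share follows directly
# from the log-normal tail probability. Parameters are hypothetical.
import math

mu, sigma = 3.0, 1.2   # mean and sd of log(intake)
threshold = 5.0        # hypothetical intake cut-off defining a "consumer"

# P(intake > t) = 1 - Phi((ln t - mu) / sigma)
z = (math.log(threshold) - mu) / sigma
consumer_share = 0.5 * math.erfc(z / math.sqrt(2))
print(f"estimated consumer share: {consumer_share:.1%}")
```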
Abstract:
The exposure to dust and polynuclear aromatic hydrocarbons (PAH) of 15 truck drivers from Geneva, Switzerland, was measured. The drivers were divided between "long-distance" drivers and "local" drivers and between smokers and nonsmokers and were compared with a control group of 6 office workers who were also divided into smokers and nonsmokers. Dust was measured on 1 workday both by a direct-reading instrument and by sampling. The local drivers showed higher exposure to dust (0.3 mg/m3) and PAH than the long-distance drivers (0.1 mg/m3), who showed no difference with the control group. This observation may be due to the fact that the local drivers spend more time in more polluted areas, such as streets with heavy traffic and construction sites, than do the long-distance drivers. Smoking does not influence exposure to dust and PAH of professional truck drivers, as measured in this study, probably because the ventilation rate of the truck cabins is relatively high even during cold days (11-15 r/h). The distribution of dust concentrations was shown in some cases to be quite different from the expected log-normal distribution. The contribution of diesel exhaust to these exposures could not be estimated since no specific tracer was used. However, the relatively low level of dust exposure does not support the hypothesis that present-day levels of diesel exhaust particulates play a significant role in the excess occurrence of lung cancer observed in professional truck drivers.
Abstract:
Introducing and describing data and understanding the normal distribution.
Abstract:
The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure of the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomials-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. Thus we propose to use a weighted linear regression approach, where all k-order polynomials are used as predictor variables and weights are proportional to the reference density. Finally, for the case of 2-order Hermite polynomials (normal reference) and 1-order Laguerre polynomials (exponential), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, like their composition.
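A speculative sketch of the proposed weighted-regression route for the normal-reference (Hermite) case; the target density, grid, and weighting details are illustrative assumptions, not the authors' implementation:

```python
# Regress the log-density on Hermite polynomials with weights proportional
# to the reference density. Purely illustrative; the density is invented.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.stats import norm

x = np.linspace(-4, 4, 400)              # discretization grid
f = norm.pdf(x, loc=0.5, scale=1.2)      # hypothetical density to represent
reference = norm.pdf(x)                  # standard normal reference density

y = np.log(f)                            # log-density; constant goes into He_0
V = hermevander(x, 2)                    # design matrix: He_0, He_1, He_2
w = np.sqrt(reference)                   # row scaling => weights ~ reference

coords, *_ = np.linalg.lstsq(V * w[:, None], y * w, rcond=None)
print(coords)  # coordinates w.r.t. the 2nd-order Hermite basis
```

Consistent with the remark above, for a 2nd-order Hermite basis with a normal reference the fitted coordinates carry the same information as the classical mean and variance.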
Abstract:
OBJECTIVE: Previous studies reported that the severity of cognitive deficits in euthymic patients with bipolar disorder (BD) increases with the duration of illness and postulated that progressive neuronal loss or shrinkage and white matter changes may be at the origin of this phenomenon. To explore this issue, the authors performed a case-control study including detailed neuropsychological and magnetic resonance imaging analyses in 17 euthymic elderly patients with BD and 17 healthy individuals. METHODS: Neuropsychological evaluation concerned working memory, episodic memory, processing speed, and executive functions. Volumetric estimates of the amygdala, hippocampus, entorhinal cortex, and anterior cingulate cortex were obtained using both voxel-based and region-of-interest morphometric methods. Periventricular and deep white matter were assessed semiquantitatively. Differences in cognitive performances and structural data between the BD and comparison groups were analyzed using paired t-tests or analysis of variance; the Wilcoxon test was used in the absence of a normal distribution. RESULTS: Compared with healthy individuals, patients with BD obtained significantly lower performances in processing speed, working memory, and episodic memory but not in executive functions. Morphometric analyses did not show significant volumetric or white matter differences between the two groups. CONCLUSIONS: Our results revealed impairment in verbal memory, working memory, and processing speed in euthymic older adults with BD. These cognitive deficits are comparable, both in terms of affected functions and effect sizes, to those previously reported in younger cohorts with BD. Together with the absence of structural brain abnormalities in our cohort, this observation does not support a progressively evolving neurotoxic effect in BD.
Abstract:
Aitchison and Bacon-Shone (1999) considered convex linear combinations of compositions. In other words, they investigated compositions of compositions, where the mixing composition follows a logistic Normal distribution (or a perturbation process) and the compositions being mixed follow a logistic Normal distribution. In this paper, I investigate the extension to situations where the mixing composition varies with a number of dimensions. Examples would be where the mixing proportions vary with time or distance or a combination of the two. Practical situations include a river where the mixing proportions vary along the river, or across a lake, and possibly with a time trend. This is illustrated with a dataset similar to that used in the Aitchison and Bacon-Shone paper, which looked at how pollution in a loch depended on the pollution in the three rivers that feed the loch. Here, I explicitly model the variation in the linear combination across the loch, assuming that the mean of the logistic Normal distribution depends on the river flows and relative distance from the source origins.
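A minimal sketch, under invented parameters, of a logistic Normal composition whose mean varies linearly with a covariate such as distance across the loch:

```python
# Sample a 3-part logistic Normal composition whose alr-mean drifts with a
# covariate; all coefficients and covariances are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

def inv_alr(y):
    """Inverse additive log-ratio: map R^(D-1) into the D-part simplex."""
    e = np.exp(np.append(y, 0.0))
    return e / e.sum()

def sample_composition(distance, cov):
    # The mean of the alr-coordinates drifts linearly with the covariate
    mu = np.array([0.5, -0.2]) + np.array([0.1, 0.3]) * distance
    return inv_alr(rng.multivariate_normal(mu, cov))

cov = 0.05 * np.eye(2)
print(sample_composition(0.0, cov))  # composition near the source
print(sample_composition(2.0, cov))  # composition further across the loch
```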
Abstract:
This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
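A hedged sketch of the GMM classification scheme described above, using scikit-learn's GaussianMixture on placeholder feature vectors rather than real speech spectra:

```python
# Fit one GMM per class on spectral features and classify a test recording
# by the higher average log-likelihood. Features are random placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
feats_healthy = rng.normal(0.0, 1.0, (200, 13))  # e.g. 13 cepstral features
feats_apnoea = rng.normal(0.5, 1.0, (200, 13))

gmm_h = GaussianMixture(n_components=8, covariance_type="diag").fit(feats_healthy)
gmm_a = GaussianMixture(n_components=8, covariance_type="diag").fit(feats_apnoea)

frames = rng.normal(0.5, 1.0, (50, 13))          # frames of a test recording
label = "apnoea" if gmm_a.score(frames) > gmm_h.score(frames) else "healthy"
print(label)
```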