979 results for Statistical Analysis
Abstract:
This thesis focused on statistical analysis methods and proposes the use of Bayesian inference to extract the information contained in experimental data by estimating the parameters of an Ebola model. The model is a system of differential equations expressing the behavior and dynamics of Ebola. Two data sets (onset and death data) were used jointly to estimate the parameters, which had not been done in previous work (Chowell, 2004). To make use of both data sets, a new version of the model was built. The model parameters were estimated and then used to calculate the basic reproduction number and to study the disease-free equilibrium. The parameter estimates were useful for determining how well the model fits the data and how informative the estimates were about possible relationships between variables. The solution showed that the Ebola model fits the observed onset data at 98.95% and the observed death data at 93.6%. Since the Bayesian inference cannot be performed analytically, a Markov chain Monte Carlo approach was used to generate samples from the posterior distribution over the parameters. These samples were used to check the accuracy of the model and other characteristics of the target posterior.
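The approach described above can be sketched in miniature: a random-walk Metropolis sampler targeting the posterior of two rate parameters of a simple SIR-type onset model fitted to noisy daily case counts. The SIR model, the flat priors on (0, 1), the Gaussian likelihood and all numerical values below are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def sir_onsets(beta, gamma, days=60, N=1000, I0=1):
    # Euler-integrated SIR model; returns daily new-case (onset) counts.
    S, I = N - I0, float(I0)
    onsets = []
    for _ in range(days):
        new_inf = beta * S * I / N
        S -= new_inf
        I += new_inf - gamma * I
        onsets.append(new_inf)
    return np.array(onsets)

def log_post(theta, data, sigma=5.0):
    # Flat prior on (0, 1)^2 plus a Gaussian likelihood on the onset counts.
    beta, gamma = theta
    if not (0 < beta < 1 and 0 < gamma < 1):
        return -np.inf
    resid = data - sir_onsets(beta, gamma)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(data, n_iter=2000, step=0.02, seed=0):
    # Random-walk Metropolis: accept a proposal with prob min(1, post ratio).
    rng = np.random.default_rng(seed)
    theta = np.array([0.5, 0.3])
    lp = log_post(theta, data)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop, data)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Synthetic "observed" onsets from known parameters plus noise.
data = sir_onsets(0.4, 0.2) + np.random.default_rng(1).normal(0, 5, 60)
chain = metropolis(data)
beta_hat, gamma_hat = chain[500:].mean(axis=0)   # discard burn-in
```

The posterior samples in `chain` can then be used, as in the abstract, to form point estimates, credible intervals and derived quantities such as the basic reproduction number (here beta/gamma).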
Abstract:
Two high performance liquid chromatography (HPLC) methods for the quantitative determination of indinavir sulfate were tested, validated and statistically compared. Assays were carried out using, as mobile phases, mixtures of dibutylammonium phosphate buffer pH 6.5 and acetonitrile (55:45) at 1 mL/min or citrate buffer pH 5 and acetonitrile (60:40) at 1 mL/min, an octylsilane column (RP-8) and a UV spectrophotometric detector at 260 nm. Both methods showed good sensitivity, linearity, precision and accuracy. The statistical analysis using Student's t-test for the determination of indinavir sulfate raw material and capsules indicated no statistically significant difference between the two methods.
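A comparison like the one above can be sketched as a paired Student's t-test on the same samples assayed by both methods. The recovery values below are hypothetical and purely illustrative; the abstract does not report the raw data or whether the paired or two-sample form of the test was used.

```python
import numpy as np

# Hypothetical % recovery of indinavir sulfate for the same ten samples
# assayed by the two HPLC methods (values are illustrative only).
method_a = np.array([99.1, 100.4, 98.7, 99.9, 100.2, 99.5, 100.0, 99.3, 99.8, 100.1])
method_b = np.array([99.4, 100.1, 98.9, 99.7, 100.5, 99.2, 99.9, 99.6, 99.7, 100.3])

# Paired Student's t: t = mean(d) / (sd(d) / sqrt(n)) on the differences.
d = method_a - method_b
n = d.size
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))
df = n - 1
```

With df = 9 the two-sided 5% critical value is about 2.262; an observed |t| below it, as with these illustrative numbers, leads to the abstract's conclusion of no significant difference between the methods.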
Abstract:
The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetically generated data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested on a couple of case studies and a heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied. The accuracy used did not seem to have a notable effect on identifiability. The use of the posterior distribution of the parameters across different heat exchanger geometries was also studied. It would be computationally most efficient to use the same posterior distribution across different geometries in the optimisation of heat exchanger networks. According to the results, this was possible when the frontal surface areas were the same across the geometries. In the other cases the same posterior distribution can still be used for optimisation, but it will give a wider predictive distribution as a result. For condensing surface heat exchangers the numerical stability of the simulation model was studied and, as a result, a stable algorithm was developed.
Abstract:
This study aimed to develop a methodology based on multivariate statistical analysis, principal component analysis and cluster analysis, in order to identify the most representative variables in studies of minimum streamflow regionalization and to optimize the identification of hydrologically homogeneous regions for the Doce river basin. Ten variables referring to the climatic and morphometric characteristics of the river basin were used, individualized for each of the 61 gauging stations: three dependent variables indicative of minimum streamflow (Q7,10, Q90 and Q95), and seven independent variables concerning the climatic and morphometric characteristics of the basin (total annual rainfall, Pa; total semiannual rainfall of the dry and of the rainy season, Pss and Psc; watershed drainage area, Ad; length of the main river, Lp; total length of the rivers, Lt; and average watershed slope, SL). The results of the principal component analysis indicated that the variable SL was the least representative for the study, and it was therefore discarded. The most representative independent variables were Ad and Psc. The best divisions into hydrologically homogeneous regions for the three studied flow characteristics were obtained using the Mahalanobis similarity matrix and the complete-linkage clustering method. The cluster analysis enabled the identification of four hydrologically homogeneous regions in the Doce river basin.
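The variable-screening step described above can be sketched as a principal component analysis on a standardized station-by-variable matrix: low loadings across the leading components flag a variable as unrepresentative. The data below are synthetic placeholders with the abstract's dimensions (61 stations, 7 independent variables); the induced correlation is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical station-by-variable matrix: 61 gauging stations, 7
# climatic/morphometric variables (Pa, Pss, Psc, Ad, Lp, Lt, SL).
X = rng.normal(size=(61, 7))
X[:, 3] = 2 * X[:, 0] + rng.normal(scale=0.1, size=61)   # make Ad correlated with Pa

# Standardize, then diagonalize the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = np.corrcoef(Z, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
order = np.argsort(eigval)[::-1]                 # sort PCs by variance
eigval, eigvec = eigval[order], eigvec[:, order]

explained = eigval / eigval.sum()    # fraction of variance per component
loadings = eigvec * np.sqrt(eigval)  # variable loadings on each component
scores = Z @ eigvec                  # station coordinates in PC space
```

In the study's workflow, the `scores` (or the retained variables) would then feed a hierarchical clustering with a Mahalanobis distance matrix and complete linkage to delineate the homogeneous regions.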
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects. These choices include whether a design-based or a model-based approach is taken, which subset of the data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data: the Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at the person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, to test the type of missingness mechanism and to estimate the size of the bias due to non-response and attrition. In the empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, over-reporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms nor the classical assumptions about measurement errors turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters up to the time of attrition, helped to reduce attrition bias. Lastly, the simulation study showed that the IPCW correction to the design weights reduces the bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, for researchers using surveys for event history analysis, and for researchers who develop methods to correct for non-sampling biases in event history data.
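The IPCW-corrected Kaplan-Meier estimator mentioned above amounts to replacing the raw event and at-risk counts with weighted counts. The sketch below shows the weighted estimator itself; in practice the weights would be inverse probabilities of remaining uncensored, estimated from a model of the dropout process, whereas here they are supplied directly and the example data are invented.

```python
import numpy as np

def weighted_km(time, event, weight):
    # Weighted Kaplan-Meier: at each event time t, S(t) *= 1 - d_w / n_w,
    # where d_w and n_w are the weighted event and at-risk counts.
    order = np.argsort(time)
    time, event, weight = time[order], event[order], weight[order]
    surv, s = {}, 1.0
    for t in np.unique(time[event == 1]):
        at_risk = weight[time >= t].sum()
        died = weight[(time == t) & (event == 1)].sum()
        s *= 1.0 - died / at_risk
        surv[t] = s
    return surv

# Tiny illustrative sample: spell lengths, event indicators (1 = spell
# ended, 0 = censored) and weights. Equal weights reproduce ordinary KM.
times = np.array([3.0, 5.0, 5.0, 8.0, 12.0])
events = np.array([1, 0, 1, 1, 0])
km = weighted_km(times, events, np.ones(5))
```

With IPCW, subjects who resemble those lost to dependent censoring receive weights above 1, which is what removes the censoring bias in the design-based Kaplan-Meier and Cox estimators studied in the simulation.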
Abstract:
An interesting fact about language cognition is that stimulation involving incongruence in the merge operation between verb and complement has often been related to a negative event-related potential (ERP) of augmented amplitude and a latency of about 400 ms: the N400. Using an automatic ERP latency and amplitude estimator to facilitate the recognition of waves with a low signal-to-noise ratio, the objective of the present work was to characterize the N400 statistically in 24 volunteers. Stimulation consisted of 80 experimental sentences (40 congruous and 40 incongruous), generated in Brazilian Portuguese, involving two distinct local verb-argument combinations (nominal object and pronominal object series). For each volunteer, the EEG was acquired simultaneously at 20 derivations, topographically localized according to the 10-20 International System. A computerized routine for automatic N400-peak marking (based on the ascending zero-crossing of the first derivative of the waveform) was applied to the estimated individual ERP waveform for congruous and incongruous sentences in both series at all topographic derivations. Peak-to-peak N400 amplitude was significantly augmented (P < 0.05; one-sided Wilcoxon signed-rank test) due to incongruence in derivations F3, T3, C3, Cz, T5, P3, Pz and P4 for the nominal object series and in P3, Pz and P4 for the pronominal object series. The results also indicated high inter-individual variability in ERP waveforms, suggesting that the usual procedure of grand averaging might not be a generally adequate approach. Hence, statistical signal processing techniques should be applied in neurolinguistic ERP studies to allow waveform analysis at low signal-to-noise ratios.
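The per-derivation comparison above is a one-sided paired Wilcoxon signed-rank test on 24 amplitude pairs. The sketch below uses `scipy.stats.wilcoxon` on invented amplitudes with an assumed ~1 µV incongruence effect; the effect size, units and distribution are illustrative, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# Hypothetical peak-to-peak N400 amplitudes (µV) at one derivation for
# 24 volunteers; incongruous sentences shifted upward by ~1 µV on average.
congruous = rng.normal(5.0, 1.5, size=24)
incongruous = congruous + rng.normal(1.0, 1.0, size=24)

# One-sided paired test: is the amplitude larger for incongruous sentences?
stat, p = wilcoxon(incongruous, congruous, alternative='greater')
```

Running one such test per derivation and series, and rejecting at P < 0.05, reproduces the form of the analysis that flagged F3, T3, C3, Cz, T5, P3, Pz and P4.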
Abstract:
The contents of total phenolic compounds (TPC), total flavonoids (TF) and ascorbic acid (AA) of 18 frozen fruit pulps, and their scavenging capacities against the peroxyl radical (ROO), hydrogen peroxide (H2O2) and the hydroxyl radical (OH), were determined. Principal component analysis (PCA) showed that TPC and AA presented a positive correlation with the scavenging capacity against ROO, and TF showed a positive correlation with the scavenging capacity against OH and ROO. However, the scavenging capacity against H2O2 presented a low correlation with TF, TPC and AA. Hierarchical cluster analysis (HCA) allowed the classification of the fruit pulps into three groups: one group was formed by the açaí pulp, with a high TF content (134.02 mg CE/100 g pulp) and the highest scavenging capacity against ROO, OH and H2O2; the second group was formed by the acerola pulp, with high TPC (658.40 mg GAE/100 g pulp) and AA (506.27 mg/100 g pulp) contents; and the third group was formed by the pineapple, cacao, cajá, cashew-apple, coconut, cupuaçu, guava, orange, lemon, mango, passion fruit, watermelon, pitanga, tamarind, tangerine and umbu pulps, which could not be separated considering only the contents of bioactive compounds and the scavenging properties.
Abstract:
The overall focus of the thesis is international trade and the Cochin Port: a historical and statistical analysis, 1881–1980, analysing the trend of exports and imports through the Cochin Port over the course of the last hundred years. This analysis has brought to light some very pertinent facts which, in our opinion, deserve the serious consideration of policy makers, the parties involved in trade and those interested in the development of the Cochin Port. Our study is restricted to twelve commodities: ten export commodities and two import commodities. The study reveals that the commodities exported from Cochin are subject to fluctuations, some mild and others wild. The projections only indicate the potential, and unless we are very cautious the opportunity will be taken away by our competitors. With reference to the development of the port in particular and the state's economy in general, we would like to make one suggestion: declaring Cochin a free port. This would go a long way towards the development of the port and the state's economy; the sooner it is done, the better for both.
Abstract:
Diabetes mellitus is a disease in which the glucose content of the blood does not automatically decrease to a "normal" value between 70 mg/dl and 120 mg/dl (3.89 mmol/l and 6.67 mmol/l) within perhaps one or two hours after eating. Several instruments can be used to arrive at a relatively low increase of the glucose content. Besides drugs (oral antidiabetics, insulin), the blood sugar content can mainly be influenced by (i) eating, i.e., consumption of the right amount of food at the right time, and (ii) physical training (walking, cycling, swimming). In a recent paper the author performed a regression analysis of the influence of eating during the night. The result was that one "bread unit" (12 g of carbohydrates) increases the blood sugar by about 50 mg/dl, while from one hour after eating the blood sugar decreases by about 10 mg/dl per hour. By applying this result, assuming its correctness, it is easy to eat the right amount during the night and to arrive at a fasting blood sugar (glucose content) in the morning of about 100 mg/dl (5.56 mmol/l). In this paper we try to incorporate some physical exercise into the model.
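The regression result quoted above is a simple linear model, which can be written out and inverted directly. The functions below encode only the two coefficients stated in the abstract (+50 mg/dl per bread unit, −10 mg/dl per hour starting one hour after eating); the exercise term of the paper is not modelled here.

```python
def morning_glucose(bedtime_glucose, bread_units, hours_until_morning):
    # One bread unit (12 g carbohydrates) raises blood sugar by ~50 mg/dl;
    # from one hour after eating, blood sugar falls by ~10 mg/dl per hour.
    rise = 50 * bread_units
    fall = 10 * max(hours_until_morning - 1, 0)
    return bedtime_glucose + rise - fall

def bread_units_needed(bedtime_glucose, hours_until_morning, target=100):
    # Invert the linear model to hit the target fasting value.
    fall = 10 * max(hours_until_morning - 1, 0)
    return (target - bedtime_glucose + fall) / 50
```

For example, with a bedtime value of 90 mg/dl and 8 hours until morning, the model predicts a 70 mg/dl overnight fall, so about 1.6 bread units are needed to wake at the 100 mg/dl target.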
Abstract:
Computer-aided model validation techniques are now widely used in the field of structural dynamics. In this process, experimental modal data are used to correct a numerical model for further analyses. However, the validated model represents only the dynamic behaviour of the structure that was tested. In reality, there are many factors that inevitably lead to varying modal test results: changing ambient conditions during a test, slightly different test setups, a test on a nominally identical but different structure (e.g. from series production), etc. For a stochastic simulation to be carried out, a number of assumptions must be made about the random variables used. Consequently, an inverse method is needed that makes it possible to identify a stochastic model from experimental modal data. This work describes the development of a parameter-based approach for identifying stochastic simulation models in the field of structural dynamics. The developed method relies on first-order sensitivities, with which the parameter means and covariances of the numerical model can be determined from stochastic experimental modal data.
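The first-order idea behind such an inverse method can be sketched as covariance propagation through a sensitivity matrix: if the modal outputs are approximately linear in the parameters, their covariance is S Σθ Sᵀ, and Σθ can be recovered by back-projecting the measured output covariance with the pseudo-inverse of S. The sensitivity matrix, sample size and covariances below are invented for illustration and are not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivity matrix S: 3 eigenfrequencies with respect to
# 2 model parameters, from a first-order expansion f(θ) ≈ f(θ0) + S(θ - θ0).
S = np.array([[2.0, 0.5],
              [1.0, 1.5],
              [0.5, 2.5]])
cov_theta_true = np.diag([0.04, 0.09])

# "Experimental" modal samples generated through the forward relation.
theta = rng.multivariate_normal([0.0, 0.0], cov_theta_true, size=500)
freqs = theta @ S.T

# Inverse step: recover the parameter covariance from the output
# covariance via the pseudo-inverse of S (least-squares back-projection).
cov_f = np.cov(freqs, rowvar=False)
S_pinv = np.linalg.pinv(S)
cov_theta_est = S_pinv @ cov_f @ S_pinv.T
```

Because S has full column rank, this back-projection reproduces the sample parameter covariance exactly in the noise-free linear case; with real, noisy modal data the estimate would only approximate it.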
Abstract:
Most of the economic literature has presented its analysis under the assumption of a homogeneous capital stock. However, capital composition differs across countries. What has been the pattern of capital composition associated with the world's economies? We carry out an exploratory statistical analysis based on compositional data transformed by Aitchison logratio transformations, and we use tools for visualizing and measuring statistical estimators of association among the components. The goal is to detect distinctive patterns in the composition. Initial findings include the following: 1. Sectoral components behaved in a correlated way, building industries on one side and, less clearly, equipment industries on the other. 2. Full-sample estimation shows a negative correlation between the durable goods component and the other-buildings component, and between the transportation and building industries components. 3. Countries with zeros in some components are mainly low-income countries at the bottom of the income category; they behaved in an extreme way, distorting the main results observed in the full sample. 4. After removing these extreme cases, the conclusions do not seem very sensitive to the presence of other isolated cases.
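The logratio transformation referred to above can be illustrated with the centred logratio (clr), one of Aitchison's standard transformations: each component is divided by the geometric mean of the composition before taking logs, which moves the data from the simplex to ordinary Euclidean space. The three-part capital-composition vector below is a made-up example; note that, as finding 3 implies, zero components must be handled separately because the logarithm requires strictly positive parts.

```python
import numpy as np

def clr(x):
    # Centred logratio: log of each component over the geometric mean.
    # Requires strictly positive parts (no zero components).
    x = np.asarray(x, dtype=float)
    g = np.exp(np.log(x).mean(axis=-1, keepdims=True))
    return np.log(x / g)

# Hypothetical capital-composition vector (shares of, say, structures,
# equipment and transportation); values are illustrative only.
comp = np.array([0.55, 0.30, 0.15])
z = clr(comp)
```

Two defining properties make clr coordinates suitable for correlation analysis on compositions: they sum to zero, and they are invariant to rescaling of the composition (closure), so associations measured on them do not depend on the arbitrary total.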
Abstract:
Several eco-toxicological studies have shown that insectivorous mammals, due to their feeding habits, easily accumulate high amounts of pollutants in relation to other mammal species. To assess the bio-accumulation levels of toxic metals and their influence on essential metals, we quantified the concentrations of 19 elements (Ca, K, Fe, B, P, S, Na, Al, Zn, Ba, Rb, Sr, Cu, Mn, Hg, Cd, Mo, Cr and Pb) in the bones of 105 greater white-toothed shrews (Crocidura russula) from a polluted area (Ebro Delta) and a control area (Medas Islands). Since the chemical contents of a bio-indicator are mainly compositional data, the conventional statistical analyses currently used in eco-toxicology can give misleading results. Therefore, to improve the interpretation of the data obtained, we used statistical techniques for compositional data analysis to define groups of metals and to evaluate the relationships between them from an inter-population viewpoint. Hypothesis testing on adequate balance-coordinates allowed us to confirm intuition-based hypotheses and some previous results. The main statistical goal was to test the equality of the means of the balance-coordinates for the two defined populations. After checking normality, one-way ANOVA or Mann-Whitney tests were carried out for the inter-group balances.