893 results for Biogeography, Bioregions, Subregion, Statistical Modelling, GIS, Finite Mixture Models


Relevance:

100.00%

Publisher:

Abstract:

The objective of this study was to gain an understanding of the effects of population heterogeneity, missing data, and causal relationships on parameter estimates from statistical models when analyzing change in medication use. From a public health perspective, two timely topics were addressed: the use and effects of statins in primary prevention of cardiovascular disease, and polypharmacy in the older population. Growth mixture models were applied to characterize the accumulation of cardiovascular and diabetes medications in an apparently healthy population of statin initiators. The causal effect of statin adherence on the incidence of acute cardiovascular events was estimated using marginal structural models, in comparison with discrete-time hazards models. The impact of missing data on the growth estimates of the evolution of polypharmacy was examined by comparing statistical models under different assumptions about the missing data mechanism. The data came from Finnish administrative registers and from the population-based Geriatric Multidisciplinary Strategy for the Good Care of the Elderly study conducted in Kuopio, Finland, during 2004–07. Five distinct patterns of accumulating medications emerged in the population of apparently healthy statin initiators during the two years after statin initiation. Properly accounting for time-varying dependencies between adherence to statins and confounders using marginal structural models produced estimation results comparable with those from a discrete-time hazards model. The missing data mechanism was shown to be a key component when estimating the evolution of polypharmacy among older persons. In conclusion, population heterogeneity, missing data and causal relationships are important aspects of longitudinal studies that are associated with the study question and should be critically assessed when performing statistical analyses. Analyses should be supplemented with sensitivity analyses of model assumptions.
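As an illustration of the discrete-time hazards setup mentioned above, such models are fitted to data expanded into person-period format, one row per subject per interval. The sketch below uses hypothetical data and field names, not anything from the study:

```python
# Sketch: person-period expansion for a discrete-time hazards analysis.
# Input records are (id, number_of_intervals_followed, event_occurred);
# all names and data are illustrative.

def to_person_period(subjects):
    """Expand each subject into one row per follow-up interval, with an
    event indicator equal to 1 only in the final interval of subjects
    who experienced the event (censored subjects get all zeros)."""
    rows = []
    for sid, n_intervals, event in subjects:
        for t in range(1, n_intervals + 1):
            is_last = (t == n_intervals)
            rows.append({"id": sid, "interval": t,
                         "event": int(event and is_last)})
    return rows

# One subject followed 3 intervals with an event, one censored after 2.
data = to_person_period([(1, 3, True), (2, 2, False)])
print(len(data))                   # 5 person-period rows
print([r["event"] for r in data])  # [0, 0, 1, 0, 0]
```

A logistic regression of `event` on `interval` and covariates over these rows then estimates the discrete-time hazard.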

Relevance:

100.00%

Publisher:

Abstract:

This study presents the procedure followed to predict the critical flutter speed of a composite UAV wing. At the beginning of the study, no information was available on the materials used for the construction of the wing, and the wing's internal structure was unknown. Ground vibration tests were performed in order to detect the structure's natural frequencies and mode shapes. The tests showed that the wing possesses high stiffness, with well-separated first bending and torsional natural frequencies. Two finite element models were developed and matched to the experimental results. It was necessary to introduce some assumptions, owing to the uncertainties regarding the structure. The matching process was based on the sensitivity of the natural frequencies with respect to changes in the mechanical properties of the materials. Once the experimental results were matched, average material properties were also obtained. Aerodynamic coefficients for the wing were obtained by means of CFD software. The same analysis was also conducted with the wing deformed in each of its first four mode shapes. A first approximation of the flutter critical speed was made with the classical V-g technique. Finally, the wing's aeroelastic behavior was simulated using a coupled CFD/CSD method, yielding a more accurate flutter prediction. The CSD solver is based on the time integration of the modal dynamic equations, requiring the extraction of mode shapes from the previously performed finite element analysis. Results show that flutter onset is not a risk for the UAV, occurring at velocities well beyond its operating range.

Relevance:

100.00%

Publisher:

Abstract:

In this doctoral dissertation, a comprehensive methodological approach for the assessment of river embankment safety conditions, based on the integrated use of laboratory testing, physical modelling and finite element (FE) numerical simulations, is proposed, with the aim of contributing to a better understanding of the effect of time-dependent hydraulic boundary conditions on the hydro-mechanical response of river embankments. The case study and materials selected for the present research project are representative of the riverbank systems of Alpine and Apennine tributaries of the main river Po (Northern Italy), which have recently experienced various sudden overall collapses. The outcomes of a centrifuge test, carried out under an enhanced gravity field of 50 g on a riverbank model made of a compacted silty sand mixture overlying a homogeneous clayey silt foundation layer and subjected to a simulated flood event, have been considered for the definition of a robust and realistic experimental benchmark. In order to reproduce the observed experimental behaviour, a first set of numerical simulations has been carried out by assuming, for both the embankment and the foundation unit, rigid soil porous media under partially saturated conditions. The mechanical and hydraulic soil properties adopted in the numerical analyses have been carefully estimated based on standard saturated triaxial, oedometer and constant-head permeability tests. Afterwards, advanced suction-controlled laboratory tests have been carried out to investigate the effect of suction and confining stresses on the shear strength and compressibility characteristics of the filling material, and a second set of numerical simulations has been run, taking into account soil parameters updated on the basis of the most recent tests.
The final aim of the study is the quantitative estimation of the predictive capabilities of the calibrated numerical tools, by systematically comparing the results of the FE simulations to the experimental benchmark.

Relevance:

100.00%

Publisher:

Abstract:

Although various abutment connections and materials have recently been introduced, insufficient data exist regarding the effect of stress distribution on their mechanical performance. The purpose of this study was to investigate the effect of different abutment materials and platform connections on stress distribution in single anterior implant-supported restorations with the finite element method. Nine experimental groups were modeled from the combination of 3 platform connections (external hexagon, internal hexagon, and Morse tapered) and 3 abutment materials (titanium, zirconia, and hybrid) as follows: external hexagon-titanium, external hexagon-zirconia, external hexagon-hybrid, internal hexagon-titanium, internal hexagon-zirconia, internal hexagon-hybrid, Morse tapered-titanium, Morse tapered-zirconia, and Morse tapered-hybrid. Finite element models consisted of a 4×13-mm implant, an anatomic abutment, and a lithium disilicate central incisor crown cemented over the abutment. A 49 N occlusal load was applied in 6 steps to simulate the incisal guidance. The equivalent von Mises stress (σvM) was used for both the qualitative and quantitative evaluation of the implant and abutment in all the groups, and the maximum (σmax) and minimum (σmin) principal stresses were used for the numerical comparison of the zirconia parts. The highest abutment σvM occurred in the Morse-tapered groups and the lowest in the external hexagon-hybrid, internal hexagon-titanium, and internal hexagon-hybrid groups. The σmax and σmin values were lower in the hybrid groups than in the zirconia groups. The stress distribution was concentrated at the abutment-implant interface in all the groups, regardless of the platform connection or abutment material. The platform connection influenced the stress on abutments more than the abutment material did. The stress values for implants were similar among different platform connections, but greater stress concentrations were observed in internal connections.
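The equivalent von Mises stress used for the evaluations above is computed from the three principal stresses by the standard formula; a minimal sketch with illustrative values (not taken from the study):

```python
import math

def von_mises(s1, s2, s3):
    """Equivalent von Mises stress from the three principal stresses
    (any consistent stress unit, e.g. MPa)."""
    return math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

# Sanity check: under uniaxial stress the von Mises stress equals the
# applied stress, and a hydrostatic state gives zero.
print(von_mises(100.0, 0.0, 0.0))   # 100.0
print(von_mises(50.0, 50.0, 50.0))  # 0.0
```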

Relevance:

100.00%

Publisher:

Abstract:

A mixture model for long-term survivors has been adopted in various fields such as biostatistics and criminology where some individuals may never experience the type of failure under study. It is directly applicable in situations where the only information available from follow-up on individuals who will never experience this type of failure is in the form of censored observations. In this paper, we consider a modification to the model so that it still applies in the case where during the follow-up period it becomes known that an individual will never experience failure from the cause of interest. Unless a model allows for this additional information, a consistent survival analysis will not be obtained. A partial maximum likelihood (ML) approach is proposed that preserves the simplicity of the long-term survival mixture model and provides consistent estimators of the quantities of interest. Some simulation experiments are performed to assess the efficiency of the partial ML approach relative to the full ML approach for survival in the presence of competing risks.
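In its standard form (not the modified version proposed in the paper), the long-term survivor mixture model writes the overall survivor function and the censored-data likelihood as follows, with \(\pi\) the immune fraction, \(S_1\) and \(f_1\) the survivor and density functions of the susceptibles, and \(\delta_i\) the failure indicator:

```latex
S(t) = \pi + (1 - \pi)\, S_1(t),
\qquad
L = \prod_{i=1}^{n}
    \bigl[(1 - \pi)\, f_1(t_i)\bigr]^{\delta_i}
    \bigl[\pi + (1 - \pi)\, S_1(t_i)\bigr]^{1 - \delta_i}
```

The modification discussed in the abstract concerns the extra case where follow-up reveals that an individual is immune, which the censored factor above does not distinguish.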

Relevance:

100.00%

Publisher:

Abstract:

When examining a rock mass, joint sets and their orientations can play a significant role with regard to how the rock mass will behave. To identify the joint sets present in the rock mass, the orientations of individual fracture planes can be measured on exposed rock faces and the resulting data can be examined for heterogeneity. In this article, the expectation-maximization algorithm is used to fit mixtures of Kent component distributions to the fracture data to aid in the identification of joint sets. An additional uniform component is also included in the model to accommodate the noise present in the data.
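A common way to write the density of such a model, with the uniform density on the unit sphere equal to \(1/(4\pi)\) and \(\pi_k\) the mixing proportions, is:

```latex
f(\mathbf{x}) \;=\; \pi_0 \,\frac{1}{4\pi}
\;+\; \sum_{k=1}^{K} \pi_k \,\mathrm{Kent}\!\left(\mathbf{x};\,\kappa_k,\beta_k,\Gamma_k\right),
\qquad \sum_{k=0}^{K} \pi_k = 1
```

Here \(\kappa_k\), \(\beta_k\) and \(\Gamma_k\) are the concentration, ovalness and orientation parameters of each Kent component; this is the generic form, stated here for orientation rather than as the article's exact parameterization.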

Relevance:

100.00%

Publisher:

Abstract:

Objectives: To compare the population modelling programs NONMEM and P-PHARM during investigation of the pharmacokinetics of tacrolimus in paediatric liver-transplant recipients. Methods: Population pharmacokinetic analysis was performed using NONMEM and P-PHARM on retrospective data from 35 paediatric liver-transplant patients receiving tacrolimus therapy. The same data were presented to both programs. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F). Covariates screened for influence on these parameters were weight, age, gender, post-operative day, days of tacrolimus therapy, transplant type, biliary reconstructive procedure, liver function tests, creatinine clearance, haematocrit, corticosteroid dose, and potential interacting drugs. Results: A satisfactory model was developed in both programs with a single categorical covariate - transplant type - providing stable parameter estimates and small, normally distributed (weighted) residuals. In NONMEM, the continuous covariates - age and liver function tests - improved modelling further. Mean parameter estimates were CL/F (whole liver) = 16.3 l/h, CL/F (cut-down liver) = 8.5 l/h and V/F = 565 l in NONMEM, and CL/F = 8.3 l/h and V/F = 155 l in P-PHARM. Individual Bayesian parameter estimates were CL/F (whole liver) = 17.9 ± 8.8 l/h, CL/F (cut-down liver) = 11.6 ± 18.8 l/h and V/F = 712 ± 792 l in NONMEM, and CL/F (whole liver) = 12.8 ± 3.5 l/h, CL/F (cut-down liver) = 8.2 ± 3.4 l/h and V/F = 221 ± 164 l in P-PHARM. Marked interindividual kinetic variability (38-108%) and residual random error (approximately 3 ng/ml) were observed. P-PHARM was more user friendly and readily provided informative graphical presentation of results. NONMEM allowed a wider choice of errors for statistical modelling and coped better with complex covariate data sets.
Conclusion: Results from parametric modelling programs can vary owing to the different algorithms employed to estimate parameters, alternative methods of covariate analysis, and variations and limitations in the software itself.
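As a small worked example of how the apparent parameters reported above combine, a one-compartment model implies an elimination half-life of t1/2 = ln 2 · (V/F)/(CL/F). The sketch below uses the NONMEM whole-liver point estimates quoted in the abstract, assuming the units are litres and litres/hour as is conventional for V/F and CL/F:

```python
import math

def half_life(cl_f, v_f):
    """Elimination half-life (h) implied by a one-compartment model,
    given apparent clearance CL/F (l/h) and apparent volume V/F (l)."""
    ke = cl_f / v_f          # first-order elimination rate constant (1/h)
    return math.log(2) / ke

# NONMEM whole-liver point estimates from the abstract (illustrative use).
print(round(half_life(16.3, 565.0), 1))  # ~24 h
```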

Relevance:

100.00%

Publisher:

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
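The two submodels described above are conventionally combined as follows, with \(p_j(\mathbf{x})\) the probability that a subject with covariates \(\mathbf{x}\) fails from cause \(j\) (logistic/multinomial part) and \(\lambda_j(t \mid \mathbf{x})\) the conditional hazard for that cause (proportional hazards part, with \(\lambda_{0j}\) left unspecified in the semi-parametric approach):

```latex
p_j(\mathbf{x}) \;=\;
\frac{\exp\!\left(\boldsymbol{\beta}_j^{\top}\mathbf{x}\right)}
     {\sum_{l=1}^{g}\exp\!\left(\boldsymbol{\beta}_l^{\top}\mathbf{x}\right)},
\qquad
\lambda_j(t \mid \mathbf{x}) \;=\; \lambda_{0j}(t)\,
\exp\!\left(\boldsymbol{\gamma}_j^{\top}\mathbf{x}\right),
\qquad j = 1,\dots,g
```

This is the standard specification of such mixture competing-risks models, written here for orientation; the paper's exact parameterization may differ in detail.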

Relevance:

100.00%

Publisher:

Abstract:

This article presents a Spatial Decision Support System (SDSS) in which decision makers can easily define different types of spatial problems using different categories of objects, pre-defined or user-defined, associating with them characteristics with or without spatial dependence, and indicating forms of interference (impacts) between those characteristics/properties. Spatial analysis for determining or evaluating alternative configurations for the location of different types of spatial occurrences is carried out through interactive use of the SDSS, according to sets of rules intrinsic to the various graphical elements (objects, categories, characteristics, impacts) used in the definition of the problems. The representational and analytical generality of the proposed SDSS is tested on a concrete, sufficiently specific and complex problem: the application of Gaussian models to study the atmospheric dispersion of potential pollutants resulting from solid waste treatment. The study region is limited, in this example, to the municipality of Coimbra, Portugal. For this municipality, demographic data at the polling-section level (official census) are available and used, which makes a realistic study of the impact on human populations possible.
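The Gaussian dispersion models mentioned above are typically based on the ground-level plume equation with total ground reflection; a minimal sketch with illustrative (hypothetical) source parameters, not values from the article:

```python
import math

def gaussian_plume(q, u, h, y, sy, sz):
    """Ground-level (z = 0) concentration from a Gaussian plume with
    total ground reflection.
    q: emission rate (g/s), u: wind speed (m/s), h: effective stack
    height (m), y: crosswind distance (m), sy/sz: horizontal and
    vertical dispersion coefficients (m). Returns g/m^3."""
    return (q / (math.pi * u * sy * sz)
            * math.exp(-y**2 / (2 * sy**2))
            * math.exp(-h**2 / (2 * sz**2)))

# Centerline concentration for an illustrative source.
c = gaussian_plume(q=10.0, u=3.0, h=50.0, y=0.0, sy=80.0, sz=40.0)
```

In practice sy and sz grow with downwind distance according to the atmospheric stability class; they are taken as fixed inputs here to keep the sketch short.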

Relevance:

100.00%

Publisher:

Abstract:

The tongue is the most important and dynamic articulator in speech formation, because of its anatomical aspects (particularly the large volume of this muscular organ compared with the surrounding organs of the vocal tract) and also because of the wide range of movements and flexibility involved. In speech communication research, a variety of techniques have been used to measure three-dimensional vocal tract shapes. More recently, magnetic resonance imaging (MRI) has become common, mainly because this technique allows the collection of sets of static and dynamic images that can represent the entire vocal tract along any orientation. Over the years, different anatomical organs of the vocal tract have been modelled, namely 2D and 3D tongue models, using parametric or statistical modelling procedures. Our aim is to present and describe some 3D models reconstructed from MRI data for one subject uttering sustained articulations of some typical Portuguese sounds. Thus, we present a 3D database of the tongue, obtained by stack combinations, with the subject articulating Portuguese vowels. This 3D knowledge of the speech organs could be very important, especially for clinical purposes (for example, for the assessment of articulatory impairments followed by tongue surgery in speech rehabilitation), and also for a better understanding of the acoustic theory of speech formation.

Relevance:

100.00%

Publisher:

Abstract:

Research on cluster analysis for categorical data continues to develop, with new clustering algorithms being proposed. However, in this context, the determination of the number of clusters is rarely addressed. We propose a new approach in which clustering and the estimation of the number of clusters are done simultaneously for categorical data. We assume that the data originate from a finite mixture of multinomial distributions and use a minimum message length (MML) criterion (Wallace and Boulton, 1968) to select the number of clusters. For this purpose, we implement an EM-type algorithm (Silvestre et al., 2008) based on the approach of Figueiredo and Jain (2002). The novelty of the approach rests on the integration of model estimation and the selection of the number of clusters in a single algorithm, rather than selecting this number from a set of pre-estimated candidate models. The performance of our approach is compared with that of the Bayesian Information Criterion (BIC) (Schwarz, 1978) and the Integrated Completed Likelihood (ICL) (Biernacki et al., 2000) using synthetic data. The results illustrate the capacity of the proposed algorithm to attain the true number of clusters while outperforming BIC and ICL in speed, which is especially relevant when dealing with large data sets.
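For a finite mixture of multinomials such as the one assumed above, the E-step of an EM-type algorithm computes the posterior responsibility of each component for each observed count vector. A minimal log-space sketch (an illustration of the standard computation, not the authors' implementation):

```python
import math

def e_step(counts, weights, probs):
    """E-step for a finite mixture of multinomials: posterior
    responsibilities of the components for one count vector.
    counts: observed category counts; weights: mixing proportions;
    probs: per-component category probabilities (all > 0 here)."""
    logs = []
    for w, p in zip(weights, probs):
        # Log joint up to the multinomial coefficient, which cancels
        # in the normalization below.
        ll = math.log(w) + sum(n * math.log(q) for n, q in zip(counts, p))
        logs.append(ll)
    m = max(logs)  # log-sum-exp trick for numerical stability
    unnorm = [math.exp(l - m) for l in logs]
    s = sum(unnorm)
    return [u / s for u in unnorm]

# An observation with all mass on the first category is attributed
# almost entirely to the component favouring that category.
r = e_step([5, 0], weights=[0.5, 0.5], probs=[[0.9, 0.1], [0.1, 0.9]])
print(r[0] > 0.99)  # True
```

The M-step would then re-estimate `weights` and `probs` from these responsibilities; the MML criterion additionally penalizes components whose estimated weight becomes too small.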

Relevance:

100.00%

Publisher:

Abstract:

Cluster analysis for categorical data has been an active area of research. A well-known problem in this area is the determination of the number of clusters, which is unknown and must be inferred from the data. In order to estimate the number of clusters, one often resorts to information criteria, such as BIC (Bayesian information criterion), MML (minimum message length, proposed by Wallace and Boulton, 1968), and ICL (integrated classification likelihood). In this work, we adopt the approach developed by Figueiredo and Jain (2002) for clustering continuous data. They use an MML criterion to select the number of clusters and a variant of the EM algorithm to estimate the model parameters. This EM variant seamlessly integrates model estimation and selection in a single algorithm. For clustering categorical data, we assume a finite mixture of multinomial distributions and implement a new EM algorithm, following a previous version (Silvestre et al., 2008). Results obtained with synthetic datasets are encouraging. The main advantage of the proposed approach, when compared to the above referred criteria, is the speed of execution, which is especially relevant when dealing with large data sets.

Relevance:

100.00%

Publisher:

Abstract:

In data clustering, the problem of selecting the subset of most relevant features from the data has been an active research topic. Feature selection for clustering is a challenging task due to the absence of class labels to guide the search for relevant features. Most methods proposed for this goal focus on numerical data. In this work, we propose an approach for clustering and selecting categorical features simultaneously. We assume that the data originate from a finite mixture of multinomial distributions and implement an integrated expectation-maximization (EM) algorithm that estimates all the parameters of the model and selects the subset of relevant features simultaneously. The results obtained on synthetic data illustrate the performance of the proposed approach. An application to real data from official statistics shows its usefulness.

Relevance:

100.00%

Publisher:

Abstract:

In an attempt to optimize the manufacturing process of a water-based paint (TBA), in order to minimize the observed deviations in final viscosity, and to develop a new plasticizing admixture for concrete, statistical methods and tools were employed in this project. Regarding the water-based paint, the manufacturing process was first monitored in order to collect the most relevant data that could influence the final viscosity of the paint. A capability analysis of the viscosity parameter showed that it was not always within the customer's specifications, the cpk of the process being below 1. Monitoring the process led to the choice of 4 factors, culminating in a 2^4 factorial design. After the trials, a regression analysis was performed on a first-order model; as it was not significant, 8 additional runs were carried out at the axial points. A stepwise regression then yielded a viable approximation to a second-order model, which provided the best levels of the 4 factors to ensure that the viscosity response sits at the midpoint of the specification interval (1400 mPa.s). As for the concrete admixture, the goal is the use of SIKA polymers instead of the raw material commonly used in this type of product, taking the final cost of the formulation into account. Three important factors in the formulation of the product were chosen (polymer blend, hydrocarbon blend, and % solids), resulting in a 2^3 factorial matrix. The trials were carried out in triplicate, in cement paste, one for each of the cement types most used in Portugal. Statistical analysis of the data yielded first-order models for each cement type.
The optimization process consisted of optimizing a cost function associated with the formulation, while always guaranteeing a response superior to that observed for the product considered the standard. The results were encouraging, since for the 3 cement types the costs obtained were below the required level and the spread was above that observed for the standard.
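The 2^k factorial plans used in both parts of the study enumerate all combinations of k factors at the coded levels -1/+1; a minimal, purely illustrative sketch:

```python
from itertools import product

def full_factorial(k):
    """Coded design matrix (-1/+1 levels) of a 2^k full factorial plan,
    one row per run."""
    return [list(run) for run in product((-1, 1), repeat=k)]

print(len(full_factorial(4)))  # 16 runs for the paint study's 2^4 plan
print(len(full_factorial(3)))  # 8 runs for the admixture study's 2^3 plan
```

Augmenting such a plan with axial and center points, as done for the paint after the first-order model proved inadequate, yields a central composite design on which a second-order model can be fitted.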

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.