903 results for Geo-statistical model
Abstract:
An abstract of a thesis devoted to using helix-coil models to study unfolded states.
Research on polypeptide unfolded states has received far more attention in the last decade or so than in the past. Unfolded states are thought to be implicated in various
misfolding diseases and likely play crucial roles in protein folding equilibria and folding rates. Structural characterization of unfolded states has proven to be
much more difficult than the now well-established practice of determining the structures of folded proteins. This is largely because many core assumptions underlying
folded structure determination methods are invalid for unfolded states. This has led to a dearth of knowledge concerning the nature of unfolded state conformational
distributions. While many aspects of unfolded state structure are not well known, there does exist a significant body of work stretching back half a century that
has been focused on structural characterization of marginally stable polypeptide systems. This body of work represents an extensive collection of experimental
data and biophysical models associated with describing helix-coil equilibria in polypeptide systems. Much of the work on unfolded states in the last decade has not been devoted
specifically to improving our understanding of helix-coil equilibria, which is arguably the best characterized of the various conformational equilibria
that likely contribute to unfolded state conformational distributions. This thesis seeks to provide a deeper investigation of helix-coil equilibria using modern
statistical data analysis and biophysical modeling techniques. The studies contained within seek to provide deeper insights and new perspectives on what we presumably
know very well about protein unfolded states.
Chapter 1 gives an overview of recent and historical work on studying protein unfolded states. The study of helix-coil equilibria is placed in the context
of the general field of unfolded state research and the basics of helix-coil models are introduced.
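As a concrete illustration of those basics, the classic Zimm-Bragg treatment of helix-coil equilibria can be written as a two-state transfer-matrix computation. This is a generic textbook sketch, not the thesis's more sophisticated model; `s` and `sigma` are the usual helix propagation and nucleation parameters:

```python
import numpy as np

def zimm_bragg_partition(n, s, sigma):
    """Partition function of an n-residue chain under the Zimm-Bragg
    model: statistical weight s for a helical residue extending an
    existing helix, sigma*s for nucleating a new one, 1 for coil."""
    # Transfer matrix over the (helix, coil) state of the previous residue.
    M = np.array([[s, 1.0],
                  [sigma * s, 1.0]])
    v = np.array([0.0, 1.0])  # chain starts from a coil-like boundary
    for _ in range(n):
        v = v @ M
    return v @ np.array([1.0, 1.0])

def helix_fraction(n, s, sigma, ds=1e-6):
    """Mean helical fraction via the standard identity
    <f_h> = (s / n) * d ln Z / d s, evaluated numerically."""
    z_plus = zimm_bragg_partition(n, s + ds, sigma)
    z_minus = zimm_bragg_partition(n, s - ds, sigma)
    return s * (np.log(z_plus) - np.log(z_minus)) / (2 * ds) / n
```

With no nucleation penalty (sigma = 1) the residues decouple and the partition function reduces to (1 + s)^n, which makes a convenient sanity check.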
Chapter 2 introduces the newest incarnation of a sophisticated helix-coil model. State-of-the-art statistical techniques are employed to estimate the energies
of various physical interactions that serve to influence helix-coil equilibria. A new Bayesian model selection approach is utilized to test many long-standing
hypotheses concerning the physical nature of the helix-coil transition. Some assumptions made in previous models are shown to be invalid and the new model
exhibits greatly improved predictive performance relative to its predecessor.
Chapter 3 introduces a new statistical model that can be used to interpret amide exchange measurements. As amide exchange can serve as a probe for residue-specific
properties of helix-coil ensembles, the new model provides a novel and robust method to use these types of measurements to characterize helix-coil ensembles experimentally
and test the position-specific predictions of helix-coil models. The statistical model is shown to perform substantially better than the most commonly used
method for interpreting amide exchange data. The estimates of the model obtained from amide exchange measurements on an example helical peptide
also show a remarkable consistency with the predictions of the helix-coil model.
Chapter 4 involves a study of helix-coil ensembles through the enumeration of helix-coil configurations. Aside from providing new insights into helix-coil ensembles,
this chapter also introduces a new method by which helix-coil models can be extended to calculate new types of observables. Future work on this approach could potentially
allow helix-coil models to be used in domains that were previously inaccessible to them and reserved for the other types of unfolded state models introduced in Chapter 1.
Abstract:
The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in U.S. Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. Chapters 1 and 2 present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.
The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique. Model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those previously published. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.
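The survival-model link that such probabilistic decompression models share can be sketched as follows. This is a minimal illustration of the standard P(DCS) = 1 − exp(−∫h dt) relation; the supersaturation hazard form and its gain are hypothetical placeholders, not fitted Navy values:

```python
import math

def dcs_probability(hazard, t_grid):
    """P(DCS) = 1 - exp(-integral of the instantaneous hazard), the
    survival-model link used by probabilistic decompression models.
    The integral is taken over the exposure by the trapezoidal rule."""
    integral = 0.0
    for t0, t1, h0, h1 in zip(t_grid, t_grid[1:], hazard, hazard[1:]):
        integral += 0.5 * (h0 + h1) * (t1 - t0)
    return 1.0 - math.exp(-integral)

def supersaturation_hazard(p_tissue, p_ambient, gain=0.01):
    """Illustrative hazard: proportional to positive supersaturation
    (tissue tension above ambient pressure), zero otherwise.
    The gain value is a made-up placeholder, not a fitted parameter."""
    return gain * max(p_tissue - p_ambient, 0.0)
```

A constant hazard h over a duration T gives the familiar 1 − exp(−hT), and a hazard that is zero everywhere gives zero risk, matching the "statistical model failure" condition discussed below.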
We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, in many of our models, risk accumulation begins well before symptoms are reported, we hypothesized that the inclusion of delays might improve the correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but both comparison to the best-performing model without delay and model selection starting from our best identified no-delay pharmacokinetic model indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.
Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure cannot occur. Statistical model failure occurs when a model predicts zero probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of each region using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations.
Abstract:
River runoff is an essential climate variable: it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, no pan-European observation-based runoff estimates have been available to date. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid across Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 12) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2015) on a 0.5° x 0.5° grid. The performance of the newly derived runoff estimates is assessed by cross-validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.
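The relate-then-cross-validate workflow described above can be sketched with a simple stand-in learner. Ordinary least squares takes the place of the paper's machine-learning method here; `X` stands for gridded atmospheric predictors and `y` for observed runoff, both assumed to be NumPy arrays:

```python
import numpy as np

def kfold_cv_r2(X, y, k=5, seed=0):
    """k-fold cross-validated R^2 of a least-squares linear model
    mapping atmospheric predictors X (n_samples x n_features) to
    runoff y. A linear stand-in for the machine-learning step."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.column_stack([X[train], np.ones(len(train))])  # intercept
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([X[test], np.ones(len(test))]) @ coef
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return float(np.mean(scores))
```

Holding out whole folds, as above, is what guards the skill estimate against overfitting to the gauged catchments used for training.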
Abstract:
River runoff is an essential climate variable: it is directly linked to the terrestrial water balance and controls a wide range of climatological and ecological processes. Despite its scientific and societal importance, no pan-European observation-based runoff estimates have been available to date. Here we employ a recently developed methodology to estimate monthly runoff rates on a regular spatial grid across Europe. For this we first assemble an unprecedented collection of river flow observations, combining information from three distinct databases. Observed monthly runoff rates are first tested for homogeneity and then related to gridded atmospheric variables (E-OBS version 11) using machine learning. The resulting statistical model is then used to estimate monthly runoff rates (December 1950 - December 2014) on a 0.5° × 0.5° grid. The performance of the newly derived runoff estimates is assessed by cross-validation. The paper closes with example applications, illustrating the potential of the new runoff estimates for climatological assessments and drought monitoring.
Abstract:
Global niobium production is presently dominated by three operations: Araxá and Catalão (Brazil) and Niobec (Canada). Although Brazil accounts for over 90% of the world's niobium production, a number of high-grade niobium deposits exist worldwide. The advancement of these deposits depends largely on the development of operable beneficiation flowsheets. Pyrochlore, the primary niobium mineral, is typically upgraded by flotation with amine collectors at acidic pH following a complicated flowsheet with significant losses of niobium. This research compares the typical two-stage flotation flowsheet to a direct flotation process (i.e., elimination of the gangue pre-flotation) with the objective of circuit simplification. In addition, the use of a chelating reagent (benzohydroxamic acid, BHA) was studied as an alternative collector for fine-grained, highly disseminated pyrochlore. For the amine-based reagent system, results showed that, while the two approaches were comparable at the laboratory scale, when scaled up to the pilot level the direct flotation process suffered from circuit instability because of high quantities of dissolved calcium in the process water, caused by stream recirculation and fine calcite dissolution, which ultimately depressed pyrochlore. This scale-up issue was not observed in pilot plant operation of the two-stage flotation process, as a portion of the highly reactive carbonate minerals was removed prior to acid addition. A statistical model was developed for batch flotation using BHA on a carbonatite ore (0.25% Nb2O5) that could not be effectively upgraded using the conventional amine reagent scheme. Results showed that it was possible to produce a concentrate containing 1.54% Nb2O5 with 93% Nb recovery in ~15% of the original mass.
Fundamental studies undertaken included FT-IR and XPS, which showed the adsorption of both the protonated amine and the neutral amine onto the surface of the pyrochlore (possibly at niobium sites, as indicated by detected shifts in the Nb 3d binding energy). The results suggest that the preferential flotation of pyrochlore over quartz with amines at low pH can be attributed to a difference in critical hemimicelle concentration (CHC) values for the two minerals. BHA was found to adsorb on pyrochlore surfaces by a mechanism similar to that of alkyl hydroxamic acid. It is hoped that this work will assist in improving the operability of existing pyrochlore flotation circuits and help promote the development of niobium deposits globally. Future studies should focus on investigating specific gangue mineral depressants and the inadvertent activation phenomena related to BHA flotation of gangue minerals.
Abstract:
The article analyzes electoral change in León, Guanajuato, tracing how the Institutional Revolutionary Party (PRI) came to be displaced by the National Action Party (PAN) in this municipality in 1988, up to the shift in the balance of forces in 2012. This provides the basis for analyzing the scenarios that could characterize this year's upcoming elections. To this end, the authors propose a statistical model for the study: the Dirichlet regression model, which accounts for the compositional nature of electoral data.
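The distributional building block of Dirichlet regression can be sketched as follows. This shows only the log-density of a vector of vote shares; in the full regression model the concentration parameters `alpha` are in turn linked to covariates:

```python
from math import lgamma, log

def dirichlet_logpdf(x, alpha):
    """Log-density of vote shares x (positive, summing to 1) under a
    Dirichlet(alpha) distribution -- the likelihood kernel on which
    Dirichlet regression for compositional electoral data is built."""
    if abs(sum(x) - 1.0) > 1e-9:
        raise ValueError("shares must sum to 1")
    # Normalizing constant: Gamma(sum alpha) / prod Gamma(alpha_i), in logs.
    log_norm = lgamma(sum(alpha)) - sum(lgamma(a) for a in alpha)
    return log_norm + sum((a - 1.0) * log(xi) for a, xi in zip(alpha, x))
```

With all concentrations equal to 1 the density is uniform over the simplex, which for three parties equals a constant 2 (log 2 in log space), a convenient check.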
Abstract:
Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
In this research project, the deposition of amorphous carbon thin films (commonly known as DLC, for Diamond-Like Carbon) by plasma-enhanced chemical vapor deposition (PECVD) was studied using Optical Emission Spectroscopy (OES) and partial least squares regression (PLSR). The goal of this thesis is to establish a statistical model to predict the properties of DLC coatings from the deposition process parameters or from data acquired by OES. Two series of PLSR analyses were carried out. The first examines the correlation between process parameters and plasma characteristics, to gain a better understanding of the deposition process. The second demonstrates the potential of OES as a tool for monitoring the process and predicting the properties of the deposited film. The results show that the prediction of DLC coating properties, until now possible only from the process parameters (pressure, power, and plasma mode), is henceforth feasible from the information obtained by OES of the plasma (in particular, indices related to the concentrations of species in the plasma). Indeed, OES data can be used to monitor the deposition process directly, rather than requiring a full study of the effect of the process parameters, which are strictly tied to the plasma reactor and vary from one laboratory to another. The prospect of applying a PLSR model incorporating OES data is also demonstrated in this research, for designing and monitoring a deposit with a graded structure.
Abstract:
Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may be known only up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition can be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of that measure. Change detection methods used in time series analysis are not easily extended to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibit more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it.
Taking advantage of the parallel nature of this algorithm, we describe how the method can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory within a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
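The iterative shrinkage step mentioned for the sparsity-constrained separation can be sketched generically. This is a plain ISTA solver for an l1-regularized least-squares problem, not the thesis's shearlet-domain implementation; `A` would correspond to the chosen synthesis dictionary:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding (ISTA) for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1,
    the basis-pursuit-style problem that the curvilinear anomaly
    separation is posed as. Generic sketch with a fixed step size."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)  # shrinkage step
    return x
```

Each iteration is a matrix-vector product followed by an elementwise shrinkage, which is exactly the structure that makes the algorithm amenable to GPU acceleration.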
Abstract:
Estuaries are areas which, owing to their structure, functioning, and location, receive significant inputs of nutrients. One objective of the RNO, the French network for coastal water quality monitoring, is to assess the levels and trends of nutrient concentrations in estuaries. A linear model was used to describe and explain the evolution of total dissolved nitrogen concentration in the three most important estuaries of the Channel-Atlantic seaboard (Seine, Loire, and Gironde). As a first step, a reliable data set was selected. Patterns of total dissolved nitrogen evolution in the estuarine environment were then studied graphically, allowing a reasonable choice of covariates. Salinity played a major role in explaining nitrogen concentration variability in the estuaries, and dilution lines proved to be a useful tool for detecting outlying observations and modeling the nitrogen/salinity relation. Increasing trends were detected by the model, with a high magnitude in the Seine, intermediate in the Loire, and lower in the Gironde. The non-linear trends estimated in the Loire and Seine estuaries could be due to important interannual variations, as the graphics suggest. With a view to making better use of the QUADRIGE database, a discussion of the statistical model and of the RNO hydrological sampling strategy led to suggestions for a better exploitation of nutrient data.
Abstract:
In this thesis, wind wave prediction and analysis in the Southern Caspian Sea are surveyed. The subject is of great importance for reducing loss of life and financial damage in marine activities such as monitoring marine pollution, designing marine structures, shipping, fishing, the offshore industry, and tourism. For wave prediction, this study uses Caspian Sea topography data extracted from the Caspian Sea hydrography map of the Iran Armed Forces Geographical Organization, together with 10 m wind field data extracted from the GTS synoptic data transmitted by regional centers to the Forecasting Center of the Iran Meteorological Organization; for wave analysis, it uses the wave records of the oil company's buoy located 28 kilometers off the Neka shore. The results of this research are as follows. Because of the disagreement between the predictions of the SMB method in the Caspian Sea and the wave data of the Anzali and Neka buoys, the SMB method is not able to predict wave characteristics in the Southern Caspian Sea. Because of the relatively good agreement between the WAM model output in the Caspian Sea and the wave data of the Anzali buoy, the WAM model is able to predict wave characteristics in the Southern Caspian Sea with relatively high accuracy. The extreme wave height distribution function fitted to the Southern Caspian Sea wave data is obtained by determining the free parameters of the Poisson-Gumbel function through the method of moments; these parameters are A = 2.41 and B = 0.33. The maximum relative error between the 4-year return value of significant wave height estimated by this function and the wave data of the Neka buoy is about 35%. The 100-year return value of the Southern Caspian Sea significant wave height is about 4.97 meters.
The maximum relative error between the 4-year return value of significant wave height estimated by the peaks-over-threshold statistical model and the wave data of the Neka buoy is about 2.28%. The parametric relation fitted to the Southern Caspian Sea frequency spectra is obtained by determining the free parameters of the Strekalov, Massel, and Krylov et al. multipeak spectra through a mathematical method; these parameters are A = 2.9, B = 26.26, C = 0.0016, m = 0.19, and n = 3.69. The maximum relative error between the calculated free parameters of the Southern Caspian Sea multipeak spectrum and the free parameters of the double-peaked spectrum proposed by Massel and Strekalov from experimental Caspian Sea data is about 36.1% in the energetic part of the spectrum and about 74% in the high-frequency part. The peaks-over-threshold wave rose of the Southern Caspian Sea shows that the maximum occurrence probability corresponds to waves with heights of 2-2.5 meters. The error sources in the statistical analysis are mainly due to: (1) the missing wave data over a 2-year period caused by the battery discharge of the Neka buoy; and (2) the 15% deviation of the single-year annual mean significant height from the long-period average, caused by the lack of adequate measurement of oceanic waves. The error sources in the spectral analysis are mainly due to the items above, together with the low accuracy of the proposed free parameters of the double-peaked spectrum from the experimental Caspian Sea data.
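The moment-method fit and return-value calculation described above can be sketched for the plain Gumbel case. This is a generic extreme-value sketch; the Poisson-Gumbel variant used in the thesis additionally scales by an event-rate parameter:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moment_fit(maxima):
    """Fit Gumbel location mu and scale beta by the method of moments:
    beta = sqrt(6 * var) / pi, mu = mean - gamma * beta."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def return_level(mu, beta, T):
    """T-year return value: the level exceeded on average once in
    T years, z_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))
```

Fitting annual maxima of significant wave height and evaluating `return_level` at T = 100 reproduces the kind of 100-year design value quoted above.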
Abstract:
Spent hydroprocessing catalysts (HPCs) are solid wastes generated in the refining industry and typically contain various hazardous metals, such as Co, Ni, and Mo. These wastes cannot be discharged into the environment due to strict regulations and require proper treatment to remove the hazardous substances. Various options have been proposed and developed for spent catalyst treatment; however, hydrometallurgical processes are considered efficient, cost-effective, and environmentally friendly methods of metal extraction, and have been widely employed for the uptake of different metals from aqueous leachates of secondary materials. Although there are a large number of studies on hazardous metal extraction from aqueous solutions of various spent catalysts, little information is available on Co, Ni, and Mo removal from spent NiMo hydroprocessing catalysts. In the current study, a solvent extraction process was applied to the spent HPC to specifically remove Co, Ni, and Mo. The spent HPC is dissolved in an acid solution and the metals are then extracted using three different extractants, two of which were amine-based and one a quaternary ammonium salt. The main aim of this study was to develop a hydrometallurgical method to remove, and ultimately be able to recover, Co, Ni, and Mo from the spent HPCs produced at the petrochemical plant in Come By Chance, Newfoundland and Labrador.
The specific objectives of the study were: (1) characterization of the spent catalyst and the acidic leachate; (2) identification of the most efficient leaching agent to dissolve the metals from the spent catalyst; (3) development of a solvent extraction procedure using the amine-based extractants Alamine308 and Alamine336 and the quaternary ammonium salt Aliquat336 in toluene to remove Co, Ni, and Mo from the spent catalyst; (4) selection of the best reagent for Co, Ni, and Mo extraction based on the required contact time, required extractant concentration, and organic:aqueous ratio; and (5) evaluation of the extraction conditions and optimization of the metal extraction process using the Design Expert® software. A Central Composite Design (CCD) was applied as the main method to design the experiments, evaluate the effect of each parameter, provide a statistical model, and optimize the extraction process. Three parameters were considered the most significant factors affecting process efficiency: (i) extractant concentration, (ii) organic:aqueous ratio, and (iii) contact time. Metal extraction efficiencies were calculated based on ICP analysis of the pre- and post-leachates, and the process optimization was conducted with the aid of the Design Expert® software. The results showed that Alamine308 can be considered the most effective and suitable extractant for the spent HPC examined in this study, being capable of removing all three metals to the maximum amounts. Aliquat336 was found to be less effective, especially for Ni extraction; however, it is able to separate all of these metals within the first 10 min, unlike Alamine336, which required more than 35 min to do so.
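The Central Composite Design used for the experimental plan can be sketched in coded units. This is a generic CCD generator, not Design Expert® output; for the three factors named above (extractant concentration, organic:aqueous ratio, contact time), k = 3 gives 8 factorial corner runs, 6 axial runs, and replicated center runs:

```python
from itertools import product

def central_composite_design(k, n_center=3):
    """Central Composite Design for k factors in coded units:
    2^k factorial corner points at +/-1, 2k axial (star) points at
    distance alpha on each axis, plus n_center replicated center
    points. alpha is set to the rotatable value (2^k)^(1/4)."""
    alpha = (2 ** k) ** 0.25
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centers = [[0.0] * k for _ in range(n_center)]
    return corners + axial + centers
```

The coded levels are then mapped back to physical ranges (e.g., a contact-time interval) before running the experiments, and a quadratic response surface is fitted to the measured extraction efficiencies.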
Based on the results of this study, a cost-effective and environmentally friendly solvent extraction process was achieved that removes Co, Ni, and Mo from the spent HPCs in a short amount of time and with a low required extractant concentration. This method can also be tested and implemented for removing other hazardous metals from other secondary materials. Further investigation may be required; however, the results of this study can serve as a guide for future research on similar metal extraction processes.
Abstract:
The poultry sector currently faces two very stimulating challenges. The first stems from the growth, expected to continue, in demand for poultry meat on the domestic and international markets; the second from the fact that poultry farming has adopted more intensive production methods (kg live weight/m2/year) on a larger scale, i.e. with a higher animal density per farm. This markedly "industrial" character has naturally drawn the attention of society and of the livestock authorities, so that this economy of scale is counterbalanced by a set of legal and technical instruments safeguarding the birds as living beings. The starting point of the present work is Council Directive 2007/43/EC of 28 June, laying down minimum rules for the protection of chickens kept for meat production. Since there is not yet sufficient information on how the quality of animal handling can be monitored at slaughter, by veterinarians and official auxiliaries, in specially reared chickens under the models defined in Regulation (EC) No 543/2008, studies in this field are urgently needed. The main objective of this field work was to study the occurrence of plantar contact dermatitis (pododermatitis) and of lesions of the pre-sternal synovial bursa in chickens raised in production systems considered "protective" of animal welfare, namely: (i) free-range; and (ii) extensive indoor. The study was carried out at a free-range chicken slaughterhouse in Oliveira de Frades, between May 2012 and March 2013. The slaughtered animals were reared on farms with integration contracts located in the District of Viseu. Data were collected from 39 different flocks of the species Gallus domesticus, of which 1021 carcasses were evaluated after evisceration, corresponding to the examination of one in every fifteen birds on the slaughter line.
Pododermatitis was assessed using the method adapted by the DGAV, while sternal bursitis was assessed using the model applied to turkeys by Berk in 2002. Although the statistical model developed to analyze the results of this work requires a larger number of observations, it was possible to identify with great precision some risk factors that deserve emphasis for their relevance to the production systems scrutinized or to the pathophysiological mechanism of contact dermatitis, namely: (i) the age of the birds; although no direct relationship with pododermatitis and bursitis scores was identified, the advanced age that animals typically reach in extensive production systems was found to be associated with a higher rate of rejections by sanitary inspection; (ii) pre-slaughter weight; notwithstanding the inconsistency reported by several authors regarding the influence of the live weight of industrial chickens on contact dermatitis, in animals produced under extensive regimes this variable may be a key factor in the occurrence of this lesion. Indeed, the weight of these animals is of central importance in shaping the bird's biomechanics, including the pressure exerted on the plantar surface; (iii) the type of drinking system; the choice of drinker was shown to have a particular importance for the occurrence of pododermatitis in free-range chickens, probably related to its influence on litter moisture content. Overall, the frequencies of pododermatitis and bursitis found in this work should be considered worrying.
This concern grows when one realizes that the birds came from regimes considered "friendly" and "sustainable"; it is therefore urgent to monitor those production systems adequately, improve their conditions, and reassess their animal welfare benefits.
Abstract:
The prognosis of tooth loss is one of the main problems in clinical dental practice. One of the main prognostic factors is the amount of bony support of the tooth, defined by the intraosseous root surface area. This quantity has been estimated using different research methodologies, with heterogeneous results. In this work we used planimetry with microtomography to calculate the root surface area (RSA) of a sample of five mandibular second premolars obtained from the Portuguese population, with the final goal of building a statistical model to estimate the intraosseous root surface area from clinical indicators of bone loss. Finally, we propose a method for applying the results in practice. The data on root surface area, total tooth length (CT), and maximum mesio-distal crown dimension (MDeq) were used to establish the statistical relations between the variables and to define a multivariate normal distribution. A sample of 37 simulated observations, statistically identical to the data of the five-tooth sample, was then drawn from the multivariate normal distribution so defined. Five generalized linear models were fitted to the simulated data. The statistical model was selected according to criteria of goodness of fit, predictive ability, statistical power, parameter accuracy, and information loss, and was validated by graphical residual analysis. Based on the results, we propose a three-stage method for estimating lost/remaining root surface area.
In the first stage we use the statistical model to estimate the root surface area; in the second we estimate the proportion (in deciles) of intraosseous root using an adapted Schei ruler; and in the third we multiply the value obtained in the first stage by a coefficient representing the proportion of root lost (ASRp) or remaining (ASRr) for the decile estimated in the second stage. The strong point of this study was the application of validated statistical methodology to operationalize clinical data in estimating lost bony support. As weak points, we note that these results apply only to mandibular second premolars, and that clinical validation is lacking.
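The simulate-then-fit step described above can be sketched as follows: pseudo-observations are drawn from a multivariate normal over (RSA, CT, MDeq) and a linear model predicting RSA is fitted by least squares. The mean vector and covariance passed in any call are illustrative placeholders, not the study's estimates, and a single Gaussian linear model stands in for the five generalized linear models that were compared:

```python
import numpy as np

def simulate_and_fit(mean, cov, n=37, seed=0):
    """Draw n pseudo-teeth from a multivariate normal over
    (RSA, CT, MDeq), then fit a linear model predicting RSA from
    CT and MDeq by ordinary least squares. Returns the fitted
    coefficients (CT, MDeq, intercept) and the in-sample R^2."""
    rng = np.random.default_rng(seed)
    sample = rng.multivariate_normal(mean, cov, size=n)
    rsa, predictors = sample[:, 0], sample[:, 1:]
    A = np.column_stack([predictors, np.ones(n)])  # intercept column
    coef, *_ = np.linalg.lstsq(A, rsa, rcond=None)
    fitted = A @ coef
    r2 = 1.0 - np.sum((rsa - fitted) ** 2) / np.sum((rsa - rsa.mean()) ** 2)
    return coef, r2
```

In the study's workflow, several such models would be fitted to the same simulated sample and compared on fit, predictive ability, and information loss before one is selected.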
Abstract:
Master's in Economics and Management of Science, Technology and Innovation