561 results for Predictability
Abstract:
Introduction: Determining the causes of bone loss in the jaws, understanding the biological mechanisms triggered after tooth loss, and creating technical resources to prevent and/or minimise the resulting sequelae have, over the years, been among the most active lines of research and development in dental medicine. Objective: The aim of this dissertation is therefore to review the literature on advances in biomaterials and techniques for the correction of maxillary bone defects, so that their applications in dentistry can be broadened in the future, overcoming the limitations of currently available techniques and materials. Methodology: A search for articles was carried out in the PubMed, Bireme, Lilacs and Medline databases, as well as in journals and periodicals in Portuguese, English and Spanish and in established textbooks of the medical-dental literature, using limits and keywords to refine the search. Development: Advances in biomaterials and techniques for correcting maxillary bone defects have, like dental implants, followed two main lines of research. The first concerns the biomaterials used after extraction to prevent bone resorption and those used to correct existing defects; the choice of these materials depends on factors that are decisive for the surgeon, such as availability, the need for an additional surgical procedure, compatibility, graft morbidity, quality of the resulting bone and time to new bone formation. The second concerns the techniques and resources developed to ensure effective correction of the defect and to provide procedures that are less traumatic to the body and simpler and more predictable in their execution and reproduction by clinicians. Discussion: Current work on biomaterials seeks not only the ideal substitute but also improved interaction between the host bone and the grafted biomaterial compared with the resources already in use and established in the literature, as well as the use of already protocolised techniques in combination with these new materials. Conclusion: Research on improving bone regeneration in maxillary defects is advancing towards promoting rapid and effective interaction between the organism and the biomaterial, in order to address problems such as antigenicity, dimensional predictability and shorter intervals between grafting and prosthetic rehabilitation, together with the use of practical technical resources, since rehabilitation planning begins with pre-extraction and pre-implant decisions.
Abstract:
The Guided Bone Regeneration (GBR) technique is a procedure whose purpose is to restore the alveolar ridge bone volume needed to ensure the success of implant-supported oral rehabilitation, re-establishing both the aesthetic and the functional components. This work focuses only on horizontal GBR prior to implant placement and analyses the success and clinical predictability of this bone augmentation procedure, the success and survival of implants placed in regenerated bone, and some types of bone grafts and membranes. Methodologically, it is a literature review based on a search for articles in online databases, also drawing on books in digital format. The keywords used in the online search were: “bone healing” AND “tooth extraction”, “bone resoption” AND “tooth extraction”, “bone regeneration” AND “dental implants”, “horizontal guided bone regeneration”, “horizontal guided bone regeneration” AND “dental implants”, “horizontal bone augmentation”, “horizontal bone augmentation” AND “dental implants”, “lateral bone augmentation” AND “dental implants”, “horizontal ridge augmentation” AND “dental implants”. Guided bone regeneration shows proven success and predictability in bone augmentation, and implants placed in regenerated bone demonstrate long-term success.
Abstract:
The demand for aesthetic rehabilitation has been a focus throughout human history, but it has intensified over the last two centuries. Since the face is one of the features by which we judge a person aesthetically, it falls to the dentist, as one of the professionals who works on that area of the human body, to assess and seek to meet the aesthetic needs of the population. Dental professionals must therefore constantly strive to satisfy aesthetic demands, both in knowledge and in refinement of technique. In recent times, with the need to develop materials to meet the ever-growing pursuit of aesthetic perfection, veneers have emerged as a treatment of excellence. The aim of this work was to compare composite resin and ceramic for the fabrication of aesthetic restorations. For both types of material, aesthetics and biomechanical behaviour were assessed. The benefits and disadvantages, contraindications, indications, treatment planning, diagnosis and clinical procedures of the two materials used in the fabrication of ceramic and composite resin veneers were compared. The following comparison parameters were used: biocompatibility, marginal adaptation, preparation, strength, colour, finishing, repair potential, cost and aesthetics. The use of ceramic veneers has been one of the main focuses of scientific development in dentistry. Their use allows greater predictability and greater clinical longevity. Their aesthetic quality, fracture resistance, biocompatibility and colour stability are their greatest advantages in clinical use. Conversely, composite resin veneers have a lower cost, greater resistance to abrasion, easy repairability and require less removal of tooth structure during preparation. However, they show lower colour stability. It is therefore clear that the choice of material for veneer fabrication must be adapted to the specific features of each case.
Abstract:
BRITTO, Ricardo S.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Uma arquitetura distribuída de hardware e software para controle de um robô móvel autônomo. In: SIMPÓSIO BRASILEIRO DE AUTOMAÇÃO INTELIGENTE,8., 2007, Florianópolis. Anais... Florianópolis: SBAI, 2007.
Abstract:
This PhD thesis contains three main chapters on macro finance, with a focus on the term structure of interest rates and the applications of state-of-the-art Bayesian econometrics. Except for Chapter 1 and Chapter 5, which set out the general introduction and conclusion, each of the chapters can be considered as a standalone piece of work. In Chapter 2, we model and predict the term structure of US interest rates in a data-rich environment. We allow the model dimension and parameters to change over time, accounting for model uncertainty and sudden structural changes. The proposed time-varying parameter Nelson-Siegel Dynamic Model Averaging (DMA) predicts yields better than standard benchmarks. DMA performs better since it incorporates more macro-finance information during recessions. The proposed method allows us to estimate plausible real-time term premia, whose countercyclicality weakened during the financial crisis. Chapter 3 investigates global term structure dynamics using a Bayesian hierarchical factor model augmented with macroeconomic fundamentals. More than half of the variation in the bond yields of seven advanced economies is due to global co-movement. Our results suggest that global inflation is the most important factor among global macro fundamentals. Non-fundamental factors are essential in driving global co-movements, and are closely related to sentiment and economic uncertainty. Lastly, we analyze asymmetric spillovers in global bond markets connected to diverging monetary policies. Chapter 4 proposes a no-arbitrage framework of term structure modeling with learning and model uncertainty. The representative agent considers parameter instability, as well as the uncertainty in learning speed and model restrictions. The empirical evidence shows that apart from observational variance, parameter instability is the dominant source of predictive variance when compared with uncertainty in learning speed or model restrictions. When accounting for ambiguity aversion, the out-of-sample predictability of excess returns implied by the learning model can be translated into significant and consistent economic gains over the Expectations Hypothesis benchmark.
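As context for the abstract above, the sketch below illustrates the two building blocks it mentions: Nelson-Siegel factor loadings for the yield curve and a single dynamic model averaging (DMA) weight update with a forgetting factor. It is a generic illustration under assumed notation (decay parameter lam, forgetting factor alpha), not the estimation code of the thesis.

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Level, slope and curvature loadings of the Nelson-Siegel yield curve."""
    tau = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    curvature = slope - np.exp(-lam * tau)
    return np.column_stack([np.ones_like(tau), slope, curvature])

def dma_update(prev_weights, pred_likelihoods, alpha=0.99):
    """One DMA step: discount yesterday's model weights, then reweight by predictive fit."""
    w = prev_weights ** alpha          # forgetting-factor discount
    w = w / w.sum()
    w = w * pred_likelihoods           # multiply by each model's predictive likelihood
    return w / w.sum()

# usage: loadings for 3-, 12-, 60- and 120-month yields
# X = nelson_siegel_loadings([3, 12, 60, 120])
```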
Abstract:
This thesis studies the field of asset price bubbles. It comprises three independent chapters, each of which directly or indirectly analyses the existence or implications of asset price bubbles. The type of bubbles assumed in each of these chapters is consistent with rational expectations; such price bubbles are known in the literature as rational bubbles. The following describes the three chapters. Chapter 1: This chapter attempts to explain the recent US housing price bubble by developing a heterogeneous agent endowment economy asset pricing model with risky housing, endogenous collateral and defaults. Investment in housing is subject to an idiosyncratic risk and some mortgages are defaulted on in equilibrium. We analytically derive the leverage, or endogenous loan-to-value ratio, which arises from a limited participation constraint in a one-period mortgage contract with monitoring costs. Our results show that low values of housing investment risk produce a credit-easing effect that encourages excess leverage and generates credit-driven rational price bubbles in the housing good. Conversely, high values of housing investment risk produce a credit crunch characterized by tight borrowing constraints, low leverage and low house prices. Furthermore, the leverage ratio is found to be procyclical and the rate of defaults countercyclical, consistent with empirical evidence. Chapter 2: It is widely believed that financial assets exhibit considerable persistence and are susceptible to bubbles. However, identification of this persistence and of potential bubbles is not straightforward. This chapter tests for price bubbles in the United States housing market accounting for long memory and structural breaks. The intuition is that the presence of long memory negates price bubbles, while the presence of breaks could artificially induce bubble behaviour. Hence, we use semi-parametric Whittle and parametric ARFIMA procedures, which are consistent under a variety of residual biases, to estimate the value of the long memory parameter, d, of the log rent-price ratio. We find that the semi-parametric estimation procedures, robust to non-normality and heteroskedastic errors, detect far more bubble regions than the parametric ones. A structural break was identified in the mean and trend of all the series which, when accounted for, removed bubble behaviour in a number of regions. Importantly, the United States housing market showed evidence of rational bubbles at both the aggregate and regional levels. In the third and final chapter, we attempt to answer the following question: to what extent should individuals participate in the stock market and hold risky assets over their life cycle? We answer this question by employing a life-cycle consumption-portfolio choice model with housing, labour income and time-varying predictable returns, in which agents are constrained in the level of their borrowing. We first analytically characterize and then numerically solve for the optimal allocation to the risky asset, comparing the return-predictability case with that of IID returns. We successfully resolve the puzzles and find equity holdings and participation rates close to the data. We also find that return predictability substantially alters both the level of risky portfolio allocation and the rate of stock market participation. High realizations of the factor (the dividend-price ratio) and high persistence of the factor process, both indicative of stock market bubbles, raise the amount of wealth invested in risky assets and the level of stock market participation, respectively. Conversely, rare disasters were found to bring down these rates, the change being most severe for investors in the later years of the life cycle. Furthermore, investors facing time-varying returns (return predictability) hedged background risks significantly better than those facing IID returns.
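The bubble test in Chapter 2 above turns on the long-memory parameter d of the log rent-price ratio. As a rough illustration of the semi-parametric side of that exercise, the sketch below implements a basic local Whittle estimator; the bandwidth rule m = n^0.65 and the admissible range for d are common textbook choices assumed here, not those of the thesis.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Basic semi-parametric local Whittle estimate of the long-memory parameter d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.65)                      # bandwidth: an assumed rule of thumb
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]     # DFT at the first m Fourier frequencies
    periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)

    def objective(d):
        g = np.mean(freqs ** (2.0 * d) * periodogram)
        return np.log(g) - 2.0 * d * np.mean(np.log(freqs))

    return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x

# usage: d_hat = local_whittle_d(log_rent_price_ratio)  # series name is hypothetical
```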
Abstract:
Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. We first consider, in Chapter 2, a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals - for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters, and of other sources of uncertainty, in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond 1 month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in coefficient estimation, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the 1-month horizon, and outperforms alternative methods, including Bayesian, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supplies and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices are strongly supported. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
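To make the MIDAS idea in Chapter 5 concrete, the snippet below builds exponential Almon lag weights and collapses one month of daily commodity returns into a single monthly regressor. The parameter names (theta1, theta2) and the assumption of roughly 22 trading days per month are illustrative, not taken from the thesis.

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial, normalised to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(daily_returns, theta1=0.05, theta2=-0.01):
    """Weight the ~22 daily observations of a month into one MIDAS regressor."""
    daily_returns = np.asarray(daily_returns, dtype=float)
    w = exp_almon_weights(len(daily_returns), theta1, theta2)
    return float(w @ daily_returns)

# usage: x_t = midas_aggregate(daily_commodity_returns_for_month_t)  # hypothetical input
```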
Abstract:
Dissertation (Master's)—Universidade de Brasília, Departamento de Engenharia Florestal, Programa de Pós-Graduação em Ciências Florestais, 2015.
Abstract:
The prognosis of tooth loss is one of the main problems in clinical dental practice. One of the main prognostic factors is the amount of bone support of the tooth, defined by the intraosseous root surface area. This quantity has been estimated using different research methodologies, with heterogeneous results. In this work we used planimetry with micro-computed tomography to calculate the root surface area (ASR) of a sample of five mandibular second premolars obtained from the Portuguese population, with the final objective of creating a statistical model to estimate the intraosseous root surface area from clinical indicators of bone loss. Finally, we propose a method for applying the results in practice. Data on root surface area, total tooth length (CT) and maximum mesio-distal crown dimension (MDeq) were used to establish the statistical relationships between the variables and to define a multivariate normal distribution. A sample of 37 simulated observations, statistically identical to the data from the five-tooth sample, was then generated from the defined multivariate normal distribution. Five generalized linear models were fitted to the simulated data. The statistical model was selected according to criteria of goodness of fit, predictability, statistical power, parameter accuracy and information loss, and was validated by graphical residual analysis. Based on the results, we propose a three-phase method for estimating lost/remaining root surface area. In the first phase the statistical model is used to estimate the root surface area; in the second the proportion (in deciles) of intraosseous root is estimated using an adapted Schei ruler; and in the third the value obtained in the first phase is multiplied by a coefficient representing the proportion of root lost (ASRp) or of root remaining (ASRr) for the decile estimated in the second phase. The strength of this study was the application of validated statistical methodology to operationalise clinical data in the estimation of lost bone support. Its weaknesses are that the results apply only to mandibular second premolars and that clinical validation is lacking.
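The three-phase procedure described above reduces to a small piece of arithmetic once the regression and the decile coefficients are available. The sketch below is a numerical illustration only: the linear form RSA = b0 + b1*CT + b2*MDeq and every coefficient value are placeholders, not the fitted model or coefficients of the study.

```python
import numpy as np

# Phase 1 - estimate total root surface area (RSA) from clinical predictors.
# Hypothetical linear predictor; b0, b1, b2 are placeholder values.
def estimate_rsa(ct_mm, mdeq_mm, b0=-50.0, b1=8.0, b2=15.0):
    return b0 + b1 * ct_mm + b2 * mdeq_mm

# Phase 2 - the clinician reads the decile of intraosseous root loss off an
# adapted Schei ruler (1 = least loss, 10 = most loss).

# Phase 3 - multiply the estimated RSA by the coefficient for that decile.
# Placeholder proportion-of-surface-lost values, one per decile.
LOST_FRACTION_BY_DECILE = np.linspace(0.05, 0.95, 10)

def rsa_lost_and_remaining(ct_mm, mdeq_mm, loss_decile):
    rsa = estimate_rsa(ct_mm, mdeq_mm)
    lost = rsa * LOST_FRACTION_BY_DECILE[loss_decile - 1]   # ASRp
    return lost, rsa - lost                                 # (ASRp, ASRr)

# usage: asr_p, asr_r = rsa_lost_and_remaining(ct_mm=21.5, mdeq_mm=7.0, loss_decile=3)
```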
Abstract:
There has long been a question as to whether crowding in rail passenger transport poses a threat to passenger health through the experience of stress. A review of the scientific literature was conducted. Little rail-specific empirical research was identified. The more general research that does exist suggests that high-density environments are not necessarily perceived as crowded and that stress-related physiological, psychological and behavioural reactions do not necessarily follow from exposure to such environments. Several factors are identified that may moderate the impact of a high-density environment on perceptions of crowding and the subsequent experience and effects of stress. These include, inter alia, perceptions of control and the predictability of events. However, where stress does arise, its experience and effects may be made worse by inadequate coach design that gives rise to discomfort. The model that emerges from these findings offers a suitable framework for the development of research questions that should help translate emerging knowledge into practical interventions for the reduction of any adverse health outcomes associated with crowding.
Abstract:
INTRODUCTION: Attaining an accurate diagnosis in the acute phase for severely brain-damaged patients presenting with Disorders of Consciousness (DOC) is crucial for prognostic validity; such a diagnosis determines further medical management, in terms of therapeutic choices and end-of-life decisions. However, DOC evaluation based on validated scales, such as the Coma Recovery Scale-Revised (CRS-R), can lead to an underestimation of consciousness and to frequent misdiagnoses, particularly in cases of cognitive motor dissociation due to other aetiologies. The purpose of this study is to determine the clinical signs that lead to a more accurate assessment of consciousness, allowing more reliable outcome prediction. METHODS: From the Unit of Acute Neurorehabilitation (University Hospital, Lausanne, Switzerland), between 2011 and 2014, we enrolled 33 DOC patients with a DOC diagnosis according to the CRS-R that had been established within 28 days of brain damage. The first CRS-R assessment established an initial diagnosis of Unresponsive Wakefulness Syndrome (UWS) in 20 patients and a Minimally Conscious State (MCS) in the remaining 13 patients. We clinically evaluated the patients over time using the CRS-R and, concurrently from the beginning, with the complementary clinical items of a new observational Motor Behaviour Tool (MBT). The primary endpoint was outcome at unit discharge, distinguishing two main classes of patients (those who had emerged from DOC and those remaining in DOC) and six subclasses detailing the outcome of UWS and MCS patients, respectively. Based on CRS-R and MBT scores assessed separately and jointly, statistical testing in the acute phase was performed using a non-parametric Mann-Whitney U test; longitudinal CRS-R data were modelled with a Generalized Linear Model. RESULTS: Fifty-five per cent of the UWS patients and 77 per cent of the MCS patients had emerged from DOC. First, statistical prediction based on the first CRS-R scores did not permit differentiation of outcome between classes; longitudinal regression modelling of the CRS-R data identified distinct outcome evolutions, but not earlier than 19 days. Second, the MBT yielded significant outcome predictability in the acute phase (p<0.02, sensitivity>0.81). Third, a statistical comparison of the CRS-R subscales weighted by the MBT became significantly predictive of DOC outcome (p<0.02). DISCUSSION: The combination of MBT and CRS-R scoring significantly improves the evaluation of consciousness and the predictability of outcome in the acute phase. Subtle motor behaviour assessment provides accurate insight into the amount and content of consciousness, even in cases of cognitive motor dissociation.
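The acute-phase group comparison reported above is a standard non-parametric two-sample test. The sketch below shows what such a test looks like in Python; the data frame, column names and grouping flag are hypothetical stand-ins for the study's patient-level data.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def mbt_outcome_test(patients: pd.DataFrame):
    """Compare acute-phase MBT scores between patients who emerged from DOC and those who did not."""
    emerged = patients.loc[patients["emerged"], "mbt_score"]
    remained = patients.loc[~patients["emerged"], "mbt_score"]
    stat, p_value = mannwhitneyu(emerged, remained, alternative="two-sided")
    return stat, p_value

# usage: stat, p = mbt_outcome_test(patients_df)  # 'patients_df' is a hypothetical table
```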
Abstract:
This thesis presents a study of Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. The study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to predict future data access patterns - i.e. the so-called dataset “popularity” of the CMS datasets on the Grid - with a focus on specific data types. All CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centres (Tiers), and in particular the distributed analysis system sustains hundreds of users and applications submitted every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how these data are accessed, in terms of data types, hosting Tiers, and different time periods, provides valuable insight into storage occupancy over time and into different access patterns, and ultimately allows suggested actions to be derived from this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power over future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, and discusses the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, introduces their application in CMS, and discusses the results obtained using this approach in the context of this thesis.
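As a concrete reading of the "popularity prediction" idea, the sketch below trains a generic supervised classifier on hypothetical per-dataset features (past accesses, users, CPU hours, size, age) to predict whether a dataset will be heavily accessed in the following week. The feature names, file name and threshold-based label are illustrative assumptions, not the actual CMS popularity pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical training extract: one row per dataset and week, with a label that is
# 1 if the dataset was accessed above a chosen threshold in the following week.
df = pd.read_csv("dataset_popularity_training.csv")
features = ["past_accesses", "unique_users", "cpu_hours", "size_tb", "age_weeks"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["popular_next_week"], test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```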
Abstract:
Objectives: The main objective of this thesis is to understand the characteristics of the criminal careers of individuals known to the police for having committed an online child-luring offence. In addition, a typological analysis based on criminal histories makes it possible to establish a typology of individuals who have lured children online. We also examine whether characteristics of these individuals' criminal histories are linked to the commission of offline sexual assault. Methodology: Drawn from official data of the Québec police community, the sample comprises the criminal trajectories of offenders who committed an online child-luring offence. Descriptive analyses of the various criminal-career parameters are carried out. Mean-comparison tests and a Cox regression analysis are then used to verify whether there is a statistical link between the characteristics of the criminal histories of individuals known to the police for online child luring and the transition to physical offending. Results: The analyses showed that the majority of subjects had no prior criminal record. For most, child luring is the most serious crime committed during their criminal career. Three categories of individuals were identified: amateurs, specialists and generalists. Polymorphic individuals with a longer and more serious criminal career tend to commit sexual assault before the luring offence. By contrast, specialised individuals with a large proportion of sexual offences in their criminal history are more likely to commit sexual assault following sexual exploitation on the Internet.
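The Cox regression step mentioned above can be sketched with the lifelines library: each row is one offender, the duration is the follow-up time from the online luring offence, and the event flag marks whether an offline sexual offence was recorded. All column names and covariates are hypothetical, chosen only to mirror the prior-record characteristics described in the abstract.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Assumed columns: follow-up time in months, event indicator, and prior-record covariates.
careers = pd.read_csv("criminal_careers.csv")
cols = ["time_months", "offline_offence", "career_length_years",
        "prior_offence_count", "share_sexual_offences"]

cph = CoxPHFitter()
cph.fit(careers[cols], duration_col="time_months", event_col="offline_offence")
cph.print_summary()   # hazard ratios for each prior-record characteristic
```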