951 results for Probabilities.
Abstract:
INTRODUCTION: There are several risk scores for stratification of patients with ST-segment elevation myocardial infarction (STEMI), the most widely used of which are the TIMI and GRACE scores. However, these are complex and require several variables. The aim of this study was to obtain a reduced model with fewer variables and similar predictive and discriminative ability. METHODS: We studied 607 patients (mean age 62 years, SD=13; 76% male) who were admitted with STEMI and underwent successful primary angioplasty. Our endpoints were all-cause in-hospital and 30-day mortality. Considering all variables from the TIMI and GRACE risk scores, multivariate logistic regression models were fitted to the data to identify the variables that best predicted death. RESULTS: Compared to the TIMI score, the GRACE score had better predictive and discriminative performance for in-hospital mortality, with similar results for 30-day mortality. After data modeling, the variables with the highest predictive ability were age, serum creatinine, heart failure and the occurrence of cardiac arrest. The new predictive model was compared with the GRACE risk score after internal validation using 10-fold cross-validation. A similar discriminative performance was obtained, and some improvement was achieved in the estimates of the probabilities of death (increased for patients who died and decreased for those who did not). CONCLUSION: It is possible to simplify risk stratification scores for STEMI and primary angioplasty using only four variables (age, serum creatinine, heart failure and cardiac arrest). This simplified model maintained a good predictive and discriminative performance for short-term mortality.
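As a hedged illustration of the modeling and validation steps described above, the sketch below fits a logistic regression on the four retained variables and scores it with 10-fold cross-validation using scikit-learn. The cohort data are not available here, so the predictors and outcome are synthetic stand-ins, and the generating coefficients are assumptions, not the study's estimates.

```python
# A sketch of the four-variable model: fit logistic regression and evaluate
# discrimination with 10-fold cross-validation. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 607  # cohort size reported in the abstract

# Hypothetical predictors: age (years), serum creatinine (mg/dL),
# heart failure (0/1), cardiac arrest (0/1).
X = np.column_stack([
    rng.normal(62, 13, n),
    rng.lognormal(0.1, 0.3, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])
# Synthetic short-term mortality, loosely increasing with each predictor
# (coefficients are assumptions, not the study's estimates).
logit = -8.0 + 0.06 * X[:, 0] + 0.8 * X[:, 1] + 1.0 * X[:, 2] + 1.5 * X[:, 3]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"10-fold CV AUC: {auc.mean():.2f} (+/- {auc.std():.2f})")
```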
Abstract:
BACKGROUND: High-grade gliomas are aggressive, incurable tumors characterized by extensive diffuse invasion of the normal brain parenchyma. Novel therapies at best prolong survival; their costs are formidable and benefit is marginal. Economic restrictions thus require knowledge of the cost-effectiveness of treatments. Here, we show the cost-effectiveness of enhanced resections in malignant glioma surgery using a well-characterized tool for intraoperative tumor visualization, 5-aminolevulinic acid (5-ALA). OBJECTIVE: To evaluate the cost-effectiveness of 5-ALA fluorescence-guided neurosurgery compared with white-light surgery in adult patients with newly diagnosed high-grade glioma, adopting the perspective of the Portuguese National Health Service. METHODS: We used a Markov model (cohort simulation). Transition probabilities were estimated with the use of data from 1 randomized clinical trial and 1 noninterventional prospective study. Utility values and resource use were obtained from published literature and expert opinion. Unit costs were taken from official Portuguese reimbursement lists (2012 values). The health outcomes considered were quality-adjusted life-years, life-years, and progression-free life-years. Extensive 1-way and probabilistic sensitivity analyses were performed. RESULTS: The incremental cost-effectiveness ratios are below €10 000 in all evaluated outcomes, being around €9100 per quality-adjusted life-year gained, €6700 per life-year gained, and €8800 per progression-free life-year gained. The probability of 5-ALA fluorescence-guided surgery being cost-effective at a threshold of €20 000 is 96.0% for the quality-adjusted life-year, 99.6% for the life-year, and 98.8% for the progression-free life-year. CONCLUSION: 5-ALA fluorescence-guided surgery appears to be cost-effective in newly diagnosed high-grade gliomas compared with white-light surgery. This example demonstrates that cost-effectiveness analyses for malignant glioma surgery are feasible on the basis of existing data.
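The Markov cohort simulation the abstract mentions can be sketched in a few lines. The version below assumes three states (stable, progression, dead) and monthly cycles; every transition probability, cost, utility and the discount rate are illustrative placeholders rather than the study's inputs, so the printed ICER is not the published figure.

```python
# A Markov cohort sketch: three states, monthly cycles, discounted costs and
# QALYs, and the resulting incremental cost-effectiveness ratio (ICER).
# All numbers below are illustrative assumptions.
import numpy as np

def run_cohort(P, cost_state, utility, n_cycles=60, annual_discount=0.03):
    """Simulate a cohort for n_cycles monthly cycles; return (cost, QALYs)."""
    dist = np.array([1.0, 0.0, 0.0])         # everyone starts in 'stable'
    cost = qaly = 0.0
    for t in range(n_cycles):
        d = 1.0 / (1.0 + annual_discount) ** (t / 12.0)
        cost += d * (dist @ cost_state)
        qaly += d * (dist @ utility) / 12.0  # utilities are per year
        dist = dist @ P                      # advance one cycle
    return cost, qaly

# States: stable, progression, dead. 5-ALA surgery is assumed to delay
# progression relative to white-light surgery (hypothetical matrices).
P_ala   = np.array([[0.94, 0.04, 0.02], [0.00, 0.90, 0.10], [0.0, 0.0, 1.0]])
P_white = np.array([[0.90, 0.07, 0.03], [0.00, 0.90, 0.10], [0.0, 0.0, 1.0]])

cost_state = np.array([1200.0, 2500.0, 0.0])  # EUR per cycle by state
utility    = np.array([0.85, 0.60, 0.0])      # utility weight by state

c1, q1 = run_cohort(P_ala, cost_state, utility)
c0, q0 = run_cohort(P_white, cost_state, utility)
c1 += 1800.0  # assumed one-off extra cost of 5-ALA and fluorescence equipment

print(f"ICER: EUR {(c1 - c0) / (q1 - q0):,.0f} per QALY gained")
```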
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics
Abstract:
Dissertation presented to obtain the Master's degree in Electrical and Computer Engineering from Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Despite the significant advances made possible in recent years in the field of pharmacology and diagnostic tests, acute myocardial infarction and sudden cardiac death remain the first manifestation of coronary atherosclerosis in a significant proportion of patients, as many were previously asymptomatic. Traditionally, the diagnostic exams employed for the evaluation of possible coronary artery disease are based on the documentation of myocardial ischemia and, in this way, they are linked to the presence of obstructive coronary stenosis. Nonobstructive coronary lesions are also frequently involved in the development of coronary events. Although the absolute risk of becoming unstable per plaque is higher for more obstructive and higher-burden plaques, these are much less frequent than nonobstructive lesions and therefore, in terms of probability for the patient, coronary events are often the result of rupture or erosion of the latter. Recent advanced intracoronary imaging studies provided evidence that, although it is possible to identify some features of vulnerability in plaques associated with the subsequent development of coronary events, their sensitivity and specificity are too limited for clinical application. More important than the risk associated with a particular plaque may be the global risk of the whole coronary tree, reflected in the sum of the probabilities of all its lesions: the higher the coronary atherosclerotic burden, the higher the risk for the patient. Cardiac CT, or coronary CT angiography, is still a young modality. It is the most recent noninvasive imaging technique for the study of coronary artery disease, and its development was made possible by important advances in multidetector CT technology. These allowed significant improvements in temporal and spatial resolution, leading to better image quality and impressive reductions in radiation dose. At the same time, growing experience with the technique led to a growing body of scientific evidence, making cardiac CT a robust imaging tool for the evaluation of coronary artery disease and broadening its clinical indications. More recently, several publications documented its prognostic value, marking the transition of cardiac CT to adulthood. Besides excluding the presence of coronary artery disease and identifying obstructive lesions, cardiac CT also identifies nonobstructive lesions, a unique capability among noninvasive imaging modalities. By evaluating both obstructive and nonobstructive lesions, cardiac CT can quantify the total coronary atherosclerotic burden, and this can be useful to stratify the risk of future coronary events. In the present work, it was possible to identify significant demographic and clinical predictors of a high coronary atherosclerotic burden as assessed by cardiac CT, but their discriminative power was modest, even when the individual variables were combined in clinical scores. Among these clinical scores, performance was somewhat better for the HeartScore cardiovascular risk score. This modest performance underlines the limitations of predicting the presence and extent of coronary disease from clinical variables alone, even when combined in risk scores. One of the classical risk factors, obesity, in fact showed a paradoxical relation with coronary atherosclerotic burden, which may explain some of the limitations of the clinical models. Diabetes mellitus, by contrast, was one of the strongest clinical predictors and was taken as a model of more advanced coronary disease, useful for evaluating the performance of different plaque-burden scores. Given the high prevalence of plaques identifiable in the coronary tree of patients undergoing cardiac CT, it is important to develop tools that quantify the total coronary atherosclerotic burden and thereby identify patients who could benefit from more intensive preventive measures. This was the rationale for developing a coronary atherosclerotic burden score that combines the information on localization, degree of stenosis and plaque composition provided by cardiac CT: the CT-LeSc. This score may become a useful tool for quantifying total coronary atherosclerotic burden and is expected to convey the strong prognostic information of cardiac CT. Lastly, the concept of the vulnerable coronary tree may become more important than that of the vulnerable plaque, and its assessment by cardiac CT may become important in a more advanced primary prevention strategy, allowing the intensity of preventive measures to be tailored to the atherosclerotic burden; this may become one of the most important indications of cardiac CT in the near future.
Abstract:
This work studies the combination of safe and probabilistic reasoning through the hybridization of Monte Carlo integration techniques with continuous constraint programming. In continuous constraint programming there are variables ranging over continuous domains (represented as intervals) together with constraints over them (relations between variables), and the goal is to find values for those variables that satisfy all the constraints (consistent scenarios). Constraint programming “branch-and-prune” algorithms produce safe enclosures of all consistent scenarios. Dedicated algorithms proposed for probabilistic constraint reasoning compute the probability of sets of consistent scenarios, which implies calculating an integral over these sets (quadrature). In this work we propose to extend the “branch-and-prune” algorithms with Monte Carlo integration techniques to compute such probabilities. This approach can be useful in robotics for localization problems, where traditional approaches rely on probabilistic techniques that search for the most likely scenario, which may not satisfy the model constraints. We show how to apply our approach to cope with this problem and provide functionality in real time.
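A minimal sketch of the hybrid scheme described above: branch-and-prune over interval boxes, pruning boxes proven inconsistent, keeping boxes proven consistent, and using Monte Carlo sampling only inside undecided "boundary" boxes to estimate the probability of the consistent scenarios. The constraint (x² + y² ≤ 1) and the uniform prior on [-1, 1]² are illustrative assumptions, not the paper's benchmarks.

```python
# Branch-and-prune with Monte Carlo estimation inside boundary boxes.
import random

def area(box):
    (xl, xu), (yl, yu) = box
    return (xu - xl) * (yu - yl)

def interval_eval(box):
    """Safe interval enclosure of x^2 + y^2 over box = ((xl, xu), (yl, yu))."""
    (xl, xu), (yl, yu) = box
    lo = ((0.0 if xl <= 0.0 <= xu else min(xl * xl, xu * xu))
          + (0.0 if yl <= 0.0 <= yu else min(yl * yl, yu * yu)))
    hi = max(xl * xl, xu * xu) + max(yl * yl, yu * yu)
    return lo, hi

def mass(box, depth=0, max_depth=8, samples=200):
    """Unnormalized probability mass of the consistent scenarios inside box."""
    lo, hi = interval_eval(box)
    if hi <= 1.0:                 # box entirely consistent
        return area(box)
    if lo > 1.0:                  # box entirely inconsistent: prune
        return 0.0
    (xl, xu), (yl, yu) = box
    if depth == max_depth:        # undecided boundary box: Monte Carlo estimate
        hits = sum((xl + random.random() * (xu - xl)) ** 2
                   + (yl + random.random() * (yu - yl)) ** 2 <= 1.0
                   for _ in range(samples))
        return area(box) * hits / samples
    # branch: split the widest dimension
    if xu - xl >= yu - yl:
        m = (xl + xu) / 2
        halves = (((xl, m), (yl, yu)), ((m, xu), (yl, yu)))
    else:
        m = (yl + yu) / 2
        halves = (((xl, xu), (yl, m)), ((xl, xu), (m, yu)))
    return sum(mass(b, depth + 1, max_depth, samples) for b in halves)

box0 = ((-1.0, 1.0), (-1.0, 1.0))
p = mass(box0) / area(box0)       # normalize by the uniform prior's support
print(f"P(x^2 + y^2 <= 1) ~= {p:.4f} (exact: pi/4 ~= 0.7854)")
```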
Abstract:
This paper uses the framework developed by Vrugt (2010) to extract the recovery rate and the term structure of risk-neutral default probabilities implied in the cross-section of Portuguese sovereign bonds outstanding between March and August 2011. During this period the expectations on the recovery rate remained firmly anchored around 50 percent, while the instantaneous default probability increased steadily from 6 to above 30 percent. These parameters are then used to calculate the fair value of 5-year and 10-year CDS contracts. A credit-risk-neutral strategy is developed from the difference between the market price of a CDS of the same tenor and the calculated fair value, yielding a Sharpe ratio of 3.2.
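A minimal sketch, under simplifying assumptions, of the reduced-form logic behind the exercise above: with a flat risk-neutral hazard rate lam and recovery rate R, survival to time t is exp(-lam·t) and the fair CDS spread is roughly (1 - R)·lam. The flat discount rate and the bond coupon below are illustrative; lam = 0.30 and R = 0.50 echo the orders of magnitude reported in the abstract, not the paper's fitted term structure.

```python
# Reduced-form bond and CDS pricing under a flat hazard rate (illustrative).
import math

def bond_price(coupon, maturity, lam, R, r=0.04, freq=2):
    """Price per 100 face value of a fixed-coupon bond under a flat hazard rate."""
    dt = 1.0 / freq
    price, prev_surv = 0.0, 1.0
    for k in range(1, int(maturity * freq) + 1):
        t = k * dt
        disc, surv = math.exp(-r * t), math.exp(-lam * t)
        price += disc * (coupon / freq) * surv          # survival-weighted coupon
        price += disc * R * 100.0 * (prev_surv - surv)  # recovery paid on default
        prev_surv = surv
    price += math.exp(-r * maturity) * 100.0 * math.exp(-lam * maturity)  # principal
    return price

def cds_fair_spread(maturity, lam, R, r=0.04, freq=4):
    """Running spread that equates the premium and protection legs."""
    dt = 1.0 / freq
    prem, prot, prev_surv = 0.0, 0.0, 1.0
    for k in range(1, int(maturity * freq) + 1):
        t = k * dt
        disc, surv = math.exp(-r * t), math.exp(-lam * t)
        prem += disc * dt * surv                       # premium-leg annuity
        prot += disc * (1.0 - R) * (prev_surv - surv)  # protection leg
        prev_surv = surv
    return prot / prem

lam, R = 0.30, 0.50
print(f"5y bond with 4% coupon: {bond_price(4.0, 5, lam, R):.2f} per 100 face")
print(f"Fair 5y CDS spread: {cds_fair_spread(5, lam, R) * 1e4:.0f} bp")
```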
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices, in the face of needs to be fulfilled and a limited amount of resources to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, such cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model; a numeric sketch of this mechanism is given below. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decide alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory for the bigger party. The smaller one may also win, provided its relative size is not too small; more self-delusional voters in the minority party decrease this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
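A minimal numeric sketch of the Chapter 2 mechanism, under illustrative assumptions: in a linear symmetric Cournot duopoly (inverse demand P = a - q1 - q2, zero marginal cost), each firm either pays a thinking cost k and best-responds or produces a default quantity q0 for free. All parameter values below are hypothetical; the point is that the asymmetric think/default profile is an equilibrium exactly for intermediate k.

```python
# Illustrative check of which think/default profiles are Nash equilibria in a
# linear symmetric Cournot duopoly with a thinking cost.
a = 12.0  # demand intercept (hypothetical)

def profit(q, q_other):
    return q * max(a - q - q_other, 0.0)

def best_response(q_other):
    return max((a - q_other) / 2.0, 0.0)  # argmax_q q * (a - q - q_other)

def equilibria(q0, k):
    """List the equilibrium profiles for default quantity q0 and thinking cost k."""
    found = []
    q_c = a / 3.0                          # Cournot quantity when both think
    # Both think: deviating to the free default choice must not pay.
    if profit(q_c, q_c) - k >= profit(q0, q_c):
        found.append("both think")
    # Asymmetric: one thinks (best responds to q0), the other keeps the default.
    br = best_response(q0)
    thinker_stays = profit(br, q0) - k >= profit(q0, q0)
    defaulter_stays = profit(q0, br) >= profit(best_response(br), br) - k
    if thinker_stays and defaulter_stays:
        found.append("one thinks, one defaults")
    # Neither thinks: paying k to best respond to q0 must not pay.
    if profit(q0, q0) >= profit(best_response(q0), q0) - k:
        found.append("neither thinks")
    return found

q0 = 5.0  # a "natural" default quantity, above the Cournot quantity a/3 = 4
for k in (0.1, 1.0, 3.0):
    print(f"thinking cost k = {k}: {equilibria(q0, k)}")
```

With these numbers, low k yields only the symmetric "both think" equilibrium, high k only "neither thinks", and intermediate k sustains the asymmetric profile, matching the asymmetric-choices result stated above.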
Abstract:
OBJECTIVE: To investigate and characterize the professional stereotypes associated with general medicine and surgery among Brazilian medical residents. METHODS: A randomized sample of residents of the General Medicine and Surgery Residence Programs was interviewed and their perceptions and views of general and surgical doctors were compared. RESULTS: The general practitioner was characterized by the residents in general to be principally a sensitive and concerned doctor with a close relationship with the patient (45%); calm, tranquil, and balanced (27%); with intellectual skills (25%); meticulous and attentive to details (23%); slow to resolve problems and make decisions (22%); and working more with probabilities and hypotheses (20%). The surgeon was considered to be practical and objective (40%); quick to resolve problems (35%); technical with manual skills (23%); omnipotent, arrogant, and domineering (23%); anxious, stressed, nervous, and temperamental (23%); and more decided, secure, and courageous (20%). Only the residents of general medicine attributed to the surgeon less knowledge of medicine, and only the surgeons attributed gender characteristics to their own specialty. CONCLUSION: There was considerable similarity in the description of a typical general practitioner and surgeon among the residents in general, regardless of the specialty they had chosen. It was interesting to observe that these stereotypes persist despite the transformations in the history of medicine, i.e., those of the first physicians (especially regarding the valorization of knowledge) and the first surgeons, so-called "barber surgeons" in Brazil (associated with less knowledge and the performance of high-risk procedures).
Abstract:
Chesney’s: Growing Through Product Expansion. The purpose of this work project is to reach a better understanding of how to proceed when a company is challenged by new options to grow and thrive. It aims to chart the next direction of Chesney’s Ltd, a leading United Kingdom company in luxury replicas of antique fireplaces, wood-burning stoves and other architectural pieces. The work project relies on the concepts of strategy, innovation and design thinking in order to encourage dynamic activities within the company. Chesney’s continuously tries to improve and innovate, and this work project will assess whether the possible options have a strategic fit with the purpose of the company and, consequently, create an introduction plan for the opportunity with the highest probability of success.
Abstract:
In longitudinal studies of disease, patients may experience several events over a follow-up period. In these studies, the sequentially ordered events are often of interest and lead to problems that have received much attention recently. Issues of interest include the estimation of bivariate survival, marginal distributions and the conditional distribution of gap times. In this work we consider the estimation of the survival function conditional on a previous event. Different nonparametric approaches are considered for estimating these quantities, all based on the Kaplan-Meier estimator of the survival function. We explore the finite-sample behavior of the estimators through simulations. The methods proposed in this article are applied to a data set from a German Breast Cancer Study. The methods are used to obtain predictors of the conditional survival probabilities as well as to study the influence of recurrence on overall survival.
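As an illustration of the conditional-survival quantity discussed above, the sketch below estimates P(T > t | T > s) as the ratio of Kaplan-Meier survival estimates S(t)/S(s), using the lifelines library on synthetic censored data that stand in for the study cohort.

```python
# Conditional survival via Kaplan-Meier: P(T > t | T > s) = S(t) / S(s).
# Synthetic censored data; the lifelines library is assumed to be available.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
n = 300
event_times = rng.exponential(60.0, n)    # hypothetical event times (months)
censor_times = rng.exponential(80.0, n)   # independent censoring
durations = np.minimum(event_times, censor_times)
observed = event_times <= censor_times

kmf = KaplanMeierFitter().fit(durations, observed)

def conditional_survival(kmf, t, s):
    """Kaplan-Meier estimate of P(T > t | T > s), for t >= s."""
    return kmf.predict(t) / kmf.predict(s)

print(f"P(T > 60 | T > 24) ~= {conditional_survival(kmf, 60.0, 24.0):.2f}")
```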
Abstract:
This paper investigates the geographical distribution and concentration of firms’ innovation persistence and innovation type (product and process), based on three waves of the Portuguese Community Innovation Survey covering the period 1998–2006. The main findings are: 1) both innovation persistence and innovation type are asymmetrically distributed across Portuguese regions; 2) the degree of correlation between geographical location and innovative output varies with the innovation type; and 3) the correlation between geographical unit and innovation increases when the spatial unit of analysis is narrower. The results suggest that firms’ choices of geographical location have a long-lasting effect, so that firms do not face equal probabilities of being persistently innovative.
Abstract:
OBJECTIVE: The aim of this study is to evaluate the survival rate in a cohort of Parkinson's disease patients with and without depression. METHODS: A total of 53 Parkinson's disease subjects were followed up from 2003 to 2008, and 21 were diagnosed as depressed. Mean time of follow-up was 3.8 (SD = 1.5) years for the whole sample, and there was no significant difference in mean follow-up time between depressed and nondepressed Parkinson's disease patients. Survival curves were fitted using the Kaplan-Meier method. To compare survival probabilities according to the selected covariables, the log-rank test was used. Multivariate analysis with Cox regression was performed to estimate the effect of predictive covariables on survival. RESULTS: The cumulative global survival of this sample was 83%, with nine deaths by the end of the study (five in the depressed and four in the nondepressed group); 55.6% of the deaths occurred in the first year of observation, and none occurred in the fourth or fifth year of follow-up. CONCLUSION: Our findings point toward an increased death risk in depressed Parkinson's disease patients.
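A minimal sketch of the analysis pipeline the abstract describes (Kaplan-Meier curves, log-rank comparison, Cox regression), using the lifelines library on synthetic data standing in for the 53-patient cohort; the effect sizes and the censoring rule are assumptions.

```python
# Sketch of the described pipeline on synthetic data: Kaplan-Meier per group,
# log-rank test, and a Cox model with depression as the covariate.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2)
n = 53                                    # cohort size from the abstract
depressed = rng.integers(0, 2, n)
# Hypothetical survival times (years), shorter on average if depressed.
times = rng.exponential(np.where(depressed == 1, 8.0, 15.0))
observed = times <= 5.0                   # administrative censoring at 5 years
durations = np.minimum(times, 5.0)

for grp in (0, 1):                        # Kaplan-Meier fit per group
    m = depressed == grp
    kmf = KaplanMeierFitter().fit(durations[m], observed[m], label=f"depressed={grp}")
    print(f"depressed={grp}: median survival {kmf.median_survival_time_} years")

lr = logrank_test(durations[depressed == 1], durations[depressed == 0],
                  observed[depressed == 1], observed[depressed == 0])
print(f"log-rank p-value: {lr.p_value:.3f}")

df = pd.DataFrame({"duration": durations, "event": observed, "depressed": depressed})
cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])
```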
Abstract:
This article studies the performance of prospective preschool and primary school teachers in comparing the probabilities of events in different social contexts, formulated both explicitly and implicitly, in order to assess the influence of the level of explicitness on probability comparison. Fifty-one prospective preschool and primary school teachers participated in the study, answering a questionnaire with two questions comprising several items that involved comparing the probabilities of events formulated explicitly and implicitly. In terms of results, overall, the items formulated implicitly proved more difficult for the students than the items formulated explicitly.