43 results for decision analysis
Abstract:
BACKGROUND: Coronary artery disease (CAD) remains one of the leading public health burdens. Perfusion cardiovascular magnetic resonance (CMR) is generally accepted for detecting CAD, but data on its cost-effectiveness are scarce. The goal of this study was therefore to compare the costs of a CMR-guided strategy with those of two invasive strategies in a large CMR registry. METHODS: In 3,647 patients with suspected CAD from the EuroCMR registry (59 centers/18 countries), costs were calculated for diagnostic examinations (CMR, X-ray coronary angiography (CXA) with/without fractional flow reserve (FFR)), revascularizations, and complications during a 1-year follow-up. Patients with ischemia-positive CMR underwent invasive CXA and revascularization at the discretion of the treating physician (=CMR + CXA strategy). In the hypothetical invasive arm, costs were calculated for an initial CXA and an FFR in vessels with ≥50 % stenoses (=CXA + FFR strategy), and the same proportions of revascularizations and complications were applied as in the CMR + CXA strategy. In the CXA-only strategy, costs included those for CXA and for revascularization of all ≥50 % stenoses. To calculate the proportion of patients with ≥50 % stenoses, the stenosis-FFR relationship from the literature was used. Costs of the three strategies were determined from a third-party payer perspective in 4 healthcare systems. RESULTS: Revascularizations were performed in 6.2 %, 4.5 %, and 12.9 % of all patients, of patients with atypical chest pain (n = 1,786), and of patients with typical angina (n = 582), respectively; complications (=all-cause death and non-fatal infarction) occurred in 1.3 %, 1.1 %, and 1.5 %, respectively. The CMR + CXA strategy reduced costs by 14 %, 34 %, 27 %, and 24 % in the German, UK, Swiss, and US contexts, respectively, compared with the CXA + FFR strategy; and by 59 %, 52 %, 61 %, and 71 %, respectively, versus the CXA-only strategy. In patients with typical angina, cost savings of CMR + CXA vs CXA + FFR were minimal in the German system (2.3 %), intermediate in the US and Swiss systems (11.6 % and 12.8 %, respectively), and substantial in the UK system (18.9 %). Sensitivity analyses confirmed the robustness of the results. CONCLUSIONS: A CMR + CXA strategy for patients with suspected CAD provides a substantial cost reduction compared with a hypothetical CXA + FFR strategy in patients with low to intermediate disease prevalence. In the subgroup of patients with typical angina, however, cost savings were only minimal to moderate.
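A minimal sketch of the per-patient cost arithmetic the three strategies imply is given below. The 6.2 % revascularization and 1.3 % complication rates come from the abstract; every unit cost and the ischemia/stenosis proportions are invented placeholders, so the output illustrates only the structure of the comparison, not the registry's results.

```python
# Hypothetical cost sketch of the three diagnostic strategies compared above.
# All unit costs (arbitrary currency units) and the ischemia/stenosis
# proportions are invented placeholders; real tariffs differ per country.

COST = {"cmr": 500, "cxa": 1500, "ffr": 700, "revasc": 8000, "complication": 12000}

def cmr_cxa_strategy(p_ischemia, p_revasc, p_compl):
    """Everyone gets CMR; only ischemia-positive patients proceed to CXA."""
    return (COST["cmr"] + p_ischemia * COST["cxa"]
            + p_revasc * COST["revasc"] + p_compl * COST["complication"])

def cxa_ffr_strategy(p_stenosis, p_revasc, p_compl):
    """Everyone gets CXA; vessels with >=50% stenosis get an FFR measurement."""
    return (COST["cxa"] + p_stenosis * COST["ffr"]
            + p_revasc * COST["revasc"] + p_compl * COST["complication"])

def cxa_only_strategy(p_stenosis, p_compl):
    """Everyone gets CXA; all >=50% stenoses are revascularized."""
    return COST["cxa"] + p_stenosis * COST["revasc"] + p_compl * COST["complication"]

if __name__ == "__main__":
    a = cmr_cxa_strategy(p_ischemia=0.20, p_revasc=0.062, p_compl=0.013)
    b = cxa_ffr_strategy(p_stenosis=0.30, p_revasc=0.062, p_compl=0.013)
    c = cxa_only_strategy(p_stenosis=0.30, p_compl=0.013)
    print(f"CMR+CXA {a:.0f}  CXA+FFR {b:.0f}  CXA-only {c:.0f} per patient")
```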
Abstract:
Background Decisions on limiting life-sustaining treatment for patients in the vegetative state (VS) are emotionally and morally challenging. In Germany, doctors have to discuss with the legal surrogate (often a family member) whether the proposed treatment is in accordance with the patient's will. However, it is unknown whether family members of patients in the VS actually base their decisions on the patient's wishes. Objective To examine the role of advance directives, orally expressed wishes, or the presumed will of patients in a VS in family caregivers' decisions on life-sustaining treatment. Methods and sample A qualitative interview study with 14 next of kin of patients in a VS in a long-term care setting was conducted; 13 participants were the patients' legal surrogates. Interviews were analysed using qualitative content analysis. Results The majority of family caregivers said that they were aware of previously expressed wishes of the patient that could be applied to the VS condition, but did not base their decisions primarily on these wishes. They gave three reasons for this: (a) the expectation of clinical improvement, (b) the caregivers' definition of life-sustaining treatments, and (c) the moral obligation not to harm the patient. If the patient's wishes were not known or not revealed, the caregivers interpreted a will to live into the patient's survival and non-verbal behaviour. Conclusions Whether or not prior treatment wishes of patients in a VS are respected depends on their applicability, and also on the medical assumptions and moral attitudes of the surrogates. We recommend repeated communication, support for the caregivers, and advance care planning.
Abstract:
The aim of this work is to evaluate the capabilities and limitations of chemometric methods and other mathematical treatments applied to spectroscopic data, and more specifically to paint samples. The uniqueness of spectroscopic data comes from the fact that they are multivariate (a few thousand variables) and highly correlated. Statistical methods are used to study and discriminate samples. A collection of 34 red paint samples was measured by infrared and Raman spectroscopy. Data pretreatment and variable selection demonstrated that the use of Standard Normal Variate (SNV), together with removal of the noisy variables by selecting the wavenumbers from 650 to 1830 cm-1 and 2730 to 3600 cm-1, provided the optimal results for infrared analysis. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were then used as exploratory techniques to provide evidence of structure in the data, find clusters, and detect outliers. With the FTIR spectra, the principal components (PCs) correspond to binder types and the presence/absence of calcium carbonate; 83% of the total variance is explained by the first four PCs. As for the Raman spectra, six different clusters corresponding to the different pigment compositions are observed when plotting the first two PCs, which account for 37% and 20% of the total variance, respectively. In conclusion, the use of chemometrics for the forensic analysis of paints provides a valuable tool for objective decision-making, a reduction of possible classification errors, and better efficiency, yielding robust results with time-saving data treatments.
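The pretreatment pipeline named above (SNV followed by wavenumber-window selection, then PCA) can be sketched in a few lines; the array shapes and random data below are stand-ins for the actual FTIR spectra.

```python
import numpy as np
from sklearn.decomposition import PCA

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum individually."""
    return ((spectra - spectra.mean(axis=1, keepdims=True))
            / spectra.std(axis=1, keepdims=True))

def select_windows(spectra, wavenumbers, windows=((650, 1830), (2730, 3600))):
    """Keep only the variables inside the informative wavenumber windows."""
    mask = np.zeros(wavenumbers.shape, dtype=bool)
    for lo, hi in windows:
        mask |= (wavenumbers >= lo) & (wavenumbers <= hi)
    return spectra[:, mask]

# 34 samples measured over a hypothetical 400-4000 cm-1 grid
rng = np.random.default_rng(0)
wavenumbers = np.arange(400, 4000)
spectra = rng.normal(size=(34, wavenumbers.size))

pretreated = select_windows(snv(spectra), wavenumbers)
pca = PCA(n_components=4).fit(pretreated)
print("variance explained by the first 4 PCs:",
      round(pca.explained_variance_ratio_.sum(), 2))
```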
Abstract:
Background and purpose: Decision making (DM) has been defined as the process through which a person forms preferences, selects and executes actions, and evaluates the outcome related to a selected choice. This ability is an important factor for adequate behaviour in everyday life. DM impairment in multiple sclerosis (MS) has been reported previously. The purpose of the present study was to assess DM in patients with MS at the earliest clinically detectable time point of the disease. Methods: Patients with definite (n=109) or possible (clinically isolated syndrome, CIS; n=56) MS, a short disease duration (mean 2.3 years), and minor neurological disability (mean EDSS 1.8) were compared to 50 healthy controls aged 18 to 60 years (mean age 32.2) using the Iowa Gambling Task (IGT). Subjects had to select a card from any of 4 decks (A/B [disadvantageous]; C/D [advantageous]). The game consisted of 100 trials, grouped into blocks of 20 cards for data analysis. Skill in DM was assessed by means of a learning index (LI), defined as the difference between the average of the last three block indexes and the average of the first two: LI = (BI3 + BI4 + BI5)/3 - (BI1 + BI2)/2. Non-parametric tests were used for statistical analysis. Results: LI was higher in the control group (0.24, SD 0.44) than in the MS group (0.21, SD 0.38), but without reaching statistical significance (p=0.7). Differences emerged when MS patients were grouped according to phenotype. A trend toward a difference between MS subgroups and controls was observed for LI (p=0.06), which became significant between MS subgroups (p=0.03). CIS patients whose MS diagnosis was confirmed by a second relapse after study entry showed impaired performance on the IGT in comparison to the other CIS patients (p=0.01) and to definite MS patients (p=0.04). In contrast, CIS patients who did not entirely fulfil the McDonald criteria at inclusion and had no relapse during the study showed a normal learning pattern on the IGT. Finally, comparing MS patients who developed relapses after study entry, those who remained clinically stable, and controls, impaired performances were observed only in relapsing patients, in comparison to stable patients (p=0.008) and controls (p=0.03). Discussion: These results suggest a role for both MS relapsing activity and disease heterogeneity (i.e., subclinical severity or activity of MS) in the impairment of decision making.
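A worked example of the learning index follows. The exact definition of a block index is not given in the abstract; the sketch assumes it is the net proportion of advantageous picks within a 20-card block.

```python
def block_index(choices):
    """Net proportion of advantageous picks in one block of 20 cards."""
    good = sum(c in ("C", "D") for c in choices)   # advantageous decks
    bad = sum(c in ("A", "B") for c in choices)    # disadvantageous decks
    return (good - bad) / len(choices)

def learning_index(blocks):
    """LI = mean of block indexes 3-5 minus mean of block indexes 1-2."""
    bi = [block_index(b) for b in blocks]
    return sum(bi[2:]) / 3 - sum(bi[:2]) / 2

# a subject drifting from the bad decks toward the good decks earns a positive LI
blocks = [["A"] * 20, ["B"] * 10 + ["C"] * 10, ["C"] * 20, ["D"] * 20, ["C"] * 20]
print(learning_index(blocks))  # 1.5
```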
Abstract:
1.6 STRUCTURE OF THIS THESIS
- Chapter 1 presents the motivations of this dissertation by illustrating two gaps in the current body of knowledge that are worth filling, describes the research problem addressed by this thesis, and presents the research methodology used to achieve this goal.
- Chapter 2 reviews the existing literature, showing that environment analysis is a vital strategic task, that it should be supported by adapted information systems, and that there is thus a need to develop a conceptual model of the environment that provides a reference framework for better integrating the various existing methods and a more formal definition of the various aspects to support the development of suitable tools.
- Chapter 3 proposes a conceptual model that specifies the various environmental aspects that are relevant for strategic decision making and how they relate to each other, and defines them in a more formal way that is better suited to information systems development.
- Chapter 4 is dedicated to the evaluation of the proposed model through its application to a concrete environment, to assess its suitability for describing the current conditions and potential evolution of a real environment and to get an idea of its usefulness.
- Chapter 5 goes a step further by assembling a toolbox describing a set of methods that can be used to analyze the various environmental aspects put forward by the model, and by providing more detailed specifications for a number of them to show how the model can facilitate their implementation as software tools.
- Chapter 6 describes a prototype of a strategic decision support tool that allows the analysis of some aspects of the environment that are not well supported by existing tools, notably the relationships between multiple actors and issues. The usefulness of this prototype is evaluated through its application to a concrete environment.
- Chapter 7 concludes this thesis by summarizing its various contributions and proposing further interesting research directions.
Abstract:
Integrative and conjugative elements (ICEs) are in some ways parasitic mobile DNA elements that propagate vertically through replication with the bacterial host chromosome but, at low frequencies, can excise and invade new recipient cells through conjugation and reintegration (horizontal propagation). The factors that contribute to successful horizontal propagation are not well understood. Here, we study the influence of host cell life history on the initiation of transfer of a model ICE named ICEclc in bacteria of the genus Pseudomonas. We use time-lapse microscopy of growing and stationary-phase microcolonies of ICEclc-bearing cells, in combination with physiological staining and gene reporter analysis in stationary-phase suspended cells. We provide evidence that cell age and cell lineage are unlikely to play a role in the decision to initiate the ICEclc transfer program. In contrast, cells activating ICEclc more often show increased levels of reactive oxygen species and membrane damage than nonactivating cells, suggesting that some form of biochemical damage may make cells more prone to ICEclc induction. Finally, we find that ICEclc-active cells appear at spatially random positions in a microcolony, which may have provided a selective advantage by maximizing ICEclc horizontal transmission to new recipient species.
Abstract:
PURPOSE: To determine and compare the diagnostic performance of magnetic resonance imaging (MRI) and computed tomography (CT) for the diagnosis of tumor extent in advanced retinoblastoma, using histopathologic analysis as the reference standard. DESIGN: Systematic review and meta-analysis. PARTICIPANTS: Patients with advanced retinoblastoma who underwent MRI, CT, or both for the detection of tumor extent, from published diagnostic accuracy studies. METHODS: Medline and Embase were searched for literature published through April 2013 assessing the diagnostic performance of MRI, CT, or both in detecting intraorbital and extraorbital tumor extension of retinoblastoma. Diagnostic accuracy data were extracted from the included studies. Summary estimates were based on a random-effects model. Intrastudy and interstudy heterogeneity were analyzed. MAIN OUTCOME MEASURES: Sensitivity and specificity of MRI and CT in detecting tumor extent. RESULTS: Data on the following tumor-extent parameters were extracted: anterior eye segment involvement and ciliary body, optic nerve, choroidal, and (extra)scleral invasion. Articles on MRI reported results for 591 eyes from 14 studies, and articles on CT yielded 257 eyes from 4 studies. The summary estimates with their 95% confidence intervals (CIs) for the diagnostic accuracy of conventional MRI at detecting postlaminar optic nerve, choroidal, and scleral invasion showed sensitivities of 59% (95% CI, 37%-78%), 74% (95% CI, 52%-88%), and 88% (95% CI, 20%-100%), respectively, and specificities of 94% (95% CI, 84%-98%), 72% (95% CI, 31%-94%), and 99% (95% CI, 86%-100%), respectively. MRI with a high (versus a low) image quality showed higher diagnostic accuracy for the detection of prelaminar optic nerve and choroidal invasion, but these differences were not statistically significant. Studies reporting the diagnostic accuracy of CT did not provide enough data to perform any meta-analyses. CONCLUSIONS: MRI is an important diagnostic tool for the detection of local tumor extent in advanced retinoblastoma, although its diagnostic accuracy leaves room for improvement, especially with regard to sensitivity. With only a few, mostly old, studies, there is very little evidence on the diagnostic accuracy of CT, and these studies generally show low diagnostic accuracy. Future studies assessing the role of MRI in clinical decision making, in terms of prognostic value for advanced retinoblastoma, are needed.
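Summary estimates of this kind are often obtained by pooling logit-transformed sensitivities under a DerSimonian-Laird random-effects model; the sketch below illustrates that approach with invented study counts, without claiming it matches this meta-analysis's exact model.

```python
import math

def dersimonian_laird(events, totals):
    """Pool study-level proportions (e.g., sensitivities) on the logit scale."""
    y, v = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)          # continuity-corrected proportion
        y.append(math.log(p / (1 - p)))    # logit transform
        v.append(1 / (e + 0.5) + 1 / (n - e + 0.5))  # approximate variance
    w = [1 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance estimate
    w_star = [1 / (vi + tau2) for vi in v]   # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return 1 / (1 + math.exp(-pooled_logit))  # back to a proportion

# hypothetical true-positive counts and diseased-eye totals from four studies
print(round(dersimonian_laird([12, 8, 20, 5], [20, 15, 30, 9]), 2))
```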
Abstract:
BACKGROUND AND STUDY AIMS: Appropriate use of colonoscopy is a key component of quality management in gastrointestinal endoscopy. In an update of a 1998 publication, the 2008 European Panel on the Appropriateness of Gastrointestinal Endoscopy (EPAGE II) defined appropriateness criteria for various colonoscopy indications. This introductory paper deals with methodology, general appropriateness, and a review of colonoscopy complications. METHODS: The RAND/UCLA Appropriateness Method was used to evaluate the appropriateness of various diagnostic colonoscopy indications, with 14 multidisciplinary experts rating on a scale from 1 (extremely inappropriate) to 9 (extremely appropriate). Evidence reported in a comprehensive updated literature review was used for these decisions. Consolidation of the ratings into three appropriateness categories (appropriate, uncertain, inappropriate) was based on the median and the heterogeneity of the votes. The experts then met to discuss areas of disagreement in the light of existing evidence, followed by a second rating round, and a subsequent third voting round on necessity criteria, using much more stringent criteria (i.e., colonoscopy is deemed mandatory). RESULTS: Overall, 463 indications were rated, with 55 %, 16 %, and 29 % of them being judged appropriate, uncertain, and inappropriate, respectively. Perforation and hemorrhage rates, as reported in 39 studies, were in general < 0.1 % and < 0.3 %, respectively. CONCLUSIONS: The updated EPAGE II criteria constitute an aid to clinical decision-making but should in no way replace individual judgment. Detailed panel results are freely available on the internet (www.epage.ch) and thus constitute a reference source of information for clinicians.
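The consolidation step can be sketched as follows. EPAGE II's precise disagreement rule is not stated in this abstract; the sketch uses the common RAND/UCLA convention that a vote counts as disagreement when at least a third of panelists rate in each extreme tertile.

```python
from statistics import median

def categorize(ratings):
    """Map a panel's 1-9 ratings to appropriate / uncertain / inappropriate."""
    low = sum(r <= 3 for r in ratings)   # panelists in the bottom tertile
    high = sum(r >= 7 for r in ratings)  # panelists in the top tertile
    third = len(ratings) / 3
    if low >= third and high >= third:
        return "uncertain"  # heterogeneous votes count as disagreement
    m = median(ratings)
    if m >= 7:
        return "appropriate"
    if m <= 3:
        return "inappropriate"
    return "uncertain"

# 14 experts rating one indication
print(categorize([7, 8, 9, 7, 8, 9, 7, 8, 6, 7, 9, 8, 7, 8]))  # appropriate
```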
Abstract:
INTRODUCTION: A clinical decision rule to improve the accuracy of a diagnosis of influenza could help clinicians avoid unnecessary use of diagnostic tests and treatments. Our objective was to develop and validate a simple clinical decision rule for diagnosis of influenza. METHODS: We combined data from 2 studies of influenza diagnosis in adult outpatients with suspected influenza: one set in California and one in Switzerland. Patients in both studies underwent a structured history and physical examination and had a reference standard test for influenza (polymerase chain reaction or culture). We randomly divided the dataset into derivation and validation groups and then evaluated simple heuristics and decision rules from previous studies and 3 rules based on our own multivariate analysis. Cutpoints for stratification of risk groups in each model were determined using the derivation group before evaluating them in the validation group. For each decision rule, the positive predictive value and likelihood ratio for influenza in low-, moderate-, and high-risk groups, and the percentage of patients allocated to each risk group, were reported. RESULTS: The simple heuristics (fever and cough; fever, cough, and acute onset) were helpful when positive but not when negative. The most useful and accurate clinical rule assigned 2 points for fever plus cough, 2 points for myalgias, and 1 point each for duration <48 hours and chills or sweats. The risk of influenza was 8% for 0 to 2 points, 30% for 3 points, and 59% for 4 to 6 points; the rule performed similarly in derivation and validation groups. Approximately two-thirds of patients fell into the low- or high-risk group and would not require further diagnostic testing. CONCLUSION: A simple, valid clinical rule can be used to guide point-of-care testing and empiric therapy for patients with suspected influenza.
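The final rule translates directly into a short scoring function; the points and risk strata below are taken from the abstract (8% for 0-2 points, 30% for 3, 59% for 4-6).

```python
def flu_score(fever_and_cough, myalgias, onset_under_48h, chills_or_sweats):
    """Point total of the clinical decision rule described above."""
    score = 0
    score += 2 if fever_and_cough else 0
    score += 2 if myalgias else 0
    score += 1 if onset_under_48h else 0
    score += 1 if chills_or_sweats else 0
    return score

def flu_risk(score):
    """Risk stratum with the approximate influenza probability from the study."""
    if score <= 2:
        return "low (~8%)"
    if score == 3:
        return "moderate (~30%)"
    return "high (~59%)"

s = flu_score(fever_and_cough=True, myalgias=True,
              onset_under_48h=True, chills_or_sweats=False)
print(s, flu_risk(s))  # 5 high (~59%)
```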
Abstract:
Tailoring adjuvant therapy in breast cancer patients relies on prognostic and predictive factors, most of which are currently established by histopathological analysis of tumors. The quality of the assessment of the former (i.e., tumor size, lymph node status, tumor grade, HER2 status, and lymphovascular invasion) and the latter (estrogen and progesterone receptor expression, HER2 overexpression or amplification) is an essential prerequisite for an optimal therapeutic decision. If the prognostic and predictive values of multigene signatures are confirmed by ongoing clinical studies, this approach could enter clinical practice in the coming years and result in improved accuracy of adjuvant therapies in breast cancer patients. In particular, it might help avoid overtreatment in patients at low risk of recurrence.
Abstract:
OBJECTIVE: We aimed to explore how health surrogates of patients with dementia proceed in decision making, which considerations are decisive, and whether family surrogates and professional guardians decide differently. METHODS: We conducted an experimental vignette study using think-aloud protocol analysis. Thirty-two family surrogates and professional guardians were asked to decide on two hypothetical case vignettes concerning a feeding tube placement and a cardiac pacemaker implantation in patients with end-stage dementia. They had to verbalize their thoughts while deciding. Verbalizations were audio-recorded, transcribed, and analyzed according to content analysis. By experimentally changing variables in the vignettes, the impact of these variables on the outcome of decision making was calculated. RESULTS: Although only 25% and 31% of the relatives gave their consent to the feeding tube and pacemaker placement, respectively, 56% and 81% of the professional guardians consented to these life-sustaining measures. Relatives decided intuitively, referred to their own preferences, and focused on the patient's age, state of wellbeing, and suffering. Professional guardians showed a deliberative approach, relied on medical and legal authorities, and emphasized patient autonomy. Situational variables such as the patient's current behavior and the views of health care professionals and family members had greater impact on decisions than the patient's prior statements or life attitudes. CONCLUSIONS: Both the process and the outcome of surrogate decision making depend heavily on whether the surrogate is a relative or not. These findings have implications for the physician-surrogate relationship and for legal frameworks regarding surrogacy.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbor algorithm is considered. PNN is a neural network reformulation of well-known nonparametric principles of probability density modeling, using a kernel density estimator and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs were successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
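A minimal PNN, i.e., a Parzen-window classifier combining per-class Gaussian kernel density estimates with a Bayes decision rule, can be sketched as follows; the bandwidth and toy data are arbitrary illustration choices.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classify each test point by maximum posterior: prior x kernel density."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)
            # class-conditional density estimate (up to a common constant)
            density = np.exp(-d2 / (2 * sigma ** 2)).mean()
            prior = len(Xc) / len(X_train)
            scores.append(prior * density)  # Bayes rule: posterior score
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# two well-separated Gaussian clouds as toy spatial data
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(pnn_predict(X, y, np.array([[0.2, 0.1], [2.9, 3.2]])))  # -> [0 1]
```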
Abstract:
The authors discuss the results of the international literature on referrals between ambulatory physicians. There are still few studies on this problem, and the methodologies used are often too different to allow valid comparisons. Moreover, the available results raise more questions than they answer about the determinants of the referral process. This can be explained by the multidimensionality of the factors involved in the decision to refer a patient to another practitioner, and particularly by the complex interaction between the characteristics of each patient, the practitioner, and the healthcare system itself.
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have supplanted the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste, and noise, as well as health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation, and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory established. Fractal statistics, especially the lacunarity measure, and scale laws are used to characterise urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
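The percolation-based definition of urban clusters mentioned above can be sketched as follows: grid cells whose density exceeds a threshold are kept, connected components of kept cells form clusters, and varying the threshold changes the scale at which urban areas emerge. The grid, threshold values, and data below are invented.

```python
import numpy as np
from scipy.ndimage import label

def urban_clusters(density, threshold):
    """Connected components (4-connectivity) of cells above the density threshold."""
    labels, n = label(density > threshold)
    sizes = np.bincount(labels.ravel())[1:]  # cluster sizes, background excluded
    return n, sizes

rng = np.random.default_rng(2)
density = rng.gamma(shape=2.0, scale=50.0, size=(100, 100))  # people per grid cell

for thr in (100, 200, 300):
    n, sizes = urban_clusters(density, thr)
    largest = int(sizes.max()) if n else 0
    print(f"threshold {thr}: {n} clusters, largest spans {largest} cells")
```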
Abstract:
This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study.
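The expected-loss comparison at the heart of this analysis can be sketched as follows; the loss values are hypothetical placeholders, since the paper's point is precisely that they express the decision maker's preferences rather than fixed constants.

```python
def expected_loss(p_source, loss_if_source, loss_if_not_source):
    """Expected loss of a decision given the probability the person is the source."""
    return p_source * loss_if_source + (1 - p_source) * loss_if_not_source

def decide(p_source, loss_false_individualization=100.0, loss_missed=1.0):
    # falsely individualizing a non-source is penalized far more heavily
    # than failing to individualize the true source (placeholder values)
    el_individualize = expected_loss(p_source, 0.0, loss_false_individualization)
    el_refrain = expected_loss(p_source, loss_missed, 0.0)
    return "individualize" if el_individualize < el_refrain else "do not individualize"

# with these losses, individualization requires p(source) > 100/101, i.e. ~0.990
for p in (0.95, 0.995):
    print(p, decide(p))
```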