180 results for Logical Decision Function
at Université de Lausanne, Switzerland
Abstract:
In this paper, we develop a data-driven methodology to characterize the likelihood of orographic precipitation enhancement using sequences of weather radar images and a digital elevation model (DEM). Geographical locations with topographic characteristics favorable to repeatable and persistent orographic precipitation, such as stationary cells, upslope rainfall enhancement, and repeated convective initiation, are detected by analyzing the spatial distribution of a set of precipitation cells extracted from radar imagery. Topographic features such as terrain convexity and gradients computed from the DEM at multiple spatial scales as well as velocity fields estimated from sequences of weather radar images are used as explanatory factors to describe the occurrence of localized precipitation enhancement. The latter is represented as a binary process by defining a threshold on the number of cell occurrences at particular locations. Both two-class and one-class support vector machine classifiers are tested to separate the presumed orographic cells from the nonorographic ones in the space of contributing topographic and flow features. Site-based validation is carried out to estimate realistic generalization skills of the obtained spatial prediction models. Due to the high class separability, the decision function of the classifiers can be interpreted as a likelihood or susceptibility of orographic precipitation enhancement. The developed approach can serve as a basis for refining radar-based quantitative precipitation estimates and short-term forecasts or for generating stochastic precipitation ensembles conditioned on the local topography.
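To make the last point concrete, here is a minimal Python sketch (not the authors' code; the features, labels and parameter values are synthetic placeholders) of how both classifier types expose a real-valued decision function in scikit-learn that can be read as a susceptibility score:

    import numpy as np
    from sklearn.svm import SVC, OneClassSVM
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical per-location features: convexity, terrain gradient, flow speed
    X = rng.normal(size=(500, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in labels: orographic or not
    X = StandardScaler().fit_transform(X)

    # Two-class SVM: decision_function returns a signed distance to the boundary
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    susceptibility = clf.decision_function(X)

    # One-class SVM trained on the presumed orographic cells only
    occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X[y == 1])
    susceptibility_one_class = occ.decision_function(X)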
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase came along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top approach in method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient indoor radon decision making.
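For readers unfamiliar with the clustering diagnostic mentioned above: the classical Morisita index, of which the proposed QMI is a variant, takes only a few lines to compute. A minimal Python sketch with synthetic coordinates (the QMI itself is the thesis' own extension and is not reproduced here):

    import numpy as np

    def morisita_index(points, n_bins):
        """Classical Morisita index for 2-D points: > 1 suggests clustering,
        ~ 1 a random pattern. Quadrats are the cells of an n_bins x n_bins grid."""
        counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=n_bins)
        n = counts.ravel()
        N = n.sum()
        return n.size * np.sum(n * (n - 1)) / (N * (N - 1))

    # Hypothetical usage: clustering of measurement locations above a radon threshold
    xy = np.random.default_rng(1).random((1000, 2))
    print(morisita_index(xy, n_bins=10))  # close to 1 for uniform random points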
Abstract:
What genotype should the scientist specify for conducting a database search to try to find the source of a low-template-DNA (lt-DNA) trace? When the scientist answers this question, he or she makes a decision. Here, we approach this decision problem from a normative point of view by defining a decision-theoretic framework for answering this question for one locus. This framework combines the probability distribution describing the uncertainty over the trace's donor's possible genotypes with a loss function describing the scientist's preferences concerning false exclusions and false inclusions that may result from the database search. According to this approach, the scientist should choose the genotype designation that minimizes the expected loss. To illustrate the results produced by this approach, we apply it to two hypothetical cases: (1) the case of observing one peak for allele x_i on a single electropherogram, and (2) the case of observing one peak for allele x_i on one replicate, and a pair of peaks for alleles x_i and x_j, i ≠ j, on a second replicate. Given that the probabilities of allele drop-out are defined as functions of the observed peak heights, the threshold values marking the turning points when the scientist should switch from one designation to another are derived in terms of the observed peak heights. For each case, sensitivity analyses show the impact of the model's parameters on these threshold values. The results support the conclusion that the procedure should not focus on a single threshold value for making this decision for all alleles, all loci and in all laboratories.
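A minimal Python sketch of the decision rule (an illustration only, not the paper's model: the genotype probabilities and loss values below are hypothetical placeholders, whereas the paper derives the probabilities from the observed peak heights through drop-out probabilities):

    # Candidate designations for one locus; "Q" stands for any unobserved allele
    genotype_probs = {("x_i", "x_i"): 0.6, ("x_i", "Q"): 0.4}  # hypothetical posterior
    designations = [("x_i", "x_i"), ("x_i", "Q")]

    def loss(designation, true_genotype):
        # Hypothetical asymmetric losses for false exclusions vs false inclusions
        if designation == true_genotype:
            return 0.0
        return 1.0 if designation == ("x_i", "x_i") else 0.5

    def expected_loss(designation):
        return sum(p * loss(designation, g) for g, p in genotype_probs.items())

    best = min(designations, key=expected_loss)  # designation minimizing expected loss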
Abstract:
The Notch1 gene has an important role in mammalian cell-fate decision and tumorigenesis. Upstream control mechanisms for transcription of this gene are still poorly understood. In a chemical genetics screen for small molecule activators of Notch signalling, we identified epidermal growth factor receptor (EGFR) as a key negative regulator of Notch1 gene expression in primary human keratinocytes, intact epidermis and skin squamous cell carcinomas (SCCs). The underlying mechanism for negative control of the Notch1 gene in human cells, as well as in a mouse model of EGFR-dependent skin carcinogenesis, involves transcriptional suppression of p53 by the EGFR effector c-Jun. Suppression of Notch signalling in cancer cells counteracts the differentiation-inducing effects of EGFR inhibitors while, at the same time, synergizing with these compounds in induction of apoptosis. Thus, our data reveal a key role of EGFR signalling in the negative regulation of Notch1 gene transcription, of potential relevance for combinatory approaches for cancer therapy.
Abstract:
BACKGROUND: Shared Decision Making (SDM) is increasingly advocated as a model for medical decision making. However, there is still low use of SDM in clinical practice. High impact factor journals might represent an efficient way for its dissemination. We aimed to identify and characterize publication trends of SDM in 15 high impact medical journals. METHODS: We selected the 15 general and internal medicine journals with the highest impact factor publishing original articles, letters and editorials. We retrieved publications from 1996 to 2011 through the full-text search function on each journal website and abstracted bibliometric data. We included publications of any type containing the phrase "shared decision making" or five other variants in their abstract or full text. These were referred to as SDM publications. A polynomial Poisson regression model with logarithmic link function was used to assess the evolution across the period of the number of SDM publications according to publication characteristics. RESULTS: We identified 1285 SDM publications out of 229,179 publications in 15 journals from 1996 to 2011. The absolute number of SDM publications by journal ranged from 2 to 273 over 16 years. SDM publications increased both in absolute and relative numbers per year, from 46 (0.32% relative to all publications from the 15 journals) in 1996 to 165 (1.17%) in 2011. This growth was exponential (P < 0.01). We found fewer research publications (465, 36.2% of all SDM publications) than non-research publications, which included non-systematic reviews, letters, and editorials. The increase of research publications across time was linear. Full-text search retrieved ten times more SDM publications than a similar PubMed search (1285 vs. 119 respectively). CONCLUSION: This review in full-text showed that SDM publications increased exponentially in major medical journals from 1996 to 2011. This growth might reflect an increased dissemination of the SDM concept to the medical community.
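The trend model is a standard generalized linear model. A minimal sketch in Python/statsmodels (the counts are synthetic, chosen only to echo the reported endpoints of 46 publications in 1996 and 165 in 2011) of a polynomial Poisson regression with a logarithmic link:

    import numpy as np
    import statsmodels.api as sm

    years = np.arange(1996, 2012)
    t = years - years.min()
    counts = np.round(46 * np.exp(0.085 * t))  # synthetic exponential growth 46 -> 165

    # Design matrix with linear and quadratic time terms (polynomial in t);
    # the Poisson family uses the log link by default
    X = sm.add_constant(np.column_stack([t, t ** 2]))
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.summary())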
Abstract:
In patients undergoing non-cardiac surgery, cardiac events are the most common cause of perioperative morbidity and mortality. It is often difficult to choose adequate cardiologic examinations before surgery. This paper, inspired by the guidelines of the European and American societies of cardiology (ESC, AHA, ACC), discusses the place of standard ECG, echocardiography, treadmill or bicycle ergometer and pharmacological stress testing in preoperative evaluations. The role of coronary angiography and prophylactic revascularization will also be discussed. Finally, we provide a decision tree which will be helpful to both general practitioners and specialists.
Abstract:
This paper applies probability and decision theory in the graphical interface of an influence diagram to study the formal requirements of rationality which justify the individualization of a person found through a database search. The decision-theoretic part of the analysis studies the parameters that a rational decision maker would use to individualize the selected person. The modeling part (in the form of an influence diagram) clarifies the relationships between this decision and the ingredients that make up the database search problem, i.e., the results of the database search and the different pairs of propositions describing whether an individual is at the source of the crime stain. These analyses evaluate the desirability associated with the decision of 'individualizing' (and 'not individualizing'). They point out that this decision is a function of (i) the probability that the individual in question is, in fact, at the source of the crime stain (i.e., the state of nature), and (ii) the decision maker's preferences among the possible consequences of the decision (i.e., the decision maker's loss function). We discuss the relevance and argumentative implications of these insights with respect to recent comments in specialized literature, which suggest points of view that are opposed to the results of our study.
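The core of this expected-loss comparison fits in a few lines. A minimal Python sketch (the loss values are hypothetical, not taken from the paper): 'individualizing' is preferred exactly when its expected loss is lower, which amounts to a probability threshold fixed by the decision maker's loss function:

    def should_individualize(p_source,
                             loss_false_individualization=100.0,
                             loss_missed_individualization=1.0):
        """Prefer 'individualize' iff its expected loss is the smaller one."""
        el_individualize = (1 - p_source) * loss_false_individualization
        el_not = p_source * loss_missed_individualization
        return el_individualize < el_not

    # Equivalently: individualize iff p_source > l_FI / (l_FI + l_MI) = 100/101
    print(should_individualize(0.999))  # True
    print(should_individualize(0.95))   # False under these hypothetical losses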
Abstract:
Single amino acid substitution is the type of protein alteration most related to human diseases. Current studies seek primarily to distinguish neutral mutations from harmful ones. Very few methods offer an explanation of the final prediction result in terms of the probable structural or functional effect on the protein. In this study, we describe the use of three novel parameters to identify experimentally-verified critical residues of the TP53 protein (p53). The first two parameters make use of a surface clustering method to calculate the protein surface area of highly conserved regions or regions with high nonlocal atomic interaction energy (ANOLEA) score. These parameters help identify important functional regions on the surface of a protein. The last parameter involves the use of a new method for pseudobinding free-energy estimation to specifically probe the importance of residue side-chains to the stability of protein fold. A decision tree was designed to optimally combine these three parameters. The result was compared to the functional data stored in the International Agency for Research on Cancer (IARC) TP53 mutation database. The final prediction achieved a prediction accuracy of 70% and a Matthews correlation coefficient of 0.45. It also showed a high specificity of 91.8%. Mutations in the 85 correctly identified important residues represented 81.7% of the total mutations recorded in the database. In addition, the method was able to correctly assign a probable functional or structural role to the residues. Such information could be critical for the interpretation and prediction of the effect of missense mutations, as it not only provided the fundamental explanation of the observed effect, but also helped design the most appropriate laboratory experiment to verify the prediction results.
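The combination step is a standard supervised-learning task. A minimal Python sketch (with synthetic stand-ins for the three per-residue parameters; this is not the authors' tree) of training a decision tree on three features and scoring it with the metrics reported above:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score, matthews_corrcoef

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 3))            # stand-ins for the three parameters
    y = (X[:, 0] + X[:, 2] > 0).astype(int)  # stand-in labels: critical residue or not

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    pred = tree.predict(X)
    print(accuracy_score(y, pred), matthews_corrcoef(y, pred))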
Abstract:
At a time when disciplined inference and decision making under uncertainty represent common aims to participants in legal proceedings, the scientific community is remarkably heterogeneous in its attitudes as to how these goals ought to be achieved. Probability and decision theory exert a considerable influence, and in our view rightly so, but they go against a mainstream of thinking that does not embrace, or is not aware of, the 'normative' character of this body of theory. It is normative, in the sense understood in this article, in that it prescribes particular properties, typically (logical) coherence, to which reasoning and decision making ought to conform. Disregarding these properties can result in diverging views which are occasionally used as an argument against the theory, or as a pretext for not following it. Typical examples are objections according to which people, both in everyday life and at various levels of the judicial process, find the theory difficult to understand and to apply. A further objection is that the theory does not reflect how people actually behave. This article aims to point out in what sense these examples misinterpret the analytical framework in its normative perspective. Through examples borrowed mostly from forensic science contexts, it is argued that so-called intuitive scientific attitudes are particularly liable to such misconceptions. These attitudes are contrasted with a statement of the actual liberties and constraints of probability and decision theory and the view according to which this theory is normative.
Abstract:
Since the first implantation of an endograft in 1991, endovascular aneurysm repair (EVAR) has rapidly gained recognition. Historical trials showed lower early mortality rates, but these results were not maintained beyond 4 years. Despite newer-generation devices, higher rates of reintervention are associated with EVAR during follow-up. Therefore, the best therapeutic decision relies on many parameters that the physician has to take into consideration. The patient's preferences and characteristics are important, especially age and life expectancy besides health status. Aneurysmal anatomical conditions remain probably the most predictive factor and should be carefully evaluated to offer the best treatment. Unfavorable anatomy has been observed to be associated with more complications, especially endoleak, leading to more re-interventions and a higher risk of late mortality. Nevertheless, technological advances have made surgeons move beyond the set barriers. Thus, more endografts are implanted outside the instructions for use despite excellent results after open repair, especially in low-risk patients. When debating about abdominal aortic aneurysm (AAA) repair, some other crucial points should be analysed. It has been shown that strict surveillance is mandatory after EVAR to offer durable results and prevent late rupture. Such a program is associated with additional costs and with an increased risk of radiation. Moreover, a risk of loss of renal function exists when repetitive imaging and secondary procedures are required. The aim of this article is to review the data associated with AAA and its treatment in order to establish selection criteria for deciding between open and endovascular repair.
Abstract:
Our objective was to determine the test and treatment thresholds for common acute primary care conditions. We presented 200 clinicians with a series of web-based clinical vignettes, describing patients with possible influenza, acute coronary syndrome (ACS), pneumonia, deep vein thrombosis (DVT) and urinary tract infection (UTI). We randomly varied the probability of disease and asked whether the clinician wanted to rule out disease, order tests or rule in disease. By randomly varying the probability, we obtained clinical decisions across a broad range of disease probabilities that we used to create threshold curves. For influenza, the test (4.5% vs 32%, p<0.001) and treatment (55% vs 68%, p=0.11) thresholds were lower for US compared with Swiss physicians. US physicians had somewhat higher test (3.8% vs 0.7%, p=0.107) and treatment (76% vs 58%, p=0.005) thresholds for ACS than Swiss physicians. For both groups, the range between test and treatment thresholds was greater for ACS than for influenza (which is sensible, given the consequences of incorrect diagnosis). For pneumonia, US physicians had a trend towards higher test thresholds and lower treatment thresholds (48% vs 64%, p=0.076) than Swiss physicians. The DVT and UTI scenarios did not provide easily interpretable data, perhaps due to poor wording of the vignettes. We have developed a novel approach for determining decision thresholds. We found important differences in thresholds for US and Swiss physicians that may be a function of differences in healthcare systems. Our results can also guide development of clinical decision rules and guidelines.
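One plausible way to turn such vignette responses into threshold estimates, sketched below with synthetic data in Python (an illustration of the idea, not the authors' exact procedure), is to fit a logistic curve of the chosen action against the randomized disease probability and read off the 50% crossing:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    p_disease = rng.uniform(0, 1, 400)  # disease probability shown in the vignette
    # Stand-in responses: clinicians tend to treat above a latent threshold of 0.6
    treat = (p_disease + rng.normal(0, 0.15, 400) > 0.6).astype(int)

    fit = LogisticRegression().fit(p_disease.reshape(-1, 1), treat)
    treatment_threshold = -fit.intercept_[0] / fit.coef_[0, 0]  # 50% crossing point
    print(treatment_threshold)  # close to the latent 0.6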
Abstract:
Reduced renal function has been reported with tenofovir disoproxil fumarate (TDF). It is not clear whether TDF co-administered with a boosted protease inhibitor (PI) leads to a greater decline in renal function than TDF co-administered with a non-nucleoside reverse transcriptase inhibitor (NNRTI). Methods: We selected all antiretroviral therapy-naive patients in the Swiss HIV Cohort Study (SHCS) with calibrated or corrected serum creatinine measurements starting antiretroviral therapy with TDF and either efavirenz (EFV) or the ritonavir-boosted PIs, lopinavir (LPV/r) or atazanavir (ATV/r). As a measure of renal function, we used the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation to estimate the glomerular filtration rate (eGFR). We calculated the difference in eGFR over time between two therapies using a marginal model for repeated measures. In weighted analyses, observations were weighted by the product of their point-of-treatment and censoring weights to adjust for differences both in the sort of patients starting each therapy and in the sort of patients remaining on each therapy over time. Results: By March 2011, 940 patients with at least one creatinine measurement on a first therapy with either TDF and EFV (n=484), TDF and LPV/r (n=269) or TDF and ATV/r (n=187) had been followed for a median of 1.7, 1.2 and 1.3 years, respectively. Table 1 shows the difference in average estimated GFR (eGFR) over time since starting cART for two marginal models. The first model was not adjusted for potential confounders; the second model used weights to adjust for confounders. The results suggest a greater decline in renal function during the first 6 months if TDF is used with a PI rather than with an NNRTI, but no further difference between these therapies after the first 6 months. TDF and ATV/r may lead to a greater decline in the first 6 months than TDF and LPV/r. Conclusions: TDF co-administered with a boosted PI leads to a greater decline in renal function over the first 6 months of therapy than TDF co-administered with an NNRTI; this decline may be worse with ATV/r than with LPV/r.
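For reference, the renal-function measure named above can be written directly as a function. A sketch assuming the 2009 CKD-EPI creatinine equation (serum creatinine in mg/dL, eGFR in mL/min/1.73 m^2):

    def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
        """2009 CKD-EPI creatinine equation for estimated GFR."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        egfr = (141
                * min(scr_mg_dl / kappa, 1.0) ** alpha
                * max(scr_mg_dl / kappa, 1.0) ** -1.209
                * 0.993 ** age)
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159
        return egfr

    print(ckd_epi_egfr(1.0, 45, female=False))  # about 90 for this example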
Abstract:
NKG2D is an activation receptor that allows natural killer (NK) cells to detect diseased host cells. The engagement of NKG2D with corresponding ligand results in surface modulation of the receptor and reduced function upon subsequent receptor engagement. However, it is not clear whether in addition to modulation the NKG2D receptor complex and/or its signaling capacity is preserved. We show here that the prolonged encounter with tumor cell-bound, but not soluble, ligand can completely uncouple the NKG2D receptor from the intracellular mobilization of calcium and the exertion of cell-mediated cytolysis. However, cytolytic effector function is intact since NKG2D ligand-exposed NK cells can be activated via the Ly49D receptor. While NKG2D-dependent cytotoxicity is impaired, prolonged ligand exposure results in constitutive interferon gamma (IFNgamma) production, suggesting sustained signaling. The functional changes are associated with a reduced presence of the relevant signal transducing adaptors DNAX-activating protein of 10 kDa (DAP-10) and killer cell activating receptor-associated protein/DNAX-activating protein of 12 kDa (KARAP/DAP-12). That is likely the consequence of constitutive NKG2D engagement and signaling, since NKG2D function and adaptor expression is restored to normal when the stimulating tumor cells are removed. Thus, the chronic exposure to tumor cells expressing NKG2D ligand alters NKG2D signaling and may facilitate the evasion of tumor cells from NK cell reactions.