997 results for Bayesian reasoning


Relevance: 20.00%

Abstract:

The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a 0.93 probability of a substantially greater improvement in running economy, and a greater than 0.96 probability that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using the ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
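As a minimal illustration of the kind of probabilistic statement reported above, the sketch below computes the posterior probability of a substantial effect as the fraction of MCMC-style draws beyond a smallest-worthwhile-change threshold. The draws are simulated from a normal stand-in rather than taken from the study's model, and all names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for posterior draws of the LHTL-vs-IHE difference in hemoglobin-mass
# change (illustrative values, not the study's MCMC output).
effect_draws = rng.normal(loc=3.0, scale=1.5, size=20_000)

# Hypothetical smallest worthwhile change defining a "substantial" effect.
swc = 1.0

p_substantial_increase = np.mean(effect_draws > swc)
p_trivial = np.mean(np.abs(effect_draws) <= swc)
p_substantial_decrease = np.mean(effect_draws < -swc)

print(f"P(substantial increase) = {p_substantial_increase:.2f}")
print(f"P(trivial effect)       = {p_trivial:.2f}")
print(f"P(substantial decrease) = {p_substantial_decrease:.2f}")
```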

Relevance: 20.00%

Abstract:

The aim of this study was to identify and describe the types of errors in clinical reasoning that contribute to poor diagnostic performance at different levels of medical training and experience. Three cohorts of subjects, second- and fourth- (final) year medical students and a group of general practitioners, completed a set of clinical reasoning problems. The responses of those whose scores fell below the 25th centile were analysed to establish the stage of the clinical reasoning process (identification of relevant information, interpretation, or hypothesis generation) at which most errors occurred and whether this was dependent on problem difficulty and level of medical experience. Results indicate that hypothesis errors decrease as expertise increases but that identification and interpretation errors increase. This may be due to inappropriate use of pattern recognition or to failure of the knowledge base. Furthermore, although hypothesis errors increased in line with problem difficulty, identification and interpretation errors decreased. A possible explanation is that as problem difficulty increases, subjects at all levels of expertise are less able to differentiate between relevant and irrelevant clinical features and so give equal consideration to all information contained within a case. It is concluded that the development of clinical reasoning in medical students throughout the course of their pre-clinical and clinical education may be enhanced by both an analysis of the clinical reasoning process and a specific focus on each of the stages at which errors commonly occur.

Relevance: 20.00%

Abstract:

This study sought to assess the extent to which the entry characteristics of students in a graduate-entry medical programme predict the subsequent development of clinical reasoning ability. Subjects comprised 290 students voluntarily recruited from three successive cohorts of the University of Queensland's MBBS Programme. Clinical reasoning was measured once a year over a period of three years using two methods, a set of 10 Clinical Reasoning Problems (CRPs) and the Diagnostic Thinking Inventory (DTI). Data on gender, age at entry into the programme, nature of primary degree, scores on selection criteria (written examination plus interview) and academic performance in the first two years of the programme were recorded for each student, and their associations with clinical reasoning skill were analysed using univariate and multivariate analyses. Univariate analysis indicated significant associations of CRP score with gender and primary degree, with a significant but small association between DTI and interview score. Stage of progression through the programme was also an important predictor of performance on both indicators. Subsequent multivariate analysis suggested that female gender is a positive predictor of CRP score independently of the nature of a subject's primary degree and stage of progression through the programme, although these latter two variables are interdependent. Positive predictors of clinical reasoning skill are stage of progression through the MBBS programme, female gender and interview score. Although the nature of a student's primary degree is important in the early years of the programme, evidence suggests that by graduation differences in students' clinical reasoning skill due to this factor have been resolved.

Relevance: 20.00%

Abstract:

The aim of this study was to identify and describe the clinical reasoning characteristics of diagnostic experts. A group of 21 experienced general practitioners were asked to complete the Diagnostic Thinking Inventory (DTI) and a set of 10 clinical reasoning problems (CRPs) to evaluate their clinical reasoning. Both the DTI and the CRPs were scored, and the CRP response patterns of each GP were examined in terms of the number and type of errors they contained. Analysis of these data showed that six GPs were able to reach the correct diagnosis using significantly less clinical information than their colleagues. These GPs also made significantly fewer interpretation errors but scored lower on both the DTI and the CRPs. Additionally, this analysis showed that more than 20% of misdiagnoses occurred despite no errors being made in the identification and interpretation of relevant clinical information. These results indicate that these six GPs diagnose efficiently, effectively and accurately using relatively few clinical data and can therefore be classified as diagnostic experts. They also indicate that a major cause of misdiagnoses is failure to properly integrate clinical data. We suggest that increased emphasis on this step in the reasoning process should prove beneficial to the development of clinical reasoning skill in undergraduate medical students.

Relevance: 20.00%

Abstract:

The aim of this study was to develop and trial a method, suitable for use in a large medical school, to monitor the evolution of clinical reasoning in a problem-based learning (PBL) curriculum. Termed Clinical Reasoning Problems (CRPs), the method is based on the notion that clinical reasoning depends on the identification and correct interpretation of certain critical clinical features. Each problem consists of a clinical scenario comprising presentation, history and physical examination. Based on this information, subjects are asked to nominate the two most likely diagnoses and to list the clinical features that they considered in formulating their diagnoses, indicating whether these features supported or opposed the nominated diagnoses. Students at different levels of medical training completed a set of 10 CRPs as well as the Diagnostic Thinking Inventory, a self-reporting questionnaire designed to assess reasoning style. Responses were scored against those of a reference group of general practitioners. Results indicate that the CRPs are an easily administered, reliable and valid assessment of clinical reasoning, able to successfully monitor its development throughout medical training. Consequently, they can be employed to assess clinical reasoning skill in individual students and to evaluate the success of undergraduate medical schools in providing effective tuition in clinical reasoning.

Relevance: 20.00%

Abstract:

Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam. The high velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics. System performance and some of the 14C measurements done with the system are described. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process. This enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements. In particular, results improve for samples with very low 14C concentrations or samples that were measured only a few times.
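As a rough sketch of the two modelling ingredients described above, the code below evaluates an unnormalised log-posterior that couples a latent first-order autoregressive drift with a Poisson likelihood for the counts. The parameterisation, priors, and variable names are illustrative assumptions rather than the thesis's actual model.

```python
import numpy as np
from scipy.stats import norm, poisson

def log_posterior(drift, counts, expected_rate, phi=0.9, sigma=0.1):
    """Unnormalised log-posterior for a latent AR(1) drift series.

    counts[t] ~ Poisson(expected_rate[t] * exp(drift[t]))
    drift[t]  ~ Normal(phi * drift[t-1], sigma)   (AR(1) prior on the drift)
    """
    drift = np.asarray(drift, dtype=float)
    # AR(1) prior (stationary marginal on the first element).
    lp = norm.logpdf(drift[0], 0.0, sigma / np.sqrt(1.0 - phi**2))
    lp += norm.logpdf(drift[1:], phi * drift[:-1], sigma).sum()
    # Poisson likelihood for the observed counts.
    lp += poisson.logpmf(counts, expected_rate * np.exp(drift)).sum()
    return lp

# Toy usage with made-up numbers.
counts = np.array([98, 103, 110, 95, 101])
expected_rate = np.full(5, 100.0)
print(log_posterior(np.zeros(5), counts, expected_rate))
```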

Relevance: 20.00%

Abstract:

The purpose of this research is to construct a clear model of an anticipatory communicative decision-making process and to implement a Bayesian application that can be used as an anticipatory communicative decision-making support system. The study is a decision-oriented and constructive research project, and it includes examples of simulated situations. As a basis for further methodological discussion about different approaches to management research, a decision-oriented approach is used; it is based on mathematics and logic and is intended to develop problem-solving methods. The approach is theoretical and characteristic of normative management science research. The approach of the study is also constructive: an essential part of the constructive approach is to tie the problem to its solution with theoretical knowledge. Firstly, the basic definitions and behaviours of anticipatory management and managerial communication are provided. These descriptions include discussions of the research environment and the management processes formed, and they define and explain the background to the further research. Secondly, managerial communication and anticipatory decision-making are examined in terms of preparation, problem solving, and solution search, which are also related to risk-management analysis. After that, a decision-making support application is constructed using four different Bayesian methods: the Bayesian network, the influence diagram, the qualitative probabilistic network, and the time-critical dynamic network. The purpose of the discussion is not to compare different theories but to explain the theories that are being implemented. Finally, an application of Bayesian networks to the research problem is presented, and the usefulness of the prepared model in examining the problem is shown together with the results of the research. The theoretical contribution includes definitions and a model of anticipatory decision-making; the main theoretical contribution is a process for anticipatory decision-making that combines management with communication, problem solving, and the improvement of knowledge. The practical contribution is a Bayesian Decision Support Model based on Bayesian influence diagrams. The main contributions of this research are thus two developed processes: one for anticipatory decision-making, and the other for producing a Bayesian network model for anticipatory decision-making. In summary, this research contributes to decision-making support by being one of the few publicly available academic descriptions of an anticipatory decision support system, by presenting a Bayesian model that is grounded in firm theoretical discussion, by publishing algorithms suitable for decision-making support, and by defining the idea of anticipatory decision-making for a parallel version. Finally, based on the results of the research, an analysis of anticipatory management for planned decision-making is presented, built on observation of the environment, analysis of weak signals, and alternatives for creative problem solving and communication.
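For orientation, the sketch below shows the elementary Bayesian-network computation such a decision-support model rests on: exact inference by enumeration in a tiny hypothetical chain (Signal -> Threat -> Alarm). The structure and probabilities are invented for illustration and are not taken from the thesis.

```python
from itertools import product

# Hypothetical chain: Signal -> Threat -> Alarm (probabilities are invented).
p_signal = {True: 0.2, False: 0.8}                 # P(Signal)
p_threat_given_signal = {True: 0.7, False: 0.05}   # P(Threat=True | Signal)
p_alarm_given_threat = {True: 0.9, False: 0.1}     # P(Alarm=True | Threat)

def joint(signal, threat, alarm):
    """Joint probability of one full assignment of the three variables."""
    p = p_signal[signal]
    p *= p_threat_given_signal[signal] if threat else 1 - p_threat_given_signal[signal]
    p *= p_alarm_given_threat[threat] if alarm else 1 - p_alarm_given_threat[threat]
    return p

# P(Threat | Alarm=True) by enumerating over the hidden variable Signal.
num = sum(joint(s, True, True) for s in (True, False))
den = sum(joint(s, t, True) for s, t in product((True, False), repeat=2))
print(f"P(Threat | Alarm) = {num / den:.3f}")
```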

Relevance: 20.00%

Abstract:

Template matching is concerned with measuring the similarity between the patterns of two objects. This paper proposes a memory-based reasoning approach for pattern recognition of binary images with a large template set. Memory-based reasoning intrinsically requires a large database; moreover, some binary image recognition problems inherently need large template sets, such as the recognition of Chinese characters, which requires thousands of templates. The proposed algorithm is based on the Connection Machine, the most massively parallel machine to date, and uses a multiresolution method to search for the matching template. The approach uses the pyramid data structure for the multiresolution representation of the templates and the input image pattern. For a given binary image it scans the template pyramid searching for the match. A binary image of N × N pixels can be matched by our algorithm in O(log N) time, independent of the number of templates. The implementation of the proposed scheme is described in detail.
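A minimal serial sketch of the coarse-to-fine pyramid search described above; the paper's algorithm matches against a large template set in parallel on the Connection Machine, whereas this illustration matches a single template on one CPU. The pooling operator, the mismatch measure (sum of absolute differences), and the search-window size are illustrative assumptions.

```python
import numpy as np

def downsample(img):
    """Halve an image by averaging 2x2 blocks (a simple pyramid reduction)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def mismatch(image, template, y, x):
    """Sum of absolute differences between the template and the patch at (y, x)."""
    h, w = template.shape
    return np.abs(image[y:y + h, x:x + w] - template).sum()

def pyramid_match(image, template, levels=3):
    """Coarse-to-fine search for the best-matching template position."""
    imgs, tmps = [image], [template]
    for _ in range(levels - 1):
        imgs.append(downsample(imgs[-1]))
        tmps.append(downsample(tmps[-1]))

    # Exhaustive search at the coarsest level.
    img, tmp = imgs[-1], tmps[-1]
    candidates = [(y, x)
                  for y in range(img.shape[0] - tmp.shape[0] + 1)
                  for x in range(img.shape[1] - tmp.shape[1] + 1)]
    best = min(candidates, key=lambda p: mismatch(img, tmp, *p))

    # Refine around the projected position at each finer level.
    for level in range(levels - 2, -1, -1):
        img, tmp = imgs[level], tmps[level]
        y0, x0 = best[0] * 2, best[1] * 2
        window = [(y, x)
                  for y in range(max(0, y0 - 2), min(img.shape[0] - tmp.shape[0], y0 + 2) + 1)
                  for x in range(max(0, x0 - 2), min(img.shape[1] - tmp.shape[1], x0 + 2) + 1)]
        best = min(window, key=lambda p: mismatch(img, tmp, *p))
    return best

# Toy usage: extract a patch from a random binary image and search for it
# (the true offset is (20, 28)).
rng = np.random.default_rng(3)
image = rng.integers(0, 2, size=(64, 64)).astype(float)
template = image[20:36, 28:44].copy()
print(pyramid_match(image, template))
```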

Relevance: 20.00%

Abstract:

This study in EU law analyses the reasoning of the Court of Justice (the Court of Justice of the European Union) in a set of its preliminary rulings. Preliminary rulings are answers to national courts' questions on the interpretation (and validity) of EU law, called preliminary references. These questions concern specific legal issues that have arisen in legal disputes before the national courts. The Court of Justice alone has the ultimate authority to interpret EU law. The preliminary rulings bind the national courts in the cases giving rise to the preliminary reference, and the interpretations of EU law offered in the preliminary rulings are considered generally binding on all instances applying EU law. EU law is often described as a dynamic legal order and the Court of Justice as being at the vanguard of developing it. It is generally assumed that the Court of Justice is striving to realise the EU's meta-level purpose (telos): integration. Against this backdrop one can understand the criticism the Court of Justice often faces in certain fields of EU law that can be described as developing. This criticism concerns the Court's (negatively) activist way of not just stating the law but developing or even making law. It is difficult to analyse or refute this accusation, as it is not clearly established in methodological terms what constitutes judicial activism, or more exactly where the threshold of negative activism lies. Moreover, one popular approach to assessing the role of the Court of Justice, described as ‘integration through law’, has become fairly political, neglecting to take into consideration the special nature of law as both facilitating and constraining action, not merely a medium for furthering integration. This study offers a legal reasoning approach of a more legalist nature, in order to balance the existing mix of approaches to explaining what the Court of Justice does and how. Reliance on legal reasoning is found to offer a working framework for analysis, whereas the tools for an analysis based on activism are found lacking. The legal reasoning approach enables one to assess whether or not the Court of Justice is adhering to its own established criteria of interpretation of EU law, and if it is not, one should look in more detail at how the interpretation fits with earlier case-law and doctrines of EU law. This study examines the reasoning of the Court of Justice in a set of objectively chosen cases. The emphasis of the study is on analysing how the Court of Justice applies the established criteria of interpretation it has assumed for itself. Moreover, the judgments are assessed not only in terms of reasoning but also for the meaningful silences they contain. The analysis is further contextualised by taking into consideration how the cases were commented on by legal scholars, their substantive EU law context, and also their larger politico-historical context. In this study, the analysis largely shows that the Court of Justice interprets EU law in accordance with its previous practice. Its reasoning retains a connection with the linguistic or semiotic criteria of interpretation, while the emphasis lies on systemic reasoning. Moreover, although there are a few judgments where the Court of Justice offers clearly dynamic reasoning, or what can be considered substantive reasoning stemming from, for example, common sense or reasonableness, such reasons are most often given in addition to systemic ones. In this sense, and even when considered in its broader context, the case-law analysed in this study does not portray a specifically activist image of the Court of Justice. The legal reasoning approach is a valid alternative for explaining how and why the Court of Justice interprets EU law as it does.

Relevance: 20.00%

Abstract:

The stochastic behavior of an aero-engine failure/repair process has been analyzed from a Bayesian perspective. The numbers of failures/repairs in the component sockets of this multi-component system are assumed to follow independent renewal processes with Weibull inter-arrival times. Based on field failure/repair data from a large number of such engines, and on independent Gamma priors on the scale parameters and log-concave priors on the shape parameters, an exact method of sampling from the resulting posterior distributions of the parameters has been proposed. These generated parameter values are then utilised in obtaining the posteriors of the expected number of system repairs, the system failure rate, and the conditional intensity function, which are computed using a recursive formula.
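The exact sampling scheme of the paper is not reproduced here, but the sketch below illustrates the basic posterior-sampling task: a random-walk Metropolis sampler for the shape and scale of Weibull inter-arrival times, with Gamma priors standing in for the priors described above. All priors, step sizes, and data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min, gamma

rng = np.random.default_rng(1)

def log_post(log_k, log_lam, times):
    """Unnormalised log-posterior for Weibull(shape k, scale lam) inter-arrival times.

    Illustrative priors: Gamma(2, scale=1) on both the scale and the shape (the
    latter being a log-concave choice); the Jacobian terms account for sampling
    on the log scale.
    """
    k, lam = np.exp(log_k), np.exp(log_lam)
    lp = weibull_min.logpdf(times, c=k, scale=lam).sum()
    lp += gamma.logpdf(lam, a=2.0, scale=1.0) + gamma.logpdf(k, a=2.0, scale=1.0)
    return lp + log_k + log_lam  # log|Jacobian| of the log transform

def metropolis(times, n_iter=5000, step=0.1):
    """Random-walk Metropolis over (log shape, log scale)."""
    x = np.zeros(2)                        # start at shape = scale = 1
    lp = log_post(x[0], x[1], times)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.normal(size=2)
        lp_prop = log_post(prop[0], prop[1], times)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(np.exp(x))          # back to (shape, scale)
    return np.array(samples)

# Toy usage with simulated inter-arrival times (not field data).
times = weibull_min.rvs(c=1.5, scale=2.0, size=50, random_state=42)
draws = metropolis(times)
print("posterior mean (shape, scale):", draws[1000:].mean(axis=0))
```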

Relevance: 20.00%

Abstract:

Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory (stochastic) uncertainty and epistemic (subjective) uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measure of belief and plausibility in the results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, and are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and the relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and a decreasing probability of normal and wet conditions in Orissa as a result of climate change.
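A minimal sketch of Dempster's rule of combination, the basic operation behind the evidence-combination step described above. The two basic probability assignments over drought classes are invented for illustration and are not the study's bpa values; frozensets represent subsets of the frame of discernment.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule.

    m1, m2: dicts mapping frozensets of hypotheses to masses that sum to 1.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalise by the non-conflicting mass (classical Dempster rule).
    return {s: m / (1.0 - conflict) for s, m in combined.items()}, conflict

# Illustrative bpa's over {drought, normal, wet} from two hypothetical sources.
m_gcm1 = {frozenset({"drought"}): 0.6,
          frozenset({"drought", "normal"}): 0.3,
          frozenset({"drought", "normal", "wet"}): 0.1}
m_gcm2 = {frozenset({"drought"}): 0.4,
          frozenset({"normal", "wet"}): 0.4,
          frozenset({"drought", "normal", "wet"}): 0.2}

combined, conflict = dempster_combine(m_gcm1, m_gcm2)
print("conflict mass:", round(conflict, 3))
for s, m in combined.items():
    print(sorted(s), round(m, 3))
```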

Relevance: 20.00%

Abstract:

We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score is an approximation of the conditional log-likelihood criterion. The approximation is devised in order to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which exhibits its discriminative nature. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve accuracy at least as good as that of the best competing classifiers, while using significantly fewer computational resources.
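For orientation, the sketch below computes the plain (unfactorized) conditional log-likelihood that f̂CLL approximates, here for a discrete naive Bayes classifier with add-one smoothing; it is not the paper's f̂CLL score, and the data and names are illustrative.

```python
import numpy as np

def conditional_log_likelihood(X, y, n_classes, n_values):
    """CLL(B | D) = sum_i log P(y_i | x_i) for a naive Bayes model whose
    parameters are estimated from the same data with add-one smoothing."""
    n, d = X.shape
    # Class prior and per-feature conditional probability tables.
    prior = np.array([(np.sum(y == c) + 1) / (n + n_classes) for c in range(n_classes)])
    cpt = np.ones((d, n_classes, n_values))          # Laplace counts
    for i in range(n):
        for j in range(d):
            cpt[j, y[i], X[i, j]] += 1
    cpt /= cpt.sum(axis=2, keepdims=True)

    cll = 0.0
    for i in range(n):
        # Unnormalised log-joint log P(x_i, c) for every class c.
        log_joint = np.log(prior) + sum(np.log(cpt[j, :, X[i, j]]) for j in range(d))
        cll += log_joint[y[i]] - np.logaddexp.reduce(log_joint)   # log P(y_i | x_i)
    return cll

# Toy usage: six binary-feature instances with two classes.
X = np.array([[0, 1], [1, 1], [0, 0], [1, 0], [1, 1], [0, 0]])
y = np.array([0, 1, 0, 1, 1, 0])
print(conditional_log_likelihood(X, y, n_classes=2, n_values=2))
```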

Relevance: 20.00%

Abstract:

Bayesian networks are compact, flexible, and interpretable representations of a joint distribution. When the network structure is unknown but observational data are at hand, one can try to learn the network structure; this is called structure discovery. This thesis contributes to two areas of structure discovery in Bayesian networks: space-time tradeoffs and learning ancestor relations. The fastest exact algorithms for structure discovery in Bayesian networks are based on dynamic programming and use excessive amounts of space. Motivated by the space usage, several schemes for trading space against time are presented. These schemes are presented in a general setting for a class of computational problems called permutation problems; structure discovery in Bayesian networks is seen as a challenging variant of the permutation problems. The main contribution in the area of space-time tradeoffs is the partial order approach, in which the standard dynamic programming algorithm is extended to run over partial orders. In particular, a certain family of partial orders called parallel bucket orders is considered. A partial order scheme that provably yields an optimal space-time tradeoff within parallel bucket orders is presented. Practical issues concerning parallel bucket orders are also discussed. Learning ancestor relations, that is, directed paths between nodes, is motivated by the need for robust summaries of the network structures when there are unobserved nodes at work. Ancestor relations are nonmodular features, and hence learning them is more difficult than learning modular features. A dynamic programming algorithm is presented for computing the posterior probabilities of ancestor relations exactly. Empirical tests suggest that ancestor relations can be learned from observational data almost as accurately as arcs, even in the presence of unobserved nodes.
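The thesis's partial-order and bucket-order schemes are not reproduced here, but the sketch below shows the standard dynamic programme over node subsets that they extend: for a decomposable score, the optimal score of a subnetwork is obtained by choosing a sink node and its best parent set. The BIC-style local score and the toy binary data are illustrative assumptions, and the sketch returns only the optimal score (recovering the structure itself would require storing the argmax choices).

```python
import numpy as np
from itertools import combinations

def bic_local_score(data, v, parents):
    """BIC-style local score of node v with the given parent set (binary data)."""
    n = data.shape[0]
    # Encode each row's parent configuration as a single integer index.
    config = np.zeros(n, dtype=int)
    for p in parents:
        config = config * 2 + data[:, p]
    score = 0.0
    for c in np.unique(config):
        rows = data[config == c, v]
        for val in (0, 1):
            k = np.count_nonzero(rows == val)
            if k:
                score += k * np.log(k / rows.size)              # log-likelihood term
    return score - 0.5 * np.log(n) * (2 ** len(parents))        # complexity penalty

def best_network_score(data):
    """Optimal network score by dynamic programming over node subsets."""
    d = data.shape[1]
    nodes = tuple(range(d))

    def best_parent_score(v, candidates):
        # Best local score of v over all parent sets drawn from `candidates`.
        return max(bic_local_score(data, v, ps)
                   for r in range(len(candidates) + 1)
                   for ps in combinations(candidates, r))

    best = {frozenset(): 0.0}
    for size in range(1, d + 1):
        for subset in combinations(nodes, size):
            s = frozenset(subset)
            # Choose a sink v of the subnetwork over s; its parents lie in s - {v}.
            best[s] = max(best[s - {v}] + best_parent_score(v, tuple(s - {v}))
                          for v in s)
    return best[frozenset(nodes)]

# Toy usage on random binary data over four variables.
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(200, 4))
print("optimal network score:", best_network_score(data))
```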

Relevance: 20.00%

Abstract:

This paper considers the problem of spectrum sensing, i.e., the detection by a cognitive radio of whether or not a primary user is transmitting data. The Bayesian framework is adopted, with the performance measure being the probability of detection error. A decentralized setup is considered, in which N sensors each use M observations to arrive at individual decisions that are combined at a fusion center to form the overall decision. The unknown fading channel between the primary sensor and the cognitive radios makes the individual decision rule computationally complex; hence, a generalized likelihood ratio test (GLRT)-based approach is adopted. Analysis of the probabilities of false alarm and missed detection of the proposed method reveals that the error exponent with respect to M is zero. Also, the fusion of the N individual decisions offers a diversity advantage, similar to diversity reception in communication systems, and a tight bound on the error exponent is presented. Through an analysis in the low-power regime, the number of observations needed to achieve a given probability of error is determined as a function of the received power. Monte Carlo simulations confirm the accuracy of the analysis.
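As a rough Monte Carlo sketch of the decentralized setup described above, the code below uses a plain per-sensor energy detector and a majority-vote fusion rule in place of the paper's GLRT-based rule and fusion analysis; the SNR, threshold, and Rayleigh-fading channel model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def empirical_error_prob(N=5, M=50, snr=0.1, prior_h1=0.5, trials=20_000):
    """Probability of detection error with N sensors, M samples each,
    a per-sensor energy detector, and majority-vote fusion."""
    errors = 0
    threshold = 1.0 + snr              # illustrative per-sensor energy threshold
    for _ in range(trials):
        h1 = rng.uniform() < prior_h1                       # primary user active?
        # Rayleigh-fading gain per sensor, unknown to the detectors.
        gain = rng.rayleigh(scale=1.0, size=N) if h1 else np.zeros(N)
        signal = np.sqrt(snr) * gain[:, None] * rng.normal(size=(N, M))
        noise = rng.normal(size=(N, M))
        energy = np.mean((signal + noise) ** 2, axis=1)     # per-sensor statistic
        votes = energy > threshold                          # local decisions
        decision = np.count_nonzero(votes) > N // 2         # fusion: majority rule
        errors += decision != h1
    return errors / trials

print("estimated P(error):", empirical_error_prob())
```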