18 results for Learning set
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Chronic myelomonocytic leukaemia (CMML) is a heterogeneous haematopoietic disorder characterized by myeloproliferative or myelodysplastic features. At present, the pathogenesis of this malignancy is not completely understood. In this study, we sought to analyse gene expression profiles of CMML in order to characterize new molecular outcome predictors. A learning set of 32 untreated CMML patients at diagnosis was available for TaqMan low-density array gene expression analysis. From 93 selected genes related to cancer and cell cycle, we built a five-gene prognostic index after multiplicity correction. Using this index, we characterized two categories of patients with distinct overall survival (94% vs. 19% for good and poor overall survival, respectively; P = 0.007) and we successfully validated its strength on an independent cohort of 21 CMML patients with Affymetrix gene expression data. We found no specific patterns of association with traditional prognostic stratification parameters in the learning cohort. However, the poor survival group strongly correlated with high-risk treated patients and transformation to acute myeloid leukaemia. We report here a new multigene prognostic index for CMML, independent of the gene expression measurement method, which could be used as a powerful tool to predict clinical outcome and help physicians to evaluate criteria for treatments.
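The abstract describes the five-gene index only at a high level. As a rough illustrative sketch, a multigene prognostic index of this kind can be expressed as a weighted sum of normalised expression values dichotomised at a cut-off; the gene names, weights and cut-off below are entirely hypothetical, not the published signature.

```python
# Hypothetical sketch of a multigene prognostic index: a weighted sum of
# expression values dichotomised at a threshold. Gene names, weights and
# the cut-off are illustrative placeholders, not the published five-gene index.
GENE_WEIGHTS = {"GENE_A": 1.2, "GENE_B": -0.8, "GENE_C": 0.5,
                "GENE_D": 0.9, "GENE_E": -1.1}

def prognostic_index(expression):
    """expression: dict mapping gene name -> normalised expression value."""
    return sum(w * expression[g] for g, w in GENE_WEIGHTS.items())

def risk_group(expression, cutoff=0.0):
    """Assign a patient to the good- or poor-survival category."""
    return "poor" if prognostic_index(expression) > cutoff else "good"

patient = {"GENE_A": 0.2, "GENE_B": 1.5, "GENE_C": -0.3,
           "GENE_D": 0.1, "GENE_E": 0.8}
print(risk_group(patient))
```

In practice, the weights and cut-off would be fitted on the learning cohort and validated on an independent one, as the study does with its 21-patient Affymetrix cohort.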
Abstract:
Rapid tryptophan (Trp) depletion (RTD) has been reported to cause deterioration in the quality of decision making and impaired reversal learning, while leaving attentional set shifting relatively unimpaired. These findings have been attributed to a more powerful neuromodulatory effect of reduced 5-HT on ventral prefrontal cortex (PFC) than on dorsolateral PFC. In view of the limited number of reports, the aim of this study was to independently replicate these findings using the same test paradigms. Healthy human subjects without a personal or family history of affective disorder were assessed using a computerized decision making/gambling task and the CANTAB ID/ED attentional set-shifting task under Trp-depleted (n=17; nine males and eight females) or control (n=15; seven males and eight females) conditions, in a double-blind, randomized, parallel-group design. There was no significant effect of RTD on set shifting, reversal learning, risk taking, impulsivity, or subjective mood. However, RTD significantly altered decision making such that depleted subjects chose the more likely of two possible outcomes significantly more often than controls. This is in direct contrast to the previous report that subjects chose the more likely outcome significantly less often following RTD. In the terminology of that report, our result may be interpreted as improvement in the quality of decision making following RTD. This contrast between studies highlights the variability in the cognitive effects of RTD between apparently similar groups of healthy subjects, and suggests the need for future RTD studies to control for a range of personality, family history, and genetic factors that may be associated with 5-HT function.
Abstract:
Self-categorization theory stresses the importance of the context in which the metacontrast principle is proposed to operate. This study is concerned with how 'the pool of psychologically relevant stimuli' (Turner, Hogg, Oakes, Reicher & Wetherell, 1987, p. 47) comprising the context is determined. Data from interviews with 33 people with learning difficulties were used to show how a positive sense of self might be constructed by members of a stigmatized social category through the social worlds that they describe, and therefore the social comparisons and categorizations that are made possible. Participants made downward comparisons which focused on people with learning difficulties who were less able or who displayed challenging behaviour, and with people who did not have learning difficulties but who, according to the participants, behaved badly, such as beggars, drunks and thieves. By selection of dimensions and comparison others, a positive sense of self and a particular set of social categorizations were presented. It is suggested that when using self-categorization theory to study real-world social categories, more attention needs to be paid to the involvement of the perceiver in determining which stimuli are psychologically relevant since this is a crucial determinant of category salience.
Abstract:
Background: A suite of 10 online virtual patients developed using the IVIMEDS ‘Riverside’ authoring tool has been introduced into our undergraduate general practice clerkship. These cases provide a multimedia-rich experience to students. Their interactive nature promotes the development of clinical reasoning skills such as discriminating key clinical features, integrating information from a variety of sources and forming diagnoses and management plans.
Aims: To evaluate the usefulness and usability of a set of online virtual patients in an undergraduate general practice clerkship.
Method: Online questionnaire completed by students after their general practice placement incorporating the System Usability Scale questionnaire.
Results: There was a 57% response rate. Ninety-five per cent of students agreed that the online package was a useful learning tool and ranked virtual patients third out of six learning modalities. Questions and answers and the use of images and videos were all rated highly by students as useful learning methods. The package was perceived to have a high level of usability among respondents.
Conclusion: Feedback from students suggests that this implementation of virtual patients, set in primary care, is user friendly and rated as a valuable adjunct to their learning. The cost of producing such learning resources demands close attention to design.
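The System Usability Scale used in the questionnaire has a standard scoring rule (ten 1-5 Likert items; odd items are positively worded, even items negatively worded), which can be sketched as follows; the example responses are invented for illustration.

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The total is scaled by 2.5 to give a score from 0 to 100.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: a fairly positive (hypothetical) set of responses.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))
```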
Abstract:
The majority of reported learning methods for Takagi-Sugeno-Kang fuzzy neural models to date mainly focus on the improvement of their accuracy. However, one of the key design requirements in building an interpretable fuzzy model is that each obtained rule consequent must match well with the system's local behaviour when all the rules are aggregated to produce the overall system output. This is one of the characteristics that distinguish such models from black-box models such as neural networks. Therefore, how to find a desirable set of fuzzy partitions and, hence, to identify the corresponding consequent models which can be directly explained in terms of system behaviour presents a critical step in fuzzy neural modelling. In this paper, a new learning approach considering both nonlinear parameters in the rule premises and linear parameters in the rule consequents is proposed. Unlike the conventional two-stage optimization procedure widely practised in the field, where the two sets of parameters are optimized separately, the consequent parameters are transformed into a set dependent on the premise parameters, thereby enabling the introduction of a new integrated gradient descent learning approach. A new Jacobian matrix is thus proposed and efficiently computed to achieve a more accurate approximation of the cost function by using the second-order Levenberg-Marquardt optimization method. Several other interpretability issues surrounding the fuzzy neural model are also discussed and integrated into this new learning approach. Numerical examples are presented to illustrate the resultant structure of the fuzzy neural models and the effectiveness of the proposed new algorithm, and the results are compared with those from some well-known methods.
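The forward pass of a first-order TSK model, whose rule consequents the paper seeks to make locally interpretable, can be sketched as a firing-strength-weighted average of linear consequents. The Gaussian membership functions and rule parameters below are illustrative, not the paper's learned model.

```python
import numpy as np

# Minimal first-order Takagi-Sugeno-Kang model with Gaussian membership
# functions. Rule parameters are illustrative placeholders.
def gaussian_mf(x, centre, sigma):
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

def tsk_output(x, rules):
    """rules: list of (centre, sigma, (a, b)), consequent y = a*x + b."""
    weights = np.array([gaussian_mf(x, c, s) for c, s, _ in rules])
    consequents = np.array([a * x + b for _, _, (a, b) in rules])
    # Weighted average of the local linear models (defuzzification)
    return float(np.dot(weights, consequents) / weights.sum())

rules = [(-1.0, 1.0, (0.5, 0.0)),   # rule active around x = -1
         ( 1.0, 1.0, (-0.5, 1.0))]  # rule active around x = +1
print(tsk_output(0.0, rules))
```

Interpretability in the paper's sense requires that each pair (a, b) approximate the true system's local behaviour near its rule's region of activation, not merely that the aggregated output be accurate.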
Abstract:
This paper presents a new algorithm for learning the structure of a special type of Bayesian network. The conditional phase-type (C-Ph) distribution is a Bayesian network that models the probabilistic causal relationships between a skewed continuous variable, modelled by the Coxian phase-type distribution, a special type of Markov model, and a set of interacting discrete variables. The algorithm takes a dataset as input and produces the structure, parameters and graphical representations of the fit of the C-Ph distribution as output. The algorithm, which uses a greedy-search technique and has been implemented in MATLAB, is evaluated using a simulated data set consisting of 20,000 cases. The results show that the original C-Ph distribution is recaptured, and the fit of the network to the data is discussed.
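The Coxian phase-type building block can be sketched directly: the process starts in phase 1 and, from each phase, either moves to the next phase or absorbs. Its density is alpha * exp(S*t) * s, where S is the sub-generator and s the vector of absorption rates. The rates used below are illustrative, not fitted values from the paper.

```python
import numpy as np
from scipy.linalg import expm

def coxian_density(t, lam, mu):
    """Density of a Coxian phase-type distribution at time t.

    lam[i]: rate of moving from phase i to phase i+1 (one entry per
    non-final phase); mu[i]: absorption rate out of phase i.
    """
    n = len(mu)
    S = np.zeros((n, n))                 # sub-generator over transient phases
    for i in range(n):
        through = lam[i] if i < n - 1 else 0.0
        S[i, i] = -(mu[i] + through)
        if i < n - 1:
            S[i, i + 1] = lam[i]
    exit_rates = -S.sum(axis=1)          # absorption rate from each phase
    alpha = np.zeros(n)
    alpha[0] = 1.0                       # Coxian processes start in phase 1
    return float(alpha @ expm(S * t) @ exit_rates)

# Three phases with illustrative onward and absorption rates.
print(coxian_density(1.0, lam=[1.0, 1.0], mu=[0.2, 0.3, 0.5]))
```

The sequential, upper-bidiagonal form of S is what distinguishes the Coxian from a general phase-type distribution and makes it a convenient model for skewed durations such as lengths of stay.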
Abstract:
This project set out to evaluate the effectiveness of social work education by analysing student perceptions of the strengths and limitations of their education and training on the Bachelor of Social Work, Queen’s University, Belfast (QUB) at different stages of their ‘learning journey’ through the programme.
The authors' primary aim in undertaking this study was to contribute an evidence-based understanding of the challenges and opportunities students identified for themselves within contemporary practice environments. A secondary aim was to test the effectiveness of key approaches, theories and learning tools in common use in social work education. The authors believe the outcomes generated by the project demonstrate the value of systematically researching student perceptions of their learning experience, and feel the study provides important lessons which should help to inform the future development of social work education, not only locally but in other parts of the UK.
Dual-processes in learning and judgment: Evidence from the multiple cue probability learning paradigm
Abstract:
Multiple cue probability learning (MCPL) involves learning to predict a criterion based on a set of novel cues when feedback is provided in response to each judgment made. But to what extent does MCPL require controlled attention and explicit hypothesis testing? The results of two experiments show that this depends on cue polarity. Learning about cues that predict positively is aided by automatic cognitive processes, whereas learning about cues that predict negatively is especially demanding on controlled attention and hypothesis-testing processes. In the studies reported here, negative, but not positive, cue learning related to individual differences in working memory capacity, both on measures of overall judgment performance and in modelling of the implicit learning process. However, the introduction of a novel method to monitor participants' explicit beliefs about a set of cues on a trial-by-trial basis revealed that participants were engaged in explicit hypothesis testing about positive and negative cues, and explicit beliefs about both types of cues were linked to working memory capacity. Taken together, our results indicate that while people are engaged in explicit hypothesis testing during cue learning, explicit beliefs are applied to judgment only when cues are negative.
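The structure of an MCPL task can be sketched as a trial generator in which the criterion is a noisy function of binary cues, some predicting positively and some negatively. The cue weights and noise level below are hypothetical, chosen only to illustrate the paradigm.

```python
import random

# Illustrative MCPL trial generator: hypothetical weights, two cues
# predicting positively and two negatively.
CUE_WEIGHTS = [0.6, 0.3, -0.5, -0.4]

def generate_trial(rng):
    """One MCPL trial: binary cues and a probabilistic binary criterion."""
    cues = [rng.choice([0, 1]) for _ in CUE_WEIGHTS]
    signal = sum(w * c for w, c in zip(CUE_WEIGHTS, cues))
    # Gaussian noise makes the cue-criterion relation probabilistic
    criterion = 1 if signal + rng.gauss(0.0, 0.2) > 0 else 0
    return cues, criterion

rng = random.Random(0)
for _ in range(3):
    cues, criterion = generate_trial(rng)
    print(cues, "->", criterion)
```

A learner sees the cues, predicts the criterion, then receives the true value as feedback; the studies' contrast between positive and negative cues corresponds to the sign of the weights here.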
Abstract:
The article examines why a comprehensive settlement to resolve the Cyprus problem has yet to be reached despite the existence of a positive incentive structure and the proactive involvement of regional and international organizations, including the European Union and the United Nations. To address this question, evidence from critical turning points in foreign policy decision-making in Turkey, Greece and the two communities in Cyprus is drawn on. The role of hegemonic political discourses is emphasized, and it is argued that the latter have prevented an accurate evaluation of incentives that could have set the stage for a constructive settlement. However, despite the political debacle in the Cypriot negotiations, success stories have emerged, such as the reactivation of the Committee for Missing Persons (CMP), a defunct body for almost 25 years, to become the most successful bi-communal project following Cyprus’s EU accession. Contradictory evidence in the Cypriot peace process is evaluated and policy lessons to be learned from the CMP ‘success story’ are identified.
Abstract:
This work presents two new score functions based on the Bayesian Dirichlet equivalent uniform (BDeu) score for learning Bayesian network structures. They consider the sensitivity of BDeu to varying parameters of the Dirichlet prior. The scores take on the most adversarial and the most beneficial priors among those within a contamination set around the symmetric one. We build these scores in such a way that they are decomposable and can be computed efficiently. Because of that, they can be integrated into any state-of-the-art structure learning method that explores the space of directed acyclic graphs and allows decomposable scores. Empirical results suggest that our scores outperform the standard BDeu score in terms of the likelihood of unseen data and in terms of edge discovery with respect to the true network, at least when the training sample size is small. We discuss the relation between these new scores and the accuracy of inferred models. Moreover, our new criteria can be used to identify the amount of data after which learning is saturated, that is, after which additional data are of little help to improve the resulting model.
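The baseline these robust variants perturb is the standard BDeu local score, which for one node is a sum of log-gamma terms over parent configurations and child states. A minimal sketch (not the paper's robust scores, which additionally range over a contamination set of priors):

```python
from math import lgamma

def bdeu_local_score(counts, ess):
    """Standard BDeu local score for one node.

    counts[j][k]: number of cases with child state k under parent
    configuration j; ess: equivalent sample size of the symmetric prior.
    """
    q = len(counts)        # number of parent configurations
    r = len(counts[0])     # number of child states
    score = 0.0
    for row in counts:
        n_j = sum(row)
        score += lgamma(ess / q) - lgamma(ess / q + n_j)
        for n_jk in row:
            score += lgamma(ess / (r * q) + n_jk) - lgamma(ess / (r * q))
    return score

# Binary child, two parent configurations, 20 cases in total.
print(bdeu_local_score([[8, 2], [1, 9]], ess=1.0))
```

The sensitivity the paper exploits is visible here: changing `ess` changes the score, so taking the worst and best priors within a neighbourhood yields an interval rather than a single number.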
Abstract:
This paper addresses the estimation of parameters of a Bayesian network from incomplete data. The task is usually tackled by running the Expectation-Maximization (EM) algorithm several times in order to obtain a high log-likelihood estimate. We argue that choosing the maximum log-likelihood estimate (as well as the maximum penalized log-likelihood and the maximum a posteriori estimate) has severe drawbacks, being affected both by overfitting and model uncertainty. Two ideas are discussed to overcome these issues: a maximum entropy approach and a Bayesian model averaging approach. Both ideas can be easily applied on top of EM, while the entropy idea can be also implemented in a more sophisticated way, through a dedicated non-linear solver. A vast set of experiments shows that these ideas produce significantly better estimates and inferences than the traditional and widely used maximum (penalized) log-likelihood and maximum a posteriori estimates. In particular, if EM is adopted as optimization engine, the model averaging approach is the best performing one; its performance is matched by the entropy approach when implemented using the non-linear solver. The results suggest that the applicability of these ideas is immediate (they are easy to implement and to integrate in currently available inference engines) and that they constitute a better way to learn Bayesian network parameters.
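The practice the paper criticises, and a crude stand-in for the remedy, can be illustrated on a toy two-component Bernoulli mixture: EM is run from several random starts, and one either keeps the single maximum-likelihood run or combines the runs. The naive parameter average below is only a placeholder for the paper's Bayesian model averaging; all data and settings are illustrative.

```python
import math
import random

random.seed(1)
M = 10                              # tosses per observation
data = [9, 8, 2, 1, 9, 2, 8, 1]    # heads counts; two clusters are visible

def binom(h, p):
    return math.comb(M, h) * p**h * (1 - p)**(M - h)

def em_run(iters=200):
    """One EM run from a random start; returns (log-likelihood, params)."""
    pi, p1, p2 = random.random(), random.random(), random.random()
    eps = 1e-6
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        g = [pi * binom(h, p1) / (pi * binom(h, p1) + (1 - pi) * binom(h, p2))
             for h in data]
        # M-step, clamped away from the boundary to avoid degeneracies
        s1 = sum(g)
        s2 = len(data) - s1
        pi = min(max(s1 / len(data), eps), 1 - eps)
        p1 = min(max(sum(gi * h for gi, h in zip(g, data)) / max(M * s1, eps), eps), 1 - eps)
        p2 = min(max(sum((1 - gi) * h for gi, h in zip(g, data)) / max(M * s2, eps), eps), 1 - eps)
    ll = sum(math.log(pi * binom(h, p1) + (1 - pi) * binom(h, p2)) for h in data)
    return ll, (pi, p1, p2)

runs = [em_run() for _ in range(5)]
best_ll, best_params = max(runs)    # the usual maximum-likelihood pick
avg_params = [sum(r[1][i] for r in runs) / len(runs) for i in range(3)]
print("best log-likelihood:", round(best_ll, 2))
```

The paper's point is that the `max(runs)` step commits to a single estimate that may overfit, whereas averaging over plausible models (done properly, in a Bayesian sense) hedges against that uncertainty.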
Abstract:
This paper explores semi-qualitative probabilistic networks (SQPNs) that combine numeric and qualitative information. We first show that exact inferences with SQPNs are NP^PP-complete. We then show that existing qualitative relations in SQPNs (plus probabilistic logic and imprecise assessments) can be dealt with effectively through multilinear programming. We then discuss learning: we consider a maximum likelihood method that generates point estimates given an SQPN and empirical data, and we describe a Bayesian-minded method that employs the Imprecise Dirichlet Model to generate set-valued estimates.
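The set-valued estimates of the Imprecise Dirichlet Model have a simple closed form: with counts n_k, total N and hyperparameter s, the probability of category k lies in the interval [n_k/(N+s), (n_k+s)/(N+s)]. A minimal sketch:

```python
def idm_intervals(counts, s=1.0):
    """Imprecise Dirichlet Model interval estimates for a categorical variable.

    counts: observed count n_k per category; s: the IDM hyperparameter
    (the prior weight spread over all possible Dirichlet priors).
    Returns one (lower, upper) probability bound per category.
    """
    n = sum(counts)
    return [(c / (n + s), (c + s) / (n + s)) for c in counts]

# Example: 10 observations over three categories.
print(idm_intervals([6, 3, 1], s=1.0))
```

As N grows the intervals shrink towards the relative frequencies, while small samples yield wide, cautious estimates, which is the behaviour the paper's Bayesian-minded method relies on.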
Abstract:
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time-domain surveys with large-format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next-generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper, we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of approximately 32,000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20 × 20 pixel stamp around the centre of the candidates. This differs from previous work in that it works directly on the pixels rather than on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25 per cent of the training data. We find the best results from the random forest classifier and demonstrate that, by accepting a false positive rate of 1 per cent, the classifier initially suggests a missed detection rate of around 10 per cent. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6 per cent.
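The pipeline shape, flattened pixel stamps fed to a random forest with a 25 per cent held-out split, can be sketched on synthetic stand-in data (random arrays with an injected central "source" signal, not survey images):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: each candidate is a 20x20 stamp flattened to
# 400 features, labelled real (1) or bogus (0). These are random arrays,
# not Pan-STARRS1 images.
rng = np.random.default_rng(42)
n = 1000
stamps = rng.normal(size=(n, 20 * 20))
labels = rng.integers(0, 2, size=n)
stamps[labels == 1, 180:220] += 2.0   # fake point-source signal near the centre

# 25 per cent held out for testing, as in the paper's evaluation
X_train, X_test, y_train, y_test = train_test_split(
    stamps, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In the real setting, the operating point is chosen on the classifier's score (e.g. fixing a 1 per cent false positive rate via `predict_proba` thresholds) rather than using the default 0.5 decision boundary.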