825 results for Modeling Non-Verbal Behaviors Using Machine Learning
Abstract:
OBJECTIVES To explore, using semistructured interviews, the experiences of oncology staff with communicating safety concerns and to examine the situational factors and motivations surrounding the decision whether and how to speak up. SETTING Seven oncology departments of six hospitals in Switzerland. PARTICIPANTS A diverse sample of 32 experienced oncology healthcare professionals. RESULTS Nurses and doctors commonly experience situations which raise their concerns and require questioning, clarifying and correcting. Participants often used non-verbal communication to signal safety concerns. Speaking-up behaviour was strongly related to the clinical safety issue involved. Most episodes of 'silence' were connected to hygiene, isolation and invasive procedures. In contrast, there seemed to exist a strong culture of communicating questions, doubts and concerns relating to medication. Nearly all interviewees were concerned with 'how' to say it, and in particular those of lower hierarchical status reflected on deliberate 'voicing tactics'. CONCLUSIONS Our results indicate a widely accepted culture of discussing any concerns relating to medication safety, while other issues are more difficult to voice. Clinicians devote considerable effort to evaluating the situation and sensitively deciding whether and how to speak up. Our results can serve as a starting point for developing a shared understanding of risks and of the appropriate communication of safety concerns among staff in oncology.
Abstract:
Both theoretically and empirically, there is continuing interest in understanding the specific relation between cognitive and motor development in childhood. The present longitudinal study, comprising three measurement points, targeted this relation. At the beginning of the study, the participating children were 5-6 years old. Participants' fine motor skills, executive functioning, and non-verbal intelligence were assessed, and their cross-sectional and cross-lagged interrelations were examined. Additionally, performance in these three areas was used to predict early school achievement (in terms of mathematics, reading, and spelling) at the end of participants' first grade. Correlational analyses and structural equation modeling revealed that fine motor skills, non-verbal intelligence and executive functioning were significantly interrelated. Both fine motor skills and intelligence had significant links to later school achievement. However, when executive functioning was additionally included in the prediction of early academic achievement, fine motor skills and non-verbal intelligence were no longer significantly associated with later school performance, suggesting that executive functioning plays an important role in the link between motor and cognitive performance.
Abstract:
Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control should be initiated. Literature about quality control of SPs’ performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument after being inspired by others [5] and used it in our high-stakes exam. Methods SP trainers from all five Swiss medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised with the partners twice, based on experiences during OSCEs. The final instrument contains 14 criteria for acting (i.e. adequate verbal and non-verbal expression) and standardization (i.e. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. Both SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). FAIR_OSCE The list of items to assess the quality of the simulation by SPs was primarily developed and used to provide formative feedback to the SPs in order to help them improve their performance. It was therefore named “Feedback structure for the Assessment of Interactive Role play in Objective Structured Clinical Exams” (FAIR_OSCE). It was also used to assess the quality of patient portrayal during the exam. The results were calculated for each of the five faculties individually. Formative evaluation was given to each of the five faculties as individual feedback, without revealing the results of the other faculties beyond the overall results. Results High quality of patient portrayal during the exam was documented. More than 90% of SP performances were rated as completely correct or sufficient. An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions in medical tests increased from 88% to 95%. These 95% completely correct reactions, together with 4% sufficient reactions, add up to 99% of the reactions meeting the requirements of the exam. SP educators using the instrument reported an improvement in SPs’ performance induced by its use. Disadvantages mentioned were the high concentration needed to explicitly observe all criteria and the cumbersome handling of the paper-based forms. Conclusion We were able to document a very high quality of SP performance in our exam. The data also indicate that our training is effective. We believe that the high concentration needed when using the instrument is well invested, considering the observed improvement in performance. The development of an iPad-based application for the form is planned to address the cumbersome handling of the paper.
Abstract:
Introduction Since the quality of patient portrayal by standardized patients (SPs) during an Objective Structured Clinical Exam (OSCE) has a major impact on the reliability and validity of the exam, quality control should be initiated. Literature about quality control of SPs’ performance focuses on feedback [1, 2] or completion of checklists [3, 4]. Since we did not find a published instrument meeting our needs for the assessment of patient portrayal, we developed such an instrument after being inspired by others [5] and used it in our high-stakes exam. Project description SP trainers from five medical faculties collected and prioritized quality criteria for patient portrayal. Items were revised twice, based on experiences during OSCEs. The final instrument contains 14 criteria for acting (i.e. adequate verbal and non-verbal expression) and standardization (i.e. verbatim delivery of the first sentence). All partners used the instrument during a high-stakes OSCE. SPs and trainers were introduced to the instrument. The tool was used in training (more than 100 observations) and during the exam (more than 250 observations). Outcome High quality of SPs’ patient portrayal during the exam was documented. More than 90% of SP performances were rated as completely correct or sufficient. An increase in quality of performance between training and exam was noted. For example, the rate of completely correct reactions in medical tests increased from 88% to 95%. Together with the 4% of sufficient performances, these 95% add up to 99% of the reactions in medical tests meeting the standards of the exam. SP educators using the instrument reported an improvement in SPs’ performance induced by its use. Disadvantages mentioned were the high concentration needed to observe all criteria and the cumbersome handling of the paper-based forms. Discussion We were able to document a very high quality of SP performance in our exam. The data also indicate that our training is effective. We believe that the high concentration needed when using the instrument is well invested, considering the observed enhancement of performance. The development of an iPad-based application for the form is planned to address the cumbersome handling of the paper.
Abstract:
Until today, most documentation of forensically relevant medical findings has been limited to traditional 2D photography, 2D conventional radiographs, sketches and verbal description. The classic documentation in forensic science still has limitations, especially when 3D documentation is necessary. The goal of this paper is to demonstrate new approaches based on real 3D geometric data. It presents approaches to the 3D geometric documentation of injuries on the body surface and of internal injuries in both living and deceased cases. Using modern imaging methods such as photogrammetry, optical surface scanning and radiological CT/MRI scanning in combination, it could be demonstrated that a real, fully 3D-data-based individual documentation of the body surface and internal structures is possible in a non-invasive and non-destructive manner. Using data merging/fusing and animation, it is possible to answer reconstructive questions about the dynamic development of patterned injuries (morphologic imprints) and to evaluate whether they can be matched or linked to suspected injury-causing instruments. For the first time, to our knowledge, optical and radiological 3D scanning were used to document the forensically relevant injuries of a human body in combination with vehicle damage. This complementary documentation approach made individual, real-data-based forensic analysis and animation possible, linking body injuries to vehicle deformations or damage. These data allow conclusions to be drawn for automobile accident research, for the optimization of vehicle safety (pedestrian and passenger) and for the further development of crash dummies. Real-3D-data-based documentation opens a new horizon for scientific reconstruction and animation, bringing added value and a real quality improvement to forensic science.
Abstract:
We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called Sparse Pseudo-input GP as a global model. The algorithm is tested on four benchmark problems, whose number of starting points ranges from 10^2 to 10^4. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
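As a point of reference for the EI machinery this abstract refers to, here is a minimal sketch of the standard (single-point) expected improvement criterion evaluated on a fitted GP posterior; the function and variable names are illustrative and not taken from the SpLEGO implementation.

# Minimal sketch of the standard expected improvement (EI) criterion used
# within the selected local models (minimization setting). 'mu' and 'sigma'
# are the GP posterior mean and standard deviation at candidate points and
# 'y_best' is the best observed value; names are illustrative only.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    sigma = np.maximum(sigma, 1e-12)              # guard against zero variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage on three candidate points (numbers are made up for illustration);
# the next evaluation would go to the candidate with the largest EI value.
mu = np.array([0.2, 0.0, -0.1])
sigma = np.array([0.3, 0.5, 0.1])
print(expected_improvement(mu, sigma, y_best=0.05))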
Abstract:
This work deals with the parallel optimization of expensive objective functions which are modelled as sample realizations of Gaussian processes. The study is formalized as a Bayesian optimization problem, or continuous multi-armed bandit problem, where a batch of q > 0 arms is pulled in parallel at each iteration. Several algorithms have been developed for choosing batches by trading off exploitation and exploration. As of today, the maximum Expected Improvement (EI) and Upper Confidence Bound (UCB) selection rules appear as the most prominent approaches for batch selection. Here, we build upon recent work on the multipoint Expected Improvement criterion, for which an analytic expansion relying on Tallis’ formula was recently established. Since the computational burden of this selection rule is still an issue in application, we derive a closed-form expression for the gradient of the multipoint Expected Improvement, which aims at facilitating its maximization using gradient-based ascent algorithms. Substantial computational savings are shown in application. In addition, our algorithms are tested numerically and compared to state-of-the-art UCB-based batch-sequential algorithms. Combining starting designs relying on UCB with gradient-based EI local optimization finally appears as a sound option for batch design in distributed Gaussian Process optimization.
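For readers unfamiliar with the multipoint criterion, the following sketch estimates the q-point Expected Improvement of a candidate batch by simple Monte Carlo sampling from the GP posterior over that batch; it only illustrates what the criterion measures, it is not the closed-form expansion or gradient derived in the paper, and all names and numbers are hypothetical.

# Monte Carlo estimate of the multipoint (q-point) expected improvement for a
# batch of q candidate points, given the GP posterior mean vector 'mu' and
# covariance matrix 'cov' over that batch (minimization setting).
import numpy as np

def q_expected_improvement_mc(mu, cov, y_best, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n_samples)   # (n_samples, q)
    improvement = np.maximum(y_best - samples.min(axis=1), 0.0)  # batch improvement
    return improvement.mean()

# Toy usage with a 2-point batch (illustrative posterior moments only):
mu = np.array([0.0, 0.1])
cov = np.array([[0.04, 0.01], [0.01, 0.09]])
print(q_expected_improvement_mc(mu, cov, y_best=0.05))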
Abstract:
Purpose In recent years, selective retina laser treatment (SRT), a sub-threshold therapy method, has been used to avoid widespread damage to all retinal layers by targeting only a few. While these methods facilitate faster healing, their lack of visual feedback during treatment represents a considerable shortcoming, as induced lesions remain invisible with conventional imaging, making clinical use challenging. To overcome this, we present a new strategy to provide location-specific and contact-free automatic feedback on SRT laser applications. Methods We leverage time-resolved optical coherence tomography (OCT) to provide informative feedback to clinicians on the outcomes of location-specific treatment. By coupling an OCT system to the SRT treatment laser, we visualize structural changes in the retinal layers as they occur via time-resolved depth images. We then propose a novel strategy for the automatic assessment of such time-resolved OCT images. To achieve this, we introduce novel image features for this task that, when combined with standard machine learning classifiers, yield excellent treatment outcome classification capabilities. Results Our approach was evaluated on both ex vivo porcine eyes and human patients in a clinical setting, yielding performances above 95% accuracy for predicting patient treatment outcomes. In addition, we show that accurate outcomes for human patients can be estimated even when our method is trained using only ex vivo porcine data. Conclusion The proposed technique presents a much needed strategy toward noninvasive, safe, reliable, and repeatable SRT applications. These results are encouraging for the broader use of new treatment options for neovascularization-based retinal pathologies.
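The classification step described above can be pictured with a short, schematic sketch: features extracted from ex vivo porcine OCT sequences train a standard classifier that is then evaluated on human data. The synthetic arrays and the choice of classifier below are assumptions for illustration, not necessarily those used in the study.

# Schematic outcome-classification step: train on OCT-derived features from
# ex vivo porcine eyes, evaluate on human data. Random arrays stand in for the
# real extracted features; the study's actual features and classifier may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
porcine_features, porcine_labels = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
human_features, human_labels = rng.normal(size=(50, 16)), rng.integers(0, 2, 50)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(porcine_features, porcine_labels)       # train on ex vivo data only
predictions = clf.predict(human_features)       # predict human treatment outcomes
print("accuracy:", accuracy_score(human_labels, predictions))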
Abstract:
While numerous studies have found similar mortality rates for Hispanics compared to non-Hispanic whites, surprisingly little is known about differentials in years of potential life lost (YPLL). The primary purpose of this paper is to quantify YPLL among Hispanics in order to determine whether it differs between Hispanics and non-Hispanic whites. Using YPLL may bring attention to dissimilarities that are often obscured by traditional measures. Bexar County 2000-2004 data from the Texas Department of State Health Services, Vital Statistics Unit, were analyzed for the descriptive analysis, and 2003 Bexar County Multiple Cause of Death data were analyzed for the regression analysis. Multiple regression models were used to examine Hispanic and non-Hispanic white differences in years of potential life lost (YPLL) before age 75 from all causes of death. For this analysis, YPLL was regressed on ethnicity, education level and marital status for men and women. The descriptive analysis found that YPLL from all causes was greater among non-Hispanic whites than Hispanics. However, the regression analysis found that Hispanics lost more years of potential life from all causes of death compared to non-Hispanic whites. This indicates that the effect of ethnicity on YPLL differs across methods of analysis. Future research efforts should keep the method of analysis in mind when using YPLL. Understanding differences in mortality between Hispanics and non-Hispanic whites is important for targeting future health policies and research to aid in eliminating Hispanic health disparities.
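Because the whole comparison rests on the YPLL measure, a tiny sketch of how YPLL before age 75 is conventionally computed may help; the age values below are made up purely for illustration.

# Years of potential life lost (YPLL) before age 75: each death under the age
# limit contributes the years remaining to 75. This only illustrates the
# measure itself, not the study's regression models.
import numpy as np

def ypll_before_75(ages_at_death, limit=75):
    ages = np.asarray(ages_at_death, dtype=float)
    return float(np.sum(np.maximum(limit - ages, 0.0)))

print(ypll_before_75([45, 60, 82]))   # 30 + 15 + 0 = 45.0 years lost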
Abstract:
This dissertation utilized quantitative and qualitative methods to examine the role of responsibility in the prevention of sexually transmitted infections (STIs) and pregnancy through condom use and other sexual behaviors among young adolescents. Data were analyzed across race and gender, and three papers were developed. The quantitative portion used logistic regression to assess associations between personal responsibility, as well as other known correlates, and reported condom use and condom use intentions as a means of STI and pregnancy prevention among 445 inner-city high school adolescents. Responsibility to prevent pregnancy by providing the condom was associated with condom use at last sex and consistent condom use. Responsibility to prevent acquiring an STI by using a condom was significantly associated with consistent condom use. No significant associations were found between responsibility and condom use intentions. The qualitative section of the dissertation project involved conducting 28 in-depth interviews among 9th and 10th grade African American and Hispanic students who attended a large urban school district in South Central Texas. Perceptions of responsibility for preventing STIs and unintended pregnancy, as well as for condom use, were explored. Male and female adolescents expressed joint responsibility to prevent an STI or pregnancy. Perceptions of responsibility for providing and using the condoms were mixed. Despite indicating both partners, nearly all participants implied that females, more so than males, had the final responsibility to prevent contracting an STI or a pregnancy, to provide a condom, and to make sure a condom was used. Participants discussed the role of parents' involvement in preventing these outcomes as well as the need for more sexual health education and access to preventative methods. The last section of this dissertation involved qualitative inquiry to ascertain perceptions of the reasons why adolescents engage in anal and oral (non-coital) sex. Pleasure seeking and giving, as well as social influence and pressure, were described as the main reasons why teenagers have non-coital sex. Other reasons included the conveniences of participating in these behaviors, such as the ease of performing oral sex and anal sex as a convenient alternative to vaginal sex. Sexual inexperience was an indicator of why anal sex occurs. Many of the reasons involved misperceptions, and adolescents who practice these sexual behaviors place themselves at risk of contracting an STI. This dissertation increased the current knowledge base about adolescent sexual responsibility and non-coital behaviors. Future studies should explore perceptions of responsibility and actual sexual activity practices among adolescents to reduce the burden of STIs and pregnancy as well as help public health professionals develop programs for adolescent populations, schools, and communities where these issues persist.
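To make the quantitative analysis concrete, the following sketch shows the general shape of a logistic regression of reported condom use on a responsibility measure plus covariates; the data frame and every column name are hypothetical stand-ins, not the study's actual variables.

# Illustrative logistic regression in the spirit of the quantitative portion:
# reported consistent condom use regressed on a responsibility measure and
# covariates. All data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 445
df = pd.DataFrame({
    "consistent_condom_use": rng.integers(0, 2, n),
    "responsibility_score": rng.normal(size=n),
    "gender": rng.choice(["male", "female"], n),
    "race": rng.choice(["african_american", "hispanic"], n),
    "age": rng.integers(14, 19, n),
})

model = smf.logit(
    "consistent_condom_use ~ responsibility_score + C(gender) + C(race) + age",
    data=df,
).fit()
print(model.summary())   # coefficients are log-odds; exponentiate for odds ratios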
Abstract:
Problem: Medical and veterinary students memorize facts but then have difficulty applying those facts in clinical problem solving. Cognitive engineering research suggests that the inability of medical and veterinary students to infer concepts from facts may be due in part to specific features of how information is represented and organized in educational materials. First, physical separation of pieces of information may increase the cognitive load on the student. Second, information that is necessary but not explicitly stated may also contribute to the student’s cognitive load. Finally, the types of representations – textual or graphical – may also support or hinder the student’s learning process. This may explain why students have difficulty applying biomedical facts in clinical problem solving. Purpose: To test the hypothesis that three specific aspects of expository text – the spatial distance between the facts needed to infer a rule, the explicitness of information, and the format of representation – affected the ability of students to solve clinical problems. Setting: The study was conducted in the parasitology laboratory of a college of veterinary medicine in Texas. Sample: The study subjects were a convenience sample consisting of 132 second-year veterinary students who matriculated in 2007. The age of this class upon admission ranged from 20 to 52, and the gender makeup of this class was approximately 75% female and 25% male. Results: No statistically significant difference in student ability to solve clinical problems was found when relevant facts were placed in proximity, nor when an explicit rule was stated. Further, no statistically significant difference in student ability to solve clinical problems was found when students were given different representations of material, including tables and concept maps. Findings: The findings from this study indicate that the three properties investigated – proximity, explicitness, and representation – had no statistically significant effect on student learning as it relates to clinical problem-solving ability. However, ad hoc observations as well as findings from other researchers suggest that the subjects were probably using rote learning techniques such as memorization, and therefore were not attempting to infer relationships from the factual material in the interventions unless they were specifically prompted to look for patterns. A serendipitous finding unrelated to the study hypothesis was that subjects who correctly answered questions regarding functional (non-morphologic) properties, such as mode of transmission and intermediate host, at the family taxonomic level were significantly more likely to correctly answer clinical case scenarios than were subjects who did not correctly answer questions regarding functional properties. These findings suggest a strong relationship (p < .001) between well-organized knowledge of taxonomic functional properties and clinical problem-solving ability. Recommendations: Further study should be undertaken to investigate the relationship between knowledge of functional taxonomic properties and clinical problem-solving ability. In addition, the effect of prompting students to look for patterns in instructional material, followed by the effect of factors that affect cognitive load such as proximity, explicitness, and representation, should be explored.
Abstract:
In an increasingly competitive higher education market, collaboration between universities is an effective strategy for gaining access to the global market. The development of joint degrees is an important mechanism for strengthening academic research collaborations and diversifying knowledge.
Joint degrees are becoming increasingly implemented in universities around the world. In Europe, the Bologna process and the Erasmus programme have encouraged the global recognition of joint and double degrees and promoted close collaboration between academic institutions. In the unstoppable process of globalization and educational convergence, the use of e-learning systems to support both blended and online courses is a growing trend. Since e-learning systems cover a wide range of courses, it becomes necessary to find a suitable solution that enables universities to support and manage joint degrees through their e-learning systems in accordance with the collaboration agreements established by the universities involved. This dissertation will address the following research questions: 1. What factors need to be considered in the implementation and management of joint degrees? 2. How can current e-learning systems support the development of joint degrees? 3. What other services and systems need to be adapted by universities interested in participating in a joint degree through their e-learning systems? The implementation of joint degrees using e-learning systems is complex and involves technical, administrative, security, cultural, financial and legal challenges. This dissertation proposes a series of contributions to help solve some of the identified challenges. One of the cornerstones of this proposal is a conceptual model of all the relevant issues related to the support of joint degrees by means of e-learning systems. After defining the conceptual model, this dissertation proposes a policy-driven architecture for implementing inter-institutional degree collaborations through e-learning systems as stipulated by the collaboration agreement signed by the participating universities. The author has focused on the workflow management component of this architecture. Finally, the building blocks for achieving interoperability of learning object repositories have been identified and validated. The use of multimedia services in education is a growing trend, providing rich e-learning services that improve the communication and interaction between teachers and students. Within these e-learning services, we have focused on the use of videoconferencing and lecture recording as the services best suited to support collaborative learning scenarios. The contributions have been validated within national and European research projects in which the author has been involved.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
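As a concrete illustration of the kind of module just mentioned, the snippet below runs an off-the-shelf POS tagger over a short sentence; NLTK is used here purely as an example of a linguistic annotation tool and is not an assumption about the tools this work builds on.

# Tiny illustration of the output a POS-tagging module produces; NLTK serves
# only as an example of an off-the-shelf linguistic annotation tool.
import nltk
# nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')  # first run only

tokens = nltk.word_tokenize("OntoTag combines linguistic and ontological annotations.")
print(nltk.pos_tag(tokens))
# e.g. [('OntoTag', 'NNP'), ('combines', 'VBZ'), ('linguistic', 'JJ'), ...]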
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Thanks to their inherent properties, probabilistic graphical models are one of the prime candidates for machine learning and decision-making tasks, especially in uncertain domains. Their capabilities, such as representation, inference and learning, if used effectively, can greatly help to build intelligent systems that are able to act appropriately in different problem domains. The field of evolutionary algorithms is one such discipline that has employed probabilistic graphical models to improve the search for optimal solutions to complex problems. This paper shows how probabilistic graphical models have been used in evolutionary algorithms to improve their performance in solving complex problems. Specifically, we give a survey of probabilistic model-building evolutionary algorithms, called estimation of distribution algorithms, and compare different methods for probabilistic modeling in these algorithms.
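To give a feel for what an estimation of distribution algorithm does, here is a minimal univariate (UMDA-style) sketch on the toy OneMax problem; the EDAs surveyed in the paper use much richer probabilistic graphical models, and all parameter values below are illustrative only.

# Minimal univariate EDA (UMDA-style) on the toy OneMax problem: model each
# bit with an independent Bernoulli probability, sample a population from the
# model, select the best individuals, and re-estimate the model from them.
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, n_select, n_gens = 30, 100, 30, 50
p = np.full(n_bits, 0.5)                                      # initial model

for _ in range(n_gens):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)    # sample candidates
    fitness = pop.sum(axis=1)                                  # OneMax fitness
    best = pop[np.argsort(fitness)[-n_select:]]                # truncation selection
    p = best.mean(axis=0).clip(0.05, 0.95)                     # refit the model

print("best fitness found:", int(fitness.max()), "out of", n_bits)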
Abstract:
This paper addresses the question of maximizing classifier accuracy for classifying task-related mental activity from magnetoencephalography (MEG) data. We propose the use of different sources of information and introduce an automatic channel selection procedure. To determine an informative set of channels, our approach combines a variety of machine learning algorithms: feature subset selection methods, classifiers based on regularized logistic regression, information fusion, and multiobjective optimization based on probabilistic modeling of the search space. The experimental results show that our proposal is able to improve classification accuracy compared to approaches whose classifiers use only one type of MEG information or for which the set of channels is fixed a priori.
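One building block mentioned above, regularized logistic regression, can on its own hint at which channels are informative; the sketch below shows just that single ingredient with L1 regularization, under the simplifying assumption of a trials-by-channels feature matrix, and is not the full multi-method pipeline described in the paper.

# Simplified single ingredient of channel selection: an L1-regularized
# logistic regression whose nonzero coefficients point to informative MEG
# channels. Random arrays stand in for real data; the paper additionally
# combines feature subset selection, information fusion and multiobjective
# optimization.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 60))          # synthetic: 120 trials x 60 channel features
y = rng.integers(0, 2, 120)             # synthetic class labels

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
selected_channels = np.flatnonzero(clf.coef_[0])   # indices with nonzero weight
print("channels with nonzero weight:", selected_channels)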