898 results for Expert systems
Abstract:
Texture image analysis is an important field of investigation that has attracted the attention of the computer vision community over the last decades. In this paper, a novel approach for texture image analysis is proposed by combining graph theory and partially self-avoiding deterministic walks. From the image, we build a regular graph in which each vertex represents a pixel and is connected to neighboring pixels (pixels whose spatial distance is less than a given radius). Transformations on the regular graph are applied to emphasize different image features. To characterize the transformed graphs, partially self-avoiding deterministic walks are performed to compose the feature vector. Experimental results on three databases indicate that the proposed method significantly improves the correct classification rate compared to the state of the art, e.g. from 89.37% (original tourist walk) to 94.32% on the Brodatz database, from 84.86% (Gabor filter) to 85.07% on the Vistex database and from 92.60% (original tourist walk) to 98.00% on the plant leaves database. In view of these results, this method is expected to also provide good results in other applications, such as texture synthesis and texture segmentation. (C) 2012 Elsevier Ltd. All rights reserved.
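To make the graph-construction step concrete, here is a minimal Python/NumPy sketch (not the authors' code): each pixel becomes a vertex and is connected to every other pixel whose spatial distance is at most a chosen radius. The radius value and the window-based neighbor search are illustrative choices.

```python
# Minimal sketch (assumptions: 2-D grayscale image, Euclidean distance rule)
# of building a regular graph where each pixel is a vertex connected to all
# pixels within a given radius.
import numpy as np

def build_pixel_graph(image, radius=2.0):
    """Return an adjacency list {vertex_index: [neighbor indices]} for a 2-D image."""
    h, w = image.shape
    adjacency = {i: [] for i in range(h * w)}
    r2 = radius ** 2
    win = int(np.ceil(radius))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            # Only inspect the local window; pixels farther than r cannot be neighbors.
            for dy in range(-win, win + 1):
                for dx in range(-win, win + 1):
                    ny, nx = y + dy, x + dx
                    if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w:
                        if dy * dy + dx * dx <= r2:
                            adjacency[i].append(ny * w + nx)
    return adjacency

# Example: a small random grayscale patch.
patch = np.random.randint(0, 256, size=(8, 8))
graph = build_pixel_graph(patch, radius=1.5)
```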
Abstract:
Fraud is a global problem that has demanded increasing attention due to the rapid expansion of modern technology and communication. When statistical techniques are used to detect fraud, a critical factor is whether the detection model is accurate enough to correctly classify a case as fraudulent or legitimate. In this context, the concept of bootstrap aggregating (bagging) arises. The basic idea is to generate multiple classifiers by obtaining the predicted values from models fitted to several replicated datasets and then combining them into a single predictive classification in order to improve the classification accuracy. In this paper, we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging predictors for classification. Through a large simulation study and various real datasets, we found that the probabilistic networks are a strong modeling option, with high predictive capacity and a marked gain from the bagging procedure when compared to traditional techniques. (C) 2012 Elsevier Ltd. All rights reserved.
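The bagging idea summarized above can be sketched in a few lines. The snippet below is a hedged illustration, using a scikit-learn decision tree as a stand-in base learner (the paper's k-dependence probabilistic networks are not implemented here) and majority voting over bootstrap replicates; it assumes NumPy arrays and binary 0/1 labels.

```python
# Hedged sketch of bootstrap aggregating (bagging): fit one classifier per
# bootstrap replicate of the training data and combine predictions by vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X_train, y_train, X_test, n_replicates=25, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_replicates):
        # Bootstrap replicate: sample the training set with replacement.
        idx = rng.integers(0, len(X_train), size=len(X_train))
        clf = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        votes.append(clf.predict(X_test))
    votes = np.array(votes)
    # Majority vote across replicates (binary 0/1 labels assumed here).
    return (votes.mean(axis=0) >= 0.5).astype(int)
```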
Resumo:
In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods, where the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions from all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependency by themselves. An experimental study using decision trees, a kernel method as well as Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve the multi-label classification performance.
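For reference, the plain binary relevance transformation described above can be sketched as follows. This is an illustrative Python/scikit-learn sketch, not the paper's extended approach, and the choice of Gaussian Naive Bayes as base learner is an assumption.

```python
# Minimal sketch of binary relevance: one independent binary classifier per
# label, with per-label predictions aggregated into a label-set prediction.
import numpy as np
from sklearn.naive_bayes import GaussianNB

class BinaryRelevance:
    def __init__(self, base_cls=GaussianNB):
        self.base_cls = base_cls
        self.models = []

    def fit(self, X, Y):
        # Y is an (n_samples, n_labels) binary indicator matrix (NumPy array).
        self.models = [self.base_cls().fit(X, Y[:, j]) for j in range(Y.shape[1])]
        return self

    def predict(self, X):
        # Aggregate the independent per-label predictions into one label set.
        return np.column_stack([m.predict(X) for m in self.models])
```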
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information-theoretic measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed no statistically significant difference in predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities produced by these two statistical modeling techniques, with the naive logistic regression models consistently underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
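As an illustration of the kind of evaluation described above, the following hedged sketch fits a plain ("naive") logistic regression with scikit-learn and reports sensitivity, specificity and accuracy from the confusion matrix. The state-dependent sample selection model of Cramer (2004) is not implemented here, and the function and variable names are illustrative.

```python
# Illustrative evaluation of a credit-scoring classifier: fit a plain logistic
# regression and derive sensitivity, specificity and accuracy from the
# confusion matrix (binary 0/1 default labels assumed).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

def evaluate_scorecard(X_train, y_train, X_test, y_test, threshold=0.5):
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    prob_default = model.predict_proba(X_test)[:, 1]
    y_pred = (prob_default >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```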
Abstract:
OBJECTIVE: This study proposes a new approach that considers uncertainty in predicting and quantifying the presence and severity of diabetic peripheral neuropathy. METHODS: A rule-based fuzzy expert system was designed by four experts in diabetic neuropathy. The model variables were used to classify neuropathy in diabetic patients as mild, moderate, or severe. System performance was evaluated by means of the Kappa agreement measure, comparing the results of the model with those generated by the experts in an assessment of 50 patients. Accuracy was evaluated by an ROC curve analysis based on 50 other cases, whose clinical assessments were considered the gold standard. RESULTS: According to the Kappa analysis, the model was in moderate agreement with expert opinions. The ROC analysis (evaluation of accuracy) yielded an area under the curve equal to 0.91, demonstrating very good consistency in classifying patients with diabetic neuropathy. CONCLUSION: The model efficiently classified diabetic patients with different degrees of neuropathy severity. In addition, the model provides a way to quantify diabetic neuropathy severity and allows a more accurate assessment of the patient's condition.
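The two evaluation steps named above (Kappa agreement and ROC analysis) can be reproduced with standard library calls. The sketch below uses scikit-learn's cohen_kappa_score and roc_auc_score on small invented toy arrays; the severity coding and the scores are illustrative, not the study's data.

```python
# Sketch of the evaluation steps: chance-corrected agreement (Cohen's kappa)
# and discrimination ability (area under the ROC curve).
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Toy agreement check: severity classes assigned by the model vs. an expert
# (0 = none, 1 = mild, 2 = moderate, 3 = severe).
expert_severity = [0, 1, 1, 2, 3, 2, 0, 1]
model_severity  = [0, 1, 2, 2, 3, 1, 0, 1]
kappa = cohen_kappa_score(expert_severity, model_severity)

# Toy accuracy check: gold-standard neuropathy indicator vs. the system's
# continuous output score.
gold_standard = [0, 0, 1, 1, 1, 0, 1, 0]
model_score   = [0.1, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6, 0.2]
auc = roc_auc_score(gold_standard, model_score)
print(f"kappa={kappa:.2f}, auc={auc:.2f}")
```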
Abstract:
This work proposes a novel texture descriptor based on fractal theory. The method builds on the Bouligand-Minkowski descriptors. We decompose the original image recursively into four equal parts. In each recursion step, we estimate the average and the deviation of the Bouligand-Minkowski descriptors computed over each part. We then extract entropy features from both the average and the deviation, and the proposed descriptors are obtained by concatenating these measures. The method is tested in a classification experiment on well-known datasets, namely Brodatz and Vistex. The results demonstrate that the novel technique achieves better results than classical and state-of-the-art texture descriptors, such as Local Binary Patterns, Gabor wavelets and co-occurrence matrices.
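A hedged sketch of the recursive decomposition scheme described above follows. It splits the image into four equal parts at each recursion level and records the entropy of the average and of the deviation of a per-part descriptor; a simple gray-level histogram stands in for the Bouligand-Minkowski descriptors, which are not implemented here.

```python
# Simplified sketch of the recursive four-part decomposition with entropy of
# the average and deviation of a per-part descriptor at each level.
import numpy as np

def part_descriptor(block):
    # Placeholder descriptor: normalized 16-bin gray-level histogram
    # (standing in for the Bouligand-Minkowski descriptors).
    hist, _ = np.histogram(block, bins=16, range=(0, 256))
    return hist / max(hist.sum(), 1)

def shannon_entropy(v):
    p = v / max(v.sum(), 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def recursive_features(image, depth):
    if depth == 0 or min(image.shape) < 2:
        return []
    hy, hx = image.shape[0] // 2, image.shape[1] // 2
    parts = [image[:hy, :hx], image[:hy, hx:], image[hy:, :hx], image[hy:, hx:]]
    descs = np.array([part_descriptor(p) for p in parts])
    # Entropy of the average and of the deviation of the per-part descriptors.
    feats = [shannon_entropy(descs.mean(axis=0)), shannon_entropy(descs.std(axis=0))]
    for p in parts:
        feats += recursive_features(p, depth - 1)
    return feats

image = np.random.randint(0, 256, size=(64, 64))
feature_vector = np.array(recursive_features(image, depth=3))
```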
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. These patterns are moving textures in which the concept of self-similarity for static textures is extended to the spatiotemporal domain. In this paper, we propose a novel approach for dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed in three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering and segmentation. Experimental results on these applications indicate that the proposed method improves the dynamic texture representation compared to the state of the art.
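As a rough illustration of a deterministic partially self-avoiding (tourist) walk, the sketch below runs the rule on a single 2-D plane: from each starting pixel the walker moves to the 4-neighbor with the most similar intensity while avoiding recently visited pixels. The neighborhood, memory handling and stopping rule are simplifying assumptions, and the extension to the three orthogonal planes of a video is not shown.

```python
# Minimal sketch of a deterministic partially self-avoiding walk on one plane.
from collections import deque
import numpy as np

def tourist_walk(image, start, memory=2, max_steps=100):
    h, w = image.shape
    recent = deque([start], maxlen=memory + 1)  # last visited pixels to avoid
    path = [start]
    y, x = start
    for _ in range(max_steps):
        candidates = [(ny, nx) for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                      if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in recent]
        if not candidates:
            break
        # Deterministic rule: move to the neighbor closest in intensity.
        y, x = min(candidates, key=lambda p: abs(int(image[p]) - int(image[y, x])))
        recent.append((y, x))
        path.append((y, x))
    return path

frame = np.random.randint(0, 256, size=(16, 16))
walk = tourist_walk(frame, start=(8, 8), memory=2)
```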
Abstract:
There are still few satisfactory aids for the didactic design of teaching situations that both support inexperienced educators and, at the same time, foster the creativity and didactic variety of experts. This book presents a new approach to resolving this dilemma.
Abstract:
Teaching is a dynamic activity. It can be very effective if its impact is constantly monitored and adjusted to the demands of changing social contexts and the needs of learners. This implies that teachers need to be aware of teaching and learning processes. Moreover, they should constantly question their didactic methods and the learning resources they provide to their students. They should reflect on whether their actions are suitable, and they should regulate their teaching, e.g., by updating learning materials based on new knowledge about learners, or by motivating learners to engage in further learning activities. In recent years, a rising interest in ‘learning analytics’ has become observable. This interest is motivated by the availability of massive amounts of educational data. Continuously increasing processing power and a strong motivation to discover new information in these pools of educational data are also pushing further developments within the learning analytics research field. Learning analytics could be a method for reflective teaching practice that enables and guides teachers to investigate and evaluate their work in future learning scenarios. However, this potentially positive impact has not yet been sufficiently verified by learning analytics research. Another method that pursues these goals is ‘action research’. Learning analytics promises to initiate action research processes because it facilitates awareness, reflection and regulation of teaching activities analogous to action research. Therefore, this thesis joins both concepts in order to improve the design of learning analytics tools. The central research questions of this thesis are: What are the dimensions of learning analytics in relation to action research that need to be considered when designing a learning analytics tool? How does a learning analytics dashboard affect the teachers of technology-enhanced university lectures regarding ‘awareness’, ‘reflection’ and ‘action’? Does it initiate action research? What are the central requirements for a learning analytics tool that pursues such effects? To answer these research questions, this project followed design-based research principles. The main contributions are: a theoretical reference model that connects action research and learning analytics, the conceptualization and implementation of a learning analytics tool, a requirements catalogue for useful and usable learning analytics design based on evaluations, a tested procedure for impact analysis, and guidelines for the introduction of learning analytics into higher education.
Abstract:
Web 2.0 and social networks provided the first impulses for new forms of online teaching that make lasting use of the comprehensive networking of objects and users on the Internet. However, the diversity of the different systems makes their holistic use difficult within a comprehensive learning scenario that meets the demands of the modern information society. This paper presents a connectivism-based platform for online teaching called “Wiki-Learnia”, which covers all essential stages of lifelong learning. Using contemporary technologies, not only are users connected with each other, but users are also linked with dedicated content and, where applicable, with the associated authors and/or tutors. For the former, various Web 2.0 communication tools (social networks, chats, forums, etc.) are employed. The latter is based on the so-called “Learning Hub” approach, which is instrumented with Web 3.0 mechanisms, in particular a semantic meta-search engine. To demonstrate the practical relevance of the approach, the media-supported Juniorstudium of the Universität Rostock is presented, a project that prepares upper secondary school students for university study. Based on the specific requirements of this project, the extensive functionality and great flexibility of Wiki-Learnia are demonstrated.
Abstract:
This contribution focuses on the development, deployment and use of innovative technologies to support educational scenarios in schools, higher education and continuing education. Starting from the various phases of corporate learning, social learning, mobile learning and intelligent learning, a first section examines the use of technologies by children, adolescents and (young) adults in school, university study and teaching. This is followed by a presentation of technological developments based on the Technology Life Cycle and of the consequences that different development states and maturity levels of technologies, such as learning content management, social networks, mobile devices, multidimensional and multimodal spaces, up to applications of augmented reality and the Internet of Things, Services and Data, have for their deployment and use in educational scenarios. After presenting the requirements placed on digital technologies with regard to content, didactics and methodology, for example the creation of content, its reuse, digitization and findability, as well as standards, methodological guidance is given on the use of digital technologies for the interaction of learners and of teachers, social interaction, collaborative authoring, commenting, evaluation and review. Finally, findings on framework conditions, influencing factors, inhibiting and supporting factors, and challenges in the introduction and sustainable implementation of digital technologies in school teaching, in higher education teaching and study, and in continuing education are summarized in an overview, differentiated for schools and higher education.
Abstract:
Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible “beam” search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove to be superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML, or maximum parsimony, MP) should be chosen for any particular data set. A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically “difficult” data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects the proper method automatically. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
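A hypothetical sketch of that final step, selecting a reconstruction method with a decision-tree classifier, is given below. The features (number of taxa, alignment length, average pairwise divergence), training values and method labels are invented for illustration only; the dissertation's actual feature set and training data are not reproduced here.

```python
# Hypothetical sketch: a decision tree mapping simple data-set characteristics
# to a recommended phylogenetic reconstruction method. All values are invented.
from sklearn.tree import DecisionTreeClassifier

# Illustrative features per data set: [number of taxa, alignment length,
# average pairwise divergence]; labels name the recommended method.
X_train = [[10, 500, 0.05], [50, 1200, 0.30], [20, 800, 0.15], [80, 2000, 0.40]]
y_train = ["distance", "ML", "MP", "ML"]

selector = DecisionTreeClassifier().fit(X_train, y_train)
print(selector.predict([[30, 900, 0.25]]))  # suggested method for a new data set
```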
Abstract:
This dissertation documents health and illness in the context of the daily life circumstances and structural conditions faced by African American families living in Clover Heights (pseudonym), an inner-city public housing project in the Third Ward, Houston, Texas. Drawing on Kleinman's (1980) model of culturally defined health care systems and using the holistic-content approach to narrative analysis (Lieblich, Tuval-Mashiach, & Zilber, 1998), the purpose of this research was to explore the ways in which social and health policy, economic mobility, the inner-city environment, and cultural beliefs intertwined with African American families' health-related ideas, behaviors, and practices. I recruited six families using a convenience sampling method (Schensul, Schensul, & LeCompte, 1999) and followed them for fourteen months (2010–2011). Family was defined as a household unit, or those living in the same residence, short or long term. Single African American women ranging in age from 29 to 80 years headed all families. All but one family included children or grandchildren 18 years of age and younger, or children or other relatives 18 years of age and older. I also recruited six residents with whom I became acquainted over the course of the project. I collected data using traditional ethnographic methods including participant observation, archive review, field notes, mapping, free-listing, in-depth interviews, and life history interviews.
Doing ethnography afforded the families who participated in this project the freedom to construct their own experiences of health and illness. My role centered on listening to, learning from, and interpreting participants' narratives, exploring similarities and differences within and across families' experiences. As the research progressed, a pattern concerning diagnosis and pharmacotherapy for children's behavioral and emotional problems, particularly attention-deficit hyperactivity disorder (ADHD) and pediatric bipolar disorder (PBD), emerged from my formal interactions with participants and my informal interactions with residents. The findings presented in this dissertation document this pattern, focusing on how mothers and families interpreted, organized, and ascribed meaning to their experiences of ADHD and PBD.
In the first manuscript presented here, I documented three mothers' narrative constructions of a child's diagnosis with and pharmacotherapy for ADHD or PBD. Using Gergen's (1997) relational perspective, I argued that mothers' knowledge and experiences of ADHD and PBD were not individually constructed, but were linguistically and discursively constituted through various social interactions and relationships, including family, spirituality and faith, community norms, and expert systems of knowledge. Mothers' narratives revealed the complexity of children's behavioral and emotional problems, the daily trials of living through these problems, how mothers coped with adversity and developed survival strategies, and how they interacted with the various institutional authorities involved in evaluating, diagnosing, and encouraging pharmaceutical intervention for children's behavior. The findings highlight the ways in which mothers' social interactions and relationships introduced a scientific language and discourse for explaining children's behavior as mental illness, the discordances between expert systems of knowledge and mothers' understandings, and how these discordances reflected mothers' ‘microsources of power’ for producing their own stories and experiences.
In the second manuscript presented here, I documented the ways in which structural factors, including gender, race/ethnicity, and socioeconomic status, coupled with a unique cultural and social standpoint (Collins, 1990/2009), influenced the strategies this group of African American mothers employed to understand and respond to ADHD or PBD. The most salient themes related to mother-child relationships coalesced around mothers' beliefs about the etiology of ADHD and PBD, ‘conceptualizing responsibility,’ and ‘protection-survival.’ The findings suggest that even though mothers' strategies varied, they were in pursuit of a common goal. Mothers challenged the status quo, addressing children's behavioral and emotional problems in the ways that made the most sense to them, specifically protecting their children from further marginalization in society, more so than believing these were the best options for their children.