901 results for Learning techniques
Abstract:
This study examines how awareness of a building's interior architecture, specifically daylighting, affects students' academic performance. Extensive research has shown that the use of daylighting in a classroom can significantly enhance students' academic success. The purpose of this study is to determine whether student awareness of daylighting in the learning environment affects academic performance, compared with students who have no knowledge of daylighting. Research and surveys in existing and newly constructed high schools were conducted to verify the results of this study. These design ideas and concepts could encourage the architecture and design industry to advocate for construction and building requirements that incorporate more sustainable design teaching techniques.
Abstract:
Complex networks have been employed to model many real systems and serve as a modeling tool in a myriad of applications. In this paper, we apply the framework of complex networks to the problem of supervised classification in the word sense disambiguation task, which consists of deriving a classification function from the supervised (labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. The human (animal) brain, on the other hand, performs both low- and high-level orders of learning and readily identifies patterns according to the semantic meaning of the input data. In this paper, we apply a hybrid technique that encompasses both types of learning to the field of word sense disambiguation and show that the high-level order of learning can indeed improve the accuracy of the model. This evidence demonstrates that the internal structures formed by the words present patterns that, in general, cannot be correctly uncovered by traditional techniques alone. Finally, we examine the behavior of the model for different weights of the low- and high-level classifiers by plotting decision boundaries, which helps to explain the effectiveness of the model. Copyright (C) EPLA, 2012
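To make the hybrid scheme concrete, the sketch below combines a conventional ("low-level") classifier with a network-derived ("high-level") score through a convex weight. It is a minimal Python illustration, not the paper's exact formulation: the kNN base classifier, the weight lam, and the assumed high_level_score array are stand-ins.

# Minimal sketch (not the paper's exact formulation): a hybrid decision rule that
# mixes a "low-level" classifier score with a "high-level" network-based score.
# The kNN classifier, the weight `lam`, and `high_level_score` are illustrative assumptions.
from sklearn.neighbors import KNeighborsClassifier

def hybrid_scores(X_train, y_train, X_test, high_level_score, lam=0.3):
    """Combine low- and high-level class-membership scores.

    high_level_score: array of shape (n_test, n_classes) with scores assumed to be
    derived from the structure of a word co-occurrence network.
    """
    low = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
    low_scores = low.predict_proba(X_test)                   # topological/physical features only
    return (1 - lam) * low_scores + lam * high_level_score   # convex combination of both orders

Varying lam from 0 to 1 and predicting the argmax of the combined score traces out the family of decision boundaries discussed in the abstract.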
Abstract:
The reproductive performance of cattle may be influenced by several factors, but mineral imbalances are crucial in terms of direct effects on reproduction. Several studies have shown that elements such as calcium, copper, iron, magnesium, selenium, and zinc are essential for reproduction and can prevent oxidative stress. However, toxic elements such as lead, nickel, and arsenic can have adverse effects on reproduction. In this paper, we applied a simple and fast method of multi-element analysis to bovine semen samples from Zebu and European classes used in reproduction programs and artificial insemination. Samples were analyzed by inductively coupled plasma mass spectrometry (ICP-MS) using aqueous-medium calibration; the samples were diluted 1:50 in a solution containing 0.01% (vol/vol) Triton X-100 and 0.5% (vol/vol) nitric acid. Rhodium, iridium, and yttrium were used as internal standards for the ICP-MS analysis. To develop a reliable method of tracing the class of bovine semen, we used data mining techniques that make it possible to classify unknown samples after verifying the differentiation of known-class samples. Based on the determination of 15 elements in 41 samples of bovine semen, 3 machine-learning tools for classification were applied to determine cattle class. Our results demonstrate the potential of the support vector machine (SVM), multilayer perceptron (MLP), and random forest (RF) chemometric tools to identify cattle class. Moreover, the selection tools made it possible to reduce the number of chemical elements needed from 15 to just 8.
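As an illustration of the classification step, the following minimal sketch compares SVM, MLP, and RF models on a matrix of element concentrations; the feature matrix X, labels y, and preprocessing here are assumptions, not the paper's actual pipeline.

# Minimal sketch, assuming X is an (n_samples x 15) matrix of element concentrations
# and y holds the cattle class (Zebu vs. European); data and preprocessing are assumed.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

def compare_classifiers(X, y):
    models = {
        "SVM": SVC(kernel="rbf"),
        "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        pipe = make_pipeline(StandardScaler(), model)   # scale concentrations before fitting
        scores = cross_val_score(pipe, X, y, cv=5)      # 5-fold cross-validated accuracy
        print(f"{name}: mean accuracy = {scores.mean():.2f}")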
Abstract:
Multi-element analysis of honey samples was carried out with the aim of developing a reliable method of tracing the origin of honey. Forty-two chemical elements were determined (Al, Cu, Pb, Zn, Mn, Cd, Tl, Co, Ni, Rb, Ba, Be, Bi, U, V, Fe, Pt, Pd, Te, Hf, Mo, Sn, Sb, P, La, Mg, I, Sm, Tb, Dy, Sd, Th, Pr, Nd, Tm, Yb, Lu, Gd, Ho, Er, Ce, Cr) by inductively coupled plasma mass spectrometry (ICP-MS). Three machine learning tools for classification and two for attribute selection were then applied to show that data mining tools can be used to find the region where the honey originated. Our results clearly demonstrate the potential of the Support Vector Machine (SVM), Multilayer Perceptron (MLP), and Random Forest (RF) chemometric tools for honey origin identification. Moreover, the selection tools allowed a reduction from 42 trace element concentrations to only 5. (C) 2012 Elsevier Ltd. All rights reserved.
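A minimal sketch of the attribute-selection step is shown below, assuming a 42-column matrix of element concentrations and region-of-origin labels; the concrete selection criteria used in the paper may differ.

# Minimal sketch of attribute selection, assuming X (n_samples x 42) holds element
# concentrations, y the region of origin, and element_names the 42 symbols;
# the selection methods shown are illustrative, not necessarily those of the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def select_elements(X, y, element_names, k=5):
    # Univariate ranking (ANOVA F-score) as one possible selection strategy
    selector = SelectKBest(score_func=f_classif, k=k).fit(X, y)
    kept = [name for name, keep in zip(element_names, selector.get_support()) if keep]

    # Alternative ranking based on Random Forest feature importances
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    top_rf = [element_names[i] for i in np.argsort(rf.feature_importances_)[::-1][:k]]
    return kept, top_rf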
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures in order to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for prostate cancer detection intended to improve the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of both an innovative semi-supervised learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training, which improve system performance while reducing the data-collection effort and avoiding waste of collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart-disease diagnostic tool based on Real-Time 3D Echocardiography. The contributions of the thesis to this application are: (i) the development of an automated, GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework, and its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
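The thesis introduces its own semi-supervised algorithm and training scheme; the generic self-training sketch below only illustrates the underlying idea of growing the labeled training set with confidently pseudo-labeled samples, under assumed numpy feature arrays.

# Generic self-training sketch (not the thesis's specific algorithm): X_l/y_l are
# labeled tissue feature vectors, X_u unlabeled ones; all are assumed numpy arrays.
import numpy as np
from sklearn.svm import SVC

def self_training(X_l, y_l, X_u, threshold=0.9, max_rounds=10):
    clf = SVC(probability=True)
    X_train, y_train = X_l.copy(), y_l.copy()
    for _ in range(max_rounds):
        clf.fit(X_train, y_train)
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold          # pseudo-label only confident samples
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, X_u[confident]])      # grow the training set
        y_train = np.concatenate([y_train, pseudo])
        X_u = X_u[~confident]                                # keep the rest for later rounds
    return clf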
Abstract:
Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from large amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and deals with practical problems. Among these, text categorization concerns the automatic organization of large quantities of documents into predefined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizable training set and considerable computational effort. Methods for cross-domain text categorization have been proposed, which allow a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest-centroid classification, in which profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvement, but models generated from one domain can effectively be reused in a different one.
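The iterative centroid-adaptation idea can be sketched as follows, assuming dense, L2-normalized TF-IDF row vectors; the exact update rule and stopping criterion of the method described above may differ.

# Minimal sketch of cross-domain nearest-centroid adaptation; X_src/y_src are labeled
# source-domain documents, X_tgt the unlabeled target-domain documents, all assumed to
# be dense, L2-normalized TF-IDF matrices so that dot products are cosine similarities.
import numpy as np

def adapt_centroids(X_src, y_src, X_tgt, n_iter=10):
    classes = np.unique(y_src)
    # Initial category profiles: per-category centroids from the known (source) domain
    centroids = np.vstack([X_src[y_src == c].mean(axis=0) for c in classes])
    for _ in range(n_iter):
        # Assign each target document to the most similar centroid
        sims = X_tgt @ centroids.T
        labels = sims.argmax(axis=1)
        # Recompute each profile from the target documents currently assigned to it
        centroids = np.vstack([
            X_tgt[labels == k].mean(axis=0) if (labels == k).any() else centroids[k]
            for k in range(len(classes))
        ])
    return classes, centroids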
Abstract:
The purpose of the study was to examine the effect of teacher experience on student progress and performance quality in an introductory applied lesson. Nine experienced teachers and 15 pre-service teachers taught an adult beginner to play ‘Mary Had a Little Lamb’ on a wind instrument. The lessons were videotaped for subsequent analysis of teaching behaviors and performance achievement. Following instruction, a random sample of teachers was interviewed about their perceptions of the lesson. A panel of adjudicators rated final pupil performances. No significant difference was found between pupils taught by experienced and pre-service teachers in the quality of their final performance. Systematic observation of the videotaped lessons showed that participant teachers provided relatively frequent and highly positive reinforcement during the lessons. Pupils of experienced teachers talked significantly more during the lessons than did pupils of pre-service teachers. Pre-service teachers modeled significantly more on their instruments than did experienced teachers.
Abstract:
Many schools do not begin to introduce college students to software engineering until they have had at least one semester of programming. Since software engineering is a large, complex, and abstract subject, it is difficult to construct active learning exercises that build on the students' elementary knowledge of programming and still teach basic software engineering principles. It is also the case that beginning students typically know how to construct small programs, but they have little experience with the techniques necessary to produce reliable and long-term maintainable modules. I have addressed these two concerns by defining a local standard (Montana Tech Method (MTM) Software Development Standard for Small Modules Template) that directs students, step by step, toward the construction of highly reliable small modules using well-known, best-practice software engineering techniques. A “small module” is here defined as a coherent development task that can be unit tested and can be carried out by a single software engineer (or a pair) in at most a few weeks. The standard describes the process to be used and also provides a template for the top-level documentation. The sequence of mini-lectures and exercises associated with the use of this (and other) local standards is used throughout the course, which perforce covers more abstract software engineering material using traditional reading and writing assignments. The sequence of mini-lectures and hands-on assignments (many of which are done in small groups) constitutes an instructional module that can be used in any similar software engineering course.
Abstract:
E-learning platforms are used for the administrative support of teaching and learning processes, offering Internet-based functions for distributing teaching and learning materials and for communication between teachers and learners. Numerous scientific contributions and market studies deal with the multi-criteria evaluation of these software products as an informational basis for strategic investment decisions. By contrast, instruments for the cost-oriented controlling of e-learning platforms are addressed only marginally, if at all. This contribution therefore takes up the concept of Total Cost of Ownership (TCO), which provides a methodological starting point for creating cost transparency for e-learning platforms. Building on the conceptual foundations, problem areas and application potentials for the cost-oriented controlling of LMS are identified. For the software-supported construction and analysis of TCO models, the open-source tool TCO-Tool is introduced and its application is discussed using a synthetic case study. Finally, further development perspectives for the TCO concept in the context of e-learning are identified. The topic presented is not only of theoretical interest, but also addresses the growing need of practitioners in education for instruments that provide an informational basis for investment and divestment decisions related to e-learning.
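As a rough illustration of the TCO concept only (not the TCO-Tool's cost model), the sketch below sums hypothetical direct, indirect, and one-off cost categories for an e-learning platform over a planning horizon.

# Illustrative TCO calculation with hypothetical cost categories and figures;
# it does not reproduce the TCO-Tool or the case study mentioned above.
def total_cost_of_ownership(direct_per_year, indirect_per_year, one_off, years=5):
    """direct/indirect: dicts of yearly cost categories (EUR); one_off: initial costs."""
    yearly = sum(direct_per_year.values()) + sum(indirect_per_year.values())
    return sum(one_off.values()) + years * yearly

tco = total_cost_of_ownership(
    direct_per_year={"hosting": 12000, "licences": 0, "support_staff": 45000},
    indirect_per_year={"end_user_training": 8000, "downtime": 3000},
    one_off={"implementation": 30000, "data_migration": 10000},
)
print(f"5-year TCO: {tco} EUR")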
Abstract:
With the increasing use of e-learning platforms, economic-efficiency considerations are moving to the center of attention, and these require methods for determining system-related costs. However, due to the increasing service orientation and the integration of LMS with existing components of the application architecture, new methods are needed that overcome the deficits of traditional Total Cost of Ownership models. One starting point is offered by the ITIL reference model, which provides a framework for tactical and operational IT services and thus forms the basis for a service-oriented determination of total costs in the form of the Total Cost of Services (TCS).
Abstract:
This paper reports on the FuXML project carried out at the FernUniversität Hagen. FuXML is a Learning Content Management System (LCMS) aimed at providing a practical and efficient solution to the issues of authoring, maintenance, production, and distribution of online and offline distance learning material. The paper presents the environment for which the system was conceived and describes its technical realisation. We discuss the reasons for specific implementation decisions and also address the integration of the system into the organisational and technical infrastructure of the university.
Abstract:
We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactical purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) we suggest measures to shape the context of CSCL applications and support their initial and continued use; (2) we show how log files can be used to analyze how, when, and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data concerning the context of use are available.
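A minimal sketch of such a log file analysis is given below, assuming a hypothetical tab-separated line format of timestamp, user, and action; CommSy's actual log format is not reproduced here.

# Minimal sketch: aggregate who uses the system and when, from a hypothetical
# log format "ISO-timestamp<TAB>user<TAB>action" (one event per line).
from collections import Counter
from datetime import datetime

def usage_profile(path):
    by_user, by_weekday = Counter(), Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            timestamp, user, action = line.rstrip("\n").split("\t")
            when = datetime.fromisoformat(timestamp)
            by_user[user] += 1                     # by whom the system is used
            by_weekday[when.strftime("%A")] += 1   # when it is used
    return by_user, by_weekday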
Abstract:
A multitude of products, systems, approaches, views, and notions characterize the field of e-learning. This article attempts to disentangle the field by using economic and sociological theories, theories of marketing management and strategy, as well as practical experience gained by the author while working with leading-edge suppliers of e-learning. On this basis, a distinction is made between knowledge-creation e-learning and knowledge-transfer e-learning. The various views are divided into four ideal-typical paradigms, each with its own characteristics and limitations. Selecting the right paradigm to use in the development of an e-learning strategy may prove crucial to success. Implications for the development of an e-learning strategy in businesses and educational institutions are outlined.