804 results for Computational learning theory
Abstract:
The use of case studies can be regarded as an important link between theory and practice. Case studies enable the application of foundational theoretical knowledge and the development of interdisciplinary competencies. They can thus make an important contribution to professional competence precisely where practical experience is not possible within initial and continuing education. For this reason, the use of case studies should not be reserved for the "classical" applied disciplines such as law, business administration or psychology; in computer science, too, they can be an important complement to the methods used so far. The concept presented here, developed in the context of the New Economy project for the didactic and technical preparation of case studies, using IT education and training as an example, is intended to stimulate this discussion. The presented approach makes it possible to automatically generate different methodological approaches to a case study for computer-based presentation and to link them with subject content. This provides a decisive added value over the previous static and self-contained presentations. The resulting leap in quality in the use of case studies in university and in-company education and training is an important contribution to the practice-oriented design of blended learning approaches. (DIPF/Orig.)
Abstract:
The acoustics learning CD was developed with the aim of promoting the practical application of theoretical knowledge about dynamics processors (Regelverstärker). After the theory lessons, students were able to draw envelope curves and calculate compression ratios, but had difficulty selecting the correct dynamics processor in concrete situations, for example when instruments are overdriven. To achieve better knowledge transfer, the learning CD offers the learner situations in which they can construct solutions of their own and learn interactively and in context. (DIPF/Orig.)
Abstract:
This work aims to help high school students develop their mathematical and geometrical knowledge through the use of geometric constructions as a teaching resource in mathematics classes. First, a literature review was carried out to understand how the field of geometry and of geometric constructions emerged and evolved. The ways in which geometry has been taught in our country were also examined through the literature review, along with theories related to learning, in particular the Van Hiele theory, which deals with geometric learning. Two ways of approaching geometric constructions in class are analysed: drawing with hand tools (ruler and compass) and using a computational tool (geometry software); we chose the approach using ruler and compass. A workshop with nine geometric construction activities is proposed, which was carried out with a 3rd-year high school class at the Escola de Educação Básica Professor Anacleto Damiani in the city of Abelardo Luz, Santa Catarina. Each workshop activity includes the following items: activity goals, activity sheet, construction steps, activity background, and activity solution. After the workshop, the data were analysed through content analysis according to three categories: drawing instruments, angles and their implications, and parallels and their implications. It was observed that most of the students achieved the research objectives and developed their mathematical and geometrical knowledge, as perceived through the analysis of questionnaires administered to the students, audio recordings, observations made during the workshop and, especially, the students' improvement over the course of the proposed activities.
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collecting object image datasets from web pages, using an analysis of the text around each image and of the image appearance. The method exploits established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech datasets for images), which provide rich text and object appearance information. This dissertation reports results on two datasets. The first is Berg's collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples; the text feature, in contrast, may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. The feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. As more and more training data are collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in sequence, so memory cost is no longer the bottleneck when processing large-scale datasets. This dissertation applies this approach to train classifiers for Flickr groups, each with many training examples. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better for image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of object categories which are similar to the target category and a set which are dissimilar, a good object model should respond more strongly to examples from the similar categories than to examples from the dissimilar ones. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that the method yields significant improvements for categories with few or even no positive examples.
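A rough sketch of the tag-based text feature described above might look as follows; the neighbour search, vocabulary handling and normalisation below are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_tag_feature(query_visual_feat, aux_visual_feats, aux_tag_lists, vocabulary, k=5):
    """Build a text feature for an unannotated image: a histogram of the tags
    attached to its k visually nearest neighbours in an auxiliary tagged collection.
    (Illustrative sketch only; distance metric and weighting are assumptions.)"""
    tag_index = {tag: i for i, tag in enumerate(vocabulary)}
    nn = NearestNeighbors(n_neighbors=k).fit(aux_visual_feats)
    _, idx = nn.kneighbors(np.asarray(query_visual_feat).reshape(1, -1))

    feature = np.zeros(len(vocabulary))
    for neighbour in idx[0]:
        for tag in aux_tag_lists[neighbour]:
            if tag in tag_index:              # ignore out-of-vocabulary tags
                feature[tag_index[tag]] += 1.0
    total = feature.sum()
    # normalise so densely and sparsely tagged neighbourhoods are comparable
    return feature / total if total > 0 else feature
```

The resulting histogram could then be concatenated with conventional visual features before training a classifier, which matches the abstract's observation that the text feature helps most when visual training data are scarce.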
Abstract:
Steam turbines play a significant role in global power generation. Research on low-pressure (LP) steam turbine stages is of particular importance for steam turbine manufacturers, vendors, power plant owners and the scientific community because their efficiency is lower than that of the high-pressure stages. Because of condensation, the last stages of the LP turbine experience irreversible thermodynamic losses, aerodynamic losses and erosion of the turbine blades. Additionally, an LP steam turbine requires maintenance due to moisture generation, which also affects turbine reliability. The design of energy-efficient LP steam turbines therefore requires a comprehensive analysis, by experiment or by numerical simulation, of the condensation phenomena and the corresponding losses occurring in the turbine. The aim of the present work is to apply computational fluid dynamics (CFD) to enhance the existing knowledge and understanding of condensing steam flows and of the loss mechanisms that arise from irreversible heat and mass transfer during condensation in an LP steam turbine. Throughout this work, two commercial CFD codes were used to model non-equilibrium condensing steam flows. The Eulerian-Eulerian approach was utilised, in which the mixture of vapour and liquid phases was solved with the Reynolds-averaged Navier-Stokes equations. The nucleation process was modelled with classical nucleation theory, and two different droplet growth models were used to predict the droplet growth rate. The flow turbulence was solved by employing the standard k-ε and the shear stress transport k-ω turbulence models; both models were modified and implemented in the CFD codes. The thermodynamic properties of the vapour and liquid phases were evaluated with real gas models. In this thesis, several topics, namely the influence of real gas properties, turbulence modelling, unsteadiness and the blade trailing edge shape on wet-steam flows, are studied with different convergent-divergent nozzles, a turbine stator cascade and a 3D turbine stator-rotor stage. The simulated results were evaluated and discussed together with the experimental data available in the literature. The grid independence study revealed that an adequate grid size is required to capture the correct trends of condensation phenomena in LP turbine flows. The study shows that accurate real gas properties are important for the precise modelling of non-equilibrium condensing steam flows. The turbulence study revealed that the flow expansion, and subsequently the rate of formation of liquid droplet nuclei and their growth, were affected by the turbulence model, and that the losses were rather sensitive to the turbulence modelling as well. Based on the presented results, the correct computational prediction of wet-steam flows in the LP turbine requires the turbulence to be modelled accurately. The trailing edge shape of the LP turbine blades influenced the liquid droplet formation, distribution and sizes, and the loss generation. The study shows that the semicircular trailing edge shape predicted the smallest droplet sizes, while the square trailing edge shape predicted greater losses. The analysis of steady and unsteady calculations of wet-steam flow showed that, in the unsteady simulations, the interaction of wakes in the rotor blade row affected the flow field.
The flow unsteadiness influenced the nucleation and droplet growth processes due to the fluctuation in the Wilson point.
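As background for the nucleation model mentioned above, classical nucleation theory is commonly written in the following standard form in the wet-steam literature; the notation below is the textbook one and is assumed here rather than taken from the thesis (non-isothermal correction factors are often applied in practice):

```latex
J \;=\; q_c\,\frac{\rho_v^{2}}{\rho_l}\,\sqrt{\frac{2\sigma}{\pi m^{3}}}\;
        \exp\!\left(-\frac{4\pi r^{*2}\sigma}{3 k_B T}\right),
\qquad
r^{*} \;=\; \frac{2\sigma}{\rho_l R T \ln S},
```

where J is the nucleation rate per unit volume, q_c a condensation coefficient, ρ_v and ρ_l the vapour and liquid densities, σ the liquid surface tension, m the mass of a vapour molecule, k_B Boltzmann's constant, T the vapour temperature, R the specific gas constant, S the supersaturation ratio and r* the critical (Kelvin) droplet radius.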
Abstract:
One challenge for data assimilation (DA) methods is how the error covariance of the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use concepts from control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied, which avoids the memory storage and huge matrix inversions needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble and variational methods. It avoids the filter inbreeding problems that emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the model state vector of dimension 30 171, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines that handle input and output. Apart from being simple to couple, the approach can be employed even if the two are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² and an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009, and the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, due to the sparsity of the TSM data in both time and space, a close match could not be achieved.
The use of multiple automatic stations with real-time data is important to alleviate the time-sparsity problem; combined with DA, this would, for instance, help in better understanding environmental hazard variables. We found that using a very large ensemble does not necessarily improve the results, because beyond a certain size additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and this performance limit on ensemble size point towards the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, it may yield a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
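A minimal sketch of the file-based, non-intrusive coupling described above is given below, assuming a hypothetical model executable and plain-text state files; it is not the actual interface used with COHERENS or the shallow water code, and the DA update is supplied by the caller.

```python
import subprocess
import numpy as np

def run_assimilation_cycles(initial_ensemble, observations_per_cycle, da_update,
                            model_exe="./model"):
    """Non-intrusive coupling sketch: the model is an external executable that reads a
    state file and writes the propagated state to another file; the DA step runs in
    between. Executable name and file format are hypothetical placeholders."""
    ensemble = [np.asarray(member, dtype=float) for member in initial_ensemble]
    for observations in observations_per_cycle:
        forecasts = []
        for i, state in enumerate(ensemble):
            np.savetxt(f"state_in_{i}.dat", state)
            # the model code itself stays untouched: it only needs a few lines of I/O
            subprocess.run([model_exe, f"state_in_{i}.dat", f"state_out_{i}.dat"],
                           check=True)
            forecasts.append(np.loadtxt(f"state_out_{i}.dat"))
        # analysis step, e.g. a VEnKF-style resampling update provided by the caller
        ensemble = list(da_update(np.array(forecasts), observations))
    return ensemble
```

Because communication goes through files rather than linked code, the same control loop works regardless of the language the model is written in, and the member runs can be dispatched in parallel before the DA step is invoked.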
Abstract:
The notebook seminar was an m-learning scenario intended to improve teaching by integrating the notebook into instruction. Its methodological focus is project-oriented learning, which aims to convey not only subject content but also interdisciplinary competencies. On the basis of a project assignment spanning one semester, students complete specific learning activities. These learning activities bundle specific learning objectives, divided into subject, methodological and social competence. The paper examines aspects of m-learning and, against this background, designs the notebook seminar. It then presents the concept as put into practice and discusses the experiences gained as well as the results of the evaluation. (DIPF/Orig.)
Abstract:
The aim of this study was to model the process of developing an Online Learning Resource (OLR) by Health Care Professionals (HCPs) to meet lymphoedema-related educational needs, within an asset-based management context. Previous research has shown that HCPs have unmet educational needs in relation to lymphoedema, but details on their specific nature or context were lacking. Against this background, the study was conducted in two distinct but complementary phases. In Phase 1, a national survey of HCPs, predominantly in community, oncology and palliative care services, was conducted, followed by focus group discussions with a sample of respondents. In Phase 2, lymphoedema specialists (LSs) used an action research approach to design and implement an OLR to meet the needs identified in Phase 1. Study findings were analysed using descriptive statistics (Phase 1), and framework, thematic and dialectic analysis, to explore their potential to inform future service development and education theory. Unmet educational need was found to be specific to health care setting and professional group, leaving HCPs feeling poorly equipped to diagnose and manage lymphoedema. Of concern, when identified, lymphoedema was sometimes buried for fear of overwhelming stretched services. An OLR was identified as a means of addressing the unmet educational needs, and was successfully developed and implemented with minimal additional resources. The process model created has the potential to inform contemporary leadership theory in asset-based management contexts. This doctoral research makes a timely contribution to leadership theory, since the resource constraints underpinning much of the contribution have salience for current public services. Further study of a leadership style which incorporates cognisance of Cognitive Load Theory and Self-Determination Theory is suggested. In addition, the detailed reporting of the process, and of how it facilitated learning for participants, contributes to workplace education theory.
Abstract:
High expectations are attached to the use of new media, especially the Internet, in teaching and learning, even after the initial euphoria has subsided. New, pedagogically interesting possibilities of use, but also economic interests, play an essential role here. The associated questions (e.g., about assuring quality and guaranteeing profitability) lead to growing interest in metadata for describing telematic teaching and learning materials (on the term "telematic" cf. Zimmer, 1997, p. 111). The following contribution deals with expectations and difficulties in developing and using pedagogical metadata. Following a brief general account of the function of metadata, proposals by various bodies for defining pedagogical metadata are used to show which problems arise in identifying, naming and implementing them: questions arise, for example, about intercultural transferability, about the differing perspectives of content providers and learners, and about the fundamental possibility of standardizing pedagogical categories at all. Using the practical example of the Virtuelle Fachhochschule für Technik, Informatik und Wirtschaft, project-typical development stages of (pedagogical) metadata are presented. Suggestions for solving the problems described and an outlook with research questions conclude the contribution. (DIPF/Orig.)
Abstract:
In this paper we envision didactical concepts for university education based on self-responsible and project-based learning and outline principles of adequate technical support. We use the scenario technique, describing how a fictional student named Anna organizes her informatics studies at a fictional university, from the first days of her studies onward, in order to build a career for herself. (DIPF/Orig.)
Abstract:
In most e-learning scenarios, communication and on-line collaboration are seen as add-on features to resource-based learning. This paper endeavours to present a pedagogical framework for inverting this view and establishing communities of practice as the basic paradigm for e-learning. It presents an approach currently being used in the development of a virtual radiopharmacy community, called VirRAD, and discusses how theory can lead to an instructional design approach that supports technologically enhanced learning. (DIPF/Orig.)
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. Considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented: with linear objectives we can encourage sparse scalings, and with barrier objective functions we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to particular frame classes to add more specificity to the results. Using frames generated from distributions allows probabilistic bounds to be placed on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case; after a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for processing EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also a discussion of the differences from RPCA that make theoretical guarantees difficult.
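As background for the scalability results summarised above, the standard definition of a scalable frame (not one of the dissertation's novel results) can be stated as follows: a frame is scalable when nonnegative weights turn it into a Parseval frame.

```latex
\{\varphi_i\}_{i=1}^{N}\subset\mathbb{R}^{n}\ \text{is scalable if}\quad
\exists\, c_1,\dots,c_N\ge 0 \ \text{such that}\quad
\sum_{i=1}^{N} c_i^{2}\,\varphi_i\varphi_i^{\top} \;=\; I_n .
```

The optimization problems discussed in the abstract (linear objectives encouraging sparse scalings, barrier objectives forcing dense ones) are then searches over these weights c_i.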
Abstract:
Transition processes in higher education are characterized by new learning situations which pose challenges to most students. This chapter explores the heterogeneity of reactions to these challenges from the perspective of regulation processes. The Integrated Model of Learning and Action is used to identify different patterns of motivational regulation among students at university using mixed distribution models. Six subpopulations of motivational regulation could be identified: students with self-determined, pragmatic, strategic, negative, anxious and insecure learning motivation. Findings about these patterns can be used to design didactic measures that support students' learning processes.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism for dealing with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others; he can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance of each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning amounts to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test beds. It is envisaged that once the concept has been proven successful, it will be implemented in general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
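A minimal sketch of the three-step LCS strength update described above is given below. The rule-string representation, the fitness function, the choice to reinforce the rules of the best solution found so far, and all parameter values are assumptions made for illustration, not the authors' actual design.

```python
import random

def roulette_select(strengths):
    """Pick one rule index with probability proportional to its current strength."""
    total = sum(strengths)
    pick, running = random.uniform(0, total), 0.0
    for i, s in enumerate(strengths):
        running += s
        if running >= pick:
            return i
    return len(strengths) - 1

def lcs_schedule(n_stages, n_rules, build_and_score, generations=100,
                 init_strength=1.0, reward=0.1):
    """Sketch of the three LCS steps from the abstract: (1) constant initial strengths,
    (2) roulette-wheel rule selection at every construction stage,
    (3) reinforce the strengths of the rules used in the retained solution,
    leaving unused rules unchanged."""
    strengths = [[init_strength] * n_rules for _ in range(n_stages)]
    best_rules, best_fitness = None, float("-inf")
    for _ in range(generations):
        # build one rule string by selecting a rule for every construction stage
        rules = [roulette_select(strengths[stage]) for stage in range(n_stages)]
        fitness = build_and_score(rules)   # caller decodes the rule string into a schedule
        if fitness > best_fitness:
            best_rules, best_fitness = rules, fitness
        # reinforcement: reward the rules used in the currently retained solution
        for stage, rule in enumerate(best_rules):
            strengths[stage][rule] += reward
    return best_rules, best_fitness
```

In a combined scheme, a loop of this kind could act as the hill climber mentioned above, refining rule strings proposed by the BOA model.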
Abstract:
Computational intelligence support for decision making is becoming increasingly popular and essential among medical professionals. Moreover, with modern medical devices capable of communicating with ICT, the models created can easily find practical translation into software. Machine learning solutions for medicine range from the robust but opaque paradigms of support vector machines and neural networks to the similarly performant, yet more comprehensible, decision trees and rule-based models. So how can such different techniques be combined so that the professional obtains the whole spectrum of their particular advantages? The approaches presented here were conceived for various medical problems, while permanently bearing in mind the balance between good accuracy and an understandable interpretation of the decision, in order to truly establish a trustworthy 'artificial' second opinion for the medical expert.