974 results for Learning Programming Paradigms
Abstract:
Background: Despite its establishment as an essential component of postgraduate and continuing medical education, hardly any courses in evidence-based medicine (EBM) are offered across Europe or Switzerland that teach EBM skills in a targeted way, integrated into everyday clinical practice. Even greater deficits are found in EBM training opportunities for clinical teachers (e.g., senior physicians). As a continuation of an EU-funded, clinically integrated e-learning programme for residents (www.ebm-unity.org), a European group of medical educators developed an e-learning curriculum, aimed specifically at teachers, for conveying EBM within clinical postgraduate training. Methods: The development of the curriculum comprised the following steps: description of learning objectives, identification of clinically relevant learning environments, development of learning content and exemplary didactic strategies tailored to each learning environment, design of web-based self-study sequences with options for self-assessment, and creation of a handbook. Results: The learning objectives of the tutor course are the acquisition of skills for teaching the five classical EBM steps: PICO (Patient-Intervention-Comparison-Outcome) questions, literature search, critical appraisal of the literature, transfer of the results to one's own setting, and implementation. The teaching examples show prospective EBM tutors how typical clinical situations, such as ward rounds, outpatient clinics, journal clubs, formal conferences, audits, or the clinical assessment of residents, can be deliberately used for teaching EBM. Short e-learning modules with exemplary "real-life" video clips allow flexible learning tailored to physicians' limited time budgets. A self-assessment allows learners to check the content they have learned. Piloting of the tutor course with clinically active tutors and translation of the module into further languages are currently in preparation. Conclusion: The modular train-the-trainer course for teaching EBM in everyday clinical practice closes an important gap in the dissemination of clinical EBM. Web-based examples with short sequences demonstrate typical situations for teaching the core EBM skills and offer medical educators such as senior physicians a low-threshold introduction to "EBM" at the bedside. The long-term goal is a European qualification for EBM learning and teaching in continuing and postgraduate education. Once the evaluation is complete, the curriculum will be available to interested individuals and groups under not-for-profit conditions. Information is available from rkunz@uhbs.ch. Funded by the European Commission - Leonardo da Vinci Programme - Transfer of Innovation - Pilot Project for Lifelong Learning 2007 and the Swiss State Secretariat for Education and Research.
Abstract:
This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part in probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
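The core idea of "learning" probabilities from data can be illustrated with conjugate Bayesian updating. The sketch below is a minimal, generic Python example, not the paper's actual procedure or data; the toner-characteristic categories and counts are invented placeholders, and the resulting posterior means are the kind of values one would plug into a Bayesian network's node tables.

```python
# Minimal sketch of Bayesian probability learning from categorical data
# (Dirichlet-multinomial conjugate updating). The toy data is illustrative.
import numpy as np

def dirichlet_posterior_mean(counts, prior_alpha):
    """Posterior mean of category probabilities under a Dirichlet prior."""
    alpha = prior_alpha + counts
    return alpha / alpha.sum()

# Hypothetical counts of three resin types among black-toner samples.
counts = np.array([12, 30, 8])
prior = np.ones(3)            # uniform Dirichlet(1,1,1) prior
probs = dirichlet_posterior_mean(counts, prior)
print(probs)                  # learned probabilities for a network node table
```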
Abstract:
In the future, robots will enter our everyday lives to help us with various tasks. For complete integration and cooperation with humans, these robots need to be able to acquire new skills. Sensor capabilities for navigation in real human environments and intelligent interaction with humans are some of the key challenges. Learning-by-demonstration systems focus on the problem of human-robot interaction and let the human teach the robot by demonstrating the task using his own hands. In this thesis, we present a solution to a subproblem within the learning-by-demonstration field, namely human-robot grasp mapping. Robot grasping of objects in a home or office environment is a challenging problem. Programming-by-demonstration systems can provide important skills for aiding the robot in the grasping task. The thesis presents two techniques for human-robot grasp mapping: direct robot imitation from the human demonstrator, and intelligent grasp imitation. In intelligent grasp mapping, the robot takes the size and shape of the object into consideration, while for direct mapping, only the pose of the human hand is available. These are evaluated in a simulated environment on several robot platforms. The results show that knowing the object shape and size for a grasping task improves the robot's precision and performance.
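To make the distinction between the two mappings concrete, here is a minimal Python sketch; the grasp types, thresholds, and data structures are hypothetical and do not come from the thesis.

```python
# Illustrative contrast between direct and intelligent grasp mapping.
# All names, grasp types, and thresholds are invented placeholders.
from dataclasses import dataclass

@dataclass
class HandPose:
    aperture_mm: float        # distance between thumb and fingertips

@dataclass
class ObjectInfo:
    width_mm: float
    shape: str                # e.g. "box", "cylinder", "sphere"

def direct_mapping(hand: HandPose) -> str:
    """Direct imitation: only the human hand pose is available."""
    return "power_grasp" if hand.aperture_mm > 60 else "precision_grasp"

def intelligent_mapping(hand: HandPose, obj: ObjectInfo) -> str:
    """Intelligent imitation: object size and shape refine the choice."""
    if obj.shape == "sphere" and obj.width_mm < 40:
        return "precision_grasp"
    if obj.width_mm > 80:
        return "power_grasp"
    return direct_mapping(hand)  # fall back to pose-only mapping

print(intelligent_mapping(HandPose(70), ObjectInfo(30, "sphere")))
```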
Abstract:
Classical treatments of problems of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical purposes, may seem unrealistic, opposing empirical data as well as evolutionary arguments. Using stochastic dynamic programming, we develop a model that includes the possibility for searching individuals to learn about the distribution and in particular to update mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both time needed to make a decision and average value of mate obtained. Knowing the variance yields more benefits than knowing the mean, and benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information might lead to erroneous decisions, which confers advantages on the learning strategy. However, time for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help in delineating the ecological-behavior context in which learning strategies may spread.
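A minimal sketch of the updating component follows, assuming a simple online mean/variance estimate and an invented acceptance rule rather than the paper's full stochastic dynamic program.

```python
# Sketch: a searcher updates its estimates of the mean and variance of mate
# quality as candidates are inspected, and accepts a candidate exceeding a
# quantile of the learned distribution. Rule and numbers are illustrative.
import random

def search(qualities, z=1.0, warmup=3):
    n, mean, m2 = 0, 0.0, 0.0
    for q in qualities:
        n += 1
        delta = q - mean            # Welford's online mean/variance update
        mean += delta / n
        m2 += delta * (q - mean)
        if n > warmup:
            sd = (m2 / (n - 1)) ** 0.5
            if q > mean + z * sd:   # accept a well-above-average candidate
                return q, n
    return qualities[-1], n         # forced choice when time runs out

random.seed(1)
print(search([random.gauss(10, 2) for _ in range(50)]))
```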
Abstract:
The future of elections seems to lie in electronic voting systems, due to their advantages over traditional voting. Nowadays, there are several paradigms to ensure the security and reliability of e-voting. This document is part of a wider project which presents an e-voting platform based on elliptic curve cryptography. It uses a hybrid combination of two of the main e-voting paradigms to guarantee privacy and security in the counting phase, namely mixnets and homomorphic protocols. This document focuses on the description of the system and the mathematics and programming needed to solve its homomorphic part. In later chapters, there is a comparison between a simple mixing system and our proposed system.
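The homomorphic-tallying idea can be sketched with exponential ElGamal. The toy below works in a small multiplicative group rather than over an elliptic curve, with insecure parameters chosen purely for illustration; it is not the project's actual scheme.

```python
# Toy additively homomorphic tally via exponential ElGamal. Insecure
# parameters; the same idea transfers to elliptic curve groups.
import random

p, g = 2_147_483_647, 7          # small Mersenne prime and primitive root
x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def encrypt(m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p

def add(c1, c2):                 # homomorphic addition of plaintext votes
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt_tally(c, max_votes):
    gm = (c[1] * pow(c[0], p - 1 - x, p)) % p   # g^m = c2 / c1^x
    for m in range(max_votes + 1):              # small discrete log by search
        if pow(g, m, p) == gm:
            return m

votes = [1, 0, 1, 1, 0]          # 1 = yes, 0 = no
tally = encrypt(votes[0])
for v in votes[1:]:
    tally = add(tally, encrypt(v))
print(decrypt_tally(tally, len(votes)))          # -> 3
```

The key property shown is that ciphertexts can be combined without decrypting any individual ballot; only the aggregate is ever decrypted.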
Abstract:
We present a novel filtering method for multispectral satellite image classification. The proposed method learns a set of spatial filters that maximize the class separability of a binary support vector machine (SVM) through a gradient descent approach. Regularization issues are discussed in detail, and a Frobenius-norm regularization is proposed to efficiently exclude uninformative filter coefficients. Experiments carried out on multiclass one-against-all classification and target detection show the capabilities of the learned spatial filters.
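A simplified stand-in for the learning step, assuming a plain linear filter trained by gradient descent on an SVM hinge loss with a Frobenius (L2) penalty; the data and hyperparameters are synthetic placeholders, not the paper's setup.

```python
# Learn filter coefficients by gradient descent on a hinge loss with a
# Frobenius-norm penalty that shrinks uninformative coefficients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25))           # 200 samples x 25 filter taps
y = np.sign(X[:, :3].sum(axis=1))        # only the first 3 taps matter
W = np.zeros(25)

lr, lam = 0.01, 0.1
for _ in range(500):
    margins = y * (X @ W)
    active = margins < 1                  # samples violating the margin
    grad = -(y[active] @ X[active]) / len(X) + lam * W  # hinge + penalty
    W -= lr * grad

print(np.round(W, 2))                     # informative taps dominate
```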
Abstract:
The explosive growth of the Internet in recent years has been reflected in the ever-increasing diversity and heterogeneity of user preferences, types and features of devices, and access networks. Usually, the heterogeneity in the context of the users who request Web content is not taken into account by the servers that deliver it, implying that this content will not always suit their needs. In the particular case of e-learning platforms this issue is especially critical, since it puts at stake the knowledge acquired by their users. In the following paper we present a system that aims to provide the dotLRN e-learning platform with the capability to adapt to its users' context. By integrating dotLRN with a multi-agent hypermedia system, online courses being undertaken by students, as well as their learning environment, are adapted in real time.
Abstract:
Learning object economies are marketplaces for the sharing and reuse of learning objects (LO). There are many motivations for stimulating the development of the LO economy. The main reason is the possibility of providing the right content, at the right time, to the right learner according to adequate quality standards in the context of a lifelong learning process; in fact, this is also the main objective of education. However, some barriers to the development of a LO economy, such as the granularity and editability of LO, must be overcome. Furthermore, some enablers, such as learning design generation and standards usage, must be promoted in order to enhance the LO economy. For this article, we introduced the integration of distributed learning object repositories (DLOR) as sources of LO that could be placed in adaptive learning designs to assist teachers' design work. Two main issues arose as a result: how to access distributed LO, and where to place the LO in the learning design. To address these issues, we introduced two processes: LORSE, a distributed LO searching process, and LOOK, a micro context-based positioning process. Using these processes, teachers were able to reuse LO from different sources to semi-automatically generate an adaptive learning design without leaving their virtual environment. A layered evaluation yielded good results for the process of placing learning objects from controlled learning object repositories into a learning design, and allowed educators to identify the open issues that must be addressed when uncontrolled learning object repositories are used for this purpose. We also verified users' satisfaction with our solution.
Abstract:
Black-box optimization problems (BBOP) are defined as those optimization problems in which the objective function does not have an algebraic expression, but is the output of a system (usually a computer program). This paper is focused on BBOPs that arise in the field of insurance, and more specifically in reinsurance problems. In this area, the complexity of the models and assumptions considered to define the reinsurance rules and conditions produces hard black-box optimization problems that must be solved in order to obtain the optimal output of the reinsurance. The application of traditional optimization approaches is not possible in BBOP, so new computational paradigms must be applied to solve these problems. In this paper we show the performance of two evolutionary-based techniques (Evolutionary Programming and Particle Swarm Optimization). We provide an analysis of three BBOPs in reinsurance, where the evolutionary-based approaches exhibit excellent behaviour, finding the optimal solution within a fraction of the computational cost used by inspection or enumeration methods.
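As an illustration of the second technique, here is a minimal Particle Swarm Optimization loop in Python; the quadratic objective stands in for the reinsurance simulator, and all hyperparameters are arbitrary choices, not the paper's.

```python
# Minimal PSO sketch for a black-box objective. The objective is a toy
# stand-in: PSO only ever evaluates it, never inspects its form.
import numpy as np

def black_box(x):                     # hypothetical expensive simulator
    return np.sum((x - 3.0) ** 2)

rng = np.random.default_rng(42)
n, dim = 20, 4
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([black_box(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([black_box(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(np.round(gbest, 3))             # should approach [3, 3, 3, 3]
```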
Abstract:
Business process designers take into account the resources that the processes will need, but, due to the variable cost of certain parameters (such as energy) or other circumstances, this scheduling must be done at business process enactment time. In this report we formalize the energy-aware resource cost, including time- and usage-dependent rates. We also present a constraint programming approach and an auction-based approach to solve the problem, including a comparison of the two approaches and of the algorithms proposed for solving them.
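A hedged sketch of how such a formulation might look in a constraint solver, here Google OR-Tools CP-SAT, with invented activities, energy demands, and slot rates; the report's actual model is richer than this.

```python
# Unit-length activities share one resource; each time slot has a different
# energy rate; minimize total energy-aware cost. Data is illustrative.
from ortools.sat.python import cp_model

rates = [5, 3, 8, 2, 6]                    # energy price per time slot
energy = [2, 1, 3]                         # energy use of each activity

model = cp_model.CpModel()
slots = [model.NewIntVar(0, len(rates) - 1, f"slot{i}")
         for i in range(len(energy))]
model.AddAllDifferent(slots)               # at most one activity per slot

costs = []
for i, s in enumerate(slots):
    c = model.NewIntVar(0, max(rates) * energy[i], f"cost{i}")
    model.AddElement(s, [r * energy[i] for r in rates], c)  # rate lookup
    costs.append(c)
model.Minimize(sum(costs))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    print([solver.Value(s) for s in slots], solver.ObjectiveValue())
```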
Abstract:
We propose and validate a multivariate classification algorithm for characterizing changes in human intracranial electroencephalographic data (iEEG) after learning motor sequences. The algorithm is based on a Hidden Markov Model (HMM) that captures spatio-temporal properties of the iEEG at the level of single trials. Continuous iEEG was acquired during two sessions (one before and one after a night of sleep) in two patients with depth electrodes implanted in several brain areas. They performed a visuomotor sequence (serial reaction time task, SRTT) using the fingers of their non-dominant hand. Our results show that the decoding algorithm correctly classified single iEEG trials from the trained sequence as belonging to either the initial training phase (day 1, before sleep) or a later consolidated phase (day 2, after sleep), whereas it failed to do so for trials belonging to a control condition (pseudo-random sequence). Accurate single-trial classification was achieved by taking advantage of the distributed pattern of neural activity. However, across all the contacts, the hippocampus contributed most significantly to the classification accuracy for both patients, as did one fronto-striatal contact for one patient. Together, these human intracranial findings demonstrate that a multivariate decoding approach can detect learning-related changes at the level of single-trial iEEG. Because it allows an unbiased identification of the brain sites contributing to a behavioral effect (or experimental condition) at the level of a single subject, this approach could be usefully applied to assess the neural correlates of other complex cognitive functions in patients implanted with multiple electrodes.
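Schematically, such a decoding approach amounts to fitting one HMM per condition and labelling a trial by which model assigns it the higher likelihood. The sketch below uses the hmmlearn library on synthetic data; channel counts, state numbers, and trial lengths are placeholders, not the study's parameters.

```python
# Fit one Gaussian HMM per condition; classify held-out trials by likelihood.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
T, C = 50, 8                                  # time points, iEEG channels

def make_trials(shift, n=20):                 # synthetic single trials
    return [rng.normal(shift, 1.0, (T, C)) for _ in range(n)]

day1, day2 = make_trials(0.0), make_trials(0.8)

def fit(trials):
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(np.vstack(trials), lengths=[T] * len(trials))
    return m

m1, m2 = fit(day1), fit(day2)
test = make_trials(0.8, n=5)                  # held-out "day 2" trials
pred = ["day2" if m2.score(t) > m1.score(t) else "day1" for t in test]
print(pred)
```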
Analysis and evaluation of techniques for the extraction of classes in the ontology learning process
Abstract:
This paper analyzes and evaluates, in the context of ontology learning, some techniques to identify and extract terms that are candidates for classes of a taxonomy. In addition, this work points out some inconsistencies that may occur in the preprocessing of the text corpus, and proposes techniques to obtain good candidate terms for taxonomy classes.
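One simple technique of this kind, frequency-based candidate selection after stopword filtering, can be sketched as follows; the corpus, stopword list, and threshold are invented for illustration and are not the paper's setup.

```python
# Toy class-candidate extraction: keep frequent non-stopword terms.
from collections import Counter
import re

corpus = [
    "The vehicle taxonomy contains car, truck and motorcycle classes.",
    "A car is a vehicle; a truck is a vehicle used for cargo.",
]
stopwords = {"the", "a", "is", "and", "for", "used", "contains"}

tokens = [t for doc in corpus
          for t in re.findall(r"[a-z]+", doc.lower())
          if t not in stopwords]
freq = Counter(tokens)
candidates = [term for term, n in freq.most_common() if n >= 2]
print(candidates)        # frequent terms proposed as taxonomy classes
```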
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm presented here relies on the simple fact that this highest scoring gene can be stored and updated as the scan proceeds. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
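The store-and-update idea can be sketched as follows, assuming single-strand exons with additive scores and omitting reading frames and the Gene Model, so only the linear-time core survives; each exon is touched a constant number of times across the two sorted scans.

```python
# Linear-time (after sorting) best-gene assembly: scan exons by acceptor
# while a running best-prefix score is folded in by donor position.
def assemble(exons):
    """exons: list of (acceptor, donor, score) with acceptor < donor."""
    by_acc = sorted(exons)                          # scan by acceptor
    by_don = sorted(exons, key=lambda e: e[1])      # scan by donor
    gene = {}                                       # best gene ending at exon
    best_prefix, j = 0.0, 0
    for acc, don, score in by_acc:
        # fold in genes ending at exons whose donor lies before this acceptor
        while j < len(by_don) and by_don[j][1] < acc:
            best_prefix = max(best_prefix, gene[by_don[j]])
            j += 1
        gene[(acc, don, score)] = score + best_prefix
    return max(gene.values(), default=0.0)

exons = [(10, 50, 2.0), (60, 90, 1.5), (55, 120, 3.0), (130, 160, 1.0)]
print(assemble(exons))   # 10-50 + 55-120 + 130-160 -> 6.0
```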