743 results for blended learning methods
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
This paper investigates how to improve action selection for online policy learning in robotic scenarios using reinforcement learning (RL) algorithms. Since finding control policies with any RL algorithm can be very time-consuming, we propose combining RL algorithms with heuristic functions that select promising actions during the learning process. With this aim, we investigate the use of heuristics for increasing the rate of convergence of RL algorithms and contribute a new learning algorithm, Heuristically Accelerated Q-learning (HAQL), which incorporates heuristics for action selection into the Q-learning algorithm. Experimental results on robot navigation show that even very simple heuristic functions yield significant improvements in the learning rate.
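The core idea above can be sketched in a few lines: action selection considers the learned Q-values plus a heuristic bonus. This is a minimal illustration on an invented 1-D corridor task, with a made-up "prefer moving right" heuristic and arbitrary parameters; it is not the paper's actual experimental setup.

```python
import random

random.seed(0)
N = 10                       # corridor states 0..9; goal at state 9
actions = [-1, +1]           # move left / right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
# Illustrative heuristic: prefer moving toward the goal (assumed, not from the paper)
H = {(s, a): (1.0 if a == +1 else 0.0) for s in range(N) for a in actions}
alpha, gamma, xi, eps = 0.5, 0.9, 1.0, 0.1

def select_action(s):
    if random.random() < eps:                    # epsilon-greedy exploration
        return random.choice(actions)
    # Heuristically accelerated selection: argmax over Q(s,a) + xi * H(s,a)
    return max(actions, key=lambda a: Q[(s, a)] + xi * H[(s, a)])

for episode in range(200):
    s = 0
    while s != N - 1:
        a = select_action(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0          # reward only at the goal
        # Standard Q-learning update; the heuristic only biases action choice
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

print(Q[(8, +1)] > Q[(8, -1)])  # True: moving right near the goal is learned to be better
```

Note that the heuristic only shapes exploration; the Q-update itself is unchanged, so convergence guarantees of Q-learning are preserved.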
Abstract:
Our AUTC Biotechnology study (Phases 1 and 2) identified a range of areas that could benefit from a common approach by universities nationally. A national network of biotechnology educators needs to be solidified through more regular communication, biennial meetings, and the development of methods for sharing effective teaching practices and industry placement strategies, for example. Our aims in this proposed study are to: a. Revisit the state of undergraduate biotechnology degree programs nationally to determine their rate of change in content, growth or shrinkage in student numbers (as the biotech industry has had its ups and downs in recent years), and sustainability within their institutions in light of career movements of key personnel, tightening budgets, and governmental funding priorities. b. Explore the feasibility of a range of initiatives to benefit university biotechnology education, determining factors such as how practical each one is, how much buy-in could be gained from potentially participating universities and industry counterparts, and how sustainable such efforts are. One such initiative arising from our AUTC Biotech study was a national register of industry placements for final-year students. c. During the scoping and feasibility study, involve our colleagues who teach in biotechnology and its contributing disciplines. Their involvement is meant not only to yield meaningful insight into how to strengthen biotechnology teaching and learning but also to generate ‘buy-in’ for any initiatives that result from this effort.
Abstract:
Purpose. To conduct a controlled trial of traditional and problem-based learning (PBL) methods of teaching epidemiology. Method. All second-year medical students (n = 136) at The University of Western Australia Medical School were offered the chance to participate in a randomized controlled trial of teaching methods for an epidemiology course. Students who consented to participate (n = 80) were randomly assigned to either a PBL or a traditional course. Students who did not consent or did not return the consent form (n = 56) were assigned to the traditional course. Students in both streams took identical quizzes and exams. These scores, a collection of semi-quantitative feedback from all students, and a qualitative analysis of interviews with a convenience sample of six students from each stream were compared. Results. There was no significant difference in performance on quizzes or exams between PBL and traditional students. Students using PBL reported a stronger grasp of epidemiologic principles, enjoyed working with a group, and, at the end of the course, were more enthusiastic about epidemiology and its professional relevance to them than were students in the traditional course. PBL students worked more steadily during the semester but spent only marginally more time on the epidemiology course overall. Interviews corroborated these findings. Non-consenting students were older (p < 0.02) and more likely to come from non-English-speaking backgrounds (p < 0.005). Conclusions. PBL provides an academically equivalent but personally far richer learning experience. The adoption of PBL approaches to medical education makes it important to study whether PBL presents particular challenges for students whose first language is not the language of instruction.
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
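The technique described above can be sketched directly: record the network's weight vector at each training step, then project the high-dimensional trajectory onto its first two principal components via PCA. The toy model below (a single-layer logistic unit trained by gradient descent on synthetic data) and all parameters are assumptions for illustration, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)       # simple separable toy target

w = np.zeros(20)
trajectory = []
for step in range(200):                          # gradient descent on cross-entropy loss
    p = 1.0 / (1.0 + np.exp(-X @ w))             # sigmoid output
    w -= 0.1 * (X.T @ (p - y)) / len(y)          # gradient step
    trajectory.append(w.copy())                  # snapshot the weight vector

T = np.array(trajectory)                         # (steps, 20): the learning trajectory
T_centered = T - T.mean(axis=0)
# PCA via SVD: rows of Vt are the principal directions of the trajectory
U, S, Vt = np.linalg.svd(T_centered, full_matrices=False)
proj = T_centered @ Vt[:2].T                     # trajectory in the 2-D PCA plane

var_explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(proj.shape, round(var_explained, 3))
```

The 2-D `proj` array is what one would plot; a large `var_explained` indicates the learning trajectory is effectively low-dimensional, which is what makes this visualization informative.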
Abstract:
Following the application of the remember/know paradigm to student learning by Conway et al. (1997), this study examined changes in learning and memory awareness of university students in a lecture course and a research methods course. The proposed shift from a dominance of 'remember' awareness in early learning to a dominance of 'know' awareness as learning progresses and schematization occurs was evident for the methods course but not for the lecture course. The patterns of remember and know awareness and proposed associated levels of schematization were supported by a separate measure of the quality of student learning using the SOLO (Structure of Observed Learning Outcomes) Taxonomy. As found by previous research, the remember-to-know shift and schematization of knowledge is dependent upon type of course and level of achievement. Findings are discussed in terms of the utility of the methodology used, the theoretical implications and the applications to educational practice. Copyright (C) 2001 John Wiley & Sons, Ltd.
Abstract:
The blending of coals has become popular as a way to improve the performance of coals, to meet the specifications of power plants, and to reduce the cost of coals. This article reviews prior results and provides new information on ignition, flame stability, and carbon burnout studies of blended coals. The reviewed studies were conducted in laboratory-, pilot-, and full-scale facilities; the new information was obtained from pilot-scale studies. The results generally show that blending a high-volatile coal with a low-volatile coal or anthracite can improve the ignition, flame stability, and burnout of the blends. This paper discusses two general methods to predict the performance of blended coals: (1) experiment; and (2) indices. Laboratory- and pilot-scale tests provide, at least, a relative ranking of the combustion performance of coals and blends in power station boilers. Several indices, such as volatile matter content, heating value, and a maceral index, can be used to predict the relative ranking of ignitability and flame stability of coals and blends. The maceral index, fuel ratio, and vitrinite reflectance can also be used to predict the absolute carbon burnout of coals and blends within limits. (C) 2000 Elsevier Science Ltd. All rights reserved.
Abstract:
The long short-term memory (LSTM) is not the only neural network that learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) are able to induce, from a finite fragment of a context-sensitive language, the means to process strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
Abstract:
This article jointly examines laboratory versions of the Dutch clock open auction, a sealed-bid auction representing book building, and a two-stage sealed-bid auction serving as a proxy for the “competitive IPO”, a recent innovation used in a few European equity initial public offerings. We investigate pricing efficiency, seller allocation efficiency, and buyer welfare allocation efficiency, and conclude that the book-building emulation seems to be as price efficient as the Dutch auction, even after investor learning, whereas the competitive IPO is not price efficient, regardless of learning. The competitive IPO is the most seller-allocative-efficient method because it maximizes offer proceeds. The Dutch auction emerges as the most buyer-welfare-allocative-efficient method. Underwriters are probably seeking pricing efficiency rather than seller or buyer welfare allocative efficiency, and their discretionary pricing and allocation must be important, since book building is prominent worldwide.
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data in which geometric approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We resort instead to a statistical framework in which the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
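The linear mixing model with Dirichlet-distributed abundances that the abstract assumes can be made concrete with a small simulation. The endmember signatures, Dirichlet parameters, and noise level below are invented for illustration; the point is that sampling abundances from a Dirichlet density enforces the non-negativity and constant-sum constraints by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, endmembers, pixels = 50, 3, 1000
# Columns of M would be endmember signatures; values here are arbitrary
M = rng.uniform(0.1, 1.0, size=(bands, endmembers))
# Abundance fractions: each row is >= 0 and sums to 1 (Dirichlet support)
A = rng.dirichlet(alpha=[2.0, 1.0, 0.5], size=pixels)
# Observed spectra: linear mixture plus small sensor noise
Y = A @ M.T + 0.01 * rng.normal(size=(pixels, bands))

print(np.allclose(A.sum(axis=1), 1.0), (A >= 0).all())  # True True: constraints hold
```

An unmixing algorithm such as DECA works in the opposite direction: given only `Y`, it estimates `M` and `A` under these constraints.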
Abstract:
The automatic organization of e-mail messages is a current challenge in machine learning. The excessive number of messages affects more and more users, especially those who use e-mail as a communication and work tool. This thesis addresses the problem of automatic e-mail organization, proposing a solution aimed at the automatic labeling of messages. Automatic labeling uses the e-mail folders previously created by users, treating them as labels, and suggests multiple labels for each message (top-N). Several learning techniques are studied, and the various fields that compose an e-mail message are analyzed to determine their suitability as classification features. The focus of this work is on the textual fields (the subject and body of the messages), for which different representations, feature selection techniques, and classification algorithms are studied. The participant fields are also evaluated, using classification algorithms that represent them either with the vector space model or as a graph. The various fields are combined for classification using the Majority Voting classifier-combination technique. Tests are performed on a subset of Enron e-mail messages and on a private dataset provided by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). These datasets are analyzed to understand the characteristics of the data. The system is evaluated by classifier accuracy. The results show significant improvements over related work.
Abstract:
World Congress on Computer Science, Engineering and Technology Education, March 19-22, 2006, São Paulo, Brazil
Abstract:
Electricity markets are complex environments, involving a large number of different entities playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator that models market players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their decisions and interacting with other players. MASCEM provides several dynamic strategies for agents’ behaviour. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains from the market. This method uses an auxiliary forecasting tool, e.g. an artificial neural network, to predict electricity market prices, and analyses its forecasting error patterns. By recognizing the occurrence of such patterns, the method predicts the expected error of the next forecast and uses it to adapt the actual forecast. The goal is to bring the forecast closer to the real value, reducing the forecasting error.
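The adaptation step can be illustrated under a deliberately simple assumption: the expected error of the next forecast is estimated as the mean of the last k forecasting errors, and the raw forecast is shifted by that amount. The real method recognizes richer error patterns; the function name and the numbers below are invented for illustration.

```python
def adapt_forecast(raw_forecast, past_real, past_forecast, k=3):
    """Shift a raw forecast by the mean of the k most recent forecasting errors."""
    errors = [r - f for r, f in zip(past_real, past_forecast)]
    expected_error = sum(errors[-k:]) / min(k, len(errors))
    return raw_forecast + expected_error     # nudge the forecast toward the real value

# Toy history: the forecasting tool systematically under-forecasts prices
real     = [50.0, 52.0, 51.0, 53.0]
forecast = [48.0, 50.5, 49.0, 51.5]
adjusted = adapt_forecast(55.0, real, forecast, k=3)
print(adjusted)  # 55.0 shifted up by the mean of the last 3 errors (5/3 ≈ 1.667)
```

If the tool's errors show a persistent bias, as in this toy history, the correction moves the forecast toward the real value; with unbiased errors the correction averages out to roughly zero.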
Abstract:
The very particular characteristics of electricity markets require deep studies of the interactions between the involved players. MASCEM is a market simulator developed to allow the study of electricity market negotiations. This paper presents a new proposal for the definition of MASCEM players’ strategies for negotiating in the market. The proposed methodology is implemented as a multiagent system, using reinforcement learning algorithms to give players the capability to perceive changes in the environment while adapting their bid formulation according to their needs, using a set of different techniques at their disposal. This paper also presents a methodology to define player models based on the history of their past actions, interpreting how their choices are affected by past experience and competition.
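The flavor of reinforcement-learning-based bid adaptation can be sketched with a simple action-value learner choosing among candidate bid prices. The market model (a fluctuating clearing price), the prices, and the cost below are all invented for illustration and are not MASCEM's actual mechanics.

```python
import random

random.seed(0)
bids = [30.0, 40.0, 50.0, 60.0]         # candidate bid prices (illustrative units)
value = {b: 0.0 for b in bids}           # estimated average profit of each bid
counts = {b: 0 for b in bids}

def market_profit(bid):
    """Toy market: bids at or below the clearing price sell at that price."""
    clearing = random.gauss(45.0, 5.0)   # unknown, fluctuating clearing price
    return clearing - 20.0 if bid <= clearing else 0.0   # assumed production cost 20

for t in range(2000):
    if random.random() < 0.1:            # explore other bids occasionally
        b = random.choice(bids)
    else:                                # exploit the current best estimate
        b = max(bids, key=lambda x: value[x])
    r = market_profit(b)
    counts[b] += 1
    value[b] += (r - value[b]) / counts[b]   # incremental mean update

best = max(bids, key=lambda x: value[x])
print(best)
```

In this toy setting, low bids almost always clear and collect the full margin, so the learner's estimates converge toward preferring them; a real player would condition on richer market state, as the paper's methodology does.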
Abstract:
Introduction: A major focus of the data mining process - and of machine learning research in particular - is to automatically learn to recognize complex patterns and support adequate decisions based strictly on the acquired data. Since imaging techniques such as MPI (Myocardial Perfusion Imaging in Nuclear Cardiology) can account for a large part of the daily workflow and generate gigabytes of data, computerized analysis of data could have advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: This study evaluates the efficacy of this methodology for the evaluation of MPI stress studies and for the decision of whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify a patient test into one of three groups: “Positive”, “Negative” and “Indeterminate”. “Positive” tests would proceed directly to the Rest part of the exam, “Negative” tests would be exempted from continuation, and only the “Indeterminate” group would require clinician analysis, thus saving clinician effort, increasing workflow fluidity at the technologist’s level, and probably saving patients time. Methods: The WEKA v3.6.2 open-source software was used to perform a comparative analysis of three WEKA algorithms (“OneR”, “J48” and “Naïve Bayes”) in a retrospective study on the “SPECT Heart Dataset”, available from the University of California, Irvine Machine Learning Repository, using the corresponding clinical results, signed by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as “Precision”, “Incorrectly Classified Instances” and “Receiver Operating Characteristic (ROC) Areas” were considered.
Results: The interpretation of the data suggests that the Naïve Bayes algorithm has the best performance among the three selected algorithms. Conclusions: It is believed - and apparently supported by the findings - that machine learning algorithms could significantly assist, at an intermediary level, in the analysis of scintigraphic data obtained from MPI, namely after stress acquisition, eventually increasing the efficiency of the entire system and potentially easing the roles of both technologists and nuclear cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve system accuracy.
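To make the winning method concrete: a Naïve Bayes classifier for binary features (like the SPECT Heart Dataset's) estimates per-class feature probabilities and picks the class with the highest posterior. This is a hand-rolled sketch of the general technique, not WEKA's implementation, and the tiny dataset is invented, not taken from the UCI repository.

```python
import math

def train_nb(X, y, smoothing=1.0):
    """Estimate class priors and P(feature=1 | class) with Laplace smoothing."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        p1 = [(sum(r[j] for r in rows) + smoothing) / (len(rows) + 2 * smoothing)
              for j in range(len(X[0]))]
        model[c] = (prior, p1)
    return model

def predict_nb(model, x):
    """Pick the class maximizing log prior + sum of per-feature log-likelihoods."""
    def log_post(c):
        prior, p1 = model[c]
        return math.log(prior) + sum(
            math.log(p if xi else 1 - p) for xi, p in zip(x, p1))
    return max(model, key=log_post)

# Invented toy data: 3 binary features per "patient"
X = [[1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
y = ["positive", "positive", "positive", "negative", "negative", "negative"]
model = train_nb(X, y)
print(predict_nb(model, [1, 1, 1]), predict_nb(model, [0, 0, 0]))
```

The conditional-independence assumption behind Naïve Bayes is strong, but it often works well on sparse binary clinical features, which is consistent with its good showing in the study above.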