718 results for "Reverse problem based learning"
Abstract:
The proliferation of Web-based learning objects makes finding and evaluating resources a considerable hurdle for learners to overcome. While established learning analytics methods provide feedback that can aid learner evaluation of learning resources, the adequacy and reliability of these methods have been questioned. Because engagement with online learning differs from other Web activity, it is important to establish pedagogically relevant measures that can aid the development of distinct, automated analysis systems. Content analysis is often used to examine online discussion in educational settings, but these instruments are rarely compared with one another, which leads to uncertainty regarding their validity and reliability. In this study, participation in Massive Open Online Course (MOOC) comment forums was evaluated using four different analytical approaches: the Digital Artefacts for Learning Engagement (DiAL-e) framework, Bloom's Taxonomy, the Structure of Observed Learning Outcomes (SOLO) and the Community of Inquiry (CoI). Results from this study indicate that the different approaches to measuring cognitive activity are closely correlated and are distinct from typical interaction measures. This suggests that computational approaches to pedagogical analysis may provide useful insights into learning processes.
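As an illustration only (the column names and values below are invented placeholders, not the study's data), pairwise rank correlations between per-comment scores under the four coding schemes could be computed along these lines:

# Sketch: rank correlation between per-comment scores assigned under four
# coding schemes. The DataFrame columns and values are hypothetical placeholders.
import pandas as pd

scores = pd.DataFrame({
    "DiAL_e": [1, 2, 2, 3, 1],
    "Bloom":  [2, 3, 3, 4, 1],
    "SOLO":   [1, 3, 2, 4, 2],
    "CoI":    [2, 2, 3, 4, 1],
})

# Spearman correlation suits ordinal coding levels.
print(scores.corr(method="spearman"))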
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are processed in place of natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were found by manual inspection or by automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method levels). The data used and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to related work are discussed.
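To illustrate the text-based approach to PLI (a minimal sketch only; the snippets, labels and model choice are ours, not the thesis's, and a real system would train on large code archives):

# Sketch: character n-gram classifier for programming language identification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "def add(a, b):\n    return a + b",                        # Python
    "public static int add(int a, int b) { return a + b; }",   # Java
    "int add(int a, int b) { return a + b; }",                  # C
]
labels = ["Python", "Java", "C"]

# Character n-grams are robust to identifier names and formatting differences.
pli = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
pli.fit(snippets, labels)

print(pli.predict(["print('hello')"]))   # best guess among the known labels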
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy values, estimated by cross-validation, delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems less relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
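The kind of sensitivity analysis described above can be pictured with a minimal sketch like the following (X and y are placeholders for the extracted EEG features and class labels, and the radius grid is illustrative, not the study's):

# Sketch: sensitivity of a standard SVM to the Gaussian (RBF) kernel radius,
# estimated by cross-validated accuracy on placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 21))      # stands in for wavelet/Lyapunov features
y = rng.integers(0, 2, size=100)    # stands in for normal/epileptic labels

for radius in [0.01, 0.1, 1.0, 10.0, 100.0]:
    # For a Gaussian kernel exp(-||u-v||^2 / (2*radius^2)), gamma = 1/(2*radius^2).
    clf = SVC(kernel="rbf", gamma=1.0 / (2.0 * radius**2))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"radius={radius:g}  mean CV accuracy={acc:.3f}")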
Abstract:
In this paper, we deal with a generalized multi-period mean-variance portfolio selection problem with market parameters subject to Markov random regime switching. Problems of this kind have recently been considered in the literature for control over bankruptcy, for cases in which there are no jumps in market parameters (see [Zhu, S. S., Li, D., & Wang, S. Y. (2004). Risk control over bankruptcy in dynamic portfolio selection: A generalized mean variance formulation. IEEE Transactions on Automatic Control, 49, 447-457]). We present necessary and sufficient conditions for obtaining an optimal control policy for this Markovian generalized multi-period mean-variance problem, based on a set of interconnected Riccati difference equations and on a set of other recursive equations. Some closed formulas are also derived for two special cases, extending some previous results in the literature. We apply the results to a numerical example with real data for risk control over bankruptcy in a dynamic portfolio selection problem with Markov jumps. (C) 2008 Elsevier Ltd. All rights reserved.
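To make the regime-switching setting concrete, a minimal simulation sketch follows (the transition matrix and per-regime parameters are invented for illustration; this is not the paper's model, data, or Riccati-based policy computation):

# Sketch: market parameters switching according to a finite-state Markov chain,
# with regime-dependent return mean and volatility (all values hypothetical).
import numpy as np

P = np.array([[0.9, 0.1],    # transition probabilities from regime 0
              [0.2, 0.8]])   # transition probabilities from regime 1
mu = [0.010, -0.005]         # per-regime expected return
sigma = [0.02, 0.06]         # per-regime volatility

rng = np.random.default_rng(1)
regime, returns = 0, []
for t in range(12):                           # a 12-period horizon
    regime = rng.choice(2, p=P[regime])       # Markov jump of the market regime
    returns.append(rng.normal(mu[regime], sigma[regime]))

print(np.mean(returns), np.var(returns))      # sample mean and variance over the horizon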
Abstract:
Using a numerical implicit model for root water extraction by a single root in a symmetric radial flow problem, based on the Richards equation and the combined convection-dispersion equation, we investigated some aspects of the response of root water uptake to combined water and osmotic stress. The model implicitly incorporates the effect of simultaneous pressure head and osmotic head on root water uptake, and does not require additional assumptions (additive or multiplicative) to derive the combined effect of water and salt stress. Simulation results showed that relative transpiration equals relative matric flux potential, which is defined as the matric flux potential calculated with an osmotic pressure head-dependent lower bound of integration, divided by the matric flux potential at the onset of limiting hydraulic conditions. In the falling rate phase, the osmotic head near the root surface was shown to increase in time due to decreasing root water extraction rates, causing a more gradual decline of relative transpiration than with water stress alone. Results furthermore show that osmotic stress effects on uptake depend on pressure head or water content, allowing a refinement of the approach in which fixed reduction factors based on the electrical conductivity of the saturated soil solution extract are used. One of the consequences is that osmotic stress is predicted to occur in situations not predicted by the saturation extract analysis approach. It is also shown that this way of combining salinity and water as stressors yields results that are different from a purely multiplicative approach. An analytical steady state solution is presented to calculate the solute content at the root surface, and compared with the outputs of the numerical model. Using the analytical solution, a method has been developed to estimate relative transpiration as a function of system parameters, which are often already used in vadose zone models: potential transpiration rate, root length density, minimum root surface pressure head, and the soil θ(h) and K(h) functions.
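In symbols (a sketch in our own notation, which may differ from the paper's sign conventions):

\[
M = \int_{h_{\mathrm{lim}}(h_{\pi})}^{h} K(h')\,\mathrm{d}h',
\qquad
\frac{T_{\mathrm{act}}}{T_{\mathrm{pot}}} = \frac{M}{M_{\mathrm{lim}}},
\]

where h is the soil pressure head, K the unsaturated hydraulic conductivity, h_lim(h_pi) a lower bound of integration that depends on the osmotic head h_pi near the root surface, and M_lim the matric flux potential at the onset of limiting hydraulic conditions.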
Abstract:
The effect of temperature-dependent viscosity on fully developed forced convection in a duct of rectangular cross-section occupied by a fluid-saturated porous medium is investigated analytically. The Darcy flow model is applied and the viscosity-temperature relation is assumed to be inverse-linear. The case of uniform heat flux on the walls, i.e. the H boundary condition in the terminology of Kays and Crawford, is treated. For the case of a fluid whose viscosity decreases with temperature, it is found that the effect of the variation is to increase the Nusselt number for heated walls. Having found the velocity and temperature distributions, the second law of thermodynamics is invoked to find the local and average entropy generation rate. Expressions for the entropy generation rate, the Bejan number, the heat transfer irreversibility, and the fluid flow irreversibility are presented in terms of the Brinkman number, the Péclet number, the viscosity variation number, the dimensionless wall heat flux, and the aspect ratio (width to height ratio). These expressions allow a parametric study of the problem, from which it is observed that the entropy generated by flow in a duct of square cross-section exceeds that of its rectangular counterparts, while increasing the aspect ratio decreases the entropy generation rate, similar to what was previously reported for the clear-flow case.
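For reference, the Bejan number used in such analyses is the standard ratio of heat-transfer irreversibility to total entropy generation (a textbook definition, not quoted from the paper):

\[
\mathrm{Be} = \frac{\dot{S}_{\mathrm{gen,HT}}}{\dot{S}_{\mathrm{gen,HT}} + \dot{S}_{\mathrm{gen,FF}}},
\]

so Be tends to 1 when heat-transfer irreversibility dominates and to 0 when fluid-friction irreversibility dominates.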
Abstract:
This paper reports on an investigation into the teaching of medical ethics and related areas in the medical undergraduate course at the University of Queensland. The project was designed in the context of a major curriculum change to replace the current six-year course with an integrated, problem-based, four-year graduate medical course, which began in 1997. A survey of clinical students, observations of clinical teaching sessions, and interviews with clinical teachers were conducted. The data obtained have contributed to curriculum development and will provide a baseline for comparison and evaluation of the graduate course in this field. A view of integrated ethics teaching is advanced in the light of the data obtained.
Abstract:
Universities worldwide are tending to change undergraduate teaching in veterinary parasitology. To give considered advice to universities, faculties, governmental bodies and professional societies about a discipline, and to establish how particular changes may affect the quality of a course, its current status must be recorded and reviewed. The present paper contributes toward this objective by providing a snapshot of the veterinary parasitology courses at the Universities of Melbourne, Sydney and Queensland in eastern Australia. It includes a description of the veterinary science curriculum in each institution and provides an outline of its veterinary parasitology course, including objectives, topics covered, course delivery, student examination procedures and course evaluation. Student contact time in veterinary parasitology during the curriculum is currently higher in Melbourne (183 h) than in Sydney and Queensland (106-110 h). In the teaching of parasitology, Melbourne adopts a taxonomic approach (in the pre-clinical period) followed by a combined disciplinary and problem-based approach in the clinical semesters, whereas both Sydney and Queensland focus more on presenting parasites on a host-species basis followed by a problem-based approach. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
The main objective of this study is to understand how first-grade students develop mental calculation strategies in the context of solving addition and subtraction problems. To that end, it sought to answer three questions: a) What mental calculation strategies do students use when solving addition and subtraction problems? b) How do these strategies evolve? and c) Does the meaning of the addition or subtraction operation present in a problem influence the mental calculation strategy used to solve it? Given the nature of the research problem, a qualitative methodology was followed and three case studies were carried out. The fieldwork took place in a first-grade class of the first cycle of basic education, of which I am the teacher, and was concluded at the beginning of the following school year, when the students were attending the second grade. The students under study solved three chains of problems covering the different meanings of the addition and subtraction operations: the first two chains were solved in pairs, in the classroom, and the last was solved individually, outside the classroom, only by the students who constituted the cases. The records produced by the students while solving the problems, together with the audio and video recordings and the field notes, were the main sources of data. The data show that the calculation strategies used by the students evolved from elementary strategies based on counting and on the use of number facts to complex mental calculation strategies, additive or subtractive, of the 1010 and N10 categories. A preference for 1010-type additive strategies was identified in solving the addition problems, while in the subtraction problems the strategies used by the students varied with the meaning present in each problem: 1010-type subtractive strategies were used in problems with a take-away meaning, whereas in problems with comparison and completion meanings the students generally used A10-type additive strategies, belonging to the N10 category. The data also point to a possible influence of the learning environment on the use of more efficient mental calculation strategies, particularly the 1010-type additive strategy. The data further allow the conclusion that first-grade students are able to develop and use mental calculation strategies that, in the literature I had access to (for example, Beishuizen, 1993, 2001; Buys, 2001; Cooper, Heirdsfield & Irons, 1995; Thompson & Smith, 1999), are associated with older students. The results of this study thus highlight the need for the teacher, in enriching learning environments, to promote the development of complex mental calculation strategies, moving beyond the elementary calculation strategies usually associated with younger students.
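For readers unfamiliar with the strategy labels, an illustrative example (ours, not taken from the study's data) for 47 + 38:

\[
\text{1010: } (40+30)+(7+8)=70+15=85;\qquad
\text{N10: } 47+30=77,\ 77+8=85;\qquad
\text{A10: } 47+3=50,\ 50+35=85.
\]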
Abstract:
Translator training and assessment have made use of more and more tools and innovative strategies over the years. The goals and results to be achieved have not changed much, however: translation quality. To accomplish it, the translator and all the tasks and processes he or she carries out are crucial, with the pre-translation and post-translation processes being as important as the translation itself, namely as far as autonomy and reflexive and critical skills are concerned. Finally, the need for and relevance of collaborative tasks and networks among virtual translation communities led us to implement ePortfolios as a tool to develop the required skills and extend the use of the Internet in translation. In this paper we describe a case study of a pilot experiment on the use of ePortfolios as a translation training tool and discuss their role in defining a clear set of objectives and phases for the completion of each task, helping students manage project deadlines, improving their knowledge of the construction and management of translation resources, and deepening their awareness of the concepts related to the development of ePortfolios.
Abstract:
Project presented to the Instituto Superior de Contabilidade e Administração do Porto to obtain the Master's degree in Assessoria de Administração (Administrative Assistance).
Abstract:
Serious games are games in which entertainment is not the most relevant motivation or objective. TimeMesh is an online, multi-language, multiplayer, collaborative and social game platform for sharing and acquiring knowledge of the history of European regions. As such, it is a serious game with educational characteristics. This article evaluates the use of TimeMesh with 13- and 14-year-old students. It shows that the game is already a significant learning tool for European citizenship.
Abstract:
A survey to assess training needs in TQM was developed in several European countries within the framework of a Leonardo project named IMVOCED. Beyond a comparison of the results in each country, a global analysis was performed to design a TQM programme to be delivered through WBL (Work-Based Learning). Differences were found between countries, and the Portuguese results also revealed that different approaches to TQM training should be adopted according to the organisation's size. Based on this evidence, two different strategies for TQM training through WBL are proposed and discussed.
Abstract:
Master's degree in Early English Teaching (Ensino Precoce do Inglês).
Abstract:
Interactive products are appealing objects in a technology-driven society and the offer in the market is wide and varied. Most existing interactive products provide only light or only sound experiences. Therefore, the goal of this project was to develop a product aimed at children that combines both features. The project was developed by a team of four third-year students with different engineering backgrounds and nationalities during the European Project Semester at ISEP (EPS@ISEP) in 2012. This paper presents the process that led to the development of an interactive sound table combining nine identical interaction blocks, a control block and a sound block. Each interaction block works independently and is composed of four light-emitting diodes (LEDs) and one infrared (IR) sensor. The control is performed by an Arduino microcontroller, and the sound block includes a music shield and a pair of loudspeakers. A number of tests were carried out to assess whether the controller, IR sensors, LEDs, music shield and speakers worked together properly and whether the ensemble was a viable interactive light and sound device for children.