904 results for Programming courses
Abstract:
Currently, the teaching-learning process in domains such as computer programming is characterized by extensive curricula and high student enrolment. This poses a great workload for faculty and teaching assistants responsible for the creation, delivery, and assessment of student exercises. The main goal of this chapter is to foster practice-based learning in complex domains. This objective is attained with an e-learning framework, called Ensemble, as a conceptual tool to organize and facilitate technical interoperability among services. The Ensemble framework is applied to a specific domain: computer programming. Content issues are tackled with a standard format to describe programming exercises as learning objects. Communication is achieved by extending existing specifications for interoperation with several systems typically found in an e-learning environment. In order to evaluate the acceptability of the proposed solution, an Ensemble instance was validated in a classroom experiment, with encouraging results.
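The abstract does not detail the exercise format itself; purely as a hypothetical illustration of what describing a programming exercise as a learning object can look like, the Python sketch below assembles typical metadata (statement, test cases, evaluation limits). All field names are assumptions, not the format used by Ensemble.

```python
# Hypothetical sketch: a programming exercise described as a learning object.
# Field names are illustrative assumptions, not the actual Ensemble format.
import json

exercise = {
    "metadata": {
        "title": "Sum of two integers",
        "keywords": ["introductory", "I/O", "arithmetic"],
        "difficulty": "easy",
    },
    "statement": "Read two integers from standard input and print their sum.",
    "evaluation": {
        "tests": [
            {"input": "1 2\n", "expected_output": "3\n"},
            {"input": "-4 10\n", "expected_output": "6\n"},
        ],
        "time_limit_seconds": 1,
        "memory_limit_mb": 64,
    },
}

# Serializing the description makes it easy to exchange between authoring,
# delivery, and automatic assessment services in an e-learning environment.
print(json.dumps(exercise, indent=2))
```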
Abstract:
Massive Open Online Courses (MOOCs) are gaining prominence in transversal teaching-learning strategies. However, many issues are still under debate, namely assessment, largely recognized as a cornerstone of education. The large number of students involved requires a redefinition of strategies, which often rely on approaches based on tasks or challenging projects. Under these conditions, assessment is carried out through peer-reviewed assignments and online quizzes. The peer-reviewed assignments are often based on sample answers or topics, which guide the student in the task of evaluating peers. This chapter analyzes grading and evaluation in MOOCs, especially in science and engineering courses, within the context of education and grading methodologies, and discusses possible perspectives for pursuing grading quality in massive e-learning courses.
Abstract:
A new iterative algorithm based on the inexact-restoration (IR) approach combined with the filter strategy to solve nonlinear constrained optimization problems is presented. The high-level algorithm was suggested by Gonzaga et al. (SIAM J. Optim. 14:646–669, 2003) but not yet implemented, since the internal algorithms were not proposed. The filter, a concept introduced by Fletcher and Leyffer (Math. Program. Ser. A 91:239–269, 2002), replaces the merit function, avoiding penalty parameter estimation and the difficulties related to nondifferentiability. In the IR approach, two independent phases are performed in each iteration: the feasibility phase and the optimality phase. The line search filter is combined with the first phase to generate a "more feasible" point, and is then used in the optimality phase to reach an "optimal" point. Numerical experiments with a collection of AMPL problems and a performance comparison with IPOPT are provided.
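As a rough illustration of the filter strategy mentioned above (not the authors' implementation), the sketch below keeps a list of (constraint violation, objective) pairs and accepts a trial point only if it sufficiently improves feasibility or the objective with respect to every stored entry; the margin value and the pruning rule are assumptions.

```python
# Minimal sketch of a Fletcher-Leyffer-style filter acceptance test.
# (h, f) = (constraint violation, objective value); the margin gamma is an assumption.

class Filter:
    def __init__(self, gamma=1e-5):
        self.gamma = gamma
        self.entries = []  # list of (h, f) pairs already accepted

    def acceptable(self, h, f):
        # A trial point is acceptable if, for every filter entry, it
        # sufficiently improves either feasibility or the objective.
        return all(h <= (1 - self.gamma) * h_j or f <= f_j - self.gamma * h_j
                   for (h_j, f_j) in self.entries)

    def add(self, h, f):
        # Keep only entries not dominated by the new pair, then insert it.
        self.entries = [(h_j, f_j) for (h_j, f_j) in self.entries
                        if h_j < h or f_j < f]
        self.entries.append((h, f))


# In an inexact-restoration iteration, the feasibility phase would propose a
# "more feasible" point and the optimality phase an "optimal" one; both trial
# points would be screened with filter.acceptable(h, f) before being accepted.
filt = Filter()
filt.add(h=0.5, f=10.0)
print(filt.acceptable(h=0.1, f=12.0))   # True: clearly more feasible
print(filt.acceptable(h=0.5, f=10.0))   # False: improves neither measure
```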
Abstract:
Distance learning - where students take courses (attend classes, receive activities and other learning materials) while being physically separated from their instructors for the larger part of the course duration - is far from being a "new event". Since the middle of the nineteenth century, it has been delivered through mail, radio, and TV, taking advantage of the full educational potential these media had to offer at the time. In recent times, however, we have at our disposal the "magic wonder" of communication and globalization - the Internet. Taking advantage of a whole new set of educational opportunities, with a more or less disinterested attitude towards economic interests, focusing on a larger and collective "welfare" and contributing to the development of a more "equitable" world with regard to educational opportunities, Massive Open Online Courses (MOOCs) were born and have become an important feature of higher education in recent years. Many people have been talking about MOOCs as a potential educational revolution that arrived from North America and is still growing and spreading, pointing to its benefits and/or disadvantages. The Polytechnic Institute of Porto (IPP) is a Portuguese higher education institution providing undergraduate and graduate studies, with a solid history of online education and innovation through the use of technology. It has been particularly interested in MOOC developments, based on an open educational policy, as a way to offer differentiated learning strategies to its current students and to attract future ones. Therefore, in July 2014, IPP launched its first Math MOOC on its own platform. This paper describes the requirements, the resulting design, and the implementation of a mathematics MOOC, addressed essentially to three target populations: pre-college students or individuals wishing to update their Math skills or who need to prepare for the National Exam of Mathematics; higher education students who did not take this subject in high school and who feel the need to acquire basic knowledge about some of the topics covered; and high school teachers who may use these resources with their students, allowing them to adopt teaching methodologies such as the "Flipped Classroom" (available at http://www.opened.ipp.pt/). The MOOC was developed in partnership with professors from several IPP schools, gathering different math competences and backgrounds to create and put to work different activities such as video lectures and quizzes. We also briefly discuss the advertising strategy being developed to promote this MOOC, since it is not offered through a main MOOC portal such as Coursera or Udacity.
Abstract:
Dissertation to obtain the Master's Degree in Biomedical Engineering
Abstract:
Dissertation to obtain the Doctoral Degree in Educational Sciences, specialty in Technologies, Networks and Multimedia in Education and Training
Abstract:
Dissertation to obtain the Master's Degree in Informatics Engineering
Abstract:
Dissertation to obtain the Master's Degree in Informatics Engineering
Abstract:
The Intel® Xeon Phi™ is the first processor based on Intel's MIC (Many Integrated Cores) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging the use of many integrated, computationally simple cores to perform parallel computations. The main novelty of the MIC architecture, relative to GPUs, is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for the parallel programming of x86-based architectures, which may lead to a smaller learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general, and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition, and inter-execution flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for the development of a simple but powerful and efficient skeleton framework for programming the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for the orchestration of OpenCL computations in multi-GPU and CPU environments. We extend Marrow to execute both OpenCL and C++ parallel computations on the Xeon Phi. To evaluate the newly developed framework, several well-known benchmarks, such as Saxpy and N-Body, are used to compare its performance not only against the existing framework when executing on the co-processor, but also against a multi-GPU environment.
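The thesis targets C++ and OpenCL on the Xeon Phi; purely as a language-agnostic illustration of what an algorithmic skeleton hides from the programmer, the Python sketch below wraps worker-pool management and data partitioning behind a map skeleton. It is not Marrow's API, and the Saxpy helper and parameters are assumptions.

```python
# Illustrative map skeleton: the caller supplies only the element-wise function
# and the data; resource management and data decomposition stay hidden.
# Conceptual sketch only, not the Marrow framework's API.
from concurrent.futures import ProcessPoolExecutor


def map_skeleton(fn, data, workers=4):
    """Apply fn to every element of data in parallel and return the results."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data, chunksize=max(1, len(data) // workers)))


def saxpy_element(pair, a=2.0):
    # One element of the Saxpy benchmark mentioned above: a * x + y.
    x, y = pair
    return a * x + y


if __name__ == "__main__":
    xs = list(range(8))
    ys = [10.0] * 8
    print(map_skeleton(saxpy_element, list(zip(xs, ys))))
```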
Abstract:
Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis aims at further investigating the appropriateness of LP, notably a combination of LP-based reasoning features, including techniques available in LP systems, to machine ethics. Moral facets, as studied in moral philosophy and psychology, that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. They are all important for modeling various issues of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through moral examples taken off-the-shelf from the morality literature. These applications include: (1) modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) modeling moral updating (which allows other, possibly overriding, moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE.
Abstract:
INTRODUCTION: Debates about the quality of medical education have become more evident in the recent past, and as a result several different assessment methods have been refined for that purpose. The use of questionnaires filled out by medical students to assess the quality of lectures is one of the most common methods employed in our milieu. However, the reliability of this investigation method has not yet been systematically tested. The authors present the reliability of a specific form applied to fourth-year medical students during the clinical psychiatry course. METHOD: Eighty-one fourth-year medical students were instructed to complete a form immediately after each clinical psychiatry lecture. Thirty-four students (42%) failed to turn in the forms after the final lecture. These students were given an identical form to assess the lectures retrospectively. The grades given by both groups of students for each delivered lecture, and the number of students who graded a lecture that was never delivered, were compared. Statistical significance between the two groups was determined by means of the chi-square test (p < 0.05). RESULTS: Eighteen of the 34 students who filled out the forms retrospectively (53%) rated the undelivered lecture, whereas only 5 of the 47 students who filled out the forms during the course (11%) did so; this difference is statistically significant (p < 0.05). There was no statistical difference in the grades given to the lectures that were actually delivered. DISCUSSION: The authors conclude that the low reliability of the retrospective evaluation warrants a continuous assessment method during the course.
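For readers who want to reproduce the comparison described above, a quick check of the reported counts (18 of 34 retrospective respondents versus 5 of 47 in-course respondents rating the undelivered lecture) can be run as a chi-square test on the implied 2x2 contingency table. The sketch below uses SciPy and is our illustration, not the authors' analysis script.

```python
# Chi-square test on the 2x2 table implied by the abstract:
# rows = (retrospective, in-course), cols = (rated undelivered lecture, did not).
from scipy.stats import chi2_contingency

table = [[18, 34 - 18],   # retrospective group: 18 of 34
         [5, 47 - 5]]     # in-course group: 5 of 47

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # p < 0.05, consistent with the abstract
```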
Abstract:
Despite the extensive literature on finding new models to replace the Markowitz model or on trying to increase the accuracy of its input estimations, there are fewer studies about the impact of using different optimization algorithms on the results. This paper aims to add some research to this field by comparing the performance of two optimization algorithms in drawing the Markowitz Efficient Frontier and in real-world investment strategies. Second-order cone programming is a faster algorithm and appears to be more efficient, but it is impossible to assert which algorithm is better, since Quadratic Programming often shows superior performance in real investment strategies.
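The paper does not list its solver setup; as a hedged sketch of how the same efficient-frontier point can be posed both as a quadratic program and as a second-order cone program, the example below uses CVXPY with made-up data. The data, return target, and solver defaults are all assumptions.

```python
# One efficient-frontier point written as a QP and as an SOCP (illustrative data).
import numpy as np
import cvxpy as cp

np.random.seed(0)
n = 5
mu = np.random.uniform(0.02, 0.12, n)      # expected returns (assumed)
A = np.random.randn(n, n)
Sigma = A @ A.T / n + 0.01 * np.eye(n)     # positive-definite covariance (assumed)
target = 0.07                              # required portfolio return (assumed)

w = cp.Variable(n)
constraints = [cp.sum(w) == 1, w >= 0, mu @ w >= target]

# Quadratic-programming formulation: minimize the portfolio variance w' Sigma w.
qp = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma)), constraints)
qp.solve()
w_qp = w.value

# Second-order cone formulation: minimize the portfolio risk ||Sigma^{1/2} w||.
L = np.linalg.cholesky(Sigma)
socp = cp.Problem(cp.Minimize(cp.norm(L.T @ w, 2)), constraints)
socp.solve()
w_socp = w.value

print("QP weights:  ", np.round(w_qp, 4))
print("SOCP weights:", np.round(w_socp, 4))  # both should agree up to solver tolerance
```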
Abstract:
The Portuguese Navy is responsible for managing Portugal's Exclusive Economic Zone, ensuring its security against criminal activities. To support this task, the Oversee system is used to monitor the position of every vessel in the area of responsibility, allowing the Portuguese Navy to intervene quickly when and where necessary. However, the system depends on constant periodic transmissions from the vessels to operate correctly: if these transmissions are interrupted, deliberately or accidentally, the system can no longer locate the vessels, hindering the Navy's intervention. To address this gap, it is proposed to extend the Oversee system with the capability of predicting a vessel's future positions based on its trajectory up to the moment the transmissions ceased. Given the large volumes of data generated by the system (position histories), Artificial Intelligence offers a possible solution to this problem. Considering the fast-response requirements of the problem at hand, the Geometric Semantic Genetic Programming algorithm, based on the work of Vanneschi et al., presents itself as a possible solution, having already produced good results on similar problems. This thesis aims to integrate the developed Geometric Semantic Genetic Programming algorithm with the Oversee system in order to give it predictive capabilities. Additionally, a performance analysis will be carried out to determine the ideal parameterization of the algorithm. The goal of this thesis is to provide the Portuguese Navy with a tool capable of supporting the control of the Portuguese Exclusive Economic Zone, allowing the Navy to intervene correctly in cases where the current system would be unable to determine the correct position of the vessel in question.
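As a hedged illustration of the geometric semantic operators underlying the algorithm mentioned above (following the general formulation used in the Geometric Semantic Genetic Programming literature, not the thesis implementation integrated with Oversee), the sketch below applies geometric semantic crossover and mutation directly to semantics, i.e., to the vectors of outputs of individuals over the training cases. The toy data and mutation step are assumptions.

```python
# Geometric semantic operators applied to semantics (output vectors), for regression.
# Illustrative sketch only, not the implementation integrated with Oversee.
import numpy as np

rng = np.random.default_rng(42)


def gs_crossover(sem_p1, sem_p2, sem_r):
    """Geometric semantic crossover: a convex combination of the parents'
    semantics, weighted case-by-case by a random function with outputs in [0, 1]."""
    return sem_r * sem_p1 + (1.0 - sem_r) * sem_p2


def gs_mutation(sem_p, sem_r1, sem_r2, mutation_step=0.1):
    """Geometric semantic mutation: a small perturbation of the parent's
    semantics along the difference of two random functions."""
    return sem_p + mutation_step * (sem_r1 - sem_r2)


# Toy example: 6 training cases (e.g., past vessel positions along one axis).
target = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
parent1 = target + rng.normal(0, 1.0, size=6)   # semantics of two parent individuals
parent2 = target + rng.normal(0, 1.0, size=6)

random_weights = rng.uniform(0, 1, size=6)      # random function with outputs in [0, 1]
child = gs_crossover(parent1, parent2, random_weights)
mutant = gs_mutation(child, rng.uniform(0, 1, 6), rng.uniform(0, 1, 6))

rmse = lambda s: float(np.sqrt(np.mean((s - target) ** 2)))
print(f"parent1 {rmse(parent1):.3f}  parent2 {rmse(parent2):.3f}  "
      f"child {rmse(child):.3f}  mutant {rmse(mutant):.3f}")
```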
Abstract:
Master's internship report in the Teaching of Informatics