965 results for ubiquitous multi-core framework


Relevance: 100.00%

Abstract:

The major challenges facing education have brought teacher training to the fore. The present study aims to contribute to the professional development of science teachers in today's society. It is an investigation in the supervision of training, more specifically in the supervision of teacher training in science didactics. It falls within in-service training and reflects concerns about the professional knowledge of science teachers in the context of geoscience education at the secondary level. A training programme, integrated into a professional development path, was designed, implemented and evaluated; it adopted a multidisciplinary science-technology-society (STS) approach and emphasised learning environments outside the classroom, as well as the evaluation of its impact on professional development, both at the end of the programme and at the end of the following school year. The study comprised three phases: deepening of the theoretical framework contextualising the research (Phase I); design, implementation and evaluation of the training programme (Phase II); and formulation of guidelines for the professional development of science teachers and writing of the final document (Phase III). Phase I developed the theoretical framework that guided and supported the study, grounding the choices made in the subsequent phases. It covered topics such as professional knowledge and teacher professional development, in-service training and supervision of training, and geoscience education and its contribution to the scientific education of citizens from an STS perspective. The assessment of learning, particularly in environments outside the classroom, and the difficulties inherent in its curricular integration were also addressed. In Phase II, a training programme grounded in research findings was designed around a multidisciplinary STS matrix that emphasised learning outside the classroom. Run as a training workshop, the programme took place throughout the 2010/2011 school year, comprised fifty contact hours - in diverse learning environments - plus fifty non-contact hours, and was attended by sixteen teachers of subject group 520 from ten different schools. In the first session, a questionnaire was administered to diagnose the teachers' conceptions of science teaching within an STS framework and outside the classroom. The results show that most teachers do not carry out activities of this kind, and that those they do carry out are cognitively undemanding. The teachers also acknowledged that implementing such activities requires knowledge from other fields and supporting materials, and that initial teacher training is insufficient in this respect. This phase ended with an evaluation of perceptions of the training programme's impact on professional development and on improving in-service training practices for science teachers.
The results indicate that the training path contributed to:
− the development of the participants' professional knowledge, regarding the exploration, transformation and use of geological resources and their social and environmental implications, as well as didactic knowledge about science teaching in the contexts considered here, including the assessment of learning;
− the construction of specific teaching materials, which the teachers recognise as promoting contextualised learning, the integration of knowledge, the development of competences (conceptual, procedural and attitudinal) and the integrated assessment of learning, while also meeting the educational objectives of the course syllabus and motivating students to learn geology;
− the modification of some pedagogical practices, regarding the curricular use of the STS perspective in the environments studied here, notably in the assessment of students;
− the development of the skills needed for collaborative work and of the participating teachers' reflective capacity;
− the identification of strengths of the implemented programme, in terms of organisation, methodology and supervision of the training.
In Phase III, the conclusions led to the formulation of guidelines for the professional development of science teachers, covering in-service teacher training, supervision of training, and science teaching and learning. By changing current practices, these proposals can help bring science teachers' classroom practice closer to research in didactics and thus improve the quality of student learning.

Relevance: 100.00%

Abstract:

Doctoral thesis in Biochemistry, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015.

Relevance: 100.00%

Abstract:

Real-time systems demand guaranteed and predictable run-time behaviour to ensure that no task misses its deadline. Over the years, demand for additional functionality in embedded real-time systems has kept increasing, and with the added functionality the designs themselves grow more complex. Constraints such as energy consumption and time and space bounds also require attention and proper handling. Additionally, scheduling algorithms whose efficiency has been proven through analyses and simulations often impose requirements with significant run-time cost, especially in the context of multi-core systems. To further investigate the behaviour of such systems and to quantify and compare the overheads involved, we have developed SPARTS, a simulator of a generic embedded real-time device. Tasks in the simulator are described by externally visible parameters (e.g. minimum inter-arrival time, sporadicity, WCET, BCET), rather than by their code. While our current implementation is primarily focused on our immediate needs in the area of power-aware scheduling, it is designed to be extensible, accommodating different task properties, scheduling algorithms and/or hardware models for use in a wide variety of simulations. The source code of SPARTS is available for download at [1].
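As an illustration of such a parameter-based task model, here is a minimal C++ sketch; all names are hypothetical and not taken from SPARTS itself:

```cpp
// Hypothetical sketch of a parameter-described sporadic task, in the spirit
// of SPARTS (minimum inter-arrival, sporadicity, WCET, BCET); not the
// simulator's actual data structures.
#include <cstdint>
#include <random>

struct TaskParams {
    uint64_t min_inter_arrival_us;  // lower bound between consecutive releases
    uint64_t sporadicity_us;        // extra random delay added to each release
    uint64_t bcet_us;               // best-case execution time
    uint64_t wcet_us;               // worst-case execution time
};

// Draw one job's execution time, uniformly between BCET and WCET.
uint64_t sample_execution_time(const TaskParams& t, std::mt19937_64& gen) {
    std::uniform_int_distribution<uint64_t> d(t.bcet_us, t.wcet_us);
    return d(gen);
}

// Compute the next release time: at least the minimum inter-arrival apart,
// plus a random sporadic delay.
uint64_t next_release(uint64_t last_release_us, const TaskParams& t,
                      std::mt19937_64& gen) {
    std::uniform_int_distribution<uint64_t> jitter(0, t.sporadicity_us);
    return last_release_us + t.min_inter_arrival_us + jitter(gen);
}
```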

Relevance: 100.00%

Abstract:

Over the last three decades, computer architects have been able to increase the performance of single processors by, e.g., increasing clock speed, introducing cache memories and exploiting instruction-level parallelism. However, because of power consumption and heat dissipation constraints, this trend has come to an end. Hardware engineers have instead moved to new chip architectures with multiple processor cores on a single chip. With multi-core processors, applications can complete more total work than with one core alone. To take advantage of these processors, parallel programming models have been proposed as promising solutions for using multi-core hardware more effectively. This paper discusses some of the existing models and frameworks for parallel programming and, building on them, outlines a draft parallel programming model for Ada.

Relevance: 100.00%

Abstract:

This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
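As background on the MC test technique the paper builds on (a standard textbook formulation, not taken from the paper itself): given a statistic $S_0$ computed from the data and $N$ independent replicates $S_1, \dots, S_N$ simulated under the null hypothesis, the MC p-value is

$$\hat{p}_N = \frac{1 + \sum_{i=1}^{N} \mathbf{1}\{S_i \ge S_0\}}{N + 1},$$

and rejecting when $\hat{p}_N \le \alpha$ yields an exact level-$\alpha$ test whenever the statistic is pivotal under the null and $\alpha(N+1)$ is an integer (e.g. $N = 99$ for $\alpha = 0.05$).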

Relevance: 100.00%

Abstract:

With the growing complexity of systems-on-chip, new challenges keep emerging in the design of these systems with respect to formal verification and high-level synthesis. Several efforts around SystemC, regarded as the standard for system-level design, are under way to meet these new challenges. However, because of SystemC's complex concurrency model, meeting them remains difficult. We therefore believe it is essential to start from better foundations by using a more effective concurrency model. Consequently, in this thesis we study a design methodology that offers a better abstraction for modelling parallel components, based on the concept of a transaction. We show how, thanks to the simple reasoning the transaction concept affords, it becomes easier to apply formal verification, incremental refinement and high-level synthesis. To assess the effectiveness of this methodology, we set out to optimise the simulation speed of a transactional model by exploiting a multi-core machine. We present the parallel modelling and simulation environment we developed, and we study different scheduling strategies with respect to parallelism and synchronisation overhead. An experiment on a model of the 802.11a Wi-Fi transmitter achieved a speed-up of about 1.8 using two threads. With 8 threads, even though the workload of the individual transactions was small, we achieved a speed-up of about 4.6, which is a very promising result.
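As a rough illustration of the kind of parallel scheduling involved (a minimal sketch, not the thesis's actual simulation environment): independent transactions can be dispatched to a fixed pool of worker threads, with synchronisation between dependent transactions omitted here for brevity.

```cpp
// Illustrative only: a crude parallel scheduler that runs independent
// "transactions" on a pool of worker threads. A real transactional
// simulator must also order and synchronise dependent transactions.
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

void run_transactions(const std::vector<std::function<void()>>& txns,
                      unsigned num_threads) {
    std::atomic<std::size_t> next{0};  // index of the next unclaimed transaction
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < num_threads; ++t) {
        pool.emplace_back([&] {
            // Each worker repeatedly claims one transaction and executes it.
            for (std::size_t i = next.fetch_add(1); i < txns.size();
                 i = next.fetch_add(1)) {
                txns[i]();
            }
        });
    }
    for (auto& th : pool) th.join();
}
```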

Relevance: 100.00%

Abstract:

Carried out largely under the supervision of the late Professor Paul Arminjon. After his death, Dr. Aziz Madrane took over the supervision of my work.

Relevance: 100.00%

Abstract:

clRNG and clProbDist are two application programming interfaces (APIs) that we developed for generating uniform and non-uniform random numbers on parallel computing devices using the OpenCL environment. The first interface makes it possible to create, on a central (host) computer, stream objects that act as parallel virtual generators and can be used both on the host and on parallel devices (graphics processing units, multi-core CPUs, etc.) to generate sequences of random numbers. The second interface makes it possible to generate, on these units, random variates from various continuous and discrete probability distributions. In this thesis, we review basic notions about random number generators, describe heterogeneous systems, and survey techniques for generating random numbers in parallel. We also present the models that make up the architecture of the OpenCL environment and detail the structure of the APIs we developed. For clRNG, we distinguish the functions that create streams, the functions that generate uniform random variates, and those that manipulate stream states. clProbDist contains the functions that generate non-uniform random variates by inversion, as well as functions that return various statistics of the implemented distributions. We evaluate these APIs with two simulations: a simplified inventory model and a financial option example. Finally, we report experimental results on the performance of the implemented generators.
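For background, the inversion technique mentioned above computes a variate with cumulative distribution function F as X = F^{-1}(U), with U uniform over (0,1). A generic C++ sketch of the idea (not clProbDist's actual API):

```cpp
// Generic illustration of the inversion method (not clProbDist's API):
// X = F^{-1}(U) has distribution F when U ~ Uniform(0,1).
#include <cmath>
#include <random>

// Exponential(lambda) by inversion: F(x) = 1 - exp(-lambda*x),
// so F^{-1}(u) = -ln(1 - u) / lambda.
double exponential_by_inversion(std::mt19937_64& gen, double lambda) {
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    double u = u01(gen);
    return -std::log(1.0 - u) / lambda;
}
```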

Relevance: 100.00%

Abstract:

The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming was a problem for a small niche only: engineers parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures: parallel programming is now a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach this objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource, called the Parawiki, was set up to collect experiences and knowledge in the field of parallel programming. Another key step in this direction was the creation of the Thinking Parallel weblog, where to date more than 50,000 readers have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision. Two different research directions were pursued. The first resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well documented and can be used directly in programs, developers can study the source code and learn from it, and compiler writers can use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
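For a flavour of the abstraction level OpenMP offers (a generic textbook example, not code from AthenaMP), here is a data-parallel loop with a reduction, the kind of pattern referred to above as data-parallel programming:

```cpp
// Generic OpenMP data-parallel example (illustrative; not AthenaMP code).
// Compile with OpenMP enabled, e.g.: g++ -fopenmp reduction.cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1'000'000, 0.5);
    double sum = 0.0;
    // Distribute loop iterations across threads; each thread accumulates a
    // private partial sum that OpenMP combines safely at the end.
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < static_cast<long>(data.size()); ++i) {
        sum += data[i];
    }
    std::printf("sum = %f\n", sum);
    return 0;
}
```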

Relevance: 100.00%

Abstract:

The design space of emerging heterogeneous multi-core architectures with reconfigurable elements makes it feasible to design mixed fine-grained and coarse-grained parallel architectures. This paper presents a hierarchical composite array design which extends the current design space of regular arrays by combining a sequence of transformations. This technique is applied to derive a new design of a pipelined parallel regular array with different dataflow between phases of computation.

Relevance: 100.00%

Abstract:

Can autonomic computing concepts be applied to traditional multi-core systems found in high performance computing environments? In this paper, we propose a novel synergy between parallel computing and swarm robotics to offer a new computing paradigm, 'Swarm-Array Computing', that can harness and apply autonomic computing to parallel computing systems. Of the three approaches proposed for swarm-array computing, we investigate the one based on landscapes of intelligent cores, in which the cores of a parallel computing system are abstracted as swarm agents. In this approach, a task is executed and transferred seamlessly between cores, thereby achieving the self-ware properties that characterize autonomic computing. FPGAs are considered as an experimental platform, taking into account their application in space robotics. The feasibility of the proposed approach is validated on the SeSAm multi-agent simulator.

Relevance: 100.00%

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm for increasing computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Work on parallel algorithms is of course not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.

Relevance: 100.00%

Abstract:

We describe infinitely scalable pipeline machines with perfect parallelism, in the sense that every instruction of an inline program is executed, on successive data, on every clock tick. Programs with shared data effectively execute in less than a clock tick. We show that pipeline machines are faster than single- or multi-core von Neumann machines for sufficiently many runs of a sufficiently time-consuming program. Our pipeline machines exploit the totality of transreal arithmetic and the known waiting time of statically compiled programs to deliver the interesting property that they need no hardware or software exception handling.

Relevance: 100.00%

Abstract:

Anycast and multicast are two important Internet services, and combining the two can provide new and practical services. In this paper we propose a new Internet service, Minicast: in a scenario with n replicated or similar servers, deliver a message to at least m members, 1 ≤ m ≤ n. Such a service has potential applications in information retrieval, parallel computing, cache queries, etc. It can provide the same functionality at optimal cost, reducing bandwidth consumption, network delay, and so on. We design a multi-core tree based architecture for the Minicast service and present the criteria for computing the sub-cores among a subset of Minicast members. Simulations show that the proposed architecture evens out the Minicast traffic and that the Minicast application can reduce the consumption of network resources.
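A minimal sketch of the delivery semantics (hypothetical helper, not the paper's multi-core tree algorithm): choose m of the n members, here the m with the lowest estimated delay, and deliver only to those.

```cpp
// Illustrative sketch of Minicast's "at least m of n" delivery semantics;
// the member-selection criterion (lowest delay) is an assumption, not the
// paper's sub-core computation.
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

struct Member {
    std::string address;
    double delay_ms;  // measured or estimated network delay to this member
};

// Returns the m members with the lowest delay (requires 1 <= m <= members.size()).
std::vector<Member> select_minicast_targets(std::vector<Member> members,
                                            std::size_t m) {
    // Partially sort so the m closest members come first.
    std::partial_sort(members.begin(), members.begin() + m, members.end(),
                      [](const Member& a, const Member& b) {
                          return a.delay_ms < b.delay_ms;
                      });
    members.resize(m);  // deliver the message to exactly these m members
    return members;
}
```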

Relevance: 100.00%

Abstract:

Is there room for more creativity in information systems? This article grew out of an AWRE'04 panel discussion on creativity in requirements engineering and the impact of requirements engineering on creativity in systems engineering and systems use. Both the panel and the article were motivated by the goal of identifying a framework for understanding creativity in a larger context, and thus establishing a potential structure for future research. The authors' research backgrounds differ widely, and our views at times conflict, occasionally quite sharply. We make underlying world views - our own and those of relevant disciplines - explicit; identify the paradox caused by the need to be functionally creative while leaving room for creativity in successive stages; and argue for a multi-paradigm framework for resolving this paradox.