987 results for mathematics computing


Relevance: 20.00%

Abstract:

In this report I present a summary of the three dimensions used in the PISA 2003 assessment in mathematics: Content, Process and Situation, and I include some examples of items.

Relevance: 20.00%

Abstract:

This article presents the results of a study of teaching traditions in four European countries: Belgium (Flanders), England, Hungary and Spain. It is a small-scale study employing quantitative and qualitative methods which, rather than aiming at generalisations, seeks to shed some light that may help improve the teaching and learning of mathematics. It draws comparisons with the TIMSS and PISA results and derives some conclusions for the initial training of primary and secondary mathematics teachers. Here we extract the findings relating to the quantitative data and concentrate on the mathematical focus.

Relevance: 20.00%

Abstract:

In this paper, we report some findings from an investigation of a topic related to affect and mathematics that is not well represented in the literature. For some mathematicians, mathematics itself is a source of security in an uncertain world, and we investigated this feeling and experience in the case of 19 adult mathematicians working in universities and schools in Greece. The focus reported here is on ways that a relationship with mathematics offers a sense of permanence and stability on the one hand, and an assurance of novelty and progress on the other.

Relevance: 20.00%

Abstract:

A description of some of the variables used in the PISA 2003 project to assess competences.

Relevance: 20.00%

Abstract:

This study describes the performance of the mentors in a blended graduate-level training program for teachers of secondary school mathematics. We codified and analyzed the mentors’ comments on the projects presented by the groups of in-service teachers for whom they (the mentors) were responsible. To do this, we developed a structure of categories and codes based on a combination of a literature review, a model of teacher learning, and a cyclical review of the data. We performed two types of analysis: frequency and cluster. The first analysis permitted us to characterize the common actions shared by most of the mentors. From the second, we established three profiles of the mentors’ actions.
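
The coding-and-clustering workflow summarised above can be pictured with a minimal sketch. The category names, the toy counts, the choice of scipy's hierarchical clustering, and the three-cluster cut are assumptions for illustration only, not the study's actual instruments.

```python
# Minimal sketch of the frequency and cluster analyses described above.
# Category names, counts and the clustering choices are illustrative
# assumptions; the study's own coding scheme is not reproduced here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical counts of coded comment categories per mentor
# (rows = mentors, columns = categories such as "questions", "suggests",
# "evaluates", "references theory").
counts = np.array([
    [12,  3,  8,  1],
    [ 2, 14,  5,  0],
    [11,  4,  9,  2],
    [ 1, 13,  6,  1],
    [ 6,  6,  6,  6],
])

# Frequency analysis: relative use of each category by each mentor.
frequencies = counts / counts.sum(axis=1, keepdims=True)

# Cluster analysis: group mentors with similar comment profiles and cut
# the dendrogram into three clusters, mirroring the three profiles.
profiles = fcluster(linkage(frequencies, method="ward"), t=3, criterion="maxclust")
print(profiles)
```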

Relevance: 20.00%

Abstract:

Computer Aided Parallelisation Tools (CAPTools) is a toolkit designed to automate as much as possible of the process of parallelising scalar FORTRAN 77 codes. The toolkit combines a very powerful dependence analysis with user-supplied knowledge to build an extremely comprehensive and accurate dependence graph. The initial version has been targeted at structured mesh computational mechanics codes (e.g. heat transfer, Computational Fluid Dynamics (CFD)), and the associated simple mesh decomposition paradigm is utilised in the automatic code partition, execution control mask generation and communication call insertion. In this, the first of a series of papers [1–3], the authors discuss the parallelisation of a number of case study codes, showing how the various component tools may be used to develop a highly efficient parallel implementation in a few hours or days. The details of the parallelisation of the TEAMKE1 CFD code are described together with the results for three other numerical codes. The resulting parallel implementations are then tested on workstation clusters using PVM and on an i860-based parallel system, showing efficiencies well over 80%.
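
As a rough illustration of the simple mesh decomposition paradigm mentioned above, the sketch below block-partitions a 1-D index range and applies an execution control mask so that each process updates only the cells it owns. The function names and the toy stencil are assumptions for illustration, not CAPTools output.

```python
# Illustrative sketch (not CAPTools output): block-partition a 1-D mesh
# and restrict a stencil update to the locally owned range via a mask.
import numpy as np

def assignment_range(n_cells, n_procs, rank):
    """Return the [low, high) index range owned by `rank` in a 1-D block partition."""
    base, extra = divmod(n_cells, n_procs)
    low = rank * base + min(rank, extra)
    high = low + base + (1 if rank < extra else 0)
    return low, high

def masked_update(u, low, high):
    """Jacobi-style averaging applied only where the execution control mask holds."""
    new = u.copy()
    for i in range(1, len(u) - 1):
        if low <= i < high:          # execution control mask: owner computes
            new[i] = 0.5 * (u[i - 1] + u[i + 1])
    return new

u = np.linspace(0.0, 1.0, 16)
print([assignment_range(16, 4, r) for r in range(4)])
print(masked_update(u, *assignment_range(16, 4, 1)))
```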

Relevance: 20.00%

Abstract:

The requirement for a very accurate dependence analysis to underpin software tools that aid the generation of efficient parallel implementations of scalar code is argued. The current status of dependence analysis is shown to be inadequate for the generation of efficient parallel code, causing too many conservative assumptions to be made. This paper summarises the limitations of conventional dependence analysis techniques and then describes a series of extensions which enable the production of a much more accurate dependence graph. The extensions include analysis of symbolic variables; the development of a symbolic inequality disproof algorithm and its exploitation in a symbolic Banerjee inequality test; the use of inference engine proofs; the exploitation of exact dependence and dependence pre-domination attributes; interprocedural array analysis; conditional variable definition tracing; and integer array tracing and division calculations. Analysis case studies on typical numerical codes are shown to reduce the total dependencies estimated from conventional analysis by up to 50%. The techniques described in this paper have been embedded within a suite of tools, CAPTools, which combines analysis with user knowledge to produce efficient parallel implementations of numerical mesh-based codes.
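
The Banerjee inequality test mentioned above can be sketched in its basic, non-symbolic, single-subscript form as follows; the symbolic extensions described in the paper go well beyond this, so the function below is only a simplified illustration.

```python
# Simplified, non-symbolic Banerjee-style bounds test (illustrative only):
# can references X(a*i + c1) and X(b*j + c2), with i, j in [low, up],
# touch the same element?  Dependence requires a*i - b*j = c2 - c1, so it
# can be ruled out when that constant lies outside the attainable range.
from math import gcd

def banerjee_may_depend(a, c1, b, c2, low, up):
    rhs = c2 - c1
    # a*i - b*j is linear, so its extremes over the box lie at the corners.
    corners = [a * i - b * j for i in (low, up) for j in (low, up)]
    if not (min(corners) <= rhs <= max(corners)):
        return False                              # bounds disprove the dependence
    g = gcd(abs(a), abs(b))
    return g == 0 or rhs % g == 0                 # GCD test as an extra exact check

# X(2*i) vs X(2*j + 1): opposite parity, so no dependence.
print(banerjee_may_depend(2, 0, 2, 1, 1, 100))    # False
# X(i) vs X(j - 1): overlaps within the loop range, dependence possible.
print(banerjee_may_depend(1, 0, 1, -1, 1, 100))   # True
```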

Relevance: 20.00%

Abstract:

The availability of a very accurate dependence graph for a scalar code is the basis for the automatic generation of an efficient parallel implementation. The strategy for this task, which is encapsulated in a comprehensive data partitioning code generation algorithm, is described. This algorithm involves the data partition, the calculation of assignment ranges for partitioned arrays, the addition of a comprehensive set of execution control masks, the alteration of loop limits, and the addition and optimisation of communications for all data. In this context, the development and implementation of strategies to merge communications wherever possible has proved an important feature in producing efficient parallel implementations for numerical mesh-based codes. The code generation strategies described here are embedded within the Computer Aided Parallelisation Tools (CAPTools) software as a key part of a toolkit for automating as much as possible of the parallelisation process for mesh-based numerical codes. The algorithms used enable the parallelisation of real computational mechanics codes with only minor user interaction and without any prior manual customisation of the serial code to suit the parallelisation tool.
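
A very small sketch of the communication-merging idea referenced above: pending halo exchanges headed to the same neighbouring process are coalesced into a single message before being issued. The data structures and names are illustrative assumptions, not the CAPTools algorithm itself.

```python
# Illustrative sketch of merging communications: coalesce pending halo
# exchanges destined for the same neighbouring process into one message.
from collections import defaultdict

def merge_communications(pending):
    """pending: list of (neighbour_rank, array_name, index_range) requests."""
    merged = defaultdict(list)
    for neighbour, array_name, index_range in pending:
        merged[neighbour].append((array_name, index_range))
    # One combined send per neighbour instead of one per array section.
    return [(neighbour, sections) for neighbour, sections in merged.items()]

pending = [
    (1, "u", (48, 50)),
    (1, "v", (48, 50)),
    (2, "u", (0, 2)),
]
for neighbour, sections in merge_communications(pending):
    print(f"send to {neighbour}: {sections}")
```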

Relevance: 20.00%

Abstract:

User-supplied knowledge and interaction is a vital component of a toolkit for producing high quality parallel implementations of scalar FORTRAN numerical code. In this paper we consider the necessary components that such a parallelisation toolkit should possess to provide an effective environment in which to identify, extract and embed relevant user knowledge. We also examine to what extent these facilities are available in leading parallelisation tools; in particular we discuss how these issues have been addressed in the development of the user interface of the Computer Aided Parallelisation Tools (CAPTools). The CAPTools environment has been designed to enable user exploration, interaction and insertion of user knowledge to facilitate the automatic generation of very efficient parallel code. A key issue in the user's interaction is control of the volume of information, so that the user is focused on only that which is needed. User control over the level and extent of information revealed at any phase is supplied through a wide variety of filters. Another issue is the way in which information is communicated. Dependence analysis and its resulting graphs involve many sophisticated, rather abstract concepts unlikely to be familiar to most users of parallelising tools. As such, considerable effort has been made to communicate with the user in terms that they will understand. These features, amongst others, and their use in the parallelisation process are described and their effectiveness discussed.
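
In spirit, the information filtering described above might look like the toy sketch below; the edge attributes and filter criteria are assumptions for illustration and do not reflect the actual CAPTools interface.

```python
# Toy sketch of filtering a dependence graph so the user sees only the
# edges relevant to the question at hand (attribute names are assumed).
def filter_dependences(edges, routine=None, kinds=None, loop_carried=None):
    for e in edges:
        if routine is not None and e["routine"] != routine:
            continue
        if kinds is not None and e["kind"] not in kinds:
            continue
        if loop_carried is not None and e["loop_carried"] != loop_carried:
            continue
        yield e

edges = [
    {"routine": "SOLVE", "kind": "true",   "loop_carried": True,  "vars": ("A", "A")},
    {"routine": "SOLVE", "kind": "anti",   "loop_carried": False, "vars": ("B", "B")},
    {"routine": "INIT",  "kind": "output", "loop_carried": False, "vars": ("A", "A")},
]
# Show only loop-carried true dependences in SOLVE -- the ones that
# actually inhibit parallelisation of its loops.
print(list(filter_dependences(edges, routine="SOLVE", kinds={"true"}, loop_carried=True)))
```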

Relevance: 20.00%

Abstract:

This paper addresses the exploitation of overlapping communication with calculation within parallel FORTRAN 77 codes for computational fluid dynamics (CFD) and computational structural dynamics (CSD). The obvious objective is to overlap interprocessor communication with calculation on each processor in a distributed memory parallel system and so improve the efficiency of the parallel implementation. A general strategy for converting synchronous to overlapped communication is presented, together with tools to enable its automatic implementation in FORTRAN 77 codes. This strategy is then implemented within the parallelisation toolkit CAPTools to facilitate the automatic generation of parallel code with overlapped communications. The success of these tools is demonstrated on two codes from the NAS-PAR and PERFECT benchmark suites. In each case, the tools produce parallel code with overlapped communications which is as good as that which could be generated manually. The parallel performance of the codes also improves in line with expectations.
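
The synchronous-to-overlapped conversion described above follows a well-known pattern: start non-blocking transfers, compute the interior points that need no remote data, then complete the transfers and finish the boundary. The mpi4py sketch below illustrates that pattern under the assumption of a simple 1-D halo exchange; it is not the code generated by CAPTools.

```python
# Illustrative 1-D halo exchange with overlapped communication (mpi4py),
# assuming each rank owns a contiguous block with one ghost cell per side.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

u = np.full(10 + 2, float(rank))      # local block plus two ghost cells
new = u.copy()

# 1. Start non-blocking halo transfers.
reqs = [
    comm.Isend(u[1:2],   dest=left,    tag=0),
    comm.Isend(u[-2:-1], dest=right,   tag=1),
    comm.Irecv(u[-1:],   source=right, tag=0),
    comm.Irecv(u[0:1],   source=left,  tag=1),
]

# 2. Overlap: update interior cells that need no remote data.
new[2:-2] = 0.5 * (u[1:-3] + u[3:-1])

# 3. Complete the transfers, then update the boundary cells.
MPI.Request.Waitall(reqs)
new[1] = 0.5 * (u[0] + u[2])
new[-2] = 0.5 * (u[-3] + u[-1])
```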

Relevance: 20.00%

Abstract:

In fluid mechanics, it is well accepted that the Euler equation is a reduced form of the Navier-Stokes equation, obtained by truncating the viscous effects. Other truncation techniques are currently used to reduce the Navier-Stokes equation to simpler forms. This paper describes one such technique, suitable for adaptive domain decomposition methods for the solution of viscous flow problems. The physical domain of a viscous flow problem is partitioned into viscous and inviscid subdomains without overlapping regions, and the technique is embedded into a finite volume method. Some numerical results are provided for a flat plate and the NACA0012 aerofoil. Issues related to distributed computing are discussed.
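
A sketch of how a non-overlapping viscous/inviscid split might be expressed inside a finite-volume-style update loop is given below; the sensor, the threshold, and the flux routines are placeholders, and the actual partitioning criterion used in the paper is not reproduced here.

```python
# Illustrative viscous/inviscid domain split (placeholder physics): each
# cell is flagged by a sensor and updated with the corresponding flux model.
import numpy as np

def sensor(wall_distance, threshold=0.1):
    """Flag cells near the wall as viscous; everything else is inviscid."""
    return wall_distance < threshold

def viscous_flux(u, i):
    return u[i + 1] - 2.0 * u[i] + u[i - 1]      # stands in for the viscous terms

def inviscid_flux(u, i):
    return -(u[i + 1] - u[i - 1]) / 2.0          # stands in for the Euler terms

wall_distance = np.linspace(0.0, 1.0, 50)
u = np.sin(np.linspace(0.0, np.pi, 50))
is_viscous = sensor(wall_distance)

du = np.zeros_like(u)
for i in range(1, len(u) - 1):
    du[i] = viscous_flux(u, i) if is_viscous[i] else inviscid_flux(u, i)
```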

Relevance: 20.00%

Abstract:

The classical Purcell vector method for the construction of solutions to dense systems of linear equations is extended to a flexible orthogonalisation procedure. Some properties of the orthogonalisation procedure are revealed in relation to classical Gauss-Jordan elimination with or without pivoting. Additional properties that are not shared by classical Gauss-Jordan elimination are exploited. Further properties related to distributed computing are discussed, with applications to panel element equations in subsonic compressible aerodynamics. Using an orthogonalisation procedure within panel methods enables a functional decomposition of the sequential panel methods and leads to a two-level parallelism.
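
The classical Purcell vector method on which the paper builds can be sketched as follows: the rows of the augmented system [A | -b] are processed one at a time, and the current set of candidate vectors is replaced by combinations orthogonal to each row. This is only the textbook form with a naive pivot choice, not the flexible orthogonalisation procedure developed in the paper.

```python
# Textbook sketch of Purcell's vector method for solving A x = b (dense,
# nonsingular A); the pivot here is simply the vector with the largest
# projection onto the current row.
import numpy as np

def purcell_solve(A, b):
    n = len(b)
    rows = np.hstack([A, -b.reshape(-1, 1)])      # rows of the augmented system [A | -b]
    V = [np.eye(n + 1)[i] for i in range(n + 1)]  # initial candidate vectors e_1 .. e_{n+1}
    for r in rows:
        d = [r @ v for v in V]
        p = int(np.argmax(np.abs(d)))             # pivot vector for this equation
        # Replace the others by combinations orthogonal to row r; drop the pivot.
        V = [v - (d[i] / d[p]) * V[p] for i, v in enumerate(V) if i != p]
    v = V[0]                                      # spans the null space of the augmented rows
    return v[:n] / v[n]                           # scale so the last component equals 1

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([9.0, 13.0])
print(purcell_solve(A, b))                        # should agree with the direct solve below
print(np.linalg.solve(A, b))
```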

Relevance: 20.00%

Abstract:

A novel multi-scale seamless model of brittle-crack propagation is proposed and applied to the simulation of fracture growth in a two-dimensional Ag plate with macroscopic dimensions. The model represents the crack propagation at the macroscopic scale as the drift-diffusion motion of the crack tip alone. The diffusive motion is associated with the crack-tip coordinates in position space and reflects the oscillations observed in the crack velocity once it reaches its critical value. The model couples the crack dynamics at the macroscale and the nanoscale via an intermediate mesoscale continuum. The finite-element method is employed to make the transition from the macroscale to the nanoscale by computing the continuum-based displacements of the atoms at the boundary of an atomic lattice embedded within the plate and surrounding the tip. Molecular dynamics (MD) simulation then drives the crack tip forward, producing the tip's critical velocity and its diffusion constant. These are then used in the Ito stochastic calculus to make the reverse transition from the nanoscale back to the macroscale. The MD-level modelling is based on the use of a many-body potential. The model successfully reproduces the crack-velocity oscillations, the roughening transitions of the crack surfaces, and the macroscopic crack trajectory. The implications for 3-D modelling are discussed.
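
Read as an Ito stochastic process, the drift-diffusion description of the crack tip above corresponds, in its generic form, to a Langevin-type equation of the kind shown below; the specific drift and diffusion values used in the paper come from the MD simulations and are not reproduced here.

```latex
% Generic Ito form assumed for the drift-diffusion motion of the crack tip,
% with v_c the critical tip velocity and D its diffusion constant from MD:
\mathrm{d}X_t = v_c\,\mathrm{d}t + \sqrt{2D}\,\mathrm{d}W_t
```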

Relevance: 20.00%

Abstract:

We provide a select overview of tools supporting traditional Jewish learning. Then we go on to discuss our own HyperJoseph/HyperIsaac project in instructional hypermedia. Its application is to teaching, teacher training, and self-instruction in given Bible passages. The treatment of two narratives has been developed thus far. The tool enables an analysis of the text in several respects: linguistic, narratological, etc. Moreover, the Scriptures' focality throughout cultural history makes this domain of application particularly challenging, in that the tool is required to encompass the accretion of receptions in the cultural repertoire, i.e., several layers of textual traditions, either hermeneutic (i.e., interpretive) or appropriations, related to the given core passage. These include "secondary" texts (i.e., texts that respond to or derive from the core passage) from realms as disparate as Roman-age and later homiletics, medieval and later commentaries or supercommentaries, literary appropriations, and references to the arts and modern scholarship. In particular, the Midrash (homiletic expansions) is adept at narrative gap filling, so the narratives mushroom at the interstices where the primary text is silent. The genealogy of the project is rooted in Weiss's index of the novelist Agnon's writings, which was eventually upgraded into a hypertextual tool including Agnon's full text and ancillary materials. Those early tools being intended primarily for reference and research support in literary studies, the Agnon hypertext system was initially emulated in the conception of HyperJoseph, which is applied to the Joseph story from Genesis. The transition from a tool for reference to an instructional tool then required a thorough reconception from an educational perspective, which led to HyperIsaac, on the sacrifice of Isaac, and to a redesign and upgrade of HyperJoseph patterned after HyperIsaac.
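
The layered organisation described above, a core biblical passage with accreted receptions attached to spans of it, can be pictured with a minimal data-model sketch; the class and field names are invented for illustration and do not describe the actual HyperJoseph/HyperIsaac implementation.

```python
# Minimal illustrative data model: a core passage with layered receptions
# (names are invented; this is not the HyperJoseph/HyperIsaac schema).
from dataclasses import dataclass, field

@dataclass
class Reception:
    layer: str          # e.g. "Midrash", "medieval commentary", "literary appropriation"
    source: str
    anchor: tuple       # span of the core passage the reception responds to
    text: str

@dataclass
class CorePassage:
    reference: str      # e.g. "Genesis 22"
    text: str
    receptions: list = field(default_factory=list)

    def at(self, position):
        """All receptions whose anchor covers a position in the core text."""
        return [r for r in self.receptions if r.anchor[0] <= position <= r.anchor[1]]

aqedah = CorePassage("Genesis 22", "…")
aqedah.receptions.append(Reception("Midrash", "Genesis Rabbah", (0, 0), "…"))
print(aqedah.at(0))
```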

Relevance: 20.00%

Abstract:

A higher-order version of the Hopfield neural network is presented which performs a simple vector quantisation or clustering function. This model requires no penalty terms to impose constraints in the Hopfield energy, in contrast to the usual model, in which the energy involves only terms quadratic in the state vector. The energy function is shown to have no local minima within the unit hypercube of the state vector, so the network converges only to valid final states. Optimisation trials show that the network can consistently find optimal clusterings for small trial problems and near-optimal ones for a large data set consisting of the intensity values of a digitised grey-level image.
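
For reference, the "usual" quadratic Hopfield energy that the abstract contrasts with has the standard form below; the higher-order clustering energy introduced in the paper adds terms beyond quadratic and is not reproduced here.

```latex
% Standard quadratic Hopfield energy referred to above (the paper's model
% augments this with higher-order terms, not shown here):
E(\mathbf{s}) = -\tfrac{1}{2}\,\mathbf{s}^{\mathsf T} W \mathbf{s} - \mathbf{b}^{\mathsf T}\mathbf{s}
```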