909 results for Interval analysis (Mathematics)
Abstract:
This study analyzed three fifth grade students’ misconceptions and error patterns when working with equivalence, addition and subtraction of fractions. The findings revealed that students used both conceptual and procedural knowledge to solve the problems. They used pictures, gave examples, and made connections to other mathematical concepts and to daily life topics. Error patterns found include using addition and subtraction of numerators and denominators, and finding the greatest common factor.
Abstract:
A constructivist philosophy underlies the Irish primary mathematics curriculum. As constructivism is a theory of learning, its implications for teaching need to be addressed. This study explores the experiences of four senior class primary teachers as they endeavour to teach mathematics from a constructivist-compatible perspective with primary school children in Ireland over a school-year period. Such a perspective implies that children should take ownership of their learning while working in groups on tasks which challenge them at their zone of proximal development. The key question on which the research is based is: to what extent will an exposure to constructivism and its implications for the classroom impact on teaching practices within the senior primary mathematics classroom in both the short and longer term? Although several perspectives on constructivism have evolved (von Glasersfeld (1995), Cobb and Yackel (1996), Ernest (1991, 1998)), it is the synthesis of the emergent perspective which becomes pivotal to the Irish primary mathematics curriculum. Tracking the development of four primary teachers in a professional learning initiative involving constructivist-compatible approaches necessitated the use of Borko’s (2004) Phase 1 research methodology to account for the evolution in teachers’ understanding of constructivism. Teachers’ and pupils’ viewpoints were recorded using both audio and video technology. Teachers were interviewed at the beginning and end of the project, and also one year on, to ascertain how their views had evolved. Pupils were interviewed at the end of the project only. The data were analysed from a Jaworskian perspective, i.e. using the categories of her Teaching Triad: management of learning, mathematical challenge and sensitivity to students. Management of learning concerns how the teacher organises her classroom to maximise learning opportunities for pupils. 
Mathematical challenge is reminiscent of the Vygotskian (1978) construct of the zone of proximal development. Sensitivity to students involves a consciousness on the part of the teacher as to how pupils are progressing with a mathematical task and whether or not to intervene to scaffold their learning. Through this analysis, a synthesis of the teachers’ interpretations of constructivist philosophy, with concomitant implications for theory, policy and practice, emerges. The study identifies strategies for teachers wishing to adopt a constructivist-compatible approach to their work. Like O’Shea (2009), it also highlights the likely difficulties to be experienced by such teachers as they move from utilising teacher-dominated methods of teaching mathematics to ones in which pupils have more ownership over their learning.
Abstract:
This work introduces a tessellation-based model for the declivity analysis of geographic regions. The analysis of the relief declivity, which is embedded in the rules of the model, categorizes each tessellation cell, with respect to the whole considered region, according to the (positive, negative, null) sign of the declivity of the cell. Such information is represented in the states assumed by the cells of the model. The overall configuration of such cells allows the division of the region into subregions of cells belonging to the same category, that is, presenting the same declivity sign. In order to control the errors coming from the discretization of the region into tessellation cells, or resulting from numerical computations, interval techniques are used. The implementation of the model is naturally parallel since the analysis is performed on the basis of local rules. An immediate application is in geophysics, where an adequate subdivision of geographic areas into segments presenting similar topographic characteristics is often convenient.
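The sign-classification idea in this abstract can be sketched in a few lines: each cell's declivity is widened into an interval to absorb discretization and rounding error, and the cell is categorized only when the whole interval lies on one side of zero. This is an illustrative sketch only; the model's actual local rules, the sample values, and the error radius `eps` are assumptions, not details from the paper.

```python
def classify(declivity, eps):
    """Sign category of a declivity value enclosed in [d - eps, d + eps].
    eps bounds the discretization/numerical error (hypothetical here)."""
    lo, hi = declivity - eps, declivity + eps
    if lo > 0:
        return "positive"
    if hi < 0:
        return "negative"
    if lo == 0 and hi == 0:
        return "null"
    return "indeterminate"   # interval straddles zero: refine this cell

cells = [0.31, -0.02, 0.0, -0.45]            # hypothetical cell declivities
print([classify(d, eps=0.05) for d in cells])
# → ['positive', 'indeterminate', 'indeterminate', 'negative']
```

Cells whose interval straddles zero cannot be assigned a reliable sign at the current resolution, which is exactly the situation where a finer tessellation (or tighter error bound) would be needed.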
Abstract:
Diffusion equations that use time fractional derivatives are attractive because they describe a wealth of problems involving non-Markovian random walks. The time fractional diffusion equation (TFDE) is obtained from the standard diffusion equation by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1). Developing numerical methods for solving fractional partial differential equations is a new research field and the theoretical analysis of the numerical methods associated with them is not fully developed. In this paper an explicit conservative difference approximation (ECDA) for the TFDE is proposed. We give a detailed analysis for this ECDA and generate discrete models of random walk suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation. The stability and convergence of the ECDA for the TFDE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
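For readers unfamiliar with such schemes, an explicit discretization of a time-fractional diffusion equation can look as follows. This is a generic, textbook-style sketch using the standard L1 approximation of the Caputo derivative, not the paper's ECDA; the grid, parameters, and initial pulse are all assumptions for illustration.

```python
import math

def tfde_explicit(u0, alpha, K, dx, dt, steps):
    """Explicit L1-type scheme for the Caputo time-fractional diffusion
    equation d^alpha u/dt^alpha = K d^2 u/dx^2, 0 < alpha < 1, with
    homogeneous Dirichlet boundaries. Generic sketch, not the ECDA."""
    # stability parameter: must be small enough for an explicit scheme
    r = math.gamma(2 - alpha) * K * dt**alpha / dx**2
    b = lambda k: (k + 1)**(1 - alpha) - k**(1 - alpha)   # L1 weights
    hist = [list(u0)]
    for n in range(steps):
        u = hist[-1]
        new = u[:]                     # boundary values stay fixed
        for j in range(1, len(u) - 1):
            # memory term: the fractional derivative couples all past levels,
            # reflecting the non-Markovian character of the walk
            mem = sum(b(k) * (hist[n + 1 - k][j] - hist[n - k][j])
                      for k in range(1, n + 1))
            new[j] = u[j] - mem + r * (u[j + 1] - 2 * u[j] + u[j - 1])
        hist.append(new)
    return hist[-1]

# hypothetical setup: unit pulse on an 11-point grid
u = tfde_explicit([0.0] * 5 + [1.0] + [0.0] * 5,
                  alpha=0.5, K=1.0, dx=0.1, dt=1e-6, steps=20)
```

Note the growing memory sum: unlike classical diffusion, every time step revisits the full solution history, which is why the cost of such schemes grows quadratically in the number of steps.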
Abstract:
Information graphics have become increasingly important in representing, organising and analysing information in a technological age. In classroom contexts, information graphics are typically associated with graphs, maps and number lines. However, all students need to become competent with the broad range of graphics that they will encounter in mathematical situations. This paper provides a rationale for creating a test to measure students’ knowledge of graphics. This instrument can be used in mass testing and individual (in-depth) situations. Our analysis of the utility of this instrument informs policy and practice. The results provide an appreciation of the relative difficulty of different information graphics; and provide the capacity to benchmark information about students’ knowledge of graphics. The implications for practice include the need to support the development of students’ knowledge of graphics, the existence of gender differences, the role of cross-curriculum applications in learning about graphics, and the need to explicate the links among graphics.
Abstract:
Enhancing children's self-concepts is widely accepted as a critical educational outcome of schooling and is postulated as a mediating variable that facilitates the attainment of other desired outcomes such as improved academic achievement. Despite considerable advances in self-concept research, there has been limited progress in devising teacher-administered enhancement interventions. This is unfortunate as teachers are crucial change agents during important developmental periods when self-concept is formed. The primary aim of the present investigation is to build on the promising features of previous self-concept enhancement studies by: (a) combining two exciting research directions developed by Burnett and Craven to develop a potentially powerful cognitive-based intervention; (b) incorporating recent developments in theory and measurement to ensure that the multidimensionality of self-concept is accounted for in the research design; (c) fully investigating the effects of a potentially strong cognitive intervention on reading, mathematics, school and learning self-concepts by using a large sample size and a sophisticated research design; (d) evaluating the effects of the intervention on affective and cognitive subcomponents of reading, mathematics, school and learning self-concepts over time to test for differential effects of the intervention; (e) modifying and extending current procedures to maximise the successful implementation of a teacher-mediated intervention in a naturalistic setting by incorporating sophisticated teacher training as suggested by Hattie (1992) and including an assessment of the efficacy of implementation; and (f) examining the durability of effects associated with the intervention.
Abstract:
Aims: To assess the effectiveness of current treatment approaches to assist benzodiazepine discontinuation. Methods: A systematic review of approaches to benzodiazepine discontinuation in general practice and out-patient settings was undertaken. Routine care was compared with three treatment approaches: brief interventions, gradual dose reduction (GDR) and psychological interventions. GDR was compared with GDR plus psychological interventions or substitutive pharmacotherapies. Results: Inclusion criteria were met by 24 studies, and a further eight were identified by subsequent searches. GDR [odds ratio (OR) = 5.96, confidence interval (CI) = 2.08–17.11] and brief interventions (OR = 4.37, CI = 2.28–8.40) provided superior cessation rates at post-treatment to routine care. Psychological treatment plus GDR was superior to both routine care (OR = 3.38, CI = 1.86–6.12) and GDR alone (OR = 1.82, CI = 1.25–2.67). However, substitutive pharmacotherapies did not add to the impact of GDR (OR = 1.30, CI = 0.97–1.73), and abrupt substitution of benzodiazepines by other pharmacotherapy was less effective than GDR alone (OR = 0.30, CI = 0.14–0.64). Few studies on any technique had significantly greater benzodiazepine discontinuation than controls at follow-up. Conclusions: Providing an intervention is more effective than routine care. Psychological interventions may improve discontinuation above GDR alone. While some substitutive pharmacotherapies may have promise, current evidence is insufficient to support their use.
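The odds ratios and confidence intervals reported above follow the usual 2×2-table construction. The sketch below shows that construction with the standard Woolf (log-method) interval; the counts in the example are hypothetical and unrelated to the review's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-method) 95% confidence interval for a
    2x2 table: a/b = events/non-events in the intervention arm,
    c/d = events/non-events in the comparison arm."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts: 30/20 quit/continued vs 15/35 under routine care
or_, lo, hi = odds_ratio_ci(30, 20, 15, 35)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")   # OR=3.50, 95% CI 1.53-8.01
```

A CI whose lower bound exceeds 1 (as for GDR and brief interventions above) indicates a statistically significant benefit over the comparator; an interval spanning 1 (as for substitutive pharmacotherapies, CI 0.97–1.73) does not.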
Abstract:
This paper investigates robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate type of Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither model transformation nor free weighting matrices are employed in our theoretical derivation, the developed stability criteria significantly improve and simplify the existing stability conditions. Also, the maximum allowable upper delay bound and the controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
Abstract:
Over the past decade, Thai schools have been encouraged by the Thai Ministry of Education to introduce more student-centred pedagogies such as cooperative learning into their classrooms (Carter, 2006). However, prior research has indicated that the implementation of cooperative learning into Thai schools has been confounded by cultural traditions endemic within Thai schools (Deveney, 2005). The purpose of the study was to investigate how 32 Grade 3 and 32 Grade 4 students enrolled in a Thai school engaged with cooperative learning in mathematics classrooms after they had been taught cooperative learning strategies and skills. These strategies and skills were derived from a conceptual framework that was the outcome of an analysis and synthesis of social learning, behaviourist and socio-cognitive theories found in the research literature. The intervention began with a two week program during which the students were introduced to and engaged in practicing a set of cooperative learning strategies and skills (3 times a week). Then during the next four weeks (3 times a week), these cooperative learning strategies and skills were applied in the contexts of two units of mathematics lessons. A survey of student attitudes with respect to their engagement in cooperative learning was conducted at the conclusion of the six-week intervention. The results from the analysis of the survey data were triangulated with the results derived from the analysis of data from classroom observations and teacher interviews. The analysis of data identified four complementary processes that need to be considered by Thai teachers attempting to implement cooperative learning into their mathematics classrooms. The paper concludes with a set of criteria derived from the results of the study to guide Thai teachers intending to implement cooperative learning strategies and skills in their classrooms.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. 
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. 
To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
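The generalized-Gaussian modelling of wavelet coefficients mentioned in this abstract can be illustrated with a simpler, widely used alternative to the least-squares formulation: moment matching via the ratio E|x| / sqrt(E x²), which is a monotone function of the shape parameter and can be inverted by bisection. This is a stand-in sketch, not the thesis's estimator; the synthetic data and search bounds are assumptions.

```python
import math
import random

def ggd_shape(samples):
    """Moment-matching estimate of the generalized Gaussian shape
    parameter from the ratio E|x| / sqrt(E x^2); shape = 1 is Laplacian,
    shape = 2 is Gaussian."""
    n = len(samples)
    mean_abs = sum(abs(x) for x in samples) / n
    mean_sq = sum(x * x for x in samples) / n
    target = mean_abs / math.sqrt(mean_sq)
    # theoretical ratio as a function of shape b (strictly increasing)
    rho = lambda b: math.gamma(2 / b) / math.sqrt(math.gamma(1 / b) * math.gamma(3 / b))
    lo, hi = 0.05, 10.0                 # assumed search bracket
    for _ in range(60):                 # bisection on the monotone ratio
        mid = (lo + hi) / 2
        if rho(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# sanity check on synthetic Gaussian data (true shape parameter = 2)
random.seed(0)
est = ggd_shape([random.gauss(0.0, 1.0) for _ in range(20000)])
print(round(est, 1))                    # close to 2 for Gaussian input
```

Wavelet detail coefficients of natural and fingerprint images typically yield shape estimates well below 2, i.e. distributions more sharply peaked than Gaussian, which is what motivates the generalized Gaussian model in the first place.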
Abstract:
The primary purpose of this research was to examine individual differences in learning from worked examples. By integrating cognitive style theory and cognitive load theory, it was hypothesised that an interaction existed between individual cognitive style and the structure and presentation of worked examples in their effect upon subsequent student problem solving. In particular, it was hypothesised that Analytic-Verbalisers, Analytic-Imagers, and Wholist-Imagers would perform better on a posttest after learning from structured-pictorial worked examples than after learning from unstructured worked examples. For Analytic-Verbalisers it was reasoned that the cognitive effort required to impose structure on unstructured worked examples would hinder learning. Alternatively, it was expected that Wholist-Verbalisers would perform better after learning from unstructured worked examples than after learning from structured-pictorial worked examples. The images of the structured-pictorial format, incongruent with the Wholist-Verbaliser style, would be expected to split attention between the text and the diagrams. The information contained in the images would also be a source of redundancy and not easily ignored in the integrated structured-pictorial format. Despite a number of authors having emphasised the need to include individual differences as a fundamental component of problem solving within domain-specific subjects such as mathematics, few studies have attempted to investigate a relationship between mathematical or science instructional method, cognitive style, and problem solving. Cognitive style theory proposes that the structure and presentation of learning material is likely to affect each of the four cognitive styles differently. No study could be found which has used Riding's (1997) model of cognitive style as a framework for examining the interaction between the structural presentation of worked examples and an individual's cognitive style. 
A total of 269 Year 12 Mathematics B students from five urban and rural secondary schools in Queensland, Australia, participated in the main study. A factorial (three treatments by four cognitive styles) between-subjects multivariate analysis of variance indicated a statistically significant interaction. As the difficulty of the posttest components increased, the empirical evidence supporting the research hypotheses became more pronounced. The rigour of the study's theoretical framework was further tested by the construction of a measure of instructional efficiency, based on an index of cognitive load, and the construction of a measure of problem-solving efficiency, based on problem-solving time. The consistent empirical evidence within this study that learning from worked examples is affected by an interaction of cognitive style and the structure and presentation of the worked examples emphasises the need to consider individual differences among senior secondary mathematics students to enhance educational opportunities. Implications for teaching and learning are discussed and recommendations for further research are outlined.