976 results for Iterative methods (mathematics)


Relevance:

30.00%

Publisher:

Abstract:

This study investigated Microteaching Lesson Study (MLS) and three possible MLS mentor interaction structures during the debriefing sessions in relation to elementary preservice teacher development of knowledge for teaching. One hundred three elementary preservice teachers enrolled in five different sections of a mathematics methods course at a southern urban university were part of the study. This included 72 participants who completed MLS across three different mentor interaction structures as part of their course requirements, and 31 elementary preservice teachers who did not complete MLS as part of their methods course and served as a comparison group for a portion of the study. A sequential mixed-methods research design was used to analyze the relationship between MLS mentor interaction structure and growth in preservice teachers' mathematics teacher knowledge. Data sources included pre- and post-assessments, group-developed lesson plans and final reports, a feedback survey with Likert-type and open-ended questions, and transcripts of audio-recorded debriefing sessions. The pre- and post-assessments were analyzed using Analysis of Variance (ANOVA), and descriptive statistics were used to analyze the Likert-type feedback survey questions. Group MLS lesson plans, final reports, and transcripts of debriefing sessions, along with the open-ended questions from the feedback survey, were coded in a three-step process as described by Miles and Huberman (1994). In alignment with findings from M. Fernandez (2005, 2010), elementary preservice teachers participating in MLS grew in content knowledge related to MLS topics taught by one another. Results from the analysis of pre- and post-assessments of content knowledge revealed that participants grew in their understanding of the mathematics topics taught during MLS irrespective of their mentor interaction structure and when compared to the participants who did not complete MLS in their methods course.
Findings from the analysis of lesson plans for growth in pedagogical content knowledge revealed that the most growth in this area occurred for participants assigned to the interaction structure in which the MLS mentor participated in the first two debriefing sessions. Analysis of the transcripts of the discourse during the debriefing sessions and the feedback surveys supported the finding that the elementary preservice teachers assigned to the interaction structure in which the MLS mentor participated in the first and second debriefing sessions benefited more from the MLS experience than elementary preservice teachers assigned to the other two interaction structures (MLS mentor participating in only the first debriefing session, and MLS mentor participating in only the last debriefing session).

Relevance:

30.00%

Publisher:

Abstract:

Mathematics is rigidly classified as an academic discipline. This determines curriculum content and teaching and evaluation methods. These methods can give rise to negative views of mathematics, resulting in increased math anxiety. Educators, therefore, need to look beyond the discipline to provide a classroom environment that meets students’ needs.

Relevance:

30.00%

Publisher:

Abstract:

Math literacy is imperative for success in society, and experience is key to acquiring it. A preschooler's world is full of mathematical experiences: children are continually counting, sorting and comparing as they play, and as they engage in these activities they use language as a tool to express their mathematical thinking. If teachers are aware of these teachable moments and help children bridge their daily experiences to mathematical concepts, math literacy may be enhanced. This study described the interactions between teachers and preschoolers, determining the extent to which teachers scaffold children's everyday language into expressions of mathematical concepts. Of primary concern were the teachers' responses to children's implicit mathematical utterances made while engaged in block play. The parallel mixed-methods research design consisted of two strands. Strand 1 of the study focused on preschoolers' use of everyday language and the teachers' responses after a child made a mathematical utterance. Twelve teachers and 60 students were observed and videotaped while engaged in block play. Each teacher worked with five children for 20 minutes, yielding 240 minutes of observation. Interaction analysis was used to deductively analyze the recorded observations and field notes. Using a priori codes for the five mathematical concepts, it was found that children produced 2,831 mathematical utterances. Teachers ignored 60% of these utterances and responded to, but did not mediate, 30% of them. Only 10% of the mathematical utterances were mediated to a mathematical concept. Strand 2 focused on the teachers' views of the role of language in early childhood mathematics. The 12 teachers who had been observed as part of the first strand of the study were interviewed.
Based on a thematic analysis of these interviews, three themes emerged: (a) the importance of a child's environment, (b) the importance of an education in society, and (c) the role of math in early childhood. Finally, based on a meta-inference across both strands, three further themes emerged: (a) teacher conception of math, (b) teacher practice, and (c) teacher sensitivity. Implications of the findings involve policy, curriculum, and professional development.

Relevance:

30.00%

Publisher:

Abstract:

Constant technology advances have caused a data explosion in recent years. Accordingly, modern statistical and machine learning methods must be adapted to deal with complex and heterogeneous data types. This is particularly true for analyzing biological data. For example, DNA sequence data can be viewed as categorical variables, with each nucleotide taking one of four categories. Gene expression data, depending on the quantification technology, may be continuous numbers or counts. With the advancement of high-throughput technology, the abundance of such data has become unprecedentedly rich. Efficient statistical approaches are therefore crucial in this big data era.

Previous statistical methods for big data often aim to find low-dimensional structures in the observed data. For example, in a factor analysis model a latent Gaussian-distributed multivariate vector is assumed; with this assumption, a factor model produces a low-rank estimate of the covariance of the observed variables. Another example is the latent Dirichlet allocation model for documents, in which the mixture proportions of topics, represented by a Dirichlet-distributed variable, are assumed. This dissertation proposes several novel extensions to these statistical methods, developed to address challenges in big data. The novel methods are applied in multiple real-world applications, including construction of condition-specific gene co-expression networks, estimating shared topics among newsgroups, analysis of promoter sequences, analysis of political-economic risk data, and estimating population structure from genotype data.
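The low-rank-plus-diagonal covariance structure implied by a factor model can be sketched in a few lines. This is a minimal NumPy illustration of the idea, not the dissertation's actual methods; all dimensions and parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n = 10, 2, 5000            # observed dims, latent factors, sample size

W = rng.normal(size=(p, k))      # factor loadings
psi = rng.uniform(0.1, 0.5, p)   # idiosyncratic (noise) variances

# Factor model: x = W z + eps, with z ~ N(0, I_k), eps ~ N(0, diag(psi)).
Z = rng.normal(size=(n, k))
X = Z @ W.T + rng.normal(scale=np.sqrt(psi), size=(n, p))

# The model-implied covariance is low rank plus diagonal: W W^T + diag(psi).
Sigma_model = W @ W.T + np.diag(psi)
Sigma_sample = np.cov(X, rowvar=False)

# W W^T has rank k << p: this is the "low-dimensional structure" exploited.
print(np.linalg.matrix_rank(W @ W.T))  # prints 2
```

The payoff is parsimony: instead of p(p+1)/2 free covariance entries, the model has only p*k loadings plus p noise variances.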

Relevance:

30.00%

Publisher:

Abstract:

Prior work by our research group, which quantified the alarming levels of radiation dose to patients with Crohn’s disease from medical imaging and the notable shift towards CT imaging that makes these patients an at-risk group, provided the context for this work. CT delivers some of the highest doses of ionising radiation in diagnostic radiology. Once a medical imaging examination is deemed justified, there is an onus on the imaging team to endeavour to produce diagnostic quality CT images at the lowest possible radiation dose to that patient. The fundamental limitation of conventional CT raw data reconstruction was the inherent coupling of administered radiation dose with observed image noise: the lower the radiation dose, the noisier the image. The renaissance, rediscovery and refinement of iterative reconstruction removes this limitation, allowing either an improvement in image quality without increasing radiation dose, or maintenance of image quality at a lower radiation dose, compared with traditional image reconstruction. This thesis is fundamentally an exercise in optimisation of clinical CT practice, with the objectives of assessing iterative reconstruction as a method for improving image quality in CT, exploring the associated potential for radiation dose reduction, and developing a new split dose CT protocol with the aim of achieving and validating diagnostic quality submillisievert CT imaging in patients with Crohn’s disease. In this study, we investigated the interplay of user-selected parameters on radiation dose and image quality in phantoms and cadavers, comparing traditional filtered back projection (FBP) with iterative reconstruction algorithms. This resulted in the development of an optimised, refined and appropriate split dose protocol for CT of the abdomen and pelvis in clinical patients with Crohn’s disease, allowing contemporaneous acquisition of both modified and conventional dose CT studies.
This novel algorithm was then applied to 50 patients with a suspected acute complication of known Crohn’s disease, and the raw data were reconstructed with FBP, adaptive statistical iterative reconstruction (ASiR) and model based iterative reconstruction (MBIR). Conventional dose CT images with FBP reconstruction were used as the reference standard against which the modified dose CT images were compared in terms of radiation dose, diagnostic findings and image quality indices. As there are multiple possible user-selected strengths of ASiR available, these were compared in terms of image quality to determine the optimal strength for this modified dose CT protocol. Modified dose CT images with MBIR were also compared with contemporaneous abdominal radiographs, where performed, in terms of diagnostic yield and radiation dose. Finally, attenuation measurements in organs and tissues with each reconstruction algorithm were compared to assess for preservation of tissue characterisation capabilities. In the phantom and cadaveric models, both forms of iterative reconstruction examined (ASiR and MBIR) were superior to FBP across a wide variety of imaging protocols, with MBIR superior to ASiR in all areas other than reconstruction speed. We established that ASiR appears to work to a target percentage noise reduction, whilst MBIR works to a target residual level of absolute noise in the image. Modified dose CT images reconstructed with both ASiR and MBIR were non-inferior to conventional dose CT with FBP in terms of diagnostic findings, despite reduced subjective and objective indices of image quality. Mean dose reductions of 72.9–73.5% were achieved with the modified dose protocol, with a mean effective dose of 1.26 mSv. MBIR was again demonstrated to be superior to ASiR in terms of image quality.
The overall optimal ASiR strength for the modified dose protocol used in this work is ASiR 80%, as this provides the most favourable balance of peak subjective image quality indices with less objective image noise than the corresponding conventional dose CT images reconstructed with FBP. Despite guidelines to the contrary, abdominal radiographs are still often used in the initial imaging of patients with a suspected complication of Crohn’s disease. We confirmed the superiority of modified dose CT with MBIR over abdominal radiographs at comparable doses in the detection of Crohn’s disease and non-Crohn’s disease related findings. Finally, we demonstrated (in phantoms, cadavers and in vivo) that attenuation values do not change significantly across reconstruction algorithms, meaning tissue characterisation capabilities are preserved with iterative reconstruction. Both adaptive statistical and model based iterative reconstruction algorithms represent feasible methods of facilitating the acquisition of diagnostic quality CT images of the abdomen and pelvis in patients with Crohn’s disease at markedly reduced radiation doses. Our modified dose CT protocol allows dose savings of up to 73.5% compared with conventional dose CT, meaning submillisievert imaging is possible in many of these patients.
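The dose–noise coupling described above, and the contrast between an ASiR-style target-percentage behaviour and an MBIR-style target-absolute-noise behaviour, can be caricatured numerically. This is a toy illustration with made-up numbers, not a reconstruction algorithm.

```python
import numpy as np

def fbp_noise(dose, k=10.0):
    """With filtered back projection, quantum noise scales as 1/sqrt(dose):
    halving the dose multiplies image noise by sqrt(2). k is arbitrary."""
    return k / np.sqrt(dose)

full_dose, modified_dose = 100.0, 27.0   # ~73% dose reduction, as in the protocol
sigma_full = fbp_noise(full_dose)        # noise at conventional dose
sigma_mod = fbp_noise(modified_dose)     # noise at modified dose under FBP

# ASiR-like behaviour: remove a fixed *percentage* of the FBP noise ...
asir_sigma = sigma_mod * (1 - 0.40)      # e.g. a 40% noise-reduction setting
# ... whereas MBIR-like behaviour works towards a *target absolute* noise level.
mbir_target = 1.1
mbir_sigma = min(sigma_mod, mbir_target)

print(round(sigma_mod / sigma_full, 2))  # prints 1.92: the FBP noise penalty
```

Under the percentage rule the residual noise still depends on the starting dose, while the absolute rule does not, which mirrors the thesis's empirical observation about the two algorithms.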

Relevance:

30.00%

Publisher:

Abstract:

A constructivist philosophy underlies the Irish primary mathematics curriculum. As constructivism is a theory of learning, its implications for teaching need to be addressed. This study explores the experiences of four senior class primary teachers as they endeavour to teach mathematics from a constructivist-compatible perspective with primary school children in Ireland over a school-year period. Such a perspective implies that children should take ownership of their learning while working in groups on tasks which challenge them at their zone of proximal development. The key question on which the research is based is: to what extent will an exposure to constructivism and its implications for the classroom impact on teaching practices within the senior primary mathematics classroom in both the short and longer term? Although several perspectives on constructivism have evolved (von Glasersfeld (1995), Cobb and Yackel (1996), Ernest (1991, 1998)), it is the synthesis of the emergent perspective which is pivotal to the Irish primary mathematics curriculum. Tracking the development of four primary teachers in a professional learning initiative involving constructivist-compatible approaches necessitated the use of Borko’s (2004) Phase 1 research methodology to account for the evolution in teachers’ understanding of constructivism. Teachers’ and pupils’ viewpoints were recorded using both audio and video technology. Teachers were interviewed at the beginning and end of the project, and again one year on, to ascertain how their views had evolved. Pupils were interviewed at the end of the project only. The data were analysed from a Jaworskian perspective, i.e. using the categories of her Teaching Triad: management of learning, mathematical challenge and sensitivity to students. Management of learning concerns how the teacher organises her classroom to maximise learning opportunities for pupils.
Mathematical challenge is reminiscent of the Vygotskian (1978) construct of the zone of proximal development. Sensitivity to students involves a consciousness on the part of the teacher as to how pupils are progressing with a mathematical task and whether or not to intervene to scaffold their learning. Through this analysis, a synthesis of the teachers’ interpretations of constructivist philosophy emerges, with concomitant implications for theory, policy and practice. The study identifies strategies for teachers wishing to adopt a constructivist-compatible approach to their work. Like O’Shea (2009), it also highlights the difficulties likely to be experienced by such teachers as they move from teacher-dominated methods of teaching mathematics to ones in which pupils have more ownership of their learning.

Relevance:

30.00%

Publisher:

Abstract:

This Licentiate Thesis is devoted to the presentation and discussion of some new contributions in applied mathematics directed towards scientific computing in sports engineering. It considers inverse problems in biomechanical simulations with rigid-body musculoskeletal systems, especially in cross-country skiing. This contrasts with the main body of research on cross-country skiing biomechanics, which is based on experimental testing alone. The thesis consists of an introduction and five papers. The introduction motivates the context of the papers and puts them into a more general framework. Two papers (D and E) consider real questions in cross-country skiing, which are modelled and simulated. The results give some interesting indications concerning these challenging questions, which can be used as a basis for further research; however, the measurements are not accurate enough to give final answers. Paper C is a simulation study, more extensive than papers D and E, that is compared with electromyography measurements from the literature. Validation in biomechanical simulation is difficult, and reducing mathematical errors is one way of coming closer to realistic results. Paper A examines well-posedness for forward dynamics with full muscle dynamics, and paper B is a technical report that describes the problem formulation, mathematical models and simulation of paper A in more detail. Our new modelling, together with the simulations, enables new possibilities. As with simulations in other engineering fields, it must be handled with care in order to achieve reliable results. The results in this thesis indicate that mathematical modelling and numerical simulation can be very useful for describing cross-country skiing biomechanics. Hence, this thesis contributes to the possibility of beginning to use and develop such modelling and simulation techniques in this context.
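Forward dynamics of a rigid-body model amounts to integrating the equations of motion for given joint torques. The sketch below does this for a single rigid segment; all parameters are hypothetical, and real musculoskeletal models add muscle activation and contraction dynamics on top of this layer.

```python
import numpy as np

# Toy forward-dynamics sketch: one rigid segment rotating about a joint,
# driven by a joint torque. Hypothetical parameters throughout.
m, L, g = 5.0, 1.0, 9.81        # segment mass [kg], length [m], gravity
I = m * L**2 / 3.0              # moment of inertia about the joint

def dynamics(state, torque):
    theta, omega = state
    # Equation of motion: I * theta'' = torque - m g (L/2) sin(theta)
    alpha = (torque - m * g * (L / 2) * np.sin(theta)) / I
    return np.array([omega, alpha])

# Explicit Euler integration of the state (theta, omega), starting near rest
state = np.array([0.1, 0.0])
dt = 1e-3
for _ in range(1000):           # simulate 1 s of passive motion (zero torque)
    state = state + dt * dynamics(state, torque=0.0)

theta_end, omega_end = state
```

The inverse problem the thesis treats runs the other way: given measured motion, infer the torques (and muscle forces) that produced it, which is why well-posedness matters.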

Relevance:

30.00%

Publisher:

Abstract:

Over recent decades, work on infrared sensor applications has advanced considerably worldwide. A persistent difficulty, however, is that objects are often not clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing and non-destructive testing, among other technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of single infrared image enhancement, in that it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images, since no single image can contain all the relevant or available information owing to the limitations of any single imaging sensor. We review the development of infrared image enhancement techniques and then focus on single infrared image enhancement, proposing a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques are built on an accurate registration of the source images acquired by the different sensors.
The SURF-RANSAC algorithm is applied for registration throughout the research, yielding very precisely registered images and greater benefits for the fusion processing. For the fusion of infrared and visible images, a series of advanced and effective approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, leading to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV), which sparsely samples the coefficients and accurately reconstructs the fused coefficients, is proposed; it achieves much better fusion results through pre-enhancement of the infrared image and reduction of redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results obtained more quickly and efficiently.
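The coefficient-domain fusion idea (decompose both images, combine subbands, reconstruct) can be sketched with a plain one-level Haar wavelet standing in for the NSCT used in the thesis. This is a simplified stand-in: the image sizes and the max-absolute fusion rule below are illustrative, not the thesis's algorithms.

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2
    d = (x[0::2, :] - x[1::2, :]) / 2
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (exact reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def fuse(ir, vis):
    """Fuse two images: average the approximation band, keep the
    larger-magnitude detail coefficients (a common baseline rule)."""
    bands_ir, bands_vis = haar2d(ir), haar2d(vis)
    LL = (bands_ir[0] + bands_vis[0]) / 2
    details = [np.where(np.abs(di) >= np.abs(dv), di, dv)
               for di, dv in zip(bands_ir[1:], bands_vis[1:])]
    return ihaar2d(LL, *details)

rng = np.random.default_rng(0)
ir, vis = rng.random((8, 8)), rng.random((8, 8))  # stand-in image pair
fused = fuse(ir, vis)
```

The NSCT plays the same structural role as `haar2d` here, but is shift-invariant and directional, which is why it suits edge-preserving fusion better than a separable wavelet.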

Relevance:

30.00%

Publisher:

Abstract:

We consider the a posteriori error analysis and hp-adaptation strategies for hp-version interior penalty discontinuous Galerkin methods for second-order partial differential equations with nonnegative characteristic form on anisotropically refined computational meshes with anisotropically enriched elemental polynomial degrees. In particular, we exploit duality-based hp-error estimates for linear target functionals of the solution, and design and implement the corresponding adaptive algorithms to ensure reliable and efficient control of the error in the prescribed functional to within a given tolerance. This involves exploiting both local isotropic and anisotropic mesh refinement and isotropic and anisotropic polynomial degree enrichment. The superiority of the proposed algorithm in comparison with standard hp-isotropic mesh refinement algorithms and an h-anisotropic/p-isotropic adaptive procedure is illustrated by a series of numerical experiments.
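The solve–estimate–mark–refine loop that underlies such adaptive algorithms can be illustrated on a 1-D toy problem. This is a generic goal-oriented-style sketch with an interpolation target, not the paper's DG algorithm; the function, marking fraction and tolerance are invented.

```python
import numpy as np

# Target functional: J(u) = integral of u over (0, 1), for a function
# with a sharp feature that a uniform mesh would resolve wastefully.
f = lambda x: np.exp(-100 * (x - 0.5) ** 2)

def element_indicators(xs):
    """Per-element error indicator: compare the trapezoid rule to a
    midpoint-refined rule on each cell (a cheap local error estimate)."""
    mids = (xs[:-1] + xs[1:]) / 2
    h = np.diff(xs)
    coarse = h * (f(xs[:-1]) + f(xs[1:])) / 2
    fine = h * (f(xs[:-1]) + 2 * f(mids) + f(xs[1:])) / 4
    return np.abs(fine - coarse)

xs = np.linspace(0.0, 1.0, 5)                 # initial coarse mesh
tol = 1e-6
while True:
    eta = element_indicators(xs)              # ESTIMATE
    if eta.sum() < tol:                       # stop when the goal error is met
        break
    # MARK: a fixed fraction of cells with the largest indicators
    marked = np.argsort(eta)[-max(1, len(eta) // 4):]
    # REFINE: bisect the marked cells
    new_pts = (xs[marked] + xs[marked + 1]) / 2
    xs = np.sort(np.concatenate([xs, new_pts]))

print(len(xs))   # the refined mesh concentrates points near x = 0.5
```

The hp-anisotropic algorithm of the paper enriches this loop with choices between refining h, raising p, and picking a direction, all driven by the dual-weighted indicators.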

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to investigate the effectiveness of problem-based learning (PBL) on students’ mathematical performance, comprising mathematics achievement and attitudes towards mathematics, for third- and eighth-grade students in Saudi Arabia. Mathematics achievement covers the knowing, applying, and reasoning domains, while attitudes towards mathematics cover liking learning mathematics, valuing mathematics, and confidence in learning mathematics. The study goes deeper to examine the interaction of a PBL teaching strategy, with teachers trained either face-to-face or through self-directed learning, on students’ performance (mathematics achievement and attitudes towards mathematics). It also examines the interaction between students of different ability levels (high and low) and the PBL teaching strategy (with face-to-face-trained or self-directed-learning teachers) on students’ performance. It draws upon findings and techniques of the TIMSS international benchmarking studies. Mixed methods are used to analyse the quasi-experimental study data. One-way ANOVA, mixed ANOVA, and paired t-test models are used to analyse the quantitative data, while semi-structured interviews with teachers and the author’s observations are used to enrich understanding of PBL and mathematical performance. The findings show that the PBL teaching strategy significantly improves students’ knowledge application, outperforming traditional teaching methods among third-grade students. This improvement, however, occurred only in the group with face-to-face-trained teachers. Furthermore, there is robust evidence that a PBL teaching strategy can raise students’ liking of learning mathematics, and their confidence in learning mathematics, significantly more than traditional teaching methods among third-grade students.
However, there was no evidence that PBL could improve students’ performance (mathematics achievement and attitudes towards mathematics) more than traditional teaching methods among eighth-grade students. In eighth grade, the findings for low-achieving students show significant improvement compared to high-achieving students, whether PBL is applied or not. For third-grade students, however, no significant difference in mathematical achievement between high- and low-achieving students was found. The results were not as expected for high-achieving students, and this is also discussed. The implications of these findings for mathematics education in Saudi Arabia are considered.
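As a sketch of the kind of between-group comparison involved, the one-way ANOVA F statistic for three groups can be computed directly. The scores below are synthetic and purely illustrative, not the study's data; a real analysis would use a statistics package and report p-values and effect sizes.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical post-test scores for three teaching conditions
face_to_face = rng.normal(75, 10, 30)   # PBL, face-to-face-trained teacher
self_directed = rng.normal(70, 10, 30)  # PBL, self-directed-learning teacher
traditional = rng.normal(68, 10, 30)    # traditional teaching

groups = [face_to_face, self_directed, traditional]
k = len(groups)                          # number of groups
n = sum(len(g) for g in groups)          # total sample size
grand = np.concatenate(groups).mean()    # grand mean

# Partition total variability into between-group and within-group parts
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# F = (between-group mean square) / (within-group mean square)
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(F, 2))
```

A large F relative to the F(k-1, n-k) reference distribution indicates that at least one group mean differs.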

Relevance:

30.00%

Publisher:

Abstract:

This qualitative case study explored three teacher candidates’ learning and enactment of discourse-focused mathematics teaching practices. Using audio and video recordings of their teaching practice, this study aimed to identify shifts in the way the teacher candidates enacted the following discourse practices: eliciting and using evidence of student thinking, posing purposeful questions, and facilitating meaningful mathematical discourse. The teacher candidates’ written reflections from their practice-based coursework, as well as interviews, were examined to see how two mathematics methods courses influenced their learning and enactment of the three discourse-focused mathematics teaching practices. These data sources were also used to identify tensions the teacher candidates encountered. All three candidates in the study were able to successfully enact and reflect on these discourse-focused mathematics teaching practices at various time points in their preparation programs. Consistency of use and areas of improvement differed, however, depending on the tensions experienced by each candidate. Access to quality curriculum materials, as well as time to formulate and enact thoughtful lesson plans that supported classroom discourse, were tensions for these teacher candidates. This study shows that teacher candidates are capable of enacting discourse-focused teaching practices early in their field placements, and that with the support of practice-based coursework they can analyze and reflect on their practice for improvement. It also reveals the importance of helping teacher candidates access rich mathematical tasks and collaborate during lesson planning. More research is needed to identify how specific aspects of the learning cycle impact individual teachers and how this can be used to improve practice-based teacher education courses.

Relevance:

30.00%

Publisher:

Abstract:

In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the hydrodynamic stability problem associated with the incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the eigenvalue problem in channel and pipe geometries. Here, computable a posteriori error bounds are derived based on employing the generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to eigenvalue/stability problems. The underlying analysis consists of constructing both a dual eigenvalue problem and a dual problem for the original base solution. In this way, errors stemming from both the numerical approximation of the original nonlinear flow problem, as well as the underlying linear eigenvalue problem are correctly controlled. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
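The flavour of a posteriori eigenvalue error control can be seen in the simplest symmetric setting, where a computable residual itself bounds the eigenvalue error. This is a generic illustration only; the hydrodynamic stability problem above is nonsymmetric, which is precisely why the article needs the dual eigenvalue problem instead of this elementary bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                      # symmetric test matrix

# A few power-iteration steps give an approximate dominant eigenpair
x = rng.normal(size=n)
for _ in range(20):
    x = A @ x
    x /= np.linalg.norm(x)

rho = x @ A @ x                        # Rayleigh quotient: eigenvalue estimate
residual = np.linalg.norm(A @ x - rho * x)   # computable a posteriori quantity

# For symmetric A the residual bounds the eigenvalue error a posteriori:
#   min_i |lambda_i - rho| <= ||A x - rho x||
true = np.linalg.eigvalsh(A)
err = np.min(np.abs(true - rho))
print(err, residual)
```

The dual-weighted-residual machinery of the article generalises this: it weights local residuals by a dual solution so the estimate targets the eigenvalue error even without symmetry.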

Relevance:

30.00%

Publisher:

Abstract:

In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, actuarial work for natural disaster insurance, quality control schemes, and similar fields. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles judged relevant by interested authorities. Such information regarding extremes serves as essential guidance to those authorities in decision-making processes. In this context, however, data are usually skewed in nature, and the rarity of exceedances of a large threshold implies large fluctuations in the distribution's upper tail, precisely where accuracy is desired most. Extreme Value Theory (EVT) is the branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing EVT methods for estimating small threshold exceedance probabilities and extreme quantiles often show poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. In this dissertation, we are concerned with an out-of-sample semiparametric (SP) method for estimating small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external, computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
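A minimal peaks-over-threshold sketch of the ideas above, using the exponential special case of the generalized Pareto tail (shape xi = 0) so that plain NumPy suffices. Real EVT analyses estimate the shape parameter as well, and this is not the dissertation's SP method; all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=100_000)   # synthetic skewed data

u = np.quantile(x, 0.95)                       # a high threshold
exc = x[x > u] - u                             # threshold exceedances
sigma = exc.mean()                             # MLE of the GPD scale when xi = 0
p_u = np.mean(x > u)                           # empirical P(X > u)

def tail_prob(z):
    """Estimated P(X > z) for z above the threshold u."""
    return p_u * np.exp(-(z - u) / sigma)

def tail_quantile(p):
    """Estimated quantile for a small exceedance probability p < p_u,
    possibly beyond the sample range, from the fitted tail model."""
    return u + sigma * np.log(p_u / p)

# For exponential data this tail model is exact, so estimates track the truth
z = u + 5.0
est, true = tail_prob(z), np.exp(-z / 2.0)
```

The point of the model is exactly the one the abstract makes: it extrapolates to probabilities and quantiles smaller or larger than anything observed, where empirical frequencies are useless.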

Relevance:

30.00%

Publisher:

Abstract:

This thesis builds a framework for evaluating downside risk from multivariate data via a special class of risk measures (RM). The peculiarity of the analysis lies in avoiding strong distributional assumptions on the data and in its orientation towards the most critical data in risk management: those with asymmetries and heavy tails. At the same time, under typical assumptions such as ellipticity of the data's probability distribution, conformity with classical methods is shown. The constructed class of RM is a multivariate generalization of the coherent distortion RM, which possesses valuable properties for a risk manager. The design of the framework is twofold. The first part contains new computational-geometry methods for high-dimensional data; the developed algorithms demonstrate the computability of the geometrical concepts used to construct the RM, and these concepts bring visuality and simplify interpretation of the RM. The second part develops models for applying the framework to actual problems. The spectrum of applications ranges from robust portfolio selection to broader spheres, such as stochastic conic optimization with risk constraints and supervised machine learning.
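The scalar building block of the framework, a coherent distortion risk measure, can be sketched on sample losses. With the distortion g(u) = min(u / alpha, 1) the measure reduces to Expected Shortfall, a standard example. The thesis generalizes such measures to multivariate data; this univariate sketch with synthetic losses only shows the distortion mechanism.

```python
import numpy as np

rng = np.random.default_rng(7)
losses = rng.standard_t(df=3, size=200_000)    # heavy-tailed synthetic losses

def distortion_rm(sample, g):
    """Discrete Choquet integral: rho = sum_i x_(i) * [g(S_(i-1)) - g(S_(i))],
    where S_(i) is the empirical survival probability at the i-th order
    statistic. A concave g over-weights the worst outcomes."""
    x = np.sort(sample)
    n = len(x)
    tail = np.arange(n, -1, -1) / n            # survival levels n/n, ..., 0
    w = g(tail[:-1]) - g(tail[1:])             # distorted probability weights
    return float(np.sum(x * w))

# Distortion g(u) = min(u / alpha, 1): Expected Shortfall at level alpha
alpha = 0.05
es_distortion = distortion_rm(losses, lambda u: np.minimum(u / alpha, 1.0))

# Cross-check against the direct "average of the worst alpha fraction" form
var = np.quantile(losses, 1 - alpha)
es_direct = losses[losses >= var].mean()
```

Plugging in the identity distortion g(u) = u recovers the plain mean, which makes the role of the distortion explicit: coherence comes from re-weighting tail probabilities, not from a distributional assumption.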