697 results for science learning
Abstract:
This paper reports on an investigation with first-year undergraduate Product Design and Management students within a School of Engineering. At the time of this investigation, the students had studied fundamental engineering science and mathematics for one semester. They were given an open-ended, ill-formed problem that involved designing a simple bridge to cross a river. They received a talk on problem solving and a rubric to follow, if they chose to do so, but were not given any formulae or procedures needed to resolve the problem. In theory, they possessed the knowledge to ask the right questions and make appropriate assumptions but, in practice, they were unable to link their a priori knowledge to this problem. They were able to solve simple beam problems when given closed questions, yet the results show they were unable to visualise a simple bridge as an augmented beam problem, ask pertinent questions, and hence formulate appropriate assumptions in order to offer resolutions.
Abstract:
In common with most universities teaching electronic engineering in the UK, Aston University has seen a shift in the profile of its incoming students in recent years. The educational background of students has moved away from traditional A-level maths and science and, if anything, this variation is set to increase with the introduction of engineering diplomas. Another major change to the circumstances of undergraduate students relates to the introduction of tuition fees in 1998, which has increased the likelihood of students working during term time. This may have led students to concentrate on elements of the course that directly provide marks contributing to the degree classification. In the light of these factors, a root-and-branch rethink of the electronic engineering degree programme structures at Aston was required. The factors taken into account during the course revision were: changes to the qualifications of incoming students; changes to the background and experience of incoming students; an increase in overseas students, some with very limited practical experience; student focus on work directly leading to marks; modular compartmentalisation of knowledge; and the need for provision of continuous feedback on performance. We discuss these issues with specific reference to a 40-credit first-year electronic engineering course, detail the new course structure, and evaluate the effectiveness of the changes. The new approach appears to have been successful both educationally and with regard to student satisfaction. The first cohort of students from the new course will graduate in 2010, and results from student surveys relating particularly to project and design work will be presented at the conference. © 2009 K Sugden, D J Webb and R P Reeves.
Abstract:
Subspaces and manifolds are two powerful models for high-dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging on an antenna array. Manifolds model sources that are not linearly correlated, but whose signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.
A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
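As a point of reference for the geometric quantities above, the following is a minimal numpy sketch (not code from the dissertation) of computing principal angles between two subspaces from the singular values of the product of their orthonormal bases; the dimensions and random subspaces are purely illustrative.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)                      # orthonormal basis for span(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Two random 3-dimensional subspaces of R^20 (illustrative only)
rng = np.random.default_rng(0)
theta = principal_angles(rng.standard_normal((20, 3)),
                         rng.standard_normal((20, 3)))
# The two error regimes described above depend on these angles through
# the product of sines (small mismatch) or the sum of squared sines.
product_of_sines = np.prod(np.sin(theta))
sum_of_squared_sines = np.sum(np.sin(theta) ** 2)
```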
The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data and to suffer from degraded performance when generalizing to unseen (test) data.
From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides a way of regularizing the learning machine, so as to reduce the generalization error and thereby mitigate overfitting. Two overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to align with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
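As an illustration of these ideas, here is a minimal numpy sketch of a generic graph-based manifold regularizer: a k-nearest-neighbour adjacency matrix is built for the data, and a Laplacian smoothness penalty discourages features that vary sharply across neighbouring points. The k-NN construction, the penalty form, and all sizes are assumptions for illustration, not the dissertation's exact formulation.

```python
import numpy as np

def knn_adjacency(X, k=5):
    """Symmetric k-nearest-neighbour adjacency matrix for the rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    A = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, :k]
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)              # symmetrise

def laplacian_smoothness(F, A):
    """Graph-Laplacian penalty trace(F^T L F): small when the features F
    vary smoothly over neighbouring points on the manifold."""
    L = np.diag(A.sum(axis=1)) - A
    return np.trace(F.T @ L @ F)

# Toy usage: points on a noisy circle, with 2-D stand-in features.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 2 * np.pi, 200))
X = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((200, 2))
A = knn_adjacency(X, k=5)
F = rng.standard_normal((200, 2))          # stand-in for learned features
penalty = laplacian_smoothness(F, A)       # added to the training loss, weighted
```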
Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods with affine subspaces, a slowly varying manifold can be tracked efficiently as well, even with corrupted and noisy data. Using more local neighborhoods improves the approximation but increases the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure; splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical change-point detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
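The following is a minimal sketch of the core ingredient, under assumptions of my own (a single subspace, a plain stochastic-gradient update, no multiscale tree): a low-dimensional subspace is tracked from streaming data and each datum's residual serves as a deviation statistic for anomaly detection.

```python
import numpy as np

def track_subspace(X, d=2, step=0.1):
    """Track a slowly varying d-dimensional subspace from the rows of X
    (processed one at a time) and return each datum's residual norm."""
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((X.shape[1], d)))
    residuals = np.zeros(len(X))
    for t, x in enumerate(X):
        coeff = U.T @ x                  # coordinates in the current subspace
        r = x - U @ coeff                # deviation of this datum from the model
        residuals[t] = np.linalg.norm(r)
        # stochastic gradient step on the reconstruction error, then re-orthonormalise
        U, _ = np.linalg.qr(U + step * np.outer(r, coeff))
    return residuals

# Toy stream: points near a 2-D subspace of R^10, with one injected anomaly.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10))
X += 0.05 * rng.standard_normal(X.shape)
X[300] += 5.0                            # abrupt deviation from the subspace
res = track_subspace(X)                  # res[300] stands out against the rest
```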
Abstract:
Ostensibly, bilateral investment treaties (BITs) are the ideal international treaty. First, until just recently, they almost uniformly came with explicit dispute resolution mechanisms through which countries could face real costs for violation (Montt 2009). Second, their signing, ratification, and violation are easily accessible public knowledge, so countries would presumably face reputational costs for violating these agreements. Yet these compliance devices have not dissuaded states from violating them. Even more interestingly, in recent years both developed and developing countries have moved towards modifying the investor-friendly provisions of these agreements. These deviations from the expectations of the credible-commitment argument raise important questions about the field's assumptions regarding the ability of international treaties with commitment devices to effectively constrain state behavior.
Abstract:
This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.
The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
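The mirrored normal-Bingham construction itself is not reproduced here; as a reference point, the following is a minimal scipy sketch of the ordinary Euclidean Laplace approximation that the abstract describes as a special case, applied to a made-up one-dimensional log-density (not a distribution from the thesis).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative unnormalised log-density: a single-mode, mildly skewed target.
def log_p(x):
    return -0.5 * x**2 + 0.3 * np.sin(2 * x)

# Laplace approximation: a Gaussian centred at the mode, with variance given
# by the negative inverse curvature of log p at the mode.
res = minimize_scalar(lambda x: -log_p(x))
mode = res.x
h = 1e-4
curvature = (log_p(mode + h) - 2 * log_p(mode) + log_p(mode - h)) / h**2
laplace_mean, laplace_var = mode, -1.0 / curvature
```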
Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
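The following is a minimal sketch of the "off-the-shelf tools" idea, under assumptions of my own: a toy bundle-adjustment-style problem in which reprojection residuals and Gaussian prior residuals on the 3-D points are stacked and handed to scipy.optimize.least_squares. The camera model, variable names, and numbers are placeholders, not the thesis's actual formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def project(f, tilt, P):
    """Pinhole projection of 3-D points P (N x 3) after a rotation about x."""
    c, s = np.cos(tilt), np.sin(tilt)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    Pc = P @ R.T
    return f * Pc[:, :2] / Pc[:, 2:3]

def residuals(params, obs, prior_mean, prior_sigma):
    f, tilt = params[:2]
    P = params[2:].reshape(-1, 3)
    reproj = (project(f, tilt, P) - obs).ravel()        # image-plane error
    prior = ((P - prior_mean) / prior_sigma).ravel()    # Gaussian prior on points
    return np.concatenate([reproj, prior])

# Toy data (all values illustrative).
rng = np.random.default_rng(0)
P_true = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 6.0], [-1.0, 1.0, 7.0]])
obs = project(800.0, 0.1, P_true) + rng.normal(0, 0.5, (3, 2))
prior_mean, prior_sigma = P_true + 0.2, 0.5             # mildly biased priors

x0 = np.concatenate([[700.0, 0.0], (P_true + 0.3).ravel()])
fit = least_squares(residuals, x0, args=(obs, prior_mean, prior_sigma))
f_est, tilt_est = fit.x[:2]                             # recovered camera parameters
```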
Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
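For contrast with the feature-level approach, here is a minimal OpenCV sketch of the conventional image-level inverse perspective mapping it improves upon: a homography warps the ground plane into a bird's-eye view before a detector is run. The four point correspondences and the synthetic frame are placeholders; in practice they follow from the camera pose.

```python
import numpy as np
import cv2

# Synthetic frame standing in for a road scene (placeholder only).
image = np.zeros((720, 1280, 3), dtype=np.uint8)
cv2.line(image, (420, 400), (0, 720), (255, 255, 255), 5)     # left road edge
cv2.line(image, (860, 400), (1280, 720), (255, 255, 255), 5)  # right road edge

# Image-plane quadrilateral on the ground plane and its bird's-eye target.
src = np.float32([[420, 400], [860, 400], [1280, 720], [0, 720]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])
H = cv2.getPerspectiveTransform(src, dst)

# Conventional IPM: warp the whole image, then run the detector on the result.
birdseye = cv2.warpPerspective(image, H, (1280, 720))
# The thesis instead warps features (e.g. HOG-like features) computed from the
# original image, avoiding the cost of warping every pixel.
```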
The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.
Abstract:
Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.
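As a small illustration of the sparse latent factor model mentioned above, here is a numpy sketch of its generative side only; the dimensions, sparsity level, and noise scale are assumptions for illustration, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 100, 5            # samples, observed dimension, latent factors

# Sparse loading matrix: most entries are exactly zero, so each observed
# variable depends on only a few latent factors.
mask = rng.random((p, k)) < 0.1
W = mask * rng.standard_normal((p, k))

Z = rng.standard_normal((n, k))              # latent factors
noise = 0.1 * rng.standard_normal((n, p))
X = Z @ W.T + noise                          # high-dimensional observations

# Inference (e.g. the variational methods described above) would recover the
# low-dimensional structure from X alone; as a crude check, the top-k
# principal components of X capture most of its variance.
eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]
explained = eigvals[:k].sum() / eigvals.sum()
```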
Abstract:
The integration of mathematics and science in secondary schools in the 21st century continues to be an important topic of practice and research. The purpose of my research study, which builds on studies by Frykholm and Glasson (2005) and Berlin and White (2010), is to explore the potential constraints and benefits of integrating mathematics and science in Ontario secondary schools based on the perspectives of in-service and pre-service teachers with various math and/or science backgrounds. A qualitative and quantitative research design with an exploratory approach was used. The qualitative data were collected from a sample of 12 in-service teachers with various math and/or science backgrounds recruited from two school boards in Eastern Ontario. The quantitative and some qualitative data were collected from a sample of 81 pre-service teachers from the Queen’s University Bachelor of Education (B.Ed.) program. Semi-structured interviews were conducted with the in-service teachers, while a survey and a focus group were conducted with the pre-service teachers. Once collected, the qualitative data were abductively analyzed. For the quantitative data, descriptive and inferential statistics (one-way ANOVAs and Pearson chi-square analyses) were calculated to examine the perspectives of teachers regardless of teaching background and to compare groups of teachers based on teaching background. The findings of this study suggest that in-service and pre-service teachers have a positive attitude towards the integration of math and science and view it as valuable to student learning and success. The pre-service teachers viewed the integration as easy and did not express concerns about it. The in-service teachers, on the other hand, highlighted concerns and challenges such as resources, scheduling, and time constraints. My results illustrate when teachers perceive it is valuable to integrate math and science and which aspects of the classroom benefit most from the integration. Furthermore, the results highlight barriers and possible solutions for improving the integration of math and science. In addition to the benefits and constraints of integration, my results illustrate why some teachers may opt out of integrating math and science and the different strategies teachers have incorporated to integrate math and science in their classrooms.
Abstract:
Metacognition is the understanding and control of cognitive processes. Students with high levels of metacognition achieve greater academic success. The purpose of this mixed-methods study was to examine elementary teachers’ beliefs about metacognition and integration of metacognitive practices in science. Forty-four teachers were recruited through professional networks to complete a questionnaire containing open-ended questions (n = 44) and Likert-type items (n = 41). Five respondents were selected to complete semi-structured interviews informed by the questionnaire. The selected interview participants had a minimum of three years teaching experience and demonstrated a conceptual understanding of metacognition. Statistical tests (Pearson correlation, t-tests, and multiple regression) on quantitative data and thematic analysis of qualitative data indicated that teachers largely understood metacognition but had some gaps in their understanding. Participants’ reported actions (teaching practices) and beliefs differed according to their years of experience but not gender. Hierarchical multiple regression demonstrated that the first block of gender and experience was not a significant predictor of teachers' metacognitive actions, although experience was a significant predictor by itself. Experience was not a significant predictor once teachers' beliefs were added. The majority of participants indicated that metacognition was indeed appropriate for elementary students. Participants consistently reiterated that students’ metacognition developed with practice, but required explicit instruction. A lack of consensus remained around the domain specificity of metacognition. More specifically, the majority of questionnaire respondents indicated that metacognitive strategies could not be used across subject domains, whereas all interviewees indicated that they used strategies across subjects. Metacognition was integrated frequently into Ontario elementary classrooms; however, metacognition was integrated less frequently in science lessons. Lastly, participants used a variety of techniques to integrate metacognition into their classrooms. Implications for practice include the need for more professional development aimed at integrating metacognition into science lessons at both the Primary and Junior levels. Further, teachers could benefit from additional clarification on the three main components of metacognition and the need to integrate all three to successfully develop students’ metacognition.
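A minimal sketch of the hierarchical (blockwise) regression described above, using statsmodels on simulated placeholder data rather than the study's data: block 1 enters gender and experience, block 2 adds beliefs, and the change in R-squared indicates what beliefs contribute beyond the demographic block.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data (not the study's data): teachers' metacognitive
# actions predicted from gender, years of experience, and beliefs.
rng = np.random.default_rng(0)
n = 41
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),
    "experience": rng.uniform(1, 30, n),
    "beliefs": rng.normal(0, 1, n),
})
df["actions"] = 0.1 * df["experience"] + 0.8 * df["beliefs"] + rng.normal(0, 1, n)

# Block 1: demographics only; Block 2: demographics plus beliefs.
block1 = smf.ols("actions ~ gender + experience", data=df).fit()
block2 = smf.ols("actions ~ gender + experience + beliefs", data=df).fit()
r2_change = block2.rsquared - block1.rsquared   # contribution of beliefs over block 1
```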