811 results for Learning Performance


Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was threefold: first, to investigate variables associated with learning and performance as measured by the National Council Licensure Examination for Registered Nurses (NCLEX-RN); second, to validate the predictive value of the Assessment Technologies Institute (ATI) achievement exit exam; and third, to provide a model that could be used to predict performance on the NCLEX-RN, with implications for admission and curriculum development. The study was based on school learning theory, which holds that acquisition in school learning is a function of aptitude (pre-admission measures), opportunity to learn, and quality of instruction (program measures). Data were from 298 graduates of an associate degree nursing program in the Southeastern United States. Of the 298 graduates, 142 were Hispanic, 87 were Black, non-Hispanic, 54 were White, non-Hispanic, and 15 reported as Other. The graduates took the NCLEX-RN for the first time during the years 2003–2005. The study used a predictive, correlational design that relied on retrospective data. Point-biserial correlations and chi-square analyses were used to investigate relationships between 19 selected predictor variables and the dichotomous criterion variable, NCLEX-RN. The correlation and chi-square findings indicated that men did better on the NCLEX-RN than women; Blacks had the highest failure rates, followed by Hispanics; older students were more likely to pass the exam than younger students; and students who passed the exam started and completed the nursing program with a higher grade point average than those who failed. Using logistic regression, five statistical models using variables associated with learning and student performance on the NCLEX-RN were tested, guided by a model adapted from Bloom's (1976) and Carroll's (1963) school learning theories. The derived model was: NCLEX-RN success = f(Nurse Entrance Test score, advanced medical-surgical nursing course grade). The model demonstrates that student performance on the NCLEX-RN can be predicted by one pre-admission measure and one program measure. The Assessment Technologies Institute achievement exit exam (an outcome measure) had no predictive value for student performance on the NCLEX-RN. The derived model accurately predicted 94% of students' successful performance on the NCLEX-RN.
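
The derived model lends itself to a straightforward logistic-regression implementation. The sketch below is a hedged illustration only: the data file, column names (net_score, adv_med_surg_grade, nclex_pass), and train/test split are hypothetical and are not taken from the study.

```python
# Hypothetical sketch: logistic regression predicting first-time NCLEX-RN success
# from one pre-admission measure (Nurse Entrance Test score) and one program
# measure (advanced medical-surgical course grade). Data and names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("graduates.csv")                      # hypothetical data file
X = df[["net_score", "adv_med_surg_grade"]]            # the two retained predictors
y = df["nclex_pass"]                                   # 1 = pass, 0 = fail

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The study reports that its model correctly classified ~94% of successful candidates
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```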

Relevance:

30.00%

Publisher:

Abstract:

Career Academy instructors' technical literacy is vital to the academic success of students. This nonexperimental ex post facto study examined the relationships between the level of technical literacy of instructors in career academies and student academic performance. It was also undertaken to explore the relationship between the pedagogical training of instructors and the academic performance of students. Out of a heterogeneous population of 564 teachers in six targeted schools, 136 teachers (26.0%) responded to an online survey. The survey was designed to gather demographic and teaching experience data. Each demographic item was linked by researchers to teachers' technology use in the classroom. Student achievement was measured by student learning gains as assessed by the reading section of the FCAT from the previous to the present school year. Linear and hierarchical regressions were conducted to examine the research questions. To clarify the possibility of teacher gender and teacher race/ethnic group differences by research variable, a series of one-way ANOVAs was conducted. As revealed by the ANOVA results, there were no statistically significant group differences in any of the research variables by teacher gender or teacher race/ethnicity. Greater student learning gains were associated with greater teacher technical expertise in integrating computers and technology into the classroom, even after controlling for teacher attitude toward computers. Neither teacher attitude toward technology integration nor years of experience in integrating computers into the curriculum significantly predicted student learning gains in the regression models. Implications for HRD theory, research, and practice suggest that identifying teacher levels of technical literacy may help improve student academic performance by informing professional development strategies and new parameters for defining highly qualified instructors with 21st century skills. District professional development programs can benefit by increasing their offerings to include more computer and information communication technology courses. Teacher preparation programs can benefit by including technical literacy as part of their curriculum. State certification requirements could be expanded to include formal surveys to assess teacher use of technology.
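
The hierarchical regression described above can be sketched in a few lines. This is an assumed, simplified illustration only: the file name and variable names (learning_gains, attitude_toward_computers, technical_expertise) are placeholders, not the study's actual instruments.

```python
# Hypothetical sketch of a two-step hierarchical regression: step 1 enters teacher
# attitude toward computers; step 2 adds technical expertise in integrating technology.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teacher_survey.csv")  # hypothetical survey data

step1 = smf.ols("learning_gains ~ attitude_toward_computers", data=df).fit()
step2 = smf.ols("learning_gains ~ attitude_toward_computers + technical_expertise",
                data=df).fit()

# The R-squared change indicates the unique contribution of technical expertise
# after controlling for attitude, mirroring the analysis described above.
print("R2 step 1:", step1.rsquared)
print("R2 step 2:", step2.rsquared, "delta R2:", step2.rsquared - step1.rsquared)
```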

Relevance:

30.00%

Publisher:

Abstract:

Recent studies have established that yolk hormones of maternal origin have significant effects on the physiology and behavior of offspring in birds. Herrington (2012) demonstrated that an elevation of progesterone in yolk elevates emotional reactivity in bobwhite quail neonates. Chicks that hatched from progesterone-treated eggs displayed increased latency in tonic immobility and did not emerge as quickly from a covered location into an open field compared to control groups. For the present study, three experimental groups were formed: chicks hatched from eggs with artificially elevated progesterone (P), chicks hatched from an oil-vehicle control group (V), and chicks hatched from a non-manipulated control group (C). Experiment 1 examined levels of progesterone in bobwhite quail egg yolk from prenatal day 1 to prenatal day 17 using High Performance Liquid Chromatography/tandem Mass Spectrometry (HPLC/MS). In Experiment 2, bobwhite quail embryos were passively exposed to an individual maternal assembly call for 24 hours prior to hatching. Chicks were then tested individually for their preference between the familiarized call and a novel call at 24 and 48 hours following hatching. In Experiment 3, newly hatched chicks were exposed to an individual maternal assembly call for 24 hours. Chicks were then tested for their preference for the familiarized call at 24 and 48 hours after hatch. Results of Experiment 1 showed that yolk progesterone levels were significantly elevated in treated eggs and remained present in the egg yolk longer into prenatal development than in the two control groups. Results from Experiment 2 indicated that chicks from the P group failed to demonstrate a preference for the familiar bobwhite maternal assembly call at 24 or 48 hours after hatch following 24 hours of prenatal exposure. In contrast, chicks from the C and V groups demonstrated a significant preference for the familiarized call. In Experiment 3, chicks from the P group showed an enhanced preference for the familiarized bobwhite maternal call compared to chicks from the C and V groups at 24 and 48 hours after hatch. The results of these experiments suggest that elevated maternal yolk hormone levels in pre-incubated bobwhite quail eggs can influence auditory perceptual learning in embryos and neonates.

Relevance:

30.00%

Publisher:

Abstract:

Over the past two decades, the community college in the United States has taken a leadership role in the movement to make education community-based and performance-oriented. This has intensified the search for more innovative means of making education experiential and relevant to students' lived experiences. One such innovative program that holds promise to meet this challenge is service-learning. This paradigm attempts to relate academic education in the classroom to community-based problems, which fits neatly with the community-based character of the community college. It promises to link ideas developed in the classroom with their practical application in the community through guided reflection, and it is designed to enhance and enrich student learning of course material by combining citizenship, academic subjects, skills, and values. Although many studies have examined the outcomes of service-learning through quantitative means, relatively few qualitative studies are available, and those available have primarily studied traditional students at four-year residential colleges or universities. There is therefore an urgent need to study non-traditional students' perspectives at the community college level. The purpose of this study was to describe and explain the perspectives of five students at Broward Community College, Central Campus, Ft. Lauderdale, Florida. The following exploratory questions guided this study: 1. What elements constitute these students' perspectives? 2. What variables influence their perspectives? 3. What beliefs do these students hold about their service-learning experience which support or are contrary to their perspectives? This ethnographic interview study was conducted over a period of twelve months and consisted of three interviews with each of the five participants. The data were analyzed following the stringent principles of ethnographic research, including constant comparative analysis. The interviews were tape recorded with the participants' permission, transcribed verbatim, and organized into categories for in-depth understanding. These categories were developed from the data collected, and an organizational scheme for understanding and interpreting the perspectives emerged. The researcher also kept a reflective journal of the research process as part of the data set. The results of this study show the need for a better grasp of the concepts of service-learning on the part of all involved in its implementation. Even so, all of the participants displayed gains, to a greater or lesser degree, in personal growth, academic skills, and citizenship skills.

Relevance:

30.00%

Publisher:

Abstract:

The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development. It is being widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address these difficulties. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e., objects and classes. It is one of the most commonly used UML diagrams in practice. However, there has been little research on sequence-diagram modeling. The current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would be greatly beneficial to novice analysts who, unlike experienced systems analysts, do not possess the relevant prior experience to easily learn how to develop a sequence diagram. There is thus a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of approaches provided in many textbooks as well as the practitioner literature. The results indicated that novice analysts were able to perform better using the CHOP technique. This outcome seems to have been enabled by the pattern-based heuristics provided by the technique. Novice analysts also rated the CHOP technique as more useful, although not significantly easier to use, than the control technique. The study established that the CHOP technique is an effective sequence-diagram modeling technique for novice analysts.

Relevance:

30.00%

Publisher:

Abstract:

Computing devices have become ubiquitous in our technologically advanced world, serving as vehicles for software applications that provide users with a wide array of functions. Among these applications is electronic learning software, which is increasingly being used to educate and evaluate individuals ranging from grade school students to career professionals. This study will evaluate the design and implementation of user interfaces in such software. Specifically, it will explore how these interfaces can be developed to facilitate the use of electronic learning software by children. In order to do this, research will be performed in the area of human-computer interaction, focusing on cognitive psychology, user interface design, and software development. This information will be analyzed in order to design a user interface that provides an optimal user experience for children. A group of children will then test this interface, as well as existing applications, in order to measure its usability. The objective of this study is to design a user interface that makes electronic learning software more usable for children, facilitating their learning process and increasing their academic performance. The study will be conducted by using the Adobe Creative Suite to design the user interface and an Integrated Development Environment to implement functionality. These are digital tools available on computing devices such as desktop computers, laptops, and smartphones, which will be used for the development of the software. By using these tools, I hope to create a user interface for electronic learning software that promotes usability while maintaining functionality. This study will address the increasing complexity of computing software seen today – an issue that has arisen due to the progressive implementation of new functionality. This complexity is having a detrimental effect on the usability of electronic learning software, increasing the learning curve for targeted users such as children. As we make electronic learning software an integral part of educational programs in our schools, it is important to address this issue in order to guarantee students a successful learning experience.

Relevance:

30.00%

Publisher:

Abstract:

This text presents a discussion of the song cycle Slopiewnie, Opus 46 bis, by Karol Szymanowski, one of the most important Polish composers of the 20th century. Slopiewnie was composed on texts by Julian Tuwim, a poet born in 1894 in Łódź who drew on ancient roots to create new words and search for special sonorities. First, the text offers a brief biographical sketch of Szymanowski, in order to situate Slopiewnie within the composer's output. It then provides an analysis of the songs and their texts, which may serve as a study tool for future performers. Interpretative suggestions are offered, based on the experience of learning these songs and on the references consulted. The text also presents a phonetic transcription of the poems, as well as a suggested translation into Portuguese, making it easier for Brazilian singers to learn the cycle's text and prosody.

Relevance:

30.00%

Publisher:

Abstract:

This thesis aims to assess the degree of implementation and use of performance measurement systems (SMP) by decision-makers in rehabilitation organizations, and to understand the contextual factors that influenced their implementation. To this end, a multiple case study was conducted, drawing on two data sources: individual interviews with senior managers of Quebec rehabilitation organizations, and organizational documents. The Consolidated Framework for Implementation Research was used to guide data collection and analysis. Both within-case and cross-case analyses were performed. Our results show that organizational readiness to implement an SMP was high, and that the SMPs were successfully implemented and used in several ways. Organizations used them passively (as an information tool), in a targeted way (to try to improve underperforming areas), and politically (as a negotiating tool with government authorities). This diversified use of SMPs arises from the complex interaction of factors stemming from each organization's internal context, the characteristics of the SMP, the implementation process followed, and the external context in which these organizations operate. With respect to the internal context, sustained commitment and leadership from senior management were decisive in implementing the SMP, through their influence on the identification of the need for an SMP, the engagement of the intended users in the project, the organizational priority given to the SMP, the resources allocated to its implementation, the quality of communications, and the organizational learning climate. However, even though some of these factors, such as the resources allocated to implementation, the organizational priority of the SMP, and the learning climate, turned out to be barriers to implementation, these barriers were ultimately not significant enough to impede use of the SMP. This study also confirmed the importance of the characteristics of the SMP, particularly the perceived quality and usefulness of the information. On their own, however, these characteristics are insufficient to ensure implementation success. The implementation analysis also revealed that, even though the implementation process did not follow formal stages, an SMP development plan, the participation and commitment of decision-makers, and the appointment of a project lead all facilitated implementation. However, the absence of evaluation and of collective reflection on the implementation process limited the potential for organizational learning, a prerequisite for performance improvement. As for the external context, support from an external body proved to be an indispensable facilitator of SMP implementation by rehabilitation organizations, despite the absence of government policies and incentives to that effect. This study adds to knowledge about contextual factors and their interactions in the use of innovations such as SMPs, and confirms the importance of approaching implementation analysis from a systemic perspective.

Relevance:

30.00%

Publisher:

Abstract:

This chapter discusses the way in which varied terms such as Networked learning, e-learning and Technology Enhanced Learning (TEL) have each become colonised to support a dominant, economically based world view of educational technology. Critical social theory about technology, language and learning is brought into dialogue with examples from a corpus-based Critical Discourse Analysis (CDA) of UK policy texts for educational technology between 1997 and 2012. Though these policy documents offer much promise for the enhancement of people's performance via technology, the human presence needed to enact such innovation is missing. Given that ‘academic workload’ is a ‘silent barrier’ to the implementation of TEL strategies (Gregory and Lodge, 2015), the analysis further exposes, through empirical examples, that the academic labour of both staff and students appears to be unacknowledged. Global neoliberal capitalist values have strongly territorialised the contemporary university (Hayes & Jandric, 2014), utilising existing naïve, utopian arguments about what technology alone achieves. Whilst the chapter reveals how humans are easily ‘evicted’, even from discourse about their own learning (Hayes, 2015), it also challenges staff and students to seek to re-occupy the important territory of policy to subvert the established order. We can use the very political discourse that has disguised our networked learning practices, in new explicit ways, to restore our human visibility.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to ascertain how today's international marketers can perform better on the global scene by harnessing spontaneity.

Design/methodology/approach: The authors draw on contingency theory to develop a model of the spontaneity–international marketing performance relationship, and identify three potential moderators, namely, strategic planning, centralization, and market dynamism. The authors test the model via structural equation modeling with survey data from 197 UK exporters.

Findings: The results indicate that spontaneity is beneficial to exporters in terms of enhancing profit performance. In addition, greater centralization and strategic planning strengthen the positive effects of spontaneity. However, market dynamism mitigates the positive effect of spontaneity on export performance (when customer needs are volatile, spontaneous decisions do not function as well in terms of ensuring success).

Practical implications: Learning to be spontaneous when making export decisions appears to result in favorable outcomes for the export function. To harness spontaneity, export managers should look to develop company heuristics (increase centralization and strategic planning). Finally, if operating in dynamic export market environments, the role of spontaneity is weaker, so more conventional decision-making approaches should be adopted.

Originality/value: The international marketing environment typically requires decisions to be flexible and fast. In this context, spontaneity could enable accelerated and responsive decision-making, allowing international marketers to realize superior performance. Yet, there is a lack of research on decision-making spontaneity and its potential for international marketing performance enhancement.
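
The moderation pattern in the findings can be illustrated, in a much-simplified form, with an ordinary moderated regression rather than the authors' structural equation model. Everything below (file name, variable names) is a hypothetical sketch, not the paper's actual measurement model.

```python
# Simplified moderated-regression sketch (not the authors' SEM): spontaneity is
# interacted with each proposed moderator. Variable names and data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exporters.csv")  # hypothetical survey data from exporting firms

model = smf.ols(
    "profit_performance ~ spontaneity * (centralization + strategic_planning + market_dynamism)",
    data=df,
).fit()

# Positive spontaneity:centralization and spontaneity:strategic_planning terms, and a
# negative spontaneity:market_dynamism term, would mirror the reported findings.
print(model.summary())
```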

Relevance:

30.00%

Publisher:

Abstract:

In common with most universities teaching electronic engineering in the UK, Aston University has seen a shift in the profile of its incoming students in recent years. The educational background of students has moved away from traditional A-level maths and science, and if anything this variation is set to increase with the introduction of engineering diplomas. Another major change to the circumstances of undergraduate students relates to the introduction of tuition fees in 1998, which has resulted in an increased likelihood of them working during term time. This may have led students to concentrate on elements of the course that directly provide marks contributing to the degree classification. In the light of these factors, a root-and-branch rethink of the electronic engineering degree programme structures at Aston was required. The factors taken into account during the course revision were: changes to the qualifications of incoming students; changes to the background and experience of incoming students; an increase in overseas students, some with very limited practical experience; student focus on work directly leading to marks; modular compartmentalisation of knowledge; and the need to provide continuous feedback on performance. We discuss these issues with specific reference to a 40-credit first-year electronic engineering course, detail the new course structure, and evaluate the effectiveness of the changes. The new approach appears to have been successful both educationally and with regard to student satisfaction. The first cohort of students from the new course will graduate in 2010, and results from student surveys relating particularly to project and design work will be presented at the conference. © 2009 K Sugden, D J Webb and R P Reeves.

Relevance:

30.00%

Publisher:

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
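
The role of principal angles is easy to illustrate numerically. The sketch below, on synthetic data, computes the principal angles between two random subspaces and the two quantities identified above: the product of the sines (vanishingly small mismatch) and the sum of the squared sines (larger mismatch). It illustrates the geometry only, not the TRAIT or LRT transforms themselves.

```python
# Synthetic illustration: principal angles between two subspaces and the two
# error-governing quantities discussed above.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))   # columns span a 3-dim subspace of R^50
B = rng.standard_normal((50, 3))   # columns span another 3-dim subspace

theta = subspace_angles(A, B)      # principal angles, in radians

prod_sin = np.prod(np.sin(theta))        # governs error when mismatch is vanishingly small
sum_sin_sq = np.sum(np.sin(theta) ** 2)  # governs error when mismatch is more significant

print("principal angles:", theta)
print("product of sines:", prod_sin, "sum of squared sines:", sum_sin_sq)
```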

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-preventing approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
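
A generic version of the first, smoothness-based idea can be written down with a graph Laplacian penalty: features or decisions are discouraged from varying sharply between neighboring points on the manifold. The following is a hedged, generic sketch on synthetic data, not the thesis' exact regularizers.

```python
# Generic graph-smoothness penalty: build a k-NN adjacency over the data, form the
# graph Laplacian, and penalize features F that vary sharply across neighbors.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # synthetic data points
F = rng.standard_normal((200, 5))    # stand-in for learned features / decision values

W = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
W = np.maximum(W, W.T)               # symmetrize the adjacency matrix
L = np.diag(W.sum(axis=1)) - W       # unnormalized graph Laplacian

# trace(F^T L F) = 0.5 * sum_ij W_ij * ||F_i - F_j||^2 ; adding this term to a
# training loss encourages smooth variation over manifold neighborhoods.
smoothness_penalty = np.trace(F.T @ L @ F)
print(smoothness_penalty)
```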

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. Deviation (of each datum) from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
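
The core idea of tracking a subspace from a stream and monitoring deviation from it can be sketched with off-the-shelf incremental PCA; the thesis' multiscale tree of local subspaces and its changepoint statistics are richer than this assumed, minimal version.

```python
# Minimal sketch: incrementally track a low-dimensional subspace from streaming
# batches and use the reconstruction residual of each batch as a deviation statistic.
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
d, k, batch = 50, 3, 20
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]   # true underlying subspace
ipca = IncrementalPCA(n_components=k)

stats = []
for t in range(100):
    X = rng.standard_normal((batch, k)) @ basis.T + 0.01 * rng.standard_normal((batch, d))
    ipca.partial_fit(X)
    residual = X - ipca.inverse_transform(ipca.transform(X))   # deviation from model
    stats.append(np.linalg.norm(residual, axis=1).mean())

# A sustained jump in this statistic would flag an abrupt change in the subspace.
print("final mean residual:", stats[-1])
```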

Relevance:

30.00%

Publisher:

Abstract:

This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.

The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.

Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.

Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
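
For context, classical inverse perspective mapping warps the image itself with a homography; the contribution described above is to apply an equivalent warp to feature maps such as HOG instead. The sketch below shows only the standard image-space version, and the four ground-plane correspondences are invented for illustration.

```python
# Classical image-space inverse perspective mapping via a homography (for context);
# the source quadrilateral below is a made-up guess at the ground plane in the image.
import cv2
import numpy as np

img = cv2.imread("frame.png")                       # hypothetical input frame
h, w = img.shape[:2]

src = np.float32([[0.40 * w, 0.60 * h], [0.60 * w, 0.60 * h],
                  [0.95 * w, 0.95 * h], [0.05 * w, 0.95 * h]])   # ground plane in image
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])               # bird's-eye rectangle

H = cv2.getPerspectiveTransform(src, dst)           # 3x3 homography
birdseye = cv2.warpPerspective(img, H, (w, h))      # rectified, top-down view
cv2.imwrite("birdseye.png", birdseye)
```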

The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.

Relevance:

30.00%

Publisher:

Abstract:

In our daily lives, we often must predict how well we are going to perform in the future based on an evaluation of our current performance and an assessment of how much we will improve with practice. Such predictions can be used to decide whether to invest our time and energy in learning and, if we opt to invest, what rewards we may gain. This thesis investigated whether people are capable of tracking their own learning (i.e., current and future motor ability) and exploiting that information to make decisions related to task reward. In Experiment 1, participants performed a target aiming task under a visuomotor rotation such that they initially missed the target but gradually improved. After briefly practicing the task, they were asked to select rewards for hits and misses applied to subsequent performance in the task, where selecting a higher reward for hits came at the cost of receiving a lower reward for misses. We found that participants made decisions that tended toward the optimal, demonstrating knowledge of their future task performance. In Experiment 2, participants learned a novel target aiming task in which they were rewarded for target hits. Every five trials, they could choose a target size, with reward value varying inversely with target size. Although participants' decisions deviated from optimal, a model suggested that they took into account both past performance and predicted future performance when making their decisions. Together, these experiments suggest that people are capable of tracking their own learning and using that information to make sensible decisions related to reward maximization.
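
The Experiment 2 choice can be framed as simple expected-value maximization: pick the target size whose hit probability, given one's predicted aiming precision, yields the largest expected reward. The numbers and the Gaussian error model below are invented for illustration; they are not the thesis' task parameters.

```python
# Illustrative expected-reward calculation: larger targets are easier to hit but pay
# less. The reward schedule and aiming-error model here are hypothetical.
import numpy as np
from scipy.stats import norm

sizes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # candidate target radii
rewards = np.array([100, 60, 40, 25, 15])       # points awarded for a hit, per size
aim_sd = 2.0                                    # predicted aiming error (std. dev.)

# Probability of landing within the target radius under a 1-D Gaussian error model
p_hit = norm.cdf(sizes / aim_sd) - norm.cdf(-sizes / aim_sd)

expected = p_hit * rewards
print("expected reward per size:", dict(zip(sizes.tolist(), expected.round(1))))
print("best size:", sizes[np.argmax(expected)])
```

Under this toy model, as aiming improves with practice (a smaller aim_sd), the optimum shifts toward smaller, higher-paying targets, which is the kind of learning-aware decision the thesis examines.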