753 results for learning to program
Abstract:
Educators should move from teacher-centered learning to student-centered learning, from isolated work to collaborative work, and from factual knowledge-based instruction to critical thinking and informed decision-making. The high-tech classroom should be more interactive and encourage active, exploratory, inquiry-based learning, as opposed to the didactic mode in which teachers feed students information. (Valenti, 2000, p. 85) The influence of technology in schools is growing as quickly as the students it impacts. As a pioneer in an e-learning high school, I hoped to better understand the effects and influences of this learning tool in the English classroom. Using interpretive ethnography as my main frame of reference, I examined the role of technology in a grade 9 Academic English class environment. My role was participant observer as I worked with 4 students in the Laptop Program at St. Augustine Catholic High School. Through interview, observation, journaling, and thick description, I undertook a journey into cyberspace. I documented the experiences, the frustrations, and the highlights of being in e-learning along with my students. In this study, I specifically considered the issues of teacher training, administrative support, technology support personnel, resource availability, the role of the teacher in a constructivist classroom, and the benefits of the laptop computer as a learning tool in classroom and school.
Abstract:
Learning to write is a daunting task for many young children. The purpose of this study was to examine the impact of a combined approach to writing instruction and assessment on the writing performance of students in two grade 3 classes. Five forms and traits of writing were purposefully connected during writing lessons while exhibiting links to the four strands of the grade 3 Ontario science curriculum. Students then had opportunities to engage in the writing process and to self-assess their compositions using either student-developed (experimental group/teacher-researcher's class) or teacher-created (control group/teacher-participant's class) rubrics. Paired samples t-tests revealed that both the experimental and control groups exhibited statistically significant growth from pretest to posttest on all five integrated writing units. Independent samples t-tests showed that the experimental group outperformed the control group on the persuasive + sentence fluency and procedure + word choice writing tasks. Pearson product-moment correlation r tests revealed significant correlations between the experimental group and the teacher-researcher on the recount + ideas and report + organization tasks, while students in the control group showed significant correlations with the teacher-researcher on the narrative + voice and procedure + word choice tasks. Significant correlations between the control group and the teacher-participant were evident on the persuasive + sentence fluency and procedure + word choice tasks. Qualitative analyses revealed five themes that highlighted how students' self-assessments and reflections can be used to guide teachers in their instructional decision making. These findings suggest that educators should adopt an integrated writing program in their classrooms, while working with students to create and utilize purposeful writing assessment tools.
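As an aside on the analysis described above: a paired samples t-test compares each student's pretest and posttest scores directly. A minimal JavaScript sketch of the statistic, with purely hypothetical scores rather than the study's data, might look like this:

```javascript
// Minimal sketch of a paired-samples t statistic, as used to compare
// pretest and posttest writing scores. All data below are hypothetical.
function pairedTStatistic(pre, post) {
  const n = pre.length;
  const diffs = pre.map((x, i) => post[i] - x);       // per-student gains
  const mean = diffs.reduce((a, b) => a + b, 0) / n;  // mean difference
  const variance = diffs.reduce((a, d) => a + (d - mean) ** 2, 0) / (n - 1);
  const se = Math.sqrt(variance / n);                 // standard error
  return mean / se;                                   // t with n-1 df
}

// Hypothetical scores for six students on one writing unit.
const pretest  = [12, 15, 9, 14, 11, 13];
const posttest = [16, 18, 13, 15, 14, 17];
console.log(pairedTStatistic(pretest, posttest).toFixed(2));
```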
Abstract:
Handwriting is a functional task that is used to communicate thoughts using a written code. Research findings have indicated that handwriting is related to learning to read and learning to write. The purposes of this research project were to determine whether a handwriting intervention would improve reading and writing skills and graphomotor and visual-motor integration skills, and whether it would improve the participants' self-perceptions and self-descriptions pertaining to handwriting enjoyment, competence, and effort. A single-subject research design was implemented with four struggling high school students who each received 10.5 to 15.5 hours of cursive handwriting intervention using the ez Write program. In summary, the findings indicated that the students showed significant improvements in aspects of reading and writing; that they improved significantly in their cursive writing abilities; and that their self-perceptions concerning their handwriting experience and competence improved. The contribution of handwriting to academic achievement and vocational success can no longer be neglected.
Abstract:
An article by Grandin sharing tips for teaching and working with autistic children. The focus is on: Structured Environment, Learning to Talk, Rhythm, Sensory Problems, Reducing Arousal, Tactile Stimulation, Fixations, Visual Thinking. The conclusion of the article reads "I cannot over emphasize the important role that good teachers and therapists play in enabling autistics to lead a fuller life. A good autism program needs dedicated people and should use a variety of treatment methods in combination with an intense structured environment".
Abstract:
Antipatterns are "poor" solutions to recurring software design problems. They arise either from bad choices made during the design phase or from continual alterations and changes during program implementation. The literature generally holds that antipatterns make programs harder to understand. However, few empirical studies have been conducted to verify the impact of antipatterns on comprehension. In this master's work, we designed and conducted three experiments, each with 24 subjects, to collect data on subjects' performance during comprehension tasks and to assess the impact of two antipatterns, Blob and Spaghetti Code, and of their combinations on program comprehension. We measured the subjects' performance in terms of: (1) the NASA task load index (TLX) for effort; (2) the time spent performing the tasks; and (3) their percentages of correct answers. The collected data show that the presence of a single antipattern does not noticeably reduce the subjects' performance, whereas the combination of two antipatterns hinders them significantly. We conclude that developers can cope with a single antipattern, while combinations of several antipatterns should be avoided, possibly by means of detection and refactoring.
Abstract:
Deep learning algorithms form a new set of powerful methods for machine learning. The idea is to combine layers of latent factors into hierarchies. This often entails a higher computational cost and also increases the number of model parameters. Applying these methods to larger-scale problems therefore requires reducing their cost as well as improving their regularization and optimization. This thesis addresses the question from these three perspectives. We first study the problem of reducing the cost of certain deep algorithms. We propose two methods for training restricted Boltzmann machines and denoising autoencoders on sparse high-dimensional distributions. This is important for applying these algorithms to natural language processing. Both methods (Dauphin et al., 2011; Dauphin and Bengio, 2013) use importance sampling to sample the objective of these models. We observe that this significantly reduces training time, with speedups reaching two orders of magnitude on several benchmarks. Second, we introduce a powerful regularizer for deep methods. Experimental results show that a good regularizer is crucial for obtaining good performance with large networks (Hinton et al., 2012). In Rifai et al. (2011), we propose a new regularizer that combines unsupervised learning and tangent propagation (Simard et al., 1992). This method exploits geometric principles and achieved state-of-the-art results at the time of publication. Finally, we consider the problem of optimizing high-dimensional non-convex surfaces such as those of neural networks. Traditionally, the abundance of local minima was considered the main difficulty in these problems. In Dauphin et al. (2014a), we argue from results in statistical physics, random matrix theory, neural network theory, and from experimental results that a deeper difficulty stems from the proliferation of saddle points. In that paper we also propose a new method for non-convex optimization.
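As a rough illustration of the importance-sampling idea mentioned above (a sketch of the general principle, not the exact procedure of Dauphin et al., 2011): a loss summed over a very high-dimensional sparse output can be estimated by evaluating every nonzero dimension exactly and only a reweighted uniform sample of the zero dimensions.

```javascript
// Rough illustration (not the exact method of Dauphin et al., 2011):
// estimate a sum of per-dimension losses over a huge sparse vector by
// evaluating all nonzero dimensions exactly plus a uniform sample of
// the zero dimensions, reweighted by the inverse sampling fraction.
function sampledLoss(nonzeroIdx, dim, lossAt, numSamples) {
  const nonzero = new Set(nonzeroIdx);
  let total = 0;
  for (const i of nonzeroIdx) total += lossAt(i);      // exact part

  const numZeros = dim - nonzero.size;
  let zeroEstimate = 0;
  for (let s = 0; s < numSamples; s++) {
    let i;
    do { i = Math.floor(Math.random() * dim); } while (nonzero.has(i));
    zeroEstimate += lossAt(i);
  }
  // Each sampled zero dimension stands in for numZeros / numSamples of them,
  // so the estimator is unbiased for the full sum.
  return total + zeroEstimate * (numZeros / numSamples);
}
```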
Abstract:
There has been recent interest in using temporal difference learning methods to attack problems of prediction and control. While these algorithms have been brought to bear on many problems, they remain poorly understood. It is the purpose of this thesis to further explore these algorithms, presenting a framework for viewing them, raising a number of practical issues, and exploring those issues in the context of several case studies. This includes applying the TD(lambda) algorithm to: 1) learning to play tic-tac-toe from the outcome of self-play and of play against a perfectly-playing opponent and 2) learning simple one-dimensional segmentation tasks.
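For context, a tabular TD(lambda) update with eligibility traces, of the general kind applied in such case studies, can be sketched as follows; the state encoding, rewards, and parameter values here are illustrative assumptions, not the thesis's exact setup.

```javascript
// Hedged sketch of a tabular TD(lambda) value update with accumulating
// eligibility traces. Parameters and reward convention are illustrative.
const V = new Map();          // state -> estimated value
const traces = new Map();     // state -> eligibility trace
const alpha = 0.1, gamma = 1.0, lambda = 0.8;

function value(s) { return V.get(s) ?? 0; }

// One TD(lambda) step: called after each transition with the observed
// reward (e.g. +1 for a win at the end of a game, 0 otherwise).
function tdUpdate(state, reward, nextState) {
  const delta = reward + gamma * value(nextState) - value(state);
  traces.set(state, (traces.get(state) ?? 0) + 1);   // accumulating trace
  for (const [s, e] of traces) {
    V.set(s, value(s) + alpha * delta * e);          // credit past states
    traces.set(s, gamma * lambda * e);               // decay traces
  }
}
```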
Abstract:
To recognize a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. Developments in computer vision suggest that it may be possible to counter the influence of these factors, by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Daily life situations, however, typically require categorization, rather than recognition, of objects. Due to the open-ended character both of natural kinds and of artificial categories, categorization cannot rely on interpolation between stored examples. Nonetheless, knowledge of several representative members, or prototypes, of each of the categories of interest can still provide the necessary computational substrate for the categorization of new instances. The resulting representational scheme based on similarities to prototypes appears to be computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
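A minimal sketch of the similarity-to-prototypes scheme described above: a new instance is assigned to the category whose stored prototype it most resembles. The feature vectors and the choice of cosine similarity are illustrative assumptions.

```javascript
// Categorization by similarity to stored prototypes: assign a new
// instance to the category whose prototype vector it most resembles.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function categorize(instance, prototypes) {
  let best = null, bestSim = -Infinity;
  for (const [category, proto] of Object.entries(prototypes)) {
    const sim = cosine(instance, proto);
    if (sim > bestSim) { bestSim = sim; best = category; }
  }
  return best;
}
```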
Abstract:
This white paper reports emerging findings at the end of Phase I of the Lean Aircraft Initiative in the Policy focus group area. Specifically, it provides details about research on program instability. Its objective is to discuss high-level findings detailing: 1) the relative contribution of different factors to a program’s overall instability; 2) the cost impact of program instability on acquisition programs; and 3) some strategies recommended by program managers for overcoming and/or mitigating the negative effects of program instability on their programs. Because this report comes while this research is underway, it is not meant to be a definitive document on the subject. Rather, it is anticipated that this research may potentially produce a number of reports on program instability-related topics. The government managers of military acquisition programs rated annual budget or production rate changes, changes in requirements, and technical difficulties as the three top contributors, respectively, to program instability. When asked to partition actual variance in their program’s planned cost and schedule to each of these factors, it was found that the combined effects of unplanned budget and requirement changes accounted for 5.2% annual cost growth and 20% total program schedule slip. At a rate of approximately 5% annual cost growth from these factors, it is easy to see that even conservative estimates of the cost benefits to be gained from acquisition reforms and process improvements can quickly be eclipsed by the added cost associated with program instability. Program management practices involving the integration of stakeholders from throughout the value chain into the decision making process were rated the most effective at avoiding program instability. The use of advanced information technologies was rated the most effective at mitigating the negative impact of program instability.
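To see how a steady 5.2% annual cost growth compounds, consider a back-of-the-envelope sketch; the five-year program length is illustrative, not from the survey.

```javascript
// Back-of-the-envelope: compound effect of steady annual cost growth.
// The 5.2% rate is from the survey above; program length is illustrative.
function compoundGrowth(annualRate, years) {
  return (1 + annualRate) ** years - 1;   // total fractional growth
}

const rate = 0.052;                        // 5.2% annual cost growth
console.log((compoundGrowth(rate, 5) * 100).toFixed(1) + "%");  // ~28.8%
```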
Abstract:
For students learning JavaScript programming, this exercise sets out a fairly complete template for a DHTML implementation of Life. Students have to program the missing sections of code and attempt the extra features described. Only I have the password to unlock the solution!
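Although the exercise's locked template is not reproduced here, the heart of any such implementation is the Life update rule; a minimal free-standing sketch might look like this.

```javascript
// Minimal sketch of the Game of Life update rule at the heart of such
// an exercise; the grid representation is illustrative, not the
// exercise's own template.
function step(grid) {
  const rows = grid.length, cols = grid[0].length;
  const next = grid.map(row => row.slice());
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      let n = 0;                            // count live neighbors
      for (let dr = -1; dr <= 1; dr++) {
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          const rr = r + dr, cc = c + dc;
          if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) n += grid[rr][cc];
        }
      }
      // A live cell survives with 2 or 3 neighbors; a dead cell is
      // born with exactly 3.
      next[r][c] = grid[r][c] ? (n === 2 || n === 3 ? 1 : 0) : (n === 3 ? 1 : 0);
    }
  }
  return next;
}
```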
Abstract:
The main purpose of this monograph is to offer a critical perspective on the latent conflict on the Korean Peninsula, approaching it from a theoretical framework grounded in Kenneth Waltz's structural realism. In this way, it seeks to answer questions about state interests as the basic foundation of strategies for maintaining the Structure in geopolitically sensitive regions. It concludes that the Structure performs a series of functions to guarantee its own preservation by inducing convergence in the conduct of states. This reality has kept the Korean Peninsula free of armed conflict for the last 50 years, despite coming to the brink of war on several occasions, since such a conflict would break the stability of the region and thereby put the Balance of Power at serious risk.
Abstract:
In recent years, Artificial Intelligence has helped solve problems arising in the tasks of computing units, whether the computers are distributed so as to interact with one another or operate in any environment (Distributed Artificial Intelligence). Information Technologies enable the creation of novel solutions to specific problems by applying findings from diverse research areas. Our work is aimed at building user models through a multidisciplinary approach, drawing on principles from psychology, distributed artificial intelligence, and machine learning to create user models for open environments; one such environment is Ambient Intelligence based on user models with incremental, distributed learning capabilities (known as Smart User Models). Building on these user models, we direct this research toward acquiring the important user characteristics that determine the user's dominant scale of values in the subjects that interest them most, developing a methodology to obtain the user's Human Values Scale with respect to their objective, subjective, and emotional characteristics (particularly in Recommender Systems). One area that has received little research attention is the inclusion of the human values scale in information systems. Recommender systems, user models, and information systems take into account only the user's preferences and emotions [Velásquez, 1996, 1997; Goldspink, 2000; Conte and Paolucci, 2001; Urban and Schmidt, 2001; Dal Forno and Merlone, 2001, 2002; Berkovsky et al., 2007c]. Therefore, the main focus of our research is the creation of a methodology for generating a human values scale for the user from the user model. We present results from a case study using objective, subjective, and emotional characteristics in the banking and restaurant service domains, where the methodology proposed in this research was put to the test. The main contribution of this thesis is the development of a methodology that, given a user model with objective, subjective, and emotional attributes, yields the user's Human Values Scale. The proposed methodology builds on existing applications, in which all the connections between users, agents, and domains are characterized by these particularities and attributes; therefore, no extra effort is required from the user.
Abstract:
In this work we study the use of distributional semantics and machine learning to improve statistical machine translation. To that end, we propose a machine learning model based on logistic regression to model the translation probability of phrase pairs dynamically. We show that the proposed model is a generalization of the standard translation probabilities used in statistical machine translation, and we use it to incorporate contextual and distributional semantic information through lexical features, word clusters, and word embeddings. In addition, we explore another approach for integrating distributional semantic knowledge into statistical machine translation: using bilingual word embeddings to model the similarity of phrase translations. Our experiments show the usefulness of the proposed models, achieving promising results over a strong baseline system. Our work also makes significant contributions regarding bilingual embedding mappings and phrase similarity measures based on word embeddings, which have value of their own beyond machine translation in the field of distributional semantics.
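As an illustration of the embedding-based phrase similarity idea (a sketch of the general approach, not this work's exact model): average the bilingual word vectors of each phrase, then take the cosine of the two averages. The embedding lookup function is a placeholder.

```javascript
// Sketch of a phrase-translation similarity feature built from bilingual
// word embeddings already mapped into a shared space. `embed` is a
// placeholder lookup returning a numeric vector for a word; phrases are
// assumed non-empty.
function averageVector(words, embed) {
  const dim = embed(words[0]).length;
  const avg = new Array(dim).fill(0);
  for (const w of words) {
    const v = embed(w);
    for (let i = 0; i < dim; i++) avg[i] += v[i] / words.length;
  }
  return avg;
}

function phraseSimilarity(srcWords, tgtWords, embed) {
  const a = averageVector(srcWords, embed);
  const b = averageVector(tgtWords, embed);
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));   // cosine of averages
}
```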
Abstract:
The main purpose of this work is to introduce the user to the world of robotics by explaining, from a practical point of view, the theoretical concepts related to the kinematics of spatial mechanisms, specifically those of serial robots. To achieve this goal, a learning methodology based on three exercises has been created that explains the main commands of RobotStudio, the programming software required for the virtual control of ABB robots, the robot available at the school. Alongside this, the concepts needed to perform basic tasks in the field of robotics are developed. By implementing this methodology, the aim is to equip the user with the essential concepts for programming serial robots in a virtual environment, giving them the option of later connecting to a real robot and obtaining practical, visible results.