883 results for Task-Based Instruction (TBI)
Abstract:
With security and surveillance, there is an increasing need to process image data efficiently and effectively, either at source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, the design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach: using optimized FPGA-based soft-core processors, which allows the user to exploit task- and data-level parallelism to achieve the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports some preliminary progress on the design flow used to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, showing that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design, verification and debugging steps associated with conventional FPGA implementations.
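As an illustration of the algorithm benchmarked above, here is a minimal scalar sketch of the core HOG step: the gradient-orientation histogram of one cell. The cell contents, the 9 unsigned-orientation bins and the central-difference gradients are conventional assumptions rather than details taken from the paper; the reported design runs this kind of work in parallel across soft cores.

```python
import math

def cell_histogram(cell, bins=9):
    """Gradient-orientation histogram for one cell of grayscale pixels.

    Central differences give (gx, gy); each interior pixel votes its
    gradient magnitude into one of `bins` unsigned-orientation bins."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist

# A cell containing a purely horizontal intensity ramp: every gradient
# points along x, so all of the magnitude lands in the first bin.
ramp = [[float(x) for x in range(8)] for _ in range(8)]
hist = cell_histogram(ramp)
```

In a soft-core design, each core would compute such histograms for its own tile of cells concurrently, which is the task- and data-level parallelism the abstract refers to.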
Abstract:
Safety on public transport is a major concern for the relevant authorities. We
address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration are achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Passenger trajectory data are modelled as a time-series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured through a bottom-up Hierarchical Task Network planner that, along with common sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers, and four metrics were used to evaluate performance. The system, with an accuracy greater than 90% on each of the four metrics, was found to outperform a rule-based system and a system based on planning alone.
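The inconsistency detection described above can be sketched as a transition-rule check over a passenger's activity time-series. The activity labels and allowed transitions below are invented for illustration; the actual platform draws such rules from its common-sense knowledge base and corrects misinterpretations with an HTN planner.

```python
# Hypothetical activity vocabulary and permissible transitions.
ALLOWED = {
    "enter": {"walk"},
    "walk": {"walk", "sit", "stand", "exit"},
    "sit": {"sit", "stand"},
    "stand": {"stand", "sit", "walk"},
}

def inconsistencies(trajectory):
    """Indices of transitions that violate the rules, e.g. a tracking
    glitch that reports 'sit' immediately after boarding."""
    return [i for i, (a, b) in enumerate(zip(trajectory, trajectory[1:]))
            if b not in ALLOWED.get(a, set())]

glitchy = ["enter", "sit", "stand", "walk", "exit"]
flagged = inconsistencies(glitchy)  # transition 0 (enter -> sit) is flagged
```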
Abstract:
We consider the problem of resource selection in clustered Peer-to-Peer Information Retrieval (P2P IR) networks with cooperative peers. The clustered P2P IR framework presents a significant departure from general P2P IR architectures by employing clustering to ensure content coherence between resources at the resource selection layer, without disturbing document allocation. We propose that such a property could be leveraged in resource selection by adapting well-studied and popular inverted lists for centralized document retrieval. Accordingly, we propose the Inverted PeerCluster Index (IPI), an approach that adapts the inverted lists, in a straightforward manner, for resource selection in clustered P2P IR. IPI also encompasses a strikingly simple peer-specific scoring mechanism that exploits the said index for resource selection. Through an extensive empirical analysis on P2P IR testbeds, we establish that IPI competes well with the sophisticated state-of-the-art methods in virtually every parameter of interest for the resource selection task, in the context of clustered P2P IR.
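A minimal sketch of the idea, with toy per-cluster content: postings map query terms to peer clusters, and clusters are ranked by a simple summed term-count score. The statistics and scoring details here are illustrative assumptions, not IPI's exact formulation.

```python
from collections import defaultdict

# Toy corpus: each peer cluster holds a few short "documents".
clusters = {
    "c1": ["p2p retrieval networks", "resource selection p2p"],
    "c2": ["image compression codec", "video codec streaming"],
}

# Inverted list at the resource-selection layer: term -> {cluster: count}.
index = defaultdict(dict)
for cid, docs in clusters.items():
    for doc in docs:
        for term in doc.split():
            index[term][cid] = index[term].get(cid, 0) + 1

def select(query, k=1):
    """Rank clusters for a query by summed term counts."""
    scores = defaultdict(int)
    for term in query.split():
        for cid, count in index.get(term, {}).items():
            scores[cid] += count
    return sorted(scores, key=scores.get, reverse=True)[:k]

best = select("p2p resource selection")  # the retrieval-themed cluster wins
```

Because clustering keeps each cluster's content coherent, a cluster-level posting list like this can stand in for the full document index at selection time, which is the property the abstract proposes to leverage.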
Abstract:
Preschool-aged children (≤ 5 years) are at greater risk of sustaining a traumatic brain injury (TBI) than older children, and 90% of these TBIs are mild (mTBI). Numerous studies published over the last two decades show that pediatric mTBI can lead to cognitive, behavioural and psychiatric difficulties in the acute phase which, in some children, persist in the long term. There is a flourishing literature on the impact of mTBI on social functioning and on social cognition (the cognitive processes underlying socialization) in school-aged children and adolescents. However, only two studies have examined the impact of mTBI sustained at preschool age on social development, and no study has addressed the socio-cognitive repercussions of early (preschool) mTBI. The aim of this thesis was therefore to study the consequences of early mTBI on social cognition. To do so, we examined an aspect of social cognition that develops rapidly at this age: theory of mind (ToM), the capacity to put oneself in another person's place and understand their perspective. The first article studied two subcomponents of ToM, namely false-belief understanding and reasoning about others' desires and emotions, six months post-mTBI. The results indicate that preschool children (18 to 60 months) who sustain an mTBI have significantly poorer ToM six months post-injury than an uninjured control group. The second article aimed to clarify the origin of this ToM decline following early mTBI, an objective motivated by an ongoing debate in the literature.
Indeed, many researchers hold that an effect of the brain injury itself can be inferred only when children with mTBI are compared with children who sustained an injury not involving the head (e.g., an orthopedic injury). This argument rests on studies showing that, in general, children who are more likely to sustain an injury of any kind have pre-existing cognitive characteristics (e.g., impulsivity, attentional difficulties). It is therefore possible that the difficulties we attribute to the brain injury were present even before the child sustained the mTBI. In this second study, we therefore compared the ToM task performance of children with mTBI to that of children in two control groups: uninjured children and peers with an orthopedic injury. Overall, children with mTBI performed significantly worse on the task assessing reasoning about others' desires and emotions, six months post-injury, compared with both control groups. This study also examined the evolution of ToM following mTBI, from 6 to 18 months post-injury. The results show that the poorer performance persists 18 months post-mTBI. Finally, the third aim of this study was to investigate whether there is a link between performance on ToM tasks and social skills, as assessed by a parent-completed questionnaire. Interestingly, ToM was associated with social skills only in the children who had sustained an mTBI. Taken together, these two studies highlight specific, long-lasting repercussions of early mTBI on ToM, and poorer ToM was associated with weaker social skills.
This thesis demonstrates that early mTBI can hinder socio-cognitive development through its repercussions on ToM. These results support the theory that the young, immature brain is especially vulnerable to brain injury. Finally, these studies highlight the need to study this age group directly rather than extrapolate from results obtained with older children, since the developmental issues at stake differ and potentially have a major influence on how a brain injury affects socio-cognitive functioning.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Objective: Caffeine has been shown to affect certain areas of cognition, but research on executive functioning is limited and inconsistent. One reason could be the need for a more sensitive measure to detect the effects of caffeine on executive function. This study used a new non-immersive virtual reality assessment of executive functions known as JEF© (the Jansari Assessment of Executive Function) alongside the ‘classic’ Stroop Colour-Word task to assess the effects of a normal dose of caffeinated coffee on executive function. Method: Using a double-blind, counterbalanced, within-participants procedure, 43 participants were administered either a caffeinated or decaffeinated coffee and completed the JEF© and Stroop tasks, as well as a subjective mood scale and blood pressure measurements pre- and post-condition, on two separate occasions a week apart. JEF© yields measures for eight separate aspects of executive functions, in addition to a total average score. Results: Performance was significantly improved on the planning, creative thinking, and event-, time- and action-based prospective memory measures, as well as the total JEF© score, following caffeinated coffee relative to decaffeinated coffee. The caffeinated beverage significantly decreased reaction times on the Stroop task, but there was no effect on Stroop interference. Conclusion: The results provide further support for the effects of a caffeinated beverage on cognitive functioning. In particular, they demonstrate the ability of JEF© to detect effects of caffeine across a number of executive functioning constructs that were not shown in the Stroop task, suggesting that executive functioning improvements from a ‘typical’ dose of caffeine may only be detected by more real-world, ecologically valid tasks.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
It has been suggested that experienced learners develop high levels of metalinguistic awareness (MLA), which facilitates their learning of subsequent languages (e.g., Singleton & Aronin, 2007). Moreover, researchers in third language acquisition emphasize the positive influences that previously learned languages exert on the formal learning of a foreign language (e.g., Cenoz & Gorter, 2015), and propose abandoning the traditional focus on interference as the source of learner errors in favour of a broader, more positive view of the interaction between languages. Typological similarity and proficiency in the source language have been shown to influence all types of transfer (e.g., Ringbom, 1987, 2007). However, the methodological challenge remains of both identifying appropriate target-language use as the result of cross-linguistic influence (e.g., Falk & Bardel, 2010) and establishing the crucial role of MLA in the conscious activation of related words or constructions across languages. The present study aimed to meet this double challenge by using think-aloud protocols (TAPs) to examine positive transfer from English (L2) to German (L3) in francophone Quebecers after five weeks of formal L3 instruction. Participants completed a translation task developed for this study. The 42 items were selected on the basis of similarity and imageability judgments as well as word frequencies drawn from a study of German-English cognates (Friel & Kennison, 2001). Participants thought aloud while translating unknown words from German (L3) into French (L1). Positive transfer was operationalized as correct translations based on an English cognate.
MLA was measured with the THAM (Test d'habiletés métalinguistiques) (Pinto & El Euch, 2015) as well as through analysis of the TAPs. English proficiency levels were established with the Michigan Test (Corrigan et al., 1979), while levels of exposure to, and interest in, the German language and culture were measured with a questionnaire. A fine-grained analysis of the TAPs revealed inter- and intra-individual variability in the conscious activation of L2 vocabulary, while allowing distinct levels of awareness to be identified. Two independent logistic regression models identified the two dimensions of MLA as predictors of positive transfer. The first model, in which the THAM was the sole measure of MLA, identified this reflective dimension as the main predictor, followed by English proficiency, while none of the other independent variables predicted positive transfer from English. In the second model, which included both the THAM and the TAPs as complementary measures of MLA, the applied dimension of MLA, as measured by the TAPs, was by far the main predictor, followed by the reflective dimension, as measured by the THAM, while English proficiency no longer figured among the factors significantly influencing the response variable. Although verbalization may have influenced performance to some extent, our observations highlight the valuable contribution of introspective data as a complement to results based on purely linguistic characteristics of transfer. Our analyses underscore the complexity of metalinguistic processes and individual strategies, reflecting a dynamic view of multilingualism (e.g., Jessner, 2008).
Abstract:
Estimating the relative orientation and position of a camera is one of the central problems in computer vision. The accuracy of a certain Finnish technology company's traffic sign inventory and localization process can be improved by applying this concept. The company's localization process uses video data produced by a vehicle-mounted camera, and the accuracy of the estimated traffic sign locations depends on the relative orientation between the camera and the vehicle. This thesis proposes a computer vision based software solution which estimates a camera's orientation relative to the movement direction of the vehicle from video data. The task was solved using feature-based methods and open-source software. On simulated data sets, the camera orientation estimates had an average absolute error of 0.31 degrees. The software solution can be integrated into the traffic sign localization pipeline of the company in question.
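The quantity being estimated can be illustrated with a toy computation: the angle between the camera's optical axis and the vehicle's movement direction. Everything below is a simplified stand-in; the thesis recovers the motion direction from video with feature-based methods, which this sketch assumes has already been done.

```python
import math

def yaw_offset(motion_dir, optical_axis=(0.0, 0.0, 1.0)):
    """Angle in degrees between the camera's optical axis and the
    vehicle's (already estimated) movement direction."""
    dot = sum(a * b for a, b in zip(motion_dir, optical_axis))
    norm = math.sqrt(sum(a * a for a in motion_dir)) * \
           math.sqrt(sum(b * b for b in optical_axis))
    cos_angle = max(-1.0, min(1.0, dot / norm))  # clamp for float safety
    return math.degrees(math.acos(cos_angle))

# Vehicle moving 1 degree off the optical axis in the x-z plane:
theta = math.radians(1.0)
angle = yaw_offset((math.sin(theta), 0.0, math.cos(theta)))
```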
Abstract:
With the rapid development of Internet technologies, video and audio processing have become among the most important areas, driven by the constant demand for high-quality media content. Along with improvements in network environments and hardware, this demand is becoming ever more pressing: people expect high-quality video and audio as well as streamed media resources. FFmpeg is a set of open-source programs for A/V decoding, and many commercial players use FFmpeg as their display core. This paper presents the design of a simple, easy-to-use video player based on FFmpeg. The first part covers the basic theory and background of video display, including concepts such as data formats, streaming media, and video coding and decoding. In short, the player is built on a standard video decoding pipeline: fetch video packets from the Internet, parse and de-encapsulate the protocols, de-encapsulate the container data to obtain encoded streams, and decode these into pixel data that can be displayed directly by the graphics card. During coding and decoding there can be varying degrees of data loss, known as lossy compression, but this usually does not noticeably affect the user experience. The second part covers the FFmpeg decoding process, one of the key points of the paper. In this project FFmpeg performs the main decoding task: by calling the main functions and structures of the FFmpeg libraries, packaged video formats are converted into pixel data, which is then displayed using SDL. The third part covers the SDL display flow. Similarly, it invokes the important display functions of the SDL libraries; SDL can handle not only display but also many other tasks, such as game development.
After that, a standalone video player is complete, providing all the key functions of a player. The fourth part adds a simple user interface based on MFC, making the player usable by most people. Finally, in view of the boom in the mobile Internet and the fact that people nowadays are rarely without their mobile phones, there is a brief introduction to porting the video player to Android, one of the most widely used mobile platforms.
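The lossy compression mentioned above can be illustrated with a toy uniform quantizer, the basic mechanism by which A/V codecs discard detail: values are snapped to a coarser grid and the residual is lost. This is a generic sketch, not FFmpeg's actual code.

```python
def quantize(samples, step):
    """Snap each sample to the nearest multiple of `step`; the discarded
    residual is the 'loss' in lossy coding."""
    return [round(s / step) * step for s in samples]

pixels = [12, 47, 200, 255, 90]
coarse = quantize(pixels, 8)
# The reconstruction error is bounded by half the quantization step.
max_err = max(abs(a - b) for a, b in zip(pixels, coarse))
```

A coarser step gives a smaller encoded representation but a larger, usually still imperceptible, reconstruction error, which is the trade-off the abstract alludes to.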
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models restrict the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
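The multi-hop neighborhoods at the heart of NSCALE can be sketched with a toy breadth-first extraction over an adjacency dictionary. The graph and function below are illustrative assumptions, not NSCALE's interface; the point is that tasks such as ego network analysis operate on whole neighborhoods rather than single vertices.

```python
# Toy undirected graph as an adjacency dictionary.
graph = {
    "a": {"b", "c"},
    "b": {"a", "d"},
    "c": {"a"},
    "d": {"b", "e"},
    "e": {"d"},
}

def ego_network(g, center, hops=1):
    """Vertices within `hops` edges of `center` -- the kind of multi-hop
    neighborhood a neighborhood-centric program reasons about."""
    frontier, seen = {center}, {center}
    for _ in range(hops):
        frontier = {n for v in frontier for n in g[v]} - seen
        seen |= frontier
    return seen

one_hop = ego_network(graph, "a")
two_hop = ego_network(graph, "a", hops=2)
```

A vertex-centric framework would force this traversal to be expressed as repeated message passing between single vertices; extracting the subgraph once and operating on it directly is the design choice the dissertation argues for.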
Abstract:
The current study investigated whether 4- to 6-year-old children’s task solution choice was influenced by the past proficiency of familiar peer models and the children’s personal prior task experience. Peer past proficiency was established through behavioral assessments of interactions with novel tasks alongside peer and teacher predictions of each child’s proficiency. Based on these assessments, one peer model with high past proficiency and one age-, sex-, dominance-, and popularity-matched peer model with lower past proficiency were trained to remove a capsule using alternative solutions from a three-solution artificial fruit task. Video demonstrations of the models were shown to children after they had either a personal successful interaction or no interaction with the task. In general, there was not a strong bias toward the high past-proficiency model, perhaps due to a motivation to acquire multiple methods and the salience of other transmission biases. However, there was some evidence of a model-based past-proficiency bias; when the high past-proficiency peer matched the participants’ original solution, there was increased use of that solution, whereas if the high past-proficiency peer demonstrated an alternative solution, there was increased use of the alternative social solution and novel solutions. Thus, model proficiency influenced innovation.
Abstract:
This quantitative study examines the impact of teacher practices on student achievement in classrooms where the English is Fun Interactive Radio Instruction (IRI) programs were being used. A contemporary IRI design using a dual-audience approach, the English is Fun IRI programs delivered daily English language instruction to students in grades 1 and 2 in Delhi and Rajasthan through 120 30-minute programs via broadcast radio (the first audience) while modeling pedagogical techniques and behaviors for their teachers (the second audience). Few studies have examined how the dual-audience approach influences student learning. Using existing data from 32 teachers and 696 students, this study utilizes a multivariate multilevel model to examine the role of the primary expectations for teachers (e.g., setting up the IRI classroom, following instructions from the radio characters and ensuring students are participating) and the role of secondary expectations for teachers (e.g., modeling pedagogies and facilitating learning beyond the instructions) in promoting students’ learning in English listening skills, knowledge of vocabulary and use of sentences. The study finds that teacher practice on both sets of expectations mattered, but that practice in the secondary expectations mattered more. As expected, students made the smallest gains in the most difficult linguistic task (sentence use). The extent to which teachers satisfied the primary and secondary expectations was associated with gains in all three skills – confirming the relationship between students’ English proficiency and teacher practice in a dual-audience program. When it came to gains in students’ scores in sentence use, a teacher whose focus was greater on primary expectations had a negative effect on student performance in both states. In all, teacher practice clearly mattered but not in the same way for all three skills. 
An optimal scenario for teacher practice is presented in which gains in all three skills are maximized. These findings have important implications for the way the classroom teacher is cast in IRI programs that utilize a dual-audience approach and in the way IRI programs are contracted insofar as the role of the teacher in instruction is minimized and access is limited to instructional support from the IRI lessons alone.
Abstract:
The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an artificial immune system (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by collaborative filtering (CF). Natural evolution, and in particular the immune system, has not been designed for classical optimisation. However, for this problem we are not interested in finding a single optimum; rather, we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: antigen-antibody interaction for matching and antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
Abstract:
The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF). Natural evolution and in particular the immune system have not been designed for classical optimisation. However, for this problem, we are not interested in finding a single optimum. Rather we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: Antigen-antibody interaction for matching and idiotypic antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
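The antigen-antibody matching step can be sketched as an affinity between a target user's film ratings (the antigen) and other users' ratings (the antibodies), keeping a sub-set of good matches rather than a single optimum. The affinity measure, data and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
def affinity(antigen, antibody):
    """Fraction of commonly rated films on which two users agree."""
    common = set(antigen) & set(antibody)
    if not common:
        return 0.0
    return sum(1.0 for f in common if antigen[f] == antibody[f]) / len(common)

target = {"film1": 5, "film2": 3, "film3": 4}   # the antigen
population = {                                   # candidate antibodies
    "u1": {"film1": 5, "film2": 3},
    "u2": {"film1": 1, "film3": 2},
    "u3": {"film2": 3, "film3": 4, "film4": 5},
}

# Keep every antibody above an (arbitrary) affinity threshold; the
# recommendation is then based on this sub-set, not a single best match.
matches = {u for u, r in population.items() if affinity(target, r) >= 0.5}
```

The idiotypic antibody-antibody interaction the abstract mentions would additionally suppress antibodies that are too similar to each other, keeping the retained sub-set diverse.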