795 results for false beliefs
Abstract:
Keyword spotting is the task of detecting keywords of interest within continuous speech. The applications of this technology range from call centre dialogue systems to covert speech surveillance devices. Keyword spotting is particularly well suited to data mining tasks such as real-time keyword monitoring and unrestricted vocabulary audio document indexing. However, to date, many keyword spotting approaches have suffered from poor detection rates, high false alarm rates, or slow execution times, thus reducing their commercial viability. This work investigates the application of keyword spotting to data mining tasks. The thesis makes a number of major contributions to the field of keyword spotting.

The first major contribution is the development of a novel keyword verification method named Cohort Word Verification. This method combines high-level linguistic information with cohort-based verification techniques to obtain dramatic improvements in verification performance, in particular for the problematic short-duration target word class.

The second major contribution is the development of a novel audio document indexing technique named Dynamic Match Lattice Spotting. This technique augments lattice-based audio indexing principles with dynamic sequence matching techniques to provide robustness to erroneous lattice realisations. The resulting algorithm obtains significant improvement in detection rate over lattice-based audio document indexing while still maintaining extremely fast search speeds.

The third major contribution is the study of multiple verifier fusion for the task of keyword verification. The reported experiments demonstrate that substantial improvements in verification performance can be obtained through the fusion of multiple keyword verifiers. The research focuses on combinations of speech background model based verifiers and cohort word verifiers.

The final major contribution is a comprehensive study of the effects of limited training data for keyword spotting. This study is performed with consideration as to how these effects impact the immediate development and deployment of speech technologies for non-English languages.
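The core idea of combining lattice indexing with dynamic sequence matching can be illustrated with a minimal sketch. The function names, the use of plain Levenshtein distance, and the phone sequences below are illustrative assumptions, not the thesis's actual Dynamic Match Lattice Spotting algorithm.

```python
# Illustrative sketch: match a keyword's phone sequence against phone
# sequences extracted from a lattice index, tolerating recognition
# errors via edit distance. All names and scoring choices here are
# assumptions for illustration only.

def edit_distance(a, b):
    """Levenshtein distance between two phone sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def spot_keyword(target, lattice_sequences, max_errors=1):
    """Return indexed sequences within max_errors edits of the target."""
    return [seq for seq in lattice_sequences
            if edit_distance(target, seq) <= max_errors]

# Example: "cat" as /k ae t/, with one erroneous lattice realisation.
hits = spot_keyword(("k", "ae", "t"),
                    [("k", "ae", "t"), ("k", "eh", "t"), ("b", "ae", "g")])
```

Allowing a small edit budget is what provides the robustness to erroneous lattice realisations described above: a sequence misrecognised in one phone still matches, while unrelated sequences do not.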
Abstract:
Vigilance declines when people are exposed to highly predictable and uneventful tasks. Monotonous tasks provide little cognitive and motor stimulation and contribute to human errors. This paper aims to model and detect vigilance decline in real time through participants' reaction times during a monotonous task. A lab-based experiment adapting the Sustained Attention to Response Task (SART) is conducted to quantify the effect of monotony on overall performance. Relevant parameters are then used to build a model detecting hypovigilance throughout the experiment. The accuracy of different mathematical models is compared for detecting lapses in vigilance during the task in real time, minute by minute. We show that monotonous tasks can lead to an average decline in performance of 45%. Furthermore, vigilance modelling enables the detection of vigilance decline through reaction times with an accuracy of 72% and a false alarm rate of 29%. Bayesian models are identified as better at detecting lapses in vigilance than Neural Networks and Generalised Linear Mixed Models. This modelling could be used as a framework to detect vigilance decline in any human performing monotonous tasks.
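A minute-by-minute Bayesian detector of the general kind described might be sketched as follows. The Gaussian class-conditional model, the prior, and every number below are illustrative assumptions, not the paper's actual model or parameters.

```python
import math

# Illustrative sketch of a minute-by-minute Bayesian vigilance detector
# driven by mean reaction time. The Gaussian class-conditional
# distributions and all numbers are assumptions for illustration only.

def gaussian_pdf(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def classify_minute(mean_rt, prior_hypo=0.3):
    """Classify one minute of mean reaction time (seconds) as
    'vigilant' or 'hypovigilant' via Bayes' rule."""
    # Assumed class-conditional distributions of mean reaction time.
    p_rt_vig = gaussian_pdf(mean_rt, mean=0.35, std=0.05)
    p_rt_hypo = gaussian_pdf(mean_rt, mean=0.55, std=0.10)
    post_hypo = (p_rt_hypo * prior_hypo) / (
        p_rt_hypo * prior_hypo + p_rt_vig * (1 - prior_hypo))
    return "hypovigilant" if post_hypo > 0.5 else "vigilant"
```

Slowing reaction times push the posterior toward the hypovigilant class; tuning the prior trades detection accuracy against the false alarm rate, the two quantities reported above.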
Abstract:
Many well-known specialists have contributed to this book, which presents for the first time an in-depth look at the viruses, their satellites and the retrotransposons infecting (or occurring in) one plant family: the Poaceae (Gramineae). After molecular and biological descriptions of the viruses to species level, virus diseases are presented by crop: barley, maize, rice, rye, sorghum, sugarcane, triticales, wheats, forage, ornamental and lawn. A detailed index of the viruses and taxonomic lists will help readers in the search for information.
Abstract:
The critical factor in determining students' interest and motivation to learn science is the quality of the teaching. However, science typically receives very little time in primary classrooms, with teachers often lacking the confidence to engage in inquiry-based learning because they do not have a sound understanding of science or its associated pedagogical approaches. Developing teacher knowledge in this area is a major challenge. Addressing these concerns with didactic "stand and deliver" modes of Professional Development (PD) has been shown to have little relevance or effectiveness, yet this is still the predominant approach used by schools and education authorities. In response to that issue, the constructivist-inspired Primary Connections professional learning program applies contemporary theory relating to the characteristics of effective primary science teaching, the changes required for teachers to use those pedagogies, and professional learning strategies that facilitate such change. This study investigated the nature of teachers' engagement with the various elements of the program. Summative assessments of such PD programs have been undertaken previously; however, there was an identified need for a detailed view of the changes in teachers' beliefs and practices during the intervention. This research was a case study of a Primary Connections implementation. PD workshops were presented to a primary school staff, then two teachers were observed as they worked in tandem to implement related curriculum units with their Year 4/5 classes over a six-month period. Data including interviews, classroom observations and written artefacts were analysed to identify common themes and develop a set of assertions related to how teachers changed their beliefs and practices for teaching science. When teachers implement Primary Connections, their students "are more frequently curious in science and more frequently learn interesting things in science" (Hackling & Prain, 2008).
This study has found that teachers who observe such changes in their students consequently change their beliefs and practices about teaching science. They enhance science learning by promoting student autonomy through open-ended inquiries, and they and their students enhance their scientific literacy by jointly constructing investigations and explaining their findings. The findings have implications for teachers and for designers of PD programs. Assertions related to teaching science within a pedagogical framework consistent with the Primary Connections model are that: (1) promoting student autonomy enhances science learning; (2) student autonomy presents perceived threats to teachers but these are counteracted by enhanced student engagement and learning; (3) the structured constructivism of Primary Connections resources provides appropriate scaffolding for teachers and students to transition from didactic to inquiry-based learning modes; and (4) authentic science investigations promote understanding of scientific literacy and the "nature of science". The key messages for designers of PD programs are that: (1) effective programs model the pedagogies being promoted; (2) teachers benefit from taking the role of student and engaging in the proposed learning experiences; (3) related curriculum resources foster long-term engagement with new concepts and strategies; (4) change in beliefs and practices occurs after teachers implement the program or strategy and see positive outcomes in their students; and (5) implementing this study's PD model is efficient in terms of resources. Identified topics for further investigation relate to the role of assessment in providing evidence to support change in teachers' beliefs and practices, and of teacher reflection in making such change more sustainable.
Abstract:
A one-year mathematics project that focused on measurement was conducted with six Torres Strait Islander schools and communities. Its key focus was to contextualise the teaching and learning of measurement within the students' culture, communities and home languages. Six teachers and two teacher aides participated in the project. This paper reports on the findings from the teachers' and teacher aides' survey questionnaire used in the first Professional Development session to identify: a) teachers' experience of teaching in the Torres Strait Islands, b) teachers' beliefs about effective ways to teach Torres Strait Islander students, and c) ways of contextualising measurement within Torres Strait Islander culture, communities and home languages. A wide range of differing levels of knowledge and understanding about how to contextualise measurement to support student learning was identified and analysed. For example, an Indigenous teacher claimed that mathematics and the environment are relational; that is, they are not discrete and in isolation from one another, but rather interconnect, with mathematical ideas emerging from the environment of the Torres Strait communities.
Abstract:
The modal strain energy method, which depends on the vibration characteristics of the structure, has been reasonably successful in identifying and localising damage in the structure. However, existing strain energy methods require the first few modes to be measured to provide meaningful damage detection. Use of individual modes with existing strain energy methods may produce false alarms or may fail to detect damage at or near the nodal points. This paper proposes a new modal strain energy based damage index which can detect and localise the damage using any one of the measured modes, and illustrates its application for beam structures. It becomes evident that the proposed strain energy based damage index also has potential for damage quantification.
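A damage index of the general modal strain energy family can be sketched numerically. The second-difference curvature estimate, the index formula and the sample mode shapes below are generic illustrative assumptions, not the paper's proposed index.

```python
# Illustrative sketch of a modal-strain-energy damage index for a beam,
# computed from a single mode shape. The curvature estimate and the
# index formula are generic assumptions, not the paper's proposed index.

def curvatures(mode_shape, dx=1.0):
    """Approximate mode-shape curvature by central second differences."""
    return [(mode_shape[i - 1] - 2 * mode_shape[i] + mode_shape[i + 1]) / dx ** 2
            for i in range(1, len(mode_shape) - 1)]

def damage_index(undamaged_mode, damaged_mode):
    """Ratio of fractional strain energy (curvature squared) per element,
    damaged over undamaged; values well above 1 flag likely damage."""
    ku = [c ** 2 for c in curvatures(undamaged_mode)]
    kd = [c ** 2 for c in curvatures(damaged_mode)]
    su, sd = sum(ku), sum(kd)
    eps = 1e-12  # guard against division by zero at nodal points
    return [((kdi / sd) + eps) / ((kui / su) + eps)
            for kui, kdi in zip(ku, kd)]

# Hypothetical mode shapes: damage flattens the shape near midspan.
undamaged = [0.0, 0.5, 0.9, 1.0, 0.9, 0.5, 0.0]
damaged = [0.0, 0.5, 0.9, 1.05, 0.9, 0.5, 0.0]
di = damage_index(undamaged, damaged)
```

Because the index compares fractional (normalised) strain energy, a localised stiffness change stands out at the damaged element even when only one mode is available, which is the single-mode setting the paper targets.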
Abstract:
In November 2009 the researcher embarked on a project aimed at reducing the amount of paper used by Queensland University of Technology (QUT) staff in their daily workplace activities. The key goal was to communicate to staff that excessive printing has a tangible and negative effect on their workplace and local environment. The research objective was to better understand what motivates staff towards more ecologically sustainable printing practices while still meeting the demands of their jobs. The current study builds on previous research which found that one interface does not address the needs of all users when creating persuasive Human Computer Interaction (HCI) interventions targeting resource consumption. In response, the current study created and trialled software that communicates individual paper consumption in precise metrics. Based on preliminary research data, different metric sets have been defined to address the different motivations and beliefs of user archetypes using descriptive and injunctive normative information.
Abstract:
If the student wellbeing pedagogy characterised by the troika metaphor is to become more widely adopted, beginning teachers need to be inducted into service learning. In this chapter, we discuss the implementation and outcomes of a service learning program in a Bachelor of Education course in Australia. The program provides pre-service teachers with insights into service learning practice. Pre-service teachers are given supported opportunities to examine and challenge traditional beliefs and values about student diversity and the role of schools in developing a more inclusive society. They are supported in developing ethics of care and concern for inclusive and equitable practices – characteristics necessary for quality teaching. Thus, the Queensland University of Technology (QUT) service learning program is an ideal example of the troika effect in practice, in that the pedagogy fuses values education, quality teaching and service learning to develop within each student an inclusive ethical framework that will inform their classroom practice as beginning quality teachers.
Abstract:
Uncooperative iris identification systems at a distance and on the move often suffer from poor resolution and poor focus of the captured iris images. The lack of pixel resolution and well-focused images significantly degrades iris recognition performance. This paper proposes a new approach to incorporate the focus score into a reconstruction-based super-resolution process to generate a high resolution iris image from a low resolution and focus inconsistent video sequence of an eye. A reconstruction-based technique, which can incorporate middle and high frequency components from multiple low resolution frames into one desired super-resolved frame without introducing false high frequency components, is used. A new focus assessment approach is proposed for uncooperative iris at a distance and on the move to improve performance under variations in lighting, size and occlusion. A novel fusion scheme is then proposed to incorporate the proposed focus score into the super-resolution process. The experiments conducted on the Multiple Biometric Grand Challenge portal database show that our proposed approach achieves an EER of 2.1%, outperforming the existing state-of-the-art averaging signal-level fusion approach by 19.2% and the robust mean super-resolution approach by 8.7%.
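The idea of letting a focus score steer frame fusion can be sketched minimally. The gradient-based focus measure and the simple weighted-average fusion below are illustrative assumptions; they stand in for, and are much simpler than, the paper's reconstruction-based super-resolution and proposed focus assessment.

```python
# Illustrative sketch: fuse multiple registered low-resolution frames
# into one image by weighting each frame with a focus score, so
# defocused frames contribute less. The focus measure and the
# weighted-average fusion are assumptions for illustration only.

def focus_score(frame):
    """Crude sharpness proxy: mean squared horizontal gradient."""
    total, count = 0.0, 0
    for row in frame:
        for a, b in zip(row, row[1:]):
            total += (b - a) ** 2
            count += 1
    return total / count

def fuse_frames(frames):
    """Weighted average of registered frames, weights = focus scores."""
    weights = [focus_score(f) for f in frames]
    wsum = sum(weights)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(w * f[r][c] for w, f in zip(weights, frames)) / wsum
             for c in range(cols)]
            for r in range(rows)]

# A sharp high-contrast frame and a fully defocused (flat) frame.
sharp = [[0, 10], [0, 10]]
blurry = [[5, 5], [5, 5]]
fused = fuse_frames([sharp, blurry])
```

Down-weighting focus-inconsistent frames is what prevents defocused content from dragging the fused estimate toward a blur, mirroring the motivation for incorporating the focus score into the super-resolution process.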
Abstract:
In normal child development, both individual and group pretense first emerge at approximately two years of age. The metarepresentational account of pretense holds that children already have the concept PRETEND when they first engage in early group pretense. A behavioristic account suggests that early group pretense is analogous to early beliefs or desires and thus requires no mental state concepts. I argue that a behavioral account does not explain the actual behavior observed in children and cannot explain how children come to understand that a specific action is one of pretense rather than one of belief. I conclude that a mentalistic explanation of pretense best explains the behavior under consideration.
Abstract:
Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. 
In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
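The baseline criteria the paper extends, counting false positives and false negatives for a ranked screening cutoff, can be sketched as follows. The top-k cutoff and the sample scores are illustrative assumptions, not the paper's four proposed evaluation tests.

```python
# Illustrative sketch: score a hot spot identification (HSID) method by
# counting false positives and false negatives against known high-risk
# sites. The sample data and the top-k cutoff are illustrative
# assumptions, not the paper's proposed evaluation tests.

def hsid_errors(scores, true_hotspots, k):
    """Flag the k highest-scoring sites; return (false_pos, false_neg)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    flagged = set(ranked[:k])
    false_pos = len(flagged - true_hotspots)   # safe sites flagged
    false_neg = len(true_hotspots - flagged)   # high-risk sites missed
    return false_pos, false_neg

# Hypothetical accident-frequency scores for five road sections.
scores = {"A": 12, "B": 3, "C": 9, "D": 1, "E": 7}
fp, fn = hsid_errors(scores, true_hotspots={"A", "E"}, k=2)
```

These two counts ignore where in the ranking each error sits, which is precisely the ordinal information the paper's additional tests are designed to exploit.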
Abstract:
Identifying crash "hotspots", "blackspots", "sites with promise", or "high risk" locations is standard practice in departments of transportation throughout the US. The literature is replete with the development and discussion of statistical methods for hotspot identification (HSID). Theoretical derivations and empirical studies have been used to weigh the benefits of various HSID methods; however, few studies have used controlled experiments to systematically assess various methods. Using experimentally derived simulated data, which are argued to be superior to empirical data for this purpose, three hot spot identification methods observed in practice are evaluated: simple ranking, confidence interval, and Empirical Bayes. With simulated data, sites with promise are known a priori, in contrast to empirical data where high risk sites are not known for certain. To conduct the evaluation, properties of observed crash data are used to generate simulated crash frequency distributions at hypothetical sites. A variety of factors is manipulated to simulate a host of 'real world' conditions. Various levels of confidence are explored, and false positives (identifying a safe site as high risk) and false negatives (identifying a high risk site as safe) are compared across methods. Finally, the effects of crash history duration in the three HSID approaches are assessed. The results illustrate that the Empirical Bayes technique significantly outperforms ranking and confidence interval techniques (with certain caveats). As found by others, false positives and negatives are inversely related. Three years of crash history appears, in general, to provide an appropriate crash history duration.
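The Empirical Bayes idea evaluated here can be sketched in its textbook form: shrink a site's observed crash count toward the expected count for similar sites, with a weight driven by the dispersion of the crash frequency model. The weight formula below is the standard negative-binomial EB form; the numbers are illustrative assumptions.

```python
# Illustrative sketch of the textbook Empirical Bayes (EB) safety
# estimate: a site's observed crash count is shrunk toward the
# expected count for similar sites. The weight formula is the standard
# negative-binomial form; the numbers are illustrative assumptions.

def eb_estimate(observed, expected, dispersion):
    """EB expected crashes: w * expected + (1 - w) * observed,
    where w = 1 / (1 + expected / dispersion)."""
    w = 1.0 / (1.0 + expected / dispersion)
    return w * expected + (1.0 - w) * observed

# A site with 10 observed crashes where similar sites average 4:
# the EB estimate lands between the two, discounting random spikes.
est = eb_estimate(observed=10, expected=4.0, dispersion=2.0)
```

This shrinkage is why EB-based ranking is less fooled by randomly high counts than simple ranking, consistent with its superior performance reported above.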
Abstract:
Tracking/remote monitoring systems using GNSS are a proven method of enhancing the safety and security of personnel and vehicles carrying precious or hazardous cargo. While GNSS tracking appears to mitigate some of these threats, if not adequately secured it can be a double-edged sword, allowing adversaries to obtain sensitive shipment and vehicle position data to better coordinate their attacks, and providing a false sense of security to monitoring centers. Tracking systems must be designed with the ability to monitor route compliance and thwart attacks ranging from low-level attacks, such as the cutting of antenna cables, to medium- and high-level attacks involving radio jamming and signal- or data-level simulation, especially where the goods transported have a potentially high value to terrorists. This paper discusses the use of GNSS in critical tracking applications, addressing the mitigation of GNSS security issues, augmentation systems and communication systems in order to provide highly robust and survivable tracking systems.
Abstract:
Continuous biometric authentication schemes (CBAS) are built around biometrics supplied by users' behavioural characteristics and continuously check the identity of the user throughout the session. The current literature on CBAS primarily focuses on the accuracy of the system in order to reduce false alarms. However, these attempts do not consider various issues that might affect practicality in real-world applications and continuous authentication scenarios. One of the main issues is that existing CBAS are based on several samples of training data, either from both intruders and valid users or from the valid users' profiles alone. This means that historical profiles for either the legitimate users or possible attackers must be available or collected before prediction time. However, in some cases it is impractical to obtain the biometric data of the user in advance (before detection time). Another issue is the variability of the user's behaviour between the registered profile obtained during enrollment and the profile from the testing phase. The aim of this paper is to identify the limitations of current CBAS in order to make them more practical for real-world applications. The paper also discusses a new application for CBAS that requires no training data from either intruders or valid users.