847 results for multimodal biometrics
Abstract:
This paper outlines results from the long-term deployment of a system for mobile group socialization which utilizes a variety of mundane technologies to support cross-media notifications and messaging. We focus here on the results as they pertain to usage of mundane technologies, particularly the use of such technologies within the context of a cross-media system. We introduce “Rhub”, our prototype, which was designed to support coordination, communication and sharing amongst informal social groups. We also describe and discuss the usage of the “console,” a text-based syntax to enable consistent use across text messaging, instant messaging, email and the web. The prototype has been in active use for over 18 months by over 170 participants, who have used it on an everyday basis for their own socializing requirements.
Abstract:
Readers and writers use a variety of modes of inscription – print, oral and multimedia – to understand, analyze, critique and transform their social, cultural and political worlds. Beginning from Freire (1970), ‘critical literacy’ has become a theoretically diverse educational project, drawing from reader response theory, linguistic and grammatical analysis from critical linguistics, feminist, poststructuralist, postcolonial and critical race theory, and cultural and media studies. In the UK, Australia, Canada, South Africa, New Zealand and the US different approaches to critical literacy have been developed in curriculum and schools. These focus on social and cultural analysis and on how print and digital texts and discourses work, with a necessary and delicate tension between classroom emphasis on student and community cultural ‘voice’ and social analysis – and on explicit engagement with the technical features and social uses of written and multimodal texts.
Abstract:
Rapid advances in educational and information communications technology (ICT) have encouraged some educators to move beyond traditional face-to-face and distance education correspondence modes toward a rich, technology-mediated e-learning environment. Ready access to multimedia at the desktop has provided the opportunity for educators to develop flexible, engaging and interactive learning resources incorporating multimedia and hypermedia. Despite this opportunity, however, the adoption and integration of educational technologies by academics across the tertiary sector has typically been slow. This paper presents the findings of a qualitative study that investigated the factors influencing the manner in which academics adopt and integrate educational technology and ICT. The research was conducted at a regional Australian university, the University of Southern Queensland (USQ), and focused on the development of e-learning environments. These e-learning environments include a range of multimodal learning objects and multiple representations of content that seek to cater for different learning styles and modal preferences, increase interaction, improve learning outcomes, provide a more inclusive and equitable curriculum, and more closely mirror the on-campus learning experience. The focus of this paper is primarily on the barriers or inhibitors academics reported in the study, including institutional barriers, individual inhibitors and pedagogical concerns. Strategies for addressing these obstacles are presented, and implications and recommendations for educational institutions are discussed.
Abstract:
Emissions from airport operations are of significant concern because of their potential impact on local air quality and human health. The currently limited scientific knowledge of aircraft emissions is an important issue worldwide when considering air pollution associated with airport operation, and this is especially so for ultrafine particles. This limited knowledge is due to the scientific complexities associated with measuring aircraft emissions during normal operations on the ground. In particular, this type of research has required the development of novel sampling techniques which must take into account aircraft plume dispersion and dilution, as well as the various particle dynamics that can affect measurements of the engine plume from an operational aircraft. To address this scientific problem, a novel mobile emission measurement method called the Plume Capture and Analysis System (PCAS) was developed and tested. The PCAS permits the capture and analysis of aircraft exhaust during ground-level operations including landing, taxiing, takeoff and idle. The PCAS uses a sampling bag to temporarily store a sample, providing sufficient time for sensitive but slow instrumental techniques to measure gas and particle emissions simultaneously and to record detailed particle size distributions. The challenges in developing the technique include the complexities associated with assessing the various particle loss and deposition mechanisms which are active during storage in the PCAS. Laboratory-based assessment of the method showed that the bag sampling technique can be used to accurately measure particle emissions (e.g. particle number, mass and size distribution) from a moving aircraft or vehicle. Further assessment of the sensitivity of PCAS results to distance from the source and plume concentration was conducted in the airfield with taxiing aircraft.
The results showed that the PCAS is a robust method capable of capturing the plume in only 10 seconds. The PCAS is able to account for aircraft plume dispersion and dilution at distances of 60 to 180 meters downwind of a moving aircraft, along with particle deposition loss mechanisms during the measurements. Characterization of the plume in terms of particle number, mass (PM2.5), gaseous emissions and particle size distribution takes only 5 minutes, allowing large numbers of tests to be completed in a short time. The results were broadly consistent and compared well with the available data. Comprehensive measurements and analyses of the aircraft plumes during various modes of the landing and takeoff (LTO) cycle (e.g. idle, taxi, landing and takeoff) were conducted at Brisbane Airport (BNE). Gaseous (NOx, CO2) emission factors, particle number and mass (PM2.5) emission factors and size distributions were determined for a range of Boeing and Airbus aircraft, as a function of aircraft type and engine thrust level. The scientific complexities, including the analysis of the often multimodal particle size distributions to describe the contributions of different particle source processes during the various stages of aircraft operation, were addressed through comprehensive data analysis and interpretation. The measurement results were used to develop an inventory of aircraft emissions at BNE, including all modes of the aircraft LTO cycle and ground running procedures (GRP). Measuring the actual duration of aircraft activity in each mode of operation (time-in-mode) and compiling a comprehensive matrix of gas and particle emission rates as a function of aircraft type and engine thrust level for real-world situations were crucial for developing the inventory. The significance of the resulting matrix of emission rates lies in the estimate it provides of the annual particle emissions due to aircraft operations, especially in terms of particle number.
In summary, this PhD thesis presents for the first time a comprehensive study of the particle and NOx emission factors and rates along with the particle size distributions from aircraft operations and provides a basis for estimating such emissions at other airports. This is a significant addition to the scientific knowledge in terms of particle emissions from aircraft operations, since the standard particle number emissions rates are not currently available for aircraft activities.
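Plume-based emission factors of the kind derived in this work are conventionally obtained by normalising the measured excess of a species against the simultaneous excess of CO2, then scaling by the fuel-based CO2 emission index. The sketch below illustrates that calculation only; the CO2 emission index of roughly 3160 g per kg of jet fuel and all variable names are assumptions for illustration, not values from the thesis.

```python
# CO2-normalised emission factor: convert plume excess concentrations
# into a per-kg-of-fuel emission rate.  The jet-fuel CO2 emission index
# (~3160 g CO2 per kg fuel) is an assumed illustrative constant.

def emission_index(delta_x, delta_co2, ei_co2=3160.0):
    """Emission index of species X per kg of fuel burned.

    delta_x   -- plume excess of X above background (any unit U per m^3)
    delta_co2 -- plume excess of CO2 above background (g per m^3)
    ei_co2    -- g of CO2 emitted per kg of fuel burned
    Returns X in unit U per kg of fuel.
    """
    return delta_x / delta_co2 * ei_co2

# Example: 0.5 g/m^3 excess NOx against 100 g/m^3 excess CO2
print(emission_index(0.5, 100.0))  # -> 15.8 (g NOx per kg fuel)
```

The same ratio works for particle number: substituting a number concentration excess for `delta_x` yields particles per kg of fuel.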
Abstract:
A data-driven background dataset refinement technique was recently proposed for SVM based speaker verification. This method selects a refined SVM background dataset from a set of candidate impostor examples after individually ranking examples by their relevance. This paper extends this technique to the refinement of the T-norm dataset for SVM-based speaker verification. The independent refinement of the background and T-norm datasets provides a means of investigating the sensitivity of SVM-based speaker verification performance to the selection of each of these datasets. Using refined datasets provided improvements of 13% in min. DCF and 9% in EER over the full set of impostor examples on the 2006 SRE corpus with the majority of these gains due to refinement of the T-norm dataset. Similar trends were observed for the unseen data of the NIST 2008 SRE.
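The core of the refinement step described above is ranking each candidate impostor example individually by a relevance score and keeping only the top-ranked subset. The sketch below uses a mean cosine-similarity relevance criterion against a development speaker pool as a stand-in for the paper's SVM-based ranking; the criterion, data shapes and names are assumptions for illustration.

```python
# Sketch of data-driven background dataset refinement: score each
# candidate impostor example for relevance, then retain only the
# top-ranked subset as the SVM background (or T-norm) dataset.
# The cosine-similarity relevance used here is a simplification of the
# paper's SVM-based ranking; sizes and names are assumed.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.normal(size=(200, 20))   # candidate impostor vectors
dev_models = rng.normal(size=(10, 20))    # development speaker models

def refine(candidates, dev_models, keep=50):
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    d = dev_models / np.linalg.norm(dev_models, axis=1, keepdims=True)
    relevance = (c @ d.T).mean(axis=1)    # mean similarity to dev pool
    order = np.argsort(relevance)[::-1]   # most relevant first
    return candidates[order[:keep]]

refined = refine(candidates, dev_models)
print(refined.shape)  # -> (50, 20)
```

Because the background and T-norm datasets are refined independently, the same `refine` step would simply be run twice with different `keep` budgets.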
Abstract:
This paper presents Scatter Difference Nuisance Attribute Projection (SD-NAP) as an enhancement to NAP for SVM-based speaker verification. While standard NAP may inadvertently remove desirable speaker variability, SD-NAP explicitly de-emphasises this variability by incorporating a weighted version of the between-class scatter into the NAP optimisation criterion. Experimental evaluation of SD-NAP with a variety of SVM systems on the 2006 and 2008 NIST SRE corpora demonstrate that SD-NAP provides improved verification performance over standard NAP in most cases, particularly at the EER operating point.
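Standard NAP removes the top eigenvectors U of the within-speaker (session) scatter via the projection P = I - UU^T. The SD-NAP idea above can be sketched by eigendecomposing W - alpha*B instead, where B is the between-speaker scatter, so that directions carrying speaker variability are de-emphasised before removal. The weight alpha, corank and data shapes below are illustrative assumptions.

```python
# Sketch of NAP and the SD-NAP variant: alpha=0 recovers standard NAP,
# alpha>0 subtracts a weighted between-class scatter so speaker-bearing
# directions are less likely to be projected away.  All parameters are
# assumptions for illustration.
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (W) and between-class (B) scatter of row vectors X."""
    mu = X.mean(axis=0)
    dim = X.shape[1]
    W, B = np.zeros((dim, dim)), np.zeros((dim, dim))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        D = Xc - mc
        W += D.T @ D
        B += len(Xc) * np.outer(mc - mu, mc - mu)
    return W, B

def nap_projection(X, labels, corank=3, alpha=0.0):
    W, B = scatter_matrices(X, labels)
    vals, vecs = np.linalg.eigh(W - alpha * B)    # symmetric eigenproblem
    U = vecs[:, np.argsort(vals)[::-1][:corank]]  # top nuisance directions
    return np.eye(X.shape[1]) - U @ U.T           # project them out

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))               # 6 speakers x 10 sessions
labels = np.repeat(np.arange(6), 10)
P = nap_projection(X, labels, corank=3, alpha=0.1)
print(np.allclose(P @ P, P))  # -> True: P is an orthogonal projection
```

Since U has orthonormal columns, P is idempotent with rank `dim - corank`, which is a quick sanity check for any NAP implementation.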
Abstract:
This paper presents a novel approach of estimating the confidence interval of speaker verification scores. This approach is utilised to minimise the utterance lengths required in order to produce a confident verification decision. The confidence estimation method is also extended to address both the problem of high correlation in consecutive frame scores, and robustness with very limited training samples. The proposed technique achieves a drastic reduction in the typical data requirements for producing confident decisions in an automatic speaker verification system. When evaluated on the NIST 2005 SRE, the early verification decision method demonstrates that an average of 5–10 seconds of speech is sufficient to produce verification rates approaching those achieved previously using an average in excess of 100 seconds of speech.
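The early-decision mechanism described above can be sketched as a sequential test: a confidence interval around the running mean frame score tightens as speech accrues, and scoring stops as soon as the interval clears the threshold on either side. The effective-sample-size correction for correlated consecutive frames and all constants below are assumptions made for illustration, not the paper's exact estimator.

```python
# Sketch of an early verification decision from accumulating frame
# scores.  The correlation correction (n_eff) and constants are assumed.
import numpy as np

def early_decision(frame_scores, threshold, z=2.58, rho=0.5):
    """Return (decision, frames_used); decision is None if never confident."""
    for n in range(10, len(frame_scores) + 1):
        s = frame_scores[:n]
        # shrink n to allow for correlation between consecutive frames
        n_eff = n * (1 - rho) / (1 + rho)
        half = z * s.std(ddof=1) / np.sqrt(n_eff)
        mean = s.mean()
        if mean - half > threshold:
            return True, n    # confidently accept, stop collecting speech
        if mean + half < threshold:
            return False, n   # confidently reject
    return None, len(frame_scores)

rng = np.random.default_rng(2)
scores = rng.normal(1.0, 0.5, size=500)   # synthetic true-speaker scores
decision, used = early_decision(scores, threshold=0.0)
print(decision, used)
```

With a well-separated score distribution the loop typically terminates after a small fraction of the available frames, which is the effect the paper reports: 5-10 seconds of speech sufficing instead of over 100.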
Abstract:
A method of improving the security of biometric templates is presented which satisfies desirable properties such as (a) irreversibility of the template, (b) revocability and assignment of a new template to the same biometric input, and (c) matching in the secure transformed domain. It makes use of an iterative procedure based on the bispectrum that serves as an irreversible transformation for biometric features because signal phase is discarded at each iteration. Unlike the usual hash function, this transformation preserves closeness in the transformed domain for similar biometric inputs. A number of such templates can be generated from the same input. These properties are illustrated using synthetic data and applied to images from the FRGC 3D database with Gabor features. Verification can be successfully performed using these secure templates with an EER of 5.85%.
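The three properties can be illustrated with a much simpler phase-discarding transform than the bispectrum: each iteration multiplies the feature vector by a stored per-user random key and keeps only the FFT magnitude. Discarding phase makes inversion ill-posed (irreversibility), magnitudes vary smoothly with the input (closeness preservation), and re-seeding the key issues a new template (revocability). This is a simplified analogue of the paper's procedure; every parameter below is an assumption.

```python
# Sketch of a revocable, irreversible template transform: iterate
# key-multiplication followed by |FFT|, discarding phase each round.
# A simplified stand-in for the bispectrum procedure, not the paper's
# exact method; all parameters are illustrative.
import numpy as np

def secure_template(features, key_seed, iterations=3):
    rng = np.random.default_rng(key_seed)  # revocable: new seed, new template
    x = np.asarray(features, dtype=float)
    for _ in range(iterations):
        key = rng.uniform(0.5, 1.5, size=x.shape)
        x = np.abs(np.fft.fft(x * key))    # phase discarded here
    return x / np.linalg.norm(x)

rng = np.random.default_rng(3)
probe = rng.normal(size=64)
same_user = probe + 0.001 * rng.normal(size=64)  # noisy re-capture
t1 = secure_template(probe, key_seed=42)
t2 = secure_template(same_user, key_seed=42)
t3 = secure_template(probe, key_seed=99)         # re-issued template
print(np.linalg.norm(t1 - t2))  # small: closeness preserved
print(np.linalg.norm(t1 - t3))  # large: old template revoked
```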
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were conducted using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported having high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students' engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytics from individual attitudes and behaviours to shared social and cultural reasoning practices that explain students' engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students' accounts converged around a key proposition. The Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) 'cool/uncool', (ii) 'dominant staff/compliant student', and (iii) 'digital learning/academic performance'. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students' perception of the school culture as authoritarian and punitive with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in-school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being 'cool' (or at least 'not uncool'), (ii) sufficiently 'compliant', and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic then is ‘both and’ rather than ‘either or’ for these individuals with a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the one time, be digital kids and analogue students.
Abstract:
We introduce multiple-control fuzzy vaults allowing generalised threshold, compartmented and multilevel access structures. The presented schemes enable many useful applications employing multiple users and/or multiple locking sets. Reviewing the original single-control fuzzy vault of Juels and Sudan, we identify several similarities and differences between their vault and secret sharing schemes which influence how best to obtain working generalisations. We design multiple-control fuzzy vaults, suggesting applications using biometric credentials as locking and unlocking values. Furthermore, we assess the security of our generalisations against insider/outsider attacks and examine the access complexity for legitimate vault owners.
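The single-control vault of Juels and Sudan that the paper generalises can be sketched as follows: a secret is the coefficient list of a polynomial over a prime field, genuine (quantised) biometric features are placed on the polynomial, and chaff points are added off it; an unlocking set that overlaps the locking set in at least degree+1 points allows the polynomial to be re-interpolated. The field size, feature encoding, hash-based check value and all parameters are illustrative assumptions, not the paper's construction.

```python
# Minimal single-control fuzzy vault sketch (Juels-Sudan style).
# All parameters and the SHA-256 check value are illustrative assumptions.
import hashlib
import random
from itertools import combinations

P = 97  # small prime field, for illustration only

def poly_eval(coeffs, x):
    """Evaluate a polynomial (low-to-high coefficients) over GF(P)."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def tag_of(coeffs):
    """Check value so the unlocker can recognise the true secret."""
    return hashlib.sha256(bytes(coeffs)).hexdigest()

def lock(secret_coeffs, genuine, n_chaff=20, seed=0):
    rng = random.Random(seed)
    vault = [(x, poly_eval(secret_coeffs, x)) for x in genuine]
    used = set(genuine)
    while len(vault) < len(genuine) + n_chaff:
        x, y = rng.randrange(P), rng.randrange(P)
        if x in used or y == poly_eval(secret_coeffs, x):
            continue  # chaff must lie off the secret polynomial
        used.add(x)
        vault.append((x, y))
    rng.shuffle(vault)
    return vault, tag_of(secret_coeffs)

def lagrange(points):
    """Interpolate low-to-high coefficients through points over GF(P)."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        num, denom = [1], 1   # running product of (x - xj)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            num = [(a - xj * b) % P for a, b in zip([0] + num, num + [0])]
            denom = (denom * (xi - xj)) % P
        inv = pow(denom, P - 2, P)  # field inverse via Fermat
        for d in range(k):
            coeffs[d] = (coeffs[d] + yi * num[d] * inv) % P
    return coeffs

def unlock(vault, query, degree, tag):
    cand = [p for p in vault if p[0] in query]
    for subset in combinations(cand, degree + 1):
        coeffs = lagrange(list(subset))
        if tag_of(coeffs) == tag:
            return coeffs
    return None

secret = [13, 7, 42]                   # degree-2 polynomial over GF(97)
genuine = [3, 11, 19, 27, 35, 43]      # locking set (quantised features)
vault, tag = lock(secret, genuine)
close_query = {3, 11, 19, 27, 50, 61}  # four genuine features recaptured
print(unlock(vault, close_query, degree=2, tag=tag) == secret)  # -> True
```

The multiple-control generalisations in the paper vary who contributes locking sets and how many overlapping points each access structure demands; this single-vault sketch is the primitive they build on.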
Abstract:
In 2007, a comprehensive review of the extant research on nonpharmacological interventions for persons with early-stage dementia was conducted. More than 150 research reports, centered on six major domains, were included: early-stage support groups, cognitive training and enhancement programs, exercise programs, exemplar programs, health promotion programs, and “other” programs not fitting into previous categories. Theories of neural regeneration and plasticity were most often used to support the tested interventions. Recommendations for practice, research, and health policy are outlined, including evidence-based, nonpharmacological treatment protocols for persons with mild cognitive impairment and early-stage dementia. A tested, community-based, multimodal treatment program is also described. Overall, findings identify well-supported nonpharmacological treatments for persons with early-stage dementia and implications for a national health care agenda to optimize outcomes for this growing population of older adults.
Abstract:
Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve the performance and allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for detection error rates are experimentally evaluated by considering the proposed architecture for text-dependent speaker verification using HMM-based digit-dependent speaker models. The tuning of parameters, n classifiers and m attempts/samples, is investigated and the resultant detection error trade-off performance is evaluated on individual digits. Results show that performance improvement can be achieved even for weaker classifiers (FRR 19.6%, FAR 16.7%). The architectures investigated apply to speaker verification from spoken digit strings such as credit card numbers in telephone, VoIP or internet-based applications.
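Under statistical independence, detection error rates for such a sequential architecture have a closed form. The sketch below assumes one plausible fusion rule: every one of the n classifiers must accept, and a classifier accepts if any of its m attempts/samples accepts. The rule itself is an assumption made for illustration; the per-classifier error rates are the weak-classifier figures quoted above.

```python
# Closed-form fused error rates for a sequential multi-instance /
# multi-sample architecture under independence.  Assumed rule: a
# classifier accepts if any of m attempts accepts; the user must be
# accepted by all n classifiers.

def fused_rates(far, frr, n, m):
    far_clf = 1 - (1 - far) ** m        # impostor needs one lucky attempt
    frr_clf = frr ** m                  # genuine user fails all m attempts
    fused_far = far_clf ** n            # impostor must fool every classifier
    fused_frr = 1 - (1 - frr_clf) ** n  # genuine user fails at some stage
    return fused_far, fused_frr

# Weak-classifier rates from the abstract, with n = m = 3
far, frr = fused_rates(0.167, 0.196, n=3, m=3)
print(round(far, 4), round(frr, 4))
```

Both fused rates come out well below the single-classifier 16.7%/19.6%, illustrating how even weak classifiers improve under this trade-off, with n and m steering the balance between false alarms and false rejects.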
Abstract:
Over recent decades there has been growing interest in the role of non-motorized modes in the overall transport system (especially walking and cycling for private purposes), and many government initiatives have been taken to encourage these active modes. However, relatively little research attention has been given to the paid form of non-motorized travel, which can be called non-motorized public transport (NMPT). This involves cycle-powered vehicles which can carry several passengers (plus the driver) and a small amount of goods, and which provide flexible hail-and-ride services. Effectively they are non-motorized taxis. Common forms include the cycle-rickshaw (Bangladesh, India), becak (Indonesia), cyclo (Vietnam, Cambodia), bicitaxi (Colombia, Cuba), velo-taxi (Germany, Netherlands), and pedicab (UK, Japan, USA).

The popularity of NMPT is widespread in developing countries, where it caters for a wide range of mobility needs. For instance in Dhaka, Bangladesh, rickshaws are the preferred mode for non-walk trips and have a higher mode share than cars or buses. Factors that underlie the continued existence and popularity of NMPT in many developing countries include its positive contribution to social equity, micro- and macro-economic significance, employment creation, and suitability for narrow and crowded streets. Although top speeds are lower than motorized modes, NMPT is competitive and cost-effective for the short-distance door-to-door trips that make up the bulk of travel in many developing cities. In addition, NMPT is often the preferred mode for vulnerable groups such as females, children and elderly people. NMPT is more prominent in developing countries, but its popularity and significance are also gradually increasing in several developed countries of Asia, Europe and parts of North America, where there is a trend for the NMPT usage pattern to broaden from tourism to public transport.
This shift is due to a number of factors, including the eco-sustainable nature of NMPT; its operating flexibility (such as in areas where motorized vehicle access is restricted or discouraged through pricing); and the dynamics that it adds to the urban fabric. Whereas NMPT may once have been seen as a "dying" mode, in many cities it is maintaining or increasing its significance, with potential for further growth.

This paper will examine and analyze global trends in NMPT, incorporating both developing and developed country contexts and issues such as usage patterns; NMPT policy and management practices; technological development; and operational integration of NMPT into the overall transport system. It will look at how NMPT policies, practices and usage have changed over time and the differing trends in developing and developed countries. In particular, it will use Dhaka, Bangladesh as a case study in recognition of its standing as the major NMPT city in the world. The aim is to highlight NMPT issues and trends and their significance for shaping future policy towards NMPT in developing and developed countries. The paper will be of interest to transport planners, traffic engineers, urban and regional planners, environmentalists, economists and policy makers.