624 results for Multi-Modal Biometrics, User Authentication, Fingerprint Recognition, Palm Print Recognition
Abstract:
This paper presents Sequence Matching Across Route Traversals (SMART), a generally applicable sequence-based place recognition algorithm. SMART provides invariance to changes in illumination and vehicle speed while also providing moderate pose invariance and robustness to environmental aliasing. We evaluate SMART on vehicles travelling at highly variable speeds in two challenging environments: first, on an all-terrain vehicle on an off-road forest track, and second, using a passenger car traversing an urban environment across day and night. We provide comparative results against the current state-of-the-art SeqSLAM algorithm and investigate the effects of altering SMART's image matching parameters. Additionally, we conduct an extensive study of the relationship between image sequence length and SMART's matching performance. Our results show viable place recognition performance in both environments with short 10-metre sequences, and up to 96% recall at 100% precision across extreme day-night cycles when longer image sequences are used.
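The abstract does not spell out the matching step, but sequence-based methods in this family typically score a candidate match by accumulating image-difference costs along a short sequence rather than trusting single frames. The sketch below illustrates that general idea only; the difference matrix D, the sequence length and the linear velocity search are assumptions, not SMART's actual implementation.

```python
import numpy as np

def best_sequence_matches(D, seq_len=10, v_range=(0.8, 1.2), v_steps=5):
    """Score each (query, reference) pair by accumulating image-difference
    costs along a short trajectory through the difference matrix D
    (rows = query images, cols = reference images). Illustrative only."""
    n_q, n_r = D.shape
    scores = np.full((n_q, n_r), np.inf)
    velocities = np.linspace(v_range[0], v_range[1], v_steps)
    for q in range(n_q - seq_len):
        for r in range(n_r - seq_len):
            best = np.inf
            for v in velocities:
                # Assume a roughly constant relative speed over the short sequence.
                idx_q = q + np.arange(seq_len)
                idx_r = np.clip(r + np.round(v * np.arange(seq_len)).astype(int), 0, n_r - 1)
                best = min(best, D[idx_q, idx_r].mean())
            scores[q, r] = best
    # For each query image, the reference index with the lowest accumulated cost.
    return scores.argmin(axis=1), scores
```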
Abstract:
In the current climate of global economic volatility, there are increasing calls for training in enterprising skills and entrepreneurship to underpin the systemic innovation required for even medium-term business sustainability. The skills long recognised as essential for entrepreneurship now appear on the list of employability skills demanded by industry. The QUT Innovation Space (QIS) was an experiment aimed at delivering entrepreneurship education (EE), as an extra-curricular platform across the university, to the undergraduate students of an Australian higher education institute. It was an ambitious project that built on overseas models of EE studied during an Australian Learning and Teaching Council (ALTC) Teaching Fellowship (Collet, 2011) and implemented those approaches across an institute. Such EE approaches had not previously been attempted in an Australian university. The project tested resonance not only with the student population, from the perspective of what worked and what did not, but also with every level of university operations. Such information is needed to inform the development of EE in the Australian university landscape. The QIS comprised a physical co-working space, virtual sites (web, Twitter and Facebook) and a network of entrepreneurial mentors, colleagues and students. All facets of the QIS enabled the connection between like-minded individuals that underpins the momentum needed for a project of this nature. The QIS became an innovation community within QUT. This report serves two purposes. First, as an account of the QIS project and its evolution, it identifies student demand for skills and training as well as barriers to, and facilitators of, the activities that promote EE in an Australian university context. Second, it serves as a how-to manual, in the tradition of many tomes on EE, outlining the QIS activities that worked as well as those that failed. The activities represent one measure of QIS outcomes and are described herein to facilitate implementation in other institutes. The QIS initially aimed to adopt an incubation model for training in EE. The 'learning by doing' model for new venture creation is a highly successful and high-profile training approach commonly found in overseas contexts. However, the greatest demand of the QUT student population was not for incubation and progression of a developed entrepreneurial intent, but rather for training that instilled enterprising skills in the individual. These two scenarios require different training approaches (Fayolle and Gailly, 2008). The activities of the QIS evolved to meet that student demand. In addressing enterprising skills, the QIS developed the antecedents of entrepreneurialism (i.e., entrepreneurial attitudes, motivation and behaviours), including high-level skills around risk-taking, effective communication, opportunity recognition and action-orientation. In focusing on the would-be entrepreneur and not on the (initial) idea per se, the QIS also fostered entrepreneurial outcomes that would never have gained entry to the rigid stage-gated incubation model proposed for the original QIS framework. Important lessons learned from the project for the development of an innovation community include the need to:
1. Evaluate the context of the type of EE program to be delivered and the student demand for the skills training (as noted above).
2. Create a community that builds on three dimensions: a physical space, a virtual environment and a network of mentors and partners.
3. Supplement the community with external partnerships that aid in the delivery of skills training materials.
4. Ensure discovery of the community through the use of external IT services to deliver advertising and networking outlets.
5. Manage unrealistic student expectations of billion-dollar products.
6. Continuously renew and rebuild simple activities to maintain student engagement.
7. Accommodate the non-university end-user group within the community.
8. Recognise and address the skills bottlenecks that serve as barriers to concept progression; in this case, externally provided IT and programming skills.
9. Use available online and published resources rather than engage in constructing project-specific resources that quickly become obsolete.
10. Avoid perceptions of faculty ownership and operate in an increasingly competitive environment.
11. Recognise that the continuum between creativity/innovation and entrepreneurship is complex, non-linear and requires different training regimes during the different phases of the pipeline. One small entity, such as the QIS, cannot address them all.
The QIS successfully designed, implemented and delivered activities that included events, workshops, seminars and services to QUT students in the extra-curricular space. That the QIS project can be considered successful derives directly from its outcomes. First, the QIS project changed the lives of emerging QUT student entrepreneurs, and its activities developed enterprising skills in students who did not necessarily have a business proposition at the time. Second, successful outcomes of the QIS project are evidenced by the embedding of most, perhaps all, of the QIS activities in a new Chancellery-sponsored initiative: the Leadership Development and Innovation Program hosted by QUT Student Support Services. During the course of the QIS project, the Brisbane-based innovation ecosystem underwent substantial change. From a dearth of opportunities for the entrepreneurially inclined, there is now a plethora of entities catering for a diversity of innovation-related activities. While the QIS evolved with this landscape, the demand endpoint of the QIS activities still highlights a gap in the local and national innovation ecosystems: the freedom to experiment and to fail is not catered for by the many new entities seeking to build viable businesses on the back of the innovation push. The onus of teaching enterprising skills, which are the employability skills now demanded by industry, remains with the higher education sector.
Abstract:
Recognising that charitable behaviour can be motivated by public recognition and emotional satisfaction, not-for-profit organisations have developed strategies that leverage self-interest over altruism by enabling individuals to donate conspicuously. Initially developed as novel marketing programs to increase donation income, such conspicuous tokens of recognition are now being recognised as important value propositions for nurturing donor relationships. Despite this, there is little empirical evidence identifying when donations can be increased through conspicuous recognition. Furthermore, social media's growing popularity for self-expression, as well as the increasing use of technology in donor relationship management strategies, makes an examination of virtual conspicuous tokens of recognition in relation to the value donors seek particularly insightful. Therefore, this research examined the impact of experiential donor value and virtual conspicuous tokens of recognition on blood donor intentions. Using online survey data from 186 Australian blood donors, results show that emotional value is in fact a stronger predictor of intentions to donate blood than altruistic value, while social value is the strongest predictor of intentions when recognition is provided. Clear linkages between dimensions of donor value (altruistic, emotional and social) and conspicuous donation behaviour (CDB) were identified. The findings provide valuable insights into the use of conspicuous donation tokens of recognition on social media, and contribute to our understanding of the under-researched areas of donor value and CDB.
Abstract:
Locating previously unseen and unregistered individuals in complex camera networks from semantic descriptions is a time-consuming and often inaccurate process carried out by human operators or security staff on the ground. To promote the development and evaluation of automated semantic-description-based localisation systems, we present a new, publicly available, unconstrained 110-sequence database collected from 6 stationary cameras. Each sequence contains detailed semantic information for a single search subject who appears in the clip (gender, age, height, build, hair and skin colour, clothing type, texture and colour), and between 21 and 290 frames per clip are annotated with the target subject's location (over 11,000 frames are annotated in total). A novel approach for localising a person given a semantic query is also proposed and demonstrated on this database. The proposed approach incorporates clothing colour and type (for clothing worn below the waist), as well as height and build, to detect people. A method to assess the quality of candidate regions, as well as a symmetry-driven approach to aid in modelling clothing on the lower half of the body, is proposed within this approach. An evaluation on the proposed dataset shows that a relative improvement in localisation accuracy of up to 21% is achieved over the baseline technique.
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach due to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of these technical uncertainties. Nevertheless, there is limited research verifying the applicability of such WSNs to demanding SHM applications like modal analysis and damage identification. Based on a brief review, this paper first identifies Data Synchronization Error (DSE) as the most inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), because both have been widely applied over the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations were made to generate multiple DSE-corrupted datasets and facilitate statistical analyses. The results show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, the combination of preferred OMA techniques and the use of channel projection for the time-domain OMA technique to cope with DSE are recommended.
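For readers unfamiliar with the FDD family referred to above, its core step is a singular value decomposition of the output cross-power spectral density matrix at every frequency line, with peaks of the first singular value indicating candidate modes. A minimal sketch of that standard step (not the specific implementation used in this study) could look as follows.

```python
import numpy as np
from scipy.signal import csd

def fdd_first_singular_values(acc, fs, nperseg=1024):
    """acc: array of shape (n_channels, n_samples) with acceleration records.
    Returns the frequency vector and the first singular value of the output
    cross-power spectral density matrix at each frequency line; peaks in this
    curve suggest candidate modal frequencies."""
    n_ch = acc.shape[0]
    f, _ = csd(acc[0], acc[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)   # cross-PSD matrix G(f)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nperseg)
    # SVD per frequency line; the corresponding singular vectors approximate mode shapes.
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
    return f, s1
```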
Abstract:
Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify that a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time. Traditional digital signature schemes, however, impose no uniqueness conditions, so a trusted authority could make multiple certifications for the same subject but different objects, be it intentionally, by accident, or under (legal or illegal) coercion. We propose the notion of a double-authentication-preventing signature, in which a value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving (a form of self-enforcement) and would give binding authorities like CAs cryptographic arguments with which to resist legal coercion. We give a generic construction using a new type of trapdoor function with extractability properties, which we show can be instantiated using the group of sign-agnostic quadratic residues modulo a Blum integer.
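The construction itself builds on extractable trapdoor functions over sign-agnostic quadratic residues, which does not fit in a short sketch, but the intended interface and the double-signature penalty can be illustrated abstractly. Everything below (the DapsSigner wrapper, sign_impl, extract_impl) is a hypothetical placeholder, not the paper's scheme.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, Tuple

@dataclass
class DapsSigner:
    """Hypothetical wrapper illustrating the DAPS contract; sign_impl and
    extract_impl stand in for the paper's construction and are not real
    cryptographic implementations."""
    sign_impl: Callable[[Any, Any], Any]   # (subject, message) -> signature
    extract_impl: Callable[..., Any]       # two (message, signature) pairs -> key material
    _seen: Dict[Any, Tuple[Any, Any]] = field(default_factory=dict)

    def sign(self, subject, message):
        sig = self.sign_impl(subject, message)
        prev = self._seen.setdefault(subject, (message, sig))
        if prev[0] != message:
            # Double authentication: two different messages signed for one subject.
            # By design, anyone holding both signed pairs can recover enough secret
            # key material to forge arbitrary signatures, which is the self-enforcement deterrent.
            leaked = self.extract_impl(subject, prev, (message, sig))
            raise RuntimeError(f"signing key compromised for subject {subject!r}: {leaked!r}")
        return sig
```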
Abstract:
In this paper we tackle the problem of finding an efficient signature verification scheme when the number of signatures is significantly large and the verifier is relatively weak. In particular, we tackle the problem of message authentication in many-to-one communication networks known as concast communication. The paper presents three signature screening algorithms for a variant of ElGamal-type digital signatures. The cost of these schemes is n applications of hash functions, 2n modular multiplications and n modular additions, plus the verification of one digital signature, where n is the number of signatures. The paper also presents a solution to the open problem of finding a fast screening scheme for non-RSA digital signature schemes.
Abstract:
This paper presents a novel place recognition algorithm inspired by the recent discovery of overlapping, multi-scale spatial maps in the rodent brain. We mimic this hierarchical framework by training arrays of Support Vector Machines to recognize places at multiple spatial scales. Place match hypotheses are then cross-validated across all spatial scales, a process that combines the spatial specificity of the finest spatial map with the consensus provided by broader mapping scales. Experiments on three real-world datasets, including a large robotics benchmark, demonstrate that mapping over multiple scales uniformly improves place recognition performance over a single-scale approach without sacrificing localization accuracy. We present an analysis that illustrates how matching over multiple scales leads to better place recognition performance and discuss several promising areas for future investigation.
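As a rough illustration of the cross-validation idea (not the authors' pipeline), a fine-scale place hypothesis can be accepted only when the classifiers at every coarser scale agree with it; the scale ordering, classifier interface and place hierarchy below are assumptions made for the sketch.

```python
def cross_validated_match(descriptor, classifiers_by_scale, place_hierarchy):
    """classifiers_by_scale: dict mapping scale -> fitted classifier with predict()
    (assumed: smaller key = finer scale). place_hierarchy: dict mapping a fine-scale
    place id to a dict of {coarser scale: enclosing place id}. Returns the fine-scale
    match only if every coarser scale independently agrees, else None."""
    scales = sorted(classifiers_by_scale)                   # finest scale first (assumed)
    fine = scales[0]
    fine_place = classifiers_by_scale[fine].predict([descriptor])[0]
    for scale in scales[1:]:
        coarse_place = classifiers_by_scale[scale].predict([descriptor])[0]
        if place_hierarchy[fine_place].get(scale) != coarse_place:
            return None                                     # scales disagree: reject hypothesis
    return fine_place
```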
Abstract:
A dynamic accumulator is an algorithm that merges a large set of elements into a constant-size value such that, for each accumulated element, there is a witness confirming that the element was included in the value, and elements can be dynamically added to and deleted from the original set. Recently, Wang et al. presented a dynamic accumulator for batch updates at ICICS 2007. However, their construction suffers from two serious problems. We analyze them and propose a way to repair their scheme. We then use the accumulator to construct a new scheme for common secure indices with conjunctive keyword-based retrieval.
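For context, the witness mechanics the abstract relies on can be seen in the classic RSA-style accumulator, shown below as a toy sketch with deliberately insecure parameters; this is the textbook construction, not Wang et al.'s batch-update scheme or the repaired variant.

```python
from math import prod

# Toy parameters only: a real deployment uses a large RSA modulus with unknown
# factorisation and maps elements to primes with a collision-resistant hash.
N = 3233          # 61 * 53, for illustration
g = 2             # public base

def accumulate(primes):
    """Accumulator value for a set of prime-represented elements."""
    return pow(g, prod(primes), N)

def witness(primes, x):
    """Witness for x: the accumulator of every element except x."""
    return pow(g, prod(p for p in primes if p != x), N)

def verify(acc, x, wit):
    """x is accumulated iff raising its witness to x gives the accumulator."""
    return pow(wit, x, N) == acc

elements = [3, 5, 11]
acc = accumulate(elements)
assert verify(acc, 5, witness(elements, 5))        # member: witness checks out
assert not verify(acc, 7, witness(elements, 7))    # non-member: no valid witness
```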
Abstract:
A parallel authentication and public-key encryption scheme is introduced and exemplified by joint encryption and signing, which compares favorably with sequential Encrypt-then-Sign (EtS) or Sign-then-Encrypt (StE) schemes as far as both efficiency and security are concerned. A security model for signcryption, and thus for joint encryption and signing, has recently been defined which considers possible attacks and security goals. Such a scheme is considered secure if the encryption part guarantees indistinguishability and the signature part prevents existential forgeries, for outsider as well as insider adversaries. We propose two schemes of parallel signcryption, which are efficient alternatives to Commit-then-Encrypt-and-Sign (CtE&S). They are both provably secure in the random oracle model. The first one, called generic parallel encrypt and sign, is secure if the encryption scheme is semantically secure against chosen-ciphertext attacks and the signature scheme prevents existential forgeries against random-message attacks. The second scheme, called optimal parallel encrypt and sign, applies random oracles similar to the OAEP technique in order to achieve security using encryption and signature components with very weak security requirements: encryption is expected to be one-way under chosen-plaintext attacks, while the signature needs to be secure against universal forgeries under random-plaintext attack, which is actually the case for both plain-RSA encryption and signature under the usual RSA assumption. Both proposals are generic in the sense that any suitable encryption and signature schemes (i.e., ones which simply achieve the required security) can be used. Furthermore, they allow parallel encryption and signing as well as parallel decryption and verification. Properties of parallel encrypt and sign schemes are considered, and a new security standard for parallel signcryption is proposed.
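The commitment trick that makes encryption and signing parallelisable can be illustrated with a minimal sketch of the generic commit-then-encrypt-and-sign pattern the abstract compares against; the hash-based commitment and the placeholder encrypt/sign callables below are illustrative assumptions, not the paper's optimised constructions.

```python
import os, hashlib
from concurrent.futures import ThreadPoolExecutor

def commit(message: bytes):
    """Hash-based commitment (illustrative stand-in for a real commitment scheme)."""
    r = os.urandom(16)
    return hashlib.sha256(r + message).digest(), r     # (commitment c, randomness r)

def parallel_encrypt_and_sign(message: bytes, encrypt, sign):
    """Generic commit-then-encrypt-and-sign pattern: commit to the message first,
    then run the (placeholder) encrypt and sign operations concurrently.
    The signature covers only the commitment; the ciphertext hides the opening."""
    c, r = commit(message)
    with ThreadPoolExecutor(max_workers=2) as pool:
        ct = pool.submit(encrypt, r + message)   # ciphertext of the de-commitment (r, m)
        sig = pool.submit(sign, c)               # signature on the commitment c
        return c, ct.result(), sig.result()
    # The receiver decrypts to recover (r, m), verifies the signature on c,
    # and checks that c == H(r || m), also in parallel.
```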
Abstract:
We study the multicast stream authentication problem when an opponent can drop, reorder and inject data packets into the communication channel. In this context, bandwidth limitation and fast authentication are the core concerns, so any authentication scheme should reduce as much as possible both the packet overhead and the time spent at the receiver checking the authenticity of collected elements. Recently, Tartary and Wang developed a provably secure protocol with small packet overhead and a reduced number of signature verifications to be performed at the receiver. In this paper, we propose a hybrid scheme based on Tartary and Wang's approach and Merkle hash trees. Our construction exhibits a smaller overhead and much faster processing at the receiver, making it even more suitable for multicast than the earlier approach. Like Tartary and Wang's protocol, our construction is provably secure and allows total recovery of the data stream despite erasures and injections occurring during transmission.
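The Merkle-hash-tree ingredient mentioned above can be summarised with a standard sketch: the sender signs only the tree root over a block of packet hashes, and each packet carries the sibling hashes needed to recompute that root, so the receiver can verify it with hashing alone. This shows only the generic building block, not the full hybrid protocol.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root_and_paths(packets):
    """Build a Merkle tree over packet hashes; return the root (to be signed once)
    and, for each packet, the sibling hashes needed to recompute the root."""
    level = [h(p) for p in packets]
    paths = [[] for _ in packets]
    idx = list(range(len(packets)))           # position of each packet in the current level
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last node on odd-sized levels
            level.append(level[-1])
        for k, i in enumerate(idx):
            paths[k].append(level[i ^ 1])     # sibling hash at this level
            idx[k] = i // 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0], paths

def verify_packet(packet, path, root, index):
    """Recompute the root from one packet and its sibling path."""
    node = h(packet)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

packets = [b"pkt-%d" % i for i in range(5)]
root, paths = merkle_root_and_paths(packets)
assert verify_packet(packets[3], paths[3], root, 3)
```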
Abstract:
Slippage in the roller-race contact has always played a central role in the diagnostics of rolling element bearings. Due to this phenomenon, vibrations triggered by a localized damage are not strictly periodic and therefore not detectable by means of common spectral functions such as the power spectral density or the discrete Fourier transform. Due to the strong second-order cyclostationary component characterizing these signals, techniques such as the cyclic coherence, its integrated form and the squared envelope spectrum have proven effective in a wide range of applications. An expert user can easily identify damage and its location within the bearing components by looking for particular patterns of peaks in the output of the selected cyclostationary tool. These peaks are found in the neighborhood of specific frequencies that can be calculated in advance as functions of the geometrical features of the bearing itself. Unfortunately, the non-periodicity of the vibration signal is not the only consequence of slippage: it often also involves a displacement of the damage characteristic peaks from the theoretically expected frequencies. This issue becomes particularly important when developing highly automated algorithms for bearing damage recognition, and, in order to correctly set thresholds and tolerances, a quantitative description of the magnitude of the above-mentioned deviations is needed. This paper aims to identify how these deviations depend on the different operating conditions. This has been possible thanks to an extensive experimental campaign performed on a full-scale bearing test rig able to realistically reproduce the operating and environmental conditions typical of an industrial high-power electric motor and gearbox. The importance of load is investigated in detail for different bearing damages. Finally, some guidelines on how to cope with such deviations are given, based on the expertise gained during the experimental activity.
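For reference, the characteristic frequencies mentioned above follow from standard bearing kinematics; the sketch below computes the usual cage, outer-race, inner-race and roller-spin frequencies (the numerical example is arbitrary). Measured peaks fall only in the neighbourhood of these values because of the slippage the study quantifies.

```python
from math import cos, radians

def bearing_fault_frequencies(f_r, n, d, D, phi_deg=0.0):
    """Standard kinematic fault frequencies for a rolling element bearing.
    f_r: shaft rotation frequency [Hz], n: number of rolling elements,
    d: rolling element diameter, D: pitch diameter (same units), phi_deg: contact angle."""
    c = (d / D) * cos(radians(phi_deg))
    return {
        "FTF":  0.5 * f_r * (1 - c),               # cage (fundamental train) frequency
        "BPFO": 0.5 * n * f_r * (1 - c),           # ball pass frequency, outer race
        "BPFI": 0.5 * n * f_r * (1 + c),           # ball pass frequency, inner race
        "BSF":  0.5 * (D / d) * f_r * (1 - c**2),  # ball (roller) spin frequency
    }

# Purely illustrative geometry: 9 rollers, 7.9 mm roller diameter,
# 34.5 mm pitch diameter, shaft running at 25 Hz.
print(bearing_fault_frequencies(f_r=25.0, n=9, d=7.9, D=34.5))
```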
Abstract:
User-generated content, where content is created and shared among consumers, is of key importance to marketers. This study investigates consumers' intrinsic and extrinsic motivations to understand why people create user-generated branded video content. Specifically, we examine the role of altruism (an individual difference; intrinsic motivation), social benefits (extrinsic reward) and economic incentives (extrinsic reward) on intentions to create user-generated content. Results show that extrinsic rewards (economic incentives) produce more positive intentions to create user-generated content than intrinsic motivations. However, an effect for altruism is also evident, revealing that high-altruism consumers are more likely to create positive user-generated content. The implication of these findings is that marketers wanting to encourage user-generated content about their brands should target high-altruism consumers and offer economic incentives for content creation.
Abstract:
In this paper we demonstrate that existing cooperative spectrum sensing schemes formulated for static primary users cannot accurately detect dynamic primary users, regardless of the information fusion method. The error arises because the sensing parameters calculated by the conventional detector yield sensing performance that violates the sensing requirements. Furthermore, the error is accumulated and compounded as the number of cooperating nodes grows. To address this limitation, we design and implement a duty cycle detection model for cooperative spectrum sensing that accurately calculates sensing parameters satisfying the sensing requirements. We show that a longer sensing duration is required to compensate for dynamic primary user traffic.
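As a rough, assumption-laden illustration of the effect described above (not the authors' detection model), the toy Monte Carlo sketch below shows an energy detector whose threshold was derived for a primary user present throughout the sensing window losing detection probability once the primary user is active for only part of it.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def detection_probability(snr_db, n_samples=2000, duty=1.0, pfa=0.01, trials=5000):
    """Energy detection of a Gaussian primary-user signal in unit-variance noise.
    `duty` is the fraction of the sensing window during which the PU transmits.
    The threshold is the usual constant-false-alarm value derived as if the PU
    were either absent or present for the whole window."""
    snr = 10 ** (snr_db / 10)
    thr = n_samples + norm.isf(pfa) * np.sqrt(2 * n_samples)   # Gaussian-approx. CFAR threshold
    detections = 0
    for _ in range(trials):
        noise = rng.normal(size=n_samples)
        active = int(duty * n_samples)                          # PU on for only part of the window
        signal = np.concatenate([rng.normal(scale=np.sqrt(snr), size=active),
                                 np.zeros(n_samples - active)])
        detections += np.sum((noise + signal) ** 2) > thr
    return detections / trials

# Same SNR and same threshold: a PU active for the whole window vs. 40% of it.
print(detection_probability(-10, duty=1.0), detection_probability(-10, duty=0.4))
```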