196 results for Biometric authentication
Abstract:
This paper addresses the development of trust in the use of Open Data through the incorporation of appropriate authentication and integrity parameters, for use by end-user Open Data application developers, within an architecture for trustworthy Open Data services. The advantage of this architecture is that it is far more scalable and is not another certificate-based hierarchy, with the certificate revocation management problems that entails. With the use of a Public File, if a key is compromised, it is a simple matter for the single responsible entity to replace the key pair with a new one and re-perform the data file signing process. Under the proposed architecture, the Open Data environment does not interfere with the internal security schemes that an entity might employ. However, the architecture incorporates, when needed, parameters from the entity, e.g. the person who authorized publishing as Open Data, at the time datasets are created or added.
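The sign/publish/verify/re-key flow described above can be sketched in a few lines. This is a minimal illustration only: it uses a deliberately tiny schoolbook RSA key pair (insecure, for demonstration), and the entity name and Public File layout are hypothetical, not taken from the paper.

```python
import hashlib

# Toy schoolbook RSA with tiny primes -- illustration only, NOT secure.
p, q = 61, 53
n = p * q                              # modulus 3233
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+ modular inverse)

def digest_int(data: bytes) -> int:
    # Reduce the data file's digest into the toy modulus.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes, priv: int) -> int:
    return pow(digest_int(data), priv, n)

def verify(data: bytes, sig: int, pub: int) -> bool:
    return pow(sig, pub, n) == digest_int(data)

# "Public File": one verification key per publishing entity.
public_file = {"stats-office": e}      # entity name is hypothetical

dataset = b"year,count\n2014,37\n2015,41\n"
sig = sign(dataset, d)
assert verify(dataset, sig, public_file["stats-office"])

# On key compromise, the single responsible entity swaps in a new key
# pair in the Public File and re-signs its data files; no certificate
# revocation hierarchy is involved.
```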
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method involves incorporating an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series, and the cumulative number of prion disease cases in mule deer.
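The alive particle filter at the heart of the method can be sketched for an exactly observed integer time series. The INAR(1) model, parameter values, and particle count below are illustrative assumptions, not the paper's exact setup; the key idea shown is resampling-until-alive with the inverse-sampling (negative binomial) estimate of each step's matching probability.

```python
import math, random

random.seed(1)

def poisson(lam):
    # Knuth's method, adequate for small lambda.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def inar_step(x_prev, alpha=0.6, lam=1.0):
    # INAR(1): binomial thinning of the previous count plus Poisson arrivals.
    survivors = sum(random.random() < alpha for _ in range(x_prev))
    return survivors + poisson(lam)

def alive_filter_loglik(y, n=200, max_tries=10**6):
    """Alive-particle-filter log-likelihood estimate for exactly observed
    low counts: keep simulating until n particles match y[t]."""
    particles = [y[0]] * n            # condition on the first observation
    loglik = 0.0
    for t in range(1, len(y)):
        alive, tries = [], 0
        while len(alive) < n:
            tries += 1
            if tries > max_tries:
                return float("-inf")  # filter died at this step
            x = inar_step(random.choice(particles))
            if x == y[t]:
                alive.append(x)
        # Inverse (negative binomial) sampling: (n-1)/(tries-1) is an
        # unbiased estimate of the per-step matching probability.
        loglik += math.log((n - 1) / (tries - 1))
        particles = alive
    return loglik
```

A pseudo-marginal sampler would plug this noisy likelihood estimate into an MCMC or SMC acceptance ratio.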
Abstract:
Neural interface devices and the melding of mind and machine challenge the law in determining where civil liability for injury, damage or loss should lie. The ability of the human mind to instruct and control these devices means that in a negligence action against a person with a neural interface device, determining the standard of care owed by him or her will be of paramount importance. This article considers some of the factors that may influence the court’s determination of the appropriate standard of care to be applied in this situation, leading to the conclusion that a new standard of care might evolve.
Abstract:
Real-world cryptographic protocols such as the widely used Transport Layer Security (TLS) protocol support many different combinations of cryptographic algorithms (called ciphersuites) and simultaneously support different versions. Recent advances in provable security have shown that most modern TLS ciphersuites are secure authenticated and confidential channel establishment (ACCE) protocols, but these analyses generally focus on single ciphersuites in isolation. In this paper we extend the ACCE model to cover protocols with many different sub-protocols, capturing both multiple ciphersuites and multiple versions, and define a security notion for secure negotiation of the optimal sub-protocol. We give a generic theorem that shows how secure negotiation follows, with some additional conditions, from the authentication property of secure ACCE protocols. Using this framework, we analyse the security of ciphersuite negotiation and three variants of version negotiation in TLS, including a recently proposed mechanism for detecting fallback attacks.
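The negotiation goal can be illustrated with a small sketch: the outcome should be the optimal sub-protocol common to both peers' supported sets. The version strings and preference order are hypothetical placeholders, not TLS wire encodings.

```python
# Hypothetical version identifiers; real TLS encodes versions differently.
SERVER_PREFS = ["TLS1.2", "TLS1.1", "TLS1.0"]   # best first

def negotiate(client_versions, server_prefs=SERVER_PREFS):
    """Return the server's most-preferred sub-protocol that the client
    also supports, or None if there is no common sub-protocol."""
    for v in server_prefs:
        if v in client_versions:
            return v
    return None

# Secure negotiation means both peers end up on the *optimal* common
# sub-protocol; a fallback attack would push them onto a weaker one.
assert negotiate(["TLS1.0", "TLS1.2"]) == "TLS1.2"
```

The fallback-detection mechanism analysed in the paper lets a server recognise that a client's downgraded retry was not voluntary, defeating exactly the attack this sketch's assertion rules out.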
Abstract:
Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. We demonstrate the practicality of post-quantum key exchange by constructing cipher suites for the Transport Layer Security (TLS) protocol that provide key exchange based on the ring learning with errors (R-LWE) problem, and we accompany these cipher suites with a rigorous proof of security. Our approach ties lattice-based key exchange together with traditional authentication using RSA or elliptic curve digital signatures: the post-quantum key exchange provides forward secrecy against future quantum attackers, while authentication can be provided using RSA keys that are issued by today's commercial certificate authorities, smoothing the path to adoption. Our cryptographically secure implementation, aimed at the 128-bit security level, reveals that the performance price when switching from non-quantum-safe key exchange is not too high. With our R-LWE cipher suites integrated into the OpenSSL library and using the Apache web server on a 2-core desktop computer, we could serve 506 RLWE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KiB payload. Compared to elliptic curve Diffie-Hellman, this means an 8 KiB increased handshake size and a reduction in throughput of only 21%. This demonstrates that provably secure post-quantum key exchange can already be considered practical.
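The reported figures imply a baseline for the classical comparison. A back-of-envelope check (the 506 connections/s, 21% reduction, and 8 KiB figures are from the abstract; the implied ECDHE baseline is our own derived estimate, not a number the paper states):

```python
# Figures reported in the abstract.
rlwe_rate = 506             # HTTPS connections/s, RLWE-ECDSA-AES128-GCM-SHA256
throughput_drop = 0.21      # reduction vs. elliptic curve Diffie-Hellman
handshake_increase_kib = 8  # extra handshake size vs. ECDHE

# Implied classical ECDHE baseline (derived, approximate).
ecdhe_rate = rlwe_rate / (1 - throughput_drop)
print(f"implied ECDHE baseline: {ecdhe_rate:.0f} connections/s")
```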
Abstract:
Developed economies are moving from an economy of corporations to an economy of people. More than ever, people produce and share value amongst themselves, and create value for corporations through co-creation and by sharing their data. This data remains in the hands of corporations and governments, but people want to regain control. Digital identity 3.0 gives people that control, and much more. In this paper we describe a concept for a digital identity platform that substantially goes beyond common concepts providing authentication services. Instead, the notion of digital identity 3.0 empowers people to decide who creates, updates, reads and deletes their data, and to bring their own data into interactions with organisations, governments and peers. To the extent that the user allows, this data is updated and expanded based on automatic, integrated and predictive learning, enabling trusted third party providers (e.g., retailers, banks, public sector) to proactively provide services. Consumers can also add to their digital identity desired meta-data and attribute values allowing them to design their own personal data record and to facilitate individualised experiences. We discuss the essential features of digital identity 3.0, reflect on relevant stakeholders and outline possible usage scenarios in selected industries.
Abstract:
We propose a novel multiview fusion scheme for recognizing human identity based on gait biometric data. The gait biometric data are acquired from video surveillance datasets captured by multiple cameras. Experiments on the publicly available CASIA dataset show the potential of the proposed fusion scheme for the development and implementation of automatic identity recognition systems.
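One common way to fuse multiview gait evidence is at the score level. The sketch below shows a weighted-sum fusion rule and closed-set identification; the fusion operator, weights, and subject scores are illustrative assumptions, as the abstract does not specify the paper's exact fusion method.

```python
def fuse_scores(view_scores, weights=None):
    """Weighted-sum score-level fusion across camera views.
    Defaults to equal weights (a hypothetical choice)."""
    if weights is None:
        weights = [1 / len(view_scores)] * len(view_scores)
    return sum(w * s for w, s in zip(weights, view_scores))

def identify(per_view_scores):
    """Closed-set identification: per_view_scores maps each gallery
    subject to that probe's match score from each camera view."""
    return max(per_view_scores, key=lambda sid: fuse_scores(per_view_scores[sid]))

# Illustrative scores from three camera views for two gallery subjects.
scores = {"A": [0.7, 0.4, 0.6], "B": [0.5, 0.9, 0.8]}
assert identify(scores) == "B"
```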
Abstract:
This thesis investigates the use of fusion techniques and mathematical modelling to increase the robustness of iris recognition systems against iris image quality degradation, pupil size changes and partial occlusion. The proposed techniques improve recognition accuracy and enhance security. They can be further developed for better iris recognition in less constrained environments that do not require user cooperation. A framework to analyse the consistency of different regions of the iris is also developed. This can be applied to improve recognition systems using partial iris images, and cancelable biometric signatures or biometric based cryptography for privacy protection.
Abstract:
Quasi-likelihood (QL) methods are often used to account for overdispersion in categorical data. This paper proposes a new way of constructing a QL function that stems from the conditional mean-variance relationship. Unlike traditional QL approaches to categorical data, this QL function is, in general, not a scaled version of the ordinary log-likelihood function. A simulation study is carried out to examine the performance of the proposed QL method. Fish mortality data from quantal response experiments are used for illustration.
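A minimal sketch of how a quasi-likelihood estimating equation is solved numerically. The constant-mean model, log link, and variance function V(μ) = μ(1 + μ) below are illustrative assumptions, not the paper's construction; the point is that only the mean-variance relationship, not a full likelihood, is needed.

```python
import math

def quasi_score(theta, y, var_fn):
    # Quasi-score U(theta) = sum_i (y_i - mu) * (dmu/dtheta) / V(mu),
    # with a log link mu = exp(theta) shared by all observations.
    mu = math.exp(theta)
    return sum((yi - mu) * mu / var_fn(mu) for yi in y)

def solve_ql(y, var_fn, lo=-10.0, hi=10.0):
    # Bisection on U(theta) = 0; U is decreasing in theta here.
    for _ in range(200):
        mid = (lo + hi) / 2
        if quasi_score(mid, y, var_fn) > 0:
            lo = mid
        else:
            hi = mid
    return math.exp((lo + hi) / 2)   # estimated mean

# Overdispersed variance V(mu) = mu * (1 + mu): for a constant mean the
# quasi-score root is still the sample mean, whatever V is.
mu_hat = solve_ql([2, 4, 6], lambda m: m * (1 + m))
```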
Abstract:
Artist statement – Artisan Gallery
I have a confession to make… I don’t wear a FitBit, I don’t want an Apple Watch and I don’t like bling LEDs. But what excites me is a future where ‘wearables’ are discreet, seamless and potentially one with our body. Burgeoning e-textiles research will provide the ability to inconspicuously communicate, measure and enhance human health and well-being. Alongside this, next-generation wearables arguably will not be worn on the body, but rather within the body…under the skin. ‘Under the Skin’ is a polemic piece provoking debate on the future of wearables – a place where they are not overt, not auxiliary and perhaps not apparent. Indeed, a future where wearables are under the skin or one with our apparel. And, as underwear sits closest to the skin and is the most intimate and cloaked apparel item we wear, this work unashamedly teases dialogue to explore how wearables can transcend from the overt to the unseen.
Context
Wearable technology, also referred to as wearable computing or ‘wearables’, is an embryonic field that has the potential to unsettle conventional notions of how technology can interact with, enhance and augment the human body. Wearable technology is the next generation of ubiquitous consumer electronics: ‘wearables’ are, in essence, miniature electronic devices worn by a person, under clothing, embedded within clothing/textiles, on top of clothing, or as stand-alone accessories/devices. The wearables market is predicted to grow to somewhere between $30 and $50 billion in the next 5 years (Credit Suisse, 2013). The global wearables market, still emergent in phase, carries forecasts of vast consumer revenue, with the potential to become a significant cross-disciplinary disruptive space for designers and entrepreneurs.
For fashion, the field of wearables is arguably at the intersection of the second and third generations of design innovation: the first phase being purely decorative, with aspects such as LED lighting; the second phase consisting of an array of wearable devices, such as smart watches, that communicate areas such as health and fitness; the third phase involving smart electronics woven into the textile to perform a vast range of functions such as body cooling, fabric colour change or garment silhouette change; and the fourth phase, where wearable devices are surgically implanted under the skin to augment, transform and enhance the human body. Whilst it is acknowledged that the wearable phases are neither clear-cut nor discrete in progression, and that design innovation can still be achieved with first-generation decorative approaches, the later generations of technology that are less overt and at times ‘under the skin’ provide a uniquely rich point for design innovation where the body and technology intersect as one. With this context in mind, the wearable provocation piece ‘Under the Skin’ provides a unique opportunity for the audience to question and challenge conventional notions that wearables need to be: a) manifest in nature, b) worn on or next to the body, and c) purely functional. The piece is informed by advances in the marketplace for wearable innovation, such as the Australian wearable design firm Catapult, with its discreet textile biometric sports-tracking innovation; French-based Spinali Design, with its UV app-based textile sensor that provides sunburn alerts; and opportunities for design technology innovation through UNICEF’s ‘Wearables for Good’ design challenge to improve the quality of life in disadvantaged communities.
Exhibition
As part of Artisan’s Wearnext exhibition, the work was on public display from 25 July to 7 November 2015 and received the following media coverage.
WEARNEXT ONLINE LISTINGS AND MEDIA COVERAGE:
http://indulgemagazine.net/wear-next/
http://www.weekendnotes.com/wear-next-exhibition-gallery-artisan/
http://concreteplayground.com/brisbane/event/wear-next_/
http://www.nationalcraftinitiative.com.au/news_and_events/event/48/wear-next
http://bneart.com/whats-on/wear-next_/
http://creativelysould.tumblr.com/post/124899079611/creative-weekend-art-edition
http://www.abc.net.au/radionational/programs/breakfast/smartly-dressed-the-future-of-wearable-technology/6744374
http://couriermail.newspaperdirect.com/epaper/viewer.aspx
RADIO COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986
TELEVISION COVERAGE:
https://au.news.yahoo.com/video/watch/29439742/how-you-could-soon-be-wearing-smart-clothes/#page1
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity- or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approaches. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O’Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise.
We demonstrate how this information can be obtained then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
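One simple way to operationalise elicited variable-quality judgements is to map them to prior inclusion probabilities for Bayesian variable selection (or to screening weights for classical SDMs). The 0-10 rating scale, variable names, and bounds below are hypothetical; the abstract does not specify the protocol's encoding.

```python
def inclusion_priors(elicited, floor=0.05, ceil=0.95):
    """Map elicited variable-quality scores (hypothetical 0-10 scale)
    to prior inclusion probabilities, bounded away from 0 and 1 so no
    variable is ruled in or out with certainty."""
    return {var: floor + (ceil - floor) * score / 10
            for var, score in elicited.items()}

# Illustrative expert ratings for three candidate predictors.
priors = inclusion_priors({"annual_rainfall": 9, "soil_ph": 4, "slope": 2})
```

These priors could then feed a spike-and-slab or Bayesian-tree variable-selection scheme, or simply rank candidates before fitting a machine learning SDM.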
Abstract:
This is a comprehensive study of a large range of biometric and optical parameters in people with type 1 diabetes. The parameters of 74 people with type 1 diabetes and an age-matched control group were assessed. Most of the people with diabetes had low levels of neuropathy, retinopathy and nephropathy. Marginal or no significant differences were found between groups for corneal shape, corneal thickness, pupil size, and pupil decentrations. Relative to the control group, the diabetes group demonstrated smaller anterior chamber depths, more curved lenses, greater lens thickness and lower lens equivalent refractive index. While the optics of eyes with diabetes resemble those of older eyes when compared with age-matched people without diabetes, the differences did not increase significantly with age. Age-related changes in the optics of the eyes of people with diabetes need not be accelerated if the diabetes is well controlled.
Abstract:
Purpose: To develop three-surface paraxial schematic eyes with different ages and sexes based on data for 7- and 14-year-old Chinese children from the Anyang Childhood Eye Study. Methods: Six sets of paraxial schematic eyes, including 7-year-old eyes, 7-year-old male eyes, 7-year-old female eyes, 14-year-old eyes, 14-year-old male eyes, and 14-year-old female eyes, were developed. Both refraction-dependent and emmetropic eye models were developed, with the former using linear dependence of ocular parameters on refraction. Results: A total of 2059 grade 1 children (boys 58%) and 1536 grade 8 children (boys 49%) were included, with mean ages of 7.1 ± 0.4 and 13.7 ± 0.5 years, respectively. Changes in these schematic eyes with aging are increased anterior chamber depth, decreased lens thickness, increased vitreous chamber depth, increased axial length, and decreased lens equivalent power. Male schematic eyes have deeper anterior chamber depth, longer vitreous chamber depth, longer axial length, and lower lens equivalent power than female schematic eyes. Changes in the schematic eyes with increasingly positive refraction are decreased anterior chamber depth, increased lens thickness, decreased vitreous chamber depth, decreased axial length, increased corneal radius of curvature, and increased lens power. In general, the emmetropic schematic eyes have biometric parameters similar to those arising from regression fits for the refraction-dependent schematic eyes. Conclusions: The paraxial schematic eyes of Chinese children may be useful for myopia research and for facilitating comparison with other children with the same or different racial backgrounds and living in different places.
Abstract:
Visual problems may be the first symptoms of diabetes. There have been several reports of transient changes in refraction in people newly diagnosed with diabetes. Visual acuity and refraction may be affected when there are ocular biometric changes. Small but significant biometric changes have been found by some authors during hyperglycaemia and during reduction of hyperglycaemia.[4] Here, we describe a case of type 2 diabetes that was detected from ocular straylight and intraocular thickness measurements...
Abstract:
Detect and Avoid (DAA) technology is widely acknowledged as a critical enabler for unsegregated Remotely Piloted Aircraft (RPA) operations, particularly Beyond Visual Line of Sight (BVLOS). Image-based DAA, in the visible spectrum, is a promising technological option for addressing the challenges DAA presents. Two impediments to progress for this approach are the scarcity of available video footage for training and testing algorithms, and the lack of testing regimes and specifications that facilitate repeatable, statistically valid performance assessment. This paper includes three key contributions undertaken to address these impediments. In the first instance, we detail our progress towards the creation of a large hybrid collision and near-collision encounter database. Second, we explore the suitability of techniques employed by the biometric research community (Speaker Verification and Language Identification) for DAA performance optimisation and assessment. These techniques include Detection Error Trade-off (DET) curves, Equal Error Rates (EER), and the Detection Cost Function (DCF). Finally, the hybrid database and the speech-based techniques are combined and employed in the assessment of a contemporary image-based DAA system. This system includes stabilisation, morphological filtering and a Hidden Markov Model (HMM) temporal filter.
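The speech-community metrics named above are straightforward to compute from detector match scores. A minimal sketch (the score values are illustrative, and the DCF costs and target prior are assumed NIST-style placeholders, not the paper's operating point):

```python
def det_points(genuine, impostor):
    """Sweep decision thresholds over the observed scores and return
    (threshold, false-reject rate, false-accept rate) triples -- the
    points of a Detection Error Trade-off (DET) curve."""
    pts = []
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)
        far = sum(i >= t for i in impostor) / len(impostor)
        pts.append((t, frr, far))
    return pts

def eer(genuine, impostor):
    # Equal Error Rate: the operating point where FRR and FAR are closest.
    return min(det_points(genuine, impostor), key=lambda p: abs(p[1] - p[2]))

def dcf(frr, far, c_miss=10.0, c_fa=1.0, p_target=0.01):
    # Detection Cost Function; costs and prior here are illustrative.
    return c_miss * p_target * frr + c_fa * (1 - p_target) * far

# Illustrative, perfectly separable scores: EER of zero.
threshold, frr, far = eer([0.8, 0.9, 0.7, 0.6], [0.2, 0.3, 0.4, 0.5])
```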