985 results for sequential-tests
Abstract:
Reliable performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent, so it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated on text-dependent speaker verification, using Hidden Markov Model based digit-dependent speaker models at each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled by two parameters, the number of decision stages (instances) and the number of attempts at each stage (samples), fine-tuned on an evaluation/tuning set. The derived expressions for the error estimates are statistically validated on test data. The performance of the sequential method is further shown to depend on the order in which the digits (instances) are combined and on the nature of the repeated attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and a sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. Tuning the parameters (the number of instances and samples) serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is also applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
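As a rough illustration of the independence-based error expressions the abstract mentions, the sketch below composes base classifier error rates across stages (instances, combined with an AND rule) and attempts per stage (samples, combined with an OR rule). The function names and the AND/OR decision rules are assumptions for illustration, not the dissertation's exact formulation.

```python
# Sketch: composing verification errors for a sequential multi-instance,
# multi-sample fusion, assuming statistically independent decisions.
# The AND-across-stages / OR-across-attempts rules are illustrative
# assumptions, not necessarily the dissertation's decision logic.

def stage_errors(far: float, frr: float, samples: int) -> tuple[float, float]:
    """Errors for one stage when a claimant may retry `samples` times
    and the stage accepts if any single attempt accepts (OR rule)."""
    stage_far = 1.0 - (1.0 - far) ** samples   # impostor passes if any attempt passes
    stage_frr = frr ** samples                 # client is rejected only if all attempts fail
    return stage_far, stage_frr

def sequential_errors(far: float, frr: float,
                      instances: int, samples: int) -> tuple[float, float]:
    """Overall errors when all `instances` stages must accept (AND rule)."""
    s_far, s_frr = stage_errors(far, frr, samples)
    total_far = s_far ** instances                 # impostor must pass every stage
    total_frr = 1.0 - (1.0 - s_frr) ** instances   # client fails if any stage rejects
    return total_far, total_frr

# More instances push false accepts down; more samples push false rejects down,
# which is the controllable trade-off described in the abstract.
print(sequential_errors(0.05, 0.05, instances=4, samples=2))
```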
Abstract:
The IEC 61850 family of standards for substation communication systems was released in the early 2000s and includes IEC 61850-8-1 and IEC 61850-9-2, which enable Ethernet to be used for process-level connections between transmission substation switchyards and control rooms. This paper presents an investigation of process bus protection performance, as the in-service behaviour of multi-function process buses is largely unknown. An experimental approach was adopted that used a Real Time Digital Simulator and 'live' substation automation devices. The effect of sampling synchronization error and network traffic on transformer differential protection performance was assessed and compared to conventional hard-wired connections. Ethernet was used for all sampled value measurements, circuit breaker tripping, transformer tap-changer position reports and Precision Time Protocol synchronization of sampled value merging unit sampling. Test results showed that the protection relay under investigation operated correctly with process bus network traffic approaching 100% capacity. The protection system was not adversely affected by synchronizing errors significantly larger than the standards permit, suggesting these requirements may be overly conservative. This 'closed loop' approach, using substation automation hardware, validated the operation of protection relays under extreme conditions. Digital connections using a single shared Ethernet network outperformed conventional hard-wired solutions.
Abstract:
Advanced composite materials offer remarkable potential for upgrading civil engineering structures. The evolution of CFRP (carbon fibre reinforced polymer) technologies and their versatility for applications in civil construction require comprehensive and reliable codes of practice. Guidelines are available on the rehabilitation and retrofit of concrete structures with advanced composite materials; however, appropriate design guidelines still need to be developed for CFRP-strengthened steel structures, for which it is important to understand the bond characteristics between CFRP and steel plates. This paper describes a series of double strap shear tests, loaded in tension, that investigate the bond between CFRP sheets and steel plates. Both normal modulus (240 GPa) and high modulus (640 GPa) CFRPs were used in the test program. Strain gauges were mounted to capture the strain distribution along the CFRP length. Different failure modes were observed for joints with normal modulus CFRP and those with high modulus CFRP, while the strain distribution along the CFRP length was similar in the two cases. A shorter effective bond length was obtained for joints with high modulus CFRP, whereas a larger ultimate load-carrying capacity can be achieved for joints with normal modulus CFRP when the bond length is long enough.
Abstract:
This thesis investigated a range of factors underlying the impact of uncorrected refractive errors on laboratory-based tests related to driving. Results showed that refractive blur had a pronounced effect on recognition of briefly presented targets, particularly under low light conditions. Blur, in combination with audio distracters, also slowed participants' reactions to road hazards in video presentations. This suggests that recognition of suddenly appearing road hazards might be slowed in the presence of refractive blur, particularly under conditions of distraction. These findings highlight the importance of correcting even small refractive errors for driving, particularly at night.
Abstract:
Current GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station itself. The existing network-based processing modes, whether executed in real time or post-processed, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also optionally be provided. In this mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models; the distinction lies in how the user receiver software deals with corrections from the reference station solutions and with ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, under various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to standard float PPP solutions without station augmentation or ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
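To make the architecture more concrete, the sketch below models the kind of per-station "station-based solution" record the abstract describes. It is purely illustrative: the record layout, field names and units are assumptions drawn from the parameter list above, not a format defined by the paper or by the RINEX/RTCM standards.

```python
# Hypothetical sketch of a per-station "station-based solution" record,
# assembled from the parameters listed in the abstract. Field names, types
# and units are illustrative assumptions only.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class StationSolution:
    station_id: str
    epoch: float                                   # time of the solution (s)
    receiver_clock: float                          # precise receiver clock offset (s)
    zenith_trop_delay: float                       # zenith tropospheric delay (m)
    code_biases: dict[str, float]                  # differential code bias per signal (ns)
    ambiguities: dict[int, float]                  # carrier-phase ambiguity per satellite (cycles)
    iono_delays: dict[int, float]                  # slant ionospheric delay per satellite (m)
    line_of_sight: dict[int, tuple[float, float]]  # (azimuth, elevation) per satellite (deg)
    covariance: list[list[float]] | None = None    # optional covariance of estimated parameters
```

A data server could forward such records from all or a subset of nearby stations to PPP/RTK user receivers, which then apply them as corrections in their own observation equations.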
Abstract:
Courts set guidelines for when genetic testing would be ordered; medical testing; life insurers; use of test results; confidentiality.
Abstract:
This thesis presents a sequential pattern-based model (PMM) to detect news topics on a popular microblogging platform, Twitter. PMM captures key topics and measures their importance using pattern properties and Twitter characteristics. This study shows that PMM outperforms traditional term-based models and could potentially be implemented as a decision support system. The research contributes to news detection and addresses the challenging problem of extracting information from short and noisy text.
Abstract:
Research suggests that the length and quality of police-citizen encounters affect policing outcomes. The Koper Curve, for example, shows that the optimal length for police presence in hot spots is between 14 and 15 minutes, with diminishing returns observed thereafter. Our study, using data from the Queensland Community Engagement Trial (QCET), examines the impact of encounter length on citizen perceptions of police performance. QCET was a randomised field trial in which 60 random breath test (RBT) traffic stop operations were randomly allocated to an experimental condition involving a procedurally just encounter or to a business-as-usual control condition. Our results show that the optimal length for procedurally just encounters during RBT traffic stops is just under 2 minutes. We show, therefore, that it is important to encourage and facilitate positive police-citizen encounters during RBT at traffic stops, while ensuring that the length of these interactions does not pass the point of diminishing returns.
Abstract:
In this paper we present a unified sequential Monte Carlo (SMC) framework for performing sequential experimental design to discriminate between a set of models. The model discrimination utility that we advocate is fully Bayesian and based on mutual information, which SMC provides a convenient way to estimate. Our experience suggests that the approach works well for both discrete and continuous sets of models, and outperforms other model discrimination approaches.
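For intuition, the sketch below shows a plain Monte Carlo estimate of the mutual information utility for choosing between two toy models. Everything here is an assumption for illustration: the two Gaussian models, their known parameters and all names are invented, and in the paper's framework the model probabilities and parameter uncertainty would be carried by SMC particles rather than fixed as they are here.

```python
# Sketch: Monte Carlo estimate of the mutual information I(M; Y | d) between
# the model indicator M and a future observation Y under candidate design d.
# Two invented models: y ~ N(d, 1) under model 0, y ~ N(d^2, 1) under model 1.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(y, d, model):
    """p(y | model, d) for the two toy Gaussian models."""
    mean = d if model == 0 else d ** 2
    return np.exp(-0.5 * (y - mean) ** 2) / np.sqrt(2 * np.pi)

def mutual_information(d, prior=(0.5, 0.5), n=20_000):
    """Estimate E[log p(y|m,d) - log p(y|d)] by sampling (m, y) pairs."""
    models = rng.choice(2, size=n, p=prior)          # m ~ p(m)
    means = np.where(models == 0, d, d ** 2)
    y = means + rng.standard_normal(n)               # y ~ p(y | m, d)
    p_given_m = np.where(models == 0, likelihood(y, d, 0), likelihood(y, d, 1))
    evidence = prior[0] * likelihood(y, d, 0) + prior[1] * likelihood(y, d, 1)
    return np.mean(np.log(p_given_m) - np.log(evidence))

# Pick the design whose data best discriminate the two models.
designs = np.linspace(0.0, 2.0, 21)
best = max(designs, key=mutual_information)
print(f"most discriminating design: d = {best:.2f}")
```

The utility is largest where the models' predictions differ most, which is the basic principle the paper's fully Bayesian discrimination criterion formalises.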
Abstract:
We present two unconditionally secure protocols for private set disjointness tests. To provide intuition for our protocols, we give a naive example that applies Sylvester matrices. Unfortunately, this simple construction is insecure, as it reveals information about the intersection cardinality; specifically, it discloses a lower bound on it. Using Lagrange interpolation, we provide a protocol for the honest-but-curious case that reveals no additional information. Finally, we describe a protocol that is secure against malicious adversaries, in which a verification test is applied to detect misbehaving participants. Both protocols require O(1) rounds of communication. Our protocols are more efficient than previous protocols in terms of communication and computation overhead, and unlike previous protocols, whose security relies on computational assumptions, they provide information-theoretic security. To our knowledge, our protocols are the first to be designed without generic secure function evaluation, and they are the most efficient protocols for private disjointness tests in the malicious adversary case.
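The cardinality leak in the naive construction can be made concrete. If each party's set is encoded as the roots of a polynomial, the Sylvester matrix of the two polynomials has rank deficiency equal to the degree of their greatest common divisor, i.e. (in exact arithmetic) the number of shared elements, which is exactly the kind of cardinality information the abstract notes is disclosed. The sketch below illustrates that structure numerically; it is an illustration of the naive construction's weakness under assumed small integer sets, not of the paper's secure protocols.

```python
# Sketch: the Sylvester-matrix structure behind the naive disjointness test.
# Rank deficiency of the Sylvester matrix = deg(gcd) = number of common roots,
# so exposing the matrix (or its rank) leaks the intersection cardinality.
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials p, q given as coefficient arrays
    (highest degree first)."""
    m, n = len(p) - 1, len(q) - 1
    s = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of p's coefficients
        s[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted copies of q's coefficients
        s[n + i, i:i + n + 1] = q
    return s

a, b = {1, 2, 3}, {3, 5}                       # sets with one shared element
p, q = np.poly(sorted(a)), np.poly(sorted(b))  # polynomials with sets as roots
s = sylvester(p, q)
deficiency = s.shape[0] - np.linalg.matrix_rank(s)
print(f"rank deficiency = {deficiency}")       # prints 1: the overlap is visible
```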
Abstract:
Dose-finding trials are clinical data collection processes whose primary objective is to estimate an optimum dose of an investigational new drug when given to a patient. This thesis develops and explores three novel dose-finding design methodologies. All of the design methodologies presented are pragmatic: they use statistical models, incorporate clinicians' prior knowledge efficiently, and allow a trial to be stopped early for safety or futility reasons. Designing actual dose-finding trials with these methodologies will minimize practical difficulties, improve the efficiency of dose estimation, offer the flexibility to stop early, and reduce possible patient discomfort or harm.