894 results for Inter Session Variability Modelling
Abstract:
In cats with underlying low insulin sensitivity, obesity is a major risk factor for type 2 diabetes. Strategies to prevent the onset of type 2 diabetes could be implemented if these cats could be identified. Currently, two labour-intensive and complex methods are used to measure insulin sensitivity in research studies: the hyperinsulinemic euglycemic clamp (Clamp) and the minimal model analysis (MINMOD) of a frequently sampled intravenous glucose tolerance test. However, simpler measures are required in practice. Validation of simple measures requires a well-established method with minimal inter-day variability. The aims of this study were to determine the inter-day variability of the current methods of measuring insulin sensitivity in cats, and to assess the relationship between these tests and simpler measures of insulin sensitivity.
Abstract:
We evaluated inter-individual variability in the optimal current direction for biphasic transcranial magnetic stimulation (TMS) of the motor cortex. The motor threshold for the first dorsal interosseus was detected visually at eight coil orientations in 45° increments. Each participant (n = 13) completed two experimental sessions. One participant with a low test–retest correlation (Pearson's r < 0.5) was excluded. In four subjects, visual detection of the motor threshold was compared to EMG detection; motor thresholds were very similar and highly correlated (0.94–0.99). Consistent with previous studies, stimulation in the majority of participants was most effective when the first current pulse flowed in the postero-lateral direction in the brain. However, in four participants the optimal coil orientation deviated from this pattern. A principal component analysis using all eight orientations suggests that, in our sample, the optimal current direction was normally distributed around the postero-lateral orientation with a range of 63° (S.D. = 13.70°). Whenever the intensity of stimulation at the target site is calculated as a percentage of the motor threshold, it may be worthwhile to check whether rotating the coil 45° from the traditional postero-lateral orientation decreases the motor threshold, in order to minimize stimulation intensity and side effects.
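The distributional claim above rests on a principal component analysis across the eight tested orientations. As a rough illustration only (circular statistics rather than the authors' PCA), the sketch below shows one way per-participant optimal orientations could be summarised by a mean direction and circular S.D.; all data are synthetic.

```python
import numpy as np

def circular_mean_sd(angles_deg):
    """Circular mean and circular S.D. (both in degrees) of a set of
    orientations. Hypothetical helper, not the paper's method."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    C, S = np.cos(a).mean(), np.sin(a).mean()
    R = np.hypot(C, S)                          # mean resultant length
    mean = np.rad2deg(np.arctan2(S, C)) % 360
    sd = np.rad2deg(np.sqrt(-2.0 * np.log(R)))  # circular standard deviation
    return mean, sd

# Synthetic per-participant optima clustered around 225 deg (postero-lateral)
print(circular_mean_sd([210, 225, 240, 225, 270, 225, 180, 225]))
```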
Abstract:
Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology, including suspect identification, tracking terrorists and detecting a person's presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications. The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: modelling mismatch and modelling uncertainty. To address the performance degradation incurred through mismatched conditions, it was proposed to model this mismatch directly. Feature mapping was evaluated for combating handset mismatch and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels for the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases. Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. Modelling the uncertainty in the verification score itself met with significant success. Estimating a confidence interval for the "true" verification score enabled an order-of-magnitude reduction in the average quantity of speech required to make a confident threshold-based verification decision. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
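The core of the session variability modelling approach described above, with session conditions as a constrained, low-rank offset of the speaker model means, can be sketched in a few lines of numpy. The dimensions, noise scales and the simple least-squares point estimate of the session factors below are illustrative assumptions, not the thesis's actual training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
D, R = 512, 20          # supervector dimension and session-subspace rank (illustrative)

m = rng.normal(size=D)               # UBM mean supervector
y = 0.1 * rng.normal(size=D)         # true speaker offset
U = 0.05 * rng.normal(size=(D, R))   # low-rank session subspace
x = rng.normal(size=R)               # session factors for one recording

mu_obs = m + y + U @ x               # session-affected speaker supervector

# Point-estimate the session factors against the UBM (treating the unknown
# speaker offset as noise) and subtract the constrained offset.
x_hat, *_ = np.linalg.lstsq(U, mu_obs - m, rcond=None)
mu_comp = mu_obs - U @ x_hat         # session-compensated supervector
```

Because the offset is constrained to the column space of U, subtracting its estimate removes most of the session effect while leaving the speaker-specific component largely intact.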
Abstract:
This paper investigates the use of the dimensionality-reduction techniques weighted linear discriminant analysis (WLDA) and weighted median Fisher discriminant analysis (WMFD) before probabilistic linear discriminant analysis (PLDA) modeling, for the purpose of improving speaker verification performance in the presence of high inter-session variability. Recently it was shown that WLDA techniques can provide improvement over traditional linear discriminant analysis (LDA) for channel compensation in i-vector based speaker verification systems. We show in this paper that the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space can also be exploited in heavy-tailed PLDA modeling by applying the weighted discriminant projections prior to modeling. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that WLDA and WMFD projections before PLDA modeling can provide an improved approach when compared to uncompensated PLDA modeling for i-vector based speaker verification systems.
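As a sketch of the general WLDA technique (the distance-based pairwise weighting and generalized eigen-solution are a common formulation, not necessarily this paper's exact recipe), a projection can be computed from the development speakers' i-vector means before handing the reduced vectors to PLDA:

```python
import numpy as np

def weighted_lda(ivectors, labels, n_components, power=3):
    """Weighted LDA: between-class scatter built from pairs of class means,
    weighted by w(d) = d**-power so that close (easily confused) speaker
    pairs dominate. Sketch of the general technique only."""
    X, y = np.asarray(ivectors), np.asarray(labels)
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    dim = X.shape[1]

    # Within-class scatter, with a small ridge for numerical stability
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum() for c in classes)
    Sw = Sw + 1e-6 * np.eye(dim)

    # Weighted pairwise between-class scatter
    Sb = np.zeros((dim, dim))
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = means[i] - means[j]
            Sb += np.linalg.norm(d) ** -power * np.outer(d, d)

    # Leading generalized eigenvectors of Sw^-1 Sb form the projection
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]
```

The returned projection would be applied to both development and evaluation i-vectors before PLDA training and scoring.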
Abstract:
Background: The aim was to evaluate the validity and repeatability of the auto-refraction function of the Nidek OPD-Scan III (Nidek Technologies, Gamagori, Japan) compared with non-cycloplegic subjective refraction. The Nidek OPD-Scan III is a new aberrometer/corneal topographer workstation based on the skiascopy principle. It combines a wavefront aberrometer, topographer, autorefractor, auto-keratometer and pupillometer/pupillographer. Methods: Objective refraction results obtained using the Nidek OPD-Scan III were compared with non-cycloplegic subjective refraction for 108 eyes of 54 participants (29 female) with a mean age of 23.7±9.5 years. Intra-session and inter-session variability were assessed on 14 subjects (28 eyes). Results: The Nidek OPD-Scan III gave slightly more negative readings than subjective refraction (mean difference -0.19±0.36 DS, p<0.01 for sphere; -0.19±0.35 DS, p<0.01 for mean spherical equivalent; -0.002±0.23 DC, p=0.91 for cylinder; -0.06±0.38 DC, p=0.30 for J0; and -0.36±0.31 DC, p=0.29 for J45). Auto-refractor results for 74 per cent of spherical readings and 60 per cent of cylindrical powers were within ±0.25 D of subjective refraction. There was high intra-session and inter-session repeatability for all parameters; 90 per cent of inter-session repeatability results were within 0.25 D. Conclusion: The Nidek OPD-Scan III gives valid and repeatable measures of objective refraction when compared with non-cycloplegic subjective refraction. © 2013 The Authors. Clinical and Experimental Optometry © 2013 Optometrists Association Australia.
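For reference, the sphere/cylinder/axis readings compared above reduce to the reported M (mean spherical equivalent), J0 and J45 terms via the standard Fourier power-vector decomposition; the sketch below assumes negative-cylinder notation (sign conventions vary between instruments).

```python
import numpy as np

def power_vector(sphere, cyl, axis_deg):
    """Standard power-vector decomposition of a sphero-cylindrical
    refraction (negative-cylinder form) into M, J0 and J45."""
    theta = np.deg2rad(axis_deg)
    M = sphere + cyl / 2.0                    # mean spherical equivalent
    J0 = -(cyl / 2.0) * np.cos(2 * theta)     # with/against-the-rule astigmatism
    J45 = -(cyl / 2.0) * np.sin(2 * theta)    # oblique astigmatism
    return M, J0, J45

print(power_vector(-1.00, -0.50, 90))  # -> (-1.25, -0.25, ~0.0)
```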
Abstract:
Extract (see PDF for full abstract): The characterization of inter-decadal climate variability in the Southern Hemisphere is severely constrained by the shortness of the instrumental climate records. To help relieve this constraint, we have developed and analyzed a reconstruction of warm-season (November-April) temperatures from Tasmanian tree rings that now extends back to 800 BC. A detailed analysis of this reconstruction in the time and frequency domains indicates that much of the inter-decadal variability is confined to four frequency bands with mean periods of 31, 57, 77, and 200 years. ... In so doing, we show how a future greenhouse warming signal over Tasmania could be masked by these natural oscillations unless they are taken into account.
Abstract:
A dynamical wind-wave climate simulation covering the North Atlantic Ocean and spanning the whole 21st century under the A1B scenario has been compared with a set of statistical projections using atmospheric variables or large-scale climate indices as predictors. As a first step, the performance of all statistical models has been evaluated for the present-day climate; namely, they have been compared with a dynamical wind-wave hindcast in terms of winter Significant Wave Height (SWH) trends and variance, as well as with altimetry data. For the projections, it has been found that statistical models that use wind speed as the predictor are able to capture a larger fraction of the winter SWH inter-annual variability (68% on average) and of the long-term changes projected by the dynamical simulation. Conversely, regression models using climate indices, sea level pressure and/or pressure gradient as predictors account for a smaller SWH variance (from 2.8% to 33%) and do not reproduce the dynamically projected long-term trends over the North Atlantic. Investigating the wind-sea and swell components separately, we have found that the combination of two regression models, one for wind-sea waves and another for the swell component, can significantly improve the wave-field projections obtained from single regression models over the North Atlantic.
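A toy sketch of the closing design choice: fitting separate regressions for the wind-sea and swell components and summing them, rather than a single regression on total SWH. All data and coefficients below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40                                    # winters

wind = 10 + rng.normal(size=n)            # mean winter wind speed (m/s), synthetic
nao = rng.normal(size=n)                  # large-scale climate index, synthetic
swh_sea = 0.25 * wind + rng.normal(scale=0.2, size=n)         # wind-sea component
swh_swell = 1.5 + 0.3 * nao + rng.normal(scale=0.3, size=n)   # swell component
swh = swh_sea + swh_swell                 # total significant wave height (m)

def r2(pred, obs):
    return 1.0 - np.var(obs - pred) / np.var(obs)

# Single regression on total SWH vs. the sum of two component regressions
pred_single = np.polyval(np.polyfit(wind, swh, 1), wind)
pred_two = (np.polyval(np.polyfit(wind, swh_sea, 1), wind)
            + np.polyval(np.polyfit(nao, swh_swell, 1), nao))

print(f"single-predictor R2: {r2(pred_single, swh):.2f}")
print(f"two-component R2:    {r2(pred_two, swh):.2f}")
```

With the swell driven by a different predictor than the wind-sea, the summed component fit recovers variance that a single wind-speed regression cannot.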
Abstract:
The Earth's climate system is driven by a complex interplay of internal chaotic dynamics and natural and anthropogenic external forcing. Recent instrumental data have shown a remarkable degree of asynchronicity between Northern Hemisphere and Southern Hemisphere temperature fluctuations, thereby questioning the relative importance of internal versus external drivers of past as well as future climate variability [1, 2, 3]. However, large-scale temperature reconstructions for the past millennium have focused on the Northern Hemisphere [4, 5], limiting empirical assessments of inter-hemispheric variability on multi-decadal to centennial timescales. Here, we introduce a new millennial ensemble reconstruction of annually resolved temperature variations for the Southern Hemisphere based on an unprecedented network of terrestrial and oceanic palaeoclimate proxy records. In conjunction with an independent Northern Hemisphere temperature reconstruction ensemble [5], this record reveals an extended cold period (1594–1677) in both hemispheres but no globally coherent warm phase during the pre-industrial (1000–1850) era. The current (post-1974) warm phase is the only period of the past millennium where both hemispheres are likely to have experienced contemporaneous warm extremes. Our analysis of inter-hemispheric temperature variability in an ensemble of climate model simulations for the past millennium suggests that models tend to overemphasize Northern Hemisphere–Southern Hemisphere synchronicity by underestimating the role of internal ocean–atmosphere dynamics, particularly in the ocean-dominated Southern Hemisphere. Our results imply that climate system predictability on decadal to century timescales may be lower than expected based on assessments of external climate forcing and Northern Hemisphere temperature variations [5, 6] alone.
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them with all test utterances encountered during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, while being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of impostor cohorts required in alternative techniques for speaker verification.
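The impostor-selection idea in the final theme, letting the SVM training process itself identify the informative background examples, can be sketched with scikit-learn. Supervector shapes and data are synthetic, and the thesis's per-observation selection procedure is more involved than this.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# One target speaker's supervectors vs. a large candidate impostor pool
target = rng.normal(loc=0.5, size=(10, 64))
candidates = rng.normal(size=(500, 64))

X = np.vstack([target, candidates])
y = np.array([1] * len(target) + [0] * len(candidates))

svm = SVC(kernel="linear").fit(X, y)

# Impostor examples that become support vectors are the ones closest to the
# decision boundary: a natural, per-model refinement of the background set
sv_idx = svm.support_[svm.support_ >= len(target)] - len(target)
refined_background = candidates[sv_idx]
print(f"kept {len(refined_background)} of {len(candidates)} candidates")
```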
Abstract:
In this paper we extend the concept of speaker annotation within a single recording, or speaker diarization, to a collection-wide approach we call speaker attribution. Accordingly, speaker attribution is the task of clustering the presumably homogeneous inter-session clusters obtained using diarization according to common cross-recording identities. The result of attribution is a collection of spoken audio across multiple recordings attributed to speaker identities. In this paper, an attribution system is proposed using mean-only MAP adaptation of a combined-gender UBM to model clusters from a perfect diarization system, as well as a JFA-based system with session variability compensation. The normalized cross-likelihood ratio is calculated for each pair of clusters to construct an attribution matrix, and the complete-linkage algorithm is employed to cluster the inter-session clusters. A matched cluster purity and coverage of 87.1% was obtained on the NIST 2008 SRE corpus.
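The attribution step above maps directly onto standard hierarchical-clustering tooling: a pairwise matrix of scores, condensed and fed to complete linkage. The sketch below uses an invented distance matrix in place of real normalized cross-likelihood ratios (lower = more likely the same speaker).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Symmetric pairwise distances between four diarized clusters (illustrative)
D = np.array([[0.00, 0.20, 0.90, 0.80],
              [0.20, 0.00, 0.85, 0.90],
              [0.90, 0.85, 0.00, 0.15],
              [0.80, 0.90, 0.15, 0.00]])

Z = linkage(squareform(D), method="complete")      # complete-linkage tree
labels = fcluster(Z, t=0.5, criterion="distance")  # cut at a distance threshold
print(labels)  # e.g. [1 1 2 2]: clusters 0/1 and 2/3 share speaker identities
```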
Abstract:
A novel in-cylinder pressure method for determining ignition delay has been proposed and demonstrated. This method uses a new Bayesian statistical model to resolve the start of combustion, defined as the point at which the band-pass in-cylinder pressure deviates from background noise and the combustion resonance begins. Further, it is demonstrated that this method remains accurate in situations where noise is present. The start of combustion can be resolved for each cycle without the need for ad hoc methods such as cycle averaging; the method therefore allows for analysis of consecutive cycles and inter-cycle variability studies. Ignition delays obtained by this method and by the net rate of heat release have been shown to give good agreement. However, the use of combustion resonance to determine the start of combustion is preferable over the net rate of heat release method because it does not rely on knowledge of heat losses and still functions accurately in the presence of noise. Results for a six-cylinder turbo-charged common-rail diesel engine run with neat diesel fuel at full, three-quarter and half load have been presented. Under these conditions the ignition delay was shown to increase as the load decreased, with a significant increase in ignition delay at half load when compared with three-quarter and full loads.
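As a simplified stand-in for the paper's Bayesian model, the sketch below finds the maximum-likelihood changepoint in the variance of a synthetic band-pass signal: the same underlying idea of resolving where combustion resonance emerges from background noise, without the full Bayesian treatment.

```python
import numpy as np

def variance_changepoint(x, min_seg=20):
    """Maximum-likelihood changepoint in the variance of a zero-mean
    signal, locating where it departs from background noise. A simplified
    stand-in for the paper's full Bayesian model."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_tau, best_ll = None, -np.inf
    for tau in range(min_seg, n - min_seg):
        s0, s1 = np.var(x[:tau]), np.var(x[tau:])
        ll = -0.5 * (tau * np.log(s0) + (n - tau) * np.log(s1))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return best_tau

# Synthetic cycle: background noise followed by combustion resonance
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.05, size=400)
resonance = 0.5 * np.sin(0.6 * np.arange(300)) + rng.normal(scale=0.05, size=300)
print(variance_changepoint(np.concatenate([noise, resonance])))  # close to 400
```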
Abstract:
A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors involved in this complex clinical process. Outputs of the BN will provide decision support for radiation therapists, assisting them to make correct inferences about the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific sub-regions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationships between the many interacting factors, such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation for developing better decision-support strategies and automated correction algorithms for IGRT.
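A toy fragment of such a BN can be expressed with the pgmpy library; the node names, states and probabilities below are invented for illustration and are not values from the clinical model.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two illustrative parent factors feeding a delivery-accuracy node
bn = BayesianNetwork([("TumourSite", "DeliveryAccurate"),
                      ("InterUserVariability", "DeliveryAccurate")])

bn.add_cpds(
    TabularCPD("TumourSite", 2, [[0.7], [0.3]]),             # site A / site B
    TabularCPD("InterUserVariability", 2, [[0.8], [0.2]]),   # low / high
    TabularCPD("DeliveryAccurate", 2,
               [[0.95, 0.80, 0.85, 0.60],   # P(accurate | parent states)
                [0.05, 0.20, 0.15, 0.40]],
               evidence=["TumourSite", "InterUserVariability"],
               evidence_card=[2, 2]),
)
assert bn.check_model()

# Belief updating: probability of accurate delivery given that high
# inter-user variability is observed
posterior = VariableElimination(bn).query(
    ["DeliveryAccurate"], evidence={"InterUserVariability": 1})
print(posterior)
```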