216 results for vector error correction


Relevance:

20.00%

Publisher:

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
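As an illustrative sketch (not the paper's tight-estimation-error-bound construction), the generic complexity-penalized selection rule described above, minimizing the sum of an empirical risk term and a complexity penalty over a nested sequence of models, can be written as follows; the polynomial models and penalty values are invented for illustration:

```python
import numpy as np

def penalized_model_selection(models, penalties, X, y):
    """Pick the model minimizing empirical risk + complexity penalty.

    models    : list of (fit, predict) pairs, ordered by inclusion
    penalties : one complexity penalty per model
    """
    best, best_score = None, np.inf
    for k, ((fit, predict), pen) in enumerate(zip(models, penalties)):
        theta = fit(X, y)
        risk = np.mean((predict(X, theta) - y) ** 2)  # empirical risk
        score = risk + pen
        if score < best_score:
            best, best_score = k, score
    return best, best_score

# Nested polynomial models of increasing degree (inclusion-ordered).
def make_poly(d):
    fit = lambda X, y: np.polyfit(X, y, d)
    predict = lambda X, t: np.polyval(t, X)
    return fit, predict

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * X + 0.1 * rng.normal(size=200)   # truly linear signal
models = [make_poly(d) for d in range(6)]
penalties = [0.05 * (d + 1) for d in range(6)]    # penalty grows with complexity
k, score = penalized_model_selection(models, penalties, X, y)
print(k)   # picks the linear model (index 1)
```

The degree-0 model pays a large approximation error, while degrees above 1 gain almost nothing in empirical risk but pay a larger penalty, so the rule settles on the linear model.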

Relevance:

20.00%

Publisher:

Abstract:

In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.

Relevance:

20.00%

Publisher:

Abstract:

The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
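A minimal sketch of the plain uniformization iteration that the paper builds on (exact matrix-vector products, without the proposed relaxation strategy):

```python
import numpy as np
from math import exp

def uniformization(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) = p0 expm(Q t) via uniformization.

    Q  : CTMC generator matrix (rows sum to 0, off-diagonals >= 0)
    p0 : initial distribution (row vector)
    """
    lam = max(-Q.diagonal())            # uniformization rate
    P = np.eye(len(Q)) + Q / lam        # DTMC transition matrix
    w = exp(-lam * t)                   # Poisson weight, k = 0
    v = p0.copy()
    result = w * v
    k, acc = 0, w
    while 1.0 - acc > tol:              # stop once Poisson mass ~ 1
        k += 1
        v = v @ P                       # the costly matrix-vector product
        w *= lam * t / k
        acc += w
        result += w * v
    return result

# Two-state birth-death example.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
p0 = np.array([1.0, 0.0])
p = uniformization(Q, p0, t=10.0)
print(p)   # ≈ [2/3, 1/3], the stationary distribution
```

Each loop iteration costs one matrix-vector product `v @ P`, which is why the paper's idea of computing those products only approximately can yield large savings for big models or large `t`.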

Relevance:

20.00%

Publisher:

Abstract:

The measurement error model is a well-established statistical method for regression problems in medical sciences, although rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which there may be benefits in its use for prediction and estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model within a Bayesian framework fitted with a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
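As a hedged sketch of the kind of model described, here is a simple linear measurement-error regression fitted by a Gibbs sampler, with the covariate observed with known error variance; the data are simulated, the variances are assumed known, and all settings are illustrative rather than the authors' models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: true covariate x observed with error as w.
n, a_true, b_true, s_u, s_e = 500, 1.0, 2.0, 0.3, 0.2
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, s_u, n)        # error-prone measurement
y = a_true + b_true * x + rng.normal(0.0, s_e, n)

# Fixed hyperparameters for the latent-x prior (moment estimates from w).
mu_x = w.mean()
s2_x = max(w.var() - s_u**2, 1e-3)

def gibbs(w, y, iters=1000, burn=500):
    a, b = 0.0, 1.0
    draws = []
    for it in range(iters):
        # x_i | a, b, w_i, y_i : product of three normal terms.
        prec = 1.0/s2_x + 1.0/s_u**2 + b**2/s_e**2
        mean = (mu_x/s2_x + w/s_u**2 + b*(y - a)/s_e**2) / prec
        xs = mean + rng.normal(0.0, prec**-0.5, n)
        # (a, b) | x, y : Bayesian linear regression, flat prior.
        X = np.column_stack([np.ones(n), xs])
        V = np.linalg.inv(X.T @ X) * s_e**2
        beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
        a, b = rng.multivariate_normal(beta_hat, V)
        if it >= burn:
            draws.append((a, b))
    return np.array(draws)

draws = gibbs(w, y)
a_post, b_post = draws.mean(axis=0)
print(round(a_post, 2), round(b_post, 2))
```

Naive regression of `y` on `w` would attenuate the slope toward zero; the latent-variable Gibbs scheme corrects for the measurement error, which is the benefit for prediction and parameter estimation that the abstract points to.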

Relevance:

20.00%

Publisher:

Abstract:

Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. It is useful for experimentation and practical implementation purposes to develop and test these models efficiently, particularly when computational resources are limited. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust model approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research on the task of speaker identification indicated similar system performance between the VQ Gaussian-based technique and GMMs.
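A rough sketch of the idea of combining vector quantization with per-cell Gaussians as a fast GMM approximation; the details (plain k-means codebook, diagonal covariances) are assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np

def vq_gaussian_model(X, n_clusters=4, n_iter=20, seed=0):
    """Approximate a GMM by k-means (VQ) followed by one Gaussian per cell."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):                       # plain k-means (VQ codebook)
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(0)
    model = []
    for k in range(n_clusters):                   # one diagonal Gaussian per cell
        pts = X[labels == k]
        if len(pts) == 0:
            continue
        model.append((len(pts) / len(X), pts.mean(0), pts.var(0) + 1e-6))
    return model

def log_likelihood(model, X):
    # Average mixture log-likelihood under the VQ-Gaussian approximation.
    comp = []
    for wgt, mu, var in model:
        lg = -0.5 * (((X - mu) ** 2 / var) + np.log(2 * np.pi * var)).sum(-1)
        comp.append(np.log(wgt) + lg)
    return np.logaddexp.reduce(np.array(comp), axis=0).mean()

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
m = vq_gaussian_model(X, n_clusters=2)
print(log_likelihood(m, X))
```

The appeal is that k-means converges in a handful of cheap passes, after which the component parameters come directly from sufficient statistics of each cell, avoiding full EM training of a GMM.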

Relevance:

20.00%

Publisher:

Abstract:

Robust speaker verification on short utterances remains a key consideration when deploying automatic speaker recognition, as many real-world applications often have access to only limited-duration speech data. This paper explores how recent technologies centred on total variability modeling behave when training and testing utterance lengths are reduced. Results are presented which provide a comparison of Joint Factor Analysis (JFA) and i-vector based systems, including various compensation techniques: Within-Class Covariance Normalization (WCCN), LDA, Scatter Difference Nuisance Attribute Projection (SDNAP) and Gaussian Probabilistic Linear Discriminant Analysis (GPLDA). Speaker verification results for utterances with as little as 2 sec of data, taken from the NIST Speaker Recognition Evaluations, are presented to provide a clearer picture of the current performance characteristics of these techniques in short utterance conditions.

Relevance:

20.00%

Publisher:

Abstract:

Changes in peripheral aberrations, particularly higher-order aberrations, as a function of accommodation have received little attention. Wavefront aberrations were measured for the right eyes of 9 young adult emmetropes at 38 field positions in the central 42 x 32 degrees of the visual field. Subjects accommodated monocularly to targets at vergences of either 0.3 or 4.0 D. Wavefront data for a 5 mm diameter pupil were analyzed in terms of either the vector components of refraction or Zernike coefficients and total RMS wavefront aberrations. Relative peripheral refractive error (RPRE) was myopic at both accommodation demands and showed only a slight, not statistically significant, hypermetropic shift in the vertical meridian with the higher accommodation demand. There was little change in the astigmatic components of refraction or the higher-order Zernike coefficients, apart from fourth-order spherical aberration which became more negative (by 0.10 µm) at all field locations. Although it has been suggested that near work and the state of peripheral refraction may play some role in myopia development, for most of our adult emmetropes any changes with accommodation in RPRE and aberration were small. Hence it seems unlikely that such changes can be of importance to late-onset myopisation.

Relevance:

20.00%

Publisher:

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components alongside the desired current.
Also, noise and harmonic distortion can impact the performance of the control strategies. To mitigate the negative effects of high-frequency components, harmonics and noise distortion, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, the targeted scope of this thesis is to propose advanced techniques for the digital estimation of signal parameters and methods for generating DG reference currents from the resulting estimates. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known, advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm estimates signal parameters such as amplitude, frequency and phase angle online. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters such as the amplitudes and phase angles estimated by the Kalman filter are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a similar structure to a basic Kalman filter, except that the initial settings are computed through extensive mathematical analysis of the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are improved due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the signal parameters at these critical transients are not tracked well by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Also, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
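To illustrate the kind of estimation the thesis targets, here is a basic linear Kalman filter that tracks the amplitude and phase of a sinusoid at a known frequency; the thesis's modifications (FIR pre-filtering, online frequency estimation, computed initial settings) are not reproduced, and all numbers below are illustrative:

```python
import numpy as np

f, fs = 50.0, 1000.0                   # signal and sampling frequency (Hz)
A_true, phi_true = 1.5, 0.4
t = np.arange(400) / fs
z = (A_true * np.cos(2*np.pi*f*t + phi_true)
     + 0.05 * np.random.default_rng(2).normal(size=t.size))

# State [A cos(phi), A sin(phi)] is constant; the measurement matrix rotates:
# z_k = x1 cos(w t_k) - x2 sin(w t_k) + noise.
x = np.zeros(2)
P = np.eye(2) * 10.0                   # large initial uncertainty
Q = np.eye(2) * 1e-8                   # small process noise
R = 0.05**2                            # measurement noise variance

for k in range(t.size):
    w = 2*np.pi*f*t[k]
    H = np.array([np.cos(w), -np.sin(w)])
    # Predict (identity dynamics), then update with the scalar measurement.
    P = P + Q
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z[k] - H @ x)
    P = P - np.outer(K, H @ P)

A_hat = np.hypot(x[0], x[1])
phi_hat = np.arctan2(x[1], x[0])
print(round(A_hat, 3), round(phi_hat, 3))
```

With frequency known, the problem is linear in the two quadrature components, so a plain Kalman filter suffices; the thesis's harder setting, where frequency must be estimated jointly, is what motivates the modified filters and the LES fallback at transients.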

Relevance:

20.00%

Publisher:

Abstract:

The quality of conceptual business process models is highly relevant for the design of corresponding information systems. In particular, a precise measurement of model characteristics can be beneficial from a business perspective, helping to save costs thanks to early error detection. This is just as true from a software engineering point of view. In this latter case, models facilitate stakeholder communication and software system design. Research has investigated several proposals for measures of business process models, from a largely correlational perspective. This is helpful for understanding, for example, size and complexity as general driving forces of error probability. Yet, design decisions usually have to build on thresholds, which can reliably indicate that a certain counter-action has to be taken. This cannot be achieved only by providing measures; it requires a systematic identification of effective and meaningful thresholds. In this paper, we derive thresholds for a set of structural measures for predicting errors in conceptual process models. To this end, we use a collection of 2,000 business process models from practice as a means of determining thresholds, applying an adaptation of the ROC curves method. Furthermore, an extensive validation of the derived thresholds was conducted by using 429 EPC models from an Australian financial institution. Finally, significant thresholds were adapted to refine existing modeling guidelines in a quantitative way.
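A toy sketch of deriving a threshold from ROC-style analysis, here by maximizing Youden's J statistic (TPR minus FPR) over candidate cutoffs; the paper's adaptation of the ROC curves method is more involved, and the data below are invented for illustration:

```python
import numpy as np

def roc_threshold(scores, labels):
    """Pick the cutoff on a measure that maximizes TPR - FPR (Youden's J),
    a simple stand-in for ROC-based threshold derivation."""
    cands = np.unique(scores)
    best_t, best_j = cands[0], -1.0
    pos, neg = labels == 1, labels == 0
    for thr in cands:
        flagged = scores >= thr          # models flagged as error-prone
        tpr = (flagged & pos).sum() / pos.sum()
        fpr = (flagged & neg).sum() / neg.sum()
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, thr
    return best_t

# Hypothetical data: a "size" measure vs. whether the model contained errors.
sizes  = np.array([12, 18, 25, 31, 40, 44, 52, 60, 71, 80])
errors = np.array([ 0,  0,  0,  0,  1,  0,  1,  1,  1,  1])
print(roc_threshold(sizes, errors))   # → 40
```

The resulting cutoff is exactly the kind of actionable number the abstract argues for: above the threshold, a counter-action (e.g. decomposing the model) is warranted.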

Relevance:

20.00%

Publisher:

Abstract:

Contact lenses are a common method for the correction of refractive errors of the eye. While there have been significant advancements in contact lens designs and materials over the past few decades, the lenses still represent a foreign object in the ocular environment and may lead to physiological as well as mechanical effects on the eye. When contact lenses are placed in the eye, the ocular anatomical structures behind and in front of the lenses are directly affected. This thesis presents a series of experiments that investigate the mechanical and physiological effects of the short-term use of contact lenses on anterior and posterior corneal topography, corneal thickness, the eyelids, tarsal conjunctiva and tear film surface quality. The experimental paradigm used in these studies was a repeated measures, cross-over study design in which subjects wore various types of contact lenses on different days and the lenses were varied in one or more key parameters (e.g. material or design). Both old and newer lens materials were investigated; soft and rigid lenses were used; high and low oxygen permeability materials were tested; toric and spherical lens designs were examined; and high and low powers and small and large diameter lenses were used in the studies. To establish the natural variability in the ocular measurements used in the studies, each experiment also contained at least one “baseline” day where an identical measurement protocol was followed, with no contact lenses worn. In this way, changes associated with contact lens wear were considered in relation to those changes that occurred naturally during the 8 hour period of the experiment. In the first study, the regional distribution and magnitude of change in corneal thickness and topography was investigated in the anterior and posterior cornea after short-term use of soft contact lenses in 12 young adults using the Pentacam.
Four different types of contact lenses (Silicone Hydrogel/Spherical/–3D, Silicone Hydrogel/Spherical/–7D, Silicone Hydrogel/Toric/–3D and HEMA/Toric/–3D) of different materials, designs and powers were worn for 8 hours each, on 4 different days. The natural diurnal changes in corneal thickness and curvature were measured on two separate days before any contact lens wear. Significant diurnal changes in corneal thickness and curvature within the duration of the study were observed, and these were taken into consideration when calculating the contact lens induced corneal changes. Corneal thickness changed significantly with lens wear, and the greatest corneal swelling was seen with the hydrogel (HEMA) toric lens, with a noticeable regional swelling of the cornea beneath the stabilization zones, the thickest regions of the lenses. The anterior corneal surface generally showed a slight flattening with lens wear. All contact lenses resulted in central posterior corneal steepening, which correlated with the relative degree of corneal swelling. The corneal swelling induced by the silicone hydrogel contact lenses was typically less than the natural diurnal thinning of the cornea over this same period (i.e. net thinning). This highlights why it is important to consider the natural diurnal variations in corneal thickness observed from morning to afternoon to accurately interpret contact lens induced corneal swelling. In the second experiment, the relative influence of lenses of different rigidity (polymethyl methacrylate – PMMA, rigid gas permeable – RGP and silicone hydrogel – SiHy) and diameters (9.5, 10.5 and 14.0 mm) on corneal thickness, topography, refractive power and wavefront error was investigated. Four different types of contact lenses (PMMA/9.5, RGP/9.5, RGP/10.5, SiHy/14.0) were worn by 14 young healthy adults for a period of 8 hours on 4 different days. There was a clear association between fluorescein fitting pattern characteristics (i.e.
regions of minimum clearance in the fluorescein pattern) and the resulting corneal shape changes. PMMA lenses resulted in significant corneal swelling (more in the centre than periphery) along with anterior corneal steepening and posterior flattening. RGP lenses, on the other hand, caused less corneal swelling (more in the periphery than centre) along with opposite effects on corneal curvature, anterior corneal flattening and posterior steepening. RGP lenses also resulted in a clinically and statistically significant decrease in corneal refractive power (ranging from 0.99 to 0.01 D), large enough to affect vision and require adjustment in the lens power. Wavefront analysis also showed a significant increase in higher order aberrations after PMMA lens wear, which may partly explain previous reports of "spectacle blur" following PMMA lens wear. We further explored corneal curvature, thickness and refractive changes with back surface toric and spherical RGP lenses in a group of 6 subjects with toric corneas. The lenses were worn for 8 hours and measurements were taken before and after lens wear, as in previous experiments. Both lens types caused anterior corneal flattening and a decrease in corneal refractive power but the changes were greater with the spherical lens. The spherical lens also caused a significant decrease in WTR astigmatism (WRT astigmatism defined as major axis within 30 degrees of horizontal). Both the lenses caused slight posterior corneal steepening and corneal swelling, with a greater effect in the periphery compared to the central cornea. Eyelid position, lid-wiper and tarsal conjunctival staining were also measured in Experiment 2 after short-term use of the rigid and SiHy contact lenses. Digital photos of the external eyes were captured for lid position analysis. The lid-wiper region of the marginal conjunctiva was stained using fluorescein and lissamine green dyes and digital photos were graded by an independent masked observer. 
A grading scale was developed in order to describe the tarsal conjunctival staining. A significant decrease in the palpebral aperture height (blepharoptosis) was found after wearing of PMMA/9.5 and RGP/10.5 lenses. All three rigid contact lenses caused a significant increase in lid-wiper and tarsal staining after 8 hours of lens wear. There was also a significant diurnal increase in tarsal staining, even without contact lens wear. These findings highlight the need for better contact lens edge design, to minimise the interactions between the lid and contact lens edge during blinking, and for more lubricious contact lens surfaces, to reduce ocular surface micro-trauma due to friction. Tear film surface quality (TFSQ) was measured using a high-speed videokeratoscopy technique in Experiment 2. TFSQ was worse with all the lenses compared to baseline (PMMA/9.5, RGP/9.5, RGP/10.5, and SiHy/14) in the afternoon (after 8 hours) during normal and suppressed blinking conditions. The reduction in TFSQ was similar with all the contact lenses used, irrespective of their material and diameter. An unusual pattern of change in TFSQ in suppressed blinking conditions was also found. The TFSQ with a contact lens was found to decrease until a certain time, after which it improved to a value even better than the bare eye. This is likely to be due to the tear film drying completely over the surface of the contact lenses. The findings of this study also show that there is still scope for improvement in contact lens materials in terms of better wettability and hydrophilicity, in order to improve TFSQ and patient comfort. These experiments showed that a variety of changes can occur in the anterior eye as a result of the short-term use of a range of commonly used contact lens types.
The greatest corneal changes occurred with lenses manufactured from older HEMA and PMMA lens materials, whereas modern SiHy and rigid gas permeable materials caused more subtle changes in corneal shape and thickness. All lenses caused signs of micro-trauma to the eyelid wiper and palpebral conjunctiva, although rigid lenses appeared to cause more significant changes. Tear film surface quality was also significantly reduced with all types of contact lenses. These short-term changes in the anterior eye are potential markers for further long term changes and the relative differences between lens types that we have identified provide an indication of areas of contact lens design and manufacture that warrant further development.

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces the Weighted Linear Discriminant Analysis (WLDA) technique, based upon the weighted pairwise Fisher criterion, for the purposes of improving i-vector speaker verification in the presence of high intersession variability. By taking advantage of the speaker discriminative information that is available in the distances between pairs of speakers clustered in the development i-vector space, the WLDA technique is shown to provide an improvement in speaker verification performance over traditional Linear Discriminant Analysis (LDA) approaches. A similar approach is also taken to extend the recently developed Source Normalised LDA (SNLDA) into Weighted SNLDA (WSNLDA) which, similarly, shows an improvement in speaker verification performance in both matched and mismatched enrolment/verification conditions. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that both WLDA and WSNLDA are viable as replacement techniques to improve the performance of LDA and SNLDA-based i-vector speaker verification.
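The weighted pairwise Fisher idea can be sketched as follows: the between-class scatter is built pair by pair, with each speaker pair weighted by a decreasing function of the distance between its class means, so that close (confusable) speakers dominate the projection. The weight function 1/d² used here is an illustrative choice, not necessarily the weighting used in the paper:

```python
import numpy as np

def wlda(X, y, n_dims, weight=lambda d: 1.0 / d**2):
    """Weighted LDA: pairwise between-class scatter, each class pair
    weighted by a decreasing function of the distance between its means."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(0) for c in classes])
    # Within-class scatter (pooled, sample-size weighted).
    Sw = sum(np.cov(X[y == c].T, bias=True) * (y == c).sum()
             for c in classes) / len(X)
    # Weighted pairwise between-class scatter.
    Sb = np.zeros((X.shape[1], X.shape[1]))
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = means[i] - means[j]
            Sb += weight(np.linalg.norm(d)) * np.outer(d, d)
    # Leading eigenvectors of Sw^{-1} Sb give the projection.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:n_dims]].real

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(m, 0.3, (50, 3))
               for m in ([0, 0, 0], [1, 0, 0], [0, 5, 0])])
y = np.repeat([0, 1, 2], 50)
W = wlda(X, y, n_dims=2)
print(W.shape)
```

In this toy setup the two nearby classes receive far more weight than the distant one, so the first projection direction concentrates on separating them, which is the behaviour exploited for confusable speakers in the i-vector space.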

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To determine the effect of moderate levels of refractive blur and simulated cataracts on nighttime pedestrian conspicuity in the presence and absence of headlamp glare. Methods: The ability to recognize pedestrians at night was measured in 28 young adults (mean age 27.6 years) under three visual conditions: normal vision, refractive blur and simulated cataracts; mean acuity was 20/40 or better in all conditions. Pedestrian recognition distances were recorded while participants drove an instrumented vehicle along a closed road course at night. Pedestrians wore one of three clothing conditions, and oncoming headlamps were present for 16 participants and absent for 12 participants. Results: Simulated visual impairment and glare significantly reduced the frequency with which drivers recognized pedestrians and the distance at which the drivers first recognized them. Simulated cataracts were significantly more disruptive than blur even though photopic visual acuity levels were matched. With normal vision, drivers responded to pedestrians at 3.6x and 5.5x longer distances on average than for the blur or cataract conditions, respectively. Even in the presence of visual impairment and glare, pedestrians were recognized more often and at longer distances when they wore a “biological motion” reflective clothing configuration than when they wore a reflective vest or black clothing. Conclusions: Drivers’ ability to recognize pedestrians at night is degraded by common visual impairments even when the drivers’ mean visual acuity meets licensing requirements. To maximize drivers’ ability to see pedestrians, drivers should wear their optimum optical correction, and cataract surgery should be performed early enough to avoid potentially dangerous reductions in visual performance.

Relevance:

20.00%

Publisher:

Abstract:

Voltage drop and rise at network peak and off-peak periods, along with voltage unbalance, are the major power quality problems in low voltage distribution networks. Usually, utilities adjust the transformer tap changers as a solution for voltage drop. They also try to distribute the loads equally as a solution for the network voltage unbalance problem. On the other hand, the ever increasing energy demand, along with the necessity of cost reduction and higher reliability requirements, is driving modern power systems towards Distributed Generation (DG) units. This can be in the form of small rooftop photovoltaic cells (PV), Plug-in Electric Vehicles (PEVs) or Micro Grids (MGs). Rooftop PVs, typically with power levels ranging from 1–5 kW and installed by householders, are gaining popularity due to their financial benefits for the householders. PEVs will also soon emerge in residential distribution networks; they behave as a large residential load while being charged, while later generations are also expected to support the network as small DG units that transfer the energy stored in their batteries into the grid. Furthermore, the MG, which is a cluster of loads and several DG units such as diesel generators, PVs, fuel cells and batteries, has recently been introduced to distribution networks. The voltage unbalance in the network can be increased due to the uncertainties in the random connection points of the PVs and PEVs to the network, their nominal capacity and their time of operation. Therefore, it is of high interest to investigate the voltage unbalance in these networks as the result of MG, PV and PEV integration into low voltage networks. In addition, the network might experience non-standard voltage drop due to high penetration of PEVs being charged at night, or non-standard voltage rise due to high penetration of PVs and PEVs feeding electricity back into the grid during network off-peak periods.
In this thesis, a voltage unbalance sensitivity analysis and stochastic evaluation are carried out for PVs installed by householders, with their installation point, nominal capacity and penetration level treated as different uncertainties. A similar analysis is carried out for PEV penetration in the network, operating in two different modes: grid-to-vehicle and vehicle-to-grid. Furthermore, the conventional methods for improving voltage unbalance within these networks are discussed. This is followed by proposing new and efficient methods for voltage profile improvement at network peak and off-peak periods and for voltage unbalance reduction. In addition, voltage unbalance reduction is investigated for MGs, and new improvement methods are proposed and applied to the MG test bed planned to be established at Queensland University of Technology (QUT). The MATLAB and PSCAD/EMTDC simulation packages are used to verify the analyses and proposals.
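Voltage unbalance in such studies is commonly quantified by the Voltage Unbalance Factor, the ratio of negative- to positive-sequence voltage magnitude obtained from symmetrical components. A small sketch (the 230 V phasors and the single-phase sag scenario are illustrative numbers only):

```python
import cmath

def vuf(va, vb, vc):
    """Voltage Unbalance Factor = |V-| / |V+|, computed from the three
    phase voltage phasors via symmetrical components."""
    a = cmath.exp(2j * cmath.pi / 3)           # 120-degree rotation operator
    v_pos = (va + a * vb + a * a * vc) / 3     # positive-sequence component
    v_neg = (va + a * a * vb + a * vc) / 3     # negative-sequence component
    return abs(v_neg) / abs(v_pos)

# Balanced three-phase system: VUF = 0.
va = cmath.rect(230, 0)
vb = cmath.rect(230, -2 * cmath.pi / 3)
vc = cmath.rect(230, 2 * cmath.pi / 3)
print(round(vuf(va, vb, vc) * 100, 3), "%")    # 0.0 %

# Phase a sags to 200 V, e.g. due to a large single-phase load or PEV charger.
print(round(vuf(cmath.rect(200, 0), vb, vc) * 100, 3), "%")   # ≈ 4.545 %
```

A single-phase PV feeding in (raising one phase voltage) produces unbalance in exactly the same way, which is why random connection points and capacities of PVs and PEVs make the unbalance a stochastic quantity worth the sensitivity analysis described above.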