188 results for Error Vector Magnitude (EVM)


Relevance:

20.00%

Publisher:

Abstract:

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
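The margin defined above is straightforward to compute; a minimal sketch in Python (the `votes` matrix and `margins` helper are illustrative names, with votes expressed as normalized weights per label):

```python
import numpy as np

# votes[i, c] = fraction of the classifier's voting weight placed on label c
# for example i.  The margin of an example is the weight on the correct
# label minus the maximum weight on any incorrect label.
def margins(votes, labels):
    n = votes.shape[0]
    correct = votes[np.arange(n), labels]
    wrong = votes.copy()
    wrong[np.arange(n), labels] = -np.inf   # mask out the true label
    return correct - wrong.max(axis=1)

votes = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])
labels = np.array([0, 2])
print(margins(votes, labels))  # ≈ [0.3, -0.2]: second example is misclassified
```

A positive margin means the example is classified correctly with some confidence; boosting tends to push these margins up even after training error hits zero.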

Relevance:

20.00%

Publisher:

Abstract:

We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexities of such a function class can be bounded in terms of the complexity of the basis classes. We give examples of the application of these techniques in finding data-dependent risk bounds for decision trees, neural networks and support vector machines.
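The empirical Rademacher complexity discussed above can be estimated by Monte Carlo for a finite function class; a small sketch with assumed names (`F_values[j, i]` holds function j evaluated on sample point i):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(F_values, n_draws=2000, rng=rng):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite function class: E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ]."""
    n = F_values.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)    # Rademacher signs
        total += np.max(F_values @ sigma) / n      # sup over the class
    return total / n_draws

# Two classes on 50 points: a single function vs. 64 random functions.
X = rng.standard_normal((64, 50))
print(empirical_rademacher(X[:1]))  # near zero: one function cannot fit random signs
print(empirical_rademacher(X))      # clearly positive: richer class correlates with noise
```

The gap between the two estimates is the data-dependent notion of capacity that the risk bounds are built from.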

Relevance:

20.00%

Publisher:

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
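The penalized selection rule described above (minimize the sum of empirical risk and a complexity penalty over a sequence of models ordered by inclusion) can be sketched as follows; the nested polynomial models and the (d+1)/n penalty are illustrative choices of mine, not the tight estimation-error penalty the paper constructs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: nested polynomial models, ordered by inclusion in the degree.
n = 200
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.3 * rng.standard_normal(n)

def empirical_risk(deg):
    coef = np.polyfit(x, y, deg)                     # least-squares fit
    return np.mean((np.polyval(coef, x) - y) ** 2)   # training (empirical) risk

def penalty(deg):
    # Illustrative penalty growing with model dimension.
    return (deg + 1) / n

# Complexity-penalized model selection: minimize risk + penalty.
scores = {d: empirical_risk(d) + penalty(d) for d in range(1, 11)}
best = min(scores, key=scores.get)
print("selected degree:", best)
```

Because the models are nested, the empirical risk alone is non-increasing in the degree; the penalty is what stops the rule from always picking the largest model.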

Relevance:

20.00%

Publisher:

Abstract:

In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
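As a concrete instance of one special case named above (graph-based SSL, not MVPCR itself), harmonic label propagation fills in labels at unlabeled nodes from labeled ones via the graph Laplacian; the chain graph below is a toy example of my own:

```python
import numpy as np

# Chain graph on 5 nodes; the two end nodes are labeled, the rest inferred.
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
L = np.diag(W.sum(1)) - W            # graph Laplacian
labeled, unlabeled = [0, 4], [1, 2, 3]
f_l = np.array([0.0, 1.0])           # known labels at the two ends

# Harmonic solution: f_u = -L_uu^{-1} L_ul f_l
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ f_l)
print(f_u)   # smooth interpolation between the labels: [0.25 0.5 0.75]
```

On a chain the harmonic solution is linear interpolation between the labeled endpoints, which makes the smoothness effect of the graph regularizer easy to see.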

Relevance:

20.00%

Publisher:

Abstract:

Understanding the relationship between diet, physical activity and health in humans requires accurate measurement of body composition and daily energy expenditure. Stable isotopes provide a means of measuring total body water (TBW) and daily energy expenditure under free-living conditions. While the use of isotope ratio mass spectrometry (IRMS) for the analysis of 2H (deuterium) and 18O (oxygen-18) is well established in the field of human energy metabolism research, numerous questions remain regarding the factors which influence analytical and measurement error using this methodology. This thesis comprised four studies with the following emphases. The aim of Study 1 was to determine the analytical and measurement error of the IRMS with regard to sample handling under certain conditions. Study 2 involved the comparison of TEE (total daily energy expenditure) values derived using two commonly employed equations. Further, saliva and urine samples, collected at different times, were used to determine if clinically significant differences would occur. Study 3 was undertaken to determine the appropriate collection times for TBW estimates and derived body composition values. Finally, Study 4 was a single case study investigating whether TEE measures are affected when the human condition changes due to altered exercise and water intake. The aim of Study 1 was to validate laboratory approaches to measuring isotopic enrichment to ensure accurate (to international standards), precise (reproducibility of three replicate samples) and linear (isotope ratio constant over the expected concentration range) results. This established the machine variability for the IRMS equipment in use at Queensland University for both TBW and TEE. Using either 0.4 mL or 0.5 mL sample volumes for both oxygen-18 and deuterium was statistically acceptable (p>0.05), with a within-analysis variance of 5.8 δ VSMOW units for deuterium and 0.41 δ VSMOW units for oxygen-18.
This variance was used as “within analytical noise” to determine sample deviations. It was also found that there was no influence of equilibration time on oxygen-18 or deuterium values when comparing the minimum (oxygen-18: 24 hr; deuterium: 3 days) and maximum (14 days for both) equilibration times. With regard to preparation using the vacuum line, any order of preparation is suitable, as the TEE values fall within 8% of each other regardless of preparation order. An 8% variation is acceptable for TEE values due to biological and technical errors (Schoeller, 1988). However, for the automated line, deuterium must be assessed first, followed by oxygen-18, as the automated line does not evacuate tubes but merely refills them with an injection of gas for a predetermined time. Any fractionation (which may occur for both isotopes) would cause a slight elevation in the values and hence a lower TEE. The purpose of the second and third studies was to investigate the use of IRMS to measure TEE and TBW and to validate the current IRMS practices in use with regard to sample collection times for urine and saliva, the use of two TEE equations from different research centers, and the body composition values derived from these TEE and TBW values. Following the collection of a fasting baseline urine and saliva sample, 10 people (8 women, 2 men) were dosed with a doubly labeled water dose comprising 1.25 g 10% oxygen-18 and 0.1 g 100% deuterium per kg body weight. The samples were collected hourly for 12 hrs on the first day, and then morning, midday and evening samples were collected for the next 14 days. The samples were analyzed using an isotope ratio mass spectrometer. For TBW, time to equilibration was determined using three commonly employed data analysis approaches. Isotopic equilibration was reached in 90% of the sample by hour 6, and in 100% of the sample by hour 7.
With regard to the TBW estimations, the optimal time for urine collection was found to be between hours 4 and 10, as there was no significant difference between values in this window. In contrast, statistically significant differences in TBW estimations were found between hours 1-3 and hours 11-12 when compared with hours 4-10. Most of the individuals in this study were in equilibrium after 7 hours. The TEE equations of Prof. Dale Schoeller (Chicago, USA, IAEA) and Prof. K. Westerterp were compared with that of Prof. Andrew Coward (Dunn Nutrition Centre). When comparing values derived from samples collected in the morning and evening, there was no effect of time or equation on the resulting TEE values. The fourth study was a pilot study (n=1) to test the variability in TEE as a result of manipulations in fluid consumption and level of physical activity, of the magnitude of change which may be expected in a sedentary adult. Physical activity levels were manipulated by increasing the number of steps per day to mimic the increases that may result when a sedentary individual commences an activity program. The study comprised three sub-studies completed on the same individual over a period of 8 months. There were no significant changes in TBW across all studies, even though the elimination rates changed with the supplemented water intake and additional physical activity. The extra activity may not have been sufficiently strenuous, nor the water intake high enough, to cause a significant change in TBW and hence in the CO2 production and TEE values. The TEE values measured showed good agreement with estimates calculated from an RMR of 1455 kcal/day, a DIT of 10% of TEE and activity based on measured steps. The covariance values tracked when plotting the residuals were found to be representative of “well-behaved” data and are indicative of the analytical accuracy.
The ratio and product plots were found to reflect the water turnover and CO2 production and thus could, with further investigation, be employed to identify the changes in physical activity.
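For reference, IRMS enrichments in this context are reported in delta (δ) notation relative to the VSMOW standard; a small helper assuming the standard definition δ (‰) = (R_sample / R_standard − 1) × 1000, with the commonly tabulated VSMOW isotope ratios:

```python
# VSMOW reference isotope ratios (commonly tabulated values).
R_VSMOW_2H = 155.76e-6    # 2H/1H ratio of VSMOW
R_VSMOW_18O = 2005.2e-6   # 18O/16O ratio of VSMOW

def delta_permil(r_sample, r_standard):
    """Delta notation in permil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample enriched 10% above VSMOW in deuterium reads about +100 permil:
print(delta_permil(1.10 * R_VSMOW_2H, R_VSMOW_2H))  # ≈ 100.0
```

Enrichment differences expressed in these δ units are what the within-analysis variance figures in the abstract refer to.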

Relevance:

20.00%

Publisher:

Abstract:

The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
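The basic (non-relaxed) uniformization recursion is short; a sketch with illustrative names, using a simple truncation of the Poisson series rather than the paper's inexact matrix-vector products:

```python
import numpy as np
from math import exp

def uniformization(Q, pi0, t, tol=1e-10):
    """Transient distribution pi(t) of a CTMC with generator Q:
    pi(t) = sum_k Poisson(k; Lam*t) * pi0 @ P^k, with P = I + Q/Lam."""
    Lam = max(-np.diag(Q)) * 1.05          # uniformization rate >= max exit rate
    P = np.eye(Q.shape[0]) + Q / Lam       # uniformized DTMC transition matrix
    w = exp(-Lam * t)                      # Poisson weight for k = 0
    v = pi0.astype(float)
    result = w * v
    k, acc = 0, w
    while 1.0 - acc > tol:                 # stop once the Poisson tail is < tol
        k += 1
        v = v @ P                          # the dominant cost: matrix-vector product
        w *= Lam * t / k
        acc += w
        result += w * v
    return result

# Two-state chain with rate 1 in each direction.
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
pi0 = np.array([1.0, 0.0])
print(uniformization(Q, pi0, t=2.0))  # approaches the stationary [0.5, 0.5]
```

The `v @ P` line is the matrix-vector product whose cost the paper's relaxation strategy targets; for large `Lam * t` the number of such products grows, which is exactly the long-run regime described above.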

Relevance:

20.00%

Publisher:

Abstract:

The measurement error model is a well established statistical method for regression problems in medical sciences, although rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which there may be benefits in its use for prediction and estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model in a Bayesian framework using a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
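A quick simulation shows why ignoring measurement error can matter for estimation: noise in a covariate attenuates the naive regression slope toward zero, which a measurement error model corrects. The numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10_000
x = rng.standard_normal(n)             # true (unobserved) covariate
w = x + 0.8 * rng.standard_normal(n)   # observed with measurement error
y = 2.0 * x + 0.5 * rng.standard_normal(n)

C = np.cov(w, y)
naive_slope = C[0, 1] / C[0, 0]        # attenuated estimate
reliability = 1.0 / (1.0 + 0.8 ** 2)   # var(x)/var(w), known here by construction
print(naive_slope)                     # ≈ 2 * reliability ≈ 1.22, not 2
print(naive_slope / reliability)       # corrected, ≈ 2
```

In practice the reliability is unknown, which is why the abstract's Bayesian conditional independence model (fitted with a Gibbs sampler) is used to estimate the measurement error structure jointly with the regression parameters.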

Relevance:

20.00%

Publisher:

Abstract:

Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. It is useful for experimentation and practical implementation purposes to develop and test these models in an efficient manner, particularly when computational resources are limited. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust model approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research on the task of speaker identification indicated similar system performance between the VQ Gaussian-based technique and GMMs.
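The core idea, a VQ codebook with one Gaussian fitted per cell as a fast GMM approximation, can be sketched as follows; all names and the deterministic seeding are illustrative choices, not the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(3)

def vq_gaussian_model(X, seed_idx, iters=10):
    """VQ (k-means) codebook, then one diagonal Gaussian per cell."""
    centers = X[seed_idx].copy()
    for _ in range(iters):                          # plain k-means (VQ)
        assign = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[assign == j].mean(0) for j in range(len(centers))])
    varis, weights = [], []
    for j in range(len(centers)):                   # one Gaussian per cell
        Xj = X[assign == j]
        varis.append(Xj.var(0) + 1e-6)
        weights.append(len(Xj) / len(X))
    return centers, np.array(varis), np.array(weights)

def mean_log_likelihood(X, means, varis, weights):
    d2 = (X[:, None] - means) ** 2 / varis
    logp = -0.5 * (d2 + np.log(2 * np.pi * varis)).sum(-1) + np.log(weights)
    return np.logaddexp.reduce(logp, axis=1).mean()

# Four well-separated 2-D "feature" clusters, one k-means seed per cluster.
X = np.vstack([rng.normal(m, 0.3, (200, 2)) for m in (0.0, 3.0, 6.0, 9.0)])
m, v, w = vq_gaussian_model(X, seed_idx=[0, 200, 400, 600])
print(mean_log_likelihood(X, m, v, w))
```

Compared with EM training of a full GMM, the hard VQ assignment makes model construction a single clustering pass, which is the speed advantage the abstract describes.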

Relevance:

20.00%

Publisher:

Abstract:

Robust speaker verification on short utterances remains a key consideration when deploying automatic speaker recognition, as many real-world applications often have access to only limited-duration speech data. This paper explores how recent technologies based on total variability modeling behave when training and testing utterance lengths are reduced. Results are presented which provide a comparison of Joint Factor Analysis (JFA) and i-vector based systems, including various compensation techniques: Within-Class Covariance Normalization (WCCN), LDA, Scatter Difference Nuisance Attribute Projection (SDNAP) and Gaussian Probabilistic Linear Discriminant Analysis (GPLDA). Speaker verification performance for utterances with as little as 2 sec of data taken from the NIST Speaker Recognition Evaluations is presented to provide a clearer picture of the current performance characteristics of these techniques in short-utterance conditions.
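As an illustration of one compensation technique listed (WCCN), the projection is the Cholesky factor of the inverse within-speaker covariance; a sketch with toy "i-vectors" and hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(4)

def within_class_cov(X, ids):
    """Average within-class covariance across speakers."""
    W = np.zeros((X.shape[1], X.shape[1]))
    classes = np.unique(ids)
    for s in classes:
        Xc = X[ids == s] - X[ids == s].mean(0)
        W += Xc.T @ Xc / len(Xc)
    return W / len(classes)

def wccn_matrix(X, ids):
    # B with B @ B.T = W^{-1}; projecting by B whitens within-class scatter.
    return np.linalg.cholesky(np.linalg.inv(within_class_cov(X, ids)))

# Toy "i-vectors": 3 speakers x 20 sessions in 5 dimensions.
speaker_means = 3.0 * rng.standard_normal((3, 5))
X = np.vstack([m + rng.standard_normal((20, 5)) for m in speaker_means])
ids = np.repeat([0, 1, 2], 20)

B = wccn_matrix(X, ids)
Y = X @ B                      # projected vectors
print(np.round(within_class_cov(Y, ids), 6))  # identity: session variability normalized
```

After projection the within-speaker scatter is the identity, so nuisance (session) directions no longer dominate the cosine or PLDA scoring that follows.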

Relevance:

20.00%

Publisher:

Abstract:

Changes in peripheral aberrations, particularly higher-order aberrations, as a function of accommodation have received little attention. Wavefront aberrations were measured for the right eyes of 9 young adult emmetropes at 38 field positions in the central 42 x 32 degrees of the visual field. Subjects accommodated monocularly to targets at vergences of either 0.3 or 4.0 D. Wavefront data for a 5 mm diameter pupil were analyzed either in terms of the vector components of refraction or Zernike coefficients and total RMS wavefront aberrations. Relative peripheral refractive error (RPRE) was myopic at both accommodation demands and showed only a slight, not statistically significant, hypermetropic shift in the vertical meridian with the higher accommodation demand. There was little change in the astigmatic components of refraction or the higher-order Zernike coefficients, apart from fourth-order spherical aberration which became more negative (by 0.10 µm) at all field locations. Although it has been suggested that nearwork and the state of peripheral refraction may play some role in myopia development, for most of our adult emmetropes any changes with accommodation in RPRE and aberration were small. Hence it seems unlikely that such changes can be of importance to late-onset myopisation.
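The "vector components of refraction" mentioned above are conventionally the power vectors M, J0 and J45; a small helper assuming the standard convention (sphere S, cylinder C, cylinder axis α):

```python
import numpy as np

def power_vector(S, C, axis_deg):
    """Power-vector components of a sphero-cylindrical refraction,
    assuming the standard convention M = S + C/2, J0 = -(C/2)cos(2a),
    J45 = -(C/2)sin(2a)."""
    a = np.deg2rad(axis_deg)
    M = S + C / 2.0                    # spherical equivalent
    J0 = -(C / 2.0) * np.cos(2 * a)    # with/against-the-rule astigmatism
    J45 = -(C / 2.0) * np.sin(2 * a)   # oblique astigmatism
    return M, J0, J45

# Example: -1.00 DS / -0.50 DC x 180
print(power_vector(-1.00, -0.50, 180))
```

Expressing refraction this way makes components like the relative peripheral refractive error (RPRE, a difference in M across the field) directly comparable across field positions.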

Relevance:

20.00%

Publisher:

Abstract:

The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronics switching devices, which inject high-frequency components in addition to the desired current.
Also, noise and harmonic distortion can impact the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known, advanced techniques used for the estimation of power system frequency. Chapter 2 focuses on an in-depth analysis conducted on the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other, novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices.
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated, and it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters such as the amplitudes and phase angles estimated by the Kalman filter are passed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level is reduced on the sample vector. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a similar structure to a basic Kalman filter, except that the initial settings are computed through extensive mathematical work with regard to the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering. However, the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in voltage at the point of common coupling in weak-source cases, balances the source currents, and makes the supply-side power factor a desired value.
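As a hedged illustration of the family of estimators discussed in Chapters 3-7 (not the thesis's algorithms themselves): a basic linear Kalman filter tracking the in-phase and quadrature amplitudes of a noisy 50 Hz signal, with the frequency assumed known and all names hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Noisy power-system signal z_k = a*cos(w t_k) + b*sin(w t_k) + noise.
f, fs = 50.0, 1000.0                 # 50 Hz signal, 1 kHz sampling
w = 2 * np.pi * f
a_true, b_true = 1.5, -0.8
t = np.arange(400) / fs
z = a_true * np.cos(w * t) + b_true * np.sin(w * t) + 0.2 * rng.standard_normal(len(t))

x = np.zeros(2)                      # state [a, b], assumed constant
P = np.eye(2) * 10.0                 # initial uncertainty (the "initial settings")
R = 0.2 ** 2                         # measurement noise variance
for k in range(len(t)):
    H = np.array([np.cos(w * t[k]), np.sin(w * t[k])])  # time-varying measurement row
    S = H @ P @ H + R                # innovation variance
    K = P @ H / S                    # Kalman gain
    x = x + K * (z[k] - H @ x)       # state update from the innovation
    P = P - np.outer(K, H @ P)       # covariance update: (I - K H) P

amplitude = np.hypot(*x)
print(x, amplitude)   # close to [1.5, -0.8] and the true amplitude
```

The signal amplitude and phase follow from the estimated state (amplitude = sqrt(a² + b²), phase = atan2(−b, a)); extending the state or interleaving a frequency estimator, as the thesis chapters do, builds on this same recursion.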

Relevance:

20.00%

Publisher:

Abstract:

Background: Few studies have specifically investigated the functional effects of uncorrected astigmatism on measures of reading fluency. This information is important to provide evidence for the development of clinical guidelines for the correction of astigmatism. Methods: Participants included 30 visually normal, young adults (mean age 21.7 ± 3.4 years). Distance and near visual acuity and reading fluency were assessed with optimal spectacle correction (baseline) and for two levels of astigmatism, 1.00DC and 2.00DC, at two axes (90° and 180°) to induce both against-the-rule (ATR) and with-the-rule (WTR) astigmatism. Reading and eye movement fluency were assessed using standardized clinical measures including the test of Discrete Reading Rate (DRR), the Developmental Eye Movement (DEM) test and by recording eye movement patterns with the Visagraph (III) during reading for comprehension. Results: Both distance and near acuity were significantly decreased compared to baseline for all of the astigmatic lens conditions (p < 0.001). Reading speed with the DRR for N16 print size was significantly reduced for the 2.00DC ATR condition (a reduction of 10%), while for smaller text sizes reading speed was reduced by up to 24% for the 1.00DC ATR and 2.00DC condition in both axis directions (p<0.05). For the DEM, sub-test completion speeds were significantly impaired, with the 2.00DC condition affecting both vertical and horizontal times and the 1.00DC ATR condition affecting only horizontal times (p<0.05). Visagraph reading eye movements were not significantly affected by the induced astigmatism. Conclusions: Induced astigmatism impaired performance on selected tests of reading fluency, with ATR astigmatism having significantly greater effects on performance than did WTR, even for relatively small amounts of astigmatic blur of 1.00DC. These findings have implications for the minimal prescribing criteria for astigmatic refractive errors.

Relevance:

20.00%

Publisher:

Abstract:

Animal models of refractive error development have demonstrated that visual experience influences ocular growth. In a variety of species, axial anisometropia (i.e. a difference in the length of the two eyes) can be induced through unilateral occlusion, image degradation or optical manipulation. In humans, anisometropia may occur in isolation or in association with amblyopia, strabismus or unilateral pathology. Non-amblyopic myopic anisometropia represents an interesting anomaly of ocular growth, since the two eyes within one visual system have grown to different endpoints. These experiments have investigated a range of biometric, optical and mechanical properties of anisometropic eyes (with and without amblyopia) with the aim of improving our current understanding of asymmetric refractive error development. In the first experiment, the interocular symmetry in 34 non-amblyopic myopic anisometropes (31 Asian, 3 Caucasian) was examined during relaxed accommodation. A high degree of symmetry was observed between the fellow eyes for a range of optical, biometric and biomechanical measurements. When the magnitude of anisometropia exceeded 1.75 D, the more myopic eye was almost always the sighting dominant eye. Further analysis of the optical and biometric properties of the dominant and non-dominant eyes was conducted to determine any related factors but no significant interocular differences were observed with respect to best-corrected visual acuity, corneal or total ocular aberrations during relaxed accommodation. Given the high degree of symmetry observed between the fellow eyes during distance viewing in the first experiment and the strong association previously reported between near work and myopia development, the aim of the second experiment was to investigate the symmetry between the fellow eyes of the same 34 myopic anisometropes following a period of near work. 
Symmetrical changes in corneal and total ocular aberrations were observed following a short reading task (10 minutes, 2.5 D accommodation demand), which was attributed to the high degree of interocular symmetry in measures of anterior eye morphology and corneal biomechanics. These changes were related to eyelid shape and position during downward gaze, but gave no clear indication of factors associated with near work that might cause asymmetric eye growth within an individual. Since the influence of near work on eye growth is likely to be most obvious during, rather than following, near tasks, in the third experiment the interocular symmetry of the optical and biometric changes was examined during accommodation for 11 myopic anisometropes. The changes in anterior eye biometrics associated with accommodation were again similar between the eyes, resulting in symmetrical changes in the optical characteristics. However, the more myopic eyes exhibited slightly greater amounts of axial elongation during accommodation, which may be related to the force exerted by the ciliary muscle. This small asymmetry in axial elongation between the eyes may be due to interocular differences in posterior eye structure, given that the accommodative response was equal between eyes. Using optical coherence tomography, a reduced average choroidal thickness was observed in the more myopic eyes compared to the less myopic eyes of these subjects. The interocular difference in choroidal thickness was correlated with the magnitude of spherical equivalent and axial anisometropia. The symmetry in optics and biometrics between fellow eyes which have undergone significantly different visual development (i.e. anisometropic subjects with amblyopia) is also of interest with respect to refractive error development.
In the final experiment the influence of altered visual experience upon corneal and ocular higher-order aberrations was investigated in 21 amblyopic subjects (8 refractive, 11 strabismic and 2 form deprivation). Significant differences in aberrations were observed between the fellow eyes, which varied according to the type of amblyopia. Refractive amblyopes displayed significantly higher levels of 4th order corneal aberrations (spherical aberration and secondary astigmatism) in the amblyopic eye compared to the fellow non-amblyopic eye. Strabismic amblyopes exhibited significantly higher levels of trefoil, a third order aberration, in the amblyopic eye for both corneal and total ocular aberrations. The results of this experiment suggest that asymmetric visual experience during development is associated with asymmetries in higher-order aberrations, proportional to the magnitude of anisometropia and dependent upon the amblyogenic factor. This suggests a direct link between the development of higher-order optical characteristics of the human eye and visual feedback. The results from these experiments have shown that a high degree of symmetry exists between the fellow eyes of non-amblyopic myopic anisometropes for a range of biomechanical, biometric and optical parameters for different levels of accommodation and following near work. While a single specific optical or biomechanical factor that is consistently associated with asymmetric refractive error development has not been identified, the findings from these studies suggest that further research into the association between ocular dominance, choroidal thickness and higher-order aberrations with anisometropia may improve our understanding of refractive error development.