957 results for best linear unbiased predictor
Abstract:
Speech polarity detection is a crucial first step in many speech processing techniques. In this paper, an algorithm is proposed that improves upon an existing technique based on the skewness of the voice source (VS) signal. Here, the integrated linear prediction residual (ILPR) is used as the VS estimate, obtained by applying linear prediction to long-term frames of the low-pass filtered speech signal. This excludes unvoiced regions from the analysis and also reduces computation. Further, a modified skewness measure is proposed for the decision, one that considers the magnitude of the skewness of the ILPR along with its sign. With the detection error rate (DER) as the performance metric, the algorithm is tested on 8 large databases, and its performance (DER = 0.20%) is found to be comparable to that of the best existing technique (DER = 0.06%) on both clean and noisy speech. Further, the proposed method is found to be ten times faster than the best technique.
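As a rough illustration of this kind of decision rule, here is a minimal Python sketch. The frame length, the magnitude-weighted voting, and the assumption that the ILPR has already been estimated are all ours; this is not the paper's exact "modified skewness" measure.

```python
import numpy as np
from scipy.stats import skew

def detect_polarity(ilpr, frame_len=8000, hop=8000):
    """Return +1 or -1 for the speech polarity from an ILPR signal.

    One hypothetical reading of a magnitude-aware skewness decision:
    each long-term frame votes with the sign of its sample skewness,
    weighted by the skewness magnitude, so strongly skewed frames dominate.
    """
    votes = 0.0
    for start in range(0, len(ilpr) - frame_len + 1, hop):
        s = skew(ilpr[start:start + frame_len])
        votes += np.sign(s) * abs(s)   # magnitude-weighted sign vote
    return +1 if votes >= 0 else -1
```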
Abstract:
In this paper, reanalysis fields from the ECMWF have been statistically downscaled to predict surface moisture flux and daily precipitation from large-scale atmospheric fields at two observatories (Zaragoza and Tortosa, Ebro Valley, Spain) during the 1961-2001 period. Three types of downscaling models have been built: (i) analogues, (ii) analogues followed by random forests, and (iii) analogues followed by multiple linear regression. The inputs consist of data (predictor fields) taken from the ERA-40 reanalysis. The predicted fields are precipitation and surface moisture flux as measured at the two observatories. With the aim of reducing the dimensionality of the problem, the ERA-40 fields have been decomposed using empirical orthogonal functions. The available daily data have been divided into two parts: a training period (1961-1996) used to find a group of about 300 analogues to build the downscaling model, and a test period (1997-2001) in which the models' performance has been assessed on independent data. In the case of surface moisture flux, the models based on analogues followed by random forests do not clearly outperform those built on analogues plus multiple linear regression, while simple averages calculated from the nearest analogues found in the training period yielded only slightly worse results. In the case of precipitation, the three types of model performed equally. These results suggest that most of the models' downscaling capability can be attributed to the analogues-calculation stage.
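The analogues stage (model type i) might be sketched as follows. This is an illustrative reduction assuming Euclidean distance in EOF space and the roughly 300 analogues mentioned above, not the authors' exact configuration.

```python
import numpy as np

def eof_reduce(fields, n_modes=20):
    """Project anomaly fields (days x gridpoints) onto the leading EOFs."""
    anom = fields - fields.mean(axis=0)
    _, _, vt = np.linalg.svd(anom, full_matrices=False)
    return anom @ vt[:n_modes].T          # principal-component scores

def analogue_predict(train_pcs, train_obs, test_pcs, n_analogues=300):
    """Predict each test day as the mean observation over its closest training days."""
    preds = []
    for x in test_pcs:
        d = np.linalg.norm(train_pcs - x, axis=1)   # distance in EOF space
        idx = np.argsort(d)[:n_analogues]
        preds.append(train_obs[idx].mean())
    return np.array(preds)
```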
Abstract:
The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.
If dim M = dim H − 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.
The projective bound Q, defined to be the supremum of the operator norm of P_M over all subspaces, satisfies 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.
If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection on (kernel P_M)⊥ by the dual norm. The projective bounds of a norm and its dual are equal.
The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F − G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to prove that (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken using dual norms.
In all results, the real and complex cases are handled in a completely parallel fashion.
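In the familiar Euclidean case (an inner-product norm, for which Q = 1, and c = 1 when the range of F fills its space), the distance formula reduces to the Eckart-Young theorem and can be checked numerically; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((4, 4))   # almost surely full rank, so the range fills the space and c = 1

# Eckart-Young with the spectral norm: the operator-norm distance from F
# to the nearest lower-rank G is the smallest singular value of F ...
sigma_min = np.linalg.svd(F, compute_uv=False)[-1]

# ... and the spectral norm of the pseudo-inverse is 1/sigma_min, so the
# distance equals c/||F^+|| with c = 1, as stated above.
norm_pinv = np.linalg.norm(np.linalg.pinv(F), 2)
assert np.isclose(sigma_min, 1.0 / norm_pinv)
```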
Abstract:
Unbiased location- and scale-invariant "elemental" estimators for the GPD tail parameter are constructed. Each involves three log-spacings. The estimators are unbiased for finite sample sizes, even as small as N=3. It is shown that the elementals form a complete basis for unbiased location- and scale-invariant estimators constructed from linear combinations of log-spacings. Preliminary numerical evidence is presented which suggests that elemental combinations can be constructed which are consistent estimators of the tail parameter for samples drawn from the pure GPD family.
Abstract:
Large margin criteria and discriminative models are two effective improvements for HMM-based speech recognition. This paper proposes a large-margin-trained log-linear model with kernels for continuous speech recognition (CSR). To avoid explicit computation in the high-dimensional feature space and to achieve non-linear decision boundaries, a kernel-based training and decoding framework is proposed in this work. To make the system robust to noise, a kernel adaptation scheme is also presented. Previous work in this area is extended in two directions. First, most kernels for CSR focus on measuring the similarity between two observation sequences; the proposed joint kernels define a similarity between two observation-label sequence pairs at the sentence level. Second, this paper addresses how to efficiently employ kernels in large margin training and decoding with lattices. To the best of our knowledge, this is the first attempt at using large margin kernel-based log-linear models for CSR. The model is evaluated on a noise-corrupted continuous digit task: AURORA 2.0. © 2013 IEEE.
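A toy sketch of a sentence-level joint kernel in the spirit described above. The Gaussian observation kernel, the label-agreement gating, and the fixed-length sequences are simplifying assumptions of ours; real CSR kernels must handle variable-length observation and label sequences.

```python
import numpy as np

def obs_kernel(x1, x2, gamma=0.5):
    """Gaussian kernel between two fixed-length observation sequences."""
    return np.exp(-gamma * np.sum((np.asarray(x1) - np.asarray(x2)) ** 2))

def joint_kernel(x1, y1, x2, y2, gamma=0.5):
    """Hypothetical sentence-level joint kernel: observation similarity
    gated by agreement of the two label sequences (tuples)."""
    return obs_kernel(x1, x2, gamma) * float(y1 == y2)

def score(x, y, support, alphas, gamma=0.5):
    """Kernelised log-linear score of an (observation, label-sequence) pair,
    expressed through kernels against support pairs instead of explicit
    high-dimensional features."""
    return sum(a * joint_kernel(x, y, xs, ys, gamma)
               for a, (xs, ys) in zip(alphas, support))
```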
Abstract:
Aging African-American women are disproportionately affected by negative health outcomes and mortality, and life stress is strongly associated with these outcomes. The purpose of this research was to understand how aging African-American women manage stress. Specifically, the effects of coping, optimism, resilience, and religiousness as they relate to quality of life were examined. This cross-sectional exploratory study used a self-administered questionnaire to examine quality of life in 182 African-American women aged 65 years or older living in senior residential centers in Baltimore, recruited by convenience sampling. The age range was 65 to 94 years, with a mean of 71.8 years (SD = 5.6). The majority (53.1%) of participants had completed high school, with 23 percent (N = 42) holding college degrees and 19 percent (N = 35) holding advanced degrees. Nearly 58 percent of participants were widowed and 81 percent were retired. In addition to demographics, the questionnaire included the following reliable and valid survey instruments: the Brief Cope Scale (Carver, Scheier, & Weintraub, 1989), Optimism Questionnaire (Scheier, Carver, & Bridges, 1994), Resilience Survey (Wagnild & Young, 1987), Religiousness Assessment (Koenig, 1997), and Quality of Life Questionnaire (Cummins, 1996). Results revealed that the positive psychological factors examined were positively associated with, and significant predictors of, quality of life. Bivariate correlations indicated that of the six coping dimensions measured in this study, planning (r = .68) was the most positively associated with quality of life. Optimism (r = .33), resilience (r = .48), and religiousness (r = .30) were also significantly correlated with quality of life. In the linear regression model, the coping dimension of planning was again the best predictor of quality of life (beta = .75, p < .001). Optimism (beta = .31, p < .001), resilience (beta = .34, p < .001) and religiousness (beta = .17, p < .01) were also significant predictors of quality of life. It appears that positive psychological factors play an important role in improving quality of life among aging African-American women.
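For readers who want to reproduce this kind of analysis, correlations and standardized regression coefficients like the r and beta values reported above can be computed as follows. This is a generic sketch; the column layout (planning, optimism, resilience, religiousness) is hypothetical.

```python
import numpy as np

def pearson_r(x, y):
    """Bivariate correlation, as in the r values reported above."""
    return np.corrcoef(x, y)[0, 1]

def standardized_betas(X, y):
    """OLS coefficients on z-scored variables (standardized betas)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    A = np.column_stack([np.ones(len(yz)), Xz])   # intercept column (~0 after z-scoring)
    beta, *_ = np.linalg.lstsq(A, yz, rcond=None)
    return beta[1:]                               # one beta per predictor column
```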
Abstract:
During the 1980s, a rapid increase in the Phytoplankton Colour Index (PCI), a semiquantitative visual estimate of algal biomass, was observed in the North Sea as part of a region-wide regime shift. Two new data sets created from the relationship between the PCI and SeaWiFS chlorophyll a (Chl a) quantify differences in the previous and current regimes for both the anthropogenically affected coastal North Sea and the comparatively unaffected open North Sea. The new regime maintains a 13% higher Chl a concentration in the open North Sea and a 21% higher concentration in coastal North Sea waters. However, the current regime has lower total nitrogen and total phosphorus concentrations than the previous regime, although the molar N:P ratio in coastal waters is now well above the Redfield ratio and continually increasing. Besides becoming warmer, North Sea waters are also becoming clearer (i.e., less turbid), thereby allowing the normally light-limited coastal phytoplankton to more effectively utilize lower concentrations of nutrients. Linear regression analyses indicate that winter Secchi depth and sea surface temperature are the most important predictors of coastal Chl a, while Atlantic inflow is the best predictor of open Chl a; nutrient concentrations are not a significant predictor in either model. Thus, despite decreasing nutrient concentrations, Chl a continues to increase, suggesting that climatic variability and water transparency may be more important than nutrient concentrations to phytoplankton production at the scale of this study.
Abstract:
One of the first attempts to develop a formal model of depth cue integration is found in Maloney and Landy's (1989) "human depth combination rule". They advocate that the combination of depth cues by the visual system is best described by a weighted linear model. The present experiments tested whether the linear combination rule applies to the integration of texture and shading. As would be predicted by a linear combination rule, the weight assigned to the shading cue varied as a function of its curvature value. However, the weight assigned to the texture cue varied systematically as a function of the curvature value of both cues. Here we describe a non-linear model which provides a better fit to the data. Redescribing the stimuli in terms of depth rather than curvature reduced the goodness of fit for all models tested. These results support the hypothesis that the locus of cue integration is a curvature map, rather than a depth map. We conclude that the linear combination rule does not generalize to the integration of shading and texture, and that for these cues integration likely occurs after the recovery of surface curvature.
Abstract:
14C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between 14C age and calendar age to match the shape of a sequence of closely spaced peat 14C dates with the 14C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar year confidence intervals for sequences of 14C dates. We assess the assumptions, advantages, and limitations of the method. Several case-studies show that WMD results in more precise chronologies than when individual 14C dates are calibrated. WMD is most successful during periods with major excursions in the 14C calibration curve (e.g., in one case WMD could narrow down confidence intervals from 230 to 36 yr).
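A minimal numerical sketch of the matching step, assuming a constant deposition rate so that consecutive samples are a fixed number of calendar years apart; real WMD typically works from depth with an accumulation model, and the calibration curve and chi-square criterion here are illustrative.

```python
import numpy as np

def wiggle_match(ages_14c, errors, spacing_yr, cal_ages, cal_14c):
    """Slide a sequence of 14C dates along the calibration curve and return
    the calendar age of the top sample with the minimum chi-square misfit.

    ages_14c, errors : measured 14C ages and 1-sigma errors, top to bottom
    spacing_yr       : calendar years between consecutive samples
    cal_ages, cal_14c: calibration curve, cal_ages in ascending order
    """
    rel_cal = np.arange(len(ages_14c)) * spacing_yr
    best_t0, best_chi2 = None, np.inf
    for t0 in cal_ages:
        curve = np.interp(t0 + rel_cal, cal_ages, cal_14c)
        chi2 = np.sum(((ages_14c - curve) / errors) ** 2)
        if chi2 < best_chi2:
            best_t0, best_chi2 = t0, chi2
    return best_t0, best_chi2
```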
Abstract:
Conscientiousness, a variable from the Big Five personality inventory, has been recognized as an important key to understanding students' academic performance. The aim of this paper is to analyze the relationship between the conscientiousness factor itself, and two of its facets, laboriousness and planning, and academic performance, and to examine whether there are gender differences in the conscientiousness personality factor. A total of 456 Spanish high school and college students participated in the study. They were asked to complete a personality inventory and a self-report questionnaire. The results show that both conscientiousness as a personality dimension and its laboriousness facet predict academic performance, especially students' exam marks, classroom attendance and dedication to study. Regarding gender, women scored higher than men on this personality factor. From a practical perspective, these results indicate that establishing a routine of continuous work is suitable for improving students' grades and their adaptation to the educational environment.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data alone. The important concepts for achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
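One of the selective criteria mentioned, leave-one-out cross-validation, has a convenient closed form for linear-in-the-parameter models; a minimal sketch follows, where the greedy forward selection over candidate basis-function columns is our illustrative choice rather than any specific algorithm from the review.

```python
import numpy as np

def loo_mse(Phi, y):
    """Leave-one-out MSE of a linear-in-the-parameters model, computed in
    closed form from the hat matrix (the PRESS statistic divided by n)."""
    H = Phi @ np.linalg.pinv(Phi)            # hat matrix: projection onto span(Phi)
    resid = y - H @ y
    loo = resid / (1.0 - np.diag(H))         # closed-form LOO residuals
    return np.mean(loo ** 2)

def forward_select(Phi, y, max_terms):
    """Greedily add the basis-function column that most reduces LOO error."""
    chosen, remaining = [], list(range(Phi.shape[1]))
    while remaining and len(chosen) < max_terms:
        err, j = min((loo_mse(Phi[:, chosen + [j]], y), j) for j in remaining)
        chosen.append(j)
        remaining.remove(j)
    return chosen
```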
Abstract:
OBJECTIVE: To assess whether the impedance cardiogram recorded by an automated external defibrillator during cardiac arrest can facilitate emergency care by lay persons. Lay persons perform emergency pulse checks poorly (sensitivity 84%, specificity 36%); guidelines recommend that they not be performed. The impedance cardiogram (dZ/dt) is used to indicate stroke volume. Can an impedance cardiogram algorithm in a defibrillator rapidly determine circulatory arrest and facilitate prompt initiation of external cardiac massage?
DESIGN: Clinical study.
SETTING: University hospital.
PATIENTS: Phase 1 patients attended for myocardial perfusion imaging. Phase 2 patients were recruited during cardiac arrest. This group included nonarrest controls.
INTERVENTIONS: The impedance cardiogram was recorded through defibrillator/electrocardiographic pads oriented in the standard cardiac arrest position.
MEASUREMENTS AND MAIN RESULTS: Phase 1: Stroke volumes from gated myocardial perfusion imaging scans were correlated with parameters from the impedance cardiogram system (dZ/dt_max and the peak amplitude of the Fast Fourier Transform of dZ/dt between 1.5 Hz and 4.5 Hz). Multivariate analysis was performed to fit stroke volumes from gated myocardial perfusion imaging scans with linear and quadratic terms for dZ/dt_max and the Fast Fourier Transform to identify significant parameters for incorporation into a cardiac arrest diagnostic algorithm. The square of the peak amplitude of the Fast Fourier Transform of dZ/dt was the best predictor of reduction in stroke volumes from gated myocardial perfusion imaging scans (range = 33-85 mL; p = .016). Having established that the two-pad impedance cardiogram system could detect differences in stroke volumes from gated myocardial perfusion imaging scans, we assessed its performance in diagnosing cardiac arrest. Phase 2: The impedance cardiogram was recorded in 132 "cardiac arrest" patients (53 training, 79 validation) and 97 controls (47 training, 50 validation): the diagnostic algorithm indicated cardiac arrest with sensitivities and specificities (+/- exact 95% confidence intervals) of 89.1% (85.4-92.1) and 99.6% (99.4-99.7; training) and 81.1% (77.6-84.3) and 97% (96.7-97.4; validation).
CONCLUSIONS: The impedance cardiogram algorithm is a significant marker of circulatory collapse. Automated defibrillators with an integrated impedance cardiogram could improve emergency care by lay persons, enabling rapid and appropriate initiation of external cardiac massage.
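A sketch of the spectral feature described in the results. The Hann window and the thresholded decision rule are our additions for illustration; the study's trained diagnostic algorithm is not specified in this abstract.

```python
import numpy as np

def fft_band_peak(dzdt, fs, f_lo=1.5, f_hi=4.5):
    """Peak FFT amplitude of dZ/dt between 1.5 and 4.5 Hz; the square of
    this quantity was the study's best predictor of stroke volume."""
    spec = np.abs(np.fft.rfft(dzdt * np.hanning(len(dzdt))))
    freqs = np.fft.rfftfreq(len(dzdt), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].max()

def looks_like_arrest(dzdt, fs, threshold):
    """Hypothetical decision rule: flag circulatory arrest when the squared
    band peak falls below a trained threshold (not the study's algorithm)."""
    return fft_band_peak(dzdt, fs) ** 2 < threshold
```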
Abstract:
Linear wave theory models are commonly applied to predict the performance of bottom-hinged oscillating wave surge converters (OWSCs) in operational sea states. To account for non-linear effects, additional input coefficients not included in the model itself become necessary. In ocean engineering it is common practice to obtain damping coefficients of floating structures from free decay tests. This paper presents results obtained from experimental tank tests and numerical computational fluid dynamics (CFD) simulations of OWSCs. Agreement between the numerical and experimental methods is found to be very good, with CFD providing more data points at small rotation amplitudes.
Analysis of the obtained data reveals that linear-quadratic damping, as commonly used in time domain models, is not able to accurately model the occurring damping over the whole range of rotation amplitudes. The authors conclude that a hyperbolic function is most suitable to express the instantaneous damping ratio over the rotation amplitude and would be the best choice for use in coefficient-based time domain models.
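The paper's exact functional form is not given in this abstract; one simple hyperbolic form, fitted to placeholder free-decay data, might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic_damping(amplitude, a, b):
    """One simple hyperbolic form for the instantaneous damping ratio
    as a function of rotation amplitude (assumed shape, not the paper's)."""
    return a / amplitude + b

# Placeholder amplitude/damping-ratio pairs standing in for free decay data
amp = np.array([0.05, 0.10, 0.20, 0.40, 0.80])    # rotation amplitude, rad
zeta = np.array([0.90, 0.52, 0.33, 0.24, 0.20])   # damping ratio
params, _ = curve_fit(hyperbolic_damping, amp, zeta)
```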
Abstract:
Virtual metrology (VM) aims to predict metrology values using sensor data from production equipment and physical metrology values of preceding samples. VM is a promising technology for the semiconductor manufacturing industry, as it can reduce the frequency of in-line metrology operations and provide supportive information for other operations such as fault detection, predictive maintenance and run-to-run control. The prediction models for VM can be drawn from a large variety of linear and nonlinear regression methods, and the selection of a proper regression method for a specific VM problem is not straightforward, especially when the candidate predictor set is high-dimensional, correlated and noisy. Using process data from a benchmark semiconductor manufacturing process, this paper evaluates the performance of four typical regression methods for VM: multiple linear regression (MLR), least absolute shrinkage and selection operator (LASSO), neural networks (NN) and Gaussian process regression (GPR). It is observed that GPR performs best among the four methods and that, remarkably, the performance of linear regression approaches that of GPR as the subset of selected input variables is increased. The observed competitiveness of high-dimensional linear regression models, which does not hold true in general, is explained in the context of extreme learning machines and functional link neural networks.
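A rough reconstruction of such a four-way comparison with scikit-learn, using synthetic stand-in data; the benchmark process data, preprocessing, and hyper-parameters are not specified here and would need tuning in practice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for high-dimensional, correlated, noisy sensor data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)

models = {
    "MLR": LinearRegression(),
    "LASSO": Lasso(alpha=0.01),
    "NN": MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000),
    "GPR": GaussianProcessRegressor(),
}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: cross-validated MSE = {mse:.4f}")
```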
Abstract:
BACKGROUND: It is now common for individuals to require dialysis following the failure of a kidney transplant. Management of complications and preparation for dialysis are suboptimal in this group. To aid planning, it is desirable to estimate the time to dialysis requirement. The rate of decline in the estimated glomerular filtration rate (eGFR) may be used to this end.
METHODS: This study compared the rate of eGFR decline prior to dialysis commencement between individuals with failing transplants and transplant-naïve patients. The rate of eGFR decline was also compared between transplant recipients with and without graft failure. eGFR was calculated using the four-variable MDRD equation, with the rate of decline calculated by least squares linear regression.
RESULTS: The annual rate of eGFR decline in incident dialysis patients with graft failure exceeded that of the transplant-naïve incident dialysis patients. In the transplant cohort, the mean annual rate of eGFR decline prior to graft failure was 7.3 ml/min/1.73 m² compared to 4.8 ml/min/1.73 m² in the transplant-naïve group (p < 0.001) and 0.35 ml/min/1.73 m² in recipients without graft failure (p < 0.001). Factors associated with eGFR decline were recipient age, decade of transplantation, HLA mismatch and histological evidence of chronic immunological injury.
CONCLUSIONS: Individuals with graft failure have a rapid decline in eGFR prior to dialysis commencement. To improve outcomes, dialysis planning and management of chronic kidney disease complications should be initiated earlier than in the transplant-naïve population.
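The slope estimation in the methods is plain least squares of eGFR on time; a minimal sketch (the MDRD eGFR calculation itself is not reproduced here):

```python
import numpy as np

def annual_egfr_slope(days, egfr):
    """Rate of eGFR change by ordinary least squares regression of eGFR
    (ml/min/1.73 m²) on time in days, returned per year; negative values
    indicate decline."""
    slope_per_day = np.polyfit(days, egfr, 1)[0]
    return slope_per_day * 365.25
```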