61 results for Weighting
Abstract:
The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information in locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. The task was then translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
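The simplest reading of the weighted cue combination described above is a normalised weighted average of the directional estimates carried by each cue. The sketch below (Python, with hypothetical cue values and weights) illustrates only that combination step; it is not the authors' point-attractor model.

```python
# A minimal sketch with hypothetical numbers: combine steering estimates from
# retinal flow (RF), extraretinal (ER), and visual direction (VD) cues as a
# normalised weighted average, the simplest form of weighted cue combination.
import numpy as np

def combine_cues(estimates, weights):
    """Weighted combination of redundant directional cues (degrees)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so the weights sum to 1
    return float(np.dot(w, estimates))

# Hypothetical per-cue steering estimates (deg) and relative weights.
cues = {"RF": 4.0, "ER": 5.5, "VD": 3.0}
weights = {"RF": 0.5, "ER": 0.2, "VD": 0.3}

steer = combine_cues(list(cues.values()), [weights[k] for k in cues])
print(f"combined steering estimate: {steer:.2f} deg")
```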
Abstract:
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure vergence and accommodation simultaneously and objectively in a wide range of participants of all ages while they fixate targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues by using either a Gabor patch or a detailed picture target; and looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive, visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur had a clinically significant effect only when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that the relative weighting of near-cue use is likely to vary between clinical groups, and we present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
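One illustrative way to estimate such relative cue weights is to regress the measured responses on the demand signalled by each cue and read the fitted coefficients as weights. The sketch below uses synthetic data and a plain linear model; it is an assumption for illustration, not the analysis reported above.

```python
# A minimal sketch, with synthetic data, of estimating relative near-cue weights
# by regressing vergence responses on the demand signalled by disparity, blur,
# and proximity cues. This is an illustrative linear model only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Cue demands for each trial (synthetic, arbitrary units).
disparity = rng.uniform(0.5, 3.0, n)
blur      = rng.uniform(0.5, 3.0, n)
proximity = rng.uniform(0.5, 3.0, n)

# Synthetic responses dominated by disparity, as the abstract reports.
response = 0.85 * disparity + 0.12 * blur + 0.03 * proximity + rng.normal(0, 0.1, n)

X = np.column_stack([disparity, blur, proximity])
weights, *_ = np.linalg.lstsq(X, response, rcond=None)
print(dict(zip(["disparity", "blur", "proximity"], np.round(weights, 3))))
```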
Abstract:
Objective: To examine the properties of the Social Communication Questionnaire (SCQ) in a population cohort of children with autism spectrum disorders (ASDs) and in the general population. Method: SCQ data were collected from three samples: the Special Needs and Autism Project (SNAP) cohort of 9- to 10-year-old children with special educational needs with and without ASD, and two similar but separate age groups of children from the general population (n = 411 and n = 247). Diagnostic assessments were completed on a stratified subsample (n = 255) of the special educational needs group. A sample-weighting procedure enabled us to estimate characteristics of the SCQ in the total ASD population. Diagnostic status of cases in the general population samples was extracted from child health records. Results: The SCQ showed strong discrimination between ASD and non-ASD cases (sensitivity 0.88, specificity 0.72) and between autism and nonautism cases (sensitivity 0.90, specificity 0.86). Findings were not affected by child IQ or parental education. In the general population samples, between 4% and 5% of children scored above the ASD cutoff, including 1.5% who scored above the autism cutoff. Although many of these high-scoring children had an ASD diagnosis, almost all (approximately 90%) of them had a diagnosed neurodevelopmental disorder. Conclusions: This study confirms the utility of the SCQ as a first-level screen for ASD in at-risk samples of school-age children.
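As an illustration of the sample-weighted screening statistics reported above, the sketch below computes weighted sensitivity and specificity from synthetic data; the sampling weights, scores, and prevalence are hypothetical, and the cutoff of 15 is the conventional SCQ ASD screening threshold rather than a value taken from this abstract.

```python
# A minimal sketch (synthetic data, hypothetical stratified sampling weights):
# each child carries a sampling weight, and weighted screen-positive status is
# compared with diagnostic status to give weighted sensitivity/specificity.
import numpy as np

rng = np.random.default_rng(1)
n = 255
asd       = rng.random(n) < 0.4                  # diagnostic status (synthetic)
scq_score = np.where(asd, rng.normal(22, 6, n), rng.normal(10, 6, n))
weight    = rng.uniform(0.5, 3.0, n)             # sampling weights (hypothetical)

screen_pos = scq_score >= 15                     # conventional SCQ ASD cutoff

sensitivity = weight[asd & screen_pos].sum() / weight[asd].sum()
specificity = weight[~asd & ~screen_pos].sum() / weight[~asd].sum()
print(f"weighted sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```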
Abstract:
Background Recent reports have suggested that the prevalence of autism and related spectrum disorders (ASDs) is substantially higher than previously recognised. We sought to quantify the prevalence of ASDs in children in South Thames, UK. Methods Within a total population cohort of 56 946 children aged 9-10 years, we screened all those with a current clinical diagnosis of ASD (n=255) or those judged to be at risk of being an undetected case (n=1515). A stratified subsample (n=255) received a comprehensive diagnostic assessment, including standardised clinical observation and parent interview assessments of autistic symptoms, language, and intelligence quotient (IQ). Clinical consensus diagnoses of childhood autism and other ASDs were derived. We used a sample weighting procedure to estimate prevalence. Findings The prevalence of childhood autism was 38.9 per 10 000 (95% CI 29.9-47.8) and that of other ASDs was 77.2 per 10 000 (52.1-102.3), making the total prevalence of all ASDs 116.1 per 10 000 (90.4-141.8). A narrower definition of childhood autism, which combined clinical consensus with instrument criteria for past and current presentation, provided a prevalence of 24.8 per 10 000 (17.6-32.0). The rate of previous local identification was lowest for children of less educated parents. Interpretation The prevalence of autism and related ASDs is substantially greater than previously recognised. Whether the increase is due to better ascertainment, broadening diagnostic criteria, or increased incidence is unclear. Services in health, education, and social care will need to recognise the needs of children with some form of ASD, who constitute 1% of the child population.
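The general form of a sample-weighted prevalence estimate per 10 000 with a normal-approximation 95% CI is sketched below with entirely synthetic weights and case indicators; the real study's weighting design and interval calculation may differ.

```python
# A minimal sketch (synthetic counts, hypothetical weights) of a sample-weighted
# prevalence estimate per 10,000 with a crude normal-approximation 95% CI.
import numpy as np

total_population = 56946                     # children aged 9-10 in the cohort
rng = np.random.default_rng(2)
weights = rng.uniform(1, 20, 255)            # inverse-probability sampling weights (hypothetical)
is_case = rng.random(255) < 0.4              # clinical consensus ASD diagnosis (synthetic)

estimated_cases = np.sum(weights * is_case)
p = estimated_cases / total_population
se = np.sqrt(p * (1 - p) / total_population) # crude SE ignoring the weighting design
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"prevalence {p*1e4:.1f} per 10,000 (95% CI {lo*1e4:.1f}-{hi*1e4:.1f})")
```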
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression application. The locally regularised orthogonal least squares (LROLS) algorithm with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF network models, is extended to the fully complex-valued RBF network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selecting criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully complex-valued RBF network.
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selecting criterion and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically and it can be chosen with ease from a wide range of values.
Abstract:
The note proposes an efficient nonlinear identification algorithm by combining a locally regularized orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximized model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious model with excellent generalization performance. The D-optimality design criterion further enhances the model efficiency and robustness. An added advantage is that the user only needs to specify a weighting for the D-optimality cost in the combined model selecting criterion and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically and it can be chosen with ease from a wide range of values.
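A much-simplified sketch of the flavour of this combined criterion is given below: orthogonal forward selection in which each candidate regressor's score is its error-reduction term plus a weighting beta times log(w'w), its per-column D-optimality contribution. Local regularisation is omitted and the details are illustrative assumptions, not the authors' exact algorithm.

```python
# A simplified sketch of orthogonal forward selection with a D-optimality term:
# each candidate regressor is orthogonalised against the already-selected columns
# (Gram-Schmidt), and its score combines the error-reduction term with beta times
# log(w'w), the per-column D-optimality contribution. Regularisation is omitted.
import numpy as np

def ofs_d_optimality(X, y, n_terms, beta=1e-3):
    n, m = X.shape
    selected, W = [], []
    for _ in range(n_terms):
        best, best_score = None, -np.inf
        for k in range(m):
            if k in selected:
                continue
            w = X[:, k].copy()
            for u in W:                          # orthogonalise against chosen columns
                w -= (u @ X[:, k]) / (u @ u) * u
            wtw = w @ w
            if wtw < 1e-12:
                continue
            g = (w @ y) / wtw                    # orthogonal least-squares coefficient
            score = g * g * wtw + beta * np.log(wtw)   # error reduction + D-optimality
            if score > best_score:
                best, best_score, best_w = k, score, w
        selected.append(best)
        W.append(best_w)
    return selected

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + rng.normal(0, 0.1, 100)
print(ofs_d_optimality(X, y, n_terms=2))         # expect columns 3 and 7
```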
Abstract:
We consider a fully complex-valued radial basis function (RBF) network for regression and classification applications. For regression problems, the locally regularised orthogonal least squares (LROLS) algorithm aided with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF models, is extended to the fully complex-valued RBF (CVRBF) network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selecting criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully CVRBF network. The proposed fully CVRBF network is also applied to four-class classification problems that are typically encountered in communication systems. A complex-valued orthogonal forward selection algorithm based on the multi-class Fisher ratio of class separability measure is derived for constructing sparse CVRBF classifiers that generalise well. The effectiveness of the proposed algorithm is demonstrated using the example of nonlinear beamforming for multiple-antenna aided communication systems that employ a complex-valued quadrature phase shift keying modulation scheme.
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize the model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristics (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new regularized orthogonal weighted least squares parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulae when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
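The sketch below illustrates, on synthetic data, the two ingredients emphasised above for imbalanced two-class problems: sample weights inversely proportional to class frequency in a weighted least-squares fit, and AUC rather than accuracy as the evaluation metric. The analytic LOO-AUC formula and the orthogonal forward selection are omitted, so this is not the ROWLS algorithm itself.

```python
# A minimal sketch (synthetic data): class-balancing sample weights in a weighted
# least-squares fit, evaluated with a rank-based AUC instead of accuracy.
import numpy as np

def auc(scores, labels):
    """Rank-based AUC (probability a positive outscores a negative)."""
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(4)
n_pos, n_neg = 30, 300                                   # imbalanced classes
X = np.vstack([rng.normal(1.0, 1.0, (n_pos, 2)), rng.normal(-1.0, 1.0, (n_neg, 2))])
y = np.r_[np.ones(n_pos), -np.ones(n_neg)]
w = np.where(y == 1, 1.0 / n_pos, 1.0 / n_neg)           # class-balancing sample weights

Xb = np.column_stack([X, np.ones(len(y))])               # add a bias column
Wsqrt = np.sqrt(w)[:, None]
theta, *_ = np.linalg.lstsq(Wsqrt * Xb, np.sqrt(w) * y, rcond=None)
print(f"AUC = {auc(Xb @ theta, (y == 1).astype(int)):.3f}")
```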
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
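A small illustration of the rule-weighting idea is sketched below: each fuzzy rule yields a diagonal weighting of its membership values over the data, the rule's subspace is the regression matrix scaled by those memberships, and rules are scored by an A-optimality style quantity, the trace of the inverse moment matrix. The Gaussian memberships and data are assumptions for illustration, not the paper's construction algorithm.

```python
# A minimal sketch (Gaussian memberships, synthetic data): each fuzzy rule's
# weighting matrix scales the regression matrix by its membership values, and
# rules are ranked by an A-optimality style score (smaller = more identifiable).
import numpy as np

def gaussian_membership(x, centre, width):
    return np.exp(-0.5 * ((x - centre) / width) ** 2)

rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 100)
X = np.column_stack([np.ones_like(x), x])        # T-S local linear regressors [1, x]

rule_centres = [-2.0, 0.0, 2.0]                  # hypothetical rule centres
for c in rule_centres:
    mu = gaussian_membership(x, c, width=1.0)    # memberships of this rule over the data
    Xw = mu[:, None] * X                         # rule subspace: weighted regression matrix
    M = Xw.T @ Xw                                # moment matrix of the rule subspace
    a_opt = np.trace(np.linalg.inv(M))           # A-optimality style identifiability score
    print(f"rule centred at {c:+.1f}: A-optimality cost = {a_opt:.3f}")
```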
Abstract:
A new robust neurofuzzy model construction algorithm is introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximized model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method is introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
Abstract:
The ARM Shortwave Spectrometer (SWS) measures zenith radiance at 418 wavelengths between 350 and 2170 nm. Because of its 1-sec sampling resolution, the SWS provides a unique capability to study the transition zone between cloudy and clear sky areas. A spectral invariant behavior is found between ratios of zenith radiance spectra during the transition from cloudy to cloud-free. This behavior suggests that the spectral signature of the transition zone is a linear mixture between the two extremes (definitely cloudy and definitely clear). The weighting function of the linear mixture is a wavelength-independent characteristic of the transition zone. It is shown that the transition zone spectrum is fully determined by this function and zenith radiance spectra of clear and cloudy regions. An important result of these discoveries is that high temporal resolution radiance measurements in the clear-to-cloud transition zone can be well approximated by lower temporal resolution measurements plus linear interpolation.
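The linear-mixture description above amounts to modelling a transition-zone spectrum as a single wavelength-independent weight applied between the cloudy and clear spectra, which can be recovered by least squares across wavelengths. The sketch below uses synthetic spectra purely for illustration.

```python
# A minimal sketch (synthetic spectra): a transition-zone zenith radiance spectrum
# is modelled as w * cloudy + (1 - w) * clear with a single wavelength-independent
# weight w, recovered here by least squares across all wavelengths.
import numpy as np

wavelengths = np.linspace(350, 2170, 418)                        # nm, as sampled by the SWS
clear  = 0.02 + 0.01 * np.exp(-(wavelengths - 450)**2 / 2e5)     # synthetic clear-sky spectrum
cloudy = 0.20 - 5e-5 * (wavelengths - 350)                       # synthetic cloudy spectrum

true_w = 0.35                                                    # hypothetical mixing weight
noise = np.random.default_rng(6).normal(0, 1e-3, 418)
observed = true_w * cloudy + (1 - true_w) * clear + noise

# Least-squares solution of (observed - clear) = w * (cloudy - clear).
d = cloudy - clear
w_hat = d @ (observed - clear) / (d @ d)
print(f"recovered weighting function value: {w_hat:.3f}")
```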
Abstract:
An updated analysis of observed stratospheric temperature variability and trends is presented on the basis of satellite, radiosonde, and lidar observations. Satellite data include measurements from the series of NOAA operational instruments, including the Microwave Sounding Unit covering 1979–2007 and the Stratospheric Sounding Unit (SSU) covering 1979–2005. Radiosonde results are compared for six different data sets, incorporating a variety of homogeneity adjustments to account for changes in instrumentation and observational practices. Temperature changes in the lower stratosphere show cooling of 0.5 K/decade over much of the globe for 1979–2007, with some differences in detail among the different radiosonde and satellite data sets. Substantially larger cooling trends are observed in the Antarctic lower stratosphere during spring and summer, in association with development of the Antarctic ozone hole. Trends in the lower stratosphere derived from radiosonde data are also analyzed for a longer record (back to 1958); trends for the presatellite era (1958–1978) have a large range among the different homogenized data sets, implying large trend uncertainties. Trends in the middle and upper stratosphere have been derived from updated SSU data, taking into account changes in the SSU weighting functions due to observed atmospheric CO2 increases. The results show mean cooling of 0.5–1.5 K/decade during 1979–2005, with the greatest cooling in the upper stratosphere near 40–50 km. Temperature anomalies throughout the stratosphere were relatively constant during the decade 1995–2005. Long records of lidar temperature measurements at a few locations show reasonable agreement with SSU trends, although sampling uncertainties are large in the localized lidar measurements. Updated estimates of the solar cycle influence on stratospheric temperatures show a statistically significant signal in the tropics (30N–S), with an amplitude (solar maximum minus solar minimum) of 0.5 K (lower stratosphere) to 1.0 K (upper stratosphere).
IQ in children with autism spectrum disorders: data from the Special Needs and Autism Project (SNAP)
Abstract:
Background Autism spectrum disorder (ASD) was once considered to be highly associated with intellectual disability and to show a characteristic IQ profile, with strengths in performance over verbal abilities and a distinctive pattern of ‘peaks’ and ‘troughs’ at the subtest level. However, there are few data from epidemiological studies. Method Comprehensive clinical assessments were conducted with 156 children aged 10–14 years [mean (s.d.)=11.7 (0.9)], seen as part of an epidemiological study (81 childhood autism, 75 other ASD). A sample weighting procedure enabled us to estimate characteristics of the total ASD population. Results Of the 75 children with ASD, 55% had an intellectual disability (IQ<70) but only 16% had moderate to severe intellectual disability (IQ<50); 28% had average intelligence (115>IQ>85) but only 3% were of above average intelligence (IQ>115). There was some evidence for a clinically significant Performance/Verbal IQ (PIQ/VIQ) discrepancy but discrepant verbal versus performance skills were not associated with a particular pattern of symptoms, as has been reported previously. There was mixed evidence of a characteristic subtest profile: whereas some previously reported patterns were supported (e.g. poor Comprehension), others were not (e.g. no ‘peak’ in Block Design). Adaptive skills were significantly lower than IQ and were associated with severity of early social impairment and also IQ. Conclusions In this epidemiological sample, ASD was less strongly associated with intellectual disability than traditionally held and there was only limited evidence of a distinctive IQ profile. Adaptive outcome was significantly impaired even for those children of average intelligence.
Abstract:
In financial decision-making processes, the weights adopted for the objective functions have a significant impact on the final decision outcome. However, conventional rating and weighting methods have difficulty deriving appropriate weights for complex decision-making problems with imprecise information. Entropy is a quantitative measure of uncertainty and has been useful in exploring the weights of attributes in decision making. A fuzzy and entropy-based mathematical approach is employed to solve the weighting problem of the objective functions in an overall cash-flow model. A multiproject undertaken by a medium-size construction firm in Hong Kong was used as a real case study to demonstrate the application of entropy weighting in multiproject cash-flow situations. The results indicate that the overall before-tax profit was HK$0.11 million lower after the introduction of appropriate weights. In addition, the best time to invest in new projects arising from positive cash flow was identified to be two working months earlier than under the unweighted system.
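The standard entropy weight method conveys the idea referred to above: normalise the decision matrix, compute each objective's entropy over the alternatives, and give larger weights to objectives with lower entropy (more discriminating information). The sketch below uses hypothetical objective scores and omits the fuzzy component, so it is illustrative rather than the paper's exact formulation.

```python
# A minimal sketch of the standard entropy weight method with hypothetical scores:
# objectives whose scores vary more across alternatives (lower entropy) receive
# larger weights.
import numpy as np

# Rows: candidate investment/scheduling alternatives; columns: objective functions
# (e.g. profit, liquidity, risk exposure) on comparable positive scales (hypothetical).
scores = np.array([[0.9, 0.4, 0.7],
                   [0.6, 0.8, 0.5],
                   [0.7, 0.5, 0.9],
                   [0.8, 0.6, 0.6]])

p = scores / scores.sum(axis=0)                  # column-normalised proportions
k = 1.0 / np.log(scores.shape[0])
entropy = -k * (p * np.log(p)).sum(axis=0)       # Shannon entropy of each objective
weights = (1 - entropy) / (1 - entropy).sum()    # higher weight = more informative objective
print(np.round(weights, 3))
```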