969 results for covariance estimator
Abstract:
Access to higher education has increased among students with disabilities, and universities are adopting different alternatives, which must be assessed. The purpose of this study was to identify the situation of a sample of students with disabilities (n=91) attending a university in Spain, through the design and validation of the “CUNIDIS-d” scale, which showed satisfactory psychometric properties. The results show the importance of making reasonable curriculum adaptations, adapting teacher training, improving accessibility and involving the whole university community. Several proposals are offered that support the social dimension of the European Higher Education Area (EHEA).
Abstract:
This paper analyses multivariate statistical techniques for identifying and isolating abnormal process behaviour. These techniques include contribution charts and variable reconstructions that relate to the application of principal component analysis (PCA). The analysis reveals, first, that contribution charts produce variable contributions which are linearly dependent and may lead to an incorrect diagnosis if the number of retained principal components is close to the number of recorded process variables. Second, it shows that variable reconstruction affects the geometry of the PCA decomposition. The paper further introduces an improved variable reconstruction method for identifying multiple sensor and process faults and for isolating their influence upon the recorded process variables. It is shown that this method can accommodate the effect of reconstruction, i.e. changes in the covariance matrix of the sensor readings, by correctly redefining the PCA-based monitoring statistics and their confidence limits.
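For orientation, here is a minimal Python sketch of the kind of conventional PCA-based monitoring and contribution analysis the abstract critiques; it implements only the standard squared prediction error (SPE) and its per-variable contributions, not the paper's improved reconstruction method, and all names are illustrative.

```python
import numpy as np

def fit_pca_monitor(X, n_components):
    """Fit a PCA monitoring model on normal operating data X (samples x variables)."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sigma
    # Eigendecomposition of the sample covariance matrix
    C = np.cov(Xs, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    P = eigvecs[:, order[:n_components]]          # retained loading vectors
    return mu, sigma, P

def spe_contributions(x, mu, sigma, P):
    """SPE statistic and per-variable contributions for one new sample."""
    xs = (x - mu) / sigma
    residual = xs - P @ (P.T @ xs)                # projection onto the residual subspace
    return residual @ residual, residual**2       # SPE, variable contributions
```

As the abstract notes, when the number of retained components approaches the number of variables, the residual subspace shrinks and these contributions become linearly dependent, which is exactly the failure mode the improved reconstruction method addresses.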
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model using an extended non-negative sparse coding (NNSC) algorithm that we proposed previously. This algorithm converges to feature basis vectors that exhibit locality and orientation in the spatial and frequency domains. Here, we demonstrate that the NIG density fits non-negative sparse data very well. In the denoising process, a NIG-based maximum a posteriori (MAP) estimator of an image corrupted by additive Gaussian noise reduces the noise successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is both efficient and effective in denoising. In addition, we compare the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter. The simulation results show that our method outperforms all three of these denoising approaches.
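To make the MAP shrinkage recipe concrete, here is a minimal Python sketch under a simpler Laplacian prior, which yields the familiar soft-threshold rule; the paper instead derives its shrinkage nonlinearity from the NIG prior, which is not reproduced here.

```python
import numpy as np

def soft_shrink(y, noise_std, prior_scale):
    """MAP estimate of sparse coefficients y observed in additive Gaussian noise.

    A Laplacian prior p(s) ~ exp(-|s| / prior_scale) turns the MAP problem
    into soft thresholding with threshold t = noise_std**2 / prior_scale.
    The NIG prior of the paper gives a different, self-adaptive nonlinearity."""
    t = noise_std**2 / prior_scale
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
```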
Abstract:
A problem with use of the geostatistical Kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation. This is because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models and the coefficients used in Kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models. Optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
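As an illustration of the per-sub-region step, the following Python sketch fits a spherical variogram model to an empirical variogram with scipy; the lag and semivariance values here are invented for demonstration only, not data from the Wiltshire study.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, sill, range_):
    """Spherical variogram model gamma(h): rises to the sill at h = range_."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / range_ - 0.5 * (h / range_) ** 3)
    return np.where(h < range_, g, sill)

# Hypothetical empirical variogram of one sub-region (lag distances, semivariances)
lags = np.array([10., 20., 30., 40., 60., 80., 100.])
gamma_hat = np.array([0.20, 0.45, 0.62, 0.71, 0.78, 0.80, 0.80])

(nugget, sill, range_), _ = curve_fit(spherical, lags, gamma_hat,
                                      p0=[0.1, 0.8, 60.0], maxfev=10000)
# The fitted coefficients then feed the Kriging error used to pick a sample spacing.
```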
Abstract:
This paper describes the application of multivariate regression techniques to the Tennessee Eastman benchmark process for modelling and fault detection. Two methods are applied: linear partial least squares (PLS), and a nonlinear variant of this procedure using a radial basis function (RBF) inner relation. The performance of the RBF networks is enhanced through the use of a recently developed training algorithm which uses quasi-Newton optimization to ensure an efficient and parsimonious network; details of this algorithm are given in the paper. The PLS and PLS/RBF methods are then used to create on-line inferential models of delayed process measurements. As these measurements relate to the final product composition, the models suggest that on-line statistical quality control analysis should be possible for this plant. The generation of 'soft sensors' for these measurements also introduces a redundant element into the system, redundancy which can then be used to generate a fault detection and isolation scheme for these sensors. This is achieved by arranging the sensors and models in a manner comparable to the dedicated estimator scheme of Clark et al. (1975), IEEE Trans. Aerosp. Electron. Syst., AES-11, 465-473. The effectiveness of this scheme is demonstrated on a series of simulated sensor and process faults, with full detection and isolation shown to be possible for sensor malfunctions, and detection feasible in the case of process faults. Suggestions for enhancing the diagnostic capacity in the latter case are covered towards the end of the paper.
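A minimal sketch of the linear PLS soft-sensor idea, using scikit-learn and synthetic placeholder data (nothing here comes from the Tennessee Eastman benchmark); the residual check in the final comment is the hook for the fault detection scheme described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Regress delayed quality measurements Y on readily available process
# variables X, then use the model as an on-line inferential sensor.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 12))                   # routine process measurements
Y_train = (X_train[:, :3] @ rng.normal(size=(3, 2))    # synthetic composition data
           + 0.1 * rng.normal(size=(500, 2)))

pls = PLSRegression(n_components=4)
pls.fit(X_train, Y_train)

x_new = rng.normal(size=(1, 12))
y_hat = pls.predict(x_new)                             # inferred composition
# A persistent discrepancy between the analyser reading and y_hat
# flags a possible sensor fault, as in the dedicated estimator scheme.
```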
Abstract:
BACKGROUND:
Researching psychotic disorders in unison rather than as separate diagnostic groups is widely advocated, but the viability of such an approach requires careful consideration from a neurocognitive perspective.
AIMS:
To describe cognition in people with bipolar disorder and schizophrenia and to examine how known causes of variability in individuals' performance contribute to any observed diagnostic differences.
METHOD:
Neurocognitive functioning in people with bipolar disorder (n = 32), schizophrenia (n = 46) and healthy controls (n = 67) was compared using analysis of covariance on data from the Northern Ireland First Episode Psychosis Study.
RESULTS:
The bipolar disorder and schizophrenia groups were most impaired on tests of memory, executive functioning and language. The bipolar group performed significantly better on tests of response inhibition, verbal fluency and callosal functioning. Between-group differences could be explained by the greater proclivity of individuals with schizophrenia to experience global cognitive impairment and negative symptoms.
CONCLUSIONS:
Particular impairments are common to people with psychosis and may prove useful as endophenotypic markers. Considering the degree of individuals' global cognitive impairment is critical when attempting to understand patterns of selective impairment both within and between these diagnostic groups.
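For readers unfamiliar with the method, here is a minimal Python sketch of an analysis of covariance of the kind used in such group comparisons, via statsmodels; the data frame, group labels and covariate below are hypothetical illustrations, not study data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical ANCOVA: compare a cognitive score across diagnostic groups
# while adjusting for a covariate such as age.
df = pd.DataFrame({
    "memory_score": [52, 47, 61, 58, 43, 66, 55, 49],
    "group": ["bipolar", "schizophrenia", "control", "control",
              "schizophrenia", "control", "bipolar", "schizophrenia"],
    "age": [31, 28, 35, 40, 25, 33, 37, 30],
})

model = smf.ols("memory_score ~ C(group) + age", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # ANCOVA table: group effect adjusted for age
```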
Abstract:
Objective: Both neurocognitive impairments and a history of childhood abuse are highly prevalent in patients with schizophrenia. Childhood trauma has been associated with memory impairment as well as hippocampal volume reduction in adult survivors. The aim of the following study was to examine the contribution of childhood adversity to verbal memory functioning in people with schizophrenia. Methods: Eighty-five outpatients with a Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) diagnosis of chronic schizophrenia were separated into 2 groups on the basis of self-reports of childhood trauma. Performance on measures of episodic narrative memory, list learning, and working memory was then compared using multivariate analysis of covariance. Results: Thirty-eight (45%) participants reported moderate to severe levels of childhood adversity, while 47 (55%) reported no or low levels of childhood adversity. After controlling for premorbid IQ and current depressive symptoms, the childhood trauma group had significantly poorer working memory and episodic narrative memory. However, list learning was similar between groups. Conclusion: Childhood trauma is an important variable that can contribute to specific ongoing memory impairments in schizophrenia.
Abstract:
A family of stochastic gradient algorithms and their behaviour in a data echo cancellation setting are presented. The cost-function adaptation algorithms use an error-exponent update strategy based on an absolute error mapping, which is updated at every iteration. The quadratic and non-quadratic cost functions are special cases of the new family. Several possible realisations are introduced using these approaches. The noisy-error problem is discussed and a digital recursive filter estimator is proposed. The simulation outcomes confirm the effectiveness of the proposed family of algorithms.
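A rough Python sketch of a stochastic gradient filter driven by an |e|^p cost with an iteration-varying exponent, to show how the quadratic (p = 2) and sign-error (p → 1) algorithms arise as special cases; the exponent-update mapping p_of_e below is a hypothetical placeholder, not the rule proposed in the paper.

```python
import numpy as np

def adaptive_exponent_lms(x_seq, d_seq, mu=0.01, n_taps=8, p_of_e=None):
    """Adaptive FIR filter minimising J = |e|**p with an iteration-varying p.

    The stochastic gradient of |e|**p gives the update
    w += mu * p * |e|**(p-1) * sign(e) * x."""
    if p_of_e is None:
        p_of_e = lambda e: 1.0 + np.exp(-abs(e))   # hypothetical exponent mapping
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x_seq)):
        x = x_seq[n - n_taps + 1:n + 1][::-1]      # regressor, most recent sample first
        e = d_seq[n] - w @ x                       # a priori error
        p = p_of_e(e)
        w += mu * p * abs(e) ** (p - 1) * np.sign(e) * x
    return w
```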
Abstract:
In this paper, we propose a novel linear transmit precoding strategy for multiple-input multiple-output (MIMO) systems employing improper signal constellations. In particular, improved zero-forcing (ZF) and minimum mean square error (MMSE) precoders are derived based on modified cost functions, and are shown to achieve superior performance without loss of spectral efficiency compared to the conventional linear and nonlinear precoders. The superiority of the proposed precoders over the conventional solutions is verified by both simulation and analytical results. The novel approach to precoding design is also applied to the case of an imperfect channel estimate with a known error covariance, as well as to the multi-user scenario where precoding based on the nullspace of the channel transmission matrix is employed to decouple multi-user channels. In both cases, the improved precoding schemes yield significant performance gains compared to their conventional counterparts.
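For reference, a Python sketch of the conventional linear ZF and MMSE precoders that serve as the baseline here; the paper's improved precoders, which modify these cost functions for improper constellations, are not reproduced.

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing precoder: right pseudo-inverse, so H @ F = I.

    Requires n_rx <= n_tx so that H @ H^H is invertible."""
    return H.conj().T @ np.linalg.inv(H @ H.conj().T)

def mmse_precoder(H, noise_var):
    """Conventional MMSE (regularised ZF) transmit precoder for channel H."""
    n_rx = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + noise_var * np.eye(n_rx))
```

In practice both matrices are rescaled to satisfy a transmit power constraint; the MMSE regularisation term trades residual interference against noise amplification.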
Abstract:
The study investigates how producer-specific environmental factors influence the performance of Irish credit unions. The empirical analysis uses a two-stage approach. The first stage measures efficiency by a data envelopment analysis (DEA) estimator, which explicitly incorporates the production of undesirable outputs such as bad loans in the modelling, and the second stage uses truncated regression to infer how various factors influence the (bias-corrected) estimated efficiency. A key finding of the analysis is that 68% of Irish credit unions do not incur an extra opportunity cost in meeting regulatory guidance on bad debt.
Abstract:
This study investigates a superposition-based cooperative transmission system. In this system, a key point is for the relay node to detect the data transmitted from the source node. This issue has received little attention in the existing literature, where the channel is usually assumed to be flat fading and a priori known. In practice, however, the channel is not only a priori unknown but also subject to frequency-selective fading. Channel estimation is thus necessary. Of particular interest is channel estimation at the relay node, which imposes extra demands on system resources. The authors propose a novel turbo least-squares channel estimator that exploits the superposition structure of the transmitted data. The proposed channel estimator not only requires no pilot symbols but also performs significantly better than the classic approach. The soft-in soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results verify the proposed algorithm.
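As a baseline for comparison, a minimal Python sketch of the classic least-squares estimate of an FIR channel from known symbols; the paper's turbo estimator iterates a similar step with soft symbol estimates in place of pilots, which is not shown here.

```python
import numpy as np
from scipy.linalg import toeplitz

def ls_channel_estimate(symbols, y, n_taps):
    """LS estimate of an FIR channel h from transmit symbols and received y.

    Builds the convolution (data) matrix X so that y = X @ h + noise,
    then solves the normal equations via lstsq."""
    first_col = symbols[n_taps - 1:]
    first_row = symbols[n_taps - 1::-1]
    X = toeplitz(first_col, first_row)            # rows: [s[n], s[n-1], ..., s[n-L+1]]
    h, *_ = np.linalg.lstsq(X, y[n_taps - 1:len(symbols)], rcond=None)
    return h
```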
Abstract:
We draw an explicit connection between the statistical properties of an entangled two-mode continuous-variable (CV) resource and the amount of entanglement that can be dynamically transferred to a pair of noninteracting two-level systems. More specifically, we rigorously reformulate the entanglement-transfer process using the covariance matrix formalism. When the resource state is Gaussian, our method makes the approach to the transfer of quantum correlations much more flexible than in previously considered schemes and allows the straightforward inclusion of the effects of noise affecting the CV system. Moreover, the proposed method reveals that the use of de-Gaussified two-mode states is almost never advantageous for transferring entanglement with respect to the full Gaussian picture, even though the entanglement in the non-Gaussian resource can be much larger than in its Gaussian counterpart. We can thus conclude that the entanglement-transfer map overthrows the …
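For context, the covariance matrix formalism mentioned above encodes a two-mode Gaussian state entirely in a 4×4 real matrix, which local symplectic operations reduce to the standard form

\[
\sigma = \begin{pmatrix} a & 0 & c_+ & 0 \\ 0 & a & 0 & c_- \\ c_+ & 0 & b & 0 \\ 0 & c_- & 0 & b \end{pmatrix},
\qquad
\sigma_{ij} = \tfrac{1}{2}\langle \{R_i, R_j\} \rangle - \langle R_i \rangle \langle R_j \rangle,
\]

where R = (x_1, p_1, x_2, p_2) is the vector of quadrature operators; for Gaussian resources the four parameters (a, b, c_+, c_-) fully determine the transferable entanglement.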
Abstract:
This paper investigates the center selection of multi-output radial basis function (RBF) networks, and a multi-output fast recursive algorithm (MFRA) is proposed. This method not only reveals the significance of each candidate center based on the reduction in the trace of the error covariance matrix, but can also estimate the network weights simultaneously using a back substitution approach. The main contribution is that the center selection procedure and the weight estimation are performed within a well-defined regression context, leading to significantly reduced computational complexity. The efficiency of the algorithm is confirmed by a computational complexity analysis, and simulation results demonstrate its effectiveness.
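A simplified Python sketch of greedy forward center selection with least-squares weight estimation, to show the selection criterion in its naive form; the paper's MFRA obtains the same kind of selection far more cheaply through recursive updates and back substitution, which this version deliberately omits.

```python
import numpy as np

def gaussian_design(X, centers, width):
    """Design matrix of Gaussian RBF responses, shape (n_samples, n_centers)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def select_centers(X, Y, n_centers, width=1.0):
    """Greedily pick centers (from the training points) that most reduce the
    residual sum of squares of a multi-output RBF network."""
    Phi_all = gaussian_design(X, X, width)        # candidate centers = training points
    chosen = []
    for _ in range(n_centers):
        best, best_err = None, np.inf
        for j in range(X.shape[0]):
            if j in chosen:
                continue
            Phi = Phi_all[:, chosen + [j]]
            W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
            err = ((Y - Phi @ W) ** 2).sum()
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
    W, *_ = np.linalg.lstsq(Phi_all[:, chosen], Y, rcond=None)
    return chosen, W
```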
Abstract:
This paper discusses the monitoring of complex nonlinear and time-varying processes. Kernel principal component analysis (KPCA) has gained significant attention as a monitoring tool for nonlinear systems in recent years, but relies on a fixed model that cannot be employed for time-varying systems. The contribution of this article is the development of a numerically efficient and memory-saving moving-window KPCA (MWKPCA) monitoring approach. The proposed technique incorporates an up- and downdating procedure to (i) adapt the data mean and covariance matrix in the feature space and (ii) approximate the eigenvalues and eigenvectors of the Gram matrix. The article shows that the proposed MWKPCA algorithm has a computational complexity of O(N²), whilst batch techniques, e.g. the Lanczos method, are of O(N³). Including the adaptation of the number of retained components and an l-step-ahead application of the MWKPCA monitoring model, the paper finally demonstrates the utility of the proposed technique using a simulated nonlinear time-varying system and recorded data from an industrial distillation column.
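To see what the up-/downdating avoids, here is a naive moving-window KPCA baseline in Python that simply refits the kernel model on every window, incurring roughly O(N³) per window; the kernel choice and parameters are arbitrary placeholders.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def naive_mwkpca(X, window=200, n_components=5, step=1):
    """Yield a freshly fitted KernelPCA model for each sliding data window.

    Each refit recomputes and decomposes the full Gram matrix; the paper's
    MWKPCA instead up-/downdates the existing eigendecomposition in O(N^2)."""
    for start in range(0, X.shape[0] - window + 1, step):
        Xw = X[start:start + window]
        kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=0.1)
        kpca.fit(Xw)
        yield start, kpca
```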
Abstract:
In this article, we extend the earlier work of Freeland and McCabe [Journal of Time Series Analysis (2004) Vol. 25, pp. 701–722] and develop a general framework for maximum likelihood (ML) analysis of higher-order integer-valued autoregressive processes. Our exposition includes the case where the innovation sequence has a Poisson distribution and the thinning is binomial. A recursive representation of the transition probability of the model is proposed. Based on this transition probability, we derive expressions for the score function and the Fisher information matrix, which form the basis for ML estimation and inference. Similar to the results in Freeland and McCabe (2004), we show that the score function and the Fisher information matrix can be neatly represented as conditional expectations. Using the INAR(2) specification with binomial thinning and Poisson innovations, we examine both the asymptotic efficiency and finite sample properties of the ML estimator in relation to the widely used conditional least squares (CLS) and Yule–Walker (YW) estimators. We conclude that, if the Poisson assumption can be justified, there are substantial gains to be had from using ML, especially when the thinning parameters are large.
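For reference, the model class in question: an INAR(2) process with binomial thinning and Poisson innovations can be written as

\[
X_t = \alpha_1 \circ X_{t-1} + \alpha_2 \circ X_{t-2} + \varepsilon_t,
\qquad
\alpha \circ X = \sum_{i=1}^{X} B_i,\;\; B_i \overset{iid}{\sim} \mathrm{Bernoulli}(\alpha),
\;\; \varepsilon_t \sim \mathrm{Poisson}(\lambda),
\]

so ML exploits the full count distribution implied by the thinning and innovation laws, whereas the CLS and YW estimators use only first- and second-moment information.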