54 results for Error analysis (Mathematics)


Relevance:

30.00%

Publisher:

Abstract:

Factor analysis, as a frequent technique for multivariate data inspection, is widely used also for compositional data analysis. The usual way is to apply a centered logratio (clr) transformation to obtain the random vector y of dimension D. The factor model is then

y = Λf + e    (1)

with the factors f of dimension k < D, the error term e, and the loadings matrix Λ. Using the usual model assumptions (see, e.g., Basilevsky, 1994), the factor analysis model (1) can be written as

Cov(y) = ΛΛ^T + ψ    (2)

where ψ = Cov(e) has diagonal form. The diagonal elements of ψ, as well as the loadings matrix Λ, are estimated from an estimate of Cov(y). Given observed clr-transformed data Y as realizations of the random vector y, outliers or deviations from the idealized model assumptions of factor analysis can severely affect the parameter estimation. As a way out, robust estimation of the covariance matrix of Y leads to robust estimates of Λ and ψ in (2), see Pison et al. (2003). Well-known robust covariance estimators with good statistical properties, like the MCD or the S-estimators (see, e.g., Maronna et al., 2006), rely on a full-rank data matrix Y, which is not the case for clr-transformed data (see, e.g., Aitchison, 1986). The isometric logratio (ilr) transformation (Egozcue et al., 2003) solves this singularity problem. The data matrix Y is transformed to a matrix Z by using an orthonormal basis of lower dimension. Using the ilr-transformed data, a robust covariance matrix C(Z) can be estimated. The result can be back-transformed to the clr space by

C(Y) = V C(Z) V^T

where the matrix V with orthonormal columns comes from the relation between the clr and the ilr transformation. Now the parameters in model (2) can be estimated (Basilevsky, 1994) and the results have a direct interpretation, since the links to the original variables are still preserved. The above procedure is applied to data from geochemistry. Our special interest is in comparing the results with those of Reimann et al. (2002) for the Kola project data.
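A minimal Python sketch of the pipeline sketched above (clr/ilr transform, robust MCD covariance, back-transformation to clr space, and a principal-factor style estimate of Λ and ψ). The basis construction, the use of scikit-learn's MinCovDet, and the eigendecomposition-based loadings are illustrative choices, not the authors' exact estimator.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def ilr_basis(D):
    """Orthonormal basis V (D x (D-1)) linking clr and ilr coordinates: clr = ilr @ V.T."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / np.sqrt(j * (j + 1))
        V[j, j - 1] = -np.sqrt(j / (j + 1))
    return V

def robust_clr_factors(X, k):
    """X: compositions (n x D, positive parts). Returns loadings Lambda and uniquenesses Psi."""
    logX = np.log(X)
    clr = logX - logX.mean(axis=1, keepdims=True)      # centered logratio coordinates
    V = ilr_basis(X.shape[1])
    Z = clr @ V                                        # ilr coordinates (full rank)
    C_Z = MinCovDet().fit(Z).covariance_               # robust covariance (MCD)
    C_Y = V @ C_Z @ V.T                                # back-transform to clr space
    # principal-factor style approximation of model (2): Cov(y) ~ Lambda Lambda^T + Psi
    vals, vecs = np.linalg.eigh(C_Y)
    idx = np.argsort(vals)[::-1][:k]
    Lambda = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
    Psi = np.clip(np.diag(C_Y) - (Lambda ** 2).sum(axis=1), 0, None)
    return Lambda, Psi
```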

Relevance:

30.00%

Publisher:

Abstract:

The low quality of education is a persistent problem in many developed countries. In parallel, over the last decades there has been a tendency towards decentralization in many developed and developing countries. Using micro data from the Programme for International Student Assessment (PISA) covering 22 countries, we test whether fiscal and political decentralization have an impact on student performance in mathematics, reading skills and science. We observe that fiscal decentralization exerts an unequivocal positive effect on students' outcomes in all areas, while the effect of political decentralization is more ambiguous. On the one hand, the capacity of subnational governments to rule their own region has a positive effect on students' performance in mathematics. On the other hand, the capacity to influence the country as a whole has a negative impact on mathematics achievement. As a general result, we observe that students' performance in mathematics is more sensitive to these exogenous variations than in science and reading skills.
Keywords: school outcomes, PISA, fiscal decentralization, political decentralization. JEL codes: H11, H77, I21.

Relevance:

30.00%

Publisher:

Abstract:

Two common methods of accounting for electric-field-induced perturbations to molecular vibration are analyzed and compared. The first method is based on a perturbation-theoretic treatment and the second on a finite-field treatment. The relationship between the two, which is not immediately apparent, is established by developing an algebraic formalism for the latter. Some of the higher-order terms in this development are documented here for the first time. As well as considering vibrational dipole polarizabilities and hyperpolarizabilities, we also make mention of the vibrational Stark effect.
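As a reminder of what the finite-field route involves, here is a minimal central-difference sketch in Python for a single Cartesian component; the `energy` callable and the field step `f` are hypothetical stand-ins for an electronic-structure (or vibrationally averaged) calculation.

```python
import numpy as np

def finite_field_polarizability(energy, f=1e-3):
    """Central-difference estimates of dipole and polarizability components from
    total energies at a few field strengths, using the expansion
    E(F) = E(0) - mu*F - (1/2)*alpha*F**2 - ...
    `energy(F)` returns the energy at uniform field strength F (assumed callable)."""
    e_m, e_0, e_p = energy(-f), energy(0.0), energy(+f)
    mu = -(e_p - e_m) / (2 * f)               # dipole component
    alpha = -(e_p - 2 * e_0 + e_m) / f**2     # polarizability component
    return mu, alpha
```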

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a technique to estimate and model patient-specific pulsatility of cerebral aneurysms over one cardiac cycle, using 3D rotational X-ray angiography (3DRA) acquisitions. Aneurysm pulsation is modeled as a time-varying spline tensor field representing the deformation applied to a reference volume image, thus producing the instantaneous morphology at each time point in the cardiac cycle. The estimated deformation is obtained by matching multiple simulated projections of the deforming volume to their corresponding original projections. A weighting scheme is introduced to account for the relevance of each original projection at the selected time point. The wide coverage of the projections, together with the weighting scheme, ensures motion consistency in all directions. The technique has been tested on digital and physical phantoms that are realistic and clinically relevant in terms of geometry, pulsation and imaging conditions. Results from the digital phantom experiments demonstrate that the proposed technique is able to recover subvoxel pulsation with an error lower than 10% of the maximum pulsation in most cases. The experiments with the physical phantom demonstrated the feasibility of pulsation estimation, as well as the identification of different pulsation regions, under clinical conditions.
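The fragment below sketches the weighted projection-matching idea only in outline; `project`, `deform`, the Gaussian weighting and all parameter names are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def weighted_matching_cost(deform_params, t, projections, angles, times,
                           reference_volume, project, deform, weight_sigma=0.1):
    """Cost for one cardiac phase t: compare simulated projections of the deformed
    reference volume with the original 3DRA projections, weighting each projection
    by how close its acquisition time is to t (hypothetical Gaussian weighting).
    `project(volume, angle)` and `deform(volume, params, t)` are placeholders for a
    projector and the spline tensor-field deformation described in the abstract."""
    volume_t = deform(reference_volume, deform_params, t)
    cost = 0.0
    for proj, angle, t_acq in zip(projections, angles, times):
        w = np.exp(-0.5 * ((t_acq - t) / weight_sigma) ** 2)   # relevance weight
        simulated = project(volume_t, angle)
        cost += w * np.sum((simulated - proj) ** 2)
    return cost
```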

Relevance:

30.00%

Publisher:

Abstract:

We study the minimum mean square error (MMSE) and the multiuser efficiency η of large dynamic multiple-access communication systems in which optimal multiuser detection is performed at the receiver while the number and the identities of active users are allowed to change at each transmission time. The system dynamics are ruled by a Markov model describing the evolution of the channel occupancy, and a large-system analysis is performed when the number of observations grows large. Starting from the equivalent scalar channel and the fixed-point equation tying the multiuser efficiency to the MMSE, we extend them to the case of a dynamic channel and derive lower and upper bounds for the MMSE (and, thus, for η as well) that hold in the limit of large signal-to-noise ratios and increasingly large observation time T.
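For orientation, the static (single-snapshot, equal-power) version of the fixed-point equation mentioned above is often written as follows, where η is the multiuser efficiency, β the system load, snr the per-user signal-to-noise ratio, and mmse(·) the scalar-channel MMSE of the input symbol; this is a sketch of the classical Tse–Hanly/Guo–Verdú relation, not the dynamic-channel extension derived in the paper.

```latex
% Static large-system fixed point (sketch; not the paper's dynamic extension)
\eta \;=\; \frac{1}{1 + \beta \,\mathrm{snr}\,\;\mathrm{mmse}\!\left(\eta\,\mathrm{snr}\right)}
```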

Relevance:

30.00%

Publisher:

Abstract:

The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data from human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions of interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for the statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.
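For context, the standard ML-EM update used in emission tomography is sketched below in Python; the sketch assumes a dense system (transition) matrix and leaves the abstract's robust stopping rule as an unspecified callable.

```python
import numpy as np

def mlem(A, y, n_iter=50, stop=None):
    """Plain ML-EM reconstruction sketch for emission tomography.
    A : (n_bins, n_voxels) system/transition matrix; y : measured sinogram counts.
    `stop` is an optional callable standing in for a stopping rule; iteration
    halts early when it returns True."""
    x = np.ones(A.shape[1])              # uniform initial image
    sens = A.sum(axis=0)                 # sensitivity image, sum_i a_ij
    for it in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)   # multiplicative EM update
        if stop is not None and stop(x, it):
            break
    return x
```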

Relevance:

30.00%

Publisher:

Abstract:

Ground clutter caused by anomalous propagation (anaprop) can seriously affect radar rain-rate estimates, particularly in fully automatic radar processing systems, and, if not filtered, can produce frequent false alarms. A statistical study of anomalous propagation detected by two operational C-band radars in the northern Italian region of Emilia Romagna is discussed, paying particular attention to its diurnal and seasonal variability. The analysis shows a high incidence of anaprop in summer, mainly in the morning and evening, due to the humid and hot summer climate of the Po Valley, particularly in the coastal zone. A comparison between different techniques and datasets for retrieving the vertical profile of the refractive index gradient in the boundary layer is then presented; in particular, their capability to detect anomalous propagation conditions is compared. Furthermore, beam path trajectories are simulated using a multilayer ray-tracing model, and the influence of the propagation conditions on the beam trajectory and shape is examined. High-resolution radiosounding data are identified as the best available dataset to reproduce accurately the local propagation conditions, while lower-resolution standard TEMP data suffer from interpolation degradation, and Numerical Weather Prediction model data (Lokal Model) are able to capture a tendency towards superrefraction but not to detect ducting conditions. By ray tracing the centre and the lower and upper limits of the radar antenna's 3-dB half-power main beam lobe, it is concluded that ducting layers produce a change in the measured volume and in the power distribution that can lead to an additional error in the reflectivity estimate and, subsequently, in the estimated rainfall rate.
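A small Python sketch of how a vertical refractivity profile can be classified into propagation regimes; it uses commonly quoted rule-of-thumb thresholds (roughly 0, -79 and -157 N-units per km) rather than the specific criteria or datasets used in the study.

```python
import numpy as np

def propagation_class(N, h):
    """Classify propagation from a vertical refractivity profile.
    N : refractivity (N-units); h : heights (km), e.g. from radiosounding data.
    Rule-of-thumb thresholds on dN/dh (N-units/km): > 0 subrefraction,
    0 to -79 normal, -79 to -157 superrefraction, below -157 ducting."""
    dNdh = np.gradient(N, h)                       # vertical refractivity gradient
    labels = np.full(dNdh.shape, "normal", dtype=object)
    labels[dNdh > 0] = "subrefraction"
    labels[dNdh < -79] = "superrefraction"
    labels[dNdh < -157] = "ducting"
    return dNdh, labels
```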

Relevance:

30.00%

Publisher:

Abstract:

Multiexponential decays may contain time constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling at the final part of the transient. Here, we analyze a nonlinear time-scale transformation to reduce the total number of samples with minimum signal distortion, achieving an important reduction in the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
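One simple instance of a nonlinear time-scale transformation is logarithmic resampling of the transient, sketched below in Python; the optimized time-varying filter proposed in the abstract is not reproduced here, and the naive interpolation used provides no anti-alias filtering.

```python
import numpy as np

def log_resample(t, y, n_out=256):
    """Resample a uniformly sampled multiexponential transient onto a logarithmic
    time grid, reducing the oversampled tail to a handful of points.
    t : increasing sample times; y : samples; n_out : output record length."""
    t_pos = t[t > 0]
    t_new = np.logspace(np.log10(t_pos[0]), np.log10(t_pos[-1]), n_out)
    y_new = np.interp(t_new, t, y)     # naive interpolation; no anti-alias filtering
    return t_new, y_new
```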

Relevance:

30.00%

Publisher:

Abstract:

We present a heuristic method for learning error-correcting output code (ECOC) matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, the optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The creation of the hierarchical partition set is performed using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
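To make the tree-to-matrix step concrete, here is a Python sketch that turns a binary tree over the class set into a compact ECOC coding matrix with one column per internal node; the tree structure itself (i.e., the discriminative partition criterion) is assumed to have been chosen beforehand and is not part of this sketch.

```python
import numpy as np

def tree_ecoc_matrix(tree):
    """Build an ECOC coding matrix from a binary tree over the class set.
    `tree` is a nested tuple, e.g. ((0, 1), (2, (3, 4))): each internal node
    defines one column (dichotomy) with the classes of its left subtree coded +1,
    those of its right subtree coded -1, and classes outside the node coded 0."""
    def leaves(node):
        return [node] if not isinstance(node, tuple) else leaves(node[0]) + leaves(node[1])

    classes = sorted(leaves(tree))
    columns = []

    def visit(node):
        if not isinstance(node, tuple):
            return
        left, right = node
        col = np.zeros(len(classes))
        col[[classes.index(c) for c in leaves(left)]] = +1
        col[[classes.index(c) for c in leaves(right)]] = -1
        columns.append(col)
        visit(left)
        visit(right)

    visit(tree)
    return np.column_stack(columns)   # rows: classes, columns: binary problems
```

For N classes the tree yields N - 1 dichotomies, which is what makes the resulting matrix compact.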

Relevance:

30.00%

Publisher:

Abstract:

" Has comes un error" . " Estas en un error" . " És un error votar aquest parti!" . " És un error votar" . " És un error afirmar que 2 + 3 = 9" . " És un error afirmar que és un error afirmar que 2 + 3 = 5" . " És un error afirmar que, quan dividim, sempre obtenim un nombre més petit" . " És un error que l'existencia precedeixi l'essencia" . " És un error que vulguis enganyar-me" . " És un error afirmar que a = a" ... i així fins a acomplir les il'limitades possibilitats del llenguatge. Qualsevol judici, en la mesura que té un significat, en la mesura que és assertori, és susceptible de ser erroni, de ser fals. Peró, l'error té sempre la mateixa qualitat? Us hem proposat un reguitzell d'exemples. És obvi (si excloem la mentida, que no és error, sinó mentida) que el significat d'" error" (o el seu valor) no és identic en tots els casos.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to adapt and assess the psychometric properties of the Spanish version of the sMARS in terms of evidence of validity and reliability of scores. The sMARS was administered to 342 students and, in order to assess convergent and discriminant validity, several subsamples completed a series of related tests. The factorial structure of the sMARS was analyzed by means of a confirmatory factor analysis, and the results showed that the three-factor structure reported for the original test fits the data well. Thus, three dimensions were established in the test: math test, numerical task and math course anxiety. The results of this study provide sound evidence of the good psychometric properties of the scores of the Spanish version of the sMARS: strong internal consistency, high 7-week test-retest reliability and good convergent/discriminant validity were evident. Overall, this study provides an instrument that allows us to obtain valid and reliable math anxiety measurements. This instrument may be a useful tool for educators and psychologists interested in identifying individuals who may have a low level of math mastery because of their anxiety.
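Purely as an illustration of the internal-consistency index mentioned above, a standard Cronbach's alpha computation is sketched below; the item-matrix layout is an assumption and the sketch is not tied to the sMARS data or to the authors' analysis.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency estimate (Cronbach's alpha) for an (n_subjects, n_items)
    matrix of item scores: alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```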

Relevance:

30.00%

Publisher:

Abstract:

Methods for extracting features from physiological datasets are a growing need as clinical investigations of Alzheimer's disease (AD) in large and heterogeneous populations increase. General tools allowing diagnosis regardless of the recording site, such as different hospitals, are essential and, if combined with inexpensive non-invasive methods, could critically improve mass screening of subjects with AD. In this study, we applied three state-of-the-art multiway array decomposition (MAD) methods to extract features from electroencephalograms (EEGs) of AD patients obtained from multiple sites. For comparison with MAD, spectral-spatial average filters (SSFs) of control and AD subjects were used, as well as a common blind source separation method, the algorithm for multiple unknown signal extraction (AMUSE). We trained a feed-forward multilayer perceptron (MLP) to validate and optimize AD classification from two independent databases. Using a third EEG dataset, we demonstrated that features extracted with MAD outperformed features obtained from SSFs and AMUSE in terms of root mean squared error (RMSE), reaching up to 100% accuracy in the test condition. We propose that MAD may be a useful tool to extract features for AD diagnosis, offering great generalization across multi-site databases and opening the door to the discovery of new characterizations of the disease.
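A rough Python sketch of the multiway-decomposition-plus-MLP workflow; it uses a simple sequentially truncated HOSVD as a stand-in for the MAD methods studied in the paper, and the tensor layout, ranks and classifier settings are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def hosvd_features(tensor, ranks):
    """Truncated HOSVD feature extractor standing in for a multiway array
    decomposition. `tensor` is an EEG array, e.g. (channels, frequencies, windows);
    `ranks` gives the per-mode ranks. Returns the core-tensor entries as features."""
    core = tensor
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        core = np.moveaxis(
            np.tensordot(U[:, :r].T, np.moveaxis(core, mode, 0), axes=(1, 0)), 0, mode)
    return core.ravel()

# Hypothetical usage: X_tensors is a list of per-subject EEG tensors, y the labels.
# X = np.array([hosvd_features(t, ranks=(4, 4, 4)) for t in X_tensors])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
```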

Relevance:

30.00%

Publisher:

Abstract:

Markets, in the real world, are not efficient zero-sum games in which the hypotheses of the CAPM are fulfilled. It is therefore easy to conclude that the market portfolio is not located on Markowitz's efficient frontier, and that passive investments (and indexing) are not optimal but biased. In this paper, we define and analyze the biases suffered by passive investors: the sample, construction, efficiency and active biases, as well as tracking error, are presented. We propose Minimum Risk Indices (MRI) as an alternative to deal with market index biases and to provide investors with portfolios closer to the efficient frontier, that is, more optimal investment possibilities. MRI (using a Parametric Value-at-Risk Minimization approach) are calculated for three stock markets, achieving interesting results. Our indices are less risky and more profitable than current market indices in the Argentinean and Spanish markets, thus confronting the Efficient Market Hypothesis. Two innovations must be outlined: an error dimension has been included in the backtesting, and the Sharpe ratio has been used to select the "best" MRI.
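A minimal Python sketch of the parametric Value-at-Risk minimization idea behind such an index: Gaussian VaR of the index return is minimized over long-only, fully invested weights. The constraints, the Gaussian assumption and the function names are illustrative choices, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def min_var_index_weights(mu, cov, alpha=0.95):
    """Weights minimizing parametric (Gaussian) VaR, z_alpha*sigma_p - mu_p,
    subject to long-only, fully invested constraints."""
    z = norm.ppf(alpha)
    n = len(mu)

    def var(w):
        return z * np.sqrt(w @ cov @ w) - w @ mu   # parametric VaR of portfolio return

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(var, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
    return res.x
```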

Relevance:

30.00%

Publisher:

Abstract:

The stop-loss reinsurance is one of the most important reinsurance contracts in the insurance market. From the insurer's point of view, it presents an interesting property: it is optimal if the criterion used is minimizing the variance of the insurer's cost. The aim of this paper is to contribute to the analysis of the stop-loss contract in one period from the point of view of both the insurer and the reinsurer. Firstly, the influence of the parameters of the reinsurance contract on the correlation coefficient between the cost of the insurer and the cost of the reinsurer is studied. Secondly, the optimal stop-loss contract is obtained when the criterion used is the maximization of the joint survival probability of the insurer and the reinsurer in one period.
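For concreteness, the basic stop-loss split of the aggregate claims and the correlation between the two resulting costs can be sketched as below; the lognormal aggregate-claims model and the retention choice in the usage line are purely hypothetical.

```python
import numpy as np

def stop_loss_split(x, m):
    """Split aggregate claims X under a stop-loss treaty with retention m:
    the insurer pays min(X, m), the reinsurer pays (X - m)+. Returns both cost
    vectors and their sample correlation coefficient."""
    insurer = np.minimum(x, m)
    reinsurer = np.maximum(x - m, 0.0)
    corr = np.corrcoef(insurer, reinsurer)[0, 1]
    return insurer, reinsurer, corr

# Example with a hypothetical lognormal aggregate-claims model and a 90% retention
x = np.random.default_rng(0).lognormal(mean=1.0, sigma=0.8, size=100_000)
_, _, rho = stop_loss_split(x, m=np.quantile(x, 0.9))
```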

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a probabilistic approach to model the problem of power supply voltage fluctuations. Error probability calculations are shown for some 90-nm technology digital circuits. The analysis considered here gives the timing violation error probability as a new design quality factor, in contrast to conventional techniques that assume the full perfection of the circuit. The evaluation of the error bound can be useful for new design paradigms where retry and self-recovering techniques are applied to the design of high-performance processors. The method described here allows the performance of these techniques to be evaluated by calculating the expected error probability in terms of power supply distribution quality.
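A toy Python sketch of the underlying idea: if path delay is assumed to vary to first order with the supply voltage and the supply noise is modelled as Gaussian, the timing violation probability is the tail probability of the delay exceeding the clock period. The linear delay model, the Gaussian noise assumption and all parameter names are assumptions for illustration only.

```python
from scipy.stats import norm

def timing_violation_probability(t_clk, delay_nominal, delay_vdd_sens, vdd_sigma):
    """P(path delay > clock period) under a first-order delay/Vdd model with
    Gaussian supply noise of standard deviation vdd_sigma.
    delay_vdd_sens : |d(delay)/d(Vdd)| sensitivity (s per V), assumed constant."""
    delay_sigma = abs(delay_vdd_sens) * vdd_sigma
    return norm.sf(t_clk, loc=delay_nominal, scale=delay_sigma)

# e.g. a 0.45 ns nominal path against a 0.5 ns clock under 20 mV supply noise
p_err = timing_violation_probability(0.5e-9, 0.45e-9, delay_vdd_sens=2e-9,
                                     vdd_sigma=0.02)
```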