14 results for Statistical Method
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
A statistical method for classifying sags according to their origin, downstream or upstream of the recording point, is proposed in this work. The goal is to obtain a statistical model, based on the sag waveforms, that characterises one type of sag and discriminates it from the other type. This model is built on the basis of multi-way principal component analysis and is later used to project the available registers into a new space of lower dimension. Thus, a case base of diagnosed sags is built in the projection space. Classification is then done by comparing new sags against those existing in the case base. Similarity is defined in the projection space using a combination of distances to recover the nearest neighbours of the new sag. Finally, the method assigns the origin of the new sag according to the origin of its neighbours.
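A minimal sketch of the projection-plus-nearest-neighbour idea described above, assuming the sag waveforms are available as fixed-length rows of a matrix; standard PCA and a k-nearest-neighbour classifier stand in for the multi-way PCA and the combined-distance similarity of the paper, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def build_case_base(waveforms, origins, n_components=5):
    """Project diagnosed sag records (rows) into a lower-dimensional space."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(waveforms)                 # case base in projection space
    knn = KNeighborsClassifier(n_neighbors=3).fit(scores, origins)
    return pca, knn

def classify_new_sag(pca, knn, new_waveform):
    """Assign 'upstream' or 'downstream' from the nearest diagnosed neighbours."""
    return knn.predict(pca.transform(new_waveform.reshape(1, -1)))[0]
```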
Abstract:
One hundred and eighty-nine patients with primary biliary cirrhosis were entered into a double-blind, placebo-controlled randomised trial, starting in January 1978, to assess the therapeutic value of d-penicillamine 1200 mg daily. Eighteen of the 98 patients receiving d-penicillamine and 22 of the 91 placebo-treated patients died during the study. Thirty-six per cent of those on d-penicillamine and 8% of those on placebo were withdrawn from the study. No difference in overall survival was noted between the two groups of patients, whether the results were analysed for the entire period of observation or only for the period in which the patients were receiving therapy. The mortality rate of those receiving d-penicillamine in histological stage I to II, however, was one third of that of the placebo group, although this difference did not reach statistical significance. Using the occurrence rate ratio as the statistical method of analysis, no effect of d-penicillamine was noted on any clinical, biochemical or histological variable.
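For illustration only, a hedged sketch of an occurrence rate ratio with a large-sample 95% confidence interval, the kind of quantity named above as the statistical method of analysis; the event counts and person-years below are hypothetical, not the trial's data.

```python
import math

def rate_ratio(events_a, pyears_a, events_b, pyears_b):
    """Occurrence rate ratio (group A vs group B) with a large-sample 95% CI."""
    rr = (events_a / pyears_a) / (events_b / pyears_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)    # SE of the log rate ratio
    return rr, (rr * math.exp(-1.96 * se_log), rr * math.exp(1.96 * se_log))

print(rate_ratio(18, 300.0, 22, 280.0))                # hypothetical numbers
```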
Abstract:
Background: Recent advances in high-throughput technologies have produced a vast number of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has impelled the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, knowledge-based potentials and combinations of both. Results: Here, we present and demonstrate a theory to split knowledge-based potentials into biologically meaningful scoring terms and to combine them into new scores to predict near-native structures. Our strategy allows circumventing the problem of defining the reference state. In this approach we give the proof for a simple and linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score, we obtained predictions similar to state-of-the-art methods. Moreover, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores have detected near-native structures as accurately as state-of-the-art methods and have been successful in identifying wrongly modelled regions of many near-native conformations.
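A sketch of the Z-score splitting-and-combination idea, under the assumption that each alternative model has a vector of knowledge-based energy terms; term names, weights and the sign convention (lower composite score = closer to native) are illustrative, not the paper's exact definitions.

```python
import numpy as np

def zscores(term_energies):
    """Z-score each energy term across a set of alternative models (rows = models)."""
    mu = term_energies.mean(axis=0)
    sd = term_energies.std(axis=0, ddof=1)
    return (term_energies - mu) / sd

def composite_zscore(term_energies, weights=None):
    """Linear combination of per-term Z-scores used to rank candidate models."""
    z = zscores(term_energies)
    w = np.ones(z.shape[1]) if weights is None else np.asarray(weights)
    return z @ w

# Usage: pick the model with the lowest composite score as the near-native candidate.
```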
Abstract:
A new statistical parallax method using the Maximum Likelihood principle is presented, allowing the simultaneous determination of a luminosity calibration, kinematic characteristics and spatial distribution of a given sample. This method has been developed for the exploitation of the Hipparcos data and presents several improvements over previous ones: the effects of sample selection, observational errors, galactic rotation and interstellar absorption are taken into account as an intrinsic part of the formulation (as opposed to external corrections). Furthermore, the method is able to identify and characterize physically distinct groups in inhomogeneous samples, thus avoiding biases due to unidentified components. Moreover, the implementation used by the authors relies extensively on numerical methods, avoiding the need to simplify the equations and thus the bias such simplifications could introduce. Several examples of application using simulated samples are presented, to be followed by applications to real samples in forthcoming articles.
Abstract:
The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data from human brain fluorodeoxyglucose (FDG) studies, carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions-of-interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.
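A minimal sketch of the ROI bias-and-variance assessment, assuming repeated reconstructions of the same simulated phantom are available as a stack of images; this is the generic computation, not the components-of-variance model used in the study.

```python
import numpy as np

def roi_bias_variance(reconstructions, roi_mask, true_uptake):
    """reconstructions: (n_replicates, ny, nx); roi_mask: boolean image; true_uptake: scalar."""
    roi_means = np.array([img[roi_mask].mean() for img in reconstructions])
    bias = roi_means.mean() - true_uptake
    variance = roi_means.var(ddof=1)
    return bias, variance
```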
Abstract:
In the scope of the European project Hydroptimet (INTERREG IIIB-MEDOCC programme), a limited area model (LAM) intercomparison of intense events that caused extensive damage to people and the territory is performed. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors that help produce a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For the statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by the use of a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method allows quantification of the spatial shift of the forecast error and identification of the error sources that affected each model forecast. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work, including verification using a wider observational data set, is needed to support this statement.
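A short sketch of the non-parametric contingency-table skill scores mentioned above (probability of detection, false alarm ratio and equitable threat score for a given rainfall threshold); the counts in the usage line are made up.

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table scores for a dichotomous rainfall forecast."""
    n = hits + false_alarms + misses + correct_negatives
    pod = hits / (hits + misses)                        # probability of detection
    far = false_alarms / (hits + false_alarms)          # false alarm ratio
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "ETS": ets}

print(skill_scores(hits=30, false_alarms=10, misses=15, correct_negatives=145))  # illustrative counts
```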
Abstract:
Trees are a great bank of data, sometimes named for this reason the "silent witnesses" of the past. Due to the annual formation of rings, which is normally influenced directly by climate parameters (generally changes in temperature and moisture or precipitation) and other environmental factors, changes that occurred in the past are "written" in the tree "archives" and can be "decoded" in order to interpret what had happened before, mainly for past climate reconstruction. Using dendrochronological methods to obtain samples of Pinus nigra from the Catalonian Pre-Pyrenees region, the cores of 15 trees with a total time span of about 100-250 years were analysed for their tree ring width (TRW) patterns; the series showed quite high correlations between them (0.71-0.84), corresponding to a common response of their annual growth to environmental changes. After different trials with the raw TRW data for standardization, in order to remove the negative exponential growth-curve dependency, the best method, double detrending (power transformation and a 32-year smoothing line), was selected to obtain the indexes for further analysis. Analysing the cross-correlations between the obtained tree ring width indexes and climate data, significant correlations (p<0.05) were observed at some lags; for example, annual precipitation at lag -1 (previous year) had a negative correlation with TRW growth in the Pallars region. Significant correlation coefficients are between 0.27 and 0.51 (with positive or negative signs) for many cases; for the recent (but very short period) climate data of the Seu d'Urgell meteorological station, some significant correlation coefficients of the order of 0.9 were observed. These results confirm the hypothesis of using dendrochronological data as a climate signal for further analysis, such as reconstruction of past climate or prediction of future climate for the same locality.
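A small sketch of the lagged cross-correlation between a detrended tree ring width index and a climate series; the lag convention (lag -1 meaning the previous year's climate) is an assumption made for illustration, and the detrending step is not shown.

```python
import numpy as np

def lagged_corr(trw_index, climate, lag):
    """Pearson correlation of the TRW index at year t with climate at year t + lag."""
    if lag < 0:
        x, y = trw_index[-lag:], climate[:lag]
    elif lag > 0:
        x, y = trw_index[:-lag], climate[lag:]
    else:
        x, y = trw_index, climate
    return np.corrcoef(x, y)[0, 1]

# Example: lagged_corr(trw, annual_precipitation, lag=-1) tests the previous-year effect.
```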
Abstract:
We introduce a new parameter to investigate replica symmetry breaking transitions using finite-size scaling methods. Based on exact equalities initially derived by F. Guerra, this parameter is a direct check of the self-averaging character of the spin-glass order parameter. The new parameter can be used to study models with time-reversal symmetry, but its greatest interest lies in models where this symmetry is absent. We apply the method to long-range and short-range Ising spin-glasses, with and without a magnetic field, as well as to short-range multispin-interaction spin-glasses.
Abstract:
Surveys of economic climate collect the opinions of managers about the short-term future evolution of their business. Interviews are carried out on a regular basis and responses measure optimistic, neutral or pessimistic views about the economic perspectives. We propose a method to evaluate the sampling error of the average opinion derived from this particular type of survey data. Our variance estimate is useful to interpret historical trends and to decide whether changes in the index from one period to another are due to a structural change or whether ups and downs can be attributed to sampling randomness. An illustration using real data from a survey of business managers' opinions is discussed.
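A minimal sketch of the balance statistic and a simple-random-sampling standard error for survey responses coded +1 (optimistic), 0 (neutral) and -1 (pessimistic); this generic estimator stands in for the variance estimate developed in the paper.

```python
import numpy as np

def balance_and_se(responses):
    """responses: array of -1/0/+1 codes; returns average opinion and its standard error."""
    x = np.asarray(responses, dtype=float)
    balance = x.mean()                       # share of optimists minus share of pessimists
    se = x.std(ddof=1) / np.sqrt(x.size)     # simple random-sampling standard error
    return balance, se

# A period-to-period change much larger than 2 * sqrt(se1**2 + se2**2) is hard to
# attribute to sampling randomness alone.
```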
Abstract:
A new aggregation method for decision making is presented, using induced aggregation operators and the index of maximum and minimum level. Its main advantage is that it can assess complex reordering processes in the aggregation that represent complex attitudinal characters of the decision maker, such as psychological or personal factors. A wide range of properties and particular cases of this new approach are studied. A further generalization using hybrid averages and immediate weights is also presented. The key advantage of this approach over the previous model is that the weighted average and the ordered weighted average can be used in the same formulation. Thus, we are able to consider both the subjective attitude and the degree of optimism of the decision maker in the decision process. The paper ends with an application to a decision making problem based on the use of assignment theory.
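A compact sketch of an induced ordered weighted average (IOWA), the basic building block of the induced aggregation operators mentioned above; the inducing values and weights in the example are illustrative.

```python
import numpy as np

def iowa(values, inducing, weights):
    """Weights are applied to the values reordered by decreasing order-inducing variable."""
    order = np.argsort(inducing)[::-1]
    return float(np.dot(weights, np.asarray(values)[order]))

# Example with made-up attribute scores, inducing values and a weighting vector:
print(iowa(values=[0.6, 0.9, 0.3], inducing=[2, 5, 1], weights=[0.5, 0.3, 0.2]))
```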
Abstract:
This paper examines the statistical analysis of social reciprocity, that is, the balance between addressing and receiving behaviour in social interactions. Specifically, it focuses on the measurement of social reciprocity by means of directionality and skew-symmetry statistics at different levels. Two statistics have been used as overall measures of social reciprocity at the group level: the directional consistency and the skew-symmetry statistics. Furthermore, the skew-symmetry statistic allows social researchers to obtain complementary information at the dyadic and individual levels. However, having computed these measures, social researchers may be interested in testing statistical hypotheses regarding social reciprocity. For this reason, a statistical procedure based on Monte Carlo sampling has been developed, in order to allow social researchers to describe groups and make statistical decisions.
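A sketch of the group-level directional consistency statistic and a Monte Carlo p-value; the null model used here (each dyad's total split binomially with p = 0.5) is an illustrative choice and not necessarily the exact resampling scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def directional_consistency(X):
    """X[i, j] = number of behaviours addressed by i to j (integer counts, zero diagonal)."""
    upper, lower = np.triu(X, 1), np.tril(X, -1).T
    hi = np.maximum(upper, lower).sum()
    lo = np.minimum(upper, lower).sum()
    return (hi - lo) / (hi + lo)

def monte_carlo_p(X, n_sims=5000):
    """P(DC >= observed) when each dyad's total is split at random between its members."""
    obs = directional_consistency(X)
    n = X.shape[0]
    sims = np.empty(n_sims)
    for s in range(n_sims):
        Y = np.zeros_like(X)
        for i in range(n):
            for j in range(i + 1, n):
                Y[i, j] = rng.binomial(X[i, j] + X[j, i], 0.5)
                Y[j, i] = X[i, j] + X[j, i] - Y[i, j]
        sims[s] = directional_consistency(Y)
    return (sims >= obs).mean()
```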
Abstract:
EEG recordings are usually corrupted by spurious extra-cerebral artifacts, which should be rejected or cleaned up by the practitioner. Since manual screening of human EEGs is inherently error-prone and might induce experimental bias, automatic artifact detection is an issue of importance. Automatic artifact detection is the best guarantee for objective and clean results. We present a new approach, based on the time–frequency shape of muscular artifacts, to achieve reliable and automatic scoring. This methodology makes it possible to evaluate the impact of muscular activity on the signal, placing emphasis on the analysis of EEG activity. The method is used to discriminate evoked potentials from several types of recorded muscular artifacts, with a sensitivity of 98.8% and a specificity of 92.2%. Automatic cleaning of EEG data is then successfully realized using this method, combined with independent component analysis. The outcome of the automatic cleaning is then compared with the Slepian multitaper spectrum-based technique introduced by Delorme et al (2007 Neuroimage 34 1443–9).
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q = 1/2 case. We show that, when the residual principle is considered as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution thus devised is endowed with a component that corresponds to the well-known regularized solution of Tikhonov (1977).
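A minimal sketch of the classical Tikhonov-regularized least-squares solution, the solution the q = 1/2 distribution above is shown to contain as a component; the matrix names and regularization parameter are illustrative.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```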
Abstract:
A statistical indentation method has been employed to study the hardness of fire-refined high-conductivity copper, using the nanoindentation technique. The Joslin and Oliver approach was used with the aim of separating the hardness (H) contribution of the copper matrix from that of inclusions and grain boundaries. This approach relies on a large array of imprints (around 400 indentations), performed at 150 nm of indentation depth. A statistical study using a cumulative distribution function fit and simulated Gaussian distributions shows that H for each phase can be extracted when the indentation depth is much lower than the size of the secondary phases. It is found that the thermal treatment produces a hardness increase, due to the partial re-dissolution of the inclusions (mainly Pb and Sn) in the matrix.
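A hedged sketch of the statistical-indentation idea: fit a small Gaussian mixture to the hardness values of a large indentation array and read off one hardness per population; the two-phase assumption and the use of scikit-learn's GaussianMixture are illustrative choices, not the cumulative-distribution fit of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def phase_hardness(hardness_values, n_phases=2):
    """hardness_values: per-indent hardness from a large grid of imprints (e.g. ~400)."""
    gmm = GaussianMixture(n_components=n_phases, random_state=0)
    gmm.fit(np.asarray(hardness_values).reshape(-1, 1))
    return np.sort(gmm.means_.ravel())      # one hardness estimate per population
```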