12 results for Variance analysis

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 70.00%

Abstract:

Dual-trap optical tweezers are often used in high-resolution measurements in single-molecule biophysics. Such measurements can be hindered by extraneous noise sources, the most prominent of which is the coupling of fluctuations along different spatial directions, which may affect any optical-tweezers setup. In this article, we analyze, from both the theoretical and the experimental points of view, the most common source of these couplings in dual-trap optical-tweezers setups: the misalignment of traps and tether. We give criteria to distinguish different kinds of misalignment, to estimate their quantitative relevance, and to include them in the data analysis. The experimental data are obtained in a, to our knowledge, novel dual-trap optical-tweezers setup that directly measures forces. In the case in which misalignment is negligible, we provide a method to measure the stiffness of traps and tether based on variance analysis. This method can be seen as a calibration technique valid beyond the linear trap region. Our analysis is then employed to measure the persistence length of dsDNA tethers of three different lengths spanning two orders of magnitude. The effective persistence length of such tethers is shown to decrease with contour length, in accordance with previous studies.
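For the special case of a bead in the linear (harmonic) region of a single trap, the variance-based idea reduces to the textbook equipartition calibration. The sketch below illustrates only that simple case, with an invented stiffness and simulated Gaussian positions, not the paper's full trap-plus-tether method.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness_from_variance(positions_m, temperature_k=298.0):
    """Equipartition estimate of trap stiffness (N/m): k = kB*T / Var(x)."""
    return KB * temperature_k / np.var(positions_m)

# Simulated bead positions: Boltzmann distribution of a harmonic trap
# with (invented) stiffness k_true, i.e. Gaussian with variance kB*T/k.
rng = np.random.default_rng(0)
k_true = 1e-4  # N/m (0.1 pN/nm), a typical trap stiffness
x = rng.normal(0.0, np.sqrt(KB * 298.0 / k_true), size=200_000)

k_est = trap_stiffness_from_variance(x)
print(f"true {k_true:.2e} N/m, estimated {k_est:.2e} N/m")
```

With enough samples the relative error of the variance estimate, and hence of the stiffness, shrinks like sqrt(2/n).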

Relevance: 60.00%

Abstract:

This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk-return tradeoff in terms of the mean and variance of final wealth. However, there are also other important features that are not always made explicit in terms of the investor's wealth, information, and horizon: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters in that computation. First, the paper describes traditional portfolio choice based on four basic assumptions, while the remaining sections extend those assumptions. Each section describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing.
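The single-period mean-variance choice that the survey takes as its baseline has the closed form w* = (1/gamma) * Sigma^(-1) * mu, for excess-return mean mu, covariance Sigma, and risk aversion gamma. The numbers below are invented for illustration.

```python
import numpy as np

def mean_variance_weights(mu, sigma, gamma=3.0):
    """Optimal risky-asset weights: w* = (1/gamma) * Sigma^{-1} mu."""
    return np.linalg.solve(sigma, mu) / gamma

mu = np.array([0.05, 0.03])            # expected excess returns (invented)
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])       # return covariance matrix (invented)

w = mean_variance_weights(mu, sigma, gamma=3.0)
print("weights:", w)
print("expected excess return:", mu @ w)
print("portfolio variance:", w @ sigma @ w)
```

Higher risk aversion gamma scales all risky weights down proportionally, leaving the rest of wealth in the riskless asset.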

Relevance: 60.00%

Abstract:

We conduct a large-scale comparative study on linearly combining superparent-one-dependence estimators (SPODEs), a popular family of seminaive Bayesian classifiers. Altogether, 16 model selection and weighting schemes, 58 benchmark data sets, and various statistical tests are employed. This paper's main contributions are threefold. First, it formally presents each scheme's definition, rationale, and time complexity, and hence can serve as a comprehensive reference for researchers interested in ensemble learning. Second, it offers a bias-variance analysis of each scheme's classification error performance. Third, it identifies effective schemes that meet various needs in practice. This leads to accurate and fast classification algorithms which have an immediate and significant impact on real-world applications. Another important feature of our study is using a variety of statistical tests to evaluate multiple learning methods across multiple data sets.
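The linear-combination step itself is straightforward; a hypothetical sketch follows. The per-member probability estimates and weights below are invented, and a real SPODE would derive its estimates from training-data frequencies.

```python
import numpy as np

def combine_spode_estimates(probs, weights):
    """Convex combination of per-member class-probability estimates.

    probs:   (n_members, n_classes), each row a member's P(class | x)
    weights: (n_members,), nonnegative, from some weighting scheme
    """
    probs = np.asarray(probs)
    weights = np.asarray(weights)
    combined = weights @ probs            # weighted average over members
    return combined / combined.sum()      # renormalize to a distribution

probs = [[0.7, 0.3],    # member 1's estimate of P(class | x) (invented)
         [0.6, 0.4],
         [0.2, 0.8]]
weights = [0.5, 0.3, 0.2]   # invented weights, summing to 1

p = combine_spode_estimates(probs, weights)
print("combined:", p, "-> predicted class", int(np.argmax(p)))
```

The 16 schemes compared in the paper differ in how `weights` is chosen (and which members are kept at all), not in this averaging step.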

Relevance: 30.00%

Abstract:

When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
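A minimal sketch of fuzzy coding into three categories with triangular membership functions, assuming hypothetical hinge points (in practice these are often placed at quantiles of the variable):

```python
import numpy as np

def fuzzy_code(x, low, mid, high):
    """Code scalar x into degrees of membership in 3 categories (sum to 1).

    Triangular membership functions with hinges at low, mid, high.
    """
    x = min(max(x, low), high)                 # clamp to the coding range
    if x <= mid:
        m2 = (x - low) / (mid - low)           # rising limb of 'medium'
        return np.array([1.0 - m2, m2, 0.0])
    m3 = (x - mid) / (high - mid)              # rising limb of 'high'
    return np.array([0.0, 1.0 - m3, m3])

# Example: temperature coded into (cold, mild, hot), hinges 0, 15, 30 degC.
print(fuzzy_code(7.5, 0.0, 15.0, 30.0))   # halfway between cold and mild
```

Crisp coding is the degenerate case in which each row is forced to a 0/1 indicator of its nearest category.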

Relevance: 30.00%

Abstract:

We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
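The paper's variance reduction relies on Malliavin derivatives; as a generic illustration of the underlying principle (an alternative estimator with the same mean but smaller variance), here is the much simpler antithetic-variates device for E[f(Z)] with Z standard normal:

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda z: np.exp(z)       # E[exp(Z)] = exp(1/2) for Z ~ N(0, 1)
n = 100_000
z = rng.standard_normal(n)

plain = f(z)                  # crude Monte Carlo draws
anti = 0.5 * (f(z) + f(-z))   # antithetic pairs: same mean, smaller variance

print("plain      mean %.4f  var %.4f" % (plain.mean(), plain.var()))
print("antithetic mean %.4f  var %.4f" % (anti.mean(), anti.var()))
```

Both estimators are unbiased for exp(1/2) ~ 1.6487; the antithetic one cancels the odd part of f and so has markedly smaller variance for this f.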

Relevance: 30.00%

Abstract:

Although correspondence analysis is now widely available in statistical software packages and applied in a variety of contexts, notably the social and environmental sciences, there are still some misconceptions about this method, as well as unresolved issues which remain controversial to this day. In this paper we hope to settle these matters, namely (i) the way CA measures variance in a two-way table and how to compare variances between tables of different sizes, (ii) the influence, or rather lack of influence, of outliers in the usual CA maps, (iii) the scaling issue and the biplot interpretation of maps, (iv) whether or not to rotate a solution, and (v) the statistical significance of results.
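Point (i) refers to total inertia, the chi-square statistic of the table divided by its grand total, which makes tables of different sizes comparable. A short sketch on an invented contingency table:

```python
import numpy as np

def total_inertia(table):
    """Total inertia of a two-way table: chi-square / grand total."""
    n = table.sum()
    p = table / n                          # correspondence matrix
    r = p.sum(axis=1, keepdims=True)       # row masses
    c = p.sum(axis=0, keepdims=True)       # column masses
    expected = r @ c                       # independence model
    return ((p - expected) ** 2 / expected).sum()

# Invented 2x2 contingency table of counts.
table = np.array([[30.0, 10.0],
                  [10.0, 30.0]])
print("total inertia:", total_inertia(table))
```

For this table the chi-square statistic is 20 on a grand total of 80, giving a total inertia of 0.25 regardless of how large the counts are scaled.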

Relevance: 30.00%

Abstract:

The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data from human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions of interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.
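A toy sketch of the multiplicative MLEM update underlying the MLE reconstructions discussed here, on an invented 2-pixel, 3-detector system matrix. Real PET systems are vastly larger, and with noisy data the iteration must be stopped early, which is where the paper's robust stopping rule enters.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM reconstruction: refine x so forward projections A @ x match y."""
    x = np.ones(A.shape[1])               # uniform, positive initial estimate
    sens = A.sum(axis=0)                  # sensitivity (column sums of A)
    for _ in range(n_iter):
        ratio = y / (A @ x)               # measured / predicted counts
        x *= (A.T @ ratio) / sens         # multiplicative EM update
    return x

# Invented system: 2 pixels seen by 3 detectors.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
y = A @ x_true                            # noiseless projections

print("reconstructed:", mlem(A, y))
```

With noiseless, consistent data the iteration converges to the true activity; with Poisson noise it eventually fits the noise, hence the need for a stopping rule.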

Relevance: 30.00%

Abstract:

Leakage detection is an important issue in many chemical sensing applications. Leakage detection by thresholds suffers from important drawbacks when sensors have serious drifts or are affected by cross-sensitivities. Here we present an adaptive method based on a Dynamic Principal Component Analysis that models the relationships between the sensors in the array. In normal conditions a certain variance distribution characterizes the sensor signals. However, in the presence of a new source of variance the PCA decomposition changes drastically. In order to prevent the influence of sensor drifts, the model is adaptive and is calculated in a recursive manner with minimum computational effort. The behavior of this technique is studied with synthetic signals and with real signals arising from oil vapor leakages in an air compressor. Results clearly demonstrate the efficiency of the proposed method.
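A static (non-adaptive) sketch of the detection principle on synthetic data: fit a PCA model of the normal sensor correlations and flag samples whose residual off the principal subspace (the Q statistic) is large. The recursive updating of the model is not reproduced here; all sensor data below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Normal operation: 4 correlated sensors driven by one latent factor.
latent = rng.standard_normal((500, 1))
loadings = np.array([[1.0, 0.8, 0.6, 0.9]])
normal = latent @ loadings + 0.1 * rng.standard_normal((500, 4))

# PCA model: retain the first principal direction of the normal data.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
p = vt[:1].T                              # retained loadings (4 x 1)

def q_statistic(x):
    """Squared residual of each sample off the retained PCA subspace."""
    r = (x - mean) - (x - mean) @ p @ p.T
    return (r ** 2).sum(axis=1)

threshold = np.percentile(q_statistic(normal), 99)

# A leak perturbs one sensor, breaking the learned correlation pattern.
leak = normal[:50].copy()
leak[:, 2] += 1.0
print("fraction of leak samples flagged:",
      (q_statistic(leak) > threshold).mean())
```

A new variance source shows up in the residual because it does not follow the correlation structure the model was trained on, exactly the effect the abstract describes.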

Relevance: 30.00%

Abstract:

A new drift compensation method based on Common Principal Component Analysis (CPCA) is proposed. The drift variance in the data is found as the principal components computed by CPCA. This method finds components that are common for all gases in feature space. The method is compared, in a classification task, with the other published approaches in which the drift direction is estimated through a Principal Component Analysis (PCA) of a reference gas. The proposed new method, employing no specific reference gas but information from all gases, has shown the same performance as the traditional approach with the best-fitted reference gas. Results are shown with data spanning 7 months, including three gases at different concentrations, for an array of 17 polymeric sensors.
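A sketch of the baseline (reference-gas) approach the paper compares against, on synthetic two-sensor data: estimate the drift direction as the leading principal component of a reference gas's responses over time, then project it out of every measurement. The CPCA step, which pools information from all gases instead, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
drift_dir = np.array([0.6, 0.8])          # true (invented) unit drift direction
t = np.linspace(0.0, 1.0, 200)[:, None]   # time, driving a slow drift

# Reference gas measured repeatedly over time: drift plus small noise.
reference = t * drift_dir + 0.02 * rng.standard_normal((200, 2))

# Estimate the drift direction as the leading PC of the centered responses.
_, _, vt = np.linalg.svd(reference - reference.mean(axis=0))
d = vt[0]

def remove_drift(x):
    """Project the estimated drift component out of a measurement vector."""
    return x - (x @ d) * d

sample = np.array([1.0, 0.0]) + 0.5 * drift_dir   # a drifted measurement
corrected = remove_drift(sample)
print("drift component after correction:", corrected @ d)
```

The correction is exact along the estimated direction; its usefulness depends on how well the reference gas's drift matches the drift of the other gases, which is the weakness CPCA addresses.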

Relevance: 30.00%

Abstract:

The stop-loss reinsurance is one of the most important reinsurance contracts in the insurance market. From the insurer's point of view, it presents an interesting property: it is optimal under the criterion of minimizing the variance of the cost of the insurer. The aim of this paper is to contribute to the analysis of the stop-loss contract in one period from the points of view of the insurer and the reinsurer. Firstly, the influence of the parameters of the reinsurance contract on the correlation coefficient between the cost of the insurer and the cost of the reinsurer is studied. Secondly, the optimal stop-loss contract is obtained when the criterion used is the maximization of the joint survival probability of the insurer and the reinsurer in one period.
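A Monte Carlo sketch of the contract itself, with an invented claim distribution and retention level: under stop-loss with retention M the insurer retains min(X, M) and cedes max(X - M, 0), which caps the retained cost and reduces its variance.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=50.0, size=200_000)   # aggregate claims X
m = 150.0                                            # retention level M

insurer = np.minimum(x, m)        # insurer's retained cost, min(X, M)
reinsurer = np.maximum(x - m, 0)  # reinsurer's cost, max(X - M, 0)

print("variance of X           :", round(x.var(), 1))
print("variance of min(X, M)   :", round(insurer.var(), 1))
print("corr(insurer, reinsurer):",
      round(float(np.corrcoef(insurer, reinsurer)[0, 1]), 3))
```

Both costs are nondecreasing functions of X, so their correlation is positive; how it varies with M (and with the premium loading) is the first question the paper studies.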

Relevance: 30.00%

Abstract:

The objective of this research was to analyse the potential of Normalized Difference Vegetation Index (NDVI) maps from satellite images, yield maps, and grapevine fertility and load variables to delineate zones with different wine grape properties for selective harvesting. Two vineyard blocks located in NE Spain (Cabernet Sauvignon and Syrah) were analysed. The NDVI was computed from a Quickbird-2 multi-spectral image at veraison (July 2005). Yield data were acquired by means of a yield monitor during September 2005. Other variables, such as the number of buds, number of shoots, number of wine grape clusters and weight of 100 berries, were sampled in a 10 rows × 5 vines pattern and used as input variables, in combination with the NDVI, to define the clusters as an alternative to yield maps. Two days prior to harvesting, grape samples were taken. The analysed variables were probable alcoholic degree, pH of the juice, total acidity, total phenolics, colour, anthocyanins and tannins. The input variables, alone or in combination, were clustered (into 2 and 3 clusters) using the ISODATA algorithm, and an analysis of variance and a multiple range test were performed. The results show that the zones derived from the NDVI maps are more effective at differentiating grape maturity and quality variables than the zones derived from the yield maps. The inclusion of other grapevine fertility and load variables did not improve the results.
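A sketch of the zone-delineation step on synthetic NDVI values; the paper uses the ISODATA algorithm, for which plain k-means (Lloyd's algorithm, k = 2) serves here as a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic per-cell NDVI values from two vigour zones (invented means).
ndvi = np.concatenate([rng.normal(0.35, 0.03, 300),
                       rng.normal(0.65, 0.03, 300)])

def kmeans_1d(x, k=2, n_iter=50):
    """Plain Lloyd's algorithm on scalar values, quantile-initialized."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

labels, centers = kmeans_1d(ndvi)
print("zone centers:", np.round(np.sort(centers), 2))
```

The cluster labels, mapped back to cell positions, give the candidate harvesting zones that are then compared by analysis of variance.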

Relevance: 30.00%

Abstract:

Public opinion surveys have become progressively incorporated into systems of official statistics. Surveys of the economic climate are usually qualitative because they collect opinions of businesspeople and/or experts about long-term indicators described by a number of variables. In such cases the responses are expressed in ordinal numbers; that is, the respondents verbally report, for example, whether during a given trimester the sales or the new orders have increased, decreased or remained the same as in the previous trimester. These data allow one to calculate the percentage of respondents in the total population (results are extrapolated) who select each of the three options. The data are often presented in the form of an index calculated as the difference between the percentage of those who claim that a given variable has improved in value and the percentage of those who claim that it has deteriorated.
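The index described above (often called a balance indicator) is a simple difference of percentages; with invented figures:

```python
def balance(pct_increase, pct_decrease):
    """Balance indicator: % reporting an increase minus % reporting a decrease."""
    return pct_increase - pct_decrease

# Invented extrapolated percentages for one trimester:
# 42% report an increase, 27% a decrease, 31% "the same".
print(balance(42.0, 27.0))
```

The share answering "the same" drops out of the index, which is why the balance ranges from -100 to +100.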