40 results for Standardization Sample
Abstract:
Small-sample properties are of fundamental interest when only limited data are available. Exact inference is limited both by constraints imposed by specific nonrandomized tests and, of course, by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
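A minimal sketch of the benchmarking idea in a deliberately simplified setting (not the paper's game-theoretic construction for composite hypotheses): with simple hypotheses and bounded binary outcomes, the minimal type II error at a given level is attained by the randomized Neyman-Pearson test, which can then serve as the yardstick for a nonrandomized test. Sample sizes, success probabilities and the level below are illustrative assumptions.

```python
# Illustrative sketch: compare a nonrandomized test's type II error to the
# minimal type II error among all tests of the same level, in a simplified
# simple-vs-simple setting with bounded (binary) outcomes per arm.
from itertools import product
from math import comb

n = 8                       # observations per arm (balanced sample)
p0 = (0.5, 0.5)             # H0: equal means
p1 = (0.8, 0.2)             # H1: means differ by a reasonable gap
alpha = 0.05

def binom(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Outcome = (successes in arm 1, successes in arm 2)
outcomes = list(product(range(n + 1), repeat=2))
prob0 = {x: binom(x[0], n, p0[0]) * binom(x[1], n, p0[1]) for x in outcomes}
prob1 = {x: binom(x[0], n, p1[0]) * binom(x[1], n, p1[1]) for x in outcomes}

# Minimal type II error at level alpha: randomized Neyman-Pearson test, filling
# rejection probability in decreasing order of the likelihood ratio.
ranked = sorted(outcomes, key=lambda x: prob1[x] / prob0[x], reverse=True)
size, power = 0.0, 0.0
for x in ranked:
    take = min(1.0, (alpha - size) / prob0[x])
    if take <= 0:
        break
    size += take * prob0[x]
    power += take * prob1[x]
print("minimal type II error:", 1 - power)

# A nonrandomized comparison test: reject when the difference in counts is large.
for c in range(1, 2 * n + 1):
    sz = sum(prob0[x] for x in outcomes if x[0] - x[1] >= c)
    if sz <= alpha:
        beta = 1 - sum(prob1[x] for x in outcomes if x[0] - x[1] >= c)
        print(f"nonrandomized test (threshold {c}): size {sz:.3f}, type II error {beta:.3f}")
        break
```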
Abstract:
In order to interpret the biplot it is necessary to know which points, usually the variables, are the important contributors to the solution, and this information is available separately as part of the biplot's numerical results. We propose a new scaling of the display, called the contribution biplot, which incorporates this diagnostic directly into the graphical display, showing visually the important contributors and thus facilitating the interpretation of the biplot and often simplifying the graphical representation considerably. The contribution biplot can be applied to a wide variety of analyses, such as correspondence analysis, principal component analysis, log-ratio analysis and the graphical results of a discriminant analysis/MANOVA, in fact to any method based on the singular-value decomposition. In the contribution biplot one set of points, usually the rows of the data matrix, optimally represents the spatial positions of the cases or sample units, according to some distance measure that usually incorporates some form of standardization unless all data are comparable in scale. The other set of points, usually the columns, is represented by vectors that are related to their contributions to the low-dimensional solution. A fringe benefit is that usually only one common scale for row and column points is needed on the principal axes, thus avoiding the problem of enlarging or contracting the scale of one set of points to make the biplot legible. Furthermore, this version of the biplot also solves the problem in correspondence analysis of low-frequency categories that are located on the periphery of the map, giving the false impression that they are important, when they are in fact contributing minimally to the solution.
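A minimal numerical sketch of the contribution-style scaling in the simplest case, a column-standardized data matrix analysed by PCA via the SVD: rows are drawn in principal coordinates, while each column's squared coordinate on an axis equals its contribution to that axis. The article's exact weightings for correspondence and log-ratio analysis differ; the data below are simulated.

```python
# Contribution-style biplot scaling for a standardized data matrix (PCA case).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                    # 50 cases, 6 variables (simulated)
Z = (X - X.mean(0)) / X.std(0, ddof=1)          # column standardization
n = Z.shape[0]

U, s, Vt = np.linalg.svd(Z / np.sqrt(n), full_matrices=False)

F = np.sqrt(n) * U * s          # row principal coordinates (positions of the cases)
G = Vt.T                        # column coordinates in the contribution scaling

# Squared column coordinates sum to 1 per axis: they are the variables'
# contributions to each principal axis, visible directly as vector lengths.
contributions = G**2
print(np.allclose(contributions.sum(axis=0), 1.0))
print(contributions[:, :2])     # contributions to the first two axes
```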
Abstract:
This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
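A sketch of the quadratic-form statistics involved, computed on simulated data with dimensionality exceeding sample size. The corrected identity statistic shown is one standard (n, p)-asymptotic correction written here as an assumption; the paper's exact corrected statistic and its limiting null distribution should be taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 120                          # dimensionality larger than sample size
X = rng.normal(size=(n, p))
S = X.T @ X / n                         # sample covariance (singular when p > n)
I = np.eye(p)

# Sphericity (H0: Sigma proportional to I): John's statistic, a quadratic form
# in the eigenvalues of S that remains well-defined even when S is singular.
A = S / (np.trace(S) / p) - I
U = np.trace(A @ A) / p

# Equality to a given matrix (H0: Sigma = I): naive quadratic-form statistic and
# a dimensionality-corrected version in the spirit of the paper.
B = S - I
V = np.trace(B @ B) / p
W = V - (p / n) * (np.trace(S) / p) ** 2 + p / n

print(f"U = {U:.3f}  V = {V:.3f}  W = {W:.3f}")
```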
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
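A minimal sketch of the workflow with simulated returns: estimate a shrunk covariance matrix and plug it into a global minimum-variance optimizer in place of the (here singular) sample covariance matrix. scikit-learn's LedoitWolf estimator shrinks toward a scaled identity target; the article's estimator may use a different shrinkage target, but the idea of pulling extreme coefficients toward central values is the same.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(42)
T, N = 60, 100                           # 60 periods of returns, 100 assets (simulated)
returns = rng.normal(0.01, 0.05, size=(T, N))

def min_variance_weights(cov):
    # Global minimum-variance weights: solve cov * w proportional to ones.
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)       # would fail/explode for a (near-)singular cov
    return w / w.sum()

sample_cov = np.cov(returns, rowvar=False)   # singular here, since N > T
lw = LedoitWolf().fit(returns)
print("estimated shrinkage intensity:", lw.shrinkage_)

w_shrunk = min_variance_weights(lw.covariance_)
print("largest absolute weight with shrinkage:", np.abs(w_shrunk).max())
```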
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
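A simplified sketch of the simulation exercise, not the paper's sequence of models: the small-sample distribution of a two-step GMM estimator for a single linear moment condition with an endogenous regressor, obtained by Monte Carlo replication. All parameter values below are illustrative assumptions.

```python
# Monte Carlo small-sample distribution of a two-step GMM estimator for
# E[z * (y - beta * x)] = 0, with an endogenous regressor and two instruments.
import numpy as np

rng = np.random.default_rng(0)
beta_true, n, reps = 1.0, 50, 2000
estimates = []

for _ in range(reps):
    z = rng.normal(size=(n, 2))                                   # instruments
    u = rng.normal(size=n)
    x = z @ np.array([0.5, 0.5]) + 0.8 * u + rng.normal(size=n)   # endogenous regressor
    y = beta_true * x + u

    def gmm_step(W):
        # Minimize g(b)' W g(b) with g(b) = z'(y - b*x)/n; linear in b, so closed form.
        zx, zy = z.T @ x / n, z.T @ y / n
        return (zx @ W @ zy) / (zx @ W @ zx)

    b1 = gmm_step(np.eye(2))                          # first step: identity weighting
    e = y - b1 * x
    S = (z * e[:, None]).T @ (z * e[:, None]) / n     # estimated moment covariance
    estimates.append(gmm_step(np.linalg.inv(S)))      # second step: optimal weighting

estimates = np.array(estimates)
print(f"mean bias: {estimates.mean() - beta_true:.4f}, std: {estimates.std():.4f}")
```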
Abstract:
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality, we obtain tight bounds for empirical loss minimization learning.
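For context only (the paper's new inequality is not reproduced here): the classical Vapnik-Chervonenkis uniform deviation bound, in one standard form, and the generic way such uniform bounds translate into guarantees for empirical loss minimization.

```latex
\[
  \Pr\Bigl\{\,\sup_{A \in \mathcal{A}} \bigl|\nu_n(A) - \nu(A)\bigr| > \varepsilon \Bigr\}
  \;\le\; 4\, s_{\mathcal{A}}(2n)\, e^{-n\varepsilon^2/8},
\]
% where $\nu_n$ is the empirical measure of an i.i.d.\ sample of size $n$, $\nu$ its mean,
% and $s_{\mathcal{A}}(2n)$ the shatter coefficient of the class $\mathcal{A}$.
% For empirical loss minimization over a class $\mathcal{G}$ with empirical risk
% $\widehat{L}_n$ and true risk $L$, any such uniform bound yields
\[
  L(\widehat{g}_n) - \inf_{g \in \mathcal{G}} L(g)
  \;\le\; 2 \sup_{g \in \mathcal{G}} \bigl|\widehat{L}_n(g) - L(g)\bigr|.
\]
```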
Abstract:
In this article we report our systematic studies of how the onset parameters of the instability of the nematic-isotropic interface during directional growth and melting depend on sample thickness, under homeotropic or planar anchoring.
Abstract:
Background: Obesity may have an impact on key aspects of health-related quality of life (HRQOL). In this context, the Impact of Weight on Quality of Life (IWQOL) questionnaire was the first scale designed specifically to assess the impact of weight on HRQOL. The aim of the present study was twofold: to assess HRQOL in a sample of Spanish patients awaiting bariatric surgery and to determine the psychometric properties of the IWQOL-Lite and its sensitivity to detect differences in HRQOL across groups. Methods: Participants were 109 obese adult patients (BMI ≥ 35 kg/m²) from Barcelona, to whom the following measurement instruments were applied: IWQOL-Lite, Depression Anxiety Stress Scales, Brief Symptom Inventory, and self-perception items. Results: Descriptive data regarding the IWQOL-Lite scores obtained by these patients are reported. Principal components analysis revealed a five-factor model accounting for 72.05% of the total variance, with factor loadings being adequate for all items. Corrected item-total correlations were acceptable for all items. Cronbach's alpha coefficients were excellent both for the subscales (0.88–0.93) and the total scale (0.95). The relationship between the IWQOL-Lite and other variables supports the construct validity of the scale. Finally, sensitivity analysis revealed large effect sizes when comparing scores obtained by extreme BMI groups. Conclusions: This is the first study to report the application of the IWQOL-Lite to a sample of Spanish patients awaiting bariatric surgery and to confirm that the Spanish version of the instrument has adequate psychometric properties.
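A minimal sketch of one of the reliability computations reported, Cronbach's alpha for a subscale, using simulated item scores standing in for IWQOL-Lite responses (the real instrument's items and scoring are not reproduced here).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
common = rng.normal(size=(109, 1))                    # shared trait, 109 respondents
items = common + 0.6 * rng.normal(size=(109, 11))     # 11 correlated items (simulated)
print(f"alpha = {cronbach_alpha(items):.2f}")
```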
Abstract:
Transmission electron microscopy is a proven technique in the field of cell biology and a very useful tool in biomedical research. Innovation and improvements in equipment, together with the introduction of new technology, have allowed us to improve our knowledge of biological tissues, to visualize structures better and both to identify and to locate molecules. Of all the types of microscopy exploited to date, electron microscopy is the one with the most advantageous resolution limit and therefore it is a very efficient technique for deciphering the cell architecture and relating it to function. This chapter aims to provide an overview of the most important techniques that we can apply to a biological sample, tissue or cells, to observe it with an electron microscope, from the most conventional to the latest generation. Processes and concepts are defined, and the advantages and disadvantages of each technique are assessed along with the image and information that we can obtain by using each one of them.
Abstract:
This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. We show that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability.
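A sketch of the raw ingredient of such window-size-robust evaluation on simulated data: out-of-sample loss differentials between a predictive regression and a benchmark mean forecast, recomputed over a range of rolling estimation window sizes. The paper's actual test statistics and critical values are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 400
x = rng.normal(size=T)
y = 0.3 * x + rng.normal(size=T)          # model 1 uses x; model 2 is a rolling mean

def rolling_mspe_diff(window):
    diffs = []
    for t in range(window, T):
        ys, xs = y[t - window:t], x[t - window:t]
        beta = np.polyfit(xs, ys, 1)      # OLS on the estimation window
        f1 = np.polyval(beta, x[t])       # model 1 forecast of y[t]
        f2 = ys.mean()                    # benchmark: rolling mean forecast
        diffs.append((y[t] - f2) ** 2 - (y[t] - f1) ** 2)
    return np.mean(diffs)                 # positive values favour model 1

for window in (30, 60, 120, 240):
    print(f"window {window:>3}: mean loss differential = {rolling_mspe_diff(window):+.4f}")
```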
Abstract:
Phenomena with a constrained sample space appear frequently in practice. This is the case, e.g., with strictly positive data, or with compositional data, like percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his introduction of the lognormal distribution in 1879 are recovered and updated.
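A minimal sketch of the log-ratio route for the simplex case: compositions are mapped to centered log-ratio (clr) coordinates, where differences are relative rather than absolute and the ordinary normal model can be applied (the single strictly positive variable case reduces to the lognormal). The data below are simulated, purely illustrative compositions.

```python
import numpy as np

rng = np.random.default_rng(5)
raw = rng.lognormal(mean=0.0, sigma=0.5, size=(100, 3))
comp = raw / raw.sum(axis=1, keepdims=True)          # closure: each row sums to 1

def clr(x):
    # Centered log-ratio transform: log of each part over the row geometric mean.
    g = np.exp(np.log(x).mean(axis=1, keepdims=True))
    return np.log(x / g)

z = clr(comp)                        # coordinates where Euclidean geometry is natural
mu, cov = z.mean(axis=0), np.cov(z, rowvar=False)
print("clr mean:", np.round(mu, 3))
print("clr covariance diagonal:", np.round(np.diag(cov), 3))
```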
Abstract:
We study the determining factors of science-based cooperation in the case of small and micro firms. In this research, we propose an analytical framework based on the resource-based view of the firm, and we identify a set of organisational characteristics, which we classify as internal, external and structural factors. Each factor can be linked to at least one reason, from the firm's point of view, to cooperate with universities and public research centres. Each reason can, in turn, be used as an indicator of a firm's organisational needs or organisational capacities. In order to validate the theoretical model, we estimate a logistic regression that models the propensity to participate in science-based cooperation activities within a sample of 285 small and micro firms located in Barcelona. The results show the key role played by the absorptive capacity of new and small companies.
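A minimal sketch of the kind of model estimated: a logistic regression of the propensity to cooperate with universities on firm characteristics. The data are simulated and the variable names (absorptive_capacity, firm_size, exporter) are hypothetical stand-ins; the article's actual variables and coefficients are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 285
absorptive_capacity = rng.normal(size=n)     # e.g., share of highly qualified staff
firm_size = rng.poisson(5, size=n)           # employees, micro/small range
exporter = rng.integers(0, 2, size=n)

# Simulated propensity to engage in science-based cooperation.
logit = 0.9 * absorptive_capacity + 0.05 * firm_size + 0.4 * exporter - 1.0
cooperates = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([absorptive_capacity, firm_size, exporter]))
model = sm.Logit(cooperates, X).fit(disp=0)
print(model.params)          # estimated log-odds effect of each factor
```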
Abstract:
This paper introduces a qualitative case study on mobile communication among the older population (60+ years old) conducted in Greater Los Angeles (CA, USA) in autumn 2011. Methodology, fieldwork and preliminary results are discussed. Beforehand, country-level data are presented to better understand the specific characteristics of the studied individuals, with a focus on demographics and on the acceptance and use of information and communication technologies (ICT). Preliminary results show that within the sample under study (20 individuals) there is a high number of mobile phone users (15), while among the non-mobile users (5), three decided to stop using this technology. A majority of mobile phone adopters describe a very limited use of the device for everyday communication. Finally, while the Internet is very popular within the sample (14 users), just 3 individuals go online through their mobile telephones.
Abstract:
Regression equations predicting dissectable muscle weight in rabbits from external measurements are presented. Bone weight and the weight of muscle groups in the carcass were also predicted. The predictive capacity of external measurements, retail cuts and muscle groups for total muscle, percent muscle, total bone and the muscle-to-bone ratio was studied separately. Measurements on dissected retail cuts should be included in order to obtain good equations for the prediction of percent muscle in the carcass. Equations for predicting the muscle-to-bone ratio using external measurements and data from the dissection of one hind leg are suggested. The equations generally had high coefficients of determination. The coefficient of determination for the prediction of dissectable muscle was 0.91, and for percent muscle in the carcass 0.79.
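A minimal sketch of how such prediction equations and their coefficients of determination are obtained: ordinary least squares of muscle weight on external measurements, with R² computed from the fitted values. The data and the measurement names (carcass_length, lumbar_circumference) are simulated, hypothetical stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120
carcass_length = rng.normal(35, 2, size=n)            # cm, illustrative
lumbar_circumference = rng.normal(18, 1.5, size=n)    # cm, illustrative
muscle_weight = 12 * carcass_length + 20 * lumbar_circumference + rng.normal(0, 40, size=n)

# Prediction equation: muscle_weight = b0 + b1*length + b2*circumference
X = np.column_stack([np.ones(n), carcass_length, lumbar_circumference])
coef, *_ = np.linalg.lstsq(X, muscle_weight, rcond=None)
fitted = X @ coef
r2 = 1 - ((muscle_weight - fitted) ** 2).sum() / ((muscle_weight - muscle_weight.mean()) ** 2).sum()
print("coefficients:", np.round(coef, 2))
print("R^2:", round(r2, 3))
```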
Abstract:
We performed a number of tests with the aim of developing an effective extraction method for the analysis of carotenoid content in maize seed. Mixtures of methanol–ethyl acetate (6:4, v/v) and methanol–tetrahydrofuran (1:1, v/v) were the most effective solvent systems for carotenoid extraction from maize endosperm under the conditions assayed. In addition, we also addressed sample preparation prior to the analysis of carotenoids by liquid chromatography (LC). The LC response of extracted carotenoids and standards in several solvents was evaluated and the results were related to the degree of solubility of these pigments. Three key factors were found to be important when selecting a suitable injection solvent: compatibility between the mobile phase and the injection solvent, carotenoid polarity, and content in the matrix.