41 results for sample pretreatment
Abstract:
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotically robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of corrected score test statistics.
Abstract:
Small sample properties are of fundamental interest when only limited data are available. Exact inference is limited by constraints imposed by specific nonrandomized tests and, of course, also by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
Abstract:
This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
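As a minimal numerical illustration of the abstract's central point (with simulated data, not the paper's own statistics), the sketch below shows why p > n makes the sample covariance matrix singular, degrading likelihood-ratio statistics that depend on log det(S), while a quadratic form in the eigenvalues, such as John's sphericity statistic, remains well-defined:

```python
import numpy as np

# Hypothetical data: dimension p larger than sample size n.
rng = np.random.default_rng(1)
n, p = 50, 100
X = rng.standard_normal((n, p))
S = X.T @ X / n                     # sample covariance (population mean known to be zero)

# Likelihood ratio statistics involve log det(S); with p > n the sample
# covariance is rank-deficient, so its smallest eigenvalue is (numerically) zero:
lam = np.linalg.eigvalsh(S)
print(lam.min() < 1e-8)             # True: S is singular, log det degenerates

# John's sphericity statistic is a quadratic form in the eigenvalues of S
# and stays well-defined even when p > n:
a1 = np.trace(S) / p                # average eigenvalue
U = np.trace(S @ S) / (p * a1 ** 2) - 1.0
print(U > 0)                        # True unless S is exactly proportional to I
```

The same computation runs unchanged for any n, p; only the log-determinant-based route breaks when p exceeds n.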
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
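A rough sketch of the shrinkage idea (with a fixed, user-supplied intensity rather than the optimal data-driven formula the paper derives) pulls the sample covariance toward a scaled identity target:

```python
import numpy as np

def shrink_covariance(X, delta):
    """Shrink the sample covariance of X toward a scaled identity target.

    delta in [0, 1] is the shrinkage intensity; here it is supplied by the
    caller, whereas the paper gives a formula for the optimal intensity.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)          # p x p sample covariance
    mu = np.trace(S) / p                 # average variance sets the target scale
    return delta * mu * np.eye(p) + (1.0 - delta) * S

# Hypothetical returns: fewer observations (20) than assets (50),
# so the raw sample covariance is singular and useless to an optimizer.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
Sigma = shrink_covariance(X, delta=0.5)
print(np.linalg.matrix_rank(Sigma))      # full rank: the shrunk matrix is invertible
```

Any positive weight on the identity target makes the estimate positive definite, which is exactly what a mean-variance optimizer needs when it inverts the covariance matrix.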
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
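The first step of such an exercise, for the simplest linear case, can be sketched as follows (an illustrative Monte Carlo, not the paper's models): a just-identified GMM estimator for a linear model, simulated many times at a small sample size to trace out its finite-sample distribution.

```python
import numpy as np

def gmm_beta(x, y):
    # Just-identified GMM with moment condition E[x (y - beta * x)] = 0,
    # whose sample analogue solves to a simple ratio.
    return np.sum(x * y) / np.sum(x * x)

rng = np.random.default_rng(2)
beta_true, n_obs, n_rep = 1.5, 25, 2000   # hypothetical truth, small sample
estimates = np.empty(n_rep)
for r in range(n_rep):
    x = rng.standard_normal(n_obs)
    y = beta_true * x + rng.standard_normal(n_obs)
    estimates[r] = gmm_beta(x, y)

# The simulated draws approximate the small-sample distribution of the
# GMM estimator; in this linear case it is centered near the truth.
print(abs(estimates.mean() - beta_true) < 0.05)   # True
```

Replacing the linear data-generating process and moment condition with progressively more nonlinear ones is what lets the study isolate nonlinearity as a source of small-sample bias.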
Abstract:
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
Abstract:
Background: Data from different studies suggest a favourable association between pretreatment with statins or hypercholesterolemia and outcome after ischaemic stroke. We examined whether there were differences in in-hospital mortality according to the presence or absence of statin therapy in a large population of first-ever ischaemic stroke patients and assessed the influence of statins upon early death and spontaneous neurological recovery. Methods: In 2,082 consecutive patients with first-ever ischaemic stroke collected from a prospective hospital-based stroke registry during a period of 19 years (1986-2004), statin use or hypercholesterolemia before stroke was documented in 381 patients. On the other hand, favourable outcome defined as grades 0-2 in the modified Rankin scale was recorded in 382 patients. Results: Early outcome was better in the presence of statin therapy or hypercholesterolemia (cholesterol levels were not measured) with significant differences between the groups with and without pretreatment with statins in in-hospital mortality (6% vs 13.3%, P = 0.001) and symptom-free (22% vs 17.5%, P = 0.025) and severe functional limitation (6.6% vs 11.5%, P = 0.002) at hospital discharge, as well as lower rates of infectious respiratory complications during hospitalization. In the logistic regression model, statin therapy was the only variable inversely associated with in-hospital death (odds ratio 0.57) and directly associated with favourable outcome (odds ratio 1.32).
Abstract:
In this article we report our systematic studies of the dependence on the sample thickness of the onset parameters of the instability of the nematic-isotropic interface during directional growth and melting, in homeotropic or planar anchoring.
Abstract:
Background: Obesity may have an impact on key aspects of health-related quality of life (HRQOL). In this context, the Impact of Weight on Quality of Life (IWQOL) questionnaire was the first scale designed to assess HRQOL. The aim of the present study was twofold: to assess HRQOL in a sample of Spanish patients awaiting bariatric surgery and to determine the psychometric properties of the IWQOL-Lite and its sensitivity to detect differences in HRQOL across groups. Methods: Participants were 109 obese adult patients (BMI ≥ 35 kg/m²) from Barcelona, to whom the following measurement instruments were applied: IWQOL-Lite, Depression Anxiety Stress Scales, Brief Symptom Inventory, and self-perception items. Results: Descriptive data regarding the IWQOL-Lite scores obtained by these patients are reported. Principal components analysis revealed a five-factor model accounting for 72.05% of the total variance, with factor loadings being adequate for all items. Corrected item-total correlations were acceptable for all items. Cronbach's alpha coefficients were excellent both for the subscales (0.88-0.93) and the total scale (0.95). The relationship between the IWQOL-Lite and other variables supports the construct validity of the scale. Finally, sensitivity analysis revealed large effect sizes when comparing scores obtained by extreme BMI groups. Conclusions: This is the first study to report the application of the IWQOL-Lite to a sample of Spanish patients awaiting bariatric surgery and to confirm that the Spanish version of the instrument has adequate psychometric properties.
Abstract:
Transmission electron microscopy is a proven technique in the field of cell biology and a very useful tool in biomedical research. Innovation and improvements in equipment, together with the introduction of new technology, have allowed us to improve our knowledge of biological tissues, to visualize structures better, and both to identify and to locate molecules. Of all the types of microscopy exploited to date, electron microscopy is the one with the most advantageous resolution limit, and therefore it is a very efficient technique for deciphering the cell architecture and relating it to function. This chapter aims to provide an overview of the most important techniques that we can apply to a biological sample, tissue or cells, to observe it with an electron microscope, from the most conventional to the latest generation. Processes and concepts are defined, and the advantages and disadvantages of each technique are assessed along with the image and information that we can obtain by using each one of them.
Abstract:
This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. We show that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability.
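The basic object being studied, out-of-sample performance as a function of the estimation window size, can be sketched as below (a toy rolling-mean forecast on simulated data, not the paper's tests or models):

```python
import numpy as np

def oos_mse(series, window):
    """One-step-ahead out-of-sample MSE of a rolling-mean forecast
    estimated on the most recent `window` observations."""
    errors = [series[t] - series[t - window:t].mean()
              for t in range(window, len(series))]
    return float(np.mean(np.square(errors)))

# Hypothetical series: a slow drift plus noise.
rng = np.random.default_rng(3)
y = rng.standard_normal(300).cumsum() * 0.1 + rng.standard_normal(300)

# Evaluate predictive ability over a wide range of window sizes rather
# than committing to a single arbitrary choice.
windows = range(10, 101, 10)
mses = {w: oos_mse(y, w) for w in windows}
print(min(mses, key=mses.get))   # window size with the lowest OOS MSE
```

Reporting the whole MSE-versus-window curve, rather than the single best window, is what guards against the data-snooping problem the abstract raises.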
Abstract:
Phenomena with a constrained sample space appear frequently in practice. This is the case, e.g., with strictly positive data, or with compositional data, like percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his introduction to the lognormal distribution in 1879 are recovered and updated.
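A standard way to move from the simplex to ordinary Euclidean geometry, in the spirit of this log-ratio approach (a generic sketch, not the paper's specific construction), is the centered log-ratio transform:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform: maps a composition (strictly positive
    parts, arbitrary total) into ordinary Euclidean coordinates."""
    logx = np.log(x)
    return logx - logx.mean()

comp = np.array([0.7, 0.2, 0.1])   # a hypothetical 3-part composition
z = clr(comp)
# clr coordinates always sum to zero, so the composition lives on a
# hyperplane where the usual Euclidean tools apply.
print(np.isclose(z.sum(), 0.0))    # True
```

Differences between compositions then become ordinary Euclidean differences between their clr coordinates, which is the relative (rather than absolute) notion of difference the abstract argues for.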
Abstract:
We study the determining factors of science-based cooperation in the case of small and micro firms. In this research, we propose an analytical framework based on the resource-based view of the firm and we identify a set of organisational characteristics, which we classify as internal, external and structural factors. Each factor can be linked to at least one reason, from the firm's point of view, to cooperate with universities and public research centres. Each reason can, in turn, be used as an indicator of a firm's organisational needs or organisational capacities. In order to validate the theoretical model, we estimate a logistic regression that models the propensity to participate in science-based cooperation activities within a sample of 285 small and micro firms located in Barcelona. The results show the key role played by the absorptive capacity of new and small companies.
Abstract:
This article introduces a qualitative case study on mobile communication among the older population (60+ years old) conducted in Greater Los Angeles (CA, USA) in autumn 2011. Methodology, fieldwork and preliminary results are discussed. First, country-level data are presented to better understand the specific characteristics of the studied individuals; this section focuses on demographics and on the acceptance and use of information and communication technologies (ICT). Preliminary results show that within the sample under study (20 individuals) there is a high number of mobile phone users (15), while among the non-mobile users (5), three decided to stop using this technology. A majority of mobile phone adopters describe a very limited use of the device for everyday life communications. Finally, while the Internet is very popular within the sample (14 users), just 3 individuals go online through their mobile telephone.
Abstract:
Regression equations predicting dissectable muscle weight in rabbits from external measurements are presented. Carcass bone weight and the weight of muscle groups were also predicted. The predictive capacity of external measurements, retail cuts and muscle groups for total muscle, percent muscle, total bone and muscle-to-bone ratio were studied separately. Measurements on dissected retail cuts should be included in order to obtain good equations for the prediction of percent muscle in the carcass. Equations for predicting the muscle-to-bone ratio using external measurements and data from the dissection of one hind leg are suggested. The equations generally had high coefficients of determination: 0.91 for the prediction of dissectable muscle and 0.79 for percent muscle in the carcass.