979 results for sample preparation
Abstract:
Selection of action may rely on external guidance or be motivated internally, engaging partially distinct cerebral networks. With age, there is an increased allocation of sensorimotor processing resources, accompanied by a reduced differentiation between the two networks of action selection. The present study examines age effects on the motor-related oscillatory patterns associated with the preparation of externally and internally guided movements. Thirty-two older and 30 younger adults underwent three delayed motor tasks with S1 as the preparatory cue and S2 as the imperative cue: Full, laterality instructed by S1 (external guidance); Free, laterality freely selected (internal guidance); None, laterality instructed by S2 (no preparation). The electroencephalogram (EEG) was recorded from 64 surface electrodes. Motor-Related Amplitude Asymmetries (MRAA), indexing the lateralization of oscillatory activities, were analyzed within the S1-S2 interval in the mu (9-12 Hz) and low beta (15-20 Hz) motor-related frequency bands. Reaction times to S2 were slower in older than in younger subjects, and slower in the Free than in the Full condition in older subjects only. In the Full condition, mu MRAA were significant in both age groups, and low beta MRAA were significant only in older adults. The Free condition was associated with large mu MRAA in younger adults and limited low beta MRAA in older adults. In younger subjects, the lateralization of mu activity in both the Full and Free conditions indicated effective external and internal motor preparation. In older subjects, external motor preparation was associated with lateralization of low beta in addition to mu activity, compatible with an increase of motor-related resources. In contrast, the absence of mu lateralization and the limited low beta lateralization during internal motor preparation were concomitant with reaction time slowing, suggesting less efficient cerebral processes underlying free movement selection and a reduced capacity for internally driven action in older adults.
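As a rough illustration of the kind of analysis described above (not the authors' pipeline; the electrode pair, Welch parameters, and asymmetry formula are assumptions for illustration), the sketch below computes mu- and low-beta-band power at left and right motor electrodes and a simple normalized asymmetry index:

```python
# Minimal sketch of a band-power lateralization index (illustrative only;
# the exact MRAA definition used in the study may differ).
import numpy as np
from scipy.signal import welch

fs = 512  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
# Hypothetical single-trial EEG from left (C3) and right (C4) motor electrodes
# during the S1-S2 interval; replace with real preprocessed data.
c3 = rng.standard_normal(2 * fs)
c4 = rng.standard_normal(2 * fs)

def band_power(x, fs, lo, hi):
    """Approximate band power: sum of Welch PSD bins in [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum() * (f[1] - f[0])

for name, (lo, hi) in {"mu": (9, 12), "low beta": (15, 20)}.items():
    p_c3, p_c4 = band_power(c3, fs, lo, hi), band_power(c4, fs, lo, hi)
    # Assumed asymmetry form: positive values indicate lower power (stronger
    # desynchronization) at C3, contralateral to a right-hand movement.
    asym = (p_c4 - p_c3) / (p_c4 + p_c3)
    print(f"{name}: asymmetry index = {asym:+.3f}")
```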
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviates from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require the computation of sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
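For concreteness, the following is a standard single-sample covariance-structure illustration of the objects involved (background notation assumed here, not the paper's general multi-sample result): for i.i.d. observations $z_i$ with sample moments $s = \mathrm{vech}(S)$ and population moments $\sigma = \mathrm{vech}(\Sigma)$,

$$\sqrt{n}\,(s - \sigma) \xrightarrow{d} N(0, \Gamma), \qquad \Gamma_{\mathrm{NT}} = 2\, D_p^{+} (\Sigma \otimes \Sigma) D_p^{+\prime},$$

where $D_p^{+}$ is the Moore-Penrose inverse of the duplication matrix and $\Gamma_{\mathrm{NT}}$ is the normal-theory form of $\Gamma$. Matrices such as $\Gamma_{\mathrm{NT}}$ depend only on second-order (cross-product) moments of the observed variables, which is the kind of matrix $\Omega$ that asymptotic-robustness results permit in place of the general, fourth-order-moment $\Gamma$.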
Abstract:
We introduce several exact nonparametric tests for finite-sample multivariate linear regressions, and compare their powers. This fills an important gap in the literature, where the only known nonparametric tests are either asymptotic or assume one covariate only.
Abstract:
In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and lack of robustness against small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends, to the context of multi-sample analysis of second-order moment structures, results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and a general class of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
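As background on why fourth-order moments enter (a standard construction from the covariance-structure literature, not quoted from this paper): writing $d_i = \mathrm{vech}(z_i z_i^{\prime})$ and $s = \bar{d}$, the usual distribution-free estimate of $\Gamma$ is

$$\hat{\Gamma} = \frac{1}{n} \sum_{i=1}^{n} (d_i - s)(d_i - s)^{\prime},$$

a $p^{*} \times p^{*}$ matrix of fourth-order sample moments with $p^{*} = p(p+1)/2$; its size and small-sample instability are what make a second-order-moment substitute $\Omega$ attractive.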
Abstract:
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotically robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of corrected score test statistics.
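For reference, the scaled and adjusted corrections being extended take the following standard forms (written from the general covariance-structure literature, so the notation is an assumption rather than a quotation from this paper): with $T$ a test statistic with $r$ degrees of freedom and $\hat{U}\hat{\Gamma}$ the estimated product of the residual weight matrix and $\Gamma$,

$$\bar{T} = \frac{T}{\hat{c}}, \quad \hat{c} = \frac{\operatorname{tr}(\hat{U}\hat{\Gamma})}{r}; \qquad \tilde{T} = \frac{\hat{r}^{*}}{\operatorname{tr}(\hat{U}\hat{\Gamma})}\, T, \quad \hat{r}^{*} = \frac{[\operatorname{tr}(\hat{U}\hat{\Gamma})]^{2}}{\operatorname{tr}[(\hat{U}\hat{\Gamma})^{2}]},$$

with $\bar{T}$ referred to a chi-square distribution with $r$ degrees of freedom and $\tilde{T}$ to one with $\hat{r}^{*}$ degrees of freedom.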
Abstract:
Purpose: Cardiac 18F-FDG PET is considered the gold standard to assess myocardial metabolism and infarct size. The myocardial demand for glucose can be influenced by fasting and/or pharmacological preparation. In the rat, it has previously been shown that fasting combined with preconditioning with acipimox, a nicotinic acid derivative and lipid-lowering agent, dramatically increased 18F-FDG uptake in the myocardium. Strategies aimed at reducing infarct scar are evaluated in a variety of mouse models. PET would be particularly useful for assessing cardiac viability in the mouse. However, prior knowledge of the best preparation protocol is a prerequisite for accurate measurement of glucose uptake in mice. Therefore, we studied the effect of different protocols on 18F-FDG uptake in the mouse heart. Methods: Mice (n = 15) were separated into three treatment groups according to preconditioning and underwent an 18F-FDG PET scan. Group 1: no preconditioning (n = 3); Group 2: overnight fasting (n = 8); and Group 3: overnight fasting and acipimox (25 mg/kg SC) (n = 4). MicroPET images were processed with PMOD to determine the 18F-FDG mean standardized uptake value (SUV) at 30 min for the whole left ventricle (LV) and for each region of the 17-segment AHA model. For comparisons, we used the Mann-Whitney test and multilevel mixed-effects linear regression (Stata 11.0). Results: In total, 27 microPET scans were performed successfully in 15 animals. Overnight fasting led to a dramatic increase in LV-SUV compared to mice without preconditioning (8.6±0.7 g/mL vs. 3.7±1.1 g/mL, P<0.001). In addition, LV-SUV was slightly but not significantly higher in animals treated with acipimox compared to animals with overnight fasting alone (10.2±0.5 g/mL, P = 0.06). Fasting increased segmental SUV by 5.1±0.5 g/mL compared to free-feeding mice (from 3.7±0.8 g/mL to 8.8±0.4 g/mL, P<0.001); segmental SUV also significantly increased after administration of acipimox (from 8.8±0.4 g/mL to 10.1±0.4 g/mL, P<0.001). Conclusion: Overnight fasting leads to myocardial glucose deprivation and increases 18F-FDG myocardial uptake. Additional administration of acipimox further enhances myocardial 18F-FDG uptake, at least at the segmental level. Thus, preconditioning with acipimox may provide better image quality and may help in assessing segmental myocardial metabolism.
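For readers unfamiliar with the metric, the standardized uptake value reported above is conventionally defined as follows (standard definition, not specific to this study):

$$\mathrm{SUV} = \frac{C_{\mathrm{img}}\ [\mathrm{kBq/mL}]}{\text{injected dose } [\mathrm{kBq}] \,/\, \text{body weight } [\mathrm{g}]},$$

where $C_{\mathrm{img}}$ is the decay-corrected activity concentration in the region of interest; with weight in grams and a tissue density near 1 g/mL the ratio is essentially dimensionless, which is why SUV values are commonly quoted in g/mL as above.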
Abstract:
Small sample properties are of fundamental interest when only limited data is available. Exact inference is limited by constraints imposed by specific nonrandomized tests and of course also by the lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
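As a hedged formalization of the benchmark used above (our notation, not the paper's): for sample size $n$, level $\alpha$, null class $\mathcal{P}_0$ and alternative class $\mathcal{P}_1$, the minimal type II error over all (randomized) tests $\varphi$ is

$$\beta^{*}(n, \alpha) \;=\; \inf_{\varphi \,:\, \sup_{P \in \mathcal{P}_0} \mathbb{E}_{P}[\varphi] \le \alpha} \;\; \sup_{Q \in \mathcal{P}_1} \mathbb{E}_{Q}[1 - \varphi],$$

a value that can be obtained as part of a Nash equilibrium of a zero-sum game in which nature chooses (a mixture over) distributions and the statistician chooses a randomized test.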
Abstract:
This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
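A quick numerical illustration of the degeneracy mentioned above (a sketch of the phenomenon, not code from the paper; the sphericity-type statistic at the end is only meant as an example of a quadratic form of the eigenvalues):

```python
# When the dimension p exceeds the sample size n, the sample covariance
# matrix is singular, so statistics involving log det(S) or inv(S) -- as in
# likelihood ratio tests -- break down, while trace-based quadratic forms of
# the eigenvalues remain finite.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 100                       # p > n on purpose
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)          # p x p sample covariance

eigvals = np.linalg.eigvalsh(S)
print("rank(S) =", np.linalg.matrix_rank(S), "out of", p)  # at most n - 1
print("smallest eigenvalue ~", eigvals.min())              # numerically zero
# det(S) = 0 up to floating-point noise, so log det(S) blows up:
sign, logdet = np.linalg.slogdet(S)
print("log det(S) =", logdet)
# A quadratic form of the eigenvalues, e.g. the sphericity-type statistic
# (1/p) tr[(S / ((1/p) tr S) - I)^2], stays well-defined:
U = np.mean((eigvals / eigvals.mean() - 1.0) ** 2)
print("U =", U)
```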
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
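A minimal sketch of the shrinkage idea, assuming a scaled-identity target and an arbitrary intensity rather than the paper's structured target and optimal-intensity formula; the scikit-learn LedoitWolf estimator shown at the end implements a related shrinkage toward a scaled identity:

```python
# Illustrative covariance shrinkage; the paper's specific target and
# optimal-intensity formula are not reproduced here.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
T, N = 60, 100                        # few observations, many assets
returns = rng.standard_normal((T, N)) * 0.02

S = np.cov(returns, rowvar=False)     # noisy sample covariance (singular: N > T)

# Generic shrinkage: convex combination of S with a structured target F.
delta = 0.5                           # shrinkage intensity (assumed, not optimal)
F = np.eye(N) * np.trace(S) / N       # scaled-identity target (illustrative)
S_shrunk = delta * F + (1 - delta) * S

# Off-the-shelf alternative: scikit-learn's Ledoit-Wolf estimator, which
# estimates an optimal intensity for shrinkage toward a scaled identity.
lw = LedoitWolf().fit(returns)
print("estimated shrinkage intensity:", round(lw.shrinkage_, 3))
print("condition number: sample %.1e vs shrunk %.1e"
      % (np.linalg.cond(S), np.linalg.cond(lw.covariance_)))
```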
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small sample distribution of the GMM estimators in each of the models.
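The kind of simulation exercise described can be sketched as follows for a toy just-identified linear moment condition of our own choosing (not one of the paper's models):

```python
# Monte Carlo sketch of the small-sample distribution of a just-identified
# GMM (method-of-moments) estimator in a simple linear model.
import numpy as np

rng = np.random.default_rng(0)
beta_true, n, reps = 1.0, 50, 2000    # deliberately small sample size

estimates = np.empty(reps)
for r in range(reps):
    z = rng.standard_normal(n)               # instrument
    x = 0.5 * z + rng.standard_normal(n)     # regressor correlated with z
    y = beta_true * x + rng.standard_normal(n)
    # Moment condition E[z (y - beta x)] = 0  =>  beta_hat = (z'y) / (z'x)
    estimates[r] = (z @ y) / (z @ x)

print("mean of beta_hat:", estimates.mean().round(3))
print("small-sample bias:", (estimates.mean() - beta_true).round(3))
print("5%/95% quantiles:", np.quantile(estimates, [0.05, 0.95]).round(3))
```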
Abstract:
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
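For orientation, the classical Vapnik-Chervonenkis bound that results of this kind generalize can be stated as follows (quoted from the standard literature, not the paper's new inequality): for a class of events $\mathcal{A}$ with shatter coefficient $S_{\mathcal{A}}(n)$ and empirical measure $\mu_n$ of $n$ i.i.d. samples,

$$\Pr\Big\{ \sup_{A \in \mathcal{A}} \big| \mu_n(A) - \mu(A) \big| > \varepsilon \Big\} \;\le\; 8\, S_{\mathcal{A}}(n)\, e^{-n \varepsilon^{2}/32},$$

and uniform deviation bounds of this type translate directly into generalization bounds for empirical loss (risk) minimization.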
Abstract:
At 3 T, the effective wavelength of the RF field is comparable to the dimensions of the human body, resulting in B1 standing wave effects and extra variations in phase. This effect is accompanied by an increase in B0 field inhomogeneity compared to 1.5 T. This combination results in nonuniform magnetization preparation by the composite MLEV-weighted T2 preparation (T2 Prep) sequence used for coronary magnetic resonance angiography (MRA). A new adiabatic refocusing T2 Prep sequence is presented in which the magnetization is tipped into the transverse plane with a hard RF pulse and refocused using a pair of adiabatic fast-passage RF pulses. The isochromats are subsequently returned to the longitudinal axis with a hard RF pulse. Numerical simulations predict excellent suppression of artifacts originating from B1 inhomogeneity while achieving good contrast enhancement between the coronary arteries and surrounding tissue. This was confirmed by an in vivo study in which coronary MR angiograms were obtained without a T2 Prep, with an MLEV-weighted T2 Prep, and with the proposed adiabatic T2 Prep. Improved quantitative and qualitative coronary MRA image measures were achieved using the adiabatic T2 Prep at 3 T.
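As background on why a T2 Prep module enhances blood-myocardium contrast (a textbook approximation, not a result from this study): neglecting relaxation during the RF pulses, the longitudinal magnetization stored at the end of a preparation of duration $TE_{\mathrm{prep}}$ is approximately

$$M_z \approx M_0\, e^{-TE_{\mathrm{prep}}/T_2},$$

so long-$T_2$ arterial blood (roughly 150-250 ms) retains most of its signal while shorter-$T_2$ myocardium (roughly 40-50 ms) is suppressed; these $T_2$ values are approximate literature figures assumed here for illustration.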