49 results for Sample preparation
Abstract:
In a sample of 34 pathological gamblers in treatment, we examine the smoking characteristics of the smokers and the influence of the perceived consequences of smoking as a function of the stages of change (Prochaska, DiClemente & Norcross, 1992). The results show that, although the percentage of smokers is double that of the general population, smokers who are addicted to gambling are distributed across the stages of change in proportions similar to those of that population. Moreover, no relationship was found between the level of dependence measured with the Fagerström Test and the stage of change. Regarding the influence of the perceived consequences of tobacco use, subjects generally attach more importance to the harms of smoking than to its benefits. In the analyses by stage of change, significant differences appear between the group intending to quit smoking within the next six months (contemplators and prepared) and the group of non-smokers (ex-smokers and never-smokers) on the benefit "smoking helps you relax", which is rated higher by the former. The same differences appear between precontemplators and non-smokers on two harms, "smoking sometimes causes headaches" and "smoking sometimes causes tachycardia", which are rated higher by the latter. These results suggest tailoring interventions to each stage of change, so that people addicted to gambling can also succeed in giving up their addiction to tobacco.
Abstract:
Precision of released figures is not only an important quality feature of official statistics, it is also essential for a good understanding of the data. In this paper we show a case study of how precision could be conveyed if the multivariate nature of data has to be taken into account. In the official release of the Swiss earnings structure survey, the total salary is broken down into several wage components. We follow Aitchison's approach for the analysis of compositional data, which is based on logratios of components. We first present different multivariate analyses of the compositional data, whereby the wage components are broken down by economic activity classes. Then we propose a number of ways to assess precision.
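The logratio machinery behind Aitchison's approach can be sketched in a few lines. The wage-component proportions below are made up for illustration; only the centered logratio (clr) transform itself is standard.

```python
import numpy as np

def clr(x):
    """Centered logratio transform of a composition (Aitchison):
    log of each part divided by the geometric mean of all parts."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))   # geometric mean of the parts
    return np.log(x / g)

# Hypothetical composition: base salary, bonuses, other components
comp = np.array([0.80, 0.15, 0.05])
z = clr(comp)
# clr coordinates sum to zero by construction, which is what lets
# standard multivariate methods be applied to compositional data
```

The transform is scale-invariant: multiplying `comp` by any positive constant leaves `z` unchanged, so only the relative structure of the composition matters.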
Abstract:
In the last few years, many researchers have studied the presence of common dimensions of temperament in subjects with symptoms of anxiety. The aim of this study is to examine the association between temperamental dimensions (high negative affect and activity level) and anxiety problems in clinical preschool children. A total of 38 children, ages 3 to 6 years, from the Infant and Adolescent Mental Health Center of Girona and the Center of Diagnosis and Early Attention of Sabadell and Olot were evaluated by parents and psychologists. Their parents completed several screening scales and, subsequently, clinical child psychopathology professionals carried out diagnostic interviews with children from the sample who presented signs of anxiety. Findings showed that children with high levels of negative affect and low activity level have pronounced symptoms of anxiety. However, children with anxiety disorders do not present different temperament styles from their peers without these pathologies.
Abstract:
A problem in the archaeometric classification of Catalan Renaissance pottery is the fact that the clay supply of the pottery workshops was centrally organized by guilds, and therefore usually all potters of a single production centre produced chemically similar ceramics. However, when the glazes of the ware are analysed, a large number of inclusions is usually found in the glaze, which reveal technological differences between single workshops. These inclusions were used by the potters to opacify the transparent glaze and to achieve a white background for further decoration. In order to distinguish the different technological preparation procedures of the single workshops, the chemical composition of these inclusions, as well as their size in the two-dimensional cut, is recorded with a scanning electron microscope. Based on the latter, a frequency distribution of the apparent diameters is estimated for each sample and type of inclusion. Following an approach by S.D. Wicksell (1925), it is in principle possible to transform the distributions of the apparent 2D diameters back to those of the true three-dimensional bodies. The applicability of this approach and its practical problems are examined using different ways of kernel density estimation and Monte Carlo tests of the methodology. Finally, it is tested to what extent the obtained frequency distributions can be used to classify the pottery.
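The forward direction of Wicksell's problem, how true 3D sphere sizes map to the smaller apparent sizes seen in a planar cut, can be illustrated with a small Monte Carlo sketch. The fixed radius and sample size below are arbitrary choices for illustration; the paper's kernel-based inversion back to 3D is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def apparent_radii(true_radius, n):
    """Monte Carlo apparent (2D section) radii of spheres of a fixed
    radius cut by planes at uniformly distributed offsets from the
    centre (the setting of Wicksell's corpuscle problem)."""
    d = rng.uniform(0.0, true_radius, size=n)   # offset of cutting plane
    return np.sqrt(true_radius**2 - d**2)       # radius of circular section

r = apparent_radii(1.0, 200_000)
# For a fixed true radius R, the mean apparent radius is (pi/4) * R,
# so planar sections systematically underestimate the true size --
# the bias the inverse transformation has to undo
```

The inverse problem (recovering the 3D size distribution from observed section sizes) is ill-posed, which is why the abstract turns to kernel density estimation and Monte Carlo tests of the methodology.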
Abstract:
The present work reports on the preparation of thermoplastic starch (TPS) modified in situ with a diisocyanate derivative. Evidence of the condensation reaction between the hydroxyl groups of starch and glycerol and the isocyanate function (NCO) was confirmed by FTIR analysis. The evolution of the properties of the ensuing TPS, in terms of mechanical properties, microstructure, and water sensitivity, was investigated using tensile mechanical testing, dynamic mechanical thermal analysis (DMTA), X-ray diffraction (XRD), and water uptake measurements. The results showed that the addition of isocyanate did not affect the crystallinity of the TPS and slightly reduced the water uptake of the material. The evolution of the mechanical properties with ageing became less pronounced with the addition of the isocyanate once its amount exceeded 4 to 6 wt%.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviates from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require computing sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
Abstract:
We introduce several exact nonparametric tests for finite sample multivariate linear regressions, and compare their powers. This fills an important gap in the literature, where the only known nonparametric tests are either asymptotic or assume one covariate only.
Abstract:
In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and lack of robustness against small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends to the context of multi-sample analysis of second-order moment structures results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and general types of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
Abstract:
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotically robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of corrected score test statistics.
Abstract:
Small sample properties are of fundamental interest when only limited data is available. Exact inference is limited by constraints imposed by specific nonrandomized tests and, of course, also by lack of more data. These effects can be separated, as we propose to evaluate a test by comparing its type II error to the minimal type II error among all tests for the given sample. Game theory is used to establish this minimal type II error; the associated randomized test is characterized as part of a Nash equilibrium of a fictitious game against nature. We use this method to investigate sequential tests for the difference between two means when outcomes are constrained to belong to a given bounded set. Tests of inequality and of noninferiority are included. We find that inference in terms of type II error based on a balanced sample cannot be improved by sequential sampling, or even by observing counterfactual evidence, provided there is a reasonable gap between the hypotheses.
Abstract:
This paper analyzes whether standard covariance matrix tests work when dimensionality is large, and in particular larger than sample size. In the latter case, the singularity of the sample covariance matrix makes likelihood ratio tests degenerate, but other tests based on quadratic forms of sample covariance matrix eigenvalues remain well-defined. We study the consistency property and limiting distribution of these tests as dimensionality and sample size go to infinity together, with their ratio converging to a finite non-zero limit. We find that the existing test for sphericity is robust against high dimensionality, but not the test for equality of the covariance matrix to a given matrix. For the latter test, we develop a new correction to the existing test statistic that makes it robust against high dimensionality.
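As a rough sketch of the kind of statistic involved, John's U for the sphericity hypothesis is a quadratic form in the eigenvalues of the sample covariance matrix. The data dimensions below are arbitrary, and the paper's corrections for the joint p, n asymptotics are not reproduced; this only shows that U separates spherical from non-spherical data.

```python
import numpy as np

def sphericity_u(X):
    """John's U statistic for the sphericity hypothesis Sigma = c * I:
    (1/p) * tr[(S/m - I)^2], where m = tr(S)/p is the average sample
    variance. U is small when the eigenvalues of S are nearly equal."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    m = np.trace(S) / p
    A = S / m - np.eye(p)
    return np.trace(A @ A) / p

rng = np.random.default_rng(1)
X_sph = rng.standard_normal((200, 50))        # spherical: Sigma = I
X_non = X_sph * np.linspace(1.0, 3.0, 50)     # unequal column scales: not spherical
# sphericity_u(X_non) exceeds sphericity_u(X_sph), since the spread
# of eigenvalues grows when the population covariance departs from c * I
```

Even under the null, U does not vanish when p is comparable to n (it concentrates near p/n rather than 0), which is precisely the high-dimensional effect the tests studied in the paper must account for.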
Abstract:
The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
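A minimal sketch of linear shrinkage toward a scaled identity target follows. The optimal intensity formula derived in the paper is not reproduced here, so `delta` is a caller-supplied assumption, and the data are simulated for illustration.

```python
import numpy as np

def shrink_covariance(X, delta):
    """Linear shrinkage of the sample covariance matrix toward a scaled
    identity target: (1 - delta) * S + delta * mu * I, where mu is the
    average sample variance. delta in [0, 1] is the shrinkage intensity
    (chosen by the caller in this sketch)."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mu = np.trace(S) / p                       # scale of the identity target
    return (1.0 - delta) * S + delta * mu * np.eye(p)

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 40))              # few observations relative to dimension
S_hat = shrink_covariance(X, 0.5)
# Shrinkage raises the smallest eigenvalues and lowers the largest ones
# while preserving the trace, giving a better-conditioned input for a
# mean-variance optimizer
```

Pulling extreme eigenvalues toward the center is exactly the "most extreme coefficients towards more central values" effect described in the abstract: the sample covariance matrix overstates eigenvalue dispersion, and the optimizer would otherwise bet heavily on that noise.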
Abstract:
In this paper I explore the issue of nonlinearity (both in the data generation process and in the functional form that establishes the relationship between the parameters and the data) regarding the poor performance of the Generalized Method of Moments (GMM) in small samples. To this purpose I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small sample distribution of the GMM estimators in each of the models.
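The simulation idea, drawing many small samples and inspecting the finite-sample distribution of a moment-based estimator, can be sketched with a deliberately simple stand-in model: an exponential rate estimated by the method of moments. This is an illustration of the approach only, not the growth model or the GMM setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def mm_estimates(true_lambda, n, reps):
    """Finite-sample distribution of the method-of-moments estimator of
    an exponential rate: lambda_hat = 1 / sample mean, over many
    replications of a sample of size n."""
    draws = rng.exponential(1.0 / true_lambda, size=(reps, n))
    return 1.0 / draws.mean(axis=1)

small = mm_estimates(2.0, n=10, reps=50_000)      # small-sample distribution
large = mm_estimates(2.0, n=1_000, reps=50_000)   # near-asymptotic distribution
# The estimator is biased upward in small samples (E[lambda_hat] = lambda * n/(n-1)
# for the exponential); the bias vanishes as n grows, mirroring the gap between
# finite-sample and asymptotic behavior that the paper studies for GMM
```

The nonlinearity of the mapping from the sample moment to the estimate (here, the reciprocal) is precisely what creates the small-sample bias, which is the mechanism the abstract isolates by moving from linear to nonlinear models.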
Abstract:
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.