897 results for Sample size


Relevance: 60.00%

Publisher:

Abstract:

ABSTRACT: BACKGROUND: Most scales that assess the presence and severity of psychotic symptoms measure a broad range of experiences and behaviours, which restricts the detailed measurement of specific symptoms such as delusions or hallucinations. The Psychotic Symptom Rating Scales (PSYRATS) is a clinical assessment tool that focuses on the detailed measurement of these core symptoms. The goal of this study was to examine the psychometric properties of the French version of the PSYRATS. METHODS: A sample of 103 outpatients with schizophrenia or schizoaffective disorder who had presented persistent psychotic symptoms over the previous three months was assessed using the PSYRATS. Seventy-five of these participants were also assessed with the Positive And Negative Syndrome Scale (PANSS). RESULTS: Intraclass correlation coefficients (ICCs) exceeded .90 for all items of the PSYRATS. Factor analysis replicated the factorial structure of the original version of the delusions scale. As in previous replications, the factor structure of the hallucinations scale was only partially replicated. Convergent validity analyses indicated that some specific PSYRATS items do not correlate with the PANSS delusion or hallucination items. The distress items of the PSYRATS are negatively correlated with the grandiosity scale of the PANSS. CONCLUSIONS: The results of this study are limited by the relatively small sample size as well as by the selection of participants with persistent symptoms. The French version of the PSYRATS partially replicates previously published results. Differences in the factor structure of the hallucinations scale might be explained by the greater variability of its elements. Future development of the scale should take grandiosity into account in order to better capture the details of the psychotic experience.
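
The abstract reports ICCs above .90 but does not say which ICC form was used. Purely as an illustration, the sketch below computes a two-way random-effects, single-rater ICC (Shrout and Fleiss ICC(2,1)) for hypothetical item ratings; the data and the choice of ICC form are assumptions, not taken from the study.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, single-rater ICC (Shrout & Fleiss ICC(2,1)).

    `ratings` is an (n_subjects, k_raters) array of scores for one item.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means

    # Mean squares from a two-way ANOVA without replication
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: 10 patients rated on one PSYRATS item by 2 raters
rng = np.random.default_rng(0)
true_severity = rng.integers(0, 5, size=10)
ratings = true_severity[:, None] + rng.normal(0, 0.3, size=(10, 2))
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```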

Relevance: 60.00%

Publisher:

Abstract:

A national survey designed for estimating a specific population quantity is sometimes also used to estimate that quantity for a small area, such as a province. Budget constraints do not allow a greater sample size for the small area, so other means of improving estimation have to be devised. We investigate such methods and assess them by a Monte Carlo study. In particular, we explore how a complementary survey can be exploited in small area estimation. We use the context of the Spanish Labour Force Survey (EPA) and the Barometer in Spain for our study.
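
One generic way of borrowing strength for a small area, sketched below on synthetic data, is a composite estimator that weights a direct (small-sample) estimate and an estimate from a second, possibly biased source by their estimated variances. This is only an illustration of the general idea, not the estimator studied in the paper; the sample sizes, rates and bias are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: a province-level rate estimated from a small subsample
# of a national survey, plus a larger complementary survey that may be biased.
true_rate = 0.14
n_direct, n_complementary = 200, 800   # assumed sample sizes

direct = rng.binomial(n_direct, true_rate) / n_direct
complementary = rng.binomial(n_complementary, true_rate + 0.01) / n_complementary

# Composite (shrinkage) estimator: weight each source by the inverse of
# its estimated variance.
var_direct = direct * (1 - direct) / n_direct
var_comp = complementary * (1 - complementary) / n_complementary
w = var_comp / (var_direct + var_comp)
composite = w * direct + (1 - w) * complementary

print(f"direct={direct:.3f}  complementary={complementary:.3f}  composite={composite:.3f}")
```

In a Monte Carlo assessment like the one described in the abstract, this calculation would be repeated over many simulated samples and the mean squared errors of the competing estimators compared.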

Relevance: 60.00%

Publisher:

Abstract:

Many theories, most famously Max Weber's essay on the Protestant ethic, have hypothesized that Protestantism should have favored economic development. With their considerable religious heterogeneity and stability of denominational affiliations until the 19th century, the German Lands of the Holy Roman Empire present an ideal testing ground for this hypothesis. Using population figures in a dataset comprising 272 cities in the years 1300-1900, I find no effects of Protestantism on economic growth. The finding is robust to the inclusion of a variety of controls, and does not appear to depend on data selection or small sample size. In addition, Protestantism has no effect when interacted with other likely determinants of economic development. I also analyze the endogeneity of religious choice; instrumental variables estimates of the effects of Protestantism are similar to the OLS results.
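
As a reminder of how instrumental variables estimation contrasts with OLS in the presence of an endogenous regressor, here is a schematic two-stage least squares computation on synthetic data. The instrument, the variable names and the data-generating process are all hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 272  # number of cities in the abstract; the data here are synthetic

# Hypothetical data-generating process with an endogenous regressor:
# `protestant` is correlated with an unobserved confounder `u`, and `z` is a
# hypothetical instrument affecting religion but not growth directly.
u = rng.normal(size=n)
z = rng.normal(size=n)
protestant = (z + u + rng.normal(size=n) > 0).astype(float)
growth = 0.0 * protestant + 0.5 * u + rng.normal(size=n)  # true effect is zero

X = np.column_stack([np.ones(n), protestant])
Z = np.column_stack([np.ones(n), z])

# OLS (biased upward here because of the confounder u)
beta_ols = np.linalg.lstsq(X, growth, rcond=None)[0]

# 2SLS: regress the regressors on the instruments, then regress the outcome
# on the fitted values from that first stage.
first_stage = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_iv = np.linalg.lstsq(first_stage, growth, rcond=None)[0]

print(f"OLS estimate:  {beta_ols[1]:.3f}")
print(f"2SLS estimate: {beta_iv[1]:.3f}")
```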

Relevance: 60.00%

Publisher:

Abstract:

Structural equation models (SEM) are commonly used to analyze the relationship between variables, some of which may be latent, such as individual "attitude" to and "behavior" concerning specific issues. A number of difficulties arise when we want to compare a large number of groups, each with a large sample size, and the manifest variables are distinctly non-normally distributed. Using a specific data set, we evaluate the appropriateness of the following alternative SEM approaches: multiple group versus MIMIC models, continuous versus ordinal variable estimation methods, and normal theory versus non-normal estimation methods. The approaches are applied to the ISSP-1993 Environmental data set, with the purpose of exploring variation in the mean level of the "attitude" and "behavior" variables concerning environmental issues, and their mutual relationship, across countries. Issues of both theoretical and practical relevance arise in the course of this application.
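
For readers unfamiliar with the MIMIC (multiple indicators, multiple causes) alternative to multiple-group SEM, a generic specification in standard SEM notation is, roughly,

$$\eta = \Gamma x + \zeta, \qquad y = \Lambda \eta + \varepsilon,$$

where $\eta$ collects the latent "attitude" and "behavior" factors, $y$ the manifest indicators, $x$ the observed causes (in a cross-country comparison, typically country dummies), $\Lambda$ and $\Gamma$ the loading and regression matrices, and $\zeta$, $\varepsilon$ the disturbances. The notation is generic and is not the authors' own parameterization.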

Relevance: 60.00%

Publisher:

Abstract:

We continue the development of a method for the selection of a bandwidth or a number of design parameters in density estimation. We provide explicit non-asymptotic density-free inequalities that relate the $L_1$ error of the selected estimate with that of the best possible estimate, and study in particular the connection between the richness of the class of density estimates and the performance bound. For example, our method allows one to pick the bandwidth and kernel order in the kernel estimate simultaneously and still assure that for {\it all densities}, the $L_1$ error of the corresponding kernel estimate is not larger than about three times the error of the estimate with the optimal smoothing factor and kernel plus a constant times $\sqrt{\log n/n}$, where $n$ is the sample size, and the constant only depends on the complexity of the family of kernels used in the estimate. Further applications include multivariate kernel estimates, transformed kernel estimates, and variable kernel estimates.
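
Written out, the kernel-estimate guarantee described above has, up to the exact constants and qualifiers used by the authors, roughly the form

$$\int \bigl|\hat f_{n,\hat\theta} - f\bigr| \;\le\; 3\,\inf_{\theta\in\Theta}\int \bigl|\hat f_{n,\theta} - f\bigr| \;+\; C\sqrt{\frac{\log n}{n}} \qquad \text{for all densities } f,$$

where $\Theta$ indexes the candidate bandwidth/kernel pairs, $\hat\theta$ is the pair chosen by the selection method, and $C$ depends only on the complexity of the family of kernels.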

Relevance: 60.00%

Publisher:

Abstract:

Summary points:
- The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or an outcome variable (disease)
- Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null
- Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant
- Increasing sample size will help minimise the impact of measurement error in an outcome variable, but will only make estimates more precisely wrong when the error is in an exposure variable (see the simulation sketch below)
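
A minimal simulation of these two situations, with invented parameter values, is sketched below: classical (random) error is added either to the exposure or to the outcome of a simple linear model, and the resulting slope estimates are compared across many replications.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, n_sims = 500, 0.5, 2000

slopes_clean, slopes_err_x, slopes_err_y = [], [], []
for _ in range(n_sims):
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)

    slopes_clean.append(np.polyfit(x, y, 1)[0])

    # Random measurement error in the exposure: attenuation towards the null.
    x_noisy = x + rng.normal(scale=1.0, size=n)
    slopes_err_x.append(np.polyfit(x_noisy, y, 1)[0])

    # Random measurement error in the outcome: unbiased slope, but less precise.
    y_noisy = y + rng.normal(scale=1.0, size=n)
    slopes_err_y.append(np.polyfit(x, y_noisy, 1)[0])

for label, s in [("no added error", slopes_clean),
                 ("error in exposure", slopes_err_x),
                 ("error in outcome", slopes_err_y)]:
    print(f"{label:18s} mean slope {np.mean(s):.3f}, SD {np.std(s):.3f}")
```

With error in the exposure the average slope is pulled towards zero; with error in the outcome the average slope is unchanged but its sampling spread, and hence the confidence interval, is wider.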

Relevance: 60.00%

Publisher:

Abstract:

Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. In order to compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies at four sample sizes. The sampled data were then fed into a GLM to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's K agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following characteristics to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
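
The virtual-species comparison can be mimicked on a toy grid. The sketch below is entirely synthetic (the environmental gradient, the species response and the sample size are invented) and contrasts only a random and a regular ('systematic') sample of equal size, scoring predictions with Pearson's r against the known suitability and Cohen's kappa against the known presence/absence.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)

# Virtual landscape: one spatially structured environmental covariate on a
# 100 x 100 grid, and a virtual species whose true suitability depends on it.
env = rng.normal(size=(100, 100)).cumsum(axis=0) / 10
suitability = 1 / (1 + np.exp(-(env - env.mean())))
presence = rng.binomial(1, suitability)

env_flat, suit_flat, pres_flat = env.ravel(), suitability.ravel(), presence.ravel()
n_cells, n_sample = env_flat.size, 200

def evaluate(sample_idx):
    """Fit a GLM (logistic regression) on the sampled cells and score the
    predictions over the whole landscape."""
    model = LogisticRegression().fit(env_flat[sample_idx].reshape(-1, 1),
                                     pres_flat[sample_idx])
    prob = model.predict_proba(env_flat.reshape(-1, 1))[:, 1]
    r, _ = pearsonr(prob, suit_flat)                                 # suitability accuracy
    kappa = cohen_kappa_score(pres_flat, (prob >= 0.5).astype(int))  # presence/absence accuracy
    return r, kappa

random_idx = rng.choice(n_cells, size=n_sample, replace=False)
regular_idx = np.arange(0, n_cells, n_cells // n_sample)[:n_sample]  # systematic design

for name, idx in [("random", random_idx), ("regular", regular_idx)]:
    r, kappa = evaluate(idx)
    print(f"{name:8s}  Pearson r = {r:.2f}  Cohen's kappa = {kappa:.2f}")
```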

Relevance: 60.00%

Publisher:

Abstract:

Friedman et al. report that hemodialysis patients with the highest levels of n-3 fatty acids had impressively low odds of sudden cardiac death. The study is limited by a small sample size, and the analysis relies on only a single baseline measurement of blood levels. Moreover, recent randomized evidence fails to support a protective effect of n-3 fatty acids against sudden death in nonrenal patients. More evidence is needed before fish oil can be advocated in this setting.

Relevance: 60.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE: To investigate the effect of chronic hyperglycemia on cerebral microvascular remodeling using perfusion computed tomography. METHODS: We retrospectively identified 26 patients from our registry of 2453 patients who underwent a perfusion computed tomographic study and had their hemoglobin A1c (HbA1c) measured. These 26 patients were divided into 2 groups: those with HbA1c>6.5% (n=15) and those with HbA1c≤6.5% (n=11). Perfusion computed tomographic studies were processed using delay-corrected, deconvolution-based software. Perfusion computed tomographic values were compared between the 2 patient groups, including mean transit time, which relates to cerebral capillary architecture and length. RESULTS: Mean transit time values in the nonischemic cerebral hemisphere were significantly longer in the patients with HbA1c>6.5% (P=0.033), especially in the white matter (P=0.005). A significant correlation (R=0.469; P=0.016) between mean transit time and HbA1c level was observed. CONCLUSIONS: Our results from a small sample suggest that chronic hyperglycemia may be associated with cerebral microvascular remodeling in humans. Additional prospective studies with larger sample sizes are required to confirm this observation.

Relevance: 60.00%

Publisher:

Abstract:

BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and of differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference from previous studies, which have mostly focused on predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To obtain realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
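
A stripped-down illustration of the optimism this setup can produce is sketched below. It uses fully synthetic data rather than the real-expression-based simulation described above, a single classifier, and plain k-fold cross-validation instead of the nested scheme: when group and batch are confounded, cross-validation on the pooled data can overestimate the accuracy obtained on an independent, batch-free data set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

def simulate_batch(n, n_genes, group_frac, batch_shift):
    """Synthetic expression matrix for one batch: a handful of genes differ
    between groups, and every gene receives a batch-specific shift."""
    y = (rng.random(n) < group_frac).astype(int)
    X = rng.normal(size=(n, n_genes)) + batch_shift
    X[:, :10] += y[:, None] * 0.8   # 10 truly differential genes
    return X, y

# Confounded design: batch 1 is mostly 'control', batch 2 mostly 'treated',
# and each batch has its own technical shift.
X1, y1 = simulate_batch(100, 200, group_frac=0.2, batch_shift=0.0)
X2, y2 = simulate_batch(100, 200, group_frac=0.8, batch_shift=0.6)

# Cross-validate on the pooled, confounded data ...
X_pool, y_pool = np.vstack([X1, X2]), np.concatenate([y1, y2])
clf = LogisticRegression(max_iter=1000)
cv_acc = cross_val_score(clf, X_pool, y_pool, cv=5).mean()

# ... then check performance on an independent batch with no shift.
X_new, y_new = simulate_batch(200, 200, group_frac=0.5, batch_shift=0.0)
ext_acc = clf.fit(X_pool, y_pool).score(X_new, y_new)

print(f"cross-validated accuracy (confounded batches): {cv_acc:.2f}")
print(f"accuracy on an independent batch:              {ext_acc:.2f}")
```

The gap between the two printed accuracies is the kind of estimation bias the study is concerned with.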

Relevance: 60.00%

Publisher:

Abstract:

Fish acute toxicity tests play an important role in environmental risk assessment and hazard classification because they allow first estimates of the relative toxicity of various chemicals in various species. However, such tests need to be carefully interpreted. Here we briefly summarize the main issues, which are linked to the genetics and condition of the test animals, the standardized test situations, the uncertainty about whether a given test species can be seen as representative of a given fish fauna, the often missing knowledge about possible interaction effects, especially with micropathogens, and statistical problems such as small sample sizes and, in some cases, pseudoreplication. We suggest that multi-factorial embryo tests on ecologically relevant species solve many of these issues, and we briefly explain how such tests could be conducted to avoid the weaker points of fish acute toxicity tests.

Relevance: 60.00%

Publisher:

Abstract:

BACKGROUND/OBJECTIVES: To assess the distribution of interleukin (IL)-1β, IL-6, tumour necrosis factor (TNF)-α and C-reactive protein (CRP) according to different definitions of metabolically healthy obesity (MHO). SUBJECTS/METHODS: A total of 881 obese (body mass index (BMI) ≥30 kg/m2) subjects derived from the population-based CoLaus Study participated in this study. MHO was defined using six sets of criteria including different combinations of waist, blood pressure, total high-density lipoprotein cholesterol or low-density lipoprotein cholesterol, triglycerides, fasting glucose, homeostasis model, high-sensitivity CRP, and personal history of cardiovascular, respiratory or metabolic diseases. IL-1β, IL-6 and TNF-α were assessed by multiplexed flow cytometric assay. CRP was assessed by immunoassay. RESULTS: On bivariate analysis some, but not all, definitions of MHO led to significantly lower levels of IL-6, TNF-α and CRP compared with non-MH obese subjects. Most of these differences became nonsignificant after multivariate analysis. An a posteriori analysis showed a statistical power of between 9 and 79%, depending on the inflammatory biomarker and MHO definition considered. Further increasing the sample size by extending to overweight and obese individuals (BMI ≥25 kg/m2, n=2917) showed metabolically healthy status to be significantly associated with lower levels of CRP, while no association was found for IL-1β. Significantly lower IL-6 and TNF-α levels were also found with some but not all MHO definitions, the differences in IL-6 becoming nonsignificant after adjusting for abdominal obesity or percent body fat. CONCLUSIONS: MHO individuals present with decreased levels of CRP and, depending on the MHO definition, also with decreased levels of IL-6 and TNF-α. Conversely, no association with IL-1β levels was found.
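
As a reminder of what such post hoc power figures mean, the sketch below computes the power of a generic two-sample comparison for a few standardized effect sizes. The split of the 881 obese subjects into 300 MHO and 581 non-MHO, and the effect sizes, are hypothetical and are not the study's actual values.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical group sizes and standardized differences in a biomarker
analysis = TTestIndPower()
for d in (0.1, 0.2, 0.3, 0.5):
    power = analysis.power(effect_size=d, nobs1=300, ratio=581 / 300, alpha=0.05)
    print(f"standardized difference d = {d:.1f}: power = {power:.2f}")
```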

Relevance: 60.00%

Publisher:

Abstract:

In this article we address the use and importance of the statistical tools that are mainly employed in medical studies in the fields of oncology and haematology, but which are applicable to many other medical, experimental or industrial settings. The aim of this work is to present, in a clear and precise way, the statistical methodology needed to analyse the data obtained in such studies rigorously and concisely with respect to the working hypotheses posed by the investigators. The chosen measure of treatment response and the type of study selected determine the statistical methods to be used in the analysis of the study data, as well as the sample size. Through the correct application of statistical analysis and adequate planning, it is possible to determine whether the relationship found between exposure to a treatment and an outcome is due to chance or, on the contrary, reflects a non-random association that could establish causality. We have reviewed the main designs of the most widely used medical studies, such as clinical trials and observational studies (cohort, case-control, prevalence and ecological studies). We also include a section on sample size calculation and how to carry it out, on which statistical test should be used, on measures of effect strength such as the odds ratio (OR) and relative risk (RR), and on survival analysis. Examples are provided in most sections of the article, along with the most relevant references.
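
As a small companion to the sections on sample size, odds ratios and relative risks, the sketch below works through a hypothetical 2x2 table and a standard two-proportion sample size calculation; all numbers are invented for illustration and are not taken from the article.

```python
import math
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical 2x2 table: rows = exposed/unexposed, columns = event/no event.
a, b = 30, 70   # exposed:   events, non-events
c, d = 15, 85   # unexposed: events, non-events

odds_ratio = (a / b) / (c / d)
relative_risk = (a / (a + b)) / (c / (c + d))
print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}")

# Sample size per group to detect event rates of 30% vs 15%
# with 80% power at a two-sided alpha of 5%.
effect = proportion_effectsize(0.30, 0.15)
n_per_group = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                           power=0.8, ratio=1)
print(f"required sample size per group ≈ {math.ceil(n_per_group)}")
```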

Relevance: 60.00%

Publisher:

Abstract:

Wasps and their relatives from the Lower Cretaceous lithographic limestones of Spain have been studied. Thirty specimens representing 30 species (4 of them with undetermined placement), at least 21 genera and 11 families are recorded. We erect one new family, Andrenelidae, 6 new genera and 11 new species: Meiaghilarella cretacica n.gen., n.sp. (Sepulcidae Ghilarellinae), Eosyntexis catalonicus n.sp., Cretosyntexis montsecensis n.gen., n.sp. (Anaxyelidae Syntexinae), Montsecephialtites zherikhini n.gen., n.sp. (Ephialtitidae Ephialtitinae), Karataus hispanicus n.sp. (Ephialtitidae Symphytopterinae), Manlaya ansorgei n.sp. (Gasteruptiidae Baissinae), Andrenelia pennata n.gen., n.sp. (Andrenelidae n.fam.), Cretoserphus gomezi n.gen., n.sp. (Mesoserphidae), Montsecosphex jarzembowskii n.gen., n.sp., Angarosphex penyalveri n.sp., Pompilopterus (?) noguerensis n.sp. (Sphecidae Angarosphecinae), Cretoscolia conquensis n.sp. (Scoliidae Archaeoscoliinae). The Mesozoic family Ephialtitidae is revisited based on a restudy of the type species. We compare these Spanish Cretaceous assemblages with others from various parts of the world: Central and Eastern Asia, England, Australia and Brazil. The number of genera and families identified in the Spanish fossil sites is almost the same as in the English Purbeck and Wealden. The absence of some hymenopteran groups, such as Xyelidae, is consistent with the warm climate known to have existed in Spain during the Early Cretaceous. We conclude that both the La Cabrúa and La Pedrera assemblages, the two sites that have yielded the greatest number of species, correspond to the Lower Cretaceous "Baissin type" (sensu Rasnitsyn et al., 1998), but include some Jurassic "survivors". The La Pedrera assemblage fits equally well in the "angarosphecine subtype", while La Cabrúa roughly corresponds to the "proctotrupid" one, although it shows a comparatively high proportion of angarosphecins. This may suggest: a) a possible asynchrony between the two fossil sites, b) environmental differences not reflected in the lithological record, c) different taphonomic processes, and/or d) an insufficient sample size to reflect the true composition of the source populations. The La Pedrera assemblage is very similar to those from the Weald Clay (England), Bon Tsagan (Mongolia) and Santana (Brazil). La Cabrúa approaches, to some extent, though does not entirely agree with, the Purbeck (UK), Koonwarra (Australia) and most Lower Cretaceous Asian assemblages.

Relevance: 60.00%

Publisher:

Abstract:

Acute infection with the hepatitis C virus (HCV) induces a wide range of innate and adaptive immune responses. A total of 20-50% of acutely HCV-infected individuals permanently control the virus, referred to as 'spontaneous hepatitis C clearance', while the infection progresses to chronic hepatitis C in the majority of cases. Numerous studies have examined host genetic determinants of hepatitis C infection outcome and revealed the influence of genetic polymorphisms of human leukocyte antigens, killer immunoglobulin-like receptors, chemokines, interleukins and interferon-stimulated genes on spontaneous hepatitis C clearance. However, most genetic associations were not confirmed in independent cohorts, revealed opposing results in diverse populations or were limited by varying definitions of hepatitis C outcomes or small sample size. Coordinated efforts are needed in the search for key genetic determinants of spontaneous hepatitis C clearance that include well-conducted candidate genetic and genome-wide association studies, direct sequencing and follow-up functional studies.