66 results for Genetics Statistical methods
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry under experimental conditions, calculating the skew-symmetry statistic (Φ) at the group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation so that researchers can make statistical decisions. Furthermore, this study allows researchers to choose optimal experimental conditions for carrying out their research, as the power of the statistical test has also been estimated. This statistical test could be used in experimental social psychology studies in which researchers can control the group size and the number of interactions within dyads.
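The statistic and its Monte Carlo null distribution can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes Φ is the ratio of the squared norm of the skew-symmetric part of the sociomatrix to the squared norm of the whole matrix (0 for complete reciprocity, 0.5 for complete skew-symmetry), and that under the null each interaction within a dyad is equally likely to be initiated by either member.

```python
import random

def skew_symmetry(X):
    """Phi = ||K||^2 / ||X||^2, where K = (X - X') / 2 is the
    skew-symmetric part of the sociomatrix X (X[i][j] = amount of
    behaviour i addresses to j).  Ranges from 0 (full reciprocity)
    to 0.5 (full skew-symmetry)."""
    n = len(X)
    num = sum(((X[i][j] - X[j][i]) / 2.0) ** 2
              for i in range(n) for j in range(n))
    den = sum(X[i][j] ** 2 for i in range(n) for j in range(n))
    return num / den if den else 0.0

def monte_carlo_p(observed_phi, group_size, dyad_interactions,
                  reps=500, seed=7):
    """Estimate P(Phi >= observed) under the symmetry null by simulating
    groups in which each of a fixed number of interactions per dyad is
    initiated by either member with probability 1/2."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(reps):
        X = [[0] * group_size for _ in range(group_size)]
        for i in range(group_size):
            for j in range(i + 1, group_size):
                x_ij = sum(rng.random() < 0.5
                           for _ in range(dyad_interactions))
                X[i][j], X[j][i] = x_ij, dyad_interactions - x_ij
        if skew_symmetry(X) >= observed_phi:
            exceed += 1
    return exceed / reps
```

For a perfectly reciprocal matrix Φ = 0; for a group in which all behaviour flows one way within every dyad, Φ = 0.5, which is vanishingly rare under the null.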
Abstract:
The present work deals with quantifying group characteristics. Specifically, dyadic measures of interpersonal perceptions were used to forecast group performance. 46 groups of students, 24 of four people and 22 of five, were studied in a real educational assignment context, and marks were gathered as an indicator of group performance. Our results show that dyadic measures of interpersonal perceptions account for final marks: by means of linear regression analysis, 85% and 85.6% of the variance in group performance was explained for group sizes of four and five, respectively. Results reported in the scientific literature based on the individualistic approach are no larger than 18%. The results of the present study support the utility of dyadic approaches for predicting group performance in social contexts.
Abstract:
Workgroup diversity can be conceptualized as variety, separation, or disparity, so the proper operationalization of diversity depends on how a diversity dimension has been defined. Analytically, minimal diversity is obtained when there are no differences on an attribute among the members of a group, whereas maximal diversity has a different shape for each conceptualization of diversity. Previous work on diversity indexes indicated maximum values for variety (e.g., Blau's index and Teachman's index), separation (e.g., the standard deviation and the mean Euclidean distance), and disparity (e.g., the coefficient of variation and the Gini coefficient of concentration), although these maximum values are not valid for all group characteristics (i.e., group size and group size parity) and attribute scales (i.e., number of categories). We demonstrate analytically appropriate upper boundaries for conditional diversity determined by specific group characteristics, avoiding the bias related to absolute diversity. This will allow applied researchers to make better interpretations regarding the relationship between group diversity and group outcomes.
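The index families named above are easy to state in code. A minimal sketch of one representative index per conceptualization, using the standard textbook formulas rather than this paper's conditional boundaries; the comments note the unconditional maxima that the paper argues are not always attainable:

```python
import math

def blau(counts):
    """Variety: Blau's index 1 - sum(p_k^2).  Its unconditional maximum
    (k - 1) / k is reached only with k equally filled categories."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def teachman(counts):
    """Variety: Teachman's (entropy) index -sum(p_k ln p_k)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

def separation_sd(values):
    """Separation: population standard deviation of the attribute."""
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

def coefficient_of_variation(values):
    """Disparity: SD relative to the mean (requires a ratio scale)."""
    return separation_sd(values) / (sum(values) / len(values))

def gini(values):
    """Disparity: Gini coefficient sum|x_i - x_j| / (2 n^2 mean)."""
    n = len(values)
    m = sum(values) / n
    return sum(abs(a - b) for a in values for b in values) / (2 * n * n * m)
```

For example, two equally filled categories give Blau's index 0.5 (its maximum for k = 2), while a group with identical attribute values scores 0 on every index.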
Abstract:
This paper briefly reviews the problem of estimating the signal in time series of data obtained from ERP recordings. It focuses on the lowest-frequency components, such as the CNV. As an alternative, the use of the smoothing techniques of Exploratory Data Analysis (EDA) is proposed to improve the estimate obtained, in comparison with the technique of simple averaging across trials.
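A core EDA smoother of the kind alluded to, Tukey's repeated running medians, is tiny in code. A minimal sketch (illustrative only; the paper's exact smoother variant is not specified here):

```python
def median3(series):
    """One pass of Tukey's running-median-of-3 smoother (ends copied)."""
    if len(series) < 3:
        return list(series)
    out = [series[0]]
    for i in range(1, len(series) - 1):
        out.append(sorted(series[i - 1:i + 2])[1])
    out.append(series[-1])
    return out

def smooth_3R(series):
    """'3R': repeat the median-of-3 pass until the series stops changing."""
    prev, cur = None, list(series)
    while cur != prev:
        prev, cur = cur, median3(cur)
    return cur
```

Unlike averaging across trials, a running median removes an isolated spike entirely, which is why such smoothers are attractive for slow components like the CNV embedded in noisy single trials.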
Abstract:
When a social survey is conducted over a large territory, there is always the desire to apply analyses similar to those carried out on the full survey to smaller populations or territories, naturally using the survey's own data. The aim of this article is to show how each stratum of a stratified sample can serve as a sampling base for carrying out such analyses with full guarantees of precision or, at least, with calculable and acceptable guarantees, without increasing the sample size of the general survey.
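The "calculable guarantees" for a single stratum can be made concrete: for a proportion estimated from a simple random sample of n of the N units in that stratum, the margin of error follows the usual formula with a finite-population correction. A minimal sketch using the standard survey-sampling formulas (the symbols are generic, not taken from the article):

```python
import math

def stratum_margin_of_error(p, n, N, z=1.96):
    """Half-width of the confidence interval for a proportion p estimated
    from a simple random sample of n units drawn from a stratum of N
    units, with the finite-population correction (N - n) / (N - 1)."""
    fpc = (N - n) / (N - 1.0)
    return z * math.sqrt(p * (1.0 - p) / n * fpc)

def required_n(p, e, N, z=1.96):
    """Smallest n giving a margin of error <= e in a stratum of size N."""
    n0 = z * z * p * (1.0 - p) / (e * e)           # infinite-population n
    return math.ceil(n0 / (1.0 + (n0 - 1.0) / N))  # fpc-adjusted
```

For instance, with p = 0.5 and n = 100 in a very large stratum the margin of error is about ±9.8 points at 95% confidence, and the correction shows why a small stratum needs proportionally fewer cases for the same precision.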
Abstract:
Sickness absence (SA) is an important social, economic and public health issue. Identifying and understanding the determinants, whether biological, regulatory or health-services-related, of variability in SA duration is essential for better management of SA. The conditional frailty model (CFM) is useful when repeated SA events occur within the same individual, as it allows simultaneous analysis of event dependence and heterogeneity due to unknown, unmeasured, or unmeasurable factors. However, its use may encounter computational limitations when applied to very large data sets, as may frequently occur in the analysis of SA duration. To overcome this computational issue, we propose a Poisson-based conditional frailty model (CFPM) for repeated SA events that accounts for both event dependence and heterogeneity. To demonstrate the usefulness of the proposed model in the SA duration context, we used data from all non-work-related SA episodes that occurred in Catalonia (Spain) in 2007, initiated by either a diagnosis of neoplasm or of mental and behavioural disorders. As expected, the CFPM results were very similar to those of the CFM for both diagnosis groups. The CPU time for the CFPM was substantially shorter than for the CFM. The CFPM is a suitable alternative to the CFM in survival analysis with recurrent events, especially with large databases.
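The computational trick behind a Poisson reformulation of survival data can be illustrated with the standard episode-splitting step: each SA episode is cut at fixed time points into person-period records carrying an exposure (used as a log offset in the Poisson regression) and an event indicator on the final interval. This is a generic sketch of the piecewise-exponential/Poisson equivalence, not the authors' CFPM code:

```python
def split_episode(start, stop, event, cuts):
    """Split one episode (start, stop] at the given cut points into
    (interval_start, interval_stop, exposure, event) records.
    The event indicator is attached only to the last interval."""
    bounds = [start] + [c for c in sorted(cuts) if start < c < stop] + [stop]
    rows = [(a, b, b - a, 0) for a, b in zip(bounds, bounds[1:])]
    if rows:
        a, b, exposure, _ = rows[-1]
        rows[-1] = (a, b, exposure, event)
    return rows
```

Fitting then reduces to a Poisson model on the stacked records with log(exposure) as offset and interval dummies for the baseline hazard; the frailty enters as a subject-level random effect, which is where the CPU savings over a full conditional frailty fit come from on large databases.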
Abstract:
The aim of this work is to develop an application for on-line multivariate statistical control of an SBR plant. This tool must allow a complete multivariate statistical analysis of the batch currently in process, of the last completed batch, and of the remaining batches processed at the plant. The application is to be built in the LabVIEW environment; this choice was conditioned by the update of the plant's monitoring module, which is being developed in the same environment.
Abstract:
In the context of observed climate change impacts and their effect on agriculture and crop production, this study assesses the vulnerability of rural livelihoods through a case study in Karnataka, India. The social approach to climate change vulnerability in this case study includes defining and exploring the factors that determine farmers' vulnerability in four villages. Key informant interviews, farmer workshops and structured household interviews were used for data collection. To analyse the data, we adapted and applied three vulnerability indices: the Livelihood Vulnerability Index (LVI), the LVI-IPCC and the Livelihood Effect Index (LEI). We also used descriptive statistical methods. The data were analysed at two scales: the whole-sample level and the household level. The results from applying the indices at the whole-sample level show that this community's vulnerability to climate change is moderate, whereas the household-level results show that most households' vulnerability is high to very high; 15 key drivers of vulnerability were identified. The results and limitations of the study are discussed under the rural livelihoods framework on which the indices are based, allowing a better understanding of social behavioural trends, as well as a holistic and integrated view of the climate change, agriculture, and livelihood processes shaping vulnerability. We conclude that these indices, although a straightforward method for assessing vulnerability, have limitations that could account for inaccuracies and for their inability to be standardised for benchmarking; we therefore stress the need for further research.
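The index logic behind the LVI can be sketched: indicators are min-max normalized, averaged within major components, and the components are combined by a weighted average in which each weight is the number of indicators making up the component. This follows the commonly used balanced-weighted-average formulation of the LVI; the numbers below are placeholders, not the study's indicators:

```python
def normalize(value, vmin, vmax):
    """Min-max normalize an indicator to [0, 1]."""
    return (value - vmin) / (vmax - vmin)

def component_score(indicators):
    """A major component is the mean of its normalized indicators."""
    return sum(indicators) / len(indicators)

def lvi(components):
    """LVI = sum(w_i * M_i) / sum(w_i): M_i is a component score and
    w_i the number of indicators in component i, so every indicator
    contributes equally to the overall index."""
    num = sum(len(ind) * component_score(ind) for ind in components)
    den = sum(len(ind) for ind in components)
    return num / den
```

Computed per household, this is what yields the household-level vulnerability classification; computed on whole-sample means, it yields the community-level figure.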
Abstract:
Background: Combining different sources of information to improve the available biological knowledge is a current challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
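The two integration steps named above, one kernel per data source and then a combination, can be sketched directly. A minimal illustration using linear kernels and an unweighted average combination (the kernels and weights used in the paper may differ); the centered combined matrix is what a kernel PCA eigendecomposition would then operate on:

```python
def linear_kernel(X):
    """Gram matrix K[i][j] = <x_i, x_j> for one data source."""
    return [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]

def combine(kernels, weights=None):
    """Weighted sum of kernel matrices from several data sources,
    all computed on the same n samples."""
    n = len(kernels[0])
    weights = weights or [1.0 / len(kernels)] * len(kernels)
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

def center(K):
    """Double-center a kernel matrix: the standard kernel-PCA step that
    makes the implicit feature vectors zero-mean."""
    n = len(K)
    row = [sum(K[i]) / n for i in range(n)]
    grand = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + grand for j in range(n)]
            for i in range(n)]
```

After centering, the leading eigenvectors of the combined matrix give the joint low-dimensional sample representation onto which the input variables of each dataset can then be projected.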
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhoea, statistical methods are required that take into account the within-subject correlation. Methods: For repeated-events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we review the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum numbers of possible recurrences and sample sizes. We also study the methods' performance on a real dataset from a cohort study of bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable under censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although these are well developed theoretically.
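The three models differ mainly in how each subject's risk sets are laid out. A minimal sketch of building the per-subject data rows for each layout from the event times and the censoring time (generic counting-process formatting, independent of any fitting routine; it assumes censoring occurs after the last event):

```python
def ag_rows(event_times, censor_time):
    """AG: (start, stop, status) rows on total time, common baseline."""
    rows, prev = [], 0
    for t in sorted(event_times):
        rows.append((prev, t, 1))
        prev = t
    rows.append((prev, censor_time, 0))
    return rows

def pwp_rows(event_times, censor_time):
    """PWP (conditional): AG rows plus an event-number stratum; a subject
    is at risk for event k only after experiencing event k - 1."""
    return [(k + 1, a, b, s)
            for k, (a, b, s) in enumerate(ag_rows(event_times, censor_time))]

def wlw_rows(event_times, censor_time, max_events):
    """WLW (marginal): for each event number k, time runs from 0 to the
    k-th event (status 1) or to censoring (status 0), so every subject
    appears in every stratum."""
    times = sorted(event_times)
    return [(k, 0, times[k - 1], 1) if k <= len(times)
            else (k, 0, censor_time, 0)
            for k in range(1, max_events + 1)]
```

The WLW layout is what lets every subject contribute to every event number, which is also why its behaviour diverges from AG and PWP as within-subject correlation grows.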
Abstract:
Voltage fluctuations caused by parasitic impedances in the power supply rails are a major concern in modern ICs. These voltage fluctuations spread out to the various nodes of the internal sections, causing two effects: a degradation of performance, mainly impacting gate delays, and a noisy contamination of the quiescent levels of the logic that drives the node. Both effects are presented together in this paper, showing that both are a cause of errors in modern and future digital circuits. The paper groups both error mechanisms and shows how the global error rate is related to the voltage deviation and to the clock period of the digital system.
Abstract:
Background: Characterizing and comparing the determinants of cotinine concentrations in different populations should facilitate a better understanding of smoking patterns and addiction. This study describes and characterizes determinants of salivary cotinine concentration in a sample of Spanish adult daily smokers, both men and women. Methods: A cross-sectional study was carried out between March 2004 and December 2005 in a representative sample of 1245 people from the general population of Barcelona, Spain. A standard questionnaire was used to gather information on active tobacco smoking and passive exposure, and a saliva specimen was obtained to determine salivary cotinine concentration. Two hundred and eleven adult smokers (>16 years old) with complete data were included in the analysis. Determinants of cotinine concentrations were assessed using linear regression models. Results: Salivary cotinine concentration was associated with the reported number of cigarettes smoked in the previous 24 hours (R2 = 0.339; p < 0.05). The inclusion of a quadratic term for the number of cigarettes smoked in the regression analyses improved the fit (R2 = 0.386; p < 0.05). Cotinine concentration differed significantly by sex, with men having higher levels. Conclusion: This study shows that salivary cotinine concentration is significantly associated with the number of cigarettes smoked and with sex, but not with other smoking-related variables.
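The fit improvement from adding a quadratic term can be reproduced with ordinary least squares on a polynomial design matrix. A self-contained sketch of generic polynomial regression with a Gaussian-elimination solver (the data in the test are illustrative, not the study's):

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial coefficients [b0, b1, ...] via the
    normal equations (X'X) b = X'y, solved by Gaussian elimination
    with partial pivoting."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):                       # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):             # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, m))) / A[r][r]
    return coef

def r_squared(xs, ys, coef):
    """Proportion of variance explained by the fitted polynomial."""
    pred = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot
```

Comparing `r_squared` for degree 1 versus degree 2 on curved data mirrors the R2 = 0.339 to 0.386 improvement reported above.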
Abstract:
N = 1 designs imply repeated registrations of the behaviour of the same experimental unit, and the measurements obtained are often few due to time limitations, while they are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different procedures for analysing data from single-case AB designs are discussed, presenting their main features and reviewing the results reported by previous studies. Randomization tests represent one of the statistical methods that seemed to perform well in terms of controlling false alarm rates. In the experimental part of the study, a new simulation approach is used to test the performance of randomization tests, and the results suggest that the technique is not always robust against violation of the independence assumption. Moreover, sensitivity proved to be generally unacceptably low for series lengths of 30 and 40. Considering the available evidence, there does not seem to be an optimal technique for single-case data analysis.
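A randomization test for an AB design fits in a few lines: the intervention point is relocated over every admissible position, and the observed phase-mean difference is referred to that exhaustive null distribution. A minimal sketch (a one-sided test with the mean shift as statistic; the minimum phase length is an illustrative assumption, not a value from the study):

```python
def ab_randomization_test(series, intervention, min_phase=2):
    """Exhaustive randomization test for a single-case AB design.
    The statistic is mean(B) - mean(A); the null distribution is built
    by placing the intervention at every admissible point, so the
    p-value is the share of placements with a statistic at least as
    large as the one observed."""
    def stat(k):
        a, b = series[:k], series[k:]
        return sum(b) / len(b) - sum(a) / len(a)
    observed = stat(intervention)
    points = range(min_phase, len(series) - min_phase + 1)
    null = [stat(k) for k in points]
    p = sum(1 for s in null if s >= observed) / len(null)
    return observed, p
```

With short series the number of admissible intervention points, and hence the smallest attainable p-value, is severely limited, which is one concrete reason for the low sensitivity reported above for series lengths of 30 and 40.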
Abstract:
Once the data have been entered into the SPSS statistical package (Statistical Package for the Social Sciences) as a data matrix, it is time to consider optimizing that matrix in order to get the most out of the data, according to the type of analysis to be performed. To this end, SPSS itself offers a series of utilities that can be very helpful. These basic utilities can be grouped by function into: utilities for editing data, utilities for modifying variables, and the help options the package provides. Some of these utilities are presented below.
Abstract:
Discriminant analysis is a statistical method used to determine which variables, measured on objects or individuals, best explain the assignment of those objects or individuals to the groups to which they belong. It is a technique that allows us to check to what extent the independent variables considered in the research correctly classify the subjects or objects. We present and explain the main elements of the procedure for carrying out a discriminant analysis and its application using the SPSS statistical package, version 18: the development of the statistical model, the conditions for applying the analysis, the estimation and interpretation of the discriminant functions, the classification methods, and the validation of the results.
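The core computation of a two-group linear discriminant analysis, the part SPSS carries out under the hood, can be sketched for two predictor variables. A minimal illustration of Fisher's rule with a pooled 2x2 covariance matrix and equal priors (not a substitute for the full SPSS procedure with its significance tests and validation):

```python
def mean_vec(X):
    """Column means of a list of 2-D observations."""
    n = len(X)
    return [sum(row[j] for row in X) / n for j in range(len(X[0]))]

def pooled_cov2(X1, X2):
    """Pooled 2x2 covariance matrix of two groups of 2-D observations."""
    S = [[0.0, 0.0], [0.0, 0.0]]
    for X in (X1, X2):
        m = mean_vec(X)
        for row in X:
            d = [row[0] - m[0], row[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    S[i][j] += d[i] * d[j]
    dof = len(X1) + len(X2) - 2
    return [[S[i][j] / dof for j in range(2)] for i in range(2)]

def fisher_classifier(X1, X2):
    """Return classify(x) -> 1 or 2 using the discriminant direction
    w = S^-1 (m1 - m2), cutting at the midpoint of the group means."""
    m1, m2 = mean_vec(X1), mean_vec(X2)
    S = pooled_cov2(X1, X2)
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    diff = [m1[0] - m2[0], m1[1] - m2[1]]
    w = [Sinv[0][0] * diff[0] + Sinv[0][1] * diff[1],
         Sinv[1][0] * diff[0] + Sinv[1][1] * diff[1]]
    mid = [(m1[0] + m2[0]) / 2, (m1[1] + m2[1]) / 2]
    def classify(x):
        score = w[0] * (x[0] - mid[0]) + w[1] * (x[1] - mid[1])
        return 1 if score > 0 else 2
    return classify
```

The proportion of training cases the returned classifier labels correctly is exactly the "correct classification" percentage that SPSS reports in its classification table.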