993 results for Sediments marins -- Mètodes estadístics
Abstract:
High-resolution side scan sonar has been used to map the seafloor of the Ría de Pontevedra. Four backscatter patterns have been mapped within the Ría: (1) a pattern of isolated reflections, correlated with granite and metamorphic outcrops and located close to coastal prominences and the Ons and Onza Islands; (2) a pattern of strong reflectivity, usually located around the basement outcrops and near the coastline and produced by coarse-grained sediment; (3) a pattern of weak backscatter, correlated with fine sand to mud and covering large areas in the central, deep part of the Ría, where bottom currents are weak; it is generally featureless except where pockmarks and anthropogenic features are present; (4) patches of strong and weak backscatter, located at the boundary between coarse- and fine-grained sediments and caused by strong bottom currents. The presence of megaripples associated with both the strong-reflectivity pattern and the sedimentary patches indicates bedload transport of sediment during high-energy conditions (storms). Side scan sonar records, together with supplementary bathymetry, bottom samples and hydrodynamic data, reveal that the distribution of seafloor sediment is strongly related to oceanographic processes and to the particular morphology and topography of the Ría.
Abstract:
Relationships between the facies associations of the three lithostratigraphic units of the turbidite complex and their assignment to depositional units within submarine sedimentation models.
Abstract:
Nowadays it is difficult to speak of statistical procedures for the quantitative analysis of data without referring to computing applied to research. These computing resources are often based on software packages intended to assist the researcher in the data analysis phase. One of the most refined and complete packages at present is SPSS (Statistical Package for the Social Sciences). SPSS is a suite of programs for carrying out statistical analysis of data. It is a very powerful statistical application, of which successive versions have been developed since its origins in the 1970s. The computer output presented in this guide corresponds to version 11.0.1; although its appearance has changed over time, the program works in much the same way across versions. Before starting to use the SPSS applications, it is important to become familiar with the windows that will be used most often. On opening SPSS, the first thing encountered is the Data Editor. This window essentially displays the data as they are entered. The Data Editor offers two views, Data View and Variable View, which can be selected from the two tabs at the bottom. Data View contains the general menu and the data matrix; this matrix is structured with cases in the rows and variables in the columns.
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed from the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of those distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. A further aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases contains considerably fewer measurement times, Type I errors may become too likely and, hence, the decision-making process carried out by applied researchers may be jeopardized.
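A minimal sketch of the procedure the abstract describes, with simulated data: the test statistic (here the absolute difference between phase means, one common choice) is computed for every admissible intervention point, the collection of those values forms the randomization distribution, and the p-value is the proportion of assignments at least as extreme as the observed one. Function names and the minimum phase length are illustrative.

```python
import numpy as np

def randomization_test(y, actual_start, min_phase_len=3):
    """Randomization test for a two-phase (AB) single-case design.

    y: 1-D array of repeated measurements.
    actual_start: index at which the intervention was actually introduced.
    min_phase_len: smallest number of observations allowed in either phase.
    """
    n = len(y)

    # Test statistic: absolute difference between phase A and phase B means.
    def stat(start):
        return abs(y[start:].mean() - y[:start].mean())

    observed = stat(actual_start)
    # Randomization distribution: the statistic under every admissible
    # intervention point (all possible random assignments).
    starts = range(min_phase_len, n - min_phase_len + 1)
    distribution = np.array([stat(s) for s in starts])
    # p-value: proportion of assignments at least as extreme as observed.
    p_value = np.mean(distribution >= observed)
    return observed, p_value

# Example with simulated data containing no true intervention effect.
rng = np.random.default_rng(42)
series = rng.normal(size=20)
obs, p = randomization_test(series, actual_start=10)
print(f"statistic={obs:.3f}, p={p:.3f}")
```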
Abstract:
Interdependence is the main feature of dyadic relationships and, in recent years, various statistical procedures have been proposed for quantifying and testing this social attribute in different dyadic designs. The purpose of this paper is to develop several functions for this kind of statistical test in an R package, nonindependence, for use by applied social researchers. A Graphical User Interface (GUI) has also been developed to facilitate the use of the functions included in the package. Examples drawn from psychological research and simulated data illustrate how the software works.
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study allows researchers to choose the optimal experimental conditions for carrying out their research, since the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers can control the group size and the number of interactions within dyads.
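A sketch of one common formulation of the statistic, assuming Φ is defined as the share of the sociomatrix's total sum of squares carried by its skew-symmetric part (so Φ = 0 under complete reciprocity and Φ = 0.5 under complete skew-symmetry), with a Monte Carlo null that splits each dyad's interaction total at random between the two directions. The decomposition itself is standard; whether it matches the paper's exact definition is an assumption.

```python
import numpy as np

def skew_symmetry_index(X):
    """Skew-symmetry statistic for a sociomatrix X (one common formulation).

    X[i, j] holds the amount of behaviour individual i addresses to j;
    the diagonal is ignored. X decomposes uniquely into a symmetric part
    S = (X + X') / 2 (reciprocated exchange) and a skew-symmetric part
    K = (X - X') / 2 (unreciprocated exchange); the index is the share of
    the total sum of squares carried by K.
    """
    X = np.array(X, dtype=float)   # copy, so the caller's matrix is untouched
    np.fill_diagonal(X, 0.0)
    K = (X - X.T) / 2.0
    return (K ** 2).sum() / (X ** 2).sum()

def monte_carlo_pvalue(X, n_sims=10_000, seed=0):
    """Null of symmetry: each interaction within a dyad is equally likely to
    go in either direction, so each dyad's (integer) total is split as a
    Binomial(total, 0.5) draw. Returns the observed statistic and the
    proportion of simulated values at least as large."""
    rng = np.random.default_rng(seed)
    X = np.array(X, dtype=float)
    n = X.shape[0]
    observed = skew_symmetry_index(X)
    sims = np.empty(n_sims)
    for s in range(n_sims):
        sim = np.zeros_like(X)
        for i in range(n):
            for j in range(i + 1, n):
                total = int(X[i, j] + X[j, i])   # interaction counts per dyad
                sim[i, j] = rng.binomial(total, 0.5)
                sim[j, i] = total - sim[i, j]
        sims[s] = skew_symmetry_index(sim)
    return observed, np.mean(sims >= observed)
```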
Abstract:
The present work deals with quantifying group characteristics. Specifically, dyadic measures of interpersonal perceptions were used to forecast group performance. Forty-six groups of students (24 groups of four people and 22 of five) were studied in a real educational assignment context, and marks were gathered as an indicator of group performance. Our results show that dyadic measures of interpersonal perceptions account for final marks: linear regression analysis explained 85% and 85.6% of group performance for group sizes of four and five, respectively. Comparable results reported in the scientific literature, based on the individualistic approach, do not exceed 18%. The results of the present study support the utility of dyadic approaches for predicting group performance in social contexts.
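As an illustration of the modelling step only, a hypothetical sketch: group-level summaries of dyadic perception measures serve as predictors of marks in an ordinary least-squares regression, and the share of variance explained is read off as R². All variables here are simulated; only the analysis pattern follows the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical data: one row per group, columns summarizing dyadic
# interpersonal perceptions (e.g., mean and spread of pairwise ratings).
n_groups = 46
X = rng.normal(size=(n_groups, 3))                      # dyadic summary measures
marks = 6 + X @ np.array([1.0, 0.5, -0.8]) \
          + rng.normal(scale=0.4, size=n_groups)        # simulated group marks

model = LinearRegression().fit(X, marks)
print(f"R^2 = {model.score(X, marks):.3f}")             # variance explained
```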
Abstract:
Workgroup diversity can be conceptualized as variety, separation, or disparity; thus, the proper operationalization of diversity depends on how a diversity dimension has been defined. Analytically, minimal diversity is obtained when there are no differences on an attribute among the members of a group; maximal diversity, however, has a different shape for each conceptualization of diversity. Previous work on diversity indexes indicated maximum values for variety (e.g., Blau's index and Teachman's index), separation (e.g., the standard deviation and the mean Euclidean distance), and disparity (e.g., the coefficient of variation and the Gini coefficient of concentration), although these maximum values are not valid for all group characteristics (i.e., group size and group size parity) and attribute scales (i.e., number of categories). We analytically derive appropriate upper boundaries for conditional diversity determined by specific group characteristics, avoiding the bias related to absolute diversity. This will allow applied researchers to make better interpretations regarding the relationship between group diversity and group outcomes.
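The indexes named in the abstract have textbook forms; the sketch below computes one of each type for a small hypothetical group. Conventions vary in the literature (e.g., whether pairwise distances include self-pairs), so the exact normalizations here are assumptions.

```python
import numpy as np

def blau(categories):
    """Variety: Blau's index, 1 - sum(p_k^2) over category proportions."""
    _, counts = np.unique(categories, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

def teachman(categories):
    """Variety: Teachman's (entropy) index, -sum(p_k * ln p_k)."""
    _, counts = np.unique(categories, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

def mean_abs_distance(x):
    """Separation: mean absolute difference over all ordered pairs
    (one common convention; self-pairs contribute zero)."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :]).sum() / (len(x) ** 2)

def coefficient_of_variation(x):
    """Disparity: standard deviation scaled by the mean."""
    x = np.asarray(x, dtype=float)
    return x.std() / x.mean()

def gini(x):
    """Disparity: Gini coefficient of concentration (sorted-values form)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

group = [1, 1, 2, 3]               # categorical attribute (e.g., specialty)
salary = [30.0, 32.0, 35.0, 60.0]  # ratio-scaled attribute
print(blau(group), teachman(group))                       # variety
print(np.std(salary), mean_abs_distance(salary))          # separation
print(coefficient_of_variation(salary), gini(salary))     # disparity
```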
Abstract:
This paper briefly reviews the problem of estimating the signal in time series of data obtained from ERP recordings. It focuses on the lower-frequency components, such as the CNV. As an alternative, the use of the smoothing techniques of Exploratory Data Analysis (EDA) is proposed to improve the estimate obtained, in comparison with the simple averaging of different trials.
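Tukey's EDA smoothers are typically compound running-median filters (e.g., 4253H, twice); the sketch below uses a single running median as a minimal stand-in, applied on top of a simple trial average of simulated CNV-like data. Sizes and signal shapes are illustrative.

```python
import numpy as np

def running_median(x, window=5):
    """Minimal EDA-style smoother: replace each point with the median of its
    neighbourhood (edge points keep their original values)."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    out = x.copy()
    for i in range(half, len(x) - half):
        out[i] = np.median(x[i - half:i + half + 1])
    return out

# Simulated ERP-like trials: a slow negative wave (CNV-like) plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
signal = -2.0 * t                                  # slow low-frequency drift
trials = signal + rng.normal(scale=1.0, size=(30, 200))

average = trials.mean(axis=0)       # classical simple averaging of trials
smoothed = running_median(average)  # EDA smoothing to improve the estimate
```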
Abstract:
When a social survey is carried out over a large territory, there is always the desire to apply, to smaller populations or territories, analyses similar to those performed on the full survey, naturally using the survey's own data. The aim of this article is to show how each stratum of a stratified sample can serve as a sampling frame for carrying out such analyses with full guarantees of precision or, at least, with calculable and acceptable guarantees, without increasing the sample size of the general survey.
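The "calculable guarantee" for a single stratum can be illustrated with the standard margin-of-error formula for a proportion under simple random sampling without replacement, including the finite population correction. The stratum and sample sizes below are hypothetical.

```python
import math

def stratum_margin_of_error(n, N, p=0.5, z=1.96):
    """Sampling error for a proportion estimated from a single stratum,
    treating the stratum as its own sampling frame. Uses the finite
    population correction for sampling without replacement."""
    fpc = (N - n) / (N - 1)
    return z * math.sqrt(p * (1 - p) / n * fpc)

# Hypothetical stratum: 400 interviews drawn from a sub-territory of 50,000.
e = stratum_margin_of_error(n=400, N=50_000, p=0.5)
print(f"±{100 * e:.1f} percentage points at 95% confidence")
```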
Abstract:
In order to study sediment accumulation rates during the last 100 years, three sediment cores were retrieved from the Arenys canyon at depths of 1074 m, 1410 m and 1632 m. Sedimentation rates based on vertical Pb-210 profiles suggest that present trends in sediment flux and accumulation may differ from past ones. During the 1970s the fishing fleet of the port of Arenys de Mar expanded rapidly, a development that can be linked to the changes in sediment accumulation rate observed in the core retrieved at 1074 m. The flanks of the submarine canyon are targeted by trawlers from the port of Arenys de Mar, an activity that can alter seafloor morphology, resuspend particles and generate turbidity flows. The results therefore suggest that bottom trawling may affect submarine environments more severely than previously thought.
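One standard way to turn a vertical Pb-210 profile into a sedimentation rate is the constant flux / constant sedimentation (CF:CS) model, in which excess Pb-210 activity decays exponentially with depth, so the slope of ln(activity) versus depth equals -λ/s. Whether this is the model used in the study is an assumption, and the profile below is invented for illustration.

```python
import numpy as np

# Pb-210 decay constant (half-life of about 22.3 years).
LAMBDA = np.log(2) / 22.3  # yr^-1

def sedimentation_rate(depth_cm, excess_pb210):
    """CF:CS model: A(z) = A0 * exp(-lambda * z / s), so a linear fit of
    ln(A) against depth has slope -lambda / s; solve for s."""
    slope, _ = np.polyfit(depth_cm, np.log(excess_pb210), 1)
    return -LAMBDA / slope  # cm / yr

# Hypothetical excess Pb-210 profile from one core (depth in cm, Bq/kg).
depth = np.array([1, 5, 9, 13, 17, 21])
activity = np.array([95.0, 71.0, 52.0, 39.0, 28.0, 21.0])
print(f"{sedimentation_rate(depth, activity):.2f} cm/yr")
```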
Abstract:
The aim of this work is to build an application for on-line multivariate statistical control of an SBR plant. This tool must support a complete multivariate statistical analysis of the batch in process, of the last completed batch, and of the remaining batches processed at the plant. The application is to be developed in the LabVIEW environment; this choice is driven by the update of the plant's monitoring module, which is being developed in the same environment.
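The application itself is built in LabVIEW; as an illustration of the kind of statistic on-line multivariate control relies on, here is a hedged Python sketch of Hotelling's T², which flags batches whose process variables jointly drift away from in-control behaviour. Variable names and data are hypothetical, and a full batch-MSPC scheme (e.g., PCA-based monitoring) would add further machinery.

```python
import numpy as np

def hotelling_t2(reference_batches, new_obs):
    """Hotelling's T^2, a basic statistic of multivariate statistical process
    control: squared distance of a new observation from the in-control mean,
    scaled by the covariance estimated from reference (in-control) batches."""
    mu = reference_batches.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference_batches, rowvar=False))
    d = new_obs - mu
    return d @ cov_inv @ d

# Hypothetical SBR variables per batch: e.g., pH, DO, ORP, temperature.
rng = np.random.default_rng(3)
history = rng.normal(size=(50, 4))             # completed in-control batches
current = rng.normal(size=4) + [0, 2.5, 0, 0]  # running batch, one variable drifting
print(f"T^2 = {hotelling_t2(history, current):.2f}")  # compare to a control limit
```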
Abstract:
Background: Nowadays, combining the different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful classes of methods for integrating heterogeneous data types is kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot a representation of the input variables belonging to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the local direction of maximum growth, which allows us to identify the samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give a better understanding of biological knowledge.
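A minimal sketch of the two-step integration the abstract describes, using scikit-learn: one kernel per (simulated) data source, an unweighted kernel sum as the simplest combination rule, and kernel PCA on the combined Gram matrix. The choice of kernels, the sum combination and all data are assumptions for illustration; the paper's added representation of input variables is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(7)
n = 60
expression = rng.normal(size=(n, 200))  # hypothetical data source 1
clinical = rng.normal(size=(n, 10))     # hypothetical data source 2

# Step 1: choose a kernel for each data set.
K1 = rbf_kernel(expression, gamma=1.0 / 200)
K2 = linear_kernel(clinical)

# Step 2: combine the kernels (an unweighted sum is the simplest rule)
# to obtain a single representation of all the available data.
K = K1 + K2

# Kernel PCA on the combined kernel reduces the dimensionality of the fused data.
kpca = KernelPCA(n_components=2, kernel="precomputed")
samples_2d = kpca.fit_transform(K)
print(samples_2d.shape)  # (60, 2)
```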
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required that take into account the within-subject correlation. Methods: For repeated-events data with censored failures, the independent-increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper we examine the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum numbers of possible recurrences and sample sizes. We also study the methods' performance on a real dataset from a cohort study of bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable under censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although these are well developed theoretically.
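For illustration, the AG (independent-increment) model can be fitted in Python with the lifelines library by laying the data out in counting-process (start, stop] format, one row per at-risk interval; the PWP model differs conceptually by additionally stratifying on the recurrence number. The tiny dataset below is invented, and using lifelines rather than the authors' software is an assumption.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Counting-process (start, stop] data: several rows per subject, one per
# at-risk interval. With a common baseline hazard across recurrences this
# layout corresponds to the independent-increment (Andersen-Gill) model.
data = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 3],
    "start": [0, 30, 55, 0, 40, 0],
    "stop":  [30, 55, 90, 40, 80, 100],
    "event": [1, 1, 0, 1, 0, 0],   # 1 = recurrent episode, 0 = censored
    "treat": [1, 1, 1, 0, 0, 0],   # covariate of interest
})

ctv = CoxTimeVaryingFitter()
ctv.fit(data, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```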