71 results for Process control Statistical methods
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
The aim of this work is to develop an application for on-line multivariate statistical process control of an SBR plant. The tool must support a complete multivariate statistical analysis of the batch in progress, of the last completed batch, and of the remaining batches processed at the plant. The application is to be built in the LabVIEW environment; this choice is dictated by the ongoing update of the plant's monitoring module, which is being developed in the same environment.
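A minimal sketch of the core computation behind on-line multivariate statistical process control, assuming a Hotelling's T² chart (the abstract names multivariate SPC but not a specific chart; the LabVIEW side is not reproduced and the data are synthetic):

import numpy as np

# In-control reference data: 200 past measurements of 4 process variables.
rng = np.random.default_rng(3)
reference = rng.normal(size=(200, 4))
mu = reference.mean(axis=0)
S_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def t_squared(x):
    """Hotelling's T^2 distance of a new observation x from the in-control model."""
    d = x - mu
    return float(d @ S_inv @ d)

# Each new on-line measurement would be compared against a control limit.
print(t_squared(rng.normal(size=4)))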
Abstract:
In the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited area model (LAM) intercomparison is performed for intense events that caused extensive damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors for a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event known as the "Montserrat-2000" event. The study uses forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by a state-of-the-art diagnostic method, contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift in forecast error and to identify the error sources affecting each model forecast. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work, including verification against a wider observational data set, is needed to support this statement.
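As an illustration of the kind of non-parametric, contingency-table-based skill scores mentioned above, here is a minimal sketch computing three standard ones (the abstract does not list the exact scores used; the counts below are invented):

def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Common verification scores from a 2x2 forecast/observation contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "ETS": ets}

print(skill_scores(hits=30, misses=10, false_alarms=20, correct_negatives=240))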
Abstract:
Excitation-continuous music instrument control patterns are often not explicitly represented in current sound synthesis techniques when applied to automatic performance. Both physical model-based and sample-based synthesis paradigms would benefit from a flexible and accurate instrument control model, enabling improved naturalness and realism. We present a framework for modeling bowing control parameters in violin performance. Nearly non-intrusive sensing techniques allow for accurate acquisition of relevant timbre-related bowing control parameter signals. We model the temporal contour of bow velocity, bow pressing force, and bow-bridge distance as sequences of short cubic Bézier curve segments. Considering different articulations, dynamics, and performance contexts, a number of note classes are defined. Contours of bowing parameters in a performance database are analyzed at note level by following a predefined grammar that dictates the characteristics of curve segment sequences for each of the classes under consideration. As a result, contour analysis of the bowing parameters of each note yields an optimal representation vector sufficient for reconstructing the original contours with significant fidelity. From the resulting representation vectors, we construct a statistical model based on Gaussian mixtures suitable for both analysis and synthesis of bowing parameter contours. Using the estimated models, synthetic contours can be generated through a bow planning algorithm able to reproduce constraints imposed by the finite length of the bow. Rendered contours are successfully used in two preliminary synthesis frameworks: digital waveguide-based bowed string physical modeling and sample-based spectral-domain synthesis.
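A small sketch of the building block named above, a cubic Bézier curve segment, evaluated and chained into a contour (the control-point values here are made up for illustration; the paper's fitting grammar is not reproduced):

import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Closed-form cubic Bezier curve at parameter t in [0, 1]."""
    t = np.asarray(t)
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# A note's bow-velocity contour as two segments joined end to end:
t = np.linspace(0.0, 1.0, 50)
segment_a = cubic_bezier(0.0, 0.2, 0.6, 0.5, t)   # attack
segment_b = cubic_bezier(0.5, 0.4, 0.1, 0.0, t)   # release
contour = np.concatenate([segment_a, segment_b])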
Abstract:
In this project, advanced control strategies were developed for urban wastewater treatment plants that jointly remove organic matter, nitrogen, and phosphorus. The strategies were based on a multivariate study of system behaviour, which provided the basis for feedforward control loops, predictive control, and a cost controller that automatically sent the most suitable setpoints to the process controllers. To develop the strategies, a virtual simulation system (simulator) of treatment plants was built from literature data. For the case of a real plant, a simulator of the Manresa plant (Catalonia) was developed; however, the Manresa system was used exclusively to help the plant engineers in deciding on configuration changes so that phosphorus removal proceeds via the biological rather than the chemical route. The simulators made it possible to run many tests that, on a real plant, would take a long time and consume substantial energy and money. The most elaborate control strategies saved up to 150,000 euros per year compared with operating the plant without automatic control. As for the studies on the real-plant model, it was concluded that biological phosphorus removal can replace the current chemical phosphorus removal process, lowering operating costs (cost of the precipitating agent).
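A toy sketch of the cost-controller idea described above: pick, from a grid of candidate setpoints, the cheapest one that still meets an effluent limit (the cost model, limit, and numbers are invented placeholders; the project's simulator and controllers are not reproduced):

import numpy as np

def best_setpoint(candidates, predict_effluent_n, limit_mg_l=10.0):
    """Return the dissolved-oxygen setpoint with the lowest cost among feasible ones."""
    aeration_cost = lambda sp: 1.8 * sp ** 2            # energy use rises with setpoint
    feasible = [sp for sp in candidates if predict_effluent_n(sp) <= limit_mg_l]
    return min(feasible, key=aeration_cost)

candidates = np.linspace(0.5, 3.0, 26)
# A stand-in effluent-nitrogen model: more aeration, less nitrogen.
print(best_setpoint(candidates, predict_effluent_n=lambda sp: 14.0 - 3.0 * sp))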
Abstract:
This presentation aims to make understandable the use and application context of two webometrics techniques, log analysis and Google Analytics, which currently coexist in the Virtual Library of the UOC. First, a comprehensive introduction to webometrics is provided; then the case of the UOC's Virtual Library is analysed, focusing on the assimilation of these techniques and the considerations underlying their use, and covering holistically the process of data gathering, processing, and exploitation. Finally, guidelines are provided for interpreting the metric variables obtained.
Abstract:
In an earlier investigation (Burger et al., 2000) five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied applying classical statistical methods (fuzzy c-means clustering, linear mixing model, principal component analysis) for the extraction of endmembers and evaluating the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcano-genetic, a hydro-hydrothermal and an ultra-basic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for two factors only; the ultra-basic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used and the cosine-theta coefficient as similarity measure. During the last decade considerable progress in compositional data analysis was made and many case studies were published using new tools for exploratory analysis of these data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of the earlier results and answer the open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components, and visualizing the factor scores in a spatial context: the compositional factors are plotted versus depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep sea sediments
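A minimal sketch of the ilr transformation named above, using one common sequential-balance basis (the abstract does not specify which ilr basis the paper adopts):

import numpy as np

def ilr(x):
    """Map a D-part composition x to D-1 real (ilr) coordinates, one balance per step."""
    x = np.asarray(x, dtype=float)
    D = len(x)
    g = lambda v: np.exp(np.mean(np.log(v)))  # geometric mean
    return np.array([
        np.sqrt(i / (i + 1.0)) * np.log(g(x[:i]) / x[i])
        for i in range(1, D)
    ])

# A 3-part composition yields 2 unconstrained coordinates:
print(ilr([0.5, 0.3, 0.2]))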
Abstract:
As supervisory systems evolve, obtaining significant information from processes becomes more important, since it simplifies the supervision system's particular tasks. It is therefore important to have signal treatment tools capable of obtaining elaborate information from process data. In this paper, a tool that obtains qualitative data about the trends and oscillation of signals is presented, together with an application of this tool: implemented in a computer-aided control systems design (CACSD) environment, it is used to supply information to an expert system for fault detection in a laboratory plant.
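A toy sketch of the kind of qualitative trend extraction such a tool performs: label windows of a signal as increasing, decreasing, or steady from the sign of a fitted slope (window size, threshold, and the labeling scheme are illustrative assumptions, not the paper's algorithm):

import numpy as np

def qualitative_trends(signal, window=20, threshold=0.01):
    """Return one trend label per non-overlapping window of the signal."""
    labels = []
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        slope = np.polyfit(np.arange(window), chunk, deg=1)[0]
        if slope > threshold:
            labels.append("increasing")
        elif slope < -threshold:
            labels.append("decreasing")
        else:
            labels.append("steady")
    return labels

t = np.linspace(0, 10, 100)
print(qualitative_trends(np.sin(t)))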
Abstract:
Background: The COSMIN checklist (COnsensus-based Standards for the selection of health status Measurement INstruments) was developed in an international Delphi study to evaluate the methodological quality of studies on measurement properties of health-related patient-reported outcomes (HR-PROs). In this paper, we explain our choices for the design requirements and preferred statistical methods for which no evidence is available in the literature or on which the Delphi panel members had substantial discussion. Methods: The issues described in this paper are a reflection of the Delphi process in which 43 panel members participated. Results: The topics discussed are internal consistency (relevance for reflective and formative models, and distinction from unidimensionality), content validity (judging relevance and comprehensiveness), hypotheses testing as an aspect of construct validity (specificity of hypotheses), criterion validity (relevance for PROs), and responsiveness (concept and relation to validity, and (in)appropriate measures). Conclusions: We expect that this paper will contribute to a better understanding of the rationale behind the items, thereby enhancing the acceptance and use of the COSMIN checklist.
Abstract:
Trees are a great bank of data, sometimes called for this reason the "silent witnesses" of the past. Because the annual formation of rings is normally influenced directly by climate parameters (generally changes in temperature and moisture or precipitation) and other environmental factors, changes that occurred in the past are "written" in the tree "archives" and can be "decoded" to interpret what happened before, mainly for past climate reconstruction. Using dendrochronological methods to obtain samples of Pinus nigra from the Catalan Pre-Pyrenees region, the cores of 15 trees with a total time span of about 100-250 years were analyzed for tree ring width (TRW) patterns; the series showed quite high correlations among them (0.71-0.84), corresponding to a common response of annual growth to environmental changes. After different trials with raw TRW data for standardization, aimed at removing the negative exponential growth curve dependency, the best method, double detrending (power transformation and a 32-year smoothing line), was selected for obtaining the indexes for further analysis. Analyzing the cross-correlations between the obtained tree ring width indexes and climate data, significant correlations (p<0.05) were observed at some lags; for example, annual precipitation at lag -1 (previous year) had a negative correlation with TRW growth in the Pallars region. Significant correlation coefficients are between 0.27 and 0.51 (with positive or negative signs) in many cases; for the recent (but very short period) climate data of the Seu d'Urgell meteorological station, some significant correlation coefficients of the order of 0.9 were observed. These results confirm the hypothesis of using dendrochronological data as a climate signal for further analysis, such as reconstruction of past climate or prediction of future climate for the same locality.
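A minimal sketch of the standardization step described above: divide raw ring widths by a fitted smooth growth curve to obtain dimensionless indices (a simple moving average stands in for the paper's 32-year smoothing line, and the series is synthetic):

import numpy as np

def detrend(trw, window=32):
    """Return ring-width indices = raw width / smoothed growth curve."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(trw, pad, mode="edge")
    smooth = np.convolve(padded, kernel, mode="same")[pad:pad + len(trw)]
    return trw / smooth

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
# Synthetic widths: negative exponential age trend plus noise.
trw = 2.0 * np.exp(-0.01 * (years - 1900)) + 0.1 * rng.standard_normal(100)
indices = detrend(trw)  # mean near 1, growth trend removed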
Abstract:
In this article we address the use and importance of the statistical tools employed mainly in medical studies in oncology and haematology, but applicable to many other fields, whether medical, experimental, or industrial. The aim of this work is to present, clearly and precisely, the statistical methodology needed to analyse the data obtained in such studies rigorously and concisely with respect to the working hypotheses posed by the researchers. The chosen measure of response to treatment and the type of study selected determine the statistical methods to be used in analysing the study data, as well as the sample size. Through the correct application of statistical analysis and adequate planning, it can be determined whether the relationship found between exposure to a treatment and an outcome is due to chance or, on the contrary, reflects a non-random relationship that could establish causality. We review the main designs of the most commonly used medical studies, such as clinical trials and observational studies (cohort, case-control, prevalence, and ecological studies). We also present sections on sample-size calculation, on which statistical test should be used, on measures of effect strength such as the odds ratio (OR) and relative risk (RR), and on survival analysis. Examples are provided in most sections of the article, together with the most relevant references.
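For reference, the two effect measures named above have simple closed forms. For a 2x2 table with a, b (events and non-events among the exposed) and c, d (among the unexposed):

\mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc}, \qquad \mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}

For example (invented counts), with a = 20, b = 80, c = 10, d = 90: OR = (20 x 90)/(80 x 10) = 2.25 and RR = 0.20/0.10 = 2.0.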
Abstract:
Background: Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is some controversy on the phylogeography and speciation modes of species groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older events (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results: Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 My. Conclusion: The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species. Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
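A minimal sketch of rejection-based ABC, the class of coalescent methods named above, shown on a toy divergence-time problem (the simulator, summary statistic, and prior below are invented stand-ins, not the paper's model or data):

import numpy as np

def abc_rejection(observed_stat, simulate, prior_sample, n_draws=100_000, tol=0.05):
    """Keep prior draws whose simulated summary statistic falls within tol of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed_stat) < tol:
            accepted.append(theta)
    return np.array(accepted)  # approximate posterior sample

rng = np.random.default_rng(1)
posterior = abc_rejection(
    observed_stat=0.8,                            # e.g. a mean pairwise divergence
    simulate=lambda t: rng.normal(0.4 * t, 0.1),  # toy stand-in for a coalescent simulator
    prior_sample=lambda: rng.uniform(0.0, 5.0),   # prior on divergence time (My)
)
print(posterior.mean(), len(posterior))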
Abstract:
The present study explores the statistical properties of a randomization test based on the random assignment of the intervention point in a two-phase (AB) single-case design. The focus is on randomization distributions constructed from the values of the test statistic for all possible random assignments and used to obtain p-values. The shape of these distributions is investigated for each specific data division defined by the moment at which the intervention is introduced. Another aim of the study was to test the detection of nonexistent effects (i.e., the production of false alarms) in autocorrelated data series, in which the assumption of exchangeability between observations may be untenable. In this way, it was possible to compare nominal and empirical Type I error rates in order to obtain evidence on the statistical validity of the randomization test for each individual data division. The results suggest that when either of the two phases has considerably fewer measurement times, Type I errors may be too probable and, hence, the decision-making process carried out by applied researchers may be jeopardized.
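A minimal sketch of the randomization test described above, assuming the test statistic is a difference of phase means (the abstract does not specify the statistic; the data are toy values): the statistic is computed for every admissible intervention point, and the p-value is the proportion of the randomization distribution at least as extreme as the observed value.

import numpy as np

def ab_randomization_test(y, actual_start, min_phase=3):
    """Two-phase (AB) randomization test over all admissible intervention points."""
    starts = range(min_phase, len(y) - min_phase + 1)
    stat = lambda k: np.mean(y[k:]) - np.mean(y[:k])
    distribution = np.array([stat(k) for k in starts])
    observed = stat(actual_start)
    p_value = np.mean(np.abs(distribution) >= abs(observed))
    return observed, p_value

y = np.array([3.0, 2.5, 3.2, 2.8, 5.1, 5.6, 4.9, 5.3, 5.0])
print(ab_randomization_test(y, actual_start=4))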
Abstract:
The present work focuses attention on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at the group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study allows researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers can control the group size and the number of interactions within dyads.
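A sketch of estimating a sampling distribution for a skew-symmetry statistic by Monte Carlo. The formulation below (Φ = squared norm of the skew-symmetric part of the sociomatrix over the squared norm of the matrix, ranging from 0 for full reciprocity to 0.5 for none) is one common definition; the paper's exact statistic and null model may differ.

import numpy as np

def phi(x):
    """Skew-symmetry index of a sociomatrix x (zero diagonal assumed)."""
    k = (x - x.T) / 2.0                   # skew-symmetric part
    return np.sum(k * k) / np.sum(x * x)

def monte_carlo_phi(n_individuals, mean_interactions, n_sims=10_000, seed=0):
    """Null sampling distribution of Phi for random dyadic interaction counts."""
    rng = np.random.default_rng(seed)
    out = np.empty(n_sims)
    for s in range(n_sims):
        x = rng.poisson(lam=mean_interactions,
                        size=(n_individuals, n_individuals)).astype(float)
        np.fill_diagonal(x, 0.0)
        out[s] = phi(x)
    return out

null = monte_carlo_phi(n_individuals=6, mean_interactions=5)
print(np.quantile(null, 0.95))            # e.g. a critical value at alpha = .05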
Abstract:
We present in this paper the results of applying several visual analysis methods to a group of sites, dated between the 6th and 1st centuries BC, in the ager Tarraconensis (Tarragona, Spain), the hinterland of the Roman colony of Tarraco. The difficulty of interpreting the diverse results in a combined way has been resolved by means of statistical methods, namely Principal Components Analysis (PCA) and K-means clustering. These methods have allowed us to classify sites according to the visual structure of the landscape that contains them and the visual relationships that may hold among them.
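A minimal sketch of the PCA-plus-K-means combination named above: reduce per-site visibility descriptors to a few components, then cluster the sites (the feature matrix here is random placeholder data, not the project's descriptors):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
sites = rng.random((40, 6))          # 40 sites x 6 visual-structure descriptors

scores = PCA(n_components=2).fit_transform(sites)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
for cluster in range(3):
    print(f"cluster {cluster}: {np.sum(labels == cluster)} sites")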
Abstract:
Background: Our objective is to determine the activity of the antioxidant defense system at admission in patients with early-onset first psychotic episodes compared with a control group. Methods: Total antioxidant status (TAS) and lipid peroxidation (LOOH) were determined in plasma. Enzyme activities and total glutathione levels were determined in erythrocytes in 102 children and adolescents with a first psychotic episode and 98 healthy controls. Results: A decrease in antioxidant defense was found in patients, measured as decreased TAS and glutathione levels. Lipid damage (LOOH) and glutathione peroxidase activity were higher in patients than in controls. Our study shows a decrease in the antioxidant defense system in early-onset first-episode psychotic patients. Conclusions: A glutathione deficit seems to be implicated in psychosis and may be an important indirect biomarker of oxidative stress in early-onset schizophrenia. Oxidative damage is present in these patients and may contribute to the pathophysiology of the disorder.