24 results for Analysis and statistical methods

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

The accumulation of solid material in reservoirs, river channels and marine areas means that the mechanical extraction of these materials by suction is becoming increasingly common, so it is important to study the performance of suction of these materials by analyzing the shape of the nozzles and the flow parameters, including the pump. This thesis studies, using experimental equipment, the effectiveness of different solids-removal devices (using nozzles of various shapes and variable-speed pumps). The experimental device was developed at the Hydraulics Laboratory of the E.T.S.I. de Caminos, C. y P. of the Universidad Politécnica de Madrid. It includes a submerged bed of different types of sediment, solids-extraction nozzles and a variable-speed pump, as well as an element that separates the water from the extracted solids. The basic parameters analyzed are the total liquid flow pumped, the solid flow extracted, the diameter of the suction pipe, the shape and cross-section of the extraction nozzle, and the speed and efficiency of the variable-speed pump. The measurements obtained on the experimental device were studied by means of dimensional analysis and statistical methods. From this study a new formulation was developed, relating the extracted solid flow to the pipe and nozzle diameters, the pumped liquid flow and the rotational speed of the pump. From a practical point of view, the influence of the nozzle shape on the solids-extraction capacity was also analyzed with all other parameters held equal, so that the most appropriate nozzle shape can be recommended.
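The abstract does not give the formulation itself; as a hedged illustration of the dimensional-analysis-plus-statistics route it describes, the sketch below fits a hypothetical power law Qs = k·Ql^b by log-linear least squares. The data, exponent and coefficient are invented, not the thesis results.

```python
import math

# Hypothetical measurements: pumped liquid flow Ql (l/s) and extracted
# solid flow Qs (l/s), synthesized to follow Qs = 0.05 * Ql**1.5.
ql = [2.0, 4.0, 6.0, 8.0, 10.0]
qs = [0.05 * q ** 1.5 for q in ql]

# Log-linear least squares: log Qs = log k + b * log Ql
x = [math.log(q) for q in ql]
y = [math.log(s) for s in qs]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) /
     sum((xi - mx) ** 2 for xi in x))
k = math.exp(my - b * mx)
print(round(b, 3), round(k, 3))  # recovers the exponent and coefficient
```

With real laboratory data the residual scatter would also indicate how much of the extraction rate the fitted dimensionless groups fail to explain.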

Relevance:

100.00%

Publisher:

Abstract:

Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also commonly used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general and, although the introduction of a penalty is probably the most popular type, it is just one of multiple forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. The supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner. Finally, we present a heuristic for inducing the structure of Gaussian Bayesian networks using L1-regularization as a filter.
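A minimal sketch of how L1-regularization produces sparsity: cyclic coordinate descent with soft-thresholding, the standard workhorse for the lasso. The data are synthetic (only the first feature is informative), so the L1 penalty should drive the second weight exactly to zero; this is a generic illustration, not the dissertation's methodology.

```python
import random

random.seed(0)

def soft_threshold(z, t):
    """Shrink z toward zero by t; the source of exact zeros in the lasso."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

n = 200
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(n)]
y = [3.0 * row[0] + random.gauss(0, 0.1) for row in X]

lam = 0.5                                 # strength of the L1 penalty
w = [0.0, 0.0]
for _ in range(50):                       # cyclic coordinate descent sweeps
    for j in range(2):
        # correlation of feature j with the residual that excludes feature j
        rho = sum(X[i][j] * (y[i] - sum(w[k] * X[i][k]
                                        for k in range(2) if k != j))
                  for i in range(n))
        zj = sum(X[i][j] ** 2 for i in range(n))
        w[j] = soft_threshold(rho, lam * n) / zj

print([round(wj, 2) for wj in w])
```

The uninformative weight lands at exactly 0.0 (not merely small), which is the parsimony property the dissertation exploits; the informative weight is shrunk slightly below its true value of 3, illustrating the bias that the penalty trades for variance.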

Relevance:

100.00%

Publisher:

Abstract:

The aim of this thesis is to study the mechanisms of instability that occur in swept wings when the angle of attack increases. For this, a simplified model of the non-orthogonal swept leading-edge boundary layer has been used, together with different numerical techniques, in order to solve the linear stability problem that describes the behavior of perturbations superposed upon this base flow. Two different approaches, matrix-free and matrix-forming methods, have been validated using direct numerical simulations with spectral resolution. In this way, flow instability in the non-orthogonal swept attachment-line boundary layer is addressed in a linear-analysis framework via the solution of the pertinent global (BiGlobal) PDE-based eigenvalue problem. Subsequently, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis, Fedorov, Obrist & Dallmann (2003) for orthogonal flow, which includes previous models as particular cases and recovers global instability analysis results, is presented for non-orthogonal flow. Direct numerical simulations have been used to verify the stability results and unravel the limits of validity of the basic flow model analyzed. The effect of the angle of attack, AoA, on the critical conditions of the non-orthogonal problem has been documented; an increase of the angle of attack, from AoA = 0 (orthogonal flow) up to values close to π/2 which make the assumptions under which the basic flow is derived questionable, is found to systematically destabilize the flow. The critical conditions of non-orthogonal flows at 0 ≤ AoA ≤ π/2 are shown to be recoverable from those of orthogonal flow via a simple analytical transformation involving AoA. These results can help to understand the mechanisms of destabilization that occur at the attachment line of wings at finite angles of attack. Studies taking into account variations of the pressure field in the basic flow, or the extension to compressible flows, remain open issues.
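The modal-stability criterion behind the eigenvalue problem above can be illustrated on a toy system: perturbations about a base flow evolve as dq/dt = Aq, and the flow is unstable when some eigenvalue of A has a positive real part. The 2x2 matrix below is an invented stand-in for the discretized BiGlobal operator, chosen so the eigenvalues are real and computable in closed form.

```python
import math

# Hypothetical 2x2 linearized operator (real eigenvalues for this choice)
A = [[-1.0, 4.0],
     [1.0, -2.0]]

tr = A[0][0] + A[1][1]                    # trace
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4 * det                  # discriminant of the characteristic poly
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2
unstable = max(lam1, lam2) > 0            # growth of the dominant mode
print(round(lam1, 3), round(lam2, 3), unstable)
```

In the thesis the operator comes from discretizing the BiGlobal PDE eigenvalue problem, so A is large and sparse and the spectrum is sought with matrix-free or matrix-forming iterative methods rather than the quadratic formula, but the instability criterion is the same.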

Relevance:

100.00%

Publisher:

Abstract:

Quantitative descriptive analysis (QDA) is used to describe the nature and the intensity of sensory properties from a single evaluation of a product, whereas temporal dominance of sensations (TDS) is primarily used to identify dominant sensory properties over time. Previous studies with TDS have focused on model systems, but this is the first study to use a sequential approach, i.e. QDA then TDS, in measuring sensory properties of a commercial product category, using the same set of trained assessors (n = 11). The main objectives of this study were to: (1) investigate the benefits of using a sequential approach of QDA and TDS, and (2) explore the impact of sample composition on taste and flavour perceptions in blackcurrant squashes. The present study proposes an alternative way of choosing the attributes for TDS measurement, based on data obtained from previous QDA studies where available. Both methods indicated that the flavour profile was primarily influenced by the level of dilution and the complexity of the sample composition combined with blackcurrant juice content. In addition, artificial sweeteners were found to modify the quality of sweetness and could also contribute to bitter notes. Using QDA and TDS in tandem was shown to be more beneficial than either method alone, enabling a more complete sensory profile of the products.
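The core TDS statistic is the dominance rate: at each time point, the share of assessors selecting an attribute as dominant. A small sketch on invented panel records (assessors, attributes and time points are all hypothetical):

```python
# Hypothetical TDS records: for each assessor, the attribute judged
# dominant at each of four time points during tasting.
panel = [
    ["sweet", "sweet", "blackcurrant", "bitter"],
    ["sweet", "blackcurrant", "blackcurrant", "bitter"],
    ["blackcurrant", "sweet", "blackcurrant", "astringent"],
    ["sweet", "sweet", "bitter", "bitter"],
]

n_times = len(panel[0])
attributes = sorted({a for row in panel for a in row})
# dominance rate = share of assessors selecting the attribute at time t
rates = {a: [sum(row[t] == a for row in panel) / len(panel)
             for t in range(n_times)]
         for a in attributes}
# the attribute with the highest dominance rate at each time point
dominant_at = [max(attributes, key=lambda a: rates[a][t])
               for t in range(n_times)]
print(dominant_at)
```

Plotting each attribute's rate curve against time gives the familiar TDS curves; the QDA attribute list can be reused, as the study suggests, to fix which curves are tracked.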

Relevance:

100.00%

Publisher:

Abstract:

Alzheimer's disease (AD) is the most common cause of dementia. Over the last few years, a considerable effort has been devoted to exploring new biomarkers. Nevertheless, a better understanding of brain dynamics is still required to optimize therapeutic strategies. In this regard, the characterization of mild cognitive impairment (MCI) is crucial, due to the high conversion rate from MCI to AD. However, only a few studies have focused on the analysis of magnetoencephalographic (MEG) rhythms to characterize AD and MCI. In this study, we assess the ability of several parameters derived from information theory to describe spontaneous MEG activity from 36 AD patients, 18 MCI subjects and 26 controls. Three entropies (Shannon, Tsallis and Rényi entropies), one disequilibrium measure (based on the Euclidean distance, ED) and three statistical complexities (based on the López-Ruiz–Mancini–Calbet complexity, LMC) were used to estimate the irregularity and statistical complexity of MEG activity. Statistically significant differences between AD patients and controls were obtained with all parameters (p < 0.01). In addition, statistically significant differences between MCI subjects and controls were achieved by ED and LMC (p < 0.05). In order to assess the diagnostic ability of the parameters, a linear discriminant analysis with a leave-one-out cross-validation procedure was applied. The accuracies reached 83.9% and 65.9% to discriminate AD and MCI subjects from controls, respectively. Our findings suggest that MCI subjects exhibit an intermediate pattern of abnormalities between normal aging and AD. Furthermore, the proposed parameters provide a new description of brain dynamics in AD and MCI.
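The information-theoretic quantities named above are all functions of a discrete probability distribution (in the study, derived from MEG activity). A self-contained sketch on a generic distribution — the combination of normalized entropy and disequilibrium shown for LMC is one common convention, and the example distributions are illustrative:

```python
import math

def shannon(p):
    """Shannon entropy, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis(p, q=2.0):
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

def renyi(p, q=2.0):
    return math.log(sum(pi ** q for pi in p)) / (1 - q)

def disequilibrium(p):
    """Squared Euclidean distance to the uniform distribution (ED)."""
    n = len(p)
    return sum((pi - 1 / n) ** 2 for pi in p)

def lmc(p):
    """LMC statistical complexity: normalized entropy times disequilibrium."""
    h = shannon(p) / math.log(len(p))
    return h * disequilibrium(p)

uniform = [0.25] * 4           # maximally irregular, zero disequilibrium
peaked = [0.85, 0.05, 0.05, 0.05]
print(shannon(uniform), lmc(uniform), lmc(peaked))
```

The uniform distribution has maximal entropy but zero LMC complexity, while the peaked one has lower entropy but nonzero complexity — the interplay that makes LMC sensitive to patterns entropy alone misses.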

Relevance:

100.00%

Publisher:

Abstract:

Two different methods for the analysis of plate bending, the FEM and the BM, are discussed in this paper. The plate behaviour is assumed to be represented by linear thin-plate theory, where the Poisson-Kirchhoff assumption holds. The BM, based on a weighted mean-square-error technique, produces good results for the plate-bending problem. The computational effort demanded by the BM is smaller than that needed in a FEM analysis for the same level of accuracy. The general applicability of the FEM cannot be matched by the BM; in particular, different types of geometry (plates of arbitrary geometry) need a similar but not identical treatment in the BM. However, this loss of generality is counterbalanced by the computational efficiency gained by the BM in reaching the solution.

Relevance:

100.00%

Publisher:

Abstract:

This work presents a systematic method for the generation and treatment of alarm graphs, whose final objective is to find the root cause of the massive alarm bursts produced in dispatching centers. Although many works on this matter have already been developed, the problem of alarm management in industry remains largely unsolved. In this paper, a simple statistical analysis of the historical database is conducted. The results obtained from the alarm acquisition systems are used to generate a directed graph from which the most significant alarms are extracted, after first analyzing every case in which a large quantity of alarms is produced.
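One simple way to build such a directed graph from a historical alarm log, sketched below: link alarms that fire within a short window of each other, then rank tags by net out-degree as root-cause candidates. The log, window and ranking heuristic are all invented for illustration, not the paper's exact procedure.

```python
from collections import Counter, defaultdict

# Hypothetical alarm log: (timestamp, alarm tag)
log = [(0, "P1"), (1, "V3"), (2, "T7"), (10, "P1"), (11, "V3"),
       (12, "F2"), (20, "P1"), (21, "T7")]

WINDOW = 3  # alarms closer than this are assumed causally linked
edges = Counter()
for (t1, a1), (t2, a2) in zip(log, log[1:]):
    if 0 < t2 - t1 <= WINDOW and a1 != a2:
        edges[(a1, a2)] += 1          # directed edge "earlier -> later"

out_deg = defaultdict(int)
in_deg = defaultdict(int)
for (a, b), w in edges.items():
    out_deg[a] += w
    in_deg[b] += w

# a tag with many outgoing and few incoming edges precedes the others
tags = set(out_deg) | set(in_deg)
root = max(tags, key=lambda a: out_deg[a] - in_deg[a])
print(root)
```

On this toy log, "P1" repeatedly precedes the other alarms and never follows them, so it surfaces as the root-cause candidate; on real data the edge weights would first be filtered by the statistical significance analysis the paper describes.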

Relevance:

100.00%

Publisher:

Abstract:

Purpose: In this work, we present the analysis, design and optimization of an experimental device recently developed in the UK, called the 'GP' Thrombus Aspiration Device (GPTAD). This device has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Method: To obtain the minimum pressure necessary to extract the clot and to optimize the device, we have simulated the performance of the GPTAD, analysing the effects of resistances, compliances and inertances. We model a range of diameters for the GPTAD, considering different forces of adhesion of the blood clot to the artery wall and different lengths of blood clot. In each case we determine the optimum pressure required to extract the blood clot from the artery using the GPTAD, which is attached at its proximal end to a suction pump. Result: We then compare the results of our mathematical modelling to measurements made in the laboratory using plastic-tube models of arteries of comparable diameter, from which abattoir porcine blood clots are extracted using the GPTAD. The suction pressures required for clot extraction in the plastic-tube models compare favourably with those predicted by the mathematical modelling. Discussion & Conclusion: We conclude that mathematical modelling is a useful technique for predicting the performance of the GPTAD and may potentially be used in optimising the design of the device.
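A back-of-the-envelope version of such a lumped-parameter estimate (not the authors' full model, which also includes compliance and inertance): the suction pressure must overcome the clot adhesion spread over the tube cross-section plus the viscous Hagen-Poiseuille drop along the catheter. All numbers below are illustrative assumptions.

```python
import math

def required_pressure(f_adh, d, flow, length, mu=3.5e-3):
    """Minimum suction pressure (Pa) for clot extraction.

    f_adh: clot adhesion force (N), d: tube diameter (m),
    flow: volumetric flow (m^3/s), length: catheter length (m),
    mu: blood viscosity (Pa*s) -- all values hypothetical.
    """
    area = math.pi * (d / 2) ** 2
    p_adhesion = f_adh / area                       # detach the clot
    p_viscous = 128 * mu * length * flow / (math.pi * d ** 4)  # Hagen-Poiseuille
    return p_adhesion + p_viscous

p = required_pressure(f_adh=0.05, d=2e-3, flow=1e-6, length=0.5)
print(round(p), "Pa")
```

Sweeping `d` in such a model shows the strong trade-off the paper optimizes: a wider lumen cuts the viscous term as d^-4 but also dilutes the suction over a larger clot face.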

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a method for the identification of different partial discharge (PD) sources through the analysis of a collection of PD signals acquired with a PD measurement system. This method, robust and sensitive enough to cope with noisy data and external interferences, combines the characterization of each signal in the collection with a clustering procedure, the CLARA algorithm. Several features are proposed for the characterization of the signals; the wavelet variances, the frequency estimated with the Prony method, and the energy are the most relevant for the performance of the clustering procedure. The result of the unsupervised classification is a set of clusters, each containing those signals which are more similar to each other than to those in other clusters. The analysis of the classification results permits both the identification of different PD sources and the discrimination between original PD signals, reflections, noise and external interferences. The methods and graphical tools detailed in this paper have been coded and published as a contributed package of the R environment under a GNU/GPL license.
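CLARA is a sampling-based wrapper around PAM (k-medoids). A toy single-pass PAM sketch on synthetic 2-D stand-ins for the signal features (real feature vectors would hold wavelet variance, Prony frequency and energy); the data, seed and simplifications are all assumptions:

```python
import random

random.seed(1)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pam(points, medoids, iters=10):
    """Alternate nearest-medoid assignment and medoid update."""
    for _ in range(iters):
        clusters = [[] for _ in medoids]
        for p in points:                 # assign to nearest medoid
            j = min(range(len(medoids)), key=lambda j: dist(p, medoids[j]))
            clusters[j].append(p)
        # replace each medoid by the member minimizing intra-cluster cost
        medoids = [min(c, key=lambda m: sum(dist(m, p) for p in c))
                   for c in clusters]
    return medoids, clusters

# two well-separated synthetic clusters of "PD signals"
pts = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(20)] +
       [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(20)])
medoids, clusters = pam(pts, [pts[0], pts[-1]])
print(sorted(len(c) for c in clusters))
```

Medoids (actual signals, not averaged centroids) are what make the method robust to the outliers and interference pulses the paper must cope with; CLARA simply repeats PAM on subsamples to scale to large signal collections.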

Relevance:

100.00%

Publisher:

Abstract:

The relationship between abstract interpretation [2] and partial evaluation [5] has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction (see e.g. [6] and its references) and abstract interpretation perspectives. Abstract interpretation-based analyzers (such as the CiaoPP analyzer [9,4]) generally compute a program analysis graph [1] in order to propagate (abstract) call and success information by performing fixpoint computations when needed. On the other hand, partial deduction methods [7] incorporate powerful techniques for on-line specialization including (concrete) call propagation and unfolding.
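The fixpoint computation mentioned above can be illustrated on a deliberately tiny example: propagate an abstract value through a loop, joining states at the loop head, until nothing changes. The three-valued sign domain and the analyzed pseudo-program are invented for illustration; real analyzers such as CiaoPP use far richer domains and a graph of call/success patterns.

```python
# Toy abstract interpretation: analyze "x = 0; while ...: x = x + 1"
# over the illustrative domain {"0", "+", "top"}.

def add_abs(a, b):
    """Abstract addition on sign values."""
    if a == "0":
        return b
    if b == "0":
        return a
    if a == b == "+":
        return "+"
    return "top"

def join(a, b):
    """Least upper bound: equal values stay, otherwise go to top."""
    return a if a == b else "top"

state = {"x": "0"}                 # abstract state before the loop
while True:
    after_body = add_abs(state["x"], "+")      # effect of x = x + 1
    new_x = join(state["x"], after_body)       # join at the loop head
    if new_x == state["x"]:
        break                                  # fixpoint reached
    state["x"] = new_x
print(state["x"])
```

Partial deduction would instead unfold the loop on concrete calls; the integrations the paragraph refers to combine this kind of abstract fixpoint propagation with that unfolding.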

Relevance:

100.00%

Publisher:

Abstract:

Effective static analyses have been proposed which infer bounds on the number of resolutions or reductions. These have the advantage of being independent of the platform on which the programs are executed, and have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of the platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to compute time-bound functions statically for procedures and to predict with a significant degree of accuracy the execution times of such procedures on the given platform. The approach has been implemented and integrated in the CiaoPP system.
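A sketch of the calibration idea under simplifying assumptions: compile-time analysis gives a bound on resolution steps, and a one-time profiling run fits the platform parameters that translate steps into seconds. The linear model, profile numbers and names are hypothetical, not the paper's actual cost model.

```python
# One-time profiling data: (resolution steps, measured seconds)
profile = [(1000, 0.012), (5000, 0.052), (20000, 0.202), (50000, 0.502)]

# Fit time = alpha + beta * steps by least squares
n = len(profile)
mx = sum(s for s, _ in profile) / n
my = sum(t for _, t in profile) / n
beta = (sum((s - mx) * (t - my) for s, t in profile) /
        sum((s - mx) ** 2 for s, _ in profile))
alpha = my - beta * mx

def predicted_time(resolution_bound):
    """Static time bound (s) for a procedure on the profiled platform."""
    return alpha + beta * resolution_bound

print(round(predicted_time(10000), 3))
```

Once alpha and beta are known, any procedure's statically inferred step bound converts to a platform-specific time bound with no further measurement, which is the "from then on" property the abstract highlights.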

Relevance:

100.00%

Publisher:

Abstract:

The relationship between abstract interpretation and partial deduction has received considerable attention and (partial) integrations have been proposed starting from both the partial deduction and abstract interpretation perspectives. In this work we present what we argue is the first fully described generic algorithm for efficient and precise integration of abstract interpretation and partial deduction. Taking as starting point state-of-the-art algorithms for context-sensitive, polyvariant abstract interpretation and (abstract) partial deduction, we present an algorithm which combines the best of both worlds. Key ingredients include the accurate success propagation inherent to abstract interpretation and the powerful program transformations achievable by partial deduction. In our algorithm, the calls which appear in the analysis graph are not analyzed w.r.t. the original definition of the procedure but w.r.t. specialized definitions of these procedures. Such specialized definitions are obtained by applying both unfolding and abstract executability. Our framework is parametric w.r.t. different control strategies and abstract domains. Different combinations of such parameters correspond to existing algorithms for program analysis and specialization. Simultaneously, our approach opens the door to the efficient computation of strictly more precise results than those achievable by each of the individual techniques. The algorithm is now one of the key components of the CiaoPP analysis and specialization system.

Relevance:

100.00%

Publisher:

Abstract:

The genus Diplotaxis, comprising 32 or 34 species plus several additional infraspecific taxa, displays a considerable degree of heterogeneity in the morphology, molecular markers, chromosome numbers and geographical range of its species. There is increasing interest in Diplotaxis, since some of its species (D. tenuifolia, D. muralis) are gathered or cultivated for human consumption, whereas others (D. erucoides) are frequent arable weeds in many European vineyards. The taxonomic relationships within the genus were therefore investigated by phenetic characterisation of germplasm belonging to 27 taxa. Using a computer-aided vision system, 33 morpho-colorimetric features of seeds were electronically measured. The data were used to implement a statistical classifier able to discriminate the taxa within the genus Diplotaxis, in order to compare the resulting species grouping with the current infrageneric systematics of the genus. Despite the high heterogeneity of the samples, due to the great intra-population variability, the stepwise Linear Discriminant Analysis method applied to distinguish the groups was able to reach over 80% correct identification. The results obtained allowed us to confirm the current taxonomic position of most taxa, and suggested that the taxonomic position of others be reconsidered.
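The "correct identification" rate above comes from classifying each specimen with a model trained on the rest. A hedged sketch of that evaluation loop, using a nearest-centroid classifier as a simplified stand-in for stepwise LDA, on invented two-feature seed measurements:

```python
# Synthetic (feature_vector, species) pairs standing in for the
# 33 morpho-colorimetric seed features; labels reuse two real species names.
data = [
    ((2.1, 0.9), "D. tenuifolia"), ((2.0, 1.1), "D. tenuifolia"),
    ((2.2, 1.0), "D. tenuifolia"), ((1.2, 2.0), "D. erucoides"),
    ((1.0, 2.1), "D. erucoides"), ((1.1, 1.9), "D. erucoides"),
]

def centroid(rows):
    return tuple(sum(v[i] for v, _ in rows) / len(rows) for i in range(2))

def classify(x, train):
    """Assign x to the class with the nearest feature centroid."""
    labels = {lab for _, lab in train}
    cents = {lab: centroid([r for r in train if r[1] == lab])
             for lab in labels}
    return min(cents,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, cents[lab])))

# leave-one-out: classify each specimen with a model trained on the others
hits = sum(classify(x, data[:i] + data[i + 1:]) == lab
           for i, (x, lab) in enumerate(data))
accuracy = hits / len(data)
print(accuracy)
```

With the well-separated toy data the loop scores 100%; on the real, highly variable seed samples the same protocol (with stepwise LDA) yields the reported "over 80%" figure.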

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we use ARIMA modelling to estimate a set of characteristics of a short-term indicator (for example, the index of industrial production), such as trends, seasonal variations, cyclical oscillations, unpredictability, and deterministic effects (such as a strike). Thus, for each sector and product (more than 1000), we construct a vector of values corresponding to the above-mentioned characteristics that can be used for data editing.
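As a crude stand-in for two of the characteristics such a vector might hold — persistence and unpredictability — the sketch below differences a synthetic trending series (ARIMA's "I" step) and fits an AR(1) coefficient by least squares. A real application would fit a full ARIMA model; the series and the reduction to AR(1) are assumptions.

```python
# Synthetic monthly indicator values with an upward trend
series = [10.0, 10.8, 11.5, 11.9, 12.6, 13.1, 13.9, 14.4, 15.2, 15.8]

# first differences remove the trend, as the "I" in ARIMA does
diffs = [b - a for a, b in zip(series, series[1:])]

# AR(1) persistence: regress each difference on the previous one
x, y = diffs[:-1], diffs[1:]
phi = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# residual scale as a rough "unpredictability" characteristic
residuals = [b - phi * a for a, b in zip(x, y)]
unpredictability = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
print(round(phi, 3), round(unpredictability, 3))
```

Computed for each of the 1000+ sector/product series, such numbers populate one slot each of the characteristic vector used for data editing.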

Relevance:

100.00%

Publisher:

Abstract:

Laparoscopic instrument tracking systems are an essential component in image-guided interventions and offer new possibilities to improve and automate objective assessment methods of surgical skills. In this study we present our system design to apply a third-generation optical pose tracker (MicronTracker®) to laparoscopic practice. A technical evaluation of this design is performed in order to analyze its accuracy in computing the laparoscopic instrument tip position. Results show a stable fluctuation error over the entire analyzed workspace. The relative position errors are 1.776±1.675 mm, 1.817±1.762 mm, 1.854±1.740 mm, 2.455±2.164 mm, 2.545±2.496 mm, 2.764±2.342 mm, 2.512±2.493 mm for distances of 50, 100, 150, 200, 250, 300, and 350 mm, respectively. The accumulated distance error increases with the measured distance. The range of instrument inclinations covered by the system is wide, from 90 down to 7.5 degrees. The system reports a low positional accuracy for the instrument tip.
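A relative position error of the kind tabulated above can be computed by comparing the tracker-measured distance between two tip poses with the known displacement. The sketch uses illustrative coordinates, not the study's measurements:

```python
import math

def distance(p, q):
    """Euclidean distance between two 3-D points (mm)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

true_step = 50.0                      # known displacement between poses, mm
# hypothetical tracker readings of the instrument tip at three poses
measured = [(0.0, 0.0, 0.0), (49.1, 1.2, 0.4), (101.3, -0.8, 1.1)]

# relative error = |measured inter-pose distance - known displacement|
errors = [abs(distance(measured[i], measured[i + 1]) - true_step)
          for i in range(len(measured) - 1)]
mean_err = sum(errors) / len(errors)
print([round(e, 2) for e in errors], round(mean_err, 2))
```

Repeating this over many pose pairs and displacement lengths yields the mean±std figures per distance reported in the study, and summing the per-step errors shows how the accumulated error grows with the measured distance.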