28 results for smoothing
at Universidad Politécnica de Madrid
Abstract:
The quality and reliability of the power generated by large grid-connected photovoltaic (PV) plants are negatively affected by the variability of the solar resource. This paper deals with the smoothing of power fluctuations due to the geographical dispersion of PV systems. The fluctuation frequency and the maximum fluctuation registered at a PV plant ensemble are analyzed to study these effects. We propose an empirical expression to compare the fluctuation attenuation due to both the size and the number of PV plants grouped. The convolution of the frequency distribution functions of single PV plants has turned out to be a successful tool to statistically describe the behavior of an ensemble of PV plants and to determine their maximum output fluctuation. Our work is based on experimental 1-s data collected throughout 2009 from seven PV plants, 20 MWp in total, separated by distances of between 6 and 360 km.
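The convolution step can be illustrated with a toy sketch (all data below are synthetic, assuming independent plants; this is not the paper's empirical expression): the fluctuation distribution of the summed output of two independent plants is the convolution of their individual distributions.

```python
import numpy as np

# Toy sketch: fluctuation histograms (as discrete PDFs) of two hypothetical
# independent PV plants, binned on a common grid.
rng = np.random.default_rng(0)
f1, _ = np.histogram(rng.normal(0, 1.0, 10_000), bins=50, range=(-10, 10))
f2, _ = np.histogram(rng.normal(0, 2.0, 10_000), bins=50, range=(-10, 10))
p1 = f1 / f1.sum()  # empirical fluctuation PDF of plant 1
p2 = f2 / f2.sum()  # empirical fluctuation PDF of plant 2

# Distribution of the ensemble (summed) fluctuation = convolution of the PDFs.
p_ensemble = np.convolve(p1, p2)
assert abs(p_ensemble.sum() - 1.0) < 1e-9  # still a probability distribution
```

For an ensemble of N plants the convolution is applied repeatedly, and a maximum output fluctuation can then be read off as an extreme quantile of `p_ensemble`.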
Abstract:
This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the root-mean-square error (RMSE), the mean delay (MD) and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter. HFCrms is 2.90±1.37 mg/dl for the original profiles.
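The causal-smoothing idea can be sketched with a windowed cubic polynomial fit (an illustrative stand-in, not the paper's exact spline implementation; all data below are synthetic): each smoothed value is computed from the current and past samples only.

```python
import numpy as np

def causal_cubic_smooth(y, window=8):
    """At each time t, fit a cubic to the last `window` samples and
    evaluate the fit at t, so only current and past samples are used."""
    y = np.asarray(y, dtype=float)
    out = y.copy()
    for t in range(window, len(y)):
        ts = np.arange(t - window + 1, t + 1)
        coeffs = np.polyfit(ts, y[ts], deg=3)
        out[t] = np.polyval(coeffs, t)
    return out

# Synthetic noisy predictor output (mg/dl), standing in for the NNM output.
rng = np.random.default_rng(1)
t = np.arange(120)
noisy = 120 + 30 * np.sin(t / 20) + rng.normal(0, 5, t.size)
smooth = causal_cubic_smooth(noisy)
```

RMSE, mean delay and HFCrms can then be computed by comparing `smooth` against reference profiles.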
Abstract:
Thermal smoothing in the plasma ablated from a laser target under weakly nonuniform irradiation is analyzed, assuming absorption at nc and a deflagration regime (conduction restricted to a thin quasisteady layer next to the target). Magnetic generation effects are included and found to be weak. Differences from results available in the literature are explained; the importance of the character of the underdense flow at uniform irradiation is emphasized.
Abstract:
Refractive smoothing of weak non-uniformities in the illumination of laser targets is analyzed, assuming absorption at the critical density and restricting conduction to a thin layer, and using results from thermal smoothing, which is uncoupled from the refraction. Magnetic effects are included. Non-uniformity wavelengths comparable to the thickness of the conduction layer are considered; efficient smoothing exists at both short and long wavelengths in this range. Thermal focusing could make the ablated plasma unstable.
Abstract:
The objective of this thesis is the development of cooperative localization and tracking algorithms using nonparametric message passing techniques. In contrast to the most well-known techniques, the goal is to estimate the posterior probability density function (PDF) of the position of each sensor. This problem can be solved using a Bayesian approach, but it is intractable in the general case. Nevertheless, the particle-based approximation (via a nonparametric representation) and an appropriate factorization of the joint PDFs (using message passing methods) make the Bayesian approach tractable for inference in sensor networks. The best-known method for this problem, nonparametric belief propagation (NBP), can lead to inaccurate beliefs and possible non-convergence in loopy networks. Therefore, we propose four novel algorithms which alleviate these problems: nonparametric generalized belief propagation (NGBP) based on junction trees (NGBP-JT), NGBP based on pseudo-junction trees (NGBP-PJT), NBP based on spanning trees (NBP-ST), and uniformly-reweighted NBP (URW-NBP). We also extend NBP for cooperative localization in mobile networks. In contrast to previous methods, we use optional smoothing, provide a novel communication protocol, and increase the efficiency of the sampling techniques. Moreover, we propose novel algorithms for distributed tracking, in which the goal is to track a passive object which cannot locate itself. In particular, we develop distributed particle filtering (DPF) based on three asynchronous belief consensus (BC) algorithms: standard belief consensus (SBC), broadcast gossip (BG), and belief propagation (BP). Finally, the last part of this thesis includes an experimental analysis of some of the proposed algorithms, in which we found that the results based on real measurements are very similar to the results based on theoretical models.
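The consensus idea behind these distributed algorithms can be sketched in a much-simplified scalar form (plain average consensus on a toy network, not the thesis's particle-based belief consensus): each node repeatedly replaces its value with the mean of its own and its neighbors' values, and all nodes converge to a common estimate without any central coordinator.

```python
import numpy as np

# Hypothetical 4-node ring network: each node holds a local scalar estimate
# and can only exchange messages with its two neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = np.array([1.0, 5.0, 3.0, 7.0])

for _ in range(200):  # synchronous consensus iterations
    values = np.array([
        np.mean([values[i]] + [values[j] for j in neighbors[i]])
        for i in range(len(values))
    ])

# The averaging matrix is doubly stochastic, so every node converges
# to the network-wide average of the initial estimates (here 4.0).
assert np.allclose(values, 4.0)
```

In the belief-consensus setting the scalar is replaced by a (log-)belief over the object position, and the iterations run asynchronously.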
Abstract:
Within the regression framework, we show how different levels of nonlinearity influence the instantaneous firing rate prediction of single neurons. Nonlinearity can be achieved in several ways. In particular, we can enrich the predictor set with basis expansions of the input variables (enlarging the number of inputs) or train a simple but different model for each area of the data domain. Spline-based models are popular within the first category. Kernel smoothing methods fall into the second category. Whereas the first choice is useful for globally characterizing complex functions, the second is very handy for temporal data and is able to include inner-state subject variations. Also, interactions among stimuli are considered. We compare state-of-the-art firing rate prediction methods with some more sophisticated spline-based nonlinear methods: multivariate adaptive regression splines and sparse additive models. We also study the impact of kernel smoothing. Finally, we explore the combination of various local models in an incremental learning procedure. Our goal is to demonstrate that appropriate nonlinearity treatment can greatly improve the results. We test our hypothesis on both synthetic data and real neuronal recordings in cat primary visual cortex, giving a plausible explanation of the results from a biological perspective.
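As a minimal illustration of the kernel-smoothing idea (a generic Nadaraya-Watson estimator on synthetic data, not one of the paper's models), the firing rate at a stimulus value is predicted as a locally weighted average of observed rates:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.5):
    """Gaussian-kernel locally weighted average (Nadaraya-Watson)."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Synthetic "firing rate" data: a rectified nonlinear response to a 1-D stimulus.
rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, 200)
y = np.maximum(0, 10 * np.sin(x)) + rng.normal(0, 1, x.size)

xq = np.array([np.pi / 2, 3 * np.pi / 2])
pred = nadaraya_watson(x, y, xq, bandwidth=0.3)
# Near pi/2 the predicted rate is high; near 3*pi/2 it is near zero.
assert pred[0] > pred[1]
```

The bandwidth controls the locality of the fit, which is what makes kernel smoothing well suited to temporal data and inner-state variations.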
Abstract:
The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and Takeda et al. [Prog. Theor. Phys. 92 (1994), 939] boundary integrals. The analysis has been carried out by studying the convergence of the first- and second-order differential operators as the smoothing length (that is, the characteristic length on which the SPH interpolation relies) decreases. These differential operators are of fundamental importance for the computation of the viscous drag and the viscous/diffusive terms in the momentum and energy equations. It has been proved that, close to the boundaries, some of the mirroring techniques lead to intrinsic inaccuracies in the convergence of the differential operators. A consistent formulation has been derived starting from the Takeda et al. boundary integrals (see the above reference). This original formulation allows implementing no-slip boundary conditions consistently in many practical applications, such as viscous flows and diffusion problems.
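The notion of convergence with respect to the smoothing length can be sketched in 1D, far from any boundary where no mirroring is needed (a generic illustration with a Gaussian kernel, not the paper's operators): the SPH interpolation of f(x) = x² approaches the exact value as h decreases, with the particle spacing dx shrinking proportionally.

```python
import numpy as np

def sph_interpolate(f_vals, xs, dx, x0, h):
    """1-D SPH interpolation  f(x0) ~ sum_j f_j W(x0 - x_j, h) dx
    with a Gaussian kernel W."""
    w = np.exp(-((x0 - xs) / h) ** 2) / (h * np.sqrt(np.pi))
    return np.sum(f_vals * w * dx)

x0, errors = 0.0, []
for h in (0.4, 0.2, 0.1):
    dx = h / 4                      # keep dx/h fixed as h -> 0
    xs = np.arange(-3, 3, dx)       # particles far from any boundary
    errors.append(abs(sph_interpolate(xs**2, xs, dx, x0, h) - x0**2))

# The interpolation error decreases as the smoothing length shrinks.
assert errors[0] > errors[1] > errors[2]
```

Near a solid boundary the kernel support is truncated, which is exactly where the mirroring techniques analyzed in the paper come into play.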
Abstract:
The implementation of boundary conditions is one of the points where the SPH methodology still has some work to do. The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and Takeda et al. [1] boundary integrals. A Poiseuille flow has been used as an example to gradually evaluate the accuracy of the different implementations. Our goal is to test the behavior of the second-order differential operator with the proposed boundary extensions when the smoothing length h and other discretization parameters, such as dx/h, tend simultaneously to zero. First, using a smoothed continuous approximation of the unidirectional Poiseuille problem, the evolution of the velocity profile has been studied, focusing on the values of the velocity and the viscous shear at the boundaries, where the exact solution should be approximated as h decreases. Second, to evaluate the impact of the discretization of the problem, an Eulerian SPH discrete version of the former problem has been implemented and similar results have been monitored. Finally, for the sake of completeness, a 2D Lagrangian SPH implementation of the problem has also been studied to compare the consequences of the particle movement.
Abstract:
Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western World. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer has been carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing to ensure data quality -- missing-value imputation, probe quality control, data smoothing and intraclass variability filtering -- the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes. Two of them are directly involved in cancer progression and in particular in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values between 0.955 and 0.997).
Abstract:
Let D be a link diagram with n crossings, sA and sB be its extreme states and |sAD| (respectively, |sBD|) be the number of simple closed curves that appear when smoothing D according to sA (respectively, sB). We give a general formula for the sum |sAD| + |sBD| for a k-almost alternating diagram D, for any k, characterizing this sum as the number of faces in an appropriate triangulation of an appropriate surface with boundary. When D is dealternator connected, the triangulation is especially simple, yielding |sAD| + |sBD| = n + 2 - 2k. This gives a simple geometric proof of the upper bound of the span of the Jones polynomial for dealternator connected diagrams, a result first obtained by Zhu [On Kauffman brackets, J. Knot Theory Ramifications 6(1) (1997) 125-148]. Another upper bound of the span of the Jones polynomial for dealternator connected and dealternator reduced diagrams, discovered historically first by Adams et al. [Almost alternating links, Topology Appl. 46(2) (1992) 151-165], is obtained as a corollary. As a new application, we prove that the Turaev genus is equal to the number k of dealternator crossings for any dealternator connected diagram.
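The quantities |sAD| and |sBD| can be computed from a PD (planar diagram) code by union-find on edge labels: each crossing's smoothing glues two pairs of its four incident edges, and the circles are the resulting connected components. A small sketch using the PD code of a trefoil diagram (which pairing corresponds to sA versus sB depends on sign conventions, so only their sum is asserted below):

```python
def count_circles(pd_code, pairing):
    """Count closed curves after smoothing every crossing of a PD code.
    `pairing` selects which edge pairs each smoothing joins:
    'ab-cd' joins (a,b) and (c,d); 'ad-bc' joins (a,d) and (b,c)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for a, b, c, d in pd_code:
        if pairing == "ab-cd":
            union(a, b); union(c, d)
        else:
            union(a, d); union(b, c)
    edges = {e for crossing in pd_code for e in crossing}
    return len({find(e) for e in edges})

# PD code of a trefoil diagram (n = 3 crossings, alternating, so k = 0).
trefoil = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]
sA = count_circles(trefoil, "ab-cd")
sB = count_circles(trefoil, "ad-bc")
assert sA + sB == 3 + 2 - 2 * 0   # |sAD| + |sBD| = n + 2 - 2k
```

For an alternating diagram (k = 0) the sum equals n + 2, in agreement with the classical span bound for the Jones polynomial.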
Abstract:
Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. To mention some typical examples, this happens when fitting parametric or non-parametric models with more parameters than data, or when estimating large covariance matrices. Regularization is also commonly used to improve the bias-variance tradeoff of an estimation. The definition of regularization is therefore quite general and, although the introduction of a penalty is probably the most popular type, it is just one of multiple forms of regularization. In this dissertation, we focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in reaching sparsity. Most of the contributions presented here revolve around L1-regularization, although other forms of regularization are explored (also pursuing sparsity in some sense). In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, we devise methodology for regression, supervised classification and structure induction of graphical models. Within the regression paradigm, we focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. We also present an application of regularized regression techniques for modeling the response of biological neurons. Supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naïve Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner.
Finally, we present a heuristic for inducing structures of Gaussian Bayesian networks using L1-regularization as a filter.
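A minimal sketch of how L1-regularization induces sparsity (generic lasso via cyclic coordinate descent on synthetic data, not any of the dissertation's specific algorithms): the soft-thresholding update zeroes out coefficients whose correlation with the partial residual falls below the penalty level.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso by cyclic coordinate descent with soft-thresholding.
    Minimizes (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # partial residual
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / z  # soft threshold
    return beta

# Synthetic problem: 10 inputs, only the first 2 are relevant.
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))
true_beta = np.zeros(10); true_beta[:2] = [3.0, -2.0]
y = X @ true_beta + rng.normal(0, 0.1, 100)

beta = lasso_cd(X, y, lam=0.2)
assert np.count_nonzero(np.abs(beta) > 1e-8) <= 4   # sparse solution
```

The penalty lam trades bias for sparsity: larger values drive more coefficients exactly to zero, which is what makes L1-regularization act as an input selector.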
Abstract:
The determination of the origin of a material used by man in prehistory is very important in the field of archaeology. In recent years, provenance studies have used techniques that tend to be very precise but with the drawback of being destructive methodologies. Large-scale mining is a feature that accompanies the Neolithic period, whose revolution is one of the most important periods of humanity. The archaeological site of Casa Montero is a Neolithic flint mine located in the Iberian Peninsula, of great importance for its antiquity and its productive scale.
This archaeological site corresponds to a quarry for the exploitation of silicic rocks developed in the Neolithic period, in which only the debris from mining has been found, which increases the variability of the samples analyzed, whose economic, social and cultural context is unknown. It is of great archaeological interest to know why these Neolithic groups exploited certain types of material so intensively and what the final destination of the flint was in the productive chain. In addition, being a rescue excavation that had to process several tons of material in a relatively short time, the site requires expeditious methods for classifying and handling the material. However, the implementation of any classification method should avoid the alteration or modification of the sample, since previous studies on the characterization of silicic rocks have the disadvantage of destroying or partially modifying the object of study. The objective of this research was therefore the modeling of the registration and processing of spectral data acquired from silicic rocks of the archaeological site of Casa Montero. The methodology implemented for modeling the registration and processing of spectral data of lithic materials within the archaeological context is presented as an alternative to conventional classification methods (destructive and expensive) or to subjective methods that depend on the experience of the expert. This has been achieved with the implementation of spectral analysis models, smoothing of spectral signatures, and dimensionality reduction algorithms. Validation trials of the proposed methodology allowed testing the effectiveness of the methods as regards the spectral characterization of the siliceous materials of Casa Montero. The algorithmic contributions to signal filtering, quality improvement and dimensionality reduction are noteworthy, as is the proposal of using raster structures for efficient storage and analysis of the spectral information.
It is concluded that the proposed methodology is not only useful for siliceous materials but can be generalized to those processes where spectral characterization may be relevant for the classification of materials that must not be altered; it can also be applied at large scale, given that the implementation costs are minimal compared with those of conventional methods.
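One simple form of spectral-signature smoothing is a centered moving-average filter (a generic sketch on synthetic data; the thesis does not specify this exact filter here), which suppresses high-frequency sensor noise while preserving broad absorption features:

```python
import numpy as np

def smooth_signature(spectrum, window=7):
    """Centered moving-average smoothing of a 1-D spectral signature."""
    kernel = np.ones(window) / window
    return np.convolve(spectrum, kernel, mode="same")

# Hypothetical reflectance signature: one broad feature plus sensor noise.
rng = np.random.default_rng(4)
wavelengths = np.linspace(400, 2500, 500)            # nm
clean = 0.4 + 0.2 * np.exp(-((wavelengths - 1400) / 300) ** 2)
noisy = clean + rng.normal(0, 0.02, wavelengths.size)

smoothed = smooth_signature(noisy)
# The filter reduces the sample-to-sample noise (interior samples only,
# to avoid the edge effects of mode="same").
hf = lambda s: np.std(np.diff(s[10:-10]))
assert hf(smoothed) < hf(noisy)
```

Smoothed signatures can then be fed to dimensionality reduction and classification, as in the processing chain described above.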
Abstract:
A study of algorithms that provide optimal results for the vector generalization of linear entities is presented. This study falls within the framework of the España Virtual CENIT project for research into new cartographic processing algorithms.
Generalization is one of the most complex cartographic processes, and becomes most important when preparing maps derived from others at larger scales. The need for generalization is evident given the impossibility of representing the real world in its entirety: reality has to be limited or reduced for the subsequent preparation of the map, while keeping the essential features of the geographical space. The goal, therefore, is to obtain a simplified but representative image of reality. Because nearly eighty percent of vector cartography is composed of linear elements, the research focuses on those algorithms able to process them, showing in addition that their application can be extended to the treatment of areal elements, since these are handled through the closed line that defines them. Moreover, the study goes deeper into the processes involved in linear exaggeration, which are intended to highlight or emphasize those features of linear entities without which the representativeness of the map would be diminished. These tools, together with better-known ones such as line simplification and smoothing, can provide satisfactory results within a generalization process.
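A classic simplification algorithm for linear entities is Douglas-Peucker, which recursively keeps only the vertices that deviate more than a tolerance from the chord joining the segment endpoints (a generic sketch; the study does not name its algorithms here):

```python
import numpy as np

def douglas_peucker(points, tol):
    """Simplify a polyline: recursively keep only vertices that deviate
    more than `tol` from the chord joining the segment endpoints."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    cx, cy = end - start
    norm = np.hypot(cx, cy)
    if norm == 0:
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        # Perpendicular distance of each point to the chord.
        dists = np.abs(cx * (points[:, 1] - start[1])
                       - cy * (points[:, 0] - start[0])) / norm
    i = int(np.argmax(dists))
    if dists[i] <= tol:
        return np.array([start, end])            # whole segment within tolerance
    left = douglas_peucker(points[: i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return np.vstack([left[:-1], right])         # drop the duplicated split point

# A nearly straight line with one prominent vertex at (3, 3).
line = [(0, 0), (1, 0.1), (2, 0), (3, 3), (4, 0.1), (5, 0)]
simplified = douglas_peucker(line, tol=1.5)      # keeps (0,0), (3,3), (5,0)
```

Note how the prominent vertex survives while near-collinear points are dropped, which is exactly the behavior a generalization process needs: simplification without losing the features that make the line representative.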
Abstract:
There is now an emerging need for an efficient modeling strategy to develop a new generation of monitoring systems. One method of approaching the modeling of complex processes is to obtain a global model. It should be able to capture the basic or general behavior of the system, by means of a linear or quadratic regression, and then superimpose a local model on it that can capture the localized nonlinearities of the system. In this paper, a novel method based on a hybrid incremental modeling approach is designed and applied for tool wear detection in turning processes. It involves a two-step iterative process that combines a global model with a local model to take advantage of their underlying, complementary capacities. Thus, the first step constructs a global model using a least squares regression. A local model using the fuzzy k-nearest-neighbors smoothing algorithm is obtained in the second step. A comparative study then demonstrates that the hybrid incremental model provides better error-based performance indices for detecting tool wear than a transductive neurofuzzy model and an inductive neurofuzzy model.
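The two-step idea can be sketched generically (synthetic data; as a stand-in for the paper's fuzzy k-nearest-neighbors smoother, an inverse-distance-weighted k-NN correction is used): fit a global least-squares model, then correct its residuals with a local model.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-2, 2, (300, 1))
# Global linear trend plus a localized nonlinearity (a "wear" bump near x = 1).
y = (1.5 * X[:, 0] + 2.0 * np.exp(-8 * (X[:, 0] - 1.0) ** 2)
     + rng.normal(0, 0.05, 300))

# Step 1: global model (least-squares linear regression).
A = np.column_stack([np.ones(300), X[:, 0]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# Step 2: local model on the residuals (distance-weighted k-NN smoother).
def knn_correction(x0, k=15):
    d = np.abs(X[:, 0] - x0)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)          # inverse-distance weights
    return np.sum(w * resid[idx]) / np.sum(w)

def hybrid_predict(x0):
    return coef[0] + coef[1] * x0 + knn_correction(x0)

# The hybrid model captures the local bump that the global model misses.
x0 = 1.0
true_val = 1.5 * x0 + 2.0
global_err = abs(coef[0] + coef[1] * x0 - true_val)
hybrid_err = abs(hybrid_predict(x0) - true_val)
assert hybrid_err < global_err
```

The two steps are complementary, mirroring the paper's design: the global regression captures the general behavior, and the local smoother captures the localized nonlinearities.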