928 results for "Unconstrained and convex optimization"


Relevance:

100.00%

Publisher:

Abstract:

Time series are proficiently converted into graphs via the horizontal visibility (HV) algorithm, which has prompted interest in its capability to capture the nature of different classes of series in a network context. We have recently shown [B. Luque et al., PLoS ONE 6, 9 (2011)] that dynamical systems can be studied from a novel perspective via this method. Specifically, the period-doubling and band-splitting attractor cascades that characterize unimodal maps transform into families of graphs that turn out to be independent of map nonlinearity or other particulars. Here, we provide an in-depth description of the HV treatment of the Feigenbaum scenario, together with analytical derivations that relate to the degree distributions, mean distances, clustering coefficients, etc., associated with the bifurcation cascades and their accumulation points. We describe how the resultant families of graphs can be framed into a renormalization group scheme in which fixed-point graphs reveal their scaling properties. These fixed points are then re-derived from an entropy optimization process defined for the graph sets, confirming a suggested connection between renormalization group and entropy optimization. Finally, we provide analytical and numerical results for the graph entropy and show that it emulates the Lyapunov exponent of the map independently of its sign.
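
As a quick illustration of the HV criterion (a sketch of the standard construction, not the authors' code): two data points become linked nodes when every intermediate value lies strictly below both of them.

```python
# Minimal horizontal visibility (HV) graph construction: nodes i and j are
# linked iff all intermediate values are lower than both endpoints.
def horizontal_visibility_graph(series):
    edges = set()
    n = len(series)
    for i in range(n - 1):
        edges.add((i, i + 1))  # consecutive points always see each other
        for j in range(i + 2, n):
            if max(series[i + 1:j]) < min(series[i], series[j]):
                edges.add((i, j))
    return edges

# Example: an orbit of the logistic map at full chaos; the choice of map
# is only illustrative of the unimodal maps discussed above.
x, xs = 0.4, []
for _ in range(64):
    xs.append(x)
    x = 4.0 * x * (1.0 - x)
print(len(horizontal_visibility_graph(xs)))
```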

Relevance:

100.00%

Publisher:

Abstract:

Energy consumption in data centers is nowadays a critical concern because of its dramatic environmental and economic impact. In recent years, several approaches have been proposed to tackle the energy/cost optimization problem, but most of them have failed to provide an analytical model targeting both the static and dynamic optimization domains of complex heterogeneous data centers. This paper proposes and solves an optimization problem for the energy-driven configuration of a heterogeneous data center. It also proposes a new mechanism for task allocation and workload distribution. The combination of both approaches outperforms previously published results in the field of energy minimization in heterogeneous data centers and opens a promising area of research.
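
To make the flavor of such a static configuration problem concrete, here is a hedged sketch (all server figures are hypothetical and this is not the paper's model) of a linear program that places a workload across heterogeneous servers to minimize static plus dynamic power:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical server fleet (figures invented for illustration).
p_static = np.array([90.0, 60.0, 120.0])    # W while a server is active
p_dynamic = np.array([0.8, 1.2, 0.6])       # W per unit of assigned load
capacity = np.array([100.0, 70.0, 150.0])   # max load per server
demand = 180.0                               # total workload to place

n = len(capacity)
# Variables: [load_1..load_n, on_1..on_n]; activation is relaxed to [0, 1]
# so the example stays a pure LP (a real model would use integer on/off).
c = np.concatenate([p_dynamic, p_static])
A_ub = np.hstack([np.eye(n), -np.diag(capacity)])  # load_i <= cap_i * on_i
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(n), np.zeros(n)]).reshape(1, -1)
b_eq = [demand]                                    # all load must be placed
bounds = [(0.0, cap) for cap in capacity] + [(0.0, 1.0)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n], res.fun)  # per-server load and minimized total power
```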

Relevance:

100.00%

Publisher:

Abstract:

The present work aims to assess Laser-Induced Plasma Spectrometry (LIPS) as a tool for the characterization of photovoltaic materials. Despite being a well-established technique with applications in many scientific and industrial fields, LIPS is so far little known to the photovoltaic scientific community. The technique allows the rapid characterization of layered samples without sample preparation, in open atmosphere and in real time. In this paper, we assess the ability of LIPS to determine elements that are difficult to analyze by other broadly used techniques and to produce analytical information on very low-concentration elements. The results of the LIPS characterization of two different samples are presented: 1) a 90 nm Al-doped ZnO layer deposited on a Si substrate by RF sputtering, and 2) a Te-doped GaInP layer grown on GaAs by Metalorganic Vapor Phase Epitaxy. For both cases, the depth profile of the constituent and dopant elements is reported along with details of the experimental setup and the optimization of key parameters. Remarkably, the longest analysis time was ∼10 s, which, in conjunction with the other characteristics mentioned, makes LIPS an appealing technique for rapid screening or quality control, whether in the lab or on the production line.

Relevance:

100.00%

Publisher:

Abstract:

A unified low-complexity symbol timing synchronization scheme based on sign-bit correlation is proposed for Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) Ultra-Wideband (UWB) receivers. Using the time-domain sequence of the packet/frame synchronization preamble, the proposed scheme detects the upcoming MB-OFDM symbol and estimates the exact boundary of the start of the Fast Fourier Transform (FFT) window. The algorithm is implemented using an efficient hardware-software co-simulation methodology. The effectiveness of the proposed synchronization scheme and the optimization criteria is confirmed by hardware implementation results.
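
As a hedged illustration of the core idea (a generic sign-bit correlator, not the paper's architecture): quantizing both the received samples and the known preamble to their sign bits reduces each multiply in the cross-correlation to a one-bit comparison, which is what makes the scheme cheap in hardware.

```python
import numpy as np

def sign_bit_timing(rx, preamble):
    """Return the lag maximizing the sign-bit cross-correlation."""
    s_rx, s_pre = np.sign(rx), np.sign(preamble)
    m = len(s_pre)
    corr = [np.sum(s_rx[k:k + m] * s_pre) for k in range(len(s_rx) - m + 1)]
    return int(np.argmax(np.abs(corr)))   # estimated FFT-window start

# Example: a +/-1 preamble embedded at offset 37 in noise.
rng = np.random.default_rng(0)
pre = rng.choice([-1.0, 1.0], size=128)
rx = rng.normal(0.0, 1.0, 300)
rx[37:37 + 128] += 3.0 * pre
print(sign_bit_timing(rx, pre))  # -> 37
```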

Relevance:

100.00%

Publisher:

Abstract:

Walker et al. defined two families of binary operations on M (the set of functions from [0,1] to [0,1]) and determined that, under certain conditions, those operations are t-norms (triangular norms) or t-conorms on L (the set of all normal and convex functions in M). We define binary operations on M that are more general than those given by Walker et al., and we study many properties of these general operations that allow us to deduce new t-norms and t-conorms on both L and M.
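
For reference, the axioms being generalized are the classical t-norm axioms on ([0,1], ≤); a binary operation T qualifies as a t-norm when:

```latex
\begin{align*}
&T(x,y) = T(y,x) && \text{(commutativity)}\\
&T(x, T(y,z)) = T(T(x,y), z) && \text{(associativity)}\\
&y \le z \implies T(x,y) \le T(x,z) && \text{(monotonicity)}\\
&T(x,1) = x && \text{(identity element } 1\text{)}
\end{align*}
```

A t-conorm is the dual notion with identity element 0, and the cited results ask when operations on M restrict to operations on L satisfying these same axioms.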

Relevance:

100.00%

Publisher:

Abstract:

The aim of this doctoral Thesis is to develop a methodology for the automatic detection of anomalies from hyperspectral data (imaging spectrometry) and for their mapping under different surface and terrain conditions. Hyperspectral technology, or imaging spectrometry, offers the potential to characterize precisely the state of the materials that make up the observed surfaces on the basis of their spectral response. This state is usually variable, whereas observations are available only in limited numbers and under particular illumination conditions. As the number of spectral bands increases, so does the number of samples needed to define the classes spectrally, in what is known as the Curse of Dimensionality or Hughes Effect (Bellman, 1957); such samples are usually unavailable and costly to obtain, and one need only consider what this implies for planetary exploration. Defining an anomaly, in its spectral sense, as the response of an image pixel that differs significantly from its surroundings, the central problem addressed in this Thesis is, first, how to reduce the dimensionality of hyperspectral data while isolating the information most significant for detecting anomalous responses, and second, how to establish the relationship between detected spectral anomalies and what we have called informational anomalies, that is, anomalies that convey real information about the surfaces or materials producing them. Anomaly detection assumes no prior knowledge of the targets, so pixels are separated automatically according to spectral information that differs significantly from a background estimated either globally for the whole scene or locally through image segmentation. The methodology developed focuses on the statistical definition of the spectral background, proposing a new approach that discriminates anomalies against backgrounds segmented into different groups of wavelengths, exploiting the potential separation between the reflective and emissive parts of the electromagnetic spectrum. The efficiency of the main anomaly detection algorithms has been studied, contrasting the results of the RX algorithm (Reed and Xiaoli, 1990), adopted as the standard by the scientific community, with the UTD method (Uniform Targets Detector), its variant RXD-UTD, subspace-based methods such as SSRX (Subspace RX), and methods based on image subspace projections, such as OSPRX (Orthogonal Subspace Projection RX) and PP (Projection Pursuit). A new method has been developed and evaluated against the above; it is a variation of PP that describes the spectral background through discriminant analysis of bands of the electromagnetic spectrum, separating the anomalies with an algorithm called the Thermal Background Anomaly Detector (DAFT, from its Spanish name), applicable to sensors that record data in the emissive spectrum. The different anomaly detection methods have been evaluated over the visible and near-infrared (VNIR), shortwave infrared (SWIR), mid-infrared (MIR) and thermal infrared (TIR) ranges of the electromagnetic spectrum.
The response of surfaces at the different wavelengths of the electromagnetic spectrum, together with their surroundings, influences the type and frequency of the spectral anomalies they may produce. For this reason, the research has used hyperspectral data cubes from airborne sensors whose strategies and designs for the spectrometric construction of the image differ. Test data sets from the AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) sensors have been evaluated. Experiments have been designed over natural, urban and semi-urban settings of varying complexity. The behavior of the different anomaly detectors has been evaluated through 23 tests corresponding to 15 study areas grouped into 6 scenarios: Urban (E1), Semi-urban/Industrial/Urban Periphery (E2), Forest (E3), Agricultural (E4), Geological/Volcanic (E5) and Other Spaces: Water, Clouds and Shadows (E6). The sensors evaluated are characterized by recording images over a wide range of narrow, contiguous bands of the electromagnetic spectrum. The Thesis has focused on developing techniques that automatically separate and extract pixels, or groups of pixels, whose spectral signature differs discriminantly from those around them, adopting as the sample space part or all of the spectral bands in which the hyperspectral sensor has recorded radiance. One factor taken into account in the research has been the measuring instrument itself, that is, the characterization of the different subsystems, imaging and auxiliary sensors, involved in the process. In order to use the measured data quantitatively, it was necessary to define the spatial and spectral relationships between the sensor, the observed surface, and the potential anomalies and target detection patterns. The impact of the sensor type on anomaly detection has been analyzed, both in its spectral configuration and in the design strategies used to record the radiation coming from the surfaces, the two main types of sensors studied being rotating-mirror (whiskbroom) scanners and push (pushbroom) scanners. Different scenarios have been defined in the research, covering a wide variability of geomorphological environments and cover types in Mediterranean, mid-latitude and tropical settings. In summary, this Thesis presents an anomaly detection technique for hyperspectral data, called DAFT, as a variant of PP, based on dimensionality reduction that projects the background onto a range of thermal wavelengths distinct from the projection of the anomalies or targets without a known spectral signature. The proposed methodology has been tested with real hyperspectral images from different sensors and in different scenarios, and therefore with different spectral backgrounds; the results show the benefits of the approach in detecting a wide variety of objects whose spectral signatures deviate sufficiently from the background.
The technique proves to be automatic in the sense that no parameter tuning is needed, yielding significant results in all cases. Even subpixel-sized objects, which cannot be distinguished by the human eye in the original image, can be detected as anomalies. In addition, the proposed approach is compared with the popular RX technique and other detectors, in both their global and local modes. The proposed method outperforms the others in certain scenarios, demonstrating its ability to reduce the false alarm rate. The results of the automatic DAFT algorithm have demonstrated an improvement in the qualitative definition of the spectral anomalies that identify distinct entities on or below the surface, replacing the classical normal-distribution model with a robust method that considers different alternatives from the very moment of hyperspectral data acquisition. To achieve this, it was necessary to analyze the relationship between biophysical parameters, such as the reflectance and emissivity of the materials, and the spatial distribution of detected entities with respect to their surroundings. Finally, the DAFT algorithm has been chosen as the most suitable for sensors acquiring data in the TIR, as it shows the best agreement with the reference data and a high computational efficiency that facilitates its implementation in a mapping system that automatically projects the detected anomalies onto a geographic reference frame, a significant step toward what is called real-time mapping. ABSTRACT The aim of this Thesis is to develop a specific methodology to be applied in automatic anomaly detection processes using hyperspectral data, also called hyperspectral scenes, and to improve classification processes. Several scenarios and areas, and their relationships with surfaces and objects, have been tested. The spectral characteristics of the reflectance and emissivity parameters in the pattern recognition of urban materials in several hyperspectral scenes have also been tested. Spectral ranges of the visible-near infrared (VNIR), shortwave infrared (SWIR) and thermal infrared (TIR) from hyperspectral data cubes of AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been used in this research. It is assumed that there is no prior knowledge of the targets in anomaly detection. Thus, the pixels are automatically separated according to their spectral information, significantly differentiated with respect to a background, either globally for the full scene or locally by image segmentation. Several experiments on different scenarios have been designed, analyzing the behavior of the standard RX anomaly detector and of different subspace-, image-projection- and segmentation-based anomaly detection methods. Results and their consequences for unsupervised classification processes are discussed. Detection of spectral anomalies aims at automatically extracting pixels that show significant responses in relation to their surroundings. This Thesis deals with the unsupervised technique of target detection, also called anomaly detection.
Since this technique assumes no prior knowledge about the target or the statistical characteristics of the data, the only available option is to look for objects that are differentiated from the background. Several methods have been developed in recent decades, allowing a better understanding of the relationships between image dimensionality and the optimization of search procedures, as well as the subpixel differentiation of the spectral mixture and its implications for anomalous responses. In another sense, imaging spectrometry has proven efficient in the characterization of materials, based on statistical methods using specific reflection and absorption bands. Spectral configurations in the VNIR, SWIR and TIR have been successfully used for mapping materials in different urban scenarios. There has been increasing interest in the use of high-resolution data (both spatial and spectral) to detect small objects and to discriminate surfaces in areas with urban complexity. This has come to be known as target detection, which can be either supervised or unsupervised. In supervised target detection, algorithms rely on prior knowledge, such as the spectral signature. The detection process of matching signatures is not straightforward, due to the complications of relating airborne sensor data to material spectra on the ground. This can be further complicated by the large number of possible objects of interest, as well as uncertainty as to the reflectance or emissivity of these objects and surfaces. An important objective of this research is to establish relationships that allow linking spectral anomalies with what can be called informational anomalies and, therefore, to identify information related to anomalous responses in certain places rather than simply spotting differences from the background. The development in recent years of new hyperspectral sensors and techniques widens the possibilities for applications in remote sensing of the Earth. Remote sensing systems measure and record electromagnetic disturbances that the surveyed objects induce in their surroundings, by means of different sensors mounted on airborne or space platforms. Map updating is important for managers and decision makers, because of the fast changes that usually occur in natural, urban and semi-urban areas. It is necessary to optimize the methodology in order to obtain the most from remote sensing techniques applied to hyperspectral data. The first problem with hyperspectral data is to reduce the dimensionality while keeping the maximum amount of information. Hyperspectral sensors considerably augment the amount of information; this allows better precision in the separation of materials, but at the same time a larger number of parameters must be estimated, and precision falls as the number of bands increases. This is known as the Hughes effect (Bellman, 1957). Hyperspectral imagery allows us to discriminate between a huge number of different materials; however, some land and urban covers are made of similar materials and respond similarly, which produces confusion in the classification. The training and the algorithm used for mapping are also important for the final result, and some properties of the thermal spectrum for detecting land cover are studied.
In summary, this Thesis presents a new technique for anomaly detection in hyperspectral data, called DAFT, as a variant of PP, based on dimensionality reduction by projecting anomalies or targets with unknown spectral signatures against the background over a range of thermal wavelengths. The proposed methodology has been tested with hyperspectral images from different imaging spectrometers corresponding to several places or scenarios, and therefore with different spectral backgrounds. The results show the benefits of the approach for the detection of a variety of targets whose spectral signatures deviate sufficiently from the background. DAFT is an automated technique in the sense that it is not necessary to adjust parameters, and it provides significant results in all cases. Subpixel anomalies, which cannot be distinguished by the human eye in the original image, can nevertheless be detected as outliers thanks to the projection of the VNIR endmembers with a very strong thermal contrast. Furthermore, a comparison between the proposed approach and the well-known RX detector is performed in both modes, global and local. The proposed method outperforms the existing ones in particular scenarios, demonstrating its ability to reduce the probability of false alarms. The results of the automatic DAFT algorithm demonstrate an improvement in the qualitative definition of the spectral anomalies, replacing the classical normal-distribution model with a robust method. Achieving this required analyzing the relationship between biophysical parameters, such as reflectance and emissivity, and the spatial distribution of detected entities with respect to their environment, for example buried or semi-buried materials, or building covers of asbestos, cellular polycarbonate-PVC or metal composites. Finally, the DAFT method has been chosen as the most suitable for anomaly detection with imaging spectrometers acquiring data in the thermal infrared spectrum, since it presents the best agreement with the reference data and a computational efficiency that facilitates its implementation in a mapping system, a step towards what is called Real-Time Mapping.
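
Since the RX algorithm (Reed and Xiaoli, 1990) serves as the baseline throughout, here is a minimal sketch of its standard global form (an illustration of the textbook formulation, not the thesis code): each pixel is scored by its Mahalanobis distance to the scene-wide background mean and covariance.

```python
import numpy as np

def rx_detector(cube):
    """Global RX scores for a (rows, cols, bands) hyperspectral cube."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    D = X - mu
    # Mahalanobis distance (x - mu)^T C^{-1} (x - mu) for every pixel.
    scores = np.einsum('ij,jk,ik->i', D, cov_inv, D)
    return scores.reshape(h, w)   # high score = spectrally anomalous pixel

# Example: a synthetic 3-band scene with one implanted anomaly.
rng = np.random.default_rng(1)
scene = rng.normal(0.0, 1.0, (32, 32, 3))
scene[10, 20] += 8.0
print(np.unravel_index(rx_detector(scene).argmax(), (32, 32)))  # -> (10, 20)
```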

Relevance:

100.00%

Publisher:

Abstract:

This paper analyzes issues that appear when supporting pruning operators in tabled LP. A version of the once/1 control predicate tailored for tabled predicates is presented, and an implementation is analyzed and evaluated. Using once/1 with answer-on-demand strategies makes it possible to avoid computing unneeded solutions for problems that can benefit from tabled LP but in which only a single solution is needed, such as model checking and planning. The proposed version of once/1 is also directly applicable to the efficient implementation of other optimizations, such as early completion, cut-fail loops (to, e.g., prune at the top level), if-then-else, and constraint-based branch-and-bound optimization. Although once/1 still presents open issues, such as dependencies of tabled solutions on program history, our experimental evaluation confirms that it provides an arbitrarily large efficiency improvement in several application areas.
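
The semantics of once/1 under tabling is Prolog-specific, but a loose Python analogy (entirely illustrative, with a toy subset-sum "predicate") may help: tabling behaves like memoizing an answer generator, and once-style pruning stops the producer after its first answer, so unneeded solutions are never computed.

```python
def tabled_once(solver):
    """Memoize only the first answer of a solution generator (once/1-like)."""
    table = {}
    def wrapped(goal):
        if goal not in table:
            table[goal] = next(solver(goal), None)  # prune after 1st answer
        return table[goal]
    return wrapped

def subset_sum(goal):
    """Toy 'predicate': subsets of goal[0] that sum to goal[1]."""
    items, target = goal
    def rec(i, remaining, chosen):
        if remaining == 0:
            yield chosen
            return
        if i == len(items) or remaining < 0:
            return
        yield from rec(i + 1, remaining - items[i], chosen + (items[i],))
        yield from rec(i + 1, remaining, chosen)
    return rec(0, target, ())

solve = tabled_once(subset_sum)
print(solve(((3, 5, 2, 7), 10)))  # first solution; the rest never computed
```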

Relevance:

100.00%

Publisher:

Abstract:

We present an adaptive unequal error protection (UEP) strategy built on the 1-D interleaved parity Application Layer Forward Error Correction (AL-FEC) code for protecting the transmission of stereoscopic 3D video content encoded with Multiview Video Coding (MVC) through IP-based networks. Our scheme targets the minimization of quality degradation produced by packet losses during video transmission in time-sensitive application scenarios. To that end, based on a novel packet-level distortion model, it selects in real time the most suitable packets within each Group of Pictures (GOP) to be protected and the most convenient FEC technique parameters, i.e., the size of the FEC generator matrix. In order to make these decisions, it considers the relevance of the packet, the behavior of the channel, and the available bitrate for protection purposes. Simulation results validate both the distortion model introduced to estimate the importance of packets and the optimization of the FEC technique parameter values.
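
For intuition about the FEC building block, here is a hedged sketch of generic 1-D interleaved parity coding (in the spirit of RFC 6015; the packet grouping and the choice of L are illustrative, not the paper's adaptive scheme): source packets are split into L interleaved columns and each column is XORed into one parity packet, so one loss per column is repairable.

```python
def interleaved_parity(packets, L):
    """XOR every L-th source packet into one of L parity packets."""
    size = max(len(p) for p in packets)
    padded = [p.ljust(size, b'\x00') for p in packets]
    parities = []
    for col in range(L):
        parity = bytes(size)
        for p in padded[col::L]:                  # one interleaved column
            parity = bytes(a ^ b for a, b in zip(parity, p))
        parities.append(parity)
    return parities                               # repairs 1 loss per column

# Example: six payloads protected with L = 3 parity packets.
pkts = [bytes([i]) * 4 for i in range(6)]
print(interleaved_parity(pkts, 3))
```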

Relevance:

100.00%

Publisher:

Abstract:

Type-2 fuzzy sets (T2FSs) were introduced by L.A. Zadeh in 1975 [65] as an extension of type-1 fuzzy sets (FSs). Whereas in the latter the degree of membership of an element in the set is determined by a value in the interval [0,1], in T2FSs the degree of membership of an element is a fuzzy set in [0,1]; that is, a T2FS is determined by a membership function μ : X → M, where M = [0,1]^[0,1] = Map([0,1],[0,1]) is the set of functions from [0,1] to [0,1] (see [39], [42], [43], [61]). Since T2FSs were introduced, many of the definitions, operations, properties and results obtained for FSs have been generalized to them (see [39], [42], [43], [61], for example) by means of Zadeh's Extension Principle [65] (see Theorem 1.1). However, as in any area of research, many gaps and open problems remain, posing a challenge for anyone who wants to study this field in depth. The present work is devoted to this challenge, making important advances toward "filling gaps" in the theory of type-2 fuzzy sets, especially regarding the properties of self-contradiction and N-self-contradiction, and the operations of negation, t-norm and t-conorm on T2FSs. It is worth noting that [61] justifies that operations on T2FSs (Map(X,M)) can be defined naturally from the operations on M and satisfy the same properties. Therefore, since it is simpler, the present work takes M, and some of its subsets, as the object of study instead of Map(X,M). Regarding the negation operation, in the framework of T2FSs an operation associated with the standard negation on [0,1] is usually employed to represent negation in M. However, this operation does not satisfy the axioms that, intuitively, any operation should satisfy to be considered a negation on M. In this work, the axioms of negation and strong negation on T2FSs are presented. An operation associated with any surjective negation on [0,1], including the standard negation, is also defined, and it is studied, together with other properties, whether it is a negation or a strong negation on L (the set of normal and convex functions of M). Furthermore, the conditions under which De Morgan's laws hold are checked for a broad set of pairs of binary operations on M. On the other hand, the properties of N-self-contradiction and self-contradiction have been studied extensively in type-1 fuzzy sets (FSs) and in Atanassov's intuitionistic fuzzy sets (AIFSs). The present work begins the study of these properties within the framework of T2FSs whose membership degrees lie in L. In this sense, the concepts of N-self-contradiction and self-contradiction are extended here to the set L, and some criteria for verifying these properties are determined. Regarding other operations, Walker et al. ([61], [63]) defined two families of binary operations on M and determined that, under certain conditions, these operations are t-norms (triangular norms) or t-conorms on L.
In this work, binary operations on M are introduced, some more general than and others different from those given by Walker et al., and several of their properties are studied in order to derive new t-norms and t-conorms on L. ABSTRACT Type-2 fuzzy sets (T2FSs) were introduced by L.A. Zadeh in 1975 [65] as an extension of type-1 fuzzy sets (FSs). Whereas for FSs the degree of membership of an element of a set is determined by a value in the interval [0,1], the degree of membership of an element for T2FSs is a fuzzy set in [0,1]; that is, a T2FS is determined by a membership function μ : X → M, where M = [0,1]^[0,1] is the set of functions from [0,1] to [0,1] (see [39], [42], [43], [61]). Later, many definitions, operations, properties and results known for FSs were generalized to T2FSs (e.g. see [39], [42], [43], [61]) by employing Zadeh's Extension Principle [65] (see Theorem 1.1). However, as in any area of research, there are still many open problems, which represent a challenge for anyone who wants to make a deep study of this field. We have therefore taken up this challenge, making significant progress toward "filling gaps" (closing open problems) in the theory of T2FSs, especially on the properties of self-contradiction and N-self-contradiction, and on the operations of negation, t-norms (triangular norms) and t-conorms on T2FSs. Walker and Walker justify in [61] that the operations on Map(X,M) can be defined naturally from the operations on M and have the same properties. Therefore, we work on M (our object of study) and some subsets of M, as all the results are easily and directly extensible to Map(X,M). Regarding the operation of negation, an operation associated with the standard negation on [0,1] has usually been employed in the framework of T2FSs, but such an operation does not satisfy the negation axioms on M. In this work, we introduce the axioms that a function in M should satisfy to qualify as a type-2 negation or a strong type-2 negation. We also define an operation on M associated with any surjective negation on [0,1] and analyze, among other properties, whether such an operation is a negation or a strong negation on L (the set of all normal and convex functions of M). Besides, we study De Morgan's laws with respect to some binary operations on M. On the other hand, the properties of self-contradiction and N-self-contradiction have been extensively studied on FSs and on Atanassov's intuitionistic fuzzy sets (AIFSs). Accordingly, in this research we begin the study of these properties in the framework of T2FSs. In this sense, we give the definitions of self-contradiction and N-self-contradiction on L, and establish criteria to verify these properties on L. With respect to t-norms and t-conorms, Walker et al. ([61], [63]) defined two families of binary operations on M and found that, under some conditions, these operations are t-norms or t-conorms on L. In this work, we introduce binary operations on M that are more general than those given by Walker et al. and study the minimal conditions under which these operations satisfy each of the t-norm and t-conorm axioms.
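
For context, the Walker-type operations referred to here are usually written as convolutions of the meet and join of [0,1] under the Extension Principle; with ∧ = min and ∨ = max, a standard form found in this literature (quoted from memory, so treat the exact notation as an assumption) is:

```latex
(f \sqcup g)(x) = \sup\{\, f(y) \wedge g(z) \;:\; y \vee z = x \,\}, \qquad
(f \sqcap g)(x) = \sup\{\, f(y) \wedge g(z) \;:\; y \wedge z = x \,\}.
```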

Relevance:

100.00%

Publisher:

Abstract:

Current energy needs require the development of effective and efficient technologies for the production, transport and distribution of energy. These needs have driven new developments in the energy field, among them energy storage systems. Advances in materials engineering make it possible to consider energy storage through the elastic deformation of beams. Specifically, the starting point is a concept for an energy storage mechanism based on the elastic deformation of torsional spiral springs. These springs can be regarded as beam elements subjected to pure bending and large deflections. This Thesis focuses on the design and optimization of these elements in order to maximize the energy density they are able to absorb. The optimization process begins with the identification of the critical factor on which it depends; in this case, the energy density. This factor depends on the geometry of the load-bearing cross section and on the material used in its construction. In recent years there has been great development in fiber-reinforced polymer (FRP) composite materials. These materials are gradually replacing others, such as metals, mainly because of their excellent ratio of mechanical properties to weight. On the other hand, analysis of the possible geometries for the load-bearing cross section showed that the most suitable is a sandwich structure. A design procedure is thus implemented for sandwich beams under pure bending, with skins made of FRP composites and a core that must guarantee the low weight of the structure. A systematic procedure is developed that can be particularized according to the input parameters of the beam and that takes into account and analyzes all possible failure modes. Likewise, a series of design maps or charts is developed that allows the preliminary dimensions of the beam to be selected quickly. Finally, tests are carried out that, on the one hand, validate the concept of the energy storage mechanism by testing a spring with a monolithic section and, on the other, validate the various proposed sandwich beam designs and show the increase in energy density with respect to the monolithic alternative. Future lines of research include new materials, such as carbon nanotubes, and optimization of the energy absorption mechanism: optimizing the pure-bending mechanism and implementing systems that store energy through elastic deformation under tension-compression loads. ABSTRACT Energy supply requires the development of effective and efficient technologies for the production, transport and distribution of energy. In recent years, many energy storage systems have been developed. Advances in the field of materials engineering have allowed the development of new concepts, such as energy storage by elastic deformation of beams. In particular, this Thesis studies an energy storage device based on the elastic deformation of torsional springs. These springs can be considered as beam elements subjected to pure bending loads and large deflections. This Thesis is focused on the design and optimization of these beam elements in order to maximize their density of stored energy.
The optimization process starts with the identification of the critical factor for elastic energy storage: the energy density. This factor depends on the geometry of the cross section of the beam and on the materials from which it is made. In the last 20 years, major advances have been made in the field of composite materials, particularly fiber-reinforced polymers (FRP). This type of material is gradually replacing metallic materials owing to its excellent ratio of mechanical properties to weight. On the other hand, several possible geometries were analyzed for use in the cross section of the beam; it was concluded that the best option for maximum energy density is a sandwich beam. A design procedure for sandwich beams with skins made of FRP composites and a lightweight core is developed. This procedure can be particularized for different input parameters, and it analyzes all the possible failure modes. Design charts and failure mode maps have been developed in order to simplify the design process. Finally, several tests were carried out. First, a prototype of the energy storage system using a monolithic composite beam was tested in order to validate the concept of energy storage by elastic deformation. After that, sandwich beam samples were built and tested, validating the design and showing the increase in energy density with respect to the monolithic beam. As future research lines, the following are proposed: research into new materials, such as carbon nanotubes, and optimization of the energy storage mechanism, that is, optimizing the pure-bending storage mechanism and developing new ones based on tension-compression mechanisms.
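
As a hedged aside on why a sandwich section raises the energy density (textbook beam mechanics, not figures from the Thesis): in pure bending the stress varies linearly through the depth, so a rectangular monolithic section working at a maximum allowable stress σ_max stores, on average,

```latex
u_{\mathrm{monolithic}} \;=\; \frac{1}{V}\int_V \frac{\sigma^2}{2E}\,\mathrm{d}V
\;=\; \frac{\sigma_{\max}^2}{6E},
\qquad
u_{\mathrm{skins}} \;\approx\; \frac{\sigma_{\max}^2}{2E_{\mathrm{skin}}},
```

whereas the thin skins of an idealized sandwich work almost uniformly at σ_max, approaching the uniaxial limit; the light core adds little stored energy but also little mass, which is the lever the design procedure exploits.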

Relevance:

100.00%

Publisher:

Abstract:

Markov Chain Monte Carlo methods are widely used in signal processing and communications for statistical inference and stochastic optimization. In this work, we introduce an efficient adaptive Metropolis-Hastings algorithm to draw samples from generic multimodal and multidimensional target distributions. The proposal density is a mixture of Gaussian densities with all parameters (weights, mean vectors and covariance matrices) updated using all the previously generated samples, applying simple recursive rules. Numerical results for the one- and two-dimensional cases are provided.
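
A minimal sketch of the adaptation mechanism (simplified to a single Gaussian component rather than the paper's mixture, so treat it as illustrative only): the proposal mean and covariance are updated with simple recursive rules from all previously generated samples.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iters, eps=1e-6):
    d = len(x0)
    scale = 2.38 ** 2 / d                  # classic adaptive-MH scaling
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    mean = x.copy()                        # running mean of the chain
    cov = np.eye(d)                        # identity start keeps early proposals proper
    samples = []
    for t in range(1, n_iters + 1):
        y = np.random.multivariate_normal(x, scale * (cov + eps * np.eye(d)))
        lp_y = log_target(y)
        if np.log(np.random.rand()) < lp_y - lp:    # MH accept/reject
            x, lp = y, lp_y
        samples.append(x.copy())
        delta = x - mean                   # recursive mean/covariance update
        mean = mean + delta / (t + 1)
        cov = cov + (np.outer(delta, x - mean) - cov) / (t + 1)
    return np.asarray(samples)

# Example: a 2-D Gaussian target centered at (3, 3).
chain = adaptive_metropolis(lambda z: -0.5 * np.sum((z - 3.0) ** 2),
                            np.zeros(2), 5000)
print(chain.mean(axis=0))  # should approach [3, 3]
```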

Relevance:

100.00%

Publisher:

Abstract:

The problem of optimal impulsive collision avoidance between two colliding objects in 3-dimensional elliptical Keplerian orbits is investigated with the purpose of establishing the optimal impulse direction and orbit location that give rise to the maximum miss distance following the maneuver. Closed-form analytical expressions are provided that predict this distance and can be employed to perform a full optimization analysis. After verifying the accuracy of the expressions for any orbital eccentricity and encounter geometry, the optimum maneuver direction is derived as a function of the arc-length separation between the maneuver point and the predicted collision point. The provided formulas can be used for high-accuracy instantaneous estimation of the outcome of a generic impulsive collision avoidance maneuver and for its optimization.

Relevance:

100.00%

Publisher:

Abstract:

Monte Carlo (MC) methods are widely used in signal processing, machine learning and communications for statistical inference and stochastic optimization. A well-known class of MC methods is composed of importance sampling and its adaptive extensions (e.g., population Monte Carlo). In this work, we introduce an adaptive importance sampler using a population of proposal densities. The novel algorithm provides a global estimation of the variables of interest iteratively, using all the samples generated. The cloud of proposals is adapted by learning from a subset of previously generated samples, in such a way that local features of the target density can be better taken into account compared to single global adaptation procedures. Numerical results show the advantages of the proposed sampling scheme in terms of mean absolute error and robustness to initialization.
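
A hedged sketch of the general idea (a simplification with Gaussian proposals, not the authors' exact algorithm): each proposal in the population contributes weighted samples to one global self-normalized estimator, while each proposal is relocated by learning from its own weighted local samples.

```python
import numpy as np
from scipy import stats

def adaptive_pop_is(target_pdf, means, sigma, n_iters, per_prop=50):
    means = [np.array(m, dtype=float) for m in means]
    all_x, all_w = [], []
    for _ in range(n_iters):
        for i, mu in enumerate(means):
            x = np.random.normal(mu, sigma, size=(per_prop, len(mu)))
            q = stats.norm.pdf(x, mu, sigma).prod(axis=1)  # proposal pdf
            w = target_pdf(x) / q                          # IS weights
            all_x.append(x)
            all_w.append(w)
            # Local adaptation: recenter proposal i on a high-weight sample.
            means[i] = x[np.random.choice(len(w), p=w / w.sum())]
    X, W = np.vstack(all_x), np.concatenate(all_w)
    return (W[:, None] * X).sum(axis=0) / W.sum()  # global mean estimate

# Example: estimating the mean of a bimodal 1-D target (true mean = 1).
target = lambda x: (0.5 * stats.norm.pdf(x, -4, 1) +
                    0.5 * stats.norm.pdf(x, 6, 1)).prod(axis=1)
print(adaptive_pop_is(target, means=[[-1.0], [1.0]], sigma=2.0, n_iters=20))
```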

Relevance:

100.00%

Publisher:

Abstract:

Monte Carlo (MC) methods are widely used in signal processing, machine learning and stochastic optimization. A well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms. In this work, we introduce a novel parallel interacting MCMC scheme, where the parallel chains share information using another MCMC technique working on the entire population of current states. These parallel "vertical" chains are driven by random-walk proposals, whereas the "horizontal" MCMC uses an independent proposal, which can be easily adapted by making use of all the generated samples. Numerical results show the advantages of the proposed sampling scheme in terms of mean absolute error, as well as robustness with respect to initial values and parameter choice.
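
A compact sketch of the interaction pattern (a simplification, with a Gaussian independence sampler fitted to the accumulated samples standing in for the adapted horizontal kernel): vertical chains take random-walk Metropolis steps, then a horizontal step proposes replacements to every chain.

```python
import numpy as np

def parallel_interacting_mcmc(log_target, x0s, n_iters, step=0.5):
    states = [np.asarray(x, dtype=float) for x in x0s]
    lps = [log_target(x) for x in states]
    history = [s.copy() for s in states]
    for _ in range(n_iters):
        # "Vertical" steps: one random-walk Metropolis move per chain.
        for i, x in enumerate(states):
            y = x + step * np.random.randn(*x.shape)
            lp_y = log_target(y)
            if np.log(np.random.rand()) < lp_y - lps[i]:
                states[i], lps[i] = y, lp_y
            history.append(states[i].copy())
        # "Horizontal" step: an independent Gaussian proposal fitted to all
        # samples generated so far proposes a replacement to each chain.
        H = np.asarray(history)
        mu, sd = H.mean(axis=0), H.std(axis=0) + 1e-6
        log_q = lambda z: (-0.5 * np.sum(((z - mu) / sd) ** 2)
                           - np.sum(np.log(sd)))
        for i, x in enumerate(states):
            y = mu + sd * np.random.randn(*x.shape)
            lp_y = log_target(y)
            # Independence-sampler ratio: pi(y) q(x) / (pi(x) q(y)).
            if np.log(np.random.rand()) < lp_y - lps[i] + log_q(x) - log_q(y):
                states[i], lps[i] = y, lp_y
            history.append(states[i].copy())
    return np.asarray(history)

# Example: three chains targeting a 1-D standard Gaussian.
chains = parallel_interacting_mcmc(lambda z: -0.5 * float(z @ z),
                                   [[-5.0], [0.0], [5.0]], n_iters=500)
print(chains.mean(axis=0))  # should approach 0
```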

Relevance:

100.00%

Publisher:

Abstract:

Cognitive Wireless Sensor Networks are an emerging technology with vast potential to avoid traditional wireless problems, such as reliability, interference and spectrum scarcity, in Wireless Sensor Networks. Cognitive Wireless Sensor Network test-beds are an important tool for future developments, protocol strategy testing and algorithm optimization in real scenarios. A new test-bed for Cognitive Wireless Sensor Networks is presented in this paper. This work in progress includes both the design of a cognitive simulator for networks with a high number of nodes and the implementation of a new platform with three wireless interfaces and cognitive software for extracting real data. Finally, as future work, a remotely programmable system and the planning of the physical deployment of the nodes in the university building are presented.