78 results for ISE and ITSE optimization


Relevance:

100.00%

Publisher:

Abstract:

Cognitive Wireless Sensor Networks are an emerging technology with vast potential to mitigate traditional problems of Wireless Sensor Networks such as reliability, interference and spectrum scarcity. Test-beds for Cognitive Wireless Sensor Networks are an important tool for future developments, protocol strategy testing and algorithm optimization in real scenarios. This paper presents a new test-bed for Cognitive Wireless Sensor Networks. This work in progress includes both the design of a cognitive simulator for networks with a high number of nodes and the implementation of a new platform with three wireless interfaces and cognitive software for extracting real data. Finally, as future work, a remote programmable system and the planning of the physical deployment of the nodes in the university building are presented.
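As an illustration of the kind of algorithm such a test-bed could evaluate (this is not part of the paper itself), a cognitive node might pick its transmission channel from its sensing history; the channel model and occupancy probabilities below are assumptions for the sketch:

```python
import random

def sense(channel, occupancy_prob, rng):
    """Return True if a primary user is detected on the channel (toy model)."""
    return rng.random() < occupancy_prob[channel]

def select_channel(history):
    """Pick the channel with the lowest observed busy ratio."""
    return min(history, key=lambda ch: sum(history[ch]) / len(history[ch]))

def run(occupancy_prob, rounds=1000, seed=1):
    rng = random.Random(seed)
    history = {ch: [False] for ch in occupancy_prob}  # one neutral sample each
    for _ in range(rounds):
        for ch in occupancy_prob:
            history[ch].append(sense(ch, occupancy_prob, rng))
    return select_channel(history)

# With channel 2 far less occupied than 0 and 1, the node converges on it.
print(run({0: 0.8, 1: 0.6, 2: 0.1}))
```

A real cognitive test-bed would replace the `sense` stub with actual spectrum measurements from the platform's wireless interfaces.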

Relevance:

100.00%

Publisher:

Abstract:

Reservoir management during flood events is currently carried out, in most cases, using simulation models, mainly because of their ease of use in real time by the dam operator. Optimization models for reservoir management have been developed that improve on the results of simulation models, but their application in real time is very difficult or simply unfeasible, because it requires knowing the future flood inflow to the reservoir before the release decision is made. For this reason, the goal of this work is to develop a flood-management model for reservoirs that incorporates the advantages of an optimization model while remaining easy to use in real time by the dam manager. To this end, a Bayesian network model was built that represents the processes of the contributing catchment and the reservoir, and that learns from cases generated synthetically by a lumped hydrological model and a reservoir-management optimization model. In a first stage, a large number of synthetic flood episodes was generated: the Monte Carlo method was used to obtain the rainfall, and a lumped rainfall-runoff model to obtain the flood hydrographs. The resulting series were then used as input signals to the reservoir management model PLEM, which optimizes a cost objective function by mixed integer linear programming, generating an equal number of optimal events of released flow and of reservoir-level evolution. The simulated episodes were used to train and evaluate two Bayesian network models, one that forecasts the inflow to the reservoir and another that predicts the released flow, both over a time horizon of one to five hours, in one-hour intervals.
For the hydrological Bayesian network, the chosen inflow is the mean of the forecast probability distribution. For the hydraulic Bayesian network, because this process is markedly non-linear and the network returns a range of possible release values, a methodology was developed to select a single value so as to ease the dam operator's work. This methodology consists of testing several proposed strategies, which combine zonings with alternative rules for selecting a single release value in each zone, on a sufficiently large set of synthetic episodes. The results of each strategy were compared with the MEV method, and the strategies that improve on the MEV results, in terms of maximum released flow and maximum reservoir level, were selected; any of them can be used in real time by the dam operator for the studied reservoir (Talave). The proposed methodology could be applied to any isolated reservoir to obtain, for that particular reservoir, strategies that improve on the MEV results. Finally, as an example, the methodology was applied to a synthetic flood, obtaining the released flow and the reservoir level at each time interval, and the MIGEL model was applied to obtain at each instant the gate-opening configuration of the outlet works that evacuate the flow.
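The synthetic-episode generation step (Monte Carlo rainfall fed through a lumped rainfall-runoff transformation) can be sketched as follows; the exponential storm model, the linear-reservoir constant and the catchment values are illustrative assumptions, not the thesis's calibrated models:

```python
import random

def synthetic_storm(rng, n_steps=24, mean_mm=4.0):
    """Monte Carlo rainfall: exponentially distributed depth per 1 h interval."""
    return [rng.expovariate(1.0 / mean_mm) for _ in range(n_steps)]

def rainfall_runoff(rain, k=0.85, runoff_coef=0.5, area_km2=100.0):
    """Lumped linear-reservoir rainfall-runoff transformation.
    Q[t] = k*Q[t-1] + (1-k)*effective_inflow[t], in m3/s at 1 h steps."""
    q, hydrograph = 0.0, []
    for p in rain:
        # p mm/h over area_km2 km2 -> m3/s: p*1e-3 m * 1e6 m2 / 3600 s
        inflow = runoff_coef * p * area_km2 * 1e3 / 3600.0
        q = k * q + (1.0 - k) * inflow
        hydrograph.append(q)
    return hydrograph

rng = random.Random(42)
episodes = [rainfall_runoff(synthetic_storm(rng)) for _ in range(500)]
print(len(episodes))
```

In the thesis, hydrographs like these are fed to the PLEM optimization model, and the resulting optimal release series become the training cases for the Bayesian networks.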

Relevance:

100.00%

Publisher:

Abstract:

This thesis develops a methodology and a software tool for tackling, effectively and efficiently, the management of the resources involved in emergencies. Through novel indicators such as the Operative Response Index (I.RO.), it enables a correct assessment of the real risk as a function of the available resources and of their probability of collapse, allowing the use of these resources to be optimized both in the present and in future scenarios. The main actors involved in emergencies are first described, evaluated and their existing synergies shown. The "global emergency cycle" (planning, prevention, detection, intervention, rehabilitation and the informative handling of the crisis) is defined and analysed through socially intelligent complex systems (SCSI). Likewise, the different intervention scenarios are defined, along with how their evolution can be forecast; to this end, typologies of incidents are established and the similarities and differences among them are identified. Decision-making at the operational-planning level is also described and modelled, from the location of facilities to the typologies of fire stations, etc. To demonstrate the feasibility of the developed methodology, it is applied to the territory of the Comunidad Autónoma de Madrid, obtaining satisfactory results from the existing data. This is an innovative study with far-reaching impact, not only in emergency management but also in other fields such as military and business strategy, organizational management, etc.

Relevance:

100.00%

Publisher:

Abstract:

The design of a reflectarray antenna under the local periodicity assumption requires the determination of the scattering matrix of multilayered structures with periodic metallizations for a large number of different geometries. Therefore, in order to design reflectarray antennas within reasonable CPU times, fast and accurate numerical tools are needed for the analysis of periodic multilayered structures. In this thesis, the Galerkin version of the Method of Moments (MoM) in the spectral domain is applied to the analysis of the periodic multilayered structures required for the design of reflectarray antennas based on either stacked patches or coplanar parallel dipoles. Unfortunately, this numerical approach involves the computation of double infinite summations, and whereas some of them converge very fast, others converge very slowly. To alleviate this problem, the thesis proposes a novel hybrid spectral-spatial MoM for the analysis of periodic multilayered structures, in which the fast-convergent summations are computed in the spectral domain and the slowly convergent summations are computed in the spatial domain by means of an enhanced Mixed Potential Integral Equation (MPIE) formulation of the MoM. This enhanced formulation is based on the efficient interpolation of the periodic multilayered Green's functions and on the efficient computation of the singular integrals leading to the MoM matrix entries. The novel hybrid spectral-spatial method and the standard spectral-domain MoM have been compared for reflectarray elements based on stacked patches. Numerical simulations have shown that the CPU time required by the hybrid MoM is about 60 times smaller than that required by the standard spectral-domain MoM for an accuracy of two significant figures.
The combined use of reflectarray elements based on stacked patches and wideband optimization techniques has made it possible to design dual-polarization transmit-receive (Tx-Rx) reflectarray antennas for space applications with very stringent requirements. Unfortunately, the required level of isolation between orthogonal polarizations in DBS antennas (typically 30 dB) is too demanding to be achieved with antennas based on stacked patches. Moreover, the use of stacked-patch elements entails complex and expensive manufacturing processes. This thesis therefore investigates several reflectarray element configurations based on sets of parallel dipoles that aim to overcome the drawbacks of the stacked-patch element. First, an element consisting of two stacked orthogonal sets of three parallel dipoles is proposed for dual-polarization applications. An antenna based on this element has been designed, manufactured and measured, and the results show high performance in terms of bandwidth, losses, efficiency and cross-polarization discrimination, while requiring a much simpler manufacturing process than antennas based on three stacked patches. Unfortunately, the element based on two orthogonal sets of three parallel dipoles does not provide enough degrees of freedom to design dual-polarization transmit-receive (Tx-Rx) reflectarray antennas for space applications by means of wideband optimization techniques. For this reason, the thesis proposes a new reflectarray element that does provide enough degrees of freedom for each polarization: two orthogonal sets of four parallel dipoles, each set containing three coplanar dipoles and one stacked dipole. To accommodate the two sets of dipoles in a single reflectarray cell, the set of dipoles for one polarization is shifted half a period with respect to the set for the other polarization. This makes it possible to use only two metallization levels per element, which simplifies the manufacturing process as in the case of the element based on two sets of three coplanar parallel dipoles. A dual-polarization, dual-band (Tx-Rx) antenna based on the new element has been designed, manufactured and measured. The antenna shows very good performance in both frequency bands with very low cross-polarization levels. Numerical simulations presented in the thesis show that these low cross-polarization levels can be reduced even further by small rotations of the two sets of dipoles associated with each polarization.
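The principle behind the hybrid approach (evaluate each summation in the domain where its terms decay fastest) can be illustrated with a scalar Poisson-summation example rather than the thesis's periodic Green's functions: the same lattice sum converges like O(1/N) when truncated directly, but geometrically after transformation to the other domain.

```python
import math

def spectral_sum(a, N):
    """Directly truncated lattice sum sum_{n=-N..N} 1/(n^2 + a^2).
    Tail error ~ 2/N, so convergence is very slow."""
    return sum(1.0 / (n * n + a * a) for n in range(-N, N + 1))

def spatial_sum(a, M):
    """Same quantity via Poisson summation: (pi/a) * sum_m e^{-2*pi*a*|m|}.
    Terms decay geometrically, so a handful of terms suffices."""
    s = 1.0 + 2.0 * sum(math.exp(-2.0 * math.pi * a * m) for m in range(1, M + 1))
    return math.pi / a * s

exact = math.pi / 0.5 / math.tanh(math.pi * 0.5)   # closed form (pi/a)*coth(pi*a)
print(abs(spectral_sum(0.5, 1000) - exact))  # error ~ 2/N = 2e-3 after 2001 terms
print(abs(spatial_sum(0.5, 5) - exact))      # error below 1e-6 with only 5 terms
```

The hybrid MoM exploits exactly this kind of disparity, keeping each double series in whichever domain (spectral or spatial/MPIE) makes it fast.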

Relevance:

100.00%

Publisher:

Abstract:

Developed societies generate a large amount of waste, which requires proper management. This problem demands growing attention from society, owing to the need to protect the environment. Efforts are therefore focused on minimizing the generation of waste and on finding ways to exploit the waste that is unavoidable, solutions far more advisable from a technical, ecological and economic viewpoint than dumping or destruction. Industries must take the necessary measures to promote the reduction of this waste, develop clean technologies that save our natural resources and, above all, seek methods of reuse, recycling, inertization and valorization of the waste generated in their production. The construction industry is a very receptive field for the development of new materials incorporating such waste. The incorporation of different industrial residues into ceramic matrices is proposed as a cheap way of fixing the metallic species present in the transformation of ornamental rocks, galvanizing or metallurgical sludges, etc. In all cases, the addition of these residues requires their prior characterization and the optimization of the forming and firing conditions when they are incorporated into fired clay. Among the residues incorporated into construction materials are aluminium slags. The metallurgical industry produces different types of slag during its melting processes, and their recycling is one of the lines of interest for these industries. In the case of aluminium slags, the initial treatment consists of recovering the aluminium by mechanical methods, followed by a chemical or plasma treatment. This leaves a final slag that contains hardly any aluminium and is rich in soluble salts, which limits its storage in dumps. The slag is a mixture of aluminium metal and non-metallic products such as aluminium oxides, nitrides and carbides, salts and other metal oxides.
This study has analysed the possibility of adding aluminium slags from secondary metallurgy to construction materials so that, after suitable processing, ceramic-matrix composite materials can be obtained. This doctoral thesis has analysed the technical feasibility of incorporating aluminium slags from secondary metallurgy into a fired-clay matrix. Different treatments were applied to the slag, and different processing variables were studied, such as the grinding energy and the sintering temperature, in addition to the slag content. Compaction with 5-10 % water, drying and sintering yields rectangular pieces of various sizes. Regarding slag content, between 10 and 40 % of TT slag (slag previously calcined at 750 °C in air) was incorporated. The best results correspond to a content of 20 % ESC TT sintered at 980 °C, since higher slag contents lead to pieces with black core. The products obtained by adding 20 % aluminium slag to the clay show low expansion after sintering, better physical and mechanical properties, and higher thermal conductivity than products made from clay without additions: density increases, absorption decreases, and bending and compressive strengths increase, owing to a closed porosity and a slag-matrix interaction. In all cases a surface exudation of metallic aluminium occurs, whose volume is related to the amount of slag added.
With this slag content, a salt-dissolution treatment followed by calcination (ESC TTQ) improves the properties of the composite material, not only over those obtained with the calcined slag (ESC TT) but also over those obtained with the untreated slag (ESC). If the 20 % slag added is treated both thermally and chemically (ESC TTQ), the properties of the composite improve further: the product is more compact, with fewer pores, so density values are higher, absorptions are lower, and bending and compressive strengths are greater than for products whose slag was only thermally treated, reaching characteristic compressive strengths of the order of 109 MPa. The thermal conductivity values obtained are also higher. Technological tests on 160 x 30 x 5 mm pieces of the optimized clay + 20 % ESC TTQ composite, consisting of the determination of moisture expansion, efflorescence and frost resistance, showed in general better behaviour than the clay without additions. New ceramic-matrix composite materials for construction have thus been obtained, with improved physical, mechanical and thermal properties, using aluminium slags from secondary metallurgy as a valorization option for this waste, preventing it from being dumped in landfills and polluting the environment.

Relevance:

40.00%

Publisher:

Abstract:

In this paper, a novel approach for obtaining 3D models from video sequences captured with hand-held cameras is presented. We define a pipeline that robustly deals with different types of sequences and acquisition devices. Our system follows a divide-and-conquer approach: after a frame decimation that pre-conditions the input sequence, the video is split into short-length clips. This allows the reconstruction step to be parallelized, which translates into a reduction in the amount of computational resources required. The short length of the clips allows an intensive search for the best solution at each step of the reconstruction, which makes the system more robust. Unlike other approaches, the process of feature tracking is embedded within the reconstruction loop for each clip. A final registration step merges all the processed clips into the same coordinate frame.
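The decimation, clip-splitting and parallel reconstruction stages of such a pipeline can be sketched as follows; the clip length, overlap and the stub reconstruction function are illustrative assumptions, not the paper's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def decimate(frames, step=3):
    """Frame decimation: keep every `step`-th frame to pre-condition the input."""
    return frames[::step]

def split_into_clips(frames, clip_len=30, overlap=5):
    """Short, overlapping clips; the overlap lets clips be registered later."""
    clips, start = [], 0
    while start < len(frames):
        clips.append(frames[start:start + clip_len])
        start += clip_len - overlap
    return clips

def reconstruct_clip(clip):
    """Stand-in for per-clip reconstruction (a real system runs feature
    tracking inside this loop and returns a partial 3D model)."""
    return {"n_frames": len(clip), "first": clip[0], "last": clip[-1]}

def pipeline(frames):
    clips = split_into_clips(decimate(frames))
    with ThreadPoolExecutor() as pool:   # clips are reconstructed in parallel
        partials = list(pool.map(reconstruct_clip, clips))
    return partials                      # a registration step would merge these

models = pipeline(list(range(300)))      # 300 dummy "frames"
print(len(models))
```

Because each clip is independent until the final registration, the per-clip work maps naturally onto a thread or process pool, which is where the reduction in computational resources comes from.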

Relevance:

40.00%

Publisher:

Abstract:

Nowadays, computing platforms consist of a very large number of components that need to be supplied with different voltage levels and meet different power requirements. Even a very small platform, such as a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture: one that optimizes performance while meeting electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers, ranging from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. The methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities; this exhaustive analysis helps the designer quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: the automatic generation of architectures and the optimized selection of components. This thesis details the implementation of both steps.
The usefulness of the methodology is corroborated by contrasting results on real problems and on experiments designed to test the limits of the algorithms.
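The component-selection step described above can be illustrated with a minimal sketch: a genetic algorithm that assigns one converter from a catalog to each load, trading off total cost against conversion losses. The catalog entries, load powers, trade-off weight and GA settings below are illustrative assumptions, not data from the thesis.

```python
import random

# Hypothetical converter catalog: (name, efficiency, unit cost in euros).
CATALOG = [
    ("buck_A", 0.90, 1.2),
    ("buck_B", 0.94, 2.5),
    ("ldo_C", 0.80, 0.4),
    ("module_D", 0.96, 4.0),
]

# Hypothetical load powers in watts (one converter is chosen per load).
LOADS_W = [5.0, 12.0, 3.0, 8.0]


def fitness(chrom):
    """Weighted sum of total cost and total conversion loss (lower is better)."""
    cost = sum(CATALOG[g][2] for g in chrom)
    loss_w = sum(p * (1.0 - CATALOG[g][1]) for p, g in zip(LOADS_W, chrom))
    return cost + 0.5 * loss_w  # arbitrary cost/loss trade-off weight


def evolve(pop_size=30, generations=40, seed=0):
    """Minimize fitness() over converter assignments with a simple GA."""
    rng = random.Random(seed)
    pop = [[rng.randrange(len(CATALOG)) for _ in LOADS_W] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(LOADS_W))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:  # mutation: reassign one load
                child[rng.randrange(len(child))] = rng.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)


best = evolve()
```

Because the survivors are kept each generation, the best fitness found never worsens; a real implementation would add constraints (voltage compatibility, current limits) to the fitness function.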

Relevância:

40.00% 40.00%

Publicador:

Resumo:

The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, nonfailure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.
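The core idea of abstract interpretation can be sketched on the classic sign domain: expressions are evaluated over abstract values {NEG, ZERO, POS, TOP} instead of concrete integers, so the analysis always terminates yet remains sound. This toy example is our own illustration of the technique, not part of the CiaoPP system.

```python
# Abstract sign domain: every concrete integer maps to one of these values;
# TOP means "any sign" and is the sound fallback when precision is lost.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"


def alpha(n):
    """Abstraction function: concrete integer -> abstract sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)


def abs_mul(a, b):
    """Sound abstract multiplication on signs."""
    if ZERO in (a, b):
        return ZERO  # 0 * anything = 0
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG


def abs_add(a, b):
    """Sound abstract addition; loses precision when signs conflict."""
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    return a if a == b else TOP  # pos + neg could have any sign


def abs_eval(expr, env):
    """Evaluate a tiny expression AST over the sign domain.
    Nodes: ('lit', n), ('var', name), ('add', l, r), ('mul', l, r)."""
    op = expr[0]
    if op == "lit":
        return alpha(expr[1])
    if op == "var":
        return env[expr[1]]
    left, right = abs_eval(expr[1], env), abs_eval(expr[2], env)
    return abs_mul(left, right) if op == "mul" else abs_add(left, right)
```

For instance, the analysis proves that `x * x` is non-negative for any negative `x` without running the program on concrete values; richer domains (shapes, sizes, cost bounds) follow the same pattern with more expressive abstract values.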

Relevância:

40.00% 40.00%

Publicador:

Resumo:

We present a novel framework for the encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture by assuming unlimited processing capacity on the multiview encoder. Under this assumption, only the influence of the prediction structure and of the processing times has to be considered, and the encoding latency is obtained systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of low-latency encoder design with a low rate-distortion penalty, the graph model allows us to identify the prediction relationships that add the highest encoding latency to the encoder. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
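Under the unlimited-processing-capacity assumption, the encoding latency of each frame reduces to the longest path, in processing time, through its prediction dependencies. The sketch below computes that bound on a small hand-made dependency graph; the frame names and processing times are illustrative, not the JMVM structures from the paper.

```python
from functools import lru_cache

# Hypothetical prediction structure: frame -> list of reference frames.
# An edge A -> B (B references A) means B cannot start before A finishes.
REFS = {
    "I0": [],
    "P1": ["I0"],
    "B2": ["I0", "P1"],
    "P3": ["P1"],
}

# Per-frame processing time in milliseconds; illustrative values.
PROC_MS = {"I0": 10, "P1": 8, "B2": 6, "P3": 8}


@lru_cache(maxsize=None)
def latency(frame):
    """Earliest completion time of `frame`: its own processing time plus
    the maximum latency over the frames it references (longest path)."""
    deps = REFS[frame]
    return PROC_MS[frame] + (max(latency(d) for d in deps) if deps else 0)


# The worst frame bounds the encoder's latency for this structure.
encoder_latency = max(latency(f) for f in REFS)
```

With these numbers the critical path is I0 → P1 → P3 (10 + 8 + 8 = 26 ms); removing or redirecting the P1 → P3 reference is exactly the kind of high-latency prediction relationship the graph model exposes.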

Relevância:

40.00% 40.00%

Publicador:

Resumo:

The research work summarized here falls within the area of railway safety dynamics and measures, specifically the study of the influence of cross wind on high-speed trains, as well as the study of new mitigation measures, such as windbreak structures or wind fences with optimized shapes. The work has been developed at the Research Center in Rail Technology (CITEF) and supported by the Universidad Politécnica de Madrid, Spain.

Relevância:

40.00% 40.00%

Publicador:

Resumo:

An aerodynamic optimization of the train aerodynamic characteristics, in terms of sensitivity to front wind action, is carried out in this paper. In particular, a genetic algorithm (GA) is used to perform a shape optimization study of a high-speed train nose. The nose is parametrically defined via Bézier curves, so that a wide range of geometries is included in the design space as possible optimal solutions. The main disadvantage of using a GA is the large number of evaluations needed before finding the optimum. Here, the use of metamodels to replace the Navier-Stokes solver is proposed. Among the possibilities, Response Surface Models and Artificial Neural Networks (ANN) are considered. The best prediction and generalization results are obtained with ANNs, which are therefore applied in the GA code. The paper shows the feasibility of using a GA in combination with an ANN for this problem, and the solutions achieved are included.
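The parameterization step can be sketched independently of the CFD part: a 2-D nose profile defined by a Bézier curve and evaluated with the de Casteljau algorithm, so that a GA chromosome is simply the list of control-point coordinates. The control points below are illustrative, not the actual train geometry.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeated
    linear interpolation of the control polygon (numerically stable)."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts, pts[1:])
        ]
    return pts[0]


# Hypothetical nose profile (x along the train, y the height): a GA
# individual would encode the two interior control points, while the
# endpoints fix the nose tip and the junction with the car body.
NOSE = [(0.0, 0.0), (1.0, 0.2), (2.5, 0.9), (4.0, 1.0)]

# Sampled profile points, e.g. to build the CFD (or ANN surrogate) input.
profile = [de_casteljau(NOSE, i / 20) for i in range(21)]
```

Because the curve always passes through the first and last control points and stays inside their convex hull, bound constraints on the chromosome translate directly into geometric bounds on the candidate noses.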

Relevância:

40.00% 40.00%

Publicador:

Resumo:

The evolution of the water content of a sandy soil during the sprinkler irrigation campaign, in the summer of 2010, of a sugar beet field located in Valladolid (Spain) is assessed with a capacitive FDR (Frequency Domain Reflectometry) EnviroScan probe. This field is one of the experimental sites of the Spanish research center for sugar beet development (AIMCRA). The work focuses on monitoring the soil water content over consecutive irrigations during the second half of July (from the 12th to the 28th). These measurements are used to simulate water movement by means of Hydrus-2D. The probe logged water content readings (m³/m³) at 10, 20, 40 and 60 cm depth every 30 minutes, and was placed between two rows in one of the typical 12 × 15 m sprinkler irrigation frameworks. Furthermore, a texture analysis of the soil profile was also conducted. The irrigation frequency on this farm was set by the farmer's own criteria: aiming to minimize electricity pumping costs, he used to irrigate at night and during the weekend, i.e. with a longer irrigation interval than expected. However, the high evapotranspiration rates and the weekly sugar beet water consumption, up to 50 mm/week, clearly determined the need to shorten this interval. Moreover, the farmer used to irrigate for five or six hours, whilst results from the EnviroScan probe showed the soil profile reaching saturation after the first three hours. It must be noted that AIMCRA provides its members with an SMS service on the weekly sugar beet water requirements; from the use of different meteorological stations and evapotranspiration pans, farmers have an idea of the weekly irrigation needs. Nevertheless, it is the farmer who decides how to irrigate. Thus, in order to minimize both water stress and pumping costs, a suitable irrigation time and irrigation frequency were modeled with Hydrus-2D.
Results for the period mentioned above showed water content values ranging from 35 and 30 m³/m³ at 10 and 20 cm depth (two hours after irrigation) down to minima of 14 and 13 m³/m³ (two hours before irrigation). At 40 and 60 cm depth, the water content varies steadily across the dates: the greater the root activity, the greater the water content variation. According to the EnviroScan results and the Hydrus-2D modeling, shorter intervals and irrigation times are suggested.

Relevância:

40.00% 40.00%

Publicador:

Resumo:

With the rising prices of retail electricity and the decreasing cost of PV technology, grid parity with commercial electricity will soon become a reality in Europe. This fact, together with less attractive PV feed-in tariffs in the near future and incentives to promote self-consumption, suggests that new operation modes for PV distributed generation should be explored, in contrast to the traditional approach, which is based solely on maximizing the electricity exported to the grid. Smart metering is experiencing growth in Europe and the United States, but the possibilities of its use are still uncertain; in our system we propose its use to manage the storage and to let users know their electrical power and energy balances. Active Demand-Side Management (ADSM) has many previously studied benefits, but it also poses important challenges; this paper presents an ADSM implementation example in which we propose a solution to those challenges. In this paper we study the effects of ADSM and storage systems on the amount of locally consumed electrical energy. The study has been carried out on a prototype of a self-sufficient solar house called “MagicBox”, equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. We carried out simulations for long-term experiments (yearly studies) and real measurements for short- and mid-term experiments (daily and weekly studies). Results show the relationship between the electricity flows and the storage capacity, which is not linear and becomes an important design criterion.
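The relationship between storage capacity and locally consumed energy can be illustrated with a minimal hourly balance: PV surplus charges an ideal battery, deficits discharge it, and the self-consumption ratio is measured. The profiles, capacity values and the lossless battery model are assumptions for this sketch, not data from MagicBox.

```python
def self_consumption(pv_wh, load_wh, capacity_wh):
    """Fraction of PV energy consumed locally (directly or via an ideal,
    lossless battery of the given capacity); inputs are hourly Wh series."""
    soc = 0.0           # battery state of charge, Wh
    used_locally = 0.0  # PV energy consumed on site
    for pv, load in zip(pv_wh, load_wh):
        direct = min(pv, load)          # PV covering the load directly
        surplus, deficit = pv - direct, load - direct
        charge = min(surplus, capacity_wh - soc)  # store surplus, cap at capacity
        soc += charge
        discharge = min(deficit, soc)   # cover remaining load from battery
        soc -= discharge
        used_locally += direct + discharge
    return used_locally / sum(pv_wh)


# Illustrative day: PV peaks at midday, load peaks morning and evening (Wh).
PV = [0, 0, 0, 100, 400, 800, 900, 700, 300, 0, 0, 0]
LOAD = [200, 300, 250, 150, 100, 100, 150, 200, 300, 400, 350, 250]

no_battery = self_consumption(PV, LOAD, 0.0)
with_battery = self_consumption(PV, LOAD, 1000.0)
```

Running the same balance over a sweep of capacities shows the non-linearity the abstract refers to: the first watt-hours of storage absorb the midday surplus and raise self-consumption quickly, while additional capacity beyond the daily surplus adds almost nothing.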

Relevância:

40.00% 40.00%

Publicador:

Resumo:

This article focuses on the evaluation of a biometric technique based on the performance of an identifying gesture while holding, in the hand, a telephone with an embedded accelerometer. The acceleration signals obtained when users perform gestures are analyzed following a mathematical method based on global sequence alignment. Eight different scores are proposed and evaluated in order to quantify the differences between gestures, obtaining an optimal EER of 3.42% when analyzing a random set of 40 users from a database of 80 users with real forgery attempts. Moreover, a temporal study of the technique is presented, leading to the need to update the template to adapt to the way users modify their identifying gesture over time. Six updating schemes have been assessed on a database of 22 users repeating their identifying gesture in 20 sessions over 4 months, concluding that the more often the template is updated, the better and more stable the performance of the technique.
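Global alignment of two gesture recordings can be sketched with a Needleman-Wunsch style dynamic program over quantized acceleration samples; the resulting score is one plausible way to quantify the difference between gestures. The scoring values and example sequences below are illustrative, not the eight scores evaluated in the article.

```python
def global_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two symbol
    sequences (e.g. quantized accelerometer samples)."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap  # a[:i] aligned against nothing
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]


# Hypothetical quantized gesture samples: a genuine repetition (one sample
# dropped) should align with the template better than an impostor attempt.
template = "AABBCCBBA"
genuine = "AABBCBBA"
impostor = "CCAABBAAC"
```

In a verification setting the score would be compared against a threshold tuned on a development set, which is where figures such as the EER reported in the article come from.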

Relevância:

40.00% 40.00%

Publicador:

Resumo:

The propagation losses (PL) of lithium niobate optical planar waveguides fabricated by swift heavy-ion irradiation (SHI), an alternative to conventional ion implantation, have been investigated and optimized. For waveguide fabrication, congruently melting LiNbO3 substrates were irradiated with F ions at 20 MeV or 30 MeV and fluences in the range 10¹³–10¹⁴ cm⁻². The influence of the temperature and time of post-irradiation annealing treatments has been systematically studied. Optimum propagation losses lower than 0.5 dB/cm have been obtained for both TE and TM modes, after a two-stage annealing treatment at 350 °C and 375 °C. Possible loss mechanisms are discussed.