43 results for practical applicability
at Universidad Politécnica de Madrid
Abstract:
This doctoral thesis presents the development, verification and application of an original statistical downscaling method for daily temperature and precipitation, which combines two steps. The first step is an analogue approach: the "n" days whose low-resolution atmospheric configuration is most similar to that of the day to be downscaled are selected from a reference databank of past days. In the second step, a multiple regression analysis over the "n" most analogous days is performed for temperature, whereas for precipitation the probability distribution of those "n" analogous days is used to obtain the precipitation estimate. Verification of this method has been carried out for the Spanish Iberian Peninsula and the Balearic Islands.
Results show good performance for temperature (BIAS close to 0.1 ºC and mean absolute errors around 1.9 ºC) and acceptable skill for precipitation (reasonably low BIAS with a mean of −18%, a mean absolute error lower than for a reference simulation, i.e. persistence, and a well-simulated probability distribution according to two non-parametric tests of similarity). To show the applicability of the method, a case study has been analyzed in detail. The method was applied to four climate models under different future emission scenarios for the region of Aragón, thus producing future projections of daily precipitation and maximum and minimum temperatures. The reliability of the downscaling technique was re-assessed for the case study by a verification process. To determine the ability of the climate models to simulate the real climate, their simulations of the past (the so-called 20C3M output) were downscaled and then compared with the observed climate; the results are quite robust for temperature and less conclusive for precipitation. The downscaled future projections exhibit a significant increase throughout the entire 21st century in the maximum and minimum temperatures for all the considered future emission scenarios. Precipitation simulations exhibit greater uncertainties. Furthermore, the practical applicability of the method was also demonstrated by using it to produce future climate scenarios for other case studies in different sectors and regions of the world. Special attention was paid to an application of the method in Central America, a region that is already suffering from significant climate change impacts and that has a very different climate from others where the method was previously applied.
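As a concrete illustration of the two-step scheme described above, the following is a minimal sketch, not the thesis implementation: the predictor arrays, the Euclidean distance metric and the choice of the analogues' median as the precipitation estimator are all illustrative assumptions.

```python
# Hedged sketch of the two-step analogue/regression downscaling described
# in the abstract. Assumptions (not from the thesis): predictors are
# flattened low-resolution atmospheric fields, one row per past day.
import numpy as np

def downscale_day(target_field, past_fields, past_temp, past_precip, n=30):
    """Select the n most analogous days, then estimate temperature by
    multiple regression over them and precipitation from their
    empirical distribution."""
    # Step 1: analogue selection by distance between low-resolution fields.
    dists = np.linalg.norm(past_fields - target_field, axis=1)
    idx = np.argsort(dists)[:n]

    # Step 2a: temperature via multiple linear regression on the analogues.
    X = np.column_stack([np.ones(n), past_fields[idx]])
    coef, *_ = np.linalg.lstsq(X, past_temp[idx], rcond=None)
    temp_est = np.concatenate([[1.0], target_field]) @ coef

    # Step 2b: precipitation from the analogues' empirical distribution,
    # here summarized by its median (one of several possible estimators).
    precip_est = np.median(past_precip[idx])
    return temp_est, precip_est
```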
Abstract:
This paper presents and illustrates with an example a practical approach to the dataflow analysis of programs written in constraint logic programming (CLP) languages using abstract interpretation. It is first argued that, from the framework point of view, it suffices to propose relatively simple extensions of traditional analysis methods which have already been proved useful and practical and for which efficient fixpoint algorithms have been developed. This is shown by proposing a simple but quite general extension of Bruynooghe's traditional framework to the analysis of CLP programs. In this extension constraints are viewed not as "suspended goals" but rather as new information in the store, following the traditional view of CLP. Using this approach, and as an example of its use, a complete, constraint-system-independent abstract analysis is presented for approximating definiteness information. The analysis is in fact of quite general applicability. It has been implemented and used in the analysis of CLP(R) and Prolog-III applications. Results from the implementation of this analysis are also presented.
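To give the flavour of definiteness propagation, here is a toy fixpoint computation; it is an illustration under simplifying assumptions, not the paper's abstract domain: each constraint is reduced to a pair (v, deps) meaning "v is definite once all variables in deps are".

```python
# Toy definiteness propagation to a least fixpoint. For instance, the
# constraint X = Y + Z contributes ("X", {"Y", "Z"}): X becomes definite
# (uniquely determined) once Y and Z are.
def definiteness(constraints, initially_definite):
    definite = set(initially_definite)
    changed = True
    while changed:                      # iterate until the fixpoint
        changed = False
        for v, deps in constraints:
            if v not in definite and deps <= definite:
                definite.add(v)
                changed = True
    return definite

# Example: X = Y + Z and W = X + 1, with Y and Z known to be definite.
print(definiteness([("X", {"Y", "Z"}), ("W", {"X"})], {"Y", "Z"}))
# -> all of X, W, Y, Z are reported definite
```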
Abstract:
The research work presented in this article covers the design of detached breakwaters, since they constitute a type of coastal defence work with which to combat many of the erosion problems found on beaches in a stable, sustainable fashion. The main aim of this work is to formulate a functional and environmental (but not structural) design method, enabling the fundamental characteristics of a detached breakwater to be defined as a function of the effect it is intended to induce on the coast, taking into account variables of a different nature (climate, geomorphology and geometry) that influence the changes the shoreline undergoes after its construction. This article presents the final result of the investigation undertaken, applying the detached breakwater design method developed to the solution of a practical case. It is thus shown how the method enables a detached breakwater's geometric pre-sizing to be tackled at a place on the coast with given climate, geomorphology and littoral dynamics characteristics, by first setting the final state of equilibrium to be obtained there after its construction.
Abstract:
The main subject under research in this Thesis is the study of the dynamic behaviour of a structure using models that describe the energy distribution between the components of the structure, and the applicability of these models to incipient damage detection.
Dynamic tests are a way to extract information about the properties of a structure. If we have a model of the structure, it can be updated in order to reproduce the same response as in experimental tests, within a certain degree of accuracy. After damage occurs, the response to the same test will change to some extent; updating the model to the new test conditions can help to detect changes in the structural model, leading to the conclusion that damage has occurred.
In this way incipient damage detection is possible if we are able to detect small variations in the model parameters. It turns out that the high frequency regime is highly relevant for incipient damage detection, because the response is very sensitive to small structural geometric details. The characteristic length associated with the response is proportional to the propagation speed of acoustic waves inside the solid, which is fixed for a given structure, and inversely proportional to the excitation frequency. At the same time, this fact makes the application of a Finite Element Method impractical due to the high computational cost.
A widely used model in engineering when dealing with the high frequency response is SEA (Statistical Energy Analysis). SEA applies the energy balance to each structural component, relating their vibrational energy with the dissipated power and the power transmitted between the different components; their sum must be equal to the input power to each of them. This relationship is linear and characterized by loss factors. The magnitudes considered in the response are averaged in geometry, frequency and time.
SEA model updating to test data is equivalent to calculating the loss factors that provide the best fit to the experimental response. Done directly, this is an ill-conditioned inverse problem. In this Thesis a new updating algorithm is proposed for the study of the high frequency response regime in terms of parameters with physical meaning, such as the internal dissipation factors, the modal densities and the characteristic stiffnesses of the coupling elements. The loss factors are then calculated from these parameters. The approach is developed entirely in this Thesis and is mainly based on a high modal density assumption, that is to say, that a large number of modes of each structural component contributes to the response.
General SEA theory establishes the validity of the model under very restrictive assumptions on the external excitations, which should behave as local white noise. This kind of excitation is difficult to reproduce in an experimental environment. In this Thesis we show with practical cases that this restriction can be relaxed; in particular, results are good enough when the structure is excited with a harmonic step load.
Under these approximations, a stepwise optimization algorithm is developed for SEA model updating to a transient test when the external loads are harmonic step functions. This algorithm updates the model not only in a single frequency band, but in several frequency bands simultaneously, in order to pose a better-conditioned problem.
Finally, a damage index is defined that measures the change in the loss factor matrix when structural damage has occurred at a specific location in a component. The response of a structure built from beams is simulated numerically, with damage introduced in a section of one of them; as this is a high frequency computation, the simulation uses the Spectral Element Method, for which it has been necessary to develop within the Thesis a spectral beam element damaged at a given section. The reported results make it possible to locate the structural component in which the damage occurred, and the section where it lies, within a certain degree of confidence.
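For reference, the linear energy balance that the loss factors characterize can be written, in standard SEA notation (a generic textbook form, not reproduced from the thesis), as:

```latex
% Steady-state SEA power balance for subsystem i: the injected power
% equals the internally dissipated power plus the net power exchanged
% with the coupled subsystems j. Here \omega is the band-centre angular
% frequency, E_i the vibrational energy, \eta_i the internal loss factor
% and \eta_{ij} the coupling loss factors.
P_i^{\mathrm{in}} = \omega\,\eta_i E_i
  + \omega \sum_{j \ne i} \left( \eta_{ij} E_i - \eta_{ji} E_j \right)
```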
Abstract:
With the introduction of the European Higher Education Area and the development of the "Bologna" method for learning certain technological subjects, a pilot assessment procedure was launched in the "old" plan to observe, monitor and analyze the knowledge acquired by senior students in various academic courses. This paper is a reflection on culture and knowledge. Will students settle for a lower score on tests because they know they have many aids available to achieve their objectives? Are their skills lower for this reason?
Abstract:
This paper describes a practical activity, part of a renewable energy course, where the students have to build their own complete wind generation system, including blades, PM generator, power electronics and control. After connecting the system to the electric grid, the system has been tested under real wind scenarios. The paper describes the electrical part of the work: the surface-mounted permanent-magnet machine design criteria, as well as the power electronics for power control and grid connection. A Kalman filter is used for the voltage phase estimation, and current commands are obtained in order to control active and reactive power. The connection to the grid has been made, and active and reactive power have been measured in the system.
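One common way to realize such a phase estimator is sketched below; this is an assumption-laden illustration, not the paper's implementation: the state holds the in-phase and quadrature components of a sinusoid at the nominal grid frequency, so the state transition is a rotation and the measurement is the raw voltage sample.

```python
# Hedged sketch of grid voltage phase estimation with a linear Kalman
# filter. All parameter names and values (f_nom, fs, q, r) are
# illustrative assumptions.
import numpy as np

def kalman_phase(v_samples, f_nom=50.0, fs=10_000.0, q=1e-4, r=1e-2):
    dt = 1.0 / fs
    th = 2 * np.pi * f_nom * dt
    F = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])      # rotation per sample
    H = np.array([[1.0, 0.0]])                     # we measure v = x[0]
    x = np.zeros(2); P = np.eye(2)
    Q = q * np.eye(2); R = np.array([[r]])
    phases = []
    for v in v_samples:
        x = F @ x; P = F @ P @ F.T + Q             # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([v]) - H @ x)        # correct
        P = (np.eye(2) - K @ H) @ P
        phases.append(np.arctan2(x[1], x[0]))      # instantaneous phase
    return np.array(phases)
```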
Abstract:
This work presents a solution for the aerial coverage of a field using a fleet of aerial vehicles. The use of Unmanned Aerial Vehicles makes it possible to obtain the high resolution mosaics used in Precision Agriculture techniques. This report focuses on providing a solution for the full simultaneous coverage problem, taking into account restrictions such as the required spatial resolution and overlap, while maintaining similar light conditions and safe operation of the drones. Results obtained from real field tests are finally reported.
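The resolution and overlap constraints mentioned above translate directly into the geometry of the flight pattern; a minimal sketch of that arithmetic (parameter names assumed, not from the report) is:

```python
# Illustrative helper relating the required ground sampling distance
# and side overlap to the spacing between adjacent flight lines of a
# lawnmower coverage pattern with a nadir-pointing camera.
def sweep_spacing(sensor_width_px, gsd_m, side_overlap):
    """sensor_width_px : image width in pixels (across-track)
    gsd_m           : required ground sampling distance, metres/pixel
    side_overlap    : fractional overlap between adjacent strips (0..1)
    """
    footprint = sensor_width_px * gsd_m          # strip width on the ground
    return footprint * (1.0 - side_overlap)      # distance between lines

# Example: 4000 px across-track, 2 cm/px resolution, 30 % side overlap
print(sweep_spacing(4000, 0.02, 0.30))           # -> 56.0 m between lines
```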
Application of the Extended Kalman filter to fuzzy modeling: Algorithms and practical implementation
Abstract:
The modeling phase is fundamental both in the analysis of a dynamic system and in the design of a control system. This phase is even more critical when it must be carried out in-line and the only information about the system comes from input/output data. Some adaptation algorithms for fuzzy systems based on the extended Kalman filter are presented in this paper; they allow obtaining accurate models without renouncing the computational efficiency that characterizes the Kalman filter, and allow their implementation in-line with the process.
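A minimal sketch of this idea follows, under assumptions of my own (a zeroth-order Takagi-Sugeno model with fixed Gaussian memberships, adapting only the consequent parameters); the paper's algorithms are not reproduced here. The parameter vector plays the role of the EKF state and the Jacobian of the model output with respect to the parameters plays the role of the measurement matrix.

```python
# Hedged sketch: in-line adaptation of fuzzy consequent parameters with
# an (extended) Kalman filter. Names and tuning values are illustrative.
import numpy as np

def memberships(x, centers, width=1.0):
    w = np.exp(-((x - centers) / width) ** 2)    # Gaussian firing strengths
    return w / w.sum()                           # normalized rule weights

def ekf_step(theta, P, x, y, centers, q=1e-6, r=1e-2):
    phi = memberships(x, centers)                # regressor = rule weights
    y_hat = phi @ theta                          # model output (0th-order TS)
    H = phi                                      # Jacobian d y_hat / d theta
    P = P + q * np.eye(len(theta))               # parameter random walk
    S = H @ P @ H + r                            # innovation variance
    K = P @ H / S                                # Kalman gain
    theta = theta + K * (y - y_hat)              # parameter update
    P = P - np.outer(K, H @ P)                   # covariance update
    return theta, P

# In-line use: call ekf_step once per new input/output sample (x, y).
```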
Abstract:
According to the World Health Organization, 15 million people suffer a stroke worldwide each year; of these, 5 million die and 5 million are permanently disabled. Stroke is therefore a major cause of mortality worldwide. The majority of strokes are caused by a blood clot that occludes an artery in the brain, and although thrombolytic agents such as Alteplase are used to dissolve clots that arise in the arteries of the brain, there are limitations on the use of these thrombolytic agents. However, over the past decade, other methods of treatment have been developed, which include Thrombectomy Devices, e.g. the 'GP' Thrombus Aspiration Device ('GP' TAD). Such devices may be used as an alternative to thrombolytics or in conjunction with them to extract blood clots in arteries such as the middle cerebral artery of the brain, and the posterior inferior cerebellar artery (PICA) of the posterior aspect of the brain. In this paper, we mathematically model the removal of blood clots using the 'GP' TAD from selected arteries of the brain where blood clots may arise, taking into account factors such as resistance, compliance and inertance effects. Such mathematical modelling may have potential uses in predicting the pressures necessary to extract blood clots of given lengths and masses from arteries in the Circle of Willis (posterior circulation of the brain).
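As a sketch of the kind of lumped-parameter relation the abstract alludes to, a generic windkessel-type model (an assumption, not the authors' exact formulation) relates the extraction pressure drop to the flow through the clotted segment:

```latex
% Generic lumped-parameter relation for the pressure drop \Delta p
% driving a flow q(t) through an arterial segment, with resistance R,
% inertance L and compliance C (electrical-circuit analogy):
\Delta p(t) = R\,q(t) + L\,\frac{dq}{dt}
  + \frac{1}{C}\int_0^t q(\tau)\,d\tau
```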
Abstract:
Global linear instability theory is concerned with the temporal or spatial development of small-amplitude perturbations superposed upon laminar steady or time-periodic three-dimensional flows, which are inhomogeneous in two (and periodic in one) or all three spatial directions [1]. The theory addresses flows developing in complex geometries, in which the parallel or weakly nonparallel basic flow approximation invoked by classic linear stability theory does not hold. As such, global linear theory is called on to fill the gap in research into stability and transition in flows over or through complex geometries. Historically, global linear instability has been (and still is) concerned with the solution of multi-dimensional eigenvalue problems; the maturing of non-modal linear instability ideas in simple parallel flows during the last decade of the last century [2-4] has given rise to the investigation of transient growth scenarios in an ever-increasing variety of complex flows. After a brief exposition of the theory, connections are sought with established approaches for structure identification in flows, such as the proper orthogonal decomposition and topology theory in the laminar regime, and the open areas for future research, mainly concerning turbulent and three-dimensional flows, are highlighted. Recent results obtained in our group are reported in both the time-stepping and the matrix-forming approaches to global linear theory. In the first context, progress has been made in implementing a Jacobian-free Newton-Krylov method into a standard finite-volume aerodynamic code, such that global linear instability results may now be obtained in compressible flows of aeronautical interest. In the second context a new stable very high-order finite difference method is implemented for the spatial discretization of the operators describing the spatial BiGlobal EVP, PSE-3D and the TriGlobal EVP; combined with sparse matrix treatment, all these problems may now be solved on standard desktop computers.
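For readers unfamiliar with the terminology, the modal ansatz underlying the BiGlobal eigenvalue problem mentioned above takes the standard form (generic notation, not taken from the paper):

```latex
% Perturbations superposed on a basic flow \bar{q}(x,y) that is
% inhomogeneous in two directions and periodic in the third:
q'(x,y,z,t) = \hat{q}(x,y)\, e^{\,i(\beta z - \omega t)} + \mathrm{c.c.}
% Substituted into the linearized Navier-Stokes equations, this yields a
% two-dimensional eigenvalue problem: for real wavenumber \beta, the
% complex frequency \omega is the eigenvalue (temporal analysis).
```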
Abstract:
Adaptive systems use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control architecture can be used to change different elements of the controller at four different levels: parameters of the control model, the control model itself, the functional organization of the agent, and the functional components of the agent. The complexity of such a space of potential configurations is daunting. The only viable alternative for the agent (in practical, economical, evolutionary terms) is the reduction of the dimensionality of the configuration space. This reduction is achieved both by functionalisation (or, to be more precise, by interface minimization) and by patterning, i.e. the selection among a predefined set of organisational configurations. This last analysis lets us state the central problem of how autonomy emerges from the integration of the cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. In this paper we show a general model of how emotional biological systems operate following this theoretical analysis, and how this model is also applicable to a wide spectrum of artificial systems.
Abstract:
In this paper, a system that allows applying precision agriculture techniques is described. The application is based on the deployment of a team of unmanned aerial vehicles that are able to take georeferenced pictures in order to create a full map by applying mosaicking procedures for postprocessing. The main contribution of this work is practical experimentation with an integrated tool. Contributions in different fields are also reported. Among them is a new one-phase automatic task partitioning manager, which is based on negotiation among the aerial vehicles, considering their state and capabilities. Once the individual tasks are assigned, an optimal path planning algorithm is in charge of determining the best path for each vehicle to follow. Also, a robust flight control based on the use of a control law that improves the maneuverability of the quadrotors has been designed. A set of field tests was performed in order to analyze all the capabilities of the system, from task negotiations to final performance. These experiments also allowed testing control robustness under different weather conditions.
Abstract:
A recent study by Pichugin et al. recalls Hemp's 1974 solution for a uniform load, showing that if the allowable tensile and compressive stresses are unequal then Hemp's arch is optimal provided the ratio of stresses falls within a certain interval. This work is undoubtedly an important step forward in finding an optimal solution for the mathematical problem stated by Hemp. Furthermore, the Authors suggest that their optimal solutions are potentially reasonable from a practical perspective for materials with a higher allowable compressive stress than tensile stress, as such materials tend to be inexpensive. In this paper we thoroughly analyse the solutions of the Authors from this practical perspective, finding that the original Hemp's solution, albeit sub-optimal for the mathematical problem, leads to real designs that are more efficient than the theoretical optimal solutions of the Authors. We show that the reason for this striking fact has to do with the class of problems considered by Hemp and the Authors.
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of speed of the analysis, precision, and usefulness of the information obtained. To this end, design and implementation aspects are discussed for two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator, and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations and, as an example of an application, a study of the usefulness of the mode information obtained in reducing run-time checks in independent and-parallelism. Based on the results obtained, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
Abstract:
This paper presents and develops a generalized concept of Non-Strict Independent And-Parallelism (NSIAP). NSIAP extends the applicability of Independent And-Parallelism (IAP) by enlarging the class of goals which are eligible for parallel execution, while maintaining IAP's ability to run non-deterministic goals in parallel and to preserve the computational complexity expected by the programmer in the execution of the program. First, a parallel execution framework is defined, and some fundamental correctness results, in the sense of equivalence of solutions with the sequential model, are discussed for this framework. The issue of efficiency is then considered. Two new definitions of NSI are given for the cases of pure and impure goals respectively, and efficiency results are provided for programs parallelized under these definitions, including treatment of the case of goal failure: not only is reduction of execution time guaranteed (modulo run-time overheads) in the absence of failure, but it is also shown that in the worst case of failure no slow-down will occur. In addition to applying to NSI, these results carry over to and complete previous results shown in the context of IAP, which did not deal with the case of goal failure. Finally, some practical examples of the application of the NSIAP concept to the parallelization of a set of programs are presented, and performance results, showing the advantage of using NSI, are given.