12 results for OPERATIONAL PARAMETERS
at Universidad Politécnica de Madrid
Abstract:
Mechanical and adhesive structural joints require the combination of a large number of parameters to achieve the structural continuity demanded by the design conditions. The characteristics of the joints show important variations linked to the execution conditions, both in mechanical joints and, especially, in adhesive and mixed joints (combined mechanical and adhesive joints, also known as hybrid joints). The mechanical properties of adhesive joints depend on the nature and properties of the adhesives, and also on many other parameters that directly influence the behavior of these joints. Some of the most significant parameters are: the surface finish of the materials, the area and thickness of the adhesive layer, a suitable design, the application sequence, the chemical properties of the surface, and the preparation of the substrates before applying the adhesive. Adhesion mechanisms are complex; in general, each adhesive joint can only be explained by considering the combined action of several adhesion mechanisms. There are no universal adhesives for a given material or application, so each substrate-adhesive pair requires a particular study, and the behavior obtained can vary significantly from one case to another. The failure of an adhesive joint depends on the cohesion-adhesion mechanism, which is linked to the sequence and manner of execution of the operational parameters used in the joint.

In structural applications there is a very large number of joining systems and possible substrates. In this work, four different adhesives (cyanoacrylate, epoxy, polyurethane and modified silane) and two mechanical joining processes (riveting and clinching) were selected. These joints were applied to carbon steel sheets with different surface conditions (bare, galvanized and pre-painted sheet). The operational parameters analyzed were: surface preparation, adhesive thickness, application sequence, and application of pressure during curing. Both individual joints and hybrid joints (adhesive plus mechanical) were analyzed. The combination of joining processes, substrates and operational parameters led to the preparation and testing of more than a thousand specimens; because of the scatter of results characteristic of adhesive joints, six specimens were tested for each condition analyzed.

The results obtained were the following. Adhesive thickness is a very important variable for flexible adhesives, for which the lower the thickness, the greater the shear strength of the joint; for rigid adhesives its influence is much smaller. The nature of the surface is essential for good adhesion of the adhesive to the substrate, which in turn affects the shear strength of the joint. The surface with the best adhesion is the pre-painted one, especially when there is high compatibility between the paint and the adhesive; the surface with the worst adhesion is the galvanized one. The application sequence was a significant parameter in hybrid joints, where the best results were obtained when the adhesive was applied first and the mechanical joint was made before the adhesive had cured. The application of pressure during curing proved significant for adhesives with little gap-filling capacity; in the other cases its influence was of little relevance.

The behavior of mechanical and adhesive structural joints in terms of shear strength can vary greatly depending on the design of the joint; the strength can be so high that the substrate fails before the joint does. The best strengths are achieved by designing the joints with cyanoacrylate adhesive and choosing the surface and operational conditions appropriately, for example bare sheet with pressure applied during curing. The use of hybrid joints increases the shear strength little or not at all, but in exchange provides a low scatter of results, which is most notable for the galvanized surface, the one with the worst reproducibility in adhesive-only joints. Hybrid joints lead to greater deformation before failure: adhesives give brittle fracture and mechanical joints give ductile fracture, so the hybrid joint provides ductility. Hybrid joints can also give brittle fracture; this happens when the strength of the adhesive is three times that of the mechanical joint. Hybrid joints improve the stiffness of the joint, with a particularly important increase for those made with flexible adhesives; for all the adhesives, the stiffness of the hybrid joint is higher.
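Since six specimens were tested per condition to cope with the scatter typical of adhesive joints, each condition is naturally summarized by the mean shear strength and its coefficient of variation (CV); a low CV is precisely the benefit reported for the hybrid joints. A minimal sketch of that summary (the sample values below are invented for illustration, not measured data):

```python
import statistics

def shear_strength_summary(samples_mpa):
    """Mean lap-shear strength and coefficient of variation for the
    specimens tested under one condition."""
    mean = statistics.fmean(samples_mpa)
    cv = statistics.stdev(samples_mpa) / mean  # sample std. dev. / mean
    return mean, cv

# Hypothetical strengths (MPa) for one substrate/adhesive condition
mean, cv = shear_strength_summary([10.0, 12.0, 11.0, 9.0, 10.0, 8.0])
```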
Abstract:
Systems used for localizing targets such as goods, individuals, or animals commonly rely on operational means to meet the final application demands. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced upon deploying a localization network that integrates energy harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references like those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts.
Many works on localization problems assume that devices have certain capabilities to determine unknown locations based on range-based techniques or fingerprinting, which cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities, so most range-free techniques are not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most. Such short-lived solutions are not particularly desirable in the framework considered. In tracking, the challenge usually addressed aims at attaining high precision levels through complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with just part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The usage of harvesting modules in the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; it may also be an advantage in economic terms. There is a second kind of node: battery-powered (without kinetic energy harvesters) and therefore dependent on temperature and battery replacements. In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. They are also battery-powered and are used to retrieve information from the network so that it can be presented to users.
The system operational chain starts at the kinetic-powered nodes broadcasting their own identifiers. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection entry comprises, at least, a node identifier and the position read from the GPS module of the battery-operated node prior to the detection. The characteristics of the system give this operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements at the kinetic modules (reindeer movements in our application); not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, so localization errors rise even more; recall that this behavior is tied to the aforementioned power-saving policies to extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user becomes aware of the detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically powered target and a population of battery-operated nodes with higher densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study again focuses on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter.
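Of the three filters mentioned, the alpha-beta filter is the simplest, and it illustrates how a stream of noisy (distorted) references is blended with a constant-velocity prediction. The sketch below is a generic one-dimensional version with illustrative gain values, not the configuration used in the Thesis:

```python
def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
    """1-D alpha-beta tracker: predict with a constant-velocity model,
    then correct position and velocity with a fraction of the residual."""
    x, v = measurements[0], 0.0          # initialize from the first reference
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict (constant velocity)
        r = z - x_pred                   # innovation w.r.t. the noisy reference
        x = x_pred + alpha * r           # position update
        v = v + (beta / dt) * r          # velocity update
        estimates.append(x)
    return estimates
```

With distorted references, lowering `alpha` and `beta` smooths the track at the cost of slower reaction; the regular and unscented Kalman filters manage the same trade-off through explicit noise covariances.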
The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model the nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically; this statistical characterization is then used to forecast performance figures for given operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and the duty cycles and cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and become more environmentally friendly by diminishing the potential amount of batteries that can be lost. Whether it is applicable or not ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as explained, one of the topics of this Thesis.
Abstract:
Direct Steam Generation (DSG) in Linear Fresnel (LF) solar collectors is being consolidated as a feasible technology for Concentrating Solar Power (CSP) plants. The competitiveness of this technology relies on the following main features: water as heat transfer fluid (HTF) in the Solar Field (SF), high superheated steam temperatures and pressures at the turbine inlet (500 ºC and 90 bar), no heat tracing required to avoid HTF freezing, no HTF degradation, no environmental impacts, no heat exchanger between the SF and the Balance Of Plant (BOP), and low installation and maintenance costs. LF solar collectors were recently developed as an alternative to Parabolic Trough Collector (PTC) technology. The main advantages of LF are: reduced collector manufacturing and maintenance costs, linear mirror shapes instead of parabolic mirrors, fixed receiver pipes (no ball joints, reducing leakage at high pressures), lower susceptibility to wind damage, and light supporting structures allowing smaller drive mechanisms. Companies such as Novatec, Areva and Solar Euromed are investing in LF DSG technology and constructing pilot plants to demonstrate the benefits and feasibility of this solution for given locations and conditions (Puerto Errado 1 and 2 in Murcia, Spain; Liddell in Newcastle, Australia; Kogan Creek in South West Queensland, Australia; Kimberlina in Bakersfield, California, USA; Llo Solar in the Pyrénées, France; Dhursar in India; etc.). Several critical decisions must be taken in order to reach a compromise and an optimization between plant performance, cost, and durability. Some of these decisions concern the SF design: proper thermodynamic operational parameters, receiver material selection for high pressures, the number and location of phase separators and recirculation pumps, and the pipe distribution to reduce the number of tubes (reducing possible leak points, transient time, etc.).
In view of these aspects, the correct selection of the design parameters and their correct assessment are the main targets when designing DSG LF power plants. For this purpose, some commercial software tools have been developed in recent years to simulate solar thermal power plants; those most focused on LF DSG design are Thermoflex and System Advisor Model (SAM). Once the simulation tool is selected, the proposed SF configuration, which constitutes the main innovation of this work, is studied and compared with one of the most typical state-of-the-art configurations. The transient behavior must be simulated in high detail; in particular, the BOP during start-up, shut-down, stand-by and partial loads is crucial to obtaining the annual plant performance. An innovative SF configuration was proposed and analyzed to improve plant performance. Finally, it was demonstrated that thermal inertia and the BOP regulation mode are critical to plant behavior on days with low solar irradiation, with an impact on annual performance that depends on the power plant location.
Abstract:
Granular activated carbon filters have been used for many years to treat and produce drinking water, exploiting the adsorption capacity of the carbon and replacing it once it becomes saturated and loses that capacity. Biological activated carbon filters, in turn, have been studied for decades, first in Europe and later in North America; nevertheless, there are no generally accepted design and operational parameters documented to serve as guidance for biofiltration. Cost is a further factor: choosing activated carbon as the filtration medium requires a significant investment because of the high capital and regeneration costs.

Activated carbon filters are typically required for the reduction of an organic load and the removal of colour, taste and/or odour. In terms of organic matter, the aim is to remove as much as possible in order to reduce or avoid the formation of disinfection by-products, and to lower biodegradable dissolved organic carbon and assimilable organic carbon to levels that yield a biologically stable potable water, preventing the regrowth of biofilm in the distribution systems. Ozone has historically been used as an oxidant prior to biofiltration to reduce colour, taste and odour: it oxidizes the organic matter, converting non-biodegradable and slowly biodegradable compounds into biodegradable ones, so that they can subsequently be removed biologically in the downstream activated carbon filters. Unfortunately, ozone is unstable in water and reacts with organic matter to produce carboxylic acids, alcohols and aldehydes, known as disinfection by-products.

This thesis addresses the following objectives: determination of the parameters required for the design of biological activated carbon filters, assessment of the need for ozonation prior to biofiltration, and a performance assessment of biofiltration when coagulation is applied as a pre-treatment on a biological activated carbon filter. The results show that the design parameters of biofiltration are compatible with those of conventional filtration. Although the organic matter removal capacity decreases as the filter saturates and enters the biological stage, the biodegradation in this stage remains steady and lasts for months without any concern about carbon regeneration. The removal of biodegradable dissolved organic carbon is sufficient to produce a biostable water according to the values reported in the existing literature; ozone dosing prior to biofiltration is therefore unnecessary. Furthermore, the addition of coagulant with pH correction before the biological activated carbon filter achieves an additional removal of organic matter without affecting the biodegradation in the activated carbon, while also meeting the turbidity requirements at the filter outlet, which offers important advantages for the process.
Abstract:
This paper presents the Expectation-Maximization (EM) algorithm applied to operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state space models. As is well known, the MLE enjoys some optimal properties from a statistical point of view, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two strategies for choosing initial values for the EM algorithm when used for operational modal analysis: starting from the parameters estimated by the Stochastic Subspace Identification (SSI) method, and starting from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. The modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm starting from the solution given by SSI is very useful for identifying the vibration modes of a structure, discarding the spurious modes that appear in high-order models and discovering other hidden modes. Similar results are obtained using random starting values; moreover, this strategy allows the solutions from several starting points to be compared, which overcomes the dependence on the initial values used.
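Once a discrete-time state space model has been estimated (whether by EM or SSI), the natural frequencies and damping ratios follow from the eigenvalues of the state matrix. This conversion is standard; the helper below is our own sketch, assuming a state matrix `A` estimated at sampling interval `dt`:

```python
import numpy as np

def modal_parameters(A, dt):
    """Natural frequencies (Hz) and damping ratios from the eigenvalues
    of a discrete-time state matrix A sampled every dt seconds."""
    lam = np.linalg.eigvals(A)      # discrete-time eigenvalues
    s = np.log(lam) / dt            # continuous-time poles
    wn = np.abs(s)                  # natural angular frequencies [rad/s]
    keep = np.imag(s) > 0           # one pole per complex-conjugate pair
    freqs = wn[keep] / (2.0 * np.pi)
    zetas = -np.real(s[keep]) / wn[keep]
    return freqs, zetas
```

Spurious modes of high-order models typically show up in this conversion as poles with unrealistic damping ratios, which is one way they can be discarded.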
Abstract:
The estimation of the modal parameters of a structure from ambient measurements has attracted the attention of many researchers in recent years. The procedure is now well established, and the use of state space models, stochastic system identification methods and stabilization diagrams makes it possible to identify the modes of the structure. In this paper, the contribution of each identified mode to the measured vibration is discussed. This modal contribution is computed using the Kalman filter and is an indicator of the importance of each mode. The variation of the modal contribution with the order of the model is also studied. This analysis suggests selecting the order of the state space model as the order that includes the modes with the highest contributions. The order obtained with this method is compared to those obtained using other well-known methods, such as Akaike's criterion for time series or the singular values of the weighted projection matrix in the Stochastic Subspace Identification method. Finally, both simulated and measured vibration data are used to show the practicability of the derived technique. It is worth remarking that the method can be used with any identification method that works in the state space framework.
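The selection rule suggested here, keep the modes with the highest contributions, can be sketched as a simple cumulative cutoff. The function and the 95 % threshold below are illustrative assumptions, not the paper's exact procedure (note that each retained mode adds two states to the model order):

```python
import numpy as np

def modes_to_keep(contributions, threshold=0.95):
    """Smallest number of modes whose (sorted) contributions jointly
    account for at least `threshold` of the measured vibration."""
    c = np.sort(np.asarray(contributions, dtype=float))[::-1]
    frac = np.cumsum(c) / c.sum()            # cumulative contribution fraction
    n_modes = int(np.searchsorted(frac, threshold) + 1)
    return n_modes                           # suggested order: 2 * n_modes states
```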
Abstract:
The modal analysis of a structural system consists on computing its vibrational modes. The experimental way to estimate these modes requires to excite the system with a measured or known input and then to measure the system output at different points using sensors. Finally, system inputs and outputs are used to compute the modes of vibration. When the system refers to large structures like buildings or bridges, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind, traffic, . . .Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations and regardless of the inputs that originated them, whether they are ambient vibrations (wind, earthquakes, . . . ) or operational loads (traffic, human loading, . . . ). This procedure is usually called Operational Modal Analysis (OMA), and in general consists on to fit a mathematical model to the measured data assuming the unobserved excitations are realizations of a stationary stochastic process (usually white noise processes). Then, the modes of vibration are computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation- Maximization (EM) algorithm for the maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail and it is analysed how to apply it to vibration data. After that, it is compared to another well known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys some optimal properties from a statistical point of view what makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. 
In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended for the joint estimation of the proposed state space model using all the available data.
• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Finally, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure corresponding to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found in a specific test correspond to the input of that test. The problem is solved using the proposed state space model and the EM algorithm.
Resumo:
In Operational Modal Analysis of structures we often have multiple time history records of vibrations measured at different time instants. This work presents a procedure for estimating the modal parameters of the structure by processing all the records, that is, using all the available information to obtain a single estimate of the modal parameters. The method uses Maximum Likelihood Estimation and the Expectation Maximization algorithm. Finally, it has been applied to various problems on both simulated and real structures, and the results show the advantages of the proposed joint analysis.
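The joint analysis can be understood as maximizing a single likelihood over all records at once. The following is a hypothetical sketch, not the paper's EM implementation: assuming a known discrete-time model (A, C, Q, R), the joint log-likelihood of independent records is simply the sum of the per-record Kalman filter log-likelihoods, and the joint fit maximizes this sum over the shared modal parameters.

```python
import numpy as np

def kalman_loglik(y, A, C, Q, R):
    """Gaussian log-likelihood of one output record y (T x ny) for the
    model x[t+1] = A x[t] + w, y[t] = C x[t] + v, w ~ N(0,Q), v ~ N(0,R)."""
    n = A.shape[0]
    x = np.zeros(n)
    P = np.eye(n)
    for _ in range(200):                    # iterate to the stationary covariance
        P = A @ P @ A.T + Q
    ll = 0.0
    for yt in y:
        e = yt - C @ x                      # innovation
        S = C @ P @ C.T + R                 # innovation covariance
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * S)) + e @ np.linalg.solve(S, e))
        K = P @ C.T @ np.linalg.inv(S)      # Kalman gain
        x = A @ (x + K @ e)                 # predicted state for t+1
        P = A @ (P - K @ C @ P) @ A.T + Q   # predicted covariance for t+1
    return ll

def joint_loglik(records, A, C, Q, R):
    """Joint log-likelihood of independent records sharing (A, C, Q, R)."""
    return sum(kalman_loglik(y, A, C, Q, R) for y in records)

# Toy usage with a scalar model and two records
rng = np.random.default_rng(0)
A = np.array([[0.8]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.01]])
records = [rng.normal(size=(60, 1)), rng.normal(size=(40, 1))]
ll_joint = joint_loglik(records, A, C, Q, R)
```

Because the records are independent, the joint likelihood factorizes, which is what makes pooling all the data into a single estimate straightforward in the state space framework.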
Resumo:
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated by applying the proposed identification method to a set of 100 simulated cases. The numerical results show that the proposed method estimates all the modal parameters reasonably well, even in the presence of 30% measurement noise. Finally, advantages and disadvantages of the method are discussed.
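One common way to read "30% measurement noise" in simulation studies is additive Gaussian noise whose standard deviation is 30% of the RMS value of the clean response. The sketch below generates simulated measurements under that assumption; it is illustrative only and not necessarily the noise definition used in the benchmark.

```python
import numpy as np

# Illustrative assumption: noise std = 30% of the RMS of the clean signal.
rng = np.random.default_rng(1)
t = np.arange(0.0, 10.0, 0.01)                 # 10 s sampled at 100 Hz
clean = np.sin(2 * np.pi * 2.0 * t)            # simulated 2 Hz response
noise_level = 0.30
noise = rng.normal(0.0, noise_level * np.std(clean), size=t.shape)
measured = clean + noise                       # data fed to the identifier
```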
Resumo:
In Operational Modal Analysis (OMA) of a structure, the data acquisition process may be repeated many times. In these cases, the analyst has several similar records for the modal analysis of the structure, obtained at different time instants (multiple records). The solution obtained varies from one record to another, sometimes considerably. The differences are due to several reasons: statistical estimation errors, changes in the external (unmeasured) forces that modify the output spectra, the appearance of spurious modes, etc. Combining the results of the different individual analyses is not straightforward. To solve the problem, we propose the joint estimation of the parameters using all the records. This can be done in a very simple way using state space models and computing the estimates by maximum likelihood. The method provides a single result for the modal parameters that optimally combines all the records.
Resumo:
Operational Modal Analysis consists in estimating the modal parameters of a structure (natural frequencies, damping ratios and modal vectors) from output-only vibration measurements. The modal vectors can only be estimated at the points where sensors are placed, so when the number of available sensors is lower than the number of tested points, it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors): some sensors stay at the same position from setup to setup, while the others change position until all the tested points are covered. The permanent sensors are then used to merge the mode shapes estimated in each setup (the partial modal vectors) into global modal vectors. Traditionally, the partial modal vectors are estimated independently, setup by setup, and the global modal vectors are obtained in a post-processing phase. In this work we present two state space models that can be used to process all the recorded setups at the same time, and we also show how these models can be estimated using the maximum likelihood method. As a result, the global mode shape of each mode is obtained automatically, and subsequently a single value for the natural frequency and damping ratio of the mode is computed. Finally, both models are compared using real measured data.
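The traditional post-processing step that the paper improves upon can be sketched as follows. This is a hypothetical illustration, not the paper's state space method (which merges the setups inside the estimation itself): each setup's partial mode shape is rescaled by a least-squares fit of its reference-sensor components, and the scaled pieces are assembled into one global vector. The function name and index layout are assumptions for the example.

```python
import numpy as np

def merge_mode_shapes(partials, ref_idx, rov_idx, n_points):
    """Merge partial mode shapes from several setups into a global vector.
    partials[k] : mode shape from setup k (reference components first).
    ref_idx     : global indices of the reference (permanent) sensors.
    rov_idx[k]  : global indices of the roving sensors in setup k.
    Setup 0 fixes the scaling; each later setup is rescaled by a
    least-squares fit of its reference components to setup 0."""
    global_shape = np.full(n_points, np.nan, dtype=complex)
    ref0 = partials[0][: len(ref_idx)]
    for k, phi in enumerate(partials):
        ref_k = phi[: len(ref_idx)]
        alpha = (ref_k.conj() @ ref0) / (ref_k.conj() @ ref_k)  # scale factor
        phi = alpha * phi
        global_shape[ref_idx] = phi[: len(ref_idx)]
        global_shape[rov_idx[k]] = phi[len(ref_idx):]
    return global_shape

# Toy usage: true shape [1, 2, 3, 4], one reference sensor at point 0;
# setup 1 measured points {0, 3} with an arbitrary 0.5 scaling.
true = np.array([1.0, 2.0, 3.0, 4.0])
partials = [true[[0, 1, 2]], 0.5 * true[[0, 3]]]
merged = merge_mode_shapes(partials, ref_idx=[0], rov_idx=[[1, 2], [3]], n_points=4)
```

The toy usage recovers the full shape [1, 2, 3, 4] despite the arbitrary scaling of the second setup, which is exactly the role the permanent reference sensors play.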
Resumo:
Over the last decade, scientific studies have indicated an association between the air pollution to which people are exposed and a wide range of adverse health outcomes. We have developed a tool built on a model (MM5-CMAQ) run over Europe at 50 km spatial resolution, using EMEP annual emissions, to produce a short-term forecast of the impact on health. In order to estimate the mortality change (forecast for the next 24 hours) we have chosen a log-linear (Poisson) regression form for the concentration-response (C-R) function. The parameters involved in the C-R function have been estimated from published epidemiological studies. Finally, we have derived the relationship between concentration change and mortality change from the C-R function, which constitutes the final health impact function.
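A log-linear (Poisson) C-R function implies that a concentration change translates into a relative risk of exp(beta * delta_c), from which the mortality change follows. The sketch below illustrates that relationship; the coefficient `beta` is a hypothetical placeholder (a 0.6% increase per 10 ug/m3), not a value taken from the paper.

```python
import numpy as np

# Hypothetical coefficient: 0.6% mortality increase per 10 ug/m3 of pollutant.
beta = np.log(1.006) / 10.0

def mortality_change(baseline_deaths, delta_c):
    """Change in expected daily deaths for a concentration change delta_c
    (ug/m3), from the log-linear C-R form  E = E0 * exp(beta * c)."""
    rr = np.exp(beta * delta_c)          # relative risk of the change
    return baseline_deaths * (rr - 1.0)  # attributable daily deaths

# e.g. 100 baseline daily deaths and a forecast increase of +20 ug/m3
extra = mortality_change(100.0, 20.0)
```

With these illustrative numbers, the health impact function attributes roughly 1.2 additional daily deaths to the forecast concentration increase.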