22 results for Modal transformations
at Universidad Politécnica de Madrid
Abstract:
Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion average values approximate heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented within the framework of the Analytic Coarse Mesh Finite Difference Method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Physical fluxes are then transformed into modal fluxes in the eigenspace of the diffusion matrix. It is thus possible to introduce interface flux discontinuity jumps as the difference of heterogeneous and homogeneous modal fluxes, instead of introducing interface discontinuity factors as the ratio of heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
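A schematic sketch of the modal transformation described above, written with assumed generic notation (not taken from the paper):

\nabla^{2}\boldsymbol{\phi}(\mathbf{r}) = \mathbf{A}\,\boldsymbol{\phi}(\mathbf{r}), \qquad \mathbf{A} = \mathbf{V}\,\boldsymbol{\Lambda}\,\mathbf{V}^{-1}

\boldsymbol{\psi} = \mathbf{V}^{-1}\boldsymbol{\phi} \quad\Rightarrow\quad \nabla^{2}\psi_{m}(\mathbf{r}) = \lambda_{m}\,\psi_{m}(\mathbf{r}) \quad (\text{uncoupled modal equations})

f_{g} = \phi_{g}^{\mathrm{het}} / \phi_{g}^{\mathrm{hom}} \quad (\text{classical ratio of physical fluxes}) \qquad \text{vs.} \qquad \Delta\psi_{m} = \psi_{m}^{\mathrm{het}} - \psi_{m}^{\mathrm{hom}} \quad (\text{additive jump of modal fluxes})

Here A collects the multigroup cross sections and diffusion coefficients of the homogenized node, V its eigenvectors and Lambda its eigenvalues; the last line contrasts the classical interface discontinuity factors with the modal flux jumps introduced in the paper.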
Abstract:
Laminated glass is a sandwich element consisting of two or more glass sheets with one or more interlayers of polyvinyl butyral (PVB). The dynamic response of laminated glass beams and plates can be predicted using analytical or numerical models in which the glass and the PVB are usually modelled as linear-elastic and linear-viscoelastic materials, respectively. In this work the dynamic behavior of laminated glass beams is predicted using a finite element model and the Ross-Kerwin-Ungar analytical model. The numerical and analytical results are compared with those obtained by operational modal analysis performed at different temperatures.
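As a rough, hedged sketch of how such a comparison can be set up (generic sandwich-beam notation, not the paper's): the Ross-Kerwin-Ungar model provides a frequency- and temperature-dependent complex flexural rigidity B^{*}(\omega, T) for the glass-PVB-glass section, from which the modal properties of a simply supported beam of length L and mass per unit length m follow approximately as

f_{n} \approx \frac{1}{2\pi}\left(\frac{n\pi}{L}\right)^{2}\sqrt{\frac{\operatorname{Re} B^{*}(\omega_{n},T)}{m}}, \qquad \eta_{n} \approx \frac{\operatorname{Im} B^{*}(\omega_{n},T)}{\operatorname{Re} B^{*}(\omega_{n},T)},

which is the kind of prediction compared here against the finite element model and the operational modal analysis results.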
Abstract:
One important task in the design of an antenna is to carry out an analysis to find out the characteristics of the antenna that best fulfills the specifications fixed by the application. After that, a prototype is manufactured and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers, which are closed, normally shielded areas covered with electromagnetic absorbing material that simulate free-space propagation conditions. Moreover, these facilities can be used regardless of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques and without requiring additional measurements.
First, an in-depth review of the state of the art has been carried out in order to give a general view of the possibilities to characterize or reduce the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors. The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. Therefore, the main idea of all the methods is to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required.
Noise is the most widely studied error in this Thesis, and a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first one is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where it is possible to apply spatial filtering. The last one is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply spatial filtering. All the alternatives are analyzed for the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case.
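A minimal, hedged sketch of the first alternative (modal filtering of planar near-field data); the frequency, spacing, toy aperture field and noise level are assumptions for illustration and do not reproduce the thesis implementation:

import numpy as np

# assumed measurement parameters (illustrative only)
freq = 10e9                               # measurement frequency (Hz)
k0 = 2 * np.pi * freq / 3e8               # free-space wavenumber
dx = 0.01                                 # sample spacing (< lambda/2)
N = 128

# toy "measured" planar near-field: smooth aperture field plus white noise
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
E_clean = np.exp(-(X**2 + Y**2) / 0.02)
E_meas = E_clean + 0.05 * (np.random.randn(N, N) + 1j * np.random.randn(N, N))

# plane-wave ("modal") spectrum of the measured field
PWS = np.fft.fftshift(np.fft.fft2(E_meas))
kx = np.fft.fftshift(np.fft.fftfreq(N, dx)) * 2 * np.pi
KX, KY = np.meshgrid(kx, kx)

# modal filter: keep only the propagating (visible) region, where the AUT
# spectrum lives; the rejected region carries mostly noise
PWS_filt = np.where(KX**2 + KY**2 <= k0**2, PWS, 0.0)

# filtered near-field; the usual near-field-to-far-field transformation
# would then be applied to PWS_filt
E_filt = np.fft.ifft2(np.fft.ifftshift(PWS_filt))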
The method to suppress reflections in antenna measurements is also based on a source reconstruction technique, and the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to be able to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis. The method to reduce this error is based on an iterative algorithm to extrapolate the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method tries to estimate the leakage bias constant added by the receiver's quadrature detector to every near-field sample and then to suppress its effect on the far-field pattern. The second method can be divided into two parts: the first locates the faulty component that radiates or receives unwanted radiation, making its identification within the measurement environment and its later replacement easier; the second computationally removes the leakage effect without requiring the replacement of the faulty component.
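A hedged one-dimensional sketch of the kind of iterative extrapolation used against the truncation error (an alternating-projection, Gerchberg-Papoulis-style loop); the toy aperture, the choice of the reliable spectral region and the fixed iteration count are assumptions for illustration only:

import numpy as np

N = 512
x = np.linspace(-1.0, 1.0, N)
aperture = np.abs(x) <= 0.2                       # assumed AUT support on its plane
E_aut = np.where(aperture, np.cos(np.pi * x / 0.4), 0.0)

spectrum_full = np.fft.fft(E_aut)                 # stands in for the full pattern
reliable = np.zeros(N, dtype=bool)                # assumed reliable (valid) region
reliable[:N // 8] = True
reliable[-N // 8:] = True

spectrum = spectrum_full * reliable               # truncated "measured" pattern
for _ in range(200):                              # the stopping point matters in practice
    field = np.fft.ifft(spectrum)
    field = field * aperture                      # spatial constraint: field confined to the AUT support
    spectrum = np.fft.fft(field)
    spectrum[reliable] = spectrum_full[reliable]  # re-impose the reliable region
# outside the reliable region, spectrum now extrapolates the pattern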
Abstract:
This paper presents the Expectation Maximization (EM) algorithm applied to operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state space models. As is well known, the MLE enjoys optimal properties from a statistical point of view, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two different strategies to choose initial values for the EM algorithm when used for operational modal analysis: to begin with the parameters estimated by the Stochastic Subspace Identification (SSI) method, and to start from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. Modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm starting from the solution given by SSI is very useful to identify the vibration modes of a structure, discarding the spurious modes that appear in high-order models and discovering other hidden modes. Similar results are obtained using random starting values, although this strategy allows the solutions from several starting points to be analyzed, which overcomes the dependence on the initial values used.
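For context, a hedged sketch of how modal parameters are recovered from an identified discrete-time state space model (A, C), whichever estimator (SSI or EM) produced it; the toy two-mode system below is an assumption used only to make the snippet runnable:

import numpy as np
from scipy.linalg import expm

dt = 0.01                                    # assumed sampling period (s)

# toy "true" 2-DOF system used to build a discrete state space model
fn_true, zeta_true = [1.5, 3.2], [0.01, 0.02]
Ac = np.zeros((4, 4))
for i, (f, z) in enumerate(zip(fn_true, zeta_true)):
    wn = 2 * np.pi * f
    Ac[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[0.0, 1.0], [-wn**2, -2 * z * wn]]
A = expm(Ac * dt)                            # discrete-time state matrix
C = np.array([[1.0, 0.0, 1.0, 0.0]])         # one "sensor" observing both modes

# modal parameters from the eigendecomposition of A
mu, V = np.linalg.eig(A)                     # discrete-time eigenvalues / eigenvectors
lam = np.log(mu) / dt                        # continuous-time eigenvalues
freqs = np.abs(lam) / (2 * np.pi)            # natural frequencies (Hz)
damps = -lam.real / np.abs(lam)              # damping ratios
shapes = C @ V                               # (complex) mode shapes at the sensors

keep = lam.imag > 0                          # one of each complex-conjugate pair
print(np.round(freqs[keep], 3), np.round(damps[keep], 4))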
Abstract:
The estimation of modal parameters of a structure from ambient measurements has attracted the attention of many researchers in recent years. The procedure is now well established, and the use of state space models, stochastic system identification methods and stabilization diagrams makes it possible to identify the modes of the structure. In this paper the contribution of each identified mode to the measured vibration is discussed. This modal contribution is computed using the Kalman filter and is an indicator of the importance of the modes. The variation of the modal contribution with the order of the model is also studied. This analysis suggests selecting the order of the state space model as the order that includes the modes with the highest contribution. The order obtained using this method is compared to those obtained using other well-known methods, such as the Akaike criterion for time series or the singular values of the weighted projection matrix in the Stochastic Subspace Identification method. Both simulated and measured vibration data are used to show the practicability of the derived technique. Finally, it is important to remark that the method can be used with any identification method working in the state space framework.
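One plausible way to formalize such a contribution indicator (generic notation assumed here, not necessarily the paper's): with the identified state matrix diagonalized as A = V \Lambda V^{-1} and the Kalman-filter state estimates transformed into modal coordinates, \hat{z}_{t} = V^{-1}\hat{x}_{t|t}, the output is split among the modes and the share of mode j (grouped with its complex conjugate) is measured as

y_{t} \approx \sum_{j} (C v_{j})\,\hat{z}_{t,j}, \qquad \Gamma_{j} = \frac{\sum_{t}\lVert (C v_{j})\,\hat{z}_{t,j}\rVert^{2}}{\sum_{k}\sum_{t}\lVert (C v_{k})\,\hat{z}_{t,k}\rVert^{2}},

so that modes with a negligible contribution can be discarded when selecting the model order.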
Abstract:
This paper presents a time-domain stochastic system identification method based on maximum likelihood estimation (MLE) with the expectation maximization (EM) algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. The benchmark structure is a four-story, two-bay by two-bay steel-frame scale model built in the Earthquake Engineering Research Laboratory at the University of British Columbia, Canada. This paper focuses on Phase I of the analytical benchmark studies. A MATLAB-based finite element analysis code obtained from the IASC-ASCE SHM Task Group web site is used to calculate the dynamic response of the prototype structure, and a total of 100 simulations have been run with this code in order to evaluate the proposed identification method. Several techniques are available for system identification; in this work, the stochastic subspace identification (SSI) method has been used for comparison. SSI is a well-known method that computes accurate estimates of the modal parameters. The principles of the SSI method are introduced in the paper, and then the proposed MLE with the EM algorithm is explained in detail. The advantages of the proposed structural identification method can be summarized as follows: (i) the method is based on maximum likelihood, which implies minimum-variance estimates; (ii) EM is a computationally simpler estimation procedure than other optimization algorithms; and (iii) it estimates more parameters than SSI, and these estimates are accurate. On the other hand, the main disadvantages of the method are: (i) the EM algorithm is an iterative procedure and takes time to reach convergence; and (ii) the method needs starting values for the parameters. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using both the SSI method and the proposed MLE + EM method. The numerical results show that the proposed method identifies eigenfrequencies, damping ratios and mode shapes reasonably well even in the presence of 10% measurement noise, and these modal parameters are more accurate than the SSI estimates.
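For reference, a compact sketch of one EM iteration for the linear-Gaussian state space model in its usual (Shumway-Stoffer-type) form; the notation is generic and quoted here only to illustrate the structure of the method. With the model

x_{t+1} = A x_{t} + w_{t}, \qquad y_{t} = C x_{t} + v_{t}, \qquad w_{t}\sim N(0,Q),\; v_{t}\sim N(0,R),

the E-step runs the Kalman filter and smoother to obtain \hat{x}_{t} = E[x_{t}\,|\,y_{1:N}], P_{t|N} and the conditional second moments, with

S_{11}=\sum_{t=2}^{N}E[x_{t}x_{t}^{T}|y_{1:N}], \qquad S_{10}=\sum_{t=2}^{N}E[x_{t}x_{t-1}^{T}|y_{1:N}], \qquad S_{00}=\sum_{t=2}^{N}E[x_{t-1}x_{t-1}^{T}|y_{1:N}].

The M-step then updates

A \leftarrow S_{10}S_{00}^{-1}, \qquad Q \leftarrow \frac{S_{11}-S_{10}S_{00}^{-1}S_{10}^{T}}{N-1},

C \leftarrow \Big(\sum_{t}y_{t}\hat{x}_{t}^{T}\Big)\Big(\sum_{t}E[x_{t}x_{t}^{T}|y_{1:N}]\Big)^{-1}, \qquad R \leftarrow \frac{1}{N}\sum_{t}\Big[(y_{t}-C\hat{x}_{t})(y_{t}-C\hat{x}_{t})^{T}+C P_{t|N}C^{T}\Big],

and the two steps are repeated until the log-likelihood stops increasing.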
Abstract:
The modal analysis of a structural system consists in computing its vibration modes. The experimental way to estimate these modes requires exciting the system with a measured or known input and then measuring the system output at different points using sensors. Finally, system inputs and outputs are used to compute the modes of vibration. When the system is a large structure like a building or a bridge, the tests have to be performed in situ, so it is not possible to measure system inputs such as wind, traffic, etc. Even if a known input is applied, the procedure is usually difficult and expensive, and there are still uncontrolled disturbances acting at the time of the test. These facts led to the idea of computing the modes of vibration using only the measured vibrations, regardless of the inputs that originated them, whether they are ambient vibrations (wind, earthquakes, etc.) or operational loads (traffic, human loading, etc.). This procedure is usually called Operational Modal Analysis (OMA) and in general consists in fitting a mathematical model to the measured data, assuming the unobserved excitations are realizations of a stationary stochastic process (usually white noise processes). The modes of vibration are then computed from the estimated model. The first issue investigated in this thesis is the performance of the Expectation-Maximization (EM) algorithm for the maximum likelihood estimation of the state space model in the field of OMA. The algorithm is described in detail and its application to vibration data is analysed. After that, it is compared to another well-known method, the Stochastic Subspace Identification algorithm. The maximum likelihood estimate enjoys optimal properties from a statistical point of view, which makes it very attractive in practice, but the most remarkable property of the EM algorithm is that it can be used to address a wide range of situations in OMA. In this work, three additional state space models are proposed and estimated using the EM algorithm:
• The first model is proposed to estimate the modes of vibration when several tests are performed on the same structural system. Instead of analysing record by record and then computing averages, the EM algorithm is extended to the joint estimation of the proposed state space model using all the available data.
• The second state space model is used to estimate the modes of vibration when the number of available sensors is lower than the number of points to be tested. In these cases it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors). Here, the proposed state space model and the EM algorithm are used to estimate the modal parameters taking into account the data of all setups.
• Finally, a state space model is proposed to estimate the modes of vibration in the presence of unmeasured inputs that cannot be modelled as white noise processes. In these cases, the frequency components of the inputs cannot be separated from the eigenfrequencies of the system, and spurious modes are obtained in the identification process. The idea is to measure the response of the structure corresponding to different inputs; it is then assumed that the parameters common to all the data correspond to the structure (modes of vibration), while the parameters found in a specific test correspond to the input in that test. The problem is solved using the proposed state space model and the EM algorithm.
Abstract:
During launch, satellites and their equipment are subjected to loads of a random nature and with a wide frequency range. Their vibro-acoustic response is an important issue to be analysed, for example for folded solar arrays and antennas. The main issue at low modal density is the modelling of combinations involving air layers, structures and the external fluid. Depending on the modal density, different methodologies, such as FEM, BEM and SEA, should be considered. This work focuses on the analysis of different combinations of the aforementioned methodologies used to characterise the vibro-acoustic response of two rectangular sandwich panels, both isolated and coupled through an air layer between them, under a diffuse acoustic field. Focusing on the modelling of air layers, different models are proposed. To illustrate the phenomenology described and studied, experimental results from an acoustic test on an ARA-MKIII solar array in folded configuration are presented along with numerical results.
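As a point of reference for the air-layer coupling discussed above, the classical double-panel ("mass-air-mass") resonance for two panels of surface masses m_1'' and m_2'' separated by an air gap of thickness d is

f_{0} = \frac{1}{2\pi}\sqrt{\frac{\rho_{0}c^{2}}{d}\left(\frac{1}{m_{1}''}+\frac{1}{m_{2}''}\right)},

with \rho_{0} and c the density of air and the speed of sound; this is a generic textbook expression quoted for orientation, not one of the models proposed in the work.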
Abstract:
Computing the modal parameters of structural systems often requires processing data from multiple non-simultaneously recorded setups of sensors. These setups share some sensors in common, the so-called reference sensors, which are fixed for all measurements, while the other sensors change their position from one setup to the next. One possibility is to process the setups separately, resulting in different modal parameter estimates for each setup. The reference sensors are then used to merge or glue the different parts of the mode shapes to obtain global mode shapes, while the natural frequencies and damping ratios are usually averaged. In this paper we present a new state space model that processes all setups at once. As a result, the global mode shapes are obtained automatically, and only one value of the natural frequency and damping ratio of each mode is estimated. We also investigate the estimation of this model using maximum likelihood and the Expectation Maximization algorithm, and apply this technique to simulated and measured data corresponding to different structures.
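A minimal, hedged sketch of the classical setup-by-setup alternative that the proposed model replaces, i.e. re-scaling each partial mode shape to the shared reference sensors and gluing the parts; the sensor labels and values are purely illustrative:

import numpy as np

ref = ["r1", "r2"]                                   # reference channels (common to all setups)
setups = [                                           # partial mode shapes of one mode
    {"r1": 1.00, "r2": 0.80, "a": 0.55, "b": 0.30},
    {"r1": 0.52, "r2": 0.41, "c": 0.12, "d": -0.08},
]

base = setups[0]
merged = dict(base)
for s in setups[1:]:
    x = np.array([s[k] for k in ref])
    y = np.array([base[k] for k in ref])
    alpha = (x @ y) / (x @ x)                        # least-squares scale factor on the references
    for k, v in s.items():
        if k not in ref:
            merged[k] = alpha * v                    # glue the moving sensors into the global shape
print(merged)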
Abstract:
In Operational Modal Analysis of structures we often have multiple time history records of vibrations measured at different time instants. This work presents a procedure for estimating the modal parameters of the structure by processing all the records, that is, using all available information to obtain a single estimate of the modal parameters. The method uses Maximum Likelihood Estimation and the Expectation Maximization algorithm. It has been applied to several problems involving both simulated and real structures, and the results show the advantage of the proposed joint analysis.
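One natural formalization of this joint estimation (notation assumed here for illustration): if y^{(r)} denotes the r-th record, the state space parameters \theta shared by all records are obtained by maximizing the joint log-likelihood

\ell(\theta) = \sum_{r=1}^{R}\log p\big(y^{(r)}_{1},\dots,y^{(r)}_{N_{r}};\theta\big),

which the EM algorithm handles naturally by treating the unobserved state sequences of every record as missing data.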
Abstract:
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated by applying the proposed identification method to a set of 100 simulated cases. The numerical results show that the proposed method estimates all the modal parameters reasonably well, even in the presence of 30% measurement noise. Finally, advantages and disadvantages of the method are discussed.
Abstract:
In Operational Modal Analysis (OMA) of a structure, the data acquisition process may be repeated many times. In these cases, the analyst has several similar records for the modal analysis of the structure, obtained at different time instants (multiple records). The solution obtained varies from one record to another, sometimes considerably. The differences are due to several reasons: statistical estimation errors, changes in the external (unmeasured) forces that modify the output spectra, the appearance of spurious modes, etc. Combining the results of the different individual analyses is not straightforward. To solve the problem, we propose the joint estimation of the parameters using all the records. This can be done in a very simple way using state space models and computing the estimates by maximum likelihood. The method provides a single result for the modal parameters that optimally combines all the records.
Abstract:
In order to minimize car-based trips, transport planners have been particularly interested in understanding the factors that explain modal choices. In the transport modelling literature there has been an increasing awareness that socioeconomic attributes and quantitative variables are not sufficient to characterize travelers and forecast their travel behavior. Recent studies have also recognized that users' social interactions and land use patterns influence travel behavior, especially when changes to transport systems are introduced, but links between international and Spanish perspectives are rarely addressed. In this paper, factorial and path analyses through a Multiple-Indicator Multiple-Cause (MIMIC) model are used to understand and describe the relationships between the different psychological and environmental constructs, social influence and socioeconomic variables. The MIMIC model generates Latent Variables (LVs) that are incorporated sequentially into Discrete Choice Models (DCM), where the level-of-service and cost attributes of the travel modes are also included directly to measure the effect of the transport policies introduced in Madrid during the last three years in the context of the economic crisis. The data used in this paper come from a two-wave smartphone-based panel survey (n=255 and 190 respondents, respectively) conducted in Madrid.
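In generic form (symbols assumed here only to illustrate the structure), the MIMIC component and the hybrid discrete choice model can be written as

LV_{q} = B\,s_{q} + \zeta_{q} \quad (\text{structural equations, with socioeconomic/social causes } s_{q}), \qquad I_{q} = \Lambda\,LV_{q} + \nu_{q} \quad (\text{measurement equations, with attitudinal indicators } I_{q}),

U_{iq} = \beta^{T}X_{iq} + \gamma^{T}LV_{q} + \varepsilon_{iq},

where X_{iq} collects the level-of-service and cost attributes of mode i for traveler q, and the chosen alternative is the one with maximum utility.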
Abstract:
A structure vibrates according to its vibration modes, defined by their modal parameters (natural frequencies, damping ratios and mode shapes). The modal parameters can be estimated from vibration measurements at key points of the structure. In civil engineering it is difficult to excite structures in a controlled manner, so techniques that estimate the modal parameters from output-only measurements are of vital importance for these structures. This technique is known as Operational Modal Analysis (OMA). OMA does not need to excite the structure artificially; it relies only on its behavior in service. The motivation for carrying out OMA tests arises in civil engineering because successfully exciting large structures artificially is not only difficult and expensive but may even damage the structure. The importance of this identification lies in the fact that the global behavior of a structure is directly related to its modal parameters, and any variation of stiffness, mass or support conditions, even if local, is reflected in the modal parameters. Therefore, this identification can be integrated into a Structural Health Monitoring system.
The main difficulty in using modal parameters estimated by OMA is the uncertainty associated with the estimation process. There are uncertainties in the value of the modal parameters associated with the computing process (internal) and with the influence of environmental factors (external), such as temperature. This Master's Thesis analyzes these two sources of uncertainty. Firstly, for a laboratory structure, the uncertainties associated with the OMA program used are studied and quantified. Secondly, for an in-service structure (a stress-ribbon footbridge), both the effect of the OMA program and the influence of environmental factors on the modal parameter estimates are studied. More specifically, a method to track the natural frequencies of the same mode has been proposed. This method includes a multiple linear regression model that makes it possible to remove the influence of these external agents.
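A minimal, hedged sketch of the kind of multiple linear regression used to remove the environmental influence from a tracked natural frequency; the synthetic data and the choice of regressors (temperature plus one extra covariate) are assumptions for illustration only:

import numpy as np

rng = np.random.default_rng(0)
n = 200
T = 10 + 15 * rng.random(n)                      # air temperature (deg C)
H = 30 + 40 * rng.random(n)                      # e.g. relative humidity (%)
f_obs = 3.80 - 0.004 * (T - 20) + 0.0005 * (H - 50) + 0.002 * rng.standard_normal(n)  # tracked frequency (Hz)

X = np.column_stack([np.ones(n), T, H])          # regression matrix
beta, *_ = np.linalg.lstsq(X, f_obs, rcond=None)
f_fit = X @ beta
f_corr = f_obs - f_fit + f_obs.mean()            # frequency with the environmental trend removed
print(np.round(beta, 5), f_obs.std(), f_corr.std())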
Abstract:
An optical communications receiver using wavelet signal processing is proposed in this paper for dense wavelength-division multiplexed (DWDM) systems and modal-division multiplexed (MDM) transmissions. The optical signal-to-noise ratio (OSNR) required to demodulate the polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) modulation format is reduced by the wavelet denoising process. This procedure improves the bit error rate (BER) performance and increases the transmission distance in DWDM systems. Additionally, the wavelet-based design relies on signal decomposition using time-limited basis functions, which reduces the computational cost of the Digital Signal Processing (DSP) module. Regarding MDM systems, a new scheme for encoding data bits based on wavelets is presented to minimize mode coupling in few-mode (FMF) and multimode fibers (MMF). Shifted Prolate Wave Spheroidal (SPWS) functions are proposed to reduce the modal interference.
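A hedged sketch of a wavelet denoising step in the spirit of the receiver described above; the wavelet family, threshold rule and the toy QPSK signal are assumptions and do not reproduce the paper's design:

import numpy as np
import pywt

rng = np.random.default_rng(1)
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1024)   # QPSK symbols
rx = symbols + 0.3 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))  # noisy received signal

def wavelet_denoise(x, wavelet="sym8", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))                # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# denoise in-phase and quadrature components separately
rx_denoised = wavelet_denoise(rx.real) + 1j * wavelet_denoise(rx.imag)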