39 results for On-road measurements
Abstract:
The FK is a two-stage optical concentrator for CPV, composed of a Fresnel lens working as the POE and a refractive element working as the SOE. Both elements perform Köhler integration in order to achieve uniform irradiance. The FK has been shown to compare very well with other Fresnel-based concentrator optics. Recent on-sun measurements carried out on an FK mono-module prototype have already shown outstanding results, achieving electrical efficiencies over 34%. Further optimization of the optical design, together with the application of an AR coating on the SOE, is expected to lead to efficiencies over 35%.
Abstract:
Fiber reinforced polymer (FRP) composites have found widespread use in the repair and strengthening of concrete structures. FRP composites exhibit a high strength-to-weight ratio and corrosion resistance, and are convenient to use in repair applications. Externally bonded FRP flexural strengthening of concrete beams is the most widespread application of this technique. A common cause of failure in such members is intermediate crack-induced debonding (IC debonding) of the FRP substrate from the concrete in an abrupt manner. Continuous monitoring of the concrete-FRP interface is essential to prevent IC debonding. Objective condition assessment and performance evaluation are challenging activities, since they require some type of monitoring to track the response over a period of time. In this paper, a multi-objective model updating method integrated in the context of structural health monitoring is demonstrated as a promising technology for the safety and reliability of this kind of strengthening technique. The proposed method, solved by a multi-objective extension of the particle swarm optimization method, is based on strain measurements under controlled loading. The use of permanently installed fiber Bragg grating (FBG) sensors, embedded in the FRP-concrete interface or bonded onto the FRP strip, together with the proposed methodology results in an automated method able to operate in an unsupervised mode.
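As an illustration of the model updating step described in this abstract, the sketch below runs a plain particle swarm optimization over a weighted sum of two strain-residual objectives (one per load case). It is a simplified, scalarized stand-in for the multi-objective PSO of the paper; the forward model, parameter bounds and "measured" strains are hypothetical placeholders.

```python
# Minimal sketch (not the authors' implementation): model updating by particle swarm
# optimization, scalarizing two strain-residual objectives with a weighted sum.
import numpy as np

rng = np.random.default_rng(0)

def predicted_strains(theta, load_case):
    """Hypothetical forward model: strains at FBG locations for parameter vector `theta`
    and a given load case. In practice this would be an FE model of the strengthened beam."""
    x = np.linspace(0.0, 1.0, 8)                      # normalized FBG positions
    return load_case * theta[0] * np.sin(np.pi * x) * np.exp(-theta[1] * x)

# "Measured" strains for two controlled load cases (synthetic placeholders).
theta_true = np.array([1.2, 0.5])
meas = {1.0: predicted_strains(theta_true, 1.0),
        2.0: predicted_strains(theta_true, 2.0)}

def objectives(theta):
    """One sum-of-squares strain residual per load case."""
    return np.array([np.sum((predicted_strains(theta, lc) - m) ** 2)
                     for lc, m in meas.items()])

def pso(n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, weights=(0.5, 0.5)):
    lo, hi = np.array([0.1, 0.0]), np.array([3.0, 2.0])   # parameter bounds (assumed)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([np.dot(weights, objectives(p)) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([np.dot(weights, objectives(p)) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

print("updated parameters:", pso())   # should approach theta_true
```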
Abstract:
The actual performance of a PV system can differ from its expected behaviour. This is the main reason why the performance of PV systems should be monitored, analyzed and, if needed, improved. Some of the current testing procedures relating to the electrical behaviour of PV systems are appropriate for detecting electrical performance losses, but they are not well suited to revealing hidden defects in the modules of PV plants and BIPV, which can lead to future losses. This paper reports on the tests and procedures used to evaluate the performance of PV systems, and especially on a novel procedure for quick on-site measurements and the recognition of defects caused by overheating in PV modules located in operating PV installations.
Abstract:
The objective of this thesis is the development of cooperative localization and tracking algorithms using nonparametric message passing techniques. In contrast to the most well-known techniques, the goal is to estimate the posterior probability density function (PDF) of the position of each sensor. This problem can be solved using a Bayesian approach, but it is intractable in the general case. Nevertheless, the particle-based approximation (via a nonparametric representation) and an appropriate factorization of the joint PDFs (using message passing methods) make the Bayesian approach feasible for inference in sensor networks. The well-known method for this problem, nonparametric belief propagation (NBP), can lead to inaccurate beliefs and possible non-convergence in loopy networks. Therefore, we propose four novel algorithms which alleviate these problems: nonparametric generalized belief propagation (NGBP) based on junction trees (NGBP-JT), NGBP based on pseudo-junction trees (NGBP-PJT), NBP based on spanning trees (NBP-ST), and uniformly reweighted NBP (URW-NBP). We also extend NBP for cooperative localization in mobile networks. In contrast to previous methods, we use optional smoothing, provide a novel communication protocol, and increase the efficiency of the sampling techniques. Moreover, we propose novel algorithms for distributed tracking, in which the goal is to track a passive object which cannot localize itself. In particular, we develop distributed particle filtering (DPF) based on three asynchronous belief consensus (BC) algorithms: standard belief consensus (SBC), broadcast gossip (BG), and belief propagation (BP). Finally, the last part of this thesis includes the experimental analysis of some of the proposed algorithms, in which we found that the results based on real measurements are very similar to the results based on theoretical models.
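As a minimal illustration of the consensus building block behind the distributed tracking algorithms mentioned above (the thesis uses belief consensus schemes, including broadcast gossip), the sketch below runs plain pairwise randomized gossip averaging on a small network; the ring topology and the local statistics are illustrative assumptions, not material from the thesis.

```python
# Minimal sketch (not the thesis code): pairwise randomized gossip averaging.
# Each node starts from a local statistic (e.g., a local log-likelihood term) and
# repeatedly averages with random neighbours until the network agrees on the global mean.
import random

def gossip_average(values, neighbours, n_rounds=500, seed=0):
    """values: list of local scalars; neighbours: adjacency list of the network graph."""
    rnd = random.Random(seed)
    x = list(values)
    for _ in range(n_rounds):
        i = rnd.randrange(len(x))              # a random node wakes up
        j = rnd.choice(neighbours[i])          # and picks a random neighbour
        x[i] = x[j] = 0.5 * (x[i] + x[j])      # both keep the pairwise average
    return x

# 6-node ring network with local statistics; all nodes converge to the global mean (3.5).
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(gossip_average([1, 2, 3, 4, 5, 6], ring))
```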
Abstract:
This Doctoral Thesis, entitled "Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths", aims to deepen the knowledge of a particular antenna measurement system: the compact range operating in the millimeter-wavelength frequency bands. The thesis has been developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements, currently running four facilities in different configurations: a Gregorian compact antenna test range, a spherical near-field range, a planar near-field range and a semi-anechoic arch system. The research work performed in this thesis contributes to the knowledge of the first of these measurement configurations at higher frequencies, beyond the microwave region where the Radiation Group already offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; they are succinctly described in the following paragraphs. The first step dealt with the state-of-the-art review. The study of the scientific literature covered the analysis of measurement practices in compact antenna test ranges together with the particularities of millimeter-wavelength technologies. The joint study of both fields of knowledge converged, where such measurement facilities are of interest, on a series of technological challenges which become serious bottlenecks at different stages: analysis, design and assessment. Secondly, after the overview study, the focus was set on electromagnetic analysis algorithms. These formulations make it possible to compute certain electromagnetic features of interest, such as the phase of the field distribution or the stray-signal behaviour of particular structures, when they interact with electromagnetic wave sources. Properly operated, a CATR facility relies on wave-collimating optics which are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks involve a very large number of mathematical unknowns which grows with frequency, following different polynomial-order laws depending on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, this flexibility being the core of the algorithm's ability to support the subsequent design tasks. This thesis' contribution to this field consists of a formulation that is powerful both in dealing with various analysis geometries and in computational terms. Two algorithms were developed; while based on the same hybridization principle, they achieve different orders of physical accuracy at different computational costs. An inter-comparison of their CATR design capabilities was performed, reaching both qualitative and quantitative conclusions on their scope. In the third place, interest shifted from the analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced in the analysis stage also appears in the in-chamber field probing stage.
The natural decrease in the dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities increase exponentially the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength range, whereas the value of range assessment lies, on the contrary, towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques which achieve substantial data reduction ratios. Collaterally, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range. In the fourth place, the environmental sensitivity of millimeter wavelengths was addressed. The drift of electromagnetic experiments due to the dependence of the results on the surrounding environment is well known. This feature relegates many industrial practices that are routine at microwave frequencies to the experimental stage at millimeter wavelengths. In particular, the evolution of the atmosphere within acceptable conditioning bounds results in drift phenomena which completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables which affect the range's performance. A simple model was developed, able to relate high-level phenomena, such as feed-probe phase drift, to low-level quantities that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is automatically extended towards higher frequencies. Therefore, the purpose of this thesis is to go further into the knowledge of compact antenna test ranges at millimeter wavelengths. This knowledge is organized along the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages at each of which bottleneck phenomena exist nowadays and seriously compromise antenna measurement practices at millimeter wavelengths.
Abstract:
Most satellite image fusion methodologies at the pixel level introduce false spatial details, i.e., artifacts, in the resulting fused images. In many cases, these artifacts appear because image fusion methods do not consider the differences in roughness or textural characteristics between different land covers; they only consider the digital values associated with single pixels. This effect increases as the spatial resolution of the image increases. To minimize this problem, we propose a new paradigm based on local measurements of the fractal dimension (FD). Fractal dimension maps (FDMs) are generated for each of the source images (the panchromatic image and each band of the multispectral image) with the box-counting algorithm and by applying a windowing process. The average of the source-image FDMs, previously indexed between 0 and 1, has been used to discriminate the different land covers present in satellite images. This paradigm has been applied through the fusion methodology based on the discrete wavelet transform (DWT), using the à trous algorithm (WAT). Two different scenes registered by optical sensors on board the FORMOSAT-2 and IKONOS satellites were used to study the behaviour of the proposed methodology. The implementation of this approach, using the WAT method, makes it possible to adapt the fusion process to the roughness and shape of the regions present in the image to be fused. This improves the quality of the fused images and their classification results when compared with the original WAT method.
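The fractal dimension maps described above rely on a box-counting estimate computed over local windows. The sketch below shows the box-counting fit for a single window, assuming a binary detail image for simplicity; the windowing over the whole image and the 0-1 indexing of the resulting map are omitted.

```python
# Minimal sketch (assumes a binary detail window) of the box-counting fractal dimension:
# count occupied boxes N(s) at several box sizes s and fit log N(s) against log(1/s).
import numpy as np

def box_counting_dimension(window):
    """Estimate the fractal dimension of a square binary window by box counting."""
    n = window.shape[0]
    sizes = [s for s in (1, 2, 4, 8, 16) if s <= n // 2]
    counts = []
    for s in sizes:
        # partition the window into s x s boxes and count boxes containing any detail
        boxes = window[:n - n % s, :n - n % s].reshape(n // s, s, -1, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Example: a 64x64 window with a rough synthetic pattern.
rng = np.random.default_rng(1)
window = (rng.random((64, 64)) > 0.6).astype(np.uint8)
print(box_counting_dimension(window))
```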
Abstract:
Surface-initiated cracking of asphalt pavements is one of the most frequent and important distress modes occurring in bituminous pavements, as the theoretical and experimental studies carried out over the last decade have shown. However, this failure mechanism has not been considered by the traditional design methods for these pavements. The long-life pavement concept is based on adequately monitoring the downward progress of these deteriorations and intervening at the most appropriate moment so as to keep them confined as partial-depth cracks in the more easily accessible and repairable surface layer, thereby prolonging the durability and serviceability of the pavement and reducing its overall life-cycle costs. Therefore, in order to select the optimal pavement maintenance strategy it is essential to have methodologies that allow precise in situ identification, monitoring and control of top-down cracking, and that also permit a reliable, high-performance determination of its depth and extent. This Doctoral Thesis presents the results obtained through the systematic laboratory and in situ research carried out to gather data on top-down cracking in asphalt pavements and to study procedures for evaluating the depth of this type of crack using ultrasonic techniques. These results have shown that the proposed non-destructive methodology, which is fast, low-cost and simple to implement (until now used mainly on metal and concrete structures, owing to the difficulties introduced by the viscoelastic nature of bituminous materials), can be applied to asphalt pavements with sufficient reliability and repeatability. The measurements are also independent of the total pavement thickness. In addition, it overcomes some of the frequent drawbacks of other pavement crack diagnosis methods, such as core extraction (a destructive, high-cost procedure involving prolonged traffic interruptions) or other non-destructive techniques such as those based on deflection measurements or ground-penetrating radar, which are not sufficiently precise for investigating surface cracks. To this end, several test campaigns were carried out on laboratory specimens in which different empirical conditions were studied, such as different types of hot bituminous mixtures (AC, SMA and PA), pavement thicknesses and bonding conditions between layers, temperatures, surface textures, filling materials and water inside the cracks, sensor positions, and a wide range of possible crack depths. The methods employed are based on taking several measurements of the velocity or transmission time of the ultrasonic pulse on a single accessible face or surface of the material, so that a signal transmission coefficient can be obtained (relative or self-compensated measurements). The measurements were taken at low excitation frequencies with two different ultrasonic devices, one equipped with dry point contact transducers (DPC) and the other with flat contact transducers coupled through a specially selected material (CPC).
This made it possible to overcome some of the traditional drawbacks of conventional transducers and to avoid any prior preparation of the surfaces. The self-calibration technique employed eliminates systematic errors and the need for a prior local calibration, demonstrating the potential of this technology. The experimental results have been compared with simplified theoretical models that simulate the propagation of ultrasonic waves in these cracked bituminous materials, which had previously been derived through an analytical approach and allowed the correct interpretation of the empirical data. These models were subsequently calibrated with the laboratory results, and their generalized mathematical expressions and charts are provided for routine use in practical applications. The ultrasonic tests carried out in situ, together with the extraction of pavement cores, made it possible to evaluate the proposed models. The maximum average relative error in the estimation of crack depth when applying these models did not exceed 13%, with a 95% confidence level, over the whole set of tests performed. The in situ verification of the models has allowed the criteria and the necessary recommendations for their use on in-service pavements to be established. The experience gained makes it possible to integrate this methodology among the auscultation techniques used for pavement maintenance management. Abstract Surface-initiated cracking of asphalt pavements constitutes one of the most frequent and important types of distress that occur in flexible bituminous pavements, as clearly has been demonstrated in the technical and experimental studies done over the past decade. However, this failure mechanism has not been taken into consideration for traditional methods of flexible pavement design. The concept of long-lasting pavements is based on adequate monitoring of the depth and extent of these deteriorations and on intervention at the most appropriate moment so as to contain them in the surface layer in the form of easily-accessible and repairable partial-depth top-down cracks, thereby prolonging the durability and serviceability of the pavement and reducing the overall cost of its life cycle. Therefore, to select the optimal maintenance strategy for perpetual pavements, it becomes essential to have access to methodologies that enable precise on-site identification, monitoring and control of top-down propagated cracks and that also permit a reliable, high-performance determination of the extent and depth of cracking. This PhD Thesis presents the results of systematic laboratory and in situ research carried out to obtain information about top-down cracking in asphalt pavements and to study methods of depth evaluation of this type of cracking using ultrasonic techniques. These results have demonstrated that the proposed non-destructive methodology (cost-effective, fast and easy to implement; mainly used to date for concrete and metal structures, due to the difficulties caused by the viscoelastic nature of bituminous materials) can be applied with sufficient reliability and repeatability to asphalt pavements. Measurements are also independent of the asphalt thickness.
Furthermore, it resolves some of the common inconveniences presented by other methods used to evaluate pavement cracking, such as core extraction (a destructive and expensive procedure that requires prolonged traffic interruptions) and other non-destructive techniques, such as those based on deflection measurements or ground-penetrating radar, which are not sufficiently precise to measure surface cracks. To obtain these results, extensive tests were performed on laboratory specimens. Different empirical conditions were studied, such as various types of hot bituminous mixtures (AC, SMA and PA), differing thicknesses of asphalt and adhesions between layers, varied temperatures, surface textures, filling materials and water within the crack, different sensor positions, as well as an ample range of possible crack depths. The methods employed in the study are based on a series of measurements of ultrasonic pulse velocities or transmission times over a single accessible side or surface of the material that make it possible to obtain a signal transmission coefficient (relative or auto-calibrated readings). Measurements were taken at low frequencies by two short-pulse ultrasonic devices: one equipped with dry point contact transducers (DPC) and the other with flat contact transducers that require a specially-selected coupling material (CPC). In this way, some of the traditional inconveniences presented by the use of conventional transducers were overcome and a prior preparation of the surfaces was not required. The auto-compensating technique eliminated systematic errors and the need for previous local calibration, demonstrating the potential for this technology. The experimental results have been compared with simplified theoretical models that simulate ultrasonic wave propagation in cracked bituminous materials, which had been previously deduced using an analytical approach and have permitted the correct interpretation of the aforementioned empirical results. These models were subsequently calibrated using the laboratory results, providing generalized mathematical expressions and graphics for routine use in practical applications. Through a series of on-site ultrasound test campaigns, accompanied by asphalt core extraction, it was possible to evaluate the proposed models, with differences between predicted crack depths and those measured in situ lower than 13% (with a confidence level of 95%). Thereby, the criteria and the necessary recommendations for their implementation on in-service asphalt pavements have been established. The experience obtained through this study makes it possible to integrate this methodology into the evaluation techniques for pavement management systems.
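As an illustration of the relative (self-compensated) readings mentioned in this abstract, a commonly used self-calibrating arrangement, stated here as an assumption rather than as the exact configuration of the thesis, places two emitters (1 and 4) on the outside and two receivers (2 and 3) on the inside of the measurement line, with the crack between positions 2 and 3. The ratio

\[
T \;=\; \sqrt{\frac{A_{1\to 3}\; A_{4\to 2}}{A_{1\to 2}\; A_{4\to 3}}}
\]

combines the two crack-crossing paths in the numerator with the two intact-surface paths in the denominator, so that transducer coupling and source terms cancel. The resulting coefficient T (or an analogous ratio of transmission times) is then mapped to crack depth through the calibrated models described above.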
Abstract:
This thesis addresses the automatic detection and tracking of vehicles using computer vision techniques with an on-board monocular camera. This problem has attracted great interest from the automotive industry and the scientific community, since it is the first step towards driver assistance, accident prevention and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no completely satisfactory solution has been found so far, and it therefore remains an open research topic. The main challenges posed by vision-based detection and tracking are the great variability among vehicles, a background that changes dynamically due to camera motion, and the need to operate in real time. In this context, this thesis proposes a unified framework for vehicle detection and tracking that tackles the problems described through a statistical approach. The framework consists of three main blocks, i.e., hypothesis generation, hypothesis verification and vehicle tracking, which are carried out sequentially. Nevertheless, the exchange of information between the different blocks is encouraged in order to achieve the highest possible degree of adaptation to changes in the environment and to reduce the computational cost. To address the first task, hypothesis generation, two complementary methods are proposed, based respectively on the analysis of appearance and of scene geometry. The use of a transformed domain in which the perspective of the original image is removed is particularly interesting here, since this domain allows a fast search within the image and therefore an efficient generation of vehicle location hypotheses. The final candidates are obtained through a collaborative framework between the original and transformed domains. For hypothesis verification, a supervised learning method is adopted. Some of the most popular feature extraction methods are evaluated and new descriptors are proposed based on knowledge of vehicle appearance. To evaluate the classification effectiveness of these descriptors, and given that there are no public databases suited to the problem described, a new database has been generated on which extensive tests have been carried out. Finally, a methodology for the fusion of the different classifiers is presented and the combinations offering the best results are discussed. The core of the proposed framework is a Bayesian tracking method based on particle filters. Contributions are made to the three fundamental elements of these filters: the inference algorithm, the dynamic model and the observation model. In particular, the use of an MCMC-based sampling method is proposed, which avoids the high computational cost of traditional particle filters and therefore makes the joint modeling of multiple vehicles computationally feasible. Furthermore, the aforementioned transformed domain allows the definition of a constant-velocity dynamic model, since the smooth motion of vehicles on highways is preserved. Finally, an observation model that integrates different cues is proposed.
In particular, in addition to vehicle appearance, the model also takes into account all the information received from the previous processing blocks. The proposed method runs in real time on a general-purpose computer and gives outstanding results compared with traditional methods. ABSTRACT This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community as it is the first step for driver assistance and collision avoidance systems and for eventual autonomous driving. Although much effort has been devoted to address it in recent years, no satisfactory solution has yet been devised and thus it is an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting the knowledge on vehicle appearance. Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model since it preserves the smooth motion of vehicles in highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is proven to run near real-time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
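As a toy illustration of two of the ingredients named above, the sketch below combines a constant-velocity dynamic model in the rectified (bird's-eye) domain with a short Metropolis-Hastings chain per frame on a single vehicle state. It is an illustrative assumption throughout, not the thesis implementation: the Gaussian position likelihood stands in for the multi-cue observation model, and no multi-vehicle interaction is modeled.

```python
# Minimal sketch: constant-velocity prediction plus a few MH moves per frame.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1                                     # frame period [s] (assumed)

F = np.array([[1, 0, dt, 0],                 # state: [x, y, vx, vy] in the rectified plane
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

def predict(state, q=0.5):
    """Constant-velocity prediction with small Gaussian process noise."""
    return F @ state + rng.normal(0.0, q, size=4) * np.array([dt, dt, 1, 1])

def log_likelihood(state, measurement, sigma=1.0):
    """Toy observation model: Gaussian score of the predicted (x, y) against a detection."""
    return -0.5 * np.sum((state[:2] - measurement) ** 2) / sigma**2

def mh_step(state, measurement, step=0.5):
    """One Metropolis-Hastings move: perturb the state, accept with the usual ratio."""
    proposal = state + rng.normal(0.0, step, size=4)
    log_a = log_likelihood(proposal, measurement) - log_likelihood(state, measurement)
    return proposal if np.log(rng.random()) < log_a else state

# Track a vehicle moving at constant velocity; detections are noisy positions.
state = np.array([0.0, 0.0, 10.0, 0.0])      # starting estimate
truth = state.copy()
for _ in range(50):
    truth = F @ truth
    detection = truth[:2] + rng.normal(0.0, 0.5, size=2)
    state = predict(state)
    for _ in range(20):                      # short MCMC chain per frame
        state = mh_step(state, detection)
print("final estimate:", state[:2], "truth:", truth[:2])
```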
Abstract:
In this study, the mechanical properties of YBa2Cu3O7−x, obtained by the Bridgman technique, were examined using a Berkovich tip indenter on the basal plane (0 0 1). Intrinsic hardness was measured by nanoindentation tests and corrected using the Nix and Gao model for this material. Furthermore, Vickers hardness tests were performed, in order to determine the possible size effect on these measurements. The results showed an underestimation of the hardness value when the tests were performed with large loads. Moreover, the elastic modulus of the Bridgman samples was 128 ± 5 GPa. Different residual imprints were visualised by atomic force microscopy and a focused ion beam, in order to observe superficial and internal fracturing. Mechanical properties presented a considerable reduction at the interface. This effect could be attributed to internal stress generated during the texturing process. In order to corroborate this hypothesis, an observation using transmission electron microscopy was performed.
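For reference, the Nix and Gao correction mentioned above describes the indentation size effect through a characteristic length h*, so that the depth-dependent hardness H(h) extrapolates to the intrinsic (macroscopic) hardness H0:

\[
H(h) \;=\; H_{0}\,\sqrt{1 + \frac{h^{*}}{h}}
\]

Plotting H^2 against 1/h therefore gives a straight line whose intercept H0^2 yields the size-effect-free hardness; h is the indentation depth and h* is a length that depends on the material and the indenter geometry.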
Abstract:
Emission inventories are databases that aim to describe the polluting activities that occur across a certain geographic domain. Depending on the spatial scale, the availability of information varies, as do the assumptions applied, which strongly influence the inventory's quality, accuracy and representativeness. This study compared and contrasted two emission inventories describing the Greater Madrid Region (GMR) under an air quality simulation approach. The chosen inventories were the National Emissions Inventory (NEI) and the Regional Emissions Inventory of the Greater Madrid Region (REI). Both were used to feed air quality simulations with the CMAQ modelling system, and the results were compared with observations from the air quality monitoring network in the modelled domain. Through the application of statistical tools, the analysis of emissions at cell level and cell-expansion procedures, it was observed that the National Inventory showed better results for describing on-road traffic activities and agriculture (SNAP07 and SNAP10). The accurate description of activities, the good characterization of the vehicle fleet and the correct use of traffic emission factors were the main causes of such good agreement. On the other hand, the Regional Inventory provided better descriptions of non-industrial combustion (SNAP02) and industrial activities (SNAP03). It incorporated realistic emission factors and a reasonable fuel mix, and it drew upon local information sources to describe these activities, while NEI relied on surrogation and national datasets, which led to a poorer representation. Off-road transportation (SNAP08) was described similarly by both inventories, while the rest of the SNAP activities made only a marginal contribution to the overall emissions.
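The sketch below illustrates the kind of model-to-observation comparison described above: computing mean bias, RMSE and the Pearson correlation between simulated concentrations (e.g., CMAQ output at monitoring sites) and the observations from the air quality network. The arrays are made-up placeholders, not values from the study.

```python
# Minimal sketch (hypothetical data) of basic model-evaluation statistics.
import numpy as np

def evaluation_stats(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)                       # mean bias
    rmse = np.sqrt(np.mean((model - obs) ** 2))       # root mean square error
    r = np.corrcoef(model, obs)[0, 1]                 # Pearson correlation
    return {"bias": bias, "rmse": rmse, "r": r}

# Hourly NO2 concentrations (ug/m3) at one station, modeled vs observed (made up).
modeled  = [41, 55, 38, 60, 47, 52, 33, 45]
observed = [39, 58, 35, 65, 50, 49, 30, 48]
print(evaluation_stats(modeled, observed))
```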
Abstract:
Underpasses are common in modern railway lines; wildlife corridors and drainage conduits often fall into this category of partially buried structures. Their dynamic behavior has received far less attention than that of other structures such as bridges, but their large number makes their study an interesting challenge from the viewpoint of safety and cost savings. Here, we present a complete study of a culvert, including on-site measurements and numerical modeling. The studied structure belongs to the high-speed railway line linking Segovia and Valladolid in Spain, which was opened to traffic in 2004. The on-site measurements consisted of recording the dynamic response at selected points of the structure during the passage of high-speed trains at speeds between 200 and 300 km/h. They provide not only reference values suitable for model fitting, but also good insight into the main features of the dynamic behavior of this structure. Finite element techniques were used to model this dynamic behavior and its key features. Special attention is paid to vertical accelerations, whose values should be limited to avoid track instability according to the Eurocode. This study furthers our understanding of the dynamic response of railway underpasses to train loads.
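The sketch below illustrates the kind of vertical acceleration check mentioned above: low-pass filter a recorded acceleration time history and compare its peak with a limit. The 30 Hz cut-off and the 3.5 m/s2 limit for ballasted track are the values commonly associated with EN 1990 Annex A2; they are quoted here as assumptions rather than taken from the paper, and the record is synthetic.

```python
# Minimal sketch (assumed parameters): peak of the low-pass filtered vertical acceleration.
import numpy as np
from scipy.signal import butter, filtfilt

def peak_vertical_acceleration(acc, fs, cutoff_hz=30.0):
    """Return the peak of the low-pass filtered acceleration signal (m/s^2)."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return np.max(np.abs(filtfilt(b, a, acc)))

# Synthetic record: 5 s at 1 kHz with a 12 Hz structural response plus high-frequency noise.
fs = 1000.0
t = np.arange(0, 5, 1 / fs)
acc = 1.8 * np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
peak = peak_vertical_acceleration(acc, fs)
print(f"peak = {peak:.2f} m/s^2, assumed limit = 3.5 m/s^2, ok = {peak <= 3.5}")
```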
Abstract:
Corrosion of reinforcing steel in concrete due to chloride ingress is one of the main causes of the deterioration of reinforced concrete structures. The structures most affected by such corrosion are buildings in marine zones and structures exposed to de-icing salts, such as highways and bridges. The process is accompanied by an increase in the volume of the corrosion products at the rebar-concrete interface. Depending on the level of oxidation, iron can expand to as much as six times its original volume. This increase in volume exerts tensile stresses on the surrounding concrete, which result in cracking and spalling of the concrete cover if the concrete tensile strength is exceeded. The mechanism by which steel embedded in concrete corrodes in the presence of chloride is the local breakdown of the passive layer formed under the highly alkaline conditions of the concrete. It is assumed that corrosion initiates when a critical chloride content reaches the rebar surface. The mathematical formulation idealizes the corrosion sequence as a two-stage process: an initiation stage, during which chloride ions penetrate to the reinforcing steel surface and depassivate it, and a propagation stage, in which active corrosion takes place until cracking of the concrete cover has occurred. The aim of this research is to develop computer tools to evaluate the duration of the service life of reinforced concrete structures, considering both the initiation and propagation periods. Such tools must offer a friendly interface to facilitate their use by researchers even when their background is not in numerical simulation. For the evaluation of the initiation period, different tools have been developed. Program TavProbabilidade provides the means to carry out a probabilistic analysis of a chloride ingress model. Such a tool is necessary because of the lack of data and the general uncertainties associated with the phenomenon of chloride diffusion. It differs from the deterministic approach in that it computes not just a chloride profile at a certain age, but a range of chloride profiles, each with its probability of occurrence. Program TavProbabilidade_Fiabilidade carries out reliability analyses of the initiation period. It takes into account the critical value of the chloride concentration at the steel that causes breakdown of the passive layer and the beginning of the propagation stage. It differs from the deterministic analysis in that it does not predict whether corrosion is going to begin or not, but quantifies the probability of corrosion initiation. Program TavDif_1D was created to perform a one-dimensional deterministic analysis of the chloride diffusion process by the finite element method (FEM), numerically solving Fick's second law. Despite the different 1D FEM solvers already available, the decision to create a new code (TavDif_1D) was taken because of the need for a solver with a friendly pre- and post-processing interface, according to the needs of IETCC. An innovative tool was also developed, with a systematic method devised to compare the ability of the different 1D models to predict the actual evolution of chloride ingress based on experimental measurements, and also to quantify the degree of agreement of the models with each other. For the evaluation of the entire service life of the structure, a computer program has been developed using the finite element method to couple both service-life periods: initiation and propagation.
The program for 2D (TavDif_2D) allows the complementary use of two external programs within a single friendly interface: GMSH, a finite element mesh generator and post-processing viewer, and OOFEM, a finite element solver. TavDif_2D is responsible for deciding, at each time step, when and where to start applying the boundary conditions of the fracture mechanics module, as a function of the chloride concentration and the corrosion parameters (Icorr, etc.). It is also responsible for checking the presence and degree of fracture in each element, in order to pass on the variation of the diffusion coefficient with the crack width. The advantages of the FEM with the interface provided by the tool are: the flexibility to input data such as material properties and boundary conditions as time-dependent functions; the flexibility to predict the chloride concentration profile for different geometries; and the possibility of coupling chloride diffusion (initiation stage) with chemical and mechanical behavior (propagation stage). The OOFEM code had to be modified to accept temperature, humidity and time-dependent values for the material properties, which is necessary to adequately describe the environmental variations. A 3D simulation has been performed to reproduce the behavior of a beam under both the external load and the internal load caused by the corrosion products, using embedded-fracture elements, in order to plot the deflection of the central region of the beam versus the external load and compare it with the experimental data.
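For reference, the one-dimensional chloride ingress model that TavDif_1D solves numerically is Fick's second law; with a constant surface concentration C_s and a constant diffusion coefficient D it has the classical closed-form profile often used to verify such solvers:

\[
\frac{\partial C}{\partial t} \;=\; D\,\frac{\partial^{2} C}{\partial x^{2}},
\qquad
C(x,t) \;=\; C_{s}\left[1 - \operatorname{erf}\!\left(\frac{x}{2\sqrt{D\,t}}\right)\right]
\]

Under this idealization, the initiation period ends when C(c, t) at the cover depth c reaches the critical chloride content mentioned above; in the tools described here, D, C_s and the boundary conditions may be time-dependent, which is why a numerical solver is used.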
Abstract:
A system for the estimation of unknown rectangular room dimensions based on two radio transceivers, both capable of full-duplex operation, is presented. The approach is based on CIR measurements taken at the same place where the signal is transmitted (generated), commonly known as the self-to-self CIR. Another novelty is the receiver antenna design, which consists of eight sectorized antennas with 45° aperture in the horizontal plane, whose total coverage corresponds to the isotropic one. The dimensions of a rectangular room are reconstructed directly from the radio impulse responses by extracting information on features such as the round-trip time, received signal strength and reverberation time. Using a radar approach, the positions of the walls and corners are estimated. Additionally, the absorption coefficient of the test environment is analysed and a typical coefficient for a furnished office room is proposed; its accuracy is confirmed by the results of the volume estimation. Tests using measured data were performed, and the simulation results confirm the feasibility of the approach.
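As a simple illustration of how the round-trip-time feature mentioned above maps to geometry (this is not the paper's estimation algorithm), the sketch below converts first-order wall echoes extracted from a self-to-self CIR into wall distances and combines opposite-wall distances into rectangular room dimensions; the example delays are made up.

```python
# Minimal sketch (illustrative delays): round-trip times of wall echoes to room dimensions.
C = 299_792_458.0                         # speed of light [m/s]

def wall_distance(round_trip_time_s):
    """First-order reflection: the wave travels to the wall and back."""
    return C * round_trip_time_s / 2.0

def room_dimensions(rtt_front, rtt_back, rtt_left, rtt_right):
    """Width and length of a rectangular room from the four principal wall echoes."""
    return (wall_distance(rtt_left) + wall_distance(rtt_right),
            wall_distance(rtt_front) + wall_distance(rtt_back))

# Example: echoes at 20, 13.3, 16.7 and 26.7 ns correspond roughly to a 6.5 m x 5 m room.
print(room_dimensions(20e-9, 13.3e-9, 16.7e-9, 26.7e-9))
```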
Abstract:
The main objective of this project is the study of the NetdB system through the measurement of different architectural acoustics parameters. The project is structured in five clearly differentiated parts, described below. The first part, theoretical fundamentals, focuses on explaining the different parameters measured and their relationship with acoustics, both theoretically and practically. This is done by defining the basic theoretical concepts and developing only the strictly necessary mathematics. A series of definitions is also provided to help follow the project as a whole. The second part presents the measurement equipment used in the different practical exercises, its most important characteristics and some of the relationships between the equipment and the exercises to be carried out. The NetdB system, being the main object of study of this project, has its own section explaining its characteristics, the equipment connections and how to configure it with the dBBati software. In the part devoted to the measurements, two practical exercises are carried out, based on the measurement of two acoustic parameters in accordance with national or international standards: the reverberation time is measured according to UNE-EN ISO 3382-2:2008 and the sound insulation between rooms according to UNE-EN ISO 140-4:1999. The result of each exercise is independent, and on their basis a possible lab script is proposed on which future students of the School can work. An example script for the exercises carried out is then presented. From the compilation of all the data obtained in each measurement, a series of conclusions is drawn about the behaviour of sound in the rooms studied, through the parameters measured in them. It is also discussed whether the system used in this project is suitable for obtaining standardized measurements, and a number of extensions applicable to this project that could complement the study of this system are suggested. Finally, an annex is devoted to the OneNote software, in relation to its use for academic purposes. ABSTRACT. The main objective pursued by this project is the study of the NetdB system through the measurement of different architectural acoustics parameters. The project is structured in five parts, which are explained in the following paragraphs. The first part, theoretical fundamentals, focuses on explaining the different parameters measured and their relationship with acoustics, in both a theoretical and a practical way. This is done by defining the basic theoretical concepts and the necessary mathematical development. A series of definitions is also offered to help follow the project as a whole. The second part presents the measurement equipment used in the different practical exercises, its main characteristics, and some of the connections between this equipment and the exercises to be carried out. The NetdB system, being the main object of study of this project, has its own section where its characteristics, the equipment connections and its configuration with the dBBati software are explained. In the part focused on the measurements, two practical exercises are carried out, based on the measurement of two parameters following national or international standards.
The reverberation time is measured according to UNE-EN ISO 3382-2:2008 and the sound insulation between rooms according to UNE-EN ISO 140-4:1999. The result of each exercise is independent, and on their basis a possible lab script is proposed on which future students of the School can work. An example script for the exercises carried out is then presented. From the compilation of all the data obtained in each measurement, a series of conclusions is drawn about the behaviour of sound in the rooms studied, through the parameters measured in them. It is also discussed whether the system used is suitable for obtaining standardized measurements, and a number of extensions to this project that could complement this study are suggested. Finally, an annex is devoted to the OneNote software and its use for academic purposes.
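As an illustration of the reverberation time measurement carried out in the first exercise, the sketch below applies Schroeder backward integration to a synthetic room impulse response and evaluates T20 over the -5 dB to -25 dB decay range, in the spirit of UNE-EN ISO 3382-2; the synthetic impulse response and the simplified regression bounds are assumptions.

```python
# Minimal sketch: Schroeder energy decay curve and a T20 reverberation time estimate.
import numpy as np

def t20_from_impulse_response(h, fs):
    """Return the reverberation time T20 [s] from impulse response h sampled at fs."""
    edc = np.cumsum(h[::-1] ** 2)[::-1]                   # Schroeder energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    t = np.arange(len(h)) / fs
    mask = (edc_db <= -5) & (edc_db >= -25)               # T20 evaluation range
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)       # decay rate in dB per second
    return -60.0 / slope                                  # extrapolate to a 60 dB decay

# Synthetic exponentially decaying noise with a known decay rate (~0.8 s).
fs, rt_true = 8000, 0.8
t = np.arange(0, 1.5, 1 / fs)
h = np.random.default_rng(0).normal(size=t.size) * 10 ** (-3 * t / rt_true)
print(f"estimated T20 = {t20_from_impulse_response(h, fs):.2f} s (target {rt_true} s)")
```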
Abstract:
The objective of this Final Degree Project is to study the acoustic characteristics of the Tomás y Valiente theatre in Fuenlabrada by means of measurements, supported by the results obtained from the simulation of the sound field. The venue is mainly intended for theatrical performances and is also used as a multipurpose hall, so its acoustic behaviour and its suitability for the variety of uses it hosts will be analyzed. To this end, experimental in situ measurements of all the parameters representative of an acoustic enclosure are carried out, and the same parameters are predicted by simulating the hall with the EASE acoustic simulation software, so that the acoustic characteristics obtained by both processes can be compared and improvements to the venue proposed in order to meet the optimal acoustic parameters required of the hall. First, the main theoretical concepts to be taken into account in the field of acoustics are presented, detailing the different theories of study and the basic principles of psychoacoustics. In addition, the criteria used in the design of acoustic enclosures are defined, together with the parameters that define quality according to the intended use, based on a study of the usual use of the hall and on the optimal values of the acoustic parameters for halls of similar size and use. Next, the methodology applied to carry out the in situ measurements is described, obtaining results for the representative acoustic parameters of the venue for the analysis of its acoustic characteristics and their subsequent comparison with the predictions of the computer model simulation. The process followed to design the acoustic model from the theatre drawings and the measurements taken in the venue, for the simulation of the acoustic parameters and characteristics, is also shown. Finally, the conclusions drawn from the study are presented, together with the proposed improvements to the venue to meet the optimal acoustic parameters that may be required of this hall, including a budget showing the economic viability of the project. ABSTRACT. The goal of this final project is to perform an acoustic study and simulation of the Tomás y Valiente theatre in Fuenlabrada. The premises are mainly used for stage plays, but also as a multipurpose space, so its acoustic behaviour and suitability for the expected uses will be analyzed. To accomplish this task, experimental measurements of all the representative parameters of an acoustic hall will be taken on site. Those parameters will also be predicted through simulation with the EASE software, so that the acoustic characteristics obtained using both methods can be compared and improvements proposed in order to achieve the best acoustic parameters the hall can have. First of all, the main theoretical concepts to consider in the field of acoustics are presented, detailing the basic principles of psychoacoustics, together with the criteria used in the design of acoustic enclosures and the parameters that define quality according to the intended use of the enclosure, based on research on the most common usage of the space and on optimal values from rooms of similar size and use.
Experimental measurements of the representative acoustic parameters of the enclosure are taken in order to analyze its acoustic characteristics and later compare them with the parameters predicted by the computer model simulation. The process followed to build the acoustic model of the theatre is also described: the model is designed from the building blueprints and from measurements taken in the venue, and is then used to simulate the acoustic parameters and characteristics. Finally, the conclusions drawn from the study are presented, together with the proposed improvements to the venue to meet the optimal acoustic parameters that may be required of this hall, including a budget that shows the economic viability of the project.