54 results for Extreme waves
at Universidad Politécnica de Madrid
Abstract:
The various theoretical models for storm wave characterization focus on determining the significant wave height at the storm peak, the mean period and, usually assuming a triangular storm shape, the storm duration. In some cases, the main direction is also considered. However, the definition of the whole storm history, including the variation of the main random variables over the storm cycle, is not taken into consideration. The representativeness of the proposed storm models, analysed in a recent study using an empirical time-dependent maximum energy flux function, shows that the behaviour of the different storm models depends strongly on the climatic characteristics of the project area. Moreover, no theoretical model adequately reproduces the storm-history evolution of sea states with important swell components. To overcome this shortcoming, several theoretical storm shapes are investigated, building on the three best theoretical storm models: the Equivalent Magnitude Storm (EMS), the Equivalent Number of Waves Storm (ENWS) and the Equivalent Duration Storm (EDS) models. To analyse the representativeness of the new storm shape, the aforementioned maximum energy flux formulation and a wave overtopping discharge structure function are used. With the empirical energy flux formulation, the correctness of the different approaches is assessed through the progressive hydraulic stability loss of the main armour layer caused by real and theoretical storms. For the overtopping structure equation, the total discharge volume is considered. In all cases, the results highlight the greater representativeness of the triangular EMS model for sea waves and of the trapezoidal (non-parallel sides) EMS model for waves with a higher degree of wave development.
Taking into account the growth of offshore and shallow-water wind turbines, maritime transport and deep vertical breakwaters, the maximum wave height of the whole storm history, and that of each sea state belonging to its cycle, is also considered. The procedure uses the information usually available for extreme wave characterization. Extrapolations of the maximum wave height of the selected storms have also been considered. The fourth-order statistics of the sea states belonging to the real and theoretical storms have been estimated to complete the statistical analysis of individual wave heights.
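The equivalent-storm idea above lends itself to a short sketch. The Python fragment below is illustrative only: the function names, threshold and peak values are assumptions, not the study's data. It builds a triangular EMS-style significant-wave-height history and computes its "magnitude", i.e. the area between the Hs curve and the storm threshold, which the EMS model matches to that of the real storm.

```python
import numpy as np

def triangular_ems(hs_peak, hs_threshold, duration_h, dt_h=1.0):
    """Triangular Equivalent Magnitude Storm history: Hs grows
    linearly from the threshold to the peak and decays back over
    the storm duration."""
    t = np.arange(0.0, duration_h + dt_h, dt_h)
    half = duration_h / 2.0
    # piecewise-linear rise and decay of significant wave height
    hs = np.where(t <= half,
                  hs_threshold + (hs_peak - hs_threshold) * t / half,
                  hs_peak - (hs_peak - hs_threshold) * (t - half) / half)
    return t, hs

def storm_magnitude(t, hs, hs_threshold):
    """Area between the Hs history and the threshold (m*h),
    via the trapezoidal rule."""
    excess = np.clip(hs - hs_threshold, 0.0, None)
    return float(np.sum((excess[:-1] + excess[1:]) * np.diff(t)) / 2.0)

t, hs = triangular_ems(hs_peak=6.0, hs_threshold=2.0, duration_h=48.0)
# triangle area = 0.5 * base * height = 0.5 * 48 * 4 = 96 m*h
print(round(storm_magnitude(t, hs, 2.0), 1))  # -> 96.0
```

A trapezoidal (non-parallel sides) shape would only change the piecewise definition of `hs`; the magnitude-matching logic stays the same.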
Abstract:
Leonhardt (2009) demonstrated that the 2D Maxwell Fish Eye lens (MFE) can perfectly focus 2D Helmholtz waves of arbitrary frequency, i.e., it can perfectly transport an outward (monopole) 2D Helmholtz wave field, generated by a point source, towards a "perfect point drain" located at the corresponding image point. Moreover, a prototype with λ/5 super-resolution (SR) for one microwave frequency has been manufactured and tested (Ma et al., 2010). Although this prototype was loaded with an impedance different from the "perfect point drain", it showed the super-resolution property. However, neither software simulations nor experimental measurements over a broad band of frequencies have yet been reported. Here we present steady-state simulations for two cases: with the perfect drain suggested by Leonhardt, and without the perfect drain, as in the prototype. All the simulations were done using a device equivalent to the MFE, called the Spherical Geodesic Waveguide (SGW). The results show super-resolution up to λ/3000 for the system loaded with the perfect drain, and up to λ/500 for an imperfect load. In both cases, super-resolution occurs only for a discrete set of frequencies; outside these frequencies, the SGW shows no super-resolution in the analysis carried out.
Abstract:
An image-processing observational technique for the stereoscopic reconstruction of the wave form of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired wave form is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image-matching strategy. The weak statistical constraint is thoroughly evaluated in combination with the other elements presented, reconstructing and enforcing constraints on experimental stereo data and demonstrating the improvement in the estimation of the observed ocean surface.
Abstract:
Stereo video techniques are effective for estimating the space-time wave dynamics over an area of the ocean. Indeed, a stereo camera view allows retrieval of both spatial and temporal data whose statistical content is richer than that of time series data retrieved from point wave probes. Classical epipolar techniques and modern variational methods are reviewed to reconstruct the sea surface from the stereo pairs sequentially in time. Current improvements of the variational methods are presented.
Abstract:
Computational Fluid Dynamics tools have already become a valuable instrument for naval architects during the ship design process, thanks to their accuracy and the available computer power. Unfortunately, the development of RANSE codes, generally used when viscous effects play a major role in the flow, has not reached a mature stage, with the accuracy of the turbulence models and the free-surface representation being the most important sources of uncertainty. Another level of uncertainty is added when the simulations are carried out for unsteady flows, such as those generally studied in seakeeping and maneuvering analyses, where URANS equation solvers are used. The present work shows the applicability and the benefits derived from the use of new approaches for turbulence modeling (Detached Eddy Simulation) and free-surface representation (Level Set) in the URANS equations solver CFDSHIP-Iowa. Compared to URANS, DES is expected to predict much broader frequency contents and to behave better in flows where boundary-layer separation plays a major role. Level Set methods are able to capture very complex free-surface geometries, including breaking and overturning waves. The performance of these improvements is tested in a set of fairly complex flows, generated by a Wigley hull in pure drift motion, with drift angles ranging from 10 to 60 degrees and at several Froude numbers to study the impact of their variation. Quantitative verification and validation are performed on the obtained results to guarantee their accuracy. The results show the capability of the CFDSHIP-Iowa code to carry out time-accurate simulations of the complex flows of extreme unsteady ship maneuvers. The Level Set method is able to capture very complex free-surface geometries, and the use of DES in unsteady simulations greatly improves the results obtained. Vortical structures and instabilities as a function of the drift angle and Fr are qualitatively identified.
Overall analysis of the flow pattern shows a strong correlation between the vortical structures and the free-surface wave pattern. Karman-like vortex shedding is identified, and the scaled St agrees well with the universal St value. Tip vortices are identified and the associated helical instabilities are analyzed. St scaled with the hull length decreases with increasing distance along the vortex core (x), which is similar to results from other simulations. However, St scaled with the distance along the vortex cores shows strong oscillations, compared to the almost constant values of those previous simulations. The difference may be caused by the effect of the free surface, grid resolution, and the interaction between the tip vortex and other vortical structures, and needs further investigation. This study is exploratory in the sense that finer grids are desirable and experimental data is lacking for large α, especially for the local flow. More recently, the high-performance computational capability of CFDSHIP-Iowa V4 has been improved such that large-scale computations are possible. DES for DTMB 5415 with bilge keels at α = 20° was conducted using three grids with 10M, 48M and 250M points. DES for the flow around KVLCC2 at α = 30° was analyzed using a 13M grid and compared with previously reported DES results on a 1.6M grid. Both studies are consistent with the conclusions on grid resolution drawn herein, since the dominant frequencies for shear-layer, Karman-like, horse-shoe and helical instabilities show only marginal variation under grid refinement. The penalties of using coarse grids are smaller frequency amplitudes and less resolved TKE. Therefore, finer grids should be used to improve V&V by resolving most of the active turbulent scales for all the different Fr and α, which hopefully can be compared with additional EFD data for large α when they become available.
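As a small worked example of the Strouhal scaling mentioned above (all numbers here are hypothetical illustrations, not the study's results): St is simply the dominant shedding frequency made non-dimensional with a length scale and the inflow speed, and for a hull at drift the projected width L·sin(α) is a natural choice of length scale.

```python
import math

def strouhal(freq_hz, length_m, speed_m_s):
    """Non-dimensional shedding frequency St = f * L / U."""
    return freq_hz * length_m / speed_m_s

# Illustrative numbers only: a hull of length L at drift angle alpha
# sheds Karman-like vortices at frequency f_shed; scaling with the
# projected width L*sin(alpha) is what brings St near the universal
# bluff-body value of roughly 0.2.
L, U = 3.0, 1.0                      # hull length (m), speed (m/s)
alpha = math.radians(30.0)           # drift angle
f_shed = 0.13                        # hypothetical shedding frequency, Hz
st_proj = strouhal(f_shed, L * math.sin(alpha), U)
print(round(st_proj, 3))  # -> 0.195
```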
Abstract:
Radiative shock waves play a pivotal role in the transport of energy into the stellar medium. This fact has led to many efforts to scale the astrophysical phenomena to accessible laboratory conditions, and their study has been highlighted as an area requiring further experimental investigation. Low-density material with high atomic mass is suitable for achieving the radiative regime, and, therefore, low-density xenon gas is commonly used as the medium in which the radiative shock propagates. In this work the average ionization and the thermodynamic regimes of xenon plasmas are determined as functions of the matter density and temperature over a wide range of plasma conditions. The results obtained will be applied to characterize blast waves launched in xenon clusters.
Abstract:
We report on the fabrication of aluminum gallium nitride (AlGaN) Schottky diodes for extreme ultraviolet (EUV) detection. AlGaN layers were grown on silicon wafers by molecular beam epitaxy with the conventional and inverted Schottky structures, where the undoped active layer was grown before or after the n-doped layer, respectively. Different current mechanisms were observed in the two structures. The inverted Schottky diode was designed for optimized backside sensitivity in hybrid imagers. A cut-off wavelength of 280 nm was observed, with three orders of magnitude of intrinsic rejection of visible radiation. Furthermore, the inverted structure was characterized using an EUV source based on a helium discharge, and an open electrode design was used to improve the sensitivity. The characteristic He I and He II emission lines were observed at wavelengths of 58.4 nm and 30.4 nm, respectively, proving the feasibility of using the inverted layer stack for EUV detection.
Abstract:
We develop a novel remote sensing technique for the observation of waves on the ocean surface. Our method infers the 3-D waveform and radiance of oceanic sea states via a variational stereo imagery formulation. In this setting, the shape and radiance of the wave surface are given by minimizers of a composite energy functional that combines a photometric matching term with regularization terms involving the smoothness of the unknowns. The desired ocean surface shape and radiance are the solution of a system of coupled partial differential equations derived from the optimality conditions of the energy functional. The proposed method is naturally extended to study the spatiotemporal dynamics of ocean waves and is applied to three sets of stereo video data. Statistical and spectral analyses are carried out. Our results provide evidence that the observed omnidirectional wavenumber spectrum S(k) decays as k^-2.5, in agreement with Zakharov's theory (1999). Furthermore, the 3-D spectrum of the reconstructed wave surface is exploited to estimate wave dispersion and currents.
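The reported k^-2.5 decay of the omnidirectional spectrum can be estimated from a reconstructed spectrum by a log-log least-squares fit. A minimal sketch on synthetic data (the function name and the wavenumber grid are assumptions for illustration, not the authors' code):

```python
import numpy as np

def spectral_slope(k, S):
    """Least-squares slope of log S versus log k, i.e. the
    exponent p in a power-law fit S(k) ~ k^p."""
    coeffs = np.polyfit(np.log(k), np.log(S), 1)
    return coeffs[0]

# Synthetic check: a spectrum decaying exactly as k^-2.5 should
# return an exponent of -2.5.
k = np.linspace(0.1, 10.0, 200)   # wavenumbers (rad/m), illustrative
S = k ** -2.5
print(round(spectral_slope(k, S), 2))  # -> -2.5
```

On real reconstructions one would restrict the fit to the inertial range of k, away from noise-dominated high wavenumbers.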
Abstract:
The purpose of this paper is to provide information on the behaviour of steel prestressing wires under likely conditions that could be expected during a fire or impact loads. Four loadings were investigated: a) the influence of strain rate – from 10⁻³ to 600 s⁻¹ – at room temperature, b) the influence of temperature – from 24 to 600 °C – at low strain rate, c) the influence of the joint effect of strain rate and temperature, and d) damage after three plausible fire scenarios. At room temperature it was found that using “static” values is a safe option. At high temperatures our results are in agreement with design codes. Regarding the joint effect of temperature and strain rate, mechanical properties decrease with increasing temperature, although for a given temperature, yield stress and tensile strength increase with strain rate. The data provided can be used profitably to model the mechanical behaviour of steel wires under different scenarios.
Abstract:
The prediction of train-induced vibration levels in structures close to railway tracks, before track construction starts, is important in order to avoid having to implement costly mitigation measures afterwards. The models used require an accurate characterization of the propagation medium, i.e. the soil layers. To this end, the spectral analysis of surface waves (SASW) method has been chosen among the available active surface-wave techniques. A modal sledgehammer has been used as the dynamic source. The generated vibrations have been measured at known offsets by means of several accelerometers. Many parameters are involved in estimating the experimental dispersion curve and, later on, the thicknesses and propagation velocities of the different layers. Tests have been carried out at the Segovia railway station, whose main building covers some of the railway tracks, so vibration problems in the building should be avoided. The paper details these tests as well as the influence of several parameters on the estimated soil profile.
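The core of a SASW dispersion estimate is recovering a frequency-dependent phase velocity from the phase difference between two receivers at a known spacing. A minimal sketch with purely illustrative numbers (the function name and values are assumptions, not the paper's data):

```python
import numpy as np

def phase_velocity(f_hz, dphi_rad, dx_m):
    """Surface-wave phase velocity from the unwrapped phase
    difference dphi between two receivers spaced dx apart:
    c(f) = 2*pi*f*dx / dphi."""
    return 2.0 * np.pi * f_hz * dx_m / dphi_rad

# Synthetic check: a 20 Hz wave travelling at 200 m/s over a 2 m
# receiver spacing accumulates a phase lag dphi = 2*pi*f*dx/c,
# so inverting the relation must return 200 m/s.
f, c_true, dx = 20.0, 200.0, 2.0
dphi = 2.0 * np.pi * f * dx / c_true
print(phase_velocity(f, dphi, dx))  # -> 200.0
```

Repeating this over the frequency band excited by the hammer blow yields the experimental dispersion curve c(f), which is then inverted for layer thicknesses and velocities.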
Abstract:
Extreme weather and climate events have received increased attention in the last few years, due to the often large losses to agricultural business and the exponentially increasing costs associated with them and with insurance planning. This increased attention raises the question as to whether extreme weather and climate events are truly increasing, whether this is only a perceived increase exacerbated by enhanced media coverage, or both. There are a number of ways extreme climate events can be defined, such as extreme daily temperatures, extreme daily rainfall amounts, and large areas experiencing unusually warm monthly temperatures, among others. In this study, we focus on frost and extreme-heat events, measured respectively as the number of days per month below 0 °C and the number of days per month with a daily maximum above 30 °C. We have studied the trends in these extreme events, applying a Fast Fourier Transform to the series to clarify the tendency. The lack of long-term climate data suitable for the analysis of extremes, including high temporal and spatial resolution observations of temperature, is the single biggest obstacle to quantifying whether extreme events have changed over the twentieth century. However, the series have been grouped in different ways: the longest series taken independently, by province, by main watershed and by altitude. In addition, synthetic series generated by Luna and Balairón (AEMet) were also analyzed. The results obtained from the different data poolings are discussed, concluding that assessing trends in extreme events is difficult and that the trends show high regional variation.
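One hedged reading of the FFT step above is as a low-pass smoother that isolates the slow tendency of a monthly count series from seasonal and noisy variability. A sketch on synthetic data (the series, seed and cut-off are assumptions for illustration, not the study's observations):

```python
import numpy as np

def fft_lowpass(series, keep):
    """Keep only the 'keep' lowest-frequency Fourier components of a
    real series; the result retains slow tendencies and drops
    seasonal cycles and high-frequency noise."""
    spec = np.fft.rfft(series)
    spec[keep:] = 0.0
    return np.fft.irfft(spec, n=len(series))

# Synthetic check: a declining trend plus an annual cycle plus noise;
# the low-pass version should vary far less than the raw series.
rng = np.random.default_rng(0)
n = 240                                  # 20 years of monthly counts
t = np.arange(n)
series = (10.0 - 0.02 * t                # slow decline in frost days
          + 3.0 * np.sin(2 * np.pi * t / 12)   # annual cycle
          + rng.normal(0.0, 1.0, n))           # observational noise
smooth = fft_lowpass(series, keep=3)
print(len(smooth), np.std(smooth) < np.std(series))
```

Note that a pure linear trend is not periodic, so some leakage at the series ends is expected; the sketch only illustrates the separation of time scales.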
Abstract:
The Universidad Politécnica de Madrid (UPM) includes schools and faculties devoted to engineering, architecture and computer science degrees, which are now undergoing a rapid metamorphosis under the EEES Bologna Plan into degree, master and doctorate structures. They are focused on action involving machines, constructions and enterprises, which are subject to risks created by machines, humans and the environment. These risks appear in actions such as service loads, wind, snow, waves, flows, earthquakes, forces and effects in machines, vehicle behaviour, chemical effects, and other environmental factors, including effects on crops, cattle and wildlife, forests, and varied essential economic and social disturbances. In this session, the authors place the emphasis on risks of natural origin, such as hail, wind, snow or waves, which are not exactly known a priori but are often described with expected statistical distributions giving extreme values for convenient return periods. These distributions are known from measurements over time, the statistics of extremes, and models of hazard scenarios and of the responses of man-made constructions or devices. In each engineering field, theories were built about hazard scenarios and how to cover the important risks. Engineers must ensure that the systems they handle, such as vehicles, machines, firms, agricultural land or forests, achieve production with enough safety for persons and with decent economic results in spite of the risks. For that, risks must be considered in planning, realization and operation, and safety margins must be adopted, but at a reasonable cost. That is, a small level of risk will often remain, due to cost limitations or to rare hazards, and it may be covered by insurance in cases such as transport by car, ship or aircraft, hail in agriculture, or fire in houses or forests.
These and other decisions about quality, safety for people or business financial risks are sometimes addressed with Decision Theory models, often using tools from Statistics or Operational Research. The authors have carried out, and are continuing, field surveys on how risk is considered in the UPM curricula, making a deep analysis of those curricula in view of the new degree structures of the EEES Bologna Plan, and they have considered the risk frameworks offered by diverse schools of decision theory. This gives a picture of the needs and uses, and yields recommendations for improving the teaching of risk, which may include special subjects oriented to each career, school or faculty, to be recommended for inclusion in the curricula, with an elaboration and presentation format based on a multi-criteria decision model.
Abstract:
We report synchronization of networked excitable nodes embedded in a metric space, where the connectivity properties are mostly determined by the distance between units. Such a highly clustered structure, combined with the lack of long-range connections, prevents full synchronization and instead yields the emergence of synchronization waves. We show that this regime is optimal for information transmission through the system, as it enhances the options of reconstructing the topology from the dynamics. Measurements of topological and functional centralities reveal that the wave-synchronization state allows detection of the most structurally relevant nodes from a single observation of the dynamics, without any a priori information on the model equations ruling the evolution of the ensemble.
Abstract:
One important task in the design of an antenna is to carry out an analysis to find the antenna characteristics that best fulfill the specifications fixed by the application. After that, a prototype is manufactured, and the next stage in the design process is to check whether the radiation pattern differs from the designed one. Besides the radiation pattern, other radiation parameters such as directivity, gain, impedance, beamwidth, efficiency and polarization must also be evaluated. For this purpose, accurate antenna measurement techniques are needed in order to know exactly the actual electromagnetic behavior of the antenna under test. For this reason, most measurements are performed in anechoic chambers: closed, normally shielded areas covered with electromagnetic absorbing material, which simulate free-space propagation conditions. Moreover, these facilities can be used regardless of the weather conditions and allow measurements free from interference. Despite all the advantages of anechoic chambers, the results obtained from both far-field and near-field measurements are inevitably affected by errors. Thus, the main objective of this Thesis is to propose algorithms to improve the quality of the results obtained in antenna measurements by using post-processing techniques, without requiring additional measurements. First, a thorough review of the state of the art has been carried out in order to give a general view of the possibilities for characterizing or reducing the effects of errors in antenna measurements. Then, new methods to reduce the unwanted effects of four of the most common errors in antenna measurements are described and validated both theoretically and numerically. The basis of all of them is the same: to perform a transformation from the measurement surface to another domain where there is enough information to easily remove the contribution of the errors.
The four errors analyzed are noise, reflections, truncation errors and leakage, and the tools used to suppress them are mainly source reconstruction techniques, spatial and modal filtering, and iterative algorithms to extrapolate functions. The main idea of all the methods is therefore to modify the classical near-field-to-far-field transformations by including additional steps with which the errors can be greatly suppressed. Moreover, the proposed methods are not computationally complex and, because they are applied in post-processing, additional measurements are not required. Noise is the most widely studied error in this Thesis; a total of three alternatives are proposed to filter out an important noise contribution before obtaining the far-field pattern. The first is based on modal filtering. The second alternative uses a source reconstruction technique to obtain the extreme near field, where a spatial filtering can be applied. The last is to back-propagate the measured field to a surface with the same geometry as the measurement surface but closer to the AUT, and then also apply a spatial filtering. All the alternatives are analyzed in the three most common near-field systems, including comprehensive statistical noise analyses in order to deduce the signal-to-noise ratio improvement achieved in each case. The method to suppress reflections in antenna measurements is also based on a source reconstruction technique; the main idea is to reconstruct the field over a surface larger than the antenna aperture in order to identify and later suppress the virtual sources related to the reflected waves. The truncation error present in the results obtained from planar, cylindrical and partial spherical near-field measurements is the third error analyzed in this Thesis.
The method to reduce this error is based on an iterative algorithm that extrapolates the reliable region of the far-field pattern from the knowledge of the field distribution on the AUT plane. The proper termination point of this iterative algorithm, as well as other critical aspects of the method, are also studied. The last part of this work is dedicated to the detection and suppression of the two most common leakage sources in antenna measurements. A first method estimates the leakage bias constant added by the receiver's quadrature detector to every near-field datum and then suppresses its effect on the far-field pattern. The second method can be divided into two parts: the first finds the position of the faulty component that radiates or receives unwanted radiation, easing its identification within the measurement environment and its later substitution; the second is able to computationally remove the leakage effect without requiring the substitution of the faulty component.
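A minimal sketch of the modal-filtering idea for noise suppression described above (a 1-D planar cut is assumed for simplicity; the function name and all parameters are illustrative): the sampled near field is transformed to its plane-wave spectrum, the modes outside the visible region |kx| ≤ k0, where a finite antenna cannot radiate to the far field, are nulled, and the field is transformed back.

```python
import numpy as np

def modal_filter(field, dx, k0):
    """Null the plane-wave modes of a 1-D sampled near-field cut
    that lie outside the visible region |kx| <= k0, where only
    noise can contribute, then transform back."""
    n = len(field)
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)  # spectral grid
    spec = np.fft.fft(field)
    spec[np.abs(kx) > k0] = 0.0                 # modal filtering
    return np.fft.ifft(spec)

# Synthetic check: one propagating mode plus complex white noise;
# the filtered field should be closer to the clean field.
rng = np.random.default_rng(1)
lam = 0.1                       # wavelength (m), illustrative
k0 = 2 * np.pi / lam
dx = lam / 4                    # quarter-wavelength sampling
x = np.arange(256) * dx
clean = np.exp(1j * 0.5 * k0 * x)   # single mode inside |kx| <= k0
noisy = clean + 0.3 * (rng.normal(size=256) + 1j * rng.normal(size=256))
filtered = modal_filter(noisy, dx, k0)
err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(filtered - clean)
print(err_after < err_before)   # noise outside the visible region removed
```

Because the signal lives entirely inside the visible region while the noise is spread over the whole spectral band, the filter removes noise power without touching the signal, which is the essence of the SNR improvement claimed for this family of methods.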
Abstract:
We investigate the excitation and propagation of acoustic waves in polycrystalline aluminum nitride films along the directions parallel and normal to the c-axis. Longitudinal and transverse propagations are assessed through the frequency response of surface acoustic wave and bulk acoustic wave devices fabricated on films of different crystal qualities. The crystalline properties significantly affect the electromechanical coupling factors and acoustic properties of the piezoelectric layers. The presence of misoriented grains produces an overall decrease of the piezoelectric activity, degrading more severely the excitation and propagation of waves traveling transversally to the c-axis. It is suggested that the presence of such crystalline defects in c-axis-oriented films reduces the mechanical coherence between grains and hinders the transverse deformation of the film when the electric field is applied parallel to the surface.