829 results for NUMERICAL DATA


Relevance: 30.00%

Abstract:

Hybrid simulation is a technique that combines experimental and numerical testing and has been used over the last decades in the fields of aerospace, civil and mechanical engineering. During this time, most of the research has focused on developing algorithms and the necessary technology, including but not limited to error minimisation techniques, phase-lag compensation and faster hydraulic cylinders. However, one of the main shortcomings of hybrid simulation that has prevented its widespread use is the size of the numerical models and the effect that higher frequencies may have on the stability and accuracy of the simulation. The first chapter of this document provides an overview of the hybrid simulation method, the different hybrid simulation schemes and the corresponding time integration algorithms most commonly used in this field. The scope of this thesis is presented in more detail in chapter 2: a substructure algorithm, the Substep Force Feedback (Subfeed), is adapted in order to fulfil the necessary speed requirements. The effects of more complex models on the Subfeed are also studied in detail, and the improvements made are validated experimentally. Chapters 3 and 4 detail the methodologies used to accomplish these objectives, listing the different case studies and describing the hardware and software used to validate them experimentally. The third chapter also contains a brief introduction to a project, the DFG Subshake, whose data have been used as a starting point for the developments shown later in this thesis. The results obtained are presented in chapters 5 and 6: the first focuses on purely numerical simulations, while the second is oriented towards practical application, including experimental real-time hybrid simulation tests with large numerical models. Chapter 7 lists the hardware and software requirements that have to be met in order to apply the methods described in this document. The last chapter, chapter 8, presents the conclusions and achievements extracted from the results, namely: the adaptation of the hybrid simulation algorithm Subfeed for use with large numerical models, the study of the effect of high frequencies on the substructure algorithm, and experimental real-time hybrid simulation tests with vibrating subsystems using large numerical models and shake tables. A brief discussion of possible future research activities can be found in the concluding chapter.
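
Since the abstract turns on the split between a numerical substructure and a physically tested one, a minimal sketch of such a loop may help. The sketch below is not the Subfeed algorithm itself (whose details are the subject of chapter 2); it is a generic single-degree-of-freedom hybrid simulation step in Python, with the physical substructure emulated by a simple spring so the example is self-contained, and all numerical values assumed for illustration.

# Minimal sketch of a substructured hybrid-simulation loop (not Subfeed):
# the numerical substructure is advanced with central differences, and the
# physical substructure is emulated by a spring so the example runs as-is.
import numpy as np

m, c, k = 1.0, 0.5, 400.0      # numerical substructure (1 DOF), assumed values
k_phys = 100.0                 # stiffness of the emulated "physical" part
dt, n_steps = 1e-3, 5000

def physical_substructure(u):
    # Stand-in for the transfer system + specimen: returns the measured
    # restoring force for an imposed displacement u.
    return k_phys * u

u = np.zeros(n_steps + 1)
u[1] = 0.01                    # small initial perturbation
for i in range(1, n_steps):
    f_meas = physical_substructure(u[i])   # force fed back from the "test"
    # central-difference step for m*u'' + c*u' + k*u = -f_meas
    u[i + 1] = (dt**2 * (-f_meas - k * u[i])
                + 2 * m * u[i] - (m - c * dt / 2) * u[i - 1]) / (m + c * dt / 2)

In a real-time test, physical_substructure would be replaced by actuator commands and load-cell measurements, and each step would have to complete within the dt of wall-clock time, which is exactly where large numerical models become the bottleneck discussed in this thesis.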

Relevance: 30.00%

Abstract:

Numerical modelling and simulations are needed to develop and test specific analysis methods by providing test data before BIRDY is launched. This document describes the "satellite data simulator", a multi-sensor, multi-spectral satellite simulator produced especially for the BIRDY mission, which could also be used to analyse data from other satellite missions that provide energetic-particle data in the Solar System.
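
As an illustration of the kind of synthetic data such a simulator produces, the Python sketch below generates Poisson-distributed energetic-particle counts in energy channels for hypothetical sensors; the channel edges, power-law flux and geometric factors are illustrative assumptions, not BIRDY mission values.

# Toy sketch of simulator output: proton counts per energy channel, drawn
# from a power-law differential flux. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
edges_mev = np.array([10, 30, 100, 300])          # energy channel edges (MeV)

def channel_counts(flux_at_10mev, gamma, geom_factor, dt):
    # Integrate j(E) = j0 * (E/10)^(-gamma) over each channel, then draw
    # Poisson counts for the accumulation interval dt.
    e = np.linspace(edges_mev[:-1], edges_mev[1:], 100, axis=1)
    j = flux_at_10mev * (e / 10.0) ** (-gamma)
    expected = np.trapz(j, e, axis=1) * geom_factor * dt
    return rng.poisson(expected)

for sensor, g in [("sensor_A", 0.1), ("sensor_B", 0.5)]:  # hypothetical sensors
    print(sensor, channel_counts(1e3, 2.0, g, dt=10.0))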

Relevance: 30.00%

Abstract:

This dissertation contains four essays that all share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of financial asset volatility and correlations. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is a joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient and easy-to-use alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
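
A realized measure is the common ingredient of all four chapters; a minimal Python sketch of the simplest one, the realized variance built from intraday returns, is given below. The simulated 5-minute returns are a stand-in for the tick data used in the dissertation.

# Realized variance of one trading day: the sum of squared intraday returns.
# Prices here are simulated; in the chapters they come from transaction data.
import numpy as np

rng = np.random.default_rng(1)
n = 78                                   # 5-min returns in a 6.5 h session
true_daily_vol = 0.01
r = rng.normal(0.0, true_daily_vol / np.sqrt(n), size=n)  # intraday returns

rv = np.sum(r ** 2)                      # realized variance
print("realized vol:", np.sqrt(rv))      # consistent estimate of daily vol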

Relevance: 30.00%

Abstract:

We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible equivalent circuit representations of the generated rational functions. These methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input early-time transient response data to late times, while at the same time providing a means to both interpolate and extrapolate the available frequency-domain data. This hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data; the availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide a more stable rational function fitting process that utilizes a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such element-by-element rational function fitting is more advantageous than full-matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements of the network matrix, this approach provides improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
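
To make the fitting step concrete, the Python sketch below shows one simplified building block of rational function approximation: with a fixed set of stable poles, the residues and direct term of H(s) ≈ d + Σ r_k/(s − p_k) follow from a linear least-squares fit to sampled frequency-domain data. Production schemes (vector fitting, or the methodologies above) also relocate the poles iteratively and constrain residues to conjugate pairs for a real impulse response; the poles and response here are illustrative assumptions.

# Residue/direct-term fit for fixed poles via complex linear least squares.
import numpy as np

f = np.linspace(1e8, 1e10, 200)                 # sample frequencies (Hz)
s = 2j * np.pi * f
p_true = np.array([-1e9 + 5e9j, -1e9 - 5e9j])   # "measured" response to fit
h = 1e9 / (s - p_true[0]) + 1e9 / (s - p_true[1])

poles = np.array([-2e9 + 4e9j, -2e9 - 4e9j])    # assumed starting poles
A = np.hstack([1.0 / (s[:, None] - poles), np.ones((s.size, 1))])
x, *_ = np.linalg.lstsq(A, h, rcond=None)       # [r_1, r_2, d]
residues, d = x[:-1], x[-1]
err = np.linalg.norm(A @ x - h) / np.linalg.norm(h)
print("residues:", residues, "direct term:", d, "rel. fit error:", err)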

Relevance: 30.00%

Abstract:

The graph Laplacian operator is widely studied in spectral graph theory, largely due to its importance in modern data analysis. Recently, the Fourier transform and other time-frequency operators have been defined on graphs using Laplacian eigenvalues and eigenvectors. We extend these results and prove that the translation operator to the i-th node is invertible if and only if all eigenvectors are nonzero on the i-th node. Because of this dependency on the support of eigenvectors, we study the characteristic set of Laplacian eigenvectors. We prove that the Fiedler vector of a planar graph cannot vanish on large neighborhoods and then explicitly construct a family of non-planar graphs that do exhibit this property. We then prove original results in modern analysis on graphs. We extend results on spectral graph wavelets to create vertex-dynamic spectral graph wavelets whose support depends on both scale and translation parameters. We prove that Spielman's Twice-Ramanujan graph sparsifying algorithm cannot outperform his conjectured optimal sparsification constant. Finally, we present numerical results on graph conditioning, in which edges of a graph are rescaled to best approximate the complete graph and reduce average commute time.
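
The invertibility criterion can be checked numerically; the Python sketch below does so on a small star graph. Every eigenvector of the repeated eigenvalue vanishes at the hub, so translation to the hub is never invertible; note that for repeated eigenvalues the verdict at other nodes depends on the eigenbasis the solver returns, a subtlety connected to the characteristic sets studied above.

# Check, node by node, whether all Laplacian eigenvectors are nonzero there
# (the invertibility criterion for graph translation) on a 5-node star.
import numpy as np

n = 5
A = np.zeros((n, n)); A[0, 1:] = A[1:, 0] = 1.0   # star: node 0 is the hub
L = np.diag(A.sum(1)) - A
eigvals, U = np.linalg.eigh(L)                    # columns are eigenvectors

tol = 1e-10
for i in range(n):
    ok = np.all(np.abs(U[i, :]) > tol)
    print(f"node {i}: translation invertible -> {ok}")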

Relevance: 30.00%

Abstract:

This thesis focuses on experimental and numerical studies of the hydrodynamic interaction between two vessels in close proximity in waves. In the model tests, two identical box-like models with rounded corners were used. Regular waves with the same wave steepness and different wave frequencies were generated. Six-degrees-of-freedom body motions and wave elevations between the bodies were measured in a head-sea condition, and three initial gap widths were examined. In the numerical computations, a seakeeping program based on a panel-free method, MAPS0, and a program based on the panel method, WAMIT, were used to predict body motions and wave elevations. The computed body motions and wave elevations were compared with the experimental data.
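
For context on the wave conditions: holding the steepness constant ties the wave height to each test frequency through the deep-water dispersion relation. The short Python sketch below works this out for an assumed steepness value (the actual test values are not given in the abstract).

# Constant steepness H/lambda with deep-water dispersion lambda = 2*pi*g/w^2
# fixes the wave height generated at each frequency.
import numpy as np

g, steepness = 9.81, 0.02                    # assumed steepness H/lambda
omega = np.arange(3.0, 7.0, 0.5)             # wave frequencies (rad/s)
wavelength = 2 * np.pi * g / omega ** 2      # deep-water dispersion
H = steepness * wavelength                   # wave height per frequency
for w, h in zip(omega, H):
    print(f"omega = {w:.1f} rad/s -> H = {h:.3f} m")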

Relevance: 30.00%

Abstract:

One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as the error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements, and numerical optimization schemes are applied which avoid the memory storage and huge matrix inversions required by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance; in VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to assimilate against the 30,171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation.

Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme: an external program sends and receives information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal, only a few lines to facilitate input and output. Apart from being simple to couple, the approach can be employed even if the two codes are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing by simply telling the control program to wait until all processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized; nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods.

The non-intrusive VEnKF has been applied to a multi-purpose hydrodynamic model, COHERENS, to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images were available for 7 days between May 16 and July 6, 2009, and the effect of organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The VEnKF results were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, because the TSM data were sparse in both time and space, the two could not be matched well. The use of multiple automatic stations with real-time data would be important to alleviate the time-sparsity problem; combined with DA, this would help, for instance, in better understanding environmental hazard variables. We found that using a very large ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and this ensemble-size limit point to the emerging area of Reduced Order Modelling (ROM), in which running the full-blown model is avoided to save computational resources. When ROM is applied with the non-intrusive DA approach, it may result in a cheaper algorithm that relaxes the computational challenges existing in the fields of modelling and DA.
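
A minimal Python sketch of the ensemble update-and-resample idea described above follows; it is a generic stochastic ensemble Kalman filter analysis step with a resampling refresh, not the actual VEnKF code, and all dimensions, the observation operator and the noise levels are illustrative assumptions.

# Generic EnKF analysis step + ensemble refresh (VEnKF-style resampling).
import numpy as np

rng = np.random.default_rng(2)
n_state, n_ens, n_obs = 50, 30, 7           # toy sizes, not the 30,171 state

X = rng.normal(1.0, 0.1, (n_state, n_ens))  # forecast ensemble (columns)
obs_idx = np.linspace(0, n_state - 1, n_obs).astype(int)
H = np.zeros((n_obs, n_state)); H[np.arange(n_obs), obs_idx] = 1.0
R = 0.05 ** 2 * np.eye(n_obs)               # observation error covariance
y = rng.normal(1.2, 0.05, n_obs)            # synthetic water-height data

Xm = X.mean(axis=1, keepdims=True)
Ap = (X - Xm) / np.sqrt(n_ens - 1)          # scaled ensemble anomalies
P = Ap @ Ap.T                               # ensemble error covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain
Y = y[:, None] + rng.normal(0.0, 0.05, (n_obs, n_ens))  # perturbed obs
Xa = X + K @ (Y - H @ X)                    # analysis ensemble

# Resample a fresh ensemble around the analysis mean to counter inbreeding
# (spread here is a fixed illustrative value; VEnKF samples from the
# analysis covariance).
X_new = Xa.mean(axis=1, keepdims=True) + rng.normal(0.0, 0.05, (n_state, n_ens))
print("observed-part mean moved from",
      (H @ Xm).mean().round(3), "to", (H @ Xa.mean(axis=1)).mean().round(3))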

Relevance: 30.00%

Abstract:

Solving linear systems is an important problem in scientific computing. Exploiting parallelism is essential for solving complex systems, and this traditionally involves writing parallel algorithms on top of a library such as MPI. The SPIKE family of algorithms is one well-known example of a parallel solver for linear systems. The Hierarchically Tiled Array (HTA) data type extends traditional data-parallel array operations with explicit tiling and allows programmers to directly manipulate tiles. The tiles of the HTA data type map naturally to the block nature of many numeric computations, including the SPIKE family of algorithms. The higher level of abstraction of the HTA enables the same program to be portable across different platforms; current implementations target both shared-memory and distributed-memory models. In this thesis we present a proof-of-concept for portable linear solvers. We implement two algorithms from the SPIKE family using the HTA library. We show that our implementations of SPIKE exploit the abstractions provided by the HTA to produce compact, clean code that can run on both shared-memory and distributed-memory models without modification. We discuss how we map the algorithms to HTA programs and examine their performance. We compare the performance of our HTA codes to comparable codes written in MPI as well as to current state-of-the-art linear algebra routines.
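
The block structure that makes SPIKE map well onto tiles can be shown in a few lines; the Python sketch below runs the core SPIKE idea on a tridiagonal system with two partitions, using plain NumPy blocks where the thesis uses HTA tiles. The partition-local solves are independent and would run in parallel in a real implementation.

# Two-partition SPIKE on a tridiagonal system: factor the diagonal blocks,
# form the "spikes", solve a tiny reduced system for the interface unknowns,
# then back-substitute.
import numpy as np

m = 4
A = np.diag(4.0 * np.ones(2 * m)) + np.diag(-np.ones(2 * m - 1), 1) \
    + np.diag(-np.ones(2 * m - 1), -1)
f = np.ones(2 * m)

A1, A2 = A[:m, :m], A[m:, m:]
B, C = A[m - 1, m], A[m, m - 1]          # coupling elements between partitions

g1 = np.linalg.solve(A1, f[:m])          # local solves (one per partition)
g2 = np.linalg.solve(A2, f[m:])
v1 = np.linalg.solve(A1, B * np.eye(m)[:, -1])   # right spike of partition 1
w2 = np.linalg.solve(A2, C * np.eye(m)[:, 0])    # left spike of partition 2

# reduced 2x2 system for the interface unknowns x1[-1] and x2[0]
Rmat = np.array([[1.0, v1[-1]], [w2[0], 1.0]])
a, b = np.linalg.solve(Rmat, np.array([g1[-1], g2[0]]))

x = np.concatenate([g1 - v1 * b, g2 - w2 * a])
print("residual:", np.linalg.norm(A @ x - f))    # ~ machine precision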

Relevance: 30.00%

Abstract:

Observational data and a three-dimensional numerical model (POM) are used to investigate the Persian Gulf outflow structure and its spreading pathway into the Oman Sea. The model is based on an orthogonal curvilinear coordinate system in the horizontal and a terrain-following (sigma) coordinate system in the vertical. In the simulation, the horizontal diffusivity coefficients are calculated from the Smagorinsky diffusivity formula, and the vertical eddy diffusivities are obtained from a second-order turbulence closure model (the Mellor-Yamada level-2.5 turbulence model). The modelling area includes the east of the Persian Gulf, the Oman Sea and part of the north-east of the Indian Ocean. In the model, the horizontal grid spacing was set to about 3.5 km and the number of vertical levels to 32. The simulations show that the mean salinity of the PG outflow does not change substantially during the year and is about 39 psu, while its temperature exhibits seasonal variations. These lead to variations in outflow density such that it reaches its maximum in late winter (March) and its minimum in mid-summer (August). At the entrance to the Oman Sea, the PG outflow turns to the right due to the Coriolis effect and descends the continental slope until it reaches its equilibrium depth. The higher density of the outflow during March causes it to sink deeper than in August, when the density is lowest; hence, the neutral buoyancy depths of the outflow are about 500 m in March and 250 m in August. The outflow then spreads at its equilibrium depth in the Oman Sea along the western and southern boundaries until it approaches Ras al Hamra Cape, where the water depth suddenly begins to increase. During March, the outflow, which is deeper and wider than in August, is more affected by the steep slope topography; as a result of the vortex stretching mechanism and the conservation of potential vorticity, it separates from the lateral boundaries and finally forms an anti-cyclonic eddy in the Oman Sea. During August, in contrast, the outflow continues to move along the lateral boundaries. In addition, the interaction of the PG outflow with the tide in the Strait of Hormuz leads to intermittency in the outflow movement into the Oman Sea, which could be the major reason for the generation of Peddies (Persian Gulf eddies) in the Oman Sea.
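
For reference, the Smagorinsky horizontal diffusivity mentioned above has the POM-style form A_M = C Δx Δy sqrt((∂u/∂x)² + (∂v/∂y)² + ½(∂u/∂y + ∂v/∂x)²). The Python sketch below evaluates it on a toy velocity field at the 3.5 km grid spacing of the model; the constant C and the field itself are assumed values.

# Smagorinsky horizontal diffusivity on a toy 2-D velocity field.
import numpy as np

C, dx, dy = 0.1, 3500.0, 3500.0            # C ~ 0.1-0.2; 3.5 km grid spacing
x = np.arange(20) * dx
y = np.arange(20) * dy
X, Y = np.meshgrid(x, y, indexing="ij")
u = 0.5 * np.sin(2 * np.pi * X / 7e4)      # toy horizontal velocities (m/s)
v = 0.3 * np.cos(2 * np.pi * Y / 7e4)

du_dx, du_dy = np.gradient(u, dx, dy)
dv_dx, dv_dy = np.gradient(v, dx, dy)
deform = np.sqrt(du_dx**2 + dv_dy**2 + 0.5 * (du_dy + dv_dx)**2)
A_M = C * dx * dy * deform                 # horizontal diffusivity (m^2/s)
print("A_M range (m^2/s):", A_M.min(), A_M.max())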

Relevance: 30.00%

Abstract:

The Persian Gulf (PG) is a semi-enclosed shallow sea connected to the open ocean through the Strait of Hormuz. A thermocline, a sharp decrease of temperature in the subsurface layer of the water column that leads to stratification, forms seasonally in the PG. The forcing comprises tide, river inflow, solar radiation, evaporation, north-westerly wind and water exchange with the Oman Sea, all of which influence this process. In this research, an analysis of field data and a numerical study (with the Princeton Ocean Model, POM) of the summer thermocline development in the PG are presented. The Mt. Mitchell cruise 1992 salinity and temperature observations show that the thermocline is effectively removed in winter due to strong wind mixing and lower solar radiation, but is gradually formed and developed during spring and summer; in fact, as a result of increased vertical convection through the water column in winter, the vertical temperature gradient decreases and the thermocline is effectively removed. The thermocline development, which evolves from east to west, is studied using numerical simulation and existing observations. Results show that at the summer transition period the north-westerly winter wind weakens the fresher inflow from the Oman Sea while solar radiation increases; these factors cause the thermocline to form and develop from winter to summer, even over the north-western part of the PG. The model results show that, for the more realistic monthly averaged wind experiments, the thermocline develops as indicated by the summer observations. The formation of the thermocline also seems to decrease the dissolved oxygen in the water column, due to the lack of mixing caused by the induced stratification. Over most of the PG, the temperature difference between surface and subsurface increases exponentially from March until May. Similar variations in the salinity differences are also predicted, although with smaller values than observed. Indeed, the thermocline develops rapidly in the Persian Gulf from spring to summer: the vertical temperature difference from surface to bottom increases to 9 °C in some parts of the study zone in summer. Correlation coefficients of temperature and salinity between the model results and the measurements were 0.85 and 0.8, respectively. The rate of thermocline development was found to be between 0.1 and 0.2 metres per day in the Persian Gulf during the six months from winter to early summer. The model also indicates that turbulent kinetic energy increases in the north-western part of the PG from winter to early summer, which could be due to increased internal wave activity and intensified stability through the water column during this time.
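
A simple way to locate a thermocline in such observations is to take the depth of the largest vertical temperature gradient; the Python sketch below does this on an illustrative profile with a 9 °C drop (the profile is synthetic, not Mt. Mitchell data).

# Locate the thermocline as the depth of maximum |dT/dz| in a profile.
import numpy as np

z = np.linspace(0, 60, 61)                         # depth (m), positive down
T = 32 - 9 / (1 + np.exp(-(z - 20) / 3))           # ~9 degC drop near 20 m
dT_dz = np.gradient(T, z)
i = np.argmax(np.abs(dT_dz))
print(f"thermocline depth ~ {z[i]:.0f} m, gradient {dT_dz[i]:.2f} degC/m")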

Relevance: 30.00%

Abstract:

In this study, numerical simulation of the Caspian Sea circulation was performed using the COHERENS three-dimensional numerical model and field data. The COHERENS three-dimensional model and FVCOM were both run under wind forcing, and the simulation results obtained were compared. The simulation covered the Caspian Sea with a horizontal grid size of approximately 5 km and 30 sigma levels. The numerical simulation results indicate that the wind forcing and the temperature gradient are the most important driving factors of the Caspian circulation pattern. One of the effects of the wind-driven currents was the upwelling phenomenon that formed along the eastern shores of the Caspian Sea in summer. The simulation results also indicate that this phenomenon occurred at depths of less than 40 metres, and the associated vertical velocity in July and August was 10 metres and 7 metres, respectively. During the upwelling period, temperatures on the east coast were about 5 °C lower than on the west coast. In autumn and winter, warm waters moved from the south-east coast to the north, and cold waters moved from the west coast of the central Caspian toward the south. In the subsurface and deep layers, these movements were much more structured and strengthened the anti-clockwise circulation in the area, especially in the central Caspian. The results of the two models, COHERENS and FVCOM, run under wind forcing show close agreement, and the wind-driven circulation patterns of the two models are almost identical.
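
As a sketch of how such model-to-model agreement can be quantified, the Python fragment below computes the RMS difference and pattern correlation between two current-speed fields; the fields are random stand-ins, not the COHERENS and FVCOM outputs.

# Quantify agreement between two simulated fields: RMSE + pattern correlation.
import numpy as np

rng = np.random.default_rng(3)
speed_a = rng.random((50, 50))                      # stand-in field, model A
speed_b = speed_a + rng.normal(0, 0.05, (50, 50))   # model B, close to A

rmse = np.sqrt(np.mean((speed_a - speed_b) ** 2))
corr = np.corrcoef(speed_a.ravel(), speed_b.ravel())[0, 1]
print(f"RMSE = {rmse:.3f} m/s, pattern correlation = {corr:.3f}")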

Relevance: 30.00%

Abstract:

This document describes the general principles of Digital Object Identifiers (DOI). It provides examples of DOI implementation useful for the AtlantOS H2020 project networks. A DOI is a unique identifier allocated to a resource. Generally used to identify scientific publications, a DOI can be attributed to any physical, digital or abstract resource.
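
As a practical illustration, a DOI can be resolved programmatically through the doi.org proxy, and content negotiation can return a machine-readable metadata record. The Python sketch below shows the pattern with a placeholder DOI; a real registered DOI is needed for the request to succeed.

# Resolve a DOI via https://doi.org/ and request CSL-JSON metadata.
import urllib.request

doi = "10.1234/example-doi"                         # placeholder, not real
req = urllib.request.Request(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"})
with urllib.request.urlopen(req) as resp:           # follows the redirect
    print(resp.read().decode())                     # JSON metadata record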

Relevance: 30.00%

Abstract:

A catastrophic red tide event occurred in the Strait of Hormuz, the Persian Gulf and the Gulf of Oman from late summer 2008 to spring 2009. With its devastating effects, the phenomenon alarmed all the countries on the margins of the Persian Gulf and the Gulf of Oman and caused considerable losses to the fishery industries, tourism and trade economy of the region. During the maritime cruise carried out by the Persian Gulf and Gulf of Oman Ecological Research Institute, field data including temperature, salinity, chlorophyll-a, dissolved oxygen and algal density were obtained for this research. Satellite information was received from the MODIS, MERIS and SeaWiFS sensors; temperature and surface chlorophyll images were obtained and compared with the field data and with data from the PROBE model. The results of the present research indicate that with the occurrence of the harmful algal bloom (HAB), the chlorophyll-a and dissolved oxygen contents increased in the surface water. Maximum algal density was seen along the northern coasts of the Strait of Hormuz, while lower algal densities were detected in deep water and in surface offshore water. Our results show that the algal bloom was the result of a drop in seawater temperature, water circulation, and the adverse environmental pollution caused by industrial and urban sewage entering the coastal waters in this region of the Persian Gulf. The red tide started in the Strait of Hormuz and eventually covered about 140,000 km² of the Persian Gulf and the total area of the Strait of Hormuz, and it persisted for 10 months, a record among the algal blooms documented across the world. The temperature and chlorophyll satellite images were consistent with the values measured in the field, which indicates that satellite measurements have acceptable precision and can be used in sea monitoring and modelling.

Relevance: 30.00%

Abstract:

This thesis deals with a mathematical description of the flow in a polymeric pipe and in a specific peristaltic pump. The study involves fluid-structure interaction analysis in the presence of complex turbulent flows, treated in an arbitrary Lagrangian-Eulerian (ALE) framework. The flow simulations are performed in COMSOL 4.4, as a 2D axisymmetric model, and in ABAQUS 6.14.1, as a 3D model with symmetric boundary conditions. In COMSOL, the fluid and structure problems are coupled by a monolithic algorithm, while the ABAQUS code links the ABAQUS/CFD and ABAQUS/Standard solvers with a single-block iterative partitioned algorithm. Given the turbulent features of the flow, the fluid model in both codes is the RNG k-ε model. The structural model is described, depending on the pipe material, by elastic models or by hyperelastic neo-Hookean models with Rayleigh damping properties. In order to describe the pulsatile fluid flow after the pumping process, the available data are often insufficient for the fluid problem: engineering measurements normally provide only the average pressure or velocity at a cross-section. This problem has been analysed since the 1950s, starting with McDonald's and Womersley's Fourier analysis of the average pressure at a fixed cross-section, while nowadays sophisticated techniques, including Finite Elements and Finite Volumes, exist to study the flow. Finally, we set up peristaltic pipe simulations in the ABAQUS code, using the same model previously tested for the fluid and the structure.
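
The pulsatile-flow regime addressed by Womersley's analysis is usually characterized by the Womersley number α = R·sqrt(ωρ/μ); the small Python sketch below evaluates it for assumed pipe and pumping values, not those of the simulated pump.

# Womersley number for pulsatile pipe flow; illustrative parameters.
import math

R = 0.005                 # pipe inner radius (m)
f_pump = 1.0              # pumping frequency (Hz)
rho, mu = 1000.0, 1e-3    # water-like density (kg/m^3) and viscosity (Pa s)

omega = 2 * math.pi * f_pump
alpha = R * math.sqrt(omega * rho / mu)
print(f"Womersley number alpha = {alpha:.1f}")
# alpha >> 1: inertia-dominated, blunt velocity profiles; alpha << 1: the
# flow follows the quasi-steady Poiseuille solution at each instant.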

Relevance: 30.00%

Abstract:

The thermal and air environment conditions inside animal housing facilities change during the day due to the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of spatially distributed points across the facility area must be monitored. This work proposes that the variation in time of the environmental variables of interest for animal production, monitored inside animal housing facilities, can be accurately modelled from time-discrete records. The objective of this work was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations become independent of the time spent during measurement. The proposed method brings the values recorded with time delays close to those expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model for environmental variables was validated for the air temperature parameter: the values corrected by the method did not differ (Tukey test, 5% probability) from the actual values recorded by dataloggers.
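
A minimal Python sketch of the correction idea may help: readings taken sequentially at spatially distributed points are brought to a single reference instant by interpolating (or extrapolating) between successive measurement rounds. All times and temperatures below are illustrative, not the validation data.

# Correct sequentially recorded air temperatures to one reference instant.
import numpy as np

t_ref = 30.0                                       # reference instant (min)
t_obs = np.array([28.0, 29.0, 31.0, 33.0])         # actual reading times
# two measurement rounds per point, linearly interpolated/extrapolated
temp_round1 = np.array([24.1, 24.3, 24.0, 23.8])   # read at t_obs - 10 min
temp_round2 = np.array([25.0, 25.1, 24.9, 24.6])   # read at t_obs
t_round1 = t_obs - 10.0

corrected = temp_round1 + (temp_round2 - temp_round1) * \
    (t_ref - t_round1) / (t_obs - t_round1)
print("air temperature corrected to t_ref:", corrected)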