979 results for Numerical tests
Abstract:
The main objective of this work is to illustrate an application of active angular control in a sectioned airfoil using shape memory alloys. In the proposed model, the shape of the airfoil profile is set by the angle between its two sections. This angle is obtained through the shape memory effect: an electric current heats the alloy wire via the Joule effect, changing the shape of the alloy. This material is capable of converting thermal energy into mechanical energy and, after apparently permanent deformation, it recovers its original shape upon heating. Due to the presence of nonlinear effects, especially in the mathematical model of the alloy, this work proposes the application of a control system based on fuzzy logic. Through numerical tests, the performance of the fuzzy controller is compared with that of an on-off controller applied to the sectioned airfoil model.
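The on-off baseline used in the comparison can be sketched as a relay with a small deadband; the target angle, deadband and first-order plant below are hypothetical placeholders, not the model identified in this work:

```python
# Relay (on-off) control of a toy first-order plant standing in for the
# Joule-heated SMA wire; all numbers here are hypothetical.
def on_off_control(angle, target, deadband=0.5):
    """Switch the heating current on/off with a small deadband;
    return None inside the deadband (hold the previous state)."""
    if angle < target - deadband:
        return 1.0
    if angle > target + deadband:
        return 0.0
    return None

angle, u = 0.0, 1.0
for _ in range(200):
    cmd = on_off_control(angle, target=10.0)
    if cmd is not None:
        u = cmd
    angle += 0.1 * (12.0 * u - angle)  # hypothetical heating dynamics
```

The deadband prevents rapid switching (chattering) around the setpoint; the fuzzy controller replaces the hard threshold with graded membership functions.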
Abstract:
With a view to assessing the vulnerability of columns to low-elevation vehicular impacts, a non-linear explicit numerical model has been developed and validated using existing experimental results. The numerical model accounts for the effects of strain rate and confinement of the reinforced concrete, which are fundamental to the successful prediction of the impact response. The sensitivity of the material model parameters used for the validation is also scrutinised, and numerical tests are performed to examine their suitability for simulating shear failure conditions. Conflicting views on strain gradient effects are discussed, and the validation process is extended to investigate the ability of equations developed under concentric loading conditions to simulate flexural failure events. Numerical results for impact force–time histories, mid-span and residual deflections, and support reactions have been verified against corresponding experimental data. A universal technique is proposed that can be applied to determine the vulnerability of impacted columns in collisions with new-generation vehicles under the most common impact modes. Additionally, the observed failure characteristics of the impacted columns are explained using extended outcomes. Based on the overall results, an analytical method is suggested to quantify the vulnerability of the columns.
Abstract:
The numerical modelling of electromagnetic waves has been the focus of many research areas in the past. Some specific applications of electromagnetic wave scattering are in the fields of microwave heating and radar communication systems. The equations that govern the fundamental behaviour of electromagnetic wave propagation in waveguides and cavities are Maxwell's equations. In the literature, a number of methods have been employed to solve these equations. Of these, the classical Finite-Difference Time-Domain scheme, which uses a staggered time and space discretisation, is the best known and most widely used. However, it is complicated to implement this method on an irregular computational domain using an unstructured mesh. In this work, a coupled method is introduced for the solution of Maxwell's equations. It is proposed that the free-space component of the solution is computed in the time domain, whilst the load is resolved using the frequency-dependent electric field Helmholtz equation. This methodology results in a time-frequency domain hybrid scheme. For the Helmholtz equation, boundary conditions are generated from the time-dependent free-space solutions. The boundary information is mapped into the frequency domain using the Discrete Fourier Transform. The solution for the electric field components is obtained by solving a sparse complex system of linear equations. The hybrid method has been tested for both waveguide and cavity configurations. Numerical tests performed on waveguides and cavities with inhomogeneous lossy materials highlight the accuracy and computational efficiency of the newly proposed hybrid computational electromagnetic strategy.
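The staggered space-time update at the heart of the classical FDTD scheme can be illustrated with a minimal one-dimensional sketch in normalized units; the grid size, Courant number and Gaussian hard source are illustrative assumptions, not the configurations studied in this work:

```python
import numpy as np

# One-dimensional Yee-style leapfrog update in free space, in
# normalized units (c = dx = 1).
nx, nt = 200, 300
S = 0.5                      # Courant number dt/dx (stable for S <= 1)
ez = np.zeros(nx)            # E field at integer grid nodes
hy = np.zeros(nx - 1)        # H field at half-integer grid nodes

for n in range(nt):
    hy += S * (ez[1:] - ez[:-1])        # H update at half time step
    ez[1:-1] += S * (hy[1:] - hy[:-1])  # E update at full time step
    ez[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)  # Gaussian pulse source
```

The E and H updates each use only nearest neighbours on the shifted grid, which is what makes the scheme simple on Cartesian meshes and awkward on unstructured ones.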
Abstract:
The Wright-Fisher model is an Itô stochastic differential equation that was originally introduced to model genetic drift within finite populations and has recently been used as an approximation to ion channel dynamics within cardiac and neuronal cells. While analytic solutions to this equation remain within the interval [0,1], current numerical methods are unable to preserve such boundaries in the approximation. We present a new numerical method that guarantees approximations to a form of Wright-Fisher model, which includes mutation, remain within [0,1] for all time with probability one. Strong convergence of the method is proved and numerical experiments suggest that this new scheme converges with strong order 1/2. Extending this method to a multidimensional case, numerical tests suggest that the algorithm still converges strongly with order 1/2. Finally, numerical solutions obtained using this new method are compared to those obtained using the Euler-Maruyama method where the Wiener increment is resampled to ensure solutions remain within [0,1].
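The resampled Euler-Maruyama comparison scheme mentioned above can be sketched as follows; the drift and diffusion coefficients below are a common mutation form of the Wright-Fisher SDE and are an assumption here, not necessarily the exact form used in this work:

```python
import numpy as np

def em_wf_resample(x0, mu1, mu2, T, n, rng):
    """Euler-Maruyama for a Wright-Fisher SDE with mutation,
    dX = (mu1*(1 - X) - mu2*X) dt + sqrt(X*(1 - X)) dW,
    resampling the Wiener increment whenever a step would leave [0, 1]."""
    dt = T / n
    x = x0
    for _ in range(n):
        drift = mu1 * (1.0 - x) - mu2 * x
        diff = np.sqrt(max(x * (1.0 - x), 0.0))
        while True:
            dw = rng.normal(0.0, np.sqrt(dt))  # candidate Wiener increment
            x_new = x + drift * dt + diff * dw
            if 0.0 <= x_new <= 1.0:
                break                          # accept only in-bounds steps
        x = x_new
    return x

rng = np.random.default_rng(0)
x_final = em_wf_resample(0.5, 0.3, 0.3, 1.0, 200, rng)
```

Resampling keeps the path in [0, 1] by construction, but it changes the increment distribution near the boundary, which is one motivation for boundary-preserving schemes with provable strong convergence.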
Abstract:
Waves generated by vertical seafloor movements are simulated using a fully nonlinear two-dimensional numerical wave tank. In the source region, the seafloor lifts to a designated height according to a generation function. The numerical tests show that the linear theory is only valid for estimating the wave behaviour induced by small-amplitude seafloor movements, and the fully nonlinear numerical model should be adopted when simulating wave generation by large-amplitude seafloor movements. Without background surface waves, many numerical tests on the stable maximum elevations η_max^0 are carried out with both the linear theory and the fully nonlinear model, and the results of the two models are compared and analyzed. For the fully nonlinear model, the influences of the amplitudes and the horizontal lengths on η_max^0 are stronger than those of the characteristic duration times. Furthermore, the results reveal significant differences between the linear theory and the fully nonlinear model. When the influences of background surface waves are considered, the corresponding numerical analyses reveal that, with the fully nonlinear model, η_max^0 varies nearly linearly with the amplitudes of the surface waves, and η_max^0 depends significantly on the wave lengths and wave phases of the surface waves. In addition, the differences between the linear theory and the fully nonlinear model remain obvious, and these differences are significantly affected by the wave parameters of the background surface waves, such as the wave amplitude, the wave length and the wave phase.
Abstract:
This paper discusses preconditioned Krylov subspace methods for solving large scale linear systems that originate from oil reservoir numerical simulations. Two types of preconditioners, one being based on an incomplete LU decomposition and the other being based on iterative algorithms, are used together in a combination strategy in order to achieve an adaptive and efficient preconditioner. Numerical tests show that different Krylov subspace methods combining with appropriate preconditioners are able to achieve optimal performance.
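A minimal sketch of the incomplete-LU half of such a combination, using SciPy's sparse tools on a Poisson-like stand-in (the matrix is illustrative, not a reservoir-simulation Jacobian):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse 5-point Laplacian-type system standing in for a reservoir matrix.
n = 50
A = sp.diags([-1, -1, 4, -1, -1], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

# Incomplete LU factorization used as a right-hand preconditioner.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)  # info == 0 signals convergence
```

The adaptive strategy described above would switch between such ILU preconditioners and iterative (e.g. relaxation-based) ones depending on the properties of the current linear system.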
Abstract:
Financial modelling in the area of option pricing involves understanding the correlations between asset prices and buy/sell movements in order to reduce investment risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluation of buy/sell contracts can be made. In turn, analysis tools rely on fast numerical algorithms for the solution of financial mathematical models. There are many different financial activities apart from share buy/sell activities. The main aim of this chapter is to discuss a distributed algorithm for the numerical solution of a European option; both linear and non-linear cases are considered. The algorithm is based on the concept of the Laplace transform and its numerical inverse. The scalability of the algorithm is examined, and numerical tests are used to demonstrate its effectiveness for financial analysis. Time-dependent functions for volatility and interest rates are also discussed. Applications of the algorithm to the non-linear Black-Scholes equation, where the volatility and the interest rate are functions of the option value, are included. Some qualitative results on the convergence behaviour of the algorithm are examined. This chapter also examines the various computational issues of the Laplace transformation method in terms of distributed computing. The idea of using a two-level temporal mesh in order to achieve distributed computation along the temporal axis is introduced. Finally, the chapter ends with some conclusions.
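Numerical Laplace inversion of the kind such an algorithm relies on can be illustrated with the Gaver-Stehfest method (one of several inversion schemes; the chapter's actual choice is not specified here), checked against a known transform pair:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inverse Laplace transform.
    F: Laplace-domain function; t > 0; N even (roundoff grows with N)."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        total += (-1) ** (k + N // 2) * V * F(k * ln2 / t)
    return ln2 / t * total

# Sanity check on a known pair: the inverse of 1/(s + 1) is exp(-t).
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)  # ≈ exp(-1)
```

Because each evaluation point t only needs samples of F along the real axis, the inversion parallelizes naturally across the temporal axis, which is the property a distributed two-level temporal mesh exploits.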
Abstract:
Experimental static and fatigue tension-tension tests were carried out on 5HS/RTM6 composite intact coupons and coupons incorporating adhesively-bonded (FM300-2) stepped flush joints. The results show that the adhesive joint, which is widely used in repairs, significantly reduces the static strength as well as the fatigue life of the composite. Both the static and the fatigue failure of the ‘repaired’ coupons occur at the adhesive joint and involve crack initiation and propagation. The latter is modelled using interface finite elements based on the decohesive zone approach. The material degradation in the interface constitutive law is described by a damage variable, which can evolve due to the applied loads as well as the number of fatigue cycles. The fatigue formulation, based on a published model, is adapted to fit the framework of the pseudo-transient formulation that is used as a numerical tool to overcome convergence difficulties. The fatigue model requires three material parameters. Numerical tests show that a single set of these parameters can be used to recover, very accurately, the experimental S-N relationship. Sensitivity studies show that the results are not mesh dependent.
Abstract:
In order to reduce costs and time while improving quality, durability and sustainability in structural concrete construction, a widely used material nowadays, special care must be taken in some crucial phases of the project and its execution, including the structural design and calculation, and the dosage, pouring and curing of the concrete. Another important aspect is the proper design and execution of assembly plans and construction details. The framework, a name designating the whole reinforcement bar cage already assembled as shown in the drawings, can be made up of several components and implies a higher or lower degree of industrialization. The framework costs constitute about one third of the price per cubic meter of placed concrete. The best solutions from all points of view are clearly those involving easier processing to achieve the same goal, and consequently carrying a high degree of industrialization, meaning quality and safety in the work. This thesis aims to provide an in-depth analysis of a relatively new type of plate anchoring known as headed reinforcement bars, which can potentially replace standard or L-shaped hooks, cleaning up construction details and enabling a faster, more flexible, and therefore more economical assembly. A literature review on the topic and an overview of typical applications is provided, followed by some examples of specific applications in real projects. Since a strict theoretical formulation for the design plate dimensions has not yet been put forward, an equation is proposed for the side-face blowout strength of the anchorage, based on the capacity of concrete to carry concentrated loads in cases in which no transverse reinforcement is provided. The correlation of the calculated ultimate load with experimental results available in the literature is given.
Moreover, the proposed formulation can be extended to cases in which a certain development length is available: using nonlinear finite element analysis software oriented to the study of reinforced concrete, numerical tests on the bond-bearing interaction are performed. The thesis ends with tests of eight corner joints subjected to a closing moment, carried out in the Structures Laboratory of the Polytechnic University of Madrid, aiming to check whether the plate design as stated is adequate for these elements and whether an element with plate-anchored reinforcement is equivalent to one with a traditional construction detail.
Abstract:
This thesis concerns mixed flows (which are characterized by the simultaneous occurrence of free-surface and pressurized flow in sewers, tunnels, culverts or under bridges), and contributes to the improvement of the existing numerical tools for modelling these phenomena. The classic Preissmann slot approach is selected due to its simplicity and capability of predicting results comparable to those of a more recent and complex two-equation model, as shown here with reference to a laboratory test case. In order to enhance the computational efficiency, a local time stepping strategy is implemented in a shock-capturing Godunov-type finite volume numerical scheme for the integration of the de Saint-Venant equations. The results of different numerical tests show that local time stepping reduces run time significantly (between −29% and −85% CPU time for the test cases considered) compared to the conventional global time stepping, especially when only a small region of the flow field is surcharged, while solution accuracy and mass conservation are not impaired. The second part of this thesis is devoted to the modelling of the hydraulic effects of potentially pressurized structures, such as bridges and culverts, inserted in open channel domains. To this aim, a two-dimensional mixed flow model is developed first. The classic conservative formulation of the 2D shallow water equations for free-surface flow is adapted by assuming that two fictitious vertical slots, normally intersecting, are added on the ceiling of each integration element. Numerical results show that this schematization is suitable for the prediction of 2D flooding phenomena in which the pressurization of crossing structures can be expected. Given that the Preissmann model does not allow for the possibility of bridge overtopping, a one-dimensional model is also presented in this thesis to handle this particular condition. 
The flows below and above the deck are considered as parallel, and linked to the upstream and downstream reaches of the channel by introducing suitable internal boundary conditions. The comparison with experimental data and with the results of HEC-RAS simulations shows that the proposed model can be a useful and effective tool for predicting overtopping and backwater effects induced by the presence of bridges and culverts.
Abstract:
Aquifers are a vital water resource whose quality must be safeguarded or, if damaged, restored. The extent and complexity of aquifer contamination are related to the characteristics of the porous medium, the influence of boundary conditions, and the biological, chemical and physical processes involved. Since the 1990s, scientific efforts have grown rapidly to find efficient ways of estimating the hydraulic parameters of aquifers and, from these, recovering the contaminant source position and its release history. To simplify and understand the influence of these various factors on aquifer phenomena, researchers commonly use numerical and controlled experiments. This work presents some of these methods, applying and comparing them on data collected during laboratory, field and numerical tests. The work is structured in four parts, which present the results and conclusions of the specific objectives.
Abstract:
In this paper we extend the ideas of Brugnano, Iavernaro and Trigiante in their development of HBVM($s,r$) methods to construct symplectic Runge-Kutta methods for all values of $s$ and $r$ with $s\geq r$. However, these methods do not see the dramatic performance improvement that HBVMs can attain. Nevertheless, in the case of additive stochastic Hamiltonian problems an extension of these ideas, which requires the simulation of an independent Wiener process at each stage of a Runge-Kutta method, leads to methods that have very favourable properties. These ideas are illustrated by some simple numerical tests for the modified midpoint rule.
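For the deterministic case, the symplecticity of the (implicit) midpoint rule can be checked on a harmonic oscillator, where the implicit stage equations solve in closed form; the stochastic variant discussed above would add an independent Wiener increment at each stage, which this sketch omits:

```python
def implicit_midpoint_step(q, p, h):
    """One implicit midpoint step for the harmonic oscillator
    H(q, p) = (p**2 + q**2) / 2. For this linear system the implicit
    equations solve in closed form (a Cayley-transform rotation)."""
    a = h / 2.0
    d = 1.0 + a * a
    q_new = ((1.0 - a * a) * q + 2.0 * a * p) / d
    p_new = ((1.0 - a * a) * p - 2.0 * a * q) / d
    return q_new, p_new

# For quadratic Hamiltonians the midpoint map conserves the energy
# exactly (up to roundoff) over arbitrarily many steps.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = implicit_midpoint_step(q, p, 0.1)
energy = 0.5 * (q * q + p * p)
```

The midpoint rule is the s = r = 1 member of the symplectic Runge-Kutta family considered here; the HBVM($s,r$) construction generalizes the stage structure while retaining this conservation behaviour.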
Abstract:
Currently, the GNSS computing modes are of two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data in either the RINEX file format or as real-time data streams in the RTCM format. Very little computation is carried out by the reference station. The existing network-based processing modes, regardless of whether they are executed in real-time or post-processed modes, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters, ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for estimated parameters may also be optionally provided. In such a mode the nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction is how the user receiver software deals with corrections from the reference station solutions and the ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. 
With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
Abstract:
We study theoretically the destruction of spin nematic order due to quantum fluctuations in quasi-one-dimensional spin-1 magnets. If the nematic ordering is disordered by condensing disclinations, then quantum Berry phase effects induce dimerization in the resulting paramagnet. We develop a theory for a Landau-forbidden second order transition between the spin nematic and dimerized states found in recent numerical calculations. Numerical tests of the theory are suggested.
Abstract:
Finite volume methods traditionally employ dimension by dimension extension of the one-dimensional reconstruction and averaging procedures to achieve spatial discretization of the governing partial differential equations on a structured Cartesian mesh in multiple dimensions. This simple approach based on tensor product stencils introduces an undesirable grid orientation dependence in the computed solution. The resulting anisotropic errors lead to a disparity in the calculations that is most prominent between directions parallel and diagonal to the grid lines. In this work we develop isotropic finite volume discretization schemes which minimize such grid orientation effects in multidimensional calculations by eliminating the directional bias in the lowest order term in the truncation error. Explicit isotropic expressions that relate the cell face averaged line and surface integrals of a function and its derivatives to the given cell area and volume averages are derived in two and three dimensions, respectively. It is found that a family of isotropic approximations with a free parameter can be derived by combining isotropic schemes based on next-nearest and next-next-nearest neighbors in three dimensions. Use of these isotropic expressions alone in a standard finite volume framework, however, is found to be insufficient in enforcing rotational invariance when the flux vector is nonlinear and/or spatially non-uniform. The rotationally invariant terms which lead to a loss of isotropy in such cases are explicitly identified and recast in a differential form. Various forms of flux correction terms which allow for a full recovery of rotational invariance in the lowest order truncation error terms, while preserving the formal order of accuracy and discrete conservation of the original finite volume method, are developed. Numerical tests in two and three dimensions attest the superior directional attributes of the proposed isotropic finite volume method. 
Prominent anisotropic errors, such as spurious asymmetric distortions on a circular reaction-diffusion wave that feature in the conventional finite volume implementation, are effectively suppressed through isotropic finite volume discretization. Furthermore, for a given spatial resolution, a striking improvement in the prediction of the kinetic energy decay rate corresponding to a general two-dimensional incompressible flow field is observed with the use of the isotropic finite volume method instead of the conventional discretization.
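A classical example of removing directional bias from the lowest-order truncation error (related in spirit to, though not necessarily identical with, the schemes derived here) is the 9-point isotropic Laplacian stencil, whose leading error term is proportional to the rotationally invariant biharmonic of the solution:

```python
import numpy as np

# The 9-point stencil (1 / 6h^2) * [[1,4,1],[4,-20,4],[1,4,1]]: its
# leading truncation-error term is isotropic, unlike the 5-point cross.
STENCIL = np.array([[1.0, 4.0, 1.0],
                    [4.0, -20.0, 4.0],
                    [1.0, 4.0, 1.0]]) / 6.0

def laplacian9(u, h):
    """Apply the 9-point isotropic Laplacian to interior grid nodes."""
    out = np.zeros_like(u)
    ni, nj = u.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out[1:-1, 1:-1] += STENCIL[di + 1, dj + 1] * \
                u[1 + di:ni - 1 + di, 1 + dj:nj - 1 + dj]
    return out / h ** 2

h = 0.1
x = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
L = laplacian9(X ** 2 + Y ** 2, h)  # exact Laplacian of x^2 + y^2 is 4
```

On the quadratic test function the stencil reproduces the exact Laplacian; the isotropy advantage over the 5-point cross appears in the fourth-order error term, which here has no preferred grid direction.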