917 results for Non-linear waves
Abstract:
We explore the application of pseudo time marching schemes, involving either deterministic integration or stochastic filtering, to solve the inverse problem of parameter identification of large dimensional structural systems from partial and noisy measurements of strictly static response. Solutions of such non-linear inverse problems could provide useful local stiffness variations and do not have to confront modeling uncertainties in damping, an important, yet inadequately understood, aspect in dynamic system identification problems. The usual method of least-squares solution is through a regularized Gauss-Newton method (GNM) whose results are known to be sensitively dependent on the regularization parameter and data noise intensity. Finite time, recursive integration of the pseudo-dynamical GNM (PD-GNM) update equation addresses the major numerical difficulty associated with the near-zero singular values of the linearized operator and gives results that are not sensitive to the time step of integration. Therefore, we also propose a pseudo-dynamic stochastic filtering approach for the same problem using a parsimonious representation of states and specifically solve the linearized filtering equations through a pseudo-dynamic ensemble Kalman filter (PD-EnKF). For multiple sets of measurements involving various load cases, we expedite the speed of the PD-EnKF by proposing an inner iteration within every time step. Results using the pseudo-dynamic strategy obtained through PD-EnKF and recursive integration are compared with those from the conventional GNM, showing that the PD-EnKF is the best performer: it exhibits little sensitivity to the process noise covariance and yields reconstructions with fewer artifacts even when the ensemble size is small. Copyright (C) 2009 John Wiley & Sons, Ltd.
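As a rough, hedged illustration of the pseudo-time marching idea (not the authors' exact PD-GNM or PD-EnKF formulation), the sketch below treats a regularized Gauss-Newton update for a least-squares parameter estimate as a pseudo-dynamical system and integrates it recursively with explicit Euler steps; the forward model, Jacobian and toy data are illustrative placeholders.

```python
import numpy as np

def pd_gauss_newton(forward, jacobian, d_obs, theta0, dt=0.1, n_steps=200, reg=1e-3):
    """Pseudo-dynamic Gauss-Newton sketch: integrate
    d(theta)/dtau = -(J^T J + reg*I)^{-1} J^T (f(theta) - d)
    in pseudo-time with explicit Euler steps instead of iterating to convergence."""
    theta = theta0.copy()
    for _ in range(n_steps):
        r = forward(theta) - d_obs               # residual between model response and data
        J = jacobian(theta)                      # sensitivity of the response w.r.t. parameters
        H = J.T @ J + reg * np.eye(theta.size)   # regularized normal matrix
        theta += -dt * np.linalg.solve(H, J.T @ r)  # one pseudo-time Euler step
    return theta

# Hypothetical usage on a toy linear "structure": f(theta) = A @ theta.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(20, 5))
    theta_true = np.array([1.0, 0.5, -0.3, 2.0, 0.8])
    d = A @ theta_true + 0.01 * rng.normal(size=20)
    est = pd_gauss_newton(lambda t: A @ t, lambda t: A, d, np.zeros(5))
    print(np.round(est, 3))
```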
Abstract:
Masonry strength depends upon the characteristics of the masonry unit, the mortar and the bond between them. Empirical formulae as well as analytical and finite element (FE) models have been developed to predict the structural behaviour of masonry. This paper focuses on developing a three-dimensional non-linear FE model, based on a micro-modelling approach, to predict masonry prism compressive strength and crack pattern. The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behaviour of the solid masonry unit and the mortar. Willam-Warnke's five-parameter failure theory, developed for modelling the tri-axial behaviour of concrete, has been adopted to model the failure of the masonry materials. The post-failure regime has been modelled by applying orthotropic constitutive equations based on the smeared crack approach. The compressive strength of the masonry prism predicted by the proposed FE model has been compared with experimental values as well as with the values predicted by other failure theories and the Eurocode formula. The crack pattern predicted by the FE model shows vertical splitting cracks in the prism. The FE model predicts the ultimate failure compressive stress to be close to 85% of the mean experimental compressive strength value.
Abstract:
The time-dependent response of a polar solvent to a changing charge distribution is studied in solvation dynamics. The change in the energy of the solute is measured by a time-domain Stokes shift in the fluorescence spectrum of the solute. Alternatively, one can use sophisticated non-linear optical spectroscopic techniques to measure the energy fluctuation of the solute at equilibrium. In both methods, the measured dynamic response is expressed by the normalized solvation time correlation function, S(t). The latter is found to exhibit unique features reflecting both the static and dynamic characteristics of each solvent. For water, S(t) consists of a dominant sub-50 fs ultrafast component, followed by a multi-exponential decay. Acetonitrile exhibits a sub-100 fs ultrafast component, followed by an exponential decay. Alcohols and amides show features unique to each solvent and solvent series. However, understanding and interpretation of these results have proven to be difficult, and often controversial. Theoretical studies and computer simulations have greatly facilitated the understanding of S(t) in simple systems. Recently, solvation dynamics has been used extensively to explore the dynamics of complex systems, like micelles and reverse micelles, protein and DNA hydration layers, sol-gel mixtures and polymers. In each case one observes rich dynamical features, again characterized by multi-exponential decays, but the initial and final time constants are now widely separated. In this tutorial review, we discuss the difficulties in interpreting the origin of the observed behaviour in complex systems.
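For reference, the normalized solvation time correlation function S(t) mentioned above is conventionally constructed from the time-dependent fluorescence Stokes shift as

```latex
S(t) = \frac{\nu(t) - \nu(\infty)}{\nu(0) - \nu(\infty)},
```

where \nu(t) is the peak (or mean) emission frequency at time t; S(t) decays from 1 to 0 as the solvent relaxes around the new charge distribution.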
Abstract:
In this paper, the steady laminar viscous hypersonic flow of an electrically conducting fluid in the region of the stagnation point of an insulating blunt body, in the presence of a radial magnetic field, is studied by a similarity solution approach, taking into account the variation of the product of density and viscosity across the boundary layer. The two coupled non-linear ordinary differential equations are solved simultaneously using the Runge-Kutta-Gill method. It has been found that the effect of the variation of the product of density and viscosity on the skin friction coefficient and the Nusselt number is appreciable. The skin friction coefficient increases but the Nusselt number decreases as the magnetic field or the total enthalpy at the wall increases.
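The paper's specific coupled equations are not reproduced in the abstract; as a hedged, generic illustration of the similarity-solution-plus-Runge-Kutta strategy, the sketch below solves the classical Blasius boundary-layer equation by shooting, using a standard Runge-Kutta integrator (SciPy's solve_ivp) rather than the Runge-Kutta-Gill variant and not the paper's magnetohydrodynamic system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Blasius similarity equation f''' + 0.5*f*f'' = 0 with f(0)=f'(0)=0, f'(inf)=1,
# reduced to a first-order system and solved by shooting on the wall value f''(0).

def blasius(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    sol = solve_ivp(blasius, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1] - 1.0   # mismatch in the far-field condition f'(eta_max) = 1

# Secant iteration on the unknown initial condition f''(0).
a, b = 0.1, 1.0
for _ in range(30):
    fa, fb = shoot(a), shoot(b)
    if abs(fb) < 1e-8:
        break
    a, b = b, b - fb * (b - a) / (fb - fa)
print("f''(0) ~", round(b, 4))   # about 0.332; proportional to the skin friction coefficient
```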
Abstract:
In order to study the elastic behaviour of matter when subjected to very large pressures, such as occur, for example, in the interior of the earth, and to provide an explanation for phenomena like earthquakes, it is essential to be able to calculate the values of the elastic constants of a substance under a state of large initial stress in terms of the elastic constants of a natural or stress-free state. An attempt has been made in this paper to derive expressions for these quantities for a substance of cubic symmetry on the basis of the non-linear theory of elasticity, including up to cubic powers of the strain components in the strain energy function. A simple method of deriving them directly from the energy function itself has been indicated for any general case, and the same has been applied to the case of hydrostatic compression. The notion of an effective elastic energy, the energy required to effect an infinitesimal deformation over a state of finite strain, has been introduced, the coefficients in this expression being the effective elastic constants. A separation of this effective energy function into normal co-ordinates has been given for the particular case of cubic symmetry, and it has been pointed out that when any of the coefficients in this normal form becomes negative, elastic instability will set in, with an associated release of energy.
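In the standard notation of finite-strain elasticity, and only as a hedged sketch of the kind of expansion the abstract refers to (not the paper's exact expressions), retaining up to cubic powers of the Lagrangian strain \eta_{ij} in the strain-energy density introduces third-order elastic constants alongside the usual second-order ones:

```latex
W(\eta) = \frac{1}{2!}\, C_{ijkl}\, \eta_{ij}\eta_{kl}
        + \frac{1}{3!}\, C_{ijklmn}\, \eta_{ij}\eta_{kl}\eta_{mn} + \cdots
```

The effective elastic constants at a finitely strained state are then the coefficients of the quadratic form governing an infinitesimal deformation superposed on that state, and elastic instability corresponds to this quadratic form losing positive definiteness.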
Abstract:
Optically clear glasses of various compositions in the system (100-x)Li2B4O7 center dot x(Ba5Li2Ti2Nb8O30) (5 <= x <= 20, in molar ratio) were fabricated by the splat-quenching technique. Controlled heat-treatment of the as-quenched glasses at 500 degrees C for 8 h yielded nanocrystallites embedded in the glass matrix. High Resolution Transmission Electron Microscopy (HRTEM) of these samples established the composition of the nanocrystallites to be that of Ba5Li2Ti2Nb8O30. B-11 NMR studies revealed the transformation of BO4 structural units into BO3 units owing to the increase in TiO6 and NbO6 structural units as the fraction of Ba5Li2Ti2Nb8O30 in the glass increased. This, in turn, resulted in an increase in the density of the glasses. The influence of the nominal composition of the glasses and glass-nanocrystal composites on the optical band gap (E-opt), Urbach energy (Delta E), refractive index (n), molar refraction (R-m), optical polarizability (alpha(m)) and third-order non-linear optical susceptibility (chi(3)) was studied.
Abstract:
Detecting Earnings Management Using Neural Networks. In trying to balance relevant and reliable accounting data, generally accepted accounting principles (GAAP) allow, to some extent, the company management to use their judgment and to make subjective assessments when preparing financial statements. The opportunistic use of this discretion in financial reporting is called earnings management. A considerable number of methods have been suggested for detecting accrual-based earnings management. A majority of these methods are based on linear regression. The problem with using linear regression is that a linear relationship between the dependent variable and the independent variables must be assumed. However, previous research has shown that the relationship between accruals and some of the explanatory variables, such as company performance, is non-linear. An alternative to linear regression, which can handle non-linear relationships, is neural networks. The type of neural network used in this study is the feed-forward back-propagation neural network. Three neural network-based models are compared with four commonly used linear regression-based earnings management detection models. All seven models are based on the earnings management detection model presented by Jones (1991). The performance of the models is assessed in three steps. First, a random data set of companies is used. Second, the discretionary accruals from the random data set are ranked according to six different variables. The discretionary accruals in the highest and lowest quartiles for these six variables are then compared. Third, a data set containing simulated earnings management is used. Both expense and revenue manipulation, ranging between -5% and 5% of lagged total assets, are simulated. Furthermore, two neural network-based models and two linear regression-based models are used with a data set containing financial statement data from 110 failed companies. Overall, the results show that the linear regression-based models, except for the model using a piecewise linear approach, produce biased estimates of discretionary accruals. The neural network-based model with the original Jones model variables and the neural network-based model augmented with ROA as an independent variable, however, perform well in all three steps. Especially in the second step, where the highest and lowest quartiles of ranked discretionary accruals are examined, the neural network-based model augmented with ROA as an independent variable outperforms the other models.
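As a hedged sketch of the modelling comparison described above (illustrative variable names and synthetic data, not the study's data set or exact specifications), the snippet below fits both a linear Jones-type accrual regression and a small feed-forward network (scikit-learn's MLPRegressor) to the same scaled regressors and takes the residuals as discretionary-accrual estimates.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 500
# Illustrative Jones-model regressors, all scaled by lagged total assets:
# 1/A_{t-1}, change in revenues, gross property, plant and equipment.
X = np.column_stack([
    1.0 / rng.uniform(50, 500, n),       # 1 / lagged total assets
    rng.normal(0.05, 0.10, n),           # delta REV / lagged assets
    rng.uniform(0.2, 0.8, n),            # PPE / lagged assets
])
# Synthetic "normal accruals" with a mildly non-linear component plus noise.
total_accruals = X @ np.array([5.0, 0.3, -0.1]) \
                 + 0.02 * np.tanh(5 * X[:, 1]) + 0.01 * rng.normal(size=n)

lin = LinearRegression().fit(X, total_accruals)
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, total_accruals)

# Discretionary accruals = residuals of the fitted "normal accruals" model.
da_linear = total_accruals - lin.predict(X)
da_nn = total_accruals - nn.predict(X)
print(np.std(da_linear), np.std(da_nn))
```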
Abstract:
A state-of-the-art review on holographic optical elements (HOE) is presented in two parts. Part I includes a conceptual overview and an assessment of the current status of HOE design. It is pointed out that HOE development based on the use of squeezed light, speckle and non-linear recording, together with comparative studies between optics and communication approaches, are some of the promising directions for future research in this vital area of photonics.
Abstract:
This paper investigates the clustering pattern in the Finnish stock market. Using trading volume and time as factors capturing the clustering pattern in the market, the Keim and Madhavan (1996) and Engle and Russell (1998) models provide the framework for the analysis. The descriptive and parametric analyses provide evidence that an important determinant of the famous U-shaped pattern in the market is the rate of information arrival, as measured by large trading volumes and durations at the market open and close. Specifically: (1) the larger the trading volume, the greater the impact on prices both in the short and the long run, so prices will differ across quantities; (2) large trading volume is a non-linear function of price changes in the long run; (3) arrival times are positively autocorrelated, indicating a clustering pattern; and (4) information arrivals, as approximated by durations, are negatively related to trading flow.
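For context, the Engle and Russell (1998) framework referred to above is the autoregressive conditional duration (ACD) model; in its simplest ACD(1,1) form the i-th duration x_i between trades is modelled as

```latex
x_i = \psi_i\, \varepsilon_i, \qquad \psi_i = \omega + \alpha\, x_{i-1} + \beta\, \psi_{i-1},
```

where \psi_i is the conditional expected duration and \varepsilon_i are i.i.d. positive innovations with unit mean; positive \alpha and \beta generate the positively autocorrelated, clustered arrival times reported in the abstract.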
Abstract:
In this paper, we present a wavelet-based approach to solve the non-linear perturbation equation encountered in optical tomography. A particularly suitable data-gathering geometry is used to gather a data set consisting of differential changes in intensity owing to the presence of the inhomogeneous regions. With this scheme, the unknown image, the data, as well as the weight matrix are all represented by wavelet expansions, thus yielding a representation of the original non-linear perturbation equation in the wavelet domain. The advantage of using the non-linear perturbation equation is that there is no need to recompute the derivatives during the entire reconstruction process. Once the derivatives are computed, they are transformed into the wavelet domain. The purpose of going to the wavelet domain is that it has an inherent localization and de-noising property. The use of approximation coefficients, without the detail coefficients, is ideally suited for diffuse optical tomographic reconstructions, as the diffusion equation removes most of the high-frequency information and the reconstruction appears low-pass filtered. We demonstrate through numerical simulations that, by solving for merely the approximation coefficients, one can reconstruct an image which has the same information content as the reconstruction from a non-waveletized procedure. In addition, we demonstrate a better noise tolerance and a much reduced computation time for reconstructions from this approach.
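As a hedged illustration of the property exploited above (not the authors' reconstruction code; it assumes the PyWavelets package is available), the sketch below applies a 2-D discrete wavelet transform to a smooth toy perturbation image, discards the detail coefficients, and checks how little is lost by keeping only the approximation coefficients.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Toy "absorption perturbation" image with a smooth inhomogeneity, mimicking the
# low-pass character of diffuse optical tomography reconstructions.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
image = np.exp(-((x - 0.2) ** 2 + (y + 0.1) ** 2) / 0.05)

# Two-level 2-D discrete wavelet transform.
coeffs = pywt.wavedec2(image, "haar", level=2)

# Keep the approximation coefficients, zero all detail coefficients.
approx_only = [coeffs[0]] + [
    tuple(np.zeros_like(d) for d in detail) for detail in coeffs[1:]
]
recon = pywt.waverec2(approx_only, "haar")

rel_err = np.linalg.norm(recon - image) / np.linalg.norm(image)
print(f"relative error using approximation coefficients only: {rel_err:.3f}")
```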
Abstract:
Ex-situ grown thin films of SrBi2Nb2O9 (SBN) were deposited on platinum substrates using the laser ablation technique. A low substrate-temperature processing route was chosen to avoid any diffusion of bismuth into the Pt electrode. It was observed that the as-grown films showed oriented growth along the 'c'-axis (with zero spontaneous polarization). The as-grown films were subsequently annealed to enhance crystallization. Upon annealing, these films transformed into a polycrystalline structure and exhibited excellent ferroelectric properties. Switching was made possible by lowering the thickness without losing the electrically insulating behavior of the films. The hysteresis results showed an excellent square-shaped loop, with values (P-r = 4 muC/cm(2), E-c = 90 kV/cm) in good agreement with earlier reports. The films also exhibited a dielectric constant of 190 and a dissipation factor of 0.02, which showed dispersion at low frequencies. The frequency dispersion was found to obey Jonscher's universal power law relation, and was attributed to the ionic charge hopping process in accordance with earlier reports. The dc transport studies indicated ohmic behavior in the low-voltage region, while higher voltages induced a bulk space charge and resulted in a non-linear current-voltage dependence.
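The low-frequency dispersion referred to above follows Jonscher's universal power law, which in its usual form writes the ac conductivity as

```latex
\sigma(\omega) = \sigma_{\mathrm{dc}} + A\,\omega^{n}, \qquad 0 < n < 1,
```

where \sigma_{dc} is the dc conductivity, A is a temperature-dependent prefactor and the exponent n characterizes the hopping process.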
Abstract:
Solid polymer electrolytes (SPEs) of poly(ethylene oxide) and magnesium triflate, plasticized with propylene carbonate (PC), ethylene carbonate (EC) and a mixture of PC and EC, are studied by conductivity measurements, ac impedance of the Mg/SPE interface, cyclic voltammetry, infrared spectroscopy and differential scanning calorimetry. In the presence of plasticizers, the ionic conductivity (sigma) increases from a value of 1 x 10(-8) S cm(-1) to about 1 x 10(-4) S cm(-1) at ambient temperature. The conductivity is found to follow a VTF relationship with temperature. The values of the activation energy, pre-exponential factor and equilibrium glass transition temperature are shown to depend on the concentration of plasticizer. Ac impedance studies indicate a lower interfacial impedance for Mg/plasticized SPE than for stainless steel/plasticized SPE. The impedance spectra are analyzed using a non-linear least-squares curve-fitting technique and the interfacial resistance of Mg/plasticized SPE is evaluated. The cyclic voltammetric results suggest a quasi-reversible Mg/Mg2+ couple in the plasticized SPE. (C) 2000 Elsevier Science B.V. All rights reserved.
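One common form of the Vogel-Tamman-Fulcher (VTF) relation invoked above (the paper's exact parameterization may differ) is

```latex
\sigma(T) = A\, T^{-1/2} \exp\!\left[ \frac{-E_a}{k_B\,(T - T_0)} \right],
```

where A is the pre-exponential factor, E_a the (pseudo-)activation energy and T_0 the equilibrium glass transition temperature, the three quantities the abstract reports as depending on plasticizer concentration.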
Abstract:
To study the effect of hydrostatic pressure on the incommensurate lattice modulation at 153 K in K3Cu8S6, electrical resistivity measurements were carried out at 1.0 GPa, 1.5 GPa and 2.2 GPa. The sharp increase in resistance at 2.2 GPa is attributed to the incommensurate-to-commensurate transition. This is further confirmed by the non-linear I–V characteristics at 2.2 GPa, which show the driven motion of the commensurate charge density wave in the presence of an external electric field.
Abstract:
This study reports the details of the finite element analysis of eleven shear-critical partially prestressed concrete T-beams having steel fibers over partial or full depth. Prestressed concrete T-beams having shear span to depth ratios of 2.65 and 1.59 and failing in shear have been analyzed using 'ANSYS'. The 'ANSYS' model accounts for nonlinear phenomena such as bond-slip of the longitudinal reinforcement, post-cracking tensile stiffness of the concrete, stress transfer across the cracked blocks of the concrete and load sustenance through the bridging of steel fibers at the crack interface. The concrete is modeled using 'SOLID65', an eight-node brick element capable of simulating the cracking and crushing behavior of brittle materials. The reinforcements, such as deformed bars, prestressing wires and steel fibers, have been modeled discretely using 'LINK8', a 3D spar element. The slip between the reinforcement (rebar, fibers) and the concrete has been modeled using 'COMBIN39', a non-linear spring element connecting the nodes of the 'LINK8' elements representing the reinforcement and the nodes of the 'SOLID65' elements representing the concrete. The 'ANSYS' model correctly predicted the diagonal tension failure and shear compression failure of the prestressed concrete beams observed in the experiments. The capability of the model to capture the critical crack regions, loads and deflections for various types of shear failure in prestressed concrete beams has been illustrated.
Abstract:
There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction for a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them being a new one. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems have been described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (like power, water, messages, goods, etc.), an output, or none. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required will be the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we should find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared with the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) occurs in stage two also.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communications. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
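As a hedged, minimal sketch of the Lagrange-multiplier treatment outlined above (a toy quadratic network model, not the paper's formulation or its distributed algorithms), the snippet below minimizes a quadratic flow cost subject to linear node-balance constraints by solving the resulting KKT system for the flows and the node multipliers together.

```python
import numpy as np

def solve_network(Q, c, A, b):
    """min 0.5 z^T Q z + c^T z  subject to  A z = b  (Q symmetric positive definite).
    Returns the flows z and the Lagrange multipliers (one per node-balance constraint)."""
    n, m = Q.shape[0], A.shape[0]
    kkt = np.block([[Q, A.T],
                    [A, np.zeros((m, m))]])   # KKT matrix of the equality-constrained problem
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]

# Toy 3-arc network with node-balance (incidence-style) constraints on arc flows.
A = np.array([[ 1.0, 0.0, -1.0],    # node 1: flow out on arc 1, in on arc 3
              [-1.0, 1.0,  0.0]])   # node 2: flow in on arc 1, out on arc 2
b = np.array([2.0, 0.0])            # required net injections at nodes 1 and 2
Q = np.diag([1.0, 2.0, 0.5])        # quadratic "loss" cost per arc
c = np.zeros(3)

z, lam = solve_network(Q, c, A, b)
print("arc flows:", np.round(z, 3), " node multipliers:", np.round(lam, 3))
```

In the nonlinear, iterative setting described in the abstract, the same constraint Jacobian would be reused in both the variable-update and the multiplier-update stages of each iteration.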