112 results for second order condition
Abstract:
This paper addresses an advanced computational technique for steel structures that provides two simulation capabilities simultaneously: a higher-order element formulation with element load effects (geometric nonlinearity) and the refined plastic hinge method (material nonlinearity). This advanced computational technique can capture the real second-order inelastic behaviour of a whole structure, which in turn ensures the structural safety and adequacy of the structure. The emphasis of this paper is therefore to advocate that the advanced computational technique can replace the traditional empirical design approach. In the meantime, practitioners should be educated in how to use the advanced computational technique for the second-order inelastic design of a structure, as this approach is the future of structural engineering design. This means the future engineer should understand the computational technique clearly, grasp the behaviour of a structure with respect to the numerical analysis thoroughly, and justify the numerical results correctly, especially because the fool-proof ultimate finite element, one that is competent in modelling behaviour, user-friendly in numerical modelling and versatile for all structural forms and materials, is yet to come. Hence, high-quality engineers are required who can confidently command the advanced computational technique for the design of a complex structure, rather than the other way around.
Abstract:
Interpolation techniques for spatial data are applied frequently in various fields of the geosciences. Although most conventional interpolation methods assume that first- and second-order statistics suffice to characterise random fields, researchers now realise that these methods cannot always provide reliable interpolation results, since geological and environmental phenomena tend to be very complex, presenting non-Gaussian distributions and/or non-linear inter-variable relationships. This paper proposes a new, highly flexible approach to the interpolation of spatial data. Suitable cross-variable higher-order spatial statistics are developed to measure the spatial relationship between the random variable at an unsampled location and those in its neighbourhood. Given the computed cross-variable higher-order spatial statistics, the conditional probability density function (CPDF) is approximated via polynomial expansions and then used to determine the interpolated value at the unsampled location as an expectation. In addition, the uncertainty associated with the interpolation is quantified by constructing prediction intervals for the interpolated values. The proposed method is applied to a mineral deposit dataset, and the results demonstrate that it outperforms kriging methods in uncertainty quantification. The introduction of cross-variable higher-order spatial statistics noticeably improves the quality of the interpolation, since it enriches the information that can be extracted from the observed data; this benefit is substantial when working with data that are sparse or have non-trivial dependence structures.
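The "higher-order spatial statistics" in the abstract above are, at bottom, sample averages of products of values taken over replicated spatial configurations. The sketch below (not the paper's algorithm, which works with scattered multidimensional data and a CPDF expansion) only illustrates that principle on a regularly sampled 1-D transect with an invented data vector:

```python
# Illustrative sketch: estimating a third-order spatial moment
# E[Z(u) * Z(u+h) * Z(u+2h)] from a regularly sampled 1-D transect.
# The sample values below are made up for demonstration.

def third_order_moment(z, lag):
    """Average of z[i] * z[i+lag] * z[i+2*lag] over all valid i."""
    n = len(z) - 2 * lag
    if n <= 0:
        raise ValueError("transect too short for this lag")
    return sum(z[i] * z[i + lag] * z[i + 2 * lag] for i in range(n)) / n

samples = [1.0, 2.0, 0.5, 1.5, 1.0, 2.5, 0.5, 1.0]
m3 = third_order_moment(samples, lag=1)
```

In practice such moments are computed for many lag configurations and fed into the polynomial expansion of the CPDF; the expectation of that CPDF is the interpolated value.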
Abstract:
In this paper, a class of unconditionally stable difference schemes based on the Padé approximation is presented for the Riesz space-fractional telegraph equation. Firstly, we introduce a new variable to transform the original differential equation into an equivalent differential equation system. Then, we apply a second-order fractional central difference scheme to discretise the Riesz space-fractional operator. Finally, we use (1, 1), (2, 2) and (3, 3) Padé approximations to give a fully discrete difference scheme for the resulting linear system of ordinary differential equations. Matrix analysis is used to show the unconditional stability of the proposed algorithms. Two examples with known exact solutions are chosen to assess the proposed difference schemes. Numerical results demonstrate that these schemes provide accurate and efficient methods for solving a space-fractional hyperbolic equation.
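A second-order fractional central difference of the kind mentioned above approximates the Riesz derivative of order alpha as -h^(-alpha) * sum_k g_|k| u(x - kh) with symmetric weights g_k. The sketch below computes these weights via the standard stable recursion (a textbook construction, not necessarily the exact scheme of the paper):

```python
# Weights of the fractional central difference for the Riesz derivative,
# g_k = (-1)^k * Gamma(alpha+1) / (Gamma(alpha/2-k+1) * Gamma(alpha/2+k+1)),
# computed by the recursion g_{k+1} = g_k * (k - alpha/2) / (k + 1 + alpha/2)
# to avoid overflowing Gamma for large k. Standard construction, shown for
# illustration only.
from math import gamma

def central_weights(alpha, kmax):
    """Symmetric weights g[k] = g[-k] for k = 0..kmax."""
    g = [gamma(alpha + 1) / gamma(alpha / 2 + 1) ** 2]
    for k in range(kmax):
        g.append(g[-1] * (k - alpha / 2) / (k + 1 + alpha / 2))
    return g

w = central_weights(1.8, 200)      # a typical fractional order in (1, 2)
v = central_weights(2.0, 3)        # alpha = 2 recovers the classical stencil 2, -1, 0
```

For alpha strictly between 1 and 2 the weights decay like k^(-alpha-1) and sum (over all k) to zero, which is why the truncated sum g_0 + 2*sum(g_k) is small.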
Abstract:
In the finite element modelling of structural frames, external loads such as wind, dead and imposed loads usually act along the elements rather than at the nodes only. Conventionally, when an element is subjected to such general transverse element loads, they are converted to nodal forces acting at the ends of the element by either the lumped or the consistent load approach. This conversion is especially important for the first- and second-order elastic behaviour to which steel structures, in particular thin-walled steel structures, are critically prone, whereas stocky element sections are generally governed by inelastic behaviour. Accurate first- and second-order elastic displacement solutions for the element load effect along an element are therefore vital, but they cannot be simulated by either the nodal or the consistent load method alone when no equilibrium condition is enforced in the finite element formulation, and this shortfall can impair the structural safety of a steel structure in particular. A dedicated element load method is therefore needed to account for the element load nonlinearly. If accurate displacement solutions are targeted when simulating the first- and second-order elastic behaviour of an element with a sophisticated non-linear element stiffness formulation, numerous prescribed stiffness matrices must be used to cover the plethora of specific transverse loading patterns encountered. To circumvent this shortcoming, the present paper proposes a numerical technique that includes transverse element loading in the non-linear stiffness formulation without numerous prescribed stiffness matrices, and which can predict structural responses involving the effect of first-order element loads as well as the second-order coupling between the transverse load and the axial force in the element.
This paper shows that the principle of superposition can be applied to derive a generalized stiffness formulation for the element load effect, so that the form of the stiffness matrix remains unchanged regardless of the specific loading pattern; only the magnitude of the loading (the element load coefficients) needs to be adjusted in the stiffness formulation, and the non-linear effect of element loading can then be accounted for by updating the element load coefficients through the non-linear solution procedure. In principle, the element loading distribution is converted into a single loading magnitude at mid-span, which provides the initial perturbation that triggers the member bowing effect due to the transverse element loads. This approach sacrifices the effect of the element loading distribution except at mid-span, so the load-deflection behaviour away from mid-span may be less accurate, but the discrepancy is shown to be trivial. This novelty yields a very useful generalized stiffness formulation for a single higher-order element under arbitrary transverse loading patterns. A further significance of this paper lies in shifting from purely nodal response (system analysis) to both nodal and element response (sophisticated element formulation). In a conventional finite element method, such as the cubic element, accurate solutions can be found only at the nodes; accuracy and structural safety cannot be ensured within an element, which hinders engineering application. The results of the paper are verified against analytical stability function studies as well as numerical results reported by independent researchers on several simple frames.
Abstract:
A two-dimensional variable-order fractional nonlinear reaction-diffusion model is considered, and a semi-implicit alternating direction method with second-order spatial accuracy is proposed for it. Stability and convergence of the semi-implicit alternating direction method are established. Finally, some numerical examples are given to support the theoretical analysis. These numerical techniques can be used to simulate a two-dimensional variable-order fractional FitzHugh-Nagumo model in a rectangular domain. This type of model can describe how electrical currents flow through the heart, controlling its contractions, and is used to ascertain the effects of certain drugs designed to treat arrhythmia.
Abstract:
Efficient and accurate geometric and material nonlinear analysis of structures under ultimate loads is the backbone of successful integrated analysis and design, the performance-based design approach and progressive collapse analysis. This paper presents an advanced computational technique combining a higher-order element formulation with the refined plastic hinge approach, which can evaluate concrete and steel-concrete structures prone to nonlinear material effects (i.e. gradual yielding, full plasticity, the strain-hardening effect under the interaction between axial and bending actions, and load redistribution) as well as nonlinear geometric effects (i.e. the second-order P-δ and P-Δ effects and the associated strength and stiffness degradation). Further, this paper also presents the cross-section analysis used to formulate the refined plastic hinge approach.
Abstract:
Many physical processes appear to exhibit fractional-order behaviour that may vary with time and/or space. The continuum of order in the fractional calculus allows the order of a fractional operator to be considered as a variable. In this paper, we consider a new space–time variable fractional order advection–dispersion equation on a finite domain. The equation is obtained from the standard advection–dispersion equation by replacing the first-order time derivative by Coimbra's variable fractional derivative of order α(x) ∈ (0,1], and the first-order and second-order space derivatives by the Riemann–Liouville derivatives of order γ(x,t) ∈ (0,1] and β(x,t) ∈ (1,2], respectively. We propose an implicit Euler approximation for the equation and investigate the stability and convergence of the approximation. Finally, numerical examples are provided to show that the implicit Euler approximation is computationally efficient.
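The implicit Euler idea behind the scheme above can be seen most plainly in the integer-order limit (time order 1, space order 2, no advection term), which reduces to classical backward Euler for the diffusion equation. The sketch below shows that limit only, not the paper's variable-order fractional scheme: each step solves (I - τA)u_new = u_old, which remains stable for arbitrarily large time steps τ.

```python
# Backward (implicit) Euler for u_t = u_xx with zero Dirichlet boundaries,
# shown as the integer-order special case of the implicit approach above.
# The tridiagonal solve uses the Thomas algorithm.

def solve_tridiagonal(a, b, c, d):
    """a = sub-diagonal, b = main diagonal, c = super-diagonal, d = rhs."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_euler_step(u, tau, h):
    """One backward-Euler step: (I - tau*A) u_new = u_old."""
    r = tau / h ** 2
    n = len(u)
    return solve_tridiagonal([-r] * n, [1 + 2 * r] * n, [-r] * n, u)

# Deliberately huge time step: explicit Euler would blow up here,
# while the implicit step just damps the solution.
h, tau = 0.1, 10.0
u = [1.0] * 9
for _ in range(5):
    u = implicit_euler_step(u, tau, h)
```

The same algebraic structure carries over to the fractional case; only the matrix A changes, becoming the dense discretisation of the variable-order fractional operators.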
Abstract:
The solution of linear ordinary differential equations (ODEs) is commonly taught in first-year undergraduate mathematics classrooms, but the concept of a solution is not always grasped by students until much later. Recognising what it is to be a solution of a linear ODE, and how to postulate such solutions without resorting to tables of solutions, is an important skill for students to carry with them to advanced studies in mathematics. In this study we describe a teaching and learning strategy that replaces the traditional algorithmic, transmission presentation style for solving ODEs with a constructive, discovery-based approach in which students employ their existing skills as a framework for constructing the solutions of first- and second-order linear ODEs. We elaborate on how the strategy was implemented and discuss the resulting impact on a first-year undergraduate class. Finally, we propose further improvements to the strategy as well as suggesting other topics which could be taught in a similar manner.
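A standard worked example of the solution-postulating skill described above (one the paper itself does not spell out): for a constant-coefficient second-order linear ODE, students can guess an exponential form and recover the general solution themselves.

```latex
\[
y'' - 3y' + 2y = 0, \qquad \text{postulate } y = e^{\lambda x}
\;\Rightarrow\; \lambda^2 - 3\lambda + 2 = (\lambda - 1)(\lambda - 2) = 0,
\]
\[
\lambda \in \{1, 2\} \;\Rightarrow\; y(x) = A e^{x} + B e^{2x}.
\]
```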
Abstract:
We develop and test a theoretically-based integrative model of organizational innovation adoption. Confirmatory factor analyses using responses from 134 organizations showed that the hypothesized second-order model was a better fit to the data than the traditional model of independent factors. Furthermore, although not all elements were significant, the hypothesized model fit adoption better than the traditional model.
Abstract:
Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
Abstract:
This paper reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions (composed of 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query-reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek system searching assistance early in the session or after a content change. The results of our evaluations show that the first- and second-order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance in real time.
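The n-gram approach in the two abstracts above can be sketched in a few lines. Below is a toy first-order model (predict the next reformulation state from the current one) trained on an invented three-session log; the state names and data are illustrative only, not taken from the study:

```python
# Toy first-order (bigram) model of query-reformulation transitions:
# count state-to-state transitions in session logs, then predict the
# most likely next state. Sessions and state names are invented.
from collections import Counter, defaultdict

sessions = [
    ["New", "Reformulation", "Assistance", "Reformulation"],
    ["New", "Specialization", "Reformulation", "Assistance"],
    ["New", "Reformulation", "Assistance", "Assistance"],
]

counts = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):
        counts[prev][nxt] += 1

def predict_next(state):
    """Most frequent successor of `state` in the training sessions."""
    return counts[state].most_common(1)[0][0]
```

Higher-order models condition on the last two, three, or four states instead of one, trading coverage of the dataset for a larger possible pattern set, which is exactly the accuracy/coverage/complexity trade-off the study evaluates.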
Abstract:
Suggestions that peripheral imagery may affect the development of refractive error have led to interest in the variation in refraction and aberration across the visual field. It is shown that, if the optical system of the eye is rotationally symmetric about an optical axis which does not coincide with the visual axis, measurements of refraction and aberration made along the horizontal and vertical meridians of the visual field will show asymmetry about the visual axis. The departures from symmetry are modelled for second-order aberrations, refractive components and third-order coma. These theoretical results are compared with practical measurements from the literature. The experimental data support the concept that departures from symmetry about the visual axis in the measurements of crossed-cylinder astigmatism J45 and J180 are largely explicable in terms of a decentred optical axis. Measurements of the mean sphere M suggest, however, that the retinal curvature must differ in the horizontal and vertical meridians.
Abstract:
In this paper, we consider the numerical solution of a fractional partial differential equation with Riesz space fractional derivatives (FPDE-RSFD) on a finite domain. Two types of FPDE-RSFD are considered: the Riesz fractional diffusion equation (RFDE) and the Riesz fractional advection–dispersion equation (RFADE). The RFDE is obtained from the standard diffusion equation by replacing the second-order space derivative with the Riesz fractional derivative of order α ∈ (1,2]. The RFADE is obtained from the standard advection–dispersion equation by replacing the first-order and second-order space derivatives with the Riesz fractional derivatives of order β ∈ (0,1) and of order α ∈ (1,2], respectively. Firstly, analytic solutions of both the RFDE and RFADE are derived. Secondly, three numerical methods are provided to deal with the Riesz space fractional derivatives, namely, the L1/L2-approximation method, the standard/shifted Grünwald method, and the matrix transform method (MTM). Thirdly, the RFDE and RFADE are transformed into a system of ordinary differential equations, which is then solved by the method of lines. Finally, numerical results are given, which demonstrate the effectiveness and convergence of the three numerical methods.
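The standard/shifted Grünwald method named above is built from the weights w_k = (-1)^k binom(α, k), which are computed in practice with a simple recursion. The sketch below is the textbook construction, shown for illustration (it is not code from the paper):

```python
# Grünwald weights w_k = (-1)^k * binom(alpha, k) via the stable recursion
# w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1) / k). In the shifted scheme the
# k-th weight multiplies u(x - (k - 1) h).

def grunwald_weights(alpha, n):
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

# Sanity check: alpha = 2 reproduces the classical second-difference
# stencil 1, -2, 1 (all higher weights vanish).
w2 = grunwald_weights(2.0, 4)
```

For non-integer α ∈ (1,2] the weights form an infinite, slowly decaying sequence, which is why the resulting discretisation matrices are dense, in contrast to the tridiagonal matrices of the classical diffusion equation.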
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), and long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order statistics only, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
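The DFA baseline that MF-DFA generalises (the q = 2 case mentioned in the abstract above) can be sketched compactly: integrate the mean-removed series into a profile, detrend it in windows of varying scale, and read the memory exponent off the log-log slope of the fluctuation function. This is a minimal stdlib-only sketch with linear detrending and a handful of scales, not research-grade MF-DFA:

```python
# Minimal detrended fluctuation analysis (DFA, q = 2). For uncorrelated
# noise the estimated exponent is close to 0.5; long memory shows up as
# an exponent above 0.5.
import math
import random

def linfit_residual_var(y):
    """Variance of residuals after a least-squares line fit to y."""
    n = len(y)
    xm = (n - 1) / 2.0
    ym = sum(y) / n
    sxx = sum((i - xm) ** 2 for i in range(n))
    b = sum((i - xm) * (y[i] - ym) for i in range(n)) / sxx
    return sum((y[i] - ym - b * (i - xm)) ** 2 for i in range(n)) / n

def dfa_exponent(x, scales):
    """Slope of log F(s) versus log s over the given window sizes."""
    mean = sum(x) / len(x)
    profile, acc = [], 0.0
    for v in x:
        acc += v - mean
        profile.append(acc)
    pts = []
    for s in scales:
        nseg = len(profile) // s
        f2 = sum(linfit_residual_var(profile[v * s:(v + 1) * s])
                 for v in range(nseg)) / nseg
        pts.append((math.log(s), 0.5 * math.log(f2)))
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    return (sum((p[0] - mx) * (p[1] - my) for p in pts)
            / sum((p[0] - mx) ** 2 for p in pts))

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4096)]
h = dfa_exponent(noise, [16, 32, 64, 128, 256])
```

MF-DFA replaces the single q = 2 fluctuation function with a family over a range of q values and uses higher-order polynomial detrending, yielding the multifractal scaling exponents used throughout the thesis.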
Abstract:
A common optometric problem is to specify the eye's ocular aberrations in terms of Zernike coefficients and to reduce that specification to a prescription for the optimum sphero-cylindrical correcting lens. The typical approach is first to reconstruct wavefront phase errors from measurements of wavefront slopes obtained by a wavefront aberrometer. This paper applies a new method to this clinical problem that does not require wavefront reconstruction. Instead, we base our analysis on axial wavefront vergence as inferred directly from wavefront slopes. The result is a wavefront vergence map that is similar to the axial power maps in corneal topography and hence has the potential to be favoured by clinicians. We use our new set of orthogonal Zernike slope polynomials to systematically analyse details of the vergence map, analogous to Zernike analysis of wavefront maps. The result is a vector of slope coefficients that describe fundamental aberration components. Three different methods for reducing slope coefficients to a sphero-cylindrical prescription in power vector form are compared and contrasted. When the original wavefront contains only second-order aberrations, the vergence map is a function of meridian only and the power vectors from all three methods are identical. Differences between the methods begin to appear as higher-order aberrations are included, in which case the wavefront vergence map is more complicated. Finally, we discuss the advantages and limitations of the vergence map representation of ocular aberrations.
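For context, the conventional wavefront-based baseline that this paper seeks to bypass converts the three second-order Zernike coefficients directly into a power vector (M, J0, J45). The sketch below shows that standard conversion (Thibos-style conventions: coefficients in microns, pupil radius in millimetres, powers in dioptres); it is the textbook step, not the paper's slope-based method:

```python
# Standard conversion from second-order Zernike coefficients to a
# sphero-cylindrical power vector (M, J0, J45). c20 = defocus,
# c22 / c2m2 = the two astigmatism terms.
import math

def power_vector(c20, c22, c2m2, pupil_radius_mm):
    r2 = pupil_radius_mm ** 2
    M = -c20 * 4.0 * math.sqrt(3.0) / r2     # spherical equivalent
    J0 = -c22 * 2.0 * math.sqrt(6.0) / r2    # with/against-the-rule astigmatism
    J45 = -c2m2 * 2.0 * math.sqrt(6.0) / r2  # oblique astigmatism
    return M, J0, J45

M, J0, J45 = power_vector(1.0, 0.0, 0.0, 3.0)  # a pure defocus example
```

When higher-order aberrations are present this second-order-only reduction is no longer unique, which is precisely where the paper's three competing slope-coefficient methods start to differ.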