922 results for Lagrange interpolation
Abstract:
This thesis consists of two parts. In the first part we performed single-molecule force-extension measurements with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the tethers found had a persistence length between 30 and 60 nm, whereas the correct value should lie between 40 and 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drift of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced drift of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
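For context, the worm-like chain interpolation formula referred to above is commonly taken to be the Marko-Siggia expression; a minimal, hedged fitting sketch is given below. The extension values, noise level and starting guesses are placeholders, not data from the thesis.

```python
# Minimal sketch: fitting the Marko-Siggia worm-like chain (WLC) interpolation
# formula to force-extension data. All data below are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

KBT = 4.11  # thermal energy at room temperature, in pN*nm (assumed value)

def wlc_force(x, Lc, Lp):
    """Marko-Siggia WLC force (pN) at extension x (nm), contour length Lc (nm),
    persistence length Lp (nm)."""
    r = np.clip(x / Lc, 0.0, 0.999)              # keep the fit away from the singularity
    return (KBT / Lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# Placeholder force-extension data for one tether (Lc ~ 3.38 um, Lp ~ 50 nm).
extension = np.array([500.0, 1500.0, 2500.0, 3000.0, 3200.0, 3300.0])
force = wlc_force(extension, 3380.0, 50.0) + np.random.normal(0, 0.05, extension.size)

# Fit contour length and persistence length from the data.
popt, _ = curve_fit(wlc_force, extension, force, p0=(3400.0, 45.0))
print(f"contour length = {popt[0]:.0f} nm, persistence length = {popt[1]:.1f} nm")
```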
Abstract:
The structure and function of northern ecosystems are strongly influenced by climate change and variability and by human-induced disturbances. The projected global change is likely to have a pronounced effect on the distribution and productivity of different species, generating large changes in the equilibrium at the tree-line. In turn, movement of the tree-line and the redistribution of species produce feedback to both the local and the regional climate. This research was initiated with the objective of examining the influence of natural conditions on the small-scale spatial variation of climate in Finnish Lapland, and of studying the interaction and feedback mechanisms in the climate-disturbances-vegetation system near the climatological border of boreal forest. The high (1 km) resolution spatial variation of climate parameters over northern Finland was determined by applying the Kriging interpolation method that takes into account the effect of external forcing variables, i.e., geographical coordinates, elevation, sea and lake coverage. Of all the natural factors shaping the climate, the geographical position, local topography and altitude proved to be the determining ones. Spatial analyses of temperature- and precipitation-derived parameters based on a 30-year dataset (1971-2000) provide a detailed description of the local climate. Maps of the mean, maximum and minimum temperatures, the frost-free period and the growing season indicate that the most favourable thermal conditions exist in the south-western part of Lapland, around large water bodies and in the Kemijoki basin, while the coldest regions are in highland and fell Lapland. The distribution of precipitation is predominantly longitudinally dependent but with a definite influence of local features. The impact of human-induced disturbances, i.e., forest fires, on local climate and its implication for forest recovery near the northern timberline was evaluated in the Tuntsa area of eastern Lapland, which was damaged by a widespread forest fire in 1960 and has suffered repeatedly failed vegetation recovery since then. Direct measurements of the local climate and simulated heat and water fluxes indicated the development of a more severe climate and physical conditions on the fire-disturbed site. Removal of the original, predominantly Norway spruce and downy birch vegetation and its substitution by tundra vegetation has generated increased wind velocity and reduced snow accumulation, associated with a large variation in soil temperature and moisture and deep soil frost. The changed structural parameters of the canopy have altered the energy fluxes, reducing them over the tundra vegetation. The altered surface and soil conditions, as well as the evolved severe local climate, have negatively affected seedling growth and survival, leading to more unfavourable conditions for the reproduction of boreal vegetation and thereby causing deviations in the regional position of the timberline. However, it should be noted that other factors, such as an inadequate seed source or seedbed, the poor quality of the soil and the intensive logging of damaged trees, could also have exacerbated the poor tree regeneration. In spite of the failed forest recovery at Tuntsa, the position and composition of the timberline and tree-line in Finnish Lapland may also benefit from present and future changes in climate.
The already-observed and the projected increase in temperature, the prolonged growing season, as well as changes in the precipitation regime foster tree growth and new regeneration, resulting in an advance of the timberline and tree-line northward and upward. This shift in the distribution of vegetation might be decelerated or even halted by local topoclimatic conditions and by the expected increase in the frequency of disturbances.
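As a loose illustration of the interpolation step described in this abstract, the sketch below interpolates station temperatures with a Gaussian-process regressor (closely related to kriging) using the external forcing variables as covariates. The station records, kernel choice and length scales are invented placeholders; the thesis's exact kriging-with-external-drift formulation, variogram model and 1 km grid are not reproduced.

```python
# Hedged sketch: kriging-style interpolation of station mean temperatures with
# external forcing variables (coordinates, elevation, water coverage) as inputs.
# All values are placeholders, not data from the thesis.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Station records: [longitude, latitude, elevation_m, water_fraction]
X_stations = np.array([
    [24.0, 67.5, 180.0, 0.05],
    [27.5, 68.2, 320.0, 0.10],
    [26.0, 69.1, 450.0, 0.02],
    [25.2, 66.8, 120.0, 0.20],
])
t_mean = np.array([-1.2, -2.8, -4.1, -0.5])   # 1971-2000 mean temperature, deg C (placeholder)

# A GP with an anisotropic RBF kernel plus a noise term behaves like simple
# kriging with a Gaussian covariance model; one length scale per covariate.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[1.0, 1.0, 100.0, 0.1]) + WhiteKernel(1e-2),
    normalize_y=True,
)
gp.fit(X_stations, t_mean)

# Predict at one grid cell (placeholder coordinates).
grid_cell = np.array([[25.8, 68.0, 250.0, 0.08]])
t_hat, t_std = gp.predict(grid_cell, return_std=True)
print(f"interpolated mean temperature: {t_hat[0]:.2f} +/- {t_std[0]:.2f} deg C")
```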
Abstract:
A finite element model for the analysis of laminated composite cylindrical shells with through cracks is presented. The analysis takes into account anisotropic elastic behaviour, bending-extensional coupling and transverse shear deformation effects. The proposed finite element model is based on the approach of dividing a cracked configuration into triangular shaped singular elements around the crack tip with adjoining quadrilateral shaped regular elements. The parabolic isoparametric cylindrical shell elements (both singular and regular) used in this model employ independent displacement and rotation interpolation in the shell middle surface. The numerical comparisons provide evidence that the proposed model yields accurate stress intensity factors from a relatively coarse mesh. Through the analysis of a pressurised fibre composite cylindrical shell with an axial crack, the effect of material orthotropy on the crack tip stress intensity factors is shown to be quite significant.
Abstract:
Curved hollow bars of laminated anisotropic construction are used as structural members in many industries. They are used in order to save weight without loss of stiffness in comparison with solid sections. This paper presents the details of the development of the stiffness matrices of laminated anisotropic curved hollow bars, under line-member assumptions, for two typical sections, circular and square. These are 16-d.o.f. elements that use one-dimensional first-order Hermite interpolation polynomials to describe the assumed displacement state. Problems for which analytical or other solutions are available are first solved using these elements, and good agreement is found between the results. To show the capability of the element, it is applied to carbon-fibre-reinforced plastic layered anisotropic curved hollow bars.
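For reference, the one-dimensional first-order Hermite interpolation mentioned above is conventionally written with cubic shape functions that interpolate the displacement and its slope at each end node; a generic form (not the specific 16-d.o.f. arrangement of this element) is:

```latex
% One-dimensional first-order (cubic) Hermite interpolation of a displacement
% component w over an element of length l, local coordinate \xi = x/l in [0,1];
% each end node carries the value w_i and the slope \theta_i = (dw/dx)_i.
\begin{aligned}
w(\xi) &= H_1(\xi)\,w_1 + H_2(\xi)\,\theta_1 + H_3(\xi)\,w_2 + H_4(\xi)\,\theta_2,\\
H_1 &= 1 - 3\xi^2 + 2\xi^3, & H_2 &= l\,(\xi - 2\xi^2 + \xi^3),\\
H_3 &= 3\xi^2 - 2\xi^3,     & H_4 &= l\,(\xi^3 - \xi^2).
\end{aligned}
```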
Abstract:
We propose a new type of high-order element that incorporates mesh-free Galerkin formulations into the framework of the finite element method. Traditional polynomial interpolation is replaced by mesh-free interpolations in the present high-order elements, and the strain smoothing technique is used for integration of the governing equations over smoothing cells. The properties of the high-order elements, which are influenced by the basis function of the mesh-free interpolations and by the boundary nodes, are discussed through numerical examples. It is found that the basis function has a significant influence on the computational accuracy and on the upper and lower bounds of the energy norm, while the strain smoothing technique retains the softening phenomenon. This new type of high-order element shows good performance when quadratic basis functions are used in the mesh-free interpolations, and the present elements prove advantageous in adaptive mesh and node refinement schemes. Furthermore, they are less sensitive to element quality because they use mesh-free interpolations and obey the Weakened Weak (W2) formulation introduced in [3, 5].
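For reference, the strain smoothing technique mentioned above typically replaces the compatible strain by its average over a smoothing cell, evaluated as a boundary integral via the divergence theorem; a generic form (not the paper's own notation) is:

```latex
% Cell-based strain smoothing over a smoothing cell \Omega_C of area A_C with
% boundary \Gamma_C and outward normal n.
\tilde{\varepsilon}_{ij}
  = \frac{1}{A_C} \int_{\Omega_C} \varepsilon_{ij}(\mathbf{x})\,\mathrm{d}\Omega
  = \frac{1}{2 A_C} \oint_{\Gamma_C} \left( u_i n_j + u_j n_i \right) \mathrm{d}\Gamma .
```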
Abstract:
This paper presents finite element analysis of laminated anisotropic beams of bimodulus materials. The finite element has 16 d.o.f. and uses a displacement field expressed in terms of first-order Hermite interpolation polynomials. As the neutral axis position may change from point to point along the length of the beam, an iterative procedure is employed to determine the location of zero-strain points along the length. Using this element, some problems of laminated beams of bimodulus materials are solved for concentrated loads/moments perpendicular and parallel to the layering planes, as well as combined loads.
Abstract:
As editors of the book Lilavati's Daughters: The Women Scientists of India, reviewed by Asha Gopinathan (Nature 460, 1082; 2009), we would like to elaborate on the background to its title. Lilavati was a mathematical treatise of the twelfth century, composed by the mathematician and astronomer Bhaskaracharya (1114–85) — also known as Bhaskara II — who was a teacher of repute and author of several other texts. The name Lilavati, which literally means 'playful', is a surprising title for an early scientific book. Some of the mathematical problems posed in the book are in verse form, and are addressed to a girl, the eponymous Lilavati. However, there is little real evidence concerning Lilavati's historicity. Tradition holds that she was Bhaskaracharya's daughter and that he wrote the treatise to console her after an accident that left her unable to marry. But this could be a later interpolation, as the idea was first mentioned in a Persian commentary. An alternative view has it that Lilavati was married at an inauspicious time and was widowed shortly afterwards. Other sources have implied that Lilavati was Bhaskaracharya's wife, or even one of his students — raising the possibility that women in parts of the Indian subcontinent could have participated in higher education as early as eight centuries ago. However, given that Bhaskara was a poet and pedagogue, it is also possible that he chose to address his mathematical problems to a doe-eyed girl simply as a whimsical and charming literary device.
Abstract:
The integration of stochastic wind power has accentuated a challenge for power system stability assessment. Since the power system is a time-variant system under wind generation fluctuations, it is difficult for pure time-domain simulations to provide real-time stability assessment. As a result, the worst-case scenario is simulated to give a very conservative assessment of system transient stability. In this study, a probabilistic contingency analysis through a stability measure method is proposed to provide a less conservative contingency analysis which covers 5-min wind fluctuations and a successive fault. This probabilistic approach estimates the transfer limit of a critical line for a given fault with stochastic wind generation and active control devices in a multi-machine system. It achieves a lower computation cost and improved accuracy using a new stability measure and polynomial interpolation, and is feasible for online contingency analysis.
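As a hedged illustration of the polynomial-interpolation step, the sketch below fits a low-order polynomial to a stability measure sampled at a few candidate transfer levels and reads off where it crosses zero. The transfer levels, measure values and sign convention are invented placeholders, not results or methods from the study.

```python
# Hedged sketch: estimating a transfer limit by polynomial interpolation of a
# stability measure sampled at a few transfer levels (which would normally come
# from time-domain simulations). All numbers are placeholders.
import numpy as np

transfer_mw = np.array([400.0, 500.0, 600.0, 700.0])   # candidate line transfers (MW)
margin = np.array([0.82, 0.55, 0.21, -0.18])            # stability measure, >0 means stable

coeffs = np.polyfit(transfer_mw, margin, deg=3)          # interpolating cubic
roots = np.roots(coeffs)                                  # where the margin reaches zero

# Keep the real root inside the sampled range: the estimated transfer limit.
real = roots[np.isreal(roots)].real
limit = real[(real >= transfer_mw.min()) & (real <= transfer_mw.max())]
print(f"estimated transfer limit: {limit[0]:.0f} MW")
```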
Abstract:
Past studies that have compared LBB stable discontinuous- and continuous-pressure finite element formulations on a variety of problems have concluded that both methods yield solutions of comparable accuracy, and that the choice of interpolation is dictated by which of the two is more efficient. In this work, we show that using discontinuous-pressure interpolations can yield inaccurate solutions at large times on a class of transient problems, while the continuous-pressure formulation yields solutions that are in good agreement with the analytical solution.
A Legendre spectral element model for sloshing and acoustic analysis in nearly incompressible fluids
Abstract:
A new spectral finite element formulation is presented for modeling the sloshing and the acoustic waves in nearly incompressible fluids. The formulation makes use of the Legendre polynomials in deriving the finite element interpolation shape functions in the Lagrangian frame of reference. The formulated element uses the Gauss-Lobatto-Legendre quadrature scheme for integrating the volumetric stiffness and the mass matrices, while the conventional Gauss-Legendre quadrature scheme is used on the rotational stiffness matrix to completely eliminate the zero-energy modes that are normally associated with the Lagrangian FE formulation. The numerical performance of the spectral element formulated here is examined by performing the inf-sup test on a standard rectangular rigid tank partially filled with liquid. The eigenvalues obtained from the formulated spectral element are compared with those from the conventional equally spaced node locations of the h-type Lagrangian finite element, and the predicted results show that these spectral elements are more accurate and give superior convergence. The efficiency and robustness of the formulated elements are demonstrated by solving a few standard problems involving free vibration and dynamic response analysis with undistorted and distorted spectral elements, and the obtained results are compared with available results in the published literature.
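For reference, the Gauss-Lobatto-Legendre quadrature named above can be generated as in the generic sketch below; this illustrates the quadrature rule only, not the authors' element implementation.

```python
# Hedged sketch: Gauss-Lobatto-Legendre (GLL) nodes and weights on [-1, 1],
# the quadrature typically used by Legendre spectral elements.
import numpy as np
from numpy.polynomial.legendre import Legendre

def gll(n):
    """Return the n+1 GLL nodes and weights on [-1, 1] (n >= 1)."""
    P = Legendre.basis(n)                                   # Legendre polynomial P_n
    nodes = np.concatenate(([-1.0], P.deriv().roots(), [1.0]))
    weights = 2.0 / (n * (n + 1) * P(nodes) ** 2)           # w_i = 2 / (n(n+1) P_n(x_i)^2)
    return nodes, weights

x, w = gll(4)
print(np.round(x, 4))            # nodes cluster towards the element ends
print(np.round(w, 4), w.sum())   # weights sum to 2, the length of [-1, 1]
```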
Abstract:
In this paper we propose a novel family of kernels for multivariate time-series classification problems. Each time series is approximated by a linear combination of piecewise polynomial functions in a Reproducing Kernel Hilbert Space by a novel kernel interpolation technique. Using the associated kernel function, a large-margin classification formulation is proposed which can discriminate between two classes. The formulation leads to kernels between two multivariate time series which can be efficiently computed. The kernels have been successfully applied to writer-independent handwritten character recognition.
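As a loose illustration of kernel interpolation in an RKHS, the sketch below approximates one channel of a series as a linear combination of kernel functions centred at the sample times, by solving a regularized linear system. The RBF kernel, regularization weight and data are placeholders; the paper's piecewise-polynomial kernel and the resulting series-to-series kernel are not reproduced here.

```python
# Hedged sketch: regularized kernel interpolation of a (single-channel) time
# series, f(t) = sum_i alpha_i k(t, t_i). All choices below are illustrative.
import numpy as np

def rbf_kernel(s, t, gamma=5.0):
    return np.exp(-gamma * (s[:, None] - t[None, :]) ** 2)

t = np.linspace(0.0, 1.0, 20)                                  # sample times (placeholder)
y = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(t.size)     # one channel of a series

lam = 1e-3                                                     # regularization weight
K = rbf_kernel(t, t)
alpha = np.linalg.solve(K + lam * np.eye(t.size), y)           # interpolation coefficients

t_fine = np.linspace(0.0, 1.0, 200)
f_fine = rbf_kernel(t_fine, t) @ alpha                         # evaluate the interpolant

# A kernel between two series could then compare their interpolants, e.g. via
# an inner product of coefficient vectors (illustrative only, not the paper's kernel).
```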
Abstract:
A test for time-varying correlation is developed within the framework of a dynamic conditional score (DCS) model for both Gaussian and Student t-distributions. The test may be interpreted as a Lagrange multiplier test and modified to allow for the estimation of models for time-varying volatility in the individual series. Unlike standard moment-based tests, the score-based test statistic includes information on the level of correlation under the null hypothesis and local power arguments indicate the benefits of doing so. A simulation study shows that the performance of the score-based test is strong relative to existing tests across a range of data generating processes. An application to the Hong Kong and South Korean equity markets shows that the new test reveals changes in correlation that are not detected by the standard moment-based test.
Abstract:
This thesis studies quantile residuals and uses different methodologies to develop test statistics that are applicable in evaluating linear and nonlinear time series models based on continuous distributions. Models based on mixtures of distributions are of special interest because it turns out that for those models traditional residuals, often referred to as Pearson's residuals, are not appropriate. As such models have become more and more popular in practice, especially with financial time series data, there is a need for reliable diagnostic tools that can be used to evaluate them. The aim of the thesis is to show how such diagnostic tools can be obtained and used in model evaluation. The quantile residuals considered here are defined in such a way that, when the model is correctly specified and its parameters are consistently estimated, they are approximately independent with a standard normal distribution. All the tests derived in the thesis are pure significance type tests and are theoretically sound in that they properly take the uncertainty caused by parameter estimation into account. In Chapter 2 a general framework based on the likelihood function and smooth functions of univariate quantile residuals is derived that can be used to obtain misspecification tests for various purposes. Three easy-to-use tests aimed at detecting non-normality, autocorrelation, and conditional heteroscedasticity in quantile residuals are formulated. It also turns out that these tests can be interpreted as Lagrange multiplier or score tests, so that they are asymptotically optimal against local alternatives. Chapter 3 extends the concept of quantile residuals to multivariate models. The framework of Chapter 2 is generalized and tests aimed at detecting non-normality, serial correlation, and conditional heteroscedasticity in multivariate quantile residuals are derived based on it. Score test interpretations are obtained for the serial correlation and conditional heteroscedasticity tests and, in a rather restricted special case, for the normality test. In Chapter 4 the tests are constructed using the empirical distribution function of quantile residuals. The so-called Khmaladze martingale transformation is applied in order to eliminate the uncertainty caused by parameter estimation. Various test statistics are considered so that critical bounds for histogram-type plots as well as quantile-quantile and probability-probability type plots of quantile residuals are obtained. Chapters 2, 3, and 4 contain simulations and empirical examples which illustrate the finite-sample size and power properties of the derived tests and also how the tests and related graphical tools based on residuals are applied in practice.
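For context, quantile residuals for a continuous conditional distribution are usually defined as the standard normal quantiles of the model's probability integral transform, r_t = Phi^{-1}(F(y_t)) with F the fitted conditional CDF. The hedged sketch below illustrates this definition for a two-component Gaussian mixture, the kind of model for which Pearson's residuals are inappropriate; the mixture parameters and simulated data are placeholders, not models or estimates from the thesis.

```python
# Hedged sketch: quantile residuals for an (assumed, correctly specified)
# two-component Gaussian mixture model. All parameter values are placeholders.
import numpy as np
from scipy.stats import norm

def quantile_residuals(y, p, mu1, sigma1, mu2, sigma2):
    """r_t = Phi^{-1}(F(y_t)), with F the fitted conditional mixture CDF."""
    cdf = p * norm.cdf(y, mu1, sigma1) + (1 - p) * norm.cdf(y, mu2, sigma2)
    return norm.ppf(cdf)

# Simulate data from the same mixture, so the "model" is correctly specified.
rng = np.random.default_rng(0)
comp = rng.random(1000) < 0.7
y = np.where(comp, rng.normal(0.0, 1.0, 1000), rng.normal(0.0, 3.0, 1000))

r = quantile_residuals(y, 0.7, 0.0, 1.0, 0.0, 3.0)
# Under correct specification the residuals are approximately iid N(0, 1).
print(r.mean(), r.std())
```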
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and it also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2 to 4 to the case of a bivariate model. A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
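As a hedged illustration, the sketch below fits a static probit recession model and a simple dynamic variant that adds the lagged indicator as a regressor. The simulated spread and recession series are placeholders, not the U.S. or German data used in the thesis, and the thesis's autoregressive probit specification is not reproduced.

```python
# Hedged sketch: static probit recession model and a simple dynamic extension
# with the lagged indicator as an extra regressor. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200
spread = rng.normal(1.0, 1.2, n)                                  # lagged term spread (placeholder)
recession = (rng.random(n) < norm.cdf(-0.5 - 0.8 * spread)).astype(int)

# Static probit: P(recession_t = 1) = Phi(beta0 + beta1 * spread)
X_static = sm.add_constant(pd.DataFrame({"spread": spread}))
static_fit = sm.Probit(recession, X_static).fit(disp=0)

# "Dynamic" probit in the simplest sense: add the lagged indicator y_{t-1}.
X_dyn = X_static.iloc[1:].assign(lag_y=recession[:-1])
dynamic_fit = sm.Probit(recession[1:], X_dyn).fit(disp=0)

print(static_fit.params)
print(dynamic_fit.params)
```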
Abstract:
A production experiment investigated the tonal shape of Finnish finite verbs in transitive sentences without narrow focus. Traditional descriptions of Finnish, which state that non-focused finite verbs do not receive accents, were only partly supported. Verbs were found to have a consistently smaller pitch range than words in other word classes, but their pitch contours were neither flat nor explainable by pure interpolation.