54 results for Degrees of freedom (mechanics)
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
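As a rough illustration of the idea that the impact of unresolved degrees of freedom can be represented by a combination of deterministic damping and stochastic forcing, the sketch below integrates a one-mode reduced model with an Euler-Maruyama step. The parameters gamma and sigma are illustrative placeholders, not values from the review, and non-Markovian (memory) terms are omitted for brevity.

```python
import numpy as np

# Minimal sketch: a reduced-order model for one resolved climate mode x, where
# the net effect of unresolved degrees of freedom is represented by linear
# damping (deterministic part) plus white noise (stochastic part).
# gamma and sigma are illustrative, not taken from the review.

def integrate_reduced_model(gamma=0.5, sigma=0.3, dt=0.01, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for n in range(1, n_steps):
        # Euler-Maruyama step for dx = -gamma * x dt + sigma dW
        x[n] = x[n-1] - gamma * x[n-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

trajectory = integrate_reduced_model()
print("sample variance:", trajectory.var())  # approaches sigma**2 / (2 * gamma)
```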
Abstract:
The precipitation of bovine serum albumin (BSA), lysozyme (LYS) and alfalfa leaf protein (ALF) by two large- and two medium-sized condensed tannin (CT) fractions of similar flavan-3-ol subunit composition is described. CT fractions isolated from white clover flowers and big trefoil leaves exhibited high purity profiles by 1D/2D NMR and purities >90% (determined by thiolysis). At pH 6.5, large CTs with a mean degree of polymerization (mDP) of ~18 exhibited similar protein precipitation behaviors and were significantly more effective than medium CTs (mDP ~9). Medium CTs exhibited similar capacities to precipitate ALF or BSA, but showed small but significant differences in their capacity to precipitate LYS. All CTs precipitated ALF more effectively than BSA or LYS. Aggregation of CT-protein complexes likely aided precipitation of ALF and BSA, but not LYS. This study, one of the first to use CTs of confirmed high purity, demonstrates that mDP of CTs influences protein precipitation efficacy.
Abstract:
4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis, the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function, which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that both formulations' sensitivities are related to error variance balance, assimilation window length and correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
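To make the role of the Hessian concrete, here is a minimal sketch (not the thesis's wc4DVAR formulation) in which the Hessian of a linear-Gaussian variational objective, S = B^{-1} + H^T R^{-1} H, is assembled and its condition number computed for different observation error variances; the covariance model, observation operator and dimensions are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative only: for a linear-Gaussian variational problem the Hessian of
# the objective is S = B^{-1} + H^T R^{-1} H. Varying the background/observation
# error variance ratio shows how the condition number, and hence the difficulty
# of the minimisation, changes.

n = 50
L = 5.0                                      # correlation length-scale (assumed)
idx = np.arange(n)
B = np.exp(-np.abs(idx[:, None] - idx[None, :]) / L)   # background error covariance
H = np.eye(n)[::5]                           # observe every 5th grid point (assumed)

for r_var in (0.01, 1.0, 100.0):             # observation error variance
    R_inv = np.eye(H.shape[0]) / r_var
    S = np.linalg.inv(B) + H.T @ R_inv @ H   # Hessian of the objective function
    print(f"obs variance {r_var:>6}: condition number {np.linalg.cond(S):.3e}")
```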
Abstract:
We consider the problem of scattering of a time-harmonic acoustic incident plane wave by a sound soft convex polygon. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the computational cost required to achieve a prescribed level of accuracy grows linearly with respect to the frequency of the incident wave. Recently Chandler-Wilde and Langdon proposed a novel Galerkin boundary element method for this problem for which, by incorporating the products of plane wave basis functions with piecewise polynomials supported on a graded mesh into the approximation space, they were able to demonstrate that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency. Here we propose a related collocation method, using the same approximation space, for which we demonstrate via numerical experiments a convergence rate identical to that achieved with the Galerkin scheme, but with a substantially reduced computational cost.
Abstract:
In this paper we consider the problem of time-harmonic acoustic scattering in two dimensions by convex polygons. Standard boundary or finite element methods for acoustic scattering problems have a computational cost that grows at least linearly as a function of the frequency of the incident wave. Here we present a novel Galerkin boundary element method, which uses an approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. We prove that the best approximation from the approximation space requires a number of degrees of freedom to achieve a prescribed level of accuracy that grows only logarithmically as a function of the frequency. Numerical results demonstrate the same logarithmic dependence on the frequency for the Galerkin method solution. Our boundary element method is a discretization of a well-known second kind combined-layer-potential integral equation. We provide a proof that this equation and its adjoint are well-posed and equivalent to the boundary value problem in a Sobolev space setting for general Lipschitz domains.
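A small sketch of the kind of graded mesh described above, with elements shrinking towards a corner of the polygon, is given below; the grading exponent q and the element count are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

# Sketch of a graded mesh on a side of length A, refined towards a corner at 0.
# Points follow y_j = A * (j/N)**q; larger q concentrates elements near the
# corner where the solution is least smooth. The value of q is illustrative only.

def graded_mesh(A, N, q=3):
    j = np.arange(N + 1)
    return A * (j / N) ** q

mesh = graded_mesh(A=1.0, N=8)
print(np.round(mesh, 4))
print("element sizes:", np.round(np.diff(mesh), 4))  # smallest elements near the corner
```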
Abstract:
Empirical orthogonal function (EOF) analysis is a powerful tool for data compression and dimensionality reduction used broadly in meteorology and oceanography. Often in the literature, EOF modes are interpreted individually, independent of other modes. In fact, it can be shown that no such attribution can generally be made. This review demonstrates that in general individual EOF modes (i) will not correspond to individual dynamical modes, (ii) will not correspond to individual kinematic degrees of freedom, (iii) will not be statistically independent of other EOF modes, and (iv) will be strongly influenced by the nonlocal requirement that modes maximize variance over the entire domain. The goal of this review is not to argue against the use of EOF analysis in meteorology and oceanography; rather, it is to demonstrate the care that must be taken in the interpretation of individual modes in order to distinguish the medium from the message.
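For readers unfamiliar with the mechanics, a minimal EOF computation via the singular value decomposition of an anomaly data matrix is sketched below; the synthetic data and dimensions are placeholders, not data from the review.

```python
import numpy as np

# Minimal EOF analysis via SVD of an anomaly data matrix (time x space).
# The synthetic data here are purely illustrative.

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))           # 200 time samples, 30 grid points
X = X - X.mean(axis=0)                       # anomalies: remove the time mean

U, s, Vt = np.linalg.svd(X, full_matrices=False)
eofs = Vt                                    # rows: spatial patterns (EOF modes)
pcs = U * s                                  # columns: principal component time series
explained = s**2 / np.sum(s**2)              # fraction of variance per mode

print("variance explained by first 3 modes:", np.round(explained[:3], 3))
```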
Abstract:
In this paper we consider the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data, a problem which models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve good approximation at high frequencies with a relatively low number of degrees of freedom, we propose a novel Galerkin boundary element method, using a graded mesh with smaller elements adjacent to discontinuities in impedance and a special set of basis functions so that, on each element, the approximation space contains polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. We prove stability and convergence and show that the error in computing the total acoustic field is $O(N^{-(\nu+1)}\log^{1/2} N)$, where the number of degrees of freedom is proportional to $N\log N$. This error estimate is independent of the wavenumber, and thus the number of degrees of freedom required to achieve a prescribed level of accuracy does not increase as the wavenumber tends to infinity.
Abstract:
In this paper we show stability and convergence for a novel Galerkin boundary element method approach to the impedance boundary value problem for the Helmholtz equation in a half-plane with piecewise constant boundary data. This problem models, for example, outdoor sound propagation over inhomogeneous flat terrain. To achieve a good approximation with a relatively low number of degrees of freedom we employ a graded mesh with smaller elements adjacent to discontinuities in impedance, and a special set of basis functions for the Galerkin method so that, on each element, the approximation space consists of polynomials (of degree $\nu$) multiplied by traces of plane waves on the boundary. In the case where the impedance is constant outside an interval $[a,b]$, which only requires the discretization of $[a,b]$, we show theoretically and experimentally that the $L_2$ error in computing the acoustic field on $[a,b]$ is ${\cal O}(\log^{\nu+3/2}|k(b-a)| M^{-(\nu+1)})$, where $M$ is the number of degrees of freedom and $k$ is the wavenumber. This indicates that the proposed method is especially commendable for large intervals or a high wavenumber. In a final section we sketch how the same methodology extends to more general scattering problems.
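To show what "polynomials multiplied by traces of plane waves" means in practice, the following sketch evaluates such a hybrid basis on a single mesh element; the choice of Legendre polynomials and the particular element, wavenumber and degree are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

# Sketch of a hybrid basis on a single mesh element [a, b]: Legendre polynomials
# up to degree nu multiplied by the trace exp(i*k*s) of a plane wave along the
# boundary. Purely illustrative of the construction.

def hybrid_basis(s, a, b, k, nu):
    t = 2 * (s - a) / (b - a) - 1            # map [a, b] to [-1, 1]
    polys = np.array([np.polynomial.legendre.Legendre.basis(p)(t)
                      for p in range(nu + 1)])
    return polys * np.exp(1j * k * s)        # shape (nu+1, len(s))

s = np.linspace(0.0, 1.0, 5)
print(hybrid_basis(s, a=0.0, b=1.0, k=10.0, nu=2).shape)  # (3, 5)
```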
Abstract:
We consider scattering of a time harmonic incident plane wave by a convex polygon with piecewise constant impedance boundary conditions. Standard finite or boundary element methods require the number of degrees of freedom to grow at least linearly with respect to the frequency of the incident wave in order to maintain accuracy. Extending earlier work by Chandler-Wilde and Langdon for the sound soft problem, we propose a novel Galerkin boundary element method, with the approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh with smaller elements closer to the corners of the polygon. Theoretical analysis and numerical results suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency of the incident wave.
Abstract:
We consider the scattering of a time-harmonic acoustic incident plane wave by a sound soft convex curvilinear polygon with Lipschitz boundary. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the number of degrees of freedom required to achieve a prescribed level of accuracy grows at least linearly with respect to the frequency of the incident wave. Here we propose a novel Galerkin boundary element method with a hybrid approximation space, consisting of the products of plane wave basis functions with piecewise polynomials supported on several overlapping meshes: a uniform mesh on illuminated sides, and graded meshes refined towards the corners of the polygon on illuminated and shadow sides. Numerical experiments suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy need only grow logarithmically as the frequency of the incident wave increases.
Abstract:
A robot-mounted camera is useful in many machine vision tasks as it allows control over view direction and position. In this paper we report a technique for calibrating both the robot and the camera using only a single corresponding point. All existing head-eye calibration systems we have encountered rely on using pre-calibrated robots, pre-calibrated cameras, special calibration objects or combinations of these. Our method avoids using large-scale non-linear optimizations by recovering the parameters in small dependent groups. This is done by performing a series of planned, but initially uncalibrated, robot movements. Many of the kinematic parameters are obtained using only camera views in which the calibration feature is at, or near, the image center, thus avoiding errors which could be introduced by lens distortion. The calibration is shown to be both stable and accurate. The robotic system we use consists of a camera with pan-tilt capability mounted on a Cartesian robot, providing a total of 5 degrees of freedom.
Abstract:
This article proposes a new model for autoregressive conditional heteroscedasticity and kurtosis. Via a time-varying degrees of freedom parameter, the conditional variance and conditional kurtosis are permitted to evolve separately. The model uses only the standard Student’s t-density and consequently can be estimated simply using maximum likelihood. The method is applied to a set of four daily financial asset return series comprising U.S. and U.K. stocks and bonds, and significant evidence in favor of the presence of autoregressive conditional kurtosis is observed. Various extensions to the basic model are proposed, and we show that the response of kurtosis to good and bad news is not significantly asymmetric.
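A schematic of how a time-varying degrees-of-freedom parameter enters a Student's t likelihood is sketched below; the GARCH(1,1)-type variance recursion and the simple update for nu_t are illustrative stand-ins rather than the article's specification, and the data are synthetic.

```python
import numpy as np
from scipy.stats import t as student_t

# Schematic log-likelihood for a model in which both the conditional variance
# h_t and the Student-t degrees-of-freedom parameter nu_t evolve over time.
# The recursions below are illustrative stand-ins, not the article's model.

def neg_log_likelihood(params, returns):
    omega, alpha, beta, c0, c1 = params
    T = len(returns)
    h = np.empty(T)
    nu = np.empty(T)
    h[0] = returns.var()
    nu[0] = 8.0
    ll = 0.0
    for t_ in range(1, T):
        h[t_] = omega + alpha * returns[t_ - 1] ** 2 + beta * h[t_ - 1]
        nu[t_] = max(2.1, c0 + c1 * returns[t_ - 1] ** 2)   # keep nu_t > 2
        scale = np.sqrt(h[t_] * (nu[t_] - 2) / nu[t_])      # so that Var = h_t
        ll += student_t.logpdf(returns[t_], df=nu[t_], scale=scale)
    return -ll

rng = np.random.default_rng(2)
demo_returns = rng.standard_t(df=6, size=500) * 0.01
print(neg_log_likelihood([1e-5, 0.05, 0.9, 6.0, 50.0], demo_returns))
```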
Abstract:
Currently, most operational forecasting models use latitude-longitude grids, whose convergence of meridians towards the poles limits parallel scaling. Quasi-uniform grids might avoid this limitation. Thuburn et al. (JCP, 2009) and Ringler et al. (JCP, 2010) have developed a method for arbitrarily-structured, orthogonal C-grids (TRiSK), which has many of the desirable properties of the C-grid on latitude-longitude grids but which works on a variety of quasi-uniform grids. Here, five quasi-uniform, orthogonal grids of the sphere are investigated using TRiSK to solve the shallow-water equations. We demonstrate some of the advantages and disadvantages of the hexagonal and triangular icosahedra, a Voronoi-ised cubed sphere, a Voronoi-ised skipped latitude-longitude grid and a grid of kites in comparison to a full latitude-longitude grid. We show that the hexagonal icosahedron gives the most accurate results for the least computational cost. All of the grids suffer from spurious computational modes; this is especially true of the kite grid, despite it having exactly twice as many velocity degrees of freedom as height degrees of freedom. However, the computational modes are easiest to control on the hexagonal icosahedron since they consist of vorticity oscillations on the dual grid which can be controlled using a diffusive advection scheme for potential vorticity.
Abstract:
The problem of spurious excitation of gravity waves in the context of four-dimensional data assimilation is investigated using a simple model of balanced dynamics. The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode, and can be initialized such that the model evolves on a so-called slow manifold, where the fast motion is suppressed. Identical twin assimilation experiments are performed, comparing the extended and ensemble Kalman filters (EKF and EnKF, respectively). The EKF uses a tangent linear model (TLM) to estimate the evolution of forecast error statistics in time, whereas the EnKF uses the statistics of an ensemble of nonlinear model integrations. Specifically, the case is examined where the true state is balanced, but observation errors project onto all degrees of freedom, including the fast modes. It is shown that the EKF and EnKF will assimilate observations in a balanced way only if certain assumptions hold, and that, outside of ideal cases (i.e., with very frequent observations), dynamical balance can easily be lost in the assimilation. For the EKF, the repeated adjustment of the covariances by the assimilation of observations can easily unbalance the TLM, and destroy the assumptions on which balanced assimilation rests. It is shown that an important factor is the choice of initial forecast error covariance matrix. A balance-constrained EKF is described and compared to the standard EKF, and shown to offer significant improvement for observation frequencies where balance in the standard EKF is lost. The EnKF is advantageous in that balance in the error covariances relies only on a balanced forecast ensemble, and that the analysis step is an ensemble-mean operation. Numerical experiments show that the EnKF may be preferable to the EKF in terms of balance, though its validity is limited by ensemble size. It is also found that overobserving can lead to a more unbalanced forecast ensemble and thus to an unbalanced analysis.
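The remark that balance in the EnKF relies only on the forecast ensemble can be made concrete with a minimal perturbed-observation analysis step, sketched below; the dimensions, observation operator and error covariances are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Minimal sketch of a perturbed-observation EnKF analysis step: the forecast
# error covariance is estimated from the ensemble itself, so balance in the
# analysis depends on the forecast ensemble being balanced. Values are
# illustrative only.

def enkf_analysis(X_f, y, H, R, rng):
    n, N = X_f.shape                                    # state dim, ensemble size
    X_mean = X_f.mean(axis=1, keepdims=True)
    A = (X_f - X_mean) / np.sqrt(N - 1)                 # ensemble anomalies
    P_HT = A @ (H @ A).T                                # P_f H^T from the ensemble
    S = H @ P_HT + R
    K = P_HT @ np.linalg.inv(S)                         # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T  # perturbed obs
    return X_f + K @ (Y - H @ X_f)                      # updated analysis ensemble

rng = np.random.default_rng(3)
X_f = rng.standard_normal((4, 20))                      # 4 state variables, 20 members
H = np.array([[1.0, 0.0, 0.0, 0.0]])                    # observe the first variable
R = np.array([[0.1]])
y = np.array([0.5])
X_a = enkf_analysis(X_f, y, H, R, rng)
print("analysis ensemble mean:", np.round(X_a.mean(axis=1), 3))
```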
Abstract:
We report numerical results from a study of balance dynamics using a simple model of atmospheric motion that is designed to help address the question of why balance dynamics is so stable. The non-autonomous Hamiltonian model has a chaotic slow degree of freedom (representing vortical modes) coupled to one or two linear fast oscillators (representing inertia-gravity waves). The system is said to be balanced when the fast and slow degrees of freedom are separated. We find adiabatic invariants that drift slowly in time. This drift is consistent with a random-walk behaviour at a speed which qualitatively scales, even for modest time scale separations, as the upper bound given by Neishtadt’s and Nekhoroshev’s theorems. Moreover, a similar type of scaling is observed for solutions obtained using a singular perturbation (‘slaving’) technique in resonant cases where Nekhoroshev’s theorem does not apply. We present evidence that the smaller Lyapunov exponents of the system scale exponentially as well. The results suggest that the observed stability of nearly-slow motion is a consequence of the approximate adiabatic invariance of the fast motion.