948 results for diffusive viscoelastic model, global weak solution, error estimate


Relevance: 100.00%

Abstract:

Numerical simulation of Oldroyd-B type viscoelastic fluids is a very challenging problem. The well-known "High Weissenberg Number Problem" has haunted mathematicians, computer scientists, and engineers for more than 40 years. When the Weissenberg number, which represents the ratio of elasticity to viscosity, exceeds certain limits, simulations performed with standard methods break down exponentially fast in time. However, some approaches, such as the logarithm transformation technique, can significantly raise the limit of the Weissenberg number up to which simulations remain stable.

We should point out that the global existence of weak solutions for the Oldroyd-B model is still open. Note that in the evolution equation of the elastic stress tensor the terms describing diffusive effects are typically neglected in the modelling due to their smallness. However, when these diffusive terms are kept in the constitutive law, the global existence of weak solutions in two space dimensions can be shown.

The main part of the thesis is devoted to the stability study of the Oldroyd-B viscoelastic model. Firstly, we show that the free energy of the diffusive Oldroyd-B model, as well as that of its logarithm transformation, is dissipative in time. Further, we develop free energy dissipative schemes based on the characteristic finite element and finite difference frameworks. In addition, the global linear stability analysis of the diffusive Oldroyd-B model is also discussed. The next part of the thesis deals with the error estimates of a combined finite element and finite volume discretization of a special Oldroyd-B model which covers the limiting case of the Weissenberg number going to infinity. Theoretical results are confirmed by a series of numerical experiments, which are also presented in the thesis.
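For orientation, a commonly used form of the diffusive Oldroyd-B constitutive law referred to above is sketched below; the notation (stress tensor sigma, relaxation time lambda, polymer viscosity eta_p, stress-diffusion coefficient epsilon) is assumed here, not quoted from the thesis.

% Diffusive Oldroyd-B constitutive law (a common form; notation assumed).
% Setting \varepsilon = 0 recovers the classical Oldroyd-B model, for
% which global existence of weak solutions remains open.
\lambda \left( \partial_t \sigma + (u \cdot \nabla)\sigma
  - (\nabla u)\,\sigma - \sigma\,(\nabla u)^{\mathsf T} \right) + \sigma
  = \eta_p \left( \nabla u + (\nabla u)^{\mathsf T} \right)
  + \lambda \varepsilon \Delta \sigma

The added term \lambda \varepsilon \Delta \sigma is exactly the diffusive effect whose retention makes the two-dimensional global existence result possible.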

Relevance: 100.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
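For reference, the maximal discrepancy of a class F over a sample of size n can be written as follows (a standard definition with our notation, not copied from the paper):

% Maximal discrepancy of a function class F (standard definition;
% notation assumed). L is the loss, (x_i, y_i) the training sample.
\widehat{D}_n(F) = \max_{f \in F} \left(
  \frac{2}{n} \sum_{i=1}^{n/2} L(f(x_i), y_i)
  - \frac{2}{n} \sum_{i=n/2+1}^{n} L(f(x_i), y_i) \right)

For the 0-1 loss, maximizing this difference amounts to minimizing empirical risk after flipping the labels of one half of the sample, which is the equivalence mentioned at the end of the abstract.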

Relevance: 100.00%

Abstract:

To obtain the desirable accuracy of a robot, there are two techniques available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembling tolerances of every part would be extremely tight so that all of the various parameters would match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement increases. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembling tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated for. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator is used to provide high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit-Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, i.e. the differential-evolution (DE) algorithm and the Markov Chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a Solidworks environment to simulate the real experimental validations. Numerical simulations and Solidworks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
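As a rough illustration of how differential evolution can identify kinematic parameter errors, here is a minimal Python sketch; the forward-kinematics function, parameter bounds, and measurement data are hypothetical placeholders, not the thesis's actual DH/POE model or wire-based measurements.

# Minimal sketch of differential-evolution-based kinematic calibration.
# forward_kinematics() and all data below are hypothetical placeholders
# standing in for the robot's DH/POE error model and measured poses.
import numpy as np
from scipy.optimize import differential_evolution

def forward_kinematics(q, p):
    """Placeholder forward model: nominal kinematics perturbed by the
    candidate parameter errors p (a real model would chain DH or POE
    transforms with p applied link by link)."""
    return np.array([np.sum(np.cos(q + p[:3])), np.sum(np.sin(q + p[3:]))])

rng = np.random.default_rng(0)
true_errors = rng.uniform(-0.02, 0.02, 6)         # "unknown" parameter errors
configs = [rng.uniform(-1.0, 1.0, 3) for _ in range(10)]
measured = [forward_kinematics(q, true_errors) for q in configs]

def cost(p):
    # Sum of squared pose residuals over all measured configurations.
    return sum(np.sum((m - forward_kinematics(q, p)) ** 2)
               for q, m in zip(configs, measured))

bounds = [(-0.05, 0.05)] * 6                      # assumed bounds on each error
result = differential_evolution(cost, bounds, seed=0)
print("identified errors:", np.round(result.x, 4))

The same cost function could be sampled with an MCMC method instead, which additionally yields uncertainty estimates for the identified parameters.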

Relevance: 100.00%

Abstract:

Six land surface models and five global hydrological models participate in a model intercomparison project (WaterMIP), which for the first time compares simulation results of these different classes of models in a consistent way. In this paper the simulation setup is described and aspects of the multi-model global terrestrial water balance are presented. All models were run at 0.5 degree spatial resolution for the global land areas for a 15-year period (1985-1999) using a newly-developed global meteorological dataset. Simulated global terrestrial evapotranspiration, excluding Greenland and Antarctica, ranges from 415 to 586 mm year-1 (60,000 to 85,000 km3 year-1) and simulated runoff ranges from 290 to 457 mm year-1 (42,000 to 66,000 km3 year-1). Both the mean and median runoff fractions for the land surface models are lower than those of the global hydrological models, although the range is wider. Significant simulation differences between land surface and global hydrological models are found to be caused by the snow scheme employed. The physically-based energy balance approach used by land surface models generally results in lower snow water equivalent values than the conceptual degree-day approach used by global hydrological models. Some differences in simulated runoff and evapotranspiration are explained by model parameterizations, although the processes included and parameterizations used are not distinct to either land surface models or global hydrological models. The results show that differences between models are a major source of uncertainty. Climate change impact studies thus need to use not only multiple climate models but also some other measure of uncertainty (e.g. multiple impact models).
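To make the snow-scheme contrast concrete: the conceptual degree-day approach used by the global hydrological models estimates daily melt from air temperature alone, roughly as follows (a textbook formulation with assumed symbols, not WaterMIP's exact scheme):

% Degree-day snowmelt (textbook form; symbols assumed):
% M   - daily melt (mm day^-1)
% DDF - degree-day factor (mm day^-1 K^-1)
% T   - mean daily air temperature, T_0 - melt threshold temperature
M = \mathrm{DDF} \cdot \max(T - T_0,\, 0)

Energy-balance schemes instead close the full surface energy budget (radiative, sensible, latent and ground heat fluxes), which helps explain the systematically lower snow water equivalent they produce.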

Relevance: 100.00%

Abstract:

The observation-error covariance matrix used in data assimilation contains contributions from instrument errors, representativity errors and errors introduced by the approximated observation operator. Forward model errors arise when the observation operator does not correctly model the observations or when observations can resolve spatial scales that the model cannot. Previous work to estimate the observation-error covariance matrix for particular observing instruments has shown that it contains significant correlations. In particular, correlations for humidity data are more significant than those for temperature. However, it is not known what proportion of these correlations can be attributed to the representativity errors. In this article we apply an existing method for calculating representativity error, previously applied to an idealised system, to NWP data. We calculate horizontal errors of representativity for temperature and humidity using data from the Met Office high-resolution UK variable resolution model. Our results show that errors of representativity are correlated and more significant for specific humidity than temperature. We also find that representativity error varies with height. This suggests that the assimilation scheme may be improved if these errors are explicitly included in a data assimilation scheme. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
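For reference, the error decomposition behind this discussion is commonly written as follows (standard data-assimilation notation assumed here, not quoted from the article):

% Observation error decomposition (standard notation assumed):
% y - observation, x^t - true model state, H - observation operator,
% \epsilon_I - instrument error, \epsilon_R - representativity (and
% forward-model) error, R - observation-error covariance matrix.
y - H(x^t) = \epsilon_I + \epsilon_R, \qquad
R = \mathbb{E}\left[ (\epsilon_I + \epsilon_R)(\epsilon_I + \epsilon_R)^{\mathsf T} \right]

Correlations in \epsilon_R are the part of R that the article quantifies for temperature and specific humidity.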

Relevance: 100.00%

Abstract:

We consider non-negative solutions of a chemotaxis system with a non-constant chemotaxis sensitivity function χ. This system appears as a limit case of a model for morphogenesis proposed by Bollenbach et al. (Phys. Rev. E 75, 2007). Under suitable boundary conditions, modeling the presence of a morphogen source at x=0, we prove the existence of a global and bounded weak solution using an approximation by problems where diffusion is introduced in the ordinary differential equation. Moreover, we prove the convergence of the solution to the unique steady state provided that ? is small and ? is large enough. Numerical simulations both illustrate these results and give rise to further conjectures on the solution behavior that go beyond the rigorously proved statements.
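A generic system of the type described, together with the diffusive approximation of the ordinary differential equation, can be sketched as follows (our schematic form; the paper's exact equations may differ):

% Schematic chemotaxis system with non-constant sensitivity \chi(v);
% the second equation is an ODE in time, regularized in the
% approximating problems by a small diffusion term \delta \Delta v.
u_t = \nabla \cdot \left( \nabla u - u\, \chi(v) \nabla v \right),
\qquad
v_t = g(u, v) \ \longrightarrow \ v_t = \delta \Delta v + g(u, v), \quad \delta \to 0

Passing to the limit \delta \to 0 in the regularized problems is what yields the global bounded weak solution of the original system.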

Relevance: 100.00%

Abstract:

This paper presents a new methodology to build parametric models to estimate global solar irradiation adjusted to specific on-site characteristics based on the evaluation of variable importance. Thus, those variables highly correlated with solar irradiation at a site are implemented in the model, and therefore different models might be proposed under different climates. This methodology is applied to a case study in the La Rioja region (northern Spain). A new model is proposed and evaluated for stability and accuracy against a review of twenty-two existing parametric models based on temperatures and rainfall at seventeen meteorological stations in La Rioja. The model evaluation methodology is based on bootstrapping, which provides a high level of confidence in model calibration and validation from short time series (in this case five years, from 2007 to 2011). The proposed model improves the estimates of the other twenty-two models, with an average mean absolute error (MAE) of 2.195 MJ/m2 day and an average confidence interval width (95% C.I., n=100) of 0.261 MJ/m2 day. 41.65% of the daily residuals in the case of SIAR and 20.12% in that of SOS Rioja fall within the uncertainty tolerance of the pyranometers of the two networks (10% and 5%, respectively). Relative differences between measured and estimated irradiation on an annual cumulative basis are below 4.82%. Thus, the proposed model might be useful for estimating annual sums of global solar irradiation, with negligible differences from pyranometer measurements.
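A minimal sketch of the bootstrap evaluation idea follows (generic resampling of daily residuals with n=100 replicates, echoing the abstract; the data are synthetic placeholders, not the La Rioja station records):

# Bootstrap confidence interval for a model's MAE, as a generic
# sketch of the evaluation strategy described above. The residuals
# here are synthetic placeholders, not the La Rioja station data.
import numpy as np

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 2.5, size=5 * 365)  # ~5 years of daily residuals

n_boot = 100  # number of bootstrap replicates (n=100, as in the abstract)
maes = np.empty(n_boot)
for b in range(n_boot):
    sample = rng.choice(residuals, size=residuals.size, replace=True)
    maes[b] = np.abs(sample).mean()

lo, hi = np.percentile(maes, [2.5, 97.5])  # 95% confidence interval
print(f"MAE = {maes.mean():.3f} MJ/m2 day, 95% CI width = {hi - lo:.3f}")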

Relevance: 100.00%

Abstract:

This paper presents a new interpretation for the Superpave IDT strength test based on a viscoelastic-damage framework. The framework is based on continuum damage mechanics and the thermodynamics of irreversible processes with an anisotropic damage representation. The new approach introduces considerations for the viscoelastic effects and the damage accumulation that accompanies the fracture process in the interpretation of the Superpave IDT strength test for the identification of the Dissipated Creep Strain Energy (DCSE) limit from the test result. The viscoelastic model is implemented in a Finite Element Method (FEM) program for the simulation of the Superpave IDT strength test. The DCSE values obtained using the new approach are compared with the values obtained using the conventional approach to evaluate the validity of the assumptions made in the conventional interpretation of the test results. The results show that the conventional approach over-estimates the DCSE value, with increasing estimation error at higher deformation rates.
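For context, the DCSE limit is conventionally obtained from the fracture energy and the elastic energy at failure in the hot-mix-asphalt fracture framework (the usual definition, assumed here rather than taken from the paper):

% Conventional DCSE definition (assumed for context):
% FE - fracture energy, the area under the stress-strain curve up to
%      failure; EE - elastic (recoverable) energy at failure.
\mathrm{DCSE} = \mathrm{FE} - \mathrm{EE}

The paper's point is that reading FE and EE off the test output while ignoring viscoelastic effects and damage accumulation biases DCSE upward, increasingly so at higher deformation rates.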

Relevance: 100.00%

Abstract:

We shall consider the weak formulation of a linear elliptic model problem with discontinuous Dirichlet boundary conditions. Since such problems are typically not well-defined in the standard H^1-H^1 setting, we will introduce a suitable saddle point formulation in terms of weighted Sobolev spaces. Furthermore, we will discuss the numerical solution of such problems. Specifically, we employ an hp-discontinuous Galerkin method and derive an L^2-norm a posteriori error estimate. Numerical experiments demonstrate the effectiveness of the proposed error indicator in both the h- and hp-version setting. Indeed, in the latter case exponential convergence of the error is attained as the mesh is adaptively refined.
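As a schematic example of such a problem (ours, not necessarily the paper's): the Poisson equation with Dirichlet data that jumps along the boundary does not fit the standard weak formulation, since discontinuous data g fail to belong to H^{1/2} of the boundary.

% Schematic model problem (our example): Poisson equation with
% discontinuous Dirichlet boundary data g.
-\Delta u = f \ \text{in } \Omega, \qquad u = g \ \text{on } \partial\Omega,
\qquad g \ \text{discontinuous, hence } g \notin H^{1/2}(\partial\Omega)

This is the gap that the weighted-Sobolev saddle point formulation is designed to close.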

Relevance: 100.00%

Abstract:

In this work we study the problem of model identification for a population, employing a discrete dynamic model based on the Richards growth model. The population is subjected to interventions due to consumption, such as hunting or farming animals. The model identification allows us to estimate the probability of, or the average time for, the population number to reach a certain level. The parameter inference for these models is obtained with the use of the likelihood profile technique as developed in this paper. The identification method developed here can be applied to evaluate the productivity of animal husbandry or to evaluate the risk of extinction of autochthonous populations. It is applied to data on the Brazilian beef cattle herd population, and the time for the population number to reach a certain goal level is investigated.
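A minimal sketch of a discrete Richards-type growth model with a consumption (harvest) term is given below; the functional form and all parameter values are generic assumptions for illustration, not the paper's fitted Brazilian-herd model.

# Discrete Richards-type growth with a consumption (harvest) term.
# Generic form and parameter values assumed for illustration only.
def richards_step(n, r=0.3, k=1000.0, shape=1.5, harvest=20.0):
    """One step: Richards growth minus a fixed consumption term."""
    growth = r * n * (1.0 - (n / k) ** shape)
    return max(n + growth - harvest, 0.0)

# Simulate until the population first reaches a goal level.
n, goal, t = 200.0, 800.0, 0
while n < goal and t < 10_000:
    n = richards_step(n)
    t += 1
print(f"goal {'reached' if n >= goal else 'not reached'} after {t} steps")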

Relevance: 100.00%

Abstract:

Moving towards autonomous operation and management of increasingly complex open distributed real-time systems poses very significant challenges. This is particularly true when reaction to events must be done in a timely and predictable manner while guaranteeing Quality of Service (QoS) constraints imposed by users, the environment, or applications. In these scenarios, the system should be able to maintain a globally feasible QoS level while allowing individual nodes to autonomously adapt under different constraints of resource availability and input quality. This paper shows how decentralised coordination of a group of autonomous interdependent nodes can emerge with little communication, based on the robust self-organising principles of feedback. Positive feedback is used to reinforce the selection of the new desired global service solution, while negative feedback discourages nodes from acting in a greedy fashion, as this adversely impacts the provided service levels at neighbouring nodes. The proposed protocol is general enough to be used in a wide range of scenarios characterised by a high degree of openness and dynamism where coordination tasks need to be time-dependent. As the reported results demonstrate, it requires fewer messages to be exchanged and is faster to achieve a globally acceptable near-optimal solution than other available approaches.

Relevance: 100.00%

Abstract:

In this paper, a new PCA-based positioning sensor and localization system for mobile robots operating in unstructured environments (e.g. industry, services, domestic) is proposed and experimentally validated. The inexpensive positioning system resorts to principal component analysis (PCA) of images acquired by a video camera installed onboard, looking upwards at the ceiling. This solution has the advantage of avoiding the need to select and extract features. The principal components of the acquired images are compared with previously registered images, stored in a reduced onboard image database, and the measured position is fused with odometry data. The optimal estimates of position and slippage are provided by Kalman filters with globally stable error dynamics. The experimental validation reported in this work focuses on the results of a set of experiments carried out in a real environment, where the robot travels along a lawn-mower trajectory. A small position error estimate with bounded covariance was always observed, for arbitrarily long experiments, and slippage was estimated accurately in real time.
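A bare-bones sketch of the PCA matching step follows; synthetic vectors stand in for the ceiling images, the Kalman fusion with odometry is omitted, and all names are our own placeholders.

# Bare-bones sketch of PCA-based image matching for localization.
# Synthetic data stand in for ceiling images; Kalman fusion omitted.
import numpy as np

rng = np.random.default_rng(1)
db_images = rng.random((50, 32 * 32))    # 50 stored ceiling images (flattened)
db_positions = rng.random((50, 2)) * 10  # known (x, y) for each stored image

# Principal components of the stored images (via SVD of centered data).
mean = db_images.mean(axis=0)
_, _, vt = np.linalg.svd(db_images - mean, full_matrices=False)
components = vt[:8]                      # keep the first 8 components
db_proj = (db_images - mean) @ components.T

def estimate_position(image):
    """Project a new image onto the PCA basis and match the database."""
    proj = (image - mean) @ components.T
    nearest = np.argmin(np.linalg.norm(db_proj - proj, axis=1))
    return db_positions[nearest]

print(estimate_position(db_images[17]))  # should return db_positions[17]

Matching in the low-dimensional PCA subspace is what removes the need for explicit feature selection and extraction.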

Relevance: 100.00%

Abstract:

It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimises resource consumption by only activating redundant replicas in case of failures, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and thus lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with significantly lower overhead compared to a strict active redundancy-based approach, achieving high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the needed interactions among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.

Relevance: 100.00%

Abstract:

The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous problems (elliptic or parabolic); it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned for the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the peculiar characteristic that the residual on the coarse grid is zero at any iteration (thus conservative fluxes can be obtained).
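As a generic illustration of the "approximate operator as GMRES preconditioner" idea on a toy system (a Jacobi preconditioner stands in for the MsFV operator here; this is not the paper's algorithm):

# Generic sketch: GMRES with a preconditioner, illustrating how an
# approximate (e.g. multiscale) operator can accelerate convergence.
# A simple Jacobi preconditioner stands in for the MsFV operator.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

n = 200
# Toy 1D elliptic problem: tridiagonal "pressure" system.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

# Stand-in preconditioner: inverse of the diagonal of A. In the
# iterative MsFV method, the multiscale operator plays this role.
M = LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

x, info = gmres(A, b, M=M)
print("converged" if info == 0 else f"gmres info = {info}",
      "| residual:", np.linalg.norm(b - A @ x))

In the paper's setting, the preconditioner is the MsFV operator itself (or its two-step variant), which is what keeps the coarse-grid residual zero and the fluxes conservative at every iteration.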

Relevance: 100.00%

Abstract:

This paper explores three aspects of strategic uncertainty: its relation to risk, the predictability of behavior, and the subjective beliefs of players. In a laboratory experiment, we measure subjects' certainty equivalents for three coordination games and one lottery. Behavior in coordination games is related to risk aversion, experience seeking, and age. From the distribution of certainty equivalents we estimate probabilities for successful coordination in a wide range of games. For many games, success of coordination is predictable with a reasonable error rate. The best response to observed behavior is close to the global-game solution. Comparing choices in coordination games with revealed risk aversion, we estimate subjective probabilities for successful coordination. In games with a low coordination requirement, most subjects underestimate the probability of success. In games with a high coordination requirement, most subjects overestimate this probability. Estimating probabilistic decision models, we show that the quality of predictions can be improved when individual characteristics are taken into account. Subjects' behavior is consistent with probabilistic beliefs about the aggregate outcome, but inconsistent with probabilistic beliefs about individual behavior.