973 results for Approximations
Abstract:
Start-up shear rheology is a standard experiment used to characterize polymer flow and to test models of polymer dynamics. A rich phenomenology has been documented for the behavior of entangled monodisperse linear polymers in such tests, including shear stress overshoots as a function of shear rate and molecular weight. The tube theory does a reasonable qualitative job of describing these phenomena, although it involves several drastic approximations and the agreement can be fortuitous. Recently, Lu and coworkers published several papers [e.g. Lu {\it et al.} {\it ACS Macro Lett}. 2014, 3, 569-573] reporting results from molecular dynamics simulations of linear entangled polymers that contradict both theory and experiment. Based on these observations, they drew far-reaching conclusions about the tube theory, which seem premature. In this letter, we repeat the simulations of Lu {\it et al.} and show systematically that neither their simulation results nor their comparison with theory is confirmed.
Abstract:
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages: OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
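The full spherical Monge-Ampère solver is beyond the scope of an abstract, but the underlying idea of equidistribution with respect to a monitor function can be sketched in one dimension, where optimal transport reduces to inverting the cumulative integral of the monitor. The monitor function below is an illustrative choice, not one from the paper:

```python
import numpy as np

def equidistribute(x_uniform, monitor, n_fine=2001):
    """Map a uniform mesh on [0, 1] to one equidistributed w.r.t. `monitor`.

    In 1-D, optimal transport reduces to equidistribution: each new cell
    holds the same integral of the monitor function. We invert the
    normalised cumulative integral of the monitor on a fine grid.
    """
    s = np.linspace(0.0, 1.0, n_fine)
    m = monitor(s)
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(s))])
    cum /= cum[-1]                       # normalised cumulative monitor
    return np.interp(x_uniform, cum, s)  # invert: equal monitor mass per cell

# Monitor concentrating resolution near x = 0.5 (hypothetical choice)
monitor = lambda x: 1.0 + 10.0 * np.exp(-50.0 * (x - 0.5) ** 2)
mesh = equidistribute(np.linspace(0.0, 1.0, 11), monitor)
widths = np.diff(mesh)  # cells are narrowest where the monitor is largest
```

Cells end up narrowest where the monitor is largest, while the mesh keeps its original point ordering and connectivity, the property that makes OT meshes attractive for r-adaptivity.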
Abstract:
In numerical weather prediction, parameterisations are used to simulate missing physics in the model. These can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are sources of large uncertainty in a model, as the parameter values they use cannot be measured directly and hence are often poorly known, and the parameterisations themselves are also approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate errors in the numerical model at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of missing physics or parameterisations based upon prior information. The method is applied to a one-dimensional advection model with additive model error, and it is shown that it can accurately estimate parameterisations, with consistent error estimates. Furthermore, it is shown how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool in systematic model improvement.
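As a toy illustration of the two-stage idea (estimate model errors pointwise, then fit them to assumed functional forms), the sketch below manufactures noisy error estimates, standing in for the output of a sequential DA system, and recovers the coefficients of candidate terms by least squares; all functional forms and parameter values are hypothetical:

```python
import numpy as np

# Hypothetical setup: a DA system has produced an estimate of the model
# error eta(x) at each grid point for one model equation. Here we
# manufacture such errors from a "true" missing term plus noise, then
# recover its coefficients by least squares on assumed functional forms.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 100)
true_error = 0.8 * np.sin(x) - 0.3 * x                 # the missing physics
eta = true_error + 0.05 * rng.standard_normal(x.size)  # noisy DA error estimates

# Candidate functional forms chosen from prior information (assumed here)
basis = np.column_stack([np.sin(x), x, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(basis, eta, rcond=None)
# coeffs should recover roughly [0.8, -0.3, 0.0], with uncertainty set by
# the noise level of the DA error estimates
```

The quality of the recovered parameterisation is limited by the noise in the error estimates, mirroring the paper's point that the method depends on the quality of the DA results.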
Abstract:
Inspired by the commercial desires of global brands and retailers to access the lucrative green consumer market, carbon is increasingly being counted and made knowable at the mundane sites of everyday production and consumption, from the carbon footprint of a plastic kitchen fork to that of an online bank account. Despite the challenges of counting and making commensurable the global warming impact of a myriad of biophysical and societal activities, this desire to communicate a product or service's carbon footprint has sparked complicated carbon calculative practices and enrolled actors at literally every node of multi-scaled and vastly complex global supply chains. Against this landscape, this paper critically analyzes the counting practices that create the ‘e’ in ‘CO2e’. It is shown that central to these practices are a series of tools, models and databases which, building upon previous work (Eden, 2012; Star and Griesemer, 1989), we conceptualize here as ‘boundary objects’. By enrolling everyday actors from farmers to consumers, these objects abstract and stabilize greenhouse gas emissions from their messy material and social contexts into units of CO2e which can then be translated along a product's supply chain, thereby establishing a new currency of ‘everyday supply chain carbon’. However, in making all greenhouse gas-related practices commensurable, and in enrolling and stabilizing the transfer of information between multiple actors, these objects oversee a process of simplification reliant upon, and subject to, a multiplicity of approximations, assumptions, errors, discrepancies and/or omissions. Further, the outcomes of these tools are subject to the politicized and commercial agendas of the worlds they attempt to link, with each boundary actor inscribing different meanings to a product's carbon footprint in accordance with their specific subjectivities, commercial desires and epistemic framings.
It is therefore shown that how a boundary object transforms greenhouse gas emissions into units of CO2e is the outcome of distinct ideologies regarding ‘what’ a product's carbon footprint is and how it should be made legible. These politicized decisions, in turn, inform specific reduction activities and ultimately advance distinct, specific and increasingly durable transition pathways to a low carbon society.
Abstract:
The general 1-D theory of waves propagating on a zonally varying flow is developed from basic wave theory, and equations are derived for the variation of wavenumber and energy along ray paths. Different categories of behaviour are found, depending on the sign of the group velocity (cg) and a wave property, B. For B positive, the wave energy and the wavenumber vary in the same sense, with maxima in relative easterlies or westerlies, depending on the sign of cg. Also, the wave accumulation of Webster and Chang (1988) occurs where cg goes to zero. However, for B negative, they behave in opposite senses and wave accumulation does not occur. The zonal propagation of the gravest equatorial waves is analysed in detail using the theory. For non-dispersive Kelvin waves, B reduces to 2, and an analytic solution is possible. B is positive for all the waves considered, except for the westward-moving mixed Rossby-gravity (WMRG) wave, which can have negative as well as positive B. Comparison is made between the observed climatologies of the individual equatorial waves and the result of pure propagation on the climatological upper tropospheric flow. The Kelvin wave distribution is in remarkable agreement, considering the approximations made. Some aspects of the WMRG and Rossby wave distributions are also in qualitative agreement. However, the observed maxima in these waves in the winter westerlies in the eastern Pacific and Atlantic are not consistent with the theory. This points to the importance of sources of equatorial waves in these westerly duct regions, associated with higher-latitude wave activity.
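A minimal numerical sketch of the 1-D ray theory for the non-dispersive Kelvin wave case: on a steady, zonally varying flow U(x), the ray moves at cg = U + c and the absolute frequency omega = k(U + c) is conserved along the ray, so the wavenumber grows where U + c shrinks (relative easterlies). The flow profile and wave parameters below are illustrative only, not values from the paper:

```python
import numpy as np

# Steady zonally varying flow and a non-dispersive Kelvin wave.
# Along a ray: dx/dt = cg = U(x) + c, and omega = k*(U + c) is conserved,
# so the local wavenumber is k(x) = omega / (U(x) + c).
c = 15.0                                              # intrinsic phase speed, m/s
U = lambda x: -5.0 * np.cos(2.0 * np.pi * x / 4.0e7)  # zonal flow, m/s (illustrative)
x0 = 1.0e7                                            # initial ray position, m
k0 = 2.0 * np.pi / 1.0e7                              # initial wavenumber, 1/m
omega = k0 * (U(x0) + c)                              # conserved along the ray

dt, n = 600.0, 5000
x = x0
for _ in range(n):               # forward-Euler integration of dx/dt = U + c
    x += dt * (U(x) + c)
k = omega / (U(x) + c)           # wavenumber where the ray has arrived
# k exceeds k0 wherever U + c is smaller than at the start (relative easterlies)
```

Since U + c stays positive everywhere here, cg never vanishes and no wave accumulation occurs; accumulation would require U + c to approach zero along the ray.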
Abstract:
Process scheduling techniques consider the current load situation to allocate computing resources. Such techniques rely on approximations, such as averages of communication, processing, and memory access, to improve process scheduling, although processes may present different behaviors during their execution: they may start with high communication requirements and later perform only processing. By discovering how processes behave over time, we believe it is possible to improve resource allocation. This motivated this paper, which adopts chaos theory concepts and nonlinear prediction techniques in order to model and predict process behavior. Results confirm that the radial basis function technique provides good predictions at low processing cost, which is essential in a real distributed environment.
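The radial basis function predictor can be sketched, under assumptions, as kernel regression on a delay embedding of the observed series; here the logistic map stands in for a measured process-behaviour signal such as CPU or communication load (embedding dimension and kernel width are hypothetical choices):

```python
import numpy as np

def rbf_predict(series, m=2, sigma=0.01):
    """One-step prediction by Gaussian radial basis functions.

    The series is delay-embedded in dimension m; the next value is a
    kernel-weighted average of the observed successors of all embedded
    points (Nadaraya-Watson form of RBF regression).
    """
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + m] for i in range(series.size - m)])
    y = series[m:]                        # successor of each embedded point
    query = series[-m:]                   # most recent embedded point
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2.0 * sigma ** 2))
    return float(np.sum(w * y) / np.sum(w))

# Chaotic test signal: logistic map x -> 4x(1-x), a stand-in for a
# measured process-behaviour series
x = [0.2]
for _ in range(2000):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
pred = rbf_predict(x[:-1])               # predict the withheld last value
err = abs(pred - x[-1])
```

The per-prediction cost is a single pass over the embedded history, which is the kind of low processing demand the abstract highlights.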
Abstract:
The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least-squares approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique Least Square Projections (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points given by a metric in mD. In order to perform the projection, a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than other existing high-quality methods, particularly where it was mostly tested, that is, for mapping text sets.
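A minimal sketch of the LSP idea, not the authors' implementation: each point is softly constrained to the mean of its nearest neighbours in the mD metric, control points are pinned at chosen 2-D positions, and a single least-squares solve places all points:

```python
import numpy as np

def lsp_project(D, ctrl_idx, ctrl_pos, k=3):
    """Sketch of a Least Square Projection step.

    Every point i is softly constrained by x_i - mean(k nearest
    neighbours of i) = 0, with neighbourhoods read from the mD distance
    matrix D; control points are pinned at given 2-D positions, and the
    combined system is solved in the least-squares sense.
    """
    n = D.shape[0]
    L = np.eye(n)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]   # k nearest neighbours of i
        L[i, nbrs] = -1.0 / k              # encodes x_i - mean(nbrs) = 0
    C = np.zeros((len(ctrl_idx), n))
    for row, i in enumerate(ctrl_idx):
        C[row, i] = 1.0                    # pin control point i
    A = np.vstack([L, C])
    b = np.vstack([np.zeros((n, 2)), np.asarray(ctrl_pos, dtype=float)])
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P                               # n x 2 projected coordinates

# Two well-separated clusters in 5-D; one control point per cluster
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (5, 5)),
                  rng.normal(5.0, 0.1, (5, 5))])
D = np.linalg.norm(data[:, None] - data[None, :], axis=2)
P = lsp_project(D, ctrl_idx=[0, 5], ctrl_pos=[[0.0, 0.0], [1.0, 1.0]])
# Points of each cluster land near their cluster's control point
```

Only the neighbourhood distances in D are needed, which reflects the small number of distance calculations the abstract mentions.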
Abstract:
We propose a discontinuous-Galerkin-based immersed boundary method for elasticity problems. The resulting numerical scheme does not require boundary-fitted meshes and avoids boundary locking by switching the elements intersected by the boundary to a discontinuous Galerkin approximation. Special emphasis is placed on the construction of a method that retains an optimal convergence rate in the presence of non-homogeneous essential and natural boundary conditions. The role of each of the approximations introduced is illustrated by analyzing an analogous problem in one spatial dimension. Finally, extensive two- and three-dimensional numerical experiments on linear and nonlinear elasticity problems verify that the proposed method leads to optimal convergence rates under combinations of essential and natural boundary conditions. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Partition of Unity Implicits (PUI) have been recently introduced for surface reconstruction from point clouds. In this work, we propose a PUI method that employs a set of well-observed solutions in order to produce geometrically pleasant results without requiring time-consuming or mathematically overloaded computations. One feature of our technique is the use of multivariate orthogonal polynomials in the least-squares approximation, which allows the recursive refinement of the local fittings in terms of the degree of the polynomial. However, since the use of high-order approximations based only on the number of available points is not reliable, we introduce the concept of coverage domain. In addition, the method relies on the use of an algebraically defined triangulation to handle two important tasks in PUI: the spatial decomposition and an adaptive polygonization. As the spatial subdivision is based on tetrahedra, the generated mesh may present poorly-shaped triangles that are improved in this work by means of a specific vertex displacement technique. Furthermore, we also address sharp features and raw data treatment. A further contribution is based on the PUI locality property, which leads to an intuitive scheme for improving or repairing the surface by means of editing local functions.
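The degree-refinement-with-coverage idea can be illustrated in one dimension (the paper works with multivariate orthogonal polynomials on 3-D point clouds; this is only an analogy, with hypothetical thresholds):

```python
import numpy as np

def refine_fit(x, y, max_degree=6, tol=1e-3, pts_per_coeff=3):
    """Recursive degree refinement of a local least-squares fit (sketch).

    The degree of the fitting polynomial is raised until the maximum
    residual drops below tol, but a degree d is attempted only when the
    sample supplies at least pts_per_coeff*(d+1) points -- a crude 1-D
    analogue of the coverage-domain idea, guarding against high-order
    fits that the available points cannot support.
    """
    best = None
    for d in range(1, max_degree + 1):
        if len(x) < pts_per_coeff * (d + 1):
            break                            # insufficient coverage for degree d
        coeffs = np.polynomial.polynomial.polyfit(x, y, d)
        best = (d, coeffs)
        resid = np.max(np.abs(np.polynomial.polynomial.polyval(x, coeffs) - y))
        if resid < tol:
            break                            # fit already good enough
    return best                              # None if even degree 1 unsupported

x = np.linspace(0.0, 1.0, 30)
degree, coeffs = refine_fit(x, x**3 - x)     # refinement stops at degree 3
```

On exactly cubic data the refinement stops at degree 3: degrees 1 and 2 leave residuals above the tolerance, and the coverage rule would stop the loop early if the sample were too small.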
Abstract:
This paper considers the stability of explicit, implicit and Crank-Nicolson schemes for the one-dimensional heat equation on a staggered grid. Furthermore, we consider the cases when both explicit and implicit approximations of the boundary conditions are employed. Why we choose to do this is clearly motivated and arises from solving fluid flow equations with free surfaces when the Reynolds number can be very small in at least parts of the spatial domain. A comprehensive stability analysis is supplied; a novel result is the precise stability restriction on the Crank-Nicolson method when the boundary conditions are approximated explicitly, that is, at t = n delta t rather than t = (n+1) delta t. The two-dimensional Navier-Stokes equations were then solved by a marker and cell approach for two simple problems that had analytic solutions. It was found that the stability results provided in this paper were qualitatively very similar, thereby providing insight as to why a Crank-Nicolson approximation of the momentum equations is only conditionally stable. Copyright (C) 2008 John Wiley & Sons, Ltd.
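The flavour of these stability results can be reproduced on the unstaggered 1-D heat equation u_t = u_xx: the explicit (FTCS) scheme requires r = delta t/delta x^2 <= 1/2, while backward Euler is unconditionally stable. This is the textbook demonstration, not the paper's staggered-grid analysis:

```python
import numpy as np

def heat_step_explicit(u, r):
    """One FTCS step for u_t = u_xx with homogeneous Dirichlet BCs."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

def heat_step_implicit(u, r):
    """One backward-Euler step: solve (I - r*D2) u_new = u."""
    n = u.size
    A = np.eye(n)                      # boundary rows stay identity (Dirichlet)
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        A[i, i] = 1.0 + 2.0 * r
    return np.linalg.solve(A, u)

x = np.linspace(0.0, 1.0, 21)
u0 = np.sin(np.pi * x)
r_unstable = 0.6                       # violates the explicit limit r <= 1/2

ue, ui = u0.copy(), u0.copy()
for _ in range(200):
    ue = heat_step_explicit(ue, r_unstable)
    ui = heat_step_implicit(ui, r_unstable)
# ue blows up (round-off excites the sawtooth mode, amplified by |1-4r| > 1);
# ui decays smoothly toward zero at the same r
```

The growth in the explicit run comes from the highest grid mode, whose amplification factor is 1 - 4r; for r = 0.6 its magnitude is 1.4, so round-off noise grows by orders of magnitude over a few hundred steps.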
Abstract:
This article describes and compares three heuristics for a variant of the Steiner tree problem with revenues, which includes budget and hop constraints. First, a greedy method which obtains good approximations in short computational times is proposed. This initial solution is then improved by means of a destroy-and-repair method or a tabu search algorithm. Computational results compare the three methods in terms of accuracy and speed. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume of fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by Carreau, Cross, Ellis and Power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian fluid and polystyrene fluid injected in different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset infers a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods.
Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
Abstract:
The magnetic field line structure in a tokamak can be obtained by direct numerical integration of the field line equations. However, this is a lengthy procedure, and the analysis of the solution may be very time-consuming. Alternatively, we can use simple two-dimensional, area-preserving maps, obtained either by approximations of the magnetic field line equations or from dynamical considerations. These maps can be quickly iterated, furnishing solutions that mirror the ones obtained from direct numerical integration, and which are useful when long-term studies of field line behavior are necessary (e.g. in diffusion calculations). In this work we focus on a set of simple tokamak maps for which these advantages are especially pronounced.
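A concrete example of such a quickly iterated, area-preserving map is the Chirikov-Taylor standard map, a common stand-in for tokamak field-line maps (the specific maps studied in the paper are not reproduced here):

```python
import numpy as np

def standard_map(theta, psi, K, n):
    """Iterate the Chirikov-Taylor standard map n times.

    An area-preserving map of the kind used for field-line studies:
    psi plays the role of a radial (flux-like) coordinate, theta a
    poloidal angle, and K measures the perturbation strength.
    """
    traj = np.empty((n + 1, 2))
    traj[0] = theta, psi
    for i in range(n):
        psi = psi + K * np.sin(theta)           # kick
        theta = (theta + psi) % (2.0 * np.pi)   # twist
        traj[i + 1] = theta, psi
    return traj

# Thousands of iterations cost almost nothing compared with integrating
# the full field-line ODEs -- the speed advantage exploited in long-term
# diffusion studies. Initial condition and K are illustrative.
orbit = standard_map(1.0, 0.5, K=0.9, n=5000)
```

For K below the critical value (about 0.97), surviving invariant circles confine the orbit radially, the map analogue of intact magnetic surfaces.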
Abstract:
We study a stochastic process describing the onset of spreading dynamics of an epidemic in a population composed of individuals of three classes: susceptible (S), infected (I), and recovered (R). The stochastic process is defined by local rules and involves the following cyclic process: S -> I -> R -> S (SIRS). The open process S -> I -> R (SIR) is studied as a particular case of the SIRS process. The epidemic process is analyzed at different levels of description: by a stochastic lattice gas model and by a birth and death process. By means of Monte Carlo simulations and dynamical mean-field approximations we show that the SIRS stochastic lattice gas model exhibits a line of critical points separating two phases: an absorbing phase, where the lattice is completely full of S individuals, and an active phase, where S, I and R individuals coexist and may or may not present population cycles. The critical line, which corresponds to the onset of epidemic spreading, is shown to belong to the directed percolation universality class. By considering the birth and death process we analyze the role of noise in stabilizing the oscillations. (C) 2009 Elsevier B.V. All rights reserved.
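The simplest of the dynamical mean-field approximations (the one-site approximation) already exhibits the two phases; a sketch with hypothetical rates, writing s, i and r = 1 - s - i for the class densities:

```python
import numpy as np

def sirs_mean_field(s0, i0, lam, gamma, alpha, dt=0.01, steps=20000):
    """One-site mean-field equations for SIRS, integrated by forward Euler.

    s' = alpha*r - lam*s*i    (R -> S at rate alpha; infection by contact)
    i' = lam*s*i - gamma*i    (I -> R at rate gamma)
    with r = 1 - s - i.  Rates are hypothetical, chosen to show the two
    phases, not fitted to the lattice model of the paper.
    """
    s, i = s0, i0
    for _ in range(steps):
        r = 1.0 - s - i
        ds = alpha * r - lam * s * i
        di = lam * s * i - gamma * i
        s += dt * ds
        i += dt * di
    return s, i

# Above threshold (lam > gamma): active phase with endemic infection i* > 0
s_act, i_act = sirs_mean_field(0.99, 0.01, lam=2.0, gamma=1.0, alpha=0.5)
# Below threshold: absorbing phase, the system fills with S individuals
s_abs, i_abs = sirs_mean_field(0.99, 0.01, lam=0.5, gamma=1.0, alpha=0.5)
```

For these rates the active fixed point is s* = gamma/lam = 0.5 and i* = 1/6, approached through damped oscillations; at this level of description the oscillations always decay, and it takes the noise analysis via the birth and death process to address their stabilization.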
Abstract:
We analyze by numerical simulations and mean-field approximations an asymmetric version of the stochastic sandpile model with height restriction in one dimension. Each site can have at most two particles. Single particles are inactive and do not move. Two particles occupying the same site are active and may hop to neighboring sites following an asymmetric rule. Jumps to the right or to the left occur with distinct probabilities. In the active state, there will be a net current of particles to the right or to the left. We have found that the critical behavior related to the transition from the active to the absorbing state is distinct from the symmetrical case, making the asymmetry a relevant field.
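A sketch of a height-restricted asymmetric sandpile of this type (update rules simplified relative to the paper; parameters hypothetical):

```python
import numpy as np

def asym_sandpile(n_sites, n_particles, p_right, steps, seed=0):
    """Asymmetric height-restricted stochastic sandpile (sketch).

    Sites hold 0, 1 or 2 particles; only doubly occupied sites are active.
    At each step one active site is picked at random and one of its
    particles attempts to hop right with probability p_right, left
    otherwise; the move is accepted only if the target site holds fewer
    than 2 particles (height restriction). Periodic boundaries;
    p_right != 0.5 produces a net particle current.
    """
    rng = np.random.default_rng(seed)
    h = np.zeros(n_sites, dtype=int)
    for k in range(n_particles):           # deterministic initial filling
        h[k % n_sites] += 1
    current = 0
    for _ in range(steps):
        active = np.flatnonzero(h == 2)
        if active.size == 0:               # absorbing state reached
            break
        i = rng.choice(active)
        step_dir = 1 if rng.random() < p_right else -1
        j = (i + step_dir) % n_sites
        if h[j] < 2:                       # respect the height restriction
            h[i] -= 1
            h[j] += 1
            current += step_dir
    return h, current

# Density 60/50 > 1, so an absorbing configuration (all heights <= 1) is
# impossible and the system stays active, carrying a rightward current
h, current = asym_sandpile(n_sites=50, n_particles=60, p_right=0.8, steps=20000)
```

Particle number is conserved by the dynamics, and with p_right > 0.5 the accepted hops are biased to the right, giving the net current characteristic of the asymmetric active phase.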