45 results for incremental computation
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for rigorously studying the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it allows one to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. As a test bed we choose a classical version of the Lorenz 96 model, which, in spite of its simplicity, has well-recognized prototypical value: it is a spatially extended one-dimensional model featuring the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy.
Some newly obtained empirical closure equations for such parameters allow one to define these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations with general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology for studying general climate change problems on virtually any time scale by resorting only to well-selected simulations and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problems of climate sensitivity, climate prediction, and climate change from a radically new perspective.
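The Lorenz 96 test bed used above is compact enough to sketch directly. A minimal numpy integration follows; the number of sites N = 40, the forcing F = 8, and the RK4 step size are illustrative choices, not values taken from the paper:

```python
import numpy as np

def lorenz96_rhs(x, F):
    """Lorenz 96 tendency: dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F,
    combining advection, dissipation and external forcing on a periodic ring."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def integrate(x0, F, dt=0.01, steps=5000):
    """Fourth-order Runge-Kutta integration of the model."""
    x = x0.copy()
    traj = np.empty((steps, x.size))
    for n in range(steps):
        k1 = lorenz96_rhs(x, F)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
        k4 = lorenz96_rhs(x + dt * k3, F)
        x = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        traj[n] = x
    return traj

rng = np.random.default_rng(0)
N, F = 40, 8.0                             # illustrative: 40 sites, chaotic forcing
x0 = F + 0.01 * rng.standard_normal(N)     # perturb the unstable fixed point x_k = F
traj = integrate(x0, F)
# Average energy of the unperturbed attractor, with the transient discarded.
mean_energy = 0.5 * np.mean(np.sum(traj[1000:] ** 2, axis=1))
```

A time-averaged quantity like `mean_energy` is the kind of unperturbed-state parameter that enters the asymptotic expansions and sum rules described above.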
Abstract:
Based on the potential benefits to human health, there is interest in developing sustainable nutritional strategies to enhance the concentration of long-chain n-3 fatty acids in ruminant-derived foods. Four Aberdeen Angus steers fitted with rumen and duodenal cannulae were used in a 4 × 4 Latin square experiment with 21 d experimental periods to examine the potential of fish oil (FO) in the diet to enhance the supply of 20 : 5n-3 and 22 : 6n-3 available for absorption in growing cattle. Treatments consisted of total mixed rations based on maize silage fed at a rate of 85 g DM/kg live weight^0·75 per d, containing 0, 8, 16 and 24 g FO/kg diet DM. Supplements of FO linearly reduced DM intake (P < 0·01) and shifted (P < 0·01) rumen fermentation towards propionate at the expense of acetate and butyrate. FO in the diet linearly enhanced (P < 0·05) the flow of trans-16 : 1, trans-18 : 1, trans-18 : 2, 20 : 5n-3 and 22 : 6n-3 at the duodenum, and linearly decreased (P < 0·05) that of 18 : 0 and 18 : 3n-3. Increases in the flow of trans-18 : 1 were isomer dependent and were determined primarily by higher amounts of the trans-11 isomer reaching the duodenum. In conclusion, FO alters ruminal lipid metabolism of growing cattle in a dose-dependent manner consistent with an inhibition of ruminal biohydrogenation, and enhances the amount of long-chain n-3 fatty acids reaching the duodenum, but the increases are marginal owing to extensive biohydrogenation in the rumen.
Abstract:
An interface between satellite retrievals and the incremental version of the four-dimensional variational assimilation scheme is developed, making full use of the information content of satellite measurements. In this paper, expressions for the function that calculates simulated observations from model states (called “observation operator”), together with its tangent linear version and adjoint, are derived. Results from our work can be used for implementing a quasi-optimal assimilation of satellite retrievals (e.g., of atmospheric trace gases) in operational meteorological centres.
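A tangent-linear/adjoint pair like the one derived above must satisfy the dot-product identity ⟨H δx, δy⟩ = ⟨δx, Hᵀ δy⟩ for all δx and δy, and the standard way to validate an implementation is to check this numerically. The smoothing-kernel observation operator below is a hypothetical stand-in, not the operator derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_model, n_obs = 60, 5

# Hypothetical linear observation operator: each retrieval is a weighted
# vertical average of the model column (rows of H are smoothing kernels).
levels = np.arange(n_model)
centers = np.linspace(5, 55, n_obs)
H = np.exp(-0.5 * ((levels[None, :] - centers[:, None]) / 4.0) ** 2)
H /= H.sum(axis=1, keepdims=True)          # normalize each kernel to unit weight

def obs_operator(x):       # simulated observations from a model state
    return H @ x

def tangent_linear(dx):    # the TL of a linear operator is the operator itself
    return H @ dx

def adjoint(dy):           # adjoint maps observation space back to model space
    return H.T @ dy

# Adjoint (dot-product) test: <H dx, dy> must equal <dx, H^T dy>.
dx = rng.standard_normal(n_model)
dy = rng.standard_normal(n_obs)
lhs = tangent_linear(dx) @ dy
rhs = dx @ adjoint(dy)
```

For a nonlinear observation operator, the same test is applied to its Jacobian; here the operator is linear, so the tangent-linear map coincides with the operator itself.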
Abstract:
This work presents a new method for activity extraction and reporting from video, based on the aggregation of fuzzy relations. Trajectory clustering is first employed, mainly to discover the points of entry and exit of mobile objects appearing in the scene. In a second step, proximity relations between the resulting clusters of detected mobile objects and contextual elements of the scene are modeled using fuzzy relations, which can then be aggregated with standard soft-computing algebra. A clustering algorithm based on computing the transitive closure of the fuzzy relations builds the structure of the scene and characterises its different ongoing activities. Discovered activity zones can be reported as activity maps with different granularities thanks to the analysis of the transitive closure matrix. Taking advantage of the soft relation properties, activity zones and related activities can be labeled in a more human-like language. We present results obtained on real videos of apron monitoring at Toulouse airport in France.
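The key algebraic step, the max-min transitive closure of a fuzzy proximity relation, can be sketched in a few lines; the relation values below are illustrative, not taken from the airport data:

```python
import numpy as np

def maxmin_compose(R, S):
    """Max-min composition: (R o S)[i, j] = max_k min(R[i, k], S[k, j])."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def transitive_closure(R, max_iter=100):
    """Iterate T <- T ∪ (T o T) until a fixed point: the max-min transitive closure."""
    T = R.copy()
    for _ in range(max_iter):
        T_next = np.maximum(T, maxmin_compose(T, T))
        if np.allclose(T_next, T):
            break
        T = T_next
    return T

# Toy fuzzy proximity relation between four zones (illustrative values).
R = np.array([
    [1.0, 0.8, 0.1, 0.0],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.9],
    [0.0, 0.1, 0.9, 1.0],
])
T = transitive_closure(R)
clusters = T >= 0.8   # an alpha-cut groups zones into activity regions
```

Thresholding the closure at different alpha levels is one way to obtain activity maps at different granularities.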
Abstract:
This study was designed to determine the response of in vitro fermentation parameters to incremental levels of polyethylene glycol (PEG) when tanniniferous tree fruits (Dichrostachys cinerea, Acacia erioloba, A. erubiscens, A. nilotica and Piliostigma thonningii) were fermented using the Reading Pressure Technique. The trivalent ytterbium-precipitable phenolics content of fruit substrates ranged from 175 g/kg DM in A. erubiscens to 607 g/kg DM in A. nilotica, while the soluble condensed tannin content ranged from 0.09 AU550nm/40 mg in A. erioloba to 0.52 AU550nm/40 mg in D. cinerea. The ADF was highest in P. thonningii fruits (402 g/kg DM) and lowest in A. nilotica fruits (165 g/kg DM). Increasing the level of PEG caused an exponential rise to a maximum (asymptote) in cumulative gas production, rate of gas production and nitrogen degradability in all substrates except P. thonningii fruits. Dry matter degradability of fruits containing higher levels of soluble condensed tannins (D. cinerea and P. thonningii) showed little response to incremental levels of PEG after incubation for 24 h. The minimum level of PEG required to maximize in vitro fermentation of tree fruits was found to be 200 mg PEG/g DM of sample for all tree species except A. erubiscens fruits, which required 100 mg PEG/g DM of sample. The study provides evidence that PEG levels lower than 1 g/g DM of sample can be used for in vitro tannin bioassays to reduce the cost of evaluating non-conventional tanniniferous feedstuffs used in developing countries in the tropics and subtropics. The use of in vitro nitrogen degradability in place of the favoured dry matter degradability improved the accuracy of PEG as a diagnostic tool for tannins in in vitro fermentation systems.
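An "exponential rise to a maximum" dose-response is commonly modelled as y = y0 + a(1 − exp(−b·PEG)). The sketch below fits that curve to hypothetical gas-production values (not data from this study) with a plain numpy grid search over the rate constant:

```python
import numpy as np

# Hypothetical dose-response data (illustrative, not from the study).
peg = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])   # mg PEG/g DM of sample
gas = np.array([20.0, 34.0, 43.0, 52.0, 56.7, 57.2])      # cumulative gas (ml)

best = None
for b in np.linspace(1e-4, 0.05, 500):        # grid search over the rate constant b
    basis = 1.0 - np.exp(-b * peg)
    design = np.column_stack([np.ones_like(peg), basis])
    coef, *_ = np.linalg.lstsq(design, gas, rcond=None)   # y0 and a given b
    sse = float(np.sum((gas - design @ coef) ** 2))
    if best is None or sse < best[0]:
        best = (sse, coef, b)

sse, (y0_hat, a_hat), b_hat = best
plateau = y0_hat + a_hat    # asymptotic maximum gas production
```

The fitted asymptote and rate constant are what such a model would report; the minimum effective PEG dose then corresponds to the point where the curve is close to its plateau.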
Conditioning of incremental variational data assimilation, with application to the Met Office system
Abstract:
Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved is determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single parameter system are presented and used to predict how the condition number is affected by the observation distribution and accuracy and by the specified lengthscales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
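The qualitative effect of observation distribution and background lengthscale on conditioning can be reproduced on a toy grid. The sketch below uses a Gaussian-correlation B matrix and direct point observations, illustrative assumptions rather than the Met Office configuration, and examines the B-preconditioned Hessian S = I + B^{1/2} Hᵀ R⁻¹ H B^{1/2}:

```python
import numpy as np

def gaussian_B(n, lengthscale, sigma_b=1.0):
    """Background error covariance with Gaussian correlations on a periodic
    1-D grid (a simple stand-in for an operational B matrix)."""
    i = np.arange(n)
    d = np.minimum(np.abs(i[:, None] - i[None, :]), n - np.abs(i[:, None] - i[None, :]))
    return sigma_b ** 2 * np.exp(-0.5 * (d / lengthscale) ** 2)

def precond_hessian_cond(n, obs_idx, lengthscale, sigma_o=0.5):
    """Condition number of S = I + B^{1/2} H^T R^{-1} H B^{1/2} for direct
    (point) observations at the given grid indices."""
    B = gaussian_B(n, lengthscale)
    w, V = np.linalg.eigh(B)
    B_half = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T   # symmetric sqrt
    H = np.zeros((len(obs_idx), n))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0
    S = np.eye(n) + B_half @ H.T @ H @ B_half / sigma_o ** 2     # R = sigma_o^2 I
    return np.linalg.cond(S)

# Clustered observations, and longer background correlations for the same
# observations, both raise the condition number.
k_sparse = precond_hessian_cond(50, [10, 25, 40], lengthscale=4.0)
k_dense = precond_hessian_cond(50, [24, 25, 26], lengthscale=4.0)
k_long = precond_hessian_cond(50, [24, 25, 26], lengthscale=8.0)
```

A larger condition number slows the conjugate-gradient inner-loop minimization, which is why these dependencies matter for the assimilation system.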
Abstract:
The real-time parallel computation of histograms using an array of pipelined cells is proposed and prototyped in this paper, with application to consumer imaging products. The array operates in two modes: histogram computation and histogram reading. The proposed parallel computation method does not use any memory blocks. The resulting histogram bins can be stored in an external memory block in a pipelined fashion for subsequent reading or streaming of the results. The array of cells can be tuned to accommodate the required data-path width in a VLSI image processing engine, as present in many consumer imaging devices. FPGA syntheses of the architectures presented in this paper are shown to compute the real-time histogram of images streamed at over 36 megapixels at 30 frames/s by processing 1, 2 or 4 pixels per clock cycle in parallel.
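The pixels-per-clock aspect can be modelled in software by giving each parallel lane a private partial histogram that is merged at readout, mirroring the two-mode (compute/read) operation; this is a sketch of the principle, not of the pipelined cell array itself:

```python
import numpy as np

def parallel_histogram(pixels, bins=256, lanes=4):
    """Software model of an N-pixels-per-clock histogram: each lane accumulates
    a private partial histogram (compute mode) and the partials are merged at
    readout (reading mode), so the lanes never contend for a shared bin."""
    partial = np.zeros((lanes, bins), dtype=np.int64)
    for lane in range(lanes):
        lane_pixels = pixels[lane::lanes]    # the pixel substream this lane receives
        partial[lane] = np.bincount(lane_pixels, minlength=bins)
    return partial.sum(axis=0)               # merge at readout

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
hist = parallel_histogram(image.ravel(), lanes=4)
```

Merging per-lane partials is the standard way to avoid bin-update conflicts when several pixels arrive per clock; the hardware in the paper achieves the same effect with its cell array.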
Abstract:
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations, using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations, in contrast with the more usual Monte Carlo approach, where many such solutions are required. The procedure is easily applied to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data.
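The idea of expanding about a deterministic solution can be illustrated on a scalar Darcy-type pressure-drop relation p(k) = qμL/(kA) with uncertain permeability k; the parameter values below are hypothetical, and the paper applies the analogous moment expansion to the full discretized flow equations:

```python
import numpy as np

# Darcy-type pressure drop p(k) = q * mu * L / (k * A) with uncertain k.
q, mu, L, A = 1.0e-3, 1.0e-3, 100.0, 10.0          # illustrative flow parameters

def pressure_drop(k):
    return q * mu * L / (k * A)

k_mean, k_std = 1.0e-12, 1.0e-13                    # hypothetical permeability stats (m^2)

# Analytic derivatives of p with respect to k at the mean input.
dp_dk = -q * mu * L / (k_mean ** 2 * A)
d2p_dk2 = 2.0 * q * mu * L / (k_mean ** 3 * A)

# Perturbation expansion: second-order correction to the mean,
# first-order propagation of the standard deviation.
p_mean_approx = pressure_drop(k_mean) + 0.5 * d2p_dk2 * k_std ** 2
p_std_approx = abs(dp_dk) * k_std

# Monte Carlo reference: the many-solution alternative the method avoids.
rng = np.random.default_rng(3)
samples = pressure_drop(rng.normal(k_mean, k_std, 200_000))
```

One deterministic solve plus derivative information reproduces the Monte Carlo mean and spread closely at this (mild) level of input uncertainty, which is the trade-off the abstract describes.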
Abstract:
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T, where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
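At a point where E(t) has distinct singular values, differentiating E = XSY^T gives the classical formula s_j'(t) = x_j^T E'(t) y_j, the quantity an Euler-like step advances; a quick finite-difference check on an affine path (with illustrative matrices) confirms it:

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.diag([3.0, 2.0, 1.0])            # distinct singular values, no crossings nearby
B = 0.1 * rng.standard_normal((3, 3))   # small direction of travel

def E(t):
    return A + t * B                    # analytic (affine) path of matrices

t, h = 0.3, 1e-6
X, s, Yt = np.linalg.svd(E(t))          # E(t) = X diag(s) Yt
dE = B                                  # E'(t) for the affine path

# Derivative formula: s_j' = x_j^T E'(t) y_j for each singular value.
ds_formula = np.array([X[:, j] @ dE @ Yt[j, :] for j in range(3)])

# Central finite difference of the singular values as a reference.
ds_fd = (np.linalg.svd(E(t + h), compute_uv=False)
         - np.linalg.svd(E(t - h), compute_uv=False)) / (2 * h)
```

The corresponding differential equations for X(t) and Y(t) involve the off-diagonal entries of X^T E' Y divided by differences of singular values, which is where the distinct-singular-value assumption enters.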