988 results for Gaussian assumption
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding the minimum number of i-MSFV iterations (on pressure) necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy based on the residual of the pressure equation. At the beginning of the simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that, on average, only a few iterations are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
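The residual-based error-control loop described in this abstract can be sketched in a few lines. The sketch below is only an illustration under assumed names and tolerances (msfv_iterate stands for one smoothed-MSFV or MSFV-preconditioned GMRES update; pressure_residual, tol_first, and tol_threshold are hypothetical), not the authors' implementation:

```python
import numpy as np

def pressure_residual(A, p, q):
    """Norm of the pressure-equation residual r = q - A p (illustrative)."""
    return np.linalg.norm(q - A @ p)

def adaptive_imsfv_pressure(A, q, p_prev, msfv_iterate,
                            tol_first=1e-8, tol_threshold=1e-3, max_iter=50):
    """Sketch of the adaptive error control described in the abstract.

    At the first timestep (p_prev is None) the pressure is iterated until a
    tight tolerance is reached.  At later timesteps the previous-timestep
    solution is reused to improve the localization assumption, and extra
    i-MSFV iterations are applied only when the residual exceeds a looser
    threshold.
    """
    if p_prev is None:                       # beginning of the simulation
        p, tol = np.zeros_like(q), tol_first
    else:                                    # reuse the old pressure field
        p, tol = p_prev.copy(), tol_threshold

    n_iter = 0
    while pressure_residual(A, p, q) > tol and n_iter < max_iter:
        p = msfv_iterate(A, q, p)            # one i-MSFV iteration on pressure
        n_iter += 1
    return p, n_iter
```

In a sequential implicit loop, the returned pressure (and the associated conservative velocity field) would then drive the saturation transport step for the current timestep.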
Abstract:
Semiclassical Einstein-Langevin equations for arbitrary small metric perturbations conformally coupled to a massless quantum scalar field in a spatially flat cosmological background are derived. Use is made of the fact that for this problem the in-in, or closed-time-path, effective action is simply related to the Feynman-Vernon influence functional, which describes the effect of the "environment" (the quantum field, which is coarse grained here) on the "system" (the gravitational field, which is the field of interest). This leads us to identify the dissipation and noise kernels in the in-in effective action and to derive a fluctuation-dissipation relation. A tensorial Gaussian stochastic source which couples to the Weyl tensor of the spacetime metric is seen to modify the usual semiclassical equations, which can now be viewed as mean-field equations. As a simple application, we derive the correlation functions of the stochastic metric fluctuations produced in a flat spacetime with small metric perturbations due to the quantum fluctuations of the matter field coupled to these perturbations.
Abstract:
The class of Schoenberg transformations, embedding Euclidean distances into higher-dimensional Euclidean spaces, is presented and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles, and curvature of the transformations are proposed and visualized on artificial data sets by classical multidimensional scaling. A distance-based discriminant algorithm and a robust multidimensional centroid estimate illustrate the theory, which is closely connected to the Gaussian kernels of machine learning.
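One standard example of such a transformation, connected to the Gaussian kernel (given here as an illustration of the idea, not necessarily one of the specific constructions in the paper): the kernel k(x, y) = exp(-a‖x − y‖²) induces squared feature-space distances 2(1 − exp(-a‖x − y‖²)), i.e., a bounded Schoenberg transformation of the squared Euclidean distance. A minimal sketch, with an arbitrarily chosen bandwidth a:

```python
import numpy as np

def schoenberg_gaussian_distances(X, a=1.0):
    """Squared distances after the Gaussian-kernel (Schoenberg) embedding.

    For k(x, y) = exp(-a * ||x - y||^2), the feature-space squared distance is
    k(x, x) + k(y, y) - 2 k(x, y) = 2 * (1 - exp(-a * ||x - y||^2)).
    """
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # Euclidean D^2
    return 2.0 * (1.0 - np.exp(-a * sq))

# Tiny example: the transformed squared distances are bounded above by 2.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 3.0]])
print(schoenberg_gaussian_distances(X, a=0.5))
```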
Abstract:
Based on the assumption that silicate application can raise soil P availability for crops, the aim of this research was to compare the effect of silicate application on soil P desorption with that of liming, in evaluations based on two extractors and plant growth. The experiment was carried out in randomized blocks with four replications, in a 3 × 3 × 5 factorial design, in which three soil types, three P rates, and four soil acidity correctives were evaluated in 180 experimental plots. Trials were performed in a greenhouse using corn plants in 20-dm³ pots. Three P rates (0, 50, and 150 mg dm⁻³) were applied in the form of powdered triple superphosphate and the soil was incubated for 90 days. After this period, soil samples were collected for routine chemical analysis and for P content determination by the resin, Mehlich-1, and remaining-P extraction methods. Based on these results, acidity correctives were applied at rates calculated to raise base saturation to 70 %, with subsequent incubation for 60 more days, after which the P content was determined again. The acidity correctives consisted of dolomitic lime, steelmaking slag, ladle furnace slag, and wollastonite. Our results showed that slags raised the soil P content more than lime, suggesting a positive correlation between P and Si in the soil. Silicon did not affect the choice of extractor, since both Mehlich-1 and resin behaved the same way regarding extracted P when silicon was applied to the soil. For all evaluated plant parameters, there was a significant interaction between P rates and correctives; the highest values were obtained with silicate.
Abstract:
The Lorentz-Dirac equation is not an unavoidable consequence of the conservation of linear and angular momentum alone for a point charge. It also requires an additional assumption concerning the elementary character of the charge. Here we use a less restrictive elementarity assumption for a spinless charge and derive a system of conservation equations that do not by themselves constitute an equation of motion: because they contain an extra scalar variable, the future evolution of the charge is not determined. We show that a supplementary constitutive relation can be added so that the motion is determined and free from the troubles that are customary in the Lorentz-Dirac equation, i.e., preacceleration and runaways.
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled and the variability in claims severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that, firstly, identifies and quantifies these two influences and, secondly, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of the stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves the first one by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model captures the variability in the speed of the reporting process and in the development of the claims severity as a function of two parameters of the above-mentioned distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested first on simulated data and then on real data originating from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data cover different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits a positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. The negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and fewer claims remain expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments that are either zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
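For reference, the Chain-Ladder method discussed above projects future cumulative payments with volume-weighted development factors. A minimal sketch on a made-up cumulative run-off triangle (the triangle and variable names are purely illustrative, not data from the thesis):

```python
import numpy as np

# Illustrative cumulative run-off triangle: rows = accident years,
# columns = development years; NaN marks cells not yet observed.
C = np.array([
    [100., 160., 180., 190.],
    [110., 170., 195., np.nan],
    [120., 185., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
n = C.shape[1]

# Volume-weighted development factors f_j = sum C[:, j+1] / sum C[:, j].
factors = []
for j in range(n - 1):
    observed = ~np.isnan(C[:, j + 1])
    factors.append(C[observed, j + 1].sum() / C[observed, j].sum())

# Project the lower triangle and compute reserves (ultimate - latest observed).
proj = C.copy()
for j in range(n - 1):
    missing = np.isnan(proj[:, j + 1])
    proj[missing, j + 1] = proj[missing, j] * factors[j]

reserves = proj[:, -1] - np.nanmax(C, axis=1)
print("development factors:", np.round(factors, 3))
print("claims reserves per accident year:", np.round(reserves, 1))
```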
Abstract:
We demonstrate that the self-similarity of some scale-free networks with respect to a simple degree-thresholding renormalization scheme finds a natural interpretation in the assumption that network nodes exist in hidden metric spaces. Clustering, i.e., cycles of length three, plays a crucial role in this framework as a topological reflection of the triangle inequality in the hidden geometry. We prove that a class of hidden-variable models with underlying metric spaces is able to accurately reproduce the self-similarity properties that we measured in the real networks. Our findings indicate that hidden geometries underlying these real networks are a plausible explanation for their observed topologies and, in particular, for their self-similarity with respect to the degree-based renormalization.
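A hidden-variable network of the kind referred to here can be generated with a short script. The construction below (nodes on a circle, power-law hidden degrees, and a connection probability that decays with the rescaled metric distance) is one plausible model of this family, given purely as an illustration; the parameter values and the specific connection probability are assumptions, not the exact models analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_metric_network(N=2000, gamma=2.5, beta=2.0, mean_k=6.0):
    """Nodes on a circle with power-law hidden degrees; links form with a
    probability that decays with metric distance, yielding strong clustering."""
    theta = rng.uniform(0.0, 2.0 * np.pi, N)             # hidden metric coordinate
    kappa = rng.pareto(gamma - 1.0, N) + 1.0              # hidden degrees ~ k^(-gamma)
    kappa *= mean_k / kappa.mean()
    R = N / (2.0 * np.pi)
    mu = beta * np.sin(np.pi / beta) / (2.0 * np.pi * mean_k)

    dtheta = np.abs(theta[:, None] - theta[None, :])
    d = R * np.minimum(dtheta, 2.0 * np.pi - dtheta)      # arc distance on the circle
    chi = d / (mu * kappa[:, None] * kappa[None, :])      # rescaled distance
    p = 1.0 / (1.0 + chi ** beta)                         # connection probability
    A = np.triu(rng.random((N, N)) < p, 1)
    return (A | A.T).astype(int)

def degree_threshold(A, k_T):
    """Degree-thresholding renormalization: keep only nodes with degree > k_T."""
    keep = A.sum(axis=1) > k_T
    return A[np.ix_(keep, keep)]

A = hidden_metric_network()
for k_T in (0, 2, 4):
    sub = degree_threshold(A, k_T)
    n = sub.shape[0]
    print(f"k_T={k_T}: {n} nodes, mean degree {sub.sum() / max(n, 1):.2f}")
```

In models of this type, the subgraphs obtained by raising k_T are expected to retain the degree distribution and clustering of the original network, which is the self-similarity property discussed above.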
Abstract:
We study free second-order processes driven by dichotomous noise. We obtain an exact differential equation for the marginal density p(x,t) of the position. It is also found that both the velocity Ẋ(t) and the position X(t) are Gaussian random variables for large t.
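The large-t Gaussianity can be checked numerically with a quick simulation sketch (the parameter values, the symmetric switching rate, and the kurtosis check are illustrative assumptions, not the analytical treatment of the paper): integrate Ẍ(t) = ξ(t), where ξ(t) is dichotomous (telegraph) noise switching between ±a at rate λ, and look at the distributions of velocity and position over many realizations.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_paths=5000, t_max=50.0, dt=0.01, a=1.0, lam=1.0):
    """Free second-order process X''(t) = xi(t), with dichotomous noise xi = +/- a
    switching at Poisson rate lam.  Returns the final velocities and positions."""
    n_steps = int(t_max / dt)
    xi = np.where(rng.random(n_paths) < 0.5, a, -a)    # random initial noise state
    v = np.zeros(n_paths)
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        v += xi * dt                                    # dv/dt = xi(t)
        x += v * dt                                     # dx/dt = v(t)
        xi[rng.random(n_paths) < lam * dt] *= -1        # Poisson switching events
    return v, x

v, x = simulate()
for name, s in (("velocity", v), ("position", x)):
    z = (s - s.mean()) / s.std()
    print(f"{name}: excess kurtosis ~ {np.mean(z**4) - 3.0:+.3f}")  # ~0 if Gaussian
```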
Abstract:
We study the motion of a particle governed by a generalized Langevin equation. We show that, when no fluctuation-dissipation relation holds, the long-time behavior of the particle may range from stationary to superdiffusive, passing through subdiffusive and diffusive regimes. When the random force is Gaussian, we derive the exact equations for the joint and marginal probability density functions of the position and velocity of the particle and find their solutions.
Greenhouse Gas and Nitrogen Fertilizer Scenarios for U.S. Agriculture and Global Biofuels, June 2011
Abstract:
This analysis uses the 2011 FAPRI-CARD (Food and Agricultural Policy Research Institute–Center for Agricultural and Rural Development) baseline to evaluate the impact of four alternative scenarios on U.S. and world agricultural markets, as well as on world fertilizer use and world agricultural greenhouse gas emissions. A key assumption in the 2011 baseline is that ethanol support policies disappear in 2012. The baseline also assumes that existing biofuel mandates remain in place and are binding. Two of the scenarios are adverse supply shocks, the first being a 10% increase in the price of nitrogen fertilizer in the United States, and the second, a reversion of cropland into forestland. The third scenario examines how lower energy prices would impact world agriculture. The fourth scenario reintroduces biofuel tax credits and duties. Given that the baseline excludes these policies, the fourth scenario is an attempt to understand the impact of these policies under the market conditions that prevail in early 2011. A key to understanding the results of this fourth scenario is that, in the absence of tax credits and duties, the mandate drives biofuel use. Therefore, when the tax credits and duties are reintroduced, the impacts are relatively small. In general, the results show that the entire international commodity market system is remarkably robust with respect to policy changes in one country or in one sector. The policy implication is that domestic policy changes implemented by a large agricultural producer like the United States cannot have significant impacts on the aggregate world commodity markets. A second point that emerges from the results is that the law of unintended consequences is at work in world agriculture. For example, a U.S. nitrogen tax that might presumably be motivated by environmental benefit results in an increase in world greenhouse gas emissions. A similar situation occurs in the afforestation scenario, in which crop production shifts from high-yielding land in the United States to low-yielding land and probably native vegetation in the rest of the world, resulting in an unintended increase in global greenhouse gas emissions.
Abstract:
The assessment of spatial uncertainty in the prediction of nutrient losses by erosion associated with landscape models is an important tool for soil conservation planning. The purpose of this study was to evaluate the spatial and local uncertainty in predicting depletion rates of soil nutrients (P, K, Ca, and Mg) by soil erosion under green and burnt sugarcane harvesting scenarios, using sequential Gaussian simulation (SGS). A regular grid with equidistant intervals of 50 m (626 points) was established in the 200-ha study area, in Tabapuã, São Paulo, Brazil. The rate of soil depletion (SD) was calculated from the relation between the nutrient concentration in the sediments and the chemical properties in the original soil for all grid points. The data were subjected to descriptive statistical and geostatistical analysis. The mean SD rate for all nutrients was higher in the slash-and-burn than in the green cane harvest scenario (Student’s t-test, p<0.05). In both scenarios, nutrient loss followed the order Ca>Mg>K>P. The SD rate was highest in areas with greater slope. Lower uncertainties were associated with the areas of higher SD and steeper slopes. Spatial uncertainties were highest in areas of transition between concave and convex landforms.
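Interpreting the depletion rate as a simple ratio of the nutrient concentration in the eroded sediment to that in the original soil, a grid-wise calculation and scenario comparison could look like the sketch below. This interpretation, the synthetic concentrations, and the variable names are all assumptions for illustration; the abstract only states that SD is computed "from the relation" between the two quantities.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical P concentrations at the 626 grid points (purely synthetic values).
n_points = 626
soil_P = rng.lognormal(mean=2.0, sigma=0.3, size=n_points)     # original soil
sed_P_green = soil_P * rng.uniform(0.2, 0.6, size=n_points)    # green harvest sediment
sed_P_burnt = soil_P * rng.uniform(0.4, 0.9, size=n_points)    # slash-and-burn sediment

# Depletion rate assumed here as sediment concentration / original soil concentration.
sd_green = sed_P_green / soil_P
sd_burnt = sed_P_burnt / soil_P

t_stat, p_val = stats.ttest_ind(sd_burnt, sd_green)
print(f"mean SD (green) = {sd_green.mean():.2f}, "
      f"mean SD (burnt) = {sd_burnt.mean():.2f}, p-value = {p_val:.2e}")
```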
Abstract:
The low-temperature isothermal magnetization curves, M(H), of SmCo4 and Fe3Tb thin films are studied according to the two-dimensional correlated spin-glass model of Chudnovsky. We have calculated the magnetization law in the approach to saturation and shown that the M(H) data fit the theory well at both high and low fields. In our fitting procedure we used three different correlation functions. The Gaussian decay correlation function fits the experimental data well for both samples.
Abstract:
The article presents the results of empirical tests showing that economic news in the print press can contain systematic biases. Companies in regulated sectors are more active in supplying information, which translates into greater prominence in the specialized economic press. The first empirical test checks the prominence of Ibex 35 and Dow Jones companies in the general and the economic press of Spain and the United States, respectively. The second analyzes the coverage in the economic newspaper Expansión. The existence of systematic informational biases leads one to question the rationality hypothesis in financial markets.
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. Until now, however, the nature of this relationship has remained elusive because most of that work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters, and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of the vertical to the lateral correlation length across all of these possible combinations, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
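The second-order statistic at the center of this approach, the 2-D autocorrelation of the migrated image, and a crude estimate of the correlation lengths and their aspect ratio can be computed as in the sketch below. The FFT-based autocorrelation, the 1/e-decay definition of the correlation lengths, and the synthetic anisotropic test field are illustrative assumptions, not the estimators used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def autocorrelation_2d(img):
    """Normalized 2-D (circular) autocorrelation of a zero-mean image via the FFT."""
    f = img - img.mean()
    F = np.fft.fft2(f)
    ac = np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)
    return ac / ac.max()

def correlation_lengths(ac, dz=1.0, dx=1.0):
    """Crude vertical/lateral correlation lengths: the lag at which the central
    column/row of the autocorrelation first drops below 1/e."""
    nz, nx = ac.shape
    cz, cx = nz // 2, nx // 2
    a_z = np.argmax(ac[cz:, cx] < 1.0 / np.e) * dz   # vertical correlation length
    a_x = np.argmax(ac[cz, cx:] < 1.0 / np.e) * dx   # lateral correlation length
    return a_z, a_x, a_x / max(a_z, 1e-12)           # lengths and aspect ratio

# Synthetic anisotropic random field (lateral correlation longer than vertical).
rng = np.random.default_rng(3)
field = gaussian_filter(rng.standard_normal((256, 256)), sigma=(2.0, 8.0))
az, ax, aspect = correlation_lengths(autocorrelation_2d(field))
print(f"vertical ~ {az:.1f}, lateral ~ {ax:.1f}, aspect ratio ~ {aspect:.2f}")
```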
Abstract:
For the general practitioner to be able to prescribe optimal therapy to his individual hypertensive patients, he needs accurate information on the therapeutic agents he is going to administer and practical treatment strategies. The information on drugs and drug combinations has to be applicable to the treatment of individual patients and not just patient study groups. A basic requirement is knowledge of the dose-response relationship for each compound in order to choose the optimal therapeutic dose. Contrary to general assumption, this key information is difficult to obtain and often not available to the physician for many years after marketing of a drug. As a consequence, excessive doses are often used. Furthermore, the physician needs comparative data on the various antihypertensive drugs that are applicable to the treatment of individual patients. In order to minimize potential side effects due to unnecessary combinations of compounds, the strategy of sequential monotherapy is proposed, with the goal of treating as many patients as possible with monotherapy at optimal doses. More drug trials of a crossover design and more individualized analyses of the results are badly needed to provide the physician with information that he can use in his daily practice. In this time of continuous intensive development of new antihypertensive agents, much could be gained in enhanced efficacy and reduced incidence of side effects by taking a closer look at the drugs already available and using them more appropriately in individual patients.