942 results for Explicit method, Mean square stability, Stochastic orthogonal Runge-Kutta, Chebyshev method
Abstract:
In this paper we investigate the equilibrium properties of magnetic dipolar (ferro-) fluids and discuss finite-size effects originating from the use of different boundary conditions in computer simulations. Both periodic boundary conditions and a finite spherical box are studied. We demonstrate that periodic boundary conditions, with the Ewald sum used to account for the long-range dipolar interactions, lead to a much faster convergence (in terms of the number of investigated dipolar particles) of the magnetization curve and the initial susceptibility to their thermodynamic limits. Another unwanted effect of simulations in a finite spherical box geometry is a considerable sensitivity to the container size. We further investigate the influence of the surface term in the Ewald sum, that is, the term due to the surrounding continuum with magnetic permeability μ_BC, on the convergence properties of our observables and on the final results. The two different ways of evaluating the initial susceptibility, i.e., (1) by the magnetization response of the system to an applied field and (2) by the zero-field fluctuation of the mean-square dipole moment of the system, are compared in terms of speed and accuracy.
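The fluctuation route (2) can be sketched as follows; the conducting-boundary form of the fluctuation formula and all variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def initial_susceptibility(dipoles, volume, kT, mu0=1.0):
    """Zero-field fluctuation estimate of the initial susceptibility.

    dipoles: (n_frames, n_particles, 3) array of dipole vectors sampled
    at zero field. The fluctuation formula, here in the form appropriate
    for conducting boundary conditions (an assumption), reads
        chi = mu0 * (<M.M> - <M>.<M>) / (3 * V * kT),
    with M the total dipole moment of the box.
    """
    M = dipoles.sum(axis=1)               # total moment in each frame
    mean_M = M.mean(axis=0)
    fluct = (M * M).sum(axis=1).mean() - mean_M @ mean_M
    return mu0 * fluct / (3.0 * volume * kT)
```

Route (1) would instead fit the slope of the magnetization curve at small applied field; the fluctuation route needs only a single zero-field trajectory.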
Abstract:
Site-specific meteorological forcing appropriate for applications such as urban outdoor thermal comfort simulations can be obtained using a newly coupled scheme that combines a simple slab convective boundary layer (CBL) model and urban land surface model (ULSM) (here two ULSMs are considered). The former simulates daytime CBL height, air temperature and humidity, and the latter estimates urban surface energy and water balance fluxes accounting for changes in land surface cover. The coupled models are tested at a suburban site and two rural sites, one irrigated and one unirrigated grass, in Sacramento, U.S.A. All the variables modelled compare well to measurements (e.g. coefficient of determination = 0.97 and root mean square error = 1.5 °C for air temperature). The current version is applicable to daytime conditions and needs initial state conditions for the CBL model in the appropriate range to obtain the required performance. The coupled model allows routine observations from distant sites (e.g. rural, airport) to be used to predict air temperature and relative humidity in an urban area of interest. This simple model, which can be rapidly applied, could provide urban data for applications such as air quality forecasting and building energy modelling, in addition to outdoor thermal comfort.
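The two fit statistics quoted above (coefficient of determination and root mean square error) can be computed as below; this is a generic sketch, and the paper may define the coefficient of determination differently (e.g. as a squared correlation):

```python
import numpy as np

def rmse(obs, mod):
    """Root mean square error between observations and model output."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return float(np.sqrt(np.mean((mod - obs) ** 2)))

def r_squared(obs, mod):
    """Coefficient of determination: 1 minus residual sum of squares
    over total sum of squares about the observed mean."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    ss_res = np.sum((obs - mod) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```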
Abstract:
Current feed evaluation systems for ruminants are too imprecise to describe diets in terms of their acidosis risk. The dynamic mechanistic model described herein arises from the integration of a lactic acid (La) metabolism module into an extant model of whole-rumen function. The model was evaluated using published data from cows and sheep fed a range of diets or infused with various doses of La. The model performed well in simulating peak rumen La concentrations (coefficient of determination = 0.96; root mean square prediction error = 16.96% of observed mean), although frequency of sampling for the published data prevented a comprehensive comparison of prediction of time to peak La accumulation. The model showed a tendency for increased La accumulation following feeding of diets rich in nonstructural carbohydrates, although less-soluble starch sources such as corn tended to limit rumen La concentration. Simulated La absorption from the rumen remained low throughout the feeding cycle. The competition between bacteria and protozoa for rumen La suggests a variable contribution of protozoa to total La utilization. However, the model was unable to simulate the effects of defaunation on rumen La metabolism, indicating a need for a more detailed description of protozoal metabolism. The model could form the basis of a feed evaluation system with regard to rumen La metabolism.
Abstract:
Multiple alternating zonal jets are a ubiquitous feature of planetary atmospheres and oceans. However, most studies to date have focused on the special case of barotropic jets. Here, the dynamics of freely evolving baroclinic jets are investigated using a two-layer quasigeostrophic annulus model with sloping topography. In a suite of 15 numerical simulations, the baroclinic Rossby radius and baroclinic Rhines scale are sampled by varying the stratification and root-mean-square eddy velocity, respectively. Small-scale eddies in the initial state evolve through geostrophic turbulence and accelerate zonally as they grow in horizontal scale, first isotropically and then anisotropically. This process leads ultimately to the formation of jets, which take about 2500 rotation periods to equilibrate. The kinetic energy spectrum of the equilibrated baroclinic zonal flow steepens from a −3 power law at small scales to a −5 power law near the jet scale. The conditions most favorable for producing multiple alternating baroclinic jets are large baroclinic Rossby radius (i.e., strong stratification) and small baroclinic Rhines scale (i.e., weak root-mean-square eddy velocity). The baroclinic jet width is diagnosed objectively and found to be 2.2–2.8 times larger than the baroclinic Rhines scale, with a best estimate of 2.5 times larger. This finding suggests that Rossby wave motions must be moving at speeds of approximately 6 times the turbulent eddy velocity in order to be capable of arresting the isotropic inverse energy cascade.
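The Rhines-scale comparison above can be illustrated with a small helper; the convention L_R = sqrt(2U/beta) is one common choice (prefactors of 2 or pi vary between authors, so this is an assumption), and the 2.5 multiplier is the study's quoted best estimate for the jet width:

```python
import math

def rhines_scale(u_rms, beta):
    """Rhines scale L_R = sqrt(2 * U / beta), one common convention;
    for the baroclinic case, beta would include the topographic
    contribution from the sloping bottom."""
    return math.sqrt(2.0 * u_rms / beta)

def estimated_jet_width(u_rms, beta, factor=2.5):
    """Jet-width estimate using the study's best-estimate multiplier
    of 2.5 times the (baroclinic) Rhines scale."""
    return factor * rhines_scale(u_rms, beta)
```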
Abstract:
We systematically compare the performance of ETKF-4DVAR, 4DVAR-BEN and 4DENVAR with respect to two traditional methods (4DVAR and ETKF) and an ensemble transform Kalman smoother (ETKS) on the Lorenz 1963 model. We specifically investigated this performance with increasing nonlinearity and using a quasi-static variational assimilation algorithm as a comparison. Using the analysis root mean square error (RMSE) as a metric, these methods have been compared considering (1) assimilation window length and observation interval size and (2) ensemble size, to investigate the influence of hybrid background error covariance matrices and nonlinearity on the performance of the methods. For short assimilation windows with close to linear dynamics, all hybrid methods show an improvement in RMSE compared to the traditional methods. For long assimilation windows in which nonlinear dynamics are substantial, the variational framework can have difficulties finding the global minimum of the cost function, so we explore a quasi-static variational assimilation (QSVA) framework. Among the hybrid methods, it is seen that under certain parameters, those which do not use a climatological background error covariance do not need QSVA to perform accurately. Generally, results show that the ETKS, and hybrid methods that do not use a climatological background error covariance matrix with QSVA, outperform all other methods owing to the full flow dependency of the background error covariance matrix, which also accommodates the greatest degree of nonlinearity.
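A minimal sketch of the testbed ingredients: the Lorenz 1963 model integrated with a standard fourth-order Runge-Kutta scheme, and the analysis RMSE metric. The assimilation methods themselves are not reproduced here, and the step size is an illustrative choice:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 1963 system (standard parameters)."""
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def analysis_rmse(truth, analysis):
    """Root mean square error between a truth and an analysis state."""
    diff = np.asarray(truth) - np.asarray(analysis)
    return float(np.sqrt(np.mean(diff ** 2)))
```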
Abstract:
The Arctic is an important region in the study of climate change, but monitoring surface temperatures in this region is challenging, particularly in areas covered by sea ice. Here, in situ, satellite and reanalysis data were utilised to investigate whether global warming over recent decades could be better estimated by changing the way the Arctic is treated in calculating global mean temperature. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic surface air temperature (SAT) anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques. Kriging techniques provided the smallest errors in anomaly estimates. Similar accuracies were found for anomalies estimated from in situ meteorological station SAT records using a kriging technique. Whether additional data sources, which are not currently utilised in temperature anomaly datasets, would improve estimates of Arctic surface air temperature anomalies was investigated within the reanalysis testbed and using in situ data. For the reanalysis study, the additional input anomalies were reanalysis data sampled at certain supplementary data source locations over Arctic land and sea ice areas. For the in situ data study, the additional input anomalies over sea ice were surface temperature anomalies derived from the Advanced Very High Resolution Radiometer satellite instruments. The use of additional data sources, particularly those located in the Arctic Ocean over sea ice or on islands in sparsely observed regions, can lead to substantial improvements in the accuracy of estimated anomalies. Decreases in root mean square error can be up to 0.2 K for Arctic-average anomalies and more than 1 K for spatially resolved anomalies. Further improvements in accuracy may be accomplished through the use of other data sources.
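A toy ordinary-kriging interpolator illustrates the class of technique that gave the smallest errors; the exponential covariance model, its parameters, and the function names are illustrative assumptions, not those of the datasets discussed:

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_tgt, range_=500.0, sill=1.0):
    """Minimal ordinary kriging with an exponential covariance model
    and no nugget. xy_obs: (n, 2) observation locations; z_obs: (n,)
    anomalies; xy_tgt: (m, 2) prediction locations."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / range_)

    n = len(xy_obs)
    # Bordered system enforcing the unbiasedness (weights-sum-to-one)
    # condition via a Lagrange multiplier in the last row/column.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy_obs, xy_obs)
    K[n, n] = 0.0
    rhs = np.ones(n + 1)
    out = []
    for p in np.atleast_2d(xy_tgt):
        rhs[:n] = cov(xy_obs, p[None, :])[:, 0]
        w = np.linalg.solve(K, rhs)[:n]
        out.append(w @ z_obs)
    return np.array(out)
```

With no nugget term, the interpolator is exact at observation locations, which is one reason it performs well against withheld station data.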
Abstract:
This paper shows that radiometer channel radiances for cloudy atmospheric conditions can be simulated with an optimised frequency grid derived under clear-sky conditions. A new clear-sky optimised grid is derived for AVHRR channel 5 (12 μm, 833 cm⁻¹). For HIRS channel 11 (7.33 μm, 1364 cm⁻¹) and AVHRR channel 5, radiative transfer simulations using an optimised frequency grid are compared with simulations using a reference grid, where the optimised grid has roughly 100–1000 times fewer frequencies than the full grid. The root mean square error between the optimised and the reference simulation is found to be less than 0.3 K for both comparisons, with the magnitude of the bias less than 0.03 K. The simulations have been carried out with the radiative transfer model Atmospheric Radiative Transfer Simulator (ARTS), version 2, using a backward Monte Carlo module for the treatment of clouds. With this module, the optimised simulations are more than 10 times faster than the reference simulations. Although the number of photons is the same, the smaller number of frequencies reduces the overhead of preparing the optical properties for each frequency. With deterministic scattering solvers, the relative decrease in runtime would be even greater. The results allow for new radiative transfer applications, such as the development of new retrievals, because it becomes much quicker to carry out a large number of simulations. The conclusions are applicable to any downlooking infrared radiometer.
Abstract:
Sea-level rise (SLR) from global warming may have severe consequences for coastal cities, particularly when combined with predicted increases in the strength of tidal surges. Predicting the regional impact of SLR flooding is strongly dependent on the modelling approach and accuracy of topographic data. Here, the areas under risk of sea water flooding for London boroughs were quantified based on the projected SLR scenarios reported in Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5) and UK climatic projections 2009 (UKCP09) using a tidally-adjusted bathtub modelling approach. Medium- to very high-resolution digital elevation models (DEMs) are used to evaluate inundation extents as well as uncertainties. Depending on the SLR scenario and DEMs used, it is estimated that 3%–8% of the area of Greater London could be inundated by 2100. The boroughs with the largest areas at risk of flooding are Newham, Southwark, and Greenwich. The differences in inundation areas estimated from a digital terrain model and a digital surface model are much greater than the root mean square error differences observed between the two data types, which may be attributed to processing levels. Flood models from SRTM data underestimate the inundation extent, so their results may not be reliable for constructing flood risk maps. This analysis provides a broad-scale estimate of the potential consequences of SLR and uncertainties in the DEM-based bathtub type flood inundation modelling for London boroughs.
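A bathtub-type inundation model can be sketched as a thresholded flood fill over the DEM; the 4-neighbour connectivity rule and seeding from a single sea cell are illustrative assumptions (the tidally-adjusted implementation described above is more involved):

```python
import numpy as np
from collections import deque

def bathtub_inundation(dem, water_level, seed):
    """Bathtub model sketch: a DEM cell floods if it lies at or below
    the (tide + SLR) water level AND is hydraulically connected to the
    sea, here enforced by a 4-neighbour flood fill from a sea cell."""
    flooded = np.zeros(dem.shape, dtype=bool)
    wet = dem <= water_level
    if not wet[seed]:
        return flooded
    queue = deque([seed])
    flooded[seed] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if (0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]
                    and wet[ni, nj] and not flooded[ni, nj]):
                flooded[ni, nj] = True
                queue.append((ni, nj))
    return flooded
```

The connectivity step matters: without it, low-lying cells behind a continuous embankment would be counted as flooded, inflating the inundation area.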
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are left unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error over a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever performs better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
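The closed-form, sum-to-one combination step can be sketched as below; the small ridge term for numerical stability is an added assumption, not part of the paper's formulation:

```python
import numpy as np

def combine_sum_to_one(preds, y, ridge=1e-8):
    """Combination weights minimizing the window MSE subject to the
    weights summing to one, via a Lagrange multiplier.

    preds: (n_samples, M) sub-model predictions on the recent window
    y:     (n_samples,)   target values
    The ridge term keeps the normal matrix invertible when sub-model
    predictions are collinear (an illustrative safeguard).
    """
    A = preds.T @ preds + ridge * np.eye(preds.shape[1])
    b = preds.T @ y
    Ainv_b = np.linalg.solve(A, b)
    ones = np.ones(preds.shape[1])
    Ainv_1 = np.linalg.solve(A, ones)
    # Choose the multiplier so the constraint 1'w = 1 holds exactly.
    lam = (1.0 - ones @ Ainv_b) / (ones @ Ainv_1)
    return Ainv_b + lam * Ainv_1
```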
Abstract:
In this paper, we develop a novel constrained recursive least squares algorithm for adaptively combining a set of given multiple models. With data available in an online fashion, the linear combination coefficients of the sub-models are adapted via the proposed algorithm. We propose to minimize the mean square error with a forgetting factor and apply a sum-to-one constraint to the combination parameters. Moreover, an l1-norm constraint on the combination parameters is also applied, with the aim of achieving sparsity over the multiple models so that only a subset of models may be selected into the final model. A weighted l2-norm is then applied as an approximation to the l1-norm term. As such, at each time step, a closed-form solution for the model combination parameters is available. The contribution of this paper is to derive the proposed constrained recursive least squares algorithm, which is computationally efficient by exploiting matrix theory. The effectiveness of the approach has been demonstrated using both simulated and real time series examples.
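The weighted-l2 approximation to the l1 penalty can be illustrated with a simple iterative reweighting; for brevity this sketch omits the sum-to-one constraint and forgetting factor, and all parameter values are placeholders:

```python
import numpy as np

def sparse_combination(preds, y, l1_strength=0.1, iters=20, eps=1e-6):
    """Sparsity via a weighted l2 approximation to the l1 penalty:
    |w_i| is approximated by w_i**2 / (|w_i_prev| + eps), so each
    iteration is a plain regularized least-squares problem with a
    closed-form solution (an illustrative reweighting scheme).

    preds: (n_samples, M) sub-model predictions; y: (n_samples,) target.
    """
    M = preds.shape[1]
    w = np.full(M, 1.0 / M)          # start from uniform weights
    G = preds.T @ preds
    b = preds.T @ y
    for _ in range(iters):
        D = np.diag(l1_strength / (np.abs(w) + eps))
        w = np.linalg.solve(G + D, b)
    return w
```

Weights driven toward zero get an ever-larger penalty on the next pass, which is how the weighted l2 term mimics l1-style model selection.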
Abstract:
The sensitivity of solar irradiance at the surface to the variability of aerosol intensive optical properties is investigated for a site (Alta Floresta) in the southern portion of the Amazon basin using detailed comparisons between measured and modeled irradiances. Apart from the aerosol intensive optical properties, specifically the single scattering albedo (ω_{0λ}) and asymmetry parameter (g_λ), which were assumed constant, all other relevant input to the model was prescribed based on observation. For clean conditions, the differences between observed and modeled irradiances were consistent with instrumental uncertainty. For polluted conditions, the agreement was significantly worse, with a root mean square difference three times larger (23.5 W m⁻²). Analysis revealed a noteworthy correlation between the irradiance differences (observed minus modeled) and the column water vapor (CWV) for polluted conditions. Positive differences occurred mostly in wet conditions, while the differences became more negative as the atmosphere dried. To explore the hypothesis that the irradiance differences might be linked to the modulation of ω_{0λ} and g_λ by humidity, AERONET retrievals of aerosol properties and CWV over the same site were analyzed. The results highlight the potential role of humidity in modifying ω_{0λ} and g_λ and suggest that explaining the observed relationship between the irradiance differences and aerosol properties requires a focus on humidity-dependent processes that affect the particles' chemical composition. Undoubtedly, there is a need to better understand the role of humidity in modifying the properties of smoke aerosols in the southern portion of the Amazon basin.
Abstract:
This work presents a numerical method suitable for the study of the development of internal boundary layers (IBL) and their characteristics for flows over various types of coastal cliffs. The IBL is an important meteorological occurrence for flows with surface roughness and topographical step changes. A two-dimensional flow program was used for this study. The governing equations were written using the vorticity-velocity formulation. The spatial derivatives were discretized by high-order compact finite-difference schemes. The time integration was performed with a low-storage fourth-order Runge-Kutta scheme. The coastal cliff (step) was specified through an immersed boundary method. The code was validated by comparison of the results with experimental and observational data. The numerical simulations were carried out for different coastal cliff heights and inclinations. The results show that the predominant factors for the height of the IBL and its characteristics are the upstream velocity and the height and form (inclination) of the coastal cliff.
Abstract:
A previously proposed model describing the trapping site of interstitial atomic hydrogen in borate glasses is analyzed. In this model the atomic hydrogen is stabilized by van der Waals forces at the centers of oxygen polygons belonging to B-O ring structures in the glass network. The previously reported atomic hydrogen isothermal decay experimental data are discussed in the light of this microscopic model. The coupled differential equation system describing the observed decay kinetics was solved numerically using the Runge-Kutta method. The experimental untrapping activation energy of 0.7 × 10⁻¹⁹ J is in good agreement with the calculated dispersion interaction between the stabilized atomic hydrogen and the neighboring oxygen atoms at the vertices of hexagonal ring structures.
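A classical fourth-order Runge-Kutta integrator of the kind used for such decay kinetics might look as follows; the two-species rate law and its rate constant are placeholders, not the paper's fitted model:

```python
def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta integrator for a system
    dy/dt = f(t, y), with y given as a list of state variables."""
    h = (t1 - t0) / n_steps
    t, y = t0, list(y0)
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def decay(t, y):
    """Illustrative coupled decay: trapped hydrogen nH reacts with a
    partner species nT at a placeholder bimolecular rate."""
    nH, nT = y
    rate = 0.5 * nH * nT
    return [-rate, -rate]
```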
Abstract:
The scalar form factor describes modifications induced by the pion over the quark condensate. Assuming that representations produced by chiral perturbation theory can be pushed to high values of negative t, a region in configuration space is reached (r < R ≈ 0.5 fm) where the form factor changes sign, indicating that the condensate has turned into empty space. A simple model for the pion incorporates this feature into density functions. When supplemented by scalar-meson excitations, it yields predictions close to empirical values for the mean square radius (⟨r²⟩^S_π = 0.59 fm²) and for one of the low energy constants (l̄₄ = 4.3), with no adjusted parameters.
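For reference, the mean square radius quoted is conventionally defined through the low-energy expansion of the scalar form factor (a textbook relation, not specific to this work):

```latex
F_S(t) = F_S(0)\left[1 + \frac{\langle r^2\rangle_\pi^S}{6}\, t
         + \mathcal{O}(t^2)\right],
\qquad
\langle r^2\rangle_\pi^S
  = \left.\frac{6}{F_S(0)}\,\frac{\mathrm{d}F_S}{\mathrm{d}t}\right|_{t=0}.
```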
Abstract:
In this work, two different docking programs were used, AutoDock and FlexX, which use different types of scoring functions and searching methods. The docking poses of all quinone compounds studied stayed in the same region of trypanothione reductase (TR), a hydrophobic pocket near the Phe396, Pro398 and Leu399 amino acid residues. The compounds studied display a higher affinity for trypanothione reductase than for glutathione reductase (GR), since only two out of the 28 quinone compounds presented more favorable docking energy in the site of the human enzyme. The interaction of the quinone compounds with the TR enzyme is in agreement with other studies, which showed binding sites different from the ones formed by cysteines 52 and 58. To verify the results obtained by docking, we carried out a molecular dynamics simulation with the compounds that presented the highest and lowest docking energies. The results showed that the root mean square deviation (RMSD) between the initial and final poses was very small. In addition, the hydrogen bond pattern was conserved along the simulation. In the parasite enzyme, the amino acid residues Leu399, Met400 and Lys402 are replaced in the human enzyme by Met406, Tyr407 and Ala409, respectively. Given that Leu399 is an amino acid of the Z site, this difference could be explored to design selective inhibitors of TR.
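The RMSD quoted for the initial and final poses is the standard coordinate-wise measure; a minimal version for pre-aligned structures (no superposition step, which MD analysis tools usually perform first) is:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equally sized,
    pre-aligned (n_atoms, 3) coordinate sets."""
    a = np.asarray(coords_a, float)
    b = np.asarray(coords_b, float)
    # Per-atom squared displacement, averaged over atoms, then rooted.
    return float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))
```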