960 results for DIMENSIONAL MODEL
Abstract:
The interpretation of soil water dynamics under drip irrigation systems is relevant for crop production as well as for water use and management. In this study a three-dimensional representation of the flow of water under drip irrigation is presented. The work includes analysis of the water balance at point scale as well as area-average, exploring uncertainties in water balance estimations depending on the number of locations sampled. The water flow was monitored by detailed profile water content measurements before irrigation, after irrigation and 24 h later with a dense array of soil moisture access tubes radially distributed around selected drippers. The objective was to develop a methodology that could be used on selected occasions to obtain 'snapshots' of the detailed three-dimensional patterns of soil moisture. Such patterns are likely to be very complex, as spatial variability will be induced for a number of reasons, such as strong horizontal gradients in soil moisture, variations between individual sources in the amount of water applied and spatial variability in soil hydraulic properties. Results are compared with a widely used numerical model, Hydrus-2D. The observed dynamics of the water content distribution are in good agreement with model simulations, although some discrepancies concerning the horizontal distribution of the irrigation bulb are noted due to soil heterogeneity. (c) 2006 Elsevier B.V. All rights reserved.
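The point-scale versus area-average water balance idea can be sketched numerically. In the hedged example below (all depths, radii, and readings are invented, not the paper's data), measured water content profiles are integrated to per-tube storage, and each access tube is weighted by the annulus of soil it represents:

```python
import numpy as np

rng = np.random.default_rng(0)
depths = np.array([0.1, 0.2, 0.3, 0.4, 0.6])   # measurement depths (m)
radii = np.array([0.05, 0.15, 0.30, 0.50])     # tube distances from dripper (m)

# theta[i, j]: volumetric water content at radius i, depth j (invented
# wetting-bulb-like pattern plus measurement noise)
theta = 0.30 - 0.15 * radii[:, None] + 0.02 * rng.standard_normal((4, 5))

# Profile storage per tube (mm): trapezoidal integration over depth
dz = np.diff(depths)
storage = ((theta[:, :-1] + theta[:, 1:]) / 2 * dz).sum(axis=1) * 1000.0

# Area-average: weight each tube by the annulus of soil it represents
edges = np.array([0.0, 0.10, 0.225, 0.40, 0.60])
weights = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
area_mean = (storage * weights).sum() / weights.sum()
print(area_mean)
```

Repeating the same calculation with subsets of the tubes would give the kind of sampling-dependent uncertainty estimate the abstract mentions.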
Abstract:
A one-dimensional water column model using the Mellor and Yamada level 2.5 parameterization of vertical turbulent fluxes is presented. The model equations are discretized with a mixed finite element scheme. Details of the finite element discrete equations are given and adaptive mesh refinement strategies are presented. The refinement criterion is an "a posteriori" error estimator based on stratification, shear and distance to surface. The model's performance is assessed by studying the stress-driven penetration of a turbulent layer into a stratified fluid. This example illustrates the ability of the presented model to follow some internal structures of the flow and paves the way for truly generalized vertical coordinates. (c) 2005 Elsevier Ltd. All rights reserved.
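As a hedged illustration of the refinement idea (not the paper's actual estimator or finite element code), the sketch below builds an element-wise indicator from stratification, shear, and proximity to the surface, then splits the worst element; the weights and profiles are invented:

```python
import numpy as np

z = np.linspace(0.0, -50.0, 11)          # node depths (m), surface at 0
T = 20.0 + 0.2 * z                       # temperature profile (made up)
u = 0.5 * np.exp(z / 10.0)               # velocity profile (made up)

def refine_once(z, T, u, w=(1.0, 1.0, 0.5)):
    """Split the element with the largest error indicator."""
    zc = 0.5 * (z[:-1] + z[1:])          # element centres
    dz = np.abs(np.diff(z))
    strat = np.abs(np.diff(T) / np.diff(z))   # |dT/dz| per element
    shear = np.abs(np.diff(u) / np.diff(z))   # |du/dz| per element
    eta = (w[0] * strat + w[1] * shear + w[2] * np.exp(zc / 10.0)) * dz
    worst = np.argmax(eta)               # element to refine
    return np.sort(np.append(z, zc[worst]))[::-1]   # keep depths descending

z2 = refine_once(z, T, u)
print(len(z), "->", len(z2))
```

Iterating this step concentrates resolution where the indicator is large (here, the sheared near-surface region), which is the qualitative behaviour an a posteriori criterion is meant to deliver.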
Abstract:
The commonly held view of the conditions in the North Atlantic at the last glacial maximum, based on the interpretation of proxy records, is of large-scale cooling compared to today, limited deep convection, and extensive sea ice, all associated with a southward displaced and weakened overturning thermohaline circulation (THC) in the North Atlantic. Not all studies support that view; in particular, the "strength of the overturning circulation" is contentious and is a quantity that is difficult to determine even for the present day. Quasi-equilibrium simulations with coupled climate models forced by glacial boundary conditions have produced differing results, as have inferences made from proxy records. Most studies suggest a weaker circulation, some suggest little or no change, and a few suggest a stronger circulation. Here results are presented from a three-dimensional climate model, the Hadley Centre Coupled Model version 3 (HadCM3), of the coupled atmosphere-ocean-sea ice system suggesting, in a qualitative sense, that these diverging views could all have occurred at different times during the last glacial period, with different modes existing at different times. One mode might have been characterized by an active THC associated with moderate temperatures in the North Atlantic and a modest expanse of sea ice. The other mode, perhaps forced by large inputs of meltwater from the continental ice sheets into the northern North Atlantic, might have been characterized by a sluggish THC associated with very cold conditions around the North Atlantic and a large areal cover of sea ice. The authors' model simulation of such a mode, forced by a large input of freshwater, bears several of the characteristics of the Climate: Long-range Investigation, Mapping, and Prediction (CLIMAP) Project's reconstruction of glacial sea surface temperature and sea ice extent.
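The two-mode behaviour described here can be illustrated with the classic Stommel two-box model rather than a coupled GCM. In this hedged, nondimensional sketch (all parameters chosen only for demonstration), the same freshwater forcing supports both a strong and a sluggish overturning state, depending on whether the system starts near its salinity-fresh state or has been pushed past the threshold by a salinity pulse:

```python
# Stommel-type two-box model: overturning q = |1 - s| (thermal term fixed
# to 1), salinity difference evolving as ds/dt = F - |q| s.
def integrate(forcing, s0, steps=20000, dt=1e-3):
    """Equilibrate ds/dt = F - q*s with q = |1 - s|; return (s, q)."""
    s = s0
    for _ in range(steps):
        q = abs(1.0 - s)                 # overturning strength
        s += dt * (forcing - q * s)      # forcing vs. advective flushing
    return s, abs(1.0 - s)

s_strong, q_strong = integrate(0.20, s0=0.1)   # start near the strong mode
s_weak, q_weak = integrate(0.20, s0=1.5)       # start after a big salinity kick
print(q_strong, q_weak)
```

With F = 0.2 the model has two stable equilibria (q ≈ 0.72 and q ≈ 0.17), which is the box-model analogue of an active versus a sluggish THC coexisting under the same boundary conditions.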
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically. © Crown copyright, 2003. Royal Meteorological Society
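The standard tangent-linear test the authors refer to can be sketched as a Taylor-ratio check on a toy nonlinear map M (not the shallow-water model): the ratio ||M(x + eps*dx) - M(x)|| / ||L(eps*dx)|| should approach 1 as eps shrinks, where L is the Jacobian of M:

```python
import numpy as np

def M(x):
    """Toy nonlinear model (stand-in for the discrete forecast model)."""
    return np.array([x[0] * x[1], x[1] + np.sin(x[0])])

def L(x, dx):
    """Tangent linear model: Jacobian of M at x applied to dx."""
    return np.array([x[1] * dx[0] + x[0] * dx[1],
                     dx[1] + np.cos(x[0]) * dx[0]])

x = np.array([0.7, 1.3])
dx = np.array([1.0, -0.5])
for eps in [1e-1, 1e-3, 1e-5]:
    num = np.linalg.norm(M(x + eps * dx) - M(x))
    den = np.linalg.norm(L(x, eps * dx))
    print(eps, num / den)
```

The ratio converging to 1 as eps decreases is what "correctly coded" means in this test; the abstract's point is that the asymptotic regime can behave differently depending on whether linearization or discretization is done first.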
Abstract:
Ozone and temperature profiles from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) have been assimilated, using three-dimensional variational assimilation, into a stratosphere-troposphere version of the Met Office numerical weather-prediction system. Analyses are made for the month of September 2002, when there was an unprecedented split in the southern hemisphere polar vortex. The analyses are validated against independent ozone observations from sondes, limb-occultation and total column ozone satellite instruments. Through most of the stratosphere, precision varies from 5 to 15%, and biases are 15% or less of the analysed field. Problems remain in the vortex and below the 60 hPa level, especially at the tropopause where the analyses have too much ozone and poor agreement with independent data. Analysis problems are largely a result of the model rather than the data, giving confidence in the MIPAS ozone retrievals, though there may be a small high bias in MIPAS ozone in the lower stratosphere. Model issues include an excessive Brewer-Dobson circulation, which results both from known problems with the tracer transport scheme and from the data assimilation of dynamical variables. The extreme conditions of the vortex split reveal large differences between existing linear ozone photochemistry schemes. Despite these issues, the ozone analyses are able to successfully describe the ozone hole split and compare well to other studies of this event. Recommendations are made for the further development of the ozone assimilation system.
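The analysis step of a variational scheme can be sketched at toy scale (this is the generic linear 3D-Var/optimal-interpolation solution, not the Met Office implementation, and all numbers are invented): minimising J(x) = (x - xb)' B^-1 (x - xb) + (Hx - y)' R^-1 (Hx - y) with linear H gives xa = xb + B H' (H B H' + R)^-1 (y - H xb):

```python
import numpy as np

n, m = 5, 2
xb = np.full(n, 300.0)                    # background "ozone-like" profile
B = 25.0 * np.eye(n)                      # background error covariance
H = np.zeros((m, n))
H[0, 1] = 1.0                             # observe model level 1
H[1, 3] = 1.0                             # observe model level 3
y = np.array([310.0, 290.0])              # observations
R = 4.0 * np.eye(m)                       # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
xa = xb + K @ (y - H @ xb)                     # analysis
print(xa)
```

With a diagonal B, only the observed levels are adjusted, and each is pulled 25/29 of the way toward its observation; correlated background errors would spread the increments vertically, which is the practical role of B in a real system.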
Abstract:
Results from the first Sun-to-Earth coupled numerical model developed at the Center for Integrated Space Weather Modeling are presented. The model simulates physical processes occurring in space spanning from the corona of the Sun to the Earth's ionosphere, and it represents the first step toward creating a physics-based numerical tool for predicting space weather conditions in the near-Earth environment. Two 6- to 7-day intervals, representing different heliospheric conditions in terms of the three-dimensional configuration of the heliospheric current sheet, are chosen for simulations. These conditions lead to drastically different responses of the simulated magnetosphere-ionosphere system, emphasizing, on the one hand, the challenges one encounters in building such forecasting tools and, on the other hand, the successes that can already be achieved even at this initial stage of Sun-to-Earth modeling.
Abstract:
In this study, the processes affecting sea surface temperature variability over the 1992–98 period, encompassing the very strong 1997–98 El Niño event, are analyzed. A tropical Pacific Ocean general circulation model, forced by a combination of weekly ERS1–2 and TAO wind stresses, and climatological heat and freshwater fluxes, is first validated against observations. The model reproduces the main features of the tropical Pacific mean state, despite a weaker than observed thermal stratification, a 0.1 m s−1 too strong (weak) South Equatorial Current (North Equatorial Countercurrent), and a slight underestimate of the Equatorial Undercurrent. Good agreement is found between the model dynamic height and TOPEX/Poseidon sea level variability, with correlation/rms differences of 0.80/4.7 cm on average in the 10°N–10°S band. The model sea surface temperature variability is a bit weak, but reproduces the main features of interannual variability during the 1992–98 period. The model compares well with the TAO current variability at the equator, with correlation/rms differences of 0.81/0.23 m s−1 for surface currents. The model therefore reproduces well the observed interannual variability, with wind stress as the only interannually varying forcing. This good agreement with observations provides confidence in the comprehensive three-dimensional circulation and thermal structure of the model. A close examination of mixed layer heat balance is thus undertaken, contrasting the mean seasonal cycle of the 1993–96 period and the 1997–98 El Niño. In the eastern Pacific, cooling by exchanges with the subsurface (vertical advection, mixing, and entrainment), the atmospheric forcing, and the eddies (mainly the tropical instability waves) are the three main contributors to the heat budget. In the central–western Pacific, the zonal advection by low-frequency currents becomes the main contributor. 
Westerly wind bursts (in December 1996 and March and June 1997) were found to play a decisive role in the onset of the 1997–98 El Niño. They contributed to the early warming in the eastern Pacific because the downwelling Kelvin waves that they excited diminished subsurface cooling there. But it is mainly through eastward advection of the warm pool that they generated temperature anomalies in the central Pacific. The end of El Niño can be linked to the large-scale easterly anomalies that developed in the western Pacific and spread eastward, from the end of 1997 onward. In the far-western Pacific, because of the shallower than normal thermocline, these easterlies cooled the SST by vertical processes. In the central Pacific, easterlies pushed the warm pool back to the west. In the east, they led to a shallower thermocline, which ultimately allowed subsurface cooling to resume and to quickly cool the surface layer.
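The heat-budget bookkeeping described in this abstract can be sketched with invented numbers, simply checking that the tendency closes as the sum of the terms and ranking the contributors (the values below are illustrative placeholders, not the paper's diagnostics):

```python
# Mixed layer heat budget of the form
#   dT/dt = zonal advection + subsurface exchange + eddy flux + forcing
# Eastern-Pacific-style invented values, in degrees C per month.
budget = {
    "zonal_advection": 0.8,
    "subsurface_exchange": -1.1,   # vertical advection, mixing, entrainment
    "eddies_TIW": 0.5,             # tropical instability waves
    "atmospheric_forcing": 0.3,
}
tendency = sum(budget.values())                      # net SST tendency
dominant = max(budget, key=lambda k: abs(budget[k])) # largest contributor
print(tendency, dominant)
```

The paper's regional contrast (subsurface exchange dominant in the east, zonal advection dominant in the central-western Pacific) amounts to this ranking changing with longitude.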
Abstract:
The influence of orography on the structure of stationary planetary Rossby waves is studied in the context of a contour dynamics model of the large-scale atmospheric flow. Orography of infinitesimal and finite amplitude is studied using analytical and numerical techniques. Three different types of orography are considered: idealized orography in the form of a global wave, idealized orography in the form of a local table mountain, and the earth's orography. The study confirms the importance of resonances in both the infinitesimal and finite orography cases. With finite orography, the stationary waves organize themselves into a one-dimensional set of solutions which, due to the resonances, is piecewise connected. It is pointed out that these stationary waves could be relevant for atmospheric regimes.
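The role of resonance can be illustrated with the much simpler linear barotropic stationary-wave response (a Charney-Eliassen-type formula, not the contour-dynamics model used in the paper): the forced amplitude behaves like h0 / (Ks^2 - K^2) and blows up as the total wavenumber K approaches the stationary wavenumber Ks = sqrt(beta / U). Parameter values below are only order-of-magnitude illustrations:

```python
import numpy as np

beta = 1.6e-11          # planetary vorticity gradient (1/m/s), midlatitude-ish
U = 15.0                # zonal-mean wind (m/s)
h0 = 1.0                # orographic forcing amplitude (arbitrary units)

Ks2 = beta / U                           # stationary wavenumber squared
K = np.linspace(5e-7, 1.6e-6, 200)       # total wavenumber scan (1/m)
amplitude = h0 / (Ks2 - K**2)            # linear stationary response
resonant_K = K[np.argmax(np.abs(amplitude))]
print(np.sqrt(Ks2), resonant_K)
```

The singular response at K = Ks in linear theory is what becomes, at finite orographic amplitude, the piecewise-connected solution branches the abstract describes.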
Abstract:
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of the global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/.
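The consensus idea behind clustering-based quality assessment can be sketched as follows (a deliberate simplification, not the ModFOLDclust algorithm): each model is scored by its mean pairwise similarity to all other models in the set, on the premise that near-native conformations recur across a model set while bad models are structural outliers. "Structures" here are toy 1-D coordinate vectors and the similarity function is a crude stand-in for TM-score/GDT:

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_residues = 6, 40

# Toy "structures": copies of one conformation plus noise; model 5 is
# deliberately made an outlier.
models = np.tile(rng.standard_normal(n_residues), (n_models, 1))
models += 0.1 * rng.standard_normal(models.shape)
models[5] += 2.0

def similarity(a, b):
    """Crude structural-similarity stand-in in (0, 1]."""
    return 1.0 / (1.0 + np.mean((a - b) ** 2))

scores = [np.mean([similarity(models[i], models[j])
                   for j in range(n_models) if j != i])
          for i in range(n_models)]
print(np.argmax(scores), np.argmin(scores))
```

The outlier gets the lowest consensus score, which is the basic mechanism that makes clustering methods strong at ranking, while single-model methods must judge each structure in isolation.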
Abstract:
Multiscale modeling is emerging as one of the key challenges in mathematical biology. However, the recent rapid increase in the number of modeling methodologies being used to describe cell populations has raised a number of interesting questions. For example, at the cellular scale, how can the appropriate discrete cell-level model be identified in a given context? Additionally, how can the many phenomenological assumptions used in the derivation of models at the continuum scale be related to individual cell behavior? In order to begin to address such questions, we consider a discrete one-dimensional cell-based model in which cells are assumed to interact via linear springs. From the discrete equations of motion, the continuous Rouse [P. E. Rouse, J. Chem. Phys. 21, 1272 (1953)] model is obtained. This formalism readily allows the definition of a cell number density for which a nonlinear "fast" diffusion equation is derived. Excellent agreement is demonstrated between the continuum and discrete models. Subsequently, via the incorporation of cell division, we demonstrate that the derived nonlinear diffusion model is robust to the inclusion of more realistic biological detail. In the limit of stiff springs, where cells can be considered to be incompressible, we show that cell velocity can be directly related to cell production. This assumption is frequently made in the literature but our derivation places limits on its validity. Finally, the model is compared with a model of a similar form recently derived for a different discrete cell-based model and it is shown how the different diffusion coefficients can be understood in terms of the underlying assumptions about cell behavior in the respective discrete models.
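The discrete picture can be sketched directly: cell boundaries joined by identical linear springs, evolved in the overdamped limit eta * dx_i/dt = k (x_{i+1} - 2 x_i + x_{i-1}) (the natural spring length cancels for identical springs). A compressed region relaxes toward uniform spacing, the discrete analogue of the diffusive spreading in the continuum limit; all parameter values below are illustrative:

```python
import numpy as np

k, eta, dt = 1.0, 1.0, 0.1
# 11 cell boundaries: a compressed region next to a stretched region
x = np.concatenate([np.linspace(0.0, 2.0, 6),
                    np.linspace(3.0, 11.0, 5)])

for _ in range(2000):
    # Overdamped equation of motion for interior boundaries; ends fixed
    force = k * (x[2:] - 2.0 * x[1:-1] + x[:-2]) / eta
    x[1:-1] += dt * force

spacing = np.diff(x)
print(spacing.round(3))
```

With the ends held at 0 and 11, the ten cells relax to a uniform spacing of 1.1. Coarse-graining exactly this kind of update is how the abstract's nonlinear diffusion equation for the cell number density arises.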
Abstract:
Synthesis, structural characterization, and magnetic properties of a new cyano-bridged one-dimensional iron(III)-gadolinium(III) compound, trans-[Gd(o-phen)2(H2O)2(μ-CN)2Fe(CN)4]·2(o-phen) (o-phen = 1,10-phenanthroline), have been described. The compound crystallizes in the triclinic P-1 space group with the following unit cell parameters: a = 10.538(14) Å, b = 12.004(14) Å, c = 20.61(2) Å, α = 92.41(1)°, β = 92.76(1)°, γ = 112.72(1)°, and Z = 2. In this complex, each gadolinium(III) is coordinated to two nitrile nitrogens of the CN groups coming from two different ferricyanides, the mutually trans cyanides of each of which link another Gd(III) to create an -NC-Fe(CN)4-CN-Gd-NC- type 1-D chain structure. The one-dimensional chains are self-assembled in two dimensions via weak C-H···N hydrogen bonds. Both the variable-temperature (2-300 K, 0.01 T and 0.8 T) and variable-field (0-50,000 G, 2 K) magnetic measurements reveal the existence of very weak interaction in this molecule. The temperature dependence of the susceptibilities has been analyzed using a model for a chain of alternating classical (7/2) and quantum (1/2) spins. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Quantum calculations of the ground vibrational state tunneling splitting of H-atom and D-atom transfer in malonaldehyde are performed on a full-dimensional ab initio potential energy surface (PES). The PES is a fit to 11,147 near basis-set-limit frozen-core CCSD(T) electronic energies. This surface properly describes the invariance of the potential with respect to all permutations of identical atoms. The saddle-point barrier for the H-atom transfer on the PES is 4.1 kcal/mol, in excellent agreement with the reported ab initio value. Model one-dimensional and "exact" full-dimensional calculations of the splitting for H- and D-atom transfer are done using this PES. The tunneling splittings in full dimensionality are calculated using the unbiased "fixed-node" diffusion Monte Carlo (DMC) method in Cartesian and saddle-point normal coordinates. The ground-state tunneling splitting is found to be 21.6 cm⁻¹ in Cartesian coordinates and 22.6 cm⁻¹ in normal coordinates, with an uncertainty of 2-3 cm⁻¹. This splitting is also calculated based on a model which makes use of the exact single-well zero-point energy (ZPE) obtained with the MULTIMODE code and the DMC ZPE, and this calculation gives a tunneling splitting of 21-22 cm⁻¹. The corresponding computed splittings for the D-atom transfer are 3.0, 3.1, and 2-3 cm⁻¹. These calculated tunneling splittings agree with each other to within less than the standard uncertainties obtained with the DMC method used, which are between 2 and 3 cm⁻¹, and agree well with the experimental values of 21.6 and 2.9 cm⁻¹ for the H and D transfer, respectively. (c) 2008 American Institute of Physics.
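A one-dimensional stand-in for the "model one-dimensional" splitting calculation can be sketched by diagonalising H = -1/2 d²/dx² + V(x) on a grid for a symmetric double well (invented, unit-free parameters, not the malonaldehyde PES) and reading the tunneling splitting off as E1 - E0 of the ground-state doublet:

```python
import numpy as np

n, L = 400, 8.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 5.0 * (x**2 - 1.0) ** 2          # symmetric double well, barrier = 5

# Kinetic energy via central-difference second derivative
D2 = (np.diag(np.full(n, -2.0))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dx**2
H = -0.5 * D2 + np.diag(V)

E = np.linalg.eigvalsh(H)
splitting = E[1] - E[0]              # tunneling splitting of the doublet
print(splitting)
```

The doublet gap is exponentially sensitive to the barrier and the particle mass, which is why the D-atom splitting in the abstract is nearly an order of magnitude smaller than the H-atom one.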
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level.
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
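The flavour of forward selection with Gram-Schmidt orthogonalisation can be sketched generically (a plain forward-OLS routine on invented data, not the paper's extended algorithm): each candidate column is orthogonalised against those already chosen, and the one explaining the most residual output energy is picked next:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 6))               # candidate regressors
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.05 * rng.standard_normal(100)

selected, residual, Q = [], y.copy(), []        # Q: orthogonal basis so far
for _ in range(2):                              # pick two terms
    best, best_gain, best_w = None, -1.0, None
    for j in range(X.shape[1]):
        if j in selected:
            continue
        w = X[:, j].copy()
        for q in Q:                              # Gram-Schmidt step
            w -= (w @ q) / (q @ q) * q
        gain = (w @ residual) ** 2 / (w @ w)     # explained output energy
        if gain > best_gain:
            best, best_gain, best_w = j, gain, w
    selected.append(best)
    Q.append(best_w)
    residual -= (best_w @ residual) / (best_w @ best_w) * best_w
print(sorted(selected))
```

The routine recovers the two informative regressors; the orthogonalisation is what lets each term's contribution ("energy level") be read off independently of the others.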
Abstract:
Several previous studies have attempted to assess the sublimation depth-scales of ice particles from clouds into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observations) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; particle density may be wrong; particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further by using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of including sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not sufficient to maintain the sharp drop in humidity that is observed in the sublimation zone.
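The humidity sensitivity being tested can be sketched with a toy single falling particle (invented rate constant, a crude capacitance-style size dependence, and no real microphysics): a particle falling through drier air sublimates away over a much shallower depth, so smearing out a sharp humidity drop deepens the apparent sublimation zone:

```python
def sublimation_depth(rh, v_fall=1.0, c=6e-7, m0=1e-6, dt=1.0):
    """Distance fallen (m) before an ice particle sublimates away.

    Mass loss dm/dt = -c * (1 - rh) * m**(1/3): rate proportional to
    subsaturation, with a crude size (capacitance-like) dependence.
    All constants are invented for illustration.
    """
    m, z = m0, 0.0
    while m > 0.0 and z < 5000.0:        # cap the fall distance
        m -= dt * c * max(0.0, 1.0 - rh) * m ** (1.0 / 3.0)
        z += dt * v_fall
    return z

shallow = sublimation_depth(rh=0.4)      # dry layer: rapid sublimation
deep = sublimation_depth(rh=0.9)         # moist layer: deep sublimation zone
print(shallow, deep)
```

The roughly sixfold difference in depth between the two humidities mirrors the mechanism in the abstract: a model layer that is too moist (because the grid cannot hold a sharp humidity drop) produces sublimation zones several times deeper than observed.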
Abstract:
A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new simple preprocessing method is initially derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set, using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric, while in the later stage the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalising model parameter variance. The utilisation of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high dimensional inputs.
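The A-optimality criterion itself is easy to sketch: among candidate regressor subsets, prefer the one minimising trace((X'X)^-1), the summed variance of the least-squares parameter estimates. In the invented example below, a near-collinear pair of columns is correctly flagged as the worst two-term choice:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 5))
X[:, 4] = X[:, 0] + 0.05 * rng.standard_normal(50)   # near-collinear pair

def a_optimality(cols):
    """A-optimality score of a column subset: trace((Xs' Xs)^-1)."""
    Xs = X[:, list(cols)]
    return np.trace(np.linalg.inv(Xs.T @ Xs))

pairs = list(combinations(range(5), 2))
best = min(pairs, key=a_optimality)
worst = max(pairs, key=a_optimality)
print(best, worst)
```

Collinear columns make X'X nearly singular, so their parameter variances explode; penalising this is exactly how an A-optimality-based cost steers the selection toward identifiable, low-variance rule bases.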