965 results for Approximat Model (scheme)


Relevance: 30.00%

Abstract:

This paper examines to what extent crops and their environment should be viewed as a coupled system. Crop impact assessments currently use climate model output offline to drive process-based crop models. However, in regions where local climate is sensitive to land surface conditions, more consistent assessments may be produced with the crop model embedded within the land surface scheme of the climate model. Using a recently developed coupled crop–climate model, the sensitivity of local climate, in particular climate variability, to climatically forced variations in crop growth throughout the tropics is examined by comparing climates simulated with dynamic and prescribed seasonal growth of croplands. Interannual variations in land surface properties associated with variations in crop growth and development were found to have significant impacts on near-surface fluxes and climate; for example, growing-season temperature variability was increased by up to 40% by the inclusion of dynamic crops. The impact was greatest in dry years, where the response of crop growth to soil moisture deficits enhanced the associated warming via a reduction in evaporation. Parts of the Sahel, India, Brazil, and southern Africa were identified where local climate variability is sensitive to variations in crop growth, and where crop yield is sensitive to variations in surface temperature. Therefore, offline seasonal forecasting methodologies in these regions may underestimate crop yield variability. The inclusion of dynamic crops also altered the mean climate of the humid tropics, highlighting the importance of including dynamic vegetation within climate models.

Relevance: 30.00%

Abstract:

QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. The model uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
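
As an illustration of the time stepping described above, the following minimal Python sketch applies a leapfrog step with a Robert (Asselin) filter to a generic tendency function; the filter coefficient, the forward-Euler start-up step and the toy oscillator in the example are assumptions for illustration, not QUAGMIRE's actual code.

```python
import numpy as np

def leapfrog_robert(f, q0, dt, nsteps, gamma=0.01):
    """Integrate dq/dt = f(q) with leapfrog time stepping and a Robert filter."""
    q_prev = np.asarray(q0, dtype=float)
    q_curr = q_prev + dt * f(q_prev)          # forward-Euler start-up step (assumed)
    for _ in range(nsteps - 1):
        q_next = q_prev + 2.0 * dt * f(q_curr)                      # leapfrog step
        q_filt = q_curr + gamma * (q_next - 2.0 * q_curr + q_prev)  # Robert filter damps the computational mode
        q_prev, q_curr = q_filt, q_next
    return q_curr

# Example: a weakly damped linear oscillator standing in for the QG dynamics
A = np.array([[0.0, 1.0], [-1.0, -0.05]])
print(leapfrog_robert(lambda q: A @ q, [1.0, 0.0], dt=0.05, nsteps=400))
```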

Relevance: 30.00%

Abstract:

A fast radiative transfer model (RTM) to compute emitted infrared radiances for the very high resolution radiometer (VHRR) onboard the operational Indian geostationary satellite Kalpana has been developed and verified. This work is a step towards the assimilation of Kalpana water vapor (WV) radiances into numerical weather prediction models. The fast RTM uses a regression-based approach to parameterize channel-specific convolved level-to-space transmittances. A comparison between the fast RTM and the line-by-line RTM demonstrated that the fast RTM can simulate line-by-line radiances for the Kalpana WV channel to an accuracy better than the instrument noise, while offering more rapid radiance calculations. A comparison of clear-sky radiances for the Kalpana WV channel with the ECMWF model first-guess radiances is also presented, to demonstrate the fast RTM performance with real observations. In order to assimilate the radiances from Kalpana, a simple scheme for bias correction has been suggested.
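
To make the regression-based idea concrete, the sketch below fits coefficients that map atmospheric predictors to a layer optical depth and then accumulates level-to-space transmittances. The choice of predictors, the least-squares fit and the non-negativity clamp are illustrative assumptions, not the Kalpana RTM's actual parameterization.

```python
import numpy as np

def fit_transmittance_regression(predictors, lbl_layer_optical_depth):
    """Fit coefficients mapping profile predictors to a layer optical depth.

    predictors: (n_profiles, n_predictors); lbl_layer_optical_depth: (n_profiles,)
    computed offline with a line-by-line model for a set of training profiles.
    """
    X = np.column_stack([np.ones(len(predictors)), predictors])
    coeffs, *_ = np.linalg.lstsq(X, lbl_layer_optical_depth, rcond=None)
    return coeffs

def level_to_space_transmittance(coeffs_per_layer, predictors_per_layer):
    """Accumulate level-to-space transmittances from regressed layer optical depths."""
    tau, taus = 1.0, []
    for coeffs, p in zip(coeffs_per_layer, predictors_per_layer):
        od = coeffs[0] + coeffs[1:] @ np.asarray(p)   # regressed layer optical depth
        tau *= np.exp(-max(od, 0.0))                  # clamp: optical depth cannot be negative
        taus.append(tau)
    return np.array(taus)

# Toy usage: two predictors (e.g. temperature- and humidity-dependent terms)
rng = np.random.default_rng(0)
preds = rng.random((100, 2))
coeffs = fit_transmittance_regression(preds, 0.1 + 0.5 * preds[:, 0] + 0.2 * preds[:, 1])
print(level_to_space_transmittance([coeffs] * 3, [[0.3, 0.4]] * 3))
```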

Relevance: 30.00%

Abstract:

The overly diverse representation of ENSO in coupled GCMs limits one's ability to describe future changes in its properties. Several studies have pointed to the key role of atmosphere feedbacks in contributing to this diversity. These feedbacks are analyzed here in two simulations of a coupled GCM that differ only by the parameterization of deep atmospheric convection and the associated clouds. Using the Kerry–Emanuel (KE) scheme in the L’Institut Pierre-Simon Laplace Coupled Model, version 4 (IPSL CM4; KE simulation), ENSO has about the right amplitude, whereas it is almost suppressed when using the Tiedtke (TI) scheme. Quantifying both the dynamical Bjerknes feedback and the heat flux feedback in KE, TI, and the corresponding Atmospheric Model Intercomparison Project (AMIP) atmosphere-only simulations, it is shown that the suppression of ENSO in TI is due to a doubling of the damping via the heat flux feedback. Because the Bjerknes positive feedback is weak in both simulations, the KE simulation exhibits the right ENSO amplitude owing to an error compensation between a too weak heat flux feedback and a too weak Bjerknes feedback. In TI, the heat flux feedback strength is closer to estimates from observations and reanalysis, leading to ENSO suppression. The shortwave heat flux and, to a lesser extent, the latent heat flux feedbacks are the dominant contributors to the change between TI and KE. The shortwave heat flux feedback differences are traced back to a modified distribution of the large-scale regimes of deep convection (negative feedback) and subsidence (positive feedback) in the east Pacific. These are further associated with the model systematic errors. It is argued that a systematic and detailed evaluation of atmosphere feedbacks during ENSO is a necessary step to fully understand its simulation in coupled GCMs.
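
A common way to quantify the heat flux feedback referred to above is to regress area-averaged net surface heat flux anomalies onto SST anomalies over an ENSO index region; the sketch below shows that diagnostic in minimal form. The Niño-3 averaging, monthly anomalies and synthetic data are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def heat_flux_feedback(net_heat_flux_anom, sst_anom):
    """Regress heat flux anomalies onto SST anomalies; the slope (W m^-2 K^-1)
    measures the feedback strength (more negative = stronger damping)."""
    return np.polyfit(sst_anom, net_heat_flux_anom, 1)[0]

# Synthetic monthly anomalies: a -15 W m^-2 K^-1 damping plus noise
rng = np.random.default_rng(1)
sst = rng.normal(size=240)
flux = -15.0 * sst + rng.normal(scale=5.0, size=240)
print(heat_flux_feedback(flux, sst))   # recovers roughly -15
```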

Relevance: 30.00%

Abstract:

The intraseasonal variability (ISV) of the Indian summer monsoon is dominated by a 30–50 day oscillation between “active” and “break” events of enhanced and reduced rainfall over the subcontinent, respectively. These organized convective events form in the equatorial Indian Ocean and propagate north to India. Atmosphere–ocean coupled processes are thought to play a key role in the intensity and propagation of these events. A high-resolution, coupled atmosphere–mixed-layer-ocean model is assembled: HadKPP. HadKPP comprises the Hadley Centre Atmospheric Model (HadAM3) and the K Profile Parameterization (KPP) mixed-layer ocean model. Following studies showing that upper-ocean vertical resolution and sub-diurnal coupling frequencies improve the simulation of ISV in SSTs, KPP is run at 1 m vertical resolution near the surface, and the atmosphere and ocean are coupled every three hours. HadKPP accurately simulates the 30–50 day ISV in rainfall and SSTs over India and the Bay of Bengal, respectively, but suffers from low ISV on the equator. This is due to the HadAM3 convection scheme producing limited ISV in surface fluxes. HadKPP demonstrates little of the observed northward propagation of intraseasonal events, producing instead a standing oscillation. The lack of equatorial ISV in convection in HadAM3 constrains the ability of KPP to produce equatorial SST anomalies, which further weakens the ISV of convection. It is concluded that while atmosphere–ocean interactions are undoubtedly essential to an accurate simulation of ISV, they are not a panacea for model deficiencies. In regions where the atmospheric forcing is adequate, such as the Bay of Bengal, KPP produces SST anomalies that are comparable to the Tropical Rainfall Measuring Mission Microwave Imager (TMI) SST analyses in both their magnitude and their timing with respect to rainfall anomalies over India. HadKPP also displays a much-improved phase relationship between rainfall and SSTs, relative to a HadAM3 ensemble forced by observed SSTs, when both are compared to observations. Coupling to mixed-layer models such as KPP has the potential to improve operational predictions of ISV, particularly when the persistence time of SST anomalies is shorter than the forecast lead time.
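
The sketch below illustrates the coupling cycle described above in schematic form: a stub mixed-layer ocean hands SSTs to a stub atmosphere, which returns surface fluxes every three hours. The stub components, their interfaces and the numbers used are placeholders, not the HadAM3 or KPP APIs.

```python
COUPLING_HOURS = 3   # sub-daily coupling interval, as in the configuration above

class StubAtmosphere:
    """Placeholder for the atmospheric component (HadAM3 in HadKPP)."""
    def integrate(self, hours, sst):
        # toy closure: net surface heat flux relaxes SST toward 300 K
        return {"net_heat_flux": -30.0 * (sst - 300.0)}   # W m^-2

class StubMixedLayerOcean:
    """Placeholder for the KPP mixed-layer ocean; 1 m top-layer depth assumed."""
    def __init__(self, sst=301.0, depth=1.0, rho_cp=4.1e6):
        self.sst, self.depth, self.rho_cp = sst, depth, rho_cp
    def integrate(self, hours, fluxes):
        # warm or cool the top layer with the supplied heat flux
        self.sst += fluxes["net_heat_flux"] * hours * 3600.0 / (self.rho_cp * self.depth)
        return self.sst

atmos, ocean = StubAtmosphere(), StubMixedLayerOcean()
sst = ocean.sst
for _ in range(int(5 * 24 / COUPLING_HOURS)):        # five days of 3-hourly exchanges
    fluxes = atmos.integrate(COUPLING_HOURS, sst)    # atmosphere sees SST fixed over the window
    sst = ocean.integrate(COUPLING_HOURS, fluxes)    # ocean returns the updated SST
print(sst)
```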

Relevance: 30.00%

Abstract:

Ozone and temperature profiles from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) have been assimilated, using three-dimensional variational assimilation, into a stratosphere–troposphere version of the Met Office numerical weather-prediction system. Analyses are made for the month of September 2002, when there was an unprecedented split in the southern hemisphere polar vortex. The analyses are validated against independent ozone observations from sondes, limb-occultation and total column ozone satellite instruments. Through most of the stratosphere, precision varies from 5 to 15%, and biases are 15% or less of the analysed field. Problems remain in the vortex and below the 60 hPa level, especially at the tropopause, where the analyses have too much ozone and poor agreement with independent data. Analysis problems are largely a result of the model rather than the data, giving confidence in the MIPAS ozone retrievals, though there may be a small high bias in MIPAS ozone in the lower stratosphere. Model issues include an excessive Brewer–Dobson circulation, which results both from known problems with the tracer transport scheme and from the data assimilation of dynamical variables. The extreme conditions of the vortex split reveal large differences between existing linear ozone photochemistry schemes. Despite these issues, the ozone analyses successfully describe the ozone hole split and compare well with other studies of this event. Recommendations are made for the further development of the ozone assimilation system.
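
For reference, three-dimensional variational assimilation minimizes a quadratic cost function J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx); the sketch below solves this for a small linear problem. The explicit B and R matrices and the linear observation operator are simplifying assumptions; the operational system works with control-variable transforms rather than explicit inverses.

```python
import numpy as np

def threedvar_analysis(xb, B, y, H, R):
    """Minimise J(x) = (x-xb)^T B^-1 (x-xb) + (y-Hx)^T R^-1 (y-Hx) for linear H."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    hessian = Binv + H.T @ Rinv @ H          # Hessian of the quadratic cost
    rhs = Binv @ xb + H.T @ Rinv @ y
    return np.linalg.solve(hessian, rhs)

# Two-variable example: one observation of the first component
xb = np.zeros(2)
B = np.eye(2)                   # background error covariance
H = np.array([[1.0, 0.0]])      # observation operator
R = np.array([[0.5]])           # observation error covariance
y = np.array([1.0])
print(threedvar_analysis(xb, B, y, H, R))   # analysis pulled toward the observation
```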

Relevance: 30.00%

Abstract:

Intercontinental Transport of Ozone and Precursors (ITOP), part of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT), was an intense research effort to measure long-range transport of pollution across the North Atlantic and its impact on O3 production. During the aircraft campaign, plumes were encountered containing large concentrations of CO plus other tracers and aerosols from forest fires in Alaska and Canada. A chemical transport model, p-TOMCAT, and new biomass burning emissions inventories are used to study the long-range transport of the emissions and their impact on the tropospheric O3 budget. The fire plume structure is modeled well over long distances until it encounters convection over Europe. The CO values within the simulated plumes closely match aircraft measurements near North America and over the Atlantic, and agree well with MOPITT CO data. O3 and NOx values were initially too high in the model plumes. However, by including additional vertical mixing of O3 above the fires, and by using a lower NO2/CO emission ratio (0.008) for boreal fires, O3 concentrations are brought closer to aircraft measurements and NO2 closer to SCIAMACHY data. Too little PAN is produced within the simulated plumes, and the simplicity of our VOC scheme may be another reason for the O3 and NOx model-data discrepancies. In the p-TOMCAT simulations the fire emissions lead to increased tropospheric O3 over North America, the North Atlantic and western Europe through photochemical production and transport. The increase in O3 over the Northern Hemisphere in the simulations peaks in July 2004, in the range 2.0 to 6.2 Tg over a baseline of about 150 Tg.

Relevance: 30.00%

Abstract:

In this work, a fault-tolerant control scheme is applied to an air handling unit of a heating, ventilation and air-conditioning system. Using the multiple-model approach, it is possible to identify faults and to control the system effectively under both faulty and normal conditions. Using well-known techniques to model and control the process, this work focuses on the importance of the cost function in fault detection and its influence on the reconfigurable controller. Experimental results show how control of the terminal unit is affected in the presence of a fault, and how recovery and reconfiguration of the control action deal with the effects of faults.
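
A minimal sketch of the multiple-model fault detection idea is given below: each model in a bank corresponds to a fault hypothesis, and an exponentially weighted cost on the output residual selects the model around which the controller is reconfigured. The cost form, forgetting factor and toy models are assumptions for illustration, not the identified AHU models of the paper.

```python
import numpy as np

def detect_fault(models, u_history, y_history, forgetting=0.95):
    """Return the index of the model (fault hypothesis) with the lowest cost.

    models: callables mapping the input history to a predicted output series;
    the cost is an exponentially weighted sum of squared output residuals.
    """
    weights = forgetting ** np.arange(len(y_history))[::-1]   # emphasise recent data
    costs = [np.sum(weights * (y_history - m(u_history)) ** 2) for m in models]
    return int(np.argmin(costs))

# Toy usage: measured behaviour matches the "fault" model (gain 2) rather than the nominal one
u = np.linspace(0.0, 1.0, 50)
y = 2.0 * u + 0.01 * np.random.default_rng(3).normal(size=50)
bank = [lambda u: 1.0 * u,     # nominal model
        lambda u: 2.0 * u]     # fault model
print(detect_fault(bank, u, y))   # -> 1, so the controller would reconfigure to the fault model
```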

Relevance: 30.00%

Abstract:

The combination of model predictive control based on linear models (MPC) with feedback linearization (FL) has attracted interest for a number of years, giving rise to MPC+FL control schemes. An important advantage of such schemes is that feedback linearizable plants can be controlled with a linear predictive controller with a fixed model. Handling input constraints within such schemes is difficult, since simple bound constraints on the input become state dependent because of the nonlinear transformation introduced by feedback linearization. This paper introduces a technique for handling input constraints within a real-time MPC+FL scheme, where the plant model employed is a class of dynamic neural networks. The technique is based on a simple affine transformation of the feasible area. A simulated case study is presented to illustrate the use and benefits of the technique.
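
The state dependence of the input constraints can be seen directly from the feedback-linearizing transformation u = alpha(x) + beta(x)·v: a fixed box constraint on u maps to bounds on v that move with the state, which is the feasible set the paper's affine transformation then handles. The sketch below computes these bounds for an illustrative alpha and beta (assumptions, not the paper's neural network model).

```python
def v_bounds(x, alpha, beta, u_min, u_max):
    """Map fixed bounds on the physical input u onto bounds on the linearizing input v,
    given u = alpha(x) + beta(x) * v evaluated at the current state x."""
    a, b = alpha(x), beta(x)
    lo, hi = (u_min - a) / b, (u_max - a) / b
    return (lo, hi) if b > 0 else (hi, lo)   # the interval flips if beta(x) is negative

# Illustrative transformation: u = x**2 + (1 + x**2) * v
print(v_bounds(0.5, lambda x: x**2, lambda x: 1.0 + x**2, u_min=-1.0, u_max=1.0))
```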

Relevance: 30.00%

Abstract:

In this correspondence, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness via combined parameter regularization and new robust structural selection criteria. In parallel with parameter regularization, we use two classes of robust model selection criteria, based either on experimental design criteria that optimize model adequacy, or on the predicted residual sums of squares (PRESS) statistic that optimizes model generalization capability. Three robust identification algorithms are introduced: combined A-optimality with the regularized orthogonal least squares algorithm, combined D-optimality with the regularized orthogonal least squares algorithm, and the combined PRESS statistic with the regularized orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalization scheme in orthogonal least squares or regularized orthogonal least squares has been extended such that the new algorithms are computationally efficient. Numerical examples are included to demonstrate the effectiveness of the algorithms.
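
As a concrete example of the PRESS criterion mentioned above, the sketch below computes leave-one-out prediction errors from a single regularized least-squares fit via the hat-matrix shortcut e_i / (1 - h_ii); the ridge-style regularization stands in for the paper's regularized orthogonal least squares, and the synthetic data are illustrative.

```python
import numpy as np

def press_statistic(Phi, y, reg=1e-6):
    """PRESS for a regularized least-squares model with regressor matrix Phi (n x m)."""
    n, m = Phi.shape
    G = Phi.T @ Phi + reg * np.eye(m)
    H = Phi @ np.linalg.solve(G, Phi.T)          # hat matrix of the regularized fit
    residuals = y - H @ y
    loo_errors = residuals / (1.0 - np.diag(H))  # leave-one-out errors without refitting
    return np.sum(loo_errors ** 2)

# Toy usage: compare PRESS for two candidate term sets
rng = np.random.default_rng(4)
Phi = rng.normal(size=(50, 5))
y = Phi @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=50)
print(press_statistic(Phi[:, [0, 2, 4]], y), press_statistic(Phi[:, [1, 3]], y))
```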

Relevance: 30.00%

Abstract:

A new identification algorithm is introduced for the Hammerstein model consisting of a nonlinear static function followed by a linear dynamical model. The nonlinear static function is characterised by using the Bezier-Bernstein approximation. The identification method is based on a hybrid scheme including the applications of the inverse of de Casteljau's algorithm, the least squares algorithm and the Gauss-Newton algorithm subject to constraints. The related work and the extension of the proposed algorithm to multi-input multi-output systems are discussed. Numerical examples including systems with some hard nonlinearities are used to illustrate the efficacy of the proposed approach through comparisons with other approaches.
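
For readers unfamiliar with the Bezier-Bernstein machinery, the sketch below shows de Casteljau's algorithm, the recursion whose inverse the identification scheme relies on; the cubic control points in the example are illustrative.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve with the given (scalar) control points at t in [0, 1]."""
    pts = list(control_points)
    while len(pts) > 1:
        # repeated linear interpolation between neighbouring control points
        pts = [(1.0 - t) * p + t * q for p, q in zip(pts[:-1], pts[1:])]
    return pts[0]

# Example: a cubic Bezier shaping a saturating static nonlinearity
print([round(de_casteljau([0.0, 0.8, 1.0, 1.0], t), 3) for t in (0.0, 0.25, 0.5, 0.75, 1.0)])
```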

Relevance: 30.00%

Abstract:

In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt, which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, resulting in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. Errors in the position of the cold front relative to the tracer release location of only 1 h resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
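
The overprediction mechanism described above can be reproduced in one dimension: a semi-Lagrangian step with (unlimited) cubic interpolation produces small negative tracer values near sharp gradients, and setting them to zero adds mass. The periodic domain, uniform wind and top-hat initial profile in the sketch below are illustrative choices, not the UM's advection code.

```python
import numpy as np

def semi_lagrangian_step(q, u, dt, dx):
    """One semi-Lagrangian step with cubic Lagrange interpolation on a periodic grid."""
    n = len(q)
    x = np.arange(n) * dx
    departure = (x - u * dt) / dx                 # departure points in index space
    i0 = np.floor(departure).astype(int)
    s = departure - i0                            # fractional position within the cell
    idx = [(i0 + k) % n for k in (-1, 0, 1, 2)]   # four surrounding grid points
    w = [-s * (s - 1) * (s - 2) / 6,              # cubic Lagrange weights
         (s + 1) * (s - 1) * (s - 2) / 2,
         -(s + 1) * s * (s - 2) / 2,
         (s + 1) * s * (s - 1) / 6]
    return sum(wk * q[ik] for wk, ik in zip(w, idx))

q = np.zeros(100)
q[40:60] = 1.0                                    # top-hat tracer pulse
for _ in range(200):
    q = semi_lagrangian_step(q, u=1.0, dt=0.4, dx=1.0)
clipped = np.maximum(q, 0.0)                      # "set negative values to zero"
print("mass before clipping:", q.sum(), "after clipping:", clipped.sum())
```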

Relevance: 30.00%

Abstract:

Several previous studies have attempted to assess the sublimation depth-scales of ice particles falling from clouds into clear air. Upon examining the sublimation depth-scales in the Met Office Unified Model (MetUM), it was found that the MetUM has evaporation depth-scales 2–3 times larger than radar observations. Similar results can be seen in the European Centre for Medium-Range Weather Forecasts (ECMWF), Regional Atmospheric Climate Model (RACMO) and Météo-France models. In this study, we use radar simulation (converting model variables into radar observations) and one-dimensional explicit microphysics numerical modelling to test and diagnose the cause of the deep sublimation depth-scales in the forecast model. The MetUM data and parametrization scheme are used to predict terminal velocity, which can be compared with the observed Doppler velocity. This can then be used to test hypotheses as to why the sublimation depth-scale is too large within the MetUM: turbulence could lead to dry-air entrainment and higher evaporation rates; the particle density may be wrong; the particle capacitance may be too high, leading to incorrect evaporation rates; or the humidity within the sublimating layer may be incorrectly represented. We show that the most likely cause of deep sublimation zones is an incorrect representation of model humidity in the layer. This is tested further by using a one-dimensional explicit microphysics model, which tests the sensitivity of ice sublimation to key atmospheric variables and is capable of including sonde and radar measurements to simulate real cases. Results suggest that the MetUM grid resolution at ice cloud altitudes is not fine enough to maintain the sharp drop in humidity that is observed in the sublimation zone.
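
A one-dimensional toy version of the explicit sublimation calculation is sketched below: a particle falls at its terminal velocity through a prescribed relative-humidity profile and loses mass in proportion to its capacitance and the subsaturation. The mass-size and fall-speed power laws, the diffusion factor and the humidity profile are assumed values for illustration, not the MetUM parametrization.

```python
import numpy as np

G_FACTOR = 1.0e-7     # combined vapour-diffusion/heat-conduction factor (kg m^-1 s^-1), assumed
A_M, B_M = 0.02, 2.0  # mass-size relation m = A_M * D**B_M (SI units), assumed
A_V, B_V = 10.0, 0.4  # fall-speed relation v = A_V * D**B_V (SI units), assumed

def sublimation_depth(m0, rh_of_z, z_top, dt=1.0):
    """Depth below z_top over which a particle of initial mass m0 sublimates away."""
    m, z = m0, z_top
    while m > 1e-12 and z > 0.0:
        D = (m / A_M) ** (1.0 / B_M)                         # diameter from the mass-size relation
        capacitance = D / 2.0                                # spherical-particle capacitance
        dmdt = 4.0 * np.pi * capacitance * (rh_of_z(z) - 1.0) * G_FACTOR
        m += dt * dmdt                                       # mass loss from subsaturation
        z -= dt * A_V * D ** B_V                             # fall at terminal velocity
    return z_top - z

# Humidity (w.r.t. ice) dropping from saturation at 4 km to 60% at the surface:
print(sublimation_depth(1e-8, lambda z: 0.6 + 0.4 * z / 4000.0, z_top=4000.0))
```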

Relevance: 30.00%

Abstract:

In financial decision-making, a number of mathematical models have been developed for financial management in construction. However, optimizing both qualitative and quantitative factors, and the semi-structured nature of construction finance optimization problems, are key challenges in solving construction finance decisions. The selection of funding schemes, formulated as a modified construction loan acquisition model, is solved here by an adaptive genetic algorithm (AGA) approach. The basic objectives of the model are to optimize the loan and to minimize the interest payments across all projects. Multiple projects being undertaken by a medium-size construction firm in Hong Kong were used as a real case study to demonstrate the application of the model to borrowing decision problems, and a compromise monthly borrowing schedule was achieved. The results indicate that the Small and Medium Enterprise (SME) Loan Guarantee Scheme (SGS) was identified first as the source of external financing. The selection of funding sources can then be made so as to avoid financial problems in the firm, by classifying qualitative factors into external, interactive and internal types and by taking additional qualitative factors, including sovereignty, credit ability and networking, into consideration. A more accurate, objective and reliable borrowing decision can thus be provided for the decision-maker when analysing the financial options.
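
The sketch below gives a minimal genetic-algorithm formulation in the spirit of the funding-scheme selection problem: each chromosome assigns one scheme to each project and fitness is the total interest payment. The interest rates, loan amounts and GA parameters are invented for illustration; the paper's adaptive GA additionally adapts its crossover and mutation rates and incorporates the qualitative factors discussed above.

```python
import random

RATES = {"SGS": 0.045, "overdraft": 0.08, "term_loan": 0.06}   # annual rates, assumed
LOANS = [2.0e6, 1.5e6, 3.0e6]                                  # loan needed per project, assumed
SCHEMES = list(RATES)

def total_interest(chromosome):
    """Fitness: total annual interest for one scheme assignment per project."""
    return sum(LOANS[i] * RATES[s] for i, s in enumerate(chromosome))

def evolve(pop_size=40, generations=100, mutation=0.1):
    pop = [[random.choice(SCHEMES) for _ in LOANS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_interest)                     # lower interest = fitter
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(LOANS))
            child = a[:cut] + b[cut:]                    # one-point crossover
            if random.random() < mutation:
                child[random.randrange(len(LOANS))] = random.choice(SCHEMES)
            children.append(child)
        pop = parents + children
    return min(pop, key=total_interest)

print(evolve())   # converges on the cheapest scheme for every project in this unconstrained toy
```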

Relevance: 30.00%

Abstract:

A novel Neuropredictive Teleoperation (NPT) scheme is presented. The design results from two key ideas: the exploitation of the measured or estimated neural input to the human arm, or its electromyograph (EMG), as the system input, and the employment of a predictor of the arm movement, based on this neural signal and an arm model, to compensate for time delays in the system. Although a multitude of such models, as well as measuring devices for the neural signals and the EMG, have been proposed, current telemanipulator research has considered only highly simplified arm models. In the present design, the bilateral constraint that the master and slave are simultaneously compliant to each other's state (equal positions and forces) is abandoned, thus obtaining a simple-to-analyze succession of only locally controlled modules and a robustness to time delays of up to 500 ms. The proposed designs were inspired by well-established physiological evidence that the brain, rather than controlling the movement on-line, programs the arm with an action plan for a complete movement, which is then executed largely in open loop, regulated only by local reflex loops. As a model of the human arm, the well-established Stark model is employed, with its mathematical representation modified to make it suitable for an engineering application. The proposed scheme is, however, valid for any arm model. BIBO-stability and passivity results for a variety of local control laws are reported. Simulation results and comparisons with traditional designs also highlight the advantages of the proposed design.
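
The delay-compensation idea can be sketched as follows: given an arm model driven by the buffered neural/EMG input, integrate the model forward over the round-trip delay so that the slave tracks a prediction of the arm's future state. The linear second-order arm model below stands in for the Stark model, and the matrices, input buffer and 500 ms horizon are illustrative assumptions.

```python
import numpy as np

def predict_ahead(x_now, u_buffer, A, B, dt):
    """Integrate the arm model x' = A x + B u forward over the delay window,
    using the buffered neural-input samples (one per time step dt)."""
    x = np.asarray(x_now, dtype=float)
    for u in u_buffer:
        x = x + dt * (A @ x + B * u)      # forward-Euler step of the arm model
    return x

# 500 ms look-ahead at 10 ms steps for a damped second-order "arm"
A = np.array([[0.0, 1.0], [-20.0, -4.0]])   # position/velocity dynamics, assumed
B = np.array([0.0, 20.0])                   # neural drive enters as a force, assumed
u_buffer = 0.3 * np.ones(50)                # buffered neural/EMG drive over the delay
print(predict_ahead([0.0, 0.0], u_buffer, A, B, dt=0.01))
```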