48 results for Average model
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality criterion. The A-optimality criterion of the weighting matrices of the fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, enhancing model transparency by making the energy level of each derived rule-base interpretable. The new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, in which it is computationally desirable to decompose a complex model into a few submodels rather than fit a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
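The rule-ranking idea can be sketched numerically: each rule's matrix subspace is the regression matrix premultiplied by a diagonal matrix of its memberships over the data, and rules are scored by an A-optimality style measure. Below is a minimal numpy sketch assuming Gaussian membership functions; the function names and the small regularisation term are illustrative, not from the paper.

```python
import numpy as np

def gaussian_membership(x, center, width):
    """Gaussian fuzzy membership of each sample in x for one rule (assumed form)."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def a_optimality_score(P, mu):
    """A-optimality style identifiability score of one fuzzy rule.

    P  : (N, m) input regression matrix over the training data
    mu : (N,)   membership of each sample in the rule

    The rule's subspace is spanned by A = diag(mu) @ P; a small
    trace((A^T A)^{-1}) indicates a well-identifiable (low-variance) rule.
    """
    A = mu[:, None] * P                       # diag(mu) @ P without forming diag
    M = A.T @ A
    return np.trace(np.linalg.inv(M + 1e-10 * np.eye(M.shape[1])))

# Toy usage: rank two candidate rules on random data.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
P = np.column_stack([np.ones_like(x), x])     # affine T-S consequent regressors
for c in (-0.5, 0.5):
    mu = gaussian_membership(x, center=c, width=0.3)
    print(f"rule centered at {c:+.1f}: score = {a_optimality_score(P, mu):.3f}")
```

Lower scores flag rules whose parameters the data can pin down well, which is how an initial rule-base can be assembled before the orthogonal decomposition step.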
High resolution Northern Hemisphere wintertime mid-latitude dynamics during the Last Glacial Maximum
Abstract:
Hourly winter weather of the Last Glacial Maximum (LGM) is simulated using the Community Climate Model version 3 (CCM3) on a globally resolved T170 (75 km) grid. Results are compared to a longer LGM climatological run with the same boundary conditions and monthly saves. Hourly-scale animations are used to enhance interpretations. The purpose of the study is to explore whether additional insights into ice age conditions can be gleaned by going beyond the standard employment of monthly average model statistics to infer ice age weather and climate. Results for both LGM runs indicate a decrease in North Atlantic and an increase in North Pacific cyclogenesis. Storm trajectories react to the mechanical forcing of the Laurentide Ice Sheet, with Pacific storms tracking over middle Alaska and northern Canada, terminating in the Labrador Sea. This result is coincident with other model results in also showing a significant reduction in Greenland wintertime precipitation, a response supported by ice core evidence. Higher temporal resolution brings into sharper focus the close tracking of Pacific storms along the west coast of North America. This response is consistent with increased poleward heat transport in the LGM climatological run and could help explain "early" glacial warming inferred in this region from proxy climate records. Additional analyses show a large increase in central Asian surface gustiness that supports observational inferences that upper-level winds associated with Asian-Pacific storms transported Asian dust to Greenland during the LGM.
Abstract:
We present an intercomparison and verification analysis of 20 GCMs (Global Circulation Models) included in the 4th IPCC assessment report regarding their representation of the hydrological cycle over the Danube river basin for 1961–2000 and for the 2161–2200 SRESA1B scenario runs. The basin-scale properties of the hydrological cycle are computed by spatially integrating the precipitation, evaporation, and runoff fields using the Voronoi-Thiessen tessellation formalism. The span of the model-simulated mean annual water balances is of the same order of magnitude as the observed Danube discharge at the Delta; the true value is within the range simulated by the models. Some land components seem to have deficiencies, since there are cases of violation of water conservation when annual means are considered. The overall performance and the degree of agreement of the GCMs are comparable to those of the RCMs (Regional Climate Models) analyzed in a previous work, in spite of the much higher resolution and common nesting of the RCMs. The reanalyses are shown to feature several inconsistencies and cannot be used as a verification benchmark for the hydrological cycle in the Danubian region. In the scenario runs, the water balance decreases for essentially all models, whereas its interannual variability increases. Changes in the strength of the hydrological cycle are not consistent among models: it is confirmed that capturing the impact of climate change on the hydrological cycle is not an easy task over land areas. Moreover, in several cases we find that qualitatively different behaviors emerge among the models: the ensemble mean does not represent any sort of average model, and often it falls between the models' clusters.
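The basin-integration step can be illustrated with Thiessen (Voronoi) weighting: each point value receives a weight proportional to the area of its Voronoi cell clipped to the basin. The sketch below approximates those weights on a fine grid; the circular basin, station locations and values are toy inputs, not Danube data.

```python
import numpy as np

def thiessen_basin_mean(pts, values, basin_mask_fn, n=400):
    """Basin average of point data via Thiessen (Voronoi) weighting.

    Every location inside the basin takes the value of its nearest data
    point, so each point's effective weight is the area of its Voronoi
    cell inside the basin. Approximated here on an n x n regular grid.
    """
    xs = np.linspace(0, 1, n)
    X, Y = np.meshgrid(xs, xs)
    inside = basin_mask_fn(X, Y)
    # squared distance from every grid cell to every data point
    d2 = (X[..., None] - pts[:, 0]) ** 2 + (Y[..., None] - pts[:, 1]) ** 2
    nearest = np.argmin(d2, axis=-1)
    return values[nearest[inside]].mean()

# Toy usage: three "stations" over a circular basin.
pts = np.array([[0.3, 0.5], [0.7, 0.4], [0.5, 0.8]])
vals = np.array([600.0, 450.0, 700.0])        # e.g. annual precipitation, mm
circle = lambda X, Y: (X - 0.5) ** 2 + (Y - 0.5) ** 2 < 0.2
print(f"basin-mean precipitation ~ {thiessen_basin_mean(pts, vals, circle):.1f} mm")
```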
Abstract:
This article examines the ability of several models to generate optimal hedge ratios. Statistical models employed include univariate and multivariate generalized autoregressive conditionally heteroscedastic (GARCH) models, and exponentially weighted and simple moving averages. The variances of the hedged portfolios derived using these hedge ratios are compared with those based on market expectations implied by the prices of traded options. One-month and three-month hedging horizons are considered for four currency pairs. Overall, it has been found that an exponentially weighted moving-average model leads to lower portfolio variances than any of the GARCH-based, implied or time-invariant approaches.
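The minimum-variance hedge ratio behind such comparisons is h_t = Cov(Δs, Δf)/Var(Δf); with an exponentially weighted moving average, both moments follow one-line recursions. A minimal sketch, assuming the common RiskMetrics decay λ = 0.94 (the paper's exact smoothing constant is not stated in the abstract):

```python
import numpy as np

def ewma_hedge_ratio(spot_ret, fut_ret, lam=0.94):
    """Minimum-variance hedge ratio h_t = Cov(s,f)/Var(f) with EWMA moments.

    lam is the exponential decay factor; 0.94 is the usual RiskMetrics
    choice for daily data (an assumption, not the paper's value).
    """
    cov = spot_ret[0] * fut_ret[0]            # seed with first observation
    var = fut_ret[0] ** 2
    h = np.empty(len(spot_ret))
    for t in range(len(spot_ret)):
        cov = lam * cov + (1 - lam) * spot_ret[t] * fut_ret[t]
        var = lam * var + (1 - lam) * fut_ret[t] ** 2
        h[t] = cov / var
    return h

# Toy usage: correlated spot/futures returns with a true ratio near 0.9.
rng = np.random.default_rng(1)
f = rng.normal(0, 0.01, 500)
s = 0.9 * f + rng.normal(0, 0.003, 500)
h = ewma_hedge_ratio(s, f)
print(f"final hedge ratio ~ {h[-1]:.2f}")
```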
Abstract:
Microbial processes in soil are moisture, nutrient and temperature dependent and, consequently, accurate calculation of soil temperature is important for modelling nitrogen processes. Microbial activity in soil occurs even at sub-zero temperatures, so that, in northern latitudes, a method to calculate soil temperature under snow cover and in frozen soils is required. This paper describes a new and simple model to calculate daily values of soil temperature at various depths in both frozen and unfrozen soils. The model requires four parameters: average soil thermal conductivity, specific heat capacity of soil, specific heat capacity due to freezing and thawing, and an empirical snow parameter. Precipitation, air temperature and snow depth (measured or calculated) are needed as input variables. The proposed model was applied to five sites in different parts of Finland representing different climates and soil types. Observed soil temperatures at depths of 20 and 50 cm (September 1981–August 1990) were used for model calibration. The calibrated model was then tested using observed soil temperatures from September 1990 to August 2001. R² values for the calibration period varied between 0.87 and 0.96 at a depth of 20 cm and between 0.78 and 0.97 at 50 cm. R² values for the testing period were between 0.87 and 0.94 at a depth of 20 cm and between 0.80 and 0.98 at 50 cm. Thus, despite the simplifications made, the model was able to simulate soil temperature at these study sites. This simple model simulates soil temperature well in the uppermost soil layers, where most of the nitrogen processes occur. The small number of parameters required means that the model is suitable for inclusion in catchment-scale models.
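The abstract lists the four parameters and the inputs but not the governing equations; the sketch below implements a common formulation of this model family, in which soil temperature at depth z relaxes toward air temperature, the apparent heat capacity is augmented near 0°C to mimic freezing/thawing, and snow cover damps the surface forcing by exp(−f_s·D). The damping form and all parameter values are assumptions for illustration.

```python
import numpy as np

def soil_temperature(t_air, snow_depth, z=0.2, dt=86400.0,
                     k_t=0.8, c_a=1.0e6, c_ice=4.0e6, f_s=5.0):
    """Daily soil temperature (degC) at depth z (m); a sketch, not the paper's code.

    k_t   : average soil thermal conductivity (W m-1 K-1)
    c_a   : apparent volumetric heat capacity of soil (J m-3 K-1)
    c_ice : extra apparent heat capacity from freezing/thawing, applied
            just below 0 degC to mimic latent-heat buffering (assumption)
    f_s   : empirical snow parameter (m-1); snow of depth D attenuates
            the surface forcing by exp(-f_s * D)  (assumed form)
    """
    t_soil = np.empty(len(t_air))
    t_prev = t_air[0]
    for i, (ta, ds) in enumerate(zip(t_air, snow_depth)):
        c_app = c_a + (c_ice if -4.0 < t_prev < 0.0 else 0.0)
        gain = dt * k_t / (c_app * (2.0 * z) ** 2)
        t_prev += gain * (ta - t_prev) * np.exp(-f_s * ds)
        t_soil[i] = t_prev
    return t_soil

# Toy usage: a sinusoidal annual cycle with a snowpack from day 60 to 150.
days = np.arange(365)
t_air = 5.0 + 15.0 * np.sin(2 * np.pi * (days - 110) / 365.0)
snow = np.where((days > 60) & (days < 150), 0.3, 0.0)
print(f"min soil T at 20 cm ~ {soil_temperature(t_air, snow).min():.1f} degC")
```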
Abstract:
A low resolution coupled ocean-atmosphere general circulation model (OAGCM) is used to study the characteristics of the large scale ocean circulation and its climatic impacts in a series of global coupled aquaplanet experiments. Three configurations, designed to produce fundamentally different ocean circulation regimes, are considered. The first has no obstruction to zonal flow, the second contains a low barrier that blocks zonal flow in the ocean at all latitudes, creating a single enclosed basin, whilst the third contains a gap in the barrier to allow circumglobal flow at high southern latitudes. Warm greenhouse climates with a global average surface air temperature of around 27°C result in all cases. Equator-to-pole temperature gradients are shallower than in a current climate simulation. Whilst changes in the land configuration cause regional changes in temperature, winds and rainfall, heat transports within the system are little affected. Inhibition of all ocean transport on the aquaplanet leads to a reduction in global mean surface temperature of 8°C, along with a sharpening of the meridional temperature gradient. This results from a reduction in global atmospheric water vapour content and an increase in tropical albedo, both of which act to reduce global surface temperatures. Fitting a simple radiative model to the atmospheric characteristics of the OAGCM solutions suggests that a simpler atmosphere model, with radiative parameters chosen a priori based on the changing surface configuration, would have produced qualitatively different results. This implies that studies with reduced complexity atmospheres need to be guided by more complex OAGCM results on a case by case basis.
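The "simple radiative model" fitted to the OAGCM output is not specified in the abstract; a zero-dimensional energy balance model is the simplest member of that class and shows how the mechanisms quoted above (tropical albedo, water vapour acting through an effective emissivity) move global surface temperature. A sketch with assumed parameter values:

```python
def ebm_surface_temperature(albedo, emissivity, s0=1361.0):
    """Zero-dimensional energy balance: emissivity * sigma * Ts^4 balances
    absorbed solar (1 - albedo) * S0 / 4. Illustrative stand-in only."""
    sigma = 5.67e-8                    # Stefan-Boltzmann constant, W m-2 K-4
    return ((1.0 - albedo) * s0 / 4.0 / (emissivity * sigma)) ** 0.25

# Toy usage: raising albedo or effective emissivity (less greenhouse
# trapping) both cool the surface, the two effects cited in the abstract.
for alb, eps in [(0.30, 0.61), (0.33, 0.61), (0.30, 0.65)]:
    ts = ebm_surface_temperature(alb, eps) - 273.15
    print(f"albedo {alb:.2f}, eff. emissivity {eps:.2f} -> Ts ~ {ts:.1f} C")
```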
Abstract:
In this study, the processes affecting sea surface temperature variability over the 1992–98 period, encompassing the very strong 1997–98 El Niño event, are analyzed. A tropical Pacific Ocean general circulation model, forced by a combination of weekly ERS1–2 and TAO wind stresses, and climatological heat and freshwater fluxes, is first validated against observations. The model reproduces the main features of the tropical Pacific mean state, despite a weaker than observed thermal stratification, a 0.1 m s−1 too strong (weak) South Equatorial Current (North Equatorial Countercurrent), and a slight underestimate of the Equatorial Undercurrent. Good agreement is found between the model dynamic height and TOPEX/Poseidon sea level variability, with correlation/rms differences of 0.80/4.7 cm on average in the 10°N–10°S band. The model sea surface temperature variability is a bit weak, but reproduces the main features of interannual variability during the 1992–98 period. The model compares well with the TAO current variability at the equator, with correlation/rms differences of 0.81/0.23 m s−1 for surface currents. The model therefore reproduces well the observed interannual variability, with wind stress as the only interannually varying forcing. This good agreement with observations provides confidence in the comprehensive three-dimensional circulation and thermal structure of the model. A close examination of mixed layer heat balance is thus undertaken, contrasting the mean seasonal cycle of the 1993–96 period and the 1997–98 El Niño. In the eastern Pacific, cooling by exchanges with the subsurface (vertical advection, mixing, and entrainment), the atmospheric forcing, and the eddies (mainly the tropical instability waves) are the three main contributors to the heat budget. In the central–western Pacific, the zonal advection by low-frequency currents becomes the main contributor. Westerly wind bursts (in December 1996 and March and June 1997) were found to play a decisive role in the onset of the 1997–98 El Niño. They contributed to the early warming in the eastern Pacific because the downwelling Kelvin waves that they excited diminished subsurface cooling there. But it is mainly through eastward advection of the warm pool that they generated temperature anomalies in the central Pacific. The end of El Niño can be linked to the large-scale easterly anomalies that developed in the western Pacific and spread eastward, from the end of 1997 onward. In the far-western Pacific, because of the shallower than normal thermocline, these easterlies cooled the SST by vertical processes. In the central Pacific, easterlies pushed the warm pool back to the west. In the east, they led to a shallower thermocline, which ultimately allowed subsurface cooling to resume and to quickly cool the surface layer.
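The mixed layer heat balance contrasted above decomposes the SST tendency into horizontal advection, atmospheric forcing, and exchanges with the subsurface. A schematic diagnostic of those terms follows (not the model's actual code; the grid, fields and the simple entrainment closure are assumptions):

```python
import numpy as np

def mixed_layer_heat_budget(T, u, v, q_net, w_e, T_sub, h,
                            dx, dy, rho=1025.0, cp=3990.0):
    """Terms of a schematic mixed layer temperature budget:

    dT/dt = -u dT/dx - v dT/dy        (horizontal advection)
          +  q_net / (rho cp h)       (atmospheric forcing)
          -  w_e (T - T_sub) / h      (entrainment/vertical exchange)

    All fields are 2-D snapshots on a regular grid (SI units, degC).
    """
    dTdx = np.gradient(T, dx, axis=1)
    dTdy = np.gradient(T, dy, axis=0)
    adv = -(u * dTdx + v * dTdy)
    forcing = q_net / (rho * cp * h)
    entrain = -w_e * (T - T_sub) / h
    return adv, forcing, entrain, adv + forcing + entrain

# Toy usage: westward surface flow across a zonal SST gradient.
ny = nx = 50
T = 28.0 - 3.0 * np.linspace(0, 1, nx)[None, :].repeat(ny, 0)
u = np.full((ny, nx), -0.3)
v = np.zeros((ny, nx))
adv, forc, ent, total = mixed_layer_heat_budget(
    T, u, v, q_net=80.0, w_e=1e-5, T_sub=T - 2.0, h=30.0, dx=1e5, dy=1e5)
print(f"advection ~ {adv.mean() * 86400:+.3f} degC/day")
```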
Abstract:
The climatology of the OPA/ARPEGE-T21 coupled general circulation model (GCM) is presented. The atmosphere GCM has a T21 spectral truncation and the ocean GCM has a 2°×1.5° average resolution. A 50-year climatic simulation is performed using the OASIS coupler, without flux correction techniques. The mean state and seasonal cycle for the last 10 years of the experiment are described and compared to the corresponding uncoupled experiments and to climatology when available. The model reasonably simulates most of the basic features of the observed climate. Energy budgets and transports in the coupled system, of importance for climate studies, are assessed and prove to be within available estimates. After an adjustment phase of a few years, the model stabilizes around a mean state where the tropics are warm and resemble a permanent ENSO, the Southern Ocean warms, and almost no sea-ice is left in the Southern Hemisphere. The atmospheric circulation becomes more zonal and symmetric with respect to the equator. Once those systematic errors are established, the model shows little secular drift, the small remaining trends being mainly associated with horizontal physics in the ocean GCM. The stability of the model is shown to be related to qualities already present in the uncoupled GCMs used, namely a balanced radiation budget at the top-of-the-atmosphere and a tight ocean thermocline.
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, using a Green's function, to constrain general circulation models with transient tracer data. Using this method, chlorofluorocarbon-11 and -12 (CFC-11 and CFC-12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model at 4/3° resolution. Systematic differences can be seen between the observed CFC concentrations and the prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface, accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (σ0 < 27.82 kg m−3; shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was about 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value. The average water mass formation rates over the CFC transient are 7.0 and 6.0 Sv (1 Sv = 10⁶ m³ s⁻¹) for subtropical mode water and subpolar mode water, respectively.
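A Green's function inversion of this kind treats the model's tracer response to each surface parameter as a column of a linear operator and solves a regularised weighted least-squares problem for the optimal parameter values. A minimal sketch; the parameterisation (e.g. regional gas-transfer scalings) and the error model are assumptions, not the paper's setup:

```python
import numpy as np

def green_function_inversion(G, y_obs, prior, sigma_obs, sigma_prior):
    """Constrain model parameters with tracer data via a Green's function.

    G     : (n_obs, n_param); column j is the simulated tracer response at
            the observation points to a unit perturbation of parameter j
    Minimises ||(y_obs - G p)/sigma_obs||^2 + ||(p - prior)/sigma_prior||^2.
    """
    A = G / sigma_obs[:, None]
    b = y_obs / sigma_obs
    R = np.eye(len(prior)) / sigma_prior
    lhs = A.T @ A + R.T @ R
    rhs = A.T @ b + R.T @ R @ prior
    return np.linalg.solve(lhs, rhs)

# Toy usage: two flux parameters, five "observations" from a known truth.
G = np.array([[1.0, 0.1], [0.8, 0.3], [0.5, 0.5], [0.2, 0.9], [0.1, 1.1]])
truth = np.array([1.2, 0.7])
y = G @ truth
p = green_function_inversion(G, y, prior=np.ones(2),
                             sigma_obs=np.full(5, 0.05), sigma_prior=1.0)
print(f"recovered parameters ~ {p.round(2)}")
```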
Abstract:
This research supports the goal of the horticultural sector of the Colombian southwest to obtain climatic information, specifically, to predict the monthly average temperature at sites where it has not been measured. The data correspond to monthly average temperature recorded at meteorological stations in Valle del Cauca, Colombia, South America. Two components are identified in the data: (1) a component due to temporal aspects, determined by characteristics of the time series, the distribution of monthly average temperature through the months, and the temporal phenomena that increase (El Niño) and decrease (La Niña) the temperature values; and (2) a component due to the sites, determined by the clear differentiation of two populations, the valley and the mountains, which are associated with the pattern of monthly average temperature and with altitude. Finally, because of the closeness between meteorological stations, it is possible to find spatial correlation between data from nearby sites. First, a random coefficient model without spatial covariance structure in the errors is obtained for each month and geographical location (mountains and valley). Models for wet periods in the mountains show normally distributed errors; models for the valley, and for dry periods in the mountains, do not exhibit a normal pattern in the errors. In the models for mountains and wet periods, omnidirectional weighted variograms of the residuals show spatial continuity. Both the random coefficient model without spatial covariance structure in the errors and the random coefficient model with spatial covariance structure in the errors capture the influence of the El Niño and La Niña phenomena, which indicates that the inclusion of the random part in the model is appropriate. The altitude variable contributes significantly in the models for the mountains. In general, cross-validation indicates that the random coefficient models with spherical and Gaussian spatial covariance structures are the best models for wet periods in the mountains, and the worst model is the one used by the Colombian Institute for Meteorology, Hydrology and Environmental Studies (IDEAM) to predict temperature.
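The spatial part of these models rests on variogram analysis of the regression residuals: an empirical omnidirectional variogram is computed and a spherical or Gaussian model is fitted to it. A sketch of the three ingredients; the station coordinates, residuals and bin edges below are toy inputs:

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rng_):
    """Spherical variogram model gamma(h) with range rng_."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng_ - 0.5 * (h / rng_) ** 3)
    return np.where(h < rng_, g, sill)

def gaussian_variogram(h, nugget, sill, rng_):
    """Gaussian variogram model gamma(h) with practical range rng_."""
    h = np.asarray(h, dtype=float)
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * (h / rng_) ** 2))

def empirical_variogram(coords, resid, bins):
    """Omnidirectional empirical variogram of model residuals."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (resid[:, None] - resid[None, :]) ** 2
    iu = np.triu_indices(len(resid), k=1)      # each station pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean() for lo, hi in bins])

# Toy usage: residuals from 30 hypothetical stations (distances in km).
rng = np.random.default_rng(2)
coords = rng.uniform(0, 100, (30, 2))
resid = rng.normal(0, 0.5, 30)
print(empirical_variogram(coords, resid, [(0, 20), (20, 40), (40, 60)]).round(3))
print(spherical_variogram([10, 30, 80], nugget=0.05, sill=0.3, rng_=50.0).round(3))
```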
Abstract:
Controlled human intervention trials are required to confirm the hypothesis that dietary fat quality may influence insulin action. The aim was to develop a food-exchange model, suitable for use in free-living volunteers, to investigate the effects of four experimental diets distinct in fat quantity and quality: high SFA (HSFA); high MUFA (HMUFA); and two low-fat (LF) diets, one supplemented with 1.24 g EPA and DHA/d (LFn-3). A theoretical food-exchange model was developed. The average quantity of exchangeable fat was calculated as the sum of fat provided by added fats (spreads and oils), milk, cheese, biscuits, cakes, buns and pastries, using data from the National Diet and Nutrition Survey of UK adults. Most of the exchangeable fat was replaced by specifically designed study foods. Also critical to the model was the use of carbohydrate exchanges to ensure the diets were isoenergetic. Volunteers from eight centres across Europe completed the dietary intervention. Results indicated that compositional targets were largely achieved, with significant differences in fat quantity between the high-fat diets (39.9 (SEM 0.6) and 38.9 (SEM 0.51) percentage energy (%E) from fat for the HSFA and HMUFA diets respectively) and the low-fat diets (29.6 (SEM 0.6) and 29.1 (SEM 0.5) %E from fat for the LF and LFn-3 diets respectively), and in fat quality (17.5 (SEM 0.3) and 10.4 (SEM 0.2) %E from SFA and 12.7 (SEM 0.3) and 18.7 (SEM 0.4) %E from MUFA for the HSFA and HMUFA diets respectively). In conclusion, a robust, flexible food-exchange model was developed and implemented successfully in the LIPGENE dietary intervention trial.
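The compositional targets are expressed as percentage of energy (%E) from fat; with the standard Atwater factors this is simple arithmetic, sketched below (the trial's exact energy conversion factors are not given in the abstract):

```python
def percent_energy_from_fat(fat_g, cho_g, protein_g, alcohol_g=0.0):
    """Percentage of energy (%E) supplied by fat, using standard Atwater
    factors of 9, 4, 4 and 7 kcal/g (an assumption for illustration)."""
    energy = 9.0 * fat_g + 4.0 * cho_g + 4.0 * protein_g + 7.0 * alcohol_g
    return 100.0 * 9.0 * fat_g / energy

# Toy usage: a ~2400 kcal/d intake hitting the ~40 %E high-fat target.
print(f"{percent_energy_from_fat(fat_g=107, cho_g=270, protein_g=90):.1f} %E from fat")
```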
Abstract:
Temperature results from multi-decadal simulations of coupled chemistry-climate models for the recent past are analyzed using multi-linear regression including trend, solar-cycle, lower-stratospheric tropical wind, and volcanic aerosol terms. The climatology of the models for recent years is in good agreement with observations for the troposphere, but the model results diverge from each other and from observations in the stratosphere. Overall, the models agree better with observations than in previous assessments, primarily because of corrections in the observed temperatures. The annually averaged global and polar temperature trends simulated by the models are generally in agreement with revised satellite observations and radiosonde data over much of their altitude range. In the global average, the model trends underpredict the radiosonde data slightly at the top of the observed range. Over the Antarctic, some models underpredict the temperature trend in the lower stratosphere, while others overpredict the trends.
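The regression described here projects each temperature series onto a small set of known forcings. A minimal sketch with synthetic proxies standing in for the solar, tropical-wind and volcanic terms (the regressor definitions are assumptions for illustration):

```python
import numpy as np

def regress_temperature(t_anom, years, solar, qbo, aerosol):
    """Multi-linear regression of a temperature series onto a linear trend
    plus solar-cycle, tropical-wind (QBO) and volcanic-aerosol proxies.
    Returns coefficients [intercept, trend, solar, qbo, aerosol]."""
    X = np.column_stack([
        np.ones_like(years),       # intercept
        years - years.mean(),      # linear trend term
        solar,                     # e.g. a solar activity proxy
        qbo,                       # lower-stratospheric tropical wind
        aerosol,                   # volcanic aerosol loading
    ])
    coef, *_ = np.linalg.lstsq(X, t_anom, rcond=None)
    return coef

# Toy usage: synthetic 40-year monthly series with a -0.5 K/decade trend.
t = np.arange(480) / 12.0 + 1980.0
solar = np.sin(2 * np.pi * (t - 1980) / 11.0)      # ~11-year cycle
qbo = np.sin(2 * np.pi * (t - 1980) / 2.3)         # ~28-month oscillation
aer = np.where((t > 1991.5) & (t < 1994), 0.1, 0.0)  # Pinatubo-like pulse
y = -0.05 * (t - t.mean()) + 0.2 * solar + 0.1 * qbo - 1.5 * aer
noise = np.random.default_rng(3).normal(0, 0.1, t.size)
coef = regress_temperature(y + noise, t, solar, qbo, aer)
print(f"recovered trend ~ {coef[1] * 10:+.2f} K/decade")
```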
Abstract:
Temperature is one of the most prominent environmental factors that determine plant growth, development, and yield. Cool and moist conditions are most favorable for wheat. Wheat is likely to be highly vulnerable to further warming because the current temperature is already close to or above the optimum. In this study, the impacts of warming and extreme high temperature stress on wheat yield over China were investigated using the general large area model (GLAM) for annual crops. The results showed that each 1°C rise in daily mean temperature would reduce the average wheat yield in China by about 4.6%–5.7%, mainly due to the shorter growth duration, except for a small increase in yield at some grid cells. When the maximum temperature exceeded 30.5°C, the simulated grain-set fraction declined from 1 at 30.5°C to close to 0 at about 36°C. When the total grain-set was lower than the critical fractional grain-set (0.575–0.6), harvest index and potential grain yield were reduced. In order to reduce the negative impacts of warming, it is crucial to take serious actions to adapt to climate change, for example by shifting sowing dates, adjusting crop distribution and structure, breeding heat-resistant varieties, and improving the monitoring, forecasting, and early warning of extreme climate events.
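The quoted thresholds define a piecewise-linear grain-set response: a fraction of 1 up to 30.5°C, falling to 0 near 36°C, with harvest index penalised once total grain-set drops below the critical fraction. A sketch of that response follows; the proportional harvest-index penalty below the critical fraction is an assumed form:

```python
import numpy as np

def grain_set_fraction(t_max, t_crit=30.5, t_zero=36.0):
    """Fraction of grains that set as a function of maximum temperature:
    1 below t_crit, declining linearly to 0 at t_zero (the thresholds
    quoted in the abstract for the GLAM wheat runs)."""
    f = (t_zero - np.asarray(t_max, dtype=float)) / (t_zero - t_crit)
    return np.clip(f, 0.0, 1.0)

def harvest_index(f_set, f_crit=0.6, hi_max=0.45):
    """Harvest index reduced in proportion once total grain-set falls
    below the critical fraction (0.575-0.6 in the study); the linear
    penalty and hi_max value are assumptions."""
    return np.where(f_set < f_crit, hi_max * f_set / f_crit, hi_max)

# Toy usage: sweep maximum temperature through the damage range.
for tm in (29.0, 32.0, 35.0, 36.5):
    fs = float(grain_set_fraction(tm))
    print(f"Tmax {tm:4.1f} C -> grain-set {fs:.2f}, HI {float(harvest_index(fs)):.3f}")
```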
Abstract:
The intensity and distribution of daily precipitation is predicted to change under scenarios of increased greenhouse gases (GHGs). In this paper, we analyse the ability of HadCM2, a general circulation model (GCM), and a high-resolution regional climate model (RCM), both developed at the Met Office's Hadley Centre, to simulate extreme daily precipitation by reference to observations. A detailed analysis of daily precipitation is made at two UK grid boxes, where probabilities of reaching daily thresholds in the GCM and RCM are compared with observations. We find that the RCM generally overpredicts probabilities of extreme daily precipitation but that, when the GCM and RCM simulated values are scaled to have the same mean as the observations, the RCM captures the upper-tail distribution more realistically. To compare regional changes in daily precipitation in the GHG-forced period 2080–2100 in the GCM and the RCM, we develop two methods. The first considers the fractional changes in probability of local daily precipitation reaching or exceeding a fixed 15 mm threshold in the anomaly climate compared with the control. The second method uses the upper one-percentile of the control at each point as the threshold. Agreement between the models is better in both seasons with the latter method, which we suggest may be more useful when considering larger scale spatial changes. On average, the probability of precipitation exceeding the 1% threshold increases by a factor of 2.5 (GCM and RCM) in winter and by 1.7 (GCM) or 1.3 (RCM) in summer.
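The two change-factor methods compare easily in code: one fixes an absolute 15 mm threshold, the other uses each point's control upper one-percentile. A minimal sketch on synthetic daily totals (the gamma-distributed toy data stand in for model output):

```python
import numpy as np

def change_factor_fixed(ctrl, anom, thresh=15.0):
    """Fractional change in probability of daily precipitation reaching a
    fixed threshold (mm) between control and anomaly climates."""
    return (anom >= thresh).mean() / (ctrl >= thresh).mean()

def change_factor_percentile(ctrl, anom, q=99.0):
    """Same, but the threshold is the control upper one-percentile at the
    point, so each location is judged against its own climatology."""
    thresh = np.percentile(ctrl, q)
    return (anom >= thresh).mean() / ((100.0 - q) / 100.0)

# Toy usage: gamma-distributed daily totals; the anomaly run is ~20% wetter.
rng = np.random.default_rng(4)
ctrl = rng.gamma(0.7, 4.0, 20 * 90)        # 20 winters of daily values, mm
anom = rng.gamma(0.7, 4.8, 20 * 90)
print(f"fixed 15 mm threshold : x{change_factor_fixed(ctrl, anom):.2f}")
print(f"control q99 threshold : x{change_factor_percentile(ctrl, anom):.2f}")
```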
Abstract:
Using the virtual porous carbon model proposed by Harris et al., we study the effect of carbon surface oxidation on the pore size distribution (PSD) curve determined from simulated Ar, N2 and CO2 isotherms. It is assumed that surface oxidation is not destructive to the carbon skeleton and that all pores remain accessible to the studied molecules (i.e., only the effect of the change in surface chemical composition is studied). The results show two important things: oxidation of the carbon surface only very slightly changes the absolute porosity (calculated using the geometric method of Bhattacharya and Gubbins (BG)); however, PSD curves calculated from simulated isotherms are affected to a greater or lesser extent by the presence of surface oxides. The most reliable results are obtained from Ar adsorption data. Not only is adsorption of this adsorbate practically independent of the presence of surface oxides but, more importantly, for this molecule one can apply the slit-like pore model as a first approach to recover the average pore diameter of a real carbon structure. For nitrogen, the effect of carbon surface chemical composition is observed due to the quadrupole moment of this molecule, and this effect shifts the PSD curves compared to Ar. The largest differences are seen for CO2, and it is clearly demonstrated that the PSD curves obtained from adsorption isotherms of this molecule contain artificial peaks, and the average pore diameter is strongly influenced by the presence of electrostatic adsorbate-adsorbate as well as adsorbate-adsorbent interactions.