73 results for Power series models
Abstract:
An H-infinity control strategy has been developed for the design of controllers used in feedback controlled electrical substitution measurements (FCESM). The methodology has the potential to provide substantial improvements in both response time and resolution of a millimetre-wave absolute photoacoustic power meter.
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step, the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce well the observed seasonal variations in cloud occurrence. However, the models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
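As a rough illustration of the occurrence statistic evaluated above (not the paper's actual processing chain), the sketch below computes a frequency-of-occurrence profile from hypothetical boolean cloud masks on a common time-height grid; the array shapes and detection thresholds are invented for the example.

```python
import numpy as np

def cloud_frequency_of_occurrence(cloud_mask):
    """Fraction of time steps with cloud detected at each height level.

    cloud_mask: boolean array of shape (n_times, n_levels), True where the
    radar/lidar (or the model) reports cloud at that time and level.
    """
    return cloud_mask.mean(axis=0)

# Hypothetical example: two years of hourly profiles on 30 height levels.
rng = np.random.default_rng(0)
obs_mask = rng.random((17520, 30)) < 0.15    # stand-in for radar/lidar detections
model_mask = rng.random((17520, 30)) < 0.20  # stand-in for model cloud occurrence

bias = cloud_frequency_of_occurrence(model_mask) - cloud_frequency_of_occurrence(obs_mask)
print(bias.round(3))                         # positive where the model makes cloud too often
```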
Abstract:
Bayesian Model Averaging (BMA) is used for testing for multiple break points in univariate series using conjugate normal-gamma priors. This approach can test for the number of structural breaks and produce posterior probabilities for a break at each point in time. Results are averaged over specifications including stationary, stationary-around-trend, and unit root models, each containing different types and numbers of breaks and different lag lengths. The procedures are used to test for structural breaks in 14 annual macroeconomic series and 11 natural resource price series. The results indicate that there are structural breaks in all of the natural resource series and most of the macroeconomic series. Many of the series had multiple breaks. Our findings regarding the existence of unit roots, having allowed for structural breaks in the data, are largely consistent with previous work.
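A minimal sketch of the flavour of this calculation, assuming a single break in the mean of a hypothetical series and approximating the marginal likelihood with a BIC term rather than the paper's exact conjugate normal-gamma form:

```python
import numpy as np

def break_point_posteriors(y, min_seg=5):
    """Posterior probability of a single mean break at each date, using a
    BIC approximation to the marginal likelihood and a flat prior over dates.
    (The paper uses exact conjugate normal-gamma marginal likelihoods and
    averages over richer specifications; this is only a sketch.)"""
    n = len(y)
    log_ml = np.full(n, -np.inf)
    for t in range(min_seg, n - min_seg):
        seg1, seg2 = y[:t], y[t:]
        rss = ((seg1 - seg1.mean())**2).sum() + ((seg2 - seg2.mean())**2).sum()
        k = 3                                     # two segment means + one variance
        log_ml[t] = -0.5 * (n * np.log(rss / n) + k * np.log(n))
    w = np.exp(log_ml - log_ml.max())
    return w / w.sum()

# Hypothetical series with a break in the mean at t = 60
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 40)])
post = break_point_posteriors(y)
print(int(post.argmax()), post.max().round(3))    # most probable break date and its mass
```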
Abstract:
This paper presents an evaluation of the power consumption of a clocking technique for pipelined designs. The technique shows a dynamic power consumption saving of around 30% over a conventional global clocking mechanism. The results were obtained from a series of experiments on a systolic circuit implemented in Virtex-II devices. The conversion from a global-clocked pipelined design to the proposed technique is straightforward, preserving the original datapath design. The savings can be used immediately either as a power reduction benefit or to increase the frequency of operation of a design for the same power consumption.
Abstract:
Satellite data are used to quantify and examine the bias in the outgoing long-wave (LW) radiation over North Africa during May–July simulated by a range of climate models and the Met Office global numerical weather prediction (NWP) model. Simulations from an ensemble-mean of multiple climate models overestimate outgoing clear-sky long-wave radiation (LWc) by more than 20 W m⁻² relative to observations from Clouds and the Earth's Radiant Energy System (CERES) for May–July 2000 over parts of the west Sahara, and by 9 W m⁻² for the North Africa region (20°W–30°E, 10–40°N). Experiments with the atmosphere-only version of the High-resolution Hadley Centre Global Environment Model (HiGEM) suggest that including mineral dust radiative effects removes this bias. Furthermore, only by reducing surface temperature and emissivity by unrealistic amounts is it possible to explain the magnitude of the bias. Comparing simulations from the Met Office NWP model with satellite observations from Geostationary Earth Radiation Budget (GERB) instruments suggests that the model overestimates the LW by 20–40 W m⁻² during the North African summer. The bias declines over the period 2003–2008, although this is likely to relate to improvements in the model and inhomogeneity in the satellite time series. The bias in LWc coincides with high aerosol dust loading estimated from the Ozone Monitoring Instrument (OMI), including during the GERBILS field campaign (18–28 June 2007), where model overestimates in LWc greater than 20 W m⁻² and OMI-estimated aerosol optical depth (AOD) greater than 0.8 are concurrent around 20°N, 0–20°W. A model-minus-GERB LW bias of around 30 W m⁻² coincides with high AOD during the period 18–21 June 2007, although differences in cloud cover also impact the model–GERB differences. Copyright © Royal Meteorological Society and Crown Copyright, 2010
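For illustration only, a small sketch of how a regional, area-weighted model-minus-observation bias like the 9 W m⁻² figure could be computed; the grids and field values below are invented, not CERES or model output.

```python
import numpy as np

def regional_mean_bias(model_lw, obs_lw, lats, lons,
                       lat_range=(10, 40), lon_range=(-20, 30)):
    """Area-weighted (cos-latitude) mean of model-minus-observation OLR
    over a lat/lon box such as the North Africa region quoted above.
    Inputs are hypothetical 2-D fields on a regular lat/lon grid (W m-2)."""
    lat_mask = (lats >= lat_range[0]) & (lats <= lat_range[1])
    lon_mask = (lons >= lon_range[0]) & (lons <= lon_range[1])
    diff = (model_lw - obs_lw)[np.ix_(lat_mask, lon_mask)]
    w = np.cos(np.deg2rad(lats[lat_mask]))[:, None] * np.ones(lon_mask.sum())
    return (diff * w).sum() / w.sum()

# Illustrative fields on a 1-degree grid
lats = np.arange(-90, 91, 1.0)
lons = np.arange(-180, 180, 1.0)
obs = np.full((lats.size, lons.size), 280.0)
model = obs + 9.0                                   # a uniform 9 W m-2 overestimate
print(regional_mean_bias(model, obs, lats, lons))   # -> ~9.0
```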
Abstract:
The authors discuss an implementation of an object-oriented (OO) fault simulator and its use within an adaptive fault diagnostic system. The simulator models the flow of faults around a power network, reporting switchgear indications and protection messages that would be expected in a real fault scenario. The simulator has been used to train an adaptive fault diagnostic system; results and implications are discussed.
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs) with the set of candidate models consisting of all types of model used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the properties of parameterizations of processes commonly used to model heteroscedastic data are more similar than may be imagined and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
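A short sketch of the selection-with-close-competitors idea, using made-up maximised log-likelihoods and parameter counts rather than fitted GARCH-family models; the BIC form and the tolerance for a "close competitor" are assumptions of the example.

```python
import numpy as np

def select_with_close_competitors(loglik, n_params, n_obs, tol=2.0):
    """Rank candidate models by BIC and return the best model together with
    'close competitors' whose BIC lies within `tol` of the minimum.
    loglik / n_params: dicts keyed by model name; values here are hypothetical."""
    bic = {m: -2.0 * ll + n_params[m] * np.log(n_obs) for m, ll in loglik.items()}
    best = min(bic, key=bic.get)
    portfolio = [m for m, v in bic.items() if v - bic[best] <= tol]
    return best, sorted(portfolio, key=bic.get), bic

# Illustrative (made-up) maximised log-likelihoods for 2000 observations
loglik = {"ARCH(1)": -2915.0, "GARCH(1,1)": -2890.0,
          "GJR-GARCH(1,1)": -2889.2, "EGARCH(1,1)": -2888.9}
n_params = {"ARCH(1)": 3, "GARCH(1,1)": 4,
            "GJR-GARCH(1,1)": 5, "EGARCH(1,1)": 5}
best, portfolio, bic = select_with_close_competitors(loglik, n_params, 2000)
print(best, portfolio)   # the parsimonious GARCH(1,1) wins for these made-up numbers
```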
Abstract:
The applicability of the BET model for calculating the surface area of activated carbons is checked using molecular simulations. By calculating geometric surface areas for a simple model carbon slit-like pore of increasing width, and by comparing the obtained values with those for the same systems from the VEGA ZZ package (adsorbate-accessible molecular surface), it is shown that the latter methods provide correct values. For systems where a monolayer is created inside a pore, the ASA approach (GCMC, Ar, T = 87 K) underestimates the surface area of micropores (especially where only one layer is observed and/or two layers of adsorbed Ar are formed). Therefore, we propose a modification of this method based on a search for the relationship between the pore diameter and the number of layers in a pore. Finally, BET, original and modified ASA, and A-, B-, and C-point surface areas are calculated for a series of virtual porous carbons using simulated Ar adsorption isotherms (GCMC, T = 87 K). The comparison of results shows that the BET method underestimates, rather than overestimates (as was usually postulated), the surface areas of microporous carbons.
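For reference, a sketch of a standard BET surface-area calculation from an adsorption isotherm; the isotherm, fitting window, and the Ar cross-sectional area used below are assumptions of the example, not values taken from the paper.

```python
import numpy as np

# Cross-sectional area commonly assumed for Ar at 87 K (nm^2); values between
# roughly 0.138 and 0.142 appear in the literature -- treat this as an assumption.
SIGMA_AR_NM2 = 0.142
N_A = 6.022e23
V_STP = 22414.0            # cm^3 (STP) per mol of gas

def bet_surface_area(p_rel, v_ads, fit_range=(0.05, 0.30)):
    """BET surface area (m^2/g) from a simulated or measured isotherm.

    p_rel : relative pressures p/p0
    v_ads : amounts adsorbed in cm^3 (STP) per gram
    Fits the linearised BET equation
        (p/p0) / [v (1 - p/p0)] = 1/(v_m C) + (C - 1)/(v_m C) * (p/p0)
    over the conventional relative-pressure window."""
    mask = (p_rel >= fit_range[0]) & (p_rel <= fit_range[1])
    x = p_rel[mask]
    y = x / (v_ads[mask] * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)
    v_m = 1.0 / (slope + intercept)              # monolayer capacity, cm^3 STP/g
    return v_m / V_STP * N_A * SIGMA_AR_NM2 * 1e-18

# Hypothetical isotherm following an ideal BET shape (C = 100, v_m = 250)
p = np.linspace(0.01, 0.35, 40)
C, vm = 100.0, 250.0
v = vm * C * p / ((1 - p) * (1 + (C - 1) * p))
print(round(bet_surface_area(p, v), 1))          # ~954 m^2/g for the assumed v_m and sigma
```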
Abstract:
It has been known for decades that the metabolic rate of animals scales with body mass with an exponent that is almost always <1, >2/3, and often very close to 3/4. The 3/4 exponent emerges naturally from two models of resource distribution networks, radial explosion and hierarchically branched, which incorporate a minimum of specific details. Both models show that the exponent is 2/3 if the velocity of flow remains constant, but can attain a maximum value of 3/4 if velocity scales with its maximum exponent, 1/12. Quarter-power scaling can arise even when there is no underlying fractality. The canonical "fourth dimension" in biological scaling relations can result from matching the velocity of flow through the network to the linear dimension of the terminal "service volume" where resources are consumed. These models have broad applicability for the optimal design of biological and engineered systems where energy, materials, or information are distributed from a single source.
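The arithmetic behind the quoted exponents, reading the abstract's statement as the baseline 2/3 exponent and the velocity exponent combining additively:

```latex
% Metabolic rate B vs body mass M:  B \propto M^{b}
% Constant flow velocity:           b = 2/3
% Velocity scaling as M^{1/12}:     b = 2/3 + 1/12 = 8/12 + 1/12 = 9/12 = 3/4
B \;\propto\; M^{2/3} \cdot M^{1/12} \;=\; M^{3/4}
```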
Abstract:
Accurate replication of the processes associated with the energetics of the tropical ocean is necessary if coupled GCMs are to simulate the physics of ENSO correctly, including the transfer of energy from the winds to the ocean thermocline and energy dissipation during the ENSO cycle. Here, we analyze ocean energetics in coupled GCMs in terms of two integral parameters describing net energy loss in the system, using the approach recently proposed by Brown and Fedorov (J Clim 23:1563–1580, 2010a) and Fedorov (J Clim 20:1108–1117, 2007). These parameters are (1) the efficiency c of the conversion of wind power into the buoyancy power that controls the rate of change of the available potential energy (APE) in the ocean, and (2) the e-folding rate a that characterizes the damping of APE by turbulent diffusion and other processes. Estimating these two parameters for coupled models reveals potential deficiencies (and large differences) in how state-of-the-art coupled GCMs reproduce the ocean energetics as compared to ocean-only models and data-assimilating models. The majority of the coupled models we analyzed show a lower efficiency (values of c in the range of 10–50% versus 50–60% for ocean-only simulations or reanalysis) and a relatively strong energy damping (values of a⁻¹ in the range 0.4–1 years versus 0.9–1.2 years). These differences in the model energetics appear to reflect differences in the simulated thermal structure of the tropical ocean, the structure of ocean equatorial currents, and deficiencies in the way coupled models simulate ENSO.
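A toy sketch of estimating the two parameters from time series, assuming the simplified budget dE/dt = cW - aE; the synthetic series, units, and the least-squares fit are illustrative, not the diagnostic used in the study.

```python
import numpy as np

def estimate_efficiency_and_damping(wind_power, ape, dt):
    """Least-squares fit of the simplified APE budget dE/dt = c*W - a*E
    to time series of wind power W and available potential energy E.
    Only a sketch of the kind of diagnostic described above; the study
    derives c and a from the full ocean energy budget."""
    dEdt = np.gradient(ape, dt)
    A = np.column_stack([wind_power, -ape])
    (c, a), *_ = np.linalg.lstsq(A, dEdt, rcond=None)
    return c, a

# Synthetic monthly series with illustrative values (not model output).
rng = np.random.default_rng(2)
dt = 30 * 86400.0                                      # one month in seconds
t = np.arange(240)
W = 1.0 + 0.5 * np.sin(2 * np.pi * t / 48) + 0.05 * rng.standard_normal(t.size)
true_c, true_a = 0.4, 1.0 / (0.7 * 365 * 86400)        # efficiency; 0.7-year e-folding
E = np.zeros_like(W)
for i in range(1, t.size):                             # integrate dE/dt = cW - aE
    E[i] = E[i - 1] + dt * (true_c * W[i - 1] - true_a * E[i - 1])
c_hat, a_hat = estimate_efficiency_and_damping(W, E, dt)
print(round(c_hat, 2), round(a_hat * 365 * 86400, 2))  # recovers ~0.4 and ~1.4 per year
```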