966 results for Average model


Relevance:

70.00%

Publisher:

Abstract:

Background It remains unclear whether it is possible to develop an epidemic forecasting model for the transmission of dengue fever in Queensland, Australia. Objectives To examine the potential impact of the El Niño/Southern Oscillation on the transmission of dengue fever in Queensland, Australia, and to explore the possibility of developing a forecast model for dengue fever. Methods Data on the Southern Oscillation Index (SOI), an indicator of El Niño/Southern Oscillation activity, were obtained from the Australian Bureau of Meteorology. The numbers of dengue fever cases notified and the numbers of postcode areas with dengue fever cases between January 1993 and December 2005 were obtained from Queensland Health, and the relevant population data were obtained from the Australian Bureau of Statistics. A multivariate Seasonal Auto-regressive Integrated Moving Average (SARIMA) model was developed and validated by dividing the data into two datasets: the data from January 1993 to December 2003 were used to construct the model and those from January 2004 to December 2005 were used to validate it. Results A decrease in the average SOI (i.e., warmer conditions) during the preceding 3–12 months was significantly associated with an increase in the monthly number of postcode areas with dengue fever cases (β = −0.038; p = 0.019). Predicted values from the SARIMA model were consistent with the observed values in the validation dataset (root-mean-square percentage error: 1.93%). Conclusions Climate variability is directly and/or indirectly associated with dengue transmission, and the development of an SOI-based epidemic forecasting system is possible for dengue fever in Queensland, Australia.
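As a rough illustration of the kind of model described above, the sketch below fits a seasonal ARIMA model with a lagged SOI covariate using statsmodels and scores an out-of-sample forecast; the file name, column names and model orders are hypothetical assumptions, not the study's actual data or specification.

```python
# Minimal sketch of a SARIMA model with an exogenous climate covariate.
# All names and model orders below are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly series: postcode-area counts and a lagged SOI column
df = pd.read_csv("dengue_soi_monthly.csv", parse_dates=["month"], index_col="month")

train = df["1993-01":"2003-12"]   # construction period
test = df["2004-01":"2005-12"]    # validation period

model = SARIMAX(
    train["postcode_areas"],
    exog=train[["soi_lagged"]],
    order=(1, 0, 1),              # assumed non-seasonal order
    seasonal_order=(1, 0, 1, 12), # assumed seasonal order for monthly data
)
fit = model.fit(disp=False)

# Out-of-sample forecast over the validation period
pred = fit.forecast(steps=len(test), exog=test[["soi_lagged"]])

obs = test["postcode_areas"].to_numpy()
rmspe = float(np.sqrt(np.mean(((pred.to_numpy() - obs) / obs) ** 2)) * 100)
print(f"Root-mean-square percentage error: {rmspe:.2f}%")
```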

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a computationally efficient model for a dc-dc boost converter that is valid for both continuous and discontinuous conduction modes; the model also incorporates significant non-idealities of the converter. Simulation of the dc-dc boost converter using an average model provides practically all the details available from simulation using the switching (instantaneous) model, except for the magnitude of the ripple in currents and voltages. A harmonic model of the converter can be used to evaluate the ripple quantities. This paper therefore proposes a combined (average-cum-harmonic) model of the boost converter. The accuracy of the combined model is validated through extensive simulations and experiments. A quantitative comparison of the computation times of the average, combined and switching models is presented. The combined model is shown to be more computationally efficient than the switching model for simulation of the transient and steady-state responses of the converter under various conditions.
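For orientation, the state-space averaged model of an ideal boost converter in continuous conduction mode takes the familiar form below; the paper's model additionally includes non-idealities and discontinuous-mode handling, which are omitted here:

$$
L\,\frac{d\bar{\imath}_L}{dt} = V_{in} - (1-d)\,\bar{v}_C, \qquad
C\,\frac{d\bar{v}_C}{dt} = (1-d)\,\bar{\imath}_L - \frac{\bar{v}_C}{R},
$$

where $d$ is the duty ratio and the overbars denote averaging over one switching period. The switching ripple that this averaging removes is what a harmonic model restores; for example, the peak-to-peak inductor current ripple in CCM is $\Delta i_L = V_{in}\, d\, T_s / L$.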

Relevance:

70.00%

Publisher:

Abstract:

This paper presents a comparative evaluation of the average and switching models of a dc-dc boost converter from the point of view of real-time simulation. Both models are used to simulate the converter in real time on a Field Programmable Gate Array (FPGA) platform. The converter is considered to operate over a wide range of conditions and can transition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM). While the average model is known to be computationally efficient from the perspective of off-line simulation, it is shown here to consume more logic resources than the switching model for real-time simulation of the dc-dc converter. Evaluation of the boundary condition between CCM and DCM is found to be the main reason for the increased resource consumption of the average model.
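As a hedged illustration of why the mode-boundary test is costly, an ideal boost converter remains in CCM only while the averaged inductor current exceeds half the switching ripple,

$$
\bar{\imath}_L > \frac{\Delta i_L}{2} = \frac{V_{in}\, d\, T_s}{2L},
$$

so an average model must evaluate an inequality of this kind, with its comparison and arithmetic logic, at every simulation step, whereas a switching model simply follows the instantaneous inductor current. The exact boundary expression implemented in the paper may differ.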

Relevance:

70.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, for which it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse-of-dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
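The sketch below illustrates, under simplifying assumptions, the A-optimality-style rule-ranking idea: each candidate rule's regression matrix is weighted by its membership (firing-strength) values and scored by the trace of the inverse of its information matrix. The data, membership functions and rule structure here are hypothetical, not the paper's construction.

```python
# Illustrative sketch of A-optimality scoring for candidate fuzzy rules.
import numpy as np

def a_optimality(X_rule):
    """Smaller trace((X^T X)^{-1}) -> better-conditioned (more identifiable) rule."""
    M = X_rule.T @ X_rule
    return np.trace(np.linalg.inv(M + 1e-9 * np.eye(M.shape[0])))  # jitter for stability

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # input regression matrix (hypothetical)
memberships = rng.uniform(size=(200, 5))   # firing strengths of 5 candidate rules

scores = []
for k in range(memberships.shape[1]):
    W = np.diag(memberships[:, k])         # weighting matrix of rule k
    scores.append(a_optimality(W @ X))     # membership-weighted regressors for rule k

ranking = np.argsort(scores)               # most identifiable rules first
print("Rule ranking by A-optimality:", ranking)
```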

Relevance:

70.00%

Publisher:

Abstract:

Nearly half of the earth's photosynthetically fixed carbon derives from the oceans. To determine global and region-specific rates, we rely on models that estimate marine net primary productivity (NPP); it is therefore essential that these models are evaluated to determine their accuracy. Here we assessed the skill of 21 ocean color models by comparing their estimates of depth-integrated NPP to 1156 in situ C-14 measurements encompassing ten marine regions, including the Sargasso Sea, pelagic North Atlantic, coastal Northeast Atlantic, Black Sea, Mediterranean Sea, Arabian Sea, subtropical North Pacific, Ross Sea, West Antarctic Peninsula, and the Antarctic Polar Frontal Zone. Average model skill, as determined by root-mean-square difference calculations, was lowest in the Black and Mediterranean Seas, highest in the pelagic North Atlantic and the Antarctic Polar Frontal Zone, and intermediate in the other six regions. The maximum fraction of model skill that may be attributable to uncertainties in both the input variables and the in situ NPP measurements was nearly 72%. On average, the simplest depth/wavelength-integrated models performed no worse than the more complex depth/wavelength-resolved models. Ocean color models were not highly challenged in extreme conditions of surface chlorophyll-a and sea surface temperature, nor in high-nitrate low-chlorophyll waters. Water column depth was the primary influence on ocean color model performance, such that average skill was significantly higher at depths greater than 250 m, suggesting that ocean color models are more challenged in Case-2 (coastal) waters than in Case-1 (pelagic) waters. Given that in situ chlorophyll-a data were used as input, algorithm improvement is required to eliminate the poor performance of ocean color NPP models in Case-2 waters close to coastlines. Finally, ocean color chlorophyll-a algorithms are themselves challenged by optically complex Case-2 waters, so using satellite-derived chlorophyll-a to estimate NPP in coastal areas would likely further reduce the skill of ocean color models.
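A minimal sketch of a root-mean-square-difference skill metric for comparing modelled and in situ depth-integrated NPP is given below; the log10 transform and variable names are assumptions commonly made in this literature, not necessarily the exact statistic used in the study.

```python
# Illustrative RMSD skill metric on log10-transformed NPP (lower = higher skill).
import numpy as np

def rmsd_log10(npp_model, npp_insitu):
    m = np.log10(np.asarray(npp_model, dtype=float))
    o = np.log10(np.asarray(npp_insitu, dtype=float))
    return np.sqrt(np.mean((m - o) ** 2))

# Hypothetical values (mg C m^-2 d^-1) for one region
print(rmsd_log10([420.0, 310.0, 800.0], [500.0, 290.0, 650.0]))
```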

Relevance:

70.00%

Publisher:

Abstract:

Changes in marine net primary productivity (PP) and export of particulate organic carbon (EP) are projected over the 21st century with four global coupled carbon cycle-climate models. These include representations of marine ecosystems and the carbon cycle of differing structure and complexity. All four models show a decrease in global mean PP and EP of between 2% and 20% by 2100 relative to preindustrial conditions for the SRES A2 emission scenario. Two different regimes for productivity changes are consistently identified in all models. The first chain of mechanisms is dominant in the low- and mid-latitude ocean and in the North Atlantic: reduced input of macro-nutrients into the euphotic zone, related to enhanced stratification, reduced mixed layer depth, and slowed circulation, causes a decrease in macro-nutrient concentrations and in PP and EP. The second regime is projected for parts of the Southern Ocean: an alleviation of light and/or temperature limitation leads to an increase in PP and EP, as productivity is fueled by a sustained nutrient input. A region of disagreement among the models is the Arctic, where three models project an increase in PP while one projects a decrease. Projected changes in seasonal and interannual variability are modest in most regions. Regional model skill metrics are proposed to generate multi-model mean fields that show improved skill in representing observation-based estimates compared with a simple multi-model average. Model results are also compared to recent productivity projections obtained with three different algorithms usually applied to infer net primary production from satellite observations.
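As a sketch of the final point, a skill-weighted multi-model mean can be contrasted with a simple average as below; the inverse-RMSD weighting is an illustrative assumption, not the specific regional metric defined in the paper.

```python
# Illustrative comparison of a simple and a skill-weighted multi-model mean.
import numpy as np

def multi_model_means(fields, rmsd_scores):
    """fields: (n_models, ny, nx) array; rmsd_scores: per-model skill (lower = better)."""
    fields = np.asarray(fields, dtype=float)
    simple = fields.mean(axis=0)                        # equal weights

    weights = 1.0 / np.asarray(rmsd_scores, dtype=float)
    weights /= weights.sum()
    weighted = np.tensordot(weights, fields, axes=1)    # sum_k w_k * field_k
    return simple, weighted

fields = np.random.default_rng(1).uniform(0.1, 1.0, size=(4, 10, 10))  # 4 hypothetical models
simple, weighted = multi_model_means(fields, rmsd_scores=[0.3, 0.5, 0.2, 0.4])
print(simple.mean(), weighted.mean())
```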

Relevance:

70.00%

Publisher:

Abstract:

To carry out stability and voltage regulation studies on more-electric aircraft systems, in which there is a preponderance of multi-pulse, rectifier-fed motor-drive equipment, average dynamic models of the rectifier converters are required. Existing methods are difficult to apply to anything other than single converters with a low pulse number. An efficient, compact method is therefore presented for deriving the approximate, linear, average model of 6- and 12-pulse rectifiers, based on the assumption of a small overlap angle. The models are validated against detailed simulations and laboratory prototypes.
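For orientation, the classical average dc-side relationship for a 6-pulse (three-phase bridge) rectifier with per-phase commutation inductance $L_s$ is

$$
\bar{V}_d \approx \frac{3\sqrt{2}}{\pi}\,V_{LL}\cos\alpha \;-\; \frac{3\,\omega L_s}{\pi}\,\bar{I}_d,
$$

with $\alpha = 0$ for a diode bridge; the commutation overlap appears only as a voltage drop proportional to the dc current. The paper's compact derivation generalizes this kind of linear average description to 6- and 12-pulse configurations; the notation here is generic rather than the authors' own.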

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of how to construct robust designs for Poisson regression models. An analytical expression is derived for robust designs for first-order Poisson regression models where uncertainty exists in the prior parameter estimates. Given certain constraints in the methodology, it may be necessary to extend the robust designs for implementation in practical experiments. With these extensions, our methodology constructs designs which perform similarly, in terms of estimation, to current techniques, and offers the solution in a more timely manner. We further apply this analytic result to cases where uncertainty exists in the linear predictor. The application of this methodology to practical design problems such as screening experiments is explored. Given the minimal prior knowledge that is usually available when conducting such experiments, it is recommended to derive designs robust across a variety of systems. However, incorporating such uncertainty into the design process can be a computationally intense exercise. Hence, our analytic approach is explored as an alternative.
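A minimal sketch of the underlying robustness idea, under assumptions of our own choosing, is to score a candidate design for first-order Poisson regression by averaging a design criterion over draws from the prior on the parameters; the prior, design points and the D-optimality criterion below are illustrative, not the analytical result derived in the paper.

```python
# Illustrative robust design scoring for first-order Poisson regression.
import numpy as np

def poisson_information(X, beta):
    """Fisher information for Poisson regression with log link: X^T diag(mu) X."""
    mu = np.exp(X @ beta)
    return X.T @ (mu[:, None] * X)

def robust_log_det(design_points, prior_draws):
    X = np.column_stack([np.ones(len(design_points)), design_points])  # intercept + x
    crits = [np.linalg.slogdet(poisson_information(X, b))[1] for b in prior_draws]
    return np.mean(crits)   # expected log-determinant over prior uncertainty

rng = np.random.default_rng(2)
prior = rng.normal(loc=[0.5, 1.0], scale=0.2, size=(500, 2))   # uncertain (b0, b1)
print(robust_log_det(np.array([-1.0, 0.0, 1.0]), prior))
```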

Relevance:

60.00%

Publisher:

Abstract:

Spatial data are now prevalent in a wide range of fields including environmental and health science. This has led to the development of a range of approaches for analysing patterns in these data. In this paper, we compare several Bayesian hierarchical models for analysing point-based data based on the discretization of the study region, resulting in grid-based spatial data. The approaches considered include two parametric models and a semiparametric model. We highlight the methodology and computation for each approach. Two simulation studies are undertaken to compare the performance of these models for various structures of simulated point-based data which resemble environmental data. A case study of a real dataset is also conducted to demonstrate a practical application of the modelling approaches. Goodness-of-fit statistics are computed to compare estimates of the intensity functions. The deviance information criterion is also considered as an alternative model evaluation criterion. The results suggest that the adaptive Gaussian Markov random field model performs well for highly sparse point-based data where there are large variations or clustering across the space, whereas the discretized log Gaussian Cox process produces a good fit for dense and clustered point-based data. In general, one should consider the nature and structure of the point-based data in order to choose the appropriate method for modelling discretized spatial point-based data.
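A minimal sketch of the discretization step shared by these grid-based approaches is shown below: point observations are binned onto a regular grid of cell counts, which then feed a Poisson or log-Gaussian Cox style likelihood. The grid size and extent are arbitrary choices here, not those of the paper.

```python
# Illustrative discretization of point-based data onto a regular grid.
import numpy as np

def discretize_points(x, y, nx=20, ny=20, extent=(0.0, 1.0, 0.0, 1.0)):
    """Return a (ny, nx) array of point counts per grid cell."""
    counts, _, _ = np.histogram2d(
        y, x, bins=[ny, nx],
        range=[[extent[2], extent[3]], [extent[0], extent[1]]],
    )
    return counts

rng = np.random.default_rng(3)
x, y = rng.uniform(size=500), rng.uniform(size=500)   # hypothetical point locations
grid_counts = discretize_points(x, y)
print(grid_counts.sum())   # total number of points preserved
```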

Relevance:

60.00%

Publisher:

Abstract:

Due to its three-dimensional folding pattern, the human neocortex poses a challenge for accurate co-registration of grouped functional brain imaging data. The present study addressed this problem by employing three-dimensional continuum-mechanical image-warping techniques to derive average anatomical representations for co-registration of functional magnetic resonance brain imaging data obtained from 10 male first-episode schizophrenia patients and 10 age-matched male healthy volunteers while they performed a version of the Tower of London task. This novel technique produced an equivalent representation of blood oxygenation level dependent (BOLD) response across hemispheres, cortical regions, and groups, respectively, when compared to intensity-average co-registration, using a deformable Brodmann area atlas as anatomical reference. Somewhat closer association of Brodmann area boundaries with primary visual and auditory areas was evident using the gyral pattern average model. Statistically thresholded BOLD cluster data confirmed predominantly bilateral prefrontal and parietal, right frontal and dorsolateral prefrontal, and left occipital activation in healthy subjects, while patients' hemispheric dominance pattern was diminished or reversed, particularly decreasing cortical BOLD response with increasing task difficulty in the right superior temporal gyrus. Reduced regional gray matter thickness correlated with reduced left-hemispheric prefrontal/frontal and bilateral parietal BOLD activation in patients. This is the first study demonstrating that reduction of regional gray matter in first-episode schizophrenia patients is associated with impaired brain function when performing the Tower of London task, and it supports previous findings of impaired executive attention and working memory in schizophrenia.

Relevance:

60.00%

Publisher:

Abstract:

This paper demonstrates the application of an inverse filtering technique to power systems. To implement this method, the control objective should be based on a system variable that needs to be set to a specific value at each sampling time. A control input is calculated to generate the desired output of the plant, and the relationship between the two is used to design an auto-regressive model. The auto-regressive model is converted to a moving average model to calculate the control input based on the future values of the desired output. The future values required to construct the output are therefore predicted to generate the appropriate control input for the next sampling time.
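As a rough sketch of the AR-to-MA conversion step, the truncated impulse response of an auto-regressive model yields an approximate moving-average representation, as below; the coefficients and truncation length are hypothetical, not the paper's identified plant model.

```python
# Illustrative AR-to-MA conversion via the truncated impulse response.
import numpy as np

def ar_to_ma(ar_coeffs, n_terms=20):
    """AR: y[n] = sum_k a_k y[n-k] + u[n]  ->  truncated MA: y[n] ~= sum_j h_j u[n-j]."""
    a = np.asarray(ar_coeffs, dtype=float)
    h = np.zeros(n_terms)
    h[0] = 1.0
    for n in range(1, n_terms):
        for k in range(1, min(n, len(a)) + 1):
            h[n] += a[k - 1] * h[n - k]
    return h

ma = ar_to_ma([1.2, -0.5])     # hypothetical AR(2) identified from plant data
print(np.round(ma[:6], 4))
```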

Relevance:

60.00%

Publisher:

Abstract:

From the autocorrelation function of geomagnetic polarity intervals, it is shown that the field reversal intervals are not independent but form a process akin to a Markov process, in which the random input to the model is itself a moving average process. The input to the moving average model is, however, an independent Gaussian random sequence. All the parameters in this model of geomagnetic field reversal have been estimated. In physical terms, this model implies that the mechanism of reversal possesses a memory.
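A minimal sketch of the basic building block, a moving-average process driven by an independent Gaussian sequence together with its estimated autocorrelation, is given below; the coefficients are arbitrary illustrations, not the estimated geomagnetic parameters.

```python
# Illustrative MA process driven by independent Gaussian noise, plus its ACF.
import numpy as np

rng = np.random.default_rng(4)
eps = rng.standard_normal(5000)              # independent Gaussian input
theta = np.array([1.0, 0.6, 0.3])            # MA(2) coefficients (lags 0, 1, 2)
x = np.convolve(eps, theta, mode="valid")    # moving-average process

def autocorr(series, max_lag=10):
    s = series - series.mean()
    denom = np.dot(s, s)
    return np.array([np.dot(s[:-lag or None], s[lag:]) / denom for lag in range(max_lag + 1)])

print(np.round(autocorr(x), 3))              # nonzero only up to the MA order (plus noise)
```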

Relevance:

60.00%

Publisher:

Abstract:

This paper discusses dynamic modeling of non-isolated dc-dc converters (buck, boost and buck-boost) under continuous and discontinuous modes of operation. Three types of models are presented for each converter, namely the switching model, the average model and the harmonic model. These models include significant non-idealities of the converters. The switching model gives the instantaneous currents and voltages of the converter. The average model provides the ripple-free currents and voltages, averaged over a switching cycle. The harmonic model gives the peak-to-peak values of the ripple in currents and voltages. The validity of all these models is established by comparing simulation results with experimental results from laboratory prototypes at different steady-state and transient conditions. Simulation based on a combination of the average and harmonic models is shown to provide all the relevant information obtained from the switching model, while consuming less computation time than the latter.
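To illustrate what an average model buys computationally, the sketch below integrates the ideal CCM averaged equations of a buck converter as a pair of ODEs, so the solver never resolves individual switching events; the non-idealities and DCM handling treated in the paper are omitted, and the component values are arbitrary.

```python
# Minimal sketch: averaged (ripple-free) model of an ideal buck converter in CCM.
import numpy as np
from scipy.integrate import solve_ivp

Vin, L, C, R, duty = 24.0, 100e-6, 220e-6, 10.0, 0.5   # arbitrary example values

def averaged_buck(t, state):
    i_L, v_C = state
    di = (duty * Vin - v_C) / L     # averaged inductor voltage / L
    dv = (i_L - v_C / R) / C        # averaged capacitor current / C
    return [di, dv]

sol = solve_ivp(averaged_buck, (0.0, 20e-3), [0.0, 0.0], max_step=1e-5)
print(f"steady-state output ~ {sol.y[1, -1]:.2f} V (expected ~ {duty * Vin:.2f} V)")
```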