938 results for Building demand estimation model
Abstract:
We address here aspects of the implementation of a memory evolutive system (MES), based on the model proposed by A. Ehresmann and J. Vanbremeersch (2007), by means of a simulated network of spiking neurons with time-dependent plasticity. We point out the advantages and challenges of applying category theory to the representation of cognition via the MES architecture. We then discuss the minimum requirements that an artificial neural network (ANN) should fulfill in order to be capable of expressing the categories, and the mappings between them, that underlie the MES. We conclude that a pulsed ANN based on Izhikevich's formal neuron with STDP (spike-timing-dependent plasticity) has sufficient dynamical properties to meet these requirements, provided it can cope with the topological requirements. Finally, we present some perspectives for future research concerning the proposed ANN topology.
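As a concrete illustration of the dynamical substrate the abstract settles on, here is a minimal sketch of Izhikevich's simple neuron together with a pair-based STDP window. It is a generic textbook formulation in Python; the regular-spiking parameters, input current and learning constants are assumptions, not values from the paper.

    import numpy as np

    # Izhikevich's simple model: v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u),
    # with reset v <- c, u <- u + d when v reaches 30 mV. Regular-spiking defaults.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    dt, T, I = 0.5, 1000.0, 10.0            # step (ms), duration (ms), input current
    v, u, spikes = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d

    def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
        # Pair-based STDP window (assumed constants): potentiation when the
        # presynaptic spike precedes the postsynaptic one (delta_t > 0).
        if delta_t > 0:
            return a_plus * np.exp(-delta_t / tau)
        return -a_minus * np.exp(delta_t / tau)

    print(len(spikes), "spikes;", "dw at +10 ms:", round(stdp_dw(10.0), 4))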
Abstract:
Among the several sources of process variability, valve friction and inadequate controller tuning are thought to be two of the most prevalent. Friction quantification methods can be applied to the development of model-based compensators or to diagnose valves that need repair, whereas accurate process models can be used in controller retuning. This paper extends existing methods that jointly estimate the friction and process parameters, so that a nonlinear structure is adopted to represent the process model. The developed estimation algorithm is tested with three different data sources: a simulated first order plus dead time process, a hybrid setup (composed of a real valve and a simulated pH neutralization process) and three industrial datasets corresponding to real control loops. The results demonstrate that friction is accurately quantified and that "good" process models are estimated in several situations. Furthermore, when a nonlinear process model is considered, the proposed extension presents significant advantages: (i) greater accuracy in friction quantification and (ii) reasonable estimates of the nonlinear steady-state characteristics of the process. (C) 2010 Elsevier Ltd. All rights reserved.
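To make the joint-estimation idea concrete, the following is a hedged sketch on synthetic data: a one-parameter stick-slip valve (a Stenman-style simplification, not the paper's model) in series with a first order plus dead time process; for each candidate stiction band the linear process parameters are fitted by least squares, and the best pair minimizes the output error.

    import numpy as np

    def valve(u, d):
        # One-parameter stick-slip valve: the stem holds its position until the
        # controller demand moves outside the stiction band d.
        x = np.empty_like(u); x[0] = u[0]
        for k in range(1, len(u)):
            x[k] = x[k - 1] if abs(u[k] - x[k - 1]) <= d else u[k]
        return x

    def fit_fopdt(x, y, delay=2):
        # y[k] = a*y[k-1] + b*x[k-delay], solved by linear least squares.
        Y = y[delay:]
        A = np.column_stack([y[delay - 1:-1], x[:-delay]])
        a, b = np.linalg.lstsq(A, Y, rcond=None)[0]
        return a, b, np.sum((Y - A @ [a, b]) ** 2)

    rng = np.random.default_rng(0)
    u = np.cumsum(rng.normal(0, 0.2, 2000))       # wandering controller output
    x_true = valve(u, d=0.8)                      # "true" stiction band
    y = np.zeros_like(u)
    for k in range(2, len(u)):                    # true process: a=0.9, b=0.1
        y[k] = 0.9 * y[k - 1] + 0.1 * x_true[k - 2]
    y += rng.normal(0, 0.01, len(y))

    best = min((fit_fopdt(valve(u, d), y)[2], d) for d in np.linspace(0, 2, 41))
    print("estimated stiction band d ~", best[1])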
Abstract:
Accurate price forecasting for agricultural commodities can have significant decision-making implications for suppliers, especially those of biofuels, where the agriculture and energy sectors intersect. Environmental pressures and high oil prices affect demand for biofuels and have reignited the discussion about effects on food prices. Suppliers in the sugar-alcohol sector need to decide the ideal proportion of ethanol and sugar to optimise their financial strategy. Prices can be affected by exogenous factors, such as exchange rates and interest rates, as well as by non-observable variables like the convenience yield, which is related to supply shortages. The literature generally uses two approaches: artificial neural networks (ANNs), which are recognised as being at the forefront of exogenous-variable analysis, and stochastic models such as the Kalman filter, which can account for non-observable variables. This article proposes a hybrid model for forecasting the prices of agricultural commodities that is built upon both approaches and is applied to forecast the price of sugar. The Kalman filter captures the structure of the stochastic process that describes the evolution of prices. The neural network accommodates variables that can impact asset prices in an indirect, nonlinear way, which cannot easily be incorporated into traditional econometric models.
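A minimal sketch of one way such a hybrid can be wired together, under stated assumptions (a local-level Kalman filter rather than the paper's convenience-yield state equations, synthetic data, and scikit-learn's MLPRegressor as the ANN): the filter tracks the stochastic price component, and the network maps the exogenous variables to the filter's one-step-ahead residual.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def kalman_local_level(y, q=0.1, r=1.0):
        # Scalar random-walk state; returns one-step-ahead forecasts.
        m, p, preds = y[0], 1.0, []
        for obs in y:
            p += q                      # time update
            preds.append(m)             # forecast before seeing obs
            k = p / (p + r)             # measurement update
            m += k * (obs - m)
            p *= (1 - k)
        return np.array(preds)

    rng = np.random.default_rng(1)
    n = 500
    exog = rng.normal(size=(n, 2))                  # e.g. FX and interest rates
    latent = np.cumsum(rng.normal(0, 0.3, n))
    price = latent + 0.8 * np.tanh(exog[:, 0]) + rng.normal(0, 0.2, n)

    kf_pred = kalman_local_level(price)
    resid = price - kf_pred
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    ann.fit(exog[:400], resid[:400])                # train on the first 400 steps
    hybrid = kf_pred[400:] + ann.predict(exog[400:])
    print("RMSE KF only :", np.sqrt(np.mean((price[400:] - kf_pred[400:]) ** 2)))
    print("RMSE hybrid  :", np.sqrt(np.mean((price[400:] - hybrid) ** 2)))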
Abstract:
In a sample of censored survival times, the presence of an immune proportion of individuals who are not subject to death, failure or relapse may be indicated by a relatively high number of individuals with large censored survival times. In this paper the generalized log-gamma model is modified to allow for the possibility that long-term survivors are present in the data. The model attempts to estimate separately the effects of covariates on the surviving fraction, that is, the proportion of the population for which the event never occurs. The logistic function is used for the regression model of the surviving fraction. Inference for the model parameters is considered via maximum likelihood. Some influence methods, such as the local influence and the total local influence of an individual, are derived, analyzed and discussed. Finally, a data set from the medical area is analyzed under the generalized log-gamma mixture model. A residual analysis is performed in order to select an appropriate model.
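The mixture formulation can be written compactly: the population survival is S_pop(t) = p(x) + (1 - p(x)) S(t), with the cured fraction p(x) given by a logistic regression. The sketch below maximizes the censored-data likelihood for this mixture on synthetic data, with a Weibull latency standing in for the paper's generalized log-gamma distribution (an assumed simplification, for brevity).

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    def neg_loglik(theta, t, event, x):
        b0, b1, log_shape, log_scale = theta
        p_cure = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))     # cured fraction
        shape, scale = np.exp(log_shape), np.exp(log_scale)
        f = weibull_min.pdf(t, shape, scale=scale)        # latency density
        S = weibull_min.sf(t, shape, scale=scale)         # latency survival
        like = np.where(event, (1 - p_cure) * f,          # observed events
                        p_cure + (1 - p_cure) * S)        # censored: maybe cured
        return -np.sum(np.log(like + 1e-300))

    rng = np.random.default_rng(2)
    n = 800
    x = rng.normal(size=n)
    cured = rng.random(n) < 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))
    t_lat = weibull_min.rvs(1.5, scale=2.0, size=n, random_state=3)
    t_cens = rng.uniform(0, 8, n)
    t = np.where(cured, t_cens, np.minimum(t_lat, t_cens))
    event = (~cured) & (t_lat <= t_cens)

    fit = minimize(neg_loglik, x0=[0, 0, 0, 0], args=(t, event, x))
    print("logistic coefficients for the cured fraction:", fit.x[:2])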
Abstract:
The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. Then, the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and some ways to perform global influence analysis are derived. In order to study departures from the error assumption as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that the zero-inflated negative binomial regression model seems to fit the data better than its Poisson counterpart. (C) 2010 Elsevier B.V. All rights reserved.
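As a hedged illustration of the model comparison, the snippet below fits both zero-inflated models to synthetic counts with structural zeros and overdispersion. It uses statsmodels' ready-made estimators for brevity, not the paper's EM-type algorithm or its jackknife/bootstrap machinery.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 1000
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    mu = np.exp(0.5 + 0.7 * x)
    extra_zero = rng.random(n) < 0.3                # structural (excess) zeros
    # NB counts with dispersion k=2 and mean mu: p = k / (k + mu).
    y = np.where(extra_zero, 0, rng.negative_binomial(2, 2 / (2 + mu)))

    zip_fit = sm.ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
    zinb_fit = sm.ZeroInflatedNegativeBinomialP(
        y, X, exog_infl=np.ones((n, 1))).fit(disp=0, maxiter=200)
    print("AIC ZIP :", zip_fit.aic)                 # ZINB should fit better
    print("AIC ZINB:", zinb_fit.aic)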
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped in k intervals so that ties are eliminated. Thus, the data modeling is performed by considering the discrete models of lifetime regression. The model parameters are estimated by using the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and influence measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to those measures, the total local influence of an individual is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed by using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
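The modelling idea admits a compact sketch: with times grouped into k intervals, the discrete hazard in interval j is h_j(x) = g^{-1}(gamma_j + x*beta), and candidate links differ only in g^{-1}. The four links below (logit, complementary log-log, probit, log-log) are the standard candidates and are an assumption, since the abstract does not name the paper's four.

    import numpy as np
    from scipy.stats import norm

    inv_link = {
        "logit":   lambda eta: 1 / (1 + np.exp(-eta)),
        "cloglog": lambda eta: 1 - np.exp(-np.exp(eta)),   # grouped prop. hazards
        "probit":  lambda eta: norm.cdf(eta),
        "loglog":  lambda eta: np.exp(-np.exp(-eta)),
    }

    def survival(gammas, beta, x, link):
        # S(interval j) = prod_{l <= j} (1 - h_l), with h_l from the chosen link.
        hazards = inv_link[link](np.asarray(gammas) + x * beta)
        return np.cumprod(1 - hazards)

    gammas = [-2.0, -1.5, -1.0, -0.5]      # illustrative baseline interval effects
    for link in inv_link:
        print(link, np.round(survival(gammas, 0.5, 1.0, link), 3))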
Abstract:
Using a dynamic systems model developed specifically for the Piracicaba, Capivari and Jundiaí river water basins (BH-PCJ) as a tool to help policy and decision makers analyze water resources management alternatives, five simulations over a 50-year timeframe were performed. The model estimates water supply and demand, as well as wastewater generation by the consumers in the BH-PCJ. One run was performed keeping the mean precipitation value constant and retaining the current water supply and demand rates, the business-as-usual scenario. Under these assumptions, water demand is expected to increase by about 76%, about 39% of the available water volume will come from wastewater reuse, and the waste load will increase by about 91%. The Falkenmark Index will change from 1,403 m³ person⁻¹ year⁻¹ in 2004 to 734 m³ person⁻¹ year⁻¹ by 2054, and the Sustainability Index from 0.44 to 0.20. Four further simulations were performed by scaling the annual precipitation to 90% and 110% of its mean; considering an ecological flow equal to 30% of the mean daily flow; and keeping the same rates for all other factors except ecological flow and household water consumption. All of them showed a tendency toward a water crisis in the near future at the BH-PCJ.
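A minimal sketch of the bookkeeping behind such a scenario run, with illustrative growth rates and initial stocks rather than the BH-PCJ calibration: project population and demand forward and report the Falkenmark index (renewable water per person per year).

    # All initial values and growth rates below are assumptions for illustration.
    years = 50
    population = 5.0e6          # people
    renewable = 7.0e9           # m3/year available under mean precipitation
    demand = 3.0e9              # m3/year withdrawn
    for _ in range(years):
        population *= 1.012     # assumed demographic growth rate
        demand *= 1.0115        # 1.0115**50 ~ 1.77, i.e. roughly 76% growth
    falkenmark = renewable / population
    print(f"Falkenmark index after {years} years: {falkenmark:.0f} m3/person/year")
    print(f"demand growth over the period: {demand / 3.0e9:.2f}x")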
Abstract:
The objective of this investigation was to examine in a systematic manner the influence of plasma protein binding on in vivo pharmacodynamics. Comparative pharmacokinetic-pharmacodynamic studies with four beta blockers were performed in conscious rats, using heart rate under isoprenaline-induced tachycardia as a pharmacodynamic endpoint. A recently proposed mechanism-based agonist-antagonist interaction model was used to obtain in vivo estimates of receptor affinities (K_B,vivo). These values were compared with in vitro affinities (K_B,vitro) on the basis of both total and free drug concentrations. For the total drug concentrations, the K_B,vivo estimates were 26, 13, 6.5 and 0.89 nM for S(-)-atenolol, S(-)-propranolol, S(-)-metoprolol and timolol. The K_B,vivo estimates on the basis of the free concentrations were 25, 2.0, 5.2 and 0.56 nM, respectively. The K_B,vivo-K_B,vitro correlation for total drug concentrations clearly deviated from the line of identity, especially for the most highly bound drug S(-)-propranolol (K_B,vivo/K_B,vitro ratio ~6.8). For the free drug, the correlation approximated the line of identity. Using this model, for beta-blockers the free plasma concentration appears to be the best predictor of in vivo pharmacodynamics. (C) 2008 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 98:3816-3828, 2009
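The total-versus-free comparison rests on simple arithmetic: an affinity expressed against total plasma concentration converts to a free-concentration basis through the unbound fraction, K_free = fu * K_total. In the sketch below the fu values are assumptions chosen only to reproduce the contrast reported above; the K_B,total values are the abstract's.

    # fu values are illustrative assumptions, not measurements from the paper.
    kb_total = {"S(-)-atenolol": 26.0, "S(-)-propranolol": 13.0,
                "S(-)-metoprolol": 6.5, "timolol": 0.89}        # nM, from abstract
    fu = {"S(-)-atenolol": 0.96, "S(-)-propranolol": 0.15,
          "S(-)-metoprolol": 0.80, "timolol": 0.63}             # assumed unbound fractions

    for drug, kb in kb_total.items():
        print(f"{drug:16s} K_B,total {kb:5.2f} nM -> K_B,free {fu[drug] * kb:5.2f} nM")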
Abstract:
The detection of seizure in the newborn is a critical aspect of neurological research. Current automatic detection techniques are difficult to assess due to the problems associated with acquiring and labelling newborn electroencephalogram (EEG) data. A realistic model for newborn EEG would allow confident development, assessment and comparison of these detection techniques. This paper presents a model for newborn EEG that accounts for its self-similar and non-stationary nature. The model consists of background and seizure sub-models. The newborn EEG background model is based on the short-time power spectrum with a time-varying power law. The relationship between the fractal dimension and the power law of a power spectrum is utilized for accurate estimation of the short-time power law exponent. The newborn EEG seizure model is based on a well-known time-frequency signal model. This model addresses all significant time-frequency characteristics of newborn EEG seizure, which include: multiple components or harmonics, piecewise-linear instantaneous frequency laws, and harmonic amplitude modulation. Estimates of the parameters of both models are shown to be random and are modelled using the data from a total of 500 background epochs and 204 seizure epochs. The newborn EEG background and seizure models are validated against real newborn EEG data using the correlation coefficient. The results show that the output of the proposed models has a higher correlation with real newborn EEG than currently accepted models (a 10% and 38% improvement for background and seizure models, respectively).
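As a hedged sketch of the background sub-model's central ingredient, the snippet below synthesizes short epochs of power-law ("1/f^gamma") noise whose exponent is tied to an epoch-specific fractal dimension through the standard relation gamma = 5 - 2D for fBm-like signals; the epoch length and the range of D are illustrative assumptions, not the paper's fitted values.

    import numpy as np

    def powerlaw_epoch(n, gamma, rng):
        # Shape white noise in the frequency domain: amplitude ~ f^(-gamma/2).
        freqs = np.fft.rfftfreq(n, d=1.0)
        freqs[0] = freqs[1]                    # avoid division by zero at DC
        spectrum = freqs ** (-gamma / 2.0)
        phases = np.exp(2j * np.pi * rng.random(len(freqs)))
        return np.fft.irfft(spectrum * phases, n)

    rng = np.random.default_rng(5)
    fractal_dims = 1.4 + 0.2 * rng.random(10)  # slowly varying D per epoch
    eeg_background = np.concatenate(
        [powerlaw_epoch(256, 5 - 2 * D, rng) for D in fractal_dims])
    print(eeg_background.shape, "samples of simulated nonstationary background")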
Abstract:
Previous work has identified several shortcomings in the ability of four spring wheat and one barley model to simulate crop processes and resource utilization. This can have important implications when such models are used within systems models where final soil water and nitrogen conditions of one crop define the starting conditions of the following crop. In an attempt to overcome these limitations and to reconcile a range of modelling approaches, existing model components that worked demonstrably well were combined with new components for aspects where existing capabilities were inadequate. This resulted in the Integrated Wheat Model (I_WHEAT), which was developed as a module of the cropping systems model APSIM. To increase predictive capability of the model, process detail was reduced, where possible, by replacing groups of processes with conservative, biologically meaningful parameters. I_WHEAT does not contain a soil water or soil nitrogen balance. These are present as other modules of APSIM. In I_WHEAT, yield is simulated using a linear increase in harvest index whereby nitrogen or water limitations can lead to early termination of grainfilling and hence cessation of harvest index increase. Dry matter increase is calculated either from the amount of intercepted radiation and radiation conversion efficiency or from the amount of water transpired and transpiration efficiency, depending on the most limiting resource. Leaf area and tiller formation are calculated from thermal time and a cultivar-specific phyllochron interval. Nitrogen limitation first reduces leaf area and then affects radiation conversion efficiency as it becomes more severe. Water or nitrogen limitations result in reduced leaf expansion, accelerated leaf senescence or tiller death. This reduces the radiation load on the crop canopy (i.e. demand for water) and can make nitrogen available for translocation to other organs. Sensitive feedbacks between light interception and dry matter accumulation are avoided by having environmental effects acting directly on leaf area development, rather than via biomass production. This makes the model more stable across environments without losing the interactions between the different external influences. When comparing model output with models tested previously using data from a wide range of agro-climatic conditions, yield and biomass predictions were equal to the best of those models, but improvements could be demonstrated for simulating leaf area dynamics in response to water and nitrogen supply, kernel nitrogen content, and total water and nitrogen use. I_WHEAT does not require calibration for any of the environments tested. Further model improvement should concentrate on improving phenology simulations, a more thorough derivation of coefficients to describe leaf area development and a better quantification of some processes related to nitrogen dynamics. (C) 1998 Elsevier Science B.V.
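Two of the mechanisms described above reduce to a few lines: daily dry matter is the minimum of a radiation-limited and a water-limited rate, and yield accumulates through a linear daily increase in harvest index that stops if stress terminates grain filling early. The sketch below uses invented coefficients for illustration, not I_WHEAT's calibrated parameters.

    # All coefficients and daily supplies are assumed values, not from I_WHEAT.
    RUE = 1.2           # g dry matter per MJ intercepted radiation
    TE = 5.0            # g dry matter per kg water transpired
    HI_RATE, HI_MAX = 0.011, 0.50

    biomass, hi = 300.0, 0.0                          # g/m2 at start of grain filling
    for day in range(60):
        radiation = 8.0                               # MJ/m2 intercepted today
        water = max(2.5 - 0.04 * day, 0.0)            # kg/m2 transpirable, declining
        biomass += min(RUE * radiation, TE * water)   # most limiting resource wins
        if water < 1.0:                               # severe stress ends grain filling
            break
        hi = min(hi + HI_RATE, HI_MAX)                # linear harvest index increase
    print(f"grain yield ~ {hi * biomass:.0f} g/m2 (final HI {hi:.2f})")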
Abstract:
Phonemic codes are accorded a privileged role in most current models of immediate serial recall, although their effects are apparent in short-term proactive interference (PI) effects as well. The present research looks at how assumptions concerning distributed representation and distributed storage involving both semantic and phonemic codes might be operationalized to produce PI in a short-term cued recall task. The four experiments reported here attempted to generate the phonemic characteristics of a nonrhyming, interfering foil from unrelated filler items in the same list. PI was observed when a rhyme of the foil was studied or when the three phonemes of the foil were distributed across three studied filler items. The results suggest that items in short-term memory are stored in terms of feature bundles and that all items are simultaneously available at retrieval.
Abstract:
Wildlife-habitat models are an important tool in wildlife management today, and by far the majority of these predict aspects of species distribution (abundance or presence) as a proxy measure of habitat quality. Unfortunately, few are tested on independent data, and of those that are, few show useful predictive skill. We demonstrate that six critical assumptions underlie distribution-based wildlife-habitat models, all of which must be valid for the model to predict habitat quality. We outline these assumptions in a meta-model, and discuss methods for their validation. Even where all six assumptions show a high level of validity, there is still a strong likelihood that the model will not predict habitat quality. However, the meta-model does suggest habitat quality can be predicted more accurately if distributional data are ignored, and variables more indicative of habitat quality are modelled instead.
Abstract:
Background From the mid-1980s to mid-1990s, the WHO MONICA Project monitored coronary events and classic risk factors for coronary heart disease (CHD) in 38 populations from 21 countries. We assessed the extent to which changes in these risk factors explain the variation in the trends in coronary-event rates across the populations. Methods In men and women aged 35-64 years, non-fatal myocardial infarction and coronary deaths were registered continuously to assess trends in rates of coronary events. We carried out population surveys to estimate trends in risk factors. Trends in event rates were regressed on trends in risk score and in individual risk factors. Findings Smoking rates decreased in most male populations but trends were mixed in women; mean blood pressures and cholesterol concentrations decreased, body-mass index increased, and overall risk scores and coronary-event rates decreased. The model of trends in 10-year coronary-event rates against risk scores and single risk factors showed a poor fit, but this was improved with a 4-year time lag for coronary events. The explanatory power of the analyses was limited by imprecision of the estimates and homogeneity of trends in the study populations. Interpretation Changes in the classic risk factors seem to partly explain the variation in population trends in CHD. Residual variance is attributable to difficulties in measurement and analysis, including time lag, and to factors that were not included, such as medical interventions. The results support prevention policies based on the classic risk factors but suggest potential for prevention beyond these.
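The lag analysis can be sketched as follows, on synthetic populations rather than MONICA data: event-rate series that follow the risk series with a 4-year delay are regressed, trend on trend, with and without shifting the event window by the lag; the shifted regression explains more of the variance.

    import numpy as np

    rng = np.random.default_rng(6)
    n_pop, years, lag = 38, 14, 4

    def trend(series):
        # Least-squares slope over the observation window.
        return np.polyfit(np.arange(len(series)), series, 1)[0]

    risk = np.cumsum(rng.normal(-0.3, 0.15, (n_pop, years + lag)), axis=1)
    events = np.empty_like(risk)
    events[:, lag:] = 2.0 * risk[:, :-lag]      # events mirror risk 4 years later
    events[:, :lag] = 2.0 * risk[:, :1]         # crude padding of the first years
    events += rng.normal(0, 0.3, events.shape)

    risk_trends = np.array([trend(p[:years]) for p in risk])
    for shift in (0, lag):
        ev_trends = np.array([trend(p[shift:shift + years]) for p in events])
        r = np.corrcoef(risk_trends, ev_trends)[0, 1]
        print(f"event window shifted by {shift} years: R^2 = {r * r:.2f}")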
Abstract:
A model of Australian wheat grower supply response was specified under the constraints of price and yield uncertainty, risk aversion, partial adjustment, and quadratic costs. The model was solved to obtain area planted. The results of estimation indicate that risk arising from prices and climate has had a significant influence on producer decision making. The coefficient of relative risk aversion and short-run and long-run elasticities of supply with respect to price were calculated. Wheat growers' risk premium, expected at the start of the season for exposure to price and yield risk, was 2.8 percent of revenue or 10.4 percent of profit as measured by producer surplus. (C) 2000 John Wiley & Sons, Inc.
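Risk-premium figures of this kind typically come from the Arrow-Pratt mean-variance approximation: RP ~ 0.5 * R * cv^2 of expected revenue, where R is the coefficient of relative risk aversion and cv the coefficient of variation of revenue. The numbers below are illustrative assumptions that land near the paper's 2.8 percent figure, not its estimates.

    # Arrow-Pratt approximation with assumed inputs, for illustration only.
    R = 2.0                      # assumed coefficient of relative risk aversion
    cv = 0.17                    # assumed coefficient of variation of revenue
    premium_share = 0.5 * R * cv * cv
    print(f"risk premium ~ {100 * premium_share:.1f}% of expected revenue")
    # 0.5 * 2.0 * 0.17**2 ~ 0.029, i.e. about 2.9% -- the same order as the paper.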
Abstract:
Background We present a method (The CHD Prevention Model) for modelling the incidence of fatal and nonfatal coronary heart disease (CHD) within various CHD risk percentiles of an adult population. The model provides a relatively simple tool for lifetime risk prediction for subgroups within a population. It allows an estimation of the absolute primary CHD risk in different populations and will help identify subgroups of the adult population where primary CHD prevention is most appropriate and cost-effective. Methods The CHD risk distribution within the Australian population was modelled, based on the prevalence of CHD risk, individual estimates of integrated CHD risk, and current CHD mortality rates. Predicted incidence of first fatal and nonfatal myocardial infarction within CHD risk strata of the Australian population was determined. Results Approximately 25% of CHD deaths were predicted to occur amongst those in the top 10 percentiles of integrated CHD risk, regardless of age group or gender. It was found that while all-cause survival did not differ markedly between percentiles of CHD risk before the ages of around 50-60, event-free survival began visibly to differ about 5 years earlier. Conclusions The CHD Prevention Model provides a means of predicting future CHD incidence amongst various strata of integrated CHD risk within an adult population. It has significant application both in individual risk counselling and in the identification of subgroups of the population where drug therapy to reduce CHD risk is most cost-effective. J Cardiovasc Risk 8:31-37 (C) 2001 Lippincott Williams & Wilkins.
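The percentile bookkeeping behind statements like "approximately 25% of CHD deaths occur in the top 10 risk percentiles" can be sketched directly: given a distribution of integrated relative risk across the population, the expected share of events in a band is its risk mass divided by the total. The lognormal spread below is an illustrative assumption tuned to land near that figure, not the Australian risk distribution.

    import numpy as np

    rng = np.random.default_rng(7)
    # Integrated relative CHD risk across the population (assumed lognormal;
    # sigma chosen so the top decile carries roughly a quarter of the events).
    risk = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)
    cutoff = np.percentile(risk, 90)
    share_top10 = risk[risk >= cutoff].sum() / risk.sum()
    print(f"expected share of events in the top 10 risk percentiles: "
          f"{100 * share_top10:.0f}%")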