938 results for "Building demand estimation model"
Abstract:
A dynamic modelling methodology, which combines on-line variable estimation and parameter identification with physical laws to form an adaptive model for rotary sugar drying processes, is developed in this paper. In contrast to conventional rate-based models using empirical transfer coefficients, the new model estimates the heat and mass transfer rates from on-line measurements. Furthermore, a set of improved sectional solid transport equations with localized parameters is developed in this work to replace the global correlation for the computation of solid retention time. Since a number of key model variables and parameters are identified on-line using measurement data, the model is able to closely track the dynamic behaviour of rotary drying processes within a broad range of operational conditions. This adaptive model is validated against experimental data obtained from a pilot-scale rotary sugar dryer. The proposed modelling methodology can be easily incorporated into nonlinear model-based control schemes to form a unified modelling and control framework.
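The on-line parameter identification this abstract relies on is typically a recursive estimator. Below is a minimal sketch assuming a scalar recursive least-squares (RLS) update with a forgetting factor; the function name, the regressor stream, and all numeric values are illustrative, not taken from the paper.

```python
def recursive_least_squares(theta0, p0, stream, forgetting=0.98):
    """Scalar recursive least-squares with forgetting factor.

    `stream` yields (regressor, measurement) pairs; the estimate `theta`
    is updated on-line as each measurement arrives, which is the kind of
    mechanism an adaptive process model uses to re-identify parameters.
    """
    theta, p = theta0, p0
    for x, y in stream:
        gain = p * x / (forgetting + p * x * x)   # Kalman-style gain
        theta += gain * (y - theta * x)           # correct with innovation
        p = (p - gain * x * p) / forgetting       # inflate covariance
    return theta

# Usage: track y = 2.5 * x from noiseless streaming data
data = [(k * 0.1, 2.5 * k * 0.1) for k in range(1, 60)]
theta_hat = recursive_least_squares(0.0, 100.0, data)
```

The forgetting factor keeps the covariance from collapsing, so the estimator stays responsive to parameter drift across operating conditions.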
Abstract:
This paper proposes a novel model for short-term load forecasting in the competitive electricity market. Historical electricity demand data are treated as a time series. The forecast model is based on wavelet multi-resolution decomposition via the autocorrelation shell representation, with neural networks (multilayer perceptrons, or MLPs) modeling the wavelet coefficients. To minimize the influence of noisy low-level coefficients, we apply the practical Bayesian Automatic Relevance Determination (ARD) method to choose the size of the MLPs, which are then trained to provide forecasts. The individual wavelet-domain forecasts are recombined to form the overall forecast. The proposed method is tested using Queensland electricity demand data from the Australian National Electricity Market. (C) 2001 Elsevier Science B.V. All rights reserved.
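The multi-resolution step can be sketched with a simple à-trous-style decomposition whose scales sum back to the original series, which is why per-scale forecasts can simply be added to form the overall forecast. This is an illustrative stand-in, not the paper's autocorrelation shell representation; the 3-tap kernel and function names are assumptions.

```python
def atrous_decompose(series, levels=3):
    """Decompose `series` into `levels` detail signals plus a smooth residual.

    Returns [w1, ..., wL, cL]; summing the components point-wise
    reconstructs the original series exactly, so forecasts made per
    scale can be recombined by plain addition.
    """
    c = list(series)
    details = []
    for j in range(levels):
        step = 2 ** j                     # "holes" grow dyadically
        n = len(c)
        smooth = []
        for t in range(n):
            left = c[max(t - step, 0)]    # reflect-free boundary handling
            right = c[min(t + step, n - 1)]
            smooth.append(0.25 * left + 0.5 * c[t] + 0.25 * right)
        details.append([a - b for a, b in zip(c, smooth)])  # detail at scale j
        c = smooth
    return details + [c]

# Usage: decompose a toy demand curve and verify additive reconstruction
series = [10.0, 12.0, 15.0, 14.0, 13.0, 17.0, 20.0, 18.0]
components = atrous_decompose(series, levels=2)
recombined = [sum(vals) for vals in zip(*components)]
```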
Abstract:
In this paper we study the n-fold multiplicative model involving Weibull distributions and examine some of its properties, including the shapes of the density and failure rate functions and the WPP plot. These allow one to decide whether a given data set can be adequately modelled by this model. We also discuss the estimation of model parameters based on the WPP plot. (C) 2001 Elsevier Science Ltd. All rights reserved.
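The n-fold multiplicative model has CDF F(t) = Π_i F_i(t), the product of n Weibull CDFs. A minimal sketch of the CDF and the Weibull-probability-paper (WPP) coordinates, with made-up parameter values:

```python
import math

def weibull_cdf(t, beta, eta):
    """Two-parameter Weibull CDF with shape beta and scale eta."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def multiplicative_cdf(t, params):
    """n-fold multiplicative model: F(t) = prod_i F_i(t).
    `params` is a list of (beta_i, eta_i) pairs (illustrative values)."""
    f = 1.0
    for beta, eta in params:
        f *= weibull_cdf(t, beta, eta)
    return f

def wpp_point(t, params):
    """WPP coordinates (ln t, ln(-ln(1 - F))) used for graphical fitting."""
    F = multiplicative_cdf(t, params)
    return math.log(t), math.log(-math.log(1.0 - F))

params = [(1.5, 10.0), (3.0, 20.0)]
ts = [1.0, 5.0, 10.0, 20.0, 40.0]
cdf_values = [multiplicative_cdf(t, params) for t in ts]
x0, y0 = wpp_point(5.0, params)
```

Departures of the WPP curve from a straight line are what signal that a single Weibull is inadequate and an n-fold model may be needed.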
Abstract:
A migration of Helicoverpa punctigera (Wallengren), Heliothis punctifera (Walker) and Agrotis munda Walker was tracked from Cameron Corner (29°00'S, 141°00'E) in inland Australia to the Wilcannia region, approximately 400 km to the south-east. A relatively isolated source population was located using a distribution model to predict winter breeding, and confirmed by surveys using sweep netting for larvae. When a synoptic weather pattern likely to produce suitable conditions for migration developed, moths were trapped in the source region. The next morning a simulation model of migration using wind-field data generated by a numerical weather-prediction model was run. Surveys using sweep netting for larvae, trapping and flush counts were then conducted in and around the predicted moth fallout area, approximately 400 km to the south-east. Pollen carried on the probosces of moths caught in this area was compared with that on moths caught in the source area. The survey data and pollen comparisons provided evidence that migration had occurred, and that the migration model gave an accurate estimate of the fallout region. The ecological and economic implications of such migrations are discussed.
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics 44(2): 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure. A naive implementation of the procedure can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
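The univariate analogue is easy to sketch: exact bin probabilities Φ(b) − Φ(a) drive the E-step responsibilities, while the M-step below uses bin midpoints as a cheap stand-in for the truncated-moment integrals (one of the kinds of numerical shortcut the paper discusses). All data, names, and initial values are illustrative.

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def em_binned_mixture(edges, counts, init, iters=25):
    """EM for a Gaussian mixture fitted to binned univariate counts.

    `edges` are bin boundaries, `counts` the observed frequencies, and
    `init` a list of (weight, mu, sigma) triples. The E-step uses exact
    per-bin probabilities; the M-step approximates within-bin moments
    by the bin midpoint.
    """
    comps = [list(c) for c in init]
    mids = [(a + b) / 2.0 for a, b in zip(edges[:-1], edges[1:])]
    for _ in range(iters):
        # E-step: expected share of each bin's count per component
        resp = []
        for a, b in zip(edges[:-1], edges[1:]):
            probs = [w * (norm_cdf(b, m, s) - norm_cdf(a, m, s))
                     for w, m, s in comps]
            total = sum(probs) or 1e-300
            resp.append([p / total for p in probs])
        # M-step: midpoint approximation of the bin integrals
        n = float(sum(counts))
        for k in range(len(comps)):
            nk = sum(r[k] * c for r, c in zip(resp, counts))
            mu = sum(r[k] * c * x for r, c, x in zip(resp, counts, mids)) / nk
            var = sum(r[k] * c * (x - mu) ** 2
                      for r, c, x in zip(resp, counts, mids)) / nk
            comps[k] = [nk / n, mu, max(math.sqrt(var), 1e-6)]
    return comps

edges = [0, 1, 2, 3, 4, 5, 6, 7, 8]
counts = [5, 30, 40, 15, 10, 20, 35, 12]          # bimodal synthetic counts
fit = em_binned_mixture(edges, counts, [(0.5, 2.0, 1.0), (0.5, 6.0, 1.0)])
```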
Abstract:
Over the last 7 years, a method has been developed in Brazil to analyse building energy performance using computer simulation. The method combines analysis of building design plans and documentation, walk-through visits, electric and thermal measurements, and the use of an energy simulation tool (the DOE-2.1E code). The method was used to model more than 15 office buildings (more than 200,000 m²) located between 12.5° and 27.5° South latitude. The paper describes the basic methodology, with data for one building, and presents additional results for six other cases. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Chest clapping, vibration, and shaking were studied in 10 physiotherapists who applied these techniques to an anesthetized animal model. Hemodynamic variables (such as heart rate, blood pressure, pulmonary artery pressure, and right atrial pressure) were measured during the application of these techniques to verify claims of adverse events. In addition, expired tidal volume and peak expiratory flow rate were measured to ascertain the effects of these techniques. Physiotherapists in this study applied chest clapping at a rate of 6.2 +/- 0.9 Hz, vibration at 10.5 +/- 2.3 Hz, and shaking at 6.2 +/- 2.3 Hz. At these rates, esophageal pressure swings of 8.8 +/- 5.0, 0.7 +/- 0.3, and 1.4 +/- 0.7 mmHg resulted from clapping, vibration, and shaking, respectively. Variability in the rates and forces generated was related to physiotherapists' characteristics; in particular, clinical experience accounted for 80% of the variance in shaking force (P = 0.003). Application of these techniques by physiotherapists was found to have no significant effects on hemodynamic and most ventilatory variables in this study. From this study, we conclude that chest clapping, vibration, and shaking 1) can be consistently performed by physiotherapists; 2) are significantly related to physiotherapists' characteristics, particularly clinical experience; and 3) caused no significant hemodynamic effects.
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
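The ECM cycle itself is concrete enough to sketch. The paper leaves the component-baseline hazards unspecified; the sketch below substitutes exponential hazards and drops covariates so that each conditional-maximization step has a closed form. Data and all names are illustrative.

```python
import math

def ecm_competing_risks(data, iters=50):
    """ECM fit of a two-component mixture competing-risks model.

    `data` is a list of (time, cause) pairs; cause 1 or 2 marks an
    observed failure type, cause 0 a censored observation. E-step:
    posterior type probabilities for censored cases. CM-steps: update
    the mixing probabilities pi, then the exponential rates lam.
    """
    pi = [0.5, 0.5]
    lam = [1.0, 1.0]
    for _ in range(iters):
        # E-step: type is known for observed failures, inferred if censored
        w = []
        for t, cause in data:
            if cause in (1, 2):
                w.append([1.0 if cause == k + 1 else 0.0 for k in range(2)])
            else:
                s = [pi[k] * math.exp(-lam[k] * t) for k in range(2)]
                tot = sum(s)
                w.append([sk / tot for sk in s])
        # CM-step 1: mixing probabilities
        n = len(data)
        pi = [sum(wi[k] for wi in w) / n for k in range(2)]
        # CM-step 2: exponential rates (events over weighted exposure)
        for k in range(2):
            events = sum(wi[k] for wi, (t, c) in zip(w, data) if c == k + 1)
            exposure = sum(wi[k] * t for wi, (t, c) in zip(w, data))
            lam[k] = events / exposure
    return pi, lam

# Usage: early failures from cause 1, late failures from cause 2
data = [(0.2, 1), (0.5, 1), (0.8, 1), (1.0, 0), (2.5, 2), (3.0, 2),
        (4.0, 2), (1.5, 0), (0.3, 1), (5.0, 2)]
pi_hat, lam_hat = ecm_competing_risks(data)
```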
Abstract:
This paper deals with an n-fold Weibull competing risk model. A characterisation of the WPP plot is given along with estimation of model parameters when modelling a given data set. These are illustrated through two examples. A study of the different possible shapes for the density and failure rate functions is also presented. (C) 2003 Elsevier Ltd. All rights reserved.
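In the competing-risk (series-system) counterpart of the multiplicative model, the survivor functions multiply, so the overall hazard is the sum of the component hazards. A minimal sketch with illustrative parameters:

```python
import math

def competing_risk_survival(t, params):
    """S(t) = prod_i S_i(t): the item fails at the first of n independent
    Weibull failure times. `params` lists (beta_i, eta_i) pairs."""
    return math.exp(-sum((t / eta) ** beta for beta, eta in params))

def competing_risk_hazard(t, params):
    """Overall failure rate: the sum of the component Weibull hazards."""
    return sum((beta / eta) * (t / eta) ** (beta - 1) for beta, eta in params)

# Illustrative early-failure mode (beta < 1) plus wear-out mode (beta > 1)
params = [(0.8, 5.0), (2.5, 15.0)]
survival = [competing_risk_survival(t, params) for t in (1.0, 5.0, 10.0, 20.0)]
```

Mixing a decreasing-hazard and an increasing-hazard component like this is one way the model produces bathtub-shaped failure rates.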
Abstract:
Steel fiber reinforced concrete (SFRC) is widely applied in the construction industry. Numerical elastoplastic analysis of the macroscopic behavior is complex. This typically involves a piecewise linear failure curve including corner singularities. This paper presents a single smooth biaxial failure curve for SFRC based on a semianalytical approximation. Convexity of the proposed model is guaranteed so that numerical problems are avoided. The model has sufficient flexibility to closely match experimental results. The failure curve is also suitable for modeling plain concrete under biaxial loading. Since this model is capable of simulating the failure states in all stress regimes with a single envelope, the elastoplastic formulation is very concise and simple. The finite element implementation is developed to demonstrate the conciseness and the effectiveness of the model. The computed results display good agreement with published experimental data.
Abstract:
The absorption of fluid by unsaturated, rigid porous materials may be characterized by the sorptivity. This is a simple parameter to determine and is increasingly being used as a measure of a material's resistance to exposure to fluids (especially moisture and reactive solutes) in aggressive environments. The complete isothermal absorption process is described by a nonlinear diffusion equation, with the hydraulic diffusivity being a strongly nonlinear function of the degree of saturation of the material. This diffusivity can be estimated from the sorptivity test. In a typical test the cumulative absorption is proportional to the square root of time. However, a number of researchers have observed deviation from this behaviour when the infiltrating fluid is water and there is some potential for chemo-mechanical interaction with the material. In that case the current interpretation of the test and estimation of the hydraulic diffusivity is no longer appropriate. Kuntz and Lavallee (2001) discuss the anomalous behaviour and propose a non-Darcian model as a more appropriate physical description. We present an alternative Darcian explanation and theory that retrieves the earlier advantages of the simple sorptivity test in providing parametric information about the material's hydraulic properties and allowing simple predictive formulae for the wetting profile to be generated.
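Under the classical square-root-of-time behaviour the test reduces to a one-parameter fit, i = S√t. A sketch of the no-intercept least-squares estimate of the sorptivity S, on synthetic data:

```python
def estimate_sorptivity(times, absorption):
    """No-intercept least squares for cumulative absorption i = S * sqrt(t).

    Minimizing sum (i - S*sqrt(t))^2 gives S = sum(i*sqrt(t)) / sum(t).
    """
    return sum(i * t ** 0.5 for t, i in zip(times, absorption)) / sum(times)

# Synthetic data generated with S = 0.5 (units e.g. mm / min^0.5)
times = [1.0, 4.0, 9.0, 16.0]
absorption = [0.5 * t ** 0.5 for t in times]
S_hat = estimate_sorptivity(times, absorption)
```

Deviation of the i-versus-√t plot from a straight line is exactly the anomalous behaviour discussed above; the Darcian reinterpretation is what restores a parametric reading of the same test data in that case.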
Abstract:
The use of a fitted parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program Fortran (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
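Parameter nonuniqueness is easy to demonstrate on a toy model (this two-parameter example is ours, not HSPF): when only the product of two parameters is identifiable from the observations, infinitely many parameter sets calibrate equally well, which is the situation PEST's regularization and predictive-analysis modes are designed to confront.

```python
def toy_runoff_model(a, b, rainfall):
    """Hypothetical two-parameter model: outflow = a * b * rainfall.
    Calibration data constrain only the product a * b."""
    return [a * b * r for r in rainfall]

rain = [1.0, 2.0, 0.5, 3.0]
observed = toy_runoff_model(2.0, 3.0, rain)       # "field observations"
calibrated_1 = toy_runoff_model(1.0, 6.0, rain)   # one acceptable parameter set
calibrated_2 = toy_runoff_model(12.0, 0.5, rain)  # another, equally good
```

Although all three parameter sets reproduce the calibration data exactly, they can diverge badly on predictions that depend on a and b separately, which is why predictive uncertainty must be explored explicitly.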
Abstract:
Rapid prototyping (RP) is an approach for automatically building a physical object through solid freeform fabrication. Nowadays, RP has become a vital aspect of most product development processes, due to the significant competitive advantages it offers compared to traditional manual model making. Even in academic environments, it is important to be able to quickly create accurate physical representations of concept solutions. Some of these can be used for simple visual validation, while others can be employed for ergonomic assessment by potential users or even for physical testing. However, the cost of traditional RP methods prevents their use in most academic environments on a regular basis, and even for very preliminary prototypes in many small companies. That results in delaying the first physical prototypes to later stages, or creating very rough mock-ups which are not as useful as they could be. In this paper we propose an approach for rapid and inexpensive model-making, which was developed in an academic context, and which can be employed for a variety of objects.
Abstract:
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a methodology that is simultaneously fast, accurate and operator-independent is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region-of-interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
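Step (3)'s rigid registration reduces, for matched points in 2D, to a closed-form rotation-plus-translation estimate. This point-based sketch is a stand-in for the paper's voxel-based optimizer, but it shows how the pose is read off the optimal transform; all names and values are illustrative.

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form 2D rigid registration between matched point sets.

    Returns (theta, tx, ty) such that rotating `src` by theta and
    translating by (tx, ty) best aligns it with `dst` in the
    least-squares sense (a 2D Procrustes/Kabsch solution).
    """
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy       # centered source point
        bx, by = dx - cdx, dy - cdy       # centered target point
        num += ax * by - ay * bx          # sum of cross products
        den += ax * bx + ay * by          # sum of dot products
    theta = math.atan2(num, den)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Usage: recover a known pose of 30 degrees rotation and (0.5, -1.2) shift
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
ang = math.radians(30.0)
dst = [(x * math.cos(ang) - y * math.sin(ang) + 0.5,
        x * math.sin(ang) + y * math.cos(ang) - 1.2) for x, y in src]
theta, tx, ty = rigid_align_2d(src, dst)
```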
Abstract:
The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains, with each industry in the portfolio evolving according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard and Poor's data set. Swap spreads are evaluated by Monte-Carlo simulation. Along with an actuarially fair spread, a least-squares spread is considered.
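The Monte-Carlo spread evaluation can be sketched in a drastically simplified setting: one industry, two states (alive/default), independent names, and a constant per-period default probability. The fair spread equates expected loss payments and expected premium payments; all numbers are illustrative, not the paper's coupled-chain calibration.

```python
import random

def simulate_fair_spread(p_default, periods, names, paths,
                         recovery=0.4, seed=1):
    """Monte-Carlo sketch of an actuarially fair index spread.

    A homogeneous portfolio of `names` defaults independently with
    per-period probability `p_default` (a two-state Markov chain per
    name). The fair spread is total expected loss divided by total
    expected premium notional, per period.
    """
    rng = random.Random(seed)
    total_loss = total_premium = 0.0
    for _ in range(paths):
        alive = names
        for _ in range(periods):
            defaults = sum(1 for _ in range(alive)
                           if rng.random() < p_default)
            alive -= defaults
            total_loss += defaults * (1.0 - recovery)  # loss given default
            total_premium += alive                     # premium on survivors
    return total_loss / total_premium

spread = simulate_fair_spread(p_default=0.01, periods=20, names=50, paths=200)
```

With these inputs the fair spread hovers near p(1 − R)/(1 − p) ≈ 0.006 per period; discounting and the industry-coupling structure of the paper are deliberately omitted here.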