Abstract:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of $V_{da}$ because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as $\tilde{V}_{da} = \frac{(E_2 - E_1)\,\mu_{12}}{R_{da}} + \frac{(2E_3 - E_1 - E_2)}{2}\,\frac{\mu_{13}\,\mu_{23}}{R_{da}^{2}}$, where $E_1$, $E_2$, and $E_3$ are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, $\mu_{ij}$ is the transition dipole moment between states $i$ and $j$, and $R_{da}$ is the distance between the planes of the donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases where the two-state scheme fails, the formula gives reliable results in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
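As a minimal numerical sketch of this correction (all values below are hypothetical illustrations in atomic units, not taken from the paper):

```python
# Minimal sketch of the corrected GMH coupling; numbers are hypothetical.
def gmh_coupling(E1, E2, E3, mu12, mu13, mu23, R_da):
    two_state = (E2 - E1) * mu12 / R_da                           # two-state GMH term
    superexchange = (2*E3 - E1 - E2) / 2 * mu13 * mu23 / R_da**2  # bridge correction
    return two_state + superexchange

print(gmh_coupling(E1=0.00, E2=0.02, E3=0.15,
                   mu12=0.6, mu13=1.2, mu23=1.1, R_da=12.8))
```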
Abstract:
Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a vascular wall Zener model. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error was about 27% even when a poor signal-to-noise ratio is simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a parameter mean estimation error of 25%.
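A minimal sketch of the noniterative idea, assuming the standard Zener constitutive law and plain least squares on sampled waveforms; the generic stress/strain arrays below stand in for the wall stress and strain signals derived from the measured flow variations and wall displacements:

```python
import numpy as np

# Sketch: noniterative estimation of Zener parameters by linear least squares.
# Constitutive law: sigma + tau_e * dsigma/dt = E * (eps + tau_s * deps/dt).
def fit_zener(sigma, eps, dt):
    dsig = np.gradient(sigma, dt)
    deps = np.gradient(eps, dt)
    # Rearranged as a linear model: sigma = -tau_e*dsig + E*eps + (E*tau_s)*deps
    A = np.column_stack([-dsig, eps, deps])
    tau_e, E, E_tau_s = np.linalg.lstsq(A, sigma, rcond=None)[0]
    return tau_e, E, E_tau_s / E   # (tau_epsilon, elastic modulus, tau_sigma)
```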
Abstract:
This paper analyzes the nature of health care provider choice in the case of patient-initiated contacts, with special reference to a National Health Service setting, where monetary prices are zero and general practitioners act as gatekeepers to publicly financed specialized care. We focus our attention on the factors that may explain the continuously increasing use of hospital emergency visits as opposed to other provider alternatives. An extended version of a discrete choice model of demand for patient-initiated contacts is presented, allowing for individual and town residence size differences in perceived quality (preferences) between alternative providers and including travel and waiting time as non-monetary costs. Results of a nested multinomial logit model of provider choice are presented. Individual choice between alternatives considers, in a repeated nested structure, self-care, primary care, hospital and clinic emergency services. Welfare implications and income effects are analyzed by computing compensating variations, and by simulating the effects of user fees by levels of income. Results indicate that compensating variation per visit is higher than the direct marginal cost of emergency visits; consequently, emergency visits do not appear as an inefficient alternative even for non-urgent conditions.
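A sketch of how choice probabilities are computed in a two-level nested logit of this kind; the nest structure, utilities and dissimilarity parameters below are hypothetical stand-ins, not the paper's estimates:

```python
import numpy as np

# Sketch: two-level nested logit choice probabilities.
def nested_logit_probs(V, nests, lam):
    """V: alternative -> systematic utility; nests: lists of alternatives;
    lam: one dissimilarity parameter per nest (0 < lam <= 1)."""
    inclusive = [np.log(sum(np.exp(V[a] / lam[k]) for a in nest))
                 for k, nest in enumerate(nests)]
    denom = sum(np.exp(lam[k] * iv) for k, iv in enumerate(inclusive))
    probs = {}
    for k, nest in enumerate(nests):
        p_nest = np.exp(lam[k] * inclusive[k]) / denom
        for a in nest:
            probs[a] = p_nest * np.exp(V[a] / lam[k]) / np.exp(inclusive[k])
    return probs

print(nested_logit_probs(
    V={"self_care": 0.0, "primary": 0.4, "hospital_er": 0.7, "clinic_er": 0.5},
    nests=[["self_care"], ["primary", "hospital_er", "clinic_er"]],
    lam=[1.0, 0.6]))
```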
Abstract:
Customer choice behavior, such as 'buy-up' and 'buy-down', is an importantphe-nomenon in a wide range of industries. Yet there are few models ormethodologies available to exploit this phenomenon within yield managementsystems. We make some progress on filling this void. Specifically, wedevelop a model of yield management in which the buyers' behavior ismodeled explicitly using a multi-nomial logit model of demand. Thecontrol problem is to decide which subset of fare classes to offer ateach point in time. The set of open fare classes then affects the purchaseprobabilities for each class. We formulate a dynamic program todetermine the optimal control policy and show that it reduces to a dynamicnested allocation policy. Thus, the optimal choice-based policy caneasily be implemented in reservation systems that use nested allocationcontrols. We also develop an estimation procedure for our model based onthe expectation-maximization (EM) method that jointly estimates arrivalrates and choice model parameters when no-purchase outcomes areunobservable. Numerical results show that this combined optimization-estimation approach may significantly improve revenue performancerelative to traditional leg-based models that do not account for choicebehavior.
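A sketch of the choice mechanism: under a multinomial logit, the purchase probability of each open fare class depends on the whole offered subset (the exp-utilities below are hypothetical):

```python
import numpy as np

# Sketch: MNL purchase probabilities over the open fare classes S.
# Index 0 is the no-purchase alternative; exp-utilities are hypothetical.
def purchase_probs(v, S):
    denom = v[0] + sum(v[j] for j in S)
    return {j: v[j] / denom for j in S}, v[0] / denom

v = np.array([1.0, 0.9, 0.6, 0.3])             # no-purchase + three fare classes
probs, p_no_buy = purchase_probs(v, S=[1, 2])  # closing class 3 shifts demand
print(probs, p_no_buy)
```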
Abstract:
The need for integration in supply chain management leads us to consider the coordination of two logistic planning functions: transportation and inventory. The coordination of these activities can be an extremely important source of competitive advantage in supply chain management, and the battle for cost reduction can hinge on balancing transportation costs against inventory management costs. In this work, we study the specific case of an inventory-routing problem over a one-week planning period with different types of demand. A heuristic methodology, based on Iterated Local Search, is proposed to solve the Multi-Period Inventory Routing Problem with stochastic and deterministic demand.
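A sketch of the generic Iterated Local Search skeleton such a heuristic builds on; `local_search` and `perturb` stand for the problem-specific routing and inventory moves and are left abstract:

```python
# Sketch of the Iterated Local Search metaheuristic skeleton.
def iterated_local_search(s0, cost, local_search, perturb, n_iter=1000):
    best = s = local_search(s0)
    for _ in range(n_iter):
        candidate = local_search(perturb(s))
        if cost(candidate) <= cost(s):    # acceptance: keep non-worsening solutions
            s = candidate
        if cost(s) < cost(best):
            best = s
    return best
```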
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
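A sketch of that label-flipping computation for a finite class of vectorized classifiers (the finite class is an illustrative simplification; in practice the minimization runs over the full hypothesis class):

```python
import numpy as np

# Sketch: maximal discrepancy via label flipping. Flipping the labels of one
# half turns the discrepancy maximization into empirical risk minimization.
def maximal_discrepancy(hypotheses, X, y):
    n = len(y) // 2
    y_flip = y[:2*n].copy()
    y_flip[:n] = 1 - y_flip[:n]          # flip labels on the first half
    # ERM on the flipped sample maximizes err(half 1) - err(half 2)
    errs = [np.mean(h(X[:2*n]) != y_flip) for h in hypotheses]
    h = hypotheses[int(np.argmin(errs))]
    return np.mean(h(X[:n]) != y[:n]) - np.mean(h(X[n:2*n]) != y[n:2*n])
```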
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase of available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. The current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to the current mechanistic codon models are (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions mentioned above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all the assumptions to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of the data sets. Our experiments show that holding all three assumptions is not realistic in any of the real data sets; using simple models that hold these assumptions can therefore be misleading and result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes all three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
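A sketch of how a Kronecker product assembles a codon rate matrix from per-position nucleotide matrices; only the single-substitution part is shown, whereas the generalized model described above also covers double and triple substitutions:

```python
import numpy as np

# Sketch: 64x64 codon generator from per-position 4x4 nucleotide matrices.
def codon_rate_matrix(Q1, Q2, Q3):
    I = np.eye(4)
    Q = (np.kron(np.kron(Q1, I), I)      # substitution at codon position 1
         + np.kron(np.kron(I, Q2), I)    # position 2
         + np.kron(np.kron(I, I), Q3))   # position 3
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))  # generator rows must sum to zero
    return Q
```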
Abstract:
PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentrations (IRC) in Switzerland, taking into account architectural factors, spatial relationships between the measurements, and geological information. METHODS: We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m(3). Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation yielded a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns that were already obtained earlier. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relations between IRC measurements.
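A minimal sketch of kernel (Nadaraya-Watson) regression with one bandwidth per predictor, the mechanism by which a fitted bandwidth characterizes each variable's influence; the kernel choice and values are illustrative assumptions:

```python
import numpy as np

# Sketch: Nadaraya-Watson kernel regression with per-predictor bandwidths.
def kernel_predict(X_train, y_train, x0, bandwidths):
    u = (X_train - x0) / bandwidths           # scale each predictor by its bandwidth
    w = np.exp(-0.5 * np.sum(u**2, axis=1))   # Gaussian product kernel
    return np.sum(w * y_train) / np.sum(w)
```

Replacing y_train by the indicator of y_train > 300 turns the same estimator into a local probability of exceeding 300 Bq/m(3).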
Abstract:
In this paper, the theory of hidden Markov models (HMM) is applied to the problem of blind (without training sequences) channel estimation and data detection. Within an HMM framework, the Baum-Welch (BW) identification algorithm is frequently used to find maximum-likelihood (ML) estimates of the corresponding model. However, such a procedure assumes the model (i.e., the channel response) to be static throughout the observation sequence. By introducing a parametric model for time-varying channel responses, a version of the algorithm that is more appropriate for mobile channels [time-dependent Baum-Welch (TDBW)] is derived. To compare algorithm behavior, a set of computer simulations for a GSM scenario is provided. Results indicate that, in comparison to other Baum-Welch (BW) versions of the algorithm, the TDBW approach attains a remarkable enhancement in performance. For that purpose, only a moderate increase in computational complexity is needed.
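A compact sketch of one Baum-Welch (EM) iteration for a discrete-output HMM; the TDBW variant additionally gives the emission model (the channel response) a parametric time dependence, which is omitted here, and numerical scaling is left out for brevity:

```python
import numpy as np

# Sketch: one Baum-Welch step (A: transitions, B: emissions, pi: initial).
def baum_welch_step(A, B, pi, obs):
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                          # forward pass
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):                 # backward pass
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)      # state posteriors
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, obs[1:]].T * beta[1:])[:, None, :])
    xi /= xi.sum(axis=(1, 2), keepdims=True)       # transition posteriors
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    return A_new, gamma
```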
Abstract:
Short-term forecasting of electricity consumption has long been studied, and the deregulation of the Nordic electricity markets has affected how it is done. The work first reviews the literature on the subject. The behavior of electricity consumption was studied over different periods, and the usefulness of temperature statistics was assessed with consumption forecasting in mind. Consumption forecasts were made at an hourly resolution with a forecast horizon of one week. The availability and quality of electricity consumption and temperature data in the Nord Pool market area were examined, since the properties of the input data affect hourly consumption forecasting. Two approaches were modeled for forecasting electricity consumption: a regression model and an autoregressive model with exogenous inputs (ARX). The parameters of the models were estimated with the least squares method. The results show that consumption and temperature data must be checked after the fact because the quality of real-time input data is poor. Temperature affects consumption in winter but can be ignored during the summer season. The regression model is more stable than the ARX model, and the error term of the regression model can itself be modeled with a time series model.
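A minimal sketch of least-squares ARX estimation as described above; the model orders and regressor layout are illustrative assumptions:

```python
import numpy as np

# Sketch: least-squares fit of an ARX load model
#   y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*T[t-1] + ... + b_nb*T[t-nb] + e[t]
# with hourly consumption y and temperature T (orders na, nb are assumptions).
def fit_arx(y, T, na=24, nb=2):
    start = max(na, nb)
    Phi = np.array([np.r_[y[t-na:t][::-1], T[t-nb:t][::-1]]
                    for t in range(start, len(y))])
    theta = np.linalg.lstsq(Phi, y[start:], rcond=None)[0]
    return theta                                   # [a1..a_na, b1..b_nb]
```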
Abstract:
The Kenyan forestry and sawmilling industry has been subject to a changing environment since 1999, when the industrial forest plantations were closed down. This has lowered raw material supply and has reduced sawmill operations and the viability of sawmill enterprises. The capacity of the 276 registered sawmills is not sufficient to fulfill sawn timber demand in Kenya, owing to technological degradation and the lack of a qualified labor force, which in turn stem from the absence of sawmilling education and further training in Kenya. The lack of competent sawmill workers has led to low raw material recovery, underutilization of resources and loss of employment. The objective of the work was to suggest models, methods and approaches for the competence and capacity development of the Kenyan sawmilling industry, its sawmills and their workers. A nationwide field survey, interviews, a questionnaire and a literature review were used for data collection to identify the sawmills' competence development areas and to suggest models and methods for their capacity building. The sampling frame included 22 sawmills, representing 72.5% of all registered sawmills in Kenya. The results confirmed that the sawmills' technological level was backward, productivity low, raw material recovery unacceptable and the workers' professional education low. The future challenge will be how to establish the sawmills' capacity building and the workers' competence development. Sawmilling industry development requires various actions through new development models and approaches. Activities should be started for technological development and workers' competence development. This requires re-starting vocational training in sawmilling and establishing more effective co-operation between the sawmills and their stakeholder groups. In competence development, the Enterprise Competence Management Model of Nurminen (2007) can be used, whereas the best training model and approach would be a practically oriented learning-at-work model in which short courses, technical assistance and extension services would be the key functions.
Abstract:
The numerous methods for calculating potential or reference evapotranspiration (ETo or ETP) almost always do so for a 24-hour period, including values of climatic parameters from the nocturnal period (daily averages). These nocturnal values have practically no effect on transpiration, which constitutes the main evaporative demand process in cases of localized irrigation. The aim of the current manuscript was to propose a rather simplified model for the calculation of diurnal daily ETo. It deals with an alternative approach based on the theoretical background of the Penman method that avoids the need for values of the aerodynamic conductance of latent and sensible heat fluxes, as well as data on wind speed and relative air humidity. The comparison between diurnal values of ETo measured in high-precision weighing lysimeters and those estimated by either the Penman-Monteith method or the Simplified-Penman approach under study points to a fairly consistent agreement among the potential demand calculation criteria. The Simplified-Penman approach was a feasible alternative for estimating ETo under the local meteorological conditions of two field trials. Given the availability of the required input data, the method could be employed in other climatic regions for scheduling irrigation.
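A minimal sketch of a radiation-only Penman estimate in this spirit, keeping the energy term and dropping the aerodynamic term so that no wind or humidity data are needed; the exact formulation and coefficients of the paper's Simplified-Penman approach may differ:

```python
import math

# Sketch: radiation-driven diurnal ETo (mm/day); formulation is an assumption.
def eto_diurnal(T_mean_c, Rn, G=0.0, gamma=0.066, lam=2.45):
    """T_mean_c: diurnal mean air temperature (C); Rn, G: net radiation and
    soil heat flux (MJ m-2 day-1); gamma: psychrometric constant (kPa/C);
    lam: latent heat of vaporization (MJ/kg)."""
    es = 0.6108 * math.exp(17.27 * T_mean_c / (T_mean_c + 237.3))
    delta = 4098 * es / (T_mean_c + 237.3) ** 2   # slope of saturation curve
    return (delta / (delta + gamma)) * (Rn - G) / lam

print(eto_diurnal(T_mean_c=25.0, Rn=15.0))
```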