951 results for Building demand estimation model


Relevance:

30.00%

Publisher:

Abstract:

Models are abstractions of reality that have predetermined limits (often not consciously thought through) on the problem domains they can be used to explore. These limits are determined by the range of observed data used to construct and validate the model. However, it is important to remember that operating the model beyond these limits, which is often one of the reasons for building the model in the first place, potentially brings unwanted behaviour and thus reduces the model's usefulness. Our experience with the Agricultural Production Systems Simulator (APSIM), a farming systems model, has led us to adapt techniques from the disciplines of modelling and software development to create a model development process. This process is simple, easy to follow, and brings a much higher level of stability to the development effort, which in turn delivers a much more useful model. A major part of the process relies on having a range of detailed model tests (unit, simulation, sensibility, validation) that exercise a model at various levels (sub-model, model and simulation). To underline the usefulness of testing, we examine several case studies where simulated output can be compared with simple relationships. For example, output is compared with crop water use efficiency relationships gleaned from the literature to check that the model reproduces the expected function. Similarly, another case study attempts to reproduce generalised hydrological relationships found in the literature. This paper then describes a simple model development process (using version control, automated testing and differencing tools) that will enhance the reliability and usefulness of a model.
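
A minimal sketch of the kind of automated "sensibility" test the abstract describes, in which simulated output is checked against a simple relationship from the literature. Here `run_simulation` and the water-use-efficiency bounds are hypothetical stand-ins, not APSIM's actual API or published values:

```python
def run_simulation(config: str) -> tuple[float, float]:
    """Stand-in for a real model run; returns (grain yield kg/ha, water use mm)."""
    return 3500.0, 250.0

def test_water_use_efficiency_is_sensible():
    yield_kg_ha, water_use_mm = run_simulation("wheat_dryland.config")
    wue = yield_kg_ha / water_use_mm   # kg/ha per mm of water used
    # Assumed plausible envelope; a real test would take these bounds from
    # published crop water use efficiency relationships.
    assert 5.0 <= wue <= 25.0, f"WUE {wue:.1f} kg/ha/mm outside expected envelope"

test_water_use_efficiency_is_sensible()
```

Such tests sit alongside unit and validation tests in a continuous, version-controlled build, so a code change that breaks an expected functional relationship is caught immediately.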

Relevance:

30.00%

Publisher:

Abstract:

This paper conceptualizes a framework for bridging the BIM (building information modelling)-specifications divide by augmenting objects within BIM with specification parameters derived from a product library. We demonstrate how model information, enriched with data at various LODs (levels of development), can evolve simultaneously with design and construction, using different representations of a window object embedded in a wall as lifecycle-phase exemplars at different levels of granularity. The conceptual standpoint is informed by the need to explore a methodological approach that extends beyond the limitations of current modelling platforms in enhancing the information content of BIM models. This work therefore demonstrates that BIM objects can be augmented with construction specification parameters by leveraging product libraries.
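
An illustrative sketch of the augmentation idea, not any BIM platform's actual API: a window object picks up specification parameters from a product library, with the parameter set growing as the LOD rises. All names, codes and values are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical product library keyed by a manufacturer product code.
PRODUCT_LIBRARY = {
    "WIN-2040": {"u_value_w_m2k": 1.4, "glazing": "double", "frame": "aluminium"},
}

@dataclass
class WindowObject:
    object_id: str
    lod: int                       # level of development, e.g. 100-500
    spec: dict = field(default_factory=dict)

    def augment_from_library(self, product_code: str) -> None:
        """Attach specification parameters appropriate to the current LOD."""
        params = PRODUCT_LIBRARY[product_code]
        if self.lod >= 300:        # design development: performance data
            self.spec["u_value_w_m2k"] = params["u_value_w_m2k"]
        if self.lod >= 400:        # construction: fabrication-level detail
            self.spec.update(glazing=params["glazing"], frame=params["frame"])

window = WindowObject(object_id="wall-07/window-03", lod=400)
window.augment_from_library("WIN-2040")
```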

Relevance:

30.00%

Publisher:

Abstract:

We present a unified approach to repulsion in ionic and van der Waals solids based on a compressible-ion/atom model. Earlier studies have shown that repulsion in ionic crystals can be viewed as arising from the compression energy of ions, described by two parameters per ion. Here we obtain the compression parameters of the rare-gas atoms Ne, Ar, Kr and Xe by interpolation using the known parameters of related equi-electronic ions (e.g. Ar from S2-, Cl-, K+ and Ca2+). These parameters fit the experimental zero-temperature interatomic distances and compressibilities of the rare-gas crystals satisfactorily. A high-temperature equation of state based on an Einstein model of thermal motions is used to calculate the thermal expansivities, compressibilities and their temperature derivatives for Ar, Kr and Xe. It is argued that an instability at higher temperatures represents the limit to which the solid can be superheated, beyond which sublimation must occur.
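
For orientation only, the generic Einstein-model Helmholtz free energy from which such an equation of state follows; the abstract does not give the paper's exact parametrization, so this is the textbook form:

```latex
% Static lattice energy plus zero-point and thermal Einstein-oscillator terms;
% the pressure follows from P = -(\partial F/\partial V)_T, and expansivity and
% compressibility from second derivatives.
F(V,T) = \Phi_0(V) + \tfrac{3}{2} N \hbar \omega(V)
       + 3 N k_B T \ln\!\left(1 - e^{-\hbar\omega(V)/k_B T}\right)
```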

Relevance:

30.00%

Publisher:

Abstract:

Common diseases such as endometriosis (ED), Alzheimer's disease (AD) and multiple sclerosis (MS) account for a significant proportion of the health-care burden in many countries. Genome-wide association studies (GWASs) for these diseases have identified a number of individual genetic variants contributing to disease risk. However, the effect size of most variants is small, and collectively the known variants explain only a small proportion of the estimated heritability. We used a linear mixed model to fit all single nucleotide polymorphisms (SNPs) simultaneously and estimated genetic variances on the liability scale using SNPs from GWASs in unrelated individuals for these three diseases. For each of the three diseases, case and control samples were not all genotyped in the same laboratory. We demonstrate that a careful analysis can obtain robust estimates, but also that insufficient quality control (QC) of SNPs can lead to spurious results and that overly stringent QC is likely to remove real genetic signals. Our estimates show that common SNPs on commercially available genotyping chips capture significant variation contributing to liability for all three diseases. The estimated proportion of total variation tagged by all SNPs was 0.26 (SE 0.04) for ED, 0.24 (SE 0.03) for AD and 0.30 (SE 0.03) for MS. Further, we partitioned the genetic variance explained into five minor allele frequency (MAF) categories, and by chromosome and gene annotation. We provide strong evidence that a substantial proportion of variation in liability is explained by common SNPs, thereby giving insight into the genetic architecture of these diseases.
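
The abstract does not reproduce the liability-scale conversion, but the standard transformation used in such case-control analyses, shown here for reference, is:

```latex
% K is the population prevalence of the disease, P the proportion of cases
% in the (ascertained) sample, and z the standard normal density at the
% liability threshold \Phi^{-1}(1-K).
h^2_{\mathrm{liability}} = h^2_{\mathrm{observed}}
  \times \frac{K^2 (1-K)^2}{z^2 \, P (1-P)}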

Relevance:

30.00%

Publisher:

Abstract:

Anticipating the number and identity of bidders has a significant influence on many theoretical results concerning the auction itself and on bidders' bidding behaviour. When a bidder knows in advance which specific bidders are likely competitors, this knowledge gives the company a head start when setting the bid price. However, despite these competitive implications, most previous studies have focused almost entirely on forecasting the number of bidders, and only a few authors have dealt with the identity dimension, and then only qualitatively. Using a case study with immediate real-life applications, this paper develops a method for estimating every potential bidder's probability of participating in a future auction as a function of the tender's economic size, removing the bias caused by the distribution of contract-size opportunities. In this way, a bidder or auctioneer is able to estimate the likelihood that a specific group of previously identified key bidders will participate in a future tender.
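
A minimal sketch of the underlying idea, not the paper's exact method: by estimating each bidder's participation rate conditional on a tender-size bin, the estimate is not distorted when some contract sizes are simply tendered more often than others. Bin edges and data are illustrative:

```python
from collections import defaultdict
import bisect

SIZE_BIN_EDGES = [1e5, 5e5, 1e6, 5e6]   # assumed tender-value bin edges

def size_bin(tender_value: float) -> int:
    return bisect.bisect_right(SIZE_BIN_EDGES, tender_value)

def participation_probabilities(tenders):
    """tenders: iterable of (tender_value, set_of_participating_bidders)."""
    tenders_per_bin = defaultdict(int)
    entries = defaultdict(int)           # (bidder, bin) participation counts
    for value, bidders in tenders:
        b = size_bin(value)
        tenders_per_bin[b] += 1
        for bidder in bidders:
            entries[(bidder, b)] += 1
    # Conditioning on the bin removes the contract-size-opportunity bias.
    return {key: n / tenders_per_bin[key[1]] for key, n in entries.items()}

history = [(2.0e5, {"A", "B"}), (3.0e5, {"A"}), (2.0e6, {"B", "C"})]
probs = participation_probabilities(history)  # e.g. P("A" bids | size bin 1) = 1.0
```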

Relevance:

30.00%

Publisher:

Abstract:

In this paper a modified Heffron-Phillips (K-constant) model is derived for the design of power system stabilizers. Knowledge of external system parameters, such as the equivalent infinite-bus voltage and external impedances, or their estimated equivalent values, is required for designing a conventional power system stabilizer. In the proposed method, information available at the secondary bus of the step-up transformer is used to set up a modified Heffron-Phillips (ModHP) model. The PSS design based on this model uses signals available within the generating station. The efficacy of the proposed design technique and the performance of the stabilizer have been evaluated over a range of operating and system conditions. The simulation results show that the performance of the proposed stabilizer is comparable to that obtained by conventional design, but without the need for estimating or computing external system parameters. The proposed design is thus well suited for practical application to power system stabilization, including possibly multi-machine applications where accurate system information is not readily available.
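
The abstract does not give the stabilizer structure; for orientation, the conventional PSS it is compared against typically has the textbook washout-plus-lead-lag form below, where K_s, T_w and T_1..T_4 are the usual gain, washout and lead-lag time constants and the input is the rotor speed deviation:

```latex
G_{\mathrm{PSS}}(s) = K_s \,\frac{s T_w}{1 + s T_w}\,
  \frac{1 + s T_1}{1 + s T_2}\,\frac{1 + s T_3}{1 + s T_4}
```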

Relevance:

30.00%

Publisher:

Abstract:

The use of remote sensing imagery as auxiliary data in forest inventory is based on the correlation between features extracted from the images and the ground truth. Bidirectional reflectance and radial displacement cause variation in image features located in different segments of an image even when the forest characteristics remain the same. This variation has so far been diminished by various radiometric corrections. In this study, the use of sun-azimuth-based converted image co-ordinates was examined to supplement auxiliary data extracted from digitised aerial photographs; the method was considered as an alternative to radiometric corrections. Additionally, the usefulness of multi-image interpretation of digitised aerial photographs in regression estimation of forest characteristics was studied.

The state-owned study area was located in Leivonmäki, Central Finland, and the study material consisted of five digitised and ortho-rectified colour-infrared (CIR) aerial photographs and field measurements of 388 plots, of which 194 were relascope (Bitterlich) plots and 194 were concentric circular plots. Both the image data and the field measurements were from the year 1999.

When examining the effect of the location of the image point on pixel values and texture features of Finnish forest plots in digitised CIR photographs, the clearest differences were found between the front- and back-lighted image halves. Within an image half, the differences between blocks were clearly bigger on the front-lighted half than on the back-lighted half. The strength of the phenomenon varied by forest category. The differences between pixel values extracted from different image blocks were greatest in developed and mature stands and smallest in young stands. The differences between texture features were greatest in developing stands and smallest in young and mature stands.

The logarithm of timber volume per hectare and the angular transformation of the proportion of broadleaved trees of the total volume were used as dependent variables in regression models. Five different trend surfaces based on the converted image co-ordinates were used in the models in order to diminish the effect of the bidirectional reflectance. The reference model of total volume, in which the location of the image point was ignored, resulted in an RMSE of 1.268 calculated from the test material. The best of the trend surfaces was the complete third-order surface, which resulted in an RMSE of 1.107. The reference model of the proportion of broadleaved trees resulted in an RMSE of 0.4292, and the second-order trend surface was the best, resulting in an RMSE of 0.4270. The trend surface method is applicable, but it has to be applied by forest category and by variable.

The usefulness of multi-image interpretation of digitised aerial photographs was studied by building comparable regression models using the front-lighted image features, the back-lighted image features, or both. The two-image model turned out to be slightly better than the one-image models in total volume estimation: the best one-image model resulted in an RMSE of 1.098 and the two-image model in an RMSE of 1.090. The homologous features did not improve the models of the proportion of broadleaved trees. The overall result motivates further research on multi-image interpretation, focusing on improving regression estimation and feature selection, or on examining the stratification used in two-phase sampling inventory techniques.

Keywords: forest inventory, digitised aerial photograph, bidirectional reflectance, converted image co-ordinates, regression estimation, multi-image interpretation, pixel value, texture, trend surface
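
A minimal sketch of the trend-surface idea with synthetic data (assumed variable names, not the thesis code): the converted image co-ordinates (u, v) enter the regression as a complete third-order polynomial surface, so the bidirectional-reflectance trend is absorbed by the surface rather than by a radiometric correction:

```python
import numpy as np

def third_order_surface_terms(u, v):
    """Complete third-order polynomial terms in the converted co-ordinates."""
    return np.column_stack([u, v, u*v, u**2, v**2,
                            u**3, v**3, u**2*v, u*v**2])

rng = np.random.default_rng(0)
n = 200
u, v = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)  # converted co-ordinates
pixel_mean = rng.normal(100, 10, n)                   # spectral feature
log_volume = rng.normal(4, 1, n)                      # ln(m3/ha), synthetic

X = np.column_stack([np.ones(n), pixel_mean,
                     third_order_surface_terms(u, v)])
coef, *_ = np.linalg.lstsq(X, log_volume, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - log_volume) ** 2))
```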

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed.

Automatic milking has become a common practice in dairy husbandry, and in 2006 about 4,000 farms worldwide used over 6,000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production and the growth of labour costs. As the level of automation increases, the time that the cattle keeper spends monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily.

Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals. These costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective.

A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm in Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated.

In order to develop an expert system to automatically detect lameness cases, a model was needed. A probabilistic neural network (PNN) classifier was chosen for the task. The data were divided into two parts: 5,074 measurements from 37 cows were used to train the model, and its ability to detect lameness was evaluated on a validation dataset of 4,868 measurements from 36 cows. The model classified 96% of the measurements correctly as sound or lame, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
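
A sketch of a probabilistic neural network classifier of the kind described: a Parzen-window estimate of each class-conditional density with a Gaussian kernel, classifying to the class with the larger density. The feature values and smoothing parameter below are illustrative, not the thesis's fitted values:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=10.0):
    """Classify x as the class with the largest mean Gaussian kernel value."""
    scores = {}
    for label in np.unique(y_train):
        members = X_train[y_train == label]
        d2 = np.sum((members - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Features per milking: mean load of each leg (kg) and number of kicks.
X = np.array([[120, 118, 95, 97, 0], [130, 128, 60, 132, 4]], dtype=float)
y = np.array(["sound", "lame"])
print(pnn_predict(X, y, np.array([128, 126, 65, 130, 3.0])))  # -> "lame"
```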

Relevance:

30.00%

Publisher:

Abstract:

Age estimation from facial images is receiving increasing attention for applications such as age-based access control and age-adaptive targeted marketing. Since even humans can be misled by the complex biological processes involved, finding a robust method remains a research challenge. In this paper, we propose a new framework integrating Active Appearance Models (AAM), Local Binary Patterns (LBP), Gabor wavelets (GW) and Local Phase Quantization (LPQ) to obtain a highly discriminative feature representation able to model shape, appearance, wrinkles and skin spots. In addition, we propose a novel flexible hierarchical age estimation approach consisting of a multi-class Support Vector Machine (SVM) that classifies a subject into an age group, followed by Support Vector Regression (SVR) to estimate a specific age. Errors that may occur in the classification step, caused by the hard boundaries between age classes, are compensated for in the specific age estimation by a flexible overlapping of the age ranges. The performance of the proposed approach was evaluated on the FG-NET Aging and MORPH Album 2 datasets, achieving mean absolute errors (MAE) of 4.50 and 5.86 years, respectively. Its robustness was also evaluated on a merge of both datasets, on which an MAE of 5.20 years was achieved. Furthermore, we compared human age estimates with those of the proposed approach, and the machine outperformed humans. The proposed approach is competitive with the current state of the art and provides additional robustness to blur, lighting and expression variation through the local phase features.
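
A sketch of the hierarchical classify-then-regress idea using scikit-learn in place of the paper's exact pipeline; the random features stand in for the fused AAM/LBP/GW/LPQ representation, and the group boundaries are assumptions:

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 32))            # placeholder fused face features
age = rng.uniform(5, 65, size=300)
GROUP_EDGES = [20, 40]                    # assumed age-group boundaries
group = np.digitize(age, GROUP_EDGES)

clf = SVC().fit(X, group)                 # step 1: age-group classifier
regs = {g: SVR().fit(X[group == g], age[group == g])  # step 2: per-group SVR
        for g in np.unique(group)}
# (The paper additionally trains each SVR on overlapping age ranges to absorb
# group misclassifications; that overlap is omitted here for brevity.)

def estimate_age(x):
    g = clf.predict(x.reshape(1, -1))[0]
    return regs[g].predict(x.reshape(1, -1))[0]
```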

Relevance:

30.00%

Publisher:

Abstract:

There exist various proposals for building a functional, fault-tolerant large-scale quantum computer. Topological quantum computation is a more exotic proposal, which makes use of the properties of quasiparticles manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. This thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers that are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique: the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, the other, more complicated fusion subalgebras were not considered. Studying their applicability for quantum computation could be a topic of further research.
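
For reference, fusion rules are conventionally written in the generic form below; "non-trivial" means the outcome of fusing two anyons is not unique, i.e. more than one channel c appears or some multiplicity exceeds one. The specific multiplicities for the S_3 quantum double are derived in the thesis itself and are not reproduced here:

```latex
% N_{ab}^{c} are non-negative integers counting the fusion channels of
% anyon types a and b into type c.
a \times b = \sum_{c} N_{ab}^{\,c}\, c
```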

Relevance:

30.00%

Publisher:

Abstract:

We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of an STM represent the probability of growth from one size class to another over a given time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments of the future length, given an individual's previous length measurement and time at liberty, are then derived. We moment-match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag-recapture data and with tag-recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
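
A sketch of filling an STM from a skew-normal future-length distribution, in the spirit of the approach described but not the authors' code; the growth parameters, size classes and the crude guard against negative growth are all placeholders:

```python
import numpy as np
from scipy.stats import skewnorm

CLASS_EDGES = np.arange(10, 41, 5)        # size classes: 10-15, ..., 35-40 mm

def stm(loc_fn, scale=1.5, shape=2.0):
    """Entry (i, j): probability an individual at the midpoint of class i
    lies in class j after one time step."""
    n = len(CLASS_EDGES) - 1
    M = np.zeros((n, n))
    for i in range(n):
        mid = 0.5 * (CLASS_EDGES[i] + CLASS_EDGES[i + 1])
        # max(..., mid) crudely prevents an expected shrink; the paper's
        # moment-matched skew-normals handle this more carefully.
        dist = skewnorm(shape, loc=max(loc_fn(mid), mid), scale=scale)
        cdf = dist.cdf(CLASS_EDGES)
        M[i] = np.diff(cdf)
        M[i, -1] += 1 - cdf[-1]           # plus group absorbs the upper tail
        M[i] /= M[i].sum()                # renormalise; drops negative growth
    return M

# von Bertalanffy expected-length update with placeholder parameters.
vb = lambda L, Linf=40.0, k=0.3: L + (Linf - L) * (1 - np.exp(-k))
matrix = stm(vb)
```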

Relevance:

30.00%

Publisher:

Abstract:

We use Bayesian model selection techniques to test extensions of the standard flat LambdaCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models, are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation, without further computational effort, and it comes with an associated error estimate. Moreover, it provides an unbiased estimator of the evidence after any fixed number of iterations and is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability of the evidence evaluation and its stability in various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat LambdaCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0.
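
A minimal numerical sketch (generic importance sampling, not the authors' code) of why the evidence comes for free from a PMC run: with samples drawn from the adapted proposal q, the importance weights w_i = prior(x_i) * likelihood(x_i) / q(x_i) average to the evidence, and the same weighted sample serves parameter estimation:

```python
import numpy as np

def log_evidence(log_prior, log_like, log_q):
    """All inputs: arrays of per-sample log values from one PMC iteration."""
    log_w = log_prior + log_like - log_q
    # Log of the mean importance weight, computed stably (log-sum-exp).
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```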

Relevance:

30.00%

Publisher:

Abstract:

Background: Australian policy mandates consumer and carer participation in mental health services at all levels, including research. Inspired by a UK model, Service Users Group Advising on Research (SUGAR), we conducted a scoping project in 2013 with a view to creating a consumer- and carer-led research process that moves beyond stigma and tokenism, values the unique knowledge of lived experience, and leads to people being treated better when accessing services. This poster presents the initial findings.

Aims: The project's purpose was to explore with consumers, consumer companions and carers at Metro North Mental Health-RBWH their interest in and views about research partnerships with academic and clinical colleagues.

Methods: This poster overviews the initial findings from three audio-recorded focus groups conducted with a total of 14 consumers, carers and consumer companions at the Brisbane site.

Analysis: Our work was guided by framework analysis (Gale et al. 2013), which defines five steps for analysing narrative data: familiarising, development of categories, indexing, charting and interpretation. Eight main ideas were initially developed and divided between the authors for further indexing. This process identified 37 related analytic ideas, which the authors integrated by combining, removing and redefining them by consensus through a mapping process. The final step is the return of the analysis to the participants for feedback and input into the interpretation of the focus group discussions.

Results:
1. Value and respect: feeling valued and respected, tokenism, stigma, governance, valuing prior knowledge and background.
2. Pathways to knowledge and involvement in research: 'where to begin', support, unity and partnership, communication, co-ordination, flexibility due to fluctuating capacity.
3. Personal context: barriers regarding commitments and the nature of mental illness, wellbeing needs, prior experience of research, motivators, attributes.
4. What is research? Developing knowledge; what to do research on, how and why.

Conclusion and Discussion: Initial analysis suggests that participants saw potential for 'amazing things' in mental health research, such as reflecting their priorities and moving beyond stigma and tokenism. The main needs identified were education, mentoring, funding support and research processes that fit consumers' and carers' limitations and fluctuating capacities. Participants identified maintaining motivation and interest as an issue, since research processes are often extended by ethics and funding applications. They felt that consumer- and carer-led research would value the unique knowledge that the lived experience of consumers and carers brings and would lead to people being treated better when accessing services.

Relevance:

30.00%

Publisher:

Abstract:

The built environment is a major contributor to the world's carbon dioxide emissions, with a considerable amount of energy consumed in buildings for heating, ventilation and air-conditioning, space illumination, electrical appliances, etc., to facilitate various anthropogenic activities. The development of sustainable buildings seeks to ameliorate this situation, mainly by reducing energy consumption. Sustainable building design, however, is a complicated process involving a large number of design variables, each with a range of feasible values. There are also multiple, often conflicting, objectives involved, such as life cycle cost and occupant satisfaction. One approach to dealing with this is the use of optimization models. In this paper, a new multi-objective optimization model is developed for sustainable building design, considering the design objectives of cost and energy-consumption minimization and occupant-comfort maximization. A case study demonstrates that the model can derive a set of suitable design solutions in terms of life cycle cost, energy consumption and indoor environmental quality, helping the client and design team gain a better understanding of the design space and the trade-offs between design objectives. The model can be very useful in the conceptual design stages to determine operational settings that achieve optimal building performance in terms of minimizing energy consumption and maximizing occupant comfort.
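
A minimal sketch of the multi-objective idea (not the paper's model): enumerate combinations of design variables and keep the Pareto-optimal set over cost, energy and discomfort, all minimised. The variables, objective functions and numbers below are entirely hypothetical:

```python
from itertools import product

SETPOINTS = [22.0, 24.0, 26.0]        # cooling setpoint, deg C (assumed)
GLAZING = ["single", "double", "low-e"]

def objectives(setpoint, glazing):
    """Toy life-cycle cost, energy and discomfort surrogates."""
    energy = 100 - 8 * (setpoint - 22) - {"single": 0, "double": 10, "low-e": 15}[glazing]
    cost = 50 + {"single": 0, "double": 12, "low-e": 20}[glazing]
    discomfort = abs(setpoint - 23.5)
    return (cost, energy, discomfort)

def dominates(a, b):
    """a dominates b if it is no worse everywhere and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

designs = {(s, g): objectives(s, g) for s, g in product(SETPOINTS, GLAZING)}
pareto = [d for d, f in designs.items()
          if not any(dominates(f2, f) for f2 in designs.values())]
```

Presenting the Pareto set, rather than a single optimum, is what lets the client and design team inspect the trade-off patterns between objectives.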

Relevance:

30.00%

Publisher:

Abstract:

The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion in MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
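
For reference, the NML distribution over discrete samples x^n from a model class M is shown below; the stochastic complexity is its negative logarithm, and the denominator is the exponential sum over all possible samples that makes naive evaluation infeasible:

```latex
% \hat{\theta}(\cdot) denotes the maximum-likelihood parameters for the
% given sample within model class M.
P_{\mathrm{NML}}(x^n \mid M) =
  \frac{P\big(x^n \mid \hat{\theta}(x^n), M\big)}
       {\sum_{y^n} P\big(y^n \mid \hat{\theta}(y^n), M\big)}
```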