971 results for Stochastic modelling


Relevance: 20.00%

Abstract:

Objectives: Decision support tools (DSTs) for invasive species management have had limited success in producing convincing results and meeting users' expectations. The problems may be linked to the functional form of the model that represents the dynamic relationship between the invasive species and crop yield loss in the DSTs. The objectives of this study were: a) to compile and review the models tested in field experiments and applied in DSTs; and b) to carry out an empirical evaluation of some popular models and alternatives.

Design and methods: This study surveyed the literature and documented the strengths and weaknesses of the functional forms of yield loss models. Some widely used models (the linear, relative-yield and hyperbolic models) and two potentially useful models (the double-scaled and density-scaled models) were evaluated over a wide range of weed densities, maximum potential yield losses and maximum yield losses per weed.

Results: Popular functional forms include hyperbolic, sigmoid, linear, quadratic and inverse models. Many basic models were modified to account for the effects of important factors (weather, tillage and crop growth stage at weed emergence) influencing the weed–crop interaction and to improve prediction accuracy. This limited their applicability in DSTs, as they became less general and were often applicable to a much narrower range of conditions than would be encountered in the use of DSTs. These factors' effects could be better accounted for using other techniques. Among the models assessed empirically, the linear model is a very simple model that appears to work well at sparse weed densities but produces unrealistic behaviour at high densities. The relative-yield model exhibits the expected behaviour at high densities and high levels of maximum yield loss per weed, but probably underestimates yield loss at low to intermediate densities. The hyperbolic model demonstrated reasonable behaviour at lower weed densities, but produced biologically unreasonable behaviour at low rates of loss per weed and high yield loss at the maximum weed density. The density-scaled model is not sensitive to the yield loss at maximum weed density in terms of the number of weeds that produce a given proportion of that maximum yield loss. The double-scaled model appeared to produce more robust estimates of the impact of weeds under a wide range of conditions.

Conclusions: Previously tested functional forms exhibit problems for crop yield loss modelling in DSTs. Of the models evaluated, the double-scaled model exhibits desirable qualitative behaviour under most circumstances.
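The contrast between the linear and hyperbolic forms discussed above can be shown with a short sketch. The hyperbolic form below follows Cousens' widely used rectangular hyperbola, with `i` the yield loss per weed at low density and `a` the asymptotic maximum yield loss; the specific parameter values are illustrative assumptions, not values from the study.

```python
def linear_yield_loss(density, i):
    """Linear model: loss grows without bound as weed density rises."""
    return i * density

def hyperbolic_yield_loss(density, i, a):
    """Cousens' rectangular hyperbola: loss saturates at `a` percent."""
    return i * density / (1.0 + i * density / a)

# Illustrative parameters: 2% loss per weed at low density, 60% ceiling.
i, a = 2.0, 60.0
for d in (1, 10, 100, 1000):
    lin = linear_yield_loss(d, i)
    hyp = hyperbolic_yield_loss(d, i, a)
    print(f"density {d:4d}: linear {lin:7.1f}%  hyperbolic {hyp:5.1f}%")
```

At 1000 weeds the linear model predicts a 2000% yield loss, while the hyperbolic form stays below its 60% asymptote — exactly the unrealistic high-density behaviour of the linear model that the review describes.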

Relevance: 20.00%

Abstract:

Stochastic filtering is, in general, the estimation of indirectly observed states given observed data. This means that one discusses the conditional expectation as one of the most accurate estimators, given the observations, in the context of a probability space. In my thesis, I present the theory of filtering for two different kinds of observation process: the first is a diffusion process, which is discussed in the first chapter, while the third chapter introduces the latter, a counting process. The majority of the fundamental results of stochastic filtering are stated in the form of equations, such as the unnormalized Zakai equation, which leads to the Kushner-Stratonovich equation. The latter, also known as the normalized Zakai equation or, equally, the Fujisaki-Kallianpur-Kunita (FKK) equation, shows the divergence between the estimates obtained using a diffusion process and a counting process. I also introduce an example for the linear Gaussian case, which is essentially the construction underlying the so-called Kalman-Bucy filter. As the unnormalized and normalized Zakai equations are stated in terms of the conditional distribution, a density for these distributions is developed through these equations and stated in Kushner's theorem. However, Kushner's theorem has the form of a stochastic partial differential equation whose solution needs to be verified in the sense of existence and uniqueness, which is covered in the second chapter.
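The linear Gaussian case mentioned above admits a closed-form filter. As a minimal illustration, the sketch below implements a scalar discrete-time Kalman filter (the discrete-time analogue of the Kalman-Bucy filter the thesis discusses); the state dynamics, noise variances and simulated data are illustrative assumptions, not taken from the thesis.

```python
import random

def kalman_filter(ys, a=0.9, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Scalar Kalman filter for x_k = a x_{k-1} + w_k, y_k = x_k + v_k,
    with w_k ~ N(0, q) and v_k ~ N(0, r). Returns the filtered means."""
    m, p = m0, p0
    means = []
    for y in ys:
        # Predict: propagate mean and variance through the dynamics.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update: condition on the new observation y.
        k = p_pred / (p_pred + r)      # Kalman gain
        m = m_pred + k * (y - m_pred)
        p = (1.0 - k) * p_pred
        means.append(m)
    return means

random.seed(0)
# Simulate a hidden AR(1) state observed in Gaussian noise.
x, xs, ys = 0.0, [], []
for _ in range(200):
    x = 0.9 * x + random.gauss(0.0, 0.1 ** 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0.0, 0.5 ** 0.5))

est = kalman_filter(ys)
raw_err = sum((y - x) ** 2 for y, x in zip(ys, xs)) / len(xs)
filt_err = sum((m - x) ** 2 for m, x in zip(est, xs)) / len(xs)
print(f"raw MSE {raw_err:.3f}  filtered MSE {filt_err:.3f}")
```

The filtered estimate of the hidden state should track it more closely than the raw observations do, which is the sense in which the conditional expectation is the most accurate estimator.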

Relevance: 20.00%

Abstract:

Minimum Description Length (MDL) is an information-theoretic principle that can be used for model selection and other statistical inference tasks. There are various ways to use the principle in practice. One theoretically valid way is to use the normalized maximum likelihood (NML) criterion. Due to computational difficulties, this approach has not been used very often. This thesis presents efficient floating-point algorithms that make it possible to compute the NML for multinomial, Naive Bayes and Bayesian forest models. None of the presented algorithms rely on asymptotic analysis and with the first two model classes we also discuss how to compute exact rational number solutions.
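For intuition about the computational difficulty mentioned above: for a single multinomial sample of size n with K categories, the NML criterion divides the maximized likelihood of the data by a normalizing sum (the parametric complexity) taken over every possible data set. The brute-force sketch below computes that normalizer by enumerating compositions; it illustrates the definition only and scales exponentially with K, which is precisely why efficient algorithms of the kind the thesis presents are needed.

```python
from math import factorial, prod

def compositions(n, k):
    """Yield all k-tuples of non-negative integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def multinomial_complexity(n, k):
    """Parametric complexity: the sum over all length-n data sets of
    their maximized likelihoods, C(n,k) = sum n!/(prod c_j!) * prod (c_j/n)^c_j."""
    total = 0.0
    for counts in compositions(n, k):
        coef = factorial(n) / prod(factorial(c) for c in counts)
        lik = prod((c / n) ** c for c in counts if c > 0)
        total += coef * lik
    return total

# With one category the maximized likelihood is always 1, so C(n, 1) = 1.
print(multinomial_complexity(5, 1))   # 1.0
print(multinomial_complexity(2, 2))   # 2.5
```

The NML score of observed counts is then their maximized likelihood divided by this constant; the efficient floating-point algorithms in the thesis avoid the enumeration entirely.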

Relevance: 20.00%

Abstract:

Heat stress can cause sterility in sorghum, and the anticipated increased frequency of high-temperature events implies increasing risk to sorghum productivity in Australia. Here we summarise our research on specific varietal attributes associated with heat stress tolerance in sorghum and evaluate, via crop simulation analysis, how they might affect yield outcomes in production environments. We have recently conducted a range of controlled-environment and field experiments to study the physiology and genetics of high-temperature effects on the growth and development of sorghum. Sorghum seed set was reduced by high-temperature effects (>36-38°C) on pollen germination around flowering, but genotypes differed in their tolerance to high-temperature stress. The effects were quantified in a manner that enabled their incorporation into the APSIM sorghum crop model. Simulation analysis indicated that the risk of high-temperature damage and yield loss depended on sowing date and variety. While climate trends will exacerbate high-temperature effects, avoidance through crop management and genetic tolerance appears possible.

Relevance: 20.00%

Abstract:

A central tenet in the theory of reliability modelling is the quantification of the probability of asset failure. In general, reliability depends on asset age and the maintenance policy applied. Usually, failure and maintenance times are the primary inputs to reliability models. However, for many organisations, different aspects of these data are often recorded in different databases (e.g. work order notifications, event logs, condition monitoring data, and process control data). These recorded data cannot be interpreted individually, since they typically do not have all the information necessary to ascertain failure and preventive maintenance times. This paper presents a methodology for the extraction of failure and preventive maintenance times using commonly-available, real-world data sources. A text-mining approach is employed to extract keywords indicative of the source of the maintenance event. Using these keywords, a Naïve Bayes classifier is then applied to attribute each machine stoppage to one of two classes: failure or preventive. The accuracy of the algorithm is assessed and the classified failure time data are then presented. The applicability of the methodology is demonstrated on a maintenance data set from an Australian electricity company.
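The keyword-plus-classifier step described above can be sketched with a tiny multinomial Naive Bayes built from scratch. The work-order phrases, keywords and labels below are invented stand-ins for the maintenance records, and the classifier is a generic illustration rather than the paper's exact pipeline.

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label). Returns per-class log priors and
    Laplace-smoothed token log-likelihoods."""
    labels = Counter(lbl for _, lbl in docs)
    vocab = {t for toks, _ in docs for t in toks}
    counts = {lbl: Counter() for lbl in labels}
    for toks, lbl in docs:
        counts[lbl].update(toks)
    model = {}
    for lbl, n in labels.items():
        total = sum(counts[lbl].values())
        model[lbl] = (
            math.log(n / len(docs)),
            {t: math.log((counts[lbl][t] + 1) / (total + len(vocab)))
             for t in vocab},
        )
    return model

def classify(model, tokens):
    """Attribute a stoppage to the class with the highest posterior score."""
    def score(lbl):
        prior, loglik = model[lbl]
        return prior + sum(loglik.get(t, 0.0) for t in tokens)
    return max(model, key=score)

# Invented work-order snippets tokenized into keywords.
training = [
    (["bearing", "seized", "unplanned", "trip"], "failure"),
    (["motor", "burnt", "tripped", "alarm"], "failure"),
    (["scheduled", "lubrication", "service"], "preventive"),
    (["planned", "inspection", "overhaul"], "preventive"),
]
model = train_nb(training)
print(classify(model, ["pump", "seized", "trip"]))    # failure
print(classify(model, ["scheduled", "overhaul"]))     # preventive
```

Unknown tokens (such as "pump" here) are simply ignored, so the classification rests on whatever indicative keywords the text-mining step extracted.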

Relevance: 20.00%

Abstract:

Progress in crop improvement is limited by the ability to identify favourable combinations of genotypes (G) and management practices (M) in relevant target environments (E) given the resources available to search among the myriad of possible combinations. To underpin yield advance we require prediction of phenotype based on genotype. In plant breeding, traditional phenotypic selection methods have involved measuring phenotypic performance of large segregating populations in multi-environment trials and applying rigorous statistical procedures based on quantitative genetic theory to identify superior individuals. Recent developments in the ability to inexpensively and densely map/sequence genomes have facilitated a shift from the level of the individual (genotype) to the level of the genomic region. Molecular breeding strategies using genome-wide prediction and genomic selection approaches have developed rapidly. However, their applicability to complex traits remains constrained by gene–gene and gene–environment interactions, which restrict the predictive power of associations of genomic regions with phenotypic responses. Here it is argued that crop ecophysiology and functional whole-plant modelling can provide an effective link between molecular and organism scales and enhance molecular breeding by adding value to genetic prediction approaches. A physiological framework that facilitates dissection and modelling of complex traits can inform phenotyping methods for marker/gene detection and underpin prediction of likely phenotypic consequences of trait and genetic variation in target environments. This approach holds considerable promise for more effectively linking genotype to phenotype for complex adaptive traits. Specific examples focused on drought adaptation are presented to highlight the concepts.

Relevance: 20.00%

Abstract:

The following problem is considered. Given the locations of the Central Processing Unit (CPU) and the terminals which have to communicate with it, determine the number and locations of the concentrators, and assign the terminals to the concentrators, in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals, so this becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, …, m concentrators.) Each possible assignment is taken to represent a state of a stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. The exact locations of the concentrators are then determined by conducting a local gradient search within that state. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average the proposed algorithm was found to perform better.
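The state-visiting scheme described above can be sketched on a toy one-dimensional problem. The cost function, region count and iteration budget below are illustrative assumptions; the actual algorithm searches over concentrator-to-region assignments rather than points on a line, and finishes with a gradient search that this sketch omits.

```python
import random

def automaton_search(cost, regions, iters=2000, seed=1):
    """Learning-automaton search: visit a region according to the current
    probabilities, sample a uniform point in it, update that region's
    running average cost, then reset the probabilities to be inversely
    proportional to the average costs. Returns the most probable region."""
    rng = random.Random(seed)
    k = len(regions)
    probs = [1.0 / k] * k
    # Seed each region's average with one forced sample so every state
    # has a defined cost before the probability updates begin.
    avg, visits = [], []
    for lo, hi in regions:
        avg.append(cost(rng.uniform(lo, hi)))
        visits.append(1)
    for _ in range(iters):
        state = rng.choices(range(k), weights=probs)[0]
        lo, hi = regions[state]
        c = cost(rng.uniform(lo, hi))
        visits[state] += 1
        avg[state] += (c - avg[state]) / visits[state]
        inv = [1.0 / max(a, 1e-12) for a in avg]
        total = sum(inv)
        probs = [v / total for v in inv]
    return max(range(k), key=lambda i: probs[i])

# Toy cost with its minimum at x = 0.8; four regions partition [0, 1].
regions = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
best = automaton_search(lambda x: (x - 0.8) ** 2 + 0.05, regions)
print("converged to region", best)
```

The search should settle on the last region, which contains the global minimum; a local gradient search inside it would then pin down the exact location.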

Relevance: 20.00%

Abstract:

The phosphine distribution in a cylindrical silo containing grain is predicted. A three-dimensional mathematical model, which accounts for multicomponent gas-phase transport and the sorption of phosphine into the grain kernel, is developed. In addition, a simple model is presented to describe the death of insects within the grain as a function of their exposure to phosphine gas. The proposed model is solved using the commercially available computational fluid dynamics (CFD) software FLUENT, together with our own C code to customize the solver in order to incorporate the models for sorption and insect extinction. Two types of fumigation delivery are studied, namely fan-forced from the base of the silo and tablet from the top of the silo. An analysis of the predicted phosphine distribution shows that during fan-forced fumigation, the position of the leaky area is very important to the development of the gas flow field and the phosphine distribution in the silo. If the leak is in the lower section of the silo, insects that exist near the top of the silo may not be eradicated. However, the position of a leak does not affect phosphine distribution during tablet fumigation. For such fumigation in a typical silo configuration, phosphine concentrations remain low near the base of the silo. Furthermore, we find that half-life pressure test readings are not an indicator of phosphine distribution during tablet fumigation.
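The insect-death submodel is described only qualitatively above. As a generic illustration of mortality driven by phosphine exposure, the sketch below integrates a survival fraction whose decay rate depends on the local concentration; the power-law form and every parameter value are assumptions for illustration, not the paper's fitted model.

```python
import math

def survival(conc_series, dt, k=0.05, n=1.0):
    """Integrate dS/dt = -k * C(t)**n * S for the surviving fraction S,
    given a concentration time series C (mg/m^3) sampled every dt hours.
    Each step uses the exact solution for piecewise-constant C."""
    s, out = 1.0, []
    for c in conc_series:
        s *= math.exp(-k * c ** n * dt)
        out.append(s)
    return out

# Illustrative exposure histories: a well-fumigated zone near the tablet
# delivery point versus a poorly fumigated zone near the silo base.
top = [0.7] * 120     # 120 hours at 0.7 mg/m^3
base = [0.1] * 120    # low concentration near the base
s_top = survival(top, dt=1.0)
s_base = survival(base, dt=1.0)
print(f"surviving fraction: top {s_top[-1]:.3f}, base {s_base[-1]:.3f}")
```

With these assumed numbers, the well-fumigated zone drops to roughly 1.5% survival while the low-concentration zone leaves more than half the insects alive, mirroring the paper's point that low phosphine concentrations near the base can leave insects uneradicated.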

Relevance: 20.00%

Abstract:

This paper addresses an output feedback control problem for a class of networked control systems (NCSs) with a stochastic communication protocol. Under the scenario that only one sensor is allowed to obtain communication access at each transmission instant, a stochastic communication protocol is first defined, where the communication access is modelled by a discrete-time Markov chain with partly unknown transition probabilities. Secondly, using a network-based output feedback control strategy and a time-delay division method, the closed-loop system is modelled as a stochastic system with multiple time-varying delays, where the inherent characteristics of the network delay are taken into account to improve the control performance. Then, based on the constructed stochastic model, two sufficient conditions are derived for ensuring the mean-square stability and stabilization of the system under consideration. Finally, two examples are given to show the effectiveness of the proposed method.
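The stochastic communication protocol above grants the channel to one sensor at a time according to a Markov chain. The sketch below simulates such access scheduling for two sensors; the transition matrix is an illustrative assumption (in the paper, some of its entries may be unknown).

```python
import random

def simulate_access(P, steps, start=0, seed=7):
    """Simulate which sensor holds the channel at each transmission
    instant, following the discrete-time Markov chain with matrix P."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps - 1):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

# Two sensors; sensor 0 tends to hold the channel slightly longer.
P = [[0.7, 0.3],
     [0.4, 0.6]]
path = simulate_access(P, steps=1000)
share0 = path.count(0) / len(path)
print(f"sensor 0 held the channel {share0:.0%} of the time")
```

Over a long run the occupancy approaches the chain's stationary distribution (about 57% for sensor 0 here), which is what makes the protocol amenable to mean-square stability analysis.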

Relevance: 20.00%

Abstract:

The noted 19th century biologist, Ernst Haeckel, put forward the idea that the growth (ontogenesis) of an organism recapitulated the history of its evolutionary development. While this idea is defunct within biology, it has been promoted in areas such as education (the idea that an education repeats the development of the civilizations that came before). In the research presented in this paper, recapitulation is used as a metaphor within computer-aided design as a way of grouping together different generations of spatial layouts. In most CAD programs, a spatial layout is represented as a series of objects (lines, or boundary representations) that stand in for walls; the relationships between spaces are not usually explicitly stated. A representation using Lindenmayer systems (originally designed for modelling plant morphology) is put forward as a way of representing the morphology of a spatial layout. The aim of this research is not just to describe an individual layout, but to find representations that link together lineages of development. This representation can be used in generative design as a way of creating more meaningful layouts which have particular characteristics. The use of genetic operators (mutation and crossover) is also considered, making this representation suitable for use with genetic algorithms.
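The Lindenmayer-system representation mentioned above rewrites every symbol of a string in parallel at each generation. The two-symbol example below is Lindenmayer's classic algae system rather than a spatial-layout grammar; a layout application would use symbols standing for rooms and adjacency operators, which are beyond this sketch.

```python
def lsystem(axiom, rules, generations):
    """Apply the production rules to every symbol in parallel, once per
    generation (symbols without a rule are copied unchanged)."""
    s = axiom
    history = [s]
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
        history.append(s)
    return history

# Lindenmayer's original algae system: A -> AB, B -> A.
for gen in lsystem("A", {"A": "AB", "B": "A"}, 4):
    print(gen)
```

Each generation is derived from the previous one, so the sequence of strings itself records a lineage of development — the property the paper exploits to group generations of layouts together.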

Relevance: 20.00%

Abstract:

The phenomenon of adsorption is governed by the various interactions among the constituents of the interface, and the forms of adsorption isotherms hold the clue to the nature of these interactions. An understanding of this phenomenon may be said to be complete only when the parameters occurring in such expressions for isotherms are interpretable in terms of molecular/electronic interactions. This objective, viz. expressing the composition of the isotherm parameters through microscopic modelling, is by no means a simple one. The task is made particularly difficult in the case of charged interfaces, where idealisation is difficult to make and, when made, not so easy to justify.

Relevance: 20.00%

Abstract:

The term acclimation has been used with several connotations in the field of acclimatory physiology. An attempt has been made, in this paper, to define the term “acclimation” precisely for effective modelling of acclimatory processes. Acclimation is defined, with respect to a specific variable, as the cumulative experience gained by the organism when subjected to a step change in the environment. Experimental observations on a large number of variables in animals exposed to sustained stress show that, after an initial deviation from the basal value (defined as “growth”), the variables tend to return to basal levels (defined as “decay”). This forms the basis for modelling biological responses in terms of their growth and decay. Hierarchical systems theory as presented by Mesarovic, Macko & Takahara (1970) facilitates the modelling of complex and partially characterized systems. This theory, in conjunction with the “growth–decay” analysis of biological variables, is used to model the temperature-regulating system in animals exposed to cold. The approach appears to be applicable at all levels of biological organization. The regulation of hormonal activity, which forms a part of the temperature-regulating system, and the relationship of the latter with the “energy” system of the animal of which it forms a part, are also effectively modelled by this approach. It is believed that this systematic approach would eliminate much of the current circular thinking in the area of acclimatory physiology.
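A response that deviates from the basal value and then returns to it, as described above, is often captured by a difference of exponentials with a fast "growth" and a slow "decay" time constant. The sketch below uses that generic form; the basal level, amplitude and time constants are illustrative assumptions, not values from the paper.

```python
import math

def growth_decay(t, basal=37.0, amp=5.0, tau_growth=2.0, tau_decay=20.0):
    """Response to a step change at t = 0 that deviates from the basal
    level and then returns: basal + amp * (exp(-t/tau_decay) - exp(-t/tau_growth))."""
    return basal + amp * (math.exp(-t / tau_decay) - math.exp(-t / tau_growth))

for t in (0, 1, 5, 10, 50, 200):
    print(f"t = {t:3d}  response = {growth_decay(t):.2f}")
```

The variable starts at the basal level, rises while the fast exponential dies away (the "growth" phase) and then relaxes back toward basal as the slow exponential decays (the "decay" phase), reproducing the qualitative shape of the experimental observations.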

Relevance: 20.00%

Abstract:

Recommender systems assist users in finding what they want. The challenging issue is how to efficiently acquire user preferences or user information needs for building personalized recommender systems. This research explores the acquisition of user preferences using data taxonomy information to enhance personalized recommendations and alleviate the cold-start problem. A concept hierarchy model is proposed, which provides a two-dimensional hierarchy for acquiring user preferences. The language model is also extended to the proposed hierarchy in order to generate an effective recommendation algorithm. Both Amazon.com book and music datasets are used to evaluate the proposed approach, and the experimental results show that the proposed approach is promising.
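One way a taxonomy helps with cold-start users, loosely in the spirit of the hierarchy described above, is to back off from sparse item-level preferences to category-level ones. The sketch below interpolates item and category evidence; the taxonomy, ratings and mixing weight are invented for illustration and do not reproduce the paper's model.

```python
from collections import defaultdict

def hierarchy_scores(ratings, taxonomy, lam=0.5):
    """Score every item as lam * item-level evidence plus (1 - lam) *
    the mean evidence of its category, so that unrated items inherit
    the appeal of the categories the user has shown interest in."""
    cat_sum = defaultdict(float)
    cat_n = defaultdict(int)
    for item, cat in taxonomy.items():
        cat_sum[cat] += ratings.get(item, 0.0)
        cat_n[cat] += 1
    return {item: lam * ratings.get(item, 0.0)
                  + (1 - lam) * cat_sum[cat] / cat_n[cat]
            for item, cat in taxonomy.items()}

# Invented book taxonomy and one cold-start user's single rating.
taxonomy = {"dune": "scifi", "foundation": "scifi",
            "hobbit": "fantasy", "emma": "classics"}
ratings = {"dune": 5.0}
scores = hierarchy_scores(ratings, taxonomy)
best_unrated = max((i for i in scores if i not in ratings), key=scores.get)
print(best_unrated)   # foundation
```

With only one rating, the item-level signal alone says nothing about unrated books; the category term is what lets the sibling item in the same taxonomy branch rank above items in untouched categories.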