889 results for Stochastic processes -- Mathematical models


Relevance:

100.00%

Publisher:

Abstract:

Facility location concerns the placement of facilities, for various objectives, by use of mathematical models and solution procedures. Almost all facility location models found in the literature are based on minimizing costs or maximizing cover, i.e. covering as much demand as possible. These models are quite efficient at finding an optimal location for a new facility for a particular data set, which is assumed to be constant and known in advance. In real-world situations, input data such as demand and travel costs are neither fixed nor known in advance. This uncertainty and uncontrollability can lead to unacceptable losses or even bankruptcy. One way of dealing with these factors is robustness modelling. A robust facility location model aims to locate a facility whose performance stays within predefined limits, as far as possible, under all foreseeable circumstances. The deviation robustness concept is used as the basis for developing a new competitive deviation robustness model. Competition is modelled with a Huff-based model, which calculates the market share of the new facility. Robustness in this model is defined as the ability of a facility location to capture a minimum market share despite variations in demand. A test case is developed by which algorithms can be tested on their ability to solve robust facility location models. Four stochastic optimization algorithms are considered, of which simulated annealing turned out to be the most appropriate. The test case is slightly modified for a competitive market situation, and the developed competitive deviation model is then solved with the simulated annealing algorithm for three considered deviation norms. Finally, a grid search is performed to illustrate the landscape of the objective function of the competitive deviation model. The objective appears to be multimodal and offers a challenging subject for further research.
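
As a rough illustration of the Huff-based market-share calculation and the worst-case (deviation) view of robustness described above, the following Python sketch computes the share captured by a candidate facility and its worst case over sampled demand scenarios; the coordinates, attractiveness, distance-decay exponent and scenario ranges are invented for illustration and are not the thesis data or model.

```python
# Minimal sketch (not the thesis model): Huff market share of a candidate
# facility and a worst-case check over demand scenarios. Attractiveness,
# decay exponent and scenario data are illustrative assumptions.
import numpy as np

def huff_share(candidate, competitors, demand_points, demand, attractiveness=1.0, beta=2.0):
    """Expected market share captured by `candidate` under a Huff model."""
    def utility(facility):
        d = np.linalg.norm(demand_points - facility, axis=1)
        return attractiveness / np.maximum(d, 1e-9) ** beta

    u_new = utility(np.asarray(candidate))
    u_all = u_new + sum(utility(np.asarray(c)) for c in competitors)
    capture_prob = u_new / u_all          # probability each demand point chooses the new facility
    return float(np.sum(capture_prob * demand) / np.sum(demand))

# Worst-case (deviation-robust) share over sampled demand scenarios.
rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(50, 2))       # demand point locations
base_demand = rng.uniform(1, 5, size=50)
competitors = [(2.0, 2.0), (8.0, 7.0)]
candidate = (5.0, 5.0)

scenarios = [base_demand * rng.uniform(0.7, 1.3, size=50) for _ in range(100)]
shares = [huff_share(candidate, competitors, points, d) for d in scenarios]
print(f"nominal share {huff_share(candidate, competitors, points, base_demand):.3f}, "
      f"worst-case share {min(shares):.3f}")
```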

Relevance:

100.00%

Publisher:

Abstract:

The Pierre Auger Cosmic Ray Observatory North site employs a large array of surface detector stations (tanks) to detect the secondary particle showers generated by ultra-high energy cosmic rays. Because ultra-high energy cosmic rays are so rare, tank communications must be highly reliable to ensure that no valuable data are lost. The Auger North site employs a peer-to-peer paradigm, the Wireless Architecture for Hard Real-Time Embedded Networks (WAHREN), designed specifically for highly reliable message delivery over fixed networks under hard real-time deadlines. The WAHREN design includes two retransmission protocols, micro- and macro-retransmission. To fully understand how each retransmission protocol increases the reliability of communications, this analysis evaluated the system without either retransmission protocol (Case-0), with micro- and macro-retransmission individually (Micro and Macro), and with micro- and macro-retransmission combined. This thesis used a multimodal modeling methodology to show that a performance and reliability analysis of WAHREN was possible, and presents the results of that analysis. A multimodal approach was necessary because these processes were driven by different mathematical models. The results of this analysis can be used as a framework for making design decisions for the Auger North communication system.
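
To make the four evaluated cases concrete, the sketch below compares end-to-end delivery probability with no retransmission, hop-level (micro) retries, end-to-end (macro) retries, and both combined, under a simple independent-loss assumption; the per-hop success rate, hop count and retry counts are illustrative and not WAHREN parameters.

```python
# Illustrative-only sketch of how retransmission raises delivery probability.
# The per-hop loss rate and retry counts are assumptions, not WAHREN values.
def delivery_probability(p_hop, n_hops, micro_retries=0, macro_retries=0):
    """Probability a message crosses n_hops links.

    Micro retransmission: each hop is retried up to micro_retries extra times.
    Macro retransmission: the whole end-to-end transfer is retried up to
    macro_retries extra times.
    """
    p_link = 1.0 - (1.0 - p_hop) ** (1 + micro_retries)   # hop succeeds on any attempt
    p_end_to_end = p_link ** n_hops                        # all hops must succeed
    return 1.0 - (1.0 - p_end_to_end) ** (1 + macro_retries)

p, hops = 0.95, 6
for label, micro, macro in [("Case-0", 0, 0), ("Micro", 2, 0),
                            ("Macro", 0, 2), ("Micro+Macro", 2, 2)]:
    print(f"{label:12s} {delivery_probability(p, hops, micro, macro):.6f}")
```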

Relevance:

100.00%

Publisher:

Abstract:

This research develops an econometric framework to analyze time series processes with bounds. The framework is general enough to incorporate several different kinds of bounding information that constrain continuous-time stochastic processes between discretely sampled observations. It applies to situations in which the process is known to remain within an interval between observations, either through a known constraint or through the observation of extreme realizations of the process. The main statistical technique is maximum likelihood estimation, and the approach leads to the asymptotic distribution theory for the estimation of the parameters in bounded diffusion models. The results of this analysis have several implications for empirical research: advantages are realized in the form of efficiency gains, bias reduction and flexibility of model specification. A bias arises when bounding information is ignored, and it is mitigated within this framework. An efficiency gain arises because the statistical methods make use of the conditioning information revealed by the bounds. Further, the specification of an econometric model can be uncoupled from the restriction to the bounds, leaving the researcher free to model the process near the bound in a way that avoids bias from misspecification. One byproduct of the improvements in model specification is that more precise model estimation exposes other sources of misspecification; some processes reveal themselves to be unlikely candidates for a given diffusion model once the observations are analyzed in combination with the bounding information. A closer inspection of the theoretical foundation behind diffusion models leads to a more general specification of the model, and this approach is used to produce a set of algorithms that make the model computationally feasible and more widely applicable. Finally, the modeling framework is applied to a series of interest rates which, for several years, have been constrained by the lower bound of zero. The estimates from a series of diffusion models suggest a substantial difference between models that ignore bounds and the framework that takes bounding information into consideration.
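
As a toy version of the idea that bounding information enters the likelihood, the sketch below fits a discretely observed Vasicek/Ornstein-Uhlenbeck diffusion by maximum likelihood, treating observations at the zero lower bound as censored; this is only an illustrative stand-in for the framework described above, with invented parameters and simulated data.

```python
# A toy sketch (not the thesis framework): maximum likelihood for a
# discretely observed Vasicek/OU diffusion where values at the zero lower
# bound are treated as censored. Parameters and data are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(theta, x, dt):
    kappa, mu, sigma = np.exp(theta)            # keep parameters positive
    mean = x[:-1] + kappa * (mu - x[:-1]) * dt  # Euler transition mean
    sd = sigma * np.sqrt(dt)
    at_bound = x[1:] <= 0.0
    ll = np.where(at_bound,
                  norm.logcdf((0.0 - mean) / sd),          # censored at the bound
                  norm.logpdf(x[1:], loc=mean, scale=sd))  # interior observation
    return -np.sum(ll)

# Simulate a path truncated at zero, then recover the parameters.
rng = np.random.default_rng(1)
kappa_true, mu_true, sigma_true, dt, n = 0.5, 0.02, 0.01, 1 / 52, 2000
x = np.empty(n); x[0] = 0.02
for t in range(1, n):
    drift = kappa_true * (mu_true - x[t - 1]) * dt
    x[t] = max(0.0, x[t - 1] + drift + sigma_true * np.sqrt(dt) * rng.standard_normal())

fit = minimize(neg_log_likelihood, x0=np.log([0.3, 0.01, 0.02]), args=(x, dt))
print("estimated kappa, mu, sigma:", np.exp(fit.x).round(4))
```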

Relevance:

100.00%

Publisher:

Abstract:

With progressive climate change, the preservation of biodiversity is becoming increasingly important. Only if the gene pool is large enough and the requirements of species are sufficiently diverse will there be species able to adapt to changing circumstances. To maintain biodiversity, we must understand the consequences of the various strategies. Mathematical models of population dynamics can provide such prognoses, but a model that reproduces and explains the mechanisms behind the diversity of species observed experimentally and in nature is still needed. A combination of theoretical models with detailed experiments is required to test biological processes in models and to compare predictions with outcomes in reality. In this thesis, several food webs are modeled and analyzed. Among others, models are formulated of laboratory experiments performed at the Zoological Institute of the University of Cologne. The numerical simulation data are in good agreement with the experimental results. Numerical simulations demonstrate that only a few assumptions are necessary for a model to reproduce the sustained oscillations of population size seen in the experiments. However, the analysis indicates that species "thrown together by chance" are not very likely to survive together over long periods. Even larger food webs do not show significantly different outcomes, underlining how extraordinary and intricate natural diversity is. To produce such a coexistence of randomly selected species, as the experiment does, models require additional information about biological processes or restrictions on the assumptions. Another explanation for the observed coexistence is a slow extinction that takes longer than the observation time; simulated species survive a comparable period of time before they eventually die out. Interestingly, the same models also allow several species to survive in equilibrium and thus do not follow the so-called competitive exclusion principle. This equilibrium state is, however, more fragile to changes in nutrient supply than the oscillating coexistence. Overall, the studies show that a diverse system probably has oscillating population numbers and, conversely, that oscillating population numbers stabilize a food web against both demographic noise and changes in the habitat. Model predictions certainly cannot be converted at face value into policies for real ecosystems, but the stabilizing character of fluctuations should be considered when regulating animal populations.
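
A minimal example of the kind of sustained population oscillations discussed above is the Rosenzweig-MacArthur consumer-resource model; the sketch below integrates it numerically and reports the oscillation range, with parameter values chosen purely for illustration rather than fitted to the Cologne experiments.

```python
# A minimal sketch of sustained consumer-resource oscillations
# (Rosenzweig-MacArthur model); all parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 3.0        # resource growth rate and carrying capacity
a, h = 1.0, 1.0        # attack rate and handling time (Holling type II)
e, m = 0.5, 0.2        # conversion efficiency and consumer mortality

def food_web(t, y):
    R, C = y
    feeding = a * R / (1.0 + a * h * R)
    return [r * R * (1.0 - R / K) - feeding * C,
            e * feeding * C - m * C]

sol = solve_ivp(food_web, (0.0, 200.0), [1.0, 0.5], dense_output=True, rtol=1e-8)
t = np.linspace(100, 200, 500)              # discard the transient
R, C = sol.sol(t)
print(f"resource oscillates between {R.min():.2f} and {R.max():.2f}; "
      f"consumer between {C.min():.2f} and {C.max():.2f}")
```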

Relevance:

100.00%

Publisher:

Abstract:

The release of ultrafine particles (UFP) from laser printers and office equipment was analyzed using a particle counter with high time resolution (FMPS, Fast Mobility Particle Sizer) together with appropriate mathematical models. Measurements were carried out in a 1 m³ chamber, a 24 m³ chamber and an office. Time-dependent emission rates were calculated for these environments using a deconvolution model, after which the total number of emitted particles was calculated. The total number of released particles was found to be independent of the environmental parameters and is therefore, in principle, suitable for comparing different printers. On the basis of the time-dependent emission rates, "initial burst" emitters and constant emitters could also be distinguished. In the case of an "initial burst" emitter, comparison with other devices is generally affected by strong variations between individual measurements. When conducting exposure assessments for UFP in an office, the spatial distribution of the particles also has to be considered; in this work, the spatial distribution was predicted on a case-by-case basis using CFD simulation.
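
The deconvolution step can be illustrated with the standard well-mixed single-zone mass balance dC/dt = S(t)/V - k·C, from which the emission rate follows as S(t) = V·(dC/dt + k·C); the sketch below applies this to synthetic "initial burst" data, with the chamber volume, loss rate and burst strength assumed for illustration.

```python
# A sketch of mass-balance deconvolution: recovering a time-dependent
# emission rate from chamber concentrations under the well-mixed assumption.
# Volume, loss rate and the synthetic burst are illustrative values.
import numpy as np

def emission_rate(times, conc, volume, loss_rate):
    """Recover S(t) [particles/s] from measured concentration C(t) [particles/m^3]."""
    dC_dt = np.gradient(conc, times)
    return volume * (dC_dt + loss_rate * conc)

# Synthetic example: a short "initial burst" followed by decay.
t = np.arange(0.0, 1800.0, 10.0)                 # time grid (s)
true_S = np.where(t < 120, 5e8, 0.0)             # burst during the first two minutes
V = 1.0                                          # 1 m^3 chamber (assumed)
k = 2.5 / 3600.0                                 # combined ventilation + deposition loss (1/s, assumed)

C = np.zeros_like(t)
for i in range(1, len(t)):                       # forward-integrate the mass balance
    dt = t[i] - t[i - 1]
    C[i] = C[i - 1] + dt * (true_S[i - 1] / V - k * C[i - 1])

S_est = emission_rate(t, C, V, k)
dt = t[1] - t[0]
print(f"total emitted (true)      {np.sum(true_S) * dt:.3e} particles")
print(f"total emitted (recovered) {np.sum(S_est) * dt:.3e} particles")
```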

Relevance:

100.00%

Publisher:

Abstract:

Experiments were undertaken to study the drying kinetics of moist cylindrical food particulates during fluidised bed drying. Cylindrical particles were prepared from green beans with three different length:diameter ratios: 3:1, 2:1 and 1:1. A batch fluidised bed dryer connected to a heat pump system was used for the experiments; the heat pump and fluidised bed combination was chosen to increase overall energy efficiency and achieve higher drying rates. Drying kinetics were evaluated in terms of non-dimensional moisture at three drying temperatures: 30, 40 and 50 °C. Numerous mathematical models can be used to describe drying kinetics, ranging from analytical models with simplified assumptions to empirical models built by regression on experimental data. Empirical models are commonly used for various food materials because of their simpler approach; however, problems with accuracy limit their application. Some of these limitations can be reduced by using semi-empirical models based on the heat and mass transfer of the drying operation, one such method being the quasi-stationary approach. In this study, a modified quasi-stationary approach was used to model the drying kinetics of the cylindrical food particles at the three drying temperatures.
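
For context, the sketch below shows how a simple empirical thin-layer model (here the Page model, MR = exp(-k t^n)) is fitted to non-dimensional moisture data by regression; the data points are synthetic, and this is not the modified quasi-stationary approach used in the study.

```python
# Fitting an empirical thin-layer drying model (Page model) to synthetic
# non-dimensional moisture data; illustrative only, not the study's method.
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    return np.exp(-k * t ** n)

t_min = np.array([0, 10, 20, 40, 60, 90, 120, 180], dtype=float)   # drying time (min)
MR = np.array([1.00, 0.78, 0.62, 0.41, 0.28, 0.17, 0.11, 0.05])    # moisture ratio

(k, n), _ = curve_fit(page_model, t_min, MR, p0=(0.02, 1.0))
half_time = (np.log(2) / k) ** (1 / n)
print(f"k = {k:.4f} min^-n, n = {n:.3f}, time to MR = 0.5 ≈ {half_time:.1f} min")
```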

Relevance:

100.00%

Publisher:

Abstract:

Technology is continually changing and evolving throughout the construction industry, particularly in the design process. One of the principal manifestations of this is a move away from team working in a shared work space towards team working in a virtual space, using increasingly sophisticated electronic media. Because of the significant operating differences between shared and virtual spaces, team members must adjust their generic skills when moving between the two conditions. This paper reports one aspect of a CRC-CI research project investigating the 'generic skills' used by individuals and teams when engaging with high-bandwidth information and communication technologies (ICT). It aligns with the project's other two aspects of collaboration in virtual environments: 'processes' and 'models'. The entire project focuses on the early stages of a project (i.e. design), in which models for the project are being developed and revised. The paper summarises the first stage of the research project, a literature review identifying factors of virtual teaming which may affect team member skills. It concludes that design team participants require 'appropriate skills' to function efficiently and effectively, and that the introduction of high-bandwidth technologies reinforces the need for skills mapping and measurement.

Relevance:

100.00%

Publisher:

Abstract:

In sport and exercise biomechanics, forward dynamics analyses or simulations have frequently been used in attempts to establish optimal techniques for performing a wide range of motor activities. However, the accuracy and validity of these simulations are largely dependent on the complexity of the mathematical model used to represent the neuromusculoskeletal system. It could be argued that complex mathematical models are superior to simple mathematical models, as they enable basic mechanical insights to be made and individual-specific optimal movement solutions to be identified. Contrary to some claims in the literature, however, we suggest that it is currently not possible to identify the complete optimal solution for a given motor activity. For a complete optimization of human motion, dynamical systems theory implies that mathematical models must incorporate a much wider range of organismic, environmental and task constraints. These ideas encapsulate why sports medicine specialists need to adopt more individualized clinical assessment procedures when interpreting why performers' movement patterns may differ.
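
For readers unfamiliar with the term, a forward dynamics simulation prescribes the joint torques (or muscle forces) and integrates the equations of motion forward in time to obtain the resulting movement. The sketch below does this for a single torque-driven rigid segment; it is a deliberately minimal illustration with invented values, not a neuromusculoskeletal model of the kind discussed above.

```python
# A bare-bones forward dynamics simulation: one rigid segment driven by a
# prescribed joint torque, integrated forward in time. Values illustrative.
import numpy as np
from scipy.integrate import solve_ivp

m, L, g = 3.0, 0.4, 9.81        # segment mass (kg), length (m), gravity
I = m * L**2 / 3.0              # moment of inertia about the joint

def torque(t):
    return 5.0 if t < 0.15 else 0.0      # brief extensor burst (N*m), assumed

def dynamics(t, y):
    theta, omega = y
    alpha = (torque(t) - m * g * (L / 2) * np.sin(theta)) / I
    return [omega, alpha]

sol = solve_ivp(dynamics, (0.0, 0.5), [0.1, 0.0], max_step=0.001)
print(f"joint angle after 0.5 s: {np.degrees(sol.y[0, -1]):.1f} deg, "
      f"peak angular velocity: {sol.y[1].max():.1f} rad/s")
```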

Relevance:

100.00%

Publisher:

Abstract:

Modern Engineering Asset Management (EAM) requires the accurate assessment of current, and the prediction of future, asset health condition. Appropriate mathematical models capable of estimating times to failure and the probability of future failures are essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors, termed covariates. Covariate-based hazard prediction is a fundamental notion in reliability theory: it estimates the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has survived up to the current time. A number of statistical covariate-based hazard models have been developed, but none of them explicitly incorporates both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model to address this concern, named the Explicit Hazard Model (EHM). Both the semi-parametric and the non-parametric forms of this model are presented. The major purpose of this paper is to illustrate the theoretical development of EHM; owing to page limitations, a case study with reliability field data is presented in the applications part of this study.
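
As background, a covariate-based hazard in the proportional-hazards family can be written as h(t | z) = h0(t)·exp(beta·z); the sketch below uses a Weibull baseline with separate coefficients for an external (load) and an internal (wear) covariate and computes a conditional survival probability. This is only an illustrative stand-in, not the Explicit Hazard Model itself, and all values are assumed.

```python
# A covariate-based hazard in the proportional-hazards family (illustrative,
# not the paper's EHM): Weibull baseline scaled by an exponential link with
# separate external and internal covariate coefficients.
import numpy as np

def hazard(t, z_ext, z_int, beta_ext=0.8, beta_int=1.2, shape=1.5, scale=2000.0):
    """Weibull baseline hazard scaled by an exponential covariate link."""
    baseline = (shape / scale) * (t / scale) ** (shape - 1.0)
    return baseline * np.exp(beta_ext * z_ext + beta_int * z_int)

def conditional_reliability(t_now, horizon, z_ext, z_int, n=1000):
    """P(survive to t_now + horizon | survived to t_now) via the cumulative hazard."""
    t = np.linspace(t_now, t_now + horizon, n)
    h = hazard(t, z_ext, z_int)
    cumulative = np.sum((h[:-1] + h[1:]) * np.diff(t)) / 2.0   # trapezoid rule
    return np.exp(-cumulative)

# An asset at 1500 h of service, under moderate external load and measured internal wear:
print(f"hazard now: {hazard(1500.0, z_ext=0.5, z_int=0.3):.2e} per hour")
print(f"P(survive another 500 h): {conditional_reliability(1500.0, 500.0, z_ext=0.5, z_int=0.3):.3f}")
```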

Relevance:

100.00%

Publisher:

Abstract:

The driving task requires sustained attention over prolonged periods and can be performed in highly predictable or repetitive environments. Such conditions can create drowsiness or hypovigilance and impair the ability to react to critical events. Identifying vigilance decrement in monotonous conditions has been a major subject of research, but no research to date has attempted to predict this vigilance decrement. This pilot study aims to show that vigilance decrements due to monotonous tasks can be predicted through mathematical modelling. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task (SART), is used to assess participants' performance; this task models the driver's ability to cope with unpredicted events by performing the expected action. A Hidden Markov Model (HMM) is proposed to predict participants' hypovigilance: the driver's vigilance evolution is modelled as a hidden state and is related to an observable variable, the participant's reaction times. The experiment shows that the monotony of the task can lead to an important vigilance decline in less than five minutes, and that this impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment thus shows that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, offering the potential to enhance road safety and prevent road crashes.
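
The filtering idea behind the HMM can be illustrated with a two-state hidden vigilance process (alert vs. hypovigilant) whose observable output is the reaction time; the sketch below runs the forward algorithm over a sequence of slowing responses. The transition matrix and emission parameters are invented for illustration, not the study's fitted values.

```python
# A minimal two-state HMM filter for vigilance (alert / hypovigilant) from
# observed reaction times. All transition and emission values are assumed.
import numpy as np
from scipy.stats import norm

A = np.array([[0.95, 0.05],        # P(next state | current state)
              [0.10, 0.90]])
initial = np.array([0.9, 0.1])
emission_mean = np.array([0.35, 0.70])   # mean reaction time (s) per state
emission_sd = np.array([0.05, 0.15])

def filter_vigilance(reaction_times):
    """Return P(hypovigilant | observations so far) after each observation."""
    belief = initial.copy()
    history = []
    for rt in reaction_times:
        belief = belief @ A                                    # predict
        belief = belief * norm.pdf(rt, emission_mean, emission_sd)   # update with the new RT
        belief /= belief.sum()
        history.append(belief[1])
    return np.array(history)

rts = [0.33, 0.36, 0.40, 0.38, 0.55, 0.62, 0.71, 0.80]   # slowing responses
for rt, p in zip(rts, filter_vigilance(rts)):
    print(f"RT {rt:.2f} s -> P(hypovigilant) = {p:.2f}")
```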

Relevance:

100.00%

Publisher:

Abstract:

Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only attend to certain informative regions within a presented image, and that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis presents a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions according to their visual importance, reaping efficiency gains without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray tracing is also presented, in which the visual importance model guides the progressive refinement of an image. In addition, the concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from this method of progressive rendering. The visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency as long as the overall visual impression of the scene is maintained. Different aspects of the approach should also find applications in image compression, image retrieval, progressive data transmission and active robotic vision.
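
One way to picture importance-based refinement is to split the sample (ray) budget across image regions in proportion to their visual importance; the toy sketch below does exactly that for a small grid of regions. The importance values, budget and minimum-sample floor are illustrative, and the fuzzy-logic importance model itself is not shown.

```python
# Toy sketch of importance-modulated refinement: sample budget per region
# allocated in proportion to a precomputed visual-importance map. The
# importance values and budget are illustrative assumptions.
import numpy as np

def allocate_samples(importance, total_samples, min_per_region=4):
    """Split a sample budget across regions, weighted by visual importance."""
    importance = np.asarray(importance, dtype=float)
    weights = importance / importance.sum()
    base = np.full(importance.shape, min_per_region)            # floor for every region
    remaining = total_samples - base.sum()
    extra = np.floor(weights * remaining).astype(int)           # importance-weighted share
    return base + extra

# A 4x4 grid of regions: high importance around a central object, low at the edges.
importance_map = np.array([[0.1, 0.2, 0.2, 0.1],
                           [0.2, 0.9, 0.8, 0.2],
                           [0.2, 0.8, 0.7, 0.2],
                           [0.1, 0.2, 0.2, 0.1]])
samples = allocate_samples(importance_map, total_samples=4096)
print(samples)
print("total samples used:", samples.sum())
```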

Relevance:

100.00%

Publisher:

Abstract:

The analysis of investment in the electric power industry has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions, and electric power utilities have made use of an enormous range of mathematical models. Some models address time spans lasting a fraction of a second, such as those dealing with lightning strikes on transmission lines, while at the other end of the scale there are models addressing time horizons of ten or twenty years, which usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model which recognises the regional differences that exist in energy demand and in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form, which results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid that directly incorporates regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model, and the dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments are conducted and discussed. Using the operating model as a basis, a regional capacity expansion model is then developed; it determines the type, location and capacity of additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
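
The decomposition structure can be sketched in miniature: an outer loop over build configurations (standing in for the master mixed-integer problem) and, for each configuration, a smooth nonlinear dispatch subproblem against regional linear inverse demand curves. The plant data, demand parameters and costs below are invented for illustration; this is not the thesis model, and full enumeration replaces the actual master problem.

```python
# Toy capacity-expansion decomposition: enumerate build configurations
# ("master") and solve a smooth nonlinear dispatch subproblem for each.
# All data are invented; illustrative only.
import itertools
import numpy as np
from scipy.optimize import minimize

# Candidate plants: (region, capacity MW, annualized build cost, fuel cost $/MWh)
plants = [(0, 400.0, 2.0e6, 25.0), (1, 300.0, 1.5e6, 40.0)]
a = np.array([120.0, 90.0])     # inverse demand intercepts ($/MWh) per region
b = np.array([0.10, 0.08])      # inverse demand slopes
line_cap = 150.0                # transfer capacity region 0 -> 1 (MW)
hours = 8760.0

def dispatch(build):
    """Smooth NLP subproblem: maximize welfare for one build configuration."""
    cap = np.zeros(2)
    fuel = np.full(2, 1e9)       # effectively infinite cost if no plant is built
    for built, (region, capacity, _, fuel_cost) in zip(build, plants):
        if built:
            cap[region] += capacity
            fuel[region] = min(fuel[region], fuel_cost)

    def neg_welfare(x):
        g0, g1, flow = x
        q = np.array([g0 - flow, g1 + flow])             # regional consumption
        benefit = np.sum(a * q - 0.5 * b * q**2)          # area under the demand curves
        return -(benefit - fuel[0] * g0 - fuel[1] * g1)

    cons = [{"type": "ineq", "fun": lambda x: x[0] - x[2]},   # consumption in region 0 >= 0
            {"type": "ineq", "fun": lambda x: x[1] + x[2]}]   # consumption in region 1 >= 0
    bounds = [(0.0, cap[0]), (0.0, cap[1]), (-line_cap, line_cap)]
    res = minimize(neg_welfare, x0=[cap[0] / 2, cap[1] / 2, 0.0],
                   bounds=bounds, constraints=cons, method="SLSQP")
    return -res.fun * hours      # annual operating welfare

best = None
for build in itertools.product([0, 1], repeat=len(plants)):      # "master" enumeration
    build_cost = sum(p[2] for built, p in zip(build, plants) if built)
    total = dispatch(build) - build_cost
    if best is None or total > best[1]:
        best = (build, total)
print(f"best configuration {best[0]} with annual net benefit ${best[1]:,.0f}")
```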

Relevance:

100.00%

Publisher:

Abstract:

Experiments were undertaken to study the drying kinetics of differently shaped moist food particulates during heat pump assisted fluidised bed drying. Three geometrical shapes, parallelepiped, cylindrical and spherical, were prepared from potatoes (aspect ratios 1:1, 2:1 and 3:1), cut beans (length:diameter ratios 1:1, 2:1 and 3:1) and peas, respectively. A batch fluidised bed dryer connected to a heat pump system was used for the experiments; the heat pump and fluidised bed combination was chosen to increase overall energy efficiency and achieve higher drying rates. Drying kinetics were evaluated in terms of non-dimensional moisture at three drying temperatures: 30, 40 and 50 °C. Because of the complex hydrodynamics of fluidised beds, drying kinetics are dryer- or material-specific. Numerous mathematical models can be used to describe drying kinetics, ranging from analytical models with simplified assumptions to empirical models built by regression on experimental data. Empirical models are commonly used for various food materials because of their simpler approach; however, problems with accuracy limit their application. Some of these limitations can be reduced by using semi-empirical models based on the heat and mass transfer of the drying operation, one such method being the quasi-stationary approach. In this study, a modified quasi-stationary approach was used to model the drying kinetics of the cylindrical food particles at the three drying temperatures.
