947 results for Stochastic models
Abstract:
In the exclusion-process literature, mean-field models are often derived by assuming that the occupancy status of lattice sites is independent. Although this assumption is questionable, it is the foundation of many mean-field models. In this work we develop methods to relax the independence assumption for a range of discrete exclusion process-based mechanisms motivated by applications from cell biology. Previous investigations that focussed on relaxing the independence assumption have been limited to studying initially-uniform populations and ignored any spatial variations. By ignoring spatial variations these previous studies were greatly simplified due to translational invariance of the lattice. These previous corrected mean-field models could not be applied to many important problems in cell biology such as invasion waves of cells that are characterised by moving fronts. Here we propose generalised methods that relax the independence assumption for spatially inhomogeneous problems, leading to corrected mean-field descriptions of a range of exclusion process-based models that incorporate (i) unbiased motility, (ii) biased motility, and (iii) unbiased motility with agent birth and death processes. The corrected mean-field models derived here are applicable to spatially variable processes including invasion wave type problems. We show that there can be large deviations between simulation data and traditional mean-field models based on invoking the independence assumption. Furthermore, we show that the corrected mean-field models give an improved match to the simulation data in all cases considered.
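To make the exclusion mechanism concrete, here is a minimal sketch of unbiased motility on a 1D lattice, where a move is aborted whenever the target site is already occupied. This is not the paper's implementation; the function name, boundary handling, and update rule are illustrative assumptions.

```python
import random

def simulate_sep(occupancy, steps, seed=0):
    """Simulate a 1D symmetric (unbiased) exclusion process.

    occupancy: list of 0/1 site states. At each step a random site is
    picked; if it holds an agent, the agent attempts to hop to a random
    neighbour. Moves off the lattice, or into an occupied site, are
    aborted (the exclusion property).
    """
    rng = random.Random(seed)
    sites = list(occupancy)
    n = len(sites)
    for _ in range(steps):
        i = rng.randrange(n)
        if sites[i] == 0:
            continue  # no agent at this site
        j = i + rng.choice((-1, 1))  # target neighbour
        if 0 <= j < n and sites[j] == 0:
            sites[i], sites[j] = 0, 1  # hop succeeds only into an empty site
    return sites
```

A mean-field model invoking the independence assumption would instead evolve average site occupancies directly; comparing such averages against many realisations of a simulation like this is the kind of test the abstract describes.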
Abstract:
We present the findings of a study into the implementation of explicitly criterion-referenced assessment in undergraduate courses in mathematics. We discuss students' concepts of criterion referencing and also the various interpretations that this concept has among mathematics educators. Our primary goal was to move towards a classification of criterion-referencing models in quantitative courses. A secondary goal was to investigate whether explicitly presenting assessment criteria to students was useful to them and guided them in responding to assessment tasks. The data and feedback from students indicate that while students found the criteria easy to understand and useful in informing them as to how they would be graded, the criteria did not alter the way they actually approached the assessment activity.
Abstract:
Currently, well-established clinical therapeutic approaches for bone reconstruction are restricted to the transplantation of autografts and allografts, and the implantation of metal devices or ceramic-based implants to assist bone regeneration. Bone grafts possess osteoconductive and osteoinductive properties; however, they are limited in access and availability and associated with donor site morbidity, haemorrhage, risk of infection, insufficient transplant integration, graft devitalisation, and subsequent resorption resulting in decreased mechanical stability. As a result, recent research focuses on the development of alternative therapeutic concepts. Analysis of the tissue engineering literature shows that bone regeneration has become a focus area in the field. Hence, a considerable number of research groups and commercial entities work on the development of tissue-engineered constructs for bone regeneration. However, bench-to-bedside translations are still infrequent, as the process towards approval by regulatory bodies is protracted and costly, requiring both comprehensive in vitro and in vivo studies. In translational orthopaedic research, the utilisation of large preclinical animal models is a conditio sine qua non. Consequently, to allow comparison between different studies and their outcomes, it is essential that animal models, fixation devices, surgical procedures and methods of taking measurements are well standardised to produce reliable data pools as a base for further research directions. The following chapter reviews animal models of the weight-bearing lower extremity utilised in the field, which include representations of fracture healing, segmental bone defects, and fracture non-unions.
Abstract:
On obstacle-cluttered construction sites, understanding the motion characteristics of objects is important for anticipating collisions and preventing accidents. This study investigates algorithms for object identification applications that can be used by heavy equipment operators to effectively monitor congested local environments. The proposed framework contains algorithms for three-dimensional spatial modeling and image matching that are based on 3D images scanned by a high-frame-rate range sensor. The preliminary results show that an occupancy grid spatial modeling algorithm can successfully build the most pertinent spatial information, and that an image matching algorithm is best able to identify which objects are in the scanned scene.
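As a rough illustration of the occupancy-grid idea, scanned 3D points can be binned into cubic cells, with a cell marked occupied once it accumulates enough points. The function name, sparse-dictionary representation, and point-count threshold are assumptions for illustration, not details from the study.

```python
def occupancy_grid(points, cell_size, threshold=1):
    """Build a sparse occupancy grid from 3D range-sensor points.

    Each (x, y, z) point is binned into a cubic cell of side `cell_size`;
    a cell is considered occupied once it contains at least `threshold`
    points. Returns the set of occupied cell indices.
    """
    counts = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        counts[key] = counts.get(key, 0) + 1
    return {cell for cell, c in counts.items() if c >= threshold}
```

A sparse set of occupied cells like this is what downstream steps (e.g. image matching against known object shapes) would operate on.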
Abstract:
As organizations reach higher levels of Business Process Management maturity, they tend to accumulate large collections of process models. These repositories may contain thousands of activities and be managed by different stakeholders with varying skills and responsibilities. However, while of great value, these repositories also induce high management costs. Thus, it becomes essential to keep track of the various model versions as they may mutually overlap, supersede one another and evolve over time. We propose an innovative versioning model, and associated storage structure, specifically designed to maximize sharing across process models and process model versions, reduce conflicts in concurrent edits and automatically handle controlled change propagation. The focal point of this technique is to version single process model fragments, rather than entire process models. Indeed, empirical evidence shows that real-life process model repositories have numerous duplicate fragments. Experiments on two industrial datasets confirm the usefulness of our technique.
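A minimal sketch of the core idea is to store fragments content-addressed, so that identical fragments appearing in many models and versions are physically stored only once. The class and method names below are hypothetical, and real process-model fragments would be structured graphs rather than strings.

```python
import hashlib

class FragmentStore:
    """Content-addressed store for process-model fragments.

    A version of a model is recorded as an ordered list of fragment
    hashes; identical fragments are deduplicated across all versions.
    """
    def __init__(self):
        self._fragments = {}   # content hash -> fragment body
        self._versions = {}    # (model, version) -> list of fragment hashes

    def put_version(self, model, version, fragments):
        hashes = []
        for frag in fragments:
            h = hashlib.sha256(frag.encode()).hexdigest()
            self._fragments.setdefault(h, frag)  # share across versions
            hashes.append(h)
        self._versions[(model, version)] = hashes

    def stored_fragment_count(self):
        return len(self._fragments)
```

With this layout, a new version that changes one fragment of a large model adds only that fragment to storage, which is what makes fragment-level versioning cheaper than versioning whole models.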
Abstract:
One of the prominent topics in Business Service Management is business models for (new) services. Business models are useful for service management and engineering as they provide a broader and more holistic perspective on services. Business models are particularly relevant for service innovation, as this requires paying attention to the business models that make new services viable, and business model innovation can drive the innovation of new and established services. Before we can look at business models for services, we first need to understand what business models are. This is not straightforward, as business models are still not well comprehended and the knowledge about business models is fragmented over different disciplines, such as information systems, strategy, innovation, and entrepreneurship. This whitepaper, ‘Understanding business models,’ introduces readers to business models. It contributes to enhancing the understanding of business models, in particular the conceptualisation of business models, by discussing and integrating business model definitions, frameworks and archetypes from different disciplines. After reading this whitepaper, the reader will have a well-developed understanding of what business models are and how the concept is sometimes interpreted and used in different ways. It will help readers assess their own understanding of business models and that of others. This will contribute to a better and more beneficial use of business models, an increase in shared understanding, and making it easier to work with business model techniques and tools.
Abstract:
Current knowledge about the relationship between transport disadvantage and activity space size is limited to urban areas, and as a result, very little is known to date about this link in a rural context. In addition, although research has identified transport disadvantaged groups based on the size of their activity spaces, these studies have not empirically explained such differences, and the result is often a poor identification of the problems facing disadvantaged groups. Research has shown that transport disadvantage varies over time. The static nature of analysis using the activity space concept in previous research studies has lacked the ability to identify transport disadvantage in time. Activity space is a dynamic concept, and therefore possesses great potential for capturing temporal variations in behaviour and access opportunities. This research derives measures of the size and fullness of activity spaces for 157 individuals for weekdays, weekends, and for a week using weekly activity-travel diary data from three case study areas located in rural Northern Ireland. Four focus groups were also conducted in order to triangulate the quantitative findings and to explain the differences between different socio-spatial groups. The findings of this research show that despite having a smaller sized activity space, individuals were not disadvantaged because they were able to access their required activities locally. Car ownership was found to be an important lifeline in rural areas. Temporal disaggregation of the data reveals that this is true only on weekends due to a lack of public transport services. In addition, despite activity spaces being of similar size, the fullness of activity spaces of low-income individuals was found to be significantly lower compared to their high-income counterparts.
Focus group data show that financial constraints and poor connections, both between public transport services and between transport routes and opportunities, forced individuals to participate in activities located along the main transport corridors.
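One common way to quantify activity-space size is the area of the convex hull of a person's visited locations. The sketch below (monotone-chain hull plus the shoelace formula) is an illustrative baseline; the study's actual size and fullness measures may differ.

```python
def activity_space_area(locations):
    """Area of the convex hull of visited (x, y) locations, a common
    activity-space size measure. Uses the monotone-chain hull algorithm
    and the shoelace formula; fewer than 3 distinct points give area 0.
    """
    pts = sorted(set(locations))
    if len(pts) < 3:
        return 0.0

    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]

    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

Applied to diary data, the same locations can be split by weekday/weekend to produce the temporally disaggregated comparison the abstract describes.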
Abstract:
Genomic and proteomic analyses have attracted a great deal of interest in biological research in recent years. Many methods have been applied to discover useful information contained in the enormous databases of genomic sequences and amino acid sequences. The results of these investigations inspire further research in biological fields in return. These biological sequences, which may be considered multiscale sequences, have some specific features which need further efforts to characterise using more refined methods. This project aims to study some of these biological challenges with multiscale analysis methods and a stochastic modelling approach. The first part of the thesis aims to cluster some unknown proteins, and classify their families as well as their structural classes. A development in proteomic analysis is concerned with the determination of protein functions. The first step in this development is to classify proteins and predict their families. This motivates us to study some unknown proteins from specific families, and to cluster them into families and structural classes. We select a large number of proteins from the same families or superfamilies, and link them to simulate some unknown large proteins from these families. We use multifractal analysis and the wavelet method to capture the characteristics of these linked proteins. The simulation results show that the method is valid for the classification of large proteins. The second part of the thesis aims to explore the relationship of proteins based on a layered comparison with their components. Many methods are based on homology of proteins because resemblance at the protein sequence level normally indicates similarity of functions and structures. However, some proteins may have similar functions with low sequential identity. We consider protein sequences at a detailed level to investigate the problem of comparison of proteins.
The comparison is based on the empirical mode decomposition (EMD), and protein sequences are represented by their intrinsic mode functions. A measure of similarity is introduced with a new cross-correlation formula. The similarity results show that the EMD is useful for detection of functional relationships of proteins. The third part of the thesis aims to investigate the transcriptional regulatory network of the yeast cell cycle via stochastic differential equations. As the investigation of genome-wide gene expression has become a focus in genomic analysis, researchers have tried to understand the mechanisms of the yeast genome for many years. How cells control gene expression still needs further investigation. We use a stochastic differential equation to model the expression profile of a target gene. We modify the model with a Gaussian membership function. For each target gene, a transcriptional rate is obtained, and the estimated transcriptional rate is also calculated with the information from five possible transcriptional regulators. Some regulators of these target genes are verified against related references. With these results, we construct a transcriptional regulatory network for genes from the yeast Saccharomyces cerevisiae. The construction of the transcriptional regulatory network is useful for uncovering further mechanisms of the yeast cell cycle.
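To illustrate how a stochastic differential equation can model an expression profile, here is an Euler-Maruyama simulation of the simple mean-reverting form dx = (s - d*x) dt + sigma dW, with s read as a transcriptional rate and d as a degradation rate. This is an assumed toy model for illustration; the thesis's actual model, including its Gaussian membership function, differs.

```python
import math
import random

def euler_maruyama(x0, s, d, sigma, dt, steps, seed=0):
    """Simulate dx = (s - d*x) dt + sigma dW by the Euler-Maruyama scheme.

    Returns the sample path [x_0, x_1, ..., x_steps]. With sigma = 0 this
    reduces to deterministic relaxation of x towards s/d.
    """
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        drift = (s - d * x) * dt
        diffusion = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += drift + diffusion
        path.append(x)
    return path
```

Fitting s (and d) to an observed expression profile, per target gene, is the kind of estimation step that yields the transcriptional rates mentioned above.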
Abstract:
Focusing on the conditions that an optimization problem may satisfy, so-called convergence conditions are proposed, and subsequently a stochastic optimization algorithm, named the DSZ algorithm, is presented to deal with both unconstrained and constrained optimization. The principle is discussed in the theoretical model of the DSZ algorithm, from which we derive its practical model. The efficiency of the practical model is demonstrated by comparison with similar algorithms such as Enhanced Simulated Annealing (ESA), Monte Carlo Simulated Annealing (MCS), Sniffer Global Optimization (SGO), Directed Tabu Search (DTS), and the Genetic Algorithm (GA), using a set of well-known unconstrained and constrained optimization test cases. Further attention is given to strategies for optimizing high-dimensional unconstrained problems using the DSZ algorithm.
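The DSZ algorithm itself is not reproduced here; as a point of reference, the simulated-annealing family it is benchmarked against (ESA, MCS) follows the general pattern below. All parameter values and names are illustrative defaults, not settings from the paper.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000,
                        step_size=0.5, seed=0):
    """Generic simulated annealing for unconstrained minimisation.

    Proposes random perturbations; downhill moves are always accepted,
    uphill moves with Boltzmann probability exp(-delta / T), and the
    temperature T decays geometrically. Returns the best point found.
    """
    rng = random.Random(seed)
    x, fx, t = list(x0), f(x0), t0
    best, fbest = list(x), fx
    for _ in range(steps):
        cand = [xi + rng.uniform(-step_size, step_size) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest
```

Benchmarking any such stochastic optimiser on standard unconstrained and constrained test cases, as the abstract describes, amounts to comparing the best objective values reached under a fixed evaluation budget.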
Abstract:
We examine the impact of individual-specific information processing strategies (IPSs) regarding the inclusion/exclusion of attributes on the parameter estimates and behavioural outputs of models of discrete choice. Current practice assumes that individuals employ a homogeneous IPS with regard to how they process the attributes of stated choice (SC) experiments. We show how information collected exogenously to the SC experiment, on whether respondents either ignored or considered each attribute, may be used in the estimation process, and how such information provides outputs that are IPS-segment specific. We contend that accounting for the inclusion/exclusion of attributes will result in behaviourally richer population parameter estimates.
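One simple way to encode stated attribute non-attendance in a multinomial logit is to zero ignored attributes out of each alternative's utility for a given IPS segment. This is an illustrative device, not the authors' estimator; the function and variable names are assumptions.

```python
import math

def choice_probabilities(attributes, betas, attended):
    """Multinomial logit choice probabilities with attribute non-attendance.

    attributes: one attribute vector per alternative.
    betas: taste coefficients, one per attribute.
    attended: booleans marking which attributes this respondent considered;
    ignored attributes contribute nothing to utility.
    """
    utilities = []
    for alt in attributes:
        u = sum(b * x for b, x, a in zip(betas, alt, attended) if a)
        utilities.append(u)
    m = max(utilities)                      # stabilise the softmax
    exps = [math.exp(u - m) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]
```

Estimating separate coefficients for the "attended" and "ignored" segments, rather than one pooled set, is what makes the resulting outputs IPS-segment specific.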
Abstract:
This research paper aims to develop a method to explore the travel behaviour differences between disadvantaged and non-disadvantaged populations. It also aims to develop a modelling approach, or a framework, to integrate disadvantage analysis into transportation planning models (TPMs). The methodology employed identifies significantly disadvantaged groups through a cluster analysis, and the paper presents a disadvantage-integrated TPM. This model could be useful in determining areas with concentrated disadvantaged populations and also in developing and formulating relevant disadvantage-sensitive policies. (a) For the covering entry of this conference, please see ITRD abstract no. E214666.
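The kind of cluster analysis used to identify distinct disadvantage groups can be sketched with a plain k-means over socio-economic indicator vectors. This is illustrative only; the abstract does not specify which clustering algorithm or indicators the paper actually uses.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain k-means clustering of indicator vectors.

    Alternates between assigning each point to its nearest centroid
    (squared Euclidean distance) and moving each centroid to the mean
    of its assigned points. Returns (assignments, centroids).
    """
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:  # empty clusters keep their old centroid
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assign, centroids
```

The resulting cluster labels, mapped back to zones, are what would let a TPM flag areas with concentrated disadvantaged populations.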
Abstract:
Over recent years a significant amount of research has been undertaken to develop prognostic models that can be used to predict the remaining useful life of engineering assets. Implementations by industry have had only limited success. By design, models are subject to specific assumptions and approximations, some of which are mathematical, while others relate to practical implementation issues such as the amount of data required to validate and verify a proposed model. Therefore, appropriate model selection for successful practical implementation requires not only a mathematical understanding of each model type, but also an appreciation of how a particular business intends to utilise a model and its outputs. This paper discusses business issues that need to be considered when selecting an appropriate modelling approach for trial. It also presents classification tables and process flow diagrams to assist industry and research personnel in selecting appropriate prognostic models for predicting the remaining useful life of engineering assets within their specific business environment. The paper then explores the strengths and weaknesses of the main prognostic model classes to establish what makes them better suited to certain applications than to others, and summarises how each has been applied to engineering prognostics. Consequently, this paper should provide a starting point for young researchers first considering options for remaining useful life prediction. The models described in this paper are Knowledge-based (expert and fuzzy), Life expectancy (stochastic and statistical), Artificial Neural Networks, and Physical models.
Abstract:
We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimizer over the player's actions of expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary. Peter L. Bartlett, Alexander Rakhlin
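For a concrete instance of regret in online convex optimization, consider projected online gradient descent on the quadratic losses f_t(x) = (x - z_t)^2 over an interval [-R, R]; regret against the best fixed action in hindsight (the clipped mean of the z_t) can be computed directly. This toy setup is an assumption for illustration, not a construction from the paper.

```python
def online_gradient_descent_regret(zs, eta=0.1, radius=1.0):
    """Regret of projected online gradient descent on f_t(x) = (x - z_t)^2.

    The player starts at 0, takes a gradient step after each loss, and is
    projected back onto [-radius, radius]. The best fixed action in
    hindsight for these losses is the mean of zs, clipped to the interval.
    """
    x, total = 0.0, 0.0
    for z in zs:
        total += (x - z) ** 2              # loss suffered this round
        x -= eta * 2.0 * (x - z)           # gradient step
        x = max(-radius, min(radius, x))   # projection onto the interval
    zbar = sum(zs) / len(zs)
    zbar = max(-radius, min(radius, zbar))
    best = sum((zbar - z) ** 2 for z in zs)
    return total - best
```

Bounding this quantity over all adversarial sequences zs, without specifying the player's algorithm, is the kind of question the minimax analysis above addresses.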