969 results for Stochastic Processes
Abstract:
This study uses weekday Automatic Fare Collection (AFC) data on a premium bus line in Brisbane, Australia.
• Stochastic analysis is compared to peak hour factor (PHF) analysis for insight into passenger loading variability.
• The hourly design load factor (e.g. 88th percentile) is found to be a useful method of modeling a segment’s passenger demand time-history across a study weekday, for capacity and QoS assessment.
• The hourly coefficient of variation of load factor is found to be a useful QoS and operational assessment measure, particularly through its relationship with hourly average load factor and with design load factor.
• An assessment table based on the hourly coefficient of variation of load factor is developed from the case study.
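The two hourly measures in this abstract are easy to sketch in a few lines. The load-factor values below are hypothetical, and the percentile routine is a generic linear-interpolation implementation rather than the study's exact method:

```python
import statistics

def percentile(data, q):
    """Linear-interpolated percentile of a sample (q in [0, 1])."""
    s = sorted(data)
    rank = q * (len(s) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(s):
        return s[lo] + frac * (s[lo + 1] - s[lo])
    return s[lo]

def hourly_measures(load_factors, design_q=0.88):
    """Design load factor (e.g. 88th percentile) and coefficient of
    variation for one hour's observed trip load factors."""
    mean = statistics.mean(load_factors)
    cv = statistics.stdev(load_factors) / mean
    return percentile(load_factors, design_q), cv

# Hypothetical load factors (passengers / seats) for trips in one peak hour.
loads = [0.62, 0.71, 0.55, 0.88, 0.93, 0.67, 0.74, 0.81]
design_lf, cv = hourly_measures(loads)
```

The design load factor summarizes the upper tail of loading for capacity planning, while the CV captures trip-to-trip variability within the hour.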
Abstract:
Increasing numbers of preclinical and clinical studies are utilizing pDNA (plasmid DNA) as the vector. In addition, there has been a growing trend towards larger and larger doses of pDNA utilized in human trials. The growing demand on pDNA manufacture leads to pressure to make more in less time. A key intervention has been the use of monoliths as stationary phases in liquid chromatography. Monolithic stationary phases offer fast separation of pDNA owing to their large pore size, making pDNA in the size range from 100 nm to over 300 nm easily accessible. However, the convective transport mechanism of monoliths does not guarantee plasmid purity. The recovery of pure pDNA hinges on a proper balance in the properties of the adsorbent phase, the mobile phase and the feedstock. The effects of pH and ionic strength of binding buffer, temperature of feedstock, active group density and the pore size of the stationary phase were considered as avenues to improve the recovery and purity of pDNA using a methacrylate-based monolithic adsorbent and Escherichia coli DH5α-pUC19 clarified lysate as feedstock. pDNA recovery was found to be critically dependent on the pH and ionic strength of the mobile phase. Up to a maximum of approx. 92% recovery was obtained under optimum conditions of pH and ionic strength. Increasing the feedstock temperature to 80°C increased the purity of pDNA owing to the extra thermal stability associated with pDNA over contaminants such as proteins. Results from toxicological studies of the plasmid samples using an endotoxin standard (E. coli O55:B5 lipopolysaccharide) show that endotoxin level decreases with increasing salt concentration. These results demonstrate that large quantities of pure pDNA can be obtained with minimal extra effort simply by optimizing process parameters and conditions for pDNA purification.
Abstract:
Business Process Management describes a holistic management approach for the systematic design, modeling, execution, validation, monitoring and improvement of organizational business processes. Traditionally, most attention within this community has been given to control-flow aspects, i.e., the ordering and sequencing of business activities, often in isolation from the context in which these activities occur. In this paper, we propose an approach that allows executable process models to be integrated with Geographic Information Systems. This approach enables process models to take geospatial and other geographic aspects into account in an explicit manner, both during the modeling phase and the execution phase. We contribute a structured modeling methodology, based on the well-known Business Process Model and Notation standard, which is formalized by means of a mapping to executable Colored Petri nets. We illustrate the feasibility of our approach by means of a sustainability-focused case example of a process with important ecological concerns.
Abstract:
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action as a Gaussian mixture over robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
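A minimal sketch of the window-then-merge idea, with hypothetical 1-D features standing in for the paper's robust low-dimensional action features and single Gaussians standing in for full mixtures:

```python
import math
import random

random.seed(1)

def log_lik(x, mu, sigma):
    """Log-density of x under a 1-D Gaussian N(mu, sigma^2)."""
    return (-0.5 * ((x - mu) / sigma) ** 2
            - math.log(sigma) - 0.5 * math.log(2 * math.pi))

# Hypothetical per-frame features: two stitched actions, 'A' then 'B'.
frames = [random.gauss(0.0, 1.0) for _ in range(50)] + \
         [random.gauss(5.0, 1.0) for _ in range(50)]
models = {"A": (0.0, 1.0), "B": (5.0, 1.0)}  # per-action Gaussian (mu, sigma)

win, stride = 10, 5
votes = [dict() for _ in frames]  # per-frame votes from overlapping windows
for start in range(0, len(frames) - win + 1, stride):
    window = frames[start:start + win]
    # classify the window by total log-likelihood under each action model
    label = max(models, key=lambda a: sum(log_lik(x, *models[a]) for x in window))
    for i in range(start, start + win):
        votes[i][label] = votes[i].get(label, 0) + 1

# merge: majority vote over the windows covering each frame
labels = [max(v, key=v.get) for v in votes]
```

Joint segmentation falls out of the merge step: the per-frame label sequence switches from one action to the next near the true boundary, without dynamic programming.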
Abstract:
We describe the development and parameterization of a grid-based model of African savanna vegetation processes. The model was developed with the objective of exploring elephant effects on the diversity of savanna species and structure, and in this formulation concentrates on the relative cover of grass and woody plants, the vertical structure of the woody plant community, and the distribution of these over space. Grid cells are linked by seed dispersal and fire, and environmental variability is included in the form of stochastic rainfall and fire events. The model was parameterized from an extensive review of the African savanna literature; where values were available, they varied widely. The most plausible set of parameters produced long-term coexistence between woody plants and grass, with the tree-grass balance being more sensitive to changes in parameters influencing demographic processes and drought incidence and response, and less sensitive to fire regime. There was considerable diversity in the woody structure of savanna systems within the range of uncertainty in tree growth rate parameters. Thus, given the paucity of height growth data regarding woody plant species in southern African savannas, managers of natural areas should be cognizant of different tree species growth and damage response attributes when considering whether to act on perceived elephant threats to vegetation. © 2007 Springer Science+Business Media B.V.
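A toy grid model in the same spirit (stochastic rainfall and fire, seed dispersal from neighbours) can illustrate tree-grass coexistence; all rates here are invented for illustration and are not taken from the paper's parameterization:

```python
import random

random.seed(7)

SIZE, YEARS = 20, 200
RECRUIT, MORT = 0.15, 0.02        # tree establishment / background death rates
FIRE_P, FIRE_KILL = 0.10, 0.30    # chance of a fire year; tree kill prob in fire
tree = [[random.random() < 0.3 for _ in range(SIZE)] for _ in range(SIZE)]

def neighbour_tree_frac(grid, r, c):
    """Fraction of the four wrap-around neighbours occupied by trees."""
    n = [grid[(r - 1) % SIZE][c], grid[(r + 1) % SIZE][c],
         grid[r][(c - 1) % SIZE], grid[r][(c + 1) % SIZE]]
    return sum(n) / 4.0

for year in range(YEARS):
    rain = random.uniform(0.5, 1.5)     # stochastic rainfall multiplier
    fire = random.random() < FIRE_P     # stochastic fire year
    new = [[False] * SIZE for _ in range(SIZE)]
    for r in range(SIZE):
        for c in range(SIZE):
            if tree[r][c]:
                death = MORT + (FIRE_KILL if fire else 0.0)
                new[r][c] = random.random() > death
            else:
                # recruitment via seed dispersal from neighbouring trees
                p = RECRUIT * rain * neighbour_tree_frac(tree, r, c)
                new[r][c] = random.random() < p
    tree = new

tree_frac = sum(map(sum, tree)) / (SIZE * SIZE)  # grass cover = 1 - tree_frac
```

With these illustrative rates, recruitment roughly balances mortality at intermediate tree cover, so neither trees nor grass take over despite stochastic fire years.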
Abstract:
Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
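A rough illustration of ABC rejection for a diffusivity, using a toy random-walk leading-edge summary in place of the paper's discrete spreading model; the prior, tolerance and "observed" data are all hypothetical:

```python
import random

random.seed(0)

def leading_edge(D, walkers=100, steps=25):
    """Leading-edge position of a population of random walkers after
    `steps` unit time steps; per-step variance is 2*D."""
    sd = (2.0 * D) ** 0.5
    return max(sum(random.gauss(0.0, sd) for _ in range(steps))
               for _ in range(walkers))

# "Observed" summary statistic, generated from a known diffusivity.
D_true = 1.0
obs = leading_edge(D_true)

# ABC rejection: draw D from a uniform prior, keep draws whose simulated
# leading edge lands within eps of the observed one.
eps, accepted = 3.0, []
for _ in range(500):
    D = random.uniform(0.2, 3.0)
    if abs(leading_edge(D) - obs) < eps:
        accepted.append(D)

post_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior of D, so their spread quantifies the estimation uncertainty rather than reporting a single point estimate, which is the advantage of the Bayesian approach described above.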
Abstract:
Engaging with the emerging discourse on children, which recognises childhood as culturally specific and children as active participants in their environment, this paper questions the dominant discourse’s view of children as passive recipients of socialisation. This paper argues that the discourse on children’s agency is a more useful framework for understanding the experiences of former child soldiers, and that engaging meaningfully with this discourse will both improve life outcomes and reduce the risk of ongoing instability. This argument is made by an examination of the two discourses, tracing their development and arguing for the usefulness of the agency discourse. This provides for an examination of children’s agency in education and skills training programs and of their political involvement (or marginalisation) in three conflicts: Colombia, Sierra Leone and Uganda. Recognising children as agents and engaging with how they navigate their lived experiences after involvement in conflict testifies to children’s resilience and their desire for change. Challenging the dominant discourse through the agency discourse allows for the acknowledgement of former child soldiers as both social and political agents in their own right and of their potential for contributing to stable and lasting peace.
Abstract:
Passenger flow simulations are an important tool for designing and managing airports. This thesis examines different boarding strategies for the Boeing 777 and Airbus A380 aircraft in order to assess their current performance and to determine minimum boarding times. Optimal existing strategies are identified, and new, more efficient strategies are proposed. The methods presented reduce aircraft boarding times, which play an important role in reducing the overall aircraft Turn Time for an airline.
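A toy single-aisle model (one seat per row, a hypothetical fixed stowing time) is enough to see why boarding order matters: back-to-front boarding pipelines stowing, while front-to-back serializes it. This is an illustration only, not the thesis's simulation:

```python
import random

random.seed(3)

def board_time(order, stow=4):
    """Ticks until everyone is seated in a toy single-aisle model:
    one seat per row, one aisle cell per row, `stow` ticks to stow bags."""
    rows = len(order)
    aisle = [None] * rows      # aisle[i] holds the passenger standing at row i
    queue = list(order)        # passengers waiting at the door (row numbers)
    stowing = {}               # aisle index -> stowing ticks remaining
    seated, t = 0, 0
    while seated < rows:
        t += 1
        for i in range(rows - 1, -1, -1):   # front of cabin first, so moves cascade
            p = aisle[i]
            if p is None:
                continue
            if i in stowing:
                stowing[i] -= 1
                if stowing[i] == 0:
                    del stowing[i]
                    aisle[i] = None         # passenger sits and frees the aisle
                    seated += 1
            elif p == i:
                stowing[i] = stow           # reached own row, start stowing
            elif aisle[i + 1] is None:
                aisle[i + 1], aisle[i] = p, None   # walk one row forward
        if queue and aisle[0] is None:
            aisle[0] = queue.pop(0)         # next passenger enters the aisle
    return t

rows = 30
back_to_front = list(range(rows - 1, -1, -1))
front_to_back = list(range(rows))
rand_order = front_to_back[:]
random.shuffle(rand_order)
```

In this model back-to-front finishes fastest because every passenger stows simultaneously, random order sits in between, and front-to-back is worst because each stowing passenger blocks the entire aisle.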
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method incorporates an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC². The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series and the cumulative number of prion disease cases in mule deer.
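The pseudo-marginal principle behind such algorithms, i.e. that an unbiased likelihood estimate can stand in for an intractable exact likelihood inside Metropolis-Hastings, can be shown with a minimal sketch. The Poisson data, noise model and settings below are invented, and this is not the paper's alive SMC² algorithm:

```python
import math
import random

random.seed(42)

counts = [2, 3, 4, 1, 3, 5, 2, 3]   # toy low-count series (treated as iid here)

def log_lik(lam):
    """Exact iid Poisson log-likelihood (stand-in for an intractable one)."""
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def noisy_log_lik(lam, s=0.3):
    """Unbiased likelihood estimate: exact likelihood times mean-one
    log-normal noise, mimicking a particle-filter estimator."""
    return log_lik(lam) + random.gauss(-0.5 * s * s, s)

# Pseudo-marginal Metropolis-Hastings on lam with a Uniform(0, 10) prior.
lam, cur = 1.0, noisy_log_lik(1.0)
chain = []
for _ in range(5000):
    prop = lam + random.gauss(0.0, 0.5)
    if 0.0 < prop < 10.0:
        new = noisy_log_lik(prop)      # fresh estimate for the proposal only;
        if math.log(random.random()) < new - cur:   # current estimate is reused
            lam, cur = prop, new
    chain.append(lam)

post_mean = sum(chain[1000:]) / len(chain[1000:])
```

The key design point is that the noisy estimate for the current state is held fixed between iterations; refreshing it would break the exactness of the pseudo-marginal construction.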