971 results for Stochastic modelling
Abstract:
Many nations are highlighting the need for a renaissance in the mathematical sciences as essential to the well-being of all citizens (e.g., Australian Academy of Science, 2006; 2010; The National Academies, 2009). Indeed, the first recommendation of The National Academies’ Rising Above the Gathering Storm (2007) was to vastly improve K–12 science and mathematics education. The subsequent report, Rising Above the Gathering Storm Two Years Later (2009), highlighted again the need to target mathematics and science from the earliest years of schooling: “It takes years or decades to build the capability to have a society that depends on science and technology . . . You need to generate the scientists and engineers, starting in elementary and middle school” (p. 9). Such pleas reflect the rapidly changing nature of problem solving and reasoning needed in today’s world, beyond the classroom. As The National Academies (2009) reported, “Today the problems are more complex than they were in the 1950s, and more global. They’ll require a new educated workforce, one that is more open, collaborative, and cross-disciplinary” (p. 19). The implications for the problem solving experiences we implement in schools are far-reaching. In this chapter, I consider problem solving and modelling in the primary school, beginning with the need to rethink the experiences we provide in the early years. I argue for a greater awareness of the learning potential of young children and the need to provide stimulating learning environments. I then focus on data modelling as a powerful means of advancing children’s statistical reasoning abilities, which they increasingly need as they navigate their data-drenched world.
Abstract:
Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
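A minimal sketch of the ABC rejection idea described above, in Python. The forward model, prior bounds, tolerance and time points here are illustrative assumptions, not the paper's discrete model: the leading-edge displacement is taken to grow like sqrt(4Dt) with additive noise, and draws of D from a uniform prior are kept when the simulated edge positions fall within a fixed RMSE of the data.

```python
import math
import random

def simulate_edge(D, times, noise=0.05, rng=random):
    # toy forward model: leading-edge displacement ~ sqrt(4*D*t) plus
    # Gaussian measurement noise (a stand-in for the discrete cell model)
    return [math.sqrt(4.0 * D * t) + rng.gauss(0.0, noise) for t in times]

def abc_rejection(observed, times, n_draws=20000, eps=0.1, seed=1):
    # ABC rejection: draw D from a uniform prior, simulate, keep draws
    # whose simulated edge positions lie within eps (RMSE) of the data
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        D = rng.uniform(0.0, 2.0)                  # assumed prior bounds
        sim = simulate_edge(D, times, rng=rng)
        rmse = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, observed))
                         / len(times))
        if rmse < eps:
            accepted.append(D)
    return accepted

times = [6.0, 12.0, 24.0, 48.0]                         # hours (illustrative)
obs = simulate_edge(0.5, times, rng=random.Random(0))   # synthetic data, true D = 0.5
posterior = abc_rejection(obs, times)
mean_D = sum(posterior) / len(posterior)
```

A full treatment would also infer the proliferation rate λ from real leading-edge measurements; the same accept/reject loop then runs over (D, λ) pairs, and the spread of the accepted draws gives the coefficient of variation reported in the abstract.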
Abstract:
Stations on Bus Rapid Transit (BRT) lines ordinarily control line capacity because they act as bottlenecks. At stations with passing lanes, congestion may occur when buses maneuvering into and out of the platform stopping lane interfere with bus flow, or when a queue of buses forms upstream of the station, blocking inflow. We contend that, as bus inflow to the station area approaches capacity, queuing will become excessive in a manner similar to the operation of a minor movement at an unsignalized intersection. This analogy was used to treat BRT station operation and to analyze the relationship between station queuing and capacity. We used microscopic simulation to analyze the operating characteristics of the station under near-steady-state conditions through the output variables of capacity, degree of saturation and queuing. In the first of two stages, a mathematical model was developed for the potential capacity of all-stopping-bus operation with bus-to-bus interference, and the model was validated. In the second stage, a mathematical model was developed to estimate the relationship between average queue and degree of saturation, and calibrated for a specified range of controlled scenarios of dwell time mean and coefficient of variation.
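The analogy with a minor movement at an unsignalized intersection suggests the classic steady-state queueing relationship between average queue and degree of saturation. A sketch, assuming Poisson arrivals and exponential service (an M/M/1 abstraction of the station, not the paper's calibrated model):

```python
import random

def mm1_time_avg_queue(x, horizon=200000.0, seed=2):
    # discrete-event M/M/1 simulation: arrivals at rate x, service at rate 1,
    # so x plays the role of the degree of saturation; returns the
    # time-average number of buses in the system
    rng = random.Random(seed)
    t, n, area = 0.0, 0, 0.0
    next_arr = rng.expovariate(x)
    next_dep = float("inf")
    while t < horizon:
        t_next = min(next_arr, next_dep)
        area += n * (t_next - t)                 # accumulate queue-time area
        t = t_next
        if next_arr <= next_dep:                 # arrival event
            n += 1
            next_arr = t + rng.expovariate(x)
            if n == 1:
                next_dep = t + rng.expovariate(1.0)
        else:                                    # departure event
            n -= 1
            next_dep = t + rng.expovariate(1.0) if n > 0 else float("inf")
    return area / t

x = 0.7                          # degree of saturation
sim_avg = mm1_time_avg_queue(x)
theory = x / (1.0 - x)           # steady-state M/M/1 mean number in system
```

As x approaches 1 the average queue grows without bound, which is exactly the "excessive queuing near capacity" behaviour the abstract describes; the thesis's calibrated model replaces the exponential service assumption with measured dwell-time distributions.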
Abstract:
This study focuses on understanding why the range of experience with HIV infection is so diverse, especially as regards the latency period. The challenge is to determine what assumptions can be made about the nature of antigenic invasion and diversity that can be modelled, tested and argued plausibly. To investigate this, an agent-based approach is used to extract high-level behaviour, which cannot be described analytically, from the set of interaction rules at the cellular level. A prototype model encompasses local variation in baseline properties contributing to the individual disease experience and is embedded in a network which mimics the chain of lymphatic nodes. Dealing with massively multi-agent systems requires major computational effort. However, parallelisation methods are a natural consequence and advantage of the multi-agent approach, and these are implemented using the MPI library.
Abstract:
The field of epigenetics looks at changes in chromosomal structure that affect gene expression without altering the DNA sequence. A large-scale modelling project to better understand these mechanisms is gaining momentum. Early advances in genetics led to the all-genetic paradigm: phenotype (an organism's characteristics/behaviour) is determined by genotype (its genetic make-up). This was later amended and expressed by the well-known formula P = G + E, encompassing the notion that the visible characteristics of a living organism (the phenotype, P) are a combination of hereditary genetic factors (the genotype, G) and environmental factors (E). However, this formula fails to explain why, in diseases such as schizophrenia, we still observe differences between identical twins. Furthermore, the identification of environmental factors (such as smoking and air quality for lung cancer) is relatively rare. The formula also fails to explain cell differentiation from a single fertilized cell. In the wake of early work by Waddington, more recent results have emphasized that the expression of the genotype can be altered without any change in the DNA sequence. This phenomenon has been termed epigenetics. To form the chromosome, DNA strands wrap around nucleosomes, which are clusters of nine proteins (histones), as detailed in Figure 1. Epigenetic mechanisms involve inherited alterations in these two structures, e.g. through attachment of a functional group (methyl, acetyl or phosphate) to the amino acids. These 'stable alterations' arise during development and cell proliferation and persist through cell division. While the information within the genetic material is not changed, the instructions for its assembly and interpretation may be. Modelling this new paradigm, P = G + E + EpiG, is the object of our study.
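A toy illustration of the P = G + E + EpiG paradigm, with hypothetical additive weights: identical twins share G and (here, for simplicity) E, yet independent epigenetic terms still produce a non-zero within-pair phenotype difference — the twin observation that P = G + E alone cannot reproduce.

```python
import random
import statistics

def phenotype(g, e, epi, w=(1.0, 0.5, 0.8)):
    # toy additive version of P = G + E + EpiG; the weights w are
    # hypothetical, chosen only to make the epigenetic term visible
    return w[0] * g + w[1] * e + w[2] * epi

rng = random.Random(4)
diffs = []
for _ in range(5000):
    g = rng.gauss(0, 1)        # identical twins share the genotype...
    e = rng.gauss(0, 1)        # ...and, in this toy, the environment,
    epi1 = rng.gauss(0, 1)     # but carry independent epigenetic marks
    epi2 = rng.gauss(0, 1)
    diffs.append(phenotype(g, e, epi1) - phenotype(g, e, epi2))

# non-zero despite identical G and E: the EpiG term alone drives it
within_pair_sd = statistics.pstdev(diffs)
```

Under P = G + E the within-pair standard deviation would be exactly zero; here it is 0.8·√2 ≈ 1.13, set entirely by the (assumed) epigenetic weight.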
Abstract:
The three phases of the macroscopic evolution of the HIV infection are well known, but it is still difficult to understand how the cellular-level interactions come together to create this characteristic pattern and, in particular, why there are such differences in individual responses. An 'agent-based' approach is chosen as a means of inferring high-level behaviour from a small set of interaction rules at the cellular level. Here the emphasis is on cell mobility and viral mutations.
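A minimal agent-based sketch in this spirit: a grid of cells, a single infected founder, and two local rules — probabilistic infection of neighbours, and occasional mutation of the strain label on transmission. All rates are hypothetical; the point is that population-level spread and strain diversity emerge from cell-level rules rather than from any closed-form equation.

```python
import random

def step(grid, size, rng, p_infect=0.3, p_mutate=0.01):
    # one synchronous update: each infected site may infect its 4-neighbours
    # (toroidal grid); on transmission the strain id occasionally mutates
    new = dict(grid)
    for (x, y), strain in grid.items():
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % size, (y + dy) % size
            if (nx, ny) not in new and rng.random() < p_infect:
                s = strain + 1 if rng.random() < p_mutate else strain
                new[(nx, ny)] = s
    return new

rng = random.Random(3)
size = 30
grid = {(size // 2, size // 2): 0}   # one infected cell, founder strain 0
for _ in range(20):
    grid = step(grid, size, rng)

infected = len(grid)                 # emergent: epidemic size...
strains = len(set(grid.values()))    # ...and circulating strain diversity
```

Cell mobility, immune response and the lymph-node network of the prototype model would each add further rules to `step`; the structure of the simulation loop stays the same.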
Abstract:
This thesis investigated the complexity of busway operation with stopping and non-stopping buses, using field data and microscopic simulation modelling. The proposed approach yielded significant recommendations to transit authorities for achieving the most practicable system capacity on existing and new busways. The empirical equations developed in this research, together with the newly introduced analysis methods, provide practical tools for transit planners seeking optimal busway reliability.
Abstract:
A mathematical model is developed for the ripening of cheese. Such models may assist in predicting final cheese quality from the measured initial composition. The main constituent chemical reactions are described with ordinary differential equations. Numerical solutions to the model equations are found using Matlab. Unknown parameter values have been fitted using experimental data available in the literature, and the results of the numerical fitting are in good agreement with the data. Statistical analysis is performed on near-infrared data provided to the MISG; however, due to the inhomogeneity and limited nature of the data, few conclusions can be drawn from the analysis. A simple model of the potential changes in the acidity of cheese is also considered. The results from this model are consistent with cheese manufacturing knowledge, in that the pH of cheddar cheese does not change significantly during ripening.
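A sketch of the ODE approach in Python rather than Matlab, with a hypothetical three-species reaction chain (residual lactose → lactic acid → flavour compounds) and made-up first-order rate constants; forward Euler stands in for a proper solver.

```python
def ripen(days=180, dt=0.1):
    # toy first-order chain L -> A -> F with hypothetical rate constants
    # k1, k2 (per day); concentrations normalised so L + A + F = 1
    k1, k2 = 0.05, 0.01
    L, A, F = 1.0, 0.0, 0.0
    t = 0.0
    while t < days:
        dL = -k1 * L                 # lactose consumed
        dA = k1 * L - k2 * A         # lactic acid produced, then converted
        dF = k2 * A                  # flavour compounds accumulate
        L += dL * dt
        A += dA * dt
        F += dF * dt
        t += dt
    return L, A, F

L, A, F = ripen()   # state after 180 days of ripening
```

Because the chain is linear, mass is conserved exactly at each Euler step (dL + dA + dF = 0), which gives a cheap sanity check on the integration; the real model adds the coupled reactions and the pH sub-model described in the abstract.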
Abstract:
The purpose of this research is to assess the daylight performance of buildings with climate-responsive envelopes of complex geometry that integrate shading devices in the façade. To this end, two case studies are chosen for their complex geometries and integrated daylight devices. The effect of different parameters of the daylight devices is analysed through climate-based daylight metrics.
Abstract:
Electric walking draglines are physically large and powerful machines used in the mining industry. However, with the addition of suitable sensors and a controller, a dragline can be considered a numerically controlled machine, or robot, which can then perform parts of the operating cycle automatically. This paper presents an analysis of the electromechanical system, a necessary precursor to automatic control.
Abstract:
Passenger flow simulations are an important tool for designing and managing airports. This thesis examines different boarding strategies for the Boeing 777 and Airbus A380 aircraft in order to assess their current performance and to determine minimum boarding times. The best-performing existing strategies are identified, and new, more efficient strategies are proposed. The methods presented reduce aircraft boarding times, which plays an important role in reducing an airline's overall aircraft turn time.
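A toy single-aisle boarding model illustrating how strategy comparisons of this kind are made. Passengers occupy aisle cells, advance one cell per tick, and block the aisle for a fixed stowing time at their row; seat-shuffle interference between window, middle and aisle passengers is ignored, so the numbers are purely illustrative.

```python
import random

def boarding_time(order, stow=4):
    # toy single-aisle model: one passenger per aisle cell, one cell per
    # tick, and `stow` ticks blocking the aisle while luggage is stowed
    aisle = {}                       # aisle position -> [target_row, stow_left]
    queue = list(order)
    seated, t = 0, 0
    while seated < len(order):
        for pos in sorted(aisle, reverse=True):      # front of aisle first
            row, left = aisle[pos]
            if pos == row:                           # at own row: stow, sit
                if left == 1:
                    del aisle[pos]
                    seated += 1
                else:
                    aisle[pos][1] = left - 1
            elif pos + 1 not in aisle:               # walk on if cell free
                aisle[pos + 1] = aisle.pop(pos)
        if queue and 0 not in aisle:                 # next passenger enters
            aisle[0] = [queue.pop(0), stow]
        t += 1
    return t

rows = 30
rng = random.Random(5)
random_order = list(range(rows))
rng.shuffle(random_order)
back_to_front = sorted(range(rows), reverse=True)    # rear rows board first

t_random = boarding_time(random_order)
t_btf = boarding_time(back_to_front)
```

In this simplified model strict back-to-front (one passenger per row, rear first) pipelines perfectly and beats a random order; once seat interference is included, published studies find different orderings optimal, which is why full passenger-flow simulations of the kind this thesis uses are needed.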
Abstract:
The most important aspect of modelling a geological variable, such as metal grade, is the spatial correlation. Spatial correlation describes the relationship between realisations of a geological variable sampled at different locations. Any method for spatially modelling such a variable should be capable of accurately estimating the true spatial correlation. Conventional kriged models are the most commonly used in mining for estimating grade or other variables at unsampled locations, and these models use the variogram or covariance function to model the spatial correlations in the process of estimation. However, this usage assumes that the relationships between observations of the variable of interest at nearby locations are influenced only by the vector distance between the locations; that is, these models assume linear spatial correlation of grade. In reality, the relationship with an observation of grade at a nearby location may be influenced by both the distance between the locations and the values of the observations (i.e., non-linear spatial correlation, such as may exist for variables of interest in geometallurgy). Hence, using a kriged model to estimate the grade of unsampled locations when non-linear spatial correlation is present may lead to inaccurate estimation of the ore reserve. Copula-based methods, which are widely used in financial and actuarial modelling to quantify non-linear dependence structures, may offer a solution. They were introduced into geostatistical modelling by Bárdossy and Li (2008) to quantify the non-linear spatial dependence structure in a groundwater quality measurement network. Their copula-based spatial modelling is applied in this research to estimate the grade of 3D blocks. Furthermore, real-world mining data are used to validate the model, and the copula-based grade estimates are compared with the results of conventional ordinary and lognormal kriging to demonstrate the reliability of the method.
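One ingredient of copula-based geostatistics can be sketched compactly: the rank/normal-score transform that separates a variable's marginal distribution from its (possibly non-linear) spatial dependence structure. The grade values below are made up; `statistics.NormalDist` supplies the Gaussian quantile function.

```python
from statistics import NormalDist

def normal_scores(values):
    # rank-transform the data, then map each rank to a Gaussian quantile;
    # this is the marginal-free transform shared by Gaussian-copula and
    # normal-score approaches (ties ignored in this toy version)
    nd = NormalDist()
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        scores[i] = nd.inv_cdf((rank + 0.5) / n)   # plotting-position quantile
    return scores

grades = [0.8, 1.2, 0.5, 3.9, 0.9, 2.1, 0.7, 1.0]  # skewed toy grade samples
z = normal_scores(grades)
```

Estimation then proceeds in the transformed space, with back-transformation through the empirical quantiles; the full method of Bárdossy and Li additionally models the spatial copula itself, which is where the non-linear, value-dependent correlation enters.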
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method incorporates an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series and the cumulative number of prion disease cases in mule deer.
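A stripped-down sketch of the alive particle filter on a toy integer autoregressive model where the count is observed exactly (unlike the models in the paper): particles are resimulated until n of them match the observed count, and each stage contributes (n-1)/(tries-1) to the likelihood estimate. The model, parameter values and data are all illustrative.

```python
import math
import random

def transition(x, p, lam, rng):
    # toy INAR(1)-style step: binomial "survivors" plus Poisson(lam) arrivals
    survivors = sum(1 for _ in range(x) if rng.random() < p)
    L, k, prod = math.exp(-lam), 0, rng.random()
    while prod > L:                  # Knuth's Poisson sampler
        k += 1
        prod *= rng.random()
    return survivors + k

def alive_filter_loglik(obs, p, lam, n=200, seed=7, max_tries=200000):
    # alive particle filter: resimulate until n particles match the observed
    # count exactly; stage t contributes log((n-1)/(tries-1)), which keeps
    # the likelihood estimate unbiased
    rng = random.Random(seed)
    particles = [obs[0]] * n         # condition on the first observation
    loglik = 0.0
    for y in obs[1:]:
        alive, tries = [], 0
        while len(alive) < n:
            tries += 1
            if tries > max_tries:
                return float("-inf")   # parameters cannot plausibly reach y
            x = transition(rng.choice(particles), p, lam, rng)
            if x == y:
                alive.append(x)
        loglik += math.log((n - 1) / (tries - 1))
        particles = alive
    return loglik

obs = [3, 4, 3, 5, 4, 4, 3]                             # toy low-count series
ll_plausible = alive_filter_loglik(obs, p=0.6, lam=1.6)
ll_implausible = alive_filter_loglik(obs, p=0.1, lam=0.1)
```

Alive SMC^2 embeds estimates like this inside an SMC sampler over the parameters (p, lam), which is what makes the overall algorithm adaptive without between-model proposals.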
Abstract:
This thesis targets a challenging issue: enhancing users' experience of massive, overloaded web information. The novel pattern-based topic model proposed in this thesis generates high-quality multi-topic user interest models by incorporating statistical topic modelling and pattern mining. We have successfully applied the pattern-based topic model in both information filtering and information retrieval. The success of the proposed model in finding the information most relevant to users comes mainly from its precise semantic representations of documents and its accurate classification of topics at both the document and collection levels.
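The pattern-mining ingredient can be illustrated with a toy fragment: frequent co-occurring word pairs across the documents assigned to one topic, standing in for the patterns that enrich a topic's representation. The documents and support threshold are made up; the thesis's actual model couples this with statistical topic modelling.

```python
from collections import Counter
from itertools import combinations

def frequent_patterns(docs, min_support=2):
    # toy pattern-mining step: count word pairs that co-occur within
    # documents and keep those meeting a minimum support threshold
    counts = Counter()
    for doc in docs:
        words = sorted(set(doc.split()))      # unique words, canonical order
        counts.update(combinations(words, 2))
    return {pair: c for pair, c in counts.items() if c >= min_support}

topic_docs = [                                # documents assigned to one topic
    "bayesian inference model",
    "bayesian model selection",
    "model inference methods",
]
patterns = frequent_patterns(topic_docs)
```

In a pattern-based topic model, pairs like these (rather than single words) become the topic's representation, which is what gives the more precise semantic matching the abstract claims.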