994 results for 019900 OTHER MATHEMATICAL SCIENCES
Abstract:
We consider a model for thin film flow down the outside and inside of a vertical cylinder. Our focus is to study the effect that the curvature of the cylinder has on the gravity-driven instability of the advancing contact line, and to simulate the resulting fingering patterns that form due to this instability. The governing partial differential equation is fourth order with a nonlinear degenerate diffusion term that represents the stabilising effect of surface tension. We present numerical solutions obtained by implementing an efficient alternating direction implicit scheme. When compared to the problem of flow down a vertical plane, we find that increasing substrate curvature tends to increase the fingering instability for flow down the outside of the cylinder, whereas for flow down the inside of the cylinder, substrate curvature has the opposite effect. Further, we demonstrate the existence of nontrivial travelling wave solutions which describe fingering patterns that propagate down the inside of a cylinder at constant speed without changing form. These solutions are perfectly analogous to those found previously for thin film flow down an inclined plane.
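The abstract does not reproduce the governing equation; as a point of reference only, the classical non-dimensional thin-film equation for gravity-driven flow down a vertical substrate (the planar analogue of the cylinder problem, with h the film thickness and z the down-slope coordinate) takes the form

```latex
\frac{\partial h}{\partial t}
  + \frac{\partial}{\partial z}\left(h^{3}\right)
  + \nabla \cdot \left(h^{3}\, \nabla \nabla^{2} h\right) = 0,
```

where the fourth-order term \nabla \cdot (h^{3} \nabla \nabla^{2} h) is the nonlinear degenerate surface-tension term referred to in the abstract; the cylindrical problem modifies this form with curvature-dependent terms.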
Abstract:
This paper presents an approach for identifying the limit states of resilience in a water supply system when influenced by different types of pressure (disturbing) forces. Understanding of systemic resilience facilitates identification of the trigger points for early managerial action to avoid further loss of ability to provide satisfactory service availability when the ability to supply water is under pressure. The approach proposed here illustrates the usefulness of a surrogate measure of resilience depicted in a three-dimensional space encompassing independent pressure factors. This enables visualisation of the transition of the system state (resilience) from high- to low-resilience regions and acts as an early warning trigger for decision-making. The necessity of a surrogate measure arises as a means of linking resilience to the identified pressures, as resilience cannot be measured directly. The basis for identifying the resilience surrogate and exploring the interconnected relationships within the complete system is derived from a meta-system model consisting of three nested sub-systems representing the water catchment and reservoir; the treatment plant; and the distribution system and end-users. This approach can be used as a framework for assessing levels of resilience in different infrastructure systems by identifying a surrogate measure and its relationship to relevant pressures acting on the system.
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. Here the likelihood is estimated independently on each of the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the individual CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this approach over the standard one is demonstrated on simulated data from a stochastic volatility model.
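As a rough sketch of the averaging idea (not the paper's implementation, and using a toy latent-variable likelihood rather than the stochastic volatility model), independent unbiased estimates can be computed on separate CPUs with Python's standard multiprocessing module and then averaged:

```python
from multiprocessing import Pool

import numpy as np
from scipy.stats import norm

Y_OBS = 0.5  # toy observed data point (an assumption, for illustration only)

def estimate_likelihood(seed, n_samples=1000):
    """Unbiased Monte Carlo estimate of p(y) for a toy latent-variable
    model: x ~ N(0, 1), y | x ~ N(x, 1), so p(y) = E_x[ N(y; x, 1) ]."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    return norm.pdf(Y_OBS, loc=x, scale=1.0).mean()

def averaged_likelihood(n_workers=4):
    """Average independent unbiased estimates computed on separate CPUs:
    the average remains unbiased and its variance shrinks as 1/n_workers."""
    with Pool(n_workers) as pool:
        estimates = pool.map(estimate_likelihood, range(n_workers))
    return float(np.mean(estimates))

if __name__ == "__main__":
    print(averaged_likelihood())
```

Because each worker uses an independent seed, the estimates are independent, and the average inherits unbiasedness from the individual estimators.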
Abstract:
Invasion waves of cells play an important role in development, disease and repair. Standard discrete models of such processes typically involve simulating cell motility, cell proliferation and cell-to-cell crowding effects in a lattice-based framework. The continuum-limit description is often given by a reaction–diffusion equation that is related to the Fisher–Kolmogorov equation. One of the limitations of a standard lattice-based approach is that real cells move and proliferate in continuous space and are not restricted to a predefined lattice structure. We present a lattice-free model of cell motility and proliferation, with cell-to-cell crowding effects, and we use the model to replicate invasion wave-type behaviour. The continuum-limit description of the discrete model is a reaction–diffusion equation with a proliferation term that is different from that of lattice-based models. Comparing lattice-based and lattice-free simulations indicates that both models lead to invasion fronts that are similar at the leading edge, where the cell density is low. Conversely, the two models make different predictions in the high-density region of the domain, well behind the leading edge. We analyse the continuum-limit descriptions of the lattice-based and lattice-free models to show that both give rise to invasion wave-type solutions that move with the same speed but have very different shapes. We explore the significance of these differences by calibrating the parameters in the standard Fisher–Kolmogorov equation using data from the lattice-free model. We conclude that estimating parameters using this kind of standard procedure can produce misleading results.
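The Fisher–Kolmogorov equation referred to here is, in its standard one-dimensional non-dimensional form,

```latex
\frac{\partial u}{\partial t} = D \frac{\partial^{2} u}{\partial x^{2}} + \lambda u (1 - u),
```

where u is the cell density, D the diffusivity and \lambda the proliferation rate; for suitable initial data it admits travelling wave solutions with speed c = 2\sqrt{\lambda D}.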
Abstract:
We define a pair-correlation function that can be used to characterize spatiotemporal patterning in experimental images and snapshots from discrete simulations. Unlike previous pair-correlation functions, the functions developed here depend on the location and size of objects. The pair-correlation function can be used to indicate complete spatial randomness, aggregation or segregation over a range of length scales, and it quantifies spatial structures such as the shape, size and distribution of clusters. Comparing pair-correlation data for various experimental and simulation images illustrates the potential of these functions as summary statistics for calibrating discrete models of various physical processes.
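For contrast, here is a minimal sketch of the classical, purely distance-based pair-correlation signal for 2D point data, which (unlike the functions developed in the abstract above) ignores object size:

```python
import numpy as np

def pair_correlation(points, bin_edges, domain_area):
    """Distance-based pair-correlation signal for 2D point data.

    Bins all pairwise distances and normalises each bin by the count
    expected under complete spatial randomness (CSR), so values near 1
    indicate randomness, >1 aggregation and <1 segregation at that scale.
    Edge effects are ignored for simplicity.
    """
    points = np.asarray(points)
    n = len(points)
    # all pairwise distances, taking each unordered pair once (i < j)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists = d[np.triu_indices(n, k=1)]
    counts, _ = np.histogram(dists, bins=bin_edges)
    # expected pair count under CSR: pairs whose separation lands in each annulus
    r_lo, r_hi = bin_edges[:-1], bin_edges[1:]
    annulus_area = np.pi * (r_hi**2 - r_lo**2)
    density = n / domain_area
    expected = 0.5 * n * density * annulus_area
    return counts / expected
```

A size- and location-aware variant of the kind described would replace the raw point distances with distances between finite-sized objects.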
Abstract:
Cell trajectory data are often reported in the experimental cell biology literature to distinguish between different types of cell migration. Unfortunately, there is no accepted protocol for designing or interpreting such experiments, and this makes it difficult to quantitatively compare different published data sets and to understand how changes in experimental design influence our ability to interpret different experiments. Here, we use an individual-based mathematical model to simulate the key features of a cell trajectory experiment. This shows that our ability to correctly interpret trajectory data is extremely sensitive to the geometry and timing of the experiment, the degree of motility bias and the number of experimental replicates. We show that cell trajectory experiments produce data that are most reliable when the experiment is performed in a quasi-1D geometry with a large number of identically prepared experiments conducted over a relatively short time interval, rather than a few trajectories recorded over particularly long time intervals.
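A minimal sketch of the kind of individual-based trajectory simulation described, here a biased off-lattice random walk (step length, bias strength and step count are illustrative assumptions, not the paper's values):

```python
import numpy as np

def simulate_trajectory(n_steps=100, step=1.0, bias=0.2, rng=None):
    """Biased off-lattice random walk in 2D.

    Each step has a fixed length with a uniformly random heading, plus a
    drift of strength `bias` in the +x direction (bias=0 gives unbiased
    motion). Returns an (n_steps + 1, 2) array of positions.
    """
    rng = np.random.default_rng(rng)
    pos = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        move = step * np.array([np.cos(theta), np.sin(theta)])
        move[0] += bias * step  # directional bias along x
        pos[i + 1] = pos[i] + move
    return pos
```

Repeating such simulations over many identically prepared "experiments" and varying the recording interval is one way to probe the sensitivity to experimental design that the abstract reports.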
Abstract:
Information that is elicited from experts can be treated as 'data', and so can be analysed using a Bayesian statistical model to formulate a prior model. Typically, methods for encoding a single expert's knowledge have been parametric, constrained by the extent of an expert's knowledge and energy regarding a target parameter. Interestingly, these methods have often been deterministic, in that all elicited information is treated at 'face value', without error. Here we sought a parametric and statistical approach for encoding assessments from multiple experts. Our recent work proposed and demonstrated the use of a flexible hierarchical model for this purpose. In contrast to previous mathematical approaches like linear or geometric pooling, our new approach accounts for several sources of variation: elicitation error, encoding error and expert diversity. Of interest are the practical, mathematical and philosophical interpretations of this form of hierarchical pooling (which is both statistical and parametric), and how it fits within the subjective Bayesian paradigm. Case studies from a bioassay and project management (on PhDs) are used to illustrate the approach.
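A minimal two-level sketch of what hierarchical pooling of elicited values can look like (notation and structure are illustrative; the model in the abstract is richer):

```latex
y_{j} \mid \theta_{j} \sim N(\theta_{j}, \sigma_{\mathrm{elic}}^{2}), \qquad
\theta_{j} \mid \mu \sim N(\mu, \tau^{2}), \qquad
\mu \sim N(\mu_{0}, \sigma_{0}^{2}),
```

where y_j is expert j's elicited value, \sigma_{\mathrm{elic}}^{2} absorbs elicitation and encoding error, and \tau^{2} captures diversity between experts; the posterior for \mu then pools the experts' assessments, unlike deterministic linear or geometric pooling.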
Abstract:
Introduction: Calculating segmental (vertebral level-by-level) torso masses in Adolescent Idiopathic Scoliosis (AIS) patients allows the gravitational loading on the scoliotic spine during relaxed standing to be estimated. This study used supine CT scans of AIS patients to measure segmental torso masses and explored the joint moments in the coronal plane, particularly at the apex of a scoliotic major curve. Methods: Existing low-dose CT data from the Paediatric Spine Research Group were used to calculate vertebral level-by-level torso masses and joint moments occurring in the spine for a group of 20 female AIS patients with right-sided thoracic curves. The mean age was 15.0 ± 2.7 years and all curves were classified Lenke Type 1, with a mean Cobb angle of 52 ± 5.9°. Image processing software, ImageJ (v1.45, NIH, USA), was used to create reformatted coronal plane images, reconstruct vertebral level-by-level torso segments and subsequently measure the torso volume corresponding to each vertebral level. Segment mass was then determined by assuming a tissue density of 1.04 × 10³ kg/m³. Body segment masses for the head, neck and arms were taken from published anthropometric data (Winter 2009). Intervertebral joint moments in the coronal plane at each vertebral level were found from the positions of the centroids of the segment masses relative to the joint centres, together with the segmental body mass data. Results and Discussion: The magnitude of the torso masses from T1–L5 increased inferiorly, with a 150% increase in mean segmental torso mass from 0.6 kg at T1 to 1.5 kg at L5. The magnitudes of the calculated coronal plane joint moments during relaxed standing were typically 5–7 Nm at the apex of the curve, with the highest apex joint moment being 7 Nm. The CT scans were performed in the supine position, and curve magnitudes are known to be 7–10° smaller than those measured in standing due to the absence of gravitational loading along the spine. Hence, it can be expected that the moments produced by gravity in the standing individual will be greater than those calculated here.
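The underlying arithmetic is simple: segment mass is tissue density times CT-measured volume, and the coronal-plane moment at a joint sums each superior segment's weight times its lateral lever arm. A hedged sketch with purely illustrative numbers (not patient data):

```python
import numpy as np

TISSUE_DENSITY = 1.04e3  # kg/m^3, the density assumed in the study
G = 9.81                 # m/s^2

def segment_mass(volume_m3):
    """Segment mass from a CT-measured torso volume: m = rho * V."""
    return TISSUE_DENSITY * volume_m3

def coronal_joint_moment(masses_kg, lever_arms_m):
    """Coronal-plane moment at a joint: sum of m_i * g * d_i, where d_i is
    the lateral offset of segment i's centroid from the joint centre.
    Inputs below are illustrative values only."""
    return G * float(np.dot(masses_kg, lever_arms_m))

# e.g. three segments superior to an apical joint, offset toward the convexity
print(coronal_joint_moment([0.8, 0.9, 1.0], [0.03, 0.025, 0.02]))
```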
Abstract:
The count-min sketch is a useful data structure for recording and estimating the frequency of string occurrences, such as passwords, in sub-linear space with high accuracy. However, it cannot be used to draw conclusions on groups of strings that are similar, for example close in Hamming distance. This paper introduces a variant of the count-min sketch which allows for estimating counts within a specified Hamming distance of the queried string. This variant can be used to prevent users from choosing popular passwords, like the original sketch, but it also allows for a more efficient method of analysing password statistics.
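A minimal sketch of the idea: a standard count-min sketch plus a brute-force Hamming-ball query that sums point estimates over all strings within distance 1. The paper's variant achieves this more efficiently within the sketch itself; this is only an illustration of the query semantics:

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch: d hash rows of width w. Point queries
    return the minimum counter, an overestimate with high probability."""
    def __init__(self, width=1024, depth=4):
        self.w, self.d = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, s, row):
        digest = hashlib.sha256(f"{row}:{s}".encode()).hexdigest()
        return int(digest, 16) % self.w

    def add(self, s):
        for row in range(self.d):
            self.table[row][self._hash(s, row)] += 1

    def query(self, s):
        return min(self.table[row][self._hash(s, row)] for row in range(self.d))

def hamming_ball_count(cms, s, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Estimated total count of strings within Hamming distance 1 of s,
    by brute-force enumeration (cost grows as len(s) * |alphabet|)."""
    total = cms.query(s)
    for i, c in enumerate(s):
        for a in alphabet:
            if a != c:
                total += cms.query(s[:i] + a + s[i + 1:])
    return total
```

A password-screening service could use such a query to reject not only popular passwords but also near-variants of them.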
Abstract:
Purpose: Flat-detector, cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. Methods: The rich sources of prior information in IGRT are incorporated into a hidden Markov random field (MRF) model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk (OAR). The voxel labels are estimated using the iterated conditional modes (ICM) algorithm. Results: The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom (CIRS, Inc. model 062). The mean voxel-wise misclassification rate was 6.2%, with a Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. Conclusions: By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
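The generic hidden-MRF posterior that ICM locally maximises can be written in a standard Potts-style form (notation illustrative, not the paper's exact prior):

```latex
p(\mathbf{z} \mid \mathbf{y}) \;\propto\; \prod_{i} p(y_{i} \mid z_{i})\,
\exp\!\Big( \beta \sum_{i \sim j} \delta(z_{i}, z_{j}) \Big),
```

where y_i is the CBCT intensity at voxel i, z_i its tissue label, \beta a spatial smoothing parameter, and i \sim j ranges over neighbouring voxels; ICM sweeps the lattice, replacing each z_i with the label that maximises its full conditional given its neighbours.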
Abstract:
The first objective of this project is to develop new efficient numerical methods, and supporting error and convergence analysis, for solving fractional partial differential equations to study anomalous diffusion in biological tissue such as the human brain. The second objective is to develop a new efficient fractional-differential-based approach for texture enhancement in image processing. The results of the thesis highlight that the fractional-order analysis captures important features of nuclear magnetic resonance (NMR) relaxation and can be used to improve the quality of medical imaging.
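A representative example of the class of equations such methods target is the one-dimensional space-fractional diffusion equation (an illustrative form, not necessarily the thesis's):

```latex
\frac{\partial u}{\partial t} = K_{\alpha} \frac{\partial^{\alpha} u}{\partial |x|^{\alpha}},
\qquad 1 < \alpha \le 2,
```

which recovers classical diffusion when \alpha = 2; the fractional exponent \alpha is what allows anomalous, non-Gaussian spreading to be captured.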
Abstract:
This thesis developed and applied Bayesian models for the analysis of survival data. Gene expression measurements were considered as explanatory variables within the Bayesian survival model, which can be considered a new contribution to the analysis of such data. The censoring that is inherent in survival data has also been addressed in terms of its impact on the fitting of a finite mixture of Weibull distributions, with and without covariates. To investigate this, simulation studies were carried out under several censoring percentages. A censoring percentage as high as 80% is acceptable here, as the work involved high-dimensional data. Lastly, a Bayesian model averaging approach was developed to incorporate model uncertainty into the prediction of survival.
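In the standard parameterisation (notation mine, not necessarily the thesis's), a K-component Weibull mixture has density

```latex
f(t) = \sum_{k=1}^{K} w_{k}\, \frac{\gamma_{k}}{\lambda_{k}}
       \left(\frac{t}{\lambda_{k}}\right)^{\gamma_{k}-1}
       \exp\!\left\{-\left(\frac{t}{\lambda_{k}}\right)^{\gamma_{k}}\right\},
\qquad \sum_{k} w_{k} = 1,
```

and a right-censored observation at time t contributes the survival term S(t) = \sum_{k} w_{k} \exp\{-(t/\lambda_{k})^{\gamma_{k}}\} to the likelihood rather than f(t), which is why high censoring percentages affect the fit.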
Abstract:
Many efforts are made to assist with the advancement of developing economies through the activities of Non-Government Organizations (NGOs). There are many management and engagement issues associated with any successful NGO in a developed economy. When the organization is operating in a developing country, where a lack of infrastructure and education, distrust and corruption are part of the operating environment, the issues multiply. This case study discusses the structure of an NGO started in 2008 and describes the development and interaction of its two main components: a community-based NGO in a developing country and an NGO with a voluntary committee in a developed economy, Australia. Despertai Mozambique is the NGO set up in Beira, which provides the necessary support, training and funding to the local poor (defined by the UN as living below the internationally accepted poverty line of US$1.25 a day) to help them set up small, often informal, businesses, to enable them to become sustainable, and ultimately to help alleviate poverty. The partnering Australian organization, Awaken Mozambique, is responsible for providing the necessary intellectual and financial resources required by Despertai Mozambique to operate.
Abstract:
In this paper, a polynomial time algorithm is presented for solving the Eden problem for graph cellular automata. The algorithm is based on our neighborhood elimination operation, which removes local neighborhood configurations that cannot be used in a pre-image of a given configuration. This paper presents a detailed derivation of our algorithm from first principles, and a detailed complexity and accuracy analysis is also given. In the case of time complexity, it is shown that the average-case time complexity of the algorithm is Θ(n²), and the best and worst cases are Ω(n) and O(n³) respectively. This represents a vast improvement in the upper bound over current methods, without compromising average-case performance.
Abstract:
This study seeks to bring the discipline of exercise science into the discussion of Quantitative Skills (QS) in Science. The author's experiences of providing learning support to students and working with educators in the field are described, demonstrating the difficulty of encouraging students to address their skills deficit. A survey of students' perceptions of their own QS, and of the QS required for their course, demonstrates the difficulties faced by students who do not have the prescribed assumed knowledge for the course. Limited results from academics suggest that their perceptions of students' QS deficits are even more dire than those of the under-prepared students.