987 results for African Institute for Mathematical Sciences
Abstract:
Completing a PhD on time is a complex process, influenced by many interacting factors. In this paper we take a Bayesian Network approach to analyzing the factors perceived to be important in achieving this aim. Focusing on a single research group in Mathematical Sciences, we develop a conceptual model to describe the factors considered to be important to students and then quantify the network based on five individual perspectives: those of the students, a supervisor, and a university research students centre manager. The resultant network comprised 37 factors and 40 connections, with an overall probability of timely completion of between 0.6 and 0.8. Across all participants, the four factors considered to most directly influence timely completion were personal aspects, the research environment, the research project, and incoming skills.
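The network itself is not reproduced in the abstract, so the following is only a minimal illustrative sketch of how a small Bayesian-network fragment over the four direct parent factors named above could be queried for the probability of timely completion; all factor states and probabilities below are invented for demonstration and are not the paper's elicited values.

```python
# Illustrative only: a 4-parent Bayesian network fragment for "timely completion".
# All probabilities are made up; the paper's network has 37 factors and 40
# connections and is not reproduced here.
from itertools import product

# Assumed marginal probabilities that each direct parent factor is "favourable".
parents = {
    "personal_aspects": 0.7,
    "research_environment": 0.8,
    "research_project": 0.75,
    "incoming_skills": 0.65,
}

def p_completion_given(states):
    """Toy CPT: probability of timely completion given parent states (True = favourable)."""
    base = 0.2
    weight = 0.15  # each favourable parent adds 0.15 (illustrative)
    return min(1.0, base + weight * sum(states.values()))

# Marginalise over all parent configurations to get P(timely completion).
p_timely = 0.0
for combo in product([True, False], repeat=len(parents)):
    states = dict(zip(parents, combo))
    p_states = 1.0
    for name, favourable in states.items():
        p_states *= parents[name] if favourable else 1.0 - parents[name]
    p_timely += p_states * p_completion_given(states)

print(f"P(timely completion) = {p_timely:.2f}")
```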
Abstract:
In this paper we demonstrate passive vision-based localization in environments more than two orders of magnitude darker than the current benchmark, using a $100 webcam and a $500 camera. Our approach uses the camera’s maximum exposure duration and sensor gain to achieve appropriately exposed images even in unlit night-time environments, albeit with extreme levels of motion blur. Using the SeqSLAM algorithm, we first evaluate the effect of variable motion blur caused by simulated exposures of 132 ms to 10000 ms duration on localization performance. We then use actual long-exposure camera datasets to demonstrate day-night localization in two different environments. Finally, we perform a statistical analysis that compares the baseline performance of matching unprocessed greyscale images to using patch normalization and local neighbourhood normalization – the two key SeqSLAM components. Our results and analysis show for the first time why the SeqSLAM algorithm is effective, and demonstrate the potential for cheap camera-based localization systems that function across extreme perceptual change.
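The patch normalization step analysed above is simple enough to sketch. The numpy rendition below follows the usual description of that SeqSLAM component (each patch rescaled to zero mean and unit standard deviation); the patch size is an assumed parameter and this is not the authors' exact implementation.

```python
import numpy as np

def patch_normalise(image, patch_size=8, eps=1e-6):
    """Normalise each non-overlapping patch to zero mean and unit standard deviation.

    Generic rendition of SeqSLAM-style patch normalization; patch_size is an
    assumed value, not taken from the paper.
    """
    img = image.astype(np.float64)
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patch = img[i:i + patch_size, j:j + patch_size]
            out[i:i + patch_size, j:j + patch_size] = (
                (patch - patch.mean()) / (patch.std() + eps)
            )
    return out

# Example: normalise a random low-resolution greyscale frame.
frame = np.random.randint(0, 256, size=(64, 32), dtype=np.uint8)
normalised = patch_normalise(frame)
```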
Abstract:
A new optimal control model of the interactions between a growing tumour and the host immune system, along with an immunotherapy treatment strategy, is presented. The model is based on an ordinary differential equation model of interactions between the growing tumour and the natural killer cells, cytotoxic T lymphocytes and dendritic cells of the host immune system, extended through the addition of a control function representing the application of a dendritic cell treatment to the system. The numerical solution of this model, obtained from a multi-species Runge–Kutta forward-backward sweep scheme, is described. We investigate the effects of varying the maximum allowed amount of dendritic cell vaccine administered to the system and find that control of the tumour cell population is best effected via a high initial vaccine level, followed by reduced treatment and finally cessation of treatment. We also find that increasing the strength of the dendritic cell vaccine causes an increase in the number of natural killer cells and lymphocytes, which in turn reduces the growth of the tumour.
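The forward-backward sweep mentioned above is a standard iteration for such optimal control problems. The sketch below shows its generic structure for a scalar toy problem with invented dynamics and Euler steps in place of Runge–Kutta, for brevity; it is not the paper's tumour-immune model.

```python
import numpy as np

# Generic forward-backward sweep for a scalar optimal control problem:
#   minimise  int_0^T (x^2 + c*u^2) dt,  subject to  x' = -x + u,  x(0) = x0.
# Illustrative only; the paper's model couples several cell populations.

T, N, x0, c = 1.0, 200, 1.0, 0.5
dt = T / N
u = np.zeros(N + 1)                      # initial guess for the control

for _ in range(50):                      # sweep until (approximately) converged
    # Forward sweep: integrate the state equation with the current control.
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (-x[k] + u[k])

    # Backward sweep: integrate the adjoint equation lambda' = -2x + lambda,
    # with the transversality condition lambda(T) = 0.
    lam = np.zeros(N + 1)
    for k in range(N, 0, -1):
        lam[k - 1] = lam[k] - dt * (-2.0 * x[k] + lam[k])

    # Update the control from the optimality condition 2*c*u + lambda = 0.
    u_new = -lam / (2.0 * c)
    u = 0.5 * u + 0.5 * u_new            # relaxation to aid convergence

print("approximate optimal control at t = 0:", u[0])
```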
Abstract:
A dual-scale model of the torrefaction of wood was developed and used to study industrial configurations. At the local scale, the computational code solves the coupled heat and mass transfer and the thermal degradation mechanisms of the wood components. At the global scale, the two-way coupling between the boards and the stack channels is treated as an integral component of the process. This model is used to investigate the effect of the stack configuration on the heat treatment of the boards. The simulations highlight that the exothermic reactions occurring in each single board can accumulate along the stack. This phenomenon may result in a dramatic heterogeneity of the process and poses a serious risk of thermal runaway, which is often observed in industrial plants. The model is used to explain how the risk of thermal runaway can be lowered by increasing the airflow velocity or the sticker thickness, or by reversing the gas flow.
Abstract:
The work presented in this thesis investigates the mathematical modelling of charge transport in electrolyte solutions, within the nanoporous structures of electrochemical devices. We compare two approaches found in the literature, by developing one-dimensional transport models based on the Nernst-Planck and Maxwell-Stefan equations. The development of the Nernst-Planck equations relies on the assumption that the solution is infinitely dilute. However, this is typically not the case for the electrolyte solutions found within electrochemical devices. Furthermore, ionic concentrations much higher than the bulk concentrations can be obtained near the electrode/electrolyte interfaces due to the development of an electric double layer. Hence, multicomponent interactions which are neglected by the Nernst-Planck equations may become important. The Maxwell-Stefan equations account for these multicomponent interactions, and thus they should provide a more accurate representation of transport in electrolyte solutions. To allow for the effects of the electric double layer in both the Nernst-Planck and Maxwell-Stefan equations, we do not assume local electroneutrality in the solution. Instead, we model the electrostatic potential as a continuously varying function, by way of Poisson’s equation. Importantly, we show that for a ternary electrolyte solution at high interfacial concentrations, the Maxwell-Stefan equations predict behaviour that is not recovered from the Nernst-Planck equations. The main difficulty in the application of the Maxwell-Stefan equations to charge transport in electrolyte solutions is knowledge of the transport parameters. In this work, we apply molecular dynamics simulations to obtain the required diffusivities, and thus we are able to incorporate microscopic behaviour into a continuum-scale model. This is important given the small length scales we are concerned with, while we still retain the computational efficiency of continuum modelling. This approach provides an avenue by which the microscopic behaviour may ultimately be incorporated into a full device-scale model. The one-dimensional Maxwell-Stefan model is extended to two dimensions, representing an important first step towards a fully-coupled interfacial charge transport model for electrochemical devices. It allows us to begin investigating ambipolar diffusion effects, where the motion of the ions in the electrolyte is affected by the transport of electrons in the electrode. As we do not consider modelling in the solid phase in this work, this is simulated by applying a time-varying potential to one interface of our two-dimensional computational domain, thus allowing a flow field to develop in the electrolyte. Our model facilitates the observation of the transport of ions near the electrode/electrolyte interface. For the simulations considered in this work, we show that while there is some motion in the direction parallel to the interface, the interfacial coupling is not sufficient for the ions in solution to be "dragged" along the interface for long distances.
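For reference, the dilute-solution Nernst-Planck flux, the Poisson equation used in place of local electroneutrality, and one common form of the Maxwell-Stefan relations are given below in generic textbook notation; these are not copied from the thesis, and the symbols (diffusivities D_i, charges z_i, potential phi, mole fractions x_i) follow standard conventions.

```latex
% Nernst-Planck flux for species i (dilute-solution limit)
\mathbf{N}_i = -D_i \left( \nabla c_i + \frac{z_i F}{R T}\, c_i \nabla \phi \right),
\qquad
% Poisson's equation for the electrostatic potential
\nabla^2 \phi = -\frac{F}{\epsilon} \sum_i z_i c_i ,

% Maxwell-Stefan relations, which retain the multicomponent interactions
-\frac{x_i}{R T}\,\nabla \tilde{\mu}_i
  = \sum_{j \neq i} \frac{x_j \mathbf{N}_i - x_i \mathbf{N}_j}{c_T\, \mathcal{D}_{ij}} .
```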
Abstract:
A novel in-cylinder pressure method for determining ignition delay has been proposed and demonstrated. The method uses a new Bayesian statistical model to resolve the start of combustion, defined as the point at which the band-pass in-cylinder pressure deviates from background noise and the combustion resonance begins. It is also demonstrated that the method remains accurate in situations where noise is present. The start of combustion can be resolved for each cycle without the need for ad hoc methods such as cycle averaging. Therefore, this method allows for analysis of consecutive cycles and inter-cycle variability studies. Ignition delay estimates obtained by this method and by the net rate of heat release have been shown to be in good agreement. However, the use of combustion resonance to determine the start of combustion is preferable over the net rate of heat release method because it does not rely on knowledge of heat losses and still functions accurately in the presence of noise. Results for a six-cylinder turbo-charged common-rail diesel engine run with neat diesel fuel at full, three-quarter and half load have been presented. Under these conditions the ignition delay was shown to increase as the load was decreased, with a significant increase in ignition delay at half load when compared with three-quarter and full loads.
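The idea of locating the point where a band-pass signal departs from background noise can be illustrated with a generic change-in-variance comparison over candidate start indices. The toy below is only a sketch of that idea on simulated data; it is not the Bayesian model developed in the paper, and all signal parameters are invented.

```python
import numpy as np

# Toy change-point illustration: find where a band-pass signal departs from
# background noise by comparing Gaussian log-likelihoods with a variance change
# at each candidate index. Generic sketch only, on simulated data.
rng = np.random.default_rng(0)
n, k_true = 2000, 1200
signal = rng.normal(0.0, 1.0, n)
signal[k_true:] += rng.normal(0.0, 4.0, n - k_true)   # "resonance" region

def gaussian_loglik(x):
    """Gaussian log-likelihood at the maximum-likelihood variance."""
    var = x.var() + 1e-12
    return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1.0)

candidates = range(100, n - 100)
loglik = [gaussian_loglik(signal[:k]) + gaussian_loglik(signal[k:]) for k in candidates]
k_hat = candidates[int(np.argmax(loglik))]
print("estimated start-of-combustion index:", k_hat)
```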
Abstract:
A user’s query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques ignore information about the dependencies that exist between words in natural language. However, more recent approaches have demonstrated that, by explicitly modeling associations between terms, significant improvements in retrieval effectiveness can be achieved over those that ignore these dependencies. State-of-the-art dependency-based approaches have been shown to primarily model syntagmatic associations. Syntagmatic associations indicate that two terms co-occur more often than would be expected by chance. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process will improve retrieval effectiveness. This article develops and evaluates a new query expansion technique that is based on a formal, corpus-based model of word meaning that models syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval, it demonstrates significant improvements in retrieval effectiveness over a strong baseline system based on a commercial search engine.
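A syntagmatic association of the kind described above can be illustrated with a simple pointwise mutual information score over co-occurrence counts. The toy corpus and the document-level co-occurrence definition below are invented purely for illustration; this is not the corpus-based model of word meaning developed in the article.

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus, invented for illustration; associations are computed at the
# document level, which is only one of many possible co-occurrence definitions.
docs = [
    "query expansion improves retrieval effectiveness",
    "term dependencies improve retrieval effectiveness",
    "query terms describe an information need",
]
tokens = [set(d.split()) for d in docs]

word_df = Counter(w for doc in tokens for w in doc)          # document frequencies
pair_df = Counter(frozenset(p) for doc in tokens for p in combinations(sorted(doc), 2))
n_docs = len(tokens)

def pmi(w1, w2):
    """Pointwise mutual information: do w1 and w2 co-occur more often than by chance?"""
    p_pair = pair_df[frozenset((w1, w2))] / n_docs
    p1, p2 = word_df[w1] / n_docs, word_df[w2] / n_docs
    return math.log(p_pair / (p1 * p2)) if p_pair > 0 else float("-inf")

print(pmi("retrieval", "effectiveness"))   # positive: a syntagmatic association
print(pmi("query", "effectiveness"))       # weaker (negative) in this toy corpus
```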
Abstract:
This document provides data for the case study presented in our recent earthwork planning papers. Some results are also provided in a graphical format using Excel.
Abstract:
Travelling wave phenomena are observed in many biological applications. Mathematical theory of standard reaction-diffusion problems shows that simple partial differential equations exhibit travelling wave solutions with constant wavespeed and such models are used to describe, for example, waves of chemical concentrations, electrical signals, cell migration, waves of epidemics and population dynamics. However, as in the study of cell motion in complex spatial geometries, experimental data are often not consistent with constant wavespeed. Non-local spatial models have successfully been used to model anomalous diffusion and spatial heterogeneity in different physical contexts. In this paper, we develop a fractional model based on the Fisher-Kolmogoroff equation and analyse it for its wavespeed properties, attempting to relate the numerical results obtained from our simulations to experimental data describing enteric neural crest-derived cells migrating along the intact gut of mouse embryos. The model proposed essentially combines fractional and standard diffusion in different regions of the spatial domain and qualitatively reproduces the behaviour of neural crest-derived cells observed in the caecum and the hindgut of mouse embryos during in vivo experiments.
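In generic notation, the classical Fisher-Kolmogoroff equation and a space-fractional generalisation of the kind described above can be written as follows; the exact regional coupling of fractional and standard diffusion used in the paper is not reproduced here, and the symbols (diffusivity D, growth rate r, fractional order alpha) are standard choices rather than the paper's.

```latex
% Classical Fisher-Kolmogoroff equation
\frac{\partial u}{\partial t} = D \frac{\partial^2 u}{\partial x^2} + r\, u (1 - u),
\qquad
% Space-fractional generalisation, 1 < \alpha \le 2 (\alpha = 2 recovers the classical case)
\frac{\partial u}{\partial t} = D_\alpha \frac{\partial^\alpha u}{\partial |x|^\alpha} + r\, u (1 - u).
```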
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables. Thus pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely. This can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. Here the likelihood is estimated independently on each of the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the multiple CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this idea over the standard approach is demonstrated on simulated data from a stochastic volatility model.
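The averaging idea is easy to sketch with the standard library's multiprocessing pool. The toy importance-sampling likelihood estimator below is illustrative (the paper's example is a stochastic volatility model, and the two parallel technologies it compares are not named in the abstract); the model, data value and particle count are all assumptions.

```python
import numpy as np
from multiprocessing import Pool

def estimate_likelihood(seed, theta=0.5, n_particles=1000):
    """Toy unbiased importance-sampling estimate of a likelihood; illustrative only."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, 1.0, n_particles)          # latent draws from the prior
    weights = np.exp(-0.5 * (1.2 - x) ** 2) / np.sqrt(2 * np.pi)  # p(y = 1.2 | x)
    return weights.mean()                            # unbiased estimate of p(y | theta)

if __name__ == "__main__":
    n_cpus = 4
    with Pool(n_cpus) as pool:
        estimates = pool.map(estimate_likelihood, range(n_cpus))
    # Averaging independent unbiased estimates keeps the estimator unbiased
    # while reducing its variance by a factor of roughly n_cpus.
    print("averaged likelihood estimate:", float(np.mean(estimates)))
```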
Abstract:
Information that is elicited from experts can be treated as 'data', and so can be analysed using a Bayesian statistical model to formulate a prior model. Typically, methods for encoding a single expert's knowledge have been parametric, constrained by the extent of an expert's knowledge and energy regarding a target parameter. Interestingly, these methods have often been deterministic, in that all elicited information is treated at 'face value', without error. Here we sought a parametric and statistical approach for encoding assessments from multiple experts. Our recent work proposed and demonstrated the use of a flexible hierarchical model for this purpose. In contrast to previous mathematical approaches such as linear or geometric pooling, our new approach accounts for several sources of variation: elicitation error, encoding error and expert diversity. Of interest are the practical, mathematical and philosophical interpretations of this form of hierarchical pooling (which is both statistical and parametric), and how it fits within the subjective Bayesian paradigm. Case studies from a bioassay and project management (on PhDs) are used to illustrate the approach.
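For contrast with the hierarchical approach, the two mathematical pooling rules named above combine the experts' distributions p_1, ..., p_n using weights w_i in the standard forms below (generic notation, not taken from the paper):

```latex
% Linear opinion pool
p(\theta) = \sum_{i=1}^{n} w_i\, p_i(\theta),
\qquad
% Geometric (logarithmic) opinion pool
p(\theta) \propto \prod_{i=1}^{n} p_i(\theta)^{\,w_i},
\qquad
\sum_{i=1}^{n} w_i = 1 .
```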
Abstract:
Constructing train schedules is vital in railways. This complex and time-consuming task is, however, made more difficult by additional requirements to make train schedules robust to delays and other disruptions. For a timetable to be regarded as robust, it should be insensitive to delays of a specified level, and its performance with respect to a given metric should be within given tolerances. In other words, the effect of delays should be identifiable and should be shown to be minimal. To this end, a sensitivity analysis is proposed that identifies affected operations. More specifically, a sensitivity analysis is proposed for determining which operation delays cause each operation to be affected. The information provided by this analysis gives another measure of timetable robustness and also provides control information that can be used when delays occur in practice. Several algorithms are proposed to identify this information, and they utilise a disjunctive graph model of train operations. Upon completion, the sets of affected operations can also be used to define the impact of all delays without further disjunctive graph evaluations.
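The underlying computation, propagating a delay through a precedence graph to find the operations whose start times are pushed later, can be sketched generically. The graph, buffer (slack) values and operation names below are invented, and the paper's actual disjunctive-graph algorithms are not reproduced.

```python
from collections import defaultdict, deque

# Illustrative precedence graph of train operations: edge (a, b) means b cannot
# start until a finishes; slack[(a, b)] is the buffer time on that arc.
# All values are invented for demonstration.
edges = [("op1", "op2"), ("op1", "op3"), ("op2", "op4"), ("op3", "op4")]
slack = {("op1", "op2"): 2.0, ("op1", "op3"): 0.0, ("op2", "op4"): 1.0, ("op3", "op4"): 0.5}

succ = defaultdict(list)
for a, b in edges:
    succ[a].append(b)

def affected_operations(delayed_op, delay):
    """Return the operations whose start is pushed later by `delay` at `delayed_op`."""
    pushed = {delayed_op: delay}
    queue = deque([delayed_op])
    while queue:
        a = queue.popleft()
        for b in succ[a]:
            carried = max(0.0, pushed[a] - slack[(a, b)])   # buffer absorbs part of the delay
            if carried > pushed.get(b, 0.0):
                pushed[b] = carried
                queue.append(b)
    return {op: d for op, d in pushed.items() if d > 0.0}

print(affected_operations("op1", 1.5))
# e.g. {'op1': 1.5, 'op3': 1.5, 'op4': 1.0} -- op2's buffer absorbs the delay on that arc
```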
Abstract:
The first objective of this project is to develop new efficient numerical methods, and supporting error and convergence analysis, for solving fractional partial differential equations to study anomalous diffusion in biological tissue such as the human brain. The second objective is to develop a new efficient fractional differential-based approach for texture enhancement in image processing. The results of the thesis highlight that the fractional-order analysis captures important features of nuclear magnetic resonance (NMR) relaxation and can be used to improve the quality of medical imaging.
Abstract:
This thesis explored the development of statistical methods to support the monitoring and improvement of the quality of treatment delivered to patients undergoing coronary angioplasty procedures. To achieve this goal, a suite of outcome measures was identified to characterise performance of the service, statistical tools were developed to monitor the various indicators, and measures to strengthen governance processes were implemented and validated. Although this work focused on pursuing these aims in the context of an angioplasty service located at a single clinical site, development of the tools and techniques was undertaken mindful of their potential application to other clinical specialties and a wider, potentially national, scope.
Abstract:
This thesis developed and applied Bayesian models for the analysis of survival data. Gene expression was considered as an explanatory variable within the Bayesian survival model, which can be considered a new contribution to the analysis of such data. The censoring factor that is inherent in survival data has also been addressed in terms of its impact on the fitting of a finite mixture of Weibull distributions, with and without covariates. To investigate this, simulation studies were carried out under several censoring percentages. A censoring percentage as high as 80% is acceptable here, as the work involved high-dimensional data. Lastly, a Bayesian model averaging approach was developed to incorporate model uncertainty into the prediction of survival.
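A finite mixture of Weibull distributions with right censoring, of the kind fitted above, has the standard density, survival function and likelihood contributions below (generic notation; covariates such as gene expression would typically enter through the scale parameters, and delta_i = 1 for an observed time, 0 for a right-censored one):

```latex
% K-component Weibull mixture density and survival function
f(t) = \sum_{k=1}^{K} \pi_k\, \frac{\beta_k}{\lambda_k}
       \left(\frac{t}{\lambda_k}\right)^{\beta_k - 1}
       \exp\!\left[-\left(\frac{t}{\lambda_k}\right)^{\beta_k}\right],
\qquad
S(t) = \sum_{k=1}^{K} \pi_k \exp\!\left[-\left(\frac{t}{\lambda_k}\right)^{\beta_k}\right],

% Likelihood with right censoring
L = \prod_{i=1}^{n} f(t_i)^{\delta_i}\, S(t_i)^{1 - \delta_i}.
```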