Abstract:
One of the next great challenges of cell biology is the determination of the enormous number of protein structures encoded in genomes. In recent years, electron cryo-microscopy and high-resolution single-particle analysis have advanced to the point where they now provide a methodology for high-resolution structure determination. Using this approach, images of randomly oriented single particles are aligned computationally to reconstruct 3-D structures of proteins and even whole viruses. One of the limiting factors in obtaining high-resolution reconstructions is the acquisition of a large enough representative dataset ($>100,000$ particles). Traditionally, particles have been picked manually, which is an extremely labour-intensive process. The problem is made especially difficult by the low signal-to-noise ratio of the images. This paper describes the development of automatic particle picking software, which has been tested with both negatively stained and cryo-electron micrographs. The algorithm has been shown to be capable of selecting most of the particles, with few false positives. Further work will involve extending the software to detect differently shaped and oriented particles.
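The abstract does not specify the picking algorithm, but a common baseline for the task it describes is template matching by normalised cross-correlation followed by peak detection. A minimal sketch using scikit-image, in which the function choices, threshold and spacing parameters are illustrative assumptions rather than the authors' method:

```python
# Hypothetical baseline particle picker: normalised cross-correlation
# template matching. This is NOT the paper's algorithm, just a common
# starting point for the task the abstract describes.
import numpy as np
from skimage.feature import match_template, peak_local_max

def pick_particles(micrograph, template, threshold=0.35, min_dist=20):
    """Return (row, col) coordinates of candidate particles.

    micrograph : 2-D array, the (noisy) electron micrograph
    template   : 2-D array, a reference image of one particle
    threshold  : minimum correlation score (tuning assumption)
    min_dist   : minimum separation between picks, in pixels
    """
    # Correlation map; pad_input=True keeps output the same size as input.
    score = match_template(micrograph, template, pad_input=True)
    # Local maxima above the score threshold become candidate picks.
    return peak_local_max(score, min_distance=min_dist,
                          threshold_abs=threshold)
```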
Abstract:
This paper addresses the development of an ingenious decision support system (iDSS), based on survey instruments and the statistical identification of significant variables to be used in the iDSS. A survey of pregnant women was undertaken, and a factorial experimental design was chosen to determine the sample size. Variables showing good reliability under any one of the statistical techniques (chi-square, Cronbach’s α and classification tree) were incorporated into the iDSS. The system was implemented with Visual Basic as the front end and Microsoft SQL Server as the back end. Outcomes of the iDSS include advice to pregnant women on symptoms, diet and exercise.
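Of the reliability measures mentioned, Cronbach’s α has a simple closed form, $\alpha = \frac{k}{k-1}\bigl(1 - \sum_i \sigma^2_i / \sigma^2_{\mathrm{total}}\bigr)$ for $k$ items. A minimal sketch of the computation (the example responses are illustrative, not the survey data):

```python
# Minimal Cronbach's alpha, one of the reliability checks the abstract
# mentions. Rows are respondents, columns are survey items.
import numpy as np

def cronbach_alpha(scores):
    """scores : 2-D array of shape (n_respondents, n_items)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative data only: 5 respondents answering 4 Likert-scale items.
answers = [[4, 5, 4, 4],
           [3, 3, 2, 3],
           [5, 5, 5, 4],
           [2, 2, 3, 2],
           [4, 4, 4, 5]]
print(round(cronbach_alpha(answers), 3))
```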
Abstract:
OBJECTIVE: To determine the point at which differences in clinical assessment scores on physical ability, pain and overall condition are sufficiently large to correspond to a subjective perception of a meaningful difference from the perspective of the patient. METHODS: Forty patients with a diagnosis of rheumatoid arthritis participated in an evening of clinical assessment and one-on-one conversations with each other regarding their arthritic condition. The assessments included tender and swollen joint counts, clinician and patient global assessments, participant assessment of pain and the Health Assessment Questionnaire (HAQ) on physical ability. After each conversation, participants rated themselves relative to their conversational partner on physical ability, pain and overall condition. These subjective comparative ratings were compared with the differences in the individual clinical assessments. RESULTS: In total there were 120 conversations. Generally, participants judged themselves as less disabled than others. They rated themselves as "somewhat better" than their conversation partner when they had a (mean) 7% better score on the HAQ, 6% less pain, and 9% better global assessment. In contrast, they rated themselves as "somewhat worse" when they had a (mean) 16% worse score on the HAQ, 16% more pain, and 29% worse global assessment. CONCLUSIONS: Patients view clinically important differences in an asymmetric manner. These results can provide guidance in interpreting assessment scores and in planning clinical trials.
Abstract:
The three-component reaction-diffusion system introduced in [C. P. Schenk et al., Phys. Rev. Lett., 78 (1997), pp. 3781–3784] has become a paradigm model in pattern formation. It exhibits a rich variety of dynamics of fronts, pulses, and spots. The front and pulse interactions range in type from weak, in which the localized structures interact only through their exponentially small tails, to strong interactions, in which they annihilate or collide and in which all components are far from equilibrium in the domains between the localized structures. Intermediate to these two extremes sits the semistrong interaction regime, in which the activator component of the front is near equilibrium in the intervals between adjacent fronts but both inhibitor components are far from equilibrium there, and hence their concentration profiles drive the front evolution. In this paper, we focus on dynamically evolving N-front solutions in the semistrong regime. The primary result is the use of a renormalization group method to rigorously derive the system of N coupled ODEs that governs the positions of the fronts. The operators associated with the linearization about the N-front solutions have N small eigenvalues, and the N-front solutions may be decomposed into a component in the space spanned by the associated eigenfunctions and a component projected onto the complement of this space. This decomposition is carried out iteratively at a sequence of times. The former projections yield the ODEs for the front positions, while the latter projections are associated with remainders that we show stay small in a suitable norm during each iteration of the renormalization group method. Our results also help extend the application of the renormalization group method from the weak interaction regime for which it was initially developed to the semistrong interaction regime. The second set of results that we present is a detailed analysis of this system of ODEs, providing a classification of the possible front interactions in the cases of $N=1,2,3,4$, as well as how front solutions interact with the stationary pulse solutions studied earlier in [A. Doelman, P. van Heijster, and T. J. Kaper, J. Dynam. Differential Equations, 21 (2009), pp. 73–115; P. van Heijster, A. Doelman, and T. J. Kaper, Phys. D, 237 (2008), pp. 3335–3368]. Moreover, we present some results on the general case of N-front interactions.
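For reference, the three-component model takes the following form in the scaling used in the papers cited (the precise nondimensionalisation varies between those papers, so this should be read as a representative form rather than the exact system analysed):

$$
\begin{aligned}
U_t &= U_{xx} + U - U^3 - \varepsilon(\alpha V + \beta W + \gamma),\\
\tau V_t &= V_{xx} + U - V,\\
\theta W_t &= D^2 W_{xx} + U - W,
\end{aligned}
$$

with $0 < \varepsilon \ll 1$ and $D > 1$, where $U$ is the activator and $V$, $W$ are the two inhibitors whose far-from-equilibrium profiles drive the semistrong front evolution.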
Abstract:
In this article, we analyze the three-component reaction-diffusion system originally developed by Schenk et al. (PRL 78:3781–3784, 1997). The system consists of bistable activator-inhibitor equations with an additional inhibitor that diffuses more rapidly than the standard inhibitor (or recovery variable). It has been used by several authors as a prototype three-component system that generates rich pulse dynamics and interactions, and this richness is the main motivation for the analysis we present. We demonstrate the existence of stationary one-pulse and two-pulse solutions, and travelling one-pulse solutions, on the real line, and we determine the parameter regimes in which they exist. Also, for one-pulse solutions, we analyze various bifurcations, including the saddle-node bifurcation in which they are created, as well as the bifurcation from a stationary to a travelling pulse, which we show can be either subcritical or supercritical. For two-pulse solutions, we show that the third component is essential, since the reduced bistable two-component system does not support them. We also analyze the saddle-node bifurcation in which two-pulse solutions are created. The analytical method used to construct all of these pulse solutions is geometric singular perturbation theory, which allows us to show that these solutions lie in the transverse intersections of invariant manifolds in the phase space of the associated six-dimensional travelling wave system. Finally, as we illustrate with numerical simulations, these solutions form the backbone of the rich pulse dynamics this system exhibits, including pulse replication, pulse annihilation, breathing pulses, and pulse scattering, among others.
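The six-dimensional travelling wave system referred to above arises in the standard way: substituting the ansatz $(U,V,W)(x,t) = (u,v,w)(\xi)$ with $\xi = x - ct$ into the model (written here in the representative scaling quoted after the previous abstract) gives three second-order ODEs,

$$
u'' = -c\,u' - u + u^3 + \varepsilon(\alpha v + \beta w + \gamma),\qquad
v'' = -c\tau\,v' - u + v,\qquad
D^2 w'' = -c\theta\,w' - u + w,
$$

equivalent to a first-order system in $(u,u',v,v',w,w')$; pulses and fronts then correspond to homoclinic and heteroclinic orbits of this system, which geometric singular perturbation theory locates in transverse intersections of invariant manifolds.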
Abstract:
The use of Bayesian methodologies for solving optimal experimental design problems has increased. Many of these methods have been found to be computationally intensive for design problems that require a large number of design points. A simulation-based approach is presented for solving optimal design problems in which one is interested in finding a large number of (near) optimal design points for a small number of design variables. The approach involves the use of lower-dimensional parameterisations, consisting of a few design variables, that generate multiple design points. Using this approach, one simply has to search over a few design variables rather than over a large number of optimal design points, thus providing substantial computational savings. The methodologies are demonstrated on four applications, including the selection of sampling times for pharmacokinetic and heat transfer studies, all of which involve nonlinear models. Several Bayesian design criteria are also compared and contrasted, as well as several different lower-dimensional parameterisation schemes for generating the many design points.
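The core computational idea, searching over a few design variables that generate many design points, can be illustrated in a few lines. In this sketch a single shape parameter generates a whole schedule of sampling times for a toy pharmacokinetic model; the geometric schedule, the exponential-decay model and the utility are all assumptions for illustration, not the parameterisations or criteria used in the paper:

```python
# Illustrative sketch of the lower-dimensional parameterisation idea:
# optimise one shape parameter theta that generates all n sampling
# times, instead of optimising the n times directly.
import numpy as np

def sampling_times(theta, n=15, t_max=24.0):
    """Generate n sampling times in (0, t_max] from one parameter;
    theta > 1 clusters the times early, theta < 1 clusters them late."""
    i = np.arange(1, n + 1)
    return t_max * (i / n) ** theta

def utility(theta, n_sim=200):
    """Toy pseudo-Bayesian D-optimality for the exponential-decay model
    y = exp(-k t) + noise, with prior uncertainty on k (all assumed)."""
    rng = np.random.default_rng(0)   # common random numbers: smooth search
    t = sampling_times(theta)
    u = 0.0
    for k in rng.lognormal(np.log(0.2), 0.3, size=n_sim):
        grad = -t * np.exp(-k * t)   # sensitivity d/dk of exp(-k t)
        u += np.log(np.sum(grad ** 2))   # log Fisher information in k
    return u / n_sim

# A 1-D search over theta replaces a 15-D search over the times
# themselves -- the computational saving the abstract describes.
thetas = np.linspace(0.3, 3.0, 28)
best = max(thetas, key=utility)
print("best theta:", best, "times:", np.round(sampling_times(best), 2))
```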
Abstract:
This paper describes a generalised linear mixed model (GLMM) approach for understanding spatial patterns of participation in population health screening, in the presence of multiple screening facilities. The models presented have a dual focus, namely the prediction of expected patient flows from regions to services and of relative rates of participation by region–service combination, with both outputs having meaningful implications for the monitoring of current service uptake and provision. The novelty of this paper lies with the former focus, and an approach for distributing expected participation by region based on proximity to services is proposed. The modelling of relative rates of participation is achieved through the combination of different random effects, as a means of assigning excess participation to different sources. The methodology is applied to participation data collected from a government-funded mammography program in Brisbane, Australia.
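The abstract does not give the model specification, but one illustrative reading of the two components (an assumption for concreteness, not the authors' model) is a gravity-style allocation of expected flows followed by a Poisson GLMM on the observed counts:

$$
e_{ij} = N_i\,\frac{\exp(-d_{ij}/\phi)}{\sum_k \exp(-d_{ik}/\phi)},\qquad
y_{ij} \sim \operatorname{Poisson}(e_{ij}\,\rho_{ij}),\qquad
\log \rho_{ij} = \beta_0 + u_i + v_j,
$$

where $N_i$ is the eligible population of region $i$, $d_{ij}$ the distance from region $i$ to service $j$, $e_{ij}$ the proximity-based expected participation, and the random effects $u_i$ and $v_j$ assign excess participation to regions and services respectively.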
Abstract:
We consider the space fractional advection–dispersion equation, which is obtained from the classical advection–diffusion equation by replacing the spatial derivatives with a generalised derivative of fractional order. We derive a finite volume method that utilises fractionally-shifted Grünwald formulae for the discretisation of the fractional derivative, to numerically solve the equation on a finite domain with homogeneous Dirichlet boundary conditions. We prove that the method is stable and convergent when coupled with an implicit timestepping strategy. Results of numerical experiments are presented that support the theoretical analysis.
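The fractionally-shifted Grünwald formula at the core of the discretisation approximates the fractional derivative as $\frac{\mathrm{d}^\alpha u}{\mathrm{d}x^\alpha}(x_i) \approx h^{-\alpha}\sum_k g_k\, u(x_i - (k-1)h)$ with weights $g_k = (-1)^k\binom{\alpha}{k}$. A minimal sketch of the standard weight recurrence and shift-by-one sum (the paper's specific finite volume flux construction is not reproduced here):

```python
# Standard Grunwald-Letnikov weights g_k = (-1)^k * binom(alpha, k),
# generated by the usual recurrence; these underpin shifted-Grunwald
# discretisations of fractional derivatives of order 1 < alpha <= 2.
import numpy as np

def grunwald_weights(alpha, n):
    """First n+1 weights for fractional order alpha."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1.0 - alpha) / k
    return g

def shifted_grunwald(u, alpha, h):
    """Approximate d^alpha u / dx^alpha on a uniform grid using the
    shift-by-one formula; u is taken as zero outside the grid, which
    matches homogeneous Dirichlet boundary conditions."""
    n = len(u)
    g = grunwald_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        for k in range(i + 2):       # nodes x_{i-k+1}, shifted by one
            j = i - k + 1
            if 0 <= j < n:
                d[i] += g[k] * u[j]
    return d / h ** alpha
```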
Abstract:
To fumigate grain stored in a silo, phosphine gas is distributed by a combination of diffusion and fan-forced advection. This initial study of the problem focuses mainly on the advection, numerically modelled as fluid flow in a porous medium. We find satisfactory agreement between the flow predictions of two Computational Fluid Dynamics packages, COMSOL and Fluent. The flow predictions demonstrate that the highest velocity (>0.1 m/s) occurs less than 0.2 m from the inlet and reduces drastically over one metre of silo height, with the flow elsewhere less than 0.002 m/s, or 1% of the injection velocity. The flow predictions are examined to identify silo regions where phosphine dosage levels are likely to be too low for effective grain fumigation.
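Fan-forced flow through grain is commonly modelled as Darcy flow in a porous medium, $v = -(\kappa/\mu)\nabla p$, with incompressibility reducing the problem to a Laplace equation for the pressure. A minimal finite-difference sketch of that modelling step (the geometry, permeability and boundary values here are illustrative assumptions, not the silo configuration studied):

```python
# Toy Darcy-flow calculation on a 2-D vertical section of a porous bed:
# solve Laplace's equation for pressure by Jacobi iteration, then
# recover the velocity field from Darcy's law. All values illustrative.
import numpy as np

nx, ny = 40, 80                 # cells across width x height
h = 0.05                        # grid spacing [m]
kappa_over_mu = 1e-6            # permeability/viscosity [m^2/(Pa.s)], assumed
p_in = 100.0                    # inlet gauge pressure [Pa], assumed

p = np.zeros((ny, nx))
inlet = np.zeros_like(p, dtype=bool)
inlet[0, nx // 2 - 2 : nx // 2 + 2] = True    # fan inlet, bottom centre
outlet = np.zeros_like(p, dtype=bool)
outlet[-1, :] = True                          # open top surface, p = 0

for _ in range(20000):                        # Jacobi iterations
    pn = p.copy()
    p[1:-1, 1:-1] = 0.25 * (pn[2:, 1:-1] + pn[:-2, 1:-1]
                            + pn[1:-1, 2:] + pn[1:-1, :-2])
    p[0, :] = p[1, :]                         # no-flux floor
    p[:, 0], p[:, -1] = p[:, 1], p[:, -2]     # no-flux side walls
    p[inlet], p[outlet] = p_in, 0.0           # fixed-pressure boundaries

# Darcy velocity v = -(kappa/mu) * grad(p); the speed falls off quickly
# with distance from the inlet, the behaviour the abstract reports.
vy, vx = np.gradient(-kappa_over_mu * p, h)
speed = np.hypot(vx, vy)
print(f"max speed {speed.max():.3g} m/s, median {np.median(speed):.3g} m/s")
```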
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation through repeated sampling of data from the model and comparison of observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics for use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments, that is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information and make better decisions about future design points. This is of particular interest if the data can be collected sequentially: in a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design that accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. In the neurological disorder known as motor neuron disease (MND), motor neurons progressively lose the ability to innervate the muscle fibres, causing the muscles eventually to waste away. When this occurs, the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of the disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) attempts to assess underlying motor unit loss directly, rather than through indirect techniques such as muscle strength assessment, which are generally unable to detect progression because of the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique that develops a sophisticated statistical model, taking into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by marginalising over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We also make other subtle changes to the model and algorithm to improve the robustness of the approach.
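The starting point for the likelihood-free methods of Part I, plain ABC rejection sampling, is easy to state in code. A minimal sketch, in which the toy model, summary statistics and tolerance are illustrative assumptions (the thesis's SMC-based algorithms refine this basic scheme to use far fewer model simulations):

```python
# Minimal ABC rejection sampler: the basic scheme that SMC-based ABC
# algorithms make more efficient. Toy model and summaries only.
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=50):
    """Toy stand-in for an intractable-likelihood model: N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    """Summary statistics: here, sample mean and standard deviation."""
    return np.array([x.mean(), x.std()])

observed = simulate(2.0)                 # pretend this is real data
s_obs = summary(observed)

accepted = []
epsilon = 0.2                            # tolerance (tuning assumption)
while len(accepted) < 1000:
    theta = rng.uniform(-10, 10)         # draw from the prior
    s_sim = summary(simulate(theta))     # simulate, then summarise
    if np.linalg.norm(s_sim - s_obs) < epsilon:
        accepted.append(theta)           # keep draws that match the data

print("approximate posterior mean:", np.mean(accepted))
```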
Abstract:
This article focuses on problem solving activities in a first grade classroom in a typical small community and school in Indiana. But the teacher and the activities in this class were not at all typical of what goes on in most comparable classrooms, and the issues that will be addressed are relevant and important for students from kindergarten through college. Can children really solve problems that involve concepts (or skills) that they have not yet been taught? Can children really create important mathematical concepts on their own – without a lot of guidance from teachers? What is the relationship between problem solving abilities and the mastery of skills that are widely regarded as being “prerequisites” to such tasks? Can primary school children (whose toolkits of skills are limited) engage productively in authentic simulations of “real life” problem solving situations? Can three-person teams of primary school children really work together collaboratively, and remain intensely engaged, on problem solving activities that require more than an hour to complete? Are the kinds of learning and problem solving experiences that are recommended (for example) in the USA’s Common Core State Curriculum Standards really representative of the kind that even young children encounter beyond school in the 21st century? … This article offers an existence proof showing why our answers to these questions are: Yes. Yes. Yes. Yes. Yes. Yes. And: No. … Even though the evidence we present is only intended to demonstrate what’s possible, not what’s likely to occur under any circumstances, there is no reason to expect that the things our children accomplished could not be accomplished by average-ability children in other schools and classrooms.
Abstract:
Recent work on the numerical solution of stochastic differential equations (SDEs) has focused on the development of numerical methods with good stability and order properties. These implementations have used a fixed stepsize, but there are many situations in which a fixed stepsize is not appropriate. In the numerical solution of ordinary differential equations, much work has been carried out on developing robust implementation techniques using variable stepsize. It has been necessary, in the deterministic case, to consider the "best" choice for an initial stepsize, as well as to develop effective strategies for stepsize control; the same, of course, must be done in the stochastic case. In this paper, proportional integral (PI) control is applied to a variable stepsize implementation of an embedded pair of stochastic Runge-Kutta methods used to obtain numerical solutions of nonstiff SDEs. For stiff SDEs, the embedded pair of the balanced Milstein and balanced implicit methods is implemented in variable stepsize mode using a predictive controller for the stepsize change. The extension of these stepsize controllers, from a digital filter theory point of view, via PI with derivative (PID) control is also implemented. The implementations show the improvement in efficiency that can be attained when using these control theory approaches compared with the regular stepsize change strategy.
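A PI stepsize controller of the kind the paper applies has the standard digital-filter form $h_{n+1} = h_n\,(\mathrm{tol}/\mathrm{err}_n)^{k_I/q}\,(\mathrm{err}_{n-1}/\mathrm{err}_n)^{k_P/q}$, where $q$ is the order of the embedded error estimator. A minimal sketch (the gains shown are common textbook choices for deterministic solvers, an assumption here rather than the paper's tuned values):

```python
# Standard PI stepsize controller as used with embedded-pair solvers.
# The gains kI, kP are common deterministic-ODE choices; the paper's
# tuned values for SDEs may differ.
def pi_stepsize(h, err, err_prev, tol, order, kI=0.3, kP=0.4,
                fac_min=0.2, fac_max=5.0):
    """Propose the next stepsize from current/previous error estimates.

    h        : current stepsize
    err      : current local error estimate (from the embedded pair)
    err_prev : error estimate from the previous accepted step
    tol      : local error tolerance
    order    : order of the error estimator
    """
    factor = ((tol / err) ** (kI / order)
              * (err_prev / err) ** (kP / order))
    # Clamp the change so a single step never jumps too far.
    factor = max(fac_min, min(fac_max, factor))
    return h * factor
```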
Abstract:
In this paper we give an overview of some very recent work on the stochastic simulation of multi-scaled systems involving chemical reactions, as well as presenting a new approach. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix of small numbers of key regulatory proteins and medium to large numbers of other molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques for the treatment of coupled slow and fast reactions in stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of LacZ and LacY proteins in E. coli, and conclude with a discussion of the significance of this work.
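The stochastic simulation algorithm (SSA) named above is Gillespie's direct method. A minimal sketch on a toy birth-death network (the reactions and rate constants are illustrative, not the LacZ/LacY system of the application):

```python
# Gillespie's direct-method SSA for a toy birth-death network:
#   0 --k1--> X   (production)      X --k2--> 0   (degradation)
# The network and rates are illustrative only; the paper's LacZ/LacY
# application involves a much larger reaction set.
import numpy as np

rng = np.random.default_rng(0)
k1, k2 = 10.0, 0.1          # reaction rate constants (assumed)
x, t, t_end = 0, 0.0, 100.0

history = [(t, x)]
while t < t_end:
    a1 = k1                  # propensity of production
    a2 = k2 * x              # propensity of degradation
    a0 = a1 + a2             # total propensity
    t += rng.exponential(1.0 / a0)       # time to the next reaction
    if rng.uniform() * a0 < a1:          # choose which reaction fires
        x += 1
    else:
        x -= 1
    history.append((t, x))

print("final copy number:", x, "(stationary mean is k1/k2 = 100)")
```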
Abstract:
In this work we discuss the effects of white and coloured noise perturbations on the parameters of a mathematical model of bacteriophage infection introduced by Beretta and Kuang in [Math. Biosci. 149 (1998) 57]. We numerically simulate the strong solutions of the resulting systems of stochastic ordinary differential equations (SDEs), measuring accuracy with respect to the global error, by means of numerical methods of both Euler-Taylor expansion and stochastic Runge-Kutta type.
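The simplest method of Euler-Taylor type referred to here is the Euler-Maruyama scheme, $X_{n+1} = X_n + f(X_n)\,\Delta t + g(X_n)\,\Delta W_n$. A minimal sketch on a toy scalar SDE (an Ornstein-Uhlenbeck process chosen for illustration, not the bacteriophage model):

```python
# Euler-Maruyama, the strong order-0.5 member of the Euler-Taylor
# family, applied to the toy SDE
#   dX = theta*(mu - X) dt + sigma dW   (Ornstein-Uhlenbeck, assumed).
import numpy as np

rng = np.random.default_rng(42)
theta, mu, sigma = 1.0, 2.0, 0.3     # model parameters (illustrative)
dt, n_steps = 1e-3, 10_000

x = np.empty(n_steps + 1)
x[0] = 0.0
for n in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))               # Brownian increment
    x[n + 1] = x[n] + theta * (mu - x[n]) * dt + sigma * dW

print("X(T) =", x[-1], "; the long-run mean should approach mu =", mu)
```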
Abstract:
This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations. We give a brief survey of the area focusing on a number of application areas where approximations to strong solutions are important, with a particular focus on computational biology applications, and give the necessary analytical tools for understanding some of the important concepts associated with stochastic processes. We present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence and mention the Magnus expansion as a mechanism for designing methods that preserve the underlying structure of the problem. We also present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals and variable-step-size implementations based on various types of control.
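Retaining one more term of the stochastic Taylor expansion than Euler-Maruyama gives the Milstein scheme, $X_{n+1} = X_n + f\,\Delta t + g\,\Delta W + \tfrac{1}{2} g g' (\Delta W^2 - \Delta t)$, raising the strong order from 0.5 to 1.0 for scalar SDEs. A sketch on geometric Brownian motion (toy coefficients, illustrative only):

```python
# Milstein scheme: one extra stochastic-Taylor term beyond
# Euler-Maruyama. Toy model: geometric Brownian motion (assumed),
#   dX = a X dt + b X dW.
import numpy as np

rng = np.random.default_rng(7)
a, b = 0.05, 0.2                     # drift and diffusion coefficients

def milstein_step(x, dt, dW):
    f = a * x                        # drift      f(x)  = a x
    g = b * x                        # diffusion  g(x)  = b x
    gp = b                           # derivative g'(x) = b
    return x + f * dt + g * dW + 0.5 * g * gp * (dW ** 2 - dt)

dt, n_steps = 1e-3, 5_000
x = 1.0
for _ in range(n_steps):
    x = milstein_step(x, dt, rng.normal(0.0, np.sqrt(dt)))
print("X(T) =", x)
```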