631 results for Hybrid methods


Relevance: 20.00%

Abstract:

The behaviour of ion channels within cardiac and neuronal cells is intrinsically stochastic in nature. When the number of channels is small, this stochastic noise is large and can affect the dynamics of the system, which is potentially an issue when modelling small neurons and drug block in cardiac cells. While exact methods correctly capture the stochastic dynamics of a system, they are computationally expensive, restricting their inclusion in tissue-level models, so approximations to exact methods are often used instead. A further issue in modelling ion channel dynamics is that the transition rates are voltage dependent, adding a level of complexity because the channel dynamics are coupled to the membrane potential. By assuming that such transition rates are constant over each time step, it is possible to derive a stochastic differential equation (SDE), in the same manner as for biochemical reaction networks, that describes the stochastic dynamics of ion channels. While such a model is more computationally efficient than exact methods, we show that there are analytical problems with the resulting SDE, as well as issues in using current numerical schemes to solve such an equation. We therefore make two contributions: we develop a different model of stochastic ion channel dynamics that behaves correctly in an analytical sense, and we discuss numerical methods that preserve the analytical properties of the model.
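A minimal sketch of the kind of chemical-Langevin SDE this abstract refers to, for a hypothetical two-state channel with opening rate alpha(V) and closing rate beta(V) held constant over each step, integrated with Euler-Maruyama in Python. This is our own illustration, not the authors' model; the function name, rate expressions and parameter values are invented, and the final clamp is a crude workaround for exactly the boundary problem (the open fraction leaving [0, 1]) that the abstract identifies.

import numpy as np

def simulate_open_fraction(alpha, beta, V, N=100, x0=0.2, T=50.0, dt=0.01, seed=0):
    """Euler-Maruyama for the chemical-Langevin-type SDE of a two-state channel:
        dx = (alpha(V)(1-x) - beta(V)x) dt + sqrt((alpha(V)(1-x) + beta(V)x)/N) dW,
    with x the open fraction and N the number of channels (hypothetical model)."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    a, b = alpha(V), beta(V)          # rates frozen at the current voltage
    for k in range(n_steps):
        drift = a * (1.0 - x[k]) - b * x[k]
        diff = np.sqrt(max(a * (1.0 - x[k]) + b * x[k], 0.0) / N)
        x[k + 1] = x[k] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = min(max(x[k + 1], 0.0), 1.0)   # crude fix: clamp into [0, 1]
    return x

# Illustrative voltage-dependent rates (invented for this sketch).
trace = simulate_open_fraction(lambda V: 0.1 * np.exp(V / 40.0),
                               lambda V: 0.12 * np.exp(-V / 25.0), V=-40.0)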

Relevance: 20.00%

Abstract:

Inverse problems, in which experimental data are used to estimate unknown parameters of a system, often arise in biological and chaotic systems. In this paper, we consider parameter estimation in systems biology involving linear and non-linear complex dynamical models, including the Michaelis–Menten enzyme kinetic system, a dynamical model of competence induction in Bacillus subtilis bacteria and a model of feedback bypass in B. subtilis bacteria. We propose several novel techniques for such inverse problems. Firstly, we establish an approximation of a non-linear differential algebraic equation that corresponds to the given biological systems. Secondly, we use the Picard contraction mapping, collage methods and numerical integration techniques to convert the parameter estimation problem into a minimization problem over the parameters. We propose two optimization techniques: a grid approximation method and a modified hybrid Nelder–Mead simplex search and particle swarm optimization (MH-NMSS-PSO) for non-linear parameter estimation. The two techniques are used for parameter estimation in a model of competence induction in B. subtilis bacteria with noisy data. The MH-NMSS-PSO scheme is also applied to a dynamical model of competence induction in B. subtilis bacteria based on experimental data and to the feedback bypass model. Numerical results demonstrate the effectiveness of our approach.
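As a rough illustration of converting parameter estimation into a minimisation problem, the sketch below fits the Michaelis–Menten system to noisy synthetic data by numerically integrating the ODE and minimising a least-squares misfit. The collage/Picard machinery and the MH-NMSS-PSO optimiser are not reproduced; SciPy's plain Nelder-Mead stands in for the optimiser, and the rate law, data and starting values are invented for illustration.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def michaelis_menten(t, s, vmax, km):
    # ds/dt = -vmax * s / (km + s): substrate depletion in the MM kinetic system
    return [-vmax * s[0] / (km + s[0])]

def objective(params, t_data, s_data):
    vmax, km = params
    if vmax <= 0 or km <= 0:
        return np.inf                              # keep the search in the feasible region
    sol = solve_ivp(michaelis_menten, (t_data[0], t_data[-1]), [s_data[0]],
                    t_eval=t_data, args=(vmax, km), rtol=1e-8)
    return np.sum((sol.y[0] - s_data) ** 2)        # least-squares misfit to the data

# Noisy synthetic data for illustration (true vmax = 1.0, km = 0.5).
rng = np.random.default_rng(1)
t_data = np.linspace(0.0, 5.0, 30)
truth = solve_ivp(michaelis_menten, (0.0, 5.0), [2.0], t_eval=t_data, args=(1.0, 0.5))
s_data = truth.y[0] + 0.02 * rng.standard_normal(t_data.size)

fit = minimize(objective, x0=[0.5, 1.0], args=(t_data, s_data), method="Nelder-Mead")
print(fit.x)   # estimated (vmax, km)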

Relevance: 20.00%

Abstract:

The action potential (AP) of a cardiac cell is made up of a complex balance of ionic currents which flow across the cell membrane in response to electrical excitation of the cell. Biophysically detailed mathematical models of the AP have grown larger in terms of the variables and parameters required to model new findings in subcellular ionic mechanisms. The fitting of parameters to such models has seen a large degree of parameter and module re-use from earlier models. An alternative method for modelling electrically excitable cardiac tissue is a phenomenological model, which reconstructs tissue-level AP wave behaviour without subcellular details. A new parameter estimation technique to fit the morphology of the AP in a four-variable phenomenological model is presented. An approximation of a nonlinear ordinary differential equation model is established that corresponds to the given phenomenological model of the cardiac AP. The parameter estimation problem is converted into a minimisation problem for the unknown parameters. A modified hybrid Nelder–Mead simplex search and particle swarm optimization is then used to solve the minimisation problem for the unknown parameters. The successful fitting of data generated from a well-known biophysically detailed model is demonstrated. A successful fit to an experimental AP recording that contains both noise and experimental artefacts is also produced. The parameter estimation method’s ability to fit a complex morphology to a model with substantially more parameters than previously used is established.
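The structure of a hybrid Nelder-Mead/particle-swarm optimiser can be sketched as below: a basic PSO global search whose best particle is then polished by a Nelder-Mead local search. This is only a structural sketch under our own assumptions; the modified MH-NMSS-PSO scheme of the paper, the four-variable AP model and its misfit function are not reproduced, and the function name, PSO coefficients and smoke-test problem are invented.

import numpy as np
from scipy.optimize import minimize

def hybrid_pso_nelder_mead(f, bounds, n_particles=20, n_iter=50, seed=0):
    """Toy hybrid optimiser: a basic PSO global search followed by a Nelder-Mead
    local polish of the best particle. The actual MH-NMSS-PSO scheme interleaves
    the two searches; this is only a structural sketch."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))       # particle positions
    v = np.zeros_like(x)                                        # particle velocities
    p_best, p_val = x.copy(), np.array([f(xi) for xi in x])
    g_best = p_best[np.argmin(p_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(xi) for xi in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmin(p_val)]
    return minimize(f, g_best, method="Nelder-Mead").x          # local refinement

# Smoke test on a simple quadratic before plugging in an AP-morphology misfit.
print(hybrid_pso_nelder_mead(lambda p: np.sum((p - 1.5) ** 2), bounds=[(0.0, 5.0)] * 3))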

Relevance: 20.00%

Abstract:

We report the production of free-standing thin sheets made up of mass-produced ZnO nanowires and the application of these nanowire sheets to the fabrication of ZnO/organic hybrid light-emitting diodes via an assembly approach. Different p-type organic semiconductors are used to form heterojunctions with the ZnO nanowire film. Electroluminescence measurements of the devices show UV and visible emissions. An identical strong red emission is observed regardless of the organic semiconductor material used in this work. The visible emissions, corresponding to electron transitions between defect levels within the energy bandgap of ZnO, are discussed.

Relevance: 20.00%

Abstract:

Biologists are increasingly conscious of the critical role that noise plays in cellular functions such as genetic regulation, often in connection with fluctuations in small numbers of key regulatory molecules. This has inspired the development of models that capture the fundamentally discrete and stochastic nature of cellular biology, most notably the Gillespie stochastic simulation algorithm (SSA). The SSA simulates a temporally homogeneous, discrete-state, continuous-time Markov process, and of course the corresponding probabilities and numbers of each molecular species must all remain positive. While accurately serving this purpose, the SSA can be computationally inefficient due to very small time stepping, so faster approximations such as the Poisson and binomial τ-leap methods have been suggested. This work places these leap methods in the context of numerical methods for the solution of stochastic differential equations (SDEs) driven by Poisson noise. This allows analogues of Euler-Maruyama, Milstein and even higher-order methods to be developed through Itô-Taylor expansions, as well as similar derivative-free Runge-Kutta approaches. Numerical results demonstrate that these novel methods compare favourably with existing techniques for simulating biochemical reactions, capturing crucial properties such as the mean and variance more accurately.
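A minimal sketch of the Poisson τ-leap idea that these SDE analogues build on, for an invented birth-death network; the higher-order Itô-Taylor and derivative-free Runge-Kutta analogues developed in the work are not reproduced, and the negative-count guard is a crude illustration of the positivity issue mentioned above.

import numpy as np

def poisson_tau_leap(x0, rates, stoich, tau, n_steps, seed=0):
    """Poisson tau-leap for a reaction network: over each leap of length tau,
    the number of firings of reaction j is drawn as Poisson(a_j(x) * tau),
    where a_j is the propensity and stoich[j] the state-change vector."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        a = np.array([r(x) for r in rates])            # propensities a_j(x)
        k = rng.poisson(np.maximum(a, 0.0) * tau)      # firings per reaction
        x = np.maximum(x + stoich.T @ k, 0.0)          # crude guard against negative counts
        path.append(x.copy())
    return np.array(path)

# Birth-death example (invented): 0 -> X at rate 10, X -> 0 at rate 0.1 * X.
stoich = np.array([[+1], [-1]])
rates = [lambda x: 10.0, lambda x: 0.1 * x[0]]
traj = poisson_tau_leap([50], rates, stoich, tau=0.05, n_steps=400)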

Relevance: 20.00%

Abstract:

This paper gives a modification of a class of stochastic Runge–Kutta methods proposed in a paper by Komori (2007). The slight modification can reduce the computational costs of the methods significantly.

Relevance: 20.00%

Abstract:

We consider time-space fractional reaction diffusion equations in two dimensions. This equation is obtained from the standard reaction diffusion equation by replacing the first order time derivative with the Caputo fractional derivative, and the second order space derivatives with the fractional Laplacian. Using the matrix transfer technique proposed by Ilic, Liu, Turner and Anh [Fract. Calc. Appl. Anal., 9:333--349, 2006] and the numerical solution strategy used by Yang, Turner, Liu, and Ilic [SIAM J. Scientific Computing, 33:1159--1180, 2011], the solution of the time-space fractional reaction diffusion equations in two dimensions can be written in terms of a matrix function vector product $f(A)b$ at each time step, where $A$ is an approximate matrix representation of the standard Laplacian. We use the finite volume method over unstructured triangular meshes to generate the matrix $A$, which is therefore non-symmetric. However, the standard Lanczos method for approximating $f(A)b$ requires that $A$ is symmetric. We propose a simple and novel transformation in which the standard Lanczos method is still applicable to find $f(A)b$, despite the loss of symmetry. Numerical results are presented to verify the accuracy and efficiency of our newly proposed numerical solution strategy.
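The abstract does not spell out the transformation that restores symmetry, so the following is only one standard construction consistent with the description, an assumption on our part rather than the authors' method. If the finite volume discretisation can be written as $A = M^{-1}K$ with $M$ a positive diagonal matrix (for example, of control-volume measures) and $K$ symmetric, then
\[
  B = M^{1/2} A M^{-1/2} = M^{-1/2} K M^{-1/2}
\]
is symmetric, and since $A = M^{-1/2} B M^{1/2}$,
\[
  f(A)b = M^{-1/2} f(B)\, M^{1/2} b,
\]
so the standard Lanczos method can be applied to $B$, at the cost of only two diagonal scalings before and after.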

Relevance: 20.00%

Abstract:

In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on a multi-resolution (MR) implementation of an image registration technique, Inverse Compositional Image Alignment (ICIA), using direct methods. An on-board camera in a downwards-looking configuration and the assumption of planar scenes are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the previous known estimate of the vehicle’s state, obtained from the on-board sensors (GPS/IMU), and subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data in representative stages of a flight: cruise, landing and take-off, two of which (take-off and landing) are considered critical. The performance of the pose estimation strategy is analysed by comparing it with the GPS/IMU estimates. Results show correlation between the visual estimates obtained with MR-ICIA and the GPS/IMU data, demonstrating that the visual estimation can provide a good approximation of the vehicle’s state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle’s state for more than one minute at real-time frame rates, based only on visual information.
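A rough sketch of the homography-decomposition step described above, using OpenCV's generic homography decomposition as a stand-in (the MR-ICIA registration itself is not shown). The function name, the plane-normal selection heuristic for a downward-looking camera, and the scale-resolution comment are our own assumptions, not the paper's procedure.

import numpy as np
import cv2

def motion_from_homography(H, K, plane_normal=np.array([0.0, 0.0, 1.0])):
    """Decompose a frame-to-frame homography H (e.g. from image registration)
    into candidate (R, t) pairs, then keep the solution whose plane normal is
    closest to the expected one for a downward-looking camera over flat ground.
    K is the 3x3 camera intrinsic matrix."""
    n_sols, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    best = max(range(n_sols),
               key=lambda i: float(normals[i].ravel() @ plane_normal))
    return Rs[best], ts[best]      # rotation matrix and translation (up to scale)

# Usage note: H could come from cv2.findHomography on matched patches or from an
# ICIA-style direct alignment; t is recovered only up to the plane's scale, so
# altitude (e.g. from GPS/IMU or a barometer) is needed to fix metric units.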

Relevance: 20.00%

Abstract:

Microbial pollution in water periodically affects human health in Australia, particularly in times of drought and flood, and there is an increasing need for the control of waterborne microbial pathogens. Methods that allow the determination of the origin of faecal contamination in water are generally referred to as Microbial Source Tracking (MST). Various approaches have been evaluated as indicators of microbial pathogens in water samples, including detection of different microorganisms and various host-specific markers. However, to date there is no universal MST method that can reliably determine the source (human or animal) of faecal contamination, so the use of multiple approaches is frequently advised. MST is currently recognised as a research tool rather than something to be included in routine practice. The main focus of this research was to develop novel and universally applicable methods to meet the demand for MST methods in routine testing of water samples. Escherichia coli was chosen initially as the target organism for our studies as, historically and globally, it is the standard indicator of microbial contamination in water. In this thesis, three approaches are described: single nucleotide polymorphism (SNP) genotyping, clustered regularly interspaced short palindromic repeat (CRISPR) screening using high resolution melt analysis (HRMA), and phage detection development based on CRISPR types. The advantage of combining SNP genotyping and CRISPR genes is discussed in this study. For the first time, a highly discriminatory single nucleotide polymorphism interrogation of an E. coli population was applied to identify host-specific clusters. Six human-specific and one animal-specific SNP profiles were revealed. SNP genotyping was successfully applied in field investigations of the Coomera watershed, South-East Queensland, Australia. Four human-specific profiles [11], [29], [32] and [45] and the animal-specific SNP profile [7] were detected in water, and two human-specific profiles [29] and [11] were found to be prevalent in the samples over a period of years. Rainfall (24 and 72 hours), tide height and time, general land use (rural, suburban), season, distance from the river mouth and salinity showed no relationship with the diversity of SNP profiles present in the Coomera watershed (p values > 0.05). Nevertheless, the SNP genotyping method is able to identify and distinguish between human- and non-human-specific E. coli isolates in water sources within one day. In some samples, only mixed profiles were detected. To further investigate host-specificity in these mixed profiles, a CRISPR screening protocol was developed and applied to the set of E. coli isolates previously analysed for SNP profiles. CRISPR loci, which record previous attacks by DNA coliphages, were considered a promising tool for detecting host-specific markers in E. coli. Spacers in CRISPR loci could also reveal the dynamics of virulence in E. coli, as well as in other pathogens, in water. Although host-specificity was not observed in the set of E. coli analysed, CRISPR alleles were shown to be useful in detecting the geographical site of sources. HRMA allows determination of ‘different’ and ‘same’ CRISPR alleles and can be introduced into water monitoring as a cost-effective and rapid method.
Overall, we show that the identified human-specific SNP profiles [11], [29], [32] and [45] can be useful globally as marker genotypes for identifying human faecal contamination in water. The SNP typing approach developed in the current study can be used in water monitoring laboratories as an inexpensive, high-throughput and easily adapted protocol. A unique approach based on E. coli spacers was developed to search for unknown phages and to examine host-specificity in phage sequences. Preliminary experiments on recombinant plasmids showed the possibility of using this method for recovering phage sequences. Future studies will determine the host-specificity of DNA phage genotyping as soon as the first reliable sequences can be acquired. Undoubtedly, only the application of multiple approaches in MST will allow the character of microbial contamination to be identified with higher confidence and reliability.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Fibre composite structures have become attractive candidates for civil engineering applications. Fibre reinforced polymer (FRP) composite materials have been used in the rehabilitation and replacement of old, degrading traditional structures and to build new structures. However, the lack of design standards for civil infrastructure limits their structural applications. The majority of existing applications have been designed based on research and guidelines provided by fibre composite manufacturers or on the designer’s experience, and the final structure tends to be over-designed. This paper provides a review of the available studies related to the design optimization of fibre composite structures used in civil engineering, such as plates, beams, box beams, sandwich panels, bridge girders and bridge decks. Various optimization methods are presented and compared, and the importance of using an appropriate optimization technique is discussed. An improved methodology, which considers experimental testing, numerical modelling and design constraints, is proposed in the paper for the design optimization of composite structures.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The United States Supreme Court has handed down a once-in-a-generation patent law decision that will have important ramifications for the patentability of non-physical methods, both internationally and in Australia. In Bilski v Kappos, the Supreme Court considered whether an invention must either be tied to a machine or apparatus, or transform an article into a different state or thing, to be patentable. It also considered for the first time whether business methods are patentable subject matter. The decision will be of particular interest to practitioners who followed the litigation in Grant v Commissioner of Patents, a Federal Court decision in which a Brisbane-based inventor was denied a patent over a method of protecting an asset from the claims of creditors.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Corrosion is a common phenomenon and a critical aspect of structural steel applications. It affects the daily design, inspection and maintenance work in structural engineering, especially in heavy and complex industrial applications where steel structures are subjected to harsh corrosive environments in combination with high working stresses, often in the open field and/or in high-temperature production environments. This paper presents an actual engineering application of advanced finite element methods to the prediction of structural integrity and robustness over the design service life of alumina-production furnaces, which operate in high-temperature, corrosive environments while rotating under high working stresses.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as to undertake analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure that incorporates all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of those data as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that, with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility; however, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
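A minimal sketch of the precision structure implied by the CAR layered model as we read it: each depth layer gets its own intrinsic CAR precision built from within-layer neighbours only, scaled by a layer-specific precision parameter, so spatially structured variance can differ with depth. This is an illustration in NumPy under our own assumptions (function names, the example grid and the parameter values are invented); it is not the thesis code, which used WinBUGS and pyMCMC.

import numpy as np

def car_layer_precision(adjacency):
    """Intrinsic CAR precision for one depth layer: Q = D - W, where W is the
    within-layer adjacency matrix and D holds each site's number of neighbours."""
    W = np.asarray(adjacency, dtype=float)
    return np.diag(W.sum(axis=1)) - W

def layered_car_precision(adjacencies, tau):
    """Block-diagonal precision over all depth layers. Each layer l keeps its own
    precision tau[l] * (D_l - W_l); there are no neighbours across layers."""
    blocks = [tau_l * car_layer_precision(W_l) for W_l, tau_l in zip(adjacencies, tau)]
    n = sum(b.shape[0] for b in blocks)
    Q = np.zeros((n, n))
    start = 0
    for b in blocks:
        Q[start:start + b.shape[0], start:start + b.shape[0]] = b
        start += b.shape[0]
    return Q

# Example: two depth layers sharing the same 2x2 grid of sites (rook neighbours).
grid = np.array([[0, 1, 1, 0],
                 [1, 0, 0, 1],
                 [1, 0, 0, 1],
                 [0, 1, 1, 0]])
Q = layered_car_precision([grid, grid], tau=[1.0, 0.5])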