955 results for Nonlinear differential equations


Relevance: 80.00%

Abstract:

SIN and SOLDIER are heuristic programs in LISP that solve symbolic integration problems. SIN (Symbolic INtegrator) solves indefinite integration problems at a level of difficulty approaching that of the larger integral tables. SIN contains several methods not used in the earlier symbolic integration program SAINT, and solves most of the problems attempted by SAINT in less than one second. SOLDIER (SOLution of Ordinary Differential Equations Routine) solves first-order, first-degree ordinary differential equations at the level of a good college sophomore, averaging about five seconds per problem attempted. The differences in philosophy and operation between SAINT and SIN are described, and suggestions are made for extending the work.
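As a rough modern analogue (not the original LISP programs), the two problem classes handled by SIN and SOLDIER, indefinite integration and first-order first-degree ODEs, can be illustrated in a few lines of SymPy; the integrand and equation below are arbitrary examples.

```python
# A modern analogue of SIN/SOLDIER's two problem classes using SymPy.
# Illustration only; this does not reproduce the original LISP heuristics.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Indefinite integration (the kind of problem SIN solved):
integral = sp.integrate(x * sp.exp(x**2), x)   # -> exp(x**2)/2
print(integral)

# A first-order, first-degree ODE (the kind of problem SOLDIER solved):
ode = sp.Eq(y(x).diff(x), x * y(x))
solution = sp.dsolve(ode, y(x))                # -> y(x) = C1*exp(x**2/2)
print(solution)
```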

Relevance: 80.00%

Abstract:

King, R.D., Garrett, S.M., Coghill, G.M. (2005). On the use of qualitative reasoning to simulate and identify metabolic pathways. Bioinformatics 21(9):2017-2026.

Relevance: 80.00%

Abstract:

Hill, J.M., Lloyd, N.G., Pearson, J.M. (2007). Centres and limit cycles for an extended Kukles system. Electronic Journal of Differential Equations, Vol. 2007, No. 119, pp. 1-23.

Relevance: 80.00%

Abstract:

Gough, J. (2006). Quantum Stratonovich stochastic calculus and the quantum Wong-Zakai theorem. Journal of Mathematical Physics 47, 113509.

Relevance: 80.00%

Abstract:

Gough, J. (2004). Quantum flows as Markovian limit of emission, absorption and scattering interactions. Communications in Mathematical Physics 254, pp. 498-512.

Relevance: 80.00%

Abstract:

The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d'Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0, ∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c·exp(Ax)·b, where A is a square matrix, b a column vector and c a row vector; the triple (A, b, c) is the minimal realization of the EPT function. The minimal triple is unique only up to a basis transformation.

Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero; this class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process whose increments are stochastically independent 2-EPT random variables. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process.

A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation, and the non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation.

Necessary and sufficient conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes, and an asset's log returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European options with specific times to maturity. Formulae for discretely monitored Lookback options and 2-period Bermudan options are also provided. Certain Greeks, including Delta and Gamma, of these options are computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions, and numerical option-pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
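A minimal numerical sketch of the (A, b, c) representation described above, with an illustrative triple of my own choosing rather than one fitted to data: the EPT function is evaluated as f(x) = c·exp(Ax)·b using the matrix exponential.

```python
# Evaluate an EPT function f(x) = c * e^(Ax) * b from a minimal
# realization (A, b, c). The triple below is an arbitrary illustration,
# not one of the fitted densities discussed in the abstract.
import numpy as np
from scipy.linalg import expm

# This triple realizes f(x) = 2*exp(-x) - 2*exp(-2x), a non-negative
# density on [0, inf) that integrates to 1.
A = np.array([[-1.0,  0.0],
              [ 0.0, -2.0]])
b = np.array([[1.0], [1.0]])
c = np.array([[2.0, -2.0]])

def ept(x):
    """Evaluate c * expm(A*x) * b at a scalar x >= 0."""
    return float((c @ expm(A * x) @ b)[0, 0])

print([round(ept(x), 4) for x in (0.0, 0.5, 1.0, 2.0, 4.0)])
```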

Relevance: 80.00%

Abstract:

In this work we introduce a new mathematical tool for optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at each location represents the density of the data traffic transiting that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area.

With the above formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory, and we show that to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one application of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme raises the permittivity coefficient in the places of the network where nodes have high residual energy, and lowers it where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes.

Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in defining the regions of attraction and deciding how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve this optimization problem. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that in the optimal assignment of the communication load to the destinations, the value of this potential field must be equal at the locations of all the destinations.

Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations and reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
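A minimal finite-difference sketch of the electrostatics analogy just described, under assumptions of my own (a square grid, a single source/sink pair, and a permittivity pattern standing in for residual energy): it solves div(eps * grad(phi)) = -rho iteratively and takes the routing field as D = -eps * grad(phi). This illustrates the idea, not the paper's actual algorithm.

```python
# Electrostatics analogy for routing: solve div(eps * grad(phi)) = -rho
# on a grid, then route along D = -eps * grad(phi). Grid size, the eps
# pattern, and source/sink placement are arbitrary illustrative choices.
import numpy as np

n = 41
phi = np.zeros((n, n))          # potential field
rho = np.zeros((n, n))          # "charge": traffic sources and sinks
rho[10, 10] = 1.0               # a sensor (source of traffic)
rho[30, 30] = -1.0              # a destination (sink of traffic)

# Higher permittivity where nodes have more residual energy (left half here).
eps = np.ones((n, n))
eps[:, : n // 2] = 4.0

# Simple iterative relaxation with Dirichlet (phi = 0) boundary.
for _ in range(3000):
    w_n, w_s = eps[:-2, 1:-1], eps[2:, 1:-1]
    w_w, w_e = eps[1:-1, :-2], eps[1:-1, 2:]
    num = (w_n * phi[:-2, 1:-1] + w_s * phi[2:, 1:-1] +
           w_w * phi[1:-1, :-2] + w_e * phi[1:-1, 2:] + rho[1:-1, 1:-1])
    phi[1:-1, 1:-1] = num / (w_n + w_s + w_w + w_e)

# Routing direction: along D = -eps * grad(phi) (axis 0 = rows, axis 1 = cols).
g0, g1 = np.gradient(phi)
D0, D1 = -eps * g0, -eps * g1
print("routing field at (20, 20):", D0[20, 20], D1[20, 20])
```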
In another part of this work, we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates.

In the next step, we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find these signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of the orthogonality, performance does not degrade due to cross-interference among simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
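A toy sketch of the orthogonal-signature idea, with invented specifics (Walsh-Hadamard rows as router signatures and a synthetic linear response model): each router perturbs with its own ±1 signature, and correlating the observed aggregate response with a given signature recovers that router's contribution while the others cancel.

```python
# Toy illustration of CDMA-style orthogonal signatures for simultaneous
# perturbation tests at multiple routers. Signature choice (Walsh-Hadamard)
# and the synthetic response model are assumptions for illustration.
import numpy as np
from scipy.linalg import hadamard

H = hadamard(8)                     # 8 orthogonal +/-1 signatures of length 8
sig = {r: H[r] for r in range(3)}   # signatures for 3 routers

# Suppose router r perturbs with amplitude a[r] * sig[r] over time, and the
# aggregate's observed response is the superposition plus noise.
rng = np.random.default_rng(0)
a = {0: 2.0, 1: 0.5, 2: 1.2}        # true per-router responsiveness
observed = sum(a[r] * sig[r] for r in sig) + 0.1 * rng.standard_normal(8)

# Correlating with each signature isolates that router's contribution,
# because the signatures are orthogonal: (1/N) <observed, sig[r]> ~ a[r].
for r in sig:
    print(r, round(observed @ sig[r] / len(observed), 3))
```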

Relevance: 80.00%

Abstract:

The growth and proliferation of invasive bacteria in engineered systems is an ongoing problem. While there are a variety of physical and chemical processes to remove and inactivate bacterial pathogens, there are many situations in which these tools are no longer effective or appropriate for the treatment of a microbial target. For example, certain strains of bacteria are becoming resistant to commonly used disinfectants, such as chlorine and UV. Additionally, the overuse of antibiotics has contributed to the spread of antibiotic resistance, and there is concern that wastewater treatment processes are contributing to the spread of antibiotic resistant bacteria.

Due to the continually evolving nature of bacteria, it is difficult to develop methods for universal bacterial control in a wide range of engineered systems, as many of our treatment processes are static in nature. Still, invasive bacteria are present in many natural and engineered systems where the application of broad-acting disinfectants is impractical, because their use may inhibit the desired bioprocesses. Therefore, to better control the growth of treatment-resistant bacteria and to address the limitations of current disinfection processes, novel tools that are both specific and adaptable need to be developed and characterized.

In this dissertation, two possible biological disinfection processes were investigated for use in controlling invasive bacteria in engineered systems. First, antisense gene silencing, which is the specific use of oligonucleotides to silence gene expression, was investigated. This work was followed by the investigation of bacteriophages (phages), which are viruses that are specific to bacteria, in engineered systems.

For the antisense gene silencing work, a computational approach was used to quantify the number of off-targets and to determine their effects in prokaryotic organisms. For Escherichia coli K-12 MG1655 and Mycobacterium tuberculosis H37Rv, the mean number of off-targets was found to be 15.0 ± 13.2 and 38.2 ± 61.4, respectively, which results in a reduction of greater than 90% in the effective oligonucleotide concentration. It was also demonstrated that the number of off-targets varies widely over the length of a gene but that, on average, there is no general gene location that can be targeted to reduce off-targets; this analysis therefore needs to be performed for each gene in question. It was further demonstrated that the thermodynamic binding energy between the oligonucleotide and the mRNA accounted for 83% of the variation in silencing efficiency, whereas the number of off-targets explained 43% of the variance, suggesting that optimizing thermodynamic parameters should be prioritized over minimizing the number of off-targets. In conclusion, for the antisense work, these results suggest that off-target hybrids can account for a greater than 90% reduction in the concentration of the silencing oligonucleotides, and that the effective concentration can be increased through the rational design of silencing targets that minimizes off-target hybrids.
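A much-simplified sketch of the off-target counting idea, with invented sequences and a plain mismatch threshold standing in for the thermodynamic hybridization criteria used in the actual analysis.

```python
# Toy off-target counter: slide an antisense oligo's target site across
# every transcript and count near-matches. Real analyses use hybridization
# thermodynamics; the <=2-mismatch rule and sequences here are placeholders.

def count_off_targets(site: str, transcripts: dict, target_gene: str,
                      max_mismatch: int = 2) -> int:
    """Count binding sites on non-target transcripts within max_mismatch."""
    hits, k = 0, len(site)
    for gene, seq in transcripts.items():
        if gene == target_gene:
            continue                      # only count off-target genes
        for i in range(len(seq) - k + 1):
            mism = sum(1 for a, b in zip(site, seq[i:i + k]) if a != b)
            if mism <= max_mismatch:
                hits += 1
    return hits

transcripts = {                           # placeholder mRNA fragments
    "geneA": "AUGGCUAAGGCUUAAGCUAUGGCA",
    "geneB": "AUGGCUAAGGGUUAAGGUAUGGCA",
    "geneC": "UUUCCCGGGAAAUUUCCCGGGAAA",
}
print(count_off_targets("GCUAAGGCUU", transcripts, target_gene="geneA"))
```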

Regarding the work with phages, the disinfection rates of bacteria in the presence of phages were determined. The disinfection rates of E. coli K12 MG1655 in the presence of coliphage Ec2 ranged up to 2 h⁻¹ and depended on both the initial phage and bacterial concentrations. Increasing initial phage concentrations resulted in increasing disinfection rates, and generally, increasing initial bacterial concentrations resulted in increasing disinfection rates; however, disinfection rates were found to plateau at higher bacterial and phage concentrations. A multiple linear regression model was used to predict the disinfection rates as a function of the initial phage and bacterial concentrations, and this model was able to explain 93% of the variance in the disinfection rates. The disinfection rates were also modeled with a particle aggregation model. The results from these model simulations suggested that at lower phage and bacterial concentrations there are not enough collisions to support active disinfection, which limits the conditions and systems where phage-based bacterial disinfection is possible. Additionally, the particle aggregation model overpredicted the disinfection rates at higher phage and bacterial concentrations of 10⁸ PFU/mL and 10⁸ CFU/mL, suggesting that other interactions were occurring at these higher concentrations. Overall, this work highlights the need for alternative models that more accurately describe the dynamics of this system across a range of phage and bacterial concentrations. Finally, the minimum required hydraulic residence time was calculated for a continuous stirred-tank reactor (CSTR) and a plug flow reactor (PFR) as a function of both the initial phage and bacterial concentrations, which suggested that phage treatment in a PFR is theoretically possible.
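A brief sketch of the regression idea described above, using synthetic placeholder data rather than the study's measurements: the disinfection rate k is regressed on the log10 initial phage and bacterial concentrations by ordinary least squares.

```python
# Sketch of the regression idea: predict disinfection rate k (1/h) from
# log10 initial phage (P0) and bacterial (B0) concentrations. The data
# below are synthetic placeholders, not measurements from the study.
import numpy as np

log_P0 = np.array([6, 7, 8, 6, 7, 8, 6, 7, 8], dtype=float)
log_B0 = np.array([6, 6, 6, 7, 7, 7, 8, 8, 8], dtype=float)
k      = np.array([0.3, 0.8, 1.4, 0.5, 1.0, 1.7, 0.6, 1.2, 2.0])

# Ordinary least squares: k ~ b0 + b1*log10(P0) + b2*log10(B0)
X = np.column_stack([np.ones_like(k), log_P0, log_B0])
beta, *_ = np.linalg.lstsq(X, k, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((k - pred) ** 2) / np.sum((k - k.mean()) ** 2)
print("coefficients:", beta.round(3), "R^2:", round(r2, 3))
```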

In addition to determining disinfection rates, the long-term bacterial growth inhibition potential was determined for a variety of phages with both Gram-negative and Gram-positive bacteria. It was determined that, on average, phages can be used to inhibit bacterial growth for up to 24 h, and that this effect was concentration dependent for various phages at specific time points. Additionally, it was found that a phage cocktail was no more effective at inhibiting bacterial growth over the long term than the best-performing phage in isolation.

Finally, for an industrial application, the use of phages to inhibit invasive lactobacilli in ethanol fermentations was investigated. It was demonstrated that phage 8014-B2 can achieve greater than 3-log inactivation of Lactobacillus plantarum during a 48 h fermentation, and it was shown that phages can be used to protect final product yields and maintain yeast viability. By modeling the fermentation system with differential equations, it was determined that there is a 10 h window at the beginning of the fermentation run in which phage addition protects final product yields; after 20 h, no additional benefit of phage addition was observed.
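A minimal sketch of the kind of differential-equation model referred to above, assuming a generic mass-action phage-host structure with placeholder parameters (growth rate, adsorption rate, burst size); the actual fermentation model in this work, which also tracks yeast and product, may differ.

```python
# Generic phage-host ODE sketch: bacteria B and phage P with logistic
# growth, mass-action infection, and a burst size. All parameter values
# are illustrative placeholders, not fitted fermentation parameters.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 0.5, 1e9        # bacterial growth rate (1/h), carrying capacity (CFU/mL)
delta = 1e-9           # adsorption rate constant (mL/(PFU*h))
beta, m = 100.0, 0.1   # burst size (PFU/cell), phage decay rate (1/h)

def rhs(t, y):
    B, P = y
    dB = r * B * (1.0 - B / K) - delta * B * P   # growth minus infection
    dP = beta * delta * B * P - m * P            # bursts minus decay
    return [dB, dP]

sol = solve_ivp(rhs, (0.0, 48.0), [1e6, 1e7], method="LSODA",
                t_eval=np.linspace(0.0, 48.0, 7))   # a 48 h run
print("t (h):    ", sol.t)
print("bacteria: ", sol.y[0].round(0))
print("phage:    ", sol.y[1].round(0))
```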

In conclusion, this dissertation improved the current methods for designing antisense gene silencing targets for prokaryotic organisms, and characterized phages from an engineering perspective. First, the current design strategy for antisense targets in prokaryotic organisms was improved through the development of an algorithm that minimized the number of off-targets. For the phage work, a framework was developed to predict the disinfection rates in terms of the initial phage and bacterial concentrations. In addition, the long-term bacterial growth inhibition potential of multiple phages was determined for several bacteria. In regard to the phage application, phages were shown to protect both final product yields and yeast concentrations during fermentation. Taken together, this work suggests that the rational design of phage treatment is possible and further work is needed to expand on this foundation.

Relevance: 80.00%

Abstract:

We consider a stochastic process driven by a linear ordinary differential equation whose right-hand side switches at exponential times between a collection of different matrices. We construct planar examples that switch between two matrices where the individual matrices and the average of the two matrices are all Hurwitz (all eigenvalues have strictly negative real part), but nonetheless the process goes to infinity at large time for certain values of the switching rate. We further construct examples in higher dimensions where again the two individual matrices and their average are all Hurwitz, but the process has arbitrarily many transitions between going to zero and going to infinity at large time as the switching rate varies. In order to construct these examples, we first prove in general that if each of the individual matrices is Hurwitz, then the process goes to zero at large time for sufficiently slow switching, and if the average matrix is Hurwitz, then the process goes to zero at large time for sufficiently fast switching. We also give simple conditions that ensure the process goes to zero at large time for all switching rates.
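A simulation sketch of the setup described above, with an illustrative pair of Hurwitz matrices of my own choosing (not the paper's construction): between exponentially distributed switching times the state flows by a matrix exponential, and the long-time growth rate of log|X(t)| is estimated as the switching rate varies.

```python
# Simulate X'(t) = A_{J(t)} X(t), where J alternates between two Hurwitz
# matrices after exponential holding times of rate lam. The matrices are
# an illustrative pair, not the construction from the paper.
import numpy as np
from scipy.linalg import expm

A = [np.array([[-0.1,   1.0], [-10.0, -0.1]]),
     np.array([[-0.1,  10.0], [ -1.0, -0.1]])]   # both Hurwitz

def growth_rate(lam, T=500.0, seed=0):
    """Estimate (1/t) * log|X(t)|; negative means decay, positive blow-up."""
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0])
    t, j, log_norm = 0.0, 0, 0.0
    while t < T:
        tau = rng.exponential(1.0 / lam)   # exponential holding time
        x = expm(A[j] * tau) @ x           # flow under the current matrix
        nrm = np.linalg.norm(x)
        log_norm += np.log(nrm)            # accumulate growth, renormalize
        x, t, j = x / nrm, t + tau, 1 - j  # switch to the other matrix
    return log_norm / t

for lam in (0.1, 1.0, 10.0, 100.0):        # the sign may change with lam
    print(f"rate {lam:6.1f}: growth {growth_rate(lam):+.3f}")
```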

Relevance: 80.00%

Abstract:

We introduce a dynamic directional model (DDM) for studying brain effective connectivity based on intracranial electrocorticographic (ECoG) time series. The DDM consists of two parts: a set of differential equations describing the neuronal activity of brain components (state equations), and observation equations linking the underlying neuronal states to observed data. When applied to functional MRI or EEG data, DDMs usually have complex formulations and can thus accommodate only a few regions, due to limitations in the spatial and/or temporal resolution of these imaging modalities. In contrast, we formulate our model in the context of ECoG data. The combined high temporal and spatial resolution of ECoG data results in a much simpler DDM, allowing the investigation of complex connections between many regions. To identify functionally segregated sub-networks, a form of biologically economical brain network, we propose the Potts model for the DDM parameters. The neuronal states of brain components are represented by cubic spline bases, and the parameters are estimated by minimizing a log-likelihood criterion that combines the state and observation equations. The Potts model is converted to a Potts penalty in a penalized regression approach to achieve sparsity in parameter estimation, for which a fast iterative algorithm is developed. The methods are applied to an auditory ECoG dataset.
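A much-reduced sketch of the estimation machinery: a latent state is represented in a cubic B-spline basis and the coefficients are estimated by penalized least squares. For simplicity the sketch uses an L1 penalty solved by proximal gradient (ISTA) as a stand-in for the Potts penalty and the fast iterative algorithm developed in the paper.

```python
# Sketch: represent a signal with a cubic B-spline basis and estimate the
# coefficients by penalized least squares. The L1 penalty here is a simple
# stand-in for the Potts penalty described in the abstract.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)  # toy signal

# Cubic B-spline design matrix with equally spaced interior knots.
k = 3
interior = np.linspace(0.0, 1.0, 12)[1:-1]
knots = np.concatenate([[0.0] * (k + 1), interior, [1.0] * (k + 1)])
nbasis = len(knots) - k - 1
B = np.column_stack([BSpline(knots, np.eye(nbasis)[i], k)(t)
                     for i in range(nbasis)])

# Proximal gradient (ISTA) for: min_c 0.5*||y - B c||^2 + lam*||c||_1
lam = 1.0
step = 1.0 / np.linalg.norm(B, 2) ** 2       # 1 / Lipschitz constant
c = np.zeros(nbasis)
for _ in range(500):
    z = c - step * (B.T @ (B @ c - y))       # gradient step
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("nonzero coefficients:", np.count_nonzero(c), "of", nbasis)
print("residual RMS:", np.sqrt(np.mean((y - B @ c) ** 2)).round(3))
```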