966 results for Inverse problems (Differential equations)
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated ensemble Kalman–Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without incurring additional computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. The advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble separates into an outlier and a cluster of M−1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model.
The RAW filter is an improvement to the widely used Robert-Asselin filter: it successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.
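As an illustration of the filtering idea discussed above, the following is a minimal sketch of leapfrog time stepping with a RAW-style filter, in the standard Williams formulation with illustrative parameter values. This is not the SPEEDY implementation; the function names and defaults are assumptions.

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog time stepping with a Robert-Asselin-Williams-style filter.

    nu is the filter strength; alpha = 1 recovers the classical
    Robert-Asselin filter, while alpha slightly above 0.5 (Williams'
    suggestion) avoids distorting the mean value of the solution.
    """
    x_prev = np.asarray(x0, dtype=float)       # x_{n-1} (filtered)
    x_curr = x_prev + dt * f(x_prev)           # first step: forward Euler
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_prev = x_curr + alpha * d            # nudge the current level ...
        x_curr = x_next - (1.0 - alpha) * d    # ... and the new level oppositely
    return x_curr

# Harmonic oscillator: the filter suppresses the spurious computational
# mode while nearly preserving the physical amplitude (norm stays near 1).
f = lambda x: np.array([-x[1], x[0]])
x_end = leapfrog_raw(f, [1.0, 0.0], dt=0.01, nsteps=1000)
```

With the classical Robert-Asselin choice alpha = 1 the same code exhibits the well-known weak damping of the physical mode, which is the distortion the RAW modification is designed to remove.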
Abstract:
We introduce a new algorithm for source identification and field splitting based on the point source method (Potthast 1998 A point-source method for inverse acoustic and electromagnetic obstacle scattering problems IMA J. Appl. Math. 61 119–40; Potthast 1996 A fast new method to solve inverse scattering problems Inverse Problems 12 731–42). The task is to separate the sound fields uj, j = 1, ..., n, of sound sources supported in different bounded domains G1, ..., Gn in ℝ3 from measurements of the field on some microphone array, mathematically speaking from the knowledge of the sum of the fields u = u1 + ... + un on some open subset Λ of a plane. The main idea of the scheme is to calculate filter functions with which uℓ, ℓ = 1, ..., n, is constructed from u|Λ by an explicit filtering formula. We will provide the complete mathematical theory for the field splitting via the point source method. In particular, we describe uniqueness and solvability of the problem and convergence and stability of the algorithm. In the second part we describe the practical realization of the splitting for real data measurements carried out at the Institute of Sound and Vibration Research in Southampton, UK. A practical demonstration of the original recording and the splitting results for real data is available online.
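The notation of the filtering formula was lost in this rendering of the abstract. Writing gℓ for a generic filter function (the symbol is an assumption, not the paper's notation), the splitting step has the schematic form

```latex
u_\ell(x) \;=\; \int_{\Lambda} g_\ell(x,y)\, u(y)\, \mathrm{d}s(y),
\qquad \ell = 1, \dots, n,
```

i.e. each source field uℓ is recovered by integrating the measured total field over the measurement patch Λ against a precomputed kernel.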
Abstract:
The goal of this paper is to study and further develop the orthogonality sampling or stationary waves algorithm for the detection of the location and shape of objects from the far field pattern of scattered waves in electromagnetics or acoustics. Orthogonality sampling can be seen as a special beam forming algorithm with links to the point source method and to the linear sampling method. The basic idea of orthogonality sampling is to sample the space under consideration by calculating scalar products of the measured far field pattern with a test function for all sampling points y in a subset Q of the space ℝm, m = 2, 3. The way in which this is carried out is important for extracting the information that the scattered fields contain. The theoretical foundation of orthogonality sampling is only partly resolved, and the goal of this work is to initiate further research by numerical demonstration of the high potential of the approach. We implement the method in a two-dimensional setting for the Helmholtz equation, which represents electromagnetic scattering when the setup is independent of the third coordinate. We show reconstructions of the location and shape of objects from measurements of the scattered field for one or several directions of incidence and one or many frequencies or wave numbers, respectively. In particular, we visualize the indicator function both for the Dirichlet and the Neumann boundary condition and for complicated inhomogeneous media.
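The symbols of the scalar product were also lost in this rendering. In the generic notation commonly used for this method (an assumption here, not copied from the paper), the indicator function evaluated at a sampling point y is built from the far field pattern u∞ via

```latex
\mu(y) \;=\; \left| \int_{\mathbb{S}^{m-1}} u^{\infty}(\hat{x})\,
e^{\,\mathrm{i}\kappa\,\hat{x}\cdot y}\, \mathrm{d}s(\hat{x}) \right|,
\qquad y \in Q \subset \mathbb{R}^m,\ m = 2, 3,
```

with κ the wave number; large values of μ indicate points associated with the scatterer.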
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
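For orientation, a standard strong-constraint 4DVar cost function (in generic textbook notation, not taken from this paper) already has the Tikhonov form

```latex
J(x_0) \;=\; \underbrace{\|x_0 - x_b\|_{B^{-1}}^{2}}_{\text{background / regularization}}
\;+\; \sum_{k=0}^{N} \bigl\| y_k - H_k\, M_{0 \to k}(x_0) \bigr\|_{R_k^{-1}}^{2},
```

where x_b is the background state, B and R_k the background and observation error covariances, M the model propagator and H_k the observation operators. The approaches discussed in the paper modify the first (regularization) term, replacing the L2 penalty by an L1 or mixed TV L1–L2 penalty to better preserve sharp fronts.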
Abstract:
We investigate the error dynamics of cycled data assimilation systems, in which the inverse problem of state determination is solved at times tk, k = 1, 2, 3, ..., with a first guess given by the state propagated via a dynamical system model from time tk−1 to time tk. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ||ek|| := ||xk(a) − xk(t)|| between the estimated state xk(a) and the true state xk(t) over time. Clearly, an observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ||ek||, depending on the size δ of the observation error, the reconstruction operator Rα, the observation operator H, and the Lipschitz constants K(1) and K(2) controlling the damping behaviour of the dynamics on the lower and higher modes. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c||Rα||δ with some constant c. Since ||Rα|| → ∞ for α → 0, this constant might be large. Numerical examples of this behaviour in the nonlinear case are provided using the (low-dimensional) Lorenz '63 system.
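A hypothetical scalar analogue makes the stability dichotomy concrete. With a model growth factor M, a gain-like reconstruction weight K and worst-case observation error δ, the deterministic recursion e_k = |1 − K|·M·e_{k−1} + K·δ is bounded precisely when |1 − K|·M < 1. All names and values here are illustrative, not the paper's notation:

```python
def cycled_error(M=2.0, K=0.8, delta=0.1, nsteps=50):
    """Worst-case error recursion for a toy scalar cycled DA system.

    e_k = |1 - K| * M * e_{k-1} + K * delta.
    Bounded iff the contraction factor |1 - K| * M < 1, in which case
    the error settles at the fixed point K * delta / (1 - |1 - K| * M).
    """
    e = 0.0
    for _ in range(nsteps):
        e = abs(1.0 - K) * M * e + K * delta
    return e

stable = cycled_error(M=2.0, K=0.8)    # factor 0.4 < 1: error stays bounded
unstable = cycled_error(M=2.0, K=0.2)  # factor 1.6 > 1: error accumulates
```

The bounded case mirrors the trade-off in the abstract: a larger reconstruction weight stabilizes the cycle but multiplies the observation error δ into the bound.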
Abstract:
Brain activity can be measured non-invasively with functional imaging techniques. Each pixel in such an image represents a neural mass of about 10⁵ to 10⁷ neurons. Mean field models (MFMs) approximate their activity by averaging out neural variability while retaining salient underlying features, like neurotransmitter kinetics. However, MFMs incorporating the regional variability, realistic geometry and connectivity of cortex have so far appeared intractable. This lack of biological realism has led to a focus on gross temporal features of the EEG. We address these impediments and showcase a "proof of principle" forward prediction of co-registered EEG/fMRI for a full-size human cortex in a realistic head model with anatomical connectivity, see figure 1. MFMs usually assume homogeneous neural masses, isotropic long-range connectivity and simplistic signal expression to allow rapid computation with partial differential equations. But these approximations are insufficient in particular for the high spatial resolution obtained with fMRI, since different cortical areas vary in their architectonic and dynamical properties, have complex connectivity, and can contribute non-trivially to the measured signal. Our code instead supports the local variation of model parameters and freely chosen connectivity for many thousand triangulation nodes spanning a cortical surface extracted from structural MRI. This allows the introduction of realistic anatomical and physiological parameters for cortical areas and their connectivity, including both intra- and inter-area connections. Proper cortical folding and conduction through a realistic head model is then added to obtain accurate signal expression for a comparison to experimental data. To showcase the synergy of these computational developments, we simultaneously predict EEG and fMRI BOLD responses by adding an established model of neurovascular coupling and convolving "Balloon-Windkessel" hemodynamics.
We also incorporate regional connectivity extracted from the CoCoMac database [1]. Importantly, these extensions can be easily adapted according to future insights and data. Furthermore, while our own simulation is based on one specific MFM [2], the computational framework is general and can be applied to models favored by the user. Finally, we provide a brief outlook on improving the integration of multi-modal imaging data through iterative fits of a single underlying MFM in this realistic simulation framework.
Abstract:
Changes to the electroencephalogram (EEG) observed during general anesthesia are modeled with a physiological mean field theory of electrocortical activity. To this end a parametrization of the postsynaptic impulse response is introduced which takes into account pharmacological effects of anesthetic agents on neuronal ligand-gated ionic channels. Parameter sets for this improved theory are then identified which respect known anatomical constraints and predict mean firing rates and power spectra typically encountered in human subjects. Through parallelized simulations of the eight nonlinear, two-dimensional partial differential equations on a grid representing an entire human cortex, it is demonstrated that linear approximations are sufficient for the prediction of a range of quantitative EEG variables. More than 70 000 plausible parameter sets are finally selected and subjected to a simulated induction with the stereotypical inhaled general anesthetic isoflurane. Thereby 86 parameter sets are identified that exhibit a strong “biphasic” rise in total power, a feature often observed in experiments. A sensitivity study suggests that this “biphasic” behavior is distinguishable even at low agent concentrations. Finally, our results are briefly compared with previous work by other groups and an outlook on future fits to experimental data is provided.
Abstract:
The long time evolution of disturbances to slowly varying solutions of partial differential equations is subject to the adiabatic invariance of the wave action. Generally, this approximate conservation law is obtained under the assumption that the partial differential equations are derived from a variational principle or have a canonical Hamiltonian structure. Here, wave action conservation is examined for equations that possess a non-canonical (Poisson) Hamiltonian structure. The linear evolution of disturbances in the form of slowly varying wavetrains is studied using a WKB expansion. The properties of the original Hamiltonian system strongly constrain the linear equations that are derived, and this is shown to lead to the adiabatic invariance of a wave action. The connection between this (approximate) invariance and the (exact) conservation laws of pseudo-energy and pseudo-momentum that exist when the basic solution is exactly time and space independent is discussed. An evolution equation for the slowly varying phase of the wavetrain is also derived and related to Berry's phase.
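Schematically, in generic notation (not the paper's), for a slowly varying wavetrain with local wave energy E and intrinsic frequency ω̂ the adiabatically conserved wave action and its conservation law take the familiar form

```latex
A \;=\; \frac{E}{\hat{\omega}}, \qquad
\frac{\partial A}{\partial t} \;+\; \nabla \cdot \bigl( \mathbf{c}_g\, A \bigr) \;=\; 0,
```

with c_g the group velocity; the paper's contribution is to recover an invariant of this type without assuming a variational or canonical Hamiltonian origin for the equations.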
Abstract:
We establish Maximum Principles which apply to vectorial approximate minimizers of general integral functionals of the Calculus of Variations. Our main result is a version of the Convex Hull Property. The primary advance compared to results already existing in the literature is that we have dropped the quasiconvexity assumption on the integrand in the gradient term. The lack of weak lower semicontinuity is compensated by introducing a nonlinear convergence technique, based on the approximation of the projection onto a convex set by reflections and on the invariance of the integrand in the gradient term under the Orthogonal Group. Maximum Principles are deduced for the relaxed solution in the case of non-existence of minimizers and for minimizing solutions of the Euler–Lagrange system of PDEs.
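In generic notation (an illustration, not copied from the paper), the Convex Hull Property asserts that a vectorial minimizer u of an integral functional over maps with boundary values g satisfies

```latex
u(x) \;\in\; \overline{\operatorname{co}}\,\bigl( g(\partial\Omega) \bigr)
\qquad \text{for a.e. } x \in \Omega,
```

i.e. the essential range of u is confined to the closed convex hull of its boundary values, which is the natural vectorial substitute for the scalar maximum principle.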
Abstract:
A mathematical model incorporating many of the important processes at work in the crystallization of emulsions is presented. The model describes nucleation within the discontinuous domain of an emulsion, precipitation in the continuous domain, transport of monomers between the two domains, and formation and subsequent growth of crystals in both domains. The model is formulated as an autonomous system of nonlinear, coupled ordinary differential equations. The description of nucleation and precipitation is based upon the Becker–Döring equations of classical nucleation theory. A particular feature of the model is that the number of particles of all species present is explicitly conserved; this differs from work that employs Arrhenius descriptions of the nucleation rate. Since the model includes many physical effects, it is analyzed in stages so that the role of each process may be understood. When precipitation occurs in the continuous domain, the concentration of monomers falls below the equilibrium concentration at the surface of the drops of the discontinuous domain. This leads to a transport of monomers from the drops into the continuous domain, where they are incorporated into crystals and nuclei. Since the formation of crystals is irreversible and their subsequent growth inevitable, crystals forming in the continuous domain effectively act as a sink for monomers, “sucking” monomers from the drops. In this case, numerical calculations are presented which are consistent with experimental observations. In the case in which critical crystal formation does not occur, the stationary solution is found and a linear stability analysis is performed. Bifurcation diagrams describing the loci of stationary solutions, which may be multiple, are numerically calculated.
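For reference, the classical Becker–Döring equations on which the nucleation description is based read (in standard notation: c_r is the concentration of r-particle clusters, a_r and b_r the aggregation and fragmentation coefficients)

```latex
\frac{\mathrm{d}c_r}{\mathrm{d}t} \;=\; J_{r-1} - J_r \quad (r \ge 2),
\qquad J_r \;=\; a_r\, c_1 c_r \;-\; b_{r+1}\, c_{r+1},
\qquad \frac{\mathrm{d}c_1}{\mathrm{d}t} \;=\; -J_1 - \sum_{r \ge 1} J_r,
```

where the monomer equation is exactly what enforces the explicit conservation of the total number of particles emphasized in the abstract.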
Abstract:
A key step in many numerical schemes for time-dependent partial differential equations with moving boundaries is to rescale the problem to a fixed numerical mesh. An alternative approach is to use a moving mesh that can be adapted to focus on specific features of the model. In this paper we present and discuss two different velocity-based moving mesh methods applied to a two-phase model of avascular tumour growth formulated by Breward et al. (2002) J. Math. Biol. 45(2), 125-152. Each method has one moving node which tracks the moving boundary. The first moving mesh method uses a mesh velocity proportional to the boundary velocity. The second moving mesh method uses local conservation of volume fraction of cells (masses). Our results demonstrate that these moving mesh methods produce accurate results, offering higher resolution where desired whilst preserving the balance of fluxes and sources in the governing equations.
Abstract:
Neural stem cells (NSCs) are early precursors of neuronal and glial cells. NSCs are capable of generating identical progeny through virtually unlimited numbers of cell divisions (cell proliferation), producing daughter cells committed to differentiation. Nuclear factor kappa B (NF-kappaB) is an inducible, ubiquitous transcription factor also expressed in neurones, glia and neural stem cells. Recently, several pieces of evidence have been provided for a central role of NF-kappaB in NSC proliferation control. Here, we propose a novel mathematical model for NF-kappaB-driven proliferation of NSCs. We have been able to reconstruct the molecular pathway of activation and inactivation of NF-kappaB and its influence on cell proliferation by a system of nonlinear ordinary differential equations. We then use a combination of analytical and numerical techniques to study the model dynamics. The results obtained are illustrated by computer simulations and are, in general, in accordance with biological findings reported by several independent laboratories. The model is able both to explain and to predict experimental data. Understanding the proliferation mechanisms of NSCs may provide a novel outlook for both therapeutic applications and basic research.
Abstract:
In visual tracking experiments, distributions of the relative phase between target and tracer showed a positive relative phase, indicating that the tracer precedes the target position. We found a mode transition from the reactive to the anticipatory mode. The proposed integrated model provides a framework for understanding the anticipatory behaviour of humans, focusing on the integration of visual and somatosensory information. The time delays in visual processing and somatosensory feedback are treated explicitly in the simultaneous differential equations. The anticipatory behaviour observed in the visual tracking experiments can be explained by the feedforward term of target velocity, the internal dynamics, and the time delay in somatosensory feedback.
Abstract:
We establish a uniform factorial decay estimate for the Taylor approximation of solutions to controlled differential equations. Its proof requires a factorial decay estimate for controlled paths which is interesting in its own right.
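The archetype of such an estimate, stated here in the simpler setting of a path x of bounded variation rather than the controlled-path framework of the paper, is the factorial decay of iterated integrals:

```latex
\left\| \int_{s < u_1 < \cdots < u_n < t} \mathrm{d}x_{u_1} \otimes \cdots \otimes \mathrm{d}x_{u_n} \right\|
\;\le\; \frac{\|x\|_{1\text{-var};[s,t]}^{\,n}}{n!},
```

and it is this 1/n! decay in the degree n that makes Taylor-type expansions of solutions summable.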
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results yield immediately the corresponding results for pathwise solutions to stochastic differential equations driven by such Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such Gaussian process.