992 results for Dynamical model
Abstract:
A mathematical model is presented to understand heat transfer processes during the cooling and re-warming of patients during cardiac surgery. Our compartmental model is able to account for many of the qualitative features observed in the cooling of various regions of the body, including the central core containing the majority of organs, the rectal region containing the intestines, and the outer peripheral region of skin and muscle. In particular, we focus on the issue of afterdrop: a drop in core temperature following patient re-warming, which can lead to serious post-operative complications. Model results for a typical cooling and re-warming procedure during surgery are in qualitative agreement with experimental data in producing the afterdrop effect and the observed dynamical variation in temperature between the core, rectal and peripheral regions. The influence of heat transfer processes and the volume of each compartmental region on the afterdrop effect is discussed. We find that excess fat on the peripheral and rectal regions leads to an increase in the afterdrop effect. Our model predicts that, by allowing constant re-warming after the core temperature has been raised, the afterdrop effect will be reduced.
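Since the abstract describes a compartmental (lumped) heat-balance model, a minimal sketch of that kind of model may help fix ideas. The three-compartment structure, the rate constants and the bypass schedule below are illustrative assumptions, not the authors' published equations.

```python
# Minimal sketch of a three-compartment lumped heat-balance model
# (illustrative structure and coefficients; not the authors' published equations).
import numpy as np

def rhs(T, T_blood, k_bypass, k_cr, k_cp):
    """Rate of change of core, rectal and peripheral temperatures (deg C per min).

    The core exchanges heat with the bypass blood and with the other two
    compartments; the rectal and peripheral regions exchange heat with the
    core only, at rates set by their (fat-dependent) conductances.
    """
    Tc, Tr, Tp = T
    dTc = k_bypass * (T_blood - Tc) + k_cr * (Tr - Tc) + k_cp * (Tp - Tc)
    dTr = k_cr * (Tc - Tr)
    dTp = k_cp * (Tc - Tp)
    return np.array([dTc, dTr, dTp])

# Cool the bypass blood to 28 C for 60 min, re-warm at 37 C for 60 min,
# then disconnect the bypass and watch the core temperature sag (afterdrop).
T, dt, history = np.array([37.0, 37.0, 37.0]), 1.0, []
for minute in range(240):
    if minute < 60:
        T_blood, k_bypass = 28.0, 0.10
    elif minute < 120:
        T_blood, k_bypass = 37.0, 0.10
    else:
        T_blood, k_bypass = 37.0, 0.0   # bypass disconnected
    T = T + dt * rhs(T, T_blood, k_bypass, k_cr=0.02, k_cp=0.01)
    history.append(T.copy())
```

In this toy version the rectal and peripheral compartments lag well behind the core, so once the bypass is removed the still-cold outer compartments draw heat back out of the core, qualitatively reproducing the afterdrop effect discussed above.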
Abstract:
This paper introduces a new neurofuzzy model construction and parameter estimation algorithm from observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace, to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by the derivation of an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, whereby it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality problem. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
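As a rough illustration of how a rule's weighting matrix can be scored for identifiability, the sketch below ranks candidate rules by the A-optimality quantity trace((Phi^T Phi)^{-1}) of their weighted regression matrices. The Gaussian membership functions, the synthetic data and the choice of scoring the weighted regression matrix directly are assumptions made here for illustration; they are one plausible reading of the criterion in the abstract, not the paper's exact construction.

```python
# Sketch: rank candidate fuzzy rules by an A-optimality score
# (illustrative reading of the criterion; synthetic data and memberships).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # input regression matrix
centres = np.linspace(-1.0, 1.0, 5)      # one Gaussian membership per candidate rule

def a_optimality_score(X, membership):
    """trace((Phi^T Phi)^{-1}) for the rule's weighted regression matrix Phi."""
    Phi = membership[:, None] * X        # rows weighted by fuzzy membership
    return np.trace(np.linalg.inv(Phi.T @ Phi))

scores = []
for c in centres:
    membership = np.exp(-0.5 * ((X[:, 0] - c) / 0.5) ** 2)
    scores.append(a_optimality_score(X, membership))

# Rules with the smallest score are the most identifiable and would be kept
# in the initial rule-base; parameters are then estimated rule by rule.
ranked = np.argsort(scores)
print(ranked, np.round(scores, 3))
```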
Abstract:
We consider a non-local version of the NJL model, based on a separable quark-quark interaction. The interaction is extended to include terms that bind vector and axial-vector mesons. The non-locality means that no further regulator is required. Moreover the model is able to confine the quarks by generating a quark propagator without poles at real energies. Working in the ladder approximation, we calculate amplitudes in Euclidean space and discuss features of their continuation to Minkowski energies. Conserved currents are constructed and we demonstrate their consistency with various Ward identities. Various meson masses are calculated, along with their strong and electromagnetic decay amplitudes. We also calculate the electromagnetic form factor of the pion, as well as form factors associated with the processes γγ* → π0 and ω → π0γ*. The results are found to lead to a satisfactory phenomenology and lend some dynamical support to the idea of vector-meson dominance.
Abstract:
A nonlocal version of the NJL model is investigated. It is based on a separable quark-quark interaction, as suggested by the instanton liquid picture of the QCD vacuum. The interaction is extended to include terms that bind vector and axial-vector mesons. The nonlocality means that no further regulator is required. Moreover the model is able to confine the quarks by generating a quark propagator without poles at real energies. Features of the continuation of amplitudes from Euclidean space to Minkowski energies are discussed. These features lead to restrictions on the model parameters as well as on the range of applicability of the model. Conserved currents are constructed, and their consistency with various Ward identities is demonstrated. In particular, the Gell-Mann-Oakes-Renner relation is derived both in the ladder approximation and at meson loop level. The importance of maintaining chiral symmetry in the calculations is stressed throughout. Calculations with the model are performed to all orders in momentum. Meson masses are determined, along with their strong and electromagnetic decay amplitudes. Also calculated are the electromagnetic form factor of the pion and form factors associated with the processes γγ* → π0 and ω → π0γ*. The results are found to lead to a satisfactory phenomenology and demonstrate a possible dynamical origin for vector-meson dominance. In addition, the results produced at meson loop level validate the use of 1/Nc as an expansion parameter and indicate that a light and broad scalar state is inherent in models of the NJL type.
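For reference, the Gell-Mann-Oakes-Renner relation mentioned above ties the pion mass and decay constant to the current quark masses and the quark condensate; in its standard leading-order form it reads

```latex
f_\pi^{2}\, m_\pi^{2} \;=\; -\,(m_u + m_d)\,\langle \bar{q} q \rangle \;+\; \mathcal{O}(m_q^{2}) ,
```

so verifying it both in the ladder approximation and at meson loop level is a direct check that the model's approximations respect chiral symmetry.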
Abstract:
The climatology of a stratosphere-resolving version of the Met Office’s climate model is studied and validated against ECMWF reanalysis data. Ensemble integrations are carried out at two different horizontal resolutions. Along with a realistic climatology and annual cycle in zonal mean zonal wind and temperature, several physical effects are noted in the model. The time of final warming of the winter polar vortex is found to descend monotonically in the Southern Hemisphere, as would be expected for purely radiative forcing. In the Northern Hemisphere, however, the time of final warming is driven largely by dynamical effects in the lower stratosphere and radiative effects in the upper stratosphere, leading to the earliest transition to westward winds being seen in the midstratosphere. A realistic annual cycle in stratospheric water vapor concentrations—the tropical “tape recorder”—is captured. Tropical variability in the zonal mean zonal wind is found to be in better agreement with the reanalysis for the model run at higher horizontal resolution because the simulated quasi-biennial oscillation has a more realistic amplitude. Unexpectedly, variability in the extratropics becomes less realistic under increased resolution because of reduced resolved wave drag and increased orographic gravity wave drag. Overall, the differences in climatology between the simulations at high and moderate horizontal resolution are found to be small.
Abstract:
We investigate a simplified form of variational data assimilation in a fully nonlinear framework with the aim of extracting dynamical development information from a sequence of observations over time. Information on the vertical wind profile, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, is converted to brightness temperatures at a single horizontal location by defining a two-dimensional (vertical and time) variational assimilation testbed. The profiles of T and qt are updated using a vertical advection scheme. A basic cloud scheme is used to obtain the fractional cloud amount and, when combined with the temperature field, this information is converted into a brightness temperature, using a simple radiative transfer scheme. It is shown that our model exhibits realistic behaviour with regard to the prediction of cloud, but the effects of nonlinearity become non-negligible in the variational data assimilation algorithm. A careful analysis of the application of the data assimilation scheme to this nonlinear problem is presented, the salient difficulties are highlighted, and suggestions for further developments are discussed.
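A minimal sketch of the kind of vertical-advection update used to evolve the T and qt profiles is given below. The abstract does not specify the differencing scheme, so the first-order upwind form, the grid and the prescribed profiles are assumptions made purely for illustration.

```python
# Sketch of one first-order upwind vertical-advection step for the T and qt
# profiles (the abstract only says "a vertical advection scheme"; the upwind
# differencing, grid and profiles here are illustrative assumptions).
import numpy as np

def advect_upwind(field, w, dz, dt):
    """Advance a vertical profile by one time step under vertical wind w(z)."""
    new = field.copy()
    for k in range(1, len(field) - 1):
        if w[k] >= 0.0:                      # updraught: take gradient from below
            grad = (field[k] - field[k - 1]) / dz
        else:                                # downdraught: take gradient from above
            grad = (field[k + 1] - field[k]) / dz
        new[k] = field[k] - dt * w[k] * grad
    return new

z = np.linspace(0.0, 10e3, 51)               # height levels (m)
dz, dt = z[1] - z[0], 60.0                   # grid spacing (m), time step (s)
T = 288.0 - 6.5e-3 * z                       # idealised temperature profile (K)
qt = 0.01 * np.exp(-z / 2000.0)              # idealised total-water profile (kg/kg)
w = 0.5 * np.sin(np.pi * z / z[-1])          # prescribed vertical wind (m/s)

for _ in range(60):                          # one hour of advection
    T, qt = advect_upwind(T, w, dz, dt), advect_upwind(qt, w, dz, dt)
```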
Abstract:
The problem of state estimation occurs in many applications of fluid flow. For example, to produce a reliable weather forecast it is essential to find the best possible estimate of the true state of the atmosphere. To find this best estimate a nonlinear least squares problem has to be solved subject to dynamical system constraints. Usually this is solved iteratively by an approximate Gauss–Newton method where the underlying discrete linear system is in general unstable. In this paper we propose a new method for deriving low order approximations to the problem based on a recently developed model reduction method for unstable systems. To illustrate the theoretical results, numerical experiments are performed using a two-dimensional Eady model – a simple model of baroclinic instability, which is the dominant mechanism for the growth of storms at mid-latitudes. It is a suitable test model to show the benefit that may be obtained by using model reduction techniques to approximate unstable systems within the state estimation problem.
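In standard notation, the constrained nonlinear least-squares problem and the approximate Gauss-Newton iteration described above can be written schematically as follows (generic symbols, not necessarily those of the paper):

```latex
\min_{x_0}\; J(x_0) \;=\; \tfrac{1}{2}\,(x_0 - x^{b})^{\mathrm T} B^{-1} (x_0 - x^{b})
\;+\; \tfrac{1}{2}\sum_{i=0}^{N}
\big(y_i - \mathcal{H}_i(\mathcal{M}_{0\to i}(x_0))\big)^{\mathrm T} R_i^{-1}
\big(y_i - \mathcal{H}_i(\mathcal{M}_{0\to i}(x_0))\big),
\qquad
x_0^{(k+1)} \;=\; x_0^{(k)} + \delta x^{(k)},
\quad
\delta x^{(k)} \;=\; \arg\min_{\delta x}\;
\tfrac{1}{2}\,\big\| \delta x - (x^{b} - x_0^{(k)}) \big\|_{B^{-1}}^{2}
\;+\; \tfrac{1}{2}\sum_{i=0}^{N}
\big\| \mathbf{H}_i \mathbf{M}_{0\to i}\,\delta x - d_i^{(k)} \big\|_{R_i^{-1}}^{2},
```

where the dynamical constraints enter through the model propagator $\mathcal{M}_{0\to i}$, $\mathbf{H}_i$ and $\mathbf{M}_{0\to i}$ are linearisations about the current iterate, and $d_i^{(k)}$ are the innovations; the model-reduction step proposed in the paper replaces the (generally unstable) linearised propagator inside the inner minimisation with a low-order approximation.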
Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary part of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansatz to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
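The central spectral relation described in the abstract can be written compactly as follows, with $P_A(\omega)$ the power spectrum of an observable $A$, $\sigma^2$ the variance of the white-noise forcing, and $\chi_A(\omega)$ the linear susceptibility of $A$ to deterministic forcing with the same spatial pattern as the noise:

```latex
\mathbb{E}_{\sigma}\!\left[ P_A(\omega) \right] \;-\; \mathbb{E}_{0}\!\left[ P_A(\omega) \right]
\;=\; \sigma^{2}\,\left| \chi_A(\omega) \right|^{2} \;\ge\; 0 .
```

The non-negativity of the right-hand side is what gives the stated result that the stochastically perturbed system has a larger power spectrum at all frequencies, while Kramers-Kronig theory applied to $\chi_A(\omega)$ recovers the Green function of the unperturbed system.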
Abstract:
Data assimilation aims to incorporate measured observations into a dynamical system model in order to produce accurate estimates of all the current (and future) state variables of the system. The optimal estimates minimize a variational principle and can be found using adjoint methods. The model equations are treated as strong constraints on the problem. In reality, the model does not represent the system behaviour exactly and errors arise due to lack of resolution and inaccuracies in physical parameters, boundary conditions and forcing terms. A technique for estimating systematic and time-correlated errors as part of the variational assimilation procedure is described here. The modified method determines a correction term that compensates for model error and leads to improved predictions of the system states. The technique is illustrated in two test cases. Applications to the 1-D nonlinear shallow water equations demonstrate the effectiveness of the new procedure.
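One simple instance of such a model-error correction term, written here only to illustrate the augmented-state idea (the constant-bias form; the technique described above also treats systematic, time-correlated error evolution), is

```latex
x_{k+1} \;=\; \mathcal{M}_k(x_k) \;+\; e_k ,
\qquad
e_{k+1} \;=\; e_k ,
```

with the augmented state $(x_0, e_0)$ estimated by minimising the usual variational cost, so that the estimated correction $e_k$ compensates for the systematic part of the model error over the assimilation window.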
Abstract:
Cloud imagery is not currently used in numerical weather prediction (NWP) to extract the type of dynamical information that experienced forecasters have extracted subjectively for many years. For example, rapidly developing mid-latitude cyclones have characteristic signatures in the cloud imagery that are most fully appreciated from a sequence of images rather than from a single image. The Met Office is currently developing a technique to extract dynamical development information from satellite imagery using their full incremental 4D-Var (four-dimensional variational data assimilation) system. We investigate a simplified form of this technique in a fully nonlinear framework. We convert information on the vertical wind field, w(z), and profiles of temperature, T(z, t), and total water content, qt(z, t), as functions of height, z, and time, t, to a single brightness temperature by defining a 2D (vertical and time) variational assimilation testbed. The profiles of w, T and qt are updated using a simple vertical advection scheme. We define a basic cloud scheme to obtain the fractional cloud amount and, when combined with the temperature field, we convert this information into a brightness temperature using a simple radiative transfer scheme that we have developed. With the exception of some matrix inversion routines, all our code is developed from scratch. Throughout the development process we test all aspects of our 2D assimilation system, and then run identical twin experiments to try to recover information on the vertical velocity from a sequence of observations of brightness temperature. This thesis contains a comprehensive description of our nonlinear models and assimilation system, and the first experimental results.
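The abstract only says that a "simple radiative transfer scheme" maps cloud and temperature to a brightness temperature, so the single-layer grey-cloud blend below is purely an illustrative assumption of what such a forward operator might look like, not the scheme developed in the thesis.

```python
# Illustrative single-layer grey-cloud forward operator
# (an assumed form; the thesis does not specify its scheme in the abstract).
def brightness_temperature(cloud_fraction, T_cloud_top, T_surface):
    """Blend clear-sky and cloudy-sky emission in a single spectral window."""
    return cloud_fraction * T_cloud_top + (1.0 - cloud_fraction) * T_surface

# A deepening cloud layer lowers the simulated brightness temperature,
# which is the kind of signal the assimilation tries to invert for w(z).
for C in (0.0, 0.3, 0.6, 0.9):
    print(C, brightness_temperature(C, T_cloud_top=230.0, T_surface=288.0))
```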
Abstract:
The separate effects of ozone depleting substances (ODSs) and greenhouse gases (GHGs) on forcing circulation changes in the Southern Hemisphere extratropical troposphere are investigated using a version of the Canadian Middle Atmosphere Model (CMAM) that is coupled to an ocean. Circulation-related diagnostics include zonal wind, tropopause pressure, Hadley cell width, jet location, annular mode index, precipitation, wave drag, and eddy fluxes of momentum and heat. As expected, the tropospheric response to the ODS forcing occurs primarily in austral summer, with past (1960-99) and future (2000-99) trends of opposite sign, while the GHG forcing produces more seasonally uniform trends with the same sign in the past and future. In summer the ODS forcing dominates past trends in all diagnostics, while the two forcings contribute nearly equally but oppositely to future trends. The ODS forcing produces a past surface temperature response consisting of cooling over eastern Antarctica, and is the dominant driver of past summertime surface temperature changes when the model is constrained by observed sea surface temperatures. For all diagnostics, the response to the ODS and GHG forcings is additive: that is, the linear trend computed from the simulations using the combined forcings equals (within statistical uncertainty) the sum of the linear trends from the simulations using the two separate forcings. Space-time spectra of eddy fluxes and the spatial distribution of transient wave drag are examined to assess the viability of several recently proposed mechanisms for the observed poleward shift in the tropospheric jet.
Abstract:
Observations of noctilucent clouds have revealed a surprising coupling between the winter stratosphere and the summer polar mesopause region. In spite of the great distance involved, this inter-hemispheric link has been suggested to be the principal reason for both the year-to-year variability and the hemispheric differences in the frequency of occurrence of these high-altitude clouds. In this study, we investigate the dynamical influence of the winter stratosphere on the summer mesosphere using simulations from the vertically extended version of the Canadian Middle Atmosphere Model (CMAM). We find that for both Northern and Southern Hemispheres, variability in the summer polar mesopause region from one year to another can be traced back to the planetary-wave flux entering the winter stratosphere. The teleconnection pattern is the same for both positive and negative wave-flux anomalies. Using a composite analysis to isolate the events, it is argued that the mechanism for interhemispheric coupling is a feedback between summer mesosphere gravity-wave drag (GWD) and zonal wind, which is induced by an anomaly in mesospheric cross-equatorial flow, the latter arising from the anomaly in winter hemisphere GWD induced by the anomaly in stratospheric conditions.
Abstract:
A precipitation downscaling method is presented using precipitation from a general circulation model (GCM) as predictor. The method extends a previous method from monthly to daily temporal resolution. The simplest form of the method corrects for biases in wet-day frequency and intensity. A more sophisticated variant also takes account of flow-dependent biases in the GCM. The method is flexible and simple to implement. It is proposed here as a correction of GCM output for applications where sophisticated methods are not available, or as a benchmark for the evaluation of other downscaling methods. Applied to output from reanalyses (ECMWF, NCEP) in the region of the European Alps, the method is capable of reducing large biases in the precipitation frequency distribution, even for high quantiles. The two variants exhibit similar performances, but the ideal choice of method can depend on the GCM/reanalysis and it is recommended to test the methods in each case. Limitations of the method are found in small areas with unresolved topographic detail that influence higher-order statistics (e.g. high quantiles). When used as benchmark for three regional climate models (RCMs), the corrected reanalysis and the RCMs perform similarly in many regions, but the added value of the latter is evident for high quantiles in some small regions.
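The "simplest form" of the correction adjusts wet-day frequency and intensity; a sketch of one common way to do that is below. The threshold calibration, the rescaling of wet-day amounts by their mean, and the synthetic gamma-distributed data are assumptions made for illustration and may differ from the paper's exact formulation.

```python
# Sketch of a wet-day frequency and intensity correction of GCM precipitation
# (illustrative formulation; not necessarily the paper's exact method).
import numpy as np

def correct(gcm_precip, obs_precip, obs_wet_threshold=1.0):
    """Return GCM daily precipitation corrected for wet-day frequency and intensity."""
    obs_wet_frequency = np.mean(obs_precip >= obs_wet_threshold)
    # Threshold on the GCM series that reproduces the observed wet-day frequency.
    gcm_threshold = np.quantile(gcm_precip, 1.0 - obs_wet_frequency)
    corrected = np.where(gcm_precip >= gcm_threshold, gcm_precip, 0.0)
    # Scale wet-day amounts so the mean wet-day intensity matches observations.
    obs_intensity = obs_precip[obs_precip >= obs_wet_threshold].mean()
    gcm_intensity = corrected[corrected > 0.0].mean()
    return np.where(corrected > 0.0, corrected * obs_intensity / gcm_intensity, 0.0)

rng = np.random.default_rng(1)
obs = np.where(rng.random(3650) < 0.35, rng.gamma(2.0, 4.0, 3650), 0.0)
gcm = np.where(rng.random(3650) < 0.55, rng.gamma(2.0, 2.0, 3650), 0.0)  # too drizzly
print(correct(gcm, obs)[:10])
```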
Abstract:
Accurate decadal climate predictions could be used to inform adaptation actions to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparing with predictions from statistical models which are based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale based on the gridded observation data set HadISST. For both statistical models, the trend related to radiative forcing is modelled using a linear regression of the SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and western Pacific. This is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skillful predictions of the internal variability of SSTs in the subpolar gyre region over the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts by DePreSys, an initialised GCM. DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years; however, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and methods to combine different predictions.
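A compact sketch of the "trend with AR1" benchmark at a single grid box is given below. Synthetic series stand in for the HadISST data and the equivalent CO2 concentration; the regression-plus-AR1 structure follows the abstract, while everything else is illustrative.

```python
# Sketch of the "trend with AR1" benchmark at one grid box
# (synthetic stand-ins for HadISST and equivalent CO2; illustrative only).
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2006)
co2 = 300.0 + 0.015 * (years - 1900) ** 2            # stand-in equivalent CO2
sst = 14.0 + 0.004 * (co2 - 300.0)                   # forced trend
sst = sst + 0.3 * np.sin(2 * np.pi * years / 60.0)   # slow internal variability
sst = sst + 0.1 * rng.normal(size=years.size)        # noise

# 1) Forced trend: linear regression of SST on equivalent CO2.
a, b = np.polyfit(co2, sst, 1)
residual = sst - (a * co2 + b)

# 2) Internal variability: first-order autoregressive model of the residual.
phi = np.corrcoef(residual[:-1], residual[1:])[0, 1]

def forecast(start_index, lead_years, future_co2):
    """Trend from the regression plus the AR1-damped residual anomaly."""
    return a * future_co2 + b + residual[start_index] * phi ** lead_years

print(forecast(start_index=100, lead_years=5, future_co2=co2[-1] + 10.0))
```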
Abstract:
We show that the four-dimensional variational data assimilation method (4DVar) can be interpreted as a form of Tikhonov regularization, a very familiar method for solving ill-posed inverse problems. It is known from image restoration problems that L1-norm penalty regularization recovers sharp edges in the image more accurately than Tikhonov, or L2-norm, penalty regularization. We apply this idea from stationary inverse problems to 4DVar, a dynamical inverse problem, and give examples for an L1-norm penalty approach and a mixed total variation (TV) L1–L2-norm penalty approach. For problems with model error where sharp fronts are present and the background and observation error covariances are known, the mixed TV L1–L2-norm penalty performs better than either the L1-norm method or the strong constraint 4DVar (L2-norm) method. A strength of the mixed TV L1–L2-norm regularization is that, in the case where a simplified form of the background error covariance matrix is used, it produces a much more accurate analysis than 4DVar. The method thus has the potential in numerical weather prediction to overcome operational problems with poorly tuned background error covariance matrices.
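Written schematically in incremental form (constant factors and the exact weightings used in the paper omitted), the three variants compared above differ only in the penalty placed on the increment $\delta x = x_0 - x^b$:

```latex
\begin{aligned}
J_{L_2}(\delta x) &= \|\delta x\|_{B^{-1}}^{2} + \sum_i \|\mathbf{H}_i\,\delta x - d_i\|_{R_i^{-1}}^{2}
&&\text{(standard 4DVar: a Tikhonov form)},\\
J_{L_1}(\delta x) &= \lambda\,\|B^{-1/2}\,\delta x\|_{1} + \sum_i \|\mathbf{H}_i\,\delta x - d_i\|_{R_i^{-1}}^{2}
&&\text{($L_1$-norm penalty)},\\
J_{\mathrm{TV}}(\delta x) &= \|\delta x\|_{B^{-1}}^{2} + \mu\,\|D\,\delta x\|_{1} + \sum_i \|\mathbf{H}_i\,\delta x - d_i\|_{R_i^{-1}}^{2}
&&\text{(mixed TV $L_1$--$L_2$ penalty)},
\end{aligned}
```

where $D$ denotes a discrete gradient (total-variation) operator and $d_i$ the innovations; the $L_1$ terms are what allow sharp fronts to be retained in the analysis.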