76 results for Square Root Model
in CentAUR: Central Archive University of Reading - UK
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without adding further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble splits into an outlier and a cluster of M−1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of medium-term forecasts is increased by using the RAW filter.
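For concreteness, here is a minimal Python sketch of the RAW time step referred to above, following the published RAW formulation (setting alpha = 1 recovers the classical Robert-Asselin filter). The toy oscillator, step sizes and parameter values are illustrative choices, not taken from the dissertation:

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog integration with the Robert-Asselin-Williams (RAW) filter.

    The filter displacement d is split between the current and the new
    time level; alpha = 1 gives the classical Robert-Asselin filter,
    while alpha near 0.53 approximately preserves the three-level mean.
    """
    x_prev = np.asarray(x0, dtype=float)
    x_curr = x_prev + dt * f(x_prev)            # first step: forward Euler
    out = [x_prev.copy(), x_curr.copy()]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)  # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_curr = x_curr + alpha * d             # Robert-Asselin part
        x_next = x_next + (alpha - 1.0) * d     # Williams correction
        x_prev, x_curr = x_curr, x_next
        out.append(x_curr.copy())
    return np.array(out)

# Illustrative use: a linear oscillator dx/dt = (-omega*y, omega*x).
omega = 1.0
f = lambda x: np.array([-omega * x[1], omega * x[0]])
trajectory = leapfrog_raw(f, [1.0, 0.0], dt=0.1, nsteps=200)
```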
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases, in which the shape of the curve followed the typical convex upward form. In the remainder of the published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and to z (the temperature increase needed to change the D value by a factor of 10) in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates changed as a function of the square root of time would be consistent with a diffusion-limited process.
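As a sketch of the kinetics implied above (my notation, not necessarily the authors'): a specific inactivation rate that varies inversely with √t makes the survivor curve log-linear in √t rather than in t,

```latex
\frac{dN}{dt} = -\frac{k}{\sqrt{t}}\,N
\quad\Longrightarrow\quad
\ln\frac{N(t)}{N_0} = -2k\sqrt{t},
```

so plotting log survivors against √t gives a straight line, which appears as the typical convex-upward curve on a conventional log-survivor versus time plot.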
Abstract:
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both observations and model estimates, with more weight given to data that can be more trusted. Any DA method requires an estimate of the initial forecast error covariance matrix. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective-column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
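For reference, a minimal sketch of the kind of deterministic update an EnSRF performs, here in the serial single-observation form of Whitaker and Hamill (2002); the convective-column model itself is not reproduced, and all names are illustrative:

```python
import numpy as np

def ensrf_update_single_obs(X, H, y, r):
    """Serial EnSRF update of an (n, m) ensemble X for one scalar observation.

    H : (n,) linear observation operator, y : observed value,
    r : observation error variance.
    """
    m = X.shape[1]
    xbar = X.mean(axis=1)
    A = X - xbar[:, None]                        # ensemble perturbations
    Hx = H @ X                                   # observation-space ensemble
    HA = Hx - Hx.mean()
    s = HA @ HA / (m - 1)                        # HPH^T (scalar)
    K = (A @ HA) / (m - 1) / (s + r)             # Kalman gain PH^T / (HPH^T + R)
    xbar_a = xbar + K * (y - Hx.mean())          # mean update
    alpha = 1.0 / (1.0 + np.sqrt(r / (s + r)))   # reduced-gain factor
    A_a = A - np.outer(alpha * K, HA)            # deterministic perturbation update
    return xbar_a[:, None] + A_a
```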
Abstract:
Ensemble clustering (EC) can arise in data assimilation with ensemble square root filters (EnSRFs) using non-linear models: an M-member ensemble splits into a single outlier and a cluster of M−1 members. The stochastic Ensemble Kalman Filter does not present this problem. Modifications to the EnSRFs by a periodic resampling of the ensemble through random rotations have been proposed to address it. We introduce a metric to quantify the presence of EC and present evidence to dispel the notion that EC leads to filter failure. Starting from a univariate model, we show that EC is not a permanent but a transient phenomenon; it occurs intermittently in non-linear models. We perform a series of data assimilation experiments using a standard EnSRF and an EnSRF modified by resampling through random rotations. The modified EnSRF alleviates the issues associated with EC, but at the cost of the traceability of individual ensemble trajectories, and it cannot use some of the algorithms that enhance the performance of the standard EnSRF. In the non-linear regimes of low-dimensional models, the analysis root mean square error of the standard EnSRF slowly grows with ensemble size if the size is larger than the dimension of the model state. However, we do not observe this problem in a more complex model that uses an ensemble size much smaller than the dimension of the model state, along with inflation and localisation. Overall, we find that transient EC does not handicap the performance of the standard EnSRF.
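A sketch of the resampling step described above, under the assumption that the rotation must leave the ensemble mean untouched (so it acts only in the subspace orthogonal to the vector of ones); the QR-based construction is one common choice, not necessarily the one used in the paper:

```python
import numpy as np

def randomly_rotate_ensemble(X, rng=None):
    """Apply a mean-preserving random rotation to an (n, m) ensemble X."""
    rng = np.random.default_rng() if rng is None else rng
    n, m = X.shape
    xbar = X.mean(axis=1)
    A = X - xbar[:, None]                    # perturbations (rows sum to 0)

    # Orthonormal basis U of the subspace orthogonal to (1,...,1)/sqrt(m).
    p = np.ones((m, 1)) / np.sqrt(m)
    Q_full, _ = np.linalg.qr(np.hstack([p, rng.standard_normal((m, m - 1))]))
    U = Q_full[:, 1:]

    # Haar-like random rotation within that (m-1)-dimensional subspace.
    G = rng.standard_normal((m - 1, m - 1))
    Q, R = np.linalg.qr(G)
    Q = Q @ np.diag(np.sign(np.diag(R)))     # fix column signs

    Omega = U @ Q @ U.T + p @ p.T            # orthogonal, with Omega @ 1 = 1
    return xbar[:, None] + A @ Omega         # mean preserved, spread mixed
```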
Abstract:
An eddy-resolving numerical model of a zonal flow, meant to resemble the Antarctic Circumpolar Current, is described and analyzed using the framework of J. Marshall and T. Radko. In addition to wind and buoyancy forcing at the surface, the model contains a sponge layer at the northern boundary that permits a residual meridional overturning circulation (MOC) to exist at depth. The strength of the residual MOC is diagnosed for different strengths of surface wind stress. It is found that the eddy circulation largely compensates for the changes in Ekman circulation. The extent of the compensation and thus the sensitivity of the MOC to the winds depend on the surface boundary condition. A fixed-heat-flux surface boundary severely limits the ability of the MOC to change. An interactive heat flux leads to greater sensitivity. To explain the MOC sensitivity to the wind strength under the interactive heat flux, transformed Eulerian-mean theory is applied, in which the eddy diffusivity plays a central role in determining the eddy response. A scaling theory for the eddy diffusivity, based on the mechanical energy balance, is developed and tested; the average magnitude of the diffusivity is found to be proportional to the square root of the wind stress. The MOC sensitivity to the winds based on this scaling is compared with the true sensitivity diagnosed from the experiments.
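One way the square-root dependence can emerge (a sketch under my own assumptions; the paper's derivation may differ in detail): if the mechanical energy balance makes the eddy kinetic energy scale linearly with the wind stress, and the diffusivity scales as an eddy velocity times a roughly fixed mixing length, then

```latex
E_{\mathrm{eddy}} \propto \tau
\;\Rightarrow\;
V_{\mathrm{eddy}} \propto \sqrt{\tau},
\qquad
K \sim V_{\mathrm{eddy}}\,L_{\mathrm{eddy}}
\;\Rightarrow\;
K \propto \sqrt{\tau}.
```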
Abstract:
It is becoming increasingly important that we can understand and model flow processes in urban areas. Applications such as weather forecasting, air quality and sustainable urban development rely on accurate modelling of the interface between an urban surface and the atmosphere above. This review gives an overview of current understanding of turbulence generated by an urban surface up to a few building heights, the layer called the roughness sublayer (RSL). High-quality datasets are also identified which can be used in the development of suitable parameterisations of the urban RSL. Datasets derived from physical and numerical modelling, and full-scale observations in urban areas, now exist across a range of urban-type morphologies (e.g. street canyons, cubes, idealised and realistic building layouts). Results show that the urban RSL depth falls within 2-5 times the mean building height and is not easily related to morphology. Systematic perturbations away from uniform layouts (e.g. varying building heights) have a significant impact on RSL structure and depth. Considerable fetch is required to develop an overlying inertial sublayer, where turbulence is more homogeneous, and some authors have suggested that the "patchiness" of urban areas may prevent inertial sublayers from developing at all. Turbulence statistics suggest similarities between vegetation and urban canopies, but key differences are emerging. There is no consensus as to suitable scaling variables, e.g. friction velocity above the canopy vs. the square root of the maximum Reynolds stress, or mean vs. maximum building height. The review includes a summary of existing modelling practices and highlights research priorities.
Abstract:
A method is presented which allows thermal inertia (the soil heat capacity times the square root of the soil thermal diffusivity, $C_h\sqrt{D_h}$) to be estimated remotely from micrometeorological observations. The method uses the drop in surface temperature, $T_s$, between sunset and sunrise, and the average night-time net radiation during that period, for clear, still nights. A Fourier series analysis was applied to the time series of $T_s$. The Fourier series constants, together with the remote estimate of thermal inertia, were used in an analytical expression to calculate diurnal estimates of the soil heat flux, G. These remote estimates of $C_h\sqrt{D_h}$ and G compared well with values derived from in situ sensors. The remote and in situ estimates of $C_h\sqrt{D_h}$ both correlated well with topsoil moisture content. This method potentially allows area-average estimates of thermal inertia and soil heat flux to be derived from remote sensing, e.g. METEOSAT Second Generation, where the area is determined by the sensor's height and viewing angle.
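The retrieval can be motivated by the standard constant-flux conduction solution for a half-space (a sketch; the paper's Fourier-series treatment refines this): if an average flux $\bar{Q}_n$ is drawn from the soil over a night of length $\Delta t$, the surface temperature drop satisfies

```latex
\Delta T_s \;=\; \frac{2\,\bar{Q}_n}{C_h\sqrt{D_h}}\sqrt{\frac{\Delta t}{\pi}}
\quad\Longrightarrow\quad
C_h\sqrt{D_h} \;\approx\; \frac{2\,\bar{Q}_n}{\Delta T_s}\sqrt{\frac{\Delta t}{\pi}},
```

so measuring the sunset-to-sunrise drop $\Delta T_s$ and the mean night-time net radiation yields the thermal inertia remotely.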
Abstract:
Background and Purpose: Clinical research into the treatment of acute stroke is complicated, is costly, and has often been unsuccessful. Developments in imaging technology based on computed tomography and magnetic resonance imaging scans offer opportunities for screening experimental therapies during phase II testing so as to deliver only the most promising interventions to phase III. We discuss the design and the appropriate sample size for phase II studies in stroke based on lesion volume. Methods: Determination of the relation between analyses of lesion volumes and of neurologic outcomes is illustrated using data from placebo trial patients from the Virtual International Stroke Trials Archive. The size of an effect on lesion volume that would lead to a clinically relevant treatment effect in terms of a measure such as the modified Rankin score (mRS) is found. The sample size to detect that magnitude of effect on lesion volume is then calculated. Simulation is used to evaluate different criteria for proceeding from phase II to phase III. Results: The odds ratios for mRS correspond roughly to the square root of the odds ratios for lesion volume, implying that for equivalent power specifications, sample sizes based on lesion volumes should be about one fourth of those based on mRS. Relaxation of power requirements, appropriate for phase II, leads to further sample size reductions. For example, a phase III trial comparing a novel treatment with placebo with a total sample size of 1518 patients might be motivated from a phase II trial of 126 patients comparing the same 2 treatment arms. Discussion: Definitive phase III trials in stroke should aim to demonstrate significant effects of treatment on clinical outcomes. However, more direct outcomes such as lesion volume can be useful in phase II for determining whether such phase III trials should be undertaken in the first place. (Stroke. 2009;40:1347-1352.)
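The factor-of-four reduction follows from standard sample-size arithmetic on log odds ratios (a sketch, my notation): the required sample size scales as the inverse square of the log odds ratio, so halving the log odds ratio quarters the sample size,

```latex
\mathrm{OR}_{\mathrm{mRS}} \approx \sqrt{\mathrm{OR}_{\mathrm{vol}}}
\;\Rightarrow\;
\log \mathrm{OR}_{\mathrm{mRS}} \approx \tfrac{1}{2}\,\log \mathrm{OR}_{\mathrm{vol}},
\qquad
n \propto (\log \mathrm{OR})^{-2}
\;\Rightarrow\;
\frac{n_{\mathrm{vol}}}{n_{\mathrm{mRS}}} \approx \frac{1}{4}.
```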
Abstract:
Pulsed Phase Thermography (PPT) has been proven effective for depth retrieval of flat-bottomed holes in different materials such as plastics and aluminum. In PPT, amplitude and phase delay signatures are available following data acquisition (carried out in a similar way as in classical Pulsed Thermography) by applying a transformation algorithm such as the Fourier Transform (FT) to the thermal profiles. The authors have recently presented an extended review of PPT theory, including a new inversion technique for depth retrieval that correlates the depth with the blind frequency $f_b$ (the frequency at which a defect produces enough phase contrast to be detected). An automatic defect depth retrieval algorithm has also been proposed, evidencing PPT's capabilities as a practical inversion technique. In addition, the use of normalized parameters to account for defect size variation, as well as depth retrieval from complex-shape composites (GFRP and CFRP), are currently under investigation. In this paper, steel plates containing flat-bottomed holes at different depths (from 1 to 4.5 mm) are tested by quantitative PPT. Least squares regression results show excellent agreement between depth and the inverse square root of the blind frequency, which can be used for depth inversion. Experimental results on steel plates with simulated corrosion are presented as well. It is worth noting that results are improved by performing PPT on reconstructed (synthetic) rather than on raw thermal data.
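The inverse-square-root relation is usually rationalized through the thermal diffusion length (a sketch; the fitted constant $C$ and the exact form used in the paper may differ):

```latex
\mu(f) = \sqrt{\frac{\alpha}{\pi f}},
\qquad
z \;\approx\; C\,\mu(f_b) \;=\; C\,\sqrt{\frac{\alpha}{\pi f_b}},
```

where $\alpha$ is the thermal diffusivity of the material and $C$ an empirically fitted constant of order unity, so depth is linear in $1/\sqrt{f_b}$, consistent with the regression reported above.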
Abstract:
A shock capturing scheme is presented for the equations of isentropic flow, based on upwind differencing applied to a locally linearized set of Riemann problems. This includes the two-dimensional shallow water equations, using the familiar gas dynamics analogy. An average of the flow variables across the interface between cells is required, and this average is chosen to be the arithmetic mean for computational efficiency, leading to arithmetic averaging. This is in contrast to the usual 'square root' averages found in this type of Riemann solver, for which the computational expense can be prohibitive. The scheme is applied to a two-dimensional dam-break problem and the approximate solution compares well with those given by other authors.
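To make the contrast concrete (a sketch in standard notation, not taken from the paper itself): the classical Roe linearization evaluates density-weighted 'square root' averages at each cell interface, whereas the scheme above simply takes arithmetic means,

```latex
\tilde{u}_{\mathrm{Roe}} = \frac{\sqrt{\rho_L}\,u_L + \sqrt{\rho_R}\,u_R}{\sqrt{\rho_L} + \sqrt{\rho_R}},
\qquad
\tilde{u}_{\mathrm{arith}} = \frac{u_L + u_R}{2},
```

which removes the square-root evaluations at every interface while still defining a locally linearized Riemann problem.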
Abstract:
A numerical scheme is presented for the solution of the Euler equations of compressible flow of a real gas in a single spatial coordinate. This includes flow in a duct of variable cross-section, flow with slab, cylindrical or spherical symmetry, and the case of an ideal gas, and can be useful when testing codes for the two-dimensional equations governing compressible flow of a real gas. The resulting scheme requires an average of the flow variables across the interface between cells, and this average is chosen to be the arithmetic mean for computational efficiency, in contrast to the usual "square root" averages found in this type of scheme. The scheme is applied with success to five problems with either slab or cylindrical symmetry and for a number of equations of state. The results compare favourably with those from other schemes.
Abstract:
An efficient numerical method is presented for the solution of the Euler equations governing the compressible flow of a real gas. The scheme is based on the approximate solution of a specially constructed set of linearised Riemann problems. An average of the flow variables across the interface between cells is required, and this is chosen to be the arithmetic mean for computational efficiency, which is in contrast to the usual square root averaging. The scheme is applied to a test problem for five different equations of state.