984 results for Propagation models


Relevance: 30.00%

Abstract:

We provide a system identification framework for the analysis of THz-transient data. The subspace identification algorithm for both deterministic and stochastic systems is used to model the time-domain responses of structures under broadband excitation. Structures with additional time delays can be modelled within the state-space framework using additional state variables. We compare the numerical stability of the commonly used least-squares ARX models to that of the subspace N4SID algorithm by using examples of fourth-order and eighth-order systems under pulse and chirp excitation conditions. These models correspond to structures having two and four modes simultaneously propagating, respectively. We show that chirp excitation combined with the subspace identification algorithm can provide a better identification of the underlying mode dynamics than the ARX model does as the complexity of the system increases. The use of an identified state-space model for mode demixing, upon transformation to a decoupled realization form, is illustrated. Applications of state-space models and the N4SID algorithm to THz transient spectroscopy as well as to optical systems are highlighted.
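The least-squares ARX identification that the abstract benchmarks against can be sketched in a few lines; the second-order coefficients, excitation length and noise-free setting below are illustrative choices, not values from the paper.

```python
import numpy as np

# Hypothetical 2nd-order ARX system:
# y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + b2*u[t-2]
a1, a2, b1, b2 = 1.5, -0.7, 1.0, 0.5

rng = np.random.default_rng(0)
u = rng.standard_normal(200)          # broadband (white-noise) excitation
y = np.zeros(200)
for t in range(2, 200):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b1 * u[t - 1] + b2 * u[t - 2]

# Least-squares fit: stack regressors [y[t-1], y[t-2], u[t-1], u[t-2]]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(theta)  # recovers [1.5, -0.7, 1.0, 0.5] in this noise-free case
```

In the noise-free case the normal equations are exactly consistent, so the true coefficients are recovered; the abstract's point is that this conditioning degrades as system order grows, which is where the subspace N4SID approach fares better.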

Relevance: 30.00%

Abstract:

Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
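The stochastic Monte Carlo approach to propagation in a purely absorbing medium (scattering neglected, as above) reduces to sampling Beer-Lambert free paths. A minimal sketch, with an illustrative absorption coefficient and slab thickness rather than the phantom's actual properties:

```python
import math
import random

def mc_transmission(mu_a, thickness, n_photons, seed=0):
    """Fraction of photons traversing an absorbing slab.

    Free paths are drawn from the Beer-Lambert distribution
    p(s) = mu_a * exp(-mu_a * s); a photon is transmitted if its
    sampled path exceeds the slab thickness (no scattering).
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        s = -math.log(1.0 - rng.random()) / mu_a  # sampled free path
        if s > thickness:
            transmitted += 1
    return transmitted / n_photons

# Illustrative values only: mu_a = 2 mm^-1, 1 mm slab
t_mc = mc_transmission(mu_a=2.0, thickness=1.0, n_photons=100_000)
t_analytic = math.exp(-2.0 * 1.0)  # Beer-Lambert prediction
print(t_mc, t_analytic)
```

With absorption alone the Monte Carlo estimate converges to the analytic exponential; the framework's flexibility, as the abstract notes, comes from how easily scattering and layered geometries can then be added to the per-photon loop.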

Relevance: 30.00%

Abstract:

Modelling the interaction of terahertz (THz) radiation with biological tissue poses many interesting problems. THz radiation is neither obviously described by an electric field distribution nor an ensemble of photons, and biological tissue is an inhomogeneous medium with an electronic permittivity that is both spatially and frequency dependent, making it a complex system to model. A three-layer system of parallel-sided slabs has been used as the system through which the passage of THz radiation has been simulated. Two modelling approaches have been developed: a thin film matrix model and a Monte Carlo model. The source data for each of these methods, taken at the same time as the data recorded to experimentally verify them, was a THz spectrum that had passed through air only. Experimental verification of these two models was carried out using a three-layered in vitro phantom. Simulated transmission spectrum data were compared to experimental transmission spectrum data, first to determine and then to compare the accuracy of the two methods. Good agreement was found, with typical results having a correlation coefficient of 0.90 for the thin film matrix model and 0.78 for the Monte Carlo model over the full THz spectrum. Further work is underway to improve the models above 1 THz.

Relevance: 30.00%

Abstract:

The extra-tropical response to El Niño in configurations of a coupled model with increased horizontal resolution in the oceanic component is shown to be more realistic than in configurations with a low resolution oceanic component. This general conclusion is independent of the atmospheric resolution. Resolving small-scale processes in the ocean produces a more realistic oceanic mean state, with a reduced cold tongue bias, which in turn allows the atmospheric model component to be forced more realistically. A realistic atmospheric basic state is critical in order to represent Rossby wave propagation in response to El Niño, and hence the extra-tropical response to El Niño. Through the use of high and low resolution configurations of the forced atmospheric-only model component we show that, in isolation, atmospheric resolution does not significantly affect the simulation of the extra-tropical response to El Niño. It is demonstrated, through perturbations to the SST forcing of the atmospheric model component, that biases in the climatological SST field typical of coupled model configurations with low oceanic resolution can account for the erroneous atmospheric basic state seen in these coupled model configurations. These results highlight the importance of resolving small-scale oceanic processes in producing a realistic large-scale mean climate in coupled models, and suggest that it may be possible to “squeeze out” valuable extra performance from coupled models through increases to oceanic resolution alone.

Relevance: 30.00%

Abstract:

High-resolution simulations over a large tropical domain (∼20°S–20°N and 42°E–180°E) using both explicit and parameterized convection are analyzed and compared to observations during a 10-day case study of an active Madden-Julian Oscillation (MJO) event. The parameterized convection model simulations at both 40 km and 12 km grid spacing have a very weak MJO signal and little eastward propagation. A 4 km explicit convection simulation using Smagorinsky subgrid mixing in the vertical and horizontal dimensions exhibits the best MJO strength and propagation speed. The 12 km explicit convection simulations also perform much better than the 12 km parameterized convection run, suggesting that the convection scheme, rather than horizontal resolution, is key for these MJO simulations. Interestingly, a 4 km explicit convection simulation using the conventional boundary layer scheme for vertical subgrid mixing (but still using Smagorinsky horizontal mixing) completely loses the large-scale MJO organization, showing that relatively high resolution with explicit convection does not guarantee a good MJO simulation. Models with a good MJO representation have a more realistic relationship between lower-free-tropospheric moisture and precipitation, supporting the idea that moisture-convection feedback is a key process for MJO propagation. There is also increased generation of available potential energy and conversion of that energy into kinetic energy in models with a more realistic MJO, which is related to larger zonal variance in convective heating and vertical velocity, larger zonal temperature variance around 200 hPa, and larger correlations between temperature and ascent (and between temperature and diabatic heating) between 500 and 400 hPa.
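The moisture-precipitation relationship used to diagnose moisture-convection feedback is typically computed by binning precipitation against lower-free-tropospheric moisture. A hedged sketch on synthetic data; the exponential pickup shape and every number below are assumptions for illustration, not results from these simulations:

```python
import math
import random

rng = random.Random(42)

# Synthetic paired samples: precipitation picking up sharply with
# lower-free-tropospheric relative humidity (illustrative shape only)
humidity = [rng.uniform(0.2, 0.95) for _ in range(5000)]
precip = [math.exp(6 * (h - 0.9)) * rng.uniform(0.5, 1.5) for h in humidity]

def binned_mean(x, y, edges):
    """Mean of y within each consecutive [edges[b], edges[b+1]) bin of x."""
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for xi, yi in zip(x, y):
        for b in range(len(edges) - 1):
            if edges[b] <= xi < edges[b + 1]:
                sums[b] += yi
                counts[b] += 1
                break
    return [s / c if c else float("nan") for s, c in zip(sums, counts)]

curve = binned_mean(humidity, precip, edges=[0.2, 0.4, 0.6, 0.8, 1.0])
print(curve)  # mean precipitation rises steeply with moisture
```

Comparing such binned curves between model runs and observations is one way to make the "more realistic relationship" claim above quantitative.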

Relevance: 30.00%

Abstract:

There is a current need to constrain the parameters of gravity wave drag (GWD) schemes in climate models using observational information instead of tuning them subjectively. In this work, an inverse technique is developed using data assimilation principles to estimate gravity wave parameters. Because most GWD schemes assume instantaneous vertical propagation of gravity waves within a column, observations in a single column can be used to formulate a one-dimensional assimilation problem to estimate the unknown parameters. We define a cost function that measures the differences between the unresolved drag inferred from observations (referred to here as the ‘observed’ GWD) and the GWD calculated with a parametrisation scheme. The geometry of the cost function presents some difficulties, including multiple minima and ill-conditioning because of the non-independence of the gravity wave parameters. To overcome these difficulties we propose a genetic algorithm to minimize the cost function, which provides a robust parameter estimation over a broad range of prescribed ‘true’ parameters. When real experiments using an independent estimate of the ‘observed’ GWD are performed, physically unrealistic values of the parameters can result due to the non-independence of the parameters. However, by constraining one of the parameters to lie within a physically realistic range, this degeneracy is broken and the other parameters are also found to lie within physically realistic ranges. This argues for the essential physical self-consistency of the gravity wave scheme. A much better fit to the observed GWD at high latitudes is obtained when the parameters are allowed to vary with latitude. However, a close fit can be obtained either in the upper or the lower part of the profiles, but not in both at the same time. This result is a consequence of assuming an isotropic launch spectrum.
The changes of sign in the GWD found in the tropical lower stratosphere, which are associated with part of the quasi-biennial oscillation forcing, cannot be captured by the parametrisation with optimal parameters.
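A minimal sketch of the genetic-algorithm minimisation described above, run on a hypothetical cost function with a degenerate valley that mimics non-independent parameters (the paper's actual GWD cost function is not reproduced). Constraining one parameter to a 'physically realistic' range breaks the degeneracy, as in the abstract:

```python
import random

def cost(p):
    """Hypothetical ill-conditioned cost: degenerate valley along p1*p2 = 6."""
    p1, p2 = p
    return (p1 * p2 - 6.0) ** 2

def genetic_minimise(cost, bounds, pop_size=60, gens=200, seed=1):
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    pop = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 4]                 # selection: best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]       # midpoint crossover
            child = [min(max(x + rng.gauss(0, 0.1), l), h)    # mutation, clipped
                     for x, l, h in zip(child, lo, hi)]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Constraining p1 to [2, 3] (a hypothetical 'realistic' range) pins p2 near 6/p1
best = genetic_minimise(cost, bounds=[(2.0, 3.0), (0.1, 10.0)])
print(best, cost(best))
```

Without the constraint on the first parameter, any point on the hyperbola p1*p2 = 6 minimises the cost equally well, which is the degeneracy the abstract describes.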

Relevance: 30.00%

Abstract:

In recent years a number of chemistry-climate models have been developed with an emphasis on the stratosphere. Such models cover a wide range of time scales of integration and vary considerably in complexity. The results of specific diagnostics are here analysed to examine the differences amongst individual models and observations, to assess the consistency of model predictions, with a particular focus on polar ozone. For example, many models indicate a significant cold bias in high latitudes, the “cold pole problem”, particularly in the southern hemisphere during winter and spring. This is related to wave propagation from the troposphere, which can be improved by increasing model horizontal resolution and by using non-orographic gravity wave drag. As a result of the widely differing modelled polar temperatures, different amounts of polar stratospheric clouds are simulated, which in turn result in varying ozone values in the models. The results are also compared to determine the possible future behaviour of ozone, with an emphasis on the polar regions and mid-latitudes. All models predict eventual ozone recovery, but give a range of results concerning its timing and extent. Differences in the simulation of gravity waves and planetary waves as well as model resolution are likely major sources of uncertainty for this issue. In the Antarctic, the ozone hole has probably almost reached its deepest, although the vertical and horizontal extent of depletion may increase slightly further over the next few years. According to the model results, Antarctic ozone recovery could begin any year within the range 2001 to 2008. The limited number of models which have been integrated sufficiently far indicate that full recovery of ozone to 1980 levels may not occur in the Antarctic until about the year 2050. For the Arctic, most models indicate that small ozone losses may continue for a few more years and that recovery could begin any year within the range 2004 to 2019.
The start of ozone recovery in the Arctic is therefore expected to appear later than in the Antarctic.

Relevance: 30.00%

Abstract:

The ability of the climate models participating in phase 5 of the Coupled Model Intercomparison Project (CMIP5) to simulate North Atlantic extratropical cyclones in winter [December–February (DJF)] and summer [June–August (JJA)] is investigated in detail. Cyclones are identified as maxima in T42 vorticity at 850 hPa and their propagation is tracked using an objective feature-tracking algorithm. By comparing the historical CMIP5 simulations (1976–2005) and the ECMWF Interim Re-Analysis (ERA-Interim; 1979–2008), the authors find that systematic biases affect the number and intensity of North Atlantic cyclones in CMIP5 models. In DJF, the North Atlantic storm track tends to be either too zonal or displaced southward, thus leading to too few and weak cyclones over the Norwegian Sea and too many cyclones in central Europe. In JJA, the position of the North Atlantic storm track is generally well captured but some CMIP5 models underestimate the total number of cyclones. The dynamical intensity of cyclones, as measured by either T42 vorticity at 850 hPa or mean sea level pressure, is too weak in both DJF and JJA. The intensity bias has a hemispheric character, and it cannot be simply attributed to the representation of the North Atlantic large-scale atmospheric state. Despite these biases, the representation of Northern Hemisphere (NH) storm tracks has improved since CMIP3, and some CMIP5 models are able to represent well both the number and the intensity of North Atlantic cyclones. In particular, some of the higher-atmospheric-resolution models tend to have a better representation of the tilt of the North Atlantic storm track and of the intensity of cyclones in DJF.
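Identifying cyclones as maxima in a gridded field can be sketched as a local-maximum search; the toy Gaussian field below stands in for T42 850-hPa vorticity, and the threshold is an arbitrary illustrative value rather than the tracking algorithm's actual setting.

```python
import numpy as np

def find_maxima(field, threshold):
    """Grid points exceeding `threshold` that are strict 8-neighbour maxima."""
    maxima = []
    for i in range(1, field.shape[0] - 1):
        for j in range(1, field.shape[1] - 1):
            patch = field[i - 1:i + 2, j - 1:j + 2]
            if (field[i, j] >= threshold
                    and field[i, j] == patch.max()
                    and np.sum(patch == patch.max()) == 1):
                maxima.append((i, j))
    return maxima

# Toy field with two Gaussian bumps standing in for cyclonic vorticity maxima
x = np.linspace(-1, 1, 40)
X, Y = np.meshgrid(x, x)
field = (np.exp(-((X - 0.4) ** 2 + (Y - 0.1) ** 2) * 20)
         + 0.8 * np.exp(-((X + 0.5) ** 2 + (Y - 0.3) ** 2) * 20))

maxima = find_maxima(field, threshold=0.5)
print(maxima)  # one (i, j) pair per bump
```

A full tracker then links such per-timestep maxima into trajectories; that linking step (the harder part of objective feature tracking) is not sketched here.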

Relevance: 30.00%

Abstract:

Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal to decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model long-term pervasive SST errors. A protocol is designed to attribute the SST biases to the source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of the advent of biases, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too weak meridional coastal winds off Peru.
The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of an equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step to move from the current ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.

Relevance: 30.00%

Abstract:

The convectively active part of the Madden-Julian Oscillation (MJO) propagates eastward through the warm pool, from the Indian Ocean through the Maritime Continent (the Indonesian archipelago) to the western Pacific. The Maritime Continent's complex topography means the exact nature of the MJO propagation through this region is unclear. Model simulations of the MJO are often poor over the region, leading to local errors in latent heat release and global errors in medium-range weather prediction and climate simulation. Using 14 northern winters of TRMM satellite data it is shown that, where the mean diurnal cycle of precipitation is strong, 80% of the MJO precipitation signal in the Maritime Continent is accounted for by changes in the amplitude of the diurnal cycle. Additionally, the relationship between outgoing long-wave radiation (OLR) and precipitation is weakened here, such that OLR is no longer a reliable proxy for precipitation. The canonical view of the MJO as the smooth eastward propagation of a large-scale precipitation envelope also breaks down over the islands of the Maritime Continent. Instead, a vanguard of precipitation (anomalies of 2.5 mm day^-1 over 10^6 km^2) jumps ahead of the main body by approximately 6 days or 2000 km. Hence, there can be enhanced precipitation over Sumatra, Borneo or New Guinea when the large-scale MJO envelope over the surrounding ocean is one of suppressed precipitation. This behaviour can be accommodated into existing MJO theories. Frictional and topographic moisture convergence and relatively clear skies ahead of the main convective envelope combine with the low thermal inertia of the islands, to allow a rapid response in the diurnal cycle which rectifies onto the lower-frequency MJO. Hence, accurate representations of the diurnal cycle and its scale interaction appear to be necessary for models to simulate the MJO successfully.

Relevance: 30.00%

Abstract:

The Madden–Julian Oscillation (MJO) is the chief source of tropical intra-seasonal variability, but is simulated poorly by most state-of-the-art GCMs. Common errors include a lack of eastward propagation at the correct frequency and zonal extent, and too small a ratio of eastward- to westward-propagating variability. Here it is shown that HiGEM, a high-resolution GCM, simulates a very realistic MJO with approximately the correct spatial and temporal scale. Many MJO studies in GCMs are limited to diagnostics which average over a latitude band around the equator, allowing an analysis of the MJO’s structure in time and longitude only. In this study a wider range of diagnostics is applied. It is argued that such an approach is necessary for a comprehensive analysis of a model’s MJO. The standard analysis of Wheeler and Hendon (Mon Wea Rev 132(8):1917–1932, 2004; WH04) is applied to produce composites, which show a realistic spatial structure in the MJO envelopes, except for the timing of the peak precipitation in the inter-tropical convergence zone, which bifurcates the MJO signal. Further diagnostics are developed to analyse the MJO’s episodic nature and the “MJO inertia” (the tendency to remain in the same WH04 phase from one day to the next). HiGEM favours phases 2, 3, 6 and 7; has too much MJO inertia; and dies out too frequently in phase 3. Recent research has shown that a key feature of the MJO is its interaction with the diurnal cycle over the Maritime Continent. This interaction is present in HiGEM but is unrealistically weak.
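The "MJO inertia" diagnostic — the tendency to remain in the same WH04 phase from one day to the next — can be computed directly from a daily phase series; the sequence below is synthetic, not HiGEM output.

```python
from collections import Counter

def mjo_inertia(phases):
    """Fraction of day-to-day transitions that stay in the same WH04 phase."""
    same = sum(1 for a, b in zip(phases, phases[1:]) if a == b)
    return same / (len(phases) - 1)

def phase_occupancy(phases):
    """How often each WH04 phase (1-8) is occupied."""
    return Counter(phases)

# Synthetic daily phase sequence: one MJO cycle through phases 1..8,
# with some stalling to illustrate the inertia diagnostic
phases = [1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 6, 6, 6, 7, 8, 8, 1]
print(mjo_inertia(phases))      # 0.5 for this sequence (8 of 16 transitions)
print(phase_occupancy(phases))
```

The occupancy counter is the natural companion diagnostic for statements like "HiGEM favours phases 2, 3, 6 and 7"; an episode that "dies out" would show up as transitions into the weak-amplitude state rather than to the next phase.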

Relevance: 30.00%

Abstract:

This paper investigates the challenge of representing structural differences in river channel cross-section geometry for regional to global scale river hydraulic models and the effect this can have on simulations of wave dynamics. Classically, channel geometry is defined using data, yet at larger scales the necessary information and model structures do not exist to take this approach. We therefore propose a fundamentally different approach where the structural uncertainty in channel geometry is represented using a simple parameterization, which could then be estimated through calibration or data assimilation. This paper first outlines the development of a computationally efficient numerical scheme to represent generalised channel shapes using a single parameter, which is then validated using a simple straight channel test case and shown to predict wetted perimeter to within 2% for the channels tested. An application to the River Severn, UK is also presented, along with an analysis of model sensitivity to channel shape, depth and friction. The channel shape parameter was shown to improve model simulations of river level, particularly for more physically plausible channel roughness and depth parameter ranges. Calibrating channel Manning’s coefficient in a rectangular channel provided similar water level simulation accuracy in terms of Nash-Sutcliffe efficiency to a model where friction and shape or depth were calibrated. However, the calibrated Manning coefficient in the rectangular channel model was ~2/3 greater than the likely physically realistic value for this reach and this erroneously slowed wave propagation times through the reach by several hours. Therefore, for large scale models applied in data sparse areas, calibrating channel depth and/or shape may be preferable to assuming a rectangular geometry and calibrating friction alone.
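The abstract does not give the parameterization itself; one common single-parameter family for generalised channel shapes is the power-law section z(x) = h * |2x/W|^s (s = 1 triangular, s = 2 parabolic, s → ∞ rectangular). A sketch under that assumption, computing wetted perimeter by numerical arc length:

```python
import math

def wetted_perimeter(width, depth, s, n=20000):
    """Arc length of the power-law section z(x) = depth * |2x/width|**s,
    integrated as a polyline across the full width. As s grows the section
    tends to a rectangle, whose wetted perimeter is width + 2*depth."""
    half = width / 2.0
    total = 0.0
    x_prev, z_prev = -half, depth
    for i in range(1, n + 1):
        x = -half + width * i / n
        z = depth * abs(2.0 * x / width) ** s
        total += math.hypot(x - x_prev, z - z_prev)
        x_prev, z_prev = x, z
    return total

# s = 1 is a triangle: exact wetted perimeter is 2*sqrt(half_width^2 + depth^2)
p_tri = wetted_perimeter(width=10.0, depth=2.0, s=1.0)
print(p_tri, 2 * math.hypot(5.0, 2.0))

# large s approaches a rectangle: perimeter tends to width + 2*depth = 14
p_rect = wetted_perimeter(width=10.0, depth=2.0, s=200.0)
print(p_rect)
```

The single exponent `s` is the kind of shape parameter that could then be estimated by calibration or data assimilation, as the abstract proposes; the paper's own scheme and its 2% wetted-perimeter accuracy claim are not reproduced here.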

Relevance: 30.00%

Abstract:

The horizontal gradient of potential vorticity (PV) across the tropopause typically declines with lead time in global numerical weather forecasts and tends towards a steady value dependent on model resolution. This paper examines how spreading the tropopause PV contrast over a broader frontal zone affects the propagation of Rossby waves. The approach taken is to analyse Rossby waves on a PV front of finite width in a simple single-layer model. The dispersion relation for linear Rossby waves on a PV front of infinitesimal width is well known; here an approximate correction is derived for the case of a finite width front, valid in the limit that the front is narrow compared to the zonal wavelength. Broadening the front causes a decrease in both the jet speed and the ability of waves to propagate upstream. The contribution of these changes to Rossby wave phase speeds cancel at leading order. At second order the decrease in jet speed dominates, meaning phase speeds are slower on broader PV fronts. This asymptotic phase speed result is shown to hold for a wide class of single-layer dynamics with a varying range of PV inversion operators. The phase speed dependence on frontal width is verified by numerical simulations and also shown to be robust at finite wave amplitude, and estimates are made for the error in Rossby wave propagation speeds due to the PV gradient error present in numerical weather forecast models.

Relevance: 30.00%

Abstract:

A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well-known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
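A sketch of the Laguerre special case: a bank of discrete Laguerre filters parameterized by a single pole, with the cost's sensitivity to the pole approximated here by central finite differences (the paper derives exact analytic gradients via back-propagation through time, which this sketch does not reproduce). The pole values, basis size and toy data are all illustrative.

```python
import math
import random

def laguerre_bank(u, a, n_filters):
    """Outputs of a discrete Laguerre filter bank with pole a (|a| < 1).

    First filter: l1[t] = a*l1[t-1] + sqrt(1 - a^2)*u[t-1].
    Each subsequent filter applies the all-pass stage (1 - a*z)/(z - a).
    """
    gain = math.sqrt(1.0 - a * a)
    n = len(u)
    bank = [[0.0] * n for _ in range(n_filters)]
    for t in range(1, n):
        bank[0][t] = a * bank[0][t - 1] + gain * u[t - 1]
    for f in range(1, n_filters):
        prev = bank[f - 1]
        for t in range(1, n):
            bank[f][t] = a * bank[f][t - 1] + prev[t - 1] - a * prev[t]
    return bank

def model_output(u, a, coeffs):
    """Linear-in-the-parameters OBF model: static map over the filter bank."""
    bank = laguerre_bank(u, a, len(coeffs))
    return [sum(c * bank[f][t] for f, c in enumerate(coeffs))
            for t in range(len(u))]

def cost(u, y_target, a, coeffs):
    """Sum-of-squares output error of the OBF model with pole a."""
    return sum((yt - ym) ** 2
               for yt, ym in zip(y_target, model_output(u, a, coeffs)))

# Toy data generated with a 'true' pole of 0.6; the cost's sensitivity to the
# pole is approximated by finite differences (not the analytic BPTT gradient)
rng = random.Random(0)
u = [rng.gauss(0, 1) for _ in range(300)]
coeffs = [1.0, 0.5, -0.3]
y_target = model_output(u, 0.6, coeffs)

a, eps = 0.4, 1e-6
grad = (cost(u, y_target, a + eps, coeffs)
        - cost(u, y_target, a - eps, coeffs)) / (2 * eps)
print(grad)  # search direction for a pole-optimization step
```

In the paper this derivative is computed exactly and fed to a Levenberg-Marquardt update; the finite-difference version above only illustrates that the cost is a smooth function of the pole, with its minimum at the data-generating pole.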

Relevance: 30.00%

Abstract:

Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these two scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects two (and not more) of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications.
A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
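The matrix-free Broyden iteration that drives the coupling residuals to zero can be sketched as follows; the two-unknown residual below is a toy stand-in for a single coupling point's (flow rate, pressure) system, not the paper's 1D network equations.

```python
def broyden(f, x0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method with an explicit inverse-Jacobian estimate.

    Only residual evaluations are needed (no analytic Jacobian), which
    preserves the black-box character of the coupled sub-network solvers.
    """
    n = len(x0)
    x = list(x0)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # approx J^-1
    fx = f(x)
    for _ in range(max_iter):
        if max(abs(v) for v in fx) < tol:
            break
        dx = [-sum(H[i][j] * fx[j] for j in range(n)) for i in range(n)]
        x_new = [xi + di for xi, di in zip(x, dx)]
        f_new = f(x_new)
        df = [a - b for a, b in zip(f_new, fx)]
        # Sherman-Morrison rank-one update:
        # H += (dx - H df) (dx^T H) / (dx^T H df)
        Hdf = [sum(H[i][j] * df[j] for j in range(n)) for i in range(n)]
        denom = sum(d * h for d, h in zip(dx, Hdf))
        if abs(denom) > 1e-14:  # skip degenerate updates near convergence
            u = [(d - h) / denom for d, h in zip(dx, Hdf)]
            dxH = [sum(dx[i] * H[i][j] for i in range(n)) for j in range(n)]
            for i in range(n):
                for j in range(n):
                    H[i][j] += u[i] * dxH[j]
        x, fx = x_new, f_new
    return x

# Toy coupling residual in two unknowns (a stand-in for flow rate q and
# pressure p at one coupling point, with mild nonlinear feedback)
def residual(z):
    q, p = z
    return [q + 0.1 * p ** 2 - 1.2, p + 0.1 * q ** 2 - 2.1]

sol = broyden(residual, [1.0, 2.0])
print(sol, residual(sol))
```

In the paper's setting `f` would stack one residual equation per coupling point from each black-box sub-network solve, giving the 2M x 2M system; the rank-one update is what keeps the scheme matrix-free.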