987 results for Nonlinear maximum principle
Abstract:
This paper discusses the creation of a European Banking Union. First, we discuss questions of design. We highlight seven fundamental choices that decision makers will need to make: Which EU countries should participate in the banking union? To which categories of banks should it apply? Which institution should be tasked with supervision? Which one should deal with resolution? How centralised should the deposit insurance system be? What kind of fiscal backing would be required? What governance framework and political institutions would be needed? In terms of geographical scope, we see coverage of the euro area by the banking union as necessary, and coverage of additional countries as desirable, even though this would entail significant additional economic difficulties. The system should ideally cover all banks within the countries included, in order to prevent major competitive and distributional distortions. Supervisory authority should be granted either to both the ECB and a new agency, or to a new agency alone. National supervisors, acting under the authority of the European supervisor, would be tasked with the supervision of smaller banks in accordance with the subsidiarity principle. A European resolution authority should be established, with the possibility of drawing on ESM resources. A fully centralised deposit insurance system would eventually be desirable, but a system of partial reinsurance may also be envisaged, at least in a first phase. A banking union would require at least implicit European fiscal backing, with significant political authority and legitimacy. Thus, banking union cannot be considered entirely separately from fiscal union and political union. The most difficult challenge of creating a European banking union lies with the short-term steps towards its eventual implementation. Many banks in the euro area, and especially in the crisis countries, are currently under stress, and the move towards banking union almost certainly has significant distributional implications. Yet it is precisely because banks are under such stress that early and concrete action is needed. An overarching principle for such action is to minimise the cost to taxpayers. The first step should be to create a European supervisor that will anchor the development of the future banking union. In parallel, a capability to quickly assess the true capital position of the system's most important banks should be created, for which we suggest establishing a temporary European Banking Sector Task Force working together with the European supervisor and other authorities. Ideally, problems identified by this process should be resolved by national authorities; should national fiscal capacity prove insufficient, the European level would take over in the country concerned, with some national financial participation, or, in an even less likely adverse scenario, in all participating countries at once. This approach would require the passing of emergency legislation in the countries concerned that would give the Task Force the required access to information and, if necessary, further intervention rights. Thus, the principle of fiscal responsibility of respective member states for legacy costs would be preserved to the maximum extent possible, and at the same time, market participants and the public would be reassured that adequate tools are in place to address any eventuality.
Abstract:
Pairs of counter-propagating Rossby waves (CRWs) can be used to describe baroclinic instability in linearized primitive-equation dynamics, employing simple propagation and interaction mechanisms at only two locations in the meridional plane—the CRW ‘home-bases’. Here, it is shown how some CRW properties are remarkably robust as a growing baroclinic wave develops nonlinearly. For example, the phase difference between upper-level and lower-level waves in potential-vorticity contours, defined initially at the home-bases of the CRWs, remains almost constant throughout baroclinic wave life cycles, despite the occurrence of frontogenesis and Rossby-wave breaking. As the lower wave saturates nonlinearly, the whole baroclinic wave changes phase speed from that of the normal mode to the self-induced phase speed of the upper CRW. On zonal jets without surface meridional shear, this must always act to slow the baroclinic wave. The direction of wave breaking when a basic state has surface meridional shear can be anticipated because the displacement structures of CRWs tend to be coherent along surfaces of constant basic-state angular velocity, U. This results in up-gradient horizontal momentum fluxes for baroclinically growing disturbances. The momentum flux acts to shift the jet meridionally in the direction of the increasing surface U, so that the upper CRW breaks in the same direction as the breaking that occurred at low levels.
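The phase-speed behaviour described in this abstract can be made concrete with the standard two-CRW amplitude and phase equations used in the CRW literature (after Heifetz et al.). The form below is a hedged sketch using common conventions, not necessarily this paper's notation or sign choices:

```latex
% Standard two-CRW interaction equations (illustrative conventions):
%   a_1, a_2         amplitudes of the lower and upper CRWs
%   \epsilon         phase difference \epsilon_2 - \epsilon_1
%   c_1, c_2         self-induced phase speeds
%   \gamma_1,\gamma_2 interaction coefficients, k zonal wavenumber
\begin{align}
  \dot{a}_1 &= \gamma_1\, a_2 \sin\epsilon, &
  \dot{a}_2 &= \gamma_2\, a_1 \sin\epsilon, \\
  \dot{\epsilon}_1 &= -k c_1 + \gamma_1 \frac{a_2}{a_1}\cos\epsilon, &
  \dot{\epsilon}_2 &= -k c_2 + \gamma_2 \frac{a_1}{a_2}\cos\epsilon.
\end{align}
```

In this form, if the lower amplitude a_1 saturates while a_2 continues to grow, the interaction term proportional to a_1/a_2 shrinks and the upper wave's propagation tends towards its self-induced speed c_2, consistent with the phase-speed transition the abstract describes.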
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
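To make the outer/inner structure concrete, here is a minimal sketch of a truncated Gauss-Newton iteration in Python. The callables `residual` and `jacobian`, the conjugate-gradient inner solver, and all tolerances are illustrative assumptions, not the operational scheme analysed in the paper:

```python
# Minimal sketch of truncated Gauss-Newton for min 0.5*||r(x)||^2.
# `residual(x)` and `jacobian(x)` are assumed user-supplied callables.
import numpy as np


def conjugate_gradient(A, b, tol):
    """Plain CG on a symmetric positive (semi-)definite system.
    Stopping early with a loose `tol` is what makes the outer
    Gauss-Newton method "truncated"."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    while np.sqrt(rs) > tol * np.linalg.norm(b):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x


def truncated_gauss_newton(residual, jacobian, x0,
                           outer_iters=10, inner_tol=1e-2):
    """Each outer step linearizes r about x and solves the inner linear
    least-squares subproblem min_d ||J d + r||^2 only approximately,
    via the normal equations (J^T J) d = -J^T r and inexact CG."""
    x = x0.copy()
    for _ in range(outer_iters):
        r = residual(x)
        J = jacobian(x)
        d = conjugate_gradient(J.T @ J, -J.T @ r, tol=inner_tol)
        x = x + d
    return x
```

A "perturbed" variant in the paper's sense would replace `jacobian(x)` with a cheaper approximate linear operator while keeping the same outer loop.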
Abstract:
Push-pull nonlinear optical (NLO) chromophores containing thiazole and benzothiazole acceptors were synthesized and characterized. Using these chromophores, a series of second-order NLO polyimides were successfully prepared from 4,4'-(hexafluoroisopropylidene)diphthalic anhydride (6FDA), pyromellitic dianhydride (PMDA) and 3,3',4,4'-benzophenone tetracarboxylic dianhydride (BTDA) by a standard condensation polymerization technique. These polyimides exhibit high glass transition temperatures, ranging from 160 to 188 °C. The UV-vis spectra of the polyimides exhibited a slight blue shift and a decrease in absorption due to birefringence. From the order parameters, it was found that the chromophores were aligned effectively. Using an in situ poling and temperature-ramping technique, the optimal temperatures for corona poling were obtained. It was found that these optimal temperatures approach the glass transition temperatures of the polyimides. The polyimides demonstrate relatively large d(33) values, ranging between 35.15 and 45.20 pm/V at 532 nm.
Abstract:
We report on a numerical study of the impact of short, fast inertia-gravity waves on the large-scale, slowly-evolving flow with which they co-exist. A nonlinear quasi-geostrophic numerical model of a stratified shear flow is used to simulate, at reasonably high resolution, the evolution of a large-scale mode which grows due to baroclinic instability and equilibrates at finite amplitude. Ageostrophic inertia-gravity modes are filtered out of the model by construction, but their effects on the balanced flow are incorporated using a simple stochastic parameterization of the potential vorticity anomalies which they induce. The model simulates a rotating, two-layer annulus laboratory experiment, in which we recently observed systematic inertia-gravity wave generation by an evolving, large-scale flow. We find that the impact of the small-amplitude stochastic contribution to the potential vorticity tendency, on the model balanced flow, is generally small, as expected. In certain circumstances, however, the parameterized fast waves can exert a dominant influence. In a flow which is baroclinically-unstable to a range of zonal wavenumbers, and in which there is a close match between the growth rates of the multiple modes, the stochastic waves can strongly affect wavenumber selection. This is illustrated by a flow in which the parameterized fast modes dramatically re-partition the probability-density function for equilibrated large-scale zonal wavenumber. In a second case study, the stochastic perturbations are shown to force spontaneous wavenumber transitions in the large-scale flow, which do not occur in their absence. These phenomena are due to a stochastic resonance effect. They add to the evidence that deterministic parameterizations in general circulation models, of subgrid-scale processes such as gravity wave drag, cannot always adequately capture the full details of the nonlinear interaction.
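Stripped to its essentials, the parameterization idea described above amounts to adding a small random increment to the balanced potential-vorticity tendency at each time step. The following is a minimal sketch under that assumption; `balanced_tendency`, the white-in-time noise, and its amplitude are illustrative, not the paper's actual formulation:

```python
# Hedged sketch of a stochastic PV parameterization: the balanced
# (quasi-geostrophic) tendency is augmented by small random PV anomalies
# standing in for unresolved inertia-gravity waves.
import numpy as np

rng = np.random.default_rng(0)


def step_pv(q, balanced_tendency, dt, noise_std=1e-3):
    """Advance the PV field one step: deterministic balanced dynamics
    plus a stochastic increment scaled by sqrt(dt) (white in time,
    Euler-Maruyama style)."""
    noise = noise_std * np.sqrt(dt) * rng.standard_normal(q.shape)
    return q + dt * balanced_tendency(q) + noise
```

When the deterministic dynamics admit several nearly degenerate equilibrated states (here, competing zonal wavenumbers), even such small noise can drive transitions between them, which is the stochastic-resonance effect the abstract invokes.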
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
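The accumulation step described above is simple to illustrate. In this hedged sketch the per-stage variance components and separating distances are made-up numbers standing in for the output of a nested ANOVA (or REML) fit:

```python
# Turning per-stage variance components into a rough variogram.
import numpy as np

# Separating distance of each sampling stage (finest first), in metres,
# increasing in geometric progression as the design prescribes.
lags = np.array([10.0, 30.0, 90.0, 270.0])

# Variance component estimated for each stage by the hierarchical ANOVA
# (or REML for an unbalanced design); values are purely illustrative.
components = np.array([0.8, 1.4, 0.9, 0.3])

# Accumulating components from the shortest lag upward gives the rough
# variogram: gamma(lag_i) = sum of components at stages with lag <= lag_i.
rough_variogram = np.cumsum(components)
for d, g in zip(lags, rough_variogram):
    print(f"lag {d:6.1f} m  semivariance {g:.2f}")
```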
Abstract:
Samples of glacial till deposited since the Little Ice Age (LIA) maximum by two glaciers, North Bogbre at Svartisen and Corneliussen-breen at Okstindan, northern Norway, were obtained from transects running from the current glacier snout to the LIA (c. AD 1750) limit. The samples were analysed to determine their sediment magnetic properties, which display considerable variability. Significant trends in some magnetic parameters are evident with distance from the glacier margin and hence length of subaerial exposure. Magnetic susceptibility (χ) decreases away from the contemporary snout, perhaps due to the weathering of ferrimagnetic minerals into antiferromagnetic forms, although this trend is generally not statistically significant. Statistically significant trends in the ratio of soft IRM to hard IRM support this hypothesis, suggesting that antiferromagnetic minerals increase relative to ferrimagnetic minerals towards the LIA maximum. Backfield ratios (IRM(-100 mT)/SIRM) also display a significant and strong trend towards magnetically harder behaviour with proximity to the LIA maximum. Thus, by employing a chronosequence approach, it may be possible to use sediment magnetics data as a tool for reconstructing glacier retreat in areas where more traditional techniques, such as lichenometry, are not applicable.
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data, because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites.
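For reference, the method-of-moments (Matheron) estimator that REML is being compared against can be sketched as follows; the binning scheme and function names are illustrative assumptions, not the study's code:

```python
# Hedged sketch of the method-of-moments (Matheron) variogram estimator
# for scattered 2-D data: gamma(h) = (1/2N(h)) * sum of (z_i - z_j)^2
# over the N(h) point pairs whose separation falls in the bin around h.
import numpy as np


def mom_variogram(coords, values, bin_edges):
    """Return (bin centres, semivariances) from all point pairs."""
    # Pairwise distances and half squared differences.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)   # count each pair once
    dists, semis = d[iu], sq[iu]
    centres, gammas = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (dists >= lo) & (dists < hi)
        if in_bin.any():
            centres.append(0.5 * (lo + hi))
            gammas.append(semis[in_bin].mean())
    return np.array(centres), np.array(gammas)
```

The pair counts N(h) behind each bin average are what drive the ~100-site rule of thumb for MoM; REML fitting instead maximizes a likelihood over covariance parameters and so can tolerate smaller samples, at the cost of a parametric model.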
Abstract:
An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass, and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
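One way to picture an unbalanced nested design of this kind: every sampling station spawns a partner at each stage's separating distance, but only a random subset of branches is subdivided at the finer stages, which is what economizes on samples and unbalances the design. The distances and probabilities below are illustrative, not those of the 9 ha survey:

```python
# Hedged sketch of generating one hierarchy of an unbalanced nested design.
import numpy as np

rng = np.random.default_rng(1)


def nested_design(origin, stage_distances, keep_prob=0.5):
    """Return sample coordinates for one nested hierarchy.

    At each stage every existing point spawns a partner at the stage's
    separating distance in a random direction; at stages after the first,
    only a fraction `keep_prob` of points spawn, making the design
    unbalanced (fewer samples at the finest, most numerous stages)."""
    pts = [np.asarray(origin, dtype=float)]
    for stage, dist in enumerate(stage_distances):
        new = []
        for p in pts:
            if stage == 0 or rng.random() < keep_prob:
                theta = rng.uniform(0.0, 2.0 * np.pi)
                new.append(p + dist * np.array([np.cos(theta), np.sin(theta)]))
        pts.extend(new)
    return np.array(pts)


# Separating distances in geometric progression, coarsest stage first.
coords = nested_design(origin=(0.0, 0.0), stage_distances=[200, 60, 20, 6])
print(len(coords), "sampling locations in this hierarchy")
```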