997 results for Special Scheme
Resumo:
This thesis presents a method for the numerical solution of the two-dimensional shallow water equations, which model the flow behaviour of bodies of water whose surface extent is substantially larger than their depth. These equations describe the gravity-driven temporal evolution of a given initial state of a body of water with a free surface. This class includes problems such as the behaviour of waves on shallow beaches or the motion of a flood wave in a river. These examples clearly show the need to account for the influence of topography and for the treatment of wet/dry transitions within the scheme. The dissertation presents a finite volume method on unstructured grids, of high order of accuracy in regions of sufficient water depth, for numerically computing the temporal evolution of the solution of the two-dimensional shallow water equations from given initial and boundary conditions. The method accounts for the influence of topographic source terms on the flow and, in so-called "lake at rest" steady states, balances this influence exactly against the numerical fluxes. It is based on a first-order finite volume approach, extended by a least-squares WENO reconstruction and a so-called space-time expansion, with the aim of obtaining a scheme of arbitrarily high order. The Riemann problems arising in the method are solved with the Riemann solver of Chinnayya, LeRoux and Seguin from 1999, which takes the influence of topography on the flow into account. The thesis proves that the coefficients of the reconstruction polynomials computed by the WENO method approximate the spatial derivatives of the function being reconstructed to a degree of accuracy consistent with the order of the scheme.
It is likewise proven that the coefficients of the polynomial resulting from the space-time expansion approximate the spatial and temporal derivatives of the solution of the initial value problem. Furthermore, the well-balancedness of the scheme is proven for arbitrarily high numerical order. For the treatment of wet/dry transitions, a method of order reduction depending on water depth and cell size is proposed. This is necessary to avoid negative values of the water depth in the computation, which can arise as a consequence of oscillations of the space-time polynomial. Numerical results confirming the theoretical order of the scheme are presented, along with examples demonstrating the excellent properties of the overall method on challenging problems.
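The "lake at rest" balancing described in this abstract can be illustrated with a much simpler relative of the thesis's scheme: a first-order 1D finite volume sketch using hydrostatic reconstruction (a standard well-balancing technique, not the Chinnayya-LeRoux-Seguin solver or the high-order WENO/space-time machinery of the thesis) together with a Rusanov flux. All names here are illustrative.

```python
import numpy as np

g = 9.81  # gravitational acceleration

def physical_flux(h, hu):
    """Flux of the 1D shallow water equations: (mass, momentum)."""
    u = hu / h if h > 1e-12 else 0.0
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov(hL, huL, hR, huR):
    """Rusanov (local Lax-Friedrichs) numerical flux."""
    uL = huL / hL if hL > 1e-12 else 0.0
    uR = huR / hR if hR > 1e-12 else 0.0
    s = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
    return 0.5 * (physical_flux(hL, huL) + physical_flux(hR, huR)) \
        - 0.5 * s * np.array([hR - hL, huR - huL])

def step(h, hu, b, dx, dt):
    """One first-order step with hydrostatic reconstruction, which
    balances the topography source against the pressure flux."""
    n = len(h)
    dU = np.zeros((n, 2))   # accumulated flux differences per cell
    for i in range(n - 1):  # interface between cells i and i+1
        bmax = max(b[i], b[i + 1])
        hLs = max(h[i] + b[i] - bmax, 0.0)          # reconstructed depths
        hRs = max(h[i + 1] + b[i + 1] - bmax, 0.0)
        uL = hu[i] / h[i] if h[i] > 1e-12 else 0.0
        uR = hu[i + 1] / h[i + 1] if h[i + 1] > 1e-12 else 0.0
        F = rusanov(hLs, hLs * uL, hRs, hRs * uR)
        # per-cell source corrections keep the scheme consistent
        dU[i] -= F + np.array([0.0, 0.5 * g * (h[i] ** 2 - hLs ** 2)])
        dU[i + 1] += F + np.array([0.0, 0.5 * g * (h[i + 1] ** 2 - hRs ** 2)])
    # reflective-wall closure at the domain ends (hydrostatic flux only)
    dU[0] += np.array([0.0, 0.5 * g * h[0] ** 2])
    dU[-1] -= np.array([0.0, 0.5 * g * h[-1] ** 2])
    return h + dt / dx * dU[:, 0], hu + dt / dx * dU[:, 1]
```

In the lake-at-rest state (h + b constant, zero velocity) the interface source corrections cancel the pressure fluxes exactly, so the steady state is preserved to machine precision regardless of the bottom profile b.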
Resumo:
In July 2012, the European Commission issued an invitation for public consultation to review the ‘auctioning time profile’ for the EU Emissions Trading Scheme, in order to collect views from stakeholders and experts in the field of the EU carbon market on a draft for a future amendment of the Commission Regulation on the timing, administration and other aspects of auctioning of greenhouse gas emission allowances. In this submission, the CEPS Carbon Market Forum addresses the following questions and offers its views on the Commission’s proposed amendments: Is back loading a good idea? Is there a need to follow up the back loading with structural measures? What should the number be? If this cannot be addressed, what are the considerations for deciding upon that number? What price expectations are linked to the number? On what basis are they construed?
Resumo:
Aerosols from anthropogenic and natural sources have been recognized as having an important impact on the climate system. However, the small size of aerosol particles (ranging from 0.01 to more than 10 μm in diameter) and their influence on solar and terrestrial radiation makes them difficult to represent within the coarse resolution of general circulation models (GCMs) such that small-scale processes, for example, sulfate formation and conversion, need parameterizing. It is the parameterization of emissions, conversion, and deposition and the radiative effects of aerosol particles that causes uncertainty in their representation within GCMs. The aim of this study was to perturb aspects of a sulfur cycle scheme used within a GCM to represent the climatological impacts of sulfate aerosol derived from natural and anthropogenic sulfur sources. It was found that perturbing volcanic SO2 emissions and the scavenging rate of SO2 by precipitation had the largest influence on the sulfate burden. When these parameters were perturbed the sulfate burden ranged from 0.73 to 1.17 TgS for 2050 sulfur emissions (A2 Special Report on Emissions Scenarios (SRES)), comparable with the range in sulfate burden across all the Intergovernmental Panel on Climate Change SRESs. Thus, the results here suggest that the range in sulfate burden due to model uncertainty is comparable with scenario uncertainty. Despite the large range in sulfate burden there was little influence on the climate sensitivity, which had a range of less than 0.5 K across the ensemble. We hypothesize that this small effect was partly associated with high sulfate loadings in the control phase of the experiment.
Resumo:
Fingerprinting is a well-known approach for identifying multimedia data without having the original data present, but rather what amounts to its essence or “DNA”. Current approaches make insufficient use of three types of knowledge that could be brought to bear in providing a fingerprinting framework that remains effective and efficient, and that can accommodate both whole-image and element-level protection at appropriate levels of abstraction to suit various Foci of Interest (FoI) in an image or cross-media artefact. Our proposed framework therefore aims to deliver selective composite fingerprinting that remains responsive to the requirements for protection of the whole or parts of an image which may be of particular interest and especially vulnerable to attempts at rights violation. This is powerfully aided by leveraging both multi-modal information and a rich spectrum of collateral context knowledge, including image-level collaterals as well as the inevitably needed market intelligence, such as profiling of customers’ social network interests, which we deploy as a crucial component of our Fingerprinting Collateral Knowledge. This knowledge is used in selecting the special FoIs within an image or other media content that have to be selectively and collaterally protected.
Resumo:
The Commission has proposed that a revised version of the present regime of direct payments should be rolled forward into the post-2013 CAP. There would be a limited redistribution of funds between Member States. Thirty per cent of the budget would be allocated to a new greening component, which would be problematic in the WTO. Non-active farmers would not qualify for aid; and payments would be capped. Special schemes would be introduced for small farmers, for young new entrants, and for disadvantaged regions.
Resumo:
Within the context of the normalized variable formulation (NVF) of Leonard and the total variation diminishing (TVD) constraints of Harten, this paper presents an extension of previous work by the authors for solving unsteady incompressible flow problems. The main contributions of the paper are threefold. First, it presents the results of the development and implementation of a bounded high-order upwind adaptive QUICKEST scheme in the 3D robust code (Freeflow), for the numerical solution of the full incompressible Navier-Stokes equations. Second, it reports numerical simulation results for the 1D shock tube problem, a 2D impinging jet and 2D/3D broken-dam flows, and compares these results with existing analytical and experimental data. Third, it presents the application of the numerical method to solving 3D free surface flow problems. (C) 2007 IMACS. Published by Elsevier B.V. All rights reserved.
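The boundedness idea behind such schemes can be sketched in a far simpler setting than the authors' adaptive QUICKEST for the Navier-Stokes equations: a 1D linear advection solver in which the QUICK face value is limited in Leonard's normalized variables so that Harten-style TVD constraints hold. This is a minimal illustrative sketch, not the paper's method, and all names are assumptions.

```python
import numpy as np

def bounded_quick_face(phiU, phiC, phiD):
    """QUICK face value, limited in normalized variables so that the
    resulting scheme satisfies TVD-type boundedness constraints."""
    denom = phiD - phiU
    if abs(denom) < 1e-12:
        return phiC                   # locally flat: first-order upwind
    t = (phiC - phiU) / denom         # normalized variable phi~_C
    if t <= 0.0 or t >= 1.0:
        return phiC                   # non-monotone data: revert to upwind
    tf = 0.375 + 0.75 * t             # QUICK in normalized variables
    tf = min(tf, 2.0 * t, 1.0)        # boundedness limits
    return phiU + tf * denom

def advect(phi, c, nsteps):
    """1D linear advection with positive velocity on a periodic grid,
    explicit Euler, Courant number c (c <= 0.5 keeps the scheme TVD)."""
    phi = phi.astype(float).copy()
    n = len(phi)
    for _ in range(nsteps):
        f = np.empty(n)               # f[i] = value at face i+1/2
        for i in range(n):
            f[i] = bounded_quick_face(phi[(i - 1) % n], phi[i],
                                      phi[(i + 1) % n])
        phi -= c * (f - np.roll(f, 1))  # np.roll(f, 1)[i] = face i-1/2
    return phi
```

Because the limited face value stays between the upwind value and min(2*phi~_C, 1), a step profile is advected without the over- and undershoots the unlimited QUICK scheme would produce, while the flux form keeps the total mass conserved.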
Resumo:
In this article, we present an analytical direct method, based on a Numerov three-point scheme, which is sixth-order accurate and has an execution time linear in the grid dimension, to solve the discrete one-dimensional Poisson equation with Dirichlet boundary conditions. Our results should improve numerical codes used mainly in self-consistent calculations in solid state physics.
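The paper's analytical method is not reproduced here, but its two ingredients — a Numerov three-point stencil and a linear-time solve of the resulting tridiagonal system — can be sketched as follows. Note that this plain Numerov discretization is fourth-order accurate; the sixth-order accuracy is specific to the paper's analytical treatment. Function and variable names are illustrative.

```python
import numpy as np

def poisson_numerov(f, a, b, n):
    """Solve u'' = f(x) on (0, 1), u(0) = a, u(1) = b, on n interior
    points with a Numerov three-point stencil; the tridiagonal system
    is solved by the Thomas algorithm in O(n) time."""
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)
    fv = f(x)
    # Numerov right-hand side: (h^2/12)(f_{i-1} + 10 f_i + f_{i+1})
    rhs = h * h / 12.0 * (fv[:-2] + 10.0 * fv[1:-1] + fv[2:])
    rhs[0] -= a                     # fold Dirichlet data into the system
    rhs[-1] -= b
    # Thomas algorithm for the constant tridiagonal matrix (1, -2, 1)
    c = np.empty(n)                 # modified upper-diagonal coefficients
    d = np.empty(n)                 # modified right-hand side
    c[0] = 1.0 / -2.0
    d[0] = rhs[0] / -2.0
    for i in range(1, n):
        m = -2.0 - c[i - 1]
        c[i] = 1.0 / m
        d[i] = (rhs[i] - d[i - 1]) / m
    u = np.empty(n)
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        u[i] = d[i] - c[i] * u[i + 1]
    return x[1:-1], u
```

For u = sin(pi x), whose source is f = -pi^2 sin(pi x), the solver recovers the exact solution to within the fourth-order truncation error of the stencil, while both time and memory remain linear in n.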
Resumo:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Resumo:
In this article we study the general structure and special properties of the Schwinger-Dyson equation for the gluon propagator constructed with the pinch technique, together with the question of how to obtain infrared finite solutions, associated with the generation of an effective gluon mass. Exploiting the known all-order correspondence between the pinch technique and the background field method, we demonstrate that, contrary to the standard formulation, the non-perturbative gluon self-energy is transverse order-by-order in the dressed loop expansion, and separately for gluonic and ghost contributions. We next present a comprehensive review of several subtle issues relevant to the search of infrared finite solutions, paying particular attention to the role of the seagull graph in enforcing transversality, the necessity of introducing massless poles in the three-gluon vertex, and the incorporation of the correct renormalization group properties. In addition, we present a method for regulating the seagull-type contributions based on dimensional regularization; its applicability depends crucially on the asymptotic behavior of the solutions in the deep ultraviolet, and in particular on the anomalous dimension of the dynamically generated gluon mass. A linearized version of the truncated Schwinger-Dyson equation is derived, using a vertex that satisfies the required Ward identity and contains massless poles belonging to different Lorentz structures. The resulting integral equation is then solved numerically, the infrared and ultraviolet properties of the obtained solutions are examined in detail, and the allowed range for the effective gluon mass is determined. Various open questions and possible connections with different approaches in the literature are discussed. © SISSA 2006.
A new double laser pulse pumping scheme for transient collisionally excited plasma soft X-ray lasers
Resumo:
Within this thesis a new double laser pulse pumping scheme for plasma-based, transient collisionally excited soft x-ray lasers (SXRL) was developed, characterized and utilized for applications. SXRL operation from ~50 up to ~200 electronvolts was demonstrated using this concept. As a central technical tool, a special Mach-Zehnder interferometer was developed in the chirped pulse amplification (CPA) laser front-end for the generation of fully controllable double pulses to optimally pump SXRLs.
This Mach-Zehnder device is fully controllable and enables the creation of two CPA pulses of different pulse duration and variable energy balance with an adjustable time delay. Besides SXRL pumping, the double-pulse configuration was applied to determine the B-integral in the CPA laser system by amplifying short pulse replicas in the system, followed by an analysis in the time domain. The measurement of B-integral values in the 0.1 to 1.5 radian range, limited only by the reachable laser parameters, proved to be a promising tool to characterize nonlinear effects in CPA laser systems.
Contributing to the issue of SXRL pumping, the double pulse was configured to optimally produce the gain medium for SXRL amplification. The focusing geometry of the two collinear pulses under the same grazing incidence angle on the target significantly improved the generation of the active plasma medium: on the one hand through the intrinsically guaranteed exact overlap of the two pulses on the target, and on the other hand through the grazing-incidence pre-pulse plasma generation, which allows SXRL operation at higher electron densities, enabling higher gain in longer-wavelength SXRLs and higher efficiency in shorter-wavelength SXRLs.
The observation of gain enhancement was confirmed by plasma hydrodynamic simulations.
The first introduction of double short-pulse single-beam grazing incidence pumping for SXRL pumping below 20 nanometer at the laser facility PHELIX in Darmstadt (Germany) resulted in reliable operation of a nickel-like palladium SXRL at 14.7 nanometer, with the pump energy threshold strongly reduced to less than 500 millijoule. With the adaptation of this concept, named double-pulse single-beam grazing incidence pumping (DGRIP), and the transfer of the technology to the laser facility LASERIX in Palaiseau (France), improved efficiency and stability of table-top high-repetition soft x-ray lasers in the wavelength region below 20 nanometer were demonstrated. With a total pump laser energy below 1 joule on the target, 2 microjoule of nickel-like molybdenum soft x-ray laser emission at 18.9 nanometer was obtained at 10 hertz repetition rate, proving the attractiveness for high average power operation. An easy and rapid alignment procedure fulfilled the requirements for a sophisticated installation, and the highly stable output satisfied the need for a reliable, strong SXRL source. The qualities of the DGRIP scheme were confirmed in an irradiation campaign on user samples with over 50,000 shots, corresponding to a deposited energy of ~50 millijoule.
The generation of double pulses with high energies up to ~120 joule enabled the transfer to shorter-wavelength SXRL operation at the laser facility PHELIX. The application of DGRIP proved to be a simple and efficient method for the generation of soft x-ray lasers below 10 nanometer. Nickel-like samarium soft x-ray lasing at 7.3 nanometer was achieved at a low total pump energy threshold of 36 joule, which confirmed the suitability of the applied pumping scheme. Reliable and stable SXRL operation was demonstrated thanks to the single-beam pumping geometry, despite the large optical apertures.
The soft x-ray lasing of nickel-like samarium was an important milestone for the feasibility of applying the pumping scheme at the higher pump pulse energies necessary to reach soft x-ray laser wavelengths in the water window. The reduction of the total pump energy below 40 joule for short-wavelength lasing at 7.3 nanometer now fulfils the requirement for installation at the high-repetition-rate laser facility LASERIX.
Resumo:
Special and differential treatment (S&D) allows differentiated treatment for developing countries within the WTO system by justifying a deviation from the most-favoured-nation obligation. Since it was incorporated into the GATT (the predecessor of the WTO) in the 1960s, S&D has played a significant role in promoting the integration of developing countries into the multilateral trading system. However, S&D is undergoing complicated and entangled discussion at the current multilateral trade negotiations, the Doha Development Agenda. There are a number of reasons to make opposing arguments in developed and developing countries, among which this paper focuses on two elements: diversification of developing countries and instability of preferential schemes. In order to overcome these problems and in order to make S&D more effective and operational, this paper considers the following alternative approaches: differentiation among developing countries applying the common but differentiated responsibility (CBDR) principle by analogy and codification of a preferential scheme as a multilateral agreement in the manner of North-South RTAs with flexibility.
Resumo:
Two extensions of the fast and accurate special perturbation method recently developed by Peláez et al. are presented, for elliptic and hyperbolic motion respectively. A comparison with Peláez's method and with the very efficient Stiefel-Scheifele's method, on the problems of oblate Earth plus Moon and continuous radial thrust, shows that the new formulations can appreciably improve the accuracy of Peláez's method and offer a better performance than Stiefel-Scheifele's method. Future work will include the two new formulations and the original one due to Peláez in an adaptive scheme for highly accurate orbit propagation.
Resumo:
This CEPS Special Report builds on the first deliverable of the project entitled “Carbon leakage: Options for the EU”. It identifies carbon costs, and the ability to pass through carbon costs, as the main risk factors that could lead from asymmetrical carbon policies to carbon leakage. It also outlines and evaluates, based on criteria discussed in the paper, options for detecting and mitigating the risk of carbon leakage in three jurisdictions, with special attention to the EU ETS (Emissions Trading Scheme). Based on the analysis of approaches currently used in a number of existing carbon pricing systems, it identifies the balance between the number of sectors identified as being at risk and the amount of compensation provided as a risk mitigation measure as the critical element in an optimum approach to addressing carbon leakage risks. It also identifies a risk-based approach to identifying sectors at risk as allowing for a better reflection of reality in a counterfactual argument. Finally, the paper concludes that while, with some exceptions, there has been limited carbon leakage until now, the past may not be a good reflection of the future, and that measures need to be put in place for the post-2020 period. While examining a number of approaches, it identifies free allocation as the most likely way forward for mitigating the risk of carbon leakage. While other approaches may provide interesting options, they also present implementation challenges from the points of view of market functioning, international trade and international relations. A number of challenges will need to be addressed in the post-2020 period, many of them part of the EU ETS structural reform package. These include, among others, the need to recognise and provide for individual sectoral characteristics, as well as for changes in production patterns due to economic cycles and other factors.
Finally, the paper emphasises the need for an open dialogue regarding the post-2020 provisions for carbon leakage as no overall Energy and Climate Package is likely to be agreed on until this matter is addressed.
Resumo:
This paper discusses the application of the new European rules for burden-sharing and bail-in in the banking sector, in view of their ability to accommodate broader policy goals of aggregate financial stability. It finds that the Treaty principles and the new discipline of state aid and the restructuring of banks provide a solid framework for combating moral hazard and removing incentives that encourage excessive risk-taking by bankers. However, the application of the new rules may have become excessively attentive to the case-by-case evaluation of individual institutions, while perhaps losing sight of the aggregate policy needs of the banking system. Indeed, in this first phase of the banking union, while large segments of the EU banking sector still require a substantial restructuring and recapitalisation, the market may not be able to provide all the needed resources in the current environment of depressed profitability and low growth. Thus, a systemic market failure may be making the problem impossible to fix without resorting to temporary public support. But the risk of large write-offs of capital instruments due to burden-sharing and bail-in may represent an insurmountable obstacle to such public support as it may set in motion an investors’ flight. The paper concludes by showing that existing rules do contain the flexibility required to accommodate aggregate policy requirements in the general interest, and outlines a public support scheme for the precautionary recapitalisation of solvent banks that would be compliant with EU law.
Resumo:
This paper studies eight countries in which the regulation of unemployment benefits and related benefits and the concomitant activation of unemployed individuals has a multi-tiered architecture. It assesses their experiences and tries to understand possible problems of ‘institutional moral hazard’ that may emerge in the context of a hypothetical European Unemployment Benefit Scheme.