880 results for convective parameterization scheme


Relevance: 20.00%

Abstract:

A numerical model for studying the influences of deep convective cloud systems on photochemistry was developed, based on a non-hydrostatic meteorological model and chemistry from a global chemistry transport model. The transport of trace gases, the scavenging of soluble trace gases, and the influence of lightning-produced nitrogen oxides (NOx = NO + NO2) on the local ozone-related photochemistry were investigated in a multi-day case study for an oceanic region in the tropical western Pacific. Model runs that considered the influence of large-scale flows, previously neglected in multi-day cloud-resolving and single-column model studies of tracer transport, showed that those studies considerably overestimated the influence of mesoscale subsidence (between clouds) on trace gas transport. The simulated vertical transport and scavenging of highly soluble tracers were found to depend on the initial profiles, reconciling contrasting results from two previous studies. The influence of the modeled uptake of trace gases by hydrometeors in the liquid and ice phases was studied in some detail for a small number of atmospheric trace gases, and novel aspects of the role of the retention coefficient (i.e., the fraction of a dissolved trace gas that is retained in the ice phase upon freezing) in the vertical transport of highly soluble gases were illuminated. Including lightning NOx production inside a 500 km 2-D model domain was found to be important for the NOx budget and caused small to moderate changes in the domain-averaged ozone concentrations. A number of sensitivity studies showed that the fraction of lightning-associated NOx lost through photochemical reactions in the vicinity of the lightning source was considerable, but depended strongly on assumptions about the magnitude and altitude of the lightning NOx source. In contrast to a suggestion from an earlier study, it was argued that the near-zero upper-tropospheric ozone mixing ratios observed close to the study region were most probably not caused by the formation of NO associated with lightning. Instead, it was argued, in agreement with suggestions from other studies, that the deep convective transport of ozone-poor air masses from the relatively unpolluted marine boundary layer, which had most likely been advected horizontally over relatively large distances (both before and after encountering deep convection), probably played a role. In particular, it was suggested that the ozone profiles observed during CEPEX (Central Equatorial Pacific Experiment) were strongly influenced by the deep convection and the larger-scale flow associated with the intra-seasonal oscillation.
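The retention coefficient defined above lends itself to a one-line model: upon drop freezing, a fraction r of the dissolved load stays in the ice and 1 - r returns to the gas phase. The toy sketch below (names and example values are ours, not the thesis model) illustrates why r controls whether a highly soluble gas is removed by precipitating ice or released aloft for further vertical transport.

```python
def freeze_partition(c_dissolved: float, r: float) -> tuple[float, float]:
    """Partition a dissolved trace gas load when a supercooled drop freezes.

    c_dissolved -- trace gas amount dissolved in the drop (arbitrary units)
    r           -- retention coefficient in [0, 1]: fraction kept in the ice
    Returns (amount retained in the ice, amount released to the gas phase).
    """
    if not 0.0 <= r <= 1.0:
        raise ValueError("retention coefficient must lie in [0, 1]")
    return r * c_dissolved, (1.0 - r) * c_dissolved

# Illustrative limits: r = 1 keeps a highly soluble gas in the hydrometeor
# (favouring removal by precipitation); r = 0 re-releases it upon freezing
# (favouring transport to the upper troposphere).
print(freeze_partition(1.0, 1.0))   # (1.0, 0.0)
print(freeze_partition(1.0, 0.05))  # (0.05, 0.95)
```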

Relevance: 20.00%

Abstract:

Deep convection by pyro-cumulonimbus clouds (pyroCb) can transport large amounts of forest fire smoke into the upper troposphere and lower stratosphere. Here, results from numerical simulations of such deep convective smoke transport are presented. The structure, shape and injection height of the pyroCb simulated for a specific case study are in good agreement with observations. The model results confirm that substantial amounts of smoke are injected into the lower stratosphere. Small-scale mixing processes at the cloud top result in a significant enhancement of smoke injection into the stratosphere. Sensitivity studies show that the release of sensible heat by the fire plays an important role in the dynamics of the pyroCb. Furthermore, the convection is found to be very sensitive to the background meteorological conditions. While the abundance of aerosol particles acting as cloud condensation nuclei (CCN) has a strong influence on the microphysical structure of the pyroCb, the CCN effect on the convective dynamics is rather weak. The release of latent heat dominates the overall energy budget of the pyroCb; however, since most of the cloud water originates from moisture entrained from the background atmosphere, the fire-released moisture makes only a minor contribution to the convective dynamics. Sufficient fire heating, favorable meteorological conditions, and small-scale mixing processes at the cloud top are identified as the key ingredients for troposphere-to-stratosphere transport by pyroCb convection.
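To make the energy budget argument concrete in symbols (ours, not the paper's notation): the convective heating splits into the fire's sensible heat flux $Q_{\mathrm{sens}}$ and the latent heat released by condensation of the available moisture,

$$ Q_{\mathrm{lat}} = L_v \left( m_{\mathrm{entrained}} + m_{\mathrm{fire}} \right), \qquad m_{\mathrm{fire}} \ll m_{\mathrm{entrained}}, $$

with $L_v \approx 2.5\ \mathrm{MJ\,kg^{-1}}$. So although $Q_{\mathrm{lat}}$ dominates the budget, its moisture supply, and hence the buoyancy it generates, comes mainly from the background atmosphere, while $Q_{\mathrm{sens}}$ is what the fire itself controls.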

Relevance: 20.00%

Abstract:

X-ray absorption spectroscopy (XAS) is a powerful means of investigating structural and electronic properties in condensed-matter physics. Analysis of the near-edge part of the XAS spectrum, the so-called X-ray Absorption Near Edge Structure (XANES), can typically provide the following information on the photoexcited atom:
- oxidation state and coordination environment;
- speciation of transition metal compounds;
- conduction band DOS projected on the excited atomic species (PDOS).
Analysis of XANES spectra is greatly aided by simulations. In the most common scheme, the multiple scattering framework is used with the muffin-tin approximation for the scattering potential, and the spectral simulation is based on a hypothetical reference structure. This approach has the advantage of requiring relatively little computing power, but in many cases the assumed structure is quite different from the actual system measured, and the muffin-tin approximation is not adequate for low-symmetry structures or highly directional bonds. It is therefore worthwhile to develop alternative methods. In one approach, the spectral simulation is based on atomic coordinates obtained from a DFT (Density Functional Theory) optimized structure. In another approach, which is the subject of this thesis, the XANES spectrum is calculated directly from an ab initio DFT calculation of the atomic and electronic structure. This method takes full advantage of the true many-electron final wavefunction, which can be computed with DFT algorithms that include a core hole in the absorbing atom, to compute the final cross section. To calculate the many-electron final wavefunction, the Projector Augmented Wave (PAW) method is used. In this scheme, the absorption cross section is written in terms of the many-electron wavefunction of the final state; this is calculated starting from a pseudo-wavefunction and reconstructing the true wavefunction through a transformation operator parameterized by so-called partial waves and projector functions. The aim of my thesis is to apply and test the PAW methodology for the calculation of the XANES cross section. I have focused on iron and silicon structures and on some biological target molecules (myoglobin and cytochrome c). Other inorganic and biological systems could be considered in future applications of this methodology, which could become an important improvement over the multiple scattering approach.
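For orientation, the standard PAW relations behind this reconstruction (following Blöchl's formulation, not the precise expressions implemented in the thesis) are: the true (all-electron) wavefunction is recovered from the smooth pseudo-wavefunction via partial waves $|\phi_i\rangle$, pseudo partial waves $|\tilde\phi_i\rangle$ and projector functions $\langle\tilde p_i|$,

$$ |\psi\rangle = |\tilde\psi\rangle + \sum_i \left(|\phi_i\rangle - |\tilde\phi_i\rangle\right)\langle\tilde p_i|\tilde\psi\rangle, $$

and the XAS cross section in the electric dipole approximation follows Fermi's golden rule,

$$ \sigma(\hbar\omega) \propto \sum_f \left|\langle \psi_f|\,\hat{\boldsymbol{\varepsilon}}\cdot\mathbf{r}\,|\psi_i\rangle\right|^2 \delta\!\left(E_f - E_i - \hbar\omega\right), $$

evaluated with a core hole on the absorbing atom.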

Relevance: 20.00%

Abstract:

Within this thesis a new double laser pulse pumping scheme for plasma-based, transient collisionally excited soft x-ray lasers (SXRL) was developed, characterized and utilized for applications. SXRL operation at photon energies from ~50 up to ~200 electron volts was demonstrated applying this concept. As a central technical tool, a special Mach-Zehnder interferometer in the chirped pulse amplification (CPA) laser front-end was developed for the generation of fully controllable double pulses to optimally pump SXRLs.

This Mach-Zehnder device is fully controllable and enables the creation of two CPA pulses of different pulse durations and variable energy balance with an adjustable time delay. Besides SXRL pumping, the double-pulse configuration was applied to determine the B-integral in the CPA laser system by amplifying a short pulse replica in the system, followed by an analysis in the time domain. The measurement of B-integral values in the 0.1 to 1.5 radian range, limited only by the reachable laser parameters, proved to be a promising tool for characterizing nonlinear effects in CPA laser systems.

Contributing to the issue of SXRL pumping, the double pulse was configured to optimally produce the gain medium for SXRL amplification. The focusing geometry of the two collinear pulses under the same grazing incidence angle on the target significantly improved the generation of the active plasma medium: on the one hand through the intrinsically guaranteed exact overlap of the two pulses on the target, and on the other hand through the grazing-incidence pre-pulse plasma generation, which allows SXRL operation at higher electron densities, enabling higher gain in longer-wavelength SXRLs and higher efficiency in shorter-wavelength SXRLs. The observed gain enhancement was confirmed by plasma hydrodynamic simulations.

The first implementation of double short-pulse single-beam grazing incidence pumping for SXRL operation below 20 nanometers, at the laser facility PHELIX in Darmstadt (Germany), resulted in reliable operation of a nickel-like palladium SXRL at 14.7 nanometers, with the pump energy threshold strongly reduced to less than 500 millijoules. With the adaptation of this concept, named double-pulse single-beam grazing incidence pumping (DGRIP), and the transfer of the technology to the laser facility LASERIX in Palaiseau (France), improved efficiency and stability of table-top high-repetition-rate soft x-ray lasers in the wavelength region below 20 nanometers were demonstrated. With a total pump laser energy below 1 joule on the target, 2 microjoules of nickel-like molybdenum soft x-ray laser emission at 18.9 nanometers were obtained at 10 hertz repetition rate, proving the attractiveness for high-average-power operation. An easy and rapid alignment procedure fulfilled the requirements for a sophisticated installation, and the highly stable output satisfied the need for a reliable, strong SXRL source. The qualities of the DGRIP scheme were confirmed in irradiation runs on user samples with over 50,000 shots, corresponding to a deposited energy of ~50 millijoules.

The generation of double pulses with high energies up to ~120 joules enabled the transfer to shorter-wavelength SXRL operation at the laser facility PHELIX. The application of DGRIP proved to be a simple and efficient method for the generation of soft x-ray lasers below 10 nanometers. Nickel-like samarium soft x-ray lasing at 7.3 nanometers was achieved at a low total pump energy threshold of 36 joules, which confirmed the suitability of the applied pumping scheme. Reliable and stable SXRL operation was demonstrated, thanks to the single-beam pumping geometry, despite the large optical apertures. The soft x-ray lasing of nickel-like samarium was an important milestone for the feasibility of applying this pumping scheme at even higher pump pulse energies, which are necessary to obtain soft x-ray laser wavelengths in the water window. The reduction of the total pump energy below 40 joules for 7.3 nanometer lasing now fulfills the requirement for installation at the high-repetition-rate laser facility LASERIX.
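For reference, the quantity measured with the double-pulse replica technique described above is the accumulated nonlinear phase of the amplifier chain; the standard definition (not specific to this thesis) is

$$ B = \frac{2\pi}{\lambda} \int n_2\, I(z)\, \mathrm{d}z, $$

where $\lambda$ is the laser wavelength, $n_2$ the nonlinear refractive index and $I(z)$ the intensity along the propagation path. Values approaching a few radians are commonly taken as the onset of detrimental self-focusing and spectral distortion in a CPA system, which is why monitoring B in the 0.1 to 1.5 radian range is useful.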

Relevance: 20.00%

Abstract:

Verification assesses the quality of quantitative precipitation forecasts (QPF) against observations and provides hints at systematic model errors. Using the feature-based technique SAL, simulated precipitation distributions are analysed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years, numerical weather prediction models have been run with grid spacings that allow deep convection to be simulated without parameterization, and the question arises whether these models deliver better forecasts. The high-resolution hourly observational data set used in this work is a combination of radar and station measurements. On the one hand, it is shown, using the German COSMO models as an example, that the models of the newest generation simulate the mean diurnal cycle better, albeit with too weak a maximum that occurs somewhat too late; in contrast, the models of the old generation produce a maximum that is too strong and occurs considerably too early. On the other hand, the new model achieves a better simulation of the spatial distribution of precipitation through a clear reduction of the windward/lee problem. To quantify these subjective assessments, daily QPFs from four models for Germany over an eight-year period were examined with SAL as well as with classical measures. The higher-resolution models simulate more realistic precipitation distributions (better in S), but hardly any difference appears in the other components. A further aspect is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE; this reveals the 'double penalty' problem. Combining the three SAL components yields the result that, especially in summer, the most finely resolved model (COSMO-DE) performs best, mainly owing to a more realistic structure, so that SAL provides helpful information and confirms the subjective assessment.

In 2007 the COPS and MAP D-PHASE projects took place and offered the opportunity to compare 19 models from three model categories with respect to their forecast performance in southwestern Germany for accumulation periods of 6 and 12 hours. Notable results are that (i) the smaller the grid spacing of the models, the more realistic the simulated precipitation distributions; (ii) regarding precipitation amount, the high-resolution models simulate less precipitation, i.e. usually too little; and (iii) the location component is simulated worst by all models. The analysis of the forecast performance of these model types for convective situations shows clear differences. In high-pressure situations, the models without convective parameterization are unable to simulate the convection, whereas the models with convective parameterization produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the high-resolution models providing more realistic fields. This weather-regime-based investigation is then carried out more systematically using the convective timescale. A climatology, compiled for the first time for Germany, shows that the frequency of this timescale falls off towards larger values following a power law. The SAL results are dramatically different for the two regimes: for small values of the convective timescale they are good, whereas for large values both structure and amplitude are clearly overestimated.

For precipitation forecasts with very high temporal resolution, the influence of timing errors becomes increasingly important. By optimizing/minimizing the L component of SAL within a time window (+/-3 h) centred on the observation time, this error can be determined. It is shown that at the optimal time shift the structure and amplitude of the QPFs for COSMO-DE improve, thereby better demonstrating the model's fundamental ability to simulate the precipitation distribution realistically.
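Of the three SAL components, the amplitude A has the simplest closed form; the sketch below computes it for gridded forecast and observed precipitation fields, following the published SAL definition (Wernli et al. 2008). The array handling is an assumption; S and L additionally require identifying coherent precipitation objects and centres of mass, which is omitted here.

```python
import numpy as np

def sal_amplitude(forecast: np.ndarray, observed: np.ndarray) -> float:
    """Amplitude component A of SAL: normalized difference of the
    domain-mean precipitation, bounded in [-2, 2]; A > 0 means the
    model overestimates the domain-averaged precipitation."""
    d_f = float(forecast.mean())
    d_o = float(observed.mean())
    return (d_f - d_o) / (0.5 * (d_f + d_o))

# A forecast with twice the observed domain mean gives A = +2/3:
print(sal_amplitude(np.full((10, 10), 2.0), np.ones((10, 10))))  # 0.666...
```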

Relevance: 20.00%

Abstract:

Basic concepts and definitions relevant to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfillment of the Well Mixed Condition (WMC). For the description of dispersion in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields, with different assumptions on the small-scale correlation time. Tests of the LSM on GCM fields suggest that an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
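For an eddy-diffusivity closure K(z), the Markov order 0 model mentioned above is commonly written as the random displacement model; its drift term, the gradient of K, is precisely what enforces the Well Mixed Condition, which is why interpolation must keep K and its derivative mutually consistent. A minimal 1-D sketch with a hypothetical K profile (not the thesis implementation):

```python
import numpy as np

def rdm_step(z, K, dKdz, dt, rng):
    """One step of the Markov-order-0 random displacement model
    dz = (dK/dz) dt + sqrt(2 K) dW, which satisfies the Well Mixed
    Condition for an eddy-diffusivity closure K(z)."""
    return z + dKdz(z) * dt + np.sqrt(2.0 * K(z) * dt) * rng.standard_normal(z.shape)

# Hypothetical parabolic diffusivity in a boundary layer of depth h:
h = 1000.0
K = lambda z: 0.4 * np.clip(z, 0, h) * (1 - np.clip(z, 0, h) / h) + 1e-3
dKdz = lambda z: 0.4 * (1 - 2 * np.clip(z, 0, h) / h)

rng = np.random.default_rng(0)
z = rng.uniform(0, h, 10_000)          # initially well mixed particles
for _ in range(500):
    z = np.abs(rdm_step(z, K, dKdz, dt=1.0, rng=rng))  # reflect at the ground
    z = np.where(z > h, 2 * h - z, z)                  # reflect at the top
# If the WMC holds, the particle density stays uniform in [0, h].
```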

Relevance: 20.00%

Abstract:

In this work the turbulence of the atmospheric boundary layer under convective conditions is modelled. To this aim, the equations that describe the atmospheric motion are expressed through Reynolds averages and therefore require closures. The work consists in modifying the TKE-l closure used in the BOLAM (Bologna Limited Area Model) forecast model. In particular, the single-column model extracted from BOLAM is used and modified to obtain three further closure schemes: a non-local term is added to the flux-gradient relations used to close the second-order moments appearing in the evolution equation of the turbulent kinetic energy, so that the flux-gradient relations become more suitable for simulating an unstable boundary layer. Finally, the results obtained from the original single-column model and from the three new schemes are compared with each other and with the observations of the GABLS2 case, well known in the literature.
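A common form of such a non-local (countergradient) correction to the flux-gradient relation, written here for the heat flux as an illustration rather than as the exact BOLAM formulation, is

$$ \overline{w'\theta'} = -K_h \left( \frac{\partial \bar\theta}{\partial z} - \gamma_\theta \right), $$

where $K_h$ is the eddy diffusivity for heat and $\gamma_\theta > 0$ is the non-local term, which permits an upward heat flux even where the mean potential temperature gradient is slightly stable, as observed in the upper part of a convective boundary layer.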

Relevance: 20.00%

Abstract:

The thesis deals with numerical algorithms for fluid-structure interaction problems with application in blood flow modelling. It starts with a short introduction to the mathematical description of incompressible viscous flow with non-Newtonian viscosity and a moving linear viscoelastic structure. The mathematical model consists of the generalized Navier-Stokes equations for the description of the fluid flow and the generalized string model for the structure movement. The arbitrary Lagrangian-Eulerian approach is used in order to take the moving computational domain into account. Part of the thesis is devoted to a discussion of the non-Newtonian behaviour of shear-thinning fluids, in our case blood, and to the derivation of two non-Newtonian models frequently used in blood flow modelling. Further, we give a brief overview of recent fluid-structure interaction schemes, with a discussion of the difficulties arising in the numerical modelling of blood flow. Our main contribution lies in the numerical and experimental study of a new loosely coupled partitioned scheme called the kinematic splitting fluid-structure interaction algorithm. We present a stability analysis for a coupled problem of non-Newtonian shear-dependent fluids in moving domains with viscoelastic boundaries; here we assume nonlinearity in both the convective and the diffusive term. We analyse the convergence of the proposed numerical scheme for a simplified fluid model of Oseen type. Moreover, we present a series of experiments including numerical error analysis, a comparison of hemodynamic parameters for Newtonian and non-Newtonian fluids, and a comparison of several physiologically relevant computational geometries in terms of wall displacement and wall shear stress. Numerical analysis and an extensive experimental study for several standard geometries confirm the reliability and accuracy of the proposed kinematic splitting scheme for approximating fluid-structure interaction problems.
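For intuition about what "loosely coupled partitioned" means here, the toy below shows a staggered time loop in which the fluid and structure solvers exchange interface data exactly once per step, with no sub-iteration to full convergence. It is a schematic of the coupling pattern only, not the kinematic splitting algorithm itself: the "fluid" is reduced to an algebraic pressure law and the "structure" to a damped spring-mass wall, and all parameters are hypothetical.

```python
m, c, k = 1.0, 0.5, 50.0   # wall mass, damping, stiffness (toy values)
p_in, R = 2.0, 0.8         # driving pressure and fluid resistance (toy values)

def fluid_traction(wall_velocity):
    """Toy 'fluid solve': traction on the wall given the interface motion."""
    return p_in - R * wall_velocity

eta, deta = 0.0, 0.0       # wall displacement and velocity
dt = 1e-3
for _ in range(5000):
    # 1) fluid step uses the latest structural interface velocity
    t_f = fluid_traction(deta)
    # 2) structure step advances under that fluid load (semi-implicit Euler)
    ddeta = (t_f - c * deta - k * eta) / m
    deta += dt * ddeta
    eta += dt * deta
print(f"wall displacement ~ {eta:.4f} (analytic steady state p_in/k = {p_in/k:.4f})")
```

In a real scheme each step involves an ALE mesh update and full PDE solves, but the once-per-step exchange of interface velocity and traction is the defining feature of a loosely coupled partitioned method.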

Relevance: 20.00%

Abstract:

This study aims at a comprehensive understanding of aerosol-cloud interactions and their effects on cloud properties and climate using the chemistry-climate model EMAC. CCN activation is regarded as the dominant driver in aerosol-cloud feedback loops in warm clouds. The CCN activation is calculated prognostically using two different cloud droplet nucleation (CDN) parameterizations, the STN and HYB schemes. Both CDN schemes account for size and chemistry effects on droplet formation based on the same aerosol properties; the main difference between them is the calculation of the solute effect (hygroscopicity). The kappa-method is for the first time incorporated into the Abdul-Razzak and Ghan (ARG) activation scheme to calculate the hygroscopicity and critical supersaturation of aerosols (HYB), and the performance of the modified scheme is compared with the osmotic coefficient model (STN), which is the standard in the ARG scheme. Reference simulations (REF) with prescribed cloud droplet number concentrations have also been carried out in order to isolate the effects of aerosol-cloud feedbacks. In addition, since the calculated cloud cover is an important determinant of cloud radiative effects and influences the nucleation process, two cloud cover parameterizations (a relative humidity threshold scheme, RH-CLC, and a statistical cloud cover scheme, ST-CLC) have been examined together with the CDN schemes, and their effects on the simulated cloud properties and relevant climate parameters have been investigated. The distinct cloud droplet spectra show strong sensitivity to aerosol composition effects on cloud droplet formation at all particle sizes, especially for the Aitken mode. As Aitken particles are the major component of the total aerosol number concentration and of the CCN, and are most sensitive to the aerosol chemical composition (solute) effect on droplet formation, their activation contributes strongly to total cloud droplet formation and thereby yields different cloud droplet spectra. These different spectra influence cloud structure, cloud properties, and climate, and show regionally varying sensitivity to meteorological and geographical conditions as well as to the spatiotemporal aerosol properties (particle size, number, and composition). The changes in response to the different CDN schemes are more pronounced at lower than at higher altitudes. Among regions, the subarctic shows the strongest changes, as the lower surface temperature amplifies the effects of the activated aerosols; in contrast, the Sahara desert, an extremely dry area, is less influenced by changes in CCN number concentration. The aerosol-cloud coupling effects have been examined by comparing the prognostic CDN simulations (STN, HYB) with the reference simulation (REF). The most pronounced effects are found in the cloud droplet number concentration, the cloud water distribution, and the cloud radiative effect. The aerosol-cloud coupling generally increases the cloud droplet number concentration; this decreases the efficiency of the formation of weak stratiform precipitation and increases the cloud water loading. These large-scale changes lead to larger cloud cover and longer cloud lifetime, and contribute to high optical thickness and strong cloud cooling effects. This cools the Earth's surface, increases atmospheric stability, and reduces convective activity.

These aerosol-cloud feedback responses also differ depending on the cloud cover scheme. The ST-CLC scheme is more sensitive to aerosol-cloud coupling, since it links local dynamics and cloud water distributions more tightly in the cloud formation process than the RH-CLC scheme does. For the calculated total cloud cover, the RH-CLC scheme simulates a pattern more similar to the observations than the ST-CLC scheme does, but the overall properties (e.g., total cloud cover, cloud water content) in the RH simulations are overestimated, particularly over the ocean. This originates mainly from the difference in simulated skewness between the schemes: the RH simulations produce negatively skewed distributions of cloud cover and the associated cloud water, similar to the observations, while the ST simulations yield positively skewed distributions and hence lower mean values than the RH-CLC scheme. The underestimation of total cloud cover over the ocean, particularly over the intertropical convergence zone (ITCZ), relates to a systematic deficiency of the prognostic calculation of skewness in the current set-up of the ST-CLC scheme. Overall, the current EMAC model set-ups perform better over continents for all combinations of the cloud droplet nucleation and cloud cover schemes. For representing aerosol-cloud feedbacks, the HYB scheme predicts cloud and climate parameters better than the STN scheme for both cloud cover schemes. The RH-CLC scheme offers a better simulation of total cloud cover and the related parameters, both with the HYB scheme and with single-moment microphysics (REF), than the ST-CLC scheme does, but it is not very sensitive to aerosol-cloud interactions.
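The kappa-method mentioned above condenses the solute (hygroscopicity) effect into a single parameter $\kappa$. In $\kappa$-Köhler theory (Petters and Kreidenweis, 2007), the critical supersaturation of a dry particle of diameter $D_d$ is commonly approximated (for $\kappa \gtrsim 0.2$) as

$$ S_c \simeq \exp\!\left(\sqrt{\frac{4A^3}{27\,\kappa\,D_d^3}}\,\right) - 1, \qquad A = \frac{4\sigma_w M_w}{R\,T\,\rho_w}, $$

where $\sigma_w$, $M_w$ and $\rho_w$ are the surface tension, molar mass and density of water. Larger $\kappa$ (more hygroscopic aerosol) lowers $S_c$ and eases activation, which is how composition enters the droplet nucleation calculation; whether the HYB scheme uses exactly this approximation is an assumption here.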

Relevance: 20.00%

Abstract:

In the Mediterranean area, olive mill wastewater (OMW) and grape pomace (GP) are among the major agro-industrial wastes produced. These two wastes have a high organic load and high phytotoxicity, so their disposal in the environment can lead to negative effects. Second-generation biorefineries are dedicated to the valorization of biowaste through the production of goods from such residual biomasses; this approach can combine bioremediation with the generation of valuable molecules, biomaterials and energy. The main aim of this thesis work was to study the anaerobic digestion of OMW and GP under different operational conditions, to produce volatile fatty acids (VFAs) (first-stage aim) and CH4 (second-stage aim). To this end, a packed-bed biofilm reactor (PBBR) was set up to perform the anaerobic acidogenic digestion of the liquid dephenolized stream of OMW (OMWdeph). In parallel, the solid stream of OMW (OMWsolid), previously separated in order to allow the solid-phase extraction of polyphenols, was subjected to anaerobic methanogenic digestion to obtain CH4. The latter experiment was performed in 100 ml Pyrex bottles maintained at different temperatures (55-45-37°C). Together with previous experiments, the anaerobic acidogenic digestion of fermented GP (GPfreshacid) and of dephenolized and fermented GP (GPdephacid) was performed in 100 ml Pyrex bottles to estimate the concentration of VFAs achievable from each of the aforementioned GP substrates. Finally, the same GP matrices and untreated GP (GPfresh) were digested under anaerobic methanogenic conditions to produce CH4. The anaerobic acidogenic and methanogenic digestion processes of the GPs lasted about 33 days, while those of the OMWs lasted about 121 and 60 days, respectively. Each experiment was periodically monitored by analysing the volume and composition of the produced biogas and the VFA concentration. Results showed that VFAs were produced in higher concentrations from GP than from OMWdeph: the overall VFA concentration was approximately 39.5 gCOD L-1 from GPfreshacid, 29 gCOD L-1 from GPdephacid, and 8.7 gCOD L-1 from OMWdeph. Concerning CH4 production, OMWsolid reached a higher biochemical methane potential (BMP) at a thermophilic temperature (55°C) than at mesophilic ones (37-45°C), with a value of about 358.7 ml CH4 gVS-1. In contrast, GPfresh reached its highest BMP at a mesophilic temperature, about 207.3 ml CH4 gVS-1, followed by GPfreshacid with about 192.6 ml CH4 gVS-1 and lastly GPdephacid with about 102.2 ml CH4 gVS-1. In summary, based on the gathered results, GP seems to be a better carbon source than OMW for acidogenic and methanogenic microorganisms, because higher amounts of VFAs and CH4 were produced in the anaerobic digestion of GP than of OMW. In addition to these products, polyphenols were extracted by means of a solid-phase extraction (SPE) procedure by another research group, and the VFAs were utilised for the production of biopolymers, in particular polyhydroxyalkanoates (PHAs), by the same research group, in which I was involved.

Relevance: 20.00%

Abstract:

This work examines the Hilbert scheme of points of C^2, which is described together with some of its properties, for example its hyper-Kähler structure. The aim of the thesis is the study of the Poincaré polynomial of this Hilbert scheme: what one obtains is a power series expression, which is a special case of a much more general formula, known as Göttsche's formula.
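For the special case studied here, the generating function of the Poincaré polynomials of the Hilbert schemes $\mathrm{Hilb}^n(\mathbb{C}^2)$ takes the following well-known product form (the $\mathbb{C}^2$ specialization of Göttsche's formula):

$$ \sum_{n \ge 0} P\big(\mathrm{Hilb}^n(\mathbb{C}^2), t\big)\, q^n \;=\; \prod_{m \ge 1} \frac{1}{1 - t^{2m-2}\, q^m}. $$

As a check, the coefficient of $q^2$ is $1 + t^2$, matching the Betti numbers of the Hilbert scheme of two points in the plane.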

Relevance: 20.00%

Abstract:

The present work studies a km-scale data assimilation scheme based on a LETKF developed for the COSMO model. The aim is to evaluate the impact of assimilating two different types of data: temperature, humidity, pressure and wind data from conventional networks (SYNOP, TEMP, AIREP reports), and 3D reflectivity from radar volumes. A 3-hourly continuous assimilation cycle has been implemented over an Italian domain, based on a 20-member ensemble with boundary conditions provided by the ECMWF ENS. Three different experiments have been run to evaluate the performance of the assimilation over one week in October 2014, during which the Genoa and Parma floods took place: a control run of the data assimilation cycle assimilating only data from conventional networks; a second run in which the SPPT scheme is activated in the COSMO model; and a third run in which reflectivity volumes from meteorological radars are also assimilated. An objective evaluation of the experiments has been carried out both on case studies and on the entire week: checking the analysis increments; computing the Desroziers statistics for SYNOP, TEMP, AIREP and RADAR data over the Italian domain; verifying the analyses against data not assimilated (temperature at the lowest model level, objectively verified against SYNOP data); and objectively verifying the deterministic forecasts initialised with the KENDA analyses for each of the three experiments.
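For reference, the analysis step a LETKF performs at each grid point with its local set of observations can be written in ensemble weight space, following the standard formulation of Hunt et al. (2007); localization and inflation details of the KENDA implementation are omitted here. With $k$ members, background mean $\bar{\mathbf{x}}^b$, state perturbations $\mathbf{X}^b$ and their map to observation space $\mathbf{Y}^b$,

$$ \tilde{\mathbf{P}}^a = \left[(k-1)\,\mathbf{I} + (\mathbf{Y}^b)^{\top}\mathbf{R}^{-1}\mathbf{Y}^b\right]^{-1}, \qquad \bar{\mathbf{w}}^a = \tilde{\mathbf{P}}^a\, (\mathbf{Y}^b)^{\top}\mathbf{R}^{-1}\big(\mathbf{y}^o - \bar{\mathbf{y}}^b\big), $$

$$ \mathbf{W}^a = \big[(k-1)\,\tilde{\mathbf{P}}^a\big]^{1/2}, \qquad \mathbf{x}^a_i = \bar{\mathbf{x}}^b + \mathbf{X}^b\big(\bar{\mathbf{w}}^a + \mathbf{W}^a \mathbf{e}_i\big), $$

so each member's analysis is a locally weighted combination of the background perturbations. Assimilating radar reflectivity changes the observation operator and $\mathbf{R}$, not this update itself.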