937 results for Lagrange interpolation


Relevance: 10.00%

Publisher:

Abstract:

[EN] In this paper we present a new model for optical flow calculation using a variational formulation which preserves discontinuities of the flow much better than classical methods. We study the Euler-Lagrange equations associated with the variational problem. In the case of quadratic energy, we show the existence and uniqueness of the corresponding evolution problem. Since our method avoids linearization of the optical flow constraint, it can recover large displacements in the scene. We avoid convergence to irrelevant local minima by embedding our method in a linear scale-space framework and using a focusing strategy from coarse to fine scales.
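A minimal numpy sketch (not the paper's solver; the images and shift below are synthetic) of why avoiding linearization of the optical flow constraint helps with large displacements: warping the second image directly keeps the energy minimum at the true displacement, whereas a Taylor linearization is only valid for small motions.

```python
import numpy as np

# Non-linearised data term E_data(u) = sum_x (I2(x + u) - I1(x))^2 for a purely
# horizontal integer shift u. Because I2 is warped directly (no linearisation
# of the constraint), the minimum sits at the true displacement even when large.

def data_energy(I1, I2, u):
    """Sum of squared differences after shifting I2 back by u columns."""
    I2_warp = np.roll(I2, -u, axis=1)
    return float(np.sum((I2_warp - I1) ** 2))

rng = np.random.default_rng(0)
I1 = rng.standard_normal((32, 32))
I2 = np.roll(I1, 5, axis=1)          # ground-truth displacement: 5 pixels

energies = {u: data_energy(I1, I2, u) for u in range(-8, 9)}
u_best = min(energies, key=energies.get)
print(u_best)  # → 5: the large displacement is recovered exactly
```

In the paper's coarse-to-fine strategy this search is replaced by a variational minimization repeated across scale-space levels.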


Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters is the first step in discriminating whether a given seismic event is natural or not. If a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain.
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique has been applied to a seismic sequence that occurred in the Turkey-Iran border region, using data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, which belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced.
The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings. Probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest contribution of cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed has been applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not require long processing times, so the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing immediate optimization of the array geometry if so suggested by the results at an early stage.
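A toy sketch (with synthetic waveforms and an integer-sample delay; real picks involve noisy band-limited data) of the basic operation behind the cross-correlation technique used here: the delay between two similar waveforms is read off the peak of their cross-correlation, instead of relying on independent phase picks.

```python
import numpy as np

# Delay between a waveform and a delayed copy of itself (e.g. the same event
# recorded at two sensors of a tripartite array), from the cross-correlation peak.

def xcorr_delay(a, b):
    """Delay (in samples) of b relative to a, at the cross-correlation maximum."""
    c = np.correlate(b, a, mode="full")
    return int(np.argmax(c)) - (len(a) - 1)

t = np.linspace(0.0, 1.0, 500)
wavelet = np.exp(-200.0 * (t - 0.3) ** 2) * np.sin(2 * np.pi * 25 * t)
delayed = np.roll(wavelet, 12)   # second "sensor": same wavelet, 12 samples later

print(xcorr_delay(wavelet, delayed))  # → 12
```

Sub-sample precision, as in the thesis, would additionally require the signal interpolation step mentioned above.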


[EN] We present a new method to construct a trivariate T-spline representation of complex genus-zero solids for the application of isogeometric analysis. The proposed technique only demands a surface triangulation of the solid as input data. The key to this method lies in obtaining a volumetric parameterization between the solid and the parametric domain, the unit cube. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying a mesh untangling and smoothing procedure. The control points of the trivariate T-spline are calculated by imposing the interpolation conditions at points situated both in the interior and on the surface of the solid...


[EN] We present a new method to construct a trivariate T-spline representation of complex solids for the application of isogeometric analysis. The proposed technique only demands the surface of the solid as input data. The key to this method lies in obtaining a volumetric parameterization between the solid and a simple parametric domain. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying the meccano method. The control points of the trivariate T-spline are calculated by imposing the interpolation conditions at points situated both in the interior and on the surface of the solid...


[EN] We present a new strategy, based on the meccano method [1, 2, 3], to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. As a result, the algorithm obtains a high-quality parametric transformation between the 2D object and the parametric domain, the unit square. The key to the method lies in defining an isomorphic transformation between the parametric and physical T-meshes, finding the optimal position of the interior nodes by applying a new T-mesh untangling and smoothing procedure. The bivariate T-spline representation is calculated by imposing the interpolation conditions at points situated both in the interior and on the boundary of the geometry…
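A 1-D toy version (entirely illustrative: a hat basis stands in for the bivariate T-spline blending functions of the paper) of the final step above, in which control values are obtained by imposing interpolation conditions, i.e. by solving a collocation system.

```python
import numpy as np

# Control values c of a spline-like basis are obtained by imposing the
# interpolation conditions N c = f at sample points, where N is the
# collocation matrix of basis-function values.

def hat(x, center, h=1.0):
    """Piecewise-linear B-spline (hat) basis function of width h."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / h)

knots = np.arange(5, dtype=float)          # basis centers
pts = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # interpolation points
f = pts ** 2                                # values to interpolate

N = np.array([[hat(x, c) for c in knots] for x in pts])  # collocation matrix
c = np.linalg.solve(N, f)                   # control values

# the resulting combination reproduces f at the interpolation points
assert np.allclose(N @ c, f)
```

For a genuine T-spline the matrix would contain the T-spline blending functions evaluated at interior and boundary points, but the linear-system structure is the same.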


[EN] In this talk we introduce a new methodology for wind field simulation or forecasting over complex terrain. The idea is to use wind measurements or predictions of the HARMONIE mesoscale model as the input data for an adaptive finite element mass-consistent wind model [1,2]. The method has recently been implemented in the freely available Wind3D code [3]. A description of the HARMONIE Non-Hydrostatic Dynamics can be found in [4]. The results of HARMONIE (obtained with a maximum resolution of about 1 km) are refined by the finite element model at a local scale (about a few meters). An interface between both models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…
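A hedged sketch of the interface idea: a coarse (~1 km) HARMONIE-like wind component is interpolated onto points of a fine local grid to provide the initial guess for the finite element model. The field values and the bilinear scheme are illustrative; the actual code may use a different interpolation.

```python
import numpy as np

# Bilinear interpolation of a 2-D field defined on unit-spaced grid nodes.
def bilinear(field, xq, yq):
    """Value of the field at fractional grid coordinates (xq, yq)."""
    x0, y0 = int(np.floor(xq)), int(np.floor(yq))
    dx, dy = xq - x0, yq - y0
    return ((1 - dx) * (1 - dy) * field[y0, x0] + dx * (1 - dy) * field[y0, x0 + 1]
            + (1 - dx) * dy * field[y0 + 1, x0] + dx * dy * field[y0 + 1, x0 + 1])

# toy coarse u-wind on a 4x4 grid (spacing ~1 km); query a point between nodes
u_coarse = np.array([[0., 1., 2., 3.],
                     [1., 2., 3., 4.],
                     [2., 3., 4., 5.],
                     [3., 4., 5., 6.]])
print(bilinear(u_coarse, 1.5, 0.25))  # → 1.75
```

In the real system this interpolated field is only the initial approximation; the mass-consistent finite element model then adjusts it on the fine terrain-following mesh.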


[EN] A new methodology for wind field simulation or forecasting over complex terrain is introduced. The idea is to use wind measurements or predictions of the HARMONIE mesoscale model as the input data for an adaptive finite element mass-consistent wind model. The method has recently been implemented in the freely available Wind3D code. A description of the HARMONIE Non-Hydrostatic Dynamics can be found in the literature. HARMONIE provides wind predictions with a maximum resolution of about 1 km, which are refined by the finite element model at a local scale (about a few meters). An interface between both models is implemented such that the initial wind field approximation is obtained by a suitable interpolation of the HARMONIE results…


In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by considering two additive operators: the first represents its nonuniform Fourier transform approximation, and the second suppresses aliasing. The first operator is analytically characterized and can be computed quickly by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
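An illustrative sketch (not the paper's fast algorithm, and without the aliasing-suppression operator) of the first of the two additive operators: a nonuniform Fourier transform that evaluates the spectrum at warped frequencies. The quadratic warping map is an arbitrary example.

```python
import numpy as np

# Evaluate sum_n x[n] * exp(-2j*pi*warp(k/N)*n) at each output bin k:
# a direct (slow, O(N^2)) nonuniform DFT along a warped frequency axis.
def warped_dft(x, warp):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * warp(k / N) * n))
                     for k in range(N)])

N = 64
x = np.cos(2 * np.pi * 8 * np.arange(N) / N)   # a pure tone at bin 8

# sanity check: the identity warping map reproduces the ordinary DFT
X_plain = warped_dft(x, lambda f: f)
assert np.allclose(X_plain, np.fft.fft(x))

# a non-trivial map retiles the frequency axis non-uniformly
X_warp = warped_dft(x, lambda f: f ** 2)
```

The fast computation in the paper replaces this direct evaluation with interpolation-based approximations, with the second operator correcting the residual aliasing.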


Stress recovery techniques have been an active research topic since 1987, when Zienkiewicz and Zhu proposed a procedure called Superconvergent Patch Recovery (SPR). This procedure is a least-squares fit of stresses at superconvergent points over patches of elements, and it leads to enhanced stress fields that can be used for evaluating finite element discretization errors. In subsequent years, numerous improved forms of this procedure were proposed, attempting to add equilibrium constraints to improve its performance. Later, another superconvergent technique, called Recovery by Equilibrium in Patches (REP), was proposed. In this case the idea is to impose equilibrium in a weak form over patches and solve the resulting equations by a least-squares scheme. In recent years another procedure, based on minimization of the complementary energy, called Recovery by Compatibility in Patches (RCP), has been proposed. This procedure can in many ways be seen as the dual form of REP, as it essentially imposes compatibility in a weak form among a set of self-equilibrated stress fields. In this thesis a new insight into RCP is presented, and the procedure is improved with the aim of obtaining convergent second-order derivatives of the stress resultants. In order to achieve this result, two different strategies and their combination have been tested. The first is to consider larger patches, in the spirit of what was proposed in [4]; the second is to perform a second recovery on the recovered stresses. Some numerical tests in plane stress conditions are presented, showing the effectiveness of these procedures. Afterwards, a new recovery technique called Least Squares Displacements (LSD) is introduced. This new procedure is based on a least-squares interpolation of the nodal displacements resulting from the finite element solution.
In fact, it has been observed that the major part of the error affecting stress resultants is introduced when shape functions are differentiated in order to obtain strain components from displacements. This procedure proves to be ultraconvergent and is extremely cost effective, as its only input is the nodal displacements coming directly from the finite element solution, avoiding any other post-processing needed to obtain stress resultants with the traditional method. Numerical tests in plane stress conditions are then presented, showing that the procedure is ultraconvergent and leads to convergent first- and second-order derivatives of the stress resultants. Finally, the reconstruction of transverse stress profiles using First-order Shear Deformation Theory for laminated plates and the three-dimensional equilibrium equations is presented. The accuracy of this reconstruction depends on the accuracy of the first and second derivatives of the stress resultants, which is not guaranteed by most available low-order plate finite elements. The RCP and LSD procedures are then used to compute convergent first- and second-order derivatives of the stress resultants, ensuring convergence of the reconstructed transverse shear and normal stress profiles respectively. Numerical tests are presented and discussed, showing the effectiveness of both procedures.
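A minimal 1-D sketch of the patch-recovery idea behind SPR (the sampling points, noise, and linear fit below are synthetic stand-ins for Gauss-point stresses and the recovery polynomial): raw stresses sampled inside elements are fitted with a least-squares polynomial over a patch, and the smooth fit is evaluated at the nodes.

```python
import numpy as np

# Raw stresses at element sampling points (with synthetic "discretization"
# noise), recovered by a patch-wide least-squares linear fit sigma*(x) = a + b*x.

gauss_x = np.array([0.21, 0.79, 1.21, 1.79, 2.21, 2.79])   # sampling points
true = lambda x: 3.0 * x + 1.0                              # exact linear stress
raw = true(gauss_x) + np.array([0.02, -0.01, 0.01, -0.02, 0.015, -0.015])

A = np.vstack([np.ones_like(gauss_x), gauss_x]).T           # design matrix [1, x]
coef, *_ = np.linalg.lstsq(A, raw, rcond=None)              # least-squares fit

nodes = np.array([0.0, 1.0, 2.0, 3.0])
recovered = coef[0] + coef[1] * nodes
print(np.max(np.abs(recovered - true(nodes))))   # small recovery error at nodes
```

The REP, RCP, and LSD variants discussed above change what is fitted (equilibrium residuals, self-equilibrated fields, displacements) but keep this least-squares-over-a-patch structure.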


The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (in order to be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. The contact models can be subdivided into two different categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear differential-algebraic equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies. However, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry).
The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories are used to evaluate the shape of the contact patch and the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, the wheel and rail have to be considered elastic bodies governed by Navier's equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case. Many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy but entails very large computational costs and memory consumption. Because of this, referring to the current state of the art, the integration between multibody and differential modeling is almost absent in the literature, especially in the railway field. However, this integration is very important because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described.
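A hedged sketch of the kind of indentation-force law used in the semi-elastic approach: the classical Hertzian sphere-on-plane contact, F = (4/3) E* sqrt(R) delta^(3/2). The radius and material parameters below are illustrative, not taken from the thesis.

```python
import math

# Hertzian normal contact: force grows as indentation^(3/2); zero at separation.
def hertz_normal_force(delta, R=0.46, E=2.1e11, nu=0.3):
    """Normal force [N] for indentation delta [m], sphere of radius R on a plane."""
    if delta <= 0.0:                      # separated bodies: no force, no adhesion
        return 0.0
    E_eff = E / (2.0 * (1.0 - nu ** 2))   # effective modulus, identical materials
    return (4.0 / 3.0) * E_eff * math.sqrt(R) * delta ** 1.5

F1 = hertz_normal_force(1e-5)
F2 = hertz_normal_force(2e-5)
print(F2 / F1)   # ≈ 2.828 = 2**1.5: the 3/2-power law is visibly nonlinear
```

In the actual wheel-rail problem the contact patch is elliptic and the constants depend on the local profile curvatures, but the delta^(3/2) structure of the normal law is the same.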
Concerning the global models, two new models belonging to the semi-elastic approach will be presented; the models satisfy the following specifications: 1) they have to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) they have to handle generic railway tracks and generic wheel and rail profiles; 3) they have to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, they have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) they have to be implementable directly online within multibody models, without look-up tables; 5) they have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) they have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models regards the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction makes it possible to obtain a high numerical efficiency that allows the online implementation of the new procedure and the achievement of performance comparable with that of commercial multibody software. At the same time, the analytical approach assures high accuracy and generality.
Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) it has to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) it has to handle generic railway tracks and generic wheel and rail profiles; 3) it has to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, it has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) it has to be implementable directly online within multibody models; 5) it has to assure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models); 6) it has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model regard the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem. Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modeling. The contact models have then been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as a benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature.
The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while for the contact models C S-functions have been used; this particular Matlab architecture makes it possible to connect the Matlab/Simulink and C/C++ environments efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then also been implemented in Simpack Rail, a commercial multibody software package for railway vehicles that has been widely tested and validated. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, we would like to thank Trenitalia and the Regione Toscana for the support provided during the whole Ph.D. activity. We would also like to thank INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.


Introduction: Nocturnal frontal lobe epilepsy (NFLE) is a distinct syndrome of partial epilepsy whose clinical features comprise a spectrum of paroxysmal motor manifestations of variable duration and complexity, arising from sleep. Cardiovascular changes during NFLE seizures have previously been observed; however, the extent of these modifications and their relationship with seizure onset has not been analyzed in detail. Objective: The aim of the present study is to evaluate NFLE seizure-related changes in heart rate (HR) and in the sympathetic/parasympathetic balance through wavelet analysis of HR variability (HRV). Methods: We evaluated the whole-night digitally recorded video-polysomnography (VPSG) of 9 patients diagnosed with NFLE, with no history of cardiac disorders and normal cardiac examinations. Events with features of NFLE seizures were selected independently by three examiners and included in the study only if a consensus was reached. Heart rate was evaluated by measuring the interval between two consecutive R-waves of the QRS complexes (RRi). RRi series were digitally calculated for a period of 20 minutes including the seizures, and resampled at 10 Hz using cubic spline interpolation. A multiresolution analysis was performed (Daubechies-16 form), and the squared level-specific amplitude coefficients were summed across appropriate decomposition levels in order to compute total band powers in the bands of interest (LF: 0.039062-0.156248 Hz, HF: 0.156248-0.624992 Hz). A general linear model was then applied to estimate changes in RRi and in LF and HF powers during three different periods: a basal period (Basal; 30 s, at least 30 s before seizure onset, during which no movements occurred and autonomic conditions were stationary); the pre-seizure period (preSP; the 10 s preceding seizure onset); and the seizure period (SP), corresponding to the clinical manifestations. For one of the patients (patient 9), three seizures associated with ictal asystole (IA) were recorded; hence he was treated separately.
Results: Group analysis performed on 8 patients (41 seizures) showed that RRi remained unchanged during the preSP, while a significant tachycardia was observed in the SP. A significant increase in the LF component was instead observed during both the preSP and the SP (p<0.001), while the HF component decreased only in the SP (p<0.001). For patient 9, during the preSP and the first part of the SP a significant tachycardia was observed, associated with increased sympathetic activity (increased LF absolute values and LF%). In the second part of the SP, a progressive decrease in HR, which gradually fell below basal values, occurred before IA. Bradycardia was associated with an increase in parasympathetic activity (increased HF absolute values and HF%), contrasted by a further increase in LF until the occurrence of IA. Conclusions: These data suggest that changes in autonomic balance toward a sympathetic prevalence always precede clinical seizure onset in NFLE, even when HR changes are not yet evident, confirming that wavelet analysis is a sensitive technique for detecting sudden variations of autonomic balance occurring during transient phenomena. Finally, we demonstrated that epileptic asystole is associated with a parasympathetic hypertonus counteracted by a marked sympathetic activation.
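A sketch of the preprocessing step described in the Methods (the RR series below is synthetic; real tachograms are noisier and the study then applies a Daubechies-16 multiresolution analysis, which is omitted here): an irregularly spaced RR-interval series is resampled to a uniform 10 Hz grid with a cubic spline so that band powers can be computed.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic RR intervals [s] with a slow oscillation, sampled once per beat.
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(60))
beat_times = np.cumsum(rr)                  # irregular time axis (beat instants)

fs = 10.0                                   # uniform resampling rate [Hz]
t_uniform = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
rr_uniform = CubicSpline(beat_times, rr)(t_uniform)   # evenly sampled tachogram

print(len(t_uniform), float(rr_uniform.mean()))
```

The uniformly sampled series is what makes the subsequent wavelet decomposition into LF and HF bands well defined.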


[EN] Over the last five years, the notion of a spectral triple has become established as a way of describing the gravitational field coupled to spinors on (Euclidean) noncommutative spaces. The dynamics of this gravitational field is determined by the so-called spectral action, the trace of a suitable function of the Dirac operator. Remarkably, the complete Lagrangian of the Standard Model of elementary particle physics (coupled to the gravitational field), including in particular the mass-generating Higgs sector, can be derived as the spectral action of a corresponding spectral triple. This spectral triple is given as the product of the spectral triple of (commutative) spacetime with a special discrete spectral triple. In this thesis such discrete spectral triples, which until recently were, besides the noncommutative torus, the only known noncommutative examples, are classified. It can now also be investigated to what extent this property distinguishes the Standard Model from other Yang-Mills-Higgs theories. It turns out, however, that despite some restrictions there is a very large number of models that can be derived from spectral triples. It is also conceivable that the spectral triple of the Standard Model is distinguished by additional structures, for example by a Hopf algebra acting "isometrically" on it. To investigate this question, so-called H-symmetric spectral triples, which exhibit such Hopf isometries, are defined in this thesis. This also yields a way of constructing new (H-symmetric) spectral triples from their additional symmetries.
This algorithm is illustrated with the examples of the commutative sphere, whose spin geometry is described here for the first time completely in the global, algebraic language of noncommutative geometry, and of the noncommutative torus. As an application, several new examples are constructed. It is shown that for Yang-Mills-Higgs theories derived from H-symmetric spectral triples, the additional isometries impose restrictions on the fermionic mass matrices. The last section of the thesis briefly addresses the quantization of the spectral action for discrete spectral triples. In addition, the notion of a spectral quadruple is introduced as a concept for the noncommutative generalization of Lorentzian spin manifolds.


[EN] This thesis deals with a class of nonlinear eigenvalue problems with variational structure in a real Hilbert space. The eigenvalue equation under consideration thus arises as the Euler-Lagrange equation of a continuously differentiable functional; in addition, the nonlinear part of the problem is assumed to be odd and definite. The most important results in this abstract setting are criteria for the existence of spectrally characterized solutions, i.e. solutions whose eigenvalue coincides with a prescribed variational eigenvalue of an associated linear problem. The derivation of these criteria is based on a study of continuous families of self-adjoint eigenvalue problems and requires generalizations of spectral-theoretic concepts. Besides pure existence theorems, relations between spectral characterizations and the Ljusternik-Schnirelman levels of the functional are also discussed. We consider applications to semilinear differential equations (as well as integro-differential equations) of second order. This yields new information about the corresponding solution sets with regard to nodal properties. The methods developed are particularly suited to one-dimensional and radially symmetric problems, while part of the results also hold without symmetry assumptions.


[EN] In this dissertation two different aspects of the odd-intrinsic-parity sector of mesonic chiral perturbation theory (mesonic ChPT) are investigated. First, the one-loop renormalization of the leading term, the so-called Wess-Zumino-Witten action, is carried out. To this end, the complete one-loop part of the theory must first be extracted by means of the saddle-point method. Subsequently, all singular one-loop structures are isolated in the framework of the heat-kernel technique. Finally, these divergent parts must be absorbed. This requires the most general anomalous Lagrangian of order O(p^6), which is developed systematically. If the chiral group SU(n)_L x SU(n)_R is extended to SU(n)_L x SU(n)_R x U(1)_V, additional monomials come into play. The renormalized coefficients of this Lagrangian, the low-energy constants (LECs), are initially free parameters of the theory that must be fixed individually. By considering a complementary vector-meson model, the amplitudes of suitable processes can be determined and, by comparison with the results of mesonic ChPT, a numerical estimate of some of the LECs can be made. In the second part, a consistent one-loop calculation is carried out for the anomalous process (virtual) photon + charged kaon -> charged kaon + neutral pion. To check our results, an existing calculation for the reaction (virtual) photon + charged pion -> charged pion + neutral pion is reproduced. Including the estimated values of the respective LECs, the associated hadronic structure functions can be determined numerically and discussed.


A new Coastal Rapid Environmental Assessment (CREA) strategy has been developed and successfully applied to the Northern Adriatic Sea. The CREA strategy exploits the recent advent of operational oceanography to establish a CREA system based on an operational regional forecasting system and coastal monitoring networks of opportunity. The methodology aims to initialize a coastal high-resolution model, nested within the regional forecasting system, blending the large-scale parent model fields with the available coastal observations to generate the requisite field estimates. The CREA modeling system consists of a high-resolution, O(800 m), Adriatic SHELF model (ASHELF) implemented in the Northern Adriatic basin and nested within the Adriatic Forecasting System (AFS) (Oddo et al. 2006). The observational system is composed of the coastal networks established in the framework of the ADRICOSM (ADRiatic sea integrated COastal areaS and river basin Management system) Pilot Project. An assimilation technique applies a correction to the initial field provided by AFS on the basis of the available observations. The blending of the two data sets has been carried out through a multi-scale optimal interpolation technique developed by Mariano and Brown (1992). Two CREA weekly exercises have been conducted: the first at the beginning of May (spring experiment), the second in mid-August (summer experiment). The weeks were chosen on the basis of the availability of all coastal observations on the initialization day and one week later, so as to validate the model results and verify our predictive skill. The ASHELF spin-up time has also been investigated, through a dedicated experiment, in order to obtain the maximum forecast accuracy within a minimum time.
Energetic evaluations show that, for the Northern Adriatic Sea and for the forcing applied, a spin-up period of one week allows ASHELF to generate the new circulation features enabled by the increased resolution and allows its total kinetic energy to establish a new dynamical balance. The CREA results, evaluated by means of standard statistics between ASHELF and coastal CTDs, show the improvement deriving from the initialization technique and a good model performance in the coastal areas of the Northern Adriatic basin, characterized by a shallow and wide continental shelf subject to substantial freshwater influence from rivers. The results demonstrate the feasibility of our CREA strategy for supporting coastal zone management and argue for the further establishment of operational coastal monitoring activities to advance it.
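A schematic single-scale sketch of the blending idea (the CREA system uses the multi-scale optimal interpolation of Mariano and Brown, 1992; the covariances and grid below are toy choices): the background field from the parent model is corrected toward the coastal observations through the standard optimal interpolation update.

```python
import numpy as np

# Optimal interpolation analysis: x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b)
def oi_analysis(xb, H, y, B, R):
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

n = 5
xb = np.zeros(n)                          # background (parent model) field
H = np.zeros((1, n)); H[0, 2] = 1.0       # one coastal observation at point 2
y = np.array([1.0])                       # observed value
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = np.exp(-(dist / 2.0) ** 2)            # Gaussian background covariance
R = np.array([[0.1]])                     # observation error variance

xa = oi_analysis(xb, H, y, B, R)
print(np.round(xa, 3))   # the correction spreads smoothly around the obs point
```

The background covariance controls how far each coastal observation influences the initial field; the multi-scale version applies this idea over several correlation scales.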