905 results for Negative dimensional integration method (NDIM)


Relevance:

30.00%

Publisher:

Abstract:

The research addresses, in a unified manner and from a European perspective, the multifaceted phenomena of economic and juridical double taxation, taking the taxation of cross-border dividends as its initial paradigm. After defining the legal status of double taxation, its incompatibility with the European legal order is argued, and the Community instruments for achieving the European objective of its elimination are examined. In the absence of positive harmonization, the substantive result is achieved through negative integration. It is shown that the restraint of the Court of Justice when faced with tax-policy choices is only a façade, and the openings in the case law that allow it to be overcome are highlighted. In summary, the key steps are the following. The starting point is the evolution of the fundamental freedoms into rights of constitutional rank, which transforms their economic content and legal scope, conferring constitutional status on the values of neutrality and non-restriction. The shift from the prohibition of discrimination to the prohibition of restrictions is then highlighted, noting the failure of the attempt to configure the prohibition of double taxation as an autonomous principle of the European legal order. At the same time, however, it becomes appropriate to re-examine the distinction between economic and juridical double taxation and to place double taxation within a single theoretical framework, as a paradigmatic case of restriction of the freedoms. The case-law system of justifications is rationalized accordingly. This readily legitimizes Community choices on the allocation of taxing powers among Member States and on the attribution of responsibility for eliminating the effects of double taxation. In conclusion, a European formulation of the balanced allocation of taxing powers in favour of the source State emerges and, alongside it, a Community conception of the ability-to-pay principle, with disruptive implications still to be verified. Methodologically, the analysis focuses critically on the work of the Court of Justice, revealing the strengths and weaknesses of its action, which has laid the foundations for the European response to the problem of double taxation.

Relevance:

30.00%

Publisher:

Abstract:

In this study a new, fully non-linear approach to Local Earthquake Tomography is presented. Local Earthquake Tomography (LET) is a non-linear inversion problem that allows the joint determination of earthquake parameters and velocity structure from arrival times of waves generated by local sources. Since the early developments of seismic tomography, several inversion methods have been developed to solve this problem in a linearized way. In the framework of Monte Carlo sampling, we developed a new code based on the Reversible Jump Markov chain Monte Carlo sampling method (RJ-MCMC). It is a trans-dimensional approach in which the number of unknowns, and thus the model parameterization, is treated as one of the unknowns. We show that our new code overcomes major limitations of linearized tomography, opening a new perspective in seismic imaging. Synthetic tests demonstrate that our algorithm is able to produce a robust and reliable tomography without the need for subjective a priori assumptions about starting models and parameterization. Moreover, it provides a more accurate estimate of the uncertainties in the model parameters. It is therefore very suitable for investigating the velocity structure in regions that lack accurate a priori information. Synthetic tests also reveal that the absence of regularization constraints allows more information to be extracted from the observed data, and that the velocity structure can be detected even in regions where the ray density is low and standard linearized codes fail. We also present high-resolution Vp and Vp/Vs models for two widely investigated regions: the Parkfield segment of the San Andreas Fault (California, USA) and the area around the Alto Tiberina fault (Umbria-Marche, Italy). In both cases, the models obtained with our code show a substantial improvement in data fit compared with the models obtained from the same data sets with linearized inversion codes.
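The trans-dimensional idea can be illustrated with a deliberately minimal reversible-jump sketch (a toy, not the thesis code): two competing models, one mean versus two, with each proposal's parameters drawn from their uniform prior, so that the Green acceptance ratio collapses to a plain likelihood ratio (dimension matching is trivial and the Jacobian is 1). All names and numbers below are illustrative.

```python
import math
import random

random.seed(7)

# Toy data: two halves with clearly different means (favors the 2-mean model).
data = [random.gauss(-2.0, 0.5) for _ in range(25)] + \
       [random.gauss(+2.0, 0.5) for _ in range(25)]
half = len(data) // 2
SIGMA = 0.5          # known noise level (assumption of the toy problem)
BOUND = 5.0          # uniform prior on each mean: U(-BOUND, BOUND)

def log_like(model, params):
    """Gaussian log-likelihood; model 1 = one mean, model 2 = one mean per half."""
    ll = 0.0
    for i, y in enumerate(data):
        mu = params[0] if model == 1 else params[0 if i < half else 1]
        ll += -0.5 * ((y - mu) / SIGMA) ** 2 - math.log(SIGMA * math.sqrt(2 * math.pi))
    return ll

def draw_from_prior(model):
    return [random.uniform(-BOUND, BOUND) for _ in range(1 if model == 1 else 2)]

# Trans-dimensional sampler: propose (model, params) jointly, params from the
# prior.  With prior proposals the acceptance ratio reduces to the likelihood
# ratio (prior and proposal densities cancel, Jacobian = 1).
model, params = 1, draw_from_prior(1)
ll = log_like(model, params)
visits = {1: 0, 2: 0}
for _ in range(4000):
    new_model = random.choice([1, 2])
    new_params = draw_from_prior(new_model)
    new_ll = log_like(new_model, new_params)
    if math.log(random.random()) < new_ll - ll:
        model, params, ll = new_model, new_params, new_ll
    visits[model] += 1

p2 = visits[2] / sum(visits.values())
print(f"posterior probability of the 2-mean model: {p2:.3f}")
```

Because the data strongly favor two distinct means, the chain spends most of its time in the two-parameter model: the number of unknowns is itself sampled, which is the essence of the trans-dimensional approach.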

Relevance:

30.00%

Publisher:

Abstract:

In the thesis, we discuss some aspects of 1D quantum systems related to entanglement entropies; in particular, we develop a new numerical method for the detection of crossovers in Luttinger liquids, and we discuss the behaviour of Rényi entropies in open conformal systems, when the boundary conditions preserve their conformal invariance.
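For reference, the Rényi entropies in question are defined from the reduced density matrix $\rho_A$ of a subsystem $A$ as

```latex
S_n(A) \;=\; \frac{1}{1-n}\,\ln \operatorname{Tr}\rho_A^{\,n},
\qquad n>0,\ n\neq 1,
\qquad
\lim_{n\to 1} S_n(A) \;=\; -\operatorname{Tr}\,\rho_A \ln \rho_A ,
```

so the von Neumann entanglement entropy is recovered in the limit $n \to 1$.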

Relevance:

30.00%

Publisher:

Abstract:

We consider a simple (but fully three-dimensional) mathematical model for the electromagnetic exploration of buried, perfectly electrically conducting objects within the soil underground. Moving an electric device parallel to the ground at constant height in order to generate a magnetic field, we measure the induced magnetic field within the device, and factor the underlying mathematics into a product of three operations which correspond to the primary excitation, some kind of reflection on the surface of the buried object(s), and the corresponding secondary excitation, respectively. Using this factorization we are able to give a justification of the so-called sampling method from inverse scattering theory for this particular set-up.
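Schematically, factorizations of this kind in the sampling/factorization-method literature take the form (the operator names below are generic placeholders, not necessarily the thesis's notation):

```latex
M \;=\; A^{\ast}\, T \, A ,
```

where $A$ models the primary excitation (device currents mapped to traces of the primary field on the obstacle), $T$ encodes the reflection at the scatterer's surface, and $A^{\ast}$ the secondary excitation back to the measurement device; the sampling method then characterizes the support of the scatterer through range properties of $M$.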

Relevance:

30.00%

Publisher:

Abstract:

Bioinformatics has, in the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem becomes learning as much as possible about its coding regions. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. The present thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) inside clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is made possible by cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments, even in the case of distantly related proteins (sequence identity < 30%). Other BAR+-based applications developed during my doctorate include the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
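The cluster-based transfer idea can be sketched in a few lines (toy data, not BAR+ itself; the thresholds below stand in for the "very stringent metric" and are illustrative): proteins whose pairwise alignments pass the metric are joined with union-find, and GO terms present in a cluster are transferred to its unannotated members.

```python
# Minimal sketch of cluster-based annotation transfer (hypothetical data).

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

proteins = ["P1", "P2", "P3", "P4"]
# (pair): (percent identity, alignment coverage) -- hypothetical alignments
alignments = {
    ("P1", "P2"): (62.0, 0.95),   # passes the metric
    ("P2", "P3"): (55.0, 0.97),   # passes the metric
    ("P1", "P4"): (28.0, 0.50),   # fails: too divergent
}
annotations = {"P1": {"GO:0016787"}}  # only P1 starts annotated

parent = {p: p for p in proteins}
for (a, b), (ident, cov) in alignments.items():
    if ident >= 40.0 and cov >= 0.90:   # illustrative stringent metric
        union(parent, a, b)

# Pool annotations per cluster, then transfer them to every member.
pooled = {}
for p in proteins:
    pooled.setdefault(find(parent, p), set()).update(annotations.get(p, set()))
transferred = {p: pooled[find(parent, p)] for p in proteins}
print(transferred)
```

Here P2 and P3 inherit P1's GO term through the cluster, while the divergent P4 is left unannotated; in BAR+ the transfer is additionally gated by statistical validation.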

Relevance:

30.00%

Publisher:

Abstract:

In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, drawing on graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularized scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we show that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method to the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the multiple zeta values and certain values of polylogarithms, which in turn are known to appear in the Laurent coefficients of certain dimensionally regularized Feynman integrals. Using the extended sector decomposition algorithm, we prove a theorem implying that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
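For orientation, the Feynman-parametric representation referred to above reads, for a scalar integral with $L$ loops, $n$ propagators with powers $\nu_j$ and $\nu=\sum_j \nu_j$, in $D$ dimensions (up to convention-dependent signs and prefactors):

```latex
I \;=\; \frac{\Gamma\!\left(\nu-\tfrac{LD}{2}\right)}{\prod_{j=1}^{n}\Gamma(\nu_j)}
\int_{x_j\ge 0}\Big(\prod_{j=1}^{n}\mathrm{d}x_j\,x_j^{\nu_j-1}\Big)\,
\delta\Big(1-\sum_{j=1}^{n}x_j\Big)\,
\frac{\mathcal{U}^{\,\nu-(L+1)D/2}}{\mathcal{F}^{\,\nu-LD/2}},
```

where the two Symanzik polynomials are the spanning-tree and spanning-2-forest sums

```latex
\mathcal{U}=\sum_{T}\;\prod_{e\notin T}x_e,
\qquad
\mathcal{F}=\sum_{(T_1,T_2)}\Big(\prod_{e\notin(T_1,T_2)}x_e\Big)\,\bigl(-s_{(T_1,T_2)}\bigr)
\;+\;\mathcal{U}\sum_{j=1}^{n}x_j m_j^2,
```

with $s_{(T_1,T_2)}$ the squared momentum flowing between the two forest components.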

Relevance:

30.00%

Publisher:

Abstract:

Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown, and several researchers have driven significant developments in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients, to make the approach more flexible and accurate. As a result it has been designated the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been proven to fail for problems with strong material discontinuities as well as for problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method can overcome these issues, because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be indicated here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
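The "derivative as a weighted sum of nodal values" idea can be shown in a minimal sketch (a generic illustration using the explicit weighting-coefficient formulas usually attributed to Shu, not the thesis's GDQFEM code): build the first-derivative weight matrix from Lagrange interpolation on a Chebyshev-Gauss-Lobatto grid and apply it to sin(x).

```python
import math

def dq_weights(x):
    """First-derivative DQ weighting matrix a[i][j] from Lagrange interpolation:
    a_ij = M'(x_i) / ((x_i - x_j) M'(x_j)) for i != j, rows summing to zero."""
    n = len(x)
    M = [1.0] * n                      # M[i] = prod_{k != i} (x_i - x_k)
    for i in range(n):
        for k in range(n):
            if k != i:
                M[i] *= x[i] - x[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i][j] = M[i] / ((x[i] - x[j]) * M[j])
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

# Chebyshev-Gauss-Lobatto nodes on [-1, 1], a common DQ grid choice
N = 11
x = [-math.cos(math.pi * k / (N - 1)) for k in range(N)]
A = dq_weights(x)

# f'(x_i) ~ sum_j a_ij f(x_j): test on f = sin, whose derivative is cos
f = [math.sin(xi) for xi in x]
df = [sum(A[i][j] * f[j] for j in range(N)) for i in range(N)]
err = max(abs(df[i] - math.cos(x[i])) for i in range(N))
print(f"max derivative error with {N} nodes: {err:.2e}")
```

With only 11 nodes the derivative is accurate to roughly machine-limited interpolation error, which is the high accuracy the abstract refers to; solving a differential equation then amounts to replacing derivatives in the equation by these weighted sums.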

Relevance:

30.00%

Publisher:

Abstract:

In the large-maturity limit, we compute explicitly the Local Volatility surface for Heston through Dupire's formula, with Fourier pricing of the respective derivatives of the call price. Then we verify that the prices of European call options produced by the Heston model coincide with those given by the local volatility model where the Local Volatility is computed as described above.
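Dupire's formula referred to above expresses the local variance through derivatives of the call-price surface $C(K,T)$; in the zero-dividend form with deterministic risk-free rate $r$ (conventions vary):

```latex
\sigma_{\mathrm{loc}}^{2}(K,T)
\;=\;
\frac{\dfrac{\partial C}{\partial T} \;+\; rK\,\dfrac{\partial C}{\partial K}}
     {\tfrac{1}{2}\,K^{2}\,\dfrac{\partial^{2} C}{\partial K^{2}}}\,,
```

with the derivatives in $T$ and $K$ evaluated here, as described, by Fourier pricing under the Heston model.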

Relevance:

30.00%

Publisher:

Abstract:

The aim of this PhD thesis, "Simulation Guided Navigation in cranio-maxillo-facial surgery: a new approach to improve intraoperative three-dimensional accuracy and reproducibility during surgery", was to examine the various applications of a method introduced by our School in 2010; its theme is the growing interest in the reproducibility of surgical plans through methods that, in whole or in part, make use of intraoperative navigation. A new validation method was introduced in orthognathic surgery for interventions carried out according to the Simulation Guided Navigation approach to facial deformities; the three-dimensional control of osteotomies was then analyzed through the use of cutting templates and plates pre-contoured with the CAD-CAM and laser-sintering method. Finally, the piezonavigated surgery method was introduced in the various branches of maxillofacial surgery. These studies have undergone validation processes, and the results are presented.

Relevance:

30.00%

Publisher:

Abstract:

Wearable inertial and magnetic measurement units (IMMU) are an important tool for underwater motion analysis because they are swimmer-centric, require only a simple measurement set-up, and provide performance results very quickly. In order to estimate 3D joint kinematics during motion, protocols have been developed to transpose the IMMU orientation estimates to a biomechanical model. The aim of the thesis was to validate a protocol originally proposed to estimate the joint angles of the upper limbs during one-degree-of-freedom movements in dry settings, here modified to perform 3D kinematic analysis of the shoulders, elbows and wrists during swimming. Eight high-level swimmers were assessed in the laboratory by means of an IMMU while simulating front-crawl and breaststroke movements. A stereo-photogrammetric system (SPS) was used as reference. The joint angles (in degrees) of the shoulders (flexion-extension, abduction-adduction and internal-external rotation), the elbows (flexion-extension and pronation-supination) and the wrists (flexion-extension and radial-ulnar deviation) were estimated with the two systems and compared by means of root mean square errors (RMSE), relative RMSE, Pearson's product-moment correlation coefficient (R) and the coefficient of multiple correlation (CMC). Subsequently, the athletes were assessed through the IMMU during pool swimming trials. Considering both swim styles and all modeled joint degrees of freedom, the comparison between the IMMU and the SPS showed median RMSE values lower than 8°, representing 10% of the overall joint range of motion, and high median values of CMC (0.97) and R (0.96). These findings suggest that the protocol estimated the 3D orientation of the shoulder, elbow and wrist joints during swimming with an accuracy adequate for research purposes. In conclusion, the proposed method to evaluate 3D joint kinematics through IMMU proved to be a useful tool for both sport and clinical contexts.
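The agreement metrics used in the comparison can be reproduced in a few lines (toy numbers, not the study's data; relative RMSE is assumed here to be RMSE normalized by the reference range, one common convention):

```python
import math

# Hypothetical joint-angle samples (degrees): IMMU estimate vs. SPS reference.
imu = [10.0, 20.0, 30.0, 40.0]
sps = [12.0, 19.0, 33.0, 38.0]
n = len(imu)

# Root mean square error and range-normalized relative RMSE.
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(imu, sps)) / n)
rel_rmse = 100.0 * rmse / (max(sps) - min(sps))

# Pearson product-moment correlation coefficient.
ma, mb = sum(imu) / n, sum(sps) / n
cov = sum((a - ma) * (b - mb) for a, b in zip(imu, sps))
r = cov / math.sqrt(sum((a - ma) ** 2 for a in imu)
                    * sum((b - mb) ** 2 for b in sps))

print(f"RMSE = {rmse:.2f} deg, relative RMSE = {rel_rmse:.1f}%, R = {r:.3f}")
```

The CMC used in the study additionally pools repeated waveforms across trials, which a single pair of traces cannot illustrate, so it is omitted here.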

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental-protection applications, to enable the analysis of complex physical processes in a fast, reliable and efficient way. In the first part we deal with the speed-up of the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries has been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a sizable reduction of iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote-sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers for varying age of the snow or ice layer, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher the spectral signals collected by orbiting detectors.
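The "Legendre moments by direct quadrature" step can be sketched generically (with an illustrative linearly anisotropic kernel, not the thesis's Doppler-broadened one): the moments are f_l = ∫ K(μ) P_l(μ) dμ over μ ∈ [-1, 1], evaluated here with composite Simpson quadrature.

```python
import math

def legendre_p(l, mu):
    """Legendre polynomial P_l(mu) via the three-term recurrence."""
    p0, p1 = 1.0, mu
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * mu * p1 - n * p0) / (n + 1)
    return p1

def simpson(f, a, b, n=400):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# Illustrative linearly anisotropic scattering kernel, mean cosine mu_bar = 0.3
mu_bar = 0.3
def kernel(mu):
    return 0.5 * (1.0 + 3.0 * mu_bar * mu)

moments = [simpson(lambda mu, l=l: kernel(mu) * legendre_p(l, mu), -1.0, 1.0)
           for l in range(3)]
print(moments)
```

For this kernel the moments come out as [1, 0.3, 0]: the zeroth moment is the normalization, the first is the mean scattering cosine, and higher moments vanish for linear anisotropy, which is a convenient sanity check for any quadrature of a more realistic kernel.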

Relevance:

30.00%

Publisher:

Abstract:

The recent advent of next-generation sequencing technologies has revolutionized the way the genome is analyzed. This innovation makes it possible to obtain deeper information at lower cost and in less time, and provides data in the form of discrete measurements. One of the most important applications of these data is differential analysis, that is, investigating whether a gene exhibits a different expression level under two (or more) biological conditions (such as disease states, treatments received, and so on). As for the statistical analysis, the final aim is statistical testing, and for modeling these data the Negative Binomial distribution is considered the most adequate, especially because it allows for overdispersion. However, the estimation of the dispersion parameter is a very delicate issue, because little information is usually available for estimating it. Many strategies have been proposed, but they often result in procedures based on plug-in estimates, and in this thesis we show that this discrepancy between the estimation and the testing framework can lead to uncontrolled type-I errors. We propose a mixture model that allows each gene to share information with other genes that exhibit similar variability. Three consistent statistical tests are then developed for differential expression analysis. We show that the proposed method improves the sensitivity of detecting differentially expressed genes with respect to common procedures, since it is the best at reaching the nominal type-I error rate while keeping power high. The method is finally illustrated on prostate cancer RNA-seq data.
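The role of the dispersion parameter can be illustrated with a toy Gamma-Poisson simulation (illustrative parameters, not the thesis's mixture model): in the mean-dispersion parameterization commonly used for RNA-seq counts, Var = μ + φμ², and a simple method-of-moments estimate recovers φ from sampled counts.

```python
import math
import random

random.seed(3)

def rnegbin(mu, phi):
    """Negative Binomial draw with mean mu and dispersion phi
    (variance = mu + phi*mu**2), via the Gamma-Poisson mixture."""
    lam = random.gammavariate(1.0 / phi, mu * phi)  # Gamma rate with mean mu
    # Poisson(lam) by inversion of the CDF.
    k, p, f, u = 0, math.exp(-lam), math.exp(-lam), random.random()
    while u > f:
        k += 1
        p *= lam / k
        f += p
    return k

mu_true, phi_true = 20.0, 0.3
counts = [rnegbin(mu_true, phi_true) for _ in range(2000)]

# Method-of-moments dispersion estimate: phi = (s^2 - m) / m^2.
m = sum(counts) / len(counts)
s2 = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
phi_hat = (s2 - m) / m ** 2
print(f"mean = {m:.2f}, variance = {s2:.1f}, dispersion estimate = {phi_hat:.3f}")
```

With only a handful of replicates per gene, as in real experiments, this estimate is extremely noisy, which is exactly why the thesis pools information across genes with similar variability.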

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with differential equations of Feynman integrals. A Feynman integral depends on a dimension parameter D and, for integer dimension, can be represented as a projective integral; this is the so-called Feynman parameter representation. Depending on the dimension, such an integral can diverge. As a function of D one obtains a meromorphic function on all of C. A divergent integral can therefore be replaced by a Laurent series, and its coefficients move to the center of interest. This procedure is known as dimensional regularization. All terms of such a Laurent series of a Feynman integral are periods in the sense of Kontsevich and Zagier. I describe a new method for the computation of differential equations of Feynman integrals. Usually the so-called integration-by-parts (IBP) identities are used for this purpose; the new method instead uses the theory of Picard-Fuchs differential equations. In the case of projective or quasi-projective varieties, the computation of such a differential equation is based on the so-called Griffiths-Dwork reduction. First I describe the method for fixed integer dimension. After a suitable shift of the dimension one directly obtains a period, and hence a Picard-Fuchs differential equation. This equation is inhomogeneous, since the domain of integration has a boundary and therefore only represents a relative cycle. With the help of dimensional recurrence relations, which go back to Tarasov, the solution in the original dimension can then be determined in a second step. I also describe a method, based on the Griffiths-Dwork reduction, for computing the differential equation directly for arbitrary dimension. This method is generally applicable and avoids dimension shifts.

The success of the method depends on the ability to solve large systems of linear equations. I give examples of integrals of graphs with two and three loops. Tarasov gives a basis of integrals for graphs with two loops and two external edges; I determine differential equations for the integrals of this basis. As the most important example, I compute the differential equation of the so-called two-loop sunrise graph in the general case of arbitrary masses. For special values of D this is an inhomogeneous Picard-Fuchs equation of a family of elliptic curves. The sunrise graph is particularly interesting because an analytic solution could only be found with this method, and because it is the simplest graph whose master integrals are not given by polylogarithms. I also give an example of a graph with three loops, in which the Picard-Fuchs equation of a family of K3 surfaces appears.

Relevance:

30.00%

Publisher:

Abstract:

In view of the looming exhaustion of fossil resources, the investigation of alternative energy sources is currently one of the most closely watched fields of research. Owing to its enormous potential, photovoltaics is a particular focus of science. In order to exploit large-area coating processes, thin-film photovoltaics has been intensively researched for several years. However, current solar-cell concepts are all limited in their potential by the use of toxic (Cd, As) or rare (In, Ga) elements, or by complex phase formation. The development of alternative concepts therefore suggests itself.

For this reason, within a BMBF-funded joint project, the deposition of thin films of the binary semiconductor Bi2S3 by physical vapor deposition was investigated, with the goal of establishing it as a quasi-intrinsic absorber in solar-cell structures with a p-i-n layer sequence.

Because its crystal growth is governed by a highly anisotropic bonding character, one of the most important challenges was the deposition of smooth, single-phase films, a few hundred nanometers thick, suitable for integration into a multilayer structure. The effects of the two parameters deposition temperature and stoichiometry on the relevant figures of merit (such as morphology, doping density and photoluminescence) were investigated. Polycrystalline films with suitable roughness and a doping density of n ≈ 2·10^15 cm^-3 were successfully deposited on application-relevant substrates, with a particularly strong dependence on the gas-phase composition being found.

Furthermore, the first measurements of the electronic density of states using high-energy photoemission spectroscopy were carried out, revealing in particular the influence of variable material compositions. To demonstrate the suitability of the material as an absorber layer, in principle suitable p-contact materials (SnS, Cu2O and PbS) were available within the project. Despite the use of particularly clean deposition methods in vacuum, no functional solar cells could be fabricated with Bi2S3. However, using photoemission spectroscopy it was possible to probe the relevant interfaces and to identify the causes of these observations. In addition, the necessity of buffer materials during Bi2S3 deposition was successfully demonstrated, in order to suppress surface reactions and to improve the transport properties at the interface.

Relevance:

30.00%

Publisher:

Abstract:

Hybrid electrode materials (HEM) are the key to fundamental advances in energy storage and energy-conversion systems, including lithium-ion batteries (LIBs), supercapacitors (SCs) and fuel cells (FCs). Its fascinating properties make graphene a good starting material for the preparation of HEM. However, traditional processes for producing graphene HEM (GHEM) often fail for lack of control over the morphology and its uniformity, leading to insufficient interfacial interactions and poor material performance. This work concentrates on the preparation of GHEM via controlled synthesis methods and addresses the use of well-defined GHEM for energy storage and conversion. Large volume expansion is the main drawback of prospective lithium-storage materials. First, a three-dimensional graphene foam hybrid is prepared to reinforce the basic structure and improve the electrochemical performance of the Fe3O4 anode material; the use of graphene shells and graphene networks realizes a double protection against the volume fluctuation of Fe3O4 during the electrochemical process. The performance of SCs and FCs depends on the pore structure and on the accessible surface area or catalytic sites of the electrode materials, respectively. We show that controlling the porosity via graphene-based carbon nanosheets (HPCN) increases the accessible surface area and the ion transport/charge storage for SC applications. Furthermore, nitrogen-doped carbon nanosheets (NDCN) were prepared for the cathodic oxygen reduction reaction (ORR): a tailored mesoporosity combined with heteroatom (nitrogen) doping promotes the exposure of the active sites and the ORR performance of metal-free catalysts. High-quality electrochemically exfoliated graphene (EEG) is a promising candidate for the preparation of GHEM, but the controlled preparation of EEG hybrids remains a major challenge. Finally, a bottom-up strategy is presented for the preparation of EEG sheets bearing a series of functional nanoparticles (Si, Fe3O4 and Pt NPs). This work demonstrates a promising route for the economical synthesis of EEG and EEG-based materials.