921 results for Causal Loop Diagram


Relevance: 20.00%

Abstract:

This work describes the development of a simulation tool for the Internal Combustion Engine (ICE), the transmission, and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software-in-the-Loop) and on-line (Hardware-in-the-Loop) simulation. In the first case the tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulation has proved very useful in the development and testing of control systems. Hardware-in-the-Loop simulation is a technology in which the actual vehicles, engines, or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals that would normally feed the actuators and, using mathematical models, provides the signals that would be produced by the actual sensors. The simulation tool, fully designed within Simulink, can simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the transmission and vehicle dynamics; in the last case it is possible to evaluate the performance and the operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity: it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of its higher computational effort).

Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Moreover, the same simulation tool can be rapidly configured for both off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
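The structure such a control-oriented simulation shares between SIL and HIL use can be sketched in a few lines. The following is a hypothetical, deliberately crude Python sketch (all names, gains, and the zero-dimensional model are illustrative, not taken from the thesis): a fixed-step loop in which an "ECU" block computes an actuator command and a plant block integrates the vehicle response that would otherwise come from real sensors.

```python
# Hypothetical SIL-style fixed-step loop (illustrative only, not the
# thesis code): a toy ECU commands throttle, a zero-dimensional
# longitudinal model plays the role of engine + vehicle.

DT = 0.01          # simulation step [s]
MASS = 1200.0      # vehicle mass [kg]
GAIN = 3000.0      # crude throttle-to-traction-force gain [N]
DRAG = 0.8         # aerodynamic drag coefficient [N/(m/s)^2]

def ecu(speed, target):
    """Toy controller: proportional throttle command clamped to [0, 1]."""
    return max(0.0, min(1.0, 0.1 * (target - speed)))

def plant_step(speed, throttle):
    """Zero-dimensional longitudinal dynamics: explicit Euler on F = m*a."""
    force = GAIN * throttle - DRAG * speed * speed
    return speed + DT * force / MASS

def run(t_end=60.0, target=25.0):
    speed, t = 0.0, 0.0
    while t < t_end:
        speed = plant_step(speed, ecu(speed, target))
        t += DT
    return speed

final = run()      # settles a little below the 25 m/s target
```

In a HIL setup the `ecu` function would be replaced by reads of the physical control unit's outputs, while `plant_step` would run on the real-time processor and synthesize the sensor signals.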

Abstract:

The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy translates directly into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems.
Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
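The object-oriented design the thesis argues for can be illustrated in miniature. The following is a toy Python analogue (not GiNaC itself, which is a C++ library): symbolic expressions as a small class hierarchy, where each node type knows how to differentiate and evaluate itself — the same fine-grained-objects idea scaled down to a few classes.

```python
# Toy illustration of an object-oriented symbolic manipulator
# (a Python analogue of the design idea, not GiNaC's actual API).

class Ex:
    def diff(self, s): raise NotImplementedError
    def eval(self, env): raise NotImplementedError

class Num(Ex):
    def __init__(self, v): self.v = v
    def diff(self, s): return Num(0)
    def eval(self, env): return self.v

class Sym(Ex):
    def __init__(self, name): self.name = name
    def diff(self, s): return Num(1 if s is self else 0)
    def eval(self, env): return env[self.name]

class Add(Ex):
    def __init__(self, a, b): self.a, self.b = a, b
    def diff(self, s): return Add(self.a.diff(s), self.b.diff(s))
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)

class Mul(Ex):
    def __init__(self, a, b): self.a, self.b = a, b
    def diff(self, s):  # product rule
        return Add(Mul(self.a.diff(s), self.b), Mul(self.a, self.b.diff(s)))
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)

x = Sym("x")
e = Add(Mul(x, x), x)          # x*x + x
de = e.diff(x)                 # symbolic derivative: 2x + 1
print(de.eval({"x": 3.0}))     # 7.0
```

Every operation is dispatched through the expression tree, so adding a new node type (a function, a tensor index) does not touch existing code — the property that makes the fine granularity feasible.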

Abstract:

This thesis begins by comparing specific regularization methods of quantum field theory with the Epstein-Glaser procedure for the perturbative construction of the S-matrix. Since the Epstein-Glaser procedure can itself be used as a regularization scheme and, moreover, rests exclusively on physically motivated postulates, this comparison yields a criterion for the admissibility of other regularization methods. Beyond establishing this admissibility, the comparison produces a further essential result: a new, practical, and consistent regularization scheme, the modified BPHZ procedure. It is demonstrated on one-loop diagrams from QED (electron self-energy, vacuum polarization, and vertex correction). In contrast to the widely used dimensional regularization, this procedure is applicable without restriction to chiral theories as well; as an example, the U(1) anomaly arising in an axial extension of the QED Lagrangian is computed. At the level of multi-loop diagrams, comparing the Epstein-Glaser construction with the well-known BPHZ procedure on several examples from Phi^4 theory, among them the so-called sunrise diagram, shows that the subdiagrams contributing to the regularization according to the forest formula of the BPHZ scheme can be restricted to a smaller class. This result is likewise relevant for the practice of regularization, since it already yields a simplification at the level of the subdiagrams that must be taken into account.

Abstract:

The main part of this thesis describes a method for calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We show that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We show the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also find that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
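The multiple zeta values appearing in such epsilon expansions obey identities that libraries like nestedsums exploit. The simplest one, Euler's ζ(2,1) = ζ(3), can be checked numerically in a few lines (an illustrative check, not thesis code):

```python
# Numerical check of the simplest multiple-zeta-value identity,
# Euler's zeta(2,1) = zeta(3), where
#   zeta(3)   = sum_{m>=1} 1/m^3
#   zeta(2,1) = sum_{m>n>=1} 1/(m^2 * n)

M = 200_000
zeta3 = sum(1.0 / m**3 for m in range(1, M + 1))

zeta21, harmonic = 0.0, 0.0
for m in range(2, M + 1):
    harmonic += 1.0 / (m - 1)      # running H_{m-1} = sum_{n<m} 1/n
    zeta21 += harmonic / m**2

print(zeta3, zeta21)               # both ≈ 1.20206 (Apéry's constant)
```

The double sum is reduced to a single loop by carrying the harmonic number along, which is the same "nested sums" bookkeeping that the algebraic treatment systematizes.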

Abstract:

The present state of the theoretical predictions for hadronic heavy hadron production is not quite satisfactory. The full next-to-leading order (NLO) $\mathcal{O}(\alpha_s^3)$ corrections to the hadroproduction of heavy quarks have raised the leading order (LO) $\mathcal{O}(\alpha_s^2)$ estimates, but the NLO predictions are still slightly below the experimental numbers. Moreover, the theoretical NLO predictions suffer from the usual large uncertainty resulting from the freedom in the choice of renormalization and factorization scales of perturbative QCD. In this light there are hopes that a next-to-next-to-leading order (NNLO) $\mathcal{O}(\alpha_s^4)$ calculation will bring theoretical predictions even closer to the experimental data. Also, the dependence on the factorization and renormalization scales of the physical process is expected to be greatly reduced at NNLO. This would reduce the theoretical uncertainty and therefore make the comparison between theory and experiment much more significant. In this thesis I have concentrated on the part of the NNLO corrections for hadronic heavy quark production where one-loop integrals contribute in the form of a loop-by-loop product. In the first part of the thesis I use dimensional regularization to calculate the $\mathcal{O}(\epsilon^2)$ expansion of scalar one-loop one-, two-, three- and four-point integrals. The Laurent series of the scalar integrals is needed as an input for the calculation of the one-loop matrix elements for the loop-by-loop contributions. Since each factor of the loop-by-loop product has negative powers of the dimensional regularization parameter $\epsilon$ up to $\mathcal{O}(\epsilon^{-2})$, the Laurent series of the scalar integrals has to be calculated up to $\mathcal{O}(\epsilon^2)$. The negative powers of $\epsilon$ are a consequence of ultraviolet and infrared/collinear (or mass) divergences. Among the scalar integrals, the four-point integrals are the most complicated.

The $\mathcal{O}(\epsilon^2)$ expansion of the three- and four-point integrals contains in general classical polylogarithms up to $\mathrm{Li}_4$ and $L$-functions related to multiple polylogarithms of maximal weight and depth four. All results for the scalar integrals are also available in electronic form. In the second part of the thesis I discuss the properties of the classical polylogarithms. I present the algorithms which allow one to reduce the number of polylogarithms in an expression. I derive identities for the $L$-functions which have been used intensively to reduce the length of the final results for the scalar integrals. I also discuss the properties of multiple polylogarithms and derive identities expressing the $L$-functions in terms of multiple polylogarithms. In the third part I investigate the numerical efficiency of the results for the scalar integrals and discuss the dependence of the evaluation time on the relative error. In the fourth part of the thesis I present the larger part of the $\mathcal{O}(\epsilon^2)$ results on one-loop matrix elements in heavy flavor hadroproduction containing the full spin information. The $\mathcal{O}(\epsilon^2)$ terms arise as a combination of the $\mathcal{O}(\epsilon^2)$ results for the scalar integrals, the spin algebra, and the Passarino-Veltman decomposition. The one-loop matrix elements will be needed as input in the determination of the loop-by-loop part of the NNLO corrections to hadronic heavy flavor production.
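The classical polylogarithms mentioned above are defined by a simple series, and the reduction identities the thesis derives can be spot-checked numerically. As an illustrative sketch (not the thesis library), here is $\mathrm{Li}_2$ from its defining series, tested against the well-known identity $\mathrm{Li}_2(1/2) = \pi^2/12 - \ln^2 2/2$:

```python
# Classical dilogarithm Li_2(x) = sum_{n>=1} x^n / n^2  (|x| <= 1),
# checked against the identity Li_2(1/2) = pi^2/12 - ln(2)^2/2.

import math

def li2(x, terms=200):
    """Truncated defining series; converges geometrically for |x| < 1."""
    return sum(x**n / n**2 for n in range(1, terms + 1))

lhs = li2(0.5)
rhs = math.pi**2 / 12 - math.log(2)**2 / 2
print(lhs, rhs)   # both ≈ 0.5822405
```

Identities of this kind are exactly what allows shrinking the number of independent polylogarithms in a final result.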

Abstract:

This thesis is concerned with calculations in manifestly Lorentz-invariant baryon chiral perturbation theory beyond order D=4. We investigate two different methods. The first approach consists of the inclusion of additional particles besides pions and nucleons as explicit degrees of freedom. This results in the resummation of an infinite number of higher-order terms which contribute to higher-order low-energy constants in the standard formulation. In this thesis the nucleon axial, induced pseudoscalar, and pion-nucleon form factors are investigated. They are first calculated in the standard approach up to order D=4. Next, the inclusion of the axial-vector meson a_1(1260) is considered. We find three diagrams with an axial-vector meson which are relevant to the form factors. Due to the applied renormalization scheme, however, the contributions of the two loop diagrams vanish and only a tree diagram contributes explicitly. The appearing coupling constant is fitted to experimental data of the axial form factor. The inclusion of the axial-vector meson results in an improved description of the axial form factor for higher values of momentum transfer. The contributions to the induced pseudoscalar form factor, however, are negligible for the considered momentum transfer, and the axial-vector meson does not contribute to the pion-nucleon form factor. The second method consists in the explicit calculation of higher-order diagrams. This thesis describes the applied renormalization scheme and shows that all symmetries and the power counting are preserved. As an application we determine the nucleon mass up to order D=6 which includes the evaluation of two-loop diagrams. This is the first complete calculation in manifestly Lorentz-invariant baryon chiral perturbation theory at the two-loop level. The numerical contributions of the terms of order D=5 and D=6 are estimated, and we investigate their pion-mass dependence. 
Furthermore, the higher-order terms of the nucleon sigma term are determined with the help of the Feynman-Hellmann theorem.
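For reference, the Feynman-Hellmann theorem relates the sigma term to the quark-mass dependence of the nucleon mass; in a common convention (the second equality uses the leading-order Gell-Mann-Oakes-Renner proportionality $M_\pi^2 \propto \hat{m}$, quoted here as a standard relation rather than from the abstract itself):

```latex
\sigma_{\pi N}
  \;=\; \hat{m}\,\frac{\partial M_N}{\partial \hat{m}}
  \;=\; M_\pi^2\,\frac{\partial M_N}{\partial M_\pi^2}
```

This is why the pion-mass dependence of the nucleon mass, studied order by order above, directly determines the higher-order terms of the sigma term.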

Abstract:

Over the past few years, the switch towards renewable sources for energy production has come to be considered necessary for the future sustainability of the world environment. Hydrogen is one of the most promising energy vectors for the storage of low-density renewable sources such as wind, biomass, and sun. The production of hydrogen by the steam-iron process could be one of the most versatile approaches for the employment of different reducing bio-based fuels. The steam-iron process is a two-step chemical-looping reaction based (i) on the reduction of an iron-based oxide with an organic compound, followed by (ii) the reoxidation of the reduced solid material by water, which leads to the production of hydrogen. The overall reaction is the water oxidation of the organic fuel (as in gasification or reforming processes), but the inherent separation of the two semi-reactions allows the production of carbon-free hydrogen. In this thesis, a steam-iron cycle with methanol is proposed, and three different oxides with the generic formula AFe2O4 (A = Co, Ni, Fe) are compared in order to understand how the chemical properties and the structural differences affect the productivity of the overall process. The modifications occurring in the spent samples are investigated in depth through the analysis of the used materials. A specific study of the CoFe2O4-based process using both classical and in-situ/ex-situ analysis is reported, employing many characterization techniques such as FTIR spectroscopy, TEM, XRD, XPS, BET, TPR and Mössbauer spectroscopy.

Abstract:

This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electro-production and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of 2 spin-independent (scalar) and 4 spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward; they are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams (100 one-loop diagrams need to be calculated), several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development: the first concerns the algebraic manipulation of large expressions, and the second deals with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach, which uses Mathematica and FORM for the algebraic tasks and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present results for the differential cross sections and the generalized polarizabilities of VCS off the proton.

Abstract:

In the 2010, 2011 and 2012 growing seasons, the occurrence of the ascomycetes Podosphaera fusca and Golovinomyces orontii, causal agents of powdery mildew disease, was monitored on cultivated cucurbits in the Bologna and Mantua provinces to determine the epidemiology of the two species. To identify the pathogens, both morphological and molecular identifications were performed on infected leaf samples, and a multiplex PCR was performed to identify the mating-type genes of the P. fusca isolates. The investigations indicated a temporal succession of the two species: the earlier infections are caused by G. orontii, which seems to be the predominant species until the middle of July, when it progressively disappears and P. fusca becomes the main species infecting cucurbits until the end of October. The temporal variation is likely due to the different overwintering strategies of the two species rather than to climatic conditions. Only chasmothecia of P. fusca were recorded, and the mating-type allele ratio tended to be 1:1. Considering that only chasmothecia of P. fusca were found, molecular-genetic analyses were carried out to find evidence of recombination within this species by MLST and AFLP methods. Surprisingly, no variation was observed among isolates for the 8 MLST markers used. Consistent with this result, the AFLP analysis showed a high similarity among isolates, with SM similarity coefficients ranging between 0.91 and 1.00; moreover, sequencing of 12 polymorphic bands revealed identity to genes involved in mutation and selection. The results suggest that the populations of P. fusca are likely a clonal population, with some differences among isolates probably due to agricultural practices such as fungicide treatments and the hosts cultivated. Therefore, asexual reproduction, which produces abundant fungal biomass that can easily be transported by wind, is the most common and effective means of spread and colonization for the pathogen.
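The SM (simple matching) similarity coefficient quoted above for the AFLP profiles has a one-line definition: the fraction of bands at which two binary profiles agree (both present or both absent). A hedged sketch, with invented example profiles rather than the thesis data:

```python
# Simple matching (SM) similarity coefficient for binary AFLP band
# patterns: (1-1 matches + 0-0 matches) / total bands scored.
# The two example isolates below are illustrative, not real data.

def simple_matching(a, b):
    if len(a) != len(b):
        raise ValueError("profiles must score the same set of bands")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

iso1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
iso2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]   # differs in one band of ten
print(simple_matching(iso1, iso2))       # 0.9
```

Coefficients of 0.91-1.00 across all pairs, as reported, therefore mean the isolates differ in at most a handful of bands — the signature of a clonal population.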

Abstract:

The ALMATracker is a pointing system for the ALMASat-1 ground station. Its configuration does not follow the classical azimuth-elevation scheme; instead, it uses α-β axes to avoid singularities at positions near the zenith. With the system still in the design phase, a Software-in-the-Loop (SIL) environment for its functional verification was created by combining SolidWorks and LabVIEW, through the relatively new NI SoftMotion package. Given the limited experience with, and documentation for, this recent tool, a case study simulating a cylindrical coordinate system was first developed in order to build competence. The results obtained were then exploited to create a SIL simulation of the ALMATracker's motion. This design methodology not only confirmed the validity of the proposed design but also highlighted the problems and the potential of this software package, providing an in-depth analysis of it.

Abstract:

Workaholism is defined as the combination of two underlying dimensions: working excessively and working compulsively. The present thesis pursues three aims: 1) to test whether the interaction between environmental and personal antecedents may enhance workaholism; 2) to develop a questionnaire to assess overwork climate in the workplace; 3) to contrast focal employees' and coworkers' perceptions of the employees' workaholism and engagement. Concerning the first aim, the interaction between overwork climate and person characteristics (achievement motivation, perfectionism, conscientiousness, self-efficacy) was explored in a sample of 333 Dutch employees. The results of moderated regression analyses showed that the interaction between overwork climate and person characteristics is related to workaholism. The second aim was pursued with two interrelated studies. In Study 1 the Overwork Climate Scale (OWCS) was developed and tested using a principal component analysis (N = 395) and a confirmatory factor analysis (N = 396). Two overwork climate dimensions were distinguished: overwork endorsement and lacking overwork rewards. In Study 2 the total sample (N = 791) was used to explore the association of overwork climate with two types of working hard: work engagement and workaholism. Lacking overwork rewards was negatively associated with engagement, whereas overwork endorsement showed a positive association with workaholism. Concerning the third aim, using a sample of 73 dyads composed of focal employees and their coworkers, a multitrait-multimethod matrix and a correlated trait-correlated method model, i.e. the CT-C(M–1) model, were examined. Our results showed considerable agreement between raters on the focal employees' engagement and workaholism. In contrast, we observed a significant difference concerning the cognitive dimension of workaholism, working compulsively. Moreover, we provided further evidence for the discriminant validity between engagement and workaholism. Overall, workaholism appears to be a negative work-related state that is better explained by adopting a multi-causal and multi-rater approach.
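A moderated regression of the kind used for the first aim amounts to fitting a model with a product term and testing its coefficient. The sketch below uses synthetic, noise-free data (not the thesis dataset) purely to show the mechanics: the interaction coefficient b3 captures how the climate-workaholism slope changes with the person characteristic.

```python
# Moderated regression sketch on synthetic data (illustrative only):
# workaholism = b0 + b1*climate + b2*person + b3*(climate*person)

import numpy as np

rng = np.random.default_rng(0)
n = 300
climate = rng.normal(size=n)   # standardized overwork-climate score
person = rng.normal(size=n)    # standardized person characteristic

# Invented "true" coefficients; b3 > 0 means the climate effect is
# stronger for people scoring high on the characteristic.
b0, b1, b2, b3 = 1.0, 0.4, 0.3, 0.25
workaholism = b0 + b1 * climate + b2 * person + b3 * climate * person

X = np.column_stack([np.ones(n), climate, person, climate * person])
coef, *_ = np.linalg.lstsq(X, workaholism, rcond=None)
print(coef)   # recovers [1.0, 0.4, 0.3, 0.25]
```

In practice the predictors are mean-centered before forming the product term and the estimate of b3 is accompanied by a significance test; the noise-free setup here just makes the recovery exact.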

Abstract:

The work investigates the feasibility of a new process aimed at the production of hydrogen with inherent separation of carbon oxides. The process consists of a cycle in which, in the first step, a mixed metal oxide is reduced by ethanol (obtained from biomasses). The reduced metal is then contacted with steam in order to split the water, sequestering the oxygen into the looping material's structure. The oxides used to run this thermochemical cycle, also called the "steam-iron process", are mixed ferrites with the spinel structure MeFe2O4 (Me = Fe, Co, Ni or Cu). To understand the reactions involved in the anaerobic reforming of ethanol, diffuse reflectance spectroscopy (DRIFTS), coupled with mass analysis of the effluent, was used to study the surface composition of the ferrites during the adsorption of ethanol and its transformations during the temperature program. This study was paired with tests on a laboratory-scale plant and with the characterization, by various techniques such as XRD, Mössbauer spectroscopy and elemental analysis, of the materials as synthesized and at different degrees of reduction. In the first step it was found that, besides the expected CO, CO2 and H2O, the products of the anaerobic oxidation of ethanol, a large amount of H2 and coke was also produced. The latter is highly undesired, since it affects the second step, during which water is fed over the pre-reduced spinel at high temperature. The behavior of the different spinels was affected by the nature of the divalent metal cation: magnetite was the oxide showing the slowest rate of reduction by ethanol, but on the other hand it was the one that could perform the entire cycle of the process most efficiently. Still, the problem of coke formation remains the greatest challenge to solve.
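The hydrogen yield of the water-splitting step follows from textbook stoichiometry. As a back-of-the-envelope sketch (standard chemistry, not thesis data), assuming the reduced iron is fully reoxidized back to magnetite, 3 Fe + 4 H2O → Fe3O4 + 4 H2:

```python
# Illustrative mass balance for the steam-iron water-splitting step,
# assuming complete reoxidation of iron to magnetite:
#   3 Fe + 4 H2O -> Fe3O4 + 4 H2   (4 mol H2 per mol Fe3O4 re-formed)

M_FE3O4 = 231.53   # molar mass of Fe3O4 [g/mol]
M_H2 = 2.016       # molar mass of H2 [g/mol]

def h2_yield_per_kg_magnetite():
    mol_fe3o4 = 1000.0 / M_FE3O4   # moles of Fe3O4 cycled per kg
    mol_h2 = 4.0 * mol_fe3o4
    return mol_h2 * M_H2           # grams of H2 per kg of Fe3O4

print(round(h2_yield_per_kg_magnetite(), 1))   # ≈ 34.8 g H2 per kg
```

Coke deposited in the reduction step erodes this ideal yield, which is why its formation is singled out above as the main open problem.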

Abstract:

Microemulsions are thermodynamically stable, macroscopically homogeneous but microscopically heterogeneous mixtures of water and oil stabilised by surfactant molecules. They have unique properties such as ultralow interfacial tension, a large interfacial area, and the ability to solubilise other immiscible liquids. Depending on temperature and concentration, non-ionic surfactants self-assemble into micelles and into flat lamellar, hexagonal, and sponge-like bicontinuous morphologies. Microemulsions occur in three different macroscopic phases: (a) a one-phase microemulsion (isotropic); (b) a two-phase system, in which the microemulsion coexists with either expelled water or expelled oil; and (c) a three-phase system, in which the microemulsion coexists with both expelled water and expelled oil.

One of the most important fundamental questions in this field is the relation between the properties of the surfactant monolayer at the water-oil interface and those of the microemulsion. This monolayer forms an extended interface whose local curvature determines the structure of the microemulsion. The main part of my thesis deals with quantitative measurements of the temperature-induced phase transitions of water-oil-nonionic microemulsions and their interpretation using the temperature-dependent spontaneous curvature [c0(T)] of the surfactant monolayer. In the one-phase region, conservation of the components determines the droplet (domain) size (R), whereas in the two-phase region it is determined by the temperature dependence of c0(T). The Helfrich bending free energy density includes the dependence of the droplet size on c0(T).
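The specific expression used in the thesis is not reproduced here, but for reference the standard Helfrich form of the bending free energy per unit area, in a common convention (κ: bending rigidity, κ̄: Gaussian rigidity, c1, c2: principal curvatures, c0: spontaneous curvature), reads:

```latex
f \;=\; \frac{\kappa}{2}\,\bigl(c_1 + c_2 - 2c_0\bigr)^2 \;+\; \bar{\kappa}\, c_1 c_2
```

For a spherical droplet of radius R one has c1 = c2 = 1/R, so the bending energy is minimized when 1/R matches c0(T), which is how the temperature dependence of the spontaneous curvature feeds into the droplet size.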

Abstract:

The subject of this thesis is the numerical computation of loop integrals that occur at higher orders of perturbation theory. In analogy to the real emission, one can introduce subtraction terms in the virtual contributions which remove the collinear and soft divergences of the loop integral. The phase-space integration and the loop integration can then be carried out in a single Monte Carlo integration. In this work we show how such a numerical integration can be performed with the help of a contour deformation, and how the required integrands can be computed with recursion formulas.
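The idea of contour deformation can be demonstrated on a toy integrand (illustrative only, far simpler than the thesis's loop integrands). With the Feynman-style prescription, lim δ→0+ ∫₀¹ dx / (x − 1/2 − iδ) = iπ; deforming the path into the lower half-plane via z(x) = x − iλx(1−x), which keeps the endpoints fixed and stays on the correct side of the pole, gives a smooth integrand that a plain quadrature (or Monte Carlo) handles directly:

```python
# Toy contour deformation: integrate 1/(z - 1/2) from 0 to 1 along
# z(x) = x - i*lam*x*(1-x), passing below the pole at z = 1/2.
# The exact value, matching the -i*delta prescription, is i*pi.

def deformed_integral(n=20000, lam=1.0):
    total = 0.0 + 0.0j
    h = 1.0 / n
    for k in range(n):
        x = (k + 0.5) * h                    # midpoint rule node
        z = x - 1j * lam * x * (1.0 - x)     # deformed contour point
        dz = 1.0 - 1j * lam * (1.0 - 2.0 * x)  # Jacobian dz/dx
        total += dz / (z - 0.5) * h
    return total

val = deformed_integral()
print(val)   # ≈ 0 + 3.14159j
```

On the real axis the integrand is singular at x = 1/2, but along the deformed path the pole is kept at a finite distance, so the same sampling that performs the phase-space integration can also perform the loop integration — the combination the thesis carries out in one Monte Carlo run.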