950 results for M2-m3 Loop
Abstract:
The X-ray test is a precise, fast and non-destructive method for detecting mechanical damage in seeds. In the present study, the efficiency of X-ray analysis in identifying the extent of mechanical damage in sweet corn seeds, and its relationship with germination and vigor, was evaluated. Hybrid 'SWB 551' (sh2) seeds with round (R) and flat (F) shapes were classified as large (L), medium (M1, M2 and M3) and small (S), using sieves with round and oblong screens. After artificial exposure to different levels of damage (0, 1, 3, 5 and 7 impacts), seeds were X-rayed (15 kV, 5 min) and submitted to germination (25 °C/5 days) and cold (10 °C/7 days) tests. Digital images of normal and abnormal seedlings and of ungerminated seeds from the germination and cold tests were analyzed jointly with the seed X-ray images. Results showed that damage affecting the embryonic axis resulted in abnormal seedlings or dead seeds in the germination and cold tests. X-ray analysis is efficient for identifying mechanical damage in sweet corn seeds, allowing damage severity to be associated with losses in germination and vigor.
Abstract:
International University Master's Degree in Aquaculture. Work carried out at the Oceanographic Museum of Monaco and submitted in partial fulfillment of the requirements for the International University Master's Degree in Aquaculture, awarded by the Universidad de Las Palmas de Gran Canaria (ULPGC), the Instituto Canario de Ciencias Marinas (ICCM), and the Centro Internacional de Altos Estudios Agronómicos Mediterráneos de Zaragoza (CIHEAM)
Abstract:
Doctoral program: Oceanography
Abstract:
This work describes the development of a simulation tool covering the internal combustion engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software in the Loop) and on-line (Hardware in the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulation has proved very useful in the development and testing of control systems. Hardware in the Loop simulation is a technology in which the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, makes it possible to simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics, in the latter case allowing evaluation of the performance and operating conditions of the internal combustion engine once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity: it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of its higher computational effort).
Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Moreover, the same simulation tool can be rapidly configured for both off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
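The HIL principle described above (the real-time processor reads the ECU's actuator commands and returns the signals the real sensors would produce) can be sketched minimally as follows. The first-order engine model and all names are illustrative assumptions, not the Simulink implementation of the thesis:

```python
def engine_model_step(speed_rpm, throttle, dt):
    """First-order engine model: speed relaxes toward a throttle-dependent target."""
    target_rpm = 800.0 + 5200.0 * throttle   # idle .. full-load speed (assumed values)
    tau = 0.5                                # engine time constant [s] (assumed)
    return speed_rpm + (target_rpm - speed_rpm) * dt / tau

def hil_loop(read_ecu_throttle, write_sensor_rpm, steps, dt=0.01):
    """One HIL cycle per time step: read the ECU output signal, advance the
    model, and feed back the signal a real crankshaft sensor would produce."""
    speed = 800.0                            # start at idle
    for _ in range(steps):
        throttle = read_ecu_throttle()       # signal that would normally feed an actuator
        speed = engine_model_step(speed, throttle, dt)
        write_sensor_rpm(speed)              # signal the actual sensor would produce
    return speed

# Stand-in for a real ECU: a constant 50 % throttle command.
final = hil_loop(lambda: 0.5, lambda rpm: None, steps=1000)
```

In a real HIL setup the two callbacks would be replaced by I/O reads and writes on the real-time hardware; the structure of the cycle is the same.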
Abstract:
The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter $\epsilon$ by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We showed that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in $\epsilon$. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We showed the absence of powers of $\pi$ in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
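The Mellin-Barnes technique mentioned above rests on the standard representation (a textbook identity, not specific to this thesis), which trades a sum raised to a power for a contour integral over Gamma functions:

$$\frac{1}{(A+B)^{\nu}} \;=\; \frac{1}{\Gamma(\nu)}\,\frac{1}{2\pi i}\int_{-i\infty}^{+i\infty}\mathrm{d}z\;\Gamma(-z)\,\Gamma(\nu+z)\,\frac{B^{z}}{A^{\nu+z}},$$

valid with the contour separating the poles of $\Gamma(-z)$ from those of $\Gamma(\nu+z)$. Applying it to the propagator denominators of the two-loop two-point function yields the double Mellin-Barnes integral; closing the contours and collecting residues then produces the nested sums that the nestedsums library evaluates order by order in $\epsilon$.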
Abstract:
The present state of the theoretical predictions for hadronic heavy hadron production is not quite satisfactory. The full next-to-leading order (NLO) $\mathcal{O}(\alpha_s^3)$ corrections to the hadroproduction of heavy quarks have raised the leading order (LO) $\mathcal{O}(\alpha_s^2)$ estimates, but the NLO predictions are still slightly below the experimental numbers. Moreover, the theoretical NLO predictions suffer from the usual large uncertainty resulting from the freedom in the choice of renormalization and factorization scales of perturbative QCD. In this light there are hopes that a next-to-next-to-leading order (NNLO) $\mathcal{O}(\alpha_s^4)$ calculation will bring theoretical predictions even closer to the experimental data. Also, the dependence on the factorization and renormalization scales of the physical process is expected to be greatly reduced at NNLO. This would reduce the theoretical uncertainty and therefore make the comparison between theory and experiment much more significant. In this thesis I have concentrated on the part of the NNLO corrections for hadronic heavy quark production in which one-loop integrals contribute in the form of a loop-by-loop product. In the first part of the thesis I use dimensional regularization to calculate the $\mathcal{O}(\epsilon^2)$ expansion of scalar one-loop one-, two-, three- and four-point integrals. The Laurent series of the scalar integrals is needed as input for the calculation of the one-loop matrix elements for the loop-by-loop contributions. Since each factor of the loop-by-loop product has negative powers of the dimensional regularization parameter $\epsilon$ up to $\mathcal{O}(\epsilon^{-2})$, the Laurent series of the scalar integrals has to be calculated up to $\mathcal{O}(\epsilon^2)$. The negative powers of $\epsilon$ are a consequence of ultraviolet and infrared/collinear (or mass) divergences. Among the scalar integrals, the four-point integrals are the most complicated.
The $\mathcal{O}(\epsilon^2)$ expansion of the three- and four-point integrals contains in general classical polylogarithms up to $\mathrm{Li}_4$ and $L$-functions related to multiple polylogarithms of maximal weight and depth four. All results for the scalar integrals are also available in electronic form. In the second part of the thesis I discuss the properties of the classical polylogarithms. I present the algorithms which allow one to reduce the number of polylogarithms in an expression. I derive identities for the $L$-functions, which have been used intensively to reduce the length of the final results for the scalar integrals. I also discuss the properties of multiple polylogarithms and derive identities to express the $L$-functions in terms of multiple polylogarithms. In the third part I investigate the numerical efficiency of the results for the scalar integrals, and discuss the dependence of the evaluation time on the relative error. In the fourth part of the thesis I present the larger part of the $\mathcal{O}(\epsilon^2)$ results on one-loop matrix elements in heavy flavor hadroproduction containing the full spin information. The $\mathcal{O}(\epsilon^2)$ terms arise as a combination of the $\mathcal{O}(\epsilon^2)$ results for the scalar integrals, the spin algebra and the Passarino-Veltman decomposition. The one-loop matrix elements will be needed as input in the determination of the loop-by-loop part of the NNLO corrections for hadronic heavy flavor production.
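The classical polylogarithms $\mathrm{Li}_2$ through $\mathrm{Li}_4$ that appear in these expansions can be evaluated numerically with the mpmath library; this is an illustrative sketch using well-known special values as sanity checks, not the electronic results of the thesis:

```python
from mpmath import mp, polylog, pi, log

mp.dps = 30  # work with 30 significant digits

# Classical polylogarithms Li_n(x); a few known special values:
li2_one = polylog(2, 1)             # Li_2(1)   = pi^2/6  = zeta(2)
li4_one = polylog(4, 1)             # Li_4(1)   = pi^4/90 = zeta(4)
li2_half = polylog(2, mp.mpf(1)/2)  # Li_2(1/2) = pi^2/12 - log(2)^2/2
```

Checks of this kind are a cheap way to validate any independent implementation of the $\mathcal{O}(\epsilon^2)$ expansions against closed-form constants.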
Abstract:
Over the past few years, the switch towards renewable sources for energy production has come to be considered necessary for the future sustainability of the world environment. Hydrogen is one of the most promising energy vectors for storing low-density renewable sources such as wind, biomass and sun. The production of hydrogen by the steam-iron process could be one of the most versatile approaches for the employment of different reducing bio-based fuels. The steam-iron process is a two-step chemical looping reaction based (i) on the reduction of an iron-based oxide by an organic compound, followed by (ii) the reoxidation of the reduced solid material by water, which leads to the production of hydrogen. The overall reaction is the water oxidation of the organic fuel (gasification or reforming processes), but the inherent separation of the two semi-reactions allows the production of carbon-free hydrogen. In this thesis, a steam-iron cycle with methanol is proposed, and three different oxides with the generic formula AFe2O4 (A = Co, Ni, Fe) are compared in order to understand how the chemical properties and the structural differences can affect the productivity of the overall process. The modifications occurring in the spent samples are investigated in depth. A specific study of the CoFe2O4-based process using both classical and in-situ/ex-situ analysis is reported, employing characterization techniques such as FTIR spectroscopy, TEM, XRD, XPS, BET, TPR and Mössbauer spectroscopy.
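For the iron-only case (A = Fe, i.e. magnetite), the two semi-reactions of the cycle with methanol as reductant can be sketched with the idealized textbook stoichiometry, assuming full oxidation of the methanol (an illustration, not a result of this thesis):

$$3\,\mathrm{Fe_3O_4} + 4\,\mathrm{CH_3OH} \rightarrow 9\,\mathrm{Fe} + 4\,\mathrm{CO_2} + 8\,\mathrm{H_2O}$$
$$3\,\mathrm{Fe} + 4\,\mathrm{H_2O} \rightarrow \mathrm{Fe_3O_4} + 4\,\mathrm{H_2}$$

All the carbon leaves the loop in the reduction step, which is why the hydrogen produced in the reoxidation step is inherently carbon-free.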
Abstract:
The ALMATracker is a pointing system for the ground station of ALMASat-1. Its configuration does not follow the classical azimuth-elevation scheme; instead, it uses α-β axes to avoid singularity points at positions close to the zenith. With the system still in the design phase, SolidWorks and LabVIEW were used together to build a Software-in-the-Loop setup for its functional verification, based on the relatively new NI SoftMotion package. Given the limited experience with and documentation for this recent tool, a case study simulating a cylindrical coordinate system was first created in order to gain familiarity with it. The results obtained were then exploited to create a SIL simulation of the ALMATracker's motion. This design methodology not only confirmed the validity of the proposed design, but also highlighted the problems and the potential of this software package, providing an in-depth analysis of it.
Abstract:
The work investigates the feasibility of a new process aimed at the production of hydrogen with inherent separation of carbon oxides. The process consists of a cycle in which, in the first step, a mixed metal oxide is reduced by ethanol (obtained from biomass). The reduced metal is then contacted with steam in order to split the water, sequestering the oxygen into the looping material's structure. The oxides used to run this thermochemical cycle, also called the "steam-iron process", are mixed ferrites with the spinel structure MeFe2O4 (Me = Fe, Co, Ni or Cu). To understand the reactions involved in the anaerobic reforming of ethanol, diffuse reflectance spectroscopy (DRIFTS), coupled with mass analysis of the effluent, was used to study the surface composition of the ferrites during the adsorption of ethanol and its transformations during the temperature program. This study was paired with tests on a laboratory-scale plant and with the characterization, through various techniques such as XRD, Mössbauer spectroscopy and elemental analysis, of the materials as synthesized and at different degrees of reduction. In the first step it was found that, besides the expected CO, CO2 and H2O, the products of the anaerobic oxidation of ethanol, a large amount of H2 and coke was also produced. The latter is highly undesired, since it affects the second step, during which water is fed over the pre-reduced spinel at high temperature. The behavior of the different spinels was affected by the nature of the divalent metal cation; magnetite was the oxide showing the slowest rate of reduction by ethanol, but on the other hand it was the one that could perform the entire cycle of the process most efficiently. Still, the problem of coke formation remains the greatest challenge to solve.
Abstract:
This thesis proposes a functional and energy retrofit project for the civil hospital complex of Castel San Pietro Terme, a healthcare facility in operation since 1870, which the AUSL that owns it has now scheduled for redevelopment. The complex, made up of several buildings erected in successive periods, with a heated gross volume of 41,670 m³, occupies an area of 18,415 m². Repeatedly modified and extended over time, today it appears as a heterogeneous collection of volumes, inconsistent in appearance and affected by major critical issues: • largely inadequate energy performance; • insufficient resistance to seismic actions; • inefficient internal distribution of spaces and functions. Starting from an analysis extending from the hospital complex to the whole area of Castel San Pietro Terme, a project was defined that takes into account the peculiarities and critical issues of the site. The project proposes the redevelopment of the area in front of the hospital's historical entrance through a direct connection to the river park, currently interrupted by viale Oriani and a parking lot. For the built complex, a set of differentiated interventions is designed, responding to the primary objective of adapting the hospital to new healthcare functions. The reorganization includes: • the elimination of the surgery department; • the adaptation of the wards to hospice and long-term care functions for terminally ill patients; • the expansion of the "Casa della Salute" project, which provides outpatient clinic spaces. The project adopted this functional program, aiming to preserve and redevelop the existing buildings as far as possible. It was therefore planned to: • demolish the operating-block building; • volumetrically redefine the ward building; • build new volumes to house the outpatient clinics.
To ensure an adequate level of performance, the intervention aimed to bring the whole complex to energy class A and to upgrade its seismic response, particularly for the ward building, which presents the most critical conditions. Simulations performed with the Termolog Epix3 software indicate a final energy demand of 5.10 kWh/m³ per year, a 92.7% reduction with respect to current consumption levels. Particular attention was also paid to the comfort of the ward spaces, verified with the dynamic energy-simulation software IESVE, which made it possible to monitor the effects of each design choice. The new pavilions were designed to functionally integrate the outpatient rooms and some spaces dedicated to complementary therapies for long-term patients. Xlam load-bearing panel technology was preferred for its speed of construction. The roof above, an ETFE membrane supported by curved glulam beams, ensures environmental comfort through passive systems and also limits the envelope requirements of the volumes below.
Abstract:
This thesis deals with the numerical computation of loop integrals arising at higher orders of perturbation theory. In analogy to the real emission, subtraction terms can be introduced in the virtual contributions which remove the collinear and soft divergences of the loop integral. The phase-space integration and the loop integration can then be carried out in a single Monte Carlo integration. In this work we show how such a numerical integration can be performed with the help of a contour deformation. We also show how the required integrands can be computed with recursion formulas.
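The contour-deformation idea can be illustrated in one dimension. The integral $\int_0^1 \mathrm{d}x/(x - 1/2 + i0)$ has a pole on the real contour, but deforming $z(x) = x + i\lambda\,x(1-x)$ lifts the contour away from the pole (which the $+i0$ prescription places just below the axis) without changing the value, which is exactly $-i\pi$. The specific deformation and the plain Monte Carlo sampling are illustrative choices, not the algorithm of the thesis:

```python
import random

def deformed_integrand(x, lam=1.0):
    """Integrand 1/(z - 1/2) evaluated on the deformed contour, times the Jacobian."""
    z = x + 1j * lam * x * (1.0 - x)        # deformed contour point
    dz = 1.0 + 1j * lam * (1.0 - 2.0 * x)   # Jacobian dz/dx
    return dz / (z - 0.5)

def mc_integrate(f, n=200_000, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

result = mc_integrate(deformed_integrand)   # should be close to -i*pi
```

In the actual multi-loop case the deformation lives in the complexified loop-momentum space and must be chosen compatibly with the $i\epsilon$ prescription of every propagator, but the mechanism is the same.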
Abstract:
The research field of my PhD concerns mathematical modeling and numerical simulation applied to the analysis of cardiac electrophysiology at the single-cell level. This is possible thanks to the development of mathematical descriptions of single cellular components: ionic channels, pumps, exchangers and subcellular compartments. Due to the difficulty of in vivo experiments on human cells, most measurements are acquired in vitro using animal models (e.g. guinea pig, dog, rabbit). Moreover, to study the cardiac action potential and all its features, it is necessary to acquire more specific knowledge about the single ionic currents that contribute to cardiac activity. Electrophysiological models of the heart have become very accurate in recent years, giving rise to extremely complicated systems of differential equations. Although they describe the behavior of cardiac cells quite well, the models are computationally demanding for numerical simulations and are very difficult to analyze from a mathematical (dynamical-systems) viewpoint. Simplified mathematical models that capture the underlying dynamics to a certain extent are therefore frequently used. The results presented in this thesis have confirmed that a close integration of computational modeling and experimental recordings in real myocytes, as performed by the dynamic clamp, is a useful tool for enhancing our understanding of the various components of normal cardiac electrophysiology, as well as of arrhythmogenic mechanisms in pathological conditions, especially when fully integrated with experimental data.
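Simplified excitable-cell models of the kind mentioned above can be sketched with the classic FitzHugh-Nagumo system, a two-variable reduction that reproduces the spiking dynamics of an action potential. It is an illustrative stand-in, not the myocyte model of the thesis; the parameter values are the standard textbook ones:

```python
def fitzhugh_nagumo(v0=-1.0, w0=1.0, i_stim=0.5, dt=0.01, steps=10_000):
    """Forward-Euler integration of the FitzHugh-Nagumo model:
    dv/dt = v - v^3/3 - w + I_stim   (fast 'membrane potential' variable)
    dw/dt = 0.08*(v + 0.7 - 0.8*w)   (slow recovery variable)
    Returns the trace of v; with I_stim = 0.5 the model fires repetitively."""
    v, w = v0, w0
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w + i_stim
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()  # 100 time units of repetitive spiking
```

Detailed ionic models (tens of state variables) follow the same integration pattern, which is why such reduced systems are useful for dynamical-systems analysis before scaling up.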
Abstract:
When it comes to control architectures on the Web, the event model is undoubtedly the most widespread and widely adopted. Asynchrony and a high degree of user interaction are typical characteristics of web applications, and an event-driven architecture, thanks to its characteristic control cycle known as the Event Loop, provides an abstraction that is simple yet expressive enough to satisfy these requirements. The growth of the Internet and of its associated technologies, together with recent advances in multi-core CPUs, has provided fertile ground for the development of increasingly complex web applications. This increase in complexity, however, has exposed some limits of the event model that are still not fully resolved today. This work proposes a different approach to this class of problems, one that overcomes the limits found in the event model by adopting a different architecture, born in the field of AI but now gaining popularity in general-purpose programming as well: the agent model. Agent architectures adopt a control cycle similar to the Event Loop of the event model, but with some profound differences: the Control Loop. The aim of this thesis is therefore to examine the two kinds of architecture in depth, highlighting their differences, showing what it means to approach the design and development of a web application with different technologies built on different control cycles, and bringing out the strengths and weaknesses of the two approaches.
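The Event Loop cycle discussed above can be reduced to a minimal sketch: handlers are registered per event type, and a single loop dequeues events and runs each handler to completion before picking the next event. All names are illustrative, not tied to any specific web framework:

```python
from collections import deque

class EventLoop:
    """Minimal event loop: a FIFO event queue plus per-type handler lists."""

    def __init__(self):
        self.queue = deque()
        self.handlers = {}

    def on(self, event_type, handler):
        """Register a handler for an event type."""
        self.handlers.setdefault(event_type, []).append(handler)

    def emit(self, event_type, payload=None):
        """Enqueue an event; it will be processed on a later loop iteration."""
        self.queue.append((event_type, payload))

    def run(self):
        # The loop itself: take one event, dispatch it to completion, repeat.
        while self.queue:
            event_type, payload = self.queue.popleft()
            for handler in self.handlers.get(event_type, []):
                handler(payload)

loop = EventLoop()
log = []
loop.on("click", lambda p: log.append(f"clicked {p}"))
loop.emit("click", "button1")
loop.run()
```

The run-to-completion property is the source of both the model's simplicity (no data races between handlers) and the limits mentioned above: a long-running handler blocks every other event, and complex applications end up fragmenting their logic across many small callbacks.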
Abstract:
This thesis is on loop-induced processes in theories with warped extra dimensions where the fermions and gauge bosons are allowed to propagate in the bulk, while the Higgs sector is localized on or near the infra-red brane. These so-called Randall-Sundrum (RS) models have the potential to simultaneously explain the hierarchy problem and address the question of what causes the large hierarchies in the fermion sector of the Standard Model (SM). The Kaluza-Klein (KK) excitations of the bulk fields can significantly affect the loop-level processes considered in this thesis and, hence, could indirectly indicate the existence of warped extra dimensions. The analytical part of this thesis deals with the detailed calculation of three loop-induced processes in the RS models in question: the Higgs production process via gluon fusion, the Higgs decay into two photons, and the flavor-changing neutral current b → sγ. A comprehensive, five-dimensional (5D) analysis will show that the amplitudes of the Higgs processes can be expressed in terms of integrals over 5D propagators with the Higgs-boson profile along the extra dimension, which can be used for arbitrary models with a compact extra dimension. To this end, both the boson and fermion propagators in a warped 5D background are derived. It will be shown that the seemingly contradictory results for the gluon fusion amplitude in the literature can be traced back to two distinguishable, not smoothly-connected incarnations of the RS model. The investigation of the b → sγ transition is performed in the KK decomposed theory. 
It will be argued that summing up the entire KK tower leads to a finite result, which can be well approximated by a closed analytical expression. In the phenomenological part of this thesis, the analytic results for all relevant Higgs couplings in the RS models in question are compared with current and, in particular, future sensitivities of the Large Hadron Collider (LHC) and the planned International Linear Collider. The latest LHC Higgs data is then used to exclude significant portions of the parameter space of each RS scenario. The analysis will demonstrate that especially the loop-induced Higgs couplings are sensitive to KK particles of the custodial RS model with masses in the multi-TeV range. Finally, the effect of the RS model on three flavor observables associated with the b → sγ transition is examined. In particular, we study the branching ratio of the inclusive decay B → X_s γ.