975 results for two-loop diagram
Abstract:
The hero's journey is a narrative structure identified by several authors in comparative studies of folklore and mythology. This storytelling template presents the stages of inner metamorphosis undergone by the protagonist after being called to an adventure. In a simplified version, this journey is divided into three acts separated by two crucial moments. Here we propose a discrete-time dynamical system for representing the protagonist's evolution. The suffering along the journey is taken as the control parameter of this system. The bifurcation diagram exhibits stationary, periodic and chaotic behaviors. In this diagram, there is a transition from fixed point to chaos and a transition from limit cycle to fixed point. We found that the values of the control parameter corresponding to these two transitions are in quantitative agreement with the two critical moments of the three-act hero's journey identified in 10 movies from the list of the 200 highest-grossing films worldwide.
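As a rough illustration of the kind of analysis described (the paper's actual map is not given in the abstract), the sketch below computes a bifurcation diagram for a generic one-dimensional discrete-time map — here the logistic map, used purely as a stand-in — by iterating the map over a range of control-parameter values and recording the long-time behavior.

import numpy as np
import matplotlib.pyplot as plt

def bifurcation_diagram(f, r_values, x0=0.5, n_transient=500, n_keep=100):
    """Iterate x -> f(x, r) for each r, discard transients, keep the attractor samples."""
    rs, xs = [], []
    for r in r_values:
        x = x0
        for _ in range(n_transient):          # let transients die out
            x = f(x, r)
        for _ in range(n_keep):               # record the asymptotic behavior
            x = f(x, r)
            rs.append(r)
            xs.append(x)
    return np.array(rs), np.array(xs)

# Stand-in map (NOT the paper's model): the logistic map x_{k+1} = r x_k (1 - x_k).
logistic = lambda x, r: r * x * (1.0 - x)

r_grid = np.linspace(2.5, 4.0, 600)
rs, xs = bifurcation_diagram(logistic, r_grid)
plt.plot(rs, xs, ",k", alpha=0.3)
plt.xlabel("control parameter r")
plt.ylabel("asymptotic state x")
plt.show()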
Abstract:
Model predictive control (MPC) applications in the process industry usually deal with process systems that show time delays (dead times) between the system inputs and outputs. Also, in many industrial applications of MPC, integrating outputs resulting from liquid level control or recycle streams need to be considered as controlled outputs. Conventional MPC packages can be applied to time-delay systems, but stability of the closed-loop system will depend on the tuning parameters of the controller and cannot be guaranteed even in the nominal case. In this work, a state-space model based on the analytical step response model is extended to the case of integrating systems with time delays. This model is applied to the development of two versions of a nominally stable MPC, designed for the practical scenario in which one has targets for some of the inputs and/or outputs that may be unreachable, and zone control (or interval tracking) for the remaining outputs. The controller is tested through simulation of a multivariable industrial reactor system.
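To make the setting concrete, the sketch below is a minimal, generic predictive controller for a single integrating output with dead time, assuming a toy state-space model in which the delayed inputs are stacked into the state. It is only an unconstrained illustration of the prediction-plus-optimization idea, not the paper's nominally stable formulation (no input targets, zone control, or stability constraints), and all numerical values are placeholders.

import numpy as np

# Toy plant: integrating output with dead time, y[k+1] = y[k] + b*u[k-d]
b, d = 0.05, 5                 # assumed gain and delay (in samples)
Np, Nc = 40, 8                 # prediction and control horizons
lam = 0.5                      # input-weighting factor

# Augmented state z = [y, u[k-d], ..., u[k-1]] absorbs the dead time.
n = d + 1
A = np.zeros((n, n)); A[0, 0] = 1.0; A[0, 1] = b
for i in range(1, n - 1):
    A[i, i + 1] = 1.0
B = np.zeros((n, 1)); B[-1, 0] = 1.0
C = np.zeros((1, n)); C[0, 0] = 1.0

# Prediction matrices: Y = F z + Phi U over the horizon.
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
Phi = np.zeros((Np, Nc))
for i in range(Np):
    for j in range(min(i + 1, Nc)):
        Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B)[0, 0]

K = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Nc), Phi.T)[0]  # first-move gain

# Nominal closed-loop simulation toward a set-point of 1.0.
z = np.zeros(n)
ysp = np.ones(Np)
for k in range(120):
    u = float(K @ (ysp - F @ z))           # optimal first control move
    z = A @ z + B[:, 0] * u                # plant update (nominal case: same model)
print("final output:", z[0])               # should settle near the set-point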
Abstract:
Xylanases (EC 3.2.1.8, endo-1,4-glycosyl hydrolase) catalyze the hydrolysis of xylan, an abundant hemicellulose of plant cell walls. Access to the catalytic site of GH11 xylanases is regulated by movement of a short beta-hairpin, the so-called thumb region, which can adopt open or closed conformations. A crystallographic study has shown that the D11F/R122D mutant of the GH11 xylanase A from Bacillus subtilis (BsXA) displays a stable "open" conformation, and here we report a molecular dynamics simulation study comparing this mutant with the native enzyme over a range of temperatures. The mutant open conformation was stable at 300 and 328 K; however, it showed a transition to the closed state at 338 K. Analysis of dihedral angles identified thumb region residues Y113 and T123 as key hinge points which determine the open-closed transition at 338 K. Although the D11F/R122D mutations result in a reduction in local inter- and intramolecular hydrogen bonding, the global energies of the open and closed conformations in the native enzyme are equivalent, suggesting that the two conformations are equally accessible. These results indicate that the thumb region samples a broader range of energetically permissible conformations which regulate access to the active site region. The R122D mutation contributes to the stability of the open conformation but is not essential for thumb dynamics, i.e., the wild-type enzyme can also adopt the open conformation.
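For readers unfamiliar with the dihedral-angle analysis mentioned above, the following standalone sketch shows the standard way a torsion angle is computed from four atomic positions (for example, backbone atoms of the hinge residues). It is generic illustrative code, not the analysis pipeline used in the study.

import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (degrees) defined by four points, standard atan2 formulation."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 /= np.linalg.norm(b1)
    # components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

# Example with arbitrary coordinates: a planar cis arrangement gives 0 degrees.
p = [np.array(c, dtype=float) for c in [(1, 0, 0), (0, 0, 0), (0, 1, 0), (1, 1, 0)]]
print(dihedral(*p))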
Abstract:
This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (regarding, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulations has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running on a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, makes it possible to simulate the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics, in the last case allowing the evaluation of the performance and operating conditions of the Internal Combustion Engine once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line application, because of the higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured both for off-line and real-time application. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
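As a schematic illustration (in Python rather than Simulink) of the kind of control-oriented plant model involved, the sketch below integrates a crude vehicle longitudinal-dynamics model driven by a simple engine torque map at a fixed time step, which is the structure a SIL/HIL plant model typically has. All numerical values are placeholders, not the parameters of the tool described above.

import numpy as np

# Placeholder parameters (illustrative only)
m, r_wheel, gear, final_drive = 1200.0, 0.30, 1.4, 3.9     # kg, m, ratios
rho, Cd, A_front, c_roll, g = 1.2, 0.32, 2.1, 0.012, 9.81
dt = 0.01                                                    # fixed step, s

def engine_torque(rpm, throttle):
    """Very simple engine map: torque scales with throttle, falls off at high speed."""
    wot = np.interp(rpm, [1000, 3000, 5000, 7000], [90.0, 130.0, 120.0, 80.0])
    return throttle * wot                                    # Nm

v = 0.1                                                      # vehicle speed, m/s
for k in range(int(20.0 / dt)):                              # 20 s at full throttle
    rpm = v / r_wheel * gear * final_drive * 60.0 / (2 * np.pi)
    F_trac = engine_torque(rpm, throttle=1.0) * gear * final_drive / r_wheel
    F_aero = 0.5 * rho * Cd * A_front * v**2
    F_roll = c_roll * m * g
    v += dt * (F_trac - F_aero - F_roll) / m                 # explicit Euler integration
print(f"speed after 20 s: {v*3.6:.1f} km/h")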
Abstract:
The present state of the theoretical predictions for hadronic heavy hadron production is not quite satisfactory. The full next-to-leading order (NLO) $\mathcal{O}(\alpha_s^3)$ corrections to the hadroproduction of heavy quarks have raised the leading order (LO) $\mathcal{O}(\alpha_s^2)$ estimates, but the NLO predictions are still slightly below the experimental numbers. Moreover, the theoretical NLO predictions suffer from the usual large uncertainty resulting from the freedom in the choice of renormalization and factorization scales of perturbative QCD. In this light there are hopes that a next-to-next-to-leading order (NNLO) $\mathcal{O}(\alpha_s^4)$ calculation will bring theoretical predictions even closer to the experimental data. Also, the dependence on the factorization and renormalization scales of the physical process is expected to be greatly reduced at NNLO. This would reduce the theoretical uncertainty and therefore make the comparison between theory and experiment much more significant. In this thesis I have concentrated on that part of the NNLO corrections for hadronic heavy quark production where one-loop integrals contribute in the form of a loop-by-loop product. In the first part of the thesis I use dimensional regularization to calculate the $\mathcal{O}(\epsilon^2)$ expansion of scalar one-loop one-, two-, three- and four-point integrals. The Laurent series of the scalar integrals is needed as an input for the calculation of the one-loop matrix elements for the loop-by-loop contributions. Since each factor of the loop-by-loop product has negative powers of the dimensional regularization parameter $\epsilon$ up to $\mathcal{O}(\epsilon^{-2})$, the Laurent series of the scalar integrals has to be calculated up to $\mathcal{O}(\epsilon^2)$. The negative powers of $\epsilon$ are a consequence of ultraviolet and infrared/collinear (or mass) divergences. Among the scalar integrals the four-point integrals are the most complicated. The $\mathcal{O}(\epsilon^2)$ expansion of the three- and four-point integrals contains in general classical polylogarithms up to $\mathrm{Li}_4$ and $L$-functions related to multiple polylogarithms of maximal weight and depth four. All results for the scalar integrals are also available in electronic form. In the second part of the thesis I discuss the properties of the classical polylogarithms. I present the algorithms which allow one to reduce the number of polylogarithms in an expression. I derive identities for the $L$-functions which have been used intensively to reduce the length of the final results for the scalar integrals. I also discuss the properties of multiple polylogarithms. I derive identities to express the $L$-functions in terms of multiple polylogarithms. In the third part I investigate the numerical efficiency of the results for the scalar integrals. The dependence of the evaluation time on the relative error is discussed. In the fourth part of the thesis I present the larger part of the $\mathcal{O}(\epsilon^2)$ results on one-loop matrix elements in heavy flavor hadroproduction containing the full spin information. The $\mathcal{O}(\epsilon^2)$ terms arise as a combination of the $\mathcal{O}(\epsilon^2)$ results for the scalar integrals, the spin algebra and the Passarino-Veltman decomposition. The one-loop matrix elements will be needed as input in the determination of the loop-by-loop part of the NNLO corrections to hadronic heavy flavor production.
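The classical polylogarithms appearing in such expansions are readily evaluated numerically. The snippet below uses the mpmath library purely as an independent illustration (it is not the electronic form of the thesis results) to evaluate $\mathrm{Li}_2$, $\mathrm{Li}_3$ and $\mathrm{Li}_4$ and to check two well-known special values.

import mpmath as mp

mp.mp.dps = 30                       # working precision: 30 significant digits

x = mp.mpf("0.5")
for n in (2, 3, 4):
    print(f"Li_{n}(1/2) =", mp.polylog(n, x))

# Sanity checks against known closed forms:
# Li_2(1) = pi^2/6 and Li_2(1/2) = pi^2/12 - ln(2)^2/2
print(mp.polylog(2, 1) - mp.pi**2 / 6)                       # ~ 0
print(mp.polylog(2, x) - (mp.pi**2 / 12 - mp.log(2)**2 / 2)) # ~ 0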
Abstract:
Over the past few years, the switch towards renewable sources for energy production has come to be considered necessary for the future sustainability of the world environment. Hydrogen is one of the most promising energy vectors for the storage of low-density renewable sources such as wind, biomass and sun. The production of hydrogen by the steam-iron process could be one of the most versatile approaches for employing different reducing bio-based fuels. The steam-iron process is a two-step chemical looping reaction based (i) on the reduction of an iron-based oxide with an organic compound, followed by (ii) reoxidation of the reduced solid material by water, which leads to the production of hydrogen. The overall reaction is the water oxidation of the organic fuel (gasification or reforming processes), but the inherent separation of the two semireactions allows the production of carbon-free hydrogen. In this thesis, the steam-iron cycle with methanol is proposed, and three different oxides with the generic formula AFe2O4 (A = Co, Ni, Fe) are compared in order to understand how the chemical properties and the structural differences affect the productivity of the overall process. The modifications occurring during operation are investigated in depth by analysis of the used materials. A specific study of the CoFe2O4-based process using both classical and in-situ/ex-situ analysis is reported, employing many characterization techniques such as FTIR spectroscopy, TEM, XRD, XPS, BET, TPR and Mössbauer spectroscopy.
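For orientation, and assuming the textbook steam-iron redox couple Fe/Fe3O4 rather than the ferrites studied in the thesis, the short calculation below estimates the hydrogen yield of the water-reoxidation step, 3 Fe + 4 H2O -> Fe3O4 + 4 H2.

# Hydrogen yield of the classical steam-iron reoxidation step (illustrative numbers only)
M_Fe, M_H2 = 55.85, 2.016           # g/mol
V_molar = 22.414                    # L/mol at 0 degC, 1 atm

mol_H2_per_mol_Fe = 4.0 / 3.0       # from 3 Fe + 4 H2O -> Fe3O4 + 4 H2
mol_Fe_per_kg = 1000.0 / M_Fe
mol_H2_per_kg_Fe = mol_Fe_per_kg * mol_H2_per_mol_Fe

print(f"H2 per kg Fe: {mol_H2_per_kg_Fe * M_H2:.1f} g "
      f"({mol_H2_per_kg_Fe * V_molar / 1000.0:.2f} Nm^3)")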
Abstract:
This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electroproduction and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of 2 spin-independent (scalar) and 4 spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams - 100 one-loop diagrams need to be calculated - several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulation of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for the algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
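To give a flavor of the numerical side of such one-loop work (the thesis itself uses C; this is only a generic Python sketch with one common sign convention, not the thesis code), the snippet below evaluates the finite part of the simplest scalar one-loop integral, the two-point function, from its Feynman-parameter representation, and checks the trivial limit p^2 = 0 with equal masses, where the finite part reduces to -ln(m^2/mu^2).

import numpy as np
from scipy.integrate import quad

def b0_finite(p2, m0sq, m1sq, mu2=1.0):
    """Finite part of the scalar two-point function below threshold,
    B0 = 1/eps_bar - Integral_0^1 dx ln(Delta/mu^2),
    with Delta = x*m1^2 + (1-x)*m0^2 - x*(1-x)*p^2 (real for p^2 < (m0+m1)^2)."""
    integrand = lambda x: np.log((x * m1sq + (1 - x) * m0sq - x * (1 - x) * p2) / mu2)
    val, _ = quad(integrand, 0.0, 1.0)
    return -val

# Check: at p^2 = 0 and equal masses the integral is elementary, -ln(m^2/mu^2).
m2 = 2.5
print(b0_finite(0.0, m2, m2), -np.log(m2))          # the two numbers agree
print(b0_finite(1.0, 2.0, 3.0))                     # a generic below-threshold point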
Abstract:
Microemulsions are thermodynamically stable, macroscopically homogeneous but microscopically heterogeneous mixtures of water and oil stabilised by surfactant molecules. They have unique properties like ultralow interfacial tension, large interfacial area and the ability to solubilise other immiscible liquids. Depending on the temperature and concentration, non-ionic surfactants self-assemble into micelles, flat lamellar, hexagonal and sponge-like bicontinuous morphologies. Microemulsions have three different macroscopic phases: (a) the 1-phase region, a single isotropic microemulsion; (b) the 2-phase region, a microemulsion coexisting with either expelled water or expelled oil; and (c) the 3-phase region, a microemulsion coexisting with both expelled water and oil.

One of the most important fundamental questions in this field is the relation between the properties of the surfactant monolayer at the water-oil interface and those of the microemulsion. This monolayer forms an extended interface whose local curvature determines the structure of the microemulsion. The main part of my thesis deals with quantitative measurements of the temperature-induced phase transitions of water-oil-nonionic-surfactant microemulsions and their interpretation using the temperature-dependent spontaneous curvature [c0(T)] of the surfactant monolayer. In the 1-phase region, conservation of the components determines the droplet (domain) size (R), whereas in the 2-phase region it is determined by the temperature dependence of c0(T). The Helfrich bending free energy density includes the dependence of the droplet size on c0(T) as
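The expression itself is cut off in the indexed abstract. For orientation only, the standard Helfrich form of the bending free energy density for a spherical droplet of radius $R$, with bending rigidity $\kappa$ and saddle-splay modulus $\bar{\kappa}$ (not necessarily identical to the author's exact expression), is
$$ f = 2\kappa\left(\frac{1}{R} - c_0(T)\right)^{2} + \frac{\bar{\kappa}}{R^{2}} \;. $$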
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature.
The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in the scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region, and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
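The connection between the form factor at small spacelike momentum transfer and the charge radius quoted above is $F_\pi(Q^2) \simeq 1 - \langle r_\pi^2\rangle Q^2/6$. As a purely illustrative sketch (synthetic data generated from an assumed monopole form factor, not lattice results), the snippet below fits that expansion near $Q^2 = 0$ and recovers the radius.

import numpy as np

# Synthetic "data": a monopole form factor with an assumed <r^2> = 0.45 fm^2,
# sampled at small spacelike Q^2 (in fm^-2), with tiny noise added.
r2_true = 0.45
Q2 = np.linspace(0.01, 0.25, 12)
rng = np.random.default_rng(0)
F = 1.0 / (1.0 + Q2 * r2_true / 6.0) + rng.normal(0.0, 1e-4, Q2.size)

# Fit F(Q^2) = 1 - <r^2> Q^2/6 + c Q^4 near Q^2 = 0 and read off the slope.
coeffs = np.polyfit(Q2, F, 2)          # returns [quadratic, linear, constant]
r2_fit = -6.0 * coeffs[1]
print(f"fitted <r_pi^2> = {r2_fit:.3f} fm^2 (input {r2_true})")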
Abstract:
This thesis is on loop-induced processes in theories with warped extra dimensions where the fermions and gauge bosons are allowed to propagate in the bulk, while the Higgs sector is localized on or near the infra-red brane. These so-called Randall-Sundrum (RS) models have the potential to simultaneously explain the hierarchy problem and address the question of what causes the large hierarchies in the fermion sector of the Standard Model (SM). The Kaluza-Klein (KK) excitations of the bulk fields can significantly affect the loop-level processes considered in this thesis and, hence, could indirectly indicate the existence of warped extra dimensions. The analytical part of this thesis deals with the detailed calculation of three loop-induced processes in the RS models in question: the Higgs production process via gluon fusion, the Higgs decay into two photons, and the flavor-changing neutral current b → sγ. A comprehensive five-dimensional (5D) analysis will show that the amplitudes of the Higgs processes can be expressed in terms of integrals over 5D propagators with the Higgs-boson profile along the extra dimension, which can be used for arbitrary models with a compact extra dimension. To this end, both the boson and fermion propagators in a warped 5D background are derived. It will be shown that the seemingly contradictory results for the gluon fusion amplitude in the literature can be traced back to two distinguishable, not smoothly connected incarnations of the RS model. The investigation of the b → sγ transition is performed in the KK-decomposed theory. It will be argued that summing up the entire KK tower leads to a finite result, which can be well approximated by a closed analytical expression.
In the phenomenological part of this thesis, the analytic results for all relevant Higgs couplings in the RS models in question are compared with current and, in particular, future sensitivities of the Large Hadron Collider (LHC) and the planned International Linear Collider. The latest LHC Higgs data is then used to exclude significant portions of the parameter space of each RS scenario. The analysis will demonstrate that especially the loop-induced Higgs couplings are sensitive to KK particles of the custodial RS model with masses in the multi-tera-electronvolt range. Finally, the effects of the RS model on three flavor observables associated with the b → sγ transition are examined. In particular, we study the branching ratio of the inclusive decay B → X_s γ.
Abstract:
Unique as snowflakes, learning communities are formed in countless ways. Some are designed specifically for first-year students, while others offer combined or clustered upper-level courses. Most involve at least two linked courses, and some add residential and social components. Many address core general education and basic skills requirements. Learning communities differ in design, yet they are similar in striving to enhance students' academic and social growth. First-year learning communities foster experiences that have been linked to academic success and retention. They also offer unique opportunities for librarians interested in collaborating with departmental faculty and enhancing teaching skills. This article will explore one librarian's experiences teaching within three first-year learning communities at Buffalo State College.
Abstract:
Background Patients late after open-heart surgery may develop dual-loop reentrant atrial arrhythmias, and mapping and catheter ablation remain challenging despite computer-assisted mapping techniques. Objectives The purpose of the study was to demonstrate the prevalence and characteristics of dual-loop reentrant arrhythmias, and to define the optimal mapping and ablation strategy. Methods Forty consecutive patients (mean age 52 +/- 12 years) with intra-atrial reentrant tachycardia (IART) after open-heart surgery (with an incision of the right atrial free wall) were studied. Dual-loop IART was defined as the presence of two simultaneous atrial circuits. Following an abrupt tachycardia change during radiofrequency (RF) ablation, electrical disconnection of the targeted reentry isthmus from the remaining circuit was demonstrated by entrainment mapping. Furthermore, the second circuit loop was localized using electroanatomic mapping and/or entrainment mapping. Results Dual-loop IART was demonstrated in 8 patients (20%; 5 patients with congenital heart disease, 3 with acquired heart disease). Dual-loop IART included an isthmus-dependent atrial flutter combined with a reentry related to the atriotomy scar. The diagnosis of dual-loop IART required the comparison of entrainment mapping before and after tachycardia modification. Overall, 35 patients had successful RF ablation (88%). Success rates were lower in patients with dual-loop IART than in patients without dual-loop IART. Ablation failures in 3 patients with dual-loop IART were related to the inability to properly transect the second tachycardia isthmus in the right atrial free wall. Conclusions Dual-loop IART is relatively common after heart surgery involving a right atriotomy. Abrupt tachycardia change and specific entrainment mapping maneuvers demonstrate these circuits. Electroanatomic mapping appears to be important to assist catheter ablation of periatriotomy circuits.
Abstract:
BACKGROUND: In contrast to hypnosis, there is no surrogate parameter for analgesia in anesthetized patients. Opioids are titrated to suppress blood pressure response to noxious stimulation. The authors evaluated a novel model predictive controller for closed-loop administration of alfentanil using mean arterial blood pressure and predicted plasma alfentanil concentration (Cp Alf) as input parameters. METHODS: The authors studied 13 healthy patients scheduled to undergo minor lumbar and cervical spine surgery. After induction with propofol, alfentanil, and mivacurium and tracheal intubation, isoflurane was titrated to maintain the Bispectral Index at 55 (+/- 5), and the alfentanil administration was switched from manual to closed-loop control. The controller adjusted the alfentanil infusion rate to maintain the mean arterial blood pressure near the set-point (70 mmHg) while minimizing the Cp Alf toward the set-point plasma alfentanil concentration (Cp Alfref) (100 ng/ml). RESULTS: Two patients were excluded because of loss of arterial pressure signal and protocol violation. The alfentanil infusion was closed-loop controlled for a mean (SD) of 98.9 (1.5)% of presurgery time and 95.5 (4.3)% of surgery time. The mean (SD) end-tidal isoflurane concentrations were 0.78 (0.1) and 0.86 (0.1) vol%, the Cp Alf values were 122 (35) and 181 (58) ng/ml, and the Bispectral Index values were 51 (9) and 52 (4) before surgery and during surgery, respectively. The mean (SD) absolute deviations of mean arterial blood pressure were 7.6 (2.6) and 10.0 (4.2) mmHg (P = 0.262), and the median performance error, median absolute performance error, and wobble were 4.2 (6.2) and 8.8 (9.4)% (P = 0.002), 7.9 (3.8) and 11.8 (6.3)% (P = 0.129), and 14.5 (8.4) and 5.7 (1.2)% (P = 0.002) before surgery and during surgery, respectively. A post hoc simulation showed that the Cp Alfref decreased the predicted Cp Alf compared with mean arterial blood pressure alone. CONCLUSION: The authors' controller has a similar set-point precision as previous hypnotic controllers and provides adequate alfentanil dosing during surgery. It may help to standardize opioid dosing in research and may be a further step toward a multiple input-multiple output controller.
Abstract:
A push to reduce dependency on foreign energy and increase the use of renewable energy has many gas stations pumping ethanol-blended fuels. Recreational engines typically have less complex fuel management systems than those of the automotive sector. This prevents the engine from being able to adapt to different ethanol concentrations. Using ethanol-blended fuels in recreational engines therefore raises several consumer concerns, since engine performance and emissions are both affected. This research focused on assessing the impact of E22 on two-stroke and four-stroke snowmobiles. Three snowmobiles were used to determine the impact of E22 on snowmobile engines: a 2009 Arctic Cat Z1 Turbo with a closed-loop fuel injection system, a 2009 Yamaha Apex with an open-loop fuel injection system, and a 2010 Polaris Rush with an open-loop fuel injection system. A five-mode emissions test was conducted on each of the snowmobiles with E0 and E22 to determine the impact of the E22 fuel. All of the snowmobiles were left in stock form to assess the effect of E22 on snowmobiles currently on the trail. Brake-specific emissions of the snowmobiles running on E22 were compared to those obtained with the E0 fuel. Engine parameters such as exhaust gas temperature, fuel flow, and relative air-to-fuel ratio (λ) were also compared on all three snowmobiles. Combustion data were collected on the Polaris Rush using an AVL combustion analysis system in order to compare in-cylinder pressures, combustion duration, and the location of 50% mass fraction burn. E22 decreased total hydrocarbons and carbon monoxide for all of the snowmobiles and increased carbon dioxide. Peak power increased for the closed-loop fuel-injected Arctic Cat. A smaller increase in peak power was observed for the Polaris due to a partial ability of its fuel management system to adapt to ethanol. A decrease in peak power was observed for the open-loop fuel-injected Yamaha.
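The lean shift that penalizes the open-loop engines can be estimated with a simple blend calculation. The sketch below uses commonly quoted property values (assumed here, not measured in the study) for gasoline and ethanol to estimate the relative air-to-fuel ratio λ when an engine calibrated for stoichiometric E0 operation is run on E22 without fueling compensation.

# Assumed typical fuel properties (illustrative values, not from the study)
rho_gas, rho_eth = 0.74, 0.789        # kg/L
afr_gas, afr_eth = 14.7, 9.0          # stoichiometric air-fuel ratios (mass basis)

v_eth = 0.22                           # E22: 22% ethanol by volume
m_eth = v_eth * rho_eth
m_gas = (1.0 - v_eth) * rho_gas
x_eth = m_eth / (m_eth + m_gas)        # ethanol mass fraction of the blend

afr_blend = (1.0 - x_eth) * afr_gas + x_eth * afr_eth
# Open loop: air and fuel delivery are unchanged from the E0 calibration, so the
# delivered AFR stays near 14.7 while the stoichiometric AFR of the blend is lower.
lam = afr_gas / afr_blend
print(f"stoich AFR of E22 ~ {afr_blend:.1f}, lambda ~ {lam:.2f} "
      f"(about {100*(lam-1):.0f}% lean)")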
Abstract:
The U.S. Renewable Fuel Standard mandates that by 2022, 36 billion gallons of renewable fuels must be produced on a yearly basis. Ethanol production is capped at 15 billion gallons, meaning 21 billion gallons must come from different alternative fuel sources. A viable alternative for reaching the remainder of this mandate is iso-butanol. Unlike ethanol, iso-butanol does not phase separate when mixed with water, meaning it can be transported using traditional pipeline methods. Iso-butanol also has a lower oxygen content by mass, meaning it can displace more petroleum while maintaining the same oxygen concentration in the fuel blend. This research focused on studying the effects of low-level alcohol fuels on marine engine emissions to assess the possibility of using iso-butanol as a replacement for ethanol. Three marine engines were used in this study, representing a wide range of what is currently in service in the United States. Boats powered by two four-stroke engines and one two-stroke engine were tested in the tributaries of the Chesapeake Bay, near Annapolis, Maryland, over the course of two rounds of weeklong testing in May and September. The engines were tested using a standard test cycle, and emissions were sampled using constant-volume sampling techniques. Specific emissions for the two-stroke and four-stroke engines were compared to the baseline indolene tests. Because of the nature of the field testing, limited engine parameters were recorded; therefore, the engine parameters analyzed aside from emissions were the operating relative air-to-fuel ratio and engine speed. Emissions trends from the baseline test to each alcohol fuel for the four-stroke engines were consistent when analyzing a single round of testing. The same trends were not consistent when comparing separate rounds, because of uncontrolled weather conditions and because the four-stroke engines operate without fuel control feedback during full-load conditions. Emissions trends from the baseline test to each alcohol fuel for the two-stroke engine were consistent for all rounds of testing. This is due to the fact that the engine operates open-loop and does not provide fueling compensation when the fuel composition changes. Changes in emissions with respect to the baseline for iso-butanol were consistent with the changes for ethanol. It was determined that iso-butanol would be a viable replacement for ethanol.
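The oxygen-content argument can be made quantitative with elementary composition arithmetic. The sketch below (using assumed typical densities, independent of the study's own fuel specifications) compares the oxygen mass fractions of ethanol and iso-butanol and estimates the iso-butanol volume fraction that matches the oxygen content of an E10 blend.

# Oxygen mass fractions from molecular formulas
M_O, M_C, M_H = 15.999, 12.011, 1.008
M_eth = 2 * M_C + 6 * M_H + M_O        # ethanol, C2H6O
M_but = 4 * M_C + 10 * M_H + M_O       # iso-butanol, C4H10O
o_eth = M_O / M_eth                     # ~0.35
o_but = M_O / M_but                     # ~0.22

# Assumed densities (kg/L) for converting volume to mass fractions
rho_gas, rho_eth, rho_but = 0.74, 0.789, 0.802

# Oxygen mass fraction of E10 (10% ethanol by volume)
x_eth = 0.10 * rho_eth / (0.10 * rho_eth + 0.90 * rho_gas)
o_blend_target = x_eth * o_eth

# Volume fraction v of iso-butanol giving the same blend oxygen content:
# solve v*rho_but / (v*rho_but + (1-v)*rho_gas) * o_but = o_blend_target for v
x_but = o_blend_target / o_but                        # required iso-butanol mass fraction
v = x_but * rho_gas / (rho_but * (1 - x_but) + x_but * rho_gas)
print(f"O in ethanol: {o_eth:.1%}, in iso-butanol: {o_but:.1%}")
print(f"iso-butanol volume fraction matching E10 oxygen content: {v:.1%}")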