993 results for Two-loop-calculations, LEP, ILC


Relevance: 30.00%

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications are identified using profiling tools; hardware acceleration yields significant performance improvements for highly mathematical calculations or frequently repeated functions. The performance of an SoC system can then be improved by accelerating the elements that incur performance overheads. The concepts presented in this study can readily be applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications such as performance, energy consumption, and resource costs are measured and analyzed, the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated against system requirements. (4) A system verification platform is designed based on the Integrated Circuit (IC) workflow, and hardware optimization techniques are used to achieve higher performance at lower resource cost. Experimental results show that the proposed hardware acceleration workflow is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
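
The payoff from accelerating a single hotspot follows directly from the fraction of runtime it consumes. A minimal sketch of this Amdahl's-law-style estimate (not from the thesis; the profile numbers below are hypothetical):

```python
def overall_speedup(hotspot_fraction: float, accel_speedup: float) -> float:
    """Amdahl's law: system speedup when only part of the runtime is accelerated."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

# Hypothetical profile: the hotspot accounts for 90% of cycles and the
# FPGA accelerator runs it 20x faster than the software version.
print(f"system speedup: {overall_speedup(0.90, 20.0):.2f}X")  # ~6.90X
```

Under these assumed numbers the whole system gains only about 6.9X from a 20X accelerator, which is why profiling for the dominant hotspot is the first step of the workflow.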

Relevance: 30.00%

Abstract:

The electromagnetic form factors are the most fundamental observables that encode information about the internal structure of the nucleon. The electric (G_E) and magnetic (G_M) form factors contain information about the spatial distribution of charge and magnetization inside the nucleon. A significant discrepancy exists between the Rosenbluth and polarization transfer measurements of the electromagnetic form factors of the proton. One possible explanation for the discrepancy is the contribution of two-photon exchange (TPE) effects. Theoretical calculations estimating the magnitude of the TPE effect are highly model dependent, and limited experimental evidence for such effects exists. Experimentally, the TPE effect can be measured by comparing the positron-proton elastic scattering cross section to that of the electron-proton, R = σ(e⁺p)/σ(e⁻p). The ratio R was measured over a wide range of kinematics, utilizing a 5.6 GeV primary electron beam produced by the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. This dissertation explored the dependence of R on kinematic variables such as the squared four-momentum transfer (Q²) and the virtual photon polarization parameter (ε). A mixed electron-positron beam was produced from the primary electron beam in experimental Hall B. The mixed beam was scattered from a liquid hydrogen (LH2) target. Both the scattered lepton and the recoil proton were detected by the CEBAF Large Acceptance Spectrometer (CLAS), and elastic events were identified using elastic scattering kinematics. This work extracted the Q² dependence of R at high ε (ε > 0.8) and the ε dependence of R at ⟨Q²⟩ ≈ 0.85 GeV². In these kinematics, our data confirm the validity of the hadronic calculations of the TPE effect by Blunden, Melnitchouk, and Tjon. This hadronic TPE effect, with additional corrections contributed by higher excitations of the intermediate-state nucleon, largely reconciles the Rosenbluth and polarization transfer measurements of the electromagnetic form factors.
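
For context, the Rosenbluth separation that TPE corrections modify extracts G_E and G_M from the linear ε dependence of the reduced elastic cross section (standard one-photon-exchange relation, not specific to this dissertation):

```latex
% Reduced elastic e-p cross section in the one-photon-exchange (Born)
% approximation; the Rosenbluth method fits the linear epsilon dependence
% to separate the two form factors.
\[
  \sigma_{\mathrm{red}} \;=\; \varepsilon\, G_E^2(Q^2) \;+\; \tau\, G_M^2(Q^2),
  \qquad \tau = \frac{Q^2}{4M_p^2}
\]
```

Because the two-photon-exchange interference term changes sign with the lepton charge, the ratio R = σ(e⁺p)/σ(e⁻p) isolates it directly.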

Relevance: 30.00%

Abstract:

This work proposes the use of a behavioral model of the hysteresis loop of the ferroelectric capacitor as a new alternative to the usually costly techniques for computing nonlinear functions in artificial neurons implemented on a reconfigurable hardware platform, in this case an FPGA device. The proposal was first validated by implementing Boolean logic through digital models of two artificial neurons: the Perceptron and a variation of the Integrate-and-Fire Spiking Neuron model, both using a digital model of the hysteresis loop of the ferroelectric capacitor as the basic nonlinear unit for computing the neuron outputs. Finally, the analog model of the ferroelectric capacitor was used with the goal of verifying its effectiveness and possibly reducing the number of logic elements needed when implementing the artificial neurons on an integrated circuit. The implementations were carried out as Simulink models and synthesized with the DSP Builder software from Altera Corporation.
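
As a rough illustration of the idea, and not the thesis model itself, the sketch below wires a perceptron to a history-dependent hysteresis-loop activation instead of a conventional step or sigmoid; the loop shape, coercive threshold and weights are all hypothetical:

```python
import numpy as np

class HysteresisNeuron:
    """Perceptron whose activation is a hysteresis loop: two shifted tanh
    branches, selected by whether the drive is rising or falling, mimicking
    the polarization memory of a ferroelectric capacitor."""

    def __init__(self, weights, bias, coercive=0.5):
        self.w = np.asarray(weights, dtype=float)
        self.b = float(bias)
        self.vc = coercive   # hypothetical coercive threshold of the loop
        self.prev = 0.0      # previous drive value, for branch selection

    def activate(self, v):
        shift = -self.vc if v >= self.prev else self.vc  # rising vs falling branch
        self.prev = v
        return np.tanh(4.0 * (v + shift))

    def __call__(self, x):
        return self.activate(self.w @ np.asarray(x, dtype=float) + self.b)

# AND-like gate with illustrative weights: the output is positive only for
# (1, 1) in this input order; as with any hysteretic element, the response
# is history-dependent.
neuron = HysteresisNeuron(weights=[1.0, 1.0], bias=-1.0)
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(float(neuron(x)), 3))
```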

Relevance: 30.00%

Abstract:

A heat loop suitable for the study of thermal fouling and its relationship to corrosion processes was designed, constructed and tested. The design adopted was an improvement over those used by such investigators as Hopkins and the Heat Transfer Research Institute, in that very low levels of fouling could be detected accurately, the heat transfer surface could be readily removed for examination, and the chemistry of the environment could be carefully monitored and controlled. In addition, an indirect method of electrical heating of the heat transfer surface was employed to eliminate the magnetic and electric effects which result when direct resistance heating is applied to a test section. The testing of the loop was done using a 316 stainless steel test section and a suspension of ferric oxide in water, in an attempt to duplicate the results obtained by Hopkins. Two types of thermal fouling resistance versus time curves were obtained: (i) an asymptotic fouling curve, similar to the fouling behaviour described by Kern and Seaton and other investigators, was the most frequent type obtained; thermal fouling occurred at a steadily decreasing rate before reaching a final asymptotic value. (ii) If an asymptotically fouled tube was cooled with rapid circulation for periods up to eight hours at zero heat flux and heating was then restarted, fouling recommenced at a high linear rate. The fouling results obtained were similar to, and in agreement with, the fouling behaviour reported previously by Hopkins, and it was possible to duplicate the previous results quite closely. This supports the contention of Hopkins that the observed fouling was due to a crevice corrosion process and not an artifact of the heat loop caused by electrical and magnetic effects influencing the fouling. The effects of Reynolds number and heat flux on the asymptotic fouling resistance were determined, and a single experiment to study the effect of oxygen concentration was carried out. The ferric oxide concentration for most of the fouling trials was standardized at 2400 ppm, and the ranges of Reynolds number and heat flux for the study were 11,000-29,500 and 89-121 kW/m², respectively.
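
The asymptotic behaviour referred to is conventionally described by the Kern-Seaton model, quoted here for context rather than taken from the thesis:

```latex
% Kern-Seaton model: deposition at a constant rate balanced by removal
% proportional to the deposit gives an exponential approach to an
% asymptotic fouling resistance R_f^*, with time constant tau.
\[
  R_f(t) \;=\; R_f^{*}\left(1 - e^{-t/\tau}\right)
\]
```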

Relevance: 30.00%

Abstract:

This dissertation consists of two independent musical compositions and an article detailing the process of the design and assembly of an electric guitar with particular emphasis on the carefully curated suite of embedded effects.

The first piece, 'Phase Locked Loop and Modulo Games', is scored for electric guitar and a single echo of equal volume less than a beat away. One could think of the piece as a 15-minute canon at the unison at the dotted eighth note (or at times the quarter or triplet-quarter); however, the compositional motivation is more about weaving a composite texture between the guitar and its echo that, while in theory extremely contrapuntal, is in actuality simply a single [superhuman] melodic line.

The second piece, 'The Dogma Loops', picks up a few compositional threads left by 'Phase Locked Loop' and weaves them into an entirely new tapestry. 'Phase Locked Loop' is motivated by the creation of a complex musical composite that is for the most part electronically transparent. 'The Dogma Loops' questions that same notion of composite electronic complexity by essentially asking a question: "what are the inputs to an interactive electronic system that create the most complex outputs via the simplest musical means possible?"

'The Dogma Loops' is scored for electric guitar (doubling on ukulele), violin and violoncello. All of the principal instruments except the ukulele require an electronic pickup. The work is in three sections played attacca: [Automation Games], [Point of Origin] and [Cloning Vectors].

The third and final component of the document is the article 'Finding Ibrida.' This article details the process of the design and assembly of an electric guitar with integrated effects, while also providing the deeper context (conceptual and technical) which motivated the efforts and informed the challenges of hybridizing the various technologies (tubes, transistors, digital effects and a microcontroller subsystem). The project was motivated by a desire for rigorous technical and hands-on engagement with analog signal processing as applied to the electric guitar. 'Finding Ibrida' explores sound, some myths and lore of guitar tech, and the history of electric guitar distortion and its culture of sonic exploration.

Relevance: 30.00%

Abstract:

Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation. Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope. In this work, we propose two methods for improving the efficiency of free energy calculations. First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower-variance paths. We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost. Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers. We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths while retaining the beneficial characteristics of CFT.

Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path. Additionally, our free energy estimator converges at a more consistent rate, and on average 1.8 times faster when path searching is enabled, even when the cost of path discovery and refinement is considered.
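
For reference, the standard theorem that pCrooks builds on relates the forward and reverse work distributions of a nonequilibrium protocol to the free energy difference (the pairwise variant itself is specific to this work and not reproduced here):

```latex
% Crooks fluctuation theorem: P_F and P_R are the work distributions of the
% forward and time-reversed protocols, and beta = 1/(k_B T).
\[
  \frac{P_F(W)}{P_R(-W)} \;=\; e^{\beta (W - \Delta F)}
\]
```

The name "pairwise Crooks" suggests applying this relation segment by segment between adjacent intermediates, which is consistent with the per-segment variance decomposition described above.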

Relevance: 30.00%

Abstract:

Gate-tunable two-dimensional (2D) material-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations with Poisson–Boltzmann equation methods to model penetration of the field effect through graphene in a metal–oxide–graphene–semiconductor (MOGS) QC, including quantifying the degree of "transparency" of the graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of the graphene quantum capacitance and the semiconductor capacitance, which allows us to predict a ranking of monolayer 2D materials according to their transparency to an electric displacement field, as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carrier is the electron. Our findings reveal a general picture of operation modes and design rules for 2D-material-based QCs.
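
The role of the graphene quantum capacitance in setting transparency can be seen in a simple series-capacitor estimate (a minimal sketch under textbook assumptions, not the paper's multiscale model: it uses the standard ideal-graphene C_q at 300 K, and the oxide and semiconductor capacitances are hypothetical fixed values):

```python
import numpy as np

e, kB, hbar = 1.602e-19, 1.381e-23, 1.055e-34   # SI constants
vF, T = 1.0e6, 300.0   # graphene Fermi velocity (m/s), temperature (K)

def c_quantum_graphene(mu_eV: float) -> float:
    """Ideal graphene quantum capacitance per unit area (F/m^2) at chemical
    potential mu, from the linear Dirac-band density of states."""
    mu = mu_eV * e
    return (2 * e**2 * kB * T / (np.pi * (hbar * vF)**2)
            * np.log(2 * (1 + np.cosh(mu / (kB * T)))))

c_ox, c_semi = 1.2e-2, 3.0e-3   # F/m^2; illustrative values only

for mu in (0.0, 0.1, 0.3):
    cq = c_quantum_graphene(mu)
    # A small C_q means graphene screens weakly, so the displacement field
    # penetrates to the semiconductor (the layer is "transparent").
    c_total = 1.0 / (1.0 / c_ox + 1.0 / cq + 1.0 / c_semi)
    print(f"mu = {mu:.2f} eV: C_q = {cq * 100:.3f} uF/cm^2, "
          f"stack C = {c_total * 100:.4f} uF/cm^2")
```

Near the Dirac point C_q drops below about 1 µF/cm², so most of the field reaches the semiconductor; away from it, graphene screens increasingly well, which is the nonlinear gate modulation described above.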

Relevance: 30.00%

Abstract:

We present the first U-series ages of corals from emergent marine deposits on the Canary Islands. Deposits at +20 m are 481 ± 39 ka, possibly correlative to marine isotope stage (MIS) 11, while those at +12 and +8 m are 120.5 ± 0.8 ka and 130.2 ± 0.8 ka, respectively, correlative to MIS 5.5. The ages, elevations, and uplift rates derived from the MIS 5.5 deposits on the Canary Islands allow calculation of hypothetical palaeo-sea levels during the MIS 11 high sea stand. Estimates indicate that the MIS 11 high sea stand was likely at least +9 m (relative to present sea level) and could have been as high as +24 m.
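
The palaeo-sea-level estimate implied here is a standard uplift correction, shown for orientation with generic symbols rather than the paper's notation:

```latex
% Uplift rate U from a dated MIS 5.5 shoreline (E = present elevation,
% SL = eustatic sea level at formation, t = age), extrapolated back to
% the MIS 11 deposit:
\[
  U = \frac{E_{5.5} - SL_{5.5}}{t_{5.5}},
  \qquad
  SL_{11} = E_{11} - U\, t_{11}
\]
% i.e. the MIS 11 palaeo-sea level is the deposit's present elevation
% minus the uplift accumulated over its age.
```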

Relevance: 30.00%

Abstract:

A two-stage approach to performing ab initio calculations on medium-sized and large molecules is described. The first stage is to perform SCF calculations on small molecules or molecular fragments using the OPIT program. This employs a small basis set of spherical and p-type Gaussian functions. The Gaussian functions can be identified very closely with atomic cores, bond pairs, lone pairs, etc. The position and exponent of any of the Gaussian functions can be varied by OPIT to produce a small but fully optimised basis set. The second stage is the molecular-fragments method. As an example, Gaussian exponents and distances are taken from an OPIT calculation on ethylene and used unchanged in a single SCF calculation on benzene. Approximate ab initio calculations of this type give much useful information and are often preferable to semi-empirical approaches, since the nature of the approximations involved is much better defined.
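
The exponent optimisation of the first stage can be illustrated on the simplest textbook case (a sketch, not OPIT itself): variationally optimising a single normalized s-type Gaussian for the hydrogen atom, where in atomic units E(α) = 3α/2 − 2√(2α/π):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def energy(alpha: float) -> float:
    """Variational energy (hartree) of H with one normalized Gaussian
    exp(-alpha r^2): <T> = 3*alpha/2, <V> = -2*sqrt(2*alpha/pi)."""
    return 1.5 * alpha - 2.0 * np.sqrt(2.0 * alpha / np.pi)

res = minimize_scalar(energy, bounds=(0.01, 5.0), method="bounded")
print(f"optimal exponent: alpha = {res.x:.4f}")       # analytic: 8/(9*pi) ~ 0.2829
print(f"variational energy: {res.fun:.4f} hartree")   # -0.4244 (exact: -0.5)
```

OPIT performs this kind of optimisation simultaneously over the positions and exponents of many Gaussians; the benzene calculation then reuses the converged ethylene parameters unchanged.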

Relevance: 30.00%

Abstract:

Turnip crinkle virus (TCV) and Pea enation mosaic virus (PEMV) are two positive (+)-strand RNA viruses that are used to investigate the regulation of translation and replication, owing to their small size and simple genomes. Both viruses contain cap-independent translation elements (CITEs) within their 3′ untranslated regions (UTRs) that fold into tRNA-shaped structures (TSS), according to nuclear magnetic resonance (NMR) and small-angle X-ray scattering (SAXS) analysis (TCV) and computational prediction (PEMV). Specifically, the TCV TSS can directly associate with ribosomes and participates in RNA-dependent RNA polymerase (RdRp) binding. The PEMV kissing-loop TSS (kl-TSS) can simultaneously bind to ribosomes and associate with the 5′ UTR of the viral genome. Mutational analysis and chemical structure probing methods provide great insight into the function and secondary structure of the two 3′ CITEs; however, the lack of 3-D structural information has limited our understanding of their functional dynamics. Here, I report the folding dynamics of the TCV TSS using optical tweezers (OT), a single-molecule technique. My study of the unfolding/folding pathways of the TCV TSS revealed an unexpected unfolding pathway, confirmed the presence of the Ψ3 and hairpin elements, and suggested an interconnection between the hairpins and pseudoknots. In addition, this study demonstrated the importance of the adjacent upstream adenylate-rich sequence for the formation of H4a/Ψ3, along with the contribution of magnesium to the stability of the TCV TSS. In my second project, I report on the structural analysis of the PEMV kl-TSS using NMR and SAXS. This study reconfirmed the base-pair pattern of the PEMV kl-TSS and the proposed interaction of the PEMV kl-TSS with its interacting partner, hairpin 5H2. The molecular envelope of the kl-TSS built from the SAXS analysis suggests that the kl-TSS has two functional conformations, one of which has a different shape from the previously predicted tRNA-shaped form. Along with applying biophysical methods to study the structural folding dynamics of RNAs, I have also developed a technique that improves the production of large quantities of recombinant RNAs in vivo for NMR study. In this project, I report using wild-type and mutant E. coli strains to produce cost-effective, site-specifically labeled, recombinant RNAs. The technique was validated with four representative RNAs of different sizes and complexity, producing milligram amounts of RNA. The benefit of using site-specifically labeled RNAs made in E. coli was demonstrated with several NMR techniques.

Relevance: 30.00%

Abstract:

Using the one-loop Coleman-Weinberg effective potential, we derive a general analytic expression for all the derivatives of the effective potential with respect to any number of classical scalar fields. The result is valid for a renormalisable theory in four dimensions with any number of scalars, fermions or gauge bosons, and corresponds to the zero-external-momentum contribution to a general one-loop diagram with N scalar external legs. We illustrate the use of the general result in two simple scalar singlet extensions of the Standard Model, obtaining the dominant contributions to the triple couplings of light scalar particles under the zero-external-momentum approximation.
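
For orientation, the standard MS-bar form of the potential being differentiated is reproduced below; conventions vary by scheme, so treat this as the generic textbook expression rather than the paper's exact equation:

```latex
% One-loop Coleman-Weinberg potential: m_i(phi) are field-dependent masses,
% n_i the degrees of freedom of species i (upper sign for bosons, lower
% for fermions), and c_i = 3/2 for scalars and fermions, 5/6 for gauge
% bosons in the MS-bar scheme.
\[
  V_1(\phi) \;=\; \sum_i (\pm)\, \frac{n_i}{64\pi^2}\,
  m_i^4(\phi)\left[\ln\frac{m_i^2(\phi)}{\mu^2} - c_i\right]
\]
```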

Relevance: 30.00%

Abstract:

Using asymptotic methods, we investigate whether discrete breathers are supported by a two-dimensional Fermi-Pasta-Ulam lattice. A scalar (one-component) two-dimensional Fermi-Pasta-Ulam lattice is shown to model the charge stored within an electrical transmission lattice. A third-order multiple-scale analysis in the semi-discrete limit fails, since at this order, the lattice equations reduce to the (2+1)-dimensional cubic nonlinear Schrödinger (NLS) equation which does not support stable soliton solutions for the breather envelope. We therefore extend the analysis to higher order and find a generalised (2+1)-dimensional NLS equation which incorporates higher order dispersive and nonlinear terms as perturbations. We find an ellipticity criterion for the wave numbers of the carrier wave. Numerical simulations suggest that both stationary and moving breathers are supported by the system. Calculations of the energy show the expected threshold behaviour whereby the energy of breathers does not go to zero with the amplitude; we find that the energy threshold is maximised by stationary breathers, and becomes arbitrarily small as the boundary of the domain of ellipticity is approached.
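
The third-order equation referred to is the standard cubic NLS; a generic form is shown here, with coefficients that in the paper depend on the lattice parameters and carrier wave numbers:

```latex
% (2+1)-dimensional cubic NLS for the slowly varying breather envelope
% A(X, Y, tau); D_1, D_2 and gamma are set by the carrier wave numbers.
\[
  i\,\frac{\partial A}{\partial \tau}
  + D_1 \frac{\partial^2 A}{\partial X^2}
  + D_2 \frac{\partial^2 A}{\partial Y^2}
  + \gamma\, |A|^2 A = 0
\]
% The ellipticity criterion mentioned in the text is the requirement that
% the dispersive part be elliptic, D_1 D_2 > 0.
```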

Relevance: 30.00%

Abstract:

One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "Super-Earths", planets with sizes intermediate to those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions, through two instrumentation projects.

The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
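
The signal underlying these trade-offs is the stellar reflex radial-velocity semi-amplitude, a standard Doppler relation included here for context:

```latex
% RV semi-amplitude of a star of mass M_* induced by a planet of mass m_p
% with orbital period P, inclination i and eccentricity e:
\[
  K \;=\; \left(\frac{2\pi G}{P}\right)^{1/3}
  \frac{m_p \sin i}{\left(M_* + m_p\right)^{2/3}}
  \frac{1}{\sqrt{1 - e^2}}
\]
% For the Earth-Sun system K is roughly 9 cm/s, which is why instrumental
% noise floors dominate the habitable-zone cases discussed above.
```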

We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system etendue, while keeping costs and time to deployment down. We present calculations of the expected planet yield, and data showing the system performance from our testing and development at Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at Mt. Hopkins observatory in Arizona.

The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves discovering and characterizing planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study the atmospheres, formation histories, and compositions of planets. Direct imaging is extremely challenging, as it requires a high-performance adaptive optics system to unblur the point-spread function of the parent star through the atmosphere, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common-path "speckle" aberrations that can overwhelm any planetary companions.

To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200" Hale telescope. It has two focal planes and two pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.

A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.

One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.

Relevance: 30.00%

Abstract:

Conventional vehicles create pollution problems, contribute to global warming, and depend on the depletion of high-density fuels. To address these problems, automotive companies and universities are researching hybrid electric vehicles, in which two different power devices are used to propel a vehicle. This research studies the development and testing of a dynamic model for the Prius 2010 Hybrid Synergy Drive (HSD), a power-split device. The device was modeled and integrated with a hybrid vehicle model. To add an electric-only mode for vehicle propulsion, the hybrid synergy drive was modified by adding a clutch to carrier 1. The performance of the integrated vehicle model was tested over the UDDS drive cycle using a rule-based control strategy. A dSPACE Hardware-in-the-Loop (HIL) simulator was used for the HIL simulation test. The HIL simulation results show that the integration of the developed HSD dynamic model with a hybrid vehicle model was successful: the HSD model was able to split power and decouple engine speed from vehicle speed in hybrid mode.
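
The power split rests on the planetary-gear speed constraint (the Willis relation). A minimal sketch with illustrative tooth counts, not the thesis's model parameters: the engine drives the carrier, MG1 the sun gear, and the ring gear is coupled to the wheels, so adjusting MG1 holds the engine at an efficient speed regardless of vehicle speed:

```python
def mg1_speed(engine_rpm: float, ring_rpm: float,
              n_sun: int = 30, n_ring: int = 78) -> float:
    """Willis equation for a planetary gear set:
    N_s*w_sun + N_r*w_ring = (N_s + N_r)*w_carrier.
    Tooth counts here are illustrative, not the actual Prius values."""
    return ((n_sun + n_ring) * engine_rpm - n_ring * ring_rpm) / n_sun

# Hold the engine at 2000 rpm while the ring (vehicle) speed varies:
for ring in (0.0, 1000.0, 2500.0):
    print(f"ring {ring:6.0f} rpm -> MG1 {mg1_speed(2000.0, ring):7.1f} rpm")
```

At zero vehicle speed MG1 must spin fast (7200 rpm with these tooth counts) to keep the engine turning, which is exactly the decoupling of engine speed from vehicle speed noted above.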
