940 results for Pneumatic accelerator


Relevance: 10.00%

Abstract:

There may be advantages to be gained by combining Case-Based Reasoning (CBR) techniques with numerical models. In this paper we consider how CBR can be used as a flexible query engine to improve the usability of numerical models; in particular, it can help to solve inverse, mixed, and constraint problems. We discuss this idea with reference to the illustrative example of a pneumatic conveyor. We describe a model of the problem of particle degradation in such a conveyor, and the problems faced by design engineers. The solution of these problems requires a system that allows iterative sharing of control between user, CBR system, and numerical model. This multi-initiative interaction is illustrated for the pneumatic conveyor by means of Unified Modeling Language (UML) collaboration and sequence diagrams. We show approaches to the solution of these problems via a CBR tool.
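
To make the "flexible query engine" idea concrete, the following is a minimal, hypothetical sketch (not the authors' actual tool): stored cases pair conveyor design parameters with the degradation predicted by a numerical model, and an inverse query retrieves the nearest stored designs for a desired degradation level. All field names and numbers are invented for illustration.

```python
# Illustrative sketch only: a minimal nearest-neighbour CBR retrieval used as a
# flexible query engine over results of a numerical model. Field names and data
# are hypothetical; the paper's actual tool and conveyor model differ.
from dataclasses import dataclass

@dataclass
class Case:
    air_velocity: float      # m/s  (design input)
    bend_radius: float       # m    (design input)
    degradation: float       # %    (model output: particle degradation)

def retrieve(cases, target_degradation, k=3):
    """Inverse query: find stored designs whose simulated degradation is
    closest to the value the engineer wants to achieve."""
    return sorted(cases, key=lambda c: abs(c.degradation - target_degradation))[:k]

# A tiny case base, e.g. populated by previous runs of the numerical model.
case_base = [
    Case(18.0, 0.5, 12.0),
    Case(22.0, 0.5, 17.5),
    Case(18.0, 1.0, 9.0),
    Case(25.0, 1.0, 15.0),
]

for c in retrieve(case_base, target_degradation=10.0):
    print(c)
```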

Relevance: 10.00%

Abstract:

A 74-year-old man presented to our Emergency Department with acute dyspnoea. His electrocardiogram showed atrial flutter with 2:1 block and a rate of 150 bpm. Initial investigations revealed a D-dimer level of 6.01 mg/dl. Based on the patient’s complaints and the high D-dimer level, computed tomography pulmonary angiography was immediately performed. This showed no evidence of pulmonary embolism, but there were pneumatic changes in the right upper lung lobe. Antibiotic treatment with piperacillin/tazobactam was started, after which the patient’s condition improved. However, on the third day after admission he developed acute dyspnoea, diaphoresis and cardiopulmonary instability immediately after defecation. To promptly confirm our clinical suspicion of pulmonary embolism, transthoracic echocardiography was carried out. This demonstrated a worm-like, mobile mass in the right heart. The right ventricle was enlarged, and paradoxical septal motion was present, indicating right ventricular pressure overload. The systolic tricuspid valvular gradient was 56 mmHg. The patient was treated with thrombolysis, and his condition had improved greatly 3 hours later. After 10 days of hospitalization, the patient was discharged.

Relevance: 10.00%

Abstract:

Deep-sea ferromanganese nodules accumulate trace elements from seawater and underlying sediment porewaters during the growth of concentric mineral layers over millions of years. These trace elements have the potential to record past ocean geochemical conditions. The goal of this study was to determine whether Fe mineral alteration occurs and how the speciation of trace elements responds to alteration over ∼3.7 Ma of marine ferromanganese nodule (MFN) formation, a timeline constrained by estimates from 9Be/10Be concentrations in the nodule material. We determined Fe-bearing phases and Fe isotope composition in a South Pacific Gyre (SPG) nodule. Specifically, the distribution patterns and speciation of trace element uptake by these Fe phases were investigated. The time interval covered by the growth of our sample of the nodule was derived from 9Be/10Be accelerator mass spectrometry (AMS). The composition and distribution of major and trace elements were mapped at various spatial scales, using micro-X-ray fluorescence (μXRF), electron microprobe analysis (EMPA), and inductively coupled plasma mass spectrometry (ICP-MS). Fe phases were characterized by micro-extended X-ray absorption fine structure (μEXAFS) spectroscopy and micro-X-ray diffraction (μXRD). Speciation of Ti and V, associated with Fe, was measured using micro-X-ray absorption near edge structure (μXANES) spectroscopy. Iron isotope composition (δ56/54Fe) in subsamples of 1-3 mm increments along the radius of the nodule was determined with multiple-collector ICP-MS (MC-ICP-MS). The SPG nodule formed through primarily hydrogenous inputs at a rate of 4.0 ± 0.4 mm/Ma. The nodule exhibited a high diversity of Fe mineral phases: feroxyhyte (δ-FeOOH), goethite (α-FeOOH), lepidocrocite (γ-FeOOH), and poorly ordered ferrihydrite-like phases. These findings provide evidence that Fe oxyhydroxides within the nodule undergo alteration to more stable phases over millions of years. Trace Ti and V were spatially correlated with Fe and found to be adsorbed to Fe-bearing minerals. Ti/Fe and V/Fe ratios, and Ti and V speciation, did not vary along the nodule radius. The δ56/54Fe values, when averaged over sample increments representing 0.25 to 0.75 Ma, were homogeneous within uncertainty along the nodule radius, at -0.12 ± 0.07 ‰ (2 SD, n = 10). Our results indicate that the Fe isotope composition of the nodule remained constant during nodule growth and that mineral alteration did not affect the primary Fe isotope composition of the nodule. Furthermore, the average δ56/54Fe value of -0.12 ‰ that we find is consistent with Fe sourced from continental eolian particles (dust). Despite mineral alteration, the trace element partitioning of Ti and V, and the Fe isotope composition, do not appear to change within the sensitivity of our measurements. These findings suggest that Fe oxyhydroxides within hydrogenetic ferromanganese nodules are out of geochemical contact with seawater once they are covered by subsequent concentric mineral layers. Even though Fe-bearing minerals are altered, trace element ratios, speciation and Fe isotope composition are preserved within the nodule.
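
As a rough illustration of how a radiometric timeline of this kind can be turned into a growth rate, the sketch below applies simple 10Be decay arithmetic to invented surface and at-depth isotope ratios; the half-life is a literature value, and the actual AMS data reduction in the study is more involved.

```python
# Minimal sketch of deriving a growth rate from the decay of cosmogenic 10Be along
# a nodule radius. The isotope ratios are invented for illustration.
import math

T_HALF_10BE_MA = 1.39                    # 10Be half-life in Ma (literature value)
LAMBDA = math.log(2) / T_HALF_10BE_MA    # decay constant, 1/Ma

def age_ma(ratio_surface, ratio_at_depth):
    """Age of a layer from the decrease of its Be isotope ratio relative to the surface."""
    return math.log(ratio_surface / ratio_at_depth) / LAMBDA

# Hypothetical ratios at the nodule surface and 10 mm below it:
t = age_ma(1.0e-7, 1.7e-8)
print(f"layer age ≈ {t:.1f} Ma, growth rate ≈ {10.0 / t:.1f} mm/Ma")
```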

Relevance: 10.00%

Abstract:

The contamination of Japan after the Fukushima accident has been investigated mainly for volatile fission products, but only sparsely for actinides such as plutonium, of which only small releases were estimated for Fukushima. Plutonium is still omnipresent in the environment from previous atmospheric nuclear weapons tests. We investigated soil and plants sampled at different hot spots in Japan, searching for reactor-borne plutonium using its Pu-240/Pu-239 isotopic ratio. Using accelerator mass spectrometry, we clearly demonstrated the release of Pu from the Fukushima Daiichi power plant: while most samples contained only the radionuclide signature of fallout plutonium, at least one vegetation sample has an isotope ratio (0.381 ± 0.046) that evidences Pu originating from a nuclear reactor (Pu-239+240 activity concentration 0.49 Bq/kg). Plutonium content and isotope ratios differ considerably even between very close sampling locations, e.g. the soil and the plants growing on it. This strong localization indicates a particulate Pu release, which poses a high radiological risk if incorporated.
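
For illustration, the sketch below shows the kind of comparison implied by the abstract: a measured Pu-240/Pu-239 ratio is tested against the global-fallout baseline. The baseline value and its spread are assumed literature figures, not numbers taken from this study.

```python
# Illustrative check of whether a measured 240Pu/239Pu atom ratio is compatible with
# global fallout or points to a reactor source. The fallout baseline is a commonly
# cited literature figure used here as an assumption.
FALLOUT_RATIO = 0.18        # approximate global stratospheric fallout 240Pu/239Pu
FALLOUT_SPREAD = 0.02       # rough 1-sigma spread assumed for illustration

def source_hint(ratio, uncertainty):
    """Crude 2-sigma comparison of a measured ratio against the fallout baseline."""
    combined = (uncertainty**2 + FALLOUT_SPREAD**2) ** 0.5
    if abs(ratio - FALLOUT_RATIO) > 2 * combined:
        return "inconsistent with fallout -> reactor-borne Pu likely"
    return "consistent with global fallout"

# Vegetation sample from the abstract: 0.381 +/- 0.046
print(source_hint(0.381, 0.046))
```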

Relevance: 10.00%

Abstract:

Achalasia is a rare oesophageal disease accompanied by significant impairment of patients' quality of life. Its aetiology is not fully understood, and its main clinical features are dysphagia and regurgitation. Treatment of achalasia aims at functional and symptomatic relief by opening the lower oesophageal sphincter; laparoscopic myotomy is currently the technique of choice, while pneumatic dilation and botulinum toxin injection should be regarded as fallback techniques for selected cases. Objective: To evaluate the results of extended myotomy plus anterior partial Dor fundoplication as laparoscopic treatment of achalasia, compared with our previous experience with the standard technique. Materials and methods: Design: prospective, descriptive, longitudinal study. Setting: Hospital Latino, Cuenca, Ecuador. Patients and methods: From June 1992 to December 2011, 39 patients diagnosed with achalasia underwent surgical treatment by minimally invasive surgery. Age, previous symptoms, Stewart classification, duration of symptoms, operative technique and postoperative follow-up were studied. Results: 39 patients were operated on, with a mean age of 66 years (minimum 23, maximum 81). The presenting symptoms were dysphagia in 100%, regurgitation in 74.4%, weight loss in 71.8% and odynophagia in 28.2%. Symptom duration was less than 2 years in 48.7% (n=19), 2 to 4 years in 33.3% (n=13), 4 to 6 years in 12.8% (n=5), and 6 to 8 years in 5.1% (n=2). According to Stewart, patients were classified as I in 8% (n=3), II in 49% (n=19), III in 38% (n=15) and IV in 5% (n=2). The technique used was myotomy + Dor in 57% (n=22), extended myotomy + Dor in 20% (n=8), myotomy alone in 18% (n=7), and myotomy + Toupet in 5% (n=2). Follow-up was achieved in 75% of patients, with excellent results in 91% and good results in 9%. In the last eight cases, extended myotomy plus Dor fundoplication was performed, giving excellent short-term results. Conclusion: extended gastric myotomy improves the outcome of surgical therapy for achalasia without increasing the rate of abnormal gastro-oesophageal reflux when an anterior partial Dor fundoplication is added.

Relevance: 10.00%

Abstract:

Background: Radiotherapy represents an important treatment option for malignant neoplasms and has developed considerably over recent decades. This includes stereotactic radiosurgery (SRS), which is characterised by a single application of focused high radiation doses within a clearly defined time frame. SRS is of particular importance for the treatment of brain metastases. Research question: The aim of this HTA report is to provide a comprehensive overview of the current literature on the treatment of brain metastases, comparing radiosurgery as sole therapy or in combination with treatment alternatives with regard to medical effectiveness, safety and cost-effectiveness, as well as ethical, social and legal aspects. Methods: Relevant publications in German and English were identified via a structured database search and a hand search covering January 2002 to August 2007. The target population consists of patients with one or more brain metastases. Methodological quality was assessed according to criteria of evidence-based medicine (EBM). Results: Of a total of 1,495 hits, 15 studies met the medical inclusion criteria. Overall, study quality is severely limited and, with the exception of two randomised controlled trials (RCT) and two meta-analyses, only historical cohort studies were identified. The investigation of relevant endpoints is inconsistent. High-quality studies show that adding whole-brain radiotherapy (WBRT) to SRS, and adding SRS to WBRT, is associated with improved local tumour control and functional capability. However, only in comparison with WBRT alone does the combination of SRS and WBRT result in improved survival, and only for patients with single brain metastases, RPA class 1 (RPA = recursive partitioning analysis) and certain primary tumours. In both cases, treatment safety shows no clear differences between the intervention groups. Methodologically weaker studies find no clear differences between SRS and WBRT, SRS and neurosurgery (NC), or SRS and hypofractionated radiotherapy (HCSRT). Quality of life is not examined in any study. The database search identified 320 publications for the economic domain, of which five were used for the present Health Technology Assessment (HTA) report. The quality of these publications varies. Regarding the cost-effectiveness of the different device alternatives, and assuming equal effectiveness, the result depends strongly on the number of patients treated. If both device alternatives are used exclusively for SRS, there is evidence that the Gamma Knife may be less expensive; otherwise, it is very likely that the more flexible modified linear accelerator is less expensive. According to one HTA, the total costs for a Gamma Knife and a dedicated linear accelerator are approximately equal, while a modified linear accelerator is cheaper. No relevant publications were identified for ethical, legal and social questions. Discussion: Overall, both the quality and the quantity of the identified studies are severely limited.
It is nevertheless apparent that the prognosis of patients with brain metastases remains poor even with the most modern therapeutic options. Sufficiently strong evidence exists only for WBRT added to SRS and for SRS added to WBRT. A direct comparison of SRS and WBRT, SRS and NC, or SRS and HCSRT is not possible. The cost-effectiveness of the different device alternatives depends on the number of patients and the indications treated. For dedicated systems operating at full capacity, there is evidence that they may be less expensive; with flexible use, modified systems appear economically more advantageous. These statements rest on the unverified assumption of equal effectiveness of the alternatives. The treatment precision of the devices may influence the choice of device. No studies are yet available on newer device alternatives such as the CyberKnife. However, the economically advantageous high utilisation implies a limited number of devices in a given area, which may hamper equal access to this technology close to patients' homes. Conclusions: The combination of SRS and WBRT is associated with improved local tumour control and functional capability compared with either therapy alone. Only for patients with a single metastasis does this translate into a survival advantage. High-quality studies are needed to compare SRS directly with WBRT and NC. Furthermore, quality of life in particular should be considered in future studies. Regarding the type of device used, cost-effectiveness clearly depends on the achievable utilisation: high patient numbers favour specialised systems, while at lower patient numbers the flexibility of modified systems is advantageous. Further studies, for example on the CyberKnife, are desirable. Overall, the evidence base, particularly for the German healthcare system, is very poor.

Relevance: 10.00%

Abstract:

Soft robots are robots made mostly or completely of soft, deformable, or compliant materials. As humanoid robotic technology takes on a wider range of applications, it has become apparent that such robots could replace humans in dangerous environments. Current robotic hands designed for these environments are very difficult and costly to manufacture; a robotic hand with a simple architecture and cheap fabrication techniques is therefore needed. The goal of this thesis is to detail the design, fabrication, modeling, and testing of the SUR Hand, a soft, underactuated robotic hand designed to be cheaper and easier to manufacture than conventional hands while maintaining much of their dexterity and precision. This thesis details the design process for the soft pneumatic fingers, compliant palm, and flexible wrist. It also discusses a semi-empirical model for finger design and the creation and validation of grasping models.

Relevance: 10.00%

Abstract:

Crossing the Franco-Swiss border, the Large Hadron Collider (LHC), designed to collide 7 TeV proton beams, is the world's largest and most powerful particle accelerator, the operation of which was originally intended to commence in 2008. Unfortunately, due to an interconnect discontinuity in one of the main dipole circuit's 13 kA superconducting busbars, a catastrophic quench event occurred during initial magnet training, causing significant physical damage to the system. Furthermore, investigation into the cause found that such discontinuities were present not only in the circuit in question, but throughout the entire LHC. This prevented further magnet training and ultimately resulted in the maximum sustainable beam energy being limited to approximately half of the design nominal, 3.5-4 TeV, for the first three years of operation (Run 1, 2009-2012), and in a major consolidation campaign being scheduled for the first long shutdown (LS 1, 2012-2014). Throughout Run 1, a series of studies attempted to predict the number of post-installation training quenches still required to qualify each circuit to nominal-energy current levels. With predictions in excess of 80 quenches (each having a recovery time of 8-12+ hours) just to achieve 6.5 TeV, and close to 1000 quenches for 7 TeV, it was decided that for Run 2 all systems should at least be qualified for 6.5 TeV operation. However, even with all interconnect discontinuities scheduled to be repaired during LS 1, numerous other concerns regarding circuit stability arose: in particular, observations of erratic behaviour of magnet bypass diodes and of degradation of other potentially weak busbar sections, as well as of seemingly random millisecond spikes in beam losses, known as unidentified falling object (UFO) events, which, if they persist at 6.5 TeV, may eventually deposit sufficient energy to quench adjacent magnets. In light of the above, the thesis hypothesis states that, even with the observed issues, the LHC main dipole circuits can safely support and sustain near-nominal proton beam energies of at least 6.5 TeV. Research into minimising the risk of magnet training led to the development and implementation of a new qualification method, capable of providing conclusive evidence that all aspects of all circuits, other than the magnets and their internal joints, can safely withstand a quench event at near-nominal current levels, allowing magnet training to be carried out both systematically and without risk. This method has become known as the Copper Stabiliser Continuity Measurement (CSCM). Results were a success, with all circuits eventually being subjected to a full current decay from 6.5 TeV-equivalent current levels with no measurable damage occurring. Research into UFO events led to the development of a numerical model capable of simulating typical UFO events, reproducing the entire set of Run 1 measured event data and extrapolating to 6.5 TeV to predict the likelihood of UFO-induced magnet quenches. Results provided interesting insights into the phenomena involved and confirmed the possibility of UFO-induced magnet quenches. The model was also capable of predicting whether such events, if left unaccounted for, are likely to be commonplace, resulting in significant long-term issues for 6.5+ TeV operation.
Addressing the thesis hypothesis, the following written works detail the development and results of all CSCM qualification tests and subsequent magnet training, as well as the development and simulation results of both 4 TeV and 6.5 TeV UFO event modelling. The thesis concludes, post-LS 1, with the LHC successfully sustaining 6.5 TeV proton beams, but with UFO events, as predicted, causing magnet quenches that would not otherwise have occurred and standing at the forefront of system availability issues.

Relevance: 10.00%

Abstract:

Master's dissertation—Universidade de Brasília, Faculdade de Educação Física, Programa de Pós-Graduação Stricto Sensu em Educação Física, 2015.

Relevance: 10.00%

Abstract:

Master's dissertation—Universidade de Brasília, Faculdade Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2015.

Relevance: 10.00%

Abstract:

The development of activities in the oil and gas sector has been promoting the search for materials better suited to oilwell cementing operations. In the state of Rio Grande do Norte, the cement sheath integrity tends to fail during steam injection operations, which are necessary to increase oil recovery in reservoirs with heavy oil. Geopolymer is a material that can be used as an alternative cement. It has been used in the manufacturing of fireproof compounds, in the construction of structures, and for the containment of toxic or radioactive waste. Latex is widely used in Portland cement slurries and characteristically increases their compressive strength. Sodium tetraborate is used in dental cements as a retarder. The addition of this additive aims to improve the properties of geopolymeric slurries for oilwell cementing operations. The slurries studied consist of metakaolinite, potassium silicate, potassium hydroxide, non-ionic latex and sodium tetraborate. The properties evaluated were viscosity, compressive strength, thickening time, density and fluid loss control, at ambient temperature (27 °C) and at the cement specification temperature. The tests were carried out in accordance with the practical recommendations of the API RP 10B standard. The slurries with sodium tetraborate did not change their rheological properties, their mechanical properties or their density relative to the slurry with no additive. Increasing the concentration of sodium tetraborate increased the water loss at both temperatures studied. The best result obtained with the addition of sodium tetraborate was the thickening time, which was tripled. The addition of latex to the slurries studied diminished their rheological properties and density; however, at ambient temperature it increased their compressive strength and acted as an accelerator. Increasing the latex concentration increased the amount of water present, thereby diminishing the density of the slurries and increasing the water loss. From the results obtained, it was concluded that sodium tetraborate and non-ionic latex are promising additives for geopolymer slurries to be used in oilwell cementing operations.

Relevance: 10.00%

Abstract:

Recently, the interest of the automotive market in hybrid vehicles has increased due to more restrictive pollutant emission legislation and the need to reduce fossil fuel consumption, since this solution allows a consistent improvement of the vehicle's global efficiency. The term hybridization refers to the energy flow in the powertrain of a vehicle: a standard vehicle usually has only one energy source and one energy tank, whereas a hybrid vehicle has at least two energy sources. In most cases, the prime mover is an internal combustion engine (ICE), while the auxiliary energy source can be mechanical, electrical, pneumatic or hydraulic. The control unit of a hybrid vehicle is expected to use the ICE in high-efficiency operating zones and to shut it down when it is more convenient, while using the EMG at partial loads and for fast torque response during transients. However, the battery state of charge may limit such a strategy, which is why, in most cases, energy management strategies are based on state-of-charge (SOC) control. Several studies have been conducted on this topic and many different approaches have been illustrated. The purpose of this dissertation is to develop an online (usable on-board) control strategy in which the operating modes are defined using an instantaneous optimization method that minimizes the equivalent fuel consumption of a hybrid electric vehicle. The equivalent fuel consumption is calculated by taking into account the total energy used by the hybrid powertrain during the propulsion phases. The first section presents the characteristics of hybrid vehicles. The second chapter describes the global model, with a particular focus on the energy management strategies usable for the supervisory control of such a powertrain. The third chapter shows the performance of the implemented controller on a NEDC cycle compared with that obtained with the original control strategy.
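
The following is a hedged, minimal sketch of one instantaneous equivalent-consumption minimization step of the kind described above; the torque-split candidates, toy fuel and electrical models, and the equivalence factor are invented placeholders rather than the dissertation's calibration.

```python
# Minimal sketch of an instantaneous equivalent-consumption minimisation step.
# Maps, efficiencies and the equivalence factor are illustrative assumptions.
def ecms_step(torque_request, engine_fuel_rate, motor_power, s_factor=2.5,
              fuel_lhv=43e6, splits=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the engine/motor torque split that minimises equivalent fuel power [W]."""
    best_split, best_cost = None, float("inf")
    for u in splits:                        # u = fraction of torque from the motor
        t_mot = u * torque_request
        t_eng = (1.0 - u) * torque_request
        fuel_power = engine_fuel_rate(t_eng) * fuel_lhv    # chemical power from fuel
        batt_power = motor_power(t_mot)                    # electrical power drawn
        cost = fuel_power + s_factor * batt_power          # equivalent consumption
        if cost < best_cost:
            best_split, best_cost = u, cost
    return best_split, best_cost

# Toy models: fuel rate [kg/s] vs engine torque, electrical power [W] vs motor torque.
split, cost = ecms_step(
    torque_request=120.0,
    engine_fuel_rate=lambda t: 2.0e-6 * t + 5.0e-4,
    motor_power=lambda t: 80.0 * t,
)
print(f"chosen motor share: {split:.2f}, equivalent power: {cost/1e3:.1f} kW")
```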

Relevance: 10.00%

Abstract:

This thesis presents a study of the Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. The study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to predict future data access patterns - i.e. the so-called “popularity” of CMS datasets on the Grid - with a focus on specific data types. All CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and the distributed analysis system in particular sustains hundreds of users and applications submitted every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how these data are accessed, in terms of data types, hosting Tiers, and different time periods, gives valuable insight into storage occupancy over time and into the different access patterns, and ultimately allows suggested actions to be extracted from this information (e.g. targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power for future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, and also discusses the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, introduces their application in CMS, and discusses the results obtained by using this approach in the context of this thesis.
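
As an illustration of the supervised-classification idea, the sketch below trains a generic classifier on invented per-dataset access features to predict a binary "popular next period" label; the feature set, labels, and model choice are assumptions, not the thesis' actual pipeline.

```python
# Hedged, minimal sketch of a dataset "popularity" classifier. Features and labels
# are synthetic; the real study uses CMS monitoring data and a richer feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy features per dataset: [accesses last week, unique users, CPU hours, size in TB]
X = rng.random((500, 4)) * [1000, 50, 2000, 10]
# Toy label: "popular next week" if past accesses and users were both high.
y = ((X[:, 0] > 500) & (X[:, 1] > 25)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```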

Relevance: 10.00%

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved within constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts discussed in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop, loop rounds, etc. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off between these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
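
A short back-of-the-envelope sketch, based on Amdahl's law, of why profiling the hotspot first matters: the overall speedup is bounded by the fraction of runtime that the accelerator actually covers. The fractions and accelerator speedups below are illustrative, not the measured CODEC numbers from this work.

```python
# Amdahl's law: system-level speedup when only a fraction of runtime is accelerated.
def overall_speedup(hotspot_fraction, accelerator_speedup):
    """Speedup of the whole application given the accelerated hotspot's share of runtime."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accelerator_speedup)

for frac in (0.5, 0.8, 0.95):              # hypothetical hotspot shares of total runtime
    for acc in (5, 20):                     # hypothetical accelerator speedups
        print(f"hotspot {frac:.0%}, accelerator {acc}x -> "
              f"system {overall_speedup(frac, acc):.2f}x")
```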

Relevance: 10.00%

Abstract:

The electromagnetic form factors are the most fundamental observables that encode information about the internal structure of the nucleon. The electric ($G_{E}$) and magnetic ($G_{M}$) form factors contain information about the spatial distribution of the charge and magnetization inside the nucleon. A significant discrepancy exists between the Rosenbluth and the polarization transfer measurements of the electromagnetic form factors of the proton. One possible explanation for the discrepancy is the contribution of two-photon exchange (TPE) effects. Theoretical calculations estimating the magnitude of the TPE effect are highly model dependent, and only limited experimental evidence for such effects exists. Experimentally, the TPE effect can be measured by comparing the positron-proton elastic scattering cross section to that of the electron-proton, $R = \sigma(e^{+}p)/\sigma(e^{-}p)$. The ratio $R$ was measured over a wide range of kinematics, utilizing a 5.6 GeV primary electron beam produced by the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. This dissertation explored the dependence of $R$ on kinematic variables such as the squared four-momentum transfer ($Q^{2}$) and the virtual photon polarization parameter ($\varepsilon$). A mixed electron-positron beam was produced from the primary electron beam in experimental Hall B. The mixed beam was scattered from a liquid hydrogen (LH$_{2}$) target. Both the scattered lepton and the recoil proton were detected by the CEBAF Large Acceptance Spectrometer (CLAS), and the elastic events were identified using elastic scattering kinematics. This work extracted the $Q^{2}$ dependence of $R$ at high $\varepsilon$ ($\varepsilon > 0.8$) and the $\varepsilon$ dependence of $R$ at $\langle Q^{2} \rangle \approx 0.85$ GeV$^{2}$. In these kinematics, our data confirm the validity of the hadronic calculations of the TPE effect by Blunden, Melnitchouk, and Tjon. This hadronic TPE effect, with additional corrections contributed by higher excitations of the intermediate-state nucleon, largely reconciles the Rosenbluth and the polarization transfer measurements of the electromagnetic form factors.
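
For context, a commonly used leading-order relation from the TPE literature (with a convention-dependent sign, quoted here as background rather than as a result of this dissertation) connects the charge-asymmetric ratio to the fractional TPE correction $\delta_{2\gamma}$:

```latex
% e-/e+ elastic cross sections with the TPE interference term flipping sign with
% the lepton charge (one common sign convention):
\sigma(e^{\mp}p) \;\simeq\; \sigma_{1\gamma}\left(1 \pm \delta_{2\gamma}\right)
\qquad\Longrightarrow\qquad
R \;=\; \frac{\sigma(e^{+}p)}{\sigma(e^{-}p)} \;\simeq\; 1 - 2\,\delta_{2\gamma}
```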