Abstract:
This paper aims to present, analyze, and solve the problems involved in translating La vita non è in ordine alfabetico by the Italian writer Andrea Bajani, a book whose alphabetically ordered chapter titles are bound to their meanings. This step is unavoidable for any translation of the book, and a preliminary pattern to follow is necessary. After the whole book has been translated, not only is a revision essential, but a restructuring and reorganization of the collection may also be required. What this thesis offers, therefore, is a scheme to start from, together with an analysis of the problems that may arise and a useful method for finding a solution.
Abstract:
Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius. The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides the ideal platform to test the potential and limits of today's simulation algorithms at finite temperature.
The results from a first scan at a constant zero-temperature pion mass of about 290~MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point. It can be studied by looking at the degeneracies of the correlation functions in scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
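For orientation, the connection between the form factor at small momentum transfer and the charge radius quoted above is the standard low-$Q^2$ expansion (given here in a common convention with spacelike $Q^2 = -q^2$; the thesis's exact conventions may differ):
$$ F_\pi(Q^2) = 1 - \frac{1}{6} \left\langle r_\pi^2 \right\rangle Q^2 + \mathcal{O}(Q^4) \;, \qquad \left\langle r_\pi^2 \right\rangle = -6 \left. \frac{\mathrm{d}F_\pi}{\mathrm{d}Q^2} \right|_{Q^2=0} \;, $$
which is why data very close to $Q^2=0$ enable a model-independent extraction of $\left\langle r_\pi^2 \right\rangle$.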
Abstract:
In many areas of industrial manufacturing, for example in the automotive industry, digital mock-ups are used so that the development of complex machines can be supported by computer systems as effectively as possible. Motion planning algorithms play an important role here in guaranteeing that these digital prototypes can be assembled without collisions. In recent decades, sampling-based methods have proven particularly successful for this task. They generate a large number of random configurations for the object to be installed or removed and use a collision detection mechanism to check each configuration for validity. Collision detection therefore plays an essential role in the design of efficient motion planning algorithms. One difficulty for this class of planners are so-called "narrow passages", which occur wherever the freedom of movement of the objects being planned for is severely restricted. In such regions it can be hard to find a sufficient number of collision-free samples, and more sophisticated techniques may then be necessary to achieve good algorithm performance.

This work consists of two parts. In the first part we investigate parallel collision detection algorithms. Since we target an application in sampling-based motion planners, we choose a problem setting in which we always test the same two objects for collision, but in a large number of different configurations. We implement and compare several methods that use bounding volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All described methods were parallelized across multiple CPU cores. In addition, we compare different CUDA kernels for performing BVH-based collision tests on the GPU. Besides different distributions of the work across the parallel GPU threads, we investigate the effect of different memory access patterns on the performance of the resulting algorithms. We further present a series of approximate collision tests based on the described methods. When lower test accuracy is tolerable, a further performance improvement can be achieved in this way.

In the second part of the work, we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with multiple narrow passages. The method operates in two phases. The basic idea is to conceptually allow small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner used in phase I is based on so-called Expansive Space Trees. In addition, we equipped the planner with a push-out operation that allows small collisions to be resolved, increasing efficiency in regions of restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests. This further reduces the accuracy of the first planning phase but also yields a further performance gain. The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighbourhood of the existing path.

We tested the described algorithm on a class of new, difficult metal puzzles, some of which feature several narrow passages. To our knowledge, no collection of comparably complex benchmarks is publicly available, nor did we find a description of comparably complex benchmarks in the motion planning literature.
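As a rough illustration of the sampling loop at the heart of such planners (a minimal sketch, not the thesis implementation; all names are hypothetical), with a tolerance parameter mimicking the approximate collision tests described above:

    import random

    def sample_free_configs(collision_check, bounds, n_samples, tolerance=0.0):
        """Draw random configurations and keep those the checker accepts.

        collision_check(q) -> penetration depth (0.0 means collision-free);
        a positive `tolerance` emulates the approximate phase-I tests,
        letting slightly colliding samples through for later repair.
        """
        free = []
        for _ in range(n_samples):
            q = tuple(random.uniform(lo, hi) for lo, hi in bounds)
            if collision_check(q) <= tolerance:
                free.append(q)
        return free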
Abstract:
This thesis treats the forward and inverse theory of transient eddy current problems. Transient excitation currents induce electromagnetic fields, which generate so-called eddy currents in conductive objects. In the case of slowly varying fields, this interaction can be described by the eddy current equation, an approximation to Maxwell's equations. It is a linear partial differential equation with non-smooth coefficient functions of mixed parabolic-elliptic type. The forward problem consists of determining the electric field as a distributional solution of the equation, given the excitation and the coefficient functions describing the surroundings. Conversely, the fields can be measured with measuring coils. The goal of the inverse problem is to extract from these measurements information about conductive objects, that is, about the coefficient function that describes them. In this thesis a variational solution theory is presented and the well-posedness of the equation is discussed. Building on this, the behaviour of the solution for vanishing conductivity is studied, and the linearizability of the equation without a conductive object in the direction of the appearance of a conductive object is shown. To regularize the equation, modifications are proposed that yield a fully parabolic or a fully elliptic problem, respectively. These are verified by showing convergence of the solutions. Finally, it is shown that under the assumption of otherwise homogeneous surrounding parameters, conductive objects can be uniquely localized from the measurements. For this purpose, the linear sampling method and the factorization method are applied.
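For orientation, in one standard formulation (a common form of the eddy current approximation; the exact formulation used in the thesis may differ), the equation in question reads
$$ \sigma \, \partial_t E + \nabla \times \left( \mu^{-1} \, \nabla \times E \right) = - \partial_t J_s \;, $$
where $E$ is the electric field, $\mu$ the magnetic permeability, $J_s$ the source current density, and $\sigma \geq 0$ the conductivity. Since $\sigma$ vanishes outside the conductive objects, the equation is parabolic where $\sigma > 0$ and elliptic where $\sigma = 0$, which is precisely the mixed type described above.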
Abstract:
The first chapter of this work aims to provide a brief overview of the history of our Universe, in the context of string theory and considering inflation as its possible application to cosmological problems. We then discuss type IIB string compactifications, introducing the study of the inflaton, a scalar field that is a candidate for describing the theory of inflation. The Large Volume Scenario (LVS) is studied in the second chapter, paying particular attention to the stabilisation of the Kähler moduli, which are four-dimensional gravitationally coupled scalar fields that parameterise the size of the extra dimensions. Moduli stabilisation is the process through which these particles acquire a mass and can become promising inflaton candidates. The third chapter is devoted to the study of Fibre Inflation, an interesting inflationary model derived within the context of LVS compactifications. The fourth chapter tries to extend the slow-roll region of the scalar potential by taking larger values of the field φ, with the purpose of studying in detail deviations of the cosmological observables which can better reproduce current experimental data. Finally, we present a slight modification of Fibre Inflation based on a different compactification manifold. This new model produces larger tensor modes with a spectral index in good agreement with the data released in February 2015 by the Planck satellite.
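The cosmological observables mentioned here are governed, at leading order, by the standard slow-roll parameters of the scalar potential $V(\varphi)$ (quoted for orientation; the thesis evaluates them for the specific Fibre Inflation potential):
$$ \epsilon = \frac{M_P^2}{2} \left( \frac{V'}{V} \right)^2 \;, \quad \eta = M_P^2 \, \frac{V''}{V} \;, \qquad n_s \simeq 1 - 6\epsilon + 2\eta \;, \quad r \simeq 16 \, \epsilon \;, $$
where $n_s$ is the spectral index and $r$ the tensor-to-scalar ratio; extending the slow-roll region thus translates directly into shifts of these observables.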
Abstract:
We performed 124 measurements of particulate matter (PM2.5) in 95 hospitality venues such as restaurants, bars, cafés, and a disco, which had differing smoking regulations. We evaluated the impact of spatial separation between smoking and non-smoking areas on mean PM2.5 concentration, taking relevant characteristics of the venue, such as the type of ventilation or the presence of additional PM2.5 sources, into account. We differentiated five smoking environments: (i) completely smoke-free location, (ii) non-smoking room spatially separated from a smoking room, (iii) non-smoking area with a smoking area located in the same room, (iv) smoking area with a non-smoking area located in the same room, and (v) smoking location, which could be either a room where smoking was allowed that was spatially separated from a non-smoking room, or a hospitality venue without smoking restrictions. In these five groups, the geometric mean PM2.5 levels were (i) 20.4, (ii) 43.9, (iii) 71.9, (iv) 110.4, and (v) 110.3 μg/m³, respectively. This study showed that even if non-smoking and smoking areas were spatially separated into two rooms, geometric mean PM2.5 levels in non-smoking rooms were considerably higher than in completely smoke-free hospitality venues. PRACTICAL IMPLICATIONS: PM2.5 levels are considerably increased in the non-smoking area if smoking is allowed anywhere in the same location. Even locating the smoking area in another room resulted in a more than doubling of the PM2.5 levels in the non-smoking room compared with venues where smoking was not allowed at all. In practice, spatial separation of rooms where smoking is allowed does not prevent exposure to environmental tobacco smoke in nearby non-smoking areas.
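The "more than doubling" in the practical implications can be read off directly from the reported geometric means for groups (i) and (ii):
$$ \frac{43.9\ \mu\mathrm{g/m^3}}{20.4\ \mu\mathrm{g/m^3}} \approx 2.2 \;. $$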
Abstract:
The goals of any treatment of cervical spine injuries are: return to maximum functional ability, a minimum of residual pain, decrease of any neurological deficit, a minimum of residual deformity, and prevention of further disability. The advantages of surgical treatment are the ability to achieve optimal reduction, immediate stability, direct decompression of the cord and the exiting roots, the need for only minimal external fixation, the possibility of early mobilisation, and clearly decreased nursing problems. There are several reasons why these goals can be reached better by anterior surgery. Usually the bony compression of the cord and roots comes from the front, so anterior decompression is usually the procedure of choice. Also, anterior stabilisation with a plate is usually simpler than posterior instrumentation. It needs to be stressed that closed reduction by traction can align the fractured spine and indirectly decompress the neural structures in about 70% of cases. The necessary weight is 2.5 kg per level of injury. In the upper cervical spine, the type 2 odontoid fracture is an indication for anterior surgery by direct screw fixation. C1/C2 joint dislocations or fractures, or certain odontoid fractures, can be treated with a fusion of the C1/C2 joint by anterior transarticular screw fixation. In the lower and middle cervical spine, anterior plating combined with an iliac crest or fibular strut graft is the procedure of choice; however, a solid graft can also be replaced by filled solid or expandable vertebral cages. The complication rate of this surgery is low when it is properly executed, and anterior surgery may only be contraindicated in case of a significant lesion or locked joints.
Abstract:
Although non-organic hearing losses are relatively rare, it is important to identify suspicious findings early in order to administer specific tests, such as objective measurements, and to provide specific counseling. In this retrospective study, we searched for findings that were specific to or typical of non-organic hearing losses. Patient records from a 6-year period (2003-2008) from the University ENT Department of Bern, Switzerland, were reviewed. In this period, 40 subjects were diagnosed with a non-organic hearing loss (22 children, ages 7-16, mean 10.6 years; 18 adults, ages 19-57, mean 39.7 years; 25 females and 15 males). Pure tone audiograms in children and adults showed predominantly sensorineural and frequency-independent hearing losses, mostly in the range of 40-60 dB. In all cases, objective measurements (otoacoustic emissions and/or auditory-evoked potentials) indicated normal or substantially better hearing thresholds than those found in pure tone audiometry. In nine subjects (22.5%; 2 children, 7 adults), hearing aids had been fitted before the first presentation at our center. Six children (27%) had a history of middle ear problems with a transient hearing loss, and 11 (50%) knew a person with a hearing loss. Two new and hitherto unreported findings emerged from the analysis: a small air-bone gap of 5-20 dB was typical for non-organic hearing losses, and speech audiometry might show considerably poorer results than expected from pure tone audiometry.
Abstract:
Magnetic resonance spectroscopy enables insight into the chemical composition of spinal cord tissue. However, spinal cord magnetic resonance spectroscopy has rarely been applied in clinical work due to technical challenges, including strong susceptibility changes in the region and the small cord diameter, which distort the lineshape and limit the attainable signal-to-noise ratio. Hence, extensive signal averaging is required, which increases the likelihood of static magnetic field changes caused by subject motion (respiration, swallowing), cord motion, and scanner-induced frequency drift. To avoid incoherent signal averaging, it would be ideal to perform frequency alignment of individual free induction decays before averaging. Unfortunately, this is not possible using the metabolite peaks because of their low signal-to-noise ratio. In this article, frequency alignment of individual free induction decays is demonstrated to improve spectral quality by using the high-signal-to-noise-ratio water peak from non-water-suppressed proton magnetic resonance spectroscopy via the metabolite cycling technique. Electrocardiography (ECG)-triggered point resolved spectroscopy (PRESS) localization was used for data acquisition, with metabolite cycling or water suppression for comparison. A significant improvement in the signal-to-noise ratio and a decrease in the Cramér-Rao lower bounds of all metabolites are attained by using metabolite cycling together with frequency alignment, as compared to water-suppressed spectra, in 13 healthy volunteers.
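To illustrate the frequency-alignment step (a minimal sketch only, not the pipeline used in the study; function and variable names are hypothetical), each FID's frequency offset can be estimated from the dominant water peak and removed by a complex phase ramp before averaging:

    import numpy as np

    def align_and_average(fids, dwell_time):
        """Align individual FIDs on the water peak, then average.

        fids: 2-D complex array, shape (n_averages, n_points), non-water-suppressed.
        dwell_time: sampling interval in seconds.
        """
        n = fids.shape[1]
        freqs = np.fft.fftfreq(n, d=dwell_time)          # Hz axis of the spectrum
        t = np.arange(n) * dwell_time
        aligned = []
        for fid in fids:
            spectrum = np.fft.fft(fid)
            df = freqs[np.argmax(np.abs(spectrum))]      # water dominates -> its offset
            aligned.append(fid * np.exp(-2j * np.pi * df * t))  # shift water to 0 Hz
        return np.mean(aligned, axis=0)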
Abstract:
A small proportion of individuals with non-specific low back pain (NSLBP) develop persistent problems. Up to 80% of the total costs for NSLBP are attributable to chronic NSLBP. Psychosocial factors have been reported to be important in the transition from acute to chronic NSLBP. Guidelines recommend the use of the Acute Low Back Pain Screening Questionnaire (ALBPSQ) and the Örebro Musculoskeletal Pain Screening Questionnaire (ÖMPSQ) to identify individuals at risk of developing persistent problems, such as long-term absence from work, persistent restriction in function, or persistent pain. These instruments can be used with a cutoff value, where patients with values above the threshold are further assessed with a more comprehensive examination.
Abstract:
Rumiana Stoilova (Bulgaria). Social Policy Facing the Problems of Youth Employment. Ms. Stoilova is a researcher at the Institute of Sociology in Sofia and worked on this project from October 1996 to September 1998. The project involved collecting both statistical and empirical data on the state of youth employment in Bulgaria, which was then compared with similar data from other European countries. One significant aspect was the parallel investigation of employment and unemployment, which took as a premise the continuity of professional experience, where unemployment is just a temporary condition caused by external and internal factors. These need to be studied and changed on a systematic basis so as to create a more favourable market situation and to improve the resources individuals have for improving their market opportunities. A second important aspect of the project was an analysis of the various entities active on the labour market, including government and private institutions, associations of unemployed persons, of employers, or of trade unions, all with their specific legal powers and interests, and of the problems in communication between them. The major trends in youth unemployment during the period studied include a high proportion of registered unemployed who are not eligible for social assistance, a lengthening of the average period of unemployment, an increase in the percentage of people who are unemployed for the first time, and an increasing percentage of those who are not eligible for assistance, particularly among newly registered young people. At the same time, the percentage of those for whom work has been found is rising, and during the last three years an increasing number of the unemployed have started some independent economic activity. Regional differences are also considerable and, in the case of the Haskovo region, represent a danger of losing the youngest generation, with resulting negative demographic effects. One major weakness of the existing institutional structure is the large scale of the black labour market, with clear negative implications for the young people drawn into it. The role of non-governmental organisations in providing support and information for the unemployed is growing, and the government has recently introduced special preferences for organisations offering jobs to unemployed persons. Social policy in the labour market has, however, been largely restricted to passive measures, mostly because of the risk that poverty poses to people continuously excluded from the labour market. Among the active measures taken, well over half are concerned with providing jobs for the unemployed, and there are very limited programmes for providing or improving qualifications. The nature of youth employment in Bulgaria can be seen in the influence of sustained structures (generation) and institutions (family and school). Ms. Stoilova studied the situation of the modern generation through a series of profiles, mostly those of continuously unemployed and self-employed persons, but also distinguishing between students and the unemployed, and between high school and university students. The different categories of young people were studied in separate mini-studies, and the survey was carried out in five towns in order to gather objective and subjective information on the state of the labour market in the different regions.
She conducted interviews with several hundred young people covering questions of family background, career plans, attitudes to the labour situation and government measures to deal with it, and such questions as independence, mobility, attitude to work, etc. The interviews with young people unemployed for a long period of time show the risk involved in starting work and its link with the dynamics of economic development. Their approval of structural reforms, of the financial restrictions connected with the introduction of a currency board, and of the inevitability of unemployment was largely declarative. The findings indicate that the continuously unemployed need practical knowledge and skills to "translate" the macroeconomic realities into concrete alternatives for individual work and initiative. The unemployed experience their exclusion from the labour market not only as a professional problem but also as an existential threat of poverty, forced mobility, and dependence on their parents' generation. Exclusion from the market of goods and services means more than just exercising restraint in consumption, as it places restrictions on their personal development. Ms. Stoilova suggests that more efficient ways of providing financial aid and mobilisation are needed to counteract the social disintegration and marginalisation of the continuously unemployed. In measuring the speed of reform, university students took both employment opportunities and the implementation of the meritocratic principle in employment into account. When offered a hypothetical choice between a well-paid job and work in one's own profession, 62% would opt for the well-paid job, and for working for a company that offered career opportunities rather than employment in a family business or a company of their own. While most see the information gained during their studies as useful and interesting, relatively few see their education as competitive on a wider level, and many were pessimistic about employment opportunities based on their qualifications. Very similar attitudes were found among high school students, with differences being due rather to family and personal situations. The unemployed, on the other hand, placed greater emphasis on the possibilities of gaining or improving qualifications on a job and on the opportunities it would offer for personal contacts. High school students tend to attribute more significance to opportunities for personal accomplishment. A significant difference was that five times fewer high school students were willing to work for state-owned companies, and many fewer expected to find permanent employment or to find a job in the area where they lived. Within the family situation, actual support for children seems to be higher than the feelings of confidence expressed in interviews. The attitudes of families towards past experience seem to be linked with their ability to cope with the difficulties of the present, with those families which show an optimistic and active attitude towards the future having greater respect for the parents' experience and tolerance in communication between parents and children.
Abstract:
Tracking or target localization is used in a wide range of important tasks, from knowing when your flight will arrive to ensuring your mail is received on time. Tracking provides the location of resources, enabling solutions to complex logistical problems. Wireless Sensor Networks (WSN) create new opportunities when applied to tracking, such as more flexible deployment and real-time information. When radar is used as the sensing element in a tracking WSN, better results can be obtained, because radar has a comparatively larger range, both in distance and angle, than other sensors commonly used in WSNs. This allows fewer nodes to be deployed to cover larger areas, saving money. In this report I implement a tracking WSN platform similar to the one developed by Lim, Wang, and Terzis. It consists of several sensor nodes, each with a radar, a sink node connected to a host PC, and a Matlab program to fuse sensor data. I have re-implemented their experiment with my WSN platform for tracking a non-cooperative target to verify their results, and have also run simulations for comparison. The results of these tests are discussed and some future improvements are proposed.
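As a rough sketch of the kind of data fusion such a program performs (illustrative only; the assumption that each radar node reports a range estimate, and all names below, are mine rather than those of Lim, Wang, and Terzis), a least-squares position estimate in Python:

    import numpy as np

    def trilaterate(node_positions, ranges):
        """Least-squares 2-D target position from node range measurements.

        Linearizes ||x - p_i||^2 = r_i^2 against the first node, giving
        an overdetermined linear system A x = b solved by least squares.
        Requires at least three non-collinear nodes.
        """
        p = np.asarray(node_positions, dtype=float)   # shape (n, 2)
        r = np.asarray(ranges, dtype=float)
        A = 2.0 * (p[1:] - p[0])
        b = (r[0]**2 - r[1:]**2) + np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x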
Abstract:
Hall thrusters have been under active development around the world since the 1960s. Thrusters using traditional propellants such as xenon have been flown on a variety of satellite orbit-raising and maintenance missions with an excellent record. To expand the mission envelope, it is necessary to lower the specific impulse of the thrusters, but xenon and krypton are poor performers at specific impulses below 1,200 seconds. To enhance low-specific-impulse performance, this dissertation examines the development of a Hall-effect thruster which uses bismuth as a propellant. Bismuth, the heaviest non-radioactive element, holds many advantages over noble gas propellants from an energetics as well as a practical economic standpoint. Low ionization energy, a large electron-impact cross-section, and high atomic mass make bismuth ideal for low-specific-impulse applications. The primary disadvantage lies in the high temperatures required to generate the bismuth vapors. Previous efforts carried out in the Soviet Union relied upon complete bismuth vaporization and gas-phase delivery to the anode. While this proved successful, the power required to vaporize the bismuth and maintain the gas phase throughout the mass flow system quickly removed many of the efficiency gains expected from using bismuth. To solve these problems, a unique method of delivering liquid bismuth to the anode has been developed. Bismuth is contained within a hollow anode reservoir that is capped by a porous metallic disc. By utilizing the inherent waste heat generated in a Hall thruster, liquid bismuth is evaporated and the vapors pass through the porous disc into the discharge chamber. Due to the high temperatures and material compatibility requirements, the anode was fabricated out of pure molybdenum. The porous vaporizer was not available commercially, so a method of creating a refractory porous plate with 40-50% open porosity was developed. Molybdenum also does not respond well to most forms of welding, so a diffusion bonding process was developed to join the molybdenum porous disc to the molybdenum anode. Operation of the direct-evaporation bismuth Hall thruster revealed interesting phenomena. When the discharge power supply is run in constant-current mode, the discharge voltage settles to a stable operating point which is a function of discharge current, anode face area, and average pore size of the vaporizer. Oscillations with a 40-second period were also observed. Preliminary performance data suggest that the direct-evaporation bismuth Hall thruster performs similarly to xenon and krypton Hall thrusters. Plume interrogation with a retarding potential analyzer confirmed that bismuth ions were being efficiently accelerated, while Faraday probe data gave a view of the ion density in the exhaust plume.
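The energetics argument for a heavy propellant at low specific impulse can be seen from the standard idealized relations for singly charged ions accelerated through the discharge voltage $V_d$ (real thrusters include additional efficiency factors):
$$ v_{ex} = \sqrt{\frac{2\,e\,V_d}{m_i}} \;, \qquad I_{sp} = \frac{v_{ex}}{g_0} \;, $$
so at fixed voltage the specific impulse scales as $1/\sqrt{m_i}$: bismuth ($m_i \approx 209\,\mathrm{u}$) reaches a given low $I_{sp}$ at a higher, more efficient discharge voltage than xenon ($m_i \approx 131\,\mathrm{u}$).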
Abstract:
Mower is a micro-architecture technique which targets branch misprediction penalties in superscalar processors. It speeds up the misprediction recovery process by dynamically evicting stale instructions and fixing the RAT (Register Alias Table) using explicit branch dependency tracking. Branch dependencies are tracked using simple bit matrices. This low-overhead technique allows the recovery process to overlap with instruction fetching, renaming, and scheduling from the correct path. Our evaluation of the mechanism indicates that it yields performance very close to ideal recovery and provides up to 5% speed-up and a 2% reduction in power consumption compared to a traditional recovery mechanism using a reorder buffer and a walker. The simplicity of the mechanism should permit easy implementation of Mower in an actual processor.
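To illustrate the explicit branch-dependency tracking that Mower's bit matrices implement (a behavioral sketch in Python only, not the hardware design; all structure names are hypothetical):

    class RecoverySketch:
        """Each in-flight instruction carries a bitmask of the unresolved
        branches it depends on; a misprediction of branch b evicts exactly
        the instructions whose mask has bit b set."""

        def __init__(self):
            self.window = []   # list of (instr, branch_mask) in program order

        def dispatch(self, instr, unresolved_branches):
            # At rename time, record which pending branches this instruction
            # is control-dependent on.
            mask = 0
            for b in unresolved_branches:
                mask |= 1 << b
            self.window.append((instr, mask))

        def mispredict(self, b):
            # Evict only the stale instructions; survivors keep executing,
            # so recovery overlaps with fetch/rename from the correct path.
            self.window = [(i, m) for i, m in self.window if not (m >> b) & 1]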