963 results for Direct access sensor


Relevance: 80.00%

Abstract:

In July 2009, an experiment was carried out for the first time at the Mainz Microtron (MAMI) in which a polarized 3He target was probed with photons in the energy range from 200 to 800 MeV. The goal of this experiment was to test the Gerasimov-Drell-Hearn (GDH) sum rule on the neutron. Owing to the spin structure of 3He, the data obtained with the polarized 3He target provide, in comparison with the existing deuteron data, a complementary and more direct access to the neutron. The total helicity-dependent photoabsorption cross section was measured with an energy-tagged beam of circularly polarized photons incident on the longitudinally polarized 3He target. The reaction products were detected with the Crystal Ball (4π solid-angle coverage), TAPS (as a forward wall), and a threshold Cherenkov detector (online veto for the reduction of electromagnetic events). The planning and construction of the various components of the 3He experimental setup was a central part of this dissertation and is described in detail in this thesis. The detector system and the analysis methods were tested by measuring the unpolarized, total, inclusive photoabsorption cross section on liquid hydrogen; the results showed good agreement with previously published data. Preliminary results for the unpolarized total photoabsorption cross section and for the helicity-dependent difference of the photoabsorption cross sections on 3He are presented and compared with various theoretical models.
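
For reference, a standard statement of the Gerasimov-Drell-Hearn sum rule that such measurements are designed to test (quoted here as the textbook relation, not from the thesis itself) links the helicity-dependent total photoabsorption cross sections to static properties of the target:

\[ \int_{\nu_{0}}^{\infty} \frac{\sigma_{3/2}(\nu)-\sigma_{1/2}(\nu)}{\nu}\, d\nu \;=\; \frac{2\pi^{2}\alpha}{m^{2}}\,\kappa^{2}, \]

where \(\sigma_{3/2}\) and \(\sigma_{1/2}\) are the total photoabsorption cross sections for the two relative spin orientations of photon and target, \(\nu_{0}\) is the photoabsorption threshold, \(m\) the target mass, and \(\kappa\) its anomalous magnetic moment. Measuring the helicity-dependent cross-section difference on 3He, and unfolding nuclear effects, is what gives the complementary access to the neutron mentioned above.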

Relevance: 80.00%

Abstract:

Organic charge-transfer systems exhibit a variety of competing interactions between charge, spin, and lattice degrees of freedom. This leads to interesting physical properties such as metallic conductivity, superconductivity, and magnetism. This dissertation deals with the electronic structure of organic charge-transfer salts from three material families, studied with different photoemission and X-ray spectroscopy techniques. Some of the investigated molecules were synthesized at the MPI for Polymer Research. They belong to the coronene family (donor hexamethoxycoronene, HMC, and acceptor coronene hexaone, COHON) and the pyrene family (donors tetra- and hexamethoxypyrene, TMP and HMP), in complexes with the classic strong acceptor tetracyanoquinodimethane (TCNQ). As a third family, charge-transfer salts of the k-(BEDT-TTF)2X family (X a monovalent anion) were investigated. These materials lie close to a bandwidth-controlled Mott transition in the phase diagram.

For studies by ultraviolet photoelectron spectroscopy (UPS), thin films were prepared by UHV deposition using a new double evaporator developed specifically for milligram quantities of material. With this method, energetic shifts of valence states in the range of a few hundred meV were observed in the charge-transfer complex relative to the pure donor and acceptor species. An important aspect of the UPS measurements was the direct comparison with ab initio calculations.

The problem of unavoidable surface contamination of solution-grown 3D crystals was overcome by hard X-ray photoelectron spectroscopy (HAXPES) at photon energies around 6 keV (at the PETRA III storage ring in Hamburg). The large mean free path of the photoelectrons, on the order of 15 nm, results in true bulk sensitivity. The first HAXPES experiments on charge-transfer complexes worldwide revealed large chemical shifts (several eV). In the compound HMPx-TCNQy, the N1s line is a fingerprint of the cyano group in TCNQ and shows a splitting and a shift to higher binding energies of up to 6 eV with increasing HMP content. Conversely, the O1s line is a fingerprint of the methoxy group in HMP and shows a marked splitting and a shift to lower binding energies (up to about 2.5 eV chemical shift), i.e. an order of magnitude larger than the shifts in the valence region.

As a further synchrotron-radiation-based technique, near-edge X-ray absorption fine structure (NEXAFS) spectroscopy was used extensively at the ANKA storage ring in Karlsruhe. The mean free path of the low-energy secondary electrons is around 5 nm. Strong intensity variations of specific pre-edge resonances (as a signature of the unoccupied density of states) directly reveal changes in the occupation numbers of the orbitals involved, in the immediate surroundings of the excited atom. This made it possible to identify precisely which orbitals participate in the charge-transfer mechanism. In the complex mentioned above, charge is transferred from the methoxy orbitals 2e(π*) and 6a1(σ*) to the cyano orbitals b3g and au(π*) and, to a lesser extent, to the b1g and b2u(σ*) orbitals of the cyano group. In addition, small energetic shifts with opposite signs for the donor and acceptor resonances occur, comparable to the shifts observed in UPS.
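
As a rough illustration of the probing depths quoted above (a generic attenuation estimate, not a calculation from the thesis): the photoelectron signal originating from depth z below the surface falls off approximately as

\[ I(z) \propto \exp\!\left(-\frac{z}{\lambda\cos\theta}\right), \]

with \(\lambda\) the electron mean free path and \(\theta\) the emission angle from the surface normal. About 63% of the detected intensity then comes from depths \(z<\lambda\) and roughly 95% from \(z<3\lambda\), so \(\lambda \approx 15\) nm at 6 keV makes HAXPES genuinely bulk sensitive, whereas a mean free path of a few nm (as for the low-energy secondary electrons detected in NEXAFS) restricts the probe to the near-surface region.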

Relevance: 80.00%

Abstract:

The possibility of measuring several polarization degrees of freedom simultaneously in quasi-elastic electron scattering off $^3\mathrm{He}$ offers a new experimental access to small but important partial-wave contributions ($S'$, $D$ wave) of the $^3\mathrm{He}$ ground state. This not only allows a deeper understanding of the three-body system, but also provides insight into the structure and dynamics of $^3\mathrm{He}$. This information can be used to test ab initio calculations and to compute corrections needed for other experiments (e.g. the measurement of $G_{en}$).

Modern Faddeev calculations provide not only a quantitative description of the $^3\mathrm{He}$ ground state but also insight into the so-called spin-dependent momentum distributions. A thorough experimental investigation is needed in this context to provide a solid basis for testing the theoretical models. A triple-polarization experiment delivers important data for this purpose and, in addition, allows one to investigate whether, using the method of "deuteron tagging", polarized $^3\mathrm{He}$ can be used as an effective polarized proton target.

The experiment presented here combines, for the first time, beam and target polarization with a measurement of the polarization of the outgoing proton. The experiment was carried out in the summer of 2007 at the three-spectrometer facility of the A1 collaboration at MAMI. The measurement was performed at a beam energy of $E=855\,\mathrm{MeV}$ at $q^2=-0.14\,(\mathrm{GeV}/c)^2$ ($\omega=0.13\,\mathrm{GeV}$, $q=0.4\,\mathrm{GeV}/c$).

The extracted cross sections as well as the beam-target and triple asymmetries are compared with theoretical model calculations by J. Golak (plane wave impulse approximation, PWIA, as well as a model including final-state interaction). In addition, the model of de Forest was used, which computes the cross section from a measured spectral function. The comparison with the model calculations shows that the cross section as well as the double and triple asymmetries agree well with the theoretical calculations.

The results of this work confirm that polarized $^3\mathrm{He}$ can be used not only as a polarized neutron target but, by detecting the deuteron, also as a polarized proton target.
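
For orientation, the double-spin (beam-target) asymmetry referred to above is, in its standard generic form (not quoted from the thesis), a normalized helicity difference of cross sections,

\[ A \;=\; \frac{\sigma^{+}-\sigma^{-}}{\sigma^{+}+\sigma^{-}}, \]

where $\sigma^{\pm}$ are the cross sections for the two electron-beam helicity states at a fixed target spin orientation. Roughly speaking, the triple-polarization observable is built in the same way, but with the yields additionally sorted according to the polarization component of the outgoing proton measured in the recoil polarimeter.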

Relevance: 80.00%

Abstract:

This thesis investigates the structure of the nucleon as probed by the electromagnetic interaction. Among the most basic observables reflecting the electromagnetic structure of the nucleon are the form factors, which have been studied by means of elastic electron-proton scattering with ever increasing precision for several decades. In the timelike region, corresponding to proton-antiproton annihilation into an electron-positron pair, the present experimental information is much less accurate; however, high-precision form factor measurements are planned for the near future. About 50 years after the first pioneering measurements of the electromagnetic form factors, polarization experiments stirred up the field, since their results were in striking contradiction with the findings of previous form factor extractions from unpolarized measurements. Triggered by these conflicting results, a whole new field emerged that studies the influence of two-photon exchange corrections on elastic electron-proton scattering, two-photon exchange being the most likely explanation of the discrepancy. The main part of this thesis deals with theoretical studies of two-photon exchange, investigated particularly with regard to form factor measurements in the spacelike as well as the timelike region. An extraction of the two-photon amplitudes in the spacelike region through a combined analysis of unpolarized cross section measurements and polarization experiments is presented. Furthermore, predictions of the two-photon exchange effects on the e+p/e-p cross section ratio are given for several new experiments which are currently ongoing. The two-photon exchange corrections are also investigated in the timelike region, in the process pbar{p} -> e+ e-, by means of two factorization approaches; these corrections are found to be smaller than those obtained for the spacelike scattering process. The influence of the two-photon exchange corrections on cross section measurements, as well as on asymmetries that allow direct access to the two-photon exchange contribution, is discussed. Furthermore, one of the factorization approaches is applied to investigate two-boson exchange effects in parity-violating electron-proton scattering. In the last part of this work, the process pbar{p} -> pi0 e+ e- is analyzed with the aim of determining the form factors in the so-called unphysical timelike region below the two-nucleon production threshold. For this purpose, a phenomenological model is used which provides a good description of the available data on the real photoproduction process pbar{p} -> pi0 gamma.
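
As background for the discrepancy mentioned above, the standard one-photon-exchange relations (generic formulas, not specific to this thesis) show how the two classes of experiments access the form factors differently:

\[ \sigma_{R} \;=\; G_{M}^{2}(Q^{2}) + \frac{\varepsilon}{\tau}\,G_{E}^{2}(Q^{2}), \qquad \frac{G_{E}}{G_{M}} \;=\; -\frac{P_{t}}{P_{l}}\,\frac{E+E'}{2m}\,\tan\frac{\theta_{e}}{2}, \]

with \(\tau = Q^{2}/4m^{2}\) and \(\varepsilon\) the virtual-photon polarization parameter: the unpolarized (Rosenbluth) reduced cross section \(\sigma_{R}\) and the polarization-transfer ratio \(P_{t}/P_{l}\) determine \(G_{E}/G_{M}\) independently, and it is these two extractions that disagree at larger \(Q^{2}\). A two-photon-exchange amplitude modifies \(\sigma_{R}\) and, because it changes sign with the lepton charge, also drives the positron-to-electron ratio \(R_{2\gamma} = \sigma(e^{+}p)/\sigma(e^{-}p) \approx 1 - 2\delta_{2\gamma}\), which is why the ongoing e+p/e-p experiments referred to above give a direct test.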

Relevance: 80.00%

Abstract:

Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the world-wide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, proved to be a game changer for the efficiency of data analysis during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many scientific organizations and beyond. Cloud computing allows access to and use of large computing resources that are not owned by the user and are shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and in assessing whether they can provide a complementary approach, or even a valid alternative, to the existing Grid-based technological solutions. In the LHC community, several experiments have been adopting Cloud approaches, and the experience of the CMS experiment is of particular relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS, while other approaches to Cloud usage are being explored and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance in order to equip CMS with the capability to elastically and flexibly access and use the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources allocated dynamically as needed. Moreover, direct access to Cloud resources is presented as a suitable use case to address the needs of the CMS experiment. Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 reviews the Cloud approaches pursued and used within the CMS Collaboration. Chapter 4 and Chapter 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on a benchmark CMS physics use case is also demonstrated.
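
A minimal Python sketch of the kind of elastic-extension logic described above. The queue, provisioning, and termination calls are hypothetical placeholders (no specific CMS, batch-system, or cloud-provider API is implied); the point is only the scale-out/scale-in decision driven by the pending workload.

# Illustrative sketch: elastically extend a batch pool with cloud VMs
# based on the number of pending jobs. Everything prefixed with "fake_"
# is a hypothetical placeholder, not a real CMS or cloud API.

JOBS_PER_VM = 8          # assumed job slots provided by one VM
MAX_CLOUD_VMS = 100      # assumed budget/quota cap

def scaling_decision(pending_jobs: int, running_cloud_vms: int) -> int:
    """Return how many VMs to add (positive) or remove (negative)."""
    wanted = min(MAX_CLOUD_VMS, -(-pending_jobs // JOBS_PER_VM))  # ceiling division
    return wanted - running_cloud_vms

def elastic_loop(fake_queue, fake_cloud):
    pending = fake_queue.count_pending_jobs()      # hypothetical call
    running = fake_cloud.count_running_vms()       # hypothetical call
    delta = scaling_decision(pending, running)
    if delta > 0:
        fake_cloud.provision_vms(delta)            # scale out on demand
    elif delta < 0:
        fake_cloud.terminate_idle_vms(-delta)      # scale in when idle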

Relevance: 80.00%

Abstract:

Conclusion: A robot built specifically for stereotactic cochlear implantation provides equal or better accuracy together with better integration into a clinical environment, when compared with existing approaches based on industrial robots. Objectives: To evaluate the technical accuracy of a robotic system developed specifically for lateral skull base surgery in an experimental setup reflecting the intended clinical application. The invasiveness of cochlear electrode implantation procedures may be reduced by replacing the traditional mastoidectomy with a small tunnel slightly larger in diameter than the electrode itself. Methods: The end-to-end accuracy of the robot system and the associated image-guided procedure was evaluated on 15 temporal bones of whole-head cadaver specimens. The main steps of the procedure were as follows: reference screw placement, cone beam CT scan, computer-aided planning, pair-point matching of the surgical plan, robotic drilling of the direct access tunnel, and post-operative cone beam CT scan with accuracy assessment. Results: The mean accuracy at the target point (round window) was 0.56 ± 0.41 mm, with an angular misalignment of 0.88 ± 0.41°. The procedural time from the registration process through completion of the drilling was 25 ± 11 min. The robot was fully operational in a clinical environment.
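
The accuracy figures above are typically derived by comparing the planned trajectory with the drilled one identified in the post-operative CT. A small illustrative computation of the two reported metrics (hypothetical coordinates, not data from the study):

import numpy as np

# Planned vs. achieved target point (mm) and trajectory directions,
# e.g. as extracted from the plan and the post-operative cone beam CT.
planned_target  = np.array([12.3, -4.1, 30.2])   # hypothetical values
achieved_target = np.array([12.7, -3.8, 30.5])
planned_dir  = np.array([0.10, 0.20, 0.97])
achieved_dir = np.array([0.11, 0.21, 0.97])

# Target error: Euclidean distance between planned and achieved target.
target_error_mm = np.linalg.norm(achieved_target - planned_target)

# Angular misalignment: angle between the two (normalized) drilling axes.
u = planned_dir / np.linalg.norm(planned_dir)
v = achieved_dir / np.linalg.norm(achieved_dir)
angle_deg = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

print(f"target error = {target_error_mm:.2f} mm, misalignment = {angle_deg:.2f} deg")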

Relevance: 80.00%

Abstract:

11Beta-hydroxysteroid dehydrogenase type 1 (11beta-HSD1) is essential for the local activation of glucocorticoid receptors (GR). Unlike unliganded cytoplasmic GR, 11beta-HSD1 is an endoplasmic reticulum (ER)-membrane protein with lumenal orientation. Cortisone might gain direct access to 11beta-HSD1 by free diffusion across membranes, indirectly via intracellular binding proteins or, alternatively, by insertion into membranes. Membranous cortisol, formed by 11beta-HSD1 at the ER-lumenal side, might then activate cytoplasmic GR or bind to ER-lumenal secretory proteins. Compartmentalization of 11beta-HSD1 is important for its regulation by hexose-6-phosphate dehydrogenase (H6PDH), which regenerates cofactor NADPH in the ER lumen and stimulates oxoreductase activity. ER-lumenal orientation of 11beta-HSD1 is also essential for the metabolism of the alternative substrate 7-ketocholesterol (7KC), a major cholesterol oxidation product found in atherosclerotic plaques and taken up from processed cholesterol-rich food. An 11beta-HSD1 mutant adopting cytoplasmic orientation efficiently catalyzed the oxoreduction of cortisone but not 7KC, indicating access to cortisone from both sides of the ER-membrane but to 7KC only from the lumenal side. These aspects may be relevant for understanding the physiological role of 11beta-HSD1 and for developing therapeutic interventions to control glucocorticoid reactivation.

Relevance: 80.00%

Abstract:

OBJECTIVES: Pulmonary valve insufficiency remains a leading cause of reoperations in congenital cardiac surgery. The current percutaneous approach is limited by the size of the access vessel and by variable right ventricular outflow tract morphology. This study assesses the feasibility of transapical pulmonary valve replacement based on a new valved stent construction concept. METHODS: A new valved stent design was implanted off-pump, under continuous intracardiac echocardiographic and fluoroscopic guidance, into the native right ventricular outflow tract in 8 pigs (48.5 +/- 6.0 kg) through the right ventricular apex, and device function was studied by invasive and noninvasive measures. RESULTS: Procedural success was 100% at the first attempt. Procedural time was 75 +/- 15 minutes. All devices were delivered at the target site with good acute valve function. No valved stent dislodged. No animal had significant regurgitation or paravalvular leakage on intracardiac echocardiographic analysis. All animals had a competent tricuspid valve and no signs of right ventricular dysfunction. The planimetric valve orifice was 2.85 +/- 0.32 cm2. No damage to the pulmonary artery or structural defect of the valved stents was found at necropsy. CONCLUSIONS: This study confirms the feasibility of direct access valve replacement through the transapical procedure for replacement of the pulmonary valve, as well as the validity of the new valved stent design concept. The transapical procedure targets a broader patient pool, including very young and adult patients. The device design might not be restricted to failing conduits only and could allow for implantation in a larger patient population, including those with native right ventricular outflow tract configurations.

Relevance: 80.00%

Abstract:

Because Additive Manufacturing (AM) differs from traditional production processes, new possibilities arise in product design and in the supply chain setup. The consequences of lifting traditional restrictions in product design are intensively discussed under the term "Design for Additive Manufacturing". In the same way, AM removes restrictions in the traditional supply chain setup. In particular, the following improvements become possible: reduction of lot sizes and lead times, on-demand production, decentralized production, customization at the level of individual parts, and continuous further development of parts. Many companies do not invest in AM technologies themselves but buy parts from suppliers. To realize the potential of an AM supply chain with suppliers, the following requirements arise for AM procurement processes. First, the effort per order must be reduced. Second, AM users need direct access to the suppliers without a detour through the purchasing department. Third, it must be easy to identify suitable AM suppliers. Fourth, switching suppliers must be possible with as little effort as possible. AM-specific e-procurement systems are one possible way to meet these requirements.
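
To make the third requirement (easy identification of suitable AM suppliers) concrete, here is a toy Python sketch of capability-based supplier filtering as an e-procurement system might implement it; the data model and fields are invented for illustration, not taken from the abstract.

from dataclasses import dataclass

@dataclass
class Supplier:                      # illustrative record, not a real system
    name: str
    processes: set                   # e.g. {"SLS", "SLM", "FDM"}
    materials: set                   # e.g. {"PA12", "AlSi10Mg"}
    max_build_mm: tuple              # build envelope (x, y, z)
    lead_time_days: int

def suitable_suppliers(suppliers, process, material, part_size_mm, max_lead_time):
    """Filter suppliers by process, material, build envelope, and lead time."""
    return [
        s for s in suppliers
        if process in s.processes
        and material in s.materials
        and all(p <= b for p, b in zip(part_size_mm, s.max_build_mm))
        and s.lead_time_days <= max_lead_time
    ]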

Relevance: 80.00%

Abstract:

The effect of anions on the redox behavior and structure of 11-ferrocenyl-1-undecanethiol (FcC11) self-assembled monolayers (SAMs) on Au(1 1 1) single crystal and Au(1 1 1-25 nm) thin film electrodes was investigated in 0.1 M solutions of HPF6, HClO4, HBF4, HNO3, and H2SO4 by cyclic voltammetry (CV) and in situ surface-enhanced infrared reflection-absorption spectroscopy (SEIRAS). We demonstrate that the FcC11 redox peaks shift toward positive potentials and broaden with increasing hydrophilicity of the anions. SEIRAS provided direct access to the incorporation of anions into the oxidized adlayer. The coadsorption of anions is accompanied by the penetration of water molecules; the latter effect is particularly pronounced in aqueous HNO3 and H2SO4 electrolytes. The adlayer permeability increases with increasing hydrophilicity of the anions. We also found that even the neutral (reduced) FcC11 SAM is permeable to water molecules. Based on the property of interfacial water to reorient upon charge inversion, we propose a spectroscopic approach for estimating the potential of zero total charge of the FcC11-modified Au(1 1 1) electrodes in aqueous electrolytes.
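
As a small illustration of how the reported anion-induced peak shifts are quantified from cyclic voltammetry (a generic textbook relation with invented numbers, not the paper's data):

# Formal potential from CV peak positions, and the shift between electrolytes.
# Peak potentials below are invented placeholders for illustration only.

def formal_potential(E_pa, E_pc):
    """Midpoint of the anodic and cathodic peak potentials (V)."""
    return 0.5 * (E_pa + E_pc)

# Hypothetical FcC11 peak potentials (V vs. reference) in two electrolytes:
E0_hydrophobic = formal_potential(E_pa=0.32, E_pc=0.28)   # e.g. a PF6- solution
E0_hydrophilic = formal_potential(E_pa=0.45, E_pc=0.39)   # e.g. a sulfate solution

print(f"anion-induced positive shift ~ {(E0_hydrophilic - E0_hydrophobic) * 1000:.0f} mV")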

Relevance: 80.00%

Abstract:

BACKGROUND: Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic imaging results remains a challenge. We hypothesized that an electronic medical record (EMR) that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. METHODS: We studied critical imaging alert notifications in the outpatient setting of a tertiary care Department of Veterans Affairs facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (ie, health care practitioner/provider [HCP] opened the message for viewing) within 2 weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted HCPs to determine timely follow-up actions (eg, ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by HCPs analyzed predictors for 2 outcomes: lack of acknowledgment and lack of timely follow-up. RESULTS: Of 123 638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. CONCLUSIONS: Critical imaging results may not receive timely follow-up actions even when HCPs receive and read results in an advanced, integrated electronic medical record system. A multidisciplinary approach is needed to improve patient safety in this area.
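
For readers unfamiliar with the reported statistics, an odds ratio with its 95% confidence interval can be computed from a 2x2 table as in the short Python sketch below. The counts are invented for illustration; they are not the study's data, and the study's models additionally adjusted for clustering of alerts by HCP.

import math

# Hypothetical 2x2 table: rows = trainee vs. staff ordering HCP,
# columns = alert unacknowledged vs. acknowledged. Invented counts.
a, b = 40, 160    # trainee: unacknowledged, acknowledged
c, d = 30, 660    # staff:   unacknowledged, acknowledged

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)       # Wald standard error of ln(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")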

Relevance: 80.00%

Abstract:

Debuggers are crucial tools for developing object-oriented software systems, as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support, leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers, called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements, and show how our approach improves debugging by adapting the debugger to several domains.
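
A toy Python sketch of the core idea of pairing domain-specific debugging operations with activation predicates, so that the debugger can select appropriate operations at run time. This illustrates the concept only; it is not the Moldable Debugger's actual (Pharo/Smalltalk) API, and all names and calls are invented.

from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class DebuggingOperation:
    name: str
    applies_to: Callable[[Any], bool]   # activation predicate on the debug context
    run: Callable[[Any], None]          # domain-specific action

class MoldableStyleDebugger:
    """Selects, at run time, the operations applicable to the current context."""
    def __init__(self, operations: List[DebuggingOperation]):
        self.operations = operations

    def available_operations(self, context: Any) -> List[DebuggingOperation]:
        return [op for op in self.operations if op.applies_to(context)]

# Example: an operation that only activates when debugging a parser-domain object;
# ctx.resume_until is a hypothetical hook of the illustrative debug context.
step_to_next_token = DebuggingOperation(
    name="Step to next token",
    applies_to=lambda ctx: getattr(ctx, "domain", None) == "parser",
    run=lambda ctx: ctx.resume_until(lambda frame: frame.method == "nextToken"),
)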

Relevance: 80.00%

Abstract:

Context. The Rosetta encounter with comet 67P/Churyumov-Gerasimenko provides a unique opportunity for an in situ, up-close investigation of ion-neutral chemistry in the coma of a weakly outgassing comet far from the Sun. Aims. Observations of primary and secondary ions, together with modeling, are used to investigate the role of ion-neutral chemistry within the thin coma. Methods. Observations from late October through mid-December 2014 show the continuous presence of the solar wind 30 km from the comet nucleus. These and other observations indicate that there is no contact surface and that the solar wind has direct access to the nucleus. On several occasions during this time period, the Rosetta/ROSINA Double Focusing Mass Spectrometer measured the low-energy ion composition in the coma. Organic volatiles, water group ions and their breakup products (masses 14 through 19), CO+ and CO2+ (masses 28 and 44), and other mass peaks (at masses 26, 27, and possibly 30) were observed. Secondary ions include H3O+ and HCO+ (masses 19 and 29). These secondary ions indicate ion-neutral chemistry in the thin coma of the comet. A relatively simple model is constructed to account for the low H3O+/H2O+ and HCO+/CO+ ratios observed in a water-dominated coma. Results from this simple model are compared with results from models that include a more detailed chemical reaction network. Results. At low outgassing rates, predictions from the simple model agree with observations and with results from more complex models that include much more chemistry. At higher outgassing rates, the ion-neutral chemistry is still limited and high HCO+/CO+ ratios are predicted and observed. However, at higher outgassing rates, the model predicts high H3O+/H2O+ ratios while the observed ratios are often low. These low ratios may be the result of the highly heterogeneous nature of the coma, where CO and CO2 number densities can exceed that of water.
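
One plausible form of the kind of simple estimate referred to above (a generic sketch, not necessarily the exact model of the paper): a secondary ion such as H3O+ builds up as the primary ion drifts through the neutral coma, so to first order

\[ \frac{n(\mathrm{H_3O^+})}{n(\mathrm{H_2O^+})} \;\sim\; k\, n(\mathrm{H_2O})\, \frac{L}{u_i}, \qquad n(\mathrm{H_2O}) \;\simeq\; \frac{Q}{4\pi u_n r^{2}}, \]

where \(k\) is the rate coefficient of the ion-neutral reaction H2O+ + H2O -> H3O+ + OH, \(L\) the path length through the coma, \(u_i\) the ion drift speed, \(Q\) the water outgassing rate, \(u_n\) the neutral outflow speed, and \(r\) the cometocentric distance. The ratio therefore grows with the outgassing rate, which is why low observed H3O+/H2O+ ratios at higher activity point, as stated above, to a heterogeneous coma in which CO and CO2 rather than water can dominate locally.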

Relevance: 80.00%

Abstract:

This thesis presents a numerical study of reaction and diffusion phenomena in wall-coated heat-exchanger microreactors. Specifically, the interaction between an endothermic and an exothermic catalyst layer separated by an impermeable wall is studied to understand the inherent behavior of the system. Two modeling approaches are presented: the first assumes a constant thermal gradient and neglects the heat of reaction, while the second considers both catalyst layers and the reaction heat. Both studies found that thicker, more thermally insulating catalyst layers increase the effectiveness of the exothermic reaction by allowing reaction heat to accumulate, while thinner layers of the endothermic catalyst allow direct access of the reactant to higher wall temperatures.
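
The trade-off described above between layer thickness, diffusion, and reaction is commonly quantified with a Thiele modulus and an effectiveness factor for a coated catalyst layer. A standard isothermal, first-order slab form (a generic reference relation, not the specific coupled model of the thesis) is

\[ \phi = \delta \sqrt{\frac{k}{D_{\mathrm{eff}}}}, \qquad \eta = \frac{\tanh\phi}{\phi}, \]

where \(\delta\) is the catalyst-layer thickness, \(k\) the first-order rate constant, and \(D_{\mathrm{eff}}\) the effective diffusivity. Thin layers (\(\phi \ll 1\)) give \(\eta \to 1\) and the reactant reaches the full layer, while thick layers (large \(\phi\)) become transport limited; in the coupled case studied here the thermal coupling through the wall modifies this picture, since a thicker exothermic layer also retains more reaction heat.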

Relevance: 80.00%

Abstract:

This project is part of a line of work whose final goal is to optimize the energy consumed by a handheld multimedia device by applying feedback control techniques, based on a dynamic modification of the processor's operating frequency and supply voltage. The frequency and voltage modification is driven by feedback information about the power consumed by the device. This is a problem, since it is usually not possible to monitor the power consumption of this kind of device directly. For this reason a power consumption estimate is used instead, obtained from a prediction model: from the number of times certain events occur in the device's processor, the prediction model produces an estimate of the power consumed by the device. The work carried out in this project focuses on the implementation of a power estimation model in the Linux kernel. The estimation is implemented in the operating system, first, to gain direct access to the processor counters; second, to facilitate the frequency and voltage modification once the power estimate is available, since that modification is also performed from the operating system; and third, because the estimation must be independent of user applications. Moreover, the estimation process runs periodically, which would be difficult to achieve outside the operating system. Periodic estimation is essential because the intended frequency and voltage modification is dynamic, so the device's power consumption must be known at all times; in addition, the control algorithms have to be designed around a periodic actuation pattern. The power estimation model is specific to the consumption profile generated by a single application, in this case a video decoder. Nevertheless, it has to work as accurately as possible for each of the processor's operating frequencies and for as many video sequences as possible, because the successive power estimates are meant to drive the dynamic frequency modification, and the model must therefore keep producing estimates regardless of the frequency at which the device is running. To assess the precision of the estimation model, measurements of the power consumed by the device at the different operating frequencies are taken while the video decoder is running. These measurements are compared with the power estimates obtained during the same runs, yielding the prediction error committed by the model and guiding the corresponding adjustments and refinements.
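
A minimal sketch of the kind of counter-based power model described above: a linear combination of per-period event counts (e.g. cycles, instructions, cache misses) whose coefficients are fitted against measured power. Coefficient values, event names, and numbers are invented for illustration in Python; the actual model, counters, and kernel integration are specific to the thesis.

# Illustrative linear power model driven by performance-counter deltas.
# Coefficients (watts per event) are invented placeholders; in practice
# they are fitted offline against measured power traces of the decoder.

COEFFS = {
    "cycles":       2.0e-9,
    "instructions": 1.1e-9,
    "l2_misses":    8.0e-8,
}
BASE_POWER_W = 0.35   # assumed static/idle contribution

def estimate_power(counter_deltas: dict) -> float:
    """Estimate the average power (W) over one sampling period."""
    return BASE_POWER_W + sum(
        COEFFS[name] * counter_deltas.get(name, 0) for name in COEFFS
    )

# Example: counts accumulated over one sampling period (invented values).
sample = {"cycles": 80_000_000, "instructions": 55_000_000, "l2_misses": 900_000}
print(f"estimated power ~ {estimate_power(sample):.2f} W")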