942 results for Three-dimensional power
Abstract:
A two-phase, three-dimensional computational model of an intermediate-temperature (120–190°C) proton exchange membrane (PEM) fuel cell is presented. This represents the first attempt to model PEM fuel cells employing intermediate-temperature membranes, in this case phosphoric acid-doped polybenzimidazole (PBI). To date, mathematical modeling of PEM fuel cells has been restricted to low-temperature operation, especially cells employing Nafion® membranes, while research on PBI as an intermediate-temperature membrane has been solely at the experimental level. This work advances the state of the art in both fields of research. With a growing trend toward higher-temperature operation of PEM fuel cells, mathematical modeling of such systems is necessary to help hasten the development of the technology and highlight areas where research should be focused.
The mathematical model accounted for all the major transport and polarization processes occurring inside the fuel cell, including the two-phase phenomenon of gas dissolution in the polymer electrolyte. Results were presented for polarization performance, flux distributions, concentration variations in both the gaseous and aqueous phases, and temperature variations for various heat management strategies. The model predictions matched published experimental data well and were self-consistent.
The major finding of this research was that, due to the transport limitations imposed by the use of phosphoric acid as a doping agent, namely low solubility and diffusivity of dissolved gases and anion adsorption onto catalyst sites, catalyst utilization is very low (∼1–2%). Significant cost savings were predicted with the use of advanced catalyst deposition techniques that would greatly reduce the eventual thickness of the catalyst layer and thereby improve catalyst utilization. The model also predicted that an increase in power output on the order of 50% could be expected if alternative doping agents to phosphoric acid can be found that afford better transport properties of dissolved gases, reduce anion adsorption onto catalyst sites, and maintain stability and conductivity at elevated temperatures.
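Purely as an illustrative aside (and not the thesis' three-dimensional, two-phase model), the short Python sketch below assembles a zero-dimensional polarization curve from Tafel activation, ohmic, and concentration losses; every parameter value is an assumed placeholder rather than a fitted PBI property.

```python
import numpy as np

# Minimal polarization-curve sketch: cell voltage = open-circuit voltage
# minus activation, ohmic, and concentration losses. Values are assumptions.
R, T, F = 8.314, 433.15, 96485.0   # gas constant, 160 °C in kelvin, Faraday constant
E_oc   = 1.0      # open-circuit voltage, V (assumed)
i0     = 1e-4     # exchange current density, A/cm^2 (assumed)
alpha  = 0.5      # charge-transfer coefficient (assumed)
R_ohm  = 0.25     # area-specific ohmic resistance, ohm*cm^2 (assumed)
i_lim  = 1.2      # limiting current density, A/cm^2 (assumed)

i = np.linspace(1e-3, 1.1, 400)                              # current density sweep, A/cm^2
eta_act  = (R * T / (alpha * F)) * np.log(i / i0)            # Tafel approximation
eta_ohm  = R_ohm * i
eta_conc = -(R * T / (2 * F)) * np.log(np.clip(1 - i / i_lim, 1e-9, None))
V = E_oc - eta_act - eta_ohm - eta_conc
P = V * i                                                    # power density, W/cm^2
print(f"peak power density ~ {P.max():.3f} W/cm^2 at {i[P.argmax()]:.2f} A/cm^2")
```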
Abstract:
This thesis examines the phenomenological projection of space in two Cuban novels: La ninfa inconstante (2008) by Guillermo Cabrera Infante (1929–2005) and Todos se van (2006) by Wendy Guerra (1970–). Both novels are paradigmatic of two generations of Cuban writers who portray the city of Havana as a backdrop against which to project socio-political and biographical narratives. To problematize ethical and political omissions in the novels, this work draws on disciplines such as philosophy, urbanism, architecture, sociology and literary theory. Through the concepts of prominent phenomenologists such as Gaston Bachelard, Martin Heidegger and Maurice Merleau-Ponty, amongst others, this study evaluates how space becomes a construct for ambivalent dynamics of truth-telling within contrasting, suffocating sociopolitical contexts. In addition, it explores how these phenomenological spaces are defined in relation to power: the Cuban Revolution, and its aftermath of more than 52 years, brings forth a sense of displacement and placelessness. The novels present and develop both authors' spatial consciousness (which we call "ontological space"), which is not necessarily a container of three-dimensional objects but rather an emergent fictional construction. This thesis concludes that literature can become a meaningful space for coping with unbearable realities.
Abstract:
Due to the increasing demand for high-power and reliable miniaturized energy storage devices, the development of micro-supercapacitors, or electrochemical micro-capacitors, has attracted much attention in recent years. This dissertation investigates several strategies to develop on-chip micro-supercapacitors with high power and energy density. Micro-supercapacitors based on interdigitated carbon micro-electrode arrays are fabricated through the carbon microelectromechanical systems (C-MEMS) technique, which is based on the carbonization of patterned photoresist. To improve the capacitive behavior, electrochemical activation is performed on the carbon micro-electrode arrays. The developed micro-supercapacitors show specific capacitances as high as 75 mF cm-2 at a scan rate of 5 mV s-1 after electrochemical activation for 30 minutes. The capacitance loss is less than 13% after 1000 cyclic voltammetry (CV) cycles. These results indicate that electrochemically activated C-MEMS micro-electrode arrays are promising candidates for on-chip electrochemical micro-capacitor applications. The energy density of the micro-supercapacitors was further improved by conformal coating of polypyrrole (PPy) on the C-MEMS structures. In these micro-devices, the three-dimensional (3D) carbon microstructures serve as current collectors for the high-energy-density PPy electrodes. Electrochemical characterization of these micro-supercapacitors shows that they can deliver a specific capacitance of about 162.07 mF cm-2 and a specific power of 1.62 mW cm-2 at a 20 mV s-1 scan rate. Addressing the need for high-power micro-supercapacitors, the application of graphene as an electrode material for micro-supercapacitors was also investigated. The present study suggests a novel method to fabricate graphene-based micro-supercapacitors with thin-film or in-plane interdigital electrodes. The fabricated micro-supercapacitors show exceptional frequency response and power-handling performance and can effectively charge and discharge at rates as high as 50 V s-1. CV measurements show that the specific capacitance of the micro-supercapacitor based on reduced graphene oxide and carbon nanotube composites is 6.1 mF cm-2 at a scan rate of 0.01 V s-1. At a very high scan rate of 50 V s-1, a specific capacitance of 2.8 mF cm-2 (stack capacitance of 3.1 F cm-3) is recorded. This unprecedented performance can potentially broaden the future applications of micro-supercapacitors.
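The areal capacitances quoted above follow from integrating cyclic voltammograms; as a hedged illustration, the sketch below shows one common way to extract an areal capacitance from a single CV cycle. The function name, the ideal rectangular CV used as input, and the 0.1 cm^2 footprint are assumptions for demonstration, not data from the dissertation.

```python
import numpy as np

def areal_capacitance_from_cv(voltage, current, scan_rate, footprint_area):
    """Areal capacitance (F/cm^2) from one full CV cycle (forward then reverse sweep).

    Uses C = (enclosed CV loop area) / (2 * scan_rate * voltage window).
    """
    loop_area = np.trapz(current, voltage)          # closed-path integral of i dV (A*V)
    dv_window = voltage.max() - voltage.min()       # scanned voltage window (V)
    cap = loop_area / (2.0 * scan_rate * dv_window) # farads
    return cap / footprint_area                     # F per cm^2 of device footprint

# hypothetical ideal device: a 75 mF/cm^2 capacitor over 0.1 cm^2 at 5 mV/s draws ~37.5 uA
v = np.linspace(0.0, 1.0, 500)
i = np.full_like(v, 37.5e-6)
cap = areal_capacitance_from_cv(np.concatenate([v, v[::-1]]),
                                np.concatenate([i, -i]),
                                scan_rate=5e-3, footprint_area=0.1)
print(f"{cap * 1e3:.1f} mF/cm^2")   # ~75 mF/cm^2 for this synthetic loop
```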
Abstract:
Electromagnetic waves in a suburban environment encounter multiple obstructions that shadow the signal. These waves are scattered and random in polarization, and they take multiple paths that add as vectors at the portable device. Buildings have vertical and horizontal edges, and diffraction from edges has polarization-dependent characteristics. In practice, a signal transmitted from a vertically polarized high antenna results in a significant fraction of the total power arriving in the horizontal polarization at street level. Signal reception can be improved whenever there is a probability of receiving the signal in at least two independent ways, or branches. The Finite-Difference Time-Domain (FDTD) method was applied to obtain the two- and three-dimensional dyadic diffraction coefficients (soft and hard) of right-angle perfect electric conductor (PEC) wedges illuminated by a plane wave. The FDTD results were in good agreement with the asymptotic solutions obtained using the Uniform Theory of Diffraction (UTD). Further, the PEC wedge was replaced with a material wedge, and the corresponding dyadic diffraction coefficient was obtained.
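As a rough sketch of the numerical method named above, the following minimal 2D FDTD (Yee) loop propagates a TMz field around a right-angle PEC wedge. The grid size, the excitation (a soft point source rather than the plane wave used in the study), and the absence of absorbing boundaries are simplifying assumptions for illustration only.

```python
import numpy as np

nx, ny, nt = 200, 200, 300
c0, dx = 3e8, 1e-3
dt = 0.99 * dx / (c0 * np.sqrt(2.0))          # 2D Courant stability limit
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

Ez = np.zeros((nx, ny))
Hx = np.zeros((nx, ny))
Hy = np.zeros((nx, ny))
pec = np.zeros((nx, ny), dtype=bool)
pec[100:, 100:] = True                        # right-angle PEC wedge filling one quadrant

for n in range(nt):
    # H updates from the curl of Ez
    Hx[:, :-1] -= dt / (mu0 * dx) * (Ez[:, 1:] - Ez[:, :-1])
    Hy[:-1, :] += dt / (mu0 * dx) * (Ez[1:, :] - Ez[:-1, :])
    # Ez update from the curl of H
    Ez[1:-1, 1:-1] += dt / (eps0 * dx) * (
        (Hy[1:-1, 1:-1] - Hy[:-2, 1:-1]) - (Hx[1:-1, 1:-1] - Hx[1:-1, :-2])
    )
    Ez[20, 50] += np.sin(2 * np.pi * 10e9 * n * dt)   # soft point source illuminating the wedge
    Ez[pec] = 0.0                                     # PEC condition: tangential E vanishes

# Fields sampled near the edge (around Ez[100, 100]) could then be compared with
# UTD diffraction coefficients, in the spirit of the study summarized above.
```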
Abstract:
Building on the possibility of real-time interaction with three-dimensional environments through an advanced interface, Virtual Reality is the core technology of this work, used in the design of virtual environments based on real hydroelectric plants. Before a Virtual Reality system can be deployed for operation, three-dimensional modeling and interactive scene configuration are essential steps. However, owing to their size and complexity, the generation of virtual environments for power plants currently carries a high computational cost. This work presents a methodology to optimize the production process of virtual environments associated with real hydroelectric power plants. In partnership with the electric utility CEMIG, several hydroelectric power plants were used within the scope of this work, and during the modeling of each one the techniques within the methodology were applied. After evaluating the computational techniques presented here, it was possible to confirm a reduction in the time required to deliver each hydroelectric complex. Thus, this work presents the current scenario in the development of virtual hydroelectric power plants and discusses the proposed methodology, which seeks to optimize this process in the electricity generation sector.
Abstract:
The nonlinear interaction between light and atoms is an extensive field of study with a broad range of applications in quantum information science and condensed matter physics. Nonlinear optical phenomena occurring in cold atoms are particularly interesting because such slowly moving atoms can spatially organize into density gratings, which allows for studies involving optical interactions with structured materials. In this thesis, I describe a novel nonlinear optical effect that arises when cold atoms spatially bunch in an optical lattice. I show that employing this spatial atomic bunching provides access to a unique physical regime with reduced thresholds for nonlinear optical processes and enhanced material properties. Using this method, I observe the nonlinear optical phenomenon of transverse optical pattern formation at record-low powers. These transverse optical patterns are generated by a wave-mixing process that is mediated by the cold atomic vapor. The optical patterns are highly multimode and induce rich non-equilibrium atomic dynamics. In particular, I find that there exists a synergistic interplay between the generated optical patterns and the atoms, wherein the scattered fields help the atoms to self-organize into new, multimode structures that are not externally imposed on the atomic sample. These self-organized structures in turn enhance the power in the optical patterns. I provide the first detailed investigation of the motional dynamics of atoms that have self-organized in a multimode geometry. I also show that the transverse optical patterns induce Sisyphus cooling in all three spatial dimensions, which is the first observation of spontaneous three-dimensional cooling. My experiment represents a unique means by which to study nonlinear optics and non-equilibrium dynamics at ultra-low required powers.
Abstract:
The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.
At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs is different from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.
The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.
In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of the additional through-silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.
To address the high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
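To make the LFSR/MISR pairing concrete, the sketch below shows a generic Fibonacci-style LFSR generating patterns and a MISR folding responses into a signature. The register width, tap positions, and the fault-free echo loop are hypothetical and are not taken from the proposed BIST architecture.

```python
def lfsr_sequence(seed, taps, width, count):
    """Fibonacci-style LFSR: the XOR of the tap bits is shifted in as the new LSB."""
    state, patterns = seed, []
    for _ in range(count):
        patterns.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return patterns

def misr_compact(responses, taps, width):
    """Fold each response word into a running signature (MISR-style compaction)."""
    signature = 0
    for r in responses:
        feedback = 0
        for t in taps:
            feedback ^= (signature >> t) & 1
        signature = (((signature << 1) | feedback) ^ r) & ((1 << width) - 1)
    return signature

# hypothetical 8-bit BIST loop: a fault-free "interconnect" simply echoes each pattern
patterns = lfsr_sequence(seed=0x5A, taps=(7, 5, 4, 3), width=8, count=16)
golden = misr_compact(patterns, taps=(7, 5, 4, 3), width=8)
print(hex(golden))   # stored as the expected signature; a mismatch would indicate a defect
```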
In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. In order to minimize the test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.
Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.
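As a simplified stand-in for the mathematical model and heuristic described above, the sketch below assigns shift-clock staggers greedily so that blocks sharing a power-rail boundary never receive the same value. The block names, boundary lengths, and the greedy ordering are illustrative assumptions.

```python
def assign_staggers(blocks, shared_boundary):
    """Greedy stagger assignment: neighbors (shared boundary > 0) get different values.

    blocks          : iterable of block names
    shared_boundary : dict mapping frozenset({a, b}) -> shared boundary length
    """
    neighbors = {b: set() for b in blocks}
    for pair, length in shared_boundary.items():
        if length > 0:
            a, b = tuple(pair)
            neighbors[a].add(b)
            neighbors[b].add(a)

    stagger = {}
    order = sorted(blocks, key=lambda b: -len(neighbors[b]))   # most-constrained blocks first
    for b in order:
        used = {stagger[n] for n in neighbors[b] if n in stagger}
        value = 0
        while value in used:          # smallest stagger not used by any assigned neighbor
            value += 1
        stagger[b] = value
    return stagger

# hypothetical SoC floorplan with four blocks and their shared boundary lengths
boundaries = {frozenset({"cpu", "gpu"}): 3.2, frozenset({"cpu", "l2"}): 1.5,
              frozenset({"gpu", "l2"}): 0.0, frozenset({"l2", "io"}): 2.1}
print(assign_staggers(["cpu", "gpu", "l2", "io"], boundaries))
```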
In summary, the dissertation targets important design and optimization problems related to the testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods that make testing effective and feasible from a cost perspective.
Abstract:
Optical coherence tomography (OCT) is a noninvasive three-dimensional interferometric imaging technique capable of achieving micrometer scale resolution. It is now a standard of care in ophthalmology, where it is used to improve the accuracy of early diagnosis, to better understand the source of pathophysiology, and to monitor disease progression and response to therapy. In particular, retinal imaging has been the most prevalent clinical application of OCT, but researchers and companies alike are developing OCT systems for cardiology, dermatology, dentistry, and many other medical and industrial applications.
Adaptive optics (AO) is a technique used to reduce monochromatic aberrations in optical instruments. It is used in astronomical telescopes, laser communications, high-power lasers, retinal imaging, optical fabrication and microscopy to improve system performance. Scanning laser ophthalmoscopy (SLO) is a noninvasive confocal imaging technique that produces high contrast two-dimensional retinal images. AO is combined with SLO (AOSLO) to compensate for the wavefront distortions caused by the optics of the eye, providing the ability to visualize the living retina with cellular resolution. AOSLO has shown great promise to advance the understanding of the etiology of retinal diseases on a cellular level.
Broadly, we endeavor to enhance the vision outcome of ophthalmic patients through improved diagnostics and personalized therapy. Toward this end, the objective of the work presented herein was the development of advanced techniques for increasing the imaging speed, reducing the form factor, and broadening the versatility of OCT and AOSLO. Despite our focus on applications in ophthalmology, the techniques developed could be applied to other medical and industrial applications. In this dissertation, a technique to quadruple the imaging speed of OCT was developed. This technique was demonstrated by imaging the retinas of healthy human subjects. A handheld, dual depth OCT system was developed. This system enabled sequential imaging of the anterior segment and retina of human eyes. Finally, handheld SLO/OCT systems were developed, culminating in the design of a handheld AOSLO system. This system has the potential to provide cellular level imaging of the human retina, resolving even the most densely packed foveal cones.
Abstract:
Successful, efficient, and safe turbine design requires a thorough understanding of the underlying physical phenomena. This research investigates the physical understanding of, and the parameters highly correlated with, flutter, an aeroelastic instability prevalent among low-pressure turbine (LPT) blades in both aircraft engines and power turbines. The modern way of determining whether a certain cascade of LPT blades is susceptible to flutter is through time-expensive computational fluid dynamics (CFD) codes. These codes converge to a solution satisfying the Eulerian conservation equations subject to the boundary conditions of a nodal domain consisting of fluid and solid-wall particles. Most detailed CFD codes are accompanied by cryptic turbulence models, meticulous grid constructions, and elegant boundary condition enforcements, all with one goal in mind: determine the sign (and therefore stability) of the aerodynamic damping. The main question asked by the aeroelastician is, "is it positive or negative?" This type of thought process eventually gives rise to a black-box effect, leaving physical understanding behind. Therefore, the first part of this research aims to understand and reveal the physics behind LPT flutter, in addition to several related topics including acoustic resonance effects. Part of this initial numerical investigation is completed using an influence coefficient approach to study the variation of the work-per-cycle contributions of neighboring cascade blades to a reference airfoil. The second part of this research introduces new discoveries regarding the relationship between steady aerodynamic loading and negative aerodynamic damping. Using validated CFD codes as computational wind tunnels, a multitude of low-pressure turbine flutter parameters, such as reduced frequency, mode shape, and interblade phase angle, are scrutinized across various airfoil geometries and steady operating conditions to reach new design guidelines regarding the influence of steady aerodynamic loading on LPT flutter. Many pressing topics influencing LPT flutter, including shocks, their nonlinearity, and three-dimensionality, are also addressed along the way. The work is concluded by introducing a useful preliminary design tool that can estimate, within seconds, the entire aerodynamic damping versus nodal diameter curve for a given three-dimensional cascade.
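As a hedged illustration of the influence-coefficient approach mentioned above, the sketch below assembles a work-per-cycle (aerodynamic damping) versus interblade phase angle curve from complex per-blade work contributions. The coefficient values, sign convention, and blade count are invented placeholders rather than CFD results from this research.

```python
import numpy as np

def work_vs_ibpa(influence, sigmas):
    """Traveling-wave work per cycle assembled from influence coefficients.

    influence : dict {n: complex W_n}, work contribution of blade n on the reference blade
    sigmas    : interblade phase angles (rad)
    Returns W(sigma) = sum_n Re(W_n * exp(1j*n*sigma)); with this sign convention,
    W > 0 means the flow feeds energy into the blade motion (flutter risk).
    """
    sigmas = np.asarray(sigmas, dtype=float)
    total = np.zeros_like(sigmas, dtype=complex)
    for n, wn in influence.items():
        total += wn * np.exp(1j * n * sigmas)
    return total.real

# hypothetical influence coefficients: reference blade (0) and nearest neighbors (+/-1, +/-2)
coeffs = {0: -0.8 + 0.0j, 1: 0.45 - 0.2j, -1: 0.4 + 0.25j, 2: 0.05j, -2: -0.05j}
n_blades = 36
sigmas = 2 * np.pi * np.arange(n_blades) / n_blades      # one phase angle per nodal diameter
w = work_vs_ibpa(coeffs, sigmas)
print("least stable nodal diameter:", int(w.argmax()),
      "| unstable NDs:", np.where(w > 0)[0].tolist())
```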
Abstract:
Miniaturized, self-sufficient bioelectronics powered by unconventional micropower sources may lead to a new generation of implantable, wireless, minimally invasive medical devices, such as pacemakers, defibrillators, drug-delivering pumps, sensor transmitters, and neurostimulators. Studies have shown that micro-enzymatic biofuel cells (EBFCs) are among the most intuitive candidates for in vivo micropower. In the first part of this thesis, a prototype design of an EBFC chip with 3D interdigitated microelectrode arrays was proposed to obtain an optimum design of 3D microelectrode arrays for carbon microelectromechanical systems (C-MEMS) based EBFCs. A detailed model, solving partial differential equations (PDEs) by finite element techniques in COMSOL Multiphysics, was developed to study the effects of (1) the dimensions of the microelectrodes, (2) the spatial arrangement of the 3D microelectrode arrays, and (3) the microelectrode geometry on EBFC performance. In the second part of this thesis, in order to investigate the performance of an EBFC, the behavior of an EBFC chip inside an artery was studied. COMSOL Multiphysics was also applied to analyze mass transport for different orientations of an EBFC chip inside a blood artery; two orientations, horizontal position (HP) and vertical position (VP), were analyzed. The third part of this thesis focused on experimental work toward a high-performance EBFC. This work integrated graphene/enzyme onto three-dimensional (3D) micropillar arrays in order to obtain efficient enzyme immobilization, enhance enzyme loading, and facilitate direct electron transfer. The developed 3D graphene/enzyme network based EBFC generated a maximum power density of 136.3 μW cm-2 at 0.59 V, almost 7 times the maximum power density of the bare 3D carbon micropillar array based EBFC. To further improve performance, reduced graphene oxide (rGO)/carbon nanotubes (CNTs) were integrated onto the 3D micropillar arrays in the fourth part of this thesis. The developed rGO/CNT based EBFC generated twice the maximum power density of the rGO based EBFC. Through a comparison of experimental and theoretical results, the cell performance efficiency is noted to be 67%.
Abstract:
This paper presents a reworking of a model of written text production published by the Grupo Didactext in 2003. It is framed within a sociocognitive, linguistic and didactic perspective, and is conceived from the interaction of three dimensions symbolized by recurrent concentric circles. The first circle corresponds to the cultural sphere: the various spheres of human praxis in which every act of written composition is immersed. The second refers to the contexts of production, which comprise the social context, the situational context, the physical context, the audience and the medium of composition. The third circle corresponds to the individual and takes into account the role of memory in the production of a text from a sociocultural perspective, motivation, emotions, and the cognitive and metacognitive strategies, within which six functional units operating in concurrence are conceived: access to knowledge, planning, drafting, revision and rewriting, editing, and oral presentation. The didactic orientation is concerned with the teaching and learning of academic writing in the classroom, as well as with research on writing in educational contexts.
Abstract:
Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course "Principles and Practice of Fire Modelling", run by the University of Greenwich, attempts to bridge the divide between the expert and the general user, providing participants with the expertise they need to understand the results of mathematical fire modelling. The course and the associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background; they offer a friendly guide through the unfamiliar terrain of mathematical modelling. These concepts and techniques are introduced and demonstrated in seminars, and those attending also gain experience in using the methods during hands-on tutorial and workshop sessions. On completion of this short course, participants should: be familiar with the concepts of zone and field modelling; be familiar with zone and field model assumptions; have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling; be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and be able to interpret model predictions. The result is that the fire safety engineer is empowered to realise the full value of mathematical models to help in the prediction of fire development and to determine the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Background: Athletic groin pain (AGP) is prevalent in sports involving repeated accelerations, decelerations, kicking and change-of-direction movements. Clinical and radiological examinations lack the ability to assess the pathomechanics of AGP, but three-dimensional biomechanical movement analysis may be an important innovation. Aim: The primary aim was to describe and analyse the movements used by patients with AGP during a maximum-effort change-of-direction task. The secondary aim was to determine whether specific anatomical diagnoses were related to a distinct movement strategy. Methods: 322 athletes with current symptoms of chronic AGP participated. Structured and standardised clinical assessments and radiological examinations were performed on all participants. Additionally, each participant performed multiple repetitions of a planned maximum-effort change-of-direction task during which whole-body kinematics were recorded. Kinematic and kinetic data were examined using continuous waveform analysis techniques in combination with a subgroup design that used the gap statistic and hierarchical clustering. Results: Three subgroups (clusters) were identified. Kinematic and kinetic measures of the clusters differed strongly in the patterns observed in the thorax, pelvis, hip, knee and ankle. Cluster 1 (40%) was characterised by increased ankle eversion, external rotation and knee internal rotation and greater knee work. Cluster 2 (15%) was characterised by increased hip flexion, pelvis contralateral drop, thorax tilt and increased hip work. Cluster 3 (45%) was characterised by high ankle dorsiflexion, thorax contralateral drop, ankle work and prolonged ground contact time. No correlation was observed between movement clusters and the clinically palpated location of the participant's pain. Conclusions: We identified three distinct movement strategies among athletes with long-standing groin pain during a maximum-effort change-of-direction task. These movement strategies were not related to clinical assessment findings but highlighted targets for rehabilitation in response to possible propagative mechanisms. Trial registration number: NCT02437942 (pre-results).
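The subgroup design pairs the gap statistic with hierarchical clustering; the sketch below shows one common way to combine Ward-linkage clustering with a gap-statistic estimate of the number of clusters. The feature matrix, reference-set construction, and cluster counts are hypothetical and do not reproduce the study's waveform analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def within_dispersion(X, labels):
    """Sum of within-cluster dispersions W_k."""
    total = 0.0
    for c in np.unique(labels):
        pts = X[labels == c]
        total += ((pts - pts.mean(axis=0)) ** 2).sum()
    return total

def gap_statistic(X, k_max=6, n_ref=20, seed=0):
    """Gap(k) = mean(log W_k of uniform reference sets) - log W_k of the data."""
    rng = np.random.default_rng(seed)
    mins, maxs = X.min(axis=0), X.max(axis=0)
    gaps = []
    for k in range(1, k_max + 1):
        labels = fcluster(linkage(X, method="ward"), k, criterion="maxclust")
        log_wk = np.log(within_dispersion(X, labels))
        ref_log_wk = []
        for _ in range(n_ref):
            ref = rng.uniform(mins, maxs, size=X.shape)
            ref_labels = fcluster(linkage(ref, method="ward"), k, criterion="maxclust")
            ref_log_wk.append(np.log(within_dispersion(ref, ref_labels)))
        gaps.append(np.mean(ref_log_wk) - log_wk)
    return np.array(gaps)

# hypothetical feature matrix: one row per athlete, columns = waveform-derived features
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(30, 4)) for m in (0.0, 3.0, 6.0)])
gaps = gap_statistic(X)
print("estimated number of clusters:", int(np.argmax(gaps)) + 1)
```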
Abstract:
This thesis presents an experimental study of flow separation in the draft tube (diffuser) of a bulb hydraulic turbine model. Separation occurs when the turbine is operated at high load, and it reduces the effective pressure-recovery section of the diffuser. The drop in diffuser performance at high load causes a sharp decrease in turbine efficiency and extracted power. The reduced-scale bulb turbine model is representative of modern machines with a particularly divergent diffuser. Turbine performance was measured over a wide range of operating points to determine the most relevant conditions for studying separation and to examine the parametric distribution of this phenomenon. Pressure was measured along the draft tube with flush-mounted dynamic sensors, while velocity fields in the separation zone were measured with a two-component PIV method. Near-wall observations were made using wool tufts. For a sufficiently high flow rate, the adverse pressure gradient induced by the diffuser geometry weakens the boundary layer enough to cause fluid ejection from the wall along a large three-dimensional envelope. The unsteady three-dimensional separation is located in the same region of the diffuser regardless of the operating point. Increasing the flow rate both extends the separation zone and increases the frequency of its occurrences. The position and shape of the separation front fluctuate significantly and without periodicity. Topological and vortex analyses of the instantaneous velocity fields reveal a complex separation-front topology that differs greatly from one realization to another. Although the flow is turbulent, the vortices associated with the foci of the separation front are clearly larger and more intense than those of the turbulence, suggesting that the roll-up mechanism leading to the separation vortices is distinct from the mechanisms of turbulence.
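As a loose illustration of the vortex analysis of instantaneous PIV fields described above, the sketch below evaluates the two-dimensional Q-criterion on a planar velocity field. The grid, the synthetic flow used as stand-in data, and the simple Q > 0 threshold are assumptions, not the thesis' actual identification procedure.

```python
import numpy as np

def q_criterion_2d(u, v, dx, dy):
    """Q-criterion on a 2D velocity field sampled on a regular grid.

    u, v are arrays indexed [row = y, col = x]; Q = 0.5 * (|Omega|^2 - |S|^2),
    so Q > 0 flags rotation-dominated (vortex-core) regions.
    """
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    strain_sq = dudx**2 + dvdy**2 + 0.5 * (dudy + dvdx) ** 2   # |S|^2
    rot_sq = 0.5 * (dvdx - dudy) ** 2                          # |Omega|^2
    return 0.5 * (rot_sq - strain_sq)

# hypothetical PIV snapshot on a 64x64 grid: a simple cellular flow as stand-in data
y, x = np.mgrid[0:64, 0:64]
u = -np.sin(2 * np.pi * y / 64)
v = np.sin(2 * np.pi * x / 64)
Q = q_criterion_2d(u, v, dx=1.0, dy=1.0)
print("fraction of rotation-dominated points:", float((Q > 0).mean()))
```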