893 results for Computer Engineering|Electrical engineering
Abstract:
The physics of the operation of single-electron tunneling devices (SEDs) and single-electron tunneling transistors (SETs), especially those with multiple nanometer-sized islands, has remained poorly understood despite intensive experimental and theoretical research. This computational study examines the current-voltage (IV) characteristics of multi-island single-electron devices using a newly developed multi-island transport simulator (MITS) based on semi-classical tunneling theory and kinetic Monte Carlo simulation. The dependence of device characteristics on physical device parameters is explored, and the physical mechanisms that lead to the Coulomb blockade (CB) and Coulomb staircase (CS) characteristics are proposed. Simulations using MITS demonstrate that the overall IV characteristics of a device with a random distribution of islands result from a complex interplay between the factors that affect the tunneling rates and are fixed a priori (e.g., island sizes, island separations, temperature, gate bias) and the evolving charge state of the system, which changes as the source-drain bias (VSD) is changed. With increasing VSD, a multi-island device must overcome multiple discrete energy barriers (up-steps) before it reaches the threshold voltage (Vth). Beyond Vth, current flow is rate-limited by slow junctions, which leads to the CS structures in the IV characteristic. Each step in the CS is characterized by a unique distribution of island charges with an associated distribution of tunneling probabilities. MITS simulation studies of one-dimensional (1D) disordered chains show that longer chains are better suited for switching applications, as Vth increases with increasing chain length; they also retain CS structures at higher temperatures better than shorter chains. In sufficiently disordered 2D systems, we demonstrate that there may exist a dominant conducting path (DCP) for conduction, which makes the 2D device behave as a quasi-1D device. The existence of a DCP is sensitive to the device structure but robust with respect to changes in temperature, gate bias, and VSD. A side gate in 1D and 2D systems can effectively control Vth. We argue that devices with smaller island sizes and narrower junctions may be better suited for practical applications, especially at room temperature.
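To make the kinetic Monte Carlo idea concrete, here is a minimal Python sketch of one Gillespie-style simulation step over a set of tunneling events. The names `junctions`, `rate_of`, and `j.apply` are hypothetical placeholders; MITS itself computes rates from semi-classical tunneling theory and the full electrostatics of the island array, which this sketch does not reproduce.

```python
import math
import random

def kmc_step(junctions, rate_of, state, t):
    """Pick one tunneling event with probability proportional to its rate
    and advance the simulation clock (Gillespie-style selection)."""
    rates = [rate_of(j, state) for j in junctions]
    total = sum(rates)
    if total == 0.0:                 # Coulomb blockade: no allowed event
        return state, t, False
    r = random.random() * total
    acc = 0.0
    for j, rate in zip(junctions, rates):
        acc += rate
        if r <= acc:
            state = j.apply(state)   # move one electron across junction j
            break
    t += -math.log(random.random()) / total  # exponential waiting time
    return state, t, True
```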
Abstract:
All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and is compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions. Each set consists of 1000 simulated turbulence-degraded images. The mean-square-error (MSE) performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the MSE performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each for low, moderate, and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46, 42, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40, 29, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39, 29, and 27 percent are available using speckle imaging methods and 25 input frames, and of 38, 34, and 33 percent, respectively, for the MFBD method and 150 input frames. The MFBD estimator is applied to three sets of field data and the results are presented. Finally, a combined Bispectrum-MFBD hybrid estimator is proposed and investigated; this technique consistently provides a lower MSE and smaller variance in the estimate under all three simulated turbulence conditions.
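For clarity, here is a toy Python definition of the evaluation metric used throughout: the MSE against the pristine object and the percent improvement of a reconstruction over the raw degraded imagery. The function names are hypothetical; the dissertation's estimators and data are not reproduced here.

```python
import numpy as np

def mse(estimate, truth):
    """Mean-square error between an image estimate and the true object."""
    return np.mean((estimate - truth) ** 2)

def percent_improvement(reconstruction, degraded, truth):
    """Percent MSE reduction of a reconstruction relative to degraded input."""
    return 100.0 * (1.0 - mse(reconstruction, truth) / mse(degraded, truth))
```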
Abstract:
Wireless sensor networks are an emerging research topic due to their vast and ever-growing range of applications. Wireless sensor networks are made up of small nodes whose main goal is to monitor, compute, and transmit data. The nodes are basically made up of low-powered microcontrollers, wireless transceiver chips, sensors to monitor their environment, and a power source. The applications of wireless sensor networks range from basic household applications, such as health monitoring, appliance control, and security, to military applications, such as intruder detection. The widespread application of wireless sensor networks has brought to light many research issues, such as battery efficiency, unreliable routing protocols due to node failures, localization issues, and security vulnerabilities. This report describes the hardware development of a fault-tolerant routing protocol for a railroad pedestrian warning system. The protocol implemented is a peer-to-peer, multi-hop, TDMA-based protocol for nodes arranged in a linear zigzag chain. The basic working of the protocol was derived from the Wireless Architecture for Hard Real-Time Embedded Networks (WAHREN).
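As a sketch of how TDMA scheduling works for nodes in a linear chain, the snippet below assigns repeating slots with spatial reuse: nodes far enough apart can share a slot without interfering. The frame structure and reuse distance are illustrative assumptions, not the WAHREN-derived protocol's actual parameters.

```python
def tdma_schedule(num_nodes, reuse_distance=3):
    """Assign each node a repeating slot; nodes spaced at least
    `reuse_distance` hops apart may share a slot without collisions."""
    return {node: node % reuse_distance for node in range(num_nodes)}

def can_transmit(node, frame_slot, schedule):
    """A node transmits only in its own slot of the repeating frame."""
    return schedule[node] == frame_slot

schedule = tdma_schedule(num_nodes=8)   # slots cycle 0, 1, 2, 0, 1, 2, ...
```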
Abstract:
Photovoltaic power has become one of the most popular research areas in the new energy field. In this report, the case of a household solar power system is presented. The simulation is built in the Matlab environment using Simulink and SimPowerSystem. There are four parts in a household solar system: the solar cell, the MPPT system, the battery, and the power consumer. The solar cell and MPPT system are studied and analyzed individually; the system with MPPT generates 30% more energy than the system without MPPT. Simulation of the household system shows that it generates 40.392 kWh per sunny day. By combining the energy generated by the system with the price of electric power, the system needs 8.42 years to break even when weather conditions are taken into account.
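The report's 8.42-year figure follows from its own tariff, cost, and weather assumptions, which are not given here. The sketch below only shows the shape of the payback arithmetic, with placeholder numbers clearly marked as such.

```python
# Back-of-the-envelope payback arithmetic (all values below except the
# daily yield are hypothetical placeholders, not the report's inputs).
daily_yield_kwh = 40.392      # simulated output per sunny day (from report)
price_per_kwh = 0.50          # assumed electricity tariff (placeholder)
system_cost = 35000.0         # assumed installed system cost (placeholder)
sunny_fraction = 0.55         # assumed weather derating factor (placeholder)

annual_revenue = daily_yield_kwh * price_per_kwh * 365 * sunny_fraction
payback_years = system_cost / annual_revenue
print(f"payback ≈ {payback_years:.2f} years")
```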
Abstract:
For a microgrid with a high penetration level of renewable energy, energy storage use becomes more integral to the system performance due to the stochastic nature of most renewable energy sources. This thesis examines the use of droop control of an energy storage source in dc microgrids in order to optimize a global cost function. The approach involves using a multidimensional surface to determine the optimal droop parameters based on load and state of charge. The optimal surface is determined using knowledge of the system architecture and can be implemented with fully decentralized source controllers. The optimal surface control of the system is presented. Derivations of a cost function along with the implementation of the optimal control are included. Results were verified using a hardware-in-the-loop system.
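A minimal sketch of the decentralized lookup idea follows: each storage controller interpolates its droop gain from a precomputed surface indexed by load and state of charge, then applies the standard droop law. The axes, surface values, and function names are placeholders, not the thesis's optimized surface.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

load_axis = np.linspace(0.0, 10.0, 11)           # load, kW
soc_axis = np.linspace(0.1, 0.9, 9)              # state of charge
surface = np.outer(np.linspace(0.5, 1.5, 11),    # placeholder droop gains
                   np.linspace(1.2, 0.8, 9))
lookup = RegularGridInterpolator((load_axis, soc_axis), surface)

def storage_voltage_ref(v_nominal, i_out, load_kw, soc):
    """Droop law v_ref = V_nom - R_d(load, SOC) * I_out, with the droop
    resistance interpolated from the precomputed optimal surface."""
    r_d = float(lookup([[load_kw, soc]]))
    return v_nominal - r_d * i_out
```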
Abstract:
As microgrid power systems gain prevalence and renewable energy comprises greater and greater portions of distributed generation, energy storage becomes important to offset the higher variance of renewable energy sources and maximize their usefulness. One of the emerging techniques is to utilize a combination of lead-acid batteries and ultracapacitors to provide both short and long-term stabilization to microgrid systems. The different energy and power characteristics of batteries and ultracapacitors imply that they ought to be utilized in different ways. Traditional linear controls can use these energy storage systems to stabilize a power grid, but cannot effect more complex interactions. This research explores a fuzzy logic approach to microgrid stabilization. The ability of a fuzzy logic controller to regulate a dc bus in the presence of source and load fluctuations, in a manner comparable to traditional linear control systems, is explored and demonstrated. Furthermore, the expanded capabilities (such as storage balancing, self-protection, and battery optimization) of a fuzzy logic system over a traditional linear control system are shown. System simulation results are presented and validated through hardware-based experiments. These experiments confirm the capabilities of the fuzzy logic control system to regulate bus voltage, balance storage elements, optimize battery usage, and effect self-protection.
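To illustrate the style of control involved (not the thesis's tuned rule base), here is a minimal Python fuzzy regulator: triangular membership functions on the bus-voltage error and a weighted-average defuzzification producing a storage current command. The membership shapes and rule outputs are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_current_cmd(v_error):
    """Map bus-voltage error (V) to a storage current command (A)."""
    rules = [
        (tri(v_error, -2.0, -1.0, 0.0), -5.0),   # bus high -> absorb power
        (tri(v_error, -1.0,  0.0, 1.0),  0.0),   # bus ok   -> idle
        (tri(v_error,  0.0,  1.0, 2.0),  5.0),   # bus low  -> inject power
    ]
    w = sum(mu for mu, _ in rules)
    return sum(mu * out for mu, out in rules) / w if w else 0.0
```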
Abstract:
Clouds are one of the most influential elements of weather on the earth system, yet they are also one of the least understood. Understanding their composition and behavior at small scales is critical to understanding and predicting larger scale feedbacks. Currently, the best method to study clouds on the microscale is through airborne in situ measurements using optical instruments capable of resolving clouds on the individual particle level. However, current instruments are unable to sufficiently resolve the scales important to cloud evolution and behavior. The Holodec is a new generation of optical cloud instrument which uses digital inline holography to overcome many of the limitations of conventional instruments. However, its performance and reliability was limited due to several deficiencies in its original design. These deficiencies were addressed and corrected to advance the instrument from the prototype stage to an operational instrument. In addition, the processing software used to reconstruct and analyze digitally recorded holograms was improved upon to increase robustness and ease of use.
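The core numerical step in digital inline holography is back-propagating the recorded hologram to a chosen depth. Below is a minimal angular-spectrum sketch of that step; the Holodec's actual processing pipeline adds calibration, thresholding, and particle detection on top of this, which are omitted here.

```python
import numpy as np

def reconstruct(hologram, wavelength, pixel_pitch, z):
    """Propagate a recorded hologram to depth z via the angular-spectrum
    method; all quantities in consistent units (e.g., meters)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Clamp evanescent components (negative argument) to zero phase.
    arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
    H = np.exp(2j * np.pi / wavelength * np.sqrt(arg) * z)
    return np.fft.ifft2(np.fft.fft2(hologram) * H)
```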
Abstract:
In power-electronics-based microgrids, the computational requirements needed to implement an optimized online control strategy can be prohibitive. The work presented in this dissertation proposes a generalized method for deriving geometric manifolds in a dc microgrid, based on the a priori computation of the optimal reactions and trajectories for classes of events in the microgrid. The proposed states are the stored energies in all the energy storage elements of the dc microgrid and the power flowing into them. It is anticipated that calculating a large enough set of dissimilar transient scenarios will also span many scenarios not specifically used to develop the surface. These geometric manifolds are then used as reference surfaces in any type of controller, such as a sliding-mode hysteretic controller. The presence of switched power converters in microgrids requires different control actions for different system events, and control of the switch states of the converters is essential for steady-state and transient operation. A digital memory look-up based controller that uses a hysteretic sliding-mode control strategy is an effective technique to generate the proper switch states for the converters. An example dc microgrid with three dc-dc boost converters and resistive loads is considered in this work. The geometric manifolds are successfully generated for transient events, such as step changes in the loads and the sources. The surfaces corresponding to a specific case of step change in the loads are then used as reference surfaces in an EEPROM for experimentally validating the control strategy. The required switch states corresponding to this specific transient scenario are programmed into the EEPROM as a memory table. This controls the switching of the dc-dc boost converters and drives the system states to the reference manifold. In this work, it is shown that this strategy effectively controls the system for transient conditions such as step changes in the loads for the example case.
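As a rough sketch of the memory-table concept, the snippet below quantizes the measured states into a table address, looks up the precomputed switch states, and applies a hysteresis band around the reference manifold. The quantization, address layout, and table contents are hypothetical stand-ins for the EEPROM image described above.

```python
def quantize(value, lo, hi, bits):
    """Map a measurement onto an unsigned integer code of `bits` bits."""
    step = (hi - lo) / (2 ** bits - 1)
    return min(max(int(round((value - lo) / step)), 0), 2 ** bits - 1)

def switch_states(energy, power, table, sigma, last=None, band=0.05):
    """Hysteretic table lookup: hold the previous switch state while the
    sliding function sigma stays inside the band around the manifold."""
    if last is not None and abs(sigma) < band:
        return last
    addr = (quantize(energy, 0.0, 1.0, 4) << 4) | quantize(power, 0.0, 1.0, 4)
    return table[addr]   # table: 256-entry list of switch-state words
```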
Abstract:
Heuristic optimization algorithms are of great importance for reaching solutions to various real-world problems. These algorithms have a wide range of applications, such as cost reduction, artificial intelligence, and medicine. By "cost" we mean, for instance, the value of a function of several independent variables. Often, when dealing with engineering problems, we want to minimize the value of a function in order to achieve an optimum, or to maximize another parameter that increases as the cost (the value of this function) decreases. Heuristic cost-reduction algorithms work by finding the values of the independent variables for which the value of the function (the "cost") is minimal. There is an abundance of heuristic cost-reduction algorithms to choose from. We start with a discussion of various optimization algorithms, such as memetic algorithms, force-directed placement, and evolution-based algorithms. Following this initial discussion, we take up the working of three algorithms and implement them in MATLAB. The focus of this report is to provide detailed information on the working of three different heuristic optimization algorithms, and to conclude with a comparative study of their performance when implemented in MATLAB. The three algorithms considered in this report are the non-adaptive simulated annealing algorithm, the adaptive simulated annealing algorithm, and the random-restart hill climbing algorithm. The algorithms are heuristic in nature; that is, the solution they reach may not be the best of all solutions, but they provide a means of reaching a reasonably good solution quickly, without taking an indefinite amount of time.
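For reference, here is a minimal Python version of the non-adaptive simulated annealing skeleton that the report implements in MATLAB; the cooling schedule and neighborhood move are generic placeholders, not the report's specific choices.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.95, iters=10000):
    """Minimize `cost` starting from x0 with geometric (non-adaptive) cooling."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        fy = cost(y)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-(fy - fx) / t).
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha   # geometric cooling schedule
    return best, fbest
```

Random-restart hill climbing differs only in that it accepts strictly improving moves and restarts from a fresh random point when it stalls; the adaptive variant adjusts the cooling schedule online.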
Abstract:
In recent years, advanced metering infrastructure (AMI) has been a main research focus because the traditional power grid has been unable to meet development requirements. There has been an ongoing effort to increase the number of AMI devices that provide real-time data readings to improve system observability. AMI deployed across distribution secondary networks provides load and consumption information for individual households, which can improve grid management. The significant upgrade costs associated with retrofitting existing meters with network-capable sensing can be made more economical by using image processing methods to extract usage information from images of the existing meters. This thesis presents a new solution that exchanges power consumption information online with a cloud server without modifying the existing electromechanical analog meters. In this framework, a systematic approach to extracting energy data from images replaces the manual reading process. In a case study, readings from the digital imaging approach are compared to the averages determined by visual readings over a one-month period.
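One plausible building block for such a pipeline (a sketch under stated assumptions, not the thesis's actual method) is estimating a dial pointer's angle with a Hough line transform and mapping the angle to a digit. The thresholds and the angle-to-digit mapping below are illustrative assumptions.

```python
import cv2
import numpy as np

def read_dial(dial_image_gray):
    """Estimate the pointer angle on one cropped analog dial and map it to
    a digit (hypothetical mapping: 10 digits spaced 36 degrees apart)."""
    edges = cv2.Canny(dial_image_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return None
    # Take the longest detected line segment as the pointer.
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 360
    return int(angle // 36) % 10
```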
Abstract:
Hafnium oxide (HfO2) is a promising dielectric for future microelectronic applications. HfO2 thin films (10-75 nm) were deposited on Pt/SiO2/Si substrates by pulsed DC magnetron reactive sputtering. Top electrodes of Pt were formed by e-beam evaporation through an aperture mask on the samples to create MIM (metal-insulator-metal) capacitors. Various processing conditions (Ar/O2 ratio, DC power, and deposition rate) and post-deposition annealing conditions (time and temperature) were investigated. The structure of the HfO2 films was characterized by X-ray diffraction (XRD), and the roughness was measured by a profilometer. The electrical properties were characterized in terms of relative permittivity (εr(T) and εr(f)) and leakage behavior (I-V, I-T, and I-time). The electrical measurements were performed over a temperature range from -5 to 200°C. For the samples with the best experimental results, the relative permittivity of HfO2 was found to be ~27 after anneal and increased by 0.027%/°C with increasing temperature over the measured range. At 25°C, the leakage current density was below 10⁻⁸ A/cm² at 1 volt. The leakage current increased with temperature above a specific threshold temperature, below which it changed little. The leakage current also increased with voltage: at voltages below 1 volt it is ohmic, while at higher voltages it follows the Schottky model. The breakdown field is ~1.82×10⁶ V/cm. The optical bandgap, measured on samples deposited on quartz substrates, is 5.4 eV after anneal.
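For reference, the Schottky (thermionic) emission model invoked above has the standard form J = A*T² exp(-(φ_B - √(qE/4πε)) / kT). The sketch below evaluates it numerically; the Richardson constant and barrier height are illustrative placeholders, since the thesis fits its own measured I-V-T data.

```python
import numpy as np

K_B = 8.617e-5     # Boltzmann constant, eV/K
A_STAR = 1.2e6     # Richardson constant, A/(m^2 K^2); free-electron value (placeholder)

def schottky_current_density(E_field, T, phi_b, eps_r):
    """Schottky emission J = A* T^2 exp(-(phi_B - dphi) / kT), SI units:
    E_field in V/m, T in K, barrier phi_b in eV, eps_r dimensionless."""
    q = 1.602e-19        # elementary charge, C
    eps0 = 8.854e-12     # vacuum permittivity, F/m
    # Image-force barrier lowering, in volts (numerically eV per electron).
    dphi = np.sqrt(q * E_field / (4 * np.pi * eps_r * eps0))
    return A_STAR * T**2 * np.exp(-(phi_b - dphi) / (K_B * T))
```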
Abstract:
Electricity markets in the United States presently employ an auction mechanism to determine the dispatch of power generation units. In this market design, generators submit bid prices to a regulatory agency for review, and the regulator conducts an auction selection in a way that satisfies electricity demand. Most regulators currently use an auction selection method that minimizes total offer costs ["bid cost minimization" (BCM)] to determine the electric dispatch. However, recent literature has shown that this method may not minimize consumer payments, and an alternative selection method that directly minimizes total consumer payments ["payment cost minimization" (PCM)] may benefit social welfare in the long term. The objective of this project is to further investigate the long-term benefit of PCM implementation and determine whether it can provide lower costs to consumers. The two auction selection methods are expressed as linearly constrained programs and are implemented in an optimization software package. A methodology for game-theoretic bidding simulation is developed using EMCAS, a real-time market simulator. Results of a 30-day simulation showed that PCM reduced energy costs for consumers by 12%. However, this result will be cross-checked in the future with two other methods of bid simulation proposed in this paper.
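To show the shape of the BCM formulation as a linear program, here is a toy Python instance: choose generator outputs to minimize total offered cost subject to meeting demand and capacity limits. The bids, capacities, and demand are made up, and the project's actual models add unit-commitment variables and PCM's payment objective, which this sketch omits.

```python
import numpy as np
from scipy.optimize import linprog

bids = np.array([20.0, 35.0, 50.0])     # $/MWh offers (placeholder)
caps = np.array([100.0, 80.0, 120.0])   # MW capacities (placeholder)
demand = 180.0                          # MW system demand (placeholder)

# Minimize sum(bid_i * p_i)  s.t.  sum(p_i) = demand,  0 <= p_i <= cap_i.
res = linprog(c=bids,
              A_eq=np.ones((1, 3)), b_eq=[demand],
              bounds=[(0.0, c) for c in caps])
print(res.x, res.fun)   # dispatch per unit and total bid cost
```

PCM replaces the objective with total consumer payments (the uniform clearing price times demand), which makes the problem nonlinear in general and is why game-theoretic simulation is used for comparison.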
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, the directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the limited studies that optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with a fixed crossing-fiber configuration of two fibers crossing at 90° and 45°, respectively, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and the absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF. The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing region in the 90°, 45°, and 60° phantoms resulted in successful detection of the angular information with mean ± SD of 86.93°±2.65°, 44.61°±1.6°, and 60.03°±2.21°, respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking from known crossing-fiber regions in normal human subjects were shown, and an in-house MATLAB software package, which streamlines data reconstruction and post-processing for DSI with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for the validation of reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology, when applied as an additional DSI post-processing step, significantly improved the angular accuracy of the ODFs obtained from DSI and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
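Since fiber directions are read off as ODF peaks, here is a minimal Python sketch of peak picking on a sampled ODF: a direction counts as a peak if it exceeds a global threshold and all of its neighboring samples. The neighbor structure and threshold are simplified stand-ins for the dissertation's pipeline, which additionally deconvolves a single-peak response function from the ODF.

```python
import numpy as np

def odf_peaks(odf_values, neighbors, threshold=0.5):
    """odf_values: ODF amplitude per sampled direction (numpy array);
    neighbors: list of neighbor-index lists, one per direction."""
    vmax = odf_values.max()
    return [i for i, v in enumerate(odf_values)
            if v > threshold * vmax
            and all(v >= odf_values[j] for j in neighbors[i])]
```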
Abstract:
During the last three decades, FPGA technology has quickly evolved to become a major subject of research in computer and electrical engineering as it has been identified as a powerful alternative for creating highly efficient computing systems. FPGA devices offer substantial performance improvements when compared against traditional processing architectures via custom design and reconfiguration capabilities.
Abstract:
Much is said about the ever-increasing speed of technological innovation presented to the world, a pace that keeps accelerating. In the past, generations would pass before a technological shift occurred; today, a single generation witnesses several technological leaps. In this context of frequent innovation, the question arises of how to educate a professional who is responsible and ethical and who can keep up with, and be a protagonist of, such changes. The work developed in this thesis contributes to the discussion on engineering education, with a focus on electrical and computer engineering, by revisiting the definitions of concepts such as engineering education, active learning, innovation, Design Thinking, and transversal competencies, and by defining, as a research contribution, the concepts of techno-pedagogy and techno-pedagogical environments as a premise for the convergence of technological infrastructure, pedagogical strategies, and assessment methods in active learning for innovation. It presents a method to identify and quantify the degree of emphasis on transversal competencies for innovation based on market demand for electrical and computer engineers, along with a method for observing, collecting, and analyzing data on the development of transversal competencies through participation in two classroom experiences: the global course ME310 at Stanford University and course 030-3410 at the Escola Politécnica da Universidade de São Paulo. With this, it was possible to develop a method to assist in the planning of courses and curricula focused on innovation, identifying the transversal competencies that should be encouraged, relating those competencies to teaching and learning strategies, and suggesting the technological infrastructure and assessment method to be adopted.