950 results for TK Electrical engineering. Electronics. Nuclear engineering
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
A lightweight Java application suite has been developed and deployed allowing collaborative learning between students and tutors at remote locations. Students can engage in group activities online and also collaborate with tutors. A generic Java framework has been developed and applied to electronics, computing and mathematics education. The applications are respectively: (a) a digital circuit simulator, which allows students to collaborate in building simple or complex electronic circuits; (b) a Java programming environment whose paradigm is behavioural-based robotics; and (c) a differential equation solver useful in modelling complex and nonlinear dynamic systems. Each student sees a common shared window to which text or graphical objects may be added and which can then be shared online. A built-in chat room supports collaborative dialogue. Students can work either in collaborative groups or in teams as directed by the tutor. This paper summarises the technical architecture of the system as well as the pedagogical implications of the suite. A report of student evaluation, distilled from use over a period of twelve months, is also presented. We intend this suite to facilitate learning between groups at one or many institutions and to facilitate international collaboration. We also intend to use the suite as a tool to research the establishment and behaviour of collaborative learning groups. We shall make our software freely available to interested researchers.
Abstract:
Finding rare events in multidimensional data is an important detection problem that has applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, or safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may have never been observed, so the only information that is available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes, the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions, data exhibits more complex interdependencies, and there is redundancy that could be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. Therefore, we pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how the method can be accelerated using graphics processing units (GPUs). Then, we propose a new method for finding defective components on railway tracks using cameras mounted on a train. We describe how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem naturally fits in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
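The sparsity-constrained separation described above can be illustrated with a generic iterative shrinkage (ISTA) loop. The Python sketch below is only a toy stand-in: it uses a random dictionary in place of a shearlet frame and invented parameter values, so it shows the general technique of basis pursuit by iterative shrinkage rather than the dissertation's actual algorithm.

import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative shrinkage.

    A : (m, n) synthesis dictionary (a shearlet frame in the dissertation;
        here any matrix will do for illustration).
    y : (m,) observed signal or flattened image patch.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 256))
    x_true = np.zeros(256)
    x_true[rng.choice(256, 5, replace=False)] = 3.0
    y = A @ x_true + 0.01 * rng.standard_normal(64)
    x_hat = ista(A, y, lam=0.2)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))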
Abstract:
Deployment of low power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low power and macro basestations that are located within each other's transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated with exactly one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in both cases. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference of submodular functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference of convex functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance under variation in different network topology parameters.
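For readers unfamiliar with the alpha-fairness objective optimized in the second part, the short Python sketch below evaluates the standard alpha-fair utility of a vector of user rates; the rate values are invented for illustration, and the sketch is not the dissertation's joint association/activation algorithm.

import numpy as np

def alpha_fair_utility(rates, alpha):
    """Standard alpha-fairness utility of a vector of user rates.

    alpha = 0 gives the sum rate, alpha -> 1 gives proportional fairness
    (sum of logs), and larger alpha pushes toward max-min fairness.
    """
    rates = np.asarray(rates, dtype=float)
    if np.isclose(alpha, 1.0):
        return np.sum(np.log(rates))
    return np.sum(rates ** (1.0 - alpha) / (1.0 - alpha))

# Toy illustration: proportional fairness prefers the balanced allocation.
print(alpha_fair_utility([10.0, 10.0, 10.0], alpha=1.0))  # balanced rates
print(alpha_fair_utility([25.0, 4.0, 1.0], alpha=1.0))    # unbalanced rates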
Abstract:
Our work focuses on experimental and theoretical studies aimed at establishing a fundamental understanding of the principal electrical and optical processes governing the operation of quantum dot solar cells (QDSC) and their feasibility for the realization of the intermediate band solar cell (IBSC). Uniform, high conversion efficiency QD solar cells have been fabricated using carefully calibrated process recipes as the basis of all reliable experimental characterization. The origin of the enhancement of the short circuit current density (Jsc) in QD solar cells was carefully investigated. External quantum efficiency (EQE) measurements were performed as a measure of the below-bandgap distribution of transition states. In this work, we found that the incorporation of self-assembled quantum dots (QDs) interrupts the lattice periodicity and introduces a greatly broadened tailing density of states extending from the bandedge towards mid-gap. A below-bandgap density of states (DOS) model with an extended Urbach tail has been developed. In particular, the below-bandgap photocurrent generation has been attributed to transitions via confined energy states and background continuum tailing states. Photoluminescence measurements are used to determine the energy level of the lowest available state and the coupling between QD states and background tailing states, because photoluminescence results from a non-equilibrium process. A basic I-V measurement reveals a degradation of the open circuit voltage (Voc) of QD solar cells, which is related to a single sub-bandgap photon absorption process followed by direct collection of the generated carriers by the external circuit. We have proposed a modified Shockley-Queisser (SQ) model that predicts the degradation of Voc compared with a reference bulk device. Whenever an energy state within the forbidden gap can facilitate additional absorption, it can facilitate recombination as well; if the recombination is non-radiative, it is detrimental to solar cell performance. We have also investigated the QD trapping effects as deep level energy states. Without an efficient carrier extraction pathway, the QDs can indeed function as mobile carrier traps. Since hole energy levels remain largely connected for hole collection at room temperature, the trapping effect is more severe for electrons. We have tried to electron-dope the QDs to exert a repulsive Coulomb force to help improve the carrier collection efficiency. We have experimentally observed a 30% improvement of Jsc for 4e/dot devices compared with 0e/dot devices. Electron-doping helps with better carrier collection efficiency; however, we have also measured a smaller transition probability from the valence band to QD states as a direct manifestation of the Pauli Exclusion Principle. The non-linear performance is of particular interest. With the availability of lasers with on-resonance and off-resonance excitation energies, we have explored the photocurrent enhancement by a sequential two-photon absorption (2PA) process via the intermediate states. For the first time, we are able to distinguish the nonlinear effects of the 1PA and 2PA processes. The observed 2PA current under off-resonant and on-resonant excitation comes from a two-step transition via the tailing states instead of the QD states.
However, given the existence of an extended Urbach tail and the small number of photons available for the intermediate-state to conduction band transition, the experimental results suggest that, with the current material system, the intensity required for an observable enhancement of photocurrent via a 2PA process is much higher than what is available from concentrated sunlight. In order to realize the IBSC model, matching transition strengths need to be achieved from the valence band to QD states and from QD states to the conduction band. However, we have experimentally shown that only a negligible amount of signal can be observed at cryogenic temperature via the transition from QD states to the conduction band under broadband IR source excitation. Based on the understanding we have achieved, we find that the existence of the extended tailing density of states, together with the large mismatch of the transition strengths from the VB to the QDs and from the QDs to the CB, systematically calls into question the feasibility of the IBSC model with QDs.
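The extended Urbach tail referred to above is conventionally modeled as an exponential absorption edge below the bandgap. The sketch below evaluates that textbook form, alpha(E) = alpha_0 * exp((E - E_g)/E_U) for E < E_g, with hypothetical GaAs-like parameter values; it illustrates the standard model only, not the dissertation's fitted density-of-states model.

import numpy as np

def urbach_absorption(E, alpha_0, E_g, E_U):
    """Textbook Urbach-tail absorption coefficient below the bandgap:
    alpha(E) = alpha_0 * exp((E - E_g) / E_U) for photon energies E < E_g.
    E, E_g, E_U are in eV; alpha_0 sets the bandedge absorption scale.
    """
    E = np.asarray(E, dtype=float)
    return alpha_0 * np.exp((E - E_g) / E_U)

# Hypothetical GaAs-like numbers purely for illustration.
energies = np.linspace(1.1, 1.42, 5)          # below-gap photon energies (eV)
print(urbach_absorption(energies, alpha_0=8e3, E_g=1.42, E_U=0.02))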
Abstract:
We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted using a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer acting as a subroutine that executes a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers to questions. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with the issue of noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, using scalar-valued trust to model a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model to jointly infer multi-dimensional trust of workers, multi-domain properties of questions, and true labels of questions. Our model is very flexible and can be extended to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features while the other handles tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment that adapts to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, the workers that provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. Therefore, we propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, which allows users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers.
The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost. The cost is associated with both the level of trustworthiness of the worker and the difficulty of the task: typically, access to expert-level (more trustworthy) workers is more expensive than to the average crowd, and completion of a challenging task is more costly than a click-away question. Here we address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as inputs the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates the budget, the trustworthiness of the crowd, and the costs of obtaining labels from the crowd: a higher budget, more trustworthy crowds, and less costly jobs result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component; therefore, it can be combined with generic trust evaluation algorithms.
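As a minimal illustration of how scalar trust can weight noisy or malicious worker input (the simplest precursor of the multi-dimensional trust and logic-constrained models proposed in the thesis), the Python sketch below fuses crowd answers by trust-weighted voting. The worker names, trust scores, and answers are invented for illustration; this is not the thesis's probabilistic inference model.

from collections import defaultdict

def trust_weighted_vote(answers, trust):
    """Aggregate crowd answers by weighting each worker's vote with a scalar
    trust score. answers: {question: {worker: label}}, trust: {worker: weight}.
    Returns {question: fused label}.
    """
    fused = {}
    for q, votes in answers.items():
        score = defaultdict(float)
        for worker, label in votes.items():
            score[label] += trust.get(worker, 0.0)
        fused[q] = max(score, key=score.get)
    return fused

# Hypothetical example: w1 is the most trusted worker and outvotes the others.
answers = {"q1": {"w1": "A", "w2": "B", "w3": "B"},
           "q2": {"w1": "C", "w2": "C", "w3": "D"}}
trust = {"w1": 0.9, "w2": 0.3, "w3": 0.3}
print(trust_weighted_vote(answers, trust))   # {'q1': 'A', 'q2': 'C'}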
Abstract:
While humans can easily segregate and track a speaker's voice in a loud noisy environment, most modern speech recognition systems still perform poorly in loud background noise. The computational principles behind auditory source segregation in humans are not yet fully understood. In this dissertation, we develop a computational model for source segregation inspired by auditory processing in the brain. To support the key principles behind the computational model, we conduct a series of electroencephalography (EEG) experiments using both simple tone-based stimuli and more natural speech stimuli. Most source segregation algorithms utilize some form of prior information about the target speaker or use more than one simultaneous recording of the noisy speech mixtures; other methods build models of the noise characteristics. Source segregation of simultaneous speech mixtures with a single microphone recording and no knowledge of the target speaker is still a challenge. Using the principle of temporal coherence, we develop a novel computational model that exploits the difference in the temporal evolution of features that belong to different sources to perform unsupervised monaural source segregation. While using no prior information about the target speaker, the method can gracefully incorporate knowledge about the target speaker to further enhance the segregation. Through a series of EEG experiments we collect neurological evidence to support the principle behind the model. Aside from its unusual structure and computational innovations, the proposed model provides testable hypotheses of the physiological mechanisms of the remarkable perceptual ability of humans to segregate acoustic sources, and of its psychophysical manifestations in navigating complex sensory environments. Results from the EEG experiments provide further insights into the assumptions behind the model and motivate future single-unit studies that could provide more direct evidence for the principle of temporal coherence.
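The principle of temporal coherence rests on the observation that feature channels belonging to the same source rise and fall together over time. The Python sketch below computes a simple correlation-based coherence matrix over synthetic channel envelopes; it is a hedged illustration of the underlying idea, not the dissertation's segregation model.

import numpy as np

def coherence_matrix(features):
    """Correlation of the temporal trajectories of each feature channel.

    features: (n_channels, n_frames) array, e.g. spectrogram channel envelopes.
    Channels whose envelopes rise and fall together (high coherence) would be
    grouped into the same source; uncorrelated channels would be separated.
    """
    f = features - features.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(f, axis=1, keepdims=True) + 1e-12
    return (f / norms) @ (f / norms).T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 200)
    env_a = np.abs(np.sin(2 * np.pi * 3 * t))        # modulation of source A
    env_b = np.abs(np.sin(2 * np.pi * 7 * t + 1.0))  # modulation of source B
    chans = np.stack([env_a, env_a, env_b, env_b])
    chans = chans + 0.05 * rng.standard_normal(chans.shape)
    C = coherence_matrix(chans)
    print(np.round(C, 2))   # channels 0-1 and 2-3 form two coherent groups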
Abstract:
A 2-dimensional dynamic analog of squid tentacles is presented. The tentacle analog consists of a multi-cell structure, which can easily be replicated at a large scale. Each cell of the model is a quadrilateral with unit masses at the corners. Each side of the quadrilateral is a spring and damper in parallel, and the spring constants are the controls for the system. The dynamics are subject to the constraint that the area of each quadrilateral must remain constant. The system dynamics were analyzed, and various equilibrium points were found for different controls. These equilibrium points were then further examined through simulation experiments and demonstrated to be asymptotically stable. A simulation built in MATLAB was used to find the convergence rates under different controls and damping coefficients. Finally, a control scheme was developed and used to drive the system to several configurations observed in real tentacles.
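A rough feel for the building block of the cell model can be had from a single edge: two unit masses joined by a spring and damper in parallel. The Python sketch below integrates that one edge with invented parameter values; it omits the constant-area constraint and multi-cell coupling, so it is only an illustration of the edge dynamics, not the authors' MATLAB model.

import numpy as np

def simulate_edge(k, c, rest_len, x0, v0, dt=1e-3, steps=5000):
    """Integrate one edge of the cell model: two unit masses joined by a
    spring (stiffness k, rest length rest_len) and a damper (coefficient c)
    in parallel, using semi-implicit Euler. Returns the final separation."""
    x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
    for _ in range(steps):
        d = x[1] - x[0]
        stretch = d - rest_len
        rel_v = v[1] - v[0]
        f = -k * stretch - c * rel_v       # force on mass 1 along the edge
        a = np.array([-f, f])              # unit masses: acceleration = force
        v += a * dt
        x += v * dt
    return x[1] - x[0]

# The separation relaxes to the rest length, i.e. the equilibrium of the edge.
print(simulate_edge(k=5.0, c=1.0, rest_len=1.0, x0=[0.0, 1.8], v0=[0.0, 0.0]))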
Abstract:
In order to power our planet for the next century, clean energy technologies need to be developed and deployed. Photovoltaic solar cells, which convert sunlight into electricity, are a clear option; however, they currently supply 0.1% of US electricity due to the relatively high cost per watt of generation. Thus, our goal is to create more power from a photovoltaic device while simultaneously reducing its price. To accomplish this goal, we are creating new high efficiency anti-reflection coatings that allow more of the incident sunlight to be converted to electricity, using simple and inexpensive coating techniques that enable reduced manufacturing costs. Traditional anti-reflection coatings (consisting of thin layers of non-absorbing materials) rely on the destructive interference of the reflected light, causing more light to enter the device and subsequently get absorbed. While these coatings are used on nearly all commercial cells, they are wavelength dependent and are deposited using expensive processes that require elevated temperatures, which increase production cost and can be detrimental to some temperature-sensitive solar cell materials. We are developing two new classes of anti-reflection coatings (ARCs) based on textured dielectric materials: (i) a transparent, flexible paper technology that relies on optical scattering and reduced refractive index contrast between the air and the semiconductor, and (ii) silicon dioxide (SiO2) nanosphere arrays that rely on collective optical resonances. Both techniques improve solar cell absorption and ultimately yield high efficiency, low cost devices. For the transparent paper-based ARCs, we have recently shown that they improve solar cell efficiencies for all angles of incident illumination, reducing the need for costly tracking of the sun's position. For a GaAs solar cell, we achieved a 24% improvement in the power conversion efficiency using this simple coating. Because the transparent paper is made from an earth-abundant material (wood pulp) using an easy, inexpensive and scalable process, this type of ARC is an excellent candidate for future solar technologies. The coatings based on arrays of dielectric nanospheres also show excellent potential for inexpensive, high efficiency solar cells. The fabrication process is based on a Meyer rod rolling technique, which can be performed at room temperature and applied to mass production, yielding a scalable and inexpensive manufacturing process. The deposited monolayer of SiO2 nanospheres, having a diameter of 500 nm, on a bare Si wafer leads to a significant increase in light absorption and a higher expected current density based on initial simulations, on the order of 15-20%. When applied to a Si solar cell containing a traditional anti-reflection coating (a Si3N4 thin film), an additional increase in the spectral current density is observed, 5% beyond what a typical commercial device would achieve. Due to the coupling between the spheres originating from whispering gallery modes (WGMs) inside each nanosphere, the incident light is strongly coupled into the high-index absorbing material, leading to increased light absorption. Furthermore, the SiO2 nanospheres scatter and diffract light in such a way that both the optical and electrical properties of the device have little dependence on incident angle, eliminating the need for solar tracking.
Because the layer can be made with an easy, inexpensive, and scalable process, this anti-reflection coating is also an excellent candidate for replacing conventional technologies relying on complicated and expensive processes.
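For contrast with the textured coatings above, the conventional interference ARC that the abstract describes can be summarized in a few lines: a quarter-wave-thick layer minimizes reflectance at one design wavelength. The sketch below uses representative refractive indices (roughly Si3N4 on silicon), not values from the work itself.

def quarter_wave_arc(wavelength_nm, n_coating, n_substrate, n_ambient=1.0):
    """Classic single-layer interference ARC at normal incidence: a
    quarter-wave-thick coating gives design-wavelength reflectance
    R = ((n_ambient*n_substrate - n_coating**2) /
         (n_ambient*n_substrate + n_coating**2))**2."""
    thickness = wavelength_nm / (4.0 * n_coating)
    num = n_ambient * n_substrate - n_coating ** 2
    den = n_ambient * n_substrate + n_coating ** 2
    return thickness, (num / den) ** 2

# Representative values: Si3N4-like coating (n ~ 2.0) on silicon (n ~ 3.9) at 600 nm.
d, R = quarter_wave_arc(600.0, n_coating=2.0, n_substrate=3.9)
print(f"quarter-wave thickness = {d:.0f} nm, design-wavelength reflectance = {R:.4f}")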
Abstract:
We consider an LTE network where a secondary user acts as a relay, transmitting data to the primary user using a decode-and-forward mechanism, transparent to the base station (eNodeB). Clearly, the relay can decode symbols more reliably if the employed precoder matrix indicators (PMIs) are known. However, for the closed loop spatial multiplexing (CLSM) transmit mode, this information is not always embedded in the downlink signal, leading to a need for effective methods to determine the PMI. In this thesis, we consider 2x2 MIMO and 4x4 MIMO downlink channels corresponding to CLSM and formulate two techniques to estimate the PMI at the relay using a hypothesis testing framework. We evaluate their performance via simulations for various ITU channel models over a range of SNRs and for different channel quality indicators (CQIs). We compare them to the case when the true PMI is known at the relay and show that the performance of the proposed schemes is within 2 dB at 10% block error rate (BLER) in almost all scenarios. Furthermore, the techniques add minimal computational overhead to the existing receiver structure. Finally, we also identify scenarios in which using the proposed precoder detection algorithms in conjunction with the cooperative decode-and-forward relaying mechanism benefits the PUE and improves the BLER performance for the PUE. We therefore conclude that the proposed algorithms, as well as the cooperative relaying mechanism at the CMR, can be gainfully employed in a variety of real-life scenarios in LTE networks.
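The following toy sketch illustrates the flavor of choosing among PMI hypotheses at the relay: each candidate codebook precoder is scored against a channel estimate and the best-scoring index is selected. The decision statistic used here (effective channel gain) and the simplified 2x2 example values are assumptions chosen purely for illustration; the thesis formulates its own hypothesis tests.

import numpy as np

def detect_pmi(H, codebook):
    """Pick the precoder whose effective channel H @ W has the largest
    Frobenius norm -- a simple stand-in decision statistic for choosing
    among PMI hypotheses given a channel estimate H at the relay."""
    gains = [np.linalg.norm(H @ W) for W in codebook]
    return int(np.argmax(gains))

# Toy 2x2 example with two hypothetical rank-1 precoders.
codebook = [np.array([[1.0], [1.0]]) / np.sqrt(2),
            np.array([[1.0], [-1.0]]) / np.sqrt(2)]
H = np.array([[0.9, 0.8],
              [0.7, 0.6]])          # channel estimate favouring the first precoder
print("detected PMI index:", detect_pmi(H, codebook))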
Abstract:
This dissertation is concerned with the control, combining, and propagation of laser beams through a turbulent atmosphere. In the first part we consider adaptive optics: the process of controlling the beam based on information about the current state of the turbulence. If the target is cooperative and provides a coherent return beam, the phase can be measured near the beam transmitter and adaptive optics can, in principle, correct these fluctuations. However, for many applications the target is uncooperative. In this case, we show that an incoherent return from the target can be used instead. Using the principle of reciprocity, we derive a novel relation between the field at the target and the scattered field at a detector. We then demonstrate through simulation that an adaptive optics system can utilize this relation to focus a beam through atmospheric turbulence onto a rough surface. In the second part we consider beam combining. To achieve the power levels needed for directed energy (DE) applications it is necessary to combine a large number of lasers into a single beam. The large linewidths inherent in high-power fiber and slab lasers cause random phase and intensity fluctuations on sub-nanosecond time scales. We demonstrate that this presents a challenging problem when attempting to phase-lock high-power lasers. Furthermore, we show that even if instruments are developed that can precisely control the phase of high-power lasers, coherent combining remains problematic for DE applications: the dephasing effects of atmospheric turbulence typically encountered in DE applications will degrade the coherent properties of the beam before it reaches the target. Finally, we investigate the propagation of Bessel and Airy beams through atmospheric turbulence. It has been proposed that these quasi-non-diffracting beams could be resistant to the effects of atmospheric turbulence. However, we find that atmospheric turbulence disrupts the quasi-non-diffracting nature of Bessel and Airy beams when the transverse coherence length approaches the initial aperture diameter or diagonal, respectively. The turbulence-induced transverse phase distortion limits the effectiveness of Bessel and Airy beams for applications requiring propagation over long distances in the turbulent atmosphere.
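The transverse coherence length mentioned in the last paragraph is commonly quantified by the Fried parameter. The sketch below evaluates the standard plane-wave expression for a uniform turbulence path, with illustrative values of wavelength, Cn^2, and path length; it is a textbook estimate, not a result from the dissertation.

import numpy as np

def fried_parameter(wavelength_m, cn2, path_m):
    """Standard plane-wave Fried coherence length for a uniform turbulence
    path: r0 = (0.423 * k**2 * Cn2 * L)**(-3/5), with k = 2*pi/wavelength.
    This transverse coherence length is the scale the abstract compares
    against the Bessel/Airy beam aperture size."""
    k = 2.0 * np.pi / wavelength_m
    return (0.423 * k ** 2 * cn2 * path_m) ** (-3.0 / 5.0)

# Illustrative numbers: 1.55 um beam, moderate turbulence, 5 km path.
print(f"r0 ~ {fried_parameter(1.55e-6, cn2=1e-15, path_m=5e3) * 100:.1f} cm")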
Abstract:
Heterogeneous computing systems have become common in modern processor architectures. These systems, such as those released by AMD, Intel, and Nvidia, include both CPU and GPU cores on a single die, offering reduced communication overhead compared to their discrete predecessors. Currently, discrete CPU/GPU systems are limited, requiring larger, regular, highly parallel workloads to overcome the communication costs of the system. Without the traditional communication delay assumed between GPUs and CPUs, we believe non-traditional workloads could be targeted for GPU execution. Specifically, this thesis focuses on the execution model of nested parallel workloads on heterogeneous systems. We have designed a simulation flow that utilizes widely used CPU and GPU simulators to model heterogeneous computing architectures. We then applied this simulator to non-traditional GPU workloads using different execution models. We have also proposed a new execution model for nested parallelism, allowing users to exploit these heterogeneous systems to reduce execution time.
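To make the nested-parallel pattern concrete, the sketch below runs an outer parallel loop whose iterations each spawn their own batch of inner parallel tasks, using Python thread pools purely as a stand-in; the thesis targets CPU/GPU execution models in simulation, not this host-only illustration.

from concurrent.futures import ThreadPoolExecutor

def inner_task(x):
    """Fine-grained work that, in the thesis setting, could be offloaded to
    integrated GPU cores instead of CPU threads."""
    return x * x

def outer_task(chunk, pool):
    """Each outer iteration spawns its own batch of inner parallel tasks --
    the nested-parallel pattern the abstract targets."""
    return sum(pool.map(inner_task, chunk))

if __name__ == "__main__":
    data = [list(range(i, i + 4)) for i in range(0, 16, 4)]   # 4 outer chunks
    with ThreadPoolExecutor() as inner_pool:
        with ThreadPoolExecutor() as outer_pool:
            results = list(outer_pool.map(lambda c: outer_task(c, inner_pool), data))
    print(results)   # per-chunk sums of squares computed with nested parallelism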
Abstract:
At the Institut für Arbeitswissenschaft und Betriebsorganisation (ifab) of the University of Karlsruhe, the project LIVE-Fab (Lernen in der virtuellen Fabrik, "learning in the virtual factory") is currently being carried out together with the Fachhochschule Landshut, Department of Mechanical Engineering. The project is funded by the Bundesministerium für Bildung und Forschung (BMBF) within the programme "Neue Medien in der Bildung" (new media in education). The aim of the project is to develop a vivid teaching and learning model of a factory as a functioning whole. To this end, a model factory comprising the areas of goods receiving, manufacturing, assembly, and quality assurance is to be represented on the computer. The factory, with its facilities (machines, transport systems, etc.) and material flows, is to be made visually accessible in a 3D model. The fundamentals of creating a virtually functioning production system, including facility planning, work preparation, the underlying mechanisms, customer orders, and quality management, are to be conveyed to students through individual case studies. The virtual factory is intended to give students from the fields of mechanical engineering, industrial engineering, electrical engineering, and business administration with a technical orientation a tool with which they can better understand the complex, interlocking steps of a production process. This means that the virtual factory combines the content of several preceding lectures and thereby creates a framework for understanding production processes. (DIPF/Orig.)