844 results for Computer Engineering|Electrical engineering|Computer science


Relevance: 100.00%

Abstract:

III-Nitride materials have recently become promising candidates for applications superior to current technologies. However, certain issues, such as the lack of native substrates and high defect densities, have to be overcome for further development of III-Nitride technology. This work presents research on lattice engineering of III-Nitride materials and on the structural, optical, and electrical properties of their alloys, in order to approach the ideal material for various applications. We demonstrated the non-destructive and quantitative characterization of composition-modulated nanostructures in InAlN thin films with X-ray diffraction. We found that the development of the nanostructure depends on growth temperature and that the composition modulation affects carrier recombination dynamics. We also showed that the controlled relaxation of a very thin AlN buffer (20–30 nm) or a graded-composition InGaN buffer can significantly reduce the defect density of a subsequent epitaxial layer. Finally, we synthesized InAlGaN thin films and a multi-quantum-well structure. Significant emission enhancement in the UVB range (280–320 nm) was observed compared to AlGaN thin films. The nature of the enhancement was investigated experimentally and numerically, suggesting carrier confinement in the In localization centers.
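The lattice-engineering premise can be illustrated with Vegard's law. The sketch below is my own illustration, not taken from the thesis; the binary a-axis lattice constants are nominal literature values, and the indium fraction 0.17 is chosen only because InAlN near that composition is approximately lattice-matched to GaN.

    # Illustrative sketch (not from the thesis): in-plane lattice constant of a
    # ternary III-nitride alloy estimated with Vegard's law, and its mismatch to
    # a relaxed GaN template. Values are nominal literature lattice constants.

    A_LATTICE = {"AlN": 3.112, "GaN": 3.189, "InN": 3.545}  # a-axis, angstroms

    def vegard_a(x_in: float, other: str = "AlN") -> float:
        """a-lattice constant of In_x (other)_{1-x} N by linear interpolation."""
        return x_in * A_LATTICE["InN"] + (1.0 - x_in) * A_LATTICE[other]

    def mismatch_to_gan(a_alloy: float) -> float:
        """Fractional in-plane mismatch relative to a relaxed GaN template."""
        return (a_alloy - A_LATTICE["GaN"]) / A_LATTICE["GaN"]

    a = vegard_a(0.17, "AlN")   # In_{0.17}Al_{0.83}N, near lattice match to GaN
    print(f"a = {a:.3f} angstrom, mismatch to GaN = {mismatch_to_gan(a) * 100:+.2f}%")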

Relevance: 100.00%

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
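A minimal sketch (my own illustration, not the dissertation's code) of how principal angles between two subspaces can be computed from orthonormal bases, together with the product-of-sines and sum-of-squared-sines quantities that govern the two mismatch regimes described above:

    import numpy as np

    def principal_angles(A: np.ndarray, B: np.ndarray) -> np.ndarray:
        """Principal angles (radians) between the column spans of A and B."""
        Qa, _ = np.linalg.qr(A)          # orthonormal basis for span(A)
        Qb, _ = np.linalg.qr(B)          # orthonormal basis for span(B)
        s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
        return np.arccos(np.clip(s, -1.0, 1.0))

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 3))     # two 3-dimensional subspaces in R^50
    B = rng.standard_normal((50, 3))
    theta = principal_angles(A, B)
    print("product of sines:", np.prod(np.sin(theta)))        # small-mismatch regime
    print("sum of squared sines:", np.sum(np.sin(theta) ** 2))  # larger-mismatch regime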

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data and to suffer degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which makes it prone to overfitting. The manifold model provides a way of regularizing the learning machine so as to reduce the generalization error and thereby mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to align with the principal components of this adjacency matrix. Experimental results on benchmark datasets demonstrate a clear advantage of the proposed approaches when the training set is small.
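A rough sketch of the ingredients of the second approach, under my own simplifying assumptions (a symmetric k-nearest-neighbor adjacency built from Euclidean distances, and the regularizer written as the energy of the learned features outside the span of the top adjacency eigenvectors); this is illustrative only, not the dissertation's formulation:

    import numpy as np

    def knn_adjacency(X: np.ndarray, k: int = 10) -> np.ndarray:
        """Symmetric 0/1 k-nearest-neighbor adjacency over the rows of X (floats)."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)                 # exclude self-neighbors
        nn = np.argsort(d2, axis=1)[:, :k]
        W = np.zeros_like(d2)
        rows = np.repeat(np.arange(X.shape[0]), k)
        W[rows, nn.ravel()] = 1.0
        return np.maximum(W, W.T)                    # symmetrize

    def alignment_penalty(F: np.ndarray, W: np.ndarray, m: int = 5) -> float:
        """Energy of features F (n x d) outside the span of W's top-m eigenvectors."""
        _, vecs = np.linalg.eigh(W)
        U = vecs[:, -m:]                             # principal components of W
        residual = F - U @ (U.T @ F)
        return float((residual ** 2).sum())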

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods with affine subspaces, a slowly varying manifold can be tracked efficiently as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure. Splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
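As a toy illustration of the deviation statistic (again my own sketch, not the dissertation's tracking algorithm), an anomaly score for a new datum can be taken as its residual energy after projection onto the currently estimated local affine subspace:

    import numpy as np

    def residual_score(x: np.ndarray, mean: np.ndarray, U: np.ndarray) -> float:
        """Energy of x outside the affine subspace (mean + span of columns of U)."""
        centered = x - mean
        residual = centered - U @ (U.T @ centered)
        return float(residual @ residual)

    # Fit a 2-D local model from recent (clean) data, then score a new sample.
    rng = np.random.default_rng(1)
    window = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 20))
    mean = window.mean(axis=0)
    _, _, Vt = np.linalg.svd(window - mean, full_matrices=False)
    U = Vt[:2].T                                   # top-2 principal directions
    print(residual_score(window[0], mean, U))               # small: fits the model
    print(residual_score(rng.standard_normal(20), mean, U)) # larger: anomalous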

Relevance: 100.00%

Abstract:

Peer mentoring has been a success for everyone involved, resulting in a ‘win-win-win’ situation for mentors, mentees, and university schools and departments (Andrews and Clark, 2011). Mentors have the opportunity to develop key transferable skills such as communication and leadership, which in turn can enhance their employability. There is also the potential to increase and develop social and academic confidence. For mentees, the benefits include the opportunity to gain advice, encouragement and support during the transition from school/college/work to university, along with the opportunity to gain an insight into the stages of university life by learning the "rules of the game". Through peer mentoring schemes, university schools and departments are meeting the demand to support student success while assisting student transition and reducing attrition. This paper focuses on the peer mentoring scheme set up in the School of Electronics, Electrical Engineering and Computer Science at Queen’s University Belfast, specifically the development of employability skills through company involvement in the scheme.

Relevance: 100.00%

Abstract:

This research paper presents a five-step algorithm to generate tool paths for machining Free form / Irregular Contoured Surface(s) (FICS) by adopting the STEP-NC (AP-238) format. In the first step, a parametrized CAD model with FICS is created or imported in the UG-NX6.0 CAD package. The second step recognizes the features and calculates a Closeness Index (CI) by comparing them with B-Spline / Bezier surfaces. The third step utilizes the CI and extracts the data necessary to formulate the blending functions for the identified features. In the fourth step, Z-level 5-axis tool paths are generated using flat and ball end mill cutters. Finally, in the fifth step, the tool paths are integrated with the STEP-NC format and validated. All these steps are discussed and explained through a validated industrial component.
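The abstract does not define the Closeness Index; purely as a hypothetical illustration of the idea, the sketch below fits a bicubic Bezier patch to sampled surface points by least squares and reports a normalized RMS deviation. The function names and the normalization are my own assumptions, not the paper's definition.

    import numpy as np
    from math import comb

    def bernstein(n: int, i: int, t: np.ndarray) -> np.ndarray:
        """Bernstein basis polynomial B_{i,n}(t)."""
        return comb(n, i) * t**i * (1.0 - t)**(n - i)

    def closeness_index(P: np.ndarray, uv: np.ndarray, degree: int = 3) -> float:
        """RMS deviation of points P (m x 3) from a least-squares Bezier patch
        fitted at parameters uv (m x 2), normalized by the bounding-box diagonal
        (a hypothetical CI definition for illustration only)."""
        n = degree
        u, v = uv[:, 0], uv[:, 1]
        # Design matrix of tensor-product Bernstein basis functions.
        B = np.column_stack([bernstein(n, i, u) * bernstein(n, j, v)
                             for i in range(n + 1) for j in range(n + 1)])
        ctrl, *_ = np.linalg.lstsq(B, P, rcond=None)   # control points, (n+1)^2 x 3
        residual = P - B @ ctrl
        rms = np.sqrt((residual ** 2).sum(axis=1).mean())
        diag = np.linalg.norm(P.max(axis=0) - P.min(axis=0))
        return rms / diag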

Relevance: 100.00%

Abstract:

This research paper presents work on feature recognition, tool path data generation and integration with STEP-NC (AP-238 format) for features having Free form / Irregular Contoured Surface(s) (FICS). Initially, the FICS features are modelled or imported in the UG CAD package and a closeness index is generated. This is done by comparing the FICS features with basic B-Spline / Bezier curves and surfaces. Blending functions are then calculated by adopting the convolution theorem. Based on the blending functions, contour offset tool paths are generated and simulated for a 5-axis milling environment. Finally, the tool path (CL) data is integrated with the STEP-NC (AP-238) format. The tool path algorithm and the STEP-NC data are tested on various industrial parts through an automated UFUNC plugin.
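The abstract does not spell out how the convolution theorem is applied; purely as an illustration of convolution-based blending, the sketch below smooths a stepped offset profile with a normalized kernel, carrying out the convolution as a pointwise product of FFTs (the kernel choice and the example profile are hypothetical, not from the paper).

    import numpy as np

    def blend_profile(profile: np.ndarray, width: int = 9) -> np.ndarray:
        """Smooth a 1-D offset profile with a normalized Hann kernel, applying
        the convolution via the convolution theorem (product of FFTs)."""
        kernel = np.hanning(width)
        kernel /= kernel.sum()
        n = len(profile) + width - 1          # full linear convolution length
        smoothed = np.fft.irfft(np.fft.rfft(profile, n) * np.fft.rfft(kernel, n), n)
        return smoothed[width // 2 : width // 2 + len(profile)]   # align to input

    # Hypothetical offset profile with a sharp step between two contour levels.
    z = np.concatenate([np.full(50, 2.0), np.full(50, 5.0)])
    z_blended = blend_profile(z)    # smooth transition usable for contour offsets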

Relevance: 100.00%

Abstract:

With the new academic year structure encouraging more in-term assessment to replace end-of-year examinations, one of the problems we face is assessing students and keeping track of individual student learning without overloading students and staff with an excessive assessment burden.
In the School of Electronics, Electrical Engineering and Computer Science, we have constructed a system that allows students to self-assess their capability on a simple Yes/No/Don’t Know scale against fine-grained learning outcomes for a module. As the term progresses, students update their records as appropriate, including selecting a Learnt option to reflect improvements they have gained as part of their studies.
In the system, each learning outcome is linked to the relevant teaching sessions (lectures and labs) and to online resources that students can access at any time. Students can structure their own learning experience to suit their needs and preferences in order to attain the learning outcomes.
The system keeps a history of the student’s record, allowing the lecturer to observe how students’ abilities progress over the term and to compare this with assessment results. The system also keeps a record of any resource links the student has clicked on and the related learning outcome.
The initial work compares the accuracy of the student self-assessments with their performance on the related questions in the traditional end-of-year examination.
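A minimal sketch of the kind of record such a system might keep, illustrating the self-assessment scale, per-outcome history and resource-click log described above; the field names and structure are my own guesses, not the School's implementation.

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum
    from typing import List

    class SelfRating(Enum):
        YES = "Yes"
        NO = "No"
        DONT_KNOW = "Don't Know"
        LEARNT = "Learnt"            # improvement gained during the module

    @dataclass
    class OutcomeEntry:
        outcome_id: str              # fine-grained learning outcome for the module
        rating: SelfRating
        timestamp: datetime = field(default_factory=datetime.now)

    @dataclass
    class StudentRecord:
        student_id: str
        history: List[OutcomeEntry] = field(default_factory=list)
        clicked_resources: List[str] = field(default_factory=list)  # resource links followed

        def update(self, outcome_id: str, rating: SelfRating) -> None:
            """Append a new self-assessment; older entries remain as history."""
            self.history.append(OutcomeEntry(outcome_id, rating))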

Relevance: 100.00%

Abstract:

The development of new learning models has been of great importance in recent years, with a focus on creating advances in the area of deep learning. Deep learning was first noted in 2006 and has since become a major area of research in a number of disciplines. This paper delves into the area of deep learning to present its current limitations and to propose a new idea for a fully integrated deep and dynamic probabilistic system. The new model will be applicable to a vast number of areas, initially focusing on applications to medical image analysis, with the overall goal of utilising this approach for prediction purposes in computer-based medical systems.

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 100.00%

Abstract:

Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may never have been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibits more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data.

In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks.

In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it. Taking advantage of the parallel nature of this algorithm, we describe how the method can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples.

In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
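The iterative shrinkage step mentioned above can be illustrated in generic form: the sketch below is textbook ISTA for an l1-regularized least-squares problem, not the dissertation's shearlet-domain algorithm, and the toy data are my own.

    import numpy as np

    def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
        """Elementwise soft-thresholding (shrinkage) operator."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A: np.ndarray, y: np.ndarray, lam: float, n_iter: int = 200) -> np.ndarray:
        """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative shrinkage."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Toy use: recover a sparse vector from noisy linear measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[[5, 42, 137]] = [3.0, -2.0, 1.5]
    y = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = ista(A, y, lam=0.1)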

Relevance: 100.00%

Abstract:

Deployment of low-power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low-power and macro basestations that are located within each other’s transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains, and we show that the proposed algorithms perform better than current state-of-the-art resource management algorithms.

In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low-power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, the proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations.

In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated to exactly one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization approach in which the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in either case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization.

In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on optimizing a difference of submodular functions. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference-of-convex-functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance under variation of different network topology parameters.
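For reference, the alpha-fairness utility mentioned above has the standard form used in the fairness literature (this definition is not quoted from the dissertation): U_alpha(x) = x^(1-alpha)/(1-alpha) for alpha != 1, and U_1(x) = log(x). A minimal sketch:

    import math

    def alpha_fair_utility(x: float, alpha: float) -> float:
        """Standard alpha-fair utility of a positive rate x.
        alpha = 0 gives sum-rate, alpha -> 1 gives proportional fairness (log),
        and alpha -> infinity approaches max-min fairness."""
        if x <= 0:
            raise ValueError("rate must be positive")
        if math.isclose(alpha, 1.0):
            return math.log(x)
        return x ** (1.0 - alpha) / (1.0 - alpha)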

Relevance: 100.00%

Abstract:

The thesis aims to exploit the properties of thin films for applications such as spintronics, UV detection and gas sensing. Nanoscale thin-film devices have myriad advantages and are compatible with Si-based integrated-circuit processes. Two distinct classes of material systems are investigated, namely ferromagnetic thin films and semiconductor oxides. To aid the design of devices, the surface properties of the thin films were investigated using electron and photon characterization techniques including Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS), grazing incidence X-ray diffraction (GIXRD), and energy-dispersive X-ray spectroscopy (EDS). These are complemented by nanometer-resolved local proximal probes such as atomic force microscopy (AFM), magnetic force microscopy (MFM), electric force microscopy (EFM), and scanning tunneling microscopy, to elucidate the interplay between stoichiometry, morphology, chemical states, crystallization, magnetism, optical transparency, and electronic properties.

Specifically, I studied the effect of annealing on the surface stoichiometry of the CoFeB/Cu system by in-situ AES and discovered that magnetic nanoparticles with controllable areal density can be produced, a good alternative for producing nanoparticles in a maskless process. Additionally, I studied the behavior of magnetic domain walls in patterned nanowires of the low-coercivity alloy CoFeB. MFM measurements with an in-plane magnetic field showed that, compared to their permalloy counterparts, CoFeB nanowires require a much smaller magnetization switching field, making them promising for low-power-consumption devices based on domain wall motion.

With oxides, I studied CuO nanoparticles on SnO2-based UV photodetectors (PDs) and discovered that they enhance the responsivity by facilitating charge transfer through the formed nanoheterojunctions. I also demonstrated UV PDs with spectrally tunable photoresponse using bandgap-engineered ZnMgO. The bandgap of the alloyed ZnMgO thin films was tailored by varying the Mg content, and AES was demonstrated as a surface-scientific approach to assess the alloying of ZnMgO.

With gas sensors, I discovered that rf-sputtered anatase-TiO2 thin films enable selective and sensitive NO2 detection at room temperature under UV illumination. The UV illumination significantly enhances the responsivity and the response and recovery rates of the TiO2 sensor towards NO2. As is evident from the high-resolution XPS and AFM studies, surface contamination and the morphology of the thin films degrade the gas-sensing response. I also demonstrated that additive metal nanoparticles on the surface of thin films can improve the response and the selectivity of oxide-based sensors. Finally, I employed nanometer-scale scanning probe microscopy to study a novel gas sensor scheme consisting of gallium nitride (GaN) nanowires with a functionalizing oxide layer. The results suggest that AFM together with EFM is capable of discriminating low-conductivity materials at the nanoscale, providing a nondestructive method to quantitatively relate the sensing response to the surface morphology.

Relevance: 100.00%

Abstract:

Over the last decade, the success of social networks has significantly reshaped how people consume information, and recommendation of content based on user profiles is well received. However, as users become predominantly mobile, little has been done to consider the impact of the wireless environment, especially capacity constraints and the changing channel. In this dissertation, we investigate a centralized wireless content delivery system, aiming to optimize overall user experience given the capacity constraints of the wireless networks by deciding what contents to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach exploits the broadcast nature of wireless communication and the social nature of content through multicasting and precaching. Results indicate that this joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. By utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system that handles transmissions for both system-recommended contents ('push') and active user requests ('pull').

Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder in that there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit with dedicated spectrum resources and thus do not interfere; and 'in-band', in which they share the spectrum and need to mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocations individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions.

Additionally, if the social network applications can provide predictions of how the social contents disseminate, the wireless networks can schedule the transmissions accordingly and significantly improve the dissemination performance by reducing the delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active dissemination requests; and 2) predictions of dissemination dynamics from the social network applications. This method can mitigate the performance degradation of content dissemination due to wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
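As a toy illustration of the consolidation phase described above (my own sketch with hypothetical data structures, not the dissertation's algorithm), duplicate deliveries of the same content to the same user proposed by different base stations can be pruned so that each (content, user) pair is served once:

    from typing import Dict, List, Set, Tuple

    # Per-base-station delivery decisions from phase 1:
    # base station id -> list of (content id, set of target user ids) multicasts.
    Decisions = Dict[int, List[Tuple[str, Set[int]]]]

    def consolidate(decisions: Decisions) -> Decisions:
        """Phase 2: drop users already covered by an earlier base station's
        multicast of the same content, removing redundant transmissions."""
        served: Set[Tuple[str, int]] = set()      # (content, user) pairs covered
        consolidated: Decisions = {}
        for bs, plans in decisions.items():
            kept = []
            for content, users in plans:
                remaining = {u for u in users if (content, u) not in served}
                if remaining:                     # keep multicast only if it still serves someone
                    kept.append((content, remaining))
                    served.update((content, u) for u in remaining)
            consolidated[bs] = kept
        return consolidated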

Relevance: 100.00%

Abstract:

In this thesis, we present a quantitative approach using probabilistic verification techniques for the analysis of the reliability, availability, maintainability, and safety (RAMS) properties of satellite systems. The subject of our research is satellites used in mission-critical industrial applications. Our verification results make a strong case for using probabilistic model checking to support RAMS analysis of satellite systems. This study is intended to build a foundation that helps reliability engineers with a basic background in model checking to apply probabilistic model checking to small satellite systems.

We make two major contributions. The first is the application of RAMS analysis to satellite systems. In the past, RAMS analysis has been extensively applied in the field of electrical and electronics engineering; it allows system designers and reliability engineers to predict the likelihood of failures from historical or current operational data. There is high potential for the application of RAMS analysis in the field of space science and engineering. However, there is a lack of standardisation and of suitable procedures for the correct study of the RAMS characteristics of satellite systems. This thesis considers the promising application of RAMS analysis to satellite design, use, and maintenance, focusing on the system segments. Data collection and verification procedures are discussed, and a number of considerations are also presented on how to predict the probability of failure.

Our second contribution is leveraging the power of probabilistic model checking to analyse satellite systems. We present techniques for analysing satellite systems that differ from the more common quantitative approaches based on traditional simulation and testing; these techniques have not been applied in this context before. We present the use of probabilistic techniques via a suite of detailed examples, together with their analysis. The presentation is incremental, in terms of the complexity of the application domains and system models, and includes a detailed PRISM model of each scenario. We also provide results from practical work, together with a discussion of possible future improvements.
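To give a flavour of the kind of quantity such an analysis produces, a generic textbook example (not one of the thesis's PRISM models): a single repairable component that fails at rate lambda and is repaired at rate mu forms a two-state continuous-time Markov chain whose steady-state availability is mu/(lambda + mu), equivalently MTTF/(MTTF + MTTR).

    def steady_state_availability(mttf_hours: float, mttr_hours: float) -> float:
        """Steady-state availability of a repairable component modelled as a
        two-state continuous-time Markov chain: A = MTTF / (MTTF + MTTR)."""
        return mttf_hours / (mttf_hours + mttr_hours)

    # Example: a unit with a 50,000 h mean time to failure and a 72 h mean repair time.
    print(f"{steady_state_availability(50_000.0, 72.0):.5f}")   # ~0.99856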