Abstract:
The new configuration proposed in this paper for the Marx generator (MG) aims to generate high voltage for pulsed power applications with a reduced number of semiconductor components and a more efficient load-supplying process. The main idea is to charge two groups of capacitors in parallel through an inductor, taking advantage of the resonance phenomenon to charge each capacitor up to twice the input voltage level. In each resonant half-cycle, one of the capacitor groups is charged; eventually the charged capacitors are connected in series so that the sum of the capacitor voltages appears at the output of the topology. This topology can be considered a modified Marx generator that operates on the resonant charging principle. Simulation models of the converter have been investigated in the Matlab/Simulink platform, and the results confirm the anticipated operation of the converter.
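The voltage-doubling claim follows from the standard series-resonant charging transient. As a brief worked equation (a sketch of the underlying principle under ideal lossless assumptions, not the paper's own derivation): charging a capacitor C from a DC source V_in through an inductor L, with the capacitor initially discharged, gives

```latex
% Ideal series LC charging from a DC source V_in (lossless assumption):
v_C(t) = V_{in}\,\bigl(1 - \cos\omega_0 t\bigr), \qquad \omega_0 = \frac{1}{\sqrt{LC}}
% The peak occurs after half a resonant cycle, at t = \pi/\omega_0:
v_C\!\left(\frac{\pi}{\omega_0}\right) = 2\,V_{in}
```

so each resonant half-cycle can leave one capacitor group charged to twice the input voltage, as the abstract states.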
Abstract:
Uncooperative iris identification systems at a distance suffer from poor resolution of the captured iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve recognition performance. However, all existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values. This paper considers transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain-specific information from iris models, improved recognition performance compared to pixel-domain super-resolution can be achieved. This is the first paper to investigate the possibility of feature-domain super-resolution for iris recognition, and experiments confirm the validity of the proposed approach.
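As a rough illustration of what moving super-resolution into the feature domain means, the toy sketch below fuses per-frame iris feature vectors (rather than pixel intensities) into a single template, weighting frames by a quality score. The function and weighting scheme are hypothetical simplifications, not the paper's method, which additionally incorporates iris-model priors.

```python
import numpy as np

def fuse_features(frame_features, quality_weights):
    """Hypothetical feature-domain fusion: combine per-frame iris feature
    vectors into one template by quality-weighted averaging, instead of
    averaging pixel intensities."""
    w = np.asarray(quality_weights, dtype=float)
    w = w / w.sum()                          # normalise the quality weights
    return w @ np.asarray(frame_features)    # weighted mean in feature space

# Toy usage: 5 low-resolution frames, each yielding a 128-dim feature vector.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 128))
template = fuse_features(feats, [0.9, 0.4, 0.8, 0.2, 0.7])
print(template.shape)                        # -> (128,)
```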
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
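As a small taste of the core Part I material, the following sketch (illustrative only, not an example taken from the book) uses the discrete Fourier transform to recover the components of a noisy multicomponent signal, one of the tasks cited above as beyond ASP:

```python
import numpy as np

# Illustrative DSP example: estimate the spectrum of a noisy
# multicomponent signal with the Discrete Fourier Transform (DFT).
fs = 1000                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)              # 1 second of samples
x = (np.sin(2 * np.pi * 50 * t)            # 50 Hz component
     + 0.5 * np.sin(2 * np.pi * 120 * t)   # 120 Hz component
     + 0.3 * np.random.default_rng(0).normal(size=t.size))  # additive noise

X = np.fft.rfft(x)                         # DFT computed via the FFT
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(np.abs(X))[-2:]]  # two strongest spectral lines
print(sorted(peaks))                       # -> [50.0, 120.0]
```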
Abstract:
The School of Electrical and Electronic Systems Engineering at Queensland University of Technology, Brisbane, Australia (QUT), offers three bachelor degree courses in electrical and computer engineering. In all its courses there is a strong emphasis on signal processing. A newly established Signal Processing Research Centre (SPRC) has played an important role in the development of the signal processing units in these courses. This paper describes the unique design of the undergraduate program in signal processing at QUT, the laboratories developed to support it, and the criteria that influenced the design.
Abstract:
One of the fundamental motivations underlying computational cell biology is to gain insight into the complicated dynamical processes taking place, for example, on the plasma membrane or in the cytosol of a cell. These processes are often so complicated that purely temporal mathematical models cannot adequately capture the complex chemical kinetics and transport processes of, for example, proteins or vesicles. On the other hand, spatial models such as Monte Carlo approaches can have very large computational overheads. This chapter gives an overview of the state of the art in the development of stochastic simulation techniques for the spatial modelling of dynamic processes in a living cell.
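To make the flavour of such techniques concrete, here is a minimal compartment-based stochastic simulation sketch in the spirit of the Gillespie algorithm; the voxel layout, rate constants, and reflecting boundaries are illustrative assumptions, not a specific method from the chapter.

```python
import numpy as np

# Compartment-based Gillespie sketch: molecules diffuse along a 1-D row of
# voxels (hopping to a neighbour) and degrade, with exact stochastic timing.
rng = np.random.default_rng(1)
n_vox, d, k_deg = 10, 5.0, 0.1        # voxels, hop rate, degradation rate
x = np.zeros(n_vox, dtype=int)
x[0] = 100                            # all molecules start in voxel 0
t, t_end = 0.0, 50.0

while t < t_end and x.sum() > 0:
    hop = d * x                       # per-voxel diffusion propensities
    deg = k_deg * x                   # per-voxel degradation propensities
    a0 = hop.sum() + deg.sum()
    t += rng.exponential(1.0 / a0)    # exponential time to next event
    r = rng.uniform(0, a0)            # pick the event proportionally
    if r < hop.sum():                 # a molecule hops left or right
        i = np.searchsorted(np.cumsum(hop), r)
        j = min(max(i + rng.choice([-1, 1]), 0), n_vox - 1)  # reflect at ends
        x[i] -= 1; x[j] += 1
    else:                             # a molecule degrades
        i = np.searchsorted(np.cumsum(deg), r - hop.sum())
        x[i] -= 1

print(t, x)                           # final time and voxel occupancies
```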
Abstract:
In public places, crowd size may be an indicator of congestion, delay, instability, or of abnormal events such as a fight, riot or emergency. Crowd-related information can also provide important business intelligence, such as the distribution of people throughout spaces, throughput rates, and local densities. A major drawback of many crowd counting approaches is their reliance on large numbers of holistic features, training data requirements of hundreds or thousands of frames per camera, and the need to train each camera separately. This makes deployment in large multi-camera environments such as shopping centres very costly and difficult. In this chapter, we present a novel scene-invariant crowd counting algorithm that uses local features to monitor crowd size. The use of local features allows the proposed algorithm to calculate local occupancy statistics, scale to conditions unseen in the training data, and be trained on significantly less data. Scene invariance is achieved through the use of camera calibration, allowing the system to be trained on one or more viewpoints and then deployed on any number of new cameras without further training. A pre-trained system could then be used as a 'turn-key' solution for crowd counting across a wide range of environments, eliminating many of the costly barriers to deployment which currently exist.
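The following toy sketch illustrates the general idea of calibration-normalised local features feeding a regressor; the specific features (blob areas), calibration model (a single metres-per-pixel scale), and linear regressor are simplifying assumptions, not the chapter's actual algorithm.

```python
import numpy as np
from numpy.linalg import lstsq

def frame_feature(blob_areas_px, m2_per_px):
    """Local features for one frame, normalised by camera calibration:
    total foreground area in square metres, plus the blob count."""
    return np.array([np.sum(blob_areas_px) * m2_per_px, len(blob_areas_px)])

# Hypothetical training data from camera A: blob areas (px) and true counts.
frames = [([400.0, 850.0], 3), ([1200.0, 300.0, 500.0], 5), ([2000.0], 4)]
X = np.array([frame_feature(a, m2_per_px=0.002) for a, _ in frames])
y = np.array([c for _, c in frames])
w, *_ = lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)  # linear regressor

# Deployment on camera B with a different calibration, no retraining:
feat_b = frame_feature([900.0, 1100.0], m2_per_px=0.005)
print(np.r_[feat_b, 1.0] @ w)                            # estimated count
```

Because the features are expressed in world units (square metres) rather than pixels, the regressor learned on one calibrated viewpoint transfers to another, which is the essence of the scene-invariance claim.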
Abstract:
The University of Queensland has recently established a new design-focused, studio-based computer science degree. The Bachelor of Information Environments degree augments the core courses from the University's standard CS degree with a stream of design courses and integrative studio-based projects undertaken every semester. The studio projects integrate and reinforce learning by requiring students to apply the knowledge and skills gained in other courses to open-ended real-world design projects. The studio model is based on the architectural studio and involves teamwork, collaborative learning, interactive problem solving, presentations and peer review. This paper describes the degree program, its curriculum and rationale, and reports on experiences in the first year of delivery.
Abstract:
If we are stepping out of windows, what are we stepping into? We suggest it is into cooperative buildings. For the foreseeable future, at least, we can identify two major characteristics of the cooperative building. The spaces of the building will be augmented in various ways, providing an ambient environment that bridges spatial discontinuities in workgroups and provides a continuous window into the state of the virtual world. Secondly, the ways in which the spaces themselves are used will evolve to be more congruent with the fluid, dynamic and distributed nature of the work taking place in the building. These two characteristics are deeply interconnected. This evolution need not happen entirely in the physical world; the essence of a cooperative building will become the way in which it mixes both physical and virtual affordances to support the workaday activities of its inhabitants.
Abstract:
This article presents a novel approach to confidentiality violation detection based on taint marking. Information flows are dynamically tracked between applications and objects of the operating system such as files, processes and sockets. A confidentiality policy is defined by labelling sensitive information and defining which information may leave the local system through network exchanges. Furthermore, per-application profiles can be defined to restrict the sets of information each application may access and/or send through the network. In previous work, we focused on the use of mandatory access control mechanisms for information flow tracking. In the current work, we have extended the previous information flow model to track network exchanges, and we are able to define a policy attached to network sockets. We show an example application of this extension in the context of a compromised web browser: our implementation detects a confidentiality violation when the browser attempts to leak private information to a remote host over the network.
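A toy sketch of the mechanism follows (the real implementation operates inside the operating system; the labels, policy format, and function names here are hypothetical):

```python
# Taint-marking confidentiality enforcement, in miniature: sensitive objects
# carry taint labels, labels propagate along information flows, and a
# per-socket policy decides which labels may leave the host.

taints = {"/home/alice/secrets.txt": {"private"}}   # labelled sensitive file
socket_policy = {("evil.example.com", 80): set()}   # may receive no labels

def read_file(proc_taints, path):
    """Reading a file propagates the file's taint labels to the process."""
    return proc_taints | taints.get(path, set())

def send(proc_taints, dest):
    """Allow a send only if the destination may receive every carried label."""
    allowed = socket_policy.get(dest, set())
    if not proc_taints <= allowed:
        raise PermissionError(f"confidentiality violation: {proc_taints} -> {dest}")

browser = set()                                          # untainted process
browser = read_file(browser, "/home/alice/secrets.txt")  # now tainted
try:
    send(browser, ("evil.example.com", 80))              # leak attempt
except PermissionError as e:
    print(e)                                             # violation detected
```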
Abstract:
This paper presents a practical framework to synthesize multi-sensor navigation information for localization of a rotary-wing unmanned aerial vehicle (RUAV) and estimation of unknown ship positions when the RUAV approaches the landing deck. The estimation performance of the visual tracking sensor can also be improved through integrated navigation. Three different sensors (inertial navigation, Global Positioning System, and visual tracking) are utilized in a complementary manner to perform the navigation tasks for the purpose of automatic landing. An extended Kalman filter (EKF) is developed to fuse data from the various navigation sensors and provide reliable navigation information. The performance of the fusion algorithm has been evaluated using real ship motion data. Simulation results suggest that the proposed method can be used to construct a practical navigation system for a UAV-ship landing system.
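The following minimal sketch shows the predict/update cycle by which a Kalman filter fuses sensors of differing accuracy; it is a linear 1-D toy with made-up noise values, not the paper's full EKF with INS/GPS/vision models.

```python
import numpy as np

def predict(x, P, F, Q):
    """Propagate the state estimate and its covariance one time step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Fold in one sensor measurement z with noise covariance R."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

dt = 0.1
F = np.array([[1, dt], [0, 1]])            # constant-velocity motion model
Q = 1e-3 * np.eye(2)
H_gps, R_gps = np.array([[1.0, 0.0]]), np.array([[4.0]])   # coarse GPS fix
H_vis, R_vis = np.array([[1.0, 0.0]]), np.array([[0.25]])  # finer vision fix

x, P = np.zeros(2), np.eye(2)
for z_gps, z_vis in [(1.9, 2.1), (2.4, 2.2), (2.8, 2.6)]:  # toy measurements
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, np.array([z_gps]), H_gps, R_gps)   # fuse GPS
    x, P = update(x, P, np.array([z_vis]), H_vis, R_vis)   # fuse vision
print(x)                                   # fused position/velocity estimate
```

Weighting each sensor by its noise covariance is what lets the filter lean on the more accurate vision fix near the deck while still using GPS, which is the complementarity the abstract describes.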
Abstract:
Existing recommendation systems often recommend products to users by capturing item-to-item and user-to-user similarity measures. These types of recommendation systems become inefficient in people-to-people networks for people-to-people recommendation, which requires a two-way relationship. Moreover, existing recommendation methods use traditional two-dimensional models to find interrelationships between similar users and items. Two-dimensional models are not sufficient for modelling a people-to-people network, as the latent correlations between people and their attributes are not utilized. In this paper, we propose a novel tensor decomposition-based method for recommending people to people based on users' profiles and their interactions. People-to-people network data is multi-dimensional; when modeled using vector-based methods, information tends to be lost, as such methods capture either the interactions or the attributes of the users, but not both. This paper utilizes tensor models, which have the ability to correlate and find latent relationships between similar users based on both kinds of information, user interactions and user attributes, in order to generate recommendations. Empirical analysis is conducted on a real-life online dating dataset. As the results demonstrate, the use of tensor modeling and decomposition has enabled the identification of latent correlations between people based on their attributes and interactions in the network, and quality recommendations have been derived using the 'alike' users concept.
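The sketch below illustrates the tensor approach in miniature: a three-way users x users x interaction-type tensor is factorised with a few rounds of CP alternating least squares, and the reconstruction scores unseen pairs. The tensor construction, rank, and toy data are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product, matching the unfolding order below."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=50, seed=0):
    """A few rounds of CP alternating least squares on a 3-way tensor."""
    rng = np.random.default_rng(seed)
    dims = T.shape
    F = [rng.normal(size=(d, rank)) for d in dims]
    unf = [np.moveaxis(T, m, 0).reshape(dims[m], -1) for m in range(3)]
    for _ in range(n_iter):
        for m in range(3):
            a, b = [F[i] for i in range(3) if i != m]
            # Standard ALS update; the Gram matrices multiply elementwise.
            F[m] = unf[m] @ khatri_rao(a, b) @ np.linalg.pinv((a.T @ a) * (b.T @ b))
    return F

# Toy 6-person network with 2 interaction types (e.g. 'viewed', 'messaged').
T = np.zeros((6, 6, 2))
T[0, 1, 0] = T[1, 0, 0] = T[0, 1, 1] = 1.0    # users 0 and 1 interact
T[2, 3, 0] = T[3, 2, 0] = T[2, 3, 1] = 1.0    # users 2 and 3 interact
T[4, 5, 0] = 1.0                              # weaker 4 -> 5 link
A, B, C = cp_als(T, rank=2)
scores = np.einsum('ir,jr,kr->ijk', A, B, C)  # reconstructed affinity tensor
print(scores[4, 5, 1])                        # latent 'messaged' score for 4 -> 5
```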
Abstract:
Client puzzles are moderately hard cryptographic problems, neither easy nor impossible to solve, that can be used as a countermeasure against denial-of-service attacks on network protocols. Puzzles based on modular exponentiation are attractive as they provide important properties such as non-parallelisability, deterministic solving time, and linear granularity. We propose an efficient client puzzle based on modular exponentiation. Our puzzle requires only a few modular multiplications for puzzle generation and verification. For a server under denial-of-service attack, this is a significant improvement, as the best known non-parallelisable puzzle, proposed by Karame and Capkun (ESORICS 2010), requires at least a 2k-bit modular exponentiation, where k is a security parameter. We show that our puzzle satisfies the unforgeability and difficulty properties defined by Chen et al. (Asiacrypt 2009). We present experimental results which show that, for 1024-bit moduli, our proposed puzzle can be up to 30 times faster to verify than the Karame-Capkun puzzle and 99 times faster than Rivest et al.'s time-lock puzzle.
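For background on why repeated-squaring puzzles are non-parallelisable yet cheap for the server, here is a sketch of a Rivest-style time-lock puzzle (illustrative only; this is not the new puzzle proposed in the paper, whose generation and verification are cheaper still):

```python
import math
import random

# Repeated-squaring ("time-lock") client puzzle sketch. The server, knowing
# phi(n), creates and checks the puzzle with one small exponentiation, while
# the client must perform t sequential modular squarings, which cannot be
# parallelised across cores.

random.seed(1)
p, q = 1000003, 1000033           # toy primes; real puzzles use ~1024-bit moduli
n, phi = p * q, (p - 1) * (q - 1)
t = 100_000                       # difficulty: number of sequential squarings

x = random.randrange(2, n)        # puzzle challenge
while math.gcd(x, n) != 1:        # ensure x is invertible mod n
    x = random.randrange(2, n)

# Server shortcut: x^(2^t) mod n computed via the trapdoor phi(n).
expected = pow(x, pow(2, t, phi), n)

# Client: t sequential squarings; no shortcut without knowing phi(n).
y = x
for _ in range(t):
    y = y * y % n

assert y == expected              # server-side verification
print("puzzle solved:", y)
```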
Abstract:
Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly change (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two- to three-year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open. The robotic approach to map building has been dominated by algorithms that optimise the geometry of the map based on measurements of distances to features. In a robotic approach, measurements of distance to features are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases estimates of depth from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and the measurement of self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo et al., 2003).
Abstract:
The Lingodroids are a pair of mobile robots that evolve a language for places and relationships between places (based on distance and direction). Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment. Despite having different internal representations of the world, the robots are able to develop a common lexicon for places, and then use simple sentences to explain and understand relationships between places, even places that they could not physically experience, such as areas behind closed doors. By learning the language, the robots are able to develop representations for places that are inaccessible to them and later, when the doors are opened, use those representations to perform goal-directed behavior.
Abstract:
This paper is directed towards answering the question, "Can you control the trajectory of a Lagrangian float?" Because such floats have minimal actuation (only buoyancy control), their horizontal trajectory is dictated by drifting with ocean currents. However, with appropriate vertical actuation and by utilising spatio-temporal variations in water speed and direction, we show here that broad controllability results can be achieved, such as waypoint following to keep a float inside a bay or out of a designated region. This paper extends theory experimentally evaluated on horizontally actuated Autonomous Underwater Vehicles (AUVs) for trajectory control utilising ocean forecast models, and presents an initial investigation into the controllability of these minimally actuated drifting AUVs. Simulated results for offshore coastal waters and highly dynamic tidal bays illustrate two techniques that promise an affirmative answer to the posed question.
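A minimal sketch of the depth-selection idea follows (the controller, forecast interface, and numbers are illustrative assumptions, not the paper's method): since horizontal motion is pure drift but current velocity varies with depth, the float can pick the depth whose forecast current best advances it toward a waypoint.

```python
import numpy as np

def best_depth(currents_by_depth, position, waypoint):
    """currents_by_depth: {depth_m: (u, v)} forecast horizontal velocities.
    Returns the depth whose current maximises progress toward the waypoint."""
    goal = np.asarray(waypoint, float) - np.asarray(position, float)
    goal = goal / np.linalg.norm(goal)                 # unit goal direction
    return max(currents_by_depth,
               key=lambda d: np.dot(currents_by_depth[d], goal))

# Toy forecast: eastward flow near the surface, westward flow at depth.
forecast = {5: (0.30, 0.05), 50: (0.05, -0.10), 120: (-0.25, 0.00)}
print(best_depth(forecast, position=(0.0, 0.0), waypoint=(1000.0, 0.0)))  # -> 5
```

Repeating this choice as the forecast evolves in time is what turns buoyancy-only actuation into the waypoint-following behaviour described above.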