981 results for "Microsphere-based array"
Abstract:
In this study, two linear coplanar array antennas based on an Indium Phosphide (InP) substrate are designed, presented, and compared in terms of bandwidth and gain. The introduction of slots in combination with a coplanar structure is investigated, providing enhanced antenna gain and bandwidth in the 60 GHz frequency band. In addition, the proposed array antennas are evaluated in terms of integration with a high-speed photodiode and investigated in terms of matching, providing a bandwidth that reaches 2 GHz. Moreover, a potential beam-forming scenario combined with a photonic up-conversion scheme is proposed. © 2013 IEEE.
Abstract:
A high-performance liquid-level sensor based on an array of microstructured polymer optical fiber Bragg grating (mPOFBG) sensors is reported in detail. The sensor sensitivity is found to be 98 pm/cm of liquid, more than a factor of 9 higher than that of a previously reported silica fiber-based sensor.
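With a linear response, the reported sensitivity converts a measured Bragg-wavelength shift directly into a liquid level. A minimal sketch of that conversion, assuming linearity and using an illustrative shift value:

```python
# Convert an mPOFBG Bragg-wavelength shift into a liquid level, assuming the
# linear 98 pm/cm sensitivity reported above (the 490 pm shift is illustrative).

SENSITIVITY_PM_PER_CM = 98.0  # reported sensor sensitivity

def level_from_shift(delta_lambda_pm: float) -> float:
    """Return the liquid level in cm for a Bragg-wavelength shift in pm."""
    return delta_lambda_pm / SENSITIVITY_PM_PER_CM

# Example: a 490 pm shift corresponds to a 5 cm liquid column.
print(level_from_shift(490.0))  # → 5.0
```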
Abstract:
Thermal annealing can be used to induce a permanent negative Bragg wavelength shift in polymer fibre grating sensors, and it was originally used for multiplexing purposes. Recently, researchers showed that annealing can also provide additional benefits, such as enhanced strain and humidity sensitivity and an augmented temperature operational range. The annealing process can change both the optical and mechanical properties of the fibre. In this paper, the effects of annealing on the stress and force sensitivities of PMMA fibre Bragg grating sensors are investigated. The incentive for this investigation was an unexpected behaviour observed in an array of sensors used for liquid-level monitoring: one sensor exhibited much lower pressure sensitivity, and it was the only one that had not been annealed. To further investigate the phenomenon, additional sensors were photo-inscribed and characterised with regard to their stress and force sensitivities. The fibres were then annealed by placing them in hot water, thereby controlling the humidity factor. After annealing, the stress and force sensitivities were measured again. The results show that annealing can improve the stress and force sensitivity of the devices. This can provide better-performing sensors for use in stress, force, and pressure sensing applications.
Abstract:
This paper proposes a new thermography-based maximum power point tracking (MPPT) scheme to address photovoltaic (PV) partial-shading faults. Solar power generation utilizes a large number of PV cells connected in series and in parallel in an array, physically distributed across a large field. When a PV module is faulted or partial shading occurs, the PV system exhibits a nonuniform distribution of generated electrical power and of its thermal profile, and multiple maximum power points (MPPs) arise. If left untreated, this reduces the overall power generation, and severe faults may propagate, resulting in damage to the system. In this paper, a thermal camera is employed for fault detection, and a new MPPT scheme is developed to shift the operating point to an optimized MPP. Extensive data mining is conducted on the thermal-camera images in order to locate global MPPs. Based on this, a virtual MPPT stage is devised to find the global MPP, which reduces MPPT time and is used to calculate the MPP reference voltage. Finally, the proposed methodology is experimentally implemented and validated by tests on a 600-W PV array.
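The core difficulty under partial shading is that the P-V curve has several local maxima, and a conventional tracker can lock onto the wrong one. A toy sketch of the final selection step, choosing the global MPP and its reference voltage from candidate operating points; the sample curve is invented for illustration, and the paper's thermal-image data mining that produces such candidates is not reproduced here:

```python
# Select the global maximum power point (MPP) reference voltage from sampled
# (voltage, power) operating points with multiple local maxima, as arises under
# partial shading. Illustrative sketch only; values below are hypothetical.

def global_mpp(samples):
    """samples: list of (voltage_V, power_W) pairs; returns the pair with max power."""
    return max(samples, key=lambda vp: vp[1])

# Two local maxima (e.g. one per shaded/unshaded module group):
pv_curve = [(10.0, 120.0), (15.0, 180.0), (20.0, 90.0), (25.0, 210.0), (30.0, 60.0)]
v_ref, p_max = global_mpp(pv_curve)
print(v_ref, p_max)  # → 25.0 210.0
```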
Abstract:
The aim of this research was to demonstrate a high-current, stable field emission (FE) source based on carbon nanotubes (CNTs) and an electron-multiplier microchannel plate (MCP), and to design efficient field emitters. In recent years, various CNT-based FE devices have been demonstrated, including field emission displays, X-ray sources, and many more. However, to use CNTs as a source in high-power microwave (HPM) devices, higher and stable currents in the range of a few milliamperes to amperes are required. To achieve such high currents, we developed a novel technique of introducing an MCP between the CNT cathode and the anode. An MCP is an array of electron multipliers; it operates by avalanche multiplication of the secondary electrons generated when electrons strike the channel walls of the MCP. The FE current from the CNTs is enhanced by this avalanche multiplication, and the MCP also protects the CNTs from irreversible damage during vacuum arcing. A conventional MCP is not suitable for this purpose because of the low secondary-emission properties of its materials. To achieve higher and stable currents, we designed and fabricated a unique ceramic MCP made of high-SEY materials. The MCP was fabricated using optimum design parameters, including channel dimensions and material properties obtained from charged particle optics (CPO) simulation. The Child-Langmuir law, which gives the optimum current density from an electron source, was taken into account during the system design and experiments. Each MCP channel consisted of MgO-coated CNTs, chosen from various material systems for its very high SEY. With the MCP inserted between the CNT cathode and the anode, a stable and higher emission current was achieved, approximately 25 times higher than without the MCP. A brighter emission image was also evidenced, due to the enhanced emission current.
The obtained results are a significant technological advance, and this research holds promise for electron sources in a new generation of lightweight, efficient, and compact microwave devices for telecommunications in satellites and space applications. As part of this work, novel emitters consisting of multistage geometry with improved FE properties were also developed.
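The Child-Langmuir law cited above bounds the space-charge-limited current density between parallel plates: J = (4·eps0/9)·sqrt(2e/m_e)·V^(3/2)/d². A quick numeric check of that formula, with an illustrative 1 kV gap voltage over a 1 mm gap (the geometry is hypothetical, not taken from the abstract):

```python
# Child-Langmuir space-charge-limited current density between parallel plates:
# J = (4*eps0/9) * sqrt(2*e/m_e) * V**1.5 / d**2, in SI units.
# The 1 kV / 1 mm operating point below is illustrative only.

from math import sqrt

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # electron charge, C
M_E = 9.109e-31       # electron mass, kg

def child_langmuir_J(voltage_V: float, gap_m: float) -> float:
    """Space-charge-limited current density in A/m^2."""
    return (4.0 * EPS0 / 9.0) * sqrt(2.0 * E_CHARGE / M_E) * voltage_V**1.5 / gap_m**2

J = child_langmuir_J(1e3, 1e-3)
print(f"{J:.3g} A/m^2")  # ≈ 7.4e4 A/m^2, i.e. about 7.4 A/cm^2
```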
Abstract:
Today, modern System-on-a-Chip (SoC) systems have grown rapidly due to increased processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is applied to the elements that incur performance overheads. The concepts presented in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture.
(3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on the Integrated Circuit (IC) workflow. Hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique. The system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the bus-IP design; the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
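The profile-then-accelerate workflow above follows the usual hotspot logic: the overall gain is bounded by the fraction of runtime the hotspot accounts for (Amdahl's law). A minimal sketch with invented numbers, not the measurements reported above:

```python
# Amdahl's-law view of hotspot acceleration: if a profiled hotspot accounts for a
# fraction p of total runtime and the accelerator speeds that part up by a factor
# s, the overall speedup is 1 / ((1 - p) + p / s). Numbers below are illustrative.

def overall_speedup(hotspot_fraction: float, accel_factor: float) -> float:
    """Whole-application speedup from accelerating one hotspot."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_factor)

# A hotspot taking 90% of runtime, accelerated 20x, yields ~6.9x overall:
print(round(overall_speedup(0.9, 20.0), 2))  # → 6.9
```

This is why profiling comes first in the workflow: accelerating a function that is only a small fraction of runtime cannot produce a large end-to-end speedup, however fast the accelerator.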
Abstract:
Hardware/software (HW/SW) co-simulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW co-simulation platform is used to ease debugging and verification in very large-scale integration (VLSI) design. To accelerate the computation of a gesture recognition technique, an HW/SW implementation using field-programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of a memory controller in the Verilog Hardware Description Language (Verilog HDL) that reduces memory consumption and the load on the processor; (2) the testing part of the neural network algorithm is hardwired to improve speed and performance; (3) the design takes only a few milliseconds to recognize a hand gesture, which makes it computationally efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of gestures (alphabet signs A to Z).
Abstract:
Kernel-level malware is one of the most dangerous threats to the security of users on the Internet, so there is an urgent need for its detection. The most popular detection approach is misuse-based detection. However, it cannot keep up with today's advanced malware, which increasingly applies polymorphism and obfuscation. In this thesis, we present our integrity-based detection for kernel-level malware, which does not rely on the specific features of malware. We have developed an integrity analysis system that can derive and monitor integrity properties for commodity operating system kernels. In our system, we focus on two classes of integrity properties: data invariants and the integrity of Kernel Queue (KQ) requests. We adopt static analysis for data invariant detection and overcome several technical challenges: field sensitivity, array sensitivity, and pointer analysis. We identify data invariants that are critical to system runtime integrity in the Linux kernel 2.4.32 and the Windows Research Kernel (WRK), with very low false positive and false negative rates. We then develop an Invariant Monitor to guard these data invariants against real-world malware. In our experiments, we are able to use Invariant Monitor to detect ten real-world Linux rootkits, nine real-world Windows malware samples, and one synthetic Windows malware sample. We leverage static and dynamic analysis of the kernel and device drivers to learn the legitimate KQ requests. Based on the learned KQ requests, we build KQguard to protect KQs. At runtime, KQguard rejects all unknown KQ requests that cannot be validated. We apply KQguard to the WRK and the Linux kernel, and extensive experimental evaluation shows that KQguard is efficient (up to 5.6% overhead) and effective (capable of achieving zero false positives against representative benign workloads after appropriate training, and very low false negatives against 125 real-world malware samples and nine synthetic attacks).
In our system, Invariant Monitor and KQguard cooperate to protect data invariants and KQs in the target kernel. By monitoring these integrity properties, we can detect malware through its violation of them during execution.
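The invariant-monitoring idea can be illustrated with a toy sketch: learn a simple invariant from benign snapshots of a data structure, then flag any later snapshot that violates it. This is a hypothetical illustration with mock data, not the thesis's kernel-level Invariant Monitor:

```python
# Toy illustration of integrity-based detection: learn which fields of a (mock)
# kernel structure never change across benign snapshots, then report violations.
# Hypothetical sketch only; field names and values are invented.

def learn_constant_fields(benign_snapshots):
    """Return a dict of fields whose value is identical in every benign snapshot."""
    first = benign_snapshots[0]
    return {k: v for k, v in first.items()
            if all(snap.get(k) == v for snap in benign_snapshots)}

def check(snapshot, invariants):
    """Return the names of fields violating the learned invariants."""
    return [k for k, v in invariants.items() if snapshot.get(k) != v]

benign = [{"syscall_table": 0xC0100000, "uid": 1000},
          {"syscall_table": 0xC0100000, "uid": 0}]
inv = learn_constant_fields(benign)   # "uid" varies, so only "syscall_table" is kept
print(check({"syscall_table": 0xDEADBEEF, "uid": 0}, inv))  # → ['syscall_table']
```

A rootkit that hooks the system-call table changes exactly this kind of supposedly-constant kernel data, which is why such invariants are useful detection signals regardless of the malware's specific code.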
Abstract:
In this article we present a numerical study of the collective dynamics in a population of coupled semiconductor lasers with a saturable absorber, operating in the excitable regime under the action of additive noise. We demonstrate that temporal and intensity synchronization takes place in a broad region of the parameter space and for various array sizes. The synchronization is robust and occurs even for a set of nonidentical coupled lasers. The cooperative nature of the system results in a self-organization process that also enhances the coherence of the individual elements of the population, and can have broad impact for detection purposes, for building all-optical simulators of neural networks, and in the field of photonics-based computation.
Abstract:
This thesis involved the development of two biosensors and their associated assays for the detection of diseases, namely IBR and BVD for veterinary use, and of the C1q protein as a biomarker for pancreatic cancer for medical application, using Surface Plasmon Resonance (SPR) and nanoplasmonics. SPR techniques have been used by a number of groups, both in research [1-3] and commercially [4, 5], as a diagnostic tool for the detection of various biomolecules, especially antibodies [6-8]. The biosensor market is an ever-expanding field, with new technology and new companies rapidly emerging for both human [8] and veterinary applications [9, 10]. In Chapter 2, we discuss the development of a simultaneous IBR and BVD virus assay for the detection of antibodies in bovine serum on an SPR-2 platform. Pancreatic cancer is the most lethal cancer by organ site, partially due to the lack of a reliable molecular signature for diagnostic testing; C1q protein has recently been proposed as a biomarker within a panel for the detection of pancreatic cancer. The third chapter discusses the fabrication, assays, and characterisation of nanoplasmonic arrays. We discuss the development of C1q scFv antibody assays, clone screening of the antibodies, and subsequently moving the assays onto the nanoplasmonic array platform for static assays, as well as a custom hybrid benchtop system, as a diagnostic method for the detection of pancreatic cancer. Finally, in Chapter 4, we move on to Guided Mode Resonance (GMR) sensors as a low-cost option for potential use in point-of-care diagnostics. The C1q and BVD assays used in the prior formats are transferred to this platform to ascertain its usability as a cost-effective, reliable sensor for diagnostic testing. We discuss the fabrication, characterisation, and assay development, as well as their use in the benchtop hybrid system.
Abstract:
Multiple myeloma is characterized by genomic alterations frequently involving gains and losses of chromosomes. Single nucleotide polymorphism (SNP)-based mapping arrays allow the identification of copy number changes at the sub-megabase level and the identification of loss of heterozygosity (LOH) due to monosomy and uniparental disomy (UPD). We have found that SNP-based mapping array data correlate well with fluorescence in situ hybridization (FISH) copy number data, making the technique robust as a tool to investigate myeloma genomics. The most frequently identified alterations are located at 1p, 1q, 6q, 8p, 13, and 16q. LOH is found in these large regions and also in smaller regions throughout the genome, with a median size of 1 Mb. We have identified that UPD is prevalent in myeloma and occurs through a number of mechanisms, including mitotic nondisjunction and mitotic recombination. For the first time in myeloma, integration of mapping and expression data has allowed us to reduce the complexity of standard gene expression data and to identify candidate genes important both in the transition from normal to monoclonal gammopathy of undetermined significance (MGUS) to myeloma and in different subgroups within myeloma. We have documented these genes, providing a focus for further studies to identify and characterize those that are key in the pathogenesis of myeloma.
Abstract:
With security and surveillance, there is an increasing need to process image data efficiently and effectively, either at source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, the design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach, using optimized FPGA-based soft-core processors, which allows the user to exploit task- and data-level parallelism to achieve the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports some preliminary progress on the design flow to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, which shows that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design, verification, and debugging steps associated with conventional FPGA implementations.
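The benchmark algorithm above is built around a per-cell gradient-orientation histogram. A minimal pure-Python sketch of that step (central-difference gradients, magnitude-weighted orientation bins); this is a generic illustration of the HOG cell computation, not the paper's optimized soft-core implementation:

```python
# Minimal sketch of the gradient-histogram step at the heart of Histogram of
# Gradients (HOG): per-pixel central-difference gradients, with gradient
# magnitude accumulated into unsigned-orientation bins. Illustration only.

from math import atan2, hypot, pi

def hog_cell_histogram(img, bins=9):
    """img: 2D list of grayscale values; returns one cell's orientation histogram."""
    h = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for y in range(1, rows - 1):          # skip the border (no central difference there)
        for x in range(1, cols - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            angle = atan2(gy, gx) % pi    # unsigned orientation in [0, pi)
            h[min(int(angle / pi * bins), bins - 1)] += hypot(gx, gy)
    return h

# A vertical edge puts all gradient energy into the first (horizontal-gradient) bin:
cell = [[0, 0, 10, 10]] * 4
print(hog_cell_histogram(cell))  # → [40.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```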
Abstract:
A novel retrodirective array (RDA) architecture is proposed which utilises a special-case spectral signature embedded within the data payload as pilot signals. With the help of a pair of phase-locked-loop (PLL)-based phase conjugators (PCs), the RDA's response to other unwanted and/or unfriendly interrogating signals can be disabled, leading to enhanced secrecy performance directly in the wireless physical layer. The effectiveness of the proposed RDA system is experimentally demonstrated.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08