Abstract:
The life sciences can benefit greatly from imaging technologies that connect microscopic discoveries with macroscopic observations. One technology uniquely positioned to provide such benefits is photoacoustic tomography (PAT), a sensitive modality for imaging optical absorption contrast over a range of spatial scales at high speed. In PAT, endogenous contrast reveals a tissue's anatomical, functional, metabolic, and histologic properties, and exogenous contrast provides molecular and cellular specificity. The spatial scale of PAT covers organelles, cells, tissues, organs, and small animals. Consequently, PAT is complementary to other imaging modalities in contrast mechanism, penetration, spatial resolution, and temporal resolution. We review the fundamentals of PAT and provide practical guidelines for matching PAT systems with research needs. We also summarize the most promising biomedical applications of PAT, discuss related challenges, and envision PAT's potential to lead to further breakthroughs.
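For reference, the core relation underlying PAT signal generation — stated here in its standard textbook form rather than taken from this abstract — links the initial photoacoustic pressure rise to the local optical absorption:

    p_0 = \Gamma \, \eta_{th} \, \mu_a \, F

where $\Gamma$ is the Grüneisen parameter, $\eta_{th}$ the fraction of absorbed optical energy converted to heat, $\mu_a$ the optical absorption coefficient, and $F$ the local optical fluence; the absorption contrast the abstract describes enters through $\mu_a$.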
Abstract:
Polymer optical fibers have historically occupied a niche as large-core flexible fibers operating over short distances. Beyond their practical passive application in short-haul communication, they constitute a promising research field as active devices with organic dopants. Organic dyes are preferred as dopants over organic semiconductors due to their higher optical cross section. Organic dyes used as the gain medium in a polymer fiber thus enable efficient, narrow-linewidth laser sources tunable throughout the visible region, as well as optical amplifiers with high gain. Dyes incorporated in fiber form have an added advantage over other solid-state forms such as films, since less pump power is required to excite the molecules in the fiber core, so the pump power is used more effectively. In 1987, Muto et al. investigated a dye-doped step-index polymer fiber laser. Since then, numerous studies in this area have demonstrated laser emission from step-index, graded-index and hollow optical fibers incorporating various dyes. Among these, Rhodamine 6G has been the most widely and commonly used laser dye for the last four decades; it has many desirable optical properties that make it preferable over other organic dyes such as coumarin, Nile blue and curcumin. Research in this area focuses on implementing efficient fiber lasers and amplifiers over short fiber lengths. Developing efficient plastic lasers with electrical pumping is a new prospect in this field, for which the lowest possible threshold pump energy of the gain medium in the cavity is an important parameter. One way of improving laser efficiency, through low threshold pump energy, is to modify the gain of the amplifying medium in the resonator/cavity. Advances in the field of radiative decay engineering can pave the way to solving this problem. Laser gain media consisting of dye-nanoparticle composites can improve efficiency by lowering the lasing threshold and enhancing photostability. The electric field confined near the surface of metal nanoparticles due to localized surface plasmon resonance can be very effective for exciting active centers and imparting high optical gain for lasing. Since the surface plasmon resonance of gold and silver nanoparticles lies in the visible range, it can affect the spectral emission characteristics of organic dyes such as Rhodamine 6G through the plasmon field generated by the particles. The change in the emission of a dye placed near metal nanoparticles depends on the plasmon field strength, which in turn depends on the type of metal, the nanoparticle size, the surface modification of the particle and the wavelength of the incident light. Progress in the fabrication of different types of nanostructures has led to the advent of nanospheres, nanoalloys, core-shell particles and nanowires, to name a few. This thesis deals with the fabrication and characterisation of polymer optical fibers with various metallic and bimetallic nanostructures incorporated in the gain media, aimed at efficient fiber lasers with low threshold and improved photostability.
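As a standard point of reference for the threshold discussion above (generic laser physics, not a result of this thesis), the threshold gain of a Fabry–Pérot-type fiber cavity of length $L$, with end reflectivities $R_1$ and $R_2$ and distributed loss $\alpha$, is

    g_{th} = \alpha + \frac{1}{2L} \ln\frac{1}{R_1 R_2}

so any plasmonic enhancement of the dye's effective gain lowers the pump energy needed to reach $g \ge g_{th}$.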
Abstract:
Recently, considerable effort has been invested in developing silicon modulators for optical telecommunications and their application domains. These modulators are useful for short-reach, high-throughput data centers. This work therefore concerns the characterization of two types of integrated silicon Bragg grating modulators featuring an interleaved PN junction, whose purpose is to modulate the Bragg wavelength by applying a reverse bias voltage that depletes carriers within the waveguide. In the first Bragg grating modulator, the period of the PN junction differs from that of the Bragg grating, while the second modulator has its PN junction period matched to that of the Bragg grating. These differences produce different modulator behaviour, and hence data transmission of different quality, which is what we seek to characterize. The advantage of this Bragg grating modulator is that it is relatively simple to design and uses a uniform Bragg grating whose characteristics are already very well known. The first step in characterizing these modulators was to perform purely optical measurements to observe the spectral response in reflection and transmission. We then followed the usual approach, performing DC measurements on the modulators. This thesis also presents practical results on the behaviour of the electrodes and the PN junction. It further reports data transmission results for these modulators using OOK and PAM-4 modulation, highlighting the differences in modulation efficiency between the two devices. We then discuss the relevance of this design choice relative to what can currently be found in the literature.
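The modulation principle described above follows the standard Bragg condition: the reflected wavelength is

    \lambda_B = 2\, n_{eff}\, \Lambda

where $n_{eff}$ is the effective index of the guided mode and $\Lambda$ the grating period. Reverse-biasing the interleaved PN junction depletes free carriers, shifting $n_{eff}$ through the plasma-dispersion effect and hence shifting $\lambda_B$.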
Abstract:
As climate change continues to impact socio-ecological systems, tools that help conservation managers understand vulnerability and target adaptations are essential. Quantitative assessments of vulnerability are rare because available frameworks are complex and lack guidance for dealing with data limitations and for integrating across scales and disciplines. This paper describes a semi-quantitative method for assessing vulnerability to climate change that integrates socio-ecological factors to address management objectives and support decision-making. The method applies a framework first adopted by the Intergovernmental Panel on Climate Change and uses a structured 10-step process. The scores for each framework element are normalized and multiplied to produce a vulnerability score, and the assessed components are then ranked from high to low vulnerability. Sensitivity analyses determine which indicators most influence the analysis and the resultant decision-making process, so that data quality for these indicators can be reviewed to increase robustness. Prioritisation of components for conservation considers economic, social and cultural values alongside the vulnerability rankings, to target actions that reduce vulnerability to climate change by decreasing exposure or sensitivity and/or increasing adaptive capacity. This framework provides practical decision support and has been applied to marine ecosystems and fisheries, with two case applications provided as examples: (1) food security in Pacific Island nations under climate-driven fish declines, and (2) fisheries in the Gulf of Carpentaria, northern Australia. The step-wise process outlined here is broadly applicable and can be undertaken with minimal resources using existing data, and thus has great potential to inform adaptive natural resource management in diverse locations.
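As a minimal sketch of the normalize-multiply-rank step described above — the component names, indicator values, and the convention that higher adaptive capacity lowers vulnerability are all illustrative assumptions, not data from the paper:

    # Hypothetical exposure (E), sensitivity (S) and adaptive capacity (AC)
    # scores for three assessed components, each on an arbitrary raw scale.
    components = {
        "coastal_fishery": {"E": 2.8, "S": 2.1, "AC": 1.2},
        "reef_tourism":    {"E": 1.9, "S": 1.5, "AC": 2.6},
        "mangrove_crab":   {"E": 2.2, "S": 2.9, "AC": 1.8},
    }

    def normalized(element):
        """Min-max normalize one framework element across all components."""
        vals = [c[element] for c in components.values()]
        lo, hi = min(vals), max(vals)
        return {n: (c[element] - lo) / (hi - lo) for n, c in components.items()}

    E, S, AC = normalized("E"), normalized("S"), normalized("AC")
    # Multiply the normalized elements; (1 - AC) so that higher adaptive
    # capacity reduces the resulting vulnerability score.
    scores = {n: E[n] * S[n] * (1 - AC[n]) for n in components}

    # Rank the assessed components from high to low vulnerability.
    for name, v in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {v:.3f}")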
Abstract:
In this thesis, we present a quantitative approach using probabilistic verification techniques for the analysis of reliability, availability, maintainability, and safety (RAMS) properties of satellite systems. The subject of our research is satellites used in mission-critical industrial applications. Our verification results make a strong case for using probabilistic model checking to support RAMS analysis of satellite systems. This study is intended to build a foundation that helps reliability engineers with a basic background in model checking apply probabilistic model checking to small satellite systems. We make two major contributions. The first is the application of RAMS analysis to satellite systems. In the past, RAMS analysis has been extensively applied in electrical and electronics engineering, where it allows system designers and reliability engineers to predict the likelihood of failures from historical or current operational data. There is high potential for applying RAMS analysis in space science and engineering; however, there is a lack of standardisation and of suitable procedures for correctly studying the RAMS characteristics of satellite systems. This thesis considers the promising application of RAMS analysis to satellite design, use, and maintenance, focusing on the system's segments. Data collection and verification procedures are discussed, and a number of considerations are presented on how to predict the probability of failure. Our second contribution is leveraging the power of probabilistic model checking to analyse satellite systems. We present techniques for analysing satellite systems that differ from the more common quantitative approaches based on traditional simulation and testing; these techniques have not been applied in this context before. We present the use of probabilistic techniques via a suite of detailed examples, together with their analysis. Our presentation is incremental, in terms of the complexity of the application domains and system models, with a detailed PRISM model for each scenario. We also provide results from practical work, together with a discussion of future improvements.
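To make the model-checking idea concrete — a toy discrete-time Markov chain, not one of the PRISM models from the thesis — the sketch below computes the reliability of a satellite subsystem with an absorbing failure state; PRISM evaluates the analogous probabilistic reachability properties symbolically against temporal-logic specifications:

    import numpy as np

    # Toy DTMC: state 0 = operational, state 1 = failed (absorbing).
    p_fail = 1e-4                       # assumed per-step failure probability
    P = np.array([[1 - p_fail, p_fail],
                  [0.0,        1.0]])

    # Reliability R(t): probability of still being operational after t steps.
    start = np.array([1.0, 0.0])
    for t in (1_000, 10_000, 100_000):
        R = (start @ np.linalg.matrix_power(P, t))[0]
        print(f"R({t}) = {R:.4f}")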
Abstract:
Secure Multi-party Computation (MPC) enables a set of parties to collaboratively compute, using cryptographic protocols, a function over their private data such that the participants see only the final output, not each other's data. Typical MPC examples include statistical computations over joint private data, private set intersection, and auctions. While these applications are examples of monolithic MPC, richer MPC applications move between "normal" (i.e., per-party local) and "secure" (i.e., joint, multi-party secure) modes repeatedly, resulting overall in mixed-mode computations. For example, we might use MPC to implement the role of the dealer in a game of mental poker -- the game will be divided into rounds of local decision-making (e.g. bidding) and joint interaction (e.g. dealing). Mixed-mode computations are also used to improve performance over monolithic secure computations. Starting with the Fairplay project, several MPC frameworks have been proposed in the last decade to help programmers write MPC applications in a high-level language while the toolchain manages the low-level details. However, these frameworks are either not expressive enough to allow writing mixed-mode applications, or they lack formal specification and reasoning capabilities, thereby diminishing the parties' trust in such tools and in the programs written using them. Furthermore, none of the frameworks provides a verified toolchain to run MPC programs, leaving open the potential for security holes that can compromise the privacy of parties' data. This dissertation presents language-based techniques to make MPC more practical and trustworthy. First, it presents the design and implementation of a new MPC domain-specific language, called Wysteria, for writing rich mixed-mode MPC applications. Wysteria provides several benefits over previous languages, including a conceptual single thread of control, generic support for more than two parties, high-level abstractions for secret shares, and a fully formalized type system and operational semantics. Using Wysteria, we have implemented several MPC applications, including, for the first time, a card-dealing application. The dissertation next presents Wys*, an embedding of Wysteria in F*, a full-featured verification-oriented programming language. Wys* improves on Wysteria along three lines: (a) it enables programmers to formally verify the correctness and security properties of their programs -- as far as we know, Wys* is the first language to provide verification capabilities for MPC programs; (b) it provides a partially verified toolchain to run MPC programs; and (c) it enables MPC programs to use, with no extra effort, standard language constructs from the host language F*, making it more usable and scalable. Finally, the dissertation develops static analyses that help optimize monolithic MPC programs into mixed-mode MPC programs while providing privacy guarantees similar to those of the monolithic versions.
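To give a concrete flavour of the "secure" mode — this is the generic textbook additive-secret-sharing construction, not Wysteria or Wys* code — each party splits its private input into random field shares, each party locally sums the shares it holds, and only the aggregate result is ever reconstructed:

    import secrets

    P = 2**61 - 1  # prime modulus of the share field

    def share(x, n):
        """Split x into n additive shares modulo P."""
        parts = [secrets.randbelow(P) for _ in range(n - 1)]
        parts.append((x - sum(parts)) % P)
        return parts

    # Three parties jointly compute the sum of their private inputs.
    private_inputs = [12, 45, 33]
    all_shares = [share(x, 3) for x in private_inputs]

    # Party j locally adds the j-th share of every input ...
    partial_sums = [sum(s[j] for s in all_shares) % P for j in range(3)]

    # ... and only the combined total (never an individual input) is revealed.
    print(sum(partial_sums) % P)  # 90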
Abstract:
Opuntia spp. flowers have traditionally been used for medical purposes, mostly because of their diversity of bioactive molecules with health-promoting properties. The proximate, mineral and volatile compound profiles, together with the cytotoxic and antimicrobial properties, were characterized in O. microdasys flowers at different maturity stages, revealing several statistically significant differences. O. microdasys stood out mainly for its high contents of dietary fiber, potassium and camphor, and its high activities against HCT15 cells, Staphylococcus aureus, Aspergillus versicolor and Penicillium funiculosum. The vegetative stage showed the highest cytotoxic and antifungal activities, whilst the full flowering stage was particularly active against bacterial species. The complete dataset was classified by principal component analysis, achieving clearly identifiable groups for each flowering stage, elucidating the most distinctive features, and comprehensively profiling each of the assayed stages. The results may help to define the best flowering stage for practical application purposes.
Abstract:
Compressed covariance sensing using quadratic samplers is gaining increasing interest in the recent literature. The covariance matrix often plays the role of a sufficient statistic in many signal and information processing tasks. However, owing to the large dimension of the data, it may become necessary to obtain a compressed sketch of the high-dimensional covariance matrix to reduce the associated storage and communication costs. Nested sampling has been proposed in the past as an efficient sub-Nyquist sampling strategy that enables perfect reconstruction of the autocorrelation sequence of Wide-Sense Stationary (WSS) signals, as though they were sampled at the Nyquist rate. The key idea behind nested sampling is to exploit properties of the difference set that naturally arises in the quadratic measurement model associated with covariance compression. In this thesis, we focus on developing novel versions of nested sampling for low-rank Toeplitz covariance estimation and for phase retrieval, where the latter problem finds many applications in high-resolution optical imaging, X-ray crystallography and molecular imaging. The problem of low-rank compressive Toeplitz covariance estimation is first shown to be fundamentally related to that of line spectrum recovery. In the absence of noise, this connection can be exploited to develop a particular kind of sampler, called the Generalized Nested Sampler (GNS), that can achieve optimal compression rates. In the presence of bounded noise, we develop a regularization-free algorithm that provably leads to stable recovery of the high-dimensional Toeplitz matrix from its order-wise minimal sketch acquired using a GNS. Contrary to existing TV-norm and nuclear-norm based reconstruction algorithms, our technique does not use any tuning parameters, which can be of great practical value. The idea of nested sampling also finds a surprising use in the problem of phase retrieval, which has been of great interest in recent times for its convex formulation via PhaseLift. By using another modified version of nested sampling, namely the Partial Nested Fourier Sampler (PNFS), we show that, with probability one, it is possible to achieve a certain conjectured lower bound on the necessary measurement size. Moreover, for sparse data, an l1-minimization based algorithm is proposed that can lead to stable phase retrieval using an order-wise minimal number of measurements.
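To illustrate the difference-set property that nested sampling exploits — a standard two-level nested array in the style of Pal and Vaidyanathan, not the GNS or PNFS construction itself — O(N) physical samples yield a contiguous set of O(N²) correlation lags:

    # Two-level nested array: N1 dense sensors plus N2 sparse sensors.
    N1, N2 = 4, 4
    dense = list(range(1, N1 + 1))                     # positions 1..N1
    sparse = [(N1 + 1) * m for m in range(1, N2 + 1)]  # (N1+1), 2(N1+1), ...
    positions = dense + sparse                         # 8 physical samples

    # The difference set covers every lag 0..N2*(N1+1)-1 = 0..19 contiguously,
    # so the autocorrelation can be estimated as if Nyquist-sampled.
    diffs = {a - b for a in positions for b in positions}
    max_lag = N2 * (N1 + 1) - 1
    print(len(positions), "samples,",
          all(lag in diffs for lag in range(max_lag + 1)))  # 8 samples, True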
Abstract:
With wireless vehicular communications, Vehicular Ad Hoc Networks (VANETs) enable numerous applications that enhance traffic safety, traffic efficiency, and the driving experience. However, VANETs also pose severe security and privacy challenges which need to be thoroughly investigated. In this dissertation, we enhance the security, privacy, and applications of VANETs by 1) designing application-driven security and privacy solutions for VANETs, and 2) designing appealing VANET applications with proper security and privacy assurance. First, the security and privacy challenges of VANETs with the most application significance are identified and thoroughly investigated. With both theoretical novelty and realistic considerations, the proposed security and privacy schemes are especially appealing for VANETs. Specifically, multi-hop communications in VANETs suffer from packet dropping, packet tampering, and communication failures, which have not been satisfactorily tackled in the literature. Thus, a lightweight reliable and faithful data packet relaying framework (LEAPER) is proposed to ensure reliable and trustworthy multi-hop communications by enhancing the cooperation of neighboring nodes. Message verification, including both content and signature verification, is generally computation-intensive and incurs severe scalability issues for each node. The resource-aware message verification (RAMV) scheme is proposed to ensure resource-aware, secure, and application-friendly message verification in VANETs. On the other hand, to make VANETs acceptable to privacy-sensitive users, the identity and location privacy of each node should be properly protected. To this end, a joint privacy and reputation assurance (JPRA) scheme is proposed to synergistically support privacy protection and reputation management by reconciling their inherently conflicting requirements. In addition, the privacy implications of short-time certificates are thoroughly investigated in a short-time certificates-based privacy protection (STCP2) scheme, to make privacy protection in VANETs feasible with short-time certificates. Second, three novel solutions, namely VANET-based ambient ad dissemination (VAAD), general-purpose automatic survey (GPAS), and VehicleView, are proposed to support appealing value-added applications based on VANETs. These solutions all follow practical application models, and an incentive-centered architecture is proposed for each solution to balance the conflicting requirements of the involved entities. The critical security and privacy challenges of these applications are also investigated and addressed with novel solutions. With proper security and privacy assurance, these solutions thus show great application significance and economic potential for VANETs. By enhancing the security, privacy, and applications of VANETs, this dissertation fills the gap between existing theoretical research and the realistic implementation of VANETs, facilitating their real-world deployment.
Abstract:
High-voltage electrophoretic deposition (HVEPD) has been developed as a novel technique to obtain vertically aligned forests of one-dimensional nanomaterials for efficient energy storage. The ability to control and manipulate nanomaterials is critical for their effective use in a variety of applications. Oriented structures of one-dimensional nanomaterials provide a unique opportunity to take full advantage of their excellent mechanical and electrochemical properties. However, it is still a significant challenge to obtain such oriented structures with great process flexibility, ease of processing under mild conditions, and the capability to scale up, especially in the context of efficient device fabrication and system packaging. This work presents HVEPD as a simple, versatile and generic technique to obtain vertically aligned forests of different one-dimensional nanomaterials on flexible, transparent and scalable substrates. Improvements in material chemistry and reduction of contact resistance have enabled the fabrication of high-power supercapacitor electrodes using the HVEPD method. The investigations have also paved the way for further performance enhancements by employing hybrid material systems and AC/DC pulsed deposition. Multi-walled carbon nanotubes (MWCNTs) were used as the starting material to demonstrate the HVEPD technique. A comprehensive study of the key parameters was conducted to better understand the working mechanism of the HVEPD process. It was confirmed that HVEPD is enabled by three key factors: a high deposition voltage for alignment, a low dispersion concentration to avoid aggregation, and the simultaneous formation of a holding layer by electrodeposition to reinforce the nanoforests. A set of suitable parameters was found to obtain vertically aligned forests of MWCNTs. Compared with their randomly oriented counterparts, the aligned MWCNT forests showed better electrochemical performance, lower electrical resistance and the capability to achieve superhydrophobicity, indicating their potential in a broad range of applications. The versatile and generic nature of the HVEPD process has been demonstrated by achieving deposition on flexible and transparent substrates, as well as aligned forests of manganese dioxide (MnO2) nanorods. A continuous roll-printing HVEPD approach was then developed to obtain aligned MWCNT forests with low contact resistance on large, flexible substrates. Such large-scale electrodes showed no deterioration in electrochemical performance and paved the way for practical device fabrication. The effect of the holding layer on the contact resistance between the aligned MWCNT forests and the substrate was studied to improve the electrochemical performance of such electrodes. It was found that a suitable precursor salt, such as nickel chloride, could be used to achieve a conductive holding layer that significantly reduced the contact resistance, which in turn enhanced the electrochemical performance of the electrodes. High-power scalable redox capacitors were then prepared using HVEPD. Very high power/energy densities and excellent cyclability were achieved by synergistically combining hydrothermally synthesized, highly crystalline α-MnO2 nanorods, vertically aligned forests and reduced contact resistance. To further improve performance, hybrid electrodes were prepared in the form of vertically aligned MWCNT forests with branches of α-MnO2 nanorods on them. Large-scale electrodes with such hybrid structures were manufactured using continuous HVEPD and characterized, showing further improved power and energy densities. The alignment quality and density of the MWCNT forests were also improved by using an AC/DC pulsed deposition technique, in which an AC voltage was first used to align the MWCNTs, followed immediately by a DC voltage to deposit the aligned MWCNTs along with the conductive holding layer. Decoupling alignment from deposition was shown to result in better alignment quality and higher electrochemical performance.
Abstract:
Modern power networks incorporate communications and information technology infrastructure into the electrical power system to create a smart grid in terms of control and operation. The smart grid enables real-time communication and control between consumers and utility companies, allowing suppliers to optimize energy usage based on price preference and system technical issues. The smart grid design aims to provide overall power system monitoring and to create protection and control strategies that maintain system performance, stability and security. This dissertation contributed to the development of a unique and novel smart grid test-bed laboratory with integrated monitoring, protection and control systems. This test-bed was used as a platform to test the smart grid operational ideas developed here. The implementation of this system in real-time software creates an environment for studying, implementing and verifying the novel control and protection schemes developed in this dissertation. Phasor measurement techniques were developed using the available Data Acquisition (DAQ) devices in order to monitor all points in the power system in real time. This provides a practical view of system parameter changes and abnormal conditions, along with stability and security information, giving valuable measurements to power system operators in energy control centers. Phasor measurement technology is an excellent solution for improving system planning, operation and energy trading, in addition to enabling advanced applications in Wide Area Monitoring, Protection and Control (WAMPAC). Moreover, a virtual protection system was developed and implemented in the smart grid laboratory with integrated functionality for wide-area applications. Experiments and procedures were developed to detect abnormal system conditions and apply proper remedies to heal the system. A DC microgrid was designed and integrated with the AC system with appropriate control capability. This system provides realistic hybrid AC/DC microgrid connectivity to the AC side, allowing study of how such an architecture can be used in system operation to help remedy abnormal conditions. In addition, this dissertation explored the challenges and feasibility of implementing real-time system analysis features to monitor system security and stability measures; these indices are measured experimentally during the operation of the developed hybrid AC/DC microgrids. Furthermore, a real-time optimal power flow system was implemented to optimally manage power sharing between the AC generators and DC-side resources. A study of a real-time energy management algorithm in hybrid microgrids was performed to evaluate the effects of using energy storage resources and their use in mitigating heavy-load impacts on system stability and operational security.
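As a minimal illustration of phasor estimation — a generic single-bin DFT over one cycle of synthetic data, not the dissertation's DAQ implementation — the magnitude and phase of a voltage phasor are recovered by correlating the samples with the fundamental-frequency complex exponential:

    import numpy as np

    f0, fs = 60.0, 1920.0          # system frequency; 32 samples per cycle
    N = int(fs / f0)               # samples in one fundamental cycle
    t = np.arange(N) / fs
    v = 170 * np.cos(2 * np.pi * f0 * t + np.deg2rad(30))  # synthetic waveform

    # Single-bin DFT at f0 (peak-amplitude convention).
    phasor = (2 / N) * np.sum(v * np.exp(-2j * np.pi * f0 * t))
    print(abs(phasor), np.rad2deg(np.angle(phasor)))  # ~170.0, ~30.0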
Abstract:
The ability to measure tiny variations in the local gravitational acceleration allows – amongst other applications – the detection of hidden hydrocarbon reserves, magma build-up before volcanic eruptions, and subterranean tunnels. Several technologies achieve the sensitivities (tens of μGal/√Hz) and stabilities (over periods of days to weeks) required for such applications: free-fall gravimeters, spring-based gravimeters, superconducting gravimeters, and atom interferometers. All of these devices can observe the Earth tides: the elastic deformation of the Earth’s crust as a result of tidal forces. This is a universally predictable gravitational signal whose measurement requires both high sensitivity and high stability over timescales of several days. All present gravimeters, however, are limited by excessive cost (£70 k) and high mass (>8 kg). In this thesis, the building of a microelectromechanical system (MEMS) gravimeter with a sensitivity of 40 μGal/√Hz in a package size of only a few cubic centimetres is discussed. MEMS accelerometers – found in most smartphones – can be mass-produced remarkably cheaply, but most are not sensitive enough, and none have been stable enough to be called a ‘gravimeter’. The remarkable stability and sensitivity of the device is demonstrated with a measurement of the Earth tides. Such a measurement has never before been undertaken with a MEMS device, and it proves the long-term stability of the instrument compared to any other MEMS device, making this the first MEMS accelerometer that can be classed as a gravimeter. This heralds a transformative step in MEMS accelerometer technology. Due to their small size and low cost, MEMS gravimeters could create a new paradigm in gravity mapping: exploration surveys could be carried out with drones instead of low-flying aircraft; they could be used for distributed land surveys in exploration settings and for the monitoring of volcanoes; or they could be built into multi-pixel density-contrast imaging arrays.
Abstract:
Monolithic materials cannot always satisfy the demands of today’s advanced requirements. Only by combining several materials at different length scales, as nature does, can the required performance be met. Polymer nanocomposites are intended to overcome the common drawbacks of pristine polymers through a multidisciplinary collaboration of materials science with chemistry, engineering, and nanotechnology. These materials are an active combination of polymers and nanomaterials in which at least one phase lies in the nanometer range. By mimicking nature’s materials, it is possible to develop new nanocomposites for structural applications demanding combinations of strength and toughness. In this perspective, nanofibers obtained by electrospinning have been increasingly adopted in the last decade to improve the fracture toughness of Fiber Reinforced Plastic (FRP) laminates. Although nanofibers have already found applications in various fields, their widespread introduction in the industrial context is still a long way off. This thesis aims to develop methodologies and models able to predict the behaviour of nanofibrous-reinforced polymers, paving the way for their practical engineering applications. It consists of two main parts. The first investigates the mechanisms that act at the nanoscale, systematically evaluating the mechanical properties of both the nanofibrous reinforcement phase (Chapter 1) and the hosting polymeric matrix (Chapter 2). The second part deals with the implementation of different types of nanofibers for novel pioneering applications, seeking to combine the well-known fracture toughness enhancement in composite laminates with improvements in other mechanical properties or the inclusion of novel functionalities. Chapter 3 reports the development of novel adhesive carriers made of nylon 6,6 nanofibrous mats to increase the fracture toughness of epoxy-bonded joints. In Chapter 4, recently developed rubbery nanofibers are used to enhance the damping properties of unidirectional carbon fiber laminates. Lastly, in Chapter 5, a novel self-sensing composite laminate capable of detecting impacts on its surface using PVDF-TrFE piezoelectric nanofibers is presented.
Abstract:
One of the most visionary goals of Artificial Intelligence is to create a system able to mimic, and eventually surpass, the intelligence observed in biological systems including, ambitiously, that observed in humans. The main distinctive strength of humans is their ability to build a deep understanding of the world by learning continuously and drawing on their experiences. This ability, which is found to various degrees in all intelligent biological beings, allows them to adapt and properly react to changes by incrementally expanding and refining their knowledge. Arguably, achieving this ability is one of the main goals of Artificial Intelligence and a cornerstone of the creation of intelligent artificial agents. Modern Deep Learning approaches have allowed researchers and industry to achieve great advances towards the resolution of many long-standing problems in areas like Computer Vision and Natural Language Processing. However, while this current age of renewed interest in AI has allowed for the creation of extremely useful applications, concerningly little effort is being directed towards the design of systems able to learn continuously. The biggest problem that hinders an AI system from learning incrementally is the catastrophic forgetting phenomenon. This phenomenon, discovered in the 1990s, naturally occurs in Deep Learning architectures when classic learning paradigms are applied to learning incrementally from a stream of experiences. This dissertation revolves around the Continual Learning field, a sub-field of Machine Learning research that has recently made a comeback following the renewed interest in Deep Learning approaches. This work takes a comprehensive view of continual learning, considering the algorithmic, benchmarking, and applicative aspects of the field. The dissertation also touches on community aspects such as the design and creation of research tools aimed at supporting Continual Learning research, and the theoretical and practical aspects of public competitions in this field.
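As a minimal, framework-free sketch of one classic mitigation for catastrophic forgetting — rehearsal with a small replay buffer, a generic strategy rather than a method claimed by this dissertation; model_update and the data are placeholders:

    import random

    BUFFER_CAPACITY = 200
    buffer = []              # tiny episodic memory of past samples
    seen = 0                 # count of samples observed so far

    def train_on_experience(model_update, new_data):
        """One continual-learning step: new data mixed with replayed samples."""
        global seen
        replay = random.sample(buffer, min(len(buffer), len(new_data)))
        for batch in list(new_data) + replay:
            model_update(batch)              # placeholder gradient step
        # Reservoir sampling keeps a bounded, roughly uniform memory of the stream.
        for item in new_data:
            seen += 1
            if len(buffer) < BUFFER_CAPACITY:
                buffer.append(item)
            elif random.random() < BUFFER_CAPACITY / seen:
                buffer[random.randrange(BUFFER_CAPACITY)] = item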