916 results for sensor-Cloud system
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The use of nanoscale low-dimensional systems could boost the sensitivity of gas sensors. In this work we simulate a nanoscopic sensor based on carbon nanotubes with a large number of binding sites, using ab initio density functional electronic structure calculations coupled to the Non-Equilibrium Green's Function formalism. We present a recipe in which the adsorption process is studied first, followed by conductance calculations for a single-defect system and for a more realistic disordered system with different molecular coverages, as one would expect experimentally. We find that the sensitivity of the disordered system is enhanced by a factor of 5 compared to the single-defect one. Finally, our atomistic electronic transport results are used as input to a simple model that connects them to experimental parameters such as temperature and partial gas pressure, providing a procedure for simulating a realistic nanoscopic gas sensor. Using this methodology we show that nitrogen-rich carbon nanotubes could work at room temperature with extremely high sensitivity. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4739280]
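The final step described above, connecting the atomistic transport results to temperature and partial pressure, can be illustrated with a minimal sketch. The snippet below assumes a Langmuir-type adsorption isotherm and a linear interpolation of the conductance between the clean and fully covered limits; the adsorption energy and conductance values are placeholders, not the paper's results.

```python
# A minimal sketch (not the authors' actual model) of how ab initio
# conductance results can be tied to temperature and partial pressure
# through a Langmuir-type adsorption isotherm.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def coverage(pressure, temperature, e_ads, p0=1.0):
    """Fractional coverage of binding sites from a Langmuir isotherm.

    pressure    -- partial gas pressure (same units as p0)
    temperature -- temperature in K
    e_ads       -- adsorption energy in eV (hypothetical value)
    p0          -- reference pressure setting the equilibrium constant
    """
    k_eq = np.exp(e_ads / (K_B * temperature)) / p0
    return k_eq * pressure / (1.0 + k_eq * pressure)

def sensor_response(pressure, temperature, e_ads, g_clean, g_covered):
    """Interpolate device conductance between the clean and fully
    covered limits (both taken from the transport calculations)."""
    theta = coverage(pressure, temperature, e_ads)
    return (1.0 - theta) * g_clean + theta * g_covered

# Example: relative conductance change at room temperature.
g = sensor_response(pressure=1e-6, temperature=300.0, e_ads=0.8,
                    g_clean=1.0, g_covered=0.2)
print(f"G/G0 = {g:.3f}")
```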
Abstract:
The wide variety of molecular architectures used in sensors and biosensors and the large amount of data generated by some detection principles have motivated the use of computational methods, such as information visualization techniques, not only to handle the data but also to optimize sensing performance. In this study, we combine projection techniques with micro-Raman scattering and atomic force microscopy (AFM) to address critical issues related to practical applications of electronic tongues (e-tongues) based on impedance spectroscopy. Experimentally, we used sensing units made with thin films of a perylene derivative (acronym AzoPTCD), coating Pt interdigitated electrodes, to detect CuCl2 (Cu2+), methylene blue (MB), and saccharose in aqueous solutions, which were selected due to their distinct molecular sizes and ionic character in solution. The AzoPTCD films were deposited from monolayers up to 120 nm via Langmuir-Blodgett (LB) and physical vapor deposition (PVD) techniques. Because the main aspects investigated were how the interdigitated electrodes are coated by thin films (architecture on the e-tongue) and the film thickness, we employed the same material for all sensing units. The capacitance data were projected into a 2D plot using the force scheme method, from which we could infer that at low analyte concentrations the electrical response of the units was determined by the film thickness. Concentrations of 10 μM or higher could be distinguished with thinner films (tens of nanometers at most), which could withstand the impedance measurements without significant changes in the Raman signal of the AzoPTCD film-forming molecules. The sensitivity to the analytes appears to be related to adsorption on the film surface, as inferred from Raman spectroscopy data using MB as analyte and from the multidimensional projections. The analysis presented may serve as a new route to select materials and molecular architectures for novel sensors and biosensors, in addition to suggesting ways to unravel the mechanisms behind the high sensitivity obtained in various sensors.
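As an illustration of the projection step, here is a minimal force-scheme sketch in Python. It assumes the capacitance spectra are stored as rows of a matrix X; the learning rate, iteration count, and plain Euclidean distances are assumptions for illustration, not details taken from the study.

```python
# A minimal force-scheme projection: each 2D point is iteratively moved
# so its 2D distances approach the original high-dimensional distances.
import numpy as np

def force_scheme(X, n_iter=50, lr=0.1, seed=0):
    """Project high-dimensional rows of X to 2D."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    d_high = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    y = rng.normal(size=(n, 2))  # random initial layout
    for _ in range(n_iter):
        for i in range(n):
            delta = y - y[i]                     # vectors from point i
            d_low = np.linalg.norm(delta, axis=1)
            d_low[i] = 1.0                       # avoid division by zero
            force = (d_high[i] - d_low) / d_low  # push toward ideal distance
            y += lr * force[:, None] * delta / n
    return y

# Usage: y = force_scheme(capacitance_matrix); plot y[:, 0] vs y[:, 1].
```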
Abstract:
Vanadium/titanium mixed oxide films were produced using the sol-gel route. The structural investigation revealed that an increased TiO2 molar ratio in the mixed oxide disturbs the V2O5 crystalline structure and makes it amorphous. This blocks the TiO2 phase transformation, so TiO2 stabilizes in the anatase phase. In addition, the surface of the sample always presents larger amounts of TiO2 than expected, revealing a concentration gradient along the growth direction. For increased TiO2 molar ratios the roughness of the surface is reduced. Ion sensors were fabricated using the extended-gate field-effect transistor configuration. The obtained sensitivities varied from 58 mV/pH down to 15 mV/pH, depending on the composition and morphology of the sample surface. Samples with low TiO2 content presented better sensing properties, which might be related to their cracked and inhomogeneous surfaces. Raising the TiO2 quantity in the films produces homogeneous surfaces but diminishes their sensitivities. Thus, the present paper reveals that compositional and structural aspects change the surface morphology and electrical properties, accounting for the final ion sensing properties of the V2O5/TiO2 films. (C) 2012 The Electrochemical Society. [DOI: 10.1149/2.053206jes] All rights reserved.
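For readers unfamiliar with the mV/pH figure: it is the slope of the sensor's output (gate) voltage versus the pH of calibration buffers, with the Nernst limit near 59 mV/pH at room temperature. A hedged example with invented data points:

```python
# How an EGFET pH sensitivity in mV/pH is typically extracted:
# linear fit of output voltage versus buffer pH.
# The data points below are hypothetical, not from the paper.
import numpy as np

ph = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
v_out_mv = np.array([480.0, 365.0, 250.0, 132.0, 18.0])  # hypothetical

slope, intercept = np.polyfit(ph, v_out_mv, 1)
print(f"sensitivity = {abs(slope):.1f} mV/pH")  # ~58 mV/pH, near the Nernst limit
```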
Abstract:
The Pierre Auger Observatory is a facility built to detect air showers produced by cosmic rays above 10^17 eV. During clear nights with a low illuminated moon fraction, the UV fluorescence light produced by air showers is recorded by optical telescopes at the Observatory. To correct the observations for variations in atmospheric conditions, atmospheric monitoring is performed at regular intervals ranging from several minutes (for cloud identification) to several hours (for aerosol conditions) to several days (for vertical profiles of temperature, pressure, and humidity). In 2009, the monitoring program was upgraded to allow for additional targeted measurements of atmospheric conditions shortly after the detection of air showers of special interest, e.g., showers produced by very high-energy cosmic rays or showers with atypical longitudinal profiles. The former events are of particular importance for the determination of the energy scale of the Observatory, and the latter are characteristic of unusual air shower physics or exotic primary particle types. The purpose of targeted (or "rapid") monitoring is to improve the resolution of the atmospheric measurements for such events. In this paper, we report on the implementation of the rapid monitoring program and its current status. The rapid monitoring data have been analyzed and applied to the reconstruction of air showers of high interest, and indicate that the air fluorescence measurements affected by clouds and aerosols are effectively corrected using measurements from the regular atmospheric monitoring program. We find that the rapid monitoring program has potential for supporting dedicated physics analyses beyond the standard event reconstruction.
Abstract:
Organic hydroperoxides are oxidants generated during bacterial-host interactions. Here, we demonstrate that the peroxidase OhrA and its negative regulator OhrR comprise a major pathway for sensing and detoxifying organic hydroperoxides in the opportunistic pathogen Chromobacterium violaceum. Initially, we found that an ohrA mutant was hypersensitive to organic hydroperoxides and that it displayed a low efficiency for decomposing these molecules. Expression of ohrA and ohrR was specifically induced by organic hydroperoxides. These genes were expressed as monocistronic transcripts and also as a bicistronic ohrR-ohrA mRNA, generating the abundantly detected ohrA mRNA and the barely detected ohrR transcript. The bicistronic transcript appears to be processed. OhrR repressed both the ohrA and ohrR genes by binding directly to inverted repeat sequences within their promoters in a redox-dependent manner. Site-directed mutagenesis of each of the four OhrR cysteine residues indicated that the conserved Cys21 is critical to organic hydroperoxide sensing, whereas Cys126 is required for disulfide bond formation. Taken together, these phenotypic, genetic and biochemical data indicate that the response of C. violaceum to organic hydroperoxides is mediated by OhrA and OhrR. Finally, we demonstrated that oxidized OhrR, inactivated by intermolecular disulfide bond formation, is specifically regenerated via thiol-disulfide exchange by thioredoxin (but not other thiol reducing agents such as glutaredoxin, glutathione and lipoamide), providing a physiological reducing system for this thiol-based redox switch.
Abstract:
Background: The authors have developed a small portable device for the objective measurement of the transparency of corneas stored in preservative medium, for use by eye banks in evaluation prior to transplantation. Methods: The optical system consists of a white light, lenses, and pinholes that collimate the white light beams and illuminate the cornea in its preservative medium, and an optical filter (400–700 nm) that selects the wavelength range of interest. A sensor detects the light that passes through the cornea, and the average corneal transparency is displayed. In order to obtain the tissue transparency alone, an electronic circuit was built to record a baseline input from the preservative medium prior to the measurement of corneal transparency. The operation of the system involves three steps: adjusting the "0 %" transmittance of the instrument, determining the "100 %" transmittance of the system, and finally measuring the transparency of the preserved cornea inside the storage medium. Results: Fifty selected corneas were evaluated. Each cornea was submitted to three evaluation methods: subjective classification of transparency through a slit lamp, quantification of light transmittance using a previously developed corneal spectrophotometer, and measurement of transparency with the portable device. Conclusion: By comparing the three methods and drawing on the expertise of trained eye bank personnel, a table for quantifying corneal transparency with the new device has been developed. The correlation factor between the corneal spectrophotometer and the new device is 0.99813, yielding a system able to standardize transparency measurements of preserved corneas, which are currently made subjectively.
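A minimal sketch of the three-step measurement logic described above, assuming the displayed transparency is a linear scale between the "0 %" and "100 %" references (the actual device is analog electronics; this shows only the arithmetic):

```python
# Map a light-sensor reading onto the 0-100 % transmittance scale
# defined by the two calibration steps.
def percent_transmittance(i_sample, i_dark, i_full):
    """i_dark   -- reading with the beam blocked ("0 %" step)
    i_full   -- reading through the preservative medium alone
                ("100 %" step), removing the medium's own attenuation
    i_sample -- reading through cornea plus medium
    """
    return 100.0 * (i_sample - i_dark) / (i_full - i_dark)

# Hypothetical readings in arbitrary sensor units.
print(percent_transmittance(i_sample=2.91, i_dark=0.12, i_full=3.45))
```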
Abstract:
[EN] The project contains simulation, data processing, mapping, and localization modules, developed in C++ using ROS (Robot Operating System) and PCL (Point Cloud Library). It was developed within the AVORA underwater robotics project. The vehicle and the sensor were characterized, and different sensor and mapping technologies were analyzed. The data pass through three stages: conversion to a point cloud, threshold filtering, and removal of spurious points, with optional shape detection. These data are used to build a multi-level surface map. The other tool developed is a modified Iterative Closest Point (ICP) algorithm that takes into account the operating mode of the imaging sonar used.
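A rough Python rendition of two of the pipeline stages described above (the original modules are C++ with ROS and PCL): threshold filtering and removal of spurious points. The neighbour count and cut-off rule are illustrative assumptions.

```python
# Two point-cloud cleaning stages: intensity thresholding and a
# statistical outlier filter (O(n^2), fine for a small sketch).
import numpy as np

def threshold_filter(points, intensity, min_intensity=0.3):
    """Keep sonar returns whose intensity exceeds a threshold."""
    return points[intensity >= min_intensity]

def remove_spurious(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    is far above the cloud-wide average."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # skip self-distance in column 0
    keep = mean_knn <= mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```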
Abstract:
[EN] An accurate estimation of the number of people entering or leaving a controlled area is an interesting capability for automatic surveillance systems. Potential applications where this technology can be applied include those related to security, safety, energy saving, or fraud control. In this paper we present a novel configuration of a multi-sensor system combining both visual and range data, specially suited for troublesome scenarios such as public transportation. The approach applies probabilistic estimation filters on raw sensor data to create intermediate-level hypotheses that are later fused using a certainty-based integration stage. Promising results have been obtained in several tests performed on a realistic test-bed scenario under variable lighting conditions.
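A hedged sketch of what a certainty-based integration stage could look like: each sensor contributes a count hypothesis with a confidence, and the fusion stage returns the confidence-weighted consensus. The weighting rule is an illustrative assumption, not the paper's algorithm.

```python
# Confidence-weighted fusion of per-sensor people-count hypotheses.
def fuse_counts(hypotheses):
    """hypotheses: list of (count, certainty) pairs, certainty in [0, 1]."""
    total = sum(c for _, c in hypotheses)
    if total == 0:
        return None  # no sensor is confident enough to decide
    weighted = sum(count * c for count, c in hypotheses) / total
    return round(weighted)

# e.g. camera says 3 people (certainty 0.9), range sensor says 4 (0.4).
print(fuse_counts([(3, 0.9), (4, 0.4)]))  # -> 3
```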
Abstract:
In fluid dynamics research, pressure measurements are of great importance for defining the flow field acting on aerodynamic surfaces. Indeed, the experimental approach is fundamental for avoiding the complexity of the mathematical models used to predict fluid phenomena. When in-situ sensors are used to monitor pressure over large domains with highly unsteady flows, classical techniques encounter several problems related to transducer cost, intrusiveness, time response, and operating range. An interesting approach to satisfying these sensor requirements is to implement a sensor network capable of acquiring pressure data on an aerodynamic surface, using a wireless communication system that collects the pressure data with the lowest possible level of environmental invasion. In this thesis a wireless sensor network for fluid-field pressure measurements has been designed, built and tested. To develop the system, a capacitive pressure sensor based on a polymeric membrane, and readout circuitry based on a microcontroller, have been designed, built and tested. The wireless communication has been implemented on the Zensys Z-WAVE platform, and network and data management have been implemented. Finally, the full embedded system with antenna has been created. As a proof of concept, monitoring the pressure on the top of the mainsail of a sailboat was chosen as a working example.
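A minimal sketch of the sensor model behind such a readout chain, assuming a parallel-plate capacitor whose gap shrinks linearly with pressure; all constants are illustrative, not the thesis values.

```python
# Parallel-plate model of a membrane capacitive pressure sensor and
# its inversion, as firmware might apply to each sample before the radio.
EPS0 = 8.854e-12     # vacuum permittivity, F/m
AREA = 1e-4          # electrode area, m^2 (assumed)
GAP0 = 50e-6         # membrane gap at zero pressure, m (assumed)
K_MEMBRANE = 2.0e-9  # gap change per pascal, m/Pa (assumed)

def capacitance(pressure_pa):
    """Pressure deflects the membrane, shrinking the gap and raising C."""
    gap = GAP0 - K_MEMBRANE * pressure_pa
    return EPS0 * AREA / gap

def pressure_from_capacitance(c_farad):
    """Invert the model to recover pressure from a measured capacitance."""
    gap = EPS0 * AREA / c_farad
    return (GAP0 - gap) / K_MEMBRANE

print(pressure_from_capacitance(capacitance(1200.0)))  # ~1200 Pa
```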
Abstract:
This report describes an innovative satellite-based monitoring approach, applied to the Iraqi Marshlands, to survey the extent and distribution of marshland re-flooding and assess the development of wetland vegetation cover. The study, conducted in collaboration with MEEO Srl, makes use of images collected by the (A)ATSR sensor onboard the ESA ENVISAT satellite to gather data at multi-temporal scales, and an analysis was adopted to observe the evolution of marshland re-flooding. The methodology uses a multi-temporal pixel-based approach built on classification maps produced by the classification tool SOIL MAPPER®. The catalogue of classification maps is available as a web service through the Service Support Environment Portal (SSE, supported by ESA). The inundation of the Iraqi marshlands, which has been continuous since April 2003, is characterized by a high degree of variability, ad-hoc interventions and uncertainty. Given the security constraints and vastness of the Iraqi marshlands, as well as cost-effectiveness considerations, satellite remote sensing was the only viable tool to observe the changes taking place on a continuous basis. The proposed system (ALCS – AATSR LAND CLASSIFICATION SYSTEM) avoids the direct use of the (A)ATSR images and foresees the application of LULCC evolution models directly to the 'stock' of classified maps. This approach is made possible by the availability of a 13-year classified image database, conceived and implemented in the CARD project (http://earth.esa.int/rtd/Projects/#CARD). The approach presented here evolves toward an innovative, efficient and fast method to exploit the potential of multi-temporal LULCC analysis of (A)ATSR images. The two main objectives of this work are both assessments: the first is to assess the modeling capability of the ALCS web application using AATSR images classified with SOIL MAPPER®, and the second is to evaluate the magnitude, character and extent of wetland rehabilitation.
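A hedged sketch of a multi-temporal pixel-based analysis over a stack of classification maps, in the spirit of what ALCS does with SOIL MAPPER® products; the class codes and the two derived products are invented for illustration.

```python
# Derive simple re-flooding indicators from a time stack of class maps.
import numpy as np

WATER, VEGETATION = 1, 2  # hypothetical class codes

def reflooding_extent(maps):
    """maps: array (t, h, w) of per-date class maps. Returns, per date,
    the fraction of pixels classified as water or wetland vegetation."""
    wet = (maps == WATER) | (maps == VEGETATION)
    return wet.reshape(maps.shape[0], -1).mean(axis=1)

def newly_flooded(maps):
    """Pixels that were not water on the first date but are on the last."""
    return (maps[0] != WATER) & (maps[-1] == WATER)
```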
Abstract:
During the last few years, several methods have been proposed to study and evaluate characteristic properties of the human skin using non-invasive approaches. Mostly, these methods cover aspects related either to dermatology, to analyze skin physiology and evaluate the effectiveness of medical treatments of skin diseases, or to dermocosmetics and cosmetic science, to evaluate, for example, the effectiveness of anti-aging treatments. For these purposes a routine approach must be followed. Although very accurate and high-resolution measurements can be achieved with conventional methods, such as optical or mechanical profilometry, their use is quite limited, primarily due to the high cost of the required instrumentation, which is usually also cumbersome; these are clear limitations for routine-based analysis. This thesis investigates the feasibility of a non-invasive skin characterization system based on the analysis of capacitive images of the skin surface. The system relies on a portable CMOS capacitive device which provides a capacitance map of the skin micro-relief at a resolution of 50 micron/pixel. To extract characteristic features of the skin topography, image analysis techniques, such as watershed segmentation and wavelet analysis, have been used to detect the main structures of interest: the wrinkles and plateaus of the typical micro-relief pattern. To validate the method, the features extracted from a dataset of skin capacitive images, acquired during dermatological examinations of a healthy group of volunteers, have been compared with the age of the subjects involved, showing good correlation with the skin ageing effect. Detailed analysis of the output of the capacitive sensor, compared with optical profilometry of a silicone replica of the same skin area, has revealed the potential and some limitations of this technology. Applications to follow-up studies, as needed to objectively evaluate the effectiveness of treatments in a routine manner, are also discussed.
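A minimal sketch of the watershed step described above, assuming the capacitance map is a 2-D array in which plateaus read high and wrinkles read low; the smoothing and seeding choices are assumptions, not the thesis' actual pre-processing.

```python
# Split the skin micro-relief into plateau regions bounded by wrinkle
# lines, using watershed on a smoothed capacitance map.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian
from skimage.segmentation import watershed

def segment_microrelief(img, sigma=2.0):
    """Return a label image of plateau regions; watershed lines fall
    along the low-valued wrinkles separating them."""
    smooth = gaussian(img, sigma=sigma)
    # Each local maximum of the smoothed map seeds one plateau region.
    peaks = smooth == ndi.maximum_filter(smooth, size=9)
    seeds, n_regions = ndi.label(peaks)
    labels = watershed(-smooth, markers=seeds)  # flood basins from the peaks
    return labels, n_regions
```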
Abstract:
An Adaptive Optics (AO) system is a fundamental requirement of 8m-class telescopes. To obtain the maximum resolution these telescopes allow, the atmospheric turbulence must be corrected. Thanks to adaptive optics systems we can exploit the full potential of these instruments, drawing as much information as possible from the sources in the universe. An AO system has two main components: the wavefront sensor (WFS), which measures the aberrations of the wavefront entering the telescope, and the deformable mirror (DM), which assumes a shape opposite to the one measured by the sensor. The two subsystems are connected by the reconstructor (REC). To do its job, the REC requires a "common language" between the two main AO components: a mapping between sensor space and mirror space, called the interaction matrix (IM). Therefore, for correct operation an AO system has a central requirement: measuring an IM to calibrate the whole system. The IM measurement is a milestone for any AO system and must be done regardless of telescope size or class. Usually, this calibration step is done by adding to the telescope an auxiliary artificial light source (i.e. a fiber) that illuminates both the deformable mirror and the sensor, permitting the calibration of the AO system. For very large telescopes (more than 8m, like the Extremely Large Telescopes, ELTs), fiber-based IM measurement requires challenging optical setups that in some cases are also impractical to build. In these cases, new techniques to measure the IM are needed. In this PhD work we investigate a different calibration method that can be applied directly on sky, at the telescope, without any auxiliary source; such a technique can be used to calibrate an AO system on a telescope of any size. We test the new calibration technique, called the "sinusoidal modulation technique", on the Large Binocular Telescope (LBT) AO system, which is already a complete AO system with the two main components: a secondary deformable mirror with 672 actuators, and a pyramid wavefront sensor. The first phase of my PhD work was helping to implement the WFS board (containing the pyramid sensor and all the auxiliary optical components), working on both the optical alignment and tests of some of the optical components. Thanks to the "solar tower" facility of the Astrophysical Observatory of Arcetri (Firenze), we were able to reproduce an environment very similar to that of the telescope, testing the main LBT AO components: the pyramid sensor and the secondary deformable mirror. This enabled the second phase of my PhD work: measuring the IM with the sinusoidal modulation technique. First we measured the IM using an auxiliary fiber source to calibrate the system, without any injected disturbance. We then used the technique to measure the IM as it would be done directly "on sky", adding an atmospheric disturbance to the AO system. The results obtained in this PhD work, measuring the IM directly in the Arcetri solar tower system, are crucial for future developments: the ability to acquire the IM directly on sky means that AO systems can be calibrated even on extremely-large-telescope-class instruments, where classic IM measurement techniques are problematic and sometimes impossible.
Finally, we should not forget why we need all this: the main aim is to observe the universe. Only by using the full capabilities of this new class of large telescopes will we be able to increase our knowledge of the objects we observe, resolving finer details and thus discovering, analyzing and understanding the behavior of the universe's components.
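A hedged sketch of the demodulation idea behind a sinusoidal modulation calibration: each DM mode is driven at its own known frequency and amplitude, and the corresponding IM column is recovered by lock-in demodulation of the WFS slope signals, so a disturbance uncorrelated with the probes averages away. The frequencies, amplitudes, and single in-phase reference are simplifying assumptions, not the LBT implementation.

```python
# Lock-in recovery of interaction-matrix columns from WFS time series
# recorded while every DM mode is sinusoidally modulated.
import numpy as np

def demodulate_im(slopes, t, freqs, amps):
    """slopes: (n_samples, n_slopes) WFS slope time series
    t:      (n_samples,) sample times in seconds
    freqs:  per-mode probe frequency in Hz
    amps:   per-mode probe amplitude in command units
    Returns the estimated IM of shape (n_slopes, n_modes)."""
    im = np.empty((slopes.shape[1], len(freqs)))
    for j, (f, a) in enumerate(zip(freqs, amps)):
        ref = np.sin(2 * np.pi * f * t)  # in-phase reference for mode j
        # Project each slope signal onto the reference; the factor 2/a
        # converts the correlation into slope per unit mode command.
        im[:, j] = 2.0 * (slopes * ref[:, None]).mean(axis=0) / a
    return im
```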
Abstract:
Escherichia coli can use C4-dicarboxylates and other carboxylic acids as substrates for aerobic and anaerobic metabolism. The presence of C4-dicarboxylates in the external medium is sensed by the two-component system DcuSR, consisting of the membrane-bound sensor kinase DcuS and the cytoplasmic response regulator DcuR. Binding of C4-dicarboxylates to the periplasmic domain of DcuS induces the target genes. These include the genes for the anaerobic fumarate/succinate antiporter DcuB (dcuB), the anaerobic fumarase (fumB), and fumarate reductase (frdABCD). Under aerobic conditions DcuSR stimulates expression of the dctA gene, which encodes the aerobic C4-dicarboxylate carrier DctA. The carrier DcuB was shown to have a regulatory function in the expression of the DcuSR-regulated genes. Inactivation of the dcuB gene led to maximal expression of a dcuB'-'lacZ reporter gene fusion and of other DcuSR-dependent genes even without fumarate. This stimulation occurred only in a dcuS-positive background. DcuB thus differs from the alternative carriers DcuA and DcuC, which did not show this effect. Using random mutagenesis, DcuB point mutants were generated (Thr394Ile and Asp398Asn) that caused gene induction but retained an intact transport function. This shows that the regulatory effect of DcuB is independent of its transport function. The function of one point mutation (Thr394) was characterized in more detail by site-directed mutagenesis. Two models for the membrane topology of DcuB and the location of the point mutations within the protein are presented. Since DcuB might mediate its regulatory function through an interaction with DcuS, possible interactions between DcuB and DcuS, as well as DcuR, were investigated using two-hybrid systems. For biochemical studies of DcuB, expression of the protein was also attempted in vivo and in vitro. Under aerobic conditions the C4-dicarboxylate carrier DctA influences the expression of the DcuSR-dependent genes. A mutation of the dctA gene caused stronger expression of a dctA'-'lacZ reporter gene fusion compared to the wild type. This expression decreased in a dcuS-negative background, but the succinate-dependent induction was retained. Under anaerobic conditions the dctA gene can also be induced by inactivation of DcuB. A model is presented that explains the involvement of both carriers in the DcuSR-dependent regulation.
Abstract:
One of the most discussed and interesting topics in computing today is certainly Cloud Computing. New organizations offering services of this kind are appearing everywhere, and many companies now wish to learn how to use them, migrating their data centers and applications to the Cloud. This is also happening thanks to the ever stronger push from the big players in the computing community: Google, Amazon, Microsoft, Apple and many others speak about Cloud Computing more and more frequently, and are in turn restructuring themselves deeply in order to offer Cloud services, adapting to the great change taking place in the computing sector. However, the great movement of energy, capital, investment and interest caused by the advent of Cloud Computing does not actually help in understanding what it is, to the point that no single, shared definition of it exists today. Moreover, the strong pressure it receives from the market means that many of its most distinctive characteristics, from a software engineering point of view, are hidden and overshadowed by other properties that are architecturally less important but have a greater impact on the audience of potential customers. The goal of this thesis is therefore to explore the emerging world of Cloud Computing, trying to understand its main architectural characteristics in depth and focusing in particular on the development of applications in the Cloud environment, a process that in some respects differs greatly from development for more classical environments. The thesis is structured as follows: the first chapter provides an overview of Cloud Computing, gives the first definitions, and introduces all the fundamental themes developed in the following chapters. The second chapter is an in-depth study of a specific topic, Cloud Operating Systems, the fundamental components that make it possible to turn any computing infrastructure into a Cloud infrastructure; they are also presented through many analogies with classical desktop operating systems. The third chapter goes deeper into the heart of Cloud Computing, studying the layer called Infrastructure as a Service through a concrete example of a Cloud provider: Amazon, which delivers its services through the Amazon Web Services project. At that point, having repeatedly had to deal with the problems of managing huge amounts of data, which are often the focal point of many Cloud applications, it seemed important to explore this topic in a dedicated chapter, the fourth, again supporting the theoretical discussion with a concrete example: BigTable, Google's system for managing the storage of large amounts of data. After this interlude, the discussion proceeds up the layers of the Cloud architecture, retracing the temporal evolution of Cloud Computing: in the fifth chapter, we move from the Infrastructure as a Service layer to the Platform as a Service layer, through a study of the services offered by Google Cloud Platform.
The sixth chapter is the central point of the thesis, the one that fulfills its main objective: it contains an in-depth study of the development of Cloud-oriented applications. Finally, the seventh chapter acts as a bridge toward possible future developments, analyzing the main limits of the technologies, models and languages that support Cloud Computing today. It proposes the actor model as a possible solution, and also presents the Orleans framework, which Microsoft has been developing in recent years precisely to support the development of applications in the Cloud environment.