996 results for sensor classification
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Nowadays, the remarkable growth of the mobile device market has led to a need for location-aware applications. However, a person's location is sometimes difficult to obtain, since most of these devices rely only on a GPS (Global Positioning System) chip to retrieve it. To overcome this limitation and to provide location everywhere (even where no structured environment exists), a wearable inertial navigation system is proposed, which is a convenient way to track people in situations where other localization systems fail. The system combines pedestrian dead reckoning with GPS, using widely available, low-cost and low-power hardware components. The system's innovation lies in the information fusion and in the use of probabilistic methods to learn a person's gait behavior in order to correct, in real time, the drift errors produced by the sensors.
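As a rough illustration of the dead-reckoning part of such a system, the sketch below integrates accelerometer samples into a position track and resets the velocity whenever the foot appears to be at rest (a zero-velocity update). It is a minimal sketch under simplifying assumptions, not the system described above: the sampling model, threshold values and function names are illustrative.

```python
import numpy as np

GRAVITY = 9.81      # m/s^2
STANCE_TOL = 0.4    # m/s^2; |a| close to gravity => foot assumed at rest (illustrative)

def dead_reckon(acc, dt):
    """Integrate accelerometer samples into a position track, applying a
    zero-velocity update (ZUPT) whenever the foot is detected at rest.

    acc : (N, 3) accelerometer samples in m/s^2 (sensor assumed level)
    dt  : sampling interval in seconds
    """
    vel = np.zeros(3)
    pos = np.zeros(3)
    track = []
    for a in acc:
        # stance detection: total acceleration magnitude close to gravity
        if abs(np.linalg.norm(a) - GRAVITY) < STANCE_TOL:
            vel[:] = 0.0                               # reset accumulated velocity drift
        else:
            lin = a - np.array([0.0, 0.0, GRAVITY])    # remove gravity component
            vel += lin * dt
            pos += vel * dt
        track.append(pos.copy())
    return np.array(track)
```

The zero-velocity reset is what keeps the double integration of noisy, low-cost accelerometers from drifting without bound between steps.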
Abstract:
Nowadays there is an increasing number of location-aware mobile applications. However, these applications retrieve location only through the mobile device's GPS chip, which means that indoors or in denser environments they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can exhibit large estimation errors because, to keep the system wearable, they rely on low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with accelerometer data to better detect the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides this sensor fusion, an information fusion architecture is proposed, based on information from GPS and several inertial units placed on the pedestrian's body, that is used to learn the pedestrian's gait behavior and to correct, in real time, the inertial sensor errors, thus improving location estimation.
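A minimal sketch of the stance-phase detection idea described above, fusing a foot-sole force sensor with the accelerometer magnitude; the thresholds and function names are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

GRAVITY = 9.81
FORCE_THRESHOLD = 20.0   # N, illustrative foot-contact threshold
ACC_TOL = 0.5            # m/s^2 band around gravity

def stance_phase(force, acc):
    """Return a boolean stance mask per sample by fusing a foot-sole force
    sensor with the accelerometer magnitude (both conditions must agree).

    force : (N,) force sensor readings, in newtons
    acc   : (N, 3) accelerometer samples, in m/s^2
    """
    acc_mag = np.linalg.norm(acc, axis=1)
    foot_loaded = force > FORCE_THRESHOLD            # sole pressed against the ground
    foot_still = np.abs(acc_mag - GRAVITY) < ACC_TOL # little residual acceleration
    return foot_loaded & foot_still
```

Requiring both conditions is what reduces the false stance detections that a purely accelerometer-based detector produces during slow swing phases.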
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended to three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
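To make the linear unmixing problem concrete, the sketch below estimates abundance fractions for one pixel by constrained least squares, enforcing nonnegativity with a nonnegative least-squares solver and approximating the sum-to-one constraint through the usual heavily weighted augmented row. It is a generic illustration of the constrained least-squares idea, not the specific algorithm of any of references [19–24].

```python
import numpy as np
from scipy.optimize import nnls

def unmix_fcls(M, y, delta=1e3):
    """Estimate abundance fractions for one pixel under the linear mixing
    model y = M a + noise, with a >= 0 and sum(a) = 1.

    M : (L, p) matrix whose columns are the known endmember signatures
    y : (L,) observed pixel spectrum
    The sum-to-one constraint is enforced approximately by appending a
    heavily weighted row of ones to the system before solving NNLS.
    """
    L, p = M.shape
    M_aug = np.vstack([M, delta * np.ones((1, p))])
    y_aug = np.concatenate([y, [delta]])
    a, _ = nnls(M_aug, y_aug)
    return a

# tiny synthetic check: two endmembers mixed with fractions 0.3 and 0.7
M = np.array([[1.0, 0.2], [0.1, 0.9], [0.5, 0.4]])
y = M @ np.array([0.3, 0.7])
print(unmix_fcls(M, y))    # ~[0.3, 0.7]
```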
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and the noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
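The dimensionality-reduction step mentioned above can be sketched, for instance, as a projection of the spectral vectors onto the first k singular vectors of the mean-removed data. This is a generic PCA/SVD-style projection for illustration only, not the MNF transform or the method of reference 49.

```python
import numpy as np

def reduce_dim(Y, k):
    """Project spectral vectors onto a k-dimensional signal subspace
    estimated by SVD, as a pre-processing step before unmixing.

    Y : (L, N) matrix of N observed pixel spectra with L bands
    k : target subspace dimension (e.g., the expected number of endmembers)
    Returns the (k, N) projected data and the (L, k) orthonormal basis.
    """
    mean = Y.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Y - mean, full_matrices=False)
    basis = U[:, :k]                      # directions of largest variance
    return basis.T @ (Y - mean), basis
```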
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
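As a small illustration of why Dirichlet-distributed abundances match the positivity and full-additivity constraints by construction, the following sketch generates synthetic linear mixtures with Dirichlet abundance fractions. The concentration parameters and noise level are arbitrary illustrative choices, not the generative model calibrated in the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mixtures(M, alpha, n_pixels, noise_std=0.01):
    """Generate synthetic hyperspectral pixels under the linear mixing model
    with Dirichlet-distributed abundances (nonnegative and summing to one).

    M     : (L, p) endmember signature matrix
    alpha : (p,) Dirichlet concentration parameters (illustrative choice)
    """
    A = rng.dirichlet(alpha, size=n_pixels).T                       # (p, n_pixels) abundances
    Y = M @ A + noise_std * rng.standard_normal((M.shape[0], n_pixels))
    return Y, A
```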
Abstract:
Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g., nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, while being much simpler in the sense that they do not require any text pre-processing or feature engineering.
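One widely used information-theoretic dissimilarity of this kind is the normalized compression distance (NCD). The sketch below pairs it with a 1-nearest-neighbour rule purely to illustrate the dissimilarity-based classification idea; it is not necessarily the exact measure or classifier used in the paper.

```python
import zlib

def clen(x: bytes) -> int:
    """Compressed length, a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x: str, y: str) -> float:
    """Normalized compression distance between two texts."""
    bx, by = x.encode(), y.encode()
    cx, cy, cxy = clen(bx), clen(by), clen(bx + by)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(text, labelled):
    """1-nearest-neighbour classification in the dissimilarity space:
    assign the label of the training text closest to `text` under NCD.
    `labelled` is a list of (text, label) pairs."""
    return min(labelled, key=lambda tl: ncd(text, tl[0]))[1]
```

No tokenization, stemming or feature selection is involved; the compressor implicitly supplies the representation.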
Abstract:
Norfloxacin (NFX) is an antibacterial antibiotic indicated against Gram-negative bacteria and widely used to treat respiratory and urinary tract infections. Driven by the need for clinical and pharmacological studies, fast and sensitive analytical methods have been developed for the determination of norfloxacin. In this work, a new sensitive and selective electrochemical sensor was developed for the detection of NFX. The sensor was built by modifying a glassy carbon electrode. The electrode was first modified by depositing a suspension of multi-walled carbon nanotubes (MWCNT) in order to increase the sensitivity of the analytical response. A molecularly imprinted polymer (MIP) film was then prepared by electrodeposition from a solution containing pyrrole (functional monomer) and NFX (template). A non-imprinted control electrode (NIP) was also prepared. The electrochemical response of the sensor to the oxidation of NFX was studied and characterized by square-wave voltammetry. Several experimental parameters were optimized, such as the polymerization, incubation and extraction conditions. The sensor exhibits a linear relationship between the peak current intensity and the logarithm of the NFX concentration in the range from 0.1 to 8 µM. The results show good precision, with repeatability below 6% and reproducibility below 9%. A detection limit of 0.2 µM was calculated from the calibration curve. The developed method is selective, fast and easy to handle. The molecularly imprinted sensor was successfully applied to the detection of NFX in real urine and water samples.
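As an illustration of how such a calibration curve is used, the sketch below fits peak current against the logarithm of concentration over the reported 0.1 to 8 µM range and inverts the fit to quantify an unknown sample. The function names and any numbers fed to it are illustrative placeholders, not data from this work.

```python
import numpy as np

def fit_calibration(conc_uM, peak_current):
    """Least-squares fit of the linear calibration described above:
    peak current versus log10 of NFX concentration (0.1 to 8 uM range).
    Returns slope and intercept of the calibration line."""
    x = np.log10(conc_uM)
    slope, intercept = np.polyfit(x, peak_current, 1)
    return slope, intercept

def concentration_from_current(current, slope, intercept):
    """Invert the calibration line to estimate concentration in uM."""
    return 10 ** ((current - intercept) / slope)
```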
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation presented to obtain the degree of Master in Structural and Functional Biochemistry from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Biomedical Engineering
Abstract:
Ammonia is an important gas in many power plants and industrial processes, so its detection is of great importance in environmental monitoring and process control due to its high toxicity. Ammonia's threshold limit value is 25 ppm for an exposure time of 8 h, whereas exposure to 35 ppm is safe for only 10 min. In this work, a brief introduction to ammonia is presented, covering its physical and chemical properties, the dangers of handling it, its production routes and its sources. The application areas in which ammonia gas detection is important and needed are also reviewed: environmental gas analysis (e.g. intensive farming) and the automotive, chemical and medical industries. Monitoring ammonia gas in these different areas imposes requirements that must be met. These requirements determine the choice of sensor and, therefore, several types of sensors with different characteristics have been developed, such as metal oxide, surface acoustic wave, catalytic and optical sensors, indirect gas analyzers, and conducting polymers. All of these sensor types are described, but particular attention is given to polyaniline (PANI): its characteristics, synthesis, chemical doping processes, deposition methods, transduction modes, and its adhesion to inorganic materials. In addition, short descriptions of PANI nanostructures, the use of electrospinning in the formation of nanofibers/microfibers, and graphene and its characteristics are included. The developed sensor addresses a goal of the medical community, the monitoring of ammonia levels in breath, as an easy and non-invasive method for diagnosing kidney malfunction and/or gastric ulcers. For that, the device should be capable of detecting different ammonia gas concentrations. Thus, in the present work an ammonia gas sensor was developed using a conductive polymer composite immobilized on a carbon transducer surface. The experiments targeted ammonia measurements at the ppb level, and measurements were carried out in the concentration range from 1 ppb to 500 ppb. A commercial substrate was used: screen-printed carbon electrodes. After adequate surface pre-treatment of the substrate, its electrodes were covered with a nanofibrous polymeric composite. Conducting polyaniline doped with sulfuric acid (H2SO4) was blended with reduced graphene oxide (RGO) obtained by wet chemical synthesis. This composite formed the basis for the formation of nanofibers by electrospinning; the nanofibers increase the sensitivity of the sensing material. The electrospun PANI-RGO fibers were placed on the substrate and then dried at ambient temperature. Amperometric measurements were performed at different ammonia gas concentrations (1 to 500 ppb), the I-V characteristics were recorded, and some interfering gases were studied (NO2, ethanol, and acetone). The gas samples were prepared in a custom setup and diluted with dry nitrogen gas. Electrospun nanofibers of the PANI-RGO composite showed enhanced NH3 gas detection compared with electrospun PANI nanofibers alone, exhibiting a larger resistance variation over the 1 to 500 ppb concentration range. The sensor also showed stable, reproducible and recoverable behavior, as well as better response and recovery times. The new sensing material of the developed sensor proved to be a good candidate for ammonia gas determination.
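A common way to report the behaviour described above is the relative resistance response of the chemiresistive film. The helper below is a generic illustration of that calculation with made-up example values, not measurements from this work.

```python
def relative_response(r_baseline, r_gas):
    """Relative resistance response of a chemiresistive sensor,
    (R_gas - R_0) / R_0, used to compare responses across
    concentrations (here, ppb-level NH3)."""
    return (r_gas - r_baseline) / r_baseline

# illustrative use: resistance read before and during exposure (made-up values)
# print(relative_response(12.0e3, 15.6e3))   # -> 0.30
```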
Abstract:
Dissertation submitted to obtain the degree of Master in Biomedical Engineering
Abstract:
Radio link quality estimation is essential for protocols and mechanisms such as routing, mobility management and localization, particularly in low-power wireless networks such as wireless sensor networks. Commodity Link Quality Estimators (LQEs), e.g. PRR, RNP, ETX, four-bit and RSSI, can only provide a partial characterization of links, as they ignore several link properties such as channel quality and stability. In this paper, we propose F-LQE (Fuzzy Link Quality Estimator), a holistic metric that estimates link quality on the basis of four link quality properties, namely packet delivery, asymmetry, stability, and channel quality, which are expressed and combined using fuzzy logic. We demonstrate through an extensive experimental analysis that F-LQE is more reliable than existing estimators (e.g., PRR, WMEWMA, ETX, RNP, and four-bit), as it provides a finer-grained link classification. It is also more stable, as it has a lower coefficient of variation of link estimates. Importantly, we evaluate the impact of F-LQE on the performance of tree routing, specifically the Collection Tree Protocol (CTP). For this purpose, we adapted F-LQE to build a new routing metric for CTP, which we dub F-LQE/RM. Extensive experimental results obtained with widely used state-of-the-art testbeds show that F-LQE/RM significantly improves CTP routing performance over four-bit (the default LQE of CTP) and ETX (another popular LQE). F-LQE/RM improves end-to-end packet delivery by up to 16%, reduces the number of packet retransmissions by up to 32%, reduces the hop count by up to 4%, and improves topology stability by up to 47%.
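To illustrate the general idea of a fuzzy link-quality score built from several link properties, the sketch below maps each property into [0, 1] with a piecewise-linear membership function and combines them with a fuzzy AND (the minimum). The membership shapes, thresholds and combination rule actually used by F-LQE may differ; this is only an illustration of the approach.

```python
def membership(value, low, high):
    """Piecewise-linear membership function mapping a raw link property
    into [0, 1]: 0 below `low`, 1 above `high`, linear in between."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def fuzzy_link_quality(delivery, asymmetry, stability, channel):
    """Combine the four link-property memberships into a single score.
    Here the combination is the fuzzy AND (minimum); all inputs are
    assumed normalized to [0, 1]."""
    memberships = [
        membership(delivery, 0.0, 1.0),          # packet delivery ratio
        membership(1.0 - asymmetry, 0.0, 1.0),   # low asymmetry is good
        membership(stability, 0.0, 1.0),
        membership(channel, 0.0, 1.0),           # e.g., normalized channel quality
    ]
    return min(memberships)
```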
Abstract:
In-network storage of data in wireless sensor networks helps reduce communication inside the network and favors data aggregation. In this paper, we consider the use of n-out-of-m codes and data dispersal in combination with in-network storage. In particular, we provide an abstract model of in-network storage to show how n-out-of-m codes can be used, and we discuss how this can be achieved in five case studies. We also define a model aimed at evaluating the probability of correct data encoding and decoding, and we exploit this model and simulations to show how, in the case studies, the parameters of the n-out-of-m codes and of the network should be configured in order to achieve correct data coding and decoding with high probability.
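The probability reasoning behind n-out-of-m storage can be illustrated with the simplest independence assumption: if each of the m dispersed fragments survives with probability p, the data are decodable whenever at least n fragments remain. This binomial sketch is an illustration of that reasoning, not the specific model defined in the paper.

```python
from math import comb

def prob_decodable(n, m, p):
    """Probability of recovering the data with an n-out-of-m code when each
    of the m stored fragments survives independently with probability p:
    at least n fragments must be retrievable."""
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(n, m + 1))

# illustrative configuration: 5-out-of-8 dispersal, 90% fragment availability
# print(prob_decodable(5, 8, 0.9))   # ~0.995
```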
Abstract:
Disaster management is one of the most relevant application fields of wireless sensor networks. In this application, the role of the sensor network usually consists of obtaining a representation or a model of a physical phenomenon spreading through the affected area. In this work we focus on forest firefighting operations, proposing three fully distributed ways of approximating the actual shape of the fire. In the simplest approach, a circular burnt area is assumed around each node that has detected the fire, and the union of these circles gives the overall fire shape. However, as this approach makes intensive use of the wireless sensor network resources, we propose two in-network aggregation techniques that do not require considering the complete set of fire detections. The first technique models the fire by means of a complex shape composed of multiple convex hulls representing different burning areas, while the second uses a set of arbitrary polygons. Performance evaluation with realistic fire models in computer simulations reveals that the method based on arbitrary polygons achieves a 20% improvement in the accuracy of the fire shape approximation, while reducing the in-network resource overhead to 10% in the best case.
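The convex-hull aggregation idea can be illustrated with a standard hull routine over the 2-D coordinates of the nodes that detected fire: only the hull vertices need to be forwarded rather than every detection. This is a generic sketch (Andrew's monotone chain), not the paper's distributed protocol.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D detection coordinates,
    given as a list of (x, y) tuples. Returns the hull vertices in
    counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of the cross product (OA x OB); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```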
Abstract:
A new immunosensor for human chorionic gonadotropin (hCG) is presented, made by electrodepositing chitosan/gold nanoparticles onto a graphene screen-printed electrode (SPE). The antibody was covalently bound to the chitosan via its Fc terminal. The assembly was monitored by electrochemical impedance spectroscopy (EIS) and followed by Fourier transform infrared (FTIR) spectroscopy. The hCG immunosensor displayed a linear response against the logarithm of the hCG concentration from 0.1 to 25 ng/mL, with a limit of detection of 0.016 ng/mL. High selectivity was observed in blank urine, and successful detection of hCG was also achieved in spiked samples of real urine from a pregnant woman. The immunosensor combines good detection capability, simplicity of fabrication, low cost, and high sensitivity and selectivity.