792 results for airborne sensor
Abstract:
The present study focuses on the characterization of ultrafine particles emitted during the welding of steel using Ar+CO2 gas mixtures, and aims to identify the main process parameters that influence the emission. It was found that the amount of emitted ultrafine particles (measured by particle number and alveolar deposited surface area) clearly depends on the distance to the welding front and on the main welding parameters, namely the current intensity and the heat input of the welding process. The emission of airborne ultrafine particles seems to increase with the current intensity, as the fume formation rate does. When comparing the tested gas mixtures, higher emissions are observed for more oxidant mixtures, that is, mixtures with higher CO2 content, which result in higher arc stability. The latter mixtures originate higher concentrations of ultrafine particles (measured as the number of particles per cm3 of air) and higher values of alveolar deposited surface area, thus resulting in a more hazardous condition regarding workers' exposure. © 2014 Sociedade Portuguesa de Materiais (SPM). Published by Elsevier España, S.L. All rights reserved.
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. To test the impacts, a flexible beam is clamped to the end-effector of a manipulator that is programmed so that the beam moves against a rigid surface. Eighteen signals are captured and their correlations are calculated. A sensor classification scheme based on the multidimensional scaling technique is presented.
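A minimal sketch of the kind of sensor classification step the abstract describes: pairwise correlations between the captured signals are converted into dissimilarities and embedded in two dimensions with multidimensional scaling, so that similar sensors cluster together. The signal array and its dimensions are illustrative assumptions, not data from the paper.

    # Sketch: sensor classification via multidimensional scaling (MDS).
    # Assumes a matrix `signals` of shape (18, n_samples); synthetic here.
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    signals = rng.standard_normal((18, 1000))  # placeholder for the 18 captured signals

    # Pairwise correlation matrix, mapped to a dissimilarity in [0, 2].
    corr = np.corrcoef(signals)
    dissimilarity = 1.0 - corr

    # MDS embedding of the sensors into 2-D for visual grouping.
    embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = embedding.fit_transform(dissimilarity)
    print(coords.shape)  # (18, 2): one point per sensor/signal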
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Instrumentation, Industrial Maintenance and Quality.
Abstract:
The aim of this study was to assess exposure to ultrafine particles in the urban environment of Lisbon, Portugal, due to automobile traffic, by determining the alveolar deposited surface area in an avenue leading to the town center during late spring. This study revealed differentiated patterns for weekdays and weekends, which could be related to the fluxes of automobile traffic. During a typical week, the ultrafine particle alveolar deposited surface area varied between 35.0 and 89.2 μm²/cm³, which is comparable with levels reported for other towns in Germany and the United States. These measurements were complemented by measuring the electrical mobility diameter (varying from 18.3 to 128.3 nm) and the number of particles, which showed higher values than those previously reported for Madrid and Brisbane. Electron microscopy also showed that the collected particles were composed of carbonaceous agglomerates, typical of particles emitted by the exhaust of diesel vehicles. Implications: This study measures the alveolar deposited surface area of particles in the outdoor urban environment of Lisbon, Portugal. Such measurements have not been made before: only particulate matter with aerodynamic diameters below 2.5 μm (PM2.5) and below 10 μm (PM10) has been measured in outdoor environments, and the levels found cannot be held responsible for all the observed health effects. The exposure to nano- and ultrafine particles has therefore not been assessed systematically; several authors consider this a real knowledge gap and call for data such as these, which will allow better and more comprehensive epidemiologic studies. Nanoparticle surface area monitor (NSAM) instruments are recent, and their use has been limited to indoor atmospheres. However, as this study shows, NSAM is also a very powerful tool for outdoor environments. As most lung diseases are in fact related to deposition in the alveolar region of the lung, the metric used in this study is the ideal one.
Abstract:
The aim of this study is to assess the levels of airborne ultrafine particles emitted in welding processes (tungsten inert gas [TIG] and metal active gas [MAG] welding of carbon steel, and friction stir welding [FSW] of aluminum) in terms of the area deposited in the pulmonary alveolar tract, using a nanoparticle surface area monitor (NSAM) analyzer. The results showed the dependence of the emitted ultrafine particles on the process parameters and demonstrated the presence of ultrafine particles above background levels. The data indicated that the process resulting in the lowest levels of alveolar deposited surface area (ADSA) was FSW, followed by TIG and MAG. However, all tested processes resulted in significant concentrations of ultrafine particles deposited in the lungs of exposed workers.
Abstract:
Cooking was found to be a main source of submicrometer and ultrafine aerosols from gas combustion in stoves. This study therefore determined the alveolar deposited surface area due to aerosols resulting from common domestic cooking activities (boiling fish, vegetables, or pasta, and frying hamburgers and eggs). The alveolar deposited surface area during the cooking events increased significantly from a baseline of 42.7 μm²/cm³ (rising to 72.9 μm²/cm³ due to gas burning alone) to a maximum of 890.3 μm²/cm³ measured during fish boiling in water, and a maximum of 4500 μm²/cm³ during meat frying. This clearly shows that a domestic activity such as cooking can lead to exposures as high as those of occupational exposure activities.
Abstract:
The aim of this study is to contribute to the assessment of exposure levels of ultrafine particles (UFP) in the urban environment of Lisbon, Portugal, due to automobile traffic, by monitoring lung-deposited alveolar surface area (resulting from exposure to UFP) in a major avenue leading to the town centre during late spring, as well as in indoor buildings facing it. This study revealed differentiated patterns for weekdays and weekends, consistent with the PM2.5 and PM10 patterns currently monitored by air quality stations in Lisbon. The observed ultrafine particulate levels could be directly related to the fluxes of automobile traffic. During a typical week, the UFP alveolar deposited surface area varied between 35.0 and 89.2 μm²/cm³, which is comparable with levels reported for other towns in Germany and the United States. The measured values allowed the determination of the number of UFP per cm³, which is comparable to levels reported for Madrid and Brisbane. Regarding outdoor/indoor levels, we observed higher levels (32-63%) outdoors, which is somewhat lower than the levels observed in houses in Ontario.
Abstract:
The aim of this study was to contribute to the assessment of exposure levels of ultrafine particles in the urban environment of Lisbon, Portugal, due to automobile traffic, by monitoring lung-deposited alveolar surface area (resulting from exposure to ultrafine particles) in a major avenue leading to the town center during late spring, as well as in indoor buildings facing it. The data revealed differentiated patterns for weekdays and weekends, consistent with the PM2.5 and PM10 patterns currently monitored by air quality stations in Lisbon. The observed ultrafine particulate levels may be directly correlated with fluxes in automobile traffic. During a typical week, the ultrafine particle alveolar deposited surface area varied between 35 and 89.2 μm²/cm³, which is comparable with levels reported for other towns in Germany and the United States. The measured values allowed for the determination of the number of ultrafine particles per cubic centimeter, which is comparable to levels reported for Madrid and Brisbane. Regarding outdoor/indoor levels, we observed higher levels (32 to 63%) outdoors, which is somewhat lower than the levels observed in houses in Ontario.
Abstract:
Journal of Applied Physics, Vol. 96, no. 3
Abstract:
Coal contains trace elements and naturally occurring radionuclides such as 40K, 232Th, and 238U. When coal is burned, minerals, including most of the radionuclides, do not burn and are concentrated in the ash to several times their content in coal. Usually, a small fraction of the fly ash produced (2-5%) is released into the atmosphere. The activities released depend on many factors (concentration in the coal, ash content and inorganic matter of the coal, combustion temperature, ratio between bottom and fly ash, filtering system). Therefore, marked differences should be expected between the by-products produced and the amount of activity discharged (per unit of energy produced) by different coal-fired power plants. The effects of these releases on the environment due to ground deposition have received some attention, but the results of these studies are not unanimous and cannot be taken as a generic conclusion for all coal-fired power plants. In this study, dispersion modelling of natural radionuclides was carried out to assess the impact of continuous atmospheric releases from a selected coal plant. The natural radioactivity of the coal and the fly ash was measured, and the dispersion was modelled by a Gaussian plume, estimating the activity concentration at different heights up to a distance of 20 km in several wind directions. External and internal doses (inhalation and ingestion) and the resulting risk were calculated for the population living within 20 km of the coal plant. On average, the effective dose is lower than the ICRP's limit and the risk is lower than the U.S. EPA's limit; therefore, in this situation, the considered exposure does not pose any risk. However, when considering the dispersion in the prevailing wind direction, these values become significant, owing to increases of 75% and 44% in the 232Th and 226Ra concentrations, respectively.
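For readers unfamiliar with the dispersion model named above, a minimal sketch of a ground-level Gaussian plume calculation follows. The release rate, wind speed, stack height, and the power-law dispersion coefficients (a Briggs-style neutral-stability fit) are illustrative assumptions, not values from the study.

    # Sketch: ground-level activity concentration from a Gaussian plume with
    # ground reflection. All numbers are illustrative assumptions.
    import numpy as np

    def plume_ground_concentration(x_m, y_m, Q_bq_s, u_m_s, H_m):
        """C(x, y, 0) in Bq/m^3 for a continuous point release at height H."""
        # Power-law dispersion coefficients (neutral-stability style fit).
        sigma_y = 0.08 * x_m / np.sqrt(1.0 + 0.0001 * x_m)
        sigma_z = 0.06 * x_m / np.sqrt(1.0 + 0.0015 * x_m)
        return (Q_bq_s / (np.pi * u_m_s * sigma_y * sigma_z)
                * np.exp(-y_m**2 / (2.0 * sigma_y**2))
                * np.exp(-H_m**2 / (2.0 * sigma_z**2)))

    # Centerline concentration every 2 km out to 20 km, the study's range.
    for x in np.arange(2000.0, 20001.0, 2000.0):
        c = plume_ground_concentration(x, 0.0, Q_bq_s=1e6, u_m_s=4.0, H_m=100.0)
        print(f"{x/1000:5.0f} km: {c:.3e} Bq/m^3")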
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
Nowadays, the remarkable growth of the mobile device market has led to the need for location-aware applications. However, a person's location is sometimes difficult to obtain, since most of these devices only have a GPS (Global Positioning System) chip to retrieve it. To overcome this limitation and to provide location everywhere (even where a structured environment doesn't exist), a wearable inertial navigation system is proposed, which is a convenient way to track people in situations where other localization systems fail. The system combines pedestrian dead reckoning with GPS, using widely available, low-cost, and low-power hardware components. The system's innovation is the information fusion and the use of probabilistic methods to learn a person's gait behavior in order to correct, in real time, the drift errors given by the sensors.
Abstract:
There is nowadays an increasing number of location-aware mobile applications. However, these applications retrieve location only through the mobile device's GPS chip, which means that indoors, or in denser environments, they don't work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error because, to make the system wearable, they use low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with accelerometer data to better detect the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on information from GPS and several inertial units placed on the pedestrian's body, which is used to learn the pedestrian's gait behavior and correct, in real time, the inertial sensor errors, thus improving location estimation.
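A minimal sketch of the stance-phase detection idea described in the abstract: a foot is flagged as stationary when the accelerometer magnitude stays near gravity and a foot-mounted force sensor reads above a contact threshold; the flagged samples can then trigger zero-velocity corrections. The signal names, thresholds, and synthetic data are illustrative assumptions, not the authors' implementation.

    # Sketch: stance-phase detection fusing accelerometer and force-sensor data.
    # Thresholds and the synthetic signals are illustrative assumptions.
    import numpy as np

    G = 9.81  # gravity, m/s^2

    def stance_phase(acc_xyz, force_n, acc_tol=0.5, force_min=20.0):
        """Boolean mask: True where the foot is judged to be on the ground.

        acc_xyz : (n, 3) accelerometer samples in m/s^2
        force_n : (n,) foot force-sensor readings in newtons
        """
        acc_mag = np.linalg.norm(acc_xyz, axis=1)
        near_gravity = np.abs(acc_mag - G) < acc_tol   # little foot motion
        in_contact = force_n > force_min               # foot loaded on ground
        return near_gravity & in_contact

    # Tiny synthetic example: stance for the first half, swing for the second.
    n = 10
    acc = np.tile([0.0, 0.0, G], (n, 1))
    acc[n // 2:, 0] = 4.0           # horizontal acceleration during swing
    force = np.r_[np.full(n // 2, 300.0), np.zeros(n - n // 2)]
    print(stance_phase(acc, force))  # True for stance samples, False for swing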
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
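As a reference for the discussion that follows, the linear mixing model invoked above can be written compactly. This is a standard statement of the model, with notation chosen here rather than taken from the chapter:

    % Linear mixing model for one pixel (standard form; notation assumed here):
    % y is the L-band observed spectrum, M collects the p endmember signatures
    % column-wise, a holds the abundance fractions, and n is additive noise.
    \[
      \mathbf{y} = \mathbf{M}\,\mathbf{a} + \mathbf{n},
      \qquad
      a_j \ge 0 \;\; (j = 1, \dots, p),
      \qquad
      \sum_{j=1}^{p} a_j = 1,
    \]
    % where the nonnegativity and full-additivity constraints on the abundance
    % fractions are the ones the chapter exploits later.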
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises IFA performance, as in the ICA case.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
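A minimal sketch of the dimensionality-reduction step mentioned above, using PCA computed via an SVD of the mean-centered data. The cube dimensions and the number of retained components are illustrative assumptions:

    # Sketch: PCA dimensionality reduction of a hyperspectral cube via SVD.
    # Cube shape and number of retained components are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    rows, cols, bands = 64, 64, 224          # e.g., an AVIRIS-sized band count
    cube = rng.random((rows, cols, bands))   # placeholder for real radiance data

    # Flatten to (pixels, bands) and mean-center each band.
    X = cube.reshape(-1, bands)
    Xc = X - X.mean(axis=0)

    # SVD of the centered data: right singular vectors are the principal axes.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    k = 10                                   # retained components (assumed)
    scores = Xc @ Vt[:k].T                   # (pixels, k) reduced representation
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"{explained:.1%} of variance kept in {k} components")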
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one; nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
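The Dirichlet sources mentioned above are what make the positivity and full-additivity constraints automatic, since the Dirichlet distribution is supported exactly on the probability simplex. For reference, its standard density (notation assumed here, not taken from the chapter) is:

    % Standard Dirichlet density over the (p-1)-simplex; theta_j > 0 are the
    % concentration parameters. Its support enforces a_j >= 0 and sum_j a_j = 1,
    % i.e., exactly the abundance constraints of the linear mixing model.
    \[
      p(\mathbf{a} \mid \boldsymbol{\theta})
      = \frac{\Gamma\!\left(\sum_{j=1}^{p} \theta_j\right)}
             {\prod_{j=1}^{p} \Gamma(\theta_j)}
        \prod_{j=1}^{p} a_j^{\theta_j - 1},
      \qquad
      a_j \ge 0, \quad \sum_{j=1}^{p} a_j = 1 .
    \]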
Abstract:
Norfloxacin (NFX) is an antibacterial antibiotic indicated against Gram-negative bacteria and widely used for the treatment of respiratory and urinary tract infections. The need for clinical and pharmacological studies has driven the development of fast and sensitive analytical methods for the determination of norfloxacin. In this work, a new sensitive and selective electrochemical sensor was developed for the detection of NFX. The sensor was built by modifying a glassy carbon electrode. The electrode was first modified by depositing a suspension of multi-walled carbon nanotubes (MWCNT) to increase the sensitivity of the analytical response. A molecularly imprinted polymer (MIP) film was then prepared by electrodeposition from a solution containing pyrrole (functional monomer) and NFX (template). A non-imprinted control electrode (NIP) was also prepared. The electrochemical response of the sensor to the oxidation of NFX was studied and characterized by square-wave voltammetry. Several experimental parameters were optimized, such as the polymerization, incubation, and extraction conditions. The sensor exhibits linear behavior between the peak current intensity and the logarithm of the NFX concentration in the range from 0.1 to 8 μM. The results show good precision, with repeatability below 6% and reproducibility below 9%. A detection limit of 0.2 μM was calculated from the calibration curve. The developed method is selective, fast, and easy to handle. The molecularly imprinted sensor was successfully applied to the detection of NFX in real urine and water samples.
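A minimal sketch of how a calibration of the kind described (peak current linear in the logarithm of concentration) can be fitted and inverted to estimate an unknown NFX concentration. The current values below are invented placeholders, not data from the dissertation:

    # Sketch: fit I = a + b*log10(C) to calibration points and invert it for an
    # unknown sample. Current values below are invented placeholders.
    import numpy as np

    conc_uM = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])   # standards, 0.1-8 uM range
    peak_uA = np.array([0.8, 2.1, 2.7, 3.3, 3.9, 4.5])   # assumed peak currents

    # Least-squares fit of current vs. log10(concentration).
    b, a = np.polyfit(np.log10(conc_uM), peak_uA, 1)
    print(f"I = {a:.2f} + {b:.2f} * log10(C/uM)")

    # Invert the calibration for an unknown sample's measured peak current.
    i_unknown = 3.0
    c_unknown = 10 ** ((i_unknown - a) / b)
    print(f"estimated concentration: {c_unknown:.2f} uM")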