11 results for Road images
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Low-noise surfaces have been increasingly considered as a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement that implements the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those that are most relevant in predicting the type of pavement, while reducing the computational cost. A set of road segments with different pavement types was tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
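As a loose illustration of this kind of pipeline (not the paper's actual feature set or classifier), the following Python sketch extracts band-power features from the two Close-Proximity microphone signals and trains a classifier with automatic feature selection; the function names, window sizes and parameter values are assumptions.

    # Hypothetical sketch: classifying pavement type from CPX microphone signals.
    # Feature definitions, window sizes and the classifier choice are illustrative only.
    import numpy as np
    from scipy.signal import welch
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    def band_features(signal, fs=44100, n_bands=24):
        """Average power in n_bands equal-width frequency bands (illustrative feature set)."""
        _, psd = welch(signal, fs=fs, nperseg=4096)
        return np.array([band.mean() for band in np.array_split(psd, n_bands)])

    def extract_features(mic_a, mic_b, fs=44100):
        """One feature vector per road segment: both microphones concatenated."""
        return np.concatenate([band_features(mic_a, fs), band_features(mic_b, fs)])

    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=20),   # keep the 20 most discriminative features
        SVC(kernel="rbf"),              # any statistical classifier could stand in here
    )
    # model.fit(X_train, y_train); model.predict(X_test)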
Abstract:
The use of iris recognition for human authentication has been spreading in recent years. Daugman proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and is mandatory when a non-infra-red camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and pupil enhancement and isolation. The tests, carried out by our C# application on grayscale CASIA and UBIRIS images, show that the template matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when we deal with non-infra-red images and non-uniform illumination.
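A minimal Python/OpenCV sketch of the template matching idea for pupil localisation is shown below; it is not the authors' C# implementation, and the template shape, radii and score are illustrative assumptions only.

    # Illustrative sketch only: locating a dark circular pupil by template matching.
    import cv2
    import numpy as np

    def pupil_by_template_matching(gray, radii=range(20, 61, 5)):
        """Try dark-disk templates of several radii and keep the best match."""
        best = (-1.0, None, None)                        # (score, centre, radius)
        for r in radii:
            templ = np.full((2 * r + 1, 2 * r + 1), 255, dtype=np.uint8)
            cv2.circle(templ, (r, r), r, 0, -1)          # dark disk on a bright background
            res = cv2.matchTemplate(gray, templ, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(res)
            if score > best[0]:
                best = (score, (loc[0] + r, loc[1] + r), r)
        return best                                      # score, pupil centre (x, y), radius

    # gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
    # score, centre, radius = pupil_by_template_matching(gray)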
Abstract:
The rapid growth in genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging. Small animal imaging is most frequently applied to mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to serve as their own control, reducing inter-animal variability. This makes it possible to perform longitudinal studies on the same animal and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amounts of radiotracer needed, limited scanner sensitivity, image resolution, and image quantification issues could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, together with the detection of these quanta and particles in different materials, makes the Monte Carlo method an important simulation tool in both nuclear medicine research and clinical practice. In order to optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on the use of simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation since they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement to a wide range of problems that could not be addressed by experimental or analytical approaches.
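The following toy Monte Carlo sketch illustrates the principle on a far smaller scale than dedicated toolkits such as GATE: it samples annihilation points in a uniform 2-D phantom and estimates the fraction of back-to-back 511 keV photon pairs that escape without attenuation. All geometry and attenuation values are assumptions.

    # Toy Monte Carlo sketch of photon escape from a uniform cylindrical phantom
    # (illustrative only; real PET studies use dedicated toolkits such as GATE).
    import numpy as np

    rng = np.random.default_rng(0)
    mu = 0.0096        # assumed linear attenuation coefficient of water at 511 keV, 1/mm
    R_phantom = 30.0   # mm, toy phantom radius (2-D slice)
    n_decays = 100_000

    # annihilation points sampled uniformly inside the phantom cross-section
    r = R_phantom * np.sqrt(rng.random(n_decays))
    phi = 2 * np.pi * rng.random(n_decays)
    x, y = r * np.cos(phi), r * np.sin(phi)

    def dist_to_edge(x, y, dx, dy, R=R_phantom):
        """Distance travelled inside the phantom along unit direction (dx, dy)."""
        b = x * dx + y * dy
        return -b + np.sqrt(b * b - (x * x + y * y - R * R))

    # the two 511 keV photons travel back-to-back along a random direction
    theta = np.pi * rng.random(n_decays)
    chord = dist_to_edge(x, y, np.cos(theta), np.sin(theta)) + \
            dist_to_edge(x, y, -np.cos(theta), -np.sin(theta))

    # a coincidence is counted only if neither photon is attenuated inside the phantom
    detected = rng.random(n_decays) < np.exp(-mu * chord)
    print(f"unattenuated coincidence fraction: {detected.mean():.3f}")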
Abstract:
Fluorescence confocal microscopy (FCM) is now one of the most important tools in biomedical research. In fact, it makes it possible to accurately study the dynamic processes occurring inside the cell and its nucleus by following the motion of fluorescent molecules over time. Due to the small amount of acquired radiation and the huge optical and electronic amplification, FCM images are usually corrupted by a severe type of Poisson noise. This noise may be even more damaging when very low intensity incident radiation is used to avoid phototoxicity. In this paper, a Bayesian algorithm is proposed to remove the intensity-dependent Poisson noise corrupting FCM image sequences. The observations are organized in a 3-D tensor in which each plane is one of the images of a cell nucleus acquired over time using the fluorescence loss in photobleaching (FLIP) technique. The method removes the noise by simultaneously considering different spatial and temporal correlations. This is accomplished by using an anisotropic 3-D filter that may be separately tuned in the space and time dimensions. Tests using synthetic and real data are described and presented to illustrate the application of the algorithm. A comparison with several state-of-the-art algorithms is also presented.
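The sketch below is not the paper's Bayesian algorithm; it only illustrates the idea of smoothing a Poisson-noisy FLIP stack with separately tuned temporal and spatial strengths, using an Anscombe variance-stabilising transform followed by a 3-D Gaussian filter. All parameter values are assumptions.

    # Minimal illustration (not the paper's Bayesian method): anisotropic 3-D smoothing
    # of a Poisson-noisy image stack, with separate temporal and spatial strengths.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def denoise_flip_stack(stack, sigma_t=1.0, sigma_xy=2.0):
        """stack: 3-D array (time, rows, cols) of Poisson-noisy intensities."""
        vst = 2.0 * np.sqrt(stack + 3.0 / 8.0)                     # Anscombe transform
        smoothed = gaussian_filter(vst, sigma=(sigma_t, sigma_xy, sigma_xy))
        return np.maximum((smoothed / 2.0) ** 2 - 3.0 / 8.0, 0.0)  # approximate inverse

    # Example with synthetic data: exponentially bleaching signal plus Poisson noise
    t = np.arange(50)[:, None, None]
    clean = 100.0 * np.exp(-0.02 * t) * np.ones((50, 64, 64))
    noisy = np.random.default_rng(0).poisson(clean).astype(float)
    denoised = denoise_flip_stack(noisy)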
Abstract:
An atmospheric aerosol study was performed in 2008 inside an urban road tunnel in Lisbon, Portugal. Using a high-volume impactor, the aerosol was collected into four size fractions (PM0.5, PM0.5-1, PM1-2.5 and PM2.5-10) and analysed for particle mass (PM), organic and elemental carbon (OC and EC), polycyclic aromatic hydrocarbons (PAH), soluble inorganic ions and elemental composition. Three main groups of compounds were discriminated in the tunnel aerosol: carbonaceous material, a soil component and vehicle mechanical wear. Measurements indicate that Cu can be a good tracer for wear emissions from road traffic. Cu levels correlate strongly with Fe, Mn, Sn and Cr, showing a highly linear, constant ratio in all size ranges, which suggests a single origin across sizes. Ratios of Cu to other elements can be used to apportion the sources of the trace elements present in urban atmospheres, particularly for coarse aerosol particles.
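As a purely hypothetical illustration of this ratio analysis (no values from the study are reproduced), the sketch below computes Cu-to-element correlations and through-origin ratios from a placeholder table of elemental concentrations, one row per filter sample.

    # Hypothetical sketch: checking whether Cu tracks other wear elements.  The data
    # frame 'df' (one row per sample, columns Cu, Fe, Mn, Sn, Cr) is a placeholder.
    import numpy as np
    import pandas as pd

    def wear_tracer_summary(df, tracer="Cu", others=("Fe", "Mn", "Sn", "Cr")):
        rows = []
        for el in others:
            r = df[tracer].corr(df[el])                                     # Pearson correlation
            slope = (df[tracer] * df[el]).sum() / (df[tracer] ** 2).sum()   # el/Cu ratio, through-origin fit
            rows.append({"element": el, "corr_with_Cu": r, "ratio_to_Cu": slope})
        return pd.DataFrame(rows)

    # summary = wear_tracer_summary(df)   # one call per size fraction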
Abstract:
Award - CEN/TC 287 Award for Excellence in INSPIRE 2012: Implementation of the INSPIRE Directive on Road Infrastructure in Portugal. Inês Soares, a Master's student in civil engineering at ISEL, Instituto Superior de Engenharia de Lisboa, and her supervisor, Paulo Martins, received a European award on 27 June, in Istanbul, Turkey, at an international conference organized by the European Commission and the Turkish government. The young Portuguese student was chosen from among some 20 candidates from several countries. Paulo Matos Martins, a professor at ISEL and mentor of the award-winning work, supervised the student's Master's thesis and explains that it is a study on the application of the European INSPIRE Directive to the national road infrastructure, carried out in close collaboration with InIR, Instituto da Infraestrutura Rodoviária, through the co-supervision of engineer Adelaide Costa and the technical collaboration of engineer Rui Luso Soares. The pilot project consisted of creating a software application that provides access to harmonized geographic information on the national road infrastructure, in accordance with the INSPIRE implementing rules, thereby fulfilling the requirements imposed by the Directive on the entities responsible for this type of information, which include several public bodies (and may in the future include local authorities), and allowing policy makers and all citizens easy access to quality information about infrastructure, the territory and the environment.
Abstract:
Fluorescence confocal microscopy images present a low signal-to-noise ratio and a time intensity decay due to the so-called photoblinking and photobleaching effects. These effects, together with the Poisson multiplicative noise that corrupts the images, make long-term biological observation processes very difficult.
Abstract:
Computational vision stands as the most comprehensive way of perceiving the surrounding environment. Accordingly, this study aims to present a method to obtain, from a common webcam, environmental information to guide a mobile differential-drive robot along a path similar to a roadway.
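The sketch below is only a generic illustration of webcam-based path following, not the study's method: it edge-detects the lower half of each frame and converts the detected line positions into a normalised steering offset; all thresholds are assumptions.

    # Illustrative sketch (not the study's pipeline): estimate a steering offset for a
    # differential-drive robot from webcam frames using edge and line detection.
    import cv2
    import numpy as np

    def steering_offset(frame):
        """Return a normalised left/right offset of the detected path, in [-1, 1]."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        roi = gray[gray.shape[0] // 2:, :]                   # keep the lower half (the road)
        edges = cv2.Canny(roi, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=10)
        if lines is None:
            return 0.0
        xs = np.concatenate([[x1, x2] for x1, _, x2, _ in lines[:, 0]])
        return float(2.0 * xs.mean() / roi.shape[1] - 1.0)   # -1 = far left, +1 = far right

    # cap = cv2.VideoCapture(0)
    # ok, frame = cap.read()
    # if ok:
    #     print(steering_offset(frame))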
Abstract:
This paper presents a spatial econometrics analysis of the number of road accidents with victims in the smallest administrative divisions of Lisbon, considering as a baseline a log-Poisson model for environmental factors. Spatial correlation is investigated for the data alone and for the residuals of the baseline model, both without and with spatially autocorrelated and spatially lagged terms. In all cases, no spatial autocorrelation was detected.
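A minimal sketch of the general approach, with placeholder data and weights rather than the paper's variables: fit a log-link Poisson GLM to the accident counts and compute Moran's I on its residuals using a row-standardised spatial weights matrix W.

    # Sketch only: log-link Poisson model for accident counts per division, then a
    # Moran's I test for spatial autocorrelation of the residuals (W is supplied by the user).
    import numpy as np
    import statsmodels.api as sm

    def morans_i(z, W):
        """Moran's I of vector z under weights matrix W (zeros on the diagonal)."""
        z = z - z.mean()
        n, s0 = len(z), W.sum()
        return (n / s0) * (z @ W @ z) / (z @ z)

    # X: environmental covariates per division, y: accident counts, W: spatial weights
    def fit_and_test(X, y, W):
        model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
        return model, morans_i(np.asarray(model.resid_pearson), W)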
Abstract:
Computational vision stands as the most comprehensive way of perceiving the surrounding environment. Accordingly, this study aims to present a method to obtain, from a common webcam, environmental information to guide a mobile differential-drive robot along a path similar to a roadway.
Abstract:
In this paper an automatic classification algorithm is proposed for the diagnosis of liver steatosis, also known as fatty liver, from ultrasound images. The features used by the classifier, automatically extracted from the ultrasound images, are essentially the ones used by physicians in the diagnosis of the disease based on visual inspection of the ultrasound images. The main novelty of the method is the utilization of the speckle noise that corrupts the ultrasound images to compute textural features of the liver parenchyma relevant for the diagnosis. The algorithm uses the Bayesian framework to compute a noiseless image, containing the anatomic and echogenic information of the liver, and a second image containing only the speckle noise, which is used to compute the textural features. The classification results with the Bayes classifier, using manually classified data as ground truth, show that the automatic classifier reaches an accuracy of 95% and a sensitivity of 100%.
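The sketch below is only an illustration of texture-based classification with a Bayes classifier: it computes simple first-order statistics directly on an image patch, whereas the paper computes such features on the speckle field estimated by its Bayesian decomposition, which is not reproduced here.

    # Illustrative sketch only: first-order texture statistics from liver patches fed
    # to a (Gaussian naive) Bayes classifier; not the paper's speckle decomposition.
    import numpy as np
    from scipy import stats
    from sklearn.naive_bayes import GaussianNB

    def texture_features(roi):
        """roi: 2-D array with the grey levels of a liver parenchyma patch."""
        x = roi.astype(float).ravel()
        return np.array([x.mean(), x.std(), stats.skew(x), stats.kurtosis(x)])

    # rois: list of 2-D patches, labels: 1 = steatosis, 0 = normal (manual ground truth)
    def train_classifier(rois, labels):
        X = np.vstack([texture_features(r) for r in rois])
        return GaussianNB().fit(X, labels)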