917 results for multi-resolution image analysis


Relevance: 100.00%

Abstract:

High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the cell phenotype in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images produced by these assays allow precise quantitative measurements, enabling the effects of small molecules on host cells to be distinguished from those on tumor cells. In this work, we are particularly interested in applying deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell directly to a particular MOA class, without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally demanding effort of searching the huge parameter space of a DNN. Results show that this approach yields a 30% speedup and a 2% accuracy improvement.
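For readers unfamiliar with deep transfer learning, the sketch below shows the general idea in PyTorch: reuse a pretrained feature extractor and train only a new classification head, which is what shrinks the parameter search. The backbone, class count and training step are illustrative assumptions, not the network used in the paper.

```python
# Minimal deep transfer learning sketch. The ResNet backbone, the
# class count, and the optimizer settings are illustrative, not the
# architecture or hyperparameters used in the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_MOA_CLASSES = 12  # hypothetical number of MOA classes

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor: only the new head is
# trained, which reduces both search space and training time.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_MOA_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of single-cell image crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```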

Relevance: 100.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 100.00%

Abstract:

Introduction. Development of the fetal brain surface, with concomitant gyrification, is one of the major maturational processes of the human brain. First delineated by postmortem studies or by ultrasound, MRI has recently become a powerful tool for studying in vivo the structural correlates of brain maturation. However, quantitative measurement of fetal brain development is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution, the partial volume effect and the changing appearance of the developing brain. Today, extensive efforts are made to deal with the "post-acquisition" reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution (Rousseau, F., 2006; Jiang, S., 2007). We here propose a framework devoted to the segmentation of the basal ganglia, the gray-white tissue segmentation, and in turn the 3D cortical reconstruction of the fetal brain.

Method. Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single shot fast spin echo (ssFSE) sequences in fetuses aged from 29 to 32 gestational weeks (slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired (about 1 min per volume). First, each volume is manually segmented to extract the fetal brain from surrounding fetal and maternal tissues. Inhomogeneity intensity correction and linear intensity normalization are then performed. A high spatial resolution image with an isotropic voxel size of 1.09 mm is created for each fetus, as previously published by others (Rousseau, F., 2006). B-splines are used for the scattered data interpolation (Lee, 1997). Then, basal ganglia segmentation is performed on this super-reconstructed volume using an active contour framework with a level set implementation (Bach Cuadra, M., 2010). Once the basal ganglia are removed from the image, brain tissue segmentation is performed (Bach Cuadra, M., 2009). The resulting white matter image is then binarized and given as an input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/) to provide accurate three-dimensional reconstructions of the fetal brain.

Results. High-resolution images of the fetal brain, as obtained from the low-resolution acquired MRI, are presented for 4 subjects of age ranging from 29 to 32 GA. An example is depicted in Figure 1. Accuracy of the automated basal ganglia segmentation is compared with manual segmentation using the Dice similarity index (DSI), with values above 0.7 considered to indicate very good agreement. In our sample we observed DSI values between 0.785 and 0.856. We further show the results of gray-white matter segmentation overlaid on the high-resolution gray-scale images. The results are visually checked for accuracy using the same principles as commonly accepted in adult neuroimaging. Preliminary 3D cortical reconstructions of the fetal brain are shown in Figure 2.

Conclusion. We hereby present a complete pipeline for the automated extraction of an accurate three-dimensional cortical surface of the fetal brain. These results are preliminary but promising, the ultimate goal being to provide a "movie" of normal gyral development.
In turn, a precise knowledge of normal fetal brain development will allow the quantification of subtle and early but clinically relevant deviations. Moreover, a precise understanding of the gyral development process may help to build hypotheses to understand the pathogenesis of several neurodevelopmental conditions in which gyrification has been shown to be altered (e.g. schizophrenia, autism, ...). References. Rousseau, F. (2006), 'Registration-Based Approach for Reconstruction of High-Resolution In Utero Fetal MR Brain Images', Academic Radiology, vol. 13, no. 9, pp. 1072-1081. Jiang, S. (2007), 'MRI of Moving Subjects Using Multislice Snapshot Images With Volume Reconstruction (SVR): Application to Fetal, Neonatal, and Adult Brain Studies', IEEE Transactions on Medical Imaging, vol. 26, no. 7, pp. 967-980. Lee, S. (1997), 'Scattered data interpolation with multilevel B-splines', IEEE Transactions on Visualization and Computer Graphics, vol. 3, no. 3, pp. 228-244. Bach Cuadra, M. (2010), 'Central and Cortical Gray Matter Segmentation of Magnetic Resonance Images of the Fetal Brain', ISMRM Conference. Bach Cuadra, M. (2009), 'Brain tissue segmentation of fetal MR images', MICCAI.
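The validation metric named above, the Dice similarity index, is simple to state precisely. A minimal NumPy sketch, assuming two binary masks of identical shape (one automated, one manual):

```python
import numpy as np

def dice_similarity(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity index between two binary segmentation masks.

    DSI = 2|A & B| / (|A| + |B|); values above 0.7 are commonly read
    as very good agreement, as in the abstract above.
    """
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree trivially
    return 2.0 * np.logical_and(a, b).sum() / denom
```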

Relevance: 100.00%

Abstract:

In vivo fetal magnetic resonance imaging provides a unique approach for the study of early human brain development [1]. In utero cerebral morphometry could potentially be used as a marker of cerebral maturation and help to distinguish between normal and abnormal development in ambiguous situations. However, this quantitative approach is a major challenge because of the movement of the fetus inside the amniotic cavity, the poor spatial resolution provided by very fast MRI sequences and the partial volume effect. Extensive efforts are made to deal with the reconstruction of high-resolution 3D fetal volumes based on several acquisitions with lower resolution [2,3,4]. Frameworks were developed for the segmentation of specific regions of the fetal brain such as the posterior fossa, brainstem or germinal matrix [5,6], or for the entire brain tissue [7,8], applying the Expectation-Maximization Markov Random Field (EM-MRF) framework. However, many of these previous works focused on the young fetus (i.e. before 24 weeks) and use anatomical atlas priors to segment the different tissues or regions. As most of the gyral development takes place after the 24th week, a comprehensive and clinically meaningful study of the fetal brain should not dismiss the third trimester of gestation. To cope with the rapidly changing appearance of the developing brain, some authors proposed a dynamic atlas [8]. In our opinion, however, this approach faces a risk of circularity: each brain is analyzed / deformed using the template of its biological age, potentially biasing the effective developmental delay. Here, we expand our previous work [9] to propose a prior-free post-processing pipeline that allows a comprehensive set of morphometric measurements devoted to clinical application. Data set & Methods: Prenatal MR imaging was performed with a 1-T system (GE Medical Systems, Milwaukee) using single shot fast spin echo (ssFSE) sequences (TR 7000 ms, TE 180 ms, FOV 40 x 40 cm, slice thickness 5.4 mm, in-plane spatial resolution 1.09 mm). For each fetus, 6 axial volumes shifted by 1 mm were acquired under maternal sedation (about 1 min per volume). First, each volume is segmented semi-automatically using region-growing algorithms to extract the fetal brain from surrounding maternal tissues. Inhomogeneity intensity correction [10] and linear intensity normalization are then performed. Brain tissues (CSF, GM and WM) are then segmented based on the low-resolution volumes as presented in [9]. A high-resolution image with an isotropic voxel size of 1.09 mm is created as proposed in [2], using B-splines for the scattered data interpolation [11]. Basal ganglia segmentation is performed using a level set implementation on the high-resolution volume [12]. The resulting white matter image is then binarized and given as an input to the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu) to provide topologically accurate three-dimensional reconstructions of the fetal brain according to the local intensity gradient. References: [1] Guibaud, Prenatal Diagnosis 29(4), 2009. [2] Rousseau, Acad. Rad. 13(9), 2006. [3] Jiang, IEEE TMI, 2007. [4] Warfield, IADB, MICCAI 2009. [5] Claude, IEEE Trans. Bio. Eng. 51(4), 2004. [6] Habas, MICCAI 2008. [7] Bertelsen, ISMRM 2009. [8] Habas, Neuroimage 53(2), 2010. [9] Bach Cuadra, IADB, MICCAI 2009. [10] Styner, IEEE TMI 19(3), 2000. [11] Lee, IEEE Trans. Visual. and Comp. Graph. 3(3), 1997. [12] Bach Cuadra, ISMRM 2010.
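The reconstruction step above fuses several thick-slice acquisitions into one isotropic volume via B-spline interpolation [2,11]. The SciPy sketch below only resamples a single ssFSE volume onto the 1.09 mm isotropic grid with cubic B-splines; it is a stand-in for, not a reproduction of, the scattered-data fusion of six shifted volumes.

```python
# Illustrative resampling of one low-resolution ssFSE volume onto an
# isotropic 1.09 mm grid using cubic B-spline interpolation. The real
# pipeline fuses six shifted volumes with scattered-data B-splines.
import numpy as np
from scipy import ndimage

def to_isotropic(volume: np.ndarray,
                 spacing=(5.4, 1.09, 1.09),  # (slice, row, col) in mm
                 target=1.09) -> np.ndarray:
    """Resample a volume to isotropic voxels of size `target` mm."""
    factors = [s / target for s in spacing]
    return ndimage.zoom(volume, zoom=factors, order=3)  # cubic B-spline
```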

Relevance: 100.00%

Abstract:

The objective of this research was to analyse the potential of Normalized Difference Vegetation Index (NDVI) maps from satellite images, yield maps, and grapevine fertility and load variables to delineate zones with different wine grape properties for selective harvesting. Two vineyard blocks located in NE Spain (Cabernet Sauvignon and Syrah) were analysed. The NDVI was computed from a Quickbird-2 multi-spectral image at veraison (July 2005). Yield data were acquired by means of a yield monitor during September 2005. Other variables, such as the number of buds, number of shoots, number of wine grape clusters and weight of 100 berries, were sampled in a 10 rows × 5 vines pattern and used as input variables, in combination with the NDVI, to define the clusters as an alternative to yield maps. Two days prior to harvesting, grape samples were taken. The analysed variables were probable alcoholic degree, pH of the juice, total acidity, total phenolics, colour, anthocyanins and tannins. The input variables, alone or in combination, were clustered (into 2 and 3 clusters) using the ISODATA algorithm, and an analysis of variance and a multiple range test were performed. The results show that the zones derived from the NDVI maps are more effective at differentiating grape maturity and quality variables than the zones derived from the yield maps. The inclusion of other grapevine fertility and load variables did not improve the results.
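A minimal sketch of the zoning step: compute NDVI from red and near-infrared bands, then cluster pixels into 2 (or 3) management zones. ISODATA is not available in common Python libraries, so k-means stands in for it here; band arrays and zone counts are illustrative.

```python
# NDVI computation plus a simple per-pixel clustering into zones.
# KMeans is a stand-in for the ISODATA algorithm used in the study.
import numpy as np
from sklearn.cluster import KMeans

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), with an epsilon against 0/0."""
    return (nir - red) / (nir + red + 1e-9)

def zone_map(ndvi_img: np.ndarray, n_zones: int = 2) -> np.ndarray:
    """Cluster the NDVI image into n_zones management zones."""
    km = KMeans(n_clusters=n_zones, n_init=10, random_state=0)
    labels = km.fit_predict(ndvi_img.reshape(-1, 1))
    return labels.reshape(ndvi_img.shape)
```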

Relevance: 100.00%

Abstract:

Dirt counting and dirt particle characterisation of pulp samples are an important part of quality control in pulp and paper production, and there is a clear need for an automatic image analysis system that can characterise dirt particles across varied pulp samples. Existing image analysis systems, however, use a single threshold to segment the dirt particles in different pulp samples, which limits their precision, so an automatic system that overcomes this deficiency would be very useful. In this study, a modified Niblack thresholding method is proposed, which selects the threshold based on the number of segmented particles. In addition, Kittler thresholding is utilised. Both thresholding methods determine the dirt count of different pulp samples accurately when compared with visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). The minimum resolution needed for scanner image acquisition is also defined. Among the dirt particle features considered, curl differs sufficiently between particle types to discriminate bark from fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark or fibre bundles.
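For reference, classic Niblack thresholding computes a local threshold T(x, y) = m(x, y) + k·s(x, y) from the mean and standard deviation in a sliding window. The sketch below is the textbook form, assuming SciPy; the window size and k are illustrative, and the paper's modification (tuning by particle count) is not reproduced.

```python
# Textbook Niblack local thresholding; dark dirt particles fall below
# the local threshold. Window size and k are illustrative defaults.
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(img: np.ndarray, window: int = 25,
                      k: float = -0.2) -> np.ndarray:
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return img < (mean + k * std)  # True where a dark particle lies
```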

Relevance: 100.00%

Abstract:

Optical microscopy is experiencing a renaissance. The diffraction limit, although still physically real, plays only a minor role in the resolution achievable in far-field fluorescence microscopy: super-resolution techniques enable fluorescence microscopy at nearly molecular resolution. Modern (super-resolution) microscopy methods rely strongly on software. Software tools are needed all the way from data acquisition, data storage, image reconstruction, restoration and alignment to quantitative image analysis and image visualization. These tools play a key role in all aspects of microscopy today, and their importance will only increase in the coming years as microscopy transitions little by little from single cells to more complex and even living model systems. In this thesis, a series of bioimage informatics software tools is introduced for STED super-resolution microscopy. Tomographic reconstruction software, coupled with a novel image acquisition method, STED<, is shown to enable axial (3D) super-resolution imaging in a standard 2D-STED microscope. Software tools are introduced for STED super-resolution correlative imaging with transmission electron microscopes or atomic force microscopes. A novel method for automatically ranking image quality within microscope image datasets is introduced and used, for example, to select the best images in a STED microscope image dataset.
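To make the idea of automatic quality ranking concrete, here is a generic stand-in, not the method developed in the thesis: score each frame with a simple sharpness metric (variance of the Laplacian) and sort. The metric choice and SciPy usage are assumptions for illustration only.

```python
# Hypothetical image-quality ranking by a sharpness score. This is a
# generic stand-in, not the ranking method introduced in the thesis.
import numpy as np
from scipy.ndimage import laplace

def sharpness(img: np.ndarray) -> float:
    """Variance of the Laplacian: higher means sharper, roughly."""
    return float(laplace(img.astype(np.float64)).var())

def rank_images(images) -> list:
    """Return indices of images sorted from sharpest to blurriest."""
    scores = [sharpness(im) for im in images]
    return sorted(range(len(images)), key=lambda i: -scores[i])
```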

Relevance: 100.00%

Abstract:

ABSTRACT - Multispectral satellite images, particularly those with high spatial resolution (finer than 30 m on the ground), are an invaluable source of information for decision-making in various fields related to natural resource management, environmental protection, and urban planning and management. Study scales range from local (resolutions finer than 5 m) to regional (resolutions coarser than 5 m). These images characterize the variation of object reflectance across the spectrum, which is the key information for a large number of applications of these data. However, satellite sensor measurements are also affected by "parasitic" factors related to illumination and viewing conditions, the atmosphere, topography and sensor properties. Two questions have driven this research. What is the best approach for retrieving ground reflectances from the digital numbers recorded by the sensors, taking these parasitic factors into account? And is this retrieval a sine qua non condition for extracting reliable information from the images, whatever the application domain (land mapping, environmental monitoring, landscape change tracking, resource inventories, etc.)? Research over the last 30 years has produced a series of techniques for correcting the data for the effects of these parasitic factors, some of which can retrieve ground reflectances. Several questions nevertheless remain open and others require further work, both to improve the accuracy of the results and to make these techniques more versatile by adapting them to a wider range of acquisition conditions. Among them: How can atmospheric characteristics (notably aerosol particles) suited to local and regional conditions be taken into account, rather than relying on default models that capture long-term spatiotemporal trends but fit poorly to instantaneous, spatially restricted observations? How can the "contamination" of the signal from the target object by signals from surrounding objects (the adjacency effect) be accounted for? This phenomenon becomes very important for images with resolution finer than 5 m. What are the effects of off-nadir viewing angles, which are increasingly common since they offer better temporal resolution and the possibility of acquiring stereoscopic image pairs? And how can automatic processing and analysis of multispectral images be made more effective over rugged and mountainous terrain, given the multiple effects of topographic relief on the remotely sensed signal? Moreover, although researchers have repeatedly demonstrated that the information extracted from satellite images can be corrupted by all these parasitic factors, radiometric corrections are still rarely applied on a routine basis, unlike geometric corrections, for which commercial remote sensing software offers versatile, powerful algorithms that are accessible to users.

Radiometric correction algorithms, when offered at all, remain inflexible black boxes that usually require expert users. The objectives of this research are therefore: 1) to develop ground reflectance retrieval software that addresses the questions raised above, modular enough to be extended, improved and adapted to various satellite image applications; and 2) to apply this software in different contexts (urban, agricultural, forest) and analyse the results in order to assess the gain in accuracy of the information extracted from satellite images converted to ground reflectance images, and hence the need to operate this way regardless of the application. Through this research, we built a ground reflectance retrieval tool (the new version of the REFLECT software). It is based on the formulation (and routines) of the 6S code (Second Simulation of the Satellite Signal in the Solar Spectrum) and on the dark-target method for estimating aerosol optical depth (AOD), the factor that is hardest to correct. Substantial improvements were made to the existing models. They essentially concern aerosol properties (integration of a more recent model, improved dark-target search for AOD estimation), the adjacency effect (handled with a specular reflection model), support for most high-resolution multispectral sensors currently in use (Landsat TM and ETM+, all SPOT 1 to 5 HR sensors, EO-1 ALI and ASTER) and very-high-resolution sensors (QuickBird and Ikonos), and topographic correction with a model that separates the direct and diffuse components of solar radiation and also adapts to forest canopy. Validation showed that REFLECT retrieves ground reflectance with an accuracy of about ±0.01 reflectance units (for the visible, NIR and MIR spectral bands), even over variable topography. Through simulations of apparent reflectance, the software also showed how strongly the parasitic factors affecting image digital numbers can distort the useful signal, the ground reflectance (errors of 10% to over 50%). REFLECT was further used to assess the importance of using ground reflectances rather than raw digital numbers in common remote sensing applications: classification, change detection, agriculture and forestry. In most applications (multi-date change detection, vegetation indices, biophysical parameter estimation, ...), image correction is a crucial step for obtaining reliable results.

From a software standpoint, REFLECT is organised as a series of simple menus corresponding to the successive processing steps: entering the scene inputs, computing gaseous transmittances, estimating the AOD with the dark-target method and, finally, applying the radiometric corrections to the image, including a fast option that processes a 5000 × 5000 pixel image in about 15 minutes. This research opens several avenues for further improvement of radiometric correction models and methods, notably the integration of the BRDF (bidirectional reflectance distribution function) into the formulation, the handling of translucent clouds through modelling of non-selective scattering, and the automation of the equivalent-slopes method proposed for topographic corrections.
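As background for the correction chain described above, the first standard step in any reflectance retrieval is converting raw digital numbers to top-of-atmosphere (apparent) reflectance with the usual formula ρ = π·L·d²/(ESUN·cos θs). The sketch below shows only that textbook step, with illustrative calibration inputs; REFLECT then goes much further, removing atmospheric, adjacency and topographic effects to reach ground reflectance.

```python
# Digital numbers -> top-of-atmosphere (apparent) reflectance.
# gain/offset (sensor calibration) and esun (exo-atmospheric solar
# irradiance) are sensor-specific values, illustrative here.
import numpy as np

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """rho = pi * L * d^2 / (ESUN * cos(theta_s))."""
    radiance = gain * np.asarray(dn, dtype=np.float64) + offset
    theta_s = np.deg2rad(90.0 - sun_elev_deg)  # solar zenith angle
    return np.pi * radiance * d_au**2 / (esun * np.cos(theta_s))
```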

Relevance: 100.00%

Abstract:

In this paper, a new directionally adaptive, learning-based, single-image super-resolution method using a multiple-direction wavelet transform, called the directionlet transform, is presented. The method uses directionlets to effectively capture directional features and to extract edge information along different directions from a set of available high-resolution images. This information is used as the training set for super-resolving a low-resolution input image: the directionlet coefficients at finer scales of its high-resolution counterpart are learned locally from this training set, and the inverse directionlet transform recovers the super-resolved high-resolution image. Simulation results show that the proposed approach outperforms standard interpolation techniques such as cubic spline interpolation, as well as standard wavelet-based learning, both visually and in terms of mean squared error (MSE). The method also gives good results on aliased images.
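A minimal sketch of the evaluation baseline used in such comparisons, assuming SciPy: upscale the low-resolution input with cubic spline interpolation and measure MSE against the ground-truth high-resolution image. The learned directionlet method is reported to score lower than this baseline.

```python
# Cubic-spline upscaling baseline and the MSE used to compare it
# against a super-resolved result. Zoom factor is illustrative.
import numpy as np
from scipy import ndimage

def cubic_upscale(lr: np.ndarray, factor: int = 2) -> np.ndarray:
    return ndimage.zoom(lr, zoom=factor, order=3)  # cubic spline

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))
```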

Relevance: 100.00%

Abstract:

Super-resolution is an inverse problem: the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing the degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single-image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR version of an image captured with an LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform or DCT. Single-frame image super-resolution can be used in applications where a database of HR images is available; the advantage of this method is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed, and it outperforms conventional wavelet-transform-based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform, the directionlet transform, are developed to convert a small low-resolution image into a large high-resolution one. The super-resolution algorithm not only increases the size but also reduces the degradations incurred during image capture. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values, and artifacts such as aliasing and ringing are also eliminated. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. Because the conventional directionlet transform is computationally complex, a lifting scheme is used for its implementation; the resulting single-image super-resolution method reduces computational complexity and thereby computation time. Since the quality of the super-resolved image depends on the wavelet basis used, a study is conducted on the effect of different wavelets on the single-image super-resolution method. Finally, the new method, implemented on grey-scale images, is extended to colour images and noisy images.
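To make the "training set of HR images" idea tangible, here is a deliberately naive toy version of learning-based SR: store LR/HR patch pairs from training images, then replace each input LR patch by the HR partner of its nearest LR neighbour. The patch size, the crude decimation and the exhaustive search are assumptions for illustration; the thesis instead learns wavelet and directionlet coefficients.

```python
# Toy patch-database version of learning-based super-resolution.
import numpy as np

def build_patch_db(hr_images, factor=2, p=4):
    """Return (lr_patches, hr_patches) arrays from training HR images."""
    lr_list, hr_list = [], []
    for hr in hr_images:
        lr = hr[::factor, ::factor]  # crude downsampling, sketch only
        for i in range(0, lr.shape[0] - p, p):
            for j in range(0, lr.shape[1] - p, p):
                lr_list.append(lr[i:i + p, j:j + p].ravel())
                hr_list.append(hr[factor * i:factor * (i + p),
                                  factor * j:factor * (j + p)].ravel())
    return np.array(lr_list), np.array(hr_list)

def nearest_hr_patch(lr_patch, lr_db, hr_db):
    """Look up the HR patch whose LR partner best matches the input."""
    idx = np.argmin(((lr_db - lr_patch.ravel()) ** 2).sum(axis=1))
    return hr_db[idx]
```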

Relevance: 100.00%

Abstract:

In this paper, a forward-looking infrared (FLIR) video surveillance system is presented for avoiding collisions between moving ships and bridge piers. An image preprocessing algorithm is proposed that suppresses the cluttered background by multi-scale fractal analysis, in which the blanket method is used to compute the fractal feature. A moving ship detection algorithm is then built on differencing this fractal feature between frames taken at regular intervals within the region of surveillance. When a moving ship is detected in the region of surveillance, a safety alert device is triggered. Experimental results show that the approach is feasible and effective, achieving real-time and reliable alerts that help avoid collisions between moving ships and bridge piers.
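The blanket method named above is a standard way to estimate a fractal feature from an intensity surface: grow an upper and a lower "blanket" around the image and observe how the enclosed area scales with the blanket thickness. A minimal SciPy sketch of the textbook formulation (Peleg-style), with an illustrative scale range:

```python
# Blanket-method fractal dimension of an image intensity surface:
# A(eps) = sum(u - b) / (2 * eps), D = 2 - slope of log A vs log eps.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def blanket_fractal_dimension(img: np.ndarray, max_eps: int = 8) -> float:
    u = b = img.astype(np.float64)
    areas = []
    for eps in range(1, max_eps + 1):
        u = np.maximum(u + 1, grey_dilation(u, size=(3, 3)))  # upper blanket
        b = np.minimum(b - 1, grey_erosion(b, size=(3, 3)))   # lower blanket
        areas.append((u - b).sum() / (2.0 * eps))
    eps = np.arange(1, max_eps + 1)
    slope = np.polyfit(np.log(eps), np.log(areas), 1)[0]
    return 2.0 - slope  # surface fractal dimension estimate
```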

Relevance: 100.00%

Abstract:

Multidimensional visualization techniques are invaluable tools for the analysis of structured and unstructured data of variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting the analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
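The core operation behind such tools is a similarity-based projection: map high-dimensional feature vectors to 2-D positions so that similar images land near each other. As a stand-in for the specific projection techniques used in PEx-Image, a minimal sketch with classical MDS from scikit-learn:

```python
# Similarity-based multidimensional projection of image features to a
# 2-D plane. MDS is a generic stand-in for the tool's projections.
import numpy as np
from sklearn.manifold import MDS

def project_features(features: np.ndarray) -> np.ndarray:
    """features: (n_images, n_dims) -> (n_images, 2) layout positions."""
    return MDS(n_components=2, random_state=0).fit_transform(features)
```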

Relevance: 100.00%

Abstract:

Texture is one of the most important visual attributes used in image analysis. It is used in many content-based image retrieval systems, where it allows the identification of a larger number of images from distinct origins. This paper presents a novel approach to image analysis and retrieval based on complexity analysis. The approach consists of a texture segmentation step, performed by complexity analysis through the box-counting fractal dimension, followed by estimation of the complexity of each computed region by the multiscale fractal dimension. Experiments have been performed on an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the method and also indicate how the performance changes as the texture segmentation process is altered.
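Box counting itself is a short computation: count the boxes that contain foreground pixels at a series of box sizes and fit a line in log-log space; the magnitude of the slope estimates the fractal dimension. A minimal NumPy sketch for a binary image (the halving schedule of box sizes is an illustrative choice):

```python
# Box-counting fractal dimension of a binary (segmented) image.
import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    sizes, counts = [], []
    s = min(binary.shape) // 2
    while s >= 1:
        n = 0
        for i in range(0, binary.shape[0], s):
            for j in range(0, binary.shape[1], s):
                if binary[i:i + s, j:j + s].any():
                    n += 1  # this box contains foreground
        sizes.append(s)
        counts.append(n)
        s //= 2
    # N(s) ~ s^(-D), so D is minus the log-log slope.
    return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
```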

Relevance: 100.00%

Abstract:

A set of NIH Image macro programs was developed for qualitative and quantitative analyses of digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations related to processing time, scanning techniques and programming concepts are also discussed.
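Of the listed operations, the anaglyph step is the simplest to illustrate: fuse an aligned stereo pair into one red/cyan image by routing the left view to the red channel and the right view to green and blue. A NumPy sketch of that general technique (not the original NIH Image macros, which are not reproduced here):

```python
# Red/cyan anaglyph from an aligned grayscale SEM stereo pair.
import numpy as np

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """left, right: aligned 2-D grayscale images -> (H, W, 3) RGB."""
    out = np.zeros(left.shape + (3,), dtype=left.dtype)
    out[..., 0] = left   # red channel from the left image
    out[..., 1] = right  # green ...
    out[..., 2] = right  # ... and blue from the right image
    return out
```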

Relevance: 100.00%

Abstract:

This paper presents an automatic methodology for road network extraction from medium- and high-resolution aerial images. It is based on two steps. In the first step, road seeds (i.e., road segments) are extracted using a set of four road objects and a set of connection rules among road objects. Each road object is a local representation of an approximately straight road fragment, constructed from a combination of polygons describing all relevant image edges, according to rules embodying road knowledge. Each road seed is composed of a sequence of connected road objects, and each such sequence can be geometrically structured as a chain of contiguous quadrilaterals. In the second step, two strategies for road completion are applied in order to generate the complete road network. The first strategy is based on two basic perceptual grouping rules, the proximity and collinearity rules, which allow the sequential reconstruction of gaps between every pair of disconnected road segments. This strategy does not reconstruct road crossings, but it allows road centerlines to be extracted from the contiguous quadrilaterals representing connected road segments. The second strategy for road completion aims at reconstructing road crossings. First, the road centerlines are used to find reference points for road crossings, i.e., their approximate positions. These points are then used to extract polygons representing the contours of road crossings. The paper presents the proposed methodology and experimental results.
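The two perceptual grouping rules admit a compact test: two segments may be linked when the gap between their end points is small (proximity) and their directions nearly agree (collinearity). A toy Python version of that combined check; the segment encoding and thresholds are illustrative assumptions, not the paper's parameters.

```python
# Toy proximity + collinearity test for linking two road segments.
import math

def should_link(seg_a, seg_b, max_gap=30.0, max_angle_deg=15.0):
    """seg = ((x1, y1), (x2, y2)); True if the gap looks bridgeable."""
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b
    gap = math.hypot(bx1 - ax2, by1 - ay2)        # proximity rule
    ang_a = math.atan2(ay2 - ay1, ax2 - ax1)
    ang_b = math.atan2(by2 - by1, bx2 - bx1)
    diff = abs(ang_a - ang_b) % math.pi           # collinearity rule,
    diff = min(diff, math.pi - diff)              # direction-agnostic
    return gap <= max_gap and math.degrees(diff) <= max_angle_deg
```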