973 results for Semi-automatic road extraction


Relevance:

100.00%

Publisher:

Abstract:

Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this paper is to introduce a methodology for semi-automatic road extraction from aerial digital image pairs using dynamic programming and epipolar geometry. The method uses both images, from which each road feature pair is extracted. The operator identifies the corresponding road features and selects sparse seed points along them. After all road pairs have been extracted, epipolar geometry is applied to establish an automatic point-to-point correspondence between the corresponding features. Finally, each corresponding road pair is georeferenced by photogrammetric intersection. Experiments were carried out with rural aerial images. The results led to the conclusion that the methodology is robust and efficient, even in the presence of shadows cast by trees and buildings or other irregularities.
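The seed-point tracking described above can be sketched as a cheapest-path search between two operator-selected points over a per-pixel cost image. This is a minimal stand-in (Dijkstra on a toy grid) for the paper's dynamic-programming tracker; the cost values are illustrative assumptions, not the authors' road model:

```python
import heapq

def min_cost_path(cost, start, goal):
    """Cheapest 8-connected path between two seed points on a
    per-pixel cost grid (low cost where the image looks road-like)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from the goal to recover the extracted polyline.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy cost grid: the cheap band along row 1 plays the road.
grid = [
    [9, 9, 9, 9, 9],
    [1, 1, 1, 1, 1],
    [9, 9, 9, 9, 9],
]
road = min_cost_path(grid, (1, 0), (1, 4))
```

In the paper the two endpoints would be consecutive seed points clicked by the operator, and the search would run on a merit derived from the image rather than this hand-made grid.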

Relevance:

100.00%

Publisher:

Abstract:

Several lines of research in road extraction have been pursued over the last six years by the Photogrammetry and Computer Vision Research Group (GP-F&VC - Grupo de Pesquisa em Fotogrametria e Visão Computacional). Several semi-automatic road extraction methodologies have been developed, including sequential and optimization techniques. The GP-F&VC has also been developing fully automatic methodologies for road extraction. This paper presents an overview of the GP-F&VC research on road extraction from digital images, along with examples of results obtained with the developed methodologies.

Relevance:

100.00%

Publisher:

Abstract:

The acquisition and updating of Geographic Information System (GIS) data are typically carried out using aerial or satellite imagery. Since new roads are usually connected to the georeferenced, pre-existing road network, the extraction of pre-existing road segments may provide good hypotheses for the updating process. This paper addresses the problem of extracting georeferenced roads from images and formulating hypotheses for the presence of new road segments. Our approach proceeds in three steps. First, salient points are identified and measured along roads from a map or GIS database, either by an operator or by an automatic tool. These salient points are then projected into image space, and the errors inherent in this process are calculated. In the second step, the georeferenced roads are extracted from the image using a dynamic programming (DP) algorithm, with the projected salient points and corresponding error estimates as input. Finally, the road center axes extracted in the previous step are analyzed to identify potential new segments attached to the extracted, pre-existing ones. This analysis is performed using a combination of edge-based and correlation-based algorithms. In this paper we present our approach and early implementation results.
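The first step above, projecting map-space salient points into image space, can be sketched with a GDAL-style affine geotransform (rotation terms assumed zero). The geotransform values and the error-scaling helper are hypothetical illustrations, not the paper's actual sensor model or error propagation:

```python
def world_to_image(easting, northing, geotransform):
    """Map a georeferenced salient point to image (col, row) space
    using a GDAL-style geotransform (x0, px_w, rot1, y0, rot2, px_h),
    with the rotation terms assumed zero for simplicity."""
    x0, px_w, _, y0, _, px_h = geotransform
    col = (easting - x0) / px_w
    row = (northing - y0) / px_h  # px_h is negative (north-up image)
    return col, row

def ground_error_to_pixels(sigma_m, pixel_size_m):
    """Crude stand-in for error propagation: scale a 1-sigma ground
    error (metres) by the pixel size to get a search radius in pixels."""
    return sigma_m / pixel_size_m

# Hypothetical geotransform: origin (500000, 7000000), 2 m pixels.
gt = (500000.0, 2.0, 0.0, 7000000.0, 0.0, -2.0)
col, row = world_to_image(500100.0, 6999900.0, gt)
radius_px = ground_error_to_pixels(4.0, 2.0)
```

The projected point and its pixel-space uncertainty are exactly what the DP extraction stage would consume as its seed and search-window inputs.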

Relevance:

100.00%

Publisher:

Abstract:

Semi-automatic building detection and extraction is a topic of growing interest due to its potential application in such areas as cadastral information systems, cartographic revision, and GIS. One of the existing strategies for building extraction is to use a digital surface model (DSM) represented by a cloud of known points on a visible surface, and comprising features such as trees or buildings. Conventional surface modeling using stereo-matching techniques has its drawbacks, the most obvious being the effect of building height on perspective, shadows, and occlusions. The laser scanner, a recently developed technological tool, can collect accurate DSMs with high spatial frequency. This paper presents a methodology for semi-automatic modeling of buildings which combines a region-growing algorithm with line-detection methods applied over the DSM.
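The region-growing part of the methodology above can be sketched as a flood fill over the DSM: starting from an operator seed on a roof, collect neighbouring cells whose height stays close to the seed height. The toy DSM and tolerance are illustrative assumptions, not the paper's parameters:

```python
from collections import deque

def grow_region(dsm, seed, tol):
    """Region growing on a digital surface model: gather 4-connected
    cells whose height is within `tol` of the seed cell's height."""
    rows, cols = len(dsm), len(dsm[0])
    seed_h = dsm[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(dsm[nr][nc] - seed_h) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy DSM: a flat 10 m roof (rows 1-2, cols 1-3) on 2 m ground.
dsm = [
    [2, 2, 2, 2, 2],
    [2, 10, 10, 10, 2],
    [2, 10, 10, 10, 2],
    [2, 2, 2, 2, 2],
]
roof = grow_region(dsm, (1, 1), tol=0.5)
```

In the paper this height-homogeneous region would then be handed to the line-detection step to regularize the building outline.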

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a dynamic programming approach for semi-automated road extraction from medium- and high-resolution images. The method is a modified version of a pre-existing dynamic programming method for road extraction from low-resolution images. The basic assumption of the pre-existing method is that roads manifest as lines in low-resolution images (pixel footprint > 2 m) and as such can be modeled and extracted as linear features. In medium- and high-resolution images (pixel footprint ≤ 2 m), on the other hand, roads manifest as ribbon features and, as a result, the focus of road extraction becomes the road centerlines. The original method cannot accurately extract road centerlines from medium- and high-resolution images. In view of this, we propose a modification of the merit function of the original approach, carried out by a constraint function embedding road edge properties. Experimental results demonstrated the modified algorithm's potential for extracting road centerlines from medium- and high-resolution images.
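The idea of embedding edge properties in the merit function can be sketched on a single cross-road gray-level profile: score a candidate centre pixel by its interior brightness plus the strength of the gradients at both road borders. The weights and profile values are illustrative, not the paper's calibrated merit function:

```python
def centerline_merit(profile, half_width):
    """Score a candidate centre pixel of a cross-road gray profile:
    mean brightness of the assumed road interior, plus an edge term
    rewarding strong gradients at both assumed road borders."""
    c = len(profile) // 2
    interior = sum(profile[c - half_width:c + half_width + 1]) / (2 * half_width + 1)
    left_edge = profile[c - half_width] - profile[c - half_width - 1]
    right_edge = profile[c + half_width] - profile[c + half_width + 1]
    return interior + 0.5 * (left_edge + right_edge)

# Bright 3-pixel road (value 200) on dark ground (value 50).
on_center = [50, 50, 200, 200, 200, 50, 50]
off_center = [50, 200, 200, 200, 50, 50, 50]
m_on = centerline_merit(on_center, 1)
m_off = centerline_merit(off_center, 1)
```

A candidate centred on the ribbon scores higher than one shifted by a pixel, which is exactly the behaviour the edge constraint is meant to enforce in the DP search.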

Relevance:

100.00%

Publisher:

Abstract:

This paper describes our semi-automatic, keyword-based approach to the four topics of the Information Extraction from Microblogs Posted during Disasters task at the Forum for Information Retrieval Evaluation (FIRE) 2016. The approach consists of three phases.

Relevance:

100.00%

Publisher:

Abstract:

One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the costal cartilage's tubular structure to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure, which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm on 6 non-contrast CT datasets from PE patients. Good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75 ± 0.04 and an average mean surface distance of 1.69 ± 0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can contribute positively to wider use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
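The Dice coefficient reported above measures the overlap between the automatic and manual segmentations. A minimal sketch, computing it over binary masks represented as sets of voxel coordinates (the toy masks are made up for illustration):

```python
def dice(a, b):
    """Dice overlap between two binary masks given as sets of voxel
    coordinates: 2|A∩B| / (|A| + |B|), 1.0 for identical masks."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Toy automatic vs manual cartilage masks sharing 3 of 4 voxels each.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(auto, manual)
```

With 3 shared voxels out of 4 + 4, the score is 0.75, matching the scale of the average reported in the abstract.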

Relevance:

100.00%

Publisher:

Abstract:

Eradication of code smells is often pointed out as a way to improve readability, extensibility and design in existing software. However, code smell detection remains time consuming and error-prone, partly due to the inherent subjectivity of the detection processes presently available. In view of mitigating the subjectivity problem, this dissertation presents a tool that automates a technique for the detection and assessment of code smells in Java source code, developed as an Eclipse plugin. The technique is based upon a Binary Logistic Regression model that uses complexity metrics as independent variables and is calibrated by expert's knowledge. An overview of the technique is provided, the tool is described and validated by an example case study.
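The scoring side of a binary logistic regression smell detector can be sketched in a few lines: a weighted sum of complexity metrics pushed through the sigmoid. The metric names, coefficients, and threshold below are illustrative placeholders, not the dissertation's calibrated model:

```python
import math

def smell_probability(metrics, weights, intercept):
    """Binary logistic regression score: P(smelly | metrics) =
    sigmoid(intercept + sum of weight * metric)."""
    z = intercept + sum(weights[name] * value for name, value in metrics.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for two common complexity metrics.
w = {"cyclomatic_complexity": 0.30, "loc": 0.02}
b = -4.0
p_long = smell_probability({"cyclomatic_complexity": 12, "loc": 120}, w, b)
p_short = smell_probability({"cyclomatic_complexity": 2, "loc": 10}, w, b)
```

A long, branchy method scores above 0.5 while a short one scores well below it; in the actual tool the coefficients would come from the expert-calibrated regression, not hand-picked values.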

Relevance:

100.00%

Publisher:

Abstract:

This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway-Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. The project aimed to develop a semi-automatic, self-learning pattern recognition system capable of detecting defects on printed circuit boards, such as component vacancy, component misalignment, component orientation, component error, and component weld. The research was conducted in three directions: image acquisition, image filtering/recognition, and software development. Image acquisition covered the process of forming and digitizing images and some fundamental aspects of human visual perception, highlighting the importance of choosing the right camera and illumination system for a given type of problem. Probably the most important step towards image recognition is image filtering. Filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalisation, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters and image compression are some examples of the filters that were studied and successfully implemented in the software application. The software application developed during the research is customized to meet the requirements of the industrial partner. The application is able to analyze pictures, perform filtering, build libraries, process images and generate log files. It incorporates most of the filters studied and, together with the illumination system and the camera, provides a fully integrated framework able to analyze defects on printed circuit boards.
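Convolution, the first filter named above, can be sketched as a sliding window over a grayscale image stored as nested lists (with a symmetric kernel this is the same as the cross-correlation used for template matching). The toy image and mean kernel are illustrative, not the project's filters:

```python
def convolve2d(image, kernel):
    """'Valid' 2-D sliding-window filter over a grayscale image:
    each output pixel is the kernel-weighted sum of the window."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw)
            )
    return out

# A 3x3 mean filter spreads a single bright "defect" pixel evenly.
img = [
    [0, 0, 0, 0],
    [0, 9, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
mean = [[1 / 9] * 3 for _ in range(3)]
smooth = convolve2d(img, mean)
```

Every 3x3 window here contains the bright pixel exactly once, so each output value is 9/9 = 1.0; in the inspection system the same machinery drives the noise-reduction and cross-correlation stages.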

Relevance:

100.00%

Publisher:

Abstract:

In fetal brain MRI, most high-resolution reconstruction algorithms rely on brain segmentation as a preprocessing step. Manual brain segmentation, however, is highly time-consuming and therefore not a realistic solution. In this work, we assess on a large dataset the performance of Multiple Atlas Fusion (MAF) strategies to address this problem automatically. First, we show that MAF significantly increases the accuracy of brain segmentation compared with a single-atlas strategy. Second, we show that MAF compares favorably with the most recent approach (Dice above 0.90). Finally, we show that MAF could in turn provide an improvement in reconstruction quality.
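The simplest multiple-atlas-fusion strategy is a per-voxel majority vote over the registered atlas label maps, sketched below on flattened toy labels (the abstract does not specify which fusion rule was used, so this is an assumed baseline):

```python
from collections import Counter

def majority_vote(label_maps):
    """Fuse several registered atlas label maps by per-voxel
    majority vote, the baseline multiple-atlas-fusion strategy."""
    fused = []
    for voxel_labels in zip(*label_maps):
        fused.append(Counter(voxel_labels).most_common(1)[0][0])
    return fused

# Three atlases voting over five voxels (1 = brain, 0 = background).
atlas_a = [1, 1, 0, 0, 1]
atlas_b = [1, 0, 0, 1, 1]
atlas_c = [1, 1, 1, 0, 0]
fused = majority_vote([atlas_a, atlas_b, atlas_c])
```

Even where individual atlases disagree, the fused map keeps the consensus label, which is why fusion outperforms any single atlas.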

Relevance:

100.00%

Publisher:

Abstract:

Arterial baroreflex sensitivity estimated by pharmacological impulse stimuli depends on intrinsic signal variability and usually on a subjective choice of blood pressure (BP) and heart rate (HR) values. We propose a semi-automatic method to estimate cardiovascular reflex sensitivity to bolus infusions of phenylephrine and nitroprusside. Beat-to-beat BP and HR time series for male Wistar rats (N = 13) were obtained from the digitized signal (sample frequency = 2 kHz) and analyzed by the proposed method (PRM), developed in the Matlab language. In the PRM, time series were low-pass filtered with zero-phase distortion (3rd-order Butterworth used in the forward and reverse directions) and presented graphically, and parameters were selected interactively. Differences between basal mean values and peak BP (deltaBP) and HR (deltaHR) values after drug infusions were used to calculate baroreflex sensitivity indexes, defined as the deltaHR/deltaBP ratio. The PRM was compared with the traditional method (TDM) employed by seven independent observers, using files for reflex bradycardia (N = 43) and tachycardia (N = 61). Agreement was assessed by Bland and Altman plots. Dispersion among users, measured as the standard deviation, was higher for the TDM for reflex bradycardia (0.60 ± 0.46 vs 0.21 ± 0.26 bpm/mmHg for the PRM, P < 0.001) and tachycardia (0.83 ± 0.62 vs 0.28 ± 0.28 bpm/mmHg for the PRM, P < 0.001). The advantage of the present method is related to its objectivity, since the routine automatically calculates the desired parameters according to previous software instructions. This is an objective, robust and easy-to-use tool for cardiovascular reflex studies.
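The deltaHR/deltaBP index defined above can be sketched as: baseline means before the infusion, peak deviations after it, and their ratio. Peak detection here is a plain max-deviation search on unfiltered toy series, a simplification of the filtered, interactive selection the paper describes:

```python
def baroreflex_index(bp, hr, baseline_samples):
    """Baroreflex sensitivity as deltaHR/deltaBP: peak change from
    the pre-infusion baseline of each beat-to-beat series."""
    bp_base = sum(bp[:baseline_samples]) / baseline_samples
    hr_base = sum(hr[:baseline_samples]) / baseline_samples
    # Peak = sample with the largest absolute deviation from baseline.
    delta_bp = max(bp[baseline_samples:], key=lambda v: abs(v - bp_base)) - bp_base
    delta_hr = max(hr[baseline_samples:], key=lambda v: abs(v - hr_base)) - hr_base
    return delta_hr / delta_bp

# Phenylephrine-like toy response: BP rises 20 mmHg, HR falls 40 bpm.
bp = [100, 100, 100, 110, 120, 115]
hr = [350, 350, 350, 330, 310, 320]
gain = baroreflex_index(bp, hr, baseline_samples=3)
```

The resulting gain of -2.0 bpm/mmHg has the expected sign for reflex bradycardia: a pressure rise met by a heart-rate fall.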

Relevance:

100.00%

Publisher:

Abstract:

In vivo proton magnetic resonance spectroscopy (¹H-MRS) is a technique capable of assessing biochemical content and pathways in normal and pathological tissue. In the brain, ¹H-MRS complements the information given by magnetic resonance images. The main goal of the present study was to assess the accuracy of ¹H-MRS for the classification of brain tumors in a pilot study comparing results obtained by manual and semi-automatic quantification of metabolites. In vivo single-voxel ¹H-MRS was performed in 24 control subjects and 26 patients with brain neoplasms that included meningiomas, high-grade neuroglial tumors and pilocytic astrocytomas. Seven metabolite groups (lactate, lipids, N-acetyl-aspartate, glutamate and glutamine group, total creatine, total choline, myo-inositol) were evaluated in all spectra by two methods: a manual one consisting of integration of manually defined peak areas, and the advanced method for accurate, robust and efficient spectral fitting (AMARES), a semi-automatic quantification method implemented in the jMRUI software. Statistical methods included discriminant analysis and the leave-one-out cross-validation method. Both manual and semi-automatic analyses detected differences in metabolite content between tumor groups and controls (P < 0.005). The classification accuracy obtained with the manual method was 75% for high-grade neuroglial tumors, 55% for meningiomas and 56% for pilocytic astrocytomas, while for the semi-automatic method it was 78, 70, and 98%, respectively. Both methods classified all control subjects correctly. The study demonstrated that ¹H-MRS accurately differentiated normal from tumoral brain tissue and confirmed the superiority of the semi-automatic quantification method.
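The leave-one-out cross-validation used above can be sketched with a simple nearest-class-mean classifier standing in for the paper's discriminant analysis; the two-metabolite feature vectors below are made up for illustration:

```python
def nearest_mean_predict(train_x, train_y, x):
    """Classify a metabolite feature vector by the nearest class mean
    (a stand-in for the paper's discriminant analysis)."""
    best, best_d = None, float("inf")
    for c in set(train_y):
        pts = [p for p, y in zip(train_x, train_y) if y == c]
        mean = [sum(col) / len(pts) for col in zip(*pts)]
        d = sum((a - b) ** 2 for a, b in zip(x, mean))
        if d < best_d:
            best, best_d = c, d
    return best

def loo_accuracy(xs, ys):
    """Leave-one-out cross-validation: hold out each spectrum once,
    train on the rest, report the fraction classified correctly."""
    hits = 0
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        hits += nearest_mean_predict(train_x, train_y, xs[i]) == ys[i]
    return hits / len(xs)

# Toy (NAA, choline) profiles: tumors show low NAA and high choline.
xs = [(10.0, 1.0), (9.0, 1.2), (9.5, 0.9), (2.0, 4.0), (2.5, 4.2), (1.8, 3.9)]
ys = ["control", "control", "control", "tumor", "tumor", "tumor"]
acc = loo_accuracy(xs, ys)
```

With well-separated classes every held-out subject is recovered, mirroring how the paper's classification accuracies were computed per tumor group.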

Relevance:

100.00%

Publisher:

Abstract:

Previous assessments of verticality by means of the rod and rod-and-frame tests indicated that human subjects can be more (field-dependent) or less (field-independent) influenced by a frame placed around a tilted rod. In the present study we propose a new approach to these tests. The judgment of visual verticality (rod test) was evaluated in 50 young subjects (28 males, ranging in age from 20 to 27 years) by randomly projecting onto a tangent screen a luminous rod tilted between -18 and +18° (negative values indicating left tilts). In the rod-and-frame test the rod was displayed within a luminous fixed frame tilted at +18 or -18°. Subjects were instructed to indicate verbally the rod's inclination direction (forced choice). Visual dependency was estimated by means of a Visual Index calculated from rod and rod-and-frame test values. Based on this index, volunteers were classified as field-dependent, intermediate, or field-independent. A fourth category was created for those field-independent subjects whose number of correct guesses in the rod-and-frame test exceeded that of the rod test, indicating improved performance when a surrounding frame was present. In conclusion, the combined use of the subjective visual vertical and the rod-and-frame test provides a specific and reliable evaluation of verticality in healthy subjects and might be useful for probing changes in brain function after central or peripheral lesions.