973 results for Semi-automatic road extraction


Relevance:

100.00%

Publisher:

Abstract:

The European research project TIDE (Tidal Inlets Dynamics and Environment) is developing and validating coupled models describing the morphological, biological and ecological evolution of tidal environments. The interactions between the physical and biological processes occurring in these regions require that the system be studied as a whole rather than as separate parts. Extensive use of remote sensing, including LiDAR, is being made to provide validation data for the modelling. This paper describes the different uses of LiDAR within the project and their relevance to the TIDE science objectives. LiDAR data have been acquired from three different environments: the Venice Lagoon in Italy, Morecambe Bay in England, and the Eden estuary in Scotland. LiDAR accuracy at each site has been evaluated using ground reference data acquired with differential GPS. A semi-automatic technique has been developed to extract tidal channel networks from LiDAR data, either used alone or fused with aerial photography. While the resulting networks may require some correction, the procedure does allow network extraction over large areas using objective criteria and reduces fieldwork requirements. The networks extracted may subsequently be used in geomorphological analyses, for example to describe the drainage patterns induced by networks and to examine the rate of change of networks. Estimation of the height of the low, sparse vegetation on marshes is being investigated by analysing the statistical distribution of the measured LiDAR heights. Species having different mean heights may be separated using the first-order moments of the height distribution.
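The species-separation idea mentioned above (comparing the first-order moment, i.e. the mean, of LiDAR height distributions) can be sketched as follows. This is a minimal illustration, not the TIDE implementation: the ground level, heights and the 0.25 m decision threshold are invented for the example.

```python
import numpy as np

def mean_height_above_ground(lidar_heights, ground_level):
    """First-order moment of the LiDAR returns relative to the ground surface."""
    return float(np.mean(np.asarray(lidar_heights) - ground_level))

def classify_species(mean_height, threshold=0.25):
    """Assign a marsh patch to the taller or shorter species by its mean height (m).

    The threshold is a hypothetical value chosen for illustration only.
    """
    return "tall_species" if mean_height > threshold else "short_species"
```

In practice such a threshold would be fitted from ground reference data at each site rather than fixed a priori.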

Relevance:

100.00%

Publisher:

Abstract:

Two ongoing projects at the ESSC that involve the development of new techniques for extracting information from airborne LiDAR data and combining this information with environmental models are discussed. The first project, in conjunction with Bristol University, aims to improve 2-D river flood flow models by using remote sensing to provide distributed data for model calibration and validation. Airborne LiDAR can provide such models with dense and accurate floodplain topography together with vegetation heights for the parameterisation of model friction. The vegetation height data can be used to specify a friction factor at each node of a model's finite element mesh. A LiDAR range image segmenter has been developed which converts a LiDAR image into separate raster maps of surface topography and vegetation height for use in the model. Satellite and airborne SAR data have been used to measure flood extent remotely in order to validate the modelled flood extent. Methods have also been developed for improving the models by decomposing the model's finite element mesh to reflect floodplain features, such as hedges and trees, having different frictional properties from their surroundings. Originally developed for rural floodplains, the segmenter is currently being extended to provide DEMs and friction parameter maps for urban floods by fusing the LiDAR data with digital map data. The second project is concerned with the extraction of tidal channel networks from LiDAR. These networks are important features of the inter-tidal zone and play a key role in tidal propagation and in the evolution of salt-marshes and tidal flats. The study of their morphology is currently an active area of research, and a number of theories related to networks have been developed which require validation using dense and extensive observations of network forms and cross-sections.
The conventional method of measuring networks is cumbersome and subjective, involving manual digitisation of aerial photographs in conjunction with field measurement of channel depths and widths for selected parts of the network. A semi-automatic technique has been developed to extract networks from LiDAR data of the inter-tidal zone. A multi-level, knowledge-based approach has been implemented, whereby low-level algorithms first extract channel fragments based mainly on image properties, then a high-level processing stage improves the network using domain knowledge. The approach adopted at the low level uses multi-scale edge detection to detect channel edges, then associates adjacent anti-parallel edges to form channels. The higher-level processing includes a channel repair mechanism.
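The low-level stage described above (detect channel edges, then pair adjacent anti-parallel edges into channels) can be illustrated on a 1-D elevation cross-section, where a channel appears as a falling edge followed by a nearby rising edge. This is a simplified sketch, not the authors' 2-D multi-scale implementation; the gradient threshold and maximum width are assumptions.

```python
import numpy as np

def find_channel_segments(profile, grad_thresh=0.5, max_width=20):
    """Pair falling and rising edges on an elevation profile into channel extents."""
    g = np.gradient(np.asarray(profile, dtype=float))
    falls = np.where(g < -grad_thresh)[0]   # steep descent: one channel bank
    rises = np.where(g > grad_thresh)[0]    # steep ascent: the opposite bank
    segments = []
    for f in falls:
        # associate each falling edge with the nearest rising edge to its right
        cand = rises[(rises > f) & (rises - f <= max_width)]
        if cand.size:
            segments.append((int(f), int(cand[0])))
    return segments
```

A flat-bottomed dip in the profile yields a (left, right) pair bracketing the channel; in 2-D the same pairing is done between anti-parallel edge contours.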

Relevance:

100.00%

Publisher:

Abstract:

In 1998 the first decorticator was developed in the Textile Engineering Laboratory and patented for the purpose of extracting fibres from pineapple leaves, with financial help from CNPq and BNB. The objective of the present work was to develop an automatic decorticator, different from the first one, with a semi-automatic system of decortication featuring automatic feeding of the leaves and collection of the extracted fibres. The system is started through a command unit that passes information to two motors, one for driving the beater cylinder and the other for feeding the leaves and extracting the decorticated fibres automatically. This in turn introduces the leaves between a knife and a beater cylinder with twenty blades (the previous one had only eight). These blades are supported by equidistant flanges on a central transmission axis, which increases the number of beatings of the leaves. In the present system the operator has to place the leaves on the rotating endless feeding belt and collect the extracted fibres that are carried out on another endless belt. The pulp resulting from the extraction is collected in a tray through a collector. The feeding of the leaves as well as the extraction of the fibres is controlled automatically by varying the velocity of the cylinders. The semi-automatic decorticator is basically composed of a chassis made of iron bars (L-profile), 200 cm in length, 91 cm in height and 68 cm in width. The decorticator weighs around 300 kg. It was observed that the increase in the number of blades from eight to twenty in the beater cylinder reduced the turbulence inside the decorticator, which helped to improve the removal of the fibres as well as their quality. From the studies carried out, 2.8 to 4.5% of fibre can be extracted from each leaf. This gives around 4 to 5 tons of fibre per hectare, which is more than the cotton production per hectare.
This quantity could undoubtedly generate jobs, not only in the production of the fibres but also in their application in different areas.
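As a back-of-the-envelope check of the yield figures quoted above (2.8-4.5% fibre by leaf mass giving roughly 4-5 t of fibre per hectare): the leaf biomass per hectare is not stated in the text, so the ~120 t/ha used below is our own assumption, chosen only to show that it would be consistent with those numbers.

```python
def fibre_yield_t_per_ha(leaf_biomass_t_per_ha, fibre_fraction):
    """Fibre yield (t/ha) = leaf biomass (t/ha) x extractable fibre fraction."""
    return leaf_biomass_t_per_ha * fibre_fraction

# Hypothetical leaf biomass of 120 t/ha; the quoted fractions then bracket
# the stated 4-5 t/ha range (about 3.4 to 5.4 t/ha).
low = fibre_yield_t_per_ha(120, 0.028)
high = fibre_yield_t_per_ha(120, 0.045)
```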

Relevance:

100.00%

Publisher:

Abstract:

In this paper a fully automatic strategy is proposed to reduce the complexity of the patterns (vegetation, buildings, soil etc.) that interact with the object 'road' in color images, thus reducing the difficulty of the automatic extraction of this object. The proposed methodology consists of three sequential steps. In the first step, a pointwise operator known as NandA (Natural and Artificial) is applied to compute an artificiality index. The result is an image whose intensity attribute is the NandA response. The second step consists in automatically thresholding the image obtained in the previous step, resulting in a binary image. This image usually allows the separation of artificial from natural objects. The third step consists in applying a pre-existing road seed extraction methodology to the previously generated binary image. Several experiments carried out with real images verified the potential of the proposed methodology. Comparison of the results with those obtained by a similar road seed extraction methodology applied to gray-level images showed that the main benefit was a drastic reduction in computational effort.
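The second step above (automatic thresholding of the artificiality-index image into a binary artificial/natural map) can be sketched as follows. The NandA operator itself is not reproduced here; `index_img` stands in for its response, and Otsu's method is one plausible choice of automatic threshold, not necessarily the one the authors used.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold maximising between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # probability of class 0 at each cut
    m0 = np.cumsum(p * centers)       # cumulative mean*prob of class 0
    mg = m0[-1]                       # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mg * w0 - m0) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0.0
    return centers[int(np.argmax(between))]

def binarise(index_img):
    """Binary map: True where the index suggests an artificial surface."""
    t = otsu_threshold(np.ravel(index_img))
    return index_img > t
```

On a bimodal index image the threshold falls between the "natural" and "artificial" modes, and the binary map can then be fed to the road seed extractor.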

Relevance:

100.00%

Publisher:

Abstract:

The terminological performance of the descriptors representing the Information Science domain in the SIBi/USP Controlled Vocabulary was evaluated in manual, automatic and semi-automatic indexing processes. It can be concluded that, in order to perform better (i.e., to adequately represent the content of the corpus), the current Information Science descriptors of the SIBi/USP Controlled Vocabulary must be extended and put into context by means of terminological definitions, so that users' information needs are fulfilled.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work is to evaluate the influence of point measurements in images with subpixel accuracy and their contribution to the calibration of digital cameras. The effect of subpixel measurements on the 3D coordinates of check points in object space is also evaluated. With this purpose, an algorithm allowing subpixel accuracy was implemented for the semi-automatic determination of points of interest, based on the Förstner operator. Experiments were carried out with a block of images acquired with the DuncanTech MS3100-CIR multispectral camera. The influence of subpixel measurements on the adjustment by the Least Squares Method (LSM) was evaluated by comparing the estimated standard deviations of the parameters in both situations: manual measurement (pixel accuracy) and subpixel estimation. Additionally, the influence of subpixel measurements on the 3D reconstruction was analysed. Based on the obtained results, i.e., on the quantified reduction of the standard deviations of the Inner Orientation Parameters (IOP) and of the relative error of the 3D reconstruction, it was shown that measurements with subpixel accuracy are relevant for some photogrammetric tasks, mainly those in which metric quality is of great importance, such as camera calibration.
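To give a feel for what subpixel estimation buys: a common refinement, shown here as a compact stand-in for the Förstner operator's subpixel stage (the real operator solves a weighted least-squares intersection of gradient lines), is to fit a parabola through a discrete response peak and its two neighbours and take the vertex.

```python
import numpy as np

def subpixel_peak(response):
    """Refine the location of a 1-D response maximum to subpixel accuracy."""
    r = np.asarray(response, dtype=float)
    i = int(np.argmax(r))
    if i == 0 or i == len(r) - 1:
        return float(i)                        # no neighbours: pixel accuracy
    a, b, c = r[i - 1], r[i], r[i + 1]
    # vertex of the parabola through (-1, a), (0, b), (1, c)
    return i + 0.5 * (a - c) / (a - 2 * b + c)
```

For a response sampled from a smooth peak, the recovered position is accurate to a small fraction of a pixel, which is what drives the reported reduction in parameter standard deviations.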

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Ciência da Informação - FFC

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

ABSTRACT: Long-term observation studies of landscape dynamics in Sahelian countries generally face a deficient supply of quantitative spatial information. The local- to regional-scale data shortage encountered in Mali led to a methodological study comprising the development of procedures for the multi-temporal acquisition and analysis of landscape-change data. For the West African region, historical remote sensing material exists with large areal coverage in the form of high-resolution aerial photographs from the 1950s onwards and the first Earth-observing satellite data from Landsat MSS from the 1970s onwards. For digital reproducibility, data comparability and object detectability, multi-temporal long-term analyses require an a priori examination of data character and quality. Two methodological approaches, developed without any available or reconstructable ground control data, show not only the possibilities but also the limits of unambiguous radiometric and morphometric image information extraction. Within the flood-prone area of the inland Niger Delta in central Mali, two sub-studies on the extraction of quantitative Sahelian vegetation data address the radiometric and atmospheric problems: 1. pre-processing homogenisation of multi-temporal MSS archive data, with simulations of the impact of atmospheric and sensor-related effects; 2. development of a method for the semi-automatic detection and quantification of the dynamics of woody cover density on panchromatic archive aerial photographs. The first sub-study finds historical Landsat MSS satellite image data to be unusable for multi-temporal analyses of landscape dynamics. The second sub-study presents the methodological approach developed specifically for the automatic pattern recognition and quantification of Sahelian woody objects by means of morpho-mathematical filter operations.
Finally, the demand for cost- and time-efficient methodological standards is discussed with regard to their representativeness for long-term observation of the resource inventory of semi-arid regions and their operational transferability to data from modern remote sensing sensors.
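Morpho-mathematical filtering of the kind described above can be illustrated with a white top-hat transform (image minus its grey opening), which isolates small bright objects such as woody crowns on a panchromatic aerial photograph; thresholding and connected-component labelling then give an object count and a cover fraction. This is a generic sketch, not the study's actual method; the structuring-element size and threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def woody_cover(img, se_size=5, thresh=0.2):
    """Count small bright objects and estimate their cover fraction."""
    opened = ndimage.grey_opening(img, size=(se_size, se_size))
    tophat = img - opened               # objects smaller than the SE remain
    mask = tophat > thresh
    labels, n_objects = ndimage.label(mask)
    return n_objects, float(mask.mean())
```

Tracking the returned cover fraction across image dates is one simple way to quantify woody-cover dynamics.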

Relevance:

100.00%

Publisher:

Abstract:

While sound and video may capture viewers' attention, interaction can captivate them. This was not available prior to the advent of Digital Television. In fact, what lies at the heart of the Digital Television revolution is this new type of interactive content, offered in the form of interactive Television (iTV) services. On top of that, the new world of converged networks has created demand for a new type of converged services on a range of mobile terminals (Tablet PCs, PDAs and mobile phones). This paper presents a new approach to service creation that allows for the semi-automatic translation of simulations and rapid prototypes created in the accessible desktop multimedia authoring package Macromedia Director into services ready for broadcast. This is achieved by a series of tools that de-skill and speed up the process of creating digital TV user interfaces (UIs) and applications for mobile terminals. The benefits of rapid prototyping are essential for the production of these new types of services, and are therefore discussed in the first section of this paper. The following sections give an overview of the operation of the content, service creation and management sub-systems, illustrating why these tools form an important and integral part of a system responsible for creating, delivering and managing converged broadcast and telecommunications services. The next section examines a number of candidate metadata languages for describing the iTV service user interface, together with the schema language adopted in this project. A detailed description of the operation of the two tools is provided to offer insight into how they can be used to de-skill and speed up the process of creating digital TV user interfaces and applications for mobile terminals.
Finally, representative broadcast-oriented and telecommunications-oriented converged service components are also introduced, demonstrating how these tools have been used to generate different types of services.

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a preprocessing module for improving the performance of a Spanish into Spanish Sign Language (Lengua de Signos Española: LSE) translation system when dealing with sparse training data. This preprocessing module replaces Spanish words with associated tags. The list of Spanish words (the vocabulary) and associated tags used by this module is computed automatically, considering those signs that show the highest probability of being the translation of each Spanish word. This automatic tag extraction has been compared to a manual strategy, achieving almost the same improvement. In this analysis, several alternatives for dealing with non-relevant words have been studied; non-relevant words are Spanish words not assigned to any sign. The preprocessing module has been incorporated into two well-known statistical translation architectures: a phrase-based system and a Statistical Finite State Transducer (SFST). The system has been developed for a specific application domain: the renewal of Identity Documents and Driver's Licenses. In order to evaluate the system, a parallel corpus made up of 4080 Spanish sentences and their LSE translations has been used. The evaluation results revealed a significant performance improvement when including this preprocessing module. In the phrase-based system, the proposed module increased BLEU (Bilingual Evaluation Understudy) from 73.8% to 81.0% and the human evaluation score from 0.64 to 0.83. In the case of the SFST, BLEU increased from 70.6% to 78.4% and the human evaluation score from 0.65 to 0.82.
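The automatic tag-extraction idea described above can be sketched as follows: from aligned (Spanish word, sign) counts, map each word to its most frequent (highest-probability) sign's tag, and send words never aligned to any sign (non-relevant words) to a single NONE tag. The toy word-sign pairs and tag names below are invented for illustration; they are not from the paper's corpus.

```python
from collections import Counter, defaultdict

def build_word_tags(word_sign_pairs):
    """Map each word to the tag of its most probable aligned sign."""
    counts = defaultdict(Counter)
    for word, sign in word_sign_pairs:
        counts[word][sign] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag_sentence(words, word2tag, none_tag="NONE"):
    """Replace each word by its tag; unseen/non-relevant words get none_tag."""
    return [word2tag.get(w, none_tag) for w in words]
```

The tagged sentence, rather than the raw words, is then what the phrase-based or SFST system trains on, shrinking the vocabulary the sparse data must cover.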

Relevance:

100.00%

Publisher:

Abstract:

Automatic segmentation and tracking of the coronary artery tree from cardiac multislice CT images is an important goal for improving the diagnosis and treatment of coronary artery disease. This paper presents a semi-automatic algorithm (one input point per vessel) based on morphological grayscale local reconstructions in 3D images, devoted to the extraction of the coronary artery tree. The algorithm has been evaluated in the framework of the Coronary Artery Tracking Challenge 2008 [1], obtaining consistent results in overlap measurements (a mean of 70% of each vessel correctly tracked). Poorer results in accuracy measurements suggest that future work should refine the centerline extraction. The algorithm can be implemented efficiently, and its general strategy can easily be extrapolated to a completely automated centerline extraction or to user-interactive vessel extraction.
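The building block named above, morphological grayscale reconstruction by dilation, can be sketched minimally: iteratively dilate a seed image while clamping it under a mask (the intensity image) until stability, so that bright structure connected to the seed point is recovered. A 2-D toy version is shown for brevity (the paper works on 3-D CT volumes), and this generic formulation is not the authors' exact implementation.

```python
import numpy as np
from scipy import ndimage

def reconstruct_by_dilation(seed, mask, max_iter=1000):
    """Grayscale reconstruction of `mask` from `seed` (seed <= mask pointwise)."""
    seed = np.minimum(seed, mask)
    for _ in range(max_iter):
        grown = np.minimum(ndimage.grey_dilation(seed, size=(3, 3)), mask)
        if np.array_equal(grown, seed):       # stable: reconstruction complete
            return grown
        seed = grown
    return seed
```

Seeding at the user's single input point recovers the connected bright vessel while leaving disconnected bright regions untouched, which is the basis for tracking one artery per click.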