973 results for Semi-automatic road extraction


Relevance: 100.00%

Abstract:

We propose to directly process 3D+t image sequences with mathematical morphology operators, using a new classification of 3D+t structuring elements. Several methods (filtering, tracking, segmentation) dedicated to the analysis of 3D+t datasets of zebrafish embryogenesis are introduced and validated on a synthetic dataset. We then illustrate the application of these methods to the analysis of datasets of zebrafish early development acquired with various microscopy techniques. This processing paradigm produces spatio-temporally coherent results, as it benefits from the intrinsic redundancy of the temporal dimension, and minimizes the need for human intervention in semi-automatic algorithms.
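
The paper's operators are not spelled out here, but the core idea of treating time as a fourth image axis can be sketched with off-the-shelf tools. Below is a minimal illustration, assuming a (t, z, y, x) array layout and an invented 3x3x3x3 structuring element: a grey-level opening whose support extends one frame backward and forward in time, so that bright structures must persist across frames to survive.

```python
import numpy as np
from scipy import ndimage

# seq: 4D array indexed (t, z, y, x), e.g. a time-lapse confocal stack
# (synthetic stand-in data here).
rng = np.random.default_rng(0)
seq = rng.poisson(5, size=(10, 16, 64, 64)).astype(float)

# 3D+t structuring element: a 3x3x3 spatial cube extended one frame
# backward and forward in time -- this is what exploits the temporal
# redundancy of the sequence.
footprint = np.ones((3, 3, 3, 3), dtype=bool)

# A grey-level opening removes bright structures smaller than the
# structuring element in *space and time*, i.e. transient noise that
# does not persist across neighbouring frames.
filtered = ndimage.grey_opening(seq, footprint=footprint)
```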

Relevance: 100.00%

Abstract:

In this paper, we present the use of D-higraphs to perform HAZOP studies. D-higraphs is a formalism that captures in a single model both the functional and the structural (ontological) components of a given system. A tool to perform semi-automatic guided HAZOP studies on a process plant is presented. The diagnostic system uses an expert system to predict the behavior modeled using D-higraphs. The approach is applied to an industrial case study and its results are compared with similar approaches proposed in previous studies. The analysis shows that the proposed methodology fits its purpose: it enables causal reasoning that explains the causes and consequences derived from deviations, and it fills some of the gaps and drawbacks of previously reported HAZOP assistant tools.
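
As a rough illustration of the kind of causal reasoning such an assistant performs (the components, deviations and rules below are invented, not the D-higraphs formalism itself), a guide-word deviation can be propagated forward through functional dependencies:

```python
from collections import deque

# (component, deviation) -> consequences triggered downstream; a crude,
# invented stand-in for behaviour predicted from a D-higraph model.
rules = {
    ("pump", "no flow"): [("pipe", "no flow")],
    ("pipe", "no flow"): [("reactor", "loss of cooling")],
}

def propagate(component, deviation):
    """Collect every (component, deviation) fact reachable from one
    HAZOP guide-word deviation by forward chaining."""
    seen, queue = set(), deque([(component, deviation)])
    while queue:
        fact = queue.popleft()
        if fact not in seen:
            seen.add(fact)
            queue.extend(rules.get(fact, []))
    return seen

print(sorted(propagate("pump", "no flow")))
# [('pipe', 'no flow'), ('pump', 'no flow'), ('reactor', 'loss of cooling')]
```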

Relevance: 100.00%

Abstract:

Non-invasive quantitative assessment of right ventricular anatomical and functional parameters is a challenging task. We present a semi-automatic approach for right ventricle (RV) segmentation from 4D MR images in two variants, which differ in the amount of user interaction. The method consists of three main phases: first, foreground and background markers are generated from the user input; next, an over-segmented region image is obtained by applying a watershed transform; finally, these regions are merged using 4D graph-cuts with an intensity-based boundary term. In the first variant, the user outlines the inside of the RV wall in a few end-diastole slices; in the second, two marker pixels serve as the starting point for the application of a statistical atlas. Results were obtained by blind evaluation on 16 test 4D MR volumes. They show the method to be robust to marker location and place it favourably among existing approaches.
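
A sketch of the middle phase using scikit-image on a synthetic volume (array shapes and parameters are assumptions; the marker-generation and 4D graph-cut merging phases are only indicated in comments):

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# One 3D frame of the 4D MR sequence (synthetic stand-in data).
rng = np.random.default_rng(1)
volume = ndimage.gaussian_filter(rng.normal(size=(16, 64, 64)), 2)

# Phase 2: the over-segmented region image. With no markers supplied,
# the watershed is seeded at every local minimum of the gradient
# magnitude, producing many small regions.
gradient = ndimage.gaussian_gradient_magnitude(volume, sigma=1.0)
regions = watershed(gradient)
print(regions.max(), "regions")

# Phases 1 and 3 (user-derived foreground/background markers, then
# merging the regions with 4D graph-cuts using an intensity-based
# boundary term) are not reproduced in this sketch.
```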

Relevance: 100.00%

Abstract:

Quantification of neurotransmission in Single-Photon Emission Computed Tomography (SPECT) studies of the dopaminergic system can be used to track and stage disease and to facilitate early diagnosis. The aim of this study was to implement QuantiDOPA, semi-automatic quantification software intended for clinical routine, to reconstruct and quantify neurotransmission SPECT studies using radioligands that bind the dopamine transporter (DAT). To this end, a workflow-oriented framework for biomedical imaging (GIMIAS) was employed. QuantiDOPA allows the user to perform a semi-automatic quantification of striatal uptake in three stages: reconstruction, normalization and quantification. QuantiDOPA is a useful tool for semi-automatic quantification in DAT SPECT imaging and has proved simple and flexible.
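
The abstract does not give QuantiDOPA's quantification formula; a common index for striatal uptake in DAT SPECT, shown here purely as an assumption, is the specific binding ratio of striatal counts against a reference region with negligible DAT density:

```python
import numpy as np

def specific_binding_ratio(image, striatal_mask, reference_mask):
    """Standard DAT SPECT uptake index: (striatum - reference) / reference.
    `reference_mask` covers a region with negligible DAT density
    (typically occipital cortex). Whether QuantiDOPA uses exactly this
    ratio is an assumption."""
    striatal = image[striatal_mask].mean()
    reference = image[reference_mask].mean()
    return (striatal - reference) / reference

# Toy volume: background counts of 10, "striatal" counts of 30.
img = np.full((8, 8, 8), 10.0)
img[3:5, 3:5, 3:5] = 30.0
striatum = img > 20
reference = ~striatum
print(specific_binding_ratio(img, striatum, reference))  # 2.0
```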

Relevance: 100.00%

Abstract:

In this paper we present an innovative technique to tackle the problem of automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. Firstly, a color and appearance based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimated plane makes it possible to correct deformations due to perspective, easing subsequent sign classification.
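
The RANSAC plane-fitting step can be sketched as follows (iteration count and inlier threshold are illustrative values, not the paper's):

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, rng=None):
    """Fit a plane n . x = d to a 3D point cloud, robust to outliers."""
    rng = rng or np.random.default_rng()
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        # Minimal sample: three points define a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                    # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ p0
        # Consensus: points within `threshold` of the candidate plane.
        inliers = np.abs(points @ normal - d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers

# 100 points near the plane z = 0 plus 20 gross outliers.
rng = np.random.default_rng(3)
pts = np.vstack([np.c_[rng.random((100, 2)), 0.01 * rng.random(100)],
                 rng.random((20, 3)) + 2.0])
(normal, d), inliers = ransac_plane(pts, rng=rng)
print(inliers.sum())  # ~100: the outliers are rejected
```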

Relevance: 100.00%

Abstract:

Cognitive rehabilitation aims to remediate or alleviate the cognitive deficits that appear after an episode of acquired brain injury (ABI). The purpose of this work is to describe the Guttmann Neuropersonal Trainer (GNPT), a telerehabilitation platform that provides new strategies for cognitive rehabilitation, improves efficiency of and access to treatment, and increases the knowledge generated by the process. A cognitive rehabilitation process was modeled to design and develop the system, which allows neuropsychologists to configure and schedule rehabilitation sessions consisting of sets of personalized computerized cognitive exercises grounded in neuroscience and plasticity principles. It provides remote continuous monitoring of the patient's performance through an asynchronous communication strategy. An automatic knowledge extraction method was used to implement a decision support system that improves treatment customization. GNPT has been deployed in 27 rehabilitation centers and 83 patients' homes, facilitating access to treatment; in total, 1,660 patients have been treated. Usability and cost analysis methodologies were applied to measure efficiency in real clinical environments. The usability evaluation reveals a system usability score higher than 70 for all target users, and the cost-efficiency study shows a cost ratio of 1:20 compared with face-to-face rehabilitation. GNPT enables brain-damaged patients to continue and further extend rehabilitation beyond the hospital, improving the efficiency of the rehabilitation process. It allows customized therapeutic plans and provides information for the further development of clinical practice guidelines.

Relevance: 100.00%

Abstract:

In order to perform finite element (FE) analyses of patient-specific abdominal aortic aneurysms, geometries derived from medical images must be meshed with suitable elements. We propose a semi-automatic method for generating conforming hexahedral meshes directly from contours segmented from medical images. Magnetic resonance images are acquired using a protocol developed to give the abdominal aorta high contrast against the surrounding soft tissue, allowing us to distinguish between the different structures of interest. We build novel quadrilateral meshes for each surface of the sectioned geometry and generate conforming hexahedral meshes by combining them. The three-layered morphology of both the arterial wall and the thrombus is incorporated using experimentally determined parameters. We demonstrate the quality of our patient-specific meshes using the element scaled Jacobian. The method efficiently generates high-quality elements suitable for FE analysis, even in the region where the aorta bifurcates into the iliac arteries. For example, hexahedral meshes of up to 125,000 elements are generated in less than 130 s, with 94.8% of elements well suited for FE analysis. We provide novel input for simulations by independently meshing both the arterial wall and the intraluminal thrombus of the aneurysm, together with their respective layered morphologies.
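
The scaled Jacobian quality metric mentioned above is well defined for a hexahedral element. A minimal sketch, assuming VTK-style node ordering (bottom quad 0-3 counter-clockwise, top quad 4-7 above it), evaluates the determinant of the three unit edge vectors at each of the eight corners and reports the worst value:

```python
import numpy as np

def scaled_jacobian(hex_nodes):
    """Minimum corner scaled Jacobian of one hexahedron.

    hex_nodes: (8, 3) array in VTK node ordering. Returns a value in
    [-1, 1]; 1.0 is a perfect cube, values <= 0 mean an inverted element.
    """
    # (corner, neighbour_1, neighbour_2, neighbour_3), ordered so the
    # determinant is +1 on the reference unit cube.
    corners = [(0, 1, 3, 4), (1, 2, 0, 5), (2, 3, 1, 6), (3, 0, 2, 7),
               (4, 7, 5, 0), (5, 4, 6, 1), (6, 5, 7, 2), (7, 6, 4, 3)]
    worst = 1.0
    for c, a, b, d in corners:
        edges = np.array([hex_nodes[a] - hex_nodes[c],
                          hex_nodes[b] - hex_nodes[c],
                          hex_nodes[d] - hex_nodes[c]], dtype=float)
        edges /= np.linalg.norm(edges, axis=1, keepdims=True)  # unit edges
        worst = min(worst, np.linalg.det(edges))
    return worst

cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
print(scaled_jacobian(cube))  # 1.0
```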

Relevance: 100.00%

Abstract:

This paper surveys some of the fundamental problems in natural language (NL) understanding (syntax, semantics, pragmatics, and discourse) and the current approaches to solving them. Some recent developments in NL processing include increased emphasis on corpus-based rather than example- or intuition-based work, attempts to measure the coverage and effectiveness of NL systems, dealing with discourse and dialogue phenomena, and attempts to use both analytic and stochastic knowledge. Critical areas for the future include grammars appropriate for processing large amounts of real language; automatic (or at least semi-automatic) methods for deriving models of syntax, semantics, and pragmatics; self-adapting systems; and integration with speech processing. Of particular importance are techniques that can be tuned to such requirements as full versus partial understanding and spoken language versus text. Portability (the ease with which one can configure an NL system for a particular application) is one of the largest barriers to the application of this technology.

Relevance: 100.00%

Abstract:

Enteric methane production is among the main sources of greenhouse gas emissions from agricultural activities, and it also represents an energy loss to the animal of up to 12% of gross energy intake. The objective of this work was therefore to evaluate the use of encapsulated calcium nitrate in ruminant feeding as a nutritional strategy to mitigate enteric methane. The experiment consisted of two phases. Phase I: diets supplemented with a commercial encapsulated calcium nitrate product were tested using the semi-automatic in vitro gas production technique. Half a gram of substrate with 50 mL of incubation medium and 25 mL of ruminal inoculum was incubated in glass flasks (160 mL) at 39 °C for 24 hours to determine the best diet to be tested in vivo. The first assay tested the association between monensin (diets with and without added monensin) and doses of encapsulated nitrate (0, 1.5 and 3% of dry matter (DM)) for in vitro methane mitigation. No interaction between monensin and nitrate was observed for the variables tested. The second in vitro assay tested the interaction between diet type, with two concentrate:roughage ratios (20:80 and 80:20), and the inclusion of encapsulated nitrate doses (0, 1.5, 3 and 4.5% DM). Although no associative effect between diet and nitrate was observed for methane reduction, changes in ruminal fermentation products were observed, with a reduction in propionate resulting from the competition between nitrate and propionogenic bacteria for hydrogen, which is scarcer in diets with lower fermentation. Phase II: based on the results of Phase I, the second phase evaluated the associative effect of the dietary concentrate:roughage ratio and the nitrate dose on methane emission, ruminal constituents and nitrate toxicity in vivo. Six rumen-cannulated lambs were used, arranged in a 6 x 6 Latin square design with a 2 x 3 factorial arrangement. The factors were diet type (concentrate:roughage ratios of 20:80 and 80:20) and dietary inclusion of encapsulated nitrate (0, 1.5 and 3% DM) gradually replacing soybean meal, for a total of six treatments. Soybean meal was replaced by nitrate on a protein-equivalent basis so that the diets remained isonitrogenous. The animals were gradually adapted to dietary nitrate to avoid toxicity problems. Toxicity was assessed by the methaemoglobin level in the sheep's blood 3 hours after feeding. Nitrate reduced methane production in both diets. Blood methaemoglobin levels were not altered by nitrate addition. An associative effect between diet type and nitrate was observed for ruminal fermentation products such as acetate, which increased linearly in the 80% concentrate diets when nitrate was added. It is concluded that nitrate, when used safely, is a promising strategy to reduce enteric methane regardless of the diet type being supplemented.

Relevance: 100.00%

Abstract:

We present new tools for the segmentation and analysis of musical scores in the OpenMusic computer-aided composition environment. A modular object-oriented framework enables the creation of segmentations on score objects and the implementation of automatic or semi-automatic analysis processes. Analyses can be performed and displayed through customizable classes and callbacks. Concrete examples are given, in particular the implementation of a semi-automatic harmonic analysis system and a framework for rhythmic transcription.
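
OpenMusic itself is a Common Lisp environment, so the following is only a language-neutral sketch of one core step a semi-automatic harmonic analysis must perform: matching the pitch-class content of a segment against triad templates (the templates and naming are simplified inventions):

```python
# Triad templates as interval sets above an assumed root.
TEMPLATES = {(0, 4, 7): "major", (0, 3, 7): "minor",
             (0, 3, 6): "diminished", (0, 4, 8): "augmented"}
NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]

def label_chord(midi_notes):
    """Try every pitch class as a root and match against the templates."""
    pcs = {n % 12 for n in midi_notes}
    for root in pcs:
        shape = tuple(sorted((p - root) % 12 for p in pcs))
        if shape in TEMPLATES:
            return f"{NAMES[root]} {TEMPLATES[shape]}"
    return "unlabelled"

print(label_chord([60, 64, 67]))  # C major
print(label_chord([57, 60, 64]))  # A minor
```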

Relevance: 100.00%

Abstract:

BACKGROUND AND PURPOSE: In clinical diagnosis, medical image segmentation plays a key role in the analysis of pathological regions. Despite advances in automatic and semi-automatic segmentation techniques, time-effective correction tools are commonly needed to improve segmentation results. These tools must provide faster corrections with fewer interactions, and a user-independent solution, to reduce the time frame between image acquisition and diagnosis. METHODS: We present a new interactive method for correcting image segmentations. Our method provides 3D shape corrections through 2D interactions, enabling intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated on lumbar muscle and knee joint segmentations from MR images. RESULTS: Experiments show that full segmentation corrections could be performed within an average correction time of 5.5±3.3 minutes and an average of 56.5±33.1 user interactions, while maintaining the quality of the final segmentation result, with an average Dice coefficient of 0.92±0.02 for both anatomies. In addition, across users with different levels of expertise, our method reduces correction time from 38±19.2 to 6.4±4.3 minutes and the number of interactions from 339±157.1 to 67.7±39.6.
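
The Dice coefficient reported above is the standard overlap measure between two segmentation masks, 2|A ∩ B| / (|A| + |B|); for reference:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two boolean segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two 5x5x5 cubes shifted by one voxel along the first axis.
a = np.zeros((10, 10, 10), bool); a[2:7, 2:7, 2:7] = True
b = np.zeros((10, 10, 10), bool); b[3:8, 2:7, 2:7] = True
print(round(dice(a, b), 3))  # 0.8
```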

Relevance: 100.00%

Abstract:

Photo annotation is a resource-intensive task, yet is increasingly essential as image archives and personal photo collections grow in size. There is an inherent con?ict in the process of describing and archiving personal experiences, because casual users are generally unwilling to expend large amounts of e?ort on creating the annotations which are required to organise their collections so that they can make best use of them. This paper describes the Photocopain system, a semi-automatic image annotation system which combines information about the context in which a photograph was captured with information from other readily available sources in order to generate outline annotations for that photograph that the user may further extend or amend.
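
A toy illustration of context-driven outline annotation (all metadata field names and rules below are hypothetical; Photocopain combines far richer sources):

```python
from datetime import datetime

def outline_annotations(metadata):
    """Derive candidate tags from capture context, for the user to
    extend or amend. Field names here are invented for illustration."""
    tags = []
    taken = datetime.fromisoformat(metadata["timestamp"])
    tags.append("daytime" if 7 <= taken.hour < 19 else "night")
    if metadata.get("flash_fired"):
        tags.append("low light")
    if metadata.get("focal_length_mm", 0) >= 85:
        tags.append("telephoto / close-up")
    return tags

print(outline_annotations({"timestamp": "2006-06-21T21:30:00",
                           "flash_fired": True}))
# ['night', 'low light']
```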

Relevance: 100.00%

Abstract:

Much of the geometrical data relating to engineering components and assemblies is stored in the form of orthographic views, either on paper or in computer files. For various engineering applications, however, it is necessary to describe objects in formal geometric modelling terms. The work reported in this thesis is concerned with the development and implementation of concepts and algorithms for the automatic interpretation of orthographic views as solid models. The various rules and conventions associated with engineering drawings are reviewed and several geometric modelling representations are briefly examined. A review of existing techniques for the automatic and semi-automatic interpretation of engineering drawings as solid models is given. A new theoretical approach is then presented and discussed. The author shows how the implementation of such an approach for uniform-thickness objects may be extended to more general objects by introducing the concept of 'approximation models'. Means by which the quality of the transformations is monitored are also described. Detailed descriptions of the interpretation algorithms and the software package developed for this project are given, and the process is illustrated by a number of practical examples. Finally, the thesis concludes that, using the techniques developed, a substantial percentage of drawings of engineering components could be converted into geometric models with a specific degree of accuracy, indicative of the suitability of the model for a particular application. Further work on important details is required before a commercially acceptable package can be produced.
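
For the uniform-thickness case mentioned above, the interpretation essentially amounts to extruding a profile taken from one view by a depth read off another view. A minimal sketch (real drawings require the far richer rules developed in the thesis):

```python
def extrude(profile, depth):
    """profile: list of (x, y) vertices of a closed polygon from the
    front view; depth: thickness read from the side view. Returns the
    prism's vertices and its quadrilateral side faces."""
    n = len(profile)
    vertices = [(x, y, 0.0) for x, y in profile] + \
               [(x, y, depth) for x, y in profile]
    # Each side face joins an edge of the bottom cap to the top cap.
    sides = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, sides  # the two end caps are the profile itself

verts, faces = extrude([(0, 0), (4, 0), (4, 2), (0, 2)], depth=1.5)
print(len(verts), "vertices,", len(faces), "side faces")  # 8 and 4
```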

Relevance: 100.00%

Abstract:

Tonal, textural and contextual properties are used in the manual photointerpretation of remotely sensed data. This study used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification using tonal features alone produced poor results: LANDSAT MSS gave classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data, and the addition of SIR-A data increased accuracy further. The higher accuracy of TM over MSS is due to the better discrimination of geological materials afforded by the middle-infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum-distance-to-means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as the minimum-distance-to-means classifier, is described. However, the results for this classifier were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. Because the classification results using only tonal features were unacceptably poor, textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. For TM data, texture measures were found to increase classification accuracy by up to 15%, with second-order texture, especially the SGLDM-based measures, producing the highest accuracy; for LANDSAT MSS data, texture measures provided no significant increase. Contextual post-processing increased classification accuracy and improved the visual appearance of the classified output by removing isolated misclassified pixels, which tend to clutter classified images. Simple contextual features such as mode filters were found to outperform more complex features such as gravitational filters or minimal-area replacement methods; generally, the larger the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites. The knowledge-based system was able to identify six out of ten lithologies correctly.
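
The second-order (SGLDM) texture measures that performed best can be illustrated with scikit-image's grey-level co-occurrence matrix functions (window size, offsets and the two statistics chosen here are assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# One moving window of a band quantised to 16 grey levels (synthetic).
rng = np.random.default_rng(2)
window = rng.integers(0, 16, size=(33, 33), dtype=np.uint8)

# Second-order (SGLDM/GLCM) statistics at distance 1, two directions.
glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                    levels=16, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast").mean()
homogeneity = graycoprops(glcm, "homogeneity").mean()

# Computed per window, such statistics become extra feature bands
# alongside the tonal values fed to the maximum likelihood classifier.
print(float(contrast), float(homogeneity))
```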

Relevance: 100.00%

Abstract:

Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been tied to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes in response to variations in the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts focus away from implementation details to the whole view of the system and its runtime changes, promoting high-level analysis.
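
A toy model-to-text transformation conveys the generative idea (the model schema and output format are invented for illustration):

```python
# A declarative architecture model with environment-driven variants.
model = {
    "component": "VideoStreamer",
    "variants": {"low_bandwidth": {"codec": "h264", "bitrate": 300},
                 "high_bandwidth": {"codec": "vp9", "bitrate": 4000}},
}

def generate_config(model, context):
    """Pick the variant matching the sensed environment and emit a
    runtime (re)configuration artefact as text."""
    variant = model["variants"][context]
    lines = [f"[{model['component']}]"]
    lines += [f"{key} = {value}" for key, value in variant.items()]
    return "\n".join(lines)

print(generate_config(model, "low_bandwidth"))
# [VideoStreamer]
# codec = h264
# bitrate = 300
```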