973 results for Semi-automatic road extraction
Abstract:
Introduction: Because of the high compliance of their chest wall, infants must actively maintain their end-expiratory lung volume (EELV). This is achieved by early interruption of expiration, by expiratory braking at the laryngeal level, and by persistent contraction of the inspiratory muscles. In mechanically ventilated infants, our team has shown that the diaphragm remains active until the end of expiration (tonic activity). It is unclear whether this tonic diaphragmatic activity compensates for the loss of laryngeal braking caused by endotracheal intubation. Objective: Our objective is to determine whether tonic diaphragmatic activity persists after extubation in infants, and whether it can be observed in older children. Methods: This is a prospective longitudinal observational study of patients aged 1 week to 18 years admitted to the pediatric intensive care unit (PICU), mechanically ventilated for >24 hours, and with parental consent. The electrical activity of the diaphragm (EAdi) was recorded with a dedicated nasogastric catheter at 4 time points during the PICU stay: during the acute phase, pre- and post-extubation, and at discharge. EAdi was analyzed semi-automatically. Tonic EAdi was defined as the EAdi during the last quartile of expiration. Results: 55 patients with a median age of 10 months (interquartile range: 1-48) were studied. In infants (<1 year, n=28), tonic EAdi as a percentage of inspiratory activity was 48% (30-56) during the acute phase, 38% (25-44) pre-extubation, 28% (17-42) post-extubation and 33% (22-43) at PICU discharge (p<0.05, ANOVA, with a significant difference between recordings 1 and 3-4). No significant change was observed between pre- and post-extubation. Tonic EAdi in older patients (>1 year, n=27) was negligible during periods of normal breathing (0.6 µV). However, significant tonic EAdi (>1 µV and >10%) was observed at at least one time point during the stay in 10 (37%) patients. Bronchiolitis was the only independent factor associated with tonic diaphragmatic activity. Conclusion: In infants, tonic EAdi persists after extubation, and it can be reactivated in certain pathological situations in older children. It appears to be an indicator of the patient's effort to maintain EELV. Further studies should be conducted to determine whether monitoring tonic EAdi could facilitate the detection of inappropriate ventilation settings.
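The abstract's working definition (tonic EAdi = EAdi over the last quartile of expiration, expressed as a percentage of inspiratory activity) can be made concrete with a minimal sketch. This is not the authors' analysis code; the signal layout, breath indices and function names are assumptions for illustration.

```python
import numpy as np

def tonic_eadi(eadi, insp_end, exp_end):
    """Mean EAdi (uV) over the last quartile of expiration for one breath.

    eadi     -- sampled EAdi signal for a single breath (1-D array)
    insp_end -- sample index where inspiration ends / expiration begins
    exp_end  -- sample index where expiration ends
    """
    exp_len = exp_end - insp_end
    return float(np.mean(eadi[insp_end + 3 * exp_len // 4 : exp_end]))

def tonic_percent(eadi, insp_start, insp_end, exp_end):
    """Tonic EAdi expressed as a percentage of peak inspiratory EAdi."""
    peak_insp = float(np.max(eadi[insp_start:insp_end]))
    return 100.0 * tonic_eadi(eadi, insp_end, exp_end) / peak_insp
```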
Abstract:
Left ventricular ejection fraction is an excellent marker of cardiac function. Several invasive and non-invasive techniques are used to calculate it: angiography, echocardiography, cardiac magnetic resonance imaging, cardiac CT, radionuclide ventriculography and nuclear medicine myocardial perfusion imaging. More than 40 years of scientific publications praise radionuclide ventriculography for its speed of execution, availability, low cost and intra-observer and inter-observer reproducibility. Left ventricular ejection fraction was calculated twice in 47 patients, by two technologists, on two separate acquisitions, using three methods: manual, automatic and semi-automatic. Overall, the automatic and semi-automatic methods showed better reproducibility, a smaller standard error of measurement and a smaller minimal detectable difference. The manual method, for its part, yielded a result that was systematically and significantly lower than the other two methods. It was the only technique that showed a significant difference in the intra-observer analysis. Its standard error of measurement is 40 to 50% larger than with the other techniques, as is its minimal detectable difference. Although all three methods are excellent, reproducible techniques for evaluating left ventricular ejection fraction, the reliability estimates of the automatic and semi-automatic methods are superior to those of the manual method.
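For context, radionuclide ventriculography derives the ejection fraction from background-corrected counts rather than volumes. The abstract does not spell out the formula, so the following is a hedged sketch of the conventional count-based calculation; all numbers are invented for illustration, and applying one background estimate to both frames is a simplification.

```python
def lvef_from_counts(ed_counts, es_counts, bg_counts):
    """LVEF from background-corrected end-diastolic/end-systolic counts.

    ed_counts -- counts in the LV region of interest at end-diastole
    es_counts -- counts in the LV region of interest at end-systole
    bg_counts -- background counts scaled to the ROI area (assumed here
                 to apply equally to both frames, a simplification)
    """
    ed_net = ed_counts - bg_counts
    es_net = es_counts - bg_counts
    return (ed_net - es_net) / ed_net

# Invented example: 12400 ED counts, 6100 ES counts, 2300 background counts
# -> (10100 - 3800) / 10100 ~= 0.62, i.e. an LVEF of about 62%.
print(lvef_from_counts(12400, 6100, 2300))
```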
Abstract:
Intensification of permafrost disturbances such as active layer detachments (ALDs) and retrogressive thaw slumps (RTS) has been observed across the circumpolar Arctic. These features are indicators of unstable conditions stemming from recent climate warming and permafrost degradation. Understanding the processes that interact to give rise to these features requires a multidisciplinary approach, i.e., one that considers the interactions between geomorphology, hydrology, vegetation and ground thermal conditions. The goal of this research is to detect and map permafrost disturbance, predict landscape controls over disturbance and determine approaches for monitoring disturbance, all in order to contribute to the mitigation of permafrost hazards. Permafrost disturbance inventories were created by applying semi-automatic change detection techniques to IKONOS satellite imagery collected at the Cape Bounty Arctic Watershed Observatory (CBAWO). These methods provide a means to estimate the spatial distribution of permafrost disturbances for a given area for use as an input in susceptibility modelling. Permafrost disturbance susceptibility models were then developed using generalized additive and generalized linear models (GAM, GLM) fitted to disturbed and undisturbed locations and relevant GIS-derived predictor variables (slope, potential solar radiation, elevation). These models successfully delineated areas across the landscape that were susceptible to disturbance, both locally and regionally when transferred to an independent validation location. Permafrost disturbance susceptibility models are a first-order assessment of landscape susceptibility and are promising for designing land management strategies for remote permafrost regions. Additionally, geomorphic patterns associated with higher susceptibility provide important knowledge about processes associated with the initiation of disturbances. Permafrost degradation was analyzed at the CBAWO using differential interferometric synthetic aperture radar (DInSAR). Active-layer dynamics were interpreted using inter-seasonal and intra-seasonal displacement measurements, highlighting the importance of hydroclimatic factors in active layer change. Collectively, these research approaches contribute to permafrost monitoring and the assessment of landscape-scale vulnerability in order to develop permafrost disturbance mitigation strategies.
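As a rough illustration of the susceptibility modelling step, a logistic GLM can be fitted to disturbed/undisturbed points with the named predictors. The sketch below uses scikit-learn and invented toy values; it stands in for the GAM/GLM workflow described, not the authors' actual models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training table (values invented): one row per mapped location, with
# GIS-derived predictors slope (deg), potential solar radiation, elevation (m).
X = np.array([
    [12.0, 5200.0, 85.0],   # disturbed site
    [ 3.0, 4100.0, 40.0],   # undisturbed site
    [ 9.5, 5600.0, 70.0],   # disturbed site
    [ 1.0, 3900.0, 25.0],   # undisturbed site
])
y = np.array([1, 0, 1, 0])  # 1 = disturbed, 0 = undisturbed

# A logistic GLM: susceptibility = P(disturbance | predictors).
glm = LogisticRegression(max_iter=1000).fit(X, y)

# Susceptibility values for new grid cells come from predict_proba.
grid_cells = np.array([[8.0, 5000.0, 60.0]])
print(glm.predict_proba(grid_cells)[:, 1])
```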
Abstract:
Congenital vertebral malformations are common in brachycephalic “screw-tailed” dog breeds such as French bulldogs, English bulldogs, Boston terriers and Pugs. These vertebral malformations disrupt normal vertebral column anatomy and biomechanics, potentially leading to deformity of the vertebral column and subsequent neurological dysfunction. The initial aim of this work was to determine whether the congenital vertebral malformations identified in those breeds could be translated into a radiographic classification scheme used in humans, giving an improved classification with clear and well-defined terminology, in the expectation that this would facilitate future study and clinical management in the veterinary field. Two observers who were blinded to the neurologic status of the dogs therefore classified each vertebral malformation based on the human classification scheme of McMaster, and were able to translate the malformations successfully into a new classification scheme for veterinary use. The subsequent aim was to assess the nature and impact of the vertebral column deformity engendered by these congenital vertebral malformations in the target breeds. As no gold standard exists in veterinary medicine for calculating the degree of deformity, it was elected to adapt the human equivalent, termed the Cobb angle, as a potential standard reference tool for use in veterinary practice. To validate the Cobb angle measurement method, a computerised semi-automatic technique was used and assessed by multiple independent observers. They observed not only that kyphosis was the most common vertebral column deformity, but also that patients with such deformity were more likely to suffer from neurological deficits, especially if their Cobb angle was above 35 degrees.
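For readers unfamiliar with the measurement, the Cobb angle is simply the angle between the endplate lines of the two end vertebrae of a curve. Below is a minimal sketch with hypothetical point inputs; it illustrates the geometry only, not the computerised semi-automatic tool used in the study.

```python
import numpy as np

def cobb_angle(p1, p2, q1, q2):
    """Cobb angle (degrees) between two vertebral endplate lines.

    p1, p2 -- two (x, y) points along the cranial end vertebra's endplate
    q1, q2 -- two (x, y) points along the caudal end vertebra's endplate
    """
    u = np.subtract(p2, p1)
    v = np.subtract(q2, q1)
    cos_a = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0))))

# Example with made-up endplate points; prints roughly 38.5 degrees.
print(cobb_angle((0, 0), (10, 3), (0, 10), (10, 6)))
```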
Abstract:
Email is a key communication format in a digital world, for both professional and personal use. Exchanged messages (both human-written and automatically generated) have reached such a volume that processing them on a daily basis, in an efficient manner, can be a great challenge for human users. In fact, compared to the actual time required to write a reply, a significant amount of users' time is spent searching for and gathering context information (normally historic information) in order to prepare a reply message or to take a decision/action. It is therefore of utmost importance for this process to use both automatic and semi-automatic mechanisms that put email messages into context. Since context information is given not only by historical email messages but is also inferred from the relationships between the contacts and/or organizations present in the messages, navigation (and even exploration) mechanisms between contacts and entities associated with email messages are of fundamental importance. This is the main purpose of the SMART Mail prototype, whose architecture, data visualization and exploration components, and AI algorithms are presented throughout this paper.
Abstract:
This paper presents an automatic methodology for road network extraction from medium- and high-resolution aerial images. It is based on two steps. In the first step, the road seeds (i.e., road segments) are extracted using a set of four road objects and another set of connection rules among road objects. Each road object is a local representation of an approximately straight road fragment, and its construction is based on a combination of polygons describing all relevant image edges, according to rules embodying road knowledge. Each road seed is composed of a sequence of connected road objects, and each such sequence can be geometrically structured as a chain of contiguous quadrilaterals. In the second step, two strategies for road completion are applied in order to generate the complete road network. The first strategy is based on two basic perceptual grouping rules, i.e., the proximity and collinearity rules, which allow the sequential reconstruction of gaps between every pair of disconnected road segments. This strategy does not allow the reconstruction of road crossings, but it allows the extraction of road centerlines from the contiguous quadrilaterals representing connected road segments. The second strategy for road completion aims at reconstructing road crossings. First, the road centerlines are used to find reference points for road crossings, i.e., their approximate positions. These points are then used to extract polygons representing the contours of road crossings. This paper presents the proposed methodology and experimental results. © Pleiades Publishing, Inc. 2006.
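The proximity and collinearity rules lend themselves to a compact test for deciding whether two disconnected segments should be linked. The following is an illustrative sketch; the thresholds, endpoint convention and function name are assumptions, not values from the paper.

```python
import math

def should_link(seg_a, seg_b, max_gap=30.0, max_angle_deg=15.0):
    """Proximity + collinearity test for linking two road segments.

    seg_a, seg_b -- ((x1, y1), (x2, y2)) endpoints, with seg_a's second
                    endpoint facing seg_b's first endpoint across the gap
    """
    (ax1, ay1), (ax2, ay2) = seg_a
    (bx1, by1), (bx2, by2) = seg_b

    # Proximity rule: the gap between the facing endpoints must be small.
    gap = math.hypot(bx1 - ax2, by1 - ay2)
    if gap > max_gap:
        return False

    # Collinearity rule: the segment directions must nearly agree.
    ang_a = math.atan2(ay2 - ay1, ax2 - ax1)
    ang_b = math.atan2(by2 - by1, bx2 - bx1)
    diff = abs(ang_a - ang_b) % (2 * math.pi)
    diff = min(diff, 2 * math.pi - diff)
    return math.degrees(diff) <= max_angle_deg
```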
Abstract:
In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a readout of disease progression and as a basis for developing therapeutic strategies. This work presents an image processing tool to automatically segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 data set images, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals’ transparency, most of the slice pixels appeared dark, hampering direct reconstruction of the body volume. Therefore, for each data set, all slices were stacked into one single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey’s biweight as its edge-stopping function. The median of the resulting image histogram was used to dynamically determine a thresholding level, which allows a smoothed exterior contour of the worm to be determined, and the medial axis of the worm body to be obtained by thinning its skeleton. Based on this exterior contour diameter and the medial animal axis, random 3D points were then calculated to produce a volume mesh approximation. The protein aggregations were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing unbiased, reliable and high-throughput quantification of protein aggregates. This may lead to a significant improvement in treatment planning and preventive interventions for neurodegenerative diseases.
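Anisotropic (Perona-Malik style) diffusion with Tukey's biweight edge-stopping function can be sketched in a few lines. This is an illustrative reimplementation with assumed parameters, not the authors' tool; boundary handling (wrap-around via np.roll) is simplified for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, sigma=30.0, lam=0.2):
    """Perona-Malik diffusion using Tukey's biweight edge-stopping function."""
    def g(d):
        # Tukey's biweight: diffusion is suppressed entirely beyond +-sigma,
        # so strong edges are preserved while flat regions are smoothed.
        return np.where(np.abs(d) <= sigma, (1.0 - (d / sigma) ** 2) ** 2, 0.0)

    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```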
Abstract:
Electricity markets worldwide have undergone profound transformations. The privatization of previously state-owned systems, the deregulation of privately owned systems that were previously regulated, and the strong interconnection of national systems are some examples of such transformations [1, 2]. In general, competitive environments, as is the case of electricity markets, require good decision-support tools to assist players in their decisions. Relevant research is being undertaken in this field, namely concerning player modeling and simulation, strategic bidding and decision support.
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfilment of the requirements for the degree of Master in Computer Science
Abstract:
The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval or document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract, from texts, what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, this allows us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document.
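To make the keyword-extraction claim concrete, here is a minimal Tf-Idf ranking over pre-extracted concepts. This is an illustrative sketch, not the ConceptExtractor itself, and all names are assumptions.

```python
import math
from collections import Counter

def tfidf_keywords(doc_concepts, corpus_concepts, top_k=5):
    """Rank a document's extracted concepts by Tf-Idf.

    doc_concepts    -- list of concepts (single- or multi-word) in one document
    corpus_concepts -- list of concept lists, one per corpus document
    """
    n_docs = len(corpus_concepts)
    tf = Counter(doc_concepts)
    scores = {}
    for concept, count in tf.items():
        df = sum(1 for d in corpus_concepts if concept in d)
        idf = math.log(n_docs / (1 + df))
        scores[concept] = (count / len(doc_concepts)) * idf
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```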
Abstract:
Automatic creation of polarity lexicons is a crucial issue to be solved in order to reduce the time and effort spent on the first steps of Sentiment Analysis. In this paper we present a methodology based on linguistic cues that allows us to automatically discover, extract and label subjective adjectives that should be collected in a domain-based polarity lexicon. For this purpose, we designed a bootstrapping algorithm that, from a small set of seed polar adjectives, is capable of iteratively identifying, extracting and annotating positive and negative adjectives. Additionally, the method automatically creates lists of highly subjective elements that change their prior polarity even within the same domain. The proposed algorithm reached a precision of 97.5% for positive adjectives and 71.4% for negative ones in the semantic orientation identification task.
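A hedged sketch of such a bootstrapping loop is shown below. The cue-detection function is hypothetical (the paper's actual linguistic cues are not reproduced here); the loop merely illustrates the iterative seed-expansion idea.

```python
def bootstrap_lexicon(corpus, seed_pos, seed_neg, find_coordinated, n_iter=5):
    """Iteratively grow positive/negative adjective lists from seed words.

    corpus            -- iterable of sentences
    seed_pos/seed_neg -- small sets of seed polar adjectives
    find_coordinated  -- hypothetical cue-based function returning
                         (adjective, same_polarity) pairs for a known polar
                         adjective in a sentence, e.g. "cheap and cheerful"
                         (same polarity) vs. "cheap but noisy" (opposite)
    """
    pos, neg = set(seed_pos), set(seed_neg)
    for _ in range(n_iter):
        new_pos, new_neg = set(), set()
        for sentence in corpus:
            known_words = [(w, +1) for w in pos] + [(w, -1) for w in neg]
            for known, polarity in known_words:
                for adj, same in find_coordinated(sentence, known):
                    target = polarity if same else -polarity
                    (new_pos if target > 0 else new_neg).add(adj)
        pos |= new_pos - neg   # never overwrite an already-assigned polarity
        neg |= new_neg - pos
    return pos, neg
```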
Abstract:
In this study, a procedure is developed for cloud point extraction of Pd(II) and Rh(III) ions in aqueous solution using Span 80 (a non-ionic surfactant) prior to their determination by flame atomic absorption spectroscopy. The method is based on the extraction of Pd(II) and Rh(III) ions at a pH of 10 using Span 80 with no chelating agent. We investigated the effect of various parameters on the recovery of the analyte ions, including pH, equilibration temperature and time, concentration of Span 80, and ionic strength. Under the best experimental conditions, the limits of detection based on 3Sb for Pd(II) and Rh(III) ions were 1.3 and 1.2 ng mL⁻¹, respectively. Seven replicate determinations of a mixture of 0.5 µg mL⁻¹ palladium and rhodium ions gave mean absorbances of 0.058 and 0.053 with relative standard deviations of 1.8 and 1.6%, respectively. The developed method was successfully applied to the extraction and determination of palladium and rhodium ions in road dust and standard samples, and satisfactory results were obtained.
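For reference, the "3Sb" criterion mentioned here is the conventional detection-limit definition; the abstract does not state the formula explicitly, but with $s_b$ the standard deviation of blank measurements and $m$ the slope of the calibration curve it reads:

```latex
\mathrm{LOD} = \frac{3\, s_b}{m}
```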
Abstract:
Cerebral gliomas are the most prevalent primary brain tumors and are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis. Low-grade gliomas are slow-growing, the least malignant, and have a better response to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for the automatic extraction of low- and high-grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess tumor growth rates. Two new methods were developed for extracting tumor regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manual ground-truth images and showed promising results. The developed methods were compared with the widely used Fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was also checked for different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumour texture grading and detection. Based on thresholds of discriminant first-order and gray-level co-occurrence matrix based second-order statistical features, three feature sets were formulated, and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician’s ability to detect and analyse pathologies, leading to a more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can provide an idea of the tumor growth rate; this can be used for assessing response to therapy and patient prognosis.
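To illustrate the kind of first-order and gray-level co-occurrence matrix (GLCM) second-order features such a decision system relies on, here is a small sketch using scikit-image. The patch, distance/angle choices and property list are assumptions for illustration, not the thesis's exact feature sets.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-bit "tumor region" patch; in practice this would be a T2-weighted ROI.
patch = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)

# First-order statistics of the intensity distribution.
first_order = {
    "mean": float(patch.mean()),
    "variance": float(patch.var()),
    "skewness": float(((patch - patch.mean()) ** 3).mean() / patch.std() ** 3),
}

# Second-order (GLCM) statistics at distance 1, horizontal direction.
glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
second_order = {prop: float(graycoprops(glcm, prop)[0, 0])
                for prop in ("contrast", "energy", "homogeneity", "correlation")}

# A threshold-based decision on such features could then flag aggressive
# vs. non-aggressive texture patterns.
print(first_order, second_order)
```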
Abstract:
Efficient optic disc segmentation is an important task in automated retinal screening. Optic disc detection is likewise fundamental as a medical reference and is important for retinal image analysis applications. The most difficult problem of optic disc extraction is locating the region of interest; moreover, it is a time-consuming task. This paper tries to overcome this barrier by presenting an automated method for optic disc boundary extraction using Fuzzy C-Means clustering combined with thresholding. The discs determined by the new method agree relatively well with those determined by the experts. The method has been validated on a data set of 110 colour fundus images from the DRION database and has obtained promising results. The performance of the system is evaluated using the difference in horizontal and vertical diameters between the obtained disc boundary and the ground truth obtained from two expert ophthalmologists. For the 25 test images selected from the 110 colour fundus images, the Pearson correlations of the ground-truth diameters with the diameters detected by the new method are 0.946 and 0.958 for one expert, and 0.94 and 0.974 for the other, respectively. The scatter plot shows that the ground-truth and detected diameters have a high positive correlation. This computerized analysis of the optic disc is very useful for the diagnosis of retinal diseases.
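A minimal fuzzy c-means pass over 1-D intensities, followed by a membership threshold, illustrates the general idea of combining FCM with thresholding. This is not the paper's implementation; the channel choice, cluster count, threshold and function names are assumptions.

```python
import numpy as np

def fcm_1d(values, n_clusters=2, m=2.0, n_iter=50):
    """Minimal fuzzy c-means on 1-D intensity values (e.g., green channel)."""
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, values.size))
    u /= u.sum(axis=0)                      # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ values) / um.sum(axis=1)   # weighted cluster means
        dist = np.abs(values[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))          # standard FCM update
        u /= u.sum(axis=0)
    return centers, u

# Usage sketch: cluster fundus-image intensities, then threshold on the
# membership of the brightest cluster to get a candidate optic-disc mask.
# pixels = image_green_channel.ravel().astype(float)
# centers, u = fcm_1d(pixels)
# disc_mask = (u[np.argmax(centers)] > 0.8).reshape(image_green_channel.shape)
```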