973 results for Semi-automatic road extraction
Abstract:
Academic work in which the author develops a preliminary study and a design for a crossing over the Lima River, in the city of Viana do Castelo, consisting of a road-rail cable-stayed bridge. The academic project also aims to develop and understand the basic concepts, the design methodologies and the behaviour of structures of this kind. The main motivation for the choice of topic is the need for an alternative to the Eiffel bridge in Viana do Castelo; furthermore, since no road-rail cable-stayed bridge exists in Portugal to date, it is of interest to study and design such a structure. Among the several structural systems studied, a bridge was adopted that will accommodate 4 road lanes and 2 railway tracks, with a total length of 660 metres, comprising two side spans of 165 metres each and a central span of 330 metres. The bridge will be of the semi-fan type with two planes of stays, anchored to two inverted-Y concrete towers approximately 110 metres high. The deck will be a double composite steel-concrete deck, consisting of two Warren-type triangulated trusses and of cross girders spaced 15 metres apart, with tubular steel sections of variable thickness. At the upper level the cross girders support the concrete slab carrying the roadway, and at the lower level they support another concrete slab for the railway. The work begins with the general conceptual framing of the bridge and its surroundings, followed by a presentation of the historical evolution of cable-stayed bridges and of some existing road-rail cable-stayed bridges. A preliminary analysis is then carried out, studying the restrictions, the conditioning factors, the implantation site and the geometric configuration to adopt in the structural design. All the materials and equipment to be used are described, as well as the mechanical characteristics required for the structural calculations. The quantification of actions and of design combinations was carried out in accordance with the national and European standards in force, namely the Eurocodes of the various specialities and the Regulamento de Segurança e Ações para Estruturas de Edifícios e Pontes (Portuguese Regulation of Safety and Actions for Building and Bridge Structures). A preliminary design and an optimisation of several possible structural systems for all structural elements were performed, taking into account study variables such as economy and the structural resistance of the sections, in order to arrive at the final solution. The structure was discretised and analysed with a three-dimensional static model in structural analysis software. The results were analysed in the longitudinal direction to verify the Ultimate Limit States and Serviceability Limit States of the structural elements of the bridge. A budget estimate of the bridge over the Lima River in Viana do Castelo was also produced.
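The abstract above mentions quantifying actions and design combinations according to the Eurocodes and the Portuguese regulation. For orientation only, the fundamental ULS combination of actions in EN 1990 (Eq. 6.10) takes the general form below; the partial and combination factors actually adopted in the thesis are not reproduced here.

E_d = \sum_{j \ge 1} \gamma_{G,j} G_{k,j} + \gamma_P P + \gamma_{Q,1} Q_{k,1} + \sum_{i > 1} \gamma_{Q,i} \psi_{0,i} Q_{k,i}

where G_{k,j} are the characteristic permanent actions, P the prestress, Q_{k,1} the leading variable action (e.g. rail traffic) and Q_{k,i} the accompanying variable actions reduced by the combination factors \psi_{0,i}.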
Abstract:
Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
Currently the world is swiftly adapting to visual communication. Online services such as YouTube and Vine show that video is no longer the domain of broadcast television alone. Video is used for different purposes such as entertainment, information, education and communication. The rapid growth of today's video archives, with sparsely available editorial data, creates a major retrieval problem. Humans perceive a video as a complex interplay of cognitive concepts, so there is a need to build a bridge between numeric values and semantic concepts, establishing a connection that will facilitate video retrieval by humans. The critical aspect of this bridge is video annotation. The process can be done manually or automatically. Manual annotation is tedious, subjective and expensive; therefore automatic annotation is being actively studied. In this thesis we focus on the automatic annotation of multimedia content, namely the use of analysis techniques for information retrieval that allow metadata to be extracted automatically from video in a videomail system, together with the identification of text, people, actions, spaces and objects, including animals and plants. It will thus be possible to align multimedia content with the text presented in the email message and to create applications for semantic video database indexing and retrieval.
Abstract:
Due to advances in information technology (e.g., digital video cameras, ubiquitous sensors), the automatic detection of human behaviors from video has become a very active research topic. In this paper, we perform a systematic and recent literature review on this topic, from 2000 to 2014, covering a selection of 193 papers retrieved from six major scientific publishers. The selected papers were classified into three main subjects: detection techniques, datasets and applications. The detection techniques were divided into four categories (initialization, tracking, pose estimation and recognition). The list of datasets includes eight examples (e.g., Hollywood action). Finally, several application areas were identified, including human detection, abnormal activity detection, action recognition, player modeling and pedestrian detection. Our analysis provides a road map to guide future research for designing automatic visual human behavior detection systems.
Abstract:
This research aims to advance blink detection in the context of work activity. Rather than patients having to attend a clinic, blinking videos can be acquired in a work environment and then analyzed automatically. This paper therefore presents a methodology for the automatic detection of eye blinks using consumer videos acquired with low-cost web cameras. The methodology includes the detection of the face and eyes of the recorded person, and then analyzes low-level features of the eye region to create a quantitative feature vector. Finally, this vector is classified into one of the two categories considered (open or closed eyes) using machine learning algorithms. The effectiveness of the proposed methodology was demonstrated, since it provides unbiased results with classification errors under 5%.
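As a rough illustration of the kind of pipeline the abstract describes (face and eye detection, low-level features of the eye region, binary open/closed classification), a minimal sketch using OpenCV Haar cascades and a scikit-learn SVM follows. The paper does not specify its feature set or classifier, so the descriptors and thresholds below are assumptions.

# Minimal sketch of a blink-detection pipeline: face/eye detection,
# low-level features of the eye region, then binary classification.
import cv2
import numpy as np
from sklearn.svm import SVC

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_features(frame):
    """Return one low-level feature vector per detected eye region."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    feats = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
            roi = cv2.resize(face[ey:ey + eh, ex:ex + ew], (24, 24))
            # illustrative descriptors: mean/std intensity, fraction of dark
            # pixels, vertical gradient energy (eyelid edges weaken when closed)
            grad = cv2.Sobel(roi, cv2.CV_32F, 0, 1)
            feats.append([roi.mean(), roi.std(),
                          (roi < 60).mean(), np.abs(grad).mean()])
    return np.array(feats)

# With labelled training frames, a classifier such as
# clf = SVC(kernel="rbf").fit(X_train, y_train)   # 0 = open, 1 = closed
# would then assign each feature vector to the open or closed class.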
Abstract:
In the search to increase the supply of liquid, clean, renewable and sustainable energy in the world energy matrix, the use of lignocellulosic materials (LCMs) for bioethanol production arises as a valuable alternative. The objective of this work was to analyze and compare the performance of Saccharomyces cerevisiae, Pichia stipitis and Zymomonas mobilis in the production of bioethanol from mature coconut fibre (CFM) using different strategies: simultaneous saccharification and fermentation (SSF) and semi-simultaneous saccharification and fermentation (SSSF). The CFM was pretreated by hydrothermal pretreatment catalyzed with sodium hydroxide (HPCSH). The pretreated CFM was characterized by X-ray diffractometry and SEM, and the lignin recovered in the liquid phase by FTIR and TGA. After the HPCSH pretreatment (2.5% (v/v) sodium hydroxide at 180 °C for 30 min), the cellulose content was 56.44%, while the hemicellulose and lignin contents were reduced by 69.04% and 89.13%, respectively. Following pretreatment, the obtained cellulosic fraction was submitted to SSF and SSSF. Pichia stipitis allowed for the highest ethanol yield (90.18%) in SSSF; yields of 91.17% and 91.03% were obtained with Saccharomyces cerevisiae and Zymomonas mobilis, respectively. It may be concluded that the selection of the most efficient microorganism for obtaining high bioethanol production yields from cellulose pretreated by HPCSH depends on the operational strategy used, and that this pretreatment is an interesting alternative for adding value to mature coconut fibre compounds (lignin, phenolics), in line with the biorefinery concept.
Abstract:
We present a new method for the lysis of single cells in continuous flow, where cells are sequentially trapped, lysed and released in an automatic process. Using optimized frequencies, dielectrophoretic trapping allows cells to be exposed in a reproducible way to high electrical fields for long durations, giving good control over the lysis parameters. In situ evaluation of cytosol extraction from single cells has been studied for Chinese hamster ovary (CHO) cells through the out-diffusion of fluorescent molecules at different voltage amplitudes. A diffusion model is proposed to correlate this out-diffusion with the total area of the created pores, which depends on the potential drop across the cell membrane, and enables evaluation of the total pore area in the membrane. The dielectrophoretic trapping is no longer effective after lysis because of the reduced conductivity inside the cells, leading to cell release. The trapping time is linked to the time required for cytosol extraction and can thus provide additional validation of effective cytosol extraction for non-fluorescent cells. Furthermore, the application of a single voltage for both trapping and lysis provides a fully automatic process including cell trapping, lysis and release, allowing the device to be operated in continuous flow without human intervention.
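The abstract relates the created pore area to the potential drop across the cell membrane. For context, the induced transmembrane potential of a spherical cell in a uniform field is commonly estimated with the Schwan relation given below; whether the paper uses this exact expression is an assumption on our part.

\Delta V_m(\theta) = \tfrac{3}{2}\, E\, r \cos\theta \,\bigl(1 - e^{-t/\tau}\bigr)

where E is the applied field strength, r the cell radius, \theta the polar angle measured from the field direction and \tau the membrane charging time constant; electroporation occurs where \Delta V_m exceeds a critical value, typically near the cell poles.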
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and to artifacts such as shadows, attenuation and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging setting, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation, that is, it is applicable to any kind of target and imaging setting. Our methodology uses a graph of image patches to represent the ultrasound image and a user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound images (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (94% Dice coefficient on average), and the proposed algorithm compares favorably with the literature.
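The segmentation above is formulated as a continuous minimum cut on a graph of image patches with user labels acting as soft priors. Purely as an illustration of the idea, the discrete analogue, a seeded min-cut on a small patch graph solved with networkx max-flow, could look like the sketch below; the continuous formulation and the efficient solver of the paper are not reproduced.

# Discrete analogue of a seeded min-cut segmentation on a graph of image patches.
# (The paper solves a *continuous* min-cut; this sketch only illustrates the idea.)
import numpy as np
import networkx as nx

def patch_graph_segmentation(patches, fg_seeds, bg_seeds, sigma=0.1, lam=1.0):
    """patches: (n, d) feature vectors on a chain of patches; seeds: index sets."""
    n = len(patches)
    G = nx.Graph()
    # boundary terms: cutting between similar neighbouring patches is expensive
    for i in range(n - 1):
        w = np.exp(-np.linalg.norm(patches[i] - patches[i + 1]) ** 2 / sigma)
        G.add_edge(i, i + 1, capacity=lam * w)
    # terminal (region) terms derived from the user-provided labels
    for i in fg_seeds:
        G.add_edge("S", i, capacity=1e9)
    for i in bg_seeds:
        G.add_edge(i, "T", capacity=1e9)
    _, (src_side, _) = nx.minimum_cut(G, "S", "T")
    return [i for i in range(n) if i in src_side]   # patches labelled as target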
Abstract:
Purpose: Morphometric measurements of the ascending aorta have recently been performed with ECG-gated MDCT to help the development of future endovascular therapies (TCT) [1]. However, the variability of these measurements remains unknown. It would be interesting to know the impact of CAD (computer-aided diagnosis), with automated segmentation of the vessel and automatic measurement of diameters, on the management of ascending aorta aneurysms. Methods and Materials: Thirty patients referred for ECG-gated CT thoracic angiography (64-row CT scanner) were evaluated. Measurements of the maximum and minimum ascending aorta diameters were obtained automatically with a commercially available CAD and semi-manually by two observers separately. The CAD algorithms segment the contrast-enhanced lumen of the ascending aorta into planes perpendicular to the centreline. The CAD then determines the largest and the smallest diameters. Both observers repeated the automatic and the semi-manual measurements in a different session at least one month after the first measurements. The Bland and Altman method was used to study the inter/intraobserver variability. A Wilcoxon signed-rank test was also used to analyse differences between observers. Results: Interobserver variability for semi-manual measurements between the first and second observers was 1.2 and 1.0 mm for the maximal and minimal diameter, respectively. Intraobserver variability of each observer ranged from 0.8 to 1.2 mm, the lowest variability being produced by the more experienced observer. CAD variability could be as low as 0.3 mm, showing that it can perform better than human observers. However, when used in non-optimal conditions (streak artefacts from contrast in the superior vena cava or weak lumen enhancement), CAD variability can be as high as 0.9 mm, reaching the variability of semi-manual measurements. Furthermore, there were significant differences between the observers for maximal and minimal diameter measurements (p<0.001). There was also a significant difference between the first observer and CAD for maximal diameter measurements, with the former underestimating the diameter compared to the latter (p<0.001). As for minimal diameters, they were higher when measured by the second observer than when measured by CAD (p<0.001). Neither the difference in mean minimal diameter between the first observer and CAD nor the difference in mean maximal diameter between the second observer and CAD was significant (p=0.20 and 0.06, respectively). Conclusion: CAD algorithms can lessen the variability of diameter measurements in the follow-up of ascending aorta aneurysms. Nevertheless, in non-optimal conditions it may be necessary to correct the measurements manually. Improvements to the algorithms will help avoid such situations.
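The variability analysis described above rests on the Bland and Altman method and a Wilcoxon signed-rank test. A minimal Python sketch of that computation, with purely illustrative diameter values rather than the study's data, might look as follows.

# Sketch of the agreement analysis described above: Bland-Altman bias and
# limits of agreement between two observers, plus a Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

def bland_altman(a, b):
    """a, b: paired diameter measurements (mm) from two observers/methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)        # 95% limits of agreement
    return bias, (bias - half_width, bias + half_width)

obs1 = [38.1, 40.3, 36.7, 42.0, 39.5]           # illustrative values only
obs2 = [38.4, 40.0, 37.1, 41.6, 39.9]
bias, limits = bland_altman(obs1, obs2)
stat, p = wilcoxon(obs1, obs2)                   # paired non-parametric test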
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached by using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
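As a concrete, if simplified, illustration of adapting SVMs to geo-spatial mapping of the kind mentioned above (coordinates plus terrain-derived covariates as inputs, a continuous environmental variable as output), a scikit-learn sketch follows; the feature layout and data are placeholders, not the case-study datasets.

# Minimal sketch: support vector regression on geo-feature inputs
# (coordinates plus DEM-derived covariates), as used e.g. for soil mapping.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# placeholder design matrix: [x, y, elevation, slope, curvature] per sample point
X = rng.random((200, 5))
y = rng.random(200)                      # placeholder target (e.g. a soil property)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
grid_predictions = model.predict(rng.random((50, 5)))   # map onto unsampled locations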
Abstract:
In this paper we present a description of the role of definitional verbal patterns for the extraction of semantic relations. Several studies show that semantic relations can be extracted from analytic definitions contained in machine-readable dictionaries (MRDs). In addition, definitions found in specialised texts are a good starting point to search for different types of definitions where other semantic relations occur. The extraction of definitional knowledge from specialised corpora represents another interesting approach for the extraction of semantic relations. Here, we present a descriptive analysis of definitional verbal patterns in Spanish and the first steps towards the development of a system for the automatic extraction of definitional knowledge.
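To make the idea of definitional verbal patterns concrete, the sketch below extracts candidate (term, definition) pairs with a few illustrative Spanish patterns; this pattern inventory is hypothetical and is not the one analysed by the authors.

# Illustrative extraction of candidate (term, definition) pairs using a few
# Spanish definitional verbal patterns; the pattern list is a placeholder.
import re

PATTERNS = [
    r"(?P<term>[\w ]+?) se define como (?P<definition>[^.]+)",
    r"(?P<term>[\w ]+?) es un[ao]? (?P<definition>[^.]+)",
    r"se entiende por (?P<term>[\w ]+?) (?P<definition>[^.]+)",
]

def extract_definitions(text):
    pairs = []
    for pat in PATTERNS:
        for m in re.finditer(pat, text, flags=re.IGNORECASE):
            pairs.append((m.group("term").strip(), m.group("definition").strip()))
    return pairs

sample = "La ontología se define como una especificación explícita de una conceptualización."
print(extract_definitions(sample))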
Abstract:
Several features that can be extracted from digital images of the sky and that can be useful for cloud-type classification of such images are presented. Some features are statistical measurements of image texture, some are based on the Fourier transform of the image, and others are computed from the image after cloudy pixels have been distinguished from clear-sky pixels. The use of the most suitable features in an automatic classification algorithm is also shown and discussed. Both the features and the classifier were developed using images taken by two different camera devices, namely a total sky imager (TSI) and a whole sky camera (WSC), which are placed in two different areas of the world (Toowoomba, Australia, and Girona, Spain, respectively). The performance of the classifier is assessed by comparing its image classification with an a priori classification carried out by visual inspection of more than 200 images from each camera. The index of agreement is 76% when five different sky conditions are considered: clear, low cumuliform clouds, stratiform clouds (overcast), cirriform clouds, and mottled clouds (altocumulus, cirrocumulus). Directions for future research are also discussed, regarding both the use of other features and the use of other classification techniques.
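A minimal sketch of the three feature families mentioned above (texture statistics, Fourier-based measures, and cloud-cover features from a cloudy/clear pixel split) is given below; the specific statistics and thresholds are placeholders rather than the features actually used with the TSI and WSC images.

# Sketch of three feature families for a sky image: texture statistics,
# Fourier-based measures, and cloud cover from a simple cloudy/clear split.
import numpy as np

def sky_image_features(rgb):
    """rgb: float array (H, W, 3) in [0, 1] from a sky camera."""
    gray = rgb.mean(axis=2)
    # 1) texture statistics (mean, spread, third moment)
    stats = [gray.mean(), gray.std(), ((gray - gray.mean()) ** 3).mean()]
    # 2) Fourier feature: share of energy at high spatial frequencies
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    high_freq_ratio = spec[r > min(h, w) / 4].sum() / spec.sum()
    # 3) cloud cover from a red/blue ratio threshold (placeholder value 0.6)
    rb_ratio = rgb[..., 0] / np.clip(rgb[..., 2], 1e-6, None)
    cloud_fraction = (rb_ratio > 0.6).mean()
    return np.array(stats + [high_freq_ratio, cloud_fraction])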
Abstract:
Objective: Small nodal tumor infiltrates are identified by applying multilevel sectioning and immunohistochemistry (IHC) in addition to H&E (hematoxylin and eosin) staining of resected lymph nodes. However, the use of multilevel sectioning and IHC is very time-consuming and costly. The current standard analysis of lymph nodes in colon cancer patients is based on one H&E-stained slide per lymph node. A new molecular diagnostic system called "One Step Nucleic Acid Amplification" (OSNA) was designed for a more accurate detection of lymph node metastases. The objective of the present investigation was to compare the performance of OSNA to current standard histology (H&E). We hypothesize that OSNA provides better staging than the routine use of one H&E slide per lymph node. Methods: From 22 colon cancer patients, 307 frozen lymph nodes were used to compare OSNA with H&E. The lymph nodes were cut into halves. One half of each lymph node was analyzed by OSNA. The semi-automated OSNA assay amplifies reverse-transcribed cytokeratin 19 (CK19) mRNA directly from the homogenate. The remaining tissue was dedicated to histology, with 5 levels of H&E and IHC staining (CK19). Results: On routine evaluation of one H&E slide, 7 patients were nodal positive (macro-metastases). All of these patients were recognized as positive by OSNA analysis (sensitivity 100%). Two of the remaining 15 patients had lymph node micro-metastases and 9 had isolated tumor cells. For the patients with micro-metastases, both H&E and OSNA were positive in 1 of the 2 patients. For patients with isolated tumor cells, H&E was positive in 1/9 cases whereas OSNA was positive in 3/9 patients (IHC as a reference). There was only one case to be described as IHC negative/OSNA positive. On the basis of single lymph nodes, the sensitivity of OSNA and the 5 levels of H&E and IHC was 94.5%. Conclusion: OSNA is a novel molecular tool for the detection of lymph node metastases in colon cancer patients which provides better staging compared to the current standard evaluation of one H&E-stained slide. Since the use of OSNA allows the analysis of the whole lymph node, sampling bias and tumor deposits left undetected in uninvestigated material will be overcome. OSNA improves staging in colon cancer patients and may replace the current standard of H&E staining in the future.
Abstract:
The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. Typically, there is a paucity of research offering guidance as to the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key points or manual annotations. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where it is known that the mark was left by the donor who provided the model and, conversely, in cases where it is established that the mark originates from a different source). To assess the system performance, a first dataset containing 1229 donors, compiled during the FearID research project, was used. Based on these data, for mark-to-print comparisons the system performed with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of a hit list. For print-to-print transactions, the results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces.
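The score and likelihood ratio described above can be illustrated with a short sketch: a normalized 2D correlation coefficient between aligned images, and an LR estimated from score distributions under the same-source and different-source propositions. The kernel density estimation step is an assumption for illustration, and the automatic alignment stage is omitted.

# Sketch of the comparison score and likelihood ratio described above.
import numpy as np
from scipy.stats import gaussian_kde

def ncc(a, b):
    """Normalized 2D correlation coefficient between two aligned images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def likelihood_ratio(score, same_source_scores, diff_source_scores):
    """LR = p(score | same source) / p(score | different source), via KDE."""
    numerator = gaussian_kde(same_source_scores)(score)[0]
    denominator = gaussian_kde(diff_source_scores)(score)[0]
    return numerator / denominator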
Abstract:
This paper analyzes the effects of parliamentary representation on road infrastructure expenditure during the Spanish Restoration. Using a panel dataset of Spanish provinces in 1880-1914, we find that the allocation of administrative resources among provinces depended both on the characteristics of the provincial delegation (such as the share of MPs with party leadership positions and their degree of electoral independence) and on the regime's global search for stability. These results point to the importance of electoral dynamics within semi-democratic political systems, and offer an example of the influence of government tactics on infrastructure allocation.