973 results for Semi-automatic road extraction
Abstract:
This paper presents a solution to part of the problem of making robotic or semi-robotic digging equipment less dependent on human supervision. A method is described for identifying rocks of a certain size that may affect digging efficiency or require special handling. The process involves three main steps. First, by using range and intensity data from a time-of-flight (TOF) camera, a feature descriptor is used to rank points and separate regions surrounding high scoring points. This allows a wide range of rocks to be recognized because features can represent a whole or just part of a rock. Second, these points are filtered to extract only points thought to belong to the large object. Finally, a check is carried out to verify that the resultant point cloud actually represents a rock. Results are presented from field testing on piles of fragmented rock. Note to Practitioners—This paper presents an algorithm to identify large boulders in a pile of broken rock as a step towards an autonomous mining dig planner. In mining, piles of broken rock can contain large fragments that may need to be specially handled. To assess rock piles for excavation, we make use of a TOF camera that does not rely on external lighting to generate a point cloud of the rock pile. We then segment large boulders from its surface by using a novel feature descriptor and distinguish between real and false boulder candidates. Preliminary field experiments show promising results with the algorithm performing nearly as well as human test subjects.
Abstract:
This presentation summarizes experience with the automated speech recognition and translation approach realised in the context of the European project EMMA.
Abstract:
Background and aims: Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging.
Materials and methods: The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: ‘semi-structured’ and ‘unstructured’. The second technique was developed using the open source language engineering framework GATE and aimed at the prediction of chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested respectively on sets of 635 and 163 manually classified or annotated reports, from the Northern Ireland Cancer Registry.
Results: The best result of 99.4% accuracy – which included only one semi-structured report predicted as unstructured – was produced by the layout classifier with the k-nearest neighbour algorithm, using the binary term occurrence word vector type with stopword filter and pruning. For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports the performance ranged from 0.97 to 0.94 in precision and from 0.92 to 0.83 in recall, while for unstructured reports performance ranged from 0.91 to 0.64 in precision and from 0.68 to 0.41 in recall. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured ones.
Conclusions: These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.
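The layout-classification setup described above (binary term occurrence, stopword filtering, k-NN) can be approximated outside RapidMiner. The sketch below uses scikit-learn as a stand-in; the toy reports, labels, and parameter values are illustrative assumptions, not the study's data or exact configuration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Illustrative toy reports; the real study used 635 training and
# 163 test reports from the Northern Ireland Cancer Registry.
reports = [
    "Tumour size: 22 mm. ER: positive. Nodes positive: 2.",
    "Specimen shows invasive carcinoma, the tumour measuring about two centimetres.",
    "Morphology: ductal carcinoma. PR: negative. Nodes positive: 0.",
    "The biopsy revealed scattered malignant cells without clear measurements.",
]
labels = ["semi-structured", "unstructured", "semi-structured", "unstructured"]

clf = make_pipeline(
    # Binary term occurrence with a stopword filter, as in the abstract
    CountVectorizer(binary=True, stop_words="english"),
    KNeighborsClassifier(n_neighbors=1),
)
clf.fit(reports, labels)
print(clf.predict(["Tumour size: 15 mm. ER: negative."])[0])  # -> semi-structured
```

The key choice mirrored from the abstract is the binary word vector: only term presence matters, not frequency, which suits short, formulaic report fields.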
Abstract:
Truck drivers are one of the largest occupational groups in Iran. Evidence from previous studies suggests that working and living conditions on the road engender many concerns for truck drivers, their families and their communities. This research aimed to explore the experiences of Iranian truck drivers regarding life on the road. This qualitative study was conducted among Iranian truck drivers working in the inter-state transportation sector. A purposeful sample of 20 truck drivers took part in this research. Data were collected through semi-structured interviews and analyzed based on qualitative content analysis. After analysis of the data, three main themes emerged: "Individual impacts related to the hardships of life on the road", "Family impacts related to the hardships of road life", and "Having a positive attitude towards work and the road". These themes capture the main dimensions of truck drivers' perspectives on road life. Although truck drivers hold positive beliefs about their occupation and life on the road, they and their families face many hardships which should be well understood. They also need support to be better able to resolve the road-life concerns they face. This study's findings are useful for occupational programming and in the promotion of health for truck drivers.
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing high-spectral-resolution data of the Earth. These data can be used in remote sensing applications such as target detection, hazard prevention, and monitoring oil spills, among others. In most of these applications, one requirement of paramount importance is the ability to give a real-time or near real-time response. Recently, onboard processing systems have emerged in order to cope with the huge amount of data to transfer from the satellite to the ground station, thus avoiding delays between hyperspectral image acquisition and its interpretation. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes a parallel FPGA-based architecture for endmember signature extraction. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA based on the Artix-7 FPGA programmable logic and tested using real hyperspectral data sets collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening new perspectives for onboard hyperspectral image processing.
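As a rough software illustration of the endmember-extraction idea behind VCA (not the paper's FPGA architecture), the core loop — take the most extreme pixel, then repeatedly project the data onto the orthogonal complement of the endmembers found so far — can be sketched in NumPy. The synthetic scene below is an assumption for demonstration; full VCA also handles SNR-dependent subspace projection and random direction vectors:

```python
import numpy as np

def extract_endmembers(Y, p):
    """Pick p endmember pixel indices from Y (bands x pixels) by
    iterative orthogonal projection -- a simplified illustration of
    the geometric idea underlying VCA."""
    indices = []
    for _ in range(p):
        if indices:
            E = Y[:, indices]
            # Project data onto the orthogonal complement of span(E)
            P = np.eye(Y.shape[0]) - E @ np.linalg.pinv(E)
            residual = P @ Y
        else:
            residual = Y
        indices.append(int(np.argmax(np.linalg.norm(residual, axis=0))))
    return indices

# Synthetic scene: 3 pure spectra (pixel columns 0-2) plus 20 convex mixtures.
rng = np.random.default_rng(0)
E_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.5, 0.2, 0.1],
                   [0.1, 0.3, 0.9]])                     # 5 bands x 3 endmembers
A = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), size=20).T])
Y = E_true @ A                                           # 5 bands x 23 pixels
print(sorted(extract_endmembers(Y, 3)))                  # -> [0, 1, 2]
```

Because the norm of a convex combination is maximised at a vertex of the simplex, the pure pixels are recovered whenever they are present in the scene.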
Abstract:
Is phraseology the third articulation of language? Fresh insights into a theoretical conundrum
Jean-Pierre Colson, University of Louvain (Louvain-la-Neuve, Belgium)
Although the notion of phraseology is now used across a wide range of linguistic disciplines, its definition and the classification of phraseological units remain a subject of intense debate. It is generally agreed that phraseology implies polylexicality, but this term is problematic as well, because it brings us back to one of the most controversial topics in modern linguistics: the definition of a word. On the other hand, another widely accepted principle of language is the double articulation or duality of patterning (Martinet 1960): the first articulation consists of morphemes and the second of phonemes. The very definition of morphemes, however, also poses several problems, and the situation becomes even more confused if we wish to take phraseology into account. In this contribution, I will take the view that a corpus-based and computational approach to phraseology may shed some new light on this theoretical conundrum. A better understanding of the basic units of meaning is necessary for more efficient language learning and translation, especially in the case of machine translation. Previous research (Colson 2011, 2012, 2013, 2014; Corpas Pastor 2000, 2007, 2008, 2013, 2015; Corpas Pastor & Leiva Rojo 2011; Leiva Rojo 2013) has shown the paramount importance of phraseology for translation. A tentative step towards a coherent explanation of the role of phraseology in language has been proposed by Mejri (2006): it is postulated that a third articulation of language intervenes at the level of words, including simple morphemes, sequences of free and bound morphemes, but also phraseological units.
I will present results from experiments with statistical associations of morphemes across several languages, and point out that (mainly) isolating languages such as Chinese are interesting for a better understanding of the interplay between morphemes and phraseological units. Named entities, in particular, are an extreme example of intertwining cultural, statistical and linguistic elements. Other examples show that the many borrowings and influences that characterize European languages tend to give a somewhat blurred vision of the interplay between morphology and phraseology. From a statistical point of view, the cpr-score (Colson 2016) provides a methodology for adapting the automatic extraction of phraseological units to the morphological structure of each language. The results obtained can therefore be used for testing hypotheses about the interaction between morphology, phraseology and culture. Experiments with the cpr-score on the extraction of Chinese phraseological units show that results depend on how the basic units of meaning are defined: a morpheme-based approach yields good results, which corroborates the claim by Beck and Mel'čuk (2011) that the association of morphemes into words may be similar to the association of words into phraseological units. A cross-linguistic experiment carried out for English, French, Spanish and Chinese also reveals that the results are quite compatible with Mejri’s hypothesis (2006) of a third articulation of language. Such findings, if confirmed, also corroborate the notion of statistical semantics in language. To illustrate this point, I will present the PhraseoRobot (Colson 2016), a computational tool for extracting phraseological associations around key words from the media, such as Brexit. 
The results confirm a previous study on the term globalization (Colson 2016): a significant part of sociolinguistic associations prevailing in the media is related to phraseology in the broad sense, and can therefore be partly extracted by means of statistical scores.
References
Beck, D. & I. Mel'čuk (2011). Morphological phrasemes and Totonacan verbal morphology. Linguistics 49/1: 175-228.
Colson, J.-P. (2011). La traduction spécialisée basée sur les corpus : une expérience dans le domaine informatique. In: Sfar, I. & S. Mejri (eds.), La traduction de textes spécialisés : retour sur des lieux communs. Synergies Tunisie n° 2. Gerflint, Agence universitaire de la Francophonie, p. 115-123.
Colson, J.-P. (2012). Traduire le figement en langue de spécialité : une expérience de phraséologie informatique. In: Mogorrón Huerta, P. & S. Mejri (dirs.), Lenguas de especialidad, traducción, fijación / Langues spécialisées, figement et traduction. Encuentros Mediterráneos / Rencontres Méditerranéennes, N°4. Universidad de Alicante, p. 159-171.
Colson, J.-P. (2013). Pratique traduisante et idiomaticité : l'importance des structures semi-figées. In: Mogorrón Huerta, P., Gallego Hernández, D., Masseau, P. & Tolosa Igualada, M. (eds.), Fraseología, Opacidad y Traduccíon. Studien zur romanischen Sprachwissenschaft und interkulturellen Kommunikation (Herausgegeben von Gerd Wotjak). Frankfurt am Main: Peter Lang, p. 207-218.
Colson, J.-P. (2014). La phraséologie et les corpus dans les recherches traductologiques. Communication lors du colloque international Europhras 2014, Association Européenne de Phraséologie. Université de Paris Sorbonne, 10-12 septembre 2014.
Colson, J.-P. (2016). Set phrases around globalization: an experiment in corpus-based computational phraseology. In: F. Alonso Almeida, I. Ortega Barrera, E. Quintana Toledo & M. Sánchez Cuervo (eds.), Input a Word, Analyse the World: Selected Approaches to Corpus Linguistics. Newcastle upon Tyne: Cambridge Scholars Publishing, p. 141-152.
Corpas Pastor, G. (2000). Acerca de la (in)traducibilidad de la fraseología. In: G. Corpas Pastor (ed.), Las lenguas de Europa: Estudios de fraseología, fraseografía y traducción. Granada: Comares, p. 483-522.
Corpas Pastor, G. (2007). Europäismen - von Natur aus phraseologische Äquivalente? Von blauem Blut und sangre azul. In: M. Emsel & J. Cuartero Otal (eds.), Brücken: Übersetzen und interkulturelle Kommunikationen. Festschrift für Gerd Wotjak zum 65. Geburtstag. Fráncfort: Peter Lang, p. 65-77.
Corpas Pastor, G. (2008). Investigar con corpus en traducción: los retos de un nuevo paradigma [Studien zur romanische Sprachwissenschaft und interkulturellen Kommunikation, 49]. Fráncfort: Peter Lang.
Corpas Pastor, G. (2013). Detección, descripción y contraste de las unidades fraseológicas mediante tecnologías lingüísticas. In: Olza, I. & R. Elvira Manero (eds.), Fraseopragmática. Berlin: Frank & Timme, p. 335-373.
Leiva Rojo, J. (2013). La traducción de unidades fraseológicas (alemán-español/español-alemán) como parámetro para la evaluación y revisión de traducciones. In: Mellado Blanco, C., Buján, P., Iglesias, N.M., Losada, M.C. & A. Mansilla (eds.), La fraseología del alemán y el español: lexicografía y traducción. ELS, Etudes Linguistiques / Linguistische Studien, Band 11. München: Peniope, p. 31-42.
Leiva Rojo, J. & G. Corpas Pastor (2011). Placing Italian idioms in a foreign milieu: a case study. In: Pamies Bertrán, A., Luque Nadal, L., Bretana, J. & M. Pazos (eds.), Multilingual phraseography. Second Language Learning and Translation Applications. Baltmannsweiler: Schneider Verlag (Colección: Phraseologie und Parömiologie, 28), p. 289-298.
Martinet, A. (1966). Eléments de linguistique générale. Paris: Colin.
Mejri, S. (2006). Polylexicalité, monolexicalité et double articulation. Cahiers de Lexicologie 2: 209-221.
Abstract:
Automatic video segmentation plays a vital role in sports video annotation. This paper presents a fully automatic and computationally efficient algorithm for the analysis of sports videos. Various methods of automatic shot boundary detection have been proposed to perform automatic video segmentation. These investigations mainly concentrate on detecting fades and dissolves for fast processing of the entire video scene, without providing any additional feedback on object relativity within the shots. The goal of the proposed method is to identify regions that perform certain activities in a scene. The model uses low-level feature video processing algorithms to extract the shot boundaries from a video scene and to identify dominant colours within these boundaries. An object classification method is used for clustering the seed distributions of the dominant colours into homogeneous regions. Using a simple tracking method, these regions are then classified as active or static. The efficiency of the proposed framework is demonstrated on a standard video benchmark with numerous types of sports events, and the experimental results show that our algorithm can be used with high accuracy for automatic annotation of active regions in sports videos.
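One common low-level realisation of the shot-boundary step is thresholding the histogram difference between consecutive frames. The sketch below is a generic illustration under that assumption — the abstract does not specify the paper's exact features, bin count, or threshold:

```python
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """Flag frame i as a shot boundary when the L1 distance between the
    grey-level histograms of frames i-1 and i exceeds a threshold.
    Illustrative values only; real detectors also handle fades/dissolves."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                 # normalise so histograms sum to 1
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(i)
        prev = hist
    return cuts

# Two synthetic shots: 5 dark frames followed by 5 bright frames.
frames = [np.full((8, 8), 10)] * 5 + [np.full((8, 8), 200)] * 5
print(detect_cuts(frames))                       # -> [5]
```

A hard cut produces a large one-step histogram jump; gradual transitions such as dissolves need cumulative or windowed measures instead of a single-step threshold.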
Abstract:
Master's dissertation, Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014
Abstract:
The Caatinga is an important laboratory for studies of arthropod adaptations and acclimatization because its precipitation is highly variable in time. We studied the effects of temporal variability on arthropod composition in a caatinga area. The study was carried out in a preservation area on Almas Farm, São José dos Cordeiros, Paraíba. Samples were collected between August 2007 and July 2008 in two 100 m long parallel transects, separated by a distance of 30 m, in a dense, tree-dominated caatinga area. Samples were collected in each transect every 10 m. Ten soil samples were taken from each transect, both at 0-5 cm (A) and 5-10 cm (B) depth, resulting in 40 samples each month. The Berlese funnel method was used for fauna extraction. We registered 26 orders, and arthropod density in the soil ranged from 3237 to 22774 individuals/m², in January and March 2008 respectively. There was no difference between layers A and B regarding order abundance and richness. The taxa recorded include groups with few or no previous records in the Caatinga region, such as Pauropoda, Psocoptera, Thysanoptera, Protura and Araneae. Acari was the most abundant group, with 66.7% of the total number of individuals. Soil arthropods presented a positive correlation with soil moisture, plant cover, precipitation and real evapotranspiration. Increases in fauna richness and abundance were registered in February, a month after the beginning of the rainy season. Periodic rain events in arid and semiarid ecosystems trigger physiological responses in edaphic organisms such as arthropods. Edaphic arthropods respond to temporal variability in the Caatinga biome. This faunal variation has to be considered in studies of this ecosystem, because variation in soil arthropod composition can affect the dynamics of the food web through time.
Abstract:
This paper presents the development of a semi-active prosthetic knee, which can work in both active and passive modes based on the energy required during the gait cycle of various activities of daily living (ADLs). The prosthetic limb is equipped with various sensors to measure the kinematic and kinetic parameters of both limbs. The prosthetic knee is designed to be back-drivable in passive mode, offering potential for energy regeneration when there is negative energy across the knee joint. Preliminary tests have been performed on a transfemoral amputee in passive mode to provide some insight into the amputee/prosthesis interaction and performance with the designed prosthetic knee.
Abstract:
Road design and construction methods developed in southern Canada now need to be adapted to the country's northern environments in order to prevent dramatic permafrost thaw during the construction of a new road. In addition, climate warming is currently causing major soil stability problems in northern Canada. These factors are causing significant losses in the functional and structural capacity of the Alaska Highway in Yukon over a segment of more than 200 km between the village of Destruction Bay and the Alaska border. In order to find cost-effective long-term solutions, the Yukon Ministry of Transportation (in collaboration with the Federal Highway Administration of the US government, Transport Canada, Université Laval, Université de Montréal and the Alaska University Transportation Center) set up twelve 50-metre test sections on the Alaska Highway near Beaver Creek in 2008. These test sections were designed to evaluate one or more combined thermal stabilization methods, such as the thermal drain, the air-convection embankment, the snow/sun shed, the embankment covered with organic matter, longitudinal drains, snow clearing on the slopes, and the reflective surface. The specific objectives of the research are (1) to establish the thermal regimes and heat fluxes in each section over the first 3 years of operation; (2) to document the factors that may help or hinder the effectiveness of the protection systems; and (3) to determine the long-term cost/benefit ratio of each technique used. To this end, a new analysis method, based on measuring the heat extraction flux Hx and the heat induction flux Hi at the interface between the embankment and the natural ground, was used in this study.
Some permafrost protection techniques show good potential during their first 3 years of operation. This is the case for the uncovered air-convection embankment, the full-width air-convection embankment, the longitudinal drains, the sun/snow shed and the reflective surface. Unfortunately, problems in the installation of the thermal drains prevented a complete evaluation of their effectiveness.
Abstract:
A relevant problem in polyolefin processing is the presence of volatile and semi-volatile organic compounds (VOCs and SVOCs), such as linear-chain alkanes, found in final products. These VOCs can be detected by customers through their unpleasant smell and can be an environmental issue; at the same time, they can cause negative side effects during processing. Since no previously standardized analytical techniques for polymeric matrices are available in the literature, we have implemented different VOC extraction methods and gas-chromatographic analyses for quali-quantitative studies of such compounds. Different procedures can be found in the literature, including microwave-assisted extraction (MAE) and thermal desorption (TDS), used for different purposes. TDS coupled with GC-MS is necessary for the identification of the different compounds in the polymer matrix. Although quantitative determination is complex, the results obtained from TDS/GC-MS show that the by-products are mainly linear-chain oligomers with an even number of carbons in the C8-C22 range (for HDPE). In order to quantify these linear alkane by-products, a more accurate GC-FID determination with an internal standard has been run on MAE extracts. Regardless of the type of extruder used, it is difficult to distinguish the effect of the various processes, each of which in any case yields a content of low-boiling substances lower than that of the corresponding virgin polymer. The two HDPEs studied can be distinguished on the basis of the quantity of analytes found; therefore, the production process is mainly responsible for the amount of VOCs and SVOCs observed. The extruder technology used by Sacmi SC achieves a significant reduction in VOCs compared to the conventional screw system. This result is significant, as a lower quantity of volatile substances leads to lower migration of such materials, especially in food-packaging applications.
Abstract:
My doctoral research is about the modelling of symbolism in the cultural heritage domain, and about connecting artworks based on their symbolism through knowledge extraction and representation techniques. In particular, I participated in the design of two ontologies: one models the relationships between a symbol, its symbolic meaning, and the cultural context in which the symbol symbolizes that meaning; the second models artistic interpretations of a cultural heritage object from an iconographic and iconological (thus also symbolic) perspective. I also converted several sources of unstructured data (a dictionary of symbols and an encyclopaedia of symbolism) and semi-structured data (DBpedia and WordNet) to create HyperReal, the first knowledge graph dedicated to conventional cultural symbolism. Making use of HyperReal's content, I showed how linked open data about cultural symbolism can be used to initiate a series of quantitative studies that analyse (i) similarities between cultural contexts based on their symbologies, (ii) broad symbolic associations, and (iii) specific case studies of symbolism, such as the relationship between symbols, their colours, and their symbolic meanings. Moreover, I developed a system that can infer symbolic, cultural context-dependent interpretations from artworks according to what they depict, envisioning potential use cases for museum curation. I then re-engineered the iconographic and iconological statements of Wikidata, a widely used general-domain knowledge base, creating ICONdata: an iconographic and iconological knowledge graph. ICONdata was then enriched with automatic symbolic interpretations. Subsequently, I demonstrated the significance of enhancing artwork information through alignment with linked open data related to symbolism, resulting in the discovery of novel connections between artworks. Finally, I contributed to the creation of a software application.
This application leverages established connections, allowing users to investigate the symbolic expression of a concept across different cultural contexts through the generation of a three-dimensional exhibition of artefacts symbolising the chosen concept.
Abstract:
To detect the presence of male DNA in vaginal samples collected from survivors of sexual violence and stored on filter paper. A pilot study was conducted to evaluate 10 vaginal samples spotted on sterile filter paper: 6 collected at random in April 2009 and 4 in October 2010. The time between sexual assault and sample collection was 4-48 hours. After drying at room temperature, the samples were placed in a sterile envelope and stored for 2-3 years until processing. DNA extraction was confirmed by polymerase chain reaction for human β-globin, and the presence of prostate-specific antigen (PSA) was quantified. The presence of the Y chromosome was detected using primers for sequences in the TSPY (Y7/Y8 and DYS14) and SRY genes. β-Globin was detected in all 10 samples, while 2 samples were positive for PSA. Half of the samples amplified the Y7/Y8 and DYS14 sequences of the TSPY gene and 30% amplified the SRY gene sequence of the Y chromosome. Four male samples and 1 female sample served as controls. Filter-paper spots stored for periods of up to 3 years proved adequate for preserving genetic material from vaginal samples collected following sexual violence.
Abstract:
Diabetic Retinopathy (DR) is a complication of diabetes that can lead to blindness if not readily discovered. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW bypasses the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques to identify each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to use a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and on mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. In assessing the accuracy of detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
Those results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and, in some instances, surpasses the results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
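The mid-level coding step (soft assignment of local descriptors to visual words, followed by max pooling over the image) can be sketched as follows. Random vectors stand in for SURF descriptors, and the single-parameter Gaussian soft assignment is a simplified proxy for the paper's semi-soft coding, not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_encode(descriptors, codebook, beta=1.0):
    """Soft-assign each local descriptor to the visual words, then
    max-pool the assignment weights over the image. A simplified
    stand-in for semi-soft coding with max pooling."""
    # Squared distance from each descriptor to each visual word
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)             # soft assignment (rows sum to 1)
    return w.max(axis=0)                          # max pooling -> one value per word

rng = np.random.default_rng(0)
local_feats = rng.normal(size=(200, 64))          # stand-ins for 64-D SURF descriptors
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(local_feats).cluster_centers_
img_repr = bovw_encode(local_feats, codebook)
print(img_repr.shape)                             # -> (8,)
```

The resulting fixed-length vector (one entry per visual word) is what would be fed to the maximum-margin classifier; max pooling keeps the strongest evidence for each word rather than summing it, which helps with sparse lesion-like features.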