989 results for Automatic term extraction


Relevance:

40.00%

Abstract:

This paper presents a new algorithm for extracting Free-Form Surface Features (FFSFs) from a surface model. The extraction algorithm is based on a taxonomy of FFSFs modified from that proposed in the literature. A new classification scheme is proposed for FFSFs to enable their representation and extraction. The paper proposes a separating curve as the signature of an FFSF in a surface model. FFSFs are classified based on the characteristics of the separating curves (number and type) and the influence region (the region enclosed by the separating curve). A method to extract these entities is presented. The algorithm has been implemented and tested on various free-form surface features lying on different types of free-form (base) surfaces, and is found to correctly identify and represent the features irrespective of the type of underlying surface. Both the representation and the extraction algorithm are based on topology and geometry. The algorithm is data-driven and does not use any pre-defined templates. The definition presented for a feature is unambiguous and application-independent. The proposed classification of FFSFs can be used to develop an ontology to determine semantic equivalences, allowing features to be exchanged, mapped and used across PLM applications. (C) 2011 Elsevier Ltd. All rights reserved.
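As an illustration of the separating-curve signature, the sketch below classifies a feature from the number and type of its separating curves; the class names, the `SeparatingCurve` fields and the bucket labels are hypothetical, not the paper's actual taxonomy.

```python
# Minimal sketch, assuming a feature is summarised by its separating curves.
from dataclasses import dataclass
from typing import List

@dataclass
class SeparatingCurve:
    closed: bool  # closed loop vs. open curve on the base surface (geometry omitted)

def classify_ffsf(curves: List[SeparatingCurve]) -> str:
    """Classify a free-form surface feature from the number and type
    of its separating curves (illustrative buckets, not the paper's)."""
    n_closed = sum(c.closed for c in curves)
    n_open = len(curves) - n_closed
    if len(curves) == 1 and n_closed == 1:
        return "isolated feature (single closed separating curve)"
    if n_open > 0 and n_closed == 0:
        return "boundary-interacting feature (open separating curves)"
    return f"compound feature ({n_closed} closed, {n_open} open curves)"

print(classify_ffsf([SeparatingCurve(closed=True)]))
```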

Relevance:

40.00%

Abstract:

This paper presents the classification, representation and extraction of deformation features in sheet-metal parts. The thickness is constant for these shape features, and hence they are also referred to as constant-thickness features. A deformation feature is represented as a set of faces with a characteristic arrangement among the faces. Deformation of the base-sheet, or forming of material, creates Bends and Walls with respect to a base-sheet or a reference plane. These are referred to as Basic Deformation Features (BDFs). Compound deformation features, having two or more BDFs, are defined as characteristic combinations of Bends and Walls and are represented as a graph called the Basic Deformation Features Graph (BDFG). The graph therefore represents a compound deformation feature uniquely. The characteristic arrangement of the faces and the types of bends belonging to the feature decide the type and nature of the deformation feature. Algorithms have been developed to extract and identify deformation features from a CAD model of a sheet-metal part. The proposed algorithm does not require folding and unfolding of the part as intermediate steps to recognize deformation features. Representations of typical features are illustrated, and results of extracting these deformation features from typical sheet-metal parts are presented and discussed. (C) 2013 Elsevier Ltd. All rights reserved.
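A minimal sketch of the BDFG idea using networkx: nodes carry a Bend/Wall label, edges record face adjacency, and a compound feature is recognised by label-preserving graph isomorphism. The data layout is an assumption made for illustration, not the paper's representation.

```python
import networkx as nx

def bdfg(faces):
    """faces: list of (face_id, kind, adjacent_face_ids), kind in {'Bend', 'Wall'}."""
    g = nx.Graph()
    for fid, kind, adj in faces:
        g.add_node(fid, kind=kind)
        for a in adj:
            g.add_edge(fid, a)
    return g

def same_feature(g1, g2):
    """Match a candidate BDFG against a feature template, comparing
    Bend/Wall labels rather than face ids."""
    nm = nx.isomorphism.categorical_node_match("kind", None)
    return nx.is_isomorphic(g1, g2, node_match=nm)

template = bdfg([(0, "Wall", [1]), (1, "Bend", [2]), (2, "Wall", [])])
candidate = bdfg([(7, "Wall", [9]), (9, "Bend", [4]), (4, "Wall", [])])
print(same_feature(template, candidate))  # True: same Bend/Wall arrangement
```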

Relevance:

40.00%

Abstract:

In the semiconductor manufacturing environment it is very important to understand which factors have the most impact on process outcomes and to control them accordingly. This is usually achieved through design of experiments at process start-up and long term observation of production. As such it relies heavily on the expertise of the process engineer. In this work, we present an automatic approach to extracting useful insights about production processes and equipment based on state-of-the-art Machine Learning techniques. The main goal of this activity is to provide tools to process engineers to accelerate the learning-by-observation phase of process analysis. Using a Metal Deposition process as an example, we highlight various ways in which the extracted information can be employed.
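As a rough sketch of this kind of insight extraction, the following ranks process settings by their impact on an outcome using a tree-ensemble importance measure; the column names and synthetic data are placeholders, not the Metal Deposition dataset, and the paper's actual techniques may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # placeholder process settings
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["temperature", "pressure", "time", "flow"],  # hypothetical names
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")          # higher -> larger impact on the outcome
```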

Relevance:

40.00%

Abstract:

Thermal comfort is defined as “that condition of mind which expresses satisfaction with the thermal environment” [1] [2]. Field studies have been completed in order to establish the governing conditions for thermal comfort [3]. These studies showed that the internal climate of a room is the strongest factor in establishing thermal comfort, and direct manipulation of the internal climate is therefore necessary to retain an acceptable level of thermal comfort. For Building Energy Management System (BEMS) strategies to be utilised efficiently, it is necessary to be able to predict the effect that activating a heating/cooling source (radiators, windows and doors) will have on the room. Numerical modelling of the domain can be challenging because of the need to capture temperature stratification and/or different heat sources (radiators, computers and human beings). Computational Fluid Dynamics (CFD) models are usually used for this purpose because they provide the level of detail required. Although they provide the necessary accuracy, these models tend to be highly computationally expensive, especially when transient behaviour needs to be analysed, and consequently they cannot be integrated into BEMS. This paper presents and describes the validation of a CFD-ROM method for real-time simulation of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced order models (ROMs) from validated CFD simulations. The test case used in this work is a room of the Environmental Research Institute (ERI) Building at University College Cork (UCC). The ROMs have been shown to be sufficiently accurate, with a total error of less than 1%, and to successfully retain a satisfactory representation of the phenomena modelled. The number of zones in a ROM defines the size and complexity of that ROM, and ROMs with a higher number of zones have been observed to produce more accurate results. As each ROM has a time to solution of less than 20 seconds, they can be integrated into the BEMS of a building, which opens the potential for real-time physics-based building energy modelling.
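To show why a ROM is cheap enough for BEMS integration, here is a toy zonal ROM stepped as a discrete linear system; the matrices are illustrative stand-ins, not coefficients extracted from the ERI test case.

```python
import numpy as np

A = np.array([[0.95, 0.04], [0.03, 0.96]])  # inter-zone coupling per 1 s step (illustrative)
B = np.array([[0.020], [0.004]])            # K per step per kW of radiator power (illustrative)
c = np.array([0.15, 0.15])                  # ambient gains/losses per step (illustrative)
T = np.array([19.0, 18.5])                  # initial zone temperatures (degC)

u = np.array([1.2])                         # radiator power (kW)
for _ in range(60):                         # one minute at 1 s steps
    T = A @ T + B @ u + c                   # each step is a cheap matrix product,
                                            # versus a full CFD solve
print(T)                                    # zone temperatures after one minute
```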

Relevance:

40.00%

Abstract:

Accurate modelling of the internal climate of buildings is essential if Building Energy Management Systems (BEMS) are to maintain adequate thermal comfort efficiently. Computational fluid dynamics (CFD) models are usually used to predict the internal climate. Nevertheless, CFD models, although providing the necessary level of accuracy, are highly computationally expensive and cannot practically be integrated into BEMS. This paper presents and describes the validation of a CFD-ROM method for real-time simulations of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced order models (ROMs) from validated CFD simulations. The ROMs are shown to be adequately accurate, with a total error below 5%, and to retain a satisfactory representation of the phenomena modelled. Each ROM has a time to solution under 20 seconds, which opens the potential for their integration with BEMS, giving real-time physics-based building energy modelling. A parameter study was conducted to investigate the applicability of an extracted ROM to initial boundary conditions different from those from which it was extracted. The results show that the ROMs retained satisfactory total errors when the initial conditions in the room were varied by ±5°C. This allows the production of a finite number of ROMs with the ability to rapidly model many possible scenarios.
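A sketch of the parameter study's shape: sweep the initial temperature by ±5°C and record the ROM's error against a reference solution. Both models below are toy stand-ins for the validated CFD result and the extracted ROM, chosen only to make the harness runnable.

```python
import numpy as np

def reference(T0, t):                       # stand-in for a validated CFD result
    return 21.0 + (T0 - 21.0) * np.exp(-t / 600.0)

def rom(T0, t):                             # stand-in for the extracted ROM
    return 21.0 + (T0 - 21.0) * np.exp(-t / 580.0)   # slightly mis-tuned

t = np.linspace(0, 3600, 61)                # one-hour horizon
for dT in [-5.0, -2.5, 0.0, 2.5, 5.0]:      # perturbed initial conditions
    T0 = 20.0 + dT
    err = np.abs(rom(T0, t) - reference(T0, t)).mean()
    print(f"T0 = {T0:4.1f} degC  mean abs error = {err:.3f} K")
```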

Relevance:

40.00%

Abstract:

Cerebral glioma is the most prevalent primary brain tumor; gliomas are classified broadly into low and high grades according to the degree of malignancy. High grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis. Low grade gliomas are slow growing, least malignant and have a better response to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for the automatic extraction of low and high grade glioma and other brain tissues, grade-detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess the growth rate of tumors. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1, FLAIR and T2-weighted images. The methods were validated against manual ground-truth images and showed promising results. The developed methods were compared with the widely used fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was checked at different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumour texture grading and detection. Based on thresholds of discriminant first-order and gray level co-occurrence matrix based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to a more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can provide an idea of the growth rate of a tumor; this can be used for assessing response to therapy and patient prognosis.
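The second-order statistics mentioned are standard gray level co-occurrence matrix (GLCM) features; below is a minimal sketch with scikit-image on a placeholder slice. The thesis's thresholds and feature sets are not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19

# Placeholder 8-bit "slice"; a real pipeline would load an MR image here.
img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())   # averaged over the two angles
```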

Relevance:

40.00%

Abstract:

Efficient optic disc segmentation is an important task in automated retinal screening, and optic disc detection is fundamental as a medical reference and for retinal image analysis applications. The most difficult problem of optic disc extraction is locating the region of interest, and it is moreover a time-consuming task. This paper tries to overcome this barrier by presenting an automated method for optic disc boundary extraction using Fuzzy C-Means combined with thresholding. The discs determined by the new method agree relatively well with those determined by the experts. The method has been validated on a data set of 110 colour fundus images from the DRION database and has obtained promising results. The performance of the system is evaluated using the difference in horizontal and vertical diameters between the obtained disc boundary and the ground truth obtained from two expert ophthalmologists. For the 25 test images selected from the 110 colour fundus images, the Pearson correlations of the ground-truth diameters with the diameters detected by the new method are 0.946 and 0.958, and 0.94 and 0.974, respectively. The scatter plot shows that the ground-truth and detected diameters have a high positive correlation. This computerized analysis of the optic disc is very useful for the diagnosis of retinal diseases.
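A compact fuzzy c-means on pixel intensities, sketching the clustering step; the paper's thresholding and boundary-extraction details are omitted.

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D intensities x; returns centres and memberships."""
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                  # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centres = um @ x / um.sum(axis=1)               # weighted cluster centres
        d = np.abs(x[None, :] - centres[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))                     # closer centre -> higher membership
        u /= u.sum(axis=0)
    return centres, u

pixels = np.random.default_rng(1).random(1000)          # placeholder fundus intensities
centres, u = fcm(pixels)
# Candidate optic-disc pixels: high membership in the brightest cluster.
bright = int(np.argmax(centres))
mask = u[bright] > 0.5
```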

Relevance:

40.00%

Abstract:

Condition monitoring of wooden railway sleepers is generally carried out by visual inspection and, if necessary, some impact acoustic examination carried out intuitively by skilled personnel. In this work, a pattern recognition solution has been proposed to automate the process for the achievement of robust results. The study presents a comparison of several pattern recognition techniques together with various nonstationary feature extraction techniques for the classification of impact acoustic emissions. Pattern classifiers such as the multilayer perceptron, learning vector quantization and Gaussian mixture models are combined with nonstationary feature extraction techniques such as the Short Time Fourier Transform, Continuous Wavelet Transform, Discrete Wavelet Transform and Wigner-Ville Distribution. Owing to the presence of several different feature extraction and classification techniques, data fusion has been investigated, mainly on two levels: feature level and classifier level. Fusion at the feature level demonstrated the best results, with an overall accuracy of 82% when compared to the human operator.
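Feature-level fusion here means concatenating feature vectors from different time-frequency representations before a single classifier. Below is a sketch under that reading, with synthetic signals standing in for impact-acoustic recordings; the STFT band energies and a hand-rolled Haar DWT are illustrative choices, not the study's exact features.

```python
import numpy as np
from scipy.signal import stft
from sklearn.neural_network import MLPClassifier

def stft_features(x, fs=16000):
    _, _, Z = stft(x, fs=fs, nperseg=256)
    return np.log(np.abs(Z).mean(axis=1) + 1e-9)        # mean log band energies

def haar_dwt_features(x, levels=4):
    feats = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2)            # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2)            # detail coefficients
        feats.append(np.log((d ** 2).mean() + 1e-9))    # sub-band energy
        x = a
    return np.array(feats)

rng = np.random.default_rng(0)
X = [rng.normal(size=4096) for _ in range(40)]          # placeholder signals
y = rng.integers(0, 2, size=40)                         # placeholder labels
# Feature-level fusion: concatenate both feature vectors per signal.
F = np.array([np.concatenate([stft_features(x), haar_dwt_features(x)]) for x in X])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(F, y)
```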

Relevance:

40.00%

Abstract:

In this paper, we propose a model for discovering frequent sequential patterns, phrases, which can be used as profile descriptors of documents. Numerous phrases can undoubtedly be obtained using data mining algorithms; however, it is difficult to use these phrases effectively to answer what users want. Therefore, we present a pattern taxonomy extraction model which performs the task of extracting descriptive frequent sequential patterns by pruning the meaningless ones. The model is then extended and tested by applying it to an information filtering system. The results of the experiment show that pattern-based methods outperform keyword-based methods. The results also indicate that the removal of meaningless patterns not only reduces the cost of computation but also improves the effectiveness of the system.
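One plausible reading of "pruning the meaningless ones" is keeping only closed patterns, i.e. dropping any pattern subsumed by a longer pattern of equal support; the paper's actual pruning criteria may differ. A sketch:

```python
def is_subsequence(p, q):
    """True if pattern p occurs as a (not necessarily contiguous) subsequence of q."""
    it = iter(q)
    return all(tok in it for tok in p)   # `in` consumes the iterator up to each match

def prune_to_closed(patterns):
    """patterns: dict mapping tuple-of-terms -> support count."""
    keep = {}
    for p, sup in patterns.items():
        subsumed = any(len(q) > len(p) and sup == sq and is_subsequence(p, q)
                       for q, sq in patterns.items())
        if not subsumed:
            keep[p] = sup
    return keep

pats = {("data",): 5, ("data", "mining"): 5, ("mining",): 7}
print(prune_to_closed(pats))   # ("data",) is subsumed by ("data", "mining")
```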

Relevance:

40.00%

Abstract:

A system that could automatically extract abnormal lung regions may assist expert radiologists in verifying lung tissue abnormalities. This paper presents an automated lung nodule detection system consisting of five components: acquisition, pre-processing, background removal, detection, and false-positive reduction. The system employs a combination of ensemble classification and clustering methods. The performance of the developed system is compared against some existing counterparts. Based on the experimental results, the proposed system achieved a sensitivity of 100% and 0.67 false positives per slice for the 30 tested CT images.
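A sketch of the two reported metrics, sensitivity and false positives per slice, computed from per-slice detections against ground truth; the nodule ids and data below are placeholders.

```python
def evaluate(detections_per_slice, truths_per_slice):
    """Each argument is a list of sets of nodule ids, one set per CT slice."""
    tp = fp = fn = 0
    for dets, truths in zip(detections_per_slice, truths_per_slice):
        tp += len(dets & truths)
        fp += len(dets - truths)
        fn += len(truths - dets)
    sensitivity = tp / (tp + fn) if tp + fn else 1.0
    fp_per_slice = fp / len(detections_per_slice)
    return sensitivity, fp_per_slice

dets = [{"n1"}, set(), {"n2", "x"}]     # per-slice detected nodule ids (placeholder)
gts  = [{"n1"}, set(), {"n2"}]          # per-slice ground truth (placeholder)
print(evaluate(dets, gts))              # (1.0, 0.333...)
```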

Relevance:

40.00%

Abstract:

This paper addresses the challenge of bridging the semantic gap between the simplicity of the features that can currently be computed in automated content indexing systems and the richness of the semantics in user queries posed for media search and retrieval. It proposes a unique computational approach to the extraction of expressive elements of motion pictures for deriving high-level semantics of the stories portrayed, thus enabling rich video annotation and interpretation. This approach, motivated and directed by the existing cinematic conventions known as film grammar, as a first step toward demonstrating its effectiveness, uses the attributes of motion and shot length to define and compute a novel measure of the tempo of a movie. Tempo flow plots are defined and derived for a number of full-length movies, and edge analysis is performed, leading to the extraction of dramatic story sections and events signaled by their unique tempo. The results confirm tempo as a useful high-level semantic construct in its own right and a promising component of others such as the rhythm, tone or mood of a film. In addition to the development of this computable tempo measure, a study is conducted into the usefulness of biasing it toward either of its constituents, namely motion or shot length. Finally, a refinement is made to the shot-length normalizing mechanism, driven by the peculiar characteristics of the shot-length distribution exhibited by movies. Results of these additional studies, as well as possible applications and limitations, are discussed.
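One plausible rendering of such a tempo measure: normalise shot length and motion so that shorter shots and stronger motion both raise tempo. The weighting and normalisation below are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def tempo(shot_lengths, motion, alpha=0.5, beta=0.5):
    s = np.asarray(shot_lengths, float)
    m = np.asarray(motion, float)
    z_s = (s.mean() - s) / s.std()       # shorter shots -> positive contribution
    z_m = (m - m.mean()) / m.std()       # stronger motion -> positive contribution
    return alpha * z_s + beta * z_m

shots = [12.0, 3.5, 2.0, 8.0, 1.5]       # seconds per shot (placeholder)
motion = [0.1, 0.7, 0.9, 0.3, 0.8]       # mean motion per shot (placeholder)
print(tempo(shots, motion))              # peaks in this flow flag high-tempo events
```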

Relevance:

40.00%

Abstract:

This paper proposes a unique computational approach to extraction of expressive elements of motion pictures for deriving high level semantics of stories portrayed, thus enabling better video annotation and interpretation systems. This approach, motivated and directed by the existing cinematic conventions known as film grammar, as a first step towards demonstrating its effectiveness, uses the attributes of motion and shot length to define and compute a novel measure of tempo of a movie. Tempo flow plots are defined and derived for four full-length movies and edge analysis is performed leading to the extraction of dramatic story sections and events signaled by their unique tempo. The results confirm tempo as a useful attribute in its own right and a promising component of semantic constructs such as tone or mood of a film.

Relevance:

40.00%

Abstract:

This article presents an automatic methodology for the extraction of road seeds from high-resolution aerial images. The method is based on a set of four road objects and a set of connection rules among road objects. Each road object is a local representation of an approximately straight road fragment, and its construction is based on a combination of polygons describing all relevant image edges, according to rules embodying road knowledge. Each road seed is composed of a sequence of connected road objects, and each such sequence can be geometrically structured as a chain of contiguous quadrilaterals. Experiments carried out with high-resolution aerial images showed that the proposed methodology is very promising for extracting road seeds. This article presents the fundamentals of the method as well as the experimental results.
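A sketch of the chaining step: two road objects connect when their axes are contiguous and roughly collinear. The segment representation, geometric tests and thresholds below stand in for the article's road objects and connection rules.

```python
import numpy as np

def direction(seg):
    p0, p1 = np.asarray(seg, float)
    d = p1 - p0
    return d / np.linalg.norm(d)

def connectable(a, b, max_gap=5.0, max_angle_deg=20.0):
    """a, b: road-object axes as (start, end) point pairs (illustrative)."""
    gap = np.linalg.norm(np.asarray(b[0], float) - np.asarray(a[1], float))
    cos = float(np.clip(direction(a) @ direction(b), -1.0, 1.0))
    return gap <= max_gap and np.degrees(np.arccos(cos)) <= max_angle_deg

def chain_seeds(objs):
    """Greedily chain consecutive road objects into road seeds."""
    seeds, current = [], [objs[0]]
    for nxt in objs[1:]:
        if connectable(current[-1], nxt):
            current.append(nxt)
        else:
            seeds.append(current)
            current = [nxt]
    seeds.append(current)
    return seeds

objs = [((0, 0), (10, 1)), ((11, 1), (20, 2)), ((40, 40), (50, 41))]
print([len(s) for s in chain_seeds(objs)])   # -> [2, 1]
```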

Relevance:

40.00%

Abstract:

The continued growth of large cities is producing increasing volumes of urban sewage sludge. Disposing of this waste without damaging the environment requires careful management. The application of large quantities of biosolids (treated sewage sludge) to agricultural lands for many years may result in the excessive accumulation of nutrients like phosphorus (P) and thereby raise risks of eutrophication in nearby water bodies. We evaluated the fractionation of P in samples of an Oxisol collected as part of a field experiment in which biosolids were added at three rates to a maize (Zea mays L.) plantation over four consecutive years. The biosolids treatments were equivalent to one, two and four times the recommended N rate for maize crops. In a fourth treatment, mineral fertilizer was applied at the rate recommended for maize. Inorganic P forms were extracted with ammonium chloride to remove soluble and loosely bound P; P bound to aluminum oxide (P-Al) was extracted with ammonium fluoride; P bound to iron oxide (P-Fe) was extracted with sodium hydroxide; and P bound to calcium (P-Ca) was extracted with sulfuric acid. Organic P was calculated as the difference between total P and inorganic P. The predominant fraction of P was P-Fe, followed by P-Al and P-Ca. P fractions were positively correlated with the amounts of P applied, except for P-Ca. The low values of P-Ca were due to the advanced weathering processes to which the Oxisol has been subjected, under which forms of P-Ca are converted to P-Fe and P-Al. Fertilization with P via biosolids increased P availability for maize plants even when a large portion of P was converted to more stable forms. Phosphorus content in maize leaves and grains was positively correlated with P fractions in soils. From these results it can be concluded that the application of biosolids to highly weathered tropical clayey soils for many years, even above the recommended rate based on N requirements for maize, tends to pose less of a potential hazard to the environment than in less weathered sandy soils, because the non-readily available P fractions predominate after the addition of biosolids. (C) 2012 Elsevier B.V. All rights reserved.

Relevance:

40.00%

Abstract:

Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work. Producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, and thereby speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain-pipelined linguistic classifiers performing tasks such as Named Entity recognition, Entity resolution, and Taxonomy and Relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20], or more recently Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7]. In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
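As a concrete example of the plain-pipelined baseline this work aims to improve on, the sketch below extracts subject-verb-object triples with spaCy's dependency parser (assuming the en_core_web_sm model is installed); frame- or knowledge-pattern-based approaches would go beyond such flat triples.

```python
import spacy

nlp = spacy.load("en_core_web_sm")      # assumes the small English model is installed

def svo_triples(text):
    """Naive subject-verb-object extraction from dependency parses."""
    triples = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subj = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in tok.children if c.dep_ in ("dobj", "attr")]
                if subj and obj:
                    triples.append((subj[0].text, tok.lemma_, obj[0].text))
    return triples

print(svo_triples("Dante wrote the Divine Comedy."))
# e.g. [('Dante', 'write', 'Comedy')] -- candidate facts for populating a schema
```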