974 results for Automatic term extraction
Abstract:
This paper presents the classification, representation and extraction of deformation features in sheet-metal parts. The thickness is constant across these shape features, and hence they are also referred to as constant-thickness features. A deformation feature is represented as a set of faces with a characteristic arrangement among them. Deformation of the base-sheet, or forming of material, creates Bends and Walls with respect to a base-sheet or a reference plane. These are referred to as Basic Deformation Features (BDFs). Compound deformation features, comprising two or more BDFs, are defined as characteristic combinations of Bends and Walls and represented as a graph called the Basic Deformation Features Graph (BDFG). The graph therefore represents a compound deformation feature uniquely. The characteristic arrangement of the faces and the types of bends belonging to the feature determine the type and nature of the deformation feature. Algorithms have been developed to extract and identify deformation features from a CAD model of sheet-metal parts. The proposed algorithm does not require folding and unfolding of the part as intermediate steps to recognize deformation features. Representations of typical features are illustrated, and results of extracting these deformation features from typical sheet-metal parts are presented and discussed. (C) 2013 Elsevier Ltd. All rights reserved.
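As a rough illustration of the graph representation described above, a BDFG can be held in an ordinary adjacency structure with faces as nodes and bends as typed edges. This is a minimal sketch, not the authors' implementation; the Face and BDFG names and the bend-type strings are hypothetical.

```python
# Minimal sketch of a Basic Deformation Features Graph (BDFG): faces as
# nodes, bends as typed edges. Names are illustrative placeholders only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Face:
    face_id: int
    kind: str  # e.g. "wall" or "base-sheet"

@dataclass
class BDFG:
    faces: dict = field(default_factory=dict)   # face_id -> Face
    bends: list = field(default_factory=list)   # (face_a, face_b, bend_type)

    def add_face(self, face: Face) -> None:
        self.faces[face.face_id] = face

    def add_bend(self, a: int, b: int, bend_type: str) -> None:
        # A bend connects two faces; its type helps decide the nature
        # of the compound feature the graph encodes.
        self.bends.append((a, b, bend_type))

g = BDFG()
g.add_face(Face(0, "base-sheet"))
g.add_face(Face(1, "wall"))
g.add_bend(0, 1, "90-degree")
```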
Abstract:
In the semiconductor manufacturing environment, it is very important to understand which factors have the most impact on process outcomes and to control them accordingly. This is usually achieved through design of experiments at process start-up and long-term observation of production, and as such it relies heavily on the expertise of the process engineer. In this work, we present an automatic approach to extracting useful insights about production processes and equipment based on state-of-the-art Machine Learning techniques. The main goal of this activity is to provide tools to process engineers that accelerate the learning-by-observation phase of process analysis. Using a Metal Deposition process as an example, we highlight various ways in which the extracted information can be employed.
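The abstract does not name the specific Machine Learning techniques used. As one hedged illustration of ranking which factors most affect a process outcome, a tree-ensemble feature importance can be computed with scikit-learn; the factor names and synthetic data below are placeholders, not the paper's method.

```python
# Illustrative only: rank candidate process factors by their impact on an
# outcome using random-forest feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))   # e.g. pressure, temperature, flow, time
y = 2.0 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["pressure", "temperature", "flow", "time"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")   # "temperature" should dominate here
```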
Abstract:
Thermal comfort is defined as "that condition of mind which expresses satisfaction with the thermal environment" [1][2]. Field studies have been completed in order to establish the governing conditions for thermal comfort [3]. These studies showed that the internal climate of a room was the strongest factor in establishing thermal comfort. Direct manipulation of the internal climate is necessary to retain an acceptable level of thermal comfort. In order for Building Energy Management Systems (BEMS) strategies to be efficiently utilised, it is necessary to be able to predict the effect that activating a heating/cooling source (radiators, windows and doors) will have on the room. The numerical modelling of the domain can be challenging due to the need to capture temperature stratification and/or different heat sources (radiators, computers and human beings). Computational Fluid Dynamics (CFD) models are usually utilised for this function because they provide the level of detail required. Although they provide the necessary level of accuracy, these models tend to be highly computationally expensive, especially when transient behaviour needs to be analysed. Consequently, they cannot be integrated in BEMS. This paper presents and describes the validation of a CFD-ROM method for real-time simulations of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced order models (ROMs) from validated CFD simulations. The test case used in this work is a room of the Environmental Research Institute (ERI) Building at University College Cork (UCC). The ROMs have been shown to be sufficiently accurate, with a total error of less than 1%, and to successfully retain a satisfactory representation of the phenomena modelled. The number of zones in a ROM defines the size and complexity of that ROM, and it has been observed that ROMs with a higher number of zones produce more accurate results. As each ROM has a time to solution of less than 20 seconds, it can be integrated into the BEMS of a building, which opens the potential for real-time physics-based building energy modelling.
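The abstract does not give the form of the extracted ROMs. A common reduced-order form for zonal thermal models, assumed here purely for illustration, is a small linear ODE system dT/dt = A T + b over the zone temperatures, which a simple explicit time-stepper solves well within the 20-second budget quoted.

```python
# Hedged sketch of a zonal reduced-order thermal model, dT/dt = A @ T + b.
# The coupling coefficients and source term are made-up illustrative values.
import numpy as np

n_zones = 4
A = -0.05 * np.eye(n_zones)          # each zone relaxes toward ambient
for i in range(n_zones - 1):         # adjacent zones exchange heat
    A[i, i + 1] = A[i + 1, i] = 0.02
b = np.array([0.5, 0.0, 0.0, 0.0])   # heat source (e.g. radiator) in zone 0

T = np.full(n_zones, 20.0)           # initial zone temperatures, degC
dt = 1.0
for _ in range(3600):                # one simulated hour, explicit Euler
    T = T + dt * (A @ T + b)
print(np.round(T, 2))
```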
Abstract:
Accurate modelling of the internal climate of buildings is essential if Building Energy Management Systems (BEMS) are to efficiently maintain adequate thermal comfort. Computational fluid dynamics (CFD) models are usually utilised to predict the internal climate. Nevertheless, CFD models, although providing the necessary level of accuracy, are highly computationally expensive and cannot practically be integrated in BEMS. This paper presents and describes the validation of a CFD-ROM method for real-time simulations of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced order models (ROMs) from validated CFD simulations. The ROMs are shown to be adequately accurate, with a total error below 5%, and to retain a satisfactory representation of the phenomena modelled. Each ROM has a time to solution under 20 seconds, which opens the potential of their integration with BEMS, giving real-time physics-based building energy modelling. A parameter study was conducted to investigate the applicability of the extracted ROM to initial boundary conditions different from those from which it was extracted. The results show that the ROMs retained satisfactory total errors when the initial conditions in the room were varied by ±5°C. This allows the production of a finite number of ROMs with the ability to rapidly model many possible scenarios.
Abstract:
Cerebral glioma is the most prevalent primary brain tumor; gliomas are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis. Low-grade gliomas are slow-growing, less malignant and respond better to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for automatic extraction of low- and high-grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess the growth rate of tumors. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray-level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manual ground-truth images, which showed promising results. The developed methods were compared with the widely used Fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was also checked for different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumor texture grading and detection. Based on the thresholds of discriminant first-order and gray-level co-occurrence matrix (GLCM) based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. The quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to a more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can provide an idea of the growth rate of the tumor; this can be used for assessing response to therapy and patient prognosis.
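The second-order statistics mentioned above are standard GLCM texture features. A minimal sketch with scikit-image follows; the thesis's actual feature sets and decision thresholds are not reproduced, and the random array merely stands in for a tumor region of interest.

```python
# Sketch of GLCM-based second-order texture features of the kind used for
# grade detection from T2-weighted MRI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```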
Abstract:
Efficient optic disc segmentation is an important task in automated retinal screening, and optic disc detection is likewise fundamental as a medical reference and important for retinal image analysis applications. The most difficult problem in optic disc extraction is locating the region of interest; moreover, it is a time-consuming task. This paper tries to overcome this barrier by presenting an automated method for optic disc boundary extraction using Fuzzy C-Means clustering combined with thresholding. The discs determined by the new method agree relatively well with those determined by the experts. The method has been validated on a data set of 110 colour fundus images from the DRION database and has obtained promising results. The performance of the system is evaluated using the differences in horizontal and vertical diameters between the obtained disc boundary and the ground truth obtained from two expert ophthalmologists. For the 25 test images selected from the 110 colour fundus images, the Pearson correlations of the detected diameters with the ground-truth diameters from the two experts are 0.946 and 0.958, and 0.94 and 0.974, respectively. The scatter plot shows that the ground-truth and detected diameters have a high positive correlation. This computerized analysis of the optic disc is very useful for the diagnosis of retinal diseases.
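For reference, the clustering step named above can be sketched in a few lines of NumPy. This is a generic Fuzzy C-Means on pixel intensities, assuming two clusters and synthetic data; the paper's combination with thresholding and any pre/post-processing are not shown.

```python
# Generic Fuzzy C-Means on 1-D pixel intensities (fuzzifier m = 2).
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=50, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # membership-weighted centres
        d = np.abs(x[None, :] - centers[:, None]) + eps
        u = d ** (-2 / (m - 1))
        u /= u.sum(axis=0)
    return centers, u

rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(40, 5, 500),     # dark background
                         rng.normal(200, 10, 100)])  # bright disc candidates
centers, u = fuzzy_c_means(pixels)
bright = pixels[u[np.argmax(centers)] > 0.5]         # candidate disc pixels
print(np.round(centers, 1), bright.size)
```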
Abstract:
Condition monitoring of wooden railway sleepers is generally carried out by visual inspection and, if necessary, some impact-acoustic examination carried out intuitively by skilled personnel. In this work, a pattern recognition solution has been proposed to automate the process and achieve robust results. The study presents a comparison of several pattern recognition techniques together with various non-stationary feature extraction techniques for classification of impact-acoustic emissions. Pattern classifiers such as the multilayer perceptron, learning vector quantization and Gaussian mixture models are combined with non-stationary feature extraction techniques such as the Short Time Fourier Transform, Continuous Wavelet Transform, Discrete Wavelet Transform and Wigner-Ville Distribution. Given the presence of several different feature extraction and classification techniques, data fusion has been investigated, mainly on two levels: the feature level and the classifier level. Fusion at the feature level demonstrated the best results, with an overall accuracy of 82% when compared to the human operator.
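One of the pipeline variants compared in the study, STFT features feeding a multilayer perceptron, can be sketched with SciPy and scikit-learn. The sine-plus-noise signals below are synthetic placeholders for the impact-acoustic recordings, and the feature choice (time-averaged magnitude spectrum) is an assumption.

```python
# Sketch of one STFT + MLP pipeline variant on synthetic stand-in signals.
import numpy as np
from scipy.signal import stft
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def stft_features(sig, fs=8000):
    _, _, Z = stft(sig, fs=fs, nperseg=256)
    return np.abs(Z).mean(axis=1)           # average spectrum over time

good = [np.sin(2 * np.pi * 400 * np.arange(2048) / 8000) +
        0.1 * rng.normal(size=2048) for _ in range(40)]
bad = [np.sin(2 * np.pi * 150 * np.arange(2048) / 8000) +
       0.1 * rng.normal(size=2048) for _ in range(40)]

X = np.array([stft_features(s) for s in good + bad])
y = np.array([0] * 40 + [1] * 40)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```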
Abstract:
This article presents an automatic methodology for the extraction of road seeds from high-resolution aerial images. The method is based on a set of four road objects and a set of connection rules among road objects. Each road object is a local representation of an approximately straight road fragment, and its construction is based on a combination of polygons describing all relevant image edges, according to rules embodying road knowledge. Each road seed is composed of a sequence of connected road objects, and each such sequence can be geometrically structured as a chain of contiguous quadrilaterals. Experiments carried out with high-resolution aerial images showed that the proposed methodology is very promising for extracting road seeds. This article presents the fundamentals of the method as well as the experimental results.
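Purely as an illustration of the chain-of-quadrilaterals structure, a road seed can be grown by a simple direction-continuity rule; the quadrilaterals, the 20-degree threshold and the connectable test below are hypothetical stand-ins for the paper's road objects and connection rules.

```python
# Toy road seed: a chain of quadrilaterals grown under a heading-continuity
# rule (roads are locally straight). All values are illustrative.
import math

def direction(quad):
    # Axis of a quadrilateral: from the midpoint of its first edge to the
    # midpoint of its last edge.
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    return math.atan2((y2 + y3) - (y0 + y1), (x2 + x3) - (x0 + x1))

def connectable(q1, q2, max_turn_deg=20.0):
    turn = abs(direction(q1) - direction(q2))
    return math.degrees(turn) < max_turn_deg

q1 = [(0, 0), (0, 2), (10, 0), (10, 2)]
q2 = [(10, 0), (10, 2), (20, 0.5), (20, 2.5)]
seed = [q1]
if connectable(seed[-1], q2):
    seed.append(q2)
print(len(seed))   # 2: the second quadrilateral joins the seed
```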
Abstract:
The continued growth of large cities is producing increasing volumes of urban sewage sludge. Disposing of this waste without damaging the environment requires careful management. The application of large quantities of biosolids (treated sewage sludge) to agricultural lands for many years may result in the excessive accumulation of nutrients like phosphorus (P) and thereby raise the risk of eutrophication in nearby water bodies. We evaluated the fractionation of P in samples of an Oxisol collected as part of a field experiment in which biosolids were added at three rates to a maize (Zea mays L.) plantation over four consecutive years. The biosolids treatments were equivalent to one, two and four times the recommended N rate for maize crops. In a fourth treatment, mineral fertilizer was applied at the rate recommended for maize. Inorganic P forms were extracted with ammonium chloride to remove soluble and loosely bound P; P bound to aluminum oxide (P-Al) was extracted with ammonium fluoride; P bound to iron oxide (P-Fe) was extracted with sodium hydroxide; and P bound to calcium (P-Ca) was extracted with sulfuric acid. Organic P was calculated as the difference between total P and inorganic P. The predominant fraction of P was P-Fe, followed by P-Al and P-Ca. The P fractions were positively correlated with the amounts of P applied, except for P-Ca. The low values of P-Ca were due to the advanced weathering processes to which the Oxisol has been subjected, under which forms of P-Ca are converted to P-Fe and P-Al. Fertilization with P via biosolids increased P availability for maize plants even though a large portion of the P was converted to more stable forms. Phosphorus content in maize leaves and grains was positively correlated with P fractions in the soil. From these results it can be concluded that the application of biosolids to highly weathered tropical clayey soils for many years, even above the recommended rate based on N requirements for maize, tends to be less potentially hazardous to the environment than in less weathered sandy soils, because the non-readily available P fractions predominate after the addition of biosolids. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work: producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, and thus speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as Named Entity recognition, Entity resolution, and Taxonomy and Relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing, with the ability to logically understand the structure of discourse [7]. In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
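The first pipeline stages mentioned, Named Entity recognition and relation extraction, can be illustrated with spaCy. The naive subject-verb-object pattern below is a toy stand-in for the frame extraction the thesis describes, and it assumes the en_core_web_sm model is installed.

```python
# Named Entity recognition plus a crude subject-verb-object relation
# pattern over the dependency parse; a toy stand-in for frame extraction.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Marie Curie discovered polonium in Paris in 1898.")

for ent in doc.ents:
    print(ent.text, ent.label_)          # e.g. PERSON, GPE, DATE

for token in doc:
    if token.dep_ == "ROOT":
        subj = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
        obj = [w for w in token.rights if w.dep_ in ("dobj", "attr")]
        if subj and obj:
            print(subj[0].text, token.lemma_, obj[0].text)
```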
Abstract:
The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems by considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, we are not able to detect a reliable set of features. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands like wildfire toward low-quality ones. A precise estimation of the orientation field would greatly simplify the estimation of other fingerprint features (singular points, minutiae) and improve the performance of a fingerprint recognition system. The fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, the introduction of a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves the orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of the framework is discussed by providing relevant statistics: more than 1450 algorithms submitted and two international competitions.
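As context for the baseline methods the thesis taxonomizes, the classic gradient-based orientation field estimator can be sketched in NumPy: block-wise averaging of squared gradients, with the ridge orientation perpendicular to the dominant gradient direction. The block size and the random stand-in image are illustrative, not the thesis's proposed hybrid method.

```python
# Baseline gradient-based orientation field: per-block averaged squared
# gradients, ridge angle perpendicular to the dominant gradient.
import numpy as np

def orientation_field(img, block=16):
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            gxx = np.sum(gx[sl] ** 2)
            gyy = np.sum(gy[sl] ** 2)
            gxy = np.sum(gx[sl] * gy[sl])
            # Ridge orientation is perpendicular to the dominant gradient.
            theta[i, j] = 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2
    return theta

img = np.random.default_rng(0).random((128, 128))   # stand-in fingerprint
print(orientation_field(img).shape)                 # (8, 8) block orientations
```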
Abstract:
Recently developed computer applications provide tools for planning cranio-maxillofacial interventions based on 3-dimensional (3D) virtual models of the patient's skull obtained from computed tomography (CT) scans. Precise knowledge of the location of the mid-facial plane is important for the assessment of deformities and for planning reconstructive procedures. In this work, a new method is presented to automatically compute the mid-facial plane on the basis of a surface model of the facial skeleton obtained from CT. The method matches homologous surface areas, selected by the user on the left and right facial sides, using an iterative closest point optimization, and then computes the symmetry plane which best approximates this matching transformation. The new automatic method was evaluated in an experimental study that included experienced and inexperienced clinicians defining the symmetry plane by selecting landmarks. This manual definition was systematically compared with the definition produced by the new automatic method: the quality of the symmetry planes was evaluated by their ability to match homologous areas of the face. Results show that the new automatic method is reliable and leads to significantly higher accuracy than the manual method when the latter is performed by inexperienced clinicians. In addition, the method performs equally well in difficult trauma situations, where key landmarks are unreliable or absent.
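A hedged sketch of the core matching idea: reflect one side's points, pair each reflected point with its nearest neighbour on the other side (one closest-point step of the ICP loop), and fit a plane through the midpoints of the matched pairs. The point sets and the reflection across x = 0 are synthetic assumptions; the paper iterates this matching and derives the plane from the converged transformation.

```python
# One closest-point matching step plus a midpoint plane fit on synthetic
# near-mirror point sets.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
right = rng.normal(size=(200, 3)) + [5.0, 0.0, 0.0]   # right-side points
left = right * [-1.0, 1.0, 1.0]                       # mirrored counterparts
left += rng.normal(scale=0.01, size=left.shape)       # slight asymmetry

# Reflect the left side and pair each point with its nearest right point.
_, idx = cKDTree(right).query(left * [-1.0, 1.0, 1.0])

# The symmetry plane passes through the midpoints of matched pairs, with
# its normal along the average left-to-right direction.
mid = (left + right[idx]) / 2.0
n = (right[idx] - left).mean(axis=0)
n /= np.linalg.norm(n)
d = float(mid.mean(axis=0) @ n)
print(np.round(n, 3), round(d, 3))   # ~[1 0 0], ~0: the plane x = 0
```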
Abstract:
Automatic identification and extraction of bone contours from X-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical-model-based framework for proximal femur contour extraction from calibrated X-ray images. The automatic initialization is solved by an estimation of Bayesian network algorithm that fits a multiple-component geometrical model to the X-ray data. The contour extraction is accomplished by a non-rigid 2D/3D registration between a 3D statistical model and the X-ray images, in which bone contours are extracted by graphical-model-based Bayesian inference. Preliminary experiments on clinical data sets verified the validity of the approach.