927 results for feature inspection method


Relevance:

80.00%

Publisher:

Abstract:

Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and chemically more complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram, which motivates the development of an improved tomographic reconstruction process. This thesis therefore proposes a novel ET method, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) adaptively and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, sparsity is applied to overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, yielding better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate the efficiency of the method in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to choose a sparsifying transform in advance or to ensure that the images strictly satisfy the preconditions of a particular transform (e.g. being strictly piecewise constant for Total Variation minimisation). It also avoids artifacts that specific sparsifying transforms can introduce (e.g. the staircase artifacts that may result from Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography using electron energy loss spectroscopy (EELS) is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss EELS from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
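The core patch-based dictionary learning and sparse coding step can be illustrated with a minimal 2D sketch. This is not the thesis' DLET implementation; it simply learns a dictionary on overlapping patches of a degraded slice and rebuilds the slice from their sparse codes, with patch size, dictionary size and sparsity level chosen arbitrarily:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dictionary_reconstruct(slice_2d, patch=8, n_atoms=64, n_nonzero=4):
    # Extract all overlapping patches (in order, so the slice can be rebuilt).
    patches = extract_patches_2d(slice_2d, (patch, patch))
    data = patches.reshape(len(patches), -1).astype(float)
    means = data.mean(axis=1, keepdims=True)
    data -= means                                             # learn on zero-mean patches
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       batch_size=256,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    code = dico.fit(data).transform(data)                     # sparse code per patch
    approx = code @ dico.components_ + means                  # rebuild each patch from a few atoms
    return reconstruct_from_patches_2d(approx.reshape(patches.shape), slice_2d.shape)

# Stand-in for one noisy/streaky tomogram slice:
noisy_slice = np.random.rand(64, 64)
print(dictionary_reconstruct(noisy_slice).shape)              # (64, 64)
```

In DLET proper, this dictionary and sparse-coding update alternates with a data-fidelity step that enforces consistency with the measured tilt series; the sketch above only covers the sparse-coding half of that alternation.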

Relevance:

40.00%

Publisher:

Abstract:

This thesis presents novel vision-based control solutions that enable fixed-wing Unmanned Aerial Vehicles (UAVs) to perform inspection tasks over infrastructure such as power lines, pipelines and roads. This is achieved through the development of techniques that combine visual servoing with alternate manoeuvres that assist the UAV in both following and observing the feature from a downward-facing camera. Control designs are developed using Image-Based Visual Servoing techniques that exploit sideslip through Skid-to-Turn and Forward-Slip manoeuvres. This allows the UAV to simultaneously track and collect data over the length of the infrastructure, including straight segments and the transitions where these meet.
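As a rough illustration of the image-based servoing idea (not the thesis' control laws), a proportional law can map the tracked line's offset and orientation in the downward-facing image to a turn command; the feature definitions and gains below are purely hypothetical:

```python
# Minimal sketch: a proportional image-based line-following law for a
# downward-facing camera. The detected line is described by its lateral
# offset from the image centre and its angle to the image vertical axis.

import numpy as np

def skid_to_turn_command(offset_px: float, angle_rad: float,
                         k_offset: float = 0.004, k_angle: float = 0.8,
                         max_yaw_rate: float = 0.3) -> float:
    """Return a yaw-rate command [rad/s] that re-centres the tracked line.

    offset_px : signed pixel offset of the line from the image centre
    angle_rad : signed angle between the line and the image vertical axis
    """
    yaw_rate = -(k_offset * offset_px + k_angle * angle_rad)
    return float(np.clip(yaw_rate, -max_yaw_rate, max_yaw_rate))

# Example: line 40 px to the right of centre and rotated by 5 degrees
print(skid_to_turn_command(40.0, np.deg2rad(5.0)))
```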

Relevance:

40.00%

Publisher:

Abstract:

In people-to-people matching systems, filtering is widely applied to find the most suitable matches. The results returned are either too numerous when the search is generic or too few when it is specific, so a more sophisticated recommendation approach becomes necessary. Traditionally, the object of recommendation is an inanimate item. In online dating systems, reciprocal recommendation is required: a partner should only be suggested when both the user and the recommended candidate are satisfied. In this paper, an innovative reciprocal collaborative method is developed based on the idea of similarity and common neighbors, utilizing the information of relevance feedback and feature importance. Extensive experiments are carried out using data gathered from a real online dating service. Compared to benchmark methods, our results show that the proposed method achieves noticeably better performance.
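A minimal sketch of the reciprocal idea (not the paper's algorithm): each direction gets its own preference score, here a toy common-neighbor overlap, and the two are combined with a harmonic mean so that one-sided interest scores poorly. All names and values below are illustrative:

```python
# Reciprocal scoring sketch: a match is only recommended when BOTH sides
# score well, which the harmonic mean enforces.

def reciprocal_score(score_u_to_v: float, score_v_to_u: float) -> float:
    """Harmonic mean of the two directional scores in [0, 1]."""
    if score_u_to_v <= 0 or score_v_to_u <= 0:
        return 0.0
    return 2 * score_u_to_v * score_v_to_u / (score_u_to_v + score_v_to_u)

def common_neighbor_score(contacted_by_u: set, contacted_by_similar_users: set) -> float:
    """Toy directional score: Jaccard overlap between the candidates u
    contacted and those contacted by users similar to u."""
    union = contacted_by_u | contacted_by_similar_users
    return len(contacted_by_u & contacted_by_similar_users) / len(union) if union else 0.0

# Example: strong interest one way, weak the other, is penalised.
print(reciprocal_score(0.8, 0.3))
```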

Relevance:

40.00%

Publisher:

Abstract:

Online reviews have become increasingly important in the decision-making process. In recent years, the problem of identifying useful reviews for users has attracted significant attention. For instance, in order to select reviews that focus on a particular feature, researchers have proposed a method that extracts all words associated with this feature as the relevant information for evaluating and finding appropriate reviews. However, the extraction of associated words is inaccurate due to the noise in free review text, which negatively affects overall performance. In this paper, we propose a method to select reviews according to a given feature by using a review model generated from a domain ontology called the product feature taxonomy. The proposed review model provides relevant information about the hierarchical relationships of the features in the review, which captures the review characteristics accurately. Our experimental results on a real-world review dataset show that our approach effectively improves review selection performance according to the given criteria.
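As a toy illustration of using a feature hierarchy rather than a flat word list (this is not the paper's review model), a review can be scored against a query feature by crediting mentions of the feature and, with decaying weight, its sub-features; the taxonomy, weights and reviews below are invented:

```python
# Hypothetical product feature taxonomy: feature -> direct sub-features.
TAXONOMY = {
    "battery": ["battery life", "charging"],
    "charging": ["fast charging"],
    "screen": ["resolution", "brightness"],
}

def descendants(feature, depth=0):
    """Yield (feature, depth) for the feature and everything below it."""
    yield feature, depth
    for child in TAXONOMY.get(feature, []):
        yield from descendants(child, depth + 1)

def review_score(review_text: str, feature: str, decay: float = 0.5) -> float:
    """Credit mentions of the feature and its sub-features, weighted by depth."""
    text = review_text.lower()
    return sum((decay ** d) * text.count(f) for f, d in descendants(feature))

reviews = ["Battery life is great and fast charging works well.",
           "The screen resolution is sharp."]
# Rank reviews for the query feature "battery".
print(sorted(reviews, key=lambda r: review_score(r, "battery"), reverse=True)[0])
```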

Relevance:

40.00%

Publisher:

Abstract:

There has been growing interest in alignment-free methods for phylogenetic analysis using complete genome data. Among them, the CVTree method, the feature frequency profiles method and the dynamical language approach have been used to investigate the whole-proteome phylogeny of large dsDNA viruses. Using the data set of large dsDNA viruses from Gao and Qi (BMC Evol. Biol. 2007), the phylogenetic results based on the CVTree method and the dynamical language approach were compared in Yu et al. (BMC Evol. Biol. 2010). In this paper, we first apply the dynamical language approach to the data set of large dsDNA viruses from Wu et al. (Proc. Natl. Acad. Sci. USA 2009) and compare our phylogenetic results with those based on the feature frequency profiles method. We then construct the whole-proteome phylogeny of a larger dataset combining the two data sets above. The trees from our analyses are in good agreement with the latest classification of large dsDNA viruses reported by the International Committee on Taxonomy of Viruses (ICTV).
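The flavour of such alignment-free comparisons can be sketched as follows (a simplified stand-in, not the CVTree, feature frequency profiles or dynamical language method): each proteome is reduced to a k-peptide frequency vector, pairwise distances are computed, and a tree is built from the distance matrix, here with hierarchical clustering on toy sequences:

```python
from itertools import product

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

AA = "ACDEFGHIKLMNPQRSTVWY"
K = 2
KMER_INDEX = {"".join(p): i for i, p in enumerate(product(AA, repeat=K))}

def kmer_frequencies(proteome):
    """Normalised k-peptide frequency vector of one (concatenated) proteome."""
    v = np.zeros(len(KMER_INDEX))
    for i in range(len(proteome) - K + 1):
        idx = KMER_INDEX.get(proteome[i:i + K])
        if idx is not None:
            v[idx] += 1
    return v / max(v.sum(), 1.0)

# Toy stand-ins for whole proteomes; real input would be every protein of a virus.
proteomes = {
    "virus_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "virus_B": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEAQ",
    "virus_C": "MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS",
}
X = np.array([kmer_frequencies(s) for s in proteomes.values()])
Z = linkage(pdist(X, metric="correlation"), method="average")
print(Z)   # the linkage matrix encodes the resulting clustering tree
```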

Relevance:

40.00%

Publisher:

Abstract:

There are over 600,000 bridges in the US, and not all of them can be inspected and maintained within the specified time frame. This is because manually inspecting bridges is a time-consuming and costly task, and some state Departments of Transportation (DOTs) cannot afford the necessary costs and manpower. In this paper, a novel method that can detect large-scale bridge concrete columns is proposed for the purpose of eventually creating an automated bridge condition assessment system. The method employs image stitching techniques (feature detection and matching, image affine transformation and blending) to combine images containing different segments of one column into a single image. Bridge columns are then detected by locating their boundaries and classifying the material within each boundary in the stitched image. Preliminary test results on 114 concrete bridge columns, stitched from 373 close-up, partial images of the columns, indicate that the method correctly detects 89.7% of these elements, demonstrating the viability of this line of research.
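The stitching steps named above (feature detection and matching, affine transformation, blending) can be sketched with OpenCV as follows; the file names, ratio threshold and blending scheme are placeholders, and this is not the authors' implementation:

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b, ratio=0.75):
    """Warp img_b onto img_a's frame using SIFT matches and an affine model."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]   # Lowe ratio test
    src = np.float32([kp_b[m.queryIdx].pt for m in good])
    dst = np.float32([kp_a[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)     # robust affine fit
    h, w = img_a.shape[:2]
    warped = cv2.warpAffine(img_b, M, (w, h * 2))                       # room for the second segment
    canvas = warped.copy()
    # Simple average blend where the two images overlap, otherwise keep img_a.
    canvas[:h] = np.where(canvas[:h] > 0, canvas[:h] * 0.5 + img_a * 0.5, img_a)
    return canvas.astype(np.uint8)

# Usage (paths are hypothetical):
# a = cv2.imread("column_top.jpg", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("column_bottom.jpg", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("column_stitched.jpg", stitch_pair(a, b))
```

An affine model is a reasonable choice here because consecutive close-up shots of the same column differ mainly by translation, rotation and scale.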

Relevance:

40.00%

Publisher:

Abstract:

Manually inspecting bridges is a time-consuming and costly task. There are over 600,000 bridges in the US, and not all of them can be inspected and maintained within the specified time frame, as some state DOTs cannot afford the necessary costs and manpower. This paper presents a novel method that can detect bridge concrete columns from visual data for the purpose of eventually creating an automated bridge condition assessment system. The method employs SIFT feature detection and matching to find overlapping areas among images. Affine transformation matrices are then calculated to combine images containing different segments of one column into a single image. The bridge columns are then detected by identifying the boundaries in the stitched image and classifying the material within each boundary. Preliminary test results using real bridge images indicate that most columns in stitched images can be correctly detected, demonstrating the viability of this line of research.

Relevance:

40.00%

Publisher:

Abstract:

Feature tracking is a key step in the derivation of Atmospheric Motion Vectors (AMVs). Most operational derivation processes use a template matching technique, such as Euclidean distance or cross-correlation, for the tracking step. As this step is computationally very expensive, short-range forecasts generated by Numerical Weather Prediction (NWP) systems are often used to reduce the search area. Alternatives, such as optical flow methods, have been explored with the aim of improving the number and quality of the vectors generated and the computational efficiency of the process. This paper presents research carried out to apply Stochastic Diffusion Search, a generic search technique from the Swarm Intelligence family, to feature tracking in the context of AMV derivation. The method is described and initial results are presented, with Euclidean distance template matching as the reference.
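For readers unfamiliar with Stochastic Diffusion Search, the sketch below locates a template in an image using the standard test/diffusion cycle: each agent holds a candidate position, tests a single random pixel of the template there, and inactive agents either copy an active agent's hypothesis or re-sample. The tolerance, agent count and toy data are illustrative, and this is not the paper's implementation:

```python
import numpy as np

def sds_template_search(image, template, n_agents=200, n_iters=100, tol=10, seed=0):
    """Locate `template` inside `image` with a minimal Stochastic Diffusion Search."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    h, w = template.shape
    # Each agent holds a hypothesis: the top-left corner of a candidate match.
    hyp = np.column_stack([rng.integers(0, H - h + 1, n_agents),
                           rng.integers(0, W - w + 1, n_agents)])
    active = np.zeros(n_agents, dtype=bool)
    for _ in range(n_iters):
        # Test phase: each agent checks a single random pixel of the template.
        ty = rng.integers(0, h, n_agents)
        tx = rng.integers(0, w, n_agents)
        diff = np.abs(image[hyp[:, 0] + ty, hyp[:, 1] + tx].astype(int)
                      - template[ty, tx].astype(int))
        active = diff < tol
        # Diffusion phase: each inactive agent copies a random active agent's
        # hypothesis, or re-samples uniformly if it picked an inactive one.
        for i in np.where(~active)[0]:
            j = rng.integers(0, n_agents)
            if active[j]:
                hyp[i] = hyp[j]
            else:
                hyp[i] = (rng.integers(0, H - h + 1), rng.integers(0, W - w + 1))
    # Report the hypothesis backed by the largest cluster of agents.
    return max(map(tuple, hyp), key=lambda p: int(np.sum(np.all(hyp == p, axis=1))))

# Toy usage: hide the template inside a random image and recover its position.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(60, 80))
tmpl = img[20:35, 30:50].copy()
print(sds_template_search(img, tmpl))   # typically converges to (20, 30)
```

Because each agent evaluates only one randomly chosen micro-feature per iteration, the per-iteration cost is far lower than evaluating the full template at every candidate position, which is the attraction of SDS for an expensive tracking step.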

Relevance:

40.00%

Publisher:

Abstract:

The aim of this study was to develop a fast capillary electrophoresis method for the determination of benzoate and sorbate ions in commercial beverages. During method development, the pH and constituents of the background electrolyte were selected using effective mobility versus pH curves. As the high resolution obtained experimentally for sorbate and benzoate in studies reported in the literature is not in agreement with that expected from the published ionic mobility values, a procedure to determine these values was carried out. The salicylate ion was used as the internal standard. The background electrolyte was composed of 25 mmol L⁻¹ tris(hydroxymethyl)aminomethane and 12.5 mmol L⁻¹ 2-hydroxyisobutyric acid, at pH 8.1. Separation was conducted in a fused-silica capillary (32 cm total length, 8.5 cm effective length, 50 μm I.D.), with a short-end injection configuration and direct UV detection at 200 nm for benzoate and salicylate and 254 nm for sorbate. The run time was only 28 s. Figures of merit of the proposed method include good linearity (R² > 0.999), limits of detection of 0.9 and 0.3 mg L⁻¹ for benzoate and sorbate, respectively, inter-day precision better than 2.7% (n = 9) and recovery in the range 97.9-105%. Beverage samples were prepared by simple dilution with deionized water (1:11, v/v). Concentrations in the range of 197-401 mg L⁻¹ for benzoate and 28-144 mg L⁻¹ for sorbate were found in soft drinks and tea.
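For context on the role of the salicylate internal standard, quantitation typically regresses the analyte-to-internal-standard peak-area ratio against the corresponding concentration ratio; the generic calibration below is illustrative and not taken from the paper:

```latex
% Generic internal-standard calibration (illustrative, not from the paper):
% A = peak area, C = concentration, a/b = fitted intercept and slope.
\[
\frac{A_{\text{analyte}}}{A_{\text{IS}}} = a + b\,\frac{C_{\text{analyte}}}{C_{\text{IS}}}
\qquad\Longrightarrow\qquad
C_{\text{analyte}} = \frac{C_{\text{IS}}}{b}\left(\frac{A_{\text{analyte}}}{A_{\text{IS}}} - a\right)
\]
```

The value obtained for the diluted sample is then scaled by the dilution factor implied by the 1:11 (v/v) preparation to recover the concentration in the original beverage.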

Relevance:

40.00%

Publisher:

Abstract:

The motivation for this thesis is the need to improve equipment reliability and the quality of service to railway passengers, together with the requirement for cost-effective and efficient condition maintenance management in rail transportation. The thesis develops a fusion of several machine vision analysis methods to achieve high performance in the automated inspection of wooden rail tracks.

Condition monitoring in rail transport is traditionally done manually by a human operator, relying on inference and assumptions to reach conclusions. Condition monitoring allows maintenance to be scheduled, or other actions to be taken, to avoid the consequences of failure before it occurs. Manual or automated condition monitoring of materials in public transportation fields such as railways, aerial navigation and traffic safety, where safety is of prime importance, requires non-destructive testing (NDT).

In general, wooden railway sleepers are inspected manually by a human operator who moves along the track and gathers information by visual and acoustic analysis to examine the presence of cracks and judge sleeper quality. In this work, a machine vision system is developed based on this manual visual procedure, using digital cameras and image-processing software to perform similar inspections. Manual inspection requires considerable effort, is prone to error, and can make discrimination difficult even for a human operator because of frequent changes in the inspected material. The machine vision system classifies the condition of the material by examining individual pixels of images, processing them, and drawing conclusions with the assistance of knowledge bases and extracted features.

A pattern recognition approach is developed from the methodological knowledge of the manual procedure and realised as a non-destructive testing method to identify flaws that arise in the manual condition monitoring of sleepers. A test vehicle is designed to capture sleeper images in a manner similar to visual inspection by a human operator, and the captured images of the wooden sleepers provide the raw data for the pattern recognition approach. The data from the NDT method are further processed and appropriate features extracted, with the aim of achieving highly reliable classification results. A key idea is to use an unsupervised classifier, based on the extracted features, to discriminate the condition of wooden sleepers as either good or bad; a self-organising map is used as the classifier.

To achieve greater integration, the data collected by the machine vision system are combined through fusion, considered at two levels: sensor-level fusion and feature-level fusion. As the goal was to reduce human error in classifying rail sleepers as good or bad, the feature-level fusion results, compared with the actual classifications, were satisfactory.
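Since the thesis names a self-organising map as the unsupervised classifier, a compact NumPy sketch of SOM training and best-matching-unit lookup is given below; the feature vectors, map size and learning schedule are stand-ins, not the thesis implementation:

```python
# Minimal self-organising map (SOM): cluster feature vectors extracted from
# sleeper images onto a small 2D grid of prototype vectors.

import numpy as np

def train_som(data, grid=(6, 6), n_iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinate of every unit, used by the neighbourhood function.
    coords = np.array([(r, c) for r in range(grid[0]) for c in range(grid[1])], float)
    for t in range(n_iters):
        frac = t / n_iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5   # decaying schedules
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))      # best-matching unit
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)          # squared grid distances
        h = np.exp(-d2 / (2 * sigma ** 2))                        # neighbourhood weights
        weights += lr * h[:, None] * (x - weights)
    return weights

def bmu_index(weights, x):
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Stand-in feature vectors (e.g. texture/crack descriptors per sleeper image):
features = np.random.rand(200, 8)
som = train_som(features)
print(bmu_index(som, features[0]))
```

After training, map units can be labelled as "good" or "bad" from a handful of inspected examples, and new sleepers classified by the label of their best-matching unit.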

Relevance:

40.00%

Publisher:

Abstract:

Craniosynostosis consists of a premature fusion of the sutures in an infant skull that restricts skull and brain growth. Over the last decades, there has been a rapid increase in fundamentally diverse surgical treatment methods. To date, surgical outcome has been assessed using global variables such as cephalic index, head circumference, and intracranial volume. However, these variables fail to describe the local deformations and morphological changes that may play a role in the neurologic disorders observed in these patients. This report describes a rigid-image-registration-based method for evaluating the outcome of craniosynostosis surgery, quantifying head growth locally, and measuring intracranial volume change indirectly. The developed semiautomatic analysis method was applied to computed tomography data sets of a 5-month-old boy with sagittal craniosynostosis who underwent expansion of the posterior skull with cranioplasty. Local changes between pre- and postoperative images were quantified by mapping the minimum distance of individual points on the preoperative surface mesh to the postoperative surface mesh, and intracranial volume changes were estimated indirectly. The proposed methodology can provide the surgeon with a tool for the quantitative evaluation of surgical procedures and the detection of abnormalities of the infant skull and its development.
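The per-point minimum-distance mapping can be sketched simply with a k-d tree over the postoperative mesh vertices (a vertex-to-nearest-vertex approximation of true point-to-surface distance; the vertex arrays below are placeholders and rigid registration is assumed already done):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distance_map(pre_vertices: np.ndarray, post_vertices: np.ndarray) -> np.ndarray:
    """Per-vertex minimum distance (same units as the input) from the
    preoperative surface to the postoperative surface (vertex cloud)."""
    tree = cKDTree(post_vertices)
    distances, _ = tree.query(pre_vertices)   # nearest post vertex per pre vertex
    return distances

# Stand-in vertex clouds (N x 3), e.g. extracted from pre/post CT segmentations.
pre = np.random.rand(1000, 3) * 100
post = pre + np.array([0.0, 0.0, 5.0])        # fake uniform 5 mm expansion
d = surface_distance_map(pre, post)
print(d.mean())                                # average local displacement of the toy example
```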

Relevance:

40.00%

Publisher:

Abstract:

Background: It is as yet unclear whether there are differences between using electronic key feature problems (KFPs) and electronic case-based multiple choice questions (cbMCQs) for the assessment of clinical decision making. Summary of Work: Fifth-year medical students completed clerkships that ended with a summative exam. Knowledge was assessed in each exam with 6-9 KFPs, 9-20 cbMCQs and 9-28 non-case-based MC questions. Each KFP consisted of a case vignette and three key features (KFs) using a "long menu" question format. We sought students' perceptions of the KFPs and cbMCQs in focus groups (n = 39 students). In addition, statistical data from 11 exams (n = 377 students) concerning the KFPs and (cb)MCQs were compared. Summary of Results: The analysis of the focus groups yielded four themes reflecting students' perceptions of KFPs and their comparison with (cb)MCQs: KFPs were perceived as (i) more realistic, (ii) more difficult and (iii) more motivating for the intensive study of clinical reasoning than (cb)MCQs, and (iv) showed good overall acceptance when certain preconditions are taken into account. The statistical analysis revealed no difference in difficulty; however, KFPs showed higher discrimination and reliability (G coefficient) even when corrected for testing time. The correlation between the different exam parts was moderate. Conclusions: Students perceived the KFPs as more motivating for the study of clinical reasoning. Statistically, KFPs showed higher discrimination and higher reliability than cbMCQs. Take-home messages: Including KFPs with long menu questions in summative clerkship exams seems to offer positive educational effects.
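The reported item discrimination can be understood through the usual corrected item-total correlation; the sketch below computes it for a hypothetical students-by-items score matrix (the data and item counts are invented, and this is not the study's analysis pipeline, which used a G coefficient for reliability):

```python
import numpy as np

def item_discrimination(scores: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score of the remaining items."""
    n_items = scores.shape[1]
    disc = np.empty(n_items)
    for j in range(n_items):
        rest = scores.sum(axis=1) - scores[:, j]   # total score excluding item j
        disc[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return disc

rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(377, 20)).astype(float)   # 377 students, 20 dichotomous items
print(np.round(item_discrimination(scores), 2))
```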

Relevance:

40.00%

Publisher:

Abstract:

Background: It is unclear to what extent there are differences between using key feature problems (KFPs) with long menu questions and case-based type A questions (FTA) for assessing clinical reasoning in the clinical education of medical students. Methods: Fifth-year medical students took part in their clinical paediatrics rotation, which ended with a summative exam. Knowledge was assessed electronically in each exam with 6-9 KFPs [1], [3], 9-20 FTA and 9-28 non-case-based multiple choice questions (NFTA). Each KFP consisted of a case vignette and three key features and used a so-called long menu [4] as the answer format. We examined the perception of the KFPs and FTA in focus groups [2] (n = 39 students). In addition, the statistical characteristics of the KFPs and FTA from 11 exams (n = 377 students) were compared. Results: The analysis of the focus groups yielded four themes reflecting the perception of the KFPs and their comparison with FTA: KFPs were perceived as (1) more realistic, (2) more difficult and (3) more motivating for intensive self-study of clinical reasoning than FTA, and (4) showed good overall acceptance provided certain preconditions are taken into account. The statistical analysis showed no difference in difficulty; however, the KFPs showed higher discrimination and reliability (G coefficient) even when corrected for testing time. The correlation between the different exam parts was moderate. Discussion/Conclusion: Students experienced the KFPs as more motivating for the self-study of clinical reasoning. Statistically, the KFPs showed greater discrimination and higher reliability than the FTA. Including KFPs with long menus in exams of the clinical phase of the degree programme appears promising and seems to have an educational effect.

Relevance:

40.00%

Publisher:

Abstract:

In this paper, we propose a novel filter for feature selection. The filter relies on the estimation of the mutual information between features and classes. We bypass the estimation of the probability density function with the aid of the entropic-graph approximation of the Rényi entropy and the subsequent approximation of the Shannon entropy. The complexity of this bypassing process depends not on the number of dimensions but on the number of patterns/samples, and thus the curse of dimensionality is circumvented. We show that it is then possible to outperform a greedy algorithm based on the maximal relevance and minimal redundancy criterion. We successfully test our method in the contexts of both image classification and microarray data classification.
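A minimal filter of this kind can be sketched with scikit-learn, using its k-nearest-neighbour mutual information estimator as a stand-in for the entropic-graph Rényi/Shannon approximation described above; the synthetic high-dimensional data imitates the microarray setting:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic "microarray-like" data: many features, few of them informative.
X, y = make_classification(n_samples=300, n_features=500, n_informative=15,
                           random_state=0)

# Filter feature selection: rank features by estimated mutual information
# with the class labels and keep the top k.
selector = SelectKBest(score_func=mutual_info_classif, k=20)
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)                # (300, 20)
print(np.sort(selector.scores_)[-5:])  # the five highest MI scores
```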