895 results for query: extraction
Abstract:
Information representation is a critical issue in machine vision. The representation strategy in the primitive stages of a vision system has enormous implications for the performance in subsequent stages. Existing feature extraction paradigms, like edge detection, provide sparse and unreliable representations of the image information. In this thesis, we propose a novel feature extraction paradigm. The features consist of salient, simple parts of regions bounded by zero-crossings. The features are dense, stable, and robust. The primary advantage of the features is that they have abstract geometric attributes pertaining to their size and shape. To demonstrate the utility of the feature extraction paradigm, we apply it to passive navigation. We argue that the paradigm is applicable to other early vision problems.
Abstract:
Twing data model and queries. Evolution of algorithms for Twing queries. Evaluation of the presented algorithms. New challenges. Final considerations. Conclusions.
Abstract:
A method with carbon nanotubes functioning both as the adsorbent for solid-phase extraction (SPE) and the matrix for matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) to analyze small molecules in solution has been developed. In this method, 10 μL suspensions of carbon nanotubes in 50% (vol/vol) methanol were added to the sample solution to extract analytes onto the surface of the carbon nanotubes owing to their pronounced hydrophobicity. The carbon nanotubes in solution are deposited onto the bottom of the tube by centrifugation. After the supernatant fluid is removed, the carbon nanotubes are resuspended in dispersant and pipetted directly onto the sample target of the MALDI-MS instrument for mass spectrometric analysis. Analysis of a variety of small molecules demonstrated that the peak resolution and the desorption/ionization efficiency on carbon nanotubes are better than those on activated carbon. It was found that adding glycerol and sucrose to the dispersant greatly increased the intensity, signal-to-noise ratio (S/N), and peak resolution for analytes in the mass spectra. Compared with the previously reported method of depositing sample solution onto a thin layer of carbon nanotubes, the detection limit for analytes can be improved by about 10 to 100 times owing to the solid-phase extraction of analytes from solution by the carbon nanotubes. An acceptable result for the simultaneous quantitative analysis of three analytes in solution has been achieved, and the method has also been applied to the determination of drugs spiked into urine. © 2004 American Society for Mass Spectrometry.
Abstract:
Organophosphorus pesticides (OPPs) in vegetables were determined by stir bar sorptive extraction (SBSE) and capillary gas chromatography with thermionic specific detection (TSD). Hydroxy-terminated polydimethylsiloxane (PDMS) prepared by a sol-gel method was used as the extraction phase. The effects of extraction temperature, salting out, and extraction time on extraction efficiency were studied. The detection limits for OPPs in water were ≤1.2 ng/L. The method was also applied to the analysis of OPPs in vegetable samples, and the matrix effect was studied. Linear ranges for OPPs in vegetable samples were 0.05-50 ng/g, with detection limits ≤0.15 ng/g, and the repeatability of the method was better than 20% relative standard deviation. © 2005 Elsevier B.V. All rights reserved.
Abstract:
Li, Longzhuang, Liu, Yonghuai, Obregon, A., Weatherston, M. Visual Segmentation-Based Data Record Extraction From Web Documents. Proceedings of IEEE International Conference on Information Reuse and Integration, 2007, pp. 502-507. Sponsorship: IEEE
Abstract:
This paper reviews the fingerprint classification literature, looking at the problem from a dual perspective. We first deal with feature extraction methods, including the different models considered for singular point detection and for orientation map extraction. Then, we focus on the different learning models considered to build the classifiers used to label new fingerprints. Taxonomies and classifications for the feature extraction, singular point detection, orientation extraction, and learning methods are presented. A critical review of the existing literature has led us to a discussion of the existing methods and their drawbacks, such as the difficulty of reimplementing them, the lack of details, and major differences in their evaluation procedures. On this account, an experimental analysis of the most relevant methods is carried out in the second part of this paper, and a new method based on their combination is presented.
Abstract:
A simple procedure is described for the isolation of caffeine from energy drinks by solid-phase extraction on a C18 cartridge. The amount of caffeine is then quantified by LC/MS with reference to a standard curve.
Abstract:
A system is described that tracks moving objects in a video dataset so as to extract a representation of the objects' 3D trajectories. The system then finds hierarchical clusters of similar trajectories in the video dataset. Objects' motion trajectories are extracted via an EKF formulation that provides each object's 3D trajectory up to a constant factor. To increase accuracy when occlusions occur, multiple tracking hypotheses are followed. For trajectory-based clustering and retrieval, a modified version of edit distance, called longest common subsequence (LCSS), is employed. Similarities are computed between projections of trajectories on coordinate axes. Trajectories are grouped using an agglomerative clustering algorithm. To check the validity of the approach, experiments using real data were performed.
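The LCSS similarity between trajectory projections mentioned in this abstract can be sketched as follows. This is a minimal illustration: the matching threshold `eps` and the normalization by the shorter sequence length are common conventions, not necessarily the paper's exact formulation.

```python
def lcss(a, b, eps=0.5):
    """Length of the longest common subsequence of two 1D sequences,
    where two samples match if they differ by less than eps."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) < eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcss_similarity(a, b, eps=0.5):
    """Normalized similarity in [0, 1]; 1 means near-identical projections."""
    return lcss(a, b, eps) / min(len(a), len(b))

# Two x-axis projections: 4 of 5 samples agree to within eps
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.1, 5.0, 3.1, 4.1]
print(lcss_similarity(x, y))  # → 0.8
```

Unlike plain Euclidean comparison, LCSS tolerates outlier samples (such as the 5.0 above), which is what makes it attractive for noisy tracked trajectories.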
Abstract:
A common problem in many types of databases is retrieving the most similar matches to a query object. Finding those matches in a large database can be too slow to be practical, especially in domains where objects are compared using computationally expensive similarity (or distance) measures. This paper proposes a novel method for approximate nearest neighbor retrieval in such spaces. Our method is embedding-based, meaning that it constructs a function that maps objects into a real vector space. The mapping preserves a large amount of the proximity structure of the original space, and it can be used to rapidly obtain a short list of likely matches to the query. The main novelty of our method is that it constructs, together with the embedding, a query-sensitive distance measure that should be used when measuring distances in the vector space. The term "query-sensitive" means that the distance measure changes depending on the current query object. We report experiments with an image database of handwritten digits, and a time-series database. In both cases, the proposed method outperforms existing state-of-the-art embedding methods, meaning that it provides significantly better trade-offs between efficiency and retrieval accuracy.
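The embedding idea in this abstract can be illustrated with a toy sketch: each object is mapped to a vector of distances to a few fixed reference objects, and distances in the embedded space are measured with a weighted L1 norm whose weights depend on the query. The weight rule below (down-weighting coordinates where the query is far from the reference) is an illustrative assumption, not the paper's actual construction.

```python
def embed(x, refs, dist):
    """Map an object to a real vector via distances to reference objects."""
    return [dist(x, r) for r in refs]

def query_sensitive_distance(q_vec, x_vec):
    """Weighted L1 distance; the weights are a function of the query's
    embedded coordinates, so the measure changes per query (toy rule)."""
    weights = [1.0 / (1.0 + qc) for qc in q_vec]
    return sum(w * abs(qc - xc) for w, qc, xc in zip(weights, q_vec, x_vec))

# Example with scalar objects; abs-difference stands in for an expensive metric.
dist = lambda a, b: abs(a - b)
refs = [0.0, 10.0]            # hypothetical reference objects
database = [1.0, 4.0, 9.0]
query = 8.5

q_vec = embed(query, refs, dist)
ranked = sorted(database,
                key=lambda x: query_sensitive_distance(q_vec, embed(x, refs, dist)))
print(ranked[0])  # → 9.0, the true nearest neighbor of 8.5
```

The cheap vector-space ranking produces the short list; the expensive original distance is then applied only to those few candidates.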
Abstract:
Personal communication devices are increasingly equipped with sensors that are able to collect and locally store information from their environs. The mobility of users carrying such devices, and hence the mobility of sensor readings in space and time, opens new horizons for interesting applications. In particular, we envision a system in which the collective sensing, storage and communication resources, and mobility of these devices could be leveraged to query the state of (possibly remote) neighborhoods. Such queries would have spatio-temporal constraints which must be met for the query answers to be useful. Using a simplified mobility model, we analytically quantify the benefits from cooperation (in terms of the system's ability to satisfy spatio-temporal constraints), which we show to go beyond simple space-time tradeoffs. In managing the limited storage resources of such cooperative systems, the goal should be to minimize the number of unsatisfiable spatio-temporal constraints. We show that Data Centric Storage (DCS), or "directed placement", is a viable approach for achieving this goal, but only when the underlying network is well connected. Alternatively, we propose "amorphous placement", in which sensory samples are cached locally, and shuffling of cached samples is used to diffuse the sensory data throughout the whole network. We evaluate conditions under which directed versus amorphous placement strategies would be more efficient. These results lead us to propose a hybrid placement strategy, in which the spatio-temporal constraints associated with a sensory data type determine the most appropriate placement strategy for that data type. We perform an extensive simulation study to evaluate the performance of directed, amorphous, and hybrid placement protocols when applied to queries that are subject to timing constraints. Our results show that directed placement is better for queries with moderately tight deadlines, whereas amorphous placement is better for queries with looser deadlines, and that under most operational conditions the hybrid technique gives the best compromise.
Abstract:
Ongoing research at Boston University has produced computational models of biological vision and learning that embody a growing corpus of scientific data and predictions. Vision models perform long-range grouping and figure/ground segmentation, and memory models create attentionally controlled recognition codes that intrinsically combine bottom-up activation and top-down learned expectations. These two streams of research form the foundation of novel dynamically integrated systems for image understanding. Simulations using multispectral images illustrate road completion across occlusions in a cluttered scene and information fusion from incorrect labels that are simultaneously inconsistent and correct. The CNS Vision and Technology Labs (cns.bu.edu/visionlab and cns.bu.edu/techlab) are further integrating science and technology through analysis, testing, and development of cognitive and neural models for large-scale applications, complemented by software specification and code distribution.
Abstract:
This paper shows how knowledge, in the form of fuzzy rules, can be derived from a self-organizing supervised learning neural network called fuzzy ARTMAP. Rule extraction proceeds in two stages: pruning removes those recognition nodes whose confidence index falls below a selected threshold; and quantization of continuous learned weights allows the final system state to be translated into a usable set of rules. Simulations on a medical prediction problem, the Pima Indian Diabetes (PID) database, illustrate the method. In the simulations, pruned networks about 1/3 the size of the original actually show improved performance. Quantization yields comprehensible rules with only slight degradation in test set prediction performance.
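The two-stage rule extraction described in this abstract (pruning, then weight quantization) can be sketched schematically. The node representation, the confidence threshold, and the three-level quantization below are illustrative assumptions, not the paper's exact procedure.

```python
def extract_rules(nodes, conf_threshold=0.5, levels=3):
    """nodes: list of (confidence, weight_vector) pairs, weights in [0, 1].
    Returns quantized weight vectors for the nodes that survive pruning."""
    rules = []
    for confidence, weights in nodes:
        if confidence < conf_threshold:
            continue  # stage 1: prune low-confidence recognition nodes
        # stage 2: quantize each continuous weight to `levels` discrete values,
        # turning the learned weights into a usable, comprehensible rule
        quantized = [round(w * (levels - 1)) / (levels - 1) for w in weights]
        rules.append(quantized)
    return rules

nodes = [
    (0.9, [0.12, 0.95, 0.48]),
    (0.2, [0.50, 0.50, 0.50]),  # pruned: confidence below threshold
    (0.7, [0.80, 0.10, 0.60]),
]
print(extract_rules(nodes))  # → [[0.0, 1.0, 0.5], [1.0, 0.0, 0.5]]
```

With three levels the surviving weights collapse to {0, 0.5, 1}, which can be read directly as low/medium/high antecedents of a fuzzy rule.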
Abstract:
Capable of three-dimensional imaging of the cornea with micrometer-scale resolution, spectral domain-optical coherence tomography (SDOCT) offers potential advantages over Placido ring and Scheimpflug photography based systems for accurate extraction of quantitative keratometric parameters. In this work, an SDOCT scanning protocol and motion correction algorithm were implemented to minimize the effects of patient motion during data acquisition. Procedures are described for correction of image data artifacts resulting from 3D refraction of SDOCT light in the cornea and from non-idealities of the scanning system geometry, performed as a prerequisite for accurate parameter extraction. Zernike polynomial 3D reconstruction and a recursive half searching algorithm (RHSA) were implemented to extract clinical keratometric parameters including anterior and posterior radii of curvature, central corneal optical power, central corneal thickness, and thickness maps of the cornea. Accuracy and repeatability of the extracted parameters obtained using a commercial 859 nm SDOCT retinal imaging system with a corneal adapter were assessed using a rigid gas permeable (RGP) contact lens as a phantom target. Extraction of these parameters was performed in vivo in 3 patients and compared to commercial Placido topography and Scheimpflug photography systems. The repeatability of SDOCT central corneal power measured in vivo was 0.18 diopters, and the difference observed between the systems averaged 0.1 diopters between SDOCT and Scheimpflug photography, and 0.6 diopters between SDOCT and Placido topography.
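Central corneal power values like those compared in this abstract are conventionally derived from the anterior radius of curvature via the standard keratometric index (n_k = 1.3375). A one-line sketch of that standard relation (not necessarily the exact corneal model used in the paper):

```python
def keratometric_power(radius_mm, n_k=1.3375):
    """Central corneal power in diopters from the anterior radius of
    curvature in millimeters, using the standard keratometric index."""
    return (n_k - 1.0) / (radius_mm / 1000.0)  # P = (n_k - 1) / R, R in meters

print(round(keratometric_power(7.8), 2))  # → 43.27 D for a typical cornea
```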
Abstract:
Gemstone Team WAVES (Water and Versatile Energy Systems)
Abstract:
Photon correlation spectroscopy (PCS) is a light-scattering technique for particle size diagnosis. It has been used mainly in the investigation of hydrosol particles since it is based on the measurement of the correlation function of the light scattered from the Brownian motion of suspended particles. Recently this technique also proved useful for studying soot particles in flames and similar aerosol systems. In the case of a polydispersed system the problem of recovering the particle size distribution can be reduced to the problem of inverting the Laplace transform. In this paper we review several methods introduced by the authors for the solution of this problem. We present some numerical results and we discuss the resolution limits characterizing the reconstruction of the size distributions. © 1989.
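The inversion problem described in this abstract can be sketched numerically: the measured correlation function is a Laplace transform of the decay-rate distribution, g(τ) = Σ_j G_j exp(-Γ_j τ), and recovering G requires regularization because the inversion is ill-posed. The grids and the simple Tikhonov regularization below are illustrative choices, not the authors' specific methods.

```python
import numpy as np

tau = np.linspace(0.01, 5.0, 100)      # correlation delay times
gamma = np.linspace(0.1, 10.0, 40)     # decay-rate grid
K = np.exp(-np.outer(tau, gamma))      # discretized Laplace kernel

# Synthetic noiseless "measurement": a single decay rate at Gamma = 2.0
g_true = np.zeros_like(gamma)
g_true[np.argmin(np.abs(gamma - 2.0))] = 1.0
g_meas = K @ g_true

# Tikhonov-regularized least squares: minimize ||K G - g||^2 + lam ||G||^2,
# solved by stacking the penalty into an augmented linear system
lam = 1e-3
A = np.vstack([K, np.sqrt(lam) * np.eye(len(gamma))])
b = np.concatenate([g_meas, np.zeros(len(gamma))])
G_est, *_ = np.linalg.lstsq(A, b, rcond=None)

# The recovered distribution peaks near the true decay rate
print(gamma[np.argmax(G_est)])
```

Without the regularization term the reconstruction is extremely noise-sensitive, which is the resolution limit the abstract alludes to: nearby decay rates produce nearly identical correlation functions.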