985 results for Automatic selection


Relevância:

100.00%

Publicador:

Resumo:

This work deals with noise removal by an edge-preserving method whose parameters are automatically estimated, for any application, simply by providing the standard deviation of the noise level we wish to eliminate. In a Partial Differential Equation based model, the desired noiseless image u(x) can be viewed as the solution of an evolutionary differential equation u_t(x) = F(u_xx, u_x, u, x, t), which means that the true solution is reached as t → ∞. In practical applications the evolution must be stopped at some finite time t. This work presents a sufficient condition, relating the time t to the standard deviation σ of the noise we wish to remove, which gives a constant T such that u(x, T) is a good approximation of u(x). The main characteristic of the approach is edge preservation during the noise elimination process. The balance between edge points and interior points is carried out by a function g which depends on the initial noisy image u(x, t_0), the standard deviation of the noise to be eliminated, and a constant k. The estimation of the parameter k is also presented in this work, making the proposed model automatic. The model's feasibility and the choice of the optimal time scale are evident throughout the various experimental results.
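
The abstract does not spell out the exact form of F or of the edge-stopping function g. As a rough illustration only, the sketch below assumes a Perona-Malik-type diffusion, in which g depends on the local gradient magnitude and the contrast parameter k, and a hypothetical stopping rule that ties the total diffusion time T to the noise standard deviation σ (here T ∝ σ², purely for illustration):

```python
import numpy as np

def edge_stopping(grad_mag, k):
    # Perona-Malik-type g: close to 1 in smooth regions, close to 0 near edges.
    return 1.0 / (1.0 + (grad_mag / k) ** 2)

def denoise(u0, sigma, k, dt=0.1):
    # Hypothetical stopping rule: total diffusion time T proportional to sigma**2.
    T = sigma ** 2
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max(1, int(T / dt))):
        # Forward differences for the gradient, backward differences for the
        # divergence of the edge-weighted flux (explicit first-order scheme).
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        g = edge_stopping(np.hypot(ux, uy), k)
        flux_x, flux_y = g * ux, g * uy
        div = (flux_x - np.roll(flux_x, 1, axis=1)) + (flux_y - np.roll(flux_y, 1, axis=0))
        u += dt * div
    return u
```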

Relevância:

100.00%

Publicador:

Resumo:

The main purpose of this work is the development of computational tools to assist the on-line automatic detection of burn in the surface grinding process. Most of the parameters currently employed in burn recognition (DPO, FKS, DPKS, DIFP, among others) do not incorporate routines for automatic selection of the grinding passes, thus requiring the user's intervention to choose the active region. Several methods were employed for pass extraction; those with the best results are presented in this article. Tests carried out on a surface-grinding machine demonstrate the success of the algorithms developed for pass extraction. Copyright © 2007 by ABCM.
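
The article's pass-extraction algorithms are not reproduced here. The sketch below only illustrates the general idea, assuming the active (cutting) region can be located by thresholding the moving RMS envelope of the monitoring signal (e.g., acoustic emission); the window length and threshold fraction are illustrative:

```python
import numpy as np

def extract_passes(signal, fs, window_s=0.01, rel_threshold=0.2):
    """Return (start, end) sample indices of candidate grinding passes.
    Assumes the active region has a markedly higher RMS level than idle
    intervals; window length and threshold fraction are illustrative."""
    win = max(1, int(window_s * fs))
    # Moving RMS envelope of the monitoring signal (e.g., acoustic emission).
    rms = np.sqrt(np.convolve(np.asarray(signal, float) ** 2,
                              np.ones(win) / win, mode="same"))
    active = rms > rel_threshold * rms.max()
    # Group consecutive active samples into passes: (start, one-past-end) pairs.
    edges = np.flatnonzero(np.diff(np.r_[0, active.astype(int), 0]))
    return list(zip(edges[::2], edges[1::2]))
```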

Relevância:

60.00%

Publicador:

Resumo:

A good object representation, or object descriptor, is one of the key issues in object-based image analysis. To effectively fuse color and texture into a unified descriptor at the object level, this paper presents a novel method for feature fusion. A color histogram and uniform local binary patterns are extracted from arbitrary-shaped image-objects, and kernel principal component analysis (kernel PCA) is employed to find nonlinear relationships among the extracted color and texture features. The maximum likelihood approach is used to estimate the intrinsic dimensionality, which then serves as a criterion for automatic selection of the optimal feature set from the fused features. The proposed method is evaluated using SVM as the benchmark classifier and is applied to object-based vegetation species classification using high spatial resolution aerial imagery. Experimental results demonstrate that a great improvement can be achieved by using the proposed feature fusion method.
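
A minimal sketch of this fusion pipeline, using scikit-learn's KernelPCA and SVC; the kernel choice, histogram sizes, and the value of n_keep (which in the paper would come from the maximum-likelihood intrinsic-dimensionality estimate) are assumptions:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

def fuse_features(color_hist, lbp_hist, n_keep):
    """Concatenate per-object color and uniform-LBP histograms and map them
    through kernel PCA; n_keep would come from an intrinsic-dimensionality
    estimate (e.g., maximum likelihood), which is assumed given here."""
    X = np.hstack([color_hist, lbp_hist])
    kpca = KernelPCA(n_components=n_keep, kernel="rbf")
    return kpca.fit_transform(X), kpca

# Illustrative use with random stand-ins for real image-object features.
rng = np.random.default_rng(0)
color = rng.random((200, 64))      # e.g., 64-bin color histograms
texture = rng.random((200, 59))    # e.g., 59-bin uniform LBP histograms
labels = rng.integers(0, 3, 200)   # three hypothetical vegetation classes
fused, _ = fuse_features(color, texture, n_keep=10)
clf = SVC(kernel="rbf").fit(fused, labels)
```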

Relevância:

60.00%

Publicador:

Resumo:

A comprehensive one-dimensional meanline design approach for radial inflow turbines is described in the present work. An original code was developed in Python that takes a novel approach to the automatic selection of feasible machines based on pre-defined performance or geometry characteristics for a given application. It comprises a brute-force search algorithm that traverses the entire search space defined by key non-dimensional parameters and rotational speed. An in-depth analysis and subsequent implementation of relevant loss models, as well as of selection criteria for radial inflow turbines, are addressed. Sample (real and theoretical) test cases were trialed, and the results showed good agreement with previously published designs as well as with other available codes. The presented approach was found to be valid, and the model proved a useful tool for the preliminary design and performance estimation of radial inflow turbines, enabling its integration with thermodynamic cycle analysis and three-dimensional blade design codes.
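
The original Python code is not reproduced here; the sketch below only illustrates the brute-force idea, sweeping assumed non-dimensional parameters (specific speed ns, loading coefficient psi) and rotational speed over a grid, with a placeholder efficiency correlation standing in for the full loss models and selection criteria:

```python
import itertools

def efficiency_estimate(ns, psi):
    # Placeholder correlation; the real code evaluates full meanline loss models
    # (incidence, passage, tip clearance, windage, exit kinetic energy, ...).
    return max(0.0, 0.90 - 2.0 * (ns - 0.60) ** 2 - 1.5 * (psi - 0.90) ** 2)

def brute_force_search(ns_grid, psi_grid, rpm_grid, min_eta=0.85):
    feasible = []
    for ns, psi, rpm in itertools.product(ns_grid, psi_grid, rpm_grid):
        eta = efficiency_estimate(ns, psi)
        if eta >= min_eta:                      # user-defined selection criterion
            feasible.append({"ns": ns, "psi": psi, "rpm": rpm, "eta": eta})
    # Rank the candidate machines by estimated efficiency.
    return sorted(feasible, key=lambda d: d["eta"], reverse=True)

designs = brute_force_search(
    ns_grid=[0.40 + 0.05 * i for i in range(9)],
    psi_grid=[0.70 + 0.05 * i for i in range(7)],
    rpm_grid=[30000, 50000, 70000],
)
```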

Relevância:

60.00%

Publicador:

Resumo:

The classical Canny edge detection algorithm is analyzed in depth. To address its limited autonomy in parameter selection, a new adaptive edge extraction method based on Otsu's method and statistical theory is proposed, in which a set of parameters is statistically optimized so that the globally optimal edge detection parameters are determined adaptively. Experimental results show that the proposed adaptive edge extraction method for targets in unstructured environments can effectively suppress noise, adaptively determine the optimal edge extraction parameters, and improve edge localization accuracy. Finally, experiments show that the proposed method achieves good edge detection performance in lunar exploration applications where the environment is unknown.
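
The statistical optimization of the parameter set is not reproduced here. One common variant of Otsu-based adaptation, shown below with OpenCV, derives the Canny high threshold from Otsu's global threshold and sets the low threshold as a fixed fraction of it; this is an assumption, not necessarily the exact scheme of the cited work:

```python
import cv2

def adaptive_canny(gray, low_ratio=0.5):
    """Canny with thresholds derived from Otsu's method (a common heuristic)."""
    # Otsu's threshold computed on a lightly smoothed copy of the image.
    otsu_t, _ = cv2.threshold(cv2.GaussianBlur(gray, (5, 5), 0),
                              0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    high = otsu_t
    low = low_ratio * otsu_t
    return cv2.Canny(gray, low, high)

# Hypothetical usage on a grayscale image:
# edges = adaptive_canny(cv2.imread("lunar_scene.png", cv2.IMREAD_GRAYSCALE))
```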

Relevância:

60.00%

Publicador:

Resumo:

Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time, finding nearest neighbors accurately and efficiently can be challenging, especially when the database contains a large number of objects, and when the underlying distance measure is computationally expensive. This thesis proposes new methods for improving the efficiency and accuracy of nearest neighbor retrieval and classification in spaces with computationally expensive distance measures. The proposed methods are domain-independent, and can be applied in arbitrary spaces, including non-Euclidean and non-metric spaces. In this thesis particular emphasis is given to computer vision applications related to object and shape recognition, where expensive non-Euclidean distance measures are often needed to achieve high accuracy. The first contribution of this thesis is the BoostMap algorithm for embedding arbitrary spaces into a vector space with a computationally efficient distance measure. Using this approach, an approximate set of nearest neighbors can be retrieved efficiently - often orders of magnitude faster than retrieval using the exact distance measure in the original space. The BoostMap algorithm has two key distinguishing features with respect to existing embedding methods. First, embedding construction explicitly maximizes the amount of nearest neighbor information preserved by the embedding. Second, embedding construction is treated as a machine learning problem, in contrast to existing methods that are based on geometric considerations. The second contribution is a method for constructing query-sensitive distance measures for the purposes of nearest neighbor retrieval and classification. In high-dimensional spaces, query-sensitive distance measures allow for automatic selection of the dimensions that are the most informative for each specific query object. It is shown theoretically and experimentally that query-sensitivity increases the modeling power of embeddings, allowing embeddings to capture a larger amount of the nearest neighbor structure of the original space. The third contribution is a method for speeding up nearest neighbor classification by combining multiple embedding-based nearest neighbor classifiers in a cascade. In a cascade, computationally efficient classifiers are used to quickly classify easy cases, and classifiers that are more computationally expensive and also more accurate are only applied to objects that are harder to classify. An interesting property of the proposed cascade method is that, under certain conditions, classification time actually decreases as the size of the database increases, a behavior that is in stark contrast to the behavior of typical nearest neighbor classification systems. The proposed methods are evaluated experimentally in several different applications: hand shape recognition, off-line character recognition, online character recognition, and efficient retrieval of time series. In all datasets, the proposed methods lead to significant improvements in accuracy and efficiency compared to existing state-of-the-art methods. 
In some datasets, the general-purpose methods introduced in this thesis even outperform domain-specific methods that have been custom-designed for such datasets.
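
BoostMap's AdaBoost-style embedding construction is not reproduced here; the sketch below only illustrates the generic filter-and-refine retrieval scheme that such embeddings enable, where `embed` and `exact_dist` are user-supplied stand-ins for a learned embedding and an expensive exact distance measure:

```python
import numpy as np

def filter_and_refine(query, database, embed, exact_dist, shortlist=50, k=5):
    """Approximate k-NN retrieval: filter with a cheap distance in the embedded
    space, then refine a shortlist with the exact, expensive distance measure.
    `embed` and `exact_dist` are user-supplied; BoostMap would learn `embed`."""
    q_emb = np.asarray(embed(query), dtype=float)
    db_emb = np.array([embed(x) for x in database], dtype=float)
    cheap = np.abs(db_emb - q_emb).sum(axis=1)      # L1 distance in embedded space
    candidates = np.argsort(cheap)[:shortlist]      # filter step
    refined = sorted(candidates, key=lambda i: exact_dist(query, database[i]))
    return refined[:k]                              # refine step: exact re-ranking
```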

Relevância:

60.00%

Publicador:

Resumo:

This work aims to apply nonlinear modeling to the Brazilian Gross Domestic Product (GDP). To this end, the existence of nonlinearity in the data-generating process was tested using the methodology suggested by Castle and Henry (2010). The test consists of verifying the persistence of nonlinear regressors in the unrestricted linear model. The series is then modeled with a threshold autoregressive model, using the general-to-specific approach for model selection. The Autometrics algorithm is used to choose the nonlinear model. The results indicate that the Brazilian GDP is better explained by a nonlinear model with three regime changes, which occur in the early 1990s, indeed a highly volatile period. Nonlinear modeling offers the potential for dating business cycles; however, the results obtained were not sufficient for such an analysis.
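
The Autometrics-based general-to-specific selection is not reproduced here. As a loose illustration of threshold autoregression, the sketch below fits a two-regime SETAR(1) model by a least-squares grid search over the threshold; the lag order and the threshold grid are assumptions:

```python
import numpy as np

def fit_setar(y, lag=1, n_grid=50):
    """Two-regime SETAR(1) fitted by least squares over a threshold grid.
    Purely illustrative; the thesis selects a richer model with Autometrics."""
    y = np.asarray(y, dtype=float)
    x, target = y[:-lag], y[lag:]
    best_ssr, best_thr = np.inf, None
    for thr in np.quantile(x, np.linspace(0.15, 0.85, n_grid)):
        ssr = 0.0
        for mask in (x <= thr, x > thr):
            if mask.sum() < 3:          # require a minimum of points per regime
                ssr = np.inf
                break
            X = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta = np.linalg.lstsq(X, target[mask], rcond=None)[0]
            ssr += float(((target[mask] - X @ beta) ** 2).sum())
        if ssr < best_ssr:
            best_ssr, best_thr = ssr, thr
    return best_thr, best_ssr           # estimated threshold and fit quality
```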

Relevância:

60.00%

Publicador:

Resumo:

There has been an increasing tendency toward the use of selective image compression, since several applications make use of digital images and, in some cases, loss of information in certain regions is not allowed. However, there are applications in which these images are captured and stored automatically, making it impossible for the user to select the regions of interest to be compressed losslessly. A possible solution is the automatic selection of these regions, a very difficult problem to solve in general, although intelligent techniques can detect such regions in specific cases. This work proposes a selective color image compression method in which previously chosen regions of interest are compressed losslessly. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. In addition to manual selection, there are two options for automatic detection: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method, in which the face region is compressed losslessly. The results show that both can be successfully used with the compression method, providing the map of the region of interest as input.
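
A minimal sketch of the overall idea, in which a binary region-of-interest map drives lossless coding of the ROI and coarse wavelet quantization elsewhere; PyWavelets and zlib are used as stand-ins for the paper's competitive-neural-network vector quantization and adaptive Huffman coding:

```python
import numpy as np
import pywt
import zlib

def selective_compress(image, roi_mask, wavelet="haar", q_step=16):
    """Sketch of ROI-driven selective compression: the image is wavelet
    transformed and coarsely quantized (lossy), while ROI pixels are stored
    exactly and entropy coded; zlib stands in for adaptive Huffman coding and
    the scalar quantizer stands in for the competitive-network quantizer."""
    coeffs, slices = pywt.coeffs_to_array(
        pywt.wavedec2(image.astype(float), wavelet, level=2))
    lossy_stream = zlib.compress(np.round(coeffs / q_step).astype(np.int16).tobytes())
    roi_stream = zlib.compress(image[roi_mask].astype(np.uint8).tobytes())
    return lossy_stream, roi_stream, slices   # slices allow coefficient reshaping
```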

Relevância:

60.00%

Publicador:

Resumo:

In this paper a methodology for automatic extraction of road segments from images with different resolutions (low, medium, and high) is presented. It is based on a generalized concept of lines in digital images, by which lines can be described by the centerlines of two parallel edges. In the specific case of low-resolution images, where roads manifest as entities 1 or 2 pixels wide, the proposed methodology combines an automatic image enhancement operation with the following strategies: automatic selection of the hysteresis thresholds and the Gaussian scale factor; line length thresholding; and polygonization. In medium- and high-resolution images roads manifest as narrow, elongated ribbons and, consequently, the extraction goal becomes the road centerlines. In this case, the enhancement step used for low-resolution images is not necessary. The results obtained in the experimental evaluation satisfied all criteria established for the efficient extraction of road segments from images of different resolutions, providing satisfactory results in a completely automatic way.
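
The paper's exact selection rules are not reproduced here. One common way to automate the choice of hysteresis thresholds is to derive them from the gradient-magnitude distribution of the smoothed image, as sketched below; the percentile values and Gaussian scale are illustrative:

```python
import numpy as np
from scipy import ndimage

def auto_hysteresis_thresholds(gray, sigma=1.0, low_pct=70, high_pct=90):
    """Derive hysteresis thresholds from the gradient-magnitude histogram.
    Percentile values are illustrative, not those of the cited methodology."""
    smoothed = ndimage.gaussian_filter(np.asarray(gray, dtype=float), sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    low, high = np.percentile(mag[mag > 0], [low_pct, high_pct])
    return low, high
```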

Relevância:

60.00%

Publicador:

Resumo:

This paper presents a novel approach to the computational assessment of a mammographic phantom device. The approach is fully automated and is based on the automatic selection of the region of interest and on the use of the discrete wavelet transform (DWT) and morphological operators to assess the quality of American College of Radiology (ACR) mammographic phantom images. The algorithms developed here successfully scored 30 images obtained with different combinations of tube voltage and exposure, and could detect the differences among the radiographs due to the different levels of exposure to radiation. © 2013 Springer-Verlag.
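
A minimal sketch of the kind of processing described, assuming the phantom region of interest can be isolated by thresholding plus morphological opening and then scored by wavelet detail energy; the thresholds and the scoring rule are illustrative, not the published ACR scoring algorithm:

```python
import numpy as np
import pywt
from scipy import ndimage

def phantom_roi_and_score(image):
    """Sketch: isolate the phantom ROI with thresholding + morphology, then
    score it by wavelet detail energy. Parameters and scoring are illustrative."""
    binary = image > image.mean()                         # rough foreground mask
    opened = ndimage.binary_opening(binary, np.ones((5, 5)))
    labels, n = ndimage.label(opened)
    if n == 0:
        return None, 0.0
    sizes = ndimage.sum(opened, labels, range(1, n + 1))
    roi = labels == (1 + int(np.argmax(sizes)))           # largest component
    _, (cH, cV, cD) = pywt.dwt2(np.where(roi, image, 0.0), "haar")
    score = float(np.sum(cH ** 2 + cV ** 2 + cD ** 2))     # detail-energy score
    return roi, score
```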

Relevância:

60.00%

Publicador:

Resumo:

In Computer-Aided Diagnosis schemes for mammography analysis, the modules are interconnected, which directly affects the operation of the system as a whole. Identifying mammograms with and without masses is highly needed to reduce false positive rates in the automatic selection of regions of interest for further image segmentation. This study evaluates the performance of three techniques in classifying regions of interest as containing masses or not (without clinical findings); its main contribution is to introduce the Optimum-Path Forest (OPF) classifier in this context, which has never been done before. OPF was compared against two kinds of neural networks, Radial Basis Function and Multilayer Perceptron (MLP), on a private dataset composed of 120 images. Texture features were used for this purpose, and the experiments demonstrated that MLP networks were slightly more accurate than OPF, but the latter is much faster, which makes it a suitable tool for real-time recognition systems.
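
A minimal sketch of the classification stage, assuming gray-level co-occurrence (GLCM) descriptors as the texture features and scikit-learn's MLP as the classifier; the abstract does not list the actual feature set, and the OPF classifier itself (available in separate libraries) is not shown:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def texture_features(roi, levels=64):
    """Haralick-style GLCM descriptors for one grayscale ROI (an illustrative
    feature set; the paper does not specify its exact texture descriptors)."""
    q = (roi / roi.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical training data: one feature vector per ROI, label 1 = mass.
# X = np.vstack([texture_features(r) for r in rois]); y = labels
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
```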

Relevância:

60.00%

Publicador:

Resumo:

We present a photometric catalogue of compact groups of galaxies (p2MCGs) automatically extracted from the Two-Micron All Sky Survey (2MASS) extended source catalogue. A total of 262 p2MCGs are identified, following the criteria defined by Hickson, of which 230 survive visual inspection (given occasional galaxy fragmentation and blends in the 2MASS parent catalogue). Only one quarter of these 230 groups were previously known compact groups (CGs). Among the 144 p2MCGs that have all their galaxies with known redshifts, 85 (59 per cent) have four or more accordant galaxies. This v2MCG sample of velocity-filtered p2MCGs constitutes the largest sample of CGs (with N ≥ 4) catalogued to date, with both well-defined selection criteria and velocity filtering, and is the first CG sample selected by stellar mass. It is fairly complete up to K_group ≈ 9 and radial velocities of ~6000 km s⁻¹. We compared the properties of the 78 v2MCGs with median velocities greater than 3000 km s⁻¹ with the properties of other CG samples, as well as those (mvCGs) extracted from the semi-analytical model (SAM) of Guo et al. run on the high-resolution Millennium-II simulation. This mvCG sample is similar (i.e. with 2/3 of physically dense CGs) to those we had previously extracted from three other SAMs run on the Millennium simulation with 125 times worse spatial and mass resolutions. The space density of v2MCGs within 6000 km s⁻¹ is 8.0 × 10⁻⁵ h³ Mpc⁻³, i.e. four times that of the Hickson sample [Hickson Compact Group (HCG)] up to the same distance and with the same criteria used in this work, but still 40 per cent less than that of mvCGs. The v2MCG constitutes the first group catalogue to show a statistically large first-to-second ranked galaxy magnitude gap according to Tremaine-Richstone statistics, as expected if the first-ranked group members tend to be the products of galaxy mergers, and as confirmed in the mvCGs. The v2MCG is also the first observed sample to show that first-ranked galaxies tend to be centrally located, again consistent with the predictions obtained from mvCGs. We found no significant correlation between group apparent elongation and velocity dispersion in the quartets among the v2MCGs, and the velocity dispersions of apparently round quartets are not significantly larger than those of chain-like ones, in contrast to what has been previously reported for HCGs. By virtue of its automatic selection with the popular Hickson criteria, its size, its selection on stellar mass, and its statistical signs of mergers and centrally located brightest galaxies, the v2MCG catalogue appears to be the laboratory of choice for studying physically dense groups of four or more galaxies of comparable luminosity.
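
As a small illustration of the velocity-filtering step, the sketch below keeps galaxies whose radial velocities lie within a fixed offset of the group median, in the spirit of Hickson's accordance criterion; the exact filtering adopted for the v2MCG sample may differ:

```python
import numpy as np

def accordant_members(velocities, max_offset=1000.0):
    """Velocity filtering: keep galaxies within max_offset (km/s) of the group
    median velocity (Hickson-style accordance; offset value is illustrative)."""
    v = np.asarray(velocities, dtype=float)
    return np.abs(v - np.median(v)) <= max_offset

# Example: a quintet in which one galaxy is a background interloper.
mask = accordant_members([5800, 6050, 5900, 6120, 14500])
print(mask.sum(), "accordant galaxies")   # -> 4
```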

Relevância:

60.00%

Publicador:

Resumo:

The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that rely on real-time computer vision algorithms. However, video signals keep increasing in size, whereas the performance of single-core processors has stagnated in recent years; consequently, new computer vision algorithms must be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays is found in graphics processing units (GPUs): devices offering a high degree of parallelism, excellent numerical performance, and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that, by parallelizing their subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually employed in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Second, we propose a background subtraction system based on kernel density estimation. These techniques are well suited to modelling complex scenes with multimodal backgrounds, but have seen limited use due to their large computational and memory cost. The proposed system, implemented in real time on a GPU, features dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the positions of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of the proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, which are normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function that minimizes the approximation error.
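
A CPU-side sketch of the piecewise-linear approximation idea, using a uniform sample table and linear interpolation (the operation a GPU would perform in its texture filtering units); the test function and sample count are illustrative, and the quasi-optimal domain partition is not implemented:

```python
import numpy as np

def pwl_table(f, lo, hi, n_samples):
    """Uniform sample table for a piecewise-linear approximation of f on [lo, hi].
    On a GPU the table would live in a texture and the linear interpolation
    below would be performed by the texture filtering units."""
    xs = np.linspace(lo, hi, n_samples)
    return xs, f(xs)

def pwl_eval(xs, ys, x):
    return np.interp(x, xs, ys)          # linear interpolation between samples

# Illustrative accuracy check for f(x) = exp(-x^2) on [-3, 3] (uniform partition;
# the thesis additionally derives a quasi-optimal, non-uniform partition).
xs, ys = pwl_table(lambda x: np.exp(-x ** 2), -3.0, 3.0, 64)
dense = np.linspace(-3.0, 3.0, 10001)
max_err = np.max(np.abs(np.exp(-dense ** 2) - pwl_eval(xs, ys, dense)))
print(f"max abs error with 64 uniform samples: {max_err:.2e}")
```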

Relevância:

60.00%

Publicador:

Resumo:

Increasing economic competition drives industry to implement tools that improve process efficiency. Process automation is one of these tools, and Real Time Optimization (RTO) is an automation methodology that considers economic aspects to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior and then optimizes an economic objective function subject to this model. Although largely implemented in industry, there is no general agreement about the benefits of implementing RTO, owing to the limitations discussed in the present work: structural plant/model mismatch, identifiability issues, and the low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle structural plant/model mismatch; however, there is no systematic comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation; Integrated System Optimization and Parameter estimation; and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided the model is flexible enough to represent the process topology, the parameter estimation method is appropriate for the measurement noise characteristics, and a method is used to improve the quality of the sample information. At each iteration, the RTO methodology updates key parameters of the model, and identifiability issues caused by a lack of measurements and by measurement noise can result in poor prediction ability. Therefore, four different parameter estimation approaches (Rotational Discrimination, Automatic Selection and Parameter estimation, Reparametrization via Differential Geometry, and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness, and speed. The results show that the Rotational Discrimination method is the most suitable for implementation in an RTO framework, since it requires less a priori information, is simple to implement, and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period in which the process operates at suboptimal conditions. An alternative to handle this problem is proposed by integrating classic RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control strategy. The new approach demonstrates that it is possible to reduce the problem of infrequent set-point updates, improving economic performance. Finally, the practical aspects of RTO implementation are investigated in an industrial case study: a Vapor Recompression Distillation (VRD) process located in the Paulínea refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
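
A toy sketch of the classical two-step RTO loop described above: the model parameter is re-estimated from a plant measurement, and the economic objective is then optimized subject to the updated model. The plant, model, and economics below are illustrative stand-ins (with a deliberate structural plant/model mismatch), not the thesis case study:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def plant(u):                        # toy steady-state plant (unknown to the model)
    return 2.0 * np.sqrt(u) - 0.1 * u

def model(u, theta):                 # structurally mismatched steady-state model
    return theta * np.sqrt(u)

def profit(y, u, price=1.0, cost=0.8):
    return price * y - cost * u

def rto_two_step(u0=4.0, iterations=10):
    u = u0
    for _ in range(iterations):
        y_meas = plant(u)            # "measure" the plant at the current set point
        # Step 1: update the model parameter from the measurement (one-point fit).
        theta = y_meas / np.sqrt(u)
        # Step 2: optimize the economic objective subject to the updated model.
        res = minimize_scalar(lambda v: -profit(model(v, theta), v),
                              bounds=(0.1, 20.0), method="bounded")
        u = res.x                    # new set point sent to the plant
    return u, theta

u_opt, theta_hat = rto_two_step()
print(f"converged set point u = {u_opt:.3f}, adapted theta = {theta_hat:.3f}")
```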