12 results for Thresholding
at Universidad Politécnica de Madrid
Abstract:
Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevation with a pin roughness meter or a laser, both of which are labor intensive and expensive. Recently, a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, making widespread measurement practical. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images has been analyzed applying several wavelet de-noising and thresholding algorithms to study the variation in the percentage of shadows and in the shadow size distribution.
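As a minimal illustrative sketch of this processing chain (not the algorithms actually compared in the study), the following Python code de-noises a greyscale soil-surface image with a universal soft wavelet threshold and then estimates the shadow percentage with Otsu's method; the wavelet family, decomposition level and threshold rule are assumptions made here for illustration only.

import numpy as np
import pywt
from skimage.filters import threshold_otsu

def shadow_percentage(img, wavelet="db4", level=2):
    # img: 2-D float array in [0, 1]; returns the percentage of pixels classified as shadow
    # Wavelet de-noising: soft-threshold the detail coefficients with a universal threshold
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745          # noise estimate from the finest diagonal band
    thr = sigma * np.sqrt(2 * np.log(img.size))
    denoised_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    denoised = pywt.waverec2(denoised_coeffs, wavelet)[: img.shape[0], : img.shape[1]]
    # Otsu thresholding: pixels darker than the threshold are counted as shadow
    t = threshold_otsu(denoised)
    return 100.0 * (denoised < t).mean()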
Abstract:
Land cover is subject to continuous changes on a wide variety of temporal and spatial scales. Those changes produce significant effects on human and natural activities. Maintaining an updated spatial database with the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery and aerial photographs, have proven to be suitable and reliable data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are processed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of these categories; finally, the indices are integrated into a multisource fusion process, which allows a single CD result to be generated from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties. The results obtained are then evaluated by means of a quality control analysis, as well as with complementary graphical representations. The suggested methodology has also proved efficient for identifying the change detection index with the highest contribution.
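A minimal sketch of the thresholding-and-fusion step described above, assuming Otsu's method for the change/no_change decision and a simple majority vote as the fusion rule (the paper's actual thresholding algorithms and fusion process may differ):

import numpy as np
from skimage.filters import threshold_otsu

def change_mask(index_image):
    # Binary change/no_change mask from one change index (2-D array)
    return index_image > threshold_otsu(index_image)

def fuse_masks(index_images):
    # Majority-vote fusion of the masks obtained from several change indices
    masks = np.stack([change_mask(ix) for ix in index_images])
    return masks.sum(axis=0) > masks.shape[0] / 2

# Example with two hypothetical indices computed from co-registered dates t1 and t2:
# diff_index  = np.abs(t2 - t1)
# ratio_index = np.abs(np.log((t2 + 1e-6) / (t1 + 1e-6)))
# change = fuse_masks([diff_index, ratio_index])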
Abstract:
We have determined the cross-section σ for color center generation under single Br ion impacts on amorphous SiO2. The evolution of the cross-sections, σ(E) and σ(Se), shows an initial flat stage that we associate with atomic collision mechanisms. Above a certain threshold value (Se > 2 keV/nm), roughly coinciding with that reported for the onset of macroscopic disorder (compaction), σ shows a marked increase due to electronic processes. In this regime, an energetic cost of around 7.5 keV is necessary to create a non-bridging oxygen hole center-E′ (NBOHC/E′) pair, whatever the input energy. The data appear consistent with a non-radiative decay of self-trapped excitons.
Abstract:
Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior, especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
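The binarisation step mentioned above can be sketched as follows; the Otsu threshold, the box-counting entropy and its comparison with the maximum (uniform) entropy are illustrative assumptions and not necessarily the Relative Entropy formulation of Bird et al. (2006):

import numpy as np
from skimage.filters import threshold_otsu

def binarise_pores(ct, threshold=None):
    # Pores (weakly attenuating, dark voxels) -> 1, solids -> 0
    t = threshold_otsu(ct) if threshold is None else threshold
    return (ct < t).astype(np.uint8)

def box_entropy(pores, box):
    # Shannon entropy of the pore-mass distribution over cubic boxes of side `box`;
    # dividing by log(number of boxes) would give a relative (normalised) entropy
    nz, ny, nx = (s // box for s in pores.shape)
    trimmed = pores[: nz * box, : ny * box, : nx * box]
    counts = trimmed.reshape(nz, box, ny, box, nx, box).sum(axis=(1, 3, 5)).ravel()
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()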
Abstract:
One important issue emerging strongly in agriculture is the automation of tasks, where optical sensors play an important role. They provide images that must be conveniently processed. The most relevant image processing procedures require the identification of green plants; in our experiments these come from barley and corn crops, including weeds, so that some types of action can be carried out, such as site-specific treatments with chemical products or mechanical manipulations. The identification of textures belonging to the soil could also be useful to estimate some variables, such as humidity or smoothness. Finally, from the point of view of autonomous robot navigation, where the robot is equipped with the imaging system, it is sometimes convenient to know not only the soil information and the plants growing in the soil, but also additional information supplied by global references based on specific areas. This implies that the images to be processed contain textures of three main types to be identified: green plants, soil and sky, if present. This paper proposes a new automatic approach for segmenting these main textures and also for refining the identification of sub-textures inside the main ones. Concerning the green identification, we propose a new approach that exploits the performance of existing strategies by combining them. The combination takes into account the relevance of the information provided by each strategy based on its intensity variability; this is the first contribution. The combination of thresholding approaches for segmenting the soil and the sky is the second contribution; finally, the adjustment of a supervised fuzzy clustering approach for identifying sub-textures automatically is the third. The performance of the method verifies its viability for automatic tasks in agriculture based on image processing.
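A minimal sketch of the greenness-combination idea described above, assuming two common vegetation indices (ExG and CIVE), variance-based weights as a proxy for intensity variability, and a final Otsu threshold; these specific choices are illustrative and not taken from the paper:

import numpy as np
from skimage.filters import threshold_otsu

def green_mask(rgb):
    # rgb: float image in [0, 1], shape (H, W, 3); returns a boolean plant mask
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                                   # Excess Green index
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.787     # CIVE (decreases with greenness)
    indices = [exg, -cive]                                # both now grow with greenness
    weights = np.array([ix.var() for ix in indices])
    weights = weights / weights.sum()                     # relevance ~ intensity variability
    combined = sum(w * ix for w, ix in zip(weights, indices))
    return combined > threshold_otsu(combined)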
Abstract:
The present work describes a new methodology for the automatic detection of the glottal space in laryngeal images taken from 15 videos recorded with videostroboscopic equipment by the ENT service of the Gregorio Marañón Hospital in Madrid. The system is based on active contour models (snakes). In the pre-processing stage, the algorithm combines some traditional techniques (thresholding and median filtering) with more sophisticated ones such as anisotropic filtering, so that an image appropriate for the use of snakes is obtained. The value selected for the threshold is 85% of the maximum peak of the image histogram; above this value the pixel information is not relevant. The anisotropic filter makes it possible to distinguish two intensity levels, one corresponding to the background and the other to the glottis. The initialization is based on the magnitude of the GVF (Gradient Vector Flow) field, which ensures an automatic process for the selection of the initial contour. The performance of the algorithm is validated using the Pratt coefficient and compared against a manual segmentation and against another automatic method based on the watershed transform.
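A minimal sketch of the pre-processing threshold described above; the "85% of the maximum peak of the histogram" rule is read here as keeping only pixels below 85% of the grey level at which the histogram peaks, which is one possible interpretation of the text:

import numpy as np
from scipy.ndimage import median_filter

def preprocess_laryngeal(img, fraction=0.85, median_size=5):
    # img: greyscale uint8 image; returns the thresholded, median-filtered image
    hist, bin_edges = np.histogram(img, bins=256, range=(0, 256))
    peak_level = bin_edges[np.argmax(hist)]          # grey level at which the histogram peaks
    threshold = fraction * peak_level
    suppressed = np.where(img > threshold, 0, img)   # pixels above the threshold are not relevant
    return median_filter(suppressed, size=median_size)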
Abstract:
This thesis deals with the recognition and identification of license plate characters. These kinds of systems are also known worldwide as ANPR ("Automatic Number Plate Recognition") or LPR ("License Plate Recognition") systems. The great number of vehicles and the logistics moving every second all over the world make their registration necessary for treatment and control. It is therefore necessary to implement a system that can correctly identify these resources for further processing, thus building a useful, flexible and dynamic tool. This work has been structured in several parts. The first one presents the objectives and motivations pursued with this project. In the second, all the theoretical, technical and mathematical processes that form a common ANPR system are developed, in order to implement a practical application that can demonstrate their usefulness in any situation. In the third, the practical part supporting the theoretical basis of the work is developed; the various algorithms created to study and verify everything proposed so far, and to observe the behavior of these systems, are described and developed.
Several processes characteristic of character and pattern recognition are implemented, such as area and pattern detection, image rotation and transformation, edge detection, character and pattern segmentation, thresholding and normalization, feature and pattern extraction, neural networks, and finally optical character recognition, commonly known as OCR. The last part presents the results obtained from the character recognition system implemented for this thesis and the conclusions drawn from it. Finally, future lines of improvement, research and development are proposed in order to build a more efficient and comprehensive system.
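A minimal sketch of the thresholding and character segmentation steps listed above, assuming dark characters on a light plate, Otsu binarisation and a connected-component analysis with an illustrative minimum area:

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_characters(plate_gray, min_area=50):
    # plate_gray: 2-D greyscale array of a cropped licence plate
    binary = plate_gray < threshold_otsu(plate_gray)     # dark characters -> True
    regions = [r for r in regionprops(label(binary)) if r.area >= min_area]
    regions.sort(key=lambda r: r.bbox[1])                # left-to-right reading order
    return [binary[r.bbox[0]:r.bbox[2], r.bbox[1]:r.bbox[3]] for r in regions]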
Abstract:
This paper proposes a new method oriented to crop row detection in images from maize fields with high weed pressure. The vision system is designed to be installed onboard a mobile agricultural vehicle, i.e. subject to turns, vibrations and other undesired movements. The images are captured under perspective projection and are therefore affected by these undesired effects. The image processing consists of three main processes: image segmentation, double thresholding based on Otsu's method, and crop row detection. Image segmentation is based on the application of a vegetation index, the double thresholding achieves the separation between weeds and crops, and the crop row detection applies least squares linear regression for line adjustment. Crop and weed separation becomes effective, and the crop row detection compares favorably against the classical approach based on the Hough transform. Both gain effectiveness and accuracy thanks to the double thresholding, which constitutes the main contribution of the paper.
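A minimal sketch of the three processes described above; the "double thresholding" is read here as a second Otsu threshold applied only to the vegetation pixels in order to split crop from weeds, which is an interpretation rather than the paper's exact implementation:

import numpy as np
from skimage.filters import threshold_otsu

def double_otsu_crop_mask(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                  # vegetation index (ExG, as an example)
    veg = exg > threshold_otsu(exg)      # first threshold: vegetation vs soil
    t2 = threshold_otsu(exg[veg])        # second threshold: crop vs weeds
    return exg > t2

def fit_row_line(crop_mask):
    # Least squares fit x = a*y + b through the crop pixels of one row region
    ys, xs = np.nonzero(crop_mask)
    a, b = np.polyfit(ys, xs, deg=1)
    return a, b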
Abstract:
In this letter, we propose a novel method for unsupervised change detection (CD) in multitemporal satellite images based on the local use of the relative dimensionless global error in synthesis (Erreur Relative Globale Adimensionnelle de Synthèse, ERGAS) index. In order to obtain the change image, the index is calculated over a pixel neighborhood (3x3 window), processing simultaneously all the available spectral bands. With the objective of finding the binary change masks, six thresholding methods are selected. A comparison between the proposed method and the change vector analysis method is reported. The CD accuracy shown in the experimental results demonstrates the effectiveness of the proposed method.
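A minimal sketch of the local index computation described above, assuming the usual ERGAS expression with a resolution ratio of 1 (both acquisitions at the same resolution) and a simplified treatment of the window borders:

import numpy as np

def local_ergas(t1, t2, win=3, ratio=1.0):
    # t1, t2: multitemporal images as (bands, H, W) arrays; returns an (H, W) change image
    bands, H, W = t1.shape
    pad = win // 2
    out = np.zeros((H, W))
    for i in range(pad, H - pad):
        for j in range(pad, W - pad):
            w1 = t1[:, i - pad:i + pad + 1, j - pad:j + pad + 1]
            w2 = t2[:, i - pad:i + pad + 1, j - pad:j + pad + 1]
            rmse = np.sqrt(((w1 - w2) ** 2).mean(axis=(1, 2)))   # per-band RMSE in the window
            mean = w1.mean(axis=(1, 2)) + 1e-12                  # per-band reference mean
            out[i, j] = 100.0 * ratio * np.sqrt(((rmse / mean) ** 2).mean())
    return out

# The binary change masks are then obtained by thresholding `out`, for instance with
# Otsu's method or any of the six thresholding algorithms compared in the letter.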
Abstract:
The present work describes a new methodology for the automatic detection of the glottal space in laryngeal images based on active contour models (snakes). In order to obtain an image appropriate for the use of snake-based techniques, the proposed algorithm combines a pre-processing stage including some traditional techniques (thresholding and median filtering) with more sophisticated ones such as anisotropic filtering. The value selected for the threshold was fixed at 85% of the maximum peak of the image histogram, and the anisotropic filter makes it possible to distinguish two intensity levels, one corresponding to the background and the other to the foreground (glottis). The initialization is based on the magnitude obtained using the Gradient Vector Flow field, ensuring an automatic process for the selection of the initial contour. The performance of the algorithm is tested using the Pratt coefficient and compared against a manual segmentation. The results obtained suggest that this method provides results comparable with other techniques, such as the one proposed in Osma-Ruiz et al. (2008).
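A minimal sketch of the evaluation step mentioned above, computing Pratt's figure of merit between the detected contour and a manual reference; the scaling constant alpha = 1/9 is the customary choice and may differ from the one used by the authors:

import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected, reference, alpha=1.0 / 9.0):
    # detected, reference: boolean edge maps of the same shape
    d = distance_transform_edt(~reference)        # distance to the nearest reference edge pixel
    scores = 1.0 / (1.0 + alpha * d[detected] ** 2)
    return scores.sum() / max(detected.sum(), reference.sum())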
Abstract:
Remote sensing is the science of gathering information (spectral, spatial, temporal) about an object, area or phenomenon through the analysis of data acquired by a device that is not in contact with the studied item. In general, the data obtained from remote sensing for observing the Earth's surface are images, which have a number of applications that are constantly evolving; therefore, to meet the constant requirements of new applications, new algorithms are often proposed to improve or facilitate some particular process. To develop such algorithms, mathematical methods are needed that allow the information to be manipulated for a specific purpose. Among these methods, multiresolution analysis makes it possible to analyze a signal at different scales, which facilitates working with data of different resolutions, as is the case of remote sensing images. One option for implementing multiresolution analysis is the Dual-Tree Complex Wavelet Transform (DT-CWT). This transform is implemented with two real filters and is characterized by its invariance to translations, the price to pay being that it is not critically sampled. Among the advantages of this transform is its successful application to image fusion and change detection. In this context, this thesis presents three main algorithms applied to image fusion, fusion assessment and change detection in multitemporal images. For image fusion, a general scheme is presented that can be used with any multiresolution analysis technique; the algorithm is first implemented with the DT-CWT and then extended to an alternative method, the bilateral filter. In either case the methodology allows the injection of components to be carried out in different ways. For fusion assessment, a new scheme is presented that uses classification processes, which allows the results of the fusion process to be evaluated individually for each land-use cover class defined in the evaluation process. This methodology complements traditional assessment processes and can facilitate the analysis of the impact of fusion on particular classes. Finally, the proposed change detection algorithms cover two approaches. The first is aimed at obtaining drought maps from spectral indices in multitemporal data. The second uses a global spectral quality index as a spatial filter. This filter facilitates a global spectral comparison between two images and, combined with thresholding, yields difference images containing the change information.
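As a minimal sketch of the general fusion scheme described above, the following code uses a plain discrete wavelet transform (PyWavelets) in place of the DT-CWT of the thesis and a simple additive injection rule; both choices are assumptions made only for illustration:

import pywt

def fuse_band(ms_band, pan, wavelet="db4", level=2):
    # ms_band: one multispectral band already resampled to the panchromatic grid; pan: panchromatic image
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=level)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=level)
    # Keep the MS approximation (spectral content) and inject the PAN details (spatial content)
    fused = [ms_coeffs[0]] + pan_coeffs[1:]
    return pywt.waverec2(fused, wavelet)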
Abstract:
This thesis studies the characteristics of the outer region of zero-pressure-gradient turbulent boundary layers at moderate Reynolds numbers. Two relatively well-established theories are put to the test. The wall similarity theory states that in the presence of roughness, turbulent motion is mostly affected by the additional drag caused by the roughness, and that other secondary effects are restricted to a region very close to the wall. The current consensus is that this theory is valid, but only as a first approximation. At the edge of the boundary layer there is a thin region caused by the interaction between the turbulent eddies and the irrotational fluid of the free stream, called the turbulent/non-turbulent interface. The bulk of the results about this layer suggest the presence of slightly more intense localized shear, with properties that make it distinguishable from the rest of the turbulent flow. The properties of the interface are likely to change if the rate of spread of the boundary layer is amplified, an effect that can be achieved by increasing the wall friction. Roughness and entrainment are therefore linked, and the local behavior of the turbulent/non-turbulent interface may explain why rough-wall boundary layers deviate from the wall similarity theory precisely in the outer region. To study boundary layers at sufficiently high Reynolds numbers, a new high-resolution code for the direct numerical simulation of zero-pressure-gradient turbulent boundary layers over a flat plate has been developed. This code is able to simulate boundary layers in a range of Reynolds numbers from Re_τ = 100 to 2000 while maintaining good scalability up to around two million threads on Blue Gene/Q supercomputers. Special attention has been paid to the generation of proper inflow boundary conditions. The results are in good agreement with previous numerical and experimental data sets. The turbulent/non-turbulent interface of a boundary layer is analyzed by thresholding the vorticity magnitude field. The value of the threshold is treated as a parameter in the analysis of the surfaces obtained from isocontours of the vorticity magnitude. Two different regimes for the surface, with opposite properties, can be distinguished depending on the threshold, separated by a gradual topological transition across which the geometrical properties change significantly. The width of the transition scales well with δ99 when u_τ/δ99 is used as the unit of vorticity. The properties of the flow relative to the position of the vorticity magnitude isocontour are analyzed for the same range of thresholds using the ball distance field, which can be obtained regardless of the complexity of the reference surface. The properties of the flow at a given distance to the interface also depend on the threshold, but they are similar regardless of the Reynolds number. The interaction between the turbulent and the non-turbulent flow is restricted to a thin layer whose thickness scales with the local Kolmogorov length. Deeper into the turbulent side, the properties are indistinguishable from the rest of the boundary layer. A zero-pressure-gradient turbulent boundary layer with a volumetric near-wall forcing has been simulated. The forcing has been designed to increase the wall friction without introducing any obvious geometrical effect. The simulation is split into two domains: a smaller, lower-resolution one in charge of generating correct inflow boundary conditions, and a second, larger, high-resolution one where the forcing is applied. The study of the one-point and two-point statistics suggests that the failure of the forced and smooth cases to collapse beyond the logarithmic layer may be caused by the geometrical complexity of the intermittent region, and by the fact that the distance to the wall is no longer a characteristic length there. These geometrical effects can be avoided by using the turbulent/non-turbulent interface as a reference frame and the minimum (ball) distance to it for the analysis of its properties. The conditional analysis of the vorticity field in this alternative reference frame recovers the scaling with δ99 and ν/u_τ already present in the logarithmic layer, the only two length scales allowed if Townsend's wall similarity hypothesis is valid, consistent with these results.
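A minimal sketch of the thresholding step described above: the vorticity magnitude of a velocity field on a uniform grid is computed with central differences and compared with a threshold omega_0 (to be expressed, as in the text, in units of u_τ/δ99); the grid layout and the threshold value are assumptions for illustration:

import numpy as np

def turbulent_mask(u, v, w, dx, dy, dz, omega_0):
    # u, v, w: velocity components on a 3-D grid indexed (x, y, z); returns a boolean turbulent mask
    omega_x = np.gradient(w, dy, axis=1) - np.gradient(v, dz, axis=2)
    omega_y = np.gradient(u, dz, axis=2) - np.gradient(w, dx, axis=0)
    omega_z = np.gradient(v, dx, axis=0) - np.gradient(u, dy, axis=1)
    omega = np.sqrt(omega_x ** 2 + omega_y ** 2 + omega_z ** 2)
    return omega > omega_0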