901 results for Subfractals, Subfractal Coding, Model Analysis, Digital Imaging, Pattern Recognition
Abstract:
Sclerochronological records of interannual shell growth variability were established for eight modern shells (26 to 163 years of age) of the bivalve Arctica islandica, which were sampled at one site in the inner German Bight. The records indicate generally low synchrony between individuals. Spectral analysis of the whole 163-yr master chronology indicated a cyclic pattern with periods of 5 and 7 years. The master chronology correlated poorly with time series of environmental parameters over the last 90 years. High environmental variability in time and space of the dynamic and complex German Bight hydrographic system results in an extraordinarily high 'noise' level in the shell growth pattern of Arctica islandica.
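Where the abstract mentions spectral analysis of the 163-yr master chronology, a short sketch may help illustrate how such periodicities are typically estimated from an annual growth-index series; the synthetic data, detrending choice, and variable names below are purely illustrative and are not the study's actual pipeline.

```python
# Hypothetical sketch: periodogram of an annual shell-growth index series.
# Assumes a growth index sampled once per year; data and names are illustrative.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
years = np.arange(1850, 2013)                      # a 163-yr master chronology
t = years - years[0]
# synthetic index with 5-yr and 7-yr cycles buried in strong noise
growth_index = (0.3 * np.sin(2 * np.pi * t / 5.0)
                + 0.3 * np.sin(2 * np.pi * t / 7.0)
                + rng.normal(0.0, 1.0, t.size))

freq, power = periodogram(growth_index, fs=1.0, detrend="linear")  # cycles per year
periods = 1.0 / freq[1:]                                           # skip the zero frequency
top = np.argsort(power[1:])[::-1][:5]
for p, pw in zip(periods[top], power[1:][top]):
    print(f"period {p:5.1f} yr  power {pw:6.2f}")
```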
Abstract:
The aim of this doctoral thesis is the development of a methodology for the automatic detection of anomalies from hyperspectral data, or imaging spectrometry, and their mapping under different typological conditions of surface and terrain. Hyperspectral technology, or imaging spectrometry, offers the potential to characterize precisely the state of the materials that make up the various surfaces on the basis of their spectral response. This state is usually variable, whereas observations occur in a limited number and under particular illumination conditions. As the number of spectral bands increases, so does the number of samples needed to define the classes spectrally, in what is known as the Curse of Dimensionality or Hughes Effect (Bellman, 1957); such samples are usually unavailable and costly to obtain, as is readily apparent when considering what this implies for Planetary Exploration. Under the definition of an anomaly, in its spectral sense, as the response of an image pixel that differs significantly from its surroundings, the central problem addressed in the thesis is, first, how to reduce the dimensionality of the information in hyperspectral data, discriminating the portion most significant for the detection of anomalous responses, and second, how to establish the relationship between detected spectral anomalies and what we have called informational anomalies, that is, anomalies that provide some kind of real information about the surfaces or materials that produce them. Anomaly detection assumes no prior knowledge of the targets, so that pixels are separated automatically on the basis of spectral information that differs significantly from a background which is estimated either globally for the whole scene or locally by image segmentation. The methodology developed has focused on the implications of the statistical definition of the spectral background, proposing a new approach that discriminates anomalies against backgrounds segmented into different groups of spectral wavelengths, exploiting the potential separation between the reflective and emissive parts of the electromagnetic spectrum. The efficiency of the main anomaly detection algorithms has been studied, contrasting the results of the RX algorithm (Reed and Xiaoli, 1990), adopted as the standard by the scientific community, with the UTD method (Uniform Targets Detector), its RXD-UTD variant, subspace-based methods such as SSRX (Subspace RX), and methods based on image subspace projections, such as OSPRX (Orthogonal Subspace Projection RX) and PP (Projection Pursuit). A new method has been developed, evaluated and contrasted with the previous ones; it is a variation of PP that describes the spectral background through discriminant analysis of bands of the electromagnetic spectrum, separating the anomalies with an algorithm called the Thermal Background Anomaly Detector (DAFT), applicable to sensors that record data in the emissive spectrum. The different anomaly detection methods have been evaluated over the visible and near-infrared (VNIR), short-wave infrared (SWIR), mid-infrared (MIR) and thermal infrared (TIR) ranges of the electromagnetic spectrum.
The response of surfaces at the different wavelengths of the electromagnetic spectrum, together with their surroundings, influences the type and frequency of the spectral anomalies they may produce. For this reason, the research has used hyperspectral data cubes from airborne sensors whose strategies and designs for building the spectrometric image differ. Test data sets were evaluated from the AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) sensors. Experiments were designed over natural, urban and semi-urban environments of varying complexity. The behaviour of the different anomaly detectors was evaluated through 23 tests corresponding to 15 study areas grouped into 6 spaces or scenarios: Urban - E1, Semi-urban/Industrial/Urban Periphery - E2, Forest - E3, Agricultural - E4, Geological/Volcanic - E5, and Other Spaces (Water, Clouds and Shadows) - E6. The sensors evaluated are characterized by recording images in a wide range of narrow, contiguous bands of the electromagnetic spectrum. The thesis has focused on developing techniques that automatically separate and extract pixels, or groups of pixels, whose spectral signature differs in a discriminant way from those around them, adopting as the sample space part or all of the spectral bands in which the hyperspectral sensor has recorded radiance. A factor taken into account in the research has been the measuring instrument itself, that is, the characterization of the various subsystems, imaging and auxiliary sensors, involved in the process. In order to use the measured data quantitatively, it was necessary to define the spatial and spectral relationships of the sensor with the observed surface and with the potential anomalies and target detection patterns. The impact of the sensor type on anomaly detection has been analyzed, both in its spectral configuration and in its design strategy for recording the radiation coming from the surfaces, the two main sensor types studied being whiskbroom (rotating-mirror) scanners and pushbroom scanners. Different scenarios were defined in the research, covering a wide variability of geomorphological environments and cover types, in Mediterranean, mid-latitude and tropical settings. In summary, this thesis presents an anomaly detection technique for hyperspectral data, called DAFT, as a variant of PP, based on dimensionality reduction by projecting the background onto a range of thermal wavelengths distinct from the projection of the anomalies or targets with unknown spectral signature. The proposed methodology has been tested with real hyperspectral images from different sensors and in different scenarios or spaces, hence also with different spectral backgrounds, and the results show the benefits of the approach in detecting a wide variety of objects whose spectral signatures deviate sufficiently from the background.
The technique turns out to be automatic in the sense that no parameter tuning is needed, giving significant results in all cases. Even sub-pixel objects, which cannot be distinguished by the human eye in the original image, can be detected as anomalies. In addition, the proposed approach is compared with the popular RX technique and with other detectors in both their global and local modes. The proposed method outperforms the others in certain scenarios, demonstrating its ability to reduce the false-alarm rate. The results of the automatic DAFT algorithm have demonstrated an improvement in the qualitative definition of the spectral anomalies that identify distinct entities on or below the surface, replacing the classical normal-distribution model with a robust method that considers different alternatives from the very moment of hyperspectral data acquisition. To achieve this, it was necessary to analyze the relationship between biophysical parameters, such as the reflectance and emissivity of the materials, and the spatial distribution of detected entities with respect to their surroundings. Finally, the DAFT algorithm was chosen as the most suitable for sensors that acquire data in the TIR, since it shows the best agreement with the reference data and great computational efficiency, which facilitates its implementation in a mapping system that automatically projects the detected anomalies onto a geographic reference frame, marking a significant step towards what is known as real-time mapping. The aim of this Thesis is to develop a specific methodology to be applied in automatic anomaly detection processes using hyperspectral data, also called hyperspectral scenes, and to improve classification processes. Several scenarios and areas, and their relationship with surfaces and objects, have been tested. The spectral characteristics of reflectance and emissivity in the pattern recognition of urban materials in several hyperspectral scenes have also been tested. Spectral ranges of the visible-near infrared (VNIR), shortwave infrared (SWIR) and thermal infrared (TIR) from hyperspectral data cubes of AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been used in this research. It is assumed that there is no prior knowledge of the targets in anomaly detection. Thus, pixels are automatically separated according to their spectral information, significantly differentiated with respect to a background estimated either globally for the full scene or locally by image segmentation. Several experiments on different scenarios have been designed, analyzing the behavior of the standard RX anomaly detector and of different methods based on subspaces, image projections and segmentation. The results and their consequences for unsupervised classification processes are discussed. Detection of spectral anomalies aims at automatically extracting pixels that show significant responses in relation to their surroundings. This Thesis deals with the unsupervised technique of target detection, also called anomaly detection.
Since this technique assumes no prior knowledge about the target or the statistical characteristics of the data, the only available option is to look for objects that are differentiated from the background. Several methods have been developed in recent decades, allowing a better understanding of the relationships between image dimensionality and the optimization of search procedures, as well as of the subpixel differentiation of the spectral mixture and its implications for anomalous responses. In another sense, imaging spectrometry has proven to be efficient in the characterization of materials, based on statistical methods using specific reflection and absorption bands. Spectral configurations in the VNIR, SWIR and TIR have been successfully used for mapping materials in different urban scenarios. There has been an increasing interest in the use of high-resolution data (both spatial and spectral) to detect small objects and to discriminate surfaces in areas of urban complexity. This has come to be known as target detection, which can be either supervised or unsupervised. In supervised target detection, algorithms rely on prior knowledge, such as the spectral signature. The detection process for matching signatures is not straightforward, owing to the difficulty of relating data recorded by the airborne sensor to material spectra on the ground. This can be further complicated by the large number of possible objects of interest, as well as by uncertainty about the reflectance or emissivity of these objects and surfaces. An important objective of this research is to establish relationships that allow linking spectral anomalies with what can be called informational anomalies and, therefore, to identify information related to anomalous responses in some places rather than simply spotting differences from the background. The development in recent years of new hyperspectral sensors and techniques widens the possibilities for applications in remote sensing of the Earth. Remote sensing systems measure and record electromagnetic disturbances that the surveyed objects induce in their surroundings, by means of different sensors mounted on airborne or space platforms. Map updating is important for managers and decision makers because of the fast changes that usually happen in natural, urban and semi-urban areas. It is necessary to optimize the methodology for getting the best out of remote sensing techniques applied to hyperspectral data. The first problem with hyperspectral data is to reduce the dimensionality while keeping the maximum amount of information. Hyperspectral sensors increase the amount of information considerably; this allows better precision in the separation of materials, but at the same time a larger number of parameters must be estimated, and precision drops as the number of bands increases. This is known as the Hughes effect (Bellman, 1957). Hyperspectral imagery allows us to discriminate between a huge number of different materials; however, some land and urban covers are made up of similar materials and respond similarly, which produces confusion in the classification. The training and the algorithm used for mapping are also important for the final result, and some properties of the thermal spectrum for detecting land cover will be studied.
In summary, this Thesis presents a new technique for anomaly detection in hyperspectral data, called DAFT, a variant of PP, based on dimensionality reduction by projecting the background onto a range of thermal wavelengths distinct from the projection of the anomalies or targets with unknown spectral signature. The proposed methodology has been tested with hyperspectral images from different imaging spectrometers corresponding to several places or scenarios, and therefore with different spectral backgrounds. The results show the benefits of the approach for the detection of a variety of targets whose spectral signatures deviate sufficiently from the background. DAFT is an automated technique in the sense that it is not necessary to adjust parameters, and it provides significant results in all cases. Subpixel anomalies, which cannot be distinguished by the human eye in the original image, can nevertheless be detected as outliers thanks to the projection of the VNIR endmembers with a very strong thermal contrast. Furthermore, a comparison between the proposed approach and the well-known RX detector is performed in both modes, global and local. The proposed method outperforms the existing ones in particular scenarios, demonstrating its ability to reduce the probability of false alarms. The results of the automatic DAFT algorithm have demonstrated an improvement in the qualitative definition of the spectral anomalies, obtained by replacing the classical normal-distribution model with a robust method. Achieving this required analyzing the relationship between biophysical parameters, such as reflectance and emissivity, and the spatial distribution of detected entities with respect to their environment, for example buried or semi-buried materials, or building covers of asbestos, cellular polycarbonate-PVC or metal composites. Finally, the DAFT method has been chosen as the most suitable for anomaly detection with imaging spectrometers that acquire data in the thermal infrared spectrum, since it presents the best results in comparison with the reference data and demonstrates great computational efficiency, which facilitates its implementation in a mapping system oriented towards what is called Real-Time Mapping.
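Since the RX algorithm (Reed and Xiaoli, 1990) is used throughout the abstract as the baseline detector, a minimal global-RX sketch is included here for orientation; the input cube, the use of a pseudo-inverse, and the percentile threshold are assumptions of this example, not details taken from the thesis.

```python
# Minimal sketch of a global RX anomaly detector on a hyperspectral cube.
# `cube` (rows, cols, bands) is a hypothetical input; the threshold is illustrative.
import numpy as np

def rx_scores(cube):
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mean = pixels.mean(axis=0)                      # global background mean
    cov = np.cov(pixels, rowvar=False)              # background covariance
    cov_inv = np.linalg.pinv(cov)                   # pseudo-inverse for numerical stability
    diff = pixels - mean
    # squared Mahalanobis distance of each pixel to the background
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(rows, cols)

# usage on synthetic data: flag the top 1% of pixels as anomalies
cube = np.random.rand(100, 100, 50)
scores = rx_scores(cube)
anomaly_mask = scores > np.percentile(scores, 99)
```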
Abstract:
The Accessible Digital Home (HDA) of the ETSIST was created with the aim of bringing new Information Technologies closer to people with specific accessibility and usability needs, providing them with tools that allow them to increase their quality of life, comfort, security and autonomy. The HDA environment consists of control elements for doors, blinds, lighting, water or gas; temperature, fire and gas sensors; air-conditioning systems; entertainment systems; and security systems such as presence detectors and alarms. All of this is supported by a network architecture that provides a residential gateway and broadband access. The main objective of this final degree project (PFG) was the development of a low-cost authentication system for the Accessible Digital Home. The idea of integrating an authentication system into the HDA arises from the need to protect certain services available within a private environment from unwanted access. Some of these services include reading the messages stored in the answering machine, using multimedia equipment, disarming security alarms, or simply configuring ambient settings for the authenticated user (light intensity, room temperature, etc.). The development prioritized the principles of accessibility, usability and security needed to create a non-invasive environment that allows users to prove their identity to the HDA system. The proposed solution is a system based on the recognition of a stroke drawn by the user. This stroke is used as a key to validate users: to authenticate, the user must repeat the stroke registered in the system. This PFG justifies the choice of this authentication mechanism over other alternatives available on the market. To test the application, two peripherals of different ranges were available: the uDraw, created for the PS3, which consists of a digitizing tablet and a stylus that captures the user's strokes wirelessly, and the Wacom Bamboo digitizing tablet. The developed tool can also be used with other types of devices, such as the Texas Instruments Chronos eZ430 watch with a 3-axis accelerometer, capable of translating the user's movements into mouse-pointer motion. The PFG is divided into three major workflow blocks. The first focuses on the analysis of the system and the technologies that compose it, including the different algorithms available for authentication based on image pattern recognition that best fit the user's needs. The second block covers a test version based on the previous analysis and UML design, on which proof-of-concept tests were run and the feasibility of the project was verified. The last block includes the verification and validation of the system through tests certifying that the quality levels required to meet the stated objectives have been reached, finally generating the necessary documentation.
As a result of the work carried out, a system with an easily extensible architecture has been obtained, achieved through techniques such as introspection, which separate the business-logic layer from the code that implements it, so that code can be replaced simply and intuitively through configuration files, making the system flexible and scalable. After completing the PFG, it can be concluded that the final product has performed satisfactorily, reaching the required quality levels and providing an alternative to conventional authentication systems, while keeping security high and making accessibility and price its most notable features. ABSTRACT. The Accessible Digital Home (HDA) of the ETSIST was created with the aim of bringing the latest information and communications technologies closer to people who have special accessibility and usability needs, increasing their quality of life, comfort, security and autonomy. The HDA environment has different control elements for doors, blinds, lighting, water or gas, temperature sensors, fire protection systems, gas detection, air conditioning systems, entertainment systems and security systems such as intruder detectors and alarms. Everything is supported by a network architecture that provides a residential gateway and broadband access. The main goal of this PFG was the development of a low-cost authentication system for the Accessible Digital Home. The idea of integrating an authentication system into the HDA stems from the need to safeguard certain private services from unauthorized access. Some of these services are access to the answering machine messages, the use of multimedia devices, the deactivation of alarms, or the parameter settings for each environment as programmed by the authenticated user (light intensity, room temperature, etc.). During development, priority was given to accessibility, usability and security, all of them necessary to create a non-invasive environment that allows users to certify their identity. A system based on stroke pattern recognition was considered as a possible solution. The stroke is used as a key to validate users: the user must repeat the stroke saved in the system to gain access. The selection of this authentication mechanism over the other available options is justified in this PFG. Two peripherals of different ranges were used to test the application. One of them was the uDraw, designed for the PS3; it is wireless and consists of a stylus and a drawing tablet that registers the strokes drawn by the user. The other one was the Wacom Bamboo tablet, which supports the same functionality with better accuracy. The developed tool also supports other kinds of peripherals, such as the Texas Instruments Chronos eZ430 digital wristwatch with a 3-axis accelerometer, capable of transferring user movements to the mouse cursor. The PFG is divided into three major blocks that represent different workflows. The first block focuses on the system analysis and the technologies related to it, including the image pattern recognition algorithms that best fit the user's needs. The second block describes how the beta version was developed based on the UML analysis and design previously done; it was tested and the viability of the project was verified. The last block contains the system verification and validation.
These processes certify that the requirements have been fulfilled, as well as the quality levels needed to reach the planned goals; finally, all the documentation has been produced. As a result of the work, an expandable system has been created thanks to introspection, which makes it possible to separate the business logic from the code that implements it. With this technique, the code can be replaced through configuration files, which makes the system flexible and highly scalable. Now that the PFG has finished, it can be concluded that the final product has been a success and that high levels of quality have been achieved. This authentication tool provides a low-cost alternative to conventional ones. The new authentication system keeps security levels reasonably high while giving particular emphasis to accessibility and price.
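As a rough illustration of stroke-based authentication of the kind described above, the sketch below compares a drawn stroke with a stored template using dynamic time warping over (x, y) tablet samples. This is only one plausible matcher, not necessarily the algorithm selected in the PFG, and the function names and threshold are hypothetical.

```python
# Illustrative stroke matcher: length-normalized DTW distance between two
# sequences of (x, y) samples, with a hypothetical acceptance threshold.
import numpy as np

def dtw_distance(a, b):
    """a, b: arrays of shape (n, 2) holding (x, y) tablet samples."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)          # alignment cost normalized by path length

def authenticate(attempt, template, threshold=5.0):
    # accept the attempt if it is close enough to the registered template stroke
    return dtw_distance(attempt, template) < threshold
```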
Abstract:
The main problem in studying vertical drainage from the moisture distribution on a vertisol profile is finding suitable methods for these procedures. Our aim was to design a digital image processing methodology, and its analysis, to characterize the moisture content distribution of a vertisol profile. In this research, twelve soil pits were excavated on a bare Mazic Pellic Vertisol, six of them on May 13, 2011 and the rest on May 19, 2011, after a moderate rainfall event. Digital RGB images were taken from each vertisol pit using a Kodak camera at a size of 1600x945 pixels. Each soil image was processed to homogenize brightness, and then a spatial filter with several window sizes was applied to select the optimum one. Each RGB image obtained was split into its colour matrices, and the best maximum and minimum thresholds were selected for each one to produce a digital binary pattern. This pattern was analyzed by estimating two fractal scaling exponents, the box-counting dimension (DBC) and the interface fractal dimension (D). In addition, three pre-fractal scaling coefficients were determined at maximum resolution: the total number of boxes intercepting the foreground pattern (A), the fractal lacunarity (Λ1) and the Shannon entropy (S1). For all the images processed, the 9x9 spatial filter was the optimum based on entropy, cluster and histogram criteria. Thresholds for each colour were selected based on bimodal histograms.
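To make the box-counting exponent DBC mentioned above concrete, here is a small, hedged sketch of a box-counting dimension estimate on a binary pattern; the synthetic image and the set of box sizes are illustrative only and do not reproduce the study's processing chain.

```python
# Box-counting dimension of a binary image: count occupied boxes at several scales
# and fit the slope of log N(s) against log(1/s).
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes intercepting the foreground
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

binary = np.random.rand(256, 256) > 0.7            # stand-in for a thresholded moisture pattern
print(box_counting_dimension(binary))
```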
Abstract:
Introduction: Although structural brain abnormalities in schizophrenia have been repeatedly demonstrated in magnetic resonance imaging (MRI) studies, it remains uncertain whether such abnormalities are static or progressive. While longitudinal studies are traditionally used to address the question of progression, cross-sectional neuroimaging studies directly comparing patients with chronic and first-episode schizophrenia to healthy controls have been quite rare to date. With the recent interest in mega-analyses combining multicentre MRI data to achieve greater statistical power, the present multicentre voxel-based morphometry (VBM) study was carried out to evaluate the patterns of structural brain abnormalities across the different stages of the disease, as well as to assess which (if any) of these abnormalities correlate specifically with potential clinical moderators, such as cumulative antipsychotic exposure, illness duration and illness severity. Methods: A large sample of patients with schizophrenia (161, of whom 99 chronic and 62 first-episode) and controls (151) was selected from four previous MRI (1.5T) studies carried out in the same region of Brazil. Image processing and analysis were performed with the Statistical Parametric Mapping (SPM8) software using the DARTEL algorithm (diffeomorphic anatomical registration through exponentiated Lie algebra). Group effects on regional grey matter (GM) volumes were analyzed with voxel-wise analyses of covariance in general linear models, entering total GM volume, scanner protocol, age and sex as nuisance covariates in all analyses. Finally, correlation analyses were performed between the aforementioned potential clinical moderators and global and regional brain volumes. Results: First-episode schizophrenia patients showed subtle volumetric reductions compared with controls, in a circumscribed neural circuit identifiable only with small volume correction (SVC) analyses [p < 0.05, family-wise error (FWE) corrected], including the insula, temporo-limbic structures and the striatum. Chronic patients, on the other hand, showed a pattern of extensive abnormalities relative to controls, involving the orbital, superior and inferior frontal cortices bilaterally, the right middle frontal cortex, both anterior cingulate cortices, both insulae, and the right superior and middle temporal cortices (p < 0.05, whole-brain FWE-corrected analyses). Significant negative correlations were found between cumulative antipsychotic exposure and global GM and white matter volumes in schizophrenia patients, although correlations with regional reductions were not significant. Significant negative correlations were also detected between illness duration and relative regional volumes of the left insula, right anterior cingulate cortex and dorsolateral prefrontal cortices in SVC analyses of the pooled groups (chronic and first-episode schizophrenia).
Conclusion: The above findings indicate that: a) the structural abnormalities associated with the diagnosis of schizophrenia are more widespread in the chronic form than in first-episode patients; b) regional volumetric reductions in specific brain areas may vary as a function of illness duration; c) cumulative antipsychotic exposure was associated with global, but not regional, volumetric abnormalities.
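As a schematic of the voxel-wise ANCOVA described in the Methods (group effect on regional GM volume with total GM, age and sex as covariates), the sketch below fits one regression per voxel with numpy. The real analysis was performed in SPM8 with DARTEL, and the multiple-comparison correction (FWE/SVC) is not shown; sizes and data are placeholders.

```python
# Schematic per-voxel ANCOVA: GM volume ~ group + total GM + age + sex,
# fitted voxel-by-voxel; this numpy version is only illustrative.
import numpy as np

n_subjects, n_voxels = 312, 1000                   # hypothetical sizes
gm = np.random.rand(n_subjects, n_voxels)          # stand-in for smoothed GM maps
group = np.random.randint(0, 2, n_subjects)        # 0 = control, 1 = patient
total_gm = gm.sum(axis=1)
age = np.random.uniform(18, 60, n_subjects)
sex = np.random.randint(0, 2, n_subjects)

X = np.column_stack([np.ones(n_subjects), group, total_gm, age, sex])
beta, *_ = np.linalg.lstsq(X, gm, rcond=None)      # one regression per voxel
residuals = gm - X @ beta
sigma2 = (residuals ** 2).sum(axis=0) / (n_subjects - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_group = beta[1] / se                             # voxel-wise t-map for the group effect
```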
Abstract:
Deformable template models are first applied to track the inner wall of coronary arteries in intravascular ultrasound sequences, mainly to assist angioplasty surgery. A circular template is used to initialize an elliptical deformable model that tracks wall deformation when a balloon placed at the tip of the catheter is inflated. We define a new energy function for driving the behavior of the template and test its robustness on both real and synthetic images. Finally, we introduce a framework for learning and recognizing spatio-temporal geometric constraints based on Principal Component Analysis (eigenconstraints).
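A loose sketch of the eigenconstraints idea follows: PCA is applied to vectors of template parameters gathered over training sequences, and a new sample is judged by its residual outside the learned subspace. The parameterization, dimensions and threshold are assumptions of this illustration, not the paper's implementation.

```python
# PCA over stacked template-parameter vectors; the residual outside the retained
# subspace acts as a constraint-violation score. All sizes are hypothetical.
import numpy as np

train = np.random.rand(200, 10)                    # 200 training vectors of 10 stacked ellipse params
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:3]                                # keep the 3 leading "eigenconstraints"

def constraint_violation(sample):
    centered = sample - mean
    projection = components.T @ (components @ centered)
    return np.linalg.norm(centered - projection)   # residual outside the learned subspace

print(constraint_violation(np.random.rand(10)))
```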
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
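For readers unfamiliar with the E- and M-steps that the multiresolution kd-tree structure accelerates, here is a plain EM fit of a one-dimensional Gaussian mixture over voxel intensities; the sparse/incremental and kd-tree machinery from the paper is deliberately not reproduced, and the data are synthetic.

```python
# Plain EM for a 1-D Gaussian mixture over voxel intensities (synthetic data).
import numpy as np

def em_gmm(x, k=3, iters=50):
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each voxel
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

voxels = np.concatenate([np.random.normal(m, 5, 5000) for m in (30, 80, 150)])
print(em_gmm(voxels))
```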
Abstract:
Strain localisation is a widespread phenomenon often observed in shear and compressive loading of geomaterials, for example, the fault gouge. It is believed that the main mechanisms of strain localisation are strain softening and mismatch between dilatancy and pressure sensitivity. Observations show that gouge deformation is accompanied by considerable rotations of grains. In our previous work as a model for gouge material, we proposed a continuum description for an assembly of particles of equal radius in which the particle rotation is treated as an independent degree of freedom. We showed that there exist critical values of the model parameters for which the displacement gradient exhibits a pronounced localisation at the mid-surface layers of the fault, even in the absence of inelasticity. Here, we generalise the model to the case of finite deformations characteristic for the gouge deformation. We derive objective constitutive relationships relating the Jaumann rates of stress and moment stress to the relative strain and curvature rates, respectively. The model suggests that the pattern of localisation remains the same as in the linear case. However, the presence of the Jaumann terms leads to the emergence of non-zero normal stresses acting along and perpendicular to the shear layer (with zero hydrostatic pressure), and localised along the mid-line of the gouge; these stress components are absent in the linear model of simple shear. These additional normal stresses, albeit small, cause a change in the direction in which the maximal normal stresses act and in which en-echelon fracturing is formed.
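For reference, the standard Jaumann (co-rotational) stress rate has the form below, with W the spin tensor (the antisymmetric part of the velocity gradient); the paper's specific Cosserat constitutive relations for the stress and moment stress are not reproduced here.

```latex
% Standard Jaumann (co-rotational) rate of the stress tensor.
\overset{\circ}{\boldsymbol{\sigma}}
  = \dot{\boldsymbol{\sigma}} - \mathbf{W}\,\boldsymbol{\sigma} + \boldsymbol{\sigma}\,\mathbf{W},
\qquad
\mathbf{W} = \tfrac{1}{2}\left(\nabla\mathbf{v} - (\nabla\mathbf{v})^{\mathsf{T}}\right)
```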
Abstract:
This review will discuss the use of manual grading scales, digital photography, and automated image analysis in the quantification of fundus changes caused by age-related macular disease. Digital imaging permits processing of images for enhancement, comparison, and feature quantification, and these techniques have been investigated for automated drusen analysis. The accuracy of automated analysis systems has been enhanced by the incorporation of interactive elements, such that the user is able to adjust the sensitivity of the system, or manually add and remove pixels. These methods capitalize on both computer and human image feature recognition and the advantage of computer-based methodologies for quantification. The histogram-based adaptive local thresholding system is able to extract useful information from the image without being affected by the presence of other structures. More recent developments involve compensation for fundus background reflectance, which has most recently been combined with the Otsu method of global thresholding. This method is reported to provide results comparable with manual stereo viewing. Developments in this area are likely to encourage wider use of automated techniques. This will make the grading of photographs easier and cheaper for clinicians and researchers. © 2007 Elsevier Inc. All rights reserved.
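Since the review mentions the Otsu method of global thresholding, a compact sketch of that step is given below; the background-reflectance compensation discussed above is not included, and the image is a random placeholder rather than fundus data.

```python
# Otsu's global threshold: choose the level that maximizes between-class variance
# of the grayscale histogram, then binarize.
import numpy as np

def otsu_threshold(gray):
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if between > best_var:
            best_t, best_var = t, between
    return best_t

image = np.random.randint(0, 256, (128, 128))
bright_mask = image > otsu_threshold(image)
```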
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
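The core argument, that a linear ramp leaves the 2nd derivative untouched but does alter a half-wave-rectified gradient, can be checked numerically; the toy profile, blur and ramp gain below are illustrative only and do not reproduce the full multi-scale model.

```python
# Toy check: adding a linear ramp to a blurred edge leaves the 2nd derivative
# unchanged but narrows the support of the half-wave-rectified 1st derivative.
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-5, 5, 1001)
edge = gaussian_filter1d((x > 0).astype(float), sigma=50)   # blurred positive-going edge
ramp = -0.02 * x                                            # negative-going luminance ramp

for label, profile in [("edge", edge), ("edge+ramp", edge + ramp)]:
    d1 = np.gradient(profile, x)
    d2 = np.gradient(d1, x)
    rectified = np.maximum(d1, 0.0)                         # half-wave rectifier after the 1st derivative
    print(label,
          "peak |2nd deriv|:", float(np.abs(d2).max().round(4)),
          "rectified support:", int((rectified > 0).sum()))
```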
Abstract:
In this paper, we present an approach for extending the learning set of a classification algorithm with additional metadata. It is used as a basis for giving appropriate names to the regularities found. The analysis of the correspondence between connections established in the attribute space and existing links between concepts can be used as a test for the creation of an adequate model of the observed world. The Meta-PGN classifier is suggested as a possible tool for establishing these connections. Applying this approach in the field of content-based image retrieval of art paintings provides a tool for extracting specific feature combinations, which represent different aspects of artists' styles, periods and movements.
Abstract:
With the latest developments in computer science, multivariate data analysis methods have become increasingly popular among economists. Pattern recognition in complex economic data and empirical model construction can be more straightforward with the proper application of modern software. However, despite the appealing simplicity of some popular software packages, the interpretation of data analysis results requires strong theoretical knowledge. This book aims at combining the development of both theoretical and application-related data analysis knowledge. The text is designed for advanced-level studies and assumes acquaintance with elementary statistical terms. After a brief introduction to selected mathematical concepts, the highlighting of selected model features is followed by a practice-oriented introduction to the interpretation of SPSS outputs for the described data analysis methods. Learning data analysis is usually time-consuming and requires effort, but with tenacity the learning process can bring about a significant improvement of individual data analysis skills.
Abstract:
This research examined the factors contributing to the performance of online grocers prior to, and following, the 2000 dot.com collapse. The primary goals were to assess the relationship between a company’s business model(s) and its performance in the online grocery channel and to determine if there were other company and/or market related factors that could account for company performance. To assess the primary goals, a case based theory building process was utilized. A three-way cross-case analysis comprising Peapod, GroceryWorks, and Tesco examined the common profit components, the structural category (e.g., pure-play, partnership, and hybrid) profit components, and the idiosyncratic profit components related to each specific company. Based on the analysis, it was determined that online grocery store business models could be represented at three distinct, but hierarchically related, levels. The first level was termed the core model and represented the basic profit structure that all online grocers needed in order to conduct operations. The next model level was termed the structural model and represented the profit structure associated with the specific business model configuration (i.e., pure-play, partnership, hybrid). The last model level was termed the augmented model and represented the company’s business model when idiosyncratic profit components were included. In relation to the five company related factors, scalability, rate of expansion, and the automation level were potential candidates for helping to explain online grocer performance. In addition, all the market structure related factors were deemed possible candidates for helping to explain online grocer performance. The study concluded by positing an alternative hypothesis concerning the performance of online grocers. Prior to this study, the prevailing wisdom was that the business models were the primary cause of online grocer performance. However, based on the core model analysis, it was hypothesized that the customer relationship activities (i.e., advertising, promotions, and loyalty program tie-ins) were the real drivers of online grocer performance.
Abstract:
Extensive data sets on water quality and seagrass distributions in Florida Bay have been assembled under complementary, but independent, monitoring programs. This paper presents the landscape-scale results from these monitoring programs and outlines a method for exploring the relationships between two such data sets. Seagrass species occurrence and abundance data were used to define eight benthic habitat classes from 677 sampling locations in Florida Bay. Water quality data from 28 monitoring stations spread across the Bay were used to construct a discriminant function model that assigned a probability of a given benthic habitat class occurring for a given combination of water quality variables. Mean salinity, salinity variability, the amount of light reaching the benthos, sediment depth, and mean nutrient concentrations were important predictor variables in the discriminant function model. Using a cross-validated classification scheme, this discriminant function identified the most likely benthic habitat type as the actual habitat type in most cases. The model predicted that the distribution of benthic habitat types in Florida Bay would likely change if water quality and water delivery were changed by human engineering of freshwater discharge from the Everglades. Specifically, an increase in the seasonal delivery of freshwater to Florida Bay should cause an expansion of seagrass beds dominated by Ruppia maritima and Halodule wrightii at the expense of the Thalassia testudinum-dominated community that now occurs in northeast Florida Bay. These statistical techniques should prove useful for predicting landscape-scale changes in community composition in diverse systems where communities are in quasi-equilibrium with environmental drivers.
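As a hedged sketch of the kind of discriminant-function model described above, the snippet below predicts a habitat class from a handful of water-quality predictors and evaluates it with cross-validation; the feature list, class count and random data are placeholders rather than the study's dataset.

```python
# Cross-validated linear discriminant analysis: water-quality predictors -> habitat class.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_sites = 677
X = rng.normal(size=(n_sites, 5))        # e.g. mean salinity, salinity variability, light, sediment depth, nutrients
y = rng.integers(0, 8, n_sites)          # eight benthic habitat classes

lda = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
probs = lda.fit(X, y).predict_proba(X)   # probability of each habitat class at each site
```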