852 results for Image texture analysis
Abstract:
data analysis table
Abstract:
Image registration is an important component of image analysis used to align two or more images. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that the image registration process can be dealt with from the perspective of a compression problem. Second, we demonstrate that the similarity metric introduced by Li et al. performs well in image registration. Two different versions of the similarity metric have been used: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimation of the entropy rate of the images.
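The similarity metric of Li et al. referred to above is the normalized compression (information) distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length of its argument. Below is a minimal sketch of the Kolmogorov version using zlib as a stand-in real-world compressor; the placeholder byte strings and the search comment are illustrative, not the authors' implementation.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance approximated with zlib."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Registration as compression: evaluate candidate transforms of the floating
# image and keep the one whose overlap with the reference compresses best
# (lowest NCD). The byte strings below stand in for raw pixel buffers.
reference = b"\x00\x10\x20\x30" * 1024   # placeholder "reference image"
floating  = b"\x01\x11\x21\x31" * 1024   # placeholder "floating image"
print("NCD(reference, floating):", ncd(reference, floating))
print("NCD(reference, reference):", ncd(reference, reference))
```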
Abstract:
This thesis proposes a methodology for the probabilistic simulation of matrix failure in carbon-fibre-reinforced composite materials, based on the analysis of the random distribution of the fibres. The first chapters review the state of the art on the mathematical modelling of random materials, the computation of effective properties, and transverse failure criteria for composite materials. The first step of the proposed methodology is the determination of the minimum size of a Statistical Representative Volume Element (SRVE). This determination is carried out by analysing the fibre volume fraction, the effective elastic properties, the Hill condition, the statistics of the stress and strain components, the probability density function, and the statistical inter-fibre distance functions of microstructure models of different sizes. Once this minimum size has been determined, a periodic model and a random model are compared in order to assess the magnitude of the differences observed between them. A methodology is also defined for the statistical analysis of the fibre distribution in the composite from digital images of the cross-section; this analysis is applied to four different materials. Finally, a two-scale computational method is proposed for simulating the transverse failure of unidirectional plies, which yields probability density functions for the mechanical variables. Some applications and possibilities of this method are described, and the simulation results are compared with experimental values.
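As an illustration of one of the inter-fibre distance statistics mentioned above, the sketch below computes the nearest-neighbour distance for each fibre centre extracted from a cross-section image. The uniformly random centres are purely illustrative and are not the thesis data.

```python
import numpy as np

def nearest_neighbour_distances(centres: np.ndarray) -> np.ndarray:
    """Nearest-neighbour distance for each fibre centre ((N, 2) array)."""
    diff = centres[:, None, :] - centres[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    return d.min(axis=1)

# Illustrative: 200 fibre centres drawn uniformly in a 100 x 100 um window.
rng = np.random.default_rng(0)
centres = rng.uniform(0.0, 100.0, size=(200, 2))
print("mean nearest-neighbour distance:",
      nearest_neighbour_distances(centres).mean())
```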
Abstract:
The growth of databases containing increasingly difficult images and ever more categories is driving the development of image representation techniques that remain discriminative when working with multiple classes, and of algorithms that are efficient in learning and classification. This thesis explores the problem of classifying images according to the object they contain when a large number of categories is available. First, we investigate how a hybrid system combining a generative model and a discriminative model can benefit image classification tasks in which the level of human annotation is minimal. For this task we introduce a new vocabulary built from a dense representation of colour-SIFT descriptors, and we then study how the different parameters affect the final classification. Next, a method is proposed for incorporating spatial information into the hybrid system, showing that context information is of great help for image classification. We then introduce a new shape descriptor that represents the image by its local shape and its spatial shape, together with a kernel that incorporates this spatial information in a pyramidal fashion. Shape is represented by a compact vector, yielding a descriptor well suited to kernel-based learning algorithms. The experiments show that this shape information achieves results similar to (and sometimes better than) appearance-based descriptors. We also investigate how different features can be combined for image classification and show that the proposed shape descriptor, together with an appearance descriptor, substantially improves classification. Finally, an algorithm is described that detects regions of interest automatically during training and classification. This provides a way to suppress the image background and adds invariance to the position of objects within the images. We show that using shape and appearance over this region of interest with random forest classifiers improves both classification and computation time. Our results are compared with results from the literature using the same databases as the original authors, as well as the same learning and classification protocols. All the innovations introduced are shown to improve the final image classification performance.
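As an illustration of the kind of kernel-based pipeline described above, the sketch below computes a histogram intersection kernel between bag-of-visual-words histograms; a pyramidal version would apply the same kernel per spatial cell and sum the levels with fixed weights. The vocabulary size and the random histograms are assumptions for the example only.

```python
import numpy as np

def intersection_kernel(H1: np.ndarray, H2: np.ndarray) -> np.ndarray:
    """Histogram intersection kernel: K[i, j] = sum_k min(H1[i, k], H2[j, k])."""
    return np.minimum(H1[:, None, :], H2[None, :, :]).sum(axis=2)

# Illustrative: 5 training and 3 test images over a 100-word visual vocabulary.
rng = np.random.default_rng(1)
train = rng.random((5, 100)); train /= train.sum(axis=1, keepdims=True)
test = rng.random((3, 100));  test /= test.sum(axis=1, keepdims=True)
K = intersection_kernel(test, train)   # precomputed kernel, e.g. for an SVM
print(K.shape)                         # (3, 5)
```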
Abstract:
The human ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, more importantly, the knowledge of the most common objects that we acquire through experience. Modelling the behaviour of our brain is currently out of reach, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A great deal of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is described in detail in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many other considerations have to be taken into account; for example, some points have no correspondence because of a surface occlusion or simply because they project outside the scope of the other camera. The interest of the thesis is focused on structured light, which is one of the techniques most frequently used to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its image on a camera sensor: the deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. It is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match: each token of light is imaged by the camera, and its label has to be read (the pattern decoded) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision compared with structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the framework of this thesis has led to a new coded structured light pattern that solves the correspondence problem uniquely and robustly.
Unique, because each token of light is coded with a different word, which removes the problem of multiple matching. Robust, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader will find examples of 3D measurement of static objects and of the more complicated measurement of moving objects; the technique can be used in both cases because the pattern is coded in a single projection shot, so it can be applied in several robot vision applications. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the corresponding points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
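Once the camera and projector (or second camera) models are calibrated, a 3D point can be recovered from a pair of corresponding points by linear triangulation. The sketch below uses the standard DLT formulation with two assumed 3x4 projection matrices; it is a generic illustration, not the calibration model developed in the thesis.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
    """Linear (DLT) triangulation of one 3D point from a pair of
    corresponding image points and two 3x4 camera projection matrices."""
    u1, v1 = x1
    u2, v2 = x2
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]            # back to inhomogeneous coordinates

# Illustrative calibrated pair: a reference camera and a second camera
# displaced along the x axis (unit baseline), both with focal length 800 px.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.2, -0.1, 5.0, 1.0])              # ground-truth point
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, x1, x2))               # ~ [0.2, -0.1, 5.0]
```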
Abstract:
We examine the efficacy of two-volume spatial registration of pre- and post-operative clinical computed tomography (CT) imaging to verify post-operative electrode array placement in cochlear implant (CI) patients. To measure the degree of accuracy with which the composite image predicts in vivo placement of the array, we replicate the CI surgical process in cadaver heads. Pre-operative, post-operative and micro-CT imaging, together with histology, are used for verification.
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform an objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained with this method using data from a run of the Universities Global Atmospheric Modelling Project GCM.
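As a rough illustration of linking feature points across time levels into trajectories, the sketch below uses a greedy nearest-neighbour rule with a maximum displacement threshold; the dynamic scene analysis technique used in the paper is more elaborate, and the point sets and threshold here are invented for the example.

```python
import numpy as np

def link_features(frames, max_jump=2.0):
    """Greedy nearest-neighbour linking of feature points (one (N_t, 2)
    array per time level) into trajectories."""
    trajectories = [[tuple(p)] for p in frames[0]]
    for points in frames[1:]:
        unused = [tuple(p) for p in points]
        for traj in trajectories:
            if not unused:
                break
            last = np.array(traj[-1])
            d = [np.linalg.norm(np.array(p) - last) for p in unused]
            j = int(np.argmin(d))
            if d[j] <= max_jump:           # accept only plausible displacements
                traj.append(unused.pop(j))
    return trajectories

# Illustrative: three time levels with two slowly drifting feature points.
frames = [np.array([[0.0, 0.0], [10.0, 10.0]]),
          np.array([[0.5, 0.2], [10.4, 9.8]]),
          np.array([[1.1, 0.3], [10.9, 9.7]])]
for trajectory in link_features(frames):
    print(trajectory)
```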
Abstract:
The technology for site-specific applications of nitrogen (N) fertilizer has exposed a gap in our knowledge about the spatial variation of soil mineral N, and that which will become available during the growing season within arable fields. Spring mineral N and potentially available N were measured in an arable field together with gravimetric water content, loss on ignition, crop yield, percentages of sand, silt, and clay, and elevation to describe their spatial variation geostatistically. The areas with a larger clay content had larger values of mineral N, potentially available N, loss on ignition and gravimetric water content, and the converse was true for the areas with more sandy soil. The results suggest that the spatial relations between mineral N and loss on ignition, gravimetric water content, soil texture, elevation and crop yield, and between potentially available N and loss on ignition and silt content could be used to indicate their spatial patterns. Variable-rate nitrogen fertilizer application would be feasible in this field because of the spatial structure and the magnitude of variation of mineral N and potentially available N.
Abstract:
Structure is an important physical feature of the soil that is associated with water movement, the soil atmosphere, microorganism activity and nutrient uptake. A soil without any obvious organisation of its components is known as apedal, and this state can have marked effects on several soil processes. Accurate maps of topsoil and subsoil structure are desirable for a wide range of models that aim to predict erosion, solute transport, or flow of water through the soil. Such maps would also be useful to precision farmers when deciding how to apply nutrients and pesticides in a site-specific way, and to target subsoiling and soil structure stabilization procedures. Typically, soil structure is inferred from bulk density or penetrometer resistance measurements and, more recently, from soil resistivity and conductivity surveys. Measuring the former is both time-consuming and costly, whereas observations with the latter methods can be made automatically and swiftly using a vehicle-mounted penetrometer or resistivity and conductivity sensors. The results of each of these methods, however, are affected by other soil properties, in particular moisture content at the time of sampling, texture, and the presence of stones. Traditional methods of observing soil structure identify the type of ped and its degree of development. Methods of ranking such observations from good to poor for different soil textures have been developed. Indicator variograms can be computed for each category or rank of structure, and these can be summed to give the sum of indicator variograms (SIV). Observations of the topsoil and subsoil structure were made at four field sites where the soil had developed on different parent materials. The observations were ranked by four methods, and indicator variograms and the sum of indicator variograms were computed and modelled for each ranking method. The individual indicators were then kriged with the parameters of the appropriate indicator variogram model to map the probability of encountering soil with the structure represented by that indicator. The model parameters of the SIVs for each ranking system were used with the data to krige the soil structure classes, and the results are compared with those for the individual indicators. The relations between maps of soil structure and selected wavebands from aerial photographs are examined as a basis for planning surveys of soil structure. (C) 2007 Elsevier B.V. All rights reserved.
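An experimental indicator variogram for one structure class can be estimated as gamma_I(h) = (1 / 2N(h)) * sum over pairs separated by roughly h of [I(x_i) - I(x_j)]^2, where I is the indicator of that class; summing these estimates over all classes gives the SIV. The sketch below is a generic estimator on an invented grid of ranked observations, not the survey data of the study.

```python
import numpy as np

def indicator_variogram(coords, values, category, lags, tol):
    """Experimental indicator variogram for one structure category.

    coords : (N, 2) sample locations; values : (N,) observed classes;
    category : class for which the indicator is computed;
    lags : 1-D array of lag centres; tol : half-width of each lag bin.
    """
    ind = (values == category).astype(float)            # indicator transform
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = (ind[:, None] - ind[None, :]) ** 2
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol)
        mask &= np.triu(np.ones_like(mask, dtype=bool), k=1)   # each pair once
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Illustrative 10 x 10 grid of observations with three structure ranks (1-3).
rng = np.random.default_rng(2)
coords = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
values = rng.integers(1, 4, size=100)
print(indicator_variogram(coords, values, category=2,
                          lags=np.arange(1.0, 6.0), tol=0.5))
```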
Abstract:
Commercially supplied chicken breast muscle was subjected to simultaneous heat and pressure treatments. Treatment conditions ranged from ambient temperature to 70 °C and from 0.1 to 800 MPa, in various combinations. Texture profile analysis (TPA) of the treated samples was performed to determine changes in muscle hardness. At treatment temperatures up to and including 50 °C, heat and pressure acted synergistically to increase muscle hardness. However, at 60 and 70 °C, hardness decreased following treatments in excess of 200 MPa. TPA performed on extracted myofibrillar protein gels treated under similar conditions revealed similar effects of heat and pressure. Differential scanning calorimetry analysis of whole muscle samples revealed that, at ambient pressure, the unfolding of myosin was complete at 60 °C, unlike actin, which denatured completely only above 70 °C. With simultaneous pressure treatment at >200 MPa, myosin and actin unfolded at 20 °C. Unfolding of myosin and actin could be induced in extracted myofibrillar protein with simultaneous treatment at 200 MPa and 40 °C. Electrophoretic analysis indicated that high-pressure/temperature regimes induced disulfide bonding between myosin chains.
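Texture profile analysis derives its parameters from a double-compression force curve. The sketch below computes the commonly used definitions (hardness as the first-cycle peak force, cohesiveness as the ratio of compression areas, gumminess and chewiness as products with hardness and springiness); the synthetic sine-shaped curves are placeholders rather than measured data, and instrument software may define springiness slightly differently.

```python
import numpy as np

def tpa_parameters(force1, force2, time1, time2):
    """Common texture-profile-analysis parameters from the force-time
    curves of the two compression cycles (force in N, time in s)."""
    hardness = force1.max()
    area1 = np.sum(0.5 * (force1[1:] + force1[:-1]) * np.diff(time1))
    area2 = np.sum(0.5 * (force2[1:] + force2[:-1]) * np.diff(time2))
    cohesiveness = area2 / area1
    # Springiness: time from the start of the second compression to its peak,
    # relative to the same interval in the first cycle.
    springiness = (time2[np.argmax(force2)] - time2[0]) / \
                  (time1[np.argmax(force1)] - time1[0])
    gumminess = hardness * cohesiveness
    chewiness = gumminess * springiness
    return dict(hardness=hardness, cohesiveness=cohesiveness,
                springiness=springiness, gumminess=gumminess,
                chewiness=chewiness)

# Placeholder double-compression curve: two sine-shaped "bites".
t = np.linspace(0.0, 1.0, 200)
print(tpa_parameters(30.0 * np.sin(np.pi * t),   # first bite, 30 N peak
                     22.0 * np.sin(np.pi * t),   # second bite, lower peak
                     t, t + 1.5))
```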
Abstract:
The effects of high pressure (up to 800 MPa) applied at different temperatures (20-70 °C) for 20 min on the texture of beef post-rigor longissimus dorsi were studied. Texture profile analysis showed that, when the meat was heated at ambient pressure, there was the expected increase in hardness with increasing temperature, and when pressure was applied at room temperature there was again the expected increase in hardness with increasing pressure. Similar results to those found at ambient temperature were found when pressure was applied at 40 °C. However, at higher temperatures (60 and 70 °C), pressures of 200 MPa caused large and significant decreases in hardness. The results found for hardness were mirrored by those for gumminess and chewiness. To further understand the changes in texture observed, intact beef longissimus dorsi samples and extracted myofibrils were both subjected to differential scanning calorimetry after undergoing the same pressure/temperature regimes. As expected, collagen was reasonably inert to pressure and was denatured/unfolded only at temperatures of 60-70 °C. Myosin, however, was relatively easily unfolded by both pressure and temperature, and when it was pressure-denatured a new, modified structure of low thermal stability was formed. Although this new structure had low thermal stability at ambient pressure, it still formed in both the meat and the myofibrils when pressure was applied at 60 °C. It seems unlikely that structurally induced changes can be a major cause of the significant loss of hardness observed when beef is treated at high temperature (60-70 °C) and 200 MPa, and it is suggested that accelerated proteolysis under these conditions is the major cause. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Bubbles impart a very distinctive texture, chew, and mouthfeel to foods. However, little is known about the relationship between the structure of such products and consumer response in terms of mouthfeel and eating experience. The objective of this article is to investigate the sensory properties of 4 types of bubble-containing chocolates, produced using different gases: carbon dioxide, nitrogen, nitrous oxide, and argon. The structure of these chocolates was characterized in terms of (1) gas hold-up values determined by density measurements and (2) bubble size distribution, measured by image analysis of X-ray microtomograph sections. Bubble size distributions were obtained by measuring bubble volumes after reconstructing 3D images from the tomographic sections. A sensory study was undertaken by a non-expert panel of 20 panelists and their responses were analyzed using qualitative descriptive analysis (QDA). The results show that chocolates made from the 4 gases could be divided into 2 groups on the basis of bubble volume and gas hold-up: the samples produced using carbon dioxide and nitrous oxide had a distinctly higher gas hold-up and contained larger bubbles in comparison with those produced using argon and nitrogen. The sensory study also demonstrated that chocolates made with the latter gases were perceived to be harder, less aerated, slower to melt in the mouth, and lower in overall flavor intensity. These products were further found to be creamier than the chocolates made using carbon dioxide and nitrous oxide; the latter samples also showed a higher intensity of cocoa flavor.
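Gas hold-up from density measurements, as used above, is commonly taken as the gas volume fraction phi = 1 - rho_aerated / rho_unaerated. A minimal sketch with invented density values:

```python
def gas_holdup(rho_aerated: float, rho_unaerated: float) -> float:
    """Gas volume fraction from density: phi = 1 - rho_aerated / rho_unaerated."""
    return 1.0 - rho_aerated / rho_unaerated

# Invented example: an aerated chocolate of 1.05 g/cm3 against an unaerated
# reference of 1.30 g/cm3 gives a hold-up of roughly 19 %.
print(gas_holdup(1.05, 1.30))
```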
Abstract:
Eye-movements have long been considered a problem when trying to understand the visual control of locomotion. They transform the retinal image from a simple expanding pattern of moving texture elements (pure optic flow), into a complex combination of translation and rotation components (retinal flow). In this article we investigate whether there are measurable advantages to having an active free gaze, over a static gaze or tracking gaze, when steering along a winding path. We also examine patterns of free gaze behavior to determine preferred gaze strategies during active locomotion. Participants were asked to steer along a computer-simulated textured roadway with free gaze, fixed gaze, or gaze tracking the center of the roadway. Deviation of position from the center of the road was recorded along with their point of gaze. It was found that visually tracking the middle of the road produced smaller steering errors than for fixed gaze. Participants performed best at the steering task when allowed to sample naturally from the road ahead with free gaze. There was some variation in the gaze strategies used, but sampling was predominantly of areas proximal to the center of the road. These results diverge from traditional models of flow analysis.
Abstract:
Williams syndrome (WS) is a developmental disorder in which visuo-spatial cognition is poor relative to verbal ability. At the level of visuo-spatial perception, individuals with WS can perceive both the local and global aspects of an image. However, the manner in which local elements are integrated into a global whole is atypical, with relative strengths in integration by luminance, closure, and alignment compared to shape, orientation and proximity. The present study investigated the manner in which global images are segmented into local parts. Segmentation by seven gestalt principles was investigated: proximity, shape, luminance, orientation, closure, size (and alignment: Experiment 1 only). Participants were presented with uniform texture squares and asked to detect the presence of a discrepant patch (Experiment 1) or to identify the form of a discrepant patch as a capital E or H (Experiment 2). In Experiment 1, the pattern and level of performance of the WS group did not differ from that of typically developing controls, and was commensurate with the general level of non-verbal ability observed in WS. These results were replicated in Experiment 2, with the exception of segmentation by proximity, where individuals with WS demonstrated superior performance relative to the remaining segmentation types. Overall, the results suggest that, despite some atypical aspects of visuo-spatial perception in WS, the ability to segment a global form into parts is broadly typical in this population. In turn, this informs predictions of brain function in WS, particularly areas V1 and V4. (c) 2006 Elsevier Ltd. All rights reserved.