870 results for Texture segmentation
Abstract:
In design and manufacturing, mesh segmentation is required for FACE construction in boundary representation (BRep), which in turn is central to feature-based design, machining, parametric CAD and reverse engineering, among others. Although mesh segmentation is dictated by geometry and topology, this article focuses on the topological aspect (the graph spectrum), as we consider that this tool has not been fully exploited. We preprocess the mesh to obtain an edge-length-homogeneous triangle set and calculate its Graph Laplacian. We then produce a monotonically increasing permutation of the Fiedler vector (the 2nd eigenvector of the Graph Laplacian) to encode the connectivity among part-feature submeshes. Within the permuted vector, discontinuities larger than a threshold (set interactively by a human) determine the partition of the original mesh. We present tests of our method on large, complex meshes, whose results mostly conform to the BRep FACE partition. The achieved segmentations properly locate most manufacturing features, although human interaction is required to avoid over-segmentation. Future work includes applying this algorithm iteratively to progressively sever features from the mesh left by previous submesh removals.
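The spectral pipeline described in this abstract can be sketched as follows. This is a minimal illustration: the toy adjacency matrix and the 0.4 threshold are assumptions, and the edge-length homogenization preprocessing is omitted.

```python
import numpy as np

def fiedler_partition(adj, threshold):
    """Partition graph nodes at large jumps in the sorted Fiedler vector.

    adj: symmetric 0/1 adjacency matrix of the mesh graph.
    threshold: minimum gap between consecutive sorted Fiedler entries
               that starts a new segment (set interactively in the paper).
    """
    laplacian = np.diag(adj.sum(axis=1)) - adj
    # eigh returns eigenvalues in ascending order; column 1 is the
    # Fiedler vector (eigenvector of the 2nd-smallest eigenvalue).
    _, vecs = np.linalg.eigh(laplacian)
    fiedler = vecs[:, 1]
    order = np.argsort(fiedler)                 # monotone permutation
    gaps = np.diff(fiedler[order])
    cut_after = set(np.where(gaps > threshold)[0].tolist())
    labels = np.zeros(len(fiedler), dtype=int)
    seg = 0
    for i, node in enumerate(order):
        if i > 0 and (i - 1) in cut_after:
            seg += 1                            # discontinuity: new part
        labels[node] = seg
    return labels

# Two triangles joined by a single edge: the Fiedler vector separates
# them with a visible gap, so the two "features" are recovered.
A = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[a, b] = A[b, a] = 1.0
labels = fiedler_partition(A, threshold=0.4)
# nodes 0-2 land in one segment, nodes 3-5 in the other
```

The sign of an eigenvector is arbitrary, so which segment gets label 0 can vary; only the grouping is meaningful.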
Abstract:
One objective of this study is to classify socio-demographic components at the city-section level in the city of Lisbon. To provide a suitable basis for the restaurant-potentiality map, the socio-demographic components were selected to produce a map of spatial clusters according to restaurant suitability. The second objective is therefore to obtain a potentiality map expressed as underestimation and overestimation of the number of restaurants. To the best of our knowledge, no identical methodology for estimating restaurant potentiality has been reported. The results were achieved by combining a SOM (Self-Organizing Map), which provides a segmentation map, with a GAM (Generalized Additive Model) with a spatial component for restaurant potentiality. The final results indicate that the strongest influences on restaurant potentiality are tourist sites, spatial autocorrelation in terms of neighbouring restaurants (the spatial component), and tax value, while lower importance is given to households with 1 or 2 members and to the employed population, respectively. An important additional conclusion is that the most attractive market sites showed no change or moderate underestimation in restaurant potentiality.
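The SOM half of this pipeline can be sketched as below. The grid size, learning schedule and two-cluster toy data are assumptions for illustration, not the Lisbon census data, and the GAM stage is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid_w=3, grid_h=3, epochs=200, lr0=0.5, sigma0=1.5):
    """Minimal Self-Organizing Map: each grid unit holds a weight vector;
    the best-matching unit and its grid neighbours are pulled toward each
    presented sample, with learning rate and neighbourhood shrinking."""
    weights = rng.random((grid_w * grid_h, data.shape[1]))
    coords = np.array([(i // grid_h, i % grid_h)
                       for i in range(grid_w * grid_h)], dtype=float)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))   # neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)
    return weights

# Toy "city sections": two well-separated socio-demographic clusters.
data = np.vstack([rng.normal(0.2, 0.03, (20, 2)),
                  rng.normal(0.8, 0.03, (20, 2))])
w = train_som(data)
segments = np.array([np.argmin(((w - x) ** 2).sum(axis=1)) for x in data])
# sections from the same cluster map to SOM units with nearby weights
```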
Abstract:
Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer’s preferred camera option. Along with advances in the number of pixels, motion blur removal, face-tracking, and noise reduction algorithms have significant roles in the internal processing of the devices. An undesired effect of severe noise reduction is the loss of texture (i.e. low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system. The development of an accurate objective method to identify the texture preservation or texture reproduction capability of a camera device is therefore important. The ‘Dead Leaves’ target has been used extensively as a method to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r⁻³ and uniformly distributed gray levels, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. This target can be used to model the texture transfer through a camera system when a natural scene is captured. In the first part of our study we identify various factors that affect the MTF measured using the ‘Dead Leaves’ chart, including variations in illumination, distance, exposure time and ISO sensitivity, among others. We discuss the main differences between this method and existing resolution measurement techniques and identify its advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High-frequency residual noise in the processed image contains the same frequency content as fine texture detail, and is sometimes reported as such, thereby leading to inaccurate results.
A wavelet-thresholding-based denoising technique is used to model the noise present in the final captured image. This updated noise model is then used to calculate an accurate texture MTF. We present comparative results for both algorithms under various image-capture conditions.
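A dead-leaves target of the kind described above can be rendered with inverse-CDF sampling of the r⁻³ radius law. The image size, radius bounds and disk count below are illustrative assumptions, not chart-standard values.

```python
import numpy as np

rng = np.random.default_rng(1)

def dead_leaves(size=256, n_disks=2000, r_min=2.0, r_max=60.0):
    """Render a dead-leaves target: occluding disks with radius pdf
    proportional to r^-3 and uniformly distributed gray levels.
    Later disks paint over earlier ones, modelling natural occlusion."""
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[0:size, 0:size]
    # Inverse-CDF sampling for p(r) ~ r^-3 truncated to [r_min, r_max].
    u = rng.random(n_disks)
    a, b = r_min ** -2, r_max ** -2
    radii = (a - u * (a - b)) ** -0.5
    for r in radii:
        cx, cy = rng.random(2) * size
        gray = rng.random()
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = gray
    return img, radii

img, radii = dead_leaves()
# small radii dominate, as expected for an r^-3 law
```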
Abstract:
The present work aims to evaluate the acceptance of and preference for sweet taste in red wine, according to consumer segmentation by age, gender, personality type, tasting sensitivity and consumer experience with wine. One hundred and fourteen wine tasters were invited to the tasting; the average age was 27 years. Sugar was added to the wine as equal concentrations of glucose and fructose at 2 g/L, 4 g/L, 8 g/L, 16 g/L and 32 g/L. Five pairs of glasses were presented for the subjects to taste, each containing a control wine and a spiked sample. Pairs were presented in order of concentration, from 2 g/L to 32 g/L. The subjects were also asked to answer two online questionnaires at the end of the tasting, on personality type and on vinotype, which is related to mouth sensitivity. ISO 5495 paired-comparison tests were used for the sensory analysis. The objective was to assess whether any of the nine segmentation factors influenced preference for or rejection of the spiked samples, and to establish whether this preference was statistically significant. We concluded that it would be important to have subjects with an average age above 27 years and more experience in wine drinking, mostly because the preference data for novices show some dispersion and lack of attention. A panel of older, more experienced wine tasters is likely to be more attentive and focused and therefore to yield differentiated results. It was also concluded that more research is required to extend this investigation to other wine styles, because differences in preference can depend on other factors, such as preferring a wine with more or less sugar according to the type of wine. Finally, it was concluded that some variables, such as gender, vinotype and category of experience, do influence preference for sweet taste in red wine.
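A paired-comparison design of this kind is typically analysed with an exact binomial test. The sketch below uses hypothetical counts, not the authors' data, to show the idea.

```python
from math import comb

def paired_preference_pvalue(prefer_spiked, n):
    """Two-sided exact binomial test for a paired-comparison experiment:
    under H0 each taster picks either glass with probability 1/2, so the
    p-value sums the binomial tail at least as extreme as observed."""
    observed = abs(prefer_spiked - n / 2)
    favourable = sum(comb(n, k) for k in range(n + 1)
                     if abs(k - n / 2) >= observed)
    return favourable / 2 ** n

# Hypothetical outcome: 70 of 114 tasters prefer the spiked glass.
p = paired_preference_pvalue(70, 114)
# p falls well below 0.05, so this hypothetical preference is significant
```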
Abstract:
Over recent decades, remote sensing has emerged as an effective tool for improving agricultural productivity. In particular, many works have dealt with the problem of identifying characteristics or phenomena of crops and orchards on different scales using remotely sensed images. Since natural processes are scale-dependent and most of them are hierarchically structured, determining the optimal study scales is mandatory for understanding these processes and their interactions. The multi-scale/multi-resolution concept inherent to OBIA methodologies allows the scale problem to be addressed, but it requires multi-scale, hierarchical segmentation algorithms. The question that remains unsolved is how to determine the segmentation scale that allows different objects and phenomena to be characterized in a single image. In this work, an adaptation of the Simple Linear Iterative Clustering (SLIC) algorithm to perform a multi-scale hierarchical segmentation of satellite images is proposed. The optimal multi-scale segmentation for different regions of the image is selected by evaluating the intra-variability and inter-heterogeneity of the regions obtained on each scale with respect to the parent regions defined by the coarsest scale. To achieve this goal, an objective function that combines weighted variance and the global Moran index is used. Two kinds of experiment have been carried out, generating the number of regions on each scale through linear and dyadic approaches. This methodology allows, on the one hand, the detection of objects on different scales and, on the other, their representation in a single image. Altogether, the procedure provides the user with a better comprehension of the land cover, the objects on it and the phenomena occurring.
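The global Moran index term of such an objective function can be computed as below; the adjacency matrix and region values are toy assumptions, and the weighted-variance term is not reproduced.

```python
import numpy as np

def global_moran_i(values, W):
    """Global Moran's I for region values under a binary spatial-weights
    (adjacency) matrix W: positive for spatially clustered values,
    negative for checkerboard-like patterns."""
    n = len(values)
    z = values - values.mean()
    return n * (W * np.outer(z, z)).sum() / (W.sum() * (z ** 2).sum())

# Four regions on a line (0-1-2-3 adjacency): similar neighbours give
# positive spatial autocorrelation, alternating values give negative.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
clustered = global_moran_i(np.array([1.0, 1.1, 3.0, 3.2]), W)
alternating = global_moran_i(np.array([1.0, 3.0, 1.0, 3.0]), W)
```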
Abstract:
The new generation of artificial satellites is providing a huge amount of Earth observation images whose exploitation can yield invaluable benefits, both economic and environmental. However, only a small fraction of this data volume has been analyzed, mainly due to the large human resources needed for that task. In this sense, the development of unsupervised methodologies for the analysis of these images is a priority. In this work, a new unsupervised segmentation algorithm for satellite images is proposed. This algorithm is based on rough-set theory and is inspired by a previous segmentation algorithm defined in the RGB color domain. The main contributions of the new algorithm are: (i) extending the original algorithm to four spectral bands; (ii) using the superpixel concept to define the neighborhood similarity of a pixel adapted to the local characteristics of each image; and (iii) proposing and evaluating two new region-merging strategies in order to establish the final number of regions in the segmented image. The experimental results show that the proposed approach improves the results provided by the original method when both are applied to satellite images with different spectral and spatial resolutions.
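As an illustration of region merging in general (the paper proposes two specific strategies, which this sketch does not reproduce), a greedy fuse of the most spectrally similar adjacent regions might look like this; the region means and adjacency are toy assumptions:

```python
import numpy as np

def merge_regions(means, adjacency, target_regions):
    """Greedily fuse the pair of adjacent regions with the most similar
    mean spectra until `target_regions` remain. Returns a map from each
    region id to the id of the region that absorbed it."""
    means = {i: np.asarray(m, dtype=float) for i, m in enumerate(means)}
    adj = {i: set(n) for i, n in adjacency.items()}
    merged_into = {i: i for i in means}
    while len(means) > target_regions:
        a, b = min(((i, j) for i in means for j in adj[i] if j > i),
                   key=lambda p: np.linalg.norm(means[p[0]] - means[p[1]]))
        means[a] = (means[a] + means[b]) / 2.0   # unweighted fuse, for brevity
        adj[a] |= adj[b] - {a}
        adj[a].discard(b)
        for n in adj[b] - {a}:                   # rewire b's neighbours to a
            adj[n].discard(b)
            adj[n].add(a)
        del means[b], adj[b]
        merged_into[b] = a
    return merged_into

# Four 4-band regions: 0/1 and 2/3 are spectrally close.
means = [[10, 10, 10, 10], [11, 11, 11, 11],
         [50, 50, 50, 50], [52, 52, 52, 52]]
adjacency = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
merged_into = merge_regions(means, adjacency, 2)
# region 1 merges into 0 and region 3 into 2
```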
Abstract:
Purpose: To evaluate and compare the performance of the Ripplet type-1 transform, the directional discrete cosine transform (DDCT), and their combinations for improved representation of MRI images while preserving fine features such as edges along smooth curves and textures. Methods: In a novel image-representation method based on the fusion of the Ripplet type-1 and conventional/directional DCT transforms, source images were enhanced in terms of visual quality using Ripplet, DDCT and their various combinations. The enhancement achieved was quantified on the basis of peak signal-to-noise ratio (PSNR), mean square error (MSE), structural content (SC), average difference (AD), maximum difference (MD), normalized cross-correlation (NCC), and normalized absolute error (NAE). To determine the attributes of both transforms, they were also combined to represent the entire image. All possible combinations were tested, giving a complete study of the transform combinations, and the contrasts among all the combinations were evaluated. Results: Applying the DDCT first and then the Ripplet transform yielded a PSNR of 32.3512, comparatively higher than the PSNR values of the other combinations. This technique gives a PSNR approximately equal to those of the parent techniques, while preserving edge information, texture information and various other directional image features. The fusion of DDCT followed by the Ripplet reproduced the best images. Conclusion: Transforming images using Ripplet followed by DDCT ensures a more efficient method for the representation of images with preservation of fine details such as edges and textures.
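The PSNR and MSE figures of merit used above are standard; a minimal sketch, using synthetic images rather than the study's MRI data:

```python
import numpy as np

def mse(ref, test):
    """Mean square error between two images."""
    return float(np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the test image is
    closer to the reference."""
    m = mse(ref, test)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(0, 4, ref.shape), 0, 255)
value = psnr(ref, noisy)   # roughly 36 dB for sigma = 4 noise
```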
Abstract:
The Gaudreau site is a disturbed, multi-occupation site in southeastern Québec, with occupations ranging from the Late Paleoindian period to the historic period. The Archaic occupations of the site, indicated by diagnostic bifaces of the Late and Terminal Archaic and by macrotools of the Middle and Late Archaic, are the main subject of this thesis. Since no occupation can be differentiated horizontally or vertically, and no non-diagnostic object can be associated with certainty, only a sample of 32 objects was examined. Given the small size of the sample analyzed, it is quite likely that a larger number of raw-material sources were used during the Archaic occupations. Nevertheless, a lithic raw-material network similar to those of the Lac Mégantic sites was observed, with a strong representation of Kineo-Traveller rhyolite and Appalachian cherts. Great Lakes cherts and Cheshire quartzite are also present. Locally sourced silicified mudstone and quartz, by contrast, are weakly represented in the sample, probably owing to a source-proximity bias. The technical analysis of the sample, without controlling for techno-economic practices, reveals several technical recurrences within the typological units, without however supporting significant recurring differences between raw materials from different regions. Because of the sample size and the disturbed context, the relevance of the strong similarities between certain objects is doubtful. The interpersonal segmentation of the chaînes opératoires could not be determined in the sample. However, the results rather suggest that the raw materials must have circulated in various forms.
It may be considered that, apart from local raw materials, the Archaic occupants of the Gaudreau site had no direct access to exogenous raw materials.
Abstract:
Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used for estimating the radiation dose for a therapy, evaluating the progression of a disease, and as a prognostic indicator for predicting outcome. PET images have low resolution and high noise, and are affected by the partial volume effect (PVE). Manually segmenting each tumor is very cumbersome and hard to reproduce. To solve this problem I developed an algorithm called the iterative deconvolution thresholding segmentation (IDTS) algorithm; it segments the tumor, measures the FV, corrects for the PVE and calculates the mAC. The algorithm corrects for the PVE without needing to estimate the camera's point spread function (PSF), and does not require optimization for a specific camera. My algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution. It was also tested on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and different acquisition times (1-5 min). The algorithm was applied to ten clinical datasets, where the results were compared with manual segmentation and with the fixed-percentage thresholding methods T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold. The average errors in FV and mAC were 30% and -35% for the 0.5 ml tumor, and ~5% for the 16 ml tumor. The overall FV error was ~10% for heterogeneous tumors in the physical and simulated phantom data. The FV and mAC errors for clinical images, compared with manual segmentation, were around -17% and 15% respectively.
In summary, my algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF. The algorithm can also improve dose estimation and treatment planning.
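IDTS couples deconvolution with thresholding; the sketch below shows only a generic iterative threshold update (Ridler-Calvard style) on synthetic activity values, not the full algorithm or its PVE correction.

```python
import numpy as np

def iterative_threshold(values, tol=1e-6):
    """Iteratively refine a threshold as the midpoint of the mean
    foreground and mean background intensities until it stabilizes.
    (IDTS additionally deconvolves the image between iterations,
    which is omitted here.)"""
    t = values.mean()
    while True:
        fg, bg = values[values >= t], values[values < t]
        new_t = (fg.mean() + bg.mean()) / 2.0
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

rng = np.random.default_rng(3)
background = rng.normal(1.0, 0.1, 5000)   # low-activity background voxels
tumor = rng.normal(5.0, 0.3, 500)         # hot lesion voxels
t = iterative_threshold(np.concatenate([background, tumor]))
# t settles between the two intensity populations (near 3.0 here)
```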
Abstract:
We explored the submarine portions of the Enriquillo–Plantain Garden Fault zone (EPGFZ) and the Septentrional–Oriente Fault zone (SOFZ) along the Northern Caribbean plate boundary using high-resolution multibeam echo-sounding and shallow seismic reflection. The bathymetric data shed light on poorly documented or previously unknown submarine fault zones running over 200 km between Haiti and Jamaica (EPGFZ) and 300 km between the Dominican Republic and Cuba (SOFZ). The primary plate-boundary structures are a series of strike-slip fault segments associated with pressure ridges, restraining bends, step-overs and dogleg offsets indicating very active tectonics. Several distinct segments 50–100 km long cut across pre-existing structures inherited from former tectonic regimes or bypass recent morphologies formed under the current strike-slip regime. Along the most recent trace of the SOFZ, we measured a strike-slip offset of 16.5 km, which indicates steady activity for the past ~1.8 Ma if its current GPS-derived motion of 9.8 ± 2 mm a⁻¹ has remained stable during the entire Quaternary.
Abstract:
Echocardiography and magnetic resonance imaging are both non-invasive techniques used clinically to diagnose or follow up heart disease. The former measures the delay between the emission and reception of ultrasound travelling through the body, while the latter measures an electromagnetic signal generated by hydrogen protons in the human body. The acquisitions from these two imaging modalities are fundamentally different, but both contain information about the structures of the human heart. Segmenting the left ventricle consists of delineating the inner walls of the heart muscle, the myocardium, in order to compute clinical metrics useful for the diagnosis and follow-up of various heart diseases, such as the amount of blood pumped at each heartbeat. Following an infarction or another condition, both the performance and the shape of the heart are affected. Imaging of the left ventricle is used to help cardiologists make the right diagnoses. However, manually tracing the left ventricle takes expert cardiologists considerable time, hence the interest in a fast, reliable automated segmentation method. This thesis addresses left-ventricle segmentation. Most existing methods are specific to a single imaging modality. The one proposed here can rapidly process acquisitions from both modalities with a segmentation accuracy equivalent to an expert's manual tracing. To achieve this, it operates in an anatomical space, thereby inducing an implicit shape prior.
The Graph Cut algorithm, combined with strategies such as probabilistic maps and regional convex hulls, produces results that match (or, in most cases, surpass) the state of the art at the time this thesis was written. The performance of the proposed method relative to the state of the art was demonstrated in an international challenge. It is also validated exhaustively on three complete databases, against the manual tracings of two experts and the automated tracings of the Syngovia software. This research is a collaborative project with the Université de Bourgogne, France.
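The core of Graph Cut segmentation can be illustrated with a minimal s-t minimum cut on a toy 1-D intensity profile. The unary and pairwise costs and the Edmonds-Karp max-flow below are a generic sketch, not the thesis's multimodal pipeline with probabilistic maps and convex hulls.

```python
from collections import deque

def min_cut_label(unary_fg, unary_bg, edges, smooth):
    """Binary labelling by s-t minimum cut. unary_fg[i]/unary_bg[i] are
    the costs of labelling pixel i foreground/background; `edges` lists
    neighbouring pixel pairs and `smooth` is the penalty for cutting a
    neighbour link."""
    n = len(unary_fg)
    S, T = n, n + 1
    cap = [dict() for _ in range(n + 2)]
    def add(u, v, c):
        cap[u][v] = cap[u].get(v, 0.0) + c
        cap[v].setdefault(u, 0.0)
    for i in range(n):
        add(S, i, unary_bg[i])   # cutting s->i  <=>  i labelled background
        add(i, T, unary_fg[i])   # cutting i->t  <=>  i labelled foreground
    for a, b in edges:
        add(a, b, smooth)
        add(b, a, smooth)
    while True:                  # Edmonds-Karp: BFS augmenting paths
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if v not in parent and c > 1e-12:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        path, v = [], T
        while v != S:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= f
            cap[v][u] += f
    # Pixels still reachable from s in the residual graph -> foreground.
    side, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if v not in side and c > 1e-12:
                side.add(v)
                q.append(v)
    return [1 if i in side else 0 for i in range(n)]

# Six pixels: a bright left half and a dark right half.
intensity = [0.9, 0.8, 0.85, 0.2, 0.15, 0.1]
labels = min_cut_label(
    unary_fg=[1 - v for v in intensity],  # bright pixels cheap as foreground
    unary_bg=intensity,                   # dark pixels cheap as background
    edges=[(i, i + 1) for i in range(5)],
    smooth=0.3,
)
# labels -> [1, 1, 1, 0, 0, 0]: one clean boundary at the intensity step
```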
Abstract:
The meat industry needs to reduce salt in its products due to health issues. The present study evaluated the effect of reducing salt from 6% to 3% in two Portuguese traditional blood dry-cured sausages. Physicochemical and microbiological parameters, biogenic amines, fatty acid and texture profiles, and sensory-panel evaluations were considered. Differences due to salt reduction were perceptible as a faint decline in water activity, which slightly favoured microbial growth. Total biogenic amine content ranged from 88.86 to 796.68 mg kg⁻¹ fresh matter, with higher amounts, particularly of cadaverine, histamine and tyramine, in the low-salt products. Still, histamine and other vasoactive amines remained at low levels, thus not affecting consumers' health. Regarding fatty acids, no significant differences were observed due to salt. However, texture profile analysis revealed lower resilience and cohesiveness in the low-salt products, although no textural changes were detected by the sensory panel. Nevertheless, the low-salt sausages were clearly preferred by the panellists.
Abstract:
The widespread use of computers for the automation of repetitive tasks has led to the development of applications that allow a range of activities, which until now could be time-consuming and subject to errors inherent in human activity, to be performed with little or no human intervention. The research carried out within this thesis aims to develop a software application and algorithms that enable the assessment and classification of cheeses produced in the region of Évora through digital image processing. Throughout this research, algorithms and methodologies were developed that identify the cheese eyes, the dimensions of the cheese, the presence of texture on the outside of the cheese, and characteristics of its colour, so that, based on these parameters, a classification and evaluation of the cheese can be conducted.
The developed software application is a product simple to use, requiring no special computer knowledge. It only requires that the photographs be acquired following a simple set of rules, on the basis of which the processing and classification of the cheese are performed.
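Counting the cheese eyes in a thresholded image reduces to connected-component labelling; a stdlib-only flood-fill sketch (the toy grid below is illustrative, not the thesis's images or its actual eye-detection algorithm):

```python
from collections import deque

def count_eyes(binary):
    """Count 4-connected dark blobs ('eyes') in a thresholded
    cheese-slice image (1 = eye pixel, 0 = paste)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                blobs += 1                     # new blob: flood-fill it
                q = deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return blobs

grid = [[0, 1, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 0, 0, 1],
        [1, 0, 0, 1, 1]]
n_eyes = count_eyes(grid)   # 3 separate blobs in this toy grid
```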