984 results for Computer Images
Abstract:
Addiction, and the experience of being addicted, is notoriously difficult to describe verbally and explain rationally. Would multifaceted and multisensory cinematic images work better in making addiction understandable? This study enquires how cinematic expression can render visible the experience of being addicted, which is invisible as such. The basic data consists of circa 50 mainly North American and European fiction films from the early 1900s to the early 2000s that deal with addictive disorders as defined in the psychiatric DSM-5 classification (substance dependence and gambling disorders). The study develops an approach for analyzing and interpreting a large volume of digital film data: digital cinematic iconography is a framework for studying multifaceted cinematic images by processing and viewing them in the "digital image-laboratory" of the computer. Images are cut and classified by editing software and algorithmic sorting. The approach draws on the image research of the early 1900s German art historian Aby Warburg and on media archaeology, which are connected to film studies inspired by the phenomenology of the body and Gilles Deleuze's film-philosophy. The first main chapter, "Montage", analyses montage, gestural and postural images, and colors in addiction films. The second main chapter, "Thingness", focuses on close-ups of material objects and faces, and their relation to the theme of spirituality in cinema and art history. The study argues that cinema engages the spectator to "feel" what addiction is through everyday experience and art-historical imagery. There is a particular, historically transmitted cinematic iconography of addiction that is profane, material, thing-centered, abject, and repetitive. The experience of being addicted is visualized through montages of images characterized by dark and earthy colors, horizontal compositions, and downward-directed movements. This is very profane and secular imagery that nevertheless circulates image-historical traces of Christian iconography, such as that of being in the grip of an unknown power.
Abstract:
Objectives: To test the hypothesis that the proximal small-intestinal mucosa of children with persistent diarrhea presents morphometric and stereological alterations proportional to nutritional status. Methods: cross-sectional study including 65 pediatric patients hospitalized between May 1989 and November 1991, aged 4 months to 5 years, with diarrhea lasting more than 14 days, who required a small-intestine biopsy as part of the investigation protocol. Nutritional assessment was performed with the Gomez and Waterlow methods and with z-scores for weight-for-age (W/A), weight-for-height (W/H), and height-for-age (H/A), divided into: well-nourished = z ≥ -2 SD and malnourished = z < -2 SD; well-nourished = z ≥ -1 SD, nutritional risk = z < -1 SD, and malnourished = z < -2 SD; and, continuously, in decreasing order, using the NCHS tables. Computer-based image capture and analysis was performed with the assistance of the pathologist. In the small-intestinal mucosa fragments, villus height, crypt depth, mucosal thickness, total mucosal thickness, and the villus/crypt ratio were measured at 100x magnification. At 500x magnification, enterocyte height and the heights of the enterocyte nucleus and of the brush border were measured. The software used was Scion Image. The stereological analysis was performed using cycloid arcs. Results: For the W/A, W/H, and H/A z-scores divided into two nutritional-status categories, there was no statistically significant difference in villus height, crypt depth, mucosal thickness, total mucosal thickness, or villus/crypt ratio. Enterocyte height was the feature that differed most between the well-nourished and malnourished groups for the W/A and W/H indices at 500x magnification, without reaching statistical significance. When the z-scores were divided into three nutritional-status categories, the digitized morphometric analysis showed a statistically significant difference in the villus/crypt ratio between well-nourished and mildly malnourished children and between well-nourished and moderately-to-severely malnourished children (p = 0.048). The villus/crypt ratio was higher in the well-nourished group. Nutritional assessment by the Waterlow criteria and the stereological analysis showed no association with nutritional status. With the Gomez method, there was a statistically significant difference in enterocyte height between well-nourished children and those with grade III malnutrition: the greater the degree of malnutrition, the smaller the enterocyte height (r = -0.33; p = 0.005). When assessed with Pearson's correlation coefficient, enterocyte height, enterocyte nucleus height, and brush-border height showed a clear association with the W/A index (r = 0.25; p = 0.038), the W/H index (r = 0.029; p = 0.019), and the Gomez nutritional classification (r = -0.33; p = 0.007). Nucleus height was associated with the W/A index (r = 0.24; p = 0.054). Brush-border height was associated with the W/A index (r = 0.26; p = 0.032) and with the Gomez classification (r = -0.28; p = 0.020). Conclusions: The associations found between nutritional status (assessed according to Gomez and the W/A and W/H indices) and the small-intestinal mucosa variables were related to the patients' weight. Although these associations were of weak to moderate magnitude, there is a trend toward decreasing size of the enterocyte, its nucleus, and its brush border as the degree of malnutrition increases.
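As an illustration of the correlation analysis reported in this abstract, the sketch below computes Pearson's correlation between a mucosal measurement and a nutritional z-score; the arrays are invented placeholders, not the study's data.

```python
# A minimal sketch of the Pearson correlation step, assuming hypothetical
# per-patient arrays; values and sample size are illustrative only.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: enterocyte height (micrometers, at 500x) and
# weight-for-age (W/A) z-scores for a small group of patients.
enterocyte_height = np.array([28.1, 30.4, 25.9, 27.2, 31.0, 24.8])
weight_for_age_z = np.array([-0.5, 0.2, -2.1, -1.4, 0.8, -2.6])

r, p = pearsonr(weight_for_age_z, enterocyte_height)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```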
Abstract:
Graduate Program in Cartographic Sciences - FCT
Abstract:
Dental implant recognition in patients without available records is a time-consuming task that is far from straightforward. The traditional method is a completely user-dependent process, in which the expert compares a 2D X-ray image of the dental implant with a generic database. Due to the high number of implants available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is proposed. The method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels, and a Hausdorff distance of 11.12±6 pixels were obtained, respectively. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
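The segmentation-evaluation metrics named in this abstract (Dice and Hausdorff distance) can be illustrated with a short sketch; the toy masks and the helper functions below are assumptions for illustration, not the authors' code.

```python
# A minimal sketch of Dice and Hausdorff computation on two hypothetical
# binary masks; the study's own pipeline and data are not reproduced.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the masks' foreground pixels."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Two toy masks: a square and the same square shifted by two pixels.
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[22:42, 20:40] = True
print(f"Dice = {dice(a, b):.3f}, Hausdorff = {hausdorff(a, b):.1f} px")
```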
Abstract:
This paper presents a pattern recognition method focused on images of paintings. The purpose is to construct a system able to recognize authors or art styles based on common elements of their work (here called patterns). The method is based on comparing images that contain the same or similar patterns. It uses different computer vision techniques: SIFT and SURF to describe the patterns as descriptors, K-Means to classify and simplify these descriptors, and RANSAC to determine and retain good matches. The method performs well at finding patterns from known images, but less well when the images are unknown.
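A pipeline of this kind (SIFT descriptors, ratio-test matching, RANSAC verification) can be sketched with OpenCV as below; the file names are placeholders and the exact parameters used in the paper are not known from the abstract.

```python
# A minimal sketch of SIFT matching verified by RANSAC, under assumed
# parameters; not the authors' implementation.
import cv2
import numpy as np

img1 = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)   # query pattern
img2 = cv2.imread("painting.png", cv2.IMREAD_GRAYSCALE)  # painting image

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Ratio-test matching of descriptors.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# RANSAC homography: the inlier count indicates whether the pattern
# is actually present in the painting.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{int(inliers.sum())} inliers out of {len(good)} matches")
```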
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by the same modus operandi or the same source, and thus support forensic intelligence efforts. Inspired by previous research work on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that using Hue and Edge filters, or their combination, to extract profiles from images, and then comparing profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can be easily operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first-triage method that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents, for instance). Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
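The profile comparison described here can be sketched as follows: a hue histogram is extracted from a region of interest and two profiles are compared with the Canberra distance. The file names are placeholders, and the authors' exact filters and normalisation are not known from the text.

```python
# A minimal sketch of hue-profile extraction and Canberra comparison,
# under assumed bin counts and normalisation.
import cv2
import numpy as np
from scipy.spatial.distance import canberra

def hue_profile(path: str, bins: int = 64) -> np.ndarray:
    """Normalised hue histogram of a scanned document image."""
    bgr = cv2.imread(path)
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    hist, _ = np.histogram(hue, bins=bins, range=(0, 180))
    return hist / hist.sum()

p1 = hue_profile("document_a.png")
p2 = hue_profile("document_b.png")
print(f"Canberra distance = {canberra(p1, p2):.4f}")  # smaller = more alike
```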
Abstract:
The automatic characterization of particles in metallographic images has become paramount, mainly because quantifying such microstructures is essential to assess the mechanical properties of materials commonly used in industry. This automated characterization may avoid problems related to fatigue and possible measurement errors. In this paper, computer techniques are used and assessed for accomplishing this crucial industrial goal in an efficient and robust manner, hence the use of the most actively pursued machine learning classification techniques. In particular, Support Vector Machine, Bayesian and Optimum-Path Forest based classifiers are evaluated in the characterization of graphite particles in metallographic images, along with Otsu's method, which is commonly used in computer imaging to automatically binarize simple images and is used here to demonstrate the need for more complex methods. The statistical analysis performed confirmed that these computer techniques are efficient solutions to accomplish the aimed characterization. Additionally, the Optimum-Path Forest based classifier demonstrated overall superior performance, in terms of both accuracy and speed.
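Otsu's method, mentioned above as the simple baseline, can be sketched in a few lines; the file name is a placeholder for a greyscale metallographic image.

```python
# A minimal sketch of Otsu binarization, which picks the grey-level
# threshold that best separates foreground (e.g. graphite particles)
# from background; illustrative only.
import cv2

img = cv2.imread("metallography.png", cv2.IMREAD_GRAYSCALE)
threshold, binary = cv2.threshold(img, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(f"Otsu threshold found at grey level {threshold:.0f}")
cv2.imwrite("particles_binary.png", binary)
```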
Abstract:
The human dentition is naturally translucent, opalescent and fluorescent. Differences between the level of fluorescence of tooth structure and restorative materials may result in distinct metameric properties and, consequently, perceptibly disparate esthetic behavior, which impairs the esthetic result of the restorations, frustrating both patients and staff. In this study, we evaluated the level of fluorescence of different composites: Durafill in tone A2 (Du), Charisma in tone A2 (Ch), Venus in tone A2 (Ve), Opallis dentin and enamel in tone A2 (OPD and OPE), Point 4 in tone A2 (P4), Z100 in tone A2 (Z1), Z250 in tone A2 (Z2), Te-Econom in tone A2 (TE), Tetric Ceram in tone A2 (TC), Tetric Ceram N in tones A1, A2 and A4 (TN1, TN2, TN4), Four Seasons dentin and enamel in tone A2 (4SD and 4SE), Empress Direct enamel and dentin in tone A2 (EDE and EDD), and Brilliant in tone A2 (Br). Cylindrical specimens were prepared, coded and photographed in a standardized manner with a Canon EOS digital camera (ISO 400, f/2.8 aperture and 1/30 s shutter speed), in a dark environment under UV light (25 W). The images were analyzed with the ScanWhite©-DMC/Darwin systems software. The results showed statistical differences between the groups (p < 0.05), and between these same groups and the average fluorescence of the dentition of young (18 to 25 years) and adult (40 to 45 years) subjects taken as controls. It can be concluded that the composites Z100, Z250 (3M ESPE) and Point 4 (Kerr) do not match the fluorescence of the human dentition, and that the fluorescence of the materials was found to be affected by their own tone.
Abstract:
In this paper we present a tool to carry out the multifractal analysis of binary, two-dimensional images through the calculation of the Rényi D(q) dimensions and associated statistical regressions. The estimation of a (mono)fractal dimension corresponds to the special case where the moment order is q = 0.
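Under the usual box-counting assumptions, the Rényi D(q) dimensions of a binary image can be estimated by regressing log-moments against the log box size, with q = 0 reducing to the ordinary box-counting dimension. The sketch below is an illustrative implementation, not the paper's tool.

```python
# A minimal box-counting sketch of the Rényi D(q) estimate for a boolean
# 2-D image: the slope of log-moment vs. log box size approximates D(q).
import numpy as np

def renyi_dimension(image: np.ndarray, q: float,
                    sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate D(q) for a boolean image by box counting."""
    total = image.sum()
    logs_eps, logs_mom = [], []
    for s in sizes:
        h, w = image.shape
        # Sum of foreground pixels in each s-by-s box.
        boxes = image[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts = boxes.sum(axis=(1, 3)).ravel()
        p = counts[counts > 0] / total        # box occupation probabilities
        if abs(q - 1.0) < 1e-9:               # q = 1: information dimension
            logs_mom.append(np.sum(p * np.log(p)))
        else:
            logs_mom.append(np.log(np.sum(p ** q)) / (q - 1.0))
        logs_eps.append(np.log(s))
    slope, _ = np.polyfit(logs_eps, logs_mom, 1)
    return slope

# Toy example: a filled square should give D(0) close to 2.
img = np.zeros((256, 256), bool)
img[64:192, 64:192] = True
print(f"D(0) = {renyi_dimension(img, 0.0):.2f}")
```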
Abstract:
An approach to building a CBIR system for searching computed tomography images using wavelet-analysis methods is presented in this work. The index vectors are constructed from the local features of the image and from their positions. The purpose of the proposed system is to retrieve visually similar data, both from an individual's personal records and from analogous examinations of other patients.
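One way such a wavelet-based index vector might look is sketched below: the energies of the detail sub-bands of a 2-D wavelet decomposition serve as a compact signature. The file name and wavelet choice are placeholders, and this energy signature is an assumption for illustration rather than the paper's actual local features.

```python
# A minimal sketch of a wavelet-energy signature for CBIR indexing,
# under assumed wavelet and decomposition depth.
import numpy as np
import pywt
import cv2

def wavelet_signature(path: str, wavelet: str = "db2",
                      levels: int = 3) -> np.ndarray:
    """Energy of each detail sub-band of a multilevel 2-D DWT."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(float)
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    energies = [np.mean(band ** 2)
                for level in coeffs[1:]   # skip approximation coefficients
                for band in level]        # horizontal, vertical, diagonal
    return np.asarray(energies)

# Similar slices should have nearby signatures (e.g. small Euclidean
# distance between vectors), enabling nearest-neighbour retrieval.
sig = wavelet_signature("ct_slice.png")
print(sig)
```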
Abstract:
Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the Point Spread Function (PSF) of the human eye. Through what is known as the "wavefront aberration function" of the human eye, exact knowledge of the optical aberration of the human eye is possible, allowing a mathematical model of the PSF to be obtained. This model could be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or on-screen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). The proposed technique, and results obtained using a lens of known PSF introduced into the visual path of subjects without visual impairment, will be presented. In addition to substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is its potential to address higher-order abnormalities in the eye, currently not correctable by simple means.
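The pre-compensation idea can be sketched as a regularised (Wiener-style) inverse filter: given a known PSF, the displayed image is deconvolved so that subsequent blurring by the eye approximately restores it. The Gaussian PSF below is a stand-in for the Zernike-derived eye model, and the regularisation constant is an illustrative choice.

```python
# A minimal sketch of frequency-domain pre-compensation with a known PSF;
# the PSF and file names are placeholders, not the paper's model.
import numpy as np
import cv2

img = cv2.imread("screen_text.png", cv2.IMREAD_GRAYSCALE).astype(float) / 255.0

# Placeholder PSF: a small Gaussian standing in for the eye's blur.
k = cv2.getGaussianKernel(15, 2.5)
psf = k @ k.T

H = np.fft.fft2(psf, s=img.shape)   # transfer function of the blur
F = np.fft.fft2(img)
eps = 1e-2                          # regularisation keeps the division stable
precomp = np.fft.ifft2(F * np.conj(H) / (np.abs(H) ** 2 + eps)).real

# Clip to the displayable range; viewing 'precomp' through the modelled
# blur should approximate the original image.
cv2.imwrite("precompensated.png", np.clip(precomp, 0, 1) * 255)
```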
Abstract:
Due to both the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems have been increasingly demanded. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was evaluated on document image repositories, and its performance was assessed by comparing the quality of the relationships created among textual documents with that of the relationships created among their respective document images. Considering those same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of common OCR misrecognition, which reinforces the feasibility of LinkDI even when relating highly degraded OCR output.
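The LSI step can be sketched with scikit-learn: OCR output is mapped to a TF-IDF matrix, reduced with truncated SVD, and similarities in the reduced space suggest hyperlinks. The toy OCR strings below are invented placeholders, not LinkDI's data.

```python
# A minimal sketch of LSI over (noisy) OCR text, under assumed toy inputs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

ocr_texts = [
    "invoice payment due 30 days net amount",
    "lnvoice paymemt due 3O days net arnount",   # noisy OCR of the same page
    "minutes of the annual board meeting",
]

tfidf = TfidfVectorizer().fit_transform(ocr_texts)
lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)
sims = cosine_similarity(lsi)
print(sims.round(2))   # high off-diagonal values suggest a hyperlink
```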
Abstract:
Considering the difficulty of finding good-quality images for the development and testing of computer-aided diagnosis (CAD) schemes, this paper presents a public online mammographic image database, free to all interested viewers and intended to help develop and evaluate CAD schemes. The mammographic images are digitized with contrast and spatial resolution suitable for processing purposes. A comprehensive retrieval system allows the user to search for different images, exams, or patient characteristics. Comparison with other currently available databases has shown that the presented database has a sufficient number of images, is of high quality, and is the only one to include a functional search system.