78 results for Visual methods
Abstract:
Individuals aged ≥ 50 years account for between 65% and 82% of cases of low vision and blindness. About half of the cases of reduced visual acuity are correctable and about a quarter are preventable. The most frequent causes are refractive errors (43%) and cataracts (33%). Older adults may not report information related to visual complaints, since they may be more focused on other symptoms. Objectives: to identify the prevalence of changes in visual function; to identify the prevalence of changes in distance and near visual acuity; to identify changes in distance visual acuity caused by refractive errors using the pinhole test; to identify changes in contrast sensitivity, stereopsis and colour vision; and to identify the periodicity of ophthalmological assessment.
Abstract:
Tomographic images can be degraded, in part by patient-dependent attenuation. The aim of this paper is to quantitatively verify the effects of the Chang and CT-based attenuation correction methods in 111In studies, through the analysis of profiles from abdominal SPECT corresponding to an organ with uniform radionuclide uptake, the left kidney.
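The first-order Chang correction mentioned above can be sketched as follows; the attenuation coefficient, tissue depths, and number of projection angles are illustrative assumptions, not values from the study.

```python
import numpy as np

def chang_correction_factor(mu, depths):
    """First-order Chang correction for one pixel: the reciprocal of the
    attenuation factor exp(-mu*d) averaged over the projection angles."""
    atten = np.exp(-mu * np.asarray(depths, dtype=float))
    return 1.0 / atten.mean()

# Hypothetical pixel seen at different tissue depths (cm) from 4 angles;
# mu is an illustrative soft-tissue coefficient, not a calibrated constant.
mu = 0.12          # cm^-1
depths = [4.0, 6.0, 8.0, 6.0]
factor = chang_correction_factor(mu, depths)
print(round(factor, 3))  # a factor > 1 that boosts the attenuated counts
```

Multiplying each reconstructed pixel by its factor compensates, to first order, for the counts lost to attenuation along the projection paths.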
Abstract:
Introduction – Ageing may be associated with loss of autonomy and decline in individuals' functional capacity, which tends to compromise the performance of everyday tasks and consequently has negative repercussions on quality of life. Objectives – To review the currently available literature on the repercussions of ageing on the binocular and attentional visual field, and on the influence of the binocular visual field on reading, writing and gait/locomotion in older adults. Methodology – This study is a literature review. Thirty-seven scientific articles were analysed and subsequently organized into an observation grid and a comparative table. Results – Of the articles analysed, 32.43% (n=12) point to an ageing-related reduction in the extent of the binocular and attentional visual field. Repercussions of a reduced binocular visual field extent without an attentional factor on everyday activities are reported in 54.05% (n=20) of the articles. Within this group, 40.53% (n=15) point to a relationship between the binocular visual field and performance in reading, writing or gait/locomotion. Of the total articles analysed, among the 45.95% (n=17) that describe the binocular visual field with an attentional factor, 10.81% (n=4) point to the same relationship. Discussion/Conclusions – Ageing causes a decline in the binocular visual field, which is more pronounced in the periphery. This decline, in the presence of reduced visual attention, influences performance in reading, writing and gait/locomotion.
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where decoder complexity is critical, but does not match well the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with respect to the more traditional predictive coding paradigm (at least under certain conditions). In the context of Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to be coded, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that which side information creation method provides the best rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
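As a toy illustration of the 'guess' class, the simplest possible side information is a pixel-wise average of the two key frames bracketing the Wyner-Ziv frame; practical codecs use motion-compensated interpolation instead, and the tiny frames below are invented for the example.

```python
import numpy as np

def guess_side_information(prev_key, next_key):
    """Naive 'guess'-class side information: average the two key frames
    that bracket the Wyner-Ziv frame (no motion compensation)."""
    # Widen to uint16 so the sum cannot overflow 8-bit pixel values.
    avg = (prev_key.astype(np.uint16) + next_key.astype(np.uint16)) // 2
    return avg.astype(np.uint8)

# Toy 2x2 luminance frames: a pattern brightening over time.
f0 = np.array([[10, 20], [30, 40]], dtype=np.uint8)
f2 = np.array([[30, 40], [50, 60]], dtype=np.uint8)
si = guess_side_information(f0, f2)
print(si.tolist())  # [[20, 30], [40, 50]] -- an estimate of the middle frame
```

The closer this estimate is to the true frame, the fewer parity bits the Wyner-Ziv decoder needs, which is why side information quality drives RD performance.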
Abstract:
Epidemiological studies have shown an increased prevalence of respiratory symptoms and adverse changes in pulmonary function parameters in poultry workers, corroborating the increased exposure to risk factors such as fungal load and fungal metabolites. This study aimed to determine the occupational exposure threat posed by fungal contamination from toxigenic isolates belonging to the Aspergillus flavus species complex and from isolates of the Aspergillus fumigatus species complex. The study was carried out in seven Portuguese poultry units, using cultural and molecular methodologies. For the conventional/cultural methods, air, surface, and litter samples were collected by the impaction method using the Millipore Air Sampler. For the molecular analysis, air samples were collected by the impinger method using the Coriolis μ air sampler. After DNA extraction, samples were analyzed by real-time PCR using specific primers and probes for toxigenic strains of the Aspergillus flavus complex and for the detection of isolates from the Aspergillus fumigatus complex. Through conventional methods, and within the Aspergillus genus, different prevalences were detected for the Aspergillus flavus and Aspergillus fumigatus species complexes, namely: 74.5 versus 1.0% in the air samples, 24.0 versus 16.0% on the surfaces, 0 versus 32.6% in new litter, and 9.9 versus 15.9% in used litter. Through molecular biology, we were able to detect the presence of aflatoxigenic strains in pavilions in which Aspergillus flavus did not grow in culture. Aspergillus fumigatus was only found in one indoor air sample by conventional methods. Using molecular methodologies, however, the Aspergillus fumigatus complex was detected in seven indoor samples from three different poultry units.
The characterization of fungal contamination caused by Aspergillus flavus and Aspergillus fumigatus raises concern about occupational risk, not only because of the detected fungal load but also because of the toxigenic potential of these species.
Abstract:
Relevant past events can be remembered when visualizing related pictures. The main difficulty is how to find these photos in a large personal collection. Query definition and image annotation are key issues in overcoming this problem. The former is relevant due to the diversity of the clues provided by our memory when recovering a past moment, and the latter because images need to be annotated with information regarding those clues in order to be retrieved. Consequently, tools to recover past memories should deal carefully with these two tasks. This paper describes a user interface designed to explore pictures from personal memories. Users can query the media collection in several ways, and for this reason an iconic visual language to define queries is proposed. Automatic and semi-automatic annotation is also performed using the image content and the audio information obtained when users show their images to others. The paper also presents the user interface evaluation, based on tests with 58 participants.
Abstract:
Liver steatosis is a common disease usually associated with social and genetic factors. Early detection and quantification are important since it can evolve to cirrhosis. Steatosis is usually a diffuse liver disease, with the liver globally affected. However, steatosis can also be focal, affecting only some foci that are difficult to discriminate. In both cases, steatosis is detected by laboratory analysis and visual inspection of ultrasound images of the hepatic parenchyma. Liver biopsy is the most accurate diagnostic method, but its invasive nature suggests the use of other, non-invasive methods, while visual inspection of the ultrasound images is subjective and prone to error. In this paper a new Computer-Aided Diagnosis (CAD) system for steatosis classification and analysis is presented, where the Bayes factor, obtained from objective intensity and textural features extracted from US images of the liver, is computed on a local or global basis. The main goal is to provide the physician with an application that makes the diagnosis and quantification of steatosis faster and more accurate, namely in a screening approach. The results showed an overall accuracy of 93.54%, with a sensitivity of 95.83% and 85.71% for the normal and steatosis classes, respectively. The proposed CAD system seems suitable as a graphical display for steatosis classification, and a comparison with some of the most recent works in the literature is also presented.
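A minimal sketch of a Bayes-factor decision on a single echo-intensity feature, assuming Gaussian class models; the feature, means and standard deviations below are invented for illustration and are not the features or parameters used in the paper.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def bayes_factor(feature, normal=(55.0, 8.0), steatosis=(75.0, 10.0)):
    """Bayes factor for one intensity feature: the likelihood under the
    steatosis model divided by the likelihood under the normal model.
    A value above 1 favours steatosis. Class parameters are illustrative."""
    return gaussian_pdf(feature, *steatosis) / gaussian_pdf(feature, *normal)

print(bayes_factor(72.0) > 1.0)  # brighter parenchyma favours steatosis: True
print(bayes_factor(50.0) < 1.0)  # darker parenchyma favours normal: True
```

Computing this ratio per region (local) or over the whole parenchyma (global) mirrors the local/global choice described in the abstract.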
Abstract:
This project was developed to fully assess the indoor air quality in archives and libraries from a fungal flora point of view. It uses classical methodologies such as traditional culture media – for the viable fungi – and modern molecular biology protocols, especially relevant to assess the non-viable fraction of the biological contaminants. Denaturing high-performance liquid chromatography (DHPLC) has emerged as an alternative to denaturing gradient gel electrophoresis (DGGE) and has already been applied to the study of a few bacterial communities. We propose the application of DHPLC to the study of fungal colonization on paper-based archive materials. This technology allows for the identification of each component of a mixture of fungi based on their genetic variation. In a highly complex mixture of microbial DNA this method can be used simply to study the population dynamics, and it also allows for sample fraction collection, which can, in many cases, be immediately sequenced, circumventing the need for cloning. Some examples of the methodological application are shown. Also applied is fragment length analysis for the study of mixed Candida samples. Both of these methods can later be applied in various fields, such as clinical and sand sample analysis. So far, the environmental analyses have been extremely useful to determine potentially pathogenic/toxinogenic fungi such as Stachybotrys sp., Aspergillus niger, Aspergillus fumigatus, and Fusarium sp. This work will hopefully lead to more accurate evaluation of environmental conditions for both human health and the preservation of documents.
Abstract:
Master's dissertation in Communication Networks and Multimedia Engineering
Abstract:
The handling of waste and compost that occurs frequently in composting plants (compost turning, shredding, and screening) has been shown to be responsible for the release of dust and airborne microorganisms and their compounds into the air. Thermophilic fungi, such as A. fumigatus, have been reported, and this kind of contamination in composting facilities has been associated with increased respiratory symptoms among compost workers. This study intended to characterize fungal contamination in a fully indoor composting plant located in Portugal. Besides conventional methods, molecular biology was also applied to overcome their possible limitations.
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for the degree of Master in Artistic Education, specialization in Visual Arts in Education
Abstract:
Every year, thousands of people die from diseases caused by the consumption of tobacco products, which are considered the leading preventable cause of death. Tobacco also contributes to six of the eight leading causes of death among smokers and non-smokers worldwide. Government measures such as anti-smoking advertising campaigns seek to warn, raise awareness and change collective thinking and interest in this type of product and, consequently, reduce consumption rates. Assessing whether the beliefs, thoughts and attitudes of Brazilians are influenced by this type of advertising, and whether the behaviour of not smoking or quitting smoking is a consequence of the persuasion of anti-smoking messages, helps to establish the real impact of these campaigns and their effectiveness. Using quantitative and qualitative research methods and extensive and semiotic analyses, the study surveyed 272 Brazilian individuals about health warnings and anti-smoking advertising campaigns, classifying them as non-smokers, ex-smokers and smokers, and identified the visual and textual elements that make up the advertising narrative of 5 anti-smoking advertisements. After the analysis, the study concluded that the advertising campaigns coordinated by INCA – Instituto Nacional de Câncer, known as anti-smoking campaigns, are effective in warning and raising individuals' awareness of the harm caused by cigarette consumption, but ineffective in influencing their attitudes and behaviours. Although they succeed in persuading people to believe the messages, making individuals see them as true, this is not enough to turn the intention to quit smoking into practical action. All the advertisements share the same format, and most used the same visual path, balance, framing, light, angle and character function.
All contain text with identifying, anchoring and supporting functions, and the narrative connotes cigarettes as something negative, harmful, deadly and destructive.
Abstract:
Dissertation for the degree of Master in Civil Engineering, specialization in Buildings
Abstract:
Purpose - This study aims to investigate the influence of tube potential (kVp) variation on perceptual image quality and effective dose (E) for pelvis imaging, using automatic exposure control (AEC) and non-AEC in a Computed Radiography (CR) system. Methods and materials - The effects of using AEC and non-AEC were determined by applying the 10 kVp rule in two experiments using an anthropomorphic pelvis phantom. Images were acquired using 10 kVp increments (60–120 kVp) in both experiments. The first experiment, based on seven AEC combinations, produced 49 images. The mean mAs from each kVp increment was used as a baseline for the second experiment, producing 35 images. A total of 84 images were produced, and a panel of 5 experienced observers scored the images using the two-alternative forced choice (2AFC) visual grading software. PCXMC software was used to estimate E. Results - A decrease in perceptual image quality as the kVp increases was observed in both the non-AEC and AEC experiments; however, no statistically significant differences (p > 0.05) were found. Image quality scores from all observers at 10 kVp increments for all mAs values in non-AEC mode demonstrate a better score up to 90 kVp. E results show a statistically significant decrease (p < 0.001) on the 75th quartile, from 0.37 mSv at 60 kVp to 0.13 mSv at 120 kVp, when applying the 10 kVp rule in non-AEC mode. Conclusion - Using the 10 kVp rule, no significant reduction in perceptual image quality is observed when increasing kVp, whilst a marked and significant reduction in E is observed.
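The 10 kVp rule used in the study halves the mAs for every 10 kVp increase (and doubles it for every 10 kVp decrease) to keep receptor exposure roughly constant; the baseline of 20 mAs at 60 kVp below is an invented value for illustration, not the phantom settings from the paper.

```python
def mas_for_kvp(base_kvp, base_mas, target_kvp):
    """Apply the 10 kVp rule: each +10 kVp step halves the mAs,
    each -10 kVp step doubles it, keeping exposure roughly constant."""
    steps = (target_kvp - base_kvp) / 10.0
    return base_mas / (2 ** steps)

# Hypothetical baseline of 20 mAs at 60 kVp, stepped up to 120 kVp
# in the same 10 kVp increments used in the study.
for kvp in range(60, 130, 10):
    print(kvp, "kVp ->", mas_for_kvp(60, 20.0, kvp), "mAs")
```

The steep drop in mAs at high kVp is what drives the effective-dose reduction reported in the results.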
Abstract:
Microarrays allow thousands of genes to be monitored simultaneously, quantifying the abundance of transcripts under the same experimental condition at the same time. Among the various available array technologies, two-channel cDNA microarray experiments have arisen in numerous technical protocols associated with genomic studies, and they are the focus of this work. Microarray experiments involve many steps, and each one can affect the quality of the raw data. Background correction and normalization are preprocessing techniques to clean and correct the raw data when undesirable fluctuations arise from technical factors. Several recent studies showed that there is no preprocessing strategy that outperforms the others in all circumstances, and thus it seems difficult to provide general recommendations. In this work, it is proposed to use exploratory techniques to visualize the effects of preprocessing methods on the statistical analysis of two-channel cancer microarray data sets where the cancer types (classes) are known. The arrow plot was used for selecting differentially expressed genes, and the graph of profiles resulting from correspondence analysis for visualizing the results. Six background correction methods and six normalization methods were used, yielding 36 preprocessing combinations, which were analyzed on a published cDNA microarray database (Liver), available at http://genome-www5.stanford.edu/, whose microarrays were already classified by cancer type. All statistical analyses were performed using the R statistical software.
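One of the many possible background-correction/normalization pairings the study compares can be sketched as follows: plain background subtraction followed by global median centring of the log-ratios M = log2(R/G). The specific pairing and the simulated intensities are illustrative assumptions, not the methods or data used in the work.

```python
import numpy as np

def preprocess_two_channel(red_fg, red_bg, green_fg, green_bg):
    """Toy two-channel preprocessing: subtract background from each
    channel, floor at 1 to keep logs defined, then median-centre the
    log-ratios M = log2(R/G) as a simple global normalization."""
    r = np.maximum(red_fg - red_bg, 1.0)
    g = np.maximum(green_fg - green_bg, 1.0)
    m = np.log2(r / g)
    return m - np.median(m)

# Simulated foreground intensities for 100 spots with a flat background.
rng = np.random.default_rng(0)
red = rng.uniform(200, 1000, 100)
green = rng.uniform(200, 1000, 100)
m = preprocess_two_channel(red, 50.0, green, 50.0)
print(abs(float(np.median(m))) < 1e-6)  # normalized log-ratios are centred
```

Swapping in a different background method or normalization at each step is exactly how the 6 x 6 = 36 combinations evaluated in the study arise.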