12 results for Visual Object Identification Task

em Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevância: 30.00%

Resumo:

The aging of the Portuguese population is characterized by an increase in the number of individuals aged over 65 years. Preventable visual loss in older persons is an important public health problem. Tests used for vision screening should have a high degree of diagnostic validity, confirmed by means of clinical trials. The primary aim of a screening program is the early detection of visual disease. Between 20% and 50% of older people in the UK have undetected reduced vision, and in most cases it is correctable. Elderly patients do not receive a systematic eye examination unless a problem arises with their glasses or vision loss is suspected. This study aimed to determine and evaluate the diagnostic accuracy of visual screening tests for detecting vision loss in the elderly. Furthermore, it aimed to define the tests' ability to identify subjects affected with vision loss as positive and subjects not affected as negative. The ideal vision screening method should have high sensitivity and specificity for early detection of risk factors. It should also be low-cost and easy to implement in all geographic and socioeconomic regions. Sensitivity is the ability of an examination to identify the presence of a given disease, and specificity is the ability of the examination to identify its absence. It was not an aim of this study to detect abnormalities that affect visual acuity; the aim was to find the best test for the identification of any vision loss.
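As a minimal illustration of the definitions above, sensitivity and specificity can be computed from a screening test's confusion counts; all counts below are hypothetical:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Diagnostic validity of a screening test.

    sensitivity = TP / (TP + FN): ability to flag subjects with vision loss.
    specificity = TN / (TN + FP): ability to clear subjects without it.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical screening outcomes, for illustration only.
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=90, fp=10)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.90, specificity=0.90
```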

Relevância: 30.00%

Resumo:

The identification of core competencies that are important for undertaking accurate visual screening by orthoptists is considered in this study. The aim was to construct and validate a questionnaire for orthoptists to assess visual screening competency. The study comprised three steps. The first involved a 69-item self-assessment questionnaire constructed to assess orthoptists' perception of their competencies in visual screening programs for children. This questionnaire was built from statements in the Orthoptic Benchmark Statement for Health Care Programmes (Quality Assurance Agency for Higher Education, UK) and included three competency dimensions: interpersonal (IP), instrumental (IT) and systemic (ST). The second step involved questionnaire translation.

Relevância: 30.00%

Resumo:

Introduction – Diagnostic evaluation in mammography depends on the radiologist's performance, which can be subject to diagnostic errors. Aims – To describe the importance of the radiologist's visual perception in mammographic diagnostic evaluation and to identify the main factors that contribute to diagnostic accuracy. Methods – Descriptive study based on a systematic literature review using PubMed and Science Direct. Forty-two articles meeting at least one of the inclusion criteria were included. References were selected with the PRISMA methodology, comprising four phases: identification, screening, eligibility and included studies. Results – Visual perception in mammographic diagnostic evaluation is closely related to: 1) visual parameters and ocular motility (visual acuity, contrast and luminance sensitivity, and eye movements); 2) the image viewing conditions (room illuminance and monitor luminance); and 3) eyestrain caused by consecutive daily observation of images. Conclusions – Visual perception can be influenced by three categories of errors: search errors (lesions are never fixated with high-resolution foveal vision), recognition errors (lesions are fixated, but not long enough to be detected or recognized) and decision errors (lesions are fixated for long periods but still missed). The reviewed studies on visual perception, visual attention and visual strategy, as well as those on viewing conditions, do not characterize the observers' visual function. An accurate evaluation of visual perception in mammography requires studies that correlate visual function with diagnostic quality.

Relevância: 30.00%

Resumo:

In the last decade, local image features have been widely used in robot visual localization. To assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image to those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, we compare several candidate combiners with respect to their performance in the visual localization task. A deeper insight into the potential of the sum and product combiners is provided by testing two extensions of these algebraic rules: threshold and weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance. The voting method, whilst competitive with the algebraic rules in their standard form, is shown to be outperformed by both their modified versions.
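As a hedged sketch (not the paper's implementation), the algebraic combiners and their threshold and weighted modifications can be expressed over a matrix of per-classifier, per-place scores; all values and names below are illustrative:

```python
import numpy as np

# Rows: classifiers (feature matchers); columns: candidate places.
# Entries are normalized scores in [0, 1]; the values are illustrative only.
scores = np.array([[0.7, 0.2, 0.1],
                   [0.6, 0.3, 0.1],
                   [0.1, 0.8, 0.1]])

def sum_rule(s):
    # Sum rule: add the scores each classifier assigns to a place.
    return s.sum(axis=0)

def product_rule(s):
    # Product rule: multiply per-classifier scores (sensitive to near-zeros).
    return s.prod(axis=0)

def threshold_sum(s, t=0.25):
    # Threshold modification: clip low scores to t before summing,
    # limiting the influence of weak (possibly noisy) matchers.
    return np.maximum(s, t).sum(axis=0)

def weighted_sum(s, w):
    # Weighted modification: per-classifier reliability weights.
    return (w[:, None] * s).sum(axis=0)

# The place with the highest combined score wins.
best_place = int(np.argmax(sum_rule(scores)))
```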

Relevância: 30.00%

Resumo:

Environment monitoring has an important role in occupational exposure assessment. However, due to several factors, it is done with insufficient frequency and normally does not provide the information needed to choose the most adequate safety measures to avoid or control exposure. Identifying all the tasks developed in each workplace and conducting a task-based exposure assessment helps to refine the exposure characterization and reduce assessment errors. A task-based assessment can also provide a better evaluation of exposure variability than assessing personal exposures with continuous 8-hour time-weighted average measurements. Health effects related to particle exposure have mainly been investigated with mass-measuring instruments or gravimetric analysis. More recently, however, some studies support that size distribution and particle number concentration may have advantages over particle mass concentration for assessing the health effects of airborne particles. Several exposure assessments were performed in different occupational settings (bakery, grill house, cork industry and horse stable), applying these two resources: task-based exposure assessment and particle number concentration by size. The task-based approach made it possible to identify the tasks with higher exposure to the smaller particles (0.3 μm) in the different occupational settings. The data obtained allow a more concrete and effective risk assessment and the identification of priorities for safety investments.
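As a minimal sketch of the task-based idea (all task names, durations, and concentrations below are hypothetical), per-task measurements can be rolled up into a full-shift time-weighted average while still ranking tasks by exposure:

```python
# Task-based exposure assessment: combine task-level particle number
# concentrations (particles/cm^3, hypothetical values) into a full-shift
# time-weighted average, while keeping per-task data to rank tasks.
tasks = [
    # (task name, duration in hours, mean concentration)
    ("dough preparation", 2.0, 5.0e4),
    ("oven work",         4.0, 1.2e4),
    ("cleaning",          2.0, 8.0e4),
]

total_hours = sum(h for _, h, _ in tasks)
twa = sum(h * c for _, h, c in tasks) / total_hours

# Unlike a single 8-h average, the task breakdown shows where to intervene.
worst_task = max(tasks, key=lambda t: t[2])[0]
print(f"8-h TWA = {twa:.0f} particles/cm^3; highest-exposure task: {worst_task}")
```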

Relevância: 30.00%

Resumo:

Many studies have demonstrated the relationship between alpha activity and central visual ability, in which visual ability is usually assessed through static stimuli. In the real environment, however, there are often dynamic changes, and peripheral visual ability in a dynamic environment (i.e., dynamic peripheral visual ability) is important for everyone. So far, no work has reported whether there is a relationship between dynamic peripheral visual ability and alpha activity; thus, the objective of this study was to investigate it. Sixty-two soccer players performed a newly designed peripheral vision task with dynamic visual stimuli while their EEG signals were recorded from the Cz, O1, and O2 locations. The relationship between dynamic peripheral visual performance and alpha activity was examined by the percentage-bend correlation test. The results indicated no significant correlation between dynamic peripheral visual performance and alpha amplitudes in the eyes-open and eyes-closed resting conditions. This was not the case for alpha activity during the peripheral vision task: dynamic peripheral visual performance showed significant positive inter-individual correlations with the amplitudes in the alpha band (8-12 Hz) and the individual alpha band (IAB) during the task. A potential application of this finding is to improve dynamic peripheral visual performance by up-regulating alpha activity with neuromodulation techniques.
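A minimal sketch of how alpha-band (8-12 Hz) amplitude could be estimated from an EEG channel; this is a simple FFT-based illustration on synthetic data, not the authors' processing pipeline:

```python
import numpy as np

def alpha_amplitude(eeg, fs, band=(8.0, 12.0)):
    """Mean spectral amplitude of a 1-D EEG signal inside the alpha band.

    eeg: signal samples; fs: sampling rate in Hz.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

# Synthetic check: a 10 Hz sinusoid (inside the alpha band) should show
# more alpha amplitude than a 40 Hz sinusoid (outside it).
fs = 256
t = np.arange(0, 4, 1 / fs)
assert alpha_amplitude(np.sin(2 * np.pi * 10 * t), fs) > \
       alpha_amplitude(np.sin(2 * np.pi * 40 * t), fs)
```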

Relevância: 30.00%

Resumo:

In the last decade, local image features have been widely used in robot visual localization. In order to assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image with those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, we compare several candidate combiners with respect to their performance in the visual localization task. For this evaluation, we selected the most popular methods in the class of non-trained combiners, namely the sum rule and product rule. A deeper insight into the potential of these combiners is provided through a discriminativity analysis involving the algebraic rules and two extensions of these methods: the threshold and the weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. Furthermore, we address the process of constructing a model of the environment by describing how the model granularity impacts upon performance. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance, confirming the general agreement on the robustness of this rule in other classification problems. The voting method, whilst competitive with the product rule in its standard form, is shown to be outperformed by its modified versions.

Relevância: 30.00%

Resumo:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method, simplex identification via split augmented Lagrangian (SISAL), on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
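For orientation, the linear mixing model that linear spectral unmixing operates on can be sketched with synthetic data; the toy abundance recovery below uses plain least squares for illustration and is not the SISAL algorithm:

```python
import numpy as np

# Linear mixing model: each pixel is a convex combination of endmember
# signatures. Synthetic data only; this illustrates the model, not SISAL.
rng = np.random.default_rng(0)
bands, endmembers = 50, 3
M = rng.random((bands, endmembers))   # endmember signatures (one per column)
a_true = np.array([0.6, 0.3, 0.1])    # material fractions, summing to 1
pixel = M @ a_true                    # noise-free mixed pixel

# Recover the abundances by (unconstrained) least squares. Real unmixing
# methods such as SISAL handle noise and enforce simplex constraints.
a_hat, *_ = np.linalg.lstsq(M, pixel, rcond=None)
assert np.allclose(a_hat, a_true, atol=1e-8)
```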

Relevância: 30.00%

Resumo:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems involved in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive, and even power-consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.

Relevância: 30.00%

Resumo:

Terrestrial remote sensing imagery involves the acquisition of information from the Earth's surface without physical contact with the area under study. Among the remote sensing modalities, hyperspectral imaging has recently emerged as a powerful passive technology. It has been widely used in the fields of urban and regional planning, water resource management, environmental monitoring, food safety, counterfeit drug detection, detection of oil spills and other types of chemical contamination, biological hazard prevention, and target detection for military and security purposes [2-9]. Hyperspectral sensors sample the reflected solar radiation from the Earth's surface in the portion of the spectrum extending from the visible region through the near-infrared and mid-infrared (wavelengths between 0.3 and 2.5 µm) in hundreds of narrow (on the order of 10 nm) contiguous bands [10]. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics [6]. However, this huge spectral resolution yields large amounts of data to be processed. For example, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [11] collects a 512 (along track) x 614 (across track) x 224 (bands) x 12 (bits) data cube in 5 s, corresponding to about 140 MB. Similar data collection ratios are achieved by other spectrometers [12]. Such huge data volumes put stringent requirements on communications, storage, and processing. The problem of signal subspace identification of hyperspectral data represents a crucial first step in many hyperspectral processing algorithms, such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction (DR), yielding gains in data storage and retrieval and in computational time and complexity.
Additionally, DR may also improve algorithm performance, since it reduces data dimensionality without losses in the useful signal components. The computation of statistical estimates is a relevant example of the advantages of DR, since the number of samples required to obtain accurate estimates increases drastically with the dimensionality of the data (the Hughes phenomenon) [13].
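The AVIRIS data-volume figures quoted above can be checked in a few lines; this assumes each 12-bit sample is stored in a 16-bit word (a common convention, and consistent with the ~140 MB figure):

```python
# One AVIRIS data cube: 512 x 614 pixels x 224 bands of 12-bit samples,
# acquired in 5 s. Assuming 16-bit storage words per 12-bit sample.
samples = 512 * 614 * 224
cube_mb = samples * 2 / 1e6     # bytes -> decimal megabytes
rate_mb_s = cube_mb / 5.0       # sustained data rate during acquisition
print(f"{cube_mb:.0f} MB per cube, {rate_mb_s:.0f} MB/s")
```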

Relevância: 30.00%

Resumo:

Purpose - In this study we aim to validate a method to assess the impact of reduced visual function and observer performance concurrently with a nodule detection task. Materials and methods - Three consultant radiologists completed a nodule detection task under three conditions: without visual defocus (0.00 Dioptres; D), and with two different magnitudes of visual defocus (−1.00 D and −2.00 D). Defocus was applied with lenses and visual function was assessed prior to each image evaluation. Observers evaluated the same cases on each occasion; these comprised 50 abnormal cases containing 1–4 simulated nodules (5, 8, 10 and 12 mm spherical diameter, 100 HU) placed within a phantom, and 25 normal cases (images containing no nodules). Data were collected under the free-response paradigm and analysed using Rjafroc. A difference in nodule detection performance would be considered significant at p < 0.05. Results - All observers had acceptable visual function prior to beginning the nodule detection task. Visual acuity was reduced to an unacceptable level for two observers when defocussed to −1.00 D and for one observer when defocussed to −2.00 D. Stereoacuity was unacceptable for one observer when defocussed to −2.00 D. Despite unsatisfactory visual function in the presence of defocus, we were unable to find a statistically significant difference in nodule detection performance (F(2,4) = 3.55, p = 0.130). Conclusion - A method to assess visual function and observer performance is proposed. In this pilot evaluation we were unable to detect any difference in nodule detection performance when using lenses to reduce visual function.

Relevância: 30.00%

Resumo:

The task-based approach involves identifying all the tasks performed in each workplace, aiming to refine the exposure characterization. The starting point of this approach is the recognition that only through a more detailed and comprehensive understanding of tasks is it possible to understand the exposure scenario in more detail. It also allows the identification of the most suitable risk management measures. This approach can likewise be used when there is a need to identify workplace surfaces for sampling chemicals for which the dermal route is the most important exposure route. In this case it is possible to identify, through detailed observation of task performance, the surfaces that workers contact most frequently and that can become contaminated. The aim was to identify the surfaces to sample when performing occupational exposure assessment of antineoplastic agents, with surface selection based on the task-based approach.