Abstract:
Background: Visual impairment (VI) is rising in prevalence and contributing to increasing morbidity, particularly among older people. Understanding patients' problems is fundamental to achieving optimal health outcomes but little is known about how VI impacts on self-management of medication.
Aim: To compare issues relating to medication self-management between older people with and without VI.
Design and setting: Case-control study with participants aged ≥65 years, prescribed at least two long-term oral medications daily, living within the community.
Method: The study recruited 156 patients with VI (best corrected visual acuity [BCVA] 6/18 to 3/60) at low-vision clinics; community optometrists identified 158 controls (BCVA 6/9 or better). Researchers visited participants in their homes, administered two validated questionnaires to assess medication adherence (Morisky; Medication Adherence Report Scale [MARS]), and asked questions about medication self-management, beliefs, and support.
Results: Approximately half of the participants in both groups reported perfect adherence on both questionnaires (52.5%, Morisky; 43.3%, MARS). Despite using optical aids, few (3%) with VI could read medication information clearly; 24% had difficulty distinguishing different tablets. More people with VI (29%) than controls (13%) (odds ratio [OR] = 2.8; 95% confidence interval [CI] = 1.6 to 5.0) needed help managing their medication, from friends (19% versus 10%) or pharmacists (10% versus 2.5%; OR = 4.4, 95% CI = 1.4 to 13.5); more received social service support (OR = 7.1; 95% CI = 3.9 to 12.9).
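As a quick arithmetic check, the reported odds ratio for needing help can be reconstructed from the two group proportions. The proportions below are the abstract's rounded percentages, not the paper's raw counts, so the result differs slightly from the published OR:

```python
def odds(p):
    """Convert a proportion to the corresponding odds."""
    return p / (1 - p)

# 29% of the VI group vs 13% of controls needed help managing medication
or_help = odds(0.29) / odds(0.13)
print(round(or_help, 2))  # ~2.73, consistent with the reported OR = 2.8
```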
Conclusion: Compared to their peers without VI, older people with VI are more than twice as likely to need help in managing medication. In clinical practice in primary care, patients' needs for practical support in taking prescribed treatment must be recognised. Strategies for effective medication self-management should be explored.
Abstract:
The cerebral cortex contains circuitry for continuously computing properties of the environment and one's body, as well as relations among those properties. The success of complex perceptuomotor performances requires integrated, simultaneous use of such relational information. Ball catching is a good example as it involves reaching and grasping of visually pursued objects that move relative to the catcher. Although integrated neural control of catching has received sparse attention in the neuroscience literature, behavioral observations have led to the identification of control principles that may be embodied in the involved neural circuits. Here, we report a catching experiment that refines those principles via a novel manipulation. Visual field motion was used to perturb velocity information about balls traveling on various trajectories relative to a seated catcher, with various initial hand positions. The experiment produced evidence for a continuous, prospective catching strategy, in which hand movements are planned based on gaze-centered ball velocity and ball position information. Such a strategy was implemented in a new neural model, which suggests how position, velocity, and temporal information streams combine to shape catching movements. The model accurately reproduces the main and interaction effects found in the behavioral experiment and provides an interpretation of recently observed target motion-related activity in the motor cortex during interceptive reaching by monkeys. It functionally interprets a broad range of neurobiological and behavioral data, and thus contributes to a unified theory of the neural control of reaching to stationary and moving targets.
Abstract:
To date, the usefulness of stereoscopic visual displays in research on manual interceptive actions has never been examined. In this study, we compared the catching movements of 8 right-handed participants (6 men, 2 women) in a real environment (with suspended balls swinging past the participant, requiring lateral hand movements for interception) with those in a situation in which similar virtual ball trajectories were displayed stereoscopically in a virtual reality system (Cave Automatic Virtual Environment [CAVE]; Cruz-Neira, Sandin, DeFanti, Kenyon, & Hart, 1992) with the head fixated. Catching the virtual ball involved grasping a lightweight ball attached to the palm of the hand. The results showed that, compared to real catching, hand movements in the CAVE were (a) initiated later, (b) less accurate, (c) smoother, and (d) aimed more directly at the interception point. Although the latter 3 observations might be attributable to the delayed movement initiation observed in the CAVE, this delayed initiation might have resulted from the use of visual displays. This suggests that stereoscopic visual displays such as those present in many virtual reality systems should be used circumspectly in the experimental study of catching, and only to address research questions requiring no detailed analysis of the information-based online control of the catching movements.
Abstract:
Growing evidence suggests that significant motor problems are associated with a diagnosis of Autism Spectrum Disorders (ASD), particularly in catching tasks. Catching is a complex, dynamic skill that involves the ability to synchronise one's own movement to that of a moving target. To successfully complete the task, the participant must pick up and use perceptual information about the moving target to arrive at the catching place at the right time. This study looks at catching ability in children diagnosed with ASD (mean age 10.16 ± 0.9 years) and age-matched non-verbal (9.72 ± 0.79 years) and receptive language (9.51 ± 0.46 years) control groups. Participants were asked to "catch" a ball as it rolled down a fixed ramp. Two ramp heights provided two levels of task difficulty, whilst the sensory information (audio and visual) specifying ball arrival time was varied. Results showed children with ASD performed significantly worse than both the receptive language (p = .02) and non-verbal (p = .02) control groups in terms of total number of balls caught. A detailed analysis of the movement kinematics showed that difficulties with picking up and using the sensory information to guide the action may be the source of the problem. © 2013 Elsevier Ltd.
Abstract:
PURPOSE. To investigate the methods used in contemporary ophthalmic literature to designate visual acuity (VA). METHODS. Papers in all 2005 editions of five ophthalmic journals were considered. Papers were included if (1) VA, vision, or visual function was mentioned in the abstract and (2) the study involved age-related macular degeneration, cataract, or refractive surgery. If a paper was selected on the basis of its abstract, the full text of the paper was examined for information on the method of refractive correction during VA testing, type of chart used to measure VA, specifics concerning chart features, testing protocols, data analysis, and means of expressing VA in results. RESULTS. One hundred twenty-eight papers were included. The most common type of chart used was described as logMAR-based. Although most (89.8%) of the studies reported the method of refractive correction during VA testing, only 58.6% gave the chart design, and less than 12% gave any information whatsoever on chart features or measurement procedures used. CONCLUSIONS. The methods used and the approach to analysis were rarely described in sufficient detail to allow others to replicate the study being reported. Sufficient detail should be given on VA measurement to enable others to duplicate the research. The authors suggest that charts adhering to Bailey-Lovie design principles always be used to measure vision in prospective studies and that their use be encouraged in clinical settings. The distinction between the terms logMAR, an acuity notation, and Bailey-Lovie or ETDRS as chart types should be adhered to more strictly. Copyright © Association for Research in Vision and Ophthalmology.
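The distinction the authors draw between logMAR (a notation) and chart types is worth making concrete. Converting a Snellen fraction to logMAR is a one-line standard formula (this is textbook material, not taken from the paper):

```python
import math

def snellen_to_logmar(numerator, denominator):
    """logMAR = log10(MAR), where the minimum angle of resolution
    (MAR) for a Snellen fraction such as 6/18 is denominator/numerator."""
    return math.log10(denominator / numerator)

print(snellen_to_logmar(6, 6))             # 0.0 (normal acuity)
print(round(snellen_to_logmar(6, 18), 2))  # 0.48
print(round(snellen_to_logmar(3, 60), 2))  # 1.3
```

So the VI inclusion range quoted elsewhere in this listing (BCVA 6/18 to 3/60) spans roughly logMAR 0.5 to 1.3.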
Abstract:
Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
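The sensitivity index d' reported above is computed from hit and false-alarm rates via the inverse normal CDF. A minimal sketch follows; the example rates are illustrative, not the paper's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse CDF of the standard normal distribution."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g. 84% hits with 16% false alarms gives d' close to 2
print(round(d_prime(0.84, 0.16), 2))
```

Performance at chance (equal hit and false-alarm rates) gives d' = 0, which is the baseline the authors' "above chance" claims refer to.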
Abstract:
In this paper, we propose a novel visual tracking framework, based on a decision-theoretic online learning algorithm namely NormalHedge. To make NormalHedge more robust against noise, we propose an adaptive NormalHedge algorithm, which exploits the historic information of each expert to perform more accurate prediction than the standard NormalHedge. Technically, we use a set of weighted experts to predict the state of the target to be tracked over time. The weight of each expert is online learned by pushing the cumulative regret of the learner towards that of the expert. Our simulation experiments demonstrate the effectiveness of the proposed adaptive NormalHedge, compared to the standard NormalHedge method. Furthermore, the experimental results of several challenging video sequences show that the proposed tracking method outperforms several state-of-the-art methods.
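The expert-weighting idea can be illustrated with the classical exponentially weighted (Hedge) update, a simpler relative of the NormalHedge rule the paper builds on. The losses and learning rate below are illustrative assumptions, not the paper's algorithm:

```python
import math

def hedge_update(weights, losses, eta=0.5):
    """Multiplicative-weights step: each expert's weight is scaled down
    exponentially in its observed loss, then weights are renormalized."""
    scaled = [w * math.exp(-eta * loss) for w, loss in zip(weights, losses)]
    total = sum(scaled)
    return [w / total for w in scaled]

# four experts start equal; the one with the smallest loss gains weight
w = hedge_update([0.25] * 4, [0.1, 0.9, 0.5, 0.5])
print(w.index(max(w)))  # expert 0 now has the largest weight
```

NormalHedge differs in that it is parameter-free: rather than a fixed learning rate, it weights each expert through a potential function of its cumulative regret, which is what the paper's adaptive variant extends with historic expert information.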
Abstract:
Experience obtained in the support of mobile learning using podcast audio is reported. The paper outlines design, storage, and distribution via a web site. An initial evaluation of the uptake of the approach in a final-year computing module was undertaken. Audio objects were tailored to meet different pedagogical needs, resulting in a repository of persistent glossary terms and disposable audio lectures distributed by podcasting. An aim of our approach is to document the interest from the students and to evaluate the potential of mobile learning for supplementing revision.
Abstract:
Information Visualization is gradually emerging as an aid to representing and comprehending large datasets about Higher Education Institutions, making the data more easily understood. The importance of gaining insights and knowledge regarding higher education institutions is little disputed. Within this field, the area most urgently in need of systematic understanding is the use of communication technologies, an area that is having a transformative impact on educational practices worldwide. This study focused on the need to visually represent a dataset about how Portuguese Public Higher Education Institutions use Communication Technologies to support teaching and learning processes. Project TRACER identified this need in the Portuguese public higher education context and carried out a national data collection. This study was developed within project TRACER and worked with the dataset collected in order to conceptualize an information visualization tool, U-TRACER®. The main goals of this study were to conceptualize the information visualization tool U-TRACER®, representing the data collected by project TRACER, and to understand higher education decision makers' perception of its usefulness. These goals allowed us to contextualize the phenomenon of information visualization tools for higher education data and to identify existing trends. The research undertaken was qualitative in nature and followed the case study method, with four moments of data collection. The first moment concerned the conceptualization of U-TRACER®, with two focus group sessions with Higher Education professionals aimed at defining the interaction features U-TRACER® should offer. The second moment involved the proposal of the graphical displays that would represent the dataset, whose reading effectiveness was tested by end-users.
The third moment involved a usability test of U-TRACER®, performed by higher education professionals, which resulted in proposed improvements to the final prototype of the tool. The fourth moment involved exploratory, semi-structured interviews with institutional decision makers regarding the perceived usefulness of U-TRACER®. We consider that the results of this study contribute to two strands of reflection. The first concerns the challenges of involving end-users in the conceptualization of an information visualization tool, and the relevance of effective visual displays for effective communication of data and information. The second concerns how higher education decision makers, as stakeholders of the U-TRACER® tool, perceive its usefulness, both for communicating their institutions' data and for benchmarking exercises, as well as for supporting decision processes, and also the main concerns about opening up data about higher education institutions in a global market.
Abstract:
Existing interactive television infrastructures allow the integration of a wide variety of resources and services, offering users new experiences of interaction and participation. For most viewers, the use of interactive services poses no great difficulty; however, for audiences with special needs, for example people with visual impairment, this task becomes complex, hindering or even preventing these users from benefiting from this type of service. Portugal is no exception in this context: a significant number of visually impaired users (UDV) do not fully benefit from the potential of the current television paradigm. In this scope, the research project supporting this thesis explores the problem of Universal Design applied to Interactive Television (iTV), with the objectives of conceptualizing, prototyping, and validating an iTV service specifically adapted to visually impaired users, aiming to promote their digital inclusion. To meet these objectives, the research was divided into three distinct stages. In the first stage, using Grounded Theory, the difficulties and needs of visually impaired users as consumers of television content and audio description services were identified; the most suitable technological platform to support the prototyped service was selected; and a set of guiding design principles (PODs) for interactive television interfaces specific to this target audience was defined. Initially, two interviews were conducted with 20 visually impaired participants to determine their difficulties and needs as consumers of television content and audio description services.
Next, an interview was conducted with an expert responsible for the transition to digital terrestrial television (TDT) in Portugal (initially TDT was considered a promising platform that could support the prototype), and the literature on design principles for the development of iTV service interfaces for visually impaired people was reviewed. From the results obtained in this stage it was possible to define the functional and technical requirements of the system, as well as its design principles, at both the graphical and the interaction level. In the second stage, the iTV prototype adapted to visually impaired users, 'meo ad+', was conceptualized and developed on Portugal Telecom's IPTV technological platform, following the defined requirements and design principles. The third stage comprised the evaluation of the prototyped service by a group of visually impaired participants. This phase of the work was conducted using the Evaluative Study method: through usability and accessibility tests, complemented with interviews, it assessed whether the prototyped service effectively met the needs of this type of user. The participants involved in the prototype tests were satisfied with the functionalities offered by the system, as well as with the design of its interface.
Abstract:
Lines and edges provide important information for object categorization and recognition. In addition, a brightness model has been based on a symbolic interpretation of the cortical multi-scale line/edge representation. In this paper we present an improved scheme for line/edge extraction from simple and complex cells and we illustrate the multi-scale representation. This representation can be used for visual reconstruction, but also for non-photorealistic rendering. Together with keypoints and a new model of disparity estimation, a 3D wireframe representation of, for example, faces can be obtained in the future.
Abstract:
Face detection and recognition should be complemented by recognition of facial expression, for example for social robots which must react to human emotions. Our framework is based on two multi-scale representations in cortical area V1: keypoints at eyes, nose and mouth are grouped for face detection [1]; lines and edges provide information for face recognition [2].
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2015
Abstract:
Master's thesis, Education (specialization in Education and Digital Technologies), Universidade de Lisboa, Instituto de Educação, 2014
Abstract:
The neuropsychological phenomenon of blindsight has been taken to suggest that the primary visual cortex (V1) plays a unique role in visual awareness, and that extrastriate activation needs to be fed back to V1 in order for the content of that activation to be consciously perceived. The aim of this review is to evaluate this theoretical framework and to revisit its key tenets. Firstly, is blindsight truly a dissociation of awareness and visual detection? Secondly, is there sufficient evidence to rule out the possibility that the loss of awareness resulting from a V1 lesion simply reflects reduced extrastriate responsiveness, rather than a unique role of V1 in conscious experience? Evaluation of these arguments and the empirical evidence leads to the conclusion that the loss of phenomenal awareness in blindsight may not be due to feedback activity in V1 being the hallmark of awareness. On the basis of the existing literature, an alternative explanation of blindsight is proposed. In this view, visual awareness is a “global” cognitive function, as its hallmark is the availability of information to a large number of perceptual and cognitive systems; this requires inter-areal long-range synchronous oscillatory activity. For these oscillations to arise, a specific temporal profile of neuronal activity is required, which is established through recurrent feedback activity involving V1 and the extrastriate cortex. When V1 is lesioned, the loss of recurrent activity prevents inter-areal networks from forming on the basis of oscillatory activity. However, as a limited amount of input can reach the extrastriate cortex and some extrastriate neuronal selectivity is preserved, computations involving comparison of neural firing rates within a cortical area remain possible. This enables “local” read-out from specific brain regions, allowing for the detection and discrimination of basic visual attributes.
Thus blindsight is blind due to the lack of “global” long-range synchrony, and it functions via “local” neural read-out from extrastriate areas.