Abstract:
Delirium is a significant problem for older hospitalized people and is associated with poor outcomes. It is poorly recognized, and evidence suggests that a major reason is lack of education. Nurses who are educated about delirium can play a significant role in improving delirium recognition. This study evaluated the impact of a delirium-specific educational website. A cluster randomized controlled trial, with a pretest/post-test time series design, was conducted to measure delirium knowledge (DK) and delirium recognition (DR) over three time points. Statistically significant differences were found between the intervention and non-intervention groups. The intervention group's DK scores were higher, and the changes over time were statistically significant [T3 vs. T1 (t=3.78, p<0.001) and T2 vs. T1 baseline (t=5.83, p<0.001)]. Statistically significant improvements were also seen for DR when comparing T2 and T1 results (t=2.56, p=0.011) between both groups, but not for changes in DR scores between T3 and T1 (t=1.80, p=0.074). Participants rated the website highly on its visual, functional and content elements. This study supports the concept that web-based delirium learning is an effective and satisfying method of information delivery for registered nurses. Future research is required to investigate clinical outcomes resulting from this web-based education.
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems without the need for prior training or system tuning.
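As a loose illustration of the idea in this abstract — converting raw whole-image descriptor distances into place-match likelihoods by fitting a simple distribution to the distances observed online — the following sketch may help. It is not the authors' algorithm; the class name, the Gaussian distance model and the logistic squashing are all illustrative assumptions.

```python
import numpy as np

class OnlineDescriptorLikelihood:
    """Toy sketch: convert whole-image descriptor distances into
    place-match likelihoods using distance statistics gathered online.
    NOT the published algorithm, only an illustration of the idea."""

    def __init__(self):
        self.distances = []          # distances to all previously compared places

    def update(self, query_desc, database_descs):
        """Return a likelihood for each stored descriptor given the query."""
        dists = np.linalg.norm(database_descs - query_desc, axis=1)
        self.distances.extend(dists.tolist())
        # Fit a simple Gaussian to the observed distance population online.
        mu, sigma = np.mean(self.distances), np.std(self.distances) + 1e-9
        # Smaller-than-typical distances are treated as likely matches.
        z = (mu - dists) / sigma
        return 1.0 / (1.0 + np.exp(-z))      # squash to (0, 1)

# Usage with random "GIST-like" descriptors.
rng = np.random.default_rng(0)
database = rng.normal(size=(50, 512))
query = database[10] + 0.05 * rng.normal(size=512)   # noisy revisit of place 10
model = OnlineDescriptorLikelihood()
print(np.argmax(model.update(query, database)))       # expected: 10
```

The point of the sketch is only that the likelihoods are derived from statistics gathered during operation, so no environment-specific tuning or pre-training is needed.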
Abstract:
This paper presents a new multi-scale place recognition system inspired by the recent discovery of overlapping, multi-scale spatial maps stored in the rodent brain. By training a set of Support Vector Machines to recognize places at varying levels of spatial specificity, we are able to validate spatially specific place recognition hypotheses against broader place recognition hypotheses without sacrificing localization accuracy. We evaluate the system in a range of experiments using cameras mounted on a motorbike and a human in two different environments. At 100% precision, the multi-scale approach results in a 56% average improvement in recall rate across both datasets. We analyse the results and then discuss future work that may lead to improvements in both robotic mapping and our understanding of sensory processing and encoding in the mammalian brain.
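A minimal sketch of the multi-scale idea — one classifier per level of spatial specificity, with a fine-scale hypothesis accepted only when it agrees with a broader-scale one — is given below, assuming scikit-learn SVMs and synthetic descriptors; the grouping of five places per region and all names are invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic descriptors for 20 specific places grouped into 4 broad regions
# (5 places per region) -- this structure is invented for illustration.
rng = np.random.default_rng(1)
centres = 5.0 * rng.normal(size=(20, 64))
fine_labels = np.repeat(np.arange(20), 10)              # 200 training samples
X = centres[fine_labels] + rng.normal(size=(200, 64))
coarse_labels = fine_labels // 5

fine_svm = SVC().fit(X, fine_labels)       # spatially specific recogniser
coarse_svm = SVC().fit(X, coarse_labels)   # broader-scale recogniser

def multiscale_predict(x):
    """Accept the spatially specific hypothesis only if it is consistent
    with the broader-scale hypothesis (illustrative logic only)."""
    fine = int(fine_svm.predict([x])[0])
    coarse = int(coarse_svm.predict([x])[0])
    return fine if fine // 5 == coarse else None

print(multiscale_predict(X[0] + 0.1 * rng.normal(size=64)))   # likely place 0
```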
Abstract:
In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously only been solved by integrating over 320-frame-long image sequences. The system achieves 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
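The abstract gives no implementation detail, but the fast low-resolution whole-image stage together with cohort normalisation could look roughly like the sketch below; the tolerant sub-image patch matching stage is omitted, and the down-sampling factor and scoring are assumptions rather than the published method.

```python
import numpy as np

def downsample(img, factor=8):
    """Block-average an image to a low-resolution thumbnail."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cohort_normalised_match(query, database, factor=8):
    """Rank database frames by low-resolution whole-image difference and
    normalise the best score against the cohort of all other scores.
    Illustrative only -- the published method also adds tolerant patch matching."""
    q = downsample(query, factor)
    diffs = np.array([np.abs(downsample(d, factor) - q).mean() for d in database])
    cohort_mu, cohort_sigma = diffs.mean(), diffs.std() + 1e-9
    best = int(np.argmin(diffs))
    score = (cohort_mu - diffs[best]) / cohort_sigma   # how far the best match
    return best, score                                 # stands out from the cohort

rng = np.random.default_rng(2)
database = [rng.random((120, 160)) for _ in range(30)]
query = np.clip(database[7] + 0.1 * rng.normal(size=(120, 160)), 0, 1)
print(cohort_normalised_match(query, database))        # expected best index: 7
```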
Abstract:
Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, weddings, etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV, movies or postings on the World Wide Web may also contain ‘acted’ emotions and facial expressions, it may be more ‘realistic’ than the lab-based data currently used by most researchers. Or is it? One way of testing this is to compare feature distributions and FER performance. This paper describes a database collected from television broadcasts and the World Wide Web, containing a range of environmental and facial variations expected in real conditions, and uses it to answer this question. A fully automatic system that uses a fusion-based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and the robustness to image scale variations, are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
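As a generic illustration of fusing point-based texture and geometry features (not the specific system described above), a simple feature-level fusion with synthetic data might look like this; the feature dimensions, six-class labels and choice of classifier are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical data: per-face texture features (e.g. local appearance around
# landmarks) and geometry features (e.g. landmark distances and angles).
rng = np.random.default_rng(3)
n = 300
labels = rng.integers(0, 6, size=n)                 # six basic expressions
texture = rng.normal(size=(n, 128)) + labels[:, None] * 0.2
geometry = rng.normal(size=(n, 20)) + labels[:, None] * 0.3

# Feature-level fusion: standardise each modality, then concatenate.
fused = np.hstack([StandardScaler().fit_transform(texture),
                   StandardScaler().fit_transform(geometry)])
clf = LogisticRegression(max_iter=1000).fit(fused[:200], labels[:200])
print("fused accuracy:", clf.score(fused[200:], labels[200:]))
```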
Abstract:
Fusion techniques can be used in biometrics to achieve higher accuracy. When biometric systems are in operation and the threat level changes, controlling the trade-off between detection error rates can reduce the impact of an attack. In a fused system, varying a single threshold does not allow this to be achieved, but systematic adjustment of a set of parameters does. In this paper, fused decisions from a multi-part, multi-sample sequential architecture are investigated for that purpose in an iris recognition system. A specific implementation of the multi-part architecture is proposed, and the effect of the number of parts and samples on the resultant detection error rates is analysed. The effectiveness of the proposed architecture is then evaluated under two specific cases of obfuscation attack: miosis and mydriasis. Results show that robustness to such obfuscation attacks is achieved, since lower error rates are obtained than with the non-fused base system.
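A toy sketch of multi-part decision fusion — accept only when enough iris parts independently pass the matcher threshold, so the operating point can be shifted by adjusting more than a single threshold — is shown below. The scores, threshold and `min_parts` parameter are illustrative, not taken from the paper.

```python
import numpy as np

def fused_decision(part_scores, threshold, min_parts):
    """Accept only if at least `min_parts` iris parts independently pass
    the matcher threshold. Raising `min_parts` or `threshold` shifts the
    operating point toward fewer false accepts (and more false rejects).
    Parameter names and values are illustrative, not from the paper."""
    return int(np.sum(np.asarray(part_scores) >= threshold) >= min_parts)

genuine = [0.82, 0.77, 0.40, 0.88]     # one part degraded (e.g. occlusion)
impostor = [0.35, 0.62, 0.30, 0.41]

for min_parts in (2, 3, 4):
    print(min_parts,
          fused_decision(genuine, 0.6, min_parts),
          fused_decision(impostor, 0.6, min_parts))
```

Raising `min_parts` or the threshold pushes errors toward false rejects, which is the kind of trade-off adjustment the abstract refers to when the threat level changes.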
Abstract:
This work aims at developing a planetary rover capable of acting as an assistant astrobiologist: making a preliminary analysis of the collected visual images that will help to make better use of the scientists' time by pointing out the most interesting pieces of data. This paper focuses on the problem of detecting and recognising particular types of stromatolites. Inspired by the processes actual astrobiologists go through in the field when identifying stromatolites, the processes we investigate focus on recognising characteristics associated with biogenicity. The extraction of these characteristics is based on the analysis of geometrical structure, enhanced by passing the images of stromatolites through an edge-detection filter and computing the Fourier transform of the result, revealing typical spatial frequency patterns. The proposed analysis is performed on both simulated images of stromatolite structures and images of real stromatolites taken in the field by astrobiologists.
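A rough stand-in for the described analysis — edge-filtering an image and inspecting the spatial-frequency content of the edge map — is sketched below with a synthetic layered texture. It is not the authors' pipeline, and the radial averaging is only an assumed way of summarising "typical spatial frequency patterns".

```python
import numpy as np
from scipy import ndimage

def spatial_frequency_signature(image):
    """Edge-filter an image and return the radially averaged magnitude
    spectrum of the edge map (a rough proxy for characteristic spatial
    frequency patterns). Not the authors' exact method."""
    edges = np.hypot(ndimage.sobel(image, axis=0), ndimage.sobel(image, axis=1))
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(edges)))
    cy, cx = np.array(spectrum.shape) // 2
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), weights=spectrum.ravel()) / np.bincount(r.ravel())
    return radial                       # mean spectral power vs. spatial frequency

# Synthetic layered texture, loosely imitating laminated stromatolite structure.
yy, xx = np.mgrid[0:256, 0:256]
layered = np.sin(yy / 6.0) + 0.2 * np.random.default_rng(4).normal(size=(256, 256))
print(spatial_frequency_signature(layered)[:10])
```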
Abstract:
Studies of the optical properties and catalytic capabilities of noble metal nanoparticles (NPs), such as gold (Au) and silver (Ag), have formed the basis for the recent rapid expansion of the field of green photocatalysis: photocatalysis utilizing visible and ultraviolet light, a major part of the solar spectrum. The reason for this growth is the recognition that the localised surface plasmon resonance (LSPR) effect of Au NPs and Ag NPs can couple the light flux to the conduction electrons of the metal NPs, and the excited electrons and enhanced electric fields in close proximity to the NPs can contribute to converting solar energy to chemical energy through photon-driven photocatalytic reactions. Previously the LSPR effect of noble metal NPs was utilized almost exclusively to improve the performance of semiconductor photocatalysts (for example, TiO2 and Ag halides), but recently a conceptual breakthrough was made: studies of light-driven reactions catalysed by NPs of Au or Ag on photocatalytically inactive supports (insulating solids with a very wide band gap) have demonstrated that these materials are a class of efficient photocatalysts working by mechanisms distinct from those of semiconducting photocatalysts. There are several reasons for the significant photocatalytic activity of Au and Ag NPs. (1) The conduction electrons of the particles gain the irradiation energy, resulting in high-energy electrons at the NP surface, which is desirable for activating molecules on the particles for chemical reactions. (2) In such a photocatalysis system, both light harvesting and the catalytic reaction take place on the nanoparticle, so charge transfer between the NPs and the support is not a prerequisite. (3) The density of the conduction electrons at the NP surface is much higher than that at the surface of any semiconductor, and these electrons can drive the reactions on the catalysts. (4) The metal NPs have much better affinity than semiconductors for many reactants, especially organic molecules. Recent progress in photocatalysis using Au and Ag NPs on insulator supports is reviewed. We focus on the mechanistic differences between insulator-supported and semiconductor-supported Au and Ag NPs when applied in photocatalytic processes, and on the influence of important factors, light intensity and wavelength in particular, estimating the contribution of light irradiation by calculating the apparent activation energies of photoreactions and thermal reactions.
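For reference, the apparent activation energy mentioned at the end of the abstract is conventionally obtained from the temperature dependence of the reaction rate constant via the Arrhenius relation; a standard form (not specific to this review) is:

```latex
% Arrhenius relation and the apparent activation energy E_a
k = A\,e^{-E_a/(RT)}
\qquad\Longrightarrow\qquad
\ln k = \ln A - \frac{E_a}{R}\cdot\frac{1}{T},
\qquad
E_a = -R\,\frac{\mathrm{d}\,\ln k}{\mathrm{d}\,(1/T)}
```

Comparing the apparent activation energy measured under irradiation with that of the purely thermal reaction indicates how much of the rate enhancement is attributable to the light.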
Abstract:
This article aims to discuss the notion of moral progress in the theory of recognition. It argues that Axel Honneth's program offers sophisticated theoretical guidance to observe and critically interpret emancipatory projects in contemporary politics based on ideas of individuality and social inclusiveness. Using a case study – the investigation, through frame analysis, of transformations in the portrayal of people with impairment as well as in public discourses on the issue of disability in major Brazilian news media from 1960 to 2008 – this article addresses three controversies: the notion of progress as a directional process; the problem of moral disagreement and conflict of interest in struggles for recognition; and the processes of social learning. By articulating empirically based arguments and Honneth's normative discussions, this study concludes that one can talk about moral progress without losing sight of value pluralism and conflict of interest.
Abstract:
Purpose: To examine choroidal thickness (ChT) and its topographical variation across the posterior pole in myopic and non-myopic children. Methods: One hundred and four children aged 10-15 years (mean age 13.1 ± 1.4 years) had ChT measured using enhanced depth imaging optical coherence tomography (OCT). Forty-one children were myopic (mean spherical equivalent -2.4 ± 1.5 D) and 63 were non-myopic (mean +0.3 ± 0.3 D). Two series of six radial OCT line scans centred on the fovea were assessed for each child. Subfoveal ChT and ChT across a series of parafoveal zones over the central 6 mm of the posterior pole were determined through manual image segmentation. Results: Subfoveal ChT was significantly thinner in myopes (mean 303 ± 79 µm) compared to non-myopes (mean 359 ± 77 µm) (p<0.0001). Multiple regression analysis revealed that both refractive error (r = 0.39, p<0.001) and age (r = 0.21, p = 0.02) were positively associated with subfoveal ChT. ChT also exhibited significant topographical variations, with the choroid being thicker in more central regions. The thinnest choroid was typically observed in nasal (mean 286 ± 77 µm) and inferior-nasal (306 ± 79 µm) locations, and the thickest in superior (346 ± 79 µm) and superior-temporal (341 ± 74 µm) locations. The difference in ChT between myopic and non-myopic children was significantly greater in central foveal regions compared to more peripheral regions (>3 mm diameter) (p<0.001). Conclusions: Myopic children have significantly thinner choroids than non-myopic children of similar age, particularly in central foveal regions. The magnitude of the difference in choroidal thickness associated with myopia appears greater than would be predicted by simple passive choroidal thinning with axial elongation.
Abstract:
In the presented method, the combination of Fourier-domain and time-domain detection broadens the effective bandwidth for the time-dependent Doppler signal, which allows higher-order Bessel functions to be used to calculate the vibration amplitudes unambiguously.
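As background for the Bessel-function statement, the standard model (assumed here, with a round-trip reflection geometry) is a carrier phase-modulated by the sinusoidal target vibration; by the Jacobi-Anger expansion, its spectrum consists of harmonics weighted by Bessel functions of the modulation depth:

```latex
% Doppler signal for a target vibrating as x(t) = a_0 \sin(\omega_v t)
s(t) = A\cos\!\bigl(\omega_0 t + m\sin(\omega_v t)\bigr),
\qquad m = \frac{4\pi a_0}{\lambda}
% Jacobi-Anger expansion into Bessel-weighted harmonics
s(t) = A\sum_{n=-\infty}^{\infty} J_n(m)\,\cos\bigl((\omega_0 + n\,\omega_v)\,t\bigr)
```

Measuring the amplitudes of several harmonics constrains J_n(m) for several orders n, which is why access to higher-order Bessel terms over a broader bandwidth allows the vibration amplitude a_0 to be determined without ambiguity.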
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely done through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. The creative work titled “Contours in Motion” was the first in a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The source or ‘seed’ comes from using an Xsens MVN inertial motion capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. The aim of the creative work was to diverge from the standard practice of using particle systems and/or a simple re-targeting of the motion data to drive a 3D character as a means of producing abstracted visual forms. To facilitate this divergence, a virtual dynamic object was tethered to a selection of data points from a captured performance. The properties of the dynamic object were then adjusted to balance the influence of the human movement data with the influence of computer-based randomisation. The resulting outcome was a visual form that surpassed simple data visualisation to project the intent of the performer's movements into the visual shape itself. The reported outcomes from this investigation have contributed to a larger study on the use of motion capture in the generative arts, furthering the understanding of and generating theories on practice.
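Purely as an illustration of the tethering idea described above (not the actual “Contours in Motion” setup), a virtual object can be attached to a stream of capture points by a spring-damper and perturbed by a random force, so the output blends the performer's motion with computer-based randomisation; all parameter names and values below are assumptions.

```python
import numpy as np

def simulate_tethered_object(targets, k=20.0, damping=4.0, noise=0.5, dt=1/60):
    """Toy sketch: a virtual dynamic object is tethered by a spring-damper to
    a stream of motion-capture points, with an added random force blending the
    performer's movement and computer randomisation. Illustrative values only."""
    rng = np.random.default_rng(5)
    pos = targets[0].copy()
    vel = np.zeros(3)
    trace = []
    for target in targets:
        force = k * (target - pos) - damping * vel + noise * rng.normal(size=3)
        vel += force * dt
        pos += vel * dt
        trace.append(pos.copy())
    return np.array(trace)

# Fake "captured" trajectory standing in for a single mocap data point stream.
t = np.linspace(0, 4 * np.pi, 240)
mocap_points = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
print(simulate_tethered_object(mocap_points).shape)    # (240, 3)
```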
Abstract:
This paper describes a texture-recognition-based method for segmenting kelp from images collected in highly dynamic shallow-water environments by an Autonomous Underwater Vehicle (AUV). A particular challenge is image quality, which is affected by uncontrolled lighting, reduced visibility, significantly varying perspective due to platform egomotion, and kelp sway from wave action. The kelp segmentation approach uses the Mahalanobis distance to classify Haralick texture features from sub-regions within an image. The results illustrate the applicability of the method to classify kelp, allowing construction of probability maps of kelp masses across a sequence of images.
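The classification step described — Mahalanobis distance over Haralick texture features computed for image sub-regions — could be sketched as follows, assuming the texture features have already been extracted; the feature values, dimensions and class statistics below are synthetic.

```python
import numpy as np

def mahalanobis_classifier(kelp_features, background_features):
    """Fit per-class mean/covariance and return a function that labels a
    feature vector by the smaller Mahalanobis distance. The inputs are
    assumed to be precomputed Haralick (GLCM) statistics per sub-region;
    the feature extraction itself is omitted here."""
    stats = []
    for feats in (kelp_features, background_features):
        mu = feats.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(feats, rowvar=False))
        stats.append((mu, cov_inv))

    def classify(x):
        d = [np.sqrt((x - mu) @ cov_inv @ (x - mu)) for mu, cov_inv in stats]
        return "kelp" if d[0] < d[1] else "background"

    return classify

rng = np.random.default_rng(6)
kelp = rng.normal(loc=1.0, scale=0.3, size=(100, 5))         # 5 Haralick-style features
background = rng.normal(loc=0.0, scale=0.5, size=(100, 5))
classify = mahalanobis_classifier(kelp, background)
print(classify(np.full(5, 0.9)), classify(np.full(5, 0.1)))  # kelp background
```

Applying the classifier to every sub-region of a frame yields a per-region label map, which is the kind of input from which the probability maps mentioned above can be accumulated over an image sequence.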
Abstract:
How do you identify "good" teaching practice in the complexity of a real classroom? How do you know that beginning teachers can recognise effective digital pedagogy when they see it? How can teacher educators see through their students’ eyes? The study in this paper has arisen from our interest in what pre-service teachers “see” when observing effective classroom practice and how this might reveal their own technological, pedagogical and content knowledge. We asked 104 pre-service teachers from Early Years, Primary and Secondary cohorts to watch and comment upon selected exemplary videos of teachers using ICT (information and communication technologies) in Science. The pre-service teachers recorded their observations using a simple PMI (plus, minus, interesting) matrix, and these observations were then coded using the SOLO Taxonomy to look for evidence of their familiarity with and judgements of digital pedagogies. From this, we determined that the majority of pre-service teachers we surveyed were using a descriptive rather than a reflective strategy, that is, not extending beyond what was demonstrated in the teaching exemplar or differentiating between action and purpose. We also determined that this method warrants wider trialling as a means of evaluating students’ understandings of the complexity of the digital classroom.
Abstract:
Background: Recent evidence indicates that gene variants related to carotenoid metabolism play a role in the uptake of the macular pigments lutein (L) and zeaxanthin (Z). Moreover, these pigments are proposed to reduce the risk of advanced age-related macular degeneration (AMD). This study provides an initial examination of the relationship between gene variants related to carotenoid metabolism, macular pigment optical density (MPOD) and their combined expression in healthy humans and patients with AMD. Participants and Methods: Forty-four participants were enrolled from a general population and a private practice, including 20 healthy participants and 24 patients with advanced (neovascular) AMD. Participants were genotyped for the three single nucleotide polymorphisms (SNPs) upstream of BCMO1, rs11645428, rs6420424 and rs6564851, which have been shown to either up- or down-regulate beta-carotene conversion efficiency in the plasma. MPOD was determined by heterochromatic flicker photometry. Results: Healthy participants with the rs11645428 GG genotype, rs6420424 AA genotype and rs6564851 GG genotype all had, on average, significantly lower MPOD compared to those with the other genotypes (p < 0.01 for all three comparisons). When combining BCMO1 genotypes reported to have “high” (rs11645428 AA/rs6420424 GG/rs6564851 TT) and “low” (rs11645428 GG/rs6420424 AA/rs6564851 GG) beta-carotene conversion efficiency, we demonstrate clear differences in MPOD values (p < 0.01). In patients with AMD there were no significant differences in MPOD for any of the three BCMO1 gene variants. Conclusion: In healthy participants, MPOD levels can be related to high and low beta-carotene conversion BCMO1 genotypes. Such relationships were not found in patients with advanced neovascular AMD, indicative of additional processes influencing carotenoid uptake, possibly related to other AMD susceptibility genes. Our findings indicate that specific BCMO1 SNPs should be determined when assessing the effects of carotenoid supplementation on macular pigment, and that their expression may be influenced by retinal disease.