995 results for virtual sensing
Abstract:
Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard, and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull" -- an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer a novel single-view input silhouette's virtual visual hull, we search for 3D shapes in the database that are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
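The temporal disambiguation step described above is amenable to a standard Viterbi-style dynamic program. The sketch below only illustrates that idea under assumed inputs (per-frame hypothesis log-likelihoods and pairwise shape-transition scores); the function name and data layout are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: given several visual-hull hypotheses per frame, pick the
# maximum-likelihood path through them with dynamic programming (Viterbi-style).
import numpy as np

def best_hypothesis_path(unary_scores, transition_scores):
    """unary_scores: list of length T, each a (H_t,) array of log-likelihoods
    of each hypothesis given that frame's silhouette.
    transition_scores: list of length T-1, each a (H_t, H_{t+1}) array of
    log-probabilities that one 3D shape plausibly follows another."""
    T = len(unary_scores)
    best = [unary_scores[0]]
    back = []
    for t in range(1, T):
        # score of reaching each hypothesis at frame t via its best predecessor
        total = best[-1][:, None] + transition_scores[t - 1] + unary_scores[t][None, :]
        back.append(np.argmax(total, axis=0))
        best.append(np.max(total, axis=0))
    # backtrack the maximum-likelihood sequence of hypotheses
    path = [int(np.argmax(best[-1]))]
    for t in range(T - 2, -1, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```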
Abstract:
The problem of automatic face recognition is to visually identify a person in an input image. This task is performed by matching the input face against the faces of known people in a database of faces. Most existing work in face recognition has limited the scope of the problem, however, by dealing primarily with frontal views, neutral expressions, and fixed lighting conditions. To help generalize existing face recognition systems, we look at the problem of recognizing faces under a range of viewpoints. In particular, we consider two cases of this problem: (i) many example views are available of each person, and (ii) only one view is available per person, perhaps a driver's license or passport photograph. Ideally, we would like to address these two cases using a simple view-based approach, where a person is represented in the database by using a number of views on the viewing sphere. While the view-based approach is consistent with case (i), for case (ii) we need to augment the single real view of each person with synthetic views from other viewpoints, views we call 'virtual views'. Virtual views are generated using prior knowledge of face rotation, knowledge that is 'learned' from images of prototype faces. This prior knowledge is used to effectively rotate in depth the single real view available of each person. In this thesis, I present the view-based face recognizer, techniques for synthesizing virtual views, and experimental results using real and virtual views in the recognizer.
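As a toy illustration of the view-based matching described above (not the thesis implementation), each known person can be stored as a small set of feature vectors, one per real or virtual view, and a probe face assigned to the person owning the nearest view. The feature representation and the Euclidean distance used here are assumptions.

```python
# Minimal sketch of view-based recognition: nearest view over a gallery of
# real and synthesized ("virtual") views per person. Names are illustrative.
import numpy as np

def recognize(probe, gallery):
    """probe: 1-D feature vector of the input face image.
    gallery: dict mapping person id -> array of shape (n_views, d),
    one row per (real or virtual) view of that person."""
    best_id, best_dist = None, np.inf
    for person, views in gallery.items():
        d = np.linalg.norm(views - probe[None, :], axis=1).min()
        if d < best_dist:
            best_id, best_dist = person, d
    return best_id, best_dist
```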
Abstract:
The Diagnose Virtual system is a web-based virtual environment for diagnosing plant diseases and animal disorders that uses inference (investigation) mechanisms applied to previously categorized expert knowledge. This document aims to guide the user of the Diagnose Virtual system through the procedure for its use, so as to obtain correct results with the least effort. The system also provides online help, in which each feature of the system is briefly described whenever the mouse pointer rests for a moment over that feature. Further help is available on each screen by clicking the question-mark symbol in the lower right corner. The document covers the user/producer module, in which the characteristics of a problem (a case) for a given crop are explored until a diagnosis is reached. As results, the possible disorders are returned with their respective degrees of certainty.
Abstract:
The Diagnose Virtual system is a web-based virtual environment for diagnosing plant diseases and animal disorders that uses inference mechanisms based on expert knowledge to simulate the diagnostic process. This document aims to guide the user of the Diagnose Virtual system through the procedure for its use, so as to obtain correct results with the least effort.
Colorimetric and ratiometric fluorescence sensing of fluoride: Tuning selectivity in proton transfer
Abstract:
[Graphical abstract only]
Abstract:
No impact factor (2012)
Abstract:
Lee M.H., "Tactile Sensing: new directions, new challenges", Int. J. Robotics Research, 19(7), 636-643, July 2000.
Abstract:
Lee M.H. and Nicholls H.R., Tactile Sensing for Mechatronics: A State of the Art Survey, Mechatronics, 9, Jan 1999, pp. 1-31.
Abstract:
Urquhart, C., Spink, S., Thomas, R., Yeoman, A., Durbin, J., Turner, J., Fenton, R. & Armstrong, C. (2004). Evaluating the development of virtual learning environments in higher and further education. In J. Cook (Ed.), Blue skies and pragmatism: learning technologies for the next decade. Research proceedings of the 11th Association for Learning Technology conference (ALT-C 2004), 14-16 September 2004, University of Exeter, Devon, England (pp. 157-169). Oxford: Association for Learning Technology. Sponsorship: JISC
Abstract:
Yeoman, A., Urquhart, C. & Sharp, S. (2003). Moving Communities of Practice forward: the challenge for the National electronic Library for Health and its Virtual Branch Libraries. Health Informatics Journal, 9(4), 241-252. Previously appeared as a conference paper for the iSHIMR2003 conference (Proceedings of the Eighth International Symposium on Health Information Management Research, June 1-3, 2003, Boras, Sweden). Sponsorship: NHS Information Authority/National electronic Library for Health
Abstract:
This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues, analyzing participant responses by gaze and by touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped two-image agent cues and 42% faster than with a static one-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of the touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than to the two-image and one-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
Abstract:
IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 5, pp. 1338-1343, 2003.
Abstract:
Grande, Manuel; Dunkin, S. K.; Kellett, B., 'Opportunities for X-ray remote sensing at Mercury', Planetary and Space Science (2001) 49(14-15), pp. 1553-1559. RAE2008
Abstract:
Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences.
Abstract:
This paper presents a new approach to window-constrained scheduling, suitable for multimedia and weakly-hard real-time systems. We originally developed an algorithm, called Dynamic Window-Constrained Scheduling (DWCS), that attempts to guarantee that no more than x out of y deadlines are missed for real-time jobs such as periodic CPU tasks or delay-constrained packet streams. While DWCS is capable of generating a feasible window-constrained schedule that utilizes 100% of resources, it requires all jobs to have the same request periods (or intervals between successive service requests). We describe a new algorithm called Virtual Deadline Scheduling (VDS), which provides window-constrained service guarantees to jobs with potentially different request periods while still maximizing resource utilization. VDS attempts to service m out of k job instances by their virtual deadlines, which may be some finite time after the corresponding real-time deadlines. Even so, VDS is capable of outperforming DWCS and similar algorithms when servicing jobs with potentially different request periods. Additionally, VDS is able to limit the extent to which a fraction of all job instances are serviced late. Results from simulations show that VDS can provide better window-constrained service guarantees than other related algorithms, while still having as good or better delay bounds for all scheduled jobs. Finally, an implementation of VDS in the Linux kernel compares favorably against DWCS for a range of scheduling loads.
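As a rough, hypothetical illustration of scheduling by virtual deadlines under m-out-of-k window constraints with differing request periods (the actual VDS update rules are not given in this abstract), the sketch below picks the pending job with the earliest virtual deadline and then defers that job's deadline; the deadline-update rule is a simple stand-in, not the published VDS equations.

```python
# Illustrative sketch only: earliest-virtual-deadline selection for jobs that
# each need m of every k instances served; all names and rules are assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    period: float              # request period; may differ across jobs
    m: int                     # required services ...
    k: int                     # ... per window of k instances
    virtual_deadline: float = 0.0
    served_in_window: int = 0
    released_in_window: int = 0

def schedule_step(jobs, now):
    """Pick the job with the earliest virtual deadline and update its state."""
    job = min(jobs, key=lambda j: j.virtual_deadline)
    job.served_in_window += 1
    job.released_in_window += 1
    # Stand-in update: defer the next virtual deadline by one period; real VDS
    # adjusts this by the remaining m-out-of-k slack in the current window.
    job.virtual_deadline = now + job.period
    if job.released_in_window >= job.k:   # window finished: reset the counters
        job.served_in_window = 0
        job.released_in_window = 0
    return job
```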