999 results for virtual banking


Relevance:

20.00%

Publisher:

Abstract:

Co-CreativePen Toolkit is a pen-based 3D toolkit for children to cooperatively design virtual environments. The toolkit is used to construct applications involving distributed pen-based 3D interaction, in which sketching is encapsulated as a set of interaction techniques. Children can use the pen to construct 3D and IBR (image-based rendering) objects, navigate the virtual world, select and manipulate virtual objects, and communicate with other children: a child can select another child in the virtual world with the pen and write a message to the selected child. The distributed architecture of Co-CreativePen Toolkit is based on CORBA. A common scene graph is managed on the server, with a copy of this graph managed in every client; every change to the scene graph in a client triggers the corresponding change on the server and in the other clients.
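The abstract describes a server-mediated replication scheme: a common scene graph on the server, a copy on each client, and every client-side change propagated through the server to the other clients. A minimal Python sketch of that replication idea is given below; the classes and method names are hypothetical, and the real toolkit communicates through CORBA remote objects rather than in-process calls.

```python
# Minimal sketch of server-mediated scene-graph replication, assuming a
# simplified model: a client applies a change locally, sends it to the server,
# and the server applies it to its master graph and rebroadcasts it to every
# other client. (Hypothetical classes; the toolkit itself uses CORBA.)

class SceneGraph:
    def __init__(self):
        self.nodes = {}                      # node_id -> properties dict

    def apply(self, change):
        node_id, props = change
        self.nodes.setdefault(node_id, {}).update(props)


class Server:
    def __init__(self):
        self.master = SceneGraph()           # the common scene graph
        self.clients = []

    def register(self, client):
        self.clients.append(client)

    def submit(self, sender, change):
        self.master.apply(change)            # update the common scene graph
        for client in self.clients:
            if client is not sender:         # echo to every other client
                client.receive(change)


class Client:
    def __init__(self, server, name):
        self.name = name
        self.copy = SceneGraph()             # local copy of the scene graph
        self.server = server
        server.register(self)

    def edit(self, change):
        self.copy.apply(change)              # apply locally, then publish
        self.server.submit(self, change)

    def receive(self, change):
        self.copy.apply(change)              # stay consistent with the server


server = Server()
alice, bob = Client(server, "alice"), Client(server, "bob")
alice.edit(("cube1", {"color": "red"}))
print(bob.copy.nodes)                        # {'cube1': {'color': 'red'}}
```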

Relevance:

20.00%

Publisher:

Abstract:

An important characteristic of virtual assembly is interaction. Traditional direct manipulation in virtual assembly relies on dynamic collision detection, which is very time-consuming and can be impractical in a desktop virtual assembly environment. Feature matching is a critical process in harmonious virtual assembly and is the premise of assembly-constraint sensing. This paper puts forward an active-object-based feature-matching perception mechanism and a feature-matching interactive computing process, both of which free direct manipulation in virtual assembly from collision detection. They also help the virtual environment understand user intention and improve interaction performance. Experimental results show that this perception mechanism enables users to achieve real-time direct manipulation in a desktop virtual environment.
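As a rough illustration of replacing per-frame collision detection with feature matching, the sketch below pairs the mating features of the actively manipulated part with nearby, type-compatible features of other parts. The feature representation, compatibility table, and distance threshold are assumptions for the example, not the paper's actual perception mechanism.

```python
# Rough sketch: match mating features of the manipulated part against features
# of the other parts by type compatibility and proximity, instead of running
# collision detection every frame. Representation and threshold are assumed.
import math

def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def match_features(active_features, scene_features, threshold=0.05):
    """Return (active, candidate) feature pairs that are compatible and close."""
    matches = []
    for af in active_features:
        for sf in scene_features:
            compatible = (af["type"], sf["type"]) in {("hole", "shaft"),
                                                      ("shaft", "hole"),
                                                      ("plane", "plane")}
            if compatible and distance(af["position"], sf["position"]) < threshold:
                matches.append((af["name"], sf["name"]))
    return matches

peg = [{"name": "peg_axis", "type": "shaft", "position": (0.00, 0.01, 0.00)}]
base = [{"name": "hole_axis", "type": "hole", "position": (0.00, 0.00, 0.00)}]
print(match_features(peg, base))   # [('peg_axis', 'hole_axis')]
```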

Relevance:

20.00%

Publisher:

Abstract:

Semisupervised dimensionality reduction has attracted much attention because it uses labeled and unlabeled data simultaneously and also handles out-of-sample data well. This paper proposes an effective approach to semisupervised dimensionality reduction through label propagation and label regression. Unlike previous efforts, the new approach propagates label information from labeled to unlabeled data with a well-designed random-walk mechanism, in which outliers are effectively detected and the resulting virtual labels of the unlabeled data can be well encoded in a weighted regression model. These virtual labels are then regressed with a linear model to calculate the projection matrix for dimensionality reduction. By this means, when the manifold or clustering assumption of the data is satisfied, the labels of the labeled data can be correctly propagated to the unlabeled data, so the proposed approach uses the labeled and unlabeled data more effectively than previous work. Experiments are carried out on several databases, and the results demonstrate the advantage of the new approach.
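A compact numpy sketch of the two-stage idea described above follows: labels are propagated over a random-walk transition matrix built from a kNN affinity graph, and the resulting virtual labels are regressed with a linear model to obtain a projection. It is written under simplifying assumptions (a fixed number of propagation steps, ridge regression in place of the paper's weighted regression, no explicit outlier detection), so it illustrates the structure of the approach rather than reproducing it.

```python
# Sketch of label propagation followed by label regression, under simplifying
# assumptions: a kNN affinity graph, a fixed number of random-walk steps, and
# ridge regression in place of the paper's weighted regression model.
import numpy as np

def propagate_and_project(X, y, labeled_mask, k=10, steps=50, lam=1e-2):
    n, d = X.shape
    classes = np.unique(y[labeled_mask])
    # Row-stochastic transition matrix from a kNN affinity graph.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.exp(-dist ** 2 / (2 * np.median(dist) ** 2))
    far = np.argsort(dist, axis=1)[:, k + 1:]        # drop all but k nearest
    np.put_along_axis(W, far, 0.0, axis=1)
    P = W / W.sum(axis=1, keepdims=True)
    # One-hot labels for labeled points; zeros ("virtual" labels) elsewhere.
    F = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        F[labeled_mask & (y == c), j] = 1.0
    F0 = F.copy()
    for _ in range(steps):                           # random-walk propagation,
        F = P @ F                                    # clamping labeled points
        F[labeled_mask] = F0[labeled_mask]           # back to their labels
    # Label regression: a linear map X -> virtual labels gives the projection.
    A = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ F)
    return X @ A                                     # low-dimensional embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = (X[:, 0] > 0).astype(int)
mask = np.zeros(60, dtype=bool)
mask[:10] = True
print(propagate_and_project(X, y, mask).shape)       # (60, 2)
```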

Relevance:

20.00%

Publisher:

Abstract:

In this note, I propose two extensions to the Java virtual machine (or VM) to allow dynamic languages such as Dylan, Scheme and Smalltalk to be efficiently implemented on the VM. These extensions do not affect the performance of pure Java programs on the machine. The first extension allows for efficient encoding of dynamic data; the second allows for efficient encoding of language-specific computational elements.

Relevance:

20.00%

Publisher:

Abstract:

Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull": an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer the virtual visual hull of a novel single-view input silhouette, we search the database for 3D shapes that are most consistent with the observed contour. The input is matched to the component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
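When a sequence is available, the abstract describes resolving single-view ambiguity by finding the maximum likelihood path through the per-frame hypotheses with dynamic programming. The following is a generic Viterbi-style sketch of that step; the per-hypothesis scores and the transition cost are placeholders, not the paper's actual likelihood model.

```python
# Generic Viterbi-style dynamic program over per-frame shape hypotheses:
# choose one hypothesis per frame so as to maximize the sum of per-hypothesis
# match scores minus a smoothness penalty between consecutive frames.
# Scores and transition cost here are placeholders, not the paper's model.

def best_hypothesis_path(scores, transition_cost):
    """scores[t][i]: match score of hypothesis i at frame t.
    transition_cost(i, j): penalty for switching from hypothesis i to j."""
    T = len(scores)
    best = list(scores[0])                       # best score of a path ending at (0, i)
    back = [[None] * len(scores[0])]
    for t in range(1, T):
        new_best, back_t = [], []
        for j, s in enumerate(scores[t]):
            cands = [best[i] - transition_cost(i, j) for i in range(len(best))]
            i_star = max(range(len(cands)), key=cands.__getitem__)
            new_best.append(cands[i_star] + s)
            back_t.append(i_star)
        best, back = new_best, back + [back_t]
    # Trace back the maximum-likelihood path.
    path = [max(range(len(best)), key=best.__getitem__)]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

scores = [[0.9, 0.2], [0.4, 0.8], [0.7, 0.6]]    # toy per-frame hypothesis scores
print(best_hypothesis_path(scores, lambda i, j: 0.3 if i != j else 0.0))
```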

Relevance:

20.00%

Publisher:

Abstract:

The problem of automatic face recognition is to visually identify a person in an input image. This task is performed by matching the input face against the faces of known people in a database of faces. Most existing work in face recognition has limited the scope of the problem, however, by dealing primarily with frontal views, neutral expressions, and fixed lighting conditions. To help generalize existing face recognition systems, we look at the problem of recognizing faces under a range of viewpoints. In particular, we consider two cases of this problem: (i) many example views are available of each person, and (ii) only one view is available per person, perhaps a driver's license or passport photograph. Ideally, we would like to address these two cases using a simple view-based approach, where a person is represented in the database by using a number of views on the viewing sphere. While the view-based approach is consistent with case (i), for case (ii) we need to augment the single real view of each person with synthetic views from other viewpoints, views we call 'virtual views'. Virtual views are generated using prior knowledge of face rotation, knowledge that is 'learned' from images of prototype faces. This prior knowledge is used to effectively rotate in depth the single real view available of each person. In this thesis, I present the view-based face recognizer, techniques for synthesizing virtual views, and experimental results using real and virtual views in the recognizer.
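The core of the view-based recognizer described above is that each person is represented by several views, whether real or synthesized "virtual" views, and an input face is assigned to the person whose closest stored view matches best. The sketch below illustrates only that matching step; the raw-vector features and Euclidean distance are simplifying assumptions, not the thesis's actual representation.

```python
# Sketch of the view-based matching idea: each person is represented by several
# views (real or synthesized "virtual" views), and an input face is assigned to
# the person whose closest stored view matches best. Feature representation
# (raw vectors) and distance (Euclidean) are simplifying assumptions.
import numpy as np

def recognize(input_face, gallery):
    """gallery: dict mapping person -> list of view feature vectors."""
    best_person, best_dist = None, float("inf")
    for person, views in gallery.items():
        d = min(np.linalg.norm(input_face - v) for v in views)
        if d < best_dist:
            best_person, best_dist = person, d
    return best_person, best_dist

rng = np.random.default_rng(1)
gallery = {"anna": [rng.normal(size=64) for _ in range(5)],
           "bruno": [rng.normal(size=64) for _ in range(5)]}
probe = gallery["anna"][0] + 0.1 * rng.normal(size=64)   # noisy view of anna
print(recognize(probe, gallery)[0])                      # 'anna'
```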

Relevance:

20.00%

Publisher:

Abstract:

The Diagnose Virtual system is a Web-based virtual environment for diagnosing plant diseases and animal illnesses that applies inference (investigation) mechanisms to previously categorized expert knowledge. The purpose of this document is to guide the user of the Diagnose Virtual system through the procedure for using it, so that correct results are obtained with the least effort. The system also provides online help: a brief description of each system feature is shown whenever the mouse pointer rests over that feature for a moment. Additional help is available on each screen by clicking the question-mark symbol in the lower right corner. The document covers the user/grower module, in which the characteristics of a problem (a case) for a given crop are explored until a diagnosis is reached. As results, the possible disorders are presented with their respective degrees of certainty.
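As an illustration of the kind of inference such a system performs, the sketch below scores candidate disorders by how well the observed symptoms match expert-defined rules and reports a simple degree of certainty. The rules, symptoms, and weights are invented for the example; they are not taken from the Diagnose Virtual knowledge base.

```python
# Illustrative sketch of symptom-based diagnosis with degrees of certainty.
# The rules and weights below are invented for the example; they are not taken
# from the Diagnose Virtual knowledge base.

KNOWLEDGE_BASE = {
    "leaf rust": {"yellow spots": 0.9, "orange pustules": 0.95, "wilting": 0.3},
    "root rot":  {"wilting": 0.8, "dark roots": 0.9, "stunted growth": 0.6},
}

def diagnose(observed_symptoms):
    """Return candidate disorders sorted by a simple certainty score:
    the summed weight of matched symptoms, normalized by the rule size."""
    results = []
    for disorder, symptom_weights in KNOWLEDGE_BASE.items():
        hits = [w for s, w in symptom_weights.items() if s in observed_symptoms]
        if hits:
            results.append((disorder, round(sum(hits) / len(symptom_weights), 2)))
    return sorted(results, key=lambda r: r[1], reverse=True)

print(diagnose({"yellow spots", "wilting"}))
# [('leaf rust', 0.4), ('root rot', 0.27)]
```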

Relevance:

20.00%

Publisher:

Abstract:

The Diagnose Virtual system is a Web-based virtual environment for diagnosing plant diseases and animal illnesses that uses inference mechanisms based on expert knowledge to simulate the diagnostic process. The purpose of this document is to guide the user of the Diagnose Virtual system through the procedure for using it, so that correct results are obtained with the least effort.

Relevance:

20.00%

Publisher:

Abstract:

Urquhart, C., Spink, S., Thomas, R., Yeoman, A., Durbin, J., Turner, J., Fenton, R. & Armstrong, C. (2004). Evaluating the development of virtual learning environments in higher and further education. In J. Cook (Ed.), Blue skies and pragmatism: learning technologies for the next decade. Research proceedings of the 11th Association for Learning Technology conference (ALT-C 2004), 14-16 September 2004, University of Exeter, Devon, England (pp. 157-169). Oxford: Association for Learning Technology. Sponsorship: JISC

Relevance:

20.00%

Publisher:

Abstract:

Yeoman, A., Urquhart, C. & Sharp, S. (2003). Moving Communities of Practice forward: the challenge for the National electronic Library for Health and its Virtual Branch Libraries. Health Informatics Journal, 9(4), 241-252. Previously appeared as a conference paper for the iSHIMR 2003 conference (Proceedings of the Eighth International Symposium on Health Information Management Research, June 1-3, 2003, Borås, Sweden). Sponsorship: NHS Information Authority/National electronic Library for Health

Relevance:

20.00%

Publisher:

Abstract:

This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Agents are increasingly used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement, so it should only be implemented routinely if a clear benefit can be shown. Previous methods of assessing the effect of gaze cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues in human-computer interfaces, measuring the efficiency of agent cues by analyzing participant responses via gaze and via touch, respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface: when user attention was directed using a fully animated agent cue, users responded 35% faster than with a stepped two-image agent cue and 42% faster than with a static one-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of the touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, with slightly smaller differences between conditions. Responses to the fully animated agent were 17% and 20% faster than to the two-image and one-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.

Relevance:

20.00%

Publisher:

Abstract:

IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 5, pp. 1338-1343, 2003.