882 results for Interactive Displays
Abstract:
In recent years, the well-known ray tracing algorithm has gained new popularity with the introduction of interactive ray tracing methods. Its high modularity and ability to produce highly realistic images make ray tracing an attractive alternative to raster graphics hardware. Interactive ray tracing has also proved its potential in the field of Mixed Reality rendering, providing novel methods for the seamless integration of real and virtual content. Actor insertion methods, a subdomain of Mixed Reality closely related to virtual television studio techniques, can use ray tracing to achieve high output quality with appropriate visual cues such as shadows and reflections at interactive frame rates. In this paper, we show how interactive ray tracing techniques can provide new ways of implementing virtual studio applications.
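At the heart of any such renderer is the intersection test between a ray and the scene geometry. As a purely illustrative sketch (not code from the paper; all names are invented for this example), the classic ray-sphere intersection can be written as:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest intersection of a ray with a
    sphere, or None if the ray misses. `direction` is assumed normalized."""
    # Vector from the sphere center to the ray origin
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic coefficient a == 1 for a unit direction
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray from the origin along +z toward a unit sphere centered at (0, 0, 5)
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Interactive ray tracers perform millions of such tests per frame, which is why acceleration structures and parallel hardware are essential at interactive rates.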
Abstract:
Good cooperation between farrier, veterinarian, and horse owner is an important prerequisite for optimal support of the horse with regard to shoeing and hoof health. The introduction of a joint educational aid aims to improve the level of education of both veterinarians and farriers. The interactive, multimedia approach, delivered predominantly through images and videos, represents an innovative dimension in instruction techniques. The contents of the new teaching aid will focus on detailed anatomy of the foot and distal limb, as well as currently accepted shoeing practices and techniques and pathologic conditions of the hoof and foot.
Abstract:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics into TV scenes is common practice, but these elements cannot be considered an integrated part of the scene. This paper discusses the introduction of new services for interactive augmented television. We analyse the main aspects related to the whole chain of augmented reality production. Interactivity is one of the most important added values of digital television: this paper aims to break the model in which all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e., real-time composition of video and computer graphics - e.g. a real scene and freely selectable images or spatially rendered objects - edited and customized by the end user within the context of the user's set-top box and TV receiver.
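The real-time composition of video and computer graphics described above ultimately reduces to per-pixel alpha blending of the rendered graphics over the broadcast frame. A minimal illustrative sketch, assuming 8-bit RGB pixels and a scalar alpha (the function and its interface are inventions for this example, not the paper's method):

```python
def over(fg, bg, alpha):
    """Alpha-blend a foreground pixel over a background pixel.
    fg, bg are (r, g, b) tuples in 0..255; alpha is in 0..1."""
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

# A half-transparent white graphic over a black video pixel
print(over((255, 255, 255), (0, 0, 0), 0.5))  # (128, 128, 128)
```

In a set-top box this blend would run per pixel on every frame, with the graphics layer selected and positioned by the viewer.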
Abstract:
The central question of this paper is how to improve the production process by closing the gap between industrial designers and software engineers of television (TV)-based User Interfaces (UIs) in an industrial environment. Software engineers are highly interested in whether one UI design can be converted into several fully functional UIs for TV products with different screen properties. Their aim is to apply automatic layout and scaling in order to speed up and improve the production process. However, the question is whether a UI design lends itself to such automatic layout and scaling. This is investigated by analysing a prototype UI design created by industrial designers. In a first requirements study, industrial designers created meta-annotations on top of their UI design in order to disclose their design rationale for discussions with software engineers. In a second study, five (out of ten) industrial designers assessed the potential of four different meta-annotation approaches. The question was which annotation method industrial designers would prefer and whether it could satisfy the technical requirements of the software engineering process. One main result is that the industrial designers preferred the method they were already familiar with, which therefore seems to be the most effective one, although the main objective of automatic layout and scaling could still not be achieved.
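In its simplest form, automatic scaling of one UI design across screens with different resolutions is a proportional rescaling of each element's bounding box. The following sketch is hypothetical (the rectangle format and screen sizes are assumptions, not the authors' approach), and illustrates why plain scaling alone often falls short of a designer's intent:

```python
def scale_rect(rect, src, dst):
    """Proportionally rescale a UI rectangle (x, y, w, h) designed for a
    src (width, height) screen onto a dst (width, height) screen."""
    sx, sy = dst[0] / src[0], dst[1] / src[1]
    x, y, w, h = rect
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# An element from a 1280x720 design placed on a 1920x1080 screen
print(scale_rect((100, 100, 200, 50), (1280, 720), (1920, 1080)))  # (150, 150, 300, 75)
```

Non-uniform aspect ratios, minimum font sizes, and safe margins are exactly the kinds of constraints the meta-annotations in the study were meant to capture.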
Abstract:
Neurons in Action (NIA1, 2000; NIA1.5, 2004; NIA2, 2007), a set of tutorials and linked simulations, is designed to acquaint students with neuronal physiology through interactive, virtual laboratory experiments. Here we explore the uses of NIA in lecture, both interactive and didactic, as well as in the undergraduate laboratory, in the graduate seminar course, and as an examination tool through homework and problem set assignments. NIA, made with the simulator NEURON (http://www.neuron.yale.edu/neuron/), displays voltages, currents, and conductances in a membrane patch or signals moving within the dendrites, soma, and/or axon of a neuron. Customized simulations start with the plain lipid bilayer and progress through equilibrium potentials; currents through single Na and K channels; Na and Ca action potentials; voltage clamp of a patch or a whole neuron; voltage spread and propagation in axons, motoneurons, and nerve terminals; synaptic excitation and inhibition; and advanced topics such as channel kinetics and coincidence detection. The user asks and answers "what if" questions by specifying neuronal parameters, ion concentrations, and temperature, and the experimental results are then plotted as conductances, currents, and voltage changes. Such exercises provide immediate confirmation or refutation of students' ideas, guiding their learning. The tutorials are hyperlinked to explanatory information and to original research papers. Although the NIA tutorials were designed as a sequence to empower a student with a working knowledge of fundamental neuronal principles, we find that faculty are using the individual tutorials in a variety of educational situations, some of which are described here. We offer these ideas to colleagues using interactive software, whether NIA or another tool, for educating students of differing backgrounds in the subject of neurophysiology.
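The equilibrium potentials with which such tutorials typically begin follow directly from the Nernst equation. As a short illustrative sketch (not code from NIA; the potassium concentrations below are textbook squid-axon values used only as an example):

```python
import math

def nernst(conc_out, conc_in, valence=1, temp_c=20.0):
    """Nernst equilibrium potential in millivolts.
    conc_out / conc_in: ion concentrations outside / inside (same units)."""
    R, F = 8.314, 96485.0        # gas constant (J/(mol K)), Faraday constant (C/mol)
    T = temp_c + 273.15          # absolute temperature in kelvin
    return 1000.0 * (R * T) / (valence * F) * math.log(conc_out / conc_in)

# Example: K+ at 20 mM outside, 400 mM inside, 20 degrees C
print(round(nernst(20.0, 400.0), 1))  # about -75.7 mV
```

Varying the concentrations or temperature and replotting the result is exactly the kind of "what if" experiment the tutorials encourage.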
Abstract:
ModelDB's mission is to link computational models and publications, supporting the field of computational neuroscience (CNS) by making model source code readily available. It is continually expanding, and currently contains source code for more than 300 models that cover more than 41 topics. Investigators, educators, and students can use it to obtain working models that reproduce published results and can be modified to test for new domains of applicability. Users can browse ModelDB to survey the field of computational neuroscience, or pursue more focused explorations of specific topics. Here we describe tutorials and initial experiences with ModelDB as an interactive educational tool.
Abstract:
What does it mean for a curriculum to be interactive? It encourages student engagement and active participation in both individual and group work. It offers teachers a coherent set of materials to choose from that can enhance their classes. It is the product of ongoing development and continuous improvement based on research and feedback from the field. This paper will introduce work in progress from the Center for Excellence in Education, Science, and Technology (CELEST), an NSF Science of Learning Center. Among its many goals, CELEST is developing a unique educational curriculum: an interactive curriculum based upon models of mind and brain. Teachers, administrators, and governments are naturally concerned with how students learn. Students are greatly concerned with how minds work, including how to learn. CELEST aims to introduce curricula that not only meet current U.S. standards in mathematics, science, and psychology but also influence plans to improve those standards. Software and support materials are in development and available at http://cns.bu.edu/celest/private/. Interested parties are invited to contact the author for access.
Abstract:
Understanding the functioning of brains is an extremely challenging endeavour, for researchers and students alike. Interactive media and tools, such as simulations, databases, visualizations, and virtual laboratories, have proved indispensable not only in research but also in education, where they help students understand brain function. Accordingly, a wide range of such media and tools are now available, and it is becoming increasingly difficult to maintain an overall picture. Written by researchers, tool developers, and experienced academic teachers, this special issue of Brains, Minds & Media covers a broad range of interactive research media and tools with a strong emphasis on their use in neural and cognitive sciences education. The focus lies not only on the tools themselves, but also on the question of how research tools can significantly enhance learning and teaching and how curricular integration can be achieved. This collection gives a comprehensive overview of existing tools and their usage, as well as the underlying educational ideas, and thus provides an orientation guide not only for teaching researchers but also for interested teachers and students.
Abstract:
BrainMaps.org is an interactive high-resolution digital brain atlas and virtual microscope that is based on over 20 million megapixels of scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet. Complete brain datasets for various species, including Homo sapiens, Macaca mulatta, Chlorocebus aethiops, Felis catus, Mus musculus, Rattus norvegicus, and Tyto alba, are accessible online. The methods and tools we describe are useful for both research and teaching, and can be replicated by labs seeking to increase accessibility and sharing of neuroanatomical data. These tools offer the possibility of visualizing and exploring completely digitized sections of brains at a sub-neuronal level, and can facilitate large-scale connectional tracing, histochemical and stereological analyses.
Abstract:
Having to carry input devices can be inconvenient when interacting with wall-sized, high-resolution tiled displays. Such displays are typically driven by a cluster of computers. Running existing games on a cluster is non-trivial, and the performance attained using software solutions like Chromium is not good enough. This paper presents a touch-free, multi-user, human-computer interface for wall-sized displays that enables completely device-free interaction. The interface is built using 16 cameras and a cluster of computers, and is integrated with the games Quake 3 Arena (Q3A) and Homeworld. The two games were parallelized using two different approaches in order to run on a 7x4 tile, 21 megapixel display wall with good performance. The touch-free interface enables interaction with a latency of 116 ms, of which 81 ms are due to the camera hardware. The rendering performance of the games is compared to that of their sequential counterparts running on the display wall using Chromium. Parallel Q3A's framerate is an order of magnitude higher than with Chromium. The parallel version of Homeworld performed on par with the sequential version, which did not run at all using Chromium. Informal use of the touch-free interface indicates that it works better for controlling Q3A than Homeworld.
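Driving a tiled wall from a cluster requires mapping each global pixel to the tile (and thus the node) that owns it. A minimal sketch, assuming 1024x768 tiles in a 7x4 layout (the per-tile resolution is an assumption consistent with the roughly 21 megapixel total, not a figure from the paper):

```python
def tile_for_pixel(x, y, tile_w=1024, tile_h=768, cols=7, rows=4):
    """Map a global pixel on a cols x rows tiled display wall to the
    owning tile (column, row) and the local coordinate within that tile."""
    col, local_x = divmod(x, tile_w)
    row, local_y = divmod(y, tile_h)
    if not (0 <= col < cols and 0 <= row < rows):
        raise ValueError("pixel outside the wall")
    return (col, row), (local_x, local_y)

print(tile_for_pixel(2500, 800))  # ((2, 1), (452, 32))
```

A touch-free interface performs a similar mapping in reverse: camera-derived hand positions in wall coordinates are routed to whichever node renders the tile underneath them.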
Abstract:
This paper reports on a Virtual Reality theater experiment named Il était Xn fois, conducted by artists and computer scientists working in cognitive science. It offered the opportunity for the exchange of knowledge and ideas between these groups, highlighting the benefits of collaboration of this kind. Section 1 explains the link between enaction in cognitive science and virtual reality, and specifically the need to develop an autonomous entity that enhances presence in an artificial world. Section 2 argues that enactive artificial intelligence is able to produce such autonomy. This was demonstrated by the theatrical experiment Il était Xn fois (in English: Once upon Xn time), described in section 3. Its first public performance was in 2009, by the company Dérézo. The last section offers the view that enaction can form a common ground between the artistic and computer science fields.
Abstract:
We present a user-supported tracking framework that combines automatic tracking with extended user input to create error-free tracking results that are suitable for interactive video production. The goal of our approach is to keep the necessary user input as small as possible. In our framework, the user can select between different tracking algorithms - existing ones and new ones that are described in this paper. Furthermore, the user can automatically fuse the results of different tracking algorithms with our robust fusion approach. The tracked object can be marked in more than one frame, which can significantly improve the tracking result. After tracking, the user can easily validate the results, thanks to the support of a powerful interpolation technique. The tracking results are iteratively improved until the complete track has been found. After the iterative editing process, the tracking result of each object is stored in an interactive video file that can be loaded by our player for interactive videos.
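The interpolation step that lets a user bridge and correct tracks can be sketched as piecewise-linear interpolation between user-marked keyframes. This is a simplified stand-in for the paper's technique (the keyframe format and function are assumptions for illustration):

```python
def interpolate_track(keyframes, frame):
    """Linearly interpolate an object position (x, y) at `frame`,
    given user-marked keyframes as a dict {frame_number: (x, y)}."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]   # clamp before the first keyframe
    if frame >= frames[-1]:
        return keyframes[frames[-1]]  # clamp after the last keyframe
    # Find the pair of keyframes bracketing the requested frame
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            (x0, y0), (x1, y1) = keyframes[f0], keyframes[f1]
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

# Object marked at frame 0 and frame 10; query the midpoint
print(interpolate_track({0: (0.0, 0.0), 10: (100.0, 50.0)}, 5))  # (50.0, 25.0)
```

In practice the interpolated positions would be compared against the automatic trackers' output, so the user only needs to mark frames where the two disagree.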