16 results for interactive exhibit
in Digital Peer Publishing
Abstract:
Innovations in hardware and network technologies are leading to an exploding number of unrelated parallel media streams. In itself, this adds no value for consumers: the broadcasting and advertising industries have not yet found new formats to reach the individual user with their content. In this work we propose and describe a novel digital broadcasting framework that allows for the live staging of (mass) media events and improved consumer personalisation. In addition, we describe new professions that will emerge in future TV production workflows, namely the 'video composer' and the 'live video conductor'.
Abstract:
Interactive TV technology has been addressed in many previous works, but research on interactive content broadcasting, and on how to support its production process, remains sparse. In this article, the interactive broadcasting process is broadly defined to include studio technology and digital TV applications at consumer set-top boxes. In particular, augmented reality studio technology employs smart projectors as light sources and blends real scenes with interactive computer graphics that are controlled at end-user terminals. Moreover, TV-producer-friendly multimedia authoring tools empower the development of novel TV formats. Finally, support for user-contributed content has the potential to revolutionize the hierarchical TV production process by introducing the viewer into the content delivery chain.
Abstract:
In this article, it is shown that IWD incorporates topological perceptual characteristics of both spoken and written language, and it is argued that these characteristics should not be ignored or given up when synchronous textual CMC is technologically developed and upgraded.
Abstract:
In recent years, the well-known ray tracing algorithm has gained new popularity with the introduction of interactive ray tracing methods. Its high modularity and ability to produce highly realistic images make ray tracing an attractive alternative to raster graphics hardware. Interactive ray tracing has also proved its potential in the field of Mixed Reality rendering, providing novel methods for the seamless integration of real and virtual content. Actor insertion methods, a subdomain of Mixed Reality closely related to virtual television studio techniques, can use ray tracing to achieve high output quality, together with appropriate visual cues such as shadows and reflections, at interactive frame rates. In this paper, we show how interactive ray tracing techniques can provide new ways of implementing virtual studio applications.
Abstract:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics into TV scenes is common practice, but these elements cannot be considered an integrated part of the scene. This paper discusses the introduction of new services for interactive augmented television. We analyse the main aspects of the whole augmented reality production chain. Interactivity is one of the most important added values of digital television: this paper aims to break the model in which all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e. the real-time composition of video and computer graphics - e.g. a real scene and freely selectable images or spatially rendered objects - edited and customized by the end user within the context of the user's set-top box and TV receiver.
Abstract:
The central question of this paper is how to improve the production process by closing the gap between industrial designers and software engineers of television (TV)-based user interfaces (UIs) in an industrial environment. Software engineers are highly interested in whether one UI design can be converted into several fully functional UIs for TV products with different screen properties. Their aim is to apply automatic layout and scaling in order to speed up and improve the production process. The question, however, is whether a UI design lends itself to such automatic layout and scaling. This is investigated by analysing a prototype UI design created by industrial designers. In a first requirements study, industrial designers had created meta-annotations on top of their UI design in order to disclose their design rationale for discussions with software engineers. In a second study, five (out of ten) industrial designers assessed the potential of four different meta-annotation approaches. The question was which annotation method industrial designers would prefer and whether it could satisfy the technical requirements of the software engineering process. One main result is that the industrial designers preferred the method they were already familiar with, which therefore seems to be the most effective one, although the main objective of automatic layout and scaling could still not be achieved.
Abstract:
Adding virtual objects to real environments plays an important role in today's computer graphics: typical examples are virtual furniture in a real room and virtual characters in real movies. For a believable appearance, consistent lighting of the virtual objects is required. We present an augmented reality system that displays virtual objects with consistent illumination and shadows in the image of a simple webcam. We use two high-dynamic-range video cameras with fisheye lenses that permanently record the environment illumination. A sampling algorithm selects a few bright spots in one of the wide-angle images and the corresponding points in the second camera image; the 3D position of each light source can then be calculated using epipolar geometry. Finally, the selected point lights are used in a multi-pass algorithm to draw the virtual object with shadows. To validate our approach, we compare the appearance and shadows of the synthetic objects with real objects.
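The light-position step of this abstract can be sketched as follows: two matched bright spots in the pair of environment cameras define two viewing rays, and the 3D light position is recovered as the midpoint of their closest approach. This is a minimal illustration, not the paper's implementation; the camera poses, ray directions, and function names are hypothetical, and the fisheye views are assumed to have been undistorted into pinhole-like normalized coordinates.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_light(c1, d1, c2, d2):
    """Midpoint of the closest approach of rays c1 + t*d1 and c2 + s*d2.

    c1, c2: camera centers; d1, d2: ray directions through the matched
    bright spots. Solves the 2x2 least-squares system for t and s.
    """
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [p - q for p, q in zip(c1, c2)]
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # near zero for (almost) parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = [p + t * di for p, di in zip(c1, d1)]
    p2 = [p + s * di for p, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# Example: cameras one unit apart; rays built from (assumed) normalized
# image coordinates of the same bright spot seen in both images.
light = triangulate_light([0, 0, 0], [0.075, -0.05, 1.0],
                          [1, 0, 0], [-0.175, -0.05, 1.0])
print([round(x, 3) for x in light])   # -> [0.3, -0.2, 4.0]
```

With exactly intersecting rays the midpoint equals the intersection; with noisy fisheye detections it degrades gracefully to the point nearest both rays.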
Abstract:
ModelDB's mission is to link computational models and publications, supporting the field of computational neuroscience (CNS) by making model source code readily available. It is continually expanding, and currently contains source code for more than 300 models that cover more than 41 topics. Investigators, educators, and students can use it to obtain working models that reproduce published results and can be modified to test for new domains of applicability. Users can browse ModelDB to survey the field of computational neuroscience, or pursue more focused explorations of specific topics. Here we describe tutorials and initial experiences with ModelDB as an interactive educational tool.
Abstract:
What does it mean for a curriculum to be interactive? It encourages student engagement and active participation in both individual and group work. It offers teachers a coherent set of materials to choose from that can enhance their classes. It is the product of ongoing development and continuous improvement based on research and feedback from the field. This paper introduces work in progress from the Center of Excellence for Learning in Education, Science, and Technology (CELEST), an NSF Science of Learning Center. Among its many goals, CELEST is developing a unique interactive educational curriculum based upon models of mind and brain. Teachers, administrators, and governments are naturally concerned with how students learn; students are greatly concerned with how minds work, including how to learn. CELEST aims to introduce curricula that not only meet current U.S. standards in mathematics, science, and psychology but also influence plans to improve those standards. Software and support materials are in development and available at http://cns.bu.edu/celest/private/. Interested parties are invited to contact the author for access.
Abstract:
Understanding how brains function is an extremely challenging endeavour, for researchers as well as for students. Interactive media and tools such as simulations, databases, visualizations, and virtual laboratories have proved indispensable not only in research but also in education for helping to understand brain function. Accordingly, a wide range of such media and tools is now available, and it is becoming increasingly difficult to keep an overall picture. Written by researchers, tool developers, and experienced academic teachers, this special issue of Brains, Minds & Media covers a broad range of interactive research media and tools with a strong emphasis on their use in neural and cognitive science education. The focus lies not only on the tools themselves but also on the question of how research tools can significantly enhance learning and teaching and how curricular integration can be achieved. This collection gives a comprehensive overview of existing tools and their usage, as well as the underlying educational ideas, and thus provides an orientation guide not only for teaching researchers but also for interested teachers and students.
Abstract:
BrainMaps.org is an interactive high-resolution digital brain atlas and virtual microscope that is based on over 20 million megapixels of scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet. Complete brain datasets for various species, including Homo sapiens, Macaca mulatta, Chlorocebus aethiops, Felis catus, Mus musculus, Rattus norvegicus, and Tyto alba, are accessible online. The methods and tools we describe are useful for both research and teaching, and can be replicated by labs seeking to increase accessibility and sharing of neuroanatomical data. These tools offer the possibility of visualizing and exploring completely digitized sections of brains at a sub-neuronal level, and can facilitate large-scale connectional tracing, histochemical and stereological analyses.
Abstract:
This paper reports on a Virtual Reality theater experiment named Il était Xn fois, conducted by artists and computer scientists working in cognitive science. It offered an opportunity for the exchange of knowledge and ideas between these groups, highlighting the benefits of this kind of collaboration. Section 1 explains the link between enaction in cognitive science and virtual reality, and specifically the need to develop an autonomous entity that enhances presence in an artificial world. Section 2 argues that enactive artificial intelligence is able to produce such autonomy. This was demonstrated by the theatrical experiment Il était Xn fois (in English: Once upon Xn time), explained in Section 3. Its first public performance was given in 2009 by the company Dérézo. The last section offers the view that enaction can form a common ground between the artistic and computer science communities.
Abstract:
We present a user-supported tracking framework that combines automatic tracking with extended user input to create error-free tracking results suitable for interactive video production. The goal of our approach is to keep the necessary user input as small as possible. In our framework, the user can choose between different tracking algorithms - existing ones as well as new ones described in this paper - and can automatically fuse the results of different tracking algorithms with our robust fusion approach. The tracked object can be marked in more than one frame, which can significantly improve the tracking result. After tracking, the user can easily validate the results thanks to the support of a powerful interpolation technique, and the tracking results are iteratively improved until the complete track has been found. After this iterative editing process, the tracking result of each object is stored in an interactive video file that can be loaded by our player for interactive videos.
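The abstract does not detail its robust fusion approach, but the idea of combining several trackers' outputs per frame can be illustrated with one simple robust rule: take the coordinate-wise median of the bounding boxes the trackers report, which discards a single drifted outlier. The data, box format, and function names below are hypothetical.

```python
def median(values):
    """Median of a list of numbers (average of the two middle values)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def fuse_boxes(boxes):
    """Fuse (x, y, w, h) boxes from several trackers for one frame.

    Coordinate-wise median: robust against one tracker drifting away,
    as long as the majority of trackers still follow the object.
    """
    return tuple(median(coord) for coord in zip(*boxes))

# Three trackers; the last one has drifted off the object:
frame_boxes = [(102, 48, 60, 80), (100, 50, 62, 78), (240, 10, 55, 90)]
print(fuse_boxes(frame_boxes))   # -> (102, 48, 60, 80)
```

A confidence-weighted average would be a natural alternative when the trackers report per-frame confidence scores; the median variant needs no such scores.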