837 results for 3D multi-user virtual environments
Abstract:
Three-dimensional virtual worlds have been growing rapidly in number of users and are used for the most diverse purposes. In collaboration they have been applied with good results, owing to features such as immersion, rich interaction capabilities, avatar embodiment, and physical space. Avatar embodiment and physical space in particular support nonverbal communication, but their impact on collaboration is not well understood. In this work we present a case study research protocol and its creation process, intended as a tool for collecting data on how nonverbal communication influences collaboration in three-dimensional virtual worlds. We define the propositions and units of analysis, and a pilot case to validate them. Two cases are then analysed under the resulting protocol. For most of the propositions, chains of evidence were found in support.
Abstract:
The latest generation of computers offers enough performance to run computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robotic task and an essential prerequisite for robots to move through those environments. Traditionally, mobile robots have combined several sensors based on different technologies: lasers, sonars and contact sensors are typical in mobile robotic architectures. Colour cameras, however, are an important sensor because we want robots to sense and move through environments using the same information that humans use. Colour cameras are cheap and flexible, but considerable work is needed to give robots sufficient visual understanding of the scenes they observe. Computer vision algorithms are computationally complex, yet robots now have access to powerful architectures suited to mobile robotics. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide coloured 3D point clouds at high frame rates, has made computer vision even more relevant in mobile robotics. Combining visual and 3D data allows systems to exploit both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as scene modelling for video games, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is available the system can add furniture using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analysed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This Self-Organising Map (SOM) has been used successfully for clustering, pattern recognition and topology representation of various kinds of data. Until now, Self-Organising Maps have been computed primarily offline, and their application to 3D data has focused mainly on noise-free models without considering time constraints. Self-organising neural models provide a good representation of the input space; in particular, the Growing Neural Gas is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, the learning process must be adapted so that it completes within a predefined time. This thesis proposes an implementation that leverages the computing power of modern GPUs through the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data, since our target environments are man-made and therefore contain many points belonging to planar surfaces. The proposed method achieves good compression results in such man-made scenarios, and the detected and compressed planes can also be used in other applications such as surface reconstruction or plane-based registration. Finally, we also demonstrate the benefits of GPU technologies by obtaining a high-performance implementation of Virtual Digitising, a common CAD/CAM technique.
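The abstract does not give the details of the plane-based compression, but the underlying idea can be illustrated with a minimal sketch. The following Python fragment (function names, tolerances and the simple RANSAC-style search are illustrative assumptions, not the thesis's actual pipeline) detects one dominant plane in a point cloud and summarises its inliers by the fitted plane parameters:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def compress_planar_points(points, n_trials=100, inlier_tol=0.02,
                           min_inliers=500, rng=None):
    """RANSAC-style search for one dominant plane.

    Inlier points are summarised by the fitted plane parameters; the
    remaining points are returned for further processing.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_mask, best_count = None, 0
    for _ in range(n_trials):
        sample = points[rng.choice(len(points), 3, replace=False)]
        c, n = fit_plane(sample)
        mask = np.abs((points - c) @ n) < inlier_tol
        if mask.sum() > best_count:
            best_mask, best_count = mask, int(mask.sum())
    if best_count < min_inliers:
        return None, points                      # nothing worth compressing
    c, n = fit_plane(points[best_mask])          # refit on all inliers
    plane = {"centroid": c, "normal": n, "n_points": best_count}
    return plane, points[~best_mask]             # plane summary + residual points

# Example with a synthetic noisy floor plane plus scattered clutter.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(0, 4, 2000),
                         rng.uniform(0, 4, 2000),
                         rng.normal(0.0, 0.005, 2000)])
clutter = rng.uniform(0, 4, (300, 3))
plane, rest = compress_planar_points(np.vstack([floor, clutter]), rng=rng)
print(plane["n_points"], "points compressed to one plane;", len(rest), "remain")
```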
Abstract:
Part 9: Innovation Networks
Abstract:
This paper presents a prototype system for tracking people in enclosed indoor environments with a high rate of occlusions. The system uses a stereo camera for acquisition and disambiguates occlusions using a combination of depth-map analysis, a two-step ellipse-fitting people-detection process, motion models with Kalman filters, and a novel fit metric based on computationally simple object statistics. Testing shows that our fit metric outperforms commonly used position-based and histogram-based metrics, resulting in more accurate tracking of people.
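The abstract does not specify the fit metric itself, but the motion-model side of such trackers is standard. A minimal constant-velocity Kalman filter over an image-plane centroid might look like the sketch below (frame rate, noise covariances and initial values are invented for illustration):

```python
import numpy as np

# Constant-velocity Kalman filter over image-plane position (x, y).
# State: [x, y, vx, vy]; measurement: [x, y]. All noise values are guesses.
dt = 1.0 / 15.0                                   # assumed frame period
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                              # process noise
R = np.eye(2) * 4.0                               # measurement noise (px^2)

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    y = z - H @ x                                 # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: one predict/update cycle for a detected person centroid.
x = np.array([100.0, 200.0, 0.0, 0.0])            # initial state
P = np.eye(4) * 10.0
x, P = predict(x, P)
x, P = update(x, P, np.array([103.0, 198.0]))
```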
Abstract:
Knowmore (House of Commons) is a large-scale generative interactive installation that incorporates embodied interaction, dynamic image creation, new furniture forms, touch sensitivity, innovative collaborative processes and multichannel generative sound creation. A large circular table is spun by hand while a computer-controlled video projection falls on its top, creating an uncanny blend of physical object and virtual media. Participants’ presence around the table, and how they touch it, is registered, allowing up to five people to collaboratively ‘play’ this deeply immersive audiovisual work. Set within an ecological context, the work subtly asks what kind of resources and knowledges might be necessary to move us past simply knowing what needs to be changed to instead actually embodying that change, whilst hinting at other deeply relational ways of understanding and knowing the world. The work has successfully operated in two high-traffic public environments, generating a subtle form of interactivity that allows different people to interact at different paces and with differing intentions, each contributing towards dramatic public outcomes. The research field involved developing new interaction and engagement strategies for eco-political media arts practice. The context was the creation of improved embodied, performative and improvisational experiences for participants, further informed by ‘Sustainment’ theory. The central question was what ontological shifts may be necessary to better envision and align our everyday life choices in ways that respect that which is shared by all: ‘The Commons’. The methodology was primarily practice-led and in concert with underlying theories. The work’s knowledge contribution was to question how new media interactive experience and embodied interaction might prompt participants to reflect upon the kind of resources and knowledges required to move past simply knowing what needs to be changed to instead actually embodying that change. This was achieved through focusing on the power of embodied learning implied by the work’s strongly physical interface (i.e. the spinning of a full-size table) in concert with the complex field of layered imagery and sound. The work was commissioned by the State Library of Queensland and Queensland Artworkers Alliance and significantly funded by The Australia Council for the Arts, Arts Queensland, QUT, RMIT Centre for Animation and Interactive Media and industry partners E2E Visuals. After premiering for three months at the State Library of Queensland it was curated into the significant ‘Mediations Biennial of Modern Art’ in Poznan, Poland. The work formed the basis of two papers, was reviewed in Realtime (90), was overviewed at Subtle Technologies (2010) in Toronto, was shortlisted for ISEA 2011 Istanbul, and was included in the edited book/catalogue ‘Art in Spite of Economics’, a collaboration between Leonardo/ISAST (MIT Press); Goldsmiths, University of London; ISEA International; and Sabanci University, Istanbul.
Abstract:
John Frazer's architectural work is inspired by living and generative processes. Both evolutionary and revolutionary, it explores information ecologies and the dynamics of the spaces between objects. Fuelled by an interest in the cybernetic work of Gordon Pask and Norbert Wiener, and the possibilities of the computer and the "new science" it has facilitated, Frazer and his team of collaborators have conducted a series of experiments that utilize genetic algorithms, cellular automata, emergent behaviour, complexity and feedback loops to create a truly dynamic architecture. Frazer studied at the Architectural Association (AA) in London from 1963 to 1969, and later became unit master of Diploma Unit 11 there. He was subsequently Director of Computer-Aided Design at the University of Ulster - a post he held while writing An Evolutionary Architecture in 1995 - and a lecturer at the University of Cambridge. In 1983 he co-founded Autographics Software Ltd, which pioneered microprocessor graphics. Frazer was awarded a personal chair at the University of Ulster in 1984. In Frazer's hands, architecture becomes machine-readable, formally open-ended and responsive. His work as computer consultant to Cedric Price's Generator Project of 1976 (see p84) led to the development of a series of tools and processes; these have resulted in projects such as the Calbuild Kit (1985) and the Universal Constructor (1990). These subsequent computer-orientated architectural machines are makers of architectural form beyond the full control of the architect-programmer. Frazer makes much reference to the multi-celled relationships found in nature, and their ongoing morphosis in response to continually changing contextual criteria. He defines the elements that describe his evolutionary architectural model thus: "A genetic code script, rules for the development of the code, mapping of the code to a virtual model, the nature of the environment for the development of the model and, most importantly, the criteria for selection." In setting out these parameters for designing evolutionary architectures, Frazer goes beyond the usual notions of architectural beauty and aesthetics. Nevertheless his work is not without an aesthetic: some pieces are a frenzy of mad wire, while others have a modularity that is reminiscent of biological form. Algorithms form the basis of Frazer's designs. These algorithms determine a variety of formal results dependent on the nature of the information they are given. His work, therefore, is always dynamic, always evolving and always different. Designing with algorithms is also critical to other architects featured in this book, such as Marcos Novak (see p150). Frazer has made an unparalleled contribution to defining architectural possibilities for the twenty-first century, and remains an inspiration to architects seeking to create responsive environments. Architects were initially slow to pick up on the opportunities that the computer provides. These opportunities are both representational and spatial: computers can help architects draw buildings and, more importantly, they can help architects create varied spaces, both virtual and actual. Frazer's work was groundbreaking in this respect, and well before its time.
Abstract:
This short paper presents a means of capturing non-spatial information (specifically, understanding of places) for use in a Virtual Heritage application. This research is part of the Digital Songlines Project, which is developing protocols, methodologies and a toolkit to facilitate the collection and sharing of Indigenous cultural heritage knowledge using virtual reality. Within the context of this project most of the cultural activities relate to celebrating life, and to the Australian Aboriginal people land is the heart of life. Australian Indigenous art, stories, dances, songs and rituals celebrate country as their focus or basis. To Aboriginal people the term “Country” means far more than a place or a nation; rather, “Country” is a living entity with a past, a present and a future, and they talk about it in the same way as they talk about their mother. The landscape is seen to have a spiritual connection, a view seldom understood by non-Indigenous people; this paper introduces an attempt to understand such empathy and relationship and to reproduce it in a virtual environment.
Abstract:
For robots to operate in human environments they must be able to make their own maps because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and they have opened up new possibilities as a sensor for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (like laser range finders). However, there are disadvantages too, with the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those that have been reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the solution using a camera is significantly cheaper and is more appropriate for toys and early domestic robots.
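The abstract does not describe how ranges are recovered from a single image; one common way a single camera can act as a range sensor is ground-plane projection under a flat-floor assumption, sketched here with assumed camera parameters (this is a generic technique, not necessarily the method used in the thesis):

```python
import math

def pixel_to_ground_range(v, cy, fy, cam_height, cam_tilt):
    """Estimate range to a floor point from its image row (pinhole model).

    Assumes the point lies on a flat ground plane, the camera is
    cam_height metres above the floor and tilted down by cam_tilt radians.
    v is the pixel row, cy the principal-point row, fy the focal length
    in pixels. Returns the horizontal distance in metres, or None if the
    ray does not intersect the floor ahead of the robot.
    """
    ray_angle = math.atan2(v - cy, fy)    # angle below the optical axis
    total = cam_tilt + ray_angle          # angle below horizontal
    if total <= 0:
        return None                       # ray points at or above the horizon
    return cam_height / math.tan(total)

# Illustrative numbers only: 640x480 camera, fy = 525 px, mounted 0.3 m high,
# tilted 10 degrees down; an obstacle base detected at image row 400.
d = pixel_to_ground_range(400, 239.5, 525.0, 0.30, math.radians(10))
print(f"estimated range: {d:.2f} m")
```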
Abstract:
The availability of innumerable intelligent building (IB) products, and the current dearth of inclusive building component selection methods, suggest that decision makers may be confronted with the quandary of forming a particular combination of components to suit the needs of a specific IB project. Despite this problem, few empirical studies have so far been undertaken to analyse the selection of IB systems and to identify key selection criteria for major IB systems. This study is designed to fill these research gaps. Two surveys, a general survey and an analytic hierarchy process (AHP) survey, are proposed to achieve these objectives. The general survey aims to collect views from IB experts and practitioners to identify the perceived critical selection criteria, while the AHP survey prioritizes and assigns importance weightings to those criteria. Results generally suggest that each IB system is determined by a disparate set of selection criteria with different weightings. ‘Work efficiency’ is perceived to be the most important core selection criterion for the various IB systems, while ‘user comfort’, ‘safety’ and ‘cost effectiveness’ are also considered significant. Two sub-criteria, ‘reliability’ and ‘operating and maintenance costs’, are regarded as prime factors to consider in selecting IB systems. The current study contributes to the industry and to IB research in at least two respects. First, it widens the understanding of the selection criteria for IB systems, and of their degree of importance. Second, it adopts a multi-criteria AHP approach, a new method for analysing and selecting building systems in IB. Further research would investigate the inter-relationships amongst the selection criteria.
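The AHP weighting step referred to above is conventionally done by taking the principal eigenvector of a pairwise comparison matrix. A minimal sketch follows; the three criteria and all judgement values are invented for illustration and are not the survey's data:

```python
import numpy as np

# Saaty's Random Index values, indexed by matrix size (n >= 3).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Derive criterion weights and a consistency ratio from one AHP
    pairwise comparison matrix (Saaty's eigenvector method, n >= 3)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                    # normalise to sum to 1
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index
    cr = ci / RI[n]                             # consistency ratio (< 0.1 is acceptable)
    return weights, cr

# Hypothetical judgements for three criteria:
# work efficiency vs. user comfort vs. safety, on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 0.5],
              [0.5, 2.0, 1.0]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```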
Abstract:
Some Engineering Faculties are turning to the problem-based learning (PBL) paradigm to engender the necessary skills and competence in their graduates. Since, at the same time, some Faculties are moving towards distance education, questions are being asked about the effectiveness of PBL for technical fields such as Engineering when delivered in virtual space. This paper outlines an investigation of how student attributes affect the learning experience in PBL courses offered in virtual space. A frequency distribution was superimposed on the outcome space of a phenomenographic study of a suitable PBL course to investigate the effect of different student attributes on the learning experience. It was found that the quality, quantity and style of facilitator interaction had the greatest impact on the student learning experience. This highlights the need to establish consistent student interaction plans and to set, and ensure compliance with, minimum standards for facilitation and student interactions.
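As a rough illustration of superimposing a frequency distribution on a phenomenographic outcome space, the sketch below tallies hypothetical (student attribute, outcome category) pairs; the categories and attributes are invented and do not come from the study:

```python
from collections import Counter

# Hypothetical records: (student attribute, phenomenographic outcome category).
records = [
    ("on-campus", "PBL as solving the problem"),
    ("distance",  "PBL as solving the problem"),
    ("distance",  "PBL as developing professional skills"),
    ("distance",  "PBL as learning with and from others"),
    ("on-campus", "PBL as learning with and from others"),
]

# Frequency distribution over the outcome space, split by student attribute.
freq = Counter(records)
for (attribute, category), count in sorted(freq.items()):
    print(f"{attribute:10s} | {category:40s} | {count}")
```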
Abstract:
An important aspect of designing any product is validation. The virtual design process (VDP) is an alternative to hardware prototyping in which designs can be analysed without manufacturing physical samples. In recent years, VDPs have been developed mainly for animation and film applications. This paper proposes a virtual reality design process model for one such application: its use as a validation tool. The technique is used to generate a complete design guideline and validation tool for product design. To support the design process of a product, a virtual environment and a VDP method were developed that support validation and an initial design cycle performed by a designer. A car carrier product model, for which the virtual design was generated, is used as the illustration. The loading and unloading sequence of the prototype model was generated using automated reasoning techniques and completed by interactively animating the product in the virtual environment before the complete design was built. Using the VDP, critical issues such as loading, unloading, Australian Design Rules (ADR) compliance and clearance analysis were addressed. The process saves time and money in physical sampling and, to a large extent, in complete math data generation. Since only schematic models are required, it also saves time in math modelling and in handling larger assemblies, given the complexity of the models. This extension of the VDP to design evaluation is unique and was developed and implemented successfully. In this paper, a Toll Logistics and J Smith and Sons car carrier, developed under the author’s responsibility, is used to illustrate our approach to design validation via the VDP.
Abstract:
Although previous work in nonlinear dynamics on neurobiological coordination and control has provided valuable insights from studies of single-joint movements in humans, researchers have shown increasing interest in the coordination of multi-articular actions. Multi-articular movement models have provided valuable insights on neurobiological systems conceptualised as degenerate, adaptive complex systems satisfying the constraints of dynamic environments. In this paper, we overview empirical evidence illustrating the dynamics of adaptive movement behaviour in a range of multi-articular actions, including kicking, throwing, hitting and balancing. We model the emergence of creativity and the diversity of neurobiological action in the meta-stable region of self-organising criticality. We examine the influence on multi-articular actions of decaying and emerging constraints in the context of skill acquisition. We demonstrate how, in this context, transitions between preferred movement patterns exemplify the search for and adaptation of attractor states within the perceptual-motor workspace as a function of practice. We conclude by showing how empirical analyses of neurobiological coordination and control have been used to establish a nonlinear pedagogical framework for enhancing the acquisition of multi-articular actions.
Abstract:
A great challenge exists today: how to reach youth (a.k.a. the iYGeneration) who consume multiple media concurrently, who can access information on demand, and who have intertwined virtual social media networks in their lives. Our research finds that Australian youth multi-task and rarely use traditional media, although significant differences exist between males and females, as well as between late tweens and 20-somethings. Technology convergence facilitates two-way dialogue, allowing growing social interactions to occur in their technological environments. Our findings show that in order for marketing communication professionals to effectively communicate with this market, it is crucial to know exactly how the iYGeneration use media, which media they use, and when they use them.
Abstract:
In this paper, cognitive load analysis via acoustic- and CAN-Bus-based driver performance metrics is employed to assess two different commercial speech dialog systems (SDS) during in-vehicle use. Several metrics are proposed to measure increases in stress, distraction and cognitive load, and we compare these measures with a statistical analysis of the speech recognition component of each SDS. It is found that care must be taken when designing an SDS, as it may increase cognitive load, which can be observed through increased speech response delay (SRD), changes in speech production due to negative emotion towards the SDS, and decreased driving performance on lateral control tasks. From this study, guidelines are presented for designing systems that are to be used in vehicular environments.
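Speech response delay (SRD) is simply the time from the end of a system prompt to the onset of the driver's reply; a minimal way to compute and compare it per condition is sketched below (all timestamps are invented, not data from the study):

```python
from statistics import mean

def speech_response_delay(prompt_end_times, speech_onset_times):
    """SRD per dialogue turn: time from the end of a system prompt to the
    onset of the driver's spoken reply (seconds)."""
    return [onset - end
            for end, onset in zip(prompt_end_times, speech_onset_times)]

# Hypothetical timestamps (seconds) for two SDS conditions.
baseline = speech_response_delay([2.0, 9.5, 17.1], [2.6, 10.3, 17.9])
high_load = speech_response_delay([2.0, 9.4, 16.8], [3.1, 11.0, 18.4])
print(f"mean SRD baseline:  {mean(baseline):.2f} s")
print(f"mean SRD high load: {mean(high_load):.2f} s")
```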