933 results for Large Scale Virtual Environments


Relevance: 100.00%

Abstract:

This article describes a technique for partitioning Large Scale Virtual Environments (LSVEs) into hexagonal cells and using portals at the cell interfaces to reduce the number of messages on the network and the complexity of the virtual world. These environments usually demand a high volume of data that must be sent only to those users who need the information [Greenhalgh, Benford 1997].
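
As a rough illustration of the idea (not the paper's implementation), the sketch below maps a 2D position onto an axial hexagonal grid and restricts update delivery to users in the sender's cell and its six neighbours; the cell size and the neighbour-only visibility rule are assumptions.

```python
import math

HEX_SIZE = 50.0  # cell radius in world units (assumed value)

def world_to_hex(x, y, size=HEX_SIZE):
    """Map a 2D world position to axial hex-cell coordinates (pointy-top layout)."""
    q = (math.sqrt(3) / 3 * x - 1 / 3 * y) / size
    r = (2 / 3 * y) / size
    return _hex_round(q, r)

def _hex_round(q, r):
    """Round fractional axial coordinates to the nearest hex cell via cube rounding."""
    x, z = q, r
    y = -x - z
    rx, ry, rz = round(x), round(y), round(z)
    dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return int(rx), int(rz)

# The six axial neighbours of a cell; a portal would sit on each shared edge.
NEIGHBOURS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def recipients(sender_cell, users):
    """Return only the users located in the sender's cell or an adjacent cell."""
    visible = {sender_cell} | {(sender_cell[0] + dq, sender_cell[1] + dr)
                               for dq, dr in NEIGHBOURS}
    return [u for u, cell in users.items() if cell in visible]
```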

Relevance: 100.00%

Abstract:

We present redirection techniques that support exploration of large-scale virtual environments (VEs) by means of real walking. We quantify to what degree users can unknowingly be redirected in order to guide them through VEs in which virtual paths differ from the physical paths. We further introduce the concept of dynamic passive haptics, by which any number of virtual objects can be mapped to real physical proxy props having similar haptic properties (i.e., size, shape, and surface structure), such that the user can sense these virtual objects by touching their real-world counterparts. Dynamic passive haptics provides the user with the illusion of interacting with a desired virtual object by redirecting her to the corresponding proxy prop. We describe the concepts of generic redirected walking and dynamic passive haptics and present experiments in which we have evaluated these concepts. Furthermore, we discuss implications derived from a user study, and we present approaches for deriving physical paths that may vary from their virtual counterparts.
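
A minimal sketch of one common redirection mechanism, a rotation gain applied to the tracked head rotation, is shown below; the gain value is an assumed illustration, not a threshold reported in the paper.

```python
import math

ROTATION_GAIN = 1.15  # assumed gain; actual detection thresholds are user-dependent

def redirect_rotation(real_yaw_delta, gain=ROTATION_GAIN):
    """Scale the user's physical head rotation before applying it to the virtual camera.
    A gain slightly above 1 makes the virtual turn larger than the physical one,
    steering the physical path away from the virtual path without the user noticing."""
    return real_yaw_delta * gain

def update_virtual_heading(virtual_yaw, tracked_yaw_prev, tracked_yaw_now):
    """Per-frame update: apply the gained rotation to the virtual heading (radians)."""
    delta = tracked_yaw_now - tracked_yaw_prev
    return (virtual_yaw + redirect_rotation(delta)) % (2 * math.pi)
```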

Relevance: 100.00%

Abstract:

Agents inhabiting large-scale environments are faced with the problem of generating maps by which they can navigate. One solution to this problem is to use probabilistic roadmaps, which rely on selecting and connecting a set of points that describe the interconnectivity of free space. However, the time required to generate these maps can be prohibitive, and agents do not typically know the environment in advance. In this paper we show that the optimal combination of different point selection methods used to create the map depends on the environment; no single point selection method dominates. This motivates a novel self-adaptive approach in which an agent combines several point selection methods. The success rate of our approach is comparable to the state of the art, and the generation cost is substantially reduced. Self-adaptation therefore enables a more efficient use of the agent's resources. Results are presented both for a set of archetypal scenarios and for large-scale virtual environments based in Second Life, representing real locations in London.
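
The following sketch illustrates the self-adaptive idea under simple assumptions: each point selection method keeps a weight that is rewarded when its samples are actually added to the roadmap, and samplers are drawn in proportion to those weights. The reward rule is illustrative, not the paper's exact scheme.

```python
import random

class AdaptiveSampler:
    """Combine several roadmap point-selection methods and adapt their mixing
    weights online, rewarding methods whose samples improve the roadmap."""

    def __init__(self, samplers, decay=0.9):
        self.samplers = samplers              # name -> function returning a candidate point
        self.weights = {name: 1.0 for name in samplers}
        self.decay = decay

    def sample(self):
        """Pick a sampler with probability proportional to its current weight."""
        names = list(self.samplers)
        total = sum(self.weights[n] for n in names)
        pick = random.uniform(0, total)
        acc = 0.0
        for name in names:
            acc += self.weights[name]
            if pick <= acc:
                return name, self.samplers[name]()
        return names[-1], self.samplers[names[-1]]()

    def reward(self, name, useful):
        """Exponentially decayed reward: 1 if the sample was added to the roadmap."""
        self.weights[name] = self.decay * self.weights[name] + (1.0 if useful else 0.0)
```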

Relevance: 100.00%

Abstract:

In this work, we propose a solution to the scalability problem found in large-scale collaborative, virtual and mixed reality environments that use the hierarchical client-server model. Essentially, we use a hierarchy of servers: when the capacity of a server is reached, a new server is created as a child of the first one, and the system load is distributed between them (parent and child). We propose efficient tools and techniques for solving problems inherent to the client-server model, such as the definition of clusters of users, the distribution and redistribution of users across servers, and the mixing and filtering operations needed to reduce the flow between servers. The new model was tested in simulation, in emulation and in interactive applications that were implemented. The results of these experiments show improvements over the traditional, previous models, indicating the suitability of the proposed model for all-to-all communication problems. This is the case for interactive games and other Internet applications (including multi-user environments), as well as interactive applications of the Brazilian Digital Television System to be developed by the research group. Keywords: large scale virtual environments, interactive digital tv, distributed
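
A minimal sketch of the server-splitting idea, assuming a fixed per-server capacity and a naive migrate-half policy; the thesis itself uses clustering, distribution and filtering techniques that are not reproduced here.

```python
class Server:
    """One node of the hierarchical client-server tree. When a server reaches its
    capacity it spawns a child and hands over part of its load."""

    CAPACITY = 100  # assumed per-server user limit

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.users = []

    def add_user(self, user):
        if len(self.users) >= self.CAPACITY:
            self._split()
        self.users.append(user)

    def _split(self):
        """Create a child server and redistribute the load between parent and child."""
        child = Server(f"{self.name}.{len(self.children)}", parent=self)
        self.children.append(child)
        half = len(self.users) // 2
        child.users, self.users = self.users[half:], self.users[:half]
```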

Relevance: 100.00%

Abstract:

Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, is primarily dependent on the network propagation delay and on the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments and to compare and contrast them with those described by the HLA definition documents.
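
To make the scaling argument concrete, the sketch below shows vector-clock style causal delivery, in which each participant carries one counter per peer, so the per-update overhead and the delivery checks grow with the number of participants. This is a textbook illustration, not the Reading algorithm described in the paper.

```python
class VectorClock:
    """Minimal causal-ordering bookkeeping: one counter per participant, which is
    why the cost of strict causal ordering grows with the number of users."""

    def __init__(self, site_id, n_sites):
        self.site = site_id
        self.clock = [0] * n_sites

    def send(self):
        """Increment the local counter and return the timestamp to attach to the update."""
        self.clock[self.site] += 1
        return list(self.clock)

    def deliverable(self, sender, ts):
        """An update is causally deliverable when it is the next one expected from the
        sender and we have already seen everything the sender had seen."""
        return (ts[sender] == self.clock[sender] + 1 and
                all(ts[k] <= self.clock[k] for k in range(len(ts)) if k != sender))

    def deliver(self, sender, ts):
        """Merge the incoming timestamp into the local clock after delivery."""
        self.clock = [max(a, b) for a, b in zip(self.clock, ts)]
```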

Relevance: 100.00%

Abstract:

Experience plays an important role in building management. “How often will this asset need repair?” or “How much time is this repair going to take?” are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information with which to answer these types of questions. Young practitioners tend to rely on information that is based on regional averages and provided by publishing companies, in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such “micro-scale” levels of research are important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader approach to research at a more “macro-scale.” Recent trends show that the Architectural, Engineering and Construction (AEC) industry is experiencing explosive growth in its capabilities to generate and collect data. A great deal of valuable knowledge can be obtained from the appropriate use of this data, and the need has therefore arisen to analyse this increasing amount of available data. Data mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow the identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This final report presents the AIMM™ prototype system and documents which data mining techniques can be applied and how, the results of their application, and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. The application of the AIMM™ prototype system to help improve maintenance management and the building life cycle includes: (i) data preparation and cleaning, (ii) integrating meaningful domain attributes, (iii) performing extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as “Apriori”, and (iv) filtering and refining the data mining results, including the potential implications of these results for improving maintenance management.
Maintenance data of a variety of asset types were selected for demonstration with the aim of discovering meaningful patterns to assist facility managers in strategic planning and provide a knowledge base to help shape future requirements and design briefing. Utilising the prototype system developed here, positive and interesting results regarding patterns and structures of data have been obtained.
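
As a toy illustration of the association rule mining step mentioned above, the sketch below runs an Apriori-style frequent itemset pass over maintenance records encoded as attribute sets; the attribute names are invented for illustration, and the AIMM™ system itself relies on established data mining tooling rather than this code.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support=0.3):
    """Apriori-style pass: each transaction is a set of attributes, e.g.
    {"asset:pump", "fault:leak", "season:winter"} (hypothetical labels).
    Returns every itemset whose support meets the threshold."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    results, current = {}, [frozenset([i]) for i in sorted(items)]
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        frequent = {c: k / n for c, k in counts.items() if k / n >= min_support}
        results.update(frequent)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        keys = list(frequent)
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == len(a) + 1})
    return results
```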

Relevance: 100.00%

Abstract:

Experience plays an important role in building management. “How often will this asset need repair?” or “How much time is this repair going to take?” are the types of questions that project and facility managers face daily in planning activities. Failure or success in developing good schedules, budgets and other project management tasks depends on the project manager's ability to obtain reliable information with which to answer these types of questions. Young practitioners tend to rely on information that is based on regional averages and provided by publishing companies, in contrast to experienced project managers, who tend to rely heavily on personal experience. Another aspect of building management is that many practitioners are seeking to improve available scheduling algorithms, estimating spreadsheets and other project management tools. Such “micro-scale” levels of research are important in providing the required tools for the project manager's tasks. However, even with such tools, low-quality input information will produce inaccurate schedules and budgets as output. Thus, it is also important to take a broader approach to research at a more “macro-scale.” Recent trends show that the Architectural, Engineering and Construction (AEC) industry is experiencing explosive growth in its capabilities to generate and collect data. A great deal of valuable knowledge can be obtained from the appropriate use of this data, and the need has therefore arisen to analyse this increasing amount of available data. Data mining can be applied as a powerful tool to extract relevant and useful information from this sea of data. Knowledge Discovery in Databases (KDD) and Data Mining (DM) are tools that allow the identification of valid, useful, and previously unknown patterns, so that large amounts of project data may be analysed. These technologies combine techniques from machine learning, artificial intelligence, pattern recognition, statistics, databases, and visualization to automatically extract concepts, interrelationships, and patterns of interest from large databases. The project involves the development of a prototype tool to support facility managers, building owners and designers. This industry-focused report presents the AIMM™ prototype system and documents which data mining techniques can be applied and how, the results of their application, and the benefits gained from the system. The AIMM™ system is capable of searching for useful patterns of knowledge and correlations within existing building maintenance data to support decision making about future maintenance operations. The application of the AIMM™ prototype system to building models and their maintenance data (supplied by industry partners) utilises various data mining algorithms, and the maintenance data is analysed using interactive visual tools. The application of the AIMM™ prototype system to help improve maintenance management and the building life cycle includes: (i) data preparation and cleaning, (ii) integrating meaningful domain attributes, (iii) performing extensive data mining experiments using visual analysis (stacked histograms), classification and clustering techniques, and association rule mining algorithms such as “Apriori”, and (iv) filtering and refining the data mining results, including the potential implications of these results for improving maintenance management.
Maintenance data of a variety of asset types were selected for demonstration with the aim of discovering meaningful patterns to assist facility managers in strategic planning and provide a knowledge base to help shape future requirements and design briefing. Utilising the prototype system developed here, positive and interesting results regarding patterns and structures of data have been obtained.

Relevance: 100.00%

Abstract:

Ubiquitous computing is an exciting paradigm shift in which technology becomes virtually invisible in our lives. In an increasingly interconnected world, threats to our daily lives can come from unexpected sources and from all directions. Criminals and terrorists have recognized the value of leveraging ubiquitous computing environments to facilitate the commission of crimes. Cyber criminals typically launch different forms of large-scale and coordinated attacks, causing huge financial losses and potential hazards to life. In this talk, we report on two innovative approaches to defending against large-scale and coordinated attacks in ubiquitous environments: 1) inferring the cyber criminals' intent through network traffic classification to enable early warning of potential attacks, and 2) profiling large-scale and coordinated cyber attacks through both microscopic and macroscopic modeling to provide better control of such attacks. These approaches are effective in finding the weak symptoms caused by the attacks and can therefore successfully defend against large-scale and coordinated attacks at their early stages.
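
A minimal sketch of the traffic-classification idea under stated assumptions: the flow features and the choice of a random forest classifier are illustrative placeholders, not the approach reported in the talk.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-flow features; real feature engineering would differ.
FEATURES = ["pkts_per_sec", "mean_pkt_size", "distinct_dst_ports", "syn_ratio"]

def train_flow_classifier(flows, labels):
    """flows: list of dicts keyed by FEATURES; labels: 1 = attack, 0 = benign."""
    X = [[f[name] for name in FEATURES] for f in flows]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf

def early_warning(clf, flow, threshold=0.8):
    """Raise a warning when the predicted attack probability exceeds the threshold."""
    prob = clf.predict_proba([[flow[name] for name in FEATURES]])[0][1]
    return prob >= threshold
```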

Relevance: 100.00%

Abstract:

Running hydrodynamic models interactively allows both visual exploration and changes to the model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example in response to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for the models typically used in consultancy projects involving large-scale simulations. This raises a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, and the implementation of an appropriate API for control and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains, such as 3D animation, gaming and virtual globes (Autodesk 3ds Max, Second Life, Google Earth), that also focus on efficient interaction with 3D environments? In these domains, high efficiency is usually achieved with computer graphics algorithms such as surface simplification that depends on the current view and the distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code, allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
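
The sketch below illustrates the kind of view-dependent level-of-detail selection and mesh caching borrowed from the graphics domains mentioned above; the distance bands and the simplification routine are assumed placeholders.

```python
# (max camera distance, level of detail) bands; values are assumptions for illustration.
LOD_BANDS = [(200.0, 0), (1000.0, 1), (float("inf"), 2)]

class MeshCache:
    """Cache simplified versions of each mesh, keyed by (mesh id, lod level)."""

    def __init__(self, simplify):
        self.simplify = simplify        # function (mesh, lod) -> simplified mesh
        self.cache = {}

    def lod_for_distance(self, distance):
        """Pick the coarsest acceptable level of detail for the given view distance."""
        for max_dist, lod in LOD_BANDS:
            if distance <= max_dist:
                return lod
        return LOD_BANDS[-1][1]

    def mesh_for_view(self, mesh_id, mesh, camera_distance):
        """Return a cached simplified mesh appropriate for the current view distance."""
        lod = self.lod_for_distance(camera_distance)
        key = (mesh_id, lod)
        if key not in self.cache:
            self.cache[key] = self.simplify(mesh, lod)
        return self.cache[key]
```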

Relevance: 100.00%

Abstract:

The wide diffusion of cheap, small, and portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly and promptly analyzed, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in a variety of cross-cutting domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing the data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
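
As a simple illustration of QoS-driven partial fault tolerance (not the LAAR algorithm itself), the sketch below spends a fixed replication budget only on operators whose assumed quality class demands strict guarantees.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    qos_class: str        # "strict" or "best_effort" (assumed labels)
    cpu_cost: float       # cost of running one extra replica

def plan_replication(operators, budget):
    """Assign replicas to 'strict' operators first, until the CPU budget runs out;
    everything else runs unreplicated and relies on weaker guarantees."""
    plan = {}
    remaining = budget
    for op in sorted(operators, key=lambda o: o.qos_class != "strict"):
        replicate = op.qos_class == "strict" and remaining >= op.cpu_cost
        if replicate:
            remaining -= op.cpu_cost
        plan[op.name] = 2 if replicate else 1
    return plan
```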

Relevance: 100.00%

Abstract:

Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own.

Relevance: 100.00%

Abstract:

Most departmental computing infrastructure reflects the state of networking technology and available funds at the time of construction, which converge in a preconceived notion of homogeneity of network architecture and usage patterns. The DMAN (Digital Media Access Network) project, a large-scale server and network foundation for the Hong Kong Polytechnic University's School of Design, was created as a platform that would support a highly complex academic environment while giving maximum freedom to students, faculty and researchers through simplicity and ease of use. As a centralized multi-user computation backbone, DMAN faces an extremely heterogeneous user and application profile, exceeding the implementation and maintenance challenges of typical enterprise, and even most academic, server set-ups. This paper summarizes the specification, implementation and application of the system while describing its significance for design education in a computational context.

Relevance: 100.00%

Abstract:

This paper describes technologies we have developed to perform autonomous large-scale off-world excavation. A scale dragline excavator of size similar to that required for lunar excavation was made capable of autonomous control. Systems have been put in place to allow remote operation of the machine from anywhere in the world. Algorithms have been developed for complete autonomous digging and dumping of material taking into account machine and terrain constraints and regolith variability. Experimental results are presented showing the ability to autonomously excavate and move large amounts of regolith and accurately place it at a specified location.