893 results for Distributed virtual environments (DVE)
Abstract:
In recent years, the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has increased significantly. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with attributes adequate for professional use is proposed. The system operates in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D position and orientation. This work also includes a comprehensive review of image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by linking several one-camera tracking modules through a sensor fusion algorithm in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to that of the stand-alone configuration.
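The abstract does not detail PTrack's internals; as a rough illustration of the single-camera stage it describes (threshold the IR frame, segment the bright retro-reflective blobs, then recover pose from a known marker geometry), a minimal Python/OpenCV sketch might look as follows. All function names, thresholds, and the use of solvePnP are illustrative assumptions, not the paper's actual implementation (OpenCV >= 4 is assumed).

    import cv2
    import numpy as np

    def detect_marker_centroids(gray, threshold=200, min_area=5.0):
        # Bright retro-reflective blobs stand out in the IR image: binarize,
        # find their contours and return one image-plane centroid per blob.
        _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] >= min_area:  # reject single-pixel noise
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

    def estimate_pose(model_points, image_points, camera_matrix, dist_coeffs):
        # With >= 4 matched 2D-3D correspondences, recover the rigid marker
        # set's 3D position (tvec) and orientation (rvec) for this camera.
        ok, rvec, tvec = cv2.solvePnP(np.asarray(model_points, dtype=np.float32),
                                      np.asarray(image_points, dtype=np.float32),
                                      camera_matrix, dist_coeffs)
        return (rvec, tvec) if ok else None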
Abstract:
This paper presents two tools developed to facilitate and automate the use of Virtual Worlds for educational purposes. The first tool automatically creates the classroom space, usually called a region in the virtual world, that is, an area of the virtual world used for educational activities involving professors, students, and interactive objects. The second tool supports the creation of 3D interactive objects in a virtual world. With these tools, educators will be able to produce 3D interactive learning objects and use them in virtual classrooms, improving the quality and appeal of their classes for students. © 2011 IEEE.
Abstract:
Access control is a fundamental concern in any system that manages resources, e.g., operating systems, file systems, databases, and communications systems. The problem we address is how to specify, enforce, and implement access control in distributed environments. This problem occurs in many applications, such as management of distributed project resources, e-newspapers, and pay-TV subscription services. Starting from an access relation between users and resources, we derive a user hierarchy, a resource hierarchy, and a unified hierarchy. The unified hierarchy is then used to specify the access relation in a way that is compact and that allows efficient queries. It is also used in cryptographic schemes that enforce the access relation. We introduce three specific cryptography-based hierarchical schemes, which can effectively enforce and implement access control and are designed for distributed environments because they do not need the presence of a central authority (except perhaps for setup).
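The paper's three cryptographic schemes are not specified in the abstract; the following minimal Python sketch only illustrates the general idea of hierarchical key assignment that such schemes build on: each node of a (hypothetical) unified hierarchy gets a key that holders of an ancestor's key can derive with a one-way function, so access can be enforced without a central authority after setup. All names and the example hierarchy are illustrative assumptions.

    import hashlib

    def derive_key(parent_key: bytes, child_label: str) -> bytes:
        # One-way derivation: holders of the parent's key can compute the
        # child's key, but not the other way around.
        return hashlib.sha256(parent_key + child_label.encode()).digest()

    def assign_keys(root: str, root_key: bytes, children: dict) -> dict:
        # Walk the hierarchy top-down so every node's key is derivable
        # only from the keys of its ancestors.
        keys, stack = {root: root_key}, [root]
        while stack:
            node = stack.pop()
            for child in children.get(node, []):
                keys[child] = derive_key(keys[node], child)
                stack.append(child)
        return keys

    # Hypothetical unified hierarchy: a user granted the "project" key can
    # locally derive the keys protecting "docs" and "code"; no central
    # authority is contacted after this initial assignment.
    keys = assign_keys("project", b"\x00" * 32, {"project": ["docs", "code"]})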
Abstract:
The wide diffusion of cheap, small, and portable sensors integrated into an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect a vast amount of real-time information about the environment we live in. These data streams, if analyzed properly and in a timely manner, can be exploited to build new intelligent and pervasive services with the potential to improve people's quality of life in domains such as entertainment, health care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to provisioning differentiated quality of service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and present Quasit, its prototype implementation, offering a scalable and extensible platform that researchers can use to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing, through a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
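Quasit's actual programming model is not given in the abstract; the toy Python sketch below only illustrates the underlying idea of differentiated QoS in stream processing: the application attaches a QoS class to each data flow and the middleware uses it to decide which tuples to serve first. All names are hypothetical.

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class StreamTuple:
        qos_class: int                    # lower value = stricter QoS requirements
        payload: dict = field(compare=False)

    class QoSDispatcher:
        # Toy dispatcher: tuples belonging to flows that declared stricter
        # QoS requirements are handed to the operator before best-effort ones.
        def __init__(self):
            self._queue = []

        def submit(self, qos_class, payload):
            heapq.heappush(self._queue, StreamTuple(qos_class, payload))

        def drain(self, operator):
            while self._queue:
                operator(heapq.heappop(self._queue).payload)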
Abstract:
Tracking the user's visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communication with virtual and real agents greatly benefits from the analysis of the user's visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers and are therefore restricted to interpreting two-dimensional fixations relative to a defined area of projection. The study presented in this article compares the precision, accuracy, and application performance of two binocular eye tracking devices. Two algorithms that derive the depth information required for visual attention-based 3D interfaces are compared. This information is then applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used to disambiguate partly occluded objects.
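A common way to derive a 3D point of regard from a binocular tracker (not necessarily either of the two algorithms evaluated in the article) is to intersect the two gaze rays and take the midpoint of their shortest connecting segment, as in this Python sketch; all names are illustrative.

    import numpy as np

    def point_of_regard(p_left, d_left, p_right, d_right):
        # Midpoint of the shortest segment between the two gaze rays
        # (each ray: origin p, unit direction d); returns None if the
        # rays are (nearly) parallel, i.e. there is no usable vergence.
        p1, d1 = np.asarray(p_left, float), np.asarray(d_left, float)
        p2, d2 = np.asarray(p_right, float), np.asarray(d_right, float)
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        if abs(denom) < 1e-9:
            return None
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return (p1 + s * d1 + p2 + t * d2) / 2.0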
Abstract:
Cloud Computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand, and sustainable manner. However, managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints poses challenges for maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications may lead to SLA violations and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications and for using these relations to build scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
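The dissertation's two VM-scaling algorithms are not described in the abstract; the snippet below is only a minimal threshold-based sketch of the general idea of using an SLA performance constraint as the input that drives scaling decisions. All thresholds and names are illustrative assumptions.

    def scaling_decision(response_time_ms, sla_limit_ms, current_vms,
                         min_vms=1, max_vms=20, headroom=0.8):
        # Scale out before the SLA limit is reached, scale in when there is
        # ample slack; otherwise keep the current number of VMs.
        if response_time_ms > sla_limit_ms * headroom and current_vms < max_vms:
            return current_vms + 1
        if response_time_ms < sla_limit_ms * 0.4 and current_vms > min_vms:
            return current_vms - 1
        return current_vms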
Abstract:
This paper presents some results of an R&D project entitled “e-Learning system for Practical Training of University students in Education Faculties (ForELearn)”, developed in Spain by the Universidad de Granada and the Universidad Politécnica de Madrid and funded by the Spanish Ministry of Education. In a first phase, the AulaWeb Learning Management System was adapted and improved for the design and development of an experimental Practicum supervision course. Next, the implementation of this course through a series of face-to-face and online seminars provided experimental data for analyzing and discussing the point of view of the users (pre-service teachers) who tracked their practice supervision with AulaWeb.
Abstract:
This work presents a navigation system for unmanned ground vehicles (UGVs) in large outdoor environments; virtual obstacles are added to the system in order to avoid zones that may pose risks to the UGV or to the elements in its surroundings. The platform, the software architecture, and the modifications necessary to handle the virtual obstacles are explained in detail. Several tests have been performed, and their results show that the proposed system is capable of safe navigation in complex environments.
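The paper's platform-specific mechanism for virtual obstacles cannot be reproduced from the abstract; one common way to realize the concept is to inject keep-out cells into the planner's costmap so the vehicle treats them like sensed obstacles, as in this hypothetical Python sketch.

    import numpy as np

    def add_virtual_obstacle(costmap, top_left, bottom_right, cost=255):
        # Mark a rectangular keep-out zone as occupied even though nothing
        # was sensed there; the path planner then avoids it like a real obstacle.
        grid = np.asarray(costmap).copy()
        (r0, c0), (r1, c1) = top_left, bottom_right
        grid[r0:r1 + 1, c0:c1 + 1] = cost
        return grid

    # e.g. forbid a hypothetical risky zone before planning:
    # safe_map = add_virtual_obstacle(costmap, (40, 10), (55, 30))
    # path = planner.plan(safe_map, start, goal)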
Abstract:
Analysis of learning data (learning analytics) is a new research field with high growth potential. The main objective of learning analytics is the analysis of data (interactions being the basic data unit) generated in virtual learning environments, in order to maximize the outcomes of the learning process; however, no consensus has yet been reached on which interactions must be measured and what their influence on learning outcomes is. This research is grounded in the study of e-learning interaction typologies and their relationship with students' academic performance, by means of a comparative study of different interaction typologies (based on the agents involved, frequency of use, and participation mode). The main conclusions are a) that classifications based on agents offer a better explanation of academic performance; and b) that each of the three typologies explains academic performance in terms of some of its components (student-teacher and student-student interactions, evaluating-students interactions, and active interactions, respectively), with the other components being non-significant.
Abstract:
To perform advanced manipulation of remote environments, such as grasping, more than one finger is required, which implies higher requirements for the control architecture. This paper presents the design and control of a modular 3-finger haptic device that can be used to interact with virtual scenarios or to teleoperate dexterous remote hands. In a modular haptic device, each module allows interaction with a scenario using a single finger; hence, multi-finger interaction can be achieved by adding more modules. Control requirements for a multi-finger haptic device are analyzed, and a new hardware/software architecture for this kind of device is proposed. The software architecture described in this paper is distributed, and the different modules communicate with each other to allow remote manipulation. Moreover, an application in which this haptic device is used to interact with a virtual scenario is presented.
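The concrete hardware/software interfaces are not given in the abstract; the Python sketch below only illustrates the modularity idea: each single-finger module exchanges its fingertip state with the remote side independently, and a coordinator simply aggregates whichever modules are present. All classes and the transport are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class FingerState:
        position: tuple   # fingertip position measured by one module
        force: tuple      # force to be rendered on that fingertip

    class FingerModule:
        # One single-finger haptic module; the transport (e.g. a socket) is
        # whatever channel connects it to the virtual scenario or remote hand.
        def __init__(self, module_id, transport):
            self.module_id, self.transport = module_id, transport

        def exchange(self, state):
            self.transport.send(self.module_id, state)       # local state out
            return self.transport.receive(self.module_id)    # rendered force back

    class MultiFingerDevice:
        # Multi-finger interaction is obtained simply by aggregating modules.
        def __init__(self, modules):
            self.modules = modules

        def control_cycle(self, states):
            return [m.exchange(s) for m, s in zip(self.modules, states)]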
Abstract:
Learning analytics is the analysis of static and dynamic data extracted from virtual learning environments, in order to understand and optimize the learning process. Generally, this dynamic data is generated by the interactions that take place in the virtual learning environment. Many ways of grouping this data have been proposed, but there is no consensus yet on which interactions and groups must be measured and analyzed. There is also no agreement on what the influence of these interactions is, if any, on learning outcomes, academic performance, or student success. This study presents three different extant interaction typologies in e-learning and analyzes the relation of their components with students' academic performance. The three classifications are based on the agents involved in the learning process, the frequency of use, and the participation mode, respectively. The main findings of the research are: a) that agent-based classifications offer a better explanation of student academic performance; b) that at least one component in each typology predicts academic performance; and c) that student-teacher and student-student interactions, evaluating-students interactions, and active interactions, respectively, have a significant impact on academic performance, while the other interaction types are not significantly related to academic performance.
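The abstract does not state which statistical model was used; as an illustration of how one could relate the components of one interaction typology to academic performance, a plain least-squares fit with an R^2 score is sketched below (hypothetical data layout: one row per student, one column per interaction type).

    import numpy as np

    def fit_typology(interaction_counts, grades):
        # Ordinary least squares: how well do the components of one typology
        # (columns of interaction_counts, one row per student) explain grades?
        grades = np.asarray(grades, float)
        X = np.column_stack([np.ones(len(grades)), np.asarray(interaction_counts, float)])
        beta, _, _, _ = np.linalg.lstsq(X, grades, rcond=None)
        residuals = grades - X @ beta
        r2 = 1.0 - np.sum(residuals ** 2) / np.sum((grades - np.mean(grades)) ** 2)
        return beta, r2

    # e.g. an agent-based typology with columns student-teacher, student-student,
    # student-content:
    # coefficients, r2 = fit_typology(counts_matrix, final_grades)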
Abstract:
This paper presents the model named Accepting Networks of Evolutionary Processors as an NP-problem solver inspired by biological DNA operations. A processor has a set of rules (splicing rules in this model), a multiset of objects, and a set of filters. Rules can be applied in parallel, since a large number of copies of each object exists in the multiset. Processors can form a graph in order to solve a given problem. This paper shows the network configuration that solves the SAT problem using linear resources and time. As shown in the paper, a rule-representation architecture in distributed environments, such as decision support systems, can easily be implemented using these networks of processors.
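As a toy illustration of the splicing operation such processors apply (not the paper's SAT construction, and ignoring input/output filters), a rule (u1, u2, u3, u4) cuts two strings at the sites u1u2 and u3u4 and recombines the pieces; a Python sketch:

    def splice(x, y, rule):
        # Splicing rule (u1, u2, u3, u4): if x = a+u1+u2+b and y = c+u3+u4+d,
        # produce the recombined strings a+u1+u4+d and c+u3+u2+b.
        u1, u2, u3, u4 = rule
        i, j = x.find(u1 + u2), y.find(u3 + u4)
        if i < 0 or j < 0:
            return set()
        a, b = x[:i], x[i + len(u1) + len(u2):]
        c, d = y[:j], y[j + len(u3) + len(u4):]
        return {a + u1 + u4 + d, c + u3 + u2 + b}

    def evolutionary_step(objects, rules):
        # Apply every rule to every pair of objects "in parallel"; unlimited
        # copies are assumed, so the original objects are kept as well.
        result = set(objects)
        for x in objects:
            for y in objects:
                for rule in rules:
                    result |= splice(x, y, rule)
        return result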
Abstract:
This work is a contribution to photovoltaic (PV) systems with distributed maximum power point tracking (DMPPT), a topology characterized by performing MPPT at module level, in contrast to more traditional topologies that perform MPPT for a larger number of modules, up to hundreds. The two DMPPT technologies available on the market, known as microinverters and power optimizers, or collectively as module-level power electronics (MLPE), offer certain advantages over central MPPT systems: higher energy production under mismatch conditions, individual monitoring of each module, design flexibility, higher system safety, etc. Although DMPPT systems are not limited to urban environments, these are emphasized in the title because they are their natural market, since the extra cost is difficult to justify in large ground-mounted PV plants. Since 2010 the market for these systems has grown considerably and continues to grow steadily. However, a deep understanding of how these systems work is still lacking, especially in the case of power optimizers, as is knowledge of the energy gains achievable under mismatch conditions and of the possibilities for advanced fault diagnosis. The main objective of this thesis is to present a complete study of how DMPPT systems work, their limits and their advantages, together with experiments that verify the theory and software tools for assessing the benefit of using DMPPT in a given installation. The equations that model the behavior of PV systems with power optimizers are derived and used to highlight their limits in resolving certain mismatch situations. An in-depth study of the effect of shading on PV systems is presented, covering both the I-V curve and MPPT algorithms. Experiments on the behavior of the MPPT algorithms of central inverters and MLPE under shading highlight their inefficiency on I-V curves with local maxima. The potential of DMPPT to mitigate hot-spots is analyzed and experimentally verified, as are the possible power and energy gains under shading, and a brief economic feasibility study is included. To support these analyses and experiments, a series of software tools was developed: a LabVIEW program that controls a solar simulator and stores the measurements; a program, verified experimentally, that simulates the I-V curves of modules and PV arrays affected by shading; and, built on the latter, a more complete tool that estimates the annual losses due to shading and the energy gains obtained with DMPPT in shaded PV installations. Finally, a set of algorithms for diagnosing faults in PV systems with DMPPT was developed and experimentally verified. This tool can diagnose the following faults: shading from fixed objects (with estimation of the distance to the object), localized soiling, generalized soiling, possible hot-spots, module degradation, and excessive losses in the DC cabling. In addition, it alerts the user to the power losses produced by each fault, classifies the faults by severity, and does not require irradiance or temperature sensors.
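As a very small illustration of the kind of comparison the simulation tool enables, the Python sketch below uses a simplified single-diode module model (no series or shunt resistance, hypothetical parameters) to compare the best operating point of a series string under a central MPPT with the sum of per-module maxima reachable with DMPPT; it is not the thesis' simulator.

    import numpy as np

    # Hypothetical 60-cell module, ideal single-diode model; the bypass diode
    # clamps a bypassed module at about -0.5 V.
    I0, N_CELLS, N_VT, V_BYPASS, I_SC = 1e-9, 60, 0.026 * 1.3, -0.5, 9.0

    def module_voltage(i, irradiance):
        # Voltage of one module when forced to carry string current i.
        iph = I_SC * irradiance
        if i >= iph + I0:
            return V_BYPASS
        return max(N_CELLS * N_VT * np.log((iph - i) / I0 + 1.0), V_BYPASS)

    def central_mppt_power(irradiances, points=500):
        # Series string: one common current, module voltages add up; scan the
        # string I-V curve and return the best point a central MPPT could reach.
        currents = np.linspace(0.0, I_SC, points)
        return max(i * sum(module_voltage(i, g) for g in irradiances) for i in currents)

    def dmppt_power(irradiances, points=500):
        # With per-module MPPT every module is driven to its own maximum.
        return sum(central_mppt_power([g], points) for g in irradiances)

    # e.g. three modules, one shaded to 40 % irradiance:
    # central_mppt_power([1.0, 1.0, 0.4]) vs dmppt_power([1.0, 1.0, 0.4])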