27 results for Virtual Computer World
at Universidad Politécnica de Madrid
Abstract:
The objective of this final degree project (PFC) is the design and implementation of an application that works as a virtual oscilloscope, spectrum analyzer and function generator, all within the same application. Through a data acquisition card, the user can take samples of real-world signals (analog system) to generate data that can be manipulated by a computer (digital system). The same card can also generate basic signals such as sine, square and sawtooth waves, and further functionality has been added to generate frequency-modulated signals, chirp signals (commonly used in sonar and radar applications as well as in optical transmission) and PRN signals (pseudo-random noise consisting of a deterministic pulse sequence that repeats every period, commonly used in GPS receivers), together with widely known signals such as Gaussian white noise and uniform white noise. The application displays the acquired signals in detail and analyzes them in several ways selected by the user: windowing with the most common window types, frequency response and the Fourier transform are examples of the analyses that can be performed. The configuration is chosen by the user through friendly and visually attractive displays and panels.
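As a hedged illustration of the kind of processing such a virtual instrument performs, the following sketch generates a linear chirp, applies a Hann window and computes its spectrum; it is not the project's code, and the sampling rate and sweep limits are arbitrary values chosen for the example.

```python
# Minimal sketch (not the project's code): generate a linear chirp,
# window it, and compute its spectrum, as a virtual spectrum analyzer
# might do after acquisition.
import numpy as np
from scipy.signal import chirp, get_window

fs = 10_000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)    # 1 second of samples
x = chirp(t, f0=100, f1=2_000, t1=1.0, method="linear")

w = get_window("hann", len(x))   # windowing before the FFT
spectrum = np.fft.rfft(x * w)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
```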
Abstract:
The project arises from the need to develop improved teaching methodologies in the field of continuum mechanics. The objective is to offer the student a learning process through which to acquire the necessary theoretical knowledge, cognitive skills, and the responsibility and autonomy needed for professional development in this area. Traditionally, the concepts of these subjects were taught through lectures and laboratory practice. During these lessons the students' attitude was usually passive, and therefore their effectiveness was poor. The proposed methodology has already been successfully employed in universities such as the University of Bochum (Germany) and the University of South Australia, and aims to improve the effectiveness of knowledge acquisition through the student's use of a virtual laboratory. This laboratory makes it possible to adapt the curricula and learning techniques to the European Higher Education Area and to improve current learning processes in the University School of Public Works Engineers (EUITOP) of the Technical University of Madrid (UPM), since there are no laboratories for this specialization. The virtual space is created using a software platform built on OpenSim, which manages 3D virtual worlds, and the LSL language (Linden Scripting Language), which gives objects specific behaviours. The student or user can access this virtual world through their avatar (their character in the virtual world) and can carry out practical work within the space created for this purpose, at any time, needing only a computer with Internet access and a viewer. The virtual laboratory has three sections. In the virtual meeting rooms, the avatar can interact with peers, solve problems and exchange the documentation available in the virtual library. In the interactive game room, the avatar has to solve a number of problems within a time limit. In the video room, students can watch instructional videos and receive group lessons. Each interactive audiovisual element is accompanied by explanations that frame it within the area of knowledge and enable students to begin to acquire the vocabulary and practice of the profession for which they are being trained. Plane elasticity concepts are introduced from tension and compression tests on steel and concrete specimens. The behaviour of reticulated and articulated structures is reinforced through interactive games, and the concepts of tension, compression, and local and global buckling are illustrated by tests that load articulated structures to failure. Pure bending and simple and composite torsion are studied by observing a flexible specimen. Earthquake-resistant design of buildings is examined through a laboratory test video.
Abstract:
Augmented Reality has been part of multiple research projects for several years. The combination of real-world information and digital information offers endless possibilities. The best known are oriented towards games but, thanks to this, Natural Interfaces can also be implemented; in other words, interfaces that let the user control an electronic device with their own actions: body movements, facial expressions, and so on. This project presents the development of the system layer of a Natural Interface, Mokey, which simulates a keyboard through the user's body movements. As a result, any computer application that requires a keyboard can be used through body movements, even if it was not designed for that purpose when it was created. Mokey's user layer is covered in the project by Carlos Lázaro Basanta. The main objective of Mokey is to make a technology as present in everyday life as the computer accessible to people with a motor disability or reduced mobility. Since we live in such a computerized society, genuine social inclusion requires giving this part of the population access to current technology rather than creating new tools exclusively for them, which would create a situation of discrimination even if unintended. For this reason, it is essential that Mokey's design be simple and intuitive while remaining versatile enough that as many disabled users as possible can find an optimal configuration for their particular condition. After stating the motivations of this project, this document provides a detailed state of the art of both the technologies directly involved and other similar projects, paying special attention to the Microsoft Kinect camera, since this hardware is what allows Mokey to detect motion. It then gives a detailed explanation of the Natural Interface developed, with particular attention to the algorithms implemented for movement detection and keyboard simulation. Finally, the document presents an exhaustive analysis of how Mokey works with other applications: a broad battery of tests determines its performance in the most common situations, and another battery of tests defines its compatibility with the different types of programs available on the market. For greater precision in analysing the data, Mokey is compared with a similar tool, FAAST, which highlights the advantages of an application specifically designed for disabled people over one that was not built with that purpose.
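As a purely hypothetical sketch of the gesture-to-key idea described above (not Mokey's implementation), the snippet below maps two simple poses of the right hand, relative to the head, to simulated key presses; the skeleton reader and key injector are placeholders for a Kinect skeleton stream and an OS-level keyboard-event API.

```python
# Hypothetical sketch of a gesture-to-key mapping (not Mokey's code).
# read_skeleton() and press_key() are placeholders for a Kinect skeleton
# stream and a keyboard-event injector, respectively.
from dataclasses import dataclass

@dataclass
class Joint:
    x: float  # metres, camera coordinates
    y: float
    z: float

def gesture_to_key(right_hand: Joint, head: Joint) -> str | None:
    """Map a simple pose to a key: raise the right hand above the head
    to emit SPACE; extend it far to the right to emit RIGHT arrow."""
    if right_hand.y > head.y + 0.10:      # hand ~10 cm above the head
        return "SPACE"
    if right_hand.x > head.x + 0.40:      # hand ~40 cm to the right
        return "RIGHT"
    return None

# Quick check with fixed sample coordinates:
print(gesture_to_key(Joint(0.1, 1.75, 2.0), Joint(0.0, 1.60, 2.0)))  # -> "SPACE"

# Main loop (placeholders, commented out):
# while True:
#     skeleton = read_skeleton()              # e.g. from the Kinect SDK
#     key = gesture_to_key(skeleton["right_hand"], skeleton["head"])
#     if key is not None:
#         press_key(key)                      # OS-level key injection
```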
Abstract:
Spatial Data Infrastructures have become a methodological and technological benchmark enabling distributed access to historical-cartographic archives. However, it is essential to offer enhanced virtual tools that imitate the processes and methodologies currently carried out by librarians, historians and academics in the existing map libraries around the world. These virtual processes must be supported by a generic framework for managing, querying and accessing distributed georeferenced resources and other content types such as scientific data or information. The authors have designed and developed support tools that provide enriched browsing, measurement and geometrical analysis capabilities, and dynamic querying methods, based on SDI foundations. The DIGMAP engine and the IBERCARTO collection enable access to georeferenced historical-cartographic archives. Based on lessons learned from the CartoVIRTUAL and DynCoopNet projects, a generic service architecture scheme is proposed. In this way, virtual map rooms and SDI technologies can be integrated, supporting researchers in the historical and social domains.
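A minimal sketch of the kind of spatial query such virtual map rooms rely on is given below; the record structure, titles and bounding boxes are illustrative assumptions, not DIGMAP's actual data model or API.

```python
# Illustrative sketch (not DIGMAP's API): filtering georeferenced map
# records by a bounding box and a year range.
from dataclasses import dataclass

@dataclass
class MapRecord:
    title: str
    year: int
    bbox: tuple[float, float, float, float]  # (min_lon, min_lat, max_lon, max_lat)

def intersects(a, b) -> bool:
    """True if two (min_lon, min_lat, max_lon, max_lat) boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def query(records: list[MapRecord], bbox, year_range=None) -> list[MapRecord]:
    hits = [r for r in records if intersects(r.bbox, bbox)]
    if year_range:
        hits = [r for r in hits if year_range[0] <= r.year <= year_range[1]]
    return hits

# Hypothetical catalogue entries, queried around Madrid, 1850-1950:
maps = [MapRecord("Plano de Madrid", 1900, (-3.80, 40.35, -3.60, 40.50)),
        MapRecord("Carta de Lisboa", 1856, (-9.25, 38.65, -9.05, 38.80))]
print(query(maps, bbox=(-4.0, 40.0, -3.0, 41.0), year_range=(1850, 1950)))
```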
Abstract:
Virtual certification partially replaces with computer simulations the experimental techniques required for rail vehicle certification. In this paper, several works in which these techniques were used in vehicle design and track maintenance processes are presented. Dynamic simulation of multibody systems was used to virtually apply the EN14363 standard to certify the dynamic behaviour of vehicles. The works described are: assessment of a freight bogie design adapted to meter gauge, assessment of a railway track layout for a subway network, design of a freight bogie for higher speed and axle load, and processing of the data acquired by a track recording vehicle for track maintenance.
Abstract:
International agricultural trade has grown significantly during the last decade. Many countries rely on imports to ensure adequate food supplies for their people, while a few are becoming the food baskets of the world. This process raises issues about food security in import-dependent countries and potentially unsustainable land and water use in exporting countries. In this paper, we analyse the impacts of expanded farm trade on natural resources, especially water. The farm exports and imports of five Latin American countries (Brazil, Argentina, Mexico, Peru and Chile) are examined carefully. A preliminary analysis indicates that virtual water imports can save valuable water resources in water-short countries such as Mexico and Chile. Major exporting countries, including Brazil and Argentina, have become big exporters thanks to abundant natural resource endowments. The opportunity costs of agricultural production in those countries are identified as low because of the predominant use of green water. It is concluded that virtual water trade can be a powerful tool to alleviate water stress in semi-arid countries. However, for exporting nations a sustainable water use can only be guaranteed if environmental production costs are fully reflected in commodity prices. There is no basis for erecting environmental trade tariffs on exporters, though; setting up legal foundations for them in full compliance with the WTO's processes would be a daunting task.
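The notion of virtual water trade rests on simple arithmetic: the water embedded in imports equals the imported quantity times the water footprint per tonne. The sketch below illustrates this with hypothetical figures that are not taken from the paper.

```python
# Illustrative calculation only (all figures are hypothetical, not from the paper):
# virtual water imported = imported quantity x water footprint per tonne.
crop_imports_tonnes = {"maize": 1_000_000, "wheat": 500_000}     # hypothetical
water_footprint_m3_per_tonne = {"maize": 1_200, "wheat": 1_800}  # hypothetical

virtual_water_m3 = sum(
    q * water_footprint_m3_per_tonne[c] for c, q in crop_imports_tonnes.items()
)
print(f"Virtual water imported: {virtual_water_m3 / 1e9:.2f} km^3")  # 2.10 km^3
```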
Abstract:
After the extraordinary spread of the World Wide Web during the last fifteen years, engineers and developers are now pushing the Internet towards its next frontier. A new conception in computer science and network communication has been burgeoning during roughly the last decade: a world where most of the computers of the future will be extremely downsized, to the point that the most advanced prototypes will look like dust. In this vision, every single element of our "real" world has an intelligent tag that carries all its relevant data, effectively mapping the "real" world into a "virtual" one where all the electronically augmented objects are present, can interact among themselves and influence with their behaviour that of the other objects, or even that of a final human user. This is the vision of the Internet of the Future, which also draws on several novel tendencies in computer science and networking, such as pervasive computing and the Internet of Things. As has happened before, materializing a new paradigm that changes the way entities interrelate in this new environment has proved to be a goal full of challenges along the way. Right now the situation is exciting, with a plethora of new developments, proposals and models sprouting all the time, often in an uncoordinated, decentralised manner far from any standardization, somewhat resembling the status quo of the first developments of advanced computer networking back in the 60s and 70s. Usually, a system designed after the Internet of the Future will consist of one or several final-user devices attached to those users, a network (often a Wireless Sensor Network) charged with the task of collecting data for the final-user devices, and sometimes a base station that sends the data to less hardware-constrained computers for further processing. When implementing a system designed with the Internet of the Future as a pattern, the issues, and more specifically the limitations, that must be faced are numerous: lack of standards for platforms and protocols, processing bottlenecks, low battery lifetime, and so on. One of the main objectives of this project is to present a functional model of how a system based on the paradigms linked to the Internet of the Future works, overcoming some of the difficulties that can be expected and showing a model of a middleware architecture specifically designed for a pervasive, ubiquitous system. This Final Degree Dissertation is divided into several parts. Beginning with an introduction to the main topics and concepts of this new model, a state of the art is offered to provide a technological background. After that, an example of a semantic and service-oriented middleware is shown; later, a system built by means of this semantic and service-oriented middleware and other components is developed, justifying its placement in a particular scenario, describing it and analysing the data obtained from it. Finally, the conclusions drawn from this system and future work worth tackling are presented.
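A minimal sketch of the data path described above (sensor nodes producing readings, a base station collecting them and forwarding batches for further processing) is shown below; it is a toy stand-in for the dissertation's middleware, with an in-memory queue replacing the radio link.

```python
# Hedged sketch of the Internet-of-the-Future data path (not the
# dissertation's middleware): sensor nodes produce readings, a base
# station collects them and forwards batches to a back-end computer.
import queue
import random
import time

readings = queue.Queue()                      # stands in for the radio link

def sensor_node(node_id: int) -> None:
    """Emit one (node, timestamp, value) reading per call."""
    readings.put((node_id, time.time(), round(random.uniform(18.0, 26.0), 2)))

def base_station(batch_size: int = 4) -> list[tuple]:
    """Collect a batch of readings to forward for further processing."""
    batch = []
    while len(batch) < batch_size and not readings.empty():
        batch.append(readings.get())
    return batch

for node in range(4):          # four hypothetical temperature nodes
    sensor_node(node)
print(base_station())          # the batch a back-end service would ingest
```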
Abstract:
Within the membrane computing research field there are many papers about software simulations and a few about hardware implementations. In both cases, algorithms for implementing membrane systems in software and hardware that try to take advantage of massive parallelism are implemented. P-systems are parallel and non-deterministic systems that simulate the behaviour of membranes when processing information. This paper presents software techniques based on the proper use of a computer's virtual memory, including a study of how much virtual memory is necessary to host a membrane model. This method improves performance in terms of time.
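A rough back-of-the-envelope estimator of the virtual memory needed to host a membrane model might look like the sketch below; it assumes one counter per (membrane, object) pair plus a fixed per-membrane overhead, which is an illustrative assumption rather than the paper's method.

```python
# Rough estimator, not the paper's method: virtual memory needed to host
# a P-system whose membranes each store a multiset of objects as counters.
def estimate_bytes(num_membranes: int,
                   distinct_objects: int,
                   bytes_per_counter: int = 8,
                   overhead_per_membrane: int = 64) -> int:
    """One counter per (membrane, object) pair plus fixed per-membrane overhead."""
    return num_membranes * (distinct_objects * bytes_per_counter
                            + overhead_per_membrane)

# e.g. 10_000 membranes with 256 distinct object types:
print(estimate_bytes(10_000, 256) / 2**20, "MiB")   # ~20 MiB
```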
Abstract:
Web-based education or "e-learning" has become a critical component of higher education over the last decade, replacing other distance-learning methods such as traditional computer-based training or correspondence learning. The number of university students who take online courses is continuously increasing all over the world. In Spain, nearly 90% of universities have an institutional e-learning platform and over 60% of traditional on-site courses use this technology as a supplement to face-to-face classes. This form of learning removes geographical barriers and enables students to schedule their own learning process, among other advantages. Online education is delivered through specific software called an "e-learning platform" or "virtual learning environment" (VLE). A considerable number of web-based tools for delivering distance courses are currently available. Open source packages such as Moodle, Sakai, dotLRN or Dokeos are the most commonly used in the virtual campuses of Spanish universities. This paper analyzes the possibilities that virtual learning environments offer university teachers and learners and provides a technical comparison of some of the most popular e-learning platforms.
Abstract:
To perform advanced manipulation of remote environments, such as grasping, more than one finger is required, which implies higher requirements for the control architecture. This paper presents the design and control of a modular 3-finger haptic device that can be used to interact with virtual scenarios or to teleoperate dexterous remote hands. In a modular haptic device, each module allows interaction with a scenario using a single finger; hence, multi-finger interaction can be achieved by adding more modules. The control requirements for a multi-finger haptic device are analyzed and a new hardware/software architecture for this kind of device is proposed. The software architecture described in this paper is distributed, and the different modules communicate to enable remote manipulation. Moreover, an application in which this haptic device is used to interact with a virtual scenario is shown.
Abstract:
This final degree project (PFC) is a joint effort by the students Álvaro Morillas and Fernando Sáez and the professor Vladimir Ulin to develop a didactic unit on the engineering simulation software Virtual.Lab. The version used throughout this text is release 11, published in August 2012. Virtual.Lab, from the Belgian developer LMS International, is a computer-aided engineering platform that brings together in a single application several complementary tools for product design, from geometric definition to durability, noise and optimization analyses. Of all the simulations the program allows, however, this project deals only with those related to acoustics. It is worth highlighting that many of the concepts handled in Virtual.Lab are compatible with CATIA V5, since both programs are installed together and work jointly; the reader of this project will therefore be able to transfer the acquired knowledge to one of the standard programs in the aeronautical, naval and automotive industries, among others. Before this project, other students of the School completed final degree projects in the field of computer-based acoustic simulation. A common feature of those works is that they required different programs for each simulation stage (for example, ANSYS for modelling, meshing and vibration analysis, and SYSNOISE for the acoustic simulations, plus auxiliary translation software to convert between their file formats). With Virtual.Lab this need disappears and the time required is reduced, so simulations can be carried out in a faster, more flexible way. Because computer-based solutions are gaining ever more importance in industry, those responsible for this project see a need to train professionals in this field; to respond to the business demand for qualified workers, study plans are expected to include more courses on this subject in the coming years. The authors therefore intend this material to be useful for learning and teaching these subjects in future courses. All of this justifies the relevance of this project as a manual to introduce interested students to a current system that is widely used in other European technical universities and has good future prospects. The project includes several examples that can be run from the program, as well as explanatory videos that illustrate the simulation processes graphically and guide prospective users of the software; these files can be found on the accompanying CD.
Abstract:
The technological world is changing towards optimized resource management thanks to the powerful influence of technologies such as virtualization and cloud computing. This report takes a close look at both, from the causes that motivated them to their latest trends, identifying along the way their main features, advantages and disadvantages. At the same time, the Digital Home is already a reality for most people. It provides access to multiple types of telecommunication networks (3G, 4G, Wi-Fi, ADSL...) of varying capacity that allow Internet connections from anywhere, at any time, and with practically any device (personal computers, smartphones, tablets, televisions...). Companies take advantage of this to offer all kinds of services. Some of these services are based on cloud computing, above all offering cloud storage to devices with limited capacity, such as smartphones and tablets. That storage space normally resides on servers under the control of large companies. Storing private documents, videos and photos without being certain that no one consults them without consent can raise misgivings in users. For those users who want control over their privacy, this project offers the possibility of setting up their own servers and their own cloud service, so they can share their private information only with family and friends, or with anyone they grant permission to. During the project, several solutions have been compared, most of them open source and freely distributable, that can deploy at least a storage service accessible over the Internet. Some of them complement this with streaming services for music and video, document sharing and synchronization across multiple devices, calendars, backups, desktop virtualization, file versioning, chats, and so on. The project ends with a demonstration of how devices in a digital home interact with a cloud server on which one of the compared solutions has previously been installed and configured. This server is packaged in a virtual machine so that it is easily transportable and usable.
Abstract:
Recommender systems in e-learning have proved to be powerful tools for finding suitable educational material during the learning experience, but traditional user request-response patterns are still being used to generate these recommendations. By including contextual information derived from the use of ubiquitous learning environments, the possibility of incorporating proactivity into the recommendation process has arisen. In this paper we describe methods to push proactive recommendations to e-learning system users when the situation is appropriate, without requiring their explicit request. As a result, interesting learning objects can be recommended according to the user's needs in every situation. The impact of the proactive recommendations generated has been evaluated among teachers and scientists in a real e-learning social network called Virtual Science Hub, related to the GLOBAL excursion European project. The outcomes indicate that the proposed methods are valid for generating such recommendations in e-learning scenarios. The results also show that users' perceived appropriateness of receiving proactive recommendations is high.
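As a hypothetical sketch of how such a proactivity decision could be expressed (not the method evaluated in the paper), the snippet below combines a few contextual signals into an appropriateness score and pushes a recommendation only when the score exceeds a threshold; the signals and weights are illustrative assumptions.

```python
# Hypothetical sketch of a proactivity decision (not the paper's method):
# push a recommendation only when the user's current context suggests it
# is an appropriate moment, instead of waiting for an explicit request.
def should_push(context: dict, threshold: float = 0.7) -> bool:
    """Combine a few contextual signals into a single appropriateness score."""
    score = (0.4 * context.get("idle", 0.0)          # user not busy
             + 0.4 * context.get("topic_match", 0.0) # object fits current activity
             + 0.2 * context.get("recency", 0.0))    # nothing pushed recently
    return score >= threshold

ctx = {"idle": 0.9, "topic_match": 0.8, "recency": 1.0}
if should_push(ctx):
    print("push recommendation to the learner")
```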
Abstract:
On 17 December the Community standard on marine fuels came into force. Bunker prices are expected to increase; recent statistics support this argument, and the difference between high-sulphur (HS) and low-sulphur (LS) marine bunkers will be sustained. Considering also the price difference between the base market of Rotterdam and the rest of the European ports, the expected bunker prices will be higher in the Mediterranean. This paper begins with a review of the current situation in ECA areas, highlighting the rules to be implemented shortly. The aim of the paper is to describe the current bunkering situation and to estimate its short-term evolution in Spain with respect to the world fleet.
Abstract:
By combining virtualization technologies, virtual private network techniques and the parameterization of network scenarios, it is possible to enhance a networking laboratory, typically carried out on university premises using equipment located there, by interconnecting it with virtual networks running on the students' own personal computers. This paper describes some experiences applying this model to create hands-on assignments for a large group of students in computer networking education.
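A minimal sketch of scenario parameterization under stated assumptions (a unique /24 per student and an illustrative VPN endpoint, not the paper's actual tooling) could look like this:

```python
# Hypothetical sketch of scenario parameterization (not the paper's tooling):
# derive a per-student addressing plan so each student's virtual network
# can be interconnected with the laboratory over a VPN without collisions.
import ipaddress

def student_scenario(student_id: int) -> dict:
    base = ipaddress.ip_network("10.100.0.0/16")          # illustrative lab range
    subnets = list(base.subnets(new_prefix=24))
    lan = subnets[student_id]                              # unique /24 per student
    return {
        "student_id": student_id,
        "lan": str(lan),
        "router_addr": str(list(lan.hosts())[0]),
        "vpn_endpoint": f"lab-gw.example.org:{1194 + student_id}",  # illustrative
    }

print(student_scenario(7))
```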