62 results for BIM, Building Information Modeling, Cloud Computing, CAD, FM, GIS
Abstract:
It has been shown that cloud computing brings cost benefits and promotes efficiency in the operations of organizations, regardless of their type or size. However, few public organizations are benefiting from this paradigm shift in the way organizations consume and manage computational resources. The objective of this thesis is to analyze the internal and external factors that may influence the adoption of cloud computing by public organizations and to propose strategies that can assist these organizations on their path to cloud usage. To achieve this objective, a SWOT analysis was conducted, identifying internal factors (strengths and weaknesses) and external factors (opportunities and threats) that can influence the adoption of a governmental cloud. By applying a TOWS matrix, which combines the internal and external factors, a list of possible strategies has been formulated to serve as a guide for decision-making related to the transition to a cloud environment.
Abstract:
With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from one cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded because some of the objects they store are frequently accessed, while other cache instances are less frequently used. In this paper we propose a multi-resource load balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and memory resources among cache instances by redistributing stored data. Considering the possible conflict of balancing multiple resources at the same time, we give the CPU and memory resources weighted priorities based on the runtime load distributions: a scarcer resource is given a higher weight than a less scarce one when load balancing. The system imbalance degree is evaluated from monitoring information and from the utility load of a node, a unit of resource consumption. Moreover, since continuous rebalancing of the system may affect the QoS of applications using the cache, our data selection policy ensures that each data migration minimizes the system imbalance degree, and hence that the total reconfiguration cost is minimized. An extensive simulation is conducted to compare our policy with other policies. Our policy shows a significant improvement in time efficiency and a decrease in reconfiguration cost.
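The abstract above describes a weighted multi-resource imbalance degree and a greedy data selection policy, but gives no formulas. The following is a minimal sketch, under assumed definitions, of how such an imbalance measure and migration choice could look; the dictionary fields, the weight rule and the dispersion measure are illustrative assumptions, not the paper's actual definitions.

    # Hypothetical sketch: weighted multi-resource imbalance degree for a set of
    # cache instances, and a greedy choice of the migration that reduces it most.
    from statistics import mean, pstdev

    def resource_weights(nodes):
        # Give the scarcer resource (higher average utilization) the higher weight.
        avg_cpu = mean(n["cpu"] for n in nodes)
        avg_mem = mean(n["mem"] for n in nodes)
        total = avg_cpu + avg_mem or 1.0
        return {"cpu": avg_cpu / total, "mem": avg_mem / total}

    def imbalance_degree(nodes):
        # Weighted sum of the per-resource dispersion across instances.
        w = resource_weights(nodes)
        return sum(w[r] * pstdev(n[r] for n in nodes) for r in ("cpu", "mem"))

    def pick_migration(nodes, candidates):
        # candidates: list of (object_load, source_node) pairs, where object_load
        # is a dict like {"cpu": 0.05, "mem": 0.10} and source_node is in nodes.
        best = None
        for obj, src in candidates:
            for dst in nodes:
                if dst is src:
                    continue
                for r in ("cpu", "mem"):   # tentatively apply the move
                    src[r] -= obj[r]; dst[r] += obj[r]
                score = imbalance_degree(nodes)
                for r in ("cpu", "mem"):   # undo the tentative move
                    src[r] += obj[r]; dst[r] -= obj[r]
                if best is None or score < best[0]:
                    best = (score, obj, src, dst)
        return best   # lowest resulting imbalance degree wins

Evaluating each candidate move against the resulting imbalance degree mirrors the stated goal that every migration should minimize the imbalance and thus keep the reconfiguration cost low.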
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But the complexity of large-scale distributed systems could be just a matter of perspective: it could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
The technological world is changing towards optimized resource management thanks to the powerful influence of technologies such as virtualization and cloud computing. This document takes a closer look at both, from the causes that motivated them to their latest trends, covering their main features, advantages and disadvantages. In addition, the Digital Home is already a reality for most people. It provides access to multiple types of telecommunication networks (3G, 4G, Wi-Fi, ADSL...) with varying capacity, allowing Internet connections from anywhere, at any time, and with virtually any device (personal computers, smartphones, tablets, televisions...). Companies take advantage of this to offer all kinds of services. Some of these services are based on cloud computing, above all offering cloud storage to devices with limited capacity, such as smartphones and tablets. That storage space normally resides on servers under the control of large companies. Saving private documents, videos and photos without being sure that they are not viewed by anyone without consent can raise suspicion in some users. For those users who want control over their privacy, there is the option of setting up their own servers and their own cloud service, sharing their private information only with family and friends or with anyone they grant permission. During the project, several solutions have been compared, most of them open source and freely distributed, that allow deploying at least a storage service accessible through the Internet. Some of them complement it with streaming services for music and video, sharing and synchronization of documents across multiple devices, calendars, backups, desktop virtualization, file versioning, chats, etc. The project ends with a demonstration of how devices in a digital home interact with a cloud server on which one of the compared solutions has previously been installed and configured. This server is packaged in a virtual machine so that it is easily transportable and usable.
Abstract:
We are living through an age of Internetification. Nowadays, Internet connections are a utility whose presence one can simply assume. The Web has become a place where content is generated by users. The information generated surpasses the notion with which the World Wide Web emerged because, in most cases, this content has been designed to be consumed by humans and not by machines. This implies a change of mindset in the way we design systems: they must be able to support a computational and storage load that grows with no apparent end. At the same time, higher education is in a state of crisis: the high cost of high-quality education threatens the academic world. Through the use of technology, an increase in productivity and quality, and a reduction of these costs, can be achieved in a field that has remained largely unchanged since the Renaissance. In CloudRoom, a MOOC platform has been designed with an architecture that follows the latest conventions in Cloud Computing, which involves the use of REST services and NoSQL databases, and applies the latest W3C recommendations on web development and Linked Data. For its construction, agile Software Engineering methods, Human-Computer Interaction techniques, and state-of-the-art technologies such as Neo4j, Redis, Node.js, AngularJS, Bootstrap, HTML5, CSS3 and Amazon Web Services have been used. A comprehensive Informatics Engineering effort has been carried out, combining virtually all of the fundamental areas of knowledge in Computer Science. In short, the foundations have been laid for a robust, maintainable, distributed system with social and semantic capabilities, which can run on multiple devices and is able to scale to millions of users.
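The abstract names the architectural style (stateless REST services over NoSQL stores) rather than any concrete code. As a hedged illustration of that style only, the sketch below exposes a resource backed by Redis; it is not taken from the CloudRoom codebase (which the abstract says is built on Node.js), and the endpoint and field names are hypothetical.

    # Illustrative sketch of a stateless REST resource backed by a NoSQL store.
    # Endpoint, key and field names are assumptions; CloudRoom itself uses Node.js.
    import json
    import redis
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    store = redis.Redis(host="localhost", port=6379, db=0)

    @app.route("/courses/<course_id>", methods=["GET"])
    def get_course(course_id):
        raw = store.get(f"course:{course_id}")
        if raw is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(json.loads(raw))

    @app.route("/courses/<course_id>", methods=["PUT"])
    def put_course(course_id):
        store.set(f"course:{course_id}", json.dumps(request.get_json()))
        return jsonify({"status": "stored"}), 201

    if __name__ == "__main__":
        app.run()

Because each request carries all the state it needs and persistence lives in the external store, instances of such a service can be replicated freely, which is what allows this kind of architecture to scale horizontally.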
Abstract:
Recent developments in the area of multiscale modeling of fiber-reinforced polymers are presented. The overall strategy takes advantage of the separation of length scales between the different entities (ply, laminate, and component) found in composite structures. This allows us to carry out multiscale modeling by computing the properties of one entity (e.g., individual plies) at the relevant length scale, homogenizing the results into a constitutive model, and passing this information to the next length scale to determine the mechanical behavior of the larger entity (e.g., the laminate). As a result, high-fidelity numerical simulations of the mechanical behavior of composite coupons and small components are nowadays feasible starting from the matrix, fiber, and interface properties and spatial distribution. Finally, a roadmap is outlined for extending the current strategy to include functional properties and processing into the simulation scheme.
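As a hedged illustration of the homogenization step mentioned above (not of the constitutive models actually used in the work), the simplest ply-level example is the rule of mixtures for the longitudinal stiffness of a unidirectional ply; the constituent moduli and fiber volume fraction below are assumed example values.

    # Illustrative only: the simplest homogenization rule (rule of mixtures) for
    # the longitudinal Young's modulus of a unidirectional ply. Real multiscale
    # analyses use far richer constitutive models; values below are assumptions.
    def longitudinal_modulus(E_fiber, E_matrix, V_fiber):
        """E1 = Vf * Ef + (1 - Vf) * Em (Voigt estimate, fibers and matrix in parallel)."""
        return V_fiber * E_fiber + (1.0 - V_fiber) * E_matrix

    # Example: carbon fiber (~230 GPa), epoxy matrix (~3.5 GPa), 60% fiber volume.
    print(longitudinal_modulus(230.0, 3.5, 0.60))  # ~139.4 GPa

The homogenized value then feeds the next length scale, where the ply is treated as a single material when computing the laminate response.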
Abstract:
New technologies, such as the new Information and Communication Technologies (ICT), break new paths and redefine the way we understand business; cloud computing is one of them. On-demand resource provisioning and pay-per-use pricing are now commonplace and allow companies to save on their ICT investments. Despite the importance of this issue, we still lack methodologies that help companies develop applications oriented towards their exploitation in the Cloud. In this study we aim to fill this gap and propose a methodology for the development of ICT applications that are aligned with a business model and subsequently outsourced to the Cloud. In the first part, the development of SOA applications, we take as a baseline scenario a business model from which to obtain a business process model; to this end, we use software engineering tools. In the second part, the outsourcing, we propose a guide to facilitate moving business models into the Cloud; to this end we describe a SOA governance model, which controls the SOA. Additionally, we propose a Cloud governance model that integrates Service Level Agreements (SLAs), SOA governance, and Cloud architecture. Finally, we apply our methodology to an example illustrating our proposal. We believe that our proposal can be used as a guide or pattern for the development of business applications.
Abstract:
Real-world experimentation facilities accelerate the development of Future Internet technologies and services, advance the market for smart infrastructures, and increase the effectiveness of business processes through the Internet. The federation of facilities fosters experimentation and innovation with a larger and more powerful environment, increases the number and variety of the offered services, and opens up possibilities for new experimentation scenarios. This paper introduces a management solution for cloud federation that automates service provisioning to the largest possible extent, relieves developers from time-consuming configuration settings, and provides real-time information related to the whole lifecycle of the provisioned services. This is achieved by proposing solutions for the seamless deployment of services across the federation and for the ability of services to span different infrastructures of the federation, as well as for the monitoring of resources and data, which can be aggregated with a common structure and offered as an open ecosystem for innovation at the developers' disposal. The solution consists of several federation management tools and components that are part of the work on cloud federation conducted within the XIFI project to build the federation of cloud infrastructures for the Future Internet Lab (FIWARE Lab). We present the design and implementation of the FIWARE Lab management tools and components that make up the solution, deployed within a federation of 17 cloud infrastructures distributed across Europe.
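The paper's monitoring components are not detailed in the abstract. As a minimal sketch of the aggregation idea only, the snippet below normalizes per-infrastructure monitoring records into one common structure before averaging them federation-wide; every field name and both source formats are assumptions, not FIWARE Lab or XIFI interfaces.

    # Hypothetical sketch: normalize heterogeneous per-infrastructure monitoring
    # records into a common structure, then aggregate across the federation.
    from collections import defaultdict

    def normalize(record, infrastructure):
        return {
            "infrastructure": infrastructure,
            "resource_id": record.get("id") or record.get("resource"),
            "metric": record.get("metric", "cpu_util"),
            "value": float(record.get("value", 0.0)),
            "timestamp": record.get("ts") or record.get("timestamp"),
        }

    def aggregate(records_by_site):
        """Average each metric across all infrastructures of the federation."""
        totals, counts = defaultdict(float), defaultdict(int)
        for site, records in records_by_site.items():
            for rec in records:
                common = normalize(rec, site)
                totals[common["metric"]] += common["value"]
                counts[common["metric"]] += 1
        return {metric: totals[metric] / counts[metric] for metric in totals}

A shared record structure of this kind is what lets data collected by independently operated infrastructures be exposed through one federation-wide view.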
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even to fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from developing to an adequate level, and the myriad of different options has induced the fear of technology lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on idealized scenarios dissimilar from real-world situations, and the latter developing solutions without taking care of how they fit with common standards, or even without disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that focuses on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model, developed to be compatible with common standards. This model splits the environment into two views, which separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed over the environment and are used to realize the cloud environment's scalability. I also describe a management engine that, starting from the environment's state and using the aforementioned set of actions, is tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) with an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former able to collect the necessary information from the environment, and the latter modular in design and capable of interfacing with several technologies and offering several access interfaces.
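The abstract contrasts an online heuristic phase with an ILP-based consolidation phase but does not spell out either. The sketch below is a minimal stand-in for that two-phase idea, not the thesis's actual engine: the online step places an arriving VM greedily, and the consolidation step searches for the packing that uses the fewest hosts (here by brute force, in place of an integer programming solver); data structures and capacities are assumptions.

    # Hypothetical sketch of a two-phase placement scheme (online + consolidation).
    from itertools import product

    def place_online(vm_size, hosts):
        """Online phase: first host with enough free capacity (greedy heuristic)."""
        for host in hosts:
            if host["free"] >= vm_size:
                host["free"] -= vm_size
                host["vms"].append(vm_size)
                return host["name"]
        return None  # no capacity left; would trigger scaling or rejection

    def consolidate(vms, capacity, n_hosts):
        """Consolidation phase: exhaustive search for the feasible assignment that
        uses the fewest hosts (a toy stand-in for the ILP formulation)."""
        best = None
        for assignment in product(range(n_hosts), repeat=len(vms)):
            load = [0.0] * n_hosts
            for vm, host in zip(vms, assignment):
                load[host] += vm
            if all(l <= capacity for l in load):
                used = sum(1 for l in load if l > 0)
                if best is None or used < best[0]:
                    best = (used, assignment)
        return best

Splitting the work this way reflects the lifecycle described above: fast, local decisions while the system runs, and a slower, globally optimal repacking at consolidation time.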
Abstract:
The thesis is focused on solving combinatorial optimization problems using the technological options currently offered by information and communication technologies, together with operations research. Combinatorial optimization problems are generally solved by linear programming and metaheuristics. Applying these techniques requires a high computational load, and the algorithms must be designed, on the one hand, to be effective at finding good solutions to the problem and, on the other hand, to make proper use of the available computing resources. Linear programming and metaheuristics are generic resolution techniques that can be applied to different problems, starting from a common base that is particularized for each specific problem. In the field of software development, frameworks fulfil this function: a project starts with the general work already available, with the option of changing or extending that generic base behavior to build the concrete system, which reduces development time and improves the project's chances of success. In this thesis, two development frameworks have been designed and implemented. The ILP framework allows linear programming problems to be modeled and solved independently of the linear programming solver used. The LME framework is designed for solving combinatorial optimization problems using metaheuristics. Traditionally, applications for solving combinatorial optimization problems are desktop applications that let the user manage all the input information of the problem and solve it locally with the available hardware resources. Recently, a new deployment paradigm has appeared that allows hardware and software resources to be shared over the Internet. This new way of using computing resources is cloud computing, which introduces the software-as-a-service (SaaS) model. In this thesis, a SaaS platform has been built for solving combinatorial optimization problems; it is deployed on architectures composed of multi-core processors and graphics cards, and provides solution algorithms based on the linear programming and metaheuristic frameworks. The SaaS infrastructure is independent of the combinatorial optimization problem to be solved, and three problems, selected for their practical importance, are fully integrated into the platform. One of the problems addressed in the thesis is the vehicle routing problem (VRP), whose goal is to compute the least-cost routes of a fleet of vehicles that distributes goods to all customers. Starting from the most classical version of the problem, the VRP has been studied in two directions. On the one hand, the speed-up obtained when the problem is solved on graphics cards has been quantified. On the other hand, the impact on execution speed and solution quality has been studied when the problem is solved with the ant colony optimization (ACO) metaheuristic and linear programming is introduced to optimize the individual routes of each vehicle. This problem has been developed with the ILP and LME frameworks and is available in the SaaS platform. Another problem addressed in the thesis is the fleet assignment problem (FAP), whose goal is to create least-cost routes for the vehicle fleet of a passenger transport company. A new problem model has been defined that includes features of problems presented in the literature and adds new ones, making it possible to model the business requirements of today's passenger transport companies. This new model solves, in an integrated way, the problem of defining trip timetables, the problem of assigning the vehicle type, and the problem of creating vehicle rotations. A linear programming model has been created for the problem, and it has been solved both by linear programming and by ACO. This problem has been developed with the ILP and LME frameworks and is available in the SaaS platform. The last problem addressed in the thesis is the tactical workforce planning problem (TWFP), which consists of defining the least-cost staff configuration to cover a variable workload demand. A problem model that is very flexible in the definition of contracts has been defined, allowing the model to be used in various productive sectors. A linear programming mathematical model has been defined to represent the problem. A series of use cases has been defined to show the versatility of the problem model and to simulate the decision-making process of configuring a workforce, economically quantifying every decision that is made. This problem has been developed with the ILP framework and is available in the SaaS platform.
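The VRP study described above combines a metaheuristic, which assigns customers to vehicles, with an exact re-optimization of each vehicle's individual route. The sketch below illustrates only that hybrid idea, with the exact step reduced to a brute-force tour over one vehicle's few customers, standing in for the thesis's linear programming formulation; the coordinates and the Euclidean distance are assumed.

    # Hypothetical sketch: exact re-optimization of one vehicle's route inside a
    # metaheuristic-driven VRP solver (toy stand-in for the LP-based route step).
    from itertools import permutations
    from math import dist

    def route_cost(depot, stops):
        points = [depot] + list(stops) + [depot]
        return sum(dist(a, b) for a, b in zip(points, points[1:]))

    def optimize_single_route(depot, customers):
        """Exhaustive search over visit orders; only viable for small routes."""
        best_order, best_cost = None, float("inf")
        for order in permutations(customers):
            cost = route_cost(depot, order)
            if cost < best_cost:
                best_order, best_cost = order, cost
        return best_order, best_cost

    # Example: one vehicle serving four customers assigned by the metaheuristic.
    depot = (0.0, 0.0)
    customers = [(2.0, 1.0), (1.0, 4.0), (5.0, 2.0), (3.0, 3.0)]
    print(optimize_single_route(depot, customers))

The division of labor is the point: the metaheuristic explores the assignment space, while the exact step guarantees that each route, taken in isolation, is as cheap as possible.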
Abstract:
Emerging technologies such as cloud computing and mobile devices are creating an unprecedented opportunity for enhancing the educational system, letting educators customize and improve the learning experience, and letting students acquire knowledge regardless of where they are. Moreover, through gamification techniques it is possible to encourage and motivate students to learn arduous subjects by making the experience more engaging. Mobile games can be a perfect vehicle to support this enhanced learning experience. This project covers the design and development of a highly scalable and performant cloud architecture, as well as the iOS client that uses it, in order to support a new version of Temporis, a mobile multiplayer game focused on ordering time-based events (e.g. history, art, sports, entertainment and literature) in a timeline, which is currently available on Google Play. This work describes the development of the new Temporis version (Temporis v.2.0), providing details about the improvements and the adaptation of the original Temporis. In particular, the new backend, written in Go on Google App Engine and created to support thousands of users, is described, along with other features such as how to deliver push notifications from this platform. Finally, the iOS client in Temporis v.2.0 has been developed using the latest and most relevant technologies, paying special attention to Swift (Apple's new programming language, which is safe and fast), the Functional Reactive paradigm (which helps to build highly interactive apps while minimizing bugs) and the VIPER architecture (an architecture that follows the SOLID principles, enforces separation of concerns, and makes it easy to reuse code on other platforms).
Abstract:
Over the last few years, the data center market has grown exponentially, and this tendency continues today. As a direct consequence of this trend, the industry is pushing the development and implementation of new technologies that improve the energy efficiency of data centers. An adaptive dashboard would allow the user to monitor the most important parameters of a data center in real time. For that reason, monitoring companies work with IoT big data filtering tools and cloud computing systems to handle the amounts of data obtained from the sensors placed in a data center. Analyzing the market trends in this field, we can affirm that the study of predictive algorithms has become an essential area for competitive IT companies. Complex algorithms are used to forecast risk situations based on historical data and warn the user in case of danger. Considering that several different users will interact with this dashboard, from IT experts and maintenance staff to accounting managers, it is vital to personalize it automatically. Following that line of thought, the dashboard should only show metrics relevant to each user, in different formats such as overlaid maps or representative graphs, among others. These maps will show all the information needed in a visual and easy-to-evaluate way. To sum up, this dashboard will allow the user to visualize and control a wide range of variables. Monitoring essential factors such as average temperature, gradients or hotspots, as well as energy and power consumption and savings by rack or building, would allow clients to understand how their equipment is behaving, helping them to optimize the energy consumption and efficiency of the racks. It would also help to prevent possible damage to the equipment through predictive algorithms.
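The abstract describes forecasting risk situations from historical sensor data without naming a method. As a toy, hedged example of that kind of rule only, the sketch below fits a linear trend to a rack's recent inlet temperature and raises a warning if the extrapolation crosses a threshold; the threshold, window and horizon are assumptions, not values from the text.

    # Illustrative only: forecast-and-warn rule over a rack temperature series.
    def forecast(history, horizon):
        """Least-squares linear fit over (t, value) samples, evaluated at t + horizon."""
        n = len(history)
        mean_t = sum(t for t, _ in history) / n
        mean_v = sum(v for _, v in history) / n
        num = sum((t - mean_t) * (v - mean_v) for t, v in history)
        den = sum((t - mean_t) ** 2 for t, _ in history) or 1.0
        slope = num / den
        last_t = history[-1][0]
        return mean_v + slope * (last_t + horizon - mean_t)

    def check_rack(history, threshold=32.0, horizon=600):
        predicted = forecast(history, horizon)
        return ("WARNING" if predicted >= threshold else "OK", predicted)

    # Example: (seconds, degrees Celsius) samples from one rack inlet sensor.
    samples = [(0, 24.1), (300, 25.0), (600, 25.8), (900, 26.9), (1200, 27.7)]
    print(check_rack(samples))  # trend stays below 32 C, so "OK"

Production systems would use richer models and per-user alerting rules, but the shape of the computation (history in, forecast out, threshold check) is the same.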
Abstract:
Traditional architectural design and construction procedures have proven deficient where process optimization is concerned, particularly when compared to other common industrial activities. The ever-growing drive to achieve effective industrialization, both in favor of reaching greater quality levels and of sustainable management of resources, has a better chance today than ever through a means from the realm of information technology: Building Information Modelling, or BIM. What may initially seem to be merely another type of computer program in reality turns out to be a "process" concept that subverts many of today's routines in architectural design and construction. Including and working with project data from the very beginning to the end of the full life cycle allows for creating a dynamic and updatable virtual reality, enabling testing and optimization throughout: before and during execution, all the way to the end of its lifespan. In addition, there is the opportunity to transmit complete project data efficiently, with hardly any loss or rework, to the manufacturing chain, which facilitates attaining a truly significant industrialization within the construction industry. In the presence of a worldwide call for optimizing resources, along with an undeniable interest in increasing economic benefits by reducing the uncertainty factors in its processes, BIM undoubtedly offers a chance for improvement, as acknowledged by its imminent mandatory implementation on the part of governments (for example, the United Kingdom in 2016 and Spain in 2018). The changes in professional roles and procedures involved in incorporating BIM are highly significant and will set the course for future graduates of the Architecture, Engineering and Construction (AEC) disciplines within their professions. Higher education must respond to such needs swiftly, incorporating this methodology into its curricula and providing a synergetic vision that draws out the educational benefits inherent in the BIM framework. In this respect BIM, by gathering the data set under one single virtual model, offers a uniquely interesting potential. The three-dimensional reality of the model, under continuous development and updating, offers students a radically different way of handling graphic representation, in which the partial views of sections and plans, so difficult to assimilate at the beginning of university studies, become mere after-the-fact requests, to be extracted as needed directly from the virtual model. The design is always carried out on the single model itself, independently of the working view chosen at any particular moment, with all data and their constructive relationships permanently updated and fully coherent. This condensed description of BIM's features already outlines a large part of the educational benefits offered by BIM processes, particularly with regard to integrated design development and information management (including ICT). It also highlights the ease with which visual understanding is achieved regarding architectural elements, technical systems, their intrinsic relationships, and construction processes. In addition, there is the experimental development the BIM platform grants through its collaborative software: simulation of the structural, energy and economic behavior, among many others, of the virtual model based on the data inherent to the project. This doctoral dissertation presents a broad study intended to make explicit both the virtues and the possible reservations in the use of BIM processes within the framework of a specific discipline: teaching Architecture. To do so, a general literature review on BIM and a specific one on teaching in Architecture have been carried out, together with an analysis of the experiences of different stakeholder groups within the specific context of teaching Architecture at Universidad Europea de Madrid. Various educational experiences are described, and the academic management of the experimental implementation has been analyzed. The analysis of the benefits and reservations regarding the use of BIM has been approached through student surveys and interviews with professionals from the AEC sector, whether associated with BIM or not. The conclusions of this study are synthesized into a Framework of Implementation of the BIM methodology, which, for greater clarity and ease of communication and use, has been laid out in an eminently graphic manner. It offers guidance on teaching actions for the development of specific competences, taking advantage of the conceptual flexibility of the curricula within the European Higher Education Area (Bologna Declaration) to incorporate the new teaching tool naturally, in the service of the legally established educational objectives. The global approach of the proposed Implementation Framework facilitates planning educational actions from an overall perspective: combining one-off and vehicular BIM formats, establishing cross-disciplinary synergies, and harmonizing resources, so that the methodology can benefit both the assimilation of the knowledge and skills established for the degree and the BIM learning flow itself. At the same time, it reserves, even visually, those areas of knowledge in which, at least in the current planning, the inclusion of BIM processes is not considered advantageous over other methodologies, or is even inadequate for the established learning objectives. It is this last categorization that characterizes the conclusions of this research as a whole, centered on: 1. the unquestionable need to teach BIM concepts and processes from the very early stages of university education in Architecture; 2. the additional educational benefits that BIM provides in the development of the very diverse competences contemplated in the academic curriculum; and 3. the specific nature of the professional role of the architect, which will demand a careful and balanced implementation of BIM that respects the traditionally effective methodologies of creative development and adds value through a symbiotic reorientation towards parametric design and digital fabrication that finally enables generative design.
Abstract:
Complexity has always been one of the most important issues in distributed computing. From the first clusters to grid and now cloud computing, dealing correctly and efficiently with system complexity is the key to taking the technology a step further. In this sense, global behavior modeling is an innovative methodology aimed at understanding grid behavior. The main objective of this methodology is to synthesize the grid's vast, heterogeneous nature into a simple but powerful behavior model, represented in the form of a single, abstract entity with a global state. Global behavior modeling has proved to be very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It generates a descriptive model that could be greatly improved if extended not only to explain behavior, but also to predict it. In this paper we present a prediction methodology whose objective is to define the techniques needed to create global behavior prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this approach.
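The abstract presents the idea of a single global state without giving the prediction techniques themselves. The sketch below illustrates only the general shape of that idea: collapse many per-resource metrics into one global state value, then forecast its next value from recent history; the mean-utilization summary and the one-step extrapolation are assumptions, not the paper's methods.

    # Hypothetical sketch: predict the next value of a single global-state metric.
    def global_state(resources):
        """Summarize the whole grid as one number, e.g. mean utilization."""
        return sum(resources) / len(resources)

    def predict_next(history, order=3):
        """Naive one-step forecast: extrapolate the mean of the recent increments."""
        recent = history[-(order + 1):]
        increments = [b - a for a, b in zip(recent, recent[1:])]
        return history[-1] + sum(increments) / len(increments)

    # Example: global utilization snapshots taken at each monitoring interval.
    snapshots = [0.41, 0.44, 0.47, 0.52, 0.55]
    print(predict_next(snapshots))  # ~0.587 from the last three increments

Working on one aggregated signal instead of thousands of per-resource series is what makes this kind of global prediction tractable for management tasks such as scheduling or fault tolerance.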
Abstract:
The number of online real-time streaming services deployed over network topologies such as P2P or centralized ones has increased remarkably in recent years. This has revealed the lack of networks that are well prepared to respond to this kind of traffic. A hybrid distribution network can be an efficient solution for real-time streaming services. This paper presents the experimental results of streaming distribution over a hybrid architecture that consists of mixed connections among P2P and Cloud nodes that can interoperate. We have chosen to represent the P2P nodes as PlanetLab machines spread around the world, and the cloud nodes using a Cloud provider's network. First we present an experimental validation of the Cloud infrastructure's ability to distribute streaming sessions with respect to some key streaming QoS parameters: jitter, throughput and packet losses. Next we show the results obtained from different test scenarios in which a hybrid distribution network is used. The scenarios measure the improvement of the multimedia QoS parameters as nodes in the streaming distribution network (located in different continents) are gradually moved into the Cloud provider infrastructure. The overall conclusion is that, unlike in traditional P2P systems and CDNs, the QoS of a streaming service can be efficiently improved by deploying a hybrid streaming architecture. This enhancement can be obtained by strategically placing certain distribution network nodes in the Cloud provider infrastructure, taking advantage of the reduced packet loss and low latency that exist among its datacenters.
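The paper's measurement tooling is not described in the abstract. As a hedged sketch of how the three QoS parameters named above could be computed on the receiving side, the snippet below derives jitter, throughput and packet loss from per-packet sequence numbers, timestamps and sizes; the smoothing constant follows the RFC 3550 style interarrival jitter estimate, and all inputs are assumed.

    # Illustrative receiver-side computation of jitter, throughput and loss.
    def qos_metrics(packets, expected_count):
        """packets: list of (seq, sent_ts, recv_ts, size_bytes), in arrival order."""
        jitter = 0.0
        prev_transit = None
        total_bytes = 0
        for _, sent_ts, recv_ts, size in packets:
            transit = recv_ts - sent_ts
            if prev_transit is not None:
                # RFC 3550 style smoothed interarrival jitter.
                jitter += (abs(transit - prev_transit) - jitter) / 16.0
            prev_transit = transit
            total_bytes += size
        duration = packets[-1][2] - packets[0][2] or 1.0
        throughput_bps = 8.0 * total_bytes / duration
        loss_ratio = 1.0 - len({seq for seq, *_ in packets}) / expected_count
        return {"jitter": jitter, "throughput_bps": throughput_bps, "loss": loss_ratio}

Collecting these three values per scenario is enough to compare the pure P2P, pure Cloud and hybrid placements described in the experiments.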