21 results for Infrastructure Management
at Universidad Politécnica de Madrid
Abstract:
This study applies a methodology for obtaining derived frequency laws (of maximum released discharges and maximum pool levels reached) within a Monte Carlo simulation framework, for their inclusion in a dam risk analysis model. Their behaviour is compared with that of frequency laws obtained using the techniques traditionally employed.
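As a purely illustrative companion to this abstract, the sketch below shows the general Monte Carlo idea: sample synthetic annual-maximum floods, pass them through a stand-in routing rule, and read the derived frequency law of peak outflows empirically. The Gumbel parameters, the attenuation factor and the `route_flood` function are hypothetical placeholders, not the study's model.

```python
# Illustrative sketch (not the study's actual model): derive the frequency law
# of peak outflows by Monte Carlo simulation. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def route_flood(peak_inflow, attenuation=0.65):
    """Toy stand-in for reservoir flood routing; the real study would route
    full hydrographs through the dam's spillway and gates."""
    return attenuation * peak_inflow

# Sample annual-maximum inflows from a fitted Gumbel law (parameters assumed).
n_years = 100_000
loc, scale = 800.0, 250.0                     # m3/s, hypothetical fit
inflow_peaks = rng.gumbel(loc, scale, n_years)

outflow_peaks = route_flood(inflow_peaks)

# Empirical derived frequency law: peak outflow vs. return period.
sorted_q = np.sort(outflow_peaks)[::-1]       # largest first
return_period = (n_years + 1) / (np.arange(n_years) + 1)
for T in (10, 100, 1000):
    q = np.interp(T, return_period[::-1], sorted_q[::-1])
    print(f"T = {T:>5} yr  ->  peak outflow ~ {q:,.0f} m3/s")
```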
Abstract:
The Transportation Research Board is a congress of recognized international prestige in the field of transport research. Although its published proceedings are in digital format and carry neither an ISSN nor an ISBN, we consider it important enough to be counted in the indicators. This paper focuses on the implementation of safety-based incentives in Public Private Partnerships (PPPs). The aim of the paper is twofold: first, to evaluate whether PPPs lead to an improvement in road safety when compared with other infrastructure management systems; and second, to analyze whether the incentives to improve road safety in PPP contracts in Spain have been effective at improving safety performance. To this end, negative binomial regression models were applied to data from the Spanish high-capacity network covering the years 2007-2009. The results showed that even though road safety is strongly influenced by variables outside the private concessionaire's control, such as average annual daily traffic, the implementation of safety incentives in PPPs has a positive influence on the reduction of accidents.
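A hedged sketch of the kind of model the abstract names: a negative binomial regression of accident counts on traffic exposure and a contract-incentive indicator, fitted with statsmodels. The data are synthetic and the variable names (`aadt`, `ppp_incentive`, `length_km`) are illustrative, not the paper's dataset.

```python
# Sketch of a negative binomial accident-count model on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "aadt": rng.uniform(5_000, 80_000, n),     # average annual daily traffic
    "ppp_incentive": rng.integers(0, 2, n),    # 1 = safety incentive in contract
    "length_km": rng.uniform(5, 60, n),        # section length (exposure)
})
# Synthetic counts: more traffic -> more accidents, incentive -> fewer.
mu = np.exp(-8.0 + 0.6 * np.log(df["aadt"]) - 0.3 * df["ppp_incentive"]) * df["length_km"]
df["accidents"] = rng.poisson(mu)

X = sm.add_constant(np.column_stack([np.log(df["aadt"]), df["ppp_incentive"]]))
model = sm.GLM(df["accidents"], X,
               family=sm.families.NegativeBinomial(alpha=1.0),
               offset=np.log(df["length_km"]))   # section length as exposure
print(model.fit().summary())
```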
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from maturing, and the myriad of different options has induced a fear of technology lock-in in customers. One cause of this problem is the misalignment between academic research and industry offerings: the former focuses on idealized scenarios dissimilar from real-world situations, while the latter develops solutions without regard for how they fit with common standards, or even without disseminating results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that focuses on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. The model splits the environment into two views that separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's characteristics. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed over the environment and are used to realize the cloud environment's scalability. I also describe a management engine that, starting from the environment's state and using the aforementioned set of actions, is tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. The management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) with an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design and capable of interfacing with several technologies and offering several access interfaces.
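The following sketch illustrates the two-phase placement idea summarized above, under invented capacities and demands: an integer program (via the PuLP library) packs VMs on as few hosts as possible during consolidation, and a first-fit heuristic handles arrivals during the online phase. It is a minimal stand-in, not the thesis implementation.

```python
# Two-phase placement sketch: ILP consolidation + online first-fit heuristic.
import pulp

hosts = {"h1": 16, "h2": 16, "h3": 8}            # CPU capacity per host (made up)
vms = {"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 7}   # CPU demand per VM (made up)

# --- Consolidation phase: ILP minimizing the number of active hosts ---
prob = pulp.LpProblem("consolidation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(v, h) for v in vms for h in hosts], cat="Binary")
y = pulp.LpVariable.dicts("y", hosts, cat="Binary")          # host powered on?
prob += pulp.lpSum(y[h] for h in hosts)
for v in vms:                                                # each VM placed once
    prob += pulp.lpSum(x[v, h] for h in hosts) == 1
for h in hosts:                                              # capacity respected
    prob += pulp.lpSum(vms[v] * x[v, h] for v in vms) <= hosts[h] * y[h]
prob.solve(pulp.PULP_CBC_CMD(msg=0))
placement = {v: h for v in vms for h in hosts if x[v, h].value() == 1}
print("consolidated placement:", placement)

# --- Online phase: first-fit heuristic for a VM arriving between consolidations ---
def first_fit(demand, placement):
    used = {h: sum(vms[v] for v, ph in placement.items() if ph == h) for h in hosts}
    for h in hosts:
        if used[h] + demand <= hosts[h]:
            return h
    return None                                              # would trigger scale-out

print("online placement for a 3-CPU VM:", first_fit(3, placement))
```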
Abstract:
Models are an effective tool for systems and software design. They allow software architects to abstract away irrelevant details. These qualities are also useful for the technical management of networks, systems and software, such as those that compose service-oriented architectures. Models can provide a set of well-defined abstractions over the distributed heterogeneous service infrastructure that enable its automated management. We propose to use the managed system as a source of dynamically generated runtime models, and to decompose management processes into a composition of model transformations. We have created an autonomic service deployment and configuration architecture that obtains, analyzes, and transforms system models to apply the required actions, while remaining oblivious to the low-level details. An instrumentation layer automatically builds these models and translates the planned management actions back to the system. We illustrate these concepts with a distributed service update operation.
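A minimal sketch of the pattern just described, with invented names: the instrumentation layer yields a runtime model, and a management process (here, a service update) is the composition of a model-to-model transformation and a model-to-actions transformation.

```python
# Management as a composition of transformations over a runtime model.
def get_runtime_model():
    """Stand-in for the instrumentation layer that builds models from the system."""
    return {"services": [{"name": "billing", "version": "1.2", "nodes": ["n1", "n2"]}]}

def plan_update(model, service, new_version):
    """Model-to-model transformation: mark outdated service instances."""
    for s in model["services"]:
        if s["name"] == service and s["version"] != new_version:
            s["pending_update"] = new_version
    return model

def to_actions(model):
    """Model-to-action transformation: emit deployment actions per node."""
    return [("deploy", s["name"], s["pending_update"], n)
            for s in model["services"] if "pending_update" in s
            for n in s["nodes"]]

# The distributed update operation is just the composed transformations.
actions = to_actions(plan_update(get_runtime_model(), "billing", "1.3"))
print(actions)   # [('deploy', 'billing', '1.3', 'n1'), ('deploy', 'billing', '1.3', 'n2')]
```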
Abstract:
This document contains a detailed description of the design and implementation of a multi-agent application that controls the traffic lights of a city, together with a system for traffic simulation and testing. The goal of this thesis is to design and build a simplified, intelligent, distributed solution to the traffic problem in big cities, following good practices so that the model of the real world can be refined in the future. The traffic problem in big cities remains unsolved: traffic jams are caused not only by the increasing number of cars but also by the way traffic is organized. Intersections with traffic lights are usually replaced by roundabouts or interchanges to increase the number of cars that can cross the intersection in a given time, but there are still places where the infrastructure cannot be changed and traffic lights are the only way to control the car flows. In real life, traffic lights either follow a predefined switching plan or receive information from a centralized system about when and how to change. But what if the traffic lights could cooperate and decide on their own when and how to change? Using this problem, the purpose of the thesis is to explore different agent-based software engineering approaches to design and build a non-conventional distributed system. From the software engineering point of view, the goal of the thesis is to apply the knowledge and skills acquired during the various courses of the master's program in Software Engineering to a practical and complex problem such as urban traffic.
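A toy sketch of the cooperative behaviour the thesis explores (not the thesis code): each traffic-light agent senses its local queues, weighs in the load reported by neighbouring agents, and decides on its own which approach gets green.

```python
# Decentralized traffic-light agents that cooperate through neighbour state.
import random

class TrafficLightAgent:
    def __init__(self, name):
        self.name = name
        self.queues = {"NS": 0, "EW": 0}   # cars waiting per approach
        self.neighbours = []
        self.green = "NS"

    def sense(self):
        for d in self.queues:              # random arrivals (placeholder sensor)
            self.queues[d] += random.randint(0, 3)

    def decide(self):
        # Weight local queues with neighbours' reported load so approaching
        # platoons are anticipated, then give green to the most loaded approach.
        load = dict(self.queues)
        for n in self.neighbours:
            for d in load:
                load[d] += 0.5 * n.queues[d]
        self.green = max(load, key=load.get)
        self.queues[self.green] = max(0, self.queues[self.green] - 5)  # discharge

agents = [TrafficLightAgent(f"tl{i}") for i in range(3)]
for a, b in zip(agents, agents[1:]):       # a simple corridor topology
    a.neighbours.append(b)
    b.neighbours.append(a)
for step in range(5):
    for a in agents: a.sense()
    for a in agents: a.decide()
print({a.name: a.green for a in agents})
```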
Abstract:
A useful strategy for improving disaster risk management is sharing spatial data across different technical organizations through shared information systems. However, implementing this type of system requires a large effort, so fully implemented and sustainable information systems that facilitate sharing multinational spatial data about disasters are hard to find, especially in developing countries. In this paper, we describe a pioneering system for sharing spatial information that we developed for the Andean Community. This system, called SIAPAD (Andean Information System for Disaster Prevention and Relief), integrates spatial information from 37 technical organizations in the Andean countries (Bolivia, Colombia, Ecuador, and Peru). SIAPAD is based on the concept of a thematic Spatial Data Infrastructure (SDI) and includes a web application, called GEORiesgo, which helps users find relevant information through a knowledge-based system. We describe the design and implementation of SIAPAD, together with the general conclusions and future directions we identified as a result of this work.
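To make the knowledge-based search idea concrete, here is a purely illustrative sketch of a GEORiesgo-style lookup: a small knowledge base maps a user's question to hazard themes, which are then matched against catalogued layers. The entries, layer names and organization attributions are all invented.

```python
# Knowledge-guided discovery of spatial layers in a thematic SDI catalogue.
catalog = [
    {"layer": "flood_zones_pe", "org": "agency-PE", "themes": {"flood", "risk"}},
    {"layer": "seismic_hazard_co", "org": "agency-CO", "themes": {"earthquake", "hazard"}},
    {"layer": "landslide_suscept_bo", "org": "agency-BO", "themes": {"landslide", "hazard"}},
]

knowledge_base = {  # maps a user's question to catalogue themes
    "Where could flooding affect my town?": {"flood", "risk"},
    "Which areas have seismic hazard?": {"earthquake", "hazard"},
}

def find_layers(question):
    wanted = knowledge_base[question]
    return [c for c in catalog if c["themes"] & wanted]

for rec in find_layers("Where could flooding affect my town?"):
    print(rec["layer"], "-", rec["org"])
```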
Abstract:
From the water management perspective, water scarcity is an unacceptably high risk of facing water shortages when serving water demands in the near future. Water scarcity may be temporary, related to drought conditions or other accidental situations, or permanent, due to deeper causes such as excessive demand growth, lack of infrastructure for water storage or transport, or constraints in water management. Diagnosing the causes of water scarcity in complex water resources systems is a precondition for adopting effective drought risk management actions. In this paper we present four indices developed to evaluate water scarcity, and we propose a methodology for interpreting their values that can lead to conclusions about the reliability and vulnerability of systems to water scarcity, as well as diagnose its possible causes and suggest solutions. The described methodology was applied to the Ebro river basin, identifying existing and expected problems and possible solutions. System diagnoses based exclusively on the analysis of index values were compared with the known reality as perceived by system managers, validating the conclusions in all cases.
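The paper's four indices are not reproduced in this listing, so the sketch below shows two standard diagnostics of the same family, reliability and vulnerability, computed from a synthetic supply/demand series; it conveys the style of analysis, not the paper's exact formulation.

```python
# Reliability and vulnerability of a water system, from synthetic data.
import numpy as np

rng = np.random.default_rng(1)
months = 240
demand = np.full(months, 100.0)                    # hm3/month, constant demand
supply = rng.normal(105.0, 25.0, months).clip(0)   # hm3/month, variable supply

deficit = np.maximum(demand - supply, 0.0)
failures = deficit > 0

reliability = 1.0 - failures.mean()                # share of months fully served
vulnerability = deficit[failures].mean() / demand[failures].mean()  # mean relative deficit when failing

print(f"reliability   = {reliability:.2f}")
print(f"vulnerability = {vulnerability:.2%} of demand, on failing months")
```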
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources belonging to several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role here, and correct analysis and understanding of system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each of them; when the same procedures are adapted to Grid and cloud computing, the vast complexity of these systems can make the task extremely complicated. But the complexity of large-scale distributed systems may be only a matter of perspective: it could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction can provide a different understanding of the system, describing large-scale behavior and global events that would probably not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
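A toy numeric sketch of the "single entity" perspective argued above: instead of watching 200 per-node signals, the system's state is collapsed into one global load vector, on which a system-wide event (hard to notice at the single-node scale) is easy to flag. All numbers are synthetic.

```python
# Global-state monitoring: detect a system-wide event on the aggregate signal.
import numpy as np

rng = np.random.default_rng(7)
nodes, steps = 200, 100
cpu = rng.normal(0.55, 0.1, (steps, nodes)).clip(0, 1)
cpu[80:, :] += 0.25          # a system-wide shift no single node makes obvious

# Single-entity view: one scalar per time step instead of 200 separate signals.
global_load = cpu.mean(axis=1)
baseline, spread = global_load[:60].mean(), global_load[:60].std()
events = np.where(np.abs(global_load - baseline) > 3 * spread)[0]
print("global events detected at steps:", events[:5], "...")
```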
Abstract:
Infrastructure concession is an alternative widely used by governments to increase investment. In the road sector, the main characteristics of concessions are long-term projects, high investments in the early years of the contract, and high risks. A viability analysis must be carried out for each concession, considering the characteristics of the project. When the infrastructure is located in a developing country, political and market-growth uncertainties, as well as economic instability, should be added to the concession project analysis, because they pose greater risks. This paper analyses state bank participation in road infrastructure finance in developing countries. For this purpose, we studied road infrastructure financing and its associated risks, together with the features of developing countries. Furthermore, we considered state banks and multilateral development banks, which play an important role by offering better credit lines than private banks in terms of cost, interest and grace period. Based on this study, we analyzed the Brazilian Development Bank (BNDES) and its credit supply to road infrastructure concessions. The results show that BNDES is the main financing agent for long-term investment in the sector, offering loans with low interest rates in Brazilian currency. From this research we argue that a single state bank should not alone support the increasing demand for finance in Brazil. We therefore conclude that the supply of credit in Brazil needs to be expanded by strengthening private banks in the long-term lending market.
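As a back-of-the-envelope illustration of why cost, interest and grace period matter, the sketch below compares annual debt service under a cheaper state-bank-style loan with a grace period against a costlier private loan. The rates, term and principal are illustrative assumptions, not BNDES figures.

```python
# Comparing annual debt service: state-bank-style vs. private loan terms.
def annuity(principal, rate, years):
    """Constant annual payment fully amortizing the loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal, term = 1_000_000_000.0, 20          # BRL, total term in years (assumed)

# State-bank style: lower rate, 5-year grace (interest-only) period.
grace, r_state, r_private = 5, 0.06, 0.11      # hypothetical rates
state_service = annuity(principal, r_state, term - grace)
private_service = annuity(principal, r_private, term)

print(f"state bank   : {grace} yr interest-only, then {state_service:,.0f}/yr")
print(f"private bank : no grace, {private_service:,.0f}/yr over {term} yr")
```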
Abstract:
This paper focuses on identifying and analysing the elements of strategic management for infrastructure and engineering assets. These elements are contended to involve an understanding of governance, corporate policy, corporate objectives, corporate strategy and interagency collaboration, and will, in turn, make it possible to determine a broader and more comprehensive framework for engineering asset management, i.e. a 'staged approach' to understanding how assets are managed within organisations. While the assets themselves have often been the sole concern of good management practices, other social and contextual elements have come into the mix in order to promote strategic asset management. The development of an integrated approach to asset management is at the base of the research question: what are the considerations and implications of adopting and implementing an integrated strategic asset management (ISAM) framework? While operational matters have been given prominence, a subset of corporate governance, asset governance, details the policies and processes needed to acquire, utilise, maintain and account for an organisation's assets. Asset governance stems from the organisation's overarching corporate governance principles; as a result, it defines the management context in which engineering asset management is implemented. This aspect is examined to determine the appropriate relationship between organisational strategic management and strategic asset management, furthering the theoretical engagement with the maturity of strategy, policy and governance for infrastructure and engineered assets. The research proceeds by a document analysis of corporate reports and policy recommendations concerning infrastructure and engineered assets. The paper concludes that incorporating an integrated asset management framework can promote a more robust conceptualisation of public assets and of how they combine to provide a comprehensive system of service outcomes.
Abstract:
The recognition of the relevance of energy, especially of the renewable energies generated by the sun, water, wind, tides, modern biomass or thermal sources, is growing significantly in global society, based on its potential to improve societies' quality of life and to support poverty reduction and sustainable development. Renewable energy, and mainly the energy generated by the large hydropower projects that supply most of the renewable energy consumed by developing countries, requires many complex technical, legal, financial and social processes sustained by innovation and valuable knowledge. Beyond these efforts, renewable energy requires a solid infrastructure to generate and distribute the energy needed to meet the basic needs of society. This demands proper construction performance, delivering the planned energy projects according to specifications and respecting environmental and social concerns, which implies observing sustainable construction guidelines. But construction projects are complex and demanding, and frequently face time and cost overruns that may negatively impact the initial planning and thus society. The renewable energy issue and large renewable energy power generation and distribution projects are particularly significant for developing countries, and for Latin America in particular, as the region concentrates an important hydropower potential and installed capacity. Using as references the performance of Venezuelan large hydropower generation projects and the construction of the Guri dam, this research evaluates the tight relationship between sustainable construction and knowledge management and their impact on achieving sustainability goals. Knowledge management processes are proposed as a basic strategy to learn from the successes and failures of previous projects and to transform improvement opportunities into actions that enhance the performance of renewable energy power generation and distribution projects.
Abstract:
There is an increasing awareness among all kinds of organisations (in business, government and civil society) of the benefits of working jointly with stakeholders to satisfy both their goals and the social demands placed upon them. This is particularly the case within corporate social responsibility (CSR) frameworks. In this regard, multi-criteria decision-making tools like the analytic hierarchy process (AHP) described in this paper can be useful for building relationships with stakeholders. Since these tools can reveal decision-makers' preferences, integrating the opinions of various stakeholders into the decision-making process may result in better and more innovative solutions with significant shared value. This paper is based on ongoing research to assess the feasibility of an AHP-based model to support CSR decisions in large infrastructure projects carried out by Red Electrica de España, the sole transmission agent and operator of the Spanish electricity system.
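A brief sketch of the generic AHP machinery mentioned above (not the paper's actual model): priorities are obtained from a pairwise comparison matrix via its principal eigenvector, and Saaty's consistency ratio checks the judgements. The criteria and judgements here are invented.

```python
# AHP priorities from a pairwise comparison matrix, with consistency check.
import numpy as np

criteria = ["environmental impact", "community acceptance", "cost"]
A = np.array([[1,   3,   5],      # pairwise judgements on Saaty's 1-9 scale
              [1/3, 1,   3],
              [1/5, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # priority vector

n = len(A)
ci = (eigvals.real[k] - n) / (n - 1)            # consistency index
cr = ci / 0.58                                  # random index RI = 0.58 for n = 3
print(dict(zip(criteria, w.round(3))), f"CR = {cr:.3f}")  # CR < 0.1 is acceptable
```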
Abstract:
Education can take advantage of e-Infrastructures to provide teachers with new opportunities to increase students' motivation and engagement while they learn. Nevertheless, teachers need to find, integrate and customize the resources provided by e-Infrastructures in an easy way. This paper presents ViSH Editor, an innovative web-based e-Learning authoring tool that aims to allow teachers to create new learning objects using e-Infrastructure resources. These new learning objects, called Virtual Excursions, are created as reusable, granular and interoperable learning objects. In this way they can be reused to build new ones, and they can be integrated into websites or Learning Management Systems. The paper explains the design and development of the tool itself, as well as the concept, structure and metadata of the new learning objects. Lastly, some real examples of how to enrich learning using Virtual Excursions are presented.
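Purely as an illustration of "reusable, granular and interoperable", here is a hypothetical sketch of what a Virtual Excursion record might carry; the field names are assumptions, not ViSH Editor's actual schema.

```python
# Hypothetical structure of a Virtual Excursion learning object.
virtual_excursion = {
    "title": "Exploring the Water Cycle",
    "metadata": {                       # discovery / interoperability data
        "language": "en",
        "subject": ["science", "geography"],
        "age_range": "10-12",
        "license": "CC BY-SA",
    },
    "slides": [                         # granular, individually reusable units
        {"type": "text", "body": "Where does rain come from?"},
        {"type": "resource", "source": "e-infrastructure",
         "uri": "http://example.org/sim/water-cycle"},   # placeholder URI
        {"type": "quiz", "question": "Name the three phases of water."},
    ],
    "embeddable": True,                 # integration in websites or an LMS
}
print(len(virtual_excursion["slides"]), "reusable slides")
```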
Abstract:
The scalability of security event correlation has become a major concern for security analysts and IT administrators facing complex IT infrastructures that need to handle gargantuan amounts of events or wide correlation window spans. The current correlation capabilities of Security Information and Event Management (SIEM) systems, based on a single node in centralized servers, have proved insufficient to process large event streams. This paper introduces a step forward in the current state of the art to address these problems. The proposed model takes into account the two main aspects of this field: distributed correlation and query parallelization. We present a case study of a multiple-step attack on the Olympic Games IT infrastructure to illustrate the applicability of our approach.
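A minimal sketch of the two aspects named above, with invented events: the stream is partitioned by source IP across worker processes (distributed correlation), and each worker matches a multi-step attack pattern independently, so rule evaluation parallelizes.

```python
# Distributed multi-step event correlation via key partitioning.
from collections import defaultdict
from multiprocessing import Pool

ATTACK_STEPS = ["port_scan", "brute_force", "privilege_escalation"]

def correlate(partition):
    """Match the multi-step pattern within one worker's partition."""
    src, events = partition
    types = [e["type"] for e in sorted(events, key=lambda e: e["t"])]
    i = 0
    for t in types:                        # ordered subsequence match
        if t == ATTACK_STEPS[i]:
            i += 1
            if i == len(ATTACK_STEPS):
                return f"ALERT: multi-step attack from {src}"
    return None

if __name__ == "__main__":
    stream = [
        {"src": "10.0.0.5", "t": 1, "type": "port_scan"},
        {"src": "10.0.0.5", "t": 4, "type": "brute_force"},
        {"src": "10.0.0.9", "t": 5, "type": "port_scan"},
        {"src": "10.0.0.5", "t": 9, "type": "privilege_escalation"},
    ]
    shards = defaultdict(list)
    for e in stream:                       # partition by correlation key
        shards[e["src"]].append(e)
    with Pool(2) as pool:                  # workers correlate in parallel
        for alert in pool.map(correlate, shards.items()):
            if alert:
                print(alert)
```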
Abstract:
Among the features expected of the Smart City, one should be an improved energy management system, in order to benefit from a healthier relationship with the environment, minimize energy expenses, and offer dynamic market opportunities. A Smart Grid seems a very suitable infrastructure for this objective, as it guarantees a two-way information flow that will provide the means for enhancing energy management. However, to obtain all the required information, another entity must take care of all the devices required to gather the data. Moreover, this entity must consider the lifespan of the devices within the Smart Grid (when they are turned on and off, or when new appliances are added) along with the services those devices are able to provide. This paper puts forward SMArc (an acronym for semantic middleware architecture) as a middleware proposal for the Smart Grid that processes the collected data and uses it to insulate applications from the complexity of the metering facilities, guaranteeing that any change at these lower levels will be reflected in future actions in the system.
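A toy sketch of the middleware role described above (SMArc's real interfaces are not reproduced): a registry tracks metering devices joining and leaving, while applications subscribe to semantic topics and never deal with the devices directly.

```python
# Semantic middleware sketch: device lifecycle registry + topic pub/sub.
class Middleware:
    def __init__(self):
        self.devices = {}                       # device id -> topics it serves
        self.subscribers = {}                   # topic -> callbacks

    def register(self, dev_id, topics):        # device lifecycle: turned on / added
        self.devices[dev_id] = set(topics)

    def unregister(self, dev_id):               # device lifecycle: turned off
        self.devices.pop(dev_id, None)

    def subscribe(self, topic, callback):       # apps never see raw devices
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, dev_id, topic, value):
        if topic in self.devices.get(dev_id, ()):    # only registered capabilities
            for cb in self.subscribers.get(topic, []):
                cb(dev_id, value)

mw = Middleware()
mw.register("meter-42", ["energy.consumption"])
mw.subscribe("energy.consumption", lambda d, v: print(f"{d}: {v} kWh"))
mw.publish("meter-42", "energy.consumption", 3.7)
mw.unregister("meter-42")    # the lifecycle change is transparent to the app
```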