934 results for Infrastructure Management
Abstract:
The Midwest Transportation Consortium (MTC) recently completed its sixth year of operation. The MTC has become an established part of the research and educational programs at ISU and its partner universities. The MTC continues to emphasize its primary focus of developing human capital. For example, this semester, Fall 2005, ISU has graduate scholars in its educational program. However, we also recognize that the federal grant is an opportunity to build programs at our respective universities that continue after the U.S. DOT UTCP may end. An example of building a long-lasting program is the University of Missouri – St. Louis (UMSL) development of a transportation Ph.D. program in its business college. Admittedly, this program could have been started regardless, but Dr. Ray Mundy, Director of UMSL's Transportation Scholars Program, believes that the MTC's support of the transportation educational program at UMSL was the essential component in establishing the Ph.D. program. At ISU, the MTC has been instrumental in establishing two research and outreach programs, both with themes related to the MTC's theme of "Transportation System Management and Operation." The Center for Weather Impacts on Mobility and Safety (C-WIMS) was recently established, and the Center for Road Infrastructure Management and Operations (RIMO) is in the process of being established. The MTC has had a critical role in establishing each of these two programs. As part of the ongoing MTC program, we have established an effective network that promotes the education of future transportation professionals and the development of new knowledge on how to manage transportation infrastructure and services in a more sustainable manner. The MTC has a track record of developing outstanding students; these students are now becoming leaders in the private sector, government, and academia. The MTC has also supported the development of an extensive research portfolio related to sustainable transportation asset management. More research projects are in the pipeline. Finally, the MTC has dedicated itself to the dissemination of asset management research results through an ongoing technology transfer program. This document provides a progress report for the latest fiscal year of operation of the MTC, which ran from October 2004 through September 2005.
Abstract:
Pablo de Castro, Director of GrandIR, described the euroCRIS group's vision of an integrated research information management infrastructure, composed of an institutional CRIS, a publications repository, and a repository for data and software, and presented the integrated infrastructure model of Trinity College Dublin (TCD) as an international case study. TCD's CRIS (the TCD Research Support System, or RSS) has been based, since its first version in 2002, on the CERIF standard, a model for describing research activity that is gaining progressive relevance as the basis for CRIS systems in Europe, particularly in the United Kingdom. The presentation also cited trials to incorporate CERIF into the data model of the ePrints repository software, thereby enabling it to support part of the information-harvesting tasks performed by a CRIS, as well as CERIF's progressive coverage of areas such as research data management.
Abstract:
IT outsourcing refers to the way companies focus on their core competencies and buy supporting functions from other companies specialized in that area. A service is the total outcome of numerous activities by employees and other resources that provide solutions to customers' problems. Outsourcing and the service business have their own unique characteristics. Service Level Agreements quantify the minimum acceptable service to the user. Service quality has to be objectively quantified so that its achievement or non-achievement can be monitored. Offshoring usually refers to the transfer of tasks to low-cost countries. Offshoring presents many challenges that require special attention and need to be assessed thoroughly. IT infrastructure management refers to the installation and basic usability assistance of operating systems, network and server tools, and utilities. ITIL defines industry best practices for organizing IT processes. This thesis analyzed a server operations service and the customers' perception of the quality of daily operations. The agreed workflows and processes should be followed more closely. The service provider's processes are thoroughly defined, but both the customer and the service provider may deviate from them. The service provider should review the workflows regarding customer functions. Customer-facing functions require persistent skill development, as they communicate quality to the customer. The service provider needs to provide better organized communication and knowledge exchange methods between specialists in different geographical locations.
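As a minimal sketch of how an SLA target can be objectively quantified and monitored, as the abstract suggests (the metric names, thresholds, and measurement values below are hypothetical and not taken from the thesis):

```python
from dataclasses import dataclass

@dataclass
class SlaTarget:
    """Minimum acceptable service levels agreed with the customer (illustrative values)."""
    availability_pct: float      # e.g. 99.5 % uptime per month
    max_resolution_hours: float  # e.g. incidents resolved within 8 hours

def evaluate_month(target: SlaTarget, uptime_minutes: float,
                   total_minutes: float, resolution_hours: list[float]) -> dict:
    """Compare measured service levels against the agreed SLA and report breaches."""
    availability = 100.0 * uptime_minutes / total_minutes
    breached = [h for h in resolution_hours if h > target.max_resolution_hours]
    return {
        "availability_pct": round(availability, 3),
        "availability_met": availability >= target.availability_pct,
        "incidents_over_limit": len(breached),
        "resolution_met": not breached,
    }

# Example: one month of measurements for a server operations service.
target = SlaTarget(availability_pct=99.5, max_resolution_hours=8.0)
report = evaluate_month(target, uptime_minutes=43_050, total_minutes=43_200,
                        resolution_hours=[2.5, 7.0, 9.5])
print(report)  # availability ~99.65 % -> met; one incident over the 8 h limit
```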
Abstract:
The main objective of this master's thesis is to provide a comprehensive view of cloud computing and SaaS, and to analyze how well CADM, a unit of Capgemini Finland Ltd., would fit the cloud-based SaaS business. Another objective of this thesis is to investigate how well public clouds would fit CADM as a delivery model if they were to provide SaaS applications to their customers. This master's thesis is executed by investigating the characteristics of cloud computing and SaaS, especially from the application provider's point of view. This is done by exploring what kinds of research and analyses have been conducted on these two phenomena during the past few years. Then CADM's current business model and operations are analyzed from the SaaS and public cloud perspective. This analysis is conducted using SWOT analysis, a widely used analytical tool for assessing a company's strategic position and identifying possibilities for improving its operations. The conducted analysis and observations reveal that CADM should pursue SaaS business, as it could provide remarkable advantages and strengthen its position in current markets. However, a pure SaaS model would not be the optimal solution for CADM because they do not have their own product that could be transformed to the SaaS model, and they lack Infrastructure Management capability. A public cloud would also not be the most suitable delivery model for them when providing SaaS services. The main observation of this thesis is that CADM should adopt the SaaS model via the Capgemini Immediate offering.
Abstract:
This study develops an exploratory, qualitative and quantitative investigation of the social representation of Cloud Computing as seen by Brazilian IT professionals. It aims to expose the perceptions of IT practitioners regarding the Cloud Computing paradigm. To support the theoretical study, empirical data were collected through online questionnaires answered by 221 IT professionals. Using the word evocation technique and social representation theory (SRT), the collected data were summarized. After processing the data with Vergès' four-quadrant framework, the central nucleus and the peripheral system of the social representation of Cloud Computing were identified. Finally, the data were analyzed using implicative and content analyses, so that all the information could be abstracted for a better interpretation of the topic. The results show that the central nucleus of the social representation of Cloud Computing is composed of the words "Nuvem" (cloud), "Armazenamento" (storage), "Disponibilidade" (availability), "Internet", "Virtualização" (virtualization), and "Segurança" (security). In turn, the words identified as part of the peripheral system of the social representation of Cloud Computing were: "Compartilhamento" (sharing), "Escalabilidade" (scalability), and "Facilidade" (ease of use). The results make it possible to understand IT professionals' perception of this technological paradigm and its correlation with the theoretical framework discussed. Such information and perceptions can help turn the unfamiliar into the familiar, that is, to understand how Cloud Computing is represented, seen, and ultimately recognized by IT professionals.
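A hedged sketch of the Vergès four-quadrant technique mentioned above, which crosses each word's evocation frequency with its mean evocation rank (the word lists, frequencies, and cut-off rule here are illustrative assumptions, not the study's actual data):

```python
from statistics import mean

# Hypothetical evocations: each respondent lists words in order of recall (rank 1 = first).
evocations = [
    ["nuvem", "internet", "armazenamento"],
    ["armazenamento", "nuvem", "escalabilidade"],
    ["nuvem", "seguranca", "compartilhamento"],
]

# Collect the evocation ranks of every word across respondents.
stats = {}
for answer in evocations:
    for rank, word in enumerate(answer, start=1):
        stats.setdefault(word, []).append(rank)

freq = {w: len(r) for w, r in stats.items()}
avg_rank = {w: mean(r) for w, r in stats.items()}

# Cut-off points: frequency and mean rank averaged over the whole vocabulary.
freq_cut = mean(freq.values())
rank_cut = mean(avg_rank.values())

for word in sorted(stats):
    high_freq = freq[word] >= freq_cut
    low_rank = avg_rank[word] <= rank_cut  # recalled early
    if high_freq and low_rank:
        quadrant = "central nucleus"
    elif high_freq:
        quadrant = "first periphery"
    elif low_rank:
        quadrant = "contrast zone"
    else:
        quadrant = "second periphery"
    print(f"{word:18s} freq={freq[word]} mean_rank={avg_rank[word]:.2f} -> {quadrant}")
```

Frequently cited words that are also recalled early fall into the central nucleus; the remaining quadrants form the peripheral system.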
Abstract:
The present study aims to diagnose the current public programs focused on herbal medicines in Brazil by means of in loco visits to 10 programs, selected through questionnaires sent to 124 municipalities that have herbal medicine services. The main purpose of implementing these programs is related to the development of medicinal herbs: 70% of them are intended for the production of herbal medicines and 50% aim to ensure the population's access to medicinal plants and/or herbal medicines. The initiative to implement these programs came mainly from managers (60%). The difficulties in this implementation were due to the lack of funding of the programs (100%). In 60% of the programs, physicians did not adhere to herbal medicine services due to lack of knowledge of the subject. Training courses were proposed (80%) to increase the adherence of prescribers to the system. Some municipalities use information obtained from patients to assess the therapeutic efficacy of medicinal plants and herbal medicines. Of the programs underway, cultivation of medicinal plants was observed in 90%, and 78% of them adopt quality control. In most programs, this control is not performed in accordance with the legal requirements. The programs focused on medicinal plants and herbal medicines implemented in Brazil face some chronic problems of infrastructure, management, operational capacity, and self-sustainability, which can be directly related to the absence of a national policy on medicinal plants and herbal medicines.
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The aim of this paper is to analyze the current Brazilian high school situation, specifically with regard to issues of infrastructure, the training of professionals working in high school, and the management of high school, because we believe that such questions may also be of fundamental importance both for young people who drop out of high school and for young people who stay there. We discuss these issues (infrastructure, management, and the training of professionals working in high school) taking as a reference, on the one hand, the goals and objectives proposed by the last National Education Plan (PNE - Law No. 10.172/2001) and, on the other hand, the objectives and goals proposed for these same issues by the new PNE, to run until 2020, emphasizing that the latter is still under discussion in Congress and therefore subject to modification.
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions for dynamically changing the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how do we dynamically decide how many services of each type are needed in order to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs for controlling the scaling of distributed services by combining data analysis mechanisms with application benchmarking using multiple VM configurations. Based on the processing of data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting the appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used for controlling the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be successfully used for controlling the management of distributed services scaling.
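A hedged sketch of the kind of scaling rule such an approach could infer and evaluate at runtime (the service names, predictor metrics, thresholds, and step sizes are hypothetical, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class ScalingRule:
    """Scale-out rule inferred from benchmark data: if the predictor metric exceeds
    the threshold, add `step` instances of the given service type."""
    service: str
    predictor_metric: str
    threshold: float
    step: int

def apply_rules(rules, metrics, instances):
    """Return new instance counts after evaluating every rule against the latest sample."""
    new_counts = dict(instances)
    for rule in rules:
        if metrics.get(rule.predictor_metric, 0.0) > rule.threshold:
            new_counts[rule.service] = new_counts.get(rule.service, 0) + rule.step
    return new_counts

# Hypothetical rules inferred from benchmark runs of a distributed enterprise application.
rules = [
    ScalingRule("web-frontend", "avg_request_latency_ms", 250.0, step=2),
    ScalingRule("worker", "queue_length", 100.0, step=1),
]
metrics = {"avg_request_latency_ms": 310.0, "queue_length": 42.0}
print(apply_rules(rules, metrics, {"web-frontend": 4, "worker": 3}))
# -> {'web-frontend': 6, 'worker': 3}: only the latency rule fires.
```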
Abstract:
We describe a system for performing SLA-driven management and orchestration of distributed infrastructures composed of services supporting mobile computing use cases. In particular, we focus on a Follow-Me Cloud scenario in which we consider mobile users accessing cloud-enabled services. We combine an SLA-driven approach to infrastructure optimization with forecast-based performance degradation preventive actions and pattern detection for supporting mobile cloud infrastructure management. We present our system's information model and architecture, including the algorithmic support and the proposed scenarios for system evaluation.
Abstract:
This study applies a methodology for obtaining derived frequency laws (of maximum spilled flows and maximum reservoir levels reached) within a Monte Carlo simulation framework, for inclusion in a dam risk analysis model. Its behavior is compared against that of frequency laws obtained with traditionally used techniques.
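A hedged sketch of how a derived frequency curve can be built by Monte Carlo simulation (the inflow distribution, the toy routing model, and all parameter values are illustrative simplifications, not the study's methodology):

```python
import math
import random

def route_peak_outflow(peak_inflow_m3s: float) -> float:
    """Toy reservoir routing: assume flood routing attenuates the inflow peak by 30 %."""
    return 0.7 * peak_inflow_m3s

# Monte Carlo sampling of annual peak inflows (Gumbel distribution, illustrative parameters).
random.seed(42)
n_years = 100_000
mu, beta = 500.0, 150.0  # location and scale of the annual-maximum inflow (m3/s)
outflows = []
for _ in range(n_years):
    u = random.random()
    peak_inflow = mu - beta * math.log(-math.log(u))  # inverse Gumbel CDF
    outflows.append(route_peak_outflow(peak_inflow))

# Derived frequency law: empirical quantiles of the routed (spilled) peak outflow.
outflows.sort()
for return_period in (10, 100, 1000):
    q = 1.0 - 1.0 / return_period
    print(f"T = {return_period:5d} yr -> spilled peak ~ {outflows[int(q * n_years)]:.0f} m3/s")
```

The same simulated sample could equally be post-processed for maximum reservoir levels, yielding the derived frequency law fed into the risk model.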
Abstract:
The Transportation Research Board is a congress of recognized international prestige in the field of transportation research. Although the published proceedings are in digital format and have neither an ISSN nor an ISBN, we consider it important enough to be taken into account in the indicators. This paper focuses on the implementation of safety-based incentives in Public Private Partnerships (PPPs). The aim of this paper is twofold: first, to evaluate whether PPPs lead to an improvement in road safety when compared with other infrastructure management systems; second, to analyze whether the incentives to improve road safety in PPP contracts in Spain have been effective at improving safety performance. To this end, negative binomial regression models have been applied using information from the Spanish high-capacity network covering the years 2007-2009. The results showed that even though road safety is highly influenced by variables that are not manageable by the private concessionaire, such as the average annual daily traffic, the implementation of safety incentives in PPPs has a positive influence on the reduction of accidents.
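A hedged sketch of the kind of negative binomial accident model described above (the data are synthetic and the variable set is simplified; the actual study used observations from the Spanish high-capacity network for 2007-2009):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical road-segment data: accident counts, average annual daily traffic (AADT),
# and whether the segment is operated under a PPP contract with safety incentives.
rng = np.random.default_rng(0)
n = 200
aadt = rng.uniform(10_000, 80_000, size=n)   # vehicles/day
ppp = rng.integers(0, 2, size=n)             # 1 = PPP with safety incentive
mean_counts = np.exp(-4.0 + 0.5 * np.log(aadt) - 0.3 * ppp)
accidents = rng.poisson(rng.gamma(shape=2.0, scale=mean_counts / 2.0))  # overdispersed counts

# Negative binomial regression: accidents ~ log(AADT) + PPP indicator.
X = sm.add_constant(np.column_stack([np.log(aadt), ppp]))
model = sm.GLM(accidents, X, family=sm.families.NegativeBinomial()).fit()
print(model.summary())
# A negative, significant coefficient on the PPP indicator would point to fewer
# accidents on incentivized segments, all else being equal.
```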
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, a company's desire to host its data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from being developed to an adequate level, and a myriad of different options has induced the fear of technology lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on studying idealized scenarios dissimilar from real-world situations, and the latter developing solutions without considering how they fit with common standards, or even without disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that is focused on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that serve to separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed over the environment and are used to realize the cloud environment's scalability. I also describe a management engine that, from the environment's state and using the aforementioned set of actions, is tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) by a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former is able to collect the necessary information from the environment, and the latter is modular in design and capable of interfacing with several technologies and offering several access interfaces.
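A hedged sketch of an online placement step of the kind described above, using a simple greedy best-fit heuristic (the host capacities, VM sizes, and the heuristic itself are illustrative assumptions; the consolidation phase, which the thesis solves with integer programming, is not shown here):

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: int                               # free vCPUs
    ram_free: int                               # free RAM (GB)
    vms: list = field(default_factory=list)     # names of VMs placed on this host

def place_online(hosts, vm_name, cpu, ram):
    """Online phase: place the incoming VM on the most loaded host that still fits it,
    keeping the infrastructure consolidated; return None if no host has room."""
    feasible = [h for h in hosts if h.cpu_free >= cpu and h.ram_free >= ram]
    if not feasible:
        return None  # would trigger scale-out, e.g. powering on another host
    target = min(feasible, key=lambda h: (h.cpu_free, h.ram_free))  # best-fit
    target.cpu_free -= cpu
    target.ram_free -= ram
    target.vms.append(vm_name)
    return target.name

hosts = [Host("host-1", cpu_free=16, ram_free=64), Host("host-2", cpu_free=8, ram_free=32)]
for vm, cpu, ram in [("svc-web-1", 4, 8), ("bigdata-job-1", 8, 32), ("svc-db-1", 4, 16)]:
    print(vm, "->", place_online(hosts, vm, cpu, ram))
```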