916 results for Application performance monitoring.


Relevance:

90.00%

Publisher:

Abstract:

Network monitoring is a highly important task, especially in medium-sized and large networks. A dedicated management tool eases this work and enables faster, more effective identification of problems in the network and its systems. In this context, the present work aims to develop a solution for monitoring GateBoxes, one of the products developed and sold by the company NextToYou. Monitoring the GateBoxes is essential for NextToYou so that it can detect operating failures and issue notifications when problems are detected, enabling their quick resolution. The company therefore decided to implement a tool for this monitoring and proposed, within the scope of this thesis, the development of an application fulfilling these purposes. It made a platform, WebForge, available for the development and defined some functional requirements for the tool, such as remote monitoring of information, alarm management, and the generation of warnings and notifications. For this work, theoretical studies on remote management and monitoring were carried out, followed by the development of an application for monitoring GateBoxes. After implementation, the work was validated through tests and demonstrations, in order to verify the behaviour and performance of the system.
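
The abstract describes remote monitoring with alarm generation but not the actual interfaces. A minimal polling-and-alerting sketch is shown below; the status endpoints, JSON field names and thresholds are hypothetical, since the real GateBox/WebForge interfaces are not described.

```python
# Minimal polling-and-alerting sketch. Endpoints, field names and
# thresholds are hypothetical stand-ins for the GateBox interfaces.
import time
import requests

GATEBOXES = ["http://gatebox-01.example/status", "http://gatebox-02.example/status"]

def check_gatebox(url: str) -> list[str]:
    """Return a list of alarm messages for one GateBox."""
    alarms = []
    try:
        status = requests.get(url, timeout=5).json()
    except requests.RequestException as exc:
        return [f"{url}: unreachable ({exc})"]
    if not status.get("online", False):
        alarms.append(f"{url}: reported offline")
    if status.get("cpu_load", 0.0) > 0.9:        # hypothetical threshold
        alarms.append(f"{url}: CPU load {status['cpu_load']:.0%}")
    return alarms

while True:
    for url in GATEBOXES:
        for alarm in check_gatebox(url):
            print("ALERT:", alarm)               # stand-in for a real notification
    time.sleep(60)                               # poll once per minute
```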

Relevance:

90.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to measure the overheads of para-virtualization, and also the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) offered by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run. No modifications are needed in the guest OS or applications; that is, the guest OS and applications are not aware of the virtualized environment and run normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is OS-assisted virtualization: some modifications are made in the guest operating system to enable better performance. In this kind of virtualization the guest operating system is aware that it is running on virtualized hardware rather than on bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, which in turn has implications for the use of cloud computing to host HPC applications. The "apparent" improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and second to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
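
The overhead comparison the paper performs reduces to simple relative-slowdown arithmetic. A sketch follows, with illustrative timings standing in for actual benchmark runs:

```python
# Sketch of the overhead arithmetic implied by the paper: run a benchmark
# on bare metal and under para-virtualization (optionally with logging on)
# and report the relative slowdown. Timings below are illustrative only.
def overhead_pct(t_baseline: float, t_measured: float) -> float:
    """Relative overhead of t_measured versus t_baseline, in percent."""
    return 100.0 * (t_measured - t_baseline) / t_baseline

runs = {
    "bare metal":          120.0,   # seconds, illustrative
    "para-virtualized":    126.0,
    "para-virt + logging": 131.0,
}

base = runs["bare metal"]
for name, t in runs.items():
    print(f"{name:>20}: {t:7.1f} s  ({overhead_pct(base, t):+5.1f}% vs bare metal)")
```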

Relevance:

90.00%

Publisher:

Abstract:

This paper presents the results of performance monitoring under real winter weather conditions, controlled laboratory testing and computational fluid dynamics (CFD) analysis of a wall-mounted ventilation air inlet heat convector. For real winter weather monitoring, the wall-mounted convector was installed in a laboratory room of the Engineering Building of the School of Construction Management and Engineering. Air and hot water temperatures and air speeds were measured at the entrance to the convector and in the room. The hot water temperature was controlled at 40, 60 and 80 °C. The monitoring results were later used as boundary conditions for a CFD simulation investigating the air movement in the room. Controlled laboratory testing was conducted in laboratories at the University of Reading, UK and at Wetterstad Consultancy, Sweden. The results of the performance investigation showed that the system contributed greatly to the room heating, particularly at a water temperature of 80 °C. Adequate fresh air was also supplied to the room. Such a system can provide an energy-efficient means of eliminating the problems associated with cold winter draughts.

Relevance:

90.00%

Publisher:

Abstract:

Resource monitoring in distributed systems is required to understand the 'health' of the overall system and to help identify particular problems, such as dysfunctional hardware or faulty system or application software. Monitoring systems such as GridRM provide the ability to connect to any number of different types of monitoring agents and provide different views of the system based on a client's particular preferences. Web 2.0 technologies, and in particular 'mashups', are emerging as a promising technique for rapidly constructing rich user interfaces that combine and present data in intuitive ways. This paper describes a Web 2.0 user interface that was created to expose resource data harvested by the GridRM resource monitoring system.
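
The mashup pattern the abstract describes amounts to pulling data from several sources and merging it into one client-side view. A sketch follows; the GridRM endpoint, the second feed and the JSON layout are all hypothetical:

```python
# Sketch of the mashup idea: pull resource data from several monitoring
# sources and merge them into one view keyed by hostname, so a UI can
# render a single combined table. Endpoints and JSON layout are hypothetical.
import requests

SOURCES = {
    "gridrm":  "http://gridrm.example/api/resources",     # hypothetical
    "weather": "http://weather.example/api/site-status",  # hypothetical second feed
}

def fetch(url: str) -> dict:
    return requests.get(url, timeout=10).json()

view: dict[str, dict] = {}
for source, url in SOURCES.items():
    for record in fetch(url).get("hosts", []):
        view.setdefault(record["name"], {})[source] = record

for host, data in view.items():
    print(host, data)
```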

Relevance:

90.00%

Publisher:

Abstract:

We experimentally revisit a low-cost multiparameter monitoring technique for optical performance monitoring based on low-frequency polarization modulation. A simplified calibration procedure, which significantly reduces the mathematical complexity and processing effort, is proposed. Validation is achieved by carrying out relative optical power, wavelength, and differential group delay measurements. © 2012 Wiley Periodicals, Inc. Microwave Opt Technol Lett 54:1820–1824, 2012; view this article online at wileyonlinelibrary.com. DOI: 10.1002/mop.26956

Relevance:

90.00%

Publisher:

Abstract:

The goal of this thesis is to illustrate the conditions under which knowledge transfer from consultant to client takes place, and how this transfer is the enabling factor for the existence and use of this professional figure by organizations. The thesis stems from the experience gained during the XXIV edition of Junior Consulting, a program organized by CONSEL - ELIS in Rome, which allowed the author to take part in the consulting project "Engineering & Construction - Performance Monitoring System" commissioned by Enel Green Power. The company's need was to obtain a clear and well-defined view of the quality of the activities carried out by its Engineering & Construction (E&C) function; the goal of the project therefore took shape as the definition of a set of KPIs suitable for measuring the performance of that function. Having experienced consulting work, albeit for only a few months, the candidate carried out a theoretical study analyzing in detail the consultant's activity, its importance in the economy, the context in which consultants operate, and the main academic debates of recent years in which they feature. The thesis analyzes the practical case in light of the literature, with the aim of formulating a model describing the approach a consultant should adopt, while carrying out his or her activities, in order to achieve knowledge transfer. The first chapter reports a study of the importance and effects that consulting firms have had over the last century, and in particular over the last 30 years. The second chapter describes the core asset of consulting firms, namely knowledge, and how it is structured and managed. The third chapter analyzes the main theories developed about the figure of the consultant, covering both their historical evolution and the main positions still debated today. The fourth chapter focuses on knowledge transfer from consultant to client and the conditions under which it can occur. The fifth chapter presents the aforementioned project, whose results are reported in annexes 1 and 2. Finally, the sixth chapter presents a model of the consultant's approach, obtained from a critical analysis of the practical case in light of the evidence that emerged from the literature.

Relevance:

90.00%

Publisher:

Abstract:

Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance may suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its performance under similar conditions.
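
A minimal sketch of turning a discovered indicator correlation into a refined service level objective, as the abstract describes. The monitoring samples and the 200 ms response-time SLO are illustrative, not from the paper:

```python
# Sketch: derive a refined, service-level objective from a strong linear
# correlation between a backend indicator (CPU) and an application-level
# indicator (response time). All numbers are illustrative.
import numpy as np

cpu  = np.array([0.30, 0.45, 0.55, 0.65, 0.75, 0.85, 0.90])          # utilisation
resp = np.array([80.0, 95.0, 110.0, 140.0, 170.0, 210.0, 240.0])     # ms

r = np.corrcoef(cpu, resp)[0, 1]
print(f"correlation(cpu, response time) = {r:.2f}")

if r > 0.8:  # strong correlation: refine the application SLO into a service SLO
    slope, intercept = np.polyfit(cpu, resp, 1)
    slo_resp_ms = 200.0                               # agreed application SLO
    cpu_limit = (slo_resp_ms - intercept) / slope     # invert the linear fit
    print(f"refined objective: keep backend CPU below {cpu_limit:.0%} "
          f"to stay within the {slo_resp_ms:.0f} ms response-time SLO")
```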

Relevance:

90.00%

Publisher:

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while the performance of tenants' applications is maximized. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and we show how these scaling rules can be combined and included into semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
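
A sketch of one SLA-driven scaling rule of the kind such a framework composes: scale out after sustained violation of a response-time guarantee, scale in under sustained headroom. The thresholds and window length are illustrative, not the thesis's actual rules:

```python
# Sketch of an SLA-driven VM-scaling rule. Thresholds, hysteresis window
# and samples are illustrative stand-ins for benchmark-derived rules.
from collections import deque

SLO_MS, WINDOW = 200.0, 3
recent = deque(maxlen=WINDOW)   # last few response-time samples

def scaling_decision(sample_ms: float, vms: int) -> int:
    """Return the new VM count after one monitoring interval."""
    recent.append(sample_ms)
    if len(recent) == WINDOW and all(s > SLO_MS for s in recent):
        recent.clear()
        return vms + 1                       # sustained SLA violation: scale out
    if len(recent) == WINDOW and all(s < 0.5 * SLO_MS for s in recent):
        recent.clear()
        return max(1, vms - 1)               # sustained headroom: scale in
    return vms

vms = 2
for sample in [150, 220, 230, 250, 180, 90, 80, 85]:
    vms = scaling_decision(sample, vms)
    print(f"sample={sample:3d} ms -> {vms} VMs")
```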

Relevance:

90.00%

Publisher:

Abstract:

This dissertation, whose research has been conducted at the Group of Electronic and Microelectronic Design (GDEM) within the framework of the project Power Consumption Control in Multimedia Terminals (PCCMUTE), focuses on the development of an energy estimation model for a battery-powered embedded processor board. The main objectives and contributions of the work are summarized as follows. A model is proposed to obtain accurate energy estimates based on the linear correlation between performance monitoring counters (PMCs) and energy consumption. Considering that the appropriate PMCs are unique to each system, the modeling methodology is improved to obtain stable accuracy, with only slight variation across multiple scenarios, and to be repeatable on other systems. It includes two steps: the first, the PMC filter, identifies the most suitable set among the PMCs available on a system; the second, the k-fold cross validation method, avoids bias during the model training stage. The methodology is implemented on a commercial embedded board running the 2.6.34 Linux kernel and PAPI, a cross-platform interface for configuring and accessing PMCs. The results show that the methodology maintains good stability across different scenarios and provides robust estimates, with an average relative error below 5%.
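
A minimal sketch of the two-step methodology described above: a counter-selection step followed by a linear fit, validated with k-fold cross validation. The data is synthetic; on the real board the feature matrix would come from PAPI counter readings:

```python
# Sketch of the PMC-filter + linear-model + k-fold methodology.
# Synthetic data stands in for real PMC dumps and power measurements.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 8))                     # 8 candidate PMCs, 200 runs
energy = 3.0 * X[:, 0] + 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)  # joules

model = make_pipeline(
    SelectKBest(f_regression, k=3),   # the "PMC filter": keep the best counters
    LinearRegression(),               # linear PMC -> energy correlation
)
scores = cross_val_score(model, X, energy, cv=5, scoring="r2")  # k-fold step
print(f"5-fold R^2: mean={scores.mean():.3f}, std={scores.std():.3f}")
```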

Relevance:

90.00%

Publisher:

Abstract:

As with all forms of transport, the safety of air travel is of paramount importance. With the projected increases in European air traffic in the next decade and beyond, it is clear that the risk of accidents needs to be assessed and carefully monitored on a continuing basis. The present thesis is aimed at the development of a comprehensive collision risk model as a method of assessing European en-route risk, due to all causes and across all dimensions within the airspace. The major constraint in developing appropriate monitoring methodologies and tools to assess the level of safety in en-route airspaces, where controllers monitor air traffic by means of radar surveillance and provide aircraft with tactical instructions, lies in the estimation of the operational risk. The operational risk estimate normally relies on incident reports provided by the air navigation service providers (ANSPs). This thesis proposes a new and innovative approach to assessing aircraft safety levels based exclusively upon the processing and analysis of radar tracks. The proposed methodology has been designed to complement the information collected in the accident and incident databases, providing robust information on air traffic factors and safety metrics inferred from the in-depth assessment of proximate events.
The 3-D CRM methodology is implemented in a prototype tool in MATLAB in order to automatically analyze recorded aircraft tracks and flight plan data from the Radar Data Processing systems (RDP) and to identify and analyze all proximate events (conflicts, potential conflicts and potential collisions) within a given time span and volume of airspace. Currently, the 3-D CRM prototype is being adapted and integrated into Aena's performance monitoring tool (PERSEO) to complement the information provided by the ATM accident and incident databases and to enhance monitoring and provide evidence of levels of safety.
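
At its core, the proximate-event scan described above reduces to checking pairwise separation in each radar snapshot. A sketch follows; the 5 NM / 1000 ft values are the usual en-route radar separation minima, used here as illustrative thresholds, and the tracks are invented:

```python
# Sketch of a proximate-event scan over one synchronized radar snapshot:
# flag any pair of tracks whose separation falls below both minima.
from itertools import combinations
from math import hypot

H_MIN_NM, V_MIN_FT = 5.0, 1000.0

# callsign -> (x_nm, y_nm, altitude_ft); positions are illustrative.
snapshot = {
    "ABC123": (10.0, 20.0, 35000.0),
    "DEF456": (13.0, 22.0, 35400.0),
    "GHI789": (40.0,  5.0, 33000.0),
}

for (a, pa), (b, pb) in combinations(snapshot.items(), 2):
    horizontal = hypot(pa[0] - pb[0], pa[1] - pb[1])
    vertical = abs(pa[2] - pb[2])
    if horizontal < H_MIN_NM and vertical < V_MIN_FT:
        print(f"proximate event: {a}/{b} "
              f"({horizontal:.1f} NM, {vertical:.0f} ft)")
```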

Relevance:

90.00%

Publisher:

Abstract:

Cognitive impairment is the main cause of disability in developed societies. New interactive technologies help therapists in neurorehabilitation, aiming to increase patients' autonomy and quality of life. This work proposes Interactive Video (IV) as a technology for developing cognitive rehabilitation tasks based on Activities of Daily Living (ADL). An ADL cognitive task has been developed and integrated with eye-tracking technology for task interaction and monitoring of patients' performance.

Relevance:

90.00%

Publisher:

Abstract:

This diploma project presents two tools, Papify and Papify-Viewer, used to measure and visualize the low-level performance of RVC-CAL specifications based on hardware events. RVC-CAL is a dataflow language standardized by MPEG which is used to define video codec tools. The structure of the applications described in RVC-CAL is based on functional units called actors, which are in turn divided into smaller procedures called actions. ORCC (Open RVC-CAL Compiler) is an open-source compiler capable of transforming RVC-CAL descriptions into source code in a given language, such as C. Internally, the compiler is divided into three distinguishable stages: front-end, middle-end and back-end. Papify's implementation consists of modifying the compiler's back-end stage, which is responsible for generating the final source code, so that actors translated to C code are instrumented with PAPI (Performance Application Programming Interface), a tool that provides an interface to the microprocessor's performance monitoring counters (PMCs). In addition, the front-end is modified so that it can identify a certain type of annotation in the RVC-CAL descriptions, allowing the designer to select the actors or actions to be included in the measurement. Besides preserving their initial behavior, the instrumented actors also generate a set of files containing data about the different hardware events triggered throughout the program's execution. The events included in these files can be configured through the previously mentioned annotations. The second tool, Papify-Viewer, processes the files generated by Papify and provides a visual representation of the information in two ways: on one hand, a chronological representation of the application's execution in which each actor has its own timeline; on the other hand, statistics on the number of events triggered per action, actor or core, presented as bar charts. Both tools can be used together to verify the correct functioning of the program, balance the load between actors or across cores, improve performance and diagnose problems.
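
A sketch of the statistics step Papify-Viewer performs: aggregating per-actor event counts from the trace files Papify writes. The CSV layout (actor, action, event, count) is hypothetical; the real file format is defined by Papify itself:

```python
# Sketch of per-actor event aggregation over a hypothetical Papify trace.
import csv
from collections import defaultdict

totals: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

with open("papify_trace.csv", newline="") as f:
    for row in csv.DictReader(f):                  # columns: actor,action,event,count
        totals[row["actor"]][row["event"]] += int(row["count"])

for actor, events in sorted(totals.items()):
    print(actor)
    for event, count in sorted(events.items()):
        print(f"  {event:20s} {count:12d}")        # e.g. PAPI_TOT_CYC, PAPI_L1_DCM
```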

Relevance:

90.00%

Publisher:

Abstract:

Modern compilers offer a large and ever-increasing number of options that can modify the features and behavior of a compiled program. Many of these options go unused because exploiting them requires comprehensive knowledge of both the underlying architecture and the compiler's internals. In this context it is common to have not a single design goal but a more complex set of objectives. In addition, the dependencies between different goals are difficult to infer a priori. This paper proposes a strategy for tuning the compilation of any given application. This is accomplished by automatically varying the compilation options by means of multi-objective optimization and evolutionary computation driven by the NSGA-II algorithm, which makes it possible to find compilation options that simultaneously optimize different objectives. The advantages of our proposal are illustrated by means of a case study based on the well-known Apache web server. Our strategy has demonstrated an ability to find improvements of up to 7.5% in context switches and up to 27% in L2 cache misses, and also discovers the most important bottlenecks involved in the application performance.
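
A minimal sketch of the multi-objective idea behind this strategy: evaluate flag combinations against two objectives (context switches and L2 cache misses) and keep the Pareto-optimal ones. A real implementation would run NSGA-II over generations and profile an actual benchmark per individual; the measurements below are random stand-ins:

```python
# Sketch: Pareto filtering of compiler-flag configurations under two
# minimization objectives. Measurements are illustrative stand-ins for
# compiling and profiling a benchmark; a full NSGA-II loop would evolve
# the population over generations.
import random

FLAGS = ["-O2", "-O3", "-funroll-loops", "-fomit-frame-pointer", "-flto"]

def measure(flags: frozenset) -> tuple[float, float]:
    """Stand-in for compiling with `flags` and profiling the benchmark."""
    random.seed(hash(flags))
    return random.uniform(1e3, 1e4), random.uniform(1e5, 1e6)

def dominates(a, b):
    """True if objective vector a is at least as good as b and better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and a != b

population = [frozenset(random.sample(FLAGS, k=random.randint(1, len(FLAGS))))
              for _ in range(20)]
scored = {ind: measure(ind) for ind in set(population)}

front = [ind for ind in scored
         if not any(dominates(scored[o], scored[ind]) for o in scored)]
for ind in front:
    cs, l2 = scored[ind]
    print(f"{sorted(ind)} -> ctx switches {cs:,.0f}, L2 misses {l2:,.0f}")
```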

Relevance:

90.00%

Publisher:

Abstract:

Addressing inconsistencies in relational demography research, we examine the relationship between cultural dissimilarity and individual performance through the lens of social self-regulation theory, which extends the social identity perspective in relational demography with the analysis of social self-regulation. We propose that social self-regulation in culturally diverse teams manifests itself as performance monitoring (i.e., individuals' actions to meet team performance standards and peer expectations). Contingent on the status associated with individuals' cultural background, performance monitoring is proposed to have a curvilinear relationship with individual performance and to mediate between cultural dissimilarity and performance. Multilevel moderated mediation analyses of time-lagged data from 316 members of 69 teams confirmed these hypotheses. Cultural dissimilarity had a negative relationship with performance monitoring for high cultural-status members, and a positive relationship for low cultural-status members. Performance monitoring had a curvilinear relationship with individual performance that became decreasingly positive. Cultural dissimilarity thus was increasingly negatively associated with performance for high cultural-status members, and decreasingly positively associated for low cultural-status members. These findings suggest that cultural dissimilarity to the team is not unconditionally negative for the individual but, in moderation, may in fact have positive motivational effects.

Relevance:

90.00%

Publisher:

Abstract:

Governmental accountability is the requirement that government entities be accountable to the citizenry in order to justify the raising and expenditure of public resources. The concept of service efforts and accomplishments measurement for government programs was introduced by the Governmental Accounting Standards Board (GASB) in Service Efforts and Accomplishments Reporting: Its Time Has Come (1990). This research tested the feasibility of implementing the concept for the Federal-aid highway construction program and identified factors affecting implementation, with a case study of the District of Columbia. Changes in condition and performance ratings for specific highway segments in 15 projects, before and after construction expenditures, were evaluated using data provided by the Federal Highway Administration. The results of the evaluation indicated difficulty in drawing conclusions on the performance of the state program as a whole. The state program reflects problems within the Federally administered program that severely limit implementation of outcome-oriented performance measurement. The major problems identified with data acquisition are data reliability, availability, and compatibility and consistency among states. Other significant factors affecting implementation are institutional and political barriers. Institutional issues in the Federal Highway Administration include the lack of integration of the project-specific fiscal database with the Highway Performance Monitoring System database. The Federal Highway Administration has the ability to resolve both data problems; however, interviews with key Federal informants indicate this will not occur without external directives and changes to the Federal "stewardship" approach to program administration. The findings indicate that many issues must be resolved for successful implementation of outcome-oriented performance measures in the Federal-aid construction program. The issues are organizational and political in nature; however, in the current environment resolution is possible. Additional research is desirable and would be useful in overcoming the obstacles to successful implementation.