26 results for performance monitoring

at Universidad Politécnica de Madrid


Relevance: 60.00%

Abstract:

This dissertation, whose research was conducted at the Group of Electronic and Microelectronic Design (GDEM) within the framework of the project Power Consumption Control in Multimedia Terminals (PCCMUTE), focuses on the development of an energy estimation model for a battery-powered embedded processor board. The main objectives and contributions of the work are summarized as follows. A model is proposed to obtain accurate energy estimates based on the linear correlation between performance monitoring counters (PMCs) and energy consumption. Since the appropriate PMCs are particular to each system, the modeling methodology is improved to obtain stable accuracy, with only slight variations across multiple scenarios, and to be repeatable on other systems. It consists of two steps: the first, a PMC filter, identifies the most suitable set among the PMCs available on a system; the second, k-fold cross validation, avoids bias during the model training stage. The methodology is implemented on a commercial embedded board running the 2.6.34 Linux kernel and PAPI, a cross-platform interface for configuring and accessing PMCs. The results show that the methodology maintains good stability across different scenarios and provides robust estimates, with an average relative error below 5%.
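For illustration, a minimal Python sketch of the two-step idea described above (a counter filter followed by a k-fold-validated linear model) is given below, using scikit-learn and synthetic data; the counter values, the baseline term, the selection criterion and the number of folds are assumptions, not the thesis's actual setup.

```python
# Sketch of a PMC-based linear energy model: (1) filter the most informative
# counters, (2) fit a linear model validated with k-fold cross validation.
# Data and parameters are synthetic/illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(0)
X = rng.random((200, 12))                       # 12 candidate PMCs, one row per interval
y = 5.0 + 3.0 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(0, 0.05, 200)   # measured energy (J)

# Step 1 (stand-in for the PMC filter): keep the counters most correlated with energy
selector = SelectKBest(score_func=f_regression, k=4).fit(X, y)
X_sel = selector.transform(X)

# Step 2: linear model assessed with k-fold cross validation to avoid training bias
scores = cross_val_score(LinearRegression(), X_sel, y, cv=5,
                         scoring="neg_mean_absolute_percentage_error")
print("mean relative error: %.2f%%" % (-scores.mean() * 100))
```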

Relevance: 60.00%

Abstract:

As with all forms of transport, the safety of air travel is of paramount importance. With the projected increases in European air traffic over the next decade and beyond, it is clear that the risk of accidents needs to be assessed and carefully monitored on a continuing basis. This thesis is aimed at the development of a comprehensive collision risk model as a method of assessing European en-route risk, due to all causes and across all dimensions of the airspace. The major constraint in developing appropriate monitoring methodologies and tools to assess the level of safety in en-route airspaces, where controllers monitor air traffic by means of radar surveillance and provide aircraft with tactical instructions, lies in the estimation of the operational risk. The operational risk estimate normally relies on incident reports provided by the air navigation service providers (ANSPs). This thesis proposes a new and innovative approach to assessing aircraft safety levels based exclusively on the processing and analysis of radar tracks. The proposed methodology has been designed to complement the information collected in accident and incident databases, providing robust information on air traffic factors and safety metrics inferred from the automatic, in-depth assessment of all proximate events. The 3-D CRM methodology is implemented in a prototype tool in MATLAB that automatically analyzes aircraft tracks and flight plan data recorded by Radar Data Processing (RDP) systems and identifies and analyzes all proximate events (conflicts, potential conflicts and potential collisions) within a given time span and volume of airspace. Currently, the 3-D CRM prototype is being adapted and integrated into Aena's performance monitoring tool (PERSEO) to complement the ATM accident and incident databases, enhance monitoring and provide evidence of safety levels.
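As a rough illustration of the kind of screening the prototype automates, the following Python sketch flags proximate events between synchronized radar tracks; the track format and the separation thresholds are illustrative assumptions and not part of the 3-D CRM definition.

```python
# Proximate-event screening over synchronized radar tracks (illustrative only).
import itertools
import math

SEP_H_NM = 5.0      # horizontal separation minimum (illustrative)
SEP_V_FT = 1000.0   # vertical separation minimum (illustrative)

def proximate_events(tracks_at_t):
    """tracks_at_t: dict aircraft_id -> (x_nm, y_nm, alt_ft) at one time step."""
    events = []
    for (id_a, pa), (id_b, pb) in itertools.combinations(tracks_at_t.items(), 2):
        horiz = math.hypot(pa[0] - pb[0], pa[1] - pb[1])
        vert = abs(pa[2] - pb[2])
        if horiz < SEP_H_NM and vert < SEP_V_FT:
            events.append((id_a, id_b, horiz, vert))
    return events

sample = {"IB123": (10.0, 4.0, 35000), "BA456": (12.5, 5.0, 34800)}
print(proximate_events(sample))
```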

Relevance: 60.00%

Abstract:

Cognitive impairment is the main cause of disability in developed societies. New interactive technologies help therapists in neurorehabilitation to increase patients' autonomy and quality of life. This work proposes Interactive Video (IV) as a technology for developing cognitive rehabilitation tasks based on Activities of Daily Living (ADL). An ADL cognitive task has been developed and integrated with eye-tracking technology for task interaction and monitoring of patients' performance.

Relevance: 60.00%

Abstract:

This diploma project presents two tools, Papify and Papify-Viewer, used to measure and visualize, respectively, the low-level performance of RVC-CAL specifications based on hardware events. RVC-CAL is a dataflow language standardized by MPEG and used to define video codec tools. The structure of applications described in RVC-CAL is based on functional units called actors, which are in turn divided into smaller functions or procedures called actions. ORCC (Open RVC-CAL Compiler) is an open-source compiler capable of transforming RVC-CAL descriptions into source code in a given language, such as C. Internally, the compiler is divided into three distinguishable stages: front-end, middle-end and back-end. Papify's implementation consists of modifying the compiler's back-end stage, which is responsible for generating the final source code, so that actors translated to C code are instrumented with PAPI (Performance Application Programming Interface), a tool that provides an interface to the processor's performance monitoring counters (PMCs). In addition, the front-end is modified to recognize a certain type of annotation in the RVC-CAL descriptions, which lets the designer select the actors or actions to be measured. Besides preserving their original behavior, the instrumented actors generate a set of files containing data about the different hardware events triggered throughout the program's execution. The events included in these files can be configured inside the previously mentioned annotations. The second tool, Papify-Viewer, processes the files generated by Papify and provides a visual representation of the information at two levels: on one hand, a chronological representation of the application's execution in which each actor has its own timeline; on the other hand, statistics on the number of events triggered per action, actor or execution core, represented as bar charts. Both tools can be used together to verify the correct functioning of the program, balance the load between actors or their distribution across cores, improve performance and diagnose problems.
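As an illustration of the kind of post-processing Papify-Viewer performs, the Python sketch below aggregates event counts per actor and per action from a Papify-style output file; the CSV layout (actor, action, core, event, count) is a hypothetical stand-in, not Papify's actual file format.

```python
# Aggregate hardware-event counts per actor and per action (hypothetical format).
import csv
from collections import defaultdict

def aggregate(papify_csv):
    per_actor = defaultdict(int)
    per_action = defaultdict(int)
    with open(papify_csv, newline="") as f:
        for row in csv.DictReader(f):
            count = int(row["count"])
            per_actor[(row["actor"], row["event"])] += count
            per_action[(row["actor"], row["action"], row["event"])] += count
    return per_actor, per_action

# The per-actor totals could then be drawn as the bar charts mentioned above:
# actors, actions = aggregate("papify_output.csv")
```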

Relevance: 30.00%

Abstract:

The paper proposes a model for estimating perceived video quality in IPTV, taking as input both video coding and network Quality of Service parameters. The model includes fitting parameters that depend mainly on the information content of the video sequences, and a method is proposed to derive them from the Spatial and Temporal Information content of the sequences. The model may be used for near real-time monitoring of IPTV video quality.
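A hedged sketch of what such a parametric model can look like is given below; the functional form, the parameter values and their link to Spatial/Temporal Information are purely illustrative assumptions, not the paper's actual equations.

```python
# Illustrative parametric quality model: perceived quality as a function of
# coding bitrate and packet loss, with content-dependent fitting parameters.
import math

def estimated_mos(bitrate_kbps, packet_loss_pct, a=4.5, b=900.0, c=0.35):
    coding_quality = a - b / max(bitrate_kbps, 1.0)   # saturates at high bitrate
    loss_penalty = math.exp(-c * packet_loss_pct)     # decays with packet loss
    return max(1.0, min(5.0, 1.0 + (coding_quality - 1.0) * loss_penalty))

print(estimated_mos(3000, 0.5))   # high bitrate, mild loss
print(estimated_mos(800, 2.0))    # low bitrate, heavier loss
```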

Relevance: 30.00%

Abstract:

Current nanometer technologies suffer from within-die parameter uncertainties, varying workload conditions, aging and temperature effects that cause a serious reduction in yield and performance. In this scenario, monitoring, calibration and dynamic adaptation become essential, demanding systems with a collection of multi-purpose monitors and exposing the need for lightweight monitoring networks. This paper presents a new monitoring network paradigm able to perform an early prioritization of the information. This is achieved by introducing a new hierarchy level, the threshing level. Targeting it, we propose a time-domain signaling scheme over a single wire that minimizes the network switching activity as well as the routing requirements. To validate our approach, we make a thorough analysis of the architectural trade-offs and present two complete monitoring systems that achieve an area improvement of 40% and a power reduction of three orders of magnitude compared to previous works.
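The Python sketch below only illustrates the two ideas conceptually (a threshing stage that prioritizes readings, plus time-domain encoding of the forwarded values on a shared wire); thresholds, units and timing values are assumptions, and the actual proposal is a hardware signaling scheme, not software.

```python
# Conceptual illustration of threshing + time-domain encoding on a single wire.
def thresh(readings, threshold):
    """Keep only the monitor readings that deserve the network's attention."""
    return {node: value for node, value in readings.items() if value >= threshold}

def to_time_interval(value, full_scale, t_min_ns=10.0, t_max_ns=200.0):
    """Map a reading onto a pulse-to-pulse delay on the shared wire (illustrative)."""
    frac = min(max(value / full_scale, 0.0), 1.0)
    return t_min_ns + frac * (t_max_ns - t_min_ns)

readings = {"temp_sensor_3": 92.0, "temp_sensor_7": 41.0, "aging_mon_1": 77.0}
for node, value in thresh(readings, threshold=75.0).items():
    print(node, "->", to_time_interval(value, full_scale=125.0), "ns")
```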

Relevance: 30.00%

Abstract:

Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits, such as within-die parameter uncertainties, varying workload conditions, aging and temperature. Monitoring, calibration and dynamic adaptation have appeared as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, the need for lightweight monitoring networks has become essential. In this work we present a lightweight network architecture based on sharing the digitization resources of nodes that require time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform the access multiplexing and transmit the information. It achieves a 16% improvement in area and power consumption compared to traditional approaches.

Relevance: 30.00%

Abstract:

This work evaluates a spline-based smoothing method applied to the output of a glucose predictor. Methods: Our on-line prediction algorithm is based on a neural network model (NNM). We trained/validated the NNM with a prediction horizon of 30 minutes using 39/54 profiles of patients monitored with the Guardian® Real-Time continuous glucose monitoring system. The NNM output is smoothed by fitting a causal cubic spline. The assessment parameters are the prediction error (RMSE), the mean delay (MD) and the high-frequency noise (HFCrms). The HFCrms is the root-mean-square value of the high-frequency components isolated with a zero-delay non-causal filter. HFCrms is 2.90±1.37 mg/dl for the original profiles.
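A minimal Python sketch of the smoothing step and the RMSE assessment follows, using synthetic data and scipy's UnivariateSpline as a stand-in for the paper's causal cubic spline; the sampling period, noise level and smoothing factor are assumptions.

```python
# Spline smoothing of a noisy glucose prediction series and RMSE assessment.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.arange(0, 300, 5)                                  # minutes, 5-min sampling
reference = 120 + 40 * np.sin(t / 60.0)                   # "true" glucose (mg/dl), synthetic
predicted = reference + np.random.default_rng(1).normal(0, 8, t.size)

spline = UnivariateSpline(t, predicted, k=3, s=t.size * 8.0**2)   # cubic smoothing spline
smoothed = spline(t)

rmse = np.sqrt(np.mean((smoothed - reference) ** 2))
print("RMSE: %.1f mg/dl" % rmse)
```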

Relevance: 30.00%

Abstract:

Variabilities associated with CMOS evolution affect the yield and performance of current digital designs. FPGAs, which are widely used for fast prototyping and implementation of digital circuits, also suffer from these issues. Proactive approaches are starting to appear to achieve self-awareness and dynamic adaptation in these devices. To support these techniques we propose employing a multi-purpose sensor network. This infrastructure, through adequate use of configuration and automation tools, is able to obtain relevant data along the life cycle of an FPGA. This is realised at a very reduced cost, not only in terms of area or other limited resources, but also regarding the design effort required to define and deploy the measuring infrastructure. Our proposal has been validated by measuring inter-die and intra-die variability in different FPGA families.

Relevance: 30.00%

Abstract:

There is now an emerging need for an efficient modeling strategy to develop a new generation of monitoring systems. One approach to modeling complex processes is to obtain a global model that captures the basic or general behavior of the system, by means of a linear or quadratic regression, and then to superimpose on it a local model that captures the localized nonlinearities of the system. In this paper, a novel method based on a hybrid incremental modeling approach is designed and applied to tool wear detection in turning processes. It involves a two-step iterative process that combines a global model with a local model to take advantage of their underlying, complementary capacities. The first step constructs a global model using least squares regression; the second step obtains a local model using the fuzzy k-nearest-neighbors smoothing algorithm. A comparative study then demonstrates that the hybrid incremental model provides better error-based performance indices for detecting tool wear than either a transductive neuro-fuzzy model or an inductive neuro-fuzzy model.
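The two-step structure can be sketched as follows in Python, with a distance-weighted k-NN regressor standing in for the fuzzy k-nearest-neighbors smoothing algorithm and synthetic data in place of real cutting measurements.

```python
# Hybrid incremental modeling sketch: global least-squares model plus a local
# correction fitted to its residuals (illustrative data and stand-in local model).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.random((300, 3))                                   # e.g. feed rate, cutting speed, force
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * np.sin(6 * X[:, 2])   # tool wear proxy

global_model = LinearRegression().fit(X, y)                # step 1: general behavior
residuals = y - global_model.predict(X)
local_model = KNeighborsRegressor(n_neighbors=7, weights="distance").fit(X, residuals)

def predict(X_new):
    # step 2: superimpose the local correction on the global trend
    return global_model.predict(X_new) + local_model.predict(X_new)

print(predict(X[:3]), y[:3])
```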

Relevance: 30.00%

Abstract:

Tool wear detection is a key issue for tool condition monitoring, and maximizing useful tool life is frequently related to the optimization of machining processes. This paper presents two model-based approaches for tool wear monitoring based on neuro-fuzzy techniques. The neuro-fuzzy hybridization used to design the tool wear monitoring system aims at exploiting the synergy of neural networks and fuzzy logic by combining human reasoning with learning and a connectionist structure. The turning process, a well-known machining process, is selected for this case study. A four-input (time, cutting forces, vibrations and acoustic emission signals), single-output (tool wear rate) model is designed and implemented on the basis of three neuro-fuzzy approaches (inductive, transductive and evolving neuro-fuzzy systems). The tool wear model is then used for monitoring the turning process. The comparative study demonstrates that the transductive neuro-fuzzy model provides better error-based performance indices for detecting tool wear than either the inductive or the evolving neuro-fuzzy model.

Relevance: 30.00%

Abstract:

Installers and owners show a growing interest in following up the performance of their photovoltaic (PV) systems. Owners request reliable sources of information to ensure that their systems are functioning properly, and installers are actively looking for efficient ways of providing them with the most useful information possible from the available data. Policy makers are becoming increasingly interested in knowing the real performance of PV systems and the most frequent sources of problems they suffer, in order to be able to target the identified challenges properly. The scientific and industrial PV community also requires access to massive amounts of operational data to pursue further technological improvements.

Relevance: 30.00%

Abstract:

This work introduces a web-based learning environment to facilitate learning in Project Management. The proposed web-based support system integrates methodological procedures and information systems, making it possible to promote learning among geographically dispersed students. Thus, students who are enrolled in different universities at different locations and attend their own project management courses share a virtual experience in executing and managing projects. Specific support systems were used or developed to automatically collect information about student activities, making it possible to monitor the progress made in learning and to assess learning performance as established in the defined rubric.

Relevance: 30.00%

Abstract:

The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as state-of-the-art solutions used for other large-scale environments do not address specific cloud features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all its aspects has not yet been presented. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy, on which it bases the definition of a layered cloud monitoring architecture. To illustrate it, we have implemented GMonE, a general-purpose cloud monitoring tool that covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability and overhead of GMonE with the Yahoo Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid'5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, surpassing the monitoring performance and capabilities of alternatives present in state-of-the-art systems such as Amazon EC2 and OpenNebula.
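For illustration, the agent/collector split typical of such layered monitoring architectures can be sketched as below; this is not GMonE's actual design, and the metric names, sampling period and in-memory collector are assumptions (psutil is used only as a convenient local metric source).

```python
# Minimal monitoring agent: sample local metrics periodically and push them to
# a collector (here an in-memory list standing in for a monitoring back-end).
import time
import psutil

def sample_metrics():
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=None),
        "mem_percent": psutil.virtual_memory().percent,
    }

collector = []            # stand-in for an aggregation/storage layer
for _ in range(3):        # a real agent would loop at a configured period
    collector.append(sample_metrics())
    time.sleep(1)

print(collector[-1])
```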

Relevance: 30.00%

Abstract:

High-Performance Computing, Cloud computing and next-generation applications such as e-Health or Smart Cities have dramatically increased the computational demand of Data Centers. The huge energy consumption, increasing levels of CO2 and the economic costs of these facilities represent a challenge for industry and researchers alike. Recent research trends propose the use of holistic optimization techniques to jointly minimize Data Center computational and cooling costs from a multilevel perspective. This paper presents an analysis of the parameters needed to integrate the Data Center into a holistic optimization framework and leverages Cyber-Physical systems to gather workload, server and environmental data via software techniques and by deploying a non-intrusive Wireless Sensor Network (WSN). This solution tackles data sampling, retrieval and storage from a reconfigurable perspective, reducing the amount of data generated for optimization by 68% without information loss, doubling the lifetime of the WSN nodes and enabling runtime energy minimization techniques in a real scenario.
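One simple way to obtain this kind of data reduction in a sensor node is send-on-delta reporting, sketched below; it is offered only as an illustration of the reconfigurable-sampling idea, not the paper's exact scheme, and the temperature trace and tolerance are made up.

```python
# Send-on-delta reporting: transmit a sample only when it differs from the last
# transmitted value by more than a configurable tolerance (illustrative data).
def send_on_delta(samples, delta):
    transmitted = []
    last_sent = None
    for value in samples:
        if last_sent is None or abs(value - last_sent) > delta:
            transmitted.append(value)
            last_sent = value
    return transmitted

temperatures = [22.0, 22.1, 22.1, 22.4, 23.0, 23.0, 23.1, 24.2, 24.2, 24.3]
kept = send_on_delta(temperatures, delta=0.5)
print(len(kept), "of", len(temperatures), "samples transmitted:", kept)
```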