799 results for "Utility-based performance measures"


Relevance: 100.00%

Abstract:

This thesis consists of four essays on the design and disclosure of compensation contracts. Essays 1, 2 and 3 focus on behavioral aspects of mandatory compensation disclosure rules and of contract negotiations in agency relationships. The three experimental studies develop psychology-based theory and present results that deviate from standard economic predictions. Furthermore, the results of Essays 1 and 2 also have implications for firms' discretion in how to communicate their top management's incentives to the capital market. Essay 4 analyzes the role of fairness perceptions in the evaluation of executive compensation. For this purpose, two surveys targeting representative eligible voters as well as investment professionals were conducted. Essay 1 investigates the role of the detailed 'Compensation Discussion and Analysis', which is part of the Securities and Exchange Commission's 2006 regulation, in investors' evaluations of executive performance. Compensation disclosure complying with this regulation clarifies the relationship between realized reported compensation, the underlying performance measures, and their target achievement levels. The experimental findings suggest that the salient presentation of executives' incentives inherent in the 'Compensation Discussion and Analysis' makes investors' performance evaluations less outcome dependent. Therefore, investors' judgment and investment decisions might be less affected by noisy environmental factors that drive financial performance. The results also suggest that fairness perceptions of compensation contracts are essential for investors' performance evaluations, in that more transparent disclosure increases the perceived fairness of compensation and the performance evaluation of managers who are not responsible for a poor financial performance.
These results have important practical implications, as firms might choose to communicate their top management's incentive compensation more transparently in order to benefit from less volatile expectations about their future performance. Like the first experiment, the experiment described in Essay 2 addresses the question of more transparent compensation disclosure. However, unlike the first experiment, the second does not analyze the effect of a more salient presentation of contract information but the informational effect of the contract information itself. For this purpose, the experiment tests two conditions in which assessing the compensation contract's incentive compatibility, which determines executive effort, is either possible or not. On the one hand, the results suggest that the quality of investors' expectations about executive effort improves; on the other hand, investors might over-adjust their prior expectations about executive effort when confronted with an unexpected financial performance and under-adjust when the financial performance confirms their prior expectations. Therefore, in the experiment, more transparent compensation disclosure does not lead to more accurate overall judgments of executive effort and even lowers the processing quality of outcome information. These results add to the disclosure literature, which predominantly advocates more transparency. The findings of the experiment, however, identify decreased information-processing quality as a relevant disclosure cost category. Firms should therefore carefully weigh the additional costs and benefits of more transparent compensation disclosure. Together with the results from the experiment in Essay 1, the two experiments on compensation disclosure imply that firms should rather focus on their discretion in how to present their compensation disclosure, to benefit from investors' improved fairness perceptions and their spill-over onto performance evaluation.
Essay 3 studies the behavioral effects of contextual factors in recruitment processes that do not affect the employer’s or the applicant’s bargaining power from a standard economic perspective. In particular, the experiment studies two common characteristics of recruitment processes: Pre-contractual competition among job applicants and job applicants’ non-binding effort announcements as they might be made during job interviews. Despite the standard economic irrelevance of these factors, the experiment develops theory regarding the behavioral effects on employees’ subsequent effort provision and the employers’ contract design choices. The experimental findings largely support the predictions. More specifically, the results suggest that firms can benefit from increased effort and, therefore, may generate higher profits. Further, firms may seize a larger share of the employment relationship’s profit by highlighting the competitive aspects of the recruitment process and by requiring applicants to make announcements about their future effort. Finally, Essay 4 studies the role of fairness perceptions for the public evaluation of executive compensation. Although economic criteria for the design of incentive compensation generally do not make restrictive recommendations with regard to the amount of compensation, fairness perceptions might be relevant from the perspective of firms and standard setters. This is because behavioral theory has identified fairness as an important determinant of individuals’ judgment and decisions. However, although fairness concerns about executive compensation are often stated in the popular media and even in the literature, evidence on the meaning of fairness in the context of executive compensation is scarce and ambiguous. 
In order to inform practitioners and standard setters whether fairness concerns are exclusive to non-professionals or relevant for investment professionals as well, the two surveys presented in Essay 4 aim to find commonalities in the opinions of representative eligible voters and investment professionals. The results suggest that fairness is an important criterion for both groups. In particular, exposure to risk in the form of the variable compensation share is an important criterion shared by both groups: the higher the assumed variable share, the higher the compensation amount perceived as fair. However, to a large extent, opinions on executive compensation depend on personality characteristics, and to some extent, investment professionals' perceptions deviate systematically from those of non-professionals. The findings imply that firms might benefit from emphasizing the riskiness of their managers' variable pay components; the findings are therefore also in line with those of Essay 1.

Relevance: 100.00%

Abstract:

Several techniques have been proposed to exploit GNSS-derived kinematic orbit information for the determination of long-wavelength gravity field features. These methods include the (i) celestial mechanics approach, (ii) short-arc approach, (iii) point-wise acceleration approach, (iv) averaged acceleration approach, and (v) energy balance approach. Although there is a general consensus that these methods (except for energy balance) theoretically provide equivalent results, real data gravity field solutions from kinematic orbit analysis have never been evaluated against each other within a consistent data-processing environment. This contribution strives to close this gap. Target consistency criteria for our study are the input data sets, period of investigation, spherical harmonic resolution, a priori gravity field information, etc. We compare GOCE gravity field estimates based on the aforementioned approaches as computed at the Graz University of Technology, the University of Bern, the University of Stuttgart/Austrian Academy of Sciences, and by RHEA Systems for the European Space Agency. The involved research groups complied with most of the consistency criteria; deviations occur only where technically unavoidable. Performance measures include formal errors, differences with respect to a state-of-the-art GRACE gravity field, (cumulative) geoid height differences, and SLR residuals from precise orbit determination of geodetic satellites. We found that for approaches (i) to (iv), the cumulative geoid height differences at spherical harmonic degree 100 differ by only ≈10%; in the absence of the polar data gap, SLR residuals agree to ≈96%. From our investigations, we conclude that real data analysis results are in agreement with the theoretical considerations concerning the (relative) performance of the different approaches.
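The degree-wise and cumulative geoid height differences used as a performance measure can be sketched as follows. This is a generic sketch: the coefficient differences in the example stand in for the differences between a candidate solution and the GRACE reference field, and the single-degree case is purely illustrative.

```python
import math

R = 6378136.3  # reference Earth radius in metres (illustrative value)

def geoid_degree_difference(dC, dS, n):
    """Geoid height difference contributed by spherical harmonic degree n,
    from fully normalized coefficient differences dC[m], dS[m] (m = 0..n)."""
    return R * math.sqrt(sum(dC[m] ** 2 + dS[m] ** 2 for m in range(n + 1)))

def cumulative_geoid_difference(per_degree):
    """Cumulative geoid height difference up to the maximum degree:
    root-sum-square of the per-degree contributions."""
    return math.sqrt(sum(d ** 2 for d in per_degree))

# Example: degree 1 with hypothetical coefficient differences of a few 1e-9
delta_n1 = geoid_degree_difference([3e-9, 4e-9], [0.0, 0.0], 1)
```

Evaluating `cumulative_geoid_difference` at degree 100 for two solutions and comparing the results corresponds to the ≈10% agreement reported above.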

Relevance: 100.00%

Abstract:

With hundreds of single nucleotide polymorphisms (SNPs) in a candidate gene and millions of SNPs across the genome, selecting an informative subset of SNPs to maximize the ability to detect genotype-phenotype association is of great interest and importance. In addition, with a large number of SNPs, analytic methods are needed that allow investigators to control the false positive rate resulting from large numbers of SNP genotype-phenotype analyses. This dissertation uses simulated data to explore methods for selecting SNPs for genotype-phenotype association studies. I examined the pattern of linkage disequilibrium (LD) across a candidate gene region and used this pattern to aid in localizing a disease-influencing mutation. The results indicate that the r² measure of linkage disequilibrium is preferred over the common D′ measure for use in genotype-phenotype association studies. Using step-wise linear regression, the best predictor of the quantitative trait was usually not the single functional mutation. Rather, it was a SNP in high linkage disequilibrium with the functional mutation. Next, I compared three strategies for selecting SNPs for application to phenotype association studies: based on measures of linkage disequilibrium, based on a measure of haplotype diversity, and random selection. The results demonstrate that SNPs selected based on maximum haplotype diversity are more informative and yield higher power than randomly selected SNPs or SNPs selected based on low pair-wise LD. The data also indicate that for genes with a small contribution to the phenotype, it is more prudent for investigators to increase their sample size than to continuously increase the number of SNPs in order to improve statistical power. When typing large numbers of SNPs, researchers are faced with the challenge of utilizing an appropriate statistical method that controls the type I error rate while maintaining adequate power.
We show that an empirical genotype-based multi-locus global test that uses permutation testing to investigate the null distribution of the maximum test statistic maintains a desired overall type I error rate while not overly sacrificing statistical power. The results also show that when the penetrance model is simple, the multi-locus global test does as well as or better than the haplotype analysis. However, for more complex models, haplotype analyses offer advantages. The results of this dissertation will be of utility to human geneticists designing large-scale multi-locus genotype-phenotype association studies.
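The two linkage disequilibrium measures compared above, D′ and r², can be sketched for a pair of biallelic SNPs. The allele and haplotype frequencies in the example are illustrative, not taken from the dissertation's simulations.

```python
def ld_measures(pA, pB, pAB):
    """Compute D' and r^2 between two biallelic SNPs, given the allele
    frequencies pA and pB and the frequency pAB of the A-B haplotype."""
    D = pAB - pA * pB  # raw linkage disequilibrium coefficient
    # D' normalizes |D| by its maximum attainable value given the margins
    if D >= 0:
        Dmax = min(pA * (1 - pB), (1 - pA) * pB)
    else:
        Dmax = min(pA * pB, (1 - pA) * (1 - pB))
    Dprime = abs(D) / Dmax if Dmax > 0 else 0.0
    # r^2 is the squared correlation between the two allele indicators
    r2 = D ** 2 / (pA * (1 - pA) * pB * (1 - pB))
    return Dprime, r2

# Perfect LD: the A-B haplotype carries all copies of both alleles
dprime, r2 = ld_measures(0.5, 0.5, 0.5)
```

A useful contrast, echoing the dissertation's preference for r², is that D′ equals 1 whenever at most three of the four haplotypes are present, even when the SNPs are poor proxies for one another, whereas r² reaches 1 only when the two SNPs carry the same information.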

Relevance: 100.00%

Abstract:

India's public sector banks (PSBs) are compared unfavorably with their private sector counterparts, domestic and foreign. This comparison rests, for the most part, on financial measures of performance, and it provides much of the rationale for privatizing PSBs. In this paper, we attempt a comparison between PSBs and their private sector counterparts based on measures of productivity that use quantities of outputs and inputs. We employ two measures of productivity: Tornqvist and Malmquist total factor productivity growth. We make these comparisons over the period 1992-2000, comparing PSBs with both domestic private and foreign banks. Out of a total of six comparisons we have made, there are no differences in three cases, PSBs do better in two, and foreign banks in one. To put it differently, PSBs are seen to be at a disadvantage in only one out of six comparisons. It is difficult, therefore, to sustain the proposition that efficiency and productivity have been lower in public sector banks relative to their peers in the private sector.
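The Tornqvist total factor productivity growth measure used in the comparison can be sketched as below. The quantities and shares in the example are illustrative; the paper's actual output and input definitions for banks are not reproduced here.

```python
import math

def tornqvist_tfp_growth(y0, y1, sy0, sy1, x0, x1, sx0, sx1):
    """Log Tornqvist TFP growth between two periods:
    share-weighted log output growth minus share-weighted log input growth,
    with revenue shares (sy) and cost shares (sx) averaged across periods."""
    out = sum(0.5 * (s0 + s1) * math.log(b / a)
              for a, b, s0, s1 in zip(y0, y1, sy0, sy1))
    inp = sum(0.5 * (s0 + s1) * math.log(b / a)
              for a, b, s0, s1 in zip(x0, x1, sx0, sx1))
    return out - inp

# Hypothetical single-output, single-input bank: output doubles, input constant
growth = tornqvist_tfp_growth([1.0], [2.0], [1.0], [1.0],
                              [1.0], [1.0], [1.0], [1.0])  # = ln 2
```

A TFP growth comparison between bank groups would then compare these index values year by year, which is the spirit of the six comparisons reported above.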

Relevance: 100.00%

Abstract:

The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU" provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are unfeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
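A minimal example of a time-series analysis result usable as a latent candidate feature is a least-squares slope over a fixed observation window. This is a generic sketch under assumed units, not the specific operations chosen in the dissertation.

```python
def trend_slope(times, values):
    """Least-squares slope of values over times: a simple trend-analysis
    result that can serve as a latent candidate feature for the model."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Hypothetical systolic blood pressure readings over a 15-minute window,
# sampled every 5 minutes: a negative slope captures deterioration.
slope = trend_slope([0, 5, 10, 15], [110, 104, 98, 92])  # mmHg per minute
```

The duration and resolution of the window (here 15 minutes at 5-minute resolution) are exactly the design-phase decisions the first manuscript calls out.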

Relevance: 100.00%

Abstract:

Task scheduling problems are very important in today's world: they arise throughout modern industry, so solving them optimally matters, because it saves the resources tied to each problem. Scheduling jobs appropriately in manufacturing processes is an important production problem in many companies. The order in which jobs are processed is not immaterial; it determines parameters of interest whose values should be optimized as far as possible, such as the total cost of executing the jobs, the time needed to complete them, or the stock of work in progress that is generated. This leads directly to the problem of determining the most suitable job order so as to optimize these or similar parameters. Given the limitations of conventional optimization techniques, this thesis presents a metaheuristic based on a Simple Genetic Algorithm (SGA) for solving Job Shop Scheduling (JSS) and Flow Shop Scheduling (FSS) problems arising in a workshop with machining technology, with the aim of optimizing several performance measures of a work plan. The main contribution of this thesis is a mathematical model, used as an optimization criterion, of the energy consumed by the machines involved in executing a work plan. It also proposes a method, based on exploiting idle time, for improving the Simple Genetic Algorithm's search for solutions.
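One of the performance measures an SGA would optimize in the Flow Shop case is the makespan of a job permutation. The following is a minimal sketch of its evaluation with made-up processing times; it is not the thesis's energy model.

```python
def makespan(order, proc):
    """Makespan of a permutation flow shop. proc[j][m] is the processing
    time of job j on machine m; every job visits machines 0..M-1 in order."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines  # completion time of the last job per machine
    for j in order:
        for m in range(n_machines):
            # a job starts on machine m when both the machine is free and
            # the job has finished on the previous machine
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]

# Two jobs on two machines: order [1, 0] yields the shorter schedule
times = [[2, 3], [1, 2]]
best = min(makespan([0, 1], times), makespan([1, 0], times))
```

An SGA would encode candidate solutions as job permutations and use a function like this (or the thesis's energy-consumption model) as the fitness to minimize, applying selection, crossover, and mutation over a population of permutations.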

Relevance: 100.00%

Abstract:

There is now an emerging need for an efficient modeling strategy to develop a new generation of monitoring systems. One method of approaching the modeling of complex processes is to obtain a global model. It should be able to capture the basic or general behavior of the system, by means of a linear or quadratic regression, and then superimpose a local model on it that can capture the localized nonlinearities of the system. In this paper, a novel method based on a hybrid incremental modeling approach is designed and applied for tool wear detection in turning processes. It involves a two-step iterative process that combines a global model with a local model to take advantage of their underlying, complementary capacities. Thus, the first step constructs a global model using a least squares regression. A local model using the fuzzy k-nearest-neighbors smoothing algorithm is obtained in the second step. A comparative study then demonstrates that the hybrid incremental model provides better error-based performance indices for detecting tool wear than a transductive neurofuzzy model and an inductive neurofuzzy model.
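The two-step idea can be sketched with ordinary least squares for the global step and a plain k-nearest-neighbors residual correction for the local step. Note the substitutions: the paper uses a fuzzy k-NN smoothing algorithm and multivariate inputs, while this sketch uses plain k-NN in one dimension.

```python
def fit_global(xs, ys):
    """Global step: ordinary least-squares line y = a + b*x capturing the
    general behavior of the process."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def hybrid_predict(x, xs, ys, a, b, k=3):
    """Local step: correct the global prediction with the mean residual of
    the k nearest training points, capturing localized nonlinearities."""
    residuals = [(abs(x - xi), yi - (a + b * xi)) for xi, yi in zip(xs, ys)]
    residuals.sort(key=lambda t: t[0])  # nearest neighbors first
    local = sum(r for _, r in residuals[:k]) / k
    return a + b * x + local

# On perfectly linear data the local correction vanishes
a, b = fit_global([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

On data with a localized nonlinearity, the residuals of nearby training points are nonzero and the local term bends the global line toward them, which is the complementary behavior the paper exploits for tool wear detection.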

Relevance: 100.00%

Abstract:

The purpose of this research is to develop a methodological approach for assessing the potential economic and transportation impacts of transport policies. Transportation departments and other related government parties are interested in such analysis because these impacts are commonly misrepresented owing to insufficient data and to the lack of suitable methodologies. This research fills that gap with a comprehensive analysis of the available techniques matching that purpose, identifying the differences that arise when they are applied to the valuation of user benefits or to other impacts, such as social effects. As a result, this research presents an integrated approach that includes both a random utility-based multiregional Input-Output model (RUBMRIO) and a road transport network model. This model accounts for freight transport with more detail and realism because its commodity-based structure traces the linkages of inter-industry purchases and sales that use freight services within a given country. For this reason, the integrated model is applicable to various transport policies. In fact, the approach is applied to study the regional macroeconomic effects of implementing two different policies in the freight transport system of Spain: a distance-based charge per vehicle-kilometre (€/km) for Heavy Goods Vehicles (HGVs), and the introduction of Longer and Heavier Vehicles (LHVs) in the Spanish road network. The methodological approach has been evaluated case by case on a selected network of highways linking the capitals of the Spanish regions. It also incorporates an economic dimension through a Multiregional Input-Output Table (MRIO) and the existing traffic count database used for model validation. The integrated approach replicates the observed conditions of trade among regions using the road freight transport system and, by comparison with policy scenarios, determines the contributions to distributional and generative changes. The model thus estimates the economic impacts in any given region through changes in Gross Domestic Product (GDP) and employment, and identifies changes in the transportation system across all paths of the network through measures of effectiveness (MOEs). The results presented in this research provide substantive evidence that assessing transport policies requires establishing a link between the economic structure of regions and transportation services. The analysis shows that for most regions in the country, GDP and employment changes are noticeable as trade is encouraged or discouraged. The approach shows how traffic is diverted under both policies, and it also details the pollutant emissions in the two scenarios. Furthermore, pricing or regulatory policies for road freight transport systems directed at producers and consumers in the regions will promote regional transformations affecting the whole country, and these lead to different conclusions. This integrated approach could also be useful for assessing other policies and other countries worldwide.
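At the core of any Input-Output model, including the multiregional RUBMRIO, is the Leontief relation x = (I − A)⁻¹ d linking final demand d to total sectoral output x through the technical coefficient matrix A. A two-sector sketch with made-up coefficients (the actual MRIO table is not reproduced here):

```python
def leontief_output(A, d):
    """Total output x solving x = A x + d for a two-sector economy,
    i.e. x = (I - A)^-1 d, via an explicit 2x2 inverse."""
    a = 1 - A[0][0]
    b = -A[0][1]
    c = -A[1][0]
    e = 1 - A[1][1]
    det = a * e - b * c  # determinant of (I - A)
    x0 = (e * d[0] - b * d[1]) / det
    x1 = (-c * d[0] + a * d[1]) / det
    return [x0, x1]

# Hypothetical coefficients: sector 0 uses 0.2 of its own output and 0.3 of
# sector 1's per unit produced, etc.; final demand is 100 and 50.
x = leontief_output([[0.2, 0.3], [0.1, 0.4]], [100.0, 50.0])
```

A policy scenario such as a €/km charge would alter transport costs and hence trade flows, shifting d (and effectively A) region by region; comparing the resulting x against the baseline is what yields the GDP and employment changes reported above.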

Relevance: 100.00%

Abstract:

Sustainable transport planning requires an integrated approach involving strategic planning, impact analysis and multi-criteria evaluation. This study aims at relaxing the utility-based decision-making assumption by newly embedding anticipated-regret and combined utility-regret decision mechanisms in an integrated transport planning framework. The framework consists of a two-round Delphi survey, an integrated land-use and transport model for Madrid, and multi-criteria analysis. Results show that (i) regret-based ranking has similar mean but larger variance than utility-based ranking; (ii) the least-regret scenario forms a compromise between the desired and the expected scenarios; (iii) the least-regret scenario can lead to higher user benefits in the short-term and lower user benefits in the long-term; (iv) utility-based, regret-based and combined utility-regret-based multi-criteria analysis result in different rankings of policy packages; and (v) the combined utility-regret ranking is more informative compared with utility-based or regret-based ranking.
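The anticipated-regret decision mechanism can be sketched as a minimax-regret ranking over an alternative-by-scenario utility table. The numbers are illustrative, and the study's combined utility-regret rule and multi-criteria weighting are not reproduced here.

```python
def regret_ranking(utilities):
    """Rank alternatives by maximum anticipated regret across scenarios.
    utilities[i][s] = utility of alternative i under scenario s; the regret
    of i in s is the best achievable utility in s minus i's own utility."""
    n_scen = len(utilities[0])
    best = [max(u[s] for u in utilities) for s in range(n_scen)]
    max_regret = [max(best[s] - u[s] for s in range(n_scen))
                  for u in utilities]
    # the least-regret alternative (smallest worst-case regret) ranks first
    return sorted(range(len(utilities)), key=lambda i: max_regret[i])

# Hypothetical policy packages under two scenarios: the middle package is
# best in neither scenario but has the lowest worst-case regret.
ranking = regret_ranking([[10, 0], [6, 6], [0, 10]])
```

This illustrates finding (ii) above: the least-regret choice tends to be a compromise between alternatives that each excel only under one scenario, whereas a utility-based ranking with equal scenario weights would tie all three packages here.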

Relevance: 100.00%

Abstract:

Mode switches are used to partition a system's behavior into different modes to reduce the complexity of large embedded systems. Such systems operate in multiple modes, each corresponding to a specific application scenario; these are called Multi-Mode Systems (MMS). A different piece of software is normally executed for each mode. At any given time, the system can be in one of the predefined modes and then be switched to another as a result of a certain condition. A mode switch mechanism (or mode change protocol) is used to shift the system from one mode to another at run-time. In this thesis we have used a hierarchical scheduling framework to implement a multi-mode system called the Multi-Mode Hierarchical Scheduling Framework (MMHSF). A two-level Hierarchical Scheduling Framework (HSF) has already been implemented in an open source real-time operating system, FreeRTOS, to support temporal isolation among real-time components. The main contribution of this thesis is the extension of the HSF with a multi-mode feature, with an emphasis on making minimal changes in the underlying operating system (FreeRTOS) and its HSF implementation. Our implementation uses fixed-priority preemptive scheduling at both the local and global scheduling levels and idling periodic servers, and it supports different system modes that can be switched at run-time. Each subsystem and task exhibits different timing attributes according to the mode, and upon a Mode Change Request (MCR) the task-set and timing interfaces of the entire system (including subsystems and tasks) undergo a change. A mode change protocol specifies precisely how the system mode will be changed. However, an application may need not only to change the mode but also to use a different mode change protocol semantics.
For example, a mode change from normal to shutdown may allow all tasks to complete before the mode itself is changed, while a change from normal to emergency may require aborting all tasks instantly. In our work, both the system mode and the mode change protocol can be changed at run-time, although the available protocols must be defined beforehand by the developer. We have implemented three different mode change protocols to switch from one mode to another: the Suspend/Resume protocol, the Abort protocol, and the Complete protocol. These protocols increase the flexibility of the system, allowing users to select how they want to switch to a new mode. The MMHSF implementation is tested and evaluated on an AVR32-based 32-bit EVK1100 board with an AVR32UC3A0512 microcontroller. We have tested the behavior of each system mode under each mode change protocol, and the thesis reports performance measures for all mode change protocols.
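The three protocol semantics can be sketched conceptually as follows (in Python rather than the actual FreeRTOS/C implementation; the task representation and function names here are hypothetical, not the thesis API):

```python
from enum import Enum

class Protocol(Enum):
    SUSPEND_RESUME = "suspend/resume"  # suspend old-mode tasks, resume them later
    ABORT = "abort"                    # abort all old-mode tasks instantly
    COMPLETE = "complete"              # let old-mode tasks finish first

def handle_mode_change_request(tasks, protocol, new_mode):
    """Apply the selected mode change protocol to the running task set."""
    if protocol is Protocol.ABORT:          # e.g. normal -> emergency
        tasks.clear()
    elif protocol is Protocol.COMPLETE:     # e.g. normal -> shutdown
        for t in tasks:
            t["state"] = "run-to-completion"
    else:                                   # Protocol.SUSPEND_RESUME
        for t in tasks:
            t["state"] = "suspended"
    return new_mode

tasks = [{"name": "sensor", "state": "running"}]
mode = handle_mode_change_request(tasks, Protocol.ABORT, "emergency")
print(mode, tasks)  # emergency []
```

The choice of protocol is what gives the user flexibility: the same MCR can drain, freeze, or kill the old mode's task set depending on the semantics selected at run-time.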

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This project is based on the integration of optimized functions from the OpenHEVC library into the RVC-HEVC (Reconfigurable Video Coding - High Efficiency Video Coding) codec. RVC is a framework capable of automatically generating the code that implements any video standard through the use of libraries. These libraries contain the definitions of the functional blocks that make up the standards to be implemented; in this study, the HEVC standard. However, as a downside to the ease of producing standards with the RVC framework, the libraries it uses are not optimized. One of the goals of the project is therefore to make the RVC-HEVC codec call optimized functions, which in this case reside in the OpenHEVC library. On the other hand, these video codecs can be implemented both on PCs and on embedded systems. Digital Signal Processors (DSPs) are platforms specialized in digital signal processing, offering high speed in the computation of mathematical operations. For this project, RVC-HEVC with its calls to OpenHEVC is therefore integrated on a DSP platform, the TMS320C6678. Once the integration is complete, performance measurements are carried out to evaluate how the calls to optimized functions improve image-decoding speed.
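The decoding-speed comparison can be expressed as a simple speedup ratio over measured frame times; a minimal sketch, where the decoder callable is a placeholder for the generated RVC-HEVC code (with or without OpenHEVC calls), not the project's actual harness:

```python
import time

def mean_frame_time(decode_frame, frames):
    """Average wall-clock seconds taken to decode one frame."""
    start = time.perf_counter()
    for frame in frames:
        decode_frame(frame)
    return (time.perf_counter() - start) / len(frames)

def speedup(reference_time, optimized_time):
    """How many times faster the optimized decoder is per frame."""
    return reference_time / optimized_time

# e.g. 40 ms/frame with plain RVC libraries vs 25 ms/frame with OpenHEVC calls
print(speedup(40, 25))  # 1.6
```

The same ratio can equivalently be computed from frames-per-second figures by inverting the argument order.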

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Objective. To estimate the reproducibility of three objective physical-performance measures for older people in primary care. Design. Descriptive, prospective study with direct observation of physical function by health professionals following a standardized protocol. Setting. Three primary care centres in the provinces of Alicante and Valencia. Participants. A sample of 66 people aged 70 and over, each evaluated twice by the same professional in order to replicate identical study conditions, with an interval of two weeks (median 14 days). Main measurements. Physical functioning was assessed with three objective performance tests: the balance test, the gait-speed test, and the chair stand test (the ability to rise from and sit down on a chair). These measures come from the EPESE studies (Established Populations for Epidemiologic Studies of the Elderly). Test-retest reliability was calculated using the intraclass correlation coefficient. Results. The intraclass correlation coefficients (ICC) were 0.55 for the balance test, 0.69 for the chair stand test, and 0.79 for the gait-speed test. The value for the total EPESE battery score was 0.80. Conclusions. The reproducibility of these performance measures is as acceptable as that reported in the reference literature. These performance tests allow rigorous assessment of important changes in functioning and health that occur over time.
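The test-retest reliability used above can be computed with a one-way random-effects ICC(1,1); a minimal pure-Python sketch, where the sample scores are made up for illustration and are not the study's data:

```python
def icc_oneway(pairs):
    """One-way random-effects ICC(1,1) for test-retest pairs.

    pairs: list of (test, retest) scores, one pair per subject.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with k = 2 sessions.
    """
    n, k = len(pairs), 2
    grand_mean = sum(x for p in pairs for x in p) / (n * k)
    subj_means = [sum(p) / k for p in pairs]
    # Between-subjects and within-subjects mean squares
    msb = k * sum((m - grand_mean) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for p, m in zip(pairs, subj_means) for x in p) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Made-up scores with high session-to-session agreement
print(round(icc_oneway([(10, 12), (20, 19), (30, 31)]), 2))  # 0.99
```

An ICC near 1 indicates that between-subject differences dominate session-to-session noise, which is what values such as 0.79-0.80 for gait speed and the total battery reflect.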

Relevância:

100.00% 100.00%

Publicador:

Resumo:

National Highway Traffic Safety Administration, Washington, D.C.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

National Highway Traffic Safety Administration, Washington, D.C.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

National Highway Traffic Safety Administration, Washington, D.C.