766 results for accruals-based earnings management


Relevância:

40.00%

Resumo:

Emergency management is one of the key aspects of the day-to-day operation of a highway. An efficient overall response to an incident is paramount in reducing its consequences. However, the approach of highway operators to incident management still usually falls short of being systematic and standardized. This paper addresses this issue, offering several hints as to why this happens and a proposal for how the situation could be overcome. A performance-based approach to general system specification is introduced and then applied to a particular road emergency management task. A real testbed was implemented to show the validity of the proposed approach: ad-hoc sensors (one camera and one laser scanner) were efficiently deployed to acquire data, and advanced fusion techniques were applied at the processing stage to meet the specific user requirements in terms of functionality, flexibility and accuracy.
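The abstract does not detail the fusion scheme, so the following is only a minimal sketch of one plausible reading: a confidence-weighted combination of camera and laser-scanner detections. All names, weights and thresholds below are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: the paper does not specify its fusion algorithm.
# Minimal confidence-weighted fusion of two sensor detections, assuming each
# sensor reports (detected: bool, confidence: float in [0, 1]).

from dataclasses import dataclass

@dataclass
class Detection:
    detected: bool
    confidence: float  # sensor self-reported confidence in [0, 1]

def fuse(camera: Detection, laser: Detection,
         w_camera: float = 0.5, w_laser: float = 0.5,
         threshold: float = 0.6) -> bool:
    """Declare an incident when the weighted evidence exceeds a threshold.

    Weights and threshold are hypothetical tuning parameters.
    """
    evidence = (w_camera * camera.confidence * camera.detected
                + w_laser * laser.confidence * laser.detected)
    return evidence >= threshold

# Example: camera is fairly sure, laser scanner confirms weakly.
print(fuse(Detection(True, 0.8), Detection(True, 0.5)))  # True (0.65 >= 0.6)
```

In a weighted scheme like this, the threshold trades false alarms against missed incidents; a real deployment would calibrate it against recorded incident data.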

Relevância:

40.00%

Resumo:

In this paper we focus on the selection of safeguards in a fuzzy risk analysis and management methodology for information systems (IS). Assets are connected by dependency relationships, so a failure of one asset may affect other assets. After computing the impact and risk indicators associated with previously identified threats, we identify and apply safeguards to reduce risk in the IS by minimizing the transmission probabilities of failures throughout the asset network. However, as safeguards have associated costs, the aim is to select the safeguards that minimize cost while keeping risk within acceptable levels. To do this, we propose a dynamic programming-based method that incorporates simulated annealing to tackle the optimization problems.
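The paper pairs dynamic programming with simulated annealing; as a hedged illustration of the annealing component alone, the sketch below selects a subset of safeguards that minimizes cost subject to a toy residual-risk threshold. The cost and risk data, the multiplicative risk model and all parameters are invented.

```python
# Hypothetical sketch of the simulated-annealing component: choose a subset
# of safeguards minimizing total cost subject to residual risk <= MAX_RISK.
# Costs, the risk model and all parameters are invented for illustration.

import math
import random

costs = [10.0, 25.0, 40.0, 15.0]       # cost of each candidate safeguard
reductions = [0.30, 0.50, 0.70, 0.40]  # failure-transmission reduction factors
BASE_RISK, MAX_RISK = 1.0, 0.25

def risk(selected):
    """Toy residual risk: each selected safeguard scales risk down."""
    r = BASE_RISK
    for i in selected:
        r *= (1.0 - reductions[i])
    return r

def cost(selected):
    # Infeasible solutions are penalized rather than forbidden, so the
    # annealer can traverse them on the way to feasible ones.
    penalty = 1000.0 if risk(selected) > MAX_RISK else 0.0
    return sum(costs[i] for i in selected) + penalty

def anneal(n_iter=5000, t0=50.0, alpha=0.999):
    current = set(range(len(costs)))   # start with every safeguard applied
    best, t = set(current), t0
    for _ in range(n_iter):
        neighbour = set(current)
        neighbour ^= {random.randrange(len(costs))}  # flip one safeguard in/out
        delta = cost(neighbour) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = neighbour
            if cost(current) < cost(best):
                best = set(current)
        t *= alpha                     # geometric cooling schedule
    return best, cost(best)

print(anneal())  # with this toy data, safeguards {0, 2} at cost 50 are optimal
```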

Relevância:

40.00%

Resumo:

One important step in a successful project-based learning (PBL) methodology is providing students with timely feedback that allows them to keep developing their projects or to improve them. However, this task is harder in massive courses, especially when the project deadline is close. Moreover, a continuous evaluation methodology makes it necessary to find ways of objectively and continuously measuring students' performance without excessively increasing the instructors' workload. To alleviate these problems, we have developed a web service that allows students to request personal tutoring assistance during laboratory sessions by specifying the kind of problem they have and the person who could help them solve it. The service provides tools for the staff to manage the laboratory, to perform continuous evaluation of all students and of the student collaborators, and to prioritize tutoring according to the progress of each student's project. Additionally, the application provides objective metrics which can be used at the end of the course to support some students' final scores. Usability statistics and the results of a subjective evaluation with more than 330 students confirm the success of the proposed application.
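The abstract does not describe how the prioritization is implemented; purely as an assumption-laden sketch, a tutoring queue ordered by project progress (least advanced first, ties broken by arrival order) could look like this. The class and field names are invented.

```python
# Hypothetical sketch of the tutoring-queue prioritization described above:
# requests are served in order of the student's project progress (least
# advanced first), breaking ties by arrival order.

import heapq
import itertools

class TutoringQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker: arrival order

    def request(self, student: str, problem: str, progress: float):
        """progress in [0, 1]; lower progress is served first."""
        heapq.heappush(self._heap,
                       (progress, next(self._counter), student, problem))

    def next_request(self):
        progress, _, student, problem = heapq.heappop(self._heap)
        return student, problem, progress

q = TutoringQueue()
q.request("alice", "sensor wiring issue", progress=0.7)
q.request("bob", "block diagram unclear", progress=0.2)
print(q.next_request())  # ('bob', 'block diagram unclear', 0.2)
```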

Relevância:

40.00%

Resumo:

SRAM-based Field-Programmable Gate Arrays (FPGAs) are built on a Static RAM (SRAM) configuration memory. They present a number of features that make them very convenient for building complex embedded systems. First of all, they benefit from low Non-Recurrent Engineering (NRE) costs, as the logic and routing elements are pre-implemented (the user design defines their interconnection). Also, as opposed to other FPGA technologies, they can be reconfigured (even in the field) an unlimited number of times. Moreover, Xilinx SRAM-based FPGAs feature Dynamic Partial Reconfiguration (DPR), which allows partially reconfiguring the FPGA without disrupting the application. Finally, they feature high logic density, high processing capability and a rich set of hard macros. However, one limitation of this technology is its susceptibility to ionizing radiation, which increases with technology scaling (smaller geometries, lower voltages and higher frequencies). This is a first-order concern for applications in harsh radiation environments that require high dependability. Ionizing radiation leads to long-term degradation as well as instantaneous faults, which can in turn be reversible or produce irreversible damage. In SRAM-based FPGAs, radiation-induced faults can appear at two architectural layers, which are physically overlaid on the silicon die: the Application Layer (or A-Layer) contains the user-defined hardware, and the Configuration Layer (or C-Layer) contains the (volatile) configuration memory and its support circuitry. Faults at either layer can imply a system failure, which may be more or less tolerable depending on the dependability requirements; in the general case, such faults must be managed in some way.

This thesis is about managing SRAM-based FPGA faults at system level, in the context of autonomous and dependable embedded systems operating in a radiative environment. The focus is mainly on space applications, but the same principles can be applied to ground applications; the main differences between the two are the radiation level and the possibility of maintenance. The different techniques for A-Layer and C-Layer fault management are classified and their implications for system dependability are assessed. Several architectures are proposed, both for single-layer and for dual-layer Fault Managers. For the latter, a novel, flexible and versatile architecture is proposed: it manages both layers concurrently in a coordinated way, and allows balancing redundancy level against dependability.

For the purpose of validating dynamic fault management techniques, two solutions are developed. The first is a simulation framework for C-Layer Fault Managers, based on SystemC as both modeling language and event-driven simulator. This framework and its associated methodology allow exploring the Fault Manager design space while decoupling its design from the development of the target FPGA. The framework includes models for both the FPGA C-Layer and the Fault Manager, which can interact at different abstraction levels (at configuration-frame level and at the JTAG or SelectMAP physical level). It is configurable, scalable and versatile, and includes fault injection capabilities; simulation results for some scenarios are presented and discussed. The second is a validation platform for Xilinx Virtex FPGA Fault Managers. The hardware platform hosts three Xilinx Virtex-4 FX12 FPGA Modules and two general-purpose 32-bit Microcontroller Unit (MCU) Modules. The MCU Modules allow prototyping software-based C-Layer and A-Layer Fault Managers. Each FPGA Module implements one A-Layer Ethernet link (through an Ethernet switch) with one of the MCU Modules, and one C-Layer JTAG link with the other; in addition, both MCU Modules exchange commands and data over an internal UART link. As in the simulation framework, fault injection capabilities are implemented, and test results for some scenarios are presented and discussed. In summary, this thesis covers the whole process from describing the problem of radiation-induced faults in SRAM-based FPGAs, through identifying and classifying fault management techniques and proposing Fault Manager architectures, to validating them by simulation and test. The proposed future work is mainly related to the implementation of radiation-hardened System Fault Managers.
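The thesis proposes its own Fault Manager architectures; the sketch below illustrates only one classic C-Layer technique in this area, readback scrubbing of configuration frames, and is not taken from the thesis. The frame interface is mocked in plain Python; a real Fault Manager would read and write frames over JTAG or SelectMAP.

```python
# Minimal sketch of readback scrubbing, a classic C-Layer fault-management
# technique. Frame access is mocked; golden values are invented.

GOLDEN = {0: 0xDEADBEEF, 1: 0x12345678, 2: 0xCAFEBABE}  # golden bitstream frames
fabric = dict(GOLDEN)                                    # simulated C-Layer

def inject_upset(frame: int, bit: int):
    """Simulate a radiation-induced single-event upset (fault injection)."""
    fabric[frame] ^= (1 << bit)

def scrub() -> list:
    """Read back every frame, compare to the golden copy, rewrite on mismatch."""
    repaired = []
    for frame, golden_word in GOLDEN.items():
        if fabric[frame] != golden_word:   # readback + compare
            fabric[frame] = golden_word    # partial reconfiguration of the frame
            repaired.append(frame)
    return repaired

inject_upset(frame=1, bit=3)
print(scrub())                # [1] -- the upset frame was detected and repaired
assert fabric == GOLDEN
```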

Relevância:

40.00%

Resumo:

PURPOSE

The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Decision makers recurrently face the dichotomous question of whether to follow a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or a joint decision-making approach, where several decisions are taken simultaneously. The chosen approach will impact different players in the organization, and the choice between approaches remains difficult even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners. Management scientists use different techniques and approaches to improve different types of decisions, the purpose being to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts may help to solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented as a contribution to the body of knowledge of management science. The first models focus on the manufacturing industry and the second set on the health care industry. Although these models are case specific, they serve to show that different approaches to the same problems can provide interesting results. Unfortunately, there is no universal recipe that can be applied to all problems; moreover, the same model may deliver good results with certain data and bad results with other data. A framework for analysing the data before selecting the model to be used is therefore presented and tested on the models developed to exemplify these ideas.

METHODOLOGY

As the first step of the research, a systematic literature review on joint decision making is presented, along with the opinions and suggestions of different scholars. In the next stage of the thesis, the decision-making process of more than 50 companies from different sectors was analysed in the production planning area at the job shop level; the data were obtained through surveys and face-to-face interviews. The following part of the research was carried out in two application fields that are highly relevant for society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of the car assembly line, combining the vehicle routing problem with inventory management. The next step was to add the scheduling of car production (car sequencing) and to use metaheuristics such as ant colony optimization and genetic algorithms to test whether the behaviour holds for problem instances of different sizes. A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to handle the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was carried out in a teaching hospital; in the second part, the interaction of uncertainty was added. Once this problem had been analysed, a general framework for characterizing problem instances was built. A general conclusion is presented in the final chapter.

FINDINGS AND PRACTICAL IMPLICATIONS

The first contribution is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process. The survey results are then presented, revealing a lack of consistency between what managers believe and the actual degree of integration of their decisions. The next stage of the thesis contributes to the body of knowledge of operations research with a joint solution of the replenishment, sequencing and inventory problem on the assembly line, together with parallel work on operating room scheduling, where different solution approaches are presented. Beyond the contribution of the solution methods themselves, the main contribution is the proposed framework for pre-evaluating a problem before deciding on the techniques to solve it. There is no straightforward answer as to whether joint or sequential solutions are better; however, following the proposed framework and evaluating factors such as the flexibility of the answer, the number of actors, and the tightness of the data gives important hints as to the most suitable direction for tackling the problem.

RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH

In the first part of the work it was very difficult to calculate the possible savings of different projects, since many papers do not report these quantities or base the impact on non-quantifiable benefits; another issue is the confidentiality of many projects, whose data cannot be presented. For the car assembly line problem, more computational power would allow bigger instances to be solved. For the operating room problem, there was a lack of historical data with which to perform a parallel analysis in the teaching hospital. To keep testing the decision framework, more case studies need to be carried out in order to generalize the results and make them more evident and less ambiguous. The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, there is still much room for improvement. Another big difference from the automotive industry is that recent improvements are not disseminated among all the actors; therefore, future research will focus more on collaboration between academia and the health care sector.
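The thesis applies metaheuristics such as genetic algorithms to car sequencing; the toy sketch below keeps only swap mutation and tournament selection (no crossover), and all data, the spacing objective and the parameters are invented for illustration.

```python
# Deliberately simplified evolutionary sketch for car sequencing (the thesis
# uses full genetic algorithms and ant colony optimization; here only swap
# mutation and tournament selection are kept). All data are invented.

import random

CARS = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = car needing a work-intensive option

def violations(seq):
    """Count adjacent pairs of option cars (a classic spacing constraint)."""
    return sum(1 for i in range(len(seq) - 1) if seq[i] and seq[i + 1])

def mutate(seq):
    """Swap two random positions, preserving the multiset of cars."""
    i, j = random.sample(range(len(seq)), 2)
    child = list(seq)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(pop_size=30, generations=300):
    pop = [random.sample(CARS, len(CARS)) for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(pop, 2)                    # tournament selection
        parent = a if violations(a) <= violations(b) else b
        child = mutate(parent)
        worst = max(pop, key=violations)                # steady-state replacement
        if violations(child) <= violations(worst):
            pop[pop.index(worst)] = child
    return min(pop, key=violations)

best = evolve()
print(best, "violations:", violations(best))  # typically reaches 0 violations
```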

Relevância:

40.00%

Resumo:

Objective: To evaluate the effectiveness of a health visitor led intervention for failure to thrive in children under 2 years old.

Relevância:

40.00%

Resumo:

Institutional investors, such as pension funds, are entities that manage the assets of large groups of people and therefore tend to run large investment portfolios and to have incentives to become well informed. They are thus expected to be good representatives of the class of sophisticated (well-informed) investors, and their growing presence in capital markets is expected to improve the speed of price adjustment, helping to prevent market inefficiencies such as the accruals anomaly (Sloan, 1996), a delay in the revision of prices in response to information about the magnitude of the accruals component of earnings. The objective of this study is therefore to analyze, across several countries, the impact of institutional investor ownership on the accruals anomaly. Four hypotheses are formulated: (i) the proportion of information about a firm's future performance reflected in its stock price is positively related to the percentage of institutional investor ownership; (ii) the higher the percentage of institutional investor ownership, the higher the earnings quality; (iii) the higher the value relevance of earnings, the larger the accruals anomaly; and (iv) the higher the institutional investor ownership, the smaller the accruals anomaly. To this end, the literature on institutional investors, sophisticated investors and the accruals anomaly is reviewed and contrasted with the literature on value relevance and earnings quality, in particular Dechow and Dichev (2002). The empirical research uses data on non-financial firms listed on the stock exchanges of Germany, Brazil, Spain, the United States, France, the Netherlands, Italy, the United Kingdom and Switzerland, covering the period from 2004 to 2013. The sample comprises between 2,314 and 4,076 firms, totaling between 15,902 and 20,174 observations, depending on the model estimated. Panel data regressions, a Seemingly Unrelated Regression (SUR) approach and the Mishkin (1983) test are applied. The results show that in the United States and Italy institutional investors are better informed than other investors, and that in Germany, the United States, France and the United Kingdom they play a monitoring role, pressing for higher-quality earnings. However, no positive relationship is found between the value relevance of earnings and the accruals anomaly, nor between institutional investor ownership and this anomaly. The study enriches the discussion of markets being efficient in the long run while exhibiting short-run anomalies; emphasizes the importance of investors being able to convert information into forecasts and valuations; discusses the link between the monitoring role of institutional investors and earnings quality; and assesses the relationship between these investors' activity and prices leading earnings.
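For readers unfamiliar with Sloan's (1996) measure, the sketch below shows one common balance-sheet formulation of scaled total accruals and the yearly decile sort used to test the anomaly. The exact accruals definition varies across studies, and the pandas column names here are hypothetical, not taken from this thesis.

```python
# Hedged sketch of a Sloan (1996)-style accruals sort. The balance-sheet
# accruals definition below is one common variant; the column names of the
# firm-year DataFrame `df` are hypothetical.

import pandas as pd

def add_accruals(df: pd.DataFrame) -> pd.DataFrame:
    """Total accruals scaled by average total assets:
    TA = (dCA - dCash) - (dCL - dSTD - dTaxPayable) - Depreciation,
    where d denotes the year-on-year change within each firm."""
    df = df.sort_values(["firm", "year"]).copy()
    g = df.groupby("firm")
    delta = {c: g[c].diff() for c in ["current_assets", "cash",
                                      "current_liabilities",
                                      "short_term_debt", "taxes_payable"]}
    df["accruals"] = ((delta["current_assets"] - delta["cash"])
                      - (delta["current_liabilities"]
                         - delta["short_term_debt"] - delta["taxes_payable"])
                      - df["depreciation"])
    avg_assets = (df["total_assets"] + g["total_assets"].shift()) / 2
    df["accruals_scaled"] = df["accruals"] / avg_assets
    # Yearly decile ranks: the anomaly predicts that low-accruals deciles earn
    # higher subsequent abnormal returns than high-accruals deciles.
    df["accruals_decile"] = df.groupby("year")["accruals_scaled"].transform(
        lambda s: pd.qcut(s, 10, labels=False, duplicates="drop"))
    return df
```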

Relevância:

40.00%

Resumo:

Over the last few years, a new stream of research focusing on microfoundations has emerged in the field of strategic management. This line of research analyzes strategic topics by examining their foundations in individual actions and interactions. The main purpose of this paper is to examine this emerging microfoundations literature, indicating its usefulness and main characteristics. Through a systematic literature review, the paper contributes to the field by identifying the main areas studied, the benefits and potential of this approach, and some limitations and criticisms. Moreover, the paper studies how the integration of micro and macro aspects in strategy research may be carried out, examining several works that use a multilevel approach, and indicates some methodological approaches that may help to effect this integration. These aspects are analyzed in relation to resource-based theory.

Relevância:

40.00%

Resumo:

The objective of this thesis is to demonstrate the usefulness and the advantages that Self-Management of type 1 diabetes mellitus can provide in a mobile Health system, starting from an Agent-Based computational model. The topic of mobile Health and its development in low- and middle-income countries is examined in depth, illustrating the results obtained by scientific research to date, together with the concept of Self-Management of chronic diseases, a care process characterized by the autonomous participation of the patient, and an overview of the computational approaches developed. Diabetes mellitus is then studied in all its aspects, followed by a review of several applications for the autonomous management of this disease currently on the market. In the case study, several simulations are carried out on the MASON simulation platform to reproduce various dynamics of a patient's physiological network, in order to derive qualitative feedback for the Self-Management of the disease.
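The thesis runs its simulations on the MASON platform (Java); purely to illustrate the shape of a stepped agent model of this kind, here is a toy glucose-insulin update loop in Python. Every constant, the dynamics and the feedback thresholds are invented and have no clinical validity.

```python
# Illustrative sketch only: the thesis uses MASON (Java) and a far richer
# physiological network. This toy step loop shows the shape of an agent-based
# glucose/insulin model; every constant below is invented.

from dataclasses import dataclass

@dataclass
class Patient:
    glucose: float = 120.0   # mg/dL
    insulin: float = 0.0     # active insulin (toy units)

    def step(self, carbs_g: float = 0.0, bolus_u: float = 0.0):
        self.insulin = 0.7 * self.insulin + bolus_u    # insulin decays, bolus adds
        self.glucose += 3.0 * carbs_g                  # meals raise glucose
        self.glucose -= 10.0 * self.insulin            # insulin lowers it
        self.glucose += 0.02 * (120.0 - self.glucose)  # weak homeostatic drift

def feedback(p: Patient) -> str:
    """Qualitative self-management feedback of the kind the thesis derives."""
    if p.glucose < 70:
        return "hypoglycemia risk: take fast carbs"
    if p.glucose > 180:
        return "hyperglycemia: consider a correction bolus"
    return "in range"

p = Patient()
for t, (carbs, bolus) in enumerate([(60, 0), (0, 8), (0, 0), (0, 0)]):
    p.step(carbs, bolus)
    print(f"t={t} glucose={p.glucose:.0f} -> {feedback(p)}")
```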

Relevância:

40.00%

Resumo:

Federal Highway Administration, Washington, D.C.

Relevância:

40.00%

Resumo:

At head of title: Prepared by the Bureau of business standards of the A.W. Shaw Company.