721 results for Accounting errors
Abstract:
Dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Auditing. Supervisor: Professor Doutor José da Silva Fernandes
Abstract:
It is essential for organizations to compress detailed sets of information into more comprehensive sets, thereby establishing sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Unlike the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations. I also provide a strategy that partly mitigates this bias. In chapter 3, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited for cost allocations than traditional ABC.
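The underestimation mechanism described in chapter 2 can be illustrated with a small simulation. The sketch below is illustrative only: it assumes right-skewed (here lognormal) task-cost distributions with hypothetical parameters, not the dissertation's experimental materials.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical project: 10 tasks with right-skewed (lognormal) cost distributions.
n_tasks, n_draws = 10, 200_000
mu, sigma = np.log(100.0), 0.6          # assumed parameters, for illustration only

costs = rng.lognormal(mu, sigma, size=(n_draws, n_tasks))
total = costs.sum(axis=1)               # simulated distribution of total project cost

sum_of_means = costs.mean(axis=0).sum()
sum_of_medians = np.median(costs, axis=0).sum()
sum_of_modes = n_tasks * np.exp(mu - sigma**2)   # analytical mode of each lognormal task cost

print("expected total cost (simulated):", total.mean())
print("sum of task means              :", sum_of_means)    # matches: the mean is linear
print("sum of task medians            :", sum_of_medians)  # falls short of the expected total
print("sum of task modes              :", sum_of_modes)    # falls even further short
```

Because the mean is linear, the sum of task means equals the expected total cost, while sums of modes or medians systematically fall short for right-skewed distributions, which is the aggregation bias the chapter studies.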
Abstract:
This study investigates, in the Brazilian stock market, the effect of hedge accounting on the quality of reported accounting information, on the disclosure of derivative financial instruments, and on information asymmetry. To measure the quality of accounting information, the metrics of value relevance and earnings informativeness were used. The study was based on a general sample of non-financial Brazilian companies listed on the São Paulo Stock Exchange, comprising the 150 companies with the largest market value on 01/01/2014. From the general sample, subsamples were drawn for the value relevance, informativeness, disclosure and information asymmetry models. The value relevance sample contained 758 firm-year observations for 2008 to 2013; the informativeness sample contained 701 firm-year observations for 2008 to 2013; the disclosure sample contained 100 firm-year observations for 2011 to 2012; and the information asymmetry sample contained 100 firm-year observations for 2011 to 2012. The data were analyzed with panel regressions with robust standard errors, using pooled OLS (POLS) and fixed effects specifications. In addition, for the analyses of the effect of hedge accounting on disclosure and information asymmetry, the Propensity Score Matching method was applied. The evidence on the influence of hedge accounting on value relevance indicated a positive and significant relationship in the interaction with net income (LL). In the analysis of earnings informativeness, the research found a negative and statistically significant relationship for earnings when interacted with the hedge accounting dummy variable. Regarding the influence of hedge accounting on derivative disclosure, a positive and statistically significant relationship was found between the hedge accounting dummy and the derivative disclosure index. For information asymmetry, although the coefficients had the expected sign, they were not statistically significant. Additionally, the econometric analyses were complemented by a descriptive analysis, on the general sample, of the use of hedge accounting in Brazil in 2013. Of the 150 companies in the sample, 49 used hedge accounting, of which 41 adopted only one type of hedge. The cash flow hedge is the type most commonly adopted, being used by 42 companies.
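The informativeness-type specification (returns regressed on earnings and an earnings × hedge-accounting interaction, with firm fixed effects) can be sketched generically as below. Variable names and data are hypothetical placeholders, and this is not the paper's exact model, sample, or standard-error treatment.

```python
import numpy as np
import pandas as pd

# Hypothetical firm-year panel: returns, earnings (ll) and a hedge-accounting dummy (ha).
rng = np.random.default_rng(0)
n_firms, n_years = 50, 6
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "ret":  rng.normal(size=n_firms * n_years),
    "ll":   rng.normal(size=n_firms * n_years),
    "ha":   rng.integers(0, 2, size=n_firms * n_years),
})
df["ll_x_ha"] = df["ll"] * df["ha"]

# Firm fixed effects via entity demeaning (within transformation), then OLS.
y = df["ret"] - df.groupby("firm")["ret"].transform("mean")
X = df[["ll", "ha", "ll_x_ha"]] - df.groupby("firm")[["ll", "ha", "ll_x_ha"]].transform("mean")
X = np.column_stack([np.ones(len(df)), X.to_numpy()])

beta, *_ = np.linalg.lstsq(X, y.to_numpy(), rcond=None)
print(dict(zip(["const", "ll", "ha", "ll_x_ha"], beta)))
# The coefficient on ll_x_ha is the incremental informativeness of earnings for hedge-accounting firms.
```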
Abstract:
This paper compares methods for calculating Input-Output (IO) Type II multipliers. These are formulations of the standard Leontief IO model which endogenise elements of household consumption. An analytical comparison of the two basic IO Type II multiplier methods with the Social Accounting Matrix (SAM) multiplier approach identifies the treatment of non-wage income generated in production as a central problem. The multiplier values for each of the IO and SAM methods are calculated using Scottish data for 2009. These results can be used to choose which Type II IO multiplier to adopt where SAM multiplier values are unavailable.
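The difference between the two multiplier types can be sketched numerically. The example below uses a hypothetical two-sector technical-coefficients matrix and the textbook household closure (bordering the matrix with an income row and a consumption column); it illustrates the general construction rather than either of the specific Type II methods or the SAM approach compared in the paper.

```python
import numpy as np

# Hypothetical 2-sector technical coefficients matrix A.
A = np.array([[0.20, 0.30],
              [0.25, 0.10]])
wage_coeffs = np.array([0.30, 0.40])   # household income per unit of sectoral output (assumed)
cons_coeffs = np.array([0.35, 0.25])   # household consumption per unit of household income (assumed)

# Type I output multipliers: column sums of the Leontief inverse (I - A)^-1.
L1 = np.linalg.inv(np.eye(2) - A)
type1 = L1.sum(axis=0)

# Type II: endogenise households by bordering A with an income row and a consumption column.
A2 = np.zeros((3, 3))
A2[:2, :2] = A
A2[2, :2] = wage_coeffs
A2[:2, 2] = cons_coeffs
L2 = np.linalg.inv(np.eye(3) - A2)
type2 = L2[:2, :2].sum(axis=0)         # sum over the production sectors only

print("Type I multipliers :", type1)
print("Type II multipliers:", type2)   # larger, since induced household spending is included
```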
Abstract:
This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
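The logic behind such adjustments can be sketched with the generic proxy-error decomposition (this is an illustration of the idea, not the note's specific procedure). If realized volatility RV_t measures the latent variance with a mean-zero error that is uncorrelated with the forecast h_t, the loss computed against the proxy overstates the true loss by the average measurement-error variance:

```latex
RV_t = \sigma_t^2 + \eta_t, \qquad \mathbb{E}\left[\eta_t \mid h_t\right] = 0
\quad\Longrightarrow\quad
\mathbb{E}\left[(RV_t - h_t)^2\right]
  = \mathbb{E}\left[(\sigma_t^2 - h_t)^2\right] + \mathbb{E}\left[\eta_t^2\right]
```

An unbiased feasible loss therefore subtracts an estimate of the measurement-error variance, and the same attenuation logic implies that predictive R² values computed against the proxy understate the true degree of return-volatility predictability.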
Abstract:
Inspired by the commercial desires of global brands and retailers to access the lucrative green consumer market, carbon is increasingly being counted and made knowable at the mundane sites of everyday production and consumption, from the carbon footprint of a plastic kitchen fork to that of an online bank account. Despite the challenges of counting and making commensurable the global warming impact of a myriad of biophysical and societal activities, this desire to communicate a product or service's carbon footprint has sparked complicated carbon calculative practices and enrolled actors at literally every node of multi-scaled and vastly complex global supply chains. Against this landscape, this paper critically analyzes the counting practices that create the ‘e' in ‘CO2e'. It is shown that central to these practices are a series of tools, models and databases which, building upon previous work (Eden, 2012; Star and Griesemer, 1989), we conceptualize here as ‘boundary objects'. By enrolling everyday actors from farmers to consumers, these objects abstract and stabilize greenhouse gas emissions from their messy material and social contexts into units of CO2e which can then be translated along a product's supply chain, thereby establishing a new currency of ‘everyday supply chain carbon'. However, in making all greenhouse gas-related practices commensurable, and in enrolling and stabilizing the transfer of information between multiple actors, these objects oversee a process of simplification reliant upon, and subject to, a multiplicity of approximations, assumptions, errors, discrepancies and/or omissions. Further, the outcomes of these tools are subject to the politicized and commercial agendas of the worlds they attempt to link, with each boundary actor inscribing different meanings to a product's carbon footprint in accordance with their specific subjectivities, commercial desires and epistemic framings. It is therefore shown that how a boundary object transforms greenhouse gas emissions into units of CO2e is the outcome of distinct ideologies regarding ‘what' a product's carbon footprint is and how it should be made legible. These politicized decisions, in turn, inform specific reduction activities and ultimately advance distinct, specific and increasingly durable transition pathways to a low carbon society.
Abstract:
Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
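The structure the article builds on can be written compactly as the standard three-level factorization used in hierarchical statistical modeling (the notation here is generic, not necessarily the article's):

```latex
[\,\text{data},\ \text{process},\ \text{parameters}\,]
  = [\,\text{data} \mid \text{process},\ \theta_D\,]\,
    [\,\text{process} \mid \theta_P\,]\,
    [\,\theta_D,\ \theta_P\,]
```

Here the bracket notation denotes a probability distribution: the data model captures measurement and sampling error, the process model captures the ecological dynamics and their uncertainty, and the parameter model carries uncertainty about the parameters θ_D and θ_P (the Bayesian case; omitting the last factor and estimating the parameters gives the non-Bayesian, empirical-hierarchical variant).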
Abstract:
Purpose: To establish the prevalence of refractive errors and ocular disorders in preschool and schoolchildren of Ibiporã, Brazil. Methods: A survey of 6 to 12-year-old children from public and private elementary schools was carried out in Ibiporã between 1989 and 1996. Visual acuity measurements were performed by trained teachers using Snellen's chart. Children with visual acuity <0.7 in at least one eye were referred to a complete ophthalmologic examination. Results: 35,936 visual acuity measurements were performed in 13,471 children. 1,966 children (14.59%) were referred to an ophthalmologic examination. Amblyopia was diagnosed in 237 children (1.76%), whereas strabismus was observed in 114 cases (0.84%). Cataract (n=17) (0.12%), chorioretinitis (n=38) (0.28%) and eyelid ptosis (n=6) (0.04%) were also diagnosed. Among the 614 (4.55%) children who were found to have refractive errors, 284 (46.25%) had hyperopia (hyperopia or hyperopic astigmatism), 206 (33.55%) had myopia (myopia or myopic astigmatism) and 124 (20.19%) showed mixed astigmatism. Conclusions: The study determined the local prevalence of amblyopia, refractive errors and eye disorders among preschool and schoolchildren.
Abstract:
In this work we investigate knowledge acquisition as performed by multiple agents interacting as they infer, in the presence of observation errors, respective models of a complex system. We focus on the specific case in which, at each time step, each agent takes into account its current observation as well as the average of the models of its neighbors. The agents are connected by an interaction network of Erdos-Renyi or Barabasi-Albert type. First, we investigate situations in which one of the agents has a different (higher or lower) probability of observation error. It is shown that the influence of this special agent on the quality of the models inferred by the rest of the network can be substantial, varying linearly with the degree of the agent with the different estimation error. When the degree of this agent is taken as its fitness parameter, the effect of the different estimation error is even more pronounced, becoming superlinear. To complement our analysis, we provide the analytical solution for the overall performance of the system. We also investigate the knowledge acquisition dynamics when the agents are grouped into communities. We verify that the inclusion of edges between agents (within a community) having a higher probability of observation error degrades the quality of the estimates of the agents in the other communities.
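A minimal simulation sketch of the update rule described above, assuming scalar "models", Gaussian observation noise, an Erdos-Renyi interaction network, and equal weighting of own observation and neighbors' average; parameter values and weights are illustrative assumptions, not the paper's.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

n_agents, n_steps, truth = 100, 200, 1.0
G = nx.erdos_renyi_graph(n_agents, 0.05, seed=1)   # or nx.barabasi_albert_graph(n_agents, 3)
A = nx.to_numpy_array(G)
deg = A.sum(axis=1)

# One "special" agent (index 0) has a larger observation-error magnitude.
obs_std = np.full(n_agents, 0.5)
obs_std[0] = 2.0

models = np.zeros(n_agents)
for _ in range(n_steps):
    obs = truth + rng.normal(0.0, obs_std)                       # each agent's noisy observation
    neigh_avg = np.where(deg > 0, A @ models / np.maximum(deg, 1), models)
    models = 0.5 * obs + 0.5 * neigh_avg                         # combine observation and neighbors' average

print("mean squared model error:", np.mean((models - truth) ** 2))
```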
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction combine measurement error with mathematical regulatory network models and show how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
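The bias and a variance-based correction can be sketched for the simplest errors-in-variables regression; this is a generic illustration of the attenuation effect assuming a known measurement-error variance, not the article's exact estimators or error-estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta, sigma_u = 100_000, 2.0, 1.0      # sigma_u: known measurement-error std (assumed)

x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 0.5, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # regressor observed with error

# Naive OLS on the noisy regressor is attenuated towards zero.
b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Correction: rescale by Var(x_obs) / (Var(x_obs) - sigma_u^2).
b_corrected = b_naive * np.var(x_obs) / (np.var(x_obs) - sigma_u**2)

print("naive OLS slope    :", b_naive)      # ~ beta / (1 + sigma_u^2 / Var(x_true)) = 1.0
print("corrected estimate :", b_corrected)  # ~ 2.0
```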
Abstract:
Medication administration errors (MAE) are the most frequent kind of medication errors. Errors with antimicrobial drugs (AD) are relevant because they may interfere with patient safety and with the development of microbial resistance. The aim of this study is to analyze the AD errors detected in a Brazilian multicenter study of MAE. It was a descriptive and exploratory study carried out in clinical units of five Brazilian teaching hospitals. The hospitals were investigated during 30 days. MAE were detected by the observation technique and classified into the categories wrong route (WR), wrong patient (WP), wrong dose (WD), wrong time (WT) and unordered drug (UD). AD involved in MAE were classified according to the Anatomical Therapeutic Chemical (ATC) Classification System, and AD with a narrow therapeutic index (NTI) were identified. A descriptive statistical analysis was performed using SPSS version 11.5 software. A total of 1,500 errors were observed, 277 (18.5%) of which involved AD. The types of AD error were: WT 87.7%, WD 6.9%, WR 1.5%, UD 3.2% and WP 0.7%. The number of different AD involved was 36. The most frequent ATC classes were fluoroquinolones (13.9%), combinations of penicillins (13.9%), macrolides (8.3%) and third-generation cephalosporins (5.6%). The parenteral dosage form was associated with 55.6% of the AD errors, and 16.7% of the AD were NTI drugs; 47.4% of WD errors and 21.8% of WT errors involved NTI drugs. This study shows that these errors should be considered potential areas for improvement in the medication process and in patient safety, and that the rational use of AD needs to be promoted.
Abstract:
This paper proposes a three-stage offline approach to detect, identify, and correct series and shunt branch parameter errors. In Stage 1 the branches suspected of having parameter errors are identified through an Identification Index (II). The II of a branch is the ratio between the number of measurements adjacent to that branch whose normalized residuals are higher than a specified threshold value and the total number of measurements adjacent to that branch. Using several measurement snapshots, in Stage 2 the suspicious parameters are estimated, in a simultaneous multiple-state-and-parameter estimation, via an augmented state and parameter estimator that extends the V-theta state vector to include the suspicious parameters. Stage 3 validates the estimates obtained in Stage 2 and is performed via a conventional weighted least squares estimator. Several simulation results (with IEEE bus systems) have demonstrated the reliability of the proposed approach in dealing with single and multiple parameter errors in adjacent and non-adjacent branches, as well as in parallel transmission lines with series compensation. Finally, the proposed approach is confirmed by tests performed on the Hydro-Quebec TransEnergie network.
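Stage 1's Identification Index follows directly from its definition and can be sketched as below. The data structures (which measurements are adjacent to which branch, and their normalized residuals) are hypothetical placeholders, and the threshold value is an assumption.

```python
from typing import Dict, List

def identification_index(adjacent_meas: Dict[str, List[int]],
                         norm_residuals: Dict[int, float],
                         threshold: float = 3.0) -> Dict[str, float]:
    """II of a branch = (# adjacent measurements with |r_N| > threshold) / (# adjacent measurements)."""
    ii = {}
    for branch, meas_ids in adjacent_meas.items():
        if not meas_ids:
            ii[branch] = 0.0
            continue
        flagged = sum(1 for m in meas_ids if abs(norm_residuals[m]) > threshold)
        ii[branch] = flagged / len(meas_ids)
    return ii

# Hypothetical example: branch "1-2" has four adjacent measurements, two with large residuals.
adj = {"1-2": [10, 11, 12, 13], "2-3": [12, 14]}
rn = {10: 4.1, 11: 0.7, 12: 3.5, 13: 1.2, 14: 0.4}
print(identification_index(adj, rn))   # branches with high II are flagged as suspected of parameter errors
```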
Abstract:
This work presents an automated system for the measurement of form errors of mechanical components using an industrial robot. A three-probe error separation technique was employed to allow decoupling between the measured form error and errors introduced by the robotic system. A mathematical model of the measuring system was developed to provide inspection results by means of the solution of a system of linear equations. A new self-calibration procedure, which employs redundant data from several runs, minimizes the influence of probe zero-adjustment on the final result. Experimental tests applied to the measurement of straightness errors of mechanical components were accomplished and demonstrated the effectiveness of the employed methodology.
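A generic sketch of the three-point error-separation idea for straightness measurement, assuming equally spaced probes: the second difference of the three readings cancels the carriage's translation and tilt errors, and the profile is recovered by double summation up to an arbitrary line. This is the textbook scheme under simulated data, not necessarily the paper's exact formulation or its self-calibration procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated straightness profile f sampled at pitch d, plus carriage translation (z) and tilt (t) errors.
n, d = 200, 1.0
x = np.arange(n + 2) * d
f = 0.02 * np.sin(2 * np.pi * x / 50) + 0.005 * rng.normal(size=n + 2)   # workpiece profile
z = 0.1 * rng.normal(size=n)                                             # carriage translation error
t = 0.001 * rng.normal(size=n)                                           # carriage tilt error

# Probe readings at carriage position i (three probes spaced d apart).
mA = f[:n]      - z
mB = f[1:n + 1] - z - d * t
mC = f[2:n + 2] - z - 2 * d * t

# Second difference cancels z and t, leaving f[i] - 2 f[i+1] + f[i+2].
s = mA - 2 * mB + mC

# Recover the profile by double summation (two integration constants set to zero).
f_hat = np.zeros(n + 2)
for i in range(n):
    f_hat[i + 2] = s[i] + 2 * f_hat[i + 1] - f_hat[i]

# The reconstruction differs from f only by a straight line (the unknown integration constants);
# in practice a least-squares line is removed from f_hat. Here we verify against the known f.
line = np.polyfit(x, f_hat - f, 1)
resid = (f_hat - f) - np.polyval(line, x)
print("max reconstruction error after line removal:", np.max(np.abs(resid)))
```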
Abstract:
This work shows the application of the analytic hierarchy process (AHP) to full cost accounting (FCA) within the integrated resource planning (IRP) process. For this purpose, a pioneering case was developed in which different supply- and demand-side energy solutions for a metropolitan airport (Congonhas) were considered [Moreira, E.M., 2005. Modelamento energetico para o desenvolvimento limpo de aeroporto metropolitano baseado na filosofia do PIR-O caso da metropole de Sao Paulo. Dissertacao de mestrado, GEPEA/USP]. These solutions were compared and analyzed using the software tool "Decision Lens", which implements the AHP. The final part of this work presents a ranking of the resources that can be considered initial targets as energy resources, thereby supporting the constraints of the airport's IRP and setting parameters aimed at sustainable development.
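The AHP step at the core of this kind of analysis can be sketched as follows: derive priority weights from a pairwise-comparison matrix via its principal eigenvector and check consistency. The matrix below is a hypothetical comparison of three energy resources, not data from the Congonhas case or from Decision Lens.

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix (Saaty 1-9 scale) for three energy resources.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                        # priority vector (resource weights)

n = M.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
ri = 0.58                              # Saaty's random index for n = 3
print("priorities        :", w.round(3))
print("consistency ratio :", round(ci / ri, 3))   # below 0.1 is conventionally acceptable
```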
Diagnostic errors and repetitive sequential classifications in on-line process control by attributes
Abstract:
The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the m-th item (a single item) for every m produced items and deciding, at each inspection, whether the fraction of conforming items has been reduced or not. If the inspected item is non-conforming, production is stopped for adjustment. As the inspection system can be subject to diagnosis errors, we develop a probabilistic model that classifies the examined item repeatedly until a conforming or b non-conforming classifications are observed. The first event that occurs (a conforming classifications or b non-conforming classifications) determines the final classification of the examined item. Properties of an ergodic Markov chain were used to obtain the expression for the average cost of the control system, which can be optimized with respect to three parameters: the sampling interval of the inspections (m); the number of repeated conforming classifications (a); and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches. The first consists of a simple preventive policy: the production system is adjusted after every n produced items (no inspection is performed). The second classifies the examined item a fixed number of times, r, and considers it conforming if the majority of the classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and classifies the examined item as conforming if the majority of the classifications are conforming. On the other hand, depending on the degree of the diagnosis errors and the costs, the preventive policy can on average be more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure.
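The repeated-classification rule (stop at a conforming or b non-conforming classifications, whichever comes first) can be sketched as a simple recursion over the classification counts. The error probabilities below are hypothetical, and the sketch computes only the final-classification probabilities, not the full Markov-chain average-cost model of the paper.

```python
from functools import lru_cache

def prob_declared_conforming(p_conforming_result: float, a: int, b: int) -> float:
    """Probability the item is finally declared conforming when each independent
    classification returns 'conforming' with probability p_conforming_result and
    the procedure stops at a conforming or b non-conforming classifications."""
    @lru_cache(maxsize=None)
    def rec(i: int, j: int) -> float:   # i conforming and j non-conforming results so far
        if i == a:
            return 1.0
        if j == b:
            return 0.0
        return (p_conforming_result * rec(i + 1, j)
                + (1 - p_conforming_result) * rec(i, j + 1))
    return rec(0, 0)

# Hypothetical diagnosis errors: P(classified conforming | item conforming) = 0.95,
# P(classified conforming | item non-conforming) = 0.10; stopping limits a = 2, b = 2.
print("conforming item declared conforming    :", prob_declared_conforming(0.95, 2, 2))
print("non-conforming item declared conforming:", prob_declared_conforming(0.10, 2, 2))
```

Increasing a makes it harder to declare a non-conforming item conforming, while increasing b protects conforming items from false alarms; the paper's cost model trades these effects off together with the sampling interval m.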