768 results for Convergence of accounting standards
Abstract:
Umami taste is produced by glutamate acting on a fifth taste system. However, glutamate presented alone as a taste stimulus is not highly pleasant, and does not act synergistically with other tastes (sweet, salt, bitter and sour). We show here that when glutamate is given in combination with a consonant, savory, odour (vegetable), the resulting flavor can be much more pleasant. Moreover, we showed using functional brain imaging with fMRI that the glutamate taste and savory odour combination produced much greater activation of the medial orbitofrontal cortex and pregenual cingulate cortex than the sum of the activations by the taste and olfactory components presented separately. Supralinear effects were much less (and significantly less) evident for sodium chloride and vegetable odour. Further, activations in these brain regions were correlated with the pleasantness and fullness of the flavor, and with the consonance of the taste and olfactory components. Supralinear effects of glutamate taste and savory odour were not found in the insular primary taste cortex. We thus propose that glutamate acts by the nonlinear effects it can produce when combined with a consonant odour in multimodal cortical taste-olfactory convergence regions. We propose the concept that umami can be thought of as a rich and delicious flavor that is produced by a combination of glutamate taste and a consonant savory odour. Glutamate is thus a flavor enhancer because of the way that it can combine supralinearly with consonant odours in cortical areas where the taste and olfactory pathways converge far beyond the receptors.
Abstract:
Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and with intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
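To make the idea of an approximate transition kernel concrete, the following is a minimal sketch (not the authors' code) of a random-walk Metropolis chain in which the exact log-likelihood defining P is swapped for a subsampled estimate defining P̂; the Gaussian toy model, the subsample size and names such as loglik_subsample are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ N(theta, 1); the exact posterior uses all n observations.
y = rng.normal(loc=2.0, scale=1.0, size=100_000)

def loglik_full(theta):
    # Exact log-likelihood (prohibitively expensive for truly massive datasets).
    return -0.5 * np.sum((y - theta) ** 2)

def loglik_subsample(theta, m=1_000):
    # Approximate log-likelihood from a random subsample, rescaled to the full n.
    idx = rng.choice(y.size, size=m, replace=False)
    return -0.5 * (y.size / m) * np.sum((y[idx] - theta) ** 2)

def rw_metropolis(loglik, n_iter=5_000, step=0.01, theta0=0.0):
    # Random-walk Metropolis under a flat prior, using whichever
    # (exact or approximate) log-likelihood is supplied.
    theta, ll = theta0, loglik(theta0)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain[t] = theta
    return chain

chain_P = rw_metropolis(loglik_full)          # chain driven by P
chain_Phat = rw_metropolis(loglik_subsample)  # chain driven by the approximation
print(chain_P[-2_000:].mean(), chain_Phat[-2_000:].mean())

Quantifying how far the approximate chain drifts from the exact one is precisely the kind of question the stability theory described in the abstract addresses.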
Abstract:
We consider the two-dimensional Helmholtz equation with constant coefficients on a domain with piecewise analytic boundary, modelling the scattering of acoustic waves at a sound-soft obstacle. Our discretisation relies on the Trefftz-discontinuous Galerkin approach with plane wave basis functions on meshes with very general element shapes, geometrically graded towards domain corners. We prove exponential convergence of the discrete solution in terms of the number of unknowns.
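For orientation, Trefftz plane wave bases consist of functions that satisfy the Helmholtz equation exactly on each element; the standard form is recalled below (the direction vectors and notation are generic, not taken from this paper).

\[
\varphi_\ell(\mathbf{x}) \;=\; e^{\,i k\,\mathbf{d}_\ell\cdot\mathbf{x}},
\qquad \lvert\mathbf{d}_\ell\rvert = 1,
\qquad -\Delta\varphi_\ell - k^{2}\varphi_\ell = 0 ,
\]

so each basis function is itself a solution of the homogeneous Helmholtz equation with wavenumber k, and only the coupling between elements has to be enforced by the discontinuous Galerkin formulation.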
Abstract:
This paper takes as its motivation debates surrounding the multiplicity of functions of accounting information. We are particularly interested in the existential function of accounting numbers and argue that numerical signs, having discursive possibilities, may acquire new meanings through reframing. Drawing on Goffman's (1974) frame analysis and Vollmer's (2007) work on the three-dimensional character of numerical signs, we explore the ways in which numbers can go through instantaneous transformations and tell a new kind of story. In our analysis, we look at the main historical developments and current controversies surrounding accounting practice, with a specific focus on scandals involving numerical signs as moments where our understanding and the discursive function of previously inoffensive signs shift through collective involvement. We map the purpose and usefulness of Vollmer's three-dimensional framework in the analysis of selected financial accounting practices and scandals as examples of instances where numbers are reframed to suddenly perform a different existential function in the context of their calculative and symptomatic dimensions.
Abstract:
Inspired by the commercial desires of global brands and retailers to access the lucrative green consumer market, carbon is increasingly being counted and made knowable at the mundane sites of everyday production and consumption, from the carbon footprint of a plastic kitchen fork to that of an online bank account. Despite the challenges of counting and making commensurable the global warming impact of a myriad of biophysical and societal activities, this desire to communicate a product or service's carbon footprint has sparked complicated carbon calculative practices and enrolled actors at literally every node of multi-scaled and vastly complex global supply chains. Against this landscape, this paper critically analyzes the counting practices that create the 'e' in 'CO2e'. It is shown that central to these practices are a series of tools, models and databases which, building upon previous work (Eden, 2012; Star and Griesemer, 1989), we conceptualize here as 'boundary objects'. By enrolling everyday actors from farmers to consumers, these objects abstract and stabilize greenhouse gas emissions from their messy material and social contexts into units of CO2e, which can then be translated along a product's supply chain, thereby establishing a new currency of 'everyday supply chain carbon'. However, in making all greenhouse gas-related practices commensurable and in enrolling and stabilizing the transfer of information between multiple actors, these objects oversee a process of simplification reliant upon, and subject to, a multiplicity of approximations, assumptions, errors, discrepancies and/or omissions. Further, the outcomes of these tools are subject to the politicized and commercial agendas of the worlds they attempt to link, with each boundary actor inscribing different meanings to a product's carbon footprint in accordance with their specific subjectivities, commercial desires and epistemic framings. It is therefore shown that how a boundary object transforms greenhouse gas emissions into units of CO2e is the outcome of distinct ideologies regarding 'what' a product's carbon footprint is and how it should be made legible. These politicized decisions, in turn, inform specific reduction activities and ultimately advance distinct, specific and increasingly durable transition pathways to a low carbon society.
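As a point of reference for how the 'e' in CO2e is produced, the sketch below shows the global-warming-potential weighting on which such tools typically rely; the GWP100 factors (IPCC AR5 values) and the product inventory are illustrative and are not taken from the paper.

# Convert an inventory of greenhouse gases (in kg) into kg CO2-equivalent by
# weighting each gas with a 100-year global warming potential (GWP100).
# The factors below follow IPCC AR5 and are illustrative; real footprinting
# tools embed their own factors, system boundaries and allocation rules.
GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

def co2e(inventory_kg):
    return sum(mass * GWP100[gas] for gas, mass in inventory_kg.items())

# Hypothetical per-unit inventory for a product's supply chain.
print(co2e({"CO2": 0.9, "CH4": 0.012, "N2O": 0.0006}))  # ~1.4 kg CO2e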
Abstract:
Data from 58 strong-lensing events surveyed by the Sloan Lens ACS Survey are used to estimate the projected galaxy mass inside their Einstein radii by two independent methods: stellar dynamics and strong gravitational lensing. We perform a joint analysis of these two estimates within models with up to three degrees of freedom with respect to the lens density profile, stellar velocity anisotropy, and line-of-sight (LOS) external convergence, which incorporates the effect of the large-scale structure on strong lensing. A Bayesian analysis is employed to estimate the model parameters, evaluate their significance, and compare models. We find that the data favor Jaffe's light profile over Hernquist's, but that any particular choice between these two does not change the qualitative conclusions with respect to the features of the system that we investigate. The density profile is compatible with an isothermal one, being slightly steeper and having an uncertainty in the logarithmic slope of the order of 5% in models that take into account a prior ignorance on anisotropy and external convergence. We identify a considerable degeneracy between the density profile slope and the anisotropy parameter, which largely increases the uncertainties in the estimates of these parameters, but we find no evidence in favor of an anisotropic velocity distribution on average for the whole sample. An LOS external convergence following a prior probability distribution given by cosmology has a small effect on the estimation of the lens density profile, but can increase the dispersion of its value by nearly 40%.
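For context, the projected mass inside the Einstein radius against which the dynamical estimate is compared follows the standard lensing relation below (generic notation: D_d, D_s and D_ds are angular-diameter distances to the lens, to the source, and between lens and source).

\[
M_E \;=\; \Sigma_{\rm cr}\,\pi R_E^{2},
\qquad
\Sigma_{\rm cr} \;=\; \frac{c^{2}}{4\pi G}\,\frac{D_s}{D_d\,D_{ds}} ,
\]

and the LOS external convergence acts as an approximately uniform mass sheet along the line of sight that biases this estimate if it is ignored.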
Abstract:
Optimization methods that employ the classical Powell-Hestenes-Rockafellar augmented Lagrangian are useful tools for solving nonlinear programming problems. Their reputation decreased over the last 10 years due to the comparative success of interior-point Newtonian algorithms, which are asymptotically faster. In this research, a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its 'pure' counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the interior-point method is replaced by the Newtonian resolution of a Karush-Kuhn-Tucker (KKT) system identified by the augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
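As a reminder of the classical scheme the paper builds on, for a problem of the form min f(x) subject to h(x) = 0 and g(x) ≤ 0, the Powell-Hestenes-Rockafellar augmented Lagrangian and its multiplier updates take the standard form below (generic notation, not the paper's).

\[
L_{\rho}(x,\lambda,\mu) \;=\; f(x) \;+\; \lambda^{\top}h(x) \;+\; \frac{\rho}{2}\,\lVert h(x)\rVert^{2}
\;+\; \frac{1}{2\rho}\sum_{i}\Big(\max\{0,\;\mu_i + \rho\,g_i(x)\}^{2} - \mu_i^{2}\Big),
\]
\[
\lambda^{+} \;=\; \lambda + \rho\,h(x), \qquad \mu_i^{+} \;=\; \max\{0,\;\mu_i + \rho\,g_i(x)\},
\]

with the penalty parameter ρ increased whenever feasibility does not improve sufficiently; each outer iteration minimizes L_ρ approximately with respect to x before the multipliers and ρ are updated.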
Abstract:
We study the effects of population size in the Peck-Shell analysis of bank runs. We find that a contract featuring equal treatment for almost all depositors of the same type approximates the optimum. Because the approximation also satisfies Green-Lin incentive constraints when the planner discloses positions in the queue, welfare in these alternative specifications is sandwiched. Disclosure, however, is not needed, since our approximating contract is not subject to runs.
Abstract:
The objective of this study is to analyze the relationship between market analysts' forecast error regarding the profitability of companies listed on BM&FBOVESPA S.A. (Bovespa) and the disclosure requirements of the International Financial Reporting Standards (IFRS). This was done by regressing the analysts' forecast error, using panel data methodology, for the year of IFRS adoption in Brazil, 2010, and, complementarily, for 2012 as a reference point for these data. On this basis, the forecast error of the companies listed on the Bovespa was determined from forecast and realized profitability data (earnings per share), available in the I/B/E/S Earnings Consensus Information databases and provided by the Thomson ONE Investment Banking and Economática Pro® platforms, respectively. The results indicate a negative relationship between forecast error and compliance with IFRS disclosure requirements; that is, the higher the quality of the disclosed information, the smaller the analysts' forecast error. These results therefore support the view that the degree of compliance with accounting standards is as important as, or more important than, the standards themselves. Additionally, it was found that when a company listed on the BM&FBOVESPA is subject to a regulatory agency, its forecast error is not altered. Finally, these results suggest the importance of improving firms' auditing mechanisms for compliance with disclosure requirements, such as penalties for non-compliance (enforcement), corporate governance structures, and internal and external audits.
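The abstract does not state the exact error metric; purely as an illustration, a common specification in this literature scales the absolute gap between realized and consensus-forecast earnings per share,

\[
FE_{it} \;=\; \frac{\bigl\lvert EPS^{\mathrm{actual}}_{it} - EPS^{\mathrm{forecast}}_{it} \bigr\rvert}{\bigl\lvert EPS^{\mathrm{actual}}_{it} \bigr\rvert},
\]

which is then regressed, in a firm-year panel, on the degree of compliance with IFRS disclosure requirements.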
Abstract:
It has previously been shown that measurement of the critical speed is a non-invasive method of estimating the blood lactate response during exercise. However, its validity in children has yet to be demonstrated. The aims of this study were: (1) to verify if the critical speed determined in accordance with the protocol of Wakayoshi et al. is a non-invasive means of estimating the swimming speed equivalent to a blood lactate concentration of 4 mmol·l⁻¹ in children aged 10-12 years; and (2) to establish whether standard of performance has an effect on its determination. Sixteen swimmers were divided into two groups: beginners and trained. They initially completed a protocol for determination of speed equivalent to a blood lactate concentration of 4 mmol·l⁻¹. Later, during training sessions, maximum efforts were swum over distances of 50, 100 and 200 m for the calculation of the critical speed. The speeds equivalent to a blood lactate concentration of 4 mmol·l⁻¹ (beginners = 0.82 ± 0.09 m·s⁻¹, trained = 1.19 ± 0.11 m·s⁻¹; mean ± s) were significantly faster than the critical speeds (beginners = 0.78 ± 0.25 m·s⁻¹, trained = 1.08 ± 0.04 m·s⁻¹) in both groups. There was a high correlation between speed at a blood lactate concentration of 4 mmol·l⁻¹ and the critical speed for the beginners (r = 0.96, P < 0.001), but not for the trained group (r = 0.60, P > 0.05). The blood lactate concentration corresponding to the critical speed was 2.7 ± 1.1 and 3.1 ± 0.4 mmol·l⁻¹ for the beginners and trained group respectively. The percent difference between speed at a blood lactate concentration of 4 mmol·l⁻¹ and the critical speed was not significantly different between the two groups. At all distances studied, swimming performance was significantly faster in the trained group. Our results suggest that the critical speed underestimates swimming intensity corresponding to a blood lactate concentration of 4 mmol·l⁻¹ in children aged 10-12 years and that standard of performance does not affect the determination of the critical speed.
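In this protocol the critical speed is the slope of the linear regression of distance on time over the maximal efforts; the short sketch below illustrates the calculation, with the 50, 100 and 200 m times invented purely for illustration.

import numpy as np

# Critical speed = slope of the distance-versus-time regression line fitted
# to maximal efforts over several distances (here 50, 100 and 200 m).
distances_m = np.array([50.0, 100.0, 200.0])
times_s = np.array([40.0, 88.0, 190.0])   # hypothetical maximal-effort times

slope, intercept = np.polyfit(times_s, distances_m, deg=1)
print(f"critical speed ~ {slope:.2f} m/s")
# The intercept is often interpreted as the distance that can be covered
# from anaerobic sources alone.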
Abstract:
This article discusses the viability of reusing existing web-based e-Learning systems in the Interactive Digital TV environment, according to the Digital TV standard adopted in Brazil. Given the popularity of the Moodle system in academic and corporate settings, it was chosen as the foundation for a survey of its properties, aimed at specifying an Application Programming Interface (API) for convergence towards t-Learning characteristics; this demands effort in interface design, since computer and TV interaction concepts are quite different. This work presents studies on user interface design in two stages: surveying and detailing the functionalities of an e-Learning system, and adapting them for Interactive TV with regard to usability and Information Architecture concepts.