976 results for Losses
Abstract:
The importance of mangrove forests in carbon sequestration and coastal protection has been widely acknowledged. Large-scale damage to these forests, caused by hurricanes or clear felling, can increase vulnerability to erosion, subsidence and rapid carbon losses. However, it is unclear how small-scale logging might affect mangrove functions and services. We experimentally investigated the impact of small-scale tree removal on surface elevation and carbon dynamics in a mangrove forest at Gazi Bay, Kenya. The trees in five plots of a Rhizophora mucronata (Lam.) forest were first girdled and then cut. Another set of five plots at the same site served as controls. Treatment induced significant, rapid subsidence (−32.1±8.4 mm yr⁻¹, compared with surface elevation changes of +4.2±1.4 mm yr⁻¹ in controls). Subsidence in treated plots was likely due to the collapse and decomposition of dying roots and to sediment compaction, as evidenced by increased sediment bulk density. Sediment effluxes of CO₂ and CH₄ increased significantly, especially their heterotrophic component, suggesting enhanced organic matter decomposition. Estimates of total excess fluxes from treated relative to control plots were 25.3±7.4 t CO₂ ha⁻¹ yr⁻¹ (using surface carbon efflux) and 35.6±76.9 t CO₂ ha⁻¹ yr⁻¹ (using surface elevation losses and sediment properties). While such losses might not be permanent (provided cut areas recover), the observed rapid subsidence and enhanced decomposition of sediment organic matter caused by small-scale harvesting offer important lessons for mangrove management. In particular, mangrove managers need to consider carefully the trade-offs between extracting mangrove wood and losing other mangrove services, particularly shoreline stabilization, coastal protection and carbon storage.
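As a back-of-the-envelope check, the second (elevation-based) estimate can be reproduced from a subsidence rate, a bulk density and a carbon content. The sketch below is ours, not the authors' method: only the subsidence rate comes from the abstract, and the bulk density and organic-carbon fraction are illustrative assumptions chosen to land in the same range.

```python
# Inputs: the measured subsidence rate from the abstract; bulk density and
# organic-carbon fraction are ASSUMED for illustration only.
SUBSIDENCE_M_PER_YR = 32.1e-3     # net elevation loss in treated plots (m/yr)
BULK_DENSITY_KG_M3 = 430.0        # assumed dry sediment bulk density
ORGANIC_C_FRACTION = 0.07         # assumed organic-carbon mass fraction
CO2_PER_C = 44.0 / 12.0           # molar-mass conversion, C to CO2
M2_PER_HA = 10_000.0

# Upper bound: assume all carbon in the lost sediment layer is mineralized
# (in reality part of the subsidence is compaction, not decomposition).
c_loss_kg_per_ha = (SUBSIDENCE_M_PER_YR * BULK_DENSITY_KG_M3
                    * ORGANIC_C_FRACTION * M2_PER_HA)
co2_t_per_ha = c_loss_kg_per_ha * CO2_PER_C / 1000.0
print(f"Implied efflux: {co2_t_per_ha:.1f} t CO2 ha^-1 yr^-1")  # ~35 here
```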
Abstract:
We postulate that exogenous losses, which are typically regarded as introducing undesirable "noise" that must be filtered out or hidden from endpoints, can be surprisingly beneficial. In this paper we evaluate the effects of exogenous losses on transmission control loops, focusing primarily on efficiency and convergence-to-fairness properties. By analytically capturing the effects of exogenous losses, we are able to characterize the transient behavior of TCP. Our numerical results suggest that the "noise" resulting from exogenous losses should not be filtered out blindly, and that a careful examination of the parameter space leads to better strategies for treating exogenous losses inside the network. Specifically, we show that while low levels of exogenous losses do help connections converge to their fair share, higher loss levels lead to inefficient network utilization. We draw the line between these two cases by determining when it is advantageous to hide, or more interestingly to introduce, exogenous losses. Our proposed approach is based on classifying the effects of exogenous losses into long-term and short-term effects. This classification informs the extent to which we control exogenous losses so as to operate in an efficient and fair region. We validate our results through simulations.
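The qualitative trade-off can be illustrated with a toy AIMD model. The sketch below is our illustration, not the paper's analytical model: two flows share a bottleneck and additionally suffer an exogenous per-round loss probability p_exo, and all parameter values are made up.

```python
import random

def run(p_exo, capacity=100.0, rounds=5000, seed=1):
    """Average Jain fairness and utilization for two AIMD flows under
    an exogenous loss probability p_exo (toy model, illustrative only)."""
    random.seed(seed)
    w = [80.0, 10.0]                       # flows start far from the fair share
    fair_sum = util_sum = 0.0
    for _ in range(rounds):
        congested = sum(w) > capacity      # shared-bottleneck (congestion) loss
        for i in range(2):
            if congested or random.random() < p_exo:
                w[i] = max(1.0, w[i] / 2)  # multiplicative decrease
            else:
                w[i] += 1.0                # additive increase
        util_sum += min(sum(w), capacity) / capacity
        fair_sum += (w[0] + w[1]) ** 2 / (2 * (w[0] ** 2 + w[1] ** 2))
    return fair_sum / rounds, util_sum / rounds

# Sweep the exogenous loss rate: modest noise desynchronizes the flows,
# while heavy noise keeps windows small and utilization low.
for p in (0.0, 0.01, 0.1):
    f, u = run(p)
    print(f"p_exo={p:<4}  fairness={f:.3f}  utilization={u:.3f}")
```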
Abstract:
(This Technical Report revises TR-BUCS-2003-011.) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to inferring at the source host the cause of a packet loss: congestion or wireless transmission error. Our approach is "mostly" end-to-end, since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g., a wireless access agent). Specifically, we use maximum likelihood ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to explain the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer these conditional delay distributions efficiently, and we demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by the penalties on incorrect classification.
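The core decision rule can be sketched compactly. The example below assumes Gaussian conditional delay models purely for illustration (the report instead estimates the conditional distributions with HMMs rather than a parametric form); all parameter values are hypothetical.

```python
from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    """Gaussian density, standing in for a conditional delay distribution."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def classify_loss(rtt, p_wireless,
                  mu_cong=120.0, sd_cong=15.0,   # delays inflated by a full queue
                  mu_wifi=70.0, sd_wifi=20.0):   # hypothetical parameters (ms)
    """Label a loss 'congestion' or 'wireless' from the RTT observed around it.

    Decide congestion when the likelihood ratio exceeds the prior odds:
        f(rtt | congestion) / f(rtt | wireless) > P(wireless) / P(congestion)
    """
    lr = gauss_pdf(rtt, mu_cong, sd_cong) / gauss_pdf(rtt, mu_wifi, sd_wifi)
    return "congestion" if lr > p_wireless / (1.0 - p_wireless) else "wireless"

# Long RTTs (queue buildup) point to congestion; short ones to a wireless error.
for rtt in (60.0, 95.0, 130.0):
    print(rtt, "->", classify_loss(rtt, p_wireless=0.3))
```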
Abstract:
ERRATA: We present corrections to Fact 3 and, as a consequence, to Lemma 1 of BUCS Technical Report BUCS-TR-2000-013 (also published in IEEE ICNP 2000) [1]. These corrections result in slight changes to the formulae used for the identification of shared losses, which we quantify.
Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When two or more such connections share a common endpoint, there is an opportunity to correlate their end-to-end measurements to better diagnose and control the use of shared resources. We develop packet-probing techniques to determine whether a pair of connections experiences shared congestion. Correct, efficient diagnoses could enable new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror-site selection. Our extensive simulation results demonstrate that the conditional (Bayesian) probing approach we employ provides superior accuracy, converges faster, and tolerates a wider range of network conditions than recently proposed memoryless (Markovian) probing approaches.
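A rough paraphrase of the conditional-probing intuition, not the paper's exact estimator: if two connections share a bottleneck, a loss on one should raise the probability of a near-simultaneous loss on the other. The names and threshold below are illustrative.

```python
def shared_congestion(loss1, loss2, threshold=1.5):
    """loss1, loss2: aligned 0/1 loss indicators, one entry per probe round."""
    n = len(loss1)
    p2 = sum(loss2) / n                                   # marginal loss rate of flow 2
    given1 = [b for a, b in zip(loss1, loss2) if a == 1]  # rounds where flow 1 lost
    if not given1 or p2 == 0:
        return False
    p2_given_1 = sum(given1) / len(given1)                # conditional loss rate
    # A conditional/marginal ratio well above 1 suggests a shared bottleneck.
    return p2_given_1 / p2 > threshold

# Illustrative traces: flow 2's losses mostly coincide with flow 1's.
a = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0] * 50
b = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0] * 50
print(shared_congestion(a, b))   # True
```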
Abstract:
This thesis examines the relationship between initial loss events and the corporate governance and earnings-management behaviour of the firms involved. This is done using four years of corporate governance information spanning the report of an initial loss for companies listed on the UK Stock Exchange. An industry- and size-matched control sample is used in a difference-in-difference analysis to isolate the impact of the initial loss event during the period. It is reported that, in general, an initial loss motivates an improvement in corporate governance in those loss firms where a relative weakness existed prior to the loss, and that these changes mainly occur before the initial loss is announced. Firms with stronger (i.e. better-quality) corporate governance have less need to alter it in response to the loss. It is also reported that initial loss firms use positive abnormal accruals in the year before the loss in an attempt to defer or avoid the loss; the weaker the corporate governance, the more likely it is that loss firms manage earnings in this manner. Abnormal accruals are also found to be predictive of an initial loss and, when used as a conditioning variable, the quality of corporate governance is an important mitigating factor in this regard. Once the loss is reported, loss firms unwind these abnormal accruals, although no evidence of big-bath behaviour is found. The extent to which these abnormal accruals are subsequently unwound is also found to be a function of both the quality of corporate governance and the severity of the initial loss.
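The difference-in-difference logic can be sketched in a few lines. The data and column names below are hypothetical, a minimal illustration of the estimator rather than the thesis's specification.

```python
import pandas as pd

# Hypothetical panel: a governance score for loss firms and matched controls,
# observed before and after the initial-loss year.
df = pd.DataFrame({
    "gov_score": [5.1, 5.3, 6.4, 6.6, 5.0, 5.2, 5.1, 5.3],
    "loss_firm": [1, 1, 1, 1, 0, 0, 0, 0],  # 1 = initial-loss firm, 0 = control
    "post":      [0, 0, 1, 1, 0, 0, 1, 1],  # 1 = after the initial loss
})

means = df.groupby(["loss_firm", "post"])["gov_score"].mean()
# DiD: (loss post - loss pre) minus (control post - control pre)
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Governance change attributable to the initial loss: {did:.2f}")
```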
Abstract:
Malaysian Financial Reporting Standard (FRS) No. 136, Impairment of Assets, was issued in 2005. The standard requires public listed companies to report their non-current assets at no more than their recoverable amount. When the value of impaired assets is recovered, or partly recovered, FRS 136 requires the impairment charges to be reversed so that the assets are carried at their new recoverable amount. This study tests whether the reversal of impairment losses by Malaysian firms is more closely associated with economic reasons or with reporting incentives. The sample consists of 182 public companies listed on Bursa Malaysia (formerly known as the Kuala Lumpur Stock Exchange) that reported reversals of their impairment charges during the period 2006-2009. These firms are matched, on the basis of industrial classification and size, with firms that do not reverse impairments. In the year of reversal, this study finds that the reversal firms are more profitable (before reversals) than their matched firms. On average, the Malaysian stock market values the reversals of impairment losses positively. These results suggest that the reversals generally reflect increases in the value of the previously impaired assets. After partitioning firms into those that are likely to manage earnings and those that are not, this study finds that some Malaysian firms reverse the impairment charges to manage earnings. Their reversals are not value-relevant and are negatively associated with future firm performance. On the other hand, the reversals of firms deemed not to be earnings managers are positively associated with both future firm performance and current stock price performance, and this is the dominant motivation for the reversal of impairment charges in Malaysia. In further analysis, this study provides evidence that the opportunistic reversals are also associated with other manifestations of earnings management, namely abnormal working-capital accruals and the motivation to avoid earnings declines. In general, the findings suggest that the fair-value measurement in the impairment standard provides useful information to the users of financial statements.
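The industry-and-size matching step might be sketched as follows; the fields and data are hypothetical, and this is only one plausible implementation of the pairing described above.

```python
import pandas as pd

def match_controls(reversals: pd.DataFrame, pool: pd.DataFrame) -> pd.DataFrame:
    """Pair each reversal firm with the same-industry pool firm closest in size."""
    rows = []
    for _, firm in reversals.iterrows():
        candidates = pool[pool["industry"] == firm["industry"]]
        if candidates.empty:
            continue                        # no same-industry control available
        best = (candidates["total_assets"] - firm["total_assets"]).abs().idxmin()
        rows.append({"reversal_firm": firm["name"],
                     "control_firm": pool.loc[best, "name"]})
    return pd.DataFrame(rows)

reversals = pd.DataFrame({"name": ["A"], "industry": ["tech"], "total_assets": [120.0]})
pool = pd.DataFrame({"name": ["B", "C"], "industry": ["tech", "tech"],
                     "total_assets": [90.0, 115.0]})
print(match_controls(reversals, pool))      # pairs A with C (closest in size)
```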
Abstract:
It has long been recognized that whistler-mode waves can be trapped in plasmaspheric whistler ducts which guide the waves. For nonguided cases these waves are said to be "nonducted", which is dominant for L < 1.6. Wave-particle interactions are affected by whether the wave is ducted or nonducted. In the field-aligned ducted case, first-order cyclotron resonance is dominant, whereas nonducted interactions open up a much wider range of energies through equatorial and off-equatorial resonance. There is conflicting information as to whether the most significant particle loss processes are driven by ducted or nonducted waves. In this study we use loss cone observations from the DEMETER and POES low-altitude satellites to focus on electron losses driven by powerful VLF communications transmitters. Both satellites confirm that there are well-defined enhancements in the flux of electrons in the drift loss cone due to ducted transmissions from the powerful transmitter with call sign NWC. Typically, ∼80% of DEMETER nighttime orbits to the east of NWC show electron flux enhancements in the drift loss cone, spanning an L range consistent with first-order cyclotron theory and inconsistent with nonducted resonances. In contrast, ∼1% or less of orbits show electron flux enhancements associated with the nonducted transmissions from NPM. While the waves originating from these two transmitters have been predicted to lead to similar levels of pitch angle scattering, we find that the enhancements from NPM are at least 50 times smaller than those from NWC. This suggests that lower-latitude, nonducted VLF waves are much less effective in driving radiation belt pitch angle scattering. Copyright 2010 by the American Geophysical Union.
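For reference, the first-order cyclotron resonance condition invoked here is the standard textbook relation, not reproduced from the paper (sign conventions vary between authors):

```latex
\[
  \omega - k_{\parallel} v_{\parallel} = \frac{\Omega_e}{\gamma},
\]
% \omega: wave frequency; k_\parallel, v_\parallel: field-aligned wavenumber and
% electron velocity (counter-streaming for whistlers, so v_\parallel < 0);
% \Omega_e: electron gyrofrequency; \gamma: Lorentz factor. For a fixed
% transmitter frequency, the condition ties the resonant electron energy to the
% local gyrofrequency, and hence to L shell, which is why ducted, field-aligned
% propagation predicts precipitation over a narrow L range.
```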