931 results for harmonic losses
Abstract:
The importance of mangrove forests in carbon sequestration and coastal protection has been widely acknowledged. Large-scale damage to these forests, caused by hurricanes or clear felling, can enhance vulnerability to erosion, subsidence and rapid carbon losses. However, it is unclear how small-scale logging might affect mangrove functions and services. We experimentally investigated the impact of small-scale tree removal on surface elevation and carbon dynamics in a mangrove forest at Gazi Bay, Kenya. The trees in five plots of a Rhizophora mucronata (Lam.) forest were first girdled and then cut. Another set of five plots at the same site served as controls. Treatment induced significant, rapid subsidence (−32.1±8.4 mm yr⁻¹, compared with surface elevation changes of +4.2±1.4 mm yr⁻¹ in controls). Subsidence in treated plots was likely due to collapse and decomposition of dying roots and to sediment compaction, as evidenced by increased sediment bulk density. Sediment effluxes of CO₂ and CH₄ increased significantly, especially their heterotrophic component, suggesting enhanced organic matter decomposition. Estimates of total excess fluxes from treated compared with control plots were 25.3±7.4 t CO₂ ha⁻¹ yr⁻¹ (using surface carbon efflux) and 35.6±76.9 t CO₂ ha⁻¹ yr⁻¹ (using surface elevation losses and sediment properties). Whilst such losses might not be permanent (provided cut areas recover), the observed rapid subsidence and enhanced decomposition of sediment organic matter caused by small-scale harvesting offer important lessons for mangrove management. In particular, mangrove managers need to carefully consider the trade-offs between extracting mangrove wood and losing other mangrove services, particularly shoreline stabilization, coastal protection and carbon storage.
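As a rough illustration of how the second flux estimate is constructed, the sketch below converts a surface-elevation loss into a CO₂-equivalent efflux. The bulk density and organic-carbon fraction used here are illustrative placeholders, not the paper's measured sediment properties.

```python
# Back-of-envelope conversion of surface-elevation loss into a CO2
# efflux. Bulk density and organic-carbon fraction are assumed values
# for illustration only, not the paper's measured sediment properties.
subsidence_m_per_yr = 32.1e-3     # treated-plot subsidence, m yr^-1
bulk_density_t_per_m3 = 0.35      # assumed dry bulk density, t m^-3
carbon_fraction = 0.08            # assumed organic C fraction of sediment
m2_per_ha = 10_000
co2_per_c = 44.0 / 12.0           # molar-mass ratio of CO2 to C

c_loss = subsidence_m_per_yr * bulk_density_t_per_m3 * carbon_fraction
co2_flux = c_loss * co2_per_c * m2_per_ha   # t CO2 ha^-1 yr^-1
print(f"{co2_flux:.1f} t CO2 ha^-1 yr^-1")  # ~33 with these inputs
```

With these assumed inputs the result lands in the same range as the abstract's 35.6 t CO₂ ha⁻¹ yr⁻¹ estimate, though the actual figure depends on the measured sediment profile.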
Abstract:
Malignant or benign tumors may be ablated with high-intensity focused ultrasound (HIFU). This technique, known as focused ultrasound surgery (FUS), has been actively investigated for decades, but it has been slow to be implemented and difficult to control owing to the lack of real-time feedback during ablation. Two methods of imaging and monitoring HIFU lesions during formation were implemented simultaneously, in order to investigate the efficacy of each and to increase confidence in the detection of the lesion. The first, Acousto-Optic Imaging (AOI), detects the increasing optical absorption and scattering in the lesion: the intensity of a diffuse optical field in illuminated tissue is mapped at the spatial resolution of an ultrasound focal spot, using the acousto-optic effect. The second, Harmonic Motion Imaging (HMI), detects the changing stiffness in the lesion: the HIFU beam is modulated to force oscillatory motion in the tissue, and the amplitude of this motion, measured by ultrasound pulse-echo techniques, is influenced by the stiffness. Experiments were performed on store-bought chicken breast and freshly slaughtered bovine liver. The AOI results correlated with the onset and relative size of forming lesions much better than prior knowledge of the HIFU power and duration did. For HMI, a significant artifact due to acoustic nonlinearity was discovered; it was mitigated by adjusting the phase of the HIFU and imaging pulses. A more detailed model of the HMI process than previously published was built using finite element analysis. The model showed that the amplitude of harmonic motion was primarily affected by increases in acoustic attenuation and stiffness as the lesion formed, and that these effects interacted in complex ways, often counteracting each other. Furthermore, biological variability in tissue properties meant that changes in motion were masked by sample-to-sample variation. The HMI experiments predicted lesion formation in only about a quarter of the lesions made. In simultaneous AOI/HMI experiments, AOI appeared to be the more robust method for lesion detection.
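To make the HMI principle concrete, here is a toy sketch on synthetic data (not the authors' processing chain): a displacement time series oscillating at an assumed modulation frequency is projected onto that frequency to recover its amplitude, which falls as the simulated tissue stiffens. All values are hypothetical.

```python
# Toy HMI illustration: estimate the amplitude of tissue motion at the
# HIFU modulation frequency from a (synthetic) displacement series.
import numpy as np

def harmonic_amplitude(displacement, fs, f_mod):
    """Amplitude of the f_mod component via a single-bin DFT projection."""
    t = np.arange(len(displacement)) / fs
    c = np.mean(displacement * np.exp(-2j * np.pi * f_mod * t))
    return 2 * np.abs(c)

fs, f_mod = 2000.0, 50.0                 # Hz; illustrative values
t = np.arange(0, 0.2, 1 / fs)            # integer number of periods
rng = np.random.default_rng(0)
for stiffness in (1.0, 3.0):             # stiffer lesion -> smaller motion
    disp = (10.0 / stiffness) * np.sin(2 * np.pi * f_mod * t)
    disp = disp + 0.5 * rng.standard_normal(t.size)   # measurement noise
    print(stiffness, round(harmonic_amplitude(disp, fs, f_mod), 2))
```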
Abstract:
We postulate that exogenous losses, which are typically regarded as introducing undesirable "noise" that needs to be filtered out or hidden from end points, can be surprisingly beneficial. In this paper we evaluate the effects of exogenous losses on transmission control loops, focusing primarily on efficiency and convergence-to-fairness properties. By analytically capturing the effects of exogenous losses, we are able to characterize the transient behavior of TCP. Our numerical results suggest that the "noise" resulting from exogenous losses should not be filtered out blindly, and that a careful examination of the parameter space leads to better strategies for the treatment of exogenous losses inside the network. Specifically, we show that while low levels of exogenous losses do help connections converge to their fair share, higher levels of losses lead to inefficient network utilization. We draw the line between these two cases by determining when it is advantageous to hide, or, more interestingly, to introduce, exogenous losses. Our proposed approach is based on classifying the effects of exogenous losses into long-term and short-term effects. This classification informs the extent to which we control exogenous losses, so as to operate in an efficient and fair region. We validate our results through simulations.
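The trade-off can be illustrated with a toy synchronous AIMD model (a hedged sketch, not the paper's analytical characterization): two unequal TCP-like flows share a bottleneck, each RTT a flow may suffer an exogenous loss with probability p_exo, and we track mean utilization and Jain's fairness index. Heavy exogenous loss depresses utilization while moderate loss leaves flows near their fair share.

```python
# Toy synchronous AIMD model with exogenous losses (illustrative only).
import random

def simulate(p_exo, capacity=100.0, rtts=2000, seed=1):
    random.seed(seed)
    w = [5.0, 60.0]                       # deliberately unfair start
    util = fair = 0.0
    for _ in range(rtts):
        for i in range(2):
            if random.random() < p_exo:   # exogenous loss: back off
                w[i] = max(1.0, w[i] / 2)
            else:
                w[i] += 1.0               # additive increase
        if sum(w) > capacity:             # shared congestion loss
            w = [max(1.0, x / 2) for x in w]
        s, s2 = sum(w), sum(x * x for x in w)
        util += min(s / capacity, 1.0)
        fair += s * s / (2 * s2)          # Jain's fairness index
    return util / rtts, fair / rtts

for p in (0.0, 0.01, 0.1):
    u, f = simulate(p)
    print(f"p_exo={p}: mean utilization={u:.2f}, mean fairness={f:.2f}")
```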
Abstract:
(This Technical Report revises TR-BUCS-2003-011.) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach for inferring, at the source host, the reason for a packet loss: congestion or wireless transmission error. Our approach is "mostly" end-to-end since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g., a wireless access agent). Specifically, we use Maximum Likelihood Ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to illustrate the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer conditional delay distributions efficiently. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by the penalties on incorrect classification.
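A minimal sketch of such a likelihood-ratio classifier, assuming Gaussian delay distributions conditioned on the loss type purely for illustration (the delay means, standard deviations, and prior below are hypothetical values, not measurements from the report):

```python
# Hedged sketch of a Bayesian loss classifier: a likelihood-ratio test
# on the packet delay observed at the time of loss, with assumed
# Gaussian conditional delay distributions.
from math import exp, pi, sqrt

def gaussian_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def classify_loss(delay_ms, p_wireless,
                  cong=(80.0, 10.0),    # (mean, std) delay given congestion
                  wless=(45.0, 12.0)):  # (mean, std) delay given wireless error
    # Congestion losses follow a period of queue build-up, hence the
    # higher conditional mean delay assumed here.
    lr = gaussian_pdf(delay_ms, *cong) / gaussian_pdf(delay_ms, *wless)
    threshold = p_wireless / (1.0 - p_wireless)   # Bayes-optimal cutoff
    return "congestion" if lr > threshold else "wireless"

print(classify_loss(85.0, p_wireless=0.3))  # -> congestion
print(classify_loss(40.0, p_wireless=0.3))  # -> wireless
```

The decision rule follows from Bayes' theorem: declare congestion when f(d|congestion)·P(congestion) exceeds f(d|wireless)·P(wireless), i.e. when the likelihood ratio exceeds the prior odds of a wireless loss.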
Abstract:
ERRATA: We present corrections to Fact 3 and, as a consequence, to Lemma 1 of BUCS Technical Report BUCS-TR-2000-013 (also published in IEEE ICNP 2000) [1]. These corrections result in slight changes to the formulae used for the identification of shared losses, which we quantify.
Abstract:
Most real-time scheduling problems are known to be NP-complete. To enable accurate comparison between the schedules of heuristic algorithms and the optimal schedule, we introduce an omniscient oracle. This oracle provides schedules for periodic task sets with harmonic periods and variable resource requirements. Three different job value functions are described and implemented. Each corresponds to a different system goal. The oracle is used to examine the performance of different on-line schedulers under varying loads, including overload. We have compared the oracle against Rate Monotonic Scheduling, Statistical Rate Monotonic Scheduling, and Slack Stealing Job Admission Control Scheduling. Consistently, the oracle provides an upper bound on performance for the metric under consideration.
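For context on why harmonic periods matter here: when every task period divides the next larger one, Rate Monotonic scheduling can reach full utilization, i.e. the Liu and Layland bound n(2^(1/n)−1) rises to 1.0. A small sketch with hypothetical task sets (not the paper's oracle):

```python
# Schedulability check illustrating the harmonic-period special case
# of Rate Monotonic (RM) scheduling. Task sets are hypothetical.
def is_harmonic(periods):
    ps = sorted(periods)
    return all(ps[i + 1] % ps[i] == 0 for i in range(len(ps) - 1))

def rm_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs."""
    u = sum(c / t for c, t in tasks)
    n = len(tasks)
    bound = 1.0 if is_harmonic([t for _, t in tasks]) \
        else n * (2 ** (1 / n) - 1)   # Liu & Layland bound otherwise
    return u <= bound, u, bound

ok, u, bound = rm_schedulable([(2, 10), (4, 20), (8, 40)])  # harmonic set
print(ok, round(u, 2), round(bound, 2))   # True 0.6 1.0
```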
Abstract:
Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When two or more such connections share a common endpoint, there is an opportunity to correlate their end-to-end measurements to better diagnose and control the use of shared resources. We develop packet probing techniques to determine whether a pair of connections experience shared congestion. Correct, efficient diagnoses could enable new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that the conditional (Bayesian) probing approach we employ provides superior accuracy, converges faster, and tolerates a wider range of network conditions than recently proposed memoryless (Markovian) probing approaches.
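The underlying intuition can be shown with a much simpler stand-in than the paper's Bayesian probing: if two connections share a bottleneck, their probe delays co-vary, so even a plain correlation test on paired delay samples separates the shared from the independent case. All signals below are synthetic.

```python
# Synthetic illustration: flows sharing a bottleneck inherit a common
# queueing-delay component, so their probe delays correlate.
import numpy as np

rng = np.random.default_rng(0)
n = 500
jitter = lambda: rng.normal(0, 1, n)       # per-path independent noise

shared_queue = rng.exponential(2.0, n)      # common bottleneck delay bursts
a1, a2 = shared_queue + jitter(), shared_queue + jitter()   # shared pair
b1 = rng.exponential(2.0, n) + jitter()                     # independent pair
b2 = rng.exponential(2.0, n) + jitter()

def diagnose(d1, d2, threshold=0.5):
    r = np.corrcoef(d1, d2)[0, 1]
    return ("shared" if r > threshold else "independent"), round(r, 2)

print(diagnose(a1, a2))   # -> shared, r ~ 0.8
print(diagnose(b1, b2))   # -> independent, r ~ 0
```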
Abstract:
An extension to the Boundary Contour System model is proposed to account for boundary completion through vertices with arbitrary numbers of orientations, in a manner consistent with psychophysical observations, by way of harmonic resonance in a neural architecture.
Abstract:
An extension to the orientational harmonic model is presented as a rotation-, translation-, and scale-invariant representation of geometrical form in biological vision.
Abstract:
The proposed model, called the combinatorial and competitive spatio-temporal memory (CCSTM), provides an elegant solution to the general problem of storing and recalling spatio-temporal patterns in which states or sequences of states can recur in various contexts. For example, Fig. 1 shows two state sequences that share a common subsequence, C and D. The CCSTM assumes that any state has a distributed representation as a collection of features. Each feature has an associated competitive module (CM) containing K cells. On any given occurrence of a particular feature, A, exactly one of the cells in CM_A will be chosen to represent it. The particular set of cells active on the previous time step determines which cells are chosen to represent instances of their associated features on the current time step. If we assume that typically S features are active in any state, then any state has K^S different neural representations. This huge space of possible neural representations of any state is what underlies the model's ability to store and recall numerous context-sensitive state sequences. The purpose of this paper is simply to describe this mechanism.
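A minimal sketch of this mechanism follows; the data structures and winner-selection rule are our reading of the abstract, not the paper's exact algorithm. Each feature owns a competitive module of K cells, and the cell chosen to represent a feature depends, via learned weights, on the cells active at the previous time step, giving up to K^S codes for a state of S features.

```python
# Sketch of the CCSTM winner-take-all context mechanism (illustrative).
import random
from collections import defaultdict

K = 4  # cells per competitive module (CM)

class CCSTM:
    def __init__(self):
        # weights[(prev_cell, feature)] -> scores over that feature's K cells
        self.weights = defaultdict(lambda: [0.0] * K)

    def step(self, features, prev_cells, learn=True):
        chosen = set()
        for f in features:
            scores = [0.0] * K
            for pc in prev_cells:
                for k in range(K):
                    scores[k] += self.weights[(pc, f)][k]
            best = max(scores)
            # competition: highest-scoring cell wins; ties broken randomly
            k = random.choice([i for i, s in enumerate(scores) if s == best])
            chosen.add((f, k))
            if learn:  # strengthen links from previous winners to this winner
                for pc in prev_cells:
                    self.weights[(pc, f)][k] += 1.0
        return chosen

m = CCSTM()
prev = m.step({"A", "B"}, prev_cells=set())
curr = m.step({"C", "D"}, prev_cells=prev)  # context-dependent code for C, D
print(curr)
```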
Abstract:
This thesis examines the relationship between initial loss events and the corporate governance and earnings management behaviour of the firms involved. This is done using four years of corporate governance information spanning the report of an initial loss for UK-listed companies. An industry- and size-matched control sample is used in a difference-in-difference analysis to isolate the impact of the initial loss event during the period. It is reported that, in general, an initial loss motivates an improvement in corporate governance in those loss firms where a relative weakness existed prior to the loss, and that these changes mainly occur before the initial loss is announced. Firms with stronger (i.e. better quality) corporate governance have less need to alter it in response to the loss. It is also reported that initial loss firms use positive abnormal accruals in the year before the loss in an attempt to defer or avoid the loss; the weaker the corporate governance, the more likely it is that loss firms manage earnings in this manner. Abnormal accruals are also found to be predictive of an initial loss, and when they are used as a conditioning variable, the quality of corporate governance is an important mitigating factor in this regard. Once the loss is reported, loss firms unwind these abnormal accruals, although no evidence of big-bath behaviour is found. The extent to which these abnormal accruals are subsequently unwound is also found to be a function of both the quality of corporate governance and the severity of the initial loss.
Abstract:
Malaysian Financial Reporting Standard (FRS) No. 136, Impairment of Assets, was issued in 2005. The standard requires public listed companies to report their non-current assets at no more than their recoverable amount. When the value of an impaired asset is recovered, or partly recovered, FRS 136 requires the impairment charge to be reversed so that the asset is carried at its new recoverable amount. This study tests whether the reversal of impairment losses by Malaysian firms is more closely associated with economic reasons or with reporting incentives. The sample consists of 182 public companies listed on Bursa Malaysia (formerly known as the Kuala Lumpur Stock Exchange) that reported reversals of impairment charges during the period 2006-2009. These firms are matched, on the basis of industrial classification and size, with firms that do not reverse impairment. In the year of reversal, this study finds that the reversal firms are more profitable (before reversals) than their matched firms. On average, the Malaysian stock market values the reversals of impairment losses positively. These results suggest that the reversals generally reflect increases in the value of the previously impaired assets. After partitioning firms into those that are likely to manage earnings and those that are not, this study finds that some Malaysian firms reverse impairment charges to manage earnings. Their reversals are not value-relevant and are negatively associated with future firm performance. On the other hand, the reversals of firms deemed not to be earnings managers are positively associated with both future firm performance and current stock price performance, and this is the dominant motivation for the reversal of impairment charges in Malaysia. In further analysis, this study provides evidence that the opportunistic reversals are also associated with other manifestations of earnings management, namely abnormal working capital accruals and the motivation to avoid earnings declines. In general, the findings suggest that the fair value measurement in the impairment standard provides useful information to the users of financial statements.
Abstract:
A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction, without a priori assumptions about the target and without requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators (0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz) and with a physical aberrator (0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration of between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and a 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step toward improved clinical images.
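As a toy illustration of why a known in-tissue source enables correction (synthetic numbers, not the Duke system's processing): each array element's aberration phase is estimated from the field received from the known harmonic source, and the conjugate phase is applied on receive to restore the coherent focal sum.

```python
# Synthetic phase-screen correction: a known point source lets each
# element's aberration phase be estimated (here with added measurement
# noise) and conjugated, restoring the coherent focal sum.
import numpy as np

rng = np.random.default_rng(1)
n_elements = 64
screen = 0.5 * np.pi * rng.standard_normal(n_elements)   # aberrator phases
field = np.exp(1j * screen)          # per-element field from the known source
estimate = screen + 0.1 * np.pi * rng.standard_normal(n_elements)  # noisy fit
corrected = field * np.exp(-1j * estimate)

focal_db = lambda f: 20 * np.log10(np.abs(f.sum()) / n_elements)
print(f"before: {focal_db(field):.1f} dB, after: {focal_db(corrected):.1f} dB")
```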
Abstract:
It has long been recognized that whistler-mode waves can be trapped in plasmaspheric whistler ducts which guide the waves. For nonguided cases these waves are said to be "nonducted"; nonducted propagation is dominant for L < 1.6. Wave-particle interactions are affected by whether the wave is ducted or nonducted. In the field-aligned ducted case, first-order cyclotron resonance is dominant, whereas nonducted interactions open up a much wider range of energies through equatorial and off-equatorial resonance. There is conflicting information as to whether the most significant particle loss processes are driven by ducted or nonducted waves. In this study we use loss cone observations from the DEMETER and POES low-altitude satellites to focus on electron losses driven by powerful VLF communications transmitters. Both satellites confirm that there are well-defined enhancements in the flux of electrons in the drift loss cone due to ducted transmissions from the powerful transmitter with call sign NWC. Typically, ∼80% of DEMETER nighttime orbits to the east of NWC show electron flux enhancements in the drift loss cone, spanning an L range consistent with first-order cyclotron theory and inconsistent with nonducted resonances. In contrast, ∼1% or less of orbits show electron flux enhancements attributable to the nonducted transmissions from the NPM transmitter. While the waves originating from these two transmitters have been predicted to lead to similar levels of pitch angle scattering, we find that the enhancements from NPM are at least 50 times smaller than those from NWC. This suggests that lower-latitude, nonducted VLF waves are much less effective in driving radiation belt pitch angle scattering.
Abstract:
Nonlinear metamaterials have been predicted to support new and exciting domains in the manipulation of light, including novel phase-matching schemes for wave mixing. Most notable is the so-called nonlinear-optical mirror, in which a nonlinear negative-index medium emits the generated frequency towards the source of the pump. In this Letter, we experimentally demonstrate the nonlinear-optical mirror effect in a bulk negative-index nonlinear metamaterial, along with two other novel phase-matching configurations, utilizing periodic poling to switch between the three phase-matching domains.