997 results for Quenching rate
Abstract:
Quality of Service (QoS) guarantees are required by an increasing number of applications to ensure a minimal level of fidelity in the delivery of application data units through the network. Application-level QoS does not necessarily follow from any transport-level QoS guarantees regarding the delivery of the individual cells (e.g. ATM cells) which comprise the application's data units. The distinction between application-level and transport-level QoS guarantees is due primarily to the fragmentation that occurs when transmitting large application data units (e.g. IP packets, or video frames) using much smaller network cells, whereby the partial delivery of a data unit is useless, and bandwidth spent to partially transmit the data unit is wasted. The data units transmitted by an application may vary in size while being constant in rate, which results in a variable bit rate (VBR) data flow. That data flow requires QoS guarantees. Statistical multiplexing is inadequate because no guarantees can be made and no firewall property exists between different data flows. In this paper, we present a novel resource management paradigm for the maintenance of application-level QoS for VBR flows. Our paradigm is based on Statistical Rate Monotonic Scheduling (SRMS), in which (1) each application generates its variable-size data units at a fixed rate, (2) the partial delivery of data units is of no value to the application, and (3) the QoS guarantee extended to the application is the probability that an arbitrary data unit will be successfully transmitted through the network to/from the application.
Abstract:
Statistical Rate Monotonic Scheduling (SRMS) is a generalization of the classical RMS results of Liu and Layland [LL73] for periodic tasks with highly variable execution times and statistical QoS requirements. The main tenet of SRMS is that the variability in task resource requirements can be smoothed through aggregation to yield guaranteed QoS. This aggregation is done over time for a given task and across multiple tasks for a given period of time. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The SRMS feasibility test ensures that it is possible for a given periodic task set to share a given resource without violating any of the statistical QoS constraints imposed on each task in the set. The SRMS scheduling algorithm consists of two parts: a job admission controller and a scheduler. The SRMS scheduler is a simple, preemptive, fixed-priority scheduler. The SRMS job admission controller manages the QoS delivered to the various tasks through admit/reject and priority assignment decisions. In particular, it ensures the important property of task isolation, whereby tasks do not infringe on each other. In this paper we present the design and implementation of SRMS within the KURT Linux Operating System [HSPN98, SPH 98, Sri98]. KURT Linux supports conventional tasks as well as real-time tasks. It provides a mechanism for transitioning from normal Linux scheduling to a mixed scheduling of conventional and real-time tasks, and to a focused mode where only real-time tasks are scheduled. We give an overview of the technical issues we had to overcome to integrate SRMS into KURT Linux and present the API we have developed for scheduling periodic real-time tasks using SRMS.
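Since SRMS generalizes the classical Liu and Layland analysis, the deterministic RMS test is the natural point of reference. The sketch below shows the classical RMS utilization-bound feasibility test and the rate-monotonic priority assignment (shorter period means higher priority); it is an illustration of the baseline RMS machinery, not of the statistical SRMS feasibility test described in the abstract, and the function names are ours:

```python
def rms_feasible(tasks):
    """Liu-Layland sufficient feasibility test for classical RMS.

    tasks: list of (execution_time, period) pairs.
    Returns True if total utilization is within the n*(2**(1/n) - 1) bound,
    which guarantees all deadlines are met under fixed-priority RMS.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound


def rms_priorities(tasks):
    """Rate-monotonic priority order: task indices sorted by period,
    shortest period (highest rate) first."""
    return sorted(range(len(tasks)), key=lambda i: tasks[i][1])
```

For example, tasks with (execution, period) pairs (1, 4), (1, 5), (2, 10) have utilization 0.65, below the three-task bound of roughly 0.78, so the test accepts them.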
Abstract:
Recent research has exposed new breeds of attacks that are capable of denying service or inflicting significant damage on TCP flows without sustaining the attack traffic. Such attacks are often referred to as "low-rate" attacks, and they stand in sharp contrast to traditional Denial of Service (DoS) attacks that can completely shut off TCP flows by flooding an Internet link. In this paper, we study the impact of these new breeds of attacks and the extent to which defense mechanisms are capable of mitigating the attack's impact. By adopting a simple discrete-time model with a single TCP flow and a non-oblivious adversary, we were able to expose new variants of these low-rate attacks that could potentially have high attack potency per attack burst. Our analysis is focused on worst-case scenarios, so our results should be regarded as upper bounds on the impact of low-rate attacks rather than a real assessment under a specific attack scenario.
Abstract:
Speech can be understood at widely varying production rates. A working memory is described for short-term storage of temporal lists of input items. The working memory is a cooperative-competitive neural network that automatically adjusts its integration rate, or gain, to generate a short-term memory code for a list that is independent of item presentation rate. Such an invariant working memory model is used to simulate data of Repp (1980) concerning the changes of phonetic category boundaries as a function of their presentation rate. Thus the variability of categorical boundaries can be traced to the temporal invariance of the working memory code.
Abstract:
Background: With cesarean section rates increasing worldwide, clarity regarding negative effects is essential. This study aimed to investigate the rate of subsequent stillbirth, miscarriage, and ectopic pregnancy following primary cesarean section, controlling for confounding by indication. Methods and Findings: We performed a population-based cohort study using Danish national registry data linking various registers. The cohort included primiparous women with a live birth between January 1, 1982, and December 31, 2010 (n = 832,996), with follow-up until the next event (stillbirth, miscarriage, or ectopic pregnancy) or censoring by live birth, death, emigration, or study end. Cox regression models for all types of cesarean sections, sub-group analyses by type of cesarean, and competing risks analyses for the causes of stillbirth were performed. An increased rate of stillbirth (hazard ratio [HR] 1.14, 95% CI 1.01, 1.28) was found in women with primary cesarean section compared to spontaneous vaginal delivery, giving a theoretical absolute risk increase (ARI) of 0.03% for stillbirth, and a number needed to harm (NNH) of 3,333 women. Analyses by type of cesarean section showed similarly increased rates for emergency (HR 1.15, 95% CI 1.01, 1.31) and elective cesarean (HR 1.11, 95% CI 0.91, 1.35), although not statistically significant in the latter case. An increased rate of ectopic pregnancy was found among women with primary cesarean overall (HR 1.09, 95% CI 1.04, 1.15) and by type (emergency cesarean, HR 1.09, 95% CI 1.03, 1.15, and elective cesarean, HR 1.12, 95% CI 1.03, 1.21), yielding an ARI of 0.1% and a NNH of 1,000 women for ectopic pregnancy. No increased rate of miscarriage was found among women with primary cesarean, with maternally requested cesarean section associated with a decreased rate of miscarriage (HR 0.72, 95% CI 0.60, 0.85). 
Limitations include incomplete data on maternal body mass index, maternal smoking, fertility treatment, causes of stillbirth, and maternally requested cesarean section, as well as lack of data on antepartum/intrapartum stillbirth and gestational age for stillbirth and miscarriage. Conclusions: This study found that cesarean section is associated with a small increased rate of subsequent stillbirth and ectopic pregnancy. Underlying medical conditions, however, and confounding by indication for the primary cesarean delivery account for at least part of this increased rate. These findings will assist women and health-care providers to reach more informed decisions regarding mode of delivery.
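The numbers needed to harm quoted above follow directly from the absolute risk increases, since NNH is the reciprocal of the ARI. A one-line check of the reported figures (the function name is ours, for illustration):

```python
def number_needed_to_harm(ari_percent):
    """Number needed to harm: the reciprocal of the absolute risk
    increase (ARI), with the ARI given as a percentage."""
    return 1.0 / (ari_percent / 100.0)

# Stillbirth: ARI 0.03% -> NNH of roughly 3,333 women
# Ectopic pregnancy: ARI 0.1% -> NNH of 1,000 women
```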
Abstract:
Assuming that daily spot exchange rates follow a martingale process, we derive the implied time series process for the vector of 30-day forward rate forecast errors using weekly data. The conditional second moment matrix of this vector is modelled as a multivariate generalized ARCH process. The estimated model is used to test the hypothesis that the risk premium is a linear function of the conditional variances and covariances, as suggested by the standard asset pricing theory literature. Little support is found for this theory; instead, lagged changes in the forward rate appear to be correlated with the 'risk premium'. © 1990.
Abstract:
While cochlear implants (CIs) usually provide high levels of speech recognition in quiet, speech recognition in noise remains challenging. To overcome these difficulties, it is important to understand how implanted listeners separate a target signal from interferers. Stream segregation has been studied extensively in both normal and electric hearing, as a function of place of stimulation. However, the effects of pulse rate, independent of place, on the perceptual grouping of sequential sounds in electric hearing have not yet been investigated. A rhythm detection task was used to measure stream segregation. The results of this study suggest that while CI listeners can segregate streams based on differences in pulse rate alone, the amount of stream segregation observed decreases as the base pulse rate increases. Further investigation of the perceptual dimensions encoded by the pulse rate and the effect of sequential presentation of different stimulation rates on perception could be beneficial for the future development of speech processing strategies for CIs.
Abstract:
Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
Abstract:
BACKGROUND: Primary care providers' suboptimal recognition of the severity of chronic kidney disease (CKD) may contribute to untimely referrals of patients with CKD to subspecialty care. It is unknown whether U.S. primary care physicians' use of estimated glomerular filtration rate (eGFR) rather than serum creatinine to estimate CKD severity could improve the timeliness of their subspecialty referral decisions. METHODS: We conducted a cross-sectional study of 154 United States primary care physicians to assess the effect of use of eGFR (versus creatinine) on the timing of their subspecialty referrals. Primary care physicians completed a questionnaire featuring questions regarding a hypothetical White or African American patient with progressing CKD. We asked primary care physicians to identify the serum creatinine and eGFR levels at which they would recommend patients like the hypothetical patient be referred for subspecialty evaluation. We assessed significant improvement in the timing (from eGFR < 30 to ≥ 30 mL/min/1.73 m²) of their recommended referrals based on their use of creatinine versus eGFR. RESULTS: Primary care physicians recommended subspecialty referrals later (CKD more advanced) when using creatinine versus eGFR to assess kidney function (median eGFR 32 versus 55 mL/min/1.73 m², p < 0.001). Forty percent of primary care physicians significantly improved the timing of their referrals when basing their recommendations on eGFR. Improved timing occurred more frequently among primary care physicians practicing in academic (versus non-academic) practices or presented with White (versus African American) hypothetical patients (adjusted percentage (95% CI): 70% (45-87) versus 37% (reference) and 57% (39-73) versus 25% (reference), respectively; both p ≤ 0.01). CONCLUSIONS: Primary care physicians recommended subspecialty referrals earlier when using eGFR (versus creatinine) to assess kidney function. Enhanced use of eGFR by primary care physicians could lead to more timely subspecialty care and improved clinical outcomes for patients with CKD.
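The abstract does not state which eGFR equation the surveyed physicians used; a widely reported one is the four-variable MDRD study equation, sketched below with its commonly published coefficients. Treat the exact constants as an assumption of this illustration rather than a claim of the study:

```python
def egfr_mdrd(scr_mg_dl, age_years, female=False, black=False):
    """Four-variable MDRD study equation (IDMS-traceable serum creatinine).

    scr_mg_dl: serum creatinine in mg/dL.
    Returns eGFR in mL/min/1.73 m^2.
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742  # commonly published sex adjustment
    if black:
        egfr *= 1.212  # commonly published race adjustment
    return egfr

# An eGFR below 30 mL/min/1.73 m^2 corresponds to CKD stage 4, the
# referral-timing threshold used in the study above.
```

Note how a fixed creatinine value maps to very different eGFRs across ages and patient groups, which is one reason raw creatinine can understate CKD severity.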
Abstract:
At a workshop held at Resources for the Future in September 2011, twelve of the authors were asked by the US Environmental Protection Agency (EPA) to provide advice on the principles to be used in discounting the benefits and costs of projects that affect future generations. Maureen L. Cropper chaired the workshop. Much of the discussion in this article is based on the authors' recommendations and advice presented at the workshop. © The Author 2014.
Abstract:
© 2015 IEEE. In virtual reality applications, there is an aim to provide real-time graphics which run at high refresh rates. However, there are many situations in which this is not possible due to simulation or rendering issues. When running at low frame rates, several aspects of the user experience are affected. For example, each frame is displayed for an extended period of time, causing a high-persistence image artifact. The effect of this artifact is that movement may lose continuity, and the image jumps from one frame to another. In this paper, we discuss our initial exploration of the effects of high-persistence frames caused by low refresh rates and compare it to high frame rates and to a technique we developed to mitigate the effects of low frame rates. In this technique, the low frame rate simulation images are displayed with low persistence by blanking out the display during the extra time such an image would otherwise be displayed. In order to isolate the visual effects, we constructed a simulator for low- and high-persistence displays that does not affect input latency. A controlled user study comparing the three conditions for the tasks of 3D selection and navigation was conducted. Results indicate that the low-persistence display technique may not negatively impact user experience or performance as compared to the high-persistence case. Directions for future work on the use of low-persistence displays for low frame rate situations are discussed.
Abstract:
BACKGROUND: Automated reporting of estimated glomerular filtration rate (eGFR) is a recent advance in laboratory information technology (IT) that generates a measure of kidney function with chemistry laboratory results to aid early detection of chronic kidney disease (CKD). Because accurate diagnosis of CKD is critical to optimal medical decision-making, several clinical practice guidelines have recommended the use of automated eGFR reporting. Since its introduction, automated eGFR reporting has not been uniformly implemented by U.S. laboratories despite the growing prevalence of CKD. CKD is highly prevalent within the Veterans Health Administration (VHA), and implementation of automated eGFR reporting within this integrated healthcare system has the potential to improve care. In July 2004, the VHA adopted automated eGFR reporting through a system-wide mandate for software implementation by individual VHA laboratories. This study examines the timing of software implementation by individual VHA laboratories and factors associated with implementation. METHODS: We performed a retrospective observational study of laboratories in VHA facilities from July 2004 to September 2009. Using laboratory data, we identified the status of implementation of automated eGFR reporting for each facility and the time to actual implementation from the date the VHA adopted its policy for automated eGFR reporting. Using survey and administrative data, we assessed facility organizational characteristics associated with implementation of automated eGFR reporting via bivariate analyses. RESULTS: Of 104 VHA laboratories, 88% implemented automated eGFR reporting in existing laboratory IT systems by the end of the study period. Time to initial implementation ranged from 0.2 to 4.0 years with a median of 1.8 years. All VHA facilities with on-site dialysis units implemented the eGFR software (52%, p<0.001). Other organizational characteristics were not statistically significant.
CONCLUSIONS: The VHA did not have uniform implementation of automated eGFR reporting across its facilities. Facility-level organizational characteristics were not associated with implementation, and this suggests that decisions for implementation of this software are not related to facility-level quality improvement measures. Additional studies on implementation of laboratory IT, such as automated eGFR reporting, could identify factors that are related to more timely implementation and lead to better healthcare delivery.
Abstract:
Thanks to a passive cavity configuration, modulational instability in fibers is successfully observed, for the first time to our knowledge, in the continuous-wave regime. Our technique provides a new means of generating all-optically ultrahigh-repetition-rate pulse trains and opens up new possibilities for the fundamental study of modulational instability and related phenomena. © 2001 Optical Society of America.
Abstract:
While incidents requiring the rapid egress of passengers from trains are infrequent, perhaps the most challenging scenario for passengers involves the evacuation from an overturned carriage subjected to fire. In this paper we attempt to estimate the flow rate capacity of an overturned rail carriage end exit. This was achieved through two full-scale evacuation experiments, in one of which the participants were subjected to non-toxic smoke. The experiments were conducted as part of a pilot study into evacuation from rail carriages. In reviewing the experimental results, it should be noted that only a single run of each trial was undertaken with a limited — though varied — population. As a result it is not possible to test the statistical significance of the evacuation times quoted and so the results should be treated as indicative rather than definitive. The carriage used in the experiments was a standard class Mark IID which, while an old carriage design, shares many features with those carriages commonly found on the British rail network. In the evacuation involving smoke, the carriage end exit was found to achieve an average flow rate capacity of approximately 5.0 persons/min. The average flow rate capacity of the exit without smoke was found to be approximately 9.2 persons/min. It was noted that the presence of smoke tended to reduce significantly the exit flow rate. Due to the nature of the experimental conditions, these flow rates are considered optimistic. Finally, the authors make several recommendations for improving survivability in rail accidents. Copyright © 2000 John Wiley & Sons, Ltd.
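The measured flow rate capacities translate directly into indicative egress times. A small sketch of that arithmetic follows; the 60-passenger loading is our assumption for illustration, since the abstract reports only the per-exit flow rates:

```python
def egress_time_minutes(passengers, flow_rate_per_min):
    """Estimated time to evacuate a given number of passengers through a
    single exit with the stated flow rate capacity (persons/min)."""
    return passengers / flow_rate_per_min

# Using the paper's indicative rates for the overturned carriage end exit:
with_smoke = egress_time_minutes(60, 5.0)      # 12.0 min with non-toxic smoke
without_smoke = egress_time_minutes(60, 9.2)   # roughly 6.5 min without smoke
```

Because the paper treats these flow rates as optimistic, the resulting times should be read as lower bounds on real evacuation times, not predictions.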