931 results for Rate-equation models
Abstract:
For a given TCP flow, exogenous losses are those occurring on links other than the flow's bottleneck link. Exogenous losses are typically viewed as introducing undesirable "noise" into TCP's feedback control loop, leading to inefficient network utilization and potentially severe global unfairness. This has prompted much research on mechanisms for hiding such losses from end-points. In this paper, we show through analysis and simulations that low levels of exogenous losses are surprisingly beneficial in that they improve stability and convergence without sacrificing efficiency. Based on this, we argue that exogenous-loss awareness should be taken into account in any AQM design that aims to achieve global fairness. To that end, we propose an exogenous-loss aware Queue Management (XQM) scheme that actively accounts for and leverages exogenous losses. We use an equation-based approach to derive the quiescent loss rate for a connection based on the connection's profile and its global fair share. In contrast to other queue management techniques, XQM ensures that a connection sees its quiescent loss rate, not only by complementing already existing exogenous losses, but also by actively hiding exogenous losses, if necessary, to achieve global fairness. We establish the advantages of exogenous-loss awareness using extensive simulations in which we contrast the performance of XQM to that of a host of traditional exogenous-loss-unaware AQM techniques.
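The abstract does not spell out XQM's actual equation-based derivation. As a hypothetical sketch of what such a step could look like, the function below inverts the classical square-root TCP throughput relation T ≈ (MSS/RTT)·sqrt(3/(2p)) to obtain the loss probability at which a flow's throughput equals its fair share; all names and parameter choices are ours, not the paper's formula.

```python
# Hypothetical sketch (not XQM's actual equation): invert the classical
# square-root TCP throughput model T = (MSS/RTT) * sqrt(3/(2p)) to get the
# loss probability p at which a flow's throughput equals a target fair share.

def quiescent_loss_rate(fair_share_bps: float, mss_bytes: float, rtt_s: float) -> float:
    """Loss probability at which TCP throughput equals fair_share_bps."""
    mss_bits = mss_bytes * 8
    # T = (MSS/RTT) * sqrt(3/(2p))  =>  p = 1.5 * (MSS / (T * RTT))^2
    return 1.5 * (mss_bits / (fair_share_bps * rtt_s)) ** 2

# Example: 1500-byte segments, 100 ms RTT, 2 Mb/s fair share
print(quiescent_loss_rate(2e6, 1500, 0.1))  # ~0.0054, i.e. about 0.54% loss
```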
Abstract:
Hidden State Shape Models (HSSMs) [2], a variant of Hidden Markov Models (HMMs) [9], were proposed to detect shape classes of variable structure in cluttered images. In this paper, we formulate a probabilistic framework for HSSMs which provides two major improvements in comparison to the previous method [2]. First, while the method in [2] required the scale of the object to be passed as an input, the method proposed here estimates the scale of the object automatically. This is achieved by introducing a new term for the observation probability that is based on an object-clutter feature model. Second, a segmental HMM [6, 8] is applied to model the "duration probability" of each HMM state, which is learned from the shape statistics in a training set and helps obtain meaningful registration results. Using a segmental HMM provides a principled way to model dependencies between the scales of different parts of the object. In object localization experiments on a dataset of real hand images, the proposed method significantly outperforms the method of [2], reducing the incorrect localization rate from 40% to 15%. The improvement in accuracy becomes more significant if we consider that the method proposed here is scale-independent, whereas the method of [2] takes as input the scale of the object we want to localize.
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a fine scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for the precise acts. Each context of action may require a timed muscle activation pattern much different from those of similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine system design to serve the lowest and the highest of the three levels of temporal structure treated. If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
Abstract:
Many deterministic models with hysteresis have been developed in the areas of economics, finance, terrestrial hydrology and biology. These models lack any stochastic element, which can often have a strong effect in these areas. In this work, stochastically driven closed-loop systems with hysteresis-type memory are studied. This type of system is presented as a possible stochastic counterpart to deterministic models in the areas of economics, finance, terrestrial hydrology and biology. Some price dynamics models are presented as a motivation for the development of this type of model. Numerical schemes for solving this class of stochastic differential equation are developed in order to examine the prototype models presented. As a further test of the developed numerical schemes, the behaviour near equilibrium of coupled ordinary differential equations, where the time derivative of the Preisach operator is included in one of the equations, is examined numerically. A model of two-phenotype bacteria is also presented. This model is examined to explore memory effects and related hysteresis effects in the area of biology. The memory effects found in this model are similar to those found in the non-ideal relay. This non-ideal relay type behaviour is used to model a colony of bacteria with multiple switching thresholds. This model contains a Preisach-type memory with a variable Preisach weight function. Numerical results for this multi-threshold model show pattern formation in the distribution of the phenotypes among the available thresholds.
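For concreteness, the non-ideal relay referred to above is the standard two-threshold hysteresis element that underlies Preisach-type models; a minimal sketch, with illustrative thresholds not taken from the thesis:

```python
# Minimal non-ideal relay (rectangular hysteresis), the building block of
# Preisach-type memory. With thresholds alpha < beta, the output switches up
# at beta, down at alpha, and retains its previous value in between.

class NonIdealRelay:
    def __init__(self, alpha: float, beta: float, state: int = 0):
        assert alpha < beta
        self.alpha, self.beta, self.state = alpha, beta, state

    def step(self, u: float) -> int:
        if u >= self.beta:
            self.state = 1   # switch "up"
        elif u <= self.alpha:
            self.state = 0   # switch "down"
        return self.state    # in between: memory, state unchanged

relay = NonIdealRelay(alpha=0.2, beta=0.8)
print([relay.step(u) for u in [0.0, 0.5, 0.9, 0.5, 0.1, 0.5]])  # [0, 0, 1, 1, 0, 0]
```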
Abstract:
Background: With cesarean section rates increasing worldwide, clarity regarding negative effects is essential. This study aimed to investigate the rate of subsequent stillbirth, miscarriage, and ectopic pregnancy following primary cesarean section, controlling for confounding by indication. Methods and Findings: We performed a population-based cohort study using Danish national registry data linking various registers. The cohort included primiparous women with a live birth between January 1, 1982, and December 31, 2010 (n = 832,996), with follow-up until the next event (stillbirth, miscarriage, or ectopic pregnancy) or censoring by live birth, death, emigration, or study end. Cox regression models for all types of cesarean sections, sub-group analyses by type of cesarean, and competing risks analyses for the causes of stillbirth were performed. An increased rate of stillbirth (hazard ratio [HR] 1.14, 95% CI 1.01, 1.28) was found in women with primary cesarean section compared to spontaneous vaginal delivery, giving a theoretical absolute risk increase (ARI) of 0.03% for stillbirth, and a number needed to harm (NNH) of 3,333 women. Analyses by type of cesarean section showed similarly increased rates for emergency (HR 1.15, 95% CI 1.01, 1.31) and elective cesarean (HR 1.11, 95% CI 0.91, 1.35), although not statistically significant in the latter case. An increased rate of ectopic pregnancy was found among women with primary cesarean overall (HR 1.09, 95% CI 1.04, 1.15) and by type (emergency cesarean, HR 1.09, 95% CI 1.03, 1.15, and elective cesarean, HR 1.12, 95% CI 1.03, 1.21), yielding an ARI of 0.1% and a NNH of 1,000 women for ectopic pregnancy. No increased rate of miscarriage was found among women with primary cesarean, with maternally requested cesarean section associated with a decreased rate of miscarriage (HR 0.72, 95% CI 0.60, 0.85). Limitations include incomplete data on maternal body mass index, maternal smoking, fertility treatment, causes of stillbirth, and maternally requested cesarean section, as well as lack of data on antepartum/intrapartum stillbirth and gestational age for stillbirth and miscarriage. Conclusions: This study found that cesarean section is associated with a small increased rate of subsequent stillbirth and ectopic pregnancy. Underlying medical conditions, however, and confounding by indication for the primary cesarean delivery account for at least part of this increased rate. These findings will assist women and health-care providers to reach more informed decisions regarding mode of delivery.
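As a quick arithmetic check of the reported numbers, the number needed to harm is simply the reciprocal of the absolute risk increase:

```latex
\[
\mathrm{NNH} = \frac{1}{\mathrm{ARI}}:\qquad
\frac{1}{0.0003} \approx 3{,}333 \text{ (stillbirth)},\qquad
\frac{1}{0.001} = 1{,}000 \text{ (ectopic pregnancy)}.
\]
```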
Abstract:
This paper considers forecasting the conditional mean and variance from a single-equation dynamic model with autocorrelated disturbances following an ARMA process, and innovations with time-dependent conditional heteroskedasticity as represented by a linear GARCH process. Expressions for the minimum MSE predictor and the conditional MSE are presented. We also derive the formula for all the theoretical moments of the prediction error distribution from a general dynamic model with GARCH(1, 1) innovations. These results are then used in the construction of ex ante prediction confidence intervals by means of the Cornish-Fisher asymptotic expansion. An empirical example relating to the uncertainty of the expected depreciation of foreign exchange rates illustrates the usefulness of the results. © 1992.
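The GARCH-specific moment formulas are not reproduced in the abstract, but the generic Cornish-Fisher quantile adjustment on which the confidence intervals are built is standard. A minimal sketch, assuming the first four moments of the prediction-error distribution are available (function name and example values are ours):

```python
# Cornish-Fisher adjusted prediction interval: correct the Gaussian quantile
# for skewness s and excess kurtosis k (standard four-moment expansion; this
# is not the paper's GARCH-specific moment derivation).
from scipy.stats import norm

def cornish_fisher_quantile(p: float, s: float, k: float) -> float:
    """Approximate standardized quantile of a distribution with skewness s
    and excess kurtosis k."""
    z = norm.ppf(p)
    return (z
            + (z**2 - 1) * s / 6
            + (z**3 - 3*z) * k / 24
            - (2*z**3 - 5*z) * s**2 / 36)

# 95% interval for a forecast error with mean 0, std 1, skewness 0.5,
# excess kurtosis 1.2
lo = cornish_fisher_quantile(0.025, 0.5, 1.2)
hi = cornish_fisher_quantile(0.975, 0.5, 1.2)
print(lo, hi)  # wider on the right than the left, reflecting positive skew
```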
Abstract:
BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
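Since the attrition curve is given explicitly, it can be applied directly when planning a study; a minimal sketch (function name ours):

```python
# Apply the meta-analytic attrition curve S(t) = e^(-λt), λ = 0.0088 per week,
# to predict the dropout fraction at a planned study length.
import math

LAMBDA = 0.0088  # per week, estimated in the meta-analysis

def predicted_dropout(weeks: float, lam: float = LAMBDA) -> float:
    return 1.0 - math.exp(-lam * weeks)

print(round(predicted_dropout(52) * 100))  # ~37% at one year, as reported
```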
Abstract:
Diabetes mellitus is becoming increasingly prevalent worldwide. Additionally, there is an increasing number of patients receiving implantable devices such as glucose sensors and orthopedic implants. Thus, it is likely that the number of diabetic patients receiving these devices will also increase. Even though implantable medical devices are considered biocompatible by the Food and Drug Administration, the adverse tissue healing that occurs adjacent to these foreign objects is a leading cause of their failure. This foreign body response leads to fibrosis, encapsulation of the device, and a reduction or cessation of device performance. A second adverse event is microbial infection of implanted devices, which can lead to persistent local and systemic infections and also exacerbates the fibrotic response. Nearly half of all nosocomial infections are associated with the presence of an indwelling medical device. Events associated with both the foreign body response and implant infection can necessitate device removal and may lead to amputation, which is associated with significant morbidity and cost. Diabetes mellitus is generally indicated as a risk factor for the infection of a variety of implants such as prosthetic joints, pacemakers, implantable cardioverter defibrillators, penile implants, and urinary catheters. Implant infection rates in diabetic patients vary depending upon the implant and the microorganism; for example, diabetes was found to be a significant variable associated with an infection rate of nearly 7.2% for implantable cardioverter defibrillators infected by the microorganism Candida albicans. While research has elucidated many of the altered mechanisms of diabetic cutaneous wound healing, the internal healing adjacent to indwelling medical devices in a diabetic model has rarely been studied. Understanding this healing process is crucial to facilitating improved device design. The purpose of this article is to summarize the physiologic factors that influence wound healing and infection in diabetic patients, to review research concerning diabetes and biomedical implants and device infection, and to critically analyze which diabetic animal model might be advantageous for assessing internal healing adjacent to implanted devices.
Abstract:
The antibracket in the antifield-BRST formalism is known to define a map H^p × H^q → H^(p+q+1), associating with two equivalence classes of BRST-invariant observables of respective ghost number p and q an equivalence class of BRST-invariant observables of ghost number p + q + 1. It is shown that this map is trivial in the space of all functionals, i.e. that its image contains only the zeroth class. However, it is generically non-trivial in the space of local functionals. Implications of this result for the problem of consistent interactions among fields with a gauge freedom are then drawn. It is shown that the obstructions to constructing such non-trivial interactions lie precisely in the image of the antibracket map and are accordingly non-existent if one does not insist on locality. However, consistent local interactions are severely constrained. The example of Chern-Simons theory is considered. It is proved that the only consistent, local, Lorentz-covariant interactions for the abelian models are exhausted by the non-abelian Chern-Simons extensions. © 1993.
Abstract:
A discretized series of events is a binary time series that indicates whether or not events of a point process in the line occur in successive intervals. Such data are common in environmental applications. We describe a class of models for them, based on an unobserved continuous-time discrete-state Markov process, which determines the rate of a doubly stochastic Poisson process, from which the binary time series is constructed by discretization. We discuss likelihood inference for these processes and their second-order properties and extend them to multiple series. An application involves modeling the times of exposures to air pollution at a number of receptors in Western Europe.
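A minimal sketch of the generative model just described: a two-state continuous-time Markov chain modulates the intensity of a Poisson process, and the binary series records whether any event falls in each unit interval (all rates below are illustrative, not estimated from data):

```python
# Simulate a discretized doubly stochastic Poisson process: a hidden two-state
# CTMC sets the Poisson intensity; the observed binary series indicates
# whether at least one event occurred in each unit interval.
import numpy as np

rng = np.random.default_rng(0)
q = np.array([0.2, 0.5])     # rate of leaving state 0 and state 1
lam = np.array([0.05, 2.0])  # Poisson intensity in each state
T = 200.0                    # total observation time, in interval units

# Exact CTMC path via exponential holding times
times, states, t, s = [0.0], [0], 0.0, 0
while t < T:
    t += rng.exponential(1.0 / q[s])
    s = 1 - s
    times.append(min(t, T)); states.append(s)

# Integrated intensity over each unit interval, then a binary indicator
grid = np.arange(0.0, T + 1.0)
mass = np.zeros(len(grid) - 1)
for t0, t1, st in zip(times[:-1], times[1:], states[:-1]):
    lo = np.clip(grid[:-1], t0, t1)
    hi = np.clip(grid[1:], t0, t1)
    mass += lam[st] * np.maximum(hi - lo, 0.0)

binary = (rng.poisson(mass) > 0).astype(int)  # the discretized series
print(binary[:20])
```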
Abstract:
Mathematical models of straight-grate pellet induration processes have been developed and carefully validated by a number of workers over the past two decades. However, the subsequent exploitation of these models in process optimization is less clear, but obviously requires a sound understanding of how the key factors control the operation. In this article, we show how a thermokinetic model of pellet induration, validated against operating data from one of the Iron Ore Company of Canada (IOCC) lines in Canada, can be exploited in process optimization from the perspective of fuel efficiency, production rate, and product quality. Most existing processes are restricted in the options available for process optimization. Here, we review the role of each of the drying (D), preheating (PH), firing (F), after-firing (AF), and cooling (C) phases of the induration process. We then use the induration process model to evaluate whether the first drying zone is best operated with an up-draft or down-draft gas-flow stream, and we optimize the on-gas temperature profile in the hood of the PH, F, and AF zones, to reduce the burner fuel by at least 10 pct over the long term. Finally, we consider how efficient and flexible the process could be if some of the structural constraints were removed (i.e., addressed at the design stage). The analysis suggests it should be possible to reduce the burner fuel load by 35 pct, easily increase production by 5+ pct, and improve pellet quality.
Abstract:
Particle concentration is known to be a main factor affecting the erosion rate of pipe bends in pneumatic conveyors. Taking different bend radii into consideration, the effect of particle concentration on the weight loss of mild steel bends has been investigated in an industrial-scale test rig. Experimental results show a significant reduction of the specific erosion rate at high particle concentrations. This reduction is considered to be a result of the shielding effect during particle impacts. An empirical model is given. A theoretical study of the scaling of the shielding effect, together with comparisons against some existing models, is also presented. It is found that the reduction in specific erosion rate (relative to particle concentration) is stronger in conveying pipelines than has been found in the erosion tester.
Abstract:
Host-parasitoid models including integrated pest management (IPM) interventions with impulsive effects at both fixed and unfixed times were analyzed with regard to host-eradication, host-parasitoid persistence and host-outbreak solutions. The host-eradication periodic solution with fixed moments is globally stable if the host's intrinsic growth rate is less than the sum of the mean host-killing rate and the mean parasitization rate during the impulsive period. Solutions for all three categories can coexist, with switch-like transitions among their attractors showing that the dosages and frequencies of insecticide applications and the numbers of parasitoids released are crucial. Periodic solutions also exist for models with unfixed moments, for which the maximum amplitude of the host is less than the economic threshold. The dosages and frequencies of IPM interventions for these solutions are much reduced in comparison with the pest-eradication periodic solution. Our results, which are robust to the inclusion of stochastic effects and hold for a wide range of parameter values, confirm that IPM is more effective than any single control tactic.
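The model equations themselves are not given in the abstract. Purely to illustrate the kind of fixed-time impulsive dynamics described, the sketch below simulates a generic Lotka-Volterra host-parasitoid system in which a fraction of hosts is killed and a batch of parasitoids is released every T time units; the model form and every parameter value are our own invented stand-ins, not the paper's.

```python
# Illustrative only (not the paper's model): Lotka-Volterra host-parasitoid
# dynamics with fixed-time IPM impulses -- kill a fraction p of hosts and
# release tau parasitoids every T time units.
r, a, b, m = 1.0, 0.2, 0.05, 0.5   # host growth, attack, conversion, parasitoid death
p, tau, T = 0.4, 2.0, 1.0          # kill fraction, release size, impulse period
dt, t_end = 0.001, 40.0

H, P, t, next_pulse = 10.0, 2.0, 0.0, T
while t < t_end:
    # continuous dynamics between impulses (forward Euler)
    dH = r * H - a * H * P
    dP = b * a * H * P - m * P
    H += dH * dt
    P += dP * dt
    t += dt
    if t >= next_pulse:            # fixed-time impulsive intervention
        H *= 1 - p                 # host-killing (e.g. insecticide)
        P += tau                   # parasitoid release
        next_pulse += T

print(f"host density after {t_end} time units: {H:.2e}")  # driven toward zero
```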
Abstract:
It has been shown that remote monitoring of pulmonary activity can be achieved using ultra-wideband (UWB) systems, an approach that shows promise in home healthcare, rescue, and security applications. In this paper, we first present a multi-ray propagation model for a UWB signal traveling through the human thorax and reflecting off the air/dry-skin/fat/muscle interfaces. A geometry-based statistical channel model is then developed for simulating the reception of UWB signals in the indoor propagation environment. This model enables replication of time-varying multipath profiles due to the displacement of a human chest. Subsequently, a UWB distributed cognitive radar system (UWB-DCRS) is developed for the robust detection of chest cavity motion and the accurate estimation of respiration rate. The analytical framework can serve as a basis in the planning and evaluation of future measurement programs. We also provide a case study on how the antenna beamwidth affects the estimation of respiration rate, based on the proposed propagation models and system architecture.
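The UWB-DCRS processing chain is not detailed in the abstract; as a generic illustration of the final estimation step, the sketch below recovers a respiration rate from a simulated slow-time chest-displacement signal by spectral peak picking (sampling rate, band limits, and signal values are all illustrative):

```python
# Generic respiration-rate estimation from a slow-time displacement signal:
# window, FFT, and pick the spectral peak inside a plausible breathing band.
import numpy as np

fs = 20.0                         # slow-time sampling rate, Hz
t = np.arange(0.0, 32.0, 1 / fs)  # 32 s observation window
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.standard_normal(t.size)  # 15 bpm + noise

spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)   # 6-42 breaths/min
rate_hz = freqs[band][np.argmax(spec[band])]
print(f"estimated respiration rate: {rate_hz * 60:.1f} breaths/min")  # ~15.0
```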
Abstract:
The effect of temperature on respiration rate has been established, using Cartesian divers, for the meiofaunal sabellid polychaete Manayunkia aestuarina, the free-living nematode Sphaerolaimus hirsutus and the harpacticoid copepod Tachidius discipes from a mudflat in the Lynher estuary, Cornwall, U.K. Over the temperature range normally experienced in the field, i.e. 5–20 °C, the size-compensated respiration rate (R_c) was related to the temperature (T) in °C by the equations log10 R_c = -0.635 + 0.0339T for Manayunkia, log10 R_c = 0.180 + 0.0069T for Sphaerolaimus and log10 R_c = -0.428 + 0.0337T for Tachidius, equivalent to Q_10 values of 2.19, 1.17 and 2.17 respectively. In order to derive the temperature response for Manayunkia, a relationship was first established between respiration rate and body size: log10 R = 0.05 + 0.75 log10 V, where R = respiration in nl O2 ind^-1 h^-1 and V = body volume in nl. The Q_10 values are compared with values for other species derived from the literature. From these limited data a dichotomy emerges: species with a Q_10 ≈ 2, which apparently feed on diatoms and bacteria, the abundance of which is subject to large short-term variability, and species with Q_10 ≈ 1, apparently dependent on more stable food sources.
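The reported Q_10 values follow directly from the fitted slopes: for a regression log10 R_c = a + bT, raising T by 10 °C multiplies R_c by 10^(10b):

```latex
\[
\log_{10} R_c = a + bT \;\Rightarrow\;
Q_{10} = \frac{R_c(T+10)}{R_c(T)} = 10^{10b},
\qquad 10^{0.339} \approx 2.19,\quad 10^{0.069} \approx 1.17,\quad 10^{0.337} \approx 2.17 .
\]
```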