906 results for Quick responsiveness
Abstract:
In this work we introduce a new mathematical tool for optimizing routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed along the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data traffic flowing through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of our vector field model, we offer a scheme for energy-efficient routing. Our routing scheme is based on raising the permittivity coefficient in regions of the network where nodes have high residual energy, and lowering it in regions where nodes have little energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations.
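The electrostatics analogy described above can be sketched numerically: with a homogeneous permittivity, the potential satisfies a Poisson equation whose charges are the data sources and sinks, and routes follow the resulting field. The grid, charge placement, and Jacobi solver below are our own illustration, not the thesis's actual formulation:

```python
import numpy as np

def solve_potential(rho, n_iter=5000):
    """Jacobi relaxation for the Poisson equation  del^2 u = -rho  on a
    unit-spaced grid (homogeneous permittivity, u = 0 on the boundary)."""
    u = np.zeros_like(rho)
    for _ in range(n_iter):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + rho[1:-1, 1:-1])
    return u

# One information source (a "positive charge") and one destination ("negative").
n = 41
rho = np.zeros((n, n))
rho[10, 20] = 1.0    # sensor generating data
rho[30, 20] = -1.0   # destination collecting data

u = solve_potential(rho)

# The routing vector field is the negative gradient of the potential;
# it points from the source toward the destination.
Fy, Fx = np.gradient(-u)
```

Halfway between source and sink the field indeed points toward the sink, so greedy routing along the field reaches the destination.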
In the case of multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case lies in defining the regions of attraction of the destinations and deciding how much communication load to assign to each destination so as to optimize network performance. We use our vector field model to solve the optimization problem for this case. We define a conservative vector field, which can therefore be written as the gradient of a scalar field (also known as a potential field). Then we show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities.
The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a responsiveness test for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake, and we use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to find suitable signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods.
As a result of orthogonality, performance does not degrade due to cross interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
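The orthogonality argument can be illustrated with Walsh-Hadamard codes, a standard CDMA construction; the signature assignment and response weights below are illustrative only, not CAPM's actual design:

```python
import numpy as np

def hadamard(k):
    """Walsh-Hadamard matrix of order 2**k; its rows are mutually
    orthogonal +/-1 signatures that could be assigned to routers."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(3)                  # 8 orthogonal signatures of length 8
sig_a, sig_b = H[1], H[2]        # signatures for two testing routers

# Two routers perturb the same aggregate simultaneously; the observed
# rate response is a weighted sum of their signatures (the weights
# playing the role of the aggregate's responsiveness to each test).
observed = 0.7 * sig_a + 0.2 * sig_b

# Matched-filter correlation recovers each router's own weight exactly,
# because the cross term vanishes by orthogonality.
est_a = observed @ sig_a / len(sig_a)
est_b = observed @ sig_b / len(sig_b)
```

Each router recovers its own measurement (0.7 and 0.2) with zero interference from the other's simultaneous test.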
Abstract:
This paper examines the effects of permanent and transitory changes in government purchases in the context of a model of a small open economy that produces and consumes both traded and nontraded goods. The model incorporates an equilibrium interpretation of the business cycle that emphasizes the responsiveness of agents to intertemporal relative price changes. It is demonstrated that transitory increases in government purchases lead to an appreciation of the real exchange rate and an ambiguous change (although a likely worsening) in the current account, while permanent increases have an ambiguous impact on the real exchange rate and no effect on the current account. When agents do not know whether a given increase in government purchases is permanent or transitory the effect is a weighted average of these separate effects. The weights depend on the relative variances of the transitory and permanent components of government purchases. © 1985.
Abstract:
BACKGROUND: Outcome assessment can support the therapeutic process by providing a way to track symptoms and functionality over time, providing insights to clinicians and patients, as well as offering a common language to discuss patient behavior/functioning. OBJECTIVES: In this article, we examine the patient-based outcome assessment (PBOA) instruments that have been used to determine outcomes in acupuncture clinical research and highlight measures that are feasible, practical, economical, reliable, valid, and responsive to clinical change. The aims of this review were to assess and identify the commonly available PBOA measures, describe a framework for identifying appropriate sets of measures, and address the challenges associated with these measures and acupuncture. Instruments were evaluated in terms of feasibility, practicality, economy, reliability, validity, and responsiveness to clinical change. METHODS: This study was a systematic review. A total of 582 abstracts were reviewed using PubMed (from inception through April 2009). RESULTS: A total of 582 citations were identified. After screening of title/abstract, 212 articles were excluded. From the remaining 370 citations, 258 manuscripts identified explicit PBOA; 112 abstracts did not include any PBOA. The five most common PBOA instruments identified were the Visual Analog Scale, Symptom Diary, Numerical Pain Rating Scales, SF-36, and depression scales such as the Beck Depression Inventory. CONCLUSIONS: The way a questionnaire or scale is administered can have an effect on the outcome. Also, developing and validating outcome measures can be costly and difficult. Therefore, reviewing the literature on existing measures before creating or modifying PBOA instruments can significantly reduce the burden of developing a new measure.
Abstract:
The long-term soil carbon dynamics may be approximated by networks of linear compartments, permitting theoretical analysis of transit time (i.e., the total time spent by a molecule in the system) and age (the time elapsed since the molecule entered the system) distributions. We compute and compare these distributions for different network configurations, ranging from the simple individual compartment, to series and parallel linear compartments, feedback systems, and models assuming a continuous distribution of decay constants. We also derive the transit time and age distributions of some complex, widely used soil carbon models (the compartmental models CENTURY and Rothamsted, and the continuous-quality Q-Model), and discuss them in the context of long-term carbon sequestration in soils. We show how complex models including feedback loops and slow compartments have distributions with heavier tails than simpler models. Power law tails emerge when using continuous-quality models, indicating long retention times for an important fraction of soil carbon. The responsiveness of the soil system to changes in decay constants due to altered climatic conditions or plant species composition is found to be stronger when all compartments respond equally to the environmental change, and when the slower compartments are more sensitive than the faster ones or lose more carbon through microbial respiration. Copyright 2009 by the American Geophysical Union.
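For the simplest non-trivial case mentioned above, two linear compartments in series, the transit-time distribution is the convolution of two exponentials (a hypoexponential density), and its mean is the sum of the compartment turnover times. A small numerical check, with rate constants chosen arbitrarily rather than calibrated to any of the cited models:

```python
import numpy as np

# Two linear compartments in series with decay constants k1 != k2 (1/yr);
# a carbon molecule's transit time is hypoexponentially distributed:
#   f(t) = k1*k2/(k1 - k2) * (exp(-k2*t) - exp(-k1*t))
k1, k2 = 1.0, 0.1          # fast and slow compartment (illustrative values)

t = np.linspace(0.0, 200.0, 200001)
density = k1 * k2 / (k1 - k2) * (np.exp(-k2 * t) - np.exp(-k1 * t))

# Numerical mean transit time; analytically it equals 1/k1 + 1/k2.
dt = t[1] - t[0]
mean_transit = float(np.sum(t * density) * dt)
```

The slow compartment dominates the mean (here 1 + 10 = 11 years), which is the mechanism behind the heavy tails noted in the abstract.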
Abstract:
Atomic force microscopy, which is normally used for DNA imaging to gain qualitative results, can also be used for quantitative DNA research at the single-molecule level. Here, we evaluate the performance of AFM imaging specifically for quantifying supercoiled and relaxed plasmid DNA fractions within a mixture, and compare the results with the bulk-material analysis method, gel electrophoresis. The advantages and shortcomings of both methods are discussed in detail. Gel electrophoresis is a quick and well-established quantification method. However, it requires a large amount of DNA, and needs to be carefully calibrated for even slightly different experimental conditions to quantify accurately. AFM imaging is accurate, in that single DNA molecules in different conformations can be seen and counted. When used carefully with the necessary corrections, both methods provide consistent results. Thus, AFM imaging can be used for DNA quantification, as an alternative to gel electrophoresis.
Abstract:
BACKGROUND: The clinical syndrome of heart failure (HF) is characterized by an impaired cardiac beta-adrenergic receptor (betaAR) system, which is critical in the regulation of myocardial function. Expression of the betaAR kinase (betaARK1), which phosphorylates and uncouples betaARs, is elevated in human HF; this likely contributes to the abnormal betaAR responsiveness that occurs with beta-agonist administration. We previously showed that transgenic mice with increased myocardial betaARK1 expression had impaired cardiac function in vivo and that inhibiting endogenous betaARK1 activity in the heart led to enhanced myocardial function. METHODS AND RESULTS: We created hybrid transgenic mice with cardiac-specific concomitant overexpression of both betaARK1 and an inhibitor of betaARK1 activity to study the feasibility and functional consequences of the inhibition of elevated betaARK1 activity similar to that present in human HF. Transgenic mice with myocardial overexpression of betaARK1 (3 to 5-fold) have a blunted in vivo contractile response to isoproterenol when compared with non-transgenic control mice. In the hybrid transgenic mice, although myocardial betaARK1 levels remained elevated due to transgene expression, in vitro betaARK1 activity returned to control levels and the percentage of betaARs in the high-affinity state increased to normal wild-type levels. Furthermore, the in vivo left ventricular contractile response to betaAR stimulation was restored to normal in the hybrid double-transgenic mice. CONCLUSIONS: Novel hybrid transgenic mice can be created with concomitant cardiac-specific overexpression of 2 independent transgenes with opposing actions. Elevated myocardial betaARK1 in transgenic mouse hearts (to levels seen in human HF) can be inhibited in vivo by a peptide that can prevent agonist-stimulated desensitization of cardiac betaARs. This may represent a novel strategy to improve myocardial function in the setting of compromised heart function.
Abstract:
Morphine induces antinociception by activating mu opioid receptors (muORs) in spinal and supraspinal regions of the CNS. (Beta)arrestin-2 ((beta)arr2), a G-protein-coupled receptor-regulating protein, regulates the muOR in vivo. We have shown previously that mice lacking (beta)arr2 experience enhanced morphine-induced analgesia and do not become tolerant to morphine as determined in the hot-plate test, a paradigm that primarily assesses supraspinal pain responsiveness. To determine the general applicability of the (beta)arr2-muOR interaction in other neuronal systems, we have, in the present study, tested (beta)arr2 knock-out ((beta)arr2-KO) mice using the warm water tail-immersion paradigm, which primarily assesses spinal reflexes to painful thermal stimuli. In this test, the (beta)arr2-KO mice have greater basal nociceptive thresholds and markedly enhanced sensitivity to morphine. Interestingly, however, after a delayed onset, they do ultimately develop morphine tolerance, although to a lesser degree than the wild-type (WT) controls. In the (beta)arr2-KO but not WT mice, morphine tolerance can be completely reversed with a low dose of the classical protein kinase C (PKC) inhibitor chelerythrine. These findings provide in vivo evidence that the muOR is differentially regulated in diverse regions of the CNS. Furthermore, although (beta)arr2 appears to be the most prominent and proximal determinant of muOR desensitization and morphine tolerance, in the absence of this mechanism, the contributions of a PKC-dependent regulatory system become readily apparent.
Abstract:
Exogenous gene delivery to alter the function of the heart is a potential novel therapeutic strategy for treatment of cardiovascular diseases such as heart failure (HF). Before gene therapy approaches to alter cardiac function can be realized, efficient and reproducible in vivo gene transfer techniques must be established to efficiently deliver transgenes globally to the myocardium. We have been testing the hypothesis that genetic manipulation of the myocardial beta-adrenergic receptor (beta-AR) system, which is impaired in HF, can enhance cardiac function. We have delivered adenoviral transgenes, including the human beta2-AR (Adeno-beta2AR), to the myocardium of rabbits using an intracoronary approach. Catheter-mediated Adeno-beta2AR delivery produced diffuse multichamber myocardial expression, peaking 1 week after gene transfer. A total of 5 × 10^11 viral particles of Adeno-beta2AR reproducibly produced 5- to 10-fold beta-AR overexpression in the heart, which, at 7 and 21 days after delivery, resulted in increased in vivo hemodynamic function compared with control rabbits that received an empty adenovirus. Several physiological parameters, including dP/dtmax as a measure of contractility, were significantly enhanced basally and showed increased responsiveness to the beta-agonist isoproterenol. Our results demonstrate that global myocardial in vivo gene delivery is possible and that genetic manipulation of beta-AR density can result in enhanced cardiac performance. Thus, replacement of lost receptors seen in HF may represent novel inotropic therapy.
Abstract:
Gemstone Team FASTR (Finding Alternative Specialized Travel Routes)
Abstract:
With an ever increasing number of people taking numerous medications, the need to safely administer drugs and limit unintended side effects has never been greater. Antidote control remains the most direct means to counteract acute side effects of drugs, but, unfortunately, it has been challenging and cost prohibitive to generate antidotes for most therapeutic agents. Here we describe the development of a set of antidote molecules that are capable of counteracting the effects of an entire class of therapeutic agents based upon aptamers. These universal antidotes exploit the fact that, when systemically administered, aptamers are the only free extracellular oligonucleotides found in circulation. We show that protein- and polymer-based molecules that capture oligonucleotides can reverse the activity of several aptamers in vitro and counteract aptamer activity in vivo. The availability of universal antidotes to control the activity of any aptamer suggests that aptamers may be a particularly safe class of therapeutics.
Abstract:
This performance project will cover issues of performance technique in the scherzo. The Dictionary of Musical Terms defines technique as "the system of creating music, the musical skill to show personality by controlling tones that is not an abstract theory but a practical ability in composition or performance." My project focuses on techniques in fast tempos, specifically those found in the scherzo form and in concertos containing a scherzo character. The term scherzo has varied in its meaning and form throughout history. In the Baroque period, a scherzo was a work of light vocal or instrumental character. In the Classical period, scherzo still meant light in style, but it also indicated a quick tempo, often in 2/4 time. The scherzo was usually a single movement in a suite or multi-movement work. Like the minuet form, the scherzo contained a contrasting trio section. The scherzo was also standard in Romantic and post-Romantic symphonies and related genres. Because of the high degree of subjectivity in Romantic music, genres that stressed emotional content over abstract form developed rapidly. Some composers even wrote one-movement pieces entitled scherzo. These pieces became very important because they usually expressed a particular character or mood. The objective of my dissertation project is to research scherzo-like concertos, scherzos as single movements in larger forms, and scherzos as independent works. My first recital will consist of two concertos with a scherzo-like character. These are Mozart's Piano Concerto No. 9 in E-flat Major, K. 271, and Ravel's Piano Concerto in G Major. I will perform these works in December 2002 with a second piano. In addition, I will perform the Ravel with an orchestra in 2003. My second recital will consist of two parts. The first half presents multi-movement works with scherzo movements. The pieces are Haydn's Piano Sonata No. 3 in F Major, Hob. XVI/9, and Beethoven's Piano Sonata No. 10 in G Major, Op. 14, No. 2.
The second half presents four independent scherzi by Chopin. The final program will also include multi-movement works containing scherzos as well as independent scherzos. These are Prokofiev's Piano Sonata No. 2 in D minor, Op. 14; Grieg's Lyric Pieces, Op. 54; Schubert's Zwei Scherzi, D. 593; and Copland's Scherzo humoristique: Le Chat et la Souris (The Cat and the Mouse).
Abstract:
We estimate a carbon mitigation cost curve for the U.S. commercial sector based on econometric estimation of the responsiveness of fuel demand and equipment choices to energy price changes. The model econometrically estimates fuel demand conditional on fuel choice, which is characterized by a multinomial logit model. Separate estimation of end uses (e.g., heating, cooking) using the U.S. Commercial Buildings Energy Consumption Survey allows for exceptionally detailed estimation of price responsiveness disaggregated by end use and fuel type. We then construct aggregate long-run elasticities, by fuel type, through a series of simulations; own-price elasticities range from -0.9 for district heat services to -2.9 for fuel oil. The simulations form the basis of a marginal cost curve for carbon mitigation, which suggests that a price of $20 per ton of carbon would result in an 8% reduction in commercial carbon emissions, and a price of $100 per ton would result in a 28% reduction. © 2008 Elsevier B.V. All rights reserved.
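The reported own-price elasticities translate directly into demand responses under a constant-elasticity specification. The back-of-the-envelope function below is our illustration of that arithmetic, not the paper's econometric model:

```python
def demand_change(elasticity, price_ratio):
    """Fractional change in fuel demand when price moves by price_ratio,
    under a constant-elasticity demand curve Q1/Q0 = (P1/P0)**elasticity."""
    return price_ratio ** elasticity - 1.0

# Using the fuel-oil elasticity of -2.9 reported above, a 10% price rise
# implies roughly a 24% drop in demand; the less elastic district-heat
# value of -0.9 implies a much smaller response.
drop_fuel_oil = demand_change(-2.9, 1.10)
drop_district_heat = demand_change(-0.9, 1.10)
```

Aggregating such responses across end uses and fuel types is what lets the authors trace out a marginal cost curve for carbon mitigation.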
Abstract:
Heart failure is accompanied by severely impaired beta-adrenergic receptor (betaAR) function, which includes loss of betaAR density and functional uncoupling of remaining receptors. An important mechanism for the rapid desensitization of betaAR function is agonist-stimulated receptor phosphorylation by the betaAR kinase (betaARK1), an enzyme known to be elevated in failing human heart tissue. To investigate whether alterations in betaAR function contribute to the development of myocardial failure, transgenic mice with cardiac-restricted overexpression of either a peptide inhibitor of betaARK1 or the beta2AR were mated into a genetic model of murine heart failure (MLP-/-). In vivo cardiac function was assessed by echocardiography and cardiac catheterization. Both MLP-/- and MLP-/-/beta2AR mice had enlarged left ventricular (LV) chambers with significantly reduced fractional shortening and mean velocity of circumferential fiber shortening. In contrast, MLP-/-/betaARKct mice had normal LV chamber size and function. Basal LV contractility in the MLP-/-/betaARKct mice, as measured by LV dP/dtmax, was increased significantly compared with the MLP-/- mice but less than controls. Importantly, heightened betaAR desensitization in the MLP-/- mice, measured in vivo (responsiveness to isoproterenol) and in vitro (isoproterenol-stimulated membrane adenylyl cyclase activity), was completely reversed with overexpression of the betaARK1 inhibitor. We report here the striking finding that overexpression of this inhibitor prevents the development of cardiomyopathy in this murine model of heart failure. These findings implicate abnormal betaAR-G protein coupling in the pathogenesis of the failing heart and point the way toward development of agents to inhibit betaARK1 as a novel mode of therapy.
Abstract:
Screening of a human placenta lambda gt11 library has led to the isolation of the cDNA for the human beta 1-adrenergic receptor (beta 1AR). The probe used was the human genomic clone termed G-21. This clone, which contains an intronless gene for a putative receptor, was previously isolated by virtue of its cross hybridization with the human beta 2-adrenergic receptor (beta 2AR). The 2.4-kilobase cDNA for the human beta 1AR encodes a protein of 477 amino acid residues that is 69% homologous with the avian beta AR but only 54% homologous with the human beta 2AR. This suggests that the avian gene encoding beta AR and the human gene encoding beta 1AR evolved from a common ancestral gene. RNA blot analysis indicates a message of 2.5 kilobases in rat tissues, with a pattern of tissue distribution consistent with beta 1AR binding. This pattern is quite distinct from the pattern obtained when the beta 2AR cDNA is used as a probe. Expression of receptor protein in Xenopus laevis oocytes conveys adenylate cyclase responsiveness to catecholamines with a typical beta 1AR specificity. This contrasts with the typical beta 2 subtype specificity observed when the human beta 2AR cDNA is expressed in this system. Mammalian beta 1AR and beta 2AR are thus products of distinct genes, both of which are apparently related to the putative G-21 receptor.
Abstract:
We develop a methodology for testing Hicks's induced innovation hypothesis by estimating a product-characteristics model of energy-using consumer durables, augmenting the hypothesis to allow for the influence of government regulations. For the products we explored, the evidence suggests that (i) the rate of overall innovation was independent of energy prices and regulations; (ii) the direction of innovation was responsive to energy price changes for some products but not for others; (iii) energy price changes induced changes in the subset of technically feasible models that were offered for sale; (iv) this responsiveness increased substantially during the period after energy-efficiency product labeling was required; and (v) nonetheless, a sizable portion of efficiency improvements were autonomous.