982 results for STEADY-STATE VOLTAMMOGRAMS
Abstract:
Sonic boom propagation in a quiet, stratified, lossy atmosphere is the subject of this dissertation. Two questions are considered in detail: (1) Does waveform freezing occur? (2) Are sonic booms shocks in steady state? Both assumptions have been invoked in the past to predict sonic boom waveforms at the ground. A very general form of the Burgers equation is derived and used as the model for the problem. The derivation begins with the basic conservation equations. The effects of nonlinearity; attenuation and dispersion due to multiple relaxations, viscosity, and heat conduction; geometrical spreading; and stratification of the medium are included. When the absorption and dispersion terms are neglected, an analytical solution is available. The analytical solution is used to answer the first question. Geometrical spreading and stratification of the medium are found to slow down the nonlinear distortion of finite-amplitude waves. In certain cases the distortion reaches an absolute limit, a phenomenon called waveform freezing. Judging by the maturity of the distortion mechanism, sonic booms generated by aircraft at 18 km altitude are not frozen when they reach the ground. On the other hand, judging by the approach of the waveform to its asymptotic shape, N waves generated by aircraft at 18 km altitude are frozen when they reach the ground. To answer the second question we solve the full Burgers equation and for this purpose develop a new computer code, THOR. The code is based on an algorithm by Lee and Hamilton (J. Acoust. Soc. Am. 97, 906-917, 1995) and has the novel feature that all its calculations are done in the time domain, including absorption and dispersion. Results from the code compare very well with analytical solutions. In a NASA exercise to compare sonic boom computer programs, THOR gave results that agree well with those of other participants and ran faster.
We show that sonic booms are not steady state waves because they travel through a varying medium, suffer spreading, and fail to approximate step shocks closely enough. Although developed to predict sonic boom propagation, THOR can solve other problems for which the extended Burgers equation is a good propagation model.
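The freezing criterion can be illustrated with a toy calculation: the accumulated nonlinear "distortion age" of a wave is an integral of its amplitude along the ray path, and when spreading and stratification make the amplitude decay fast enough, that integral converges to a finite limit. A minimal numerical sketch, with invented amplitude laws and constants (not values from the dissertation):

```python
import math

def distortion_age(amplitude, r0, r_max, n=100000):
    """Numerically integrate the nonlinear 'distortion age'
    sigma(r) = integral of A(r') dr' along the ray path."""
    dr = (r_max - r0) / n
    sigma, r = 0.0, r0
    for _ in range(n):
        sigma += amplitude(r) * dr
        r += dr
    return sigma

# Amplitude decaying exponentially with distance (a crude stand-in for
# stratification-dominated decay): the integral converges, so the
# distortion saturates -- waveform freezing.
decaying = lambda r: math.exp(-r)
s1 = distortion_age(decaying, 0.0, 10.0)
s2 = distortion_age(decaying, 0.0, 20.0)   # doubling the path barely adds distortion

# Cylindrical spreading alone (A ~ 1/sqrt(r)): the integral grows without
# bound, so the waveform keeps distorting -- no freezing.
spreading = lambda r: 1.0 / math.sqrt(r)
u1 = distortion_age(spreading, 1.0, 10.0)
u2 = distortion_age(spreading, 1.0, 20.0)
```

The contrast between the two cases is the essence of the freezing question: what matters is whether the amplitude-weighted path integral saturates, not the distance travelled.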
Abstract:
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspicious, small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as to recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
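The transient-exploiting idea can be sketched with a toy AIMD flow: short loss-inducing bursts, timed once per adaptation cycle, keep the flow perpetually backing off, so its average throughput collapses even though the attacker is active in only a small fraction of rounds. This is an invented, minimal caricature of the paper's control-theoretic models; all constants are assumptions:

```python
def aimd_throughput(rounds, attack_period=0, alpha=1.0, beta=0.5, cap=100.0):
    """Toy AIMD flow: additive increase each round, multiplicative
    decrease on loss. Losses occur when the window hits capacity, or are
    induced every `attack_period` rounds by a short attack burst
    (attack_period=0 means no attack)."""
    w, total = 1.0, 0.0
    for t in range(1, rounds + 1):
        total += w
        if w >= cap or (attack_period and t % attack_period == 0):
            w *= beta          # back off after a (real or induced) loss
        else:
            w += alpha         # additive probe for bandwidth
    return total / rounds      # average throughput per round

base = aimd_throughput(1000)                      # undisturbed sawtooth
under_attack = aimd_throughput(1000, attack_period=10)  # bursts in 10% of rounds
```

In this caricature the attacked flow never climbs far from its back-off floor, losing well over half its throughput to an attacker whose bursts occupy a small, unsuspicious share of the timeline.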
Abstract:
The increased diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The parameterization of these control rules is done so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. In this paper, we define a new spectrum of window-based congestion control algorithms that are TCP-friendly as well as TCP-compatible under RED. In contrast to previous memoryless controls, our algorithms utilize history information in their control rules. Our proposed algorithms have two salient features: (1) they enable a wider region of TCP-friendliness, and thus more flexibility in trading off among smoothness, aggressiveness, and responsiveness; and (2) they ensure a faster convergence to fairness under a wide range of system conditions. We demonstrate analytically and through extensive ns simulations the steady-state and transient behaviors of several instances of this new spectrum of algorithms. In particular, SIMD is one instance in which the congestion window is increased super-linearly with time since the detection of the last loss. Compared to recently proposed TCP-friendly AIMD and binomial algorithms, we demonstrate the superiority of SIMD in: (1) adapting to sudden increases in available bandwidth, while maintaining competitive smoothness and responsiveness; and (2) rapidly converging to fairness and efficiency.
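The qualitative shape of SIMD's increase rule can be sketched as follows. This is not the paper's exact parameterization, only an illustration of super-linear (here quadratic-in-time) window growth after a multiplicative back-off; the constant c is invented:

```python
def simd_window(w0, beta, rounds_since_loss, c=0.1):
    """Illustrative super-linear window growth after a loss, versus
    AIMD's linear growth: back off multiplicatively, then grow as the
    square of the time since the last loss. Not the paper's exact rule,
    just the qualitative shape."""
    w_loss = w0 * (1.0 - beta)             # multiplicative decrease on loss
    return w_loss + c * rounds_since_loss ** 2

# Shortly after a loss the window grows slowly (smoothness); later it
# accelerates, quickly grabbing newly available bandwidth (aggressiveness).
early = simd_window(100.0, 0.5, 2)    # 50 + 0.1*4  = 50.4
late  = simd_window(100.0, 0.5, 20)   # 50 + 0.1*400 = 90.0
```

The trade-off the abstract describes is visible here: history (time since the last loss) lets the same rule be gentle right after congestion yet fast when bandwidth stays free, which a memoryless linear increase cannot do.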
Abstract:
Existing approaches for multirate multicast congestion control are either friendly to TCP only over large time scales or introduce unfortunate side effects, such as significant control traffic, wasted bandwidth, or the need for modifications to existing routers. We advocate a layered multicast approach in which steady-state receiver reception rates emulate the classical TCP sawtooth derived from additive-increase, multiplicative-decrease (AIMD) principles. Our approach introduces the concept of dynamic stair layers to simulate various rates of additive increase for receivers with heterogeneous round-trip times (RTTs), facilitated by a minimal amount of IGMP control traffic. We employ a mix of cumulative and non-cumulative layering to minimize the amount of excess bandwidth consumed by receivers operating asynchronously behind a shared bottleneck. We integrate these techniques into a congestion control scheme called STAIR, which is amenable to those multicast applications that can make effective use of arbitrary and time-varying subscription levels.
Abstract:
The increasing diversity of Internet application requirements has spurred recent interest in transport protocols with flexible transmission controls. In window-based congestion control schemes, increase rules determine how to probe available bandwidth, whereas decrease rules determine how to back off when losses due to congestion are detected. The control rules are parameterized so as to ensure that the resulting protocol is TCP-friendly in terms of the relationship between throughput and loss rate. This paper presents a comprehensive study of a new spectrum of window-based congestion controls, which are TCP-friendly as well as TCP-compatible under RED. Our controls utilize history information in their control rules. By doing so, they improve the transient behavior, compared to recently proposed slowly-responsive congestion controls such as general AIMD and binomial controls. Our controls can achieve better tradeoffs among smoothness, aggressiveness, and responsiveness, and they can achieve faster convergence. We demonstrate analytically and through extensive ns simulations the steady-state and transient behavior of several instances of this new spectrum.
Abstract:
The initial phase in a content distribution (file sharing) scenario is delicate due to the lack of global knowledge and the dynamics of the overlay. An unwise distribution of the pieces in this phase can delay the arrival at steady state, thus increasing file download times. We devise a scheduling algorithm at the seed (the source peer holding the full content), based on a proportional fair approach, and we implement it on a real file-sharing client [1]. In dynamic overlays, our solution improves the average download time of a standard BitTorrent-like protocol by up to 25%.
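The proportional-fair idea can be sketched with a hypothetical peer-selection helper: serve the candidate with the best ratio of achievable rate to service already received. The function name and the rate/history values are invented for illustration; the paper applies the criterion to piece scheduling at the seed, not necessarily in this exact form:

```python
def proportional_fair_pick(rates, served, eps=1e-9):
    """Pick the peer maximizing current_rate / past_service, the classic
    proportional-fair criterion: fast peers are preferred, but only until
    they have been served proportionally more than the others.
    Hypothetical helper, not the actual client's API."""
    return max(rates, key=lambda p: rates[p] / (served.get(p, 0.0) + eps))

rates = {"a": 10.0, "b": 8.0, "c": 9.0}    # achievable upload rates
served = {"a": 5.0, "b": 1.0, "c": 4.0}    # service received so far
# "a" is fastest, but "b" has been served least relative to its rate,
# so proportional fairness selects "b".
pick = proportional_fair_pick(rates, served)
```

Spreading early service across peers in this way is what prevents a few fast downloaders from monopolizing the seed during the delicate start-up phase.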
Abstract:
Buried heat sources can be investigated by examining thermal infrared images and comparing these with the results of theoretical models which predict the thermal anomaly a given heat source may generate. Key factors influencing surface temperature include the geometry and temperature of the heat source, the surface meteorological environment, and the thermal conductivity and anisotropy of the rock. In general, a geothermal heat flux greater than 2% of solar insolation is required to produce a detectable thermal anomaly in a thermal infrared image. A heat source of, for example, 2-300 K above the average surface temperature must be at a depth shallower than 50 m for the anomaly to be detectable in a thermal infrared image, under typical terrestrial conditions. Atmospheric factors are of critical importance. While the mean atmospheric temperature has little significance, convection is a dominant factor and can act to swamp the thermal signature entirely. Given a steady-state heat source that produces a detectable thermal anomaly, it is possible to loosely constrain the physical properties of the heat source and surrounding rock, using the surface thermal anomaly as a basis. The success of this technique is highly dependent on the degree to which the physical properties of the host rock are known. Important parameters include the surface thermal properties and thermal conductivity of the rock. Modelling of transient thermal situations was carried out to assess the effect of time-dependent thermal fluxes. One-dimensional finite element models can be readily and accurately applied to the investigation of diurnal heat flow, as with thermal inertia models. Diurnal thermal models of environments on Earth, the Moon, and Mars were constructed using finite elements and found to be consistent with published measurements. The heat flow from an injection of hot lava into a near-surface lava tube was considered.
While this approach is useful for study and long-term monitoring of inhospitable areas, it was found to have little hazard-warning utility, as the time taken for the thermal energy to propagate to the surface in dry rock (several months) is very long. The resolution of the thermal infrared imaging system is an important factor. Presently available satellite-based systems such as Landsat (120 m resolution) are inadequate for detailed study of geothermal anomalies. Airborne systems such as TIMS (variable resolution of 3-6 m) are much more useful for discriminating small buried heat sources. Planned improvements in the resolution of satellite-based systems will broaden the potential for application of the techniques developed in this thesis. It is important to note, however, that adequate spatial resolution is a necessary but not sufficient condition for successful application of these techniques.
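The months-long propagation delay follows from the conductive diffusion time scale t ~ L²/κ. A minimal sketch, assuming a typical rock thermal diffusivity of about 1e-6 m²/s (an order-of-magnitude textbook value, not a figure from the thesis):

```python
def diffusion_time_days(depth_m, kappa=1.0e-6):
    """Characteristic conductive time scale t ~ L**2 / kappa for heat to
    diffuse a distance L through rock. kappa ~ 1e-6 m^2/s is an assumed,
    order-of-magnitude thermal diffusivity for dry rock."""
    seconds = depth_m ** 2 / kappa
    return seconds / 86400.0

shallow = diffusion_time_days(2.0)    # a couple of metres: ~1.5 months
deep = diffusion_time_days(10.0)      # ten metres: several years
```

Because the time scale grows with the square of depth, even modest burial pushes the surface response far beyond any useful hazard-warning horizon, which is the point the thesis makes.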
Abstract:
Phase-locked loops (PLLs) are a crucial component in modern communications systems. Comprising a phase detector, a linear filter, and a controllable oscillator, they are widely used in radio receivers to retrieve the information content of remote signals. As such, they are capable of signal demodulation, phase and carrier recovery, frequency synthesis, and clock synchronization. Continuous-time PLLs are a mature area of study and have been covered in the literature since the early classical work by Viterbi [1] in the 1950s. With the rise of computing in recent decades, discrete-time digital PLLs (DPLLs) are a more recent discipline; most of the published literature dates from the 1990s onwards. Gardner [2] is a pioneer in this area. Our aim in this work is to address the difficulties encountered by Gardner [3] in his investigation of DPLL output phase jitter when additive noise on the input signal is combined with frequency quantization in the local oscillator. The model we use in our novel analysis of the system is also applicable to another of the cases examined by Gardner, namely the DPLL with a delay element integrated in the loop. This gives us the opportunity to examine that system in more detail, our analysis providing unique insights into the variance `dip' seen by Gardner in [3]. We first provide background on probability theory and stochastic processes, the branches of mathematics that underpin the study of noisy analogue and digital PLLs. We give an overview of classical analogue PLL theory as well as background on both the digital PLL and the circle map, referencing the model proposed by Teplinsky et al. [4, 5]. For our novel work, the case of combined frequency quantization and noisy input from [3] is investigated first numerically, and then analytically as a Markov chain via its Chapman-Kolmogorov equation.
The resulting delay equation for the steady-state jitter distribution is treated using two separate asymptotic analyses to obtain approximate solutions. It is shown that the variance obtained in each case matches the numerical results well. Other properties of the output jitter, such as the mean, are also investigated. In this way, we arrive at a more complete understanding of the interaction between quantization and input noise in the first-order DPLL than is possible using simulation alone. We also perform an asymptotic analysis of a particular case of the noisy first-order DPLL with delay, previously investigated by Gardner [3]. We show that a unique feature of the simulation results, namely the variance `dip' seen for certain levels of input noise, is explained by this analysis. Finally, we look at the second-order DPLL with additive noise, using numerical simulations to see the effects of low levels of noise on the limit cycles. We show how these effects are similar to those seen in the noise-free loop with non-zero initial conditions.
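The kind of numerical experiment described above can be reproduced qualitatively in a few lines: a first-order loop whose phase correction is quantized to a fixed frequency step, driven by a noisy phase detector. This is an illustrative model only, not the exact system analysed in the thesis; the gain, quantization step, and noise level are invented:

```python
import random

def dpll_phase_jitter(steps=20000, gain=0.1, q=0.05, noise=0.5, seed=1):
    """Toy first-order digital PLL: the phase error correction is a
    proportional step rounded to multiples of q (frequency quantization
    in the local oscillator), with additive Gaussian input noise.
    Returns the steady-state variance of the output phase jitter."""
    rng = random.Random(seed)
    phi, samples = 0.0, []
    for t in range(steps):
        err = phi + rng.gauss(0.0, noise)        # noisy phase-detector output
        correction = q * round(gain * err / q)   # quantized frequency step
        phi -= correction
        if t >= steps // 2:                      # discard start-up transient
            samples.append(phi)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

jitter_var = dpll_phase_jitter()
```

Sweeping the noise level in such a simulation is how the variance `dip' was first observed; the thesis's contribution is explaining that behaviour analytically via the Chapman-Kolmogorov equation rather than by simulation alone.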
Abstract:
A digital differentiator computes the derivative of an input signal. This work presents first-degree and second-degree differentiators, designed as both infinite-impulse-response (IIR) filters and finite-impulse-response (FIR) filters. The proposed differentiators have low-pass magnitude response characteristics, thereby rejecting noise at frequencies above the cut-off frequency. Both steady-state frequency-domain characteristics and time-domain analyses are given for the proposed differentiators. It is shown that the proposed differentiators perform well when compared to previously proposed filters. When considering the time-domain characteristics of the differentiators, the processing of quantized signals proved especially enlightening in terms of the filtering effects of the proposed differentiators. The coefficients of the proposed differentiators are obtained using an optimization algorithm whose objectives include the magnitude and phase responses. The low-pass characteristic of the proposed differentiators is achieved by minimizing the filter variance. The low-pass differentiators designed show steep roll-off as well as highly accurate magnitude response in the pass-band. Although fractional calculus has a history of over three hundred years, the design of fractional differentiators has become a ‘hot topic’ in recent decades. One challenging problem in this area is that there are many different definitions of the fractional model, such as the Riemann-Liouville and Caputo definitions. Through use of a feedback structure based on the Riemann-Liouville definition, it is shown that the performance of the fractional differentiator can be improved in both the frequency domain and the time domain. Two applications based on the proposed differentiators are described in the thesis.
Specifically, the first involves the application of second-degree differentiators to the estimation of the frequency components of a power system. The second concerns an image-processing edge-detection application.
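For a flavour of FIR differentiation, the textbook central-difference filter already shows the key behaviour: an accurate derivative estimate at low frequencies whose gain rolls off toward the Nyquist rate. It is a stand-in for illustration, not one of the optimized designs proposed in the thesis:

```python
import math

def fir_differentiate(x, dt):
    """Central-difference FIR differentiator, the simplest first-degree
    design: y[n] = (x[n+1] - x[n-1]) / (2*dt). Its frequency response
    sin(w*dt)/dt tracks the ideal differentiator w at low frequencies
    and falls off at high frequencies, giving mild noise rejection."""
    return [(x[n + 1] - x[n - 1]) / (2.0 * dt) for n in range(1, len(x) - 1)]

dt = 0.01                                              # 100 Hz sampling
x = [math.sin(2.0 * math.pi * k * dt) for k in range(1000)]   # 1 Hz sine
y = fir_differentiate(x, dt)
# The ideal derivative of sin(2*pi*t) peaks at 2*pi ~ 6.283; the filter
# comes very close at this low frequency.
peak = max(abs(v) for v in y)
```

The optimized designs in the thesis improve on exactly the two properties visible here: flattening the pass-band magnitude error and steepening the high-frequency roll-off.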
Abstract:
The bifunctional Ru(II) complex [Ru(BPY)2POQ-Nmet]2+ (1), in which the metallic unit is tethered by an aliphatic chain to an organic DNA binder, was designed in order to increase the affinity toward nucleic acids. The interaction of 1 with DNA was characterised from luminescence and absorption data and compared with the binding of its monofunctional metallic and organic analogues, [Ru(BPY)2(ac)phen]2+ (2) and Nmet-quinoline (3). The bifunctional complex has a binding affinity one order of magnitude higher than that of each of its separated moieties. Absorption changes induced upon addition of DNA at different pH indicate protonation of the organic sub-unit upon interaction with DNA under neutral conditions. The combination of the luminescence data under steady-state and time-resolved conditions shows that the attachment of the organic unit in 1 induces modifications of the association modes of the metallic unit, owing to the presence of the aliphatic chain, which probably hinders binding by the metallic moiety. The salt dependence of the binding constants was analysed in order to compare the thermodynamic parameters describing the association with DNA for each complex. This study demonstrates the value of derivatising a Ru(II) complex with an organic moiety (via the bifunctional ligand POQ-Nmet) for the development of high-affinity DNA probes or photoreactive agents.
Abstract:
Pigeons and other animals soon learn to wait (pause) after food delivery on periodic-food schedules before resuming the food-rewarded response. Under most conditions the steady-state duration of the average waiting time, t, is a linear function of the typical interfood interval. We describe three experiments designed to explore the limits of this process. In all experiments, t was associated with one key color and the subsequent food delay, T, with another. In the first experiment, we compared the relation between t (waiting time) and T (food delay) under two conditions: when T was held constant, and when T was an inverse function of t. The pigeons could maximize the rate of food delivery under the first condition by setting t to a consistently short value; optimal behavior under the second condition required a linear relation with unit slope between t and T. Despite this difference in optimal policy, the pigeons in both cases showed the same linear relation, with slope less than one, between t and T. This result was confirmed in a second parametric experiment that added a third condition, in which T + t was held constant. Linear waiting appears to be an obligatory rule for pigeons. In a third experiment we arranged for a multiplicative relation between t and T (positive feedback), and produced either very short or very long waiting times as predicted by a quasi-dynamic model in which waiting time is strongly determined by the just-preceding food delay.
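The quasi-dynamic account can be sketched in a few lines: if waiting time tracks the just-preceding food delay via the linear rule t = a + b·T (with slope b < 1), and the schedule imposes multiplicative feedback T = k·t, the loop converges when b·k < 1 and runs away when b·k > 1, producing the very short or very long waits described above. All constants here are illustrative, not fitted values from the experiments:

```python
def linear_waiting(a, b, k, t0=1.0, steps=60):
    """Quasi-dynamic linear-waiting model: the waiting time follows the
    just-preceding food delay, t[n+1] = a + b * T[n], while the schedule
    imposes T[n] = k * t[n] (multiplicative positive feedback).
    a and b are the intercept and slope (b < 1) of the linear-waiting rule."""
    t = t0
    for _ in range(steps):
        t = a + b * (k * t)
    return t

short = linear_waiting(a=0.5, b=0.6, k=1.0)   # b*k < 1: settles at a/(1 - b*k)
long_ = linear_waiting(a=0.5, b=0.6, k=2.0)   # b*k > 1: waits grow without bound
```

The fixed point a/(1 − b·k) exists only when b·k < 1; past that threshold the same obligatory rule, fed back on itself, drives waiting times to extremes, which is the model's explanation of the third experiment.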
Abstract:
Use of phase transfer catalysts such as 18-crown-6 enables ionic, linear conjugated poly[2,6-{1,5-bis(3-propoxysulfonicacidsodiumsalt)}naphthylene]ethynylene (PNES) to efficiently disperse single-walled carbon nanotubes (SWNTs) in multiple organic solvents under standard ultrasonication methods. Steady-state electronic absorption spectroscopy, atomic force microscopy (AFM), and transmission electron microscopy (TEM) reveal that these SWNT suspensions are composed almost exclusively of individualized tubes. High-resolution TEM and AFM data show that the interaction of PNES with SWNTs in both protic and aprotic organic solvents provides a self-assembled superstructure in which a PNES monolayer helically wraps the nanotube surface with periodic and constant morphology (observed helical pitch length = 10 ± 2 nm); time-dependent examination of these suspensions indicates that these structures persist in solution over periods that span at least several months. Pump-probe transient absorption spectroscopy reveals that the excited-state lifetimes and exciton binding energies of these well-defined nanotube-semiconducting polymer hybrid structures remain unchanged relative to analogous benchmark data acquired previously for standard sodium dodecylsulfate (SDS)-SWNT suspensions, regardless of solvent. These results demonstrate that the use of phase transfer catalysts with ionic semiconducting polymers that helically wrap SWNTs provides well-defined structures that solubilize SWNTs in a wide range of organic solvents while preserving critical nanotube semiconducting and conducting properties.
Abstract:
Phosphorus (P) is a crucial element for life and therefore for maintaining ecosystem productivity. Its local availability to the terrestrial biosphere results from the interaction between climate, tectonic uplift, atmospheric transport, and biotic cycling. Here we present a mathematical model that describes the terrestrial P-cycle in a simple but comprehensive way. The resulting dynamical system can be solved analytically for steady-state conditions, allowing us to test the sensitivity of P availability to the key parameters and processes. Given constant inputs, we find that humid ecosystems exhibit lower P availability due to higher runoff and losses, and that tectonic uplift is a fundamental constraint. In particular, we find that in humid ecosystems biotic cycling seems essential to maintain long-term P availability. The time-dependent P dynamics for the Franz Josef and Hawaii chronosequences show how tectonic uplift is an important constraint on ecosystem productivity, while hydroclimatic conditions control the P losses and the speed of approach to steady state. The model also helps describe how, with limited uplift and atmospheric input, as in the case of the Amazon Basin, ecosystems must rely on mechanisms that enhance P availability and retention. Our novel model has a limited number of parameters and can be easily integrated into global climate models to provide a representation of the response of the terrestrial biosphere to global change. © 2010 Author(s).
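The steady-state logic can be illustrated with a one-pool caricature of such a model (the actual model has more compartments; the symbols and values below are invented): with balance dP/dt = I − (1 − e)·k·P, where I is the tectonic/atmospheric input, k a runoff-driven loss coefficient, and e the fraction of P recaptured by biotic cycling before it is lost, the steady state is P* = I / ((1 − e)·k).

```python
def steady_state_p(input_rate, loss_coeff, recycling_eff):
    """Steady state of a one-pool terrestrial P balance,
    dP/dt = I - (1 - e) * k * P  =>  P* = I / ((1 - e) * k).
    A deliberately minimal caricature of a multi-compartment P-cycle model."""
    return input_rate / ((1.0 - recycling_eff) * loss_coeff)

# Humid ecosystems: higher runoff-driven losses (larger k) lower P availability.
arid = steady_state_p(1.0, 0.01, 0.5)            # 1 / (0.5 * 0.01) = 200
humid = steady_state_p(1.0, 0.05, 0.5)           # 1 / (0.5 * 0.05) = 40
# Stronger biotic recycling can compensate for the humid losses entirely.
humid_recycled = steady_state_p(1.0, 0.05, 0.9)  # back to 200
```

Even this toy version reproduces the abstract's two qualitative findings: humid (high-loss) conditions depress the steady-state pool, and efficient biotic cycling is what keeps long-term P availability from collapsing there.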
Abstract:
Steady-state diffuse reflection spectroscopy is a well-studied optical technique that can provide a noninvasive and quantitative method for characterizing the absorption and scattering properties of biological tissues. Here, we compare three fiber-based diffuse reflection spectroscopy systems that were assembled to create a lightweight, portable, and robust optical spectrometer that could be easily translated for repeated and reliable use in mobile settings. The three systems were built using a broadband light source and a compact, commercially available spectrograph. We tested two different light sources and two spectrographs (manufactured by two different vendors). The assembled systems were characterized by their signal-to-noise ratios, source-intensity drift, and detector linearity. We quantified the performance of these instruments in extracting optical properties from diffuse reflectance spectra in tissue-mimicking liquid phantoms with well-controlled optical absorption and scattering coefficients. We show that all assembled systems were able to extract the optical absorption and scattering properties with errors less than 10%, while providing a greater than ten-fold decrease in footprint and cost (relative to a previously well-characterized and widely used commercial system). Finally, we demonstrate the use of these small systems to measure optical biomarkers in vivo in a small-animal cancer-therapy study. We show that optical measurements from the simple portable system provide estimates of tumor oxygen saturation similar to those detected using the commercial system in murine tumor models of head and neck cancer.
Abstract:
A number of lines of evidence suggest that cross-talk exists between the cellular signal transduction pathways involving tyrosine phosphorylation catalyzed by members of the pp60c-src kinase family and those mediated by guanine nucleotide regulatory proteins (G proteins). In this study, we explore the possibility that direct interactions between pp60c-src and G proteins may occur with functional consequences. Preparations of pp60c-src isolated by immunoprecipitation phosphorylate on tyrosine residues the purified G-protein alpha subunits (G alpha) of several heterotrimeric G proteins. Phosphorylation is highly dependent on G-protein conformation, and G alpha(GDP) uncomplexed by beta gamma subunits appears to be the preferred substrate. In functional studies, phosphorylation of stimulatory G alpha (G alpha s) modestly increases the rate of binding of guanosine 5'-[gamma-[35S]thio]triphosphate to Gs as well as the receptor-stimulated steady-state rate of GTP hydrolysis by Gs. Heterotrimeric G proteins may represent a previously unappreciated class of potential substrates for pp60c-src.