194 results for relative loss bounds
at Indian Institute of Science - Bangalore - India
Abstract:
The galactose-binding lectin from the seeds of the jequirity plant (Abrus precatorius) was subjected to various chemical modifications in order to detect the amino acid residues involved in its binding activity. Modification of lysine, tyrosine, arginine, histidine, glutamic acid and aspartic acid residues did not affect the carbohydrate-binding activity of the agglutinin. However, modification of tryptophan residues, carried out under native and denaturing conditions with N-bromosuccinimide and 2-hydroxy-5-nitrobenzyl bromide, led to a complete loss of its carbohydrate-binding activity. Under denaturing conditions 30 tryptophan residues/molecule were modified by both reagents, whereas only 16 and 18 residues/molecule were available for modification by N-bromosuccinimide and 2-hydroxy-5-nitrobenzyl bromide respectively under native conditions. The relative loss in haemagglutinating activity after the modification of tryptophan residues indicates that two residues/molecule are required for the carbohydrate-binding activity of the agglutinin. Partial protection was observed in the presence of a saturating concentration of lactose (0.15 M). The decrease in fluorescence intensity of Abrus agglutinin on modification of tryptophan residues is linear in the absence of lactose and biphasic in the presence of lactose, indicating that tryptophan residues move from a similar to a different molecular environment on saccharide binding. The secondary structure of the protein remains practically unchanged upon modification of tryptophan residues, as indicated by c.d. and immunodiffusion studies, confirming that the loss in activity is due solely to the modification.
Abstract:
This paper examines the complexity of four incremental problems: (1) interval partitioning of a flow graph; (2) breadth-first search (BFS) of a directed graph; (3) lexicographic depth-first search (DFS) of a directed graph; and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises from the need to incrementally recompute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. We study the complexity of these problems from two different perspectives. The theory of incremental relative lower bounds (IRLB) is given in [2]; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. The lower bound result for lexicographic DFS is possibly the most interesting. In [5] the author considers lexicographic DFS to be a problem whose incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered be ones that do not require too much preprocessing.
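As a point of reference for problem (4), a minimal from-scratch computation of the postorder listing is sketched below in Python (names are illustrative; the paper's contribution concerns the incremental version, which it proves unbounded in the locally persistent model, not this static computation):

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Node:
    key: str
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def postorder(root: Optional[Node]) -> List[str]:
    """Postorder listing: the order in which SU-style labeling visits subtrees."""
    if root is None:
        return []
    return postorder(root.left) + postorder(root.right) + [root.key]

# A small expression tree: a + (b * c)
tree = Node("+", Node("a"), Node("*", Node("b"), Node("c")))
print(postorder(tree))   # ['a', 'b', 'c', '*', '+']
```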
Abstract:
Two questions should be answered about any engineering computation - deterministic, probabilistic/randomized, or heuristic: (i) how good are the computed results/outputs, and (ii) what is the cost, in terms of the amount of computation and the amount of storage used in obtaining them? The absolutely error-free quantities, as well as the completely errorless computations occurring in a natural process, can never be captured by any means at our disposal. While the computations in nature/natural processes, including their real input quantities, are exact, the computations that we perform on a digital computer, or that are carried out in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here, by error we mean relative error bounds. Since the exact error is never known under any circumstances or in any context, the term error denotes nothing but an error bound. Further, in engineering computations it is the relative error or, equivalently, the relative error bound (and not the absolute error) that is supremely important in providing information about the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus, if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead and solve such inconsistent/near-inconsistent problems, and we do obtain results that can be useful in real-world situations. The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, in other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It characterizes the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amounts of computation and storage, through complexity. It points out the limitations of error-free computation (wherever that is possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
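A minimal numeric illustration of the central point, assuming the 0.005 per cent instrument hypothesis quoted above (the measured value itself is hypothetical):

```python
# The exact error is unknowable; what an instrument gives us is a relative
# error *bound*. Assume the 0.005% hypothesis stated in the abstract.
measured = 101.325                  # hypothetical reading, e.g. pressure in kPa
rel_bound = 0.005 / 100.0           # relative error bound (0.005 per cent)
abs_bound = abs(measured) * rel_bound
lo, hi = measured - abs_bound, measured + abs_bound
print(f"true value lies somewhere in [{lo:.6f}, {hi:.6f}] kPa")
```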
Abstract:
With the liberalisation of the electricity market, it has become very important to determine which participants make use of the transmission network. Transmission line usage computation requires information on generator-to-load contributions and on the paths used by the various generators to meet loads and losses. In this study the relative electrical distance (RED) concept is used to compute reactive power contributions from various sources, such as generators, switchable volt-ampere reactive (VAR) sources and line charging susceptances scattered throughout the network, to meet the system demands. The contribution of transmission line charging susceptances to the system reactive flows, and the extent to which they reduce reactive generation at the generator buses, are discussed in this paper. Reactive power transmission cost evaluation is also carried out. The proposed approach is compared with other approaches, viz., proportional sharing and the modified Y-bus. Detailed case studies with base-case and optimised results are carried out on a sample 8-bus system; the IEEE 39-bus system and a practical 72-bus system, an equivalent of the Indian Southern grid, are also considered for illustration, and results are discussed.
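For contrast with RED, the proportional-sharing principle used in the comparison can be illustrated with a toy single-bus example (all numbers hypothetical): every MVAr leaving a bus is assumed to carry the same mix of sources as the total inflow to that bus.

```python
# Toy proportional-sharing allocation at one bus (hypothetical numbers).
inflows = {"G1": 60.0, "G2": 40.0}            # MVAr entering the bus by source
total_in = sum(inflows.values())
outflows = {"line_A": 70.0, "line_B": 30.0}   # MVAr leaving the same bus

for line, f in outflows.items():
    # each outflow carries sources in proportion to their share of the inflow
    share = {src: f * v / total_in for src, v in inflows.items()}
    print(line, share)
# line_A {'G1': 42.0, 'G2': 28.0}
# line_B {'G1': 18.0, 'G2': 12.0}
```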
Abstract:
An interaction analysis has been conducted to study the effects of a local loss of support beneath the beam footing of a two-bay plane frame. The results indicate that the magnitudes of the increases in bending moment and axial force in the structure due to the presence of a void depend not only on the extent of support loss, but also on the relative stiffnesses between the foundation beam and the soil, and between the superstructure and the soil. The increase in bending moment, even for a void spanning 1/12 of the foundation beam length, can be significant enough to exceed the safety provisions. The study shows that the effect of a void on the superstructure moments can be greatly minimized by a combination of a rigid foundation and a flexible superstructure.
Abstract:
Adsorption of CO has been investigated on the surfaces of polycrystalline transition metals as well as alloys by employing electron energy loss spectroscopy (EELS) and ultraviolet photoelectron spectroscopy (UPS). CO adsorbs on polycrystalline transition metal surfaces at a multiplicity of sites, each associated with a characteristic CO stretching frequency; the relative intensities vary with temperature as well as coverage. Whilst at low temperatures (80-120 K) low coordination sites are stabilized, the higher coordination sites are stabilized at higher temperatures (270-300 K). Adsorption on surfaces of polycrystalline alloys gives characteristic stretching frequencies due to the constituent metal sites. Alloying, however, causes a shift in the stretching frequencies, indicating the effect of the band structure on the nature of adsorption. The UPS data provide confirmatory evidence for the existence of separate metal sites in the alloys as well as for the high-temperature and low-temperature phases of adsorbed CO.
Abstract:
Energy loss spectra of superconducting YBa2Cu3O6.9, Bi1.5Pb0.5Ca2.5Sr1.5Cu3O10+δ and Tl2CaBa2Cu3O8 obtained at primary electron energies in the 170–310 eV range show features reflecting the commonalities in their electronic structures. The relative intensity of the plasmon peak shows a marked drop across the transition temperature. Secondary electron emission spectra of the cuprates also reveal some features of the electronic structure.
Abstract:
Process control systems are designed for a closed-loop peak magnitude of 2 dB, which corresponds to a damping coefficient (ζ) of approximately 0.5. With this specified constraint, the designer should choose and/or design the loop components to maintain a constant relative stability. However, the manipulative variable in almost all chemical processes is the flow rate of a process stream. Since the gains and time constants of the process are functions of the manipulative variable, a constant relative stability cannot be maintained. Until now, this problem has been overcome either by selecting proper control valve flow characteristics or by gain scheduling of controller parameters. Nevertheless, if the wrong control valve is selected, one must either accept a large loss in controllability or risk an unstable control system. To overcome these problems, a compensator device that can restore the relative stability of the control system is proposed. This compensator is similar to a dynamic nonlinear controller that has both online and offline information on several factors related to the control system. The design and analysis of the proposed compensator are discussed in this article. Finally, the performance of the compensator is validated by applying it to a two-tank blending process. It is observed that, by using the compensator, the relative stability of the control system can be restored to a great extent despite changes in the manipulative flow rate.
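The quoted 2 dB ↔ ζ ≈ 0.5 correspondence can be checked against the standard resonant-peak formula for an underdamped second-order closed loop (a textbook fact, not something derived in this paper):

```python
import math

def peak_db(zeta):
    """Resonant peak (dB) of G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2),
    valid for zeta < 1/sqrt(2): Mr = 1 / (2*zeta*sqrt(1 - zeta^2))."""
    mr = 1.0 / (2.0 * zeta * math.sqrt(1.0 - zeta ** 2))
    return 20.0 * math.log10(mr)

for zeta in (0.40, 0.45, 0.50):
    print(f"zeta = {zeta:.2f} -> closed-loop peak = {peak_db(zeta):.2f} dB")
# A 2 dB peak falls between zeta = 0.40 and 0.45, i.e. roughly 0.5,
# consistent with the design rule quoted in the abstract.
```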
Abstract:
This paper considers a firm real-time M/M/1 system, where jobs have stochastic deadlines that apply until the end of service. A method is presented for approximating the loss ratio of the earliest-deadline-first (EDF) scheduling policy with exit control through the early-discarding technique. The approximation uses the arrival rate and the mean relative deadline, normalized with respect to the mean service time, for exponential and uniform distributions of relative deadlines. Simulations show that the maximum approximation error is less than 4% and 2% for the two distributions, respectively, over a wide range of arrival rates and mean relative deadlines.
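A rough event-driven sketch of one plausible reading of this system is given below; the paper's exact discipline (e.g., preemptive vs. non-preemptive EDF, the precise discarding rule) may differ, so this is illustrative only.

```python
import heapq
import random

def edf_loss_ratio(lam, mu, mean_d, n_jobs=200_000, seed=42):
    """Loss ratio of a firm real-time M/M/1-style queue under a simplified
    NON-preemptive EDF with early discarding. Relative deadlines are
    exponential with mean mean_d and apply until the END of service."""
    rng = random.Random(seed)
    arr, t = [], 0.0
    for _ in range(n_jobs):                 # Poisson arrivals, rate lam
        t += rng.expovariate(lam)
        arr.append((t, t + rng.expovariate(1.0 / mean_d)))
    i, now = 0, 0.0                         # next arrival index, server clock
    heap = []                               # waiting jobs keyed by deadline
    met = lost = 0
    while i < n_jobs or heap:
        while i < n_jobs and arr[i][0] <= now:   # admit arrivals up to now
            heapq.heappush(heap, arr[i][1])
            i += 1
        if not heap:                        # idle: jump to the next arrival
            now = arr[i][0]
            continue
        dl = heapq.heappop(heap)            # EDF: earliest absolute deadline
        if dl <= now:                       # early discard: expired in queue
            lost += 1
            continue
        finish = now + rng.expovariate(mu)  # exp(mu) service
        if finish <= dl:
            met += 1
            now = finish
        else:                               # firm job: aborted at its deadline
            lost += 1
            now = dl
    return lost / n_jobs

# arrival rate 0.9, unit service rate, mean relative deadline 5 (normalized)
print(edf_loss_ratio(0.9, 1.0, 5.0))
```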
Abstract:
The problem of bipartite ranking, where instances are labeled positive or negative and the goal is to learn a scoring function that minimizes the probability of mis-ranking a pair of positive and negative instances (or equivalently, that maximizes the area under the ROC curve), has been widely studied in recent years. A dominant theoretical and algorithmic framework for the problem has been to reduce bipartite ranking to pairwise classification; in particular, it is well known that the bipartite ranking regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using the usual regret bounds for classification problems. Recently, Kotlowski et al. (2011) showed regret bounds for bipartite ranking in terms of the regret associated with balanced versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we show that such (non-pairwise) surrogate regret bounds for bipartite ranking can be obtained in terms of a broad class of proper (composite) losses that we term strongly proper. Our proof technique is much simpler than that of Kotlowski et al. (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including for example the logistic, exponential, squared and squared hinge losses as special cases. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact consistent for bipartite ranking; moreover, our results allow us to quantify the bipartite ranking regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate bounds under certain low-noise conditions via a recent result of Clemencon and Robbiano (2011).
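Since the logistic loss is one of the strongly proper losses covered, the consistency claim can be exercised with a standard pipeline (a sketch only; dataset and parameters are arbitrary): minimize the non-pairwise logistic loss, then rank by the learned real-valued scores and measure the AUC.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data: the point is only the shape of the pipeline.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Minimize a (non-pairwise) strongly proper loss: the logistic loss.
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)

# Rank by real-valued scores (not predicted labels) and evaluate the AUC,
# the quantity whose regret the surrogate bounds control.
scores = clf.decision_function(Xte)
print("test AUC:", roc_auc_score(yte, scores))
```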
Abstract:
This paper derives outer bounds for the 2-user symmetric linear deterministic interference channel (SLDIC) with limited-rate transmitter cooperation and perfect secrecy constraints at the receivers. Five outer bounds are derived, under different assumptions of providing side information to receivers and partitioning the encoded message/output depending on the relative strength of the signal and the interference. The usefulness of these outer bounds is shown by comparing the bounds with the inner bound on the achievable secrecy rate derived by the authors in a previous work. Also, the outer bounds help to establish that sharing random bits through the cooperative link can achieve the optimal rate in the very high interference regime.
Abstract:
This paper derives outer bounds on the sum rate of the K-user MIMO Gaussian interference channel (GIC). Three outer bounds are derived, under different assumptions of cooperation and providing side information to receivers. The novelty in the derivation lies in the careful selection of side information, which results in the cancellation of the negative differential entropy terms containing signal components, leading to a tractable outer bound. The overall outer bound is obtained by taking the minimum of the three outer bounds. The derived bounds are simplified for the MIMO Gaussian symmetric IC to obtain outer bounds on the generalized degrees of freedom (GDOF). The relative performance of the bounds yields insight into the performance limits of multiuser MIMO GICs and the relative merits of different schemes for interference management. These insights are confirmed by establishing the optimality of the bounds in specific cases using an inner bound on the GDOF derived by the authors in a previous work. It is also shown that many of the existing results on the GDOF of the GIC can be obtained as special cases of the bounds, e.g., by setting K = 2 or the number of antennas at each user to 1.
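For reference, the best-known special case the abstract alludes to (K = 2, one antenna per node, no cooperation) is the Etkin-Tse-Wang "W-curve" for the symmetric per-user GDOF, with α = log INR / log SNR; this is quoted here from the wider literature, not derived in this paper:

```latex
d_{\mathrm{sym}}(\alpha) =
\begin{cases}
  1-\alpha,            & 0 \le \alpha \le \tfrac{1}{2} \\
  \alpha,              & \tfrac{1}{2} \le \alpha \le \tfrac{2}{3} \\
  1-\tfrac{\alpha}{2}, & \tfrac{2}{3} \le \alpha \le 1 \\
  \tfrac{\alpha}{2},   & 1 \le \alpha \le 2 \\
  1,                   & \alpha \ge 2
\end{cases}
```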
Abstract:
An analytical and experimental study of the hydraulic jump in stilling basins with an abrupt drop and a sudden enlargement, called the spatial B-jump here, is carried out to find the sequent depth ratio and the resulting energy dissipation. The spatial B-jump studied has its toe downstream of the expansion section, and the streamlines at the toe are characterized by downward curvature. An expression is obtained for the sequent depth ratio based on the momentum equation, with suitable assumptions for the extra pressure force term arising from the abrupt drop in the bed and the sudden enlargement in the basin width. Predictions compare favorably with experiments. It is shown that the spatial B-jump needs less tailwater depth, thereby enhancing the stability of the jump, when compared either with the spatial jump, which forms in suddenly expanding channels, or with the B-jump, which forms in a channel with an abrupt drop in the bed. It is also shown that there is a significant increase in relative energy loss for the spatial B-jump compared with either the spatial jump or the B-jump alone.
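For orientation, the classical hydraulic jump relations that the spatial B-jump analysis generalizes (standard open-channel results for a horizontal rectangular channel; the paper adds an extra pressure-force term for the drop and the enlargement):

```latex
\frac{y_2}{y_1} = \frac{1}{2}\left(\sqrt{1 + 8F_1^2} - 1\right),
\qquad
\Delta E = \frac{(y_2 - y_1)^3}{4\,y_1 y_2},
\qquad
\text{relative energy loss} = \frac{\Delta E}{E_1},
\quad E = y + \frac{V^2}{2g}
```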
Abstract:
In a classic study, Kacser & Burns (1981, Genetics 97, 639-666) demonstrated that, given certain plausible assumptions, the flux in a metabolic pathway was more or less indifferent to the activity of any of the enzymes in the pathway taken singly. It was inferred from this that the observed dominance of most wild-type alleles with respect to loss-of-function mutations did not require an adaptive, meaning selectionist, explanation. Cornish-Bowden (1987, J. theor. Biol. 125, 333-338) showed that the Kacser-Burns inference was not valid when substrate concentrations were large relative to the relevant Michaelis constants. We find that in a randomly constructed functional pathway, even when substrate levels are small, one can expect high values of control coefficients for metabolic flux in the presence of significant nonlinearities, as exemplified by enzymes with Hill coefficients ranging from two to six, or by the existence of oscillatory loops. Under these conditions the flux can be quite sensitive to changes in enzyme activity, such as might be caused by inactivating one of the two alleles in a diploid. Therefore, the phenomenon of dominance cannot be a trivial "default" consequence of physiology but must be intimately linked to the manner in which metabolic networks have been moulded by natural selection.
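In the notation of metabolic control analysis (standard definitions going back to Kacser & Burns, not new to this paper), the flux control coefficient at issue, its summation theorem, and the Hill kinetics supplying the nonlinearity are:

```latex
C_{E_i}^{J} = \frac{E_i}{J}\,\frac{\partial J}{\partial E_i}
            = \frac{\partial \ln J}{\partial \ln E_i},
\qquad
\sum_i C_{E_i}^{J} = 1,
\qquad
v = \frac{V_{\max}\,[S]^{h}}{K_{0.5}^{h} + [S]^{h}} \quad (h = 2,\dots,6)
```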
Abstract:
The measurement of the surface energy balance over a land surface in an open area in Bangalore is reported. All variables needed to calculate the surface energy balance on time scales longer than a week are measured. Components of the radiative fluxes are measured directly, while sensible and latent heat fluxes are obtained by the bulk method using measurements made at two levels on a 10 m micrometeorological tower. The bulk flux formulation is verified by comparing its fluxes with direct fluxes computed from sonic anemometer data sampled at 10 Hz. Soil temperature is measured at 4 depths. Data have been collected continuously for over 6 months, covering the pre-monsoon and monsoon periods of 2006. The study first addresses the issue of obtaining the fluxes accurately. It is shown that the water vapour measurements are the most crucial. A bias of 0.25% in relative humidity, which is well above the normal accuracy assumed by the manufacturers but achievable in the field using a combination of laboratory calibration and field intercomparisons, results in a change of about 20 W m^(-2) in the latent heat flux on the seasonal time scale. On the seasonal time scale, the net longwave radiation is the largest energy loss term at the experimental site. The seasonal variation in the energy sink term is small compared to that in the energy source term.
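A back-of-envelope check (illustrative constants only, not the paper's values) of why a 0.25% relative-humidity bias matters for a two-level bulk/profile latent heat flux: the bias enters directly through q = RH · q_sat(T), and LE scales linearly with the humidity difference between levels.

```python
import math

rho, Lv, kappa = 1.2, 2.5e6, 0.4          # air density, latent heat, von Karman
U, z1, z2 = 3.0, 1.0, 10.0                # wind speed and measurement heights (m)
K = kappa**2 * U / math.log(z2 / z1)**2   # neutral bulk transfer velocity (m/s)

T, p = 30.0, 950.0                        # air temperature (deg C), pressure (hPa)
e_sat = 6.112 * math.exp(17.67 * T / (T + 243.5))  # Tetens formula (hPa)
q_sat = 0.622 * e_sat / p                 # saturation specific humidity (kg/kg)

dRH = 0.0025                              # a 0.25% relative-humidity bias
dLE = rho * Lv * K * dRH * q_sat          # flux error from that bias alone
print(f"LE bias ~ {dLE:.0f} W/m^2")       # order 20 W m^-2, as in the abstract
```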