18 results for Quality of Data
at Indian Institute of Science - Bangalore - India
Abstract:
Many next-generation distributed applications, such as grid computing, require a single source to communicate with a group of destinations. Traditionally, such applications are implemented using multicast communication. A typical multicast session requires creating the shortest-path tree to a fixed number of destinations. The fundamental issue in multicasting data to a fixed set of destinations is receiver blocking: if one of the destinations is not reachable, the entire multicast request (say, a grid task request) may fail. Manycasting is a generalized variation of multicasting that provides the freedom to choose the best subset of destinations from a larger set of candidate destinations. We propose an impairment-aware algorithm to provide manycasting service in the optical layer, specifically in optical burst switching (OBS). We compare the performance of our proposed manycasting algorithm with traditional multicasting and with multicast with over-provisioning. Our results show a significant improvement in blocking probability from implementing optical-layer manycasting.
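As a minimal sketch of the manycast idea described above (not the paper's impairment-aware algorithm): from a larger candidate set, pick the k best reachable destinations instead of failing when a fixed destination is blocked. The candidate list, reachability test and cost function below are illustrative assumptions.

```python
# Illustrative sketch of manycast destination selection: choose the k best reachable
# destinations out of a larger candidate set, rather than failing the whole request
# when one fixed destination is blocked.

def manycast_select(candidates, k, reachable, cost):
    """candidates: list of destination ids
    k: number of destinations actually required
    reachable: predicate -> True if a burst can currently be routed to the destination
    cost: function giving an impairment-aware path cost (e.g. hop count, OSNR penalty)
    Returns the chosen subset, or None if fewer than k candidates are usable."""
    usable = [d for d in candidates if reachable(d)]
    if len(usable) < k:
        return None  # blocked only if fewer than k of the candidates are usable
    return sorted(usable, key=cost)[:k]

# Example: a 3-of-5 manycast request with a toy cost table.
costs = {"d1": 2.0, "d2": 5.0, "d3": 1.5, "d4": 3.0, "d5": 4.0}
print(manycast_select(list(costs), k=3,
                      reachable=lambda d: d != "d2",   # pretend d2 is currently blocked
                      cost=costs.get))
```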
Abstract:
Electron Diffraction Structure Analysis (EDSA) with data from standard selected-area electron diffraction (SAED) is still the method of choice for structure determination of nano-sized single crystals. The recently determined heavy-atom structure α-Ti2Se (Albe & Weirich, 2003) is used as an example to illustrate the developed procedure for structure determination from two-dimensional SAED data via direct methods and kinematical least-squares refinement. Although the investigated crystallite had a relatively large effective thickness of about 230 Å, as determined from dynamical calculations, the structural model obtained from the SAED data was found to be in good agreement with the result of an earlier single-crystal X-ray study (Weirich, Pöttgen & Simon, 1996). Arguments supporting the validity of the quasi-kinematical approach used are given in the text. The influences of dynamical and secondary scattering on the quality of the data and the structure solution are discussed. Moreover, the usefulness of first-principles calculations for verifying results from EDSA is demonstrated with two examples, one of which concerns a structure that was unattainable by conventional X-ray diffraction.
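For orientation, the kinematical treatment referred to above relates measured reflection intensities to structure-factor amplitudes; the textbook form is shown below. The scale factor and weighting actually used in the refinement are not given in the abstract, so this is only the generic relation.

```latex
% Kinematical approximation: reflection intensities scale with the squared
% structure-factor amplitude, so amplitudes can be estimated as |F| ~ sqrt(I).
I_{hkl} \;\propto\; \lvert F_{hkl} \rvert^{2},
\qquad
F_{hkl} \;=\; \sum_{j} f_{j}\,
  \exp\!\bigl[\,2\pi i\,(h x_{j} + k y_{j} + l z_{j})\bigr]

% Kinematical least-squares refinement then minimizes a residual of the usual form
% (s is a scale factor between observed and calculated amplitudes):
R \;=\; \frac{\sum_{hkl} \bigl|\,|F_{hkl}^{\mathrm{obs}}| - s\,|F_{hkl}^{\mathrm{calc}}|\,\bigr|}
             {\sum_{hkl} |F_{hkl}^{\mathrm{obs}}|}
```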
Abstract:
Because of frequent topology changes and node failures, providing quality-of-service routing in mobile ad hoc networks is a very critical issue. Quality of service can be provided by routing the data along multiple paths. Such selection of multiple paths helps to improve reliability and load balancing, and reduces the delay introduced by route rediscovery in the presence of path failures. There are basically two issues in such multipath routing. First, the sender node needs to obtain accurate topology information; since the nodes are continuously roaming, obtaining exact topology information is a difficult task. Here, we propose an algorithm that constructs a highly accurate network topology with minimum overhead. The second issue is that the paths in the path set should offer the best reliability and network throughput. This is achieved in two ways: 1) by the choice of a proper metric, which is a function of residual power and of the traffic load on the node and in the surrounding medium, and 2) by allowing reliable links to be shared between different paths.
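The abstract names residual power and traffic load (at the node and in the surrounding medium) as inputs to the path metric but does not give its functional form. The weighted combination and bottleneck aggregation below are assumptions made purely for illustration.

```python
# Hedged sketch of a node-quality metric built from the inputs the abstract names:
# residual battery power and traffic load at the node and in the surrounding medium.
# The weights and the linear form are illustrative, not the paper's formula.

def node_metric(residual_power, node_load, medium_load,
                w_power=0.5, w_node=0.3, w_medium=0.2):
    """All inputs normalised to [0, 1]; a higher return value means a more desirable node."""
    return (w_power * residual_power
            - w_node * node_load
            - w_medium * medium_load)

def path_metric(path_nodes):
    """A path is only as good as its weakest node (bottleneck-style aggregation)."""
    return min(node_metric(*n) for n in path_nodes)

# Rank candidate paths; reliable links may appear in several of them.
paths = {
    "P1": [(0.9, 0.2, 0.1), (0.7, 0.4, 0.3)],
    "P2": [(0.5, 0.1, 0.2), (0.8, 0.2, 0.2)],
}
best = max(paths, key=lambda p: path_metric(paths[p]))
print(best, path_metric(paths[best]))
```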
Abstract:
Femtocells are a new concept that improves the coverage and capacity of a cellular system. We consider the problem of channel allocation and power control for different users within a Femtocell. Knowing the available channels, the channel states and the rate requirements of the different users, the Femtocell base station (FBS) allocates the channels to the users so as to satisfy their requirements. The Femtocell should also use minimal power, so as to cause the least interference to neighbouring Femtocells and outside users. We develop efficient, low-complexity algorithms that can be used online by the Femtocell. The users may want to transmit data or voice. We compare our algorithms with the optimal solutions.
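The abstract does not spell out the allocation algorithms, so the following is only a simple greedy sketch under a Shannon-rate assumption: each user is given the still-free channel on which meeting its rate requirement costs the least transmit power. The bandwidth, noise density, gains and greedy order are all illustrative.

```python
import math

# Hedged sketch of a greedy channel assignment for a femtocell base station. The
# Shannon-rate model and the greedy order are assumptions; the paper's actual
# low-complexity algorithms are not reproduced here.

N0 = 1e-9        # noise power spectral density (W/Hz), illustrative
B = 180e3        # channel bandwidth (Hz), illustrative

def power_needed(rate_bps, gain):
    """Power to support rate_bps on a channel with this gain, from r = B*log2(1 + p*g/(N0*B))."""
    return (2 ** (rate_bps / B) - 1) * N0 * B / gain

def greedy_allocate(rate_req, gains):
    """rate_req: {user: required rate}; gains: {user: {channel: gain}}.
    Returns {user: (channel, power)} or None if some user cannot be served."""
    free = {ch for g in gains.values() for ch in g}
    alloc = {}
    # Serve the most demanding users first so they still find a favourable channel.
    for user in sorted(rate_req, key=rate_req.get, reverse=True):
        options = [(power_needed(rate_req[user], g), ch)
                   for ch, g in gains[user].items() if ch in free]
        if not options:
            return None
        p, ch = min(options)
        alloc[user] = (ch, p)
        free.discard(ch)
    return alloc

print(greedy_allocate({"u1": 500e3, "u2": 200e3},
                      {"u1": {"c1": 1e-3, "c2": 3e-3},
                       "u2": {"c1": 2e-3, "c2": 1e-3}}))
```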
Abstract:
We consider optimal power allocation policies for a single-server, multiuser system in which power is consumed only in the transmission of data. The transmission channel may experience multipath fading. We obtain very efficient, low-computational-complexity algorithms that minimize power and ensure stability of the data queues. We also obtain policies for the case when the users have mean-delay constraints. If the power required is a linear function of the rate, we exploit the linearity and obtain linear programs with low complexity.
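When power is linear in rate, a stable minimum-power rate allocation can indeed be posed as a small linear program. The sketch below is a generic formulation, with assumed fading-state probabilities, per-user power slopes and a per-state capacity; it is not the paper's exact program.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: choose per-fading-state rates that keep every user's queue stable
# (average service rate >= arrival rate) at minimum average power, given that power
# is linear in rate. All numbers below are illustrative.

pi = np.array([0.6, 0.4])              # probabilities of two fading states
c = np.array([[1.0, 3.0],              # c[i, s]: power per unit rate for user i in state s
              [2.0, 1.5]])
arrivals = np.array([0.8, 0.5])        # mean arrival rate into each user's queue
C = 2.0                                # total service rate available in any one state

n_users, n_states = c.shape
# Decision variables x[i, s] = rate given to user i while the channel is in state s,
# flattened user-major.
cost = (c * pi).ravel()                # average power = sum_{i,s} pi_s * c_{i,s} * x_{i,s}

# Stability: average service rate of each user must cover its arrival rate.
A_stab = np.zeros((n_users, n_users * n_states))
for i in range(n_users):
    A_stab[i, i * n_states:(i + 1) * n_states] = -pi    # -sum_s pi_s x_{i,s} <= -a_i

# Capacity: rates handed out within one state cannot exceed the server capacity.
A_cap = np.zeros((n_states, n_users * n_states))
for s in range(n_states):
    A_cap[s, s::n_states] = 1.0

res = linprog(cost,
              A_ub=np.vstack([A_stab, A_cap]),
              b_ub=np.concatenate([-arrivals, np.full(n_states, C)]),
              bounds=(0, None))
print(res.x.reshape(n_users, n_states), res.fun)
```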
Abstract:
Streamflow forecasts at a daily time scale are necessary for effective management of water resources systems. Typical applications include flood control, water quality management, water supply to multiple stakeholders, hydropower and irrigation systems. Conventionally, physically based conceptual models and data-driven models are used for forecasting streamflows. Conceptual models require a detailed understanding of the physical processes governing the system being modeled; major constraints in developing effective conceptual models are a sparse hydrometric gauge network and short historical records that limit our understanding of those processes. Data-driven models, on the other hand, rely solely on previous hydrological and meteorological data without directly taking the underlying physical processes into account. Among the various data-driven models, Auto-Regressive Integrated Moving Average (ARIMA) models and Artificial Neural Networks (ANNs) are the most widely used techniques. The present study assesses the performance of ARIMA and ANN methods in arriving at one- to seven-day-ahead forecasts of daily streamflows at the Basantpur streamgauge site, situated upstream of Hirakud Dam in the Mahanadi river basin, India. The ANNs considered include a Feed-Forward back-propagation Neural Network (FFNN) and a Radial Basis Neural Network (RBNN). Daily streamflow forecasts at the Basantpur site find use in the management of water from Hirakud reservoir. (C) 2015 The Authors. Published by Elsevier B.V.
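A minimal sketch of the ARIMA side of such a comparison is shown below: fit an ARIMA model to a daily streamflow series and produce a 1- to 7-day-ahead forecast. The synthetic series and the (2, 1, 1) order are placeholders, not the configuration used in the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hedged sketch: multi-step-ahead streamflow forecasting with ARIMA.
rng = np.random.default_rng(0)
flow = 100 + np.cumsum(rng.normal(0, 5, size=730))   # stand-in for two years of daily flows

model = ARIMA(flow, order=(2, 1, 1))   # (p, d, q) chosen for illustration only
fitted = model.fit()
print(fitted.forecast(steps=7))        # forecasts for lead times of 1 to 7 days
```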
Abstract:
Chips were produced by orthogonal cutting of a cast pure-magnesium billet on a lathe with three different tool rake angles, viz. -15 degrees, -5 degrees and +15 degrees. Chip consolidation by a solid-state recycling technique involved cold compaction followed by hot extrusion. The extruded products were characterized for microstructure and mechanical properties. Chip-consolidated products from the -15 degree rake-angle tool showed a 19% increase in tensile strength, a 60% reduction in grain size and a 12% increase in hardness compared to the +15 degree rake chip-consolidated product, indicating better chip bonding and grain refinement. The microstructure of the fractured specimens supports the above finding. Overall, the present work highlights the importance of tool rake angle in determining the quality of chip-consolidated products. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
An ad hoc network is composed of mobile nodes without any infrastructure. Recent trends in applications of mobile ad hoc networks rely on increasingly group-oriented services; hence multicast support is critical for ad hoc networks. We also need to provide service-differentiation schemes for different groups of users. An efficient application-layer multicast (APPMULTICAST) solution suitable for low-mobility applications in a MANET environment has been proposed in [10]. In this paper, we present an improved application-layer multicast solution suitable for medium-mobility applications in a MANET environment. We define multicast groups with low priority and high priority and incorporate a two-level service-differentiation scheme. We use network-layer support to build the overlay topology closer to the actual network topology, and we try to maximize the Packet Delivery Ratio. Through simulations we show that the control overhead of our algorithm is within acceptable limits and that it achieves an acceptable Packet Delivery Ratio for medium-mobility applications.
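One way to picture a two-level service-differentiation scheme of the kind described above is strict priority between the high- and low-priority multicast groups at each overlay node. The queue discipline below is an illustrative assumption, not the paper's exact mechanism.

```python
from collections import deque

# Hedged sketch: an overlay node keeps separate queues for high- and low-priority
# multicast groups and always forwards high-priority packets first.

class TwoLevelForwarder:
    def __init__(self):
        self.queues = {"high": deque(), "low": deque()}

    def enqueue(self, packet, group_priority):
        self.queues[group_priority].append(packet)

    def next_packet(self):
        # Strict priority: low-priority traffic is served only when no
        # high-priority packet is waiting.
        for level in ("high", "low"):
            if self.queues[level]:
                return self.queues[level].popleft()
        return None

fwd = TwoLevelForwarder()
fwd.enqueue("video-frame-1", "low")
fwd.enqueue("control-msg-1", "high")
print(fwd.next_packet())   # -> control-msg-1
```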
Abstract:
We consider the problem of wireless channel allocation to multiple users. A slot is given to the user with the highest metric (e.g., channel gain) in that slot. The scheduler may not know the channel states of all the users at the beginning of each slot; in this scenario, opportunistic splitting is an attractive solution. However, this algorithm requires that the metrics of the different users form independent, identically distributed (i.i.d.) sequences with the same distribution, and that their distribution and number be known to the scheduler, which limits the usefulness of opportunistic splitting. In this paper we develop a parametric version of the algorithm whose optimal parameters are learnt online through a stochastic approximation scheme. Our algorithm does not require the metrics of different users to have the same distribution; the statistics of these metrics and the number of users can be unknown and can vary with time, and each metric sequence can be Markov. We prove the convergence of the algorithm and show its utility by scheduling the channel to maximize its throughput while satisfying fairness and/or quality-of-service constraints.
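To give a flavour of a threshold learnt by stochastic approximation, the toy loop below nudges a contention threshold down after an idle slot and up after a collision, with a diminishing step size. The feedback model, step-size schedule and single-threshold parametrization are illustrative assumptions, not the paper's algorithm (which also resolves collisions by splitting within a slot).

```python
import random

# Hedged sketch: learn a contention threshold online from idle/collision/success feedback.
def run(num_slots=10000, n_users=8):
    theta = 0.5                      # contention threshold on the (normalised) metric
    successes = 0
    for t in range(1, num_slots + 1):
        metrics = [random.random() for _ in range(n_users)]   # stand-in user metrics
        contenders = sum(m > theta for m in metrics)
        step = 1.0 / t ** 0.6        # diminishing stochastic-approximation step size
        if contenders == 0:
            theta -= step            # nobody contended: lower the bar
        elif contenders > 1:
            theta += step            # collision: raise the bar
        else:
            successes += 1           # exactly one contender: slot used successfully
        theta = min(max(theta, 0.0), 1.0)
    return theta, successes / num_slots

print(run())
```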
Abstract:
The quality of tap water from the water supplies of 14 districts of Kerala state, India, was studied. Parameters such as pH, water temperature, total dissolved solids, salinity, nitrates, chloride, hardness, magnesium, calcium, sodium, potassium, fluoride, sulphate, phosphates and coliform bacteria were measured. The results showed that all water samples were contaminated by coliform bacteria. About 20% of the tap water samples from Alappuzha and 15% of the samples from Palakkad district were above the desirable limits prescribed by the Bureau of Indian Standards. Contamination of the source water (due to lack of community hygiene) and insufficient treatment are the major causes of the coliform contamination in the state. Water samples from Alappuzha and Palakkad have high ionic and fluoride content, which could be attributed to the geology of the region. Water supplied for drinking in rural areas is relatively free of contamination compared with water supplied in urban areas by municipalities, which may be attributed to the higher chance of contamination in urban areas due to mismanagement of solid and liquid wastes. The study highlights the need for regular bacteriological enumeration along with water-quality monitoring, in addition to setting up decentralised, region-specific improved treatment systems.
Abstract:
This paper describes techniques to estimate the worst case execution time of executable code on architectures with data caches. The underlying mechanism is Abstract Interpretation, which is used for the dual purposes of tracking address computations and cache behavior. A simultaneous numeric and pointer analysis using an abstraction for discrete sets of values computes safe approximations of access addresses which are then used to predict cache behavior using Must Analysis. A heuristic is also proposed which generates likely worst case estimates. It can be used in soft real time systems and also for reasoning about the tightness of the safe estimate. The analysis methods can handle programs with non-affine access patterns, for which conventional Presburger Arithmetic formulations or Cache Miss Equations do not apply. The precision of the estimates is user-controlled and can be traded off against analysis time. Executables are analyzed directly, which, apart from enhancing precision, renders the method language independent.
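As a pointer to what the Must Analysis step looks like, the sketch below follows the standard abstract-interpretation formulation for a fully associative LRU cache: an abstract state maps each block guaranteed to be cached to an upper bound on its age, joins keep only blocks present on every path (with the worse age bound), and an access is a guaranteed hit if the block is in the state. This is a textbook illustration, not the paper's combined numeric/pointer and cache analysis.

```python
# Hedged sketch of "must" cache analysis for a fully associative LRU cache.
ASSOCIATIVITY = 4

def must_join(state_a, state_b):
    """Join of two abstract must-cache states (dicts: block -> max age).
    Only blocks guaranteed cached on both paths survive, with the larger age bound."""
    return {blk: max(state_a[blk], state_b[blk])
            for blk in state_a.keys() & state_b.keys()}

def must_access(state, block):
    """Abstract transfer for an access: the accessed block becomes youngest; blocks with a
    smaller age bound than its old bound age by one, older blocks keep their bound.
    Returns the new state and whether the access is a guaranteed hit."""
    hit = block in state
    limit = state.get(block, ASSOCIATIVITY)   # if absent, every block ages by one
    new_state = {b: (age + 1 if age < limit else age) for b, age in state.items()
                 if (age + 1 if age < limit else age) < ASSOCIATIVITY}
    new_state[block] = 0
    return new_state, hit

s1 = {"A": 0, "B": 2}
s2 = {"A": 1, "C": 0}
joined = must_join(s1, s2)                   # only A is guaranteed cached after the join
print(joined, must_access(joined, "A")[1])   # -> {'A': 1} True
```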
Abstract:
A new breed of processors such as the Cell Broadband Engine, the Imagine stream processor and the various GPU processors emphasize data-level parallelism (DLP) and thread-level parallelism (TLP) as opposed to traditional instruction-level parallelism (ILP). This allows them to achieve order-of-magnitude improvements over conventional superscalar processors for many workloads. However, it is unclear how much parallelism of these types exists in current programs. Most earlier studies have largely concentrated on the amount of ILP in a program, without differentiating DLP or TLP. In this study, we investigate the extent of data-level parallelism available in programs in the MediaBench suite. By packing instructions in a SIMD fashion, we observe reductions of up to 91% (84% on average) in the number of dynamic instructions, indicating a very high degree of DLP in several applications.
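The sketch below illustrates the kind of measurement reported above: count how many dynamic instructions remain if independent instances of the same operation are packed into SIMD groups of a fixed width. The toy trace, the independence assumption and the SIMD width are illustrative; the study's actual packing methodology is not reproduced here.

```python
from collections import Counter

# Hedged sketch: dynamic-instruction reduction from SIMD packing of a toy trace.
def packed_reduction(trace, simd_width=4):
    """trace: list of opcode strings assumed mutually independent within a packing window."""
    by_op = Counter(trace)
    packed = sum(-(-count // simd_width) for count in by_op.values())  # ceil division
    return 1.0 - packed / len(trace)

trace = ["add"] * 40 + ["mul"] * 20 + ["load"] * 30 + ["branch"] * 10
print(f"dynamic-instruction reduction: {packed_reduction(trace):.0%}")
```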