21 results for Publicly Traded
Abstract:
In this paper we have proposed and implemented a joint Medium Access Control (MAC)-cum-routing scheme for environmental data-gathering sensor networks. The design principle trades maximization of node battery lifetime against a network that must tolerate (i) a known percentage of combined packet losses due to packet collisions, network synchronization mismatch and channel impairments, and (ii) significant end-to-end delays on the order of a few seconds. We have achieved this with a loosely synchronized network of sensor nodes that implement a Slotted-Aloha MAC state machine together with route information. The scheme has given encouraging results in terms of energy savings compared with other popular implementations: the overall packet loss is about 12%, and the battery-lifetime increase over B-MAC ranges from a minimum of 30% to about 90%, depending on the duty cycle.
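For intuition, a minimal simulation sketch of the tradeoff described here (all parameters are hypothetical, not taken from the paper): lowering the duty cycle saves listening energy but concentrates transmissions into fewer active slots, which raises Slotted-Aloha collision losses.

```python
import random

# A duty-cycled node is awake in a slot with probability `duty_cycle` and,
# when awake, transmits with probability `tx_prob` (Slotted Aloha). A slot
# with two or more transmitters loses every packet in it.
def simulate(num_nodes=50, slots=10_000, duty_cycle=0.05, tx_prob=0.5):
    delivered = collisions = awake_slots = 0
    for _ in range(slots):
        awake = [n for n in range(num_nodes) if random.random() < duty_cycle]
        awake_slots += len(awake)  # proxy for energy spent listening
        txers = [n for n in awake if random.random() < tx_prob]
        if len(txers) == 1:
            delivered += 1
        elif len(txers) > 1:
            collisions += len(txers)  # all colliding packets are lost
    attempts = delivered + collisions
    loss = collisions / attempts if attempts else 0.0
    return loss, awake_slots

for dc in (0.01, 0.05, 0.10):
    loss, energy = simulate(duty_cycle=dc)
    print(f"duty cycle {dc:.0%}: collision loss {loss:.1%}, awake-slots {energy}")
```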
Abstract:
In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: 1) a forward auction and 2) a reverse auction, each of which turns out to be a generalized knapsack problem. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve a forward auction and a reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions. Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near-optimal way in worst-case polynomial time.
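As an illustration of the quantity-determination step, a toy greedy matching heuristic follows; the curve encoding, the data and the greedy rule itself are illustrative assumptions, not the paper's heuristics.

```python
# Piecewise-constant marginal price curves are given as (quantity, unit_price)
# steps; buyers' marginal values decrease and sellers' marginal costs increase.
def marginal_units(curve):
    """Expand a piecewise-constant curve into a list of per-unit prices."""
    units = []
    for qty, price in curve:
        units.extend([price] * qty)
    return units

buyer_curves = [[(5, 10.0), (5, 8.0)], [(4, 9.0), (6, 6.0)]]   # hypothetical bids
seller_curves = [[(6, 5.0), (4, 7.0)], [(5, 6.5), (5, 9.0)]]   # hypothetical asks

# Greedy heuristic: repeatedly match the highest remaining buyer valuation
# with the lowest remaining seller ask while the trade is still profitable.
bids = sorted((p for c in buyer_curves for p in marginal_units(c)), reverse=True)
asks = sorted(p for c in seller_curves for p in marginal_units(c))

traded = 0
surplus = 0.0
for bid, ask in zip(bids, asks):
    if bid < ask:
        break
    traded += 1
    surplus += bid - ask

print(f"units traded: {traded}, total surplus: {surplus:.2f}")
```

For this toy instance with decreasing marginal values and increasing marginal costs, the greedy crossing finds the surplus-maximizing quantity; the harder part the paper addresses is then settling winners and prices via the two knapsack-like auctions.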
Abstract:
Relay selection combined with buffering of packets at relays can substantially increase the throughput of a cooperative network that uses rateless codes. However, buffering also increases the end-to-end delays due to the additional queuing delays at the relay nodes. In this paper we propose a novel method that exploits a unique property of rateless codes, namely that a receiver can decode a packet from non-contiguous and unordered portions of the received signal. In it, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that a packet is first decoded by a relay with a smaller queue. Thus, the queuing load is balanced across the relays and traded off against transmission times. We derive explicit necessary and sufficient conditions for the stability of this system when the various channels undergo fading. Despite encountering analytically intractable G/GI/1 queues in our system, we gain insights about the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.
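A hedged sketch of the queue-aware rule: each relay ignores incoming coded bits with a probability that grows with its queue length, so relays with shorter queues tend to accumulate enough bits to decode first. The linear drop probability and all parameters below are assumptions for illustration, not the paper's exact design.

```python
import random

K = 1000            # coded bits needed to decode one packet (rateless code)
queues = [2, 5, 9]  # current queue lengths at three relays
received = [0] * len(queues)

def drop_prob(queue_len, alpha=0.1):
    # Longer queue => higher probability of ignoring a received coded bit.
    return min(1.0, alpha * queue_len)

decoder = None
while decoder is None:
    for i, q in enumerate(queues):
        if random.random() >= drop_prob(q):
            received[i] += 1            # this relay keeps the coded bit
            if received[i] >= K:
                decoder = i
                break

print(f"relay {decoder} (queue length {queues[decoder]}) decodes first")
```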
Abstract:
Protein-protein docking programs typically perform four major tasks: (i) generation of docking poses, (ii) selection of a subset of poses, (iii) structural refinement of those poses and (iv) scoring and ranking for the final assessment of the true quaternary structure. Although the tasks can be integrated or performed in serial order, they are by nature modular, allowing an opportunity to substitute one algorithm with another. We have implemented two modular web services: (i) PRUNE, to select a subset of docking poses generated during the sampling search (http://pallab.serc.iisc.ernet.in/prune), and (ii) PROBE, to refine, score and rank them (http://pallab.serc.iisc.ernet.in/probe). The former uses a new interface-area-based edge-scoring function to eliminate > 95% of the poses generated during the docking search. In contrast to other multi-parameter-based screening functions, this single-parameter elimination reduces the computational time significantly, in addition to increasing the chances of selecting native-like models in the top-ranked list. The PROBE server ranks the pruned poses after structure refinement and scoring, using a regression model for geometric compatibility and normalized interaction energy. While web services similar to PROBE are rare, no web service akin to PRUNE has been described before. Both servers are publicly accessible and free to use.
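The single-parameter pruning step can be sketched as follows; the scoring function below is a random placeholder, not PRUNE's actual interface-area edge-score.

```python
import random

# Score every docking pose with one cheap interface-area-based number and
# keep only the top ~5%, avoiding a costlier multi-parameter screen on the
# full pose set. Pose data here is synthetic for illustration.
random.seed(1)
poses = [{"id": i, "interface_score": random.random()} for i in range(10_000)]

KEEP_FRACTION = 0.05   # i.e., eliminate > 95% of the sampled poses
poses.sort(key=lambda p: p["interface_score"], reverse=True)
survivors = poses[: int(len(poses) * KEEP_FRACTION)]

print(f"{len(survivors)} of {len(poses)} poses passed on to refinement/scoring")
```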
Abstract:
Given a parametrized n-dimensional SQL query template and a choice of query optimizer, a plan diagram is a color-coded pictorial enumeration of the execution plan choices of the optimizer over the query parameter space. These diagrams have proved to be a powerful metaphor for the analysis and redesign of modern optimizers, and are gaining currency in diverse industrial and academic institutions. However, their utility is adversely impacted by the impractically large computational overheads incurred when standard brute-force exhaustive approaches are used for producing fine-grained diagrams on high-dimensional query templates. In this paper, we investigate strategies for efficiently producing close approximations to complex plan diagrams. Our techniques are customized to the features available in the optimizer's API, ranging from generic optimizers that provide only the optimal plan for a query to those that also support costing of sub-optimal plans and enumeration of rank-ordered lists of plans. The techniques collectively feature both random and grid sampling, as well as inference techniques based on nearest-neighbor classifiers, parametric query optimization and plan cost monotonicity. Extensive experimentation with a representative set of TPC-H and TPC-DS-based query templates on industrial-strength optimizers indicates that our techniques are capable of delivering 90% accurate diagrams while incurring less than 15% of the computational overheads of the exhaustive approach. In fact, for full-featured optimizers, we can guarantee zero error with less than 10% overheads. These approximation techniques have been implemented in the publicly available Picasso optimizer visualization tool.
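One of the listed techniques, grid sampling followed by nearest-neighbor inference, can be sketched in a few lines; the toy `optimizer_plan` function below stands in for a real optimizer API call, and all numbers are hypothetical.

```python
def optimizer_plan(x, y):
    # Hypothetical optimizer: regions of a 2-D selectivity space map to plans.
    if x + y < 0.5:
        return "P1"
    return "P2" if x > y else "P3"

RES, STEP = 100, 10                     # full diagram resolution vs. grid stride

# Sample the optimizer on a coarse grid (100 calls instead of 10,000).
grid = {(i, j): optimizer_plan(i / RES, j / RES)
        for i in range(0, RES, STEP) for j in range(0, RES, STEP)}

def nearest_plan(i, j):
    # 1-NN inference: copy the plan of the closest sampled grid point.
    key = min(grid, key=lambda p: (p[0] - i) ** 2 + (p[1] - j) ** 2)
    return grid[key]

diagram = [[nearest_plan(i, j) for j in range(RES)] for i in range(RES)]
exact = sum(diagram[i][j] == optimizer_plan(i / RES, j / RES)
            for i in range(RES) for j in range(RES))
print(f"diagram accuracy: {exact / RES**2:.1%} using {len(grid)} optimizer calls")
```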
Abstract:
This paper describes techniques to estimate the worst-case execution time of executable code on architectures with data caches. The underlying mechanism is Abstract Interpretation, which is used for the dual purposes of tracking address computations and cache behavior. A simultaneous numeric and pointer analysis using an abstraction for discrete sets of values computes safe approximations of access addresses, which are then used to predict cache behavior using Must Analysis. A heuristic is also proposed which generates likely worst-case estimates. It can be used in soft real-time systems and also for reasoning about the tightness of the safe estimate. The analysis methods can handle programs with non-affine access patterns, for which conventional Presburger Arithmetic formulations or Cache Miss Equations do not apply. The precision of the estimates is user-controlled and can be traded off against analysis time. Executables are analyzed directly, which, apart from enhancing precision, renders the method language-independent.
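A highly simplified sketch of cache Must Analysis for a fully associative LRU cache is given below: the abstract state maps each memory block to an upper bound on its age, and a block whose bound stays below the associativity is guaranteed to be cached (a predicted hit). The representation and parameters are illustrative assumptions.

```python
ASSOC = 4  # cache associativity (number of ways)

def access(state, block):
    """Abstract LRU update: `block` becomes youngest; younger blocks age."""
    old_age = state.get(block, ASSOC)
    new = {b: (a + 1 if a < old_age else a) for b, a in state.items()}
    new = {b: a for b, a in new.items() if a < ASSOC}  # evicted bounds drop out
    new[block] = 0
    return new

def join(s1, s2):
    """Must join at control-flow merges: keep blocks in BOTH states, max age."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

# Example: block 'x' is accessed on both branches of a conditional, so an
# access to 'x' after the merge point is classified as a guaranteed hit.
then_state = access(access({}, "x"), "y")
else_state = access(access({}, "x"), "z")
merged = join(then_state, else_state)
print("x guaranteed in cache after merge:", "x" in merged)   # True => always hit
```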
Abstract:
Reduction of carbon emissions is of paramount importance in the context of global warming and climate change. Countries and global companies are now engaged in understanding systematic ways of solving carbon economics problems, aimed ultimately at achieving well-defined emission targets. This paper proposes mechanism design as an approach to solving carbon economics problems. The paper first introduces carbon economics issues in the world today and next focuses on carbon economics problems facing global industries. The paper identifies four problems faced by global industries: carbon credit allocation (CCA), carbon credit buying (CCB), carbon credit selling (CCS), and carbon credit exchange (CCE). It is argued that these problems are best addressed as mechanism design problems. The discipline of mechanism design is founded on game theory and is concerned with settings where a social planner faces the problem of aggregating the announced preferences of multiple agents into a collective decision, when the actual preferences are not known publicly. The paper provides an overview of mechanism design and presents the challenges involved in designing mechanisms with desirable properties. To illustrate the application of mechanism design in carbon economics, the paper describes in detail one specific problem, the carbon credit allocation problem.
Abstract:
When an Indian prime minister publicly admits that India has fallen behind China, it is news. Manmohan Singh's statement last January at the Indian Science Congress in Bhubaneswar that this is so with respect to scientific research, and that "India's relative position in the world of science has been declining", has rung alarm bells. Singh was not springing anything new on Indian scientists; many of us will admit that things are not well [1]. Recognizing the problem is the first step towards reversing this slide.
Abstract:
Electronic exchanges are double-sided marketplaces that allow multiple buyers to trade with multiple sellers, with aggregation of demand and supply across the bids to maximize the revenue in the market. Two important issues in the design of exchanges are (1) trade determination (determining the number of goods traded between any buyer-seller pair) and (2) pricing. In this paper we address the trade determination issue for one-shot, multi-attribute exchanges that trade multiple units of the same good. The bids are configurable with separable additive price functions over the attributes, and each function is continuous and piecewise linear. We model trade determination as mixed integer programming problems for different possible bid structures and show that even in two-attribute exchanges, trade determination is NP-hard for certain bid structures. We also make some observations on the pricing issues that are closely related to the mixed integer formulations.
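A brute-force toy version of trade determination conveys the problem structure (real instances are cast as mixed integer programs and can be NP-hard); the single-attribute setting, instance sizes and numbers below are hypothetical simplifications.

```python
from itertools import product

buyer_marginal = [[9, 7, 4], [8, 5, 3]]    # buyer b's value for units 1, 2, 3
seller_marginal = [[3, 6, 8], [4, 5, 9]]   # seller s's cost for units 1, 2, 3

def total(marginals, q):
    # Piecewise-linear total value/cost of trading q units.
    return sum(marginals[:q])

MAX = 3
best_surplus, best_q = -1, None
# q = (b0s0, b0s1, b1s0, b1s1): units traded on each buyer-seller link.
for q in product(range(MAX + 1), repeat=4):
    bought = [q[0] + q[1], q[2] + q[3]]    # per-buyer totals
    sold = [q[0] + q[2], q[1] + q[3]]      # per-seller totals
    if max(bought + sold) > MAX:
        continue
    surplus = (sum(total(buyer_marginal[b], bought[b]) for b in range(2))
               - sum(total(seller_marginal[s], sold[s]) for s in range(2)))
    if surplus > best_surplus:
        best_surplus, best_q = surplus, q

print(f"max surplus {best_surplus} with per-link trades {best_q}")
```

The exponential enumeration here is exactly what the mixed integer formulations avoid; with multiple attributes and general bid structures even the formulation itself becomes hard, as the abstract notes.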
Abstract:
Ampcalculator (AMPC) is a Mathematica-based program that was made publicly available some time ago by Unterdorfer and Ecker. It enables the user to compute several processes at one loop (up to O(p^4)) in SU(3) chiral perturbation theory, including matrix elements and form factors for strong and non-leptonic weak processes with at most six external states. It was used to compute some novel processes and was tested against well-known results by the original authors. Here we present the results of several thorough checks of the package; the exhaustive checks performed by the original authors are not publicly available, hence the present effort. Some new results are obtained from the software, especially in the kaon odd-intrinsic-parity non-leptonic decay sector involving the coupling G_27. Another illustrative set of amplitudes we provide is at tree level, in the context of tau decays with several mesons, including quark-mass effects, of use to the BELLE experiment. All eight meson-meson scattering amplitudes have been checked. The kaon Compton amplitude has been checked, and a minor error in the published results has been pointed out. This exercise is tutorial-based, wherein several input and output notebooks are also being made available as ancillary files on the arXiv. Some of the additional notebooks we provide contain explicit expressions that we have used for comparison with established results. The purpose is to encourage users to apply the software to suit their specific needs. An automatic amplitude generator of this type can provide error-free outputs that could be used as inputs for further simplification, and in varied scenarios such as applications of chiral perturbation theory at finite temperature, density and volume. It can also be used by students as a learning aid in low-energy hadron dynamics.
Abstract:
Luteal insufficiency affects fertility, and hence the study of mechanisms that regulate corpus luteum (CL) function is of prime importance for overcoming infertility problems. Exploration of the human genome sequence has helped in studying the frequency of single nucleotide polymorphisms (SNPs). The clinical benefits of screening SNPs in infertility are being well recognized in recent times. Examining SNPs in genes associated with the maintenance and regression of the CL may help to understand unexplained luteal insufficiency and related infertility. Publicly available microarray gene expression databases reveal the global gene expression patterns in the primate CL during its different functional states. We explore computationally the deleterious SNPs of human genes reported to be common targets of luteolysin and luteotropin in the primate CL. Different computational algorithms were used to dissect out the functional significance of SNPs in the luteinizing hormone-sensitive genes. The results raise the possibility that screening for SNPs might be integrated into the evaluation of luteal insufficiency associated with human female infertility in future studies. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
We have benchmarked the maximum obtainable recognition accuracy on five publicly available standard word image data sets using semi-automated segmentation and a commercial OCR. These images have been cropped from camera-captured scene images, born-digital images (BDI) and street view images. Using the MATLAB-based tool developed by us, we have annotated at the pixel level more than 3600 word images from the five data sets. The word images binarized by the tool, as well as by our own midline analysis and propagation of segmentation (MAPS) algorithm, are recognized using the trial version of Nuance Omnipage OCR, and these two results are compared with the best reported in the literature. The benchmark word recognition rates obtained on the ICDAR 2003, Sign evaluation, Street view, Born-digital and ICDAR 2011 data sets are 83.9%, 89.3%, 79.6%, 88.5% and 86.7%, respectively. The rates obtained from MAPS-binarized word images without the use of any lexicon are 64.5% and 71.7% for ICDAR 2003 and 2011, respectively; these values are higher than the best reported values in the literature, 61.1% and 41.2%, respectively. The MAPS result of 82.8% on the BDI 2011 data set matches the performance of the state-of-the-art method based on the power-law transform.
Abstract:
Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance and head pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions vary. The adaptation framework incorporates the reliability of the different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state of the art by 9.5% under the relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic, multi-view head-pose data set with ground truth, publicly available with this paper.
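A sketch of the weighted-distance idea follows: pose is assigned by a nearest-neighbor rule under a per-region weighted distance, and weights learned at a fixed position are re-scaled by per-region reliabilities for the new position. The weights, features and reliability scores are hypothetical stand-ins for quantities the paper learns from data.

```python
import numpy as np

REGIONS = 4                      # face split into appearance regions
rng = np.random.default_rng(0)

train_feats = rng.normal(size=(20, REGIONS, 8))   # labeled training examples
train_pose = rng.integers(0, 8, size=20)          # 8 discrete pose classes

w_fixed = rng.uniform(0.5, 1.5, size=REGIONS)     # learned at a fixed position
reliability = np.array([1.0, 0.8, 0.4, 0.9])      # per-region, new position
w_adapted = w_fixed * reliability                 # transfer/adaptation step

def weighted_dist(a, b, w):
    # Sum of per-region Euclidean distances, scaled by the region weights.
    return float(np.sum(w * np.linalg.norm(a - b, axis=-1)))

def predict(feat, w):
    d = [weighted_dist(feat, tf, w) for tf in train_feats]
    return train_pose[int(np.argmin(d))]

probe = rng.normal(size=(REGIONS, 8))
print("pose with fixed weights:  ", predict(probe, w_fixed))
print("pose with adapted weights:", predict(probe, w_adapted))
```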
Abstract:
Sport hunting is often proposed as a tool to support the conservation of large carnivores. However, it is challenging to provide tangible economic benefits from this activity as an incentive for local people to conserve carnivores. We assessed economic gains from sport hunting and poaching of leopards (Panthera pardus), the costs of leopard depredation of livestock, and the attitudes of people toward leopards in Niassa National Reserve, Mozambique. We sent questionnaires to hunting concessionaires (n = 8) to investigate the economic value of leopards and their importance relative to other key trophy-hunted species. We asked villagers (n = 158) the number of and prices for leopards poached in the reserve and the number of goats depredated by leopards. Leopards were the mainstay of the hunting industry; a single animal was worth approximately U.S.$24,000. Most safari revenues are retained at national and international levels, but poached leopards are traded illegally at the local level for small amounts ($83). Leopards depredated 11 goats over 2 years in 2 of the 4 surveyed villages, resulting in losses of $440 to 6 households. People in these households had negative attitudes toward leopards. Although leopard sport hunting generates larger gross revenues than poaching, illegal hunting provides higher economic benefits for the households involved in the activity. Sport-hunting revenues did not compensate for the economic losses of livestock at the household level. On the basis of our results, we propose that poaching be reduced by increasing the costs of apprehension and that the economic benefits from leopard sport hunting be used to improve community livelihoods and provide incentives not to poach.
Abstract:
P bodies are 100-300 nm sized organelles involved in mRNA silencing and degradation. A total of 60 human proteins have been reported to localize to P bodies. Several human SNPs contribute to complex diseases by altering the structure and function of proteins; SNPs also alter the binding sites of various transcription factors as well as splicing and miRNA regulatory sites. Owing to the essential functions of P bodies in mRNA regulation, we explored computationally the functional significance of SNPs in 7 P-body components: XRN1, DCP2, EDC3, CPEB1, GEMIN5, STAU1 and TRIM71. Computational analyses of the non-synonymous SNPs of these components were carried out using widely used, publicly available software programs: SIFT, followed by PolyPhen, PANTHER, MutPred, I-Mutant 2.0 and PhosSNP 1.0. The functional significance of non-coding SNPs in the regulatory regions was analysed using FastSNP. Utilizing the miRSNP database, we explored how SNPs alter the miRNA binding sites in the above-mentioned genes. Our in silico studies have identified various deleterious SNPs; this cataloguing is essential and gives first-hand information for further analysis by in vitro and in vivo methods, toward a better understanding of the maintenance, assembly and functional aspects of P bodies in both health and disease. (C) 2013 Elsevier B.V. All rights reserved.