18 results for user data
at Indian Institute of Science - Bangalore - India
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument, and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
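As a rough illustration of the objective sketched in this abstract, the snippet below computes one user's net payoff as utility minus shortfall disutility minus a price times a multiplicative congestion penalty; the specific functional forms (log utility, quadratic disutility and congestion penalty) and all parameter values are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def net_payoff(rate, contracted_rate, price, link_load, capacity):
    """Net payoff of one user at a given sending rate (illustrative forms)."""
    utility = np.log(1.0 + rate)                    # concave increasing utility
    shortfall = max(contracted_rate - rate, 0.0)
    disutility = shortfall ** 2                     # convex increasing in the shortfall
    congestion = (link_load / capacity) ** 2        # convex increasing congestion penalty
    return utility - disutility - price * rate * congestion

# A user would pick the rate that maximizes this payoff given the common price.
rates = np.linspace(0.0, 2.0, 201)
payoffs = [net_payoff(r, contracted_rate=1.0, price=0.5, link_load=1.5, capacity=2.0)
           for r in rates]
print(f"best-response rate: {rates[int(np.argmax(payoffs))]:.2f}")
```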
Abstract:
This paper is concerned with the integration of voice and data on an experimental local area network used by the School of Automation of the Indian Institute of Science. SALAN (School of Automation Local Area Network) consists of a number of microprocessor-based communication nodes linked to a shared coaxial cable transmission medium. The communication nodes handle the various low-level functions associated with computer communication, and interface user data equipment to the network. SALAN at present provides a file transfer facility between an Intel Series III microcomputer development system and a Texas Instruments Model 990/4 microcomputer system. A packet voice communication system has also been implemented on SALAN. The various aspects of the design and implementation of the above two utilities are discussed.
Abstract:
Search engine log files have been used to gather direct user feedback on the relevancy of the documents presented in the results page. Typically, the relative position of the clicks gathered from the log files is used as a proxy for direct user feedback. In this paper we identify reasons why the relative position of clicks is an incomplete signal for deciphering user preferences. Hence, we propose the use of the time spent by the user in reading through a document as an indicator of user preference for that document with respect to a query. We also identify the issues involved in using the time measure and propose means to address them.
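A minimal sketch of the dwell-time idea: approximate the time a user spends reading a clicked result by the gap until the user's next logged action in the same session. The log schema and timestamps below are hypothetical; sessionization and the issues with the time measure raised in the paper are not handled.

```python
from datetime import datetime

# Hypothetical log schema: (user_id, query, clicked_doc, timestamp).
log = [
    ("u1", "user data", "doc_a", "2023-01-01 10:00:00"),
    ("u1", "user data", "doc_b", "2023-01-01 10:03:30"),
    ("u1", "user data", "doc_c", "2023-01-01 10:04:00"),
]

def dwell_times(entries):
    """Approximate reading time of each clicked document by the gap to the next click."""
    times = [datetime.strptime(ts, "%Y-%m-%d %H:%M:%S") for *_, ts in entries]
    return {doc: (times[i + 1] - times[i]).total_seconds()
            for i, (_, _, doc, _) in enumerate(entries[:-1])}

print(dwell_times(log))   # doc_a held attention for ~210 s, doc_b for only ~30 s
```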
Abstract:
Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML is increasing, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to the user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task, with respect to accuracy and efficiency.
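A sketch of the dissemination step only: an incoming XML document is scored against a learnt user profile with a similarity metric and forwarded when the score crosses a threshold. The GA/SVM user-model learning is not reproduced here; the TF-IDF profile, sample documents, and the 0.2 threshold are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

profile = ["wireless scheduling fairness downlink ofdma"]       # learnt user interests (assumed)
stream = ["<doc>downlink scheduling with proportional fairness</doc>",
          "<doc>protein structure refinement pipeline</doc>"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(profile + stream)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()        # similarity to the profile

for doc, score in zip(stream, scores):
    if score > 0.2:                                              # illustrative dissemination threshold
        print(f"disseminate (score {score:.2f}): {doc}")
```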
Abstract:
We introduce a variation density function that profiles the relationship between multiple scalar fields over isosurfaces of a given scalar field. This profile serves as a valuable tool for multifield data exploration because it provides the user with cues to identify interesting isovalues of scalar fields. Existing isosurface-based techniques for scalar data exploration like Reeb graphs, contour spectra, isosurface statistics, etc., study a scalar field in isolation. We argue that the identification of interesting isovalues in a multifield data set should necessarily be based on the interaction between the different fields. We demonstrate the effectiveness of our approach by applying it to explore data from a wide variety of applications.
Abstract:
This paper describes an algorithm for constructing the solid model (boundary representation) from point data measured from the faces of the object. The point data is assumed to be clustered for each face. The algorithm does not require any computer model of the part to exist and does not require any topological information about the part to be input by the user. The property that a convex solid can be constructed uniquely from geometric input alone is utilized in the current work. Any object can be represented as a combination of convex solids. The proposed algorithm attempts to construct convex polyhedra from the given input. The polyhedra so obtained are then checked against the input data for containment, and those polyhedra that satisfy this check are combined (using the boolean union operation) to realise the solid model. Results of implementation are presented.
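A minimal sketch of the convex building-block idea under stated assumptions: form a candidate convex polyhedron from sampled surface points and test the remaining input points for containment. Per-face clustering, decomposition into several convex pieces, and the boolean union step are omitted; the random point cloud is a stand-in for measured data.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
points = rng.random((200, 3))                 # stand-in for measured point data

hull = ConvexHull(points)                     # candidate convex polyhedron
tri = Delaunay(points[hull.vertices])         # containment test via the hull's vertices

inside = tri.find_simplex(points) >= 0        # True where a point lies inside the polyhedron
print(f"{inside.mean():.0%} of the input points are contained in the candidate polyhedron")
```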
Abstract:
We develop an optimal, distributed, and low-feedback timer-based selection scheme to enable next generation rate-adaptive wireless systems to exploit multi-user diversity. In our scheme, each user sets a timer depending on its signal-to-noise ratio (SNR) and transmits a small packet to identify itself when its timer expires. When the SNR-to-timer mapping is monotone non-increasing, timers of users with better SNRs expire earlier. Thus, the base station (BS) simply selects the first user whose timer expiry it can detect, and transmits data to it at the highest rate it can sustain reliably. However, timers that expire too close to one another cannot be detected by the BS due to collisions. We characterize in detail the structure of the SNR-to-timer mapping that optimally handles these collisions to maximize the average data rate. We prove that the optimal timer values take only a discrete set of values, and that the rate adaptation policy strongly influences the optimal scheme's structure. The optimal average rate is very close to that of ideal selection, in which the BS always selects the highest-rate user, and is much higher than that of the popular, but ad hoc, timer schemes considered in the literature.
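A toy simulation of one timer-based selection round, assuming an inverse-CDF timer mapping over Rayleigh-fading SNRs and a fixed collision window; the paper instead derives an optimal mapping that takes only a discrete set of values, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, t_max, delta = 20, 10e-3, 0.5e-3        # users, max timer (s), collision window (s)

snr = rng.exponential(scale=1.0, size=n_users)   # i.i.d. SNRs under Rayleigh fading
timers = t_max * np.exp(-snr)                    # monotone non-increasing: better SNR expires earlier
order = np.argsort(timers)

selected = None
i = 0
while i < n_users:
    # Group together timers that expire within the collision window of each other.
    j = i
    while j + 1 < n_users and timers[order[j + 1]] - timers[order[j]] <= delta:
        j += 1
    if j == i:                                   # isolated expiry: the BS decodes this user
        selected = order[i]
        break
    i = j + 1                                    # colliding packets are lost; keep listening

print("selected user:", selected)
```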
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in the web server's data sending path by doing three things: (1) laying out the data in main memory in a way that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by the various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk data handling time spent during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a gain of 12% to 21% in throughput for static file serving and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
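The NABC system calls and library are not reproduced here. As a familiar stand-in for the copy-reduction idea on the send path, the sketch below serves a static file with the standard os.sendfile() call, so the bulk data never passes through user space; the HTTP framing is deliberately minimal.

```python
import os
import socket

def serve_static(conn: socket.socket, path: str) -> None:
    """Send one file over an accepted connection without copying it through user space."""
    size = os.path.getsize(path)
    conn.sendall(f"HTTP/1.0 200 OK\r\nContent-Length: {size}\r\n\r\n".encode())
    with open(path, "rb") as f:
        offset, remaining = 0, size
        while remaining > 0:
            sent = os.sendfile(conn.fileno(), f.fileno(), offset, remaining)
            if sent == 0:                     # peer closed the connection
                break
            offset += sent
            remaining -= sent
```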
Abstract:
A mathematical model has been developed for the gas carburising (diffusion) process using the finite volume method. A computer simulation has been carried out for an industrial gas carburising process. The model's predictions are in good agreement with industrial experimental data and with data collected from the literature. A study of various mass transfer and diffusion coefficients has been carried out in order to suggest which correlations should be used for the gas carburising process. The model has been interfaced in a Windows environment using a graphical user interface, which makes it extremely user-friendly. A sensitivity analysis of various parameters, such as the initial carbon concentration in the specimen, the carbon potential of the atmosphere, and the temperature of the process, has been carried out using the model.
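A minimal one-dimensional finite-volume sketch of carbon diffusion during carburising, with a mass-transfer boundary condition at the gas/surface interface. The diffusivity, mass-transfer coefficient, carbon potential, grid, and time step are illustrative assumptions, not the values or correlations studied in the paper.

```python
import numpy as np

n, dx, dt = 50, 2e-5, 0.05          # cells, cell width (m), time step (s)
D, beta = 2.5e-11, 1.0e-7           # carbon diffusivity (m^2/s), mass-transfer coeff. (m/s)
cp, c0 = 1.0, 0.2                   # atmosphere carbon potential and initial carbon (wt%)

c = np.full(n, c0)
for _ in range(int(3600 / dt)):     # one hour of carburising
    flux = np.zeros(n + 1)          # flux across each cell face, positive into the part
    flux[0] = beta * (cp - c[0])                 # surface face: mass transfer from the gas
    flux[1:n] = D * (c[:-1] - c[1:]) / dx        # interior faces: Fick's law
    c += dt / dx * (flux[:-1] - flux[1:])        # finite-volume balance for every cell

print(f"surface carbon after 1 h: {c[0]:.3f} wt%")
```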
Abstract:
PDB Goodies is a web-based graphical user interface (GUI) to manipulate Protein Data Bank files containing the three-dimensional atomic coordinates of protein structures. The program also allows users to save the manipulated three-dimensional atomic coordinate file on their local client system. The resulting coordinate fragments are used in various stages of structure elucidation and analysis. The software works with all the three-dimensional protein structures available in the Protein Data Bank, which presently holds approximately 18 000 structures. In addition, the program works on a three-dimensional atomic coordinate file (in Protein Data Bank format) uploaded from the client machine. The program is written using CGI/PERL scripts and is platform independent. PDB Goodies can be accessed over the World Wide Web at http://144.16.71.11/pdbgoodies/.
Abstract:
We consider a setting in which several operators offer downlink wireless data access services in a certain geographical region. Each operator deploys several base stations or access points, and registers some subscribers. In such a situation, if operators pool their infrastructure, and permit the possibility of subscribers being served by any of the cooperating operators, then there can be overall better user satisfaction, and increased operator revenue. We use coalitional game theory to investigate such resource pooling and cooperation between operators. We use utility functions to model user satisfaction, and show that the resulting coalitional game has the property that if all operators cooperate (i.e., form a grand coalition) then there is an operating point that maximizes the sum utility over the operators while providing the operators revenues such that no subset of operators has an incentive to break away from the coalition. We investigate whether such operating points can result in utility unfairness between users of the various operators. We also study other revenue sharing concepts, namely, the nucleolus and the Shapley value. Such investigations throw light on criteria for operators to accept or reject subscribers, based on the service level agreements proposed by them. We also investigate the situation in which only certain subsets of operators may be willing to cooperate.
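An illustrative Shapley-value computation for a hypothetical three-operator game; the coalition values are made-up numbers, not results from the paper, and the core and nucleolus calculations are not shown.

```python
from itertools import permutations

operators = ["A", "B", "C"]
value = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3, frozenset("C"): 2,
         frozenset("AB"): 9, frozenset("AC"): 8, frozenset("BC"): 6, frozenset("ABC"): 14}

def shapley(players, v):
    """Average each player's marginal contribution over all orders of joining the coalition."""
    share = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            share[p] += v[coalition | {p}] - v[coalition]   # marginal contribution of p
            coalition = coalition | {p}
    return {p: s / len(orders) for p, s in share.items()}

print(shapley(operators, value))   # each operator's average marginal contribution
```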
Abstract:
With the emergence of large-volume and high-speed streaming data, recent techniques for stream mining of CFIs (closed frequent itemsets) will become inefficient. When concept drift occurs at a slow rate in high-speed data streams, the rate of change of information across different sliding windows will be negligible. So the user will not miss changes in information if we slide the window by multiple transactions at a time. Therefore, we propose a novel approach for mining CFIs cumulatively by using a slide width (≥ 1 transaction) over high-speed data streams. However, it is nontrivial to mine CFIs cumulatively over a stream, because such growth may lead to the generation of an exponential number of candidates for closure checking. In this study, we develop an efficient algorithm, stream-close, for mining CFIs over a stream by exploring some interesting properties. Our performance study reveals that stream-close achieves good scalability and has promising results.
Abstract:
This paper describes techniques to estimate the worst-case execution time of executable code on architectures with data caches. The underlying mechanism is Abstract Interpretation, which is used for the dual purposes of tracking address computations and cache behavior. A simultaneous numeric and pointer analysis using an abstraction for discrete sets of values computes safe approximations of access addresses, which are then used to predict cache behavior using Must Analysis. A heuristic is also proposed which generates likely worst-case estimates. It can be used in soft real-time systems and also for reasoning about the tightness of the safe estimate. The analysis methods can handle programs with non-affine access patterns, for which conventional Presburger Arithmetic formulations or Cache Miss Equations do not apply. The precision of the estimates is user-controlled and can be traded off against analysis time. Executables are analyzed directly, which, apart from enhancing precision, renders the method language independent.
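A minimal sketch of the must-analysis abstract cache domain for a fully associative LRU cache: each block is mapped to an upper bound on its age, and a block whose bound stays below the associativity is a guaranteed hit. The 4-way associativity and the dictionary representation are assumptions for illustration; address analysis and the surrounding WCET computation are omitted.

```python
ASSOC = 4   # assumed associativity of a fully associative LRU cache

def access(state, block):
    """Abstract transformer for one access: younger blocks age by one, the block becomes newest."""
    bound = state.get(block, ASSOC)            # unknown blocks start from the worst bound
    new_state = {}
    for b, age in state.items():
        new_age = age + 1 if age < bound else age
        if new_age < ASSOC:                    # bounds reaching the associativity are dropped
            new_state[b] = new_age
    new_state[block] = 0
    return new_state

def join(s1, s2):
    """Control-flow merge: a block is guaranteed cached only if it is cached on both paths."""
    return {b: max(s1[b], s2[b]) for b in s1.keys() & s2.keys()}

state = {}
for b in ["x", "y", "x"]:
    state = access(state, b)
print(state)   # {'x': 0, 'y': 1}: further accesses to 'x' or 'y' are guaranteed hits
```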
Abstract:
Channel-aware assignment of subchannels to users in the downlink of an OFDMA system requires extensive feedback of channel state information (CSI) to the base station. Since bandwidth is scarce, schemes that limit feedback are necessary. We develop a novel, low feedback, distributed splitting-based algorithm called SplitSelect to opportunistically assign each subchannel to its most suitable user. SplitSelect explicitly handles multiple access control aspects associated with CSI feedback, and scales well with the number of users. In it, according to a scheduling criterion, each user locally maintains a scheduling metric for each subchannel. The goal is to select, for each subchannel, the user with the highest scheduling metric. At any time, each user contends for the subchannel for which it has the largest scheduling metric among the unallocated subchannels. A tractable asymptotic analysis of a system with many users is central to SplitSelect's simple design. Extensive simulation results demonstrate the speed with which subchannels and users are paired. The net data throughput, when the time overhead of selection is accounted for, is shown to be substantially better than several schemes proposed in the literature. We also show how fairness and user prioritization can be ensured by suitably defining the scheduling metric.
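A toy sketch of the pairing goal only: each user repeatedly contends for the unallocated subchannel on which it has its largest scheduling metric, and each contended subchannel goes to its highest-metric contender. The splitting-based contention resolution that achieves this with limited feedback, and the time overhead of selection, are not modelled; the metric values are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_subchannels = 6, 4
metric = rng.exponential(size=(n_users, n_subchannels))    # per-user, per-subchannel metrics

unallocated = set(range(n_subchannels))
assignment = {}
while unallocated:
    bids = {}
    for u in range(n_users):                               # each user bids on its best remaining subchannel
        chans = sorted(unallocated)
        best = chans[int(np.argmax(metric[u, chans]))]
        bids.setdefault(best, []).append(u)
    for ch, users in bids.items():                         # each contended subchannel goes to its best contender
        assignment[ch] = max(users, key=lambda u: metric[u, ch])
        unallocated.discard(ch)

print(assignment)   # subchannel -> selected user
```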