17 results for Internet users

at Indian Institute of Science - Bangalore - India


Relevance:

60.00%

Publisher:

Abstract:

We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize both. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems whose constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper reduce to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of Internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism, and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear program, for which several efficient solvers exist. This relaxation is enabled by constraint sampling, and, keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system; we demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of Internet users and other problems.
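As a concrete illustration of the constraint-sampling relaxation described above, here is a minimal Python sketch that approximates a semi-infinite LP, whose half-plane constraints are indexed by a type vector, by sampling finitely many types. The constraint map `constraint`, the type distribution, and the objective are illustrative stand-ins, not the paper's actual rebate-feasibility constraints.

```python
# A minimal sketch of constraint sampling for a semi-infinite LP:
# minimize c'x subject to a(theta)'x <= b(theta) for all theta, approximated
# by sampling finitely many theta. The constraint family below is hypothetical.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
dim = 5            # number of linear-rebate coefficients (illustrative)
n_samples = 2000   # number of sampled type vectors

def constraint(theta):
    """Hypothetical half-plane a(theta)'x <= b(theta) indexed by the types."""
    a = theta * theta.sum()      # a smooth, nonnegative map of the type vector
    b = 1.0 + theta.sum()
    return a, b

A_ub, b_ub = [], []
for _ in range(n_samples):
    theta = rng.uniform(0.0, 1.0, size=dim)   # types drawn from their support
    a_row, b_row = constraint(theta)
    A_ub.append(a_row)
    b_ub.append(b_row)

c = -np.ones(dim)  # illustrative objective; the relaxed problem is a plain LP
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 10.0)] * dim)
print(res.x)
```

With more samples, the solution of the relaxed LP violates the unsampled constraints with smaller probability; that is the "near-feasibility" guarantee the paper quantifies.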

Relevance:

60.00%

Publisher:

Abstract:

In this paper we examine the energy consumption of IP over optical WDM networks. As the number of Internet users increases, the Internet expands in reach and capacity, resulting in increased energy consumption of the network. Minimizing power consumption, termed "greening the Internet," is desirable to help service providers (SPs) operate their networks and deliver services more efficiently in terms of power. The operational power consumed typically depends on the strategy (e.g., lightpath bypass, lightpath non-bypass, and traffic grooming) and on where operations take place (electronic domain versus optical domain). We consider a typical optical backbone network and develop a model that minimizes power consumption. Performance calculations show that our method consumes less power than the traffic grooming approach.
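For intuition on why lightpath bypass saves power, here is a minimal single-path sketch under assumed per-device power figures; the numbers and the model are illustrative stand-ins, not the paper's network-wide optimization.

```python
# A minimal sketch contrasting lightpath non-bypass and bypass power on one
# path: with bypass, transit traffic stays in the optical domain, so the
# intermediate IP router ports and O/E/O conversions are avoided.
P_ROUTER_PORT = 1000.0   # W per IP router port (assumed figure)
P_TRANSPONDER = 73.0     # W per WDM transponder (assumed figure)

def path_power_watts(n_hops, bypass):
    if bypass:
        ports, transponders = 2, 2                    # IP processing at ends only
    else:
        ports, transponders = 2 * n_hops, 2 * n_hops  # O/E/O at every hop
    return ports * P_ROUTER_PORT + transponders * P_TRANSPONDER

for hops in (1, 3, 5):
    print(hops, path_power_watts(hops, bypass=False),
          path_power_watts(hops, bypass=True))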

Relevance:

30.00%

Publisher:

Abstract:

In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput at each AP is determined by the number of other users as well as the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic among all the available APs based on the throughput they obtain and the price charged; the users are thus involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme, which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) charges prices greater than those of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in throughput and in profit to the ISP.
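A minimal sketch of the multihoming idea, assuming a simple fluid throughput model thr_j(x_j) = C_j * x_j / (x_j + L_j) and a per-unit price p_j; both are stand-ins, and the paper's cost price mechanism and game analysis are richer than this one-user illustration.

```python
# One user's selfish traffic split across two APs via gradient ascent on its
# payoff: throughput obtained at each AP minus the price paid there.
import numpy as np

C = np.array([10.0, 6.0])    # AP capacities (assumed)
L = np.array([2.0, 1.0])     # fluid load from other users (assumed fixed)
p = np.array([0.3, 0.2])     # per-unit prices (assumed)

def payoff_gradient(x):
    # d/dx_j [ C_j * x_j / (x_j + L_j) - p_j * x_j ]
    return C * L / (x + L) ** 2 - p

x = np.array([1.0, 1.0])     # initial split of the user's traffic
for _ in range(1000):
    x = np.maximum(x + 0.05 * payoff_gradient(x), 0.0)  # stay nonnegative
print(x)  # interior optimum: x_j = sqrt(C_j * L_j / p_j) - L_j
```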

Relevance:

20.00%

Publisher:

Abstract:

The existing Internet computing resource, Biomolecules Segment Display Device (BSDD), has been updated with several additional useful features. An advanced option is provided to superpose the structural motifs obtained from a search of the Protein Data Bank (PDB), in order to see whether the three-dimensional structures adopted by identical or similar sequence motifs are the same. Furthermore, options to display structural aspects such as inter- and intra-molecular interactions, ion pairs, disulphide bonds, etc. have been provided. The updated resource is interfaced with an up-to-date copy of the public-domain PDB as well as 25% and 90% non-redundant sets of protein structures. Further, users can upload three-dimensional atomic coordinates (PDB format) from the client machine. A free molecular graphics program, Jmol, is interfaced with the resource to display the three-dimensional structures.
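As an illustration of the kind of motif superposition described above, here is a minimal sketch using Biopython's Superimposer on C-alpha atoms of two hypothetical motif files; BSDD's own server-side superposition pipeline is not reproduced here.

```python
# Superpose two structural motifs by least-squares fitting their C-alpha atoms.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
fixed = parser.get_structure("motif1", "motif1.pdb")    # hypothetical files
moving = parser.get_structure("motif2", "motif2.pdb")

ca_fixed = [a for a in fixed.get_atoms() if a.get_name() == "CA"]
ca_moving = [a for a in moving.get_atoms() if a.get_name() == "CA"]
n = min(len(ca_fixed), len(ca_moving))   # equal-length atom lists required

sup = Superimposer()
sup.set_atoms(ca_fixed[:n], ca_moving[:n])   # compute rotation/translation
sup.apply(list(moving.get_atoms()))          # move the second motif onto the first
print("RMSD (angstrom):", sup.rms)           # low RMSD => same 3D structure
```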

Relevance:

20.00%

Publisher:

Abstract:

We provide analytical models for capacity evaluation of an infrastructure IEEE 802.11-based network carrying TCP-controlled file downloads or full-duplex packet telephone calls. In each case, the analytical models utilize the attempt probabilities from a well-known fixed-point-based saturation analysis. For TCP-controlled file downloads, following Bruno et al. (Networking '04, LNCS 2042, pp. 626-637), we model the number of wireless stations (STAs) with ACKs as a Markov renewal process embedded at packet success instants. In our work, the evolution between the embedded instants is analyzed by using the saturation analysis to provide state-dependent attempt probabilities. We show that, in spite of its simplicity, our model works well, by comparing various simulated quantities, such as the collision probability, with values predicted by the model. Next, we consider N constant-bit-rate VoIP calls terminating at N STAs. We model the number of STAs that have an uplink voice packet as a Markov renewal process embedded at so-called channel slot boundaries; the evolution over a channel slot is analyzed using the saturation analysis as before. We find that, again, the AP is the bottleneck, and the system can support (in the sense of a bound on the probability of delay exceeding a given value) a number of calls less than that at which the arrival rate into the AP exceeds the average service rate applied to the AP. Finally, we extend the analytical model for VoIP calls to determine the call capacity of an 802.11b WLAN in a situation where VoIP calls originate from two different types of coders: N_1 calls from Type 1 codecs and N_2 calls from Type 2 codecs. For G.711 and G.729 voice coders, we show that the analytical model again provides accurate results in comparison with simulations.
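The fixed-point saturation analysis referred to above is, in essence, Bianchi-style: each of n saturated STAs attempts with probability tau(p), and the collision probability seen by a STA is p = 1 - (1 - tau)^(n-1). A minimal sketch with assumed 802.11b-like backoff parameters:

```python
# Fixed-point saturation analysis: solve tau = f(p), p = 1 - (1 - tau)^(n-1).
def attempt_prob(p, w=32, m=5):
    # Bianchi's attempt probability under binary exponential backoff with
    # minimum window w and m doubling stages (parameter values assumed).
    num = 2.0 * (1.0 - 2.0 * p)
    den = (1.0 - 2.0 * p) * (w + 1.0) + p * w * (1.0 - (2.0 * p) ** m)
    return num / den

def solve_fixed_point(n, iters=500):
    p = 0.1                                   # avoid the singular point p = 0.5
    for _ in range(iters):
        tau = attempt_prob(p)
        p = 0.5 * p + 0.5 * (1.0 - (1.0 - tau) ** (n - 1))  # damped update
    return tau, p

for n in (2, 5, 10, 20):
    tau, p = solve_fixed_point(n)
    print(f"n={n}: attempt prob={tau:.4f}, collision prob={p:.4f}")
```

The capacity models above plug such state-dependent attempt probabilities into the Markov renewal analysis.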

Relevance:

20.00%

Publisher:

Abstract:

Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of adaptive genetic algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to each user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task, with respect to both accuracy and efficiency.
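A minimal sketch of the multi-class SVM half of such a user model, using scikit-learn on toy XML snippets; the feature design and the adaptive-GA component are omitted, and the documents, classes, and labels below are purely illustrative.

```python
# Classify XML documents into interest classes with a multi-class SVM; the
# predicted class can then drive selective dissemination to matching users.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

docs = ["<doc><topic>cricket scores</topic></doc>",
        "<doc><topic>stock market report</topic></doc>",
        "<doc><topic>monsoon rainfall update</topic></doc>",
        "<doc><topic>cricket world cup</topic></doc>",
        "<doc><topic>market earnings call</topic></doc>",
        "<doc><topic>rainfall and flood warning</topic></doc>"]
labels = ["sports", "finance", "weather",
          "sports", "finance", "weather"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = SVC(kernel="linear").fit(X, labels)    # one-vs-one multi-class SVM

incoming = ["<doc><topic>cricket match preview</topic></doc>"]
print(clf.predict(vec.transform(incoming)))  # -> ['sports']
```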

Relevance:

20.00%

Publisher:

Abstract:

XML has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of a Self-Adaptive Migration Model Genetic Algorithm (SAMGA) [5] and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to each user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task, with respect to both accuracy and efficiency.

Relevance:

20.00%

Publisher:

Abstract:

Multimedia mining primarily involves information analysis and retrieval based on implicit knowledge. The ever-increasing digital image databases on the Internet have created a need for multimedia mining on these databases for effective and efficient retrieval of images. The contents of an image can be expressed in terms of different features, such as shape, texture, and intensity distribution (STI). Content-based image retrieval (CBIR) is the efficient retrieval of relevant images from large databases based on features extracted from the images. Most existing systems concentrate either on a single representation of all features or on a linear combination of these features. This paper proposes a CBIR system named STIRF (Shape, Texture, Intensity-distribution with Relevance Feedback) that uses a neural network for nonlinear combination of the heterogeneous STI features. Further, the system is self-adaptable to different applications and users based upon relevance feedback. Prior to retrieval of relevant images, each feature is first clustered independently of the others in its own space, which helps in matching similar images. Testing the system on a database of images with varied contents and intensive backgrounds showed good results, with the most relevant images being retrieved for an image query. The system showed better and more robust performance than existing CBIR systems.
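A minimal sketch of the nonlinear-combination step: per-feature similarity scores (shape, texture, intensity) are fed to a small neural network trained on relevance feedback. The similarity scores and feedback labels below are synthetic, and the paper's feature extractors and per-feature clustering stage are not reproduced.

```python
# Learn a nonlinear combination of heterogeneous per-feature similarities.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
sims = rng.uniform(0.0, 1.0, size=(300, 3))      # (shape, texture, intensity)
# synthetic "relevance feedback": a nonlinear function of the three scores
relevant = (0.5 * sims[:, 0] + 0.3 * sims[:, 1] ** 2
            + 0.2 * sims[:, 2] > 0.5).astype(float)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=0)
net.fit(sims, relevant)                          # learn the combination
print(net.predict(np.array([[0.9, 0.4, 0.7]])))  # combined relevance score
```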

Relevance:

20.00%

Publisher:

Abstract:

A detailed characterization of interference power statistics in CDMA systems is of considerable practical and theoretical interest. Such a characterization for uplink inter-cell interference has been difficult because of transmit power control, randomness in the number of interfering mobile stations, and randomness in their locations. We develop a new method to model the uplink inter-cell interference power as a lognormal distribution, and show that it is an order of magnitude more accurate than the conventional Gaussian approximation, even when the average number of mobile stations per cell is relatively large, and that it even outperforms the moment-matched lognormal approximation considered in the literature. The proposed method determines the lognormal parameters by matching its moment generating function with a new approximation of the moment generating function of the inter-cell interference. The method is tractable and exploits the elegant spatial Poisson process theory. Using several numerical examples, the accuracy of the proposed method in modeling the probability distribution of inter-cell interference is verified for both small and large values of interference.
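A minimal sketch of the MGF-matching idea: fit the lognormal parameters by equating its MGF (Laplace transform, computed via Gauss-Hermite quadrature) to a target MGF at two test points. The target below is a stand-in chosen so the fit can be checked, not the paper's Poisson-field inter-cell interference expression.

```python
# Fit lognormal (mu, sigma) by matching E[exp(-s I)] at two points s1, s2.
import numpy as np
from scipy.optimize import fsolve

gh_x, gh_w = np.polynomial.hermite.hermgauss(32)

def lognormal_mgf(s, mu, sigma):
    # E[exp(-s I)] for I ~ Lognormal(mu, sigma), via Gauss-Hermite quadrature
    vals = np.exp(-s * np.exp(np.sqrt(2.0) * sigma * gh_x + mu))
    return np.sum(gh_w * vals) / np.sqrt(np.pi)

def target_mgf(s):
    # stand-in target: an exactly lognormal interference, to verify recovery
    return lognormal_mgf(s, mu=0.5, sigma=0.8)

def equations(params, s1=0.1, s2=1.0):
    mu, sigma = params
    return [lognormal_mgf(s1, mu, sigma) - target_mgf(s1),
            lognormal_mgf(s2, mu, sigma) - target_mgf(s2)]

print(fsolve(equations, x0=[0.0, 1.0]))  # recovers approximately (0.5, 0.8)
```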

Relevance:

20.00%

Publisher:

Abstract:

Worldwide research in nanoelectronics is motivated by the fact that the scaling of MOSFETs by the conventional top-down approach will not continue forever, due to fundamental limits imposed by physics, even if it is delayed for some more years. The research community in this domain has become largely multidisciplinary, trying to discover novel transistor structures built with novel materials so that the semiconductor industry can continue to follow its projected roadmap. However, setting up and running a nanoelectronics facility for research is hugely expensive. A common model is therefore to set up a central networked facility that can be shared by a large number of users across the research community. The Centres of Excellence in Nanoelectronics (CEN) at the Indian Institute of Science, Bangalore (IISc) and the Indian Institute of Technology, Bombay (IITB) are such central networked facilities, set up in 2005 with funding of about USD 20 million from the Department of Information Technology (DIT), Ministry of Communications and Information Technology (MCIT), Government of India. The Indian Nanoelectronics Users Program (INUP) is a missionary program not only to spread awareness and provide training in nanoelectronics, but also to provide the wider nanoelectronics research community in India easy access to the latest facilities at CEN in IISc and IITB. This program, also funded by MCIT, aims to train researchers by conducting workshops and hands-on training programs, and by providing access to the CEN facilities. It is a unique program for expediting nanoelectronics research in the country, as projects proposed by researchers from around India have prior financial approval from the government and require only technical approval by the IISc/IITB team. This paper discusses the objectives of INUP, gives brief descriptions of the CEN facilities and the training programs conducted by INUP, and lists various research activities currently under way in the program.

Relevance:

20.00%

Publisher:

Abstract:

We consider a system comprising a finite number of nodes, with infinite packet buffers, that use unslotted ALOHA with Code Division Multiple Access (CDMA) to share a channel for transmitting packetised data. We propose a simple model for packet transmission and retransmission at each node, and show that the saturation throughput in this model yields a sufficient condition for the stability of the packet buffers; we interpret this as the capacity of the access method. We calculate and compare the capacities of CDMA-ALOHA (with and without code sharing) and TDMA-ALOHA; we also consider carrier-sensing and collision-detection versions of these protocols. In each case, the saturation throughput can be obtained via analysis of a continuous-time Markov chain. Our results show how saturation throughput degrades with code sharing. Finally, we also present some simulation results for mean packet delay. Our work is motivated by optical CDMA, in which "chips" can be optically generated, so the achievable chip rate can exceed the achievable TDMA bit rate, which is limited by electronics. Code sharing may be useful in the optical CDMA context as it reduces the number of optical correlators at the receivers. Our throughput results help to quantify by how much the CDMA chip rate should exceed the TDMA bit rate so that CDMA-ALOHA yields better capacity than TDMA-ALOHA.
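A minimal sketch of the CTMC step: with the state taken as the number of ongoing transmissions and an assumed toy generator (the paper's chains for the code-shared, carrier-sensing, and collision-detection variants differ), the stationary distribution yields the saturation throughput.

```python
# Saturation throughput from a small continuous-time Markov chain.
import numpy as np

n_codes = 3           # max simultaneous transmissions (assumption)
attempt_rate = 2.0    # aggregate attempt rate (assumption)
service_rate = 1.0    # per-transmission completion rate (assumption)

size = n_codes + 1
Q = np.zeros((size, size))
for k in range(size):
    if k < n_codes:
        Q[k, k + 1] = attempt_rate        # a new transmission starts
    if k > 0:
        Q[k, k - 1] = k * service_rate    # one of k transmissions completes
    Q[k, k] = -Q[k].sum()                 # rows of a generator sum to zero

# stationary distribution: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(size)])
b = np.zeros(size + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
throughput = sum(pi[k] * k * service_rate for k in range(size))
print(pi, throughput)
```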

Relevance:

20.00%

Publisher:

Abstract:

Operator perception influences ultrasound image acquisition and processing. Lower costs are attracting new users to medical ultrasound. Anticipating an increase in this trend, we conducted a study to quantify the variability in ultrasonic measurements made by novice users and to identify methods to reduce it. We designed a protocol with four presets and trained four new users to scan and manually measure the head circumference of a fetal phantom with an ultrasound scanner. In the first phase, the users followed this protocol in seven distinct sessions; they then received feedback on the quality of the scans from an expert. In the second phase, two of the users repeated the entire protocol aided by visual cues provided to them during scanning. We performed off-line measurements on all the images using a fully automated algorithm capable of measuring the head circumference from fetal phantom images. The ground truth (198.1 +/- 1.6 mm) was based on sixteen scans and measurements made by an expert. Our analysis shows that: (1) the inter-observer variability of manual measurements was 5.5 mm, whereas the inter-observer variability of automated measurements was only 0.6 mm, in the first phase; (2) the consistency of image appearance improved, and mean manual measurements were 4-5 mm closer to the ground truth, in the second phase; and (3) automated measurements were more precise, more accurate, and less sensitive to the different presets than manual measurements in both phases. Our results show that visual aids and automation can bring more reproducibility to ultrasonic measurements made by new users.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present the design of "e-SURAKSHAK," a novel cyber-physical health-care management system of Wireless Embedded Internet Devices (WEIDs) that sense vital health parameters. The system is capable of sensing body temperature, heart rate, and oxygen saturation level, and also allows noninvasive blood pressure (NIBP) measurement. End-to-end Internet connectivity is provided by a 6LoWPAN-based wireless network that uses the 802.15.4 radio. A service-oriented architecture (SOA) [1] is implemented to extract meaningful information and present it to the end user in an easy-to-understand form, instead of the raw data made available by the sensors. A central electronic database and health-care management software have been developed; vital health parameters are measured and stored periodically in the database. Further, support for real-time measurement of health parameters is provided through a web-based GUI. The system has been implemented completely and demonstrated with multiple users and multiple WEIDs.
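A minimal sketch of the periodic measure-and-store pattern described above, with a simulated WEID reading standing in for the real 6LoWPAN service call; the table schema and field names are illustrative.

```python
# Poll (simulated) WEIDs on a timer and store vitals in a central database.
import random
import sqlite3
import time

db = sqlite3.connect("vitals.db")
db.execute("""CREATE TABLE IF NOT EXISTS vitals
              (ts REAL, user_id TEXT, temp_c REAL, hr_bpm REAL, spo2_pct REAL)""")

def read_weid(user_id):
    # stand-in for a service call to the wireless embedded internet device
    return {"temp_c": 36.5 + random.uniform(-0.5, 0.5),
            "hr_bpm": 72.0 + random.uniform(-8.0, 8.0),
            "spo2_pct": 97.0 + random.uniform(-2.0, 2.0)}

for _ in range(3):                       # three polling cycles for illustration
    for user in ("user-1", "user-2"):
        v = read_weid(user)
        db.execute("INSERT INTO vitals VALUES (?, ?, ?, ?, ?)",
                   (time.time(), user, v["temp_c"], v["hr_bpm"], v["spo2_pct"]))
    db.commit()
    time.sleep(1)
print(db.execute("SELECT COUNT(*) FROM vitals").fetchone())
```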

Relevance:

20.00%

Publisher:

Abstract:

We study the problem of analyzing the influence of various factors affecting individual messages posted in social media. The problem is challenging because various types of influence propagate through the social media network and act simultaneously on any user. Additionally, the topic composition of the influencing factors and the susceptibility of users to these influences evolve over time. This problem has not been studied before, and off-the-shelf models are unsuitable for the purpose. To capture the complex interplay of these various factors, we propose a new non-parametric model called the Dynamic Multi-Relational Chinese Restaurant Process. It accounts for the user network in data generation and also allows the parameters to evolve over time. Designing inference algorithms for this model that are suited to large-scale social media data is another challenge. To this end, we propose a scalable, multi-threaded inference algorithm based on online Gibbs sampling. Extensive evaluations on large-scale Twitter and Facebook data show that the extracted topics, when applied to authorship and commenting prediction, outperform state-of-the-art baselines. More importantly, our model produces valuable insights on topic trends and user personality trends beyond the capability of existing approaches.
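For a feel of the inference machinery, here is a minimal sketch of collapsed Gibbs sampling for a plain Chinese Restaurant Process mixture over token counts; the paper's Dynamic Multi-Relational CRP additionally conditions on the user network and lets parameters evolve over time, which this toy version omits.

```python
# Collapsed Gibbs sampling for a CRP mixture with Dirichlet-multinomial tables.
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, V = 1.0, 0.5, 20   # CRP concentration, Dirichlet prior, vocab size
docs = [rng.integers(0, V, size=8) for _ in range(40)]   # toy token streams
z = [-1] * len(docs)            # table (topic) assignment per document
tables, sizes = [], []          # per-table token counts and occupancy

def log_pred(doc, counts):
    """Log predictive probability of doc under a Dirichlet-multinomial table."""
    c, total, lp = counts.astype(float), float(counts.sum()), 0.0
    for w in doc:
        lp += np.log((c[w] + beta) / (total + V * beta))
        c[w] += 1.0
        total += 1.0
    return lp

for sweep in range(20):                    # Gibbs sweeps over the documents
    for d, doc in enumerate(docs):
        bc = np.bincount(doc, minlength=V)
        if z[d] >= 0:                      # unassign the document
            tables[z[d]] -= bc
            sizes[z[d]] -= 1
        # posterior over existing tables plus a new one: CRP prior x likelihood
        logp = [(np.log(s) if s > 0 else -np.inf) + log_pred(doc, t)
                for t, s in zip(tables, sizes)]
        logp.append(np.log(alpha) + log_pred(doc, np.zeros(V, dtype=int)))
        p = np.exp(np.array(logp) - max(logp))
        p /= p.sum()
        k = rng.choice(len(p), p=p)
        if k == len(tables):               # open a new table
            tables.append(np.zeros(V, dtype=int))
            sizes.append(0)
        tables[k] += bc
        sizes[k] += 1
        z[d] = k

print("tables in use:", sum(s > 0 for s in sizes))
```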