852 results for SERVER


Relevance: 10.00%

Abstract:

The downlink scheduling problem in multi-queue, multi-server systems under channel uncertainty is considered. Two policies that make allocations based on predicted channel states are proposed. The first is an extension of the well-known dynamic backpressure policy to the uncertain-channel case. The second is a variant that improves delay performance under light loads. The stability region of the system is characterised and the first policy is argued to be throughput optimal. A recently proposed policy of Kar et al. [1] has lower complexity, but is shown to be throughput suboptimal. Further, simulations demonstrate better delay and backlog properties for both our policies at light loads.
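
As a concrete illustration of the max-weight idea behind backpressure scheduling, the following Python sketch (hypothetical, and not the exact policy analysed in the paper) matches each server to the queue with the largest product of backlog and predicted service rate:

```python
# Hypothetical illustration of a max-weight (backpressure-style) allocation
# using predicted channel rates; not the exact policy of the paper.
def maxweight_allocation(backlogs, predicted_rates):
    """Assign each server to the queue maximizing backlog * predicted rate.

    backlogs: list of queue lengths, one per queue.
    predicted_rates[s][q]: predicted service rate of server s on queue q.
    Returns a dict {server_index: queue_index}. In this simplified sketch
    several servers may pick the same queue.
    """
    allocation = {}
    for s, rates in enumerate(predicted_rates):
        weights = [backlogs[q] * rates[q] for q in range(len(backlogs))]
        best_q = max(range(len(backlogs)), key=lambda q: weights[q])
        if weights[best_q] > 0:          # idle the server if every weight is zero
            allocation[s] = best_q
    return allocation


if __name__ == "__main__":
    print(maxweight_allocation([5, 2, 0], [[1.0, 0.5, 0.2], [0.3, 0.9, 0.1]]))
```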

Relevance: 10.00%

Abstract:

This paper deals with a batch-service queue with multiple vacations. The system consists of a single server and a waiting room of finite capacity. Customers arrive according to a Markovian arrival process (MAP). The server is unavailable for occasional intervals of time called vacations, and when it is available, customers are served in batches of maximum size 'b' subject to a minimum threshold value 'a'. We obtain the queue-length distributions at various epochs along with some key performance measures. Finally, some numerical results are presented.
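
The (a, b) batch-service rule with multiple vacations can be stated compactly; the sketch below is an illustration only (not the MAP-based analysis of the paper) of the decision made when the server returns from a vacation or completes a batch:

```python
# Minimal sketch of the (a, b) batch-service rule with multiple vacations:
# at a decision epoch the server serves a batch of size min(n, b) if the
# queue length n >= a, and otherwise leaves for another vacation.
def next_action(queue_length, a, b):
    if queue_length >= a:
        return ("serve", min(queue_length, b))
    return ("vacation", 0)


if __name__ == "__main__":
    for n in (0, 3, 12):
        print(n, next_action(n, a=4, b=10))
```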

Relevance: 10.00%

Abstract:

Mass spectrometry (MS) has become a standard tool for identifying metabolites in biological tissues, and metabolomics is slowly being acknowledged as a legitimate research discipline for characterizing biological conditions. The computational analysis of metabolomics data, however, lags behind the rapid advances in analytical techniques, for two reasons. The first is the lack of a standardized data repository for mass spectra: each research institution is flooded with gigabytes of mass-spectral data from its own analytical groups and cannot host a world-class repository for mass spectra. The second is the lack of informatics experts fully experienced in spectral analysis. These two barriers must be overcome to establish a publicly free data server for MS analysis in metabolomics, as GenBank does for genomics and UniProt for proteomics. The workshop brought together bioinformaticians working on mass-spectral analysis in Finland and Japan, with the goal of establishing a consortium to freely exchange and publicize mass spectra of metabolites measured on various platforms, computational tools to analyze spectra, and spectral knowledge computationally predicted from standardized data. This book contains the abstracts of the presentations given in the workshop. The programme of the workshop consisted of oral presentations from Japan and Finland, invited lectures from Steffen Neumann (Leibniz Institute of Plant Biochemistry), Matej Oresic (VTT), Merja Penttila (VTT) and Nicola Zamboni (ETH Zurich), as well as free-form discussion among the participants. The event was funded by the Academy of Finland (grants 139203 and 118653), the Japan Society for the Promotion of Science (JSPS Japan-Finland Bilateral Seminar Program 2010) and the Department of Computer Science, University of Helsinki. We would like to thank all the people contributing to the technical programme and the sponsors for making the workshop possible. Helsinki, October 2010. Masanori Arita, Markus Heinonen and Juho Rousu

Relevance: 10.00%

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so that the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of a user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear when one computer breaks or one service provider goes out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, at the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing an anonymous web, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we do not expect our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
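
The idea of a cryptographically verifiable reference can be illustrated with content addressing; the sketch below (hashes only, omitting the signatures Peerscape also uses, and not the platform's own code) shows how a reference pins the identity of a data item regardless of where a copy is stored:

```python
import hashlib

# Illustrative sketch of content-addressed, verifiable references: a
# reference is the SHA-256 digest of the data, so any copy of the item can
# be checked against the reference it was fetched by.
def make_reference(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def verify(data: bytes, reference: str) -> bool:
    return make_reference(data) == reference


if __name__ == "__main__":
    photo = b"family-photo-album-page-1"   # hypothetical data item
    ref = make_reference(photo)
    print(ref)
    print(verify(photo, ref), verify(b"tampered", ref))
```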

Relevance: 10.00%

Abstract:

Introduction: This case study is based on the experiences with the Electronic Journal of Information Technology in Construction (ITcon), founded in 1995. Development: This journal is an example of a particular category of open access journals, which use neither author charges nor subscriptions to finance their operations, but rely largely on unpaid voluntary work in the spirit of the open source movement. The journal has, after some initial struggle, survived its first decade and is now established as one of half a dozen peer-reviewed journals in its field. Operations: The journal publishes articles as they become ready, but creates virtual issues through alerting messages to “subscribers”. It has also started to publish special issues, since this helps in attracting submissions and in sharing the workload of review management. From the start the journal adopted a rather traditional layout for its articles. After the first few years the HTML version was dropped and papers are now published only in PDF format. Performance: The journal has recently been benchmarked against the competing journals in its field. Its acceptance rate of 53% is slightly higher, and its average turnaround time of seven months is almost a year faster, compared with those journals in the sample for which data could be obtained. The server log files for the past three years have also been studied. Conclusions: Our overall experience demonstrates that it is possible to publish this type of OA journal, with a yearly publishing volume equal to that of a quarterly journal and involving the processing of some fifty submissions a year, using a networked volunteer-based organization.

Relevance: 10.00%

Abstract:

Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. Also of interest is minimization of the bandwidth required to repair the system following a node failure. In a recent paper, Wu et al. characterize the tradeoff between the repair bandwidth and the amount of data stored per node, and prove the existence of regenerating codes that achieve this tradeoff. In this paper, we introduce Exact Regenerating Codes, which are regenerating codes with the additional property of being able to duplicate the data stored at a failed node. Such codes require low processing and communication overheads, making the system practical and easy to maintain. An explicit construction of exact regenerating codes is provided for the minimum-bandwidth point on the storage-repair bandwidth tradeoff, relevant to distributed-mail-server applications. A subspace-based approach is provided and shown to yield necessary and sufficient conditions for a linear code to possess the exact regeneration property, as well as to prove the uniqueness of our construction. Also included in the paper is an explicit construction of regenerating codes for the minimum-storage point, for parameters relevant to storage in peer-to-peer systems. This construction supports a variable number of nodes and can handle multiple simultaneous node failures. All constructions given in the paper are of low complexity, requiring a small field size in particular.
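
The storage-repair bandwidth tradeoff referred to here is commonly expressed through a cut-set bound; the sketch below illustrates that tradeoff only (it is not the paper's code construction) by checking whether a per-node storage alpha and a per-helper repair download beta can support a file of size B:

```python
# Feasibility check via the cut-set bound
#     B <= sum_{i=0}^{k-1} min(alpha, (d - i) * beta),
# where alpha is the storage per node and beta the amount downloaded from
# each of the d helper nodes during repair of a failed node.
def is_feasible(B, k, d, alpha, beta):
    return B <= sum(min(alpha, (d - i) * beta) for i in range(k))


if __name__ == "__main__":
    B, k, d = 12, 3, 4
    # Minimum-storage-like operating point: alpha = B / k, larger repair download.
    print(is_feasible(B, k, d, alpha=4, beta=2))   # True
    # Too little of both: infeasible.
    print(is_feasible(B, k, d, alpha=3, beta=1))   # False
```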

Relevance: 10.00%

Abstract:

Motivated by certain situations in manufacturing systems and communication networks, we look into the problem of maximizing the profit in a queueing system with a linear reward and cost structure and a choice of selecting among streams of Poisson arrivals according to an independent Markov chain. We view the system as an MMPP/GI/1 queue and seek to maximize the profit by optimally choosing the stationary probabilities of the modulating Markov chain. We consider two formulations of the optimization problem. The first one (which we call the PUT problem) seeks to maximize the profit per unit time, whereas the second one considers the maximization of the profit per accepted customer (the PAC problem). In each of these formulations, we explore three separate problems. In the first one, the constraints come from bounding the utilization of an infinite-capacity server; in the second one, the constraints arise from bounding the mean queue length of the same queue; and in the third one, the finite capacity of the buffer is reflected as a set of constraints. In the problems bounding the utilization factor of the queue, the solutions are given by essentially linear programs, while the problems with mean-queue-length constraints are linear programs if the service times are exponentially distributed. The problems modeling the finite-capacity queue are non-convex programs for which global maxima can be found. There is a rich relationship between the solutions of the PUT and PAC problems. In particular, the PUT solutions always make the server work at a utilization factor that is no less than that of the PAC solutions.
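
For the utilization-bounded PUT formulation, the abstract states that the solution is essentially a linear program; the sketch below shows an LP of that shape with made-up reward, arrival-rate and service-rate numbers (the paper's exact reward and cost structure differs):

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch of "PUT with a utilization bound": choose the stationary
# probabilities pi of the modulating chain to maximize a linear profit rate
# subject to a server-utilization constraint. All numbers are illustrative.
lam = np.array([2.0, 5.0, 9.0])      # arrival rate offered in each modulating state
reward = np.array([1.0, 1.2, 1.5])   # profit per unit time earned in each state
mu = 10.0                            # service rate
rho_max = 0.8                        # utilization bound

res = linprog(
    c=-reward,                               # linprog minimizes, so negate
    A_ub=[lam / mu], b_ub=[rho_max],         # sum_i pi_i * lam_i / mu <= rho_max
    A_eq=[np.ones_like(lam)], b_eq=[1.0],    # probabilities sum to one
    bounds=[(0, 1)] * len(lam),
)
print(res.x, -res.fun)
```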

Relevance: 10.00%

Abstract:

Background: Signal transduction events often involve transient, yet specific, interactions between structurally conserved protein domains and polypeptide sequences in target proteins. The identification and validation of these associating domains is crucial to understanding the signal transduction pathways that modulate different cellular or developmental processes. Bioinformatics strategies to extract and integrate information from diverse sources have been shown to facilitate experimental design for understanding complex biological events. These methods, primarily based on information from high-throughput experiments, have also led to the identification of new connections, thus providing hypothetical models for cellular events. Such models, in turn, provide a framework for directing experimental efforts to validate the predicted molecular rationale for complex cellular processes. In this context, it is envisaged that the rational design of peptides for protein-peptide binding studies could substantially facilitate the experimental strategies to evaluate a predicted interaction. This rational design procedure involves the integration of protein-protein interaction data, gene ontology, physico-chemical calculations, domain-domain interaction data and information on functional sites or critical residues. Results: Here we describe an integrated approach called 'PeptideMine' for the identification of peptides based on specific functional patterns present in the sequence of an interacting protein. This approach, based on sequence searches in the interacting sequence space, has been developed into a webserver, which can be used for the identification and analysis of peptides, peptide homologues or functional patterns from the interacting sequence space of a protein. To further facilitate experimental validation, the PeptideMine webserver also provides a list of physico-chemical parameters corresponding to the peptide, to determine the feasibility of using the peptide for in vitro biochemical or biophysical studies. Conclusions: The strategy described here involves the integration of data and tools to identify potential interacting partners for a protein and to design criteria for peptides based on desired biochemical properties. Alongside the search for interacting protein sequences using three different search programs, the server also provides the biochemical characteristics of candidate peptides to prune peptide sequences based on the features most suited for a given experiment. The PeptideMine server is available at the URL: http://caps.ncbs.res.in/peptidemine
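
The kind of physico-chemical characterization the server reports can be reproduced offline; the sketch below is not PeptideMine's own code, but uses Biopython's ProtParam module on a hypothetical candidate peptide to compute comparable parameters:

```python
# Illustration only: physico-chemical parameters of a candidate peptide,
# computed with Biopython's ProtParam module (requires `pip install biopython`).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

peptide = "ACDEFGHIKLMNPQRSTVWY"          # hypothetical candidate peptide
pa = ProteinAnalysis(peptide)
print("molecular weight  :", round(pa.molecular_weight(), 2))
print("isoelectric point :", round(pa.isoelectric_point(), 2))
print("GRAVY (hydropathy):", round(pa.gravy(), 3))
print("instability index :", round(pa.instability_index(), 2))
```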

Relevance: 10.00%

Abstract:

The high cost and extraordinary demands made on sophisticated air defence systems pose hard challenges to the managers and engineers who plan the operation and maintenance of such systems. This paper presents a study aimed at developing simulation and systems analysis techniques for the effective planning and efficient operation of small fleets of aircraft, typical of the air force of a developing country. We consider an important aspect of fleet management: the problem of resource allocation for achieving prescribed operational effectiveness of the fleet. At this stage, we consider a single flying base, where the operationally ready aircraft are stationed, and a repair depot, where the planes are overhauled. An important measure of operational effectiveness is 'availability', which may be defined as the expected fraction of the fleet fit for use at a given instant. The tour of aircraft in the flying-base, repair-depot system through a cycle of 'operationally ready' and 'scheduled overhaul' phases is represented first by a deterministic flow process and then by a cyclic queueing process. Initially the steady-state availability at the flying base is computed under the assumptions of Poisson arrivals, exponential service times and an equivalent single-server repair depot. This analysis also brings out the effect of fleet size on availability. It defines a 'small' fleet essentially in terms of the important 'traffic' parameter of service rate / maximum arrival rate. A simulation model of the system has been developed using GPSS to study sensitivity to distributional assumptions, to validate the principal assumptions of the analytical model, such as the single-server assumption, and to obtain confidence intervals for the statistical parameters of interest.
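
A minimal version of the availability computation is the classical machine-repairman (M/M/1//N) model; the sketch below is an approximation in that spirit with made-up rates, not the paper's exact flying-base/repair-depot model, and computes the expected fraction of the fleet fit for use:

```python
from math import prod

# Machine-repairman sketch of the flying-base / repair-depot cycle: each of
# the N aircraft goes in for overhaul at rate lam while operational, and a
# single-server depot overhauls at rate mu. Steady-state probabilities follow
# the standard M/M/1//N birth-death solution.
def availability(N, lam, mu):
    # Unnormalized probability that n aircraft are at the depot:
    # p_n proportional to N!/(N-n)! * (lam/mu)^n.
    weights = [prod(range(N - n + 1, N + 1)) * (lam / mu) ** n for n in range(N + 1)]
    total = sum(weights)
    expected_at_depot = sum(n * w for n, w in enumerate(weights)) / total
    return (N - expected_at_depot) / N   # expected fraction of fleet fit for use


if __name__ == "__main__":
    print(round(availability(N=12, lam=0.1, mu=1.0), 3))
```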

Relevance: 10.00%

Abstract:

A model comprising several servers, each equipped with its own queue and possibly having a different service speed, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive at a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is the one that balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for non-exponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms utilize server idle-time measurements, which are sent periodically to the central job scheduler. A model is developed for these measurements, and the above result is used to cast the problem as one of finding a projection of the root of an affine function when only noisy values of the function can be observed.
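
The optimal split can also be found numerically; the sketch below uses made-up rates and a direct numerical minimization (rather than the paper's weighted least-squares characterization or its adaptive algorithms) to choose the splitting probabilities that minimize the average M/M/1 response time:

```python
import numpy as np
from scipy.optimize import minimize

# Hedged numerical sketch: find the probabilistic split p of the generic
# Poisson stream over M/M/1 servers minimizing the average job response time.
mu = np.array([10.0, 6.0, 4.0])      # service speeds
lam_ded = np.array([3.0, 2.0, 1.0])  # dedicated arrival rates
lam_gen = 5.0                        # generic stream rate


def avg_response_time(p):
    arrivals = lam_ded + p * lam_gen
    t = 1.0 / (mu - arrivals)                    # M/M/1 mean response times
    return float(arrivals @ t / arrivals.sum())


cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "ineq", "fun": lambda p: mu - lam_ded - p * lam_gen - 1e-6}]  # stability
res = minimize(avg_response_time, x0=np.full(3, 1 / 3), bounds=[(0, 1)] * 3,
               constraints=cons, method="SLSQP")
print(res.x, avg_response_time(res.x))
```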

Relevance: 10.00%

Abstract:

We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process, also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e. decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated by the weight of the superedge. The compression problem then consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of the 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
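
A toy greedy heuristic conveys the flavour of the simple weighted graph compression problem; the sketch below is not one of the paper's algorithms, but repeatedly merges the pair of supernodes whose merge increases the squared approximation error least, with superedge weights set to the mean of the original pairwise weights they cover (missing edges counted as weight 0):

```python
import itertools

# Toy greedy supernode merging for weighted graph compression (illustration only).
def compression_error(weights, partition):
    err = 0.0
    for S, T in itertools.combinations_with_replacement(partition, 2):
        if S is T:
            pairs = list(itertools.combinations(S, 2))
        else:
            pairs = [(u, v) for u in S for v in T]
        if not pairs:
            continue
        w = [weights.get(frozenset(p), 0.0) for p in pairs]
        mean = sum(w) / len(w)                     # superedge weight
        err += sum((x - mean) ** 2 for x in w)     # squared approximation error
    return err


def greedy_compress(weights, nodes, target):
    partition = [{v} for v in nodes]
    while len(partition) > target:
        i, j = min(itertools.combinations(range(len(partition)), 2),
                   key=lambda ij: compression_error(
                       weights,
                       [p for k, p in enumerate(partition) if k not in ij]
                       + [partition[ij[0]] | partition[ij[1]]]))
        merged = partition[i] | partition[j]
        partition = [p for k, p in enumerate(partition) if k not in (i, j)] + [merged]
    return partition


if __name__ == "__main__":
    w = {frozenset(e): x for e, x in {("a", "b"): 1.0, ("b", "c"): 1.1,
                                      ("a", "c"): 0.9, ("c", "d"): 5.0}.items()}
    print(greedy_compress(w, ["a", "b", "c", "d"], target=2))
```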

Relevance: 10.00%

Abstract:

We consider the problem of optimally scheduling a processor executing a multilayer protocol in an intelligent Network Interface Controller (NIC). In particular, we assume a typical LAN environment with class 4 transport service, a connectionless network service, and a class 1 link-level protocol. We develop a queueing model for the problem. In the most general case this becomes a cyclic queueing network in which some queues have dedicated servers and the others share a common schedulable server. We use sample-path arguments and Markov decision theory to determine optimal service schedules. The optimal throughputs are compared with those obtained with simple policies. The optimal policy yields up to 25% improvement in some cases; in others, it does only slightly better than much simpler policies.

Relevance: 10.00%

Abstract:

CD-ROMs have proliferated as a distribution medium for desktop machines for a large variety of multimedia applications (targeted at a single-user environment) such as encyclopedias, magazines and games. With CD-ROM capacities of up to 3 GB becoming available in the near future, they will form an integral part of Video on Demand (VoD) servers for storing full-length movies and multimedia. In the first section of this paper we look at issues related to the single-user desktop environment. Since these multimedia applications are highly interactive in nature, we take a pragmatic approach and have made a detailed study of multimedia application behavior in terms of the I/O request patterns generated to the CD-ROM subsystem, by tracing these patterns. We discuss prefetch buffer design and seek-time characteristics in the context of the analysis of these traces. We also propose an adaptive main-memory-hosted cache that receives caching hints from the application to reduce latency when the user moves from one node of the hypergraph to another. In the second section we look at the use of CD-ROMs in a VoD server and discuss the problem of scheduling multiple request streams and buffer management in this scenario. We adapt the C-SCAN (Circular SCAN) algorithm to suit the CD-ROM drive characteristics and prove that it is optimal in terms of buffer size management. We provide computationally inexpensive relations by which this algorithm can be implemented. We then propose an admission control algorithm which admits new request streams without disrupting the continuity of playback of the previous request streams. The algorithm also supports operations such as fast forward and replay. Finally, in the third section, we discuss the problem of optimal placement of MPEG streams on CD-ROMs.
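
The C-SCAN ordering itself is straightforward; the sketch below gives only the textbook request ordering that the paper adapts to CD-ROM drive characteristics, with hypothetical positions:

```python
# Minimal C-SCAN (Circular SCAN) ordering: serve pending requests at or beyond
# the current head position in increasing order, then wrap around to the
# lowest-positioned requests and continue.
def cscan_order(head, requests):
    ahead = sorted(r for r in requests if r >= head)
    behind = sorted(r for r in requests if r < head)
    return ahead + behind


if __name__ == "__main__":
    print(cscan_order(head=53, requests=[98, 183, 37, 122, 14, 124, 65, 67]))
    # -> [65, 67, 98, 122, 124, 183, 14, 37]
```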

Relevance: 10.00%

Abstract:

Fork-join queueing systems offer a natural modelling paradigm for parallel processing systems and for assembly operations in automated manufacturing. The analysis of fork-join queueing systems has been an important subject of research in recent years. Existing analysis methodologies, both exact and approximate, assume that the servers are failure-free. In this study, we consider fork-join queueing systems in the presence of server failures and compute the cumulative distribution of performability with respect to the response time of such systems. For this, we employ a computational methodology that uses a recent technique based on randomization. We compare the performability of three different fork-join queueing models proposed in the literature: the distributed model, the centralized splitting model, and the split-merge model. The numerical results show that the centralized splitting model offers the highest levels of performability, followed by the distributed splitting and split-merge models.
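
The fork-join synchronization that all three models share can be illustrated with a much simpler baseline than the paper's randomization-based performability analysis; the sketch below estimates the response-time distribution of a single job forked into k parallel exponential tasks (no queueing and no server failures, unlike the models compared in the paper):

```python
import random

# Monte Carlo estimate of P(response time <= t) for one fork-join job split
# into k parallel exponential tasks; the job completes when the slowest
# branch finishes. Exact value for comparison: (1 - exp(-mu * t)) ** k.
def fork_join_cdf(k, mu, t, samples=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(max(rng.expovariate(mu) for _ in range(k)) <= t
               for _ in range(samples))
    return hits / samples


if __name__ == "__main__":
    print(fork_join_cdf(k=4, mu=1.0, t=3.0))
```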