976 results for File system
Abstract:
In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed-point analysis and performance insights in [1]. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns-2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
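The heuristic's building block can be sketched numerically. This is only an illustrative fragment, not the paper's fixed-point analysis: given a (hypothetical) table of saturation attempt probabilities indexed by the number of contending nodes, it computes the per-channel-slot idle, success, and collision probabilities that such a state-dependent model would feed into the Markov renewal analysis.

```python
# Illustrative sketch: per-slot event probabilities when n nodes each
# attempt independently with the saturation attempt probability beta(n).

def slot_probabilities(n, beta):
    """Idle / success / collision probabilities in one channel slot."""
    p_idle = (1 - beta) ** n                      # nobody attempts
    p_success = n * beta * (1 - beta) ** (n - 1)  # exactly one attempts
    p_collision = 1 - p_idle - p_success          # two or more attempt
    return p_idle, p_success, p_collision

# Hypothetical saturation attempt probabilities by node count
# (invented values for illustration only).
saturation_beta = {1: 0.06, 2: 0.05, 5: 0.035, 10: 0.025}

for n, b in saturation_beta.items():
    idle, succ, coll = slot_probabilities(n, b)
    print(f"n={n:2d}: idle={idle:.3f} success={succ:.3f} collision={coll:.3f}")
```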
Abstract:
PDB Goodies is a web-based graphical user interface (GUI) for manipulating Protein Data Bank files containing the three-dimensional atomic coordinates of protein structures. The program also allows users to save the manipulated three-dimensional atomic coordinate files on their local client system; these fragments are used in various stages of structure elucidation and analysis. The software works with all the three-dimensional protein structures available in the Protein Data Bank, which presently holds approximately 18 000 structures, and also on a three-dimensional atomic coordinate file (Protein Data Bank format) uploaded from the client machine. The program is written using CGI/Perl scripts and is platform independent. PDB Goodies can be accessed over the World Wide Web at http://144.16.71.11/pdbgoodies/.
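The kind of manipulation described, cutting a fragment out of a PDB-format coordinate file, can be sketched with the fixed column layout of ATOM records. PDB Goodies itself is a CGI/Perl web tool; this Python analogue is illustrative only, and the sample records are invented.

```python
# Illustrative sketch: extract a residue range from ATOM records of a
# PDB-format file (chain ID in column 22, residue number in columns 23-26).

def extract_fragment(pdb_lines, chain, start, end):
    """Return the ATOM records for residues start..end of the given chain."""
    out = []
    for line in pdb_lines:
        if line.startswith("ATOM"):
            if line[21] == chain and start <= int(line[22:26]) <= end:
                out.append(line)
    return out

# Hypothetical, minimal ATOM records (fixed-column PDB layout).
sample = [
    "ATOM      1  N   ALA A   1      11.104   6.134  -6.504  1.00  0.00           N",
    "ATOM      2  CA  ALA A   2      11.639   6.071  -5.147  1.00  0.00           C",
    "ATOM      3  N   GLY B   1       9.000   5.000  -4.000  1.00  0.00           N",
]
fragment = extract_fragment(sample, "A", 1, 1)
print(len(fragment))  # only the chain-A residue 1 record
```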
Abstract:
The primary purpose of this project is to attempt to improve the existing hydrogeologic information through lithologic and hydrogeologic characterizations of the sediments overlying the Floridan aquifer system in Alachua County. These sediments locally comprise both the intermediate aquifer system and associated confining beds and the surficial aquifer system. (PDF has 119 pages.)
Abstract:
This study examined the efficiency of fish diversion and the survivorship of diverted fishes in the San Onofre Nuclear Generating Station Fish Return System in 1984 and 1985. Generally, fishes were diverted back to the ocean with high frequency, particularly in 1984. Most species were diverted at rates of 80% or more. Over 90% of the most abundant species, Engraulis mordax, were diverted. The system worked particularly well for strong-swimming forms such as Paralabrax clathratus, Atherinopsis californiensis, and Xenistius californiensis, and did not appreciably divert weaker-swimming species such as Porichthys notatus, Heterostichus rostratus, and Syngnathus sp. Return rates of some species were not as high in 1985 as in 1984. Individuals of most tested species survived both transit through the fish return system and 96 hours in a holding net. Some species, such as E. mordax, X. californiensis, and Umbrina roncador, experienced little or no mortality. Survivorship of Seriphus politus was highly variable, and no Anchoa delicatissima survived. (PDF file contains 22 pages.)
Abstract:
Oceanographic software is presented that enables quick access to oceanographic databases. The program is interactive and yields a graphical display for a quick look at data availability and parameter ranges. Additionally, the results of the data retrieval are stored in an ASCII file which can be interfaced with commercial programs such as spreadsheet and isoline software. An example is given for the temperature distribution in Greenland waters.
Abstract:
We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load at the server hosts will result in optimal performance. We show this conventional wisdom is less true when the task size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads, and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more) where the more heavy-tailed the workload, the greater the improvement factor. We evaluate the tradeoff between improvement in slowdown and increase in waiting time in a system using SITA-V, and show conditions under which SITA-V represents a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because it has a simple implementation which requires no communication from the server hosts back to the task router.
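The core routing idea of size-interval assignment can be sketched compactly: tasks are routed by size alone against fixed cutoffs, so the router needs no feedback from the hosts, which is exactly why the implementation is simple. The cutoff values below are invented for illustration, not SITA-V's computed boundaries.

```python
# Illustrative sketch of size-interval task assignment: each host serves
# one contiguous band of task sizes, with small tasks sent to host 0,
# which is deliberately kept lightly loaded.

import bisect

# Hypothetical size cutoffs (bytes): host 0 gets tasks under 10 KB,
# host 1 gets 10 KB to 1 MB, host 2 gets the heavy tail.
CUTOFFS = [10_000, 1_000_000]

def assign_host(task_size):
    """Return the index of the host responsible for this task size."""
    return bisect.bisect_right(CUTOFFS, task_size)

print(assign_host(512))      # small task -> host 0
print(assign_host(50_000))   # medium task -> host 1
print(assign_host(10**8))    # heavy-tailed task -> host 2
```

Because `assign_host` depends only on the task itself, the router is stateless with respect to the hosts, matching the paper's point that no communication from the server hosts back to the task router is required.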
Abstract:
A foundational issue underlying many overlay network applications, ranging from routing to P2P file sharing, is connectivity management, i.e., folding new arrivals into the existing mesh and re-wiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are tractable to address via theoretical analyses, especially game-theoretic analysis. Our work unifies these two thrusts by distilling insights gleaned from clean theoretical models, notably that under natural resource constraints, selfish players can select neighbors so as to efficiently reach near-equilibria that also provide high global performance. Using Egoist, a prototype overlay routing system we implemented on PlanetLab, we demonstrate that our neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics; that Egoist is competitive with an optimal but unscalable full-mesh approach; and that it remains highly effective under significant churn. We also describe variants of Egoist's current design that would enable it to scale to much larger overlays and to cater effectively to applications, such as P2P file sharing in unstructured overlays, based on the use of primitives such as scoped flooding rather than routing.
Abstract:
A foundational issue underlying many overlay network applications ranging from routing to peer-to-peer file sharing is that of connectivity management, i.e., folding new arrivals into an existing overlay, and rewiring to cope with changing network conditions. Previous work has considered the problem from two perspectives: devising practical heuristics for specific applications designed to work well in real deployments, and providing abstractions for the underlying problem that are analytically tractable, especially via game-theoretic analysis. In this paper, we unify these two thrusts by using insights gleaned from novel, realistic theoretical models in the design of Egoist, a distributed overlay routing system that we implemented, deployed, and evaluated on PlanetLab. Using extensive measurements of paths between nodes, we demonstrate that Egoist's neighbor selection primitives significantly outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, we demonstrate that Egoist is competitive with an optimal, but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overhead. Finally, we use a multiplayer peer-to-peer game to demonstrate the value of Egoist to end-user applications. This technical report supersedes BUCS-TR-2007-013.
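The selfish neighbor-selection idea behind both abstracts can be sketched as a best-response step. This is an illustration under assumed inputs, not Egoist's actual primitives: a node with a budget of k links greedily picks the neighbors that most reduce its total routing cost, where reaching a destination costs the link delay to a neighbor plus that neighbor's advertised distance to the destination.

```python
# Illustrative best-response neighbor selection for one node.
# link_delay[v]: measured delay from this node to candidate neighbor v.
# neighbor_dist[v][d]: v's advertised distance to destination d.

def best_response(link_delay, neighbor_dist, k):
    """Greedily choose k neighbors minimizing this node's total routing cost."""
    dests = range(len(next(iter(neighbor_dist.values()))))
    chosen = set()

    def total_cost(candidates):
        # Cost to each destination: route via the cheapest chosen neighbor.
        return sum(min(link_delay[v] + neighbor_dist[v][d] for v in candidates)
                   for d in dests)

    for _ in range(k):
        best = min((v for v in link_delay if v not in chosen),
                   key=lambda v: total_cost(chosen | {v}))
        chosen.add(best)
    return chosen

# Hypothetical measurements: three candidates, two destinations.
delays = {"a": 1, "b": 5, "c": 2}
dists = {"a": [0, 9], "b": [9, 0], "c": [4, 4]}
print(best_response(delays, dists, 2))
```

In a deployed overlay each node would re-run such a step as conditions change; the theoretical results referenced above concern when this selfish dynamic reaches near-equilibria with high global performance.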
Abstract:
Background: Many European countries, including Ireland, lack high-quality, ongoing, population-based estimates of maternal behaviours and experiences during pregnancy. PRAMS is a CDC surveillance program established in the United States in 1987 to generate high-quality, population-based data to reduce infant mortality rates and improve maternal and infant health. PRAMS is the only ongoing population-based surveillance system of maternal behaviours and experiences that occur before, during and after pregnancy worldwide.
Methods: The objective of this study was to adapt, test and evaluate a modified CDC PRAMS methodology in Ireland. The birth certificate file, which is the standard sampling frame for PRAMS in the United States, was not available for the PRAMS Ireland study. Consequently, delivery record books for the period between 3 and 5 months before the study start date at a large urban obstetric hospital [8,900 births per year] were used to randomly sample 124 women. Name, address, maternal age, infant sex, gestational age at delivery, delivery method, APGAR score and birth weight were manually extracted from records. Stillbirths and early neonatal deaths were excluded using APGAR scores and hospital records. Women were sent a letter of invitation to participate, including an option to opt out, followed by a modified PRAMS survey, a reminder letter and a final survey.
Results: The response rate for the pilot was 67%. Two per cent of women refused the survey, 7% opted out of the study and 24% did not respond. Survey items were at least 88% complete for all 82 respondents. Prevalence estimates of socially undesirable behaviours such as alcohol consumption during pregnancy were high [>50%] and comparable with international estimates.
Conclusion: PRAMS is a feasible and valid method of collecting information on maternal experiences and behaviours during pregnancy in Ireland. With further work, PRAMS may offer a solution to data deficits in maternal health behaviour indicators in Ireland. This study is important to researchers in Europe and elsewhere who may be interested in new ways of tailoring an established CDC methodology to their own settings to resolve data deficits in maternal health.
Abstract:
The origin of eusociality in haplo-diploid organisms such as Hymenoptera has been mostly explained by kin selection. However, several studies have uncovered decreased relatedness values within colonies, resulting primarily from multiple queen matings (polyandry) and/or from the presence of more than one functional queen (polygyny). Here, we report on the use of microsatellite data for the investigation of sociogenetic parameters, such as relatedness and levels of polygyny and polyandry, in the ant Pheidole pallidula. We demonstrate, through analysis of mother-offspring combinations and the use of direct sperm typing, that each queen is inseminated by a single male. The inbreeding coefficient within colonies and the levels of relatedness between the queens and their mates are not significantly different from zero, indicating that matings occur between unrelated individuals. Analyses of worker genotypes demonstrate that 38% of the colonies are polygynous with 2-4 functional queens, and suggest the existence of reproductive skew, i.e., unequal contributions of the queens to reproduction. Finally, our analyses indicate that colonies are genetically differentiated and form a population exhibiting significant isolation-by-distance, suggesting that some colonies originate through budding.
Abstract:
Gemstone Team FLIP (File Lending in Proximity)
Abstract:
OBJECTIVE: This work investigates the delivery accuracy of different Varian linear accelerator models using log-file-derived MLC RMS values.
METHODS: Seven centres independently created a plan on the same virtual phantom using their own planning system and the log files were analysed following delivery of the plan in each centre to assess MLC positioning accuracy. A single standard plan was also delivered by seven centres to remove variations in complexity and the log files were analysed for Varian TrueBeams and Clinacs (2300IX or 2100CD models).
RESULTS: Varian TrueBeam accelerators had better MLC positioning accuracy (<1.0 mm) than the 2300IX (<2.5 mm) following delivery of the plans created by each centre and also the standard plan. In one case log files provided evidence that reduced delivery accuracy was not associated with the linear accelerator model but was due to planning issues.
CONCLUSIONS: Log files are useful in identifying differences between linear accelerator models and in isolating errors during end-to-end testing in VMAT audits. Log file analysis can rapidly eliminate machine delivery from the process and divert attention with confidence to other aspects.
ADVANCES IN KNOWLEDGE: Log file evaluation was shown to be an effective method to rapidly verify satisfactory treatment delivery when a dosimetric evaluation fails during end-to-end dosimetry audits. MLC RMS values for Varian TrueBeams were shown to be much smaller than those for Varian Clinacs for VMAT deliveries.
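The metric the audit relies on, the root-mean-square of planned-versus-delivered MLC leaf positions, can be sketched in a few lines. The <1.0 mm and <2.5 mm figures in the abstract are RMS values of exactly this kind; the leaf positions below are invented for illustration.

```python
# Illustrative sketch: RMS MLC leaf position error, as would be computed
# over all leaves and control points extracted from an accelerator log file.

import math

def mlc_rms(planned, delivered):
    """Root-mean-square of (planned - delivered) leaf positions, in mm."""
    errors = [p - d for p, d in zip(planned, delivered)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

planned   = [10.0, 12.5, -3.2, 0.0]   # mm, hypothetical planned positions
delivered = [10.1, 12.3, -3.3, 0.2]   # mm, hypothetical logged positions
print(f"MLC RMS = {mlc_rms(planned, delivered):.2f} mm")
```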
Abstract:
Wednesday 12th March 2014 Speaker(s): Dr Tim Chown Organiser: Time: 12/03/2014 11:00-11:50 Location: B32/3077 File size: 642 Mb Abstract The WAIS seminar series is designed to be a blend of classic seminars, research discussions, debates and tutorials. The Domain Name System (DNS) is a critical part of the Internet infrastructure. In this talk we begin by explaining the basic model of operation of the DNS, including how domain names are delegated and how a DNS resolver performs a DNS lookup. We then take a tour of DNS-related topics, including caching, poisoning, governance, the increasing misuse of the DNS in DDoS attacks, and the expansion of the DNS namespace to new top level domains and internationalised domain names. We also present the latest work in the IETF on DNS privacy. The talk will be pitched such that no detailed technical knowledge is required. We hope that attendees will gain some familiarity with how the DNS works, some key issues surrounding DNS operation, and how the DNS might touch on various areas of research within WAIS.
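The lookup the talk describes can be exercised from the stub-resolver side with the Python standard library: `socket.getaddrinfo` hands the query to the system's configured resolver, which walks the delegation chain (root, TLD, authoritative server) and caches the answer. A minimal sketch:

```python
# Illustrative sketch: a stub-resolver DNS lookup via the OS resolver.

import socket

def resolve(name):
    """Return the sorted set of addresses the DNS returns for name."""
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally, so this works without network access;
# real hostnames would trigger the full delegation walk described above.
print(resolve("localhost"))
```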
Abstract:
Special issue of the ICL Technical Journal on the theme of the Content-Addressable File Store: editor Guy Haworth. Twelve invited papers covering hardware, software, system integration, patents, applications and futures.
Abstract:
The main objective of this degree project is to implement an Application Availability Monitoring (AAM) system named Softek EnView for Fujitsu Services. The aim of implementing the AAM system is to proactively identify end-user performance problems, such as application and site performance, before the actual end users experience them. No matter how well applications and sites are designed and no matter how well they meet business requirements, they are useless to the end users if the performance is slow and/or unreliable. It is important for the customers to find out whether the end-user problems are caused by the network or by application malfunction. Softek EnView comprises the following components: Robot, Monitor, Reporter, Collector and Repository. The implemented system, however, is designed to use only some of these EnView elements: Robot, Reporter and Repository. Robots can be placed at any key user location and are dedicated to customers, which means that as the number of customers increases, the number of Robots increases with it. To make the AAM system ideal for the company to use, it was integrated with Fujitsu Services' centralised monitoring system, BMC PATROL Enterprise Manager (PEM). That was the reason for deciding to drop the EnView Monitor element. After the system was fully implemented, the AAM system was ready for production. Transactions were (and are) written and deployed on Robots to simulate typical end-user actions. These transactions are configured to run at certain intervals, which are defined together with customers. While they are driven against customers' applications automatically, transactions collect availability data and response time data all the time. In case of a failure in a transaction, the Robot immediately quits the transaction and writes detailed information to a log file about what went wrong and which element failed while going through an application.
An alert is then generated by a BMC PATROL Agent based on this data and is sent to the BMC PEM. Fujitsu Services' monitoring room receives the alert and reacts to it according to the ITIL incident management process, alerting system specialists on critical incidents to resolve problems. From the data gathered by the Robots, weekly reports containing detailed statistics and trend analyses of the ongoing quality of IT services are provided for the customers.
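The Robot behaviour described above can be sketched generically. The transaction names, steps, and log file are invented; the real system drives actual application UIs and hands its results to BMC PATROL rather than printing them.

```python
# Illustrative sketch of a monitoring robot's transaction loop: run a
# scripted sequence of end-user actions, record availability and response
# time, and write a detailed log entry on failure so an alert can be
# raised downstream.

import logging
import time

logging.basicConfig(filename="robot.log", level=logging.INFO)

def run_transaction(name, steps):
    """Execute each (step_name, callable) in order.
    Return (available, response_time_seconds)."""
    start = time.monotonic()
    for step_name, step in steps:
        try:
            step()
        except Exception as exc:
            # Detailed failure record: which element failed and why.
            logging.error("transaction %s failed at %s: %s",
                          name, step_name, exc)
            return False, time.monotonic() - start
    return True, time.monotonic() - start

# Hypothetical end-user actions; a real Robot would drive the application.
ok, elapsed = run_transaction("login-check", [
    ("open-page", lambda: None),
    ("submit-login", lambda: None),
])
print(ok)  # prints True
```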