995 results for available bandwidth
Abstract:
We propose Trade & Cap (T&C), an economics-inspired mechanism that incentivizes users to voluntarily coordinate their consumption of the bandwidth of a shared resource (e.g., a DSLAM link) so as to converge on what they perceive to be an equitable allocation, while ensuring efficient resource utilization. Under T&C, rather than acting as an arbiter, an Internet Service Provider (ISP) acts as an enforcer of what the community of rational users sharing the resource decides is a fair allocation of that resource. Our T&C mechanism proceeds in two phases. In the first, software agents acting on behalf of users engage in a strategic trading game in which each user agent selfishly chooses bandwidth slots to reserve in support of primary, interactive network usage activities. In the second phase, each user is allowed to acquire additional bandwidth slots in support of a presumed open-ended need for fluid bandwidth, catering to secondary applications. The acquisition of this fluid bandwidth is subject to the remaining "buying power" of each user and to prevalent "market prices" – both of which are determined by the results of the trading phase and a desirable aggregate cap on link utilization. We present analytical results that establish the underpinnings of our T&C mechanism, including game-theoretic results pertaining to the trading phase, and pricing of fluid bandwidth allocation pertaining to the capping phase. Using real network traces, we present extensive experimental results that demonstrate the benefits of our scheme, which we also show to be practical by highlighting the salient features of an efficient implementation architecture.
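The two-phase flow described in this abstract can be sketched in a few lines. This is a toy illustration only: the function name, the one-unit-per-slot cost, and the rule of splitting the capped residual capacity in proportion to leftover buying power are my assumptions, not the paper's actual trading game or pricing scheme.

```python
# Toy sketch of a Trade & Cap-style two-phase allocation.
# All names and the proportional pricing rule are illustrative assumptions.

def trade_and_cap(slot_bids, budgets, link_capacity, cap=0.8):
    """Phase 1: grant each user their reserved primary slots.
    Phase 2: distribute the remaining capped capacity as fluid
    bandwidth, in proportion to each user's leftover buying power."""
    # Phase 1: primary reservations (assumed to fit within the link);
    # each reserved slot costs one unit of budget.
    primary = {u: list(slots) for u, slots in slot_bids.items()}
    spent = {u: len(slots) for u, slots in slot_bids.items()}
    leftover = {u: max(budgets[u] - spent[u], 0) for u in budgets}

    # Phase 2: fluid bandwidth under the aggregate utilization cap.
    used = sum(len(s) for s in primary.values())
    fluid_pool = max(cap * link_capacity - used, 0)
    total_power = sum(leftover.values()) or 1
    fluid = {u: fluid_pool * leftover[u] / total_power for u in leftover}
    return primary, fluid

# Example: two users share a 10-slot link capped at 80% utilization.
primary, fluid = trade_and_cap(
    slot_bids={"alice": [0, 1], "bob": [2]},
    budgets={"alice": 5, "bob": 5},
    link_capacity=10,
)
```

Here "bob" reserved fewer primary slots, so he retains more buying power and receives a larger share of the fluid pool, mirroring the incentive structure the abstract describes.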
Abstract:
A number of recent studies have pointed out that TCP's performance over ATM networks tends to suffer, especially under congestion and switch buffer limitations. Switch-level enhancements and link-level flow control have been proposed to improve TCP's performance in ATM networks. Selective Cell Discard (SCD) and Early Packet Discard (EPD) ensure that partial packets are discarded from the network "as early as possible", thus reducing wasted bandwidth. While such techniques improve the achievable throughput, their effectiveness tends to degrade in multi-hop networks. In this paper, we introduce Lazy Packet Discard (LPD), an AAL-level enhancement that improves effective throughput, reduces response time, and minimizes wasted bandwidth for TCP/IP over ATM. In contrast to the SCD and EPD policies, LPD delays as much as possible the removal from the network of cells belonging to a partially communicated packet. We outline the implementation of LPD and show, through analysis and simulations, the performance advantage of TCP/LPD compared to plain TCP and TCP/EPD.
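The contrast between early and lazy discard can be made concrete with a minimal buffer-admission sketch. The buffer model, threshold, and function names below are illustrative assumptions, not the paper's actual AAL-level design: the point is only that EPD rejects a whole packet up front once a queue threshold is crossed, while an LPD-style policy keeps queuing cells until the buffer genuinely overflows, deferring removal of the partial packet.

```python
# Minimal sketch contrasting early vs. lazy discard of a packet's cells
# at a switch buffer; the model is an illustrative assumption.

def early_packet_discard(buffer, packet_cells, capacity, threshold):
    """EPD: if the queue is already above a threshold when a packet
    starts arriving, drop the entire packet immediately."""
    if len(buffer) >= threshold:
        return 0  # whole packet discarded early
    admitted = min(len(packet_cells), capacity - len(buffer))
    buffer.extend(packet_cells[:admitted])
    return admitted

def lazy_packet_discard(buffer, packet_cells, capacity):
    """LPD-style admission: keep queuing cells until the buffer
    actually overflows, delaying removal of the partial packet."""
    admitted = min(len(packet_cells), capacity - len(buffer))
    buffer.extend(packet_cells[:admitted])
    return admitted
```

With a buffer of 5 cells, capacity 8, and an EPD threshold of 4, EPD drops an arriving 3-cell packet entirely, while the lazy policy admits all 3 cells; if congestion clears before overflow, those cells are not wasted.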
Abstract:
'Maximum Available Feedback' is Bode's term for the highest possible loop gain over a given bandwidth, with specified stability margins, in a single-loop feedback system. Using asymptotic analysis, Bode developed a methodology for achieving this. However, the actual system performance differs from that specified, due to the use of asymptotic approximations; the author [2] has described how, for instance, the actual phase margin is often much lower than required when the bandwidth is high, and has proposed novel modifications to the asymptotes to address the issue. This paper gives some new analysis of such systems, showing that the method also contravenes Bode's definition of phase margin, and shows how the author's modifications can be used for different amounts of bandwidth.
Abstract:
A feedback system for control or electronics should have high loop gain, so that its output is close to its desired state and the effects of changes in the system and of disturbances are minimised. Bode proposed a method for single-loop feedback systems to obtain the maximum available feedback, defined as the largest possible loop gain over a bandwidth pertinent to the system, with appropriate gain and phase margins. The method uses asymptotic approximations, and this paper describes some novel adjustments to the asymptotes, so that the final system often exceeds the maximum available feedback. The implementation of the method requires the cascading of a series of lead-lag elements. This paper describes a new way to determine how many elements should be used.
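The gain-crossover and phase-margin quantities that these two abstracts turn on are easy to check numerically for a concrete loop. The third-order plant below is an illustrative example of my own choosing, not one from either paper, and the bisection scan is a crude sketch rather than Bode's asymptotic construction.

```python
# Numerical phase-margin check for an example loop transfer function.
# The plant L(s) = K / (s (s+1) (s+10)) is an illustrative assumption.
import cmath
import math

def loop_gain(w, K=20.0):
    """Evaluate L(jw) for the example loop transfer function."""
    s = 1j * w
    return K / (s * (s + 1) * (s + 10))

def phase_margin(K=20.0):
    """Find the gain-crossover frequency wc (where |L(jwc)| = 1) by
    bisection on a log-frequency grid, then return wc and the phase
    margin 180 deg + arg L(jwc)."""
    lo, hi = 1e-3, 1e3
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # geometric midpoint
        if abs(loop_gain(mid, K)) > 1:
            lo = mid  # still above unity gain: crossover is higher
        else:
            hi = mid
    wc = math.sqrt(lo * hi)
    pm = 180.0 + math.degrees(cmath.phase(loop_gain(wc, K)))
    return wc, pm
```

For this plant the crossover sits near 1.24 rad/s with a phase margin of roughly 32 degrees; sweeping K in such a script shows the trade-off between loop gain over the bandwidth and the remaining phase margin that the asymptotic method is designed to manage.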
Abstract:
In this paper, the implementation and testing of a non-commercial GaN HEMT in a simple buck converter for an envelope amplifier in ET and EER transmission techniques has been carried out. Compared to the prototypes with commercially available EPC1014 and EPC1015 GaN HEMTs, the experimentally demonstrated power supply provided better thermal management and increased the switching frequency up to 25 MHz. A 64QAM signal with 1 MHz of large-signal bandwidth and 10.5 dB of Peak to Average Power Ratio was generated, using a switching frequency of 20 MHz. The obtained efficiency was 38% including the driving circuit, and the total losses breakdown showed that switching power losses in the HEMT are the dominant ones. In addition, some basic physical modeling has been done in order to provide insight into the correlation between the electrical characteristics of the GaN HEMT and physical design parameters. This is the first step in the optimization of the HEMT design for this particular application.
Abstract:
COO 1469-0194.
Abstract:
"Supported in part by contract number U.S. AEC AT(11-1) 1469."
Abstract:
Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server based architectures, which place significant limits on their ability to scale and the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome. Using decentralised networks in this manner presents some difficulties which have not been encountered in their previous uses. Generally, decentralised applications do not require any significant fault tolerance guarantees. High-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be translated because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing this reliable storage is not available. As part of this research a fault tolerance system has been created which provides considerable reliability without the need for persistent storage. As well as increased scalability, fully decentralised networks offer the ability for volunteers to communicate directly. This ability provides the possibility of supporting applications whose tasks require direct, message-passing-style communication. Previous cycle stealing systems have only supported embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed which can support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication and methods for optimising object locality on decentralised networks.
Abstract:
The Australian Research Collaboration Service (ARCS) has been supporting a wide range of Collaboration Services and Tools which have been allowing researchers, groups and research communities to share ideas and collaborate across organisational boundaries.

This talk will give an introduction to a number of exciting technologies which are now available. The focus will be on two main areas: Video Collaboration Tools, allowing researchers to talk face-to-face and share data in real time, and Web Collaboration Tools, allowing researchers to share information and ideas with other like-minded researchers irrespective of distance or organisational structure. A number of examples will also be shown of how these technologies have been used within various research communities.

A brief introduction will be given to a number of services which ARCS is now operating and/or supporting, such as:

* EVO – a video conferencing application which is particularly suited to desktop or low-bandwidth applications.
* AccessGrid – an open source video conferencing and collaboration toolkit which is great for room-to-room meetings.
* Sakai – an online collaboration and learning environment supporting teaching and learning, ad hoc group collaboration, portfolios and research collaboration.
* Plone – a ready-to-run content management system for managing web content that is ideal for project groups, communities, web sites, extranets and intranets.
* Wikis – a way to easily create, edit, and link pages together to create collaborative websites.
Abstract:
Technology is continually changing and evolving throughout the entire construction industry, and particularly in the design process. One of the principal manifestations of this is a move away from team working in a shared work space to team working in a virtual space, using increasingly sophisticated electronic media. Due to the significant operating differences between shared and virtual spaces, adjustments to the generic skills utilised by team members are a necessity when moving between the two conditions. This paper reports an aspect of a CRC-CI research project investigating the 'generic skills' used by individuals and teams when engaging with high-bandwidth information and communication technologies (ICT). It aligns with the project's other two aspects of collaboration in virtual environments: 'processes' and 'models'. The entire project focuses on the early stages of a project (i.e. design), in which models for the project are being developed and revised. The paper summarises the first stage of the research project, which reviews the literature to identify factors of virtual teaming which may affect team member skills. It concludes that design team participants require 'appropriate skills' to function efficiently and effectively, and that the introduction of high-bandwidth technologies reinforces the need for skills mapping and measurement.
Abstract:
In today’s global design world, architectural and other related design firms design across time zones and geographically distant locations. High-bandwidth virtual environments have the potential to make a major impact on these global design teams. However, there is insufficient evidence about the way designers collaborate in their normal working environments using traditional and/or digital media. This paper presents a method to study the impact of communication and information technologies on collaborative design practice by comparing design tasks done in a normal working environment with design tasks done in a virtual environment. Before introducing high-bandwidth collaboration technology to the work environment, a baseline study is conducted to observe and analyze the existing collaborative process. Designers currently rely on phone, fax, email, and image files for communication and collaboration. Describing the current context is important for comparison with the following phases. We developed a coding scheme to be used in analyzing the three stages of the collaborative design activity. The results will establish the basis for measures of collaborative design activity when a new technology is introduced later to the same work environment – for example, designers using electronic whiteboards, 3D virtual worlds, webcams, and internet phone. The results of this work will form the basis of guidelines for the introduction of technology into global design offices.
Abstract:
Objective: To examine the reliability of work-related activity coding for injury-related hospitalisations in Australia. Method: A random sample of 4373 injury-related hospital separations from 1 July 2002 to 30 June 2004 were obtained from a stratified random sample of 50 hospitals across 4 states in Australia. From this sample, cases were identified as work-related if they contained an ICD-10-AM work-related activity code (U73) allocated by either: (i) the original coder; (ii) an independent auditor, blinded to the original code; or (iii) a research assistant, blinded to both the original and auditor codes, who reviewed narrative text extracted from the medical record. The concordance of activity coding and number of cases identified as work-related using each method were compared. Results: Of the 4373 cases sampled, 318 cases were identified as being work-related using any of the three methods for identification. The original coder identified 217 and the auditor identified 266 work-related cases (68.2% and 83.6% of the total cases identified, respectively). Around 10% of cases were only identified through the text description review. The original coder and auditor agreed on the assignment of work-relatedness for 68.9% of cases. Conclusions and Implications: The current best estimates of the frequency of hospital admissions for occupational injury underestimate the burden by around 32%. This is a substantial underestimate that has major implications for public policy, and highlights the need for further work on improving the quality and completeness of routine, administrative data sources for a more complete identification of work-related injuries.
Abstract:
This paper details research completed in 2007 which investigated autopsy decision making in a death investigation. The data was gathered during the first year of operation of a new Coroners Act in Queensland, Australia, which changed the process of death investigation in three ways that are important to this paper. First, it required a greater amount of information to be gathered at the scene by police, including a thorough investigation of the circumstances of the death, statements from witnesses, friends and family, and evidence gathering at the scene. Second, it required Coroners, for the first time, to determine the level of invasiveness of the autopsy required to complete the death investigation. Third, it enabled a genuine family concern to be communicated to the Coroner. The outcomes of these changes were threefold. First, a greater amount of information offered to the Coroner led to a decrease in the number of full internal autopsies ordered, but an increase in the number of partial internal autopsies ordered. Second, this shift in autopsy decision making by Coroners saw certain factors given greater importance than others in decisions to order full internal or external-only autopsies. Third, a raised family concern had a significant impact on autopsy decision making and tended to decrease the invasiveness of the autopsy ordered by Coroners.