926 results for Muscular Load
Abstract:
The physics-based parameter load/unload response ratio (LURR) was proposed to measure the proximity of a strong earthquake, and has achieved good results in earthquake prediction. Since LURR can describe the damage degree of the focal media qualitatively, there must be a relationship between LURR and the damage variable (D), which describes damaged materials quantitatively in damage mechanics. Hence, based on damage mechanics and LURR theory, and taking the Weibull distribution as the probability distribution function, the relationship between LURR and D is established and analyzed. This relationship turns LURR from a qualitative indicator into a quantitative tool for the damage analysis of materials, which not only gives the LURR method a more solid basis in physics, but may also offer a new approach to the damage evaluation of large-scale structures and the prediction of catastrophic engineering failures. Copyright (c) 2009 John Wiley & Sons, Ltd.
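As a hedged sketch of the quantities this abstract relates (using the textbook definitions of LURR and the Weibull damage variable, which may differ in detail from the paper's own formulation):

```latex
% Load/unload response ratio: the ratio of the response rate X during
% loading (+) to that during unloading (-), where R is the response to
% a small load increment P.
Y = \frac{X_{+}}{X_{-}}, \qquad
X = \lim_{\Delta P \to 0} \frac{\Delta R}{\Delta P}

% A Weibull distribution of element strengths gives the standard
% damage variable, with scale \varepsilon_0 and shape m:
D(\varepsilon) = 1 - \exp\!\left[ -\left( \frac{\varepsilon}{\varepsilon_0} \right)^{m} \right]
```

For intact media, loading and unloading responses are nearly equal and Y ≈ 1; as damage accumulates, the loading response grows faster than the unloading response and Y rises above 1, which is why Y can serve as a proxy for D.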
Abstract:
A series of WO3/ZrO2 strong solid acids prepared under different conditions were studied. Their crystal structures, surface properties, and acidities were determined by means of XRD, DTA-TG, H2-TPR, laser Raman spectroscopy, and acidity measurements. The results revealed that ZrO2 in WO3/ZrO2 existed mainly in the tetragonal phase; the addition of WO3 plays an important role in stabilizing the tetragonal phase of ZrO2, and the catalyst thus had a considerable surface area. WO3 in WO3/ZrO2 was dispersed and crystallized as WO3 crystallites on the ZrO2 surface, and partly reacted with ZrO2 to form Zr-O-W bonds, which act as the strong solid acid sites. The catalytic properties of the WO3/ZrO2 strong solid acid for the alkylation of isobutane with butene under different conditions were investigated. The catalysts showed better reaction performance than other strong solid acids, and a parallel relationship could be drawn between the catalytic activity and both the amount of acid sites and the acidic strength of the catalysts.
Abstract:
Thatcher, Rhys, et al., 'A modified TRIMP to quantify the in-season training load of team sport players', Journal of Sport Sciences, (2007) 25(6) pp.629-634 RAE2008
Abstract:
Tod, D. A., Iredale, F., Gill, N. (2003). 'Psyching-up' and muscular force production. Sports Medicine, 33 (1), 47-58. RAE2008
Abstract:
Monograph presented to Universidade Fernando Pessoa for the degree of Licenciada (Bachelor) in Physiotherapy
Abstract:
Speculative service implies that a client's request for a document is serviced by sending, in addition to the document requested, a number of other documents (or pointers thereto) that the server speculates will be requested by the client in the near future. This speculation is based on statistical information that the server maintains for each document it serves. The notion of speculative service is analogous to prefetching, which is used to improve cache performance in distributed/parallel shared memory systems, with the exception that servers (not clients) control when and what to prefetch. Using trace simulations based on the logs of our departmental HTTP server http://cs-www.bu.edu, we show that both server load and service time can be reduced considerably if speculative service is used. This is above and beyond what is currently achievable using client-side caching [3] and server-side dissemination [2]. We identify a number of parameters that could be used to fine-tune the level of speculation performed by the server.
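A toy sketch of the idea the abstract describes: the server counts, for each document, which documents tend to be requested next, and piggybacks the most likely follow-ups with each response. The class name, the per-client last-request model, and the parameter k are illustrative assumptions, not the paper's design.

```python
from collections import defaultdict, Counter

class SpeculativeServer:
    def __init__(self, k=2):
        self.k = k
        self.followers = defaultdict(Counter)  # doc -> Counter of successors
        self.last_doc = {}                     # last request seen per client

    def request(self, client, doc):
        # Update per-document successor statistics from this client's history.
        prev = self.last_doc.get(client)
        if prev is not None:
            self.followers[prev][doc] += 1
        self.last_doc[client] = doc
        # Speculate: piggyback the k most frequent successors of doc.
        hints = [d for d, _ in self.followers[doc].most_common(self.k)]
        return doc, hints

srv = SpeculativeServer()
for doc in ["index.html", "a.html", "index.html", "b.html",
            "index.html", "a.html"]:
    srv.request("client1", doc)
```

After this trace, a request for `index.html` would be served with `a.html` (seen twice as a successor) and `b.html` speculatively attached.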
Abstract:
Load balancing is often used to ensure that nodes in a distributed system are equally loaded. In this paper, we show that for real-time systems, load balancing is not desirable. In particular, we propose a new load-profiling strategy that allows the nodes of a distributed system to be unequally loaded. Using load profiling, the system attempts to distribute the load amongst its nodes so as to maximize the chances of finding a node that would satisfy the computational needs of incoming real-time tasks. To that end, we describe and evaluate a distributed load-profiling protocol for dynamically scheduling time-constrained tasks in a loosely-coupled distributed environment. When a task is submitted to a node, the scheduling software tries to schedule the task locally so as to meet its deadline. If that is not feasible, it tries to locate another node where this could be done with a high probability of success, while attempting to maintain an overall load profile for the system. Nodes in the system inform each other about their state using a combination of multicasting and gossiping. The performance of the proposed protocol is evaluated via simulation, and is contrasted to other dynamic scheduling protocols for real-time distributed systems. Based on our findings, we argue that keeping a diverse availability profile and using passive bidding (through gossiping) are both advantageous to distributed scheduling for real-time systems.
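A minimal sketch of the load-profiling intuition, under assumptions of my own: instead of equalizing load, place each task on the least-capable node that can still meet its demand (a best-fit rule), so nodes with large residual capacity stay available for demanding future tasks. The data model is illustrative; the paper's protocol additionally exchanges state via multicast and gossiping, which is omitted here.

```python
def place_task(free_capacity: dict, demand: float):
    """Best-fit placement: keep a diverse profile of residual capacity."""
    feasible = {n: c for n, c in free_capacity.items() if c >= demand}
    if not feasible:
        return None                         # no node can serve this task
    node = min(feasible, key=feasible.get)  # tightest fit preserves big holes
    free_capacity[node] -= demand
    return node

caps = {"n1": 10.0, "n2": 4.0, "n3": 7.0}
```

With these numbers, a task of demand 3 goes to n2 (the tightest fit), leaving n1's full capacity intact for a later task of demand 8 that a balanced placement might have made infeasible.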
Abstract:
High-speed networks, such as ATM networks, are expected to support diverse Quality of Service (QoS) constraints, including real-time QoS guarantees. Real-time QoS is required by many applications such as those that involve voice and video communication. To support such services, routing algorithms that allow applications to reserve the needed bandwidth over a Virtual Circuit (VC) have been proposed. Commonly, these bandwidth-reservation algorithms assign VCs to routes using the least-loaded concept, and thus result in balancing the load over the set of all candidate routes. In this paper, we show that for such reservation-based protocols, which allow for the exclusive use of a preset fraction of a resource's bandwidth for an extended period of time, load balancing is not desirable as it results in resource fragmentation, which adversely affects the likelihood of accepting new reservations. In particular, we show that load-balancing VC routing algorithms are not appropriate when the main objective of the routing protocol is to increase the probability of finding routes that satisfy incoming VC requests, as opposed to equalizing the bandwidth utilization along the various routes. We present an on-line VC routing scheme that is based on the concept of "load profiling", which allows a distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. We show the effectiveness of our load-profiling approach when compared to traditional load-balancing and load-packing VC routing schemes.
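The fragmentation effect can be illustrated with a toy comparison (the numbers and route names are assumptions, not from the paper): two candidate routes each start with 10 units free, four 3-unit VCs arrive, and then a 6-unit VC must be placed. Least-loaded routing spreads the small VCs and fragments the free bandwidth; packing them onto as few routes as possible leaves a large contiguous reservation available.

```python
def route(free: dict, demand: int, policy: str):
    """Admit a VC of the given demand on one candidate route, or None."""
    cand = [r for r in free if free[r] >= demand]
    if not cand:
        return None
    # least_loaded picks the route with the most free bandwidth;
    # pack picks the feasible route with the least free bandwidth.
    key = (lambda r: -free[r]) if policy == "least_loaded" else (lambda r: free[r])
    r = min(cand, key=key)
    free[r] -= demand
    return r

def run(policy: str):
    free = {"A": 10, "B": 10}
    for _ in range(4):
        route(free, 3, policy)      # four small VCs
    return route(free, 6, policy)   # can a large VC still be admitted?
```

Here `run("least_loaded")` leaves 4 units free on each route and rejects the 6-unit VC, while `run("pack")` admits it.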
Abstract:
To support the diverse Quality of Service (QoS) requirements of real-time (e.g. audio/video) applications in integrated services networks, several routing algorithms that allow for the reservation of the needed bandwidth over a Virtual Circuit (VC) established on one of several candidate routes have been proposed. Traditionally, such routing is done using the least-loaded concept, and thus results in balancing the load across the set of candidate routes. In a recent study, we have established the inadequacy of this load balancing practice and proposed the use of load profiling as an alternative. Load profiling techniques allow the distribution of "available" bandwidth across a set of candidate routes to match the characteristics of incoming VC QoS requests. In this paper we thoroughly characterize the performance of VC routing using load profiling and contrast it to routing using load balancing and load packing. We do so both analytically and via extensive simulations of multi-class traffic routing in Virtual Path (VP) based networks. Our findings confirm that for routing guaranteed bandwidth flows in VP networks, load balancing is not desirable as it results in VP bandwidth fragmentation, which adversely affects the likelihood of accepting new VC requests. This fragmentation is more pronounced when the granularity of VC requests is large. Typically, this occurs when a common VC is established to carry the aggregate traffic flow of many high-bandwidth real-time sources. For VP-based networks, our simulation results show that our load-profiling VC routing scheme performs better or as well as the traditional load-balancing VC routing in terms of revenue under both skewed and uniform workloads. Furthermore, load-profiling routing improves routing fairness by proactively increasing the chances of admitting high-bandwidth connections.
Abstract:
We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load at the server hosts will result in optimal performance. We show this conventional wisdom is less true when the task size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads, and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more) where the more heavy-tailed the workload, the greater the improvement factor. We evaluate the tradeoff between improvement in slowdown and increase in waiting time in a system using SITA-V, and show conditions under which SITA-V represents a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because it has a simple implementation which requires no communication from the server hosts back to the task router.
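A hedged sketch of the size-interval assignment idea: the task-size range is partitioned into intervals, one per host, and each task is routed by its size alone, with the smallest tasks going to the hosts intended to run at lighter load. The cutoffs below are illustrative, not the provably optimal ones the paper derives.

```python
import bisect

cutoffs = [10.0, 100.0]   # size boundaries separating three hosts
hosts = [[], [], []]      # per-host queues; host 0 gets the smallest tasks

def assign(task_size: float) -> int:
    """Route a task to a host by its size interval (no host feedback needed)."""
    host = bisect.bisect_right(cutoffs, task_size)
    hosts[host].append(task_size)
    return host

for size in [3.0, 50.0, 7.5, 2500.0, 0.4]:
    assign(size)
```

Note that, as the abstract says of SITA-V, the router needs no communication back from the hosts: the decision depends only on the task's size and the fixed cutoffs.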
Abstract:
Distributed hash tables have recently become a useful building block for a variety of distributed applications. However, current schemes based upon consistent hashing require both considerable implementation complexity and substantial storage overhead to achieve desired load balancing goals. We argue in this paper that these goals can be achieved more simply and more cost-effectively. First, we suggest the direct application of the "power of two choices" paradigm, whereby an item is stored at the less loaded of two (or more) random alternatives. We then consider how associating a small constant number of hash values with a key can naturally be extended to support other load balancing methods, including load-stealing or load-shedding schemes, as well as providing natural fault-tolerance mechanisms.
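The "power of two choices" placement can be sketched in a few lines, assuming a simple bucket-count model of load; the bucket count, salts, and function names are illustrative, not from the paper.

```python
import hashlib

NUM_BUCKETS = 16

def bucket(key: str, salt: str) -> int:
    """Derive one of the candidate buckets for a key via a salted hash."""
    digest = hashlib.sha256((salt + key).encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

def insert(loads: list, key: str) -> int:
    """Store the item at the less loaded of two independent hash choices."""
    a, b = bucket(key, "salt-a"), bucket(key, "salt-b")
    chosen = a if loads[a] <= loads[b] else b
    loads[chosen] += 1
    return chosen

loads = [0] * NUM_BUCKETS
for i in range(1000):
    insert(loads, f"item-{i}")
```

Because each lookup recomputes the same two candidate buckets, a reader only has to probe both; the classic result is that this second choice shrinks the maximum bucket load dramatically compared to a single hash.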
Abstract:
In this paper, we propose and evaluate an implementation of a prototype scalable web server. The prototype consists of a load-balanced cluster of hosts that collectively accept and service TCP connections. The host IP addresses are advertised using the Round Robin DNS technique, allowing any host to receive requests from any client. Once a client attempts to establish a TCP connection with one of the hosts, a decision is made as to whether or not the connection should be redirected to a different host, namely the host with the lowest number of established connections. We use the low-overhead Distributed Packet Rewriting (DPR) technique to redirect TCP connections. In our prototype, each host keeps information about connections in hash tables and linked lists. Every time a packet arrives, it is examined to see if it has to be redirected or not. Load information is maintained using periodic broadcasts amongst the cluster hosts.
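The redirection decision described above can be sketched as follows. All names and the data model are assumptions for illustration, not the prototype's API: each node compares its own established-connection count against the latest broadcast load reports and redirects new connections to the least-loaded peer.

```python
class ClusterNode:
    def __init__(self, name, peers_load):
        self.name = name
        self.connections = {}          # hash table of live connections
        self.peers_load = peers_load   # counts from periodic load broadcasts

    def accept_or_redirect(self, conn_id):
        """Accept a new TCP connection locally or pick a redirect target."""
        local = len(self.connections)
        target, load = min(self.peers_load.items(), key=lambda kv: kv[1])
        if load < local:
            return ("redirect", target)   # DPR would rewrite packets to target
        self.connections[conn_id] = "ESTABLISHED"
        return ("accept", self.name)

node = ClusterNode("hostA", {"hostB": 2, "hostC": 5})
node.connections = {1: "ESTABLISHED", 2: "ESTABLISHED", 3: "ESTABLISHED"}
```

With three local connections and a peer reporting two, a new connection would be redirected to hostB; a lightly loaded node would accept it locally instead.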