943 results for Glomerular filtration rate
Time dependency of molecular rate estimates and systematic overestimation of recent divergence times
Abstract:
Studies of molecular evolutionary rates have yielded a wide range of rate estimates for various genes and taxa. Recent studies based on population-level and pedigree data have produced remarkably high estimates of mutation rate, which strongly contrast with substitution rates inferred in phylogenetic (species-level) studies. Using Bayesian analysis with a relaxed-clock model, we estimated rates for three groups of mitochondrial data: avian protein-coding genes, primate protein-coding genes, and primate d-loop sequences. In all three cases, we found a measurable transition between the high, short-term (<1–2 Myr) mutation rate and the low, long-term substitution rate. The relationship between the age of the calibration and the rate of change can be described by a vertically translated exponential decay curve, which may be used for correcting molecular date estimates. The phylogenetic substitution rates in mitochondria are approximately 0.5% per million years for avian protein-coding sequences and 1.5% per million years for primate protein-coding and d-loop sequences. Further analyses showed that purifying selection offers the most convincing explanation for the observed relationship between the estimated rate and the depth of the calibration. We rule out the possibility that it is a spurious result arising from sequence errors, and find it unlikely that the apparent decline in rates over time is caused by mutational saturation. Using a rate curve estimated from the d-loop data, several dates for last common ancestors were calculated: modern humans and Neandertals (354 ka; 222–705 ka), Neandertals (108 ka; 70–156 ka), and modern humans (76 ka; 47–110 ka). If the rate curve for a particular taxonomic group can be accurately estimated, it can be a useful tool for correcting divergence date estimates by taking the rate decay into account. Our results show that it is invalid to extrapolate molecular rates of change across different evolutionary timescales, which has important consequences for studies of populations, domestication, conservation genetics, and human evolution.
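The rate-decay model described above lends itself to a compact illustration. The following Python sketch fits a vertically translated exponential decay, rate(t) = c + a*exp(-b*t), to hypothetical calibration-age/rate pairs; the data points, starting values and variable names are ours, not the study's.

import numpy as np
from scipy.optimize import curve_fit

def rate_curve(t, a, b, c):
    # Vertically translated exponential decay: c is the long-term
    # (phylogenetic) substitution rate; a + c is the short-term rate at t = 0.
    return c + a * np.exp(-b * t)

# Hypothetical calibration ages (Myr) and estimated rates (%/Myr)
ages = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
rates = np.array([8.0, 6.5, 4.0, 2.8, 2.0, 1.6, 1.5])

(a, b, c), _ = curve_fit(rate_curve, ages, rates, p0=(7.0, 1.0, 1.5))
print(f"short-term rate ~ {a + c:.2f} %/Myr, long-term rate ~ {c:.2f} %/Myr")

A divergence date first estimated with the long-term rate c can then be corrected by integrating rate_curve over time, which is the sense in which the abstract proposes the curve as a tool for correcting date estimates.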
Abstract:
Long-term changes in the genetic composition of a population occur by the fixation of new mutations, a process known as substitution. The rate at which mutations arise in a population and the rate at which they are fixed are expected to be equal under neutral conditions (Kimura, 1968). Between the appearance of a new mutation and its eventual fate of fixation or loss, there will be a period in which it exists as a transient polymorphism in the population (Kimura and Ohta, 1971). If the majority of mutations are deleterious (and nonlethal), the fixation probabilities of these transient polymorphisms are reduced and the mutation rate will exceed the substitution rate (Kimura, 1983). Consequently, different apparent rates may be observed on different time scales of the molecular evolutionary process (Penny, 2005; Penny and Holmes, 2001). The substitution rate of the mitochondrial protein-coding genes of birds and mammals has traditionally been recognized to be about 0.01 substitutions/site/million years (Myr) (Brown et al., 1979; Ho, 2007; Irwin et al., 1991; Shields and Wilson, 1987), with the noncoding D-loop evolving several times more quickly (e.g., Pesole et al., 1992; Quinn, 1992). Over the past decade, there has been mounting evidence that instantaneous mutation rates substantially exceed substitution rates in a range of organisms (e.g., Denver et al., 2000; Howell et al., 2003; Lambert et al., 2002; Mao et al., 2006; Mumm et al., 1997; Parsons et al., 1997; Santos et al., 2005). The immediate reaction to the first of these findings was that the polymorphisms generated by the elevated mutation rate are short-lived, perhaps extending back only a few hundred years (Gibbons, 1998; Macaulay et al., 1997). That is, purifying selection was thought to remove these polymorphisms very rapidly.
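The neutral-theory identity invoked here (Kimura, 1968) is worth writing out. The following is the textbook argument, stated for a diploid autosomal locus; the constants differ for mitochondrial DNA, but the cancellation is identical.

% New neutral mutations arise at rate 2N\mu per generation in a diploid
% population of size N, and each fixes with probability 1/(2N), so
k = 2N\mu \cdot \frac{1}{2N} = \mu .
% If most mutations are deleterious, the fixation probability u of each
% mutation falls below the neutral value 1/(2N), giving
k = 2N\mu\, u < \mu ,
% i.e. the mutation rate exceeds the substitution rate, as observed on
% longer timescales.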
Abstract:
Objectives: To evaluate differences among patients with different clinical features of ALS, we used our Bayesian method of motor unit number estimation (MUNE). Methods: We performed serial MUNE studies on 42 subjects who fulfilled the diagnostic criteria for ALS during the course of their illness. Subjects were classified into three subgroups according to whether they had typical ALS (with both upper motor neurone (UMN) and lower motor neurone (LMN) signs), predominantly UMN weakness with only minor LMN signs, or predominantly LMN weakness with only minor UMN signs. In all subjects we calculated the half-life of MUs, defined as the expected time for the number of MUs to halve, in one or more of the abductor digiti minimi (ADM), abductor pollicis brevis (APB) and extensor digitorum brevis (EDB) muscles. Results: The mean half-life of MUs was less in subjects who had typical ALS with both UMN and LMN signs than in those with predominantly UMN or predominantly LMN weakness. In 18 subjects we analysed the estimated size of the MUs and demonstrated the appearance of large MUs in subjects with UMN- or LMN-predominant weakness. We found that the appearance of large MUs was correlated with the half-life of MUs. Conclusions: Patients with different clinical features of ALS have different rates of loss and different sizes of MUs. Significance: These findings could indicate differences in disease pathogenesis.
Abstract:
Hybrid system representations have been exploited in a number of challenging modelling situations, including situations where the original nonlinear dynamics are too complex (or too imprecisely known) to be directly filtered. Unfortunately, the question of how best to design suitable hybrid system models has not yet been fully addressed, particularly in situations involving model uncertainty. This paper proposes a novel joint state-measurement relative entropy rate based approach for the design of hybrid system filters in the presence of (parameterised) model uncertainty. We also present a design approach suitable for suboptimal hybrid system filters. The benefits of our proposed approaches are illustrated through design examples and simulation studies.
Abstract:
Introduction: Electrical impedance tomography (EIT) has been shown to be able to distinguish both ventilation and perfusion. With adequate filtering, the regional distributions of both ventilation and perfusion, and their relationships, can be analysed. Several methods of separation have been suggested previously, including breath holding, electrocardiograph (ECG) gating and frequency filtering. Many of these methods require interventions inappropriate in a clinical setting. This study therefore aims to extend a previously reported frequency filtering technique to a spontaneously breathing cohort and to assess the regional distributions of ventilation and perfusion and their relationship. Methods: Ten healthy adults were measured during a breath hold and while spontaneously breathing in supine, prone, left and right lateral positions. EIT data were analysed with and without filtering at the respiratory and heart rates. Profiles of ventilation-, perfusion- and ventilation/perfusion-related impedance change were generated, and regions of ventilation and pulmonary perfusion were identified and compared. Results: Analysis of the filtration technique demonstrated its ability to separate the ventilation- and cardiac-related impedance signals without negative impact. It was therefore deemed suitable for use in this spontaneously breathing cohort. Regional distributions of ventilation, perfusion and the combined ΔZV/ΔZQ were calculated along the gravity axis and anatomically in each position. Along the gravity axis, gravity dependence was seen only in the lateral positions in the ventilation distribution, with the dependent lung being better ventilated regardless of position. This gravity dependence was not seen in perfusion. Anatomically, differences were apparent only in the lateral positions. The lateral-position ventilation distributions showed a difference in the left lung, with the right lung maintaining a similar distribution in both lateral positions. This is likely caused by more pronounced anatomical changes in the left lung when changing positions. Conclusions: The modified filtration technique was demonstrated to be effective in separating the ventilation and perfusion signals in spontaneously breathing subjects. Gravity dependence was seen only in the ventilation distribution of the left lung in lateral positions, suggesting gravity-based shifts in anatomical structures. Gravity dependence was not seen in any perfusion distribution.
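As a concrete illustration of the frequency filtering idea (not the study's actual pipeline; the frame rate, cutoffs and synthetic signal below are our assumptions), a zero-phase band-pass around the respiratory and cardiac frequencies separates a toy impedance trace in a few lines of Python:

import numpy as np
from scipy.signal import butter, filtfilt

fs = 50.0                      # assumed EIT frame rate (Hz)
t = np.arange(0, 60, 1 / fs)
resp_hz, heart_hz = 0.25, 1.2  # ~15 breaths/min and ~72 beats/min
z = 1.0 * np.sin(2 * np.pi * resp_hz * t) + 0.1 * np.sin(2 * np.pi * heart_hz * t)

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase Butterworth band-pass; cutoffs normalised to Nyquist.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

ventilation = bandpass(z, 0.1, 0.5, fs)  # band around the respiratory rate
perfusion = bandpass(z, 0.8, 2.0, fs)    # band around the heart rate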
Abstract:
Vehicular safety applications, such as cooperative collision warning systems, rely on beaconing to provide the situational awareness needed to predict, and therefore to avoid, possible collisions. Beaconing is the continual exchange of vehicle motion-state information, such as position, speed, and heading, which enables each vehicle to track its neighboring vehicles in real time. This work presents a context-aware adaptive beaconing scheme that dynamically adapts the beacon repetition rate based on an estimated channel load and the danger severity of the interactions among vehicles. The safety, efficiency, and scalability of the new scheme are evaluated by simulating vehicle collisions caused by inattentive drivers under various road traffic densities. Simulation results show that the new scheme is more efficient and scalable, and improves safety more than the existing non-adaptive and adaptive-rate schemes.
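The abstract does not give the controller itself, so the following Python sketch is only an illustration of the general idea: scale the beacon rate with danger severity and back it off as channel load rises. The bounds and the blending rule are our assumptions, not the scheme evaluated in the paper.

MIN_RATE_HZ = 1.0   # assumed safety floor for beaconing
MAX_RATE_HZ = 10.0  # assumed ceiling, typical of DSRC-style beaconing

def beacon_rate(channel_busy_ratio: float, danger: float) -> float:
    # Both inputs normalised to [0, 1]: danger raises the demanded rate,
    # channel load scales it back to avoid congesting the shared channel.
    demand = MIN_RATE_HZ + danger * (MAX_RATE_HZ - MIN_RATE_HZ)
    headroom = max(0.0, 1.0 - channel_busy_ratio)
    return max(MIN_RATE_HZ, demand * headroom)

print(beacon_rate(channel_busy_ratio=0.2, danger=0.9))  # ~7.3 Hz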
Abstract:
Objective: To use our Bayesian method of motor unit number estimation (MUNE) to evaluate lower motor neuron degeneration in ALS. Methods: In subjects with ALS we performed serial MUNE studies. We examined the repeatability of the test and then determined whether the loss of MUs was better fitted by an exponential or a Weibull distribution. Results: The decline in motor unit (MU) numbers was well fitted by an exponential decay curve. We calculated the half-life of MUs in the abductor digiti minimi (ADM), abductor pollicis brevis (APB) and/or extensor digitorum brevis (EDB) muscles. The mean half-life of the MUs of the ADM muscle was greater than those of the APB or EDB muscles. The half-life of MUs was less in the ADM muscle of subjects with upper limb onset than in those with lower limb onset. Conclusions: The rate of loss of lower motor neurons in ALS is exponential, the motor units of the APB decay more quickly than those of the ADM muscle, and the rate of loss of motor units is greater at the site of onset of disease. Significance: This shows that the Bayesian MUNE method is useful in following the course and exploring the clinical features of ALS. © 2012 International Federation of Clinical Neurophysiology.
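The exponential-decay model and the half-life it implies are easy to make concrete. A minimal Python sketch, using hypothetical serial MUNE counts (the visit times and counts are ours): fit ln N(t) = ln N0 - lam*t by least squares, then half-life = ln 2 / lam.

import numpy as np

months = np.array([0, 3, 6, 9, 12])          # time since first study (months)
mu_counts = np.array([120, 95, 78, 60, 49])  # hypothetical MUNE at each visit

# Log-linear least squares: -ln N = lam * t - ln N0
lam, _ = np.polyfit(months, -np.log(mu_counts), 1)
print(f"half-life ~ {np.log(2) / lam:.1f} months")  # ~9.3 months here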
Consecutive days of cold water immersion: effects on cycling performance and heart rate variability.
Abstract:
We investigated performance and heart rate (HR) variability (HRV) over consecutive days of cycling with post-exercise cold water immersion (CWI) or passive recovery (PAS). In a crossover design, 11 cyclists completed two separate 3-day training blocks (120 min cycling per day, 66 maximal sprints, 9 min time trialling [TT]), followed by 2 days of recovery-based training. The cyclists recovered from each training session by standing in cold water (10 °C) or at room temperature (27 °C) for 5 min. Mean power for sprints, total TT work and HR were assessed during each session. Resting vagal HRV (natural logarithm of the square root of the mean of squared differences of successive R-R intervals; ln rMSSD) was assessed after exercise, after the recovery intervention, during sleep and upon waking. CWI allowed better maintenance of mean sprint power (between-trial difference [90% confidence limits] +12.4% [5.9; 18.9]), cadence (+2.0% [0.6; 3.5]), and mean HR during exercise (+1.6% [0.0; 3.2]) compared with PAS. ln rMSSD immediately following CWI was higher (+144% [92; 211]) compared with PAS. There was no difference between the trials in TT performance (-0.2% [-3.5; 3.0]) or waking ln rMSSD (-1.2% [-5.9; 3.4]). CWI helps to maintain sprint performance during consecutive days of training, whereas its effects on vagal HRV vary over time and depend on prior exercise intensity.
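The HRV metric used here is fully specified by its name, so it can be computed directly; the R-R series below is hypothetical.

import numpy as np

rr_ms = np.array([812, 795, 830, 841, 808, 790, 815, 825])  # R-R intervals (ms)
diffs = np.diff(rr_ms)                # successive differences
rmssd = np.sqrt(np.mean(diffs ** 2))  # root of the mean squared difference
ln_rmssd = np.log(rmssd)              # the ln rMSSD reported above
print(f"rMSSD = {rmssd:.1f} ms, ln rMSSD = {ln_rmssd:.2f}")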
Abstract:
We investigated the effect of hydrotherapy on time-trial performance and cardiac parasympathetic reactivation during recovery from intense training. On three occasions, 18 well-trained cyclists completed 60 min high-intensity cycling, followed 20 min later by one of three 10-min recovery interventions: passive rest (PAS), cold water immersion (CWI), or contrast water immersion (CWT). The cyclists then rested quietly for 160 min with R-R intervals and perceptions of recovery recorded every 30 min. Cardiac parasympathetic activity was evaluated using the natural logarithm of the square root of mean squared differences of successive R-R intervals (ln rMSSD). Finally, the cyclists completed a work-based cycling time trial. Effects were examined using magnitude-based inferences. Differences in time-trial performance between the three trials were trivial. Compared with PAS, general fatigue was very likely lower for CWI (difference [90% confidence limits; -12% (-18; -5)]) and CWT [-11% (-19; -2)]. Leg soreness was almost certainly lower following CWI [-22% (-30; -14)] and CWT [-27% (-37; -15)]. The change in mean ln rMSSD following the recovery interventions (ln rMSSD(Post-interv)) was almost certainly higher following CWI [16.0% (10.4; 23.2)] and very likely higher following CWT [12.5% (5.5; 20.0)] compared with PAS, and possibly higher following CWI [3.7% (-0.9; 8.4)] compared with CWT. The correlations between performance, ln rMSSD(Post-interv) and perceptions of recovery were unclear. A moderate correlation was observed between ln rMSSD(Post-interv) and leg soreness [r = -0.50 (-0.66; -0.29)]. Although the effects of CWI and CWT on performance were trivial, the beneficial effects on perceptions of recovery support the use of these recovery strategies.
Abstract:
Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now regarded as the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research was to relate the treatment performance of onsite sewage treatment systems to the soil conditions at the site, with the emphasis on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project, and has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be attributed mainly to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for the treatment of wastewater, and the disinfection of effluent prior to disposal, is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of them do not perform to stipulated standards, and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. Other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices.
In recent years the use of biofilters, particularly peat, has attracted research interest. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media, such as absorbent plastic and bark, has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. It is therefore important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferred soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective for effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent: it has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be regarded as a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients, and may also include microbial contamination. Its use for surface irrigation can therefore pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subject to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are sensitive to a number of factors, in particular climatic conditions, and as such their applicability is location-specific. The design of systems based solely on evapotranspiration is also questionable; to ensure greater reliability, such systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics, and the mechanisms of clogging mat formation by various physical, chemical and biological processes. Biological clogging is the most common process; it occurs when bacterial growth or its by-products reduce the soil pore diameters, and it is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms, and as it increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process, which is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated either to control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention, although research conclusions with regard to short-duration rest intervals are contradictory.
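The hydraulic point at the end of the preceding passage can be stated with a simplified, steady-state Darcy's-law sketch; the one-dimensional geometry and symbols are illustrative assumptions on our part, not the report's.

% Steady vertical flow through a clogging mat of thickness Z_m and
% saturated conductivity K_m, ponded to depth H, with soil-water
% potential h_s (negative) just below the mat:
q = K_m \, \frac{H + Z_m - h_s}{Z_m} .
% Because K_m is typically orders of magnitude below the native soil's
% conductivity, q is controlled by the mat rather than by the
% percolation-test conductivity, and the soil beneath stays unsaturated.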
It has been claimed that intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has further been recommended that the rest periods should be much longer, in the range of about six months, which entails the provision of a second, alternating seepage bed. The other concepts that have been investigated are: the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial rather than parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures, such as oxidation with hydrogen peroxide and replacement of the infiltration surface, has been shown to be of only short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat, and that the nature of the suspended solids is also important. The finer particles from extended aeration systems, compared with those from septic tanks, will penetrate deeper into the soil and hence ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies, which have shown that effluent quality may be a factor in the case of highly permeable soils but not with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, leading to environmental and public health impacts. Significant microbial contamination of surface water and groundwater has been attributed to septic tank effluent, and there are a number of documented instances of septic tank related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed, and dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable density of septic tanks in a given area.
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed, or where effluent runs off at the surface as a result of system failure. Soils have a finite capacity for the removal of phosphorus; once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important not only to ensure that the system design is based on subsurface conditions, but also to recognise that the density of these systems in a given area is a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which in turn would dictate the type of effluent disposal method to be adopted.
Abstract:
The increasing popularity of video consumption on mobile devices requires an effective video coding strategy. To cope with diverse communication networks, video services often need to maintain sustainable quality when the available bandwidth is limited. One strategy for visually-optimised video adaptation is to implement region-of-interest (ROI) based scalability, whereby important regions are encoded at a higher quality while sufficient quality is maintained for the rest of the frame. The result is an improved perceived quality at the same bit rate as normal encoding, which is particularly obvious at lower bit rates. However, because of the difficulty of predicting ROIs accurately, there has been limited research and development of ROI-based video coding for general videos. In this paper, the phase spectrum of quaternion Fourier transform (PQFT) method is adopted to determine the ROI. To improve the results of ROI detection, the saliency map from the PQFT is augmented with maps created from high-level knowledge of factors that are known to attract human attention: maps that locate faces and emphasise the centre of the screen are used in combination with the saliency map to determine the ROI. The contributions of this paper lie in the automatic ROI detection technique for coding low bit rate videos, which includes an ROI prioritisation technique to give different levels of encoding quality to multiple ROIs, and in the evaluation of the proposed automatic ROI detection, which is shown to perform close to human-identified ROIs, based on eye fixation data.
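For readers unfamiliar with PQFT-style saliency, a single-channel reduction of the idea fits in a few lines of Python: keep only the phase of the 2-D Fourier transform, invert, and smooth. The full PQFT operates on a quaternion image built from colour and motion channels; this grayscale sketch is only the core phase-spectrum step.

import numpy as np
from scipy.ndimage import gaussian_filter

def phase_saliency(image):
    # image: 2-D grayscale array; returns a saliency map of the same shape.
    f = np.fft.fft2(image)
    phase_only = np.exp(1j * np.angle(f))  # discard magnitude, keep phase
    sal = np.abs(np.fft.ifft2(phase_only)) ** 2
    return gaussian_filter(sal, sigma=3)   # post-smoothing, as in PQFT

The paper then augments such a map with face-location and centre-weighted maps before deriving the ROIs.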
Abstract:
Epidemiological research has consistently shown an association between fine and ultrafine particle concentrations and increases in both respiratory and cardiovascular morbidity and mortality. These particles, often found in vehicle emissions outside buildings, can penetrate indoors via the building envelope and mechanical ventilation systems. Indoor activities such as printing, cooking and cleaning, as well as the movement of building occupants, are additional sources of these particles. In this context, the filtration systems of mechanically ventilated buildings can reduce indoor particle concentrations. Several studies have quantified the efficiency of dry-media and electrostatic filters, but they focused mainly on particles larger than 300 nm. Others have studied ultrafine particles, but their investigations were conducted in laboratories. At this point, there is still only limited information on in situ filter efficiency and an incomplete understanding of the influence of filtration on indoor/outdoor (I/O) ratios of particle concentrations. To help address these gaps in knowledge and provide new information for the selection of appropriate filter types in office building HVAC systems, we aimed to: (1) measure particle concentrations upstream and downstream of filter devices, as well as outdoors and indoors at office buildings; (2) quantify the efficiency of different filter types in different buildings; and (3) assess the impact of these filters on I/O ratios under different indoor and outdoor source operation scenarios.
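The two quantities targeted by aims (1)-(3) reduce to simple ratios; the following Python lines spell them out with hypothetical concentrations.

upstream = 12000.0   # particle concentration ahead of the filter (particles/cm^3)
downstream = 4200.0  # particle concentration after the filter
indoor = 3500.0      # indoor office concentration
outdoor = 11000.0    # outdoor concentration

efficiency = 1.0 - downstream / upstream  # fractional removal, 0.65 here
io_ratio = indoor / outdoor               # indoor/outdoor ratio, ~0.32
print(f"filter efficiency = {efficiency:.0%}, I/O ratio = {io_ratio:.2f}")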
Abstract:
Background: Hamstring strain injuries are prevalent in sport and re-injury rates have been high for many years. Whilst much focus has centred on the impact of previous hamstring strain injury on maximal eccentric strength, high rates of torque development are also of interest, given the important role of the hamstrings during the terminal swing phase of running. The impact of prior strain injury on myoelectrical activity of the hamstrings during tasks requiring high rates of torque development has received little attention. Purpose: To determine whether recreational athletes with a history of unilateral hamstring strain injury, who have returned to training and competition, exhibit lower levels of myoelectrical activity during eccentric contraction, rate of torque development and impulse 30, 50 and 100 ms after the onset of myoelectrical activity or torque development in the previously injured limb compared to the uninjured limb. Study design: Case-control study. Methods: Twenty-six recreational athletes were recruited. Of these, 13 athletes had a history of unilateral hamstring strain injury (all confined to the biceps femoris long head) and 13 had no history of hamstring strain injury. Following familiarisation, all athletes undertook isokinetic dynamometry testing and surface electromyography assessment of the biceps femoris long head and medial hamstrings during eccentric contractions at -60 and -180°.s-1. Results: In the injured limb of the injured group, compared to the contralateral uninjured limb, rate of torque development (RTD) and impulse (IMP) were lower during -60°.s-1 eccentric contractions at 50 ms (RTD, injured limb = 312.27 ± 191.78 Nm.s-1 vs. uninjured limb = 518.54 ± 172.81 Nm.s-1, p=0.008; IMP, injured limb = 0.73 ± 0.30 Nm.s vs. uninjured limb = 0.97 ± 0.23 Nm.s, p=0.005) and 100 ms (RTD, injured limb = 280.03 ± 131.42 Nm.s-1 vs. uninjured limb = 460.54 ± 152.94 Nm.s-1, p=0.001; IMP, injured limb = 2.15 ± 0.89 Nm.s vs. uninjured limb = 3.07 ± 0.63 Nm.s, p<0.001) after the onset of contraction. Biceps femoris long head muscle activation was lower at 100 ms at both contraction speeds (-60°.s-1, normalised iEMG activity (×1000), injured limb = 26.25 ± 10.11 vs. uninjured limb = 33.57 ± 8.29, p=0.009; -180°.s-1, normalised iEMG activity (×1000), injured limb = 31.16 ± 10.01 vs. uninjured limb = 39.64 ± 8.36, p=0.009). Medial hamstring activation did not differ between limbs in the injured group. Comparisons in the uninjured group showed no significant between-limb differences for any variable. Conclusion: Previously injured hamstrings displayed lower rate of torque development and impulse during slow maximal eccentric contraction compared to the contralateral uninjured limb. Lower myoelectrical activity was confined to the biceps femoris long head. Regardless of whether these deficits are the cause of or the result of injury, these findings could have important implications for hamstring strain injury and re-injury. In particular, given the importance of high levels of muscle activity for bringing about specific muscular adaptations, lower levels of myoelectrical activity may limit the adaptive response to rehabilitation interventions, suggesting that greater attention be given to neural function of the knee flexors following hamstring strain injury.
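RTD and impulse as reported here are standard derived quantities of a torque-time trace; the following Python sketch (with an assumed sampling rate and a toy linear torque rise) shows how they are typically computed over the 30, 50 and 100 ms windows after contraction onset.

import numpy as np

fs = 1000                        # assumed dynamometer sampling rate (Hz)
t = np.arange(0, 0.101, 1 / fs)  # first 100 ms after contraction onset
torque = 500.0 * t               # toy linear torque rise (Nm)

def rtd_and_impulse(torque, t, window_s, fs):
    n = int(window_s * fs)
    dt = 1 / fs
    rtd = (torque[n] - torque[0]) / window_s               # mean slope, Nm/s
    impulse = np.sum((torque[:n] + torque[1:n + 1]) / 2) * dt  # trapezoid area, Nm.s
    return rtd, impulse

for w in (0.030, 0.050, 0.100):
    rtd, imp = rtd_and_impulse(torque, t, w, fs)
    print(f"{w * 1000:.0f} ms: RTD = {rtd:.0f} Nm/s, impulse = {imp:.3f} Nm.s")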
Abstract:
Background: Hamstring strain injuries (HSIs) are prevalent in sport and re-injury rates have been high for many years. Whilst much focus has centred on the impact of previous HSI on maximal eccentric strength, high rates of torque development are also of interest, given the important role of the hamstrings during the terminal swing phase of gait. The impact of prior strain injury on neuromuscular function of the hamstrings during tasks requiring high rates of torque development has received little attention. The purpose of this study was to determine whether recreational athletes with a history of unilateral HSI, who have returned to training and competition, exhibit lower levels of eccentric muscle activation, rate of torque development and impulse 30, 50 and 100 ms after the onset of electromyographical activity or torque development in the previously injured limb compared to the uninjured limb. Methods: Twenty-six recreational athletes were recruited. Of these, 13 athletes had a history of unilateral HSI (all confined to the biceps femoris long head) and 13 had no history of HSI. Following familiarisation, all athletes undertook isokinetic dynamometry testing and surface electromyography assessment of the biceps femoris long head and medial hamstrings during eccentric contractions at -60 and -180°.s-1. Results: In the injured limb of the injured group, compared to the contralateral uninjured limb, rate of torque development (RTD) and impulse (IMP) were lower during -60°.s-1 eccentric contractions at 50 ms (RTD, p=0.008; IMP, p=0.005) and 100 ms (RTD, p=0.001; IMP, p<0.001) after the onset of contraction. There was also a non-significant trend for RTD during -180°.s-1 contractions to be lower 100 ms after the onset of contraction (p=0.064). Biceps femoris long head muscle activation was lower at 100 ms at both contraction speeds (-60°.s-1, p=0.009; -180°.s-1, p=0.009). Medial hamstring activation did not differ between limbs in the injured group. Comparisons in the uninjured group showed no significant between-limb differences for any variable. Conclusion: Previously injured hamstrings displayed lower rate of torque development and impulse during eccentric contraction. Lower muscle activation was confined to the biceps femoris long head. Regardless of whether these deficits are the cause of or the result of injury, these findings have important implications for HSI and re-injury, and suggest that greater attention be given to neural function of the knee flexors.