Abstract:
Bioelectrical impedance analysis (BIA) is a method of body composition analysis, first investigated in 1962, that has recently received much attention from a number of research groups. The reasons for this recent interest are its advantages (viz. inexpensive, non-invasive and portable) and also the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor (volume ∝ length²/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water (TBW). Several research groups have investigated the application of AC currents at lower frequencies (e.g. 5 kHz) to predict extracellular water (ECW). However, all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept-frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance versus resistance yields a circular arc, with the resistance decreasing with increasing frequency and the reactance increasing from zero to a maximum and then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, Ro, at zero frequency; a value that cannot be measured directly using surface electrodes. The theoretical basis for selecting these impedance values (Zc and Ro) to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals (n=42) in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L²/Zc and L²/Ro (L=length) yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels (n=20) and also to a group with an abnormal balance of extracellular to intracellular fluids (n=20). In both cases the equations using L²/Zc and L²/Ro accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple frequency bioelectrical impedance analysis (MFBIA) can accurately predict both TBW and ECW in both normal and abnormal animals (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects (male and female, aged between 18 and 45). Whole-body impedance measurements were recorded on each subject using the MFBIA technique and the correlations between body water volumes (TBW and ECW) and height²/impedance (for all measured frequencies) were compared.
The prediction quotients H²/Zc and H²/Ro (H=height) again yielded the highest correlation with TBW and ECW respectively, with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others. It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema (an excess of extracellular fluid) was investigated. The MFBIA technique was demonstrated to be significantly more sensitive (P < 0.05) in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the patient's muscle mass during the course of treatment. The determination of body composition (viz. TBW and ECW) by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition (viz. TBW and ECW) regardless of the individual's state of health.
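A minimal sketch of the arc-analysis step described above, assuming only that resistance and reactance have been measured over the swept frequency range; the circle-fit method (a Kasa least-squares fit) and all variable names are illustrative, not the thesis's actual programs:

```python
import numpy as np

def fit_cole_arc(resistance, reactance):
    """Least-squares (Kasa) circle fit to a measured Cole plot.

    `resistance` and `reactance` are arrays of R and X (ohms, reactance
    taken as positive) measured across the swept frequency range.  Returns
    (R0, Rinf, Zc): the extrapolated zero- and infinite-frequency
    resistances and the impedance magnitude at the characteristic
    frequency (the top of the fitted arc, where reactance is maximal).
    """
    R = np.asarray(resistance, dtype=float)
    X = np.asarray(reactance, dtype=float)
    # Fit R^2 + X^2 = 2aR + 2bX + c to find the circle centre (a, b).
    A = np.column_stack([2 * R, 2 * X, np.ones_like(R)])
    (a, b, c), *_ = np.linalg.lstsq(A, R**2 + X**2, rcond=None)
    radius = np.sqrt(a**2 + b**2 + c)
    half_chord = np.sqrt(radius**2 - b**2)      # arc intercepts with the R axis
    R0, Rinf = a + half_chord, a - half_chord
    Zc = np.hypot(a, b + radius)                # |Z| at the arc's highest point
    return R0, Rinf, Zc

# Hypothetical usage with a measured impedance spectrum:
# R0, Rinf, Zc = fit_cole_arc(R_measured, X_measured)
# tbw_quotient = length_cm**2 / Zc    # regressed against isotope-dilution TBW
# ecw_quotient = length_cm**2 / R0    # regressed against isotope-dilution ECW
```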
Abstract:
In this paper, the commonly used switching schemes for sliding mode control of power converters are analyzed and designed in the frequency domain. The particular application of a distribution static compensator (DSTATCOM) in voltage control mode is investigated in a power distribution system. Tsypkin's method and the describing function are used to obtain the switching conditions for two-level and three-level voltage source inverters. Magnitude conditions on the carrier signals are developed for robust switching of the inverter under the carrier-based modulation scheme of sliding mode control. The existence of border collision bifurcation is identified so that the complex switching states of the inverter can be avoided. The load bus voltage of an unbalanced three-phase non-stiff radial distribution system is controlled using the proposed carrier-based design. The results are validated through PSCAD/EMTDC simulation studies and on a scaled laboratory model of the DSTATCOM developed for experimental verification.
Abstract:
Purpose: The aim was to construct and advise on the use of a cost-per-wear model based on contact lens replacement frequency, to form an equitable basis for cost comparison.
Methods: The annual cost of professional fees, contact lenses and solutions when wearing daily, two-weekly and monthly replacement contact lenses is determined in the context of the Australian market for spherical, toric and multifocal prescription types. This annual cost is divided by the number of times lenses are worn per year, resulting in a 'cost-per-wear'. The model is presented graphically as the cost-per-wear versus the number of times lenses are worn each week for daily replacement and reusable (two-weekly and monthly replacement) lenses.
Results: The cost-per-wear for two-weekly and monthly replacement spherical lenses is almost identical but decreases with increasing frequency of wear. The cost-per-wear of daily replacement spherical lenses is lower than for reusable spherical lenses when worn from one to four days per week, but higher when worn six or seven days per week. The point at which the cost-per-wear is virtually the same for all three spherical lens replacement frequencies (approximately AUD$3.00) is five days of lens wear per week. A similar but upwardly displaced (higher cost) pattern is observed for toric lenses, with the cross-over point occurring between three and four days of wear per week (AUD$4.80). Multifocal lenses have the highest price, with cross-over points for daily versus two-weekly replacement lenses at between four and five days of wear per week (AUD$5.00) and for daily versus monthly replacement lenses at three days per week (AUD$5.50).
Conclusions: This cost-per-wear model can be used to assist practitioners and patients in making an informed decision about the cost of contact lens wear, as one of many considerations that must be taken into account when deciding on the most suitable lens replacement modality.
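As a rough illustration of the cost-per-wear arithmetic (not the paper's model parameters or market data; the fee, lens and solution figures below are placeholders), the cost per wear is the annual outlay divided by the number of wears per year, with daily disposables carrying a true per-wear lens cost and reusable lenses a largely fixed annual one:

```python
def cost_per_wear(annual_fixed_cost, per_wear_cost, wears_per_week):
    """Annual fixed costs (professional fees, plus lenses and solutions for
    reusable lenses) spread over the year's wears, plus any true per-wear
    cost (the pair of lenses itself for daily disposables)."""
    return annual_fixed_cost / (wears_per_week * 52) + per_wear_cost

# Placeholder AUD figures for illustration only (not the paper's data):
for days in range(1, 8):
    daily = cost_per_wear(annual_fixed_cost=150.0, per_wear_cost=1.80,
                          wears_per_week=days)
    monthly = cost_per_wear(annual_fixed_cost=150.0 + 480.0,  # fees + lenses/solutions
                            per_wear_cost=0.0, wears_per_week=days)
    print(f"{days} d/week: daily ${daily:.2f}, monthly ${monthly:.2f} per wear")
```

With these placeholder numbers the two curves happen to cross near five days of wear per week, mirroring the qualitative behaviour reported for spherical lenses.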
Abstract:
A statistical modeling method to accurately determine combustion chamber resonance is proposed and demonstrated. The method utilises Markov chain Monte Carlo (MCMC), through the Metropolis-Hastings (MH) algorithm, to yield a probability density function for the combustion chamber resonant frequency and to find its best estimate along with the associated uncertainty. The accurate determination of combustion chamber resonance is then used to investigate various engine phenomena, with appropriate uncertainty, for a range of engine cycles. It is shown that, when operating on various ethanol/diesel fuel combinations, a 20% substitution yields the least inter-cycle variability in combustion chamber resonance.
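The abstract does not give the signal model or likelihood used, so the following is only a minimal sketch of random-walk Metropolis-Hastings applied to a synthetic damped-cosine stand-in for a band-passed pressure trace; the decay rate, noise level, prior band and proposal step are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a band-passed in-cylinder pressure trace: a
# decaying oscillation at an unknown resonant frequency, buried in noise.
fs, true_f, sigma = 100_000.0, 5_400.0, 0.5      # sample rate (Hz), frequency (Hz), noise std
t = np.arange(0.0, 0.01, 1.0 / fs)
data = np.exp(-300 * t) * np.cos(2 * np.pi * true_f * t) + sigma * rng.normal(size=t.size)

def log_posterior(f):
    """Gaussian log-likelihood of a decaying resonance at f Hz, flat prior on the band."""
    if not 3_000.0 < f < 8_000.0:
        return -np.inf
    model = np.exp(-300 * t) * np.cos(2 * np.pi * f * t)
    return -0.5 * np.sum((data - model) ** 2) / sigma ** 2

# Coarse initial estimate from the FFT peak, then refine with random-walk MH.
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_current = freqs[np.abs(np.fft.rfft(data)).argmax()]
lp_current, samples = log_posterior(f_current), []
for _ in range(20_000):
    f_prop = f_current + 10.0 * rng.normal()     # proposal step (Hz)
    lp_prop = log_posterior(f_prop)
    if np.log(rng.uniform()) < lp_prop - lp_current:
        f_current, lp_current = f_prop, lp_prop
    samples.append(f_current)

posterior = np.array(samples[5_000:])            # discard burn-in
print(f"resonant frequency ~ {posterior.mean():.0f} +/- {posterior.std():.1f} Hz")
```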
Abstract:
Safety interventions (e.g., median barriers, photo enforcement) and road features (e.g., median type and width) can influence crash severity, crash frequency, or both. Both dimensions, crash frequency and crash severity, are needed to obtain a full accounting of road safety. Extensive literature and common sense both dictate that crashes are not created equal, with fatalities costing society on average more than 1,000 times the cost of property damage crashes. Despite this glaring disparity, the profession has not unanimously embraced or successfully defended a nonarbitrary severity weighting approach for analyzing safety data and conducting safety analyses. It is argued here that the two dimensions (frequency and severity) can be combined by intelligently and reliably weighting crash frequencies, converting all crashes to property-damage-only crash equivalents (PDOEs) using comprehensive societal unit crash costs. This approach is analogous to calculating axle load equivalents in the prediction of pavement damage: for instance, a 40,000-lb truck causes 4,025 times more stress than a 4,000-lb car, so simply counting axles is not sufficient. Calculating PDOEs using unit crash costs is the most defensible and nonarbitrary weighting scheme, allows for the simple incorporation of severity and frequency, and leads to crash models that are sensitive to factors affecting crash severity. Moreover, using PDOEs diminishes the errors introduced by the underreporting of less severe crashes, an added benefit of the PDOE analysis approach. The method is illustrated with rural road segment data from South Korea (which in practice would develop PDOEs with Korean crash cost data).
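A hedged sketch of the PDOE weighting step (the unit crash costs below are placeholders, not the Korean cost data the paper would use):

```python
# Hypothetical comprehensive societal unit crash costs by severity level.
UNIT_COST = {"fatal": 4_000_000, "injury": 80_000, "pdo": 4_000}

def pdo_equivalents(crash_counts, unit_cost=UNIT_COST):
    """Convert observed crash counts by severity into property-damage-only
    crash equivalents (PDOEs) by weighting each count with its unit cost
    relative to the cost of a property-damage-only crash."""
    return sum(count * unit_cost[severity] / unit_cost["pdo"]
               for severity, count in crash_counts.items())

# A road segment with 1 fatal, 5 injury and 20 PDO crashes:
# 1*1000 + 5*20 + 20*1 = 1120 PDOEs with the placeholder costs above.
print(pdo_equivalents({"fatal": 1, "injury": 5, "pdo": 20}))
```

The resulting PDOE counts can then be used as the response variable in a crash frequency model, so severity and frequency enter through a single, cost-based weighting rather than an arbitrary one.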
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals because of the large number of intersections in transportation networks, the complexity of traffic movements at these locations, which leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types, including head-on, sideswipe, rear-end, and angle crashes, occur at intersections. While intersection crash totals may not reveal a site deficiency, the over-representation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to model the expected frequency of crashes by collision type at intersections to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to model collision type frequencies simultaneously to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the notion of the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
Abstract:
In rural low-voltage networks, distribution lines are usually highly resistive. When many distributed generators are connected to such lines, power sharing among them is difficult with conventional droop control, as the real and reactive power are strongly coupled. A high droop gain can alleviate this problem but may drive the system to instability. To overcome this, two droop control methods are proposed for accurate load sharing with a frequency droop controller. The first method requires no communication among the distributed generators and regulates the output voltage and frequency while ensuring acceptable load sharing; for this purpose the droop equations are modified with a transformation matrix based on the line R/X ratio. The second proposed method, with minimal low-bandwidth communication, modifies the reference frequency of the distributed generators based on the active and reactive power flow in the lines connected to their points of common coupling. The performance of the two proposed controllers is compared, through time-domain simulation of a test system, with that of a controller that relies on an expensive high-bandwidth communication system. The magnitudes of the power-sharing errors of the three droop control schemes are evaluated and tabulated.
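The abstract does not give the modified droop equations, so the sketch below uses a commonly published form of R/X-based decoupling; the droop gains, set points and line parameters are illustrative only:

```python
import numpy as np

def transformed_droop(P, Q, R, X, f0=50.0, V0=230.0, m=1e-5, n=5e-4):
    """Frequency/voltage droop applied to power components decoupled with a
    rotation matrix built from the line R/X ratio (a common formulation,
    not necessarily the paper's exact equations).

    P, Q : measured real and reactive power output of the unit (W, var)
    R, X : resistance and reactance of the line to the point of common coupling
    m, n : droop gains; f0, V0 are the no-load frequency and voltage set points.
    """
    Z = np.hypot(R, X)
    T = np.array([[X / Z, -R / Z],
                  [R / Z,  X / Z]])      # orthogonal decoupling transform
    P_prime, Q_prime = T @ np.array([P, Q])
    f_ref = f0 - m * P_prime             # frequency droops with transformed P
    V_ref = V0 - n * Q_prime             # voltage droops with transformed Q
    return f_ref, V_ref

# Highly resistive rural feeder (R/X >> 1): most of the real power maps into
# Q', so the voltage droop ends up doing the real-power sharing.
print(transformed_droop(P=10_000.0, Q=2_000.0, R=0.8, X=0.1))
```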
Abstract:
As the use of renewable energy sources (RESs) increases worldwide, there is rising interest in their impacts on power system operation and control. An overview of the key issues and new challenges in frequency regulation arising from the integration of renewable energy units into power systems is presented. Following a brief survey of the existing challenges and recent developments, the impact of the power fluctuation produced by variable renewable sources (such as wind and solar units) on system frequency performance is presented. An updated LFC model is introduced, and the power system frequency response in the presence of RESs and the associated issues are analysed. The need to revise frequency performance standards is emphasised. Finally, non-linear time-domain simulations on the standard 39-bus and 24-bus test systems show that the simulated results agree with those predicted analytically.
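As a minimal, hedged illustration of how renewable power fluctuation enters a frequency-response calculation (a generic single-area primary-response sketch, not the paper's updated LFC model; all parameter values are assumptions):

```python
import numpy as np

# Single-area, per-unit primary frequency response with a fluctuating
# renewable infeed:  2H dΔf/dt = ΔPm + ΔP_res - ΔP_load - D·Δf,
# governor-turbine:  Tg dΔPm/dt = -ΔPm - Δf/R
H, D, R, Tg = 5.0, 1.0, 0.05, 0.5        # inertia (s), damping, droop, governor time constant
dt, steps = 0.01, 6000                   # 60 s of simulation
rng = np.random.default_rng(0)

df, dPm, trace = 0.0, 0.0, []
for k in range(steps):
    dP_res = 0.02 * np.sin(2 * np.pi * 0.1 * k * dt) + 0.005 * rng.normal()  # wind/solar fluctuation
    dP_load = 0.1 if k * dt > 10.0 else 0.0                                   # step load increase at t = 10 s
    dPm += dt / Tg * (-dPm - df / R)
    df += dt / (2 * H) * (dPm + dP_res - dP_load - D * df)
    trace.append(df)

print(f"frequency deviation at the end of the run: {trace[-1]:+.4f} pu")
```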
Abstract:
A protein-truncating variant of CHEK2, 1100delC, is associated with a moderate increase in breast cancer risk. We have determined the prevalence of this allele in index cases from 300 Australian multiple-case breast cancer families, 95% of which had been found to be negative for mutations in BRCA1 and BRCA2. Only two (0.6%) index cases heterozygous for the CHEK2 mutation were identified. All available relatives in these two families were genotyped, but there was no evidence of co-segregation between the CHEK2 variant and breast cancer. Lymphoblastoid cell lines established from a heterozygous carrier contained approximately 20% of the CHEK2 1100delC mRNA relative to wild-type CHEK2 transcript. However, no truncated CHK2 protein was detectable. Analyses of expression and phosphorylation of wild-type CHK2 suggest that the variant is likely to act by haploinsufficiency. Analysis of CDC25A degradation, a downstream target of CHK2, suggests that some compensation occurs to allow normal degradation of CDC25A. Such compensation of the 1100delC defect in CHEK2 might explain the rather low breast cancer risk associated with the CHEK2 variant, compared to that associated with truncating mutations in BRCA1 or BRCA2.
Abstract:
Eigen-based techniques and other monolithic approaches have long been a cornerstone of face recognition due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known transforms: wavelets, Gabor/Log-Gabor filters, and the discrete cosine transform. Experimentation illustrates that frequency domain partitioning prior to dimensionality reduction increases the information available for classification and greatly improves face recognition performance for both eigen-face and fisher-face approaches.
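A hedged sketch of the frequency-partitioning idea using the DCT variant (the band boundaries, component counts and the PCA stage are illustrative; the paper's actual partitioning is not specified in the abstract):

```python
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA

def band_features(images, bands=((0, 8), (8, 16), (16, 32)), n_components=20):
    """Partition each face image's 2-D DCT into frequency bands, then reduce
    each band separately with PCA (the eigen-face step) and concatenate.

    `images` is an (n_samples, height, width) array of aligned face crops;
    band (lo, hi) keeps DCT coefficients with lo <= max(u, v) < hi.  Assumes
    more than `n_components` images are available.
    """
    coeffs = np.array([dctn(img, norm="ortho") for img in images])
    u, v = np.meshgrid(np.arange(coeffs.shape[1]),
                       np.arange(coeffs.shape[2]), indexing="ij")
    feats = []
    for lo, hi in bands:
        mask = (np.maximum(u, v) >= lo) & (np.maximum(u, v) < hi)
        feats.append(PCA(n_components=n_components).fit_transform(coeffs[:, mask]))
    return np.hstack(feats)

# Hypothetical usage with 64x64 aligned face crops:
# features = band_features(face_array)   # feed to a fisher-face / classifier stage
```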
Abstract:
Activated protein C resistance (APCR), the most common risk factor for venous thrombosis, is the result of a G to A base substitution at nucleotide 1691 (R506Q) in the factor V gene. Current techniques to detect the factor V Leiden mutation, such as determination of restriction fragment length polymorphisms, do not have the capacity to screen large numbers of samples in a rapid, cost-effective test. The aim of this study was to apply first nucleotide change (FNC) technology to the detection of the factor V Leiden mutation. After preliminary amplification of genomic DNA by polymerase chain reaction (PCR), an allele-specific primer was hybridised to the PCR product and extended using fluorescent terminating dideoxynucleotides, which were detected by colorimetric assay. Using this ELISA-based assay, the prevalence of the factor V Leiden mutation was determined in an Australian blood donor population (n = 500). A total of 18 heterozygotes were identified (3.6%) and all of these were confirmed by conventional MnlI restriction digest. No homozygotes for the variant allele were detected. We conclude from this study that the frequency of 3.6% is compatible with those published for other Caucasian populations. In addition, FNC technology shows promise as the basis for a rapid, automated DNA-based test for factor V Leiden.
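A quick back-of-the-envelope check, not taken from the paper, of why no homozygotes would be expected in a sample of this size under Hardy-Weinberg proportions:

```python
n_donors, n_het = 500, 18
allele_freq = n_het / (2 * n_donors)         # 0.018: each heterozygote carries one variant allele
expected_hom = n_donors * allele_freq ** 2   # ~0.16 homozygotes expected under Hardy-Weinberg
print(f"variant allele frequency = {allele_freq:.3f}, "
      f"expected homozygotes in {n_donors} donors ~ {expected_hom:.2f}")
```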