358 results for Comprehensive Osteoarthritis Test
Abstract:
The Escherichia coli umu operon was subcloned into a pKK233-2 vector containing rat glutathione S-transferase (GST) 5-5 cDNA, and the plasmid thus obtained was introduced into Salmonella typhimurium TA1535. The newly developed strain, S. typhimurium NM5004, was found to have 52-fold greater GST activity than the original umu strain, S. typhimurium TA1535/pSK1002. We compared the sensitivities of these two tester strains, NM5004 and TA1535/pSK1002, for induction of umuC gene expression by several dihaloalkanes which are activated or inactivated by GST 5-5 activity. The induction of umuC gene expression by these chemicals was monitored by measuring the cellular β-galactosidase activity produced by the umuC'-lacZ fusion gene in the two tester strains. Ethylene dibromide, 1-bromo-2-chloroethane, 1,2-dichloroethane, and methylene dichloride induced umuC gene expression more strongly in the NM5004 strain than in the original strain. 4-Nitroquinoline 1-oxide and N-methyl-N'-nitro-N-nitrosoguanidine were found to induce umuC gene expression to similar extents in both strains. In the case of 1-nitropyrene and 2-nitrofluorene, however, the NM5004 strain showed weaker umuC gene expression responses than the original TA1535/pSK1002 strain. 1,2-Epoxy-3-(4'-nitrophenoxy)propane, a known substrate for GST 5-5, was found to inhibit umuC induction caused by 1-bromo-2-chloroethane. These results indicate that this new NM5004 tester strain, expressing a mammalian theta-class GST enzyme, may be useful for studies of environmental chemicals proposed to be activated or inactivated by GST activity.
Abstract:
In this 1972 documentary, The Computer Generation, by John Musilli, artist Stan Vanderbeek talks about the possibility of computers as an artist's tool. My aim in drawing on this documentary is to compare the current state of transmedia with previous significant changes in media history, and thereby to illustrate how diverse the current state of transmedia is.
Abstract:
Background To determine whether changes in appetite and energy intake (EI) can be detected and play a role in the effectiveness of interventions, it is necessary to identify their variability under normal conditions. We assessed the reproducibility of subjective appetite ratings and ad libitum test meal EI after a standardised pre-load in overweight and obese males. Methods Fifteen overweight and obese males (BMI 30.3 ± 4.9 kg/m², aged 34.9 ± 10.6 years) completed two identical test days, 7 days apart. Participants were provided with a standardised fixed breakfast (1676 kJ) and, 5 h later, an ad libitum pasta lunch. An electronic appetite rating system was used to assess subjective ratings before and after the fixed breakfast, and periodically during the postprandial period. EI was assessed at the ad libitum lunch meal. Sample size estimates for paired-design studies were calculated. Results Appetite ratings demonstrated a consistent oscillating pattern between test days, and were more reproducible for mean postprandial than fasting ratings. The correlation between ad libitum EI on the two test days was r = 0.78 (P < 0.01). Using a paired design and a power of 0.8, a minimum of 12 participants would be needed to detect a 10 mm change in 5 h postprandial mean ratings, and 17 to detect a 500 kJ difference in ad libitum EI. Conclusion Intra-individual variability of appetite and ad libitum test meal EI in overweight and obese males is comparable to previous reports in normal-weight adults. Sample size requirements for studies vary depending on the parameter of interest and the sensitivity needed.
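The sample-size figures above follow from the standard paired-design power calculation. A minimal sketch of that calculation, using the normal approximation; the standard deviations of the within-subject differences are hypothetical, since neither is reported in the abstract:

```python
import math
from scipy import stats

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.8):
    """Minimum n for a paired design via the normal approximation:
    n = ((z_{1-alpha/2} + z_{power}) * sd_diff / delta) ** 2."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_beta = stats.norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# Hypothetical within-subject SDs, chosen only for illustration:
print(paired_sample_size(delta=10, sd_diff=12))    # 10 mm rating change
print(paired_sample_size(delta=500, sd_diff=700))  # 500 kJ intake change
```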
Abstract:
There is an increasing desire and emphasis to integrate assessment tools into the everyday training environment of athletes. These tools are intended to fine-tune athlete development, enhance performance and aid in the development of individualised programmes for athletes. The areas of workload monitoring, skill development and injury assessment are expected to benefit from such tools. This paper describes the development of an instrumented leg press and its application to testing leg dominance with a cohort of athletes. The developed instrumented leg press is a 45° reclining sled-type leg press with dual force plates, a displacement sensor and a CCD camera. A custom software client was developed using C#. The software client enabled near-real-time display of forces beneath each limb together with displacement of the quad track roller system and video feedback of the exercise. In recording mode, the collection of athlete particulars is prompted at the start of the exercise, and pre-set thresholds are used subsequently to separate the data into epochs from each exercise repetition. The leg press was evaluated in a controlled study of a cohort of physically active adults who performed a series of leg press exercises. The leg press exercises were undertaken at a set cadence with nominal applied loads of 50%, 100% and 150% of body weight without feedback. A significant asymmetry in loading of the limbs was observed in healthy adults during both the eccentric and concentric phases of the leg press exercise (P < .05). Mean forces were significantly higher beneath the non-dominant limb (4–10%) and during the concentric phase of the muscle action (5%). Given that symmetrical loading is often emphasized during strength training and remains a common goal in sports rehabilitation, these findings highlight the clinical potential for this instrumented leg press system to monitor symmetry in lower-limb loading during progressive strength training and sports rehabilitation protocols.
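As a companion to the force-plate description above, here is a minimal sketch of how per-limb loading asymmetry might be quantified from dual force-plate signals. The threshold-based repetition splitting and the symmetry index form are assumptions for illustration; the paper's own implementation was a C# software client.

```python
import numpy as np

def split_repetitions(displacement, threshold):
    """Split the quad-track-roller displacement trace into repetition
    epochs at upward threshold crossings (hypothetical scheme standing
    in for the pre-set thresholds described in the abstract)."""
    above = displacement > threshold
    starts = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    return np.split(np.arange(displacement.size), starts)

def asymmetry_percent(force_dominant, force_nondominant):
    """Mean-force asymmetry between limbs, positive when the
    non-dominant limb carries more load (assumed index form)."""
    m_d, m_n = np.mean(force_dominant), np.mean(force_nondominant)
    return 100.0 * (m_n - m_d) / ((m_n + m_d) / 2.0)
```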
Abstract:
Enhancing the quality of food products and reducing the volume of waste during mechanical operations in the food industry requires comprehensive knowledge of material response under loading. Although research has focused on the mechanical response of food materials, the volume of waste after harvesting and during processing stages is still considerably high in both developing and developed countries. This research aims to develop and evaluate a constitutive model of the mechanical response of tough-skinned vegetables under postharvest and processing operations. The model focuses on both tensile and compressive properties of pumpkin flesh and peel tissues, where the behaviour of these tissues varies depending on factors such as rheological response and cellular structure. Both the elastic and plastic responses of the tissue were considered in the modelling process, and finite elasticity combined with pseudo-elasticity theory was applied to generate the model. The outcomes were then validated against published experimental results for pumpkin flesh and peel under uniaxial tension and compression. The constitutive coefficients for peel under tension were α = 25.66 and β = −18.48 MPa, and for flesh α = −5.29 and β = 5.27 MPa. Under compression, the constitutive coefficients were α = 4.74 and β = −1.71 MPa for peel and α = 0.76 and β = −1.86 MPa for flesh samples. The constitutive curves predicted force values precisely, close to the experimental values. Curves were fitted both to the whole stress-strain curve and to the section of the curve up to the bio-yield point. The modelling outputs showed good agreement with the empirical values, and the constitutive curves exhibited a very similar pattern to the experimental curves. The presented constitutive model can next be applied to other agricultural materials under loading.
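To illustrate how two-parameter constitutive coefficients like the α and β above can be estimated, here is a minimal fitting sketch. The exponential stress-strain form and the data are assumptions for illustration only; the paper's actual finite/pseudo-elasticity expression is not reproduced in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def stress_model(strain, alpha, beta):
    """Illustrative two-parameter stress-strain form (an assumption,
    not the paper's constitutive equation)."""
    return alpha * (np.exp(beta * strain) - 1.0)

# Hypothetical uniaxial data (strain, stress in MPa) for fitting.
strain = np.linspace(0.0, 0.25, 20)
stress = stress_model(strain, 2.0, 4.0) \
    + np.random.default_rng(0).normal(0.0, 0.02, strain.size)

# Least-squares estimation of the constitutive coefficients.
(alpha_hat, beta_hat), _ = curve_fit(stress_model, strain, stress, p0=(1.0, 1.0))
print(f"alpha = {alpha_hat:.2f} MPa, beta = {beta_hat:.2f}")
```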
Abstract:
Today’s information systems log vast amounts of data. These collections of data (implicitly) describe events (e.g. placing an order or taking a blood test) and, hence, provide information on the actual execution of business processes. The analysis of such data provides an excellent starting point for business process improvement. This is the realm of process mining, an area which has provided a repertoire of many analysis techniques. Despite the impressive capabilities of existing process mining algorithms, dealing with the abundance of data recorded by contemporary systems and devices remains a challenge. Of particular importance is the capability to guide the meaningful interpretation of “oceans of data” by process analysts. To this end, insights from the field of visual analytics can be leveraged. This article proposes an approach where process states are reconstructed from event logs and visualised in succession, leading to an animated history of a process. This approach is customisable in how a process state, partially defined through a collection of activity instances, is visualised: one can select a map and specify a projection of events on this map based on the properties of the events. This paper describes a comprehensive implementation of the proposal. It was realised using the open-source process mining framework ProM. Moreover, this paper also reports on an evaluation of the approach conducted with Suncorp, one of Australia’s largest insurance companies.
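A minimal sketch of the state-reconstruction step described above: replaying a lifecycle-annotated event log up to a chosen time to recover the set of open activity instances. The Event record and lifecycle values are illustrative assumptions; the ProM implementation supports richer lifecycles plus the configurable map projection described in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Event:
    case_id: str
    activity: str
    lifecycle: str   # "start" or "complete" (simplified lifecycle)
    timestamp: float

def active_instances(log, t):
    """Reconstruct the process state at time t: activity instances
    that have started but not yet completed by t."""
    open_instances = set()
    for e in sorted(log, key=lambda e: e.timestamp):
        if e.timestamp > t:
            break
        key = (e.case_id, e.activity)
        if e.lifecycle == "start":
            open_instances.add(key)
        elif e.lifecycle == "complete":
            open_instances.discard(key)
    return open_instances

log = [Event("c1", "Assess claim", "start", 1.0),
       Event("c1", "Assess claim", "complete", 4.0),
       Event("c2", "Assess claim", "start", 3.0)]
print(active_instances(log, t=3.5))  # {('c1', ...), ('c2', ...)}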
Abstract:
The ambiguity acceptance test is an important quality control procedure in high-precision GNSS data processing. Although ambiguity acceptance test methods have been extensively investigated, the threshold determination method is still not well understood. Currently, the threshold is determined with either the empirical approach or the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis, while the FF-approach is theoretically rigorous but computationally demanding. Hence, the key to the threshold determination problem is how to determine the threshold efficiently and in a reasonable way. In this study, a new threshold determination method, named the threshold function method, is proposed to reduce the complexity of the FF-approach. The threshold function method simplifies the FF-approach through a modelling procedure and an approximation procedure. The modelling procedure uses a rational function model to describe the relationship between the FF-difference test threshold and the integer least-squares (ILS) success rate. The approximation procedure replaces the ILS success rate with the easy-to-calculate integer bootstrapping (IB) success rate. The corresponding modelling error and approximation error are analysed with simulated data, chosen to avoid nuisance biases and the impact of an unrealistic stochastic model. The results indicate that the proposed method can greatly simplify the FF-approach without introducing significant modelling error. The threshold function method makes fixed failure rate threshold determination feasible for real-time applications.
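The approximation step rests on the closed-form integer bootstrapping success rate, P_IB = ∏ᵢ (2Φ(1/(2σ_{i|I})) − 1), which needs only the conditional standard deviations from a triangular factorisation of the ambiguity covariance matrix. A minimal sketch of that formula (the rational threshold-function model itself is specified in the paper and not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def ib_success_rate(Q):
    """Integer bootstrapping (IB) success rate
    P = prod_i (2 * Phi(1 / (2 * sigma_{i|I})) - 1),
    with the conditional variances taken as the squared diagonal of
    the Cholesky factor of Q (i.e. the D factor of Q = L D L^T).
    In practice Q would first be decorrelated (e.g. via LAMBDA)."""
    chol = np.linalg.cholesky(Q)
    sigma_cond = np.abs(np.diag(chol))
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * sigma_cond)) - 1.0))

Q = np.array([[0.090, 0.045],
              [0.045, 0.065]])   # illustrative ambiguity covariance
print(ib_success_rate(Q))        # ~0.89
```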
Abstract:
With the variety of PV inverter types and the number of transformerless PV inverters on the Australian market increasing, we revisit some of the issues associated with these topologies. A recent electric shock incident in Queensland (fortunately without serious outcome) associated with a transformerless PV system highlights the need to earth PV array structures and PV module frames to prevent capacitive leakage currents from causing electric shock. The presented test results for the relevant voltages associated with leakage currents in five transformerless PV inverters underline this requirement, which is currently being addressed by both the Clean Energy Council and Standards Australia. DC current injection tests were performed on the same five inverters and were used to develop preliminary recommendations for a more meaningful DC current test procedure for AS4777 Part 2. The test circuit, methodology and results are presented and discussed. A notable temperature dependency of DC current injection in three of the five inverters suggests that DC current injection should be tested at high and low internal inverter temperatures, whereas the power dependency noted for only one inverter does not seem to justify recommending a (rather involved) standard test procedure at different power levels.
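For context on what a DC injection measurement involves at its core: the DC component of an inverter's output current can be estimated by averaging the sampled waveform over an integer number of mains cycles, which cancels the fundamental and its harmonics. A simplified sketch only, not the AS4777 Part 2 procedure itself:

```python
import numpy as np

def dc_component(current, fs, mains_hz=50.0):
    """Estimate the DC component of a sampled AC current by averaging
    over a whole number of mains cycles, so the AC content cancels."""
    samples_per_cycle = fs / mains_hz
    n_cycles = int(current.size // samples_per_cycle)
    n = int(round(n_cycles * samples_per_cycle))
    return float(np.mean(current[:n]))

# Illustration: 50 Hz sine with a 20 mA DC offset, sampled at 10 kHz.
t = np.arange(0, 1.0, 1e-4)
i_out = 10.0 * np.sin(2 * np.pi * 50.0 * t) + 0.02
print(dc_component(i_out, fs=1e4))   # ~0.02 A
```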
Abstract:
Background: A major challenge in assessing students' conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate a student's incomplete understanding of a topic or concept, based on which distractor the student selects. However, these tests fail to provide the critical information uncovering the how and why of students' reasoning for their multiple-choice selections. Open-ended or structured response questions are one method for capturing higher-level thinking, but are often costly in terms of the time and attention required to properly assess student responses. Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g. students' written explanations and reasoning for their multiple-choice selections. Design/Method: We incorporated an open response component into an online signals and systems multiple-choice test to capture written explanations of students' selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing the results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. In order to understand and delineate the process for effectively analysing text provided by students, the researchers evaluated the strengths and weaknesses of both the human and automated approaches. Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas. For some questions, the analyses revealed reasoning that was not anticipated by, or included in, the distractor selections, showing how multiple-choice questions alone fail to capture a comprehensive picture of student understanding. The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in identifying concepts and their relationships in large textual data sets. We also identified several challenges in using automated analysis, as well as in manual and computer-assisted analysis. Conclusions: This study highlighted the usefulness of incorporating and analysing students' reasoning or explanations in understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective learning.
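As an illustration of the kind of automated lexical analysis compared in the study, the sketch below builds a term co-occurrence matrix from hypothetical student explanations. This is only the basic idea; commercial tools such as Leximancer layer concept seeding and thesaurus learning on top of it.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical written explanations from a signals-and-systems test.
responses = [
    "the filter removes high frequency components",
    "convolution in time equals multiplication in frequency",
    "high frequency noise is attenuated by the filter",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(responses)      # document-term count matrix
cooc = (X.T @ X).toarray()            # term-term co-occurrence counts
np.fill_diagonal(cooc, 0)             # ignore self-co-occurrence

terms = vec.get_feature_names_out()
i, j = np.unravel_index(np.argmax(cooc), cooc.shape)
print(f"strongest association: '{terms[i]}' <-> '{terms[j]}'")
```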
Abstract:
This thesis focuses on the role of osteocytes, a type of bone cell, in the progression of osteoarthritis. The biological relevance of this study opens new possibilities for understanding the pathogenesis of osteoarthritis.
Abstract:
BACKGROUND: Postural instability is one of the major complications found in stroke survivors. Parameterising the functional reach test (FRT) could be useful in clinical practice and basic research. OBJECTIVES: To analyse the reliability, sensitivity, and specificity of FRT parameterisation using inertial sensors to record kinematic variables in patients who have suffered a stroke. DESIGN: Cross-sectional study. While performing the FRT, two inertial sensors were placed on the patient's back (lumbar and trunk). PARTICIPANTS: Five subjects over 65 years of age who had suffered a stroke. MEASUREMENTS: FRT measure, lumbosacral/thoracic maximum angular displacement, time to maximum lumbosacral/thoracic angular displacement, time to return to the initial position, and total time. Speed and acceleration of the movements were calculated indirectly. RESULTS: The FRT measure was 12.75 ± 2.06 cm. Intrasubject reliability values ranged from 0.829 (time to return to the initial position, lumbar sensor) to 0.891 (lumbosacral maximum angular displacement). Intersubject reliability values ranged from 0.821 (time to return to the initial position, lumbar sensor) to 0.883 (lumbosacral maximum angular displacement). The FRT's reliability was 0.987 (0.983-0.992) and 0.983 (0.979-0.989), intersubject and intrasubject respectively. CONCLUSION: Inertial sensors are a tool with excellent reliability and validity for the parameterisation of the FRT in people who have had a stroke.
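A minimal sketch of how the kinematic parameters listed under MEASUREMENTS could be extracted from one inertial sensor's angular trace; the 1° return-to-baseline criterion is an assumption for illustration, not the study's processing pipeline.

```python
import numpy as np

def frt_kinematics(angle_deg, t):
    """Extract FRT parameters from an angular trace: maximum angular
    displacement, time to that maximum, time to return to the initial
    position, and total movement time (simplified sketch)."""
    disp = angle_deg - angle_deg[0]
    i_max = int(np.argmax(np.abs(disp)))
    # First sample after the peak back within 1 deg of baseline
    # (assumed return criterion).
    back = np.flatnonzero(np.abs(disp[i_max:]) < 1.0)
    i_ret = i_max + int(back[0]) if back.size else disp.size - 1
    return {
        "max_displacement_deg": float(disp[i_max]),
        "time_to_max_s": float(t[i_max] - t[0]),
        "time_to_return_s": float(t[i_ret] - t[i_max]),
        "total_time_s": float(t[i_ret] - t[0]),
    }
```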
Abstract:
Objective. To estimate the burden of disease attributable to excess body weight using the body mass index (BMI), by age and sex, in South Africa in 2000. Design. World Health Organization comparative risk assessment (CRA) methodology was followed. Re-analysis of the 1998 South Africa Demographic and Health Survey data provided mean BMI estimates by age and sex. Population-attributable fractions were calculated and applied to revised burden of disease estimates. Monte Carlo simulation-modelling techniques were used for the uncertainty analysis. Setting. South Africa. Subjects. Adults aged ≥ 30 years. Outcome measures. Deaths and disability-adjusted life years (DALYs) from ischaemic heart disease, ischaemic stroke, hypertensive disease, osteoarthritis, type 2 diabetes mellitus, and selected cancers. Results. Overall, 87% of type 2 diabetes, 68% of hypertensive disease, 61% of endometrial cancer, 45% of ischaemic stroke, 38% of ischaemic heart disease, 31% of kidney cancer, 24% of osteoarthritis, 17% of colon cancer, and 13% of postmenopausal breast cancer were attributable to a BMI ≥ 21 kg/m². Excess body weight is estimated to have caused 36 504 deaths (95% uncertainty interval 31 018 - 38 637) or 7% (95% uncertainty interval 6.0 - 7.4%) of all deaths in 2000, and 462 338 DALYs (95% uncertainty interval 396 512 - 478 847) or 2.9% of all DALYs (95% uncertainty interval 2.4 - 3.0%). The burden in females was approximately double that in males. Conclusions. This study shows the importance of recognising excess body weight as a major risk to health, particularly among females, and highlights the need to develop, implement and evaluate comprehensive interventions to achieve lasting change in the determinants and impact of excess body weight.
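The CRA calculation rests on the population-attributable fraction; for a categorical exposure it is PAF = Σpᵢ(RRᵢ−1) / (Σpᵢ(RRᵢ−1) + 1). A minimal sketch with hypothetical BMI-category prevalences and relative risks, not the study's inputs:

```python
def population_attributable_fraction(prevalence, relative_risk):
    """Categorical-exposure PAF used in comparative risk assessment:
    PAF = sum(p_i * (RR_i - 1)) / (sum(p_i * (RR_i - 1)) + 1)."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalence, relative_risk))
    return excess / (excess + 1.0)

# Hypothetical BMI categories (e.g. 21-25, 25-30, >=30 kg/m^2)
# with illustrative prevalences and relative risks for one outcome.
print(population_attributable_fraction([0.35, 0.30, 0.20],
                                       [1.0, 1.6, 2.4]))
```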
Abstract:
Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguities of phase measurements before they are used in positioning computation. Most existing investigations of ambiguity validation focus on the test statistic. How to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF-) approach. The empirical approach is simple but lacks a theoretical basis. The fixed failure rate approach has a rigorous basis in probability theory, but employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function for the FF-difference test, is proposed. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that, with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF-approach applicable to real-time GNSS positioning applications.
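The difference test itself compares the gap between the two smallest ambiguity residual quadratic forms against a threshold. A minimal sketch, with the threshold μ supplied externally (in the paper it comes from the proposed threshold function of the ILS success rate, whose exact form is not reproduced here):

```python
import numpy as np

def ff_difference_test(a_float, Q, candidates, mu):
    """FF-difference test sketch: accept the best integer candidate
    when the difference between the second-smallest and smallest
    quadratic forms ||a - z||^2_{Q^{-1}} exceeds the threshold mu."""
    Q_inv = np.linalg.inv(Q)
    q = sorted(float((a_float - z) @ Q_inv @ (a_float - z))
               for z in candidates)
    return (q[1] - q[0]) >= mu

a_hat = np.array([1.2, -0.9])                     # float ambiguities
Q = np.array([[0.09, 0.045], [0.045, 0.065]])     # their covariance
cands = [np.array([1, -1]), np.array([1, 0]), np.array([2, -1])]
print(ff_difference_test(a_hat, Q, cands, mu=5.0))
```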