893 results for MCDONALD EXTENDED EXPONENTIAL MODEL
Abstract:
The new model of North Island Cenozoic palaeogeography developed by Kamp et al. has a range of important implications for the evolution of New Zealand terrestrial taxa over the past 30 Ma. Key aspects include the prolonged isolation of the biota on the North Island landmass from the larger and more diverse greater South Island, and the founding of North Island taxa from the potentially unusual ecosystem of a small island around Northland. The prolonged period of isolation is expected to have generated deep phylogenetic splits within taxa present on both islands, and an important current aim should be to identify such signals in surviving endemics to start building a picture of the historical phylogeography and inferred ecology of both islands through the Cenozoic. Given the potential differences in founding terrestrial species and climatic conditions, it seems likely that the ecology may have been very different between the North and South Islands. New genetic data from the 10 or so species of extinct moa suggest that the radiation of moa was much more recent than previously suggested, and reveal a complex pattern that is inferred to result from the interplay of the Cenozoic biogeography, marine barriers, and glacial cycles.
Abstract:
The Lockyer Valley in southeast Queensland, Australia, hosts an economically significant alluvial aquifer system which was impacted by prolonged drought conditions (~1997 to ~2009). Throughout this time, the system was under continued groundwater extraction, resulting in severe aquifer depletion. By 2008, much of the aquifer was at <30% of storage, although some relief came with rains in early 2009. However, between December 2010 and January 2011, most of southeast Queensland experienced unprecedented flooding, which generated significant aquifer recharge. To understand the spatial and temporal controls on groundwater recharge in the alluvium, a detailed 3D lithological property model of gravels, sands and clays was developed using the GOCAD 3D geological modelling software. The spatial distribution of recharge throughout the catchment was assessed using hydrograph data from about 400 groundwater observation wells screened at the base of the alluvium. Water levels from these bores were integrated into the catchment-wide 3D geological model, which highlights the complexity of recharge mechanisms. To support this analysis, groundwater tracers (e.g. major and minor ions, stable isotopes, 3H and 14C) were used as independent verification. The use of these complementary methods has allowed the identification of zones where alluvial recharge primarily occurs from stream water during episodic flood events. However, the study also demonstrates that in some sections of the alluvium, rainfall recharge and discharge from the underlying basement are the primary recharge mechanisms. This is indicated by the absence of any response to the flood, as well as the old radiocarbon ages and distinct basement water chemistry signatures observed at these locations. Within the 3D geological model, integration of water chemistry and time-series displays of water level surfaces before and after the flood suggests that the spatial variations of the flood response in the alluvium are primarily controlled by the valley morphology and lithological variations within the alluvium. The integration of time-series of groundwater level surfaces in the 3D geological model also enables quantification of the volumetric change of groundwater stored in the unconfined sections of this alluvial aquifer during drought and following flood events. The 3D representation and analysis of hydraulic and recharge information has considerable advantages over the traditional 2D approach. For example, while many studies focus on singular aspects of catchment dynamics and groundwater-surface water interactions, the 3D approach can integrate multiple types of information (topographic, geological, hydraulic, water chemistry and spatial) into a single representation, providing valuable insights into the major factors controlling aquifer processes.
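For an unconfined aquifer, the storage-change calculation described above reduces to a difference of water-table surfaces scaled by specific yield, summed over model cells. A minimal sketch of that arithmetic follows; the grid, cell areas and specific-yield values are hypothetical, and the study itself performs this on GOCAD surfaces rather than flat arrays.

// Sketch: volumetric change in unconfined groundwater storage between two
// water-level surfaces. All input values below are illustrative only.
public class StorageChange {
    /**
     * @param headBefore    water-table elevation per cell before the flood (m)
     * @param headAfter     water-table elevation per cell after the flood (m)
     * @param cellArea      plan-view area of each cell (m^2)
     * @param specificYield drainable porosity of each cell (dimensionless)
     * @return change in stored groundwater volume (m^3)
     */
    static double deltaStorage(double[] headBefore, double[] headAfter,
                               double[] cellArea, double[] specificYield) {
        double dV = 0.0;
        for (int i = 0; i < headBefore.length; i++) {
            // dV_cell = Sy * (h_after - h_before) * A
            dV += specificYield[i] * (headAfter[i] - headBefore[i]) * cellArea[i];
        }
        return dV;
    }

    public static void main(String[] args) {
        // Two cells: 1.0 m and 0.5 m of water-table rise over 10,000 m^2 each, Sy = 0.1
        double dV = deltaStorage(new double[]{50.0, 48.0}, new double[]{51.0, 48.5},
                                 new double[]{1e4, 1e4}, new double[]{0.1, 0.1});
        System.out.printf("Storage change: %.0f m^3%n", dV); // 1500 m^3
    }
}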
Abstract:
This presentation describes a blended learning model that provides greater opportunity for learning to be self-managed and personalized.
Abstract:
Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Of chief interest was the relationship observed between promoter strength and TFs grouped by their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, although this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. While these observations are specific to σ70, such suggestive results strongly encourage additional investigation when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately'-conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were absent from both E. coli and B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes, demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
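The spectrum kernel named in this abstract is, at heart, a dot product between k-mer count vectors of two sequences. A minimal sketch of that computation follows; the thesis's additional positional feature attributes and the SVM training itself are not reproduced here.

import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a k-mer spectrum kernel for DNA strings.
public class SpectrumKernel {
    // Feature map: counts of every k-length substring in the sequence.
    static Map<String, Integer> spectrum(String seq, int k) {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i + k <= seq.length(); i++) {
            counts.merge(seq.substring(i, i + k), 1, Integer::sum);
        }
        return counts;
    }

    // K(x, y) = dot product of the k-mer count vectors of x and y.
    static long kernel(String x, String y, int k) {
        Map<String, Integer> sx = spectrum(x, k), sy = spectrum(y, k);
        long dot = 0;
        for (Map.Entry<String, Integer> e : sx.entrySet()) {
            dot += (long) e.getValue() * sy.getOrDefault(e.getKey(), 0);
        }
        return dot;
    }

    public static void main(String[] args) {
        // Shares the 3-mers TGT, GTG and TGA, so the kernel value is 3.
        System.out.println(kernel("TGTGA", "TGTGACGT", 3));
    }
}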
Abstract:
This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
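To make the idea concrete, here is a tiny EFSM model written in the style of ModelJUnit's FsmModel interface: the extended state is a Java field, getState() returns the state abstraction, and guarded @Action methods define the transitions. This is only a sketch; package and listener names follow recent ModelJUnit releases and may differ in yours, and a real model would also call and check the system under test inside each action.

import nz.ac.waikato.modeljunit.Action;
import nz.ac.waikato.modeljunit.FsmModel;
import nz.ac.waikato.modeljunit.RandomTester;
import nz.ac.waikato.modeljunit.VerboseListener;

// A tiny EFSM model of a bounded counter (0..3).
public class CounterModel implements FsmModel {
    private int count = 0;                         // extended state variable

    public Object getState() { return count; }    // state abstraction seen by the tester

    public void reset(boolean testing) { count = 0; }

    public boolean incrementGuard() { return count < 3; } // guard for increment
    @Action public void increment() {
        count++;
        // here you would also drive the system under test and assert its response
    }

    public boolean decrementGuard() { return count > 0; } // guard for decrement
    @Action public void decrement() { count--; }

    public static void main(String[] args) {
        RandomTester tester = new RandomTester(new CounterModel());
        tester.addListener(new VerboseListener());  // print each transition taken
        tester.generate(20);                        // generate and run 20 test steps
    }
}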
Abstract:
Capacity probability models of generating units are commonly used in many power system reliability studies at hierarchical level one (HLI). Analytical modelling of a generating system with many units, or with generating units having many derated states, can result in an extensive number of states in the capacity model. Limitations on the available memory and computational time of present computer facilities can pose difficulties for the assessment of such systems in many studies. A clustering procedure using the nearest centroid sorting method was previously applied to the IEEE-RTS load model, and proved very effective in producing a highly similar model with substantially fewer states. This paper presents an extended application of the clustering method to include the capacity probability representation. A series of sensitivity studies is illustrated using the IEEE-RTS generating system and load models. The loss of load expectation (LOLE) and loss of energy expectation (LOEE) are used as indicators to evaluate the application.
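For context, LOLE is the expected number of periods in which available capacity falls short of load, evaluated by convolving a capacity-outage probability table (COPT) with a load model; the state explosion this abstract describes is the growth of that table, which clustering compresses. A minimal sketch of the LOLE calculation with hypothetical numbers (not the IEEE-RTS data):

// Illustrative LOLE calculation from a small capacity-outage probability
// table and a daily-peak load model. All numbers are hypothetical.
public class LoleDemo {
    public static void main(String[] args) {
        double installed = 100.0;                        // MW, hypothetical system
        double[] outageMW = {0, 20, 40, 60};             // outage magnitude per COPT state
        double[] prob     = {0.81, 0.17, 0.019, 0.001};  // state probabilities (sum to 1)
        double[] dailyPeakMW = {70, 75, 60, 85, 65};     // hypothetical load model

        double lole = 0.0;
        for (double load : dailyPeakMW) {
            for (int k = 0; k < outageMW.length; k++) {
                // A loss-of-load state: remaining capacity cannot cover the peak.
                if (installed - outageMW[k] < load) lole += prob[k];
            }
        }
        System.out.printf("LOLE = %.4f days over the period%n", lole);
    }
}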
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all based on the theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully integrate the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) into a single model for more effective hazard and reliability prediction. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into the covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators.
Condition indicators provide information about the health condition of an asset; therefore, they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard from the baseline hazard. These indicators arise from the environment in which an asset operates and capture effects that have not been explicitly identified by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, is not required in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, failure event data are sparse and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including parameter estimation with time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
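For reference, the classical PHM on which the existing models build takes the form

h(t | z) = h_0(t) exp(β' z),

with all covariates entering the multiplicative exponential term. The description above suggests that EHM instead has the schematic structure

h(t | z_c, z_e) = h_0(t, z_c(t)) ψ(γ' z_e(t)),

where the condition indicators z_c update the baseline hazard itself and the operating environment indicators z_e accelerate or decelerate it. This is only a schematic reading of the abstract, not the thesis's exact formulation; the precise functional forms and the parameter estimation method are defined in the work itself.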
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model, introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index consisting of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve that, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making Saudi Arabia an interesting context for testing the external validity of the IS-Impact Model. The study re-visits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of the existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study: "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?". The content analysis used to analyze the Identification Survey data indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ (CNQ is viewed as an antecedent of IS-Impact). With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study: "Is the Extended IS-Impact Model valid as a hierarchical multidimensional formative measurement model?".
The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve that, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed by employing the Partial Least Squares (PLS) approach. The CNQ Model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct; however, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was assessed against two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system; therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context, (b) operationalizing Computer Network Quality as conceptualized in the ITU-T Recommendation E.800 (ITU-T, 1993), (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact, and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ Model provides a framework to perceptually measure Computer Network Quality from multiple perspectives, featuring an easy-to-understand, easy-to-use, and economical survey instrument.
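A quick consistency check on these figures: when a construct has a single standardized predictor in the structural model, as the abstract implies for both paths, the variance explained equals the squared path coefficient,

R^2(IS-Impact) ≈ β^2 = 0.426^2 ≈ 0.18 and R^2(Satisfaction) ≈ 0.744^2 ≈ 0.55,

matching the reported 18% and 55%.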
Abstract:
This paper examines the role of first aid training in increasing adolescent helping behaviours when taught in a school-based injury prevention program, Skills for Preventing Injury in Youth (SPIY). The research involved the development and application of an extended Theory of Planned Behaviour (TPB), including "behavioural willingness in a fight situation", "first aid knowledge" and "perceptions of injury seriousness", to predict the relationship between participation in SPIY and helping behaviours when a friend is injured in a fight. From 35 Queensland high schools, 2500 Year 9 students (mean age = 13.5 years, 40% male) completed surveys measuring their attitudes, perceived behavioural control, subjective norms and behavioural intention from the TPB, together with the added measures of behavioural willingness in a fight situation, perceptions of injury seriousness and first aid knowledge. It is expected that the TPB will significantly contribute to understanding the relationship between participation in SPIY and helping behaviours in this situation. Further analyses will determine whether the extension of the model significantly increases the variance explained in helping behaviours. The findings of this research will provide insight into the critical factors that may increase adolescent bystanders' actions in injury situations.
Abstract:
A Bus Rapid Transit (BRT) station is the interface between passengers and the service. The station is crucial to line operation, as it is typically the only location where buses can pass each other. Congestion may occur here when buses maneuvering into and out of the platform lane interfere with bus flow, or when a queue of buses forms upstream of the platform lane, blocking the passing lane. However, some systems include operation where express buses pass the critical station, resulting in a proportion of non-stopping buses. It is important to understand the operation of the critical busway station under this type of operation, as it affects busway line capacity. This study uses micro-simulation to model BRT station operation and to analyze the relationship between station limit state bus capacity (B_ls) and total bus capacity (B_ttl). First, the simulation model is developed for the limit state scenario, and a mathematical model is then defined and calibrated for a specified range of controlled scenarios of the mean and coefficient of variation of dwell time. Thereafter, the proposed B_ls model is extended to consider non-stopping buses, and the B_ttl model is defined. The proposed models provide a better understanding of BRT line capacity and are useful to transit authorities in designing better BRT operation.
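The calibrated B_ttl model is developed in the paper itself; as a naive first-order bound only, assuming the platform is the binding constraint and the passing lane is never blocked, note that if a proportion p of buses do not stop, the stopping flow (1 - p) B_ttl cannot exceed B_ls, giving

B_ttl <= B_ls / (1 - p).

The micro-simulation is needed precisely because this bound ignores queuing interference between stopping and non-stopping buses at the station.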
Abstract:
A new optimal control model of the interactions between a growing tumour and the host immune system, along with an immunotherapy treatment strategy, is presented. The model is based on an ordinary differential equation model of interactions between the growing tumour and the natural killer, cytotoxic T lymphocyte and dendritic cells of the host immune system, extended through the addition of a control function representing the application of a dendritic cell treatment to the system. The numerical solution of this model, obtained from a multi-species Runge–Kutta forward-backward sweep scheme, is described. We investigate the effects of varying the maximum allowed amount of dendritic cell vaccine administered to the system and find that control of the tumour cell population is best effected via a high initial vaccine level, followed by reduced treatment and finally cessation of treatment. We also find that increasing the strength of the dendritic cell vaccine causes an increase in the number of natural killer cells and lymphocytes, which in turn reduces the growth of the tumour.
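The forward-backward sweep named above is a standard iteration for ODE optimal control: integrate the state forward with the current control, integrate the adjoint backward from the transversality condition, then update the control from the optimality condition, repeating until convergence. A minimal sketch on a toy one-state problem (minimise the integral of x^2 + u^2 subject to x' = -x + u, x(0) = 1); the paper applies a multi-species Runge–Kutta version to the tumour-immune system, whereas this sketch uses forward Euler for brevity.

// Forward-backward sweep for a toy optimal control problem.
// Hamiltonian: H = x^2 + u^2 + lambda(-x + u).
public class ForwardBackwardSweep {
    public static void main(String[] args) {
        int n = 1000;
        double T = 2.0, dt = T / n;
        double[] x = new double[n + 1], lam = new double[n + 1], u = new double[n + 1];

        for (int iter = 0; iter < 200; iter++) {
            // 1. Forward sweep: integrate the state with the current control.
            x[0] = 1.0;
            for (int i = 0; i < n; i++) x[i + 1] = x[i] + dt * (-x[i] + u[i]);

            // 2. Backward sweep: adjoint lambda' = -dH/dx = -2x + lambda,
            //    with transversality condition lambda(T) = 0.
            lam[n] = 0.0;
            for (int i = n; i > 0; i--) lam[i - 1] = lam[i] - dt * (-2.0 * x[i] + lam[i]);

            // 3. Control update from dH/du = 2u + lambda = 0, relaxed for stability.
            double maxChange = 0.0;
            for (int i = 0; i <= n; i++) {
                double uNew = 0.5 * u[i] + 0.5 * (-lam[i] / 2.0);
                maxChange = Math.max(maxChange, Math.abs(uNew - u[i]));
                u[i] = uNew;
            }
            if (maxChange < 1e-8) break; // converged
        }
        System.out.printf("u(0) = %.4f, x(T) = %.4f%n", u[0], x[n]);
    }
}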
Abstract:
Community support agencies routinely employ a web presence to provide information on their services. While this online information provision helps to increase an agency’s reach, this paper argues that it can be further extended by mapping relationships between services and by facilitating two-way communication and collaboration with local communities. We argue that emergent technologies, such as locative media and networking tools, can assist in harnessing this social capital. However, new applications must be designed in ways that both persuade and support community members to contribute information and support others in need. An analysis of the online presence of community service agencies and social benefit applications is presented against Fogg’s Behaviour Model. From this evaluation, design principles are proposed for developing new locative, collaborative online applications for social benefit.
Abstract:
We have used electronic structure calculations to investigate the 1,2-dehydration of alcohols as a model for water loss during the pyrolysis of carbohydrates found in biomass. Reaction enthalpies and energy barriers have been calculated for neat alcohols, protonated alcohols and alcohols complexed to alkali metal ions (Li+ and Na+). We have estimated pre-exponential A factors in order to obtain gas phase rate constants. For neat alcohols, the barrier to 1,2-dehydration is about 67 kcal mol^-1, which is consistent with the limited experimental data. Protonation and metal complexation significantly reduce this activation barrier and thus facilitate more rapid reaction. With the addition of alkali metals, the rate of dehydration can increase by a factor of 10^8, while addition of a proton can lead to an increase by a factor of 10^23.
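Enhancement factors of this kind follow directly from the Arrhenius form k = A exp(-Ea/RT): with comparable A factors, lowering the barrier by ΔEa multiplies the rate by exp(ΔEa/RT). A back-of-envelope check in code; the 500 K temperature and the barrier reductions below are illustrative assumptions, not values from the paper.

// Quick Arrhenius check: rate enhancement from lowering an activation barrier,
// assuming comparable pre-exponential A factors.
public class ArrheniusRatio {
    static final double R = 1.987e-3; // gas constant, kcal mol^-1 K^-1

    // k2/k1 = exp(dEa / RT) for a barrier lowered by dEa at temperature T.
    static double enhancement(double deltaEaKcal, double tempK) {
        return Math.exp(deltaEaKcal / (R * tempK));
    }

    public static void main(String[] args) {
        double T = 500.0; // K, an illustrative pyrolysis-relevant temperature
        // At 500 K, a ~18 kcal/mol reduction gives roughly the 10^8 factor and
        // a ~53 kcal/mol reduction roughly the 10^23 factor quoted above.
        System.out.printf("18 kcal/mol lower: %.1e%n", enhancement(18.0, T));
        System.out.printf("53 kcal/mol lower: %.1e%n", enhancement(53.0, T));
    }
}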
Abstract:
This study models young people's moderate drinking decision-making using the Model of Goal-Directed Behaviour (MGB), presenting insights into young people's desires and intentions to drink responsibly. By testing the applicability of the MGB to the quantitative analysis of responsible drinking, the study extends the explanatory sphere of the MGB. An online survey resulted in 1522 completed questionnaires from respondents aged between 18 and 25 years. The collected data were analysed with structural equation modelling (SEM) using SPSS AMOS 21 (IBM, New York, NY, USA). The key finding of this study is that an individual's desire to drink moderately is the most important predictor of young people's responsible drinking intentions. Our use of the MGB provides further evidence that there is a strong distinction between consumer desires and intentions.
Abstract:
The validity of fatigue protocols involving multi-joint movements, such as stepping, has yet to be clearly defined. Although surface electromyography can monitor the fatigue state of individual muscles, the effects of joint angle and velocity variation on signal parameters are well established. Therefore, the aims of this study were to (i) describe sagittal hip and knee kinematics during repetitive stepping, (ii) identify periods of high inter-trial variability and (iii) determine the within-test reliability of hip and knee kinematic profiles. A group of healthy men (N = 15) ascended and descended from a knee-high platform wearing a weighted vest (10% BW) for 50 consecutive trials. The hip and knee underwent rapid flexion and extension during step ascent and descent. Variability of hip and knee velocity peaked between 20-40% of the ascent phase and 80-100% of the descent phase. Significant (p < 0.05) reductions in joint range of motion and peak velocity during step ascent were observed, while peak flexion velocity increased during descent. Healthy individuals use complex hip and knee motion to negotiate a knee-high step, with kinematic patterns varying across multiple repetitions. These findings have important implications for future studies intending to use repetitive stepping as a fatigue model for the knee extensors and flexors.