72 results for Running-based anaerobic sprint test
Abstract:
We introduce a procedure for association-based analysis of nuclear families that allows for dichotomous and more general measurements of phenotype and inclusion of covariate information. Standard generalized linear models are used to relate phenotype and its predictors. Our test procedure, based on the likelihood ratio, unifies the estimation of all parameters through the likelihood itself and yields maximum likelihood estimates of the genetic relative risk and interaction parameters. Our method has advantages in modelling the covariate and gene-covariate interaction terms over recently proposed conditional score tests that include covariate information via a two-stage modelling approach. We apply our method in a study of human systemic lupus erythematosus and the C-reactive protein that includes sex as a covariate.
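For readers unfamiliar with likelihood-ratio testing in a generalized linear model, the following minimal Python sketch illustrates the general idea: fit a null model with the covariate alone, fit a full model adding genotype and a gene-by-covariate interaction, and compare their log-likelihoods. It is an unconditional illustration only, not the authors' family-based (conditional) likelihood, and the simulated data and column names (pheno, genotype, sex) are hypothetical.

```python
# Minimal sketch: likelihood-ratio test for a genetic effect and a
# gene-by-covariate interaction using an ordinary logistic GLM.
# This is NOT the family-based conditional likelihood of the paper; the
# simulated data and column names (pheno, genotype, sex) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "genotype": rng.integers(0, 3, n),   # risk-allele count 0/1/2
    "sex": rng.integers(0, 2, n),        # covariate
})
logit = -0.5 + 0.4 * df.genotype + 0.3 * df.sex + 0.2 * df.genotype * df.sex
df["pheno"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Null model: covariate only. Full model: genotype, covariate and interaction.
null = smf.glm("pheno ~ sex", data=df, family=sm.families.Binomial()).fit()
full = smf.glm("pheno ~ genotype * sex", data=df, family=sm.families.Binomial()).fit()

lr = 2.0 * (full.llf - null.llf)                    # likelihood-ratio statistic
dof = full.df_model - null.df_model                 # genotype + interaction terms
print(f"LR = {lr:.2f} on {dof:.0f} df, p = {chi2.sf(lr, dof):.3g}")
print("estimated genetic odds ratio:", float(np.exp(full.params['genotype'])))
```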
Abstract:
This paper presents a reappraisal of the blood clotting response (BCR) tests for anticoagulant rodenticides, and proposes a standardised methodology for identifying and quantifying physiological resistance in populations of rodent species. The standardisation is based on the International Normalised Ratio, which is standardised against a WHO international reference preparation of thromboplastin, and allows comparison of data obtained using different thromboplastin reagents. The methodology is statistically sound, being based on the 50% response, and has been validated against the Norway rat (Rattus norvegicus) and the house mouse (Mus domesticus). Susceptibility baseline data are presented for warfarin, diphacinone, chlorophacinone and coumatetralyl against the Norway rat, and for bromadiolone, difenacoum, difethialone, flocoumafen and brodifacoum against the Norway rat and the house mouse. A 'test dose' of twice the ED50 can be used for initial identification of resistance, and will provide a similar level of information to previously published methods. Higher multiples of the ED50 can be used to assess the resistance factor, and to predict the likely impact on field control.
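As an illustration of the 50%-response idea, the sketch below fits a logistic dose-response curve to made-up blood clotting response counts and derives an ED50 and the corresponding twice-ED50 'test dose'. The doses, group sizes and responder counts are invented, and the paper's INR-based response definition is not modelled.

```python
# Sketch: estimating an ED50 by logistic regression of response on log dose,
# then deriving the twice-ED50 'test dose'. Doses, group sizes and responder
# counts are invented, and the INR-based response definition is not modelled.
import numpy as np
import statsmodels.api as sm

doses   = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # mg/kg, hypothetical
n_dosed = np.array([20, 20, 20, 20, 20, 20])           # animals per dose group
n_resp  = np.array([1, 3, 8, 14, 18, 20])              # responders (INR above cut-off)

X = sm.add_constant(np.log(doses))
y = np.column_stack([n_resp, n_dosed - n_resp])        # (successes, failures)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

b0, b1 = fit.params                                    # intercept, slope on log dose
ed50 = np.exp(-b0 / b1)                                # dose giving a 50% response
print(f"ED50 ≈ {ed50:.2f} mg/kg; 'test dose' (2 x ED50) ≈ {2 * ed50:.2f} mg/kg")
```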
Abstract:
BACKGROUND: In order to maintain the most comprehensive structural annotation databases we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible. RESULTS: We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. CONCLUSION: This study clearly demonstrates the feasibility of carrying out on-demand, high quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
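The snippet below is only a generic illustration of fanning per-sequence jobs out over several independent compute pools; it is not the JYDE or mGenTHREADER interface, and the cluster names and run_job function are hypothetical placeholders.

```python
# Generic illustration of distributing per-sequence fold-recognition jobs
# across several independent compute pools. This is NOT the JYDE or
# mGenTHREADER API; cluster names and run_job are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor, as_completed
from itertools import cycle

clusters = ["gridA", "gridB", "gridC"]                  # independent Grid domains
sequences = [f"seq_{i:05d}" for i in range(30)]         # stand-in for proteome entries

def run_job(cluster: str, seq_id: str) -> str:
    # A real system would submit to the cluster's scheduler (e.g. SGE or
    # Condor) and collect the fold-recognition model when it completes.
    return f"{seq_id}: annotated on {cluster}"

assignments = list(zip(sequences, cycle(clusters)))     # simple round-robin dispatch
with ThreadPoolExecutor(max_workers=len(clusters) * 4) as pool:
    futures = [pool.submit(run_job, cluster, seq) for seq, cluster in assignments]
    for fut in as_completed(futures):
        print(fut.result())
```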
Abstract:
This paper describes the development and validation of a novel web-based interface for gathering feedback from building occupants about their environmental discomfort, including signs of Sick Building Syndrome (SBS). Gathering such feedback may enable better targeting of environmental discomfort down to the individual, as well as the early detection and subsequent resolution by building services of more complex issues such as SBS. The occupant's discomfort is interpreted and converted to air-conditioning system set points using Fuzzy Logic. Experimental results from a multi-zone air-conditioning test rig are included in this paper.
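A minimal sketch of the fuzzy-logic step is given below: an occupant's discomfort vote is fuzzified with triangular membership functions, a three-rule base is applied, and a weighted-average defuzzification yields a set-point adjustment. The vote scale, membership functions and rule base are illustrative assumptions, not the paper's controller.

```python
# Minimal fuzzy-logic sketch: turning an occupant's thermal discomfort vote
# into an air-conditioning set-point adjustment. The -3..+3 vote scale, the
# membership functions and the rule base are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def setpoint_adjustment(vote: float) -> float:
    """vote: -3 (much too cold) .. +3 (much too hot); returns a delta in deg C."""
    # Fuzzify the vote.
    too_cold = tri(vote, -4.0, -3.0, 0.0)
    neutral  = tri(vote, -2.0,  0.0, 2.0)
    too_hot  = tri(vote,  0.0,  3.0, 4.0)
    # Rule base: too cold -> raise set point, neutral -> hold, too hot -> lower.
    rules = [(too_cold, +1.5), (neutral, 0.0), (too_hot, -1.5)]
    # Weighted-average (centroid-style) defuzzification.
    total = sum(weight for weight, _ in rules)
    return sum(weight * out for weight, out in rules) / total if total else 0.0

for v in (-3, -1, 0, 2, 3):
    print(f"vote {v:+d} -> set-point change {setpoint_adjustment(v):+.2f} degC")
```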
Abstract:
The hazards associated with high-voltage three-phase inverters and high-powered large electrical machines have resulted in most of the engineering courses covering three-phase machines and drives theoretically. This paper describes a set of purpose-built, low-voltage, and low-cost teaching equipment that allows the hands-on instruction of three-phase inverters and rotating machines. The motivation for moving towards a system running at low voltages is that the students can safely experiment freely with the motors and inverter. The students can also access all of the current and voltage waveforms, which until now could only be studied in textbooks or observed as part of laboratory demonstrations. Both the motor and the inverter designs are for teaching purposes and require minimal effort and cost.
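Purely as an illustration of the kinds of waveforms such a rig exposes, the sketch below simulates sine-triangle PWM pole voltages for the three phase legs of a low-voltage inverter and forms one line-to-line voltage; the switching frequency, DC-link voltage and modulation index are arbitrary assumptions, not the paper's hardware values.

```python
# Illustrative simulation (not the paper's hardware): sine-triangle PWM pole
# voltages for the three phase legs of a low-voltage inverter, and one
# line-to-line waveform of the kind students can probe. The switching
# frequency, DC-link voltage and modulation index are arbitrary assumptions.
import numpy as np

f_out, f_carrier, v_dc = 50.0, 2000.0, 24.0     # fundamental Hz, carrier Hz, DC link V
t = np.linspace(0.0, 0.04, 20000)               # two fundamental cycles

def triangle(t, f):
    """Symmetric triangular carrier in [-1, 1]."""
    return 2.0 * np.abs(2.0 * ((t * f) % 1.0) - 1.0) - 1.0

carrier = triangle(t, f_carrier)
legs = {}
for name, phase in (("a", 0.0), ("b", -2 * np.pi / 3), ("c", 2 * np.pi / 3)):
    ref = 0.9 * np.sin(2 * np.pi * f_out * t + phase)          # modulation index 0.9
    legs[name] = np.where(ref > carrier, v_dc / 2, -v_dc / 2)  # pole voltage

v_ab = legs["a"] - legs["b"]                    # line-to-line voltage
print("peak line-to-line voltage:", v_ab.max(), "V")
```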
Abstract:
Investigation of the fracture mode for hard and soft wheat endosperm was aimed at gaining a better understanding of the fragmentation process. Fracture mechanical characterization was based on the three-point bending test, which enables stable crack propagation to take place in small rectangular pieces of wheat endosperm. The crack length can be measured in situ by using an optical microscope with illumination from the side or the back of the specimen. Two new techniques were developed and used to estimate the fracture toughness of wheat endosperm, a geometric approach and a compliance method. The geometric approach gave average fracture toughness values of 53.10 and 27.0 J m⁻² for hard and soft endosperm, respectively. Fracture toughness estimated using the compliance method gave values of 49.9 and 29.7 J m⁻² for hard and soft endosperm, respectively. Compressive properties of the endosperm in three mutually perpendicular axes revealed that the hard and soft endosperms are isotropic composites. Scanning electron microscopy (SEM) observation of the fracture surfaces and the energy-time curves of loading-unloading cycles revealed that there was plastic flow during crack propagation for both the hard and soft endosperms, and confirmed that the fracture mode is significantly related to the adhesion level between starch granules and the protein matrix.
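For readers unfamiliar with the compliance method, the sketch below applies the standard relation G = (P²/2B)·dC/da to invented compliance-versus-crack-length data; the numbers are illustrative only and are not the endosperm measurements reported above.

```python
# Compliance-method sketch: G = (P^2 / 2B) * dC/da, where C is specimen
# compliance (displacement/load), a is crack length and B is thickness.
# All numbers below are invented for illustration; they are not the
# endosperm data reported in the abstract.
import numpy as np

B = 2.0e-3                                          # specimen thickness, m
a = np.array([0.4, 0.6, 0.8, 1.0, 1.2]) * 1e-3      # crack lengths, m
C = np.array([1.0, 1.3, 1.8, 2.5, 3.4]) * 1e-4      # measured compliance, m/N
P = 1.2                                             # load at crack extension, N

# Fit compliance as a smooth function of crack length and differentiate it.
coeffs = np.polyfit(a, C, 2)
dCda = np.polyval(np.polyder(coeffs), a)

G = (P ** 2 / (2.0 * B)) * dCda                     # energy release rate, J/m^2
for ai, gi in zip(a, G):
    print(f"a = {ai * 1e3:.1f} mm  ->  G ≈ {gi:.1f} J/m^2")
```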
Abstract:
A structure-function study was carried out to increase knowledge of how glycosidic linkages and molecular weights of carbohydrates contribute toward the selectivity of fermentation by gut bacteria. Oligosaccharides with maltose as the common carbohydrate source were used. Potentially prebiotic alternansucrase and dextransucrase maltose acceptor products were synthesized and separated into different molecular weights using a Bio-gel P2 column. These fractions were characterized by matrix-assisted laser desorption/ionization time-of-flight. Nonprebiotic maltooligosaccharides with degrees of polymerization (DP) from three to seven were commercially obtained for comparison. Growth selectivity of fecal bacteria on these oligosaccharides was studied using an anaerobic in vitro fermentation method. In general, carbohydrates of DP3 showed the highest selectivity towards bifidobacteria; however, oligosaccharides with a higher molecular weight (DP6-DP7) also resulted in a selective fermentation. Oligosaccharides with DPs above seven did not promote the growth of "beneficial" bacteria. The knowledge of how specific structures modify the gut microflora could help to find new prebiotic oligosaccharides.
Abstract:
The utility of an "ecologically rational" recognition-based decision rule in multichoice decision problems is analyzed, varying the type of judgment required (greater or lesser). The maximum size and range of a counterintuitive advantage associated with recognition-based judgment (the "less-is-more effect") are identified for a range of cue validity values. Greater ranges of the less-is-more effect occur when participants are asked which is the greatest of m choices (m > 2) than which is the least. Less-is-more effects also have greater range for larger values of m. This implies that the classic two-alternative forced choice task, as studied by Goldstein and Gigerenzer (2002), may not be the most appropriate test case for less-is-more effects.
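The Monte Carlo sketch below illustrates, under assumed recognition and knowledge validities and a generic decision rule, how accuracy in an m-alternative "which is greatest?" task need not increase monotonically with the number of objects recognized, which is the basis of less-is-more effects; it is not the paper's exact model.

```python
# Monte Carlo sketch: a recognition-based rule in an m-alternative
# "which is greatest?" task. Recognition validity (alpha), knowledge
# validity (beta), the environment and the decision rule are all assumed
# for illustration; this is not the paper's exact model.
import numpy as np

rng = np.random.default_rng(1)

def accuracy(n_recognized, m=3, n_objects=100, alpha=0.8, beta=0.6, trials=5000):
    ranks = np.arange(n_objects)                              # higher rank = greater
    # Recognition is assigned via noisy ranks, so recognized objects tend
    # to outrank unrecognized ones (more noise = lower recognition validity).
    noisy = ranks + rng.normal(0.0, n_objects * (1.0 - alpha), (trials, n_objects))
    recognized = np.argsort(-noisy, axis=1)[:, :n_recognized]
    correct = 0
    for t in range(trials):
        options = rng.choice(n_objects, size=m, replace=False)
        rec = options[np.isin(options, recognized[t])]
        if len(rec) == 1:
            choice = rec[0]                                   # recognition decides
        elif len(rec) > 1:
            best = rec[np.argmax(ranks[rec])]                 # knowledge among recognized
            choice = best if rng.random() < beta else rng.choice(rec)
        else:
            choice = rng.choice(options)                      # nothing recognized: guess
        correct += choice == options.max()                    # true greatest option
    return correct / trials

for n in (0, 25, 50, 75, 100):
    print(f"{n:3d} objects recognized -> accuracy {accuracy(n):.3f}")
```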
Abstract:
In this study, for the first time, prospective memory was investigated in 11 school-aged children with autism spectrum disorders and 11 matched neurotypical controls. A computerised time-based prospective memory task was embedded in a visuospatial working memory test and required participants to remember to respond to certain target times. Controls had significantly more correct prospective memory responses than the autism spectrum group. Moreover, controls checked the time more often and increased time-monitoring more steeply as the target times approached. These differences in time-checking may suggest that prospective memory in autism spectrum disorders is affected by reduced self-initiated processing as indicated by reduced task monitoring.
Abstract:
We describe, and make publicly available, two problem instance generators for a multiobjective version of the well-known quadratic assignment problem (QAP). The generators allow a number of instance parameters to be set, including those controlling epistasis and inter-objective correlations. Based on these generators, several initial test suites are provided and described. For each test instance we measure some global properties and, for the smallest ones, make some initial observations of the Pareto optimal sets/fronts. Our purpose in providing these tools is to facilitate the ongoing study of problem structure in multiobjective (combinatorial) optimization, and its effects on search landscape and algorithm performance.
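As a toy counterpart to the generators described above, the sketch below builds a small bi-objective QAP instance in which a parameter rho tunes the correlation between the two flow matrices; it is a simplified stand-in, not the authors' published generators.

```python
# Toy bi-objective QAP instance generator with a tunable correlation between
# the two flow matrices (rho in [0, 1]). A simplified stand-in for the kind
# of generator described above, not the authors' published code.
import numpy as np

def make_instance(n=10, rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, 100.0, size=(n, 2))                # location coordinates
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.hypot(diff[..., 0], diff[..., 1])               # symmetric distance matrix
    u1 = rng.uniform(size=(n, n))
    u2 = rho * u1 + (1.0 - rho) * rng.uniform(size=(n, n))    # correlated with u1
    flow1 = np.round(u1 * 100).astype(int)
    flow2 = np.round(u2 * 100).astype(int)
    np.fill_diagonal(flow1, 0)
    np.fill_diagonal(flow2, 0)
    return dist, flow1, flow2

def qap_cost(perm, dist, flow):
    p = np.asarray(perm)
    return float(np.sum(flow * dist[np.ix_(p, p)]))           # sum_ij f[i,j] * d[p[i],p[j]]

dist, f1, f2 = make_instance(n=8, rho=0.7)
perm = np.random.default_rng(1).permutation(8)
print("objective 1:", qap_cost(perm, dist, f1), " objective 2:", qap_cost(perm, dist, f2))
```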
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
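A compact sketch of the overall recipe (greedy, D-optimality-style selection of a few kernel centres followed by nonnegative weight estimation) is given below. It uses a log-determinant gain as the selection score and scipy's nnls in place of the multiplicative nonnegative quadratic programming step, so it conveys the structure of the approach rather than reproducing the authors' algorithm.

```python
# Compact sparse-KDE sketch: greedy, D-optimality-style selection of kernel
# centres (log-determinant gain of the selected Gram matrix), then nonnegative
# weights fitted to a full Parzen-window target. scipy's nnls stands in for
# the multiplicative nonnegative QP step; this is an illustration only.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(1.5, 1.0, 150)])
h = 0.3                                                   # kernel width (assumed)

def gauss(a, b, h):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))

K = gauss(x, x, h)                                        # full kernel design matrix
parzen = K.mean(axis=1)                                   # Parzen estimate at the data points

selected = []
for _ in range(8):                                        # pick 8 centres greedily
    best_j, best_gain = None, -np.inf
    for j in range(len(x)):
        if j in selected:
            continue
        idx = selected + [j]
        gain = np.linalg.slogdet(K[np.ix_(idx, idx)] + 1e-8 * np.eye(len(idx)))[1]
        if gain > best_gain:
            best_j, best_gain = j, gain
    selected.append(best_j)

w, _ = nnls(K[:, selected], parzen)                       # nonnegative weights
w /= w.sum()                                              # mixture weights sum to one
# The estimate is the small Gaussian mixture sum_k w[k] * N(t; x[selected][k], h^2).
print("selected centres:", np.round(x[selected], 2))
print("weights:", np.round(w, 3))
```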
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
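The sketch below illustrates the automatic-termination idea in a much simplified form: forward selection of kernel centres driven by a leave-one-out squared-error score against a Parzen-window target, stopping as soon as the score stops improving. It is a stand-in for the paper's orthogonal forward regression with local regularization, not a reimplementation of it.

```python
# Automatic-termination sketch: forward selection of kernel centres using a
# leave-one-out (LOO) squared-error score against a Parzen-window target,
# stopping as soon as the LOO score stops improving. A simplified stand-in
# for the paper's orthogonal forward regression with local regularization.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(1.5, 1.0, 100)])
h = 0.3
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
target = K.mean(axis=1)                         # Parzen estimate at the data points

def loo_score(cols):
    """Mean squared leave-one-out residual for least squares on K[:, cols]."""
    phi = K[:, cols]
    hat = phi @ np.linalg.pinv(phi)             # hat (projection) matrix
    resid = target - hat @ target
    leverage = np.clip(np.diag(hat), 0.0, 0.999)
    return float(np.mean((resid / (1.0 - leverage)) ** 2))

selected, best = [], np.inf
while len(selected) < 25:                       # hard cap only as a safeguard
    scores = {j: loo_score(selected + [j]) for j in range(len(x)) if j not in selected}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best:                  # no further LOO improvement: stop
        break
    selected.append(j_best)
    best = scores[j_best]

print(f"construction stopped automatically at {len(selected)} kernels (LOO score {best:.3e})")
```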
Abstract:
There is growing interest, especially for trials in stroke, in combining multiple endpoints in a single clinical evaluation of an experimental treatment. The endpoints might be repeated evaluations of the same characteristic or alternative measures of progress on different scales. Often they will be binary or ordinal, and those are the cases studied here. In this paper we take a direct approach to combining the univariate score statistics for comparing treatments with respect to each endpoint. The correlations between the score statistics are derived and used to allow a valid combined score test to be applied. A sample size formula is deduced and application in sequential designs is discussed. The method is compared with an alternative approach based on generalized estimating equations in an illustrative analysis and replicated simulations, and the advantages and disadvantages of the two approaches are discussed.
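A minimal numerical sketch of the direct-combination idea follows: given per-endpoint standardized score statistics and an estimate of their correlation matrix, a combined statistic is formed as a weighted sum scaled by its standard deviation. The equal weights and the numbers below are illustrative, not taken from the paper.

```python
# Sketch of the direct combination of correlated score statistics: with
# per-endpoint standardized statistics z and their correlation matrix R,
# a combined statistic is (w'z) / sqrt(w'Rw). The equal weights and the
# numbers below are illustrative, not taken from the paper.
import numpy as np
from scipy.stats import norm

z = np.array([2.1, 1.4, 1.9])              # univariate score statistics, 3 endpoints
R = np.array([[1.0, 0.5, 0.3],
              [0.5, 1.0, 0.4],
              [0.3, 0.4, 1.0]])             # correlations between the statistics
w = np.ones_like(z)                         # equal endpoint weights (one possible choice)

z_comb = (w @ z) / np.sqrt(w @ R @ w)       # standard normal under the null
print(f"combined Z = {z_comb:.2f}, one-sided p = {norm.sf(z_comb):.4f}")
```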
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most of the SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and it does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive, in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
Abstract:
We have conducted the first extensive field test of two new methods to retrieve optical properties for overhead clouds that range from patchy to overcast. The methods use measurements of zenith radiance at 673 and 870 nm wavelengths and require the presence of green vegetation in the surrounding area. The test was conducted at the Atmospheric Radiation Measurement Program Oklahoma site during September–November 2004. These methods work because at 673 nm (red) and 870 nm (near infrared (NIR)), clouds have nearly identical optical properties, while vegetated surfaces reflect quite differently. The first method, dubbed REDvsNIR, retrieves not only cloud optical depth τ but also radiative cloud fraction. Because of the 1-s time resolution of our radiance measurements, we are able for the first time to capture changes in cloud optical properties at the natural timescale of cloud evolution. We compared values of τ retrieved by REDvsNIR to those retrieved from downward shortwave fluxes and from microwave brightness temperatures. The flux method generally underestimates τ relative to the REDvsNIR method. Even for overcast but inhomogeneous clouds, differences between REDvsNIR and the flux method can be as large as 50%. In addition, REDvsNIR agreed to better than 15% with the microwave method for both overcast and broken clouds. The second method, dubbed COUPLED, retrieves τ by combining zenith radiances with fluxes. While extra information from fluxes was expected to improve retrievals, this is not always the case. In general, however, the COUPLED and REDvsNIR methods retrieve τ to within 15% of each other.
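To convey the structure (though not the physics) of a two-wavelength retrieval, the toy sketch below precomputes radiances over a grid of optical depth and radiative cloud fraction using an invented monotone forward model, then inverts an observed red/NIR pair by nearest-neighbour lookup; it does not reproduce the REDvsNIR radiative-transfer calculation.

```python
# Toy two-wavelength lookup-table retrieval: precompute zenith radiances on a
# grid of cloud optical depth (tau) and radiative cloud fraction, then invert
# an observed red/NIR pair by nearest-neighbour search. The forward model is
# an invented monotone function, not a radiative-transfer calculation, so this
# only conveys the structure of a REDvsNIR-style retrieval.
import numpy as np

def toy_forward(tau, cf, surface_albedo):
    # Invented: radiance grows with cloud amount; the clear-sky contribution
    # depends on how bright the vegetated surface is at that wavelength.
    cloudy = tau / (tau + 6.0)
    clear = 0.05 + 0.5 * surface_albedo
    return cf * cloudy + (1.0 - cf) * clear

taus = np.linspace(0.5, 60.0, 120)
cfs = np.linspace(0.0, 1.0, 51)
TAU, CF = np.meshgrid(taus, cfs, indexing="ij")

albedo_red, albedo_nir = 0.05, 0.45            # vegetation: dark at 673 nm, bright at 870 nm
lut_red = toy_forward(TAU, CF, albedo_red)
lut_nir = toy_forward(TAU, CF, albedo_nir)

obs_red, obs_nir = 0.55, 0.62                   # hypothetical observed normalized radiances
cost = (lut_red - obs_red) ** 2 + (lut_nir - obs_nir) ** 2
i, j = np.unravel_index(np.argmin(cost), cost.shape)
print(f"retrieved tau ≈ {TAU[i, j]:.1f}, radiative cloud fraction ≈ {CF[i, j]:.2f}")
```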