63 results for Variable Sampling Intervals
at University of Queensland eSpace - Australia
Abstract:
The depletion of zeta-cypermethrin residues in bovine tissues and milk was studied. Beef cattle were treated three times at 3-week intervals with 1 ml 10 kg⁻¹ body weight of a 25 g litre⁻¹ or 50 g litre⁻¹ pour-on formulation (2.5 and 5.0 mg zeta-cypermethrin kg⁻¹ body weight) or a 100 mg kg⁻¹ spray to simulate a likely worst-case treatment regime. Friesian and Jersey dairy cows were treated once with 2.5 mg zeta-cypermethrin kg⁻¹ in a pour-on formulation. Muscle, liver and kidney residue concentrations were generally less than the limit of detection (LOD = 0.01 mg kg⁻¹). Residues in renal-fat and back-fat samples from animals treated with 2.5 mg kg⁻¹ all exceeded the limit of quantitation (LOQ = 0.05 mg kg⁻¹), peaking at 10 days after treatment. Only two of five kidney-fat samples were above the LOQ after 34 days, but none of the back-fat samples exceeded the LOQ at 28 days after treatment. Following spray treatments, fat residues were detectable in some animals but were below the LOQ at all sampling intervals. Zeta-cypermethrin was quantifiable (LOQ = 0.01 mg kg⁻¹) in only one whole-milk sample from the Friesian cows (0.015 mg kg⁻¹, 2 days after treatment). In whole milk from Jersey cows, the mean concentration of zeta-cypermethrin peaked 1 day after treatment, at 0.015 mg kg⁻¹, and the highest individual sample concentration was 0.025 mg kg⁻¹ at 3 days after treatment. Residues in milk were not quantifiable from 4 days after treatment onwards. The mean concentrations of zeta-cypermethrin in milk fat from Friesian and Jersey cows peaked 2 days after treatment at 0.197 mg kg⁻¹ and 0.377 mg kg⁻¹, respectively, and the highest individual sample concentrations, 0.47 mg kg⁻¹ and 0.98 mg kg⁻¹, also occurred 2 days after treatment. (C) 2001 Society of Chemical Industry.
Abstract:
Clinical evaluation of arterial patency in acute ST-elevation myocardial infarction (STEMI) is unreliable. We sought to identify infarction and predict infarct-related artery patency, measured by the Thrombolysis In Myocardial Infarction (TIMI) score, with qualitative and quantitative intravenous myocardial contrast echocardiography (MCE). Thirty-four patients with suspected STEMI underwent MCE before emergency angiography and planned angioplasty. MCE was performed with harmonic imaging and variable triggering intervals during intravenous administration of Optison. Myocardial perfusion was quantified offline by fitting an exponential function to contrast intensity at various pulsing intervals. Plateau myocardial contrast intensity (A), rate of rise (beta), and myocardial flow (Q = A × beta) were assessed in 6 segments. Qualitative assessment of perfusion defects was sensitive for the diagnosis of infarction (sensitivity 93%) and did not differ between anterior and inferior infarctions. However, qualitative assessment had only moderate specificity (50%), and perfusion defects were unrelated to TIMI flow. In patients with STEMI, quantitatively derived myocardial blood flow Q (A × beta) was significantly lower in territories subtended by an artery with impaired (TIMI 0 to 2) flow than in territories supplied by a reperfused artery with TIMI 3 flow (10.2 ± 9.1 vs 44.3 ± 50.4, p = 0.03). Quantitative flow was also lower in segments with impaired flow in the subtending artery compared with normal patients with TIMI 3 flow (42.8 ± 36.6, p = 0.006) and all segments with TIMI 3 flow (35.3 ± 32.9, p = 0.018). A receiver-operating characteristic curve-derived cut-off Q value of
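The quantification step above fits the contrast-replenishment curve y(t) = A·(1 − e^(−beta·t)) to intensity measured at each pulsing interval t, with flow Q = A × beta. A minimal numpy-only sketch of such a fit (the grid-search solver and the synthetic data are illustrative assumptions, not the study's offline analysis software):

```python
import numpy as np

def fit_perfusion(t, y, betas=np.linspace(0.05, 5.0, 1000)):
    """Fit y = A*(1 - exp(-beta*t)) by scanning candidate beta values and
    solving for A in closed form (linear least squares) at each one."""
    best = (np.inf, 0.0, 0.0)
    for b in betas:
        g = 1.0 - np.exp(-b * t)
        A = float(g @ y) / float(g @ g)          # least-squares A given beta
        resid = float(((y - A * g) ** 2).sum())
        if resid < best[0]:
            best = (resid, A, b)
    _, A, b = best
    return A, b, A * b  # plateau A, rate of rise beta, flow Q = A*beta

# synthetic pulsing-interval data (illustrative values, not patient data)
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
y = 40.0 * (1.0 - np.exp(-0.9 * t))
A, beta, Q = fit_perfusion(t, y)
```

With noise-free data the fit recovers the generating parameters to within the beta grid resolution.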
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
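Bayesian probit estimation of this kind is commonly implemented with the Albert–Chib data-augmentation Gibbs sampler, where sign restrictions on coefficients can be imposed by rejection. A minimal sketch under a flat (uniform) prior; the simulated data and settings are illustrative, not the mortgage-choice application:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_probit(X, y, n_iter=400, positive=()):
    """Albert-Chib data-augmentation Gibbs sampler for the probit model.
    `positive` lists coefficient indices restricted to be positive a priori
    (the inequality prior is imposed by rejection); flat prior otherwise."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta, draws = np.zeros(k), []
    for _ in range(n_iter):
        # 1. draw latent utilities z_i ~ N(x_i'beta, 1), truncated by y_i
        mu = X @ beta
        z = np.empty(n)
        for i in range(n):
            while True:
                cand = rng.normal(mu[i], 1.0)
                if (cand > 0) == bool(y[i]):
                    z[i] = cand
                    break
        # 2. draw beta | z ~ N((X'X)^-1 X'z, (X'X)^-1), rejecting draws
        #    that violate the sign restrictions
        mean = XtX_inv @ (X.T @ z)
        while True:
            beta = mean + chol @ rng.standard_normal(k)
            if all(beta[j] > 0 for j in positive):
                break
        draws.append(beta.copy())
    return np.array(draws)

# illustrative simulated binary-choice data (not the mortgage data set)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ np.array([0.3, 1.0]) + rng.normal(size=n) > 0).astype(int)
draws = gibbs_probit(X, y, positive=[1])  # slope restricted to be positive
post_mean = draws[100:].mean(axis=0)      # discard burn-in
```

Rejection sampling is only practical when the posterior puts non-trivial mass on the restricted region; otherwise a truncated-normal draw for the constrained coefficient is preferable.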
Abstract:
A variable that appears to affect preference development is the exposure to a variety of options. Providing opportunities for systematically sampling different options is one procedure that can facilitate the development of preference, which is indicated by the consistency of selections. The purpose of this study was to evaluate the effects of providing sampling opportunities on the preference development for two adults with severe disabilities. Opportunities for sampling a variety of drink items were presented, followed by choice opportunities for selections at the site where sampling occurred and at a non-sampling site (a grocery store). Results show that the participants developed a definite response consistency in selections at both sites. Implications for sampling practices are discussed.
Abstract:
The influence of meteorological parameters on airborne pollen of Australian native arboreal species was investigated in the sub-tropical city of Brisbane, Australia over the five-year period June 1994–May 1999. Australian native arboreal pollen (ANAP), shed by taxa belonging to the families Cupressaceae, Casuarinaceae and Myrtaceae, accounts for 18.4% of the total annual pollen count and is distributed in the atmosphere during the entire year, with maximum loads restricted to the months May through November. Daily counts within the range 11–100 grains m⁻³ occurred over short intervals each year and were recorded on 100 days during the five-year sampling period. Total seasonal ANAP concentrations varied each year, with the highest annual values measured for the family Cupressaceae, for which greater seasonal frequencies were shown to be related to pre-seasonal precipitation (r² = 0.76, p = 0.05). Seasonal start dates were nearly consistent for the Cupressaceae and Casuarinaceae. Myrtaceae start dates were variable and were found to be directly related to lower average pre-seasonal maximum temperature (r² = 0.78, p = 0.04). Associations between daily ANAP loads and weather parameters showed that densities of airborne Cupressaceae and Casuarinaceae pollen were negatively correlated with maximum temperature (p < 0.0001), minimum temperature (p < 0.0001) and precipitation (p < 0.05), whereas associations with daily Myrtaceae pollen counts were not statistically significant. This is the first study conducted in Australia to assess the relationships between weather parameters and the airborne distribution of pollen emitted by Australian native arboreal species. Pollen shed by Australian native Cupressaceae, Casuarinaceae and Myrtaceae species is considered an important aeroallergen overseas; however, its significance as a sensitising source in Australia remains unclear and requires further investigation.
Abstract:
Fine-scale spatial genetic structure (SGS) in natural tree populations is largely a result of restricted pollen and seed dispersal. Understanding the link between limitations to dispersal in gene vectors and SGS is of key interest to biologists and the availability of highly variable molecular markers has facilitated fine-scale analysis of populations. However, estimation of SGS may depend strongly on the type of genetic marker and sampling strategy (of both loci and individuals). To explore sampling limits, we created a model population with simulated distributions of dominant and codominant alleles, resulting from natural regeneration with restricted gene flow. SGS estimates from subsamples (simulating collection and analysis with amplified fragment length polymorphism (AFLP) and microsatellite markers) were correlated with the 'real' estimate (from the full model population). For both marker types, sampling ranges were evident, with lower limits below which estimation was poorly correlated and upper limits above which sampling became inefficient. Lower limits (correlation of 0.9) were 100 individuals, 10 loci for microsatellites and 150 individuals, 100 loci for AFLPs. Upper limits were 200 individuals, five loci for microsatellites and 200 individuals, 100 loci for AFLPs. The limits indicated by simulation were compared with data sets from real species. Instances where sampling effort had been either insufficient or inefficient were identified. The model results should form practical boundaries for studies aiming to detect SGS. However, greater sample sizes will be required in cases where SGS is weaker than for our simulated population, for example, in species with effective pollen/seed dispersal mechanisms.
Abstract:
Two stochastic production frontier models are formulated within the generalized production function framework popularized by Zellner and Revankar (Rev. Econ. Stud. 36 (1969) 241) and Zellner and Ryu (J. Appl. Econometrics 13 (1998) 101). This framework is convenient for parsimonious modeling of a production function with returns to scale specified as a function of output. Two alternatives for introducing the stochastic inefficiency term and the stochastic error are considered. In the first, the errors are added to an equation of the form h(log y, theta) = log f(x, beta), where y denotes output, x is a vector of inputs and (theta, beta) are parameters. In the second, the equation h(log y, theta) = log f(x, beta) is solved for log y to yield a solution of the form log y = g[theta, log f(x, beta)], and the errors are added to this equation. The latter alternative is novel, but it is needed to preserve the usual definition of firm efficiency. The two alternative stochastic assumptions are considered in conjunction with two returns-to-scale functions, making a total of four models. A Bayesian framework for estimating all four models is described. The techniques are applied to USDA state-level data on agricultural output and four inputs. Posterior distributions for all parameters, for firm efficiencies and for the efficiency rankings of firms are obtained. The sensitivity of the results to the returns-to-scale specification and to the stochastic specification is examined. (c) 2004 Elsevier B.V. All rights reserved.
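The two ways of attaching the error terms can be written out side by side. A sketch, using the classic Zellner–Revankar transformation h(log y, theta) = log y + theta·y as a concrete example of the framework (this h is illustrative; it is not necessarily one of the two returns-to-scale functions estimated in the paper):

```latex
% Model 1: errors added on the transformed (h) scale
h(\ln y_i,\theta) \;=\; \ln f(x_i,\beta) + v_i - u_i,
\qquad v_i \sim N(0,\sigma_v^2),\quad u_i \ge 0 .

% Model 2: the equation is first solved for ln y, then the errors are added,
% so that exp(-u_i) retains its usual interpretation as firm efficiency
\ln y_i \;=\; g\bigl[\theta,\ \ln f(x_i,\beta)\bigr] + v_i - u_i .

% Zellner-Revankar example of the transformation:
% h(\ln y,\theta) = \ln y + \theta y, giving returns to scale that vary
% with the level of output y.
```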
Abstract:
This study determined the inter-tester and intra-tester reliability of physiotherapists measuring functional motor ability of traumatic brain injury clients using the Clinical Outcomes Variable Scale (COVS). To test inter-tester reliability, 14 physiotherapists scored the ability of 16 videotaped patients to execute the items that comprise the COVS. Intra-tester reliability was determined by four physiotherapists repeating their assessments after one week, and three months later. The intra-class correlation coefficients (ICC) were very high for both inter-tester reliability (ICC > 0.97 for total COVS scores, ICC > 0.93 for individual COVS items) and intra-tester reliability (ICC > 0.97). This study demonstrates that physiotherapists are reliable in the administration of the COVS.
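The intra-class correlation coefficients reported above come from a two-way ANOVA decomposition of the rating matrix. A minimal sketch of one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater; the abstract does not specify which ICC variant the study used, so this choice is an assumption):

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random-effects ANOVA, absolute agreement, single
    rater. Y is an (n subjects x k raters) matrix of scores."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    m = Y.mean()
    msr = k * ((Y.mean(axis=1) - m) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((Y.mean(axis=0) - m) ** 2).sum() / (k - 1)   # between raters
    sse = ((Y - m) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# two raters who agree up to a constant one-point offset (illustrative data)
scores = [[1, 2], [2, 3], [3, 4], [4, 5], [5, 6]]
icc = icc_2_1(scores)
```

Because ICC(2,1) penalises systematic rater offsets, the perfectly consistent but offset raters above score below 1.0, whereas identical ratings score exactly 1.0.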
Abstract:
Two experiments were conducted on the nature of expert perception in the sport of squash. In the first experiment, ten expert and fifteen novice players attempted to predict the direction and force of squash strokes from either a film display (occluded at variable time periods before and after the opposing player had struck the ball) or a matched point-light display (containing only the basic kinematic features of the opponent's movement pattern). Experts outperformed the novices under both display conditions, and the same basic time windows that characterised expert and novice pick-up of information in the film task also persisted in the point-light task. This suggests that the experts' perceptual advantage is directly related to their superior pick-up of essential kinematic information. In the second experiment, the vision of six expert and six less skilled players was occluded by remotely triggered liquid-crystal spectacles at quasi-random intervals during simulated match play. Players were required to complete their current stroke even when the display was occluded, and their prediction performance was assessed with respect to whether they moved to the correct half of the court to match the direction and depth of the opponent's stroke. Consistent with Experiment 1, experts were found to be superior in their advance pick-up of both directional and depth information when the display was occluded during the opponent's hitting action. However, experts also remained better than chance, and clearly superior to less skilled players, in their prediction performance under conditions where occlusion occurred before any significant pre-contact preparatory movement by the opposing player was visible. This additional source of expert superiority is attributable to their superior attunement to the information contained in the situational probabilities and sequential dependencies within their opponent's pattern of play.
Abstract:
The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
Abstract:
Stable carbon isotope analyses of wool staples provided insight into the vegetation consumed by sheep at a temporal resolution not previously studied. Contemporary Australian and historic South African samples dating back to 1916 were analyzed for their stable carbon isotope ratio, a proxy for the proportion of C-3 and C-4 plant species consumed by animals. Sheep sample vegetation continuously throughout a year, and as their wool grows it integrates and stores information about their diet. In subtropical and tropical rangelands the majority of grass species are C-4. Since sheep prefer to graze, and their wool is an isotopic record of their diet, we now have the potential to develop a high resolution index to the availability of grass from a sheep's perspective. Isotopic analyses of wool suggest a new direction for monitoring grazing and for the reconstruction of past vegetation changes, which will make a significant contribution to traditional rangeland ecology and management. It is recommended that isotopic and other analyses of wool be further developed for use in rangeland monitoring programs to provide valuable feedback for land managers.
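The proxy step above rests on a two-end-member mixing model: the wool δ¹³C value, corrected for a diet-to-tissue fractionation, falls linearly between C3 and C4 plant end-members. A minimal sketch (the end-member values and trophic shift below are generic literature-style assumptions, not values reported in this study):

```python
def c4_fraction(delta_wool, delta_c3=-26.5, delta_c4=-12.5, trophic_shift=2.0):
    """Two-end-member mixing model: estimate the fraction of C4 grass in the
    diet from a wool delta13C value (in per mil). End-member values and the
    diet-to-wool trophic shift are illustrative assumptions."""
    delta_diet = delta_wool - trophic_shift          # back out the diet signal
    f = (delta_diet - delta_c3) / (delta_c4 - delta_c3)
    return min(1.0, max(0.0, f))                     # clamp to [0, 1]
```

For example, a wool value midway between the shifted end-members implies a roughly half-C4 diet, while a value at the C3 end implies essentially no C4 grass intake.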
Abstract:
Prepulse inhibition and facilitation of the blink reflex are said to reflect different responses elicited by the lead stimulus, transient detection and orienting response respectively. Two experiments investigated the effects of trial repetition and lead stimulus change on blink modification. It was hypothesized that these manipulations will affect orienting and thus blink facilitation to a greater extent than they will affect transient detection and thus blink inhibition. In Experiment 1 (N = 64), subjects were trained with a sequence of 12 lead stimulus and 12 blink stimulus alone presentations, and 24 lead stimulus-blink stimulus pairings. Lead interval was 120 ms for 12 of the trials and 2000 ms for the other 12. For half the subjects this sequence was followed by a change in pitch of the lead stimulus. In Experiment 2 (N = 64), subjects were trained with a sequence of 36 blink alone stimuli and 36 lead stimulus-blink stimulus pairings. The lead interval was 120 ms for half the subjects and 2000 ms for the other half. The pitch of the lead stimulus on prestimulus trials 31-33 was changed for half the subjects in each group. In both experiments, the amount of blink inhibition decreased during training whereas the amount of blink facilitation remained unchanged. Lead stimulus change had no effect on blink modification in either experiment although it resulted in enhanced skin conductance responses and greater heart rate deceleration in Experiment 2. The present results are not consistent with the notion that blink facilitation is linked to orienting whereas blink inhibition reflects a transient detection mechanism. (C) 1998 Elsevier Science B.V.
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is observed. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
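The additive-scrambling setup can be illustrated with a simple moment estimator: since E[w | X] = Xβ + E[s] when w = y + s and s is independent of X, regressing (w − E[s]) on X is unbiased for β. This sketch shows the setup only; it is not the Bayesian MCMC scheme or the Singh, Joarder & King estimator compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def scrambled_ols(X, w, s_mean):
    """Moment estimator for additive scrambling w = y + s with E[s] known:
    OLS of (w - E[s]) on X is unbiased for beta."""
    return np.linalg.lstsq(X, w - s_mean, rcond=None)[0]

# simulate a sensitive response y that the analyst never sees directly
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, 1.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
s = rng.normal(loc=3.0, scale=1.0, size=n)   # scrambling variable, known dist.
w = y + s                                     # only the scrambled w is observed
beta_hat = scrambled_ols(X, w, s_mean=3.0)
```

The price of privacy is efficiency: the scrambling noise inflates the residual variance, which is why heavier scrambling favours estimators that exploit the known scrambling distribution more fully.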