73 results for VARIABLE SAMPLING INTERVAL
at University of Queensland eSpace - Australia
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
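The sign-restricted prior described above can be sketched as a truncated-flat prior inside a random-walk Metropolis sampler: the prior contributes nothing inside the allowed region and rules proposals out entirely when a sign restriction is violated. This is a minimal illustration on synthetic data, not the authors' estimation code; the model dimensions, step size, and chain length are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic binary-choice data: P(y=1) = Phi(b0 + b1*x), with true b1 > 0
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.3, 0.8])
y = (rng.random(n) < norm.cdf(X @ beta_true)).astype(int)

def log_post(beta, signs=None):
    """Probit log-posterior under a flat prior; `signs` (e.g. [0, +1])
    truncates the prior to the given coefficient signs (0 = unrestricted)."""
    if signs is not None and np.any(np.sign(beta) * signs < 0):
        return -np.inf
    p = norm.cdf(X @ beta).clip(1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def rw_metropolis(signs=None, n_iter=5000, step=0.1):
    """Random-walk Metropolis chain on the probit coefficients."""
    beta = np.zeros(2)
    lp = log_post(beta, signs)
    draws = []
    for _ in range(n_iter):
        prop = beta + step * rng.normal(size=2)
        lp_prop = log_post(prop, signs)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
            beta, lp = prop, lp_prop
        draws.append(beta.copy())
    return np.array(draws)

draws = rw_metropolis(signs=np.array([0, 1]))  # restrict the slope to be positive
print(draws[1000:].mean(axis=0))               # posterior mean after burn-in
```

Every retained draw respects the inequality restriction by construction, which is exactly how a truncated uniform prior acts in an MCMC scheme.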
Abstract:
A variable that appears to affect preference development is the exposure to a variety of options. Providing opportunities for systematically sampling different options is one procedure that can facilitate the development of preference, which is indicated by the consistency of selections. The purpose of this study was to evaluate the effects of providing sampling opportunities on the preference development for two adults with severe disabilities. Opportunities for sampling a variety of drink items were presented, followed by choice opportunities for selections at the site where sampling occurred and at a non-sampling site (a grocery store). Results show that the participants developed a definite response consistency in selections at both sites. Implications for sampling practices are discussed.
Abstract:
Distance sampling using line transects has not been previously used or tested for estimating koala abundance. In July 2001, a pilot survey was conducted to compare the use of line transects with strip transects for estimating koala abundance. Both methods provided a similar estimate of density. On the basis of the results of the pilot survey, the distribution and abundance of koalas in the Pine Rivers Shire, south-east Queensland, were determined using line-transect sampling. In total, 134 lines (length 64 km) were used to sample bushland areas. Eighty-two independent koalas were sighted. Analysis of the frequency distribution of sighting distances using the software program DISTANCE enabled a global detection function to be estimated for survey sites in bushland areas across the Shire. Abundance in urban parts of the Shire was estimated from densities obtained from total counts at eight urban sites that ranged from 26 to 51 ha in size. Koala abundance in the Pine Rivers Shire was estimated at 4584 (95% confidence interval, 4040-5247). Line-transect sampling is a useful method for estimating koala abundance provided experienced koala observers are used when conducting surveys.
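The density calculation behind line-transect sampling can be sketched with a half-normal detection function, a common default in programs such as DISTANCE: fit the function's scale to the perpendicular sighting distances, convert it to an effective strip width, and divide the count by the area effectively searched. This is an illustrative reimplementation, not the paper's analysis, and the sighting distances below are simulated.

```python
import numpy as np

def halfnormal_density(distances_m, total_line_km, n_groups=None):
    """Line-transect density estimate with a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)), fitted by maximum likelihood.
    Effective strip (half-)width is ESW = sigma * sqrt(pi/2)."""
    x = np.asarray(distances_m, float)
    sigma = np.sqrt(np.mean(x ** 2))        # MLE of sigma for untruncated data
    esw_m = sigma * np.sqrt(np.pi / 2.0)
    L_m = total_line_km * 1000.0
    n = len(x) if n_groups is None else n_groups
    # animals per m^2, converted to animals per km^2
    return n / (2.0 * L_m * esw_m) * 1e6

# Hypothetical perpendicular sighting distances (m) on 64 km of line
rng = np.random.default_rng(1)
dists = np.abs(rng.normal(0, 25, size=82))
print(f"{halfnormal_density(dists, 64.0):.1f} animals per km^2")
```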
Abstract:
Despite extensive efforts to confirm a direct association between Chlamydia pneumoniae and atherosclerosis, different laboratories continue to report a large variability in detection rates. In this study, we analyzed multiple sections from atherosclerotic carotid arteries from 10 endarterectomy patients to determine the location of C. pneumoniae DNA and the number of sections of the plaque required for analysis to obtain a 95% confidence of detecting the bacterium. A sensitive nested PCR assay detected C. pneumoniae DNA in all patients at one or more locations within the plaque. On average, 42% (ranging from 5 to 91%) of the sections from any single patient had C. pneumoniae DNA present. A patchy distribution of C. pneumoniae in the atherosclerotic lesions was observed, with no area of the carotid having significantly more C. pneumoniae DNA present. If a single random 30-µm-thick section was tested, there was only a 35.6 to 41.6% (95% confidence interval) chance of detecting C. pneumoniae DNA in a patient with carotid artery disease. A minimum of 15 sections would therefore be required to obtain a 95% chance of detecting all true positives. The low concentration and patchy distribution of C. pneumoniae DNA in atherosclerotic plaque appear to be among the reasons for inconsistency between laboratories in the results reported.
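The 15-section figure follows from a simple independence calculation: if each randomly chosen section is positive with probability p, then n sections detect at least one positive with probability 1 - (1 - p)^n. A sketch, where the 18.6% per-section value is an assumed input chosen to reproduce the reported requirement, not a number quoted in the abstract:

```python
import math

def sections_needed(p_single, confidence=0.95):
    """Smallest n with 1 - (1 - p_single)^n >= confidence, assuming
    independent sections that are each positive with probability p_single."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_single))

# With an assumed ~18.6% chance per single section:
print(sections_needed(0.186))  # -> 15
```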
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon and Zhang and Einstein in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, none of the recent works or the original work by Pahl provides a rigorous basis for the determination of a confidence interval for the estimator or a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example. (C) 2003 Elsevier Science Ltd. All rights reserved.
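The paper's interval construction is specific to the Pahl estimator, but the general ML recipe it invokes, an asymptotic variance from the Fisher information followed by a Wald interval, can be illustrated on a simpler example. The exponential-mean setting below is purely illustrative and is not the trace-length estimator itself:

```python
import math

def wald_ci_exponential_mean(xs, z=1.96):
    """Wald 95% CI for the mean of an exponential sample: the MLE is the
    sample mean, with asymptotic variance mu^2 / n from the Fisher
    information, so the interval is mu_hat +/- z * mu_hat / sqrt(n)."""
    n = len(xs)
    mu = sum(xs) / n
    se = mu / math.sqrt(n)
    return mu - z * se, mu + z * se

# Hypothetical trace-length-like sample
lo, hi = wald_ci_exponential_mean([2.0, 1.5, 3.1, 0.7, 2.4, 1.9, 2.8, 1.1])
print(f"[{lo:.2f}, {hi:.2f}]")
```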
Abstract:
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsizes are used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement in efficiency over existing approaches. (C) 2004 American Institute of Physics.
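The key device, replacing a Poisson draw with a binomial one so that sampled reaction counts can never exceed the available molecules, can be sketched for a single decay-type channel. This is a minimal illustration of the idea, not the paper's full leap method; the rate constant and stepsize are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def binomial_leap_step(N, c, tau):
    """One binomial tau-leap for a single decay channel S -> 0 with
    propensity a = c*N: draw K ~ Binomial(N, min(1, c*tau)), so K can never
    exceed the N molecules available, unlike a Poisson(a*tau) draw."""
    p = min(1.0, c * tau)
    K = rng.binomial(N, p)
    return N - K

# Simulate decay with c = 0.1 and tau = 0.5; the count can never go negative
N, history = 1000, []
for _ in range(100):
    N = binomial_leap_step(N, 0.1, 0.5)
    history.append(N)
print(history[-1], min(history))
```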
Abstract:
Fine-scale spatial genetic structure (SGS) in natural tree populations is largely a result of restricted pollen and seed dispersal. Understanding the link between limitations to dispersal in gene vectors and SGS is of key interest to biologists and the availability of highly variable molecular markers has facilitated fine-scale analysis of populations. However, estimation of SGS may depend strongly on the type of genetic marker and sampling strategy (of both loci and individuals). To explore sampling limits, we created a model population with simulated distributions of dominant and codominant alleles, resulting from natural regeneration with restricted gene flow. SGS estimates from subsamples (simulating collection and analysis with amplified fragment length polymorphism (AFLP) and microsatellite markers) were correlated with the 'real' estimate (from the full model population). For both marker types, sampling ranges were evident, with lower limits below which estimation was poorly correlated and upper limits above which sampling became inefficient. Lower limits (correlation of 0.9) were 100 individuals, 10 loci for microsatellites and 150 individuals, 100 loci for AFLPs. Upper limits were 200 individuals, five loci for microsatellites and 200 individuals, 100 loci for AFLPs. The limits indicated by simulation were compared with data sets from real species. Instances where sampling effort had been either insufficient or inefficient were identified. The model results should form practical boundaries for studies aiming to detect SGS. However, greater sample sizes will be required in cases where SGS is weaker than for our simulated population, for example, in species with effective pollen/seed dispersal mechanisms.
Abstract:
Two stochastic production frontier models are formulated within the generalized production function framework popularized by Zellner and Revankar (Rev. Econ. Stud. 36 (1969) 241) and Zellner and Ryu (J. Appl. Econometrics 13 (1998) 101). This framework is convenient for parsimonious modeling of a production function with returns to scale specified as a function of output. Two alternatives for introducing the stochastic inefficiency term and the stochastic error are considered. In the first the errors are added to an equation of the form h(log y, theta) = log f (x, beta) where y denotes output, x is a vector of inputs and (theta, beta) are parameters. In the second the equation h(log y,theta) = log f(x, beta) is solved for log y to yield a solution of the form log y = g[theta, log f(x, beta)] and the errors are added to this equation. The latter alternative is novel, but it is needed to preserve the usual definition of firm efficiency. The two alternative stochastic assumptions are considered in conjunction with two returns to scale functions, making a total of four models that are considered. A Bayesian framework for estimating all four models is described. The techniques are applied to USDA state-level data on agricultural output and four inputs. Posterior distributions for all parameters, for firm efficiencies and for the efficiency rankings of firms are obtained. The sensitivity of the results to the returns to scale specification and to the stochastic specification is examined. (c) 2004 Elsevier B.V. All rights reserved.
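For the second alternative, solving h(log y, theta) = log f(x, beta) for log y is explicit in the common Zellner-Revankar form log y + theta*y = log f(x): substituting u = theta*y gives u*e^u = theta*f, so the solution involves the Lambert W function. A sketch under that assumed form (the specific h used in the paper may differ):

```python
import numpy as np
from scipy.special import lambertw

def solve_output(theta, log_f):
    """Solve log y + theta*y = log_f for y. With u = theta*y this becomes
    u * e^u = theta * exp(log_f), so y = W(theta * f) / theta on the
    principal branch of the Lambert W function."""
    return np.real(lambertw(theta * np.exp(log_f))) / theta

y = solve_output(0.5, 1.0)
# Verify the implicit equation holds at the recovered y
print(np.log(y) + 0.5 * y)  # ~ 1.0
```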
Abstract:
This study determined the inter-tester and intra-tester reliability of physiotherapists measuring functional motor ability of traumatic brain injury clients using the Clinical Outcomes Variable Scale (COVS). To test inter-tester reliability, 14 physiotherapists scored the ability of 16 videotaped patients to execute the items that comprise the COVS. Intra-tester reliability was determined by four physiotherapists repeating their assessments after one week, and three months later. The intra-class correlation coefficients (ICC) were very high for both inter-tester reliability (ICC > 0.97 for total COVS scores, ICC > 0.93 for individual COVS items) and intra-tester reliability (ICC > 0.97). This study demonstrates that physiotherapists are reliable in the administration of the COVS.
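The intra-class correlation used here can be computed from a two-way ANOVA decomposition of a subjects-by-raters table. Below is a sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), one common choice for inter-tester reliability; the abstract does not state which ICC form the study used.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute agreement, single rater.
    `ratings` is an (n subjects x k raters) array."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    msr = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((Y - Y.mean(axis=1, keepdims=True)
                    - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfectly consistent raters give ICC = 1
scores = np.array([[10, 10], [20, 20], [30, 30], [40, 40]])
print(icc_2_1(scores))  # -> 1.0
```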
Abstract:
40Ar/39Ar laser incremental heating analyses of individual grains of supergene jarosite, alunite, and cryptomelane from weathering profiles in the Dugald River area, Queensland, Australia, show a strong positive correlation between a sample’s age and its elevation. We analyzed 125 grains extracted from 35 hand specimens collected from weathering profiles at 11 sites located at 3 distinct elevations. The highest elevation profile hosts the oldest supergene minerals, whereas progressively younger samples occur at lower positions in the landscape. The highest elevation sampling sites (three sites), located on top of an elongated mesa (255 to 275 m elevation), yield ages in the 16 to 12 Ma range. Samples from an intermediate elevation site (225 to 230 m elevation) yield ages in the 6 to 4 Ma range. Samples collected at the lowest elevation sites (200 to 220 m elevation) yield ages in the 2.2 to 0.8 Ma interval. Grains of supergene alunite, jarosite, and cryptomelane analyzed from individual single hand specimens yield reproducible results, confirming the suitability of these minerals for 40Ar/39Ar geochronology. Multiple samples collected from the same site also yield reproducible results, indicating that the ages measured are true precipitation ages for the samples analyzed. Different sites, up to 3 km apart, sampled from weathering profiles at the same elevation again yield reproducible results. The consistency of results confirms that 40Ar/39Ar geochronology of supergene jarosite, alunite, and cryptomelane yields ages of formation of weathering profiles, providing a reliable numerical basis for differentiating and correlating these profiles. The age versus elevation relationship obtained suggests that the stepped landscapes in the Dugald River area record a progressive downward migration of a relatively flat weathering front.
The steps in the landscape result from differential erosion of previously weathered bedrock displaying different susceptibility to weathering and contrasting resistance to erosion. Combined, the age versus elevation relationships measured yield a weathering rate of 3.8 m Myr−1 (for the past 15 Ma) if a descending subhorizontal weathering front is assumed. The results also permit the calculation of the erosion rate of the more easily weathered and eroded lithologies, assuming an initially flat landscape as proposed in models of episodic landscape development. The average erosion rate for the past 15 Ma is 3.3 m Myr−1, consistent with erosion rates obtained by cosmogenic isotope studies in the region.
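The age-elevation relationship can be reduced to a rate with a least-squares fit of elevation on age. The site values below are midpoints of the ranges quoted above, used purely for illustration, so the resulting slope differs somewhat from the paper's assumed-front rate of 3.8 m Myr−1:

```python
import numpy as np

# Site midpoints taken from the quoted ranges (elevation m, age Ma);
# illustrative inputs only, not the paper's regression data
elev = np.array([265.0, 227.5, 210.0])
age = np.array([14.0, 5.0, 1.5])

# Least-squares slope of elevation vs age: metres of front descent per Myr
slope, intercept = np.polyfit(age, elev, 1)
print(f"front descends ~{slope:.1f} m per Myr")
```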
Abstract:
While the physiological adaptations that occur following endurance training in previously sedentary and recreationally active individuals are relatively well understood, the adaptations to training in already highly trained endurance athletes remain unclear. While significant improvements in endurance performance and corresponding physiological markers are evident following submaximal endurance training in sedentary and recreationally active groups, an additional increase in submaximal training (i.e. volume) in highly trained individuals does not appear to further enhance either endurance performance or associated physiological variables [e.g. peak oxygen uptake (V-dot O2peak), oxidative enzyme activity]. It seems that, for athletes who are already trained, improvements in endurance performance can be achieved only through high-intensity interval training (HIT). The limited research which has examined changes in muscle enzyme activity in highly trained athletes, following HIT, has revealed no change in oxidative or glycolytic enzyme activity, despite significant improvements in endurance performance (p < 0.05). Instead, an increase in skeletal muscle buffering capacity may be one mechanism responsible for an improvement in endurance performance. Changes in plasma volume, stroke volume, as well as muscle cation pumps, myoglobin, capillary density and fibre type characteristics have yet to be investigated in response to HIT with the highly trained athlete. Information relating to HIT programme optimisation in endurance athletes is also very sparse. Preliminary work using the velocity at which V-dot O2max is achieved (Vmax) as the interval intensity, and fractions (50 to 75%) of the time to exhaustion at Vmax (Tmax) as the interval duration has been successful in eliciting improvements in performance in long-distance runners. However, Vmax and Tmax have not been used with cyclists. 
Instead, HIT programme optimisation research in cyclists has revealed that repeated supramaximal sprinting may be as effective as more traditional HIT programmes for eliciting improvements in endurance performance. Further examination of the biochemical and physiological adaptations which accompany different HIT programmes, as well as investigation into the optimal HIT programme for eliciting performance enhancements in highly trained athletes, is required.
Abstract:
The generalized Gibbs sampler (GGS) is a recently developed Markov chain Monte Carlo (MCMC) technique that enables Gibbs-like sampling of state spaces that lack a convenient representation in terms of a fixed coordinate system. This paper describes a new sampler, called the tree sampler, which uses the GGS to sample from a state space consisting of phylogenetic trees. The tree sampler is useful for a wide range of phylogenetic applications, including Bayesian, maximum likelihood, and maximum parsimony methods. A fast new algorithm to search for a maximum parsimony phylogeny is presented, using the tree sampler in the context of simulated annealing. The mathematics underlying the algorithm is explained and its time complexity is analyzed. The method is tested on two large data sets consisting of 123 sequences and 500 sequences, respectively. The new algorithm is shown to compare very favorably in terms of speed and accuracy to the program DNAPARS from the PHYLIP package.
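The simulated-annealing wrapper around the tree sampler has a generic structure: propose a random rearrangement, always accept improvements, and accept worse states with a temperature-dependent probability that shrinks as the schedule cools. A skeleton of that structure, with a toy numeric objective standing in for parsimony length and a random perturbation standing in for a GGS tree move:

```python
import math
import random

random.seed(0)

def simulated_annealing(state, score, neighbour, t0=10.0, cooling=0.995,
                        n_iter=5000):
    """Generic simulated-annealing skeleton. For parsimony search, `state`
    would be a phylogenetic tree, `neighbour` a random rearrangement, and
    `score` the parsimony length to be minimised."""
    best, best_s = state, score(state)
    cur, cur_s, t = state, best_s, t0
    for _ in range(n_iter):
        cand = neighbour(cur)
        cand_s = score(cand)
        # Accept improvements always, worse states with probability e^(-d/t)
        if cand_s <= cur_s or random.random() < math.exp((cur_s - cand_s) / t):
            cur, cur_s = cand, cand_s
            if cur_s < best_s:
                best, best_s = cur, cur_s
        t *= cooling
    return best, best_s

# Toy stand-in for tree search: minimise a sum of squares over a vector
best, best_s = simulated_annealing(
    [5.0, -3.0, 7.0],
    score=lambda v: sum(x * x for x in v),
    neighbour=lambda v: [x + random.gauss(0, 0.3) for x in v],
)
print(best_s)
```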
Abstract:
Stable carbon isotope analyses of wool staples provided insight into the vegetation consumed by sheep at a temporal resolution not previously studied. Contemporary Australian and historic South African samples dating back to 1916 were analyzed for their stable carbon isotope ratio, a proxy for the proportion of C-3 and C-4 plant species consumed by animals. Sheep sample vegetation continuously throughout a year, and as their wool grows it integrates and stores information about their diet. In subtropical and tropical rangelands the majority of grass species are C-4. Since sheep prefer to graze, and their wool is an isotopic record of their diet, we now have the potential to develop a high resolution index to the availability of grass from a sheep's perspective. Isotopic analyses of wool suggest a new direction for monitoring grazing and for the reconstruction of past vegetation changes, which will make a significant contribution to traditional rangeland ecology and management. It is recommended that isotopic and other analyses of wool be further developed for use in rangeland monitoring programs to provide valuable feedback for land managers.
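The C-3/C-4 proportion is recovered from wool delta-13C by two-endmember mixing. A sketch where the endmember and enrichment values (~-26.5 and ~-12.5 per mil for C-3 and C-4 plants, +3 per mil diet-to-wool shift) are assumed illustrative figures, not values from the paper:

```python
def fraction_c4(delta_wool, delta_c3=-26.5, delta_c4=-12.5, enrichment=3.0):
    """Two-endmember mixing: estimate the C4-grass fraction of the diet from
    wool delta13C. Endmember and enrichment defaults are assumed values."""
    delta_diet = delta_wool - enrichment       # back out diet delta13C
    f = (delta_diet - delta_c3) / (delta_c4 - delta_c3)
    return min(1.0, max(0.0, f))               # clamp to a valid proportion

print(fraction_c4(-16.5))  # diet delta13C of -19.5 -> 0.5, i.e. 50% C4 grass
```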
Abstract:
This paper discusses a multi-layer feedforward (MLF) neural network incident detection model that was developed and evaluated using field data. In contrast to published neural network incident detection models which relied on simulated or limited field data for model development and testing, the model described in this paper was trained and tested on a real-world data set of 100 incidents. The model uses speed, flow and occupancy data measured at dual stations, averaged across all lanes and only from time interval t. The off-line performance of the model is reported under both incident and non-incident conditions. The incident detection performance of the model is reported based on a validation-test data set of 40 incidents that were independent of the 60 incidents used for training. The false alarm rates of the model are evaluated based on non-incident data that were collected from a freeway section which was video-taped for a period of 33 days. A comparative evaluation between the neural network model and the incident detection model in operation on Melbourne's freeways is also presented. The results of the comparative performance evaluation clearly demonstrate the substantial improvement in incident detection performance obtained by the neural network model. The paper also presents additional results that demonstrate how improvements in model performance can be achieved using variable decision thresholds. Finally, the model's fault-tolerance under conditions of corrupt or missing data is investigated and the impact of loop detector failure/malfunction on the performance of the trained model is evaluated and discussed. The results presented in this paper provide a comprehensive evaluation of the developed model and confirm that neural network models can provide fast and reliable incident detection on freeways. (C) 1997 Elsevier Science Ltd. All rights reserved.
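The variable-decision-threshold idea can be illustrated directly: sweep a threshold over the model's incident-probability output and trace the resulting detection-rate/false-alarm trade-off. The probabilities and labels below are hypothetical, not the paper's model outputs:

```python
import numpy as np

def detection_tradeoff(prob_incident, is_incident, thresholds):
    """Detection rate vs false-alarm rate as the decision threshold applied
    to a model's incident-probability output is varied."""
    p = np.asarray(prob_incident)
    y = np.asarray(is_incident, bool)
    out = []
    for t in thresholds:
        alarm = p >= t
        dr = alarm[y].mean()    # share of incident intervals flagged
        far = alarm[~y].mean()  # share of non-incident intervals flagged
        out.append((t, dr, far))
    return out

# Hypothetical model outputs over 8 time intervals, 3 of them incidents
probs = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1, 0.4, 0.05]
truth = [1, 1, 0, 1, 0, 0, 0, 0]
for t, dr, far in detection_tradeoff(probs, truth, [0.3, 0.5, 0.75]):
    print(f"threshold {t}: DR={dr:.2f}, FAR={far:.2f}")
```

Raising the threshold trades detection rate for a lower false-alarm rate, which is the tuning knob the paper exploits.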