36 results for Curve number method
Abstract:
This paper presents a new method to measure the sinking rates of individual phytoplankton “particles” (cells, chains, colonies, and aggregates) in the laboratory. Conventional particle tracking and high resolution video imaging were used to measure particle sinking rates and particle size. The stabilizing force of a very mild linear salinity gradient (1 ppt over 15 cm) prevented the formation of convection currents in the laboratory settling chamber. Whereas bulk settling methods such as SETCOL provide a single value of sinking rate for a population, this method allows the measurement of sinking rate and particle size for a large number of individual particles or phytoplankton within a population. The method has applications where sinking rates vary within a population, or where sinking rate-size relationships are important. Preliminary data from experiments with both laboratory and field samples of marine phytoplankton are presented here to illustrate the use of the technique, its applications, and limitations. Whereas this paper deals only with sinking phytoplankton, the method is equally valid for positively buoyant species, as well as nonbiological particles.
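As a rough illustration of the per-particle measurement described above, the sketch below estimates a single particle's sinking rate from its tracked vertical positions by a least-squares fit of depth against time. The function name, frame interval, and pixel calibration are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sinking_rate(depths_px, frame_interval_s, microns_per_pixel):
    """Estimate one particle's sinking rate (m/day) from tracked vertical
    positions (pixels, increasing downward) in successive video frames."""
    t = np.arange(len(depths_px)) * frame_interval_s        # time (s)
    z = np.asarray(depths_px) * microns_per_pixel * 1e-6    # depth (m)
    slope, _ = np.polyfit(t, z, 1)                          # least-squares slope, m/s
    return slope * 86400.0                                   # convert to m/day

# Hypothetical track: a particle drifting roughly 2 pixels per frame downward
track = [100.0, 102.0, 104.1, 105.9, 108.2, 110.0]
print(sinking_rate(track, frame_interval_s=1.0, microns_per_pixel=50.0))
```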
Abstract:
All muscle contractions are dependent on the functioning of motor units. In diseases such as amyotrophic lateral sclerosis (ALS), progressive loss of motor units leads to gradual paralysis. A major difficulty in the search for a treatment for these diseases has been the lack of a reliable measure of disease progression. One possible measure would be an estimate of the number of surviving motor units. Despite over 30 years of motor unit number estimation (MUNE), all proposed methods have been met with practical and theoretical objections. Our aim is to develop a method of MUNE that overcomes these objections. We record the compound muscle action potential (CMAP) from a selected muscle in response to a graded electrical stimulation applied to the nerve. As the stimulus increases, the threshold of each motor unit is exceeded, and the size of the CMAP increases until a maximum response is obtained. However, the threshold potential required to excite an axon is not a precise value but fluctuates over a small range, leading to probabilistic activation of motor units in response to a given stimulus. When the threshold ranges of motor units overlap, there may be alternation, in which the number of motor units that fire in response to the stimulus varies. This means that increments in the value of the CMAP correspond to the firing of different combinations of motor units. At a fixed stimulus, variability in the CMAP, measured as variance, can be used to conduct MUNE using the "statistical" or the "Poisson" method. However, this method relies on the assumptions that the numbers of motor units firing probabilistically follow the Poisson distribution and that all single motor unit action potentials (MUAPs) have a fixed and identical size. These assumptions are not necessarily correct. We propose to develop a Bayesian statistical methodology to analyze electrophysiological data to provide an estimate of motor unit numbers. Our method of MUNE incorporates the variability of the threshold, the variability between and within single MUAPs, and baseline variability. Our model not only gives the most probable number of motor units but also provides information about both the population of units and individual units. We use Markov chain Monte Carlo to obtain information about the characteristics of individual motor units and about the population of motor units, and the Bayesian information criterion for MUNE. We test our method of MUNE on three subjects. Our method provides a reproducible estimate for a patient with stable but severe ALS. In a serial study, we demonstrate a decline in motor unit numbers in a patient with rapidly advancing disease. Finally, with our last patient, we show that our method has the capacity to estimate a larger number of motor units.
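For context, the conventional "Poisson" method that this abstract contrasts with its Bayesian approach rests on the stated assumptions of Poisson-distributed firing and identical MUAP sizes: if k units fire with k ~ Poisson(lambda) and each contributes a fixed size s, the mean CMAP is s*lambda and its variance is s^2*lambda, so s is the variance-to-mean ratio. A minimal sketch under those assumptions, with made-up numbers (not the paper's Bayesian method):

```python
import numpy as np

def poisson_mune(cmap_samples, cmap_max):
    """Conventional 'Poisson' motor unit number estimate from CMAP variability
    at a fixed submaximal stimulus.  Assumes the number of firing units is
    Poisson-distributed and every MUAP has the same fixed size."""
    samples = np.asarray(cmap_samples, dtype=float)
    mean, var = samples.mean(), samples.var(ddof=1)
    unit_size = var / mean          # mean = s*lambda, var = s^2*lambda  =>  s = var/mean
    return cmap_max / unit_size     # estimated number of motor units

# Hypothetical data: repeated CMAP areas at one stimulus level, plus the maximal CMAP
cmaps = [2.1, 3.4, 2.7, 4.0, 3.1, 2.4, 3.8, 2.9]
print(round(poisson_mune(cmaps, cmap_max=25.0)))   # roughly 170 units for these numbers
```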
Abstract:
A number of systematic conservation planning tools are available to aid in making land use decisions. Given the increasing worldwide use and application of reserve design tools, including measures of site irreplaceability, it is essential that methodological differences and their potential effect on conservation planning outcomes are understood. We compared the irreplaceability of sites for protecting ecosystems within the Brigalow Belt Bioregion, Queensland, Australia, using two alternative reserve system design tools, Marxan and C-Plan. We set Marxan to generate multiple reserve systems that met targets with minimal area; the first scenario ignored spatial objectives, while the second selected compact groups of areas. Marxan calculates the irreplaceability of each site as the proportion of solutions in which it occurs in each of these scenarios. In contrast, C-Plan uses a statistical estimate of irreplaceability as the likelihood that each site is needed in all combinations of sites that satisfy the targets. We found that sites containing rare ecosystems are almost always irreplaceable regardless of the method. Importantly, Marxan and C-Plan gave similar outcomes when spatial objectives were ignored. Marxan with a compactness objective defined twice as much area as irreplaceable, including many sites with relatively common ecosystems. However, targets for all ecosystems were met using a similar amount of area in C-Plan and Marxan, even with compactness. The importance of differences in the outcomes of using the two methods will depend on the question being addressed; in general, the use of two or more complementary tools is beneficial.
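A minimal sketch of the Marxan-style measure described above, selection frequency (the proportion of alternative solutions in which each site occurs). The site identifiers and runs are hypothetical, and this is not the C-Plan statistical estimate:

```python
from collections import Counter

def selection_frequency(solutions, all_sites):
    """Marxan-style irreplaceability proxy: the fraction of alternative reserve
    solutions in which each planning unit (site) is selected."""
    counts = Counter(site for solution in solutions for site in solution)
    n = len(solutions)
    return {site: counts[site] / n for site in all_sites}

# Hypothetical output of 4 Marxan runs (each a set of selected site IDs)
runs = [{"A", "B", "D"}, {"A", "C", "D"}, {"A", "B", "E"}, {"A", "D", "E"}]
print(selection_frequency(runs, all_sites="ABCDE"))
# Site "A" appears in every solution, so its frequency is 1.0 (effectively irreplaceable).
```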
Abstract:
In this paper, we examine the problem of fitting a hypersphere to a set of noisy measurements of points on its surface. Our work generalises an estimator of Delogne (Proc. IMEKO-Symp. Microwave Measurements, 1972, 117-123), which he proposed for circles and which has been shown by Kasa (IEEE Trans. Instrum. Meas. 25, 1976, 8-14) to be convenient for its ease of analysis and computation. We also generalise Chan's 'circular functional relationship' to describe the distribution of points. We derive the Cramér-Rao lower bound (CRLB) under this model, and we derive approximations for the mean and variance of the estimator for fixed sample sizes when the noise variance is small. We perform a statistical analysis of the estimate of the hypersphere's centre. We examine the existence of the mean and variance of the estimator for fixed sample sizes. We find that the mean exists when the number of sample points is greater than M + 1, where M is the dimension of the hypersphere. The variance exists when the number of sample points is greater than M + 2. We find that the bias approaches zero as the noise variance diminishes and that the variance approaches the CRLB. We provide simulation results to support our findings.
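The Delogne/Kasa approach generalised here can be posed as a linear least-squares problem: writing ||x||^2 = 2 c.x + b with b = r^2 - ||c||^2 makes the fit linear in the centre c and the scalar b. The sketch below follows that idea; the function name and the synthetic test data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def kasa_hypersphere_fit(points):
    """Kasa/Delogne-style linear least-squares fit of a hypersphere.
    Each row of `points` is one noisy surface measurement in R^M."""
    X = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * X, np.ones((len(X), 1))])   # unknowns: centre c and b = r^2 - ||c||^2
    y = np.sum(X**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    centre, b = sol[:-1], sol[-1]
    radius = np.sqrt(b + centre @ centre)
    return centre, radius

# Hypothetical test: noisy points on a sphere of radius 2 centred at (1, -1, 3)
rng = np.random.default_rng(0)
u = rng.normal(size=(200, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([1.0, -1.0, 3.0]) + 2.0 * u + 0.01 * rng.normal(size=(200, 3))
print(kasa_hypersphere_fit(pts))
```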
Abstract:
The Java programming language supports concurrency. Concurrent programs are harder to verify than their sequential counterparts due to their inherent nondeterminism and a number of specific concurrency problems such as interference and deadlock. In previous work, we proposed a method for verifying concurrent Java components based on a mix of code inspection, static analysis tools, and the ConAn testing tool. The method was derived from an analysis of concurrency failures in Java components, but had not been applied in practice. In this paper, we explore the method by applying it to an implementation of the well-known readers-writers problem and to a number of mutants of that implementation. Because we apply it only to a single, well-known example, we do not attempt to draw any general conclusions about the applicability or effectiveness of the method. However, the exploration does point out several strengths and weaknesses in the method, which enable us to fine-tune it before we carry out a more formal evaluation on other, more realistic components.
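For readers unfamiliar with the target component: the readers-writers problem allows many readers to access a shared resource concurrently while a writer requires exclusive access. The component verified in the paper is a Java implementation; purely as an illustration of the kind of component involved, here is a minimal readers-preference lock sketched in Python (not the authors' code, and simple enough that it can starve writers):

```python
import threading

class ReadersWriterLock:
    """Minimal readers-preference lock: many concurrent readers, one exclusive writer."""
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()       # protects the reader count
        self._write_lock = threading.Lock()  # held while anyone reads or writes

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()   # first reader locks out writers

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()   # last reader lets writers in

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()
```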
Abstract:
Indexing high-dimensional datasets has attracted extensive attention from many researchers in the last decade. Since R-tree-type index structures are known to suffer from the curse of dimensionality, Pyramid-tree-type index structures, which are based on the B-tree, have been proposed to break the curse of dimensionality. However, the number of pyramids is often insufficient to discriminate data points when the number of dimensions is high, and the effectiveness of such structures degrades dramatically as dimensionality increases. In this paper, we focus on one particular aspect of the curse of dimensionality: the surface of a hypercube in a high-dimensional space approaches 100% of the total hypercube volume as the number of dimensions approaches infinity. We propose a new indexing method based on this surface property. We prove that the Pyramid-tree technique is a special case of our method. The results of our experiments demonstrate the clear superiority of our method.
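The surface effect mentioned above can be made concrete: the fraction of a unit hypercube's volume lying within eps of its surface is 1 - (1 - 2*eps)^d, which tends to 1 as the dimensionality d grows. A small illustrative computation (the eps value is arbitrary):

```python
def near_surface_fraction(dims, eps=0.05):
    """Fraction of a unit hypercube's volume within `eps` of its surface.
    The interior cube at least eps from every face has side (1 - 2*eps),
    so the near-surface fraction is 1 - (1 - 2*eps)**dims."""
    return 1.0 - (1.0 - 2.0 * eps) ** dims

for d in (2, 10, 50, 100):
    print(d, round(near_surface_fraction(d), 4))
# 2 -> 0.19, 10 -> 0.6513, 50 -> 0.9948, 100 -> 1.0 (rounded)
```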