871 results for Rejection-sampling Algorithm
Abstract:
Koala (Phascolarctos cinereus) populations in eastern Australia are threatened by land clearing for agricultural and urban development. At the same time, conservation efforts are hindered by a dearth of information about inland populations. Faecal deposits offer a source of information that is readily available and easily collected non-invasively. We detail a faecal pellet sampling protocol that was developed for use in a large rangeland biogeographic region. The method samples trees in belt transects, uses a thorough search at the tree base to quickly identify trees with koala pellets under them, then estimates the abundance of faecal pellets under those trees using 1-m² quadrats. There was a strong linear relationship between these estimates and a complete enumeration of pellet abundance under the same trees. We evaluated the accuracy of our method in detecting trees where pellets were present by means of a misclassification index that was weighted more heavily for missed trees that had high numbers of pellets under them. This showed acceptable accuracy in all landforms except riverine, where some trees with large numbers of pellets were missed. Here, accuracy in detecting pellet presence was improved by sampling with quadrats rather than basal searches. Finally, we developed a method to reliably age pellets and demonstrated how this protocol could be used with the faecal-standing-crop method to derive a regional estimate of absolute koala abundance.
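The final abundance step of the faecal-standing-crop method is a one-line calculation: animals = pellet standing crop / (per-animal daily deposition rate × mean pellet persistence). A minimal sketch, where the function name and all rate values are hypothetical placeholders, not figures from the study:

```python
def fsc_abundance(total_pellets, pellets_per_animal_day, mean_persistence_days):
    """Faecal-standing-crop estimate of animal abundance:
    animals = standing crop / (daily deposition rate * mean pellet life)."""
    return total_pellets / (pellets_per_animal_day * mean_persistence_days)
```

For example, with an assumed deposition rate of 110 pellets per koala per day and a 30-day mean persistence, a standing crop of 33,000 pellets would imply 10 animals in the surveyed area.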
Abstract:
The suitable use of an array antenna at the base station of a wireless communications system can result in improvement in the signal-to-interference ratio (SIR). In general, the SIR is a function of the direction of arrival of the desired signal and depends on the configuration of the array, the number of elements, and their spacing. In this paper, we consider a uniform linear array antenna and study the effect of varying the number of its elements and inter-element spacing on the SIR performance. © 2002 Wiley Periodicals, Inc.
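The dependence of SIR on element count and spacing can be illustrated with a short sketch. This is a minimal model, not the paper's analysis: a conventional delay-and-sum beamformer on a uniform linear array, a single equal-power interferer, angles measured from broadside, and all function names illustrative.

```python
import cmath
import math

def steering_vector(n_elements, spacing_wl, theta_rad):
    """Steering vector of a uniform linear array; element spacing is given
    in wavelengths, theta is measured from broadside."""
    psi = 2 * math.pi * spacing_wl * math.sin(theta_rad)
    return [cmath.exp(1j * n * psi) for n in range(n_elements)]

def sir_db(n_elements, spacing_wl, theta_desired, theta_interferer):
    """SIR (dB) of a delay-and-sum beamformer steered at the desired user,
    for one interferer of equal transmit power."""
    w = steering_vector(n_elements, spacing_wl, theta_desired)
    a_d = steering_vector(n_elements, spacing_wl, theta_desired)
    a_i = steering_vector(n_elements, spacing_wl, theta_interferer)
    gain_d = abs(sum(wc.conjugate() * ac for wc, ac in zip(w, a_d))) ** 2
    gain_i = abs(sum(wc.conjugate() * ac for wc, ac in zip(w, a_i))) ** 2
    return 10 * math.log10(gain_d / gain_i)
```

With half-wavelength spacing the desired-signal gain is N², so sweeping `n_elements` and `spacing_wl` reproduces the kind of SIR-versus-geometry study the abstract describes; an interferer falling on a pattern null is suppressed almost completely.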
Abstract:
The suitable use of array antennas in cellular systems results in improvement in the signal-to-interference ratio (SIR). This property is the basis for introducing smart or adaptive antenna systems. In general, the SIR depends on the array configuration and is a function of the directions of the desired user and interferers. Here, the SIR performance for linear and circular arrays is analysed and compared.
Abstract:
A balanced sampling plan excluding contiguous units (or BSEC for short) was first introduced by Hedayat, Rao and Stufken in 1988. These designs can be used for survey sampling when the units are arranged in one-dimensional ordering and the contiguous units in this ordering provide similar information. In this paper, we generalize the concept of a BSEC to the two-dimensional situation and give constructions of two-dimensional BSECs with block size 3. The existence problem is completely solved in the case where λ = 1.
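The defining conditions of a BSEC are easy to state programmatically: contiguous pairs of units never co-occur in a block, and every non-contiguous pair co-occurs in exactly λ blocks. A minimal checker for the one-dimensional case, assuming units 0..n−1 in a linear ordering (the paper's two-dimensional generalization and its block-size-3 constructions are not reproduced here):

```python
from itertools import combinations

def is_bsec(n, blocks, lam):
    """Check whether `blocks` form a balanced sampling plan excluding
    contiguous units on units 0..n-1 in a linear ordering: adjacent pairs
    co-occur in no block; every other pair co-occurs in exactly `lam`."""
    counts = {pair: 0 for pair in combinations(range(n), 2)}
    for block in blocks:
        for pair in combinations(sorted(block), 2):
            counts[pair] += 1
    for (i, j), count in counts.items():
        contiguous = (j - i == 1)
        if contiguous and count != 0:
            return False
        if not contiguous and count != lam:
            return False
    return True
```

The checker is block-size-agnostic; e.g. with n = 5 and λ = 1, the six blocks of size 2 formed by all non-adjacent pairs pass, and adding any adjacent pair such as (0, 1) fails.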
Abstract:
A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving the contact of circular bodies. Copyright © 2002 John Wiley & Sons, Ltd.
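The exact-reproduction property behind the quadratic-basis result can be sketched with a small stand-alone moving least-squares fit (this is illustrative, not the paper's contact code): the level-set function of a circle, x² + y² − R², lies in the span of the quadratic basis, so a weighted least-squares fit recovers it exactly at any evaluation point.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mls_fit(points, values, x, y, radius=1.5):
    """Moving least-squares value at (x, y) with a quadratic basis and a
    Gaussian weight. Any function in the basis span (e.g. the circle
    level-set x^2 + y^2 - R^2) is reproduced exactly."""
    basis = lambda px, py: [1.0, px, py, px * px, px * py, py * py]
    A = [[0.0] * 6 for _ in range(6)]
    b = [0.0] * 6
    for (px, py), v in zip(points, values):
        w = math.exp(-((px - x) ** 2 + (py - y) ** 2) / radius ** 2)
        p = basis(px, py)
        for i in range(6):
            b[i] += w * p[i] * v
            for j in range(6):
                A[i][j] += w * p[i] * p[j]
    coeffs = solve(A, b)
    return sum(c * t for c, t in zip(coeffs, basis(x, y)))
```

Sampling x² + y² − 1 on a coarse grid of nodes and evaluating anywhere returns the exact level-set value, which is the mechanism behind the exact reproduction of circles and spheres reported above; with a linear basis the same fit would only approximate it.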
Abstract:
The choice of genotyping families vs unrelated individuals is a critical factor in any large-scale linkage disequilibrium (LD) study. The use of unrelated individuals for such studies is promising, but in contrast to family designs, unrelated samples do not facilitate detection of genotyping errors, which have been shown to be of great importance for LD and linkage studies and may be even more important in genotyping collaborations across laboratories. Here we employ some of the most commonly used analysis methods to examine the relative accuracy of haplotype estimation using families vs unrelated individuals in the presence of genotyping error. The results suggest that even slight amounts of genotyping error can significantly decrease the accuracy of haplotype frequency estimation and reconstruction, and that the ability to detect such errors in large families is essential when the number/complexity of haplotypes is high (low LD/common alleles). In contrast, in situations of low haplotype complexity (high LD and/or many rare alleles) unrelated individuals offer such a high degree of accuracy that there is little reason for less efficient family designs. Moreover, parent-child trios, which comprise the most popular family design and the most efficient in terms of the number of founder chromosomes per genotype but which contain little information for error detection, offer little or no gain over unrelated samples in nearly all cases, and thus do not seem to be a useful sampling compromise between unrelated individuals and large families. The implications of these results are discussed in the context of large-scale LD mapping projects such as the proposed genome-wide haplotype map.
Abstract:
Libraries of cyclic peptides are being synthesized using combinatorial chemistry for high throughput screening in the drug discovery process. This paper describes the min_syn_steps.cpp program (available at http://www.imb.uq.edu.au/groups/smythe/tran), which, after inputting a list of cyclic peptides to be synthesized, removes cyclically redundant sequences and calculates synthetic strategies which minimize the synthetic steps as well as the reagent requirements. The synthetic steps and reagent requirements could be minimized by finding common subsets within the sequences for block synthesis. Since a brute-force search for optimum synthetic strategies is impractically large, a subset-orientated approach is utilized here to limit the size of the search. © 2002 Elsevier Science Ltd. All rights reserved.
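The cyclic-redundancy step can be sketched directly: two cyclic peptides are duplicates when one is a rotation of the other, so mapping each sequence to its lexicographically least rotation gives a canonical key. A minimal sketch with illustrative names (it ignores any reversal symmetry and does not attempt the block-synthesis optimization itself):

```python
def canonical_rotation(seq):
    """Lexicographically least rotation of a cyclic sequence, used as a
    canonical key to detect cyclic duplicates."""
    n = len(seq)
    doubled = seq + seq
    return min(doubled[i:i + n] for i in range(n))

def dedupe_cyclic(peptides):
    """Remove cyclically redundant sequences, keeping first occurrences."""
    seen, unique = set(), []
    for p in peptides:
        key = canonical_rotation(p)
        if key not in seen:
            seen.add(key)
            unique.append(p)
    return unique
```

For example, "AGV", "GVA" and "VAG" all canonicalize to "AGV" and collapse to one entry, while "AVG" is a genuinely different cyclic peptide and is kept.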
Abstract:
Distance sampling using line transects has not been previously used or tested for estimating koala abundance. In July 2001, a pilot survey was conducted to compare the use of line transects with strip transects for estimating koala abundance. Both methods provided a similar estimate of density. On the basis of the results of the pilot survey, the distribution and abundance of koalas in the Pine Rivers Shire, south-east Queensland, was determined using line-transect sampling. In total, 134 lines (length 64 km) were used to sample bushland areas. Eighty-two independent koalas were sighted. Analysis of the frequency distribution of sighting distances using the software program DISTANCE enabled a global detection function to be estimated for survey sites in bushland areas across the Shire. Abundance in urban parts of the Shire was estimated from densities obtained from total counts at eight urban sites that ranged from 26 to 51 ha in size. Koala abundance in the Pine Rivers Shire was estimated at 4584 (95% confidence interval, 4040-5247). Line-transect sampling is a useful method for estimating koala abundance provided experienced koala observers are used when conducting surveys.
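The density calculation behind such a survey can be sketched under the simplest detection model. This is not the model fitted by the DISTANCE program, which supports several detection functions and truncation; it is the textbook half-normal line-transect estimator, where density is D = n·f(0)/(2L) and f(0) is the perpendicular-distance probability density at zero.

```python
import math

def halfnormal_density_estimate(distances, total_line_length):
    """Line-transect density estimate with a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)). The sigma^2 MLE is the mean squared
    perpendicular distance; density is D = n * f(0) / (2L). Distances and
    line length must share one unit; density is per squared unit."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n
    f0 = 1.0 / math.sqrt(sigma2 * math.pi / 2.0)  # distance pdf at zero
    return n * f0 / (2.0 * total_line_length)
```

The estimator behaves as expected: halving the total line length for the same sightings doubles the density estimate, and detections concentrated closer to the line (a narrower effective strip) also raise it.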
Abstract:
The Lanczos algorithm is appreciated in many situations due to its speed and economy of storage. However, the advantage that the Lanczos basis vectors need not be kept is lost when the algorithm is used to compute the action of a matrix function on a vector. Either the basis vectors must be kept, or the Lanczos process must be applied twice. In this study we describe an augmented Lanczos algorithm to compute a dot product involving a function of a large sparse symmetric matrix, without keeping the basis vectors.
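The idea of avoiding stored basis vectors can be sketched for the concrete case f(A) = A⁻¹, where f(T)e₁ reduces to a tridiagonal solve. This is an illustrative reduced version, not the paper's augmented algorithm: the scalars cⱼ = uᵀqⱼ are accumulated as the iteration runs, so uᵀA⁻¹v ≈ β₀·cᵀT⁻¹e₁ is available without keeping any basis vector beyond the usual three-term pair.

```python
import math

def matvec(A, x):
    """Dense matrix-vector product (stand-in for a sparse matvec)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def lanczos_bilinear_inverse(A, u, v, k):
    """Approximate u^T A^{-1} v for symmetric positive-definite A with k
    Lanczos steps, accumulating c_j = u^T q_j on the fly."""
    beta0 = math.sqrt(dot(v, v))
    q_prev = [0.0] * len(v)
    q = [vi / beta0 for vi in v]
    alphas, betas, c = [], [], []
    beta = 0.0
    for _ in range(k):
        c.append(dot(u, q))
        w = matvec(A, q)
        alpha = dot(w, q)
        w = [wi - alpha * qi - beta * pi for wi, qi, pi in zip(w, q, q_prev)]
        alphas.append(alpha)
        beta = math.sqrt(dot(w, w))
        if beta < 1e-14:          # invariant subspace found: T is exact
            break
        betas.append(beta)
        q_prev, q = q, [wi / beta for wi in w]
    # Solve T y = e1 (Thomas algorithm on the symmetric tridiagonal T),
    # then u^T A^{-1} v ~= beta0 * c^T y.
    m = len(alphas)
    off = betas[:m - 1]
    diag, rhs = alphas[:], [1.0] + [0.0] * (m - 1)
    for i in range(1, m):
        f = off[i - 1] / diag[i - 1]
        diag[i] -= f * off[i - 1]
        rhs[i] -= f * rhs[i - 1]
    y = [0.0] * m
    for i in range(m - 1, -1, -1):
        y[i] = (rhs[i] - (off[i] * y[i + 1] if i < m - 1 else 0.0)) / diag[i]
    return beta0 * sum(ci * yi for ci, yi in zip(c, y))
```

When k spans the full Krylov subspace the result is exact; in practice k is kept far below the matrix dimension, which is where the storage saving matters.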
Abstract:
Despite extensive efforts to confirm a direct association between Chlamydia pneumoniae and atherosclerosis, different laboratories continue to report a large variability in detection rates. In this study, we analyzed multiple sections from atherosclerotic carotid arteries from 10 endarterectomy patients to determine the location of C. pneumoniae DNA and the number of sections of the plaque required for analysis to obtain a 95% confidence of detecting the bacterium. A sensitive nested PCR assay detected C. pneumoniae DNA in all patients at one or more locations within the plaque. On average, 42% (ranging from 5 to 91%) of the sections from any single patient had C. pneumoniae DNA present. A patchy distribution of C. pneumoniae in the atherosclerotic lesions was observed, with no area of the carotid having significantly more C. pneumoniae DNA present. If a single random 30-µm-thick section was tested, there was only a 35.6 to 41.6% (95% confidence interval) chance of detecting C. pneumoniae DNA in a patient with carotid artery disease. A minimum of 15 sections would therefore be required to obtain a 95% chance of detecting all true positives. The low concentration and patchy distribution of C. pneumoniae DNA in atherosclerotic plaque appear to be among the reasons for inconsistency between laboratories in the results reported.
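The sections-needed reasoning can be sketched with the standard independence calculation: the smallest n with 1 − (1 − p)ⁿ ≥ 0.95, where p is the per-section detection probability. This is a simplification of the study's analysis, which must also accommodate the wide between-patient spread in positivity (5 to 91%); with a low per-patient rate the answer lands near the 15 sections quoted above.

```python
import math

def sections_needed(p_single, confidence=0.95):
    """Smallest number of independent sections n such that at least one
    is positive with the given confidence: 1 - (1 - p)^n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_single))
```

For example, at the average per-section rate of about 0.40 only six independent sections would suffice, which is why the patients with sparse, patchy positivity drive the requirement up.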
Abstract:
This article presents Monte Carlo techniques for estimating network reliability. For highly reliable networks, techniques based on graph evolution models provide very good performance. However, they are known to have significant simulation cost. An existing hybrid scheme (based on partitioning the time space) is available to speed up the simulations; however, there are difficulties with optimizing the important parameter associated with this scheme. To overcome these difficulties, a new hybrid scheme (based on partitioning the edge set) is proposed in this article. The proposed scheme shows orders of magnitude improvement of performance over the existing techniques in certain classes of network. It also provides reliability bounds with little overhead.
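The baseline that such variance-reduction schemes improve on is crude Monte Carlo sampling of edge states, which needs enormous sample sizes when the network is highly reliable. A minimal sketch with an illustrative interface (the article's graph-evolution and hybrid estimators are not reproduced here):

```python
import random

def crude_mc_reliability(nodes, edges, p_up, s, t, trials, seed=0):
    """Crude Monte Carlo estimate of s-t reliability: each edge works
    independently with probability p_up; count the fraction of sampled
    edge states in which s can reach t."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up = [e for e in edges if rng.random() < p_up]
        adj = {v: [] for v in nodes}
        for a, b in up:
            adj[a].append(b)
            adj[b].append(a)
        seen, stack = {s}, [s]       # DFS over working edges only
        while stack:
            v = stack.pop()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / trials
```

For two parallel edges with p_up = 0.5 the true reliability is 1 − 0.5² = 0.75, which the estimator approaches; for failure probabilities of, say, 10⁻⁶ per edge, almost every sample is a working network and the estimator's relative error explodes, motivating the graph-evolution approach.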
Abstract:
Frequency deviation is a common problem for power system signal processing. Many power system measurements are carried out at a fixed sampling rate, assuming the system operates at its nominal frequency (50 or 60 Hz). However, the actual frequency may deviate from the nominal value from time to time for various reasons, such as disturbances and subsequent system transients. Measurement of signals based on a fixed sampling rate may introduce errors under such situations. In order to achieve high-precision signal measurement, appropriate algorithms need to be employed to reduce the impact of frequency deviation in the power system data acquisition process. This paper proposes an advanced algorithm to enhance the Fourier transform for power system signal processing. The algorithm is able to effectively correct frequency deviation under a fixed sampling rate. Accurate measurement of power system signals is essential for the secure and reliable operation of power systems. The algorithm is readily applicable to such occasions where signal processing is affected by frequency deviation. Both mathematical proof and numerical simulation are given in this paper to illustrate the robustness and effectiveness of the proposed algorithm. Crown Copyright © 2003 Published by Elsevier Science B.V. All rights reserved.
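One standard phasor-based correction, shown here as a minimal sketch rather than the paper's enhanced algorithm, estimates the actual frequency from the residual phase rotation of the one-cycle DFT fundamental between two windows spaced one nominal cycle apart, with the sampling rate fixed at n samples per nominal cycle.

```python
import cmath
import math

def fundamental_phasor(samples, start, n):
    """One-cycle DFT fundamental phasor from n samples beginning at `start`."""
    return (2.0 / n) * sum(
        samples[start + k] * cmath.exp(-2j * math.pi * k / n) for k in range(n)
    )

def estimate_frequency(samples, n, f_nominal):
    """Estimate the actual frequency from the residual phase advance of the
    fundamental phasor over one nominal cycle: for a tone at frequency f the
    phasor rotates by 2*pi*(f - f_nominal)/f_nominal per nominal cycle."""
    p0 = fundamental_phasor(samples, 0, n)
    p1 = fundamental_phasor(samples, n, n)
    dtheta = cmath.phase(p1 / p0)
    return f_nominal + dtheta * f_nominal / (2 * math.pi)
```

When the signal sits exactly at the nominal frequency the phasor does not rotate and the estimate is exact; under a deviation (e.g. 50.5 Hz sampled at 64 samples per 50 Hz cycle) the residual rotation recovers the actual frequency to within a small leakage-induced error, which a measurement algorithm can then use to correct amplitude and phase readings.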