915 results for "Sampling method"
Abstract:
The topography of a granite surface affects the vertical positioning of a wafer stage in a lithographic tool as the stage moves across the granite, and inaccurate measurement of that topography degrades leveling and focusing performance. In this paper, an in situ method to measure the topography of a granite surface with high accuracy is presented. In this method, a high-order polynomial is set up to express the topography of the granite surface. Two double-frequency laser interferometers are used to measure the tilts of the wafer stage in the X- and Y-directions. From the sampled tilt information, the coefficients of the high-order polynomial can be obtained by a special algorithm. Experimental results show that the measurement reproducibility of the method is better than 10 nm. (c) 2006 Elsevier GmbH. All rights reserved.
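The abstract does not spell out the fitting algorithm, but since the interferometers measure tilts (the partial derivatives of the surface) at each stage position, the polynomial coefficients can plausibly be recovered by ordinary least squares. A minimal sketch, with hypothetical names and a generic solver standing in for the paper's "special algorithm":

```python
import numpy as np

def fit_topography(x, y, tilt_x, tilt_y, order=4):
    """Fit z(x, y) = sum c[i, j] * x**i * y**j (i + j <= order) from
    measured tilts, i.e. the partial derivatives dz/dx and dz/dy.
    The constant term is omitted: tilts carry no information about it.
    (Hypothetical least-squares reconstruction, not the paper's method.)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    terms = [(i, j) for i in range(order + 1)
             for j in range(order + 1 - i) if i + j > 0]
    # each monomial contributes its analytic derivative to the design matrix
    rows_x = [i * x**(i - 1) * y**j if i > 0 else np.zeros_like(x)
              for (i, j) in terms]
    rows_y = [j * x**i * y**(j - 1) if j > 0 else np.zeros_like(x)
              for (i, j) in terms]
    A = np.vstack([np.column_stack(rows_x), np.column_stack(rows_y)])
    b = np.concatenate([tilt_x, tilt_y])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(terms, coeffs))
```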
Abstract:
This paper presents a method to generate new melodies by conserving the semiotic structure of a template piece. A pattern discovery algorithm is applied to the template to extract significant segments: those that are repeated and those that are transposed within the piece. Two strategies are combined to describe the semiotic coherence structure of the template: inter-segment coherence and intra-segment coherence. Once the structure is described, it is used as a template for new musical content, which is generated from a statistical model built on a corpus of bertso melodies and iteratively improved with a stochastic optimization method. Results show that the method effectively describes the coherence structure of a piece by discovering repetition and transposition relations between segments, and by representing the relations among notes within the segments. For bertso generation, the method correctly conserves all intra- and inter-segment coherence of the template, and the optimization step produces coherent melodies.
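As a toy illustration of the final generate-and-improve step (the corpus, the semiotic constraints, and the authors' scoring function are not reproduced; the n-gram model and the hill-climbing move are assumptions made here):

```python
import random

def ngram_score(melody, model, n=3):
    """Sum of log-probabilities of each n-gram under a simple statistical
    model (dict mapping n-note tuples to log-probability)."""
    return sum(model.get(tuple(melody[i:i + n]), -10.0)
               for i in range(len(melody) - n + 1))

def improve(melody, model, pitches, iters=5000):
    """Stochastic hill-climbing: mutate one note at a time and keep the
    change when the model scores the result higher. (Toy stand-in for
    the paper's stochastic optimization step.)"""
    best = list(melody)
    best_score = ngram_score(best, model)
    for _ in range(iters):
        cand = list(best)
        cand[random.randrange(len(cand))] = random.choice(pitches)
        s = ngram_score(cand, model)
        if s > best_score:
            best, best_score = cand, s
    return best
```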
Abstract:
We describe a method to explore the configurational phase space of chemical systems. It is based on the nested sampling algorithm recently proposed by Skilling (AIP Conf. Proc. 2004, 395; J. Bayesian Anal. 2006, 1, 833) and allows us to explore the entire potential energy surface (PES) efficiently in an unbiased way. The algorithm has two parameters which directly control the trade-off between the resolution with which the space is explored and the computational cost. We demonstrate the use of nested sampling on Lennard-Jones (LJ) clusters. Nested sampling provides a straightforward approximation for the partition function; thus, evaluating expectation values of arbitrary smooth operators at arbitrary temperatures becomes a simple postprocessing step. Access to absolute free energies allows us to determine the temperature-density phase diagram for LJ cluster stability. Even for relatively small clusters, the efficiency gain over parallel tempering in calculating the heat capacity is an order of magnitude or more. Furthermore, by analyzing the topology of the resulting samples, we are able to visualize the PES in a new and illuminating way. We identify a discretely valued order parameter with basins and suprabasins of the PES, allowing a straightforward and unambiguous definition of macroscopic states of an atomistic system and the evaluation of the associated free energies.
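A minimal sketch of the nested sampling loop for a potential energy surface. The live-set size K and the per-iteration sampling effort control the resolution-versus-cost trade-off the abstract refers to; `energy` and `sample_prior` are placeholders for the system at hand, and the constrained MCMC walk used in practice is replaced here by naive rejection:

```python
import numpy as np

def nested_sampling(energy, sample_prior, K=100, iters=2000, rng=None):
    """Bare-bones nested sampling of a potential energy surface.

    energy: callable mapping a configuration to its potential energy.
    sample_prior: draws one configuration from the uniform prior.
    Returns the recorded energy levels and their log phase-space
    weights, from which Z(beta) ~ sum_i exp(log_w[i] - beta * E[i])
    can be estimated as a post-processing step.
    """
    rng = rng or np.random.default_rng()
    live = [sample_prior(rng) for _ in range(K)]
    E = np.array([energy(c) for c in live])
    levels, log_w = [], []
    for i in range(iters):
        worst = np.argmax(E)              # highest-energy live point
        levels.append(E[worst])
        # prior volume shrinks by ~K/(K+1) per iteration
        log_w.append(i * np.log(K / (K + 1.0)) - np.log(K + 1.0))
        # replace the discarded point with one below the energy ceiling
        # (rejection here; real codes use constrained MCMC walks)
        while True:
            cand = sample_prior(rng)
            e = energy(cand)
            if e < E[worst]:
                live[worst], E[worst] = cand, e
                break
    return np.array(levels), np.array(log_w)
```

Evaluating expectation values at arbitrary temperatures is then simple post-processing over the returned (levels, log_w) pairs, as the abstract notes.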
Abstract:
Interest in the development of offshore renewable energy facilities has led to a need for high-quality, statistically robust information on marine wildlife distributions. A practical approach is described to estimate the amount of sampling effort required to have sufficient statistical power to identify species-specific "hotspots" and "coldspots" of marine bird abundance and occurrence in an offshore environment divided into discrete spatial units (e.g., lease blocks), where "hotspots" and "coldspots" are defined relative to a reference (e.g., regional) mean abundance and/or occurrence probability for each species of interest. For example, a location with average abundance or occurrence three times larger than the mean (3x effect size) could be defined as a "hotspot," and a location three times smaller than the mean (1/3x effect size) as a "coldspot." The choice of the effect size used to define hotspots and coldspots will generally depend on a combination of ecological and regulatory considerations. A method is also developed for testing the statistical significance of possible hotspots and coldspots. Both methods are illustrated with historical seabird survey data from the USGS Avian Compendium Database.
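The required effort can be approximated by simulation. A sketch under stated assumptions (negative binomial counts for overdispersed seabird data and a one-sided t-test are choices made here, not necessarily the paper's):

```python
import numpy as np
from scipy import stats

def hotspot_power(n_visits, regional_mean, effect=3.0, dispersion=1.5,
                  alpha=0.05, n_sim=10000, rng=None):
    """Monte Carlo power to flag a lease block as an `effect`-times hotspot.

    Counts per visit are simulated as negative binomial (a common
    assumption for overdispersed count data; hypothetical here), and the
    block mean is compared against the known regional mean with a
    one-sample, one-sided t-test.
    """
    rng = rng or np.random.default_rng()
    mu = effect * regional_mean
    p = dispersion / (dispersion + mu)   # NB success-probability parameter
    counts = rng.negative_binomial(dispersion, p, size=(n_sim, n_visits))
    t = ((counts.mean(axis=1) - regional_mean)
         / (counts.std(axis=1, ddof=1) / np.sqrt(n_visits)))
    crit = stats.t.ppf(1 - alpha, df=n_visits - 1)
    return float(np.mean(t > crit))

# e.g. hotspot_power(n_visits=20, regional_mean=2.0) -> approximate power
```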
Abstract:
Experiments were conducted to study the significance of the difference between samples taken from the surface and the interior of a frozen shrimp block, and to determine the sample size necessary to represent the whole block, with respect to bacterial count determination. The results showed that the surface and interior samples did not differ significantly at the 5% level of significance, and that the minimum quantity representative of the block was 21-26 g for a block weighing about 1300 g. Bacterial counts were taken using the standard plate count method.
Abstract:
It is extremely difficult to explore mRNA folding structure by biological experiments. In this report, we use stochastic sampling and folding simulation to test the existence of stable secondary structural units of mRNA, look for the folding units, and explore the probabilistic stability of these units. Using this method, we simulated all possible locally optimal secondary structures of a single-stranded mRNA within a certain range and searched for the common parts of the secondary structures. The consensus secondary structure units (CSSUs) extracted in this way are mainly hairpins, with a few single strands. These CSSUs suggest that the mRNA folding units could be relatively stable and could perform specific biological functions. The significance of these observations for the mRNA folding problem in general is also discussed. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
The nervous system implements a networked control system in which the plants take the form of limbs, the controller is the brain, and neurons form the communication channels. Unlike standard networked control architectures, there is no periodic sampling, and the fundamental units of communication contain little numerical information. This paper describes a novel communication channel, modeled after spiking neurons, in which the transmitter integrates an input signal and sends out a spike when the integral reaches a threshold value. The receiver then filters the sequence of spikes to approximately reconstruct the input signal. It is shown that, for appropriate choices of channel parameters, stable feedback control over these spiking channels is possible; furthermore, good tracking performance can be achieved. The data rate of the channel increases linearly with the size of the inputs, so when the channel is placed in a feedback loop, small loop gains imply a low data rate. ©2010 IEEE.
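A minimal sketch of such a channel, with hypothetical parameter values: the transmitter integrates the input and emits a signed spike each time the integral crosses a threshold, and the receiver low-pass filters the spike train to reconstruct the signal.

```python
import numpy as np

def spiking_channel(u, threshold=1.0, dt=1e-3, tau=0.05):
    """Integrate-and-fire encoder plus first-order filter decoder.

    The transmitter integrates the input u (sampled every dt seconds)
    and emits a signed spike whenever the integral crosses +/-threshold;
    the receiver low-pass filters the spike train (time constant tau)
    to approximately reconstruct u. Parameter values are hypothetical.
    """
    u = np.asarray(u, dtype=float)
    integral, spikes = 0.0, np.zeros_like(u)
    for k, uk in enumerate(u):
        integral += uk * dt
        if abs(integral) >= threshold:
            spikes[k] = np.sign(integral) * threshold / dt  # impulse area = threshold
            integral -= np.sign(integral) * threshold
    y = np.zeros_like(u)  # receiver: first-order low-pass of the spike train
    for k in range(1, len(u)):
        y[k] = y[k - 1] + (dt / tau) * (spikes[k] - y[k - 1])
    return y
```

Note the spike rate is roughly |u|/threshold, so the channel's data rate grows linearly with the input magnitude, consistent with the small-gain/low-rate observation in the abstract.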
Abstract:
In this paper, we propose a low-complexity and reliable wideband spectrum sensing technique that operates at sub-Nyquist sampling rates. Unlike the majority of other sub-Nyquist spectrum sensing algorithms, which rely on the Compressive Sensing (CS) methodology, the introduced method does not entail solving an optimisation problem. It is characterised by simplicity and low computational complexity without compromising system performance, and yet delivers substantial reductions in the operational sampling rates. Reliability guidelines for the devised non-compressive sensing approach are provided, and simulations are presented to illustrate its superior performance. © 2013 IEEE.
Abstract:
The hybrid opto-digital joint transform correlator (HODJTC) is effective for image motion measurement, but it differs from the traditional joint transform correlator in that it performs only one optical transform: the joint power spectrum is fed directly into a digital processing unit to compute the image shift. The local cross-correlation image can be obtained directly by applying a local Fourier transform operator. After the pixel-level location of the cross-correlation peak is found, an up-sampling technique is introduced to relocate the peak with even higher accuracy. With a signal-to-noise ratio >= 20 dB, an up-sampling factor k >= 10, and a maximum image shift <= 60 pixels, the root-mean-square error of the motion measurement can be kept below 0.05 pixels.
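Sub-pixel peak relocation by up-sampling can be sketched as follows (a generic zero-padded-FFT refinement; the paper's local Fourier transform operator is not reproduced). With up-sampling factor k, the peak resolves to roughly 1/k pixel:

```python
import numpy as np

def subpixel_shift(a, b, k=10):
    """Estimate the shift between equally sized images a and b to ~1/k px.

    The cross-power spectrum is zero-padded by a factor k before the
    inverse FFT, so the correlation peak lands on a k-times finer grid.
    Global up-sampling is a simplification: the paper up-samples only a
    local region around the coarse pixel-level peak. Assumes even image
    dimensions.
    """
    H, W = a.shape
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    pad = np.zeros((k * H, k * W), dtype=complex)
    h, w = H // 2, W // 2
    # copy the four spectral quadrants into the corners of the padded array
    pad[:h, :w], pad[:h, -w:] = F[:h, :w], F[:h, -w:]
    pad[-h:, :w], pad[-h:, -w:] = F[-h:, :w], F[-h:, -w:]
    corr = np.abs(np.fft.ifft2(pad))
    qy, qx = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array([qy, qx], dtype=float) / k
    # shifts beyond half the image wrap around to negative values
    size = np.array([H, W], dtype=float)
    return np.where(shift > size / 2, shift - size, shift)
```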
Abstract:
The density and distribution of spatial samples heavily affect the precision and reliability of estimated population attributes. An optimization method based on Mean of Surface with Nonhomogeneity (MSN) theory has been developed into a computer package to improve the accuracy of global estimates of spatial properties from a spatial sample distributed over a heterogeneous surface; conversely, for a given estimation variance, the program can export both the optimal number of sample units needed and their appropriate distribution within a specified research area. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Recent work in sensor databases has focused extensively on distributed query problems, notably distributed computation of aggregates. Existing methods for computing aggregates broadcast queries to all sensors and use in-network aggregation of responses to minimize messaging costs. In this work, we focus on uniform random sampling across nodes, which can serve both as an alternative building block for aggregation and as an integral component of many other useful randomized algorithms. Prior to our work, the best existing proposals for uniform random sampling of sensors involve contacting all nodes in the network. We propose a practical method which is only approximately uniform, but contacts a number of sensors proportional to the diameter of the network instead of its size. The approximation achieved is tunably close to exact uniform sampling, and only relies on well-known existing primitives, namely geographic routing, distributed computation of Voronoi regions and von Neumann's rejection method. Ultimately, our sampling algorithm has the same worst-case asymptotic cost as routing a point-to-point message, and thus it is asymptotically optimal among request/reply-based sampling methods. We provide experimental results demonstrating the effectiveness of our algorithm on both synthetic and real sensor topologies.
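The core of the method as described — drop a uniform random point in the deployment area, route to the sensor whose Voronoi cell contains it, then apply von Neumann rejection to undo the bias toward large cells — can be sketched as follows, with `route_to_nearest`, `cell_area`, and `min_area` as hypothetical stand-ins for the geographic routing and distributed Voronoi primitives:

```python
import random

def sample_sensor(area_bounds, route_to_nearest, cell_area, min_area,
                  rng=random):
    """Approximately uniform sensor sampling via spatial rejection.

    A uniform random point falls in a sensor's Voronoi cell with
    probability proportional to the cell's area; accepting the hit
    sensor with probability min_area / cell_area(s) removes that bias
    (von Neumann rejection), since area(s) * min_area / area(s) is the
    same for every sensor. Helper names are hypothetical stand-ins for
    the primitives the paper builds on.
    """
    (xmin, xmax), (ymin, ymax) = area_bounds
    while True:
        p = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        s = route_to_nearest(p)          # one point-to-point routing step
        if rng.random() < min_area / cell_area(s):
            return s                     # accepted: ~uniform over sensors
```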
Abstract:
Data on the abundance and biomass of zooplankton off the northwestern Portuguese coast, separately estimated with a Longhurst-Hardy Plankton Recorder (LHPR) and a Bongo net, were analysed to assess the comparative performance of the samplers. Zooplankton was collected along four transects perpendicular to the coast, deployments alternating between samplers. Total zooplankton biomass measured using the LHPR was significantly higher than that measured using the Bongo net. Apart from Appendicularia and Cladocera, abundances of other taxa (Copepoda, Mysidacea, Euphausiacea, Decapoda larvae, Amphipoda, Siphonophora, Hydromedusae, Chaetognatha and fish eggs) were also consistently higher in the LHPR. Some of these differences were probably due to avoidance of the Bongo net by the zooplankton. This was supported by a comparative analysis of the prosome length of the copepod Calanus helgolandicus sampled by the two nets, which showed that Calanus in the LHPR samples were on average significantly larger, particularly in day samples. A ratio estimator was used to produce a factor for converting Bongo net biomass and abundance estimates to equate them with those taken with the LHPR. This demonstrates how results from complementary zooplankton sampling strategies can be made directly comparable.
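The conversion factor is a classical ratio estimator. A minimal sketch (variable names and the pairing of deployments are assumptions):

```python
import numpy as np

def ratio_estimator(lhpr, bongo):
    """Classical ratio estimator R = sum(y) / sum(x), with its approximate
    standard error, for scaling Bongo-net estimates (x) up to
    LHPR-equivalent values (y). Sketch only; the paper's exact estimator
    variant and deployment pairing are not reproduced here.
    """
    y, x = np.asarray(lhpr, float), np.asarray(bongo, float)
    n, xbar = len(x), x.mean()
    R = y.sum() / x.sum()
    resid = y - R * x                       # deviations from the fitted ratio
    se = np.sqrt(resid.var(ddof=1) / n) / xbar
    return R, se

# usage: R, se = ratio_estimator(lhpr_biomass, bongo_biomass)
#        converted = R * new_bongo_estimate
```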
Abstract:
Closing feedback loops over an IEEE 802.11b ad hoc wireless communication network incurs many challenges: sensitivity to varying channel conditions and low physical transmission rates tend to limit the bandwidth of the communication channel. Given that bandwidth usage and control performance are linked, a method of adapting the sampling interval based on an a priori, static sampling policy has been proposed and, more significantly, shown to assure stability in the mean-square sense using discrete-time Markov jump linear system theory. Practical issues, including current limitations of the 802.11b protocol, the sampling policy and stability, are highlighted. Simulation results on a cart-mounted inverted pendulum show that closed-loop stability can be improved using sample-rate adaptation and that the control design criteria can be met in the presence of channel errors and severe channel contention.
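The mean-square stability test behind such a policy can be sketched numerically, using the standard Markov jump linear system criterion (spectral radius of the second-moment transition operator below one). The mode matrices here are hypothetical, standing in for the discretised closed loop at each sampling interval or channel state:

```python
import numpy as np
from scipy.linalg import block_diag

def mjls_mean_square_stable(modes, P):
    """Mean-square stability of x[k+1] = A[theta_k] x[k], where theta_k
    is a Markov chain with row-stochastic transition matrix P.

    Standard criterion (see Costa, Fragoso & Marques): the system is MSS
    iff the spectral radius of (P^T kron I) @ blockdiag(A_i kron A_i) is
    strictly below one. Not the paper's specific plant; a generic check.
    """
    n2 = modes[0].shape[0] ** 2
    L = np.kron(P.T, np.eye(n2)) @ block_diag(*(np.kron(A, A) for A in modes))
    return bool(np.max(np.abs(np.linalg.eigvals(L))) < 1.0)

# usage: mjls_mean_square_stable([A_fast, A_slow], P) for two candidate
# sampling intervals with channel-state transition matrix P
```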
Abstract:
The comet assay is a technique used to quantify DNA damage and repair at the cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity, which can then be electrophoresed. Damaged DNA can enter the agarose and migrate, while undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA relative to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software, and current acquisition methods are assumed to select comets both objectively and at random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers objective and unbiased comet selection. To achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable because it provides an impartial and reproducible method of comet analysis that can be applied manually or in automated systems. By making use of an unbiased sampling frame and microscope verniers, we are able to increase the precision of estimates of DNA damage. Results from a multiple-user pooled-variation experiment showed that the SRS technique attained lower variability than the traditional approach. Analysis of a single-user repetition experiment showed greater individual variances without being detrimental to overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide and better user reproducibility.
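The SRS scheme itself is simple to state: one random offset inside the first sampling interval fixes a regular grid of field positions across the slide's scoring area. A minimal sketch (vernier-unit coordinates are a hypothetical convention):

```python
import numpy as np

def srs_fields(x_range, y_range, step_x, step_y, rng=None):
    """Systematic random sampling (SRS) of microscope field positions.

    A single uniform random offset inside the first interval fixes a
    regular grid over the scoring area, giving every field the same
    inclusion probability while keeping coverage even -- the
    stereology-style scheme the paper adopts. Coordinates are in
    microscope vernier units (hypothetical convention).
    """
    rng = rng or np.random.default_rng()
    x0 = x_range[0] + rng.uniform(0, step_x)   # random start, x axis
    y0 = y_range[0] + rng.uniform(0, step_y)   # random start, y axis
    xs = np.arange(x0, x_range[1], step_x)
    ys = np.arange(y0, y_range[1], step_y)
    return [(x, y) for x in xs for y in ys]    # score all comets per field
```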
Developing a simple, rapid method for identifying and monitoring jellyfish aggregations from the air
Abstract:
Within the marine environment, aerial surveys have historically centred on apex predators, such as pinnipeds, cetaceans and sea birds. However, it is becoming increasingly apparent that the utility of this technique may also extend to subsurface species such as pre-spawning fish stocks and aggregations of jellyfish that occur close to the surface. In light of this, we tested the utility of aerial surveys to provide baseline data for 3 poorly understood scyphozoan jellyfish found throughout British and Irish waters: Rhizostoma octopus, Cyanea capillata and Chrysaora hysoscella. Our principal objectives were to develop a simple sampling protocol to identify and quantify surface aggregations, assess their consistency in space and time, and consider the overall applicability of this technique to the study of gelatinous zooplankton. This approach provided a general understanding of range and relative abundance for each target species, with greatest suitability to the study of R. octopus. For this species it was possible to identify and monitor extensive, temporally consistent and previously undocumented aggregations throughout the Irish Sea, an area spanning thousands of square kilometres. This finding has pronounced implications for ecologists and fisheries managers alike and, moreover, draws attention to the broad utility of aerial surveys for the study of gelatinous aggregations beyond the range of conventional ship-based techniques.