988 results for Rejection-sampling Algorithm
MODIFIED DIRECT TWOS-COMPLEMENT PARALLEL ARRAY MULTIPLICATION ALGORITHM FOR COMPLEX MATRIX OPERATION
Abstract:
A direct twos-complement parallel array multiplication algorithm is introduced and modified for digital optical numerical computation. The modified version overcomes the problems encountered in the conventional optical twos-complement algorithm. In the array, all the summands are generated in parallel, and the summands having the same weights are added simultaneously without carries, resulting in a product expressed in a mixed twos-complement system. In a two-stage array, complex multiplication is possible using four real subarrays. Furthermore, with a three-stage array architecture, complex matrix operations are accomplished straightforwardly. In the experiment, parallel two-stage array complex multiplication with liquid-crystal panels is demonstrated.
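As a rough software illustration of the carry-free summand idea (the optical mapping, the staging into subarrays, and all names below are assumptions, not the paper's implementation): partial products of a two's-complement multiplication are generated independently, and summands of equal weight are accumulated without carry propagation, giving mixed digits that still sum to the exact product.

def tc_bits(x, n):
    """n-bit two's-complement bits of x, least significant first."""
    return [(x >> k) & 1 for k in range(n)]

def array_multiply_mixed(a, b, n=8):
    """Carry-free summand accumulation sketch.

    All summands a_i*b_j are formed independently (in parallel, in the
    optical array); summands of equal weight 2**(i+j) are added without
    carry propagation, so a column digit may lie outside {0, 1} -- a
    'mixed' two's-complement result.  Sign-bit summands carry negative weight.
    """
    abits, bbits = tc_bits(a, n), tc_bits(b, n)
    cols = [0] * (2 * n)
    for i, ai in enumerate(abits):
        for j, bj in enumerate(bbits):
            sign = (-1 if i == n - 1 else 1) * (-1 if j == n - 1 else 1)
            cols[i + j] += sign * ai * bj
    return cols

def mixed_to_int(cols):
    """Collapse the mixed digits back to an ordinary integer."""
    return sum(d * 2 ** k for k, d in enumerate(cols))

assert mixed_to_int(array_multiply_mixed(-57, 93)) == -57 * 93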
Abstract:
A visual pattern recognition network and its training algorithm are proposed. The network is constructed of a one-layer morphology network and a two-layer modified Hamming net. This visual network can implement invariant pattern recognition with respect to image translation and size projection. After supervised learning takes place, the visual network extracts image features and classifies patterns much the same as living beings do. Moreover, we set up its optoelectronic architecture for real-time pattern recognition. (C) 1996 Optical Society of America
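As a minimal sketch of the Hamming-net stage alone (the morphology layer and the translation/size invariance are not modeled; patterns and sizes are illustrative), matching scores against stored exemplars followed by a winner-take-all pick classify a binary input:

import numpy as np

def hamming_net_classify(x, exemplars):
    """Index of the stored exemplar closest to x in Hamming distance.

    The matching scores play the role of the first Hamming-net layer and
    the argmax that of the winner-take-all (MAXNET) layer.
    """
    scores = (np.asarray(exemplars) == np.asarray(x)).sum(axis=1)
    return int(np.argmax(scores))

exemplars = [[1, 0, 1, 0, 1, 0],
             [1, 1, 1, 0, 0, 0]]
print(hamming_net_classify([1, 0, 1, 0, 1, 1], exemplars))   # -> 0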
Abstract:
A novel, to our knowledge, two-step digit-set-restricted modified signed-digit (MSD) addition-subtraction algorithm is proposed. With the introduction of the reference digits, the operand words are mapped into an intermediate carry word with all digits restricted to the set {1̄, 0} and an intermediate sum word with all digits restricted to the set {0, 1}, which can be summed to form the final result without carry generation. The operation can be performed in parallel by use of binary logic. An optical system that utilizes an electron-trapping device is suggested for accomplishing the required binary logic operations. By programming of the illumination of data arrays, any complex logic operations of multiple variables can be realized without additional temporal latency of the intermediate results. This technique has a high space-bandwidth product and signal-to-noise ratio. The main structure can be stacked to construct a compact optoelectronic MSD adder-subtracter. (C) 1999 Optical Society of America.
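The paper's particular reference-digit mapping onto the sets {1̄, 0} and {0, 1} is not reproduced here; as a hedged sketch of the same two-step, carry-free idea, the classic MSD addition below uses the lower-order digit pair as the reference, so that the intermediate carry and sum words can be combined digit by digit with no further carries.

def msd_add(a, b):
    """Two-step carry-free addition of modified signed-digit words.

    a, b: equal-length lists of digits in {-1, 0, 1}, least significant
    first.  Step 1 forms an intermediate carry word t and sum word w,
    using the lower-order digit pair as the reference; step 2 adds them
    digit by digit, and no new carries can arise.
    """
    n = len(a)
    t = [0] * (n + 1)          # intermediate carry word
    w = [0] * (n + 1)          # intermediate sum word
    for i in range(n):
        p = a[i] + b[i]                              # current pair sum, in -2..2
        q = a[i - 1] + b[i - 1] if i > 0 else 0      # reference (lower) pair
        if p == 2:
            t[i + 1], w[i] = 1, 0
        elif p == 1:
            t[i + 1], w[i] = (1, -1) if q >= 0 else (0, 1)
        elif p == -1:
            t[i + 1], w[i] = (0, -1) if q >= 0 else (-1, 1)
        elif p == -2:
            t[i + 1], w[i] = -1, 0
    return [t[i] + w[i] for i in range(n + 1)]       # final digits stay in {-1, 0, 1}

def msd_value(d):
    return sum(di * 2 ** i for i, di in enumerate(d))

a, b = [1, -1, 0, 1], [1, 1, -1, 1]                  # two MSD operands (both encode 7)
assert msd_value(msd_add(a, b)) == msd_value(a) + msd_value(b)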
Abstract:
Negabinary is a positional number system with a radix of -2. A complete set of negabinary arithmetic operations is presented, including the basic addition/subtraction logic, the two-step carry-free addition/subtraction algorithm based on negabinary signed-digit (NSD) representation, parallel multiplication, and the fast conversion from NSD to the normal negabinary in the carry-look-ahead mode. All the arithmetic operations can be performed with binary logic. By programming the binary reference bits, addition and subtraction can be realized in parallel with the same binary logic functions. This offers a technique to perform space-variant arithmetic-logic functions with space-invariant instructions. Multiplication can be performed in the tree structure and is simpler than the modified signed-digit (MSD) counterpart. The parallelism of the algorithms is very suitable for optical implementation. Correspondingly, a general-purpose optical logic system using an electron-trapping device is suggested. Various complex logic functions can be performed by programming the illumination of the data arrays without additional temporal latency of the intermediate results. The system can be compact. These properties make the proposed negabinary arithmetic-logic system a strong candidate for future applications in digital optical computing with the development of smart pixel arrays. (C) 1999 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(99)00803-X].
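For orientation only, a tiny sketch of plain negabinary (radix -2) coding, the representation the operations above act on; the NSD carry-free addition and the optical logic mapping are not shown.

def to_negabinary(n):
    """Digits of n in base -2, least significant first (each digit 0 or 1)."""
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:            # force a non-negative remainder
            n += 1
            r += 2
        digits.append(r)
    return digits or [0]

def from_negabinary(digits):
    return sum(d * (-2) ** k for k, d in enumerate(digits))

for x in range(-20, 21):
    assert from_negabinary(to_negabinary(x)) == x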
Abstract:
A two-step digit-set-restricted modified signed-digit (MSD) adder based on symbolic substitution is presented. In the proposed addition algorithm, carry propagation is avoided by using reference digits to restrict the intermediate MSD carry and sum digits into {1̄, 0} and {0, 1}, respectively. The algorithm requires only 12 minterms to generate the final results, and no complementarity operations for nonzero outputs are involved, which simplifies the system complexity significantly. An optoelectronic shared content-addressable memory based on an incoherent correlator is used for experimental demonstration. (c) 2005 Society of Photo-Optical Instrumentation Engineers.
Abstract:
A new type of wave-front analysis method for the collimation testing of laser beams is proposed. A concept of wave-front height is defined, and, on this basis, the wave-front analysis method of circular aperture sampling is introduced. The wave-front height of the tested noncollimated wave can be estimated from the distance between two identical fiducial diffraction planes of the sampled wave, and then the divergence is determined. The design is detailed, and the principle and experimental results of the method are presented. Owing to the simplicity of the method and its low cost, it is a promising method for checking the collimation of a laser beam with a large divergence. © 2005 Optical Society of America.
Abstract:
A fast and reliable phase unwrapping (PhU) algorithm, based on the local quality-guided fitting plane, is presented. Its framework depends on the basic plane-approximated assumption for phase values of local pixels and on the phase derivative variance (PDV) quality map. Compared with other existing popular unwrapping algorithms, the proposed algorithm demonstrates improved robustness and immunity to strong noise and high phase variations, given that the plane assumption for the local phase is reasonably satisfied. Its effectiveness is demonstrated by computer-simulated and experimental results.
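A sketch of a phase-derivative-variance style quality map (window size, normalization, and boundary handling are illustrative choices, not necessarily the paper's exact definition) might look as follows:

import numpy as np
from scipy.ndimage import uniform_filter

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def pdv_quality_map(psi, k=3):
    """Phase derivative variance map of a wrapped phase image psi.

    For each pixel, the local variance of the wrapped row and column
    phase differences is accumulated over a k x k window; low values
    indicate high quality.
    """
    dx = wrap(np.diff(psi, axis=1, append=psi[:, -1:]))   # wrapped x-derivative
    dy = wrap(np.diff(psi, axis=0, append=psi[-1:, :]))   # wrapped y-derivative
    var = np.zeros_like(psi)
    for d in (dx, dy):
        mean = uniform_filter(d, size=k)
        var += uniform_filter(d * d, size=k) - mean ** 2  # local variance
    return var

# A smooth simulated phase yields a low-variance (high-quality) map almost everywhere.
y, x = np.mgrid[0:64, 0:64]
psi = wrap(0.005 * (x ** 2 + y ** 2))
print(pdv_quality_map(psi).mean())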
Abstract:
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
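As a toy, hedged illustration of what a summation rule does (a 1-D harmonic chain, not the paper's formulation): the total energy is approximated by a weighted sum of site energies over a few sampling atoms, and under affine (uniform) deformation suitable weights reproduce the exact energy.

import numpy as np

def site_energy(x, i, k=1.0, a0=1.0):
    """Energy assigned to atom i of a 1-D chain x: half of each adjacent bond."""
    e = 0.0
    if i > 0:
        e += 0.25 * k * (x[i] - x[i - 1] - a0) ** 2
    if i < len(x) - 1:
        e += 0.25 * k * (x[i + 1] - x[i] - a0) ** 2
    return e

n = 10_001
x = 1.02 * np.arange(n, dtype=float)              # affine (uniform) stretch

exact = sum(site_energy(x, i) for i in range(n))

# Summation rule: boundary atoms are kept explicitly (weight 1); the interior
# is represented by a few sampling atoms whose weights sum to n - 2.
interior = np.linspace(1, n - 2, 9, dtype=int)
approx = site_energy(x, 0) + site_energy(x, n - 1)
approx += sum((n - 2) / len(interior) * site_energy(x, i) for i in interior)

assert abs(exact - approx) < 1e-6 * exact         # exact under affine deformation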
Abstract:
This research program consisted of three major component areas: (I) development of experimental design, (II) calibration of the trawl design, and (III) development of the foundation for stock assessment analysis. The products which have resulted from the program are indicated below. I. EXPERIMENTAL DESIGN: The study was successful in identifying spatial and temporal distribution characteristics of the several key species, and the relationships between given species catches and environmental and physical factors which are thought to influence species abundance by areas within the mainstem of the Chesapeake Bay and tributaries.
Abstract:
Among different phase unwrapping approaches, the weighted least-squares minimization methods are gaining attention. In these algorithms, the weighting coefficients are generated from a quality map. The intrinsic drawbacks of existing quality maps constrain the application of these algorithms: they often fail to handle wrapped phase data that contain error sources such as phase discontinuities, noise, and undersampling. To deal with such intractable wrapped phase data, a new weighted least-squares phase unwrapping algorithm based on the derivative variance correlation map is proposed. In the algorithm, the derivative variance correlation map, a novel quality map, can truly reflect the quality of the wrapped phase, ensuring a more reliable unwrapped result. The definition of the derivative variance correlation map and the principle of the proposed algorithm are presented in detail. The performance of the new algorithm has been tested by use of simulated wrapped data of a spherical surface and experimental interferometric synthetic aperture radar (IFSAR) wrapped data. Computer simulation and experimental results have verified that the proposed algorithm can work effectively even when a wrapped phase map contains intractable error sources. (c) 2006 Elsevier GmbH. All rights reserved.
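For context, the unweighted least-squares step that such algorithms build on can be written with a DCT-based Poisson solver (a standard Ghiglia-Romero style construction); the weighting by the derivative variance correlation map, which would make the solve iterative, is not shown.

import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * p))

def unwrap_ls(psi):
    """Unweighted least-squares phase unwrapping via a DCT Poisson solver."""
    M, N = psi.shape
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))        # wrapped phase differences
    dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # Divergence of the wrapped gradient field (Neumann boundaries).
    rho = dx - np.hstack([np.zeros((M, 1)), dx[:, :-1]]) \
        + dy - np.vstack([np.zeros((1, N)), dy[:-1, :]])
    rho_hat = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                              # avoid 0/0; the mean is fixed below
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, norm='ortho')

# Residue-free test: a smooth quadratic phase is recovered up to a constant.
y, x = np.mgrid[0:128, 0:128] / 128.0
true_phase = 30.0 * (x ** 2 + y ** 2)
est = unwrap_ls(wrap(true_phase))
est += true_phase.mean() - est.mean()
print(np.abs(est - true_phase).max())              # small for clean, well-sampled data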
Abstract:
The first bilateral study of methods of biological sampling and biological methods of water quality assessment took place during June 1977 on selected sampling sites in the catchment of the River Trent (UK). The study was arranged in accordance with the protocol established by the joint working group responsible for the Anglo-Soviet Environmental Agreement. The main purpose of the bilateral study in Nottingham was for some of the methods of sampling and biological assessment used by UK biologists to be demonstrated to their Soviet counterparts and for the Soviet biologists to have the opportunity to test these methods at first hand in order to judge the potential of any of these methods for use within the Soviet Union. This paper is concerned with the nine river stations in the Trent catchment.
Abstract:
An FFT-based two-step phase-shifting (TPS) algorithm is described in detail and implemented by use of experimental interferograms. This algorithm has been proposed to solve the TPS problem with a random phase shift other than pi. Comparison with the visibility-function-based TPS algorithm shows that the FFT-based algorithm has obvious advantages in phase extraction. Meanwhile, we present a pi-phase-shift supplement to the TPS algorithm, which combines the two interferograms and demodulates the phase map by locating the extrema of the combined fringes after removing the respective backgrounds. By combining this method with the FFT-based one, TPS with an arbitrary random phase shift can be implemented. We then systematically compare the TPS algorithm with the single-interferogram analysis algorithm and the conventional three-step phase-shifting algorithm. The results demonstrate that the FFT-based TPS algorithm has satisfactory accuracy. Finally, based on polarizing interferometry, a schematic setup of a two-channel TPS interferometer with random phase shift is suggested to implement the simultaneous collection of the interferograms. (c) 2007 Elsevier GmbH. All rights reserved.
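A self-contained sketch of the two-step idea on simulated fringes (the phase shift here is estimated from a correlation of the background-removed frames rather than the paper's FFT procedure; all names and parameters are illustrative):

import numpy as np

# Simulated interferograms with an unknown phase shift delta (not 0 or pi).
y, x = np.mgrid[0:256, 0:256] / 256.0
phi = 8 * np.pi * (x ** 2 + y ** 2)            # object phase, several fringes
delta = 1.3                                    # unknown to the algorithm
A, B = 10.0, 5.0
I1 = A + B * np.cos(phi)
I2 = A + B * np.cos(phi + delta)

# Step 1: remove the backgrounds (here simply the mean of each frame).
d1, d2 = I1 - I1.mean(), I2 - I2.mean()

# Step 2: estimate the phase shift from the correlation of the frames, then
# demodulate: d1*cos(delta) - d2 = B*sin(phi)*sin(delta), d1*sin(delta) = B*cos(phi)*sin(delta).
cos_delta = (d1 * d2).mean() / (d1 * d1).mean()
est_delta = np.arccos(np.clip(cos_delta, -1.0, 1.0))
phi_wrapped = np.arctan2(d1 * np.cos(est_delta) - d2, d1 * np.sin(est_delta))

err = np.angle(np.exp(1j * (phi_wrapped - phi)))
print(est_delta, np.abs(err).mean())           # close to 1.3, small residual error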
Abstract:
The topography of a granite surface affects the vertical positioning of a wafer stage in a lithographic tool when the wafer stage moves on the granite. Inaccurate measurement of the topography results in poor leveling and focusing performance. In this paper, an in situ method to measure the topography of a granite surface with high accuracy is presented. In this method, a high-order polynomial is set up to express the topography of the granite surface. Two double-frequency laser interferometers are used to measure the tilts of the wafer stage in the X- and Y-directions. From the sampled tilt information, the coefficients of the high-order polynomial can be obtained by a special algorithm. Experimental results show that the measurement reproducibility of the method is better than 10 nm. (c) 2006 Elsevier GmbH. All rights reserved.
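A rough sketch of the reconstruction step described above (the polynomial order, the sampling, and the plain least-squares fit are illustrative assumptions, not the paper's specific algorithm): the surface's polynomial coefficients are recovered from sampled tilt (slope) data.

import numpy as np

def fit_surface_from_tilts(xs, ys, tx, ty, order=4):
    """Least-squares fit of z(x, y) = sum_{m,n} c_mn x^m y^n from sampled tilts.

    xs, ys : 1-D arrays of sample positions.
    tx, ty : measured slopes dz/dx and dz/dy at those positions.
    The piston term c_00 is unobservable from tilts and is omitted.
    """
    terms = [(m, n) for m in range(order + 1) for n in range(order + 1 - m)
             if (m, n) != (0, 0)]
    Ax = np.column_stack([m * xs**(m - 1) * ys**n if m > 0 else np.zeros_like(xs)
                          for m, n in terms])
    Ay = np.column_stack([n * xs**m * ys**(n - 1) if n > 0 else np.zeros_like(xs)
                          for m, n in terms])
    A = np.vstack([Ax, Ay])
    b = np.concatenate([tx, ty])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(terms, coeffs))

# Synthetic check: slopes of a known low-order surface are recovered.
rng = np.random.default_rng(1)
xs, ys = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
tx = 0.3 + 0.2 * xs + 0.05 * ys            # dz/dx of z = 0.3x + 0.1x^2 + 0.05xy - 0.2y^2
ty = 0.05 * xs - 0.4 * ys                  # dz/dy of the same surface
c = fit_surface_from_tilts(xs, ys, tx, ty)
print(round(c[(2, 0)], 3), round(c[(0, 2)], 3))   # ~0.1 and ~-0.2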