865 results for Ant-based algorithm
Abstract:
On the basis of signed-digit negabinary representation, parallel two-step addition and one-step subtraction can be performed for arbitrary-length negabinary operands. The arithmetic is realized by signed logic operations and optically implemented by spatial encoding and decoding techniques. The proposed algorithm and optical system are simple, reliable, and practicable, and they have the property of parallel processing of two-dimensional data. This leads to an efficient design for the optical arithmetic and logic unit. (C) 1997 Optical Society of America.
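As background on the number system involved, the short Python sketch below converts an integer to and from plain (unsigned-digit) negabinary, i.e. base -2; it illustrates the representation only, not the paper's signed-digit encoding or its parallel two-step adder.

```python
def to_negabinary(n):
    """Return the base -2 digits of integer n, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:              # force each digit into {0, 1}
            r += 2
            n += 1
        digits.append(r)
    return digits

def from_negabinary(digits):
    """Evaluate base -2 digits given least significant first."""
    return sum(d * (-2) ** i for i, d in enumerate(digits))

# 6 -> [0, 1, 0, 1, 1], i.e. 11010 in base -2 (16 - 8 - 2 = 6);
# negative values such as -3 -> [1, 0, 1, 1] need no explicit sign.
print(to_negabinary(6), from_negabinary(to_negabinary(-3)))
```

One property visible here is that negabinary encodes negative numbers without a separate sign digit, a feature often cited as the appeal of negabinary arithmetic.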
Abstract:
Negabinary is a positional number system. A complete set of negabinary arithmetic operations is presented, including the basic addition/subtraction logic, the two-step carry-free addition/subtraction algorithm based on negabinary signed-digit (NSD) representation, parallel multiplication, and fast conversion from NSD to normal negabinary in carry-look-ahead mode. All the arithmetic operations can be performed with binary logic. By programming the binary reference bits, addition and subtraction can be realized in parallel with the same binary logic functions. This offers a technique to perform space-variant arithmetic-logic functions with space-invariant instructions. Multiplication can be performed in a tree structure and is simpler than the modified signed-digit (MSD) counterpart. The parallelism of the algorithms is very suitable for optical implementation. Correspondingly, a general-purpose optical logic system using an electron-trapping device is suggested. Various complex logic functions can be performed by programming the illumination of the data arrays without additional temporal latency for the intermediate results. The system can be compact. These properties make the proposed negabinary arithmetic-logic system a strong candidate for future applications in digital optical computing with the development of smart pixel arrays. (C) 1999 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(99)00803-X].
Abstract:
We present, for the first time to our knowledge, a generalized lookahead logic algorithm for number conversion from signed-digit to complement representation. By properly encoding the signed digits, all the operations are performed by binary logic, and unified logical expressions can be obtained for conversion from modified signed-digit (MSD) to 2's complement, trinary signed-digit (TSD) to 3's complement, and quaternary signed-digit (QSD) to 4's complement. For optical implementation, a parallel logical array module using an electron-trapping device is employed and experimental results are shown. This optical module is suitable for implementing complex logic functions in sum-of-products form. The algorithm and architecture are compatible with a general-purpose optoelectronic computing system. (C) 2001 Society of Photo-Optical Instrumentation Engineers.
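The lookahead expressions themselves are not given in the abstract; as orientation, the hedged sketch below evaluates a modified signed-digit word (digits in {-1, 0, 1}) by the direct positive-part/negative-part method and prints the result as a two's-complement bit string. The function name and word width are illustrative assumptions, and this is not the carry-look-ahead scheme of the paper.

```python
def msd_to_twos_complement(digits, width=8):
    """Evaluate MSD digits (in {-1, 0, 1}, least significant first)
    and return the value plus its two's-complement bit string."""
    positive = sum(2 ** i for i, d in enumerate(digits) if d == 1)
    negative = sum(2 ** i for i, d in enumerate(digits) if d == -1)
    value = positive - negative
    return value, format(value & ((1 << width) - 1), f"0{width}b")

# The MSD word (1, -1, 1), least significant digit first, is 1 - 2 + 4 = 3.
print(msd_to_twos_complement([1, -1, 1]))    # (3, '00000011')
print(msd_to_twos_complement([-1, -1]))      # (-3, '11111101')
```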
Abstract:
A new 2-D quality-guided phase-unwrapping algorithm, based on the placement of the branch cuts, is presented. Its framework consists of branch cut placing guided by an original quality map and reliability ordering performed on a final quality map. To improve the noise immunity of the new algorithm, a new quality map, which is used as the original quality map to guide the placement of the branch cuts, is proposed. After a complete description of the algorithm and the quality map, several wrapped images are used to examine the effectiveness of the algorithm. Computer simulation and experimental results make it clear that the proposed algorithm works effectively even when a wrapped phase map contains error sources, such as phase discontinuities, noise, and undersampling. (c) 2005 Society of Photo-Optical Instrumentation Engineers.
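For orientation, a minimal quality-guided flood-fill unwrapper is sketched below: starting from the highest-quality pixel, it always unwraps the highest-quality unvisited neighbour next. It deliberately omits the branch-cut placement and the original/final quality-map distinction that characterize the algorithm described above, and the NumPy/heapq implementation details are assumptions.

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Generic quality-guided flood-fill phase unwrapping (sketch)."""
    h, w = wrapped.shape
    unwrapped = np.zeros_like(wrapped)
    visited = np.zeros((h, w), dtype=bool)
    heap = []

    def push_neighbours(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not visited[ni, nj]:
                # negate quality so the best pixel is popped first
                heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

    # seed the flood fill at the most reliable pixel
    si, sj = np.unravel_index(np.argmax(quality), quality.shape)
    unwrapped[si, sj] = wrapped[si, sj]
    visited[si, sj] = True
    push_neighbours(si, sj)

    while heap:
        _, i, j, pi, pj = heapq.heappop(heap)
        if visited[i, j]:
            continue
        # add the wrapped difference to the already-unwrapped parent pixel
        diff = np.angle(np.exp(1j * (wrapped[i, j] - wrapped[pi, pj])))
        unwrapped[i, j] = unwrapped[pi, pj] + diff
        visited[i, j] = True
        push_neighbours(i, j)
    return unwrapped
```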
Abstract:
A novel method to construct a quality map, called modulation-phase-gradient variance (MPGV), is proposed, based on modulation and the phase gradient. The MPGV map is successfully applied to two phase-unwrapping algorithms - the improved weighted least square and the quality-guided unwrapping algorithm. Both simulated and experimental data testify to the validity of our proposed quality map. Moreover, the unwrapped-phase results show that the new quality map can have higher reliability than the conventional phase-derivative variance quality map in helping to unwrap noisy, low-modulation, and/or discontinuous phase maps. (c) 2006 Society of Photo-Optical Instrumentation Engineers.
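The MPGV formula itself is not given in the abstract; for comparison, the sketch below computes the conventional phase-derivative variance quality map that MPGV is benchmarked against, as a local-window standard deviation of the wrapped phase gradients (window size, scaling, and the use of scipy.ndimage are illustrative assumptions).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pdv_quality_map(wrapped, k=3):
    """Conventional phase-derivative variance quality map (sketch).
    Large local variation of the wrapped-phase gradients marks unreliable
    pixels, so the map is inverted: higher value = more reliable."""
    wrap = lambda a: np.angle(np.exp(1j * a))            # wrap to (-pi, pi]
    dx = wrap(np.diff(wrapped, axis=1, append=wrapped[:, -1:]))
    dy = wrap(np.diff(wrapped, axis=0, append=wrapped[-1:, :]))

    def local_std(a):
        m = uniform_filter(a, size=k)
        m2 = uniform_filter(a * a, size=k)
        return np.sqrt(np.maximum(m2 - m * m, 0.0))

    pdv = local_std(dx) + local_std(dy)
    return 1.0 / (1.0 + pdv)
```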
Abstract:
The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully-nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to most accurately integrate thermodynamic quantities of interest. By comparing this new class of summation rules to commonly-employed rules through analysis of energy and spurious force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to seamlessly bridge to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
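In equation form, a summation rule of the kind discussed above approximates the total energy of the full ensemble of N atoms by a weighted sum over a small set S of sampling atoms (generic notation, not the paper's own):

```latex
E_{\mathrm{tot}}(\mathbf{q}) \;=\; \sum_{i=1}^{N} E_i(\mathbf{q})
\;\approx\; \sum_{\alpha \in S} w_\alpha \, E_\alpha(\mathbf{q}),
\qquad \text{with weights typically normalized so that } \sum_{\alpha \in S} w_\alpha = N .
```

Here q denotes the coarse-grained (interpolated) atomic positions, and the choice of sampling atoms and weights w_alpha is what distinguishes one summation rule from another.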
Abstract:
A novel phase-step calibration technique is presented on the basis of a two-run-times-two-frame phase-shift method. First the symmetry factor M is defined to describe the distribution property of the distorted phase due to phase-shifter miscalibration; then the phase-step calibration technique, in which two sets of two interferograms with a straight fringe pattern are recorded and the phase step is obtained by calculating M of the wrapped phase map, is developed. With this technique, a good mirror is required, but no uniform illumination is needed and no complex mathematical operation is involved. This technique can be carried out in situ and is applicable to any phase shifter, whether linear or nonlinear. (c) 2006 Optical Society of America.
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher as well as the level of confidence in the model being analyzed is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free-vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed a very strong agreement between the two programs on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a program more capable of conducting highly nonlinear analysis, Perform. These analyses again showed a very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free-vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend toward ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
In this paper, the feed-forward back-propagation artificial neural network (BP-ANN) algorithm is introduced into the traditional Focus Calibration using Alignment (FOCAL) technique, and a novel FOCAL technique based on BP-ANN is proposed. The effects of parameters such as the number of neurons in the hidden layer and the number of training epochs on the measurement accuracy are analyzed in detail. It is shown that the novel FOCAL technique based on BP-ANN is more reliable and is a better choice for measurement of the image-quality parameters. (c) 2005 Elsevier GmbH. All rights reserved.
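As a generic illustration of the feed-forward back-propagation network referred to above (not the FOCAL-specific configuration, whose inputs and targets are not given in the abstract), the sketch below trains a one-hidden-layer network on a toy regression problem; the layer size, learning rate, and epoch count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-pi, pi] (a stand-in for the real targets).
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

n_hidden, lr, epochs = 10, 0.05, 2000
W1 = rng.normal(scale=0.5, size=(1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

for _ in range(epochs):
    # forward pass
    H = np.tanh(X @ W1 + b1)            # hidden-layer activations
    P = H @ W2 + b2                     # linear output layer
    err = P - Y
    # backward pass: gradients of the mean squared error
    dP = 2 * err / len(X)
    dW2 = H.T @ dP;  db2 = dP.sum(axis=0)
    dH = (dP @ W2.T) * (1 - H ** 2)     # tanh derivative
    dW1 = X.T @ dH;  db1 = dH.sum(axis=0)
    # gradient-descent update
    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1

print("final MSE:", float(np.mean(err ** 2)))
```

The number of hidden neurons and the number of training epochs play the same roles here as the parameters whose effect on measurement accuracy the paper analyzes.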
Abstract:
We theoretically investigated the design of a metal-mirror-based reflecting polarizing beam splitter (RPBS). The metal mirror is a silver slab, which is embedded in the substrate of a rectangular silica transmission grating. By using a modal analysis and rigorous coupled-wave analysis, an RPBS grating is designed for operation at 1550 nm. When it is illuminated in Littrow mounting, the transverse electric (TE) and transverse magnetic (TM) waves will be mainly reflected in the minus-first and zeroth orders, respectively. Moreover, a wideband RPBS grating is obtained by adopting the simulated annealing algorithm. The RPBS gratings exhibit high diffraction efficiencies (~95%) and high extinction ratios over a certain angle and wavelength range, especially for the minus-first-order reflection. This kind of RPBS should be useful in practical optical applications.
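The annealing settings are not reported in the abstract; the generic loop below shows the technique on a placeholder merit function standing in for a band-averaged diffraction-efficiency figure from a rigorous coupled-wave solver. The parameter names, bounds, cooling schedule, and merit function are all hypothetical.

```python
import math
import random

def merit(params):
    """Placeholder figure of merit to minimize (a stand-in for, e.g.,
    1 - band-averaged diffraction efficiency returned by an RCWA solver)."""
    depth, fill = params
    return (depth - 1.3) ** 2 + (fill - 0.55) ** 2   # toy quadratic bowl

def neighbour(params, step=0.05):
    """Randomly perturb the grating parameters within simple bounds."""
    depth, fill = params
    return (min(3.0, max(0.1, depth + random.uniform(-step, step))),
            min(0.9, max(0.1, fill + random.uniform(-step, step))))

def simulated_annealing(start, t0=1.0, cooling=0.995, iters=5000):
    current = best = start
    f_cur = f_best = merit(start)
    t = t0
    for _ in range(iters):
        cand = neighbour(current)
        f_cand = merit(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if f_cand < f_cur or random.random() < math.exp((f_cur - f_cand) / t):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
        t *= cooling                                  # geometric cooling
    return best, f_best

print(simulated_annealing(start=(1.0, 0.5)))
```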
Abstract:
This doctoral thesis defines and develops a new methodology for feeder reconfiguration in distribution networks with Distributed Energy Resources (DER). The proposed methodology is based on metaheuristic Ant Colony Optimization (ACO) algorithms. The methodology is called Item Oriented Ant System (IOAS), and the thesis also defines three variations of the original methodology: Item Oriented Ant Colony System (IOACS), Item Oriented Max-min Ant System (IOMMAS), and Item Oriented Max-min Ant Colony System. All methodologies pursue a twofold objective: to minimize power losses and to maximize DER penetration in distribution networks. The aim of the variations is to find the algorithm that best adapts to this optimization problem, solving it most efficiently. The main feature of the methodology lies in the fact that the heuristic information and the exploitation information (pheromone) are attached to the item, not to the path. In addition, the thesis proposes to use feeder reconfiguration to increase the capacity of the distribution network to accept a greater degree of DER. The proposed methodology and its three variations have been tested and verified on two distribution networks well documented in the existing literature. These networks have been modeled and used to test all proposed methodologies for different scenarios with various degrees of DER penetration.
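To make the item-oriented idea concrete, the bare-bones ant system below attaches pheromone to candidate items (here, binary states of a handful of switches) rather than to path edges. The objective function, problem size, and all parameters are illustrative placeholders, not the thesis's IOAS formulation or a power-flow model.

```python
import random

N_ITEMS, N_ANTS, ITERS = 10, 20, 100
ALPHA, RHO, Q = 1.0, 0.1, 1.0         # pheromone weight, evaporation rate, deposit

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # hypothetical low-loss switch states

def losses(config):
    """Placeholder objective (a stand-in for a power-loss evaluation):
    count mismatches against the hypothetical low-loss configuration."""
    return sum(1 for c, t in zip(config, TARGET) if c != t)

# One pheromone value per (item, state) pair -- attached to items, not edges.
pheromone = [[1.0, 1.0] for _ in range(N_ITEMS)]

best_cfg, best_cost = None, float("inf")
for _ in range(ITERS):
    solutions = []
    for _ in range(N_ANTS):
        cfg = []
        for i in range(N_ITEMS):
            w0, w1 = pheromone[i][0] ** ALPHA, pheromone[i][1] ** ALPHA
            cfg.append(0 if random.random() < w0 / (w0 + w1) else 1)
        cost = losses(cfg)
        solutions.append((cost, cfg))
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    # evaporate, then deposit pheromone in proportion to solution quality
    for i in range(N_ITEMS):
        pheromone[i][0] *= 1 - RHO
        pheromone[i][1] *= 1 - RHO
    for cost, cfg in solutions:
        for i, state in enumerate(cfg):
            pheromone[i][state] += Q / (1 + cost)

print(best_cfg, best_cost)
```

Because the pheromone lives on the items, the same deposit and evaporation rules apply whether an ant builds its solution by walking the network or by picking switch states directly.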
Abstract:
This paper presents a method to generate new melodies based on conserving the semiotic structure of a template piece. A pattern-discovery algorithm is applied to the template piece to extract significant segments: those that are repeated and those that are transposed in the piece. Two strategies are combined to describe the semiotic coherence structure of the template piece: inter-segment coherence and intra-segment coherence. Once the structure is described, it is used as a template for new musical content that is generated using a statistical model created from a corpus of bertso melodies and iteratively improved using a stochastic optimization method. Results show that the method presented here effectively describes the coherence structure of a piece by discovering repetition and transposition relations between segments, and also by representing the relations among notes within the segments. For bertso generation, the method correctly conserves all intra- and inter-segment coherence of the template, and the optimization method produces coherent generated melodies.
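The statistical model is described above only as corpus-derived; as a minimal illustration of generating melodic content from a corpus, the sketch below fits a first-order pitch-transition (Markov) model to a toy corpus and samples a new sequence. The corpus and pitch values are invented, and the semiotic-structure constraints and the stochastic optimization step of the paper are not modeled.

```python
import random
from collections import defaultdict

# Toy corpus of melodies as MIDI pitch sequences (invented, not bertso data).
corpus = [
    [60, 62, 64, 65, 64, 62, 60],
    [60, 64, 67, 65, 64, 62, 60],
    [62, 64, 65, 67, 65, 64, 62],
]

# First-order transition model: pitch -> list of observed next pitches.
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def sample_melody(start=60, length=8):
    """Sample a pitch sequence by walking the transition model."""
    melody = [start]
    for _ in range(length - 1):
        choices = transitions.get(melody[-1]) or [start]   # dead end: restart
        melody.append(random.choice(choices))
    return melody

print(sample_melody())
```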
Abstract:
This paper describes a path-following phase-unwrapping algorithm and a phase-unwrapping algorithm based on the discrete cosine transform (DCT), which accelerates the computation and suppresses the propagation of noise. Through analysis of fringe patterns with severe noise simulated in a mathematical model, we compare the path-following algorithm with the DCT algorithm. The advantages and disadvantages of the two algorithms for fringe-pattern analysis are also given through this comparison. Three-dimensional experimental results are given to prove the validity of these algorithms. Despite the robustness and speed of the DCT phase-unwrapping technique in some cases, it cannot unwrap phase maps containing inconsistencies. The path-following algorithm can be used in automated analysis of fringe patterns with little influence from noise. (c) 2007 Elsevier GmbH. All rights reserved.
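For reference, the DCT-based approach referred to above is commonly realized as the unweighted least-squares unwrapper of Ghiglia and Romero, which solves a discrete Poisson equation with cosine transforms. The sketch below follows that standard recipe; the function names and the use of scipy.fft are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(a):
    """Wrap values into the interval (-pi, pi]."""
    return np.angle(np.exp(1j * a))

def dct_unwrap(psi):
    """Unweighted least-squares phase unwrapping via the 2-D DCT (sketch)."""
    M, N = psi.shape
    # wrapped phase differences, zero at the far boundaries (Neumann BC)
    dx = np.zeros_like(psi); dx[:, :-1] = wrap(np.diff(psi, axis=1))
    dy = np.zeros_like(psi); dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # driving term rho(i,j) = dx(i,j) - dx(i,j-1) + dy(i,j) - dy(i-1,j)
    rho = dx - np.concatenate([np.zeros((M, 1)), dx[:, :-1]], axis=1) \
        + dy - np.concatenate([np.zeros((1, N)), dy[:-1, :]], axis=0)
    # the DCT diagonalizes the Neumann Laplacian: divide by its eigenvalues
    rho_hat = dctn(rho, type=2, norm="ortho")
    i = np.arange(M)[:, None]; j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                   # avoid dividing the DC term by zero
    phi_hat = rho_hat / denom
    phi_hat[0, 0] = 0.0                 # the constant offset is undetermined
    return idctn(phi_hat, type=2, norm="ortho")

# quick consistency check on a smooth, noise-free ramp
true = np.linspace(0, 20, 64)[None, :] * np.ones((64, 1))
est = dct_unwrap(wrap(true))
print(np.allclose(est - est.mean(), true - true.mean(), atol=1e-6))
```

As the abstract notes, this least-squares route is fast but has no mechanism for isolating inconsistent (residue-carrying) regions, which is where the path-following algorithm retains an advantage.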