925 results for Search-based algorithms
Abstract:
Lateral or transaxial truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or in region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical-scan CT. It is seen that reconstruction with laterally truncated projection data, assuming it to be complete, gives severe artifacts which even penetrate into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical-scan truncated data, along with an extension of this technique known as the windowed linear prediction approach. The efficacy of the two techniques is shown using simulations with standard phantoms. A quantitative image-quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
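A minimal sketch of the row-by-row completion idea, assuming a numpy environment: fit linear-prediction (autoregressive) coefficients to the measured part of a detector row by least squares, then extrapolate past the truncation edge and taper to zero. The function name and the cosine taper are hypothetical simplifications; the paper's windowed variant is not reproduced.

```python
import numpy as np

def complete_row(row, n_extrap, order=5):
    """Extend a laterally truncated projection row by linear prediction.

    Fits AR coefficients to the measured samples by least squares,
    predicts n_extrap samples past the detector edge, and tapers the
    extrapolation smoothly to zero (a hypothetical simplification of
    the row-by-row completion described in the abstract).
    """
    # Least-squares system: row[k] ~ sum_i a[i] * row[k-1-i]
    X = np.column_stack([row[order - 1 - i:len(row) - 1 - i] for i in range(order)])
    y = row[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)

    ext = list(row[-order:])
    for _ in range(n_extrap):
        ext.append(np.dot(a, ext[-1:-order - 1:-1]))   # predict next sample
    pred = np.array(ext[order:])
    taper = np.cos(np.linspace(0.0, np.pi / 2, n_extrap)) ** 2  # roll off to zero
    return np.concatenate([row, pred * taper])
```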
Abstract:
The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. We describe an algorithm that constructs the Reeb graph of a Morse function defined on a 3-manifold. Our algorithm maintains connected components of the two-dimensional level sets as a dynamic graph and constructs the Reeb graph in O(n log n + n log g (log log g)^3) time, where n is the number of triangles in the tetrahedral mesh representing the 3-manifold and g is the maximum genus over all level sets of the function. We extend this algorithm to construct Reeb graphs of d-manifolds in O(n log n (log log n)^3) time, where n is the number of triangles in the simplicial complex that represents the d-manifold. Our result is a significant improvement over the previously known O(n^2) algorithm. Finally, we present experimental results of our implementation and demonstrate that our algorithm for 3-manifolds performs efficiently in practice.
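The component-tracking idea can be illustrated on the simpler merge-tree half of the problem: sweep vertices in increasing function value and record, with union-find, where sublevel-set components appear and join. This sketch (input conventions assumed) does not reproduce the dynamic-graph machinery or the stated time bound for full Reeb graphs.

```python
import numpy as np

def merge_events(values, edges):
    """Track connected components of sublevel sets with union-find.

    values: scalar function value per vertex; edges: (u, v) pairs from
    the mesh 1-skeleton. Records a 'birth' at each local minimum and a
    'merge' where two components join; a simplified stand-in for the
    level-set tracking in the abstract, not the full Reeb graph.
    """
    parent = list(range(len(values)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    adj = [[] for _ in values]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    seen, events = set(), []
    for v in np.argsort(values):            # sweep in increasing value
        v = int(v)
        seen.add(v)
        lower = [w for w in adj[v] if w in seen and w != v]
        if not lower:
            events.append(("birth", v))     # local minimum: new component
        for w in lower:
            rv, rw = find(v), find(w)
            if rv != rw:
                parent[rw] = rv
                events.append(("merge", v)) # two components join at v
    return events
```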
Abstract:
In this paper, we generalize the existing rate-one space-frequency (SF) and space-time-frequency (STF) code constructions. The objective of this exercise is to provide a systematic design of full-diversity STF codes with high coding gain. Under this generalization, STF codes are formulated as linear transformations of data. Conditions on these linear transforms are then derived so that the resulting STF codes achieve full diversity and high coding gain with moderate decoding complexity. Many of these conditions involve channel parameters such as the delay profile (DP) and temporal correlation. When these quantities are not available at the transmitter, the design of codes that exploit full diversity on channels with arbitrary DP and temporal correlation is considered. A complete characterization of a class of such robust codes is provided and their bit error rate (BER) performance is evaluated. On the other hand, when the channel DP and temporal correlation are available at the transmitter, the linear transforms are optimized to maximize the coding gain of full-diversity STF codes. The BER performance of such optimized codes is shown to be better than that of existing codes.
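As an illustration of "codes as linear transformations of data", the sketch below applies a unitary algebraic rotation (a Vandermonde-type constellation precoder, a classical full-diversity choice) to a block of symbols. The matrix is illustrative only, not the paper's DP-optimized transform, and the SF/STF channel mapping is not shown.

```python
import numpy as np

def vandermonde_rotation(n):
    """Unitary Vandermonde-type rotation over the points
    exp(j*pi*(4k+1)/(2n)); shown only to illustrate formulating a
    code as a linear transform of the data symbols."""
    theta = np.exp(1j * np.pi * (4 * np.arange(n) + 1) / (2 * n))
    return np.vander(theta, n, increasing=True) / np.sqrt(n)

# One rate-one codeword: n QPSK symbols linearly transformed, each
# transformed symbol then riding on its own subcarrier.
n = 4
rng = np.random.default_rng(0)
qpsk = ((1 - 2 * rng.integers(0, 2, n)) + 1j * (1 - 2 * rng.integers(0, 2, n))) / np.sqrt(2)
codeword = vandermonde_rotation(n) @ qpsk
```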
Abstract:
A new parallel algorithm for transforming an arithmetic infix expression into a parse tree is presented. The technique is based on a result due to Fischer (1980) which enables the construction of the parse tree by appropriately scanning the vector of precedence values associated with the elements of the expression. The algorithm presented here is suitable for execution on a shared-memory model of an SIMD machine with no read/write conflicts permitted. It uses O(n) processors and has a time complexity of O(log^2 n), where n is the expression length. Parallel algorithms for generating code for an SIMD machine are also presented.
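A sequential sketch of the underlying idea, building the parse tree by comparing operator precedence values as the expression is scanned; the O(log^2 n) SIMD formulation over the precedence vector is not reproduced here.

```python
# Sequential, shunting-yard-style sketch of precedence-driven parse
# tree construction (single-letter operands, binary + - * / only).
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def parse(tokens):
    """Build a binary parse tree (op, left, right) from an infix token list."""
    out, ops = [], []

    def reduce():
        op = ops.pop()
        r, l = out.pop(), out.pop()
        out.append((op, l, r))

    for t in tokens:
        if t in PREC:
            # Higher-or-equal precedence operators to the left bind first.
            while ops and PREC[ops[-1]] >= PREC[t]:
                reduce()
            ops.append(t)
        else:
            out.append(t)
    while ops:
        reduce()
    return out[0]

tree = parse(list("a+b*c-d"))   # ('-', ('+', 'a', ('*', 'b', 'c')), 'd')
```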
Abstract:
Systems of learning automata have been studied by various researchers to evolve useful strategies for decision making under uncertainty. Considered in this paper are a class of hierarchical systems of learning automata where the system gets responses from its environment at each level of the hierarchy. A classification of such sequential learning tasks based on the complexity of the learning problem is presented. It is shown that none of the existing algorithms is adequate for the most general type of hierarchical problem. An algorithm for learning the globally optimal path in this general setting is presented, and its convergence is established. This algorithm requires information transfer from the lower levels to the higher levels. Using the methodology of estimator algorithms, this model can be generalized to accommodate other kinds of hierarchical learning tasks.
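For context, a minimal linear reward-inaction (L_R-I) automaton of the kind such hierarchies are built from, acting in a stationary random environment; the hierarchical composition and the lower-to-higher-level information transfer are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

class LRIAutomaton:
    """Linear reward-inaction automaton: on reward, shift probability
    mass toward the chosen action; on penalty, leave probabilities
    unchanged (a standard building block, not the paper's algorithm)."""
    def __init__(self, n_actions, lr=0.05):
        self.p = np.full(n_actions, 1.0 / n_actions)
        self.lr = lr

    def act(self):
        return rng.choice(len(self.p), p=self.p)

    def update(self, action, reward):
        if reward:
            self.p *= (1 - self.lr)
            self.p[action] += self.lr   # probabilities still sum to 1

# Environment where action 2 is rewarded most often.
success = np.array([0.2, 0.5, 0.8])
la = LRIAutomaton(3)
for _ in range(5000):
    a = la.act()
    la.update(a, rng.random() < success[a])
print(la.p)  # mass concentrates on the optimal action
```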
Abstract:
A fuzzy-logic-based centralized control algorithm for irrigation canals is presented. The purpose of the algorithm is to control the downstream discharge and the water level of pools in the canal by adjusting the discharge release from the upstream end and the gate settings. The algorithm is based on inverting the dynamic wave model (Saint-Venant equations) in space, wherein the momentum equation is replaced by a fuzzy rule-based model while the continuity equation is retained in its complete form. The fuzzy rule-based model is developed by fuzzifying a new mathematical model for wave velocity, the derivational details of which are given. The advantages of the fuzzy control algorithm over other conventional control algorithms are described: it is transparent and intuitive, and no linearizations of the governing equations are involved. The timing of the algorithm and the method of computation are explained, and it is shown that the tuning is easy and the computations are straightforward. The algorithm provides stable, realistic and robust outputs. Its disadvantage is reduced precision in the outputs due to the approximation inherent in fuzzy logic. Feedback control logic is adopted to eliminate error caused by system disturbances as well as error caused by the reduced precision in the outputs. The algorithm is tested by applying it to the water-level control problem in a fictitious canal with a single pool and in a real canal with a series of pools. It is found that the results obtained from the algorithm are comparable to those obtained from conventional control algorithms.
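A toy Mamdani-style fragment showing the flavour of a fuzzy rule-based model: triangular memberships, three rules mapping water-level error to a gate-opening change, and centroid defuzzification. The variables, memberships and rules here are hypothetical; the paper instead fuzzifies a wave-velocity model inside the Saint-Venant inversion.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_gate_adjustment(level_error):
    """Map water-level error (m, assumed within [-1, 1]) to a gate-opening
    change (m) with hypothetical rules and centroid defuzzification."""
    u = np.linspace(-0.5, 0.5, 201)          # candidate gate changes
    rules = [
        (tri(level_error, -1.0, -0.5, 0.0), tri(u, 0.0, 0.25, 0.5)),   # low level -> open
        (tri(level_error, -0.5, 0.0, 0.5), tri(u, -0.25, 0.0, 0.25)),  # on target -> hold
        (tri(level_error, 0.0, 0.5, 1.0), tri(u, -0.5, -0.25, 0.0)),   # high level -> close
    ]
    agg = np.zeros_like(u)
    for strength, consequent in rules:       # clip each rule, take the union
        agg = np.maximum(agg, np.minimum(strength, consequent))
    return np.trapz(agg * u, u) / np.trapz(agg, u)  # centroid of the aggregate

print(fuzzy_gate_adjustment(-0.3))           # low level: opens the gate
```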
Abstract:
Sets of multivalued dependencies (MVDs) having conflict-free covers are important to the theory and design of relational databases [2,12,15,16]. Their desirable properties motivate the problem of testing a set M of MVDs for the existence of a conflict-free cover. In [8] Goodman and Tay have proposed an approach based on the possible equivalence of M to a single (acyclic) join dependency (JD). We remark that their characterization does not lend an insight into the nature of such sets of MVDs. Here, we use notions that are intrinsic to MVDs to develop a new characterization. Our approach proceeds in two stages. In the first stage, we use the notion of “split-free” sets of MVDs and obtain a characterization of sets M of MVDs having split-free covers. In the second, we use the notion of “intersection” of MVDs to arrive at a necessary and sufficient condition for a split-free set of MVDs to be conflict-free. Based on our characterizations, we also give polynomial-time algorithms for testing whether M has split-free and conflict-free covers. The highlight of our approach is the clear insight it provides into the nature of sets of MVDs having conflict-free covers; less emphasis is given in this paper to the actual efficiency of the algorithms. Finally, as a bonus, we derive a desirable property of split-free sets of MVDs, thereby showing that they are interesting in their own right.
Abstract:
In visual search, one tries to find the currently relevant item among other, irrelevant items. In the present study, visual search performance for complex objects (characters, faces, computer icons and words) was investigated, along with the contributions of different stimulus properties such as luminance contrast between characters and background, set size, stimulus size, colour contrast, spatial frequency, and stimulus layout. Subjects were required to search for a target object among distracter objects in two-dimensional stimulus arrays. The outcome measure was threshold search time, that is, the presentation duration of the stimulus array required by the subject to find the target with a certain probability. It reflects the time used for visual processing, separated from the time used for decision making and manual reactions. The duration of stimulus presentation was controlled by an adaptive staircase method. The number and duration of eye fixations, saccade amplitude, and perceptual span, i.e., the number of items that can be processed during a single fixation, were measured. It was found that search performance was correlated with the number of fixations needed to find the target. Search time and the number of fixations increased with increasing stimulus set size. On the other hand, several complex objects could be processed during a single fixation, i.e., within the perceptual span. Search time and the number of fixations depended on object type as well as luminance contrast. The size of the perceptual span was smaller for more complex objects, and decreased with decreasing luminance contrast within object type, especially for very low contrasts. In addition, the size and shape of the perceptual span explained the changes in search performance for different stimulus layouts in word search. The perceptual span was scale invariant over a 16-fold range of stimulus sizes, i.e., the number of items processed during a single fixation was independent of retinal stimulus size or viewing distance. It is suggested that saccadic visual search consists of both serial (eye movements) and parallel (processing within the perceptual span) components, and that the size of the perceptual span may explain the effectiveness of saccadic search in different stimulus conditions. Further, low-level visual factors, such as the anatomical structure of the retina, peripheral stimulus visibility and the resolution requirements for the identification of different object types, are proposed to constrain the size of the perceptual span and thus limit visual search performance. Similar methods were used in a clinical study to characterise the visual search performance and eye movements of neurological patients with chronic solvent-induced encephalopathy (CSE). In addition, the data about the effects of different stimulus properties on visual search in normal subjects were presented as simple practical guidelines, so that the limits of human visual perception can be taken into account in the design of user interfaces.
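A generic sketch of the adaptive staircase procedure mentioned above, shortening the presentation duration after each correct response and lengthening it after each miss until enough reversals are collected; the one-up/one-down rule, step size and reversal count are illustrative, not the authors' exact settings.

```python
import random

def staircase_threshold(respond, start=500.0, step=0.1, reversals=10):
    """One-up/one-down adaptive staircase over presentation duration (ms).

    respond(duration) -> True if the target was found. The threshold
    estimate is the mean of the last few reversal points.
    """
    dur, prev_dir, turns = start, None, []
    while len(turns) < reversals:
        correct = respond(dur)
        if prev_dir is not None and correct != prev_dir:
            turns.append(dur)                 # staircase reversed direction
        prev_dir = correct
        dur *= (1 - step) if correct else (1 + step)
    return sum(turns[-6:]) / len(turns[-6:])

# Simulated observer: longer presentations are found more reliably.
est = staircase_threshold(lambda d: random.random() < min(0.95, d / 400.0))
print(est)
```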
Abstract:
Need to analyze particles in a flow? This system takes electrical pulses from acoustical or optical sensors and groups them into bands representing ranges of particle sizes.
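In software terms, the grouping step amounts to histogramming pulse heights into calibrated size bands; a minimal sketch with hypothetical band edges (the instrument's actual calibration differs):

```python
import numpy as np

# Group sensor pulse heights (volts) into particle-size bands.
band_edges = np.array([0.1, 0.3, 0.7, 1.5, 3.0])    # hypothetical thresholds
pulse_heights = np.array([0.2, 1.1, 0.05, 2.4, 0.5, 0.9])

band = np.digitize(pulse_heights, band_edges)        # 0 = below smallest band
counts = np.bincount(band, minlength=len(band_edges) + 1)
print(counts)                                        # particles per size band
```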
Abstract:
The biomass resources, existing utilization levels and the efficiency of biomass use have been analyzed for a South Indian village. A biomass-based, energy-efficient strategy has been devised to meet all the energy needs of the village, including substitution of fuels such as electricity and kerosene used in specific activities. Results indicate that the potential as well as the technologies exist for such substitutions. The proposed strategy will lead to an increase in the efficiency of energy use, reduce human drudgery and make villages more self-reliant.
Abstract:
In this paper the notion of conceptual cohesiveness is made precise and used to group objects semantically, based on a knowledge structure called a ‘cohesion forest’. A set of axioms is proposed which should be satisfied to make the generated clusters meaningful.
Abstract:
An 8085-based microprocessor system readily available in the laboratory has been developed in conjunction with a slow A/D converter to digitize repetitive transient signals arising in a solid-state physics experiment. The unit has been successfully used to measure the domain switching time in ferroelectric crystals, which is of the order of milliseconds to seconds.
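A generic pulse-timing sketch of how a switching time can be read off the digitized transient, taking the time to reach a fixed fraction of the final value; the fraction and analysis details are assumptions, not the authors' procedure.

```python
import numpy as np

def switching_time(samples, dt, frac=0.9):
    """Estimate a switching time from equally spaced A/D samples of a
    transient: time to first reach `frac` of the final value.
    dt is the sampling interval in seconds."""
    s = np.asarray(samples, dtype=float)
    s = (s - s[0]) / (s[-1] - s[0])          # normalise 0 -> 1
    idx = np.argmax(s >= frac)               # first sample past threshold
    return idx * dt

# Example: millisecond-scale exponential transient sampled at 10 kHz.
t = np.arange(0, 0.02, 1e-4)
print(switching_time(1 - np.exp(-t / 0.003), 1e-4))   # ~ 3 ms * ln(10)
```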
Abstract:
Theoretical approaches are of fundamental importance for predicting the potential impact of waste disposal facilities on ground water contamination. Appropriate design parameters are generally estimated by fitting theoretical models to data gathered from field monitoring or laboratory experiments. Transient through-diffusion tests are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These parameters are usually estimated either by approximate eye-fitting calibration or by combining the solution of the direct problem with any available gradient-based technique. In this work, an automated, gradient-free solver is developed to estimate the mass transport parameters of a transient through-diffusion model. The proposed inverse model uses a particle swarm optimization (PSO) algorithm that is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated and the mass transport parameters are estimated from laboratory through-diffusion experimental data. An inverse model based on a standard gradient-based technique is formulated for comparison with the proposed solver. A detailed comparative study is carried out between the conventional methods and the proposed solver. The present automated technique is found to be very efficient and robust, and the mass transport parameters are obtained with great precision.
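A minimal particle swarm optimizer of the kind described, assuming numpy; the hyperparameters are illustrative, and the toy misfit below stands in for the finite-difference through-diffusion forward model compared against measured concentrations.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(misfit, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Gradient-free minimization of misfit(params) over a box [lo, hi]."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))       # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                         # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + attraction to personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g

# Toy misfit with known optimum, standing in for the diffusion-model fit.
print(pso(lambda p: np.sum((p - [2.0, 0.5]) ** 2), [0, 0], [10, 10]))
```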
Abstract:
The creep behaviour of a creep-resistant AE42 magnesium alloy reinforced with Saffil short fibres and SiC particulates in various combinations has been investigated in the transverse direction, i.e., with the plane containing the random fibre orientation perpendicular to the loading direction, in the temperature range of 175-300 degrees C at stress levels ranging from 60 to 140 MPa, using the impression creep test technique. Normal creep behaviour, i.e., strain rate decreasing with strain and then reaching a steady state, is observed at 175 degrees C at all the stresses employed, and up to 80 MPa at 240 degrees C. A reverse creep behaviour, i.e., strain rate increasing with strain, then reaching a steady state and then decreasing, is observed above 80 MPa at 240 degrees C and at all stress levels at 300 degrees C. This pattern remains the same for all the composites employed. The reverse creep behaviour is found to be associated with fibre breakage. The apparent stress exponent is found to be very high for all the composites. However, after taking the threshold stress into account, the true stress exponent is found to range between 4 and 7, which suggests viscous glide and dislocation climb as the dominant creep mechanisms. The apparent activation energy Q_C could not be calculated because of insufficient data at any one stress level for either the normal or the reverse creep behaviour. The creep resistance of the hybrid composites is found to be comparable to that of the composite reinforced with 20% Saffil short fibres alone at all the temperatures and stress levels investigated. The creep rate of the composites in the transverse direction is found to be higher than the creep rate in the longitudinal direction reported in a previous paper.
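The threshold-stress analysis reduces to a short calculation: with steady-state rates modelled as rate = A*(sigma - sigma_th)^n, the true exponent n is the slope of log(rate) against log(sigma - sigma_th). The numbers below are illustrative, not data from the paper.

```python
import numpy as np

# Illustrative steady-state creep data (not from the paper).
sigma = np.array([80.0, 100.0, 120.0, 140.0])   # applied stress, MPa
rate = np.array([2e-9, 3e-8, 1.6e-7, 5e-7])     # steady-state rate, 1/s
sigma_th = 55.0                                 # assumed threshold stress, MPa

# Fit log(rate) = log(A) + n * log(sigma - sigma_th).
n, logA = np.polyfit(np.log(sigma - sigma_th), np.log(rate), 1)
print(n)   # a true exponent of ~4-7 points to glide/climb-controlled creep
```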
Abstract:
The development of algorithms, based on Haar functions, for extracting the desired frequency components from transient power-system relaying signals is presented. The applications of these algorithms to impedance detection in transmission-line protection and to harmonic restraint in transformer differential protection are discussed. For transmission-line protection, three modes of application of the Haar algorithms are described: a full-cycle window algorithm, an approximate full-cycle window algorithm, and a half-cycle window algorithm. For power-transformer differential protection, the combined second and fifth harmonic magnitude of the differential current is compared with that of the fundamental to arrive at a trip decision. The proposed line protection algorithms are evaluated, under different fault conditions, using realistic relaying signals obtained from transient analysis conducted on a model 400 kV, 3-phase system. The transformer differential protection algorithms are also evaluated using a variety of simulated inrush and internal fault signals.
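For reference, the conventional full-cycle window benchmark that such algorithms are compared against, implemented here with Fourier (sine/cosine) correlation rather than the paper's Haar functions: one cycle of samples is correlated with the fundamental basis pair to estimate the phasor.

```python
import numpy as np

def full_cycle_phasor(window):
    """Standard full-cycle Fourier estimate of the fundamental phasor
    from one cycle of samples (a conventional benchmark; the paper's
    algorithms correlate the window with Haar functions instead)."""
    n = len(window)
    k = np.arange(n)
    re = (2.0 / n) * np.sum(window * np.cos(2 * np.pi * k / n))
    im = (2.0 / n) * np.sum(window * np.sin(2 * np.pi * k / n))
    return complex(re, im)

# 20 samples/cycle of a 50 Hz signal with a decaying DC offset.
n = 20
t = np.arange(n) / (n * 50.0)
v = 100 * np.sin(2 * np.pi * 50 * t + 0.3) + 30 * np.exp(-t / 0.02)
print(abs(full_cycle_phasor(v)))   # close to the 100 V fundamental
```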