887 results for Beam Search Method
Abstract:
In this paper, we present a distributed computing framework for problems characterized by a highly irregular search tree, for which no reliable workload prediction is available. The framework is based on a peer-to-peer computing environment and dynamic load balancing. The system allows for dynamic resource aggregation, does not depend on any specific meta-computing middleware, and is suitable for large-scale, multi-domain, heterogeneous environments such as computational Grids. Dynamic load balancing policies based on global statistics are known to provide optimal load balancing performance, while randomized techniques provide high scalability. The proposed method combines both advantages, adopting distributed job-pools and a randomized polling technique. The framework has been successfully adopted in a parallel search algorithm for subgraph mining and evaluated on a molecular compounds dataset. The parallel application has shown good scalability and close-to-linear speedup in a distributed network of workstations.
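A minimal sketch of the randomized polling idea described above, assuming an idle worker polls randomly chosen peers' job pools and a donor gives up half of its jobs; the class names and the half-split policy are illustrative, not the framework's actual interface:

```python
import random
from collections import deque

# Illustrative sketch: each worker keeps a local job pool and, when idle,
# polls a randomly chosen peer for surplus work. This mirrors the
# distributed job-pools + randomized polling combination in the abstract.

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.pool = deque()          # local job pool

    def poll(self):
        """Donate roughly half of the local pool to a requester."""
        half = len(self.pool) // 2
        return [self.pool.pop() for _ in range(half)]

def acquire_work(me, peers):
    """When idle, poll random peers until one donates jobs."""
    candidates = [p for p in peers if p is not me]
    random.shuffle(candidates)
    for peer in candidates:
        jobs = peer.poll()
        if jobs:
            me.pool.extend(jobs)
            return True
    return False                     # no work found: termination candidate

workers = [Worker(i) for i in range(4)]
workers[0].pool.extend(range(100))   # highly irregular initial workload
acquire_work(workers[1], workers)
print(len(workers[1].pool))          # ~50 jobs acquired from the loaded peer
```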
Abstract:
Empirical orthogonal functions (EOFs) are widely used in climate research to identify dominant patterns of variability and to reduce the dimensionality of climate data. EOFs, however, can be difficult to interpret. Rotated empirical orthogonal functions (REOFs) have been proposed as more physical entities with simpler patterns than EOFs. This study presents a new approach for finding climate patterns with simple structures that overcomes the problems encountered with rotation. The method achieves simplicity of the patterns by using the main properties of EOFs and REOFs simultaneously. Orthogonal patterns that maximise variance subject to a constraint that induces a form of simplicity are found. The simplified empirical orthogonal function (SEOF) patterns, being more 'local', are constrained to have zero loadings outside the main centre of action. The method is applied to winter Northern Hemisphere (NH) monthly mean sea level pressure (SLP) reanalyses over the period 1948-2000. The 'simplified' leading patterns of variability are identified and compared to the leading patterns obtained from EOFs and REOFs. Copyright (C) 2005 Royal Meteorological Society.
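One plausible way to write the constrained variance maximisation the abstract describes (the exact simplicity constraint used in the paper may differ) is:

```latex
\[
\max_{\mathbf{u}_k} \; \mathbf{u}_k^{\top} \mathbf{S}\,\mathbf{u}_k
\quad \text{subject to} \quad
\mathbf{u}_k^{\top}\mathbf{u}_k = 1, \qquad
\mathbf{u}_k^{\top}\mathbf{u}_j = 0 \;\; (j < k), \qquad
u_{k,i} = 0 \;\; \text{for } i \notin \mathcal{A}_k ,
\]
```

where S is the covariance matrix of the SLP field and A_k the set of grid points in the k-th pattern's main centre of action; the last (zero-loading) constraint is what makes the SEOF patterns 'local'.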
Abstract:
This paper deals with the design of optimal multiple gravity assist trajectories with deep space manoeuvres. A pruning method which considers the sequential nature of the problem is presented. The method locates feasible vectors using local optimization and applies a clustering algorithm to find reduced bounding boxes which can be used in a subsequent optimization step. Since multiple local minima remain within the pruned search space, the use of a global optimization method, such as Differential Evolution, is suggested for finding solutions which are likely to be close to the global optimum. Two case studies are presented.
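To illustrate the suggested global step, here is a minimal Differential Evolution (DE/rand/1/bin) sketch run inside a pruned bounding box; the objective function, box, population size, and rates are placeholder assumptions, not values from the paper:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9):
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(pop, len(lo)))    # population in the box
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(len(lo)) < CR
            trial = np.where(cross, mutant, X[i])       # binomial crossover
            ft = f(trial)
            if ft < fX[i]:                              # greedy selection
                X[i], fX[i] = trial, ft
    return X[fX.argmin()], fX.min()

# Example: a multimodal test objective standing in for the trajectory cost,
# minimized inside one pruned bounding box.
box = np.array([[-5.0, 5.0], [-5.0, 5.0]])
x_best, f_best = differential_evolution(
    lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10), box)
print(x_best, f_best)
```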
Abstract:
A novel Swarm Intelligence method for best-fit search, Stochastic Diffusion Search (SDS), is presented, capable of rapidly locating the optimal solution in the search space. Population-based search mechanisms employed by Swarm Intelligence methods can suffer from a lack of convergence, resulting in ill-defined stopping criteria and loss of the best solution. Conversely, as a result of its resource allocation mechanism, the solutions SDS discovers enjoy excellent stability.
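A minimal sketch of the test-and-diffusion cycle that underlies SDS, applied here to a toy string-matching task; the task, agent count, and iteration budget are illustrative assumptions:

```python
import random

# Sketch of Stochastic Diffusion Search: agents hold candidate positions
# of a pattern in a string, test one randomly chosen component (partial
# evaluation), and inactive agents recruit hypotheses from active ones.
# The stable cluster of agents on the best hypothesis is the output.

def sds(text, pattern, n_agents=50, iters=100):
    positions = range(len(text) - len(pattern) + 1)
    hyps = [random.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # Test phase: each agent checks one random character of the pattern.
        for i, h in enumerate(hyps):
            j = random.randrange(len(pattern))
            active[i] = (text[h + j] == pattern[j])
        # Diffusion phase: inactive agents copy an active agent's hypothesis
        # or re-seed at random; this resource allocation gives SDS stability.
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else random.choice(positions)
    return max(set(hyps), key=hyps.count)   # largest cluster = best match

print(sds("xxxhello worldxxx", "world"))    # -> 9
```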
Abstract:
Feature tracking is a key step in the derivation of Atmospheric Motion Vectors (AMVs). Most operational derivation processes use some template matching technique, such as Euclidean distance or cross-correlation, for the tracking step. As this step is computationally very expensive, short-range forecasts generated by Numerical Weather Prediction (NWP) systems are often used to reduce the search area. Alternatives, such as optical flow methods, have been explored with the aim of improving the number and quality of the vectors generated and the computational efficiency of the process. This paper presents the research carried out to apply Stochastic Diffusion Search, a generic search technique in the Swarm Intelligence family, to feature tracking in the context of AMV derivation. The method is described, and initial results are presented, with Euclidean distance as the reference.
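For context, a sketch of the Euclidean-distance template matching used as the reference tracking step: the target patch from the earlier image is compared at every offset inside a (forecast-reduced) search window. Array names and sizes are illustrative assumptions:

```python
import numpy as np

def track_feature(template, search_area):
    """Exhaustive Euclidean-distance match of a template in a window."""
    th, tw = template.shape
    best, best_xy = np.inf, (0, 0)
    for y in range(search_area.shape[0] - th + 1):
        for x in range(search_area.shape[1] - tw + 1):
            d = np.sum((search_area[y:y+th, x:x+tw] - template) ** 2)
            if d < best:
                best, best_xy = d, (y, x)
    return best_xy   # matched position; displacement over time gives the AMV

rng = np.random.default_rng(1)
img = rng.random((64, 64))
tmpl = img[20:36, 30:46].copy()
print(track_feature(tmpl, img))   # -> (20, 30)
```

The nested loops make the cost explicit, which is why the search area is reduced with NWP forecasts or replaced by cheaper search schemes such as SDS.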
Abstract:
We introduce a classification-based approach to finding occluding texture boundaries. The classifier is composed of a set of weak learners, which operate on discriminative image-intensity features that are defined on small patches and are fast to compute. A database designed to simulate digitized occluding contours of textured objects in natural images is used to train the weak learners. The trained classifier score is then used to obtain a probabilistic model for the presence of texture transitions, which can readily be used for line-search texture boundary detection in the direction normal to an initial boundary estimate. This method is fast and therefore suitable for real-time and interactive applications. It works as a robust estimator, requires only a ribbon-like search region, and can handle complex texture structures without requiring a large number of observations. We demonstrate results both in the context of interactive 2D delineation and of fast 3D tracking, and compare the method's performance with other existing methods for line-search boundary detection.
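A sketch of the 1-D line search described above: the classifier's transition probability is evaluated at offsets along the normal to an initial boundary estimate, and the most probable crossing is chosen. The score function here is a hypothetical stand-in for the trained classifier:

```python
import numpy as np

def line_search_boundary(score, point, normal, half_range=10):
    """Pick the most probable texture transition along the normal."""
    offsets = np.arange(-half_range, half_range + 1)
    candidates = [point + t * normal for t in offsets]
    probs = [score(c) for c in candidates]     # P(transition | patch at c)
    return candidates[int(np.argmax(probs))]

# Toy score peaking at x = 5, standing in for the weak-learner ensemble.
score = lambda p: np.exp(-((p[0] - 5.0) ** 2))
print(line_search_boundary(score, np.array([0.0, 0.0]), np.array([1.0, 0.0])))
```

Because only a ribbon of candidates around the initial estimate is scored, the per-boundary cost stays low enough for interactive use.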
Abstract:
A novel Linear Hashtable Method Predicted Hexagonal Search (LHMPHS) algorithm for block-based motion compensation is proposed. Fast block matching algorithms (BMAs) use the origin as the initial search center, which often does not track motion very well. To improve the accuracy of fast BMAs, we employ a predicted starting search point that reflects the motion trend of the current block. The predicted search center is found to lie closer to the global minimum, so center-biased BMAs can find the motion vector more efficiently. The performance of the algorithm is evaluated on standard video sequences with respect to three important metrics. The results show that the proposed algorithm enhances the accuracy of current hexagonal algorithms and performs better than Full Search, Logarithmic Search, etc.
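A sketch of the predicted-start idea: instead of searching around (0, 0), start from a motion vector predicted from already-coded neighbours (a median predictor here, as an illustrative assumption), then refine with a small center-biased pattern around that point. The cost surface is a toy stand-in for the sum of absolute differences:

```python
import numpy as np

def predicted_center(left_mv, top_mv, topright_mv):
    """Predict the starting search point from neighbouring blocks' MVs."""
    mvs = np.array([left_mv, top_mv, topright_mv])
    return np.median(mvs, axis=0).astype(int)     # motion-trend prediction

def refine(sad, center, pattern):
    """One step of a center-biased pattern search around the prediction."""
    return min((center + d for d in pattern), key=lambda mv: sad(tuple(mv)))

hex_pattern = [np.array(d) for d in
               [(0, 0), (2, 0), (-2, 0), (1, 2), (-1, 2), (1, -2), (-1, -2)]]
sad = lambda mv: (mv[0] - 3) ** 2 + (mv[1] - 1) ** 2   # toy cost surface
start = predicted_center((2, 0), (3, 2), (4, 1))
print(start, refine(sad, start, hex_pattern))          # starts near the minimum
```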
Abstract:
Coatings and filters for spaceflight far-infrared components require a robust, non-absorptive, low-index thin-film material to contrast with the typically higher-refractive-index infrared materials. Barium fluoride is one such material for the 10 to 20 µm wavelength infrared region; however, its optical and mechanical properties vary depending on the process used to deposit it in thin-film form. Dielectric thin films produced by thermal evaporation are well documented as having a lower packing density and refractive index than the bulk material. The porous and columnar microstructure of these films can degrade their performance in varied environmental conditions, primarily because of moisture absorption. Dielectric thin films produced by the more novel technique of ion-beam sputtering are denser, with no columnar microstructure, and have a packing density and refractive index similar to the bulk material. A comparative study of barium fluoride (BaF2) thin films made by conventional thermal evaporation and ion-beam sputtering is reported. Films of similar thicknesses are deposited on cadmium telluride and germanium substrates, and their optical and mechanical properties are then examined. The refractive index n of the films is obtained by applying the modified Manifacier envelope method to spectral measurements made on a Perkin Elmer 580 spectrophotometer. An estimate is also made of the extinction coefficient k in the transparent infrared wavelength region of the thin film. In order to study the mechanical properties of the BaF2 films and evaluate their usefulness in spaceflight infrared filters and coatings, the thin-film samples are subjected to MIL-F-48616 environmental tests, and comparisons are made of their performance under these tests.
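For reference, the standard (unmodified) envelope-method relations for a weakly absorbing film on a transparent substrate of refractive index s are as follows; the modified variant used in the paper may differ in detail:

```latex
\[
n = \left[\, N + \sqrt{N^{2} - s^{2}} \,\right]^{1/2},
\qquad
N = 2s\,\frac{T_M - T_m}{T_M\,T_m} + \frac{s^{2} + 1}{2},
\]
```

where T_M and T_m are the envelopes through the transmission maxima and minima of the interference fringes. The film index n follows directly from the measured fringe contrast and the known substrate index.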
Abstract:
Purpose – Computed tomography (CT) for 3D reconstruction entails a huge number of coplanar fan-beam projections for each of a large number of 2D slice images, and excessive radiation intensities and dosages. For some applications its rate of throughput is also inadequate. A technique for overcoming these limitations is outlined.
Design/methodology/approach – A novel method to reconstruct 3D surface models of objects is presented, using, typically, ten 2D projective images. These images are generated by relative motion between the set of objects and a set of ten fan-beam X-ray sources and sensors, with their viewing axes suitably distributed in 2D angular space.
Findings – The method entails a radiation dosage several orders of magnitude lower than CT, and requires far less computational power. Experimental results are given to illustrate the capability of the technique.
Practical implications – The substantially lower cost of the method and, more particularly, its dramatically lower irradiation make it relevant to many applications precluded by current techniques.
Originality/value – The method can be used in many applications, such as aircraft hold-luggage screening and 3D industrial modelling and measurement, and it should also have important applications in medical diagnosis and surgery.
Abstract:
Controllers for feedback substitution schemes demonstrate a trade-off between noise power gain and normalized response time. Using as an example the design of a controller for a radiometric transduction process subject to arbitrary noise power gain and robustness constraints, a Pareto front of optimal controller solutions fulfilling a range of time-domain design objectives can be derived. In this work, we consider designs using a loop-shaping design procedure (LSDP). The approach uses linear matrix inequalities to specify a range of objectives and a multi-objective genetic algorithm (MOGA) to optimize the controller weights. A clonal selection algorithm is used to further direct the GA's search towards the Pareto front. We demonstrate that, with the proposed methodology, it is possible to design higher-order controllers with superior performance in terms of response time, noise power gain and robustness.
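A sketch of the Pareto-front bookkeeping inside a multi-objective GA: a candidate controller is kept on the front only if no other candidate is at least as good in every objective and strictly better in one. The objective vectors (response time, noise power gain, an inverse robustness measure) are illustrative assumptions:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Non-dominated subset of a population of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# (response_time, noise_power_gain, 1/robustness) for candidate controllers
candidates = [(0.8, 2.0, 0.5), (1.0, 1.5, 0.6), (0.9, 2.5, 0.7), (0.8, 1.9, 0.5)]
print(pareto_front(candidates))   # dominated designs are filtered out
```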
Abstract:
This report presents the canonical Hamiltonian formulation of relative satellite motion. The unperturbed Hamiltonian model is shown to be equivalent to the well-known Hill-Clohessy-Wiltshire (HCW) linear formulation. The influence of perturbations from the nonlinear gravitational potential and the oblateness of the Earth (J2 perturbations) is also modelled within the Hamiltonian formulation, and the modelling incorporates eccentricity of the reference orbit. The corresponding Hamiltonian vector fields are computed and implemented in Simulink. A numerical method aimed at locating periodic or quasi-periodic relative satellite motion is presented and applied to the Hamiltonian system. Although the orbits considered here are weakly unstable at best, in the case of eccentricity only, the method finds exact periodic orbits. When other perturbations such as nonlinear gravitational terms are added, drift is significantly reduced, and in the case of the J2 perturbation, with and without the nonlinear gravitational potential term, bounded quasi-periodic solutions are found. Advantages of using Newton's method to search for periodic or quasi-periodic relative satellite motion include simplicity of implementation, repeatability of solutions due to its non-random nature, and fast convergence. Given that the use of bounded or drifting trajectories as control references carries practical difficulties over long-term missions, Principal Component Analysis (PCA) is applied to the quasi-periodic or slowly drifting trajectories to help provide a closed reference trajectory for the implementation of closed-loop control. In order to evaluate the effect of the quality of the model used to generate the periodic reference trajectory, a study involving closed-loop control of a simulated master/follower formation was performed. The results of the closed-loop control study indicate that the quality of the model employed for generating the reference trajectory has an important influence on the amount of fuel required to track that trajectory. The model used to generate the LQR controller gains also affects the efficiency of the controller.
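A sketch of the Newton search for periodic motion: a state x0 is periodic with period T when F(x0) = φ_T(x0) − x0 = 0, where φ_T is the flow map over one period, and the Jacobian of F is approximated by finite differences. The periodically forced oscillator below is only a placeholder with a well-conditioned period map; the report applies the idea to its Hamiltonian relative-motion model instead:

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow(f, x0, T):
    """Flow map: integrate the dynamics from x0 over one period T."""
    return solve_ivp(f, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]

def newton_periodic(f, x0, T, iters=5, eps=1e-6):
    """Newton iteration on F(x) = flow(x) - x to find a periodic state."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        F = flow(f, x, T) - x
        J = np.empty((x.size, x.size))
        for j in range(x.size):                 # finite-difference Jacobian
            dx = np.zeros_like(x); dx[j] = eps
            J[:, j] = (flow(f, x + dx, T) - (x + dx) - F) / eps
        x = x - np.linalg.solve(J, F)           # Newton update
    return x

# Damped oscillator with period-pi forcing; Newton finds the periodic state.
f = lambda t, x: np.array([x[1], -0.2 * x[1] - x[0] + np.cos(2.0 * t)])
x_star = newton_periodic(f, [0.0, 0.0], np.pi)
print(x_star, np.abs(flow(f, x_star, np.pi) - x_star).max())   # residual ~ 0
```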
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (r_e) and 2-D column mean droplet number concentration (N_d). By using an adaptation of the ensemble Kalman filter, radiances are used to constrain the optical properties of the clouds with a forward model that employs full 3-D radiative transfer, while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in r_e, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and r_e are retrieved with average errors of 0.05–0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
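The ensemble Kalman filter update that the method adapts has the standard form (stated here in its stochastic, perturbed-observation variant; the paper's exact adaptation may differ):

```latex
\[
\mathbf{x}_i^{a} = \mathbf{x}_i^{f}
  + \mathbf{K}\left(\mathbf{y} + \boldsymbol{\epsilon}_i - \mathcal{H}(\mathbf{x}_i^{f})\right),
\qquad
\mathbf{K} = \mathbf{P}^{f}\mathbf{H}^{\top}
  \left(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\top} + \mathbf{R}\right)^{-1},
\]
```

where x_i^f are ensemble members of the cloud state, H (linearized H) is the forward operator mapping state to radiances via 3-D radiative transfer, y the observed radiances, P^f the ensemble forecast covariance, R the observation error covariance, and ε_i ~ N(0, R). The spread of the analysis ensemble x_i^a supplies the retrieval error statistics quoted above.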
Abstract:
Searching for and mapping the physical extent of unmarked graves using geophysical techniques has proven difficult in many cases. The success of individual geophysical techniques for detecting graves varies on a site-by-site basis. Significantly, detection of graves often results from measured contrasts that are linked to the background soils rather than to the type of archaeological feature associated with the grave. It is evident that investigation of buried remains should be considered within a 3D space, as the burial environment can vary considerably through the grave. Within this paper, we demonstrate the need for a multi-method survey strategy to investigate unmarked graves, as applied at a "planned" but unmarked pauper's cemetery. The outcome of this case study provides new insights into the strategy required at such sites. Perhaps the most significant conclusion is that unmarked graves are best understood in terms of characterization rather than identification. We argue for a methodological approach that, while following the current trend of using multiple techniques, is fundamentally dependent on a structured approach to the analysis of the data. The ramifications of this case study illustrate the necessity of an integrated strategy to provide a more holistic understanding of unmarked graves, which may aid in the management of these unseen but important aspects of our heritage. It is concluded that the search for graves is still a current debate, and one that will be resolved by methodological rather than technique-based arguments.
Abstract:
A method for linearly constrained optimization which modifies and generalizes recent box-constraint optimization algorithms is introduced. The new algorithm is based on a relaxed form of Spectral Projected Gradient iterations. Intercalated with these projected steps, internal iterations restricted to faces of the polytope are performed, which enhance the efficiency of the algorithm. Convergence proofs are given, and numerical experiments are included and discussed. Software supporting this paper is available through the Tango Project web page: http://www.ime.usp.br/~egbirgin/tango/.
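A sketch of the basic Spectral Projected Gradient step that the method relaxes: move along the negative gradient scaled by the Barzilai-Borwein (spectral) steplength and project back onto the feasible set. A box projection stands in for the general polytope (whose projection would require solving a QP), and the nonmonotone line search of the full algorithm is omitted for brevity:

```python
import numpy as np

def spg(grad, project, x, iters=100, lam=1.0):
    """Barebones SPG: projected gradient steps with spectral steplength."""
    g = grad(x)
    for _ in range(iters):
        x_new = project(x - lam * g)            # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        lam = s @ s / (s @ y) if s @ y > 1e-12 else 1.0   # spectral steplength
        x, g = x_new, g_new
    return x

# Minimize ||x - c||^2 over the box [0, 1]^2 with c outside the box.
c = np.array([1.5, -0.3])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)
print(spg(grad, project, np.array([0.5, 0.5])))   # -> [1. 0.]
```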
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in the mammography image according to the excess or lack of radiation to which they have been subjected as a result of this effect. The simulation method calculates the intensities at all points of the image plane; in the simulated image, the percentage of radiation received at each point takes the center of the field as reference. In the digitized mammography, the percentages of the optical density of all pixels of the analyzed image are also calculated. The Heel effect produces a Gaussian distribution around the anode-cathode axis and a logarithmic distribution parallel to this axis. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing for the automatic determination of the correlation between these two sets of data. The measurements obtained with our proposed method differ on average from those of the commercial equipment by 2.49 mm in the direction perpendicular to the anode-cathode axis and 2.02 mm parallel to it. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their actual X-ray absorption. To evaluate the method, experimental data were taken from known objects, although the evaluation could also be performed with clinical and digital images.
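A sketch of the pixel-wise correction idea: a simulated relative-intensity map (Gaussian across the anode-cathode axis, logarithmic along it, normalized to 100% at the field center) divides the raw image so each pixel is rescaled to the dose it would have received at the center. The gain model and its parameters are illustrative assumptions, not the paper's fitted distributions:

```python
import numpy as np

def heel_gain(shape, center, sigma=400.0, k=0.15):
    """Simulated relative intensity: Gaussian across, log along the axis."""
    rows, cols = np.indices(shape, dtype=float)
    u = cols - center[1]              # perpendicular to anode-cathode axis
    v = rows - center[0]              # parallel to anode-cathode axis
    gain = np.exp(-u**2 / (2 * sigma**2)) * (1 - k * np.log1p(np.abs(v) / 100))
    return gain / gain[center]        # reference = 100% at the field center

def heel_correction(image, center):
    """Rescale every pixel by its simulated excess/lack of radiation."""
    return image / heel_gain(image.shape, center)

# Flat-field test: degrade a uniform exposure, then correct it.
center = (256, 256)
observed = 0.8 * heel_gain((512, 512), center)   # nonuniform raw background
corrected = heel_correction(observed, center)
print(corrected.std())                # ~0: the nonuniformity is removed
```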