939 results for binary tree
Abstract:
Based on fractal theory, contractive mapping principles and fixed point theory, and by means of affine transforms, this dissertation develops a novel Explicit Fractal Interpolation Function (EFIF) that can be used to reconstruct seismic data with high fidelity and precision. Spatial trace interpolation is one of the important issues in seismic data processing. Under ideal circumstances, seismic data should be sampled with uniform spatial coverage. In practice, however, constraints such as complex surface conditions mean that the sampling density may be sparse, or that some traces may be lost for other reasons. Wide spacing between receivers can result in sparse sampling along traverse lines and thus in spatial aliasing of short-wavelength features. Hence, the interpolation method is of great importance: it must recover not only the amplitude information but also the phase information, especially at points where the phase changes sharply. Many interpolation methods have been put forward; this dissertation focuses on a special class of fractal interpolation functions, referred to as explicit fractal interpolation functions, to improve the accuracy of the interpolation reconstruction and to resolve local detail. The traditional fractal interpolation method is mainly based on the random fractional Brownian motion (FBM) model; furthermore, the vertical scaling factor, which plays a critical role in the implementation of fractal interpolation, is assigned the same value throughout the interpolation process, so local detail cannot be resolved. In addition, the main defect of the traditional fractal interpolation method is that it cannot provide the function values at the interpolating nodes, so the node error cannot be analyzed quantitatively and the feasibility of the method cannot be evaluated. Detailed discussions of the application of fractal interpolation in seismology have not been given in previous work, let alone the interpolation of single-trace seismograms. Building on previous work and fractal theory, this dissertation discusses fractal interpolation thoroughly, analyzes the stability of this special kind of interpolating function, and proposes an explicit expression for the vertical scaling factor, which controls the precision of the interpolation. This novel method extends the traditional fractal interpolation method and converts fractal interpolation based on random algorithms into interpolation based on deterministic algorithms. A binary tree data structure is applied during the interpolation, which avoids the iteration that is unavoidable in traditional fractal interpolation and improves computational efficiency. To illustrate the validity of the novel method, this dissertation develops several theoretical models, synthesizes common shot gathers and seismograms, and reconstructs traces that were erased from the initial section using the explicit fractal interpolation method. To compare quantitatively the waveforms and amplitudes of the theoretical traces erased from the initial section with the traces obtained after reconstruction, each missing trace is reconstructed and the residuals are analyzed.
The numerical experiments demonstrate that the novel fractal interpolation method is applicable to reconstructing seismograms with both small and large offsets. The seismograms reconstructed by the explicit fractal interpolation method resemble the original ones well: the waveforms of the missing traces are estimated accurately, and the amplitudes of the interpolated traces are a good approximation of the originals. The high precision and computational efficiency of explicit fractal interpolation make it a useful tool for reconstructing seismic data; it resolves local detail while preserving the overall characteristics of the object under investigation. To illustrate the influence of the explicit fractal interpolation method on the accuracy of imaging structures in the Earth's interior, this dissertation applies the method to reverse-time migration. The imaging sections obtained using the fractally interpolated reflection data closely resemble the original ones. The numerical experiments demonstrate that, even with sparse sampling, highly accurate images of the Earth's interior structure can still be obtained by means of the explicit fractal interpolation method, so imaging results of fine quality can be obtained with a relatively small number of seismic stations. With the fractal interpolation method, the efficiency and accuracy of reverse-time migration can be improved at reasonable cost. To verify the performance of the method on real data, we tested it on data provided by the Broadband Seismic Array Laboratory, IGGCAS. The results demonstrate that the accuracy of explicit fractal interpolation remains high even for real data with large epicentral distances and large offsets. The amplitudes and phases of the reconstructed station data closely resemble the original ones that were erased from the initial section. Altogether, the novel fractal interpolation function provides a new and useful tool for reconstructing seismic data with high precision and efficiency, and presents an alternative means of accurately imaging the deep structure of the Earth.
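As a concrete illustration of the affine construction that underlies fractal interpolation, the sketch below evaluates a standard affine fractal interpolation function by deterministically iterating its IFS maps. It does not reproduce the dissertation's explicit formulation, its binary tree storage, or its data-driven choice of vertical scaling factors; the sample trace values and the constant scaling factor of 0.3 are illustrative assumptions.

```python
# Minimal sketch of an affine fractal interpolation function (FIF), assuming the
# standard iterated-function-system (IFS) construction; the dissertation's
# "explicit" formulation is not reproduced here.
import numpy as np

def fif_refine(x, y, d, levels=4):
    """Refine interpolation nodes (x, y) with per-interval vertical scaling
    factors d (|d_i| < 1) by repeatedly applying the affine IFS maps
    w_i(t, z) = (L_i(t), F_i(t, z)) to the current set of graph points."""
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    pts = np.column_stack([x, y])
    for _ in range(levels):
        new_pts = []
        for i in range(1, len(x)):
            # Affine map that contracts the whole graph into the i-th subinterval.
            a = (x[i] - x[i - 1]) / (xN - x0)
            e = (xN * x[i - 1] - x0 * x[i]) / (xN - x0)
            c = (y[i] - y[i - 1]) / (xN - x0) - d[i - 1] * (yN - y0) / (xN - x0)
            f = (xN * y[i - 1] - x0 * y[i]) / (xN - x0) - d[i - 1] * (xN * y0 - x0 * yN) / (xN - x0)
            t = a * pts[:, 0] + e
            z = c * pts[:, 0] + d[i - 1] * pts[:, 1] + f
            new_pts.append(np.column_stack([t, z]))
        pts = np.vstack(new_pts)
        pts = pts[np.argsort(pts[:, 0])]
    return pts[:, 0], pts[:, 1]

# Example: densify a sparsely sampled trace segment (illustrative values).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.8, -0.3, 0.5, 0.1])
d = np.full(len(x) - 1, 0.3)   # constant vertical scaling (the traditional choice)
xr, yr = fif_refine(x, y, d)
```

Letting the vertical scaling factor vary per interval, as the dissertation proposes, amounts to supplying a non-constant `d` array.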
Abstract:
Several popular machine learning techniques were originally designed for two-class problems. However, many classification problems have more than two classes. One approach to handling multiclass problems with binary classifiers is to decompose the multiclass problem into multiple binary sub-problems arranged in a binary tree. This approach requires a binary partition of the classes at each node of the tree, which defines the tree structure. This paper presents two algorithms that determine the tree structure from information collected from the dataset in use. This allows the tree structure to be determined automatically for any multiclass dataset.
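A minimal sketch of the decomposition idea, assuming a simple data-driven partition (2-means clustering of the class centroids) and logistic regression as the node classifier; the paper's two structure-determination algorithms are not reproduced here.

```python
# Sketch: decompose a multiclass problem into a binary tree of binary classifiers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

class Node:
    def __init__(self, classes):
        self.classes = classes          # classes handled below this node
        self.clf = None                 # binary classifier at this node
        self.left = self.right = None   # children (subsets of classes)

def build_tree(X, y, classes):
    node = Node(classes)
    if len(classes) == 1:
        return node
    # One plausible data-driven split: cluster the class centroids into two groups.
    cents = np.array([X[y == c].mean(axis=0) for c in classes])
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cents)
    left = [c for c, g in zip(classes, groups) if g == 0]
    right = [c for c, g in zip(classes, groups) if g == 1]
    if not left or not right:                       # degenerate split fallback
        left, right = classes[:1], classes[1:]
    mask = np.isin(y, classes)
    node.clf = LogisticRegression(max_iter=1000).fit(
        X[mask], np.isin(y[mask], left).astype(int))
    node.left, node.right = build_tree(X, y, left), build_tree(X, y, right)
    return node

def predict_one(node, x):
    while node.clf is not None:
        node = node.left if node.clf.predict(x[None])[0] == 1 else node.right
    return node.classes[0]

X, y = load_iris(return_X_y=True)
root = build_tree(X, y, sorted(set(y)))
print(predict_one(root, X[0]))
```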
Abstract:
In this paper, we present ICICLE (Image ChainNet and Incremental Clustering Engine), a prototype system that we have developed to efficiently and effectively retrieve WWW images based on image semantics. ICICLE has two distinguishing features. First, it employs a novel image representation model called Weight ChainNet to capture the semantics of the image content. A new formula, called list space model, for computing semantic similarities is also introduced. Second, to speed up retrieval, ICICLE employs an incremental clustering mechanism, ICC (Incremental Clustering on ChainNet), to cluster images with similar semantics into the same partition. Each cluster has a summary representative and all clusters' representatives are further summarized into a balanced and full binary tree structure. We conducted an extensive performance study to evaluate ICICLE. Compared with some recently proposed methods, our results show that ICICLE provides better recall and precision. Our clustering technique ICC facilitates speedy retrieval of images without sacrificing recall and precision significantly.
Abstract:
We report on the construction of anatomically realistic three-dimensional in-silico breast phantoms with adjustable sizes, shapes and morphologic features. The concept of multiscale spatial resolution is implemented for generating breast tissue images from multiple modalities. The breast epidermal boundary and subcutaneous fat layer are generated by fitting an ellipsoid and 2nd-degree polynomials to reconstructive surgical data and ultrasound imaging data. Intraglandular fat is simulated by randomly distributing and orienting adipose ellipsoids within a fibrous region immediately inside the dermal layer. Cooper’s ligaments are simulated as fibrous ellipsoidal shells distributed within the subcutaneous fat layer. Individual ductal lobes are simulated following a random binary tree model generated from probabilistic branching conditions described by ramification matrices, as originally proposed by Bakic et al. [3, 4]. The complete ductal structure of the breast is simulated from multiple lobes that extend from the base of the nipple and branch towards the chest wall. As lobe branching progresses, branches are reduced in height and radius, and terminal branches are capped with spherical lobular clusters. Biophysical parameters are mapped onto the complete anatomical model, and synthetic multimodal images (mammography, ultrasound, CT) are generated for phantoms of different adipose percentages (40%, 50%, 60%, and 70%) and compared analytically with clinical examples. The results demonstrate that the in-silico breast phantom has applications in imaging performance evaluation and, specifically, great utility for solving image registration issues in multimodality imaging.
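The random-binary-tree lobe model can be sketched as follows. This is a loose illustration only: the ramification matrices of Bakic et al. are not reproduced, and the branching probability, shrink factor, and dimensions used here are made-up placeholders.

```python
# Simplified sketch of a random binary ductal tree: branches shrink in length and
# radius at each generation and terminal branches are capped with lobular clusters.
import random

def grow_lobe(depth, length, radius, p_branch=0.8, shrink=0.75, max_depth=8):
    """Return a nested dict describing one ductal lobe as a random binary tree."""
    node = {"length": length, "radius": radius, "children": []}
    if depth < max_depth and random.random() < p_branch:
        for _ in range(2):  # binary branching towards the chest wall
            node["children"].append(
                grow_lobe(depth + 1, length * shrink, radius * shrink,
                          p_branch, shrink, max_depth))
    else:
        node["terminal_lobule"] = True  # cap the terminal branch with a lobular cluster
    return node

random.seed(1)
lobes = [grow_lobe(0, length=20.0, radius=1.5) for _ in range(12)]  # lobes from the nipple base
```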
Abstract:
Computer modelling promises to be an important tool for analysing and predicting interactions between trees within mixed-species forest plantations. This study explored the use of an individual-based mechanistic model as a predictive tool for designing mixed-species plantations of Australian tropical trees. The 'spatially explicit individually based forest simulator' (SeXI-FS) modelling system was used to describe the spatial interaction of individual tree crowns within a binary mixed-species experiment. The three-dimensional model was developed and verified with field data from three forest tree species grown in tropical Australia. The model predicted the interactions within monocultures and binary mixtures of Flindersia brayleyana, Eucalyptus pellita and Elaeocarpus grandis, accounting for an average of 42% of the growth variation exhibited by the species in different treatments. The model requires only structural dimensions and shade tolerance as species parameters. By modelling interactions in existing tree mixtures, the model predicted both increases and reductions in the growth of mixtures (up to ±50% of stem volume at 7 years) compared to monocultures. This modelling approach may be useful for designing mixed tree plantations.
Abstract:
Fault tree analysis (FTA) is presented in this paper to model the reliability of a railway traction power system. First, the construction of the fault tree is introduced, integrating the components of the traction power system into a fault tree; the binary decision diagram (BDD) method is then used to evaluate the fault tree qualitatively and quantitatively. The components contributing to the reliability of the overall system are identified, together with their relative importance, through sensitivity analysis. Finally, an AC traction power system is evaluated with the proposed methods.
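A minimal sketch of the quantitative step, assuming independent basic events: the top-event probability is computed by Shannon decomposition over the basic events (the principle a BDD exploits), without building or reducing an actual decision diagram. The gate structure and failure probabilities below are illustrative, not taken from the paper.

```python
# Sketch: quantitative evaluation of a small fault tree by Shannon decomposition.
# A fault tree node is ("AND", [...]), ("OR", [...]) or a basic-event name.

def events(tree):
    if isinstance(tree, str):
        return {tree}
    _, children = tree
    return set().union(*(events(c) for c in children))

def evaluate(tree, assignment):
    if isinstance(tree, str):
        return assignment[tree]
    gate, children = tree
    vals = [evaluate(c, assignment) for c in children]
    return all(vals) if gate == "AND" else any(vals)

def top_event_probability(tree, probs):
    names = sorted(events(tree))
    def shannon(i, assignment):
        if i == len(names):
            return 1.0 if evaluate(tree, assignment) else 0.0
        e, p = names[i], probs[names[i]]
        return (p * shannon(i + 1, {**assignment, e: True}) +
                (1 - p) * shannon(i + 1, {**assignment, e: False}))
    return shannon(0, {})

# Hypothetical example: a feeder fails if the transformer fails OR both rectifiers fail.
tree = ("OR", ["transformer", ("AND", ["rectifier_1", "rectifier_2"])])
probs = {"transformer": 0.01, "rectifier_1": 0.05, "rectifier_2": 0.05}
print(top_event_probability(tree, probs))   # 0.01 + 0.99 * 0.0025 = 0.012475
```

A production BDD package would memoize and reduce the diagram instead of enumerating all assignments, but the computed probability is the same.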
Abstract:
Construction of Huffman binary codes for WLN symbols is described for the compression of a WLN file. Here, a parenthesized representation of the tree structure is used for computer encoding.
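A minimal sketch of the two steps described above, assuming made-up symbol frequencies rather than the WLN statistics used in the paper: a Huffman tree is built from frequency counts and then serialized in a parenthesized form.

```python
# Sketch: Huffman coding of symbols plus a parenthesized encoding of the code tree.
import heapq
from itertools import count

def huffman_tree(freqs):
    tick = count()  # tie-breaker so the heap never compares tree nodes directly
    heap = [(f, next(tick), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (left, right)))
    return heap[0][2]

def codes(tree, prefix=""):
    if isinstance(tree, str):
        return {tree: prefix or "0"}
    left, right = tree
    return {**codes(left, prefix + "0"), **codes(right, prefix + "1")}

def parenthesized(tree):
    """Parenthesized representation of the tree structure, e.g. ((A B) C)."""
    if isinstance(tree, str):
        return tree
    return "(" + parenthesized(tree[0]) + " " + parenthesized(tree[1]) + ")"

freqs = {"C": 40, "N": 12, "O": 18, "1": 25, "-": 5}   # hypothetical WLN symbol counts
t = huffman_tree(freqs)
print(codes(t))
print(parenthesized(t))
```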
Abstract:
This paper presents a novel way to speed up the evaluation time of a boosting classifier. We make a shallow (flat) network deep (hierarchical) by growing a tree from the decision regions of a given boosting classifier. The tree provides many short paths for speeding up evaluation while preserving the reasonably smooth decision regions of the boosting classifier for good generalisation. To convert a boosting classifier into a decision tree, we formulate a Boolean optimisation problem, which has previously been studied for circuit design but was limited to a small number of binary variables. In this work, a novel optimisation method is proposed, first for several tens of variables, i.e. the weak learners of a boosting classifier, and then for any larger number of weak learners by using a two-stage cascade. Experiments on synthetic and face image data sets show that the obtained tree achieves a significant speed-up over both a standard boosting classifier and Fast-exit, a previously described method for speeding up boosting classification, at the same accuracy. The proposed method, as a general meta-algorithm, is also useful for a boosting cascade, where it speeds up individual stage classifiers by different gains. The proposed method is further demonstrated on fast-moving object tracking and segmentation problems. © 2011 Springer Science+Business Media, LLC.
Abstract:
This paper proposes compact adders based on non-binary redundant number systems and single-electron (SE) devices. The adders use the number of single electrons to represent discrete multiple-valued logic states and manipulate single electrons to perform arithmetic operations. These adders have high speed and are referred to as fast adders. We develop a family of SE transfer circuits based on a MOSFET-based SE turnstile. The fast adder circuit can be designed easily by directly mapping the graphical counter tree diagram (CTD) representation of the addition algorithm to SE devices and circuits. We propose two design approaches to implement fast adders using SE transfer circuits: the threshold approach and the periodic approach. The periodic approach uses the voltage-controlled single-electron transfer characteristics to efficiently realize periodic arithmetic functions. We use the HSPICE simulator to verify the operation of the fast adders. The proposed adders are fast, their transistor counts are much smaller than those of conventional approaches, and their power dissipation is much lower than that of CMOS and multiple-valued current-mode fast adders. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
One of the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored largely in two main directions. First, the amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree); these techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance, an issue that has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load balancing issue. Three solutions have been developed and tested: two approaches are based on a static partitioning of the data set, and a third solution incorporates a dynamic load balancing policy.
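For orientation, the sketch below shows one simple way a KD-Tree can cut distance computations in k-Means, by indexing the current centroids for nearest-centroid queries. The filtering-style algorithm discussed in this abstract (KD-Tree built over the data points, with per-node pruning of candidate centroids) and its parallel, load-balanced formulation are more involved and are not reproduced here.

```python
# Sketch: a KD-Tree-assisted k-Means iteration; the tree indexes the centroids
# so each point's nearest centroid is found with a tree query instead of k
# explicit distance computations.
import numpy as np
from scipy.spatial import cKDTree

def kmeans_kdtree(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = cKDTree(centroids).query(X)[1]      # assignment step via KD-Tree
        for j in range(k):                           # update step
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

X = np.random.default_rng(1).normal(size=(10000, 8))
centroids, labels = kmeans_kdtree(X, k=16)
```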
Abstract:
In this paper, we propose a novel high-dimensional index method, the BM+-tree, to support efficient processing of similarity search queries in high-dimensional spaces. The main idea of the proposed index is to improve data partitioning efficiency in a high-dimensional space by using a rotary binary hyperplane, which further partitions a subspace and can also take advantage of the twin-node concept used in the M+-tree. Compared with the key dimension concept in the M+-tree, the binary hyperplane is more effective for data filtering. High space utilization is achieved by dynamically performing data reallocation between twin nodes. In addition, a post-processing step is used after index building to ensure effective filtering. Experimental results using two types of real data sets show significantly improved filtering efficiency.
Abstract:
Data mining projects based on decision trees for classifying test cases usually rank the classified cases using the probabilities provided by those decision trees. A better method is needed for ranking cases that have already been classified by a binary decision tree, because these probabilities are not always accurate and reliable enough. One reason is that the probability estimates computed by existing decision tree algorithms are the same for all the cases that fall into a particular leaf of the tree, which is one reason why these estimates cannot be used as an accurate means of deciding whether a test case has been correctly classified. Isabelle Alvarez has proposed a new method for ranking the test cases classified by a binary decision tree [Alvarez, 2004]. In this paper we present the results of a comparison of different ranking methods based on the probability estimate, on the sensitivity of a particular case, or on both.
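A minimal sketch of the issue being described, using scikit-learn: the tree's probability estimate is the class frequency of the leaf a case falls into, so every case reaching the same leaf receives the same ranking score. Alvarez's sensitivity-based ranking is not reproduced here, and the synthetic dataset is an illustrative stand-in.

```python
# Sketch: all cases in one decision-tree leaf share the same probability estimate.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X, y)

leaf_ids = tree.apply(X)                  # leaf index for every case
scores = tree.predict_proba(X)[:, 1]      # frequency-based estimate per case
first_leaf = leaf_ids[0]
# Every case reaching this leaf gets exactly the same ranking score:
assert len(set(scores[leaf_ids == first_leaf])) == 1
```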
Abstract:
This paper presents a multi-class AdaBoost based on an ensemble of binary AdaBoost classifiers organized as a Binary Decision Tree (BDT). Binary AdaBoost is extremely successful at producing accurate binary classifiers, but it does not perform well on multi-class problems. To avoid this performance degradation, the multi-class problem is divided into a number of binary problems, and binary AdaBoost classifiers are invoked to solve them. The approach is tested on a dataset consisting of 6500 binary images of traffic signs: Haar-like features of these images are computed and the multi-class AdaBoost classifier is invoked to classify them. Classification rates of 96.7% and 95.7% are achieved for the traffic sign borders and pictograms, respectively. The proposed approach is also evaluated on a number of standard datasets such as Iris, Wine and Yeast. The performance of the proposed BDT classifier is quite high compared with the state of the art, and it converges to a solution very quickly, indicating that it is a reliable classifier.
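A minimal sketch of a single BDT node under these assumptions: the classes reaching the node are split into two groups (here an arbitrary, illustrative partition of the Iris classes) and a binary AdaBoost decides which branch a sample follows. Building the full tree mirrors the class-partitioning sketch given after the earlier multiclass-decomposition abstract.

```python
# Sketch: one node of a Binary Decision Tree of binary AdaBoost classifiers.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
left_classes = {0}                               # illustrative partition of the 3 classes
node_clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(
    X, np.isin(y, list(left_classes)).astype(int))
go_left = node_clf.predict(X[:5]) == 1           # route the first few samples
print(go_left)
```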