904 results for Quad-Tree decomposition
Abstract:
In this paper, a novel fast method for modeling mammograms using a deterministic fractal coding approach to detect the presence of microcalcifications, which are early signs of breast cancer, is presented. The modeled mammogram obtained with the fractal encoding method is visually similar to the original image containing microcalcifications; therefore, when it is subtracted from the original mammogram, the presence of microcalcifications can be enhanced. The limitation of fractal image modeling is the tremendous time required for encoding. In the present work, instead of searching for a matching domain in the entire domain pool of the image, three methods based on mean and variance, on the dynamic range of the image blocks, and on mass-center features are used. This reduced the encoding time by factors of 3, 89, and 13, respectively, with respect to the conventional fractal image coding method with quad-tree partitioning. The mammograms obtained from the Mammographic Image Analysis Society database (ground truth available) gave total detection scores of 87.6%, 87.6%, 90.5%, and 87.6% for the conventional method and the three proposed methods, respectively.
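The first of the three speed-up methods lends itself to a short illustration. Below is a minimal NumPy sketch (all names are hypothetical; the image is a grayscale 2-D array; isometries and affine intensity fitting are omitted) of feature-based domain-pool pruning: domain blocks are pre-characterized by mean and variance, and a range block is compared only against domains with similar statistics rather than against the entire pool.

```python
import numpy as np

def block_features(block):
    """Mean and variance - the statistics used here to prune the domain pool."""
    return block.mean(), block.var()

def build_domain_pool(image, dsize, step):
    """Collect (position, features) for every candidate domain block."""
    h, w = image.shape
    return [((y, x), block_features(image[y:y + dsize, x:x + dsize]))
            for y in range(0, h - dsize + 1, step)
            for x in range(0, w - dsize + 1, step)]

def best_domain(range_block, image, pool, dsize, tol=8.0):
    """Compare the range block only against domains whose mean/variance lie
    near its own, skipping the rest - the source of the reported speed-up."""
    rm, rv = block_features(range_block)
    best, best_err = None, np.inf
    rh, rw = range_block.shape          # domains are assumed 2x the range size
    for (y, x), (dm, dv) in pool:
        if abs(dm - rm) > tol or abs(dv - rv) > tol * tol:
            continue                    # pruned without a full block comparison
        d = image[y:y + dsize, x:x + dsize]
        d = d.reshape(rh, 2, rw, 2).mean(axis=(1, 3))  # contract 2x2 -> 1 pixel
        err = np.mean(((d - d.mean()) - (range_block - rm)) ** 2)
        if err < best_err:
            best, best_err = (y, x), err
    return best, best_err
```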
Abstract:
This paper presents a novel mobile sink area allocation scheme for consumer-based mobile robotic devices, with a proven application to robotic vacuum cleaners. In the home or office environment, rooms are physically separated by walls, and an automated robotic cleaner cannot decide which room to move to in order to perform the cleaning task. Likewise, state-of-the-art cleaning robots do not move to other rooms without direct human interference. In a smart home monitoring system, sensor nodes may be deployed to monitor each separate room. In this work, a quad-tree-based data gathering scheme is proposed whereby the mobile sink physically moves through every room and logically links all separated sub-networks together. The proposed scheme sequentially collects data from the monitored environment and transmits the information back to a base station. According to the sensor nodes' information, the base station can command a cleaning robot to move to a specific location in the home environment. The quad-tree-based data gathering scheme minimizes the data gathering tour length and time through the efficient allocation of data gathering areas. A calculated shortest-path data gathering tour can efficiently be allocated to the robotic cleaner so that it completes the cleaning task within a minimum time period. Simulation results show that the proposed scheme can effectively allocate and control the cleaning area for the robot vacuum cleaner without any direct interference from the consumer. The performance of the proposed scheme is then validated with a set of practical sequential data gathering tours in a typical office/home environment.
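As a rough illustration of the two ingredients above (all names are hypothetical; sensor nodes are 2-D points in a square monitored area, and distances are Euclidean), the sketch below quarters the area into data-gathering cells with a quad tree and then orders the cell centres with a nearest-neighbour tour standing in for the paper's shortest-path tour computation.

```python
import numpy as np

def quadtree_cells(nodes, x0, y0, size, cap):
    """Recursively quarter the monitored square until each cell holds at most
    `cap` sensor nodes; returns the centres of the non-empty leaf cells, which
    serve as the data-gathering areas the mobile sink must visit."""
    inside = [(x, y) for (x, y) in nodes
              if x0 <= x < x0 + size and y0 <= y < y0 + size]
    if len(inside) <= cap or size < 1.0:   # capacity met, or minimum cell size
        return [(x0 + size / 2, y0 + size / 2)] if inside else []
    half = size / 2
    return [c for dx in (0, half) for dy in (0, half)
            for c in quadtree_cells(inside, x0 + dx, y0 + dy, half, cap)]

def greedy_tour(cells, start=(0.0, 0.0)):
    """Nearest-neighbour ordering of the cells - a simple stand-in for the
    shortest-path data-gathering tour computed in the paper."""
    tour, pos, remaining = [], start, list(cells)
    while remaining:
        nxt = min(remaining, key=lambda c: np.hypot(c[0] - pos[0], c[1] - pos[1]))
        tour.append(nxt); remaining.remove(nxt); pos = nxt
    return tour

# usage: 4 sensor nodes in a 100 m x 100 m area, at most one node per cell
cells = quadtree_cells([(10, 10), (80, 15), (20, 90), (85, 85)], 0, 0, 100.0, cap=1)
print(greedy_tour(cells))
```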
Abstract:
This paper presents a region-based methodology for the segmentation of Digital Elevation Models obtained from laser scanning data. The methodology is based on two sequential techniques: a recursive splitting technique using the quad-tree structure, followed by a region merging technique using the Markov Random Field model. The recursive splitting technique starts by splitting the Digital Elevation Model into homogeneous regions. However, due to slight height differences in the Digital Elevation Model, region fragmentation can be relatively high. In order to minimize the fragmentation, a region merging technique based on the Markov Random Field model is applied to the previously segmented data. The resulting regions are first structured using the so-called Region Adjacency Graph. Each node of the Region Adjacency Graph represents a region of the segmented Digital Elevation Model, and two nodes are connected if the corresponding regions share a common boundary. Next, it is assumed that the random variable associated with each node follows the Markov Random Field model. This hypothesis allows the derivation of the a posteriori probability distribution, whose solution is obtained by Maximum a Posteriori estimation. Regions presenting a high probability of similarity are merged. Experiments carried out with laser scanning data showed that the methodology separates the objects in the Digital Elevation Model with a low amount of fragmentation.
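The splitting phase alone can be sketched in a few lines (the MRF-based merging over the Region Adjacency Graph is omitted; `var_tol`, `min_size`, and the power-of-two window are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def quadtree_split(dem, y, x, size, var_tol, min_size, regions):
    """Recursively split a square DEM window into quadrants until every block is
    height-homogeneous (variance <= var_tol) or the minimum size is reached.
    Appends the (y, x, size) leaf blocks - the initial regions - to `regions`."""
    block = dem[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_tol:
        regions.append((y, x, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_split(dem, y + dy, x + dx, half, var_tol, min_size, regions)

# usage on a synthetic 256 x 256 DEM with a raised plateau
dem = np.zeros((256, 256))
dem[64:192, 64:192] = 5.0
regions = []
quadtree_split(dem, 0, 0, 256, var_tol=0.01, min_size=4, regions=regions)
print(len(regions), "homogeneous leaf regions")
```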
Abstract:
Image segmentation is one of the most computationally intensive operations in image processing and computer vision, because a large volume of data is involved and many different features have to be extracted from the image data. This thesis investigates practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of the hardware architectures, and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions: for the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree-based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed; many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
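The Transputer/Occam implementations are not reproducible here, but the core of the two frequency-domain convolution methods - the convolution theorem applied through a two-dimensional Fourier transform - can be shown in a minimal NumPy sketch (zero-padding avoids circular wrap-around; SciPy's direct convolution is used only as a check):

```python
import numpy as np
from scipy.signal import convolve2d

def fft_convolve2d(image, kernel):
    """2-D convolution via the convolution theorem: pointwise-multiply the
    Fourier transforms and invert. Zero-padding to the full output size
    (h + kh - 1, w + kw - 1) avoids circular wrap-around."""
    h = image.shape[0] + kernel.shape[0] - 1
    w = image.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(image, s=(h, w))
    G = np.fft.rfft2(kernel, s=(h, w))
    return np.fft.irfft2(F * G, s=(h, w))

# sanity check against direct spatial-domain convolution
img, ker = np.random.rand(64, 64), np.random.rand(5, 5)
assert np.allclose(fft_convolve2d(img, ker), convolve2d(img, ker))
```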
Abstract:
3D geographic information systems (GIS) are data- and computation-intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth. Data reduction and performance optimization techniques are therefore of critical importance in quality-of-service (QoS) management for online 3D GIS. In this research, QoS management issues in distributed 3D GIS presentation were studied to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation. To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level-of-detail (LOD) control, and mesh simplification algorithms were proposed to effectively reduce terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm was designed as a hybrid that combines edge straightening and quad-tree compression to reduce mesh complexity by removing geometrically redundant vertices. The main advantage of this mesh simplification algorithm is that grid meshes can be processed directly in parallel, without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, predictive data retrieval, and caching, were also proposed. A prototype of the proposed 3D TerraFly implemented in this research demonstrates the effectiveness of the proposed QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
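A minimal sketch of the distance-driven, up-to-three-level decomposition described above (the thresholds, the stride-based decimation, and all names are illustrative assumptions, not TerraFly's actual algorithm or parameters):

```python
import numpy as np

def lod_for_region(region_center, viewer_pos, near=500.0, far=2000.0):
    """Pick one of three detail levels for a terrain sub-region by viewing
    distance, mirroring the up-to-three-level decomposition described above
    (the thresholds are illustrative)."""
    d = np.linalg.norm(np.asarray(region_center) - np.asarray(viewer_pos))
    if d < near:
        return 0   # full-resolution mesh
    if d < far:
        return 1   # intermediate mesh (every 2nd grid vertex)
    return 2       # coarsest mesh (every 4th grid vertex)

def decimate_grid(height_grid, lod):
    """Simplify a regular grid mesh by striding - a crude stand-in for the
    paper's hybrid edge-straightening / quad-tree compression simplifier."""
    step = 2 ** lod
    return height_grid[::step, ::step]
```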
Abstract:
The objective of this paper is to study selected components of the nutrient cycle of pure and mixed stands of native forest species of the Atlantic Forest in southeastern Brazil. Tree diameter, height, above-ground biomass, and nutrient content were determined in 22-year-old stands. Litterfall, litter decomposition, and nutrient concentration were evaluated from August 1994 to July 1995. The following species were studied: Peltogyne angustiflora, Centrolobium robustum, Arapatiella psilophylla, Sclerolobium chrysophyllum, Cordia trichotoma, and Macrolobium latifolium. The litter of a natural forest and of a 40-year-old naturally regenerated second-growth forest was sampled as well. The mixed-species stands outmatched the pure stands in height, stem volume, and total biomass (29.4 % more). The greatest amount of forest litter was observed in the natural forest (9.3 Mg ha-1), followed by the mixed-species stand (7.6 Mg ha-1) and the secondary forest (7.3 Mg ha-1); the least was measured in the pure C. robustum stand (5.5 Mg ha-1). Litterfall seasonality varied among species in pure stands (CV from 44.7 to 91.4 %), unlike litterfall in the mixed-tree stand, where the variation was lower (CV 31.2 %). In the natural and second-growth forests, litterfall varied by 57.8 and 34.0 %, respectively. The annual rate of nutrient return via litterfall varied widely among forest ecosystems. Differences were detected between forest ecosystems in both litter accumulation and the quantity of litter-layer nutrients. The highest mean nutrient accumulation in above-ground biomass was observed in the mixed-species stands. The total nutrient accumulation (N + P + K + Ca + Mg) ranged from 0.97 to 1.93 kg tree-1 in pure stands and from 1.21 to 2.63 kg tree-1 in mixed-species stands. Soil fertility under the mixed-species stands (0-10 cm) was intermediate between the primary forest and the pure-stand systems. The litterfall rate of native forest species in a mixed-species system is more constant, resulting in a more continuous decomposition rate. Consequently, both nutrient availability and the quantity of organic matter in the soil are higher, and the production system is ecologically more sustainable.
Abstract:
We answer the following question: given any $n \in \mathbb{N}$, what is the minimum number of endpoints $e_n$ of a tree admitting a zero-entropy map $f$ with a periodic orbit of period $n$? We prove that $e_n = s_1 s_2 \cdots s_k - \sum_{i=2}^{k} s_i s_{i+1} \cdots s_k$, where $n = s_1 s_2 \cdots s_k$ is the decomposition of $n$ into a product of primes such that $s_i \le s_{i+1}$ for $1 \le i < k$.
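A small worked example of the formula (SymPy is used only for the prime factorization): for n = 12 = 2·2·3 one gets e_12 = 12 − (2·3 + 3) = 3, and for prime n the sum is empty, so e_n = n.

```python
from sympy import factorint  # prime factorization only

def min_endpoints(n):
    """e_n = s1*...*sk - sum_{i=2}^{k} s_i*s_{i+1}*...*s_k, where n = s1*...*sk
    with the primes in non-decreasing order (the formula from the abstract)."""
    primes = sorted(p for p, mult in factorint(n).items() for _ in range(mult))
    k = len(primes)
    total = n  # s1*...*sk is n itself
    for i in range(1, k):      # i = 2, ..., k in the abstract's 1-based indexing
        tail = 1
        for s in primes[i:]:
            tail *= s
        total -= tail
    return total

# worked examples
print(min_endpoints(12))   # -> 3, since 12 - (2*3 + 3) = 3
print(min_endpoints(7))    # -> 7, since the sum is empty for prime n
```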
Abstract:
Decomposition was studied in a reciprocal litter transplant experiment to examine the effects of forest type, litter quality, and their interaction on leaf decomposition in four tropical forests in south-east Brazil. Litterbags were used to measure decomposition of leaves of one tree species from each forest type: Calophyllum brasiliense from restinga forest; Guapira opposita from Atlantic forest; Esenbeckia leiocarpa from semi-deciduous forest; and Copaifera langsdorffii from cerradão. Decomposition rates in the rain forests (Atlantic and restinga) were twice as fast as those in the seasonal forests (semi-deciduous and cerradão), suggesting that the intensity and distribution of precipitation are important predictors of decomposition rates at regional scales. Decomposition rates varied by species, in the following order: E. leiocarpa > C. langsdorffii > G. opposita > C. brasiliense. However, there was no correlation between decomposition rates and chemical litter quality parameters: C:N, C:P, lignin concentration, and lignin:N. The interaction between forest type and litter quality was positive, mainly because C. langsdorffii decomposed faster than expected in its native forest. This is a potential indication of decomposers' adaptation to specific substrates in a tropical forest. These findings suggest that, besides climate, interactions between decomposers and plants might play an essential role in decomposition processes, and they must be better understood.
Abstract:
Various popular machine learning techniques, such as support vector machines, were originally conceived for the solution of two-class (binary) classification problems. However, a large number of real problems present more than two classes. A common approach to generalizing binary learning techniques to problems with more than two classes, also known as multiclass classification problems, consists of hierarchically decomposing the multiclass problem into multiple binary sub-problems, whose outputs are combined to define the predicted class. This strategy results in a tree of binary classifiers, where each internal node corresponds to a binary classifier distinguishing two groups of classes and the leaf nodes correspond to the problem classes. This paper investigates how measures of the separability between classes can be employed in the construction of binary-tree-based multiclass classifiers, adapting the decomposition performed to each particular multiclass problem.
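A minimal sketch of the construction (assuming X is a NumPy feature matrix, y a label vector, and classes a list of labels; the particular separability heuristic - distance between class centroids - is an illustrative stand-in for the measures the paper investigates):

```python
import numpy as np

def split_classes(X, y, classes):
    """Partition `classes` into two groups: seed with the two most distant
    class centroids, then assign every other class to the nearer seed."""
    cents = {c: X[y == c].mean(axis=0) for c in classes}
    pairs = [(np.linalg.norm(cents[a] - cents[b]), a, b)
             for i, a in enumerate(classes) for b in classes[i + 1:]]
    _, a, b = max(pairs)                 # most separable pair of classes
    left, right = [a], [b]
    for c in classes:
        if c in (a, b):
            continue
        (left if np.linalg.norm(cents[c] - cents[a])
              <= np.linalg.norm(cents[c] - cents[b]) else right).append(c)
    return left, right

def build_tree(X, y, classes):
    """Recursively build the binary tree of class groups; leaves are single
    classes. A binary classifier would be trained at each internal node."""
    if len(classes) == 1:
        return classes[0]
    left, right = split_classes(X, y, classes)
    return (build_tree(X, y, left), build_tree(X, y, right))
```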
Abstract:
Several popular machine learning techniques were originally designed for the solution of two-class problems. However, many classification problems have more than two classes. One approach to dealing with multiclass problems using binary classifiers is to decompose the multiclass problem into multiple binary sub-problems arranged in a binary tree. This approach requires a binary partition of the classes at each node of the tree, which defines the tree structure. This paper presents two algorithms that determine the tree structure taking into account information collected from the dataset used, allowing the tree structure to be determined automatically for any multiclass dataset.
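Once such a tree has been determined, prediction is a descent through the binary classifiers. A minimal sketch, assuming each internal node stores its trained classifier alongside the two child subtrees and leaves are plain class labels (an illustrative layout, not the paper's exact one):

```python
def predict(node, x):
    """Descend a tree whose internal nodes are (clf, left, right) triples and
    whose leaves are class labels; `clf` is an sklearn-style binary classifier
    trained to answer 'does x belong to the left group of classes?' (label 1)."""
    while isinstance(node, tuple):
        clf, left, right = node
        node = left if clf.predict([x])[0] == 1 else right
    return node
```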
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both scenarios, though the applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems, ranging from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
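The coupling strategy can be illustrated on a toy case with M = 1 connection point and two black-box sub-networks (the sub-network equations below are invented stand-ins; SciPy's matrix-free Newton-Krylov solver plays the role of the Broyden / Newton-GMRES variants assessed in the paper):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Two invented black-box sub-networks coupled at M = 1 point; each one, given
# the interface flow rate Q and pressure P, contributes one non-linear residual
# equation, as in the abstract's 2M-equation coupling system.
def subnet_A(Q, P):
    # upstream segment: fixed inflow pressure of 10, non-linear viscous loss
    return 10.0 - (2.0 * Q + 0.1 * Q * abs(Q)) - P

def subnet_B(Q, P):
    # downstream segment discharging to zero outlet pressure
    return P - (3.0 * Q + 0.05 * Q * abs(Q))

def coupling_residual(u):
    Q, P = u   # the 2M = 2 interface unknowns: flow rate and pressure
    return np.array([subnet_A(Q, P), subnet_B(Q, P)])

# Matrix-free Newton-Krylov never assembles a Jacobian, so the sub-networks
# remain black boxes - the property the decomposition is designed to preserve.
u = newton_krylov(coupling_residual, np.array([1.0, 1.0]), f_tol=1e-10)
print("interface flow rate and pressure:", u)
```

When the solver drives this residual to zero, the interface values satisfy both sub-networks simultaneously, which is exactly the strong-coupling property the abstract describes for the full 2M-unknown system.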