799 results for recursive partitioning algorithm
Abstract:
This paper presents a new approach to the design of combinational digital circuits with multiplexers using evolutionary techniques. A Genetic Algorithm (GA) is used as the optimization tool. Several circuits are synthesized with this method and compared with two existing design techniques: standard implementation of logic functions using multiplexers, and implementation using Shannon's decomposition with a GA. With the proposed method, the complexity of the circuit and the associated delay can be reduced significantly.
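The optimization loop behind such a design flow can be sketched generically. The skeleton below is a hedged illustration only: the chromosome layout, the truncation selection and the placeholder fitness (which simply rewards sparser circuits) are assumptions, not the paper's actual multiplexer encoding or cost function.

```python
import random

CHROMOSOME_LEN = 32   # assumed bit-string encoding of a multiplexer network
POP_SIZE = 50
GENERATIONS = 100

def fitness(chrom):
    # Placeholder objective: fewer set bits ~ fewer multiplexers used.
    return CHROMOSOME_LEN - sum(chrom)

def crossover(a, b):
    point = random.randrange(1, CHROMOSOME_LEN)   # one-point crossover
    return a[:point] + b[point:]

def mutate(chrom, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
```

In the paper's setting, the fitness would instead evaluate the synthesized circuit's correctness, gate count and delay.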
Abstract:
Type and rate of fertilizers markedly influence the level of soil organic carbon (Corg) and total nitrogen (Nt), but their effect on C and N partitioning into different pools is open to question. The objectives of the present work were to: (i) quantify the impact of fertilizer type and rate on labile, intermediate and passive C and N pools by using a combination of biological, chemical and mathematical methods; (ii) explain previously reported differences in soil organic matter (SOM) levels between soils receiving farmyard manure with or without biodynamic preparations by using Corg time series and information on SOM partitioning; and (iii) quantify the long-term and short-term dynamics of SOM in density fractions and microbial biomass as affected by fertilizer type and rate, and determine the incorporation of crop residues into labile SOM fractions. Samples were taken from a sandy Cambisol from the long-term fertilization trial in Darmstadt, Germany, founded in 1980. The nine treatments (four field replicates) were: straw incorporation plus application of mineral fertilizer (MSI) and application of rotted farmyard manure with (DYN) or without (FYM) addition of biodynamic preparations, each at high (140-150 kg N ha-1 year-1; MSIH, DYNH, FYMH), medium (100 kg N ha-1 year-1; MSIM, DYNM, FYMM) and low (50-60 kg N ha-1 year-1; MSIL, DYNL, FYML) rates. The main findings were: (i) The stocks of Corg (t ha-1) were affected by fertilizer type and rate and increased in the order MSIL (23.6), MSIM (23.7), MSIH (24.2) < FYML (25.3) < FYMM (28.1), FYMH (28.1). Stocks of Nt were affected in the same way (C/N ratio: 11). Storage of C and N in the modelled labile pools (turnover times: 462 and 153 days for C and N, respectively) was not influenced by the type of fertilizer (FYM and MSI) but depended significantly (p ≤ 0.05) on the application rate, ranging from 1.8 to 3.2 t C ha-1 (7-13% of Corg) and from 90 to 140 kg N ha-1 (4-5% of Nt). In the calculated intermediate pool (C/N ratio: 7), stocks of C were markedly higher in FYM treatments (15-18 t ha-1) than in MSI treatments (12-14 t ha-1). This showed that differences in SOM stocks in the sandy Cambisol induced by fertilizer rate may be short-lived in case of changing management, but differences induced by fertilizer type may persist for decades. (ii) Crop yields, estimated C inputs with crop residue (1.5 t ha-1 year-1), microbial biomass C (Cmic, 118-150 mg kg-1), microbial biomass N (17-20 mg kg-1) and labile C and N pools did not differ significantly between FYM and DYN treatments. However, labile C increased linearly with application rate (R2 = 0.53) from 7 to 11% of Corg; the same applied to labile N (3.5 to 4.9% of Nt). The higher contents of Corg in DYN treatments have existed since 1982, when the first sampling of all individual treatments was conducted, and contents of Corg in DYN and FYM treatments have converged slightly since then. Furthermore, at least 30% of the difference in Corg was located in the passive pool, where a treatment effect could be excluded. Therefore, the reported differences in Corg contents most likely existed since the beginning of the experiment and, as a single factor of biodynamic agriculture, application of biodynamic preparations had no effect on SOM stocks. (iii) Stocks of SOM, light fraction organic C (LFOC, ρ ≤ 2.0 g cm-3), light fraction organic N and Cmic decreased in the order FYMH > FYML > MSIH, MSIL for all sampling dates in 2008 (March, May, September, December).
However, the statistical significance of treatment effects differed between the dates, probably due to differences in spatial variation throughout the year. The high proportion of LFOC in total Corg stocks (45-55%) highlighted the importance of selective preservation of OM as a stabilization mechanism in this sandy Cambisol. The apparent turnover time of LFOC was between 21 and 32 years, which agreed well with studies covering substantially longer periods since vegetation change than ours. Overall, both approaches, (I) the combination of incubation, chemical fractionation and simple modelling and (II) the density fractionation, provided complementary information on the partitioning of SOM into pools of different stability. The density fractionation showed that differences in Corg stocks between FYM and MSI treatments were mainly located in the light fraction, i.e. induced by the higher recalcitrance of the organic input in the FYM treatments. Moreover, the combination of biological, chemical and mathematical methods indicated that effects of fertilizer rate on total Corg and Nt stocks may be short-lived, but that the effect of fertilizer type may persist for longer time spans in the sandy Cambisol.
Abstract:
We develop an algorithm that computes the gravitational potentials and forces on N point-masses interacting in three-dimensional space. The algorithm, based on analytical techniques developed by Rokhlin and Greengard, runs in order N time. In contrast to other fast N-body methods such as tree codes, which only approximate the interaction potentials and forces, this method is exact: it computes the potentials and forces to within any prespecified tolerance up to machine precision. We present an implementation of the algorithm for a sequential machine. We numerically verify the algorithm, and compare its speed with that of an O(N²) direct force computation. We also describe a parallel version of the algorithm that runs on the Connection Machine in order O(log N) time. We compare experimental results with those of the sequential implementation and discuss how to minimize communication overhead on the parallel machine.
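The O(N²) direct force computation used as the speed baseline is straightforward to sketch; the fast order-N algorithm itself (multipole expansions of the potential) is substantially more involved and is not reproduced here.

```python
import numpy as np

def direct_forces(pos, mass, G=1.0, eps=0.0):
    """Brute-force O(N^2) gravitational forces on N point masses.

    This is the direct-sum baseline the paper compares against, not the
    fast multipole algorithm. pos: (N, 3) array of positions, mass: (N,)
    array of masses, eps: optional softening length (an assumption here).
    """
    n = len(mass)
    forces = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]                    # vector toward body j
            d2 = r @ r + eps**2
            forces[i] += G * mass[i] * mass[j] * r / d2**1.5
    return forces
```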
Abstract:
"Expectation-Maximization'' (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
Abstract:
We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
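A flat, single-level version of such a mixture can be sketched as follows; it is a simplification in which the hierarchy and the EM fitting are omitted, showing only how a softmax gate (a GLIM) blends linear experts (also GLIMs) at prediction time.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_predict(x, gate_W, expert_W):
    """Prediction of a one-level mixture of linear experts.

    gate_W: (K, d) gating weights, expert_W: (K, d) expert weights,
    both assumed already trained (e.g., by EM). x: (d,) input.
    """
    g = softmax(gate_W @ x)    # mixture coefficients from a softmax GLIM
    y = expert_W @ x           # each expert is a linear model
    return g @ y               # gate-weighted blend of expert outputs
```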
Abstract:
Discontinuities in the solutions of systems of conservation laws are widely considered one of the main difficulties in numerical simulation. A numerical method is proposed for solving these partial differential equations with discontinuities in the solution. The method is able to track these sharp discontinuities or interfaces while still fully maintaining the conservation property. The motion of the front is obtained by solving a Riemann problem based on the state values at both of its sides, which are reconstructed using a weighted essentially non-oscillatory (WENO) scheme. The propagation of the front is coupled with the evaluation of "dynamic" numerical fluxes. Some numerical tests in 1D and preliminary results in 2D are presented.
Abstract:
In 2000 the European Statistical Office published the guidelines for developing the Harmonized European Time Use Surveys system. Under this unified framework, the first Time Use Survey of national scope was conducted in Spain during 2002-03. The aim of these surveys is to understand human behavior and people's lifestyles. Time allocation data are compositional in origin, that is, they are subject to non-negativity and constant-sum constraints; thus, standard multivariate techniques cannot be directly applied to analyze them. The goal of this work is to identify homogeneous Spanish Autonomous Communities with regard to the typical activity pattern of their respective populations. To this end, a fuzzy clustering approach is followed. Rather than the hard partitioning of classical clustering, where objects are allocated to a single group only, fuzzy methods identify overlapping groups of objects by allowing them to belong to more than one group. Concretely, the probabilistic fuzzy c-means algorithm is adapted to deal with the Spanish Time Use Survey microdata. As a result, a map distinguishing Autonomous Communities with similar activity patterns is drawn. Key words: time use data; fuzzy clustering; FCM; simplex space; Aitchison distance.
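The core updates of the fuzzy c-means algorithm mentioned here can be sketched as below, using the standard Euclidean distance. The paper's adaptation replaces this with the Aitchison distance appropriate for compositional data; that substitution is noted in the comments but not reproduced.

```python
import numpy as np

def fcm_iteration(X, centers, m=2.0):
    """One iteration of standard fuzzy c-means.

    X: (N, d) data, centers: (c, d) prototypes, m: fuzzifier > 1.
    Distances are Euclidean here; the paper adapts FCM to time-use
    compositions via the Aitchison distance instead.
    """
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (N, c)
    d = np.fmax(d, 1e-12)                  # avoid division by zero
    # Membership update: u_ik proportional to d_ik^(-2/(m-1)), rows sum to 1.
    u = d ** (-2.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)
    # Prototype update: fuzzy weighted means of the data.
    um = u ** m
    centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, centers
```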
Abstract:
Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A prior analysis of the seed pixels permits adjusting the homogeneity criterion to the region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes the classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results have proven the reliability of the proposed approach.
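A minimal seeded region-growing loop, the kind of process the detected contours would guide, can be sketched as follows. The fixed intensity tolerance used here is a placeholder; the paper's homogeneity criterion, built from clustering analysis and convex hulls, is not reproduced.

```python
import numpy as np
from collections import deque

def grow_region(img, seed, tol=10.0):
    """Grow a region from `seed` on a grayscale image.

    Pixels join the region if their intensity is within `tol` of the
    seed's intensity (a placeholder homogeneity criterion).
    img: 2-D array, seed: (row, col) tuple.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    seed_val = float(img[seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```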
Abstract:
This paper proposes a parallel architecture for estimating the motion of an underwater robot. It is well known that image processing requires a huge amount of computation, mainly at the low-level stage, where the algorithms deal with large amounts of data. In a motion estimation algorithm, correspondences between two images have to be solved at the low level. In underwater imaging, normalised correlation can be a solution in the presence of non-uniform illumination. Due to its regular processing scheme, a parallel implementation of the correspondence problem is an adequate approach to reducing the computation time. Taking into consideration the complexity of the normalised correlation criterion, a new approach is proposed that exploits the parallel organisation of every processor in the architecture.
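The normalised correlation score at the heart of the correspondence step can be sketched as below; subtracting the patch means and dividing by the energies is what makes the score tolerant to illumination changes. Scoring each candidate window is independent of the others, which is what makes the parallel mapping natural.

```python
import numpy as np

def normalised_correlation(a, b):
    """Zero-mean normalised correlation between two equal-sized patches.

    Returns a score in [-1, 1]; invariance to gain and offset changes in
    intensity is why it suits non-uniform underwater lighting.
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```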
Abstract:
This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique incorporates probabilistic scan matching with range scans gathered from a mechanical scanning imaging sonar (MSIS) and the robot dead-reckoning displacements estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method uses two extended Kalman filters (EKFs). The first estimates the local path travelled by the robot while grabbing the scan, as well as its uncertainty, and provides position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and keeps the poses of the registered scans. The raw data from the sensors are processed and fused online. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
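Both filters follow the generic EKF predict/update cycle, sketched below in textbook form. The vehicle- and scan-specific models of the paper (dead-reckoning motion, scan-matching measurements) are not reproduced; f and h here are placeholders for them.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One generic EKF predict/update cycle (textbook form).

    f, h: motion and measurement functions; F, H: their Jacobians,
    assumed already evaluated at the relevant linearization points;
    Q, R: process and measurement noise covariances.
    """
    # Predict with the motion model.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the measurement.
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```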
Abstract:
The authors focus on one of the methods for connection acceptance control (CAC) in ATM networks: the convolution approach. With the aim of reducing the cost in terms of calculation and storage requirements, they propose the use of the multinomial distribution function. This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements, which in turn makes a simple deconvolution process possible. Moreover, under certain conditions, additional improvements may be achieved.
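To make the idea concrete in the simplest case: for a single class of homogeneous on-off sources, the repeated convolution of per-connection bandwidth distributions collapses to a binomial, of which the paper's multinomial form is the multi-class generalization. The sketch below, an illustration rather than the authors' formulation, evaluates the probability that instantaneous demand exceeds the link capacity.

```python
from math import comb

def overflow_probability(n, p, rate, capacity):
    """P(instantaneous demand > capacity) for n homogeneous on-off
    connections, each active with probability p and consuming `rate`
    bandwidth units when active (single-class illustrative case).
    """
    max_active = int(capacity // rate)     # most active connections that fit
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(max_active + 1, n + 1))

# Example: 100 connections, 10% activity, 1 unit each, capacity 20 units.
print(overflow_probability(100, 0.1, 1, 20))
```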
Abstract:
The aim of traffic engineering is to optimise network resource utilization. Although several works on minimizing network resource utilization have been published, few have focused on the LSR label space. This paper proposes an algorithm that uses MPLS label stack features to reduce the number of labels used in LSP forwarding. Some tunnelling methods and the drawbacks of their MPLS implementations are also discussed. The algorithm described sets up the NHLFE tables in each LSR, creating asymmetric tunnels when possible. Experimental results show that the algorithm achieves a large reduction factor in the label space. The work presented here applies to both types of connections: P2MP and P2P.
Abstract:
In computer graphics, global illumination algorithms take into account not only the light that comes directly from the sources, but also the light interreflections. Algorithms of this kind produce very realistic images, but at a high computational cost, especially when dealing with complex environments. Parallel computation has been successfully applied to such algorithms to make it possible to compute highly realistic images in a reasonable time. We introduce here a speculation-based parallel solution for a global illumination algorithm in the context of radiosity, which takes advantage of the hierarchical nature of such an algorithm.
Abstract:
In this work, a methodology is implemented for including higher-order moments in portfolio selection, making use of the Generalized Hyperbolic Distribution, followed by a comparative analysis against the Markowitz model.
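The Markowitz benchmark in that comparison can be sketched as the global minimum-variance portfolio below; the higher-moment allocation based on the Generalized Hyperbolic Distribution is the paper's subject and is not reproduced here.

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance Markowitz portfolio.

    Solves min_w w' C w subject to sum(w) = 1, whose closed form is
    w = C^{-1} 1 / (1' C^{-1} 1). cov: (n, n) return covariance matrix.
    """
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()
```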
Abstract:
In image processing, segmentation algorithms constitute one of the main research focuses. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging process of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where now the input variable represents the histogram bins and the output is given by the set of regions obtained from the above split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Different experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
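The quantity driving both the split and merge phases is the mutual information of the region-intensity channel; a sketch of its evaluation from a joint probability table is given below. The split/merge bookkeeping itself (choosing which region to split or which pair to merge) is the paper's algorithm and is not reproduced.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X; Y), in bits, of a joint probability table.

    In the paper's channel, rows would index image regions and columns
    intensity-histogram bins; splits maximize the MI gain, merges
    minimize the MI loss.
    """
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (px * py))
    return float(np.nansum(terms))         # 0 * log 0 treated as 0
```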