161 results for k-Means algorithm
Abstract:
In this work, the oxidation of the model pollutant phenol has been studied by means of the O(3), O(3)-UV, and O(3)-H(2)O(2) processes. Experiments were carried out in a fed-batch system to investigate the effects of the initial dissolved organic carbon (DOC) concentration, the initial ozone concentration in the gas phase, the presence or absence of UVC radiation, and the initial hydrogen peroxide concentration. Experimental results were used in the modeling of the degradation processes by neural networks in order to simulate DOC-time profiles and evaluate the relative importance of the process variables.
Abstract:
The photodegradation of the herbicide clomazone in the presence of S(2)O(8)(2-) or of humic substances of different origin was investigated. A value of (9.4 +/- 0.4) x 10(8) M(-1) s(-1) was measured for the bimolecular rate constant for the reaction of sulfate radicals with clomazone in flash-photolysis experiments. Steady state photolysis of peroxydisulfate, leading to the formation of the sulfate radicals, in the presence of clomazone was shown to be an efficient photodegradation method for the herbicide. This is a relevant result regarding in situ chemical oxidation procedures involving peroxydisulfate as the oxidant. The main reaction products are 2-chlorobenzylalcohol and 2-chlorobenzaldehyde. The degradation kinetics of clomazone was also studied under steady state conditions induced by photolysis of Aldrich humic acid or a vermicompost extract (VCE). The results indicate that singlet oxygen is the main species responsible for clomazone degradation. The quantum yield of O(2)(a(1)Delta(g)) generation (lambda = 400 nm) for the VCE in D(2)O, Phi(Delta) = (1.3 +/- 0.1) x 10(-3), was determined by measuring the O(2)(a(1)Delta(g)) phosphorescence at 1270 nm. The value of the overall quenching constant of O(2)(a(1)Delta(g)) by clomazone was found to be (5.7 +/- 0.3) x 10(7) M(-1) s(-1) in D(2)O. The bimolecular rate constant for the reaction of clomazone with singlet oxygen was k(r) = (5.4 +/- 0.1) x 10(7) M(-1) s(-1), which means that the quenching process is mainly reactive.
Abstract:
The solar driven photo-Fenton process for treating water containing phenol as a contaminant has been evaluated by means of pilot-scale experiments with a parabolic trough solar reactor (PTR). The effects of Fe(II) (0.04-1.0 mmol L(-1)), H(2)O(2) (7-270 mmol L(-1)), initial phenol concentration (100 and 500 mg C L(-1)), solar radiation, and operation mode (batch and fed-batch) on the process efficiency were investigated. More than 90% of the dissolved organic carbon (DOC) was removed within 3 hours of irradiation or less, a performance equivalent to that of artificially-irradiated reactors, indicating that solar light can be used either as an effective complementary source of photons or as an alternative one for the photo-Fenton degradation process. A non-linear multivariable model based on a neural network was fit to the experimental results of batch-mode experiments in order to evaluate the relative importance of the process variables for DOC removal over the reaction time. This included solar radiation, which is not a controlled variable. The observed behavior of the system in batch mode was compared with fed-batch experiments carried out under similar conditions. The main contribution of the study consists of the results from experiments under different conditions and the discussion of the system behavior. Both constitute important information for the design and scale-up of solar radiation-based photodegradation processes.
Abstract:
We present a novel array RLS algorithm with forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority compared to alternative algorithms in the context of adaptive beamforming.
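For context, the standard exponentially-weighted RLS that the abstract contrasts against can be sketched as follows. This is a minimal system-identification example, not the paper's array algorithm; the filter order, forgetting factor, and test signal are illustrative choices. Note how the initial regularization delta*I enters only through the initialization of P and is then discounted by the forgetting factor at every step, which is the "fading regularization" effect.

```python
import numpy as np

def rls_identify(x, d, order, lam=0.99, delta=1.0):
    """Standard exponentially weighted RLS system identification.

    With forgetting factor lam < 1, the initial regularization
    delta*I decays geometrically inside P over time -- the fading
    regularization the array algorithm above is designed to avoid.
    """
    w = np.zeros(order)
    P = np.eye(order) / delta              # inverse-correlation estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # regressor, most recent first
        k = P @ u / (lam + u @ P @ u)      # gain vector
        e = d[n] - w @ u                   # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
true_w = np.array([0.7, -0.3])             # unknown 2-tap FIR system
d = np.convolve(x, true_w)[:len(x)]        # noiseless desired signal
w_hat = rls_identify(x, d, order=2)        # converges to true_w
```

In the noiseless case the estimate matches the true taps to high accuracy after a few hundred samples.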
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of transient and steady-state performances at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also an improved steady-state performance as compared to an LMS-based scheme. In addition, the new approach attains an acceptable misadjustment performance with lower computational and memory cost, provided the number of regressor vectors and filter length parameters are appropriately chosen, as compared to a distributed recursive-least-squares (RLS) based method.
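The single-node APA building block the abstract builds on can be sketched as below. This is a minimal illustrative implementation (not the distributed incremental scheme of the paper); the AR(1) input, filter order, K, and step size are assumptions chosen to show the colored-input regime where APA outpaces LMS.

```python
import numpy as np

def apa_identify(x, d, order, K=4, mu=0.5, eps=1e-4):
    """Affine projection algorithm (APA): each update projects the
    error onto the span of the K most recent regressors, which
    decorrelates colored inputs and speeds up convergence vs. LMS."""
    w = np.zeros(order)
    for n in range(order - 1 + K - 1, len(x)):
        # stack the K most recent regressors as rows of X
        X = np.array([x[n - k - order + 1:n - k + 1][::-1]
                      for k in range(K)])
        d_vec = np.array([d[n - k] for k in range(K)])
        e = d_vec - X @ w                   # a priori errors
        # regularized projection update
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(K), e)
    return w

rng = np.random.default_rng(1)
v = rng.standard_normal(400)
x = np.zeros(400)
for n in range(1, 400):                     # colored (AR(1)) input,
    x[n] = 0.9 * x[n - 1] + v[n]            # the regime where LMS slows down
true_w = np.array([1.0, -0.5])
d = np.convolve(x, true_w)[:400]
w_hat = apa_identify(x, d, order=2)
```

With K regressors per update the per-step cost grows with K, illustrating the complexity/convergence trade-off against LMS and RLS that the abstract discusses.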
Abstract:
We derive an easy-to-compute approximate bound for the range of step-sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust, and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
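The CMA update the abstract analyzes can be illustrated in its simplest real, single-tap form. This toy example (constant-modulus BPSK through a flat gain channel, with an assumed small step size) shows the stable behavior near a minimum of the CM cost; with too large a step size, outside the bound the paper derives, the same recursion destabilizes.

```python
import random

def cma_one_tap(received, mu=0.1, w0=1.0, R2=1.0):
    """Constant-modulus algorithm with one real tap: minimize
    E[(y^2 - R2)^2] via the stochastic-gradient update
    w <- w + mu * r * y * (R2 - y^2)."""
    w = w0
    for r in received:
        y = w * r                     # equalizer output
        w += mu * r * y * (R2 - y * y)
    return w

rng = random.Random(0)
symbols = [rng.choice((-1.0, 1.0)) for _ in range(500)]  # constant modulus
received = [0.5 * s for s in symbols]    # flat channel with gain 0.5
w = cma_one_tap(received)                # settles near 1/0.5 = 2
```

Because BPSK has zero modulus variation, convergence here is smooth, consistent with the abstract's point that constellations with smaller modulus spread make CMA more robust.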
Abstract:
Higher order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations are focused on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of the two constant coefficients, C-1 and C-2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For such purpose, we present a method to individually optimize the pair of coefficients, C-1 and C-2, based on any desired grid size resolution and size of time step. Particularly, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid size resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
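The four-point central-difference operator in question, with the textbook Taylor-derived coefficients C1 = 9/8 and C2 = -1/24 (the baseline the paper's optimization perturbs), can be checked numerically. This sketch compares the standard two-point and four-point staggered stencils on a smooth test function; the paper instead tunes C1, C2 per grid resolution rather than using these fixed values.

```python
import math

C1, C2 = 9/8, -1/24   # standard fourth-order staggered coefficients

def d_stag2(f, x, h):
    """Second-order staggered central difference."""
    return (f(x + h/2) - f(x - h/2)) / h

def d_stag4(f, x, h):
    """Fourth-order (2,4)-style four-point staggered operator."""
    return (C1 * (f(x + h/2) - f(x - h/2))
            + C2 * (f(x + 3*h/2) - f(x - 3*h/2))) / h

x, h = 0.3, 0.1
err2 = abs(d_stag2(math.sin, x, h) - math.cos(x))
err4 = abs(d_stag4(math.sin, x, h) - math.cos(x))
```

Matching the Taylor expansion requires C1 + 3*C2 = 1 (consistency) and C1/24 + 9*C2/8 = 0 (cancel the h^2 term), which the values above satisfy; on this test the four-point operator's error is orders of magnitude smaller at the same h.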
Abstract:
Starting from the Durbin algorithm in polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one where the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP), an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix, which exploits its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the GS-inversion-based procedures for up to a minimum of five iterations at various linear prediction (LP) orders.
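For reference, the classical Levinson-Durbin recursion that the abstract compares against solves the Toeplitz autocorrelation (Yule-Walker) equations in O(p^2) operations. A minimal sketch with an illustrative autocorrelation sequence:

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz normal equations
    for the linear-prediction coefficients a, given the autocorrelation
    sequence r[0..order]. Returns (a, final prediction-error power)."""
    a = [0.0] * order
    e = r[0]                                  # prediction-error power
    for i in range(order):
        # reflection coefficient from the current residual correlation
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / e
        # Levinson order-update of the coefficient vector
        a_new = a[:]
        a_new[i] = k
        for j in range(i):
            a_new[j] = a[j] - k * a[i - 1 - j]
        a = a_new
        e *= (1 - k * k)                      # error power shrinks
    return a, e

# illustrative autocorrelation r = [r0, r1, r2]
a, e_final = levinson_durbin([2.0, 1.0, 0.8], 2)
```

Solving the same 2x2 Toeplitz system directly gives a = [0.4, 0.2], which the recursion reproduces; this O(p^2) cost is the baseline against which the GS-inversion procedures are judged in the abstract.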
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal one of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits smaller discrepancy from the optimum power vector solution and better convergence (under fixed and adaptive convergence factor) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
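The Euler (ENI) discretization of the Verhulst equation at the core of the proposal can be sketched as follows. This is only the scalar logistic recursion, with illustrative parameter values; the actual DPCA maps it to per-user powers and SINR targets, and its convergence conditions are the multi-user analogue of the step-size condition noted in the comment.

```python
def verhulst_euler(p0, r, K, dt, steps):
    """Euler (ENI) discretization of the Verhulst logistic equation
    dp/dt = r*p*(1 - p/K). The iterate converges monotonically to the
    carrying capacity K when 0 < r*dt < 1 -- the kind of step-size
    condition underlying the DPCA's analytical convergence analysis."""
    p = p0
    for _ in range(steps):
        p += dt * r * p * (1 - p / K)
    return p

# small initial "power" grows and settles at the equilibrium K = 5
p_final = verhulst_euler(p0=0.1, r=1.0, K=5.0, dt=0.5, steps=100)
```

Near the equilibrium the recursion contracts with factor |1 - r*dt|, so a fixed convergence factor trades speed against robustness, which is the trade-off the adaptive-factor variant in the paper addresses.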
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. In order to do that, we first derive some important properties for a pseudo-Poisson equation associated to the problem. Next, it is shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
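The evaluate/improve loop of policy iteration is easiest to see on a finite MDP. The sketch below is a minimal discounted-cost version on a hypothetical two-state, two-action model (transition probabilities and costs are made up for illustration); the paper's setting — long-run average cost for PDMPs on Borel spaces — replaces the linear-system evaluation step with the pseudo-Poisson equation, but the alternation is the same.

```python
import numpy as np

# toy model: P[a][s][s'] transition probabilities, c[s][a] stage costs
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.1, 0.9],
               [0.7, 0.3]]])
c = np.array([[1.0, 2.0],
              [3.0, 0.5]])
gamma = 0.9                       # discount factor

def policy_iteration(P, c, gamma):
    n = c.shape[0]
    policy = np.zeros(n, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma*P_pi) V = c_pi exactly
        P_pi = np.array([P[policy[s], s] for s in range(n)])
        c_pi = np.array([c[s, policy[s]] for s in range(n)])
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, c_pi)
        # policy improvement: greedy one-step lookahead (minimize cost)
        Q = c + gamma * np.einsum('ast,t->sa', P, V)
        new_policy = np.argmin(Q, axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V      # stable policy satisfies optimality eq.
        policy = new_policy

policy, V = policy_iteration(P, c, gamma)
```

On a finite MDP there are finitely many policies and each iteration strictly improves, so the loop terminates at a policy satisfying the optimality equation — the finite-dimensional analogue of the convergence result the abstract establishes.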
Abstract:
Due to the several kinds of services that use the Internet and data network infrastructures, present-day networks are characterized by a diversity of traffic types with complex statistical properties, such as complex temporal correlation and non-Gaussian distributions. The complex temporal correlation of network traffic may be characterized by Short Range Dependence (SRD) and Long Range Dependence (LRD). Models such as fGN (Fractional Gaussian Noise) may capture the LRD but not the SRD. This work presents two methods for traffic generation that synthesize approximate realizations of self-similar fGN random processes with SRD. The first employs the IDWT (Inverse Discrete Wavelet Transform) and the second the IDWPT (Inverse Discrete Wavelet Packet Transform). The variance map concept was developed, which allows associating the LRD and SRD behaviors directly with the wavelet transform coefficients. The developed methods are extremely flexible and allow the generation of Gaussian time series with complex statistical behaviors.
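The IDWT-synthesis idea can be sketched with the simplest wavelet. The example below is an assumed, minimal Haar version: draw independent Gaussian wavelet coefficients whose variance is a function of the scale (a crude stand-in for the paper's variance map) and run the inverse transform; the actual methods use the variance map to impose fGN-like LRD plus SRD, and the IDWPT variant refines the detail subbands as well.

```python
import math
import random

def ihaar_step(approx, detail):
    """One inverse orthonormal Haar DWT stage: length doubles."""
    out = []
    s = math.sqrt(2.0)
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

def synthesize(levels, scale_var, seed=0):
    """Draw independent Gaussian wavelet coefficients with
    scale-dependent variance (the 'variance map' idea) and apply
    the inverse Haar transform to get a 2**levels-sample series."""
    rng = random.Random(seed)
    approx = [rng.gauss(0.0, math.sqrt(scale_var(levels)))]
    for j in range(levels, 0, -1):           # coarsest to finest scale
        detail = [rng.gauss(0.0, math.sqrt(scale_var(j)))
                  for _ in range(len(approx))]
        approx = ihaar_step(approx, detail)
    return approx

# variance growing with scale, qualitatively like an LRD (fGN-type) process
x = synthesize(levels=8, scale_var=lambda j: 2.0 ** (0.6 * j))
```

Because the transform is linear and the coefficients Gaussian, the output is a Gaussian series whose correlation structure is controlled entirely by the per-scale variances — exactly the lever the variance map exposes.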
Abstract:
An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated in hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of Sao Paulo (Brazil), the solution found by the algorithm exhibits lower loss than the topology built by the utility company.
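The ant-inspired mechanism — pheromone-biased random construction plus evaporation and reinforcement — can be sketched on a toy graph. The graph, loss values, and parameters below are entirely hypothetical (a real distribution network would involve radiality constraints and a power-flow loss model), but the sketch shows the core loop.

```python
import random

# toy feeder graph: adj[u] = list of (v, loss); seek min-loss path 0 -> 3
adj = {0: [(1, 1.0), (2, 2.0)],
       1: [(2, 0.5), (3, 2.0)],
       2: [(3, 1.0)],
       3: []}

def ant_search(adj, src, dst, n_ants=300, rho=0.1, seed=1):
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in adj for v, _ in adj[u]}  # pheromone trails
    best_path, best_cost = None, float('inf')
    for _ in range(n_ants):
        node, path, cost = src, [src], 0.0
        while node != dst:
            nxt = adj[node]
            # pheromone-weighted random choice, biased toward low loss
            weights = [tau[(node, v)] / w for v, w in nxt]
            v, w = rng.choices(nxt, weights=weights)[0]
            path.append(v)
            cost += w
            node = v
        if cost < best_cost:
            best_path, best_cost = path, cost
        for e in tau:                       # evaporation
            tau[e] *= (1 - rho)
        for u, v in zip(path, path[1:]):    # reinforce traversed edges
            tau[(u, v)] += 1.0 / cost
    return best_path, best_cost

best_path, best_cost = ant_search(adj, 0, 3)
```

Over the iterations, pheromone accumulates on the low-loss route (here 0-1-2-3 with total loss 2.5), concentrating subsequent ants on it.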
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well-known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence of excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic, and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
Abstract:
This work proposes the use of the evolutionary computation methodology to jointly solve the multiuser channel estimation (MuChE) and detection problems in their maximum-likelihood formulation, both related to direct sequence code division multiple access (DS/CDMA). The effectiveness of the proposed heuristic approach is proven by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results considering a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multi-user detection (MuD) show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies, and medium system load, while exhibiting lower complexity than both maximum-likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was (jointly) analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
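The GA machinery behind GAMuChE/GAMuD — a population of candidate solutions refined by selection, crossover, and mutation — can be sketched on a toy fitness function. This is not the paper's estimator or detector: the one-max objective and all parameters below are illustrative stand-ins for a likelihood metric over candidate channel/symbol hypotheses.

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, gens=60,
                   p_mut=0.02, seed=42):
    """Minimal generational GA: tournament selection, one-point
    crossover, bit-flip mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        new_pop = [best[:]]                      # elitism: keep the best
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            # independent bit-flip mutation
            child = [b ^ (rng.random() < p_mut) for b in child]
            new_pop.append(child)
        pop = new_pop
        best = max(pop, key=fitness)
    return best

# one-max toy objective: maximize the number of 1 bits
best = genetic_search(sum, n_bits=16)
```

In the joint MuChE/MuD setting the chromosome would instead encode channel coefficients or a symbol vector and the fitness would be the (log-)likelihood, but the population dynamics — and the complexity-per-generation accounting the abstract refers to — are the same.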
Abstract:
Titanium oxide (TiO(2)) has been extensively applied in the medical area due to its proven biocompatibility with human cells [1]. This work presents the characterization of titanium oxide thin films as a potential dielectric to be applied in ion sensitive field-effect transistors. The films were obtained by rapid thermal oxidation and annealing (at 300, 600, 960 and 1200 degrees C) of thin titanium films of different thicknesses (5 nm, 10 nm and 20 nm) deposited by e-beam evaporation on silicon wafers. These films were analyzed as-deposited and after annealing in forming gas for 25 min by Ellipsometry, Fourier Transform Infrared Spectroscopy (FTIR), Raman Spectroscopy (RAMAN), Atomic Force Microscopy (AFM), Rutherford Backscattering Spectroscopy (RBS) and Ti-K edge X-ray Absorption Near Edge Structure (XANES). Thin film thickness, roughness, surface grain sizes, refractive indexes and oxygen concentration depend on the oxidation and annealing temperature. Structural characterization showed mainly the presence of the crystalline rutile phase; however, other oxides such as Ti(2)O(3), an interfacial SiO(2) layer between the dielectric and the substrate, and the anatase crystalline phase of TiO(2) were also identified. Electrical characteristics were obtained by means of I-V and C-V measured curves of Al/Si/TiO(x)/Al capacitors. These curves showed that the films had high dielectric constants between 12 and 33, interface charge density of about 10(10)/cm(2) and leakage current density between 1 and 10(-4) A/cm(2). Field-effect transistors were fabricated in order to analyze I(D) x V(DS) and log I(D) x Bias curves. An Early voltage of -1629 V, an R(OUT) of 215 M Omega and a slope of 100 mV/dec were determined for the 20 nm TiO(x) film thermally treated at 960 degrees C. (C) 2009 Elsevier B.V. All rights reserved.