932 results for Information dispersal algorithm
Abstract:
A broader characterization of industrial wastewaters, especially with respect to hazardous compounds and their potential toxicity, is often necessary in order to determine the best practical treatment (or pretreatment) technology available to reduce the discharge of harmful pollutants to the environment or to publicly owned treatment works. Using a toxicity-directed approach, this paper sets the basis for a rational treatability study of polyester resin manufacturing wastewater. Relevant physical and chemical characteristics were determined. Respirometry was used for toxicity reduction evaluation after physical and chemical effluent fractionation. Of all the procedures investigated, only air stripping was significantly effective in reducing wastewater toxicity: air stripping at pH 7 reduced toxicity by 18.2%, while at pH 11 a toxicity reduction of 62.5% was observed. Results indicated that the toxicants responsible for the most significant fraction of the effluent's instantaneous toxic effect on unadapted activated sludge were organic compounds that are poorly volatilized, or not volatilized at all, under acidic conditions. These results provide useful directions for conducting treatability studies grounded in actual effluent properties rather than in empirical assumptions or the scarce specific data available on this kind of industrial wastewater. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
In this work, an algorithm to compute the envelope of non-destructive testing (NDT) signals is proposed. The method increases processing speed and reduces memory requirements in extensive data processing, and it has the advantage of preserving the information content of the data for physical modeling of time-dependent measurements. The algorithm is conceived to be applied to the analysis of data from non-destructive tests. A comparison between different envelope methods and the proposed method, applied to magnetic Barkhausen noise (MBN) signals, is presented. (C) 2010 Elsevier Ltd. All rights reserved.
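The abstract does not detail the proposed algorithm; as context, a common reference method such a study would compare against is the analytic-signal (Hilbert transform) envelope. A minimal sketch, with the function names and test signal purely illustrative:

```python
# Baseline envelope extraction for a 1-D NDT signal via the analytic signal.
# This is a standard reference method, not the paper's proposed algorithm,
# whose details are not given in the abstract.
import numpy as np
from scipy.signal import hilbert

def hilbert_envelope(x: np.ndarray) -> np.ndarray:
    """Return the instantaneous amplitude (envelope) of a real signal."""
    analytic = hilbert(x)      # x + j * H(x)
    return np.abs(analytic)    # |analytic signal| = envelope

# Example: a damped burst, as might appear in a Barkhausen-noise record.
t = np.linspace(0.0, 1.0, 2000)
x = np.exp(-((t - 0.5) ** 2) / 0.01) * np.sin(2 * np.pi * 200 * t)
env = hilbert_envelope(x)
```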
Abstract:
In this work, the applicability of a new algorithm for the estimation of mechanical properties from instrumented indentation data was studied for thin films. The applicability was analyzed with the aid of both three-dimensional finite element simulations and experimental indentation tests. The numerical approach allowed studying the effect of the substrate on the estimation of the mechanical properties of the film, which was conducted based on the ratio h_max/l between maximum indentation depth and film thickness. For the experimental analysis, indentation tests were conducted on AISI H13 tool steel specimens, plasma nitrided and coated with TiN thin films. Results indicated that, for the conditions analyzed in this work, the elastic deformation of the substrate limited the extraction of the mechanical properties of the film/substrate system. This limitation occurred even at low h_max/l ratios, especially for the estimation of the yield strength and the strain hardening exponent. At indentation depths lower than 4% of the film thickness, the proposed algorithm estimated the mechanical properties of the film accurately. Particularly for hardness, precise values were estimated at h_max/l ratios lower than 0.1, i.e., 10% of the film thickness. (C) 2010 Published by Elsevier B.V.
Abstract:
This work discusses the determination of breathing patterns in time sequences of images obtained from magnetic resonance (MR) and their use in the temporal registration of coronal and sagittal images. The registration is made without the use of any triggering information or any special gas to enhance the contrast. The temporal sequences of images are acquired in free breathing. The real movement of the lung has never been seen directly, as it is totally dependent on its surrounding muscles and collapses without them. The visualization of the lung in motion is a current topic of research in medicine. The lung movement is not periodic and is susceptible to variations in the degree of respiration. Compared to computerized tomography (CT), MR imaging involves longer acquisition times but is preferable because it does not involve ionizing radiation. As coronal and sagittal sequences of images are orthogonal to each other, their intersection corresponds to a segment in three-dimensional space. The registration is based on the analysis of this intersection segment. A time sequence of this intersection segment can be stacked, defining a two-dimensional spatio-temporal (2DST) image. The algorithm proposed in this work can detect asynchronous movements of the internal lung structures and of the organs surrounding the lung. It is assumed that the diaphragmatic movement is the principal movement and that all lung structures move almost synchronously. The synchronization is performed through a pattern named the respiratory function, which is obtained by processing a 2DST image. An interval Hough transform algorithm searches for movements synchronized with the respiratory function. A greedy active contour algorithm adjusts small discrepancies caused by asynchronous movements in the respiratory patterns. The output is a set of respiratory patterns. Finally, the composition of coronal and sagittal image pairs that are in the same breathing phase is performed by comparing the respiratory patterns originated from the diaphragmatic and upper boundary surfaces. When available, the respiratory patterns associated with internal lung structures are also used. The results of the proposed method are compared with the pixel-by-pixel comparison method. The proposed method increases the number of registered pairs representing composed images and allows an easy check of the breathing phase. (C) 2010 Elsevier Ltd. All rights reserved.
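The core data structure, as described in the abstract, is the 2DST image: the coronal/sagittal intersection segment extracted from each frame and stacked over time. The sketch below illustrates that construction and a simple phase-matching criterion; the array shapes and the correlation-based dissimilarity are assumptions, not the paper's exact interval Hough transform or active contour steps:

```python
# Illustrative sketch of the 2DST construction and breathing-phase matching.
import numpy as np

def build_2dst(frames: np.ndarray, column: int) -> np.ndarray:
    """Stack the intersection segment (one image column) across time.

    frames: (T, H, W) temporal image sequence; column: index where the
    orthogonal slice planes intersect. Returns an (H, T) spatio-temporal image.
    """
    return frames[:, :, column].T

def phase_distance(pattern_a: np.ndarray, pattern_b: np.ndarray) -> float:
    """Dissimilarity between two respiratory patterns (e.g., diaphragm
    height over time); lower values mean closer breathing phases."""
    a = (pattern_a - pattern_a.mean()) / (pattern_a.std() + 1e-12)
    b = (pattern_b - pattern_b.mean()) / (pattern_b.std() + 1e-12)
    return float(np.mean((a - b) ** 2))
```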
Abstract:
The cost of a new ship design heavily depends on the principal dimensions of the ship; however, minimizing these dimensions often conflicts with minimizing oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and coefficients of form of tankers via a genetic algorithm. A multi-objective optimization problem was formulated by using two objective attributes in the evaluation of each design: total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the nondominated Pareto frontier. At the end of the study, three real ships are used as case studies. [DOI:10.1115/1.4002740]
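A hedged sketch of the nondominated (Pareto) filtering step that such a multi-objective genetic algorithm relies on, for two minimization objectives; the tuple layout (total cost, mean oil outflow) is an assumption for illustration:

```python
# Extract the nondominated (Pareto) set for two minimization objectives,
# e.g. (total_cost, mean_oil_outflow) per candidate ship design.
# A design dominates another if it is no worse in both objectives and
# strictly better in at least one.
from typing import List, Tuple

def pareto_front(designs: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    front = []
    for i, (c_i, o_i) in enumerate(designs):
        dominated = any(
            (c_j <= c_i and o_j <= o_i) and (c_j < c_i or o_j < o_i)
            for j, (c_j, o_j) in enumerate(designs) if j != i
        )
        if not dominated:
            front.append((c_i, o_i))
    return front

# Example: the middle design is dominated by the first.
print(pareto_front([(100.0, 5.0), (120.0, 6.0), (90.0, 8.0)]))
```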
Abstract:
A great deal of attention in the supply chain management literature is devoted to studying material and demand information flows and their coordination. But in many situations, supply chains may convey information of a different nature: they may be an important channel through which companies deliver knowledge, or, more specifically, technical information to the market. This paper studies this technical flow and highlights its particular requirements. Drawing upon qualitative field research, it studies pharmaceutical companies, which face a very specific challenge: consumers do not have discretion over their choices; ethical drugs must be prescribed by physicians before they can be bought and used by final consumers. The technical information flow is rich, and must be redundant and delivered early at multiple points. Thus, apart from the regular material channel in which products and order information flow, those companies build a specialized information channel, developed to communicate with those who need the information in order to create demand. The conclusions can be extended to supply chains where products and services are complex and decision makers must be clearly informed about technology-related information. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
We present a novel array RLS algorithm with a forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority over alternative algorithms in the context of adaptive beamforming.
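For reference, a minimal sketch of the standard exponentially weighted RLS update that the proposed array algorithm improves upon; the array (QR-type) form and the time-varying regularization matrices are not reproduced here, since the abstract does not specify them:

```python
# Standard exponentially weighted RLS step (reference point only; not the
# paper's array algorithm with generic regularization).
import numpy as np

def rls_update(w, P, u, d, lam=0.99):
    """w: (M,) weights; P: (M, M) inverse input-correlation estimate;
    u: (M,) regressor; d: desired sample; lam: forgetting factor."""
    Pu = P @ u
    k = Pu / (lam + u @ Pu)          # gain vector
    e = d - w @ u                    # a priori estimation error
    w = w + k * e
    P = (P - np.outer(k, Pu)) / lam  # rank-1 downdate, scaled by 1/lambda
    return w, P, e
```

The "fading regularization" issue mentioned in the abstract stems from this 1/lambda scaling: any initial regularization in P is exponentially forgotten over time.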
Distributed Estimation Over an Adaptive Incremental Network Based on the Affine Projection Algorithm
Abstract:
We study the problem of distributed estimation based on the affine projection algorithm (APA), which is developed from Newton's method for minimizing a cost function. The proposed solution is formulated to ameliorate the limited convergence properties of least-mean-square (LMS) type distributed adaptive filters with colored inputs. The analysis of the transient and steady-state performance at each individual node within the network is developed by using a weighted spatial-temporal energy conservation relation and confirmed by computer simulations. The simulation results also verify that the proposed algorithm provides not only a faster convergence rate but also improved steady-state performance compared to an LMS-based scheme. In addition, the new approach attains acceptable misadjustment performance at lower computational and memory cost than a distributed recursive least-squares (RLS) based method, provided that the number of regressor vectors and the filter length are appropriately chosen.
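A minimal sketch of a single APA step at one node, assuming the K most recent regressors are used; the incremental cooperation protocol between nodes is not shown, as the abstract does not detail it:

```python
# One affine projection algorithm (APA) step for a single node.
import numpy as np

def apa_update(w, U, d, mu=0.5, eps=1e-6):
    """w: (M,) weights; U: (K, M) matrix of the K most recent regressors;
    d: (K,) desired samples; mu: step size; eps: regularization."""
    e = d - U @ w                    # a priori error vector over the K regressors
    # Project the error through the (regularized) regressor Gram matrix:
    w = w + mu * U.T @ np.linalg.solve(U @ U.T + eps * np.eye(U.shape[0]), e)
    return w
```

With K = 1 this reduces to normalized LMS; larger K is what gives APA its faster convergence with colored inputs, at a higher computational cost.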
Abstract:
We derive an easy-to-compute approximate bound for the range of step sizes for which the constant-modulus algorithm (CMA) will remain stable if initialized close to a minimum of the CM cost function. Our model highlights the influence of the signal constellation used in the transmission system: for smaller variation in the modulus of the transmitted symbols, the algorithm will be more robust, and the steady-state misadjustment will be smaller. The theoretical results are validated through several simulations, for long and short filters and channels.
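For context, the standard CMA-2,2 recursion whose step-size range the paper bounds; the bound itself is not reproduced here, since it is not given in the abstract:

```python
# One CMA-2,2 update for a complex-valued equalizer.
import numpy as np

def cma_update(w, u, mu, R2):
    """w: (M,) equalizer taps; u: (M,) received regressor; mu: step size;
    R2: dispersion constant of the constellation, E|s|^4 / E|s|^2."""
    y = np.vdot(w, u)                # equalizer output, y = w^H u
    e = y * (R2 - np.abs(y) ** 2)    # constant-modulus error term
    return w + mu * np.conj(e) * u
```

The constellation dependence noted in the abstract enters through R2 and the modulus variation of the symbols: constant-modulus constellations (e.g., PSK) make the error term vanish exactly at perfect equalization.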
Abstract:
Higher-order (2,4) FDTD schemes used for numerical solutions of Maxwell's equations focus on diminishing the truncation errors caused by the Taylor series expansion of the spatial derivatives. These schemes use a larger computational stencil, which generally makes use of two constant coefficients, C1 and C2, for the four-point central-difference operators. In this paper we propose a novel way to diminish these truncation errors, in order to obtain more accurate numerical solutions of Maxwell's equations. For that purpose, we present a method to individually optimize the pair of coefficients, C1 and C2, based on any desired grid resolution and time-step size. In particular, we are interested in using coarser grid discretizations to be able to simulate electrically large domains. The results of our optimization algorithm show a significant reduction in dispersion error and numerical anisotropy for all modeled grid resolutions. Numerical simulations of free-space propagation verify the very promising theoretical results. The model is also shown to perform well in more complex, realistic scenarios.
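The four-point central-difference operator referred to above has the standard (2,4) coefficients C1 = 9/8 and C2 = -1/24 on a staggered grid; the paper replaces these with pairs optimized per grid resolution and time step. A minimal sketch of the operator itself:

```python
# Four-point central-difference operator used by (2,4) FDTD schemes:
# approximates df/dx at staggered midpoints with grid spacing dx.
# Defaults are the standard Taylor-derived coefficients; the paper
# optimizes the (C1, C2) pair instead.
import numpy as np

def central_diff_24(f: np.ndarray, dx: float, C1=9/8, C2=-1/24) -> np.ndarray:
    """Fourth-order staggered difference of samples f (interior midpoints only)."""
    return (C1 * (f[2:-1] - f[1:-2]) + C2 * (f[3:] - f[:-3])) / dx
```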
Abstract:
Starting from the Durbin algorithm in a polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one in which the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP) modeling, an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix, exploiting its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the GS-inversion-based procedures for up to a minimum of five iterations at various linear prediction (LP) orders.
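For reference, the classical Levinson-Durbin recursion that solves the Toeplitz autocorrelation normal equations in O(p^2) operations; the isometric-transformation and GS-inversion variants discussed in the abstract are not reproduced here:

```python
# Levinson-Durbin recursion for the linear-prediction normal equations.
import numpy as np

def levinson_durbin(r: np.ndarray, order: int):
    """r: autocorrelation sequence r[0..order]; returns (a, err) where
    a are the LP coefficients (a[0] = 1) and err is the prediction error power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / err   # reflection coefficient
        a[1:m + 1] += k * a[m - 1::-1]        # order-update of the coefficients
        err *= (1.0 - k * k)                  # updated prediction error power
    return a, err
```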
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct-sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity, are analyzed through simulations. The simulation results are compared with those of two other DPCAs: the classic algorithm derived by Foschini and Miljanic, and the sigmoidal algorithm of Uykan and Koivo. Under estimation-error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
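A heavily hedged sketch of what an Euler-discretized Verhulst power update could look like for one user: the abstract does not give the recursion, so the choice of the logistic carrying capacity as the power meeting the target SINR is an assumption for illustration only.

```python
# Hedged sketch: Euler discretization of a Verhulst (logistic) power
# update for one DS-CDMA user. Not the paper's exact recursion; the
# carrying capacity p_star is assumed to be the power that would meet
# the target SINR (SINR is linear in transmit power for fixed interference).
def verhulst_power_step(p, sinr, sinr_target, alpha=0.5):
    """p: current transmit power; sinr: measured SINR at power p;
    sinr_target: desired SINR; alpha: convergence factor."""
    p_star = p * sinr_target / sinr          # power achieving the target SINR
    # Euler step of dp/dt = alpha * p * (1 - p / p_star):
    return p + alpha * p * (1.0 - p / p_star)
```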
Abstract:
In order to provide adequate multivariate measures of information flow between neural structures, modified expressions of partial directed coherence (PDC) and the directed transfer function (DTF), two popular multivariate connectivity measures employed in neuroscience, are introduced, and their formal relationship to mutual information rates is proved.
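For context, the standard (unmodified) PDC from a fitted multivariate autoregressive model; the information-theoretic modifications introduced in the paper are not reproduced here:

```latex
% Standard PDC from an MVAR model x(n) = \sum_{r=1}^{p} A_r x(n-r) + w(n).
\[
  \bar{A}(f) = I - \sum_{r=1}^{p} A_r e^{-\mathrm{i} 2\pi f r},
  \qquad
  \pi_{ij}(f) = \frac{\bar{A}_{ij}(f)}
                     {\sqrt{\sum_{k=1}^{N} \lvert \bar{A}_{kj}(f) \rvert^{2}}},
\]
```

so that |pi_ij(f)|^2 quantifies the directed influence of series j on series i at frequency f, normalized over all outflows from j.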
Abstract:
This work introduces the problem of the best choice among M combinations of the shortest paths for the dynamic provisioning of lightpaths in all-optical networks. To solve this problem in an optimized way (shortest path and load balance), a new fixed routing algorithm, named Best among the Shortest Routes (BSR), is proposed. The BSR's performance is compared, in terms of blocking probability and network utilization, with Dijkstra's shortest path algorithm and other algorithms proposed in the literature. The evaluated scenarios include several representative topologies for all-optical networking and different wavelength conversion architectures. For all studied scenarios, BSR achieved superior performance. (C) 2010 Elsevier B.V. All rights reserved.
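A hedged sketch of the selection idea behind a "best among the shortest routes" policy: enumerate a few candidate shortest paths, then pick the one whose most-loaded link is lightest. The `networkx` enumeration and the bottleneck-load criterion are illustrative assumptions; the BSR cost criterion in the paper may differ:

```python
# Pick the least-loaded of the m shortest paths between two nodes.
from itertools import islice
import networkx as nx

def best_among_shortest(G: nx.Graph, src, dst, m: int = 3):
    """Each edge is expected to carry a 'load' attribute in [0, 1]."""
    # Candidate routes in order of increasing length:
    candidates = list(islice(nx.shortest_simple_paths(G, src, dst), m))
    def bottleneck(path):
        return max(G[u][v].get("load", 0.0) for u, v in zip(path, path[1:]))
    return min(candidates, key=bottleneck)
```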
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do that, we first derive some important properties of a pseudo-Poisson equation associated with the problem. Next, it is shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
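The paper works with continuous-time PDMPs on Borel spaces; as a structural illustration only, the finite-state, unichain-MDP analogue of average-cost policy iteration is sketched below. It mirrors the two steps the abstract alludes to: evaluation via a Poisson equation, then greedy improvement. All array layouts are assumptions:

```python
# Finite-state analogue of the policy iteration algorithm (PIA) for
# long-run average cost on a unichain MDP. Structural sketch only.
import numpy as np

def policy_iteration_avg(P, c, iters=100):
    """P: (A, S, S) transition kernels per action; c: (A, S) one-step costs.
    Returns (policy, average cost g)."""
    A, S, _ = P.shape
    pi = np.zeros(S, dtype=int)
    for _ in range(iters):
        # Evaluation: solve the Poisson equation g + h(s) = c_pi(s) + (P_pi h)(s),
        # with the normalization h[0] = 0.
        Ppi = P[pi, np.arange(S)]            # (S, S) kernel under current policy
        cpi = c[pi, np.arange(S)]            # (S,) cost under current policy
        M = np.zeros((S + 1, S + 1))
        M[:S, :S] = np.eye(S) - Ppi          # coefficients of the bias h
        M[:S, S] = 1.0                       # coefficient of the gain g
        M[S, 0] = 1.0                        # normalization row: h[0] = 0
        sol = np.linalg.solve(M, np.append(cpi, 0.0))
        h, g = sol[:S], sol[S]
        # Improvement: act greedily with respect to the bias h.
        q = c + P @ h                        # (A, S) action values
        new_pi = q.argmin(axis=0)
        if np.array_equal(new_pi, pi):
            break                            # policy is optimal; PIA has converged
        pi = new_pi
    return pi, g
```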