112 results for complexity regularization
Abstract:
This work proposes a method based on preprocessing and data mining with the objective of identifying harmonic current sources in residential consumers. The methodology can also be applied to identify linear and nonlinear loads. It should be emphasized that the entire database was obtained through laboratory tests, i.e., real data were acquired from residential loads. The residential system created in the laboratory was fed by a configurable power source, and at its output were placed the loads and the power quality analyzers (all measurements were stored in a microcomputer). The data were then submitted to preprocessing based on attribute selection techniques in order to reduce the complexity of identifying the loads. A new database was generated retaining only the selected attributes, and Artificial Neural Networks were trained to perform the identification of the loads. In order to validate the proposed methodology, the loads were fed both under ideal conditions (without harmonics) and with harmonic voltages within pre-established limits. These limits are in accordance with IEEE Std. 519-1992 and PRODIST (the electricity distribution procedures adopted by Brazilian utilities). The results obtained support the proposed methodology and furnish a method that can serve as an alternative to conventional approaches.
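The abstract above gives no implementation details; the sketch below is only a generic, hypothetical rendering of the pipeline it describes (attribute selection followed by an artificial neural network classifier), written with scikit-learn and synthetic data rather than the authors' laboratory measurements.

# Hypothetical sketch of the described pipeline: attribute (feature) selection
# to reduce complexity, followed by a neural network that identifies load types.
# The data, the number of attributes and the class labels are all synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))      # 40 measured attributes per sample (assumed)
y = rng.integers(0, 3, size=600)    # 3 load classes, e.g. linear / nonlinear / harmonic source (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    SelectKBest(score_func=f_classif, k=10),   # keep only the most informative attributes
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))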
Abstract:
Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow for the development of exact analytical solutions; however, approximate solutions are of great interest for the calculation of the transmembrane flow and the estimation of the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed to solve the laminar, steady flow of a Newtonian and incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were taken from solutions available in the literature. The velocity at the permeable wall was considered uniform, while the concentration at the tube wall was regarded as varying with the axial position. A computational methodology using global error control was applied to determine the concentration at the wall and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared against others in the literature. (C) 2007 Elsevier B.V. All rights reserved.
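For orientation, the parabolic convective-diffusive (species conservation) equation referred to in the abstract can be written in cylindrical coordinates, neglecting axial diffusion, as below; the symbols (u and v the axial and radial velocity components, C the concentration, D the diffusivity, v_w the wall permeation velocity, R the tube radius) are generic and not taken from the paper's notation.

u(r,z)\,\frac{\partial C}{\partial z} + v(r,z)\,\frac{\partial C}{\partial r}
  = \frac{D}{r}\,\frac{\partial}{\partial r}\!\left(r\,\frac{\partial C}{\partial r}\right),
\qquad v(R,z) = v_w \ \text{(uniform wall velocity, as stated in the abstract)}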
Abstract:
Aquatic humic substances (AHS) isolated from the Negro River in two characteristic seasons, winter and summer, corresponding to the flood and dry periods, were structurally characterized by (13)C nuclear magnetic resonance. Subsequently, AHS aqueous solutions were irradiated with a polychromatic lamp (290-475 nm) and monitored by their total organic carbon (TOC) content, ultraviolet-visible (UV-vis) absorbance, fluorescence, and Fourier transform infrared spectroscopy (FTIR). As a result, photobleaching of up to 80% was observed after 48 h of irradiation. Conformational rearrangements and the formation of structures of low molecular complexity occurred during the irradiation, as deduced from the pH decrease and the shift of the fluorescence to lower wavelengths. Additionally, significant mineralization with the formation of CO(2), CO, and inorganic carbon compounds was registered, as indicated by TOC losses of up to 70%. The differences in photodegradation between the samples, expressed by the photobleaching efficiency, were more pronounced for the summer sample and were related to its elevated aromatic content. Aromatic structures are assumed to have a high autosensitization capacity mediated by free radical generation from quinone and phenolic moieties.
Abstract:
Chemical admixtures increase the rheological complexity of cement pastes owing to their chemical and physical interactions with the particles, which affect cement hydration and agglomeration kinetics. Using oscillatory rheometry and isothermal calorimetry, this article shows that the cellulose ether HMEC (hydroxymethyl ethylcellulose), widely used as a viscosity-modifying agent in self-compacting concretes and dry-set mortars, displayed a steric dispersant barrier effect during the first 2 h of hydration, associated with a cement-retarding nature and consequently reducing the setting speed. However, despite this stabilization effect, the polymer increased the cohesion strength when cement particles with the same degree of hydration are compared. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
This paper contains a new proposal for the definition of the fundamental operation of query under the Adaptive Formalism, one capable of locating functional nuclei from descriptions of their semantics. To demonstrate the method's applicability, an implementation of the query procedure constrained to a specific class of devices is shown, and its asymptotic computational complexity is discussed.
Abstract:
The volumetric reconstruction technique presented in this paper employs a two-camera stereoscopic particle image velocimetry (SPIV) system in order to reconstruct the mean flow behind a fixed cylinder fitted with helical strakes, which are commonly used to suppress vortex-induced vibrations (VIV). The technique is based on the measurement of velocity fields at equivalent adjacent planes, which results in pseudo-volumetric fields. The main advantage over proper volumetric techniques is the avoidance of additional equipment and complexity. The averaged velocity fields behind the straked cylinders and the geometrical periodicity of the three-start configuration are used to further simplify the reconstruction process. Two straked cylindrical models with the same pitch (p = 10d) and two different heights (h = 0.1 and 0.2d) are tested. The reconstructed flow shows that the strakes introduce into the wake a well-defined wavelength of one-third of the pitch. Measurements of hydrodynamic forces, fluctuating velocity, vortex formation length, and vortex shedding frequency show the interdependence of the wake parameters. The vortex formation length is increased by the strakes, which is an important effect for the suppression of vortex-induced vibrations. The results presented complement previous investigations concerning the effectiveness of strakes as VIV suppressors and provide a basis of comparison for numerical simulations.
Abstract:
This paper investigates probabilistic logics endowed with independence relations. We review propositional probabilistic languages without and with independence. We then consider graph-theoretic representations for propositional probabilistic logic with independence; complexity is analyzed, algorithms are derived, and examples are discussed. Finally, we examine a restricted first-order probabilistic logic that generalizes relational Bayesian networks. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques. (C) 2007 Elsevier Inc. All rights reserved.
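As a minimal illustration of the kind of set-valued inference these algorithms address (not of the 2U algorithm itself), the sketch below computes exact bounds on a query in a two-node binary credal network by enumerating the extreme points of interval-valued local probabilities; all numbers are invented.

# Tiny credal-network illustration (hypothetical numbers): node A -> node B,
# both binary, with interval-valued local probabilities. Because the query is
# multilinear in the local parameters, exact bounds on P(B=1) are attained at
# combinations of the interval endpoints, which are enumerated here.
from itertools import product

P_A1 = (0.3, 0.5)            # interval for P(A=1)
P_B1_given_A1 = (0.7, 0.9)   # interval for P(B=1 | A=1)
P_B1_given_A0 = (0.1, 0.2)   # interval for P(B=1 | A=0)

values = [pa * pb1 + (1.0 - pa) * pb0
          for pa, pb1, pb0 in product(P_A1, P_B1_given_A1, P_B1_given_A0)]

print("P(B=1) lies in [%.3f, %.3f]" % (min(values), max(values)))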
Abstract:
The thermal performance of a cooling tower and its cooling water system is critical for industrial plants, and small deviations from the design conditions may cause severe instability in the operation and economics of the process. External disturbances such as variation in the thermal demand of the process or oscillations in atmospheric conditions may be suppressed in multiple ways. Nevertheless, such alternatives are hardly ever implemented in the industrial operation due to the poor coordination between the utility and process sectors. The complexity of the operation increases because of the strong interaction among the process variables. In the present work, an integrated model for the minimization of the operating costs of a cooling water system is developed. The system is composed of a cooling tower as well as a network of heat exchangers. After the model is verified, several cases are studied with the objective of determining the optimal operation. It is observed that the most important operational resources to mitigate disturbances in the thermal demand of the process are, in this order: the increase in recycle water flow rate, the increase in air flow rate and finally the forced removal of a portion of the water flow rate that enters the cooling tower with the corresponding make-up flow rate. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
We propose a robust and low complexity scheme to estimate and track carrier frequency from signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieval of frequency content by a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate by extensive simulations that adaptive linear prediction methods render a robust and competitive frequency tracking technique.
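The update equations are not given in the abstract; the following is a small, hypothetical sketch of the general idea of convexly combining two adaptive linear predictors so that the mixture follows whichever one is currently performing better (step sizes, predictor order and the test signal are assumed, and the bias-compensation and FFT-search refinements are omitted).

# Hypothetical sketch: two normalized-LMS linear predictors with different step
# sizes, combined by a convex mixing weight lam = sigmoid(a); a is adapted by
# gradient descent on the combined prediction error so the mixture favors the
# currently better predictor. The test signal is a noisy drifting sinusoid.
import numpy as np

rng = np.random.default_rng(1)
N, M = 4000, 8                          # samples and predictor order (assumed)
n = np.arange(N)
freq = 0.05 + 0.02 * n / N              # slowly drifting normalized frequency
x = np.cos(2 * np.pi * np.cumsum(freq)) + 0.3 * rng.normal(size=N)

w1 = np.zeros(M); w2 = np.zeros(M)      # fast and slow predictors
mu1, mu2, mu_a, a = 0.05, 0.005, 10.0, 0.0

for k in range(M, N):
    u = x[k - M:k][::-1]                # regressor: the past M samples
    y1, y2 = w1 @ u, w2 @ u
    e1, e2 = x[k] - y1, x[k] - y2
    w1 += mu1 * e1 * u / (u @ u + 1e-6) # normalized LMS updates
    w2 += mu2 * e2 * u / (u @ u + 1e-6)
    lam = 1.0 / (1.0 + np.exp(-a))      # convex combination weight
    e = x[k] - (lam * y1 + (1 - lam) * y2)
    a += mu_a * e * (y1 - y2) * lam * (1 - lam)   # gradient step on the mixer
    a = np.clip(a, -4.0, 4.0)           # keep the sigmoid away from saturation

print("final mixing weight lambda:", 1.0 / (1.0 + np.exp(-a)))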
Abstract:
In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE TRANSACTIONS ON IMAGE PROCESSING], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming, and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
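Without attempting to reproduce the KAT itself, the snippet below checks the standard Kronecker identity that underlies fast separable transforms, (A kron B) vec(X) = vec(B X A^T), which replaces one large matrix-vector product by two small matrix products; the matrices and sizes are arbitrary.

# Generic check of the Kronecker identity behind fast separable transforms,
# using column-major (Fortran-order) vectorization.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 4))
B = rng.normal(size=(6, 3))
X = rng.normal(size=(3, 4))             # shapes chosen so B @ X @ A.T is defined

slow = np.kron(A, B) @ X.flatten(order="F")   # one large 30x12 matrix-vector product
fast = (B @ X @ A.T).flatten(order="F")       # two small matrix products instead

print("max abs difference:", np.max(np.abs(slow - fast)))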
Abstract:
In this paper, the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numerical integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
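The abstract states that the power update follows from an Euler discretization of the Verhulst (logistic) equation whose equilibrium is the power meeting the target SINR; the sketch below follows that reading with an entirely assumed interference model (gains, noise, targets and step size are illustrative, not the paper's configuration).

# Hypothetical sketch: distributed power control where each user's power obeys
# a Euler-discretized logistic update, p <- p + alpha*p*(1 - gamma/gamma_target),
# whose fixed point is the power achieving the target SINR.
import numpy as np

rng = np.random.default_rng(3)
K = 5                                        # number of users (assumed)
G = rng.uniform(0.001, 0.02, size=(K, K))    # cross-link gains (assumed)
np.fill_diagonal(G, rng.uniform(0.8, 1.2, size=K))   # useful-link gains (assumed)
noise = 1e-4
gamma_target = np.full(K, 5.0)               # target SINRs (assumed)
alpha = 0.3                                  # Euler step / growth rate (assumed)

p = np.full(K, 1e-3)                         # initial powers
for _ in range(300):
    interference = G @ p - np.diag(G) * p + noise
    gamma = np.diag(G) * p / interference    # current SINRs
    p = p + alpha * p * (1.0 - gamma / gamma_target)   # Verhulst-type update

interference = G @ p - np.diag(G) * p + noise
print("final SINRs:", np.round(np.diag(G) * p / interference, 3))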
Abstract:
The behavior of normal individuals and psychiatric patients varies in a similar way, following power laws. The presence of identical patterns of behavioral variation in individuals with different levels of activity is suggestive of self-similarity phenomena. Based on these findings, we propose that human behavior in a social context can constitute a system exhibiting self-organized criticality (SOC). The introduction of the SOC concept into psychological theories can help to approach the question of behavior predictability by taking into consideration its intrinsic stochastic character. Also, the ceteris paribus generalizations characteristic of psychological laws can be seen as a consequence of an individual-level description of more complex collective phenomena. Although limited, this study suggests that, if an adequate level of description is adopted, the complexity of human behavior can be more easily approached and its individual and social components can be more realistically modeled. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
A new excitation model for the numerical solution of the electric field integral equation (EFIE) applied to arbitrarily shaped monopole antennas fed by coaxial lines is presented. This model yields a stable solution for the input impedance of such antennas with very low numerical complexity and without the convergence and high parasitic capacitance problems associated with the usual delta-gap excitation.
Abstract:
In this work, a broad analysis of local search multiuser detection (LS-MUD) for direct sequence/code division multiple access (DS/CDMA) systems under multipath channels is carried out considering the performance-complexity trade-off. The robustness of the LS-MUD to variations in loading, Eb/N0, near-far effect, number of fingers of the Rake receiver, and errors in the channel coefficient estimates is verified. A comparative analysis of the bit error rate (BER) versus complexity trade-off is carried out among LS, genetic algorithm (GA), and particle swarm optimization (PSO). Based on the deterministic behavior of the LS algorithm, simplifications of the cost function calculation are also proposed, yielding more efficient algorithms (simplified and combined LS-MUD versions) and creating new perspectives for the MUD implementation. The computational complexity is expressed in terms of the number of operations required for convergence. Our conclusions point out that the simplified LS (s-LS) method is always more efficient, independently of the system conditions, achieving better performance with lower complexity than the other heuristic detectors. In addition, its deterministic strategy and absence of input parameters make the s-LS algorithm the most appropriate for the MUD problem. (C) 2008 Elsevier GmbH. All rights reserved.
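The cost function and search details are not reproduced in the abstract; the sketch below illustrates only the core 1-opt local search idea on a small synchronous DS/CDMA model with an assumed maximum-likelihood metric (spreading codes, loading and noise level are illustrative).

# Hypothetical sketch of 1-opt local search multiuser detection for a small
# synchronous DS/CDMA system with unit amplitudes: starting from the
# conventional (matched-filter) decisions, flip one bit at a time while the
# likelihood metric improves.
import numpy as np

rng = np.random.default_rng(4)
N, K = 16, 6                                 # spreading gain and number of users (assumed)
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # random spreading codes
b_true = rng.choice([-1.0, 1.0], size=K)
r = S @ b_true + 0.3 * rng.normal(size=N)    # received chip vector

y = S.T @ r                                  # matched-filter bank output
R = S.T @ S                                  # code correlation matrix

def ml_metric(b):
    # log-likelihood metric to maximize (unit amplitudes assumed)
    return 2.0 * b @ y - b @ R @ b

b = np.where(y >= 0, 1.0, -1.0)              # conventional detector as starting point
improved = True
while improved:
    improved = False
    for k in range(K):
        current = ml_metric(b)
        b[k] = -b[k]                         # tentatively flip bit k
        if ml_metric(b) > current:
            improved = True                  # keep the flip
        else:
            b[k] = -b[k]                     # undo the flip

print("bit errors:", int(np.sum(b != b_true)))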