904 results for inverse Bergman rule
Abstract:
Advances in hardware and software in the past decade have made it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, some applications require the data to be analysed in real time as soon as it is captured, for example when the data stream is infinite, fast changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drifts. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams, based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
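The abstaining, self-pruning behaviour described in this abstract can be sketched in a few lines. This is a minimal illustration in the spirit of eRules, not the authors' implementation; the class names, the accuracy threshold and the per-rule statistics are all assumptions made here.

```python
class Rule:
    """A single 'if condition then label' rule with firing statistics."""
    def __init__(self, condition, label):
        self.condition = condition  # predicate over a data instance
        self.label = label
        self.fired = 0              # times the rule covered an instance
        self.correct = 0            # times its prediction was right

    def covers(self, x):
        return self.condition(x)


class EvolvingRuleSet:
    """Sketch of an evolving rule set: abstain when no rule fires,
    drop rules whose observed accuracy decays below a threshold."""
    def __init__(self, min_accuracy=0.6):
        self.rules = []
        self.min_accuracy = min_accuracy

    def predict(self, x):
        for r in self.rules:
            if r.covers(x):
                return r.label
        return None  # leave the instance unclassified rather than guess

    def update(self, x, y):
        for r in self.rules:
            if r.covers(x):
                r.fired += 1
                r.correct += (r.label == y)
        # remove rules invalidated by concept drift
        self.rules = [r for r in self.rules
                      if r.fired == 0
                      or r.correct / r.fired >= self.min_accuracy]
```

Returning `None` where a decision tree would be forced to pick a leaf is what the abstract means by leaving instances unclassified rather than risking a wrong prediction.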
Abstract:
This contribution introduces a new digital predistorter to compensate for the serious distortions caused by memory high power amplifiers (HPAs) which exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both B-spline curve and first-derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse of the De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inverse can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
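The Newton-Raphson inversion step in this abstract can be illustrated with a toy static nonlinearity. Here a tanh saturation curve stands in for the B-spline amplitude model (in the paper, both the curve and its derivative come from De Boor recursions); the function names and starting point below are assumptions for illustration only.

```python
import math

def g(r):
    """Toy saturating AM/AM characteristic (stand-in for the B-spline model)."""
    return math.tanh(r)

def g_prime(r):
    """Its derivative: d/dr tanh(r) = 1 - tanh(r)**2."""
    return 1.0 - math.tanh(r) ** 2

def invert_amplitude(target, r0=0.5, tol=1e-12, max_iter=50):
    """Solve g(r) = target for r by Newton-Raphson iteration,
    i.e. recover the input amplitude that produces a given output."""
    r = r0
    for _ in range(max_iter):
        step = (g(r) - target) / g_prime(r)
        r -= step
        if abs(step) < tol:
            break
    return r
```

The predistorter then drives the HPA with the inverted amplitude so that the cascade of predistorter and amplifier is approximately linear up to saturation.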
Abstract:
We consider the Dirichlet boundary-value problem for the Helmholtz equation, Δu + κ²u = 0, with Im κ > 0, in an arbitrary bounded or unbounded open set Ω ⊂ ℝⁿ. Assuming continuity of the solution up to the boundary and a bound on growth at infinity, namely that |u(x)| ≤ C exp(δ|x|) for some C > 0 and δ < Im κ, we prove that the homogeneous problem has only the trivial solution. With this result we prove uniqueness results for direct and inverse problems of scattering by a bounded or infinite obstacle.
Abstract:
Infrared polarization and intensity imagery provide complementary and discriminative information in image understanding and interpretation. In this paper, a novel fusion method is proposed by effectively merging the information with various combination rules. It makes use of both low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. Then the low-frequency component images are combined using a fuzzy combination rule blending three sub-combination methods of (1) region feature maximum, (2) region feature weighting average, and (3) pixel value maximum; and the support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods of (1) pixel energy maximum and (2) region feature weighting. Based on two newly defined features, i.e. the low-frequency difference feature for low-frequency component images and the support-value difference feature for support value image sequences, trapezoidal membership functions are proposed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results of visual inspection and quantitative evaluation both indicate the superiority of the proposed method to its counterparts in image fusion of infrared polarization and intensity images.
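The trapezoidal membership functions mentioned in this abstract can be written down in a few lines. The corner parameters below are illustrative assumptions, not the paper's fitted values.

```python
def trapezoid(x, a, b, c, d):
    """Membership degree in [0, 1] for a trapezoid with corners
    a < b <= c < d: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge
```

In a fuzzy combination rule, the membership degree of a difference feature would weight how much each sub-combination method contributes to the fused coefficient.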
Shaming men, performing power: female authority in Zimbabwe and Tanzania on the eve of colonial rule
Abstract:
Communication signal processing applications often involve complex-valued (CV) functional representations for signals and systems. CV artificial neural networks have been studied theoretically and applied widely in nonlinear signal and data processing [1–11]. Note that most artificial neural networks cannot be automatically extended from the real-valued (RV) domain to the CV domain, because the resulting model would in general violate the Cauchy-Riemann conditions, which means that the training algorithms become unusable. A number of analytic functions were introduced for fully CV multilayer perceptrons (MLPs) [4]. A fully CV radial basis function (RBF) network was introduced in [8] for regression and classification applications. Alternatively, the problem can be avoided by using two RV artificial neural networks, one processing the real part and the other processing the imaginary part of the CV signal/system. An even more challenging problem is the inverse of a CV
Abstract:
This article examines whether a country's economic reforms are affected by reforms adopted by other countries. Our theoretical model predicts that reforms are more likely when factors of production are internationally mobile and reforms are pursued in other economies. Using the change in the Index of Economic Freedom as the measure of market-liberalizing reforms and panel data (144 countries, 1995–2006), we test our model. We find evidence of the spillover of reforms. Moreover, consistent with our model, international trade is not a vehicle for the diffusion of economic reforms; rather, the most important mechanism is geographical or cultural proximity.
Abstract:
Lipid cubic phases are complex nanostructures that form naturally in a variety of biological systems, with applications including drug delivery and nanotemplating. Most X-ray scattering studies on lipid cubic phases have used unoriented polydomain samples as either bulk gels or suspensions of micrometer-sized cubosomes. We present a method of investigating cubic phases in a new form, as supported thin films that can be analyzed using grazing incidence small-angle X-ray scattering (GISAXS). We present GISAXS data on three lipid systems: phytantriol and two grades of monoolein (research and industrial). The use of thin films brings a number of advantages. First, the samples exhibit a high degree of uniaxial orientation about the substrate normal. Second, the new morphology allows precise control of the mesophase geometry and lattice parameter using a controlled temperature and humidity environment, and we demonstrate the controllable formation of oriented diamond and gyroid inverse bicontinuous cubic phases along with lamellar phases. Finally, the thin film morphology allows the induction of reversible phase transitions between these mesophase structures by changes in humidity on subminute time scales, and we present time-resolved GISAXS data monitoring these transformations.
Abstract:
We present a theoretical study of the distribution of Al atoms in zeolite ZSM-5 with Si/Al=47, where we focus on the role of Al-Al interactions rather than on the energetics of Al/Si substitutions at individual sites. Using interatomic potential methods, we evaluate the energies of the full set of symmetrically independent configurations of Al siting in a Si94Al2O192 cell. The equilibrium Al distribution is determined by the interplay of two factors: the energetics of the Al/Si substitution at an individual site, which tends to populate particular T sites (e.g. the T14 site), and the Al-Al interaction, which at this Si/Al ratio maximises Al-Al distances in agreement with Dempsey's rule. However, it is found that the interaction energy changes approximately as the inverse of the square of the distance between the two Al atoms, rather than the inverse of the distance that would be expected if this were merely charge repulsion. Moreover, we find that the anisotropic nature of the framework density plays an important role in determining the magnitude of the interactions, which are not simply dependent on Al-Al distances.
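The distinction this abstract draws between 1/d and 1/d² scaling is easy to check numerically: on a log-log plot, pure charge repulsion gives slope -1, while the reported behaviour gives slope -2. The energies below are synthetic values generated from a 1/d² law purely to illustrate the test; they are not the computed interaction energies from the study.

```python
import math

distances = [4.0, 6.0, 8.0, 12.0]             # hypothetical Al-Al distances (angstrom)
energies = [1.0 / d ** 2 for d in distances]  # synthetic 1/d**2 data

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

slope = loglog_slope(distances, energies)  # close to -2 for 1/d**2 data
```

Fitting the same slope to real interaction energies is how one would distinguish the observed decay law from simple Coulombic repulsion.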
Abstract:
This paper considers the use of Association Rule Mining (ARM) and our proposed Transaction based Rule Change Mining (TRCM) to identify the rule types present in tweets' hashtags over a specific consecutive period of time and their linkage to real-life occurrences. Our novel algorithm is termed TRCM-RTI, in reference to Rule Type Identification. We created Time Frame Windows (TFWs) to detect evolvement statuses and calculate the lifespan of hashtags in online tweets. We link RTI to real-life events by monitoring and recording rule evolvement patterns in TFWs on the Twitter network.
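The lifespan calculation over Time Frame Windows can be sketched simply. Representing observations as (window, hashtag) pairs is an assumed simplification of the paper's TFW bookkeeping, used here only to show the idea.

```python
def hashtag_lifespans(observations):
    """observations: iterable of (window_index, hashtag) pairs.
    Returns {hashtag: lifespan in windows, inclusive of first and
    last sighting}."""
    first, last = {}, {}
    for w, tag in observations:
        first[tag] = min(first.get(tag, w), w)
        last[tag] = max(last.get(tag, w), w)
    return {tag: last[tag] - first[tag] + 1 for tag in first}
```

A hashtag seen only in one window has lifespan 1; one that persists across windows accumulates a longer span, which is the raw material for the evolvement statuses described above.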
Abstract:
Automatic generation of classification rules has been an increasingly popular technique in commercial applications such as Big Data analytics, rule based expert systems and decision making systems. However, a principal problem that arises with most methods for generation of classification rules is the overfitting of training data. When Big Data is dealt with, this may result in the generation of a large number of complex rules. This may not only increase computational cost but also lower the accuracy in predicting further unseen instances. This has led to the necessity of developing pruning methods for the simplification of rules. In addition, classification rules are used further to make predictions after the completion of their generation. Where efficiency is concerned, it is desirable to find the first rule that fires as quickly as possible by searching through a rule set. Thus a suitable structure is required to represent the rule set effectively. In this chapter, the authors introduce a unified framework for construction of rule based classification systems consisting of three operations on Big Data: rule generation, rule simplification and rule representation. The authors also review some existing methods and techniques used for each of the three operations and highlight their limitations. They introduce some novel methods and techniques developed by them recently. These methods and techniques are also discussed in comparison to existing ones with respect to efficient processing of Big Data.
Abstract:
Expert systems have become increasingly popular due to their commercial importance. A rule based system is a special type of expert system, which consists of a set of 'if-then' rules and can be applied as a decision support system in many areas such as healthcare, transportation and security. Rule based systems can be constructed based on both expert knowledge and data. This paper aims to introduce the theory of rule based systems, especially the categorization and construction of such systems, from a conceptual point of view. This paper also introduces rule based systems for classification tasks in detail.