32 results for generalization


Relevance:

10.00%

Publisher:

Abstract:

As a class of defects in software requirements specification, inconsistency has been widely studied in both requirements engineering and software engineering. It has been increasingly recognized that maintaining consistency alone often results in some other types of non-canonical requirements, including incompleteness of a requirements specification, vague requirements statements, and redundant requirements statements. It is therefore desirable for inconsistency handling to take into account the related non-canonical requirements in requirements engineering. To address this issue, we propose an intuitive generalization of logical techniques for handling inconsistency to those that are suitable for managing non-canonical requirements, which deals with incompleteness and redundancy, in addition to inconsistency. We first argue that measuring non-canonical requirements plays a crucial role in handling them effectively. We then present a measure-driven logic framework for managing non-canonical requirements. The framework consists of five main parts: identifying non-canonical requirements, measuring them, generating candidate proposals for handling them, choosing commonly acceptable proposals, and revising the requirements according to the chosen proposals. This generalization can be considered an attempt to handle non-canonical requirements along with logic-based inconsistency handling in requirements engineering.
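
A minimal sketch of the five-stage pipeline listed above, on propositional requirements written as string literals; the function names, the crude duplicate/contradiction checks, and the toy measures are illustrative assumptions rather than the framework's actual logic.

```python
# Illustrative sketch of the five-stage measure-driven pipeline (assumed names).
# Requirements are propositional literals such as "p" or "not p".

def identify(reqs):
    """Flag redundant (duplicated) and inconsistent (p vs. not p) statements."""
    redundant = {r for r in reqs if reqs.count(r) > 1}
    inconsistent = {r for r in reqs
                    if (r.startswith("not ") and r[4:] in reqs)
                    or ("not " + r) in reqs}
    return redundant, inconsistent

def measure(redundant, inconsistent, reqs):
    """Toy measure: fraction of statements involved in each defect type."""
    n = len(reqs)
    return {"redundancy": len(redundant) / n, "inconsistency": len(inconsistent) / n}

def generate_proposals(redundant, inconsistent):
    """Candidate repairs: drop duplicates and one side of each contradiction."""
    return [("drop", r) for r in redundant | inconsistent]

def choose(proposals, scores):
    """A real implementation would rank proposals by the measures; accept all here."""
    return proposals

def revise(reqs, chosen):
    out = list(dict.fromkeys(reqs))          # remove duplicate statements
    for _, r in chosen:
        if r in out and ("not " + r) in out:
            out.remove(r)                    # resolve a contradiction arbitrarily
    return out

reqs = ["p", "p", "not p", "q"]
red, inc = identify(reqs)
print(revise(reqs, choose(generate_proposals(red, inc), measure(red, inc, reqs))))
```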

Relevance:

10.00%

Publisher:

Abstract:

This paper introduces a logical model of inductive generalization, and specifically of the machine learning task of inductive concept learning (ICL). We argue that some inductive processes, like ICL, can be seen as a form of defeasible reasoning. We define a consequence relation characterizing which hypotheses can be induced from given sets of examples, and study its properties, showing they correspond to a rather well-behaved non-monotonic logic. We also show that with the addition of a preference relation on inductive theories we can characterize the inductive bias of ICL algorithms. The second part of the paper shows how this logical characterization of inductive generalization can be integrated with another form of non-monotonic reasoning (argumentation), to define a model of multiagent ICL. This integration allows two or more agents to learn, in a consistent way, both from induction and from arguments used in the communication between them. We show that the inductive theories achieved by multiagent induction plus argumentation are sound, i.e. they are precisely the same as the inductive theories built by a single agent with all data. © 2012 Elsevier B.V.
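
A toy illustration of induction as a defeasible consequence relation (the attribute encoding, coverage test, and example data are assumptions, not the paper's formalism): a hypothesis is inducible when it covers every positive example and no negative one, and adding examples can defeat a previously inducible hypothesis.

```python
# A hypothesis is a set of required attribute values; it is "inducible" from the
# examples if it covers every positive and no negative example.

def covers(hypothesis, example):
    return all(example.get(attr) == val for attr, val in hypothesis.items())

def inducible(hypothesis, positives, negatives):
    return (all(covers(hypothesis, e) for e in positives)
            and not any(covers(hypothesis, e) for e in negatives))

positives = [{"shape": "round", "colour": "red"},
             {"shape": "round", "colour": "green"}]
negatives = [{"shape": "square", "colour": "red"}]

h1 = {"shape": "round"}   # consistent generalization of the positives
h2 = {"colour": "red"}    # fails: misses a positive and covers a negative
print(inducible(h1, positives, negatives))   # True
print(inducible(h2, positives, negatives))   # False

# Adding a new negative example {"shape": "round", "colour": "blue"} would
# defeat h1 as well: the consequence relation is non-monotonic.
```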

Relevance:

10.00%

Publisher:

Abstract:

In this paper we present a generalization of belief functions over fuzzy events. In particular we focus on belief functions defined in the algebraic framework of finite MV-algebras of fuzzy sets. We introduce a fuzzy modal logic to formalize reasoning with belief functions on many-valued events. We prove, among other results, that several different notions of belief functions can be characterized in a quite uniform way, just by slightly modifying the complete axiomatization of one of the modal logics involved in the definition of our formalism. © 2012 Elsevier Inc. All rights reserved.
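
For orientation, the classical Dempster-Shafer belief of a crisp event, and one common way it is generalized to a fuzzy event with membership function f (the paper's MV-algebraic definitions are more general), are

\[
  \mathrm{Bel}(A) \;=\; \sum_{B \subseteq A} m(B),
  \qquad
  \mathrm{Bel}(f) \;=\; \sum_{B} m(B)\,\min_{x \in B} f(x),
\]

where m is a mass assignment on the focal sets B; the second expression reduces to the first when f is the characteristic function of a crisp event A.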

Relevance:

10.00%

Publisher:

Abstract:

Quantum coherence between electron and ion dynamics, observed in organic semiconductors by means of ultrafast spectroscopy, is the object of recent theoretical and computational studies. To simulate this kind of quantum coherent dynamics, we have introduced in a previous article [L. Stella, M. Meister, A. J. Fisher, and A. P. Horsfield, J. Chem. Phys. 127, 214104 (2007)] an improved computational scheme based on Correlated Electron-Ion Dynamics (CEID). In this article, we provide a generalization of that scheme to model several ionic degrees of freedom and many-body electronic states. To illustrate the capability of this extended CEID, we study a model system which displays the electron-ion analog of the Rabi oscillations. Finally, we discuss convergence and scaling properties of the extended CEID along with its applicability to more realistic problems. © 2011 American Institute of Physics. [doi: 10.1063/1.3589165]
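
For reference (this is the textbook two-level result, not the CEID equations themselves), the ordinary Rabi oscillation that the electron-ion analogue parallels gives an excited-state population

\[
  P_e(t) \;=\; \frac{\Omega^{2}}{\Omega^{2}+\Delta^{2}}\,
  \sin^{2}\!\Big(\tfrac{1}{2}\sqrt{\Omega^{2}+\Delta^{2}}\;t\Big),
\]

with Rabi frequency Ω and detuning Δ; in the model system of the paper, the population exchange occurs coherently between electronic and ionic degrees of freedom.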

Relevance:

10.00%

Publisher:

Abstract:

A reduced-density-operator description is developed for coherent optical phenomena in many-electron atomic systems, utilizing a Liouville-space, multiple-mode Floquet–Fourier representation. The Liouville-space formulation provides a natural generalization of the ordinary Hilbert-space (Hamiltonian) R-matrix-Floquet method, which has been developed for multi-photon transitions and laser-assisted electron–atom collision processes. In these applications, the R-matrix-Floquet method has been demonstrated to be capable of providing an accurate representation of the complex, multi-level structure of many-electron atomic systems in bound, continuum, and autoionizing states. The ordinary Hilbert-space (Hamiltonian) formulation of the R-matrix-Floquet method has been implemented in highly developed computer programs, which can provide a non-perturbative treatment of the interaction of a classical, multiple-mode electromagnetic field with a quantum system. This quantum system may correspond to a many-electron, bound atomic system and a single continuum electron. However, including pseudo-states in the expansion of the many-electron atomic wave function can provide a representation of multiple continuum electrons. The 'dressed' many-electron atomic states thereby obtained can be used in a realistic non-perturbative evaluation of the transition probabilities for an extensive class of atomic collision and radiation processes in the presence of intense electromagnetic fields. In order to incorporate environmental relaxation and decoherence phenomena, we propose to utilize the ordinary Hilbert-space (Hamiltonian) R-matrix-Floquet method as a starting-point for a Liouville-space (reduced-density-operator) formulation. To illustrate how the Liouville-space R-matrix-Floquet formulation can be implemented for coherent atomic radiative processes, we discuss applications to electromagnetically induced transparency, as well as to related pump–probe optical phenomena, and also to the unified description of radiative and dielectronic recombination in electron–ion beam interactions and high-temperature plasmas.
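
Schematically (a generic form, not the paper's specific superoperators), the reduced-density-operator equation of motion in Liouville space reads

\[
  \frac{\partial \hat{\rho}}{\partial t}
  \;=\; -\frac{i}{\hbar}\,\big[\hat{H}(t),\hat{\rho}\big] \;+\; \hat{\hat{R}}\,\hat{\rho}
  \;\equiv\; \hat{\hat{L}}(t)\,\hat{\rho},
\]

where \hat{H}(t) contains the atomic Hamiltonian together with the multiple-mode field coupling, and the relaxation superoperator \hat{\hat{R}} introduces the environmental relaxation and decoherence that the ordinary Hilbert-space (Hamiltonian) treatment omits; the multiple-mode Floquet-Fourier representation then expands \hat{\rho} in harmonics of the field frequencies.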

Relevance:

10.00%

Publisher:

Abstract:

This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems, while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are firstly determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked and the insignificant ones are replaced at the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique. © 2013 Elsevier B.V.
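
A rough sketch of the flavour of construction described above, not the MTLRMC algorithm itself: Gaussian terms with randomly assigned centres and widths in the spirit of the ELM, followed by greedy forward selection of the terms that most reduce the multi-output residual (all data and parameter choices are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
Y = np.column_stack([np.sin(3 * X[:, 0]), X[:, 0] * X[:, 1]])   # two outputs

# Candidate terms: Gaussian functions with random centres and widths (ELM-style).
centres = rng.uniform(-1, 1, (50, 2))
widths = rng.uniform(0.2, 1.0, 50)
Phi = np.exp(-np.sum((X[:, None, :] - centres) ** 2, axis=2) / widths ** 2)

selected, residual = [], Y.copy()
for _ in range(8):                                   # pick 8 terms greedily
    scores = [-np.inf if j in selected
              else np.sum((Phi[:, j] @ residual) ** 2) / (Phi[:, j] @ Phi[:, j])
              for j in range(Phi.shape[1])]
    best = int(np.argmax(scores))
    selected.append(best)
    phi = Phi[:, [best]]
    residual = residual - phi @ np.linalg.lstsq(phi, residual, rcond=None)[0]

W = np.linalg.lstsq(Phi[:, selected], Y, rcond=None)[0]          # final weights
print("selected terms:", selected)
print("training RMSE:", np.sqrt(np.mean((Phi[:, selected] @ W - Y) ** 2)))
```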

Relevance:

10.00%

Publisher:

Abstract:

When studying heterogeneous aquifer systems, especially at regional scale, a degree of generalization is anticipated. This can be due to sparse sampling regimes, complex depositional environments or lack of accessibility to measure the subsurface. This can lead to an inaccurate conceptualization which can be detrimental when applied to groundwater flow models. It is important that numerical models are based on observed and accurate geological information and do not rely on the distribution of artificial aquifer properties. This can still be problematic, as data will be modelled at a different scale from that at which they were collected. It is proposed here that integrating geophysics and upscaling techniques can assist in building a more realistic and deterministic groundwater flow model. In this study, the sedimentary aquifer of the Lagan Valley in Northern Ireland is chosen due to intruding sub-vertical dolerite dykes. These dykes are of a lower permeability than the sandstone aquifer. The use of airborne magnetics allows the delineation of heterogeneities, confirmed by field analysis. Permeability measured at the field scale is then upscaled to different levels using a correlation with the geophysical data, creating equivalent parameters that can be directly imported into numerical groundwater flow models. These parameters include directional equivalent permeabilities and anisotropy. Several stages of upscaling are modelled using finite elements. Initial modelling provides promising results, especially at the intermediate scale, suggesting an accurate distribution of aquifer properties. This deterministic methodology is being expanded to include stochastic methods of obtaining heterogeneity locations based on airborne geophysical data. This is done using the Direct Sampling method of Multiple-Point Statistics (MPS). This method uses the magnetics as a training image to computationally determine a probabilistic occurrence of heterogeneity. There is also a need to apply the method to alternative geological contexts where the heterogeneity is of a higher permeability than the host rock.

Relevance:

10.00%

Publisher:

Abstract:

We address the generation, propagation, and application of multipartite continuous variable entanglement in a noisy environment. In particular, we focus our attention on the multimode entangled states achievable by second-order nonlinear crystals, i.e., coherent states of the SU(m,1) group, which provide a generalization of the twin-beam state of a bipartite system. The full inseparability in the ideal case is shown, whereas thresholds for separability are given for the tripartite case in the presence of noise. We find that entanglement of tripartite states is robust against thermal noise, both in the generation process and during propagation. We then consider coherent states of SU(m,1) as a resource for multipartite distribution of quantum information and analyze a specific protocol for telecloning, proving its optimality in the case of symmetric cloning of pure Gaussian states. We show that the proposed protocol also provides the first example of a completely asymmetric 1 → m telecloning and derive explicitly the optimal relation among the different fidelities of the m clones. The effect of noise in the various stages of the protocol is taken into account, and the fidelities of the clones are analytically obtained as a function of the noise parameters. In turn, this permits the optimization of the telecloning protocol, including its adaptive modifications to the noisy environment. In the optimized scheme the clones' fidelity remains maximal even in the presence of losses (in the absence of thermal noise), for propagation times that diverge as the number of modes increases. In the optimization procedure the prominent role played by the location of the entanglement source is analyzed in detail. Our results indicate that, when only losses are present, telecloning is a more effective way to distribute quantum information than direct transmission followed by local cloning.
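
For reference, the bipartite twin-beam state that the SU(m,1) coherent states generalize can be written, with squeezing parameter λ (|λ| < 1), as

\[
  |\mathrm{TWB}\rangle \;=\; \sqrt{1-|\lambda|^{2}}\;\sum_{n=0}^{\infty}\lambda^{n}\,|n\rangle\otimes|n\rangle ,
\]

i.e. a pair of perfectly photon-number-correlated modes; the SU(m,1) coherent states extend this correlation structure to the multimode setting used for telecloning.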

Relevance:

10.00%

Publisher:

Abstract:

In highly heterogeneous aquifer systems, conceptualization of regional groundwater flow models frequently results in the generalization or neglect of aquifer heterogeneities, both of which may result in erroneous model outputs. The calculation of equivalence related to hydrogeological parameters and applied to upscaling provides a means of accounting for measurement-scale information at the regional scale. In this study, the Permo-Triassic Lagan Valley strategic aquifer in Northern Ireland is observed to be heterogeneous, if not discontinuous, due to subvertically trending low-permeability Tertiary dolerite dykes. Interpretation of ground and aerial magnetic surveys produces a deterministic solution for dyke locations. By measuring relative permeabilities of both the dykes and the sedimentary host rock, equivalent directional permeabilities, and hence anisotropy, are obtained as a function of dyke density. This provides parameters for larger scale equivalent blocks, which can be directly imported into numerical groundwater flow models. Different conceptual models with different degrees of upscaling are numerically tested and the results compared to regional flow observations. Simulation results show that the upscaled permeabilities from geophysical data allow one to properly account for the observed spatial variations of groundwater flow, without requiring artificial distribution of aquifer properties. It is also found that an intermediate degree of upscaling, between accounting for mapped field-scale dykes and adopting a single regional anisotropy value (maximum upscaling), provides the results closest to the observations at the regional scale.
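
As a simple illustration of the kind of equivalent parameters involved (the study's values are derived from the measured permeabilities and the geophysically mapped dyke density, not from this idealization), a block crossed by parallel dykes occupying a volume fraction φ has the classical layered-medium equivalents

\[
  k_{\parallel} \;=\; (1-\phi)\,k_{\mathrm{host}} + \phi\,k_{\mathrm{dyke}},
  \qquad
  k_{\perp} \;=\; \Big(\frac{1-\phi}{k_{\mathrm{host}}} + \frac{\phi}{k_{\mathrm{dyke}}}\Big)^{-1},
\]

an arithmetic mean parallel to the dykes and a harmonic mean across them; the anisotropy ratio k_parallel / k_perp grows rapidly as the low-permeability dyke fraction increases.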

Relevance:

10.00%

Publisher:

Abstract:

There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to support their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms based on a MUS generalization approach and a new optimized MUS transformation approach to finding MISes. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus using randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances. Moreover, they represent a practical and intuitive tool for inconsistency handling.
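
A brute-force illustration of what a minimal inconsistent subset is (mimus instead works via the MUS-based algorithms described above; the toy knowledge base below is an assumption).

```python
from itertools import combinations, product

# Enumerate the minimal inconsistent subsets (MISes) of a small propositional
# knowledge base.  Formulae are given as Python functions over a truth assignment.

KB = {
    "p":        lambda v: v["p"],
    "p -> q":   lambda v: (not v["p"]) or v["q"],
    "not q":    lambda v: not v["q"],
    "r":        lambda v: v["r"],
}
ATOMS = ["p", "q", "r"]

def consistent(formulae):
    """True iff some truth assignment satisfies every formula in the set."""
    return any(all(f(dict(zip(ATOMS, bits))) for f in formulae)
               for bits in product([False, True], repeat=len(ATOMS)))

mises = []
for size in range(1, len(KB) + 1):            # increasing size guarantees minimality
    for subset in combinations(KB, size):
        if not consistent([KB[s] for s in subset]) \
                and not any(set(m) <= set(subset) for m in mises):
            mises.append(subset)

print(mises)   # [('p', 'p -> q', 'not q')]
```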

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes an efficient learning mechanism to build fuzzy rule-based systems through the construction of sparse least-squares support vector machines (LS-SVMs). In addition to the significantly reduced computational complexity in model training, the resultant LS-SVM-based fuzzy system is sparser while offering satisfactory generalization capability over unseen data. It is well known that LS-SVMs have a computational advantage over conventional SVMs in the model training process; however, the model sparseness is lost, which is the main drawback of LS-SVMs. This remains an open problem for LS-SVMs. To tackle the nonsparseness issue, a new regression alternative to the Lagrangian solution for the LS-SVM is first presented. A novel efficient learning mechanism is then proposed in this paper to extract a sparse set of support vectors for generating fuzzy IF-THEN rules. This novel mechanism works in a stepwise subset selection manner, including a forward expansion phase and a backward exclusion phase in each selection step. The implementation of the algorithm is computationally very efficient due to the introduction of a few key techniques that avoid matrix inverse operations and accelerate the training process. The computational efficiency is also confirmed by detailed computational complexity analysis. As a result, the proposed approach not only achieves sparseness of the resultant LS-SVM-based fuzzy systems but also significantly reduces the amount of computational effort in model training. Three experimental examples are presented to demonstrate the effectiveness and efficiency of the proposed learning mechanism and the sparseness of the obtained LS-SVM-based fuzzy systems, in comparison with other SVM-based learning techniques.
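
For reference, the standard LS-SVM regression function and the linear system that replaces the SVM quadratic programme (the Lagrangian solution referred to above) are

\[
  f(x) \;=\; \sum_{i=1}^{N} \alpha_i\,K(x,x_i) + b,
  \qquad
  \begin{bmatrix} 0 & \mathbf{1}^{\top} \\ \mathbf{1} & \boldsymbol{\Omega} + \gamma^{-1} I \end{bmatrix}
  \begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix}
  \;=\;
  \begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix},
  \qquad \Omega_{ij} = K(x_i,x_j);
\]

because the solution generally has every α_i nonzero, all training points act as support vectors, which is precisely the loss of sparseness the proposed forward-expansion/backward-exclusion mechanism addresses.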

Relevance:

10.00%

Publisher:

Abstract:

A number of neural networks can be formulated as the linear-in-the-parameters models. Training such networks can be transformed to a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped into a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
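
A sketch of classical orthogonal-least-squares forward selection with the error reduction ratio criterion named above (synthetic data; the unified two-stage method's backward refinement and term-exchange scheme are not reproduced).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))                        # candidate regressors
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=300)

selected, Q = [], []                                  # chosen indices, orthogonal basis
for _ in range(3):
    best, best_err, best_w = None, -1.0, None
    for j in range(X.shape[1]):
        if j in selected:
            continue
        w = X[:, j].copy()
        for q in Q:                                   # orthogonalize against chosen terms
            w -= (q @ w) / (q @ q) * q
        err = (w @ y) ** 2 / ((w @ w) * (y @ y))      # error reduction ratio
        if err > best_err:
            best, best_err, best_w = j, err, w
    selected.append(best)
    Q.append(best_w)
    print(f"selected x{best}, ERR = {best_err:.3f}")

# The third pick has a tiny ERR: exactly the kind of insignificant term a
# backward refinement stage would prune or replace.
```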

Relevance:

10.00%

Publisher:

Abstract:

A forward and backward least angle regression (LAR) algorithm is proposed to construct the nonlinear autoregressive model with exogenous inputs (NARX) that is widely used to describe a large class of nonlinear dynamic systems. The main objective of this paper is to improve model sparsity and generalization performance of the original forward LAR algorithm. This is achieved by introducing a replacement scheme using an additional backward LAR stage. The backward stage replaces insignificant model terms selected by forward LAR with more significant ones, leading to an improved model in terms of the model compactness and performance. A numerical example to construct four types of NARX models, namely polynomials, radial basis function (RBF) networks, neuro fuzzy and wavelet networks, is presented to illustrate the effectiveness of the proposed technique in comparison with some popular methods.
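
A sketch of the forward LAR stage on synthetic NARX data using scikit-learn's Lars (the backward replacement stage and the paper's four model classes are not reproduced; the simulated system, lags, and term set are illustrative assumptions). With the dominant terms present in the candidate set, the selection typically recovers them.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 500)
y = np.zeros(500)
for k in range(2, 500):                                # simulated nonlinear system
    y[k] = 0.5 * y[k - 1] - 0.3 * y[k - 2] * u[k - 1] + u[k - 1] + 0.02 * rng.normal()

# Candidate NARX terms: lagged inputs/outputs and their pairwise products.
lags = {"y1": y[1:-1], "y2": y[:-2], "u1": u[1:-1], "u2": u[:-2]}
names = list(lags)
terms = dict(lags)
terms.update({f"{a}*{b}": lags[a] * lags[b]
              for i, a in enumerate(names) for b in names[i:]})

Phi = np.column_stack(list(terms.values()))
target = y[2:]

model = Lars(n_nonzero_coefs=3).fit(Phi, target)       # forward LAR term selection
for name, coef in zip(terms, model.coef_):
    if abs(coef) > 1e-8:
        print(f"{name:8s} {coef:+.3f}")
```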

Relevance:

10.00%

Publisher:

Abstract:

A new battery modelling method is presented based on the simulation error minimization criterion rather than the conventional prediction error criterion. A new integrated optimization method to optimize the model parameters is proposed. This new method is validated on a set of Li ion battery test data, and the results confirm the advantages of the proposed method in terms of the model generalization performance and long-term prediction accuracy.
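
A toy contrast between the two criteria (the first-order model, data, and parameter values below are illustrative assumptions, not the paper's battery model): one-step prediction error feeds back measured outputs, whereas simulation error feeds back the model's own outputs, so a parameter bias that is barely visible one step ahead accumulates over a free run.

```python
import numpy as np

# Crude stand-in for a battery voltage model: v[k] = a*v[k-1] + b*i[k-1].
rng = np.random.default_rng(3)
i_load = rng.uniform(-1, 1, 400)                 # current profile
v = np.zeros(400)
for k in range(1, 400):
    v[k] = 0.95 * v[k - 1] + 0.4 * i_load[k - 1] + 0.01 * rng.normal()

def prediction_error(a, b):
    """One-step-ahead: each prediction uses the *measured* previous voltage."""
    e = v[1:] - (a * v[:-1] + b * i_load[:-1])
    return np.mean(e ** 2)

def simulation_error(a, b):
    """Free-run: the model feeds back its *own* previous output."""
    v_sim = np.zeros_like(v)
    for k in range(1, len(v)):
        v_sim[k] = a * v_sim[k - 1] + b * i_load[k - 1]
    return np.mean((v - v_sim) ** 2)

# A slightly biased model can look acceptable one step ahead yet drift badly
# in free run, which is why the two criteria select different parameters.
print(prediction_error(0.90, 0.4), simulation_error(0.90, 0.4))
print(prediction_error(0.95, 0.4), simulation_error(0.95, 0.4))
```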

Relevance:

10.00%

Publisher:

Abstract:

Possibilistic answer set programming (PASP) extends answer set programming (ASP) by attaching to each rule a degree of certainty. While such an extension is important from an application point of view, existing semantics are not well-motivated, and do not always yield intuitive results. To develop a more suitable semantics, we first introduce a characterization of answer sets of classical ASP programs in terms of possibilistic logic where an ASP program specifies a set of constraints on possibility distributions. This characterization is then naturally generalized to define answer sets of PASP programs. We furthermore provide a syntactic counterpart, leading to a possibilistic generalization of the well-known Gelfond-Lifschitz reduct, and we show how our framework can readily be implemented using standard ASP solvers.
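
For reference, the classical (non-possibilistic) Gelfond-Lifschitz reduct that the paper generalizes can be sketched as follows; the encoding of rules as (head, positive body, negative body) triples and the example program are illustrative choices.

```python
# Given a candidate set S: delete every rule whose negative body intersects S,
# drop the "not" literals from the remaining rules, and accept S as an answer
# set iff it equals the least model of the resulting positive program.

# Rules are (head, positive_body, negative_body); e.g. a :- b, not c.
program = [
    ("a", {"b"}, {"c"}),
    ("b", set(), set()),
    ("c", set(), {"a"}),
]

def reduct(rules, s):
    return [(h, pos) for h, pos, neg in rules if not (neg & s)]

def least_model(positive_rules):
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_rules:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model

def is_answer_set(rules, s):
    return least_model(reduct(rules, s)) == s

print(is_answer_set(program, {"a", "b"}))        # True
print(is_answer_set(program, {"b", "c"}))        # True
print(is_answer_set(program, {"a", "b", "c"}))   # False
```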