916 results for multiple discrepancies theory
Abstract:
We propose a physical mechanism that leads to the emergence of secondary threshold laws in processes of multiple ionization of atoms. We argue that the removal of n electrons (n > 2) from a many-electron atom may proceed via intermediate resonant states of the corresponding doubly charged ion. For atoms such as rare gases, the density of such resonances in the vicinity of subsequent ionization thresholds is high. As a result, the appearance energies for multiply charged ions are close to these thresholds, while the effective power indices μ in the near-threshold energy dependence of the cross section, σ ∝ E^μ, are lower compared to those from the Wannier theory. This provides a possible explanation of the recent experimental results of B. Gstir [Nucl. Instrum. Methods Phys. Res. B 205, 413 (2003)].
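As a rough illustration of how such an effective power index is extracted in practice, the sketch below fits a Wannier-type threshold law σ(E) = A (E − E_th)^μ to synthetic near-threshold data. The function names, the data, and the fitted values are purely illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch only: fitting a Wannier-type threshold law
# sigma(E) = A * (E - E_th)**mu to synthetic near-threshold data in order to
# extract an effective power index mu. All names and numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

def threshold_law(E, A, E_th, mu):
    """Power-law energy dependence of the cross section just above threshold."""
    excess = np.clip(E - E_th, 1e-12, None)   # floor avoids 0**mu issues while fitting
    return A * excess**mu

# Synthetic example data: energies in eV, cross sections in arbitrary units.
E = np.linspace(66.0, 80.0, 30)
sigma = threshold_law(E, A=0.02, E_th=65.4, mu=2.3) + 1e-4 * np.random.randn(E.size)

popt, pcov = curve_fit(threshold_law, E, sigma, p0=[0.01, 65.0, 2.0])
A_fit, E_th_fit, mu_fit = popt
print(f"appearance energy ~ {E_th_fit:.2f} eV, effective exponent mu ~ {mu_fit:.2f}")
```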
Abstract:
Reported herein are measured absolute single, double, and triple charge exchange (CE) cross sections for the highly charged ions (HCIs) Cq+ (q=5,6), Oq+ (q=6,7,8), and Neq+ (q=7,8) colliding with the molecular species H2O, CO, and CO2. Present data can be applied to interpreting observations of x-ray emissions from comets as they interact with the solar wind. As such, the ion impact energies of 7.0q keV (1.62–3.06 keV/amu) are representative of the fast solar wind, and data at 1.5q keV for O6+ (0.56 keV/amu) on CO and CO2 and 3.5q keV for O5+ (1.09 keV/amu) on CO provide checks of the energy dependence of the cross sections at intermediate and typical slow solar wind velocities. The HCIs are generated within a 14 GHz electron cyclotron resonance ion source. Absolute CE measurements are made using a retarding potential energy analyzer, with measurement of the target gas cell pressure and incident and final ion currents. Trends in the cross sections are discussed in light of the classical overbarrier model (OBM), extended OBM, and with recent results of the classical trajectory Monte Carlo theory.
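For orientation, the sketch below evaluates the simplest single-electron classical over-the-barrier estimate, σ ≈ π R_c² with capture radius R_c = (2√q + 1)/I_p in atomic units. The ionization potentials used are nominal literature values, the resulting numbers are order-of-magnitude illustrations only, and nothing here reproduces the measurements or the extended-OBM analysis reported in the paper.

```python
# Hedged sketch: the simplest single-electron classical over-the-barrier
# estimate of a charge-exchange cross section for a projectile of charge q
# on a target with ionization potential I_p (in eV). Illustrative only.
import math

HARTREE_EV = 27.2114          # 1 hartree in eV
A0_CM = 5.29177e-9            # Bohr radius in cm

def obm_cross_section(q, ip_ev):
    """Geometric over-the-barrier cross section in cm^2 for charge state q."""
    ip_au = ip_ev / HARTREE_EV
    r_c = (2.0 * math.sqrt(q) + 1.0) / ip_au      # capture radius in atomic units
    return math.pi * (r_c * A0_CM) ** 2

# Nominal ionization potentials (eV); treat as illustrative inputs.
for target, ip in [("H2O", 12.6), ("CO", 14.0), ("CO2", 13.8)]:
    print(f"O6+ on {target}: sigma ~ {obm_cross_section(6, ip):.2e} cm^2")
```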
Abstract:
In many domains where several competing classifiers are available, we want to combine them, or a subset of them, to obtain a more accurate classifier. In this paper we propose a ‘class-indifferent’ method for combining classifier decisions represented by evidential structures called triplet and quartet, using Dempster's rule of combination. This method is unique in that it distinguishes important elements from trivial ones when representing classifier decisions, makes use of more information than other methods in calculating the support for class labels, and provides a practical way to apply the theoretically appealing Dempster–Shafer theory of evidence to the problem of ensemble learning. We present a formalism for modelling classifier decisions as triplet mass functions and establish a range of formulae for combining these mass functions in order to arrive at a consensus decision. In addition, we carry out a comparative study with the alternative simplet and dichotomous structures, and compare two combination methods, Dempster's rule and majority voting, over the UCI benchmark data, to demonstrate the advantage our approach offers. (This work continues a line of research previously published in IEEE Transactions on Knowledge and Data Engineering and related conferences.)
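For readers unfamiliar with the combination rule itself, the following sketch implements generic Dempster's rule for two mass functions over a shared frame of discernment. It does not reproduce the paper's triplet or quartet structures; the focal elements and masses are invented for illustration.

```python
# Generic Dempster's rule of combination for two mass functions defined over
# a common frame of discernment. Focal elements are frozensets; the example
# masses below are made up for illustration.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions {frozenset: mass} with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                 # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two classifiers expressing support over the classes {c1, c2, c3}.
frame = frozenset({"c1", "c2", "c3"})
m1 = {frozenset({"c1"}): 0.6, frozenset({"c2"}): 0.3, frame: 0.1}
m2 = {frozenset({"c1"}): 0.5, frozenset({"c3"}): 0.4, frame: 0.1}
print(dempster_combine(m1, m2))
```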
Abstract:
The need to merge multiple sources of uncertain information is an important issue in many application areas, especially when there is potential for contradictions between sources. Possibility theory offers a flexible framework to represent, and reason with, uncertain information, and there is a range of merging operators, such as the conjunctive and disjunctive operators, for combining information. However, with the proposals to date, the context of the information to be merged is largely ignored during the process of selecting which merging operators to use. To address this shortcoming, in this paper we propose an adaptive merging algorithm which selects largely partially maximal consistent subsets (LPMCSs) of sources, that can be merged through relaxation of the conjunctive operator, by assessing the coherence of the information in each subset. In this way, a fusion process can integrate both conjunctive and disjunctive operators in a more flexible manner and thereby be more context dependent. A comparison with related merging methods shows how our algorithm can produce a more consensual result.
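As background, the sketch below shows the standard conjunctive (min) and disjunctive (max) operators on possibility distributions over a finite domain, together with the consistency degree h that signals conflict. It illustrates the basic operators only, not the LPMCS-based adaptive algorithm proposed here, and the distributions are made up.

```python
# Basic possibilistic merging operators on distributions over a finite domain:
# conjunctive = pointwise min, disjunctive = pointwise max. The consistency
# degree h of a distribution is its maximum value; a low h after conjunctive
# merging indicates conflict between the sources. Example values are invented.

def conjunctive(p1, p2):
    return {w: min(p1[w], p2[w]) for w in p1}

def disjunctive(p1, p2):
    return {w: max(p1[w], p2[w]) for w in p1}

def consistency(p):
    """h(pi) = max_w pi(w); h = 1 means fully consistent, 0 means total conflict."""
    return max(p.values())

pi1 = {"w1": 1.0, "w2": 0.6, "w3": 0.1}
pi2 = {"w1": 0.3, "w2": 1.0, "w3": 0.2}

merged = conjunctive(pi1, pi2)
print("conjunctive:", merged, "consistency h =", consistency(merged))
print("disjunctive:", disjunctive(pi1, pi2))
```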
Abstract:
Purpose: Environmental turbulence, including rapid changes in technology and markets, has resulted in the need for new approaches to performance measurement and benchmarking. There is a need for studies that attempt to measure and benchmark upstream, leading or developmental aspects of organizations. Therefore, the aim of this paper is twofold. The first is to conduct an in-depth case analysis of lead performance measurement and benchmarking, leading to the further development of a conceptual model derived from the extant literature and initial survey data. The second is to outline future research agendas that could further develop the framework and the subject area.
Design/methodology/approach: A multiple case analysis involving repeated in-depth interviews with managers in organisational areas of upstream influence in the case organisations.
Findings: It was found that the effect of external drivers for lead performance measurement and benchmarking was mediated by organisational context factors such as level of progression in business improvement methods. Moreover, the legitimation of the business improvement methods used for this purpose, although typical, had been extended beyond their original purpose with the development of bespoke sets of lead measures.
Practical implications: Examples of methods and lead measures are given that can be used by organizations in developing a programme of lead performance measurement and benchmarking.
Originality/value: There is a paucity of in-depth studies relating to the theory and practice of lead performance measurement and benchmarking in organisations.
Abstract:
We report the discovery of WASP-8b, a transiting planet of 2.25 ± 0.08 MJup on a strongly inclined eccentric 8.15-day orbit, moving in a retrograde direction to the rotation of its late-G host star. Evidence is found that the star is in a multiple stellar system with two other companions. The dynamical complexity of the system indicates that it may have experienced secular interactions such as the Kozai mechanism or a formation that differs from the “classical” disc-migration theory.
Abstract:
Abundant evidence for the occurrence of modulated envelope plasma wave packets is provided by recent satellite missions. These excitations are characterized by a slowly varying localized envelope structure, embedding the fast carrier wave, which appears to be the result of strong modulation of the wave amplitude. This modulation may be due to parametric interactions between different modes or, simply, to the nonlinear (self-)interaction of the carrier wave. A generic exact theory is presented in this study for the nonlinear self-modulation of known electrostatic plasma modes, by employing a collisionless fluid model. Both cold (zero-temperature) and warm fluid descriptions are discussed and the results are compared. The (moderately) nonlinear oscillation regime is investigated by applying a multiple scale technique. The calculation leads to a nonlinear Schrödinger-type equation (NLSE), which describes the evolution of the slowly varying wave amplitude in time and space. The NLSE admits localized envelope (solitary wave) solutions of bright (pulses) or dark (holes, voids) type, whose characteristics (maximum amplitude, width) depend on intrinsic plasma parameters. Effects such as the obliqueness of the amplitude perturbation (with respect to the propagation direction), finite temperature, and defect (dust) concentration are explicitly considered. The relevance to similar highly localized modulated wave structures observed during recent satellite missions is discussed.
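For reference, the generic cubic NLSE obtained from such a multiple-scale reduction, together with its standard stationary bright-envelope solution, is written below. The coefficients P and Q, which encode the plasma parameters, and the sign condition PQ > 0 (bright) versus PQ < 0 (dark) are quoted from the standard textbook form of the equation, not from this paper's specific derivation.

```latex
% Generic cubic NLSE for the slowly varying envelope psi(xi, tau); the
% dispersion coefficient P and nonlinearity coefficient Q carry the plasma
% parameters obtained from the multiple-scale reduction.
\begin{equation}
  i\,\frac{\partial \psi}{\partial \tau}
  + P\,\frac{\partial^{2} \psi}{\partial \xi^{2}}
  + Q\,|\psi|^{2}\psi = 0 .
\end{equation}
% Standard stationary bright-envelope (pulse) solution, valid for PQ > 0;
% dark/grey envelopes (holes, voids) require PQ < 0 instead.
\begin{equation}
  \psi_{\mathrm{bright}}(\xi,\tau)
  = \psi_{0}\,\mathrm{sech}\!\left(\frac{\xi}{L}\right)
    \exp\!\left(\frac{i\,Q\,\psi_{0}^{2}\,\tau}{2}\right),
  \qquad
  L^{2} = \frac{2P}{Q\,\psi_{0}^{2}} .
\end{equation}
```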
Absorbing new knowledge in small and medium-sized enterprises: A multiple case analysis of Six Sigma
Abstract:
The primary aim of this article is to critically analyse the development of Six Sigma theory and practice within small and medium-sized enterprises (SMEs) using a multiple case study approach. The article also explores the subsequent development of Lean Six Sigma as a means of addressing the perceived limitations of the efficacy of Six Sigma in this context. The overarching theoretical framework is that of absorptive capacity, where Six Sigma is conceptualized as new knowledge to be absorbed by smaller firms. The findings from a multiple case study involving repeat interviews and focus groups informed the development of an analytical model demonstrating the dynamic underlying routines for the absorptive capacity process and the development of a number of summative propositions relating the characteristics of SMEs to Six Sigma and Lean Six Sigma implementation.
Abstract:
We present a one-dimensional scattering theory which enables us to describe a wealth of effects arising from the coupling of the motional degree of freedom of scatterers to the electromagnetic field. Multiple scattering to all orders is taken into account. The theory is applied to describe the scheme of a Fabry-Perot resonator with one of its mirrors moving. The friction force, as well as the diffusion, acting on the moving mirror is derived. In the limit of a small reflection coefficient, the same model provides for the description of the mechanical effect of light on an atom moving in front of a mirror.
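A minimal flavour of such a formalism, assuming the common convention in which a thin lossless scatterer of polarizability ζ has the unimodular transfer matrix [[1+iζ, iζ], [−iζ, 1−iζ]] and free propagation is a diagonal phase matrix, is sketched below for two fixed scatterers forming a crude Fabry-Perot. The values of ζ, k, and d are arbitrary, and the motional coupling (the moving mirror) central to the paper is deliberately left out.

```python
# Hedged sketch of a 1D transfer-matrix composition for fixed point scatterers,
# in the spirit of (but not identical to) the paper's formalism. A thin lossless
# scatterer of polarizability zeta has transfer matrix [[1+i*z, i*z], [-i*z, 1-i*z]];
# free propagation over a distance d at wavenumber k is diag(e^{ikd}, e^{-ikd}).
import numpy as np

def scatterer(zeta):
    return np.array([[1 + 1j * zeta, 1j * zeta],
                     [-1j * zeta, 1 - 1j * zeta]])

def propagation(k, d):
    return np.diag([np.exp(1j * k * d), np.exp(-1j * k * d)])

def reflection_transmission(M):
    """Amplitude r, t for a wave incident from the left on the composite system."""
    r = -M[1, 0] / M[1, 1]
    t = 1.0 / M[1, 1]          # valid because det(M) = 1 for these lossless elements
    return r, t

# Two identical scatterers separated by d: a crude Fabry-Perot resonator.
k, d, zeta = 2 * np.pi, 0.5, 5.0
M_cavity = scatterer(zeta) @ propagation(k, d) @ scatterer(zeta)
r, t = reflection_transmission(M_cavity)
print(f"|r|^2 = {abs(r)**2:.3f}, |t|^2 = {abs(t)**2:.3f}, sum = {abs(r)**2 + abs(t)**2:.3f}")
```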
Abstract:
We present a generic transfer matrix approach for the description of the interaction of atoms possessing multiple ground state and excited state sublevels with light fields. This model allows us to treat multi-level atoms as classical scatterers in light fields modified by, in principle, arbitrarily complex optical components such as mirrors, resonators, dispersive or dichroic elements, or filters. We verify our formalism for two prototypical sub-Doppler cooling mechanisms and show that it agrees with the standard literature.
Abstract:
We tested the hypothesis that regulation of discrepancies between perceived actual and ideal differentiation between the ingroup and outgroup could help to explain the relationship between ingroup identification and intergroup bias when participants are recategorized into a superordinate group. Replicating previous findings, we found that following recategorization, identification was positively related to intergroup bias. No such differences emerged in a control condition. However, in the recategorization condition only, we also observed a positive association between ingroup identification and the perceived discrepancy between actual and ideal degree of differentiation from the outgroup: at higher levels of identification, participants increasingly perceived the ingroup to be less differentiated from the outgroup than they would ideally like. This tendency mediated the relationship between identification and bias. We discuss the theoretical, methodological and practical implications of these findings.