531 results for invariance


Relevance: 10.00%

Abstract:

The drinking refusal self-efficacy questionnaire (DRSEQ: Young, R.M., Oei, T.P.S., 1996. Drinking expectancy profile: test manual. Behaviour Research and Therapy Centre, University of Queensland, Australia; Young, R.M., Oei, T.P.S., Crook, G.M., 1991. Development of a drinking refusal self-efficacy questionnaire. J. Psychopathol. Behav. Assess. 13, 1-15) assesses a person's belief in their ability to resist alcohol. The DRSEQ is a psychometrically sound instrument based on exploratory factor analyses, but it has not previously been subjected to confirmatory factor analysis. In total, 2773 participants were used to test the factor structure of the DRSEQ. Initial analyses revealed that the original structure was not confirmed in the current study. Subsequent analyses resulted in a revised factor structure (DRSEQ-R) being confirmed in community, student and clinical samples. The DRSEQ-R was also found to have good construct and concurrent validity. The factor structure of the DRSEQ-R is more stable than that of the original DRSEQ, and the revised scale has considerable potential for future alcohol-related research. (c) 2004 Elsevier Ireland Ltd. All rights reserved.
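To make the confirmatory step concrete, the sketch below specifies a multi-item factor model and fits it with a structural-equation package. It is a minimal illustration assuming the Python semopy package, a hypothetical data file, and placeholder item names and factor labels; it is not the authors' actual analysis.

```python
# Minimal confirmatory factor analysis sketch (assumes the semopy package).
# Item names (drseq1..drseq9), factor labels, and the data file are
# hypothetical placeholders standing in for the DRSEQ-R subscales.
import pandas as pd
from semopy import Model

model_desc = """
SocialPressure  =~ drseq1 + drseq2 + drseq3
EmotionalRelief =~ drseq4 + drseq5 + drseq6
Opportunistic   =~ drseq7 + drseq8 + drseq9
"""

data = pd.read_csv("drseq_items.csv")   # hypothetical file of item responses

cfa = Model(model_desc)
cfa.fit(data)                # maximum-likelihood estimation by default
print(cfa.inspect())         # loadings, variances, covariances
```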

Relevance: 10.00%

Abstract:

The interaction of electromagnetic radiation with plasmas is studied in the relativistic four-vector formalism. A gauge- and Lorentz-invariant ponderomotive four-force is derived from the time-dependent nonlinear three-force of Hora (1985). This four-force, by virtue of its Lorentz invariance, contains new magnetic field terms. A new gauge- and Lorentz-invariant model of the response of a plasma to electromagnetic radiation is then devised, and an expression for the dispersion relation is obtained from this model. It is then proved that the magnetic permeability of a plasma is unity in a general reference frame. This is an important result, since it has previously been assumed in many plasma models.
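For orientation, the textbook dispersion relation for electromagnetic waves in an unmagnetised cold plasma, which any such model must recover, is consistent with unit magnetic permeability. The equations below state that standard result (with ω_p the plasma frequency, k the wavenumber and n the refractive index); they are quoted as background, not as the paper's derivation.

\[
\omega^{2} = \omega_{p}^{2} + c^{2}k^{2},
\qquad
n^{2} = \frac{c^{2}k^{2}}{\omega^{2}} = 1 - \frac{\omega_{p}^{2}}{\omega^{2}},
\qquad
\mu = 1 .
\]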

Relevance: 10.00%

Abstract:

The present investigation critically examined the factor structure and psychometric properties of the Anxiety Sensitivity Index - Revised (ASI-R). Confirmatory factor analysis using a clinical sample of adults (N = 248) revealed that the ASI-R could be improved substantially by removing 15 problematic items so as to capture the most robust dimensions of anxiety sensitivity. This modified scale was renamed the 21-item Anxiety Sensitivity Index (21-item ASI) and reanalyzed with a large sample of normative adults (N = 435), revealing configural and metric invariance across groups. Further comparisons with alternative models, using multi-sample analysis, indicated that the 21-item ASI was the best-fitting model for both groups. There was also evidence of internal consistency, test-retest reliability, and construct validity in both samples, suggesting that the 21-item ASI is a useful instrument for investigating the construct of anxiety sensitivity in both clinical and normative populations.
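As an illustration of the nested-model logic behind configural versus metric invariance testing, the snippet below computes a likelihood-ratio (chi-square difference) test from two sets of fit statistics. The numbers are hypothetical placeholders, not values reported for the 21-item ASI.

```python
# Chi-square difference test for nested measurement-invariance models.
# Fit statistics below are hypothetical placeholders for illustration only.
from scipy.stats import chi2

# Configural model: same factor pattern in both groups, loadings free.
chisq_configural, df_configural = 612.4, 372
# Metric model: factor loadings constrained equal across groups.
chisq_metric, df_metric = 634.9, 390

delta_chisq = chisq_metric - chisq_configural
delta_df = df_metric - df_configural
p_value = chi2.sf(delta_chisq, delta_df)

# A non-significant difference supports metric (loading) invariance.
print(f"delta chi2({delta_df}) = {delta_chisq:.1f}, p = {p_value:.3f}")
```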

Relevance: 10.00%

Abstract:

We present a Lorentz invariant extension of a previous model for intrinsic decoherence (Milburn 1991 Phys. Rev. A 44 5401). The extension uses unital semigroup representations of space and time translations rather than the more usual unitary representations, and does the least violence to physically important invariance principles. Physical consequences include a modification of the uncertainty principle and of field dispersion relations, similar to modifications suggested by quantum gravity and string theory, but without sacrificing Lorentz invariance. Some observational signatures are discussed.
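For context, the non-relativistic model being extended (Milburn 1991) replaces continuous unitary evolution with discrete unitary steps of mean duration τ; to lowest order in τ the density operator then obeys the master equation below. This is quoted as background from the cited reference, not as the Lorentz invariant generalisation itself.

\[
\frac{d\rho}{dt}
= \frac{1}{\tau}\Big(e^{-iH\tau/\hbar}\,\rho\,e^{iH\tau/\hbar}-\rho\Big)
\;\approx\; -\frac{i}{\hbar}\,[H,\rho]
\;-\;\frac{\tau}{2\hbar^{2}}\,\big[H,[H,\rho]\big].
\]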

Relevance: 10.00%

Abstract:

In order to quantify quantum entanglement in two-impurity Kondo systems, we calculate the concurrence, negativity, and von Neumann entropy. The entanglement of the two Kondo impurities is shown to be determined by two competing many-body effects, namely the Kondo effect and the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, I. Due to the spin-rotational invariance of the ground state, the concurrence and negativity are uniquely determined by the spin-spin correlation between the impurities. It is found that a critical minimum value of the antiferromagnetic correlation between the impurity spins is necessary for the two impurity spins to be entangled. This critical value is discussed in relation to the unstable fixed point of the two-impurity Kondo problem; specifically, at the fixed point there is no entanglement between the impurity spins. Entanglement will only be created [and quantum information processing (QIP) will only be possible] if the RKKY exchange energy, I, is at least several times larger than the Kondo temperature, T_K. Quantitative criteria for QIP are given in terms of the impurity spin-spin correlation.
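The link between entanglement and the spin-spin correlator can be made explicit: for a spin-rotationally invariant two-qubit state of Werner form, the concurrence reduces to C = max(0, -2<S1.S2> - 1/2), so entanglement requires <S1.S2> < -1/4. The snippet below is a small sketch of that relation; the Werner-form reduction and the example values are our illustrative assumptions, not results quoted from the paper.

```python
# Concurrence of a spin-rotationally invariant (Werner-form) two-qubit state
# as a function of the impurity spin-spin correlation <S1.S2>.
# The Werner-form reduction and the sample values are illustrative assumptions.

def concurrence_from_correlation(s1_dot_s2: float) -> float:
    """C = max(0, -2<S1.S2> - 1/2); entanglement requires <S1.S2> < -1/4."""
    return max(0.0, -2.0 * s1_dot_s2 - 0.5)

# Singlet limit: <S1.S2> = -3/4 gives C = 1 (maximal entanglement).
print(concurrence_from_correlation(-0.75))   # 1.0
# Weak antiferromagnetic correlation: <S1.S2> = -0.2 gives C = 0 (separable).
print(concurrence_from_correlation(-0.2))    # 0.0
```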

Relevance: 10.00%

Abstract:

Traditionally, machine learning algorithms have been evaluated in applications where assumptions can be reliably made about class priors and/or misclassification costs. In this paper, we consider the case of imprecise environments, where little may be known about these factors and where they may vary significantly once the system is deployed. Specifically, the use of precision-recall analysis is investigated and compared with better-known performance measures such as error rate and the receiver operating characteristic (ROC). We argue that while ROC analysis is invariant to variations in class priors, this invariance in fact hides an important factor in the evaluation of imprecise environments. We therefore develop a generalised precision-recall analysis methodology in which variation due to prior class probabilities is incorporated into a multi-way analysis of variance (ANOVA). The increased sensitivity and reliability of this approach is demonstrated in a remote sensing application.
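The prior-dependence that ROC analysis hides is easy to demonstrate numerically: for a classifier with fixed true- and false-positive rates, the ROC operating point does not move when the class prior changes, but precision does. The rates and priors below are arbitrary illustrations, not figures from the remote sensing study.

```python
# Precision depends on the class prior even when the ROC operating point
# (TPR, FPR) is fixed. Rates and priors below are arbitrary illustrations.

def precision(tpr: float, fpr: float, prior: float) -> float:
    """P(positive | predicted positive) for a given positive-class prior."""
    tp = prior * tpr
    fp = (1.0 - prior) * fpr
    return tp / (tp + fp)

tpr, fpr = 0.80, 0.10           # one fixed point on the ROC curve
for prior in (0.5, 0.1, 0.01):  # class priors varying at deployment time
    print(f"prior={prior:.2f}  precision={precision(tpr, fpr, prior):.3f}")
# prior=0.50  precision=0.889
# prior=0.10  precision=0.471
# prior=0.01  precision=0.075
```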

Relevance: 10.00%

Abstract:

Recovering position from sensor information is an important problem in mobile robotics, known as localisation. Localisation requires a map or some other description of the environment to give the robot a context in which to interpret sensor data. The mobile robot system under discussion uses an artificial neural representation of position. Building a geometrical map of the environment with a single camera and artificial neural networks is difficult; it is simpler instead to learn position as a function of the visual input. Usually when learning images, an intermediate representation is employed. An appropriate starting point for a biologically plausible image representation is the complex cells of the visual cortex, which have invariance properties that appear useful for localisation. The effectiveness for localisation of two different complex cell models is evaluated. Finally, the ability of a simple neural network with single-shot learning to recognise these representations and localise a robot is examined.
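One standard way to realise the phase invariance of cortical complex cells is the quadrature energy model: square and sum the outputs of an even and an odd Gabor filter. The sketch below is a generic NumPy implementation of that idea, not the specific complex cell models compared in the paper.

```python
# Quadrature "energy model" of a complex cell: the response is nearly
# invariant to the phase (local position) of a grating, the kind of
# invariance that makes such features attractive for localisation.
import numpy as np

def gabor(size=32, wavelength=8.0, sigma=5.0, phase=0.0):
    """2-D Gabor patch (vertical orientation) with the given phase."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x / wavelength + phase)
    return envelope * carrier

def complex_cell_response(patch):
    """Sum of squared responses of an even/odd (quadrature) Gabor pair."""
    even = np.sum(patch * gabor(phase=0.0))
    odd = np.sum(patch * gabor(phase=np.pi / 2.0))
    return even**2 + odd**2

# Shifting the phase of a grating-like stimulus leaves the energy response
# nearly unchanged, unlike the responses of the individual (simple-cell) filters.
for stimulus_phase in (0.0, 0.7, 1.4):
    patch = gabor(phase=stimulus_phase)
    print(round(complex_cell_response(patch), 2))
```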

Relevance: 10.00%

Abstract:

A family of measurements of generalisation is proposed for estimators of continuous distributions. In particular, they apply to neural network learning rules associated with continuous neural networks. The optimal estimators (learning rules) in this sense are Bayesian decision methods with information divergence as the loss function. The Bayesian framework guarantees internal coherence of such measurements, while the information geometric loss function guarantees invariance. The theoretical solution for the optimal estimator is derived by a variational method. It is applied to the family of Gaussian distributions and the implications are discussed. This is one in a series of technical reports on this topic; it generalises the results of [Zhu95:prob.discrete] from discrete to continuous distributions and serves as a concrete example of a larger picture [Zhu95:generalisation].
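The key step behind "Bayesian decision methods with information divergence as loss function" can be spelled out. Writing pi for the posterior over the true distribution p, D for the Kullback-Leibler divergence, and using generic notation (not the report's own symbols):

\[
\mathbb{E}_{\pi(p\mid\mathrm{data})}\big[D(p\,\|\,q)\big]
= \mathbb{E}_{\pi}\!\Big[\int p\log p\Big] \;-\; \int \bar p(x)\,\log q(x)\,dx,
\qquad \bar p := \mathbb{E}_{\pi}[p].
\]

Only the second term depends on the estimate q, and by Gibbs' inequality the cross-entropy term is minimised over normalised q at q = p-bar: the optimal estimator is the posterior average of the distribution.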

Relevance: 10.00%

Abstract:

Neural networks can be regarded as statistical models and can be analysed in a Bayesian framework. Generalisation is measured by the performance on independent test data drawn from the same distribution as the training data. Such performance can be quantified by the posterior average of the information divergence between the true and the model distributions. Averaging over the Bayesian posterior guarantees internal coherence; using information divergence guarantees invariance with respect to representation. The theory generalises the least-mean-squares theory for linear Gaussian models to general problems of statistical estimation. The main results are: (1) the ideal optimal estimate is always given by the average over the posterior; (2) the optimal estimate within a computational model is given by the projection of the ideal estimate onto the model. This incidentally shows that some currently popular methods dealing with hyperpriors are in general unnecessary and misleading. The extension of information divergence to positive normalisable measures reveals a remarkable relation between the δ dual affine geometry of statistical manifolds and the geometry of the dual pair of Banach spaces L_δ and L_δ̄. It therefore offers a conceptual simplification of information geometry. The general conclusion on the issue of evaluating neural network learning rules and other statistical inference methods is that such evaluations are only meaningful under three assumptions: the prior P(p), describing the environment of all the problems; the divergence D_δ, specifying the requirement of the task; and the model Q, specifying the available computing resources.
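Result (2) can be written compactly in the same generic notation as the sketch above: when the estimate is restricted to a computational model Q, the optimum is the information projection of the ideal (posterior-average) estimate p-bar onto Q,

\[
\hat q \;=\; \arg\min_{q\in Q}\; D(\bar p\,\|\,q),
\]

since the q-dependent part of the posterior expected loss is the same cross-entropy term as before, which differs from D(p-bar || q) only by a constant independent of q.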

Relevance: 10.00%

Abstract:

This paper presents a generic strategic framework of alternative international marketing strategies and market segmentation based on intra- and inter-cultural behavioural homogeneity. Consumer involvement (CI) is proposed as a pivotal construct for capturing behavioural homogeneity and identifying market segments. Results from a five-country study demonstrate how the strategic framework can be valuable in managerial decision-making. First, there is evidence for the cultural invariance of the measurement of CI, allowing a true comparison of inter- and intra-cultural behavioural homogeneity. Second, CI influences purchase behaviour, and its evaluation provides a rich source of information for responsive market segmentation. Finally, a decomposition of behavioural variance suggests that national-cultural environment and nationally transcendent variables explain differences in behaviour. The Behavioural Homogeneity Evaluation Framework therefore suggests appropriate international marketing strategies, providing practical guidance for implementing involvement-contingent strategies. © 2007 Academy of International Business. All rights reserved.

Relevance: 10.00%

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We proposed a model of this motion sharpening that invokes a local, nonlinear contrast transducer function (Hammett et al., 1998, Vision Research 38, 2099-2108). Response saturation in the transducer compresses or 'clips' the input spatial waveform, rendering the edges sharper. To explain the increasing distortion of drifting edges at higher speeds, the degree of nonlinearity must increase with speed or temporal frequency. A dynamic contrast gain control before the transducer can account for both the speed dependence and the approximate contrast invariance of motion sharpening (Hammett et al., 2003, Vision Research, in press). We show here that this model also predicts perceived sharpening of briefly flashed and flickering edges, and that it accounts fairly well for experimental data from all three modes of presentation (motion, flash, and flicker). At moderate durations and lower temporal frequencies the gain control attenuates the input signal, thus protecting it from later compression by the transducer. The gain control is somewhat sluggish, so it suffers both a slow onset and a loss of power at high temporal frequencies. Consequently, brief presentations and high temporal frequencies of drift and flicker are less protected from distortion, and show greater perceptual sharpening.
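A toy version of the clipping mechanism makes the idea concrete: pass a blurred edge profile through a compressive transducer and the central transition steepens relative to the response range. The saturating tanh transducer and its gain below are generic illustrative choices, not the fitted function from the papers cited.

```python
# Toy illustration of "clipping" by a compressive contrast transducer:
# a saturating nonlinearity steepens the central transition of a blurred edge,
# so the edge is encoded as sharper. The tanh transducer and its gain are
# illustrative choices, not the fitted model of the cited papers.
import numpy as np
from scipy.special import erf

x = np.linspace(-2.0, 2.0, 4001)                    # position (arbitrary units)
sigma = 0.5                                         # blur of the input edge
contrast = 0.3
edge = contrast * erf(x / (sigma * np.sqrt(2.0)))   # Gaussian-blurred edge

def transducer(c, gain=10.0):
    """Compressive response: near-linear for small c, saturating for large c."""
    return np.tanh(gain * c)

def max_slope(profile):
    return np.max(np.gradient(profile, x))

response = transducer(edge)

# Saturation compresses the flanks relative to the centre, so the response
# has a proportionally steeper transition than the input edge.
print(max_slope(edge) / (2 * contrast))                          # input edge
print(max_slope(response) / (response.max() - response.min()))  # output: larger
```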

Relevance: 10.00%

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We have previously shown (Vision Research 38 (1998) 2108) how such distortions in perceived edge blur may be accounted for by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. If the form of the transducer is fixed (independent of contrast) for a given speed, then a strong prediction of the model is that motion sharpening should increase with increasing contrast. We measured the sharpening of periodic patterns over a large range of contrasts, blur widths and speeds. The results indicate that, whilst sharpening increases with speed, it is practically invariant with contrast. The contrast invariance of motion sharpening is therefore not explained by an early, static compressive non-linearity alone; however, several alternative explanations are also inconsistent with these results. We show that if a dynamic contrast gain control precedes the static non-linear transducer, then motion sharpening, its speed dependence, and its invariance with contrast can be predicted with reasonable accuracy. © 2003 Elsevier Science Ltd. All rights reserved.
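The contrast invariance can be demonstrated with the same toy transducer once a divisive gain control is placed in front of it: normalising the input by an estimate of its own contrast means the transducer sees roughly the same waveform at every contrast, so the predicted sharpening no longer grows with contrast. The static divisive form and its constants below are assumptions for illustration, not the dynamic gain control of the model.

```python
# Adding a divisive contrast gain control before the compressive transducer
# makes the predicted sharpening roughly contrast invariant, because the
# transducer then operates on a contrast-normalised waveform.
# The static divisive form and its constants are illustrative assumptions.
import numpy as np
from scipy.special import erf

x = np.linspace(-2.0, 2.0, 4001)
sigma = 0.5

def blurred_edge(contrast):
    return contrast * erf(x / (sigma * np.sqrt(2.0)))

def transducer(c, gain=10.0):
    return np.tanh(gain * c)

def sharpness(profile):
    """Peak gradient per unit response range: higher means a sharper edge."""
    return np.max(np.gradient(profile, x)) / (profile.max() - profile.min())

for contrast in (0.05, 0.2, 0.8):
    edge = blurred_edge(contrast)
    without_gc = transducer(edge)
    with_gc = transducer(edge / (0.01 + contrast))   # divisive gain control
    # Without gain control, sharpening grows strongly with contrast;
    # with it, sharpening is roughly the same at all three contrasts.
    print(f"contrast={contrast:.2f}  "
          f"no GC: {sharpness(without_gc):.2f}  with GC: {sharpness(with_gc):.2f}")
```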

Relevance: 10.00%

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We have previously shown how such distortions in perceived edge blur may be explained by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. To test this model further, we measured the sharpening of drifting, periodic patterns over a large range of contrasts, blur widths, and speeds. The results indicate that, while sharpening increased with speed, it was practically invariant with contrast. This contrast invariance cannot be explained by a fixed compressive nonlinearity, since that predicts almost no sharpening at low contrasts. We show by computational modelling of spatiotemporal responses that, if a dynamic contrast gain control precedes the static nonlinear transducer, then motion sharpening, its speed dependence, and its invariance with contrast can be predicted with reasonable accuracy.

Relevance: 10.00%

Abstract:

Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, the peak-trough separation in the 2nd-derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of two Gaussian edges (e.g. 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model in which the input signature is the 2nd derivative of the luminance profile and the templates are applied to this signature at its zero-crossings. The templates are Gaussian-derivative receptive fields that covary in width and length to form a self-similar set (i.e. same shape, different sizes). This naturally predicts that shorter edges should look sharper: as edge length gets shorter, the responses of longer templates drop more than those of shorter ones, so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale invariance implied by self-similarity, and a good fit was obtained with templates having a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
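The second candidate blur code mentioned above is easy to state numerically: for a Gaussian-blurred edge, the peak and trough of the second derivative of the luminance profile sit at +/- sigma, so their separation equals 2 sigma. The sketch below checks that relation; it is a numerical illustration of the blur 'signature', not a simulation of the template model or of the experiments.

```python
# For a Gaussian-blurred edge, the 2nd derivative of the luminance profile has
# its peak and trough at +/- sigma, so peak-trough separation = 2 * sigma.
# Purely a numerical check of this "signature"; not the template model itself.
import numpy as np
from scipy.special import erf

x = np.linspace(-60.0, 60.0, 12001)        # position in min of arc
sigma = 10.0                               # edge blur (min of arc)
luminance = 0.5 * (1.0 + 0.3 * erf(x / (sigma * np.sqrt(2.0))))  # 30% contrast edge

d1 = np.gradient(luminance, x)             # 1st derivative: Gaussian profile
d2 = np.gradient(d1, x)                    # 2nd derivative: peak/trough at +/- sigma

separation = x[np.argmin(d2)] - x[np.argmax(d2)]
print(separation)                          # ~ 2 * sigma = 20 min of arc
```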

Relevance: 10.00%

Abstract:

The present empirical investigation had a 3-fold purpose: (a) to cross-validate L. R. Offermann, J. K. Kennedy, and P. W. Wirtz's (1994) scale of Implicit Leadership Theories (ILTs) in several organizational settings and to provide a shorter scale of ILTs in organizations; (b) to assess the generalizability of ILTs across different employee groups; and (c) to evaluate ILTs' change over time. Two independent samples were used for the scale validation (N1 = 500 and N2 = 439). A 6-factor structure (Sensitivity, Intelligence, Dedication, Dynamism, Tyranny, and Masculinity) was found to most accurately represent ILTs in organizational settings. Regarding the generalizability of ILTs, although the 6-factor structure was consistent across different employee groups, there was only partial support for total factorial invariance. Finally, evaluation of gamma, beta, and alpha change provided support for the stability of ILTs over time.