934 results for response theory
Abstract:
An investigation was conducted to evaluate the impact of experimental designs and spatial analyses (single-trial models) on the response to selection for grain yield in the northern grains region of Australia (Queensland and northern New South Wales). Two sets of multi-environment experiments were considered. One set, based on 33 trials conducted from 1994 to 1996, was used to represent the testing system of the wheat breeding program and is referred to as the multi-environment trial (MET). The second set, based on 47 trials conducted from 1986 to 1993, sampled a more diverse set of years and management regimes and was used to represent the target population of environments (TPE). There were 18 genotypes in common between the MET and TPE sets of trials. From indirect selection theory, the phenotypic correlation coefficient between the MET and TPE single-trial adjusted genotype means [r(p(MT))] was used to determine the effect of the single-trial model on the expected indirect response to selection for grain yield in the TPE based on selection in the MET. Five single-trial models were considered: randomised complete block (RCB), incomplete block (IB), spatial analysis (SS), spatial analysis with a measurement error (SSM) and a combination of spatial analysis and experimental design information to identify the preferred (PF) model. Bootstrap-resampling methodology was used to construct multiple MET data sets, ranging in size from 2 to 20 environments per MET sample. The size and environmental composition of the MET and the single-trial model influenced the r(p(MT)). On average, the PF model resulted in a higher r(p(MT)) than the IB, SS and SSM models, which were in turn superior to the RCB model for MET sizes based on fewer than ten environments. For METs based on ten or more environments, the r(p(MT)) was similar for all single-trial models.
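The bootstrap-resampling step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the simulated genotype means, noise levels, and random seed are all assumptions, chosen only to show how r(p(MT)) is estimated for MET samples of increasing size.

```python
import numpy as np

rng = np.random.default_rng(0)

n_geno = 18       # genotypes common to the MET and TPE sets
n_met_env = 33    # MET trials (1994-1996)

# Hypothetical single-trial adjusted genotype means: one column per environment.
true_merit = rng.normal(0.0, 1.0, n_geno)
met_means = true_merit[:, None] + rng.normal(0.0, 1.0, (n_geno, n_met_env))
tpe_means = true_merit + rng.normal(0.0, 0.3, n_geno)  # stand-in for TPE genotype means

def r_p_mt(met, tpe, env_idx):
    """Phenotypic correlation between MET genotype means (averaged over a
    bootstrap sample of environments) and TPE genotype means."""
    met_avg = met[:, env_idx].mean(axis=1)
    return np.corrcoef(met_avg, tpe)[0, 1]

# Bootstrap MET samples of increasing size; larger METs should yield
# a higher average correlation with the TPE means.
sizes = [2, 5, 10, 20]
avg_r = {}
for k in sizes:
    reps = [r_p_mt(met_means, tpe_means,
                   rng.choice(n_met_env, size=k, replace=True))
            for _ in range(200)]
    avg_r[k] = float(np.mean(reps))
```

In this toy setup the same qualitative pattern emerges as in the abstract: the expected indirect response (proxied by the correlation) rises with the number of environments in the MET sample.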
Abstract:
The study investigated theory of mind and central coherence abilities in adults with high-functioning autism (HFA) or Asperger syndrome (AS) using naturalistic tasks. Twenty adults with HFA/AS correctly answered significantly fewer theory of mind questions than 20 controls on a forced-choice response task. On a narrative task, there were no differences in the proportion of mental state words between the two groups, although the participants with HFA/AS were less inclined to provide explanations for characters' mental states. No between-group differences existed on the central coherence questions of the forced-choice response task, and the participants with HFA/AS included an equivalent proportion of explanations for non-mental state phenomena in their narratives as did controls. These results support the theory of mind deficit account of autism spectrum disorders, and suggest that difficulties in mental state attribution cannot be exclusively attributed to weak central coherence.
Abstract:
Fuzzy signal detection analysis can be a useful complementary technique to traditional signal detection theory analysis methods, particularly in applied settings. For example, traffic situations are better conceived as being on a continuum from no potential for hazard to high potential, rather than either having potential or not having potential. This study examined the relative contribution of sensitivity and response bias to explaining differences in the hazard perception performance of novices and experienced drivers, and the effect of a training manipulation. Novice drivers and experienced drivers were compared (N = 64). Half the novices received training, while the experienced drivers and half the novices remained untrained. Participants completed a hazard perception test and rated potential for hazard in occluded scenes. The response latency of participants to the hazard perception test replicated previous findings of experienced/novice differences and trained/untrained differences. Fuzzy signal detection analysis of both the hazard perception task and the occluded rating task suggested that response bias may be more central to hazard perception test performance than sensitivity, with trained and experienced drivers responding faster and with a more liberal bias than untrained novices. Implications for driver training and the hazard perception test are discussed.
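A fuzzy signal detection analysis of the kind used above can be sketched with one common formulation (in the spirit of fuzzy SDT, where both the signal and the response are degrees in [0, 1] rather than crisp categories). The trial values below are invented for illustration; the mapping functions are one standard choice, not necessarily the exact ones used in this study.

```python
def fuzzy_sdt(signal, response):
    """Fuzzy assignment of one trial to the four SDT outcome categories.

    signal, response: degrees in [0, 1], e.g. the rated hazard potential of a
    traffic scene and the strength of the driver's response. The four degrees
    of any trial sum to 1.
    """
    hit = min(signal, response)
    miss = max(signal - response, 0.0)
    false_alarm = max(response - signal, 0.0)
    correct_rejection = min(1.0 - signal, 1.0 - response)
    return hit, miss, false_alarm, correct_rejection

def fuzzy_rates(trials):
    """Aggregate fuzzy hit and false-alarm rates over a set of trials."""
    H = M = FA = CR = 0.0
    for s, r in trials:
        h, m, fa, cr = fuzzy_sdt(s, r)
        H, M, FA, CR = H + h, M + m, FA + fa, CR + cr
    hit_rate = H / (H + M)    # responses credited in proportion to signal degree
    fa_rate = FA / (FA + CR)
    return hit_rate, fa_rate

# Hypothetical (signal degree, response degree) pairs.
trials = [(0.9, 0.8), (0.7, 0.4), (0.2, 0.5), (0.1, 0.0)]
hit_rate, fa_rate = fuzzy_rates(trials)
```

From the aggregated fuzzy rates one can then derive sensitivity and response bias just as in crisp SDT, which is what allows their relative contributions to hazard perception performance to be separated.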
Abstract:
In this article I synthesise research and theory that advance our understanding of creativity and innovation implementation in groups at work. It is suggested that creativity occurs primarily at the early stages of innovation processes, with innovation implementation coming later. The influences of task characteristics, group knowledge diversity and skill, external demands, integrating group processes and intragroup safety are explored. It is proposed that perceived threat, uncertainty or other high levels of demands hinder creativity but aid the implementation of innovation. Diversity of knowledge and skills is a powerful predictor of innovation, but integrating group processes and competencies are needed to enable the fruits of this diversity to be harvested. The implications for theory and practice are also explored.
Abstract:
The convergence on the Big Five in personality theory has produced a demand for efficient yet psychometrically sound measures. Therefore, five single-item measures, using bipolar response scales, were constructed to measure the Big Five and evaluated in terms of their convergent and off-diagonal divergent properties, their pattern of criterion correlations and their reliability when compared with four longer Big Five measures. In a combined sample (N = 791) the Single-Item Measures of Personality (SIMP) demonstrated a mean convergence of r = 0.61 with the longer scales. The SIMP also demonstrated acceptable reliability, self–other accuracy, and divergent correlations, and a closely similar pattern of criterion correlations when compared with the longer scales. It is concluded that the SIMP offer a reasonable alternative to longer scales, balancing the demands of brevity versus reliability and validity.
Abstract:
This thesis has two aims. First, it sets out to develop an alternative methodology for the investigation of risk homeostasis theory (RHT). It is argued that the current methodologies of the pseudo-experimental design and post hoc analysis of road-traffic accident data both have their limitations, and that the newer 'game' type simulation exercises are also, but for different reasons, incapable of testing RHT predictions. The alternative methodology described here is based on the simulation of physical risk with intrinsic reward rather than a 'points pay-off'. The second aim of the thesis is to examine a number of predictions made by RHT through the use of this alternative methodology. Since the pseudo-experimental design and post hoc analysis of road-traffic data are both ill-suited to the investigation of that part of RHT which deals with the role of utility in determining risk-taking behaviour in response to a change in environmental risk, and since the concept of utility is critical to RHT, the methodology reported here is applied to the specific investigation of utility. Attention too is given to the question of which behavioural pathways carry the homeostasis effect, and whether those pathways are 'local' to the nature of the change in environmental risk. It is suggested that investigating RHT through this new methodology holds a number of advantages and should be developed further in an attempt to answer the RHT question. It is suggested too that the methodology allows RHT to be seen in a psychological context, rather than the statistical context that has so far characterised its investigation. The experimental findings reported here are in support of hypotheses derived from RHT and would therefore seem to argue for the importance of the individual and collective target level of risk, as opposed to the level of environmental risk, as the major determinant of accident loss.
Abstract:
Purpose – This paper aims to respond to John Rossiter's call for a “Marketing measurement revolution” in the current issue of EJM, as well as providing broader comment on Rossiter's C-OAR-SE framework, and measurement practice in marketing in general. Design/methodology/approach – The paper is purely theoretical, based on interpretation of measurement theory. Findings – The authors find that much of Rossiter's diagnosis of the problems facing measurement practice in marketing and social science is highly relevant. However, the authors find themselves opposed to the revolution advocated by Rossiter. Research limitations/implications – The paper presents a comment based on interpretation of measurement theory and observation of practices in marketing and social science. As such, the interpretation is itself open to disagreement. Practical implications – There are implications for those outside academia who wish to use measures derived from academic work as well as to derive their own measures of key marketing and other social variables. Originality/value – This paper is one of the few to explicitly respond to the C-OAR-SE framework proposed by Rossiter, and presents a number of points critical to good measurement theory and practice, which appear to remain underdeveloped in marketing and social science.
Abstract:
This paper uses a practice perspective to study coordinating as dynamic activities that are continuously created and modified in order to enact organizational relationships and activities. It is based on the case of Servico, an organization undergoing a major restructuring of its value chain in response to a change in government regulation. In our case, the actors iterate between the abstract concept of a coordinating mechanism referred to as end-to-end management and its performance in practice. They do this via five performative–ostensive cycles: (1) enacting disruption, (2) orienting to absence, (3) creating elements, (4) forming new patterns, and (5) stabilizing new patterns. These cycles and the relationships between them constitute a process model of coordinating. This model highlights the importance of absence in the coordinating process and demonstrates how experiencing absence shapes subsequent coordinating activity.
Abstract:
Chlamydia is a common sexually transmitted infection that has potentially serious consequences unless detected and treated early. The health service in the UK offers clinic-based testing for chlamydia but uptake is low. Identifying the predictors of testing behaviours may inform interventions to increase uptake. Self-tests for chlamydia may facilitate testing and treatment in people who avoid clinic-based testing. Self-testing and being tested by a health care professional (HCP) involve two contrasting contexts that may influence testing behaviour. However, little is known about how predictors of behaviour differ as a function of context. In this study, theoretical models of behaviour were used to assess factors that may predict intention to test in two different contexts: self-testing and being tested by a HCP. Individuals searching for or reading about chlamydia testing online were recruited using Google Adwords. Participants completed an online questionnaire that addressed previous testing behaviour and measured constructs of the Theory of Planned Behaviour and Protection Motivation Theory, which propose a total of eight possible predictors of intention. The questionnaire was completed by 310 participants. Sufficient data for multiple regression were provided by 102 and 118 respondents for self-testing and testing by a HCP respectively. Intention to self-test was predicted by vulnerability and self-efficacy, with a trend-level effect for response efficacy. Intention to be tested by a HCP was predicted by vulnerability, attitude and subjective norm. Thus, intentions to carry out two testing behaviours with very similar goals can have different predictors depending on test context. We conclude that interventions to increase self-testing should be based on evidence specifically related to test context.
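The multiple-regression step above can be sketched as an ordinary-least-squares fit of intention on the eight candidate predictors. The data here are simulated, and the predictor names and effect sizes are assumptions that merely echo the reported pattern for the self-test context (vulnerability and self-efficacy as the significant predictors); this is not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 102  # respondents with sufficient data for the self-testing model

predictors = ["attitude", "subjective_norm", "pbc",              # Theory of Planned Behaviour
              "vulnerability", "severity", "response_efficacy",  # Protection Motivation Theory
              "self_efficacy", "response_cost"]

X = rng.normal(size=(n, len(predictors)))
# Simulated outcome: vulnerability and self-efficacy drive intention to self-test.
intention = 0.5 * X[:, 3] + 0.4 * X[:, 6] + rng.normal(0.0, 0.5, n)

X1 = np.column_stack([np.ones(n), X])                # add an intercept column
beta, *_ = np.linalg.lstsq(X1, intention, rcond=None)
coefs = dict(zip(["intercept"] + predictors, beta))
```

Fitting the same model to the second sample (testing by a HCP) and comparing which coefficients are reliably non-zero is the logic behind the study's conclusion that the predictors of intention depend on test context.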
Abstract:
Like classic Signal Detection Theory (SDT), the recent optimal Binary Signal Detection Theory (BSDT) and the Neural Network Assembly Memory Model (NNAMM) built on it can successfully reproduce Receiver Operating Characteristic (ROC) curves, although the BSDT/NNAMM parameters (intensity of cue and neuron threshold) and the classic SDT parameters (perception distance and response bias) are essentially different. In the present work, BSDT/NNAMM optimal likelihood and posterior probabilities are analyzed analytically and used to generate ROCs and modified (posterior) mROCs, as well as the optimal overall likelihood and posterior. It is shown that, for the description of basic discrimination experiments in psychophysics within the BSDT, a ‘neural space’ can be introduced in which sensory stimuli are represented as neural codes and decision processes are defined; that the BSDT’s isobias curves can simultaneously be interpreted as universal psychometric functions satisfying the Neyman-Pearson objective; that the just noticeable difference (jnd) can be defined and interpreted as an atom of experience; and that near-neutral values of bias are the observer’s natural choice. The uniformity (no-priming) hypothesis, concerning the ‘in-mind’ distribution of false-alarm probabilities during ROC or overall-probability estimation, is introduced. The BSDT’s and classic SDT’s sensitivity, bias, and their ROC and decision spaces are compared.
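For reference, the classic SDT parameters mentioned above (sensitivity and response bias) can be computed from hit and false-alarm rates with only the standard library, under the usual equal-variance Gaussian model. The rates below are invented for illustration.

```python
from statistics import NormalDist

def sdt_parameters(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: d' = z(H) - z(F), criterion c = -(z(H) + z(F)) / 2.

    z is the inverse of the standard normal CDF; c = 0 corresponds to a
    neutral (unbiased) observer, the 'natural choice' noted in the abstract.
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, criterion

# Symmetric hit/false-alarm rates give a neutral criterion.
d_prime, criterion = sdt_parameters(0.84, 0.16)
```

Sweeping the criterion while holding d′ fixed traces out a single ROC curve, which is the representation both SDT and BSDT/NNAMM are shown to reproduce.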
Abstract:
The architecture and learning algorithm of a self-learning spiking neural network for fuzzy clustering tasks are outlined. Fuzzy receptive neurons for pulse-position transformation of the input data are considered. It is proposed to treat the spiking neural network in terms of the classical automatic control theory apparatus based on the Laplace transform. It is shown that synapse functioning can be easily modeled by a second-order damped response unit, while the spiking neuron soma is presented as a threshold detection unit. Thus, the proposed fuzzy spiking neural network is an analog-digital nonlinear pulse-position dynamic system. It is demonstrated how fuzzy probabilistic and possibilistic clustering approaches can be implemented on the basis of the presented spiking neural network.
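The synapse-as-second-order-damped-unit idea can be sketched by convolving incoming spikes with the impulse response of a critically damped second-order system, 1/(τs + 1)² in the Laplace domain, which in the time domain is an alpha function, here normalized as h(t) = (t/τ)·e^(1 − t/τ) so it peaks at 1 when t = τ, and feeding the summed potential to a threshold-detection soma. The time constant and threshold below are assumptions for illustration, not values from the paper.

```python
import math

def alpha_psp(t, tau):
    """Impulse response of a critically damped second-order unit
    (1/(tau*s + 1)^2 up to scaling), normalized to peak 1 at t = tau."""
    if t < 0:
        return 0.0
    return (t / tau) * math.exp(1.0 - t / tau)

def soma_fires(spike_times, t, tau=2.0, threshold=1.5):
    """Threshold-detection soma: fires when the summed postsynaptic
    potentials from incoming spikes reach the threshold at time t."""
    potential = sum(alpha_psp(t - ts, tau) for ts in spike_times)
    return potential >= threshold, potential

# Two nearly coincident input spikes pile up and fire the soma;
# a single spike peaks at 1.0 and stays below the threshold.
fires_pair, v_pair = soma_fires([0.0, 0.5], t=2.3)
fires_single, v_single = soma_fires([0.0], t=2.0)
```

The pulse-position coding in the paper then enters through when the input spikes arrive: earlier spikes encode stronger membership, so coincident arrivals (strong agreement among fuzzy receptive neurons) are what drive the soma over threshold.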
Abstract:
This review incorporates strategic planning research conducted over more than 30 years and ranges from the classical model of strategic planning to recent empirical work on intermediate outcomes, such as the reduction of managers’ position bias and the coordination of subunit activity. Prior reviews have not had the benefit of more socialized perspectives that developed in response to Mintzberg’s critique of planning, including research on planned emergence and strategy-as-practice approaches. To stimulate a resurgence of research interest on strategic planning, this review therefore draws on a diverse body of theory beyond the rational design and contingency approaches that characterized research in this domain until the mid-1990s. We develop a broad conceptualization of strategic planning and identify future research opportunities for improving our understanding of how strategic planning influences organizational outcomes. Our framework incorporates the role of strategic planning practitioners; the underlying routines, norms, and procedures of strategic planning (practices); and the concrete activities of planners (praxis).
Abstract:
Purpose – The purpose of this paper is to constructively discuss the meaning and nature of (theoretical) contribution in accounting research, as represented by Lukka and Vinnari (2014) (hereafter referred to as LV). The authors’ aim is to further encourage debate on what constitutes management accounting theory (or theories) and how to modestly clarify contributions to the extant literature. Design/methodology/approach – The approach the authors take can be seen as “(a)n interdisciplinary literature sourced analysis and critique of the movement’s positioning and trajectory” (Parker and Guthrie, 2014, p. 1218). The paper also draws upon and synthesizes the present authors’ and others’ contributions to accounting research using actor network theory. Findings – While a distinction between domain and methods theories … may appear analytically viable, it may be virtually impossible to separate them in practice. In line with Armstrong (2008), the authors cast a measure of doubt on the quest to significantly extend theoretical contributions from accounting research. Research limitations/implications – Rather than making (apparently) grandiose claims about (theoretical) contributions from individual studies, the authors suggest making more modest claims from the research. The authors try to provide a more appropriate and realistic approach to the appreciation of research contributions. Originality/value – The authors contribute to the debate on how theoretical contributions can be made in the accounting literature by constructively debating some views that have recently been outlined by LV. The aim is to provide some perspective on the usefulness of the criteria suggested by these authors. The authors also suggest and highlight (alternative) ways in which contributions might be discerned and clarified.
Abstract:
This study theoretically investigates shockwave and microbubble formation due to laser absorption by microparticles and nanoparticles. The initial motivation for this research was to understand the underlying physical mechanisms responsible for laser damage to the retina, as well as to predict threshold levels for damage for laser pulses of progressively shorter durations. The strongest absorbers in the retina are micron-size melanosomes, and their absorption of laser light causes them to accrue very high energy density. I theoretically investigate how this absorbed energy is transferred to the surrounding medium. For a wide range of conditions I calculate shockwave generation and bubble growth as a function of three parameters: fluence, pulse duration and pulse shape. In order to develop a rigorous physical treatment, the governing equations for the behavior of an absorber and for the surrounding medium are derived. Shockwave theory is investigated and the conclusion is that a shock pressure explanation is likely to be the underlying physical cause of retinal damage at threshold fluences for sub-nanosecond pulses. The same effects are also expected for non-biological micro and nano absorbers.
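As a back-of-the-envelope companion to the energy-density argument: for a strongly absorbing particle under thermal confinement (pulse shorter than the heat-diffusion time), the initial temperature rise before heat flows to the surrounding medium is roughly ΔT ≈ μ_a·F / (ρ·c_p). All parameter values below are rough, illustration-only figures, not the thesis's numbers or its full governing equations.

```python
# Order-of-magnitude estimate of the initial temperature rise in an
# absorber under thermal confinement (assumed values throughout).
mu_a = 8.0e4     # absorption coefficient, 1/m (melanosome-like, assumed)
fluence = 50.0   # radiant exposure, J/m^2 (threshold-scale value, assumed)
rho = 1.2e3      # absorber density, kg/m^3 (assumed)
cp = 2.5e3       # specific heat, J/(kg*K) (assumed)

# Volumetric energy density deposited, J/m^3, and the resulting temperature jump.
energy_density = mu_a * fluence
delta_T = energy_density / (rho * cp)
```

Comparing this deposited energy density against the thresholds for vaporization (bubble growth) and for launching a pressure transient (shockwave) is the kind of bookkeeping the fluence/pulse-duration/pulse-shape calculations above rest on.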