44 results for test-process features
in Aston University Research Archive
Abstract:
This research addressed the question: "Which factors predict the effectiveness of healthcare teams?" It was addressed by assessing the psychometric properties of a new measure of team functioning, using data collected from 797 team members in 61 healthcare teams. This new measure is the Aston Team Performance Inventory (ATPI), developed by West, Markiewicz and Dawson (2005) and based on the Input-Process-Output (IPO) model. The ATPI was pilot tested to examine its reliability in the Jordanian cultural context: a sample of five teams comprising 3-6 members each was randomly selected from the Jordan Red Crescent health centers in Amman. Factors that predict team effectiveness were then explored in a Jordanian sample (1622 members and 255 leaders in 277 healthcare teams from hospitals in Amman) using self-report and leader rating measures adapted from work by West, Borrill et al. (2000) to determine team effectiveness and innovation from the leaders' point of view. The results demonstrate the validity and reliability of the measures for use in healthcare settings. Team effort and skills and leader managing had the strongest associations with team processes in terms of team objectives, reflexivity, participation, task focus, creativity and innovation. Team inputs in terms of task design, team effort and skills, and organizational support were associated with team effectiveness and innovation, whereas team resources were associated only with team innovation. Team objectives had the strongest mediated and direct association with team effectiveness, whereas task focus had the strongest mediated and direct association with team innovation. Finally, among the leadership variables, leader managing had the strongest association with team effectiveness and innovation. The theoretical and practical implication of this thesis is that team effectiveness and innovation are influenced by multiple factors that must all be taken into account. The key factors managers need to ensure are in place for effective teams are team effort and skills, organizational support, and team objectives. In conclusion, applying these findings to healthcare teams in Jordan will help improve their effectiveness, and thus the healthcare services they provide.
Abstract:
This study examined an integrated model of the antecedents and outcomes of organisational and overall justice using a sample of Indian call centre employees (n = 458). Results of structural equation modelling (SEM) revealed that the four organisational justice dimensions relate to overall justice. Further, work group identification mediated the influence of overall justice on counterproductive work behaviours, such as presenteeism and social loafing, while conscientiousness was a significant moderator of the relationship between work group identification and both presenteeism and social loafing. Theoretical and practical implications are discussed.
Abstract:
Based on data from 2091 call centre representatives working in 85 call centres in the UK, central assumptions of affective events theory (AET) are tested. AET predicts that specific features of work (e.g. autonomy) have an impact on the arousal of emotions and moods at work that, in turn, co-determine employees' job satisfaction. AET further proposes that job satisfaction is an evaluative judgement that mainly explains cognitive-based behaviour, whereas emotions and moods better predict affective-based behaviour. The results support these assumptions. A clear separation of the key constructs (job satisfaction, positive and negative emotions) was possible. Moreover, correlations between several work features (e.g. supervisory support) and job satisfaction were, in part, mediated by work emotions, even when controlling for gender, age, call centre type (in-house versus outsourced centres) and call centre size. Predictions regarding the consequences of satisfaction and affect were partly corroborated, as continuance commitment was more strongly related to job satisfaction than to positive emotions. In addition, affective commitment and health complaints were related to both emotions and job satisfaction to the same extent. Thus, AET is a fruitful framework for explaining why and how the management strategies used to design work features influence important organizational attitudes and the well-being of employees. © 2006 British Academy of Management.
Abstract:
Purpose – To investigate the role of simulation in the introduction of technology in a continuous operations process.
Design/methodology/approach – A case-based research method was chosen, with the aim of providing an exemplar of practice and testing the proposition that the use of simulation can improve the implementation and running of conveyor systems in continuous process facilities.
Findings – The research determines the optimum rate of re-introduction to a conveyor system of inventory generated during a breakdown event.
Research limitations/implications – More case studies are required to demonstrate the operational and strategic benefits that can be gained by using simulation to assess technology in organisations.
Practical implications – A practical outcome of the study was the implementation of a policy for the manual re-introduction of inventory on a conveyor line after a breakdown event had occurred.
Originality/value – The paper presents a novel example of the use of simulation to estimate the re-introduction rate of inventory after a breakdown event on a conveyor line. It highlights how addressing this operational issue ahead of implementation improves the likelihood that the strategic decision to acquire the technology will succeed.
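To make the kind of question such a simulation answers concrete, here is a minimal sketch; the policy, rates and capacities below are invented for illustration and are not the case study's actual model.

```python
# Hypothetical sketch: compare manual re-introduction rates for inventory
# accumulated during a conveyor breakdown. All parameters are illustrative,
# not taken from the case study.

def simulate(reintro_rate, backlog=200, normal_arrivals=5.0,
             conveyor_capacity=8.0, minutes=240):
    """Return True if the conveyor never exceeds capacity while the backlog
    is cleared at `reintro_rate` items/min on top of the normal flow."""
    remaining = backlog
    for _ in range(minutes):
        reintroduced = min(reintro_rate, remaining)
        remaining -= reintroduced
        load = normal_arrivals + reintroduced   # items placed this minute
        if load > conveyor_capacity:            # conveyor overloaded
            return False
        if remaining == 0:
            return True
    return remaining == 0                       # cleared within the window

# Scan candidate rates to find the fastest feasible re-introduction policy.
feasible = [r for r in range(1, 10) if simulate(r)]
print("max feasible rate:", max(feasible), "items/min")
```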
Abstract:
This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. It addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. The topic is driven by a series of problems in neuroscience, which form the principal background to this work: the underlying system is the human brain, and the data are generated by modern electromagnetic neuroimaging methods. In this thesis, the brain's underlying functional mechanisms are described using the recent mathematical formalism of dynamical systems on complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis with which to model its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way the neural elements communicate with each other through the brain's anatomical structure, via phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which alludes to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel functional connectivity algorithms designed to extract different specific aspects of the interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow indirect assessment of synchronisation in the local network from a single time series. This approach is useful for inferring the coupling in a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and the inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced in order to detect the direction of the coupling between different regions of the brain; the method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The discussion is developed alongside examples of synthetic and real experimental data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies, and are used to test the techniques developed in this thesis. The real datasets illustrate the problem of brain connectivity in important neurological disorders such as epilepsy and Parkinson's disease. The functional connectivity methods are applied to intracranial EEG recordings in order to extract features that characterise the underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs.
The methodology is also applied to an MEG dataset containing healthy, Parkinson's and dementia subjects, with the aim of distinguishing pathological from physiological patterns of connectivity.
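As a concrete illustration of the phase synchronisation measures discussed above, below is a minimal sketch of the phase-locking value (PLV) computed from Hilbert-transform phases; the two signals are synthetic stand-ins, not the thesis' EEG/MEG data.

```python
# Minimal phase-locking value (PLV) sketch, one standard phase synchronisation
# measure. The signals are synthetic and deliberately coupled, purely for
# illustration.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 5 * t + 0.8) + 0.3 * rng.standard_normal(t.size)

# Instantaneous phases from the analytic (Hilbert-transformed) signals.
phase_x, phase_y = np.angle(hilbert(x)), np.angle(hilbert(y))

# PLV: modulus of the mean unit phasor of the phase difference; 1 = locked.
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV = {plv:.3f}")
```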
Abstract:
In stereo vision, regions with ambiguous or unspecified disparity can acquire perceived depth from unambiguous regions. This has been called stereo capture, depth interpolation or surface completion. We studied some striking induced depth effects suggesting that depth interpolation and surface completion are distinct stages of visual processing. An inducing texture (2-D Gaussian noise) had sinusoidal modulation of disparity, creating a smooth horizontal corrugation. The central region of this surface was replaced by various test patterns whose perceived corrugation was measured. When the test image was horizontal 1-D noise, shown to one eye or to both eyes without disparity, it appeared corrugated in much the same way as the disparity-modulated (DM) flanking regions. But when the test image was 2-D noise, or vertical 1-D noise, little or no depth was induced. This suggests that horizontal orientation was a key factor. For a horizontal sine-wave luminance grating, strong depth was induced, but for a square-wave grating, depth was induced only when its edges were aligned with the peaks and troughs of the DM flanking surface. These and related results suggest that disparity (or local depth) propagates along horizontal 1-D features, and then a 3-D surface is constructed from the depth samples acquired. The shape of the constructed surface can be different from the inducer, and so surface construction appears to operate on the results of a more local depth propagation process.
Abstract:
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
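The two-stage scheme described above is easy to sketch in code. In the following illustration the stimulus, filter scales and simple peak read-out are my own choices, and the full model's scale normalisation (which lets the peak's scale estimate blur) is omitted for brevity.

```python
# Minimal sketch of the two-stage edge scheme: Gaussian 1st derivative filter,
# half-wave rectification, then Gaussian 2nd derivative filter, with the edge
# read out as the peak of a scale-space response map. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

x = np.linspace(-50, 50, 1001)                      # spatial axis
edge = 0.5 * (1 + erf(x / (4.0 * np.sqrt(2))))      # Gaussian-integral edge

scales, resp = [1, 2, 4, 8, 16], []
for s in scales:
    d1 = gaussian_filter1d(edge, sigma=s, order=1)  # odd-symmetric first stage
    r1 = np.maximum(d1, 0.0)                        # half-wave rectification
    d2 = gaussian_filter1d(r1, sigma=s, order=2)    # even-symmetric second stage
    resp.append(-d2)                                # response peaks at the edge

resp = np.array(resp)
k, i = np.unravel_index(np.argmax(resp), resp.shape)
print(f"edge located at x = {x[i]:.1f} (true position 0.0), scale index {k}")
```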
Abstract:
A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
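The key modification described above, replacing the half-wave rectifier with a two-parameter smoothed threshold, can be sketched as follows; the softplus form below is one plausible two-parameter choice, not necessarily the paper's exact function.

```python
# Contrast a plain half-wave rectifier with a two-parameter smoothed threshold
# of the kind the abstract describes. The softplus form is an illustrative
# assumption; the published function may differ.
import numpy as np

def half_wave(g):
    """Plain half-wave rectifier: passes positive gradients unchanged."""
    return np.maximum(g, 0.0)

def smoothed_threshold(g, t=0.05, k=0.02):
    """Softplus threshold: ~0 below threshold t, approaching g - t above it;
    k sets how gradual the transition is."""
    return k * np.log1p(np.exp((g - t) / k))

gradients = np.array([0.01, 0.03, 0.05, 0.10, 0.20])
print(half_wave(gradients))           # shallow gradients pass through
print(smoothed_threshold(gradients))  # shallow gradients are suppressed
```

Suppressing very shallow gradients in this way is what lets the model explain why reducing an edge's contrast makes it look sharper.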
Abstract:
In many models of edge analysis in biological vision, the initial stage is a linear 2nd derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice-versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720.]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower. This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
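The linear prediction being tested here follows in one line, and writing out the rectifying stage shows why the nonlinear model escapes it (R denotes the half-wave rectifier; c and d are the ramp's slope and offset):

```latex
\frac{d^{2}}{dx^{2}}\bigl[L(x) + cx + d\bigr] = L''(x),
\qquad\text{but in general}\qquad
\frac{d^{2}}{dx^{2}}\,R\bigl(L'(x) + c\bigr) \neq \frac{d^{2}}{dx^{2}}\,R\bigl(L'(x)\bigr).
```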
Abstract:
There have been two main approaches to feature detection in human and computer vision, based either on the luminance distribution and its spatial derivatives, or on the spatial distribution of local contrast energy. Thus, bars and edges might arise from peaks of luminance and luminance gradient respectively, or bars and edges might be found at peaks of local energy, where local phases are aligned across spatial frequency. This basic issue of definition is important because it guides more detailed models and interpretations of early vision. Which approach better describes the perceived positions of features in images? We used the class of 1-D images defined by Morrone and Burr in which the amplitude spectrum is that of a (partially blurred) square-wave and all Fourier components have a common phase. Observers used a cursor to mark where bars and edges were seen for different test phases (Experiment 1) or judged the spatial alignment of contours that had different phases (e.g. 0° and 45°; Experiment 2). The feature positions defined by both tasks shifted systematically to the left or right according to the sign of the phase offset, increasing with the degree of blur. These shifts were well predicted by the location of luminance peaks (bars) and gradient peaks (edges), but not by energy peaks, which (by design) predicted no shift at all. These results encourage models based on a Gaussian-derivative framework, but do not support the idea that human vision uses points of phase alignment to find local, first-order features. Nevertheless, we argue that both approaches are presently incomplete and that a better understanding of early vision may combine insights from both. © 2004 Elsevier Ltd. All rights reserved.
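The two candidate feature definitions can be computed side by side as follows; the stimulus is a simple blurred odd-harmonic pattern standing in for the Morrone-Burr image class, so the numbers are purely illustrative.

```python
# Sketch of the two feature definitions compared above: peaks of luminance and
# of luminance gradient (derivative view) versus peaks of local energy, i.e.
# the signal squared plus its Hilbert transform squared (energy view).
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 1, 4000, endpoint=False)
# Square-wave-like pattern from its first few odd harmonics, common phase.
f = sum(np.sin(2 * np.pi * (2 * n + 1) * 4 * x) / (2 * n + 1) for n in range(4))
f = gaussian_filter1d(f, sigma=20, mode='wrap')   # partial blur

grad = np.gradient(f, x)                          # derivative-based edge map
energy = f**2 + np.imag(hilbert(f))**2            # local energy feature map

print("luminance peak (bar) near x =", x[np.argmax(f)])
print("gradient peak (edge) near x =", x[np.argmax(np.abs(grad))])
print("energy peak near x          =", x[np.argmax(energy)])
```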
Abstract:
The object of this work was to further develop the idea introduced by Muaddi et al (1981), which enables some of the disadvantages of earlier, destructive adhesion test methods to be overcome. The test is non-destructive in nature, but it does need to be calibrated against a destructive method. Adhesion is determined by measuring the effect of plating on internal friction; this is achieved by determining the damping of vibrations of a resonating specimen before and after plating, the level of adhesion being considered by the above authors to influence the degree of damping. In the major portion of the research work the electrodeposited metal was Watts nickel, which is ductile in nature and therefore suitable for peel adhesion testing. The base metals chosen were the aluminium alloys S1C and HE9, as it is relatively easy to produce varying levels of adhesion between these substrates and an electrodeposited coating by choosing the appropriate process sequence. S1C is commercially pure aluminium and was used to produce good adhesion; HE9 is a more difficult alloy to plate and was chosen to produce poorer adhesion. The "modal testing" method used for studying vibrations was investigated as a possible means of evaluating adhesion but was not successful, so research was concentrated on the "Q" meter. The "Q" meter method involves exciting vibrations in a sample, interrupting the driving signal, and counting the number of oscillations of the freely decaying vibrations between two known, preselected amplitudes. It was not possible to reconstruct a working instrument from Muaddi's thesis (1982), as it either contained a serious error or the information was incomplete, so a modified "Q" meter had to be designed and constructed. It then proved difficult to resonate non-magnetic materials such as aluminium, so a comparison before and after plating could not be made. A new "Q" meter was therefore developed based on an impulse technique: a regulated miniature hammer was used to excite the test piece at the fundamental mode instead of an electronic hammer, and test pieces were supported at the two predetermined nodal points using nylon threads. The instrument developed was not very successful at detecting changes due to good and poor pretreatments given before plating; however, it was more sensitive to changes at the surface, such as room-temperature oxidation. Statistical analysis of test results from untreated aluminium alloys showed that the instrument is not always consistent, and the variation was even greater when readings were taken on different days. Although aluminium is said to form protective oxides at room temperature, there was evidence that the aluminium surface changes continuously due to film formation, growth and breakdown. Nickel plated and zinc alloy immersion coated samples also showed variation in Q with time. In order to prove that the variations in Q were mainly due to surface oxidation, aluminium samples were lacquered and anodised. Such treatments enveloped the active surfaces reacting with the environment, and the variation of Q with time was almost eliminated, especially after hard anodising. The instrument detected major differences between different untreated aluminium substrates, and Q values decreased progressively as coating thicknesses were increased. It was also able to detect changes in Q due to heat treatment of aluminium alloys.
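The oscillation-counting principle described above yields Q directly: for a lightly damped free decay the envelope falls as exp(-πn/Q) per cycle, so counting n cycles between two preset amplitudes A1 > A2 gives Q = πn / ln(A1/A2). A minimal sketch (the voltages and count below are invented for illustration):

```python
# Q from counted oscillations of a freely decaying vibration between two
# preselected amplitudes A1 > A2: Q = pi * n / ln(A1 / A2).
import math

def q_from_count(n_cycles, a1, a2):
    """Quality factor from n cycles counted while the amplitude decays
    from a1 to a2 (light-damping approximation)."""
    return math.pi * n_cycles / math.log(a1 / a2)

# e.g. 150 cycles counted while the vibration decays from 1.0 V to 0.5 V:
print(f"Q ≈ {q_from_count(150, 1.0, 0.5):.0f}")
```

A drop in Q after plating indicates increased internal friction, which is the quantity the method relates to adhesion.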
Abstract:
Molecular transport in phase space is crucial for chemical reactions because it defines how pre-reactive molecular configurations are found during the time evolution of the system. Using atomistic trajectories from Molecular Dynamics (MD) simulations, we test the assumption of normal diffusion in phase space for bulk water at ambient conditions by checking the equivalence of the transport to a random walk model. Contrary to common expectations, we have found that some statistical features of the transport in phase space differ from those of normal diffusion models. This implies a non-random character of the path-search process by reacting complexes in water solutions. Our further numerical experiments show that significantly long periods of non-stationarity in the transition probabilities of segments of the molecular trajectories can account for the observed non-uniform filling of phase space. Surprisingly, the characteristic periods of this non-stationarity amount to hundreds of nanoseconds, a much longer time scale than the typical lifetimes of known molecular structures in liquid water (several picoseconds).
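The baseline check against the random walk model works as follows: under normal diffusion the mean-squared displacement (MSD) grows linearly with lag time. A minimal sketch on a synthetic random walk (not an MD trajectory):

```python
# For normal diffusion, MSD(lag) grows linearly with lag, so the log-log
# slope of MSD versus lag is ~1. Synthetic 3-D random walk for illustration.
import numpy as np

rng = np.random.default_rng(1)
steps = rng.standard_normal((100_000, 3))   # independent increments
traj = np.cumsum(steps, axis=0)             # 3-D random-walk trajectory

lags = np.array([1, 10, 100, 1000])
msd = [np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags]

slope = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"log-log MSD slope: {slope:.2f}  (≈ 1 for normal diffusion)")
```

Deviations of this slope from 1, or non-stationarity of the increments, are the kind of statistical signatures the paper reports for water.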
Abstract:
In this paper, we describe the development of two new measures of innovation trust, ‘trust that heard’ and ‘trust that benefit’. We report the findings from their use in a survey of design engineers in two large aerospace companies. We test a range of hypotheses covering different plausible roles for trust and confirm a ‘main effects’ model, whereby the variables predict the number of ideas suggested and the number of ideas implemented. In addition, we replicate earlier findings by Axtell et al. (2000), namely that personal and job variables predict idea suggestion, whereas organizational variables predict implementation.
Abstract:
This paper contributes to the developing literature on complementarities in organizational design. We test for the existence of complementarities in the use of external networking between stages of the innovation process in a sample of UK and German manufacturing plants. Our evidence suggests some differences between the UK and Germany in terms of the optimal combination of innovation activities in which to implement external networking. Broadly, there is more evidence of complementarities in the case of Germany, with the exception of the product engineering stage. By contrast, the UK exhibits generally strong evidence of substitutability in external networking in different stages, except between the identification of new products and product design and development stages. These findings suggest that previous studies indicating strong complementarity between internal and external knowledge sources have provided only part of the picture of the strategic dilemmas facing firms.