925 results for Key feature
Abstract:
Anchored in the service-dominant logic and service innovation literature, this study investigates the drivers of employee generation of ideas for service improvement (GISI). Employee GISI focuses on customer needs and providing the exact service wanted by customers. GISI should enhance competitive advantage and organizational success (cf. Berry et al. 2006; Wang and Netemeyer 2004). Despite its importance, there is little research on the idea generation stage of the service development process (Chai, Zhang, and Tan 2005). This study contributes to the service field by providing the first empirical evaluation of the drivers of GISI. It also investigates a new explanatory determinant of reading of customer needs, namely, perceived organizational support (POS), and an outcome of POS, in the form of emotional exhaustion. Results show that the major driver of GISI is reading of customer needs by employees, followed by affective organizational commitment and job satisfaction. This research provides several new and important insights for service management practice by suggesting that special care should be put into selecting and recruiting employees who have the ability to read customer needs. Additionally, organizations should invest in creating work environments that encourage and reward the flow of ideas for service improvement.
Abstract:
Over the last decade issues related to the financial viability of development have become increasingly important to the English planning system. As part of a wider shift towards the compartmentalisation of planning tasks, expert consultants are required to quantify, in an attempt to rationalise, planning decisions in terms of economic ‘viability’. Often with a particular focus on planning obligations, the results of development viability modelling have emerged as a key part of the evidence base used in site-specific negotiations and in planning policy formation. Focussing on the role of clients and other stakeholders, this paper investigates how development viability is tested in practice. It draws together literature on the role of calculative practices in policy formation, client feedback and influence in real estate appraisals and stakeholder engagement and consultation in the planning literature to critically evaluate the role of clients and other interest groups in influencing the production and use of development viability appraisal models. The paper draws upon semi-structured interviews with the main producers of development viability appraisals to conclude that, whilst appraisals have the potential to be biased by client and stakeholder interests, there are important controlling influences on potential opportunistic behaviour. One such control is local authorities’ weak understanding of development viability appraisal techniques which limits their capacity to question the outputs of appraisal models. However, this also is of concern given that viability is now a central feature of the town planning system.
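The appraisal models discussed above typically rest on a residual calculation: a scheme is judged viable when the land value left over after costs and developer profit exceeds a benchmark. A minimal sketch of that logic, with entirely hypothetical figures and a hypothetical benchmark land value (none drawn from the paper):

```python
def residual_land_value(gdv, build_costs, fees, profit_margin):
    """Residual land value: gross development value (GDV) minus
    build costs, fees, and developer profit (taken as a share of GDV)."""
    profit = profit_margin * gdv
    return gdv - build_costs - fees - profit

# Hypothetical scheme figures (illustrative only)
rlv = residual_land_value(gdv=10_000_000, build_costs=6_000_000,
                          fees=500_000, profit_margin=0.2)
benchmark_land_value = 1_000_000   # assumed existing-use value
viable = rlv > benchmark_land_value
print(rlv, viable)  # 1500000.0 True
```

Negotiations over planning obligations then turn on which inputs (costs, profit margin, benchmark) the appraiser adopts, which is exactly where the client influence examined in the paper can enter.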
Abstract:
This is a fully revised edition of the UK’s leading textbook on the law governing construction contracts and the management and administration of those contracts. Although the legal principles involved are an aspect of general contract law, the practical and commercial complexities of the construction industry have increasingly made this a specialist area. This new edition has been brought up to date with recent cases and developments in the law as it stands at March 2007. The basic approach of the book has been retained. Rather than provide a commentary on standard-form contracts, our approach is to introduce the general principles that underlie contracts in construction, illustrating them by reference to the most important standard forms currently in use. Some of the common standard-form contracts have been revised since the previous edition, and the text has been revised to take account of these changes. Practitioners (consultants, builders, clients and lawyers) will find this an extremely useful source of reference, providing in-depth explanations for all of the features found in contemporary construction contracts, with reasons. A unique feature of this book is the way that it brings together the relevant principles of law with the practical issues arising in construction cases. It is a key text for construction undergraduates and postgraduates as well as for those taking the RIBA Part III and CIOB Part II examinations.
Abstract:
Considerable effort is presently being devoted to producing high-resolution sea surface temperature (SST) analyses with a goal of spatial grid resolutions as low as 1 km. Because grid resolution is not the same as feature resolution, a method is needed to objectively determine the resolution capability and accuracy of SST analysis products. Ocean model SST fields are used in this study as simulated “true” SST data and subsampled based on actual infrared and microwave satellite data coverage. The subsampled data are used to simulate sampling errors due to missing data. Two different SST analyses are considered and run using both the full and the subsampled model SST fields, with and without additional noise. The results are compared as a function of spatial scales of variability using wavenumber auto- and cross-spectral analysis. The spectral variance at high wavenumbers (smallest wavelengths) is shown to be attenuated relative to the true SST because of smoothing that is inherent to both analysis procedures. Comparisons of the two analyses (both having comparable grid sizes) show important differences. One analysis tends to reproduce small-scale features more accurately when the high-resolution data coverage is good but produces more spurious small-scale noise when the high-resolution data coverage is poor. Analysis procedures can thus generate small-scale features with and without data, but the small-scale features in an SST analysis may be just noise when high-resolution data are sparse. Users must therefore be skeptical of high-resolution SST products, especially in regions where high-resolution (~5 km) infrared satellite data are limited because of cloud cover.
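The wavenumber-spectral comparison described above can be sketched in a few lines: compute the power spectrum of a "true" field and of a smoothed "analysis" of it, and compare variance at high wavenumbers. The transect, grid spacing, and boxcar smoothing below are synthetic stand-ins, not the actual analysis products:

```python
import numpy as np

def wavenumber_spectrum(field, dx):
    """One-dimensional wavenumber power spectrum of a gridded transect.

    field : 1-D array of SST values along a transect (K)
    dx    : grid spacing (km)
    Returns (wavenumbers in cycles/km, power spectral density).
    """
    n = field.size
    detrended = field - field.mean()
    fft = np.fft.rfft(detrended)
    psd = (np.abs(fft) ** 2) / n
    k = np.fft.rfftfreq(n, d=dx)
    return k, psd

# Synthetic "true" SST transect: large-scale signal plus small-scale variability
rng = np.random.default_rng(0)
x = np.arange(1024) * 1.0                         # 1 km grid spacing
true_sst = np.sin(2 * np.pi * x / 512) + 0.2 * rng.standard_normal(x.size)

# A smoothed "analysis" attenuates variance at high wavenumbers
kernel = np.ones(9) / 9.0                         # 9 km boxcar smoother
analysis = np.convolve(true_sst, kernel, mode="same")

k, psd_true = wavenumber_spectrum(true_sst, dx=1.0)
_, psd_analysis = wavenumber_spectrum(analysis, dx=1.0)

# Variance at wavelengths shorter than 10 km is reduced by the smoothing
high = k > 0.1
ratio = psd_analysis[high].sum() / psd_true[high].sum()
print(f"high-wavenumber variance retained: {ratio:.2f}")
```

The attenuation of `psd_analysis` relative to `psd_true` at high `k` is the feature-resolution loss the abstract describes: the analysis grid is 1 km, but the smoothing removes real variability at scales the grid could nominally represent.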
Abstract:
Infant faces elicit early, specific activity in the orbitofrontal cortex (OFC), a key cortical region for reward and affective processing. A test of the causal relationship between infant facial configuration and OFC activity is provided by naturally occurring disruptions to the face structure. One such disruption is cleft lip, a small change to one facial feature, shown to disrupt parenting. Using magnetoencephalography, we investigated neural responses to infant faces with cleft lip compared with typical infant and adult faces. We found activity in the right OFC at 140 ms in response to typical infant faces but diminished activity to infant faces with cleft lip or adult faces. Activity in the right fusiform face area was of similar magnitude for typical adult and infant faces but was significantly lower for infant faces with cleft lip. This is the first evidence that a minor change to the infant face can disrupt neural activity potentially implicated in caregiving.
How self-determined choice facilitates performance: a key role of the ventromedial prefrontal cortex
Abstract:
Recent studies have documented that self-determined choice does indeed enhance performance. However, the precise neural mechanisms underlying this effect are not well understood. We examined the neural correlates of the facilitative effects of self-determined choice using functional magnetic resonance imaging (fMRI). Participants played a game-like stopwatch task with either a stopwatch they selected (self-determined-choice condition) or one they were assigned without choice (forced-choice condition). Our results showed that self-determined choice enhanced performance on the stopwatch task, despite the fact that the choices were clearly irrelevant to task difficulty. Neuroimaging results showed that failure feedback, compared with success feedback, elicited a drop in activation of the ventromedial prefrontal cortex (vmPFC) in the forced-choice condition, but not in the self-determined-choice condition, indicating that the negative reward value associated with failure feedback vanished in the self-determined-choice condition. Moreover, the vmPFC resilience to failure in the self-determined-choice condition was significantly correlated with the increased performance. Striatal responses to failure and success feedback were not modulated by the choice condition, indicating a dissociation between the vmPFC and striatal activation patterns. These findings suggest that the vmPFC plays a unique and critical role in the facilitative effects of self-determined choice on performance.
Abstract:
Recent studies have shown that features extracted from brain MRIs can discriminate well between Alzheimer's disease and Mild Cognitive Impairment. This study provides an algorithm that sequentially applies advanced feature selection methods to find the best subset of features in terms of binary classification accuracy. The classifiers that provided the highest accuracies have then been used to solve a multi-class problem via the one-versus-one strategy. Although several approaches based on Regions of Interest (ROIs) extraction exist, the predictive power of features has not yet been investigated by comparing filter and wrapper techniques. The findings of this work suggest that (i) IntraCranial Volume (ICV) normalization can lead to overfitting and worsen the prediction accuracy on the test set, and (ii) the combined use of a Random Forest-based filter with a Support Vector Machines-based wrapper improves the accuracy of binary classification.
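The filter-plus-wrapper pipeline with a one-versus-one multi-class stage can be sketched with scikit-learn. The synthetic dataset, the median importance threshold, and the subset size below are illustrative assumptions, not the study's actual configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for MRI-derived features and diagnostic labels
X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                           n_classes=3, random_state=0)

# Stage 1: Random Forest-based filter keeps features with high importance
rf_filter = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    threshold="median")

# Stage 2: SVM-based wrapper greedily searches the remaining subset
svm = SVC(kernel="linear")
wrapper = SequentialFeatureSelector(svm, n_features_to_select=5, cv=3)

# One-versus-one strategy turns the binary SVM into a multi-class classifier
model = make_pipeline(rf_filter, wrapper, OneVsOneClassifier(svm))
scores = cross_val_score(model, X, y, cv=3)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The filter is cheap (one forest fit ranks all features), while the wrapper is expensive but evaluates subsets with the actual classifier, which is why combining them in this order is attractive.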
Abstract:
RNA secondary structures in the 3' untranslated regions (3'UTRs) of the viruses of the family Flaviviridae, previously identified as essential (promoters) or beneficial (enhancers) for replication, have been analysed. Duplicated enhancer elements are revealed as a global feature in the evolution of the 3'UTR of distantly related viruses within the genera Flavivirus and Pestivirus. For the flaviviruses, duplicated structures occur in the 3'UTR of all four distantly related ecological virus subgroups (tick-borne, mosquito-borne, no-known-vector and insect-specific flaviviruses (ISFV)). RNA structural differences distinguish tick-borne flaviviruses with discrete pathogenetic characteristics. For Aedes- and Culex-associated ISFV, secondary RNA structures with different conformations display numerous short ssRNA direct repeats, exposed as loops and bulges. Long quadruplicate regions comprise almost the entire 3'UTR of Culex-associated ISFV. Extended duplicated sequences and associated RNA structures were also discovered in the 3'UTR of pestiviruses. In both the Flavivirus and Pestivirus genera, duplicated RNA structures were localized to the enhancer regions of the 3'UTR, suggesting an adaptive role predominantly in wild-type viruses. We propose that sequence reiteration might act as a scaffold for dimerization of proteins involved in the assembly of viral replicase complexes. Numerous nucleotide repeats exposed as loops/bulges might also interfere with host immune responses, acting as a molecular sponge to sequester key host proteins or microRNAs.
Abstract:
The question of climate at high obliquity is raised in the context of both exoplanet studies (e.g. habitability) and paleoclimate studies (evidence for low-latitude glaciation during the Neoproterozoic and the “Snowball Earth” hypothesis). States of high obliquity, φ, are distinctive in that, for φ ≥ 54°, the poles receive more solar radiation in the annual mean than the Equator, opposite to the present-day situation. In addition, the seasonal cycle of insolation is extreme, with the poles alternately “facing” the sun and sheltering in the dark for months. The novelty of our approach is to consider the role of a dynamical ocean in controlling the surface climate at high obliquity, which in turn requires understanding of the surface wind patterns when temperature gradients are reversed. To address these questions, a coupled ocean-atmosphere-sea ice GCM configured on an aquaplanet is employed. Except for the absence of topography and the modified obliquity, the set-up is Earth-like. Two large obliquities, φ = 54° and 90°, are compared to today's Earth value, φ = 23.5°. Three key results emerge at high obliquity: 1) despite reversed temperature gradients, mid-latitude surface winds are westerly and trade winds exist at the equator (as for φ = 23.5°), although the westerlies are confined to the summer hemisphere; 2) a habitable planet is possible with mid-latitude temperatures in the range 300–280 K; and 3) a stable climate state with an ice cap limited to the equatorial region is unlikely. We clarify the dynamics behind these features (notably by an analysis of the potential vorticity structure and conditions for baroclinic instability of the atmosphere). Interestingly, we find that the absence of a stable partially glaciated state is critically linked to the absence of ocean heat transport during winter, a feature ultimately traced back to the high seasonality of baroclinic instability conditions in the atmosphere.
Abstract:
The first international urban land surface model comparison was designed to identify three aspects of urban surface-atmosphere interactions: (1) the dominant physical processes, (2) the level of complexity required to model these, and (3) the parameter requirements for such a model. Offline simulations from 32 land surface schemes, with varying complexity, contributed to the comparison. Model results were analysed within a framework of physical classifications and over four stages. The results show that the following are important urban processes: (i) multiple reflections of shortwave radiation within street canyons, (ii) reduction in the amount of visible sky from within the canyon, which impacts the net long-wave radiation, (iii) the contrast in surface temperatures between building roofs and street canyons, and (iv) evaporation from vegetation. Models that use an appropriate bulk albedo based on multiple solar reflections, represent building roof surfaces separately from street canyons, and include a representation of vegetation demonstrate more skill, but require parameter information on the albedo, the height of the buildings relative to the width of the streets (height-to-width ratio), the fraction of building roofs compared to street canyons in plan view (plan area fraction), and the fraction of the surface that is vegetated. These results, whilst based on a single site and less than 18 months of data, have implications for the future design of urban land surface models, the data that need to be measured in urban observational campaigns, and what needs to be included in initiatives for regional and global parameter databases.
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. Firstly, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples that, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis.
Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
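The amplitude-and-phase feature construction at the heart of this approach can be illustrated with a short sketch. The synthetic pulse and the number of retained spectral bins are assumptions for illustration, not the paper's actual preprocessing:

```python
import numpy as np

def amplitude_phase_features(pulse, n_bins=16):
    """Split a time-domain THz pulse into amplitude and phase signatures.

    The complex spectrum is what a complex-valued classifier would consume;
    here the two signatures are concatenated into one real feature vector,
    as when feeding them to an ordinary real-valued SVM.
    """
    spectrum = np.fft.rfft(pulse)
    amplitude = np.abs(spectrum)[:n_bins]
    phase = np.unwrap(np.angle(spectrum))[:n_bins]  # unwrap avoids 2π jumps
    return np.concatenate([amplitude, phase])

# Synthetic pulse: damped oscillation standing in for a recorded THz transient
t = np.linspace(0, 1, 256)
pulse = np.exp(-5 * t) * np.sin(2 * np.pi * 12 * t)

features = amplitude_phase_features(pulse)
print(features.shape)  # (32,)
```

Keeping amplitude and phase as a single complex number per bin, rather than two separate real features, is what distinguishes the complex-valued extreme learning machine from the real-valued baselines it is compared against.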
Abstract:
A fractal with microscopic anisotropy shows a unique type of macroscopic isotropy restoration phenomenon that is absent in Euclidean space [M. T. Barlow et al., Phys. Rev. Lett. 75, 3042]. In this paper the isotropy restoration feature is considered for a family of two-dimensional Sierpinski gasket type fractal resistor networks. A parameter ξ is introduced to describe this phenomenon. Our numerical results show that ξ satisfies the scaling law ξ ∼ l^(−α), where l is the system size and α is an exponent independent of the degree of microscopic anisotropy, characterizing the isotropy restoration feature of the fractal systems. By changing the underlying fractal structure towards the Euclidean triangular lattice through increasing the side length b of the gasket generators, the fractal-to-Euclidean crossover behavior of the isotropy restoration feature is discussed.
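An exponent such as α in the scaling law ξ ∼ l^(−α) is typically extracted from numerical data as minus the slope of a log-log least-squares fit, since a power law appears as a straight line in log-log space. A minimal sketch with synthetic, noiseless data (the values of l, the prefactor, and α = 0.7 are all hypothetical):

```python
import numpy as np

# Hypothetical measurements of the anisotropy parameter xi at several sizes l
l = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
true_alpha = 0.7
xi = 2.5 * l ** (-true_alpha)          # synthetic data obeying xi ~ l^(-alpha)

# log(xi) = log(prefactor) - alpha * log(l): fit a line, read off the slope
slope, intercept = np.polyfit(np.log(l), np.log(xi), 1)
alpha = -slope
print(round(alpha, 3))  # 0.7
```

With real simulation data the fit would carry error bars, and checking that α stays constant while the microscopic anisotropy is varied is what establishes it as a property of the fractal structure alone.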
Abstract:
Three coupled knowledge transfer partnerships used pattern recognition techniques to produce an e-procurement system which, the National Audit Office reports, could save the National Health Service £500 m per annum. An extension to the system, GreenInsight, allows the environmental impact of procurements to be assessed and savings made. Both systems require suitable products to be discovered and equivalent products recognised, for which classification is a key component. This paper describes the innovative work done for product classification, feature selection and reducing the impact of mislabelled data.
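One common way to reduce the impact of mislabelled training data, offered here as an illustrative sketch rather than the paper's actual method, is to flag records whose labels the model itself finds improbable under cross-validation, so they can be queued for human review:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Synthetic product records: features X, true labels y, plus injected noise
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = (X[:, 0] > 0).astype(int)
flip = rng.choice(300, size=15, replace=False)   # mislabel 5% of records
y_noisy = y.copy()
y_noisy[flip] ^= 1

# Out-of-fold predicted probabilities: each record is scored by a model
# that never saw it during training
proba = cross_val_predict(LogisticRegression(), X, y_noisy,
                          cv=5, method="predict_proba")

# A record is suspect when the model assigns its own label low probability
suspect = proba[np.arange(300), y_noisy] < 0.2
print(suspect.sum(), "records flagged for review")
```

The 0.2 threshold is an assumed tuning knob; in a procurement setting it trades review effort against the cost of training the product classifier on wrong labels.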