28 results for information theoretic measures
Abstract:
We address the question of how to communicate among distributed processes values such as real numbers, continuous functions and geometrical solids with arbitrary precision, yet efficiently. We extend the established concept of lazy communication using streams of approximants by introducing explicit queries. We formalise this approach using protocols of a query-answer nature. Such protocols enable processes to provide valid approximations with certain accuracy and focusing on certain locality as demanded by the receiving processes through queries. A lattice-theoretic denotational semantics of channel and process behaviour is developed. The query space is modelled as a continuous lattice in which the top element denotes the query demanding all the information, whereas other elements denote queries demanding partial and/or local information. Answers are interpreted as elements of lattices constructed over suitable domains of approximations to the exact objects. An unanswered query is treated as an error and denoted using the top element. The major novel characteristic of our semantic model is that it reflects the dependency of answers on queries. This enables the definition and analysis of an appropriate concept of convergence rate, by assigning an effort indicator to each query and a measure of information content to each answer. Thus we capture not only what function a process computes, but also how a process transforms the convergence rates from its inputs to its outputs. In future work these indicators can be used to capture further computational complexity measures. A robust prototype implementation of our model is available.
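The query-answer idea can be illustrated with a minimal sketch (illustrative only; the names Query, Answer and the bisection producer below are assumptions, not the paper's formalisation): a receiving process issues queries carrying a requested accuracy, and the sending process replies with a rational approximant guaranteed to lie within that accuracy, so sharper queries demand more effort and yield more information.

```python
# Minimal sketch of a query-answer protocol for communicating a real number
# with arbitrary precision. Names (Query, Answer, sqrt2_process) are
# illustrative assumptions, not the paper's actual formalisation.
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class Query:
    accuracy: Fraction          # requested error bound (the "effort indicator")

@dataclass
class Answer:
    approximation: Fraction     # a rational approximant
    accuracy: Fraction          # guaranteed error bound (information content)

def sqrt2_process(query: Query) -> Answer:
    """Answer a query for sqrt(2) to within the requested accuracy
    by interval bisection on [1, 2]."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > query.accuracy:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return Answer(approximation=lo, accuracy=hi - lo)

# A receiving process drives the computation lazily by issuing ever sharper queries.
for n in range(1, 6):
    q = Query(accuracy=Fraction(1, 10 ** n))
    a = sqrt2_process(q)
    print(f"accuracy 1e-{n}: {float(a.approximation):.{n + 1}f} (± {float(a.accuracy):.2e})")
```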
Abstract:
Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics, low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows, 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows, 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in time and frequency domains. © 2012 McNab, Hillebrand, Swithenby and Rippon.
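For readers unfamiliar with this kind of time-frequency analysis, the sketch below shows one conventional way to quantify a task-related power change of a (simulated) virtual-sensor time course in a given band and post-stimulus window, relative to a pre-stimulus baseline. The sampling rate, baseline window and synthetic signal are assumptions; this is not the authors' beamforming pipeline.

```python
# Illustrative sketch: estimate the task-related power change of a single
# time course in a frequency band and time window, relative to a
# pre-stimulus baseline. Band/window values follow the abstract.
import numpy as np
from scipy.signal import butter, filtfilt

def band_power_change(signal, fs, band, window, baseline, t0):
    """Percent power change in `band` (Hz) during `window` (s, post-stimulus)
    relative to `baseline` (s, pre-stimulus). `t0` is the stimulus-onset index."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, signal)
    def mean_power(t_start, t_end):
        i0, i1 = t0 + int(t_start * fs), t0 + int(t_end * fs)
        return np.mean(filtered[i0:i1] ** 2)
    active = mean_power(*window)
    rest = mean_power(*baseline)
    return 100.0 * (active - rest) / rest

fs = 600                                   # assumed sampling rate (Hz)
t = np.arange(-0.5, 1.0, 1 / fs)
signal = np.random.randn(t.size)           # stand-in for a beamformer virtual sensor
t0 = np.searchsorted(t, 0.0)
print(band_power_change(signal, fs, band=(20, 30), window=(0.35, 0.55),
                        baseline=(-0.4, -0.1), t0=t0))
```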
Abstract:
In industrial selling situations, the questions of what factors drive the delegation of pricing authority to salespeople, and under what conditions price delegation benefits the firm, are often asked. To advance knowledge in this area, we (1) develop and empirically test a framework of important drivers of price delegation based on agency-theoretic research and (2) investigate the impact of price delegation on firm performance, taking into account agency theory variables as potential moderators. The study is based on data from a sample of 181 companies from the industrial machinery and electrical engineering industry in Germany. The results indicate that the degree of price delegation increases as information asymmetry between the salesperson and sales manager increases and as it becomes more difficult to monitor salespeople's efforts. Conversely, risk aversion of salespeople is negatively related to the degree of price delegation. Furthermore, we find a positive effect of price delegation on firm performance, which is amplified when market-related uncertainty is high and when salespeople possess better customer-related information than their managers. Hence, our results clearly show that rigid, “one price fits all” policies are inappropriate in many B2B market situations. Instead, sales managers should grant their salespeople sufficient leeway to adapt prices to changing customer requirements and market conditions, especially in firms that operate in highly uncertain selling environments.
Abstract:
Dissimilarity measurement plays a crucial role in content-based image retrieval, where data objects and queries are represented as vectors in high-dimensional content feature spaces. Given the large number of dissimilarity measures that exist in many fields, a crucial research question arises: does the retrieval performance of a dissimilarity measure depend on the feature space, and if so, how? In this paper, we summarize fourteen core dissimilarity measures and classify them into three categories. A systematic performance comparison is carried out to test the effectiveness of these dissimilarity measures with six different feature spaces, and some of their combinations, on the Corel image collection. From our experimental results, we draw a number of observations and insights on dissimilarity measurement in content-based image retrieval, which lay a foundation for developing more effective image search technologies.
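As an illustration of the kind of measures being compared (the paper's full set of fourteen measures and its three categories are not reproduced here), the sketch below ranks a toy collection of feature vectors against a query under a few widely used dissimilarity functions.

```python
# A few dissimilarity measures commonly used in content-based image retrieval,
# applied to L1-normalised feature histograms. Illustrative only.
import numpy as np

def euclidean(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def cosine_dissimilarity(x, y):
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def histogram_intersection(x, y):
    # Dissimilarity form: 1 minus the intersection of normalised histograms.
    return 1.0 - np.sum(np.minimum(x, y)) / np.sum(y)

def chi_square(x, y, eps=1e-12):
    return 0.5 * np.sum((x - y) ** 2 / (x + y + eps))

# Rank a small collection against a query feature vector with each measure.
rng = np.random.default_rng(0)
collection = rng.random((5, 64))
collection /= collection.sum(1, keepdims=True)     # L1-normalise the "histograms"
query = rng.random(64)
query /= query.sum()
for name, d in [("euclidean", euclidean), ("cosine", cosine_dissimilarity),
                ("hist-int", histogram_intersection), ("chi-square", chi_square)]:
    ranking = np.argsort([d(query, img) for img in collection])
    print(name, ranking)
```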
Abstract:
For over 30 years, information-processing approaches to leadership, and more specifically research on Implicit Leadership Theories (ILTs), have contributed a significant body of knowledge on leadership processes in applied settings. A new line of research on Implicit Followership Theories (IFTs) has re-ignited interest in information-processing and socio-cognitive approaches to leadership and followership. In this review, we focus on organizational research on ILTs and IFTs and highlight their practical utility for the exercise of leadership and followership in applied settings. We clarify common misperceptions regarding the implicit nature of ILTs and IFTs, review both direct and indirect measures, synthesize current and ongoing research on ILTs and IFTs in organizational settings, address issues related to different levels of analysis in the context of leadership and follower schemas and, finally, propose future avenues for organizational research. © 2013 Elsevier Inc.
Abstract:
Objective: To examine patients' experiences of information and support provision for age-related macular degeneration (AMD) in the UK. Study design: Exploratory qualitative study investigating patient experiences of healthcare consultations and living with AMD over 18 months. Setting: Specialist eye clinics at a Birmingham hospital. Participants: 13 patients diagnosed with AMD. Main outcome measures: Analysis of patients' narratives to identify key themes and issues relating to information and support needs. Results: Information was accessed from a variety of sources. There was evidence of clear information deficits prior to diagnosis, following diagnosis and ongoing across the course of the condition. Patients were often ill informed and therefore unable to self-advocate and recognise when support was needed, what support was available and how to access support. Conclusions: AMD patients have a variety of information needs that vary across the course of the condition. Further research is needed to determine whether these experiences are typical and to identify ways of translating the guidelines into practice. Methods of providing information need to be investigated and improved for this patient group.
Abstract:
Aim: Contrast sensitivity (CS) provides important information on visual function. This study aimed to assess differences in clinical expediency of the CS increment-matched new back-lit and original paper versions of the Melbourne Edge Test (MET) to determine the CS of the visually impaired. Methods: The back-lit and paper MET were administered to 75 visually impaired subjects (28-97 years). Two versions of the back-lit MET acetates were used to match the CS increments with the paper-based MET. Measures of CS were repeated after 30 min and again in the presence of a focal light source directed onto the MET. Visual acuity was measured with a Bailey-Lovie chart and subjects rated how much difficulty they had with face and vehicle recognition. Results: The back-lit MET gave a significantly higher CS than the paper-based version (14.2 ± 4.1 dB vs 11.3 ± 4.3 dB, p < 0.001). A significantly higher reading resulted with repetition of the paper-based MET (by 1.0 ± 1.7 dB, p < 0.001), but this was not evident with the back-lit MET (by 0.1 ± 1.4 dB, p = 0.53). The MET readings were increased by a focal light source, in both the back-lit (by 0.3 ± 0.81 dB, p < 0.01) and paper-based (by 1.2 ± 1.7 dB, p < 0.001) versions. CS as measured by the back-lit and paper-based versions of the MET was significantly correlated with patients' perceived ability to recognise faces (r = 0.71 and r = 0.85 respectively; p < 0.001) and vehicles (r = 0.67 and r = 0.82 respectively; p < 0.001), and with distance visual acuity (both r = -0.64; p < 0.001). Conclusions: The CS increment-matched back-lit MET gives higher CS values than the old paper-based test by approximately 3 dB, and is more repeatable and less affected by external light sources. Clinically, the MET score provides information on patient difficulties with visual tasks, such as recognising faces. © 2005 The College of Optometrists.
Abstract:
Population measures for genetic programs are defined and analysed in an attempt to better understand the behaviour of genetic programming. Some measures are simple, but do not provide sufficient insight. The more meaningful ones are complex and take extra computation time. Here we present a unified view on the computation of population measures through an information hypertree (iTree). The iTree allows for a unified and efficient calculation of population measures via a basic tree traversal. © Springer-Verlag 2004.
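The abstract does not detail the iTree construction, but the underlying principle, index the population's shared structure once and derive several measures from a single traversal, can be sketched as follows. The subtree index and the particular measures shown are hypothetical stand-ins, not the paper's iTree.

```python
# Hypothetical sketch of a unified population measure: index every subtree of
# every program in the population once, then derive several measures from that
# single shared index. Not the paper's iTree, only an illustration of the idea.
from collections import Counter

def subtrees(program):
    """Yield every subtree of a program represented as nested tuples,
    e.g. ('+', ('x',), ('*', ('x',), ('1',)))."""
    yield program
    for child in program[1:]:
        yield from subtrees(child)

def population_measures(population):
    index = Counter()
    for program in population:
        index.update(subtrees(program))          # one pass over each program
    total_nodes = sum(index.values())
    distinct = len(index)                        # crude structural-diversity measure
    most_common_share = index.most_common(1)[0][1] / total_nodes
    return {"total_nodes": total_nodes,
            "distinct_subtrees": distinct,
            "most_common_share": most_common_share}

population = [("+", ("x",), ("*", ("x",), ("1",))),
              ("*", ("x",), ("1",)),
              ("+", ("x",), ("x",))]
print(population_measures(population))
```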
Abstract:
This study examines the information content of alternative implied volatility measures for the 30 components of the Dow Jones Industrial Average Index from 1996 until 2007. Along with the popular Black-Scholes and "model-free" implied volatility expectations, the recently proposed corridor implied volatility (CIV) measures are explored. For all pair-wise comparisons, it is found that a CIV measure that is closely related to the model-free implied volatility nearly always delivers the most accurate forecasts for the majority of the firms. This finding remains consistent for different forecast horizons, volatility definitions, loss functions and forecast evaluation settings.
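As background for the simplest of the compared expectations, the sketch below recovers a Black-Scholes implied volatility from an observed call price by bisection; the corridor and model-free measures, which aggregate option prices across a range of strikes, are not reproduced here.

```python
# Sketch of one of the simpler measures compared: Black-Scholes implied
# volatility, recovered from a call price by bisection.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection: bs_call is increasing in sigma, so bracket and halve."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: price an option at 25% vol, then recover the vol.
p = bs_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25)
print(round(implied_vol(p, S=100, K=105, T=0.5, r=0.02), 4))   # ~0.25
```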
Abstract:
The UK government aims to achieve an 80% reduction in CO2 emissions by 2050, which requires collective effort across all UK industry sectors. The housing sector in particular has a large potential to contribute, because it alone accounts for 27% of total UK CO2 emissions, and 87% of the housing responsible for that 27% will still be standing in 2050. It is therefore essential to improve the energy efficiency of the existing housing stock built to low energy efficiency standards. To do so, a whole house needs to be refurbished in a sustainable way by considering the lifetime financial and environmental impacts of the refurbished house. However, the current refurbishment process struggles to generate financially and environmentally affordable refurbishment solutions because of the highly fragmented nature of refurbishment practice and a lack of knowledge and skills about whole-house refurbishment in the construction industry. To generate an affordable refurbishment solution, diverse information on the costs and environmental impacts of refurbishment measures and materials must be collected and integrated in the right sequence throughout the refurbishment project life cycle among key project stakeholders. Consequently, researchers are increasingly studying ways of utilising Building Information Modelling (BIM) to tackle current problems in the construction industry, because BIM can support construction professionals in managing projects collaboratively by integrating diverse information, and in determining the best refurbishment solution among alternatives by calculating the life cycle costs and lifetime CO2 performance of each solution. Despite this capability, BIM adoption remains low at 25% in the housing sector, and little research has examined how BIM can be used for housing refurbishment projects. This research therefore aims to develop a BIM framework for formulating a financially and environmentally affordable whole-house refurbishment solution based on the Life Cycle Costing (LCC) and Life Cycle Assessment (LCA) methods simultaneously. To achieve this aim, a BIM feasibility study was conducted as a pilot to examine whether BIM is suitable for housing refurbishment, and a BIM framework was developed based on grounded theory because no precedent research existed. The framework was then examined in a hypothetical case study using BIM input data collected from a questionnaire survey on homeowners' preferences for housing refurbishment. Finally, the BIM framework was validated with academics and professionals, who were given the framework and a refurbishment solution formulated through it on the basis of the LCC and LCA studies. As a result, BIM was identified as suitable for housing refurbishment as a management tool, and the development of the BIM framework was found to be timely. The BIM framework, comprising seven project stages, was developed to formulate an affordable refurbishment solution. Through the case study, the Building Regulations standard was identified as the most affordable energy efficiency standard, giving the best LCC and LCA results when applied as a whole-house refurbishment solution. In addition, the Fabric Energy Efficiency Standard (FEES) is recommended when customers are willing to adopt a higher energy standard, and up to 60% of CO2 emissions can be reduced through whole-house fabric refurbishment to the FEES. Furthermore, limitations and challenges to fully utilising the BIM framework for housing refurbishment were revealed, such as a lack of BIM objects with proper cost and environmental information, limited interoperability between different BIM software, and limited LCC and LCA datasets in BIM systems. Finally, the BIM framework was validated as suitable for housing refurbishment projects, and reviewers commented that the framework could be more practical if a specific BIM library for housing refurbishment with proper LCC and LCA datasets were developed. This research is expected to provide a systematic way of formulating a refurbishment solution using BIM, and to become a basis for further research on BIM in the housing sector to resolve the current limitations and challenges. Future research should enhance the BIM framework by developing a more detailed process map and BIM objects with proper LCC and LCA information.
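To make the LCC comparison concrete, the following sketch computes a net-present-value style life cycle cost for a few refurbishment options; the option names, costs and discount rate are invented for illustration and are not the thesis's data or its BIM framework.

```python
# Illustrative life cycle costing (LCC) comparison of refurbishment options:
# capital cost plus discounted annual energy costs over a study period.
# All figures and option names are made up for this sketch.
def life_cycle_cost(capital, annual_energy_cost, years=30, discount_rate=0.035):
    discounted_energy = sum(annual_energy_cost / (1 + discount_rate) ** t
                            for t in range(1, years + 1))
    return capital + discounted_energy

options = {
    "do nothing":            life_cycle_cost(capital=0,     annual_energy_cost=1800),
    "Building Regulations":  life_cycle_cost(capital=18000, annual_energy_cost=1100),
    "FEES fabric package":   life_cycle_cost(capital=32000, annual_energy_cost=800),
}
for name, lcc in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} LCC ≈ £{lcc:,.0f}")
```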
Abstract:
Context: Many large organizations juggle an application portfolio that contains different applications that fulfill similar tasks in the organization. In an effort to reduce operating costs, they are attempting to consolidate such applications. Before consolidating applications, the work that is done with these applications must be harmonized. This is also known as process harmonization. Objective: The increased interest in process harmonization calls for measures to quantify the extent to which processes have been harmonized. These measures should also uncover the factors that are of interest when harmonizing processes. Currently, such measures do not exist. Therefore, this study develops and validates a measurement model to quantify the level of process harmonization in an organization. Method: The measurement model was developed by means of a literature study and structured interviews. Subsequently, it was validated through a survey, using factor analysis and correlations with known related constructs. Results: As a result, a valid and reliable measurement model was developed. The factors that are found to constitute process harmonization are: the technical design of the business process and its data, the resources that execute the process, and the information systems that are used in the process. In addition, strong correlations were found between process harmonization and process standardization and between process complexity and process harmonization. Conclusion: The measurement model can be used by practitioners, because it shows them the factors that must be taken into account when harmonizing processes, and because it provides them with a means to quantify the extent to which they succeeded in harmonizing their processes. At the same time, it can be used by researchers to conduct further empirical research in the area of process harmonization.
Abstract:
The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still can represent observation uncertainties in the resulting visualisations. As such, modifications are made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are then called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space, with each latent visualised point transformed to a multivariate Gaussian or T-distribution using a feed-forward RBF network. Two types of uncertainty are then characterised, depending on the data and the mapping procedure: data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher Information of a visualised distribution. The latter indicates how well the data have been interpolated, offering a level of 'surprise' for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other deep learning machines.
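The flavour of such a probabilistic mapping can be sketched as follows (an illustration only, not the thesis's NeuroScale variants or their Fisher Information measure): an RBF network projects observations to a 2-D latent mean, and a simple activation-based score plays the role of a 'surprise' indicator for poorly interpolated points.

```python
# Minimal sketch of a probabilistic topographic mapping: an RBF network
# projects each observation to a 2-D latent mean, and a score based on total
# basis activation flags points the mapping has to extrapolate for.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                    # high-dimensional observations
centres = X[rng.choice(len(X), size=20, replace=False)]
width = 2.0

def rbf_design(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Train the output weights to reproduce a 2-D target embedding (here PCA, as a
# stand-in for the distance-preserving NeuroScale objective).
X_centred = X - X.mean(0)
_, _, Vt = np.linalg.svd(X_centred, full_matrices=False)
targets = X_centred @ Vt[:2].T                     # 2-D targets
Phi = rbf_design(X, centres, width)
W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

def project(Xnew):
    Phi_new = rbf_design(Xnew, centres, width)
    latent_mean = Phi_new @ W
    surprise = 1.0 / (Phi_new.sum(1) + 1e-12)      # low activation -> poorly interpolated
    return latent_mean, surprise

in_sample, s_in = project(X[:5])
outlier, s_out = project(rng.normal(loc=6.0, size=(1, 10)))
print(s_in.round(3), s_out.round(3))               # the outlier scores as far more surprising
```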