19 results for Predictive Analytics
in Aston University Research Archive
Abstract:
Physiological and neuroimaging studies provide evidence to suggest that attentional mechanisms operating within the fronto-parietal network may exert top–down control on early visual areas, priming them for forthcoming sensory events. The believed consequence of such priming is enhanced task performance. Using the technique of magnetoencephalography (MEG), we investigated this possibility by examining whether attention-driven changes in cortical activity are correlated with performance on a line-orientation judgment task. We observed that, approximately 200 ms after a covert attentional shift towards the impending visual stimulus, the level of phase-resetting (transient neural coherence) within the calcarine significantly increased for 2–10 Hz activity. This was followed by a suppression of alpha activity (near 10 Hz) which persisted until the onset of the stimulus. The levels of phase-resetting, alpha suppression and subsequent behavioral performance varied between subjects in a systematic fashion. The magnitudes of phase-resetting and alpha-band power were negatively correlated, with high levels of coherence associated with high levels of performance. We propose that top–down attentional control mechanisms exert their initial effects within the calcarine through a phase-resetting within the 2–10 Hz band, which in turn triggers a suppression of alpha activity, priming early visual areas for incoming information and enhancing behavioral performance.
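As a hedged illustration of the measures discussed, the sketch below computes inter-trial phase coherence (a standard index of phase-resetting) in the 2–10 Hz band and alpha-band power from epoched data; the sampling rate, the epochs array and the band edges are assumptions for illustration, not the study's pipeline.

```python
# A hedged sketch (not the authors' pipeline): inter-trial phase coherence,
# a standard index of phase-resetting, and alpha-band power from epoched data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                 # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 600))   # hypothetical: 120 trials x 1 s

def bandpass(x, lo, hi, fs):
    """Zero-phase band-pass filter along the last axis."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# Inter-trial phase coherence in the 2-10 Hz band: magnitude of the mean
# unit phasor across trials (1 = perfect phase-resetting, 0 = none).
phase = np.angle(hilbert(bandpass(epochs, 2, 10, fs), axis=-1))
itc = np.abs(np.exp(1j * phase).mean(axis=0))

# Alpha-band (~10 Hz) power per time point, averaged across trials;
# a post-reset drop in this trace would mirror the reported suppression.
alpha_power = (np.abs(hilbert(bandpass(epochs, 8, 12, fs), axis=-1)) ** 2).mean(axis=0)
print(itc.max(), alpha_power.mean())
```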
Abstract:
This article considers the role of accounting in organisational decision making. It challenges the rational nature of decisions made in organisations through the use of accounting models, and the problems of predicting the future through such models. The use of accounting in this manner is evaluated from an epochal postmodern stance. Issues raised by chaos theory and the uncertainty principle are used to demonstrate problems with the predictive ability of accounting models. The authors argue that any consideration of the predictive value of accounting needs to change to incorporate a recognition of the turbulent external environment if it is to be of use for organisational decision making. Thus it is argued that the role of accounting as a mechanism for knowledge creation regarding the future is fundamentally flawed. We take this as a starting point to examine the real purpose of accounting's predictive techniques, drawing on their ritualistic role in myth creation to argue for the cultural benefits of such flawed techniques.
Abstract:
In the last two decades there have been substantial developments in the mathematical theory of inverse optimization problems, and their applications have expanded greatly. In parallel, time series analysis and forecasting have become increasingly important in fields of research such as data mining, economics, business, engineering, medicine and politics. Despite the wide use of linear programming in forecasting models, not a single application of inverse optimization has been reported in the forecasting literature where time series data are available. The goal of this paper is therefore to introduce inverse optimization into the forecasting field and to provide a streamlined approach to time series analysis and forecasting using inverse linear programming. An application is used to demonstrate the inverse forecasting approach developed in this study.
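To illustrate the inverse-optimization ingredient, the following sketch solves a textbook inverse linear program with SciPy: given an observed decision x0, it finds the minimal L1 adjustment to the cost vector that makes x0 optimal, using LP duality. The data are invented and the formulation is a generic one, not necessarily the paper's.

```python
# A sketch of one standard inverse-LP formulation: given an observed feasible
# decision x0 for  min c.x  s.t.  A x >= b, x >= 0,  find the cost vector d
# closest to c in the L1 norm under which x0 is optimal.  Optimality is
# enforced via LP duality: some y >= 0 must satisfy A'y <= d and b.y = d.x0.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([2.0, 1.0])
x0 = np.array([0.0, 6.0])     # feasible but not optimal under c

n, m = len(c), len(b)
I, Z = np.eye(n), np.zeros((n, m))

# Decision variables z = [d, y, t] with t >= |c - d|; minimise sum(t).
obj = np.concatenate([np.zeros(n + m), np.ones(n)])
A_ub = np.block([
    [-I, A.T, np.zeros((n, n))],   # dual feasibility: A'y - d <= 0
    [ I, Z,   -I],                 # d - t <= c
    [-I, Z,   -I],                 # -d - t <= -c
])
b_ub = np.concatenate([np.zeros(n), c, -c])
A_eq = np.concatenate([-x0, b, np.zeros(n)])[None, :]   # strong duality: b.y = d.x0
bounds = [(None, None)] * n + [(0, None)] * (m + n)     # d free; y, t >= 0

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[0.0], bounds=bounds)
print("adjusted costs d =", res.x[:n], " ||c - d||_1 =", res.fun)
```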
Abstract:
This article proposes a framework of alternative international marketing strategies, based on the evaluation of intra- and inter-cultural behavioural homogeneity for market segmentation. The framework developed in this study provides a generic structure to behavioural homogeneity, proposing consumer involvement as a construct with unique predictive ability for international marketing strategy decisions. A model-based segmentation process, using structural equation models, is implemented to illustrate the application of the framework.
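A minimal sketch of model-based segmentation follows. The article uses structural equation models; a Gaussian mixture over hypothetical consumer-involvement scores stands in here, purely to illustrate letting a behavioural model, rather than country borders, define the segments.

```python
# A minimal sketch of model-based segmentation, with a Gaussian mixture as a
# stand-in for the article's structural equation models.  All data invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical involvement-scale scores; low- and high-involvement consumers
# are assumed to exist within every national sample.
scores = np.vstack([
    rng.normal(2.0, 0.5, (100, 3)),   # low-involvement consumers
    rng.normal(5.0, 0.7, (100, 3)),   # high-involvement consumers
])

segments = GaussianMixture(n_components=2, random_state=0).fit_predict(scores)
# Behavioural homogeneity, not geography, drives the grouping.
print(np.bincount(segments))
```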
Abstract:
A study of heat pump thermodynamic characteristics has been made in the laboratory on a specially designed and instrumented air-to-water heat pump system. The design, using refrigerant R12, was based on the requirement to produce domestic hot water at a temperature of about 50 °C and was assembled in the laboratory. All the experimental data were fed to a microcomputer and stored on disk automatically from appropriate transducers via amplifiers and 16-channel analogue-to-digital converters. The measurements taken were R12 pressures and temperatures, water and R12 mass flow rates, air speed, fan and compressor input powers, water and air inlet and outlet temperatures, and wet and dry bulb temperatures. The time interval between observations could be varied. The results showed, as expected, that the COP was higher at higher air inlet temperatures and at lower hot water output temperatures. The optimum air speed was found to be that at which the fan input power was about 4% of the condenser heat output. It was also found that the hot water could be produced at a temperature higher than the R12 condensing temperature corresponding to the condensing pressure. This was achieved by designing the condenser to take advantage of discharge superheat and by further heating the water using heat recovery from the compressor. Of the input power to the compressor, typically about 85% was transferred to the refrigerant (50% by the compression work and 35% by heating of the refrigerant through the cylinder wall), and the remaining 15% was rejected to the cooling medium. The evaporator effectiveness was found to be about 75% and sensitive to the air speed. Using the data collected, a steady-state computer model was developed. For given input conditions (air inlet temperature, air speed, degree of suction superheat, and water inlet and outlet temperatures), the model is capable of predicting the refrigerant cycle, compressor efficiency, evaporator effectiveness, condenser water flow rate and system COP.
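As a back-of-envelope illustration of the energy balances reported above, the sketch below computes a COP and the compressor power split; the wattages are invented, only the percentage splits come from the study.

```python
# Back-of-envelope energy balances; wattages illustrative, splits per the study.
def cop(heat_output_w: float, compressor_w: float, fan_w: float) -> float:
    """Coefficient of performance: useful heat out per unit electrical input."""
    return heat_output_w / (compressor_w + fan_w)

compressor_w = 1000.0
heat_output_w = 3000.0
fan_w = 0.04 * heat_output_w           # optimum air speed: fan ~4% of condenser output

to_refrigerant = 0.85 * compressor_w   # 50% compression work + 35% cylinder-wall heating
rejected = 0.15 * compressor_w         # rejected to the cooling medium

print(f"COP = {cop(heat_output_w, compressor_w, fan_w):.2f}")
print(f"to refrigerant: {to_refrigerant:.0f} W, rejected: {rejected:.0f} W")
```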
Abstract:
This thesis introduces and develops a novel real-time predictive maintenance system that estimates machine system parameters from the motion current signature. Recently, motion current signature analysis has been proposed as an alternative to the use of sensors for monitoring internal faults of a motor. A maintenance system based upon analysis of the motion current signature avoids the need to implement and maintain expensive motion-sensing technology. By developing nonlinear dynamical analysis of the motion current signature, the research described in this thesis implements a novel real-time predictive maintenance system for current and future manufacturing machine systems. A crucial concept underpinning this project is that the motion current signature contains information relating to the machine system parameters, and that this information can be extracted using nonlinear mapping techniques such as neural networks. Towards this end, a proof-of-concept procedure is performed which substantiates this concept. A simulation model, TuneLearn, is developed to generate the large amount of training data required by the neural network approach. Statistical validation and verification of the model are performed to establish confidence in the simulated motion current signature. The validation experiment concludes that, although the simulation model generates a good macro-dynamical mapping of the motion current signature, it fails to accurately map the micro-dynamical structure, owing to a lack of knowledge regarding higher-order and nonlinear factors such as backlash and compliance. The failure of the simulation model to capture the micro-dynamical structure suggests the presence of nonlinearity in the motion current signature, which motivated surrogate data testing for nonlinearity. The results confirm the presence of nonlinearity in the motion current signature, thereby motivating the use of nonlinear techniques for further analysis. The outcomes of the experiments show that nonlinear noise reduction combined with a linear reverse algorithm offers precise machine system parameter estimation from the motion current signature for the implementation of the real-time predictive maintenance system. Finally, a linear reverse algorithm, BJEST, is developed and applied to the motion current signature to estimate the machine system parameters.
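The sketch below illustrates a surrogate-data test for nonlinearity of the kind applied to the motion current signature: phase-randomised surrogates preserve the linear (spectral) structure, and if a nonlinear statistic of the original series is extreme relative to the surrogates, linearity is rejected. The statistic (time-reversal asymmetry) and the toy series are assumptions for illustration, not the thesis's own choices.

```python
# A minimal sketch of a surrogate-data test for nonlinearity.
import numpy as np

def phase_randomised_surrogate(x, rng):
    """Surrogate with the same power spectrum but randomised Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    phases[0] = 0.0       # keep the DC component real
    phases[-1] = 0.0      # and the Nyquist bin (even-length series)
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def trev(x, lag=1):
    """Time-reversal asymmetry; near zero for linear Gaussian processes."""
    d = x[lag:] - x[:-lag]
    return np.mean(d**3) / np.mean(d**2) ** 1.5

# Toy nonlinear series: x-coordinate of the Henon map.
N = 4096
series = np.empty(N)
x, y = 0.1, 0.0
for i in range(N):
    x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
    series[i] = x

rng = np.random.default_rng(0)
stat = trev(series)
surrogate_stats = [trev(phase_randomised_surrogate(series, rng)) for _ in range(99)]
rank = sum(s < stat for s in surrogate_stats)
print(f"rank of original statistic among surrogates: {rank}/99")  # extreme => nonlinear
```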
Abstract:
Background: The controversy surrounding the non-uniqueness of predictive gene lists (PGL) of small selected subsets of genes from very large potential candidates as available in DNA microarray experiments is now widely acknowledged 1. Many of these studies have focused on constructing discriminative semi-parametric models and as such are also subject to the issue of random correlations of sparse model selection in high dimensional spaces. In this work we outline a different approach based around an unsupervised patient-specific nonlinear topographic projection in predictive gene lists. Methods: We construct nonlinear topographic projection maps based on inter-patient gene-list relative dissimilarities. The Neuroscale, the Stochastic Neighbor Embedding(SNE) and the Locally Linear Embedding(LLE) techniques have been used to construct two-dimensional projective visualisation plots of 70 dimensional PGLs per patient, classifiers are also constructed to identify the prognosis indicator of each patient using the resulting projections from those visualisation techniques and investigate whether a-posteriori two prognosis groups are separable on the evidence of the gene lists. A literature-proposed predictive gene list for breast cancer is benchmarked against a separate gene list using the above methods. Generalisation ability is investigated by using the mapping capability of Neuroscale to visualise the follow-up study, but based on the projections derived from the original dataset. Results: The results indicate that small subsets of patient-specific PGLs have insufficient prognostic dissimilarity to permit a distinction between two prognosis patients. Uncertainty and diversity across multiple gene expressions prevents unambiguous or even confident patient grouping. Comparative projections across different PGLs provide similar results. Conclusion: The random correlation effect to an arbitrary outcome induced by small subset selection from very high dimensional interrelated gene expression profiles leads to an outcome with associated uncertainty. This continuum and uncertainty precludes any attempts at constructing discriminative classifiers. However a patient's gene expression profile could possibly be used in treatment planning, based on knowledge of other patients' responses. We conclude that many of the patients involved in such medical studies are intrinsically unclassifiable on the basis of provided PGL evidence. This additional category of 'unclassifiable' should be accommodated within medical decision support systems if serious errors and unnecessary adjuvant therapy are to be avoided.
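A minimal sketch of the 2-D visualisation step follows. Neuroscale is not available in scikit-learn, so locally linear embedding (LLE), one of the techniques named above, is applied to hypothetical 70-dimensional PGL vectors (the paper itself works from inter-patient relative dissimilarities).

```python
# A minimal sketch of projecting 70-dimensional PGLs to a 2-D map with LLE.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
pgl = rng.standard_normal((78, 70))   # hypothetical: 78 patients x 70-gene PGL

lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
coords = lle.fit_transform(pgl)       # one 2-D point per patient

# If the two prognosis groups were separable, their points would form
# distinct clouds; heavy overlap mirrors the paper's negative finding.
print(coords.shape)
```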
Abstract:
In this article, the authors discuss the state of the art of models for customer engagement and the problems inherent in calibrating and implementing these models. They first provide an overview of the data available for customer analytics and discuss recent developments. Next, they discuss the models used for studying customer engagement, distinguishing three stages: customer acquisition, customer development, and customer retention. Finally, they discuss several organizational issues of analytics for customer engagement, which constitute barriers to introducing analytics for customer engagement.
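As a hedged illustration of one model from the customer-retention stage, the sketch below fits a churn classifier; the features (recency, frequency, monetary value) and the data are hypothetical, and real engagement models draw on far richer inputs.

```python
# A minimal churn-classifier sketch for the retention stage; data invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# 500 hypothetical customers: days since last purchase, orders/year, spend.
X = rng.uniform([0, 0, 0], [365, 50, 5000], size=(500, 3))
# Toy ground truth: long recency and low frequency raise churn probability.
churn = (X[:, 0] / 365 - X[:, 1] / 50 + 0.3 * rng.standard_normal(500)) > 0

model = LogisticRegression(max_iter=1000).fit(X, churn)
print("churn probability:", model.predict_proba([[300.0, 2.0, 150.0]])[0, 1])
```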
Abstract:
Approximately half of current contact lens wearers suffer from dryness and discomfort, particularly towards the end of the day. Contact lens practitioners have a number of dry eye tests available to help them predict which of their patients may be at risk of contact lens drop-out and advise them accordingly. This thesis set out to rationalize these tests to see whether any are of more diagnostic significance than others. This doctorate found: (1) The Keratograph, a device that permits an automated, examiner-independent technique for measuring non-invasive tear break-up time (NITBUT), measured NITBUT consistently shorter than the Tearscope. When measuring central corneal curvature, it recorded the spherical equivalent power of the cornea as significantly flatter than a validated automated keratometer. (2) Non-invasive and invasive tear break-up times correlated significantly with each other, but not with the other tear metrics. Symptomology, assessed using the OSDI questionnaire, correlated more with tests indicating possible damage to the ocular surface (including LWE, LIPCOF and conjunctival staining) than with tests of either tear volume or stability. Cluster analysis showed some statistically significant groups of patients with different sign and symptom profiles; the largest cluster demonstrated poor tear quality with both non-invasive and invasive tests, low tear volume and more symptoms. (3) Care should be taken in fitting patients new to contact lenses if they have a NITBUT of less than 10 s or an OSDI comfort rating greater than 4.2, as they are more likely to drop out within the first 6 months. Cluster analysis was not found to be beneficial in predicting which patients will succeed with lenses and which will not. A combination of the OSDI questionnaire and a NITBUT measurement was most useful both in diagnosing dry eye and in predicting contact lens drop-out.
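The screening rule suggested by finding (3) can be written as a small function; the thresholds (NITBUT below 10 s, OSDI comfort rating above 4.2) come from the thesis, while the function name and example values are illustrative.

```python
# The drop-out screening rule from finding (3), as a sketch.
def at_risk_of_dropout(nitbut_s: float, osdi_comfort: float) -> bool:
    """True if a prospective wearer is more likely to drop out within 6 months."""
    return nitbut_s < 10.0 or osdi_comfort > 4.2

print(at_risk_of_dropout(nitbut_s=8.5, osdi_comfort=3.0))    # True: short NITBUT
print(at_risk_of_dropout(nitbut_s=12.0, osdi_comfort=2.0))   # False
```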
Abstract:
Background - Bipolar disorder (BD) is one of the leading causes of disability worldwide. Patients are further disadvantaged by delays in accurate diagnosis ranging between 5 and 10 years. We applied Gaussian process classifiers (GPCs) to structural magnetic resonance imaging (sMRI) data to evaluate the feasibility of using pattern recognition techniques for the diagnostic classification of patients with BD. Method - GPCs were applied to gray (GM) and white matter (WM) sMRI data derived from two independent samples of patients with BD (cohort 1: n = 26; cohort 2: n = 14). Within each cohort patients were matched on age, sex and IQ to an equal number of healthy controls. Results - The diagnostic accuracy of the GPC for GM was 73% in cohort 1 and 72% in cohort 2; the sensitivity and specificity of the GM classification were respectively 69% and 77% in cohort 1 and 64% and 99% in cohort 2. The diagnostic accuracy of the GPC for WM was 69% in cohort 1 and 78% in cohort 2; the sensitivity and specificity of the WM classification were both 69% in cohort 1 and 71% and 86% respectively in cohort 2. In both samples, GM and WM clusters discriminating between patients and controls were localized within cortical and subcortical structures implicated in BD. Conclusions - Our results demonstrate the predictive value of neuroanatomical data in discriminating patients with BD from healthy individuals. The overlap between discriminative networks and regions implicated in the pathophysiology of BD supports the biological plausibility of the classifiers.
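A minimal sketch of Gaussian process classification with the performance metrics reported above follows; random features stand in for the sMRI voxel data, and the group sizes echo cohort 1 (26 patients, 26 matched controls).

```python
# A minimal GPC sketch with accuracy, sensitivity and specificity; data invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((52, 200))    # hypothetical: 52 subjects x 200 GM features
y = np.repeat([0, 1], 26)             # 0 = control, 1 = patient
X[y == 1] += 0.15                     # inject a weak group difference

gpc = GaussianProcessClassifier().fit(X[::2], y[::2])   # train on half the subjects
pred, truth = gpc.predict(X[1::2]), y[1::2]

accuracy = (pred == truth).mean()
sensitivity = pred[truth == 1].mean()        # patients correctly identified
specificity = 1 - pred[truth == 0].mean()    # controls correctly identified
print(accuracy, sensitivity, specificity)
```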
Abstract:
The link between off-target anticholinergic effects of medications and acute cognitive impairment in older adults requires urgent investigation. We aimed to determine whether a relevant in vitro model may aid the identification of anticholinergic responses to drugs and the prediction of anticholinergic risk during polypharmacy. In this preliminary study we employed a co-culture of human-derived neurons and astrocytes (NT2.N/A) derived from the NT2 cell line. NT2.N/A cells possess much of the functionality of mature neurons and astrocytes, key cholinergic phenotypic markers and muscarinic acetylcholine receptors (mAChRs). The cholinergic response of NT2 astrocytes to the mAChR agonist oxotremorine was examined using the fluorescent dye fluo-4 to quantitate increases in intracellular calcium [Ca2+]i. Inhibition of this response by drugs classified as severe (dicycloverine, amitriptyline), moderate (cyclobenzaprine) and possible (cimetidine) on the Anticholinergic Cognitive Burden (ACB) scale, was examined after exposure to individual and pairs of compounds. Individually, dicycloverine had the most significant effect regarding inhibition of the astrocytic cholinergic response to oxotremorine, followed by amitriptyline then cyclobenzaprine and cimetidine, in agreement with the ACB scale. In combination, dicycloverine with cyclobenzaprine had the most significant effect, followed by dicycloverine with amitriptyline. The order of potency of the drugs in combination frequently disagreed with predicted ACB scores derived from summation of the individual drug scores, suggesting current scales may underestimate the effect of polypharmacy. Overall, this NT2.N/A model may be appropriate for further investigation of adverse anticholinergic effects of multiple medications, in order to inform clinical choices of suitable drug use in the elderly.
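The sketch below illustrates the comparison the study draws: whether measured inhibition of the cholinergic response by a drug pair tracks the sum of the drugs' individual ACB scores. The ACB scores follow the published scale (severe = 3, moderate = 2, possible = 1); the inhibition percentages are invented for illustration.

```python
# Does combined inhibition track summed ACB scores?  Inhibition values invented.
acb = {"dicycloverine": 3, "amitriptyline": 3, "cyclobenzaprine": 2, "cimetidine": 1}

# Hypothetical measured inhibition (%) of the oxotremorine-evoked Ca2+ response.
measured_pair_inhibition = {
    ("dicycloverine", "cyclobenzaprine"): 92.0,
    ("dicycloverine", "amitriptyline"): 85.0,
}

for (a, b), inhibition in measured_pair_inhibition.items():
    summed_score = acb[a] + acb[b]   # additivity implicitly assumed by the scale
    print(f"{a} + {b}: summed ACB {summed_score}, measured {inhibition:.0f}%")
# Here the pair with the lower summed score (5) shows the greater measured
# inhibition: the kind of disagreement the study reports for combinations.
```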
Abstract:
This paper considers the problem of low-dimensional visualisation of very high dimensional information sources for the purpose of situation awareness in the maritime environment. In response to the requirement for human decision support aids to reduce information overload (and specifically, data amenable to inter-point relative similarity measures) appropriate to the below-water maritime domain, we are investigating a preliminary prototype topographic visualisation model. The focus of the current paper is on the mathematical problem of exploiting a relative dissimilarity representation of signals in a visual informatics mapping model, driven by real-world sonar systems. A realistic noise model is explored and incorporated into non-linear and topographic visualisation algorithms building on the approach of [9]. Concepts are illustrated using a real world dataset of 32 hydrophones monitoring a shallow-water environment in which targets are present and dynamic.
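A minimal sketch of topographic visualisation from relative dissimilarities, as needed in this sonar setting, is given below; metric MDS stands in for the paper's model, and the 32-hydrophone snapshots with a simple two-condition structure are simulated.

```python
# A minimal sketch: 2-D topographic map from a relative dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
signals = rng.standard_normal((64, 32))   # hypothetical: 64 snapshots x 32 hydrophones
signals[32:] += 1.0                       # toy structure: target present later on

# Pairwise relative dissimilarities between snapshots.
dissimilarity = np.linalg.norm(signals[:, None, :] - signals[None, :, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # 2-D map for an operator display
print(coords.shape)
```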
Abstract:
The acceleration of solid dosage form product development can be facilitated by the inclusion of excipients that exhibit poly-/multi-functionality, reducing the time invested in multiple excipient optimisations. Because active pharmaceutical ingredients (APIs) and tablet excipients exhibit diverse densification behaviours upon compaction, the involvement of these different powders makes the compaction process very complicated. The aim of this study was to assess the macrometric characteristics and distribution of surface charges of two powders, indomethacin (IND) and arginine (ARG), and to evaluate their impact on the densification properties of the two powders. Response surface modelling (RSM) was employed to predict the effect of two independent variables, compression pressure (F) and ARG percentage (R) in binary mixtures, on the properties of the resultant tablets. The study examined three responses, namely porosity (P), tensile strength (S) and disintegration time (T). Micrometric studies showed that IND had a higher charge density (net charge-to-mass ratio) than ARG; nonetheless, ARG demonstrated good compaction properties with high plasticity (Y = 28.01 MPa). Accordingly, using ARG as a filler in IND tablets was associated with better mechanical properties: tablet tensile strength (σ) increased from 0.2±0.05 N/mm2 to 2.85±0.36 N/mm2 upon adding ARG at a molar ratio of 8:1 to IND. Moreover, tablet disintegration time was shortened to a few seconds in some of the formulations. RSM revealed tablet porosity to be affected by both compression pressure and ARG ratio for IND/ARG physical mixtures (PMs). Conversely, the tensile strength (σ) and disintegration time (T) of the PMs were influenced by the compression pressure, the ARG ratio and their interaction term (FR), and a strong correlation was observed between the experimental results and the predicted data for tablet porosity. This work provides clear evidence of the multi-functionality of ARG as a filler, binder and disintegrant for directly compressed tablets.
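As a hedged illustration of the RSM step, the sketch below fits a full quadratic response surface with the interaction term FR for one response (tensile strength S) against compression pressure F and ARG fraction R; the design points, measurements and fitted coefficients are illustrative only.

```python
# A minimal response-surface sketch: S ~ b0 + b1*F + b2*R + b3*F^2 + b4*R^2 + b5*F*R
import numpy as np

# Hypothetical design: (F in MPa, R as mass fraction) -> measured S in N/mm2.
F = np.array([50.0, 50.0, 150.0, 150.0, 100.0, 100.0, 100.0])
R = np.array([0.2, 0.8, 0.2, 0.8, 0.5, 0.2, 0.8])
S = np.array([0.4, 1.9, 0.9, 2.8, 1.6, 0.7, 2.4])

# Fit the quadratic model, including the interaction term FR, by least squares.
X = np.column_stack([np.ones_like(F), F, R, F**2, R**2, F * R])
coef, *_ = np.linalg.lstsq(X, S, rcond=None)

def predict_strength(f: float, r: float) -> float:
    return coef @ np.array([1.0, f, r, f**2, r**2, f * r])

print(f"predicted S at F=120 MPa, R=0.6: {predict_strength(120.0, 0.6):.2f} N/mm2")
```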
Abstract:
A novel simulation model for pyrolysis processes of lignocellulosic biomass in Aspen Plus® was presented at the BC&E 2013. Based on kinetic reaction mechanisms, the simulation calculates product compositions and yields depending on reactor conditions (temperature, residence time, flue gas flow rate) and feedstock composition (biochemical composition, atomic composition, ash and alkali metal content). The simulation model was found to show good correlation with existing publications. In order to further verify the model, our own pyrolysis experiments are performed in a 1 kg/h continuously fed fluidized bed fast pyrolysis reactor. Two types of biomass with different characteristics, one woody and one straw-like feedstock, are processed in order to evaluate the influence of the feedstock composition on the yields of the pyrolysis products and their composition. Furthermore, the temperature response of the yields and product compositions is evaluated by varying the reactor temperature between 450 and 550 °C for one of the feedstocks. The yields of the pyrolysis products (gas, oil, char) are determined and their detailed composition is analysed. The experimental runs are reproduced with the corresponding reactor conditions in the Aspen Plus model and the results are compared with the experimental findings.
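To illustrate the kind of kinetic step underlying such a model, the sketch below uses competing first-order pathways (biomass to gas, oil and char) with Arrhenius rate constants whose relative magnitudes set the product split; the parameters are illustrative placeholders, not the values used in the Aspen Plus model.

```python
# Competing first-order pyrolysis pathways with Arrhenius rates (illustrative).
import numpy as np

R_GAS = 8.314   # J/(mol K)
# Hypothetical Arrhenius parameters per product: (A in 1/s, E in J/mol).
PARAMS = {"gas": (4.4e3, 88.6e3), "oil": (1.1e5, 112.7e3), "char": (3.3e6, 106.5e3)}

def product_split(t_celsius: float) -> dict:
    """Share of each product: for parallel first-order reactions from one
    reactant, ultimate yields are proportional to the rate constants."""
    T = t_celsius + 273.15
    k = {p: A * np.exp(-E / (R_GAS * T)) for p, (A, E) in PARAMS.items()}
    total = sum(k.values())
    return {p: ki / total for p, ki in k.items()}

for t in (450, 500, 550):   # the experimental temperature range above
    print(t, {p: round(v, 2) for p, v in product_split(t).items()})
```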