17 results for THEORETICAL BASIS
in Aston University Research Archive
Abstract:
Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
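A minimal numerical sketch (a toy example of my own, not taken from the paper) of why a naive least-squares fit fails on a multi-valued inverse problem: for the forward map t = x², the inverse is two-valued (±√t), and a least-squares solution approximates the conditional mean E[x|t], which falls between the two true branches.

```python
import numpy as np

rng = np.random.default_rng(0)
# Forward problem: t = x^2. The inverse x(t) is two-valued (+sqrt(t), -sqrt(t)).
x = rng.uniform(-1.0, 1.0, 50000)
t = x**2

# A least-squares fit approximates the conditional mean E[x | t].
# Estimate it directly: for each narrow band of t, average the x values.
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(t, bins) - 1
cond_mean = np.array([x[idx == b].mean() for b in range(10)])

# The two branches average to ~0, so the least-squares "solution"
# lies between them and is a valid inverse for neither branch.
print(np.max(np.abs(cond_mean)))  # small: the fit collapses between the branches
```

A mixture-based conditional density model, by contrast, can represent both branches at once, which is the kind of more general formalism the abstract alludes to.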
Abstract:
Outcome measures, that is, the measurement of the effectiveness of interventions and services, have been propelled onto the health service agenda since the introduction of the internal market in the 1990s. This arose as a result of the escalating cost of inpatient care, the need to identify which interventions work and in what situations, and the desire of service users for effective information, enabled by the consumerist agenda introduced by the Working for Patients white paper. The research reported in this thesis is an assessment of the readiness of the forensic mental health service to measure outcomes of interventions. The research examines the type, prevalence and scope of use of outcome measures, and further seeks a consensus of views of key stakeholders on the priority areas for future development. It discusses the theoretical basis for defining health and advances the argument that the present focus on measuring the effectiveness of care is misdirected without the input of users, particularly patients, in their care, drawing together the views of the many stakeholders who have an interest in the provision of care in the service. The research further draws on the theory of structuration to demonstrate the degree to which a duality of action, which is necessary for the development and use of outcome measures, is in place within the service. Consequently, it highlights some of the hurdles that need to be surmounted before effective measurement of health gain can be developed in the field of study. It concludes by advancing the view that outcomes research can enable practitioners to better understand the relationship between the illness of the patient and the efficacy of treatment.
This understanding, it is argued, would contribute to improving dialogue between the health care practitioner and the patient, and further to providing the information necessary for moving away from untested assumptions, which are numerous in the field, about the superiority of one treatment approach over another.
Abstract:
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
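A small illustration of the sparsity argument, using a generic Wendland-type compactly supported covariance chosen for illustration (the paper's own space-limited family is not specified here): beyond the cut-off distance the covariance is exactly zero, so the covariance matrix becomes sparse.

```python
import numpy as np

# A compactly supported ("space-limited") covariance: a Wendland-type
# function that is exactly zero beyond the cut-off distance r0.
def wendland(h, r0=0.2, sigma2=1.0):
    u = np.minimum(np.abs(h) / r0, 1.0)       # clamp: zero beyond r0
    return sigma2 * (1.0 - u) ** 4 * (4.0 * u + 1.0)

x = np.linspace(0.0, 1.0, 500)                # 1-D observation locations
H = np.abs(x[:, None] - x[None, :])           # pairwise separations
K = wendland(H)                               # covariance matrix

# Beyond r0 the correlation vanishes exactly, so K is sparse and banded;
# optimised sparse matrix (e.g. sparse Cholesky) techniques then apply.
frac_zero = np.mean(K == 0.0)
print(f"fraction of exact zeros: {frac_zero:.2f}")
```

With a conventional squared-exponential or Matérn model every entry of K would be nonzero, however small, and no exact sparsity could be exploited.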
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. Techniques for identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, due to a number of difficulties. These difficulties are mainly due to (i) the incomplete number of vibration modes that can be excited and measured, (ii) the incomplete number of coordinates that can be measured, (iii) inaccuracy in the experimental data, and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model, as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using eigenvalues or both eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness is also determined using the frequency response data of the unmodified structure by a structural modification technique. Thus, mass or stiffness do not have to be added physically.
The mass-stiffness addition technique is demonstrated by simulation examples and laboratory experiments on beams and an H-frame.
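The sensitivity-based iterative updating idea can be sketched on a toy two-degree-of-freedom lumped model (illustrative only; it omits the thesis's perturbation and Bayesian machinery): with exact data and as many measured eigenvalues as unknown parameters, the iteration recovers the exact stiffnesses.

```python
import numpy as np

M = np.diag([1.0, 2.0])                      # known diagonal mass matrix

def eigvals(k):
    # Eigenvalues of the 2-DOF chain with stiffnesses k1, k2
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)

k_true = np.array([400.0, 150.0])
lam_meas = eigvals(k_true)                   # "measured" eigen-data (exact here)

k = np.array([300.0, 100.0])                 # initial model estimate
for _ in range(20):
    r = lam_meas - eigvals(k)                # eigenvalue residual
    # Sensitivity matrix d(lambda)/d(k) by finite differences
    J = np.empty((2, 2))
    for j in range(2):
        dk = np.zeros(2); dk[j] = 1e-4 * k[j]
        J[:, j] = (eigvals(k + dk) - eigvals(k)) / dk[j]
    k = k + np.linalg.solve(J, r)            # iterative sensitivity update

print(k)   # converges to the exact parameters for exact data
```

With noisy or incomplete eigen-data the sensitivity system becomes ill-posed, which is where the thesis's Bayesian regularisation toward the initial estimates comes in.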
Abstract:
This thesis proposes that, despite many experimental studies of thinking and the development of models of thinking, such as Bruner's (1966) enactive, iconic and symbolic developmental modes, the imagery and inner verbal strategies used by children need further investigation to establish a coherent theoretical basis from which to create experimental curricula for the direct improvement of those strategies. Five hundred and twenty-three first, second and third year comprehensive school children were tested on 'recall' imagery, using a modified Betts Imagery Test, and on dual-coding processes (Paivio, 1971, p.179), by the P/W Visual/Verbal Questionnaire, measuring 'applied imagery' and inner verbalising. Three lines of investigation were pursued: 1. An investigation of a. hypothetical representational strategy differences between boys and girls; and b. the extent to which strategies change with increasing age. 2. The second and third year children's use of representational processes was taken separately and compared with performance measures of perception, field independence, creativity, self-sufficiency and self-concept. 3. The second and third year children were categorised into four dual-coding strategy groups: a. High Visual/High Verbal; b. Low Visual/High Verbal; c. High Visual/Low Verbal; d. Low Visual/Low Verbal. These groups were compared on the same performance measures. The main result indicates that: 1. A hierarchy of dual-coding strategy use can be identified that is significantly related (.01, Binomial Test) to success or failure on the performance measures: the High Visual/High Verbal group registering the highest scores, the Low Visual/High Verbal and High Visual/Low Verbal groups registering intermediate scores, and the Low Visual/Low Verbal group registering the lowest scores on the performance measures. Subsidiary results indicate that: 2.
Boys' use of visual strategies declines, and of verbal strategies increases, with age; girls' recall imagery strategy increases with age. Educational implications from the main result are discussed, the establishment of experimental curricula proposed, and further research suggested.
Abstract:
The need for an adequate information system for the Highways Departments in the United Kingdom was recognised by the report of a committee presented to the Minister of Transport in 1970 (the Marshall Report). This research aims to present a comprehensive information system on a sound theoretical basis which should enable the different levels of management to execute their work adequately. The suggested system presented in this research covers the different functions of the Highways Department, and presents a suggested solution for problems which may occur during the planning and controlling of work in the different locations of the Highways Department. The information system consists of:- 1. A coding system covering the cost units, cost centres and cost elements. 2. Cost accounting records for the cost units and cost centres. 3. A budgeting and budgetary control system covering the different planning methods and procedures which are required for preparing the capital expenditure budget, the improvement and maintenance operation flexible budgets and programme of work, the plant budget, the administration budget, and the purchasing budget. 4. A reporting system which ensures that the different levels of management are receiving relevant and timely information. 5. The flow of documents, which covers the relationship between the prime documents, the cost accounting records, budgets, reports and their relation to the different sections and offices within the department. Comprehensive cost unit, cost centre, and cost element codes, together with a number of examples demonstrating the results of the survey, and examples of the application and procedures of the suggested information system, have been illustrated separately as appendices. The emphasis is on the information required for internal control by management personnel within the County Council.
Abstract:
This work is the result of an action-research-type study of the diversification effort of part of a major U.K. industrial company. Work in contingency theory concerning the impact of environmental factors on organizational design, and the systemic model of viable systems put forward by Stafford Beer, form the theoretical basis of the work. The two streams of thought are compared and found to offer similar conclusions about the design of effective organizations. These findings are taken as the framework for an analysis both of organization structures for promoting innovation described in the literature, and of those employed by the company for this purpose in recent years. Much attention is given to the use of venture groups, and conclusions are drawn on particular factors which may influence their success or failure. Both theoretical considerations and the examination of the company's recent experience suggested that the formation of the policy of diversification, as well as the method of implementation of the policy, might affect its outcome. Attention is therefore focused on the policy-making and planning process, and in particular on possible problems that this process could generate in a multi-division company. The view finally taken of diversification effort is that it should be regarded as a learning system. This view helps to expose some ambiguities in the concepts of success and failure in this area, and demonstrates considerable weaknesses in traditional project evaluation procedures.
Abstract:
Prior to the development of a production standard control system for ML Aviation's plan-symmetric remotely piloted helicopter system, SPRITE, optimum solutions to technical requirements had yet to be found for some aspects of the work. This thesis describes an industrial project where solutions to real problems have been provided within strict timescale constraints. Use has been made of published material wherever appropriate; new solutions have been contributed where none existed previously. A lack of clearly defined user requirements from potential Remotely Piloted Air Vehicle (RPAV) system users is identified. A simulation package is defined to enable the RPAV designer to progress with air vehicle and control system design, development and evaluation studies, and to assist the user to investigate his applications. The theoretical basis of this simulation package is developed, including Co-axial Contra-rotating Twin Rotor (CCTR), six degrees of freedom motion, fuselage aerodynamics, and sensor and control system models. A compatible system of equations is derived for modelling a miniature plan-symmetric helicopter. Rigorous searches revealed a lack of CCTR models, based on closed form expressions to obviate integration along the rotor blade, for stabilisation and navigation studies through simulation. An economic CCTR simulation model is developed and validated by comparison with published work and practical tests. Confusion in published work between attitude and Euler angles is clarified. The implementation of the theory into a high integrity software package is discussed. Use is made of a novel technique basing the dynamic adjustment of the integration time step size on error assessment. Simulation output for control system stability verification, cross coupling of motion between control channels, and air vehicle response to demands and horizontal wind gust studies is presented.
Keywords: Contra-Rotating Twin Rotor; Flight Control System; Remotely Piloted; Plan-Symmetric Helicopter; Simulation; Six Degrees of Freedom Motion
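The error-based step-size adjustment mentioned in the abstract can be sketched generically (step-doubling with a Heun integrator; an assumption on my part, since the thesis's exact scheme is not given here): the step is accepted only when the local error estimate is within tolerance, and the step size grows or shrinks accordingly.

```python
import numpy as np

def heun_step(f, t, y, h):
    # One second-order Heun (trapezoidal predictor-corrector) step
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate(f, t0, y0, t_end, tol=1e-6):
    t, y, h = t0, np.asarray(y0, float), 1e-2
    while t < t_end:
        h = min(h, t_end - t)
        full = heun_step(f, t, y, h)
        half = heun_step(f, t + h / 2,
                         heun_step(f, t, y, h / 2), h / 2)
        err = np.max(np.abs(full - half))    # local error estimate
        if err <= tol:                       # accept the step
            t, y = t + h, half
        # grow or shrink h based on the error assessment
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1 / 3)))
    return y

# Check on y' = -y, y(0) = 1, whose exact solution is exp(-t)
y_end = integrate(lambda t, y: -y, 0.0, np.array([1.0]), 1.0)
print(float(y_end[0]), np.exp(-1.0))
```

The benefit for flight simulation is that the step shrinks automatically during rapid transients (e.g. gust responses) and grows again in quiescent flight, rather than being fixed conservatively for the worst case.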
Abstract:
This study examines the invention, innovation, introduction and use of a new drug therapy for coronary heart disease and hypertension: beta-blockade. The relationships between drug introductions and changes in medical perceptions of disease are analysed, and the development and effects of our perception of heart disease through drug treatments and diagnostic technology are described. The first section looks at the evolution of hypertension from its origin as a kidney disorder, Bright's disease, to the introduction and use of effective drugs for its treatment. It is shown that this has been greatly influenced by the introduction of new medical technologies. A medical controversy over its nature is shown both to be strongly influenced by the use of new drugs, and to influence their subsequent use. The second section reviews the literature analysing drug innovation, and examines the innovation of the beta-blocking drugs, making extensive use of participant accounts. The way in which the development of receptor theory, the theoretical basis of the innovation, was influenced by the innovation and use of drugs is discussed; then the innovation at ICI, the introduction into clinical use, and the production of similar drugs by other manufacturers are described. A study of the effects of these drugs is then undertaken, concentrating on therapeutic costs and benefits, and changes in medical perceptions of disease. The third section analyses the effects of other drugs on heart disease, looking at changes in mortality statistics and in medical opinions. The study concludes that linking work on drug innovation with that on drug effects is fruitful, that new drugs and diagnostic technology have greatly influenced medical perceptions of the nature and extent of heart disease, and that in hypertension, the improvement in drug treatment will soon result in much of the population being defined as in need of it life-long.
Abstract:
By spectral analysis, and using joint time-frequency representations, we present the theoretical basis for designing invariant band-limited Airy pulses with an arbitrary degree of robustness over an arbitrary range of single-mode fiber chromatic dispersion. Numerically simulated examples confirm the theoretically predicted partial invariance of the pulse as it propagates in the fiber.
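For context, in standard fiber-optics notation (not taken from the paper, and sign conventions vary between texts), a finite-energy Airy pulse and the linear dispersion equation it propagates under can be written as:

```latex
% Linear propagation under group-velocity dispersion (GVD) only:
i\,\frac{\partial A}{\partial z} \;=\; \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2},
\qquad
A(0,T) \;=\; \mathrm{Ai}\!\left(\frac{T}{T_0}\right)\exp\!\left(\frac{a\,T}{T_0}\right),
\quad 0 < a < 1.
```

The exponential apodization with parameter a makes the pulse finite-energy (and, with spectral truncation, band-limited); the ideal a = 0 Airy pulse keeps its shape under GVD apart from a parabolic temporal shift, and the truncation is what limits that invariance to a finite dispersion range, hence the "partial invariance" the abstract refers to.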
Abstract:
This thesis begins with a review of the literature on team-based working in organisations, highlighting the variations in research findings, and the need for greater precision in our measurement of teams. It continues with an illustration of the nature and prevalence of real and pseudo team-based working, by presenting results from a large sample of secondary data from the UK National Health Service. Results demonstrate that ‘real teams’ have an important and significant impact on the reduction of many work-related safety outcomes. Based on both theoretical and methodological limitations of existing approaches, the thesis moves on to provide a clarification and extension of the ‘real team’ construct, demarcating this from other (pseudo-like) team typologies on a sliding scale, rather than a simple dichotomy. A conceptual model for defining real teams is presented, providing a theoretical basis for the development of a scale on which teams can be measured for varying extents of ‘realness’. A new twelve-item scale is developed and tested with three samples of data comprising 53 undergraduate teams, 52 postgraduate teams, and 63 public sector teams from a large UK organisation. Evidence for the content, construct and criterion-related validity of the real team scale is examined over seven separate validation studies. Theoretical, methodological and practical implications of the real team scale are then discussed.
Abstract:
By evolving brands and building on the importance of self-expression, Aaker (1997) developed the brand personality framework as a means to understand brand-consumer relationships. The brand personality framework captures the core values and characteristics described in human personality research in an attempt to humanize brands. Although influential across many streams of brand personality research, the current conceptualization of brand personality only offers a positively-framed approach. To date, no research, conceptual or empirical, has thoroughly incorporated factors reflective of Negative Brand Personality, despite the fact that almost all researchers in personality agree that factors akin to Extraversion (positive) and Neuroticism (negative) should be included in a comprehensive personality scale to accommodate consumers' expressions. As a result, the study of brand personality is only half complete, since the current research trend is to position brand personality under brand image. However, with the brand personality concept being confused with brand identity at the empirical stage, factors reflective of Negative Brand Personality have been neglected. Accordingly, this thesis extends the current conceptualization of brand personality by demarcating the existing typologies of desirable brand personality and incorporating the characteristics reflective of consumers' discrepant self-meaning, to provide a more complete understanding of brand personality. However, it is not enough to interpret negative factors as the absence of positive factors. Negative factors reflect consumers' anxious and frustrated feelings.
Therefore, this thesis contributes to the current conceptualization of brand personality by, firstly, presenting a conceptual definition of Negative Brand Personality in order to provide a theoretical basis for the development of a Negative Brand Personality scale; secondly, identifying what constitutes Negative Brand Personality and to what extent consumers' cognitive dissonance explains the nature of Negative Brand Personality; and, thirdly, ascertaining the impact Negative Brand Personality has on attitudinal constructs, namely Negative Attitude, Detachment, Brand Loyalty and Satisfaction, which have been shown to predict behaviors such as choice and (re-)purchasing. In order to deliver on the three main contributions, two comprehensive studies were conducted to a) develop a valid, parsimonious, yet relatively short measure of Negative Brand Personality, and b) ascertain how the Negative Brand Personality measure behaves within a network of related constructs. The mixed methods approach, grounded in theoretical and empirical development, provides evidence to suggest that there are four factors to Negative Brand Personality and, tested through use of a structural equation modeling technique, that these are influenced by Brand Confusion, Price Unfairness, Self-Incongruence and Corporate Hypocrisy. Negative Brand Personality factors mainly determined consumers' Negative Attitudes and Brand Detachment. The research contributes to the literature on brand personality by improving the consumer-brand relationship by means of engaging in a brand-consumer conversation in order to reduce consumers' cognitive strain. The study concludes with a discussion on the theoretical and practical implications of the findings, its limitations, and potential directions for future research.
Abstract:
The "living" and/or controlled cationic ring-opening bulk copolymerization of oxetane (Ox) with tetrahydropyran (THP) (a cyclic ether with no homopolymerizability) at 35°C was examined using ethoxymethyl-1-oxoniacyclohexane hexafluoroantimonate (EMOA) and (BF3·CH3OH)THP as fast and slow initiators, respectively, yielding living and nonliving polymers with pseudoperiodic sequences (i.e., each pentamethylene oxide fragment inserted into the polymer is flanked by two trimethylene oxide fragments). Good control over number-average molecular weight (Mn up to 150000 g mol-1) with a molecular weight distribution (MWD ∼ 1.4-1.5) broader than predicted by the Poisson distribution (MWDs > 1 + 1/DPn) was attained using EMOA as the initiating system, i.e., C2H5OCH2Cl with 1.1 equiv of AgSbF6 as a stable catalyst and 1.1 equiv of 2,6-di-tert-butylpyridine used as a non-nucleophilic proton trap. With (BF3·CH3OH)THP, a drift of the linear dependence Mn(GPC) vs Mn(theory) to lower molecular weight was observed, together with the production of cyclic oligomers: ∼3-5% of the Ox consumed in THP against ∼30% in dichloromethane. Structural and kinetic studies highlighted a mechanism of chain growth in which the rate of mutual conversion between "strained ACE species" (chain terminated by a tertiary 1-oxoniacyclobutane ion, A1) and "strain-free ACE species" (chain terminated by a tertiary 1-oxoniacyclohexane ion, T1) depends on the rate at which Ox converts the stable species T1 (a kind of "dormant" species) into a living "propagating" center A1 (i.e., ka,app[Ox]). The role of the THP solvent, associated with the suppression of irreversible and reversible transfer reactions to polymer when the polymerization is initiated with EMOA, was predicted by our kinetic considerations. The activation-deactivation pseudoequilibrium coefficient (Qt) was then calculated on a purely theoretical basis.
From the measured apparent rate constants of Ox (kOx,app) and THP (kTHP,app = ka(endo),app) consumption, Qt and the reactivity ratios (kp/kd, ka(endo)/ka(exo), and ks/ka(endo)) were calculated, which then allow the determination of the transition rate constants of the elementary step reactions that govern the increase of Mn with conversion. © 2009 American Chemical Society.
Abstract:
AIM: To assess the suitability and potential cost savings, from both the hospital and community perspective, of substituting prescribed oral liquid medicines with acceptable solid forms for children over 2 years. METHOD: Oral liquid medicines dispensed from a paediatric hospital (UK) in 1 week were assessed by screening for the existence of a solid form alternative and evaluating the acceptability of the available solid form, firstly in relation to the prescribed dose and secondly in relation to acceptable size depending on the child's age. Costs were calculated based on providing treatment for 28 days, or the prescribed duration for short-term treatments. RESULTS: Over 90% (440/476) of liquid formulations were available as a marketed solid form. Considering dosage acceptability (a maximum of 10% deviation from the prescribed dosage, or 0% for narrow therapeutic range drugs, with tablet division into quarters at most), 80% of liquids could be substituted with a solid form. The main limitation for liquid substitution would be solid form size. However, two-thirds of prescribed liquids could have been substituted with a solid form suitable in both dosage and size, with estimated savings of 5K and 8K in 1 week, based respectively on hospital and community costs, corresponding to projected annual savings of 238K and 410K (single institution). CONCLUSION: Whilst not all children over 2 years will be able to swallow tablets, the drug cost savings if oral liquid formulations were substituted with suitable solid dosage forms would be considerable. Given the numerous advantages of solid forms compared with liquids, this study may provide a theoretical basis for investing in supporting children to swallow tablets/capsules.
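The dosage-acceptability rule from the abstract can be expressed as a small check. The helper below is hypothetical (my own name and signature, written only to restate the stated rule): a solid form is dosage-acceptable if some whole, half, or quarter multiple of an available tablet strength falls within 10% of the prescribed dose (0% for narrow therapeutic range drugs).

```python
from fractions import Fraction

def acceptable_solid_dose(prescribed_mg, strengths_mg,
                          narrow_therapeutic=False, max_tablets=4):
    # Hypothetical helper restating the paper's rule: <=10% deviation
    # (0% for narrow therapeutic range), tablet divisions down to quarters.
    tol = 0.0 if narrow_therapeutic else 0.10
    quarters = [Fraction(n, 4) for n in range(1, 4 * max_tablets + 1)]
    for strength in strengths_mg:
        for f in quarters:                   # quarter steps up to max_tablets
            dose = float(f) * strength
            if abs(dose - prescribed_mg) <= tol * prescribed_mg:
                return True
    return False

print(acceptable_solid_dose(7.5, [5.0]))    # 1.5 tablets exact -> True
print(acceptable_solid_dose(7.0, [5.0]))    # 7.5 mg is ~7% off -> True
print(acceptable_solid_dose(7.0, [5.0], narrow_therapeutic=True))  # -> False
```

As the examples show, the 10% tolerance is what lets many liquid prescriptions map onto marketed tablet strengths; tightening it to 0% for narrow therapeutic range drugs removes that flexibility.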